INTRODUCTION TO
MACHINE LEARNING
AN EARLY DRAFT OF A PROPOSED
TEXTBOOK
Nils J. Nilsson
Robotics Laboratory
Department of Computer Science
Stanford University
Stanford, CA 94305
email: nilsson@cs.stanford.edu
November 3, 1998
Copyright © 2005 Nils J. Nilsson
This material may not be copied, reproduced, or distributed without the
written permission of the copyright holder.
Contents

1 Preliminaries
  1.1 Introduction
    1.1.1 What is Machine Learning?
    1.1.2 Wellsprings of Machine Learning
    1.1.3 Varieties of Machine Learning
  1.2 Learning Input-Output Functions
    1.2.1 Types of Learning
    1.2.2 Input Vectors
    1.2.3 Outputs
    1.2.4 Training Regimes
    1.2.5 Noise
    1.2.6 Performance Evaluation
  1.3 Learning Requires Bias
  1.4 Sample Applications
  1.5 Sources
  1.6 Bibliographical and Historical Remarks

2 Boolean Functions
  2.1 Representation
    2.1.1 Boolean Algebra
    2.1.2 Diagrammatic Representations
  2.2 Classes of Boolean Functions
    2.2.1 Terms and Clauses
    2.2.2 DNF Functions
    2.2.3 CNF Functions
    2.2.4 Decision Lists
    2.2.5 Symmetric and Voting Functions
    2.2.6 Linearly Separable Functions
  2.3 Summary
  2.4 Bibliographical and Historical Remarks

3 Using Version Spaces for Learning
  3.1 Version Spaces and Mistake Bounds
  3.2 Version Graphs
  3.3 Learning as Search of a Version Space
  3.4 The Candidate Elimination Method
  3.5 Bibliographical and Historical Remarks

4 Neural Networks
  4.1 Threshold Logic Units
    4.1.1 Definitions and Geometry
    4.1.2 Special Cases of Linearly Separable Functions
    4.1.3 Error-Correction Training of a TLU
    4.1.4 Weight Space
    4.1.5 The Widrow-Hoff Procedure
    4.1.6 Training a TLU on Non-Linearly-Separable Training Sets
  4.2 Linear Machines
  4.3 Networks of TLUs
    4.3.1 Motivation and Examples
    4.3.2 Madalines
    4.3.3 Piecewise Linear Machines
    4.3.4 Cascade Networks
  4.4 Training Feedforward Networks by Backpropagation
    4.4.1 Notation
    4.4.2 The Backpropagation Method
    4.4.3 Computing Weight Changes in the Final Layer
    4.4.4 Computing Changes to the Weights in Intermediate Layers
    4.4.5 Variations on Backprop
    4.4.6 An Application: Steering a Van
  4.5 Synergies Between Neural Network and Knowledge-Based Methods
  4.6 Bibliographical and Historical Remarks

5 Statistical Learning
  5.1 Using Statistical Decision Theory
    5.1.1 Background and General Method
    5.1.2 Gaussian (or Normal) Distributions
    5.1.3 Conditionally Independent Binary Components
  5.2 Learning Belief Networks
  5.3 Nearest-Neighbor Methods
  5.4 Bibliographical and Historical Remarks

6 Decision Trees
  6.1 Definitions
  6.2 Supervised Learning of Univariate Decision Trees
    6.2.1 Selecting the Type of Test
    6.2.2 Using Uncertainty Reduction to Select Tests
    6.2.3 Non-Binary Attributes
  6.3 Networks Equivalent to Decision Trees
  6.4 Overfitting and Evaluation
    6.4.1 Overfitting
    6.4.2 Validation Methods
    6.4.3 Avoiding Overfitting in Decision Trees
    6.4.4 Minimum-Description Length Methods
    6.4.5 Noise in Data
  6.5 The Problem of Replicated Subtrees
  6.6 The Problem of Missing Attributes
  6.7 Comparisons
  6.8 Bibliographical and Historical Remarks

7 Inductive Logic Programming
  7.1 Notation and Definitions
  7.2 A Generic ILP Algorithm
  7.3 An Example
  7.4 Inducing Recursive Programs
  7.5 Choosing Literals to Add
  7.6 Relationships Between ILP and Decision Tree Induction
  7.7 Bibliographical and Historical Remarks

8 Computational Learning Theory
  8.1 Notation and Assumptions for PAC Learning Theory
  8.2 PAC Learning
    8.2.1 The Fundamental Theorem
    8.2.2 Examples
    8.2.3 Some Properly PAC-Learnable Classes
  8.3 The Vapnik-Chervonenkis Dimension
    8.3.1 Linear Dichotomies
    8.3.2 Capacity
    8.3.3 A More General Capacity Result
    8.3.4 Some Facts and Speculations About the VC Dimension
  8.4 VC Dimension and PAC Learning
  8.5 Bibliographical and Historical Remarks

9 Unsupervised Learning
  9.1 What is Unsupervised Learning?
  9.2 Clustering Methods
    9.2.1 A Method Based on Euclidean Distance
    9.2.2 A Method Based on Probabilities
  9.3 Hierarchical Clustering Methods
    9.3.1 A Method Based on Euclidean Distance
    9.3.2 A Method Based on Probabilities
  9.4 Bibliographical and Historical Remarks

10 Temporal-Difference Learning
  10.1 Temporal Patterns and Prediction Problems
  10.2 Supervised and Temporal-Difference Methods
  10.3 Incremental Computation of the (∆W)_i
  10.4 An Experiment with TD Methods
  10.5 Theoretical Results
  10.6 Intra-Sequence Weight Updating
  10.7 An Example Application: TD-gammon
  10.8 Bibliographical and Historical Remarks

11 Delayed-Reinforcement Learning
  11.1 The General Problem
  11.2 An Example
  11.3 Temporal Discounting and Optimal Policies
  11.4 Q-Learning
  11.5 Discussion, Limitations, and Extensions of Q-Learning
    11.5.1 An Illustrative Example
    11.5.2 Using Random Actions
    11.5.3 Generalizing Over Inputs
    11.5.4 Partially Observable States
    11.5.5 Scaling Problems
  11.6 Bibliographical and Historical Remarks

12 Explanation-Based Learning
  12.1 Deductive Learning
  12.2 Domain Theories
  12.3 An Example
  12.4 Evaluable Predicates
  12.5 More General Proofs
  12.6 Utility of EBL
  12.7 Applications
    12.7.1 Macro-Operators in Planning
    12.7.2 Learning Search Control Knowledge
  12.8 Bibliographical and Historical Remarks
Preface
These notes are in the process of becoming a textbook. The process is quite unfinished, and the author solicits corrections, criticisms, and suggestions from students and other readers. Although I have tried to eliminate errors, some undoubtedly remain—caveat lector. Many typographical infelicities will no doubt persist until the final version. More material has yet to be added. Please let me have your suggestions about topics that are too important to be left out. [Marginal note: Some of my plans for additions and other reminders are mentioned in marginal notes.] I hope that future versions will cover Hopfield nets, Elman nets and other recurrent nets, radial basis functions, grammar and automata learning, genetic algorithms, and Bayes networks. I am also collecting exercises and project suggestions which will appear in future versions.
My intention is to pursue a middle ground between a theoretical textbook
and one that focusses on applications. The book concentrates on the important
ideas in machine learning. I do not give proofs of many of the theorems that I
state, but I do give plausibility arguments and citations to formal proofs. And, I
do not treat many matters that would be of practical importance in applications;
the book is not a handbook of machine learning practice. Instead, my goal is
to give the reader suﬃcient preparation to make the extensive literature on
machine learning accessible.
Students in my Stanford courses on machine learning have already made
several useful suggestions, as have my colleague, Pat Langley, and my teaching
assistants, Ron Kohavi, Karl Pﬂeger, Robert Allen, and Lise Getoor.
Chapter 1
Preliminaries
1.1 Introduction
1.1.1 What is Machine Learning?
Learning, like intelligence, covers such a broad range of processes that it is difficult to define precisely. A dictionary definition includes phrases such as “to gain knowledge, or understanding of, or skill in, by study, instruction, or experience,” and “modification of a behavioral tendency by experience.” Zoologists and psychologists study learning in animals and humans. In this book we focus on learning in machines. There are several parallels between animal and machine learning. Certainly, many techniques in machine learning derive from the efforts of psychologists to make more precise their theories of animal and human learning through computational models. It seems likely also that the concepts and techniques being explored by researchers in machine learning may illuminate certain aspects of biological learning.

As regards machines, we might say, very broadly, that a machine learns whenever it changes its structure, program, or data (based on its inputs or in response to external information) in such a manner that its expected future performance improves. Some of these changes, such as the addition of a record to a data base, fall comfortably within the province of other disciplines and are not necessarily better understood for being called learning. But, for example, when the performance of a speech-recognition machine improves after hearing several samples of a person’s speech, we feel quite justified in that case to say that the machine has learned.
Machine learning usually refers to the changes in systems that perform tasks associated with artificial intelligence (AI). Such tasks involve recognition, diagnosis, planning, robot control, prediction, etc. The “changes” might be either enhancements to already performing systems or ab initio synthesis of new systems. To be slightly more specific, we show the architecture of a typical AI “agent” in Fig. 1.1. This agent perceives and models its environment and computes appropriate actions, perhaps by anticipating their effects. Changes made to any of the components shown in the figure might count as learning. Different learning mechanisms might be employed depending on which subsystem is being changed. We will study several different learning methods in this book.
[Figure 1.1: An AI System. Sensory signals feed a perception component; a model supports planning and reasoning toward goals; an action computation component produces actions.]
One might ask “Why should machines have to learn? Why not design machines to perform as desired in the first place?” There are several reasons why machine learning is important. Of course, we have already mentioned that the achievement of learning in machines might help us understand how animals and humans learn. But there are important engineering reasons as well. Some of these are:
• Some tasks cannot be deﬁned well except by example; that is, we might be
able to specify input/output pairs but not a concise relationship between
inputs and desired outputs. We would like machines to be able to adjust
their internal structure to produce correct outputs for a large number of
sample inputs and thus suitably constrain their input/output function to
approximate the relationship implicit in the examples.
• It is possible that hidden among large piles of data are important relationships and correlations. Machine learning methods can often be used to extract these relationships (data mining).
• Human designers often produce machines that do not work as well as desired in the environments in which they are used. In fact, certain characteristics of the working environment might not be completely known at design time. Machine learning methods can be used for on-the-job improvement of existing machine designs.
• The amount of knowledge available about certain tasks might be too large
for explicit encoding by humans. Machines that learn this knowledge
gradually might be able to capture more of it than humans would want to
write down.
• Environments change over time. Machines that can adapt to a changing
environment would reduce the need for constant redesign.
• New knowledge about tasks is constantly being discovered by humans.
Vocabulary changes. There is a constant stream of new events in the
world. Continuing redesign of AI systems to conform to new knowledge is
impractical, but machine learning methods might be able to track much
of it.
1.1.2 Wellsprings of Machine Learning
Work in machine learning is now converging from several sources. These different traditions each bring different methods and different vocabulary which are now being assimilated into a more unified discipline. Here is a brief listing of some of the separate disciplines that have contributed to machine learning; more details will follow in the appropriate chapters:
• Statistics: A longstanding problem in statistics is how best to use samples drawn from unknown probability distributions to help decide from which distribution some new sample is drawn. A related problem is how to estimate the value of an unknown function at a new point given the values of this function at a set of sample points. Statistical methods for dealing with these problems can be considered instances of machine learning because the decision and estimation rules depend on a corpus of samples drawn from the problem environment. We will explore some of the statistical methods later in the book. Details about the statistical theory underlying these methods can be found in statistical textbooks such as [Anderson, 1958].
• Brain Models: Nonlinear elements with weighted inputs have been suggested as simple models of biological neurons. Networks of these elements have been studied by several researchers including [McCulloch & Pitts, 1943, Hebb, 1949, Rosenblatt, 1958] and, more recently, by [Gluck & Rumelhart, 1989, Sejnowski, Koch, & Churchland, 1988]. Brain modelers are interested in how closely these networks approximate the learning phenomena of living brains. We shall see that several important machine learning techniques are based on networks of nonlinear elements—often called neural networks. Work inspired by this school is sometimes called connectionism, brain-style computation, or sub-symbolic processing.
• Adaptive Control Theory: Control theorists study the problem of controlling a process having unknown parameters which must be estimated during operation. Often, the parameters change during operation, and the control process must track these changes. Some aspects of controlling a robot based on sensory inputs represent instances of this sort of problem. For an introduction see [Bollinger & Duffie, 1988].
• Psychological Models: Psychologists have studied the performance of humans in various learning tasks. An early example is the EPAM network for storing and retrieving one member of a pair of words when given another [Feigenbaum, 1961]. Related work led to a number of early decision tree [Hunt, Marin, & Stone, 1966] and semantic network [Anderson & Bower, 1973] methods. More recent work of this sort has been influenced by activities in artificial intelligence which we will be presenting.

Some of the work in reinforcement learning can be traced to efforts to model how reward stimuli influence the learning of goal-seeking behavior in animals [Sutton & Barto, 1987]. Reinforcement learning is an important theme in machine learning research.
• Artificial Intelligence: From the beginning, AI research has been concerned with machine learning. Samuel developed a prominent early program that learned parameters of a function for evaluating board positions in the game of checkers [Samuel, 1959]. AI researchers have also explored the role of analogies in learning [Carbonell, 1983] and how future actions and decisions can be based on previous exemplary cases [Kolodner, 1993]. Recent work has been directed at discovering rules for expert systems using decision-tree methods [Quinlan, 1990] and inductive logic programming [Muggleton, 1991, Lavrač & Džeroski, 1994]. Another theme has been saving and generalizing the results of problem solving using explanation-based learning [DeJong & Mooney, 1986, Laird, et al., 1986, Minton, 1988, Etzioni, 1993].
• Evolutionary Models: In nature, not only do individual animals learn to perform better, but species evolve to be better fit in their individual niches. Since the distinction between evolving and learning can be blurred in computer systems, techniques that model certain aspects of biological evolution have been proposed as learning methods to improve the performance of computer programs. Genetic algorithms [Holland, 1975] and genetic programming [Koza, 1992, Koza, 1994] are the most prominent computational techniques for evolution.
1.1.3 Varieties of Machine Learning
Orthogonal to the question of the historical source of any learning technique is
the more important question of what is to be learned. In this book, we take it
that the thing to be learned is a computational structure of some sort. We will
consider a variety of diﬀerent computational structures:
• Functions
• Logic programs and rule sets
• Finitestate machines
• Grammars
• Problem solving systems
We will present methods both for the synthesis of these structures from examples
and for changing existing structures. In the latter case, the change to the
existing structure might be simply to make it more computationally eﬃcient
rather than to increase the coverage of the situations it can handle. Much of
the terminology that we shall be using throughout the book is best introduced
by discussing the problem of learning functions, and we turn to that matter
ﬁrst.
1.2 Learning Input-Output Functions

We use Fig. 1.2 to help define some of the terminology used in describing the problem of learning a function. Imagine that there is a function, f, and the task of the learner is to guess what it is. Our hypothesis about the function to be learned is denoted by h. Both f and h are functions of a vector-valued input X = (x_1, x_2, ..., x_i, ..., x_n) which has n components. We think of h as being implemented by a device that has X as input and h(X) as output. Both f and h themselves may be vector-valued. We assume a priori that the hypothesized function, h, is selected from a class of functions H. Sometimes we know that f also belongs to this class or to a subset of this class. We select h based on a training set, Ξ, of m input vector examples. Many important details depend on the nature of the assumptions made about all of these entities.
1.2.1 Types of Learning
There are two major settings in which we wish to learn a function. In one,
called supervised learning, we know (sometimes only approximately) the values
of f for the m samples in the training set, Ξ. We assume that if we can ﬁnd
a hypothesis, h, that closely agrees with f for the members of Ξ, then this
hypothesis will be a good guess for f—especially if Ξ is large.
[Figure 1.2: An Input-Output Function. The training set, Ξ = {X_1, X_2, ..., X_i, ..., X_m}; an input vector X = (x_1, ..., x_i, ..., x_n) feeds a device implementing h ∈ H, which outputs h(X).]
Curve-fitting is a simple example of supervised learning of a function. Suppose we are given the values of a two-dimensional function, f, at the four sample points shown by the solid circles in Fig. 1.3. We want to fit these four points with a function, h, drawn from the set, H, of second-degree functions. We show there a two-dimensional parabolic surface above the x_1, x_2 plane that fits the points. This parabolic function, h, is our hypothesis about the function, f, that produced the four samples. In this case, h = f at the four samples, but we need not have required exact matches.
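The text gives no numbers for the four samples, so the following is purely an illustrative sketch: the sample points and f-values are made up, and numpy’s least-squares routine stands in for whatever fitting procedure one prefers.

```python
import numpy as np

# Hypothetical sample points (x1, x2) and their f-values; the actual
# numbers behind Fig. 1.3 are not given in the text.
X = np.array([[-5.0, -5.0], [-5.0, 5.0], [5.0, -5.0], [5.0, 5.0]])
f = np.array([300.0, 600.0, 900.0, 1500.0])

# Design matrix for a general second-degree hypothesis
# h(x1, x2) = c0 + c1*x1 + c2*x2 + c3*x1^2 + c4*x1*x2 + c5*x2^2.
A = np.column_stack([np.ones(4), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 0]*X[:, 1], X[:, 1]**2])

# With six coefficients and only four samples the system is
# underdetermined; lstsq returns a minimum-norm solution for which
# h = f at all four sample points, as in the figure.
c, *_ = np.linalg.lstsq(A, f, rcond=None)
h = lambda x1, x2: c @ (1.0, x1, x2, x1*x1, x1*x2, x2*x2)
print([h(*p) for p in X])   # reproduces the four f-values
```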
In the other setting, termed unsupervised learning, we simply have a training set of vectors without function values for them. The problem in this case, typically, is to partition the training set into subsets, Ξ_1, ..., Ξ_R, in some appropriate way. (We can still regard the problem as one of learning a function; the value of the function is the name of the subset to which an input vector belongs.) Unsupervised learning methods have application in taxonomic problems in which it is desired to invent ways to classify data into meaningful categories.

We shall also describe methods that are intermediate between supervised and unsupervised learning.
We might either be trying to ﬁnd a new function, h, or to modify an existing
one. An interesting special case is that of changing an existing function into an
equivalent one that is computationally more eﬃcient. This type of learning is
sometimes called speedup learning. A very simple example of speedup learning
involves deduction processes. From the formulas A ⊃ B and B ⊃ C, we can
deduce C if we are given A. From this deductive process, we can create the
formula A ⊃ C—a new formula but one that does not sanction any more conclusions than those that could be derived from the formulas that we previously had. But with this new formula we can derive C more quickly, given A, than we could have done before. We can contrast speedup learning with methods that create genuinely new functions—ones that might give different results after learning than they did before. We say that the latter methods involve inductive learning. As opposed to deduction, there are no correct inductions—only useful ones.

[Figure 1.3: A Surface that Fits Four Points. A parabolic surface, h, plotted over the x_1, x_2 plane, passes through four sample f-values.]
1.2.2 Input Vectors
Because machine learning methods derive from so many different traditions, its terminology is rife with synonyms, and we will be using most of them in this book. For example, the input vector is called by a variety of names. Some of these are: input vector, pattern vector, feature vector, sample, example, and instance. The components, x_i, of the input vector are variously called features, attributes, input variables, and components.
The values of the components can be of three main types. They might be real-valued numbers, discrete-valued numbers, or categorical values. As an example illustrating categorical values, information about a student might be represented by the values of the attributes class, major, sex, adviser. A particular student would then be represented by a vector such as: (sophomore, history, male, higgins). Additionally, categorical values may be ordered (as in {small, medium, large}) or unordered (as in the example just given). Of course, mixtures of all these types of values are possible.

In all cases, it is possible to represent the input in unordered form by listing the names of the attributes together with their values. The vector form assumes that the attributes are ordered and given implicitly by a form. As an example of an attribute-value representation, we might have: (major: history, sex: male, class: sophomore, adviser: higgins, age: 19). We will be using the vector form exclusively.
An important specialization uses Boolean values, which can be regarded as
a special case of either discrete numbers (1,0) or of categorical variables (True,
False).
1.2.3 Outputs
The output may be a real number, in which case the process embodying the
function, h, is called a function estimator, and the output is called an output
value or estimate.
Alternatively, the output may be a categorical value, in which case the process embodying h is variously called a classifier, a recognizer, or a categorizer, and the output itself is called a label, a class, a category, or a decision. Classifiers have application in a number of recognition problems, for example in the recognition of hand-printed characters. The input in that case is some suitable representation of the printed character, and the classifier maps this input into one of, say, 64 categories.
Vector-valued outputs are also possible with components being real numbers or categorical values.
An important special case is that of Boolean output values. In that case,
a training pattern having value 1 is called a positive instance, and a training
sample having value 0 is called a negative instance. When the input is also
Boolean, the classiﬁer implements a Boolean function. We study the Boolean
case in some detail because it allows us to make important general points in
a simpliﬁed setting. Learning a Boolean function is sometimes called concept
learning, and the function is called a concept.
1.2.4 Training Regimes
There are several ways in which the training set, Ξ, can be used to produce a
hypothesized function. In the batch method, the entire training set is available
and used all at once to compute the function, h. A variation of this method
uses the entire training set to modify a current hypothesis iteratively until an
acceptable hypothesis is obtained. By contrast, in the incremental method, we
select one member at a time from the training set and use this instance alone
to modify a current hypothesis. Then another member of the training set is
selected, and so on. The selection method can be random (with replacement)
or it can cycle through the training set iteratively. If the entire training set
becomes available one member at a time, then we might also use an incremental
method—selecting and using training set members as they arrive. (Alternatively, at any stage all training set members so far available could be used in a “batch” process.) Using the training set members as they become available is called an online method. Online methods might be used, for example, when the next training instance is some function of the current hypothesis and the previous instance—as it would be when a classifier is used to decide on a robot’s next action given its current set of sensory inputs. The next set of sensory inputs will depend on which action was selected.
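To make the regimes concrete, here is a small sketch (not from the text) in which the “hypothesis” is simply the mean of the input vectors seen so far, a stand-in for whatever structure a real learning method would compute; the function names are hypothetical.

```python
import numpy as np

def batch_fit(Xi):
    """Batch regime: the entire training set is used all at once."""
    return np.mean(Xi, axis=0)

def incremental_fit(Xi):
    """Incremental regime: each member of the training set is used,
    one at a time, to modify the current hypothesis.  Applied to
    members as they arrive, this is also an online method."""
    h = np.zeros(Xi.shape[1])
    for m, X in enumerate(Xi, start=1):
        h += (X - h) / m          # update using this instance alone
    return h

Xi = np.array([[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]])
print(batch_fit(Xi), incremental_fit(Xi))   # same hypothesis here
```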
1.2.5 Noise
Sometimes the vectors in the training set are corrupted by noise. There are two
kinds of noise. Class noise randomly alters the value of the function; attribute
noise randomly alters the values of the components of the input vector. In either
case, it would be inappropriate to insist that the hypothesized function agree
precisely with the values of the samples in the training set.
1.2.6 Performance Evaluation
Even though there is no correct answer in inductive learning, it is important
to have methods to evaluate the result of learning. We will discuss this matter
in more detail later, but, brieﬂy, in supervised learning the induced function is
usually evaluated on a separate set of inputs and function values for them called
the testing set. A hypothesized function is said to generalize when it guesses well on the testing set. Both mean-squared-error and the total number of errors are common measures.
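As a small illustration (added here, not part of the text), the sketch below evaluates a hypothesized classifier on a testing set using the two measures just mentioned; the classifier and the data are made up.

```python
import numpy as np

def evaluate(h, test_inputs, test_values):
    """Evaluate an induced function h on a separate testing set,
    returning the mean-squared error and the total number of errors."""
    guesses = np.array([h(X) for X in test_inputs])
    mse = float(np.mean((guesses - test_values) ** 2))
    n_errors = int(np.sum(guesses != test_values))
    return mse, n_errors

# Hypothetical classifier and testing set.
h = lambda X: 1 if X[0] >= 0.5 else 0
test_inputs = [(0.2,), (0.7,), (0.9,)]
test_values = np.array([0, 1, 0])
print(evaluate(h, test_inputs, test_values))   # (0.333..., 1)
```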
1.3 Learning Requires Bias
Long before now the reader has undoubtedly asked why learning a function is possible at all. Certainly, for example, there are an uncountable number of
diﬀerent functions having values that agree with the four samples shown in Fig.
1.3. Why would a learning procedure happen to select the quadratic one shown
in that ﬁgure? In order to make that selection we had at least to limit a priori
the set of hypotheses to quadratic functions and then to insist that the one we
chose passed through all four sample points. This kind of a priori information
is called bias, and useful learning without bias is impossible.
We can gain more insight into the role of bias by considering the special case of learning a Boolean function of n dimensions. There are 2^n different Boolean inputs possible. Suppose we had no bias; that is, H is the set of all 2^(2^n) Boolean functions, and we have no preference among those that fit the samples in the training set. In this case, after being presented with one member of the training set and its value we can rule out precisely one-half of the members of H—those Boolean functions that would misclassify this labeled sample. The remaining functions constitute what is called a “version space;” we’ll explore that concept in more detail later. As we present more members of the training set, the graph of the number of hypotheses not yet ruled out as a function of the number of different patterns presented is as shown in Fig. 1.4. At any stage of the process, half of the remaining Boolean functions have value 1 and half have value 0 for any training pattern not yet seen. No generalization is possible in this case because the training patterns give no clue about the value of a pattern not yet seen. Only memorization is possible here, which is a trivial sort of learning.
[Figure 1.4: Hypotheses Remaining as a Function of Labeled Patterns Presented. |H_v| is the number of functions not ruled out; log_2 |H_v| = 2^n − j falls linearly in j, the number of labeled patterns already seen, reaching 0 at j = 2^n. Generalization is not possible.]
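The halving behavior in Fig. 1.4 is easy to verify by brute force for small n. The following sketch (an illustration added here, not part of the text) enumerates all 2^(2^n) Boolean functions for n = 2 and presents labeled samples of a hypothetical target one at a time.

```python
from itertools import product

n = 2
inputs = list(product([0, 1], repeat=n))        # the 2^n input patterns
H = list(product([0, 1], repeat=len(inputs)))   # all 2^(2^n) functions,
                                                # each a tuple of values
f = lambda X: X[0] & X[1]                       # hypothetical target

remaining = H
for j, X in enumerate(inputs, start=1):
    idx = inputs.index(X)
    remaining = [h for h in remaining if h[idx] == f(X)]
    print(f"after {j} labeled pattern(s): {len(remaining)} not ruled out")
# Prints 8, 4, 2, 1: each new pattern rules out exactly half, so no
# generalization is possible before all 2^n patterns have been seen.
```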
But suppose we limited H to some subset, H_c, of all Boolean functions. Depending on the subset and on the order of presentation of training patterns, a curve of hypotheses not yet ruled out might look something like the one shown in Fig. 1.5. In this case it is even possible that after seeing fewer than all 2^n labeled samples, there might be only one hypothesis that agrees with the training set. Certainly, even if there is more than one hypothesis remaining, most of them may have the same value for most of the patterns not yet seen! The theory of Probably Approximately Correct (PAC) learning makes this intuitive idea precise. We’ll examine that theory later.
Let’s look at a specific example of how bias aids learning. A Boolean function can be represented by a hypercube each of whose vertices represents a different input pattern. We show a 3-dimensional version in Fig. 1.6. There, we show a training set of six sample patterns and have marked those having a value of 1 by a small square and those having a value of 0 by a small circle. If the hypothesis set consists of just the linearly separable functions—those for which the positive and negative instances can be separated by a linear surface—then there is only one function remaining in this hypothesis set that is consistent with the training set. So, in this case, even though the training set does not contain all possible patterns, we can already pin down what the function must be—given the bias.
[Figure 1.5: Hypotheses Remaining From a Restricted Subset. log_2 |H_v| starts at log_2 |H_c|, and its decrease with j, the number of labeled patterns already seen, depends on the order of presentation; it may reach a single remaining hypothesis well before j = 2^n.]
Machine learning researchers have identified two main varieties of bias, absolute and preference. In absolute bias (also called restricted hypothesis-space bias), one restricts H to a definite subset of functions. In our example of Fig. 1.6, the restriction was to linearly separable Boolean functions. In preference bias, one selects that hypothesis that is minimal according to some ordering scheme over all hypotheses. For example, if we had some way of measuring the complexity of a hypothesis, we might select the one that was simplest among those that performed satisfactorily on the training set. The principle of Occam’s razor, used in science to prefer simple explanations to more complex ones, is a type of preference bias. (William of Occam, 1285?–1349, was an English philosopher who said: “non sunt multiplicanda entia praeter necessitatem,” which means “entities should not be multiplied unnecessarily.”)
1.4 Sample Applications
Our main emphasis in this book is on the concepts of machine learning—not on its applications. Nevertheless, if these concepts were irrelevant to real-world problems they would probably not be of much interest. As motivation, we give
a short summary of some areas in which machine learning techniques have been
successfully applied. [Langley, 1992] cites some of the following applications and
others:
a. Rule discovery using a variant of ID3 for a printing industry problem [Evans & Fisher, 1992].

b. Electric power load forecasting using a k-nearest-neighbor rule system [Jabbour, K., et al., 1987].

c. Automatic “help desk” assistant using a nearest-neighbor system [Acorn & Walden, 1992].

d. Planning and scheduling for a steel mill using ExpertEase, a marketed (ID3-like) system [Michie, 1992].

e. Classification of stars and galaxies [Fayyad, et al., 1993].

[Figure 1.6: A Training Set That Completely Determines a Linearly Separable Function. Six labeled vertices of the cube on x_1, x_2, x_3.]
Many application-oriented papers are presented at the annual conferences on Neural Information Processing Systems. Among these are papers on: speech recognition, dolphin echo recognition, image processing, bioengineering, diagnosis, commodity trading, face recognition, music composition, optical character recognition, and various control applications [Various Editors, 1989–1994].
As additional examples, [Hammerstrom, 1993] mentions:
a. Sharp’s Japanese kanji character recognition system processes 200 characters per second with 99+% accuracy. It recognizes 3000+ characters.

b. NeuroForecasting Centre’s (London Business School and University College London) trading strategy selection network earned an average annual profit of 18% against a conventional system’s 12.3%.
c. Fujitsu’s (plus a partner’s) neural network for monitoring a continuous
steel casting operation has been in successful operation since early 1990.
In summary, it is rather easy nowadays to find applications of machine learning techniques. This fact should come as no surprise inasmuch as many machine learning techniques can be viewed as extensions of well-known statistical methods which have been successfully applied for many years.
1.5 Sources
Besides the rich literature in machine learning (a small part of which is referenced in the Bibliography), there are several textbooks that are worth mentioning [Hertz, Krogh, & Palmer, 1991, Weiss & Kulikowski, 1991, Natarajan, 1991, Fu, 1994, Langley, 1996]. [Shavlik & Dietterich, 1990, Buchanan & Wilkins, 1993] are edited volumes containing some of the most important papers. A survey paper by [Dietterich, 1990] gives a good overview of many important topics. There are also well-established conferences and publications where papers are given and appear including:
• The Annual Conferences on Advances in Neural Information Processing
Systems
• The Annual Workshops on Computational Learning Theory
• The Annual International Workshops on Machine Learning
• The Annual International Conferences on Genetic Algorithms
(The Proceedings of the above-listed four conferences are published by Morgan Kaufmann.)

• The journal Machine Learning (published by Kluwer Academic Publishers).
There is also much information, as well as programs and datasets, available over
the Internet through the World Wide Web.
1.6 Bibliographical and Historical Remarks
To be added. [Marginal note: Every chapter will contain a brief survey of the history of the material covered in that chapter.]
Chapter 2
Boolean Functions
2.1 Representation
2.1.1 Boolean Algebra
Many important ideas about learning of functions are most easily presented using the special case of Boolean functions. There are several important subclasses of Boolean functions that are used as hypothesis classes for function learning. Therefore, we digress in this chapter to present a review of Boolean functions and their properties. (For a more thorough treatment see, for example, [Unger, 1989].)
A Boolean function, f(x_1, x_2, ..., x_n), maps an n-tuple of (0,1) values to {0, 1}. Boolean algebra is a convenient notation for representing Boolean functions. Boolean algebra uses the connectives ·, +, and ¯. For example, the and function of two variables is written x_1 · x_2. By convention, the connective “·” is usually suppressed, and the and function is written x_1 x_2. x_1 x_2 has value 1 if and only if both x_1 and x_2 have value 1; if either x_1 or x_2 has value 0, x_1 x_2 has value 0. The (inclusive) or function of two variables is written x_1 + x_2. x_1 + x_2 has value 1 if and only if either or both of x_1 or x_2 has value 1; if both x_1 and x_2 have value 0, x_1 + x_2 has value 0. The complement or negation of a variable, x, is written x̄. x̄ has value 1 if and only if x has value 0; if x has value 1, x̄ has value 0.

These definitions are compactly given by the following rules for Boolean algebra:

1 + 1 = 1, 1 + 0 = 1, 0 + 0 = 0,

1 · 1 = 1, 1 · 0 = 0, 0 · 0 = 0, and

1̄ = 0, 0̄ = 1.

Sometimes the arguments and values of Boolean functions are expressed in terms of the constants T (True) and F (False) instead of 1 and 0, respectively.
The connectives · and + are each commutative and associative. Thus, for example, x_1(x_2 x_3) = (x_1 x_2) x_3, and both can be written simply as x_1 x_2 x_3. Similarly for +.
A Boolean formula consisting of a single variable, such as x_1, is called an atom. One consisting of either a single variable or its complement, such as x̄_1, is called a literal.
The operators · and + do not commute between themselves. Instead, we have De Morgan’s laws (which can be verified by using the above definitions):

$\overline{x_1 x_2} = \bar{x}_1 + \bar{x}_2$, and

$\overline{x_1 + x_2} = \bar{x}_1 \bar{x}_2$.
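De Morgan’s laws can also be checked mechanically by enumerating the four input combinations; the short sketch below (added here as an illustration) does so with 0/1 arithmetic.

```python
from itertools import product

# Verify De Morgan's laws over all assignments to x1 and x2,
# writing complement as (1 - x), "and" as &, and "or" as |.
for x1, x2 in product([0, 1], repeat=2):
    assert 1 - (x1 & x2) == (1 - x1) | (1 - x2)   # complement of x1·x2
    assert 1 - (x1 | x2) == (1 - x1) & (1 - x2)   # complement of x1+x2
print("De Morgan's laws hold for all inputs")
```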
2.1.2 Diagrammatic Representations
We saw in the last chapter that a Boolean function could be represented by
labeling the vertices of a cube. For a function of n variables, we would need
an n-dimensional hypercube. In Fig. 2.1 we show some 2- and 3-dimensional
examples. Vertices having value 1 are labeled with a small square, and vertices
having value 0 are labeled with a small circle.
[Figure 2.1: Representing Boolean Functions on Cubes. Shown are and (x_1 x_2), or (x_1 + x_2), xor (exclusive or: x_1 x̄_2 + x̄_1 x_2), and the three-variable even parity function (x̄_1 x̄_2 x̄_3 + x_1 x_2 x̄_3 + x_1 x̄_2 x_3 + x̄_1 x_2 x_3).]
Using the hypercube representations, it is easy to see how many Boolean functions of n dimensions there are. A 3-dimensional cube has 2^3 = 8 vertices, and each may be labeled in two different ways; thus there are 2^(2^3) = 256 different Boolean functions of 3 variables. In general, there are 2^(2^n) Boolean functions of n variables.
We will be using 2- and 3-dimensional cubes later to provide some intuition about the properties of certain Boolean functions. Of course, we cannot visualize hypercubes (for n > 3), and there are many surprising properties of higher-dimensional spaces, so we must be careful in using intuitions gained in low dimensions. One diagrammatic technique for dimensions slightly higher than 3 is the Karnaugh map. A Karnaugh map is an array of values of a Boolean function in which the horizontal rows are indexed by the values of some of the variables and the vertical columns are indexed by the rest. The rows and columns are arranged in such a way that entries that are adjacent in the map correspond to vertices that are adjacent in the hypercube representation. We show an example of the 4-dimensional even parity function in Fig. 2.2. (An even parity function is a Boolean function that has value 1 if there are an even number of its arguments that have value 1; otherwise it has value 0.) Note that all adjacent cells in the table correspond to inputs differing in only one component. [Marginal note: Also describe general logic diagrams, [Wnek, et al., 1990].]
[Figure 2.2: A Karnaugh Map for the 4-dimensional even parity function. Rows are indexed by x_1, x_2 and columns by x_3, x_4, each in the order 00, 01, 11, 10 so that adjacent cells differ in one component:

          x_3, x_4
x_1, x_2   00  01  11  10
   00       1   0   1   0
   01       0   1   0   1
   11       1   0   1   0
   10       0   1   0   1  ]
2.2 Classes of Boolean Functions
2.2.1 Terms and Clauses
To use absolute bias in machine learning, we limit the class of hypotheses. In
learning Boolean functions, we frequently use some of the common subclasses of
those functions. Therefore, it will be important to know about these subclasses.
One basic subclass is called terms. A term is any function written in the form l_1 l_2 · · · l_k, where the l_i are literals. Such a form is called a conjunction of literals. Some example terms are x_1 x_7 and x_1 x_2 x_4. The size of a term is the number of literals it contains. The examples are of sizes 2 and 3, respectively. (Strictly speaking, the class of conjunctions of literals is called the monomials, and a conjunction of literals itself is called a term. This distinction is a fine one which we elect to blur here.)

It is easy to show that there are exactly 3^n possible terms of n variables. The number of terms of size k or less is bounded from above by Σ_{i=0}^{k} C(2n, i) = O(n^k), where C(i, j) = i! / ((i − j)! j!) is the binomial coefficient.

[Marginal note: Probably I’ll put in a simple term-learning algorithm here—so we can get started on learning! Also for DNF functions and decision lists—as they are defined in the next few pages.]
A clause is any function written in the form l_1 + l_2 + · · · + l_k, where the l_i are literals. Such a form is called a disjunction of literals. Some example clauses are x_3 + x_5 + x_6 and x_1 + x_4. The size of a clause is the number of literals it contains. There are 3^n possible clauses and fewer than Σ_{i=0}^{k} C(2n, i) clauses of size k or less. If f is a term, then (by De Morgan’s laws) f̄ is a clause, and vice versa. Thus, terms and clauses are duals of each other.

In psychological experiments, conjunctions of literals seem easier for humans to learn than disjunctions of literals.
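The marginal note above promises a simple term-learning algorithm. As a placeholder, here is a minimal sketch of the standard elimination algorithm for learning a term from positive instances: start with the most specific term (all 2n literals) and delete every literal that some positive instance falsifies. The representation and the example target are assumptions made for this sketch.

```python
def learn_term(positive_examples, n):
    """Learn a conjunction of literals consistent with the positive
    instances.  A literal is a pair (i, v): v = 1 for x_i, v = 0 for
    its complement."""
    term = {(i, v) for i in range(n) for v in (0, 1)}    # all 2n literals
    for X in positive_examples:
        term = {(i, v) for (i, v) in term if X[i] == v}  # keep satisfied ones
    return term

def eval_term(term, X):
    return int(all(X[i] == v for (i, v) in term))

# Positive instances of a hypothetical target term x_1 x̄_3:
positives = [(1, 0, 0), (1, 1, 0)]
print(sorted(learn_term(positives, 3)))   # [(0, 1), (2, 0)], i.e. x_1 x̄_3
```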
2.2.2 DNF Functions

A Boolean function is said to be in disjunctive normal form (DNF) if it can be written as a disjunction of terms. Some examples in DNF are: f = x_1 x_2 + x_2 x_3 x_4 and f = x_1 x_3 + x_2 x_3 + x_1 x_2 x_3. A DNF expression is called a k-term DNF expression if it is a disjunction of k terms; it is in the class k-DNF if the size of its largest term is k. The examples above are 2-term and 3-term expressions, respectively. Both expressions are in the class 3-DNF.

Each term in a DNF expression for a function is called an implicant because it “implies” the function (if the term has value 1, so does the function). In general, a term, t, is an implicant of a function, f, if f has value 1 whenever t does. A term, t, is a prime implicant of f if the term, t′, formed by taking any literal out of an implicant t is no longer an implicant of f. (The implicant cannot be “divided” by any term and remain an implicant.)

Thus, both x_2 x_3 and x̄_1 x_3 are prime implicants of f = x_2 x_3 + x̄_1 x_3 + x_2 x̄_1 x_3, but x_2 x̄_1 x_3 is not.
The relationship between implicants and prime implicants can be geometrically illustrated using the cube representation for Boolean functions. Consider, for example, the function f = x_2 x_3 + x̄_1 x_3 + x_2 x̄_1 x_3. We illustrate it in Fig. 2.3. Note that each of the three planes in the figure “cuts off” a group of vertices having value 1, but none cuts off any vertices having value 0. These planes are pictorial devices used to isolate certain lower-dimensional subfaces of the cube. Two of them isolate one-dimensional edges, and the third isolates a zero-dimensional vertex. Each group of vertices on a subface corresponds to one of the implicants of the function, f, and thus each implicant corresponds to a subface of some dimension. A k-dimensional subface corresponds to an (n − k)-size implicant term. The function is written as the disjunction of the implicants—corresponding to the union of all the vertices cut off by all of the planes. Geometrically, an implicant is prime if and only if its corresponding subface is the largest dimensional subface that includes all of its vertices and no other vertices having value 0. Note that the term x_2 x̄_1 x_3 is not a prime implicant of f. (In this case, we don’t even have to include this term in the function because the vertex cut off by the plane corresponding to x_2 x̄_1 x_3 is already cut off by the plane corresponding to x_2 x_3.) The other two implicants are prime because their corresponding subfaces cannot be expanded without including vertices having value 0.

[Figure 2.3: A Function and its Implicants. On the cube over x_1, x_2, x_3, f = x_2 x_3 + x̄_1 x_3 + x_2 x̄_1 x_3 = x_2 x_3 + x̄_1 x_3; x_2 x_3 and x̄_1 x_3 are prime implicants.]
Note that all Boolean functions can be represented in DNF—trivially by disjunctions of terms of size n where each term corresponds to one of the vertices whose value is 1. Whereas there are 2^(2^n) functions of n dimensions in DNF (since any Boolean function can be written in DNF), there are just 2^O(n^k) functions in k-DNF.

All Boolean functions can also be represented in DNF in which each term is a prime implicant, but that representation is not unique, as shown in Fig. 2.4. If we can express a function in DNF form, we can use the consensus method to find an expression for the function in which each term is a prime implicant. The consensus method relies on two results: [Marginal note: We may replace this section with one describing the Quine-McCluskey method instead.]

[Figure 2.4: Non-Uniqueness of Representation by Prime Implicants. On the cube over x_1, x_2, x_3, f = x_2 x_3 + x̄_1 x_3 + x_1 x_2 = x_1 x_2 + x̄_1 x_3. All of the terms are prime implicants, but there is not a unique representation.]

• Consensus:
x_i · f_1 + x̄_i · f_2 = x_i · f_1 + x̄_i · f_2 + f_1 · f_2

where f_1 and f_2 are terms such that no literal appearing in f_1 appears complemented in f_2. f_1 · f_2 is called the consensus of x_i · f_1 and x̄_i · f_2. Readers familiar with the resolution rule of inference will note that consensus is the dual of resolution.

Examples: x_1 is the consensus of x_1 x_2 and x_1 x̄_2. The terms x̄_1 x_2 and x_1 x̄_2 have no consensus since each term has more than one literal appearing complemented in the other.
• Subsumption:

x_i · f_1 + f_1 = f_1

where f_1 is a term. We say that f_1 subsumes x_i · f_1.

Example: x_1 x_4 x_5 subsumes x_1 x_4 x_2 x_5.
The consensus method for finding a set of prime implicants for a function, f, iterates the following operations on the terms of a DNF expression for f until no more such operations can be applied:

a. initialize the process with the set, T, of terms in the DNF expression of f,

b. compute the consensus of a pair of terms in T and add the result to T,

c. eliminate any terms in T that are subsumed by other terms in T.

When this process halts, the terms remaining in T are all prime implicants of f.

Example: Let f = x_1 x_2 + x_1 x̄_2 x_3 + x_1 x̄_2 x̄_3 x_4 x_5. We show a derivation of a set of prime implicants in the consensus tree of Fig. 2.5. The circled numbers adjoining the terms indicate the order in which the consensus and subsumption operations were performed. Shaded boxes surrounding a term indicate that it was subsumed. The final form of the function in which all terms are prime implicants is: f = x_1 x_2 + x_1 x_3 + x_1 x_4 x_5. Its terms are all of the non-subsumed terms in the consensus tree.
[Figure 2.5: A Consensus Tree. Starting from the terms x_1 x_2, x_1 x̄_2 x_3, and x_1 x̄_2 x̄_3 x_4 x_5, the numbered consensus and subsumption steps produce x_1 x_3, x_1 x̄_2 x_4 x_5, and x_1 x_4 x_5; the non-subsumed terms give f = x_1 x_2 + x_1 x_3 + x_1 x_4 x_5.]
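The consensus method is mechanical enough to state as code. Below is a minimal sketch (an addition, not from the text) that represents a term as a set of literals (i, v), with v = 0 marking a complemented variable, and iterates steps (b) and (c) until closure; running it on the example above yields the three prime implicants.

```python
def consensus(t1, t2):
    """Consensus of two terms, or None if it does not exist.  A term
    is a frozenset of literals (i, v): v = 1 for x_i, v = 0 for its
    complement.  A consensus exists iff exactly one variable appears
    plain in one term and complemented in the other."""
    clashes = [i for (i, v) in t1 if (i, 1 - v) in t2]
    if len(clashes) != 1:
        return None
    return frozenset(l for l in (t1 | t2) if l[0] != clashes[0])

def prime_implicants(terms):
    """The consensus method: add consensus terms and eliminate subsumed
    terms (a term is subsumed when a proper subset of it is present)."""
    T = {frozenset(t) for t in terms}
    changed = True
    while changed:
        changed = False
        for t1 in list(T):
            for t2 in list(T):
                c = consensus(t1, t2)
                if c is not None and not any(s <= c for s in T):
                    T.add(c)
                    changed = True
        T = {t for t in T if not any(s < t for s in T)}
    return T

# The example above: f = x_1 x_2 + x_1 x̄_2 x_3 + x_1 x̄_2 x̄_3 x_4 x_5.
f = [{(1, 1), (2, 1)},
     {(1, 1), (2, 0), (3, 1)},
     {(1, 1), (2, 0), (3, 0), (4, 1), (5, 1)}]
for term in sorted(prime_implicants(f), key=len):
    print(sorted(term))   # the terms of x_1 x_2 + x_1 x_3 + x_1 x_4 x_5
```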
2.2.3 CNF Functions

Disjunctive normal form has a dual: conjunctive normal form (CNF). A Boolean function is said to be in CNF if it can be written as a conjunction of clauses. An example in CNF is: f = (x_1 + x_2)(x_2 + x_3 + x_4). A CNF expression is called a k-clause CNF expression if it is a conjunction of k clauses; it is in the class k-CNF if the size of its largest clause is k. The example is a 2-clause expression in 3-CNF. If f is written in DNF, an application of De Morgan’s law renders f̄ in CNF, and vice versa. Because CNF and DNF are duals, there are also 2^O(n^k) functions in k-CNF.
2.2.4 Decision Lists

Rivest has proposed a class of Boolean functions called decision lists [Rivest, 1987]. A decision list is written as an ordered list of pairs:

(t_q, v_q)
(t_{q−1}, v_{q−1})
· · ·
(t_i, v_i)
· · ·
(t_2, v_2)
(T, v_1)

where the v_i are either 0 or 1, the t_i are terms in (x_1, ..., x_n), and T is a term whose value is 1 (regardless of the values of the x_i). The value of a decision list is the value of v_i for the first t_i in the list that has value 1. (At least one t_i will have value 1, because the last one does; v_1 can be regarded as a default value of the decision list.) The decision list is of size k if the size of the largest term in it is k. The class of decision lists of size k or less is called k-DL.

An example decision list is:

f = (x_1 x_2, 1)
    (x̄_1 x̄_2 x_3, 0)
    (x̄_2 x_3, 1)
    (1, 0)

f has value 0 for x_1 = 0, x_2 = 0, and x_3 = 1. It has value 1 for x_1 = 1, x_2 = 0, and x_3 = 1. This function is in 3-DL.

It has been shown that the class k-DL is a strict superset of the union of k-DNF and k-CNF. There are 2^O[n^k k log(n)] functions in k-DL [Rivest, 1987].

Interesting generalizations of decision lists use other Boolean functions in place of the terms, t_i. For example we might use linearly separable functions in place of the t_i (see below and [Marchand & Golea, 1993]).
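A decision list is evaluated by scanning for the first satisfied term. Here is a minimal sketch (an addition, using the bar placements of the example as reconstructed above) with the same literal representation as earlier: pairs (i, v) with v = 0 for a complemented variable, and 0-based indices.

```python
def eval_decision_list(dl, X):
    """Return v_i for the first pair (t_i, v_i) whose term has value 1.
    The final pair's term is empty, hence always satisfied (the default)."""
    for term, v in dl:
        if all(X[i] == pol for (i, pol) in term):
            return v

# The example above: f = (x_1 x_2, 1)(x̄_1 x̄_2 x_3, 0)(x̄_2 x_3, 1)(1, 0).
f = [([(0, 1), (1, 1)], 1),
     ([(0, 0), (1, 0), (2, 1)], 0),
     ([(1, 0), (2, 1)], 1),
     ([], 0)]

print(eval_decision_list(f, (0, 0, 1)))   # 0, as stated in the text
print(eval_decision_list(f, (1, 0, 1)))   # 1, as stated in the text
```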
2.2.5 Symmetric and Voting Functions

A Boolean function is called symmetric if it is invariant under permutations of the input variables. For example, any function that depends only on the number of input variables whose values are 1 is a symmetric function. The parity functions, which have value 1 or 0 according to whether the number of input variables with value 1 is even or odd, are symmetric. (The exclusive or function, illustrated in Fig. 2.1, is an odd-parity function of two dimensions. The or and and functions of two dimensions are also symmetric.)

An important subclass of the symmetric functions is the class of voting functions (also called m-of-n functions). A k-voting function has value 1 if and only if k or more of its n inputs have value 1. If k = 1, a voting function is the same as an n-sized clause; if k = n, a voting function is the same as an n-sized term; if k = (n + 1)/2 for n odd or k = 1 + n/2 for n even, we have the majority function.
2.2.6 Linearly Separable Functions

The linearly separable functions are those that can be expressed as follows:

f = thresh(Σ_{i=1}^{n} w_i x_i, θ)

where the w_i, i = 1, ..., n, are real-valued numbers called weights, θ is a real-valued number called the threshold, and thresh(σ, θ) is 1 if σ ≥ θ and 0 otherwise. (Note that the concept of linearly separable functions can be extended to non-Boolean inputs.) The k-voting functions are all members of the class of linearly separable functions in which the weights all have unit value and the threshold depends on k. Thus, terms and clauses are special cases of linearly separable functions.

A convenient way to write linearly separable functions uses vector notation:

f = thresh(X · W, θ)

where X = (x_1, ..., x_n) is an n-dimensional vector of input variables, W = (w_1, ..., w_n) is an n-dimensional vector of weight values, and X · W is the dot (or inner) product of the two vectors. Input vectors for which f has value 1 lie in a half-space on one side of (and on) a hyperplane whose orientation is normal to W and whose position (with respect to the origin) is determined by θ. We saw an example of such a separating plane in Fig. 1.6. With this idea in mind, it is easy to see that two of the functions in Fig. 2.1 are linearly separable, while two are not. Also note that the terms in Figs. 2.3 and 2.4 are linearly separable functions as evidenced by the separating planes shown.

There is no closed-form expression for the number of linearly separable functions of n dimensions, but the following table gives the numbers for n up to 6.
n    Boolean Functions    Linearly Separable Functions
1    4                    4
2    16                   14
3    256                  104
4    65,536               1,882
5    ≈ 4.3 × 10^9         94,572
6    ≈ 1.8 × 10^19        15,028,134
[Muroga, 1971] has shown that (for n > 1) there are no more than 2^(n^2) linearly separable functions of n dimensions. (See also [Winder, 1961, Winder, 1962].)
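As a sketch of the definitions in this subsection (an illustration, not from the text), thresh and a k-voting function can be written directly:

```python
def thresh(sigma, theta):
    """1 if sigma >= theta, and 0 otherwise."""
    return 1 if sigma >= theta else 0

def lin_sep(X, W, theta):
    """A linearly separable function f = thresh(X · W, θ)."""
    return thresh(sum(x * w for x, w in zip(X, W)), theta)

def voting(X, k):
    """A k-voting (m-of-n) function: unit weights, threshold k."""
    return lin_sep(X, [1] * len(X), k)

# k = n gives an n-sized term (and); k = 1 gives an n-sized clause (or).
print(voting((1, 1, 1), 3), voting((0, 1, 0), 1))   # 1 1
```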
2.3 Summary
The diagram in Fig. 2.6 shows some of the set inclusions of the classes of Boolean
functions that we have considered. We will be confronting these classes again
in later chapters.
[Figure 2.6: Classes of Boolean Functions. A diagram of set inclusions: terms within the linearly separable functions; k-size terms within k-DNF within k-DL; everything within DNF (all Boolean functions).]
The sizes of the various classes are given in the following table (adapted from [Dietterich, 1990, page 262]):

Class           Size of Class
terms           3^n
clauses         3^n
k-term DNF      2^O(kn)
k-clause CNF    2^O(kn)
k-DNF           2^O(n^k)
k-CNF           2^O(n^k)
k-DL            2^O[n^k k log(n)]
lin sep         2^O(n^2)
DNF             2^(2^n)
2.4 Bibliographical and Historical Remarks
To be added.
Chapter 3
Using Version Spaces for Learning
3.1 Version Spaces and Mistake Bounds
The first learning methods we present are based on the concepts of version
spaces and version graphs. These ideas are most clearly explained for the case
of Boolean function learning. Given an initial hypothesis set H (a subset of
all Boolean functions) and the values of f(X) for each X in a training set, Ξ,
the version space is that subset of hypotheses, H_v, that is consistent with these
values. A hypothesis, h, is consistent with the values of X in Ξ if and only if
h(X) = f(X) for all X in Ξ. We say that the hypotheses in H that are not
consistent with the values in the training set are ruled out by the training set.
We could imagine (conceptually only!) that we have devices for implementing
every function in H. An incremental training procedure could then be
defined which presented each pattern in Ξ to each of these functions and then
eliminated those functions whose values for that pattern did not agree with its
given value. At any stage of the process we would then have left some subset
of functions that are consistent with the patterns presented so far; this subset
is the version space for the patterns already presented. This idea is illustrated
in Fig. 3.1.
Consider the following procedure for classifying an arbitrary input pattern,
X: the pattern is put in the same class (0 or 1) as are the majority of the
outputs of the functions in the version space. During the learning procedure,
if this majority is not equal to the value of the pattern presented, we say a
mistake is made, and we revise the version space accordingly—eliminating all
those (majority of the) functions voting incorrectly. Thus, whenever a mistake
is made, we rule out at least half of the functions remaining in the version space.
[Figure 3.1 depicts a subset, H = {h_1, . . . , h_K} (K = |H|), of all Boolean
functions implemented side by side; presenting a pattern X rules out the
hypotheses not consistent with the training patterns, and the hypotheses not
ruled out constitute the version space.]
Figure 3.1: Implementing the Version Space

How many mistakes can such a procedure make? Obviously, we can make
no more than log_2(|H|) mistakes, where |H| is the number of hypotheses in the
original hypothesis set, H. (Note, though, that the number of training patterns
seen before this maximum number of mistakes is made might be much greater.)
This theoretical (and very impractical!) result (due to [Littlestone, 1988]) is an
example of a mistake bound—an important concept in machine learning theory.
It shows that there must exist a learning procedure that makes no more mistakes
than this upper bound. Later, we’ll derive other mistake bounds.
As a special case, if our bias was to limit H to terms, we would make no
more than log_2(3^n) = n log_2(3) ≈ 1.585n mistakes before exhausting the version
space. This result means that if f were a term, we would make no more than
1.585n mistakes before learning f, and otherwise we would make no more than
that number of mistakes before being able to decide that f is not a term.
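The majority-vote procedure sketched above is sometimes called the halving
algorithm. A minimal sketch, assuming hypotheses are represented as callable
functions of a pattern, is:

    def halving(hypotheses, labeled_stream):
        """Majority-vote learner: each mistake rules out at least half of the
        surviving hypotheses, so mistakes <= log2 of the initial |H|."""
        version_space = list(hypotheses)
        mistakes = 0
        for x, label in labeled_stream:
            votes = sum(h(x) for h in version_space)
            prediction = int(2 * votes >= len(version_space))
            if prediction != label:
                mistakes += 1
            version_space = [h for h in version_space if h(x) == label]
        return version_space, mistakes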
Even if we do not have suﬃcient training patterns to reduce the version
space to a single function, it may be that there are enough training patterns
to reduce the version space to a set of functions such that most of them assign
the same values to most of the patterns we will see henceforth. We could select
one of the remaining functions at random and be reasonably assured that it
will generalize satisfactorily. We next discuss a computationally more feasible
method for representing the version space.
3.2 Version Graphs
Boolean functions can be ordered by generality. A Boolean function, f_1, is more
general than a function, f_2, (and f_2 is more specific than f_1), if f_1 has value 1
for all of the arguments for which f_2 has value 1, and f_1 ≠ f_2. For example, x_3
is more general than x_2 x_3 but is not more general than x_3 + x_2.
We can form a graph with the hypotheses, {h_i}, in the version space as
nodes. A node in the graph, h_i, has an arc directed to node, h_j, if and only if
h_j is more general than h_i. We call such a graph a version graph. In Fig. 3.2,
we show an example of a version graph over a 3-dimensional input space for
hypotheses restricted to terms (with none of them yet ruled out).
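For hypotheses restricted to terms, this ordering is easy to compute. In the
sketch below (our representation: a term as a dict from variable index to
required value), f_1 is more general than f_2 exactly when f_1's literals form a
proper subset of f_2's:

    def term_value(term, x):
        """A term as a dict {variable index: required 0/1 value}; {} is '1'."""
        return int(all(x[i] == v for i, v in term.items()))

    def more_general(f1, f2):
        """f1 demands strictly fewer literals than f2, so it has value 1
        wherever f2 does."""
        return set(f1.items()) < set(f2.items())

    # x3 is more general than x2 x3 (0-based indices: x2 -> 1, x3 -> 2).
    # (x3 + x2 is a disjunction, not a term, so it lies outside this sketch.)
    print(more_general({2: 1}, {1: 1, 2: 1}))   # True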
[Figure 3.2 shows the version graph for terms over three variables, none yet
ruled out: the function “1” at the top; below it the one-literal terms (k = 1);
then the two-literal terms (k = 2), such as x_2 x_3 and x_1 x_3; then the
three-literal terms (k = 3), such as x_1 x_2 x_3; and the function “0” at the
bottom. For simplicity, only some arcs in the graph are shown.]
Figure 3.2: A Version Graph for Terms
That function, denoted here by “1,” which has value 1 for all inputs, corresponds
to the node at the top of the graph. (It is more general than any other
term.) Similarly, the function “0” is at the bottom of the graph. Just below
“1” is a row of nodes corresponding to all terms having just one literal, and just
below them is a row of nodes corresponding to terms having two literals, and
so on. There are 3^3 = 27 functions altogether (the function “0,” included in
the graph, is technically not a term). To make our portrayal of the graph less
cluttered only some of the arcs are shown; each node in the actual graph has an
arc directed to all of the nodes above it that are more general.
We use this same example to show how the version graph changes as we
consider a set of labeled samples in a training set, Ξ. Suppose we first consider
the training pattern (1, 0, 1) with value 0. Some of the functions in the version
graph of Fig. 3.2 are inconsistent with this training pattern. These ruled-out
nodes are no longer in the version graph and are shown shaded in Fig. 3.3. We
also show there the three-dimensional cube representation in which the vertex
(1, 0, 1) has value 0.

[Figure 3.3 shows the new version graph, with the nodes ruled out by the fact
that (1, 0, 1) has value 0 shaded, together with the cube representation; only
some arcs in the graph are shown.]
Figure 3.3: The Version Graph Upon Seeing (1, 0, 1)
In a version graph, there are always a set of hypotheses that are maximally
general and a set of hypotheses that are maximally speciﬁc. These are called
the general boundary set (gbs) and the speciﬁc boundary set (sbs), respectively.
In Fig. 3.4, we have the version graph as it exists after learning that (1,0,1) has
value 0 and (1, 0, 0) has value 1. The gbs and sbs are shown.
[Figure 3.4 shows the version graph after learning that (1, 0, 1) has value 0
and (1, 0, 0) has value 1: the general boundary set (gbs) and the specific
boundary set (sbs) are marked, and the remaining hypotheses are more specific
than the gbs and more general than the sbs.]
Figure 3.4: The Version Graph Upon Seeing (1, 0, 1) and (1, 0, 0)
Boundary sets are important because they provide an alternative to representing
the entire version space explicitly, which would be impractical. Given
only the boundary sets, it is possible to determine whether or not any hypothesis
(in the prescribed class of Boolean functions we are using) is a member of
the version space. This determination is possible because any member of the
version space (that is not a member of one of the boundary sets) is more specific
than some member of the general boundary set and is more general than some
member of the specific boundary set.
If we limit the Boolean functions that can be in the version space to terms,
it is a simple matter to determine maximally general and maximally specific
functions (assuming that there is some term that is in the version space). A
maximally specific one corresponds to a subface of minimal dimension that
contains all the members of the training set labelled by a 1 and no members
labelled by a 0. A maximally general one corresponds to a subface of maximal
dimension that contains all the members of the training set labelled by a 1 and
no members labelled by a 0. Looking at Fig. 3.4, we see that the subface of
minimal dimension that contains (1, 0, 0) but does not contain (1, 0, 1) is just
the vertex (1, 0, 0) itself—corresponding to the function x_1 x̄_2 x̄_3. The subface
of maximal dimension that contains (1, 0, 0) but does not contain (1, 0, 1) is
the bottom face of the cube—corresponding to the function x̄_3. In Figs. 3.2
through 3.4 the sbs is always singular. Version spaces for terms always have
singular specific boundary sets. As seen in Fig. 3.3, however, the gbs of a
version space for terms need not be singular.
3.3 Learning as Search of a Version Space
[To be written. Relate to term learning algorithm presented in Chapter
Two. Also discuss best-first search methods. See Pat Langley's example using
“pseudo-cells” of how to generate and eliminate hypotheses.]
Selecting a hypothesis from the version space can be thought of as a search
problem. One can start with a very general function and specialize it through
various specialization operators until one finds a function that is consistent (or
adequately so) with a set of training patterns. Such procedures are usually
called top-down methods. Or, one can start with a very special function and
generalize it—resulting in bottom-up methods. We shall see instances of both
styles of learning in this book. (Compare this view of top-down versus
bottom-up with the divide-and-conquer and the covering (or AQ) methods of
decision-tree induction.)
3.4 The Candidate Elimination Method
The candidate elimination method is an incremental method for computing the
boundary sets. Quoting from [Hirsh, 1994, page 6]:
“The candidate-elimination algorithm manipulates the boundary-set
representation of a version space to create boundary sets that represent
a new version space consistent with all the previous instances
plus the new one. For a positive example the algorithm generalizes
the elements of the [sbs] as little as possible so that they cover the
new instance yet remain consistent with past data, and removes
those elements of the [gbs] that do not cover the new instance. For
a negative instance the algorithm specializes elements of the [gbs]
so that they no longer cover the new instance yet remain consistent
with past data, and removes from the [sbs] those elements that
mistakenly cover the new, negative instance.”
The method uses the following definitions (adapted from
[Genesereth & Nilsson, 1987]):
• a hypothesis is called sufficient if and only if it has value 1 for all training
samples labeled by a 1,
• a hypothesis is called necessary if and only if it has value 0 for all training
samples labeled by a 0.
Here is how to think about these definitions: A hypothesis implements a sufficient
condition that a training sample has value 1 if the hypothesis has value 1
for all of the positive instances; a hypothesis implements a necessary condition
that a training sample has value 1 if the hypothesis has value 0 for all of the
negative instances. A hypothesis is consistent with the training set (and thus is
in the version space) if and only if it is both sufficient and necessary.
We start (before receiving any members of the training set) with the function
“0” as the singleton element of the specific boundary set and with the function
“1” as the singleton element of the general boundary set. Upon receiving a new
labeled input vector, the boundary sets are changed as follows (a code sketch
follows these rules):
a. If the new vector is labelled with a 1:
The new general boundary set is obtained from the previous one by excluding
any elements in it that are not sufficient. (That is, we exclude any
elements that have value 0 for the new vector.)
The new specific boundary set is obtained from the previous one by replacing
each element, h_i, in it by all of its least generalizations.
The hypothesis h_g is a least generalization of h if and only if: a) h is
more specific than h_g, b) h_g is sufficient, c) no function (including h) that
is more specific than h_g is sufficient, and d) h_g is more specific than some
member of the new general boundary set. It might be that h_g = h. Also,
least generalizations of two different functions in the specific boundary set
may be identical.
b. If the new vector is labelled with a 0:
The new specific boundary set is obtained from the previous one by excluding
any elements in it that are not necessary. (That is, we exclude
any elements that have value 1 for the new vector.)
The new general boundary set is obtained from the previous one by replacing
each element, h_i, in it by all of its least specializations.
The hypothesis h_s is a least specialization of h if and only if: a) h is more
general than h_s, b) h_s is necessary, c) no function (including h) that is
more general than h_s is necessary, and d) h_s is more general than some
member of the new specific boundary set. Again, it might be that h_s = h,
and least specializations of two different functions in the general boundary
set may be identical.
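Here is a code sketch of these rules, restricted to terms (our representation: a
term as a frozenset of (index, value) literals, with None standing for the
function “0”). It relies on the term representation to make least generalizations
unique and skips the full minimality checks against past data, so it is an
illustration under those assumptions, not a general implementation:

    def term_value(term, x):
        """None encodes the function '0'; the empty frozenset encodes '1'."""
        if term is None:
            return 0
        return int(all(x[i] == v for i, v in term))

    def lgg(term, x):
        """Least generalization of term covering the positive example x."""
        if term is None:
            return frozenset(enumerate(x))          # the term naming x exactly
        return frozenset((i, v) for i, v in term if x[i] == v)

    def candidate_elimination(examples, n):
        gbs = [frozenset()]                         # the function '1'
        sbs = [None]                                # the function '0'
        for x, label in examples:
            if label == 1:
                gbs = [g for g in gbs if term_value(g, x) == 1]   # sufficient g
                sbs = [s for s in (lgg(s0, x) for s0 in sbs)
                       if any(g <= s for g in gbs)]               # below some g
            else:
                sbs = [s for s in sbs if term_value(s, x) == 0]   # necessary s
                new = set()
                for g in gbs:
                    if term_value(g, x) == 0:
                        new.add(g)                                # already necessary
                    else:                                         # least specializations
                        new |= {g | {(i, 1 - x[i])} for i in range(n)
                                if (i, x[i]) not in g}
                new = {h for h in new
                       if any(s is None or h <= s for s in sbs)}  # above some s
                gbs = [h for h in new if not any(h2 < h for h2 in new)]
        return gbs, sbs

    examples = [((1, 0, 1), 0), ((1, 0, 0), 1), ((1, 1, 1), 0), ((0, 0, 1), 0)]
    print(candidate_elimination(examples, 3))
    # gbs = [frozenset({(2, 0)})]                 the term  x3-bar
    # sbs = [frozenset({(0, 1), (1, 0), (2, 0)})] the term  x1 x2-bar x3-bar

Run on the example that follows, this sketch reproduces the boundary sets
derived in the walkthrough below.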
As an example, suppose we present the vectors in the following order:
vector label
(1, 0, 1) 0
(1, 0, 0) 1
(1, 1, 1) 0
(0, 0, 1) 0
We start with general boundary set, “1”, and specific boundary set, “0.”
After seeing the first sample, (1, 0, 1), labeled with a 0, the specific boundary
set stays at “0” (it is necessary), and we change the general boundary set to
{x̄_1, x_2, x̄_3}. Each of the functions, x̄_1, x_2, and x̄_3, is a least specialization of
“1” (they are necessary, “1” is not, they are more general than “0”, and there
are no functions that are more general than they and also necessary).
Then, after seeing (1, 0, 0), labeled with a 1, the general boundary set
changes to {x̄_3} (because x̄_1 and x_2 are not sufficient), and the specific boundary
set is changed to {x_1 x̄_2 x̄_3}. This single function is a least generalization of “0”
(it is sufficient, “0” is more specific than it, no function (including “0”) that is
more specific than it is sufficient, and it is more specific than some member of
the general boundary set).
When we see (1, 1, 1), labeled with a 0, we do not change the specific
boundary set because its function is still necessary. We do not change the
general boundary set either because x̄_3 is still necessary.
Finally, when we see (0, 0, 1), labeled with a 0, we again change neither
boundary set: the function in each is still necessary. [Maybe I'll put in an
example of a version graph for non-Boolean functions.]
3.5 Bibliographical and Historical Remarks
The concept of version spaces and their role in learning was first investigated
by Tom Mitchell [Mitchell, 1982]. Although these ideas are not used in practical
machine learning procedures, they do provide insight into the nature of
hypothesis selection. In order to accommodate noisy data, version spaces have
been generalized by [Hirsh, 1994] to allow hypotheses that are not necessarily
consistent with the training set. More to be added.
Chapter 4
Neural Networks
In chapter two we defined several important subsets of Boolean functions. Suppose
we decide to use one of these subsets as a hypothesis set for supervised
function learning. We next have the question of how best to implement the
function as a device that gives the outputs prescribed by the function for arbitrary
inputs. In this chapter we describe how networks of nonlinear elements
can be used to implement various input-output functions and how they can be
trained using supervised learning methods.
Networks of nonlinear elements, interconnected through adjustable weights,
play a prominent role in machine learning. They are called neural networks
because the nonlinear elements have as their inputs a weighted sum of the outputs
of other elements—much like networks of biological neurons do. These networks
commonly use the threshold element which we encountered in chapter two in
our study of linearly separable Boolean functions. We begin our treatment of
neural nets by studying this threshold element and how it can be used in the
simplest of all networks, namely ones composed of a single threshold element.
4.1 Threshold Logic Units
4.1.1 Deﬁnitions and Geometry
Linearly separable (threshold) functions are implemented in a straightforward
way by summing the weighted inputs and comparing this sum to a threshold
value as shown in Fig. 4.1. This structure we call a threshold logic unit (TLU).
Its output is 1 or 0 depending on whether or not the weighted sum of its inputs is
greater than or equal to a threshold value, θ. It has also been called an Adaline
(for adaptive linear element) [Widrow, 1962, Widrow & Lehr, 1990], an LTU
(linear threshold unit), a perceptron, and a neuron. (Although the word “perceptron”
is often used nowadays to refer to a single TLU, Rosenblatt originally
defined it as a class of networks of threshold elements [Rosenblatt, 1958].)
[Figure 4.1 shows a TLU with inputs x_1, . . . , x_n and an augmented input
x_{n+1} = 1, weights w_1, . . . , w_n and threshold weight w_{n+1}, computing
f = thresh(∑_{i=1}^{n+1} w_i x_i, 0).]
Figure 4.1: A Threshold Logic Unit (TLU)
The n-dimensional feature or input vector is denoted by X = (x_1, . . . , x_n).
When we want to distinguish among different feature vectors, we will attach
subscripts, such as X_i. The components of X can be any real-valued numbers,
but we often specialize to the binary numbers 0 and 1. The weights of a TLU
are represented by an n-dimensional weight vector, W = (w_1, . . . , w_n). Its
components are real-valued numbers (but we sometimes specialize to integers).
The TLU has output 1 if ∑_{i=1}^{n} x_i w_i ≥ θ; otherwise it has output 0. The
weighted sum that is calculated by the TLU can be simply represented as a
vector dot product, X•W. (If the pattern and weight vectors are thought of as
“column” vectors, this dot product is then sometimes written as X^t W, where
the “row” vector X^t is the transpose of X.) Often, the threshold, θ, of the TLU
is fixed at 0; in that case, arbitrary thresholds are achieved by using (n + 1)-dimensional
“augmented” vectors, Y and V, whose first n components are the
same as those of X and W, respectively. The (n + 1)st component, x_{n+1}, of
the augmented feature vector, Y, always has value 1; the (n + 1)st component,
w_{n+1}, of the augmented weight vector, V, is set equal to the negative of the
desired threshold value. (When we want to emphasize the use of augmented
vectors, we'll use the Y, V notation; however when the context of the discussion
makes it clear what sort of vectors we are talking about, we'll lapse back
into the more familiar X, W notation.) In the Y, V notation, the TLU has an
output of 1 if Y•V ≥ 0. Otherwise, the output is 0.
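In code, a TLU in augmented notation is just a dot product and a comparison
with zero. A minimal sketch (numpy; the class name is ours):

    import numpy as np

    class TLU:
        """A threshold logic unit in augmented (Y, V) notation: the last
        weight is the threshold weight w_{n+1} = -theta; fires on Y.V >= 0."""
        def __init__(self, v):
            self.v = np.asarray(v, dtype=float)

        def __call__(self, x):
            y = np.append(x, 1.0)              # augment: x_{n+1} = 1
            return int(y @ self.v >= 0.0)

    # and(x1, x2): weights (1, 1), threshold 3/2, so V = (1, 1, -1.5)
    and_unit = TLU([1.0, 1.0, -1.5])
    print([and_unit([a, b]) for a in (0, 1) for b in (0, 1)])   # [0, 0, 0, 1]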
We can give an intuitively useful geometric description of a TLU. A TLU
divides the input space by a hyperplane as sketched in Fig. 4.2. The hyperplane
is the boundary between patterns for which X•W + w_{n+1} > 0 and patterns
for which X•W + w_{n+1} < 0. Thus, the equation of the hyperplane itself is
X•W + w_{n+1} = 0. The unit vector that is normal to the hyperplane is
n = W/|W|, where |W| = √(w_1² + · · · + w_n²) is the length of the vector W.
(The normal form of the hyperplane equation is X•n + w_{n+1}/|W| = 0.) The
distance from the hyperplane to the origin is w_{n+1}/|W|, and the distance from
an arbitrary point, X, to the hyperplane is (X•W + w_{n+1})/|W|. When the
distance from the hyperplane to the origin is negative (that is, when w_{n+1} < 0),
then the origin is on the negative side of the hyperplane (that is, the side for
which X•W + w_{n+1} < 0).
[Figure 4.2 shows the hyperplane X•W + w_{n+1} = 0 with its unit normal
n = W/|W| and the origin; X•W + w_{n+1} > 0 on one side and
X•W + w_{n+1} < 0 on the other. The normal form of the hyperplane equation,
X•n + w_{n+1}/|W| = 0, is also shown.]
Figure 4.2: TLU Geometry
Adjusting the weight vector, W, changes the orientation of the hyperplane;
adjusting w_{n+1} changes the position of the hyperplane (relative to the origin).
Thus, training of a TLU can be achieved by adjusting the values of the weights.
In this way the hyperplane can be moved so that the TLU implements different
(linearly separable) functions of the input.
4.1.2 Special Cases of Linearly Separable Functions
Terms
Any term of size k can be implemented by a TLU with a weight from each of
those inputs corresponding to variables occurring in the term. A weight of +1 is
used from an input corresponding to a positive literal, and a weight of −1 is used
from an input corresponding to a negative literal. (Literals not mentioned in
the term have weights of zero—that is, no connection at all—from their inputs.)
The threshold, θ, is set equal to k_p − 1/2, where k_p is the number of positive
literals in the term. Such a TLU implements a hyperplane boundary that is
parallel to a subface of dimension (n − k) of the unit hypercube. We show a
three-dimensional example in Fig. 4.3. Thus, linearly separable functions are a
superset of terms.
[Figure 4.3 shows the unit cube with vertices (1,1,0) and (1,1,1) on the
positive side of the plane x_1 + x_2 − 3/2 = 0, implementing the term
f = x_1 x_2.]
Figure 4.3: Implementing a Term
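The recipe just described is mechanical, as the following sketch shows (Python;
the representation is ours: a term as a list of (index, sign) literals):

    def tlu_for_term(term, n):
        """Weights and threshold for a term given as (index, sign) literals,
        where sign 1 marks a positive literal and sign 0 a negated one."""
        w = [0.0] * n                        # unmentioned literals: no connection
        for i, sign in term:
            w[i] = 1.0 if sign else -1.0
        k_p = sum(sign for _, sign in term)  # number of positive literals
        return w, k_p - 0.5                  # theta = k_p - 1/2

    # f = x1 x2 over three variables: weights (1, 1, 0), threshold 3/2
    print(tlu_for_term([(0, 1), (1, 1)], 3))   # ([1.0, 1.0, 0.0], 1.5)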
Clauses
The negation of a clause is a term. For example, the negation of the clause
f = x_1 + x_2 + x_3 is the term f̄ = x̄_1 x̄_2 x̄_3. A hyperplane can be used to
implement this term. If we “invert” the hyperplane, it will implement the
clause instead. Inverting a hyperplane is done by multiplying all of the TLU
weights—even w_{n+1}—by −1. This process simply changes the orientation of the
hyperplane—flipping it around by 180 degrees and thus changing its “positive
side.” Therefore, linearly separable functions are also a superset of clauses. We
show an example in Fig. 4.4.
4.1.3 Error-Correction Training of a TLU
There are several procedures that have been proposed for adjusting the weights
of a TLU. We present next a family of incremental training procedures with
parameter c. These methods make adjustments to the weight vector only when
the TLU being trained makes an error on a training pattern; they are called
error-correction procedures. We use augmented feature and weight vectors in
describing them.
a. We start with a finite training set, Ξ, of vectors, Y_i, and their binary
labels.
[Figure 4.4 shows the plane x_1 + x_2 + x_3 − 1/2 = 0 separating the vertex
where f̄ = x̄_1 x̄_2 x̄_3 has value 1 from the vertices where the clause
f = x_1 + x_2 + x_3 has value 1.]
Figure 4.4: Implementing a Clause
b. Compose an infinite training sequence, Σ, of vectors from Ξ and their
labels such that each member of Ξ occurs infinitely often in Σ. Set the
initial weight values of a TLU to arbitrary values.
c. Repeat forever:
Present the next vector, Y_i, in Σ to the TLU and note its response.
(a) If the TLU responds correctly, make no change in the weight vector.
(b) If Y_i is supposed to produce an output of 0 and produces an output
of 1 instead, modify the weight vector as follows:

V ←− V − c_i Y_i

where c_i is a positive real number called the learning rate parameter
(whose value is different in different instances of this family of
procedures and may depend on i).
Note that after this adjustment the new dot product will be
(V − c_i Y_i)•Y_i = V•Y_i − c_i Y_i•Y_i, which is smaller than it was before the
weight adjustment.
(c) If Y_i is supposed to produce an output of 1 and produces an output
of 0 instead, modify the weight vector as follows:

V ←− V + c_i Y_i

In this case, the new dot product will be
(V + c_i Y_i)•Y_i = V•Y_i + c_i Y_i•Y_i, which is larger than it was before the
weight adjustment.
Note that all three of these cases can be combined in the following rule:

V ←− V + c_i (d_i − f_i) Y_i

where d_i is the desired response (1 or 0) for Y_i, and f_i is the actual
response (1 or 0) for Y_i.
Note also that because the weight vector V now includes the w_{n+1} threshold
component, the threshold of the TLU is also changed by these adjustments.
We identify two versions of this procedure:
1) In the fixed-increment procedure, the learning rate parameter, c_i, is the
same fixed, positive constant for all i. Depending on the value of this constant,
the weight adjustment may or may not correct the response to an erroneously
classified feature vector.
2) In the fractional-correction procedure, the parameter c_i is set to
λ |Y_i•V| / (Y_i•Y_i), where V is the weight vector before it is changed. Note
that if λ = 0, no correction takes place at all. If λ = 1, the correction is just
sufficient to make Y_i•V = 0. If λ > 1, the error will be corrected.
It can be proved that if there is some weight vector, V, that produces a
correct output for all of the feature vectors in Ξ, then after a finite number
of feature vector presentations, the fixed-increment procedure will find such a
weight vector and thus make no more weight changes. The same result holds
for the fractional-correction procedure if 1 < λ ≤ 2.
For additional background, proofs, and examples of error-correction procedures,
see [Nilsson, 1990]. (See [Maass & Turán, 1994] for a hyperplane-finding
procedure that makes no more than O(n² log n) mistakes.)
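A minimal sketch of the fixed-increment procedure (numpy; it cycles through a
finite training set, which, repeated, presents each member infinitely often):

    import numpy as np

    def fixed_increment(patterns, labels, c=1.0, epochs=100):
        """Fixed-increment error correction for a TLU (augmented vectors).
        patterns: n-vectors; labels: 0/1. Assumes a separable training set."""
        v = np.zeros(len(patterns[0]) + 1)
        for _ in range(epochs):
            mistakes = 0
            for x, d in zip(patterns, labels):
                y = np.append(x, 1.0)             # augment with x_{n+1} = 1
                f = int(y @ v >= 0.0)
                if f != d:
                    v += c * (d - f) * y          # V <- V + c (d - f) Y
                    mistakes += 1
            if mistakes == 0:
                break                             # consistent with training set
        return v

    # Learn 'and' of two inputs:
    print(fixed_increment([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1]))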
4.1.4 Weight Space
We can give an intuitive idea about how these procedures work by considering
what happens to the augmented weight vector in “weight space” as corrections
are made. We use augmented vectors in our discussion here so that the threshold
function compares the dot product, Y_i•V, against a threshold of 0. A particular
weight vector, V, then corresponds to a point in (n + 1)-dimensional weight
space. Now, for any pattern vector, Y_i, consider the locus of all points in
weight space corresponding to weight vectors yielding Y_i•V = 0. This locus is
a hyperplane passing through the origin of the (n + 1)-dimensional space. Each
pattern vector will have such a hyperplane corresponding to it. Weight points
in one of the half-spaces defined by this hyperplane will cause the corresponding
pattern to yield a dot product less than 0, and weight points in the other half-space
will cause the corresponding pattern to yield a dot product greater than
0.
We show a schematic representation of such a weight space in Fig. 4.5.
There are four pattern hyperplanes, 1, 2, 3, 4, corresponding to patterns Y_1,
Y_2, Y_3, Y_4, respectively, and we indicate by an arrow the half-space for each
in which weight vectors give dot products greater than 0. Suppose we wanted
weight values that would give positive responses for patterns Y_1, Y_3, and Y_4,
and a negative response for pattern Y_2. The weight point, V, indicated in the
figure is one such set of weight values.

[Figure 4.5 shows the four pattern hyperplanes, numbered 1 through 4, and a
weight point V.]
Figure 4.5: Weight Space
The question of whether or not there exists a weight vector that gives desired
responses for a given set of patterns can be given a geometric interpretation. To
do so involves reversing the “polarity” of those hyperplanes corresponding to
patterns for which a negative response is desired. If we do that for our example
above, we get the weight space diagram shown in Fig. 4.6.
[Figure 4.6 shows the same four hyperplanes with the polarity of hyperplane 2
reversed; boxed numbers give the number of errors made by weight vectors in
each region, and the solution region (0 errors) contains V.]
Figure 4.6: Solution Region in Weight Space
If a weight vector exists that correctly classifies a set of patterns, then the
half-spaces defined by the correct responses for these patterns will have a nonempty
intersection, called the solution region. The solution region will be a
“hyperwedge” region whose vertex is at the origin of weight space and whose
cross-section increases with increasing distance from the origin. This region
is shown shaded in Fig. 4.6. (The boxed numbers show, for later purposes,
the number of errors made by weight vectors in each of the regions.) The
fixed-increment error-correction procedure changes a weight vector by moving it
normal to any pattern hyperplane for which that weight vector gives an incorrect
response. Suppose in our example that we present the patterns in the sequence
Y_1, Y_2, Y_3, Y_4, and start the process with a weight point V_1, as shown in Fig.
4.7. Starting at V_1, we see that it gives an incorrect response for pattern Y_1, so
we move V_1 to V_2 in a direction normal to plane 1. (That is what adding Y_1 to
V_1 does.) V_2 gives an incorrect response for pattern Y_2, and so on. Ultimately,
the responses are only incorrect for planes bounding the solution region. Some
of the subsequent corrections may overshoot the solution region, but eventually
we work our way out far enough in the solution region that corrections (for
a fixed increment size) take us within it. The proofs for convergence of the
fixed-increment rule make this intuitive argument precise.
[Figure 4.7 traces the sequence of weight points V_1 through V_6 as corrections
move the weight vector across the pattern hyperplanes and into the solution
region.]
Figure 4.7: Moving Into the Solution Region
4.1.5 The Widrow-Hoff Procedure
The Widrow-Hoff procedure (also called the LMS or the delta procedure) attempts
to find weights that minimize a squared-error function between the pattern
labels and the dot product computed by a TLU. For this purpose, the
pattern labels are assumed to be either +1 or −1 (instead of 1 or 0). The
squared error for a pattern, X_i, with label d_i (for desired output) is:

ε_i = (d_i − ∑_{j=1}^{n+1} x_{ij} w_j)²

where x_{ij} is the jth component of X_i. The total squared error (over all patterns
in a training set, Ξ, containing m patterns) is then:

ε = ∑_{i=1}^{m} (d_i − ∑_{j=1}^{n+1} x_{ij} w_j)²
We want to choose the weights w_j to minimize this squared error. One way to
find such a set of weights is to start with an arbitrary weight vector and move it
along the negative gradient of ε as a function of the weights. Since ε is quadratic
in the w_j, we know that it has a global minimum, and thus this steepest descent
procedure is guaranteed to find the minimum. Each component of the gradient
is the partial derivative of ε with respect to one of the weights. One problem
with taking the partial derivative of ε is that ε depends on all the input vectors
in Ξ. Often, it is preferable to use an incremental procedure in which we try the
TLU on just one element, X_i, of Ξ at a time, compute the gradient of the single-pattern
squared error, ε_i, make the appropriate adjustment to the weights, and
then try another member of Ξ. Of course, the results of the incremental version
can only approximate those of the batch one, but the approximation is usually
quite effective. We will be describing the incremental version here.
The jth component of the gradient of the single-pattern error is:

∂ε_i/∂w_j = −2 (d_i − ∑_{j=1}^{n+1} x_{ij} w_j) x_{ij}

An adjustment in the direction of the negative gradient would then change each
weight as follows:

w_j ←− w_j + c_i (d_i − f_i) x_{ij}

where f_i = ∑_{j=1}^{n+1} x_{ij} w_j, and c_i governs the size of the adjustment. The entire
weight vector (in augmented, or V, notation) is thus adjusted according to the
following rule:

V ←− V + c_i (d_i − f_i) Y_i

where, as before, Y_i is the ith augmented pattern vector.
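A sketch of the incremental Widrow-Hoff procedure (numpy; note that f is the
raw dot product, not a thresholded response, and labels are +1/−1):

    import numpy as np

    def widrow_hoff(patterns, labels, c=0.1, epochs=50):
        """Incremental Widrow-Hoff (LMS) on augmented vectors."""
        v = np.zeros(len(patterns[0]) + 1)
        for _ in range(epochs):
            for x, d in zip(patterns, labels):
                y = np.append(x, 1.0)      # augmented pattern vector
                f = y @ v                  # the dot product itself
                v += c * (d - f) * y       # V <- V + c (d - f) Y
        return v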
The Widrow-Hoff procedure makes adjustments to the weight vector whenever
the dot product itself, Y_i•V, does not equal the specified desired target
value, d_i (which is either 1 or −1). The learning-rate factor, c_i, might decrease
with time toward 0 to achieve asymptotic convergence. The Widrow-Hoff
formula for changing the weight vector has the same form as the standard
fixed-increment error-correction formula. The only difference is that f_i is the
thresholded response of the TLU in the error-correction case while it is the dot
product itself for the Widrow-Hoff procedure.
Finding weight values that give the desired dot products corresponds to solving
a set of linear equalities, and the Widrow-Hoff procedure can be interpreted
as a descent procedure that attempts to minimize the mean-squared-error between
the actual and desired values of the dot product. (For more on Widrow-Hoff
and other related procedures, see [Duda & Hart, 1973, pp. 151ff].) [To be
added: examples of training curves for TLUs; performance on training set;
performance on test set; cumulative number of corrections.]
4.1.6 Training a TLU on Non-Linearly-Separable Training Sets
When the training set is not linearly separable (perhaps because of noise or
perhaps inherently), it may still be desired to find a “best” separating hyperplane.
Typically, the error-correction procedures will not do well on non-linearly-separable
training sets because they will continue to attempt to correct
inevitable errors, and the hyperplane will never settle into an acceptable place.
Several methods have been proposed to deal with this case. First, we might
use the Widrow-Hoff procedure, which (although it will not converge to zero
error on non-linearly-separable problems) will give us a weight vector that minimizes
the mean-squared-error. A mean-squared-error criterion often gives unsatisfactory
results, however, because it prefers many small errors to a few large
ones. As an alternative, error correction with a continuous decrease toward zero
of the value of the learning rate constant, c, will result in ever-decreasing changes
to the hyperplane. Duda [Duda, 1966] has suggested keeping track of the average
value of the weight vector during error correction and using this average to give a
separating hyperplane that performs reasonably well on non-linearly-separable
problems. Gallant [Gallant, 1986] proposed what he called the “pocket algorithm.”
As described in [Hertz, Krogh, & Palmer, 1991, p. 160]:
. . . the pocket algorithm . . . consists simply in storing (or “putting
in your pocket”) the set of weights which has had the longest unmodified
run of successes so far. The algorithm is stopped after some
chosen time t . . .
After stopping, the weights in the pocket are used as a set that should give a
small number of errors on the training set. Error-correction proceeds as usual
with the ordinary set of weights. (Also see methods proposed by [John, 1995]
and by [Marchand & Golea, 1993]. The latter is claimed to outperform the
pocket algorithm.)
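A sketch of the pocket algorithm under the description quoted above (numpy;
the random presentation order is our choice):

    import numpy as np

    def pocket(patterns, labels, c=1.0, steps=1000, seed=0):
        """Keep in the 'pocket' the weights with the longest unmodified run
        of successes; ordinary error correction continues throughout."""
        rng = np.random.default_rng(seed)
        ys = [np.append(x, 1.0) for x in patterns]
        v = np.zeros(len(ys[0]))
        pocket_v, run, best_run = v.copy(), 0, 0
        for _ in range(steps):
            i = rng.integers(len(ys))
            f = int(ys[i] @ v >= 0.0)
            if f == labels[i]:
                run += 1
                if run > best_run:                 # longest run so far
                    best_run, pocket_v = run, v.copy()
            else:
                v += c * (labels[i] - f) * ys[i]   # ordinary error correction
                run = 0
        return pocket_v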
4.2 Linear Machines
The natural generalization of a (two-category) TLU to an R-category classifier
is the structure, shown in Fig. 4.8, called a linear machine. Here, to use more
familiar notation, the Ws and X are meant to be augmented vectors (with an
(n + 1)st component). Such a structure is also sometimes called a “competitive”
net or a “winner-take-all” net. The output of the linear machine is one of
the numbers, {1, . . . , R}, corresponding to which dot product is largest. Note
that when R = 2, the linear machine reduces to a TLU with weight vector
W = (W_1 − W_2).
[Figure 4.8 shows a linear machine: the input X feeds R dot products,
W_1•X, . . . , W_R•X, and an ARGMAX unit outputs the index of the largest.]
Figure 4.8: A Linear Machine
The diagram in Fig. 4.9 shows the character of the regions in a 2-dimensional
space created by a linear machine for R = 5. In n dimensions, every pair of
regions is either separated by a section of a hyperplane or is nonadjacent.

[Figure 4.9 shows five regions, R_1 through R_5; in region R_4,
X•W_4 ≥ X•W_i for i ≠ 4.]
Figure 4.9: Regions For a Linear Machine
To train a linear machine, there is a straightforward generalization of the
2-category error-correction rule. Assemble the patterns in the training set into
a sequence as before.
a. If the machine classifies a pattern correctly, no change is made to any of
the weight vectors.
b. If the machine mistakenly classifies a category u pattern, X_i, in category
v (u ≠ v), then:

W_u ←− W_u + c_i X_i

and

W_v ←− W_v − c_i X_i

and all other weight vectors are not changed.
This correction increases the value of the uth dot product and decreases the
value of the vth dot product. Just as in the 2-category fixed-increment procedure,
this procedure is guaranteed to terminate, for constant c_i, if there exist
weight vectors that make correct separations of the training set. Note that when
R = 2, this procedure reduces to the ordinary TLU error-correction procedure.
A proof that this procedure terminates is given in [Nilsson, 1990, pp. 88-90]
and in [Duda & Hart, 1973, pp. 174-177].
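A sketch of a linear machine with this training rule (numpy; the class name is
ours):

    import numpy as np

    class LinearMachine:
        """R-category linear machine on augmented vectors."""
        def __init__(self, n, r):
            self.w = np.zeros((r, n + 1))

        def classify(self, x):
            y = np.append(x, 1.0)
            return int(np.argmax(self.w @ y))

        def train(self, patterns, cats, c=1.0, epochs=100):
            for _ in range(epochs):
                errors = 0
                for x, u in zip(patterns, cats):
                    v = self.classify(x)
                    if v != u:                     # category u called v
                        y = np.append(x, 1.0)
                        self.w[u] += c * y         # raise the uth dot product
                        self.w[v] -= c * y         # lower the vth dot product
                        errors += 1
                if errors == 0:
                    break
            return self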
4.3 Networks of TLUs
4.3.1 Motivation and Examples
Layered Networks
To classify correctly all of the patterns in non-linearly-separable training sets
requires separating surfaces more complex than hyperplanes. One way to achieve
more complex surfaces is with networks of TLUs. Consider, for example, the
2-dimensional, even parity function, f = x_1 x_2 + x̄_1 x̄_2. No single line through
the 2-dimensional square can separate the vertices (1, 1) and (0, 0) from the
vertices (1, 0) and (0, 1)—the function is not linearly separable and thus cannot
be implemented by a single TLU. But, the network of three TLUs shown in Fig.
4.10 does implement this function. In the figure, we show the weight values along
input lines to each TLU and the threshold value inside the circle representing
the TLU.

[Figure 4.10 shows a two-layer network of three TLUs implementing the even
parity function; weight values of ±1 are shown along the input lines, and
threshold values (1.5, 0.5, 0.5) inside the circles.]
Figure 4.10: A Network for the Even Parity Function

The function implemented by a network of TLUs depends on its topology
as well as on the weights of the individual TLUs. Feedforward networks have
no cycles; in a feedforward network no TLU's input depends (through zero
or more intermediate TLUs) on that TLU's output. (Networks that are not
feedforward are called recurrent networks.) If the TLUs of a feedforward network
are arranged in layers, with the elements of layer j receiving inputs only from
TLUs in layer j − 1, then we say that the network is a layered, feedforward
network. The network shown in Fig. 4.10 is a layered, feedforward network
having two layers (of weights). (Some people count the layers of TLUs and
include the inputs as a layer also; they would call this network a three-layer
network.) In general, a feedforward, layered network has the structure shown
in Fig. 4.11. All of the TLUs except the “output” units are called hidden units
(they are “hidden” from the output).
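Since the figure is hard to reproduce here, the following sketch gives one
consistent assignment of weights and thresholds for the Fig. 4.10 network (our
reconstruction: an and unit, a nor unit, and an or output unit):

    def tlu(x, w, theta):
        return int(sum(wi * xi for wi, xi in zip(w, x)) >= theta)

    def even_parity(x1, x2):
        h1 = tlu([x1, x2], [1, 1], 1.5)      # fires only on (1, 1)
        h2 = tlu([x1, x2], [-1, -1], -0.5)   # fires only on (0, 0)
        return tlu([h1, h2], [1, 1], 0.5)    # disjunction of the hidden units

    print([even_parity(a, b) for a in (0, 1) for b in (0, 1)])   # [1, 0, 0, 1]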
[Figure 4.11 shows a layered, feedforward network: the input X feeds a layer
of hidden units, whose outputs feed the output units.]
Figure 4.11: A Layered, Feedforward Network
Implementing DNF Functions by Two-Layer Networks
We have already defined k-term DNF functions—they are DNF functions having
k terms. A k-term DNF function can be implemented by a two-layer network
with k units in the hidden layer—to implement the k terms—and one output
unit to implement the disjunction of these terms. Since any Boolean function
has a DNF form, any Boolean function can be implemented by some two-layer
network of TLUs. As an example, consider the function f = x_1 x_2 + x_2 x_3 +
x_1 x_3. The form of the network that implements this function is shown in Fig.
4.12. (We leave it to the reader to calculate appropriate values of weights and
thresholds.) The 3-cube representation of the function is shown in Fig. 4.13.
The network of Fig. 4.12 can be designed so that each hidden unit implements
one of the planar boundaries shown in Fig. 4.13.
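A sketch of the two-layer construction for an arbitrary k-term DNF (Python;
the term representation is ours, using the weight/threshold recipe of Section
4.1.2):

    def dnf_network(terms, n):
        """Each term is a list of (index, sign) literals; sign 1 = positive,
        sign 0 = negated."""
        def term_tlu(term, x):
            s = sum((1 if sign else -1) * x[i] for i, sign in term)
            k_p = sum(sign for _, sign in term)
            return int(s >= k_p - 0.5)           # hidden unit: one term
        def f(x):
            h = [term_tlu(t, x) for t in terms]  # k hidden units
            return int(sum(h) >= 0.5)            # output unit: the disjunction
        return f

    # f = x1 x2 + x2 x3 + x1 x3 (0-based indices):
    f = dnf_network([[(0, 1), (1, 1)], [(1, 1), (2, 1)], [(0, 1), (2, 1)]], 3)
    print([f([a, b, c]) for a in (0, 1) for b in (0, 1) for c in (0, 1)])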
[Figure 4.12 shows a feedforward, 2-layer network: first-layer TLUs form
conjunctions of literals (the terms), and the output TLU forms the disjunction
of the terms.]
Figure 4.12: A Two-Layer Network
[Figure 4.13 shows the 3-cube representation of f = x_1 x_2 + x_2 x_3 + x_1 x_3,
with three separating planes, one per term.]
Figure 4.13: Three Planes Implemented by the Hidden Units
To train a two-layer network that implements a k-term DNF function, we
first note that the output unit implements a disjunction, so the weights in the
final layer are fixed. The weights in the first layer (except for the “threshold
weights”) can all have values of 1, −1, or 0. Later, we will present a training
procedure for this first layer of weights. [To be discussed: half-space
intersections, half-space unions, NP-hardness of optimal versions,
single-side-error-hyperplane methods, relation to “AQ” methods.]
Important Comment About Layered Networks
Adding additional layers cannot compensate for an inadequate first layer of
TLUs. The first layer of TLUs partitions the feature space so that no two
differently labeled vectors are in the same region (that is, so that no two such
vectors yield the same set of outputs of the first-layer units). If the first layer
does not partition the feature space in this way, then regardless of what
subsequent layers do, the final outputs will not be consistent with the labeled
training set. [To be added: diagrams showing the nonlinear transformation
performed by a layered network.]
4.3.2 Madalines
Two-Category Networks
An interesting example of a layered, feedforward network is the two-layer one
which has an odd number of hidden units, and a “vote-taking” TLU as the
output unit. Such a network was called a “Madaline” (for many adalines) by
Widrow. Typically, the response of the vote-taking unit is defined to be the
response of the majority of the hidden units, although other output logics are
possible. Ridgway [Ridgway, 1962] proposed the following error-correction rule
for adjusting the weights of the hidden units of a Madaline:
• If the Madaline correctly classifies a pattern, X_i, no corrections are made
to any of the hidden units' weight vectors,
• If the Madaline incorrectly classifies a pattern, X_i, then determine the
minimum number of hidden units whose responses need to be changed
(from 0 to 1 or from 1 to 0—depending on the type of error) in order that
the Madaline would correctly classify X_i. Suppose that minimum number
is k_i. Of those hidden units voting incorrectly, change the weight vectors
of those k_i of them whose dot products are closest to 0 by using the error-correction
rule:

W ←− W + c_i (d_i − f_i) X_i

where d_i is the desired response of the hidden unit (0 or 1) and f_i is the
actual response (0 or 1). (We assume augmented vectors here even though
we are using X, W notation.)
That is, we perform error-correction on just enough hidden units to correct
the vote to a majority voting correctly, and we change those that are easiest to
change. There are example problems in which even though a set of weight values
exists for a given Madaline structure such that it could classify all members of
a training set correctly, this procedure will fail to find them. Nevertheless, the
procedure works effectively in most experiments with it.
We leave it to the reader to think about how this training procedure could
be modified if the output TLU implemented an or function (or an and function).
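A sketch of one step of this rule (numpy; the way we count how many votes
must change is our reading of the rule above):

    import numpy as np

    def madaline_update(ws, x, label, c=0.1):
        """ws: hidden-unit weight rows (augmented); the output unit takes a
        majority vote of the hidden units."""
        y = np.append(x, 1.0)
        dots = ws @ y
        outs = (dots >= 0.0).astype(int)
        if int(2 * outs.sum() >= len(ws)) == label:
            return ws                               # correct vote: no change
        wrong = np.where(outs != label)[0]          # units voting incorrectly
        need = len(ws) // 2 + 1 - int((outs == label).sum())  # votes still needed
        flip = wrong[np.argsort(np.abs(dots[wrong]))[:need]]  # easiest to change
        for i in flip:
            ws[i] += c * (label - outs[i]) * y      # error-correct these units
        return ws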
R-Category Madalines and Error-Correcting Output Codes
If there are k hidden units (k > 1) in a two-layer network, their responses
correspond to vertices of a k-dimensional hypercube. The ordinary two-category
Madaline identifies two special points in this space, namely the vertex consisting
of k 1's and the vertex consisting of k 0's. The Madaline's response is 1 if the
point in “hidden-unit space” is closer to the all 1's vertex than it is to the all
0's vertex. We could design an R-category Madaline by identifying R vertices
in hidden-unit space and then classifying a pattern according to which of these
vertices the hidden-unit response is closest to. A machine using that idea was
implemented in the early 1960s at SRI [Brain, et al., 1962]. It used the fact
that the 2^p so-called maximal-length shift-register sequences [Peterson, 1961, pp.
147ff] in a (2^p − 1)-dimensional Boolean space are mutually equidistant (for any
integer p). For similar, more recent work see [Dietterich & Bakiri, 1991].
4.3.3 Piecewise Linear Machines
A two-category training set is linearly separable if there exists a threshold function
that correctly classifies all members of the training set. Similarly, we can
say that an R-category training set is linearly separable if there exists a linear
machine that correctly classifies all members of the training set. When an
R-category problem is not linearly separable, we need a more powerful classifier.
A candidate is a structure called a piecewise linear (PWL) machine illustrated
in Fig. 4.14.

[Figure 4.14 shows a PWL machine: the input X feeds R banks of weight
vectors (N_1 vectors in the first bank through N_R in the Rth); a MAX unit
over each bank feeds an ARGMAX unit that selects the category.]
Figure 4.14: A Piecewise Linear Machine
The PWL machine groups its weighted summing units into R banks corresponding
to the R categories. An input vector X is assigned to the category
corresponding to the bank with the largest weighted sum. We can use an error-correction
training algorithm similar to that used for a linear machine. If a
pattern is classified incorrectly, we subtract (a constant times) the pattern vector
from the weight vector producing the largest dot product (it was incorrectly
the largest) and add (a constant times) the pattern vector to that weight vector
in the correct bank of weight vectors whose dot product is locally largest in
that bank. (Again, we use augmented vectors here.) Unfortunately, there are
example training sets that are separable by a given PWL machine structure
but for which this error-correction training method fails to find a solution. The
method does appear to work well in some situations [Duda & Fossum, 1966],
although [Nilsson, 1965, page 89] observed that “it is probably not a very effective
method for training PWL machines having more than three [weight vectors] in
each bank.”
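A sketch of PWL classification and the correction step (numpy; representation
ours):

    import numpy as np

    def pwl_classify(banks, x):
        """banks: list of R arrays, each of shape (N_r, n + 1)."""
        y = np.append(x, 1.0)
        return int(np.argmax([np.max(b @ y) for b in banks]))

    def pwl_correct(banks, x, u, c=1.0):
        """One error-correction step for a category-u pattern."""
        y = np.append(x, 1.0)
        v = pwl_classify(banks, x)
        if v != u:
            banks[v][np.argmax(banks[v] @ y)] -= c * y   # wrongly largest
            banks[u][np.argmax(banks[u] @ y)] += c * y   # locally largest
        return banks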
4.3.4 Cascade Networks
Another interesting class of feedforward networks is that in which all of the TLUs
are ordered and each TLU receives inputs from all of the pattern components
and from all TLUs lower in the ordering. Such a network is called a cascade
network. An example is shown in Fig. 4.15 in which the TLUs are labeled by
the linearly separable functions (of their inputs) that they implement. Each
TLU in the network implements a set of 2^k parallel hyperplanes, where k is
the number of TLUs from which it receives inputs. (Each of the k preceding
TLUs can have an output of 1 or 0; that's 2^k different combinations—resulting
in 2^k different positions for the parallel hyperplanes.) We show a 3-dimensional
sketch for a network of two TLUs in Fig. 4.16. The reader might consider how
the n-dimensional parity function might be implemented by a cascade network
having log_2 n TLUs.
[Figure 4.15 shows a cascade network of three TLUs, L_1, L_2, L_3, each
receiving the input x and the outputs of the earlier TLUs; L_3 produces the
output.]
Figure 4.15: A Cascade Network

[Figure 4.16 shows, in three dimensions, the plane of L_1 and the two parallel
planes of L_2.]
Figure 4.16: Planes Implemented by a Cascade Network with Two TLUs
Cascade networks might be trained by first training L_1 to do as good a job
as possible at separating all the training patterns (perhaps by using the pocket
algorithm, for example), then training L_2 (including the weight from L_1 to L_2)
also to do as good a job as possible at separating all the training patterns,
and so on until the resulting network classifies the patterns in the training set
satisfactorily. [Also to be mentioned: the “cascade-correlation” method of
[Fahlman & Lebiere, 1990].]
4.4 Training Feedforward Networks by Backpropagation
4.4.1 Notation
The general problem of training a network of TLUs is difficult. Consider, for
example, the layered, feedforward network of Fig. 4.11. If such a network makes
an error on a pattern, there are usually several different ways in which the error
can be corrected. It is difficult to assign “blame” for the error to any particular
TLU in the network. Intuitively, one looks for weight-adjusting procedures that
move the network in the correct direction (relative to the error) by making
minimal changes. In this spirit, the Widrow-Hoff method of gradient descent
has been generalized to deal with multilayer networks.
In explaining this generalization, we use Fig. 4.17 to introduce some notation.
This network has only one output unit, but, of course, it is possible to have
several TLUs in the output layer—each implementing a different function. Each
of the layers of TLUs will have outputs that we take to be the components of
vectors, just as the input features are components of an input vector. The jth
layer of TLUs (1 ≤ j < k) will have as their outputs the vector X^{(j)}. The input
feature vector is denoted by X^{(0)}, and the final output (of the kth layer TLU)
is f. Each TLU in each layer has a weight vector (connecting it to its inputs)
and a threshold; the ith TLU in the jth layer has a weight vector denoted by
W_i^{(j)}. (We will assume that the “threshold weight” is the last component of
the associated weight vector; we might have used V notation instead to include
this threshold component, but we have chosen here to use the familiar X, W
notation, assuming that these vectors are “augmented” as appropriate.) We
denote the weighted sum input to the ith threshold unit in the jth layer by
s_i^{(j)}. (That is, s_i^{(j)} = X^{(j−1)}•W_i^{(j)}.) The number of TLUs in the jth layer is
given by m_j. The vector W_i^{(j)} has components w_{l,i}^{(j)} for l = 1, . . . , m_{j−1} + 1.
[Figure 4.17 shows the k-layer network: the input X^{(0)} feeds the first layer of
m_1 TLUs (weights W_i^{(1)}, sums s_i^{(1)}, outputs X^{(1)}), and so on through the
jth layer (m_j TLUs, weights W_i^{(j)} with components w_{li}^{(j)}) and the (k−1)th
layer, to the single kth-layer unit with weight vector W^{(k)}, sum s^{(k)}, and
output f.]
Figure 4.17: A k-layer Network
4.4.2 The Backpropagation Method
A gradient descent method, similar to that used in the Widrow-Hoff method,
has been proposed by various authors for training a multilayer, feedforward
network. As before, we define an error function on the final output of the
network and we adjust each weight in the network so as to minimize the error.
If we have a desired response, d_i, for the ith input vector, X_i, in the training
set, Ξ, we can compute the squared error over the entire training set to be:

ε = ∑_{X_i ∈ Ξ} (d_i − f_i)²

where f_i is the actual response of the network for input X_i. To do gradient
descent on this squared error, we adjust each weight in the network by an
amount proportional to the negative of the partial derivative of ε with respect
to that weight. Again, we use a single-pattern error function so that we can
use an incremental weight adjustment procedure. The squared error for a single
input vector, X, evoking an output of f when the desired output is d is:

ε = (d − f)²
It is convenient to take the partial derivatives of ε with respect to the various
weights in groups corresponding to the weight vectors. We define a partial
derivative of a quantity φ, say, with respect to a weight vector, W_i^{(j)}, thus:

∂φ/∂W_i^{(j)} =def (∂φ/∂w_{1i}^{(j)}, . . . , ∂φ/∂w_{li}^{(j)}, . . . , ∂φ/∂w_{m_{j−1}+1,i}^{(j)})

where w_{li}^{(j)} is the lth component of W_i^{(j)}. This vector partial derivative of φ is
called the gradient of φ with respect to W and is sometimes denoted by ∇_W φ.
Since ε's dependence on W_i^{(j)} is entirely through s_i^{(j)}, we can use the chain
rule to write:

∂ε/∂W_i^{(j)} = (∂ε/∂s_i^{(j)}) (∂s_i^{(j)}/∂W_i^{(j)})

Because s_i^{(j)} = X^{(j−1)}•W_i^{(j)}, we have ∂s_i^{(j)}/∂W_i^{(j)} = X^{(j−1)}. Substituting
yields:

∂ε/∂W_i^{(j)} = (∂ε/∂s_i^{(j)}) X^{(j−1)}

Note that ∂ε/∂s_i^{(j)} = −2(d − f) ∂f/∂s_i^{(j)}. Thus:

∂ε/∂W_i^{(j)} = −2(d − f) (∂f/∂s_i^{(j)}) X^{(j−1)}
The quantity (d − f) ∂f/∂s_i^{(j)} plays an important role in our calculations; we shall
denote it by δ_i^{(j)}. Each of the δ_i^{(j)}'s tells us how sensitive the squared error of
the network output is to changes in the input to each threshold function. Since
we will be changing weight vectors in directions along their negative gradient,
our fundamental rule for weight changes throughout the network will be:

W_i^{(j)} ← W_i^{(j)} + c_i^{(j)} δ_i^{(j)} X^{(j−1)}

where c_i^{(j)} is the learning rate constant for this weight vector. (Usually, the
learning rate constants for all weight vectors in the network are the same.) We
see that this rule is quite similar to that used in the error correction procedure
for a single TLU. A weight vector is changed by the addition of a constant times
its vector of (unweighted) inputs.
Now, we must turn our attention to the calculation of the δ_i^{(j)}'s. Using the
definition, we have:

δ_i^{(j)} = (d − f) ∂f/∂s_i^{(j)}

We have a problem, however, in attempting to carry out the partial derivatives
of f with respect to the s's. The network output, f, is not continuously
differentiable with respect to the s's because of the presence of the threshold
functions. Most small changes in these sums do not change f at all, and when
f does change, it changes abruptly from 1 to 0 or vice versa.
A way around this difficulty was proposed by Werbos [Werbos, 1974] and
(perhaps independently) pursued by several other researchers, for example
[Rumelhart, Hinton, & Williams, 1986]. The trick involves replacing all the
threshold functions by differentiable functions called sigmoids.¹ The output
of a sigmoid function, superimposed on that of a threshold function, is shown
in Fig. 4.18. Usually, the sigmoid function used is f(s) = 1/(1 + e^{−s}), where s is
the input and f is the output.
[Figure 4.18 plots the sigmoid f(s) = 1/(1 + e^{−s}) superimposed on a
threshold function.]
Figure 4.18: A Sigmoid Function

¹[Russell & Norvig 1995, page 595] attributes the use of this idea to [Bryson & Ho 1969].
We show the network containing sigmoid units in place of TLUs in Fig. 4.19.
The output of the ith sigmoid unit in the jth layer is denoted by f_i^{(j)}. (That
is, f_i^{(j)} = 1/(1 + e^{−s_i^{(j)}}).)
[Figure 4.19 shows the same k-layer network with sigmoid units in place of
TLUs: each unit in the jth layer has weights W_i^{(j)}, sum s_i^{(j)}, and sigmoid
output f_i^{(j)}; the final unit produces f^{(k)} from s^{(k)}.]
Figure 4.19: A Network with Sigmoid Units
4.4.3 Computing Weight Changes in the Final Layer
We first calculate δ^{(k)} in order to compute the weight change for the final sigmoid
unit:

δ^{(k)} = (d − f^{(k)}) ∂f^{(k)}/∂s^{(k)}

Given the sigmoid function that we are using, namely f(s) = 1/(1 + e^{−s}), we have
that ∂f/∂s = f(1 − f). Substituting gives us:

δ^{(k)} = (d − f^{(k)}) f^{(k)} (1 − f^{(k)})

Rewriting our general rule for weight vector changes, the weight vector in
the final layer is changed according to the rule:

W^{(k)} ← W^{(k)} + c^{(k)} δ^{(k)} X^{(k−1)}

where δ^{(k)} = (d − f^{(k)}) f^{(k)} (1 − f^{(k)})
It is interesting to compare backpropagation to the error-correction rule and
to the Widrow-Hoff rule. The backpropagation weight adjustment for the single
element in the final layer can be written as:

W ←− W + c (d − f) f(1 − f) X

Written in the same format, the error-correction rule is:

W ←− W + c (d − f) X

and the Widrow-Hoff rule is:

W ←− W + c (d − f) X

The only difference (except for the fact that f is not thresholded in Widrow-Hoff)
is the f(1 − f) term due to the presence of the sigmoid function. With
the sigmoid function, f(1 − f) can vary in value from 0 to 1/4. When f is 0,
f(1 − f) is also 0; when f is 1, f(1 − f) is 0; f(1 − f) obtains its maximum
value of 1/4 when f is 1/2 (that is, when the input to the sigmoid is 0). The
sigmoid function can be thought of as implementing a “fuzzy” hyperplane. For
a pattern far away from this fuzzy hyperplane, f(1 − f) has value close to 0,
and the backpropagation rule makes little or no change to the weight values
regardless of the desired output. (Small changes in the weights will have little
effect on the output for inputs far from the hyperplane.) Weight changes are
only made within the region of “fuzz” surrounding the hyperplane, and these
changes are in the direction of correcting the error, just as in the error-correction
and Widrow-Hoff rules.
4.4.4 Computing Changes to the Weights in Intermediate Layers
Using our expression for the δ's, we can similarly compute how to change each
of the weight vectors in the network. Recall:

δ_i^{(j)} = (d − f) ∂f/∂s_i^{(j)}

Again we use a chain rule. The final output, f, depends on s_i^{(j)} through
each of the summed inputs to the sigmoids in the (j + 1)th layer. So:

δ_i^{(j)} = (d − f) ∂f/∂s_i^{(j)}
  = (d − f) [ (∂f/∂s_1^{(j+1)}) (∂s_1^{(j+1)}/∂s_i^{(j)}) + · · · + (∂f/∂s_l^{(j+1)}) (∂s_l^{(j+1)}/∂s_i^{(j)}) + · · · + (∂f/∂s_{m_{j+1}}^{(j+1)}) (∂s_{m_{j+1}}^{(j+1)}/∂s_i^{(j)}) ]
  = ∑_{l=1}^{m_{j+1}} (d − f) (∂f/∂s_l^{(j+1)}) (∂s_l^{(j+1)}/∂s_i^{(j)})
  = ∑_{l=1}^{m_{j+1}} δ_l^{(j+1)} (∂s_l^{(j+1)}/∂s_i^{(j)})
It remains to compute the ∂s_l^{(j+1)}/∂s_i^{(j)}'s. To do that we first write:

s_l^{(j+1)} = X^{(j)}•W_l^{(j+1)} = ∑_{ν=1}^{m_j+1} f_ν^{(j)} w_{νl}^{(j+1)}

And then, since the weights do not depend on the s's:

∂s_l^{(j+1)}/∂s_i^{(j)} = ∂[∑_{ν=1}^{m_j+1} f_ν^{(j)} w_{νl}^{(j+1)}]/∂s_i^{(j)} = ∑_{ν=1}^{m_j+1} w_{νl}^{(j+1)} (∂f_ν^{(j)}/∂s_i^{(j)})
Now, we note that ∂f_ν^{(j)}/∂s_i^{(j)} = 0 unless ν = i, in which case
∂f_ν^{(j)}/∂s_ν^{(j)} = f_ν^{(j)} (1 − f_ν^{(j)}). Therefore:

∂s_l^{(j+1)}/∂s_i^{(j)} = w_{il}^{(j+1)} f_i^{(j)} (1 − f_i^{(j)})
We use this result in our expression for δ_i^{(j)} to give:

δ_i^{(j)} = f_i^{(j)} (1 − f_i^{(j)}) ∑_{l=1}^{m_{j+1}} δ_l^{(j+1)} w_{il}^{(j+1)}

The above equation is recursive in the δ's. (It is interesting to note that
this expression is independent of the error function; the error function explicitly
affects only the computation of δ^{(k)}.) Having computed the δ_i^{(j+1)}'s for layer
j + 1, we can use this equation to compute the δ_i^{(j)}'s. The base case is δ^{(k)},
which we have already computed:

δ^{(k)} = (d − f^{(k)}) f^{(k)} (1 − f^{(k)})

We use this expression for the δ's in our generic weight changing rule, namely:

W_i^{(j)} ← W_i^{(j)} + c_i^{(j)} δ_i^{(j)} X^{(j−1)}
Although this rule appears complex, it has an intuitively reasonable explanation.
The quantity δ^(k) = (d − f) f(1 − f) controls the overall amount and sign of all
weight adjustments in the network. (Adjustments diminish as the final output,
f, approaches either 0 or 1, because they have vanishing effect on f then.) As
the recursion equation for the δ's shows, the adjustments for the weights going
in to a sigmoid unit in the j-th layer are proportional to the effect that such
adjustments have on that sigmoid unit's output (its f^(j) (1 − f^(j)) factor). They
are also proportional to a kind of "average" effect that any change in the output
of that sigmoid unit will have on the final output. This average effect depends
on the weights going out of the sigmoid unit in the j-th layer (small weights
produce little downstream effect) and the effects that changes in the outputs of
(j + 1)-th layer sigmoid units will have on the final output (as measured by the
δ^(j+1)'s). These calculations can be simply implemented by "backpropagating"
the δ's through the weights in the reverse direction (thus, the name backprop for
this algorithm).
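The recursion and the weight-changing rule translate directly into code. Here is
a minimal sketch in Python (not from the text; the representation of the network
as a list of NumPy weight matrices, and all names, are illustrative assumptions)
of one backprop update for a feedforward network of sigmoid units with a single
final output:

    import numpy as np

    def sigmoid(s):
        return 1.0 / (1.0 + np.exp(-s))

    def backprop_step(weights, x, d, c=0.5):
        """One backprop update, modifying the weight matrices in place.
        weights[j] has one row per (augmented) input of its layer and
        one column per sigmoid unit; d is the desired final output."""
        # Forward pass; augmenting each layer's output with a constant 1
        # makes the last row of each weight matrix act as a threshold weight.
        outputs, a = [], np.append(x, 1.0)
        inputs = [a]
        for W in weights:
            f = sigmoid(a @ W)
            outputs.append(f)
            a = np.append(f, 1.0)
            inputs.append(a)
        # Base case: delta^(k) = (d - f^(k)) f^(k) (1 - f^(k))
        f_k = outputs[-1]
        delta = (d - f_k) * f_k * (1.0 - f_k)
        # Backward pass: delta^(j) = f^(j)(1 - f^(j)) sum_l delta_l^(j+1) w_il^(j+1)
        for j in reversed(range(len(weights))):
            W = weights[j]
            grad = np.outer(inputs[j], delta)   # delta^(j) times X^(j-1)
            if j > 0:
                f_prev = outputs[j - 1]
                # drop the threshold row when propagating the deltas backward
                delta = f_prev * (1.0 - f_prev) * (W[:-1, :] @ delta)
            W += c * grad                       # W^(j) <- W^(j) + c delta X^(j-1)
        return outputs[-1]

Note that each layer's δ's are computed from the not-yet-updated weights of the
layer above, exactly as in the derivation.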
4.4.5 Variations on Backprop
[To be written: problem of local minima, simulated annealing, momentum
(Plaut, et al., 1986, see [Hertz, Krogh, & Palmer, 1991]), quickprop,
regularization methods]
Simulated Annealing
To apply simulated annealing, the value of the learning rate constant is gradually
decreased with time. If we fall early into an error-function valley that is not
very deep (a local minimum), it typically will not be very broad either, and soon
a subsequent large correction will jostle us out of it. It is less likely that we will
move out of deep valleys, and at the end of the process (with very small values
of the learning rate constant), we descend to the deepest point of the valley we
are in. The process gets its name by analogy with annealing in metallurgy, in
which a material's temperature is gradually decreased, allowing its crystalline
structure to reach a minimal energy state.
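As a concrete illustration (not from the text; the schedule and the constants are
assumptions), an annealed learning rate can be as simple as a decaying sequence
substituted for the constant c in the weight-changing rule:

    def annealed_rate(t, c0=1.0, tau=100.0):
        """Learning rate that 'cools' as training proceeds (step t)."""
        return c0 / (1.0 + t / tau)

    # e.g., pass annealed_rate(t) as c to backprop_step at step t

Early, large values of c permit escapes from shallow local minima; later, small
values let the weights settle.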
4.4.6 An Application: Steering a Van
A neural network system called ALVINN (Autonomous Land Vehicle in a Neural
Network) has been trained to steer a Chevy van successfully on ordinary roads
and highways at speeds of 55 mph [Pomerleau, 1991, Pomerleau, 1993]. The
input to the network is derived from a low-resolution (30 x 32) television image.
The TV camera is mounted on the van and looks at the road straight ahead.
This image is sampled and produces a stream of 960-dimensional input vectors
to the neural network. The network is shown in Fig. 4.20.
[Figure 4.20: The ALVINN Network. A 30 x 32 retina supplies 960 inputs; 5
hidden units are connected to all 960 inputs; 30 output units, ranging from
sharp left through straight ahead to sharp right, are connected to all hidden
units, and the centroid of the outputs steers the vehicle.]
The network has five hidden units in its first layer and 30 output units in the
second layer; all are sigmoid units. The output units are arranged in a linear
order and control the van's steering angle. If a unit near the top of the array
of output units has a higher output than most of the other units, the van is
steered to the left; if a unit near the bottom of the array has a high output, the
van is steered to the right. The "centroid" of the responses of all of the output
units is computed, and the van’s steering angle is set at a corresponding value
between hard left and hard right.
The system is trained by a modified on-line training regime. A driver drives
the van, and his actual steering angles are taken as the correct labels for the
corresponding inputs. The network is trained incrementally by backprop to
produce the driver-specified steering angles in response to each visual pattern
as it occurs in real time while driving.

This simple procedure has been augmented to avoid two potential problems.
First, since the driver is usually driving well, the network would never get any
experience with far-from-center vehicle positions and/or incorrect vehicle
orientations. Also, on long, straight stretches of road, the network would be trained
for a long time only to produce straight-ahead steering angles; this training
would swamp out earlier training to follow a curved road. We wouldn't want
to try to avoid these problems by instructing the driver to drive erratically
occasionally, because the system would learn to mimic this erratic behavior.
Instead, each original image is shifted and rotated in software to create 14
additional images in which the vehicle appears to be situated differently relative
to the road. Using a model that tells the system what steering angle ought to
be used for each of these shifted images, given the driver-specified steering angle
for the original image, the system constructs an additional 14 labeled training
patterns to add to those encountered during ordinary driver training.
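A minimal sketch of this kind of augmentation follows (this is not ALVINN's
actual implementation: the pure horizontal shift, the shift amounts, and the
hypothetical steering-correction function corrected_angle are all assumptions):

    import numpy as np

    def augment(image, angle, corrected_angle, shifts=range(-7, 8)):
        """Create shifted copies of a 30 x 32 road image, each relabeled
        with the steering angle appropriate for the shifted view."""
        extra = []
        for s in shifts:
            if s == 0:
                continue  # the original image is already in the training set
            shifted = np.roll(image, s, axis=1)       # crude sideways shift
            extra.append((shifted, corrected_angle(angle, s)))
        return extra                                  # 14 additional patterns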
4.5 Synergies Between Neural Network and Knowledge-Based Methods
To be written; discuss rule-generating procedures (such as
[Towell & Shavlik, 1992]) and how expert-provided rules can aid neural net
training and vice versa [Towell, Shavlik, & Noordweier, 1990].
4.6 Bibliographical and Historical Remarks
To be added.
Chapter 5
Statistical Learning
5.1 Using Statistical Decision Theory
5.1.1 Background and General Method
Suppose the pattern vector, X, is a random variable whose probability
distribution for category 1 is different than it is for category 2. (The treatment given
here can easily be generalized to R-category problems.) Specifically, suppose we
have the two probability distributions (perhaps probability density functions),
p(X | 1) and p(X | 2). Given a pattern, X, we want to use statistical techniques
to determine its category; that is, to determine from which distribution
it was drawn. These techniques are based on the idea of minimizing the
expected value of a quantity similar to the error function we used in deriving the
weight-changing rules for backprop.
In developing a decision method, it is necessary to know the relative
seriousness of the two kinds of mistakes that might be made. (We might decide that a
pattern really in category 1 is in category 2, and vice versa.) We describe this
information by a loss function, λ(i | j), for i, j = 1, 2. λ(i | j) represents the loss
incurred when we decide a pattern is in category i when really it is in category
j. We assume here that λ(1 | 1) and λ(2 | 2) are both 0. For any given pattern,
X, we want to decide its category in such a way that minimizes the expected
value of this loss.
Given a pattern, X, if we decide category i, the expected value of the loss
will be:

L_X(i) = λ(i | 1) p(1 | X) + λ(i | 2) p(2 | X)

where p(j | X) is the probability that, given a pattern X, its category is j. Our
decision rule will be to decide that X belongs to category 1 if L_X(1) ≤ L_X(2),
and to decide on category 2 otherwise.
We can use Bayes' Rule to get expressions for p(j | X) in terms of p(X | j),
which we assume to be known (or estimable):

p(j | X) = p(X | j) p(j) / p(X)

where p(j) is the (a priori) probability of category j (one category may be much
more probable than the other), and p(X) is the (a priori) probability of pattern
X being the pattern we are asked to classify. Performing the substitutions given
by Bayes' Rule, our decision rule becomes:
Decide category 1 iff:

λ(1 | 1) p(X | 1) p(1)/p(X) + λ(1 | 2) p(X | 2) p(2)/p(X) ≤ λ(2 | 1) p(X | 1) p(1)/p(X) + λ(2 | 2) p(X | 2) p(2)/p(X)
Using the fact that λ(i | i) = 0, and noticing that p(X) is common to both
expressions, we obtain:

Decide category 1 iff:

λ(1 | 2) p(X | 2) p(2) ≤ λ(2 | 1) p(X | 1) p(1)

If λ(1 | 2) = λ(2 | 1) and if p(1) = p(2), then the decision becomes particularly
simple:

Decide category 1 iff:

p(X | 2) ≤ p(X | 1)

Since p(X | j) is called the likelihood of j with respect to X, this simple decision
rule implements what is called a maximum-likelihood decision. More generally,
if we define k(i | j) as λ(i | j) p(j), then our decision rule is simply:

Decide category 1 iff:

k(1 | 2) p(X | 2) ≤ k(2 | 1) p(X | 1)

In any case, we need to compare the (perhaps weighted) quantities p(X | i) for
i = 1 and 2. The exact decision rule depends on the probability distributions
assumed. We will treat two interesting distributions.
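A minimal sketch of the general method (the names and the dictionary-based
representation are assumptions, not from the text):

    def decide(p_x_given, prior, loss, x):
        """Minimum-expected-loss decision between categories 1 and 2.
        p_x_given[i](x) returns p(x | i); prior[i] is p(i);
        loss[(i, j)] is lambda(i | j), with loss[(i, i)] == 0."""
        # Decide category 1 iff lambda(1|2) p(x|2) p(2) <= lambda(2|1) p(x|1) p(1)
        lhs = loss[(1, 2)] * p_x_given[2](x) * prior[2]
        rhs = loss[(2, 1)] * p_x_given[1](x) * prior[1]
        return 1 if lhs <= rhs else 2

With equal losses and equal priors this reduces to the maximum-likelihood
decision above.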
5.1.2 Gaussian (or Normal) Distributions
The multivariate (n-dimensional) Gaussian distribution is given by the
probability density function:

p(X) = (1 / ((2π)^(n/2) |Σ|^(1/2))) e^(−(X−M)^t Σ^(−1) (X−M)/2)
where n is the dimension of the column vector X, the column vector M is called
the mean vector, (X − M)^t is the transpose of the vector (X − M), Σ is the
covariance matrix of the distribution (an n × n symmetric, positive definite
matrix), Σ^(−1) is the inverse of the covariance matrix, and |Σ| is the determinant
of the covariance matrix.
The mean vector, M, with components (m_1, ..., m_n), is the expected value
of X (using this distribution); that is, M = E[X]. The components of the
covariance matrix are given by:

σ²_ij = E[(x_i − m_i)(x_j − m_j)]

In particular, σ²_ii is called the variance of x_i.
Although the formula appears complex, an intuitive idea for Gaussian
distributions can be given when n = 2. We show a two-dimensional Gaussian
distribution in Fig. 5.1. A three-dimensional plot of the distribution is shown
at the top of the figure, and contours of equal probability are shown at the
bottom. In this case, the covariance matrix, Σ, is such that the elliptical contours
of equal probability are skewed. If the covariance matrix were diagonal, that is,
if all off-diagonal terms were 0, then the major axes of the elliptical contours
would be aligned with the coordinate axes. In general the principal axes are
given by the eigenvectors of Σ. In any case, the equiprobability contours are
all centered on the mean vector, M, which in our figure happens to be at the
origin. In general, the formula in the exponent of the Gaussian distribution
is a positive definite quadratic form (that is, its value is always positive); thus
the equiprobability contours are hyperellipsoids in n-dimensional space.
Suppose we now assume that the two classes of pattern vectors that we
want to distinguish are each distributed according to a Gaussian distribution
but with different means and covariance matrices. That is, one class tends to
have patterns clustered around one point in the n-dimensional space, and the
other class tends to have patterns clustered around another point. We show a
two-dimensional instance of this problem in Fig. 5.2. (In that figure, we have
plotted the sum of the two distributions.) What decision rule should we use to
separate patterns into the two appropriate categories?

Substituting the Gaussian distributions into our maximum-likelihood
formula yields:
[Figure 5.1: The Two-Dimensional Gaussian Distribution]
Decide category 1 iff:

(1 / ((2π)^(n/2) |Σ_2|^(1/2))) e^(−(1/2)(X−M_2)^t Σ_2^(−1) (X−M_2))

is less than or equal to

(1 / ((2π)^(n/2) |Σ_1|^(1/2))) e^(−(1/2)(X−M_1)^t Σ_1^(−1) (X−M_1))

where the category 1 patterns are distributed with mean and covariance M_1
and Σ_1, respectively, and the category 2 patterns are distributed with mean
and covariance M_2 and Σ_2.
The result of the comparison isn’t changed if we compare logarithms instead.
After some manipulation, our decision rule is then:
[Figure 5.2: The Sum of Two Gaussian Distributions]
Decide category 1 iff:

(X − M_1)^t Σ_1^(−1) (X − M_1) < (X − M_2)^t Σ_2^(−1) (X − M_2) + B

where B, a constant bias term, incorporates the logarithms of the fractions
preceding the exponentials, etc.
When the quadratic forms are multiplied out and represented in terms of
the components x_i, the decision rule involves a quadric surface (a hyperquadric)
in n-dimensional space. The exact shape and position of this hyperquadric is
determined by the means and the covariance matrices. The surface separates
the space into two parts, one of which contains points that will be assigned to
category 1 and the other contains points that will be assigned to category 2.
It is interesting to look at a special case of this surface. If the covariance
matrices for each category are identical and diagonal, with all σ_ii equal to each
other (say, with common value σ²), then the contours of equal probability for
each of the two distributions are hyperspherical. The quadric forms then become
(1/σ²)(X − M_i)^t (X − M_i), and the decision rule is:

Decide category 1 iff:

(X − M_1)^t (X − M_1) < (X − M_2)^t (X − M_2)
Multiplying out yields:

X•X − 2X•M_1 + M_1•M_1 < X•X − 2X•M_2 + M_2•M_2

or finally,

Decide category 1 iff:

X•M_1 ≥ X•M_2 + Constant

or

X•(M_1 − M_2) ≥ Constant

where the constant depends on the lengths of the mean vectors.
We see that the optimal decision surface in this special case is a hyperplane.
In fact, the hyperplane is perpendicular to the line joining the two means. The
weights in a TLU implementation are equal to the difference in the mean vectors.
If the parameters (M_i, Σ_i) of the probability distributions of the categories
are not known, there are various techniques for estimating them and then using
those estimates in the decision rule. For example, if there are sufficient training
patterns, one can use sample means and sample covariance matrices. (Caution:
the sample covariance matrix will be singular if the training patterns happen to
lie on a subspace of the whole n-dimensional space, as they certainly will, for
example, if the number of training patterns is less than n.)
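A minimal sketch of this estimate-then-classify approach (names are
assumptions; the small ridge term added to each sample covariance is one simple
guard against the singularity just mentioned):

    import numpy as np

    def fit_gaussian(patterns):
        """Sample mean and (slightly regularized) sample covariance."""
        M = patterns.mean(axis=0)
        Sigma = np.cov(patterns, rowvar=False) + 1e-6 * np.eye(patterns.shape[1])
        return M, Sigma

    def decide_gaussian(x, M1, S1, M2, S2):
        """Decide category 1 iff its quadratic form plus log-determinant
        bias is the smaller (equal priors assumed)."""
        def score(M, S):
            dvec = x - M
            return dvec @ np.linalg.inv(S) @ dvec + np.log(np.linalg.det(S))
        return 1 if score(M1, S1) <= score(M2, S2) else 2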
5.1.3 Conditionally Independent Binary Components
Suppose the vector X is a random variable having binary (0,1) components.
We continue to denote the two probability distributions by p(X | 1) and p(X | 2).
Further suppose that the components of these vectors are conditionally
independent given the category. By conditional independence in this case, we
mean that the formulas for the distribution can be expanded as follows:

p(X | i) = p(x_1 | i) p(x_2 | i) · · · p(x_n | i)

for i = 1, 2
Recall the minimum-average-loss decision rule:

Decide category 1 iff:

λ(1 | 2) p(X | 2) p(2) ≤ λ(2 | 1) p(X | 1) p(1)

Assuming conditional independence of the components and that λ(1 | 2) = λ(2 | 1),
we obtain:

Decide category 1 iff:

p(1) p(x_1 | 1) p(x_2 | 1) · · · p(x_n | 1) ≥ p(x_1 | 2) p(x_2 | 2) · · · p(x_n | 2) p(2)
or iff:

[p(x_1 | 1) p(x_2 | 1) · · · p(x_n | 1)] / [p(x_1 | 2) p(x_2 | 2) · · · p(x_n | 2)] ≥ p(2)/p(1)

or iff:

log[p(x_1 | 1)/p(x_1 | 2)] + log[p(x_2 | 1)/p(x_2 | 2)] + · · · + log[p(x_n | 1)/p(x_n | 2)] + log[p(1)/p(2)] ≥ 0
Let us define values of the components of the distribution for specific values of
their arguments, x_i:

p(x_i = 1 | 1) = p_i,    p(x_i = 0 | 1) = 1 − p_i
p(x_i = 1 | 2) = q_i,    p(x_i = 0 | 2) = 1 − q_i
Now, we note that since x_i can only assume the values of 1 or 0:

log[p(x_i | 1)/p(x_i | 2)] = x_i log(p_i/q_i) + (1 − x_i) log[(1 − p_i)/(1 − q_i)]
                           = x_i log[p_i(1 − q_i)/(q_i(1 − p_i))] + log[(1 − p_i)/(1 − q_i)]
Substituting these expressions into our decision rule yields:

Decide category 1 iff:

Σ_{i=1}^{n} x_i log[p_i(1 − q_i)/(q_i(1 − p_i))] + Σ_{i=1}^{n} log[(1 − p_i)/(1 − q_i)] + log[p(1)/p(2)] ≥ 0
We see that we can achieve this decision with a TLU with weight values as
follows:

w_i = log[p_i(1 − q_i)/(q_i(1 − p_i))]

for i = 1, ..., n, and

w_{n+1} = log[p(1)/(1 − p(1))] + Σ_{i=1}^{n} log[(1 − p_i)/(1 − q_i)]
If we do not know the p_i, q_i, and p(1), we can use a sample of labeled training
patterns to estimate these parameters.
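A minimal sketch of estimating these parameters from labeled samples and
forming the TLU weights (names are assumptions; the add-one smoothing of the
counts is one simple way to avoid taking the log of 0):

    import numpy as np

    def tlu_from_samples(X1, X2):
        """TLU weights for conditionally independent binary components.
        X1, X2: arrays of binary patterns from categories 1 and 2.
        Decide category 1 iff w @ x + w_threshold >= 0."""
        m1, m2 = len(X1), len(X2)
        prior1 = m1 / (m1 + m2)                      # estimate of p(1)
        p = (X1.sum(axis=0) + 1.0) / (m1 + 2.0)      # estimates of the p_i
        q = (X2.sum(axis=0) + 1.0) / (m2 + 2.0)      # estimates of the q_i
        w = np.log(p * (1 - q) / (q * (1 - p)))
        w_threshold = np.log(prior1 / (1 - prior1)) + np.log((1 - p) / (1 - q)).sum()
        return w, w_threshold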
5.2 Learning Belief Networks
To be added.
5.3 Nearest-Neighbor Methods

Another class of methods can be related to the statistical ones. These are called
nearest-neighbor methods or, sometimes, memory-based methods. (A collection
of papers on this subject is in [Dasarathy, 1991].) Given a training set Ξ of m
labeled patterns, a nearest-neighbor procedure decides that some new pattern,
X, belongs to the same category as do its closest neighbors in Ξ. More precisely,
a k-nearest-neighbor method assigns a new pattern, X, to that category to which
the plurality of its k closest neighbors belong. Using relatively large values of
k decreases the chance that the decision will be unduly influenced by a noisy
training pattern close to X. But large values of k also reduce the acuity of the
method. The k-nearest-neighbor method can be thought of as estimating the
values of the probabilities of the classes given X. Of course the denser are the
points around X, and the larger the value of k, the better the estimate.
The distance metric used in nearest-neighbor methods (for numerical
attributes) can be simple Euclidean distance. That is, the distance between two
patterns (x_11, x_12, ..., x_1n) and (x_21, x_22, ..., x_2n) is √(Σ_{j=1}^{n} (x_1j − x_2j)²). This
distance measure is often modified by scaling the features so that the spread of
attribute values along each dimension is approximately the same. In that case,
the distance between the two vectors would be √(Σ_{j=1}^{n} a_j² (x_1j − x_2j)²), where
a_j is the scale factor for dimension j.
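A minimal sketch of a k-nearest-neighbor classifier using this scaled Euclidean
distance (names and the use of NumPy are assumptions):

    import numpy as np
    from collections import Counter

    def knn_classify(X, train_patterns, train_labels, k=8, scale=None):
        """Assign X to the category of the plurality of its k nearest neighbors."""
        a = np.ones(train_patterns.shape[1]) if scale is None else scale
        dists = np.sqrt((((train_patterns - X) * a) ** 2).sum(axis=1))
        nearest = np.argsort(dists)[:k]
        votes = Counter(train_labels[i] for i in nearest)
        return votes.most_common(1)[0][0]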
An example of a nearest-neighbor decision problem is shown in Fig. 5.3. In
the figure the class of each training pattern is indicated by the number next to it.

[Figure 5.3: An 8-Nearest-Neighbor Decision. With k = 8, the eight training
patterns nearest the pattern X to be classified comprise four patterns of
category 1, two of category 2, and two of category 3; the plurality are in category 1,
so X is decided to be in category 1.]
(See [Baum, 1994] for a theoretical analysis of error rate as a function of the
number of training patterns for the case in which points are randomly
distributed on the surface of a unit sphere and the underlying function is linearly
separable.)
Nearest-neighbor methods are memory intensive because a large number of
training patterns must be stored to achieve good generalization. Since memory
cost is now reasonably low, the method and its derivatives have seen several
practical applications. (See, for example, [Moore, 1992, Moore, et al., 1994].)
Also, the distance calculations required to find nearest neighbors can often be
efficiently computed by k-d tree methods [Friedman, et al., 1977].
A theorem by Cover and Hart [Cover & Hart, 1967] relates the performance
of the 1-nearest-neighbor method to the performance of a
minimum-probability-of-error classifier. As mentioned earlier, the minimum-probability-of-error
classifier would assign a new pattern X to that category that maximized p(i) p(X | i),
where p(i) is the a priori probability of category i, and p(X | i) is the probability
(or probability density function) of X given that X belongs to category i, for
categories i = 1, ..., R. Suppose the probability of error in classifying patterns
of such a minimum-probability-of-error classifier is ε. The Cover-Hart theorem
states that under very mild conditions (having to do with the smoothness
of probability density functions) the probability of error, ε_nn, of a 1-nearest-neighbor
classifier is bounded by:

ε ≤ ε_nn ≤ ε (2 − ε R/(R − 1)) ≤ 2ε

where R is the number of categories. Also see [Aha, 1991].
5.4 Bibliographical and Historical Remarks
To be added.
Chapter 6
Decision Trees
6.1 Definitions
A decision tree (generally defined) is a tree whose internal nodes are tests (on
input patterns) and whose leaf nodes are categories (of patterns). We show an
example in Fig. 6.1. A decision tree assigns a class number (or output) to an
input pattern by filtering the pattern down through the tests in the tree. Each
test has mutually exclusive and exhaustive outcomes. For example, test T_2 in
the tree of Fig. 6.1 has three outcomes; the leftmost one assigns the input
pattern to class 3, the middle one sends the input pattern down to test T_4, and
the rightmost one assigns the pattern to class 1. We follow the usual convention
of depicting the leaf nodes by the class number.¹ Note that in discussing decision
trees we are not limited to implementing Boolean functions; they are useful for
general, categorically valued functions.
There are several dimensions along which decision trees might differ:
a. The tests might be multivariate (testing on several features of the input
at once) or univariate (testing on only one of the features).
b. The tests might have two outcomes or more than two. (If all of the tests
have two outcomes, we have a binary decision tree.)
c. The features or attributes might be categorical or numeric. (Binary-valued
ones can be regarded as either.)
¹One of the researchers who has done a lot of work on learning decision trees is Ross
Quinlan. Quinlan distinguishes between classes and categories. He calls the subsets of patterns
that filter down to each tip categories, and subsets of patterns having the same label classes.
In Quinlan's terminology, our example tree has nine categories and three classes. We will not
make this distinction, however, but will use the words "category" and "class" interchangeably
to refer to what Quinlan calls "class."
[Figure 6.1: A Decision Tree]
d. We might have two classes or more than two. If we have two classes and
binary inputs, the tree implements a Boolean function, and is called a
Boolean decision tree.
It is straightforward to represent the function implemented by a univariate
Boolean decision tree in DNF form. The DNF form implemented by such a tree
can be obtained by tracing down each path leading to a tip node corresponding
to an output value of 1, forming the conjunction of the tests along this path,
and then taking the disjunction of these conjunctions. We show an example in
Fig. 6.2. In drawing univariate decision trees, each nonleaf node is depicted by
a single attribute. If the attribute has value 0 in the input pattern, we branch
left; if it has value 1, we branch right.
The k-DL class of Boolean functions can be implemented by a multivariate
decision tree having the (highly unbalanced) form shown in Fig. 6.3. Each test,
c_i, is a term of size k or less. The v_i all have values of 0 or 1.
6.2 Supervised Learning of Univariate Decision
Trees
Several systems for learning decision trees have been proposed. Prominent
among these are ID3 and its new version, C4.5 [Quinlan, 1986, Quinlan, 1993],
and CART [Breiman, et al., 1984]. We discuss here only batch methods,
although incremental ones have also been proposed [Utgoff, 1989].
[Figure 6.2: A Decision Tree Implementing a DNF Function]
6.2.1 Selecting the Type of Test
As usual, we have n features or attributes. If the attributes are binary, the
tests are simply whether the attribute’s value is 0 or 1. If the attributes are
categorical, but nonbinary, the tests might be formed by dividing the attribute
values into mutually exclusive and exhaustive subsets. A decision tree with such
tests is shown in Fig. 6.4. If the attributes are numeric, the tests might involve
"interval tests," for example 7 ≤ x_i ≤ 13.2.
6.2.2 Using Uncertainty Reduction to Select Tests
The main problem in learning decision trees for the binary-attribute case is
selecting the order of the tests. For categorical and numeric attributes, we
must also decide what the tests should be (besides selecting the order). Several
techniques have been tried; the most popular one is at each stage to select that
test that maximally reduces an entropy-like measure.

We show how this technique works for the simple case of tests with binary
outcomes. Extension to multiple-outcome tests is straightforward computationally
but gives poor results because entropy is always decreased by having more
outcomes.
The entropy or uncertainty still remaining about the class of a pattern,
knowing that it is in some set, Ξ, of patterns, is defined as:

H(Ξ) = −Σ_i p(i|Ξ) log₂ p(i|Ξ)
[Figure 6.3: A Decision Tree Implementing a Decision List]
where p(i|Ξ) is the probability that a pattern drawn at random from Ξ belongs
to class i, and the summation is over all of the classes. We want to select tests at
each node such that as we travel down the decision tree, the uncertainty about
the class of a pattern becomes less and less.

Since we do not in general have the probabilities p(i|Ξ), we estimate them by
sample statistics. Although these estimates might be errorful, they are
nevertheless useful in estimating uncertainties. Let p̂(i|Ξ) be the number of patterns
in Ξ belonging to class i divided by the total number of patterns in Ξ. Then an
estimate of the uncertainty is:

Ĥ(Ξ) = −Σ_i p̂(i|Ξ) log₂ p̂(i|Ξ)

For simplicity, from now on we'll drop the "hats" and use sample statistics as
if they were real probabilities.
If we perform a test, T, having k possible outcomes on the patterns in Ξ, we
will create k subsets, Ξ_1, Ξ_2, ..., Ξ_k. Suppose that n_i of the patterns in Ξ are in
Ξ_i for i = 1, ..., k. (Some n_i may be 0.) If we knew that T applied to a pattern
in Ξ resulted in the j-th outcome (that is, we knew that the pattern was in Ξ_j),
the uncertainty about its class would be:

H(Ξ_j) = −Σ_i p(i|Ξ_j) log₂ p(i|Ξ_j)
and the reduction in uncertainty (beyond knowing only that the pattern was in
Ξ) would be:
H(Ξ) − H(Ξ_j)

[Figure 6.4: A Decision Tree with Categorical Attributes]
Of course we cannot say that the test T is guaranteed always to produce that
amount of reduction in uncertainty, because we don't know that the result of
the test will be the j-th outcome. But we can estimate the average uncertainty
over all the Ξ_j by:

E[H_T(Ξ)] = Σ_j p(Ξ_j) H(Ξ_j)

where by H_T(Ξ) we mean the average uncertainty after performing test T on
the patterns in Ξ, p(Ξ_j) is the probability that the test has outcome j, and the
sum is taken from 1 to k. Again, we don't know the probabilities p(Ξ_j), but we
can use sample values. The estimate p̂(Ξ_j) of p(Ξ_j) is just the number of those
patterns in Ξ that have outcome j divided by the total number of patterns in
Ξ. The average reduction in uncertainty achieved by test T (applied to patterns
in Ξ) is then:

R_T(Ξ) = H(Ξ) − E[H_T(Ξ)]
An important family of decision-tree learning algorithms selects for the root
of the tree that test that gives maximum reduction of uncertainty, and then
applies this criterion recursively until some termination condition is met (which
we shall discuss in more detail later). The uncertainty calculations are
particularly simple when the tests have binary outcomes and when the attributes have
binary values. We'll give a simple example to illustrate how the test selection
mechanism works in that case.
Suppose we want to use the uncertainty-reduction method to build a decision
tree to classify the following patterns:
pattern class
(0, 0, 0) 0
(0, 0, 1) 0
(0, 1, 0) 0
(0, 1, 1) 0
(1, 0, 0) 0
(1, 0, 1) 1
(1, 1, 0) 0
(1, 1, 1) 1
What single test, x_1, x_2, or x_3, should be performed first? The illustration in
Fig. 6.5 gives geometric intuition about the problem.
[Figure 6.5: Eight Patterns to be Classified by a Decision Tree (the patterns
at the corners of the cube on axes x_1, x_2, x_3; the test x_1 is shown)]
The initial uncertainty for the set, Ξ, containing all eight points is:
H(Ξ) = −(6/8) log₂(6/8) − (2/8) log₂(2/8) = 0.81
Next, we calculate the uncertainty reduction if we perform x_1 first. The
left-hand branch has only patterns belonging to class 0 (we call them the set Ξ_l), and
the right-hand branch (Ξ_r) has two patterns in each class. So, the uncertainty
of the left-hand branch is:
H_{x_1}(Ξ_l) = −(4/4) log₂(4/4) − (0/4) log₂(0/4) = 0

(taking (0/4) log₂(0/4) to be 0). And the uncertainty of the right-hand branch is:

H_{x_1}(Ξ_r) = −(2/4) log₂(2/4) − (2/4) log₂(2/4) = 1

Half of the patterns "go left" and half "go right" on test x_1. Thus, the average
uncertainty after performing the x_1 test is:

(1/2) H_{x_1}(Ξ_l) + (1/2) H_{x_1}(Ξ_r) = 0.5

Therefore the uncertainty reduction on Ξ achieved by x_1 is:

R_{x_1}(Ξ) = 0.81 − 0.5 = 0.31
By similar calculations, we see that the test x_3 achieves exactly the same
uncertainty reduction, but x_2 achieves no reduction whatsoever. Thus, our
"greedy" algorithm for selecting a first test would select either x_1 or x_3. Suppose
x_1 is selected. The uncertainty-reduction procedure would select x_3 as the next
test. The decision tree that this procedure creates thus implements the Boolean
function: f = x_1 x_3. See [Quinlan, 1986, sect. 4] for another example.
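A minimal sketch of this test-selection computation (function names are
assumptions); applied to the eight patterns above it reproduces H(Ξ) ≈ 0.81 and
the reductions of about 0.31 for x_1 and x_3 and 0 for x_2:

    import math
    from collections import Counter

    def entropy(labels):
        """H = -sum_i p(i) log2 p(i), with 0 log2 0 taken as 0."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def uncertainty_reduction(patterns, labels, attr):
        """Average reduction in uncertainty from a binary test on attr."""
        n, avg = len(patterns), 0.0
        for v in (0, 1):
            subset = [lab for pat, lab in zip(patterns, labels) if pat[attr] == v]
            if subset:
                avg += (len(subset) / n) * entropy(subset)
        return entropy(labels) - avg

    patterns = [(0,0,0), (0,0,1), (0,1,0), (0,1,1),
                (1,0,0), (1,0,1), (1,1,0), (1,1,1)]
    labels = [0, 0, 0, 0, 0, 1, 0, 1]
    # uncertainty_reduction(patterns, labels, 0)  -> about 0.31 (test x_1)
    # uncertainty_reduction(patterns, labels, 1)  -> 0.0       (test x_2)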
6.2.3 Non-Binary Attributes

If the attributes are non-binary, we can still use the uncertainty-reduction
technique to select tests. But now, in addition to selecting an attribute, we must
select a test on that attribute. Suppose for example that the value of an
attribute is a real number and that the test to be performed is to set a threshold
and to test to see if the number is greater than or less than that threshold. In
principle, given a set of labeled patterns, we can measure the uncertainty
reduction for each test that is achieved by every possible threshold (there are only
a finite number of thresholds that give different test results if there are only
a finite number of training patterns). Similarly, if an attribute is categorical
(with a finite number of categories), there are only a finite number of mutually
exclusive and exhaustive subsets into which the values of the attribute can be
split. We can calculate the uncertainty reduction for each split.
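For a numeric attribute, a minimal sketch of threshold selection (names are
assumptions; entropy() is repeated from the previous sketch so the fragment
stands alone) tries a threshold between each pair of adjacent distinct values and
keeps the best:

    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def best_threshold(values, labels):
        """Best test of the form (value > t), scored by uncertainty reduction."""
        order, n = sorted(set(values)), len(labels)
        best_t, best_r = None, -1.0
        for lo, hi in zip(order, order[1:]):
            t = (lo + hi) / 2.0  # only boundaries between distinct values matter
            left = [lab for v, lab in zip(values, labels) if v <= t]
            right = [lab for v, lab in zip(values, labels) if v > t]
            r = entropy(labels) - (len(left) / n) * entropy(left) \
                                - (len(right) / n) * entropy(right)
            if r > best_r:
                best_t, best_r = t, r
        return best_t, best_r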
6.3 Networks Equivalent to Decision Trees
Since univariate Boolean decision trees are implementations of DNF functions,
they are also equivalent to two-layer, feedforward neural networks. We show
an example in Fig. 6.6. The decision tree at the left of the figure implements
the same function as the network at the right of the figure. Of course, when
implemented as a network, all of the features are evaluated in parallel for any
input pattern, whereas when implemented as a decision tree only those features
on the branch traveled down by the input pattern need to be evaluated. The
decision-tree induction methods discussed in this chapter can thus be thought of
as particular ways to establish the structure and the weight values for networks.
[Figure 6.6: A Univariate Decision Tree and its Equivalent Network. The
first layer of the network computes the terms and the second layer computes
their disjunction.]
Multivariate decision trees with linearly separable functions at each node can
also be implemented by feedforward networks, in this case three-layer ones. We
show an example in Fig. 6.7 in which the linearly separable functions, each
implemented by a TLU, are indicated by L_1, L_2, L_3, and L_4. Again, the final layer
has fixed weights, but the weights in the first two layers must be trained.
Different approaches to training procedures have been discussed by [Brent, 1990],
by [John, 1995], and (for a special case) by [Marchand & Golea, 1993].
6.4 Overfitting and Evaluation

6.4.1 Overfitting
In supervised learning, we must choose a function to fit the training set from
among a set of hypotheses. We have already shown that generalization is
impossible without bias. When we know a priori that the function we are
trying to guess belongs to a small subset of all possible functions, then, even
with an incomplete set of training samples, it is possible to reduce the subset
of functions that are consistent with the training set sufficiently to make useful
guesses about the value of the function for inputs not in the training set. And,
[Figure 6.7: A Multivariate Decision Tree and its Equivalent Network. The
TLUs L_1, ..., L_4 feed a layer of conjunctions followed by a disjunction.]
the larger the training set, the more likely it is that even a randomly selected
consistent function will have appropriate outputs for patterns not yet seen.
However, even with bias, if the training set is not sufficiently large compared
with the size of the hypothesis space, there will still be too many consistent
functions for us to make useful guesses, and generalization performance will be
poor. When there are too many hypotheses that are consistent with the training
set, we say that we are overfitting the training data. Overfitting is a problem
that we must address for all learning methods.
Since a decision tree of sufficient size can implement any Boolean function,
there is a danger of overfitting, especially if the training set is small. That
is, even if the decision tree is synthesized to classify all the members of the
training set correctly, it might perform poorly on new patterns that were not
used to build the decision tree. Several techniques have been proposed to avoid
overfitting, and we shall examine some of them here. They make use of methods
for estimating how well a given decision tree might generalize, methods we shall
describe next.
6.4.2 Validation Methods
The most straightforward way to estimate how well a hypothesized function
(such as a decision tree) performs on a test set is to test it on the test set! But,
if we are comparing several learning systems (for example, if we are comparing
different decision trees) so that we can select the one that performs the best on
the test set, then such a comparison amounts to "training on the test data."
True, training on the test data enlarges the training set, with a consequent
expected improvement in generalization, but there is still the danger of overfitting
if we are comparing several different learning systems. Another technique is to
split the training set, using (say) two-thirds for training and the other third
for estimating generalization performance. But splitting reduces the size of the
training set and thereby increases the possibility of overfitting. We next describe
some validation techniques that attempt to avoid these problems.
Cross-Validation

In cross-validation, we divide the training set Ξ into K mutually exclusive and
exhaustive equal-sized subsets: Ξ_1, ..., Ξ_K. For each subset, Ξ_i, train on the
union of all of the other subsets, and empirically determine the error rate, ε_i,
on Ξ_i. (The error rate is the number of classification errors made on Ξ_i divided
by the number of patterns in Ξ_i.) An estimate of the error rate that can be
expected on new patterns of a classifier trained on all the patterns in Ξ is then
the average of the ε_i.
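A minimal sketch of this estimate (names are assumptions; train and classify
stand in for any learning and classification procedures):

    def cross_validation_error(patterns, labels, train, classify, K=10):
        """Average error rate over K mutually exclusive, exhaustive subsets."""
        n, fold_errors = len(patterns), []
        for i in range(K):
            test_idx = set(range(i * n // K, (i + 1) * n // K))
            training = [(p, l) for j, (p, l) in enumerate(zip(patterns, labels))
                        if j not in test_idx]
            model = train(training)
            errors = sum(classify(model, patterns[j]) != labels[j] for j in test_idx)
            fold_errors.append(errors / max(1, len(test_idx)))
        return sum(fold_errors) / K

Setting K equal to the number of patterns gives the leave-one-out estimate
described next.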
Leave-One-Out Validation

Leave-one-out validation is the same as cross-validation for the special case in
which K equals the number of patterns in Ξ, and each Ξ_i consists of a single
pattern. When testing on each Ξ_i, we simply note whether or not a mistake
was made. We count the total number of mistakes and divide by K to get
the estimated error rate. This type of validation is, of course, more expensive
computationally, but useful when a more accurate estimate of the error rate for
a classifier is needed. [Describe "bootstrapping" also [Efron, 1982].]
6.4.3 Avoiding Overfitting in Decision Trees

Near the tips of a decision tree there may be only a few patterns per node.
For these nodes, we are selecting a test based on a very small sample, and thus
we are likely to be overfitting. This problem can be dealt with by terminating
the test-generating procedure before all patterns are perfectly split into their
separate categories. That is, a leaf node may contain patterns of more than one
class, but we can decide in favor of the most numerous class. This procedure
will result in a few errors, but often accepting a small number of errors on the
training set results in fewer errors on a testing set.
This behavior is illustrated in Fig. 6.8.
One can use cross-validation techniques to determine when to stop splitting
nodes. If the cross-validation error increases as a consequence of a node split,
then don't split. One has to be careful about when to stop, though, because
underfitting usually leads to more errors on test sets than does overfitting. There
is a general rule that the lowest error rate attainable by a subtree of a fully
expanded tree can be no less than 1/2 of the error rate of the fully expanded
tree [Weiss & Kulikowski, 1991, page 126].
6.4. OVERFITTING AND EVALUATION 83
(From Weiss, S., and Kulikowski, C., Computer Systems that Learn,
Morgan Kaufmann, 1991)
training errors
validation errors
1 2 3 4 5 6 7 8 9
0.2
0.4
0.6
0.8
1.0
0
0
Error Rate
Number of Terminal
Nodes
Iris Data Decision Tree
Figure 6.8: Determining When Overﬁtting Begins
Rather than stopping the growth of a decision tree, one might grow it to
its full size and then prune away leaf nodes and their ancestors until
cross-validation accuracy no longer increases. This technique is called postpruning.
Various techniques for pruning are discussed in [Weiss & Kulikowski, 1991].
6.4.4 Minimum-Description-Length Methods

An important tree-growing and pruning technique is based on the
minimum-description-length (MDL) principle. (MDL is an important idea that extends
beyond decision-tree methods [Rissanen, 1978].) The idea is that the simplest
decision tree that can predict the classes of the training patterns is the best
one. Consider the problem of transmitting just the labels of a training set of
patterns, assuming that the receiver of this information already has the ordered
set of patterns. If there are m patterns, each labeled by one of R classes,
one could transmit a list of m R-valued numbers. Assuming equally probable
classes, this transmission would require m log₂ R bits. Or, one could transmit a
decision tree that correctly labeled all of the patterns. The number of bits that
this transmission would require depends on the technique for encoding decision
trees and on the size of the tree. If the tree is small and accurately classifies
all of the patterns, it might be more economical to transmit the tree than to
transmit the labels directly. In between these extremes, we might transmit a
tree plus a list of labels of all the patterns that the tree misclassifies.

In general, the number of bits (or description length of the binary encoded
message) is t + d, where t is the length of the message required to transmit
the tree, and d is the length of the message required to transmit the labels of
the patterns misclassified by the tree. In a sense, the tree associated with the
smallest value of t + d is the best or most economical tree. The MDL method
is one way of adhering to the Occam's razor principle.
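A minimal sketch of the comparison (an assumption-laden simplification: real
schemes, such as Quinlan and Rivest's described next, must also encode which
patterns are the exceptions, not just their labels):

    import math

    def description_length(tree_bits, num_misclassified, R):
        """t + d: bits to transmit the tree plus bits for exception labels."""
        d = num_misclassified * math.log2(R) if R > 1 else 0.0
        return tree_bits + d

    # Transmitting the labels alone corresponds to tree_bits = 0 and
    # num_misclassified = m; prefer whichever tree minimizes t + d.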
Quinlan and Rivest [Quinlan & Rivest, 1989] have proposed techniques for
encoding decision trees and lists of exception labels and for calculating the
description length (t + d) of these trees and labels. They then use the description
length as a measure of quality of a tree in two ways:

a. In growing a tree, they use the reduction in description length to select
tests (instead of reduction in uncertainty).

b. In pruning a tree after it has been grown to zero error, they prune away
those nodes (starting at the tips) that achieve a decrease in the description
length.

These techniques compare favorably with the uncertainty-reduction method,
although they are quite sensitive to the coding schemes used.
6.4.5 Noise in Data
Noise in the data means that one must inevitably accept some number of
errors, depending on the noise level. Refusal to tolerate errors on the training
set when there is noise leads to the problem of "fitting the noise." Dealing with
noise, then, requires accepting some errors at the leaf nodes, just as does the
presence of only a small number of patterns at leaf nodes.
6.5 The Problem of Replicated Subtrees
Decision trees are not the most economical means of implementing some Boolean
functions. Consider, for example, the function f = x_1 x_2 + x_3 x_4. A decision tree
for this function is shown in Fig. 6.9. Notice the replicated subtrees shown
circled. The DNF form equivalent to the function implemented by this decision
tree is f = x_1 x_2 + x_1 x̄_2 x_3 x_4 + x̄_1 x_3 x_4. This DNF form is nonminimal (in the
number of disjunctions) and is equivalent to f = x_1 x_2 + x_3 x_4.
The need for replication means that it takes longer to learn the tree and
that subtrees replicated further down the tree must be learned using a smaller
training subset. This problem is sometimes called the fragmentation problem.
Several approaches might be suggested for dealing with fragmentation.
One is to attempt to build a decision graph instead of a tree
[Oliver, Dowe, & Wallace, 1992, Kohavi, 1994]. A decision graph that
implements the same decisions as that of the decision tree of Fig. 6.9 is shown in Fig.
6.10.
Another approach is to use multivariate (rather than univariate) tests at each
node. In our example of learning f = x_1 x_2 + x_3 x_4, if we had a test for x_1 x_2
and a test for x_3 x_4, the decision tree could be much simplified, as shown in Fig.
6.11.

[Figure 6.9: A Decision Tree with Subtree Replication]

Several researchers have proposed techniques for learning decision trees in
which the tests at each node are linearly separable functions. [John, 1995] gives
a nice overview (with several citations) of learning such linear discriminant trees
and presents a method based on "soft entropy."
A third method for dealing with the replicated subtree problem involves
extracting propositional "rules" from the decision tree. The rules will have as
antecedents the conjunctions that lead down to the leaf nodes, and as consequents
the name of the class at the corresponding leaf node. An example rule from the
tree with the repeating subtree of our example would be: x_1 ∧ ¬x_2 ∧ x_3 ∧ x_4 ⊃ 1.
Quinlan [Quinlan, 1987] discusses methods for reducing a set of rules to a
simpler set by 1) eliminating from the antecedent of each rule any "unnecessary"
conjuncts, and then 2) eliminating "unnecessary" rules. A conjunct or rule is
determined to be unnecessary if its elimination has little effect on classification
accuracy, as determined by a chi-square test, for example. After a rule set is
processed, it might be the case that more than one rule is "active" for any given
pattern, and care must be taken that the active rules do not conflict in their
decision about the class of a pattern.
[Figure 6.10: A Decision Graph]
6.6 The Problem of Missing Attributes
To be added.
6.7 Comparisons
Several experimenters have compared decision-tree, neural-net, and
nearest-neighbor classifiers on a wide variety of problems. For a comparison of
neural nets versus decision trees, for example, see [Dietterich, et al., 1990,
Shavlik, Mooney, & Towell, 1991, Quinlan, 1994]. In their StatLog project,
[Taylor, Michie, & Spiegalhalter, 1994] give thorough comparisons of several
machine learning algorithms on several different types of problems. There seems
[Figure 6.11: A Multivariate Decision Tree]
to be no single type of classifier that is best for all problems. And, there do
not seem to be any general conclusions that would enable one to say which
classifier method is best for which sorts of classification problems, although
[Quinlan, 1994] does provide some intuition about properties of problems that
might render them ill suited for decision trees, on the one hand, or
backpropagation, on the other.
6.8 Bibliographical and Historical Remarks
To be added.
Chapter 7
Inductive Logic
Programming
There are many different representational forms for functions of input
variables. So far, we have seen (Boolean) algebraic expressions, decision trees, and
neural networks, plus other computational mechanisms such as techniques for
computing nearest neighbors. Of course, the representation most important
in computer science is a computer program. For example, a Lisp predicate of
binary-valued inputs computes a Boolean function of those inputs. Similarly, a
logic program (whose ordinary application is to compute bindings for variables)
can also be used simply to decide whether or not a predicate has value True
(T) or False (F). For example, the Boolean exclusive-or (odd parity) function
of two variables can be computed by the following logic program:
Parity(x,y) :- True(x), ¬ True(y)
            :- True(y), ¬ True(x)
We follow Prolog syntax (see, for example, [Mueller & Page, 1988]), except that
our convention is to write variables as strings beginning with lowercase letters
and predicates as strings beginning with uppercase letters. The unary function
“True” returns T if and only if the value of its argument is T. (We now think
of Boolean functions and arguments as having values of T and F instead of 0
and 1.) Programs will be written in “typewriter” font.
In this chapter, we consider the matter of learning logic programs given
a set of variable values for which the logic program should return T (the
positive instances) and a set of variable values for which it should return
F (the negative instances). The subspecialty of machine learning that deals
with learning logic programs is called inductive logic programming (ILP)
[Lavrač & Džeroski, 1994]. As with any learning problem, this one can be quite
complex and intractably difficult unless we constrain it with biases of some sort.
In ILP, there are a variety of possible biases (called language biases). One might
restrict the program to Horn clauses, not allow recursion, not allow functions,
and so on.
As an example of an ILP problem, suppose we are trying to induce a function
Nonstop(x,y) that is to have value T for pairs of cities connected by a
nonstop air flight and F for all other pairs of cities. We are given a training set
consisting of positive and negative examples. As positive examples, we might
have (A,B), (A, A1), and some other pairs; as negative examples, we might
have (A1, A2), and some other pairs. In ILP, we usually have additional
information about the examples, called "background knowledge." In our air-flight
problem, the background information might be such ground facts as Hub(A),
Hub(B), Satellite(A1,A), plus others. (Hub(A) is intended to mean that the
city denoted by A is a hub city, and Satellite(A1,A) is intended to mean that
the city denoted by A1 is a satellite of the city denoted by A.) From these
training facts, we want to induce a program Nonstop(x,y), written in terms of the
background relations Hub and Satellite, that has value T for all the positive
instances and has value F for all the negative instances. Depending on the exact
set of examples, we might induce the program:
Nonstop(x,y) :- Hub(x), Hub(y)
             :- Satellite(x,y)
             :- Satellite(y,x)
which would have value T if both of the two cities were hub cities or if one were
a satellite of the other. As with other learning problems, we want the induced
program to generalize well; that is, if presented with arguments not represented
in the training set (but for which we have the needed background knowledge),
we would like the function to guess well.
7.1 Notation and Definitions

In evaluating logic programs in ILP, we implicitly append the background facts
to the program and adopt the usual convention that a program has value T for
a set of inputs if and only if the program interpreter returns T when actually
running the program (with background facts appended) on those inputs;
otherwise it has value F. Using the given background facts, the program above
would return T for input (A, A1), for example. If a logic program, π, returns
T for a set of arguments X, we say that the program covers the arguments and
write covers(π, X). Following our terminology introduced in connection with
version spaces, we will say that a program is sufficient if it covers all of the
positive instances and that it is necessary if it does not cover any of the
negative instances. (That is, a program implements a sufficient condition that a
training instance is positive if it covers all of the positive training instances; it
implements a necessary condition if it covers none of the negative instances.) In
the noiseless case, we want to induce a program that is both sufficient and
necessary, in which case we will call it consistent. With imperfect (noisy) training
sets, we might relax this criterion and settle for a program that covers all but
some fraction of the positive instances while allowing it to cover some fraction
of the negative instances. We illustrate these definitions schematically in Fig.
7.1.
[Figure 7.1: Sufficient, Necessary, and Consistent Programs. Three programs
are drawn over the positive and negative instances: π_1 is a necessary program,
π_2 is a sufficient program, and π_3 is a consistent program; a positive instance
is shown covered by π_2 and π_3.]
As in version spaces, if a program is sufficient but not necessary it can be
made to cover fewer examples by specializing it. Conversely, if it is necessary
but not sufficient, it can be made to cover more examples by generalizing it.
Suppose we are attempting to induce a logic program to compute the relation
ρ. The most general logic program, which is certainly sufficient, is the one that
has value T for all inputs, namely a single clause with an empty body, [ρ :- ],
which is called a fact in Prolog. The most special logic program, which is
certainly necessary, is the one that has value F for all inputs, namely [ρ :- F].
Two of the many different ways to search for a consistent logic program
are: 1) start with [ρ :- ] and specialize until the program is consistent, or 2)
start with [ρ :- F] and generalize until the program is consistent. We will
be discussing a method that starts with [ρ :- ], specializes until the program
is necessary (but might no longer be sufficient), then reachieves sufficiency in
stages by generalizing, ensuring within each stage that the program remains
necessary (by specializing).
7.2 A Generic ILP Algorithm
Since the primary operators in our search for a consistent program are
specialization and generalization, we must next discuss those operations. There are
three major ways in which a logic program might be generalized:

a. Replace some terms in a program clause by variables. (Readers familiar
with substitutions in the predicate calculus will note that this process is
the inverse of substitution.)

b. Remove literals from the body of a clause.

c. Add a clause to the program.

Analogously, there are three ways in which a logic program might be specialized:

a. Replace some variables in a program clause by terms (a substitution).

b. Add literals to the body of a clause.

c. Remove a clause from the program.
We will be presenting an ILP learning method that adds clauses to a program
when generalizing and that adds literals to the body of a clause when
specializing. When we add a clause, we will always add the clause [ρ :- ] and then
specialize it by adding literals to the body. Thus, we need only describe the
process for adding literals.
Clauses can be partially ordered by the specialization relation. In general,
clause c_1 is more special than clause c_2 if c_2 |= c_1. A special case, which is what
we use here, is that a clause c_1 is more special than a clause c_2 if the set of
literals in the body of c_2 is a subset of those in c_1. This ordering relation can
be used in a structure of partially ordered clauses, called the refinement graph,
that is similar to a version space. Clause c_1 is an immediate successor of clause
c_2 in this graph if and only if clause c_1 can be obtained from clause c_2 by adding
a literal to the body of c_2. A refinement graph then tells us the ways in which
we can specialize a clause by adding a literal to it.
Of course there are unlimited possible literals we might add to the body of
a clause. Practical ILP systems restrict the literals in various ways. Typical
allowed additions are:
a. Literals used in the background knowledge.
b. Literals whose arguments are a subset of those in the head of the clause.
c. Literals that introduce a new distinct variable different from those in the
head of the clause.
d. A literal that equates a variable in the head of the clause with another
such variable or with a term mentioned in the background knowledge.
(This possibility is equivalent to forming a specialization by making a
substitution.)
e. A literal that is the same (except for its arguments) as that in the head
of the clause. (This possibility admits recursive programs, which are
disallowed in some systems.)
We can illustrate these possibilities using our air-flight example. We start
with the program [Nonstop(x,y) :- ]. The literals used in the background
knowledge are Hub and Satellite. Thus the literals that we might consider
adding are:
Hub(x)
Hub(y)
Hub(z)
Satellite(x,y)
Satellite(y,x)
Satellite(x,z)
Satellite(z,y)
(x = y)
(If recursive programs are allowed, we could also add the literals Nonstop(x,z)
and Nonstop(z,y).) These possibilities are among those illustrated in the
refinement graph shown in Fig. 7.2. Whatever restrictions on additional literals
are imposed, they are all syntactic ones from which the successors in the
refinement graph are easily computed. ILP programs that follow the approach we
are discussing (of specializing clauses by adding a literal) thus have well-defined
methods of computing the possible literals to add to a clause.
Now we are ready to write down a simple generic algorithm for inducing a
logic program, π, for a relation ρ. We are given a training set, Ξ, of
argument sets, some known to be in the relation ρ and some not in ρ; Ξ⁺ are
the positive instances, and Ξ⁻ are the negative instances. The algorithm has
an outer loop in which it successively adds clauses to make π more and more
sufficient. It has an inner loop for constructing a clause, c, that is more and
more necessary and in which it refers only to a subset, Ξ_cur, of the training
instances. (The positive instances in Ξ_cur will be denoted by Ξ⁺_cur, and the
negative ones by Ξ⁻_cur.) The algorithm is also given background relations and
the means for adding literals to a clause. It uses a logic program interpreter to
compute whether or not the program it is inducing covers training instances.
The algorithm can be written as follows:
Generic ILP Algorithm
(Adapted from [Lavrač & Džeroski, 1994, p. 60].)
[Figure 7.2: Part of a Refinement Graph. The root clause Nonstop(x,y) :-
has successors that include Nonstop(x,y) :- Hub(x), Nonstop(x,y) :-
Satellite(x,y), and Nonstop(x,y) :- (x = y); Nonstop(x,y) :- Hub(x), Hub(y)
is in turn a successor of Nonstop(x,y) :- Hub(x).]
Initialize Ξ_cur := Ξ.
Initialize π := empty set of clauses.
repeat [The outer loop works to make π sufficient.]
  Initialize c := ρ :- .
  repeat [The inner loop makes c necessary.]
    Select a literal l to add to c. [This is a nondeterministic choice point.]
    Assign c := c, l.
  until c is necessary. [That is, until c covers no negative instances in Ξ_cur.]
  Assign π := π, c. [We add the clause c to the program.]
  Assign Ξ_cur := Ξ_cur − (the positive instances in Ξ_cur covered by π).
until π is sufficient.
(The termination tests for the inner and outer loops can be relaxed as
appropriate for the case of noisy instances.)
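A minimal sketch of this generic loop (not from the text; representing a clause
body as a list of Boolean tests over an instance's arguments, and the greedy
choice of literal, are illustrative assumptions standing in for the nondeterministic
choice point):

    def covers(clause, instance):
        """A clause covers an instance iff every literal in its body holds."""
        return all(lit(instance) for lit in clause)

    def generic_ilp(pos, neg, candidate_literals):
        """Outer loop adds clauses until the program is sufficient;
        inner loop adds literals until the new clause is necessary."""
        program, pos_cur = [], list(pos)
        while pos_cur:                                   # until sufficient
            clause, neg_cur = [], list(neg)
            while neg_cur:                               # until necessary
                # greedily pick the literal excluding the most negative
                # instances while keeping the most positive ones
                lit = max(candidate_literals,
                          key=lambda l: sum(not covers(clause + [l], x) for x in neg_cur)
                                      + sum(covers(clause + [l], x) for x in pos_cur))
                clause.append(lit)
                neg_cur = [x for x in neg_cur if covers(clause, x)]
            program.append(clause)
            pos_cur = [x for x in pos_cur
                       if not any(covers(c, x) for c in program)]
        return program

For the air-flight example below, candidate_literals would contain tests built
from the background relations, such as a test that the first city of a pair is a hub.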
7.3 An Example
We illustrate how the algorithm works by returning to our example of airline
flights. Consider the portion of an airline route map shown in Fig. 7.3. Cities
A, B, and C are "hub" cities, and we know that there are nonstop flights between
all hub cities (even those not shown on this portion of the route map). The other
cities are "satellites" of one of the hubs, and we know that there are nonstop
flights between each satellite city and its hub. The learning program is given a
set of positive instances, Ξ⁺, of pairs of cities between which there are nonstop
flights and a set of negative instances, Ξ⁻, of pairs of cities between which there
are not nonstop flights. Ξ⁺ contains just the pairs:
{< A, B >, < A, C >, < B, C >, < B, A >, < C, A >, < C, B >,
< A, A1 >, < A, A2 >, < A1, A >, < A2, A >, < B, B1 >, < B, B2 >,
< B1, B >, < B2, B >, < C, C1 >, < C, C2 >, < C1, C >, < C2, C >}
For our example, we will assume that Ξ⁻ contains all those pairs of cities shown
in Fig. 7.3 that are not in Ξ⁺ (a type of closed-world assumption). These are:
{< A, B1 >, < A, B2 >, < A, C1 >, < A, C2 >, < B, C1 >, < B, C2 >,
< B, A1 >, < B, A2 >, < C, A1 >, < C, A2 >, < C, B1 >, < C, B2 >,
< B1, A >, < B2, A >, < C1, A >, < C2, A >, < C1, B >, < C2, B >,
< A1, B >, < A2, B >, < A1, C >, < A2, C >, < B1, C >, < B2, C >}
There may be other cities not shown on this map, so the training set does not
necessarily exhaust all the cities.
[Figure: a route map with hub cities A, B, and C and satellite cities A1, A2, B1, B2, C1, and C2, each linked to its hub.]
Figure 7.3: Part of an Airline Route Map
We want the learning program to induce a program for computing the value of the relation Nonstop. The training set, Ξ, can be thought of as a partial description of this relation in extensional form—it explicitly names some pairs in the relation and some pairs not in the relation. We desire to learn the Nonstop relation as a logic program in terms of the background relations, Hub and Satellite, which are also given in extensional form. Doing so will give us a more compact, intensional, description of the relation, and this description could well generalize usefully to other cities not mentioned in the map.

We assume the learning program has the following extensional definitions of the relations Hub and Satellite:

Hub
{< A >, < B >, < C >}

All other cities mentioned in the map are assumed not in the relation Hub. We will use the notation Hub(x) to express that the city named x is in the relation Hub.

Satellite
{< A1, A >, < A2, A >, < B1, B >, < B2, B >, < C1, C >, < C2, C >}

All other pairs of cities mentioned in the map are not in the relation Satellite. We will use the notation Satellite(x,y) to express that the pair < x, y > is in the relation Satellite.
Knowing that the predicate Nonstop is a two-place predicate, the inner loop of our algorithm initializes the first clause to Nonstop(x,y) :- . This clause is not necessary because it covers all the negative examples (since it covers all examples). So we must add a literal to its (empty) body. Suppose (selecting a literal from the refinement graph) the algorithm adds Hub(x). The following positive instances in Ξ are covered by Nonstop(x,y) :- Hub(x):

{< A, B >, < A, C >, < B, C >, < B, A >, < C, A >, < C, B >,
< A, A1 >, < A, A2 >, < B, B1 >, < B, B2 >, < C, C1 >, < C, C2 >}

To compute this covering, we interpret the logic program Nonstop(x,y) :- Hub(x) for all pairs of cities in Ξ, using the pairs given in the background relation Hub as ground facts. The following negative instances are also covered:
{< A, B1 >, < A, B2 >, < A, C1 >, < A, C2 >, < C, A1 >, < C, A2 >,
< C, B1 >, < C, B2 >, < B, A1 >, < B, A2 >, < B, C1 >, < B, C2 >}
Thus, the clause is not yet necessary, and another literal must be added. Suppose we next add Hub(y). The following positive instances are covered by Nonstop(x,y) :- Hub(x), Hub(y):

{< A, B >, < A, C >, < B, C >, < B, A >, < C, A >, < C, B >}

There are no longer any negative instances in Ξ covered, so the clause Nonstop(x,y) :- Hub(x), Hub(y) is necessary, and we can terminate the first pass through the inner loop.
But the program, π, consisting of just this clause is not sufficient. These positive instances are not covered by the clause:

{< A, A1 >, < A, A2 >, < A1, A >, < A2, A >, < B, B1 >, < B, B2 >,
< B1, B >, < B2, B >, < C, C1 >, < C, C2 >, < C1, C >, < C2, C >}

The positive instances that were covered by Nonstop(x,y) :- Hub(x), Hub(y) are removed from Ξ to form the Ξ_cur to be used in the next pass through the inner loop. Ξ_cur consists of all the negative instances in Ξ plus the positive instances (listed above) that are not yet covered. In order to attempt to cover them, the inner loop creates another clause, c, initially set to Nonstop(x,y) :- . This clause covers all the negative instances, and so we must add literals to make it necessary. Suppose we add the literal Satellite(x,y). The clause Nonstop(x,y) :- Satellite(x,y) covers no negative instances, so it is necessary. It does cover the following positive instances in Ξ_cur:

{< A1, A >, < A2, A >, < B1, B >, < B2, B >, < C1, C >, < C2, C >}

These instances are removed from Ξ_cur for the next pass through the inner loop.
The program now contains two clauses:

Nonstop(x,y) :- Hub(x), Hub(y)
Nonstop(x,y) :- Satellite(x,y)

This program is not yet sufficient since it does not cover the following positive instances:

{< A, A1 >, < A, A2 >, < B, B1 >, < B, B2 >, < C, C1 >, < C, C2 >}

During the next pass through the inner loop, we add the clause Nonstop(x,y) :- Satellite(y,x). This clause is necessary, and since the program containing all three clauses is now sufficient, the procedure terminates with:

Nonstop(x,y) :- Hub(x), Hub(y)
Nonstop(x,y) :- Satellite(x,y)
Nonstop(x,y) :- Satellite(y,x)
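The induced program is easy to check mechanically. The following sketch (in Python rather than a logic-programming language) hard-codes the two background relations and renders each clause as a test; it illustrates the interpreter's job on this example and is not part of the learning algorithm itself.

HUB = {"A", "B", "C"}
SATELLITE = {("A1", "A"), ("A2", "A"), ("B1", "B"),
             ("B2", "B"), ("C1", "C"), ("C2", "C")}

def nonstop(x, y):
    return ((x in HUB and y in HUB)        # Nonstop(x,y) :- Hub(x), Hub(y)
            or (x, y) in SATELLITE         # Nonstop(x,y) :- Satellite(x,y)
            or (y, x) in SATELLITE)        # Nonstop(x,y) :- Satellite(y,x)

# The program covers pairs from the positive instances and none of the negatives:
assert nonstop("A", "B") and nonstop("A1", "A") and nonstop("A", "A1")
assert not nonstop("A", "B1") and not nonstop("B1", "C")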
Since each clause is necessary, and the whole program is sufficient, the program is also consistent with all instances of the training set. Note that this program can be applied (perhaps with good generalization) to other cities besides those in our partial map—so long as we can evaluate the relations Hub and Satellite for these other cities. In the next section, we show how the technique can be extended to use recursion on the relation we are inducing. With that extension, the method can be used to induce more general logic programs.
7.4 Inducing Recursive Programs
To induce a recursive program, we allow the addition of a literal having the same predicate letter as that in the head of the clause. Various mechanisms must be used to ensure that such a program will terminate; one is to make sure that the new literal has different variables than those in the head literal. The process is best illustrated with another example. Our example continues the one using the airline map, but we make the map somewhat simpler in order to reduce the size of the extensional relations used. Consider the map shown in Fig. 7.4. Again, B and C are hub cities, B1 and B2 are satellites of B, and C1 and C2 are satellites of C. We have introduced two new cities, B3 and C3. No flights exist between these cities and any other cities—perhaps there are only bus routes, as shown by the grey lines in the map.

We now seek to learn a program for Canfly(x,y) that covers only those pairs of cities that can be reached by one or more nonstop flights. The relation Canfly is satisfied by the following positive instances:

{< B1, B >, < B1, B2 >, < B1, C >, < B1, C1 >, < B1, C2 >,
< B, B1 >, < B2, B1 >, < C, B1 >, < C1, B1 >, < C2, B1 >,
< B2, B >, < B2, C >, < B2, C1 >, < B2, C2 >, < B, B2 >,
< C, B2 >, < C1, B2 >, < C2, B2 >, < B, C >, < B, C1 >,
< B, C2 >, < C, B >, < C1, B >, < C2, B >, < C, C1 >,
< C, C2 >, < C1, C >, < C2, C >, < C1, C2 >, < C2, C1 >}
[Figure: the reduced route map, with hub cities B and C, satellites B1, B2, C1, and C2, and the isolated cities B3 and C3 connected only by bus routes.]
Figure 7.4: Another Airline Route Map
Using a closed-world assumption on our map, we take the negative instances of Canfly to be:
{< B3, B2 >, < B3, B >, < B3, B1 >, < B3, C >, < B3, C1 >,
< B3, C2 >, < B3, C3 >, < B2, B3 >, < B, B3 >, < B1, B3 >,
< C, B3 >, < C1, B3 >, < C2, B3 >, < C3, B3 >, < C3, B2 >,
< C3, B >, < C3, B1 >, < C3, C >, < C3, C1 >, < C3, C2 >,
< B2, C3 >, < B, C3 >, < B1, C3 >, < C, C3 >, < C1, C3 >,
< C2, C3 >}
We will induce Canfly(x,y) using the extensionally defined background relation Nonstop given earlier (modified as required for our reduced airline map) and Canfly itself (recursively).

As before, we start with the empty program and proceed to the inner loop to construct a clause that is necessary. Suppose that the inner loop adds the background literal Nonstop(x,y). The clause Canfly(x,y) :- Nonstop(x,y) is necessary; it covers no negative instances. But it is not sufficient because it does not cover the following positive instances:

{< B1, B2 >, < B1, C >, < B1, C1 >, < B1, C2 >, < B2, B1 >,
< C, B1 >, < C1, B1 >, < C2, B1 >, < B2, C >, < B2, C1 >,
< B2, C2 >, < C, B2 >, < C1, B2 >, < C2, B2 >, < B, C1 >,
< B, C2 >, < C1, B >, < C2, B >, < C1, C2 >, < C2, C1 >}
Thus, we must add another clause to the program. In the inner loop, we first create the clause Canfly(x,y) :- Nonstop(x,z), which introduces the new variable z. We digress briefly to describe how a program containing a clause with unbound variables in its body is interpreted. Suppose we try to interpret it for the positive instance Canfly(B1,B2). The interpreter attempts to establish Nonstop(B1,z) for some z. Since Nonstop(B1,B), for example, is a background fact, the interpreter returns T—which means that the instance < B1, B2 > is covered. Suppose now we attempt to interpret the clause for the negative instance Canfly(B3,B). The interpreter attempts to establish Nonstop(B3,z) for some z. There are no background facts that match, so the clause does not cover < B3, B >. Using the interpreter, we see that the clause Canfly(x,y) :- Nonstop(x,z) covers all of the positive instances not already covered by the first clause, but it also covers many negative instances, such as < B2, B3 > and < B, B3 >. So the inner loop must add another literal. This time, suppose it adds Canfly(z,y) to yield the clause Canfly(x,y) :- Nonstop(x,z), Canfly(z,y). This clause is necessary; no negative instances are covered. The program is now sufficient and consistent; it is:

Canfly(x,y) :- Nonstop(x,y)
Canfly(x,y) :- Nonstop(x,z), Canfly(z,y)
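A sketch of how an interpreter can evaluate this recursive program is shown below. The visited set is one possible termination mechanism of the kind mentioned above (it prevents the interpreter from revisiting an intermediate city); the Nonstop facts are those of the reduced map of Fig. 7.4.

NONSTOP = {("B", "C"), ("C", "B"),
           ("B1", "B"), ("B", "B1"), ("B2", "B"), ("B", "B2"),
           ("C1", "C"), ("C", "C1"), ("C2", "C"), ("C", "C2")}

def canfly(x, y, visited=frozenset()):
    if (x, y) in NONSTOP:                     # Canfly(x,y) :- Nonstop(x,y)
        return True
    return any(canfly(z, y, visited | {x})   # Canfly(x,y) :- Nonstop(x,z), Canfly(z,y)
               for (u, z) in NONSTOP
               if u == x and z not in visited)

assert canfly("B1", "C2")        # a positive instance
assert not canfly("B3", "B")     # a negative instance: B3 has no flights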
7.5 Choosing Literals to Add
One of the first practical ILP systems was Quinlan's FOIL [Quinlan, 1990]. A major problem involves deciding how to select a literal to add in the inner loop (from among the literals that are allowed). In FOIL, Quinlan suggested that candidate literals can be compared using an information-like measure—similar to the measures used in inducing decision trees. A measure that gives the same comparison as Quinlan's is based on the amount by which adding a literal increases the odds that an instance drawn at random from those covered by the new clause is a positive instance, beyond what these odds were before adding the literal.

Let p be an estimate of the probability that an instance drawn at random from those covered by a clause before adding the literal is a positive instance. That is, p = (number of positive instances covered by the clause)/(total number of instances covered by the clause). It is convenient to express this probability in "odds form." The odds, o, that a covered instance is positive is defined to be o = p/(1 − p). Expressing the probability in terms of the odds, we obtain p = o/(1 + o).
After selecting a literal, l, to add to a clause, some of the instances previously covered are still covered; some of these are positive and some are negative. Let p_l denote the probability that an instance drawn at random from the instances covered by the new clause (with l added) is positive. The corresponding odds will be denoted by o_l. We want to select a literal, l, that gives a maximal increase in these odds. That is, if we define λ_l = o_l / o, we want a literal that gives a high value of λ_l. Specializing the clause in such a way that it fails to cover many of the negative instances previously covered but still covers most of the positive instances previously covered will result in a high value of λ_l. (It turns out that the value of Quinlan's information-theoretic measure increases monotonically with λ_l, so we could just as well use the latter instead.)
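As a small worked illustration (with assumed coverage counts, not taken from the text), λ_l can be computed directly from the numbers of positive and negative instances covered before and after adding a literal:

def odds(pos, neg):
    # odds that a covered instance is positive; infinite when no negatives remain
    if neg == 0:
        return float("inf")
    p = pos / (pos + neg)
    return p / (1 - p)

def gain(pos_before, neg_before, pos_after, neg_after):
    # lambda_l = o_l / o
    return odds(pos_after, neg_after) / odds(pos_before, neg_before)

# Hypothetical clause covering 12 positives and 12 negatives; adding a literal
# keeps 6 positives and eliminates all but 2 negatives:
print(gain(12, 12, 6, 2))   # 3.0  (the odds rise from 1 to 3)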
Besides finding a literal with a high value of λ_l, Quinlan's FOIL system also restricts the choice to literals that:

a) contain at least one variable that has already been used,

b) place further restrictions on the variables if the literal selected has the same predicate letter as the literal being induced (in order to prevent infinite recursion), and

c) survive a pruning test based on the values of λ_l for those literals selected so far.

We refer the reader to Quinlan's paper for further discussion of these points. Quinlan also discusses post-processing pruning methods and presents experimental results of the method applied to learning recursive relations on lists, to learning rules for chess end games and for the card game Eleusis, and to some other standard tasks mentioned in the machine learning literature. The reader should also refer to [Pazzani & Kibler, 1992, Lavrač & Džeroski, 1994, Muggleton, 1991, Muggleton, 1992], which cover, among other topics, preprocessing, postprocessing, bottom-up methods, and the LINUS system.
7.6 Relationships Between ILP and Decision Tree Induction
The generic ILP algorithm can also be understood as a type of decision tree induction. Recall the problem of inducing decision trees when the values of attributes are categorical. When splitting on a single variable, the split at each node involves asking to which of several mutually exclusive and exhaustive subsets the value of a variable belongs. For example, if a node tested the variable x_i, and if x_i could have values drawn from {A, B, C, D, E, F}, then one possible split (among many) might be according to whether the value of x_i was one of {A, B, C} or one of {D, E, F}.
It is also possible to make a multivariate split—testing the values of two or more variables at a time. With categorical variables, an n-variable split would be based on which of several n-ary relations the values of the variables satisfied. For example, if a node tested the variables x_i and x_j, and if x_i and x_j both could have values drawn from {A, B, C, D, E, F}, then one possible binary split (among many) might be according to whether or not < x_i, x_j > satisfied the relation {< A, C >, < C, D >}. (Note that our subset method of forming single-variable splits could equivalently have been framed using 1-ary relations—which are usually called properties.)
In this framework, the ILP problem is as follows: We are given a training set, Ξ, of positively and negatively labeled patterns whose components are drawn from a set of variables {x, y, z, . . .}. The positively labeled patterns in Ξ form an extensional definition of a relation, R. We are also given background relations, R_1, . . . , R_k, on various subsets of these variables. (That is, we are given sets of tuples that are in these relations.) We desire to construct an intensional definition of R in terms of R_1, . . . , R_k, such that all of the positively labeled patterns in Ξ are satisfied by R and none of the negatively labeled patterns are. The intensional definition will be in terms of a logic program in which the relation R is the head of a set of clauses whose bodies involve the background relations.
The generic ILP algorithm can be understood as decision tree induction, where each node of the decision tree is itself a sub-decision tree, and each sub-decision tree consists of nodes that make binary splits on several variables using the background relations, R_i. Thus we will speak of a top-level decision tree and various sub-decision trees. (Actually, our decision trees will be decision lists—a special case of decision trees—but we will refer to them as trees in our discussions.)
In broad outline, the method for inducing an intensional version of the relation R is illustrated by the decision tree shown in Fig. 7.5. In this diagram, the patterns in Ξ are first filtered through the decision tree in top-level node 1. The background relation R_1 is satisfied by some of these patterns; these are filtered to the right (to relation R_2), and the rest are filtered to the left (more on what happens to these later). Right-going patterns are filtered through a sequence of relational tests until only positively labeled patterns satisfy the last relation—in this case R_3. That is, the subset of patterns satisfying all the relations R_1, R_2, and R_3 contains only positive instances from Ξ. (We might say that this combination of tests is necessary. They correspond to the clause created in the first pass through the inner loop of the generic ILP algorithm.) Let us call the subset of patterns satisfying these relations Ξ_1; these satisfy Node 1 at the top level. All other patterns, that is, {Ξ − Ξ_1} = Ξ_2, are filtered to the left by Node 1.
Ξ_2 is then filtered by top-level Node 2 in much the same manner, so that Node 2 is satisfied only by the positively labeled samples in Ξ_2. We continue filtering through top-level nodes until only the negatively labeled patterns fail to satisfy a top node. In our example, Ξ_4 contains only negatively labeled patterns, and the union of Ξ_1 and Ξ_3 contains all the positively labeled patterns. The relation, R, that distinguishes positive from negative patterns in Ξ is then given in terms of the following logic program:

R :- R_1, R_2, R_3
R :- R_4, R_5

[Figure: a top-level decision list for ILP. Node 1 filters Ξ through the tests R_1, R_2, and R_3 (only positive instances satisfy all three tests); patterns failing a test fall through to Node 2, with Ξ_2 = Ξ − Ξ_1. Node 2 applies the tests R_4 and R_5 (only positive instances satisfy these two tests); the patterns failing Node 2, Ξ_4 = Ξ_2 − Ξ_3, are only negative instances.]
Figure 7.5: A Decision Tree for ILP
If we apply this sort of decision-tree induction procedure to the problem of generating a logic program for the relation Nonstop (refer to Fig. 7.3), we obtain the decision tree shown in Fig. 7.6. The logic program resulting from this decision tree is the same as that produced by the generic ILP algorithm. In setting up the problem, the training set, Ξ, can be expressed as a set of 2-dimensional vectors with components x and y. The values of these components range over the cities {A, B, C, A1, A2, B1, B2, C1, C2}, except (for simplicity) we do not allow patterns in which x and y have the same value. As before, the relation Nonstop contains the following pairs of cities, which are the positive instances:

{< A, B >, < A, C >, < B, C >, < B, A >, < C, A >, < C, B >,
< A, A1 >, < A, A2 >, < A1, A >, < A2, A >, < B, B1 >, < B, B2 >,
< B1, B >, < B2, B >, < C, C1 >, < C, C2 >, < C1, C >, < C2, C >}

All other pairs of cities named in the map of Fig. 7.3 (using the closed-world assumption) are not in the relation Nonstop and thus are negative instances. Because the values of x and y are categorical, decision-tree induction would be a very difficult task—involving as it does the need to invent relations on x and y to be used as tests. But with the background relations, R_i (in this case Hub and Satellite), the problem is made much easier. We select these relations in the same way that we select literals; from among the available tests, we make a selection based on which leads to the largest value of λ_{R_i}.
7.7 Bibliographical and Historical Remarks
To be added.
[Figure: the decision tree for the airline route problem. Top-level Node 1 tests Hub(x) and then Hub(y); the patterns passing both tests are {<A,B>, <A,C>, <B,C>, <B,A>, <C,A>, <C,B>} (only positive instances). Patterns failing Node 1 pass to top-level Node 2, which tests Satellite(x,y); those satisfying it are {<A1,A>, <A2,A>, <B1,B>, <B2,B>, <C1,C>, <C2,C>} (only positive instances). The rest pass to top-level Node 3, which tests Satellite(y,x); those satisfying it are {<A,A1>, <A,A2>, <B,B1>, <B,B2>, <C,C1>, <C,C2>} (only positive instances), and the remaining patterns are only negative instances.]
Figure 7.6: A Decision Tree for the Airline Route Problem
Chapter 8
Computational Learning Theory
In chapter one we posed the problem of guessing a function given a set of sample inputs and their values. We gave some intuitive arguments to support the claim that, after seeing only a small fraction of the possible inputs (and their values), we could guess almost correctly the values of most subsequent inputs—if we knew that the function we were trying to guess belonged to an appropriately restricted subset of functions. That is, a given training set of sample patterns might be adequate to allow us to select a function, consistent with the labeled samples, from among a restricted set of hypotheses, such that with high probability the function we select will be approximately correct (small probability of error) on subsequent samples drawn at random according to the same distribution from which the labeled samples were drawn. This insight led to the theory of probably approximately correct (PAC) learning—initially developed by Leslie Valiant [Valiant, 1984]. We present here a brief description of the theory for the case of Boolean functions. [Dietterich, 1990, Haussler, 1988, Haussler, 1990] give nice surveys of the important results.
8.1 Notation and Assumptions for PAC Learning Theory
We assume a training set Ξ of n-dimensional vectors, X_i, i = 1, . . . , m, each labeled (by 1 or 0) according to a target function, f, which is unknown to the learner. The probability of any given vector X being in Ξ, or later being presented to the learner, is P(X). The probability distribution, P, can be arbitrary. (In the literature of PAC learning theory, the target function is usually called the target concept and is denoted by c, but to be consistent with our previous notation we will continue to denote it by f.) Our problem is to guess
a function, h(X), based on the labeled samples in Ξ. In PAC theory such a guessed function is called the hypothesis. We assume that the target function is some element of a set of functions, C. We also assume that the hypothesis, h, is an element of a set, H, of hypotheses, which includes the set, C, of target functions. H is called the hypothesis space.

In general, h won't be identical to f, but we can strive to have the value of h(X) = the value of f(X) for most X's. That is, we want h to be approximately correct. To quantify this notion, we define the error of h, ε_h, as the probability that an X drawn randomly according to P will be misclassified:

ε_h = Σ_{X : h(X) ≠ f(X)} P(X)

We say that h is approximately (except for ε) correct if ε_h ≤ ε, where ε is the accuracy parameter.
Suppose we are able to find an h that classifies all m randomly drawn training samples correctly; that is, h is consistent with this randomly selected training set, Ξ. If m is large enough, will such an h be approximately correct (and for what value of ε)? On some training occasions, using m randomly drawn training samples, such an h might turn out to be approximately correct (for a given value of ε), and on others it might not. We say that h is probably (except for δ) approximately correct (PAC) if the probability that it is approximately correct is greater than 1 − δ, where δ is the confidence parameter. We shall show that if m is greater than some bound whose value depends on ε and δ, such an h is guaranteed to be probably approximately correct.

In general, we say that a learning algorithm PAC-learns functions from C in terms of H iff for every function f ∈ C, it outputs a hypothesis h ∈ H such that with probability at least (1 − δ), ε_h ≤ ε. Such a hypothesis is called probably (except for δ) approximately (except for ε) correct.
We want learning algorithms that are tractable, so we want an algorithm that PAC-learns functions in polynomial time. This can only be done for certain classes of functions. If there are a finite number of hypotheses in a hypothesis set (as there are for many of the hypothesis sets we have considered), we could always produce a consistent hypothesis from this set by testing all of them against the training data. But if there are an exponential number of hypotheses, that would take exponential time. We seek training methods that produce consistent hypotheses in less time. The time complexities for various hypothesis sets have been determined, and these are summarized in a table to be presented later.

A class, C, is polynomially PAC learnable in terms of H provided there exists a polynomial-time learning algorithm (polynomial in the number of samples needed, m, in the dimension, n, in 1/ε, and in 1/δ) that PAC-learns functions in C in terms of H.

Initial work on PAC assumed H = C, but it was later shown that some functions cannot be polynomially PAC-learned under such an assumption (assuming P ≠ NP)—but can be polynomially PAC-learned if H is a strict superset of C! Also, our definition does not specify the distribution, P, from which patterns are drawn, nor does it say anything about the properties of the learning algorithm. Since C and H do not have to be identical, we have the further restrictive definition:

A properly PAC-learnable class is a class C for which there exists an algorithm that polynomially PAC-learns functions from C in terms of C.
8.2 PAC Learning
8.2.1 The Fundamental Theorem
Suppose our learning algorithm selects some h randomly from among those that are consistent with the values of f on the m training patterns. The probability that the error of this randomly selected h is greater than some ε, with h consistent with the values of f(X) for m instances of X (drawn according to arbitrary P), is less than or equal to |H|e^{−εm}, where |H| is the number of hypotheses in H. We state this result as a theorem [Blumer, et al., 1987]:

Theorem 8.1 (Blumer, et al.) Let H be any set of hypotheses, Ξ be a set of m ≥ 1 training examples drawn independently according to some distribution P, f be any classification function in H, and ε > 0. Then, the probability that there exists a hypothesis h consistent with f for the members of Ξ but with error greater than ε is at most |H|e^{−εm}.
Proof:
Consider the set of all hypotheses, {h_1, h_2, . . . , h_i, . . . , h_S}, in H, where S = |H|. The error for h_i is ε_{h_i} = the probability that h_i will classify a pattern in error (that is, differently than f would classify it). The probability that h_i will classify a pattern correctly is (1 − ε_{h_i}). A subset, H_B, of H will have error greater than ε. We will call the hypotheses in this subset bad. The probability that any particular one of these bad hypotheses, say h_b, would classify a pattern correctly is (1 − ε_{h_b}). Since ε_{h_b} > ε, the probability that h_b (or any other bad hypothesis) would classify a pattern correctly is less than (1 − ε). The probability that it would classify all m independently drawn patterns correctly is then less than (1 − ε)^m.

That is,

prob[h_b classifies all m patterns correctly | h_b ∈ H_B] ≤ (1 − ε)^m.

prob[some h ∈ H_B classifies all m patterns correctly]
  = Σ_{h_b ∈ H_B} prob[h_b classifies all m patterns correctly | h_b ∈ H_B]
  ≤ K(1 − ε)^m, where K = |H_B|.
That is,

prob[there is a bad hypothesis that classifies all m patterns correctly] ≤ K(1 − ε)^m.

Since K ≤ |H| and (1 − ε)^m ≤ e^{−εm}, we have:

prob[there is a bad hypothesis that classifies all m patterns correctly]
  = prob[there is a hypothesis with error > ε that classifies all m patterns correctly]
  ≤ |H|e^{−εm}.

QED
A corollary of this theorem is:

Corollary 8.2 Given m ≥ (1/ε)(ln |H| + ln(1/δ)) independent samples, the probability that there exists a hypothesis in H that is consistent with f on these samples and has error greater than ε is at most δ.

Proof: We are to find a bound on m that guarantees that prob[there is a hypothesis with error > ε that classifies all m patterns correctly] ≤ δ. Thus, using the result of the theorem, we must show that |H|e^{−εm} ≤ δ. Taking the natural logarithm of both sides yields:

ln |H| − εm ≤ ln δ

or

m ≥ (1/ε)(ln |H| + ln(1/δ))

QED
This corollary is important for two reasons. First, it clearly states that we can select any hypothesis consistent with the m samples and be assured that with probability (1 − δ) its error will be less than ε. Second, it shows that in order for m to increase no more than polynomially with n, |H| can be no larger than 2^{O(n^k)}. No class larger than that can be guaranteed to be properly PAC learnable.

Here is a possible point of confusion: The bound given in the corollary is an upper bound on the value of m needed to guarantee polynomial probably approximately correct learning. Values of m greater than that bound are sufficient (but might not be necessary). We will present a lower (necessary) bound later in the chapter.
8.2.2 Examples
Terms
Let H be the set of terms (conjunctions of literals). Then |H| = 3^n, and

m ≥ (1/ε)(ln(3^n) + ln(1/δ))
  ≥ (1/ε)(1.1n + ln(1/δ))

Note that the bound on m increases only polynomially with n, 1/ε, and 1/δ. For n = 50, ε = 0.01, and δ = 0.01, m ≥ 5,961 guarantees PAC learnability.
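The bound is easy to evaluate numerically; a minimal sketch (the 1.1 in the line above approximates ln 3 ≈ 1.0986, so exact arithmetic gives a slightly smaller number than the 5,961 quoted):

import math

def sample_bound(ln_H, eps, delta):
    # m >= (1/eps)(ln|H| + ln(1/delta)), from Corollary 8.2
    return math.ceil((ln_H + math.log(1.0 / delta)) / eps)

n, eps, delta = 50, 0.01, 0.01
print(sample_bound(n * math.log(3), eps, delta))   # 5954 with ln 3 exactly;
                                                   # the text's 1.1n gives 5961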
In order to show that terms are properly PAC learnable, we additionally have to show that one can find, in time polynomial in m and n, a hypothesis h consistent with a set of m patterns labeled by the value of a term. The following procedure for finding such a consistent hypothesis requires O(nm) steps (adapted from [Dietterich, 1990, page 268]):

We are given a training sequence, Ξ, of m examples. Find the first pattern, say X_1, in that list that is labeled with a 1. Initialize a Boolean function, h, to the conjunction of the n literals corresponding to the values of the n components of X_1. (Components with value 1 will have corresponding positive literals; components with value 0 will have corresponding negative literals.) If there are no patterns labeled by a 1, we exit with the null concept (h ≡ 0 for all patterns). Then, for each additional pattern, X_i, that is labeled with a 1, we delete from h any Boolean variables appearing in X_i with a sign different from their sign in h. After processing all the patterns labeled with a 1, we check all of the patterns labeled with a 0 to make sure that none of them is assigned value 1 by h. If, at any stage of the algorithm, any patterns labeled with a 0 are assigned a 1 by h, then there exists no term that consistently classifies the patterns in Ξ, and we exit with failure. Otherwise, we exit with h.

As an example, consider the following patterns, all labeled with a 1 (from [Dietterich, 1990]):
(0, 1, 1, 0)
(1, 1, 1, 0)
(1, 1, 0, 0)

After processing the first pattern, we have h = x̄1 x2 x3 x̄4; after processing the second pattern, we have h = x2 x3 x̄4; finally, after the third pattern, we have h = x2 x̄4.
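The procedure is short enough to state directly in Python. This is a sketch of the O(nm) algorithm above; the term is represented as a dictionary mapping a component index to the required value (1 for a positive literal, 0 for a negated one).

def consistent_term(patterns, labels):
    positives = [x for x, lab in zip(patterns, labels) if lab == 1]
    if not positives:
        return None                                  # null concept: h = 0 everywhere
    # Initialize h to the conjunction of all n literals matching the first positive.
    h = dict(enumerate(positives[0]))
    # Delete literals whose sign disagrees with a later positive pattern.
    for x in positives[1:]:
        h = {i: v for i, v in h.items() if x[i] == v}
    # Verify that h assigns 0 to every pattern labeled 0.
    for x, lab in zip(patterns, labels):
        if lab == 0 and all(x[i] == v for i, v in h.items()):
            raise ValueError("no term consistently classifies the patterns")
    return h

print(consistent_term([(0, 1, 1, 0), (1, 1, 1, 0), (1, 1, 0, 0)], [1, 1, 1]))
# {1: 1, 3: 0}, i.e. h = x2 x̄4, as in the example above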
Linearly Separable Functions
Let H be the set of all linearly separable functions. Then |H| ≤ 2^{n^2}, and

m ≥ (1/ε)(n^2 ln 2 + ln(1/δ))

Again, note that the bound on m increases only polynomially with n, 1/ε, and 1/δ. For n = 50, ε = 0.01, and δ = 0.01, m ≥ 173,748 guarantees PAC learnability.

To show that linearly separable functions are properly PAC learnable, we would additionally have to show that one can find, in time polynomial in m and n, a hypothesis h consistent with a set of m labeled linearly separable patterns. (This can be done: finding a separating hyperplane consistent with the samples can be posed as a linear programming problem, and linear programming is polynomial.)
8.2.3 Some Properly PAC-Learnable Classes

Some properly PAC-learnable classes of functions are given in the following table. (Adapted from [Dietterich, 1990, pages 262 and 268], which also gives references to proofs of some of the time complexities.)
H                              |H|                Time Complexity   Properly Learnable?
terms                          3^n                polynomial        yes
k-term DNF                     2^{O(kn)}          NP-hard           no
  (k disjunctive terms)
k-DNF                          2^{O(n^k)}         polynomial        yes
  (a disjunction of k-sized terms)
k-CNF                          2^{O(n^k)}         polynomial        yes
  (a conjunction of k-sized clauses)
k-DL                           2^{O(n^k k lg n)}  polynomial        yes
  (decision lists with k-sized terms)
lin. sep.                      2^{O(n^2)}         polynomial        yes
lin. sep. with (0,1) weights   ?                  NP-hard           no
k-2NN                          ?                  NP-hard           no
DNF                            2^{2^n}            polynomial        no
  (all Boolean functions)

(Members of the class k-2NN are two-layer, feedforward neural networks with exactly k hidden units and one output unit.)
Summary: In order to show that a class of functions is properly PAC learnable:

a. Show that there is an algorithm that produces a consistent hypothesis on m n-dimensional samples in time polynomial in m and n.

b. Show that the sample size, m, needed to ensure PAC learnability is polynomial (or better) in 1/ε, 1/δ, and n by showing that ln |H| is polynomial or better in the number of dimensions.
As hinted earlier, sometimes enlarging the class of hypotheses makes learning easier. For example, the table above shows that k-CNF is PAC learnable, but k-term DNF is not. And yet, k-term DNF is a subclass of k-CNF! So, even if the target function were in k-term DNF, one would be able to find a hypothesis in k-CNF that is probably approximately correct for the target function. Similarly, linearly separable functions implemented by TLUs whose weight values are restricted to 0 and 1 are not properly PAC learnable, whereas unrestricted linearly separable functions are. It is possible that enlarging the space of hypotheses makes finding one that is consistent with the training examples easier. An interesting question is whether or not the class of functions in k-2NN is polynomially PAC learnable if the hypotheses are drawn from k'-2NN with k' > k. (At the time of writing, this matter is still undecided.)
Although PAC learning theory is a powerful analytic tool, it (like complexity theory) deals mainly with worst-case results. The fact that the class of two-layer, feedforward neural networks is not polynomially PAC learnable is more an attack on the theory than it is on the networks, which have had many successful applications. As [Baum, 1994, pages 416-17] says: ". . . humans are capable of learning in the natural world. Therefore, a proof within some model of learning that learning is not feasible is an indictment of the model. We should examine the model to see what constraints can be relaxed and made more realistic."
8.3 The Vapnik-Chervonenkis Dimension
8.3.1 Linear Dichotomies
Consider a set, H, of functions and a set, Ξ, of (unlabeled) patterns. One measure of the expressive power of a set of hypotheses, relative to Ξ, is its ability to make arbitrary classifications of the patterns in Ξ.¹ If there are m patterns in Ξ, there are 2^m different ways to divide these patterns into two disjoint and exhaustive subsets. We say there are 2^m different dichotomies of Ξ. If Ξ were to include all of the 2^n Boolean patterns, for example, there are 2^{2^n} ways to dichotomize them, and (of course) the set of all possible Boolean functions dichotomizes them in all of these ways. But a subset, H, of the Boolean functions might not be able to dichotomize an arbitrary set, Ξ, of m Boolean patterns in all 2^m ways. In general (that is, even in the non-Boolean case), we say that if a subset, H, of functions can dichotomize a set, Ξ, of m patterns in all 2^m ways, then H shatters Ξ.

As an example, consider a set Ξ of m patterns in the n-dimensional space, R^n. (That is, the n components of these patterns are real numbers.) We define a linear dichotomy as one implemented by an (n−1)-dimensional hyperplane in the n-dimensional space. How many linear dichotomies of m patterns in n dimensions are there? For example, as shown in Fig. 8.1, there are 14 dichotomies of four points in two dimensions (each separating line yields two dichotomies, depending on whether the points on one side of the line are classified as 1 or 0). (Note that even though there are an infinite number of hyperplanes, there are, nevertheless, only a finite number of ways in which hyperplanes can dichotomize a finite number of patterns. Small movements of a hyperplane typically do not change the classifications of any patterns.)

¹ And, of course, if a hypothesis is drawn from a set that can make arbitrary classifications of a set of training patterns, there is little likelihood that such a hypothesis will generalize well beyond the training set.
[Figure: the 14 dichotomies of 4 points in 2 dimensions, shown as 7 separating lines, each yielding two dichotomies.]
Figure 8.1: Dichotomizing Points in Two Dimensions
The number of dichotomies achievable by hyperplanes depends on how the patterns are disposed. For the maximum number of linear dichotomies, the points must be in what is called general position. For m > n, we say that a set of m points is in general position in an n-dimensional space if and only if no subset of (n+1) points lies on an (n−1)-dimensional hyperplane. When m ≤ n, a set of m points is in general position if no (m−2)-dimensional hyperplane contains the set. Thus, for example, a set of m ≥ 4 points is in general position in a three-dimensional space if no four of them lie on a (two-dimensional) plane. We will denote the number of linear dichotomies of m points in general position in an n-dimensional space by the expression Π_L(m, n).
It is not too difficult to verify that:

Π_L(m, n) = 2 Σ_{i=0}^{n} C(m−1, i)   for m > n, and
Π_L(m, n) = 2^m                       for m ≤ n,

where C(m−1, i) is the binomial coefficient (m−1)! / ((m−1−i)! i!).
The table below shows some values for Π_L(m, n).

m (no. of patterns)        n (dimension)
                     1     2     3     4     5
        1            2     2     2     2     2
        2            4     4     4     4     4
        3            6     8     8     8     8
        4            8    14    16    16    16
        5           10    22    30    32    32
        6           12    32    52    62    64
        7           14    44    84   114   126
        8           16    58   128   198   240

Note that the class of linear dichotomies shatters the m patterns if m ≤ n + 1. The entries with m = n + 1 (the values 4, 8, 16, 32, and 64) correspond to the highest values of m for which linear dichotomies shatter m patterns in n dimensions.
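The counting formula is straightforward to implement, and doing so also demonstrates the capacity result of the next subsection; a minimal sketch:

from math import comb

def pi_L(m, n):
    # number of linear dichotomies of m points in general position in n dimensions
    if m <= n:
        return 2 ** m
    return 2 * sum(comb(m - 1, i) for i in range(n + 1))

assert pi_L(4, 2) == 14 and pi_L(7, 4) == 114   # entries from the table
assert pi_L(6, 2) / 2 ** 6 == 0.5               # at m = 2(n+1), exactly half of all
                                                # dichotomies are linearly separable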
8.3.2 Capacity
Let P_{m,n} = Π_L(m, n)/2^m, the probability that a randomly selected dichotomy (out of the 2^m possible dichotomies of m patterns in n dimensions) will be linearly separable. In Fig. 8.2 we plot P_{λ(n+1),n} versus λ and n, where λ = m/(n + 1).

Note how quickly, for large n (say n > 30), P_{m,n} falls from 1 to 0 as m goes above 2(n + 1). For m < 2(n + 1), any dichotomy of the m points is almost certainly linearly separable. But for m > 2(n + 1), a randomly selected dichotomy of the m points is almost certainly not linearly separable. For this reason m = 2(n + 1) is called the capacity of a TLU [Cover, 1965]. Unless the number of training patterns exceeds the capacity, the fact that a TLU separates those training patterns according to their labels means nothing in terms of how well that TLU will generalize to new patterns. There is nothing special about a separation found for m < 2(n + 1) patterns—almost any dichotomy of those patterns would have been linearly separable. To make sure that the separation found is forced by the training set and thus generalizes well, it has to be the case that there are very few linearly separable functions that would separate the m training patterns.

Analogous results about the generalizing abilities of neural networks have been developed by [Baum & Haussler, 1989] and given intuitive and experimental justification in [Baum, 1994, page 438]:

"The results seemed to indicate the following heuristic rule holds. If M examples [can be correctly classified by] a net with W weights (for M >> W), the net will make a fraction ε of errors on new examples chosen from the same [uniform] distribution where ε = W/M."
[Figure: surface plot of P_{λ(n+1),n} as a function of λ (0 to 4) and n (10 to 50), falling sharply from 1 to 0 near λ = 2 for large n.]
Figure 8.2: Probability that a Random Dichotomy is Linearly Separable
8.3.3 A More General Capacity Result
Corollary 8.2 gave us an expression for the number of training patterns sufficient to guarantee a required level of generalization—assuming that the function we were guessing was a function belonging to a class of known and finite cardinality. The capacity result just presented applies to linearly separable functions for non-binary patterns. We can extend these ideas to general dichotomies of non-binary patterns.

In general, let us denote the maximum number of dichotomies of any set of m n-dimensional patterns by hypotheses in H as Π_H(m, n). The number of dichotomies will, of course, depend on the disposition of the m points in the n-dimensional space; we take Π_H(m, n) to be the maximum over all possible arrangements of the m points. (In the case of the class of linearly separable functions, the maximum number is achieved when the m points are in general position.) For each class, H, there will be some maximum value of m for which Π_H(m, n) = 2^m, that is, for which H shatters the m patterns. This maximum number is called the Vapnik-Chervonenkis (VC) dimension and is denoted by VCdim(H) [Vapnik & Chervonenkis, 1971].
We saw that for the class of linear dichotomies, VCdim(Linear) = n + 1. As another example, let us calculate the VC dimension of the hypothesis space of single intervals on the real line—used to classify points on the real line. We show an example of how points on the line might be dichotomized by a single interval in Fig. 8.3. The set Ξ could be, for example, {0.5, 2.5, −2.3, 3.14}, and one of the hypotheses in our set would be [1, 4.5]. This hypothesis would label the points 2.5 and 3.14 with a 1 and the points −2.3 and 0.5 with a 0. This set of hypotheses (single intervals on the real line) can arbitrarily classify any two points. But no single interval can classify three points such that the outer two are classified as 1 and the inner one as 0. Therefore the VC dimension of single intervals on the real line is 2. As soon as we have many more than 2 training patterns on the real line, and provided we know that the classification function we are trying to guess is a single interval, then we begin to have good generalization.
Figure 8.3: Dichotomizing Points by an Interval
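The interval example can be verified by brute force. The sketch below enumerates the dichotomies realizable by closed intervals whose endpoints are drawn from the points themselves plus two sentinels, which suffices for this one-dimensional class:

from itertools import product

def interval_shatters(points):
    pts = sorted(points)
    endpoints = [pts[0] - 1] + pts + [pts[-1] + 1]       # candidate interval ends
    realizable = {tuple(int(a <= x <= b) for x in pts)
                  for a, b in product(endpoints, repeat=2)}
    return len(realizable) == 2 ** len(pts)              # all dichotomies achieved?

print(interval_shatters([0.5, 2.5]))          # True: two points can be shattered
print(interval_shatters([-2.3, 0.5, 2.5]))    # False: the 1,0,1 labeling is impossible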
The VC dimension is a useful measure of the expressive power of a hypothesis set. Since any dichotomy of VCdim(H) or fewer patterns in general position in n dimensions can be achieved by some hypothesis in H, we must have many more than VCdim(H) patterns in the training set in order that a hypothesis consistent with the training set is sufficiently constrained to imply good generalization. Our examples have shown that the concept of VC dimension is not restricted to Boolean functions.
8.3.4 Some Facts and Speculations About the VC Dimension
• If there are a finite number, |H|, of hypotheses in H, then:

  VCdim(H) ≤ lg(|H|)

• The VC dimension of terms in n dimensions is n.

• Suppose we generalize our example that used a hypothesis set of single intervals on the real line. Now let us consider an n-dimensional feature space and tests of the form L_i ≤ x_i ≤ H_i. We allow only one such test per dimension. A hypothesis space consisting of conjunctions of these tests (called axis-parallel hyper-rectangles) has VC dimension bounded by:

  n ≤ VCdim ≤ 2n

• As we have already seen, TLUs with n inputs have a VC dimension of n + 1.

• [Baum, 1994, page 438] gives experimental evidence for the proposition that ". . . multilayer [neural] nets have a VC dimension roughly equal to their total number of [adjustable] weights."
8.4 VC Dimension and PAC Learning
There are two theorems that connect the idea of VC dimension with PAC learning [Blumer, et al., 1990]. We state these here without proof.

Theorem 8.3 (Blumer, et al.) A hypothesis space H is PAC learnable iff it has finite VC dimension.

Theorem 8.4 A set of hypotheses, H, is properly PAC learnable if:

a. m ≥ (1/ε) max[4 lg(2/δ), 8 VCdim(H) lg(13/ε)], and

b. there is an algorithm that outputs a hypothesis h ∈ H consistent with the training set in polynomial (in m and n) time.

The second of these two theorems improves the bound on the number of training patterns needed for linearly separable functions to one that is linear in n. In our previous example of how many training patterns were needed to ensure PAC learnability of a linearly separable function with n = 50, ε = 0.01, and δ = 0.01, we obtained m ≥ 173,748. Using the Blumer, et al. result we would get m ≥ 52,756.

As another example of the second theorem, let us take H to be the set of closed intervals on the real line. The VC dimension is 2 (as shown previously). With ε = 0.01 and δ = 0.01, m ≥ 16,551 ensures PAC learnability.
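For the interval class the arithmetic can be checked directly (a small sketch; lg here denotes the base-2 logarithm, which reproduces the figure quoted):

import math

def blumer_bound(vcdim, eps, delta):
    # m >= (1/eps) max[4 lg(2/delta), 8 VCdim lg(13/eps)]  (Theorem 8.4a)
    return math.ceil(max(4 * math.log2(2 / delta),
                         8 * vcdim * math.log2(13 / eps)) / eps)

print(blumer_bound(2, 0.01, 0.01))   # 16551, the bound for closed intervals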
There is also a theorem that gives a lower (necessary) bound on the number of training patterns required for PAC learning [Ehrenfeucht, et al., 1988]:

Theorem 8.5 Any PAC learning algorithm must examine at least Ω((1/ε) lg(1/δ) + VCdim(H)) training patterns.

The difference between the lower and upper bounds is O(lg(1/ε) VCdim(H)/ε).
8.5 Bibliographical and Historical Remarks
To be added.
Chapter 9
Unsupervised Learning
9.1 What is Unsupervised Learning?
Consider the various sets of points in a two-dimensional space illustrated in Fig. 9.1. The first set (a) seems naturally partitionable into two classes, while the second (b) seems difficult to partition at all, and the third (c) is problematic. Unsupervised learning uses procedures that attempt to find natural partitions of patterns. There are two stages:

• Form an R-way partition of a set Ξ of unlabeled training patterns (where the value of R, itself, may need to be induced from the patterns). The partition separates Ξ into R mutually exclusive and exhaustive subsets, Ξ_1, . . . , Ξ_R, called clusters.

• Design a classifier based on the labels assigned to the training patterns by the partition.
We will explain shortly various methods for deciding how many clusters there should be and for separating a set of patterns into that many clusters. We can base some of these methods, and their motivation, on minimum-description-length (MDL) principles. In that setting, we assume that we want to encode a description of a set of points, Ξ, into a message of minimal length. One encoding involves a description of each point separately; other, perhaps shorter, encodings might involve a description of clusters of points together with how each point in a cluster can be described given the cluster it belongs to. The specific techniques described in this chapter do not explicitly make use of MDL principles, but the MDL method has been applied with success. One of the MDL-based methods, Autoclass II [Cheeseman, et al., 1988], discovered a new classification of stars based on the properties of infrared sources.
Another type of unsupervised learning involves finding hierarchies of partitionings or clusters of clusters. A hierarchical partition is one in which Ξ is divided into mutually exclusive and exhaustive subsets, Ξ_1, . . . , Ξ_R; each set, Ξ_i (i = 1, . . . , R), is divided into mutually exclusive and exhaustive subsets, and so on. We show an example of such a hierarchical partition in Fig. 9.2. The hierarchical form is best displayed as a tree, as shown in Fig. 9.3. The tip nodes of the tree can further be expanded into their individual pattern elements. One application of such hierarchical partitions is in organizing individuals into taxonomic hierarchies such as those used in botany and zoology.

[Figure: three sets of points in two dimensions: (a) two clusters; (b) one cluster; (c) ?]
Figure 9.1: Unlabeled Patterns
9.2 Clustering Methods
9.2.1 A Method Based on Euclidean Distance
Most of the unsupervised learning methods use a measure of similarity between patterns in order to group them into clusters. The simplest of these involves defining a distance between patterns. For patterns whose features are numeric, the distance measure can be ordinary Euclidean distance between two points in an n-dimensional space.

There is a simple, iterative clustering method based on distance. It can be described as follows. Suppose we have R randomly chosen cluster seekers, C_1, . . . , C_R. These are points in an n-dimensional space that we want to adjust so that they each move toward the center of one of the clusters of patterns. We present the (unlabeled) patterns in the training set, Ξ, to the algorithm
[Figure: a hierarchy of clusters in which Ξ_11 ∪ Ξ_12 = Ξ_1, Ξ_21 ∪ Ξ_22 ∪ Ξ_23 = Ξ_2, Ξ_31 ∪ Ξ_32 = Ξ_3, and Ξ_1 ∪ Ξ_2 ∪ Ξ_3 = Ξ.]
Figure 9.2: A Hierarchy of Clusters
one by one. For each pattern, X_i, presented, we find the cluster seeker, C_j, that is closest to X_i and move it closer to X_i:

C_j ← (1 − α_j)C_j + α_j X_i

where α_j is a learning-rate parameter for the j-th cluster seeker; it determines how far C_j is moved toward X_i.
Refinements on this procedure make the cluster seekers move less far as training proceeds. Suppose each cluster seeker, C_j, has a mass, m_j, equal to the number of times that it has moved. As a cluster seeker's mass increases, it moves less far towards a pattern. For example, we might set α_j = 1/(1 + m_j) and use the above rule together with m_j ← m_j + 1. With this adjustment rule, a cluster seeker is always at the center of gravity (sample mean) of the set of patterns toward which it has so far moved. Intuitively, if a cluster seeker ever gets within some reasonably well clustered set of patterns (and if that cluster seeker is the only one so located), it will converge to the center of gravity of that cluster.
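A minimal sketch of the cluster-seeker loop, assuming the patterns are tuples of numbers and that R is given, is shown below. With the mass-based learning rate, each seeker is exactly the running mean of the patterns it has won so far.

import random

def cluster_seek(patterns, R, passes=10, seed=0):
    rng = random.Random(seed)
    seekers = [list(x) for x in rng.sample(patterns, R)]   # initial positions
    masses = [0] * R
    for _ in range(passes):
        for x in patterns:
            # find the closest seeker (squared Euclidean distance)
            j = min(range(R),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(x, seekers[k])))
            alpha = 1.0 / (1 + masses[j])                  # alpha_j = 1/(1 + m_j)
            seekers[j] = [(1 - alpha) * c + alpha * a
                          for c, a in zip(seekers[j], x)]
            masses[j] += 1                                 # m_j <- m_j + 1
    return seekers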
[Figure: the hierarchy of Fig. 9.2 displayed as a tree, with Ξ at the root, Ξ_1, Ξ_2, and Ξ_3 below it, and the subsets Ξ_11, Ξ_12, Ξ_21, Ξ_22, Ξ_23, Ξ_31, and Ξ_32 at the leaves.]
Figure 9.3: Displaying a Hierarchy as a Tree
Once the cluster seekers have converged, the classifier implied by the now-labeled patterns in Ξ can be based on a Voronoi partitioning of the space (based on distances to the various cluster seekers). This kind of classification, an example of which is shown in Fig. 9.4, can be implemented by a linear machine. (Voronoi partitions are named after Georgy Fedoseevich Voronoi, a Russian mathematician who lived from 1868 to 1909.)

When basing partitioning on distance, we seek clusters whose patterns are as close together as possible. We can measure the badness, V, of a cluster of patterns, {X_i}, by computing its sample variance, defined by:

V = (1/K) Σ_i (X_i − M)^2

where M is the sample mean of the cluster, which is defined to be:

M = (1/K) Σ_i X_i

and K is the number of points in the cluster.
We would like to partition a set of patterns into clusters such that the sum of the sample variances (badnesses) of these clusters is small. Of course, if we have one cluster for each pattern, the sample variances will all be zero, so we must arrange that our measure of the badness of a partition increases with the number of clusters. In this way, we can seek a tradeoff between the variances of
[Figure: three cluster seekers C_1, C_2, and C_3 with the separating boundaries of the Voronoi partition they induce.]
Figure 9.4: Minimum-Distance Classification
the clusters and the number of them in a way somewhat similar to the principle of minimal description length discussed earlier.

Elaborations of our basic cluster-seeking procedure allow the number of cluster seekers to vary depending on the distances between them and depending on the sample variances of the clusters. For example, if the distance, d_ij, between two cluster seekers, C_i and C_j, ever falls below some threshold ε, then we can replace them both by a single cluster seeker placed at their center of gravity (taking into account their respective masses). In this way we can decrease the overall badness of a partition by reducing the number of clusters for comparatively little penalty in increased variance.
On the other hand, if any of the cluster seekers, say C_i, defines a cluster whose sample variance is larger than some amount δ, then we can place a new cluster seeker, C_j, at some random location somewhat adjacent to C_i and reset the masses of both C_i and C_j to zero. In this way the badness of the partition might ultimately decrease by decreasing the total sample variance with comparatively little penalty for the additional cluster seeker. The values of the parameters ε and δ are set depending on the relative weights given to sample variances and numbers of clusters.
In distance-based methods, it is important to scale the components of the pattern vectors. The variation of values along some dimensions of the pattern vector may be much different than that of other dimensions. One commonly used technique is to compute the standard deviation (i.e., the square root of the variance) of each of the components over the entire training set and normalize the values of the components so that their adjusted standard deviations are equal.
9.2.2 A Method Based on Probabilities
Suppose we have a partition of the training set, Ξ, into R mutually exclusive and exhaustive clusters, C_1, . . . , C_R. We can decide to which of these clusters some arbitrary pattern, X, should be assigned by selecting the C_i for which the probability p(C_i | X) is largest, providing p(C_i | X) is larger than some fixed threshold, δ. As we saw earlier, we can use Bayes' rule and base our decision on maximizing p(X | C_i)p(C_i). Assuming conditional independence of the pattern components, x_i, the quantity to be maximized is:

S(X, C_i) = p(x_1 | C_i) p(x_2 | C_i) · · · p(x_n | C_i) p(C_i)

The p(x_j | C_i) can be estimated from the sample statistics of the patterns in the clusters and then used in the above expression. (Recall the linear form that this formula took in the case of binary-valued components.)

We call S(X, C_i) the similarity of X to a cluster, C_i, of patterns. Thus, we assign X to the cluster to which it is most similar, providing the similarity is larger than δ.

Just as before, we can define the sample mean of a cluster, C_i, to be:

M_i = (1/K_i) Σ_{X_j ∈ C_i} X_j

where K_i is the number of patterns in C_i.
We can base an iterative clustering algorithm on this measure of similarity [Mahadevan & Connell, 1992]. It can be described as follows (a sketch in Python follows the list):

a. Begin with a set of unlabeled patterns Ξ and an empty list, L, of clusters.

b. For the next pattern, X, in Ξ, compute S(X, C_i) for each cluster, C_i. (Initially, these similarities are all zero.) Suppose the largest of these similarities is S(X, C_max).

   (a) If S(X, C_max) > δ, assign X to C_max. That is,

       C_max ← C_max ∪ {X}

       Update the sample statistics p(x_1 | C_max), p(x_2 | C_max), . . . , p(x_n | C_max), and p(C_max) to take the new pattern into account. Go to c.

   (b) If S(X, C_max) ≤ δ, create a new cluster, C_new = {X}, and add C_new to L. Go to c.

c. Merge any existing clusters, C_i and C_j, if (M_i − M_j)^2 < ε. Compute new sample statistics p(x_1 | C_merge), p(x_2 | C_merge), . . . , p(x_n | C_merge), and p(C_merge) for the merged cluster, C_merge = C_i ∪ C_j.

d. If the sample statistics of the clusters have not changed during an entire iteration through Ξ, then terminate with the clusters in L; otherwise go to b.
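Below is a rough Python sketch of steps a, b, and d for binary-valued components; the merging step c is omitted for brevity, and the sample statistics are recomputed from the cluster contents rather than updated incrementally, which keeps the sketch short at some cost in efficiency.

def similarity(x, members, patterns, m):
    # S(X, C) = p(x1|C) ... p(xn|C) p(C), estimated from current sample statistics
    s = len(members) / m                                    # p(C)
    for j, v in enumerate(x):
        s *= sum(1 for k in members if patterns[k][j] == v) / len(members)
    return s

def probabilistic_cluster(patterns, delta, max_passes=20):
    clusters = []                                           # the list L (index sets)
    assigned = {}                                           # pattern index -> cluster
    for _ in range(max_passes):
        changed = False
        for i, x in enumerate(patterns):
            sims = [similarity(x, c, patterns, len(patterns)) for c in clusters]
            best = max(range(len(clusters)), key=sims.__getitem__, default=None)
            if best is None or sims[best] <= delta:         # step b(b): new cluster
                clusters.append([])
                best = len(clusters) - 1
            if assigned.get(i) != best:                     # step b(a): (re)assign X
                if i in assigned:
                    clusters[assigned[i]].remove(i)
                clusters[best].append(i)
                assigned[i] = best
                changed = True
        clusters = [c for c in clusters if c]               # drop emptied clusters
        assigned = {i: j for j, c in enumerate(clusters) for i in c}
        if not changed:                                     # step d: statistics stable
            break
    return [[patterns[i] for i in c] for c in clusters]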
The value of the parameter δ controls the number of clusters. If δ is high, there will be a large number of clusters, with few patterns in each cluster. For small values of δ, there will be a small number of clusters, with many patterns in each cluster. Similarly, the larger the value of ε, the smaller the number of clusters that will be found.

Designing a classifier based on the patterns labeled by the partitioning is straightforward. We assign any pattern, X, to the category that maximizes S(X, C_i). The well-known k-means and EM (expectation-maximization) methods are closely related to the clustering procedures described here.
9.3 Hierarchical Clustering Methods
9.3.1 A Method Based on Euclidean Distance
Suppose we have a set, Ξ, of unlabeled training patterns. We can form a hierarchical classification of the patterns in Ξ by a simple agglomerative method. (The description of this algorithm is based on an unpublished manuscript by Pat Langley.) Our description here gives the general idea; we leave it to the reader to generate a precise algorithm.

We first compute the Euclidean distance between all pairs of patterns in Ξ. (Again, appropriate scaling of the dimensions is assumed.) Suppose the smallest distance is between patterns X_i and X_j. We collect X_i and X_j into a cluster, C, eliminate X_i and X_j from Ξ, and replace them by a cluster vector, C, equal to the average of X_i and X_j. Next we compute the Euclidean distance again between all pairs of points in Ξ. If the smallest distance is between pairs of patterns, we form a new cluster, C, as before and replace the pair of patterns in Ξ by their average. If the shortest distance is between a pattern, X_i, and a cluster vector, C_j (representing a cluster, C_j), we form a new cluster, C, consisting of the union of C_j and {X_i}. In this case, we replace C_j and X_i in Ξ by their (appropriately weighted) average and continue. If the shortest distance is between two cluster vectors, C_i and C_j, we form a new cluster, C, consisting of the union of C_i and C_j. In this case, we replace C_i and C_j by their (appropriately weighted) average and continue. Since we reduce the number of points in Ξ by one each time, we ultimately terminate with a tree of clusters rooted in the cluster containing all of the points in the original training set.

An example of how this method aggregates a set of two-dimensional patterns is shown in Fig. 9.5. The numbers associated with each cluster indicate the order in which they were formed. These clusters can be organized hierarchically in a binary tree with cluster 9 as root, clusters 7 and 8 as the two descendants of the root, and so on. A ternary tree could be formed instead if one searches, at each step, for the three points in Ξ for which the triangle defined by those points has minimal area.
[Figure: nine two-dimensional clusters, numbered 1 through 9 in the order in which the agglomerative method formed them.]
Figure 9.5: Agglomerative Clustering
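A compact sketch of the agglomerative procedure is shown below; it records the merge order as a nested tuple tree of pattern indices and weights averages by cluster size, matching the "(appropriately weighted) average" above.

def dist2(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v))

def agglomerate(patterns):
    # each item: (vector, weight, tree); weight = number of original patterns in it
    items = [(tuple(x), 1, i) for i, x in enumerate(patterns)]
    while len(items) > 1:
        # find the closest pair of items (patterns or cluster vectors)
        i, j = min(((a, b) for a in range(len(items))
                    for b in range(a + 1, len(items))),
                   key=lambda p: dist2(items[p[0]][0], items[p[1]][0]))
        (xa, wa, ta), (xb, wb, tb) = items[i], items[j]
        merged = (tuple((wa * a + wb * b) / (wa + wb) for a, b in zip(xa, xb)),
                  wa + wb, (ta, tb))
        items = [it for k, it in enumerate(items) if k not in (i, j)] + [merged]
    return items[0][2]       # nested-tuple binary tree of pattern indices

print(agglomerate([(0, 0), (0, 1), (10, 0), (10, 1)]))   # ((0, 1), (2, 3))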
9.3.2 A Method Based on Probabilities
A probabilistic quality measure for partitions
We can develop a measure of the goodness of a partitioning based on how accurately we can guess a pattern given only what partition it is in. Suppose we are given a partitioning of Ξ into R classes, C_1, . . . , C_R. As before, we can compute the sample statistics p(x_i | C_k), which give probability values for each component given the class assigned to it by the partitioning. Suppose each component x_i of X can take on the values v_ij, where the index j steps over the domain of that component. We use the notation p_i(v_ij | C_k) = probability(x_i = v_ij | C_k).
Suppose we use the following probabilistic guessing rule about the values
of the components of a vector X given only that it is in class k: guess that
x_i = v_ij with probability p_i(v_ij | C_k). Then the probability that we guess the
ith component correctly is:

Σ_j probability(guess is v_ij) p_i(v_ij | C_k) = Σ_j [p_i(v_ij | C_k)]^2

The average number of (the n) components whose values are guessed correctly
by this method is then given by the sum of these probabilities over all of the
components of X:

Σ_i Σ_j [p_i(v_ij | C_k)]^2
Given our partitioning into R classes, the goodness measure, G, of this
partitioning is the average of the above expression over all classes:

G = Σ_k p(C_k) Σ_i Σ_j [p_i(v_ij | C_k)]^2

where p(C_k) is the probability that a pattern is in class C_k. In order to penalize
this measure for having a large number of classes, we divide it by R to get an
overall "quality" measure of a partitioning:

Z = (1/R) Σ_k p(C_k) Σ_i Σ_j [p_i(v_ij | C_k)]^2
We give an example of the use of this measure for a trivially simple
clustering of the four three-dimensional patterns shown in Fig. 9.6. There
are several different partitionings. Let's evaluate Z values for the following
ones: P_1 = {{a, b, c, d}}, P_2 = {{a, b}, {c, d}}, P_3 = {{a, c}, {b, d}}, and
P_4 = {{a}, {b}, {c}, {d}}. The first, P_1, puts all of the patterns into a single
cluster. The sample probabilities for the values 1 and 0 are all equal to 1/2
for each of the three components. Summing over the values of the components
(0 and 1) gives (1/2)^2 + (1/2)^2 = 1/2. Summing over the three components
gives 3/2. Averaging over all of the clusters (there is just one) also gives 3/2.
Finally, dividing by the number of clusters produces the final Z value of this
partition, Z(P_1) = 3/2.
The second partition, P_2, gives the following sample probabilities:

p_1(v_11 = 1 | C_1) = 1
p_2(v_21 = 1 | C_1) = 1/2
p_3(v_31 = 1 | C_1) = 1

Summing over the values of the components (0 and 1) gives (1)^2 + (0)^2 = 1 for
component 1, (1/2)^2 + (1/2)^2 = 1/2 for component 2, and (1)^2 + (0)^2 = 1 for
component 3. Summing over the three components gives 2 1/2 for class 1. A
similar calculation also gives 2 1/2 for class 2. Averaging over the two clusters
also gives 2 1/2. Finally, dividing by the number of clusters produces the final
Z value of this partition, Z(P_2) = 1 1/4, not quite as high as Z(P_1).
Similar calculations yield Z(P_3) = 1 and Z(P_4) = 3/4, so this method of
evaluating partitions would favor placing all patterns in a single cluster.
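The Z computation is easy to mechanize. The following sketch reproduces the four values above; the coordinates assigned to a, b, c, and d are an assumption of ours, chosen to be consistent with the calculations (Fig. 9.6's exact placement is not reproduced here):

```python
patterns = {'a': (1, 1, 1), 'b': (1, 0, 1),
            'c': (0, 1, 0), 'd': (0, 0, 0)}   # assumed coordinates

def Z(partition):
    """Z = (1/R) sum_k p(C_k) sum_i sum_j [p_i(v_ij | C_k)]^2, binary components."""
    m = sum(len(cluster) for cluster in partition)
    total = 0.0
    for cluster in partition:
        s = 0.0
        for i in range(3):                     # components x_1, x_2, x_3
            for v in (0, 1):                   # possible values v_ij
                p = sum(patterns[x][i] == v for x in cluster) / len(cluster)
                s += p ** 2
        total += (len(cluster) / m) * s        # weight by p(C_k)
    return total / len(partition)              # divide by R

P1 = [{'a', 'b', 'c', 'd'}]
P2 = [{'a', 'b'}, {'c', 'd'}]
P3 = [{'a', 'c'}, {'b', 'd'}]
P4 = [{'a'}, {'b'}, {'c'}, {'d'}]
print([Z(P) for P in (P1, P2, P3, P4)])        # -> [1.5, 1.25, 1.0, 0.75]
```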
Figure 9.6: Patterns in 3-Dimensional Space (four binary patterns, a, b, c,
and d, plotted on axes x_1, x_2, x_3)
An iterative method for hierarchical clustering
Evaluating all partitionings of m patterns and then selecting the best would be
computationally intractable. The following iterative method is based on a
hierarchical clustering procedure called COBWEB [Fisher, 1987]. The procedure
grows a tree, each node of which is labeled by a set of patterns. At the end
of the process, the root node contains all of the patterns in Ξ. The successors
of the root node will contain mutually exclusive and exhaustive subsets of Ξ.
In general, the successors of a node, η, are labeled by mutually exclusive and
exhaustive subsets of the pattern set labeling node η. The tips of the tree will
contain singleton sets. The method uses Z values to place patterns at the
various nodes; sample statistics are used to update the Z values whenever a pattern
is placed at a node. The algorithm is as follows:
a. We start with a tree whose root node contains all of the patterns in Ξ
and a single empty successor node. We arrange that at all times during
the process every nonempty node in the tree has (besides any other
successors) exactly one empty successor.
b. Select a pattern X_i in Ξ (if there are no more patterns to select, terminate).
c. Set µ to the root node.
d. For each of the successors of µ (including the empty successor!), calculate
the best host for X_i. A best host is determined by tentatively placing
X_i in one of the successors and calculating the resulting Z value for each
one of these ways of accommodating X_i. The best host corresponds to the
assignment with the highest Z value. (A sketch of this step appears after
this list.)
e. If the best host is an empty node, η, we place X_i in η, generate an empty
successor node of η, generate an empty sibling node of η, and go to step b.
f. If the best host is a nonempty, singleton (tip) node, η, we place X_i in η,
create one successor node of η containing the singleton pattern that was
in η, create another successor node of η containing X_i, create an empty
successor node of η, create empty successor nodes of the new nonempty
successors of η, and go to step b.
g. If the best host is a nonempty, nonsingleton node, η, we place X_i in η,
set µ to η, and go to step d.
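A sketch of the best-host calculation of step d, reusing the Z function from the previous sketch and representing each successor simply by its set of patterns (a simplification of ours; COBWEB proper also maintains the sample statistics incrementally):

```python
def best_host(successors, x):
    """Tentatively place x in each successor (including the empty one) and
    return the index whose placement yields the highest Z value."""
    best_idx, best_z = None, float('-inf')
    for idx in range(len(successors)):
        trial = [s | {x} if k == idx else s for k, s in enumerate(successors)]
        z = Z([s for s in trial if s])         # ignore still-empty successors
        if z > best_z:
            best_idx, best_z = idx, z
    return best_idx
```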
This process is rather sensitive to the order in which patterns are presented.
To make the final classification tree less order dependent, the COBWEB
procedure incorporates node merging and splitting.
Node merging:
It may happen that two nodes having the same parent could be merged with
an overall increase in the quality of the resulting classiﬁcation performed by the
successors of that parent. Rather than try all pairs to merge, a good heuristic
is to attempt to merge the two best hosts. When such a merging improves the
Z value, a new node containing the union of the patterns in the merged nodes
replaces the merged nodes, and the two nodes that were merged are installed
as successors of the new node.
Node splitting:
A heuristic for node splitting is to consider replacing the best host among a
group of siblings by that host’s successors. This operation is performed only if
it increases the Z value of the classiﬁcation performed by a group of siblings.
Example results from COBWEB
We mention two experiments with COBWEB. In the first, the program
attempted to find two categories (we will call them Class 1 and Class 2) of United
States Senators based on their votes (yes or no) on six issues. After the clusters
were established, the majority vote in each class was computed. These are
shown in the table below.
Issue Class 1 Class 2
Toxic Waste yes no
Budget Cuts yes no
SDI Reduction no yes
Contra Aid yes no
LineItem Veto yes no
MX Production yes no
In the second experiment, the program attempted to classify soybean
diseases based on various characteristics. COBWEB grouped the diseases in the
taxonomy shown in Fig. 9.7.
Figure 9.7: Taxonomy Induced for Soybean Diseases (the root, N_0, covers all
soybean diseases; its successors are N_1 Diaporthe Stem Canker, N_2 Charcoal
Rot, and N_3, whose successors in turn are N_31 Rhizoctonia Rot and N_32
Phytophthora Rot)
9.4 Bibliographical and Historical Remarks
To be added.
Chapter 10
Temporal-Difference Learning

10.1 Temporal Patterns and Prediction Problems
In this chapter, we consider problems in which we wish to learn to predict the
future value of some quantity, say z, from an n-dimensional input pattern, X.
In many of these problems, the patterns occur in temporal sequence, X_1, X_2,
. . ., X_i, X_{i+1}, . . ., X_m, and are generated by a dynamical process. The
components of X_i are features whose values are available at time t = i. We
distinguish two kinds of prediction problems. In one, we desire to predict the
value of z at time t = i + 1 based on input X_i for every i. For example, we
might wish to predict some aspects of tomorrow's weather based on a set of
measurements made today. In the other kind of prediction problem, we desire
to make a sequence of predictions about the value of z at some fixed time, say
t = m + 1, based on each of the X_i, i = 1, . . . , m. For example, we might wish
to make a series of predictions about some aspect of the weather on next New
Year's Day, based on measurements taken every day before New Year's. Sutton
[Sutton, 1988] has called this latter problem multi-step prediction, and that is
the problem we consider here. In multi-step prediction, we might expect the
prediction accuracy to get better and better as i increases toward m.
10.2 Supervised and Temporal-Difference Methods

A training method that naturally suggests itself is to use the actual value of
z at time m + 1 (once it is known) in a supervised learning procedure using a
sequence of training patterns, {X_1, X_2, . . ., X_i, X_{i+1}, . . ., X_m}. That is, we
seek to learn a function, f, such that f(X_i) is as close as possible to z for each i.
Typically, we would need a training set, Ξ, consisting of several such sequences.
We will show that a method that is better than supervised learning for some
important problems is to base learning on the difference between f(X_{i+1}) and
f(X_i) rather than on the difference between z and f(X_i). Such methods involve
what is called temporal-difference (TD) learning.
We assume that our prediction, f(X), depends on a vector of modifiable
weights, W. To make that dependence explicit, we write f(X, W). For
supervised learning, we consider procedures of the following type: for each X_i,
the prediction f(X_i, W) is computed and compared to z, and the learning rule
(whatever it is) computes the change, (∆W)_i, to be made to W. Then, taking
into account the weight changes for each pattern in a sequence all at once after
having made all of the predictions with the old weight vector, we change W as
follows:

W ←− W + Σ_{i=1}^{m} (∆W)_i
Whenever we are attempting to minimize the squared error between z and
f(X_i, W) by gradient descent, the weight-changing rule for each pattern is:

(∆W)_i = c (z − f_i) ∂f_i/∂W

where c is a learning rate parameter, f_i is our prediction of z, f(X_i, W),
at time t = i, and ∂f_i/∂W is, by definition, the vector of partial derivatives
(∂f_i/∂w_1, . . . , ∂f_i/∂w_i, . . . , ∂f_i/∂w_n), in which the w_i are the individual
components of W. (The expression ∂f_i/∂W is sometimes written ∇_W f_i.) The
reader will recall that we used an equivalent expression for (∆W)_i in deriving
the backpropagation formulas used in training multilayer neural networks.
The Widrow-Hoff rule results when f(X, W) = X • W. Then:

(∆W)_i = c (z − f_i) X_i

An interesting form for (∆W)_i can be developed if we note that

(z − f_i) = Σ_{k=i}^{m} (f_{k+1} − f_k)

where we define f_{m+1} = z. Substituting in our formula for (∆W)_i yields:

(∆W)_i = c (z − f_i) ∂f_i/∂W

       = c ∂f_i/∂W Σ_{k=i}^{m} (f_{k+1} − f_k)
In this form, instead of using the difference between a prediction and the value
of z, we use the differences between successive predictions—thus the phrase
temporal-difference (TD) learning.
In the case when f(X, W) = X • W, the temporal-difference form of the
Widrow-Hoff rule is:

(∆W)_i = c X_i Σ_{k=i}^{m} (f_{k+1} − f_k)

One reason for writing (∆W)_i in temporal-difference form is to permit an
interesting generalization as follows:

(∆W)_i = c ∂f_i/∂W Σ_{k=i}^{m} λ^{(k−i)} (f_{k+1} − f_k)

where 0 < λ ≤ 1. Here, the λ term gives exponentially decreasing weight to
differences later in time than t = i. When λ = 1, we have the same rule with
which we began—weighting all differences equally; but as λ → 0, we weight only
the (f_{i+1} − f_i) difference. With the λ term, the method is called TD(λ).
It is interesting to compare the two extreme cases:

For TD(0):    (∆W)_i = c (f_{i+1} − f_i) ∂f_i/∂W

For TD(1):    (∆W)_i = c (z − f_i) ∂f_i/∂W

Both extremes can be handled by the same learning mechanism; only the error
term is different. In TD(0), the error is the difference between successive
predictions, and in TD(1), the error is the difference between the finally revealed
value of z and the prediction. Intermediate values of λ take into account
differently weighted differences between future pairs of successive predictions.
Only TD(1) can be considered a pure supervised learning procedure, sensitive
to the final value of z provided by the teacher. For λ < 1, we have various degrees
of unsupervised learning, in which the prediction function strives to make each
prediction more like successive ones (whatever they might be). We shall soon
see that these unsupervised procedures result in better learning than do the
supervised ones for an important class of problems.
10.3 Incremental Computation of the (∆W)_i

We can rewrite our formula for (∆W)_i, namely

(∆W)_i = c ∂f_i/∂W Σ_{k=i}^{m} λ^{(k−i)} (f_{k+1} − f_k)

to allow a type of incremental computation. First we write the expression for
the weight change rule that takes into account all of the (∆W)_i:

W ←− W + Σ_{i=1}^{m} c ∂f_i/∂W Σ_{k=i}^{m} λ^{(k−i)} (f_{k+1} − f_k)

Interchanging the order of the summations yields:

W ←− W + Σ_{k=1}^{m} c Σ_{i=1}^{k} λ^{(k−i)} (f_{k+1} − f_k) ∂f_i/∂W

     = W + Σ_{k=1}^{m} c (f_{k+1} − f_k) Σ_{i=1}^{k} λ^{(k−i)} ∂f_i/∂W

Interchanging the indices k and i finally yields:

W ←− W + Σ_{i=1}^{m} c (f_{i+1} − f_i) Σ_{k=1}^{i} λ^{(i−k)} ∂f_k/∂W

If, as earlier, we want to use an expression of the form W ←− W + Σ_{i=1}^{m} (∆W)_i,
we see that we can write:

(∆W)_i = c (f_{i+1} − f_i) Σ_{k=1}^{i} λ^{(i−k)} ∂f_k/∂W

Now, if we let

e_i = Σ_{k=1}^{i} λ^{(i−k)} ∂f_k/∂W

we can develop a computationally efficient recurrence equation for e_{i+1} as
follows:

e_{i+1} = Σ_{k=1}^{i+1} λ^{(i+1−k)} ∂f_k/∂W

       = ∂f_{i+1}/∂W + Σ_{k=1}^{i} λ^{(i+1−k)} ∂f_k/∂W

       = ∂f_{i+1}/∂W + λ e_i

Rewriting (∆W)_i in these terms, we obtain:

(∆W)_i = c (f_{i+1} − f_i) e_i

where:

e_1 = ∂f_1/∂W
e_2 = ∂f_2/∂W + λ e_1

etc.
Quoting Sutton [Sutton, 1988, page 15] (about a different equation, but the
quote applies equally well to this one):

". . . this equation can be computed incrementally, because each
(∆W)_i depends only on a pair of successive predictions and on the
[weighted] sum of all past values for ∂f_i/∂W. This saves substantially on
memory, because it is no longer necessary to individually remember
all past values of ∂f_i/∂W."
10.4 An Experiment with TD Methods
TD prediction methods [especially TD(0)] are well suited to situations in which
the patterns are generated by a dynamic process. In that case, sequences of
temporally presented patterns contain important information that is ignored
by a conventional supervised method such as the Widrow-Hoff rule. Sutton
[Sutton, 1988, page 19] gives an interesting example involving a random walk,
which we repeat here. In Fig. 10.1, sequences of vectors, X, are generated as
follows: we start with vector X_D; the next vector in the sequence is equally
likely to be one of the adjacent vectors in the diagram. If the next vector is
X_C (or X_E), the next one after that is equally likely to be one of the vectors
adjacent to X_C (or X_E). When X_B is in the sequence, it is equally likely that
the sequence terminates with z = 0 or that the next vector is X_C. Similarly,
when X_F is in the sequence, it is equally likely that the sequence terminates
with z = 1 or that the next vector is X_E. Thus the sequences are random, but
they always start with X_D. Some sample sequences are shown in the figure.
Figure 10.1: A Markov Process. The five nonterminal states are represented
by the five-component unit vectors X_B, X_C, X_D, X_E, and X_F; walks
terminate with z = 0 to the left of X_B and with z = 1 to the right of X_F.
Typical sequences: X_D X_C X_D X_E X_F (z = 1); X_D X_C X_B X_C X_D X_E
X_D X_E X_F (z = 1); X_D X_E X_D X_C X_B (z = 0).
This random walk is an example of a Markov process; transitions from state i
to state j occur with probabilities that depend only on i and j.
Given a set of sequences generated by this process as a training set, we want
to be able to predict the value of z for each X in a test sequence. We assume
that the learning system does not know the transition probabilities.
For his experiments with this process, Sutton used a linear predictor, that
is, f(X, W) = X • W. The learning problem is to find a weight vector, W, that
minimizes the mean-squared error between z and the predicted value of z. Given
the five different values that X can take on, we have the following predictions:
f(X_B) = w_1, f(X_C) = w_2, f(X_D) = w_3, f(X_E) = w_4, f(X_F) = w_5, where
w_i is the ith component of the weight vector. (Note that the values of the
predictions are not limited to 1 or 0—even though z can only have one of
those values—because we are minimizing mean-squared error.) After training,
these predictions will be compared with the optimal ones—given the transition
probabilities.
The experimental setup was as follows: ten random sequences were generated
using the transition probabilities. Each of these sequences was presented in turn
to a TD(λ) method for various values of λ. Weight vector increments, (∆W)_i,
were computed after each pattern presentation, but no weight changes were
made until all ten sequences were presented. The weight vector increments were
summed after all ten sequences were presented, and this sum was used to change
the weight vector to be used for the next pass through the ten sequences. This
process was repeated over and over (using the same training sequences) until
(quoting Sutton) "the procedure no longer produced any significant changes in
the weight vector. For small c, the weight vector always converged in this way,
and always to the same final value [for 100 different training sets of ten random
sequences], independent of its initial value." (Even though, for fixed, small c,
the weight vector always converged to the same vector, it might converge to a
somewhat different vector for different values of c.)
After convergence, the predictions made by the final weight vector are
compared with the optimal predictions made using the transition probabilities.
These optimal predictions are simply p(z = 1 | X). We can compute these
probabilities to be 1/6, 1/3, 1/2, 2/3, and 5/6 for X_B, X_C, X_D, X_E, and X_F,
respectively. The root-mean-squared differences between the best learned
predictions (over all c) and these optimal ones are plotted in Fig. 10.2 for seven
different values of λ. (For each data point, the standard error is approximately
σ = 0.01.)
Figure 10.2: Prediction Errors for TD(λ). The root-mean-squared error (using
the best c for each λ) is plotted for λ = 0.0, 0.1, 0.3, 0.5, 0.7, 0.9, and 1.0, over
an error range of 0.10 to 0.20; the error grows with λ, and TD(1), the
Widrow-Hoff procedure, is worst. (Adapted from Sutton, p. 20, 1988.)
Notice that the Widrow-Hoff procedure does not perform as well as other
versions of TD(λ) for λ < 1! Quoting [Sutton, 1988, page 21]:

"This result contradicts conventional wisdom. It is well known that,
under repeated presentations, the Widrow-Hoff procedure minimizes
the RMS error between its predictions and the actual outcomes in
the training set ([Widrow & Stearns, 1985]). How can it be that this
optimal method performed worse than all the TD methods for λ <
1? The answer is that the Widrow-Hoff procedure only minimizes
error on the training set; it does not necessarily minimize error for
future experience. [Later] we prove that in fact it is linear TD(0)
that converges to what can be considered the optimal estimates for
matching future experience—those consistent with the maximum-
likelihood estimate of the underlying Markov process."
10.5 Theoretical Results

It is possible to analyze the performance of the linear-prediction TD(λ) methods
on Markov processes. We state some theorems here without proof.

Theorem 10.1 (Sutton, page 24, 1988) For any absorbing Markov chain,
and for any linearly independent set of observation vectors {X_i} for the
nonterminal states, there exists an ε > 0 such that for all positive c < ε and for any
initial weight vector, the predictions of linear TD(0) (with weight updates after
each sequence) converge in expected value to the optimal (maximum-likelihood)
predictions of the true process.

Even though the expected values of the predictions converge, the predictions
themselves do not converge but vary around their expected values depending on
their most recent experience. Sutton conjectures that if c is made to approach
0 as training progresses, the variance of the predictions will approach 0 also.
Dayan [Dayan, 1992] has extended the result of Theorem 10.1 to TD(λ) for
arbitrary λ between 0 and 1. (Also see [Dayan & Sejnowski, 1994].)
10.6 Intra-Sequence Weight Updating

Our standard weight updating rule for TD(λ) methods is:

W ←− W + Σ_{i=1}^{m} c (f_{i+1} − f_i) Σ_{k=1}^{i} λ^{(i−k)} ∂f_k/∂W

where the weight update occurs after an entire sequence is observed. To make
the method truly incremental (in analogy with weight updating rules for neural
nets), it would be desirable to change the weight vector after every pattern
presentation. The obvious extension is:

W_{i+1} ←− W_i + c (f_{i+1} − f_i) Σ_{k=1}^{i} λ^{(i−k)} ∂f_k/∂W

where f_{i+1} is computed before making the weight change; that is, f_{i+1} =
f(X_{i+1}, W_i). But that would make f_i = f(X_i, W_{i−1}), and such a rule would
make the prediction difference, namely (f_{i+1} − f_i), sensitive both to changes in
X and to changes in W and could lead to instabilities. Instead, we modify the rule
so that, for every pair of predictions, f_{i+1} = f(X_{i+1}, W_i) and f_i = f(X_i, W_i).
This version of the rule has been used in practice with excellent results.
For TD(0) and linear predictors, the rule is:

W_{i+1} = W_i + c (f_{i+1} − f_i) X_i

The rule is implemented as follows:

a. Initialize the weight vector, W, arbitrarily.
b. For i = 1, . . . , m, do:
   (a) f_i ←− X_i • W
       (We compute f_i anew each time through rather than use the value
       of f_{i+1} from the previous time through.)
   (b) f_{i+1} ←− X_{i+1} • W
   (c) d_{i+1} ←− f_{i+1} − f_i
   (d) W ←− W + c d_{i+1} X_i
       (If f_i were computed again with this changed weight vector, its value
       would be closer to f_{i+1}, as desired.)
The linear TD(0) method can be regarded as a technique for training a
very simple network consisting of a single dot-product unit (and no threshold
or sigmoid function). TD methods can also be used in combination with
backpropagation to train neural networks. For TD(0) we change the network
weights according to the expression:

W_{i+1} = W_i + c (f_{i+1} − f_i) ∂f_i/∂W

The only change that must be made to the standard backpropagation
weight-changing rule is that the difference term between the desired output and the
output of the unit in the final (kth) layer, namely (d − f^(k)), must be replaced
by a difference term between successive outputs, (f_{i+1} − f_i). This change has a
direct effect only on the expression for δ^(k), which becomes:

δ^(k) = 2(f'^(k) − f^(k)) f^(k) (1 − f^(k))

where f^(k) and f'^(k) are two successive outputs of the network.
The weight-changing rule for the ith weight vector in the jth layer of weights
has the same form as before, namely:

W_i^(j) ←− W_i^(j) + c δ_i^(j) X^(j−1)

where the δ_i^(j) are given recursively by:

δ_i^(j) = f_i^(j) (1 − f_i^(j)) Σ_{l=1}^{m_{j+1}} δ_l^(j+1) w_{il}^(j+1)

and w_{il}^(j+1) is the lth component of the ith weight vector in the (j + 1)th layer
of weights. Of course, here also it is assumed that f^(k) and f'^(k) are computed
using the same weights and then the weights are changed. In the next section
we shall see an interesting example of this application of TD learning.
10.7 An Example Application: TD-gammon

A program called TD-gammon [Tesauro, 1992] learns to play backgammon by
training a neural network via temporal-difference methods. The structure of
the neural net and its coding are as shown in Fig. 10.3. The network is trained
to minimize the error between the actual payoff and the estimated payoff, where
the actual payoff is defined to be d_f = p_1 + 2p_2 − p_3 − 2p_4, and the p_i are the
actual probabilities of the various outcomes as defined in the figure.
Figure 10.3: The TD-gammon Network. 198 inputs (the number of white
pieces on each of the 2 × 24 cells, encoded by units for 1, 2, 3, and more than
3, plus the number on the bar, the number off the board, and who moves) feed
up to 40 hidden units and 4 output units; hidden and output units are sigmoids.
The outputs are the estimated probabilities p_1 = pr(white wins), p_2 = pr(white
gammons), p_3 = pr(black wins), and p_4 = pr(black gammons), giving the
estimated payoff d = p_1 + 2p_2 − p_3 − 2p_4. Learning rate: c = 0.1; initial
weights chosen randomly between −0.5 and +0.5.
TD-gammon learned by using the network to select the move that results
in the best predicted payoff. That is, at any stage of the game some finite set of
moves is possible, and these lead to the set, {X}, of new board positions. Each
member of this set is evaluated by the network, and the one with the largest
predicted payoff is selected if it is white's move (and the smallest if it is black's).
The move is made, and the network weights are adjusted to make the predicted
payoff from the original position closer to that of the resulting position.
The weight adjustment procedure combines temporal-difference (TD(λ))
learning with backpropagation. If d_t is the network's estimate of the payoff
at time t (before a move is made), and d_{t+1} is the estimate at time t + 1 (after
a move is made), the weight adjustment rule is:

∆W_t = c (d_{t+1} − d_t) Σ_{k=1}^{t} λ^{(t−k)} ∂d_k/∂W

where W_t is a vector of all weights in the network at time t, and ∂d_k/∂W is the
gradient of d_k in this weight space. (For a layered, feedforward network, such
as that of TD-gammon, the weight changes for the weight vectors in each layer
can be expressed in the usual manner.)
To make the special cases clear, recall that for TD(0), the network would be
trained so that, for all t, its output, d_t, for input X_t tended toward its expected
output, d_{t+1}, for input X_{t+1}. For TD(1), the network would be trained so that,
for all t, its output, d_t, for input X_t tended toward the expected final payoff,
d_f, given that input. The latter case is the same as the Widrow-Hoff rule.
After about 200,000 games the following results were obtained. TD-gammon
(with 40 hidden units, λ = 0.7, and c = 0.1) won 66.2% of 10,000 games against
SUN Microsystems' Gammontool and 55% of 10,000 games against a neural
network trained using expert moves. Commenting on a later version of TD-
gammon, incorporating special features as inputs, Tesauro said: "It appears to
be the strongest program ever seen by this author."
10.8 Bibliographical and Historical Remarks
To be added.
Chapter 11
Delayed-Reinforcement Learning

11.1 The General Problem
Imagine a robot that exists in an environment in which it can sense and act.
Suppose (as an extreme case) that it has no idea about the eﬀects of its actions.
That is, it doesn’t know how acting will change its sensory inputs. Along with
its sensory inputs are “rewards,” which it occasionally receives. How should it
choose its actions so as to maximize its rewards over the long run? To maximize
rewards, it will need to be able to predict how actions change inputs, and in
particular, how actions lead to rewards.
We formalize the problem in the following way: The robot exists in an
environment consisting of a set, S, of states. We assume that the robot's sensory
apparatus constructs an input vector, X, from the environment, which informs
the robot about which state the environment is in. For the moment, we will
assume that the mapping from states to vectors is one-to-one, and, in fact, will
use the notation X to refer to the state of the environment as well as to the
input vector. When presented with an input vector, the robot decides which
action from a set, A, of actions to perform. Performing the action produces an
effect on the environment—moving it to a new state. The new state results in
the robot perceiving a new input vector, and the cycle repeats. We assume a
discrete-time model; the input vector at time t = i is X_i, the action taken at
that time is a_i, and the expected reward, r_i, received at t = i depends on the
action taken and on the state, that is, r_i = r(X_i, a_i). The learner's goal is to find
a policy, π(X), that maps input vectors to actions in such a way as to maximize
the reward accumulated over time. This type of learning is called reinforcement
learning. The learner must find the policy by trial and error; it has no initial
knowledge of the effects of its actions. The situation is as shown in Fig. 11.1.
Figure 11.1: Reinforcement Learning. The learner receives the state X_i and
the reward r_i from the environment and sends back an action a_i.
11.2 An Example
A "grid world," such as the one shown in Fig. 11.2, is often used to illustrate
reinforcement learning. Imagine a robot initially in cell (2,3). The robot receives
an input vector (x_1, x_2) telling it what cell it is in; it is capable of four actions,
n, e, s, w, moving the robot one cell up, right, down, or left, respectively. It is
rewarded one negative unit whenever it bumps into the wall or into the blocked
cells. For example, if the input to the robot is (1,3), and the robot chooses
action w, the next input to the robot is still (1,3) and it receives a reward of
−1. If the robot lands in the cell marked G (for goal), it receives a reward of
+10. Let's suppose that whenever the robot lands in the goal cell and gets its
reward, it is immediately transported out to some random cell, and the quest
for reward continues.
A policy for our robot is a specification of what action to take for every one
of its inputs, that is, for every one of the cells in the grid. For example, a
component of such a policy would be "when in cell (3,1), move right." An optimal
policy is a policy that maximizes long-term reward. One way of displaying a
policy for our grid-world robot is by an arrow in each cell indicating the
direction the robot should move when in that cell. In Fig. 11.3, we show an optimal
policy displayed in this manner. In this chapter we will describe methods for
learning optimal policies based on reward values received by the learner.
Figure 11.2: A Grid World. Cells are indexed (column, row), with columns 1
through 7 and rows 1 through 8; R marks the robot and G the goal.
11.3 Temporal Discounting and Optimal Policies

In delayed-reinforcement learning, one often assumes that rewards in the distant
future are not as valuable as more immediate rewards. This preference can
be accommodated by a temporal discount factor, 0 ≤ γ < 1. The present value
of a reward, r_i, occurring i time units in the future, is taken to be γ^i r_i. Suppose
we have a policy π(X) that maps input vectors into actions, and let r_i^{π(X)} be
the reward that will be received on the ith time step after one begins executing
policy π starting in state X. Then the total reward accumulated over all time
steps by policy π beginning in state X is:

V^π(X) = Σ_{i=0}^{∞} γ^i r_i^{π(X)}

One reason for using a temporal discount factor is so that the above sum will
be finite. An optimal policy is one that maximizes V^π(X) for all inputs, X.
In general, we want to consider the case in which the rewards, r_i, are random
variables and in which the effects of actions on environmental states are random.
In Markovian environments, for example, the probability that action a in state
X_i will lead to state X_j is given by a transition probability p[X_j | X_i, a]. Then
we will want to maximize expected future reward and would define V^π(X) as:

V^π(X) = E[ Σ_{i=0}^{∞} γ^i r_i^{π(X)} ]

In either case, we call V^π(X) the value of policy π for input X.
Figure 11.3: An Optimal Policy in the Grid World
If the action prescribed by π taken in state X leads to state X' (randomly,
according to the transition probabilities), then we can write V^π(X) in terms of
V^π(X') as follows:

V^π(X) = r[X, π(X)] + γ Σ_{X'} p[X' | X, π(X)] V^π(X')

where (in summary):

γ = the discount factor,

V^π(X) = the value of state X under policy π,

r[X, π(X)] = the expected immediate reward received when we execute the
action prescribed by π in state X, and

p[X' | X, π(X)] = the probability that the environment transitions to state
X' when we execute the action prescribed by π in state X.

In other words, the value of state X under policy π is the expected value of
the immediate reward received when executing the action recommended by π
plus the average value (under π) of all of the states accessible from X.
For an optimal policy, π* (and no others!), we have the famous "optimality
equation":

V^{π*}(X) = max_a [ r(X, a) + γ Σ_{X'} p[X' | X, a] V^{π*}(X') ]

The theory of dynamic programming (DP) [Bellman, 1957, Ross, 1983] assures
us that there is at least one optimal policy, π*, that satisfies this equation. DP
also provides methods for calculating V^{π*}(X) and at least one π*, assuming
that we know the average rewards and the transition probabilities. If we knew
the transition probabilities, the average rewards, and V^{π*}(X) for all X, then
it would be easy to implement an optimal policy. We would simply select the
a that maximizes r(X, a) + γ Σ_{X'} p[X' | X, a] V^{π*}(X'). That is,

π*(X) = arg max_a [ r(X, a) + γ Σ_{X'} p[X' | X, a] V^{π*}(X') ]

But, of course, we are assuming that we do not know these average rewards or
the transition probabilities, so we have to find a method that effectively learns
them.
If we had a model of actions, that is, if we knew for every state, X, and
action, a, which state, X', resulted, then we could use a method called value
iteration to find an optimal policy. Value iteration works as follows: We begin
by assigning, randomly, an estimated value V̂(X) to every state, X. On the ith
step of the process, suppose we are at state X_i (that is, our input on the ith
step is X_i), and that the estimated value of state X_i on the ith step is V̂_i(X_i).
We then select that action a that maximizes the estimated value of the predicted
subsequent state. Suppose this subsequent state having the highest estimated
value is X'_i. Then we update the estimated value, V̂_i(X_i), of state X_i as
follows:

V̂_i(X) = (1 − c_i) V̂_{i−1}(X) + c_i [r_i + γ V̂_{i−1}(X'_i)]   if X = X_i,
V̂_i(X) = V̂_{i−1}(X)                                         otherwise.

We see that this adjustment moves the value of V̂_i(X_i) an increment (depending
on c_i) closer to [r_i + γ V̂_i(X'_i)]. Assuming that V̂_i(X'_i) is a good estimate for
V(X'_i), this adjustment helps to make the two estimates more consistent.
Providing that 0 < c_i < 1 and that we visit each state infinitely often, this
process of value iteration will converge to the optimal values.
[Marginal note: Discuss synchronous dynamic programming, asynchronous
dynamic programming, and policy iteration.]
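A sketch of one value-iteration step, assuming a known, deterministic action model supplied as a next-state function (the function and parameter names are ours):

```python
def value_iteration_step(V, x, actions, next_state, reward, c=0.5, gamma=0.9):
    """One step of the rule above: choose the action whose predicted
    successor X' has the highest estimated value, then move V(x) an
    increment (depending on c) toward r + gamma * V(X')."""
    best_a = max(actions, key=lambda a: V[next_state(x, a)])
    x_next = next_state(x, best_a)
    V[x] = (1 - c) * V[x] + c * (reward(x, best_a) + gamma * V[x_next])
    return x_next   # the process continues from the successor state
```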
11.4 Q-Learning

Watkins [Watkins, 1989] has proposed a technique that he calls incremental
dynamic programming. Let a; π stand for the policy that chooses action a once,
and thereafter chooses actions according to policy π. We define:

Q^π(X, a) = V^{a;π}(X)
Then the optimal value from state X is given by:

V^{π*}(X) = max_a Q^{π*}(X, a)

This equation holds only for an optimal policy, π*. The optimal policy is given
by:

π*(X) = arg max_a Q^{π*}(X, a)

Note that if an action a makes Q^π(X, a) larger than V^π(X), then we can improve
π by changing it so that π(X) = a. Making such a change is the basis for a
powerful learning rule that we shall describe shortly.
Suppose action a in state X leads to state X'. Then, using the definitions of
Q and V, it is easy to show that:

Q^π(X, a) = r(X, a) + γ E[V^π(X')]
where r(X, a) is the average value of the immediate reward received when we
execute action a in state X. For an optimal policy (and no others), we have
another version of the optimality equation in terms of Q values:

Q^{π*}(X, a) = r(X, a) + γ E[ max_{a'} Q^{π*}(X', a') ]

for all actions, a, and states, X. Now, if we had the optimal Q values (for all
a and X), then we could implement an optimal policy simply by selecting the
action that maximized r(X, a) + γ E[ max_{a'} Q^{π*}(X', a') ]. That is,

π*(X) = arg max_a [ r(X, a) + γ E[ max_{a'} Q^{π*}(X', a') ] ]
Watkins' proposal amounts to a TD(0) method of learning the Q values.
We quote (with minor notational changes) from [Watkins & Dayan, 1992, page
281]:

"In Q-learning, the agent's experience consists of a sequence of
distinct stages or episodes. In the ith episode, the agent:

• observes its current state X_i,
• selects [using the method described below] and performs an action a_i,
• observes the subsequent state X'_i,
• receives an immediate reward r_i, and
• adjusts its Q_{i−1} values using a learning factor c_i, according to:

Q_i(X, a) = (1 − c_i) Q_{i−1}(X, a) + c_i [r_i + γ V_{i−1}(X'_i)]   if X = X_i and a = a_i,
Q_i(X, a) = Q_{i−1}(X, a)                                          otherwise,

where

V_{i−1}(X') = max_b [Q_{i−1}(X', b)]

is the best the agent thinks it can do from state X'. . . . The
initial Q values, Q_0(X, a), for all states and actions are assumed
given."
Using the current Q values, Q_i(X, a), the agent always selects the action
that maximizes Q_i(X, a). Note that only the Q value corresponding to the
state just exited and the action just taken is adjusted. And that Q value is
adjusted so that it is closer (by an amount determined by c_i) to the sum of
the immediate reward plus the discounted maximum (over all actions) of the Q
values of the state just entered. If we imagine the Q values to be predictions of
ultimate (infinite-horizon) total reward, then the learning procedure described
above is exactly a TD(0) method of learning how to predict these Q values.
Q learning strengthens the usual TD methods, however, because TD (applied
to reinforcement problems using value iteration) requires a one-step lookahead,
using a model of the effects of actions, whereas Q learning does not.
A convenient notation (proposed by [Schwartz, 1993]) for representing the
change in Q value is:

Q(X, a) ←−_β r + γ V(X')

where Q(X, a) is the new Q value for input X and action a, r is the immediate
reward when action a is taken in response to input X, V(X') is the maximum
(over all actions) of the Q values of the state next reached when action a is taken
from state X, and β is the fraction of the way that the new Q value, Q(X, a),
is moved toward r + γ V(X').
Watkins and Dayan [Watkins & Dayan, 1992] prove that, under certain
conditions, the Q values computed by this learning procedure converge to optimal
ones (that is, to ones on which an optimal policy can be based).
We define n_i(X, a) as the index (episode number) of the ith time that action
a is tried in state X. Then, we have:
Theorem 11.1 (Watkins and Dayan) For Markov problems with states {X}
and actions {a}, and given bounded rewards |r_n| ≤ R, learning rates 0 ≤ c_n < 1,
and

Σ_{i=0}^{∞} c_{n_i(X,a)} = ∞,    Σ_{i=0}^{∞} [c_{n_i(X,a)}]^2 < ∞

for all X and a, then Q_n(X, a) → Q*(X, a) as n → ∞, for all X and a, with
probability 1, where Q*(X, a) corresponds to the Q values of an optimal policy.
Again, we quote from [Watkins & Dayan, 1992, page 281]:

"The most important condition implicit in the convergence theorem
. . . is that the sequence of episodes that forms the basis of learning
must include an infinite number of episodes for each starting state
and action. This may be considered a strong condition on the way
states and actions are selected—however, under the stochastic
conditions of the theorem, no method could be guaranteed to find an
optimal policy under weaker conditions. Note, however, that the
episodes need not form a continuous sequence—that is, the X' of one
episode need not be the X of the next episode."
The relationships among Q learning, dynamic programming, and control
are very well described in [Barto, Bradtke, & Singh, 1994]. Q learning is best
thought of as a stochastic approximation method for calculating the Q values.
Although the deﬁnition of the optimal Q values for any state depends recursively
on expected values of the Q values for subsequent states (and on the expected
values of rewards), no expected values are explicitly computed by the procedure.
Instead, these values are approximated by iterative sampling using the actual
stochastic mechanism that produces successor states.
11.5 Discussion, Limitations, and Extensions of Q-Learning

11.5.1 An Illustrative Example

The Q-learning procedure requires that we maintain a table of Q(X, a) values
for all state-action pairs. In the grid world that we described earlier, such a
table would not be excessively large. We might start with random entries in the
table; a portion of such an initial table might be as follows:
X      a   Q(X, a)   r(X, a)
(2,3)  w   7         0
(2,3)  n   4         0
(2,3)  e   3         0
(2,3)  s   6         0
(1,3)  w   4         −1
(1,3)  n   5         0
(1,3)  e   2         0
(1,3)  s   4         0
Suppose the robot is in cell (2,3). The maximum Q value occurs for a = w, so the
robot moves west to cell (1,3)—receiving no immediate reward. The maximum
Q value in cell (1,3) is 5, and the learning mechanism attempts to make the
value of Q((2,3), w) closer to the discounted value of 5 plus the immediate
reward (which was 0 in this case). With a learning rate parameter c = 0.5
and γ = 0.9, the value of Q((2,3), w) is adjusted from 7 to 5.75. No other
changes are made to the table at this episode. The reader might try this learning
procedure on the grid world with a simple computer program. Notice that an
optimal policy might not be discovered if some cells are not visited or some
actions are not tried frequently enough.
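A minimal sketch of such a program, in the spirit of the table above. The goal location and the blocked cells are assumptions of ours (the text does not fix them); the rewards are −1 for bumps and +10 at the goal, as described:

```python
import random

ACTIONS = {'n': (0, 1), 's': (0, -1), 'e': (1, 0), 'w': (-1, 0)}
WIDTH, HEIGHT = 7, 8
GOAL = (7, 4)                                  # assumed goal location
BLOCKED = {(3, 3), (3, 4), (3, 5)}             # assumed blocked cells
CELLS = [(i, j) for i in range(1, WIDTH + 1) for j in range(1, HEIGHT + 1)
         if (i, j) not in BLOCKED]
Q = {(x, a): random.uniform(0, 10) for x in CELLS for a in ACTIONS}

def step(x, c=0.5, gamma=0.9):
    """One episode: act greedily from the Q table, observe r and X', and
    apply Q(X,a) <- (1-c) Q(X,a) + c [r + gamma * max_b Q(X',b)]."""
    a = max(ACTIONS, key=lambda b: Q[x, b])
    nxt = (x[0] + ACTIONS[a][0], x[1] + ACTIONS[a][1])
    if nxt not in CELLS:
        nxt, r = x, -1                         # bumped a wall or a blocked cell
    else:
        r = 10 if nxt == GOAL else 0
    Q[x, a] = (1 - c) * Q[x, a] + c * (r + gamma * max(Q[nxt, b] for b in ACTIONS))
    # On reaching the goal the robot is transported to a random cell.
    return random.choice(CELLS) if nxt == GOAL else nxt

x = (2, 3)
for _ in range(100_000):
    x = step(x)
```

Because the action choice here is purely greedy, some cells and actions may never be tried, which is exactly the caveat just noted; the exploration schemes discussed in Section 11.5.2 address this.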
The learning problem faced by the agent is to associate specific actions with
specific input patterns. Q learning gradually reinforces those actions that
contribute to positive rewards by increasing the associated Q values. Typically, as
in this example, rewards occur somewhat after the actions that lead to them—
hence the phrase delayed-reinforcement learning. One can imagine that better
and better approximations to the optimal Q values gradually propagate back
from states producing rewards toward all of the other states that the agent
frequently visits. With random Q values to begin, the agent's actions amount to a
random walk through its space of states. Only when this random walk happens
to stumble into rewarding states does Q learning begin to produce Q values
that are useful, and, even then, the Q values have to work their way outward
from these rewarding states. The general problem of associating rewards with
state-action pairs is called the temporal credit assignment problem—how should
credit for a reward be apportioned to the actions leading up to it? Q learning is,
to date, the most successful technique for temporal credit assignment, although
a related method, called the bucket brigade algorithm, has been proposed by
[Holland, 1986].
Learning problems similar to that faced by the agent in our grid world have
been thoroughly studied by Sutton, who has proposed an architecture, called
DYNA, for solving them [Sutton, 1990]. DYNA combines reinforcement learning
with planning. Sutton characterizes planning as learning in a simulated world
that models the world that the agent inhabits. The agent's model of the world
is obtained by Q learning in its actual world, and planning is accomplished by
Q learning in its model of the world.
We should note that the learning problem faced by our grid-world robot
could be modified to have several places in the grid that give positive rewards.
This possibility presents an interesting way to generalize the classical notion of
a "goal" in AI planning systems—even in those that do no learning. Instead of
representing a goal as a condition to be achieved, we represent a "goal structure"
as a set of rewards to be given for achieving various conditions. Then
the generalized "goal" becomes maximizing discounted future reward instead of
simply achieving some particular condition. This generalization can be made to
encompass so-called goals of maintenance and goals of avoidance. The example
presented above included avoiding bumping into the grid-world boundary.
A goal of maintenance of a particular state could be expressed in terms of a
reward that was earned whenever the agent was in that state and performed an
action that transitioned back to that state in one step.
11.5.2 Using Random Actions

When the next pattern presentation in a sequence of patterns is the one caused
by the agent's own action in response to the last pattern, we have what is called
an online learning method. In Watkins and Dayan's terminology, in online
learning the episodes form a continuous sequence. As already mentioned, the
convergence theorem for Q learning does not require online learning; indeed,
special precautions must be taken to ensure that online learning meets the
conditions of the theorem. If online learning discovers some good paths to
rewards, the agent may fixate on these and never discover a policy that leads
to a possibly greater long-term reward. In reinforcement learning phraseology,
this problem is referred to as the problem of exploitation (of already learned
behavior) versus exploration (of possibly better behavior).
One way to force exploration is to perform occasional random actions
(instead of the single action prescribed by the current Q values). For example,
in the grid-world problem, one could imagine selecting an action randomly
according to a probability distribution over the actions (n, e, s, and w). This
distribution, in turn, could depend on the Q values. For example, we might
first find the action prescribed by the Q values and then choose that action
with probability 1/2, choose the two orthogonal actions with probability 3/16
each, and choose the opposite action with probability 1/8. This policy might be
modified by "simulated annealing," which would gradually increase the
probability of the action prescribed by the Q values as time goes on. This strategy
would favor exploration at the beginning of learning and exploitation later.
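That selection rule is simple to implement; a sketch (names ours), reusing the Q table and action set of the earlier grid-world sketch:

```python
import random

OPPOSITE = {'n': 's', 's': 'n', 'e': 'w', 'w': 'e'}

def explore_action(Q, x, actions=('n', 'e', 's', 'w')):
    """Choose the Q-greedy action with probability 1/2, each orthogonal
    action with probability 3/16, and the opposite action with 1/8."""
    best = max(actions, key=lambda a: Q[x, a])
    ortho = [a for a in actions if a not in (best, OPPOSITE[best])]
    choices = [best, ortho[0], ortho[1], OPPOSITE[best]]
    return random.choices(choices, weights=[8, 3, 3, 2])[0]   # sixteenths
```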
Other methods have also been proposed for dealing with exploration,
including making unvisited states intrinsically rewarding and using an "interval
estimate," which is related to the uncertainty in the estimate of a state's value
[Kaelbling, 1993].
11.5.3 Generalizing Over Inputs

For large problems it would be impractical to maintain a table like that used
in our grid-world example. Various researchers have suggested mechanisms for
computing Q values, given pattern inputs and actions. One method that
suggests itself is to use a neural network. For example, consider the simple linear
machine shown in Fig. 11.4.
Figure 11.4: A Net that Computes Q Values. The input vector X feeds R
dot-product units with trainable weight vectors W_1, . . . , W_R, one per action;
the ith unit outputs Q(a_i, X) = X • W_i.
Such a neural net could be used by an agent that has R actions to select
from. The Q values (as a function of the input pattern X and the action a_i) are
computed as dot products of weight vectors (one for each action) and the input
vector. Weight adjustments are made according to a TD(0) procedure to bring
the Q value for the action last selected closer to the sum of the immediate reward
(if any) and the (discounted) maximum Q value for the next input pattern.
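A sketch of this linear machine and its TD(0)-style update (the class and method names are ours):

```python
import numpy as np

class LinearQ:
    """R dot-product units: Q(a_i, X) = X . W_i, one weight vector per action."""
    def __init__(self, n_inputs, n_actions):
        self.W = np.zeros((n_actions, n_inputs))

    def q_values(self, x):
        return self.W @ x          # one Q value per action

    def update(self, x, a, r, x_next, c=0.1, gamma=0.9):
        # TD(0): move Q(a, X) toward r + gamma * max_b Q(b, X').
        target = r + gamma * np.max(self.W @ x_next)
        self.W[a] += c * (target - self.W[a] @ x) * x
```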
If the optimum Q values for the problem (whatever they might be) are more
complex than those that can be computed by a linear machine, a layered neural
network might be used. Sigmoid units in the final layer would compute Q values
in the range 0 to 1. The TD(0) method for updating Q values would then have to
be combined with a multilayer weight-changing rule, such as backpropagation.
Networks of this sort are able to aggregate different input vectors into regions
for which the same action should be performed. This kind of aggregation is an
example of what has been called structural credit assignment. Combining TD(λ)
and backpropagation is a method for dealing with both the temporal and the
structural credit assignment problems.
Interesting examples of delayedreinforcement training of simulated and
actual robots requiring structural credit assignment have been reported by
[Lin, 1992, Mahadevan & Connell, 1992].
11.5.4 Partially Observable States

So far, we have identified the input vector, X, with the actual state of the
environment. When the input vector results from an agent's perceptual apparatus
(as we assume it does), there is no reason to suppose that it uniquely identifies
the environmental state. Because of inevitable perceptual limitations, several
different environmental states might give rise to the same input vector. This
phenomenon has been referred to as perceptual aliasing. With perceptual
aliasing, we can no longer guarantee that Q learning will result in even useful action
policies, let alone optimal ones. Several researchers have attempted to deal with
this problem using a variety of methods, including attempting to model "hidden"
states by using internal memory [Lin, 1993]. That is, if some aspect of
the environment cannot be sensed currently, perhaps it was sensed once and
can be remembered by the agent. When such is the case, we no longer have a
Markov problem; that is, the next X vector, given any action, may depend on
a sequence of previous ones rather than just the immediately preceding one. It
might be possible to reinstate a Markov framework (over the X's) if X includes
not only current sensory percepts but also information from the agent's memory.
11.5.5 Scaling Problems

Several difficulties have so far prohibited wide application of reinforcement
learning to large problems. (The TD-gammon program, mentioned in the last
chapter, is probably unique in terms of success on a high-dimensional problem.)
We have already touched on some difficulties; these and others are summarized
below, with references to attempts to overcome them.
a. Exploration versus exploitation.
• use random actions
• favor states not visited recently
• separate the learning phase from the use phase
• employ a teacher to guide exploration
b. Slow time to convergence
• combine learning with prior knowledge; use estimates of Q values
(rather than random values) initially
• use a hierarchy of actions; learn primitive actions first, freeze the
useful sequences into macros, and then learn how to use the macros
• employ a teacher; use graded "lessons"—starting near the rewards
and then backing away—and use examples of good behavior [Lin, 1992]
• use more eﬃcient computations; e.g. do several updates per episode
[Moore & Atkeson, 1993]
c. Large state spaces
• use handcoded features
• use neural networks
• use nearestneighbor methods [Moore, 1990]
d. Temporal discounting problems. Using small γ can make the learner too
greedy for present rewards and indiﬀerent to the future; but using large γ
slows down learning.
• use a learning method based on average rewards [Schwartz, 1993]
e. No "transfer" of learning. What is learned depends on the reward structure;
if the rewards change, learning has to start over.
• Separate the learning into two parts: learn an "action model," which
predicts how actions change states (and is constant over all problems),
and then learn the "values" of states by reinforcement learning
for each different set of rewards. Sometimes the reinforcement
learning part can be replaced by a "planner" that uses the action
model to produce plans to achieve goals.
Also see other articles in the special issue on reinforcement learning: Machine
Learning, 8, May, 1992.
11.6 Bibliographical and Historical Remarks
To be added.
Chapter 12
Explanation-Based Learning

12.1 Deductive Learning
In the learning methods studied so far, typically the training set does not
exhaust the version space. Using logical terminology, we could say that the
classifier's output does not logically follow from the training set. In this sense,
these methods are inductive. In logic, a deductive system is one whose conclusions
logically follow from a set of input facts, if the system is sound.¹
To contrast inductive with deductive systems in a logical setting, suppose
we have a set of facts (the training set) that includes the following formulas:

{Round(Obj1), Round(Obj2), Round(Obj3), Round(Obj4),
Ball(Obj1), Ball(Obj2), Ball(Obj3), Ball(Obj4)}

A learning system that forms the conclusion (∀x)[Ball(x) ⊃ Round(x)] is
inductive. This conclusion may be useful (if there are no facts of the form
Ball(σ) ∧ ¬Round(σ)), but it does not logically follow from the facts. On the
other hand, if we had the facts Green(Obj5) and Green(Obj5) ⊃ Round(Obj5),
then we could logically conclude Round(Obj5). Making this conclusion and
saving it is an instance of deductive learning—a topic we study in this chapter.
Suppose that some logical proposition, φ, logically follows from some set of
facts, ∆. Under what circumstances might we say that the process of deducing
φ from ∆ results in our learning φ? In a sense, we implicitly knew φ all along,
since it was inherent in knowing ∆. Yet, φ might not be obvious given ∆, and
¹Logical reasoning systems that are not sound, for example those using nonmonotonic
reasoning, might themselves produce inductive conclusions that do not logically follow from
the input facts.
the deduction process to establish φ might have been arduous. Rather than have
to deduce φ again, we might want to save it, perhaps along with its deduction,
in case it is needed later. Shouldn’t that process count as learning? Dietterich
[Dietterich, 1990] has called this type of learning speedup learning.
Strictly speaking, speedup learning does not result in a system being able to
make decisions that, in principle, could not have been made before the learning
took place. Speedup learning simply makes it possible to make those decisions
more eﬃciently. But, in practice, this type of learning might make possible
certain decisions that might otherwise have been infeasible.
To take an extreme case, a chess player can be said to learn chess even though
optimal play is inherent in the rules of chess. On the surface, there seems to be
no real diﬀerence between the experiencebased hypotheses that a chess player
makes about what constitutes good play and the kind of learning we have been
studying so far.
As another example, suppose we are given some theorems about geometry
and are asked to prove that the sum of the angles of a right triangle is 180
degrees. Let us further suppose that the proof we constructed did not depend
on the given triangle being a right triangle; in that case we can learn a more
general fact. The learning technique that we are going to study next is related
to this example. It is called explanationbased learning (EBL). EBL can be
thought of as a process in which implicit knowledge is converted into explicit
knowledge.
In EBL, we specialize parts of a domain theory to explain a particular
example, then we generalize the explanation to produce another element of the
domain theory that will be useful on similar examples. This process is illustrated
in Fig. 12.1.
12.2 Domain Theories
Two types of information were present in the inductive methods we have studied:
the information inherent in the training samples and the information about the
domain that is implied by the “bias” (for example, the hypothesis set from which
we choose functions). The learning methods are successful only if the hypothesis
set is appropriate for the problem. Typically, the smaller the hypothesis set (that
is, the more a priori information we have about the function being sought), the
less dependent we are on information being supplied by a training set (that
is, fewer samples). A priori information about a problem can be expressed in
several ways. The methods we have studied so far restrict the hypotheses in a
rather direct way. A less direct method involves making assertions in a logical
language about the property we are trying to learn. A set of such assertions is
usually called a “domain theory.”
Suppose, for example, that we wanted to classify people according to whether
or not they were good credit risks. We might represent a person by a set of
properties (income, marital status, type of employment, etc.), assemble such
Figure 12.1: The EBL Process. A domain theory is specialized to prove that a
particular example, X, is P; the explanation (proof), produced by a complex
proof process, is then generalized into a new domain rule, "things 'like' X are
P," so that a similar case (Y is like X) can be handled by a trivial proof that
Y is P.
data about people who are known to be good and bad credit risks and train a
classiﬁer to make decisions. Or, we might go to a loan oﬃcer of a bank, ask him
or her what sorts of things s/he looks for in making a decision about a loan,
encode this knowledge into a set of rules for an expert system, and then use
the expert system to make decisions. The knowledge used by the loan oﬃcer
might have originated as a set of “policies” (the domain theory), but perhaps the
application of those policies was specialized and made more efficient through
experience with the special cases of loans made in his or her district.
12.3 An Example
To make our discussion more concrete, let’s consider the following fanciful
example. We want to find a way to classify robots as “robust” or not. The attributes
that we use to represent a robot might include some that are relevant to this
decision and some that are not.
Suppose we have a domain theory of logical sentences that, taken together,
help to define whether or not a robot can be classified as robust. (The same
domain theory may be useful for several other purposes also, but among other
things, it describes the concept “robust.”)
In this example, let’s suppose that our domain theory includes the sentences:
Fixes(u, u) ⊃ Robust(u)
(An individual that can ﬁx itself is robust.)
Sees(x, y) ∧ Habile(x) ⊃ Fixes(x, y)
(A habile individual that can see another entity can ﬁx that entity.)
Robot(w) ⊃ Sees(w, w)
(All robots can see themselves.)
R2D2(x) ⊃ Habile(x)
(R2D2-class individuals are habile.)
C3PO(x) ⊃ Habile(x)
(C3PO-class individuals are habile.)
. . .
(By convention, variables are assumed to be universally quantiﬁed.) We could
use theorem-proving methods operating on this domain theory to conclude
whether certain robots are robust. These methods might be computationally
quite expensive because extensive search may have to be performed to derive a
conclusion. But after having found a proof for some particular robot, we might
be able to derive some new sentence whose use allows a much faster conclusion.
We next show how such a new rule might be derived in this example. Suppose
we are given a number of facts about Num5, such as:
Robot(Num5)
R2D2(Num5)
Age(Num5, 5)
Manufacturer(Num5, GR)
. . .
Figure 12.2: A Proof Tree. (The tree derives Robust(Num5) from Fixes(Num5, Num5) via Fixes(u, u) ⊃ Robust(u); Fixes(Num5, Num5) follows from Sees(Num5, Num5) and Habile(Num5) via Sees(x, y) ∧ Habile(x) ⊃ Fixes(x, y); these in turn follow from the facts Robot(Num5) and R2D2(Num5) via Robot(w) ⊃ Sees(w, w) and R2D2(x) ⊃ Habile(x).)
We are also told that Robust(Num5) is true, but we nevertheless attempt to
ﬁnd a proof of that assertion using these facts about Num5 and the domain
theory. The facts about Num5 correspond to the features that we might use
to represent Num5. In this example, not all of them are relevant to a decision
about Robust(Num5). The relevant ones are those used or needed in proving
Robust(Num5) using the domain theory. The proof tree in Fig. 12.2 is one that
a typical theorem-proving system might produce.
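To make the mechanics concrete, here is a minimal sketch in Python of how a backward-chaining prover might construct such an explanation. The encoding (atoms as tuples of strings, with a leading ? marking variables) and all of the names are our own illustrative choices, not part of any system discussed in this chapter; the prover does no backtracking across antecedents, which suffices for this tiny theory.

```python
import itertools

# Domain theory: each rule is (antecedents, consequent); names starting
# with '?' are variables.  This tuple encoding is ours, chosen for brevity.
RULES = [
    ([("Fixes", "?u", "?u")], ("Robust", "?u")),
    ([("Sees", "?x", "?y"), ("Habile", "?x")], ("Fixes", "?x", "?y")),
    ([("Robot", "?w")], ("Sees", "?w", "?w")),
    ([("R2D2", "?x")], ("Habile", "?x")),
]

# Facts about Num5; only some of them turn out to be relevant.
FACTS = {("Robot", "Num5"), ("R2D2", "Num5"),
         ("Age", "Num5", "5"), ("Manufacturer", "Num5", "GR")}

def walk(t, s):
    """Follow variable bindings in substitution s to a representative."""
    while t.startswith("?") and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Unify atoms a and b under s; return an extended substitution or None."""
    if a[0] != b[0] or len(a) != len(b):
        return None
    s = dict(s)
    for x, y in zip(a[1:], b[1:]):
        x, y = walk(x, s), walk(y, s)
        if x == y:
            continue
        if x.startswith("?"):
            s[x] = y
        elif y.startswith("?"):
            s[y] = x
        else:
            return None
    return s

def subst(atom, s):
    """Apply substitution s to an atom."""
    return (atom[0],) + tuple(walk(t, s) for t in atom[1:])

fresh = itertools.count()

def rename(ants, con):
    """Give a rule fresh variable names for each use."""
    n = str(next(fresh))
    def r(a):
        return (a[0],) + tuple(t + n if t.startswith("?") else t for t in a[1:])
    return [r(a) for a in ants], r(con)

def prove(goal, s):
    """Depth-first backward chaining; returns (proof tree, substitution)
    or None.  No backtracking across antecedents; enough for this theory."""
    g = subst(goal, s)
    for fact in FACTS:                      # evaluable facts first
        s2 = unify(g, fact, s)
        if s2 is not None:
            return (fact, []), s2
    for rule in RULES:
        ants, con = rename(*rule)
        s2 = unify(g, con, s)
        if s2 is None:
            continue
        subtrees = []
        for a in ants:                      # prove each antecedent in turn
            result = prove(a, s2)
            if result is None:
                break
            tree, s2 = result
            subtrees.append(tree)
        else:
            return (subst(g, s2), subtrees), s2
    return None

proof, s = prove(("Robust", "Num5"), {})
print(proof[0])   # ('Robust', 'Num5'); the proof has the shape of Fig. 12.2
```

The nested tuples returned by prove record exactly the tree of Fig. 12.2, with the facts Robot(Num5) and R2D2(Num5) at its tips.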
In the language of EBL, this proof is an explanation for the fact
Robust(Num5). We see from this explanation that the only facts about Num5
that were used were Robot(Num5) and R2D2(Num5). In fact, we could
construct the following rule from this explanation:
Robot(Num5) ∧ R2D2(Num5) ⊃ Robust(Num5)
The explanation has allowed us to prune some attributes about Num5 that are
irrelevant (at least for deciding Robust(Num5)). This type of pruning is the ﬁrst
sense in which an explanation is used to generalize the classiﬁcation problem.
([DeJong & Mooney, 1986] call this aspect of explanationbased learning feature
elimination.) But the rule we extracted from the explanation applies only to
Num5. There might be little value in learning that rule since it is so speciﬁc.
Can it be generalized so that it can be applied to other individuals as well?
Examination of the proof shows that the same proof structure, using the
same sentences from the domain theory, could be used independently of whether
we are talking about Num5 or some other individual. We can generalize the
proof by a process that replaces constants in the tip nodes of the proof tree
with variables and works upward—using uniﬁcation to constrain the values of
variables as needed to obtain a proof.
In this example, we replace Robot(Num5) by Robot(r) and R2D2(Num5)
by R2D2(s) and redo the proof—using the explanation proof as a template.
Note that we use diﬀerent values for the two diﬀerent occurrences of Num5 at
the tip nodes. Doing so sometimes results in more general, but nevertheless
valid rules. We now apply the rules used in the proof in the forward direction,
keeping track of the substitutions imposed by the most general uniﬁers used in
the proof. (Note that we always substitute terms that are already in the tree for
variables in rules.) This process results in the generalized proof tree shown in
Fig. 12.3. Note that the occurrence of Sees(r, r) as a node in the tree forces the
unification of x with y in the domain rule Sees(x, y) ∧ Habile(x) ⊃ Fixes(x, y).
The substitutions are then applied to the variables in the tip nodes and the root
node to yield the general rule: Robot(r) ∧ R2D2(r) ⊃ Robust(r).
This rule is the end result of EBL for this example. The process
by which Num5 in this example was generalized to a variable is what
[DeJong & Mooney, 1986] call identity elimination (the precise identity of Num5
turned out to be irrelevant). (The generalization process described in this
example is based on that of [DeJong & Mooney, 1986] and differs from that of
[Mitchell, et al., 1986]. It is also similar to that used in [Fikes, et al., 1972].)
Clearly, under certain assumptions, this general rule is more easily used to con
clude Robust about an individual than the original proof process was.
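Continuing the sketch above (and reusing its RULES table and the unify, subst, and rename helpers), the generalization step can be mimicked by replaying the structure of the explanation with fresh variables at the tip nodes and letting unification propagate the bindings upward. The skeleton encoding is again our own invention, not [DeJong & Mooney, 1986]'s notation.

```python
# The explanation skeleton records which domain rule justified each node
# of the proof of Robust(Num5); None marks a tip node.
R_ROBUST, R_FIXES, R_SEES, R_HABILE = RULES
skeleton = (R_ROBUST, [(R_FIXES, [(R_SEES, [None]),         # tip: Robot(.)
                                  (R_HABILE, [None])])])    # tip: R2D2(.)

tips = []

def replay(node, goal, s):
    """Re-derive goal using exactly the rules of the explanation,
    accumulating the most general unifier in s."""
    if node is None:           # a tip node: record the variablized premise
        tips.append(goal)
        return s
    (ants, con), children = node
    ants, con = rename(ants, con)
    s = unify(goal, con, s)    # must succeed: the old proof is our template
    assert s is not None
    for ant, child in zip(ants, children):
        s = replay(child, ant, s)
    return s

s = replay(skeleton, ("Robust", "?r"), {})
print([subst(t, s) for t in tips], "⊃", subst(("Robust", "?r"), s))
# prints something like [('Robot', '?x9'), ('R2D2', '?x9')] ⊃ ('Robust', '?x9'),
# i.e., Robot(r) ∧ R2D2(r) ⊃ Robust(r)
```

Note that the two tip nodes begin with distinct fresh variables; it is the occurrence of Sees(r, r), which forces x and y to unify, that makes them coincide in the final rule, exactly as described above.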
It is important to note that we could have derived the general rule from the
domain theory without using the example. (In the literature, doing so is called
static analysis [Etzioni, 1991].) In fact, the example told us nothing new other
than what it told us about Num5. The sole role of the example in this instance
of EBL was to provide a template for a proof to help guide the generalization
process. Basing the generalization process on examples helps to ensure that we
learn rules matched to the distribution of problems that occur.
There are a number of qualiﬁcations and elaborations about EBL that need
to be mentioned.
12.4 Evaluable Predicates
The domain theory includes a number of predicates other than the one occurring
in the formula we are trying to prove and other than those that might
customarily be used to describe an individual. One might note, for example, that if we
used Habile(Num5) to describe Num5, the proof would have been shorter. Why
didn’t we? The situation is analogous to that of using a data base augmented
by logical rules. In the latter application, the formulas in the actual data base
are “extensional,” and those in the logical rules are “intensional.”

Figure 12.3: A Generalized Proof Tree. (The tree mirrors the proof of Fig. 12.2 with variables at the tip nodes, Robot(r) and R2D2(s); the unifiers {r/w}, {s/x}, {r/x, r/y, r/s}, and {r/u} propagate upward, so that R2D2(s) becomes R2D2(r) after applying {r/s} and the root is Robust(r).)

This usage
reﬂects the fact that the predicates in the data base part are deﬁned by their
extension—we explicitly list all the tuples satisfying a relation. The logical
rules serve to connect the data base predicates with higher level abstractions
that are described (if not deﬁned) by the rules. We typically cannot look up
the truth values of formulas containing these intensional predicates; they have
to be derived using the rules and the database.
The EBL process assumes something similar. The domain theory is useful
for connecting formulas that we might want to prove with those whose truth
values can be “looked up” or otherwise evaluated. In the EBL literature, such
formulas satisfy what is called the operationality criterion. Perhaps another
analogy might be to neural networks. The evaluable predicates correspond to
the components of the input pattern vector; the predicates in the domain theory
correspond to the hidden units. Finding the new rule corresponds to ﬁnding a
simpler expression for the formula to be proved in terms only of the evaluable
predicates.
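As a toy illustration of the operationality criterion (continuing the Python sketches above, with the evaluable-predicate list taken from our robot example), a learned rule might be accepted only when every one of its antecedents is evaluable:

```python
# Predicates whose truth values can be looked up directly; Habile, Sees,
# Fixes, and Robust are intensional and must be derived from the rules.
EVALUABLE = {"Robot", "R2D2", "C3PO", "Age", "Manufacturer"}

def operational(antecedents):
    """Accept a candidate rule only if all its antecedents are evaluable."""
    return all(atom[0] in EVALUABLE for atom in antecedents)

print(operational([("Robot", "?r"), ("R2D2", "?r")]))   # True
print(operational([("Habile", "?r")]))                  # False: intensional
```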
12.5 More General Proofs
Examining the domain theory for our example reveals that an alternative rule
might have been: Robot(u) ∧ C3PO(u) ⊃ Robust(u). Such a rule might
have resulted if we were given {C3PO(Num6), Robot(Num6), . . .} and proved
Robust(Num6). After considering these two examples (Num5 and Num6),
the question arises, do we want to generalize the two rules to something like:
Robot(u) ∧ [C3PO(u) ∨ R2D2(u)] ⊃ Robust(u)? Doing so is an example of what
[DeJong & Mooney, 1986] call structural generalization (via disjunctive
augmentation).
Adding disjunctions for every alternative proof can soon become cumbersome
and destroy any efficiency advantage of EBL. In our example, the efficiency
might be recovered if there were another evaluable predicate, say Bionic(u), such
that the domain theory also contained R2D2(x) ⊃ Bionic(x) and C3PO(x) ⊃
Bionic(x). After seeing a number of similar examples, we might be willing to
induce the formula Bionic(u) ⊃ [C3PO(u) ∨ R2D2(u)] in which case the rule
with the disjunction could be replaced with Robot(u) ∧Bionic(u) ⊃ Robust(u).
12.6 Utility of EBL
It is well known in theorem proving that the complexity of ﬁnding a proof
depends both on the number of formulas in the domain theory and on the depth
of the shortest proof. Adding a new rule decreases the depth of the shortest
proof but it also increases the number of formulas in the domain theory. In
realistic applications, the added rules will be relevant for some tasks and not for
others. Thus, it is unclear whether the overall utility of the new rules will turn
out to be positive. EBL methods have been applied in several settings, usually
with positive utility. (See [Minton, 1990] for an analysis.)
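The tradeoff can be made concrete with a toy calculation in the spirit of the utility measure analyzed in [Minton, 1990]: savings weighted by application frequency, less the cost of repeatedly testing whether the rule matches. The function name and the numbers below are invented for illustration.

```python
def rule_utility(avg_savings, application_freq, avg_match_cost):
    """Estimated net benefit, per problem, of retaining a learned rule."""
    return avg_savings * application_freq - avg_match_cost

# A rule that saves 50 ms of search on the 10% of problems where it fires,
# but costs 3 ms to match on every problem, is still worth keeping:
print(rule_utility(avg_savings=50.0, application_freq=0.1,
                   avg_match_cost=3.0))    # 2.0 > 0: keep the rule

# The same rule firing on only 2% of problems should be discarded:
print(rule_utility(50.0, 0.02, 3.0))       # -2.0 < 0: discard
```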
12.7 Applications
There have been several applications of EBL methods. We mention two here,
namely the formation of macro-operators in automatic plan generation and
learning how to control search.
12.7.1 Macro-Operators in Planning
In automatic planning systems, efficiency can sometimes be enhanced by
chaining together a sequence of operators into macro-operators. We show an
example of a process for creating macro-operators based on techniques explored by
[Fikes, et al., 1972].
Referring to Fig. 12.4, consider the problem of ﬁnding a plan for a robot in
room R1 to fetch a box, B1, by going to an adjacent room, R2, and pushing it
back to R1. The goal for the robot is INROOM(B1, R1), and the facts that
are true in the initial state are listed in the ﬁgure.
Figure 12.4: Initial State of a Robot Problem. (Three rooms, R1, R2, and R3, are connected by doors D1 and D2; the box B1 is in R2 and the robot is in R1. Initial state: INROOM(ROBOT, R1), INROOM(B1, R2), CONNECTS(D1, R1, R2), CONNECTS(D1, R2, R1), . . .)
We will construct the plan from a set of STRIPS operators that include:
GOTHRU(d, r1, r2)
Preconditions: INROOM(ROBOT, r1), CONNECTS(d, r1, r2)
Delete list: INROOM(ROBOT, r1)
Add list: INROOM(ROBOT, r2)
PUSHTHRU(b, d, r1, r2)
Preconditions: INROOM(ROBOT, r1), CONNECTS(d, r1, r2), INROOM(b, r1)
Delete list: INROOM(ROBOT, r1), INROOM(b, r1)
Add list: INROOM(ROBOT, r2), INROOM(b, r2)
A backwardreasoning STRIPS system might produce the plan shown in
Fig. 12.5. We show there the main goal and the subgoals along a solution path.
(The conditions in each subgoal that are true in the initial state are shown
underlined.) The preconditions for this plan, true in the initial state, are:
INROOM(ROBOT, R1)
CONNECTS(D1, R1, R2)
CONNECTS(D1, R2, R1)
INROOM(B1, R2)
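These four preconditions can be computed mechanically by composing the two operator instances. The sketch below uses the standard set-algebra composition of STRIPS operators, but the encoding (ground literals as strings, the Op class, the chain function) is our own; it illustrates only the bookkeeping, not the triangle-table machinery of [Fikes, et al., 1972].

```python
from dataclasses import dataclass

@dataclass
class Op:
    name: str
    pre: frozenset     # preconditions
    delete: frozenset  # delete list
    add: frozenset     # add list

def chain(o1, o2):
    """Compose o1 followed by o2 into a single macro-operator."""
    # o1 must not destroy a precondition of o2 without re-adding it.
    assert not (o2.pre & (o1.delete - o1.add))
    return Op(name=o1.name + "; " + o2.name,
              pre=o1.pre | (o2.pre - o1.add),
              delete=(o1.delete - o2.add) | o2.delete,
              add=(o1.add - o2.delete) | o2.add)

gothru = Op("GOTHRU(D1,R1,R2)",
            pre=frozenset({"INROOM(ROBOT,R1)", "CONNECTS(D1,R1,R2)"}),
            delete=frozenset({"INROOM(ROBOT,R1)"}),
            add=frozenset({"INROOM(ROBOT,R2)"}))

pushthru = Op("PUSHTHRU(B1,D1,R2,R1)",
              pre=frozenset({"INROOM(ROBOT,R2)", "CONNECTS(D1,R2,R1)",
                             "INROOM(B1,R2)"}),
              delete=frozenset({"INROOM(ROBOT,R2)", "INROOM(B1,R2)"}),
              add=frozenset({"INROOM(ROBOT,R1)", "INROOM(B1,R1)"}))

macro = chain(gothru, pushthru)
print(sorted(macro.pre))
# ['CONNECTS(D1,R1,R2)', 'CONNECTS(D1,R2,R1)',
#  'INROOM(B1,R2)', 'INROOM(ROBOT,R1)']  -- the four preconditions above
```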
Saving this speciﬁc plan, valid only for the speciﬁc constants it mentions, would
not be as useful as would be saving a more general one. We ﬁrst generalize
these preconditions by substituting variables for constants. We then follow the
structure of the speciﬁc plan to produce the generalized plan shown in Fig. 12.6
that achieves INROOM(b1, r4). Note that the generalized plan does not require
pushing the box back to the place where the robot started. The preconditions
for the generalized plan are:
INROOM(ROBOT, r1)
CONNECTS(d1, r1, r2)
CONNECTS(d2, r2, r4)
INROOM(b1, r2)
Figure 12.5: A Plan for the Robot Problem. (The figure shows the main goal INROOM(B1, R1) and the subgoals along a solution path: PUSHTHRU(B1, d, r1, R1) reduces the goal to INROOM(ROBOT, r1), CONNECTS(d, r1, R1), INROOM(B1, r1), which under the substitution {R2/r1, D1/d} becomes INROOM(ROBOT, R2), CONNECTS(D1, R2, R1), INROOM(B1, R2); GOTHRU(d2, r3, R2) with {R1/r3, D1/d2} then reduces these to conditions true in the initial state. PLAN: GOTHRU(D1, R1, R2), then PUSHTHRU(B1, D1, R2, R1).)
Another related technique that chains together sequences of operators to
form more general ones is the chunking mechanism in Soar [Laird, et al., 1986].
Figure 12.6: A Generalized Plan. (GOTHRU(d1, r1, r2) followed by PUSHTHRU(b1, d2, r2, r4) achieves INROOM(b1, r4); the subgoal lists mirror those of Fig. 12.5 with constants replaced by variables.)
12.7.2 Learning Search Control Knowledge
Besides their use in creating macro-operators, EBL methods can be used to
improve the eﬃciency of planning in another way also. In his system called
PRODIGY, Minton proposed using EBL to learn eﬀective ways to control
search [Minton, 1988]. PRODIGY is a STRIPS-like system that solves planning
problems in the blocks world, in a simple mobile-robot world, and in job-shop
scheduling. PRODIGY has a domain theory involving both the domain of the
problem and a simple (meta) theory about planning. Its meta-theory includes
statements about whether a control choice (a subgoal to work on, an operator
to apply, and so on) succeeds or fails. After producing a plan, it analyzes its
successful and its unsuccessful choices and attempts to explain them in terms
of its domain theory. Using an EBLlike process, it is able to produce useful
control rules such as:
IF (AND (CURRENT-NODE node)
        (CANDIDATE-GOAL node (ON x y))
        (CANDIDATE-GOAL node (ON y z)))
THEN (PREFER GOAL (ON y z) TO (ON x y))
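The effect of such a preference is easy to simulate: among candidate ON goals at a node, any goal whose supporting block must itself still be stacked is deferred, so towers get built from the bottom up. The encoding below is only illustrative and is not PRODIGY's rule language.

```python
def order_goals(goals):
    """Apply the preference: work on ON(y, z) before ON(x, y), i.e. defer
    any ON(a, b) whose support b still has to be placed on something."""
    unplaced = {a for (_, a, _) in goals}   # blocks yet to be stacked
    return sorted(goals, key=lambda g: g[2] in unplaced)

goals = [("ON", "A", "B"), ("ON", "B", "C")]
print(order_goals(goals))   # [('ON', 'B', 'C'), ('ON', 'A', 'B')]
```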
PRODIGY keeps statistics on how often these learned rules are used, their
savings (in time to ﬁnd plans), and their cost of application. It saves only the
rules whose utility, thus measured, is judged to be high. Minton [Minton, 1990]
has shown that there is an overall advantage of using these rules (as against not
having any rules and as against hand-coded search control rules).
12.8 Bibliographical and Historical Remarks
To be added.
Bibliography
[Acorn & Walden, 1992] Acorn, T., and Walden, S., “SMART: Support Man
agement Automated Reasoning Technology for COMPAQ Customer Ser
vice,” Proc. Fourth Annual Conf. on Innovative Applications of Artiﬁcial
Intelligence, Menlo Park, CA: AAAI Press, 1992.
[Aha, 1991] Aha, D., Kibler, D., and Albert, M., “InstanceBased Learning
Algorithms,” Machine Learning, 6, 3766, 1991.
[Anderson & Bower, 1973] Anderson, J. R., and Bower, G. H., Human Asso
ciative Memory, Hillsdale, NJ: Erlbaum, 1973.
[Anderson, 1958] Anderson, T. W., An Introduction to Multivariate Statistical
Analysis, New York: John Wiley, 1958.
[Barto, Bradtke, & Singh, 1994] Barto, A., Bradtke, S., and Singh, S., “Learn
ing to Act Using RealTime Dynamic Programming,” to appear in Ar
tiﬁcial Intelligence, 1994.
[Baum & Haussler, 1989] Baum, E, and Haussler, D., “What Size Net Gives
Valid Generalization?” Neural Computation, 1, pp. 151160, 1989.
[Baum, 1994] Baum, E., “When Are kNearest Neighbor and Backpropagation
Accurate for FeasibleSized Sets of Examples?” in Hanson, S., Drastal,
G., and Rivest, R., (eds.), Computational Learning Theory and Natural
Learning Systems, Volume 1: Constraints and Prospects, pp. 415442,
Cambridge, MA: MIT Press, 1994.
[Bellman, 1957] Bellman, R. E., Dynamic Programming, Princeton: Princeton
University Press, 1957.
[Blumer, et al., 1987] Blumer, A., et al., “Occam’s Razor,” Info. Process. Lett.,
vol 24, pp. 37780, 1987.
[Blumer, et al., 1990] Blumer, A., et al., “Learnability and the Vapnik
Chervonenkis Dimension,” JACM, 1990.
[Bollinger & Duﬃe, 1988] Bollinger, J., and Duﬃe, N., Computer Control of
Machines and Processes, Reading, MA: AddisonWesley, 1988.
[Brain, et al., 1962] Brain, A. E., et al., “Graphical Data Processing Research Study and Experimental Investigation,” Report No. 8 (pp. 9-13) and No. 9 (pp. 3-10), Contract DA 36-039 SC-78343, SRI International, Menlo Park, CA, June 1962 and September 1962.

[Breiman, et al., 1984] Breiman, L., Friedman, J., Olshen, R., and Stone, C., Classification and Regression Trees, Monterey, CA: Wadsworth, 1984.

[Brent, 1990] Brent, R. P., “Fast Training Algorithms for Multi-Layer Neural Nets,” Numerical Analysis Project Manuscript NA-90-03, Computer Science Department, Stanford University, Stanford, CA 94305, March 1990.

[Bryson & Ho, 1969] Bryson, A., and Ho, Y.-C., Applied Optimal Control, New York: Blaisdell, 1969.

[Buchanan & Wilkins, 1993] Buchanan, B., and Wilkins, D., (eds.), Readings in Knowledge Acquisition and Learning, San Francisco: Morgan Kaufmann, 1993.

[Carbonell, 1983] Carbonell, J., “Learning by Analogy,” in Machine Learning: An Artificial Intelligence Approach, Michalski, R., Carbonell, J., and Mitchell, T., (eds.), San Francisco: Morgan Kaufmann, 1983.

[Cheeseman, et al., 1988] Cheeseman, P., et al., “AutoClass: A Bayesian Classification System,” Proc. Fifth Intl. Workshop on Machine Learning, San Mateo, CA: Morgan Kaufmann, 1988. Reprinted in Shavlik, J., and Dietterich, T., Readings in Machine Learning, San Francisco: Morgan Kaufmann, 1990, pp. 296-306.

[Cover & Hart, 1967] Cover, T., and Hart, P., “Nearest Neighbor Pattern Classification,” IEEE Trans. on Information Theory, 13, 21-27, 1967.

[Cover, 1965] Cover, T., “Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition,” IEEE Trans. Elec. Comp., EC-14, 326-334, June, 1965.

[Dasarathy, 1991] Dasarathy, B. V., Nearest Neighbor Pattern Classification Techniques, IEEE Computer Society Press, 1991.

[Dayan & Sejnowski, 1994] Dayan, P., and Sejnowski, T., “TD(λ) Converges with Probability 1,” Machine Learning, 14, pp. 295-301, 1994.

[Dayan, 1992] Dayan, P., “The Convergence of TD(λ) for General λ,” Machine Learning, 8, 341-362, 1992.

[DeJong & Mooney, 1986] DeJong, G., and Mooney, R., “Explanation-Based Learning: An Alternative View,” Machine Learning, 1:145-176, 1986. Reprinted in Shavlik, J., and Dietterich, T., Readings in Machine Learning, San Francisco: Morgan Kaufmann, 1990, pp. 452-467.
[Dietterich & Bakiri, 1991] Dietterich, T. G., and Bakiri, G., “ErrorCorrecting
Output Codes: A General Method for Improving Multiclass Induc
tive Learning Programs,” Proc. Ninth Nat. Conf. on A.I., pp. 572577,
AAAI91, MIT Press, 1991.
[Dietterich, et al., 1990] Dietterich, T., Hild, H., and Bakiri, G., “A Compara
tive Study of ID3 and Backpropagation for English TexttoSpeech Map
ping,” Proc. Seventh Intl. Conf. Mach. Learning, Porter, B. and Mooney,
R. (eds.), pp. 2431, San Francisco: Morgan Kaufmann, 1990.
[Dietterich, 1990] Dietterich, T., “Machine Learning,” Annu. Rev. Comput.
Sci., 4:255306, Palo Alto: Annual Reviews Inc., 1990.
[Duda & Fossum, 1966] Duda, R. O., and Fossum, H., “Pattern Classiﬁcation
by Iteratively Determined Linear and Piecewise Linear Discriminant
Functions,” IEEE Trans. on Elect. Computers, vol. EC15, pp. 220232,
April, 1966.
[Duda & Hart, 1973] Duda, R. O., and Hart, P.E., Pattern Classiﬁcation and
Scene Analysis, New York: Wiley, 1973.
[Duda, 1966] Duda, R. O., “Training a Linear Machine on Mislabeled Patterns,”
SRI Tech. Report prepared for ONR under Contract 3438(00), SRI In
ternational, Menlo Park, CA, April 1966.
[Efron, 1982] Efron, B., The Jackknife, the Bootstrap and Other Resampling
Plans, Philadelphia: SIAM, 1982.
[Ehrenfeucht, et al., 1988] Ehrenfeucht, A., et al., “A General Lower Bound on
the Number of Examples Needed for Learning,” in Proc. 1988 Workshop
on Computational Learning Theory, pp. 110120, San Francisco: Morgan
Kaufmann, 1988.
[Etzioni, 1991] Etzioni, O., “STATIC: A ProblemSpace Compiler for
PRODIGY,” Proc. of Ninth National Conf. on Artiﬁcial Intelligence,
pp. 533540, Menlo Park: AAAI Press, 1991.
[Etzioni, 1993] Etzioni, O., “A Structural Theory of ExplanationBased Learn
ing,” Artiﬁcial Intelligence, 60:1, pp. 93139, March, 1993.
[Evans & Fisher, 1992] Evans, B., and Fisher, D., Process Delay Analyses Using
DecisionTree Induction, Tech. Report CS9206, Department of Com
puter Science, Vanderbilt University, TN, 1992.
[Fahlman & Lebiere, 1990] Fahlman, S., and Lebiere, C., “The Cascade
Correlation Learning Architecture,” in Touretzky, D., (ed.), Advances in
Neural Information Processing Systems, 2, pp. 524532, San Francisco:
Morgan Kaufmann, 1990.
[Fayyad, et al., 1993] Fayyad, U. M., Weir, N., and Djorgovski, S., “SKICAT: A Machine Learning System for Automated Cataloging of Large Scale Sky Surveys,” in Proc. Tenth Intl. Conf. on Machine Learning, pp. 112-119, San Francisco: Morgan Kaufmann, 1993. (For a longer version of this paper see: Fayyad, U., Djorgovski, G., and Weir, N., “Automating the Analysis and Cataloging of Sky Surveys,” in Fayyad, U., et al. (eds.), Advances in Knowledge Discovery and Data Mining, Chapter 19, pp. 471ff., Cambridge: The MIT Press, March, 1996.)

[Feigenbaum, 1961] Feigenbaum, E. A., “The Simulation of Verbal Learning Behavior,” Proceedings of the Western Joint Computer Conference, 19:121-132, 1961.

[Fikes, et al., 1972] Fikes, R., Hart, P., and Nilsson, N., “Learning and Executing Generalized Robot Plans,” Artificial Intelligence, pp. 251-288, 1972. Reprinted in Shavlik, J., and Dietterich, T., Readings in Machine Learning, San Francisco: Morgan Kaufmann, 1990, pp. 468-486.

[Fisher, 1987] Fisher, D., “Knowledge Acquisition via Incremental Conceptual Clustering,” Machine Learning, 2:139-172, 1987. Reprinted in Shavlik, J., and Dietterich, T., Readings in Machine Learning, San Francisco: Morgan Kaufmann, 1990, pp. 267-283.

[Friedman, et al., 1977] Friedman, J. H., Bentley, J. L., and Finkel, R. A., “An Algorithm for Finding Best Matches in Logarithmic Expected Time,” ACM Trans. on Math. Software, 3(3):209-226, September 1977.

[Fu, 1994] Fu, L., Neural Networks in Artificial Intelligence, New York: McGraw-Hill, 1994.

[Gallant, 1986] Gallant, S. I., “Optimal Linear Discriminants,” in Eighth International Conf. on Pattern Recognition, pp. 849-852, New York: IEEE, 1986.

[Genesereth & Nilsson, 1987] Genesereth, M., and Nilsson, N., Logical Foundations of Artificial Intelligence, San Francisco: Morgan Kaufmann, 1987.

[Gluck & Rumelhart, 1989] Gluck, M., and Rumelhart, D., Neuroscience and Connectionist Theory, The Developments in Connectionist Theory, Hillsdale, NJ: Erlbaum Associates, 1989.

[Hammerstrom, 1993] Hammerstrom, D., “Neural Networks at Work,” IEEE Spectrum, pp. 26-32, June 1993.

[Haussler, 1988] Haussler, D., “Quantifying Inductive Bias: AI Learning Algorithms and Valiant’s Learning Framework,” Artificial Intelligence, 36:177-221, 1988. Reprinted in Shavlik, J., and Dietterich, T., Readings in Machine Learning, San Francisco: Morgan Kaufmann, 1990, pp. 96-107.
[Haussler, 1990] Haussler, D., “Probably Approximately Correct Learning,” Proc. Eighth Nat. Conf. on AI, pp. 1101-1108, Cambridge, MA: MIT Press, 1990.

[Hebb, 1949] Hebb, D. O., The Organization of Behaviour, New York: John Wiley, 1949.

[Hertz, Krogh, & Palmer, 1991] Hertz, J., Krogh, A., and Palmer, R., Introduction to the Theory of Neural Computation, Lecture Notes, vol. 1, Santa Fe Inst. Studies in the Sciences of Complexity, New York: Addison-Wesley, 1991.

[Hirsh, 1994] Hirsh, H., “Generalizing Version Spaces,” Machine Learning, 17, 5-45, 1994.

[Holland, 1975] Holland, J., Adaptation in Natural and Artificial Systems, Ann Arbor: The University of Michigan Press, 1975. (Second edition printed in 1992 by MIT Press, Cambridge, MA.)

[Holland, 1986] Holland, J. H., “Escaping Brittleness: The Possibilities of General-Purpose Learning Algorithms Applied to Parallel Rule-Based Systems,” in Michalski, R., Carbonell, J., and Mitchell, T., (eds.), Machine Learning: An Artificial Intelligence Approach, Volume 2, chapter 20, San Francisco: Morgan Kaufmann, 1986.

[Hunt, Marin, & Stone, 1966] Hunt, E., Marin, J., and Stone, P., Experiments in Induction, New York: Academic Press, 1966.

[Jabbour, K., et al., 1987] Jabbour, K., et al., “ALFA: Automated Load Forecasting Assistant,” Proc. of the IEEE Power Engineering Society Summer Meeting, San Francisco, CA, 1987.

[John, 1995] John, G., “Robust Linear Discriminant Trees,” Proc. of the Conf. on Artificial Intelligence and Statistics, Ft. Lauderdale, FL, January, 1995.

[Kaelbling, 1993] Kaelbling, L. P., Learning in Embedded Systems, Cambridge, MA: MIT Press, 1993.

[Kohavi, 1994] Kohavi, R., “Bottom-Up Induction of Oblivious Read-Once Decision Graphs,” Proc. of European Conference on Machine Learning (ECML-94), 1994.

[Kolodner, 1993] Kolodner, J., Case-Based Reasoning, San Francisco: Morgan Kaufmann, 1993.

[Koza, 1992] Koza, J., Genetic Programming: On the Programming of Computers by Means of Natural Selection, Cambridge, MA: MIT Press, 1992.

[Koza, 1994] Koza, J., Genetic Programming II: Automatic Discovery of Reusable Programs, Cambridge, MA: MIT Press, 1994.
[Laird, et al., 1986] Laird, J., Rosenbloom, P., and Newell, A., “Chunking in Soar: The Anatomy of a General Learning Mechanism,” Machine Learning, 1, pp. 11-46, 1986. Reprinted in Buchanan, B., and Wilkins, D., (eds.), Readings in Knowledge Acquisition and Learning, pp. 518-535, San Francisco: Morgan Kaufmann, 1993.

[Langley, 1992] Langley, P., “Areas of Application for Machine Learning,” Proc. of Fifth Int’l. Symp. on Knowledge Engineering, Sevilla, 1992.

[Langley, 1996] Langley, P., Elements of Machine Learning, San Francisco: Morgan Kaufmann, 1996.

[Lavrač & Džeroski, 1994] Lavrač, N., and Džeroski, S., Inductive Logic Programming, Chichester, England: Ellis Horwood, 1994.

[Lin, 1992] Lin, L., “Self-Improving Reactive Agents Based on Reinforcement Learning, Planning, and Teaching,” Machine Learning, 8, 293-321, 1992.

[Lin, 1993] Lin, L., “Scaling Up Reinforcement Learning for Robot Control,” Proc. Tenth Intl. Conf. on Machine Learning, pp. 182-189, San Francisco: Morgan Kaufmann, 1993.

[Littlestone, 1988] Littlestone, N., “Learning Quickly When Irrelevant Attributes Abound: A New Linear-Threshold Algorithm,” Machine Learning, 2:285-318, 1988.

[Maass & Turán, 1994] Maass, W., and Turán, G., “How Fast Can a Threshold Gate Learn?,” in Hanson, S., Drastal, G., and Rivest, R., (eds.), Computational Learning Theory and Natural Learning Systems, Volume 1: Constraints and Prospects, pp. 381-414, Cambridge, MA: MIT Press, 1994.

[Mahadevan & Connell, 1992] Mahadevan, S., and Connell, J., “Automatic Programming of Behavior-Based Robots Using Reinforcement Learning,” Artificial Intelligence, 55, pp. 311-365, 1992.

[Marchand & Golea, 1993] Marchand, M., and Golea, M., “On Learning Simple Neural Concepts: From Halfspace Intersections to Neural Decision Lists,” Network, 4:67-85, 1993.

[McCulloch & Pitts, 1943] McCulloch, W. S., and Pitts, W. H., “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics, Vol. 5, pp. 115-133, Chicago: University of Chicago Press, 1943.

[Michie, 1992] Michie, D., “Some Directions in Machine Intelligence,” unpublished manuscript, The Turing Institute, Glasgow, Scotland, 1992.

[Minton, 1988] Minton, S., Learning Search Control Knowledge: An Explanation-Based Approach, Boston, MA: Kluwer Academic Publishers, 1988.
[Minton, 1990] Minton, S., “Quantitative Results Concerning the Utility of Explanation-Based Learning,” Artificial Intelligence, 42, pp. 363-392, 1990. Reprinted in Shavlik, J., and Dietterich, T., Readings in Machine Learning, San Francisco: Morgan Kaufmann, 1990, pp. 573-587.

[Mitchell, et al., 1986] Mitchell, T., et al., “Explanation-Based Generalization: A Unifying View,” Machine Learning, 1:1, 1986. Reprinted in Shavlik, J., and Dietterich, T., Readings in Machine Learning, San Francisco: Morgan Kaufmann, 1990, pp. 435-451.

[Mitchell, 1982] Mitchell, T., “Generalization as Search,” Artificial Intelligence, 18:203-226, 1982. Reprinted in Shavlik, J., and Dietterich, T., Readings in Machine Learning, San Francisco: Morgan Kaufmann, 1990, pp. 96-107.

[Moore & Atkeson, 1993] Moore, A., and Atkeson, C., “Prioritized Sweeping: Reinforcement Learning with Less Data and Less Time,” Machine Learning, 13, pp. 103-130, 1993.

[Moore, et al., 1994] Moore, A. W., Hill, D. J., and Johnson, M. P., “An Empirical Investigation of Brute Force to Choose Features, Smoothers, and Function Approximators,” in Hanson, S., Judd, S., and Petsche, T., (eds.), Computational Learning Theory and Natural Learning Systems, Vol. 3, Cambridge: MIT Press, 1994.

[Moore, 1990] Moore, A., Efficient Memory-based Learning for Robot Control, PhD Thesis; Technical Report No. 209, Computer Laboratory, University of Cambridge, October, 1990.

[Moore, 1992] Moore, A., “Fast, Robust Adaptive Control by Learning Only Forward Models,” in Moody, J., Hanson, S., and Lippmann, R., (eds.), Advances in Neural Information Processing Systems 4, San Francisco: Morgan Kaufmann, 1992.

[Mueller & Page, 1988] Mueller, R., and Page, R., Symbolic Computing with Lisp and Prolog, New York: John Wiley & Sons, 1988.

[Muggleton, 1991] Muggleton, S., “Inductive Logic Programming,” New Generation Computing, 8, pp. 295-318, 1991.

[Muggleton, 1992] Muggleton, S., Inductive Logic Programming, London: Academic Press, 1992.

[Muroga, 1971] Muroga, S., Threshold Logic and its Applications, New York: Wiley, 1971.

[Natarajan, 1991] Natarajan, B., Machine Learning: A Theoretical Approach, San Francisco: Morgan Kaufmann, 1991.
[Nilsson, 1965] Nilsson, N. J., “Theoretical and Experimental Investigations in Trainable Pattern-Classifying Systems,” Tech. Report No. RADC-TR-65-257, Final Report on Contract AF30(602)-3448, Rome Air Development Center (now Rome Laboratories), Griffiss Air Force Base, New York, September, 1965.

[Nilsson, 1990] Nilsson, N. J., The Mathematical Foundations of Learning Machines, San Francisco: Morgan Kaufmann, 1990. (This book is a reprint of Learning Machines: Foundations of Trainable Pattern-Classifying Systems, New York: McGraw-Hill, 1965.)

[Oliver, Dowe, & Wallace, 1992] Oliver, J., Dowe, D., and Wallace, C., “Inferring Decision Graphs using the Minimum Message Length Principle,” Proc. 1992 Australian Artificial Intelligence Conference, 1992.

[Pagallo & Haussler, 1990] Pagallo, G., and Haussler, D., “Boolean Feature Discovery in Empirical Learning,” Machine Learning, vol. 5, no. 1, pp. 71-99, March 1990.

[Pazzani & Kibler, 1992] Pazzani, M., and Kibler, D., “The Utility of Knowledge in Inductive Learning,” Machine Learning, 9, 57-94, 1992.

[Peterson, 1961] Peterson, W., Error Correcting Codes, New York: John Wiley, 1961.

[Pomerleau, 1991] Pomerleau, D., “Rapidly Adapting Artificial Neural Networks for Autonomous Navigation,” in Lippmann, P., et al. (eds.), Advances in Neural Information Processing Systems, 3, pp. 429-435, San Francisco: Morgan Kaufmann, 1991.

[Pomerleau, 1993] Pomerleau, D., Neural Network Perception for Mobile Robot Guidance, Boston: Kluwer Academic Publishers, 1993.

[Quinlan & Rivest, 1989] Quinlan, J. Ross, and Rivest, Ron, “Inferring Decision Trees Using the Minimum Description Length Principle,” Information and Computation, 80:227-248, March, 1989.

[Quinlan, 1986] Quinlan, J. Ross, “Induction of Decision Trees,” Machine Learning, 1:81-106, 1986. Reprinted in Shavlik, J., and Dietterich, T., Readings in Machine Learning, San Francisco: Morgan Kaufmann, 1990, pp. 57-69.

[Quinlan, 1987] Quinlan, J. R., “Generating Production Rules from Decision Trees,” in IJCAI-87: Proceedings of the Tenth Intl. Joint Conf. on Artificial Intelligence, pp. 304-307, San Francisco: Morgan Kaufmann, 1987.

[Quinlan, 1990] Quinlan, J. R., “Learning Logical Definitions from Relations,” Machine Learning, 5, 239-266, 1990.
[Quinlan, 1993] Quinlan, J. Ross, C4.5: Programs for Machine Learning, San Francisco: Morgan Kaufmann, 1993.

[Quinlan, 1994] Quinlan, J. R., “Comparing Connectionist and Symbolic Learning Methods,” in Hanson, S., Drastal, G., and Rivest, R., (eds.), Computational Learning Theory and Natural Learning Systems, Volume 1: Constraints and Prospects, pp. 445-456, Cambridge, MA: MIT Press, 1994.

[Ridgway, 1962] Ridgway, W. C., An Adaptive Logic System with Generalizing Properties, PhD thesis, Tech. Rep. 1556-1, Stanford Electronics Labs., Stanford, CA, April 1962.

[Rissanen, 1978] Rissanen, J., “Modeling by Shortest Data Description,” Automatica, 14:465-471, 1978.

[Rivest, 1987] Rivest, R. L., “Learning Decision Lists,” Machine Learning, 2, 229-246, 1987.

[Rosenblatt, 1958] Rosenblatt, F., Principles of Neurodynamics, Washington: Spartan Books, 1961.

[Ross, 1983] Ross, S., Introduction to Stochastic Dynamic Programming, New York: Academic Press, 1983.

[Rumelhart, Hinton, & Williams, 1986] Rumelhart, D. E., Hinton, G. E., and Williams, R. J., “Learning Internal Representations by Error Propagation,” in Rumelhart, D. E., and McClelland, J. L., (eds.), Parallel Distributed Processing, Vol. 1, 318-362, 1986.

[Russell & Norvig, 1995] Russell, S., and Norvig, P., Artificial Intelligence: A Modern Approach, Englewood Cliffs, NJ: Prentice Hall, 1995.

[Samuel, 1959] Samuel, A., “Some Studies in Machine Learning Using the Game of Checkers,” IBM Journal of Research and Development, 3:211-229, July 1959.

[Schwartz, 1993] Schwartz, A., “A Reinforcement Learning Method for Maximizing Undiscounted Rewards,” Proc. Tenth Intl. Conf. on Machine Learning, pp. 298-305, San Francisco: Morgan Kaufmann, 1993.

[Sejnowski, Koch, & Churchland, 1988] Sejnowski, T., Koch, C., and Churchland, P., “Computational Neuroscience,” Science, 241:1299-1306, 1988.

[Shavlik, Mooney, & Towell, 1991] Shavlik, J., Mooney, R., and Towell, G., “Symbolic and Neural Learning Algorithms: An Experimental Comparison,” Machine Learning, 6, pp. 111-143, 1991.

[Shavlik & Dietterich, 1990] Shavlik, J., and Dietterich, T., Readings in Machine Learning, San Francisco: Morgan Kaufmann, 1990.
[Sutton & Barto, 1987] Sutton, R. S., and Barto, A. G., “A Temporal
Diﬀerence Model of Classical Conditioning,” in Proceedings of the Ninth
Annual Conference of the Cognitive Science Society, Hillsdale, NJ: Erl
baum, 1987.
[Sutton, 1988] Sutton, R. S., “Learning to Predict by the Methods of Temporal
Diﬀerences,” Machine Learning 3: 944, 1988.
[Sutton, 1990] Sutton, R., “Integrated Architectures for Learning, Planning,
and Reacting Based on Approximating Dynamic Programming,” Proc. of
the Seventh Intl. Conf. on Machine Learning, pp. 216224, San Francisco:
Morgan Kaufmann, 1990.
[Taylor, Michie, & Spiegalhalter, 1994] Taylor, C., Michie, D., and Spiegal
halter, D., Machine Learning, Neural and Statistical Classiﬁcation,
Paramount Publishing International.
[Tesauro, 1992] Tesauro, G., “Practical Issues in Temporal Diﬀerence Learn
ing,” Machine Learning, 8, nos. 3/4, pp. 257277, 1992.
[Towell & Shavlik, 1992] Towell G., and Shavlik, J., “Interpretation of Artiﬁ
cial Neural Networks: Mapping KnowledgeBased Neural Networks into
Rules,” in Moody, J., Hanson, S., and Lippmann, R., (eds.), Advances in
Neural Information Processing Systems, 4, pp. 977984, San Francisco:
Morgan Kaufmann, 1992.
[Towell, Shavlik, & Noordweier, 1990] Towell, G., Shavlik, J., and Noordweier,
M., “Reﬁnement of Approximate Domain Theories by KnowledgeBased
Artiﬁcial Neural Networks,” Proc. Eighth Natl., Conf. on Artiﬁcial In
telligence, pp. 861866, 1990.
[Unger, 1989] Unger, S., The Essence of Logic Circuits, Englewood Cliﬀs, NJ:
PrenticeHall, 1989.
[Utgoﬀ, 1989] Utgoﬀ, P., “Incremental Induction of Decision Trees,” Machine
Learning, 4:161–186, Nov., 1989.
[Valiant, 1984] Valiant, L., “A Theory of the Learnable,” Communications of
the ACM, Vol. 27, pp. 11341142, 1984.
[Vapnik & Chervonenkis, 1971] Vapnik, V., and Chervonenkis, A., “On the
Uniform Convergence of Relative Frequencies, Theory of Probability and
its Applications, Vol. 16, No. 2, pp. 264280, 1971.
[Various Editors, 19891994] Advances in Neural Information Processing Sys
tems, vols 1 through 6, San Francisco: Morgan Kaufmann, 1989 1994.
[Watkins & Dayan, 1992] Watkins, C. J. C. H., and Dayan, P., “Technical Note:
QLearning,” Machine Learning, 8, 279292, 1992.
[Watkins, 1989] Watkins, C. J. C. H., Learning From Delayed Rewards, PhD Thesis, University of Cambridge, England, 1989.

[Weiss & Kulikowski, 1991] Weiss, S., and Kulikowski, C., Computer Systems that Learn, San Francisco: Morgan Kaufmann, 1991.

[Werbos, 1974] Werbos, P., Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences, Ph.D. Thesis, Harvard University, 1974.

[Widrow & Lehr, 1990] Widrow, B., and Lehr, M. A., “30 Years of Adaptive Neural Networks: Perceptron, Madaline and Backpropagation,” Proc. IEEE, vol. 78, no. 9, pp. 1415-1442, September, 1990.

[Widrow & Stearns, 1985] Widrow, B., and Stearns, S., Adaptive Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1985.

[Widrow, 1962] Widrow, B., “Generalization and Storage in Networks of Adaline Neurons,” in Yovits, Jacobi, and Goldstein (eds.), Self-organizing Systems—1962, pp. 435-461, Washington, DC: Spartan Books, 1962.

[Winder, 1961] Winder, R., “Single Stage Threshold Logic,” Proc. of the AIEE Symp. on Switching Circuits and Logical Design, Conf. paper CP60-1261, pp. 321-332, 1961.

[Winder, 1962] Winder, R., Threshold Logic, PhD Dissertation, Princeton University, Princeton, NJ, 1962.

[Wnek, et al., 1990] Wnek, J., et al., “Comparing Learning Paradigms via Diagrammatic Visualization,” in Proc. Fifth Intl. Symp. on Methodologies for Intelligent Systems, pp. 428-437, 1990. (Also Tech. Report MLI 90-2, University of Illinois at Urbana-Champaign.)
ii
Contents
1 Preliminaries 1.1 Introduction . . . . . . . . . . . . . . . . 1.1.1 What is Machine Learning? . . . 1.1.2 Wellsprings of Machine Learning 1.1.3 Varieties of Machine Learning . . 1.2 Learning InputOutput Functions . . . . 1.2.1 Types of Learning . . . . . . . . 1.2.2 Input Vectors . . . . . . . . . . . 1.2.3 Outputs . . . . . . . . . . . . . . 1.2.4 Training Regimes . . . . . . . . . 1.2.5 Noise . . . . . . . . . . . . . . . 1.2.6 Performance Evaluation . . . . . 1.3 Learning Requires Bias . . . . . . . . . . 1.4 Sample Applications . . . . . . . . . . . 1.5 Sources . . . . . . . . . . . . . . . . . . 1.6 Bibliographical and Historical Remarks 2 Boolean Functions 2.1 Representation . . . . . . . . . . . . . . 2.1.1 Boolean Algebra . . . . . . . . . 2.1.2 Diagrammatic Representations . 2.2 Classes of Boolean Functions . . . . . . 2.2.1 Terms and Clauses . . . . . . . . 2.2.2 DNF Functions . . . . . . . . . . 2.2.3 CNF Functions . . . . . . . . . . 2.2.4 Decision Lists . . . . . . . . . . . 2.2.5 Symmetric and Voting Functions 2.2.6 Linearly Separable Functions . . 2.3 Summary . . . . . . . . . . . . . . . . . 2.4 Bibliographical and Historical Remarks iii 1 1 1 3 4 5 5 7 8 8 9 9 9 11 13 13 15 15 15 16 17 17 18 21 22 23 23 24 25
. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . .
. . . . . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . .
. . . . . . . . . . . 4. . . . . . . . . . . . . . . . . . . . . .2 Special Cases of Linearly Separable Functions . .5 Variations on Backprop . . . . . . . . . . . . . . . . . . . . . .1 Motivation and Examples . . 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . 4. . . . . . . . . . . . . . . . .1 Threshold Logic Units . . . . . . . . . . . . .1 Deﬁnitions and Geometry . . . . .3 Learning as Search of a Version Space . .4 Bibliographical and Historical Remarks . . . . . .6 Bibliographical and Historical Remarks . . . . . .1 Notation . . . . . .1. . . . . . . . . . .4 Training Feedforward Networks by Backpropagation . .5 The WidrowHoﬀ Procedure . . . . . . . 4. . . . . . . . . . .2 Madalines . . . . .2 The Backpropagation Method . . . . . 5. . . .1 Version Spaces and Mistake Bounds . . . . . . . . . . . . . . . . . . . . . . . .5 Bibliographical and Historical Remarks . . . . 4. .3. . . . . . . . . . . . . . . . . . .1. 4. . 27 27 29 32 32 34 35 35 35 37 38 40 42 44 44 46 46 49 50 51 52 52 53 56 58 59 60 61 61 63 63 63 65 68 70 70 72 4 Neural Networks 4. . 4. . . . . 3. .4. . . . . . . . . . . . . . . . 4. . 4. . . . . . . . .3 Networks of TLUs . . . . . 5. .1.1. . . . . . . . . 4. .1 Using Statistical Decision Theory . 4.1. . . . . . . . . . . 4. . . .4. . . . . . . . . . . . . . . . . . . . . . . . 5.4.4 Cascade Networks . . .3.1. . .2 Linear Machines . . . . . . . . . 3. . . . .6 Training a TLU on NonLinearlySeparable Training Sets 4. . . .1. . . . .4 Weight Space . . . . . . . . . . . . . .4 The Candidate Elimination Method . . . . . . . . . . . 4. . . . .4. . . .5 Synergies Between Neural Network and KnowledgeBased Methods 4. . . . . . . . . . . .1. . . 3. . . . . . .3 Piecewise Linear Machines . . .2 Version Graphs . . . . . . . . . . . . . . . . . . . . . . 5. . . . . . .2 Learning Belief Networks .1. . . . . . . . . . . . .3 Conditionally Independent Binary Components 5. . .3. . . . . . . iv . . . . . . . .6 An Application: Steering a Van . . . . . . . . . .3 Using Version Spaces for Learning 3. . . . .2 Gaussian (or Normal) Distributions . . . . . . . . . 4. . . . . . . . 4. . . . . . . . . . . . .3 NearestNeighbor Methods . . . . 5. . . . . .3 ErrorCorrection Training of a TLU . . .4. . 4. .3 Computing Weight Changes in the Final Layer . . . . 4.4 Computing Changes to the Weights in Intermediate Layers 4. .4. . . . . . . . 3. . . . . . . . . . . .1 Background and General Method . .3. . . . . 5 Statistical Learning 5. . . . . . . . . 4. . . . .
. . 6. . . . . .2. . .6 The Problem of Missing Attributes . . . . . . . . . .2. . . . . . . . . . . . . . . . . 6.4. . . . . . . . . . .5 Choosing Literals to Add . . 98 .2 Using Uncertainty Reduction to Select Tests 6. . . . .1 Selecting the Type of Test . . . . . . . . .2 Examples . . 6. . . . . . . . . . . . .2. . . . . . . . . . . . . . . .2. . . . . . . . . . . . . . .4 Inducing Recursive Programs . .2. . . . . . . . 8. . . . . . . . . . .7 Bibliographical and Historical Remarks . 8. . . . . . . . .8 Bibliographical and Historical Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8. . . . . . . . . . . . . . . . . . . .3 Some Properly PACLearnable Classes . . 8. . . . . . . . .3 NonBinary Attributes . 7. . . . . . . 73 73 74 75 75 79 79 80 80 81 82 83 84 84 86 86 87 . . . . . . . 94 . 8. . 8. . . . .3 A More General Capacity Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6. . Induction . . .1 Deﬁnitions . .3 The VapnikChervonenkis Dimension . . . . . . . . . . .1 The Fundamental Theorem . . . . .3 Networks Equivalent to Decision Trees . . . .2 PAC Learning . . . . . 6. . . 6. . . . . . 7.1 Notation and Assumptions for PAC Learning Theory . . . . . . . . . .5 The Problem of Replicated Subtrees . . . . . . . . . . . . . 8.4. 104 107 107 109 109 111 112 113 113 115 116 117 118 118 8 Computational Learning Theory 8. . . . . . .2 A Generic ILP Algorithm . . . . . . .4 Overﬁtting and Evaluation . . 90 . . .2 Supervised Learning of Univariate Decision Trees . . .1 Linear Dichotomies . . 100 . . . . . 6. . . . 8. . . . . . .4. . . . . . .4 Some Facts and Speculations About the VC Dimension 8. . . .3 Avoiding Overﬁtting in Decision Trees . . . . . . . . 7. . . .4. .6 Decision Trees 6. 7. . . . . 8.3. .5 Bibliographical and Historical Remarks .3 An Example . . . . . . . . .2 Validation Methods . . . . . . . . . . . . . . . . . . 6. . . . . . . . . . . . . . . 7 Inductive Logic Programming 7. .1 Notation and Deﬁnitions . . . . . . . . . . . . . . . 6. . 7. . . . . 6. . . . . . . . . . . 6. . . . . . . 6. . . . . . . . . . . . . . . . . . . . . . . . . . . .7 Comparisons . 8. . . . . . . . . . . . . . . . . .4. .3.4 MinimumDescription Length Methods . . . . . .6 Relationships Between ILP and Decision 7. . . . . 101 .2 Capacity . . . . . . . . . . .4 VC Dimension and PAC Learning . .2. . . . . . . . . v . . .3. . . . . 6. . . . . . . . . . 91 . . . . .3. . . . . Tree . . . . . . . . . . . . . . . . . 6. . . . . . . . . . . . . . . . . . . . . . . . .1 Overﬁtting . . . 89 . . . . . . . . . . . .5 Noise in Data . . . . .
. . . . . . .2 A Method Based on Euclidean Distance . . . .5. . . . . . . . . . . . . . . . . . . . .3 Temporal Discounting and Optimal Policies . . . . . . . . . . . . . . . . 125 9. . . . . . . . . . . . . . . . . . .8 Bibliographical and Historical Remarks . 155 vi . . . . . . . . . . . . . . . . 152 11. . . . . . . . . . . .5. .1 9. . 154 11. and Extensions of QLearning . . . . . . 120 A Method Based on Probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 10. . . . 135 10.1 9. . . . .3. .7 An Example Application: TDgammon . . . . . . . . . . . . . . 147 11. . . . . . . . . . .5. . . .4 An Experiment with TD Methods . . 125 A Method Based on Probabilities . . . . . . . . . . . . . . . . . . . . . . . . 153 11. . .4 Bibliographical and Historical Remarks .6 IntraSequence Weight Updating .9 Unsupervised Learning 9. . . 124 9. . . .5 Scaling Problems . . . . . . . .2 Using Random Actions . . . . . .3. . . . . . . . . .2 An Example . . . . . . .3 Generalizing Over Inputs . . . .6 Bibliographical and Historical Remarks . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . 140 10. . .3 Hierarchical Clustering Methods . . . . . .2 Supervised and TemporalDiﬀerence Methods . 131 10. 119 Clustering Methods . . . . . . . . .2 119 What is Unsupervised Learning? . . . . . . . . . . 141 11 DelayedReinforcement Learning 143 11. .3 Incremental Computation of the (∆W)i . . 144 11. 150 11. . . . . 150 11. . . . . . . . . . . . . . . . . . . . . 134 10. . . . . 143 11. . 130 131 10 TemporalDiﬀerence Learning 10.1 The General Problem . . . . . . . . . . . . . . . . . 120 9. . . . . . 126 9.1 An Illustrative Example . . . . .2 A Method Based on Euclidean Distance . . . . . . . . . . . . . . . . . .5 Theoretical Results . . . . .2. . . . . . . . . . . 131 10.2. . . . . . . . . . . . Limitations. . . . . . . . . .4 QLearning . . 138 10. . . . . . .5 Discussion. . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 11. . . . 154 11. . .4 Partially Observable States . . . . . .5. . .1 Temporal Patterns and Prediction Problems . . . . . .1 9. . . . .
. . . 168 vii . . . . . . . . .6 Utility of EBL . . . .7. . . . . . . . . .7 Applications . . . . . . . . . .3 An Example . 164 12. . .2 Domain Theories . . . . . . . .2 Learning Search Control Knowledge . . . . .4 Evaluable Predicates . . . . . . . . . . . . 162 12. . . . . . . . 164 12. . . . . . . . . . . . . . . . . . . . . . 159 12. . . . . .8 Bibliographical and Historical Remarks . . . . . 158 12. . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Deductive Learning . . . . . . . . . . . . . . . . . . . . . .1 MacroOperators in Planning . . . . . . . . . . . . . . . . . . . . 157 12. . . . . . . . . .5 More General Proofs . . . . 164 12. . . . . . . . . . .7. . . 164 12. 167 12. . . . . . . . . . . . . . . . . . . .12 ExplanationBased Learning 157 12. . . . . . . . . . . . . . . . . . . .
viii .
. I do not give proofs of many of the theorems that I state. radial basis functions. and the author solicits corrections. . Some of my plans for additions and other reminders are mentioned in marginal notes. Instead.Preface These notes are in the process of becoming a textbook. Robert Allen. . my goal is to give the reader suﬃcient preparation to make the extensive literature on machine learning accessible. Elman nets and other recurrent nets. as have my colleague. The book concentrates on the important ideas in machine learning. Many typographical infelicities will no doubt persist until the ﬁnal version. and suggestions from students and other readers. My intention is to pursue a middle ground between a theoretical textbook and one that focusses on applications. And. some undoubtedly remain—caveat lector. The process is quite unﬁnished. I am also collecting exercises and project suggestions which will appear in future versions. genetic algorithms. and my teaching assistants. More material has yet to be added. ix . I do not treat many matters that would be of practical importance in applications. Students in my Stanford courses on machine learning have already made several useful suggestions. Ron Kohavi. and Bayes networks . grammar and automata learning. criticisms. I hope that future versions will cover Hopﬁeld nets. Pat Langley. Although I have tried to eliminate errors. the book is not a handbook of machine learning practice. Please let me have your suggestions about topics that are too important to be left out. and Lise Getoor. Karl Pﬂeger. but I do give plausibility arguments and citations to formal proofs.
prediction. by study. Such tasks involve recognition.1 1. fall comfortably within the province of other disciplines and are not necessarily better understood for being called learning. diagnosis. robot control. Some of these changes. we might say. Certainly. In this book we focus on learning in machines. Machine learning usually refers to the changes in systems that perform tasks associated with artiﬁcial intelligence (AI). program. or data (based on its inputs or in response to external information) in such a manner that its expected future performance improves. As regards machines.” Zoologists and psychologists study learning in animals and humans. There are several parallels between animal and machine learning. The “changes” might be either enhancements to already performing systems or ab initio synthesis of new systems. But. very broadly. many techniques in machine learning derive from the eﬀorts of psychologists to make more precise their theories of animal and human learning through computational models. for example. like intelligence. It seems likely also that the concepts and techniques being explored by researchers in machine learning may illuminate certain aspects of biological learning.Chapter 1 Preliminaries 1.1. we feel quite justiﬁed in that case to say that the machine has learned. covers such a broad range of processes that it is difﬁcult to deﬁne precisely. when the performance of a speechrecognition machine improves after hearing several samples of a person’s speech. planning. or skill in. or understanding of.1 Introduction What is Machine Learning? Learning. A dictionary deﬁnition includes phrases such as “to gain knowledge. that a machine learns whenever it changes its structure. we show the architecture of a typical AI 1 . instruction. etc. To be slightly more speciﬁc. such as the addition of a record to a data base. or experience.” and “modiﬁcation of a behavioral tendency by experience.
Changes made to any of the components shown in the ﬁgure might count as learning.1: An AI System One might ask “Why should machines have to learn? Why not design machines to perform as desired in the ﬁrst place?” There are several reasons why machine learning is important. we have already mentioned that the achievement of learning in machines might help us understand how animals and humans learn. Machine learning methods can often be used to extract these relationships (data mining).2 CHAPTER 1. that is. . Diﬀerent learning mechanisms might be employed depending on which subsystem is being changed. We would like machines to be able to adjust their internal structure to produce correct outputs for a large number of sample inputs and thus suitably constrain their input/output function to approximate the relationship implicit in the examples. 1.1. perhaps by anticipating their eﬀects. we might be able to specify input/output pairs but not a concise relationship between inputs and desired outputs. We will study several diﬀerent learning methods in this book. This agent perceives and models its environment and computes appropriate actions. Sensory signals Goals Perception Model Planning and Reasoning Action Computation Actions Figure 1. But there are important engineering reasons as well. • It is possible that hidden among large piles of data are important relationships and correlations. PRELIMINARIES “agent” in Fig. Some of these are: • Some tasks cannot be deﬁned well except by example. Of course.
Here is a brief listing of some of the separate disciplines that have contributed to machine learning. 1958]. • New knowledge about tasks is constantly being discovered by humans. In fact. Brain modelers are interested in how closely these networks approximate the learning phenomena of . Hebb. • The amount of knowledge available about certain tasks might be too large for explicit encoding by humans. Continuing redesign of AI systems to conform to new knowledge is impractical. more details will follow in the the appropriate chapters: • Statistics: A longstanding problem in statistics is how best to use samples drawn from unknown probability distributions to help decide from which distribution some new sample is drawn. A related problem is how to estimate the value of an unknown function at a new point given the values of this function at a set of sample points. 1989. Networks of these elements have been studied by several researchers including [McCulloch & Pitts. Vocabulary changes. 1943. 1. 1988]. Machine learning methods can be used for onthejob improvement of existing machine designs. We will explore some of the statistical methods later in the book. • Environments change over time. Statistical methods for dealing with these problems can be considered instances of machine learning because the decision and estimation rules depend on a corpus of samples drawn from the problem environment. Sejnowski. & Churchland.1.2 Wellsprings of Machine Learning Work in machine learning is now converging from several sources. Koch. Rosenblatt. more recently by [Gluck & Rumelhart. certain characteristics of the working environment might not be completely known at design time. Details about the statistical theory underlying these methods can be found in statistical textbooks such as [Anderson. 1949. INTRODUCTION 3 • Human designers often produce machines that do not work as well as desired in the environments in which they are used.1. but machine learning methods might be able to track much of it.1. These different traditions each bring diﬀerent methods and diﬀerent vocabulary which are now being assimilated into a more uniﬁed discipline. • Brain Models: Nonlinear elements with weighted inputs have been suggested as simple models of biological neurons. 1958] and. There is a constant stream of new events in the world. Machines that learn this knowledge gradually might be able to capture more of it than humans would want to write down. Machines that can adapt to a changing environment would reduce the need for constant redesign.
• Psychological Models: Psychologists have studied the performance of humans in various learning tasks. . 1994] are the most prominent computational techniques for evolution. Minton. et al. 1986. Some aspects of controlling a robot based on sensory inputs represent instances of this sort of problem. 1988. or subsymbolic processing. 1983] and how future actions and decisions can be based on previous exemplary cases [Kolodner.. 1973] methods. 1961]. For an introduction see [Bollinger & Duﬃe. More recent work of this sort has been inﬂuenced by activities in artiﬁcial intelligence which we will be presenting. 1992. Koza. Etzioni. 1975] and genetic programming [Koza. 1959]. An early example is the EPAM network for storing and retrieving one member of a pair of words when given another [Feigenbaum. 1991. 1990] and inductive logic programming [Muggleton. 1994]. Samuel developed a prominent early program that learned parameters of a function for evaluating board positions in the game of checkers [Samuel. • Adaptive Control Theory: Control theorists study the problem of controlling a process having unknown parameters which must be estimated during operation. 1987]. AI researchers have also explored the role of analogies in learning [Carbonell. 1993]. not only do individual animals learn to perform better. Marin. Some of the work in reinforcement learning can be traced to eﬀorts to model how reward stimuli inﬂuence the learning of goalseeking behavior in animals [Sutton & Barto. c z Another theme has been saving and generalizing the results of problem solving using explanationbased learning [DeJong & Mooney. techniques that model certain aspects of biological evolution have been proposed as learning methods to improve the performance of computer programs. brainstyle CHAPTER 1. Lavraˇ & Dˇeroski. 1993]. • Evolutionary Models: In nature. 1988]. but species evolve to be better ﬁt in their individual niches. Recent work has been directed at discovering rules for expert systems using decisiontree methods [Quinlan. the parameters change during operation. Laird. 1986. PRELIMINARIES see that several important machine learning networks of nonlinear elements—often called inspired by this school is sometimes called computation. Often. AI research has been concerned with machine learning.4 living brains. Since the distinction between evolving and learning can be blurred in computer systems. Genetic algorithms [Holland. Related work led to a number of early decision tree [Hunt. • Artiﬁcial Intelligence: From the beginning. We shall techniques are based on neural networks. 1966] and semantic network [Anderson & Bower. & Stone. Work connectionism. and the control process must track these changes. Reinforcement learning is an important theme in machine learning research.
1.1.3 Varieties of Machine Learning

Orthogonal to the question of the historical source of any learning technique is the more important question of what is to be learned. In this book, we take it that the thing to be learned is a computational structure of some sort. We will consider a variety of different computational structures:

• Functions
• Logic programs and rule sets
• Finite-state machines
• Grammars
• Problem solving systems

We will present methods both for the synthesis of these structures from examples and for changing existing structures. In the latter case, the change to the existing structure might be simply to make it more computationally efficient rather than to increase the coverage of the situations it can handle. Much of the terminology that we shall be using throughout the book is best introduced by discussing the problem of learning functions, and we turn to that matter first.

1.2 Learning Input-Output Functions

We use Fig. 1.2 to help define some of the terminology used in describing the problem of learning a function. Imagine that there is a function, f, and the task of the learner is to guess what it is. Our hypothesis about the function to be learned is denoted by h. Both f and h are functions of a vector-valued input X = (x1, x2, ..., xi, ..., xn) which has n components. We think of h as being implemented by a device that has X as input and h(X) as output. Both f and h themselves may be vector-valued. We assume a priori that the hypothesized function, h, is selected from a class of functions H. Sometimes we know that f also belongs to this class or to a subset of this class. We select h based on a training set, Ξ, of m input vector examples. Many important details depend on the nature of the assumptions made about all of these entities.

1.2.1 Types of Learning

There are two major settings in which we wish to learn a function. In one, called supervised learning, we know (sometimes only approximately) the values of f for the m samples in the training set, Ξ. We assume that if we can find a hypothesis, h, that closely agrees with f for the members of Ξ, then this hypothesis will be a good guess for f—especially if Ξ is large.
Figure 1.2: An Input-Output Function

Curve-fitting is a simple example of supervised learning of a function. Suppose we are given the values of a two-dimensional function, f, at the four sample points shown by the solid circles in Fig. 1.3. We want to fit these four points with a function, h, drawn from the set, H, of second-degree functions. We show there a two-dimensional parabolic surface above the x1, x2 plane that fits the points. This parabolic function, h, is our hypothesis about the function, f, that produced the four samples. In this case, h = f at the four samples, but we need not have required exact matches.
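To make the curve-fitting example concrete, here is a minimal sketch (not from the original text) that fits a second-degree function of (x1, x2) to four samples by least squares; the sample coordinates and values are invented for illustration.

```python
import numpy as np

# Four hypothetical samples (x1, x2) and their f-values.
X = np.array([[ 5.0,  5.0], [ 5.0, -5.0], [-5.0,  5.0], [-5.0, -5.0]])
f = np.array([500.0, 400.0, 300.0, 200.0])

# Design matrix for a second-degree hypothesis:
# h(x1, x2) = a + b*x1 + c*x2 + d*x1^2 + e*x1*x2 + g*x2^2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 0]*X[:, 1], X[:, 1]**2])

# Least-squares solution; with four points and six coefficients the
# fit is exact (h = f at the samples), as in the text's example.
coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)

def h(x1, x2):
    a, b, c, d, e, g = coeffs
    return a + b*x1 + c*x2 + d*x1**2 + e*x1*x2 + g*x2**2

print([h(*x) for x in X])   # reproduces the four sample values
```

With only four samples, many second-degree functions fit exactly; least squares simply picks one of them, which foreshadows the discussion of bias in Section 1.3.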
Figure 1.3: A Surface that Fits Four Points

In the other setting, termed unsupervised learning, we simply have a training set of vectors without function values for them. The problem in this case, typically, is to partition the training set into subsets, Ξ1, ..., ΞR, in some appropriate way. (We can still regard the problem as one of learning a function; the value of the function is the name of the subset to which an input vector belongs.) Unsupervised learning methods have application in taxonomic problems in which it is desired to invent ways to classify data into meaningful categories.

We shall also describe methods that are intermediate between supervised and unsupervised learning.

We might either be trying to find a new function, h, or to modify an existing one. An interesting special case is that of changing an existing function into an equivalent one that is computationally more efficient. This type of learning is sometimes called speed-up learning. A very simple example of speed-up learning involves deduction processes. From the formulas A ⊃ B and B ⊃ C, we can deduce C if we are given A. From this deductive process, we can create the formula A ⊃ C—a new formula, but one that does not sanction any more conclusions than those that could be derived from the formulas that we previously had. But with this new formula we can derive C more quickly, given A, than we could have done before. We can contrast speed-up learning with methods that create genuinely new functions—ones that might give different results after learning than they did before. We say that the latter methods involve inductive learning. As opposed to deduction, there are no correct inductions—only useful ones.

1.2.2 Input Vectors

Because machine learning methods derive from so many different traditions, its terminology is rife with synonyms, and we will be using most of them in this book. For example, the input vector is called by a variety of names. Some of these are: input vector, pattern vector, feature vector, sample, example, and instance. The components, xi, of the input vector are variously called features, attributes, input variables, and components.

The values of the components can be of three main types: real-valued numbers, discrete-valued numbers, and categorical values. As an example illustrating categorical values, information about a student might be represented by the values of the attributes class, major, sex, and adviser. A particular student would then be represented by a vector such as: (sophomore, history, male, higgins). Additionally, categorical values may be ordered (as in {small, medium, large}) or unordered (as in the example just given). Of course, mixtures of all these types of values are possible.

In all cases, it is possible to represent the input in unordered form by listing the names of the attributes together with their values; the vector form assumes that the attributes are ordered and given implicitly by a form. As an example of an attribute-value representation, we might have: (major: history, sex: male, class: sophomore, adviser: higgins, age: 19).
We will be using the vector form exclusively.

1.2.3 Outputs

The output may be a real number, in which case the process embodying the function, h, is called a function estimator, and the output is called an output value or estimate. Alternatively, the output may be a categorical value, in which case the process embodying h is variously called a classifier, a recognizer, or a categorizer, and the output itself is called a label, a class, a category, or a decision. Classifiers have application in a number of recognition problems, for example in the recognition of hand-printed characters. The input in that case is some suitable representation of the printed character, and the classifier maps this input into one of, say, 64 categories. Vector-valued outputs are also possible, with components being real numbers or categorical values.

An important special case is that of Boolean output values, which can be regarded as a special case of either discrete numbers (1, 0) or categorical variables (True, False). In that case, a training pattern having value 1 is called a positive instance, and a training sample having value 0 is called a negative instance. When the input is also Boolean, the classifier implements a Boolean function. We study the Boolean case in some detail because it allows us to make important general points in a simplified setting. Learning a Boolean function is sometimes called concept learning, and the function is called a concept.

1.2.4 Training Regimes

There are several ways in which the training set, Ξ, can be used to produce a hypothesized function. In the batch method, the entire training set is available and used all at once to compute the function, h. A variation of this method uses the entire training set to modify a current hypothesis iteratively until an acceptable hypothesis is obtained. By contrast, in the incremental method, we select one member at a time from the training set and use this instance alone to modify a current hypothesis. Then another member of the training set is selected, and so on. The selection method can be random (with replacement) or it can cycle through the training set iteratively.

If the entire training set becomes available one member at a time, then we might also use an incremental method—selecting and using training set members as they arrive. (Alternatively, at any stage all training set members so far available could be used in a "batch" process.) Using the training set members as they become available is called an online method. Online methods might be used, for example, when the next training instance is some function of the current hypothesis and the previous instance—as it would be when a classifier is used to decide on a robot's next action given its current set of sensory inputs. The next set of sensory inputs will depend on which action was selected.
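The regimes just described differ only in how the training set is fed to the learner. A minimal sketch of the contrast, with placeholder fit and update procedures standing in for whatever hypothesis representation is actually used:

```python
def fit_batch(training_set, fit):
    # Batch method: compute a hypothesis from all samples at once.
    return fit(training_set)

def fit_incremental(training_set, initial_h, update):
    # Incremental method: revise the current hypothesis one labeled
    # sample at a time (could also cycle or sample with replacement).
    h = initial_h
    for X, label in training_set:
        h = update(h, X, label)
    return h

# Example: estimating a mean label either way.
data = [((0,), 1.0), ((1,), 3.0), ((2,), 5.0)]
batch_mean = fit_batch(data, lambda S: sum(l for _, l in S) / len(S))
state = fit_incremental(data, (0.0, 0),
                        lambda h, X, l: (h[0] + l, h[1] + 1))
print(batch_mean, state[0] / state[1])   # both 3.0
```

An online method has the same shape as fit_incremental, except that the loop consumes samples as the environment produces them rather than from a stored set.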
1.2.5 Noise

Sometimes the vectors in the training set are corrupted by noise. There are two kinds of noise: class noise randomly alters the value of the function; attribute noise randomly alters the values of the components of the input vector. In either case, it would be inappropriate to insist that the hypothesized function agree precisely with the values of the samples in the training set.

1.2.6 Performance Evaluation

Even though there is no correct answer in inductive learning, it is important to have methods to evaluate the result of learning. We will discuss this matter in more detail later but, briefly, in supervised learning the induced function is usually evaluated on a separate set of inputs and function values for them called the testing set. A hypothesized function is said to generalize when it guesses well on the testing set. Both mean-squared error and the total number of errors are common measures.

1.3 Learning Requires Bias

Long before now the reader has undoubtedly asked why learning a function is possible at all. Certainly, for example, there are an uncountable number of different functions having values that agree with the four samples shown in Fig. 1.3. Why would a learning procedure happen to select the quadratic one shown in that figure? In order to make that selection we had at least to limit a priori the set of hypotheses to quadratic functions and then to insist that the one we chose passed through all four sample points. This kind of a priori information is called bias, and useful learning without bias is impossible.

We can gain more insight into the role of bias by considering the special case of learning a Boolean function of n dimensions. There are 2^n different Boolean inputs possible. Suppose we had no bias; that is, H is the set of all 2^(2^n) Boolean functions, and we have no preference among those that fit the samples in the training set. In this case, after being presented with one member of the training set and its value we can rule out precisely one-half of the members of H—those Boolean functions that would misclassify this labeled sample. The remaining functions constitute what is called a "version space;" we'll explore that concept in more detail later. As we present more members of the training set, the graph of the number of hypotheses not yet ruled out as a function of the number of different patterns presented is as shown in Fig. 1.4. At any stage of the process, half of the remaining Boolean functions have value 1 and half have value 0 for any training pattern not yet seen. No generalization is possible in this case because the training patterns give no clue about the value of a pattern not yet seen. Only memorization is possible here, which is a trivial sort of learning.

Figure 1.4: Hypotheses Remaining as a Function of Labeled Patterns Presented
But suppose we limited H to some subset, Hc, of all Boolean functions. Depending on the subset and on the order of presentation of training patterns, a curve of hypotheses not yet ruled out might look something like the one shown in Fig. 1.5. In this case it is even possible that after seeing fewer than all 2^n labeled samples, there might be only one hypothesis that agrees with the training set. Certainly, even if there is more than one hypothesis remaining, most of them may have the same value for most of the patterns not yet seen! The theory of Probably Approximately Correct (PAC) learning makes this intuitive idea precise. We'll examine that theory later.

Let's look at a specific example of how bias aids learning. A Boolean function can be represented by a hypercube each of whose vertices represents a different input pattern. We show a 3-dimensional version in Fig. 1.6. There, we show a training set of six sample patterns and have marked those having a value of 1 by a small square and those having a value of 0 by a small circle. If the hypothesis set consists of just the linearly separable functions—those for which the positive and negative instances can be separated by a linear surface—then there is only one function remaining in this hypothesis set that is consistent with the training set. So, in this case, even though the training set does not contain all possible patterns, we can already pin down what the function must be—given the bias.
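For small n the no-bias situation can be simulated exhaustively. A sketch (the two training samples are hypothetical):

```python
from itertools import product

n = 2
inputs = list(product([0, 1], repeat=n))          # the 2^n input patterns
functions = list(product([0, 1], repeat=2**n))    # all 2^(2^n) functions,
                                                  # one output per pattern

# With no bias, each labeled pattern rules out exactly half of H.
training = [((0, 0), 1), ((0, 1), 0)]
H = functions
for X, label in training:
    i = inputs.index(X)
    H = [f for f in H if f[i] == label]
    print(len(H))                                 # 8, then 4, for n = 2

# The survivors split evenly on any pattern not yet seen,
# so they give no guidance there: no generalization without bias.
unseen = inputs.index((1, 1))
print(sum(f[unseen] for f in H), "of", len(H))    # 2 of 4
```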
Figure 1.5: Hypotheses Remaining From a Restricted Subset

Machine learning researchers have identified two main varieties of bias, absolute and preference. In absolute bias (also called restricted hypothesis-space bias), one restricts H to a definite subset of functions. In our example of Fig. 1.6, the restriction was to linearly separable Boolean functions. In preference bias, one selects the hypothesis that is minimal according to some ordering scheme over all hypotheses. For example, if we had some way of measuring the complexity of a hypothesis, we might select the one that was simplest among those that performed satisfactorily on the training set. The principle of Occam's razor, used in science to prefer simple explanations to more complex ones, is a type of preference bias. (William of Occam, 1285?-1349, was an English philosopher who said: "non sunt multiplicanda entia praeter necessitatem," which means "entities should not be multiplied unnecessarily.")

1.4 Sample Applications

Our main emphasis in this book is on the concepts of machine learning—not on its applications. Nevertheless, if these concepts were irrelevant to real-world problems they would probably not be of much interest. As motivation, we give a short summary of some areas in which machine learning techniques have been successfully applied. [Langley, 1992] cites some of the following applications and others:
a. Rule discovery using a variant of ID3 for a printing industry problem [Evans & Fisher, 1992].

b. Electric power load forecasting using a k-nearest-neighbor rule system [Jabbour, K., et al., 1987].

c. Automatic "help desk" assistant using a nearest-neighbor system [Acorn & Walden, 1992].

d. Planning and scheduling for a steel mill using ExpertEase, a marketed (ID3-like) system [Michie, 1992].

e. Classification of stars and galaxies [Fayyad, et al., 1993].

Figure 1.6: A Training Set That Completely Determines a Linearly Separable Function

Many application-oriented papers are presented at the annual conferences on Neural Information Processing Systems. Among these are papers on: speech recognition, dolphin echo recognition, image processing, bio-engineering, diagnosis, commodity trading, face recognition, music composition, optical character recognition, and various control applications [Various Editors, 1989-1994].

As additional examples, [Hammerstrom, 1993] mentions:

a. Sharp's Japanese kanji character recognition system processes 200 characters per second with 99+% accuracy. It recognizes 3000+ characters.

b. NeuroForecasting Centre's (London Business School and University College London) trading strategy selection network earned an average annual profit of 18% against a conventional system's 12.3%.
c. Fujitsu's (plus a partner's) neural network for monitoring a continuous steel casting operation has been in successful operation since early 1990.

In summary, it is rather easy nowadays to find applications of machine learning techniques. This fact should come as no surprise inasmuch as many machine learning techniques can be viewed as extensions of well-known statistical methods which have been successfully applied for many years.
1.5 Sources
Besides the rich literature in machine learning (a small part of which is referenced in the Bibliography), there are several textbooks that are worth mentioning [Hertz, Krogh, & Palmer, 1991, Weiss & Kulikowski, 1991, Natarajan, 1991, Fu, 1994, Langley, 1996]. [Shavlik & Dietterich, 1990, Buchanan & Wilkins, 1993] are edited volumes containing some of the most important papers. A survey paper by [Dietterich, 1990] gives a good overview of many important topics.

There are also well-established conferences and publications where papers are given and appear, including:

• The Annual Conferences on Advances in Neural Information Processing Systems
• The Annual Workshops on Computational Learning Theory
• The Annual International Workshops on Machine Learning
• The Annual International Conferences on Genetic Algorithms
(The proceedings of the above-listed four conferences are published by Morgan Kaufmann.)
• The journal Machine Learning (published by Kluwer Academic Publishers)

There is also much information, as well as programs and datasets, available over the Internet through the World Wide Web.
1.6 Bibliographical and Historical Remarks
To be added. Every chapter will contain a brief survey of the history of the material covered in that chapter.
Chapter 2
Boolean Functions
2.1 Representation

2.1.1 Boolean Algebra
Many important ideas about learning of functions are most easily presented using the special case of Boolean functions. There are several important subclasses of Boolean functions that are used as hypothesis classes for function learning. Therefore, we digress in this chapter to present a review of Boolean functions and their properties. (For a more thorough treatment see, for example, [Unger, 1989].)

A Boolean function, f(x1, x2, ..., xn), maps an n-tuple of (0,1) values to {0, 1}. Boolean algebra is a convenient notation for representing Boolean functions. Boolean algebra uses the connectives ·, +, and ¯ (complement). For example, the and function of two variables is written x1 · x2. By convention, the connective "·" is usually suppressed, and the and function is written x1x2. x1x2 has value 1 if and only if both x1 and x2 have value 1; if either x1 or x2 has value 0, x1x2 has value 0. The (inclusive) or function of two variables is written x1 + x2. x1 + x2 has value 1 if and only if either or both of x1 or x2 has value 1; if both x1 and x2 have value 0, x1 + x2 has value 0. The complement or negation of a variable, x, is written x̄. x̄ has value 1 if and only if x has value 0; if x has value 1, x̄ has value 0.

These definitions are compactly given by the following rules for Boolean algebra:

1 + 1 = 1, 1 + 0 = 1, 0 + 0 = 0,
1 · 1 = 1, 1 · 0 = 0, 0 · 0 = 0,
1̄ = 0, 0̄ = 1.

Sometimes the arguments and values of Boolean functions are expressed in terms of the constants T (True) and F (False) instead of 1 and 0, respectively.
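These rules are easy to check mechanically. A small sketch that encodes the three connectives and verifies the rules above, along with De Morgan's laws used in the next paragraph:

```python
# A minimal sketch of the Boolean connectives on {0, 1}.
def AND(a, b): return a & b      # x1 · x2
def OR(a, b):  return a | b      # x1 + x2
def NOT(a):    return 1 - a      # complement

# The defining rules from the text, checked exhaustively.
assert OR(1, 1) == 1 and OR(1, 0) == 1 and OR(0, 0) == 0
assert AND(1, 1) == 1 and AND(1, 0) == 0 and AND(0, 0) == 0
assert NOT(1) == 0 and NOT(0) == 1

# De Morgan's laws (stated just below) also check out:
for a in (0, 1):
    for b in (0, 1):
        assert NOT(AND(a, b)) == OR(NOT(a), NOT(b))
        assert NOT(OR(a, b)) == AND(NOT(a), NOT(b))
```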
The connectives · and + are each commutative and associative. Thus, for example, x1(x2x3) = (x1x2)x3, and both can be written simply as x1x2x3. Similarly for +. The operators · and + do not commute between themselves. Instead, we have De Morgan's laws (which can be verified by using the above definitions): the complement of x1x2 is x̄1 + x̄2, and the complement of x1 + x2 is x̄1x̄2.

A Boolean formula consisting of a single variable, such as x1, is called an atom. One consisting of either a single variable or its complement, such as x̄1, is called a literal.

2.1.2 Diagrammatic Representations

We saw in the last chapter that a Boolean function could be represented by labeling the vertices of a cube. For a function of n variables, we would need an n-dimensional hypercube. In Fig. 2.1 we show some 2- and 3-dimensional examples. Vertices having value 1 are labeled with a small square, and vertices having value 0 are labeled with a small circle.

Figure 2.1: Representing Boolean Functions on Cubes

Using the hypercube representations, it is easy to see how many Boolean functions of n dimensions there are. A 3-dimensional cube has 2^3 = 8 vertices, and each may be labeled in two different ways; thus there are 2^(2^3) = 256 different Boolean functions of 3 variables.
In general, there are 2^(2^n) Boolean functions of n variables. Of course, we cannot visualize hypercubes for n > 3, and there are many surprising properties of higher-dimensional spaces, so we must be careful in using intuitions gained in low dimensions. We will be using 2- and 3-dimensional cubes later to provide some intuition about the properties of certain Boolean functions.

One diagrammatic technique for dimensions slightly higher than 3 is the Karnaugh map. A Karnaugh map is an array of values of a Boolean function in which the horizontal rows are indexed by the values of some of the variables and the vertical columns are indexed by the rest. The rows and columns are arranged in such a way that entries that are adjacent in the map correspond to vertices that are adjacent in the hypercube representation. We show an example of the 4-dimensional even parity function in Fig. 2.2. (An even parity function is a Boolean function that has value 1 if there are an even number of its arguments that have value 1; otherwise it has value 0.) Note that all adjacent cells in the table correspond to inputs differing in only one component. [Also describe general logic diagrams, [Wnek, et al., 1990].]

Figure 2.2: A Karnaugh Map

2.2 Classes of Boolean Functions

2.2.1 Terms and Clauses

To use absolute bias in machine learning, we limit the class of hypotheses. In learning Boolean functions, we frequently use some of the common subclasses of those functions; therefore, it will be important to know about these subclasses.

One basic subclass is called terms. A term is any function written in the form l1 l2 · · · lk, where the li are literals. Such a form is called a conjunction of literals. Some example terms are x1x7 and x1x2x4. The size of a term is the number of literals it contains; the examples are of sizes 2 and 3, respectively. (Strictly speaking, the class of conjunctions of literals is called the monomials, and a conjunction of literals itself is called a term. This distinction is a fine one which we elect to blur here.) [Probably I'll put in a simple term-learning algorithm here—so we can get started on learning! Also for DNF functions and decision lists—as they are defined in the next few pages.]
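Anticipating the author's bracketed note, here is one standard term-learning sketch; it is an assumption for illustration, not necessarily the algorithm the author had in mind. Start with the most specific term, the conjunction of all 2n literals, and let each positive instance delete the literals it contradicts:

```python
def learn_term(positives, n):
    # A hypothesis is a set of literals (i, v) meaning "xi must equal v".
    # Start with all 2n literals; each positive instance deletes the
    # literals it contradicts. Negative instances are not needed if
    # the target really is a term.
    term = {(i, v) for i in range(n) for v in (0, 1)}
    for x in positives:
        term = {(i, v) for (i, v) in term if x[i] == v}
    return term

def eval_term(term, x):
    return int(all(x[i] == v for (i, v) in term))

# Hypothetical target x1 AND (not x3), n = 3; positive instances only.
pos = [(1, 0, 0), (1, 1, 0)]
t = learn_term(pos, 3)
print(sorted(t))                 # [(0, 1), (2, 0)]: x1 and not-x3
print(eval_term(t, (1, 1, 1)))   # 0
```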
It is easy to show that there are exactly 3^n possible terms of n variables. The number of terms of size k or less is bounded from above by Σ_{i=0}^{k} C(2n, i) = O(n^k), where C(i, j) = i!/[(i − j)! j!] is the binomial coefficient.

A clause is any function written in the form l1 + l2 + · · · + lk, where the li are literals. Such a form is called a disjunction of literals. Some example clauses are x3 + x5 + x6 and x1 + x4. The size of a clause is the number of literals it contains. There are 3^n possible clauses and fewer than Σ_{i=0}^{k} C(2n, i) clauses of size k or less. If f is a term, then (by De Morgan's laws) f̄ is a clause, and vice versa; thus, terms and clauses are duals of each other. In psychological experiments, conjunctions of literals seem easier for humans to learn than disjunctions of literals.

2.2.2 DNF Functions

A Boolean function is said to be in disjunctive normal form (DNF) if it can be written as a disjunction of terms. Some examples in DNF are f = x1x2 + x2x3x4 and f = x1x3 + x2x3 + x1x2x3. A DNF expression is called a k-term DNF expression if it is a disjunction of k terms; it is in the class k-DNF if the size of its largest term is k. The examples above are 2-term and 3-term expressions, respectively. Both expressions are in the class 3-DNF.

Each term in a DNF expression for a function is called an implicant because it "implies" the function (if the term has value 1, so does the function). In general, a term, t, is an implicant of a function, f, if f has value 1 whenever t does. A term, t, is a prime implicant of f if the term, t′, formed by taking any literal out of an implicant t is no longer an implicant of f. (The implicant cannot be "divided" by any term and remain an implicant.) Thus, both x2x3 and x1x3 are prime implicants of f = x2x3 + x1x3 + x2x1x3, but x2x1x3 is not.

The relationship between implicants and prime implicants can be geometrically illustrated using the cube representation for Boolean functions. Consider, for example, the function f = x2x3 + x1x3 + x2x1x3. We illustrate it in Fig. 2.3. Note that each of the three planes in the figure "cuts off" a group of vertices having value 1, but none cuts off any vertices having value 0. These planes are pictorial devices used to isolate certain lower-dimensional subfaces of the cube. Two of them isolate one-dimensional edges, and the third isolates a zero-dimensional vertex. Each group of vertices on a subface corresponds to one of the implicants of the function, f, and thus each implicant corresponds to a subface of some dimension. A k-dimensional subface corresponds to an (n − k)-size implicant term. The function is written as the disjunction of the implicants—corresponding to the union of all the vertices cut off by all of the planes. Geometrically, an implicant is prime if and only if its corresponding subface is the largest dimensional subface that includes all of its vertices and no other vertices having value 0.
Figure 2.3: A Function and its Implicants (f = x2x3 + x1x3 + x2x1x3 = x2x3 + x1x3; x2x3 and x1x3 are prime implicants)

Note that the term x2x1x3 is not a prime implicant of f. (In this case, we don't even have to include this term in the function because the vertex cut off by the plane corresponding to x2x1x3 is already cut off by the plane corresponding to x2x3.) The other two implicants are prime because their corresponding subfaces cannot be expanded without including vertices having value 0.

Note that all Boolean functions can be represented in DNF—trivially, by disjunctions of terms of size n where each term corresponds to one of the vertices whose value is 1. Whereas there are 2^(2^n) functions of n dimensions in DNF (since any Boolean function can be written in DNF), there are just 2^O(n^k) functions in k-DNF.

All Boolean functions can also be represented in DNF in which each term is a prime implicant, but that representation is not unique, as shown in Fig. 2.4. If we can express a function in DNF form, we can use the consensus method to find an expression for the function in which each term is a prime implicant. [We may replace this section with one describing the Quine-McCluskey method instead.] The consensus method relies on two results:
Figure 2.4: Non-Uniqueness of Representation by Prime Implicants (f = x2x3 + x1x3 + x1x2 = x1x2 + x1x3; all of the terms are prime implicants, but there is not a unique representation)

• Consensus:

xi · f1 + x̄i · f2 = xi · f1 + x̄i · f2 + f1 · f2,

where f1 and f2 are terms such that no literal appearing in f1 appears complemented in f2. f1 · f2 is called the consensus of xi · f1 and x̄i · f2. Readers familiar with the resolution rule of inference will note that consensus is the dual of resolution. Examples: x1 is the consensus of x1x2 and x1x̄2. The terms x̄1x2 and x1x̄2 have no consensus since each term has more than one literal appearing complemented in the other.

• Subsumption:

xi · f1 + f1 = f1,

where f1 is a term. We say that f1 subsumes xi · f1. Example: x1x4x5 subsumes x1x4x2x5.
The consensus method for finding a set of prime implicants for a function, f, iterates the following operations on the terms of a DNF expression for f until no more such operations can be applied:

a. initialize the process with the set, T, of terms in the DNF expression of f,

b. compute the consensus of a pair of terms in T and add the result to T,

c. eliminate any terms in T that are subsumed by other terms in T.

When this process halts, the terms remaining in T are all prime implicants of f.

Example: Let f = x1x2 + x1x2x3 + x1x2x3x4x5. We show a derivation of a set of prime implicants in the consensus tree of Fig. 2.5. The circled numbers adjoining the terms indicate the order in which the consensus and subsumption operations were performed. Shaded boxes surrounding a term indicate that it was subsumed. The final form of the function, in which all terms are prime implicants, is f = x1x2 + x1x3 + x1x4x5. Its terms are all of the non-subsumed terms in the consensus tree.

Figure 2.5: A Consensus Tree
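A minimal sketch of the consensus method. Terms are represented as frozensets of literals (variable, polarity), a representation chosen here for convenience; the two-term example at the end is our own, and the text's five-variable example works the same way.

```python
def consensus(t1, t2):
    # Terms are frozensets of literals (var, polarity).
    clash = [v for (v, p) in t1 if (v, 1 - p) in t2]
    if len(clash) != 1:
        return None            # a consensus exists only for exactly one clash
    v = clash[0]
    return frozenset(l for l in t1 | t2 if l[0] != v)

def prime_implicants(terms):
    T = {frozenset(t) for t in terms}
    changed = True
    while changed:
        changed = False
        for a in list(T):
            for b in list(T):
                c = consensus(a, b) if a != b else None
                # add a new consensus term unless an existing term subsumes it
                if c is not None and c not in T and \
                   not any(s <= c for s in T if s != c):
                    T.add(c)
                    changed = True
        # subsumption: drop any term properly containing another term
        T = {t for t in T if not any(s < t for s in T)}
    return T

# Example: x1·x2 + x1·(not x2) reduces to the single prime implicant x1.
f = [{(1, 1), (2, 1)}, {(1, 1), (2, 0)}]
print(prime_implicants(f))     # {frozenset({(1, 1)})}
```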
2.2.3 CNF Functions

Disjunctive normal form has a dual: conjunctive normal form (CNF). A Boolean function is said to be in CNF if it can be written as a conjunction of clauses. An example in CNF is f = (x1 + x2)(x2 + x3 + x4). A CNF expression is called a k-clause CNF expression if it is a conjunction of k clauses; it is in the class k-CNF if the size of its largest clause is k. The example is a 2-clause expression in 3-CNF. If f is written in DNF, an application of De Morgan's law renders f̄ in CNF, and vice versa. Because CNF and DNF are duals, there are also 2^O(n^k) functions in k-CNF.

2.2.4 Decision Lists

Rivest has proposed a class of Boolean functions called decision lists [Rivest, 1987]. A decision list is written as an ordered list of pairs:

(tq, vq)
(tq−1, vq−1)
· · ·
(ti, vi)
· · ·
(t2, v2)
(T, v1)

where the vi are either 0 or 1, the ti are terms in (x1, ..., xn), and T is a term whose value is 1 (regardless of the values of the xi). The value of a decision list is the value of vi for the first ti in the list that has value 1. (At least one ti will have value 1, because the last one does; v1 can be regarded as a default value of the decision list.) The decision list is of size k if the size of the largest term in it is k. The class of decision lists of size k or less is called k-DL.

An example decision list is:

f =
(x1x2, 1)
(x̄1x̄2x3, 0)
(x̄2x3, 1)
(1, 0)

f has value 0 for x1 = 0, x2 = 0, and x3 = 1. It has value 1 for x1 = 1, x2 = 0, and x3 = 1. This function is in 3-DL.

It has been shown that the class k-DL is a strict superset of the union of k-DNF and k-CNF. There are 2^O[n^k k log(n)] functions in k-DL [Rivest, 1987].

Interesting generalizations of decision lists use other Boolean functions in place of the terms, ti. For example, we might use linearly separable functions in place of the ti (see below and [Marchand & Golea, 1993]).
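Evaluating a decision list is a single scan for the first satisfied term. A sketch, reusing the literal-set term representation from the earlier sketches; indices are 0-based, and the polarities in the example's second and third terms are reconstructed from the stated evaluations, so treat them as illustrative:

```python
def eval_term(term, x):
    # term: set of literals (i, v); the constant-1 term is the empty set
    return int(all(x[i] == v for (i, v) in term))

def eval_decision_list(dl, x):
    # dl: ordered list of (term, value); the last term is the default
    for term, value in dl:
        if eval_term(term, x):
            return value

# The example list: (x1x2, 1), (x̄1x̄2x3, 0), (x̄2x3, 1), (1, 0)
f = [({(0, 1), (1, 1)}, 1),
     ({(0, 0), (1, 0), (2, 1)}, 0),
     ({(1, 0), (2, 1)}, 1),
     (set(), 0)]
print(eval_decision_list(f, (0, 0, 1)))   # 0, as stated in the text
print(eval_decision_list(f, (1, 0, 1)))   # 1, as stated in the text
```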
2.2.5 Symmetric and Voting Functions

A Boolean function is called symmetric if it is invariant under permutations of the input variables. For example, any function that is dependent only on the number of input variables whose values are 1 is a symmetric function. The parity functions, which have value 1 depending on whether or not the number of input variables with value 1 is even or odd, are symmetric functions. (The exclusive or function, illustrated in Fig. 2.1, is an odd-parity function of two dimensions.) The or and and functions of two dimensions are also symmetric.

An important subclass of the symmetric functions is the class of voting functions (also called m-of-n functions). A k-voting function has value 1 if and only if k or more of its n inputs has value 1. If k = 1, a voting function is the same as an n-sized clause; if k = n, a voting function is the same as an n-sized term; if k = (n + 1)/2 for n odd or k = 1 + n/2 for n even, we have the majority function.

2.2.6 Linearly Separable Functions

The linearly separable functions are those that can be expressed as follows:

f = thresh(Σ_{i=1}^{n} wi xi, θ)

where wi, i = 1, ..., n, are real-valued numbers called weights, θ is a real-valued number called the threshold, and thresh(σ, θ) is 1 if σ ≥ θ and 0 otherwise. (Note that the concept of linearly separable functions can be extended to non-Boolean inputs.) The k-voting functions are all members of the class of linearly separable functions in which the weights all have unit value and the threshold depends on k. Thus, terms and clauses are special cases of linearly separable functions.

A convenient way to write linearly separable functions uses vector notation:

f = thresh(X · W, θ)

where X = (x1, ..., xn) is an n-dimensional vector of input variables, W = (w1, ..., wn) is an n-dimensional vector of weight values, and X · W is the dot (or inner) product of the two vectors. Input vectors for which f has value 1 lie in a half-space on one side of (and on) a hyperplane whose orientation is normal to W and whose position (with respect to the origin) is determined by θ. We saw an example of such a separating plane in Fig. 1.6. With this idea in mind, it is easy to see that two of the functions in Fig. 2.1 are linearly separable, while two are not. Also note that the terms in Figs. 2.3 and 2.4 are linearly separable functions as evidenced by the separating planes shown.
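The threshold test and the k-voting functions are direct to implement; a minimal sketch:

```python
def thresh(sigma, theta):
    return 1 if sigma >= theta else 0

def lin_sep(x, w, theta):
    # f = thresh(sum_i wi*xi, theta)
    return thresh(sum(wi * xi for wi, xi in zip(w, x)), theta)

def voting(x, k):
    # k-voting (k-of-n) function: unit weights, threshold k
    return lin_sep(x, [1] * len(x), k)

print(voting((1, 0, 1), 2))   # 1: at least two inputs are on
print(voting((1, 0, 0), 2))   # 0
# k = n gives the n-sized term; k = 1 gives the n-sized clause
print(voting((1, 1, 1), 3), voting((0, 1, 0), 1))   # 1 1
```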
There is no closed-form expression for the number of linearly separable functions of n dimensions, but the following table gives the numbers for n up to 6.

n    Boolean Functions    Linearly Separable Functions
1    4                    4
2    16                   14
3    256                  104
4    65,536               1,882
5    ≈ 4.3 × 10^9         94,572
6    ≈ 1.8 × 10^19        15,028,134

[Muroga, 1971] has shown that (for n > 1) there are no more than 2^(n^2) linearly separable functions of n dimensions. (See also [Winder, 1961, Winder, 1962].)

2.3 Summary

The diagram in Fig. 2.6 shows some of the set inclusions of the classes of Boolean functions that we have considered. We will be confronting these classes again in later chapters.

Figure 2.6: Classes of Boolean Functions

The sizes of the various classes are given in the following table (adapted from [Dietterich, 1990, page 262]):

Class           Size of Class
terms           3^n
clauses         3^n
k-term DNF      2^O(kn)
k-clause CNF    2^O(kn)
k-DNF           2^O(n^k)
k-CNF           2^O(n^k)
k-DL            2^O[n^k k log(n)]
lin sep         2^O(n^2)
DNF             2^(2^n)
2.4 Bibliographical and Historical Remarks

To be added.
Chapter 3

Using Version Spaces for Learning

3.1 Version Spaces and Mistake Bounds

The first learning methods we present are based on the concepts of version spaces and version graphs. These ideas are most clearly explained for the case of Boolean function learning. Given an initial hypothesis set H (a subset of all Boolean functions) and the values of f(X) for each X in a training set, Ξ, the version space is that subset of hypotheses, Hv, that is consistent with these values. A hypothesis, h, is consistent with the values of X in Ξ if and only if h(X) = f(X) for all X in Ξ. We say that the hypotheses in H that are not consistent with the values in the training set are ruled out by the training set.

We could imagine (conceptually only!) that we have devices for implementing every function in H. An incremental training procedure could then be defined which presented each pattern in Ξ to each of these functions and then eliminated those functions whose values for that pattern did not agree with its given value. At any stage of the process we would then have left some subset of functions that are consistent with the patterns presented so far; this subset is the version space for the patterns already presented. This idea is illustrated in Fig. 3.1.

Consider the following procedure for classifying an arbitrary input pattern, X: the pattern is put in the same class (0 or 1) as are the majority of the outputs of the functions in the version space. During the learning procedure, if this majority is not equal to the value of the pattern presented, we say a mistake is made, and we revise the version space accordingly—eliminating all those (majority of the) functions voting incorrectly. Thus, whenever a mistake is made, we rule out at least half of the functions remaining in the version space.

How many mistakes can such a procedure make? Obviously, we can make no more than log2(|H|) mistakes, where |H| is the number of hypotheses in the original hypothesis set, H.
Figure 3.1: Implementing the Version Space

(Note, though, that the number of training patterns seen before this maximum number of mistakes is made might be much greater.) This theoretical (and very impractical!) result (due to [Littlestone, 1988]) is an example of a mistake bound—an important concept in machine learning theory. It shows that there must exist a learning procedure that makes no more mistakes than this upper bound. Later, we'll derive other mistake bounds.

As a special case, if our bias was to limit H to terms, we would make no more than log2(3^n) = n log2(3) = 1.585n mistakes before exhausting the version space. This result means that if f were a term, we would make no more than 1.585n mistakes before learning f, and otherwise we would make no more than that number of mistakes before being able to decide that f is not a term.

Even if we do not have sufficient training patterns to reduce the version space to a single function, it may be that there are enough training patterns to reduce the version space to a set of functions such that most of them assign the same values to most of the patterns we will see henceforth. We could select one of the remaining functions at random and be reasonably assured that it will generalize satisfactorily. We next discuss a computationally more feasible method for representing the version space.
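The majority-vote ("halving") procedure can be simulated exhaustively when H is small, for instance the 27 terms over three variables. A sketch with a hypothetical target term:

```python
from itertools import product

n = 3
inputs = list(product([0, 1], repeat=n))

def terms(n):
    # all 3^n terms, each variable absent, required 0, or required 1
    for choice in product([None, 0, 1], repeat=n):
        yield {(i, v) for i, v in enumerate(choice) if v is not None}

def eval_term(t, x):
    return int(all(x[i] == v for (i, v) in t))

target = {(0, 1), (2, 0)}            # hypothetical target: x1 and not-x3
H = list(terms(n))                   # 3^3 = 27 hypotheses
mistakes = 0
for x in inputs:                     # present every pattern once
    d = eval_term(target, x)
    vote = sum(eval_term(h, x) for h in H)
    if (1 if 2 * vote >= len(H) else 0) != d:
        mistakes += 1                # the majority was wrong: a mistake
    H = [h for h in H if eval_term(h, x) == d]
print(mistakes, "mistakes;", len(H), "hypotheses remain")
# The bound holds: mistakes <= log2(27), i.e. at most 4 here.
```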
3.2 Version Graphs

Boolean functions can be ordered by generality. A Boolean function, f1, is more general than a function, f2 (and f2 is more specific than f1), if f1 has value 1 for all of the arguments for which f2 has value 1, and f1 ≠ f2. For example, x3 is more general than x2x3 but is not more general than x3 + x2.

We can form a graph with the hypotheses, {hi}, in the version space as nodes. A node in the graph, hi, has an arc directed to node hj if and only if hj is more general than hi. We call such a graph a version graph. In Fig. 3.2, we show an example of a version graph over a 3-dimensional input space for hypotheses restricted to terms (with none of them yet ruled out).

Figure 3.2: A Version Graph for Terms

The function denoted here by "1," which has value 1 for all inputs, corresponds to the node at the top of the graph. (It is more general than any other term.) Just below "1" is a row of nodes corresponding to all terms having just one literal, and just below them is a row of nodes corresponding to terms having two literals, and so on. There are 3^3 = 27 functions altogether (the function "0," included in the graph, is technically not a term). To make our portrayal of the graph less cluttered only some of the arcs are shown; each node in the actual graph has an arc directed to all of the nodes above it that are more general.
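The generality ordering is easy to test by brute force over all 2^n inputs; a small sketch checking the text's example:

```python
from itertools import product

def eval_term(t, x):
    return int(all(x[i] == v for (i, v) in t))

def more_general(f1, f2, n):
    # f1 is more general than f2: f1 = 1 wherever f2 = 1, and f1 != f2
    t1 = [eval_term(f1, x) for x in product([0, 1], repeat=n)]
    t2 = [eval_term(f2, x) for x in product([0, 1], repeat=n)]
    return all(a >= b for a, b in zip(t1, t2)) and t1 != t2

x3 = {(2, 1)}
x2x3 = {(1, 1), (2, 1)}
print(more_general(x3, x2x3, 3))   # True, as in the text's example
```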
We use this same example to show how the version graph changes as we consider a set of labeled samples in a training set, Ξ. Suppose we first consider the training pattern (1, 0, 1) with value 0. Some of the functions in the version graph of Fig. 3.2 are inconsistent with this training pattern. These ruled out nodes are no longer in the version graph and are shown shaded in Fig. 3.3. We also show there the three-dimensional cube representation in which the vertex (1, 0, 1) has value 0.

Figure 3.3: The Version Graph Upon Seeing (1, 0, 1)

In a version graph, there are always a set of hypotheses that are maximally general and a set of hypotheses that are maximally specific. These are called the general boundary set (gbs) and the specific boundary set (sbs), respectively. In Fig. 3.4, we have the version graph as it exists after learning that (1, 0, 1) has value 0 and (1, 0, 0) has value 1. The gbs and sbs are shown.
Figure 3.4: The Version Graph Upon Seeing (1, 0, 0)

Boundary sets are important because they provide an alternative to representing the entire version space explicitly, which would be impractical. Given only the boundary sets, it is possible to determine whether or not any hypothesis (in the prescribed class of Boolean functions we are using) is a member of the version space. This determination is possible because of the fact that any member of the version space (that is not a member of one of the boundary sets) is more specific than some member of the general boundary set and is more general than some member of the specific boundary set.

If we limit the Boolean functions that can be in the version space to terms, it is a simple matter to determine maximally general and maximally specific functions (assuming that there is some term that is in the version space). A maximally specific one corresponds to a subface of minimal dimension that contains all the members of the training set labelled by a 1 and no members labelled by a 0. A maximally general one corresponds to a subface of maximal dimension that contains all the members of the training set labelled by a 1 and no members labelled by a 0. Looking at Fig. 3.4, we see that the subface of minimal dimension that contains (1, 0, 0) but does not contain (1, 0, 1) is just the vertex (1, 0, 0) itself—corresponding to the function x1x̄2x̄3. The subface of maximal dimension that contains (1, 0, 0) but does not contain (1, 0, 1) is the bottom face of the cube—corresponding to the function x̄3.
Version spaces for terms always have singular specific boundary sets; in Figs. 3.2 through 3.4 the sbs is always singular. As seen in Fig. 3.4, however, the gbs of a version space for terms need not be singular.

3.3 Learning as Search of a Version Space

[To be written. Relate to term learning algorithm presented in Chapter Two. Also discuss best-first search methods. See Pat Langley's example using "pseudo-cells" of how to generate and eliminate hypotheses.]

Selecting a hypothesis from the version space can be thought of as a search problem. One can start with a very general function and specialize it through various specialization operators until one finds a function that is consistent (or adequately so) with a set of training patterns. Such procedures are usually called top-down methods. Or, one can start with a very special function and generalize it—resulting in bottom-up methods. We shall see instances of both styles of learning in this book. [Compare this view of top-down versus bottom-up with the divide-and-conquer and the covering (or AQ) methods of decision-tree induction.]

3.4 The Candidate Elimination Method

The candidate elimination method is an incremental method for computing the boundary sets. Quoting from [Hirsh, 1994, page 6]: "The candidate-elimination algorithm manipulates the boundary-set representation of a version space to create boundary sets that represent a new version space consistent with all the previous instances plus the new one. For a positive example the algorithm generalizes the elements of the [sbs] as little as possible so that they cover the new instance yet remain consistent with past data, and removes those elements of the [gbs] that do not cover the new instance. For a negative instance the algorithm specializes elements of the [gbs] so that they no longer cover the new instance yet remain consistent with past data, and removes from the [sbs] those elements that mistakenly cover the new, negative instance."

The method uses the following definitions (adapted from [Genesereth & Nilsson, 1987]):

• a hypothesis is called sufficient if and only if it has value 1 for all training samples labeled by a 1,
• a hypothesis is called necessary if and only if it has value 0 for all training samples labeled by a 0.
Here is how to think about these definitions: a hypothesis implements a sufficient condition that a training sample has value 1 if the hypothesis has value 1 for all of the positive instances; a hypothesis implements a necessary condition that a training sample has value 1 if the hypothesis has value 0 for all of the negative instances. A hypothesis is consistent with the training set (and thus is in the version space) if and only if it is both sufficient and necessary.

We start (before receiving any members of the training set) with the function "0" as the singleton element of the specific boundary set and with the function "1" as the singleton element of the general boundary set. Upon receiving a new labeled input vector, the boundary sets are changed as follows:

a. If the new vector is labelled with a 1:

The new general boundary set is obtained from the previous one by excluding any elements in it that are not sufficient. (That is, we exclude any elements that have value 0 for the new vector.)

The new specific boundary set is obtained from the previous one by replacing each element, hi, in it by all of its least generalizations. The hypothesis hg is a least generalization of h if and only if: a) h is more specific than hg, b) hg is sufficient, c) no function (including h) that is more specific than hg is sufficient, and d) hg is more specific than some member of the new general boundary set. (It might be that hg = h. Also, least generalizations of two different functions in the specific boundary set may be identical.)

b. If the new vector is labelled with a 0:

The new specific boundary set is obtained from the previous one by excluding any elements in it that are not necessary. (That is, we exclude any elements that have value 1 for the new vector.)

The new general boundary set is obtained from the previous one by replacing each element, hi, in it by all of its least specializations. The hypothesis hs is a least specialization of h if and only if: a) h is more general than hs, b) hs is necessary, c) no function (including h) that is more general than hs is necessary, and d) hs is more general than some member of the new specific boundary set. (Again, it might be that hs = h, and least specializations of two different functions in the general boundary set may be identical.)
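The sufficiency and necessity tests drive these updates. A minimal sketch for term hypotheses, checked against the boundary-set members that the worked example below arrives at:

```python
def eval_term(t, x):
    return int(all(x[i] == v for (i, v) in t))

def sufficient(h, training):
    # value 1 on every sample labeled 1
    return all(eval_term(h, x) == 1 for x, label in training if label == 1)

def necessary(h, training):
    # value 0 on every sample labeled 0
    return all(eval_term(h, x) == 0 for x, label in training if label == 0)

def consistent(h, training):
    return sufficient(h, training) and necessary(h, training)

# The example's four samples (0-based indices for x1, x2, x3):
training = [((1, 0, 1), 0), ((1, 0, 0), 1), ((1, 1, 1), 0), ((0, 0, 1), 0)]
not_x3 = {(2, 0)}
x1_not_x2_not_x3 = {(0, 1), (1, 0), (2, 0)}
print(consistent(not_x3, training))             # True: the final gbs member
print(consistent(x1_not_x2_not_x3, training))   # True: the final sbs member
```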
As an example, suppose we present the vectors in the following order:

vector       label
(1, 0, 1)    0
(1, 0, 0)    1
(1, 1, 1)    0
(0, 0, 1)    0

We start with general boundary set "1" and specific boundary set "0." After seeing the first sample, (1, 0, 1), labeled with a 0, the specific boundary set stays at "0" (it is necessary), and we change the general boundary set to {x̄1, x2, x̄3}. Each of the functions x̄1, x2, and x̄3 is a least specialization of "1" (they are necessary, "1" is not, they are more general than "0," and there are no functions that are more general than they and also necessary).

Then, after seeing (1, 0, 0), labeled with a 1, the general boundary set changes to {x̄3} (because x̄1 and x2 are not sufficient), and the specific boundary set is changed to {x1x̄2x̄3}. This single function is a least generalization of "0" (it is sufficient, "0" is more specific than it, no function (including "0") that is more specific than it is sufficient, and it is more specific than some member of the general boundary set).

When we see (1, 1, 1), labeled with a 0, we do not change the specific boundary set because its function is still necessary. We do not change the general boundary set either because x̄3 is still necessary.

Finally, when we see (0, 0, 1), labeled with a 0, we do not change the specific boundary set because its function is still necessary. We do not change the general boundary set either because x̄3 is still necessary. [Maybe I'll put in an example of a version graph for non-Boolean functions.]

3.5 Bibliographical and Historical Remarks

The concept of version spaces and their role in learning was first investigated by Tom Mitchell [Mitchell, 1982]. Although these ideas are not used in practical machine learning procedures, they do provide insight into the nature of hypothesis selection. In order to accommodate noisy data, version spaces have been generalized by [Hirsh, 1994] to allow hypotheses that are not necessarily consistent with the training set. More to be added.

Chapter 4

Neural Networks

In chapter two we defined several important subsets of Boolean functions. Suppose we decide to use one of these subsets as a hypothesis set for supervised function learning. We next have the question of how best to implement the function as a device that gives the outputs prescribed by the function for arbitrary inputs. In this chapter we describe how networks of non-linear elements can be used to implement various input-output functions and how they can be trained using supervised learning methods.

Networks of non-linear elements, interconnected through adjustable weights, play a prominent role in machine learning. They are called neural networks because the non-linear elements have as their inputs a weighted sum of the outputs of other elements—much like networks of biological neurons do. These networks commonly use the threshold element which we encountered in chapter two in our study of linearly separable Boolean functions. We begin our treatment of neural nets by studying this threshold element and how it can be used in the simplest of all networks, namely ones composed of a single threshold element.

4.1 Threshold Logic Units

4.1.1 Definitions and Geometry

Linearly separable (threshold) functions are implemented in a straightforward way by summing the weighted inputs and comparing this sum to a threshold value, as shown in Fig. 4.1. This structure we call a threshold logic unit (TLU). Its output is 1 or 0 depending on whether or not the weighted sum of its inputs is greater than or equal to a threshold value. It has also been called an Adaline (for adaptive linear element) [Widrow, 1962, Widrow & Lehr, 1990], an LTU (linear threshold unit), a perceptron, and a neuron. (Although the word "perceptron" is often used nowadays to refer to a single TLU, Rosenblatt originally defined it as a class of networks of threshold elements [Rosenblatt, 1958].)
Figure 4.1: A Threshold Logic Unit (TLU)

The n-dimensional feature or input vector is denoted by X = (x1, ..., xn). (When we want to distinguish among different feature vectors, we will attach subscripts, such as Xi.) The components of X can be any real-valued numbers, but we often specialize to the binary numbers 0 and 1. The weights of a TLU are represented by an n-dimensional weight vector, W = (w1, ..., wn). Its components are real-valued numbers (but we sometimes specialize to integers). The TLU has output 1 if Σ_{i=1}^{n} xi wi ≥ θ; otherwise, the output is 0.

The weighted sum that is calculated by the TLU can be simply represented as a vector dot product, X•W. (If the pattern and weight vectors are thought of as "column" vectors, this dot product is then sometimes written as X^t W, where the "row" vector X^t is the transpose of X.) Often, the threshold, θ, of the TLU is fixed at 0; in that case, arbitrary thresholds are achieved by using (n + 1)-dimensional "augmented" vectors, Y and V, whose first n components are the same as those of X and W, respectively. The (n + 1)-st component, x_{n+1}, of the augmented feature vector, Y, always has value 1; the (n + 1)-st component, w_{n+1}, of the augmented weight vector, V, is set equal to the negative of the desired threshold value. (When we want to emphasize the use of augmented vectors, we'll use the Y,V notation; however, when the context of the discussion makes it clear what sort of vectors we are talking about, we'll lapse back into the more familiar X,W notation.) In the Y,V notation, the TLU has an output of 1 if Y•V ≥ 0; otherwise, it has output 0.
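A minimal sketch of a TLU using the augmented-vector convention just described (append 1 to the input and −θ to the weights); the two-input AND example is our own:

```python
def tlu(x, w, theta):
    # Augment: append 1 to the input, -theta to the weights.
    y = list(x) + [1.0]
    v = list(w) + [-theta]
    dot = sum(yi * vi for yi, vi in zip(y, v))
    return 1 if dot >= 0 else 0

# Two-input AND as a TLU: weights (1, 1), threshold 3/2.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, tlu(x, (1.0, 1.0), 1.5))
```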
We can give an intuitively useful geometric description of a TLU. A TLU divides the input space by a hyperplane, as sketched in Fig. 4.2. The hyperplane is the boundary between patterns for which X•W + w_{n+1} > 0 and patterns for which X•W + w_{n+1} < 0. Thus, the equation of the hyperplane itself is X•W + w_{n+1} = 0. The unit vector that is normal to the hyperplane is n = W/|W|, where |W| = sqrt(w1^2 + ... + wn^2) is the length of the vector W. (The normal form of the hyperplane equation is X•n + w_{n+1}/|W| = 0.) The distance from the hyperplane to the origin is w_{n+1}/|W|, and the distance from an arbitrary point, X, to the hyperplane is (X•W + w_{n+1})/|W|. When the distance from the hyperplane to the origin is negative (that is, when w_{n+1} < 0), then the origin is on the negative side of the hyperplane (that is, the side for which X•W + w_{n+1} < 0).

Figure 4.2: TLU Geometry

Adjusting the weight vector, W, changes the orientation of the hyperplane; adjusting w_{n+1} changes the position of the hyperplane (relative to the origin). Thus, training of a TLU can be achieved by adjusting the values of the weights. In this way the hyperplane can be moved so that the TLU implements different (linearly separable) functions of the input.

4.1.2 Special Cases of Linearly Separable Functions

Terms

Any term of size k can be implemented by a TLU with a weight from each of those inputs corresponding to variables occurring in the term. A weight of +1 is used from an input corresponding to a positive literal, and a weight of −1 is used from an input corresponding to a negative literal. (Literals not mentioned in the term have weights of zero—that is, no connection at all—from their inputs.) The threshold, θ, is set equal to kp − 1/2, where kp is the number of positive literals in the term. Such a TLU implements a hyperplane boundary that is parallel to a subface of dimension (n − k) of the unit hypercube. We show a three-dimensional example in Fig. 4.3. Thus, linearly separable functions are a superset of terms.

Figure 4.3: Implementing a Term (f = x1x2; the equation of the plane is x1 + x2 − 3/2 = 0)
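The term-to-TLU recipe is mechanical. A sketch that builds the weights and threshold from a term (represented, as in the earlier sketches, by a set of (index, polarity) literals) and verifies the construction on all inputs:

```python
from itertools import product

def term_to_tlu(term, n):
    # +1 weight for a positive literal, -1 for a negative one, 0 otherwise;
    # threshold kp - 1/2, with kp the number of positive literals.
    w = [0.0] * n
    kp = 0
    for i, v in term:
        w[i] = 1.0 if v == 1 else -1.0
        kp += (v == 1)
    return w, kp - 0.5

def tlu(x, w, theta):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def eval_term(t, x):
    return int(all(x[i] == v for (i, v) in t))

term = {(0, 1), (1, 0), (2, 1)}     # x1 · (not x2) · x3
w, theta = term_to_tlu(term, 3)
assert all(tlu(x, w, theta) == eval_term(term, x)
           for x in product([0, 1], repeat=3))
```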
Clauses

The negation of a clause is a term. For example, the negation of the clause f = x1 + x2 + x3 is the term f̄ = x̄1x̄2x̄3. A hyperplane can be used to implement this term. If we "invert" the hyperplane, it will implement the clause instead. Inverting a hyperplane is done by multiplying all of the TLU weights—even w_{n+1}—by −1. This process simply changes the orientation of the hyperplane—flipping it around by 180 degrees and thus changing its "positive side." Therefore, linearly separable functions are also a superset of clauses. We show an example in Fig. 4.4.

Figure 4.4: Implementing a Clause (f = x1 + x2 + x3, with f̄ = x̄1x̄2x̄3; the equation of the plane is x1 + x2 + x3 − 1/2 = 0)

4.1.3 Error-Correction Training of a TLU

There are several procedures that have been proposed for adjusting the weights of a TLU. We present next a family of incremental training procedures with parameter c. These methods make adjustments to the weight vector only when the TLU being trained makes an error on a training pattern; they are called error-correction procedures. We use augmented feature and weight vectors in describing them.

a. We start with a finite training set, Ξ, of vectors, Yi, and their binary labels.
4.1.3 Error-Correction Training of a TLU

There are several procedures that have been proposed for adjusting the weights of a TLU. We present next a family of incremental training procedures with parameter c. These methods make adjustments to the weight vector only when the TLU being trained makes an error on a training pattern; they are called error-correction procedures. We use augmented feature and weight vectors in describing them.

a. We start with a finite training set, Ξ, of vectors, Yi, and their binary labels.

b. Compose an infinite training sequence, Σ, of vectors from Ξ and their labels such that each member of Ξ occurs infinitely often in Σ. Set the initial weight values of a TLU to arbitrary values.

c. Repeat forever: Present the next vector, Yi, in Σ to the TLU and note its response.

   (a) If the TLU responds correctly, make no change in the weight vector.

   (b) If Yi is supposed to produce an output of 0 and produces an output of 1 instead, modify the weight vector as follows:

       V ←− V − ci Yi

       where ci is a positive real number called the learning rate parameter (whose value is different in different instances of this family of procedures and may depend on i). Note that after this adjustment the new dot product will be (V − ci Yi)•Yi = V•Yi − ci Yi•Yi, which is smaller than it was before the weight adjustment.

   (c) If Yi is supposed to produce an output of 1 and produces an output of 0 instead, modify the weight vector as follows:

       V ←− V + ci Yi

       In this case, the new dot product will be (V + ci Yi)•Yi = V•Yi + ci Yi•Yi, which is larger than it was before the weight adjustment.

Note that all three of these cases can be combined in the following rule:
    V ←− V + ci (di − fi) Yi

where di is the desired response (1 or 0) for Yi, and fi is the actual response (1 or 0) for Yi. (We use augmented vectors in our discussion here so that the threshold function compares the dot product, Yi•V, against a threshold of 0.) Note also that because the weight vector V now includes the wn+1 threshold component, the threshold of the TLU is also changed by these adjustments.

We identify two versions of this procedure:

1) In the fixed-increment procedure, the learning rate parameter, ci, is the same fixed, positive constant for all i. Depending on the value of this constant, the weight adjustment may or may not correct the response to an erroneously classified feature vector.

2) In the fractional-correction procedure, the parameter ci is set to λ |Yi•V| / (Yi•Yi), where V is the weight vector before it is changed. Note that if λ = 0, no correction takes place at all. If λ = 1, the correction is just sufficient to make Yi•V = 0. If λ > 1, the error will be corrected.

It can be proved that if there is some weight vector that produces a correct output for all of the feature vectors in Ξ, then after a finite number of feature vector presentations, the fixed-increment procedure will find such a weight vector and thus make no more weight changes. The same result holds for the fractional-correction procedure if 1 < λ ≤ 2. For additional background, proofs, and examples of error-correction procedures, see [Nilsson, 1990]. (See [Maass & Turán, 1994] for a hyperplane-finding procedure that makes no more than O(n^2 log n) mistakes.)
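A compact sketch of the fixed-increment version follows (Python; the cyclic presentation, epoch cap, and zero initialization are our simplifying assumptions—any sequence in which every pattern recurs infinitely often would do).

    def fixed_increment_train(patterns, labels, c=1.0, max_epochs=100):
        # patterns: augmented vectors Y_i; labels: desired outputs (0 or 1).
        def dot(a, b):
            return sum(ai * bi for ai, bi in zip(a, b))
        v = [0.0] * len(patterns[0])
        for _ in range(max_epochs):
            errors = 0
            for y, d in zip(patterns, labels):
                f = 1 if dot(y, v) >= 0 else 0
                if f != d:
                    # Combined rule: V <- V + c (d - f) Y
                    v = [vj + c * (d - f) * yj for vj, yj in zip(v, y)]
                    errors += 1
            if errors == 0:     # the training set is now correctly separated
                break
        return v

If the training set is linearly separable, the loop stops changing the weights after finitely many presentations, as the convergence theorem cited above guarantees.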
4.1.4 Weight Space

We can give an intuitive idea about how these procedures work by considering what happens to the augmented weight vector in "weight space" as corrections are made. A particular weight vector, V, corresponds to a point in (n+1)-dimensional weight space. Now, consider the locus of all points in weight space corresponding to weight vectors yielding Yi•V = 0, for any pattern vector, Yi. This locus is a hyperplane passing through the origin of the (n+1)-dimensional space. Each pattern vector will have such a hyperplane corresponding to it. Weight points in one of the half-spaces defined by this hyperplane will cause the corresponding pattern to yield a dot product less than 0, and weight points in the other half-space will cause the corresponding pattern to yield a dot product greater than 0. We show a schematic representation of such a weight space in Fig. 4.5. There are four pattern hyperplanes, 1, 2, 3, 4, corresponding to patterns Y1, Y2, Y3, and Y4, respectively, and we indicate by an arrow the half-space for each in which weight vectors give dot products greater than 0.

Figure 4.5: Weight Space

The question of whether or not there exists a weight vector that gives desired responses for a given set of patterns can be given a geometric interpretation. Suppose we wanted weight values that would give positive responses for patterns Y1, Y3, and Y4, and a negative response for pattern Y2. To do so involves reversing the "polarity" of those hyperplanes corresponding to patterns for which a negative response is desired. If we do that for our example above, we get the weight space diagram shown in Fig. 4.6. The weight point, V, indicated in the figure is one such set of weight values.

Figure 4.6: Solution Region in Weight Space
If a weight vector exists that correctly classifies a set of patterns, then the half-spaces defined by the correct responses for these patterns will have a nonempty intersection, called the solution region. This region is shown shaded in Fig. 4.6. The solution region will be a "hyperwedge" region whose vertex is at the origin of weight space and whose cross-section increases with increasing distance from the origin.

The fixed-increment error-correction procedure changes a weight vector by moving it normal to any pattern hyperplane for which that weight vector gives an incorrect response. Suppose in our example that we present the patterns in the sequence Y1, Y2, Y3, Y4, and start the process with a weight point V1, as shown in Fig. 4.7. Starting at V1, we see that it gives an incorrect response for pattern Y1, so we move V1 to V2 in a direction normal to plane 1. (That is what adding Y1 to V1 does.) V2 gives an incorrect response for pattern Y2, and so on. Some of the subsequent corrections may overshoot the solution region, but eventually we work our way out far enough in the solution region that corrections (for a fixed increment size) take us within it. Ultimately, the responses are only incorrect for planes bounding the solution region. The proofs for convergence of the fixed-increment rule make this intuitive argument precise. (The boxed numbers in the figure show, for later purposes, the number of errors made by weight vectors in each of the regions.)

Figure 4.7: Moving Into the Solution Region

4.1.5 The Widrow-Hoff Procedure

The Widrow-Hoff procedure (also called the LMS or the delta procedure) attempts to find weights that minimize a squared-error function between the pattern labels and the dot product computed by a TLU. For this purpose, the pattern labels are assumed to be either +1 or −1 (instead of 1 or 0).
The squared error for a pattern, Xi, with label di (for desired output) is:

    εi = (di − Σ_{j=1}^{n+1} xij wj)^2

where xij is the jth component of Xi. The total squared error (over all patterns in a training set, Ξ, containing m patterns) is then:

    ε = Σ_{i=1}^{m} (di − Σ_{j=1}^{n+1} xij wj)^2

We want to choose the weights wj to minimize this squared error. One way to find such a set of weights is to start with an arbitrary weight vector and move it along the negative gradient of ε as a function of the weights. Since ε is quadratic in the wj, we know that it has a global minimum, and thus this steepest descent procedure is guaranteed to find the minimum. Each component of the gradient is the partial derivative of ε with respect to one of the weights. One problem with taking the partial derivative of ε is that ε depends on all the input vectors in Ξ. Often, it is preferable to use an incremental procedure in which we try the TLU on just one element, Xi, of Ξ at a time, compute the gradient of the single-pattern squared error, εi, make the appropriate adjustment to the weights, and then try another member of Ξ. Of course, the results of the incremental version can only approximate those of the batch one, but the approximation is usually quite effective. We will be describing the incremental version here.

The jth component of the gradient of the single-pattern error is:

    ∂εi/∂wj = −2 (di − Σ_{j=1}^{n+1} xij wj) xij

An adjustment in the direction of the negative gradient would then change each weight as follows:

    wj ←− wj + ci (di − fi) xij

where fi = Σ_{j=1}^{n+1} xij wj, and ci governs the size of the adjustment. The entire weight vector (in augmented, or V, notation) is thus adjusted according to the following rule:

    V ←− V + ci (di − fi) Yi

where, as before, Yi is the ith augmented pattern vector. (The learning-rate factor, ci, might decrease with time toward 0 to achieve asymptotic convergence.)

The Widrow-Hoff procedure makes adjustments to the weight vector whenever the dot product itself, Yi•V, does not equal the specified desired target value, di (which is either 1 or −1). The formula for changing the weight vector has the same form as the standard fixed-increment error-correction formula; the only difference is that fi is the thresholded response of the TLU in the error-correction case, while it is the dot product itself for the Widrow-Hoff procedure. Finding weight values that give the desired dot products corresponds to solving a set of linear equalities, and the Widrow-Hoff procedure can be interpreted as a descent procedure that attempts to minimize the mean-squared-error between the actual and desired values of the dot product. (For more on Widrow-Hoff and other related procedures, see [Duda & Hart, 1973, pp. 151ff].)
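In code, one incremental Widrow-Hoff step is a one-liner on top of the earlier sketches (Python; the names are ours):

    def widrow_hoff_step(v, y, d, c=0.1):
        # One LMS adjustment, V <- V + c (d - f) Y, where f is the raw
        # (unthresholded) dot product and d is the +1/-1 pattern label.
        f = sum(vj * yj for vj, yj in zip(v, y))
        return [vj + c * (d - f) * yj for vj, yj in zip(v, y)]

Unlike the error-correction rules, this step changes the weights on every presentation for which the dot product misses the target, not only on classification errors.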
4.1.6 Training a TLU on Non-Linearly-Separable Training Sets

When the training set is not linearly separable (perhaps because of noise or perhaps inherently), it may still be desired to find a "best" separating hyperplane. Typically, the error-correction procedures will not do well on non-linearly-separable training sets because they will continue to attempt to correct inevitable errors, and the hyperplane will never settle into an acceptable place. (To be added: examples of training curves for TLUs—performance on training set, performance on test set, cumulative number of corrections.)

Several methods have been proposed to deal with this case. First, we might use the Widrow-Hoff procedure, which (although it will not converge to zero error on non-linearly-separable problems) will give us a weight vector that minimizes the mean-squared-error. A mean-squared-error criterion often gives unsatisfactory results, however, because it prefers many small errors to a few large ones. As an alternative, error correction with a continuous decrease toward zero of the value of the learning rate constant, c, will result in ever decreasing changes to the hyperplane. Duda [Duda, 1966] has suggested keeping track of the average value of the weight vector during error correction and using this average to give a separating hyperplane that performs reasonably well on non-linearly-separable problems.

Gallant [Gallant, 1986] proposed what he called the "pocket algorithm." As described in [Hertz, Krogh, & Palmer, 1991, p. 160]:

    . . . the pocket algorithm . . . consists simply in storing (or "putting in your pocket") the set of weights which has had the longest unmodified run of successes so far. The algorithm is stopped after some chosen time t . . .

Error correction proceeds as usual with the ordinary set of weights; after stopping, the weights in the pocket are used as a set that should give a small number of errors on the training set. Also see methods proposed by [John, 1995] and by [Marchand & Golea, 1993]; the latter is claimed to outperform the pocket algorithm.
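A sketch of the pocket idea (Python; the cyclic presentation and the step budget are our assumptions—patterns are often drawn at random instead):

    def pocket_train(patterns, labels, c=1.0, steps=10000):
        # Ordinary fixed-increment error correction, but keep "in the
        # pocket" the weights with the longest unmodified run of successes.
        def dot(a, b):
            return sum(ai * bi for ai, bi in zip(a, b))
        v = [0.0] * len(patterns[0])
        pocket, best_run, run, i = list(v), 0, 0, 0
        for _ in range(steps):
            y, d = patterns[i], labels[i]
            i = (i + 1) % len(patterns)
            f = 1 if dot(y, v) >= 0 else 0
            if f == d:
                run += 1
                if run > best_run:
                    pocket, best_run = list(v), run
            else:
                v = [vj + c * (d - f) * yj for vj, yj in zip(v, y)]
                run = 0
        return pocket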
4.2 Linear Machines

The natural generalization of a (two-category) TLU to an R-category classifier is the structure shown in Fig. 4.8, called a linear machine. Here, to use more familiar notation, the Ws and X are meant to be augmented vectors (with an (n+1)st component). Such a structure is also sometimes called a "competitive" net or a "winner-take-all" net. The output of the linear machine is one of the numbers, {1, . . . , R}, corresponding to which dot product is largest. Note that when R = 2, the linear machine reduces to a TLU with weight vector W = (W1 − W2).

Figure 4.8: A Linear Machine

The diagram in Fig. 4.9 shows the character of the regions in a 2-dimensional space created by a linear machine for R = 5. In n dimensions, every pair of regions is either separated by a section of a hyperplane or is nonadjacent.

Figure 4.9: Regions For a Linear Machine

To train a linear machine, there is a straightforward generalization of the 2-category error-correction rule:

a. Assemble the patterns in the training set into a sequence as before.

b. If the machine classifies a pattern correctly, no change is made to any of
the weight vectors. If the machine mistakenly classifies a category u pattern, Xi, in category v (u ≠ v), then:

    Wu ←− Wu + ci Xi

and

    Wv ←− Wv − ci Xi

and all other weight vectors are not changed. This correction increases the value of the uth dot product and decreases the value of the vth dot product.

Just as in the 2-category fixed-increment procedure, this procedure is guaranteed to terminate, for constant ci, if there exist weight vectors that make correct separations of the training set. A proof that this procedure terminates is given in [Nilsson, 1990, pp. 88-90] and in [Duda & Hart, 1973, pp. 174-177]. Note that when R = 2, this procedure reduces to the ordinary TLU error-correction procedure.
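The rule translates directly into code; the sketch below (Python, our names and stopping criterion) trains the R weight vectors at once.

    def linear_machine_train(patterns, labels, R, c=1.0, max_epochs=100):
        # patterns: augmented vectors X_i; labels: category indices 0..R-1.
        def dot(a, b):
            return sum(ai * bi for ai, bi in zip(a, b))
        n = len(patterns[0])
        W = [[0.0] * n for _ in range(R)]
        for _ in range(max_epochs):
            mistakes = 0
            for x, u in zip(patterns, labels):
                v = max(range(R), key=lambda r: dot(x, W[r]))  # largest dot product
                if v != u:
                    # Raise the correct bank's dot product, lower the offender's.
                    W[u] = [wj + c * xj for wj, xj in zip(W[u], x)]
                    W[v] = [wj - c * xj for wj, xj in zip(W[v], x)]
                    mistakes += 1
            if mistakes == 0:
                break
        return W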
4.3 Networks of TLUs

4.3.1 Motivation and Examples

Layered Networks

To classify correctly all of the patterns in non-linearly-separable training sets requires separating surfaces more complex than hyperplanes. One way to achieve more complex surfaces is with networks of TLUs. Consider, for example, the 2-dimensional, even parity function, f = x1 x2 + x̄1 x̄2. No single line through the 2-dimensional square can separate the vertices (1,1) and (0,0) from the vertices (1,0) and (0,1)—the function is not linearly separable and thus cannot be implemented by a single TLU. But the network of three TLUs shown in Fig. 4.10 does implement this function. In the figure, we show the weight values along input lines to each TLU and the threshold value inside the circle representing the TLU.

Figure 4.10: A Network for the Even Parity Function

The function implemented by a network of TLUs depends on its topology as well as on the weights of the individual TLUs. Feedforward networks have no cycles; in a feedforward network no TLU's input depends (through zero or more intermediate TLUs) on that TLU's output. (Networks that are not feedforward are called recurrent networks.) If the TLUs of a feedforward network are arranged in layers, with the elements of layer j receiving inputs only from TLUs in layer j − 1, then we say that the network is a layered, feedforward network. The network shown in Fig. 4.10 is a layered, feedforward network having two layers (of weights). (Some people count the layers of TLUs and include the inputs as a layer also; they would call this network a three-layer network.) In general, a feedforward, layered network has the structure shown in Fig. 4.11. All of the TLUs except the "output" units are called hidden units (they are "hidden" from the output).

Figure 4.11: A Layered, Feedforward Network

Implementing DNF Functions by Two-Layer Networks

We have already defined k-term DNF functions—they are DNF functions having k terms. A k-term DNF function can be implemented by a two-layer network with k units in the hidden layer—to implement the k terms—and one output unit to implement the disjunction of these terms. Since any Boolean function has a DNF form, any Boolean function can be implemented by some two-layer network of TLUs. As an example, consider the function f = x1 x2 + x2 x3 + x1 x3. The form of the network that implements this function is shown in Fig. 4.12.
Figure 4.12: A Two-Layer Network

(We leave it to the reader to calculate appropriate values of weights and thresholds.) The 3-cube representation of the function is shown in Fig. 4.13. The network of Fig. 4.12 can be designed so that each hidden unit implements one of the planar boundaries shown in Fig. 4.13.

Figure 4.13: Three Planes Implemented by the Hidden Units

To train a two-layer network that implements a k-term DNF function, we first note that the output unit implements a disjunction, so the weights in the final layer are fixed. The weights in the first layer (except for the "threshold weights") can all have values of 1, −1, or 0. Later, we will present a training procedure for this first layer of weights. (To be added: discuss half-space intersections, half-space unions, NP-hardness of optimal versions, single-side-error-hyperplane methods, relation to "AQ" methods.)
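The two-layer construction itself is mechanical, as the following sketch shows (Python, reusing term_to_tlu and tlu_output from the earlier sketches; the names are ours):

    def dnf_network(n, terms):
        # `terms` is a list of (positive, negative) index pairs, one per term.
        # First layer: one TLU per term.
        hidden = [term_to_tlu(n, pos, neg) for pos, neg in terms]
        # Output layer: an OR of the k hidden outputs -- unit weights,
        # threshold 1/2.
        out = [1.0] * len(terms) + [-0.5]
        def classify(x):
            h = [tlu_output(x, v) for v in hidden]
            return tlu_output(h, out)
        return classify

For instance, dnf_network(3, [([0, 1], []), ([1, 2], []), ([0, 2], [])]) implements f = x1 x2 + x2 x3 + x1 x3, the example of Fig. 4.12.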
Important Comment About Layered Networks

Adding additional layers cannot compensate for an inadequate first layer of TLUs. The first layer of TLUs must partition the feature space so that no two differently labeled vectors are in the same region (that is, so that no two such vectors yield the same set of outputs of the first-layer units). If the first layer does not partition the feature space in this way, then regardless of what subsequent layers do, the final outputs will not be consistent with the labeled training set. (To be added: diagrams showing the nonlinear transformation performed by a layered network.)

4.3.2 Madalines

Two-Category Networks

An interesting example of a layered, feedforward network is the two-layer one which has an odd number of hidden units and a "vote-taking" TLU as the output unit. That is, the response of the vote-taking unit is defined to be the response of the majority of the hidden units, although other output logics are possible. Such a network was called a "Madaline" (for many adalines) by Widrow. Ridgway [Ridgway, 1962] proposed the following error-correction rule for adjusting the weights of the hidden units of a Madaline:

• If the Madaline correctly classifies a pattern, Xi, no corrections are made to any of the hidden units' weight vectors.

• If the Madaline incorrectly classifies a pattern, Xi, then determine the minimum number of hidden units whose responses need to be changed (from 0 to 1 or from 1 to 0—depending on the type of error) in order that the Madaline would correctly classify Xi. Suppose that minimum number is ki. Of those hidden units voting incorrectly, change the weight vectors of those ki of them whose dot products are closest to 0 by using the error-correction rule:

    W ←− W + ci (di − fi) Xi

where di is the desired response of the hidden unit (0 or 1) and fi is the actual response (0 or 1). (We assume augmented vectors here even though we are using X, W notation.)

That is, we perform error correction on just enough hidden units to correct the vote to a majority voting correctly, and we change those that are easiest to change. There are example problems in which even though a set of weight values exists for a given Madaline structure such that it could classify all members of a training set correctly, this procedure will fail to find them. Nevertheless, the procedure works effectively in most experiments with it. We leave it to the reader to think about how this training procedure could be modified if the output TLU implemented an or function (or an and function).
R-Category Madalines and Error-Correcting Output Codes

If there are k hidden units (k > 1) in a two-layer network, their responses correspond to vertices of a k-dimensional hypercube. The ordinary two-category Madaline identifies two special points in this space, namely the vertex consisting of k 1's and the vertex consisting of k 0's. The Madaline's response is 1 if the point in "hidden-unit-space" is closer to the all 1's vertex than it is to the all 0's vertex. We could design an R-category Madaline by identifying R vertices in hidden-unit space and then classifying a pattern according to which of these vertices the hidden-unit response is closest to. A machine using that idea was implemented in the early 1960s at SRI [Brain, et al., 1961, 1962]. It used the fact that the 2^p so-called maximal-length shift-register sequences [Peterson, 1961, pp. 147ff] in a (2^p − 1)-dimensional Boolean space are mutually equidistant (for any integer p). For similar, more recent work see [Dietterich & Bakiri, 1991].

4.3.3 Piecewise Linear Machines

A two-category training set is linearly separable if there exists a threshold function that correctly classifies all members of the training set. Similarly, we can say that an R-category training set is linearly separable if there exists a linear machine that correctly classifies all members of the training set. When an R-category problem is not linearly separable, we need a more powerful classifier. A candidate is a structure called a piecewise linear (PWL) machine, illustrated in Fig. 4.14.

Figure 4.14: A Piecewise Linear Machine
The PWL machine groups its weighted summing units into R banks corresponding to the R categories. An input vector X is assigned to that category corresponding to the bank with the largest weighted sum. (Again, we use augmented vectors here.) We can use an error-correction training algorithm similar to that used for a linear machine. If a pattern is classified incorrectly, we subtract (a constant times) the pattern vector from the weight vector producing the largest dot product (it was incorrectly the largest) and add (a constant times) the pattern vector to that weight vector in the correct bank of weight vectors whose dot product is locally largest in that bank. Unfortunately, there are example training sets that are separable by a given PWL machine structure but for which this error-correction training method fails to find a solution. The method does appear to work well in some situations [Duda & Fossum, 1966], although [Nilsson, 1965, page 89] observed that "it is probably not a very effective method for training PWL machines having more than three [weight vectors] in each bank."

4.3.4 Cascade Networks

Another interesting class of feedforward networks is that in which all of the TLUs are ordered and each TLU receives inputs from all of the pattern components and from all TLUs lower in the ordering. Such a network is called a cascade network. An example is shown in Fig. 4.15, in which the TLUs are labeled by the linearly separable functions (of their inputs) that they implement. Each TLU in the network implements a set of 2^k parallel hyperplanes, where k is the number of TLUs from which it receives inputs. (Each of the k preceding TLUs can have an output of 1 or 0; that's 2^k different combinations—resulting in 2^k different positions for the parallel hyperplanes.) We show a 3-dimensional sketch for a network of two TLUs in Fig. 4.16. The reader might consider how the n-dimensional parity function might be implemented by a cascade network having log2 n TLUs.

Figure 4.15: A Cascade Network
Figure 4.16: Planes Implemented by a Cascade Network with Two TLUs

Cascade networks might be trained by first training L1 to do as good a job as possible at separating all the training patterns (perhaps by using the pocket algorithm, for example), then training L2 (including the weight from L1 to L2) also to do as good a job as possible at separating all the training patterns, and so on until the resulting network classifies the patterns in the training set satisfactorily. (To be added: also mention the "cascade-correlation" method of [Fahlman & Lebiere, 1990].)

4.4 Training Feedforward Networks by Backpropagation

4.4.1 Notation

The general problem of training a network of TLUs is difficult. Consider, for example, the layered, feedforward network of Fig. 4.11. If such a network makes an error on a pattern, there are usually several different ways in which the error can be corrected. It is difficult to assign "blame" for the error to any particular TLU in the network. Intuitively, one looks for weight-adjusting procedures that move the network in the correct direction (relative to the error) by making minimal changes. In this spirit, the Widrow-Hoff method of gradient descent has been generalized to deal with multilayer networks.

In explaining this generalization, we use Fig. 4.17 to introduce some notation. This network has only one output unit, but, of course, it is possible to have several TLUs in the output layer—each implementing a different function. Each of the layers of TLUs will have outputs that we take to be the components of vectors, just as the input features are components of an input vector. The jth layer of TLUs (1 ≤ j < k) will have as their outputs the vector X(j). The input feature vector is denoted by X(0), and the final output (of the kth layer TLU) is f. Each TLU in each layer has a weight vector (connecting it to its inputs) and a threshold; the ith TLU in the jth layer has a weight vector denoted by Wi(j). (We will assume that the "threshold weight" is the last component of the associated weight vector; we might have used V notation instead to include
this threshold component, but we have chosen here to use the familiar X,W notation, assuming that these vectors are "augmented" as appropriate.) We denote the weighted sum input to the ith threshold unit in the jth layer by si(j). (That is, si(j) = X(j−1)•Wi(j).) The number of TLUs in the jth layer is given by mj. The vector Wi(j) has components wli(j) for l = 1, . . . , m(j−1) + 1.
Figure 4.17: A k-layer Network
4.4.2 The Backpropagation Method
A gradient descent method, similar to that used in the Widrow-Hoff method, has been proposed by various authors for training a multilayer, feedforward network. As before, we define an error function on the final output of the network and we adjust each weight in the network so as to minimize the error. If we have a desired response, di, for the ith input vector, Xi, in the training set, Ξ, we can compute the squared error over the entire training set to be:

    ε = Σ_{Xi ∈ Ξ} (di − fi)^2

where fi is the actual response of the network for input Xi. To do gradient descent on this squared error, we adjust each weight in the network by an amount proportional to the negative of the partial derivative of ε with respect to that weight. Again, we use a single-pattern error function so that we can use an incremental weight adjustment procedure. The squared error for a single input vector, X, evoking an output of f when the desired output is d is:
    ε = (d − f)^2

It is convenient to take the partial derivatives of ε with respect to the various weights in groups corresponding to the weight vectors. We define the partial derivative of a quantity φ, say, with respect to a weight vector, Wi(j), thus:

    ∂φ/∂Wi(j) =def ( ∂φ/∂w1i(j), . . . , ∂φ/∂wli(j), . . . , ∂φ/∂wm(j−1)+1,i(j) )

where wli(j) is the lth component of Wi(j). This vector partial derivative of φ is called the gradient of φ with respect to W and is sometimes denoted by ∇W φ.

Since ε's dependence on Wi(j) is entirely through si(j), we can use the chain rule to write:

    ∂ε/∂Wi(j) = (∂ε/∂si(j)) (∂si(j)/∂Wi(j))

Because si(j) = X(j−1)•Wi(j), we have ∂si(j)/∂Wi(j) = X(j−1). Substituting yields:

    ∂ε/∂Wi(j) = (∂ε/∂si(j)) X(j−1)

Note that ∂ε/∂si(j) = −2(d − f) ∂f/∂si(j). Thus,

    ∂ε/∂Wi(j) = −2(d − f) (∂f/∂si(j)) X(j−1)

The quantity (d − f) ∂f/∂si(j) plays an important role in our calculations; we shall denote it by δi(j). Each of the δi(j)'s tells us how sensitive the squared error of the network output is to changes in the input to each threshold function. Since we will be changing weight vectors in directions along their negative gradient, our fundamental rule for weight changes throughout the network will be:

    Wi(j) ←− Wi(j) + ci(j) δi(j) X(j−1)

where ci(j) is the learning rate constant for this weight vector. (Usually, the learning rate constants for all weight vectors in the network are the same.) We see that this rule is quite similar to that used in the error-correction procedure
for a single TLU: a weight vector is changed by the addition of a constant times its vector of (unweighted) inputs.

Now, we must turn our attention to the calculation of the δi(j)'s. Using the definition, we have:

    δi(j) = (d − f) ∂f/∂si(j)

We have a problem, however, in attempting to carry out the partial derivatives of f with respect to the s's. The network output, f, is not continuously differentiable with respect to the s's because of the presence of the threshold functions. Most small changes in these sums do not change f at all, and when f does change, it changes abruptly from 1 to 0 or vice versa. A way around this difficulty was proposed by Werbos [Werbos, 1974] and (perhaps independently) pursued by several other researchers, for example [Rumelhart, Hinton, & Williams, 1986]. The trick involves replacing all the threshold functions by differentiable functions called sigmoids.¹ The output of a sigmoid function, superimposed on that of a threshold function, is shown in Fig. 4.18. Usually, the sigmoid function used is f(s) = 1/(1 + e^−s), where s is the input and f is the output.
Figure 4.18: A Sigmoid Function
¹[Russell & Norvig, 1995, page 595] attributes the use of this idea to [Bryson & Ho, 1969].
We show the network containing sigmoid units in place of TLUs in Fig. 4.19. The output of the ith sigmoid unit in the jth layer is denoted by fi(j). (That is, fi(j) = 1/(1 + e^−si(j)).)

Figure 4.19: A Network with Sigmoid Units

4.4.3 Computing Weight Changes in the Final Layer

We first calculate δ(k) in order to compute the weight change for the final sigmoid unit:
    δ(k) = (d − f(k)) ∂f(k)/∂s(k)

Given the sigmoid function that we are using, namely f(s) = 1/(1 + e^−s), we have ∂f/∂s = f(1 − f). Substituting gives us:

    δ(k) = (d − f(k)) f(k) (1 − f(k))

Rewriting our general rule for weight vector changes, the weight vector in the final layer is changed according to the rule:

    W(k) ←− W(k) + c(k) δ(k) X(k−1)

where δ(k) = (d − f(k)) f(k) (1 − f(k)).

It is interesting to compare backpropagation to the error-correction rule and to the Widrow-Hoff rule. The backpropagation weight adjustment for the single element in the final layer can be written as:

    W ←− W + c (d − f) f(1 − f) X

Written in the same format, the error-correction rule is:

    W ←− W + c (d − f) X

and the Widrow-Hoff rule is:

    W ←− W + c (d − f) X

The only difference (except for the fact that f is not thresholded in Widrow-Hoff) is the f(1 − f) term due to the presence of the sigmoid function. f(1 − f) can vary in value from 0 to 1. When f is 0, f(1 − f) is also 0; when f is 1, f(1 − f) is 0; and f(1 − f) obtains its maximum value of 1/4 when f is 1/2 (that is, when the input to the sigmoid is 0). The sigmoid function can be thought of as implementing a "fuzzy" hyperplane. For a pattern far away from this fuzzy hyperplane, f(1 − f) has value close to 0, and the backpropagation rule makes little or no change to the weight values regardless of the desired output. (Small changes in the weights will have little effect on the output for inputs far from the hyperplane.) Weight changes are only made within the region of "fuzz" surrounding the hyperplane, and these changes are in the direction of correcting the error.
4.4.4 Computing Changes to the Weights in Intermediate Layers

Using our expression for the δ's, we can similarly compute how to change each of the weight vectors in the network. Recall:

    δi(j) = (d − f) ∂f/∂si(j)

Again we use a chain rule. The final output, f, depends on si(j) through each of the summed inputs to the sigmoids in the (j+1)th layer. So:

    δi(j) = (d − f) [ (∂f/∂s1(j+1)) (∂s1(j+1)/∂si(j)) + · · · + (∂f/∂sl(j+1)) (∂sl(j+1)/∂si(j)) + · · · + (∂f/∂smj+1(j+1)) (∂smj+1(j+1)/∂si(j)) ]

          = Σ_{l=1}^{mj+1} (d − f) (∂f/∂sl(j+1)) (∂sl(j+1)/∂si(j))

          = Σ_{l=1}^{mj+1} δl(j+1) (∂sl(j+1)/∂si(j))

It remains to compute the ∂sl(j+1)/∂si(j)'s. To do that we first write:

    sl(j+1) = X(j)•Wl(j+1) = Σ_{ν=1}^{mj + 1} fν(j) wνl(j+1)

And then, since the weights do not depend on the s's:

    ∂sl(j+1)/∂si(j) = Σ_{ν=1}^{mj + 1} wνl(j+1) (∂fν(j)/∂si(j))

Now, we note that ∂fν(j)/∂si(j) = 0 unless ν = i, in which case ∂fν(j)/∂si(j) = fν(j)(1 − fν(j)). Therefore:

    ∂sl(j+1)/∂si(j) = wil(j+1) fi(j) (1 − fi(j))
We use this result in our expression for δi(j) to give:

    δi(j) = fi(j) (1 − fi(j)) Σ_{l=1}^{mj+1} δl(j+1) wil(j+1)

The above equation is recursive in the δ's. (It is interesting to note that this expression is independent of the error function; the error function explicitly affects only the computation of δ(k).) Having computed the δi(j+1)'s for layer j + 1, we can use this equation to compute the δi(j)'s. The base case is δ(k), which we have already computed:

    δ(k) = (d − f(k)) f(k) (1 − f(k))

We use this expression for the δ's in our generic weight changing rule, namely:

    Wi(j) ←− Wi(j) + ci(j) δi(j) X(j−1)

Although this rule appears complex, it has an intuitively reasonable explanation. The quantity δ(k) = (d − f) f(1 − f) controls the overall amount and sign of all weight adjustments in the network. (Adjustments diminish as the final output, f, approaches either 0 or 1, because they have vanishing effect on f then.) As the recursion equation for the δ's shows, the adjustments for the weights going in to a sigmoid unit in the jth layer are proportional to the effect that such adjustments have on that sigmoid unit's output (its f(j)(1 − f(j)) factor). They are also proportional to a kind of "average" effect that any change in the output of that sigmoid unit will have on the final output. This average effect depends on the weights going out of the sigmoid unit in the jth layer (small weights produce little downstream effect) and the effects that changes in the outputs of (j+1)th layer sigmoid units will have on the final output (as measured by the δ(j+1)'s). These calculations can be simply implemented by "backpropagating" the δ's through the weights in reverse direction (thus, the name backprop for this algorithm).
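Pulling the pieces together, here is a compact sketch of one incremental backprop step for a layered network of sigmoid units with a single output (Python; the data layout and the names are our assumptions, not the text's):

    import math

    def sigmoid(s):
        return 1.0 / (1.0 + math.exp(-s))

    def forward(W, x0):
        # W[j] holds the weight vectors of layer j+1; each vector has one
        # weight per input plus a trailing threshold weight.
        xs = [list(x0)]
        for layer in W:
            x = xs[-1] + [1.0]                    # augmented input X^(j-1)
            xs.append([sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
                       for w in layer])
        return xs                                 # xs[-1][0] is the output f

    def backprop_step(W, x0, d, c=0.5):
        xs = forward(W, x0)
        f = xs[-1][0]
        k = len(W)
        deltas = [None] * k
        deltas[k - 1] = [(d - f) * f * (1.0 - f)]          # base case
        for j in range(k - 2, -1, -1):
            # delta_i(j) = f_i(j)(1 - f_i(j)) sum_l delta_l(j+1) w_il(j+1)
            deltas[j] = [xs[j + 1][i] * (1.0 - xs[j + 1][i]) *
                         sum(dl * W[j + 1][l][i]
                             for l, dl in enumerate(deltas[j + 1]))
                         for i in range(len(xs[j + 1]))]
        for j in range(k):                        # W_i(j) <- W_i(j) + c delta X
            xin = xs[j] + [1.0]
            for i, w in enumerate(W[j]):
                W[j][i] = [wc + c * deltas[j][i] * xc
                           for wc, xc in zip(w, xin)]
        return f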
4.4.5 Variations on Backprop

(To be written: the problem of local minima, quickprop, momentum [Plaut, et al., 1986], regularization methods.)

Simulated Annealing

To apply simulated annealing, the value of the learning rate constant is gradually decreased with time (see [Hertz, Krogh, & Palmer, 1991]). If we fall early into an error-function valley that is not very deep (a local minimum), it typically will not be very broad either, and soon a subsequent large correction will jostle us out of it. It is less likely that we will move out of deep valleys, and at the end of the process (with very small values of the learning rate constant), we descend to its deepest point. The process gets its name by analogy with annealing in metallurgy, in which a material's temperature is gradually decreased, allowing its crystalline structure to reach a minimal energy state.
4.4.6 An Application: Steering a Van
A neural network system called ALVINN (Autonomous Land Vehicle in a Neural Network) has been trained to steer a Chevy van successfully on ordinary roads and highways at speeds of 55 mph [Pomerleau, 1991, Pomerleau, 1993]. The input to the network is derived from a lowresolution (30 x 32) television image. The TV camera is mounted on the van and looks at the road straight ahead. This image is sampled and produces a stream of 960dimensional input vectors to the neural network. The network is shown in Fig. 4.20.
Figure 4.20: The ALVINN Network

The network has five hidden units in its first layer, connected to all 960 inputs, and 30 output units in the second layer, connected to all hidden units; all are sigmoid units. The output units are arranged in a linear order and control the van's steering angle: if a unit near the top of the array of output units has a higher output than most of the other units, the van is steered to the left; if a unit near the bottom of the array has a high output, the van is steered to the right.
The "centroid" of the responses of all of the output units is computed, and the van's steering angle is set at a corresponding value between hard left and hard right.

The system is trained by a modified on-line training regime. A driver drives the van, and his actual steering angles are taken as the correct labels for the corresponding inputs. The network is trained incrementally by backprop to produce the driver-specified steering angles in response to each visual pattern as it occurs in real time while driving.

This simple procedure has been augmented to avoid two potential problems. First, since the driver is usually driving well, the network would never get any experience with far-from-center vehicle positions and/or incorrect vehicle orientations. Also, on long, straight stretches of road, the network would be trained for a long time only to produce straight-ahead steering angles; this training would swamp out earlier training to follow a curved road. We wouldn't want to try to avoid these problems by instructing the driver to drive erratically occasionally, because the system would learn to mimic this erratic behavior. Instead, each original image is shifted and rotated in software to create 14 additional images in which the vehicle appears to be situated differently relative to the road. Using a model that tells the system what steering angle ought to be used for each of these shifted images, given the driver-specified steering angle for the original image, the system constructs an additional 14 labeled training patterns to add to those encountered during ordinary driver training.
4.5 Synergies Between Neural Network and Knowledge-Based Methods

To be written; discuss rule-generating procedures (such as [Towell & Shavlik, 1992]) and how expert-provided rules can aid neural net training and vice versa [Towell, Shavlik, & Noordweier, 1990].

4.6 Bibliographical and Historical Remarks

To be added.
Chapter 5

Statistical Learning

5.1 Using Statistical Decision Theory

5.1.1 Background and General Method

Suppose the pattern vector, X, is a random variable whose probability distribution for category 1 is different than it is for category 2. (The treatment given here can easily be generalized to R-category problems.) Specifically, suppose we have the two probability distributions (perhaps probability density functions), p(X | 1) and p(X | 2). Given a pattern, X, we want to use statistical techniques to determine its category—that is, to determine from which distribution it was drawn. These techniques are based on the idea of minimizing the expected value of a quantity similar to the error function we used in deriving the weight-changing rules for backprop.

In developing a decision method, it is necessary to know the relative seriousness of the two kinds of mistakes that might be made. (We might decide that a pattern really in category 1 is in category 2, and vice versa.) We describe this information by a loss function, λ(i | j), for i, j = 1, 2. λ(i | j) represents the loss incurred when we decide a pattern is in category i when really it is in category j. We assume here that λ(1 | 1) and λ(2 | 2) are both 0. For any given pattern, X, if we decide category i, the expected value of the loss will be:

    LX(i) = λ(i | 1) p(1 | X) + λ(i | 2) p(2 | X)

where p(j | X) is the probability that, given a pattern X, its category is j. Given a pattern, X, we want to decide its category in such a way that minimizes the expected value of this loss. Our decision rule will be to decide that X belongs to category 1 if LX(1) ≤ LX(2), and to decide on category 2 otherwise.
We can use Bayes' Rule to get expressions for p(j | X) in terms of p(X | j), which we assume to be known (or estimable):

    p(j | X) = p(X | j) p(j) / p(X)

where p(j) is the (a priori) probability of category j (one category may be much more probable than the other), and p(X) is the (a priori) probability of pattern X being the pattern we are asked to classify. Performing the substitutions given by Bayes' Rule, our decision rule becomes:

Decide category 1 iff:

    λ(1 | 1) p(X | 1) p(1)/p(X) + λ(1 | 2) p(X | 2) p(2)/p(X) ≤ λ(2 | 1) p(X | 1) p(1)/p(X) + λ(2 | 2) p(X | 2) p(2)/p(X)

Using the fact that λ(i | i) = 0, and noticing that p(X) is common to both expressions, we obtain:

Decide category 1 iff:

    λ(1 | 2) p(X | 2) p(2) ≤ λ(2 | 1) p(X | 1) p(1)

If λ(1 | 2) = λ(2 | 1) and if p(1) = p(2), then the decision becomes particularly simple:

Decide category 1 iff:

    p(X | 2) ≤ p(X | 1)

Since p(X | j) is called the likelihood of j with respect to X, this simple decision rule implements what is called a maximum-likelihood decision. More generally, if we define k(i | j) as λ(i | j) p(j), then our decision rule is:

Decide category 1 iff:

    k(1 | 2) p(X | 2) ≤ k(2 | 1) p(X | 1)
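This general rule transcribes directly into code; the sketch below (Python, our names; prior maps category numbers to a priori probabilities, and likelihood(x, j) returns p(x | j)) is nothing more than the last inequality:

    def decide(x, likelihood, prior, loss12, loss21):
        # Decide category 1 iff  k(1|2) p(x|2) <= k(2|1) p(x|1),
        # where k(i|j) = lambda(i|j) p(j).
        lhs = loss12 * prior[2] * likelihood(x, 2)
        rhs = loss21 * prior[1] * likelihood(x, 1)
        return 1 if lhs <= rhs else 2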
In any case, we need to compare the (perhaps weighted) quantities p(X | i) for i = 1 and 2. The exact decision rule depends on the probability distributions assumed. We will treat two interesting distributions.

5.1.2 Gaussian (or Normal) Distributions

The multivariate (n-dimensional) Gaussian distribution is given by the probability density function:

    p(X) = (1 / ((2π)^{n/2} |Σ|^{1/2})) e^{−(1/2)(X−M)^t Σ^{−1} (X−M)}

where n is the dimension of the column vector X; the column vector M, with components (m1, . . . , mn), is called the mean vector; (X−M)^t is the transpose of the vector (X−M); Σ is the covariance matrix of the distribution (an n × n symmetric, positive definite matrix); Σ^{−1} is the inverse of the covariance matrix; and |Σ| is the determinant of the covariance matrix. The mean vector, M, is the expected value of X (using this distribution); that is, M = E[X]. The components of the covariance matrix are given by:

    σ²ij = E[(xi − mi)(xj − mj)]

In particular, σ²ii is called the variance of xi.

Although the formula appears complex, an intuitive idea for Gaussian distributions can be given when n = 2. We show a two-dimensional Gaussian distribution in Fig. 5.1. A three-dimensional plot of the distribution is shown at the top of the figure, and contours of equal probability are shown at the bottom. In general, the formula in the exponent of the Gaussian distribution is a positive definite quadratic form (that is, its value is always positive); thus equiprobability contours are hyperellipsoids in n-dimensional space. The principal axes are given by the eigenvectors of Σ. If the covariance matrix were diagonal, that is, if all off-diagonal terms were 0, then the major axes of the elliptical contours would be aligned with the coordinate axes. In general, however, the covariance matrix, Σ, is such that the elliptical contours of equal probability are skewed. In any case, the equiprobability contours are all centered on the mean vector, M, which in our figure happens to be at the origin.

Suppose we now assume that the two classes of pattern vectors that we want to distinguish are each distributed according to a Gaussian distribution but with different means and covariance matrices. That is, one class tends to have patterns clustered around one point in the n-dimensional space, and the other class tends to have patterns clustered around another point. We show a two-dimensional instance of this problem in Fig. 5.2. (In that figure, we have plotted the sum of the two distributions.) What decision rule should we use to separate patterns into the two appropriate categories? Substituting the Gaussian distributions into our maximum likelihood formula yields:
Figure 5.1: The Two-Dimensional Gaussian Distribution

Decide category 1 iff:

    (1 / ((2π)^{n/2} |Σ2|^{1/2})) e^{−(1/2)(X−M2)^t Σ2^{−1} (X−M2)} ≤ (1 / ((2π)^{n/2} |Σ1|^{1/2})) e^{−(1/2)(X−M1)^t Σ1^{−1} (X−M1)}

where the category 1 patterns are distributed with mean and covariance M1 and Σ1, and the category 2 patterns are distributed with mean and covariance M2 and Σ2, respectively. The result of the comparison isn't changed if we compare logarithms instead. After some manipulation, our decision rule is then:
Figure 5.2: The Sum of Two Gaussian Distributions

Decide category 1 iff:

    (X − M1)^t Σ1^{−1} (X − M1) < (X − M2)^t Σ2^{−1} (X − M2) + B

where B, a constant bias term, incorporates the logarithms of the fractions preceding the exponentials, etc. When the quadratic forms are multiplied out and represented in terms of the components xi, the decision rule involves a quadric surface (a hyperquadric) in n-dimensional space. The surface separates the space into two parts, one of which contains points that will be assigned to category 1 and the other of which contains points that will be assigned to category 2. The exact shape and position of this hyperquadric is determined by the means and the covariance matrices.
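For the special case of equal priors and equal losses, the rule can be sketched as follows (Python with NumPy, our names; under those assumptions the bias term B reduces to log |Σ2| − log |Σ1|):

    import numpy as np

    def gaussian_decide(x, m1, s1, m2, s2):
        # Decide category 1 iff
        # (x-m1)' S1^{-1} (x-m1) < (x-m2)' S2^{-1} (x-m2) + B.
        d1 = (x - m1) @ np.linalg.inv(s1) @ (x - m1)
        d2 = (x - m2) @ np.linalg.inv(s2) @ (x - m2)
        b = np.log(np.linalg.det(s2)) - np.log(np.linalg.det(s1))
        return 1 if d1 < d2 + b else 2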
It is interesting to look at a special case of this surface. If the covariance matrices for each category are identical and diagonal, with all σii equal to each other, then the contours of equal probability for each of the two distributions are hyperspherical. The quadric forms then become (1/σ²)(X − Mi)^t (X − Mi), where σ² is the common variance, and the decision rule is:

Decide category 1 iff:

    (X − M1)^t (X − M1) < (X − M2)^t (X − M2)

Multiplying out yields:

    X•X − 2X•M1 + M1•M1 < X•X − 2X•M2 + M2•M2

or finally,

Decide category 1 iff:

    X•M1 ≥ X•M2 + Constant

or

    X•(M1 − M2) ≥ Constant

where the constant depends on the lengths of the mean vectors. We see that the optimal decision surface in this special case is a hyperplane. In fact, the hyperplane is perpendicular to the line joining the two means, and the weights in a TLU implementation are equal to the difference in the mean vectors.

If the parameters (Mi, Σi) of the probability distributions of the categories are not known, there are various techniques for estimating them, and then using those estimates in the decision rule. For example, if there are sufficient training patterns, one can use sample means and sample covariance matrices. (Caution: the sample covariance matrix will be singular if the training patterns happen to lie on a subspace of the whole n-dimensional space—as they certainly will, for example, if the number of training patterns is less than n.)

5.1.3 Conditionally Independent Binary Components

Suppose the vector X is a random variable having binary (0,1) components. We continue to denote the two probability distributions by p(X | 1) and p(X | 2). Further suppose that the components of these vectors are conditionally independent given the category. By conditional independence in this case, we mean that the formulas for the distribution can be expanded as follows:
    p(X | i) = p(x1 | i) p(x2 | i) · · · p(xn | i)

for i = 1, 2. Recall the minimum-average-loss decision rule:

Decide category 1 iff:

    λ(1 | 2) p(X | 2) p(2) ≤ λ(2 | 1) p(X | 1) p(1)

Assuming conditional independence of the components and that λ(1 | 2) = λ(2 | 1), we obtain:

Decide category 1 iff:

    p(x1 | 1) p(x2 | 1) · · · p(xn | 1) p(1) ≥ p(x1 | 2) p(x2 | 2) · · · p(xn | 2) p(2)

or iff:

    [p(x1 | 1) p(x2 | 1) · · · p(xn | 1)] / [p(x1 | 2) p(x2 | 2) · · · p(xn | 2)] ≥ p(2)/p(1)

or iff:

    log [p(x1 | 1)/p(x1 | 2)] + log [p(x2 | 1)/p(x2 | 2)] + · · · + log [p(xn | 1)/p(xn | 2)] + log [p(1)/p(2)] ≥ 0

Let us define values of the components of the distribution for specific values of their arguments, xi:

    p(xi = 1 | 1) = pi        p(xi = 0 | 1) = 1 − pi
    p(xi = 1 | 2) = qi        p(xi = 0 | 2) = 1 − qi

Now, we note that since xi can only assume the values of 1 or 0:

    log [p(xi | 1)/p(xi | 2)] = xi log (pi/qi) + (1 − xi) log [(1 − pi)/(1 − qi)]
        = xi log [pi(1 − qi) / (qi(1 − pi))] + log [(1 − pi)/(1 − qi)]

Substituting these expressions into our decision rule yields:

Decide category 1 iff:

    Σ_{i=1}^{n} xi log [pi(1 − qi) / (qi(1 − pi))] + Σ_{i=1}^{n} log [(1 − pi)/(1 − qi)] + log [p(1)/p(2)] ≥ 0

We see that we can achieve this decision with a TLU with weight values as follows:

    wi = log [pi(1 − qi) / (qi(1 − pi))]

for i = 1, . . . , n, and

    wn+1 = log [p(1)/(1 − p(1))] + Σ_{i=1}^{n} log [(1 − pi)/(1 − qi)]

If we do not know the pi, qi and p(1), we can use a sample of labeled training patterns to estimate these parameters.
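The weight formulas above translate into a few lines of code (Python sketch, our names; in practice the pi, qi, and p(1) would be sample estimates):

    import math

    def binary_bayes_tlu(p, q, p1):
        # p[i] = p(x_i = 1 | class 1), q[i] = p(x_i = 1 | class 2),
        # p1 = prior probability of class 1; returns augmented TLU weights.
        w = [math.log(pi * (1 - qi) / (qi * (1 - pi)))
             for pi, qi in zip(p, q)]
        w_thresh = math.log(p1 / (1 - p1)) + sum(
            math.log((1 - pi) / (1 - qi)) for pi, qi in zip(p, q))
        return w + [w_thresh]

A pattern is then assigned to category 1 exactly when the augmented dot product of the pattern with these weights is at least 0, just as for any TLU.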
5.2 Learning Belief Networks

To be added.

5.3 Nearest-Neighbor Methods

Another class of methods can be related to the statistical ones. These are called nearest-neighbor methods or, sometimes, memory-based methods. (A collection of papers on this subject is in [Dasarathy, 1991].) Given a training set Ξ of m labeled patterns, a nearest-neighbor procedure decides that some new pattern, X, belongs to the same category as do its closest neighbors in Ξ. More precisely, a k-nearest-neighbor method assigns a new pattern, X, to that category to which the plurality of its k closest neighbors belong. Using relatively large values of k decreases the chance that the decision will be unduly influenced by a noisy training pattern close to X. But large values of k also reduce the acuity of the method. The k-nearest-neighbor method can be thought of as estimating the values of the probabilities of the classes given X. Of course, the denser are the points around X, and the larger the value of k, the better the estimate.

The distance metric used in nearest-neighbor methods (for numerical attributes) can be simple Euclidean distance. That is, the distance between two patterns (x11, x12, . . . , x1n) and (x21, x22, . . . , x2n) is sqrt(Σ_{j=1}^{n} (x1j − x2j)^2). This distance measure is often modified by scaling the features so that the spread of attribute values along each dimension is approximately the same; in that case, the distance between the two vectors would be sqrt(Σ_{j=1}^{n} aj^2 (x1j − x2j)^2), where aj is the scale factor for dimension j.

An example of a nearest-neighbor decision problem is shown in Fig. 5.3. In the figure the class of a training pattern is indicated by the number next to it. (With k = 8, four of the eight patterns closest to X belong to category 1, two to category 2, and two to category 3; the plurality are in category 1, so we decide X is in category 1.)

Figure 5.3: An 8-Nearest-Neighbor Decision

Nearest-neighbor methods are memory intensive because a large number of training patterns must be stored to achieve good generalization. Since memory cost is now reasonably low, the method and its derivatives have seen several practical applications. (See, for example, [Moore, 1992, Moore, et al., 1994].) Also, the distance calculations required to find nearest neighbors can often be efficiently computed by kd-tree methods [Friedman, et al., 1977].
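A direct sketch of the k-nearest-neighbor decision (Python; brute-force search rather than kd-trees, and our own names):

    def knn_classify(x, patterns, labels, k=8, scale=None):
        # Scaled Euclidean distance; `scale` holds the factors a_j.
        def dist2(a, b):
            s = scale or [1.0] * len(a)
            return sum((sj * (aj - bj)) ** 2
                       for sj, aj, bj in zip(s, a, b))
        nearest = sorted(range(len(patterns)),
                         key=lambda i: dist2(x, patterns[i]))[:k]
        votes = {}
        for i in nearest:
            votes[labels[i]] = votes.get(labels[i], 0) + 1
        return max(votes, key=votes.get)   # plurality class among the k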
A theorem by Cover and Hart [Cover & Hart, 1967] relates the performance of the 1-nearest-neighbor method to the performance of a minimum-probability-of-error classifier. That is, for categories i = 1, . . . , R, the minimum-probability-of-error classifier would assign a new pattern X to that category that maximized p(i)p(X | i), where p(i) is the a priori probability of category i, and p(X | i) is the probability (or probability density function) of X given that X belongs to category i. Suppose the probability of error in classifying patterns of such a minimum-probability-of-error classifier is ε. The Cover-Hart theorem states that under very mild conditions (having to do with the smoothness of probability density functions) the probability of error, εnn, of a 1-nearest-neighbor classifier is bounded by:

    ε ≤ εnn ≤ ε (2 − ε R/(R − 1)) ≤ 2ε

where R is the number of categories. Also see [Aha, 1991]. (See [Baum, 1994] for a theoretical analysis of error rate as a function of the number of training patterns for the case in which points are randomly distributed on the surface of a unit sphere and the underlying function is linearly separable.)

5.4 Bibliographical and Historical Remarks

To be added.
Chapter 6

Decision Trees

6.1 Definitions

A decision tree (generally defined) is a tree whose internal nodes are tests (on input patterns) and whose leaf nodes are categories (of patterns). We show an example in Fig. 6.1. A decision tree assigns a class number (or output) to an input pattern by filtering the pattern down through the tests in the tree. Each test has mutually exclusive and exhaustive outcomes. For example, test T2 in the tree of Fig. 6.1 has three outcomes: the leftmost one assigns the input pattern to class 3, the middle one sends the input pattern down to test T4, and the rightmost one assigns the pattern to class 1. We follow the usual convention of depicting the leaf nodes by the class number.¹ Note that in discussing decision trees we are not limited to implementing Boolean functions—they are useful for general, categorically valued functions.

Figure 6.1: A Decision Tree

There are several dimensions along which decision trees might differ:

a. The tests might be multivariate (testing on several features of the input at once) or univariate (testing on only one of the features).

b. The tests might have two outcomes or more than two. (If all of the tests have two outcomes, we have a binary decision tree.)

c. The features or attributes might be categorical or numeric. (Binary-valued ones can be regarded as either.)

¹One of the researchers who has done a lot of work on learning decision trees is Ross Quinlan. Quinlan distinguishes between classes and categories. He calls the subsets of patterns that filter down to each tip categories, and subsets of patterns having the same label classes. In Quinlan's terminology, our example tree has nine categories and three classes. We will not make this distinction, however, but will use the words "category" and "class" interchangeably to refer to what Quinlan calls "class."
d. We might have two classes or more than two. If we have two classes and binary inputs, the tree implements a Boolean function and is called a Boolean decision tree.

It is straightforward to represent the function implemented by a univariate Boolean decision tree in DNF form. The DNF form implemented by such a tree can be obtained by tracing down each path leading to a tip node corresponding to an output value of 1, forming the conjunction of the tests along this path, and then taking the disjunction of these conjunctions. We show an example in Fig. 6.2.

The kDL class of Boolean functions can be implemented by a multivariate decision tree having the (highly unbalanced) form shown in Fig. 6.3. Each test, ci, is a term of size k or less. The vi all have values of 0 or 1.

6.2 Supervised Learning of Univariate Decision Trees

Several systems for learning decision trees have been proposed. Prominent among these are ID3 and its new version, C4.5 [Quinlan, 1986, Quinlan, 1993], and CART [Breiman, et al., 1984]. We discuss here only batch methods, although incremental ones have also been proposed [Utgoff, 1989].
Figure 6.2: A Decision Tree Implementing a DNF Function

6.2.1 Selecting the Type of Test

As usual, we have n features or attributes. If the attributes are binary, the tests are simply whether the attribute's value is 0 or 1. If the attributes are categorical, but non-binary, the tests might be formed by dividing the attribute values into mutually exclusive and exhaustive subsets. A decision tree with such tests is shown in Fig. 6.4. If the attributes are numeric, the tests might involve "interval tests," for example 7 ≤ xi ≤ 13.

6.2.2 Using Uncertainty Reduction to Select Tests

The main problem in learning decision trees for the binary-attribute case is selecting the order of the tests. For categorical and numeric attributes, we must also decide what the tests should be (besides selecting the order). Several techniques have been tried; the most popular one is, at each stage, to select that test that maximally reduces an entropy-like measure. We show how this technique works for the simple case of tests with binary outcomes. Extension to multiple-outcome tests is straightforward computationally but gives poor results because entropy is always decreased by having more outcomes.

The entropy or uncertainty still remaining about the class of a pattern—knowing that it is in some set, Ξ, of patterns—is defined as:

    H(Ξ) = − Σ_i p(i | Ξ) log2 p(i | Ξ)
where p(i|Ξ) is the probability that a pattern drawn at random from Ξ belongs to class i, and the summation is over all of the classes. We want to select tests at each node such that, as we travel down the decision tree, the uncertainty about the class of a pattern becomes less and less.

Since we do not in general have the probabilities p(i|Ξ), we estimate them by sample statistics. Although these estimates might be errorful, they are nevertheless useful in estimating uncertainties. Let p̂(i|Ξ) be the number of patterns in Ξ belonging to class i divided by the total number of patterns in Ξ. Then an estimate of the uncertainty is:

Ĥ(Ξ) = − Σ_i p̂(i|Ξ) log2 p̂(i|Ξ)

For simplicity, from now on we'll drop the "hats" and use sample statistics as if they were real probabilities.

Figure 6.3: A Decision Tree Implementing a Decision List

If we perform a test, T, having k possible outcomes on the patterns in Ξ, we will create k subsets, Ξ1, Ξ2, ..., Ξk. Suppose that ni of the patterns in Ξ are in Ξi for i = 1, ..., k. (Some ni may be 0.) If we knew that T applied to a pattern in Ξ resulted in the jth outcome (that is, we knew that the pattern was in Ξj), the uncertainty about its class would be:

H(Ξj) = − Σ_i p(i|Ξj) log2 p(i|Ξj)

and the reduction in uncertainty (beyond knowing only that the pattern was in Ξ) would be:
H(Ξ) − H(Ξj)

Of course we cannot say that the test T is guaranteed always to produce that amount of reduction in uncertainty, because we don't know that the result of the test will be the jth outcome. But we can estimate the average uncertainty over all the Ξj, by:

E[HT(Ξ)] = Σ_j p(Ξj) H(Ξj)

where by HT(Ξ) we mean the average uncertainty after performing test T on the patterns in Ξ, p(Ξj) is the probability that the test has outcome j, and the sum is taken from 1 to k. Again, we don't know the probabilities p(Ξj), but we can use sample values. The estimate p̂(Ξj) of p(Ξj) is just the number of those patterns in Ξ that have outcome j divided by the total number of patterns in Ξ.

The average reduction in uncertainty achieved by test T (applied to patterns in Ξ) is then:

RT(Ξ) = H(Ξ) − E[HT(Ξ)]

An important family of decision tree learning algorithms selects for the root of the tree that test that gives maximum reduction of uncertainty, and then applies this criterion recursively until some termination condition is met (which we shall discuss in more detail later).

Figure 6.4: A Decision Tree with Categorical Attributes

The uncertainty calculations are particularly simple when the tests have binary outcomes and when the attributes have binary values.
We'll give a simple example to illustrate how the test selection mechanism works in that case. Suppose we want to use the uncertainty-reduction method to build a decision tree to classify the following patterns:

pattern      class
(0, 0, 0)      0
(0, 0, 1)      0
(0, 1, 0)      0
(0, 1, 1)      0
(1, 0, 0)      0
(1, 0, 1)      1
(1, 1, 0)      0
(1, 1, 1)      1

What single test, x1, x2, or x3, should be performed first? The illustration in Fig. 6.5 gives geometric intuition about the problem.

Figure 6.5: Eight Patterns to be Classified by a Decision Tree

The initial uncertainty for the set, Ξ, containing all eight points is:

H(Ξ) = −(6/8) log2 (6/8) − (2/8) log2 (2/8) = 0.81

Next, we calculate the uncertainty reduction if we perform x1 first. The left-hand branch has only patterns belonging to class 0 (we call them the set Ξl), and the right-hand branch (Ξr) has two patterns in each class. So, the uncertainty of the left-hand branch is:
Hx1(Ξl) = −(4/4) log2 (4/4) − (0/4) log2 (0/4) = 0

And the uncertainty of the right-hand branch is:

Hx1(Ξr) = −(2/4) log2 (2/4) − (2/4) log2 (2/4) = 1

Half of the patterns "go left" and half "go right" on test x1. Thus, the average uncertainty after performing the x1 test is:

1/2 Hx1(Ξl) + 1/2 Hx1(Ξr) = 0.5

Therefore the uncertainty reduction on Ξ achieved by x1 is:

Rx1(Ξ) = 0.81 − 0.5 = 0.31

By similar calculations, we see that the test x3 achieves exactly the same uncertainty reduction, but x2 achieves no reduction whatsoever. Thus, our "greedy" algorithm for selecting a first test would select either x1 or x3. Suppose x1 is selected. The uncertainty-reduction procedure would then select x3 as the next test, and the decision tree that this procedure creates thus implements the Boolean function f = x1x3.
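To make the selection mechanism concrete, the computation just carried out can be written in a few lines of Python. This sketch is ours (the draft contains no such code), and the function names are invented for illustration:

import math

def entropy(labels):
    # H(Xi) = - sum over classes i of p(i|Xi) log2 p(i|Xi),
    # estimated from the sample, as in the text
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def uncertainty_reduction(patterns, labels, attribute):
    # R_T = H(Xi) minus the average uncertainty after a binary test
    # on `attribute`
    branches = {0: [], 1: []}
    for pattern, label in zip(patterns, labels):
        branches[pattern[attribute]].append(label)
    average = sum(len(b) / len(labels) * entropy(b)
                  for b in branches.values() if b)
    return entropy(labels) - average

patterns = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1),
            (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
labels = [0, 0, 0, 0, 0, 1, 0, 1]        # the eight patterns above
for a in range(3):
    print("R_x%d = %.2f" % (a + 1, uncertainty_reduction(patterns, labels, a)))
# prints R_x1 = 0.31, R_x2 = 0.00, R_x3 = 0.31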
6.2.3 Non-Binary Attributes

If the attributes are non-binary, we can still use the uncertainty-reduction technique to select tests. But now, in addition to selecting an attribute, we must also select a test on that attribute. Suppose for example that the value of an attribute is a real number and that the test to be performed is to set a threshold and to test to see if the number is greater than or less than that threshold. In principle, given a set of labeled patterns, we can measure the uncertainty reduction achieved by every possible threshold (there are only a finite number of thresholds that give different test results if there are only a finite number of training patterns). Similarly, if an attribute is categorical (with a finite number of categories), there are only a finite number of mutually exclusive and exhaustive subsets into which the values of the attribute can be split. We can calculate the uncertainty reduction for each split. See [Quinlan, 1986, sect. 4] for another example.

6.3 Networks Equivalent to Decision Trees

Since univariate Boolean decision trees are implementations of DNF functions, they are also equivalent to two-layer, feedforward neural networks. We show an example in Fig. 6.6. The decision tree at the left of the figure implements the same function as the network at the right of the figure.

Figure 6.6: A Univariate Decision Tree and its Equivalent Network

Multivariate decision trees with linearly separable functions at each node can also be implemented by feedforward networks, in this case three-layer ones. We show an example in Fig. 6.7 in which the linearly separable functions, each implemented by a TLU, are indicated by L1, L2, L3, and L4. The final layer has fixed weights (it forms the disjunction of the conjunctions), but the weights in the first two layers must be trained.

Figure 6.7: A Multivariate Decision Tree and its Equivalent Network

Of course, when implemented as a network, all of the features are evaluated in parallel for any input pattern, whereas when implemented as a decision tree only those features on the branch traveled down by the input pattern need to be evaluated. The decision-tree induction methods discussed in this chapter can thus be thought of as particular ways to establish the structure and the weight values for networks. Different approaches to training procedures have been discussed by [Brent, 1990], by [John, 1995], and (for a special case) by [Marchand & Golea, 1993].
6.4 Overfitting and Evaluation

6.4.1 Overfitting

In supervised learning, we must choose a function to fit the training set from among a set of hypotheses. We have already showed that generalization is impossible without bias. When we know a priori that the function we are trying to guess belongs to a small subset of all possible functions, then, even with an incomplete set of training samples, it is possible to reduce the subset of functions that are consistent with the training set sufficiently to make useful guesses about the value of the function for inputs not in the training set. And, the larger the training set, the more likely it is that even a randomly selected consistent function will have appropriate outputs for patterns not yet seen. That is, if the training set is not sufficiently large compared with the size of the hypothesis space, there will still be too many consistent functions for us to make useful guesses, and generalization performance will be poor. When there are too many hypotheses that are consistent with the training set, we say that we are overfitting the training data. Overfitting is a problem that we must address for all learning methods.

Since a decision tree of sufficient size can implement any Boolean function, there is a danger of overfitting, especially if the training set is small. That is, even if the decision tree is synthesized to classify all the members of the training set correctly, it might perform poorly on new patterns that were not used to build the decision tree. Several techniques have been proposed to avoid overfitting, and we shall examine some of them here. They make use of methods for estimating how well a given decision tree might generalize; we describe these methods next.

6.4.2 Validation Methods

The most straightforward way to estimate how well a hypothesized function (such as a decision tree) performs on a test set is to test it on the test set! But, if we are comparing several learning systems (for example, if we are comparing different decision trees) so that we can select the one that performs the best on the test set, then such a comparison amounts to "training on the test data." True, training on the test data enlarges the training set, with a consequent expected improvement in generalization, but there is still the danger of overfitting if we are comparing several different learning systems. Another technique is to split the training set, using (say) two-thirds for training and the other third for estimating generalization performance. But splitting reduces the size of the training set and thereby increases the possibility of overfitting. We next describe some validation techniques that attempt to avoid these problems.
Cross-Validation

In cross-validation, we divide the training set Ξ into K mutually exclusive and exhaustive equal-sized subsets: Ξ1, ..., ΞK. For each subset, Ξi, train on the union of all of the other subsets, and empirically determine the error rate, εi, on Ξi. (The error rate is the number of classification errors made on Ξi divided by the number of patterns in Ξi.) An estimate of the error rate that can be expected on new patterns of a classifier trained on all the patterns in Ξ is then the average of the εi. (A sketch of this computation appears below.)

Leave-one-out Validation

Leave-one-out validation is the same as cross-validation for the special case in which K equals the number of patterns in Ξ, and each Ξi consists of a single pattern. When testing on each Ξi, we simply note whether or not a mistake was made. We count the total number of mistakes and divide by K to get the estimated error rate. This type of validation is, of course, more expensive computationally, but useful when a more accurate estimate of the error rate for a classifier is needed.

Describe "bootstrapping" also [Efron, 1982].
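The cross-validation estimate just described can be sketched in Python as follows. The sketch is ours; the train and classify procedures (for example, a decision-tree inducer and its classifier) are assumed to be supplied by the caller:

import random

def cross_validation_error(patterns, labels, train, classify, K):
    # Divide the data into K mutually exclusive, (nearly) equal-sized
    # subsets; train on the union of the other K-1 subsets; count the
    # mistakes on each held-out subset.  K = len(patterns) gives
    # leave-one-out validation.
    order = list(range(len(patterns)))
    random.shuffle(order)
    folds = [order[i::K] for i in range(K)]
    mistakes = 0
    for fold in folds:
        held_out = set(fold)
        training = [(patterns[i], labels[i]) for i in order
                    if i not in held_out]
        h = train(training)
        mistakes += sum(1 for i in fold
                        if classify(h, patterns[i]) != labels[i])
    return mistakes / len(patterns)       # the estimated error rate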
6.4.3 Avoiding Overfitting in Decision Trees

Near the tips of a decision tree there may be only a few patterns per node. For these nodes, we are selecting a test based on a very small sample, and thus we are likely to be overfitting. This problem can be dealt with by terminating the test-generating procedure before all patterns are perfectly split into their separate categories. In that case, a leaf node may contain patterns of more than one class, but we can decide in favor of the most numerous class. This procedure will result in a few errors, but often accepting a small number of errors on the training set results in fewer errors on a testing set.

One can use cross-validation techniques to determine when to stop splitting nodes: if the cross-validation error increases as a consequence of a node split, then don't split. One has to be careful about when to stop, though, because underfitting usually leads to more errors on test sets than does overfitting. There is a general rule that the lowest error rate attainable by a subtree of a fully expanded tree can be no less than 1/2 of the error rate of the fully expanded tree [Weiss & Kulikowski, 1991, page 126]. This behavior is illustrated in Fig. 6.8.

Figure 6.8: Determining When Overfitting Begins (Iris data; training errors decrease with the number of terminal nodes while validation errors eventually rise. From Weiss, S., and Kulikowski, C., Computer Systems that Learn, Morgan Kaufmann, 1991.)

Rather than stopping the growth of a decision tree, one might grow it to its full size and then prune away leaf nodes and their ancestors until cross-validation accuracy no longer increases. This technique is called post-pruning. Various techniques for pruning are discussed in [Weiss & Kulikowski, 1991].

6.4.4 Minimum-Description-Length Methods

An important tree-growing and pruning technique is based on the minimum-description-length (MDL) principle. (MDL is an important idea that extends beyond decision-tree methods [Rissanen, 1978].) The idea is that the simplest decision tree that can predict the classes of the training patterns is the best one.

Consider the problem of transmitting just the labels of a training set of patterns, assuming that the receiver of this information already has the ordered set of patterns. If there are m patterns, each labeled by one of R classes, and assuming equally probable classes, this transmission would require m log2 R bits. Or, one could transmit a decision tree that correctly labelled all of the patterns. The number of bits that this transmission would require depends on the technique for encoding decision trees and on the size of the tree. If the tree is small and accurately classifies all of the patterns, it might be more economical to transmit the tree than to transmit the labels directly. In between these extremes, we might transmit a tree plus a list of labels of all the patterns that the tree misclassifies. In general, the number of bits (or description length of the binary encoded message) is t + d, where t is the length of the message required to transmit the tree, and d is the length of the message required to transmit the labels of the patterns misclassified by the tree.
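As a rough illustration of the comparison being made, the following Python lines compute the two description lengths. The exception code used here (identify each misclassified pattern and resend its label) is a simplification of our own; Quinlan and Rivest use more refined encodings:

import math

def labels_alone_bits(m, R):
    # transmit m labels drawn from R equally probable classes
    return m * math.log2(R)

def tree_plus_exceptions_bits(t, num_exceptions, m, R):
    # t bits for the tree, plus log2(m) bits to identify each
    # misclassified pattern and log2(R) bits for its correct label
    return t + num_exceptions * (math.log2(m) + math.log2(R))

print(labels_alone_bits(1000, 4))                    # 2000.0 bits
print(tree_plus_exceptions_bits(300, 20, 1000, 4))   # about 539 bits
# By the MDL principle, the tree is preferred whenever its total
# description length t + d is the smaller of the two.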
That tree associated with the smallest value of t + d is, then, the best or most economical tree. Quinlan and Rivest [Quinlan & Rivest, 1989] have proposed techniques for encoding decision trees and lists of exception labels and for calculating the description length (t + d) of these trees and labels. They then use the description length as a measure of quality of a tree in two ways:

a. In growing a tree, they use the reduction in description length to select tests (instead of reduction in uncertainty).

b. In pruning a tree after it has been grown to zero error, they prune away those nodes (starting at the tips) that achieve a decrease in the description length.

These techniques compare favorably with the uncertainty-reduction method, although they are quite sensitive to the coding schemes used. The MDL method is one way of adhering to the Occam's razor principle.

6.4.5 Noise in Data

Noise in the data means that one must inevitably accept some number of errors, depending on the noise level. Refusal to tolerate errors on the training set when there is noise leads to the problem of "fitting the noise." Dealing with noise, then, requires accepting some errors at the leaf nodes, just as does the fact that there are only a small number of patterns at leaf nodes.

6.5 The Problem of Replicated Subtrees

Decision trees are not the most economical means of implementing some Boolean functions. Consider, for example, the function f = x1x2 + x3x4. A decision tree for this function is shown in Fig. 6.9. Notice the replicated subtrees shown circled. The DNF form equivalent to the function implemented by this decision tree is f = x1x2 + x1(¬x2)x3x4 + (¬x1)x3x4. This DNF form is non-minimal (in the number of disjunctions) and is equivalent to f = x1x2 + x3x4.

The need for replication means that it takes longer to learn the tree and that subtrees replicated further down the tree must be learned using a smaller training subset. This problem is sometimes called the fragmentation problem.

Several approaches might be suggested for dealing with fragmentation. One is to attempt to build a decision graph instead of a tree [Oliver, Dowe, & Wallace, 1992, Kohavi, 1994]. A decision graph that implements the same decisions as that of the decision tree of Fig. 6.9 is shown in Fig. 6.10. Another approach is to use multivariate (rather than univariate) tests at each node. In our example of learning f = x1x2 + x3x4, if we had a test for x1x2
and a test for x3x4, the decision tree could be much simplified, as shown in Fig. 6.11. Several researchers have proposed techniques for learning decision trees in which the tests at each node are linearly separable functions. [John, 1995] gives a nice overview (with several citations) of learning such linear discriminant trees and presents a method based on "soft entropy."

Figure 6.9: A Decision Tree with Subtree Replication

A third method for dealing with the replicated subtree problem involves extracting propositional "rules" from the decision tree. The rules will have as antecedents the conjunctions that lead down to the leaf nodes, and as consequents the name of the class at the corresponding leaf node. An example rule from the tree with the repeating subtree of our example would be: x1 ∧ ¬x2 ∧ x3 ∧ x4 ⊃ 1. Quinlan [Quinlan, 1987] discusses methods for reducing a set of rules to a simpler set by 1) eliminating from the antecedent of each rule any "unnecessary" conjuncts, and then 2) eliminating "unnecessary" rules. A conjunct or rule is determined to be unnecessary if its elimination has little effect on classification accuracy, as determined by a chi-square test. After a rule set is processed, it might be the case that more than one rule is "active" for any given pattern, and care must be taken that the active rules do not conflict in their decision about the class of a pattern.
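The extraction of rules from a binary decision tree is easily sketched. The tuple encoding of trees below is an ad hoc device of ours, not anything from the literature:

def extract_rules(node, path=()):
    # A node is either a class label (a leaf) or a triple
    # (variable, left_subtree, right_subtree); going left means the
    # variable is 0, going right means it is 1.  Each leaf yields one
    # rule: the conjunction of tests along the path implies the class.
    if not isinstance(node, tuple):
        return [(path, node)]
    variable, left, right = node
    return (extract_rules(left, path + ((variable, 0),)) +
            extract_rules(right, path + ((variable, 1),)))

# The replicated-subtree example of Fig. 6.9 (f = x1 x2 + x3 x4):
x3x4_subtree = ("x3", 0, ("x4", 0, 1))
tree = ("x1", x3x4_subtree, ("x2", x3x4_subtree, 1))
for antecedent, klass in extract_rules(tree):
    if klass == 1:
        print(" AND ".join("%s=%d" % pair for pair in antecedent), "-> 1")
# prints the three rules corresponding to
# f = (not x1) x3 x4  +  x1 (not x2) x3 x4  +  x1 x2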
Figure 6.10: A Decision Graph

Figure 6.11: A Multivariate Decision Tree

6.6 The Problem of Missing Attributes

To be added.

6.7 Comparisons

Several experimenters have compared decision-tree, neural-net, and nearest-neighbor classifiers on a wide variety of problems. For a comparison of neural nets versus decision trees, see, for example, [Dietterich, et al., 1990, Shavlik, Mooney, & Towell, 1991, Quinlan, 1994]. In their StatLog project, [Taylor, Michie, & Spiegalhalter, 1994] give thorough comparisons of several machine learning algorithms on several different types of problems. There seems
to be no single type of classifier that is best for all problems. And, there do not seem to be any general conclusions that would enable one to say which classifier method is best for which sorts of classification problems, although [Quinlan, 1994] does provide some intuition about properties of problems that might render them ill suited for decision trees, on the one hand, or backpropagation, on the other.

6.8 Bibliographical and Historical Remarks

To be added.
Chapter 7

Inductive Logic Programming

There are many different representational forms for functions of input variables. So far, we have seen (Boolean) algebraic expressions, decision trees, and neural networks, plus other computational mechanisms such as techniques for computing nearest neighbors. Of course, the representation most important in computer science is a computer program. For example, a Lisp predicate of binary-valued inputs computes a Boolean function of those inputs. Similarly, a logic program (whose ordinary application is to compute bindings for variables) can also be used simply to decide whether or not a predicate has value True (T) or False (F). For example, the Boolean exclusive-or (odd parity) function of two variables can be computed by the following logic program:

Parity(x,y) :- True(x), ¬True(y)
            :- True(y), ¬True(x)

The unary function "True" returns T if and only if the value of its argument is T. (We now think of Boolean functions and arguments as having values of T and F instead of 0 and 1.) We follow Prolog syntax (see, for example, [Mueller & Page, 1988]), except that our convention is to write variables as strings beginning with lower-case letters and predicates as strings beginning with upper-case letters. Programs will be written in "typewriter" font.

The subspecialty of machine learning that deals with learning logic programs is called inductive logic programming (ILP) [Lavrač & Džeroski, 1994]. In this chapter, we consider the matter of learning logic programs given a set of variable values for which the logic program should return T (the positive instances) and a set of variable values for which it should return F (the negative instances). As with any learning problem, this one can be quite complex and intractably difficult unless we constrain it with biases of some sort.
As an example of an ILP problem, suppose we are trying to induce a function Nonstop(x,y), that is to have value T for pairs of cities connected by a nonstop air flight and F for all other pairs of cities. We are given a training set consisting of positive and negative examples. As positive examples, we might have (A, A1) and some other pairs; as negative examples, we might have (A1, A2) and some other pairs. In ILP, we usually have additional information about the examples, called "background knowledge." In our air-flight problem, the background information might be such ground facts as Hub(A), Hub(B), Satellite(A1,A), plus others. (Hub(A) is intended to mean that the city denoted by A is a hub city, and Satellite(A1,A) is intended to mean that the city denoted by A1 is a satellite of the city denoted by A.) From these training facts, we want to induce a program Nonstop(x,y), written in terms of the background relations Hub and Satellite, that has value T for all the positive instances and has value F for all the negative instances. Depending on the exact set of examples, we might induce the program:

Nonstop(x,y) :- Hub(x), Hub(y)
             :- Satellite(x,y)
             :- Satellite(y,x)

which would have value T if both of the two cities were hub cities or if one were a satellite of the other. Using the given background facts, the program above would return T for the input (A, B), for example. As with other learning problems, we want the induced program to generalize well; that is, if presented with arguments not represented in the training set (but for which we have the needed background knowledge), we would like the function to guess well.
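For concreteness, the induced program can be rendered directly as a procedure. The following Python sketch is ours and includes only the ground facts mentioned above:

HUBS = {"A", "B"}                       # from the facts Hub(A), Hub(B)
SATELLITES = {("A1", "A")}              # from the fact Satellite(A1,A)

def nonstop(x, y):
    # Nonstop(x,y) :- Hub(x), Hub(y)
    #              :- Satellite(x,y)
    #              :- Satellite(y,x)
    return ((x in HUBS and y in HUBS)
            or (x, y) in SATELLITES
            or (y, x) in SATELLITES)

print(nonstop("A", "B"))     # True: both cities are hubs
print(nonstop("A", "A1"))    # True: A1 is a satellite of A
print(nonstop("A1", "B1"))   # False, given only the facts listed here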
There are a variety of possible biases (called language biases) for constraining the search for a program. One might restrict programs to Horn clauses, not allow recursion, not allow functions, and so on.

7.1 Notation and Definitions

In evaluating logic programs in ILP, we implicitly append the background facts to the program and adopt the usual convention that a program has value T for a set of inputs if and only if the program interpreter returns T when actually running the program (with background facts appended) on those inputs; otherwise it has value F. If a logic program, π, returns T for a set of arguments X, we say that the program covers the arguments and write covers(π, X). Following our terminology introduced in connection with version spaces, we will say that a program is sufficient if it covers all of the positive instances and that it is necessary if it does not cover any of the negative instances. (That is, a program implements a sufficient condition that a training instance is positive if it covers all of the positive training instances; it implements a necessary condition if it covers none of the negative instances.) In the noiseless case, we want to induce a program that is both sufficient and necessary, in which case we will call it consistent. With imperfect (noisy) training sets, we might relax this criterion and settle for a program that covers all but some fraction of the positive instances while allowing it to cover some fraction of the negative instances. We illustrate these definitions schematically in Fig. 7.1.

Figure 7.1: Sufficient, Necessary, and Consistent Programs

If a program is sufficient but not necessary, it can be made to cover fewer examples by specializing it. Conversely, if it is necessary but not sufficient, it can be made to cover more examples by generalizing it.

Suppose we are attempting to induce a logic program to compute the relation ρ. The most general logic program, which is certainly sufficient, is the one that has value T for all inputs, namely a single clause with an empty body, [ρ :- ], which is called a fact in Prolog. The most special logic program, which is certainly necessary, is the one that has value F for all inputs, namely [ρ :- F]. Two of the many different ways to search for a consistent logic program are: 1) start with [ρ :- ] and specialize until the program is consistent, or 2) start with [ρ :- F] and generalize until the program is consistent. We will be discussing a method that starts with [ρ :- ], specializes until the program is necessary (but might no longer be sufficient), then re-achieves sufficiency in stages by generalizing, ensuring within each stage that the program remains necessary (by specializing).

7.2 A Generic ILP Algorithm

Since the primary operators in our search for a consistent program are specialization and generalization, we must next discuss those operations.
There are three major ways in which a logic program might be generalized:

a. Replace some terms in a program clause by variables. (Readers familiar with substitutions in the predicate calculus will note that this process is the inverse of substitution.)

b. Remove literals from the body of a clause.

c. Add a clause to the program.

Analogously, there are three ways in which a logic program might be specialized:

a. Replace some variables in a program clause by terms (a substitution).

b. Add literals to the body of a clause.

c. Remove a clause from the program.

We will be presenting an ILP learning method that adds clauses to a program when generalizing and that adds literals to the body of a clause when specializing. When we add a clause, we will always add the clause [ρ :- ] and then specialize it by adding literals to the body. Thus, we need only describe the process for adding literals.

Clauses can be partially ordered by the specialization relation. In general, clause c1 is more special than clause c2 if c2 ⊨ c1. A special case, which is what we use here, is that a clause c1 is more special than a clause c2 if the set of literals in the body of c2 is a subset of those in c1. This ordering relation can be used in a structure of partially ordered clauses, called the refinement graph, that is similar to a version space. Clause c1 is an immediate successor of clause c2 in this graph if and only if clause c1 can be obtained from clause c2 by adding a literal to the body of c2. A refinement graph then tells us the ways in which we can specialize a clause by adding a literal to it.

Of course there are unlimited possible literals we might add to the body of a clause. Practical ILP systems restrict the literals in various ways. Typical allowed additions are:

a. Literals used in the background knowledge.

b. Literals whose arguments are a subset of those in the head of the clause.

c. Literals that introduce a new distinct variable different from those in the head of the clause.

d. A literal that equates a variable in the head of the clause with another such variable or with a term mentioned in the background knowledge. (This possibility is equivalent to forming a specialization by making a substitution.)
e. A literal that is the same (except for its arguments) as that in the head of the clause. (This possibility admits recursive programs, which are disallowed in some systems.)

We can illustrate these possibilities using our air-flight example. We start with the program [Nonstop(x,y) :- ]. The literals used in the background knowledge are Hub and Satellite. Thus the literals that we might consider adding are:

Hub(x)
Hub(y)
Hub(z)
Satellite(x,y)
Satellite(y,x)
Satellite(x,z)
Satellite(z,y)
(x = y)

(If recursive programs are allowed, we could also add the literals Nonstop(x,z) and Nonstop(z,y).) These possibilities are among those illustrated in the refinement graph shown in Fig. 7.2. Whatever restrictions on additional literals are imposed, they are all syntactic ones from which the successors in the refinement graph are easily computed. ILP programs that follow the approach we are discussing (of specializing clauses by adding a literal) thus have well-defined methods of computing the possible literals to add to a clause; a small generator of this sort is sketched below.

Figure 7.2: Part of a Refinement Graph
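The following Python sketch of such a generator, for a two-place head predicate, is ours; it is deliberately liberal (it over-generates, and practical systems filter the result further):

from itertools import product

def candidate_literals(head_vars, background, head_pred=None):
    # Background literals over the head variables plus one new variable
    # z, an equality literal, and (optionally) recursive literals.
    variables = list(head_vars) + ["z"]
    candidates = []
    for predicate, arity in background:   # e.g. ("Hub", 1), ("Satellite", 2)
        for args in product(variables, repeat=arity):
            candidates.append((predicate,) + args)
    candidates.append(("=",) + tuple(head_vars))
    if head_pred is not None:             # admit recursion
        candidates.append((head_pred, head_vars[0], "z"))
        candidates.append((head_pred, "z", head_vars[1]))
    return candidates

print(candidate_literals(("x", "y"), [("Hub", 1), ("Satellite", 2)]))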
Now we are ready to write down a simple generic algorithm for inducing a logic program, π, for a relation ρ. We are given a training set, Ξ, of argument sets, some known to be in the relation ρ and some not in ρ; Ξ+ are the positive instances, and Ξ− are the negative instances. The algorithm has an outer loop in which it successively adds clauses to make π more and more sufficient. It has an inner loop for constructing a clause, c, that is more and more necessary and in which it refers only to a subset, Ξcur, of the training instances. (The positive instances in Ξcur will be denoted by Ξ+cur, and the negative ones by Ξ−cur.) The algorithm is also given background relations and the means for adding literals to a clause. It uses a logic program interpreter to compute whether or not the program it is inducing covers training instances. The algorithm can be written as follows:

Generic ILP Algorithm
(Adapted from [Lavrač & Džeroski, 1994, p. 60].)

Initialize Ξcur := Ξ.
Initialize π := empty set of clauses.
repeat                        [The outer loop works to make π sufficient.]
    Initialize c := ρ :- .
    repeat                    [The inner loop works to make c necessary.]
        Select a literal l to add to c.   [This is a nondeterministic choice point.]
        Assign c := c, l.
    until c covers no negative instances in Ξcur   [that is, until c is necessary].
    Assign π := π, c.         [We add the clause c to the program.]
    Assign Ξcur := Ξcur − (the positive instances in Ξcur covered by π).
until π covers all the positive instances in Ξ     [that is, until π is sufficient].

(The termination tests for the inner and outer loops can be relaxed as appropriate for the case of noisy instances.)
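The following Python rendering of the generic algorithm is ours. The interpreter (covers_clause and covers_program) and the literal-selection rule are left to the caller, mirroring the nondeterministic choice point above:

def generic_ilp(pos, neg, covers_clause, covers_program, choose_literal):
    # pos, neg: the positive and negative training instances.
    # covers_clause(c, x): does the single clause c cover instance x?
    # covers_program(pi, x): does some clause of program pi cover x?
    program = []                          # pi := empty set of clauses
    pos_cur = list(pos)
    while pos_cur:                        # outer loop: achieve sufficiency
        clause = []                       # c := rho :- (empty body)
        neg_cur = list(neg)
        while neg_cur:                    # inner loop: achieve necessity
            clause.append(choose_literal(clause, pos_cur, neg_cur))
            neg_cur = [x for x in neg_cur if covers_clause(clause, x)]
        program.append(clause)
        pos_cur = [x for x in pos_cur if not covers_program(program, x)]
    return program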
7.3 An Example

We illustrate how the algorithm works by returning to our example of airline flights. Consider the portion of an airline route map shown in Fig. 7.3. Cities A, B, and C are "hub" cities, and we know that there are nonstop flights between all hub cities (even those not shown on this portion of the route map). The other cities are "satellites" of one of the hubs, and we know that there are nonstop flights between each satellite city and its hub. There may be other cities not shown on this map, so the training set does not necessarily exhaust all the cities.

Figure 7.3: Part of an Airline Route Map

We want the learning program to induce a program for computing the value of the relation Nonstop. The learning program is given a set of positive instances, Ξ+, of pairs of cities between which there are nonstop flights and a set of negative instances, Ξ−, of pairs of cities between which there are not nonstop flights. Ξ+ contains just the pairs:

{<A, B>, <A, C>, <B, C>, <B, A>, <C, A>, <C, B>, <A, A1>, <A1, A>, <A, A2>, <A2, A>, <B, B1>, <B1, B>, <B, B2>, <B2, B>, <C, C1>, <C1, C>, <C, C2>, <C2, C>}

For our example, we will assume that Ξ− contains all those pairs of cities shown in Fig. 7.3 that are not in Ξ+ (a type of closed-world assumption). The training set, Ξ, can be thought of as a partial
A >. A >. < C. Doing so will give us a more compact.y) :. C1 >. C >. B >. < B >. B1 >. < A.Hub(x): {< A. C >. The following positive instances in Ξ are covered by Nonstop(x. < C. This clause is not necessary because it covers all the negative examples (since it covers all examples). < A2.y) to express that the pair < x. INDUCTIVE LOGIC PROGRAMMING description of this relation in extensional form—it explicitly names some pairs in the relation and some pairs not in the relation. A >. C >. < C >} All other cities mentioned in the map are assumed not in the relation Hub. The following negative instances are also covered: . intensional. We desire to learn the Nonstop relation as a logic program in terms of the background relations. Suppose (selecting a literal from the reﬁnement graph) the algorithm adds Hub(x).y) :Hub(x) for all pairs of cities in Ξ. C2 >} To compute this covering. A1 >. < B1. < A.y) :. description of the relation. which are also given in extensional form. A2 >. B >. < B. and this description could well generalize usefully to other cities not mentioned in the map. Satellite {< A1. < C2. the inner loop of our algorithm initializes the ﬁrst clause to Nonstop(x. B >.96 CHAPTER 7. We will use the notation Hub(x) to express that the city named x is in the relation Hub. we interpret the logic program Nonstop(x. < B. B >. A. C >} All other pairs of cities mentioned in the map are not in the relation Satellite. < B. using the pairs given in the background relation Hub as ground facts. < C1. y > is in the relation Satellite. We will use the notation Satellite(x. < A. < C.. So we must add a literal to its (empty) body. We assume the learning program has the following extensional deﬁnitions of the relations Hub and Satellite: Hub {< A >. < B2. < C. < B. Knowing that the predicate Nonstop is a twoplace predicate. Hub and Satellite. >. B2 >.
{<A, B1>, <A, B2>, <A, C1>, <A, C2>, <B, A1>, <B, A2>, <B, C1>, <B, C2>, <C, A1>, <C, A2>, <C, B1>, <C, B2>}

Thus, the clause is not yet necessary, and another literal must be added. Suppose we next add Hub(y). The following positive instances are covered by Nonstop(x,y) :- Hub(x), Hub(y):

{<A, B>, <A, C>, <B, A>, <B, C>, <C, A>, <C, B>}

There are no longer any negative instances in Ξ covered, so the clause Nonstop(x,y) :- Hub(x), Hub(y) is necessary, and we can terminate the first pass through the inner loop. But the program, π, consisting of just this clause is not sufficient. These positive instances are not covered by the clause:

{<A, A1>, <A1, A>, <A, A2>, <A2, A>, <B, B1>, <B1, B>, <B, B2>, <B2, B>, <C, C1>, <C1, C>, <C, C2>, <C2, C>}

The positive instances that were covered by Nonstop(x,y) :- Hub(x), Hub(y) are removed from Ξ to form the Ξcur to be used in the next pass through the inner loop. Ξcur consists of all the negative instances in Ξ plus the positive instances (listed above) that are not yet covered. In order to attempt to cover them, the inner loop creates another clause, c, initially set to Nonstop(x,y) :- . This clause covers all the negative instances, and so we must add literals to make it necessary. Suppose we add the literal Satellite(x,y). It does cover the following positive instances in Ξcur:

{<A1, A>, <A2, A>, <B1, B>, <B2, B>, <C1, C>, <C2, C>}

The clause Nonstop(x,y) :- Satellite(x,y) covers no negative instances, so it is necessary. The program now contains two clauses:

Nonstop(x,y) :- Hub(x), Hub(y)
             :- Satellite(x,y)

This program is not yet sufficient since it does not cover the following positive instances:

{<A, A1>, <A, A2>, <B, B1>, <B, B2>, <C, C1>, <C, C2>}
During the next pass through the inner loop, we add the clause Nonstop(x,y) :- Satellite(y,x). This clause is necessary, and since the program containing all three clauses is now sufficient, the procedure terminates with:

Nonstop(x,y) :- Hub(x), Hub(y)
             :- Satellite(x,y)
             :- Satellite(y,x)

Since each clause is necessary, the program is also consistent with all instances of the training set. Note that this program can be applied (perhaps with good generalization) to other cities besides those in our partial map, so long as we can evaluate the relations Hub and Satellite for these other cities.

7.4 Inducing Recursive Programs

To induce a recursive program, we allow the addition of a literal having the same predicate letter as that in the head of the clause. Various mechanisms must be used to ensure that such a program will terminate; one such is to make sure that the new literal has different variables than those in the head literal. With that extension, the method can be used to induce more general logic programs. The process is best illustrated with another example.

Our example continues the one using the airline map, but we make the map somewhat simpler in order to reduce the size of the extensional relations used. Consider the map shown in Fig. 7.4. B and C are hub cities; B1 and B2 are satellites of B; C1 and C2 are satellites of C. We have introduced two new cities, B3 and C3. No flights exist between these cities and any other cities; perhaps there are only bus routes, as shown by the grey lines in the map.

Figure 7.4: Another Airline Route Map

We now seek to learn a program for Canfly(x,y) that covers only those pairs of cities that can be reached by one or more nonstop flights. The relation Canfly is satisfied by the following pairs of positive instances:

{<B1, B>, <B1, B2>, <B1, C>, <B1, C1>, <B1, C2>, <B2, B>, <B2, B1>, <B2, C>, <B2, C1>, <B2, C2>, <B, B1>, <B, B2>, <B, C>, <B, C1>, <B, C2>, <C1, C>, <C1, B>, <C1, B1>, <C1, B2>, <C1, C2>, <C2, C>, <C2, B>, <C2, B1>, <C2, B2>, <C2, C1>, <C, B>, <C, B1>, <C, B2>, <C, C1>, <C, C2>}
Using a closed-world assumption on our map, we take the negative instances of Canfly to be:

{<B3, B>, <B3, B1>, <B3, B2>, <B3, C>, <B3, C1>, <B3, C2>, <B3, C3>, <B, B3>, <B1, B3>, <B2, B3>, <C, B3>, <C1, B3>, <C2, B3>, <C3, B3>, <C3, B>, <C3, B1>, <C3, B2>, <C3, C>, <C3, C1>, <C3, C2>, <B, C3>, <B1, C3>, <B2, C3>, <C, C3>, <C1, C3>, <C2, C3>}

We will induce Canfly(x,y) using the extensionally defined background relation Nonstop given earlier (modified as required for our reduced airline map) and Canfly itself (recursively).

As before, we start with the empty program and proceed to the inner loop to construct a clause that is necessary. Suppose that the inner loop adds the background literal Nonstop(x,y). The clause Canfly(x,y) :- Nonstop(x,y) is necessary; it covers no negative instances. But it is not sufficient because it does not cover the following positive instances:

{<B1, B2>, <B1, C>, <B1, C1>, <B1, C2>, <B2, B1>, <B2, C>, <B2, C1>, <B2, C2>, <B, C1>, <B, C2>, <C1, B>, <C1, B1>, <C1, B2>, <C1, C2>, <C2, B>, <C2, B1>, <C2, B2>, <C2, C1>, <C, B1>, <C, B2>}
Thus, we must add another clause to the program. In the inner loop, we first create the clause Canfly(x,y) :- Nonstop(x,z), which introduces the new variable z.

We digress briefly to describe how a program containing a clause with unbound variables in its body is interpreted. Suppose we try to interpret it for the positive instance Canfly(B1,B2). The interpreter attempts to establish Nonstop(B1,z) for some z. Since Nonstop(B1,B) is a background fact, the interpreter returns T, which means that the instance <B1, B2> is covered. Next, we attempt to interpret the clause for the negative instance Canfly(B3,B). The interpreter attempts to establish Nonstop(B3,z) for some z. There are no background facts that match, so the clause does not cover <B3, B>.

Using the interpreter, we see that the clause Canfly(x,y) :- Nonstop(x,z) covers all of the positive instances not already covered by the first clause, but it also covers many negative instances such as <B2, B3> and <B, C3>. So the inner loop must add another literal. This time, suppose it adds Canfly(z,y) to yield the clause Canfly(x,y) :- Nonstop(x,z), Canfly(z,y). This clause is necessary; no negative instances are covered. The program is now sufficient and consistent; it is:

Canfly(x,y) :- Nonstop(x,y)
            :- Nonstop(x,z), Canfly(z,y)
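Interpreting the recursive clause amounts to a reachability computation over the Nonstop facts. The Python sketch below is ours; the visited set stands in for the termination mechanisms mentioned at the beginning of this section:

NONSTOP = {("B", "C"), ("C", "B"), ("B", "B1"), ("B1", "B"),
           ("B", "B2"), ("B2", "B"), ("C", "C1"), ("C1", "C"),
           ("C", "C2"), ("C2", "C")}

def canfly(x, y, visited=None):
    # Canfly(x,y) :- Nonstop(x,y)
    #             :- Nonstop(x,z), Canfly(z,y)
    if (x, y) in NONSTOP:
        return True
    visited = (visited or set()) | {x}
    return any(canfly(z, y, visited)
               for (a, z) in NONSTOP if a == x and z not in visited)

print(canfly("B1", "C2"))    # True: B1 -> B -> C -> C2
print(canfly("B3", "B"))     # False: B3 has no nonstop flights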
7.5 Choosing Literals to Add

One of the first practical ILP systems was Quinlan's FOIL [Quinlan, 1990]. A major problem involves deciding how to select a literal to add in the inner loop (from among the literals that are allowed). In FOIL, Quinlan suggested that candidate literals can be compared using an information-like measure, similar to the measures used in inducing decision trees. A measure that gives the same comparison as does Quinlan's is based on the amount by which adding a literal increases the odds that an instance drawn at random from those covered by the new clause is a positive instance, beyond what these odds were before adding the literal.

Let p be an estimate of the probability that an instance drawn at random from those covered by a clause before adding the literal is a positive instance. That is, p = (number of positive instances covered by the clause)/(total number of instances covered by the clause). It is convenient to express this probability in "odds form." The odds, o, that a covered instance is positive is defined to be o = p/(1 − p). Expressing the probability in terms of the odds, we obtain p = o/(1 + o).

Let pl denote the probability that an instance drawn at random from the instances covered by the new clause (with literal l added) is positive; the corresponding odds will be denoted by ol. We want to select a literal, l, that gives maximal increase in these odds. That is, if we define λl = ol/o, we want a literal that gives a high value of λl. (It turns out that the value of Quinlan's information-theoretic measure increases monotonically with λl, so we could just as well use the latter instead.) Specializing the clause in such a way that it fails to cover many of the negative instances previously covered but still covers most of the positive instances previously covered will result in a high value of λl.

Besides finding a literal with a high value of λl, Quinlan's FOIL system also restricts the choice to literals that: a) contain at least one variable that has already been used; b) place further restrictions on the variables if the literal selected has the same predicate letter as the literal being induced (in order to prevent infinite recursion); and c) survive a pruning test based on the values of λl for those literals selected so far. We refer the reader to Quinlan's paper for further discussion of these points. Quinlan also discusses post-processing pruning methods and presents experimental results of the method applied to learning recursive relations on lists, to learning rules for chess endgames and for the card game Eleusis, and to some other standard tasks mentioned in the machine learning literature. The reader should also refer to [Pazzani & Kibler, 1992, Lavrač & Džeroski, 1994, Muggleton, 1991, Muggleton, 1992].

Discuss preprocessing, postprocessing, bottom-up methods, and LINUS.
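Since o = p/(1 − p) reduces to the ratio of positive to negative covered instances, λl is easy to compute from coverage counts. The following sketch is ours:

def odds(pos, neg):
    # o = p/(1 - p), which for sample counts is just pos/neg;
    # undefined when the clause covers no negative instances
    return pos / neg

def lambda_l(pos_before, neg_before, pos_after, neg_after):
    # lambda_l = o_l / o: the factor by which adding literal l improves
    # the odds that an instance covered by the clause is positive
    return odds(pos_after, neg_after) / odds(pos_before, neg_before)

# A clause covering 10 positive and 10 negative instances (o = 1) that,
# after adding l, covers 8 positive and 2 negative ones (o_l = 4):
print(lambda_l(10, 10, 8, 2))    # 4.0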
7.6 Relationships Between ILP and Decision Tree Induction

The generic ILP algorithm can also be understood as a type of decision tree induction. Recall the problem of inducing decision trees when the values of attributes are categorical. When splitting on a single variable, the split at each node involves asking to which of several mutually exclusive and exhaustive subsets the value of a variable belongs. For example, if a node tested the variable xi, and if xi could have values drawn from {A, B, C, D, E, F}, then one possible split (among many) might be according to whether the value of xi had as value one of {A, B, C} or one of {D, E, F}.

It is also possible to make a multivariate split, testing the values of two or more variables at a time. For example, if a node tested the variables xi and xj, and if xi and xj both could have values drawn from {A, B, C, D, E, F}, then one possible binary split (among many) might be according to whether or not <xi, xj> satisfied the relation {<A, C>, <C, D>}. In general, an n-variable split would be based on which of several n-ary relations the values of the variables satisfied. (Note that our subset method of forming single-variable splits could equivalently have been framed using 1-ary relations, which are usually called properties.)

In broad outline, the ILP problem is as follows: We are given a training set, Ξ, of positively and negatively labeled patterns whose components are drawn from a set of variables {x, y, z, ...}. The positively labeled patterns in Ξ form an extensional definition of a relation, R. We are also given background relations, R1, ..., Rk, on various subsets of these variables. (That is, we are given sets of tuples that are in these relations.) We desire to construct an intensional definition of R in terms of R1, ..., Rk, such that all of the positively labeled patterns in Ξ are satisfied by R and none of the negatively labeled patterns are. The intensional definition will be in terms of a logic program in which the relation R is the head of a set of clauses whose bodies involve the background relations.

The generic ILP algorithm can be understood as decision tree induction, where each node of the decision tree is itself a sub-decision tree, and each sub-decision tree consists of nodes that make binary splits on several variables using the background relations, Ri. (Actually, our decision trees will be decision lists, a special case of decision trees, but we will refer to them as trees in our discussions.) In this framework, the method for inducing an intensional version of the relation R is illustrated by considering the decision tree shown in Fig. 7.5.

In this diagram, the patterns in Ξ are first filtered through the decision tree in top-level Node 1. The background relation R1 is satisfied by some of these patterns; these are filtered to the right (to relation R2), and the rest are filtered to the left (more on what happens to these later). Right-going patterns are filtered through a sequence of relational tests until only positively labeled patterns satisfy the last relation, in this case R3. (We might say that this combination of tests is necessary. They correspond to the clause created in the first pass through the inner loop of the generic ILP algorithm.) Let us call the subset of patterns satisfying these relations Ξ1; these satisfy Node 1 at the top level. All other patterns, that is {Ξ − Ξ1} = Ξ2, are filtered to the left by Node 1.

Ξ2 is then filtered by top-level Node 2 in much the same manner, so that Node 2 is satisfied only by the positively labeled samples in Ξ2. We continue filtering through top-level nodes until only the negatively labeled patterns fail to satisfy a top node. In our example, Ξ4 contains only negatively labeled patterns, and the union of Ξ1 and Ξ3 contains all the positively labeled patterns. The relation, R, that distinguishes positive from negative patterns in Ξ is then given in terms of the following logic program:

R :- R1, R2, R3
  :- R4, R5
Figure 7.5: A Decision Tree for ILP

If we apply this sort of decision-tree induction procedure to the problem of generating a logic program for the relation Nonstop (refer to Fig. 7.3), we obtain the decision tree shown in Fig. 7.6. In setting up the problem, the training set, Ξ, can be expressed as a set of 2-dimensional vectors with components x and y. The values of these components range over the cities {A, B, C, A1, A2, B1, B2, C1, C2}, except (for simplicity) we do not allow patterns in which x and y have the same value. As before, the relation Nonstop contains the following pairs of cities, which are the positive instances:

{<A, B>, <A, C>, <B, C>, <B, A>, <C, A>, <C, B>, <A, A1>, <A1, A>, <A, A2>, <A2, A>, <B, B1>, <B1, B>, <B, B2>, <B2, B>, <C, C1>, <C1, C>, <C, C2>, <C2, C>}

All other pairs of cities named in the map of Fig. 7.3 (using the closed-world assumption) are not in the relation Nonstop and thus are negative instances. Because the values of x and y are categorical, decision-tree induction would be a very difficult task, involving as it does the need to invent relations on
x and y to be used as tests. But with the background relations, Ri (in this case Hub and Satellite), the problem is made much easier. We select these relations in the same way that we select literals: from among the available tests, we make a selection based on which leads to the largest value of λRi.

7.7 Bibliographical and Historical Remarks

To be added.
Figure 7.6: A Decision Tree for the Airline Route Problem (top-level Node 1 tests Hub(x) and Hub(y), Node 2 tests Satellite(x,y), and Node 3 tests Satellite(y,x); at each stage only positive instances satisfy the tests, and only negative instances remain at the end)
Chapter 8

Computational Learning Theory

In chapter one we posed the problem of guessing a function given a set of sample inputs and their values. We gave some intuitive arguments to support the claim that, after seeing only a small fraction of the possible inputs (and their values), we could guess almost correctly the values of most subsequent inputs, if we knew that the function we were trying to guess belonged to an appropriately restricted subset of functions. That is, a given training set of sample patterns might be adequate to allow us to select a function, consistent with the labeled samples, from among a restricted set of hypotheses, such that with high probability the function we select will be approximately correct (small probability of error) on subsequent samples drawn at random according to the same distribution from which the labeled samples were drawn. This insight led to the theory of probably approximately correct (PAC) learning, initially developed by Leslie Valiant [Valiant, 1984]. We present here a brief description of the theory for the case of Boolean functions. [Dietterich, 1990, Haussler, 1988, Haussler, 1990] give nice surveys of the important results.

Other overviews?

8.1 Notation and Assumptions for PAC Learning Theory

We assume a training set Ξ of n-dimensional vectors, Xi, i = 1, ..., m, each labeled (by 1 or 0) according to a target function, f, which is unknown to the learner. The probability of any given vector X being in Ξ, or later being presented to the learner, is P(X). The probability distribution, P, can be arbitrary. (In the literature of PAC learning theory, the target function is usually called the target concept and is denoted by c, but to be consistent with our previous notation we will continue to denote it by f.) Our problem is to guess
a function, h(X), based on the labeled samples in Ξ. In PAC theory such a guessed function is called the hypothesis. We assume that the target function is some element of a set of functions, C. We also assume that the hypothesis, h, is an element of a set, H, of hypotheses, which includes the set, C, of target functions. H is called the hypothesis space.

In general, h won't be identical to f, but we can strive to have the value of h(X) = the value of f(X) for most X's. That is, we want h to be approximately correct. To quantify this notion, we define the error of h, εh, as the probability that an X drawn randomly according to P will be misclassified:

εh = Σ_{X : h(X) ≠ f(X)} P(X)

Boldface symbols need to be smaller when they are subscripts in math environments.

We say that h is approximately (except for ε) correct if εh ≤ ε, where ε is the accuracy parameter.

Suppose we are able to find an h that classifies all m randomly drawn training samples correctly; that is, h is consistent with this randomly selected training set, Ξ. If m is large enough, will such an h be approximately correct (and for what value of ε)? On some training occasions, using m randomly drawn training samples, such an h might turn out to be approximately correct (for a given value of ε), and on others it might not. We say that h is probably (except for δ) approximately correct (PAC) if the probability that it is approximately correct is greater than 1 − δ, where δ is the confidence parameter. We shall show that if m is greater than some bound whose value depends on ε and δ, such an h is guaranteed to be probably approximately correct.

In general, we say that a learning algorithm PAC-learns functions from C in terms of H iff, for every function f ∈ C, it outputs a hypothesis h ∈ H such that, with probability at least (1 − δ), εh ≤ ε. Such a hypothesis is called probably (except for δ) approximately (except for ε) correct.

We want learning algorithms that are tractable, so we want an algorithm that PAC-learns functions in polynomial time. This can only be done for certain classes of functions. If there are a finite number of hypotheses in a hypothesis set (as there are for many of the hypothesis sets we have considered), we could always produce a consistent hypothesis from this set by testing all of them against the training data. But if there are an exponential number of hypotheses, that would take exponential time. We seek training methods that produce consistent hypotheses in less time. The time complexities for various hypothesis sets have been determined, and these are summarized in a table to be presented later.

A class, C, is polynomially PAC learnable in terms of H provided there exists a polynomial-time learning algorithm (polynomial in the number of samples needed, m, in the dimension, n, in 1/ε, and in 1/δ) that PAC-learns functions in C in terms of H. Initial work on PAC assumed H = C, but it was later shown that some functions cannot be polynomially PAC-learned under such an assumption (assuming P ≠ NP), but can be polynomially PAC-learned if H is a strict superset of C!
Also, our definition does not specify the distribution, P, from which patterns are drawn, nor does it say anything about the properties of the learning algorithm. Since C and H do not have to be identical, we have the further restrictive definition: A properly PAC-learnable class is a class C for which there exists an algorithm that polynomially PAC-learns functions from C in terms of C.

8.2 PAC Learning

8.2.1 The Fundamental Theorem

Suppose our learning algorithm selects some h randomly from among those that are consistent with the values of f on the m training patterns. The probability that the error of this randomly selected h is greater than some ε, with h consistent with the values of f(X) for m instances of X (drawn according to arbitrary P), is less than or equal to |H|e^(−εm), where |H| is the number of hypotheses in H. We state this result as a theorem [Blumer, et al., 1987]:

Theorem 8.1 (Blumer, et al.) Let H be any set of hypotheses, Ξ be a set of m ≥ 1 training examples drawn independently according to some distribution P, f be any classification function in H, and ε > 0. Then, the probability that there exists a hypothesis h consistent with f for the members of Ξ but with error greater than ε is at most |H|e^(−εm).

Proof: Consider the set of all hypotheses, {h1, h2, ..., hi, ..., hS}, in H, where S = |H|. The error for hi is εhi = the probability that hi will classify a pattern in error (that is, differently than f would classify it). The probability that hi will classify a pattern correctly is (1 − εhi). Some subset, HB, of H will have error greater than ε. We will call the hypotheses in this subset bad. The probability that any particular one of these bad hypotheses, say hb, would classify a pattern correctly is (1 − εhb). Since εhb > ε, the probability that hb (or any other bad hypothesis) would classify a pattern correctly is less than (1 − ε). The probability that it would classify all m independently drawn patterns correctly is then less than (1 − ε)^m. That is,

prob[hb classifies all m patterns correctly | hb ∈ HB] ≤ (1 − ε)^m.

prob[some h ∈ HB classifies all m patterns correctly]
= Σ_{hb ∈ HB} prob[hb classifies all m patterns correctly | hb ∈ HB] ≤ K(1 − ε)^m,

where K = |HB|.
That is,

prob[there is a bad hypothesis that classifies all m patterns correctly] ≤ K(1 − ε)^m.

Since K ≤ |H| and (1 − ε)^m ≤ e^{−εm}, we have:

prob[there is a bad hypothesis that classifies all m patterns correctly]
  = prob[there is a hypothesis with error > ε and that classifies all m patterns correctly]
  ≤ |H| e^{−εm}.

QED

A corollary of this theorem is:

Corollary 8.2 Given m ≥ (1/ε)(ln |H| + ln(1/δ)) independent samples, the probability that there exists a hypothesis in H that is consistent with f on these samples and has error greater than ε is at most δ.

Proof: We are to find a bound on m that guarantees that prob[there is a hypothesis with error > ε and that classifies all m patterns correctly] ≤ δ. Thus, using the result of the theorem, we must show that |H| e^{−εm} ≤ δ. Taking the natural logarithm of both sides yields:

ln |H| − εm ≤ ln δ

or

m ≥ (1/ε)(ln |H| + ln(1/δ)).

QED

This corollary is important for two reasons. First, it clearly states that we can select any hypothesis consistent with the m samples and be assured that, with probability (1 − δ), its error will be less than ε. Also, it shows that in order for m to increase no more than polynomially with n, |H| can be no larger than 2^{O(n^k)}. No class larger than that can be guaranteed to be properly PAC learnable.

Here is a possible point of confusion: The bound given in the corollary is an upper bound on the value of m needed to guarantee polynomial probably approximately correct learning. Values of m greater than that bound are sufficient (but might not be necessary). We will present a lower (necessary) bound later in the chapter.
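The corollary's bound is easy to compute. The following short sketch (our own code, not from the text) evaluates m ≥ (1/ε)(ln |H| + ln(1/δ)), taking ln |H| as the argument so that very large hypothesis spaces do not overflow:

    from math import ceil, log

    def pac_sample_bound(ln_H, eps, delta):
        """Sufficient sample size from Corollary 8.2:
        m >= (1/eps)(ln|H| + ln(1/delta))."""
        return ceil((ln_H + log(1.0 / delta)) / eps)

    # Terms over n = 50 Boolean variables: |H| = 3^n, so ln|H| = n ln 3.
    # This prints 5954; the chapter's rounded 1.1n approximation to n ln 3
    # yields the slightly larger figure 5,961 quoted below.
    print(pac_sample_bound(50 * log(3.0), 0.01, 0.01))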
8.2.2 Examples

Terms: Let H be the set of terms (conjunctions of literals). Then, |H| = 3^n, and

m ≥ (1/ε)(ln(3^n) + ln(1/δ)) ≥ (1/ε)(1.1n + ln(1/δ))

Note that the bound on m increases only polynomially with n, 1/ε, and 1/δ. For n = 50, ε = 0.01, and δ = 0.01, m ≥ 5,961 guarantees PAC learnability.

In order to show that terms are properly PAC learnable, we additionally have to show that one can find, in time polynomial in m and n, a hypothesis h consistent with a set of m patterns labeled by the value of a term. The following procedure for finding such a consistent hypothesis requires O(nm) steps (adapted from [Dietterich, 1990, page 268]): We are given a training sequence, Ξ, of m examples. Find the first pattern, say X1, in that list that is labeled with a 1. Initialize a Boolean function, h, to the conjunction of the n literals corresponding to the values of the n components of X1. (Components with value 1 will have corresponding positive literals; components with value 0 will have corresponding negative literals.) If there are no patterns labeled by a 1, we exit with the null concept (h ≡ 0 for all patterns). Then, for each additional pattern, Xi, in that list that is labeled with a 1, we delete from h any Boolean variables appearing in Xi with a sign different from their sign in h. After processing all the patterns labeled with a 1, we check all of the patterns labeled with a 0 to make sure that none of them is assigned value 1 by h. If, at any stage of the algorithm, any patterns labeled with a 0 are assigned a 1 by h, then there exists no term that consistently classifies the patterns in Ξ, and we exit with failure. Otherwise, we exit with h.

As an example, consider the following patterns, all labeled with a 1 (from [Dietterich, 1990]):

(0, 1, 1, 0)
(1, 1, 1, 0)
(1, 1, 0, 0)

After processing the first pattern, we have h = x̄1 x2 x3 x̄4; after processing the second pattern, we have h = x2 x3 x̄4; finally, after the third pattern, we have h = x2 x̄4.

Linearly Separable Functions: Let H be the set of all linearly separable functions. Then, |H| ≤ 2^{n²}, and
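The O(nm) procedure just described translates directly into code. A minimal sketch (our own naming; the text itself gives no code), representing a term as the set of literals it still retains:

    def learn_term(examples):
        """Consistent-term learner, adapted from the O(nm) procedure above.
        examples: list of (pattern, label) with binary tuples as patterns.
        Returns the term as {component_index: required_value}, None for the
        null concept, or raises ValueError if no consistent term exists."""
        positives = [x for x, y in examples if y == 1]
        if not positives:
            return None                                    # h == 0 everywhere
        h = dict(enumerate(positives[0]))                  # one literal per component
        for x in positives[1:]:
            h = {i: v for i, v in h.items() if x[i] == v}  # drop disagreeing literals
        for x, y in examples:
            if y == 0 and all(x[i] == v for i, v in h.items()):
                raise ValueError("no term consistently classifies the patterns")
        return h

    # The three positive patterns from the text; this prints {1: 1, 3: 0},
    # i.e., the term x2 AND (NOT x4) in the chapter's 1-based naming.
    print(learn_term([((0, 1, 1, 0), 1), ((1, 1, 1, 0), 1), ((1, 1, 0, 0), 1)]))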
m ≥ (1/ε)(n² ln 2 + ln(1/δ))

Again, note that the bound on m increases only polynomially with n, 1/ε, and 1/δ. For n = 50, ε = 0.01, and δ = 0.01, m ≥ 173,748 guarantees PAC learnability.

To show that linearly separable functions are properly PAC learnable, we would additionally have to show that one can find, in time polynomial in m and n, a hypothesis h consistent with a set of m labeled linearly separable patterns. Linear programming is polynomial.

Summary: In order to show that a class of functions is Properly PAC-Learnable:

a. Show that the sample size, m, needed to ensure PAC learnability is polynomial (or better) in (1/ε), (1/δ), and n by showing that ln |H| is polynomial or better in the number of dimensions.

b. Show that there is an algorithm that produces a consistent hypothesis on m n-dimensional samples in time polynomial in m and n.

8.2.3 Some Properly PAC-Learnable Classes

Some properly PAC-learnable classes of functions are given in the following table. (Adapted from [Dietterich, 1990, pages 262 and 268], which also gives references to proofs of some of the time complexities. Members of the class k-2NN are two-layer, feedforward neural networks with exactly k hidden units and one output unit.)

H                                          |H|                  Time Complexity   Properly PAC-Learnable?
terms                                      3^n                  polynomial        yes
k-term DNF (k disjunctive terms)           2^{O(kn)}            NP-hard           no
k-DNF (a disjunction of k-sized terms)     2^{O(n^k)}           polynomial        yes
k-CNF (a conjunction of k-sized clauses)   2^{O(n^k)}           polynomial        yes
k-DL (decision lists with k-sized terms)   2^{O(n^k k lg n)}    polynomial        yes
lin. sep.                                  2^{O(n^2)}           polynomial        yes
lin. sep. with (0,1) weights               ?                    NP-hard           no
k-2NN                                      ?                    NP-hard           no
DNF (all Boolean functions)                2^{2^n}              polynomial        no
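Because linear programming is polynomial, a consistent linearly separable hypothesis can be found by solving a linear feasibility problem. A sketch of this idea, assuming NumPy and SciPy (neither of which is part of the text; a margin of 1 is built in so that any feasible solution strictly separates the patterns):

    import numpy as np
    from scipy.optimize import linprog

    def consistent_tlu(X, y):
        """Find augmented weights v = (w, b) with w.x + b >= 1 for patterns
        labeled 1 and <= -1 for patterns labeled 0, via an LP feasibility
        problem. Returns None if no linear separator exists."""
        m, n = X.shape
        A = np.hstack([X, np.ones((m, 1))])     # augment each pattern with a bias input
        signs = np.where(y == 1, -1.0, 1.0)     # flip rows so all constraints read A_ub v <= -1
        res = linprog(c=np.zeros(n + 1),
                      A_ub=signs[:, None] * A, b_ub=-np.ones(m),
                      bounds=[(None, None)] * (n + 1))
        return res.x if res.success else None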
As hinted earlier, sometimes enlarging the class of hypotheses makes learning easier. For example, the table above shows that k-CNF is PAC learnable, but k-term-DNF is not. And yet, k-term-DNF is a subclass of k-CNF! So, even if the target function were in k-term-DNF, one would be able to find a hypothesis in k-CNF that is probably approximately correct for the target function. Similarly, linearly separable functions implemented by TLUs whose weight values are restricted to 0 and 1 are not properly PAC learnable, whereas unrestricted linearly separable functions are. It is possible that enlarging the space of hypotheses makes finding one that is consistent with the training examples easier. An interesting question is whether or not the class of functions in k-2NN is polynomially PAC learnable if the hypotheses are drawn from k'-2NN with k' > k. (At the time of writing, this matter is still undecided.)

Although PAC learning theory is a powerful analytic tool, it (like complexity theory) deals mainly with worst-case results. The fact that the class of two-layer, feedforward neural networks is not polynomially PAC learnable is more an attack on the theory than it is on the networks, which have had many successful applications. As [Baum, 1994, page 416-17] says: ". . . humans are capable of learning in the natural world. Therefore, a proof within some model of learning that learning is not feasible is an indictment of the model. We should examine the model to see what constraints can be relaxed and made more realistic."

8.3 The Vapnik-Chervonenkis Dimension

8.3.1 Linear Dichotomies

Consider a set, H, of functions, and a set, Ξ, of (unlabeled) patterns. One measure of the expressive power of a set of hypotheses, relative to Ξ, is its ability to make arbitrary classifications of the patterns in Ξ. (And, of course, if a hypothesis is drawn from a set that could make arbitrary classifications of a set of training patterns, there is little likelihood that such a hypothesis will generalize well beyond the training set.) If there are m patterns in Ξ, there are 2^m different ways to divide these patterns into two disjoint and exhaustive subsets; we say there are 2^m different dichotomies of Ξ. If Ξ were to include all of the 2^n Boolean patterns, for example, there are 2^{2^n} ways to dichotomize them, and (of course) the set of all possible Boolean functions dichotomizes them in all of these ways. But a subset, H, of the Boolean functions might not be able to dichotomize an arbitrary set, Ξ, of m Boolean patterns in all 2^m ways. In general (that is, even in the non-Boolean case), we say that if a subset, H, of functions can dichotomize a set, Ξ, of m patterns in all 2^m ways, then H shatters Ξ.

As an example, consider a set Ξ of m patterns in the n-dimensional space, R^n. (That is, the n components of these patterns are real numbers.) We define a linear dichotomy as one implemented by an (n−1)-dimensional hyperplane in the n-dimensional space. How many linear dichotomies of m patterns in n dimensions are there? For example, as shown in Fig. 8.1, there are 14 dichotomies
of four points in two dimensions (each separating line yields two dichotomies, depending on whether the points on one side of the line are classified as 1 or 0). (Note that even though there are an infinite number of hyperplanes, there are, nevertheless, only a finite number of ways in which hyperplanes can dichotomize a finite number of patterns. Small movements of a hyperplane typically do not change the classifications of any patterns.)
Figure 8.1: Dichotomizing Points in Two Dimensions

The number of dichotomies achievable by hyperplanes depends on how the patterns are disposed. For the maximum number of linear dichotomies, the points must be in what is called general position. For m > n, we say that a set of m points is in general position in an n-dimensional space if and only if no subset of (n+1) points lies on an (n−1)-dimensional hyperplane. When m ≤ n, a set of m points is in general position if no (m−2)-dimensional hyperplane contains the set. Thus, for example, a set of m ≥ 4 points is in general position in a three-dimensional space if no four of them lie on a (two-dimensional) plane. We will denote the number of linear dichotomies of m points in general position in an n-dimensional space by the expression ΠL(m, n).
It is not too difficult to verify that:

ΠL(m, n) = 2 Σ_{i=0}^{n} C(m − 1, i)   for m > n, and

ΠL(m, n) = 2^m                          for m ≤ n,

where C(m − 1, i) is the binomial coefficient (m − 1)! / [(m − 1 − i)! i!].
The table below shows some values for ΠL(m, n).

                          n (dimension)
m (no. of patterns)     1     2     3     4     5
        1               2     2     2     2     2
        2               4*    4     4     4     4
        3               6     8*    8     8     8
        4               8    14    16*   16    16
        5              10    22    30    32*   32
        6              12    32    52    62    64*
        7              14    44    84   114   126
        8              16    58   128   198   240
Note that the class of linear dichotomies shatters the m patterns if m ≤ n + 1. The starred entries in the table correspond to the highest values of m for which linear dichotomies shatter m patterns in n dimensions.
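The formula for ΠL(m, n) reproduces the table directly; a short sketch (our own code):

    from math import comb

    def pi_L(m, n):
        """Linear dichotomies of m points in general position in n dimensions."""
        if m <= n:
            return 2 ** m
        return 2 * sum(comb(m - 1, i) for i in range(n + 1))

    for m in range(1, 9):                      # rows of the table above
        print(m, [pi_L(m, n) for n in range(1, 6)])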
8.3.2 Capacity

Let P_{m,n} = ΠL(m, n)/2^m = the probability that a randomly selected dichotomy (out of the 2^m possible dichotomies of m patterns in n dimensions) will be linearly separable. In Fig. 8.2 we plot P_{λ(n+1),n} versus λ and n, where λ = m/(n + 1). Note how quickly, for large n (say n > 30), P_{m,n} falls from 1 to 0 as m goes above 2(n + 1). For m < 2(n + 1), any dichotomy of the m points is almost certainly linearly separable. But for m > 2(n + 1), a randomly selected dichotomy of the m points is almost certainly not linearly separable. For this reason, m = 2(n + 1) is called the capacity of a TLU [Cover, 1965]. Unless the number of training patterns exceeds the capacity, the fact that a TLU separates those training patterns according to their labels means nothing in terms of how well that TLU will generalize to new patterns. There is nothing special about a separation found for m < 2(n + 1) patterns; almost any dichotomy of those patterns would have been linearly separable. To make sure that the separation found is forced by the training set and thus generalizes well, it has to be the case that there are very few linearly separable functions that would separate the m training patterns. Analogous results about the generalizing abilities of neural networks have been developed by [Baum & Haussler, 1989] and given intuitive and experimental justification in [Baum, 1994, page 438]:
“The results seemed to indicate the following heuristic rule holds. If M examples [can be correctly classified by] a net with W weights (for M >> W), the net will make a fraction ε of errors on new examples chosen from the same [uniform] distribution, where ε = W/M.”
[Figure 8.2 plots P_{λ(n+1),n} against λ = m/(n+1), from 0 to 4, for n = 10, 20, 30, 40, and 50; the probability drops sharply from 1 to 0 near λ = 2.]
Figure 8.2: Probability that a Random Dichotomy is Linearly Separable
8.3.3 A More General Capacity Result
Corollary 8.2 gave us an expression for the number of training patterns sufficient to guarantee a required level of generalization, assuming that the function we were guessing was a function belonging to a class of known and finite cardinality. The capacity result just presented applies to linearly separable functions for non-binary patterns. We can extend these ideas to general dichotomies of non-binary patterns.

In general, let us denote the maximum number of dichotomies of any set of m n-dimensional patterns by hypotheses in H as ΠH(m, n). The number of dichotomies will, of course, depend on the disposition of the m points in the n-dimensional space; we take ΠH(m, n) to be the maximum over all possible arrangements of the m points. (In the case of the class of linearly separable functions, the maximum number is achieved when the m points are in general position.) For each class, H, there will be some maximum value of m for which ΠH(m, n) = 2^m, that is, for which H shatters the m patterns. This maximum number is called the Vapnik-Chervonenkis (VC) dimension and is denoted by VCdim(H) [Vapnik & Chervonenkis, 1971]. We saw that for the class of linear dichotomies, VCdim(Linear) = (n + 1).

As another example, let us calculate the VC dimension of the hypothesis space of single intervals on the real line, used to classify points on the real line. We show an example of how points on the line might be dichotomized by a single interval in Fig. 8.3. The set Ξ could be, for example, {0.5, 2.5, −2.3, 3.14}, and one of the hypotheses in our set would be [1, 4.5]. This hypothesis would label the points 2.5 and 3.14 with a 1 and the points −2.3 and 0.5 with a 0. This
set of hypotheses (single intervals on the real line) can arbitrarily classify any two points. But no single interval can classify three points such that the outer two are classified as 1 and the inner one as 0. Therefore the VC dimension of single intervals on the real line is 2.

Figure 8.3: Dichotomizing Points by an Interval

The VC dimension is a useful measure of the expressive power of a hypothesis set. Since any dichotomy of VCdim(H) or fewer patterns in general position in n dimensions can be achieved by some hypothesis in H, we must have many more than VCdim(H) patterns in the training set in order that a hypothesis consistent with the training set is sufficiently constrained to imply good generalization. As soon as we have many more than 2 training patterns on the real line, and provided we know that the classification function we are trying to guess is a single interval, then we begin to have good generalization. Our examples have shown that the concept of VC dimension is not restricted to Boolean functions.

8.3.4 Some Facts and Speculations About the VC Dimension

• If there are a finite number, |H|, of hypotheses in H, then VCdim(H) ≤ log(|H|).

• The VC dimension of terms in n dimensions is n.

• Suppose we generalize our example that used a hypothesis set of single intervals on the real line. Now let us consider an n-dimensional feature space and tests of the form Li ≤ xi ≤ Hi. We allow only one such test per dimension. A hypothesis space consisting of conjunctions of these tests (called axis-parallel hyper-rectangles) has VC dimension bounded by: n ≤ VCdim ≤ 2n.

• As we have already seen, TLUs with n inputs have a VC dimension of n + 1.

• [Baum, 1994, page 438] gives experimental evidence for the proposition that ". . . multilayer [neural] nets have a VC dimension roughly equal to their total number of [adjustable] weights."
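The three-point argument for single intervals lends itself to a brute-force check. The following sketch (our own code) confirms that single closed intervals shatter two points but not three:

    def interval_labels(points, lo, hi):
        return tuple(1 if lo <= p <= hi else 0 for p in points)

    def shatters(points):
        """Can single closed intervals [lo, hi] realize every labeling
        of the given points?"""
        ends = sorted(points)
        candidates = {interval_labels(points, lo, hi) for lo in ends for hi in ends}
        candidates.add((0,) * len(points))     # the empty dichotomy
        return len(candidates) == 2 ** len(points)

    print(shatters([0.5, 2.5]))          # True: any two points can be shattered
    print(shatters([-2.3, 0.5, 2.5]))    # False: outer-1/inner-0 is impossible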
8.4 VC Dimension and PAC Learning

There are two theorems that connect the idea of VC dimension with PAC learning [Blumer, et al., 1990]. We state these here without proof.

Theorem 8.3 (Blumer, et al.) A hypothesis space H is PAC learnable iff it has finite VC dimension.

Theorem 8.4 A set of hypotheses, H, is properly PAC learnable if:

a. m ≥ (1/ε) max[4 lg(2/δ), 8 VCdim lg(13/ε)], and

b. there is an algorithm that outputs a hypothesis h ∈ H consistent with the training set in polynomial (in m and n) time.

The second of these two theorems improves the bound on the number of training patterns needed for linearly separable functions to one that is linear in n. In our previous example of how many training patterns were needed to ensure PAC learnability of a linearly separable function, with n = 50, ε = 0.01, and δ = 0.01, we obtained m ≥ 173,748. Using the Blumer, et al. result, we would get m ≥ 52,756.

As another example of the second theorem, let us take H to be the set of closed intervals on the real line. The VC dimension is 2 (as shown previously). With ε = 0.01 and δ = 0.01, m ≥ 16,551 ensures PAC learnability.

There is also a theorem that gives a lower (necessary) bound on the number of training patterns required for PAC learning [Ehrenfeucht, et al., 1988]:

Theorem 8.5 Any PAC learning algorithm must examine at least Ω((1/ε) lg(1/δ) + VCdim(H)/ε) training patterns.

The difference between the lower and upper bounds is O(log(1/ε) VCdim(H)/ε).

8.5 Bibliographical and Historical Remarks

To be added.
Chapter 9

Unsupervised Learning

9.1 What is Unsupervised Learning?

Consider the various sets of points in a two-dimensional space illustrated in Fig. 9.1. The first set (a) seems naturally partitionable into two classes, while the second (b) seems difficult to partition at all, and the third (c) is problematic. Unsupervised learning uses procedures that attempt to find natural partitions of patterns. There are two stages:

• Form an R-way partition of a set Ξ of unlabeled training patterns (where the value of R, itself, may need to be induced from the patterns). The partition separates Ξ into R mutually exclusive and exhaustive subsets, Ξ1, . . . , ΞR, called clusters.

• Design a classifier based on the labels assigned to the training patterns by the partition.

We will explain shortly various methods for deciding how many clusters there should be and for separating a set of patterns into that many clusters. We can base some of these methods on minimum-description-length (MDL) principles. In that setting, we assume that we want to encode a description of a set of points, Ξ, into a message of minimal length. One encoding involves a description of each point separately; other, perhaps shorter, encodings might involve a description of clusters of points together with how each point in a cluster can be described given the cluster it belongs to. The specific techniques described in this chapter do not explicitly make use of MDL principles, but the MDL method has been applied with success. One of the MDL-based methods, Autoclass II [Cheeseman, et al., 1988], discovered a new classification of stars based on the properties of infrared sources.

Another type of unsupervised learning involves finding hierarchies of partitionings or clusters of clusters. A hierarchical partition is one in which Ξ is
divided into mutually exclusive and exhaustive subsets, Ξ1, . . . , ΞR; each set, Ξi (i = 1, . . . , R), is divided into mutually exclusive and exhaustive subsets, and so on. We show an example of such a hierarchical partition in Fig. 9.2. The hierarchical form is best displayed as a tree, as shown in Fig. 9.3. The tip nodes of the tree can further be expanded into their individual pattern elements. One application of such hierarchical partitions is in organizing individuals into taxonomic hierarchies such as those used in botany and zoology.

[Figure 9.1 shows three example sets of two-dimensional points: a) two clusters, b) one cluster, c) ?]

Figure 9.1: Unlabeled Patterns

9.2 Clustering Methods

9.2.1 A Method Based on Euclidean Distance

Most of the unsupervised learning methods use a measure of similarity between patterns in order to group them into clusters. The simplest of these involves defining a distance between patterns. For patterns whose features are numeric, the distance measure can be ordinary Euclidean distance between two points in an n-dimensional space. There is a simple, iterative clustering method based on distance. It can be described as follows. Suppose we have R randomly chosen cluster seekers, C1, . . . , CR. These are points in an n-dimensional space that we want to adjust so that they each move toward the center of one of the clusters of patterns. We present the (unlabeled) patterns in the training set, Ξ, to the algorithm
one-by-one. For each pattern, Xi, presented, we find that cluster seeker, Cj, that is closest to Xi and move it closer to Xi:

Cj ← (1 − αj)Cj + αj Xi

where αj is a learning-rate parameter for the j-th cluster seeker; it determines how far Cj is moved toward Xi.

[Figure 9.2, referred to above, shows a hierarchy in which Ξ is divided into Ξ1, Ξ2, and Ξ3, with subclusters such as Ξ11, Ξ12, Ξ21, Ξ22, Ξ23, Ξ31, and Ξ32.]

Figure 9.2: A Hierarchy of Clusters

Refinements on this procedure make the cluster seekers move less far as training proceeds. Suppose each cluster seeker, Cj, has a mass, mj, equal to the number of times that it has moved. As a cluster seeker's mass increases, it moves less far towards a pattern. For example, we might set αj = 1/(1 + mj) and use the above rule together with mj ← mj + 1. With this adjustment rule, a cluster seeker is always at the center of gravity (sample mean) of the set of patterns toward which it has so far moved. Intuitively, if a cluster seeker ever gets within some reasonably well clustered set of patterns (and if that cluster seeker is the only one so located), it will converge to the center of gravity of that cluster.
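A minimal sketch of this cluster-seeker procedure, using αj = 1/(1 + mj); the NumPy usage and the initialization of seekers at randomly chosen patterns are our own choices:

    import numpy as np

    def cluster_seekers(patterns, R, passes=10, seed=0):
        """Each presented pattern pulls its nearest seeker toward it with
        alpha_j = 1/(1 + m_j), which keeps every seeker at the sample mean
        of the patterns that have moved it."""
        patterns = np.asarray(patterns, dtype=float)
        rng = np.random.default_rng(seed)
        seekers = patterns[rng.choice(len(patterns), size=R, replace=False)].copy()
        mass = np.zeros(R)
        for _ in range(passes):
            for x in patterns:
                j = int(np.argmin(((seekers - x) ** 2).sum(axis=1)))  # closest seeker
                alpha = 1.0 / (1.0 + mass[j])
                seekers[j] = (1.0 - alpha) * seekers[j] + alpha * x
                mass[j] += 1.0
        return seekers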
[Figure 9.3 displays the hierarchy of Fig. 9.2 as a tree: the root branches to 1, 2, and 3, which branch in turn to 11, 12, 21, 22, 23, 31, and 32.]

Figure 9.3: Displaying a Hierarchy as a Tree

Once the cluster seekers have converged, the classifier implied by the now-labeled patterns in Ξ can be based on a Voronoi partitioning of the space (based on distances to the various cluster seekers). (Georgy Fedoseevich Voronoi was a Russian mathematician who lived from 1868 to 1909.) This kind of classification, an example of which is shown in Fig. 9.4, can be implemented by a linear machine.

When basing partitioning on distance, we seek clusters whose patterns are as close together as possible. We can measure the badness, V, of a cluster of patterns, {Xi}, by computing its sample variance, defined by:

V = (1/K) Σi (Xi − M)²

where M is the sample mean of the cluster, which is defined to be:

M = (1/K) Σi Xi

and K is the number of points in the cluster.

We would like to partition a set of patterns into clusters such that the sum of the sample variances (badnesses) of these clusters is small. Of course, if we have one cluster for each pattern, the sample variances will all be zero, so we must arrange that our measure of the badness of a partition increases with the number of clusters. In this way, we can seek a tradeoff between the variances of
the clusters and the number of them in a way somewhat similar to the principle of minimal description length discussed earlier.

[Figure 9.4 shows the separating boundaries between the regions closest to cluster seekers C1, C2, and C3.]

Figure 9.4: Minimum-Distance Classification

Elaborations of our basic cluster-seeking procedure allow the number of cluster seekers to vary depending on the distances between them and depending on the sample variances of the clusters. For example, if the distance, dij, between two cluster seekers, Ci and Cj, ever falls below some threshold ε, then we can replace them both by a single cluster seeker placed at their center of gravity (taking into account their respective masses). In this way we can decrease the overall badness of a partition by reducing the number of clusters for comparatively little penalty in increased variance.

On the other hand, if any of the cluster seekers, say Ci, defines a cluster whose sample variance is larger than some amount δ, then we can place a new cluster seeker, Cj, at some random location somewhat adjacent to Ci and reset the masses of both Ci and Cj to zero. In this way the badness of the partition might ultimately decrease by decreasing the total sample variance with comparatively little penalty for the additional cluster seeker. The values of the parameters ε and δ are set depending on the relative weights given to sample variances and numbers of clusters.

In distance-based methods, it is important to scale the components of the pattern vectors. The variation of values along some dimensions of the pattern vector may be much different than that of other dimensions. One commonly used technique is to compute the standard deviation (i.e., the square root of the variance) of each of the components over the entire training set and normalize the values of the components so that their adjusted standard deviations are equal.
9.2.2 A Method Based on Probabilities

Suppose we have a partition of the training set, Ξ, into R mutually exclusive and exhaustive clusters, C1, . . . , CR. We can decide to which of these clusters some arbitrary pattern, X, should be assigned by selecting the Ci for which the probability, p(Ci | X), is largest, providing p(Ci | X) is larger than some fixed threshold, δ. As we saw earlier, we can use Bayes' rule and base our decision on maximizing p(X | Ci)p(Ci). Assuming conditional independence of the pattern components, xi, the quantity to be maximized is:

S(X, Ci) = p(x1 | Ci) p(x2 | Ci) · · · p(xn | Ci) p(Ci)

(Recall the linear form that this formula took in the case of binary-valued components.) The p(xj | Ci) can be estimated from the sample statistics of the patterns in the clusters and then used in the above expression. We call S(X, Ci) the similarity of X to a cluster, Ci, of patterns. Thus, we assign X to the cluster to which it is most similar, providing the similarity is larger than δ.

Just as before, we can define the sample mean of a cluster, Ci, to be:

Mi = (1/Ki) Σ_{Xj ∈ Ci} Xj

where Ki is the number of patterns in Ci.

We can base an iterative clustering algorithm on this measure of similarity [Mahadevan & Connell, 1992]. It can be described as follows:

1. Begin with a set of unlabeled patterns Ξ and an empty list, L, of clusters.

2. For the next pattern, X, in Ξ, compute S(X, Ci) for each cluster, Ci. (Initially, these similarities are all zero.) Suppose the largest of these similarities is S(X, Cmax).

(a) If S(X, Cmax) > δ, assign X to Cmax. That is, Cmax ← Cmax ∪ {X}. Update the sample statistics p(x1 | Cmax), p(x2 | Cmax), . . . , p(xn | Cmax), and p(Cmax) to take the new pattern into account. Go to 3.

(b) If S(X, Cmax) ≤ δ, create a new cluster, Cnew = {X}, and add Cnew to L. Go to 3.

3. Merge any existing clusters, Ci and Cj, if (Mi − Mj)² < ε. Compute new sample statistics p(x1 | Cmerge), p(x2 | Cmerge), . . . , p(xn | Cmerge), and p(Cmerge) for the merged cluster, Cmerge = Ci ∪ Cj.
4. If the sample statistics of the clusters have not changed during an entire iteration through Ξ, then terminate with the clusters in L; otherwise go to 2.

The value of the parameter δ controls the number of clusters. If δ is high, there will be a large number of clusters with few patterns in each cluster; for small values of δ, there will be a small number of clusters with many patterns in each cluster. Similarly, the larger the value of ε, the smaller the number of clusters that will be found. Designing a classifier based on the patterns labeled by the partitioning is straightforward: we assign any pattern, X, to that category that maximizes S(X, Ci).

9.3 Hierarchical Clustering Methods

9.3.1 A Method Based on Euclidean Distance

Suppose we have a set, Ξ, of unlabeled training patterns. We can form a hierarchical classification of the patterns in Ξ by a simple agglomerative method. (The description of this algorithm is based on an unpublished manuscript by Pat Langley.) Our description here gives the general idea; we leave it to the reader to generate a precise algorithm.

We first compute the Euclidean distance between all pairs of patterns in Ξ. (Again, appropriate scaling of the dimensions is assumed.) Suppose the smallest distance is between patterns Xi and Xj. We collect Xi and Xj into a cluster, C, eliminate Xi and Xj from Ξ, and replace them by a cluster vector, C, equal to the average of Xi and Xj. Next we compute the Euclidean distance again between all pairs of points in Ξ. If the smallest distance is between pairs of patterns, we form a new cluster, C, as before and replace the pair of patterns in Ξ by their average. If the shortest distance is between a pattern, Xi, and a cluster vector, Cj (representing a cluster, Cj), we form a new cluster, C, consisting of the union of Cj and {Xi}. In this case, we replace Cj and Xi in Ξ by their (appropriately weighted) average and continue. If the shortest distance is between two cluster vectors, Ci and Cj, we form a new cluster, C, consisting of the union of Ci and Cj. In this case, we replace Ci and Cj by their (appropriately weighted) average and continue. Since we reduce the number of points in Ξ by one each time, we ultimately terminate with a tree of clusters rooted in the cluster containing all of the points in the original training set.

An example of how this method aggregates a set of two-dimensional patterns is shown in Fig. 9.5. The numbers associated with each cluster indicate the order in which they were formed. These clusters can be organized hierarchically in a binary tree with cluster 9 as root, clusters 7 and 8 as the two descendants of the root, and so on. A ternary tree could be formed instead if one searches for the three points in Ξ whose triangle defined by those patterns has minimal area.
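One rough way to make the agglomerative description precise is the following sketch (our own code; weighting the averages by cluster size is the natural reading of "appropriately weighted"):

    import numpy as np

    def agglomerate(patterns):
        """Repeatedly merge the two closest cluster vectors (size-weighted
        averages), recording the merge order. Returns a list of
        (members_a, members_b) pairs, from first merge to last (root)."""
        clusters = [([i], np.asarray(p, float)) for i, p in enumerate(patterns)]
        merges = []
        while len(clusters) > 1:
            best = None
            for a in range(len(clusters)):
                for b in range(a + 1, len(clusters)):
                    d = np.sum((clusters[a][1] - clusters[b][1]) ** 2)
                    if best is None or d < best[0]:
                        best = (d, a, b)
            _, a, b = best
            (ma, va), (mb, vb) = clusters[a], clusters[b]
            merged = (ma + mb, (len(ma) * va + len(mb) * vb) / (len(ma) + len(mb)))
            merges.append((ma, mb))
            clusters = [c for i, c in enumerate(clusters) if i not in (a, b)] + [merged]
        return merges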
[Figure 9.5 shows two-dimensional patterns aggregated into clusters numbered 1 through 9 in order of formation.]

Figure 9.5: Agglomerative Clustering

9.3.2 A Method Based on Probabilities

A probabilistic quality measure for partitions

We can develop a measure of the goodness of a partitioning based on how accurately we can guess a pattern given only what partition it is in. Suppose we are given a partitioning of Ξ into R classes, C1, . . . , CR. As before, we can compute the sample statistics p(xi | Ck), which give probability values for each component given the class assigned to it by the partitioning. Suppose each component xi of X can take on the values vij, where the index j steps over the domain of that component. We use the notation pi(vij | Ck) = probability(xi = vij | Ck).

Suppose we use the following probabilistic guessing rule about the values of the components of a vector X given only that it is in class k: Guess that xi = vij with probability pi(vij | Ck). Then, the probability that we guess the i-th component correctly is:

Σj probability(guess is vij) pi(vij | Ck) = Σj [pi(vij | Ck)]²

The average number of (the n) components whose values are guessed correctly by this method is then given by the sum of these probabilities over all of the components of X:

Σi Σj [pi(vij | Ck)]²
Given our partitioning into R classes, the goodness measure, G, of this partitioning is the average of the above expression over all classes:

G = Σk p(Ck) Σi Σj [pi(vij | Ck)]²

where p(Ck) is the probability that a pattern is in class Ck. In order to penalize this measure for having a large number of classes, we divide it by R to get an overall "quality" measure of a partitioning:

Z = (1/R) Σk p(Ck) Σi Σj [pi(vij | Ck)]²

We give an example of the use of this measure for a trivially simple clustering of the four three-dimensional patterns shown in Fig. 9.6. There are several different partitionings. Let's evaluate Z values for the following ones: P1 = {a, b, c, d}, P2 = {{a, b}, {c, d}}, P3 = {{a, c}, {b, d}}, and P4 = {{a}, {b}, {c}, {d}}.

The first, P1, puts all of the patterns into a single cluster. The sample probabilities pi(vi1 = 1) and pi(vi0 = 0) are all equal to 1/2 for each of the three components. Summing over the values of the components (0 and 1) gives (1/2)² + (1/2)² = 1/2 for each component; summing over the three components gives 3/2. Averaging over all of the clusters (there is just one) also gives 3/2. Finally, dividing by the number of clusters produces the final Z value of this partition, Z(P1) = 3/2.

The second partition, P2, gives the following sample probabilities for its first cluster:

p1(v11 = 1 | C1) = 1
p2(v21 = 1 | C1) = 1/2
p3(v31 = 1 | C1) = 1

Summing over the values of the components (0 and 1) gives (1)² + (0)² = 1 for component 1, (1/2)² + (1/2)² = 1/2 for component 2, and (1)² + (0)² = 1 for component 3. Summing over the three components gives 2 1/2 for class 1. A similar calculation also gives 2 1/2 for class 2. Averaging over the two clusters also gives 2 1/2. Finally, dividing by the number of clusters produces the final Z value of this partition, Z(P2) = 1 1/4.

Similar calculations yield Z(P3) = 1 and Z(P4) = 3/4, so this method of evaluating partitions would favor placing all patterns in a single cluster.
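These Z values can be checked mechanically. In the sketch below the coordinates of a, b, c, and d are an assumption chosen to be consistent with the sample probabilities quoted above (the exact placement in Fig. 9.6 is not recoverable here); running it prints 1.5, 1.25, 1.0, and 0.75 for P1 through P4:

    import numpy as np

    patterns = {'a': (1, 0, 1), 'b': (1, 1, 1), 'c': (0, 0, 0), 'd': (0, 1, 0)}

    def Z(partition):
        """Quality Z = (1/R) sum_k p(C_k) sum_i sum_j p_i(v_ij|C_k)^2."""
        R = len(partition)
        total = sum(len(c) for c in partition)
        z = 0.0
        for cluster in partition:
            pts = np.array([patterns[name] for name in cluster])
            g = sum(np.mean(pts[:, i] == v) ** 2
                    for i in range(pts.shape[1]) for v in (0, 1))
            z += (len(cluster) / total) * g
        return z / R

    for P in ([['a', 'b', 'c', 'd']],
              [['a', 'b'], ['c', 'd']],
              [['a', 'c'], ['b', 'd']],
              [['a'], ['b'], ['c'], ['d']]):
        print(P, Z(P))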
[Figure 9.6 shows the four three-dimensional patterns a, b, c, and d plotted against axes x1, x2, and x3.]

Figure 9.6: Patterns in 3-Dimensional Space

An iterative method for hierarchical clustering

Evaluating all partitionings of m patterns and then selecting the best would be computationally intractable. The following iterative method is based on a hierarchical clustering procedure called COBWEB [Fisher, 1987]. The procedure grows a tree, each node of which is labeled by a set of patterns. At the end of the process, the root node contains all of the patterns in Ξ, and the successors of the root node will contain mutually exclusive and exhaustive subsets of Ξ. In general, the successors of a node, η, are labeled by mutually exclusive and exhaustive subsets of the pattern set labelling node η. The tips of the tree will contain singleton sets. The method uses Z values to place patterns at the various nodes; sample statistics are used to update the Z values whenever a pattern is placed at a node. The algorithm is as follows:

1. We start with a tree whose root node contains all of the patterns in Ξ and a single empty successor node. We arrange that at all times during the process every nonempty node in the tree has (besides any other successors) exactly one empty successor.

2. Select a pattern Xi in Ξ (if there are no more patterns to select, terminate).

3. Set µ to the root node.

4. For each of the successors of µ (including the empty successor!), calculate the best host for Xi. A best host is determined by tentatively placing Xi in one of the successors and calculating the resulting Z value for each
one of these ways of accommodating Xi. The best host corresponds to the assignment with the highest Z value.

5. If the best host is an empty node, η, we place Xi in η, generate an empty successor node of η, generate an empty sibling node of η, and go to 2.

6. If the best host is a nonempty, singleton (tip) node, η, we place Xi in η, create one successor node of η containing the singleton pattern that was in η, create another successor node of η containing Xi, create an empty successor node of η, create empty successor nodes of the new nonempty successors of η, and go to 2.

7. If the best host is a nonempty, nonsingleton node, η, we place Xi in η, set µ to η, and go to 4.

This process is rather sensitive to the order in which patterns are presented. To make the final classification tree less order dependent, the COBWEB procedure incorporates node merging and splitting.

Node merging: It may happen that two nodes having the same parent could be merged with an overall increase in the quality of the resulting classification performed by the successors of that parent. Rather than try all pairs to merge, a good heuristic is to attempt to merge the two best hosts. When such a merging improves the Z value, a new node containing the union of the patterns in the merged nodes replaces the merged nodes, and the two nodes that were merged are installed as successors of the new node.

Node splitting: A heuristic for node splitting is to consider replacing the best host among a group of siblings by that host's successors. This operation is performed only if it increases the Z value of the classification performed by the group of siblings.

Example results from COBWEB

We mention two experiments with COBWEB. In the first, the program attempted to find two categories (we will call them Class 1 and Class 2) of United States Senators based on their votes (yes or no) on six issues. After the clusters were established, the majority vote in each class was computed. These are shown in the table below.

Issue             Class 1   Class 2
Toxic Waste       yes       no
Budget Cuts       yes       no
SDI Reduction     no        yes
Contra Aid        yes       no
Line-Item Veto    yes       no
MX Production     yes       no
In the second experiment, the program attempted to classify soybean diseases based on various characteristics. COBWEB grouped the diseases in the taxonomy shown in Fig. 9.7.

[Figure 9.7 shows the induced taxonomy: the root N0 (soybean diseases) has successors N1 (Diaporthe Stem Canker), N2 (Charcoal Rot), and N3, where N3 has successors N31 (Rhizoctonia Rot) and N32 (Phytophthora Rot).]

Figure 9.7: Taxonomy Induced for Soybean Diseases

9.4 Bibliographical and Historical Remarks

To be added.
Chapter 10

Temporal-Difference Learning

10.1 Temporal Patterns and Prediction Problems

In this chapter, we consider problems in which we wish to learn to predict the future value of some quantity, say z, from an n-dimensional input pattern, X. In many of these problems, the patterns occur in temporal sequence, X1, X2, . . . , Xi, Xi+1, . . . , Xm, and are generated by a dynamical process. The components of Xi are features whose values are available at time, t = i. We distinguish two kinds of prediction problems. In one, we desire to predict the value of z at time t = i + 1 based on input Xi for every i. For example, we might wish to predict some aspects of tomorrow's weather based on a set of measurements made today. In the other kind of prediction problem, we desire to make a sequence of predictions about the value of z at some fixed time, say t = m + 1, based on each of the Xi, i = 1, . . . , m. For example, we might wish to make a series of predictions about some aspect of the weather on next New Year's Day, based on measurements taken every day before New Year's. Sutton [Sutton, 1988] has called this latter problem multi-step prediction, and that is the problem we consider here. In multi-step prediction, we might expect that the prediction accuracy should get better and better as i increases toward m.

10.2 Supervised and Temporal-Difference Methods

A training method that naturally suggests itself is to use the actual value of z at time m + 1 (once it is known) in a supervised learning procedure using a
sequence of training patterns, {X1, X2, . . . , Xi, Xi+1, . . . , Xm}. For supervised learning, we would need a training set, Ξ, consisting of several such sequences. Typically, we seek to learn a function, f, such that f(Xi) is as close as possible to z for each i. We will show that a method that is better than supervised learning for some important problems is to base learning on the difference between f(Xi+1) and f(Xi) rather than on the difference between z and f(Xi). Such methods involve what is called temporal-difference (TD) learning.

We assume that our prediction, f(X), depends on a vector of modifiable weights, W. To make that dependence explicit, we write f(X, W).

In general, we consider procedures of the following type: For each Xi, the prediction f(Xi, W) is computed and compared to z, and the learning rule (whatever it is) computes the change, (∆W)i, to be made to W. Then, taking into account the weight changes for each pattern in a sequence all at once after having made all of the predictions with the old weight vector, we change W as follows:

W ← W + Σ_{i=1}^{m} (∆W)i

Whenever we are attempting to minimize the squared error between z and f(Xi, W) by gradient descent, the weight-changing rule for each pattern is:

(∆W)i = c (z − fi) ∂fi/∂W

where c is a learning rate parameter, fi is our prediction of z, f(Xi, W), at time t = i, and ∂fi/∂W is, by definition, the vector of partial derivatives (∂fi/∂w1, . . . , ∂fi/∂wi, . . . , ∂fi/∂wn), in which the wi are the individual components of W. (The expression ∂fi/∂W is sometimes written ∇W fi.) The reader will recall that we used an equivalent expression for (∆W)i in deriving the backpropagation formulas used in training multilayer neural networks.

The Widrow-Hoff rule results when f(X, W) = X • W. Then:

(∆W)i = c (z − fi) Xi

An interesting form for (∆W)i can be developed if we note that

(z − fi) = Σ_{k=i}^{m} (fk+1 − fk)

where we define fm+1 = z. Substituting in our formula for (∆W)i yields:

(∆W)i = c (z − fi) ∂fi/∂W
       = c (∂fi/∂W) Σ_{k=i}^{m} (fk+1 − fk)

In this form, instead of using the difference between a prediction and the value of z, we use the differences between successive predictions; thus the phrase temporal-difference (TD) learning. In the case when f(X, W) = X • W, the temporal-difference form of the Widrow-Hoff rule is:

(∆W)i = c Xi Σ_{k=i}^{m} (fk+1 − fk)

One reason for writing (∆W)i in temporal-difference form is to permit an interesting generalization as follows:

(∆W)i = c (∂fi/∂W) Σ_{k=i}^{m} λ^{(k−i)} (fk+1 − fk)

where 0 < λ ≤ 1. Here, the λ term gives exponentially decreasing weight to differences later in time than t = i. When λ = 1, we have the same rule with which we began, weighting all differences equally, but as λ → 0, we weight only the (fi+1 − fi) difference. Intermediate values of λ take into account differently weighted differences between future pairs of successive predictions. With the λ term, the method is called TD(λ).

It is interesting to compare the two extreme cases:

For TD(0):   (∆W)i = c (fi+1 − fi) ∂fi/∂W

For TD(1):   (∆W)i = c (z − fi) ∂fi/∂W

Both extremes can be handled by the same learning mechanism; only the error term is different. In TD(0), the error is the difference between successive predictions, and in TD(1), the error is the difference between the finally revealed value of z and the prediction. Only TD(1) can be considered a pure supervised learning procedure, sensitive to the final value of z provided by the teacher. For λ < 1, we have various degrees of unsupervised learning, in which the prediction function strives to make each prediction more like successive ones (whatever they might be). We shall soon see that these unsupervised procedures result in better learning than do the supervised ones for an important class of problems.
10.3 Incremental Computation of the (∆W)i

We can rewrite our formula for (∆W)i, namely

(∆W)i = c (∂fi/∂W) Σ_{k=i}^{m} λ^{(k−i)} (fk+1 − fk),

to allow a type of incremental computation. First we write the expression for the weight change rule that takes into account all of the (∆W)i:

W ← W + Σ_{i=1}^{m} c (∂fi/∂W) Σ_{k=i}^{m} λ^{(k−i)} (fk+1 − fk)

Interchanging the order of the summations yields:

W ← W + Σ_{k=1}^{m} c Σ_{i=1}^{k} λ^{(k−i)} (fk+1 − fk) (∂fi/∂W)

  = W + Σ_{k=1}^{m} c (fk+1 − fk) Σ_{i=1}^{k} λ^{(k−i)} (∂fi/∂W)

Interchanging the indices k and i finally yields:

W ← W + Σ_{i=1}^{m} c (fi+1 − fi) Σ_{k=1}^{i} λ^{(i−k)} (∂fk/∂W)

If, as earlier, we want to use an expression of the form W ← W + Σ_{i=1}^{m} (∆W)i, we see that we can write:

(∆W)i = c (fi+1 − fi) Σ_{k=1}^{i} λ^{(i−k)} (∂fk/∂W)

Now, if we let ei = Σ_{k=1}^{i} λ^{(i−k)} (∂fk/∂W), we can develop a computationally efficient recurrence equation for ei+1 as follows:

ei+1 = Σ_{k=1}^{i+1} λ^{(i+1−k)} (∂fk/∂W)

     = ∂fi+1/∂W + Σ_{k=1}^{i} λ^{(i+1−k)} (∂fk/∂W)
     = ∂fi+1/∂W + λ ei

Rewriting (∆W)i in these terms, we obtain:

(∆W)i = c (fi+1 − fi) ei

where:

e1 = ∂f1/∂W

e2 = ∂f2/∂W + λ e1

etc.

Quoting Sutton [Sutton, 1988, page 15] (about a different equation, but the quote applies equally well to this one): ". . . this equation can be computed incrementally, because each (∆W)i depends only on a pair of successive predictions and on the [weighted] sum of all past values for ∂fi/∂W. This saves substantially on memory, because it is no longer necessary to individually remember all past values of ∂fi/∂W."

10.4 An Experiment with TD Methods

TD prediction methods [especially TD(0)] are well suited to situations in which the patterns are generated by a dynamic process. In that case, sequences of temporally presented patterns contain important information that is ignored by a conventional supervised method such as the Widrow-Hoff rule. Sutton [Sutton, 1988, page 19] gives an interesting example involving a random walk, which we repeat here. In Fig. 10.1, sequences of vectors, X, are generated as follows: We start with vector XD; the next vector in the sequence is equally likely to be one of the adjacent vectors in the diagram. If the next vector is XC (or XE), the next one after that is equally likely to be one of the vectors adjacent to XC (or XE). When XB is in the sequence, it is equally likely that the sequence terminates with z = 0 or that the next vector is XC. Similarly, when XF is in the sequence, it is equally likely that the sequence terminates with z = 1 or that the next vector is XE. Thus the sequences are random, but they always start with XD. Some sample sequences are shown in the figure.
[Figure 10.1 depicts the five nonterminal states as unit-basis input vectors: XB = (1,0,0,0,0) (stepping off this end gives z = 0), XC = (0,1,0,0,0), XD = (0,0,1,0,0), XE = (0,0,0,1,0), and XF = (0,0,0,0,1) (stepping off this end gives z = 1). Typical sequences: XD XC XD XE XF (z = 1); XD XC XB XC XD XE XD XE XF (z = 1); XD XE XD XC XB (z = 0).]

Figure 10.1: A Markov Process

This random walk is an example of a Markov process; transitions from state i to state j occur with probabilities that depend only on i and j. Given a set of sequences generated by this process as a training set, we want to be able to predict the value of z for each X in a test sequence. We assume that the learning system does not know the transition probabilities.

For his experiments with this process, Sutton used a linear predictor, that is, f(X, W) = X • W. The learning problem is to find a weight vector, W, that minimizes the mean-squared error between z and the predicted value of z. Given the five different values that X can take on, we have the following predictions: f(XB) = w1, f(XC) = w2, f(XD) = w3, f(XE) = w4, f(XF) = w5, where wi is the i-th component of the weight vector. (Note that the values of the predictions are not limited to 1 or 0, even though z can only have one of those values, because we are minimizing mean-squared error.) After training, these predictions will be compared with the optimal ones, given the transition probabilities.

The experimental setup was as follows: ten random sequences were generated using the transition probabilities. Each of these sequences was presented in turn to a TD(λ) method for various values of λ. Weight vector increments, (∆W)i, were computed after each pattern presentation, but no weight changes were made until all ten sequences were presented. The weight vector increments were summed after all ten sequences were presented, and this sum was used to change the weight vector to be used for the next pass through the ten sequences. This process was repeated over and over (using the same training sequences) until (quoting Sutton) "the procedure no longer produced any significant changes in the weight vector."
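The experiment is easy to replicate in outline. The sketch below (our own code; c = 0.01 and λ = 0.3 are arbitrary illustrative choices) generates the bounded random walk, then repeatedly presents a fixed set of ten sequences, summing the TD(λ) increments over the whole set before each weight change, as described above:

    import numpy as np

    rng = np.random.default_rng(0)

    def random_walk():
        """One sequence: start at D (index 2), step left or right with equal
        probability, terminate off either end. Returns (vectors, z)."""
        i, seq = 2, []
        while 0 <= i <= 4:
            seq.append(np.eye(5)[i])
            i += rng.choice((-1, 1))
        return seq, (1.0 if i > 4 else 0.0)

    train = [random_walk() for _ in range(10)]
    W = np.full(5, 0.5)
    for sweep in range(500):
        dW = np.zeros(5)
        for seq, z in train:
            e = np.zeros(5)                       # eligibility trace e_i
            preds = [x @ W for x in seq] + [z]    # f_{m+1} is defined to be z
            for i, x in enumerate(seq):
                e = x + 0.3 * e                   # e_{i+1} = grad f + lam * e_i
                dW += 0.01 * (preds[i + 1] - preds[i]) * e
        W = W + dW                                # update only after all ten sequences
    print(W)  # ideal predictions would be (1/6, 1/3, 1/2, 2/3, 5/6)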
For small c, the weight vector always converged in this way, and always to the same final value [for 100 different training sets of ten random sequences], independent of its initial value. (Even though, for fixed, small c, the weight vector always converged to the same vector, it might converge to a somewhat different vector for different values of c.) After convergence, the predictions made by the final weight vector are compared with the optimal predictions made using the transition probabilities. These optimal predictions are simply p(z = 1 | X). We can compute these probabilities to be 1/6, 1/3, 1/2, 2/3, and 5/6 for XB, XC, XD, XE, and XF, respectively. The root-mean-squared differences between the best learned predictions (over all c) and these optimal ones are plotted in Fig. 10.2 for seven different values of λ. (For each data point, the standard error is approximately σ = 0.01.)

[Figure 10.2 plots the error (using the best c) against λ for λ = 0.0, 0.1, 0.3, 0.5, 0.7, 0.9, and 1.0, on a vertical scale from 0.10 to 0.20; the Widrow-Hoff procedure corresponds to the TD(1) point, which shows the largest error. Adapted from Sutton, 1988.]

Figure 10.2: Prediction Errors for TD(λ)

Notice that the Widrow-Hoff procedure does not perform as well as other versions of TD(λ) for λ < 1! Quoting [Sutton, 1988, page 21]:

"This result contradicts conventional wisdom. It is well known that, under repeated presentations, the Widrow-Hoff procedure minimizes the RMS error between its predictions and the actual outcomes in the training set ([Widrow & Stearns, 1985]). How can it be that this optimal method performed worse than all the TD methods for λ < 1? The answer is that the Widrow-Hoff procedure only minimizes error on the training set; it does not necessarily minimize error for future experience. [Later] we prove that in fact it is linear TD(0) that converges to what can be considered the optimal estimates for matching future experience, those consistent with the maximum-likelihood estimate of the underlying Markov process."

10.5 Theoretical Results

It is possible to analyze the performance of the linear-prediction TD(λ) methods on Markov processes. We state some theorems here without proof.

Theorem 10.1 (Sutton, 1988) For any absorbing Markov chain, and for any linearly independent set of observation vectors {Xi} for the nonterminal states, there exists an ε > 0 such that for all positive c < ε and for any initial weight vector, the predictions of linear TD(0) (with weight updates after each sequence) converge in expected value to the optimal (maximum likelihood) predictions of the true process.

Even though the expected values of the predictions converge, the predictions themselves do not converge but vary around their expected values depending on their most recent experience. Sutton conjectures that if c is made to approach 0 as training progresses, the variance of the predictions will approach 0 also.

Dayan [Dayan, 1992] has extended the result of Theorem 10.1 to TD(λ) for arbitrary λ between 0 and 1. (Also see [Dayan & Sejnowski, 1994].)

10.6 Intra-Sequence Weight Updating

Our standard weight updating rule for TD(λ) methods is:

W ← W + Σ_{i=1}^{m} c (fi+1 − fi) Σ_{k=1}^{i} λ^{(i−k)} (∂fk/∂W)

where the weight update occurs after an entire sequence is observed. To make the method truly incremental (in analogy with weight updating rules for neural nets), it would be desirable to change the weight vector after every pattern presentation. The obvious extension is:

Wi+1 ← Wi + c (fi+1 − fi) Σ_{k=1}^{i} λ^{(i−k)} (∂fk/∂W)

where fi+1 is computed before making the weight change. But that would make fi = f(Xi, Wi−1) and fi+1 = f(Xi+1, Wi), and such a rule would make the prediction difference, namely (fi+1 − fi), sensitive both to changes in X and to changes in W, which could lead to instabilities. Instead, we modify the rule so that, for every pair of predictions, fi+1 = f(Xi+1, Wi) and fi = f(Xi, Wi). This version of the rule has been used in practice with excellent results.

For TD(0) and linear predictors, the rule is:

Wi+1 = Wi + c (fi+1 − fi) Xi

The rule is implemented as follows:

a. Initialize the weight vector, W, arbitrarily.

b. For i = 1, . . . , m, do:

(a) fi ← Xi • W (We compute fi anew each time through rather than use the value of fi+1 from the previous time through.)

(b) fi+1 ← Xi+1 • W

(c) di+1 ← fi+1 − fi

(d) W ← W + c di+1 Xi (If fi were computed again with this changed weight vector, its value would be closer to fi+1, as desired.)

The linear TD(0) method can be regarded as a technique for training a very simple network consisting of a single dot-product unit (and no threshold or sigmoid function). TD methods can also be used in combination with backpropagation to train neural networks. For TD(0) we change the network weights according to the expression:

Wi+1 = Wi + c (fi+1 − fi) ∂fi/∂W

The only change that must be made to the standard backpropagation weight-changing rule is that the difference term between the desired output and the output of the unit in the final (k-th) layer, namely (d − f^(k)), must be replaced by a difference term between successive outputs, (fi+1 − fi). This change has a direct effect only on the expression for δ^(k), which becomes:

δ^(k) = 2 (f'^(k) − f^(k)) f^(k) (1 − f^(k))

where f^(k) and f'^(k) are two successive outputs of the network. Of course, here also it is assumed that f^(k) and f'^(k) are computed using the same weights, and then the weights are changed. The weight-changing rule for the i-th weight vector in the j-th layer of weights has the same form as before, namely:

Wi^(j) ← Wi^(j) + c δi^(j) X^(j−1)

where the δi^(j) are given recursively by:

δi^(j) = fi^(j) (1 − fi^(j)) Σ_{l=1}^{mj+1} δl^(j+1) wil^(j+1)

and wil^(j+1) is the l-th component of the i-th weight vector in the (j + 1)-th layer of weights. In the next section we shall see an interesting example of this application of TD learning.
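Steps (a) through (d) of the linear TD(0) rule translate directly into code; a minimal sketch (our own naming), where each training sequence is a list of input vectors followed by its final outcome z:

    import numpy as np

    def td0_linear(sequences, n, c=0.05, passes=50):
        """Intra-sequence linear TD(0): W <- W + c (f_{i+1} - f_i) X_i,
        recomputing f_i from the current weights at each step, as in (a)."""
        W = np.zeros(n)
        for _ in range(passes):
            for seq, z in sequences:
                xs = list(seq)
                for i in range(len(xs)):
                    f_i = xs[i] @ W                                   # step (a)
                    f_next = xs[i + 1] @ W if i + 1 < len(xs) else z  # step (b); f_{m+1} = z
                    W = W + c * (f_next - f_i) * xs[i]                # steps (c), (d)
        return W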
10.7 An Example Application: TD-gammon

A program called TD-gammon [Tesauro, 1992] learns to play backgammon by training a neural network via temporal-difference methods. The structure of the neural net, and its coding, is as shown in Fig. 10.3. The network is trained to minimize the error between actual payoff and estimated payoff, where the actual payoff is defined to be df = p1 + 2p2 − p3 − 2p4, and the pi are the actual probabilities of the various outcomes as defined in the figure.

[Figure 10.3 shows the TD-gammon network: 198 inputs (for each of the 2 × 24 cells, the number of pieces on the cell coded as 1, 2, 3, or >3, plus pieces on the bar, off the board, and who moves), up to 40 hidden units, and 4 output units estimating the probabilities p1 = pr(white wins), p2 = pr(white gammons), p3 = pr(black wins), and p4 = pr(black gammons), giving estimated payoff d = p1 + 2p2 − p3 − 2p4. Hidden and output units are sigmoids; the learning rate is c = 0.1; initial weights are chosen randomly between −0.5 and +0.5.]

Figure 10.3: The TD-gammon Network

TD-gammon learned by using the network to select that move that results in the best predicted payoff. That is, at any stage of the game some finite set of moves is possible, and these lead to the set, {X}, of new board positions. Each member of this set is evaluated by the network, and the one with the largest
predicted payoff is selected if it is white's move (and the smallest if it is black's). The move is made, and the network weights are adjusted to make the predicted payoff from the original position closer to that of the resulting position.

The weight adjustment procedure combines temporal-difference (TD(λ)) learning with backpropagation. If dt is the network's estimate of the payoff at time t (before a move is made), and dt+1 is the estimate at time t + 1 (after a move is made), the weight adjustment rule is:

∆Wt = c (dt+1 − dt) Σ_{k=1}^{t} λ^{t−k} ∂dk/∂W

where Wt is a vector of all weights in the network at time t, and ∂dk/∂W is the gradient of dk in this weight space. (For a layered, feedforward network, such as that of TD-gammon, the weight changes for the weight vectors in each layer can be expressed in the usual manner.)

To make the special cases clear, recall that for TD(0), the network would be trained so that, for all t, its output, dt, for input Xt tended toward its expected output, dt+1, for input Xt+1. For TD(1), the network would be trained so that, for all t, its output, dt, for input Xt tended toward the expected final payoff, df. The latter case is the same as the Widrow-Hoff rule.

After about 200,000 games, the following results were obtained: TD-gammon (with 40 hidden units, λ = 0.7, and c = 0.1) won 66.2% of 10,000 games against SUN Microsystems' Gammontool and 55% of 10,000 games against a neural network trained using expert moves. Commenting on a later version of TD-gammon, incorporating special features as inputs, Tesauro said: "It appears to be the strongest program ever seen by this author."
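The move-selection scheme can be sketched as follows, where net is an assumed callable returning the four estimated outcome probabilities (this is an illustration of the idea, not Tesauro's code):

    import numpy as np

    def select_move(positions, net, white_to_move):
        """Evaluate every reachable afterstate with the network and pick
        the best estimated payoff d = p1 + 2 p2 - p3 - 2 p4 (largest for
        white, smallest for black). Returns the chosen index."""
        payoffs = []
        for x in positions:
            p1, p2, p3, p4 = net(x)
            payoffs.append(p1 + 2 * p2 - p3 - 2 * p4)
        return int(np.argmax(payoffs) if white_to_move else np.argmin(payoffs))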
10.8 Bibliographical and Historical Remarks

To be added.
Chapter 11

Delayed-Reinforcement Learning

11.1 The General Problem

Imagine a robot that exists in an environment in which it can sense and act. Suppose (as an extreme case) that it has no idea about the effects of its actions; that is, it doesn't know how acting will change its sensory inputs. Along with its sensory inputs are "rewards," which it occasionally receives. How should it choose its actions so as to maximize its rewards over the long run? To maximize rewards, it will need to be able to predict how actions change inputs, and, in particular, how actions lead to rewards. This type of learning is called reinforcement learning.

We formalize the problem in the following way: The robot exists in an environment consisting of a set, S, of states. We assume that the robot's sensory apparatus constructs an input vector, X, from the environment, which informs the robot about which state the environment is in. For the moment, we will assume that the mapping from states to vectors is one-to-one, and, in fact, will use the notation X to refer to the state of the environment as well as to the input vector. When presented with an input vector, the robot decides which action from a set, A, of actions to perform. Performing the action produces an effect on the environment, moving it to a new state. The new state results in the robot perceiving a new input vector, and the cycle repeats. We assume a discrete time model; the input vector at time t = i is Xi, the action taken at that time is ai, and the expected reward, ri, received at t = i depends on the action taken and on the state, that is, ri = r(Xi, ai).

The learner's goal is to find a policy, π(X), that maps input vectors to actions in such a way as to maximize rewards accumulated over time. The learner must find the policy by trial and error; it has no initial knowledge of the effects of its actions. The situation is as shown in Fig. 11.1.
11.2 An Example

A "grid world," such as the one shown in Fig. 11.2, is often used to illustrate reinforcement learning. Imagine a robot initially in cell (2,3). The robot receives an input vector (x_1, x_2) telling it what cell it is in; it is capable of four actions, n, e, s, w, moving the robot one cell up, right, down, or left, respectively. It is rewarded one negative unit whenever it bumps into the wall or into the blocked cells. For example, if the input to the robot is (1,3), and the robot chooses action w, the next input to the robot is still (1,3) and it receives a reward of −1. If the robot lands in the cell marked G (for goal), it receives a reward of +10. Let's suppose that whenever the robot lands in the goal cell and gets its reward, it is immediately transported out to some random cell, and the quest for reward continues.

A policy for our robot is a specification of what action to take for every one of its inputs, that is, for every one of the cells in the grid. For example, a component of such a policy would be "when in cell (3,1), move right." An optimal policy is a policy that maximizes long-term reward. One way of displaying a policy for our grid-world robot is by an arrow in each cell indicating the direction the robot should move when in that cell. In Fig. 11.3, we show an optimal policy displayed in this manner. In this chapter we will describe methods for learning optimal policies based on reward values received by the learner.
[Figure 11.2: A Grid World — an 8 × 7 grid containing blocked cells, the goal cell G, and the robot R.]

11.3 Temporal Discounting and Optimal Policies

In delayed reinforcement learning, one often assumes that rewards in the distant future are not as valuable as more immediate rewards. This preference can be accommodated by a temporal discount factor, 0 ≤ γ < 1. The present value of a reward, r_i, occurring i time units in the future, is taken to be γ^i r_i. Suppose we have a policy π(X) that maps input vectors into actions, and let r_i be the reward that will be received on the i-th time step after one begins executing policy π starting in state X. Then the total reward accumulated over all time steps by policy π beginning in state X is:

V^\pi(X) = \sum_{i=0}^{\infty}\gamma^i r_i

One reason for using a temporal discount factor is so that the above sum will be finite (with rewards bounded in magnitude by R, the sum is at most R/(1 − γ)). In general, we want to consider the case in which the rewards, r_i, are random variables and in which the effects of actions on environmental states are random. In Markovian environments, for example, the probability that action a in state X_i will lead to state X_j is given by a transition probability p[X_j | X_i, a]. Then we will want to maximize expected future reward and would define V^π(X) as:

V^\pi(X) = E\left[\sum_{i=0}^{\infty}\gamma^i r_i\right]

In either case, we call V^π(X) the value of policy π for input X. An optimal policy is one that maximizes V^π(X) for all inputs.
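The definitions above are easy to exercise numerically. Here is a small sketch in which a finite reward sequence stands in for the infinite sum; the function name and the sample values are illustrative only.

```python
# Discounted return: sum of gamma**i * r_i over a reward sequence.

def discounted_return(rewards, gamma=0.9):
    return sum((gamma ** i) * r for i, r in enumerate(rewards))

# With |r_i| <= R the infinite sum is bounded by R / (1 - gamma): a reward
# of 10 on every step with gamma = 0.9 gives a value approaching 100.
print(discounted_return([10] * 200, gamma=0.9))   # about 100
```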
[Figure 11.3: An Optimal Policy in the Grid World — each cell contains an arrow giving the move prescribed for that cell.]

If the action prescribed by π taken in state X leads to state X' (randomly, according to the transition probabilities), then we can write V^π(X) in terms of the values of the successor states as follows:

V^\pi(X) = r[X, \pi(X)] + \gamma\sum_{X'} p[X' \mid X, \pi(X)]\,V^\pi(X')

where (in summary):

γ = the discount factor,
V^π(X) = the value of state X under policy π,
r[X, π(X)] = the expected immediate reward received when we execute the action prescribed by π in state X, and
p[X' | X, π(X)] = the probability that the environment transitions to state X' when we execute the action prescribed by π in state X.

In other words, the value of state X under policy π is the expected value of the immediate reward received when executing the action recommended by π, plus the average value (under π) of all of the states accessible from X.

The theory of dynamic programming (DP) [Bellman, 1957, Ross, 1983] assures us that there is at least one optimal policy, π*, that satisfies this equation. For an optimal policy, π* (and no others!), we have the famous "optimality equation":

V^{\pi^*}(X) = \max_a\Big\{ r(X, a) + \gamma\sum_{X'} p[X' \mid X, a]\,V^{\pi^*}(X')\Big\}
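When the rewards and transition probabilities are known, the optimality equation can be solved by successive approximation. The following is a minimal sketch of that dynamic-programming computation; the array shapes and the function name are this sketch's assumptions.

```python
import numpy as np

# Successive approximation of the optimality equation, assuming known
# rewards r[x, a] and transition probabilities P[x, a, x'].

def optimal_values(r, P, gamma=0.9, sweeps=1000):
    n_states, n_actions = r.shape
    V = np.zeros(n_states)
    for _ in range(sweeps):
        # Q[x, a] = r(x, a) + gamma * sum_x' P[x, a, x'] * V(x')
        Q = r + gamma * np.einsum('xay,y->xa', P, V)
        V = Q.max(axis=1)           # back up the value of the best action
    return V, Q.argmax(axis=1)      # optimal values and a greedy policy
```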
DP also provides methods for calculating V^{π*}(X) and at least one π*, assuming that we know the average rewards and the transition probabilities. Discuss synchronous dynamic programming, asynchronous dynamic programming, and policy iteration.

If we knew the transition probabilities, the average rewards, and V^{π*}(X') for all X' and a, then it would be easy to implement an optimal policy. We would simply select that a that maximizes r(X, a) + γ Σ_{X'} p[X' | X, a] V^{π*}(X'); that is,

\pi^*(X) = \arg\max_a\Big\{ r(X, a) + \gamma\sum_{X'} p[X' \mid X, a]\,V^{\pi^*}(X')\Big\}

But, of course, we are assuming that we do not know these average rewards nor the transition probabilities, so we have to find a method that effectively learns them.

If we had a model of actions—that is, if we knew for every state, X, and action a which state, X', resulted—then we could use a method called value iteration to find an optimal policy. Value iteration works as follows. We begin by assigning, randomly, an estimated value V̂(X) to every state, X. On the i-th step of the process, suppose we are at state X_i (that is, our input on the i-th step is X_i), and that the estimated value of state X_i on the i-th step is V̂_i(X_i). We then select that action a that maximizes the estimated value of the predicted subsequent state. Suppose this subsequent state having the highest estimated value is X'_i. Then we update the estimated value of state X_i as follows:

\hat{V}_i(X) = (1 - c_i)\,\hat{V}_{i-1}(X) + c_i\big[r_i + \gamma\,\hat{V}_{i-1}(X'_i)\big] \quad \text{if } X = X_i,
\hat{V}_i(X) = \hat{V}_{i-1}(X) \quad \text{otherwise.}

We see that this adjustment moves the value of V̂_i(X_i) an increment (depending on c_i) closer to r_i + γ V̂_i(X'_i). Assuming that V̂_i(X'_i) is a good estimate of the value of X'_i, this adjustment helps to make the two estimates more consistent. Providing that 0 < c_i < 1 and that we visit each state infinitely often, this process of value iteration will converge to the optimal values.
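The per-step update just described can be written compactly. Below is a sketch assuming a deterministic action model next_state(x, a) and a reward function are available, and that V is a table of estimated values; all of those names are illustrative.

```python
# One step of the value-update rule above: pick the action whose predicted
# successor has the highest estimated value, then move V(x) part of the way
# (a fraction c) toward r + gamma * V(x').

def value_update_step(V, x, actions, next_state, reward, gamma=0.9, c=0.5):
    a = max(actions, key=lambda b: V[next_state(x, b)])
    x_next = next_state(x, a)
    r = reward(x, a)
    V[x] = (1 - c) * V[x] + c * (r + gamma * V[x_next])
    return x_next
```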
11.4 Q-Learning

Watkins [Watkins, 1989] has proposed a technique that he calls incremental dynamic programming. Let a;π stand for the policy that chooses action a once, and thereafter chooses actions according to policy π. We define:

Q^\pi(X, a) = V^{a;\pi}(X)

Then the optimal value of state X is given by:

V^{\pi^*}(X) = \max_a Q^{\pi^*}(X, a)

Suppose action a taken in state X leads to state X'. Then, using the definitions of Q and V, it is easy to show that:

Q^\pi(X, a) = r(X, a) + \gamma\,E\big[V^\pi(X')\big]

where r(X, a) is the average value of the immediate reward received when we execute action a in state X. For an optimal policy (and no others), we have another version of the optimality equation in terms of Q values:

Q^{\pi^*}(X, a) = r(X, a) + \gamma\,E\big[\max_{a'} Q^{\pi^*}(X', a')\big]

for all actions, a, and states, X. This equation holds only for an optimal policy, π*, whose value is given by:

\pi^*(X) = \arg\max_a Q^{\pi^*}(X, a)

Note that if an action a makes Q^π(X, a) larger than V^π(X), then we can improve π by changing it so that π(X) = a. Making such a change is the basis for a powerful learning rule that we shall describe shortly.

Now, if we had the optimal Q values (for all a and X), then we could implement an optimal policy simply by selecting that action that maximized Q^{π*}(X, a). Watkins' proposal amounts to a TD(0) method of learning these Q values. We quote (with minor notational changes) from [Watkins & Dayan, 1992, page 281]:

"In Q-Learning, the agent's experience consists of a sequence of distinct stages or episodes. In the i-th episode, the agent:

• observes its current state X_i,
• selects [using the method described below] and performs an action a_i,
• observes the subsequent state X'_i,
• receives an immediate reward r_i, and
• adjusts its Q_{i−1} values using a learning factor c_i, according to:

Q_i(X, a) = (1 - c_i)\,Q_{i-1}(X, a) + c_i\big[r_i + \gamma V_{i-1}(X'_i)\big] \quad \text{if } X = X_i \text{ and } a = a_i,
Q_i(X, a) = Q_{i-1}(X, a) \quad \text{otherwise,}

where

V_{i-1}(X') = \max_b\big[Q_{i-1}(X', b)\big]

is the best the agent thinks it can do from state X'. . . . The initial Q values, Q_0(X, a), for all states and actions, are assumed given."

Using the current Q values, the agent always selects that action that maximizes Q_i(X, a). Note that only the Q value corresponding to the state just exited and the action just taken is adjusted, and that Q value is adjusted so that it is closer (by an amount determined by c_i) to the sum of the immediate reward plus the discounted maximum (over all actions) of the Q values of the state just entered.

If we imagine the Q values to be predictions of ultimate (infinite-horizon) total reward, then the learning procedure described above is exactly a TD(0) method of learning how to predict these Q values. Q learning strengthens the usual TD methods, however, because TD (applied to reinforcement problems using value iteration) requires a one-step lookahead, using a model of the effects of actions, whereas Q learning does not.

A convenient notation (proposed by [Schwartz, 1993]) for representing the change in Q value is:

Q(X, a) \xleftarrow{\beta} r + \gamma V(X')

where Q(X, a) is the new Q value for input X and action a, r is the immediate reward when action a is taken in response to input X, V(X') is the maximum (over all actions) of the Q value of the state next reached when action a is taken from state X, and β is the fraction of the way by which the new Q value is adjusted toward r + γ V(X').
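To make the bookkeeping concrete, here is a compact sketch of one Watkins-style episode with a table of Q values indexed by (state, action) pairs. The env object with its observe/step methods and actions attribute is a hypothetical interface; only the update rule itself follows the text.

```python
# One Q-learning episode: observe X, act greedily, observe r and X',
# then move Q(X, a) a fraction c toward r + gamma * max_b Q(X', b).

def q_update(Q, x, a, r, x_next, actions, gamma=0.9, c=0.5):
    v_next = max(Q[(x_next, b)] for b in actions)   # V(X') = max_b Q(X', b)
    Q[(x, a)] = (1 - c) * Q[(x, a)] + c * (r + gamma * v_next)

def q_episode(Q, env, gamma=0.9, c=0.5):
    x = env.observe()                               # observe current state X_i
    a = max(env.actions, key=lambda b: Q[(x, b)])   # greedy action selection
    r, x_next = env.step(a)                         # perform a_i; get r_i, X'_i
    q_update(Q, x, a, r, x_next, env.actions, gamma, c)
    return x_next
```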
Watkins and Dayan [Watkins & Dayan, 1992] prove that, under certain conditions, the Q values computed by this learning procedure converge to optimal ones (that is, to ones on which an optimal policy can be based). We define n_i(X, a) as the index (episode number) of the i-th time that action a is tried in state X. Then we have:

Theorem 11.1 (Watkins and Dayan) For Markov problems with states {X} and actions {a}, and given bounded rewards |r_n| ≤ R, learning rates 0 ≤ c_n < 1, and

\sum_{i=0}^{\infty} c_{n_i(X,a)} = \infty \quad \text{and} \quad \sum_{i=0}^{\infty}\big[c_{n_i(X,a)}\big]^2 < \infty

for all X and a, then Q_n(X, a) → Q*(X, a) as n → ∞, for all X and a, with probability 1, where Q*(X, a) corresponds to the Q values of an optimal policy.

Again, we quote from [Watkins & Dayan, 1992, page 281]: "The most important condition implicit in the convergence theorem . . . is that the sequence of episodes that forms the basis of learning must include an infinite number of episodes for each starting state and action. This may be considered a strong condition on the way states and actions are selected—however, under the stochastic conditions of the theorem, no method could be guaranteed to find an optimal policy under weaker conditions." Note, however, that the episodes need not form a continuous sequence—that is, the X' of one episode need not be the X of the next episode.

Although the definition of the optimal Q values for any state depends recursively on expected values of the Q values for subsequent states (and on the expected values of rewards), no expected values are explicitly computed by the procedure. Instead, these values are approximated by iterative sampling using the actual stochastic mechanism that produces successor states. Q learning is best thought of as a stochastic approximation method for calculating the Q values. The relationships among Q learning, dynamic programming, and control are very well described in [Barto, Bradtke, & Singh, 1994].

11.5 Discussion, Limitations, and Extensions of Q-Learning

11.5.1 An Illustrative Example

The Q-learning procedure requires that we maintain a table of Q(X, a) values for all state-action pairs. In the grid world that we described earlier, such a table would not be excessively large. We might start with random entries in the table; a portion of such an initial table might be as follows:
X       a    Q(X, a)    r(X, a)
(2,3)   w    7          0
(2,3)   n    4          0
(2,3)   e    3          0
(2,3)   s    6          0
(1,3)   w    4          −1
(1,3)   n    5          0
(1,3)   e    2          0
(1,3)   s    4          0

Suppose the robot is in cell (2,3). The maximum Q value occurs for a = w, so the robot moves west to cell (1,3)—receiving no immediate reward. The maximum Q value in cell (1,3) is 5, and the learning mechanism attempts to make the value of Q((2,3), w) closer to the discounted value of 5 plus the immediate reward (which was 0 in this case). With a learning-rate parameter c = 0.5 and γ = 0.9, the Q value of Q((2,3), w) is adjusted from 7 to (1 − 0.5)·7 + 0.5·(0 + 0.9·5) = 5.75. No other changes are made to the table at this episode. The reader might try this learning procedure on the grid world with a simple computer program; a minimal sketch appears at the end of this subsection.

With random Q values to begin, the agent's actions amount to a random walk through its space of states. Only when this random walk happens to stumble into rewarding states does Q learning begin to produce Q values that are useful, and, even then, the Q values have to work their way outward from these rewarding states. One can imagine that better and better approximations to the optimal Q values gradually propagate back from states producing rewards toward all of the other states that the agent frequently visits. Notice that an optimal policy might not be discovered if some cells are not visited nor some actions tried frequently enough.

The learning problem faced by the agent is to associate specific actions with specific input patterns. Typically, rewards occur somewhat after the actions that lead to them—hence the phrase delayed-reinforcement learning. Q learning gradually reinforces those actions that contribute to positive rewards by increasing the associated Q values. The general problem of associating rewards with state-action pairs is called the temporal credit assignment problem—how should credit for a reward be apportioned to the actions leading up to it? Q learning is, to date, the most successful technique for temporal credit assignment, although a related method, called the bucket brigade algorithm, has been proposed by [Holland, 1986].

Learning problems similar to that faced by the agent in our grid world have been thoroughly studied by Sutton, who has proposed an architecture, called DYNA, for solving them [Sutton, 1990]. DYNA combines reinforcement learning with planning. Sutton characterizes planning as learning in a simulated world that models the world that the agent inhabits. The agent's model of the world is obtained by Q learning in its actual world, and planning is accomplished by Q learning in its model of the world.
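Here is the minimal grid-world sketch promised above. The grid size, goal location, and parameter values are illustrative (the blocked cells of Fig. 11.2 are omitted for brevity), and action selection is purely greedy—the exploration issue this raises is taken up in the next subsection.

```python
import random

# Tabular Q-learning on a simple grid world: -1 for bumping the boundary,
# +10 at the goal followed by teleportation to a random cell.

ROWS, COLS, GOAL = 8, 7, (8, 4)          # assumed layout, not the book's exact figure
ACTIONS = {'n': (1, 0), 's': (-1, 0), 'e': (0, 1), 'w': (0, -1)}
Q = {((r, c), a): random.random()
     for r in range(1, ROWS + 1) for c in range(1, COLS + 1) for a in ACTIONS}

def step(cell, a):
    r, c = cell[0] + ACTIONS[a][0], cell[1] + ACTIONS[a][1]
    if not (1 <= r <= ROWS and 1 <= c <= COLS):
        return cell, -1                   # bumped into the wall
    if (r, c) == GOAL:                    # collect reward, teleport to a random cell
        return (random.randint(1, ROWS), random.randint(1, COLS)), 10
    return (r, c), 0

cell, gamma, c = (2, 3), 0.9, 0.5
for episode in range(20000):
    a = max(ACTIONS, key=lambda b: Q[(cell, b)])      # greedy choice
    nxt, reward = step(cell, a)
    target = reward + gamma * max(Q[(nxt, b)] for b in ACTIONS)
    Q[(cell, a)] = (1 - c) * Q[(cell, a)] + c * target
    cell = nxt
```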
We should note that the learning problem faced by our grid-world robot could be modified to have several places in the grid that give positive rewards. This possibility presents an interesting way to generalize the classical notion of a "goal" in AI planning systems—even in those that do no learning. Instead of representing a goal as a condition to be achieved, we represent a "goal structure" as a set of rewards to be given for achieving various conditions. Then the generalized "goal" becomes maximizing discounted future reward instead of simply achieving some particular condition. This generalization can be made to encompass so-called goals of maintenance and goals of avoidance. A goal of maintenance, of a particular state, could be expressed in terms of a reward that was earned whenever the agent was in that state and performed an action that transitioned back to that state in one step. The example presented above included avoiding bumping into the grid-world boundary.

11.5.2 Using Random Actions

When the next pattern presentation in a sequence of patterns is the one caused by the agent's own action in response to the last pattern, we have what is called an online learning method. In Watkins and Dayan's terminology, in online learning the episodes form a continuous sequence. As already mentioned, the convergence theorem for Q learning does not require online learning; indeed, special precautions must be taken to ensure that online learning meets the conditions of the theorem. If online learning discovers some good paths to rewards, the agent may fixate on these and never discover a policy that leads to a possibly greater long-term reward. In reinforcement learning phraseology, this problem is referred to as the problem of exploitation (of already learned behavior) versus exploration (of possibly better behavior).

One way to force exploration is to perform occasional random actions (instead of that single action prescribed by the current Q values). For example, in the grid-world problem, we might first find that action prescribed by the Q values and then choose that action with probability 1/2, choose the two orthogonal actions with probability 3/16 each, and choose the opposite action with probability 1/8. This policy might be modified by "simulated annealing," which would gradually increase the probability of the action prescribed by the Q values as time goes on. This strategy would favor exploration at the beginning of learning and exploitation later. More generally, one could imagine selecting an action randomly according to a probability distribution over the actions (n, e, s, and w); this distribution, in turn, could depend on the Q values. Other methods, also, have been proposed for dealing with exploration, including making unvisited states intrinsically rewarding and using an "interval estimate," which is related to the uncertainty in the estimate of a state's value [Kaelbling, 1993].
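The specific randomized rule described above (1/2, 3/16, 3/16, 1/8) is easy to implement. The following is a sketch for the grid-world action set; the dictionary names are this sketch's own.

```python
import random

# Randomized action selection: the Q-prescribed action with probability 1/2,
# each orthogonal action with probability 3/16, the opposite with 1/8.

ORTHOGONAL = {'n': ['e', 'w'], 's': ['e', 'w'], 'e': ['n', 's'], 'w': ['n', 's']}
OPPOSITE = {'n': 's', 's': 'n', 'e': 'w', 'w': 'e'}

def exploring_action(Q, cell, actions=('n', 'e', 's', 'w')):
    best = max(actions, key=lambda a: Q[(cell, a)])
    choices = [best] + ORTHOGONAL[best] + [OPPOSITE[best]]
    return random.choices(choices, weights=[8, 3, 3, 2])[0]   # sixteenths
```

A simulated-annealing variant would gradually shift the weights toward the prescribed action as learning proceeds.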
11.5.3 Generalizing Over Inputs

For large problems it would be impractical to maintain a table like that used in our grid-world example. Various researchers have suggested mechanisms for computing Q values, given pattern inputs and actions. One method that suggests itself is to use a neural network. For example, consider the simple linear machine shown in Fig. 11.4.

[Figure 11.4: A Net that Computes Q Values — R dot-product units with trainable weight vectors W_i compute Q(a_i, X) = X · W_i.]

Such a neural net could be used by an agent that has R actions to select from. The Q values (as a function of the input pattern X and the action a_i) are computed as dot products of weight vectors (one for each action) and the input vector. Weight adjustments are made according to a TD(0) procedure to bring the Q value for the action last selected closer to the sum of the immediate reward (if any) and the (discounted) maximum Q value for the next input pattern. Networks of this sort are able to aggregate different input vectors into regions for which the same action should be performed. This kind of aggregation is an example of what has been called structural credit assignment. Combining TD(λ) and backpropagation is a method for dealing with both the temporal and the structural credit assignment problems.

If the optimal Q values for the problem (whatever they might be) are more complex than those that can be computed by a linear machine, a layered neural network might be used. Sigmoid units in the final layer would compute Q values in the range 0 to 1. The TD(0) method for updating Q values would then have to be combined with a multilayer weight-changing rule, such as backpropagation.
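The linear machine of Fig. 11.4 with its TD(0) weight adjustment can be sketched in a few lines. The class name, learning rate, and array sizes below are illustrative assumptions.

```python
import numpy as np

# One weight vector per action; Q(a_i, X) = W_i . X.  The TD(0)-style update
# moves the Q value of the action last selected toward r + gamma * max Q(X').

class LinearQ:
    def __init__(self, n_actions, n_features, lr=0.1, gamma=0.9):
        self.W = np.zeros((n_actions, n_features))
        self.lr, self.gamma = lr, gamma

    def q(self, x):
        return self.W @ x                              # dot products for all actions

    def update(self, x, a, r, x_next):
        target = r + self.gamma * np.max(self.q(x_next))
        error = target - self.q(x)[a]
        self.W[a] += self.lr * error * x               # adjust only action a's weights
```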
Interesting examples of delayed-reinforcement training of simulated and actual robots requiring structural credit assignment have been reported by [Lin, 1992, Mahadevan & Connell, 1992].

11.5.4 Partially Observable States

So far, we have identified the input vector, X, with the actual state of the environment. When the input vector results from an agent's perceptual apparatus (as we assume it does), there is no reason to suppose that it uniquely identifies the environmental state. Because of inevitable perceptual limitations, several different environmental states might give rise to the same input vector. This phenomenon has been referred to as perceptual aliasing. With perceptual aliasing, we can no longer guarantee that Q learning will result in even useful action policies, let alone optimal ones. When such is the case, we no longer have a Markov problem; that is, given any action, the next X vector may depend on a sequence of previous ones rather than just the immediately preceding one.

It might be possible to reinstate a Markov framework (over the X's) if X includes not only current sensory percepts but also information from the agent's memory. For example, if some aspect of the environment cannot be sensed currently, perhaps it was sensed once and can be remembered by the agent. Several researchers have attempted to deal with this problem using a variety of methods, including attempting to model "hidden" states by using internal memory [Lin, 1993].

11.5.5 Scaling Problems

Several difficulties have so far prohibited wide application of reinforcement learning to large problems. (The TD-gammon program, mentioned in the last chapter, is probably unique in terms of success on a high-dimensional problem.) We have already touched on some difficulties; these and others are summarized below with references to attempts to overcome them.

a. Exploration versus exploitation:
• use random actions
• favor states not visited recently
• separate the learning phase from the use phase
• employ a teacher to guide exploration

b. Slow time to convergence:
• combine learning with prior knowledge; use estimates of Q values (rather than random values) initially
• use a hierarchy of actions; learn primitive actions first, freeze the useful sequences into macros, and then learn how to use the macros
• employ a teacher; use graded "lessons"—starting near the rewards and then backing away—and use examples of good behavior [Lin, 1992]
• use more efficient computations; e.g., do several updates per episode [Moore & Atkeson, 1993]

c. Large state spaces:
• use hand-coded features
• use neural networks
• use nearest-neighbor methods [Moore, 1990]

d. Temporal discounting problems. Using small γ can make the learner too greedy for present rewards and indifferent to the future, but using large γ slows down learning.
• use a learning method based on average rewards [Schwartz, 1993]

e. No "transfer" of learning. What is learned depends on the reward structure; if the rewards change, learning has to start over.
• separate the learning into two parts: learn an "action model," which predicts how actions change states (and is constant over all problems), and then learn the "values" of states by reinforcement learning for each different set of rewards. Sometimes the reinforcement-learning part can be replaced by a "planner" that uses the action model to produce plans to achieve goals.

11.6 Bibliographical and Historical Remarks

To be added. Also see other articles in the special issue on reinforcement learning: Machine Learning, 8, May, 1992.
Chapter 12

Explanation-Based Learning

12.1 Deductive Learning

In the learning methods studied so far, typically the training set does not exhaust the version space; using logical terminology, we could say that the classifier's output does not logically follow from the training set. In this sense, these methods are inductive. In logic, a deductive system is one whose conclusions logically follow from a set of input facts, if the system is sound.¹

To contrast inductive with deductive systems in a logical setting, suppose we have a set of facts (the training set) that includes the following formulas:

{Round(Obj1), Round(Obj2), Round(Obj3), Round(Obj4), Ball(Obj1), Ball(Obj2), Ball(Obj3), Ball(Obj4)}

A learning system that forms the conclusion (∀x)[Ball(x) ⊃ Round(x)] is inductive. This conclusion may be useful (if there are no facts of the form Ball(σ) ∧ ¬Round(σ)), but it does not logically follow from the facts. On the other hand, if we had the facts Green(Obj5) and Green(Obj5) ⊃ Round(Obj5), then we could logically conclude Round(Obj5). Making this conclusion and saving it is an instance of deductive learning—a topic we study in this chapter.

Suppose that some logical proposition, φ, logically follows from some set of facts, ∆. Under what circumstances might we say that the process of deducing φ from ∆ results in our learning φ? In a sense, we implicitly knew φ all along, since it was inherent in knowing ∆. Yet φ might not be obvious given ∆, and the deduction process to establish φ might have been arduous.

¹Logical reasoning systems that are not sound, for example those using nonmonotonic reasoning, might themselves produce inductive conclusions that do not logically follow from the input facts.
Rather than have to deduce φ again, we might want to save it, perhaps along with its deduction, in case it is needed later. Shouldn't that process count as learning? Dietterich [Dietterich, 1990] has called this type of learning speedup learning. Strictly speaking, speedup learning does not result in a system being able to make decisions that, in principle, could not have been made before the learning took place; it simply makes it possible to make those decisions more efficiently. But, in practice, this type of learning might make possible certain decisions that might otherwise have been infeasible. To take an extreme case, a chess player can be said to learn chess even though optimal play is inherent in the rules of chess. On the surface, there seems to be no real difference between the experience-based hypotheses that a chess player makes about what constitutes good play and the kind of learning we have been studying so far.

As another example, suppose we are given some theorems about geometry and are asked to prove that the sum of the angles of a right triangle is 180 degrees. Let us further suppose that the proof we constructed did not depend on the given triangle being a right triangle; in that case we can learn a more general fact. The learning technique that we are going to study next is related to this example. It is called explanation-based learning (EBL). In EBL, we specialize parts of a domain theory to explain a particular example, then we generalize the explanation to produce another element of the domain theory that will be useful on similar examples. This process is illustrated in Fig. 12.1. EBL can be thought of as a process in which implicit knowledge is converted into explicit knowledge.

[Figure 12.1: The EBL Process — a complex proof that "X is P," obtained by specializing the domain theory to the example, is generalized into a new domain rule, "things 'like' X are P," so that "Y is P" then follows from "Y is like X" by a trivial proof.]

12.2 Domain Theories

Two types of information were present in the inductive methods we have studied: the information inherent in the training samples and the information about the domain that is implied by the "bias" (for example, the hypothesis set from which we choose functions). The learning methods are successful only if the hypothesis set is appropriate for the problem. Typically, the smaller the hypothesis set (that is, the more a priori information we have about the function being sought), the less dependent we are on information being supplied by a training set (that is, the fewer samples we need). A priori information about a problem can be expressed in several ways. The methods we have studied so far restrict the hypotheses in a rather direct way. A less direct method involves making assertions in a logical language about the property we are trying to learn; a set of such assertions is usually called a "domain theory."

Suppose, for example, that we wanted to classify people according to whether or not they were good credit risks. We might represent a person by a set of properties (income, marital status, type of employment, etc.), assemble such
data about people who are known to be good and bad credit risks, and train a classifier to make decisions. Or, we might go to a loan officer of a bank, ask him or her what sorts of things s/he looks for in making a decision about a loan, encode this knowledge into a set of rules for an expert system, and then use the expert system to make decisions. The knowledge used by the loan officer might have originated as a set of "policies" (the domain theory), but perhaps the application of these policies was specialized and made more efficient through experience with the special cases of loans made in his or her district.

12.3 An Example

To make our discussion more concrete, let's consider the following fanciful example. We want to find a way to classify robots as "robust" or not. The attributes that we use to represent a robot might include some that are relevant to this decision and some that are not.
Suppose we have a domain theory of logical sentences that, taken together, help to define whether or not a robot can be classified as robust. (The same domain theory may be useful for several other purposes also, but among other things, it describes the concept "robust.") In this example, let's suppose that our domain theory includes the sentences:

Fixes(u, u) ⊃ Robust(u)
(An individual that can fix itself is robust.)

Sees(x, y) ∧ Habile(x) ⊃ Fixes(x, y)
(A habile individual that can see another entity can fix that entity.)

Robot(w) ⊃ Sees(w, w)
(All robots can see themselves.)

R2D2(x) ⊃ Habile(x)
(R2D2-class individuals are habile.)

C3PO(x) ⊃ Habile(x)
(C3PO-class individuals are habile.)

...

(By convention, variables are assumed to be universally quantified.) We could use theorem-proving methods operating on this domain theory to conclude whether certain robots are robust. These methods might be computationally quite expensive because extensive search may have to be performed to derive a conclusion. But after having found a proof for some particular robot, we might be able to derive some new sentence whose use allows a much faster conclusion. We next show how such a new rule might be derived in this example.

Suppose we are given a number of facts about Num5, such as:

Robot(Num5)
R2D2(Num5)
Age(Num5, 5)
Manufacturer(Num5, GR)
...
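To make the proof search concrete, here is a toy backward-chainer over the domain theory above. It is a sketch, not a full unification engine (it only binds rule variables to the constants in a ground goal); the data structures and function names are this sketch's own.

```python
# Rules are (antecedents, consequent) pairs over tuples; lowercase strings
# act as variables, capitalized strings as constants.

RULES = [
    ([('Fixes', 'u', 'u')],                 ('Robust', 'u')),
    ([('Sees', 'x', 'y'), ('Habile', 'x')], ('Fixes', 'x', 'y')),
    ([('Robot', 'w')],                      ('Sees', 'w', 'w')),
    ([('R2D2', 'x')],                       ('Habile', 'x')),
    ([('C3PO', 'x')],                       ('Habile', 'x')),
]
FACTS = {('Robot', 'Num5'), ('R2D2', 'Num5')}

def subst(term, env):
    return tuple(env.get(t, t) for t in term)

def prove(goal):
    if goal in FACTS:
        return True
    for antecedents, consequent in RULES:
        if consequent[0] != goal[0] or len(consequent) != len(goal):
            continue
        env = dict(zip(consequent[1:], goal[1:]))   # bind variables to goal arguments
        if subst(consequent, env) != goal:          # repeated variables must agree
            continue
        if all(prove(subst(a, env)) for a in antecedents):
            return True
    return False

print(prove(('Robust', 'Num5')))   # True
```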
[Figure 12.2: A Proof Tree — Robust(Num5) follows from Fixes(Num5, Num5) via Fixes(u, u) ⊃ Robust(u); Fixes(Num5, Num5) follows from Sees(Num5, Num5) and Habile(Num5) via Sees(x, y) ∧ Habile(x) ⊃ Fixes(x, y); these in turn follow from Robot(Num5) and R2D2(Num5) via Robot(w) ⊃ Sees(w, w) and R2D2(x) ⊃ Habile(x).]

We are also told that Robust(Num5) is true, but we nevertheless attempt to find a proof of that assertion using these facts about Num5 and the domain theory. The facts about Num5 correspond to the features that we might use to represent Num5, and not all of them are relevant to a decision about Robust(Num5). The relevant ones are those used or needed in proving Robust(Num5) using the domain theory. The proof tree in Fig. 12.2 is one that a typical theorem-proving system might produce. In the language of EBL, this proof is an explanation for the fact Robust(Num5). We see from this explanation that the only facts about Num5 that were used were Robot(Num5) and R2D2(Num5). In fact, we could construct the following rule from this explanation:

Robot(Num5) ∧ R2D2(Num5) ⊃ Robust(Num5)

The explanation has allowed us to prune some attributes about Num5 that are irrelevant (at least for deciding Robust(Num5)). This type of pruning is the first sense in which an explanation is used to generalize the classification problem. ([DeJong & Mooney, 1986] call this aspect of explanation-based learning feature elimination.) But the rule we extracted from the explanation applies only to Num5. There might be little value in learning that rule since it is so specific. Can it be generalized so that it can be applied to other individuals as well?
Examination of the proof shows that the same proof structure, using the same sentences from the domain theory, could be used independently of whether we are talking about Num5 or some other individual. We can generalize the proof by a process that replaces constants in the tip nodes of the proof tree with variables and works upward—using unification to constrain the values of variables as needed to obtain a proof.

In this example, we replace Robot(Num5) by Robot(r) and R2D2(Num5) by R2D2(s) and redo the proof—using the explanation proof as a template. Note that we use different values for the two different occurrences of Num5 at the tip nodes; doing so sometimes results in more general, but nevertheless valid, rules. We now apply the rules used in the proof in the forward direction, keeping track of the substitutions imposed by the most general unifiers used in the proof. (Note that we always substitute terms that are already in the tree for variables in rules.) This process results in the generalized proof tree shown in Fig. 12.3. Note that the occurrence of Sees(r, r) as a node in the tree forces the unification of x with y in the domain rule Sees(x, y) ∧ Habile(x) ⊃ Fixes(x, y). The substitutions are then applied to the variables in the tip nodes and the root node to yield the general rule:

Robot(r) ∧ R2D2(r) ⊃ Robust(r)

This rule is the end result of EBL for this example. The process by which Num5 in this example was generalized to a variable is what [DeJong & Mooney, 1986] call identity elimination (the precise identity of Num5 turned out to be irrelevant). (The generalization process described in this example is based on that of [DeJong & Mooney, 1986] and differs from that of [Mitchell, et al., 1986]. It is also similar to that used in [Fikes, et al., 1972].) Clearly, this general rule is more easily used to conclude Robust about an individual than the original proof process was.

It is important to note that, under certain assumptions, we could have derived the general rule from the domain theory without using the example; doing so is called static analysis [Etzioni, 1991]. Why didn't we? In this instance of EBL, the example told us nothing new other than what it told us about Num5; its sole role was to provide a template for a proof to help guide the generalization process. Basing the generalization process on examples helps to insure that we learn rules matched to the distribution of problems that occur. (One might note, for example, that if we used Habile(Num5) to describe Num5, the proof would have been shorter.)

There are a number of qualifications and elaborations about EBL that need to be mentioned.
[Figure 12.3: A Generalized Proof Tree — the proof of Fig. 12.2 redone with tip nodes Robot(r) and R2D2(s); the node Sees(r, r) forces the substitution {r/s}, so R2D2(s) becomes R2D2(r) and the root becomes Robust(r).]

12.4 Evaluable Predicates

The domain theory includes a number of predicates other than the one occurring in the formula we are trying to prove and other than those that might customarily be used to describe an individual. The situation is analogous to that of using a data base augmented by logical rules. In the latter application, the formulas in the actual data base are "extensional," and those in the logical rules are "intensional." This usage reflects the fact that the predicates in the data base part are defined by their extension—we explicitly list all the tuples satisfying a relation. The logical rules serve to connect the data base predicates with higher-level abstractions that are described (if not defined) by the rules. We typically cannot look up the truth values of formulas containing these intensional predicates; they have to be derived using the rules and the database.

The EBL process assumes something similar. The domain theory is useful for connecting formulas that we might want to prove with those whose truth values can be "looked up" or otherwise evaluated. In the EBL literature, such formulas satisfy what is called the operationality criterion. Finding the new rule corresponds to finding a simpler expression for the formula to be proved in terms only of the evaluable predicates. Perhaps another analogy might be to neural networks: the evaluable predicates correspond to the components of the input pattern vector, and the predicates in the domain theory correspond to the hidden units.
12.5 More General Proofs

Examining the domain theory for our example reveals that an alternative rule might have been: Robot(u) ∧ C3PO(u) ⊃ Robust(u). Such a rule might have resulted if we were given {C3PO(Num6), Robot(Num6), . . .} and proved Robust(Num6). After considering these two examples (Num5 and Num6), the question arises: do we want to generalize the two rules to something like Robot(u) ∧ [C3PO(u) ∨ R2D2(u)] ⊃ Robust(u)? Doing so is an example of what [DeJong & Mooney, 1986] call structural generalization (via disjunctive augmentation).

Adding disjunctions for every alternative proof can soon become cumbersome and destroy any efficiency advantage of EBL. In our example, the efficiency might be retrieved if there were another evaluable predicate, say, Bionic(u), such that the domain theory also contained R2D2(x) ⊃ Bionic(x) and C3PO(x) ⊃ Bionic(x). After seeing a number of similar examples, we might be willing to induce the formula Bionic(u) ⊃ [C3PO(u) ∨ R2D2(u)], in which case the rule with the disjunction could be replaced with Robot(u) ∧ Bionic(u) ⊃ Robust(u).

12.6 Utility of EBL

It is well known in theorem proving that the complexity of finding a proof depends both on the number of formulas in the domain theory and on the depth of the shortest proof. Adding a new rule decreases the depth of the shortest proof, but it also increases the number of formulas in the domain theory. In realistic applications, the added rules will be relevant for some tasks and not for others. Thus, it is unclear whether the overall utility of the new rules will turn out to be positive. (See [Minton, 1990] for an analysis.)

12.7 Applications

There have been several applications of EBL methods, usually with positive utility. We mention two here, namely the formation of macro-operators in automatic plan generation and learning how to control search.

12.7.1 Macro-Operators in Planning

In automatic planning systems, efficiency can sometimes be enhanced by chaining together a sequence of operators into macro-operators. We show an example of a process for creating macro-operators based on techniques explored by [Fikes, et al., 1972]. Referring to Fig. 12.4, consider the problem of finding a plan for a robot in room R1 to fetch a box, B1, by going to an adjacent room, R2, and pushing it
back to R1. The goal for the robot is INROOM(B1, R1), and the facts that are true in the initial state are listed in the figure.

[Figure 12.4: Initial State of a Robot Problem — rooms R1, R2, R3; door D1 between R1 and R2, door D2 between R2 and R3; the robot in R1 and box B1 in R2. Initial state: INROOM(ROBOT, R1), INROOM(B1, R2), CONNECTS(D1, R1, R2), CONNECTS(D2, R2, R3), . . .]

We will construct the plan from a set of STRIPS operators that include:

GOTHRU(d, r1, r2)
Preconditions: INROOM(ROBOT, r1), CONNECTS(d, r1, r2)
Delete list: INROOM(ROBOT, r1)
Add list: INROOM(ROBOT, r2)

PUSHTHRU(b, d, r1, r2)
Preconditions: INROOM(ROBOT, r1), CONNECTS(d, r1, r2), INROOM(b, r1)
Delete list: INROOM(ROBOT, r1), INROOM(b, r1)
Add list: INROOM(ROBOT, r2), INROOM(b, r2)

A backward-reasoning STRIPS system might produce the plan shown in Fig. 12.5. We show there the main goal and the subgoals along a solution path. (The conditions in each subgoal that are true in the initial state are shown underlined.) The preconditions for this plan, true in the initial state, are:

INROOM(ROBOT, R1)
INROOM(B1, R2)
CONNECTS(D1, R1, R2)
CONNECTS(D1, R2, R1)
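The two operators can be written directly as data. Below is a small sketch with states as sets of ground literals; applying an operator checks its preconditions, removes the delete list, and adds the add list. Listing CONNECTS in both directions is this sketch's simplifying assumption.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset
    delete: frozenset
    add: frozenset

    def apply(self, state):
        assert self.preconditions <= state, f"{self.name}: preconditions unmet"
        return (state - self.delete) | self.add

gothru = Operator("GOTHRU(D1,R1,R2)",
                  frozenset({"INROOM(ROBOT,R1)", "CONNECTS(D1,R1,R2)"}),
                  frozenset({"INROOM(ROBOT,R1)"}),
                  frozenset({"INROOM(ROBOT,R2)"}))
pushthru = Operator("PUSHTHRU(B1,D1,R2,R1)",
                    frozenset({"INROOM(ROBOT,R2)", "CONNECTS(D1,R2,R1)", "INROOM(B1,R2)"}),
                    frozenset({"INROOM(ROBOT,R2)", "INROOM(B1,R2)"}),
                    frozenset({"INROOM(ROBOT,R1)", "INROOM(B1,R1)"}))

state = frozenset({"INROOM(ROBOT,R1)", "INROOM(B1,R2)",
                   "CONNECTS(D1,R1,R2)", "CONNECTS(D1,R2,R1)"})
state = pushthru.apply(gothru.apply(state))
print("INROOM(B1,R1)" in state)   # True: the box has been fetched to R1
```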
[Figure 12.5: A Plan for the Robot Problem — backward-chaining from the goal INROOM(B1, R1): PUSHTHRU(B1, d, r1, R1) with the substitutions {R2/r1, D1/d} reduces the goal to INROOM(ROBOT, R2) ∧ CONNECTS(D1, R2, R1) ∧ INROOM(B1, R2), and GOTHRU(d2, r3, R2) with the substitutions {R1/r3, D1/d2} reduces that to conditions true in the initial state. PLAN: GOTHRU(D1, R1, R2), PUSHTHRU(B1, D1, R2, R1).]

Saving this specific plan, valid only for the specific constants it mentions, would not be as useful as saving a more general one. We first generalize these preconditions by substituting variables for constants. We then follow the structure of the specific plan to produce the generalized plan shown in Fig. 12.6, which achieves INROOM(b1, r4). The preconditions for the generalized plan are:

INROOM(ROBOT, r1), CONNECTS(d1, r1, r2), CONNECTS(d2, r2, r4), INROOM(b1, r2)

Note that the generalized plan does not require pushing the box back to the place where the robot started. Another related technique that chains together sequences of operators to form more general ones is the chunking mechanism in Soar [Laird, et al., 1986].
[Figure 12.6: A Generalized Plan — GOTHRU(d1, r1, r2) followed by PUSHTHRU(b1, d2, r2, r4), with preconditions INROOM(ROBOT, r1), CONNECTS(d1, r1, r2), CONNECTS(d2, r2, r4), and INROOM(b1, r2), achieving INROOM(b1, r4).]

12.7.2 Learning Search Control Knowledge

Besides their use in creating macro-operators, EBL methods can be used to improve the efficiency of planning in another way also. Minton proposed using EBL to learn effective ways to control search [Minton, 1988]. His system, called PRODIGY, is a STRIPS-like system that solves planning problems in the blocks world, in a simple mobile-robot world, and in job-shop scheduling. PRODIGY has a domain theory involving both the domain of the problem and a simple (meta) theory about planning. Its meta theory includes statements about whether a control choice—about a subgoal to work on, an operator to apply, etc.—either succeeds or fails. After producing a plan, it analyzes its successful and its unsuccessful choices and attempts to explain them in terms of its domain theory. Using an EBL-like process, it is able to produce useful control rules such as:
IF (AND (CURRENT-NODE node)
        (CANDIDATE-GOAL node (ON x y))
        (CANDIDATE-GOAL node (ON y z)))
THEN (PREFER GOAL (ON y z) TO (ON x y))

PRODIGY keeps statistics on how often these learned rules are used, their savings (in time to find plans), and their cost of application. It saves only the rules whose utility, thus measured, is judged to be high. Minton [Minton, 1990] has shown that there is an overall advantage of using these rules (as against not having any rules and as against hand-coded search-control rules).

12.8 Bibliographical and Historical Remarks

To be added.
Reading. and Rivest. D. [Anderson & Bower. 151160. & Singh.” JACM. G. Drastal. Princeton: Princeton University Press.. Lett. NJ: Erlbaum. 1958.. and Walden.. MA: MIT Press. J. 1957. (eds.. E. [Baum & Haussler.).. T... D. on Innovative Applications of Artiﬁcial Intelligence. Fourth Annual Conf. 1973. 169 . 1992. “SMART: Support Management Automated Reasoning Technology for COMPAQ Customer Service.. Computer Control of Machines and Processes. and Duﬃe. MA: AddisonWesley. 1991] Aha. A. N.. 1973] Anderson. [Anderson. and Albert.. and Haussler. [Bollinger & Duﬃe. [Barto. [Blumer. A.” to appear in Artiﬁcial Intelligence. 1994. vol 24. 1990] Blumer. D.. and Singh. 1994. Computational Learning Theory and Natural Learning Systems.. “InstanceBased Learning Algorithms. T.. 3766. Volume 1: Constraints and Prospects.. CA: AAAI Press.. R. Bradtke. Human Associative Memory.. et al. Bradtke. 1... M. 1987. 1991. Dynamic Programming. “Occam’s Razor. 37780.. S. An Introduction to Multivariate Statistical Analysis. et al. 1957] Bellman. pp.Bibliography [Acorn & Walden.” Machine Learning. [Bellman. 1990. 1987] Blumer.. H.. 1992] Acorn. S.. E. 1989. S. 415442.. “Learning to Act Using RealTime Dynamic Programming. Process.. 6. R. “When Are kNearest Neighbor and Backpropagation Accurate for FeasibleSized Sets of Examples?” in Hanson. 1989] Baum. G. 1994] Baum. pp. pp.” Proc. [Baum. 1988] Bollinger. et al. 1988. Menlo Park..” Info. 1994] Barto. New York: John Wiley. et al. [Blumer.. W. “Learnability and the VapnikChervonenkis Dimension. [Aha. E. R. “What Size Net Gives Valid Generalization?” Neural Computation. 1958] Anderson. J. S. Kibler. Hillsdale. and Bower. Cambridge. A.
Carbonell. T. T. V. R. 14. [Cover & Hart.. pp. “The Convergence of TD(λ) for General λ. San Mateo. T. 9 (pp.). 1993] Buchanan. [Dayan & Sejnowski. Olshen. J. and Ho. 295301. 1962] Brain.” IEEE Trans. and Dietterich. 1967] Cover. Classiﬁcation and Regression Trees. Workshop on Machine Learning. 13. San Francisco. Morgan Kaufmann. A. Nearest Neighbor Pattern Classiﬁcation Techniques.. 1984] Breiman. 1991.” in Machine Learning: An Artiﬁcial Intelligence Approach.. and Sejnowski. et al.. 1988] Cheeseman.. Contract DA 36039 SC78343.. R. 1965. J.170 BIBLIOGRAPHY [Brain... Fifth Intl. P. [Breiman. T. 1:145176. (eds. Stanford.. C. Morgan Kaufmann. 8 (pp. 1983] Carbonell. P.” Report No. [Buchanan & Wilkins. Elec. Menlo Park.. “Nearest Neighbor Pattern Classiﬁcation. SRI International. [Dayan. “ExplanationBased Learning: An Alternative View. 1992.. G. 1994... [Dasarathy. on Information Theory.. Computer Science Department. Readings in Machine Learning. San Francisco: Morgan Kaufmann. June. 913) and No. and Dietterich. 1990] Brent.” Machine Learning. “Learning by Analogy. B. J. 1992] Dayan. IEEE Computer Society Press. 1984. 1988.. A. . E.” Numerical Analysis Project Manuscript NA9003.. P.). 8.. et al.. J. [Brent. June 1962 and September 1962. 2127. D. Applied Optimal Control. 1994] Dayan.. R. Readings in Machine Learning. Readings in Knowledge Acquisition and Learning.. CA.. “Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition. pp. pp 452467. 341362. 326334. EC14. “Graphical Data Processing Research Study and Experimental Investigation. March 1990. [Bryson & Ho 1969] Bryson. CA 94305. 1991] Dasarathy. 1986. 296306. Michalski.. T.. 1990.. L.. Friedman.. and Wilkins. and Mitchell. Y. and Hart. 1986] DeJong.” IEEE Trans.” Machine Learning. and Stone.. P. CA: Wadsworth. 1967. “T D(λ) Converges with Probability 1.. 1965] Cover. Comp.” Machine Learning. 1993.” Proc. [Cover. (eds. San Francisco: Morgan Kaufmann. R.. Stanford University. B. et al. P. 1983. New York: Blaisdell. et al. Reprinted in Shavlik.. “Fast Training Algorithms for MultiLayer Neural Nets. [DeJong & Mooney. Monterey.. et al. [Cheeseman. [Carbonell. “AutoClass: A Bayesian Classiﬁcation System. J. 1990. CA. Reprinted in Shavlik. San Francisco: Morgan Kaufmann.C. T. 310). and Mooney.
1993. O. [Ehrenfeucht. “STATIC: A ProblemSpace Compiler for PRODIGY.. 1973] Duda. April 1966. Report prepared for ONR under Contract 3438(00).. R. Menlo Park: AAAI Press. pp. Process Delay Analyses Using DecisionTree Induction. T. Porter. Comput. 533540. et al.. 1990. MIT Press. Computers. G. of Ninth National Conf. pp. 1982] Efron.. 1993] Etzioni. [Duda. 1992. O. “ErrorCorrecting Output Codes: A General Method for Improving Multiclass Inductive Learning Programs. Sci. Rev. 1991. 1990] Fahlman. Pattern Classiﬁcation and Scene Analysis. CA. 1966.. on A. T. O. 572577. T. Philadelphia: SIAM. et al. San Francisco: Morgan Kaufmann. (ed. [Dietterich. Menlo Park. San Francisco: Morgan Kaufmann. “A Structural Theory of ExplanationBased Learning. H. the Bootstrap and Other Resampling Plans. 1973.” Artiﬁcial Intelligence. B. and Mooney.. Tech. 220232. Seventh Intl. SRI International. Hild. April.. B. “Training a Linear Machine on Mislabeled Patterns. G. 1990] Dietterich. and Fossum.. Report CS9206.. 2. on Elect. and Lebiere. Conf. B. G. R.. S. EC15. vol. Learning. H. 1991] Dietterich.. 1982. P.” SRI Tech.” IEEE Trans. 1990] Dietterich. The Jackknife. pp. 1992] Evans. pp. 1990. 1990. [Duda & Hart. (eds. . 2431.. New York: Wiley. “The CascadeCorrelation Learning Architecture. [Etzioni. [Fahlman & Lebiere. Mach. pp. March.. [Dietterich. 60:1. R.. [Duda & Fossum..” Annu.. 1966] Duda. A... Ninth Nat. AAAI91. “Pattern Classiﬁcation by Iteratively Determined Linear and Piecewise Linear Discriminant Functions. O. et al.I. and Fisher.BIBLIOGRAPHY 171 [Dietterich & Bakiri. Palo Alto: Annual Reviews Inc. [Etzioni. pp. “Machine Learning.” in Touretzky..” in Proc. 1991] Etzioni. O. and Bakiri. C. “A Comparative Study of ID3 and Backpropagation for English TexttoSpeech Mapping.. 1988 Workshop on Computational Learning Theory. Vanderbilt University. “A General Lower Bound on the Number of Examples Needed for Learning.” Proc. and Hart.” Proc. TN. Conf. pp. R.).). [Evans & Fisher.E. 1991...” Proc. San Francisco: Morgan Kaufmann. [Efron.. 524532. D. 1988] Ehrenfeucht. 110120. 1988. Advances in Neural Information Processing Systems. D. 4:255306.... 1966] Duda. 93139. on Artiﬁcial Intelligence. Department of Computer Science. and Bakiri.
D. The Developments in Connectionist Theory. “Quantifying Inductive Bias: AI Learning Algorithms and Valiant’s Learning Framework. 1994. pp 468486.(eds. and Rumelhart. G. on Pattern Recognition. 36:177221. “SKICAT: A Machine Learning System for Automated Cataloging of Large Scale Sky Surveys. [Fisher. P. September 1977. J. L. U. Hillsdale. A. J. 1986..” ACM Trans. March. “The Simulation of Verbal Learning Behavior. “Neural Networks at Work. et al. Reprinted in Shavlik. 1987. I. San Francisco: Morgan Kaufmann. 1986] Gallant. et al. San Francisco: Morgan Kaufmann. Conf. and Dietterich. N. [Gallant.. Tenth Intl... J. N. T. pp.” IEEE Spectrum. et al. [Hammerstrom. San Francisco: Morgan Kaufmann. 1990.” in Eighth International Conf. 2:139172. 2632. 19:121132.. Neural Networks in Artiﬁcial Intelligence. ..). 1972] Fikes. M. “Automating the Analysis and Cataloging of Sky Surveys. [Fikes.. R. 1988] Haussler. M. U.172 BIBLIOGRAPHY [Fayyad.. 849852.. and Nilsson. San Francisco: Morgan Kaufmann. D. 1987.. and Finkel. 1993] Fayyad... and Weir.. 3(3):209226. Hart. R. L. Neuroscience and Connectionist Theory. 1977] Friedman.. 112119.. et al. J. NJ: Erlbaum Associates. 267–283. 1989.. and Djorgovski.. pp 251288.. New York: IEEE. Reprinted in Shavlik. Logical Foundations of Artiﬁcial Intelligence.. pp.” in Fayyad. 96107.. “Knowledge Acquisition via Incremental Conceptual Clustering. [Friedman. Reprinted in Shavlik. “An Algorithm for Finding Best Matches in Logarithmic Expected Time. (For a longer version of this paper see: Fayyad. U. M.” Artiﬁcial Intelligence. E. Readings in Machine Learning. pp. N. [Genesereth & Nilsson. H. 1989] Gluck.. pp.” Artiﬁcial Intelligence. [Haussler. 1987] Genesereth. and Nilsson.. 1993. 1990.. and Dietterich. Weir. S. D. S..” Machine Learning. 1993] Hammerstrom. D. J. on Math. on Machine Learning. Cambridge: The MIT Press. Djorgovski. New York: McGrawHill. June 1993.. 1988.) [Feigenbaum.. Bentley.” Proceedings of the Western Joint Computer Conference. “Learning and Executing Generalized Robot Plans. 1994] Fu. 1990. Software. 1961] Feigenbaum. pp. 1996. pp. T.. T. San Francisco: Morgan Kaufmann.” in Proc. 1972. Readings in Machine Learning. N. [Gluck & Rumelhart. Readings in Machine Learning. 1961. Chapter 19. 471ﬀ. Advances in Knowledge Discovery and Data Mining. 1987] Fisher. and Dietterich. “Optimal Linear Discriminants.. [Fu. A.
Lecture Notes. K.) . pp. 545. “Probably Approximately Correct Learning. “Generalizing Version Spaces. 1993] Kolodner. on Artiﬁcial Intelligence and Statistics. MA: MIT Press. CaseBased Reasoning. Learning in Embedded Systems. 1994. Machine Learning: An Artiﬁcial Intelligence Approach. J. 1966] Hunt. 1995] John.. FL. 11011108. 1987. The Organization of Behaviour. Studies in the Sciences of Complexity. 1991] Hertz. Santa Fe Inst. D. Genetic Programming: On the Programming of Computers by Means of Natural Selection. Krogh. MA. 1994] Hirsh. New York: John Wiley. A. Conf. & Stone. et al. P. & Palmer. Genetic Programming II: Automatic Discovery of Reusable Programs. K. 1966. R. 1991. H. 1986] Holland. Cambridge. Cambridge.. Krogh. “Robust Linear Discriminant Trees. 1995.. 1994] Kohavi. O. J. 1949] Hebb. 1992.. Cambridge. Carbonell. L. [John. [Holland. T. (Second edition printed in 1992 by MIT Press.” Machine Learning. Cambridge.. Lauderdale. and Mitchell. J. 1990. of the Conf... [Kohavi. on AI. [Hertz.. 1. vol. J. 1994..” Proc. 1993.” Proc. 1994] Koza. [Hebb... and Stone. P. MA: MIT Press. Marin. San Francisco: Morgan Kaufmann. January. and Palmer. MA: MIT Press. 1994. [Kaelbling. 1975] Holland. “Escaping Brittleness.. 1993] Kaelbling. [Hirsh. The Possibilities of GeneralPurpose Learning Algorithms Applied to Parallel RuleBased Systems... 1990] Haussler. E.. [Hunt. Volume 2. Marin. “BottomUp Induction of Oblivious ReadOnce Decision Graphs.” In Michalski.. 1987] Jabbour. G.BIBLIOGRAPHY 173 [Haussler. chapter 20.” Proc.. J. R.. [Kolodner. [Koza. San Francisco.” Proc. “ALFA: Automated Load Forecasting Assistant. 1949. J. et al. . Ann Arbor: The University of Michigan Press. 1993. J. Experiments in Induction. New York: AddisonWesley. Ft. of the IEEE Pwer Engineering Society Summer Meeting... D. San Francisco: Morgan Kaufmann. R. 1975. 1986. MA: MIT Press. Adaptation in Natural and Artiﬁcial Systems. J. 1992] Koza. Cambridge. 17.. (eds.. of European Conference on Machine Learning (ECML94). Eighth Nat. [Jabbour. [Koza.) [Holland. Introduction to the Theory of Neural Computation. H. CA. New York: Academic Press.
” Bulletin of Mathematical Biophysics. W. 1988. 1994. P.. S. San Francisco: Morgan Kaufmann. pp.. [Lin. (eds. and Rivest. 1992.” Network. Boston.174 BIBLIOGRAPHY [Laird. 381414. 1992. 1988.. Inductive Logic Proc z c z gramming. Conf. on Knowledge Engineering. “Some Directions in Machine Intelligence. 1986] Laird.. Scotland. “Areas of Application for Machine Learning. Volume 1: Constraints and Prospects. 1993.. 1994] Lavraˇ.. MA: MIT Press.. 1992] Michie. 1993. Rosenbloom. M. B. N. Planning. [Littlestone. [Lavraˇ & Dˇeroski. and Newell. D. Reprinted in Buchanan.” Machine Learning. MA. Chichester. L. 1993] Lin. 1988] Minton.. Readings in Knowledge Acquisition and Learning. 1943. L. S. pp. Kluwer Academic Publishers. 1996] Langley. San Francisco: Morgan Kaufmann. “How Fast Can a Thresha a old Gate Learn?. A..” in Hanson. 1146. S. W. Elements of Machine Learning. “Chunking in Soar: The Anatomy of a General Learning Mechanism. 1993] Marchand. “Automatic Programming of BehaviorBased Robots Using Reinforcement Learning. and Golea. Tenth Intl. M. pp. 1992] Langley. H. Cambridge. Morgan Kaufmann. 1993. on Machine Learning. 518535. [Lin. J.. S. G. 4:6785... et al. Symp. “A Logical Calculus of the Ideas Immanent in Nervous Activity.. Sevilla. Vol. of Fifth Int’l. J. 8. and Teaching. 1.” Machine Learning.. [Marchand & Golea.” Artiﬁcial Intelligence.” Proc.. “On Learning Simple Neural Concepts: From Halfspace Intersections to Neural Decision Lists. England: Ellis Horwood.). [Mahadevan & Connell. and Dˇeroski.. 5. “Learning Quickly When Irrelevant Attributes Abound: A New LinearThreshold Algorithm.. 1992] Mahadevan.. pp. P. [Michie. R.. [Langley. 182189. [Maass & Tur´n. Drastal. pp.. and Pitts. S.. .” unpublished manuscript. Glasgow. 1992. 1986. The Turing Institute. 311365.). G. “SelfImproving Reactive Agents Based on Reinforcement Learning.. CA. San Francisco. 1994] Maass. 1943] McCulloch. 1988] Littlestone. 1992] Lin. 293321.” Machine Learning 2: 285318. N. D. 55.. (eds. [Langley. 1994. 115133. and Wilkins.. 1996. W. Computational Learning Theory and Natural Learning Systems. Chicago: University of Chicago Press. P. pp. and Connell. 1992. [McCulloch & Pitts..” Proc. [Minton. “Scaling Up Reinforcement Learning for Robot Control. Learning Search Control Knowledge: An ExplanationBased Approach. and Tur´n.
1991] Muggleton. 435451. San Francisco: Morgan Kaufmann. S.. New York: Wiley. Eﬃcient Memorybased Learning for Robot Control. (eds. Machine Learning: A Theoretical Approach. University of Cambridge. Readings in Machine Learning. 1992. A. 1988. and Lippman. 295318. “An Empirical Investigation of Brute Force to Choose Features. Threshold Logic and its Applications.. 42. 1988] Mueller. and Petsche. M.. 1:1.” New Generation Computing. 1982] Mitchell.” in Moody.. Computer Laboratory. PhD. 1971. 1992] Muggleton. 209. San Francisco: Morgan Kaufmann. 1991. Robust Adaptive Control by Learning Only Forward Models.. C. [Moore & Atkeson. San Francisco: Morgan Kaufmann. T. pp.BIBLIOGRAPHY 175 [Minton. J.). A.. [Moore. pp. 1982. 1993] Moore. and Dietterich. [Mueller & Page. P. “Quantitative Results Concerning the Utility of ExplanationBased Learning. [Moore. R. 1994] Moore. R. 1992. 8. et al. 96–107.).” Machine Learning. “Prioritized Sweeping: Reinforcement Learning with Less Data and Less Time. October. [Natarjan. and Dietterich. 573587. . Vol. (eds. [Muggleton... S. 18:203226. Judd. 1986.. Technical Report No.. “Inductive Logic Programming. London: Academic Press. et al. Reprinted in Shavlik. Advances in Neural Information Processing Systems 4.. pp. and Function Approximators. San Francisco: Morgan Kaufmann. “Fast. T. and Atkeson. Symbolic Computing with Lisp and Prolog. T. A. and Page. 1990. 1990. R. and Johnson. 1991] Natarajan. 13. B.. [Moore. D. 1992] Moore. 1990..” Artiﬁcial Intelligence. Hanson. Cambridge: MIT Press. J. S. 1990] Moore. Smoothers. [Muggleton. pp. Thesis.. [Muroga. 1994. T. A. 1986] Mitchell. San Francisco: Morgan Kaufmann. “Generalization as Search. Reprinted in Shavlik. S.. Hill. and Dietterich.. Readings in Machine Learning. 363392.... pp. Readings in Machine Learning.. New York: John Wiley & Sons.. et al. Computational Learning Theory and Natural Learning Systems.” Artiﬁcial Intelligence. T. 1991. Inductive Logic Programming. [Mitchell. [Mitchell.. 3.. S. 1990] Minton. J.. Reprinted in Shavlik. J. pp.. J.. T. S. 1990. 1971] Muroga.” in Hanson. 103130. S. “ExplanationBased Generalization: A Unifying View. 1993. W.” Machine Learning. 1990..
[Nilsson, 1965] Nilsson, N. J., “Theoretical and Experimental Investigations in Trainable Pattern-Classifying Systems,” Tech. Report No. RADC-TR-65-257, Final Report on Contract AF30(602)-3448, Rome Air Development Center (Now Rome Laboratories), Griffiss Air Force Base, New York, September, 1965.
[Nilsson, 1990] Nilsson, N. J., The Mathematical Foundations of Learning Machines, San Francisco: Morgan Kaufmann, 1990. (This book is a reprint of Learning Machines: Foundations of Trainable Pattern-Classifying Systems, New York: McGraw-Hill, 1965.)
[Oliver, Dowe, & Wallace, 1992] Oliver, J., Dowe, D., and Wallace, C., “Inferring Decision Graphs Using the Minimum Message Length Principle,” Proc. 1992 Australian Artificial Intelligence Conference, 1992.
[Pagallo & Haussler, 1990] Pagallo, G., and Haussler, D., “Boolean Feature Discovery in Empirical Learning,” Machine Learning, 5, pp. 71–99, March 1990.
[Pazzani & Kibler, 1992] Pazzani, M., and Kibler, D., “The Utility of Knowledge in Inductive Learning,” Machine Learning, 9, 57–94, 1992.
[Peterson, 1961] Peterson, W., Error Correcting Codes, New York: John Wiley, 1961.
[Pomerleau, 1991] Pomerleau, D., “Rapidly Adapting Artificial Neural Networks for Autonomous Navigation,” in Lippmann, P., et al. (eds.), Advances in Neural Information Processing Systems, 3, pp. 429–435, San Francisco: Morgan Kaufmann, 1991.
[Pomerleau, 1993] Pomerleau, D., Neural Network Perception for Mobile Robot Guidance, Boston: Kluwer Academic Publishers, 1993.
[Quinlan, 1986] Quinlan, J. Ross, “Induction of Decision Trees,” Machine Learning, 1: 81–106, 1986. Reprinted in Shavlik, J., and Dietterich, T., (eds.), Readings in Machine Learning, pp. 57–69, San Francisco: Morgan Kaufmann, 1990.
[Quinlan, 1987] Quinlan, J. Ross, “Generating Production Rules from Decision Trees,” in IJCAI-87: Proceedings of the Tenth Intl. Joint Conf. on Artificial Intelligence, pp. 304–307, San Francisco: Morgan Kaufmann, 1987.
[Quinlan & Rivest, 1989] Quinlan, J. Ross, and Rivest, R., “Inferring Decision Trees Using the Minimum Description Length Principle,” Information and Computation, 80: 227–248, March, 1989.
[Quinlan, 1990] Quinlan, J. Ross, “Learning Logical Definitions from Relations,” Machine Learning, 5, 239–266, 1990.
[Quinlan, 1993] Quinlan, J. Ross, C4.5: Programs for Machine Learning, San Francisco: Morgan Kaufmann, 1993.
[Quinlan, 1994] Quinlan, J. Ross, “Comparing Connectionist and Symbolic Learning Methods,” in Hanson, S., Drastal, G., and Rivest, R., (eds.), Computational Learning Theory and Natural Learning Systems, Volume 1: Constraints and Prospects, pp. 445–456, Cambridge, MA: MIT Press, 1994.
[Ridgway, 1962] Ridgway, W., An Adaptive Logic System with Generalizing Properties, PhD thesis, Tech. Rep. 1556-1, Stanford Electronics Labs., Stanford, CA, April 1962.
[Rissanen, 1978] Rissanen, J., “Modeling by Shortest Data Description,” Automatica, 14: 465–471, 1978.
[Rivest, 1987] Rivest, R. L., “Learning Decision Lists,” Machine Learning, 2, 229–246, 1987.
[Rosenblatt, 1958] Rosenblatt, F., Principles of Neurodynamics, Washington: Spartan Books, 1961.
[Ross, 1983] Ross, S., Introduction to Stochastic Dynamic Programming, New York: Academic Press, 1983.
[Rumelhart, Hinton, & Williams, 1986] Rumelhart, D., Hinton, G., and Williams, R., “Learning Internal Representations by Error Propagation,” in Rumelhart, D., and McClelland, J., (eds.), Parallel Distributed Processing, Vol. 1, pp. 318–362, Cambridge, MA: MIT Press, 1986.
[Russell & Norvig, 1995] Russell, S., and Norvig, P., Artificial Intelligence: A Modern Approach, Englewood Cliffs, NJ: Prentice Hall, 1995.
[Samuel, 1959] Samuel, A., “Some Studies in Machine Learning Using the Game of Checkers,” IBM Journal of Research and Development, 3: 211–229, July 1959.
[Schwartz, 1993] Schwartz, A., “A Reinforcement Learning Method for Maximizing Undiscounted Rewards,” Proc. Tenth Intl. Conf. on Machine Learning, pp. 298–305, San Francisco: Morgan Kaufmann, 1993.
[Sejnowski, Koch, & Churchland, 1988] Sejnowski, T., Koch, C., and Churchland, P., “Computational Neuroscience,” Science, 241: 1299–1306, 1988.
[Shavlik & Dietterich, 1990] Shavlik, J., and Dietterich, T., (eds.), Readings in Machine Learning, San Francisco: Morgan Kaufmann, 1990.
[Shavlik, Mooney, & Towell, 1991] Shavlik, J., Mooney, R., and Towell, G., “Symbolic and Neural Learning Algorithms: An Experimental Comparison,” Machine Learning, 6, pp. 111–143, 1991.
[Sutton & Barto, 1987] Sutton, R., and Barto, A., “A Temporal-Difference Model of Classical Conditioning,” in Proceedings of the Ninth Annual Conference of the Cognitive Science Society, Hillsdale, NJ: Erlbaum, 1987.
[Sutton, 1988] Sutton, R., “Learning to Predict by the Methods of Temporal Differences,” Machine Learning, 3: 9–44, 1988.
[Sutton, 1990] Sutton, R., “Integrated Architectures for Learning, Planning, and Reacting Based on Approximating Dynamic Programming,” Proc. of the Seventh Intl. Conf. on Machine Learning, pp. 216–224, San Francisco: Morgan Kaufmann, 1990.
[Taylor, Michie, & Spiegelhalter, 1994] Taylor, C., Michie, D., and Spiegelhalter, D., Machine Learning, Neural and Statistical Classification, Paramount Publishing International, 1994.
[Tesauro, 1992] Tesauro, G., “Practical Issues in Temporal Difference Learning,” Machine Learning, 8, nos. 3/4, pp. 257–277, 1992.
[Towell & Shavlik, 1992] Towell, G., and Shavlik, J., “Interpretation of Artificial Neural Networks: Mapping Knowledge-Based Neural Networks into Rules,” in Moody, J., Hanson, S., and Lippmann, R., (eds.), Advances in Neural Information Processing Systems, 4, pp. 977–984, San Francisco: Morgan Kaufmann, 1992.
[Towell, Shavlik, & Noordewier, 1990] Towell, G., Shavlik, J., and Noordewier, M., “Refinement of Approximate Domain Theories by Knowledge-Based Artificial Neural Networks,” Proc. Eighth Natl. Conf. on Artificial Intelligence, pp. 861–866, 1990.
[Unger, 1989] Unger, S., The Essence of Logic Circuits, Englewood Cliffs, NJ: Prentice-Hall, 1989.
[Utgoff, 1989] Utgoff, P., “Incremental Induction of Decision Trees,” Machine Learning, 4: 161–186, Nov., 1989.
[Valiant, 1984] Valiant, L., “A Theory of the Learnable,” Communications of the ACM, Vol. 27, pp. 1134–1142, 1984.
[Vapnik & Chervonenkis, 1971] Vapnik, V., and Chervonenkis, A., “On the Uniform Convergence of Relative Frequencies,” Theory of Probability and its Applications, Vol. 16, No. 2, pp. 264–280, 1971.
[Various Editors, 1989–1994] Advances in Neural Information Processing Systems, vols. 1 through 6, San Francisco: Morgan Kaufmann, 1989–1994.
[Watkins & Dayan, 1992] Watkins, C., and Dayan, P., “Technical Note: Q-Learning,” Machine Learning, 8, 279–292, 1992.
[Watkins, 1989] Watkins, C., Learning From Delayed Rewards, PhD Thesis, University of Cambridge, England, 1989.
[Weiss & Kulikowski, 1991] Weiss, S., and Kulikowski, C., Computer Systems that Learn, San Francisco: Morgan Kaufmann, 1991.
[Werbos, 1974] Werbos, P., Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences, PhD Thesis, Harvard University, 1974.
[Widrow, 1962] Widrow, B., “Generalization and Storage in Networks of Adaline Neurons,” in Yovits, Jacobi, and Goldstein (eds.), Self-organizing Systems—1962, pp. 435–461, Washington, DC: Spartan Books, 1962.
[Widrow & Lehr, 1990] Widrow, B., and Lehr, M., “30 Years of Adaptive Neural Networks: Perceptron, Madaline and Backpropagation,” Proc. IEEE, vol. 78, no. 9, pp. 1415–1442, September, 1990.
[Widrow & Stearns, 1985] Widrow, B., and Stearns, S., Adaptive Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, 1985.
[Winder, 1961] Winder, R., “Single Stage Threshold Logic,” Proc. of the AIEE Symp. on Switching Circuits and Logical Design, paper CP60-1261, pp. 321–332, 1961.
[Winder, 1962] Winder, R., Threshold Logic, PhD Dissertation, Princeton University, Princeton, NJ, 1962.
[Wnek, et al., 1990] Wnek, J., et al., “Comparing Learning Paradigms via Diagrammatic Visualization,” Proc. Fifth Intl. Symp. on Methodologies for Intelligent Systems, pp. 428–437, 1990. (Also Tech. Report MLI90-2, University of Illinois at Urbana-Champaign.)