MATLAB Machine Learning Recipes: A Problem-Solution Approach
Third Edition
Michael Paluszek
Stephanie Thomas
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material
is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, repro-
duction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic
adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
Trademarked names, logos, and images may appear in this book. Rather than use a trademark symbol with every
occurrence of a trademarked name, logo, or image we use the names, logos, and images only in an editorial fashion
and to the benefit of the trademark owner, with no intention of infringement of the trademark.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not
identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary
rights.
While the advice and information in this book are believed to be true and accurate at the date of publication, neither
the authors nor the editors nor the publisher can accept any legal responsibility for any errors or omissions that may
be made. The publisher makes no warranty, express or implied, with respect to the material contained herein.
Managing Director, Apress Media LLC: Welmoed Spahr
Acquisitions Editor: Celestin Suresh John
Development Editor: Laura Berendson
Coordinating Editor: Mark Powers
Introduction XXI
CONTENTS
2.1.6 Datastore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.1.7 Tall Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.1.8 Sparse Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.1.9 Tables and Categoricals . . . . . . . . . . . . . . . . . . . . . . . . 30
2.1.10 Large MAT-Files . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.2 Initializing a Data Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.2.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
2.3 mapreduce on an Image Datastore . . . . . . . . . . . . . . . . . . . . . . . 36
2.3.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.3.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.3.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.4 Processing Table Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.4.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.4.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.4.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.5 String Concatenation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.5.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.5.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.5.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.6 Arrays of Strings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.6.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.6.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.6.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.7 Substrings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.7.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.7.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.7.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.8 Reading an Excel Spreadsheet into a Table . . . . . . . . . . . . . . . . . . . 44
2.8.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.8.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.8.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.9 Accessing ChatGPT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.9.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.9.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.9.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
2.10 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3 MATLAB Graphics 49
3.1 2D Line Plots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.1.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.1.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.1.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.2 General 2D Graphics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.2.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.2.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.2.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.3 Custom Two-Dimensional Diagrams . . . . . . . . . . . . . . . . . . . . . . 54
3.3.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.3.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.3.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.4 Three-Dimensional Box . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.4.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.4.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.4.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.5 Draw a 3D Object with a Texture . . . . . . . . . . . . . . . . . . . . . . . . 59
3.5.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.5.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.5.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.6 General 3D Graphics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.6.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.6.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.6.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.7 Building a GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.7.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.7.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.7.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.8 Animating a Bar Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.8.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.8.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.8.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.9 Drawing a Robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.9.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.9.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.9.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.10 Importing a Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.10.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.10.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.10.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.11 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
4 Kalman Filters 85
4.1 Gaussian Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.2 A State Estimator Using a Linear Kalman Filter . . . . . . . . . . . . . . . . 87
4.2.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.2.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.2.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.3 Using the Extended Kalman Filter for State Estimation . . . . . . . . . . . . 106
4.3.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.3.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
4.3.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.4 Using the UKF for State Estimation . . . . . . . . . . . . . . . . . . . . . . 111
4.4.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
4.4.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
4.4.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
4.5 Using the UKF for Parameter Estimation . . . . . . . . . . . . . . . . . . . . 117
4.5.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
4.5.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.5.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
4.6 Range to a Car . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.6.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.6.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.6.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
4.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
5.5 Ship Steering: Implement Gain Scheduling for Steering Control of a Ship . . 145
5.5.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.5.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.5.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
5.6 Spacecraft Pointing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
5.6.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
5.6.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
5.6.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
5.7 Direct Adaptive Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.7.1 Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.7.2 Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.7.3 How It Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
5.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Bibliography 431
Index 435
About the Authors
About the Technical Reviewer
Introduction
1. Autonomous cars: Machine learning is used in almost every aspect of car control systems.
2. Plasma physicists use machine learning to help guide experiments on fusion reactors.
TAE Technologies has used it with great success in guiding fusion experiments. The
Princeton Plasma Physics Laboratory (PPPL) has used it for the National Spherical Torus
Experiment to study a promising candidate for a nuclear fusion power plant.
5. Law enforcement and others use it for facial recognition. Several crimes have been solved
using facial recognition!
1. MATLAB functions
2. MATLAB scripts
3. HTML help
The MATLAB scripts implement all of the examples in this book. The functions encapsulate the
algorithms. Many functions have built-in demos. Just type the function name in the command
window, and it will execute the demo. The demo is usually encapsulated in a subfunction. You
can copy out this code for your demos and paste it into a script. For example, type the function
name PlotSet into the command window, and the plot in Figure 1 will appear.
>> PlotSet

[Figure 1: Output of the PlotSet demo — two stacked 2D line plots, cos (top) and sin (bottom), of y versus x from 0 to 1000, each with lines labeled A and B.]
You can use these demos to start your scripts. Some functions, like right-hand-side functions
for numerical integration, don’t have demos. If you type a function name at the command line
that doesn’t have a built-in demo, you will get an error as in the code snippet below.
>> RHSAutomobileXY
Error using RHSAutomobileXY (line 17)
A built-in demo is not available.
The toolbox is organized according to the chapters in this book. The folder names are
Chapter_01, Chapter_02, etc. In addition, there is a General folder with functions that support
the rest of the toolbox. You will also need the open source package GLPK (GNU Linear
Programming Kit) to run some of the code. Nicolo Giorgetti has written a MATLAB MEX
interface to GLPK that is available on SourceForge and included with this toolbox. The interface
consists of
1. glpk.m
3. GLPKTest.m
CHAPTER 1

Overview
1.1 Introduction
Machine Learning is a field in computer science where data is used to predict, or respond to,
future data. It is closely related to the fields of pattern recognition, computational statistics, and
artificial intelligence. The data may be historical or updated in real time. Machine learning is
important in areas like facial recognition, spam filtering, content generation, and other areas
where it is not feasible, or even possible, to write algorithms to perform a task.
For example, early attempts at filtering junk emails had the user write rules to determine
what was junk or spam. Your success depended on your ability to correctly identify the attributes
of the message that would categorize an email as junk, such as a sender address or words in the
subject, and the time you were willing to spend to tweak your rules. This was only moderately
successful as junk mail generators had little difficulty anticipating people’s handmade rules.
Modern systems use machine learning techniques with much greater success. Most of us are
now familiar with the concept of simply marking a given message as “junk” or “not junk” and
take for granted that the email system can quickly learn which features of these emails identify
them as junk and prevent them from appearing in our inbox. This could now be any combination
of IP or email addresses and words and phrases in the subject or body of the email, with a variety
of matching criteria. Note how the machine learning in this example is data driven, autonomous,
and continuously updating itself as you receive emails and flag them. However, even today, these
systems are not completely successful since they do not yet understand the “meaning” of the
text that they are processing.
Content generation is an evolving area. By training engines over massive data sets, the
engines can generate content such as music scores, computer code, and news articles. This has
the potential to revolutionize many areas that have been exclusively handled by people.
In a more general sense, what does machine learning mean? Machine learning can mean
using machines (computers and software) to gain meaning from data. It can also mean giving
machines the ability to learn from their environment. Machines have been used to assist humans
for thousands of years. Consider a simple lever, which can be fashioned using a rock and a length
of wood, or an inclined plane. Both of these machines perform useful work and assist people,
but neither can learn. Both are limited by how they are built. Once built, they cannot adapt to
changing needs without human interaction.
© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2024
M. Paluszek, S. Thomas, MATLAB Machine Learning Recipes,
https://doi.org/10.1007/978-1-4842-9846-6_1
Machine learning involves using data to create a model that can be used to solve a problem.
The model can be explicit, in which case the machine learning algorithm adjusts the model’s
parameters, or the data can form the model. The data can be collected once and used to train a
machine learning algorithm, which can then be applied. For example, ChatGPT scrapes textual
data from the Internet to allow it to generate text based on queries. An adaptive control system
measures inputs and command responses to those inputs to update parameters for the control
algorithm.
In the context of the software we will be writing in this book, machine learning refers to
the process by which an algorithm converts the input data into parameters it can use when
interpreting future data. Many of the processes used to mechanize this learning derive from
optimization techniques and, in turn, are related to the classic field of automatic control. In
the remainder of this chapter, we will introduce the nomenclature and taxonomy of machine
learning systems.
1.2.1 Data
All learning methods are data driven. Sets of data are used to train the system. These sets may
be collected and edited by humans or gathered autonomously by other software tools. Control
systems may collect data from sensors as the systems operate and use that data to identify pa-
rameters or train the system. Content generation systems scour the Internet for information. The
data sets may be very large, and it is the explosion of data storage infrastructure and available
databases that is largely driving the growth in machine learning software today. It is still true that
a machine learning tool is only as good as the data used to create it, and the selection of training
data is practically a field in itself. Selection of data for many systems is highly automated.
NOTE When collecting data for training, one must be careful to ensure that the time
variation of the system is understood. If the structure of a system changes with time, it may be
necessary to discard old data before training the system. In automatic control, this is sometimes
called a forgetting factor in an estimator.
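The note above can be made concrete. The following is a minimal, hypothetical sketch (shown in Python for brevity; the book's toolbox itself is written in MATLAB) of a one-parameter recursive least-squares estimator whose forgetting factor `lam` discounts old data:

```python
# Recursive least-squares estimate of a single parameter theta in y = theta*u,
# with a forgetting factor lam < 1 that down-weights old data.
def rls_step(theta, P, u, y, lam=0.95):
    # Gain: how strongly this measurement corrects the estimate
    k = P * u / (lam + u * P * u)
    theta = theta + k * (y - theta * u)   # update with the prediction error
    P = (P - k * u * P) / lam             # covariance update; lam keeps P from vanishing
    return theta, P

theta, P = 0.0, 100.0       # initial guess and (large) initial covariance
for u, y in [(1.0, 2.1), (2.0, 3.9), (1.5, 3.0), (0.5, 1.05)]:
    theta, P = rls_step(theta, P, u, y)
# theta converges toward 2, the slope implied by the data
```

With `lam = 1` the estimator weights all data equally; values below 1 let the estimate track a parameter that changes slowly with time, which is exactly the "forgetting" behavior described above.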
1.2.2 Models
Models are often used in learning systems. A model provides a mathematical framework for
learning. A model is human-derived and based on human observations and experiences. For
example, a model of a car, seen from above, might be that it is rectangular with dimensions that
fit within a standard parking spot. Models are usually thought of as human-derived and provide
a framework for machine learning. However, some forms of machine learning develop their
models without a human-derived structure.
1.2.3 Training
A system that maps an input to an output needs training to do this in a useful way. Just as
people need to be trained to perform tasks, machine learning systems need to be trained. Training
is accomplished by giving the system an input and the corresponding output and modifying
the structure (models or data) in the learning machine so that mapping is learned. In some ways,
this is like curve fitting or regression. If we have enough training pairs, then the system should
be able to produce correct outputs when new inputs are introduced. For example, if we give
a face recognition system thousands of cat images and tell it that those are cats, we hope that
when it is given new cat images it will also recognize them as cats. Problems can arise when you
don’t give it enough training sets, or the training data is not sufficiently diverse, for instance,
identifying a long-haired cat or hairless cat when the training data is only of short-haired cats.
A diversity of training data is required for a functioning algorithm.
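The analogy to curve fitting can be made concrete. As a minimal sketch (in Python; the data and names are invented for illustration), "training" a straight-line model on input/output pairs is just least-squares regression:

```python
# Training as curve fitting: learn slope m and intercept b of y = m*x + b
# from input/output training pairs, by ordinary least squares.
def fit_line(pairs):
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

train = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.0)]   # roughly y = 2x + 1
m, b = fit_line(train)
prediction = m * 4 + b    # respond to a new, unseen input
```

Given enough (and sufficiently diverse) training pairs, the fitted parameters generalize to inputs that were never in the training set, which is the point of the paragraph above.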
Supervised Learning
Supervised learning means that specific training sets of data are applied to the system. The
learning is supervised in that the “training sets” are human-derived. It does not necessarily
mean that humans are actively validating the results. The process of classifying the systems’
outputs for a given set of inputs is called “labeling.” That is, you explicitly say which results are
correct or which outputs are expected for each set of inputs.
The process of generating training sets can be time-consuming. Great care must be taken
to ensure that the training sets will provide sufficient training so that when real-world data is
collected, the system will produce correct results. They must cover the full range of expected
inputs and desired outputs. The training is followed by test sets to validate the results. If the
results aren’t good, then the test sets are cycled into the training sets, and the process is repeated.
A human example would be a ballet dancer trained exclusively in classical ballet technique.
If they were then asked to dance a modern dance, the results might not be as good as required
because the dancer did not have the appropriate training sets; their training sets were not suffi-
ciently diverse.
Unsupervised Learning
Unsupervised learning does not utilize training sets. It is often used to discover patterns in data
for which there is no “right” answer. For example, if you used unsupervised learning to train
a face identification system, the system might cluster the data in sets, some of which might be
faces. Clustering algorithms are generally examples of unsupervised learning. The advantage
of unsupervised learning is that you can learn things about the data that you might not know in
advance. It is a way of finding hidden structures in data.
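As a sketch of the clustering idea (in Python, with made-up scalar data), a bare-bones one-dimensional k-means finds group structure without being given any labels:

```python
# A minimal 1-D k-means sketch: cluster scalar data into k groups with no labels.
def kmeans_1d(data, centers, iters=10):
    for _ in range(iters):
        # Assignment step: each point joins its nearest center
        groups = [[] for _ in centers]
        for x in data:
            i = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            groups[i].append(x)
        # Update step: each center moves to the mean of its group
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
centers = kmeans_1d(data, [0.0, 5.0])
# centers end up near 1.0 and 10.0 -- two clusters found without any labels
```

Nothing in the data says "cluster one" or "cluster two"; the grouping emerges from the data itself, which is what makes the learning unsupervised.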
Semi-supervised Learning
With this approach, some of the data are in the form of labeled training sets, and other data are
not [12]. Typically, only a small amount of the input data is labeled, while most are not, as the
labeling may be an intensive process requiring a skilled human. The small set of labeled data is
leveraged to interpret the unlabeled data.
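A minimal sketch of the idea (in Python; the labels and data are invented): a handful of labeled points can seed a nearest-neighbor pass that propagates labels to the unlabeled data:

```python
# Semi-supervised sketch: a few labeled points seed a 1-nearest-neighbor
# "self-training" pass that propagates labels to the unlabeled points.
def self_train(labeled, unlabeled):
    labeled = dict(labeled)          # {x: label}
    # Visit unlabeled points closest to labeled ones first
    for x in sorted(unlabeled, key=lambda x: min(abs(x - z) for z in labeled)):
        # Label each point with its nearest already-labeled neighbor
        nearest = min(labeled, key=lambda z: abs(x - z))
        labeled[x] = labeled[nearest]
    return labeled

labels = self_train({0.0: "low", 10.0: "high"}, [1.0, 2.0, 8.5, 9.5])
# 1.0 and 2.0 inherit "low"; 8.5 and 9.5 inherit "high"
```

Two human-labeled points here do the work of six, which is the leverage that makes semi-supervised learning attractive when labeling is expensive.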
Online Learning
The system is continually updated with new data [12]. This is called “online” because many of
the learning systems use data collected while the system is operating. It could also be called
recursive learning. It can be beneficial to periodically “batch” process data used up to a given
time and then return to the online learning mode. Spam filtering systems, for example, collect
data from incoming emails and continuously update their filters. Generative deep learning
systems like ChatGPT employ online learning at a massive scale.
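The spam filtering case might be sketched as follows (in Python; the scoring rule is a deliberately crude invention for illustration, not how production filters work):

```python
# Online learning sketch: a spam filter that updates word counts each time
# the user flags a message, so the model improves continuously in operation.
from collections import Counter

class OnlineSpamFilter:
    def __init__(self):
        self.spam = Counter()
        self.ham = Counter()

    def flag(self, words, is_spam):
        # One incremental update per user action -- no batch retraining
        (self.spam if is_spam else self.ham).update(words)

    def spam_score(self, words):
        # Crude score: fraction of words seen more often in spam than ham
        hits = sum(1 for w in words if self.spam[w] > self.ham[w])
        return hits / len(words) if words else 0.0

f = OnlineSpamFilter()
f.flag(["win", "money", "now"], True)
f.flag(["meeting", "at", "noon"], False)
score = f.spam_score(["win", "money"])   # close to 1.0 for spam-like text
```

Each `flag` call is one step of online learning: the model changes a little with every new example, while the system keeps operating.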
[Figure 1.1: A learning machine that senses the environment and stores data in memory. Diagram: measurements from the environment feed a learning block, which supplies parameters to the machine; the machine's actions feed back to both the environment and the learning block.]
Note that the machine produces output in the form of actions. A copy of the actions may
be passed to the learning system so that it can separate the effects of the machine’s actions
from those of the environment. This is akin to a feedforward control system, which can result
in improved performance.
A few examples will clarify the diagram. We will discuss a medical example, a security
system, and spacecraft maneuvering.
A doctor might want to diagnose diseases more quickly. They would collect data on tests
on patients and then collate the results. Patient data might include age, height, weight, historical
data like blood pressure readings and medications prescribed, and exhibited symptoms. The
machine learning algorithm would detect patterns so that when new tests were performed on a
patient, the machine learning algorithm would be able to suggest diagnoses or additional tests to
narrow down the possibilities. As the machine learning algorithm was used, it would, hopefully,
get better with each success or failure. Of course, the definition of success or failure is fuzzy. In
this case, the environment would be the patients themselves. The machine would use the data
to generate actions, which would be new diagnoses. This system could be built in two ways.
In the supervised learning process, test data and known correct diagnoses are used to train the
machine. In an unsupervised learning process, the data would be used to generate patterns that
might not have been known before, and these could lead to diagnosing conditions that would
normally not be associated with those symptoms.
A security system might be put into place to identify faces. The measurements are camera
images of people. The system would be trained with a wide range of face images taken from
multiple angles. The system would then be tested with these known persons and its success rate
validated. Those that are in the database memory should be readily identified, and those that are
not should be flagged as unknown. If the success rate was not acceptable, more training might
be needed, or the algorithm itself might need to be tuned. This type of face recognition is now
common, used in macOS's “Faces” feature in Photos, Face ID on the iPhone X, and Facebook
when “tagging” friends in photos.
For precision maneuvering of a spacecraft, the inertia of the spacecraft needs to be known.
If the spacecraft has an inertial measurement unit that can measure angular rates, the inertia
matrix can be identified. This is where machine learning is tricky. The torque applied to the
spacecraft, whether by thrusters or momentum exchange devices, is only known to a certain
degree of accuracy. Thus, the system identification must sort out, if it can, the torque scaling
factor from the inertia. The inertia can only be identified if torques are applied. This leads to
the issue of stimulation. A learning system cannot learn if the system to be studied does not
have known inputs, and those inputs must be sufficiently diverse to stimulate the system so that
the learning can be accomplished. Training a face recognition system with one picture will not
work.
[Figure 1.2: Taxonomy of machine learning. The dotted lines show connections between branches. Diagram terms: autonomous learning; state estimation; adaptive control; optimal control; inductive learning; pattern recognition; data mining; optimization; expert systems; fuzzy logic.]
There are three categories under Autonomous Learning. The first is Control. Feedback con-
trol is used to compensate for uncertainty in a system or to make a system behave differently
than it would normally behave. If there was no uncertainty, you wouldn’t need feedback. For
example, if you are a quarterback throwing a football at a running player, assume for a moment
that you know everything about the upcoming play. You know exactly where the player should
be at a given time, so you can close your eyes, count, and just throw the ball to that spot. As-
suming the player has good hands, you would have a 100% reception rate! More realistically,
you watch the player, estimate the player’s speed, and throw the ball. You are applying feedback
to the problem. As stated, this is not a learning system. However, if now you practice the same
play repeatedly, look at your success rate, and modify the mechanics and timing of your throw
using that information, you would have an adaptive control system, the second box from the top
of the control list. Learning in control takes place in adaptive control systems and also in the
general area of system identification.
System identification is learning about a system. By system, we mean the data that rep-
resents anything and the relationships between elements of that data. For example, a particle
moving in a straight line is a system defined by its mass, the force on that mass, its velocity,
and its position. The position is the integral of the velocity over time, and the velocity is
determined by the acceleration, which is the force divided by the mass.
Optimal control may not involve any learning. For example, what is known as full-state
feedback produces an optimal control signal but does not involve learning. In full-state feed-
back, the combination of model and data tells us everything we need to know about the system.
However, in more complex systems, we can’t measure all the states and don’t know the param-
eters perfectly, so some form of learning is needed to produce “optimal” or the best possible re-
sults. In a learning system, optimal control would need to be redefined as the system learns. For
example, an optimal space trajectory assumes thruster characteristics. As a mission progresses,
the thruster performance may change, requiring recomputation of the “optimal” trajectory.
System identification is the process of identifying the characteristics of a system. A sys-
tem can, to a first approximation, be defined by a set of dynamical states and parameters. For
example, in a linear time-invariant system, the dynamical equation is
ẋ = Ax + Bu. (1.1)
where A and B are matrices of parameters, u is an input vector, and x is the state vector. System
identification would find A and B. In a real system, A and B are not necessarily time invariant,
and most systems are only linear to a first approximation.
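For the discrete-time scalar analog x[k+1] = a·x[k] + b·u[k], system identification reduces to least squares over recorded states and inputs. A hypothetical sketch (in Python, with invented values a = 0.9, b = 0.5 and noise-free data):

```python
# Identify scalar parameters a, b in x[k+1] = a*x[k] + b*u[k]
# from recorded state/input data, via the least-squares normal equations.
def identify_ab(x, u):
    # Regressors are [x[k], u[k]]; targets are x[k+1]
    sxx = sum(xk * xk for xk in x[:-1])
    suu = sum(uk * uk for uk in u)
    sxu = sum(xk * uk for xk, uk in zip(x[:-1], u))
    sxy = sum(xk * xn for xk, xn in zip(x[:-1], x[1:]))
    suy = sum(uk * xn for uk, xn in zip(u, x[1:]))
    det = sxx * suu - sxu * sxu
    a = (suu * sxy - sxu * suy) / det
    b = (sxx * suy - sxu * sxy) / det
    return a, b

# Simulate the "true" system with a varied (persistently exciting) input
a_true, b_true = 0.9, 0.5
u = [1.0, -1.0, 2.0, 0.5, -0.5, 1.5]
x = [1.0]
for uk in u:
    x.append(a_true * x[-1] + b_true * uk)

a_est, b_est = identify_ab(x, u)   # recovers a=0.9, b=0.5 (noise-free data)
```

Note the role of stimulation: if `u` were all zeros, or a constant proportional to `x`, the normal equations would be singular and a and b could not be separated, which is the identifiability issue raised in the spacecraft example above.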
The second category is what many people consider true Machine Learning. This is mak-
ing use of data to produce behavior that solves problems. Much of its background comes from
statistics and optimization. The learning process may be done once in a batch process or con-
tinually in a recursive process. For example, in a stock buying package, a developer might have
processed stock data for several years, say before 2008, and used that to decide which stocks
to buy. That software might not have worked well during the financial crash. A recursive pro-
gram would continuously incorporate new data. Pattern recognition and data mining fall into
this category. Pattern recognition is looking for patterns in images. For example, the early AI
oven for six hours or so, and, when cold, press out the juice, which
boil with the following, to each gallon of the liquor:
Mace ½ oz.
Jamaica pepper 1 oz.
Black pepper 1 oz.
Cloves 1½ oz.
Ginger 1 oz.
Garlic 1 oz.
Bay salt 9 oz.
The simmering and skimming must be continued as long as any filth
rises, and let it then be put away for a day or two, and boiled up
again, being kept well up to the boiling point until reduced to half its
original quantity. When cold it may be put into bottles and firmly
corked and waxed.
tomato catsup.
When tomatoes are fully ripe take two dozen of fine, large, sound
ones, put them into jars and bake until they are tender; strain off the
water from them, and pass the pulp through a sieve, then add to
every pound of the pulp,
Eschalots, shred 1 oz.
Garlic, shred ½ oz.
Bay salt ¼ oz.
White pepper, finely powdered ¼ oz.
Chili vinegar 1 pint
Boil them together until the whole is quite soft, and pass it again
through a sieve. Now, to every pound of the pulp add the juice of two
lemons, and one large Seville orange, boil it again until it has
attained the consistence of thick cream, and when cold bottle it; cork
and seal well.
elder-flower vinegar.
Pick out all the stalks from a peck of fresh elder flowers and put
them into a vessel with two gallons of white-wine vinegar, set them
under the influence of bright sunbeams for fourteen days and
upwards, or at a short distance from a continuous fire, and then filter
the vinegar through a new flannel bag; fill bottles, which must be well
corked and sealed.
tarragon vinegar.
Take the leaves of tarragon just before it blossoms, put a pound of
them to three quarts of the best white-wine vinegar in a stone jar,
and let them infuse sixteen days. Then drain it and strain through a
flannel bag; add for every two gallons a quarter of an ounce of
isinglass dissolved in sherry wine, and let it be agitated briskly in a
large stone bottle two days. Leave it a month to get fine, then draw it
off into clean dry glass bottles, which cork well and seal.
white-gooseberry vinegar.
Vinegars should be made at home if you wish to rely upon their
quality. This will be superior to any white-wine vinegar, “so called at
the shops,” and as such will be extremely serviceable in all large
establishments and families. Choose fruit of the lightest colour you
can get when fully ripe, mash it with a wooden mallet or potato
beetle. To every peck of the fruit put two gallons of water, stir them
well for an hour and let them ferment three weeks, repeating the
stirring daily. Then strain off the liquor and add for every gallon:
Loaf sugar 1 lb.
Yeast, thick and fresh 1 tablespoonful
Treacle 1 tablespoonful
Let it work for three or four days, then put it into a sweet barrel of
convenient size, and stop it down for twelve months.
an excellent curry-powder.
Turmeric 2 oz.
Coriander seeds 6 oz.
Ginger ½ oz.
Cinnamon 2 drachms
Cayenne pepper 6 drachms
Black pepper ½ oz.
Mace 1 drachm
Fenugreek 1½ oz.
Pimento 2 drachms
Cloves 1 drachm
Nutmeg ½ oz.
Pound all the above separately in a mortar, mix thoroughly for twenty
minutes, then sift and again pound the returns, which, when in finest
powder, mix with bulk; put into dry bottles, cork them well and seal.
Some persons prefer more turmeric and less coriander. Others add
two ounces of the best Durham mustard (scorched). Others, half an
ounce of cardamoms or two ounces of cummin. The colour should
be a light yellow-brown, not bright yellow.
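Should one wish to scale this blend, the quantities reduce neatly to a common measure. The short Python sketch below is an editorial aid, not part of the original receipt; it assumes the apothecaries' convention of eight drachms to the ounce, usual in cookery books of this period. Conveniently, the whole comes to exactly one hundred drachms, so each figure is also that spice's share per cent.

```python
# Curry-powder quantities from the recipe above, reduced to drachms.
# Assumption: 8 drachms to the ounce (apothecaries' weight).
DRACHMS_PER_OZ = 8

quantities_oz = {          # ingredients given in ounces
    "turmeric": 2, "coriander seeds": 6, "ginger": 0.5,
    "black pepper": 0.5, "fenugreek": 1.5, "nutmeg": 0.5,
}
quantities_dr = {          # ingredients given in drachms
    "cinnamon": 2, "cayenne pepper": 6, "mace": 1,
    "pimento": 2, "cloves": 1,
}

# Convert the ounce quantities and merge into one table of drachms.
blend = {name: oz * DRACHMS_PER_OZ for name, oz in quantities_oz.items()}
blend.update(quantities_dr)

total = sum(blend.values())  # comes to 100 drachms in all
for name, dr in sorted(blend.items(), key=lambda kv: -kv[1]):
    print(f"{name:16s} {dr:4.0f} drachms ({100 * dr / total:.0f} per cent.)")
```

To make a larger or smaller batch, multiply each drachm figure by the desired total weight divided by one hundred.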
notes.
It has been incontestably proved by Baron Liebig and other
Professors of Chemistry, that the albumen and gelatine constitute
the leading nutritive ingredients in the different kinds of flesh and fish
used as food; and I have arrived at the conclusion, that any mode of
curing which deprives them of these valuable properties, is opposed
to facts in science and to common-sense, and cannot therefore be
tolerated.
On the nutritive properties of animal food, Professor Brande
writes: “When the muscular parts of animals are washed repeatedly
in cold water, the fibrinous matter which remains, consists chiefly of
albumen, and is, in its chemical properties, analogous to the clot of
blood.”
In mutton, the albumen or fibrin amounts to as much as twenty-
two per cent., and of gelatine to seven per cent., giving a total of
twenty-nine per cent. of nutritive matter. In beef, the albumen is
twenty, and the gelatine six per cent., yielding a total of twenty-six
per cent. of nutritive matter.
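The totals quoted here are simple sums of the two constituents; a line or two of Python (an editorial check, not part of the original text) confirms the figures as stated:

```python
# Albumen and gelatine, per cent., as quoted from Professor Brande.
mutton = {"albumen": 22, "gelatine": 7}
beef = {"albumen": 20, "gelatine": 6}

mutton_total = sum(mutton.values())
beef_total = sum(beef.values())
print(f"mutton: {mutton_total} per cent. nutritive matter")
print(f"beef:   {beef_total} per cent. nutritive matter")
```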
When a piece of meat is covered with salt, or immersed in brine,
the salt penetrates the whole fibre of the flesh, and the juices
contained within are drawn out, and mix with the brine; the salts of
potass contained in it, are exchanged and superseded by those of
soda, derived from the salt with which it has been cured; now, as a
constant supply of potass is required in the system to renew the
muscular fibre, it is quite clear that the want of it must be attended
with some derangement of the health; and hence the benefit derived
from the taking of vegetables, which by supplying potass, make up
for the want of this alkali in the meat.
Albumen is coagulated by heat, and is drawn out by cold water;
this fact is referred to in Note, No. 11.
Geese, smoked, 79
German saveloys, 89
Gherkins, pickled, 156
Grapes, „ 152
Goose, a perpetual (beef’s heart), 34
Green West India ginger, preserved, 134
Italian Cincerelli, 65
Kippered Herrings, 52
— superior, spiced, 53
— salmon, superior, 40
Mackerel, kippered, 45
— (May-fish), 46
— superior pressed, 47
Maltcooms, to keep cured goods in, 4
Mangoes, pickled, 161
Marinated herrings, 103
— eels, 99
— high flavour, 100
— salmon, 92
— sprats, 104
— shrimps, 96
— salmon roes, 127
— tench and carp, 93
— trout and grayling, 97
— veal, 125
— another method, 126
Marmalade, raspberry, 144
Moor-game, potted, 115
Morello cherries, jam of, 144
Mushroom catsup, 171
— buttons, pickled, for pies and sauces, 168
Mutton, dried as in the Ardennes, 29
— breast of, collar as venison, 33
— haunch as venison, 26
— thigh of l’Diable, 27
— Welsh hams, 28
Pickled Vegetables,
— asparagus, 155
— barberries, 154
— beetroots, 167
— cauliflowers, 146
— currants, red, 151
— celery, 151
— codlins, 154
— gherkins, 156
— golden pippins, 165
— grapes, 152
— mushrooms, white, 147
— mangoes (lemon), 159
— lemon pickle, 160
— mangoes (cucumber), 161
— nasturtiums, 166
— mushroom buttons, 168
— peaches and nectarines, 165
— piccalilli, 157
— parsley (green), 169
— onions, silver, 148
— walnuts, green, 163
— „ white, 164
— samphire, 146
Pickled Meats and Fish,
— herrings, 73
— smelts, 101
— lobsters, 102
Pickle for pork, 31
— superior, 32
— a preservative (excellent), 32
— the Hambro’, for beef and pork, 31
Pig, a young one collared, 112
Polony, Russian, 87
Provocative, a, 132
Portable soup, 78
— much richer, 78
Porker’s head, smoked, 23
Preservatives, 4
Potted beef’s heart, 122
— crabs, 107
— hare, 114
— eels, 118
— lobsters, 106
— Moor game, 115
— ox cheek, 84
— neat’s tongue, 121
— beef as hare, 120
— pigeons, 86
— snipes and woodcocks, 116
— shrimps, 119
— „ l’Diable, 85
— trout, 117
— venison, 124
Preserved
— apricots, 140
— barberries, 142
— cucumbers, 137
— golden pippins, 143
— greengage plums, 138
— damsons, 140
— Hambro’ grapes, 142
— lemons, 139
— Morello cherries, 141
— peaches and nectarines, 138
— tomatoes, 136
Smoked Meats,
— beef’s heart, 10
— beef hams, 13
— „ Breslau, 14
— boar’s head, 19
— calf’s head brawn, 76
— Dutch beef, 25
— geese, smoked, 78
— goose, a perpetual, 34
— Hambro' beef, 13
— hung beef, 6
— Leicestershire spiced bacon, 23
— Melton hunt beef, 9
— mutton, as in the Ardennes, 29
— neats’ tongues, high flavour, 17
— Norfolk chine, 21
— porker’s head, 23
— polony, Russian, 87
— German saveloys, 89
— venison, side of, 111
— Whitehaven corned beef, 15
— Westphalia hams, 19
— „ eclipsed, 20
Smoked Fish,
— eels, river, 62
— „ conger, 66
— Gorgona anchovies, 63
— herrings, bloaters, 50
— „ kippered, 51
— Mackerel, kippered, 45
— „ May-fish, 46
— „ superior, 47
— salmon, Welsh, 37
— „ Dutch, 39
— „ superior kipper, 40
— „ American, 48
— „ collared, 43
— herrings, Digby, 55
— „ Aberdeen reds, 55
— speldings, 56
— sprats, 56
Smelts, pickled, 101
— potted, 105
Snipes and woodcocks, potted, 116
Sprats, marinated, 104
Shrimps, essence of, 128
Sausage spice (French), 132
Syrup for preserving fruit, to prepare, 132
Samphire, green, pickled, 146
Silver onions, pickled, 148
Syrup d’Orgeat (French), 174