Advances in Industrial Control

Series editors
Michael J. Grimble, Glasgow, UK
Michael A. Johnson, Kidlington, UK

More information about this series at http://www.springer.com/series/1412

Derong Liu · Qinglai Wei · Ding Wang · Xiong Yang · Hongliang Li

Adaptive Dynamic Programming with Applications in Optimal Control

Springer

Derong Liu
Institute of Automation, Chinese Academy of Sciences, Beijing, China

Qinglai Wei
Institute of Automation, Chinese Academy of Sciences, Beijing, China

Ding Wang
Institute of Automation, Chinese Academy of Sciences, Beijing, China

Xiong Yang
Tianjin University, Tianjin, China

Hongliang Li
Tencent Inc., Shenzhen, China
Series Editors' Foreword
The series Advances in Industrial Control aims to report and encourage technology
transfer in control engineering. The rapid development of control technology has an
impact on all areas of the control discipline: new theory, new controllers, actuators,
sensors, new industrial processes, computer methods, new applications, new design
philosophies, and new challenges. Much of this development work resides in
industrial reports, feasibility study papers, and the reports of advanced collaborative
projects. The series offers an opportunity for researchers to present an extended
exposition of such new work in all aspects of industrial control for wider and rapid
dissemination.
The method of dynamic programming has a long history in the field of optimal
control. It dates back to those days when the subject of control was emerging in a
modern form in the 1950s and 1960s. It was devised by Richard Bellman who gave
it a modern revision in a publication of 1954 [1]. The name of Bellman became
linked to an optimality equation, key to the method, and like the name of Kalman
became uniquely associated with the early development of optimal control. One
notable extension to the method was that of differential dynamic programming due
to David Q. Mayne in 1966 and developed at length in the book by Jacobson and
Mayne [2]. Their new technique used locally quadratic models for the system
dynamics and cost functions and improved the convergence of the dynamic pro-
gramming method for optimal trajectory control problems.
Since those early days, the subject of control has taken many different directions,
but dynamic programming has always retained a place in the theory of optimal
control fundamentals. It is therefore instructive for the Advances in Industrial
Control monograph series to have a contribution that presents new ways of solving
dynamic programming problems and demonstrates these methods on some up-to-date
industrial problems. This monograph, Adaptive Dynamic Programming with
Applications in Optimal Control, by Derong Liu, Qinglai Wei, Ding Wang, Xiong
Yang and Hongliang Li, has precisely that objective.
The authors open the monograph with a very interesting and relevant discussion
of another computationally difficult problem, namely devising a computer program
to defeat human master players at the Chinese game of Go. Inspiration from the
better programming techniques used in the Go-master problem was used by the
authors to defeat the “curse of dimensionality” that arises in dynamic programming
methods.
More formally, the objective of the techniques reported in the monograph is to
control in an optimal fashion an unknown or uncertain nonlinear multivariable
system using recorded and instantaneous output signals. The algorithms’ technical
framework is then constructed through different categories of the usual state-space
nonlinear ordinary differential system model. The system model can be continuous
or discrete, have affine or nonaffine control inputs, be subject to no constraints, or
have constraints present. A set of 11 chapters contains the theory for various
formulations of the system features.
Since standard dynamic programming schemes suffer from various implemen-
tation obstacles, adaptive dynamic programming procedures have been developed
to find computable practical suboptimal control solutions. A key technique used by
the authors is that of neural networks which are trained using recorded data and
updated, or “adapted,” to accommodate uncertain system knowledge. The theory
chapters are arranged in two parts: Part 1 Discrete-Time Systems—five chapters;
and Part 2 Continuous-Time Systems—five chapters.
An important feature of the monographs of the Advances in Industrial Control
series is a demonstration of potential or actual application to industrial problems.
After a comprehensive presentation of the theory of adaptive dynamic program-
ming, the authors devote Part 3 of their monograph to three chapter-length appli-
cation studies. Chapter 12 examines the scheduling of energy supplies in a smart
home environment, a topic and problem of considerable contemporary interest.
Chapter 13 uses a coal gasification process that is suitably challenging to demon-
strate the authors’ techniques. And finally, Chapter 14 concerns the control of the
water gas shift reaction. In this example, the data used was taken from a real-world
operational system.
This monograph is very comprehensive in its presentation of the adaptive
dynamic programming theory and has demonstrations with three challenging pro-
cesses. It should find a wide readership in both the industrial control engineering
and the academic control theory communities. Readers in other fields such as
computer science and chemical engineering may also find the monograph of con-
siderable interest.
Michael J. Grimble
Michael A. Johnson
Industrial Control Centre
University of Strathclyde
Glasgow, Scotland, UK
References
1. Bellman R (1954) The theory of dynamic programming. Bulletin of the American Mathematical
Society 60(6):503–515
2. Jacobson DH, Mayne DQ (1970) Differential dynamic programming. American Elsevier Pub. Co., New York
Preface
With the rapid development in information science and technology, many busi-
nesses and industries have undergone great changes, such as chemical industry,
electric power engineering, electronics industry, mechanical engineering, trans-
portation, and logistics business. While the scale of industrial enterprises is
increasing, production equipment and industrial processes are becoming more and
more complex. For these complex systems, decision and control are necessary to
ensure that they perform properly and meet prescribed performance objectives.
Under this circumstance, how to design safe, reliable, and efficient control for
complex systems is essential for our society. As modern systems become more
complex and performance requirements become more stringent, advanced control
methods are greatly needed to achieve guaranteed performance and satisfactory
goals.
In general, optimal control deals with the problem of finding a control law for a
given system such that a certain optimality criterion is achieved. The main differ-
ence between optimal control of linear and nonlinear systems lies in that the latter
often requires solving the nonlinear Bellman equation instead of the Riccati
equation. Although dynamic programming is a conventional method in solving
optimization and optimal control problems, it often suffers from the “curse of
dimensionality.” To overcome this difficulty, based on function approximators such
as neural networks, adaptive/approximate dynamic programming (ADP) was pro-
posed by Werbos as a method for solving optimal control problems
forward-in-time.
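To make this concrete, the following is a minimal sketch (in Python, with the toy scalar system, quadratic utility, network sizes, and fixed policy all being illustrative assumptions, not designs from this book) of the core mechanism: a neural-network "critic" approximates the value function and is trained forward in time so that its estimate satisfies the Bellman equation along observed trajectories.

```python
# Minimal sketch of a forward-in-time ADP critic update (assumptions:
# toy scalar system, quadratic utility, fixed non-optimal policy).
import numpy as np

rng = np.random.default_rng(0)

def step(x, u):            # illustrative dynamics x_{k+1} = 0.9 x_k + u_k
    return 0.9 * x + u

def utility(x, u):         # illustrative utility U(x, u) = x^2 + u^2
    return x**2 + u**2

# One-hidden-layer critic network V(x; W1, b1, W2).
W1 = rng.normal(scale=0.5, size=(8, 1)); b1 = np.zeros((8, 1))
W2 = rng.normal(scale=0.5, size=(1, 8))

def critic(x):
    h = np.tanh(W1 * x + b1)            # hidden activations, shape (8, 1)
    return float(W2 @ h), h

gamma, lr, x = 0.95, 0.01, 1.0
for k in range(2000):
    u = -0.5 * x                        # fixed policy, assumed for illustration
    x_next = step(x, u)
    v, h = critic(x)
    v_next, _ = critic(x_next)
    target = utility(x, u) + gamma * v_next   # Bellman target, held constant
    err = v - target
    # Gradient descent on 0.5 * err^2 with respect to the critic weights.
    W2 -= lr * err * h.T
    grad_h = (W2.T * err) * (1 - h**2)
    W1 -= lr * grad_h * x
    b1 -= lr * grad_h
    x = x_next if abs(x_next) > 1e-3 else 1.0  # restart decayed trajectories
```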
This book presents the recent results of ADP with applications in optimal
control. It is composed of 14 chapters which cover most of the hot research areas of
ADP and are divided into three parts. Part I concerns discrete-time systems,
including five chapters from Chaps. 2 to 6. Part II concerns continuous-time sys-
tems, including five chapters from Chaps. 7 to 11. Part III concerns applications,
including three chapters from Chaps. 12 to 14.
In Chap. 1, an introduction to the history of ADP is provided, including the basic
and iterative forms of ADP. The review begins with the origin of ADP and
describes the basic structures and the algorithm development in detail. Connections
between ADP and reinforcement learning are also discussed.
Part I: Discrete-Time Systems (Chaps. 2–6)
In Chap. 2, optimal control problems of discrete-time nonlinear dynamical systems,
including optimal regulation, optimal tracking control, and constrained optimal
control, are studied using a series of value iteration ADP approaches. First, an ADP
scheme based on general value iteration is developed to obtain near-optimal control
for discrete-time affine nonlinear systems with continuous state and control spaces.
The present scheme is also employed to solve infinite-horizon optimal tracking
control problems for a class of discrete-time nonlinear systems. In particular, using
the globalized dual heuristic programming technique, a value iteration-based
optimal control strategy of unknown discrete-time nonlinear dynamical systems
with input constraints is established as a case study. Second, an iterative θ-ADP
algorithm is given to solve the optimal control problem of infinite-horizon
discrete-time nonlinear systems, which shows that each of the iterative control laws
can stabilize the nonlinear dynamical system and that the requirement of an initial
admissible control is effectively avoided.
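As a point of reference for these value iteration schemes, here is a minimal sketch of the basic recursion V_{i+1}(x) = min_u [U(x, u) + V_i(F(x, u))] on a coarsely discretized toy system. The dynamics, utility, and grids are illustrative assumptions; the general value iteration and θ-ADP algorithms of this chapter refine this basic form.

```python
# Minimal sketch of basic value iteration on a discretized toy problem.
import numpy as np

xs = np.linspace(-2.0, 2.0, 81)    # discretized state space (assumption)
us = np.linspace(-1.0, 1.0, 21)    # discretized control space (assumption)

F = lambda x, u: 0.8 * x + u       # illustrative dynamics
U = lambda x, u: x**2 + u**2       # quadratic utility

V = np.zeros_like(xs)              # V_0 = 0, the classical initialization
for i in range(200):
    V_new = np.empty_like(V)
    for j, x in enumerate(xs):
        # Evaluate U(x, u) + V_i(F(x, u)) for every control; V_i is
        # interpolated between grid points and clipped to the grid.
        q = [U(x, u) + np.interp(np.clip(F(x, u), xs[0], xs[-1]), xs, V)
             for u in us]
        V_new[j] = min(q)
    if np.max(np.abs(V_new - V)) < 1e-6:   # stop once the iteration settles
        V = V_new
        break
    V = V_new
```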
In Chap. 3, a series of iterative ADP algorithms are developed to solve the
infinite-horizon optimal control problems for discrete-time nonlinear dynamical
systems with finite approximation errors. Iterative control laws are obtained by
using the present algorithms such that the iterative value functions reach the opti-
mum. Then, the numerical optimal control problems are solved by a novel
numerical adaptive learning control scheme based on the ADP algorithm. Moreover, a
general value iteration algorithm with finite approximation errors is developed to
guarantee that the iterative value function converges to the solution of the Bellman
equation. The general value iteration algorithm can be initialized by an arbitrary
positive semidefinite function, which overcomes a disadvantage of traditional value
iteration algorithms.
In Chap. 4, a discrete-time policy iteration ADP method is developed to solve
the infinite-horizon optimal control problems for nonlinear dynamical systems. The
idea is to use an iterative ADP technique to obtain iterative control laws that
optimize the iterative value functions. The convergence, stability, and optimality
properties of the policy iteration method for discrete-time nonlinear dynamical
systems are analyzed, and it is shown that the iterative value functions are nonin-
creasingly convergent to the optimal solution of the Bellman equation. It is also
proven that any of the iterative control laws obtained from the present policy
iteration algorithm can stabilize the nonlinear dynamical systems.
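For contrast with value iteration, the following is a minimal sketch of the policy iteration loop just described: policy evaluation iterates the Bellman equation for the current control law to a fixed point, and policy improvement minimizes greedily. The toy system and the initial law u = -0.5x (assumed admissible here) are illustrative, not this chapter's constructions.

```python
# Minimal sketch of discrete-time policy iteration on a discretized toy problem.
import numpy as np

xs = np.linspace(-2.0, 2.0, 81)
us = np.linspace(-1.0, 1.0, 21)
F = lambda x, u: 0.8 * x + u       # illustrative dynamics
U = lambda x, u: x**2 + u**2       # quadratic utility

def lookup(V, x):                  # interpolated value on the state grid
    return np.interp(np.clip(x, xs[0], xs[-1]), xs, V)

policy = -0.5 * xs                 # initial admissible control law (assumption)
V = np.zeros_like(xs)
for i in range(50):
    # Policy evaluation: iterate V(x) = U(x, v(x)) + V(F(x, v(x))) to a fixed point.
    for _ in range(500):
        V = np.array([U(x, u) + lookup(V, F(x, u))
                      for x, u in zip(xs, policy)])
    # Policy improvement: v_{i+1}(x) = argmin_u [U(x, u) + V(F(x, u))].
    new_policy = np.array([us[np.argmin([U(x, u) + lookup(V, F(x, u))
                                         for u in us])]
                           for x in xs])
    if np.allclose(new_policy, policy):    # converged control law
        break
    policy = new_policy
```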
In Chap. 5, a generalized policy iteration algorithm is developed to solve the
optimal control problems for infinite-horizon discrete-time nonlinear systems.
The generalized policy iteration algorithm combines the ideas of the policy iteration
algorithm and the value iteration algorithm of ADP. It permits an arbitrary
positive semidefinite function to initialize the algorithm, where two iteration indices
are used for policy evaluation and policy improvement, respectively. The
control design extends the application scope of ADP methods to nonlinear
and uncertain environments.
In Chap. 10, by using a neural network-based online learning optimal control
approach, a decentralized control strategy is developed to stabilize a class of
continuous-time large-scale interconnected nonlinear systems. The decentralized
control strategy of the overall system can be established by adding appropriate
feedback gains to the optimal control laws of isolated subsystems. Then, an online
policy iteration algorithm is presented to solve the Hamilton–Jacobi–Bellman
equations related to the optimal control problems. Furthermore, as a generalization,
a neural network-based decentralized control law is developed to stabilize the
large-scale interconnected nonlinear systems with unknown dynamics by using an
online model-free integral policy iteration algorithm.
In Chap. 11, differential game problems of continuous-time systems, including
two-player zero-sum games, multiplayer zero-sum games, and multiplayer
nonzero-sum games, are studied via a series of ADP approaches. First, an integral
policy iteration algorithm is developed to learn online the Nash equilibrium solution
of two-player zero-sum differential games with completely unknown
continuous-time linear dynamics. Second, multiplayer zero-sum differential games
for a class of continuous-time uncertain nonlinear systems are solved by using an
iterative ADP algorithm. Finally, an online synchronous approximate optimal
learning algorithm based on policy iteration is developed to solve multiplayer
nonzero-sum games of continuous-time nonlinear systems without requiring exact
knowledge of system dynamics.
Part III: Applications (Chaps. 12–14)
In Chap. 12, intelligent optimization methods based on ADP are applied to the
challenges of intelligent price-responsive management of residential energy, with
an emphasis on home battery use connected to the power grid. First, an
action-dependent heuristic dynamic programming scheme is developed to obtain the
optimal control law for residential energy management. Second, a dual iterative
Q-learning algorithm is developed to solve the optimal battery management and
control problem in smart residential environments, where two iterations, internal
and external, are introduced. Based on the dual
iterative Q-learning algorithm, the convergence property of iterative Q-learning
method for the optimal battery management and control problem is proven. Finally,
a distributed iterative ADP method is developed to solve the multibattery optimal
coordination control problem for home energy management systems.
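The dual iterative structure is specific to this book, but it builds on the standard Q-learning update; the following generic sketch shows that building block, with the discretization of battery levels and charging actions being purely illustrative assumptions.

```python
# Minimal sketch of the standard tabular Q-learning update (not the book's
# dual iterative scheme; states/actions are an assumed discretization of
# battery levels and charging decisions).
import numpy as np

n_states, n_actions = 10, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95           # learning rate and discount factor

def q_update(s, a, reward, s_next):
    # Q(s, a) <- Q(s, a) + alpha * [r + gamma * max_a' Q(s', a') - Q(s, a)]
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
```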
In Chap. 13, a coal gasification optimal tracking control problem is solved
through a data-based iterative optimal learning control scheme using an iterative
ADP approach. Based on system data, neural networks are used to construct the
dynamics of the coal gasification process, coal quality, and reference control, respec-
tively. Via system transformation, the optimal tracking control problem with
approximation errors and disturbances is effectively transformed into a two-person
zero-sum optimal control problem. An iterative ADP algorithm is developed to
obtain the optimal control laws for the transformed system.
The authors would like to acknowledge the help and encouragement they have
received from colleagues in Beijing and Chicago during the course of writing this
book. Some materials presented in this book are based on the research conducted
with several Ph.D. students, including Yuzhu Huang, Dehua Zhang, Pengfei Yan,
Yancai Xu, Hongwen Ma, Chao Li, and Guang Shi. The authors also wish to thank
Oliver Jackson, Editor (Engineering) from Springer, for his patience and
encouragement.
The authors are very grateful to the National Natural Science Foundation of
China (NSFC) for providing necessary financial support to our research in the past
five years. The present book is the result of NSFC Grants 61034002, 61233001,
61273140, 61304086, and 61374105.
Contents
2.4 Conclusions 87
References 87
3 Finite Approximation Error-Based Value Iteration ADP 91
3.1 Introduction 91
3.2 Iterative θ-ADP Algorithm with Finite Approximation Errors 92
3.2.1 Properties of the Iterative ADP Algorithm with Finite Approximation Errors 93
3.2.2 Neural Network Implementation 100
3.2.3 Simulation Study 104
3.3 Numerical Iterative θ-Adaptive Dynamic Programming 107
3.3.1 Derivation of the Numerical Iterative θ-ADP Algorithm 107
3.3.2 Properties of the Numerical Iterative θ-ADP Algorithm 111
3.3.3 Summary of the Numerical Iterative θ-ADP Algorithm 120
3.3.4 Simulation Study 121
3.4 General Value Iteration ADP Algorithm with Finite Approximation Errors 125
3.4.1 Derivation and Properties of the GVI Algorithm with Finite Approximation Errors 125
3.4.2 Designs of Convergence Criteria with Finite Approximation Errors 133
3.4.3 Simulation Study 140
3.5 Conclusions 147
References 147
4 Policy Iteration for Optimal Control of Discrete-Time Nonlinear Systems 151
4.1 Introduction 151
4.2 Policy Iteration Algorithm 152
4.2.1 Derivation of Policy Iteration Algorithm 153
4.2.2 Properties of Policy Iteration Algorithm 154
4.2.3 Initial Admissible Control Law 160
4.2.4 Summary of Policy Iteration ADP Algorithm 162
4.3 Numerical Simulation and Analysis 162
4.4 Conclusions 173
References 174
Symbols

∈ Belongs to
∀ For all
⇒ Implies
⇔ Equivalent, or if and only if
⊗ Kronecker product
∅ The empty set
≜ Equal to by definition
C^n(Ω) The class of functions having continuous nth derivative on Ω
ℒ2(Ω) The ℒ2 space defined on Ω, i.e., $\big(\int_\Omega \|f(x)\|^2\,dx\big)^{1/2} < \infty$ for f ∈ ℒ2(Ω)
ℒ∞(Ω) The ℒ∞ space defined on Ω, i.e., $\sup_{x\in\Omega} \|f(x)\| < \infty$ for f ∈ ℒ∞(Ω)
λ_min(A) The minimum eigenvalue of matrix A
λ_max(A) The maximum eigenvalue of matrix A
I_n The n × n identity matrix
A > 0 Matrix A is positive definite
det(A) Determinant of matrix A
A⁻¹ The inverse of matrix A
tr(A) The trace of matrix A
1.1 Introduction
Big data, artificial intelligence (AI), and deep learning are the three most
talked-about topics in information technology lately. The recent emergence of deep learning
[10, 17, 38, 68, 88] has pushed neural networks (NNs) to become a hot research
topic again. It has also gained huge success in almost every branch of AI, includ-
ing machine learning, pattern recognition, speech recognition, computer vision, and
natural language processing [17, 25, 26, 35, 74]. On the other hand, the study of
big data often uses AI technologies such as machine learning [80] and deep learning
[17]. One particular subject of study in AI, i.e., the computer game of Go, faced
a great challenge of dealing with vast amounts of data. The ancient Chinese board
game Go has been studied for years with the hope that one day, computer programs
can defeat human professional players. The board of Go consists of a 19 × 19
grid of intersections. At the beginning of the game, each of the two players has
roughly 360 options for placing a stone. However, the number of potential legal board
positions grows exponentially, and it quickly becomes greater than the total number
of atoms in the universe [103]. Such a number leads to so many directions in which
any given game can move that it is impossible for a computer to play by brute-force
computation of all possible outcomes.
Previous computer programs focused less on evaluating the state of the board
positions and more on speeding up simulations of how the game might play out.
The Monte Carlo tree search approach was often used in computer game programs;
it randomly samples only some of the possible sequences of plays at each step to
choose between different possible moves, instead of trying to calculate every possible
one. Google DeepMind, an AI company in London acquired by Google in 2014,
developed a program called AlphaGo [92] that has shown performance previously
thought to be at least a decade away. Instead of exploring various sequences
of moves, AlphaGo learns to make a move by evaluating the strength of its position on
the board. Such an evaluation was made possible by NN’s deep learning capabilities.
ADP as well as iterative forms. A few related books will be briefly reviewed before
the end of this chapter.
The main research results in RL can be found in the book by Sutton and Barto
[98] and the references cited in the book. Even though both RL and the main topic
studied in the present book, i.e., ADP, provide approximate solutions to dynamic
programming, research in these two directions has been somewhat independent [7]
in the past. The most famous algorithms in RL are the temporal difference algorithm
[97] and the Q-learning algorithm [112, 113]. Compared to ADP, the area of RL is
more mature and has a vast amount of literature (cf. [27, 34, 47, 98]).
An RL system typically consists of the following four components: {S, A, R, F},
where S is the set of states, A is the set of actions, R is the set of scalar reinforcement
signals or rewards, and F is the function describing the transition from one state
to the next under a given action, i.e., F : S × A → S. A policy π is defined as a
mapping π : S → A. At any given time t, the system can be in a state st ∈ S, take
an action at ∈ A determined by the policy π , i.e., at = π(st ), transition to the next
state s′ = st+1, which is denoted by st+1 = F(st, at), and at the same time, receive a
reward signal rt+1 = r(st , at , st+1 ) ∈ R. The goal of RL is to determine a policy to
maximize the accumulated reward starting from initial state s0 at t = 0.
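The four components and the interaction loop can be made concrete in a few lines. The following Python sketch uses a toy two-state, two-action example whose specifics (state names, rewards, policy) are illustrative assumptions, not anything defined in this book.

```python
# Minimal sketch of the RL components {S, A, R, F} and a policy pi : S -> A.
S = ["left", "right"]                    # state set S
A = ["stay", "move"]                     # action set A

def F(s, a):                             # deterministic transition F : S x A -> S
    if a == "move":
        return "right" if s == "left" else "left"
    return s

def r(s, a, s_next):                     # reward for a transition (the set R)
    return 1.0 if s_next == "right" else 0.0

def pi(s):                               # a fixed policy pi : S -> A
    return "move" if s == "left" else "stay"

# One step of the interaction loop described above.
s = "left"
a = pi(s)                                # a_t = pi(s_t)
s_next = F(s, a)                         # s_{t+1} = F(s_t, a_t)
reward = r(s, a, s_next)                 # r_{t+1} = r(s_t, a_t, s_{t+1})
```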
An RL task always involves estimating some kind of value function. A value
function estimates how good it is to be in a given state s, and it is defined as
$$
V^{\pi}(s) = \sum_{k=0}^{\infty} \gamma^{k} r_{k+1} \Big|_{s_0 = s} = \sum_{k=0}^{\infty} \gamma^{k} r(s_k, a_k, s_{k+1}) \Big|_{s_0 = s}, \tag{1.2.1}
$$
where at+k = π(st+k) and st+k+1 = F(st+k, at+k) for k = 0, 1, . . .. V^π(s) is referred
to as the state-value function for policy π. On the other hand, the action-value function
for policy π estimates how good it is to perform a given action a in a given state s
under the policy π, and it is defined as
$$
Q^{\pi}(s, a) = \sum_{k=0}^{\infty} \gamma^{k} r_{t+k+1} \Big|_{s_t = s,\, a_t = a} = \sum_{k=0}^{\infty} \gamma^{k} r(s_{t+k}, a_{t+k}, s_{t+k+1}) \Big|_{s_t = s,\, a_t = a}. \tag{1.2.2}
$$
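Continuing the toy example above, the following sketch evaluates both definitions by truncating the infinite discounted sums after K steps, at which point γ^K is negligible for γ < 1; taking the first action and then following π gives the action-value. It assumes the S, A, F, r, and pi of the previous sketch.

```python
# Minimal sketch: approximate V^pi and Q^pi by truncated discounted sums.
gamma, K = 0.9, 200

def V_pi(s0):
    """State-value V^pi(s0): discounted return of following pi from s0."""
    s, total = s0, 0.0
    for k in range(K):
        a = pi(s)
        s_next = F(s, a)
        total += gamma**k * r(s, a, s_next)
        s = s_next
    return total

def Q_pi(s0, a0):
    """Action-value Q^pi(s0, a0): take a0 first, then follow pi."""
    s_next = F(s0, a0)
    return r(s0, a0, s_next) + gamma * V_pi(s_next)

print(V_pi("left"), Q_pi("left", "stay"))
```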