International Series in Operations Research & Management Science
Volume 278
Series Editor
Camille C. Price
Department of Computer Science, Stephen F. Austin State University,
Nacogdoches, TX, USA
Associate Editor
Joe Zhu
Foisie Business School, Worcester Polytechnic Institute, Worcester, MA, USA
Founding Editor
Frederick S. Hillier
Stanford University, Stanford, CA, USA
More information about this series at http://www.springer.com/series/6161
Allen Holder • Joseph Eichholz
An Introduction to Computational Science
Allen Holder
Department of Mathematics
Rose-Hulman Institute of Technology
Terre Haute, IN, USA

Joseph Eichholz
Department of Mathematics
Rose-Hulman Institute of Technology
Terre Haute, IN, USA
This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Dedicated to the educators who have so
nurtured our lives.
Foreword
temperatures are on the rise at that location. Another exercise in the chapter on modern parallel computing requires students to generate the famous fractal called the Mandelbrot set. Yet another exercise in the material on linear regression models asks students to reformulate a nonlinear regression model that computes an estimate of the center and radius of a set of random data that lies around a circle. The reformulation is linear, which allows students to solve the problem with their own code from earlier exercises.
The computational part of the book assumes the reader has access to either
MATLAB or Octave (or both). MATLAB is perhaps the most widely used
scripting language for the type of scientific computing addressed in the book.
Octave is freely downloadable and is very similar to MATLAB. The book
includes an extensive collection of exercises, most of which involve writing
code in either of these two languages. The exercises are very interesting, and
a reader/student who invests the effort to solve a significant fraction of these
exercises will become well educated in computational science.
Robert J. Vanderbei
Preface
The intended audience is the undergraduate who has completed her or his introductory coursework in mathematics and computer science. Our general aim is a student who has completed calculus along with either a traditional course in ordinary differential equations or linear algebra. An introductory course in a modern computer language is also assumed. We introduce topics so that any student with a firm command of calculus and programming should be able to approach the material. Most of our numerical work is completed in MATLAB, or its free counterpart Octave, as these computational platforms are standard in engineering and science. Other computational resources are introduced to broaden awareness. We also introduce parallel computing, which can then be used to supplement other concepts.
The text is written in two parts. The first introduces essential numerical methods and canvasses computational elements from calculus, linear algebra, differential equations, statistics, and optimization. Part I is designed to provide a succinct, one-term inauguration into the primary routines on which a further study of computational science rests. The material is organized so that the transition to computational science from coursework in calculus, differential equations, and linear algebra is natural. Beyond the mathematical and computational content of Part I, students will gain proficiency with elemental programming constructs and visualization, which are presented in their MATLAB syntax. Part II addresses modeling, and the goal is to have students build computational models, compute solutions, and report their findings. The models purposely intersect numerous areas of science and engineering to demonstrate the pervasive role played by computational science. This part is also written to fill a one-term course that builds from the computational background of Part I. While the authors teach most of the material over two (10 week) terms, we have attempted to modularize the presentation to facilitate single-term courses that might combine elements from both parts.
Nearly all of this text has been vetted by Rose-Hulman students, and these
careful readings have identified several areas of improvement and pinpointed
numerous typographical errors. The authors appreciate everyone who has
toiled through earlier drafts, and we hope that you have gained as much
from us as we have gained from you.
The authors have received aid from several colleagues, including many who have suffered long conversations as we have honed content. We thank Eivind Almaas, Mike DeVasher, Fred Haan, Leanne Holder, Jeff Leader, John McSweeney, Omid Nohadani, Adam Nolte, and Eric Reyes. Of these, Jeff Leader, Eric Reyes, and John McSweeney deserve special note. Dr. Leader initiated the curriculum that motivated this text, and he proofread several of our earlier versions. Dr. Reyes' statistical assistance has been invaluable, and he was surely tired of our statistical discourse. Dr. McSweeney counseled our stochastic presentation. Allen Holder also thanks Matthias Ehrgott for sabbatical support. Both authors are thankful for Robert Vanderbei's authorship of the Foreword.
The professional effort to author this text has been a proverbial labor of
love, and while the authors have enjoyed the activity, the task has impacted
our broader lives. We especially want to acknowledge the continued support
of our families—Amanda Kozak, Leanne Holder, Ridge Holder, and Rowyn
Holder. Please know that your love and support are at the forefront of our
appreciation.
We lastly thank Camille Price, Neil Levine, and the team at Springer. Their persistent nudging and encouragement have been much appreciated.
Contents

3 Approximation
  3.1 Linear Models and the Method of Least Squares
    3.1.1 Lagrange Polynomials: An Exact Fit
  3.2 Linear Regression: A Statistical Perspective
    3.2.1 Random Variables
    3.2.2 Stochastic Analysis and Regression
4 Optimization
  4.1 Unconstrained Optimization
    4.1.1 The Search Direction
    4.1.2 The Line Search
    4.1.3 Example Algorithms
  4.2 Constrained Optimization
    4.2.1 Linear and Quadratic Programming
  4.3 Global Optimization and Heuristics
    4.3.1 Simulated Annealing
    4.3.2 Genetic Algorithms
  4.4 Exercises
Index
Part I
Computational Methods
Chapter 1
Solving Single Variable Equations
No more fiction for us: we calculate; but that we may calculate, we had to make
fiction first. – Friedrich Nietzsche
The problem of solving the equation f(x) = 0 is among the most storied in all of mathematics, and it is with this problem that we initiate our study of computational science. We assume functions have their natural domains in the real numbers. For instance, a function like √x exists over the collection of nonnegative reals. The right-hand side being zero is not generally restrictive since solving either f(x) = k or g(x) = h(x) can be re-expressed as f(x) − k = 0 or g(x) − h(x) = 0. Hence looking for roots provides a general method to solve equations.
The equation f (x) = 0 may have a variety of solutions or none at all.
For example, the equation x2 + 1 = 0 has no solution over the reals, the
equation x3 + 1 = 0 has a single solution, and the equation sin(x) = 0 has
an infinite number of solutions. Such differences complicate searching for a
root, or multiple roots, especially without foreknowledge of f. Most
algorithms attempt to find a single solution, and this is the case we consider.
If more than one solution is desired, then the algorithm can be re-applied in
an attempt to locate others.
Four different algorithms are presented, those being the time-tested methods of bisection, secants, interpolation, and Newton. Each has advantages and disadvantages, and collectively they are broadly applicable. At least one typically suffices for the computational need at hand. Students are encouraged to study and implement fail-safe programs for each, as they are often useful.
1.1 Bisection
Fig. 1.1 An illustration of bisection solving f(x) = x^3 − x = 0 over [−2, 1.5]. The intervals of each iteration are shown in red at the bottom of the figure, and vertical green lines are midpoint evaluations.

Fig. 1.2 The function f_q(x) = x^(1/(2q+1)) for q = 0, 1, 5, 10, and 50. The red dots show the value of f_q(x) at the approximate solution to f_q(x) = 0 after 15 iterations of bisection initiated with [0.5, 1.3].
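Bisection repeatedly halves an interval whose end points give values of f with opposite signs, keeping the half that still brackets a root. A minimal MATLAB/Octave sketch of this iteration, written as a hypothetical helper rather than the book's own pseudocode, is:

    function xhat = bisect(f, a, b, tol, maxit)
      % Bisection: assumes f(a) and f(b) differ in sign, so [a,b] brackets a root.
      fa = f(a);
      for k = 1:maxit
        xhat = (a + b)/2;              % midpoint of the current interval
        fm = f(xhat);
        if abs(fm) < tol               % xhat is a computed root
          return
        elseif sign(fm) == sign(fa)    % the root is bracketed by [xhat, b]
          a = xhat;  fa = fm;
        else                           % the root is bracketed by [a, xhat]
          b = xhat;
        end
      end
    end

For example, bisect(@(x) x.^3 - x, -2, 1.5, 1e-8, 60) approximates a root of the function illustrated in Fig. 1.1.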
y − f(a) = [(f(a) − f(b))/(a − b)] (x − a).

Solving for x with y = 0 gives the solution x̂ so that

x̂ = a − f(a) (a − b)/(f(a) − f(b)).    (1.1)
If f (x̂) agrees in sign with f (a), then the interval [x̂, b] contains a root and
the process repeats by replacing a with x̂. If f (x̂) agrees in sign with f (b),
then the interval [a, x̂] contains a root and the process repeats by replacing
b with x̂. If |f (x̂)| is sufficiently small, then the process terminates since x̂ is
a computed root. The pseudocode for linear interpolation is in Algorithm 2.
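A hypothetical MATLAB/Octave rendering of this method, in the spirit of Algorithm 2 but not its verbatim transcription, might read:

    function xhat = falseposition(f, a, b, tol, maxit)
      % Linear interpolation (false position): assumes f(a) and f(b) differ in sign.
      fa = f(a);  fb = f(b);
      for k = 1:maxit
        xhat = a - fa*(a - b)/(fa - fb);   % root of the interpolant, Eq. (1.1)
        fx = f(xhat);
        if abs(fx) < tol                   % xhat is a computed root
          return
        elseif sign(fx) == sign(fa)        % a root remains in [xhat, b]
          a = xhat;  fa = fx;
        else                               % a root remains in [a, xhat]
          b = xhat;  fb = fx;
        end
      end
    end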
1.3 The Method of Secants

The method of secants uses the same linear approximation as the method of linear interpolation, but it removes the mathematical certainty of capturing a root within an interval. One advantage is that the function need not
Fig. 1.3 An illustration of the first few iterations of linear interpolation solving f(x) = x^3 − x = 0 over [−2, 1.5]. The red lines at the bottom show how the intervals update according to the roots of the linear interpolants (shown in magenta).

Fig. 1.4 An illustration of the first few iterations of linear interpolation solving f(x) = cos(x) − x = 0 over [−0.5, 4]. The red lines at the bottom show how the intervals update according to the roots of the linear interpolants (shown in magenta).
change sign over the original interval. A disadvantage is that the new iterate
might not provide an improvement without the certainty guaranteed by the
Intermediate Value Theorem.
The method of secants is a transition from the method of linear interpolation toward Newton's method, which is developed in Sect. 1.4. The mathematics of the method of secants moves us theoretically from the Intermediate Value Theorem to the Mean Value Theorem.
Theorem 2 (Mean Value Theorem). Assume f is continuous on [a, b] and differentiable on (a, b). Then, for any x in [a, b] there is c in (a, x) such that

f(x) = f(a) + f′(c)(x − a).
This statement of the Mean Value Theorem is different from what is typically offered in calculus, but note that if x = b, then we have the common observation that for some c in (a, b),

f′(c) = (f(b) − f(a))/(b − a).

The statement in Theorem 2 highlights that the Mean Value Theorem is a direct application of Taylor's Theorem, a result discussed more completely later.
The method of secants approximates f′(c) with the ratio

f′(c) ≈ (f(b) − f(a))/(b − a),

which suggests that

f(x) ≈ f(a) + (x − a) (f(b) − f(a))/(b − a).
The method of secants iteratively replaces the equation f(x) = 0 with the approximate equation

0 = f(a) + (x − a) (f(b) − f(a))/(b − a),

which gives a solution of

x̂ = a − f(a) (b − a)/(f(b) − f(a)).
This update is identical to (1.1), and hence the iteration to calculate the new
potential root is the same as the method of linear interpolation. What changes
is that we no longer check the sign of f at the updated value. Pseudocode
for the method of secants is in Algorithm 3.
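A hypothetical MATLAB/Octave rendering of this iteration, in the spirit of Algorithm 3 but not its verbatim transcription, and tracking the iterate whose function value is nearest zero as recommended below, might read:

    function xbest = secant(f, a, b, tol, maxit)
      % Method of secants: a and b are the two most recent iterates; the sign
      % of f is not checked, so the iterates may leave the starting interval.
      fa = f(a);  fb = f(b);
      xbest = b;  fbest = fb;              % track the best calculated solution
      for k = 1:maxit
        xhat = a - fa*(a - b)/(fa - fb);   % the same update as Eq. (1.1)
        fx = f(xhat);
        if abs(fx) < abs(fbest)            % keep the value nearest zero
          xbest = xhat;  fbest = fx;
        end
        if abs(fx) < tol                   % xhat is a computed root
          return
        end
        a = b;  fa = fb;                   % shift to the two newest iterates
        b = xhat;  fb = fx;
      end
    end

For instance, secant(@(x) cos(x) - x, 10, 20, 1e-4, 50) mirrors the example discussed below, for which linear interpolation is unavailable.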
The iterates from the method of secants can stray, and unlike the methods of bisection and interpolation, there is no guarantee of a diminishing interval as the algorithm progresses. The value of f(x) can thus worsen as the algorithm continues, and for this reason it is sensible to track the best calculated solution. That is, we should track the iterate x_best that has the function value nearest zero among those calculated. Sometimes migrating outside the original interval is innocuous or even beneficial. Consider, for instance, the first three iterations of solving f(x) = x^3 − x = 0 over the interval [−2, 1.5] in Table 1.3 for the methods of secants and linear interpolation. The algorithms are the same as long as the interval of linear interpolation agrees with the last two iterates of secants, which is the case for the first two updates.
The third update of secants in iteration 2 is not contained in the interval [a, b] = [0.6667, 0.8041] because the function no longer changes sign at the end points, and in this case, the algorithm strays outside the interval to estimate the root as 1.2572. Notice that if f(a) and f(b) had been closer in value, then the line through (a, f(a)) and (b, f(b)) would have been flatter, and the resulting update would have been a long way from the earlier iterates. However, the algorithm for this example instead converges to the root x = 1, so no harm is done. The first ten iterations are in Table 1.4, and the first few iterations are depicted in Fig. 1.5.

Table 1.3 The first three iterations of linear interpolation and the method of secants solving f(x) = x^3 − x = 0 over [−2, 1.5].
Widely varying iterates are fairly common with the method of secants, especially during the initial iterations. If the secants of the most recent iterations favorably point to a solution, then the algorithm will likely converge. An example is solving f(x) = cos(x) − x = 0 initiated with a = 10 and b = 20, for which linear interpolation would have been impossible because f(a) and f(b) agree in sign. The first two iterates provide improvements, but the third does not. Even so, the algorithm continues and converges with |f(x)| < 10^−4 in seven iterations; see Table 1.5. Indeed, the method converges quickly once the algorithm sufficiently approximates its terminal solution. This behavior is
Fig. 1.5 The first three iterations of the secant method solving f(x) = x^3 − x = 0 initiated with a = −2.0 and b = 1.5. The third secant line produces an iterate outside the interval of the previous two.
Table 1.5 The third iterate of the method of secants solving cos(x) − x = 0 lacks improvement, but the algorithm continues to converge. The iterates attempting to solve x^2/(1 + x^2) = 0 diverge.
1.4 Newton's Method

The substitution of f′(a) for f′(c) points to the analytic assumption that f′ be continuous.
Newton's method iteratively replaces the equation f(x) = 0 with the approximate equation

f(a) + f′(a)(x − a) = 0,

which has the solution

x̂ = a − f(a)/f′(a)
as long as f′(a) ≠ 0. The iterate x̂ is the new estimate of the root, and unless the process reaches a termination criterion, x̂ replaces a and the process continues. Pseudocode for Newton's method is in Algorithm 4. The first six iterates solving x^3 − x = 0 and cos(x) − x = 0 are listed in Table 1.6. Several of the first iterations are illustrated in Figs. 1.6 and 1.7.
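A hypothetical MATLAB/Octave rendering of Newton's method, in the spirit of Algorithm 4 but not its verbatim transcription, assuming a second function handle df for the derivative f′ and guarding against a vanishing derivative as discussed below, might read:

    function a = newton(f, df, a, tol, maxit)
      % Newton's method: df is a handle for f'. Halts if |f(a)| is small
      % enough or if |f'(a)| is too small to divide by safely.
      for k = 1:maxit
        fa = f(a);
        if abs(fa) < tol                   % a is a computed root
          return
        end
        dfa = df(a);
        if abs(dfa) < sqrt(eps)            % guard against a vanishing derivative
          return
        end
        a = a - fa/dfa;                    % the Newton update
      end
    end

For example, newton(@(x) x.^3 - x, @(x) 3*x.^2 - 1, -0.48, 1e-8, 50) reproduces the setting of Table 1.6.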
Table 1.6 The first six iterates of Newton's method solving x^3 − x = 0 initialized with x = −0.48 and cos(x) − x = 0 initiated with x = −1.
Fig. 1.6 An illustration of the first three iterates of Newton's method solving x^3 − x = 0 initiated with x = −0.48. The tangent approximations are the magenta lines.

Fig. 1.7 An illustration of the first three iterates of Newton's method solving cos(x) − x = 0 initiated with x = −1.0. The tangent approximations are the magenta lines.
The calculation of x̂ assumes that f′(a) is not zero, and indeed, this is a computational concern. If r is a root and f′(r) = 0, then the assumed continuity of f′ ensures that |f′(a)| is arbitrarily small as the process converges to r. Division by small values challenges numerical computation since it can result in less accurate values or exceed the maximum possible value on the computing platform. A reasonable assumption for both computational and theoretical developments is that f′(r) ≠ 0, and a reasonable termination criterion is to halt the process if |f′(a)| is sufficiently small.
A notational convenience is to let Δx = x̂ − a, which succinctly expresses the update as

x̂ = a + Δx, where f′(a) Δx = −f(a).

The change between iterations, Δx, is called the Newton step. The expression f′(a) Δx = −f(a) removes the algebraic concern of dividing by zero, and it nicely extends to the higher dimensional problems of the next chapter. Of course, the solution is unique if and only if f′(a) ≠ 0.
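In MATLAB/Octave this form of the update is naturally computed with the backslash (linear solve) operator, which reduces to ordinary division in one dimension; a two-line sketch, where fa and dfa are assumed variables holding f(a) and f′(a):

    dx   = dfa \ (-fa);   % solve f'(a)*dx = -f(a) for the Newton step
    xhat = a + dx;        % the update xhat = a + dx

The same two lines apply verbatim when dfa is a Jacobian matrix and fa a vector, which is the appeal of this formulation.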
Newton’s method shares a loss of guaranteed convergence with the method
of secants, and iterates can similarly stray during the search for a solution.
However, it is true that if the starting point is close enough to the root, then
Newton’s method will converge. A common approach is to “hope for the best”
when using Newton’s method, and for this reason the exit statuses of both
the method of secants and Newton’s method are particularly critical.