
Chapter 16. Integration of Ordinary
Differential Equations
16.0 Introduction
Problems involving ordinary differential equations (ODEs) can always be
reduced to the study of sets of first-order differential equations. For example, the
second-order equation

    d²y/dx² + q(x) dy/dx = r(x)                             (16.0.1)

can be rewritten as two first-order equations

    dy/dx = z(x)
    dz/dx = r(x) − q(x) z(x)                                (16.0.2)
where z is a new variable. This exemplifies the procedure for an arbitrary ODE. The
usual choice for the new variables is to let them be just derivatives of each other (and
of the original variable). Occasionally, it is useful to incorporate into their definition
some other factors in the equation, or some powers of the independent variable,
for the purpose of mitigating singular behavior that could result in overflows or
increased roundoff error. Let common sense be your guide: If you find that the
original variables are smooth in a solution, while your auxiliary variables are doing
crazy things, then figure out why and choose different auxiliary variables.
The generic problem in ordinary differential equations is thus reduced to the
study of a set of N coupled first-order differential equations for the functions
y_i, i = 1, 2, ..., N, having the general form

    dy_i(x)/dx = f_i(x, y_1, ..., y_N),   i = 1, ..., N     (16.0.3)

where the functions f_i on the right-hand side are known.
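As a concrete illustration of this general form, here is a minimal Python sketch (the names `derivs`, `q`, and `r` are ours, not the book's) that packages the two first-order equations obtained from the second-order ODE y'' + q(x) y' = r(x) as a single right-hand-side function:

```python
def derivs(x, y, q, r):
    """Right-hand sides f_i of the first-order system equivalent to
    y'' + q(x) y' = r(x):
      y[0]' = y[1]              (the new variable z is stored as y[1])
      y[1]' = r(x) - q(x)*y[1]
    q and r are user-supplied callables of x alone."""
    return [y[1], r(x) - q(x) * y[1]]

# Example: y'' + y' = 0, i.e. q(x) = 1, r(x) = 0, evaluated at y = 2, y' = 3
f = derivs(0.0, [2.0, 3.0], q=lambda x: 1.0, r=lambda x: 0.0)
```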
A problem involving ODEs is not completely specified by its equations. Even
more crucial in determining how to attack the problem numerically is the nature of
the problem's boundary conditions. Boundary conditions are algebraic conditions
on the values of the functions y_i in (16.0.3). In general they can be satisfied at
discrete specified points, but do not hold between those points, i.e., are not preserved
automatically by the differential equations. Boundary conditions can be as simple as
requiring that certain variables have certain numerical values, or as complicated as a
set of nonlinear algebraic equations among the variables.
Usually, it is the nature of the boundary conditions that determines which
numerical methods will be feasible. Boundary conditions divide into two broad
categories.

- In initial value problems all the y_i are given at some starting value x_s, and
  it is desired to find the y_i's at some final point x_f, or at some discrete list
  of points (for example, at tabulated intervals).
- In two-point boundary value problems, on the other hand, boundary
  conditions are specified at more than one x. Typically, some of the
  conditions will be specified at x_s and the remainder at x_f.
This chapter will consider exclusively the initial value problem, deferring two-
point boundary value problems, which are generally more difficult, to Chapter 17.

The underlying idea of any routine for solving the initial value problem is
always this: Rewrite the dy's and dx's in (16.0.3) as finite steps Δy and Δx, and
multiply the equations by Δx. This gives algebraic formulas for the change in the
functions when the independent variable x is "stepped" by one "stepsize" Δx. In
the limit of making the stepsize very small, a good approximation to the underlying
differential equation is achieved. Literal implementation of this procedure results
in Euler's method (16.1.1, below), which is, however, not recommended for any
practical use. Euler's method is conceptually important, however; one way or
another, practical methods all come down to this same idea: Add small increments
to your functions corresponding to derivatives (right-hand sides of the equations)
multiplied by stepsizes.
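The literal stepping procedure just described is Euler's method; a minimal Python sketch (our own illustrative code, not one of the book's routines):

```python
def euler(derivs, y0, x0, x1, nsteps):
    """Advance y across [x0, x1] by literal finite steps: Delta y = f * Delta x.
    Conceptually important but, as the text warns, not recommended in practice."""
    h = (x1 - x0) / nsteps
    x, y = x0, y0
    for _ in range(nsteps):
        y = y + h * derivs(x, y)   # add increment = derivative times stepsize
        x = x + h
    return y

# dy/dx = y with y(0) = 1: a crude approximation to e = y(1)
approx = euler(lambda x, y: y, 1.0, 0.0, 1.0, 1000)
```

Even with a thousand steps the first-order error is visible in the third decimal place, which is exactly why the later methods in this chapter exist.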
In this chapter we consider three major types of practical numerical methods
for solving initial value problems for ODEs:

- Runge-Kutta methods
- Richardson extrapolation and its particular implementation as the Bulirsch-
  Stoer method
- predictor-corrector methods.

A brief description of each of these types follows.
1. Runge-Kutta methods propagate a solution over an interval by combining
the information from several Euler-style steps (each involving one evaluation of the
right-hand f's), and then using the information obtained to match a Taylor series
expansion up to some higher order.
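The classical fourth-order formula illustrates this matching; the sketch below is a plain Python rendering of the standard textbook formula (not the book's rk4 routine verbatim):

```python
def rk4_step(derivs, x, y, h):
    """One classical fourth-order Runge-Kutta step for a scalar ODE:
    four Euler-style evaluations of f, combined so that the result matches
    the Taylor expansion of the solution through order h^4."""
    k1 = h * derivs(x, y)
    k2 = h * derivs(x + 0.5 * h, y + 0.5 * k1)
    k3 = h * derivs(x + 0.5 * h, y + 0.5 * k2)
    k4 = h * derivs(x + h, y + k3)
    return y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# dy/dx = y from x = 0 to 1 in ten steps of h = 0.1
y = 1.0
for i in range(10):
    y = rk4_step(lambda x, y: y, 0.1 * i, y, 0.1)
```

Ten fourth-order steps already beat a thousand Euler steps on this problem, at a cost of only forty derivative evaluations.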
2. Richardson extrapolation uses the powerful idea of extrapolating a computed
result to the value that would have been obtained if the stepsize had been very
much smaller than it actually was. In particular, extrapolation to zero stepsize is
the desired goal. The first practical ODE integrator that implemented this idea was
developed by Bulirsch and Stoer, and so extrapolation methods are often called
Bulirsch-Stoer methods.
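The idea can be sketched with the simplest possible ingredient: run a first-order method at stepsizes h and h/2 and extrapolate toward zero stepsize. This is an illustration of the principle only; the Bulirsch-Stoer method itself is built on the modified midpoint method and higher-order extrapolation.

```python
def euler_solve(f, y0, x0, x1, n):
    """Integrate with n Euler steps; the leading error is proportional to h."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)
        x = x + h
    return y

def extrapolated(f, y0, x0, x1, n):
    """Richardson extrapolation toward h -> 0: for a method whose error is
    c*h + O(h^2), the combination 2*A(h/2) - A(h) cancels the c*h term."""
    return 2.0 * euler_solve(f, y0, x0, x1, 2 * n) - euler_solve(f, y0, x0, x1, n)
```

For dy/dx = y on [0, 1], fifty plain Euler steps are off by about 0.027, while the extrapolated value is off by only a few parts in 10⁴.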
3. Predictor-corrector methods store the solution along the way, and use
those results to extrapolate the solution one step advanced; they then correct the
extrapolation using derivative information at the new point. These are best for
very smooth functions.
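A minimal two-step sketch of the idea (our illustration; the book itself defers predictor-corrector details to §16.7): an Adams-Bashforth-style predictor extrapolates from the stored derivative of the previous step, and a trapezoidal corrector then uses the derivative evaluated at the predicted new point.

```python
def pc_solve(f, y0, x0, x1, n):
    """Second-order predictor-corrector sketch for a scalar ODE.
    Predictor: extrapolate using the stored previous derivative (2-step
    Adams-Bashforth).  Corrector: trapezoidal rule with the derivative
    evaluated at the predicted new point."""
    h = (x1 - x0) / n
    f_prev = f(x0, y0)
    # multistep methods need a start-up step; use one Heun (trapezoidal) step
    y = y0 + h * f_prev
    y = y0 + 0.5 * h * (f_prev + f(x0 + h, y))
    x = x0 + h
    for _ in range(n - 1):
        f_now = f(x, y)
        y_pred = y + 0.5 * h * (3.0 * f_now - f_prev)   # predict by extrapolation
        y = y + 0.5 * h * (f_now + f(x + h, y_pred))    # correct at the new point
        f_prev, x = f_now, x + h
    return y

result = pc_solve(lambda x, y: y, 1.0, 0.0, 1.0, 100)
```

Note that after start-up each step costs only two evaluations of f, which is the efficiency argument made for these methods below.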
Runge-Kutta is what you use when (i) you don't know any better, or (ii) you
have an intransigent problem where Bulirsch-Stoer is failing, or (iii) you have a trivial
problem where computational efficiency is of no concern. Runge-Kutta succeeds
virtually always; but it is not usually fastest, except when evaluating f is cheap and
moderate accuracy (≲ 10⁻⁵) is required. Predictor-corrector methods, since they
use past information, are somewhat more difficult to start up, but, for many smooth
problems, they are computationally more efficient than Runge-Kutta. In recent years
Bulirsch-Stoer has been replacing predictor-corrector in many applications, but it
is too soon to say that predictor-corrector is dominated in all cases. However, it
appears that only rather sophisticated predictor-corrector routines are competitive.
Accordingly, we have chosen not to give an implementation of predictor-corrector
in this book. We discuss predictor-corrector further in §16.7, so that you can use a
canned routine should you encounter a suitable problem. In our experience, the
relatively simple Runge-Kutta and Bulirsch-Stoer routines we give are adequate
for most problems.
Each of the three types of methods can be organized to monitor internal
consistency. This allows numerical errors which are inevitably introduced into
the solution to be controlled by automatic (adaptive) changing of the fundamental
stepsize. We always recommend that adaptive stepsize control be implemented,
and we will do so below.

In general, all three types of methods can be applied to any initial value
problem. Each comes with its own set of debits and credits that must be understood
before it is used.
We have organized the routines in this chapter into three nested levels. The
lowest or "nitty-gritty" level is the piece we call the algorithm routine. This
implements the basic formula of the method, starts with dependent variables y_i at x,
and returns new values of the dependent variables at the value x + Δx. The algorithm
routine also yields up some information about the quality of the solution after the
step. The routine is dumb, however, and it is unable to make any adaptive decision
about whether the solution is of acceptable quality or not.
That quality-control decision we encode in a stepper routine. The stepper
routine calls the algorithm routine. It may reject the result, set a smaller stepsize, and
call the algorithm routine again, until compatibility with a predetermined accuracy
criterion has been achieved. The stepper's fundamental task is to take the largest
stepsize consistent with specified performance. Only when this is accomplished
does the true power of an algorithm come to light.
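This division of labor can be sketched schematically. Below, a toy stepper wraps any algorithm routine, estimates the error by comparing one full step against two half steps, and halves the stepsize until an accuracy criterion eps is met. This is a schematic of the idea only, not the book's stepper routines; real steppers also grow the stepsize after successful steps and use sharper error estimates.

```python
def euler_algo(derivs, x, y, h):
    """A deliberately dumb algorithm routine: one Euler step, no error control."""
    return y + h * derivs(x, y)

def stepper(derivs, x, y, h_try, eps, algo=euler_algo):
    """Try a step of size h_try; estimate the error as the difference between
    one full step and two half steps.  Reject and halve h until the estimate
    satisfies the accuracy criterion.  Returns (x_new, y_new, h_used)."""
    h = h_try
    while True:
        y_full = algo(derivs, x, y, h)
        y_mid = algo(derivs, x, y, 0.5 * h)
        y_two = algo(derivs, x + 0.5 * h, y_mid, 0.5 * h)
        if abs(y_two - y_full) <= eps:
            return x + h, y_two, h
        h = 0.5 * h   # reject the step; retry with a smaller stepsize

# dy/dx = y from (0, 1): the stepper shrinks h until the estimate meets eps
x_new, y_new, h_used = stepper(lambda x, y: y, 0.0, 1.0, 1.0, 1e-3)
```

A driver routine would then call this stepper in a loop, storing intermediate results until x reaches the end of the integration interval.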
Above the stepper is the driver routine, which starts and stops the integration,
stores intermediate results, and generally acts as an interface with the user. There is
nothing at all canonical about our driver routines. You should consider them to be
examples, and you can customize them for your particular application.
Of the routines that follow, rk4, rkck, mmid, stoerm, and simpr are algorithm
routines; rkqs, bsstep, stiff, and stifbs are steppers; rkdumb and odeint
are drivers.

Section 16.6 of this chapter treats the subject of stiff equations, relevant both to
ordinary differential equations and also to partial differential equations (Chapter 19).
