Another important problem in linear algebra is the computation of the
eigenvalues and eigenvectors of a matrix. Eigenvalue problems occur
frequently in connection with mechanical vibrations and other cases of periodic
motion.
Example 0: Taut String
Three equal masses are connected by a taut string, uniformly spaced, between
two supports.
Figure: 1.0.0 A simple vibration problem - three equal masses with displacements y₁, y₂, y₃.
When the masses are pulled from equilibrium in the y-direction, the tension in
the string will pull them back toward equilibrium and cause the entire assembly
to vibrate. Using a number of simplifying assumptions (e.g. that the extension
from equilibrium is small compared with the length of the string) and some
suitable scaling, the equations of motion will have the form
d²y₁/dt² = -2 y₁ + y₂
d²y₂/dt² = y₁ - 2 y₂ + y₃
d²y₃/dt² = y₂ - 2 y₃
2012 © G. Baumann
Under certain conditions, the assembly will settle into a steady state in which all
three masses move periodically with the same frequency. If we write
yᵢ = aᵢ cos(ω t)
and substitute into the equations of motion, we find that the vibrational
frequencies are determined by

-ω² a₁ = -2 a₁ + a₂
-ω² a₂ = a₁ - 2 a₂ + a₃
-ω² a₃ = a₂ - 2 a₃
Putting λ = ω², this can be written in matrix form as

( 2  -1   0 )   ( a₁ )       ( a₁ )
(-1   2  -1 ) . ( a₂ )  =  λ ( a₂ )
( 0  -1   2 )   ( a₃ )       ( a₃ )

The vibrational frequencies are therefore determined by the eigenvalues of a
symmetric 3×3 matrix.
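As a quick cross-check of this result (a Python/NumPy sketch; the lecture itself works in Mathematica), the eigenvalues of the 3×3 matrix can be computed directly:

```python
import numpy as np

# Matrix of the taut-string example; its eigenvalues are lambda = omega^2.
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

lam = np.linalg.eigvalsh(A)   # ascending order for symmetric matrices
omega = np.sqrt(lam)          # the three vibrational frequencies
```

The exact eigenvalues are 2 - √2, 2, and 2 + √2, so the frequencies are ω = √λ.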
The preceding is an instance of the simple eigenvalue problem

A.u = λ u

that is common in practice. Some physical situations lead to the more general
eigenvalue problem

A.u = λ B.u.

In both cases, the matrices are square, and we want to find eigenvalues λ for
which the equation has a nontrivial eigenvector u. Although the general
eigenvalue problem has applications, we will restrict our discussion to the more
common simple problem. The set of all eigenvalues of an eigenvalue problem,
called the spectrum of A, will be denoted by σ(A). The pair {λ, u} is an
eigensolution.
As in the case of solutions of linear systems, the algorithms for computing
eigenvalues involve a fair amount of manipulative detail. Fortunately, computer
programs for them are readily available so the need for implementing them
does not often arise. For most users it is not essential to remember all the fine
2 Lecture_011.nb
points of the eigenvalue algorithms, but it is important to understand the basic
ideas behind these methods, know how to use them, and have an idea of what
their limitations are. For this, we first need to review some of the elementary
results from the theory of eigenvalues.
The results quoted here were stated in the lecture on linear algebra. We
will leave it to the reader to look up the proofs in books on linear algebra. The
first few theorems are simple and can be proved as exercises.
Theorem: 1.0.0 Eigensolution
If {λ, u} is an eigensolution of the matrix A, then {α λ + β, u} is an eigensolution
of the matrix α A + β I. If u is an eigenvector, so is c u for any constant c ≠ 0.
The second part of this theorem states that the eigenvectors are indeterminate
to within a constant. When convenient, we can therefore assume that all
eigenvectors are normalized by ||u|| = 1.
Theorem: 1.0.0 Eigenvalues of A⁻¹
If {λ, u} is an eigensolution for the matrix A and A is invertible, then {1/λ, u} is an
eigensolution of A⁻¹.
Many of the techniques for solving eigenvalue problems involve transforming
the original matrix into another one with closely related, but more easily
computed, eigensolutions. The primary transformation used in this is given in
the following definition.
Definition: 1.0.0 Similarity Transformation
Let T be an invertible matrix. Then the transformation

B = T.A.T⁻¹

is called a similarity transformation. The matrices A and B are said to be
similar.
Theorem: 1.0.0 Eigensolution of Similar Matrices
Suppose that {λ, u} is an eigensolution for the matrix A. Then {λ, T.u} is an
eigensolution of the similar matrix B = T.A.T⁻¹.
An important theorem, called the Alternative Theorem, connects the solvability
of a system A.x = b with the spectrum of A.
Theorem: 1.0.0 Alternative Theorem
The n×n linear system

(A - σ I).x = y

has a unique solution if and only if σ is not in the spectrum of A.
The Alternative Theorem gives us a way of analyzing the spectrum of a matrix. If
λ is in σ(A) then A - λ I cannot have an inverse; conversely, if A - λ I is not
invertible, then λ must be in the spectrum of A. This implies that the spectrum
of A is the set of all λ for which

det(A - λ I) = 0.

Since the determinant of any matrix can be found with a finite number of
multiplications and divisions, say by expansion of minors, it follows that
det(A - λ I) is a polynomial of degree n in λ. This is the characteristic polynomial,
which will be denoted by

p_A(λ) = det(A - λ I).
The eigenvalue problem is therefore reducible to a polynomial root-finding
problem which, at least in theory, is well understood. While we usually do not
solve eigenvalue problems quite in this way, the observation is used in many of
the eigenvalue algorithms. For instance, we can draw on our extensive
knowledge of polynomials and their roots to characterize the spectrum.
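For a small matrix this reduction can be carried out literally. A Python/NumPy sketch (illustrative only, not the lecture's Mathematica code); `np.poly` returns the coefficients of the characteristic polynomial of a matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of the characteristic polynomial (leading coefficient 1);
# here lambda^2 - 4 lambda + 3, whose roots are the eigenvalues 1 and 3.
coeffs = np.poly(A)
eigs = np.sort(np.roots(coeffs))
```

This is fine for tiny examples, but as the text notes, practical eigenvalue algorithms do not form the characteristic polynomial explicitly.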
Theorem: 1.0.0 Number of Eigenvalues
An n×n matrix has exactly n eigenvalues in the complex plane.
In this theorem we must account for multiple roots of p_A(λ). If λ₀ is a root of
p_A(λ) with multiplicity k, then λ₀ is an eigenvalue with the same multiplicity.
In practice, one way of finding eigenvalues is to transform the original matrix by
a sequence of similarity transformations until it becomes more manageable, for
example, so that its determinant can be evaluated quickly or so that its
eigenvalues can be immediately read off. The latter is certainly true if the matrix
is diagonal. When a matrix is not diagonal, but in some way nearly so, we can
use the next result to locate its eigenvalues approximately.
Theorem: 1.0.0 Spectrum of a Matrix A
The spectrum of a matrix A is contained entirely in the union of the disks (in the
complex plane)

dᵢ(σ) = { σ : |Aᵢᵢ - σ| ≤ Σ_{j≠i} |Aᵢⱼ| },  i = 1, 2, 3, …, n.
This is a simplified version of the Gerschgorin Theorem. The more general
version states that each disjoint region, which is made up from the union of one
or more disk components, contains exactly as many eigenvalues as it has
components.
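The disk centers and radii are cheap to compute; a Python/NumPy sketch (the function name is ours, and the lecture itself works in Mathematica):

```python
import numpy as np

def gerschgorin_disks(A):
    """Return a (center, radius) pair for each row of A."""
    A = np.asarray(A, dtype=float)
    centers = np.diag(A)
    radii = np.abs(A).sum(axis=1) - np.abs(centers)
    return list(zip(centers, radii))

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
disks = gerschgorin_disks(A)   # three disks centered at 2
```

For this matrix every disk is centered at 2, so all eigenvalues lie in the interval [0, 4], consistent with the exact values 2 - √2, 2, and 2 + √2.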
The general version is useful because it tells us exactly how many eigenvalues
there are in a region. Unfortunately, we may have to go into the complex plane to
find them, and this complicates matters. There is also the issue of eigenvectors,
which is less readily resolved. All of this becomes a lot easier when we deal with
symmetric matrices.
Theorem: 1.0.0 Symmetric Matrices Eigenvalues
The eigenvalues of a symmetric matrix are all real. To each eigenvalue there
corresponds a distinct, real eigenvector.
Theorem: 1.0.0 Symmetric Matrices Eigenvectors
If A is symmetric, then the eigenvectors associated with two different
eigenvalues are orthogonal to each other. This implies that, even if there are
multiple eigenvalues, there are eigenvectors {u₁, u₂, …, uₙ} such that

uᵢᵀ.uⱼ = 0 for all i ≠ j.
Since the orthogonality of the eigenvectors makes them linearly independent, it
follows from this theorem that every n-vector x can be expanded as a linear
combination of the eigenvectors of a symmetric n×n matrix. In particular,
because we can always normalize the eigenvectors such that uᵢᵀ.uᵢ = 1, we
have that for every n-vector x

x = Σ_{i=1}^{n} (xᵀ.uᵢ) uᵢ.
Such simple results normally do not hold for nonsymmetric matrices, but they
are of great help in working with symmetric cases. Since in practice symmetric
matrices are very common, the symmetric eigenvalue problem has received a
great deal of attention.
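The expansion is easy to verify numerically; a short Python/NumPy sketch (the matrix and vector are arbitrary test data, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M + M.T                        # a symmetric test matrix

_, U = np.linalg.eigh(A)           # columns of U: orthonormal eigenvectors u_i
x = rng.standard_normal(4)

# Reassemble x from the expansion x = sum_i (x^T u_i) u_i.
x_rebuilt = sum((x @ U[:, i]) * U[:, i] for i in range(4))
```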
The power method is conceptually the simplest way of getting eigenvalues of a
symmetric matrix. It is an iterative method that, in many cases, converges to
the eigenvalue of largest magnitude. For simplicity, we will assume that the
eigenvalues of the symmetric matrix A are all simple and have been labeled so
that |λ₁| > |λ₂| ≥ …. The corresponding eigenvectors will be u₁, u₂, …
Take any vector x; by the expansion theorem above, this vector can be
expanded as

x = Σ_{i=1}^{n} aᵢ uᵢ.
Then

A.x = Σ_{i=1}^{n} aᵢ λᵢ uᵢ

and generally

Aᵏ.x = Σ_{i=1}^{n} aᵢ λᵢᵏ uᵢ = λ₁ᵏ Σ_{i=1}^{n} aᵢ (λᵢ/λ₁)ᵏ uᵢ.
As k becomes large, all the terms in this sum become small except the first,
so that eventually

Aᵏ.x ≈ λ₁ᵏ a₁ u₁.
If we take any component of this vector, say the i-th one, and compare it for
successive iterates, we find that the dominant eigenvalue λ₁ is approximated by

λ₁ ≈ (Aᵏ⁺¹.x)ᵢ / (Aᵏ.x)ᵢ.
For stability reasons, we usually take i so that we get the largest component of
Aᵏ.x, but in principle any i will do. An approximation to the corresponding
eigenvector can be obtained by

u₁ ≈ c Aᵏ.x,

where the constant c can be chosen for normalization.
In practice, we do not evaluate Aᵏ.x by computing the powers of A. Instead,
we iteratively compute

x_{k+1} = A.x_k

with initial guess x₀ = x. Then

λ₁ ≈ (x_{k+1})ᵢ / (x_k)ᵢ.
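The iteration above can be sketched in a few lines of Python/NumPy (ours, not the lecture's Mathematica code); the 4×4 symmetric test matrix is patterned on the example that follows:

```python
import numpy as np

def power_method(A, x0, iterations=60):
    """Power iteration: estimate the dominant eigenvalue and eigenvector."""
    x = np.asarray(x0, dtype=float)
    x = x / np.linalg.norm(x)
    lam = 0.0
    for _ in range(iterations):
        y = A @ x
        i = int(np.argmax(np.abs(x)))  # largest component, for stability
        lam = y[i] / x[i]              # ratio of successive iterates
        x = y / np.linalg.norm(y)      # renormalize to avoid overflow
    return lam, x

A = np.array([[1.0, 0.0, 0.2, 0.1],
              [0.0, 3.0, 0.1, 0.2],
              [0.2, 0.1, 0.0, 0.1],
              [0.1, 0.2, 0.1, -2.0]])
lam, u = power_method(A, np.ones(4))   # lam is close to 3
```

Renormalizing each iterate avoids the rapid growth of the raw vectors Aᵏ.x seen in the example below.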
Example 0: Power Method
Apply the power method to the following matrix

A = {{1, 0, 0.2, 0.1}, {0, 3, 0.1, 0.2}, {0.2, 0.1, 0, 0.1}, {0.1, 0.2, 0.1, -2}}; MatrixForm[A]

1    0    0.2   0.1
0    3    0.1   0.2
0.2  0.1  0     0.1
0.1  0.2  0.1   -2

with the initial guess

x = {1, 1, 1, 1}
{1, 1, 1, 1}

or, better, a random starting vector

x = Table[RandomReal[], {i, 1, 4}]
{0.624162, 0.456389, 0.61321, 0.514327}
The iterations of the vectors are done by nested applications of the matrix
product on the initial vector

l1 = NestList[A.# &, x, 24]

{{0.624162, 0.456389, 0.61321, 0.514327},
 {0.798237, 1.53335, 0.221904, -0.81364},
 {0.761254, 4.45952, 0.231619, 2.03596},
 …,
 {8.52599×10^8, 1.55184×10^11, 5.41886×10^9, 6.32715×10^9}}
The observation is that the numbers in the vector grow in magnitude with each
iteration. However, the ratio of corresponding entries of the last and the
next-to-last iterates delivers the eigenvalue

l = Last[Last[l1]]/Last[Part[l1, -2]]

3.02239

The normalization of the related vector delivers the eigenvector as

u1 = Last[l1]/Sqrt[Last[l1].Last[l1]]

{0.00548612, 0.998547, 0.0348682, 0.0407126}

These approximations of the eigenvalue and the eigenvector can be expected
to be accurate to at least three significant digits.
The power method is simple, but it has some obvious shortcomings. First, note
that the argument leading to the iteration formula for λ works only if a₁ ≠ 0; that
is, a starting guess x must have a component in the direction of the eigenvector
u₁. Since we do not know u₁, this is hard to enforce. In practice, though,
rounding will eventually introduce a small component in this direction, so the
power method should work in any case. But it may be quite slow, and it is a
good idea to use the best guess for u₁ as an initial guess. If no reasonable
value for u₁ is available, we can simply use a random number generator to
choose a starting value.
The power method is iterative, so its rate of convergence is of concern. It is not
hard to see that it has an iterative order of convergence one and that each step
reduces the error roughly by a factor of

c = |λ₂| / |λ₁|.
The method, as described, works reasonably well only if the dominant eigenvalue
is simple and significantly separated from the next largest eigenvalue. We can
sometimes make an improvement by shifting the origin, as indicated by the
Eigensolution theorem above. If we know the approximate position of the
eigenvalue closest to λ₁, say λ₂, and of the eigenvalue farthest from it, say λ_k,
we can shift the origin so that

|λ₂ - β| / |λ₁ - β| = |λ_k - β| / |λ₁ - β|,

which minimizes the ratio of second largest to largest eigenvalue. This can be
achieved by shifting the origin by an amount β, where β is halfway between λ₂
and λ_k. When we now apply the power method to the matrix A - β I, we can
expect faster convergence.
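A Python/NumPy sketch of the shifted iteration (ours; the matrix mirrors the example below, and the shift β = -1/2 is chosen halfway between the eigenvalue estimates 1 and -2):

```python
import numpy as np

# Shifted power iteration (a sketch; 4x4 symmetric test matrix).
A = np.array([[1.0, 0.0, 0.2, 0.1],
              [0.0, 3.0, 0.1, 0.2],
              [0.2, 0.1, 0.0, 0.1],
              [0.1, 0.2, 0.1, -2.0]])
beta = -0.5
B = A - beta * np.eye(4)          # eigenvalues of B are lambda_i - beta

x = np.ones(4)
lam = 0.0
for _ in range(40):
    y = B @ x
    i = int(np.argmax(np.abs(x)))
    lam = y[i] / x[i]             # converges to the dominant eigenvalue of B
    x = y / np.linalg.norm(y)

lam += beta                       # undo the shift: an eigenvalue of A
```

The shifted matrix has a better separation ratio, so fewer iterations are needed than for A itself.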
Example 0: Power Method Improved
Consider the matrix A from the last example

A // MatrixForm

1    0    0.2   0.1
0    3    0.1   0.2
0.2  0.1  0     0.1
0.1  0.2  0.1   -2

Gerschgorin's theorem tells us that the approximate locations of the
eigenvalues are {-2, 0, 1, 3}. We therefore expect that in the computations the
error is reduced approximately by a factor of 2/3 on each iteration. If we shift
the origin by

p = (1 - 2)/2

-(1/2)

the shifted matrix becomes

B = A - p IdentityMatrix[4]; B // MatrixForm

3/2   0     0.2   0.1
0     7/2   0.1   0.2
0.2   0.1   1/2   0.1
0.1   0.2   0.1   -(3/2)

and we find the eigenvalues by the same iteration procedure, using the same
initial guess for x

x = {1, 1, 1, 1}
{1, 1, 1, 1}
The iteration delivers the transformed vectors

l1 = NestList[B.# &, x, 14]

{{1, 1, 1, 1},
 {1.8, 3.8, 0.9, -1.1},
 {2.77, 13.17, 1.08, 2.68},
 {4.639, 46.739, 2.679, -1.001},
 …,
 {257409., 4.67464×10^7, 1.63254×10^6, 1.90353×10^6}}
The ratio of corresponding entries of the last and next-to-last iterates, shifted
back by p, delivers the eigenvalue

l = Last[Last[l1]]/Last[Part[l1, -2]] + p

3.01341

Here we have to add the shift p back, because otherwise we would obtain the
(larger) dominant eigenvalue of B instead of the eigenvalue of A. The
normalization of the related vector delivers the eigenvector as

u1 = Last[l1]/Sqrt[Last[l1].Last[l1]]

{0.00549851, 0.998549, 0.0348726, 0.0406613}

We can expect an error attenuation of about 3/7 = 0.428571 on each iteration,
about twice as fast as the original computation.
Other adjustments can be made to increase the usefulness of the power
method. For example, we can get the smallest eigenvalue by inverting the
matrix and applying the power method to A⁻¹. By the theorem on eigenvalues
of A⁻¹ above, this will give the reciprocal of the smallest eigenvalue of A. Once
the eigenvector u₁ has been found, we can deflate the original matrix by

A₁ = A - λ₁ u₁ u₁ᵀ.

As is easily shown, the spectrum of A₁ is {λ₂, λ₃, …, λₙ, 0}, so that we can use
this observation to compute the second largest eigenvalue, and so on.
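A Python/NumPy sketch of the deflation step (ours; for brevity the dominant pair is taken from a direct eigensolver rather than from the power method):

```python
import numpy as np

# Deflation: remove the dominant eigenpair from the spectrum.
A = np.array([[1.0, 0.0, 0.2, 0.1],
              [0.0, 3.0, 0.1, 0.2],
              [0.2, 0.1, 0.0, 0.1],
              [0.1, 0.2, 0.1, -2.0]])

evals, evecs = np.linalg.eigh(A)
k = int(np.argmax(np.abs(evals)))  # index of the dominant eigenvalue
lam1, u1 = evals[k], evecs[:, k]   # u1 is already normalized

# A1 has the same eigenvalues as A, except lambda_1 is replaced by 0.
A1 = A - lam1 * np.outer(u1, u1)
```

Note that the deflation uses the outer product u₁u₁ᵀ (a matrix), not the inner product u₁ᵀu₁ (a scalar).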
A1 = A - l Outer[Times, u1, u1];

fn1[x_] := Block[{vec1, l},
  vec1 = A1.x;
  l = Last[vec1]/Last[x];
  Print["λ = ", l];
  vec1 = vec1/Sqrt[vec1.vec1]]

l1 = Nest[fn1, x, 34]

The printed estimates of λ settle after a few iterations at the second largest
eigenvalue of A in magnitude (here approximately -2), and the returned vector
is the corresponding normalized eigenvector.
Slide 1 of 17
Ordinary Differential Equations
Mathematical Preliminaries
It will be useful to review some elementary definitions and concepts from the
theory of differential equations. An equation involving a relation between the
values of an unknown function and one or more of its derivatives is called a
differential equation. We shall always assume that the equation can be solved
explicitly for the derivative of highest order. An ordinary differential equation of
order n will then have the form
y⁽ⁿ⁾(x) = f(x, y(x), y⁽¹⁾(x), …, y⁽ⁿ⁻¹⁾(x))
By a solution of (7.2.1) we mean a function φ(x) which is n times continuously
differentiable on a prescribed interval and which satisfies (7.2.1); that is, φ(x)
must satisfy

φ⁽ⁿ⁾(x) = f(x, φ(x), φ⁽¹⁾(x), …, φ⁽ⁿ⁻¹⁾(x))
The general solution of (7.2.1) will normally contain n arbitrary constants, and
hence there exists an n-parameter family of solutions. If y(x₀), y'(x₀), …,
y⁽ⁿ⁻¹⁾(x₀) are prescribed at one point x = x₀, we have an initial-value problem.
We shall always assume that the function f satisfies conditions sufficient to
guarantee a unique solution to this initial-value problem. A simple example of a
first-order equation is
eq1 = ∂ₓ y[x] == y[x]

y'[x] == y[x]
Its general solution is

solution = DSolve[eq1, y, x]

{{y → Function[{x}, E^x C[1]]}}

where C[1] is an arbitrary constant. If the initial condition y(x₀) = y₀ is prescribed,
the solution can be written

solution = DSolve[{eq1, y[x0] == y0}, y, x]

{{y → Function[{x}, E^(x - x0) y0]}}
Differential equations are further classified as linear and nonlinear. An equation
is said to be linear if the function f in (7.2.1) involves y and its derivatives
linearly. Linear homogeneous differential equations possess the important
property that if y₁(x), y₂(x), …, y_m(x) are any solutions, then so is

y(x) = C₁ y₁(x) + C₂ y₂(x) + … + C_m y_m(x) = Σ_{k=1}^{m} C_k y_k(x)

for arbitrary constants C_k. A simple second-order equation is
. A simple secondorder equation is
eq2 = ô
x,x
y#x' == y#x'
y
··
[x] = y[x]
It is easily verified that eˣ and e⁻ˣ are solutions of this equation, and hence by
linearity the following sum is also a solution:

sol2 = y → Function[x, C[1] E^x + C[2] E^(-x)]

y → Function[x, C[1] E^x + C[2] E^(-x)]

which can be verified by a direct substitution of the solution into the equation

eq2 /. sol2

True
Two solutions y₁ and y₂ of a second-order linear differential equation are said
to be linearly independent if the Wronskian of the solutions does not vanish, the
Wronskian being defined by

W(y₁, y₂) = y₁ y₂' - y₂ y₁' = det( y₁  y₂ ; y₁'  y₂' ).
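For the pair eˣ, e⁻ˣ from the previous example the Wronskian is the constant -2, so the two solutions are linearly independent; a quick numerical check in Python (ours, not the lecture's code):

```python
import numpy as np

def wronskian(y1, dy1, y2, dy2, x):
    """W(y1, y2) = y1 y2' - y2 y1' evaluated at the points x."""
    return y1(x) * dy2(x) - y2(x) * dy1(x)

x = np.linspace(0.0, 1.0, 5)
W = wronskian(np.exp, np.exp,
              lambda t: np.exp(-t), lambda t: -np.exp(-t), x)
# W equals -2 at every point, hence never vanishes
```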
The concept of linear independence can be extended to the solutions of
equations of higher order. If y₁(x), y₂(x), …, yₙ(x) are n linearly independent
solutions of a homogeneous differential equation of order n, then

y(x) = Σ_{k=1}^{n} C_k y_k(x)

is called a general solution.

Slide 2 of 17
Characteristic Method
Among linear equations, those with constant coefficients are particularly useful
since they lend themselves to a simple treatment. We write the n-th order linear
differential equation with constant coefficients in the form

L y = y⁽ⁿ⁾ + a_{n-1} y⁽ⁿ⁻¹⁾ + … + a₀ y⁽⁰⁾ = 0
where the aᵢ are assumed to be real. If we seek solutions of (7.2.6) in the form
e^(β x), then direct substitution shows that β must satisfy the polynomial equation

βⁿ + a_{n-1} βⁿ⁻¹ + … + a₀ = 0
This is called the characteristic equation of the n-th order differential equation
(7.2.6). If the equation (7.2.7) has n distinct roots βᵢ (i = 1, 2, …, n), then it can
be shown that

y(x) = Σ_{k=1}^{n} C_k e^(β_k x),

where the C_k are arbitrary constants, is the general solution of (7.2.6). If
β₁ = α + iω is a complex root of (7.2.7), so is its conjugate, β₂ = α - iω.
Corresponding to such a pair of conjugate-complex roots are two solutions
y₁ = e^(α x) cos(ω x) and y₂ = e^(α x) sin(ω x), which are linearly independent. When
(7.2.7) has multiple roots, special techniques are available for obtaining
linearly independent solutions. In particular, if β₁ is a double root of (7.2.7),
then y₁ = e^(β₁ x) and y₂ = x e^(β₁ x) are linearly independent solutions of (7.2.6). For
the special equation
eq3 = ∂_{x,x} y[x] + a^2 y[x] == 0

a² y[x] + y''[x] == 0
the characteristic equation is
chareq =
 Thread[(eq3 /. y → Function[x, E^(ρ x)])/E^(ρ x), Equal] // Simplify

a² + ρ² == 0

with roots

solCar3 = Solve[chareq, ρ]

{{ρ → -I a}, {ρ → I a}}

and its general solution is

sol = Fold[Plus, 0, MapThread[(#2 E^(ρ x) /. #1) &, {solCar3, {C[1], I C[2]}}]]

E^(-I a x) C[1] + I E^(I a x) C[2]

which can be represented by trigonometric functions

sol // ExpToTrig // Simplify

(C[1] + I C[2]) Cos[a x] - (I C[1] + C[2]) Sin[a x]
Finally, if Eq. (7.2.6) is linear but nonhomogeneous, i.e., if

L y = y⁽ⁿ⁾ + a_{n-1} y⁽ⁿ⁻¹⁾ + … + a₀ y⁽⁰⁾ = g(x)

and if ψ(x) is a particular solution of (7.2.9), i.e., if

L ψ(x) = g(x),

then the general solution of (7.2.9), assuming that the roots of (7.2.7) are
distinct, is

y(x) = ψ(x) + Σ_{k=1}^{n} C_k e^(β_k x)
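The recipe can be checked numerically. A Python sketch (ours, not the lecture's Mathematica) for the constant-coefficient equation y'' - 4 y' + 3 y = 0, whose characteristic roots are 1 and 3:

```python
import numpy as np

# Characteristic equation beta^2 - 4 beta + 3 = 0 for y'' - 4 y' + 3 y = 0;
# its roots give the exponentials of the general solution C1 e^x + C2 e^(3x).
beta = np.sort(np.roots([1.0, -4.0, 3.0]))   # [1., 3.]

# Check: y = e^(beta x) satisfies the ODE for each root.
x = np.linspace(0.0, 1.0, 7)
for b in beta:
    y, dy, d2y = np.exp(b*x), b*np.exp(b*x), b*b*np.exp(b*x)
    residual = d2y - 4*dy + 3*y              # should vanish identically
```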
Slide 3 of 17
Example: Characteristic Method
Find the solution of the equation
eq4 = ∂_{x,x} y[x] - 4 ∂ₓ y[x] + 3 y[x] == x

3 y[x] - 4 y'[x] + y''[x] == x
satisfying the initial conditions
initialConditions = {y[0] == 4/9, y'[0] == 7/3}

{y[0] == 4/9, y'[0] == 7/3}
To find a particular solution ψ(x), we try

particularSolution = y → Function[x, a x + b]

y → Function[x, a x + b]

since the right side is a polynomial of degree ≤ 1 and the left side is such a
polynomial whenever y = y(x) is. Substituting this ansatz into the ODE, we find
a polynomial relation

ansatz1 = eq4 /. particularSolution

-4 a + 3 (b + a x) == x
whose coefficients should vanish to satisfy the relation. The resulting equations
for the coefficients are
coefficentEquations =
 Thread[Flatten[Normal[CoefficientArrays[ansatz1, x]]] == 0]

{-4 a + 3 b == 0, -1 + 3 a == 0}

which results in

solCoeff = Solve[coefficentEquations, {a, b}]

{{a → 1/3, b → 4/9}}
Hence the particular solution of the ODE is given by

particularSolution = particularSolution /. solCoeff

{{y → Function[x, x/3 + 4/9]}}
To find solutions of the homogeneous equation

homEq4 = First[eq4] == 0

3 y[x] - 4 y'[x] + y''[x] == 0

we examine the characteristic equation

charEq =
 Thread[(homEq4 /. y → Function[x, E^(ρ x)])/E^(ρ x), Equal] // Simplify

3 + ρ² == 4 ρ
Its roots are

solCharEq = Solve[charEq, ρ] // Flatten

{ρ → 1, ρ → 3}

Hence the two linearly independent solutions of the homogeneous system are

homogenousSol =
 Fold[Plus, 0, MapThread[(#2 E^(ρ x) /. #1) &, {solCharEq, {C[1], C[2]}}]]

E^x C[1] + E^(3 x) C[2]
The general solution of the equation thus is

generalSolution = (y[x] /. particularSolution) + homogenousSol

4/9 + x/3 + E^x C[1] + E^(3 x) C[2]
To find the solution satisfying the initial conditions, we must have

ic1 = (generalSolution /. x → 0) == initialConditions[[1, 2]]

4/9 + C[1] + C[2] == 4/9

and

ic2 = ((∂ₓ generalSolution) /. x → 0) == initialConditions[[2, 2]]

1/3 + C[1] + 3 C[2] == 7/3
defining a system of determining equations for the unknown constants C[1] and
C[2]. The solution of these equations is

solDet = Solve[{ic1, ic2}, {C[1], C[2]}] // Flatten

{C[1] → -1, C[2] → 1}
Hence the desired solution is

solution = y → Function[x, $Y] /. $Y → (generalSolution /. solDet)

y → Function[x, 4/9 - E^x + E^(3 x) + x/3]
This solution can be used to verify the original ODE just by inserting it into the
equation

eq4 /. solution // Simplify

True

demonstrating that the derived solution is in fact a solution.

Slide 4 of 17
Numerical Integration by Taylor Series
We are now prepared to consider numerical methods for integrating differential
equations. We shall first consider a first-order initial-value differential equation
of the form

y'(x) = f(x, y(x)) with y(x₀) = y₀.

The function f may be linear or nonlinear, but we assume that f is sufficiently
differentiable with respect to both x and y. It is known that (7.3.1) possesses a
unique solution if ∂f/∂y is continuous on the domain of interest. If y(x) is the
exact solution of (7.3.1), we can expand y(x) into a Taylor series about the
point x = x₀:
ser1 = Series[y[x], {x, x0, 3}]

y[x0] + y'[x0] (x - x0) + (1/2) y''[x0] (x - x0)² + (1/6) y⁽³⁾[x0] (x - x0)³ + O[x - x0]⁴
The derivatives in this expansion are not known explicitly, since the solution is
not known. However, if f is sufficiently differentiable, they can be obtained by
taking the total derivative of (7.3.1) with respect to x, keeping in mind that y is
itself a function of x. Thus we obtain for the first few derivatives:

t0 = {f[x, y[x]]};
Do[t0 = Append[t0, ∂ₓ Last[t0] /. y'[x] → f[x, y[x]]], {i, 1, 3}];
t0 // TableForm

f[x, y[x]]
f[x, y[x]] f^(0,1)[x, y[x]] + f^(1,0)[x, y[x]]
f^(0,1)[x, y[x]] (f[x, y[x]] f^(0,1)[x, y[x]] + f^(1,0)[x, y[x]]) + f[x, y[x]] (f[x, y[x]] f^(0,2)[x, y[x]] + f^(1,1)[x, y[x]]) + f[x, y[x]] f^(1,1)[x, y[x]] + f^(2,0)[x, y[x]]
…
Continuing in this manner, we can express any derivative of y in terms of f(x, y)
and its partial derivatives. It is already clear, however, that unless f(x, y) is a
very simple function, the higher total derivatives become increasingly complex.
For practical reasons, then, one must limit the number of terms in the Taylor
expansion of the right-hand side of (7.3.1) to a reasonable number, and this
restriction leads to a restriction on the values of x for which the expansion is a
reasonable approximation. If we assume that the truncated series (8.20)
yields a good approximation for a step of length h, that is, for x - x₀ = h, we can
then evaluate y at x₀ + h; reevaluate the derivatives y', y'', etc., at x = x₀ + h;
and then use (8.20) to proceed to the next step. If we continue in this manner,
we will obtain a discrete set of values yₙ which are approximations to the true
solution at the points xₙ = x₀ + n h (n = 0, 1, 2, …). In this chapter we shall
always denote the value of the exact solution at a point xₙ by y(xₙ) and that of
an approximate solution by yₙ.
In order to formalize this procedure, we first introduce the operator

T_k(x, y) = f(x, y) + (h/2!) f'(x, y) + … + (h^(k-1)/k!) f^(k-1)(x, y) with k = 1, 2, …
TaylorFormula[x_, f_, x0_, y0_, n_] := Block[{t0},
  t0 = {f};
  Do[t0 = Append[t0, ∂ₓ Last[t0] /. y'[x] → f], {i, 1, n}];
  rule1 =
   Map[Apply[Rule, #] &,
    Transpose[{Table[D[y[x], {x, i}], {i, 0, n}], t0}]] /. x → x0;
  Normal[Series[y[x], {x, x0, n}]] /. rule1 /. {y[x0] → y0, x - x0 → h}]

h TaylorFormula[x, f[x, y[x]], x0, y0, 0]

h f[x0, y0]
where we assume that a fixed step size h is being used, and where f^(j) denotes
the j-th total derivative of the function f(x, y(x)) with respect to x. We can then
state the following algorithm.

Slide 5 of 17
Taylor Algorithm
Algorithm Taylor: Taylor's algorithm of order k to find an approximate solution
of the differential equation

y' = f(x, y), y(a) = y₀

over an interval [a, b].

Choose a step h = (b - a)/N. Set

xₙ = a + n h with n = 0, 1, 2, …, N.

Generate approximations yₙ to y(xₙ) from the recursion

yₙ₊₁ = yₙ + h T_k(xₙ, yₙ) for n = 0, 1, 2, …, N - 1,

where T_k(x, y) is defined by (7.3.2).
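For the test equation y' = y every total derivative of f along the solution equals y again, so T_k collapses to a finite sum and one step multiplies y by 1 + h + h²/2! + … + hᵏ/k!. A Python sketch of the resulting stepper (ours, not the lecture's Mathematica implementation):

```python
import math

def taylor_step_exp(y, h, k):
    """One step of Taylor's algorithm of order k for y' = y,
    where every total derivative f^(j) along the solution equals y."""
    T = sum(h**j / math.factorial(j + 1) for j in range(k)) * y
    return y + h * T

def taylor_solve_exp(y0, a, b, n, k):
    y, h = y0, (b - a) / n
    for _ in range(n):
        y = taylor_step_exp(y, h, k)
    return y

# y' = y, y(0) = 1 on [0, 1]: order 4 with only 10 steps is already
# very close to the exact value e = 2.71828...
approx = taylor_solve_exp(1.0, 0.0, 1.0, 10, 4)
```

With k = 1 the stepper reduces to Euler's method, yₙ₊₁ = yₙ(1 + h).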

Slide 6 of 17
Taylor Function
The discussed algorithm is implemented in the following function TaylorSolve,
which uses the ODE and its initial condition as its main input information. The
dependent variable is defined in the second slot. The third slot contains the
independent variable and the interval [a, b] on which the solution is derived.
Nmax specifies the maximal number of subintervals in the interval [a, b]. The
orderK parameter defines the order of approximation in the Taylor series
expansion.
In[8]:=
TaylorSolve[eq_List, depend_, {independ_, a_, b_}, Nmax_, orderK_] :=
 Block[{f$h, h, x0, yn, lis, yn1, tylor},
  (* extract information from equation *)
  f$h = Last[First[eq]];
  y0 = Last[Last[eq]];
  (* fix step length *)
  h = (b - a)/Nmax;
  (* initialize iteration *)
  x0 = a;
  yn = y0;
  xn = x0;
  lis = {{x0, yn}};
  (* generate Taylor formula *)
  tylor = TaylorFormula[independ, f$h, ξ, η, orderK];
  (* iterate the formula numerically *)
  Do[
   yn1 = yn + N[h tylor /. {ξ → xn, η → yn}];
   xn = x0 + (n + 1) h;
   AppendTo[lis, {xn, yn1}];
   yn = yn1, {n, 0, Nmax - 1}];
  (* return the results *)
  y → Interpolation[lis]]

Slide 7 of 17
Example: Taylor Function
The following line shows an application of the function TylorSolve to the initial
value problem
y' =
(x÷y(x))
2
1+y(x)
with y(0) = 1 on the interval [0, 2].
In[9]:=
TaylorSolve%ô
x
y#x' ==
+x  y#x'/
2
1 + y#x'
, y#0' == 1!, y,
x, 0, 2, 38, 1)
Out[9]=
$Aborted
The returned solution represents the numerical solution pairs {xₙ, yₙ} by means
of an interpolation function. The accuracy of the algorithm depends on Nmax
and on k, the order of approximation. If we keep both of these numbers low, we
can derive the following approximations of the solution

s1 = Table[TaylorSolve[{∂ₓ y[x] == (x - y[x])²/(1 + y[x]), y[0] == 1}, y, {x, 0, 2}, 8, k], {k, 0, 3}]

{y → InterpolatingFunction[{{0., 2.}}, <>],
 y → InterpolatingFunction[{{0., 2.}}, <>],
 y → InterpolatingFunction[{{0., 2.}}, <>],
 y → InterpolatingFunction[{{0., 2.}}, <>]}
These four solutions can be compared with a more sophisticated approach
used by the Mathematica function NDSolve, which generates the solution of
the same initial value problem by

sh = NDSolve[{∂ₓ y[x] == (x - y[x])²/(1 + y[x]), y[0] == 1}, y, {x, 0, 2}]

{{y → InterpolatingFunction[{{0., 2.}}, <>]}}
Plotting all five solutions in a common plot allows us to compare the solutions.

Show[Join[
  Map[Plot[y[x] /. #, {x, 0, 2}, FrameLabel → {x, y}] &, s1],
  {Plot[y[x] /. sh, {x, 0, 2}, PlotStyle → RGBColor[0, 0, 1]]}]]

[Figure: the four Taylor approximations and the NDSolve solution on 0 ≤ x ≤ 2, with y ranging from about 1.00 to 1.30.]
The different approximation orders of the Taylor algorithm are located either
above or below the Mathematica solution. The lowest order approximation,
corresponding to an Euler approximation, lies above the Mathematica
solution. The higher order approximations of the Taylor series are below the
Mathematica solution. The Taylor approximations deviate from each other only
in the first approximation; the higher approximations are very close to
each other. This means that in a Taylor approximation of ODEs it is sufficient to
work with a second order approximation. Higher order approximations do not
increase the accuracy if the time steps are large.

Slide 8 of 17
Error of the Taylor Approximation
Taylor's algorithm, and other methods which calculate y at x = xₙ₊₁ by using
only information about y and y' at a single point x = xₙ, are frequently called
one-step methods.

Taylor's theorem with remainder shows that the local error of Taylor's algorithm
of order k is

E = (h^(k+1)/(k+1)!) f^(k)(ξ, y(ξ)) = (h^(k+1)/(k+1)!) y^(k+1)(ξ) with xₙ < ξ < xₙ + h.

The Taylor algorithm is said to be of order k if the local error E as defined
above is O(h^(k+1)).

Slide 9 of 17
Euler Approximation
On setting k = 0 in the algorithm above, we obtain Euler's method and its local
error,

yₙ₊₁ = yₙ + h f(xₙ, yₙ) with E = (h²/2) y''(ξ).
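A minimal Python sketch of Euler's recursion (ours) illustrating the first-order behavior: halving h roughly halves the error.

```python
import math

def euler(f, y0, a, b, n):
    """Euler's method: y_{k+1} = y_k + h f(x_k, y_k) over [a, b] in n steps."""
    y, h, x = y0, (b - a) / n, a
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

# y' = y, y(0) = 1 on [0, 1]; the exact value at x = 1 is e.
errors = [abs(euler(lambda x, y: y, 1.0, 0.0, 1.0, n) - math.e)
          for n in (100, 200, 400)]
# successive error ratios are close to 1/2: first-order convergence
```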

Slide 10 of 17
Example: Euler Method
To illustrate Euler's method, consider the initial-value problem

y' = y with y(0) = 1

On applying (7.3.3) with h = 0.01 and retaining six decimal places, we obtain

sol2 = TaylorSolve[{y'[x] == y[x], y[0] == 1}, y, {x, 0, 1}, 100, 0]

{y -> InterpolatingFunction[{{0., 1.}}, <>]}

Since the exact solution of this equation is y = e^x, we can compare Euler's
algorithm with the exact solution. It is clear that, to obtain more accuracy with
Euler's method, we must take a considerably smaller value for h; this is
demonstrated in the following sequence of calculations.
[Plots: the Euler solution for successively smaller step sizes (e.g. h = 0.0116279) together with the exact solution, and the corresponding errors, which reach about 0.10 in magnitude at x = 1.]
Because of the relatively small step size required, Euler's method is not
commonly used for integrating differential equations.
We could, of course, apply Taylor's algorithm of higher order to obtain better
accuracy, and in general, we would expect that the higher the order of the
algorithm, the greater the accuracy for a given step size. If f (x, y) is a relatively
simple function of x and y, then it is often possible to generate the required
derivatives relatively cheaply on a computer by employing symbolic
differentiation, or else by taking advantage of any particular properties the
function f (x, y) may have. However, the necessity of calculating the higher
derivatives makes Taylor's algorithm completely unsuitable on high-speed
computers for general integration purposes. Nevertheless, it is of great
theoretical interest because most of the practical methods attempt to achieve
the same accuracy as a Taylor algorithm of a given order without the
disadvantage of having to calculate the higher derivatives. Although the general
Taylor algorithm is hardly ever used for practical purposes, the special case of
Euler's method will be considered in more detail for its theoretical implications.
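The symbolic-differentiation route mentioned above can be sketched with SymPy (an illustration of the idea, not the notebook's own code): differentiate f(x, y(x)) totally with respect to x and replace y'(x) by f(x, y(x)) after each step, which reproduces the expressions for y'', y^(3), ... needed by the Taylor algorithm.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
f = sp.Function('f')

# derivs[k-1] holds y^(k) expressed through f and its partial derivatives
d = f(x, y(x))
derivs = [d]
for _ in range(2):
    # total derivative along solutions: d/dx, then y'(x) -> f(x, y(x))
    d = sp.diff(d, x).subs(sp.Derivative(y(x), x), f(x, y(x)))
    derivs.append(sp.simplify(d))

for k, expr in enumerate(derivs, start=1):
    print(f"y^({k}) =", expr)
```

This is exactly the computation the notebook performs below with `rule1`; the cost of carrying these expressions along is what makes high-order Taylor methods expensive.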
Slide 11 of 17
Example: Taylor Algorithm
Using Taylor's series, find the solution of the differential equation

eq5 = y'[x] == 1/x^2 - y[x]/x - y[x]^2

y'[x] == 1/x^2 - y[x]/x - y[x]^2

with the initial value

initial = y[1] == -1

y[1] == -1

for the interval [1, 2]. The numerical solution follows by applying TaylorSolve to
the ODE

sol5 = TaylorSolve[eq5, initial, y, {x, 1, 2}, 17, 2]

{y -> InterpolatingFunction[{{1., 2.}}, <>]}

The graphical representation shows that the solution agrees fairly well with the
exact solution y(x) = -1/x.
Plot[{y[x] /. sol5, -1/x}, {x, 1, 2}, FrameLabel -> {x, y},
 PlotStyle -> {RGBColor[1, 0, 0], RGBColor[0, 0, 1]},
 PlotLabel -> "h = " <> ToString[1./17]]

[Plot: numerical and exact solution on [1, 2]; h = 0.0588235]
Increasing the total number of subintervals decreases the time step, so that the
solution becomes fairly good for an approximation order of k = 2.
[Plot: numerical and exact solution on [1, 2]; h = 0.00588235]
However, if we increase the interval and keep the total number of subdivisions,
we observe that the numerical solution collapses.
sol5 = TaylorSolve[eq5, initial, y, {x, 1, 20}, 170, 2];
Plot[{y[x] /. sol5, -1/x}, {x, 1, 20}, FrameLabel -> {x, y},
 PlotStyle -> {RGBColor[1, 0, 0], RGBColor[0, 0, 1]},
 PlotLabel -> "h = " <> ToString[1./170]]
General::ovfl : Overflow occurred in computation. >
General::ovfl : Overflow occurred in computation. >
[Plot: the numerical solution diverges from the exact solution -1/x on [1, 20]; h = 0.00588235]
Increasing the total number of integration steps and the approximation order
can resolve this problem, at the price of a dramatic increase in calculation time.
Timing[
 sol5 = TaylorSolve[eq5, initial, y, {x, 1, 20}, 2700, 7];
 Plot[{y[x] /. sol5, -1/x}, {x, 1, 20}, FrameLabel -> {x, y},
  PlotStyle -> {RGBColor[1, 0, 0], RGBColor[0, 0, 1]},
  PlotLabel -> "h = " <> ToString[1./2700]]]

{159.81, [Plot: numerical and exact solution on [1, 20]; h = 0.00037037]}
Slide 12 of 17
Runge-Kutta Methods
As mentioned previously, Euler's method is not very useful in practical
problems because it requires a very small step size for reasonable accuracy.
Taylor's algorithm of higher order is unacceptable as a general-purpose
procedure because of the need to obtain higher total derivatives of y(x). The
Runge-Kutta methods attempt to obtain greater accuracy, and at the same time
avoid the need for higher derivatives, by evaluating the function f(x, y) at
selected points on each subinterval. We shall derive here the simplest of the
Runge-Kutta methods. A formula of the following form is sought:

y_(n+1) = y_n + a k_1 + b k_2

where

k_1 = h f(x_n, y_n)
k_2 = h f(x_n + α h, y_n + β k_1)

and a, b, α, and β are constants to be determined so that (7.4.1) will agree with
the Taylor algorithm of as high an order as possible.
Slide 13 of 17
Derivation of Runge-Kutta's Method
On expanding y(x_(n+1)) in a Taylor series through terms of order h^3, we obtain

xn =.; yn =.
ser1 = Series[y[xn + h], {h, 0, 3}]

y[xn] + y'[xn] h + 1/2 y''[xn] h^2 + 1/6 y^(3)[xn] h^3 + O[h]^4
t0 = {f[x, y[x]]};
Do[t0 = Append[t0, D[Last[t0], x] /. y'[x] -> f[x, y[x]]], {i, 1, 2}];
rule1 = Map[Apply[Rule, #] &,
    Transpose[{Table[D[y[x], {x, i}], {i, 1, 3}], t0}]] /.
   {x -> xn, y[_] -> yn} // Simplify

{y'[xn] -> f[xn, yn],
 y''[xn] -> f[xn, yn] f^(0,1)[xn, yn] + f^(1,0)[xn, yn],
 y^(3)[xn] -> f[xn, yn]^2 f^(0,2)[xn, yn] + f^(0,1)[xn, yn] f^(1,0)[xn, yn] +
   f[xn, yn] (f^(0,1)[xn, yn]^2 + 2 f^(1,1)[xn, yn]) + f^(2,0)[xn, yn]}
ser01 = ser1 /. rule1

y[xn] + f[xn, yn] h +
 1/2 (f[xn, yn] f^(0,1)[xn, yn] + f^(1,0)[xn, yn]) h^2 +
 1/6 (f[xn, yn]^2 f^(0,2)[xn, yn] + f^(0,1)[xn, yn] f^(1,0)[xn, yn] +
    f[xn, yn] (f^(0,1)[xn, yn]^2 + 2 f^(1,1)[xn, yn]) + f^(2,0)[xn, yn]) h^3 + O[h]^4

where we have used the Taylor expansions, and all functions involved are to be
evaluated at (x_n, y_n).
On the other hand, using Taylor's expansion for functions of two variables, we
find that
k2Rule = k2 ->
  h (Normal[Series[f[xn + α h, yn + β k1], {h, 0, 2}, {k1, 0, 2}]] /.
      {Derivative[2, 2][f][xn, yn] -> 0, Derivative[1, 2][f][xn, yn] -> 0,
       Derivative[2, 1][f][xn, yn] -> 0} // Simplify)

k2 -> h (f[xn, yn] + k1 β f^(0,1)[xn, yn] + 1/2 k1^2 β^2 f^(0,2)[xn, yn] +
    h α f^(1,0)[xn, yn] + h k1 α β f^(1,1)[xn, yn] + 1/2 h^2 α^2 f^(2,0)[xn, yn])

where all derivatives are evaluated at (x_n, y_n).
If we now substitute this expression for k2 into (7.4.1) and note that
k_1 = h f(x_n, y_n), we find upon rearrangement in powers of h that
yn1 =.
yn1 == Expand[yn + a h f[xn, yn] + b k2 /. k2Rule /. k1 -> h f[xn, yn], h]

yn1 == yn + a h f[xn, yn] + b h f[xn, yn] +
  b h^2 β f[xn, yn] f^(0,1)[xn, yn] + 1/2 b h^3 β^2 f[xn, yn]^2 f^(0,2)[xn, yn] +
  b h^2 α f^(1,0)[xn, yn] + b h^3 α β f[xn, yn] f^(1,1)[xn, yn] +
  1/2 b h^3 α^2 f^(2,0)[xn, yn]
form1 = CoefficientList[
  Expand[yn + a h f[xn, yn] + b k2 /. k2Rule /. k1 -> h f[xn, yn]], h]

{yn, a f[xn, yn] + b f[xn, yn],
 b β f[xn, yn] f^(0,1)[xn, yn] + b α f^(1,0)[xn, yn],
 1/2 b β^2 f[xn, yn]^2 f^(0,2)[xn, yn] + b α β f[xn, yn] f^(1,1)[xn, yn] +
  1/2 b α^2 f^(2,0)[xn, yn]}
form2 = CoefficientList[ser01, h]

{y[xn], f[xn, yn],
 1/2 (f[xn, yn] f^(0,1)[xn, yn] + f^(1,0)[xn, yn]),
 1/6 (f[xn, yn]^2 f^(0,2)[xn, yn] + f^(0,1)[xn, yn] f^(1,0)[xn, yn] +
   f[xn, yn] (f^(0,1)[xn, yn]^2 + 2 f^(1,1)[xn, yn]) + f^(2,0)[xn, yn])}
On comparing this with (7.4.1) we see that, to make the corresponding powers
of h and h^2 agree, we must have

Thread[form1 == form2] // TableForm

yn == y[xn]
a f[xn, yn] + b f[xn, yn] == f[xn, yn]
b β f[xn, yn] f^(0,1)[xn, yn] + b α f^(1,0)[xn, yn] ==
  1/2 (f[xn, yn] f^(0,1)[xn, yn] + f^(1,0)[xn, yn])
1/2 b β^2 f[xn, yn]^2 f^(0,2)[xn, yn] + b α β f[xn, yn] f^(1,1)[xn, yn] +
  1/2 b α^2 f^(2,0)[xn, yn] ==
  1/6 (f[xn, yn]^2 f^(0,2)[xn, yn] + f^(0,1)[xn, yn] f^(1,0)[xn, yn] +
    f[xn, yn] (f^(0,1)[xn, yn]^2 + 2 f^(1,1)[xn, yn]) + f^(2,0)[xn, yn])
solred = Reduce[{a + b == 1, α b == 1/2, β b == 1/2}, {a, b, α, β}]

b == 1 - a && -1 + a != 0 && α == -1/(2 (-1 + a)) && β == α
Although we have four unknowns, we have only three equations, and hence we
still have one degree of freedom in the solution of this system. We might hope
to use this additional degree of freedom to obtain agreement of the
coefficients in the h^3 terms. It is obvious, however, that this is impossible for
all functions f(x, y).
There are many solutions to this system, the simplest perhaps being
solred /. a -> 1/2

b == 1/2 && α == 1 && β == α
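The order conditions a + b = 1, α b = 1/2, β b = 1/2 can also be checked numerically. The Python sketch below (my own naming; the two parameter sets are the improved-Euler and midpoint choices of the free parameter) confirms that any member of the family has an O(h^3) local error: halving h divides the one-step error by roughly 8.

```python
import math

def rk2_step(f, x, y, h, a, b, alpha, beta):
    # Generic two-stage formula y_{n+1} = y_n + a k1 + b k2
    k1 = h * f(x, y)
    k2 = h * f(x + alpha * h, y + beta * k1)
    return y + a * k1 + b * k2

f = lambda x, y: y          # y' = y, exact solution exp(x)
for (a, b, al, be) in [(0.5, 0.5, 1.0, 1.0),   # improved Euler (a = 1/2)
                       (0.0, 1.0, 0.5, 0.5)]:  # midpoint rule (a = 0)
    e1 = abs(rk2_step(f, 0, 1, 0.1, a, b, al, be) - math.exp(0.1))
    e2 = abs(rk2_step(f, 0, 1, 0.05, a, b, al, be) - math.exp(0.05))
    print(a, b, e1 / e2)   # ratio near 2^3 = 8 for an O(h^3) local error
```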
Slide 14 of 17
Algorithm Runge-Kutta

Algorithm Runge-Kutta: Runge-Kutta method of order 2 for the equation

y' = f(x, y)
y(x_0) = y_0

generate approximations y_n to y(x_0 + n h), for h fixed and n = 0, 1, …, using the
recursion formula

y_(n+1) = y_n + 1/2 (k_1 + k_2)

with

k_1 = h f(x_n, y_n)
k_2 = h f(x_n + h, y_n + k_1)
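In Python the order-2 recursion reads as follows (a sketch with my own function name; the test problem y' = y on [0, 1] with ten steps is my choice):

```python
import math

def rk2(f, x0, y0, h, steps):
    # Runge-Kutta method of order 2: y_{n+1} = y_n + (k1 + k2)/2
    x, y = x0, y0
    for _ in range(steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h, y + k1)
        y += 0.5 * (k1 + k2)
        x += h
    return y

# y' = y, y(0) = 1 on [0, 1]; exact value is e
err = abs(rk2(lambda x, y: y, 0.0, 1.0, 0.1, 10) - math.e)
print(err)
```

Even with the coarse step h = 0.1 the error at x = 1 is only a few times 10^-3, already far better than Euler's method at the same cost per interval length.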
The local error of (7.4.2) is of the form

y(x_(n+1)) - y_(n+1) = (h^3/12) (f_xx + 2 f f_xy + f^2 f_yy - 2 f_x f_y - 2 f f_y^2) + O(h^4)
The complexity of the coefficient in this error term is characteristic of all
Runge-Kutta methods and constitutes one of their least desirable features, since
local error estimates are very difficult to obtain. The local error of (7.4.2) is,
however, of order h^3, whereas that of Euler's method is of order h^2. We can
therefore expect to be able to use a larger step size with (7.4.2). The price we
pay for this is that we must evaluate the function f(x, y) twice for each step of
the integration. Formulas of the Runge-Kutta type for any order can be derived
by the method used above. However, the derivations become exceedingly
complicated. The most popular and most commonly used formula of this type is
contained in the following algorithm.
Algorithm Runge-Kutta: Runge-Kutta method of order 4 for the equation
y' = f(x, y)
y(x_0) = y_0

generate approximations y_n to y(x_0 + n h), for h fixed and n = 0, 1, …, using the
recursion formula

y_(n+1) = y_n + 1/6 (k_1 + 2 k_2 + 2 k_3 + k_4)

with

k_1 = h f(x_n, y_n)
k_2 = h f(x_n + h/2, y_n + 1/2 k_1)
k_3 = h f(x_n + h/2, y_n + 1/2 k_2)
k_4 = h f(x_n + h, y_n + k_3)
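A single step of this scheme can be checked numerically (Python sketch, my helper names): halving h should shrink the one-step error on y' = y by about 2^5 = 32, reflecting the O(h^5) local error.

```python
import math

def rk4_step(f, x, y, h):
    # One classical fourth-order Runge-Kutta step
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: y                       # y' = y, exact solution exp(x)
e1 = abs(rk4_step(f, 0, 1, 0.1) - math.exp(0.1))
e2 = abs(rk4_step(f, 0, 1, 0.05) - math.exp(0.05))
print(e1 / e2)   # ratio near 2^5 = 32 for an O(h^5) local error
```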
The local discretization error of this Algorithm is O(h^5). Again the price we pay
for the favorable discretization error is that four function evaluations are
required per step. This price may be considerable in computer time for those
problems in which the function f(x, y) is complicated. The Runge-Kutta methods
have additional disadvantages, which will be discussed later. Formula (7.4.5) is
widely used in practice with considerable success. It has the important
advantage that it is self-starting; i.e., it requires only the value of y at a point
x = x_n to find y and y' at x = x_(n+1).
Slide 15 of 17
4th Order Runge-Kutta Function

The following function contains an implementation of a 4th order Runge-Kutta
algorithm.
RungeKutta[eq_List, depend_, independ_, a_, b_, Nmax_] :=
 Block[{f$h, h, x0, y0, xn, yn, lis, k1, k2, k3, k4},
  (* extract information from equation *)
  f$h = Last[First[eq]];
  y0 = Last[Last[eq]];
  (* fix step length *)
  h = (b - a)/Nmax;
  (* initialize iteration *)
  x0 = a;
  yn = y0;
  xn = x0;
  lis = {{x0, yn}};
  (* iterate the 4th order Runge-Kutta algorithm *)
  Do[
   k1 = h f$h /. {x -> xn, y[x] -> yn};
   k2 = h f$h /. {x -> xn + h/2, y[x] -> yn + 1/2 k1};
   k3 = h f$h /. {x -> xn + h/2, y[x] -> yn + 1/2 k2};
   k4 = h f$h /. {x -> xn + h, y[x] -> yn + k3};
   yn = N[yn + 1/6 (k1 + 2 k2 + 2 k3 + k4)];
   xn = N[x0 + n h];
   AppendTo[lis, {xn, yn}], {n, 1, Nmax}];
  depend -> Interpolation[lis]]
Slide 16 of 17
Example: RungeKutta
The folowing initial value problem is solved by the 4
th
order RKalgorithm
y' = y with y(0) = 1
The line below carries out the iteration
sol02 = RungeKutta#ô
x
y#x' == y#x', y#0' == 1, y,
x, 0, 8, 8'
y  InterpolatingFunction[{{0., 8.]], ·>]
Comparing the numerical with the exact solution shows excellent agreement
Plot[{y[x] /. sol02, Exp[x]}, {x, 0, 8},
 PlotStyle -> {RGBColor[1, 0, 0], RGBColor[0, 0, 1]}, FrameLabel -> {x, y}]

[Plot: numerical and exact solution on [0, 8]; both curves grow exponentially and nearly coincide at this scale.]
However, the absolute error of the calculation shows large deviations,
especially for large arguments.
Plot[Evaluate[Abs[(y[x] /. sol02) - Exp[x]]], {x, 0, 8},
 PlotStyle -> RGBColor[1, 0, 1], FrameLabel -> {x, y}]

[Plot: the absolute error on [0, 8], growing to about 50 at x = 8.]
However, increasing the number of integration steps reduces the error
dramatically, by several orders of magnitude.
sol02 = RungeKutta[{y'[x] == y[x], y[0] == 1}, y, x, 0, 8, 100];
Plot[Evaluate[Abs[(y[x] /. sol02) - Exp[x]]], {x, 0, 8},
 PlotStyle -> RGBColor[1, 0, 1], FrameLabel -> {x, y},
 PlotRange -> All, PlotPoints -> 150]

[Plot: the absolute error on [0, 8], now below about 0.008.]
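The step-size dependence seen in the two error plots can be reproduced with a plain Python transcription of the scheme (a sketch; the function and problem setup mirror the example above but are my own code): on [0, 8] with y' = y, going from 8 to 100 steps brings the error at x = 8 from order 10^2 down to below 10^-2, consistent with the O(h^4) global error of the method.

```python
import math

def rk4(f, x0, y0, h, steps):
    # Classical fourth-order Runge-Kutta integration
    x, y = x0, y0
    for _ in range(steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

f = lambda x, y: y                 # y' = y on [0, 8], exact value exp(8)
err8   = abs(rk4(f, 0.0, 1.0, 8 / 8,   8)   - math.exp(8))
err100 = abs(rk4(f, 0.0, 1.0, 8 / 100, 100) - math.exp(8))
print(err8, err100)
```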
Slide 17 of 17
Example: Runge-Kutta Algorithm

Using the RK algorithm, find the solution of the differential equation

eq5 = y'[x] == 1/x^2 - y[x]/x - y[x]^2

y'[x] == 1/x^2 - y[x]/x - y[x]^2

with the initial value

initial = y[1] == -1

y[1] == -1

for the interval [1, 2]. The numerical solution follows by applying RungeKutta to
the ODE

sol5 = RungeKutta[{eq5, initial}, y, x, 1, 2, 17]

{y -> InterpolatingFunction[{{1., 2.}}, <>]}

The graphical representation shows that the solution agrees fairly well with the
exact solution y(x) = -1/x.
Plot[{y[x] /. sol5, -1/x}, {x, 1, 2}, FrameLabel -> {x, y},
 PlotStyle -> {RGBColor[1, 0, 0], RGBColor[0, 0, 1]},
 PlotLabel -> "h = " <> ToString[1./17]]

[Plot: numerical and exact solution on [1, 2]; h = 0.0588235]
However, if we now increase the interval and keep the total number of
subdivisions, we observe that the numerical solution is stable. This was not the
case with the Taylor series approximation, where for the same initial value
problem the procedure became unstable.
sol5 = RungeKutta[{eq5, initial}, y, x, 1, 20, 170];
Plot[{y[x] /. sol5, -1/x}, {x, 1, 20}, FrameLabel -> {x, y},
 PlotStyle -> {RGBColor[1, 0, 0], RGBColor[0, 0, 1]},
 PlotLabel -> "h = " <> ToString[1./170]]

[Plot: numerical and exact solution on [1, 20]; h = 0.00588235]