
Basic Numerical Methods and FreeMat

By Timothy Cyders and Gary Schaefer

Essentially, all models are wrong, but some are useful.


- George Box

We demand guaranteed, rigidly-defined areas of doubt and uncertainty.


- Douglas Adams

About This Document


In a nutshell,

Students are poor. Commercial software packages such as MATLAB are expensive. FreeMat
is free, as in freedom, and as in beer.

Students are poor. Engineering/Mathematics texts are expensive. This document is free as
in freedom, and as in beer.

A lot of basic ideas put forth in most Engineering/Mathematics texts are rather old, and are
generally common knowledge, at least to the educated. Especially with respect to explanations of
commercial software packages such as MATLAB, and introductions/explanations of numerical
methods, texts tend to under-explain and overcharge. My personal experience with college texts
has been that the ones where explanation would have greatly augmented learning lacked such
explanation, but certainly didn't hold back on cost.
This text is a response to that phenomenon. This is a basic introduction to numerical methods,
showing how the commonly used methods work by some simple examples. Also, it's a bit of an
introduction to the language of FreeMat, which is easily interchangeable with commercial
packages such as MATLAB, or is at the least readable pseudo-code for examples to use in other
languages (such as C++ or FORTRAN). Probably the most significant aspect of this document is
that it is free (as in freedom). If there are pictures, asides, explanations or examples that can make
learning better, you can add them and republish the document (citing the original work,
obviously)!
There are certain things that this text is and isn't. This text is a basic introduction. It is by no means
a full-on graduate-level text on the intricacies of numerics and solution of differential equations.
This text, as you'll see, doesn't present much of anything in the way of analytical solution of
differential equations. This text very lightly addresses error and uncertainty, but substantially
addresses examples of basic problems engineers may face. This text is a basic introduction to the
idiosyncrasies of the FreeMat programming environment, and does provide downloadable
examples of every type of problem covered.
This text is not for commercial sale, and may not be printed, copied or modified for the purpose of
selling it. This text is free to copy, distribute, download and disseminate; we encourage exactly
this. We just ask that you give FreeMat a try, and you might fall in love with it like we have.

Table of Contents
Basic Numerical Methods and FreeMat
About This Document
Basic Numerical Methods and FreeMat
  Section 1: Root Finding
    1.1 - The Bisection Method
    1.2 - The Newton-Raphson Method
    1.3 - The Secant Method
    1.4 - Nonlinear Systems of Equations
      The Newton-Raphson Method for Systems of Equations
  Section 2: Numerical Differentiation
  Section 3: Numerical Integration
    3.1 - Rectangular Rule
    3.2 - Trapezoid Rule
    3.3 - Simpson's Rule
    3.4 - Gaussian Quadrature
    3.5 - Monte Carlo Integration
  Section 4: Initial Value Problems (Ordinary Differential Equations)
    4.1 - Euler's Method
    4.2 - Runge-Kutta Methods
    4.3 - Adams-Bashforth Methods
    4.4 - Systems of Differential Equations
  Section 5: Boundary Value Problems
    5.1 - Linear Shooting Method
    5.2 - Shooting Method for Nonlinear Problems
    5.3 - Finite-Difference Methods
  Section 6: Partial Differential Equations and the Finite Element Method

Basic Numerical Methods and FreeMat


Numerical methods provide a way to solve problems quickly and easily compared to analytic
solutions. Whether the goal is integration or solution of complex differential equations, there are
many tools available to reduce the solution of what can be sometimes quite difficult analytical
math to simple algebra and some basic loop programming. FreeMat has a big speed advantage in
simple looping thanks to its new JIT compiler, which can run loops and other simple
programming as fast as, if not much faster than, most commercial packages such as MATLAB.

Section 1: Root Finding


One of the most basic applications of numerical methods is to find the roots of a single equation.
In cases where an equation's solution cannot be obtained by algebraic means (as is the case with
many non-linear equations), there are several methods for finding the roots (solutions) with the
power of a computer and some basic algorithms, which will be discussed here.

1.1 - The Bisection Method


If you have ever searched in a phone book for a name, you've intuitively performed something like
the bisection method. The bisection method is a simple way to find a single root of an equation,
given that the root exists between two bounds (which must be defined at the outset of the
script), and it is the only root between those bounds. The method then reduces these bounds by
half, always selecting the half containing the root, until the bounds are reduced below an
acceptable error limit.
Example: Let's examine the nonlinear equation

x^2/4 - sin(x) = 0

We can quickly notice several things about this equation. First, it can be described as the
intersection of two functions,
f1(x) = x^2/4
f2(x) = sin(x)

Second, we can see by inspecting the graphs of these functions that there are two solutions, one at zero (a
trivial solution) and one near 2. Finally (and most pertinently to the subject under discussion here),
we see that this equation cannot be solved with conventional algebra, so the only way to obtain a
solution without using something like a Taylor series expansion is either graphically or by a
numerical method. We'll approach this problem using the bisection method. The basic operating
principle of this method is as follows:
You are looking for the name Stevens in a phonebook. Pick up the phonebook and open
it to the middle; you will find names beginning with M. Which half of the book is
Stevens in? As Stevens is in the latter half, we then take that half of the book, and split
it in two. The resulting page has the name Reuben on it. Stevens comes after Reuben
(i.e.: Stevens is in the second half), and so we split the second half again. Continue this
process, always selecting the half containing the solution you are looking for, and you
will find the page with Stevens on it in a matter of seconds, even with a large
phonebook. You reduce your bounds by a factor of two each iteration, so the method
converges very quickly!
We can now apply this method to root finding using a while loop with some simple boolean
operators (if-then or true-false statements). First, we will determine which half of our bounds
contains the solution. Then, we'll reduce our bounds by setting a new point for either the upper
limit or lower limit to the midpoint of the bounds, and reiterate the selection process. After just a
few iterations, we will narrow our bounds quickly to the point that we can estimate an answer.
Let's start with a set of bounds that surround the root in question (and only the root in question!),

[1 3]. An example code would look like so:


% bisection.m
%% --- Define Bounds and Error --- %%
hi = 3;         % Upper bound
low = 1;        % Lower bound
epsilon = 2e-5; % Acceptable error for solution
counter = 0;    % Counter (to avoid infinite loops)
limit = 1000;   % Limit number of iterations to avoid infinite loops
%% --- Start loop --- %%
while (hi-low) > epsilon && counter < limit % Reduce bounds until solution or iteration limit reached
    mid = (hi + low) / 2; % Set midpoint halfway between low and hi
    % Evaluate F(x) at mid and hi and multiply the results; a negative
    % product means the function crosses the axis (a root!) between them
    if (((hi*hi/4) - sin(hi))*((mid*mid/4) - sin(mid)) < 0)
        low = mid; % Eliminate the lower half; it doesn't contain the root
    else
        hi = mid;  % Eliminate the upper half; it doesn't contain the root
    end
    counter = counter + 1; % Increase counter by 1 for each iteration
end
mid = (hi + low) / 2 % Set midpoint one last time (to output the solution)

Execute this script, and it will arrive at the answer 1.93375. If we plug this into the original
equation, we get a very small error on the order of 10^-6, in only a millisecond and 17 iterations! To
determine an appropriate number for maximum iterations, remember that your bound width is:

(hi - low)/2^n

where n is the number of iterations completed. To narrow Ayn Rand's book Atlas Shrugged
(645,000 words, approximately 590 words per page, the third largest single-volume work in the
English language) to the two words "to shrug" (page 422, Centennial Edition) would take only

n = ln(645000/2)/ln(2) = ln(322500)/ln(2) ≈ 19

iterations to narrow down to those words alone.


There are a few things to note about this method. First, there are several ways to select which half
contains a solution, but perhaps the simplest is to multiply F(mid)*F(hi). If the solution lies in
between mid and hi, the effect of the function crossing an axis will force the output to be negative

(so we compare the product to 0). Second, if there are multiple roots inside the bounds, the
sequence will converge on one of the original bounds (except in the case of an odd number of
roots, in which case it will converge on only one root). For these reasons, you must know
acceptable bounds before running the script to use this method, so it is somewhat limited.

1.2 - The Newton-Raphson Method


The Newton-Raphson Method (or simply Newton's Method) is another so-called local method to
determine the root of an equation. This method uses a single starting point (as opposed to the
bounds required by the bisection method), and repeatedly uses a derivative to project a line to the
axis of the root in question, as shown below. This results in a very fast convergence in many cases,
but does require the solution of a derivative before the method can be used.

Here's how it works: First, you need to evaluate the function and its derivative. We'll start again
with the same equations we used in the last section:
f(x) = x^2/4 - sin(x)
f'(x) = x/2 - cos(x)

Now, we take our initial guess, find the tangent line to our function F(x), and set our new guess to
the x-intercept of that tangent line. The actual equation to calculate this is:

p(i+1) = p(i) - f(p(i))/f'(p(i))

where i counts iterations, starting at 1. Essentially, the new guess p(i+1) is displaced along the
x-axis from the old guess p(i) by the function value over the slope:

f'(p) = dy/dx, so the step we take along x is

dx = dy / (dy/dx) = f(p)/f'(p)

We continue this process until each new iteration of p changes by less than our
acceptable error (i.e. |p(i+1) - p(i)| < error). Now, to do this in FreeMat, we'll use a while loop again,
because we're not sure just how many iterations we will need. We'll also use a method called
anonymous functions that allows you to pass a function as a variable. This will help condense our
code, and allow for future modification to our program. First, we'll define our function f, and its
derivative fp:
f = @(x) ((x.^2/4) - sin(x));
fp = @(x) (x/2 - cos(x));

Notice the decimal after x in the first relation; this ensures that the matrix elements are squared
element by element, instead of FreeMat trying to square the matrix itself, which is a different
operation altogether, requiring the matrix to have specific properties. Now, at the command window, if you type
--> f(0)
ans = 0
--> f(1)
ans = -0.5915

We can see from the plot of our function that these values are simply the evaluation of
the function at those two points. So now, we can set our error and initial guess, and form a while
statement to do our looping for us:

error = 2e-5; % Set error bound
p = 3;        % Initial guess
i = 1;        % Start counter at 1
p(i+1) = p(i) - f(p(i))/fp(p(i)); % Get initial value for p(2) for the while loop
while abs(p(i+1) - p(i)) > error
    i = i + 1;
    p(i+1) = p(i) - f(p(i))/fp(p(i));
end

Run this script, and it will complete after only 5 iterations, much faster than the bisection method!
The way we did it here was by simply creating new elements in a p matrix at each iteration. In the
end, we have:
--> p'
ans =
3.00000000000000
2.15305769201339
1.95403864200580
1.93397153275207
1.93375378855763
1.93375376282702

Notice that with each iteration, we roughly double the number of accurate digits after our first guess. The last
iteration is accurate to all the decimal places shown in long format. If we plug p(6) into our original
function, we should get something very close to 0:
--> f(p(6))
ans =
5.55111512312578e-016

Our last guess was actually accurate on the level of 1e-16! Newton's method is very powerful, but
you have to know the derivative beforehand. Also, if you pass a minimum or maximum in the
function (or a spot where the derivative approaches 0), this method will quickly diverge without
warning. To get around our derivative problems, we can estimate a derivative numerically and
include some workarounds; convergence won't be as fast, but it will be easier in some cases, and
a bit more reliable. This is called the Secant Method.

1.3 - The Secant Method


The Secant Method performs the same task and works much in the same way as the Newton-Raphson
Method, but without needing to know the derivative (and also without being subject to
singularities in said analytical derivative). It differs on several key points: first, we start with two
initial guesses at the root. It is important that these guesses be close to each other and near the
root, otherwise we (as always) risk divergence. Second, instead of using the analytical derivative,
we numerically calculate a secant line, saying that it's close to the derivative. By drawing this line
between f(x) at our two guesses, the root of that line gives us our next guess for the function

root. As the method converges, the function value at our guess gets closer and closer to zero. Here
are several images depicting what happens, given again our test function:
f(x) = x^2/4 - sin(x)

We pick two points; in this case we'll use 3.5 and 2.5. Now, we'll draw a line between the two
points and find its root using the following formula:

x(n+1) = x(n) - f(x(n))·(x(n) - x(n-1)) / (f(x(n)) - f(x(n-1)))

The x-intercept of our line turns out to be 2.1064. Let's check to see how close we are to zero:
(2.1064)^2/4 - sin(2.1064) = 0.2493
We're still a fair bit from the root, so now we'll use the points 2.5 and 2.1064 for our next iteration:

Using our formula again, we get 1.9691. Again, we'll check our answer, and see how close we are
to the root:
(1.9691)^2/4 - sin(1.9691) = 0.0477
Getting closer! As we continue this process, we eventually find our result at 1.9337, our
approximate root. So, how do we code this? Here's an example script:
%secantmethod.m script
x = [3.5; 2.5]; % x array of guesses
n = 2;
while( abs((x(n)^2)/4 - sin(x(n))) > 1e-5 )
    x(n+1) = x(n) - ((x(n) - x(n-1))/(((x(n)^2)/4 - sin(x(n))) - ((x(n-1)^2)/4 - sin(x(n-1)))))*((x(n)^2)/4 - sin(x(n)));
    n = n+1;
end

If we run this from the FreeMat command line, we get:


--> secantmethod
--> x
ans =
3.50000000000000
2.50000000000000
2.10639961557374
1.96913325845350
1.93668028297996
1.93380863593915
1.93375384981807

Perhaps a more robust program would allow us to simply pass our function and two initial guesses
as arguments:
function ret = secant(fx,x1,x2)
x = [x1; x2]; % initial guesses
n = 2; % initiate counter for loop
while( abs(fx(x(n))) > 1e-5 ) % precision of 1e-5
    x(n+1) = x(n) - ((x(n) - x(n-1))/(fx(x(n)) - fx(x(n-1))))*fx(x(n));
    n = n+1;
end
ret = x(n); % return last value of x (our root)

Now, we can pass any function we like, with two initial guesses:
--> fx = @(x) (x.^5 - x - 1); % we can't solve this one analytically
--> secant(fx,0.9,1) % one of the roots is somewhere near 1
ans =
1.16730389499850
--> fx(ans) % let's check our answer
ans =
-6.89697854161508e-07

The secant method converges a bit slower than Newton's Method in most cases, but tends to be
somewhat more stable, and is easier to embed into adaptive code that helps it avoid divergence.

1.4 - Nonlinear Systems of Equations


Perhaps a more interesting (and more useful) application of root-finding is to solve systems of nonlinear equations. In many areas of science, we want to model whole systems, which can in many
cases prove difficult to solve analytically, so we can use the method described here to
approximate some solutions.

The Newton-Raphson Method for Systems of Equations


Section 1.2 detailed Newton's Method to solve for the roots of a nonlinear equation. But what if
we have a whole system of equations? Newton-Raphson also works for systems of equations, and
is relatively simple (especially in FreeMat!) when we use a matrix approach. I'll skip some theory
here, but by the same assumptions made in Newton's Method, we can arrive at the following
equation:

[ ∂f1/∂x1   ∂f1/∂x2 ] [ Δx1 ]   [ f1 ]
[ ∂f2/∂x1   ∂f2/∂x2 ] [ Δx2 ] = [ f2 ]

The first matrix seen here is called the Jacobian Matrix, simply a matrix with values for partial
derivatives corresponding to their location inside the matrix. If you look at this equation, it has the
classic Ax = b appearance, which means we can simply use A\b (Gaussian Elimination) to solve for
x. Given that the only unknowns in this equation are Δx1 and Δx2, we start with a guess (just as in
Newton's Method), and use Gaussian Elimination to solve for these corrections. We then get our new
values for x1 and x2 from our original x1 and x2 as follows:

[x1; x2]_new = [x1; x2]_old - [Δx1; Δx2]

We iterate this idea until our delta matrix (the error) becomes smaller than some threshold that
we set, and we can then say 'close enough!'. Straight away, we can make a simple program to
evaluate this method for a system of two equations with two unknowns. Let's just put the
equations and their derivatives right into the code and make a script .m file:
% newtsys.m script file
%% --- Two Simple Test Functions --- %%
f1 = @(x,y) (x^2 - 4*(y^3));
f2 = @(x,y) (sin(x) + 3*cos(3*y));
%% --- Their Derivatives --- %%
df1dx = @(x,y) (2*x);
df2dx = @(x,y) (cos(x));
df1dy = @(x,y) (-12*y^2);
df2dy = @(x,y) (-9*sin(3*y));
xy = [1;1]; % Initial Guess at Solution
deltax = [10; 10]; % starter value for delta x matrix
counter = 0; % initialize counter variable
%% --- Jacobian Matrix --- %%
J = @(x,y) [df1dx(x,y) df1dy(x,y);df2dx(x,y) df2dy(x,y)];
%% --- Function Matrix --- %%
F = @(x,y) [f1(x,y);f2(x,y)];
%% --- Iterative Technique --- %%
while max(abs(deltax)) > 1e-5
    deltax = J(xy(1),xy(2))\F(xy(1),xy(2)); % Gaussian Elim. step
    xy = xy - deltax; % modify [x;y] per iterative technique
    counter = counter + 1; % count number of iterations
end
printf('%3d iterations\n',counter);
printf('x = %3f\t y = %3f\n',xy(1),xy(2));

Here, we've set up a matrix J(x,y) which contains anonymous functions, each also a function of
(x,y) and a matrix F(x,y) with similar properties. This way, we can simply put in our guess for x and
y by evaluating J(x,y) and F(x,y). In this case, x is xy(1) and y is xy(2). We perform our iterative
technique by Gaussian Elimination using the backslash operator (A\b or in this case, J(x,y)\F(x,y)),
and the resulting output is our delta matrix containing Δx and Δy. We simply loop these two
commands, and we eventually reach a solution:
--> newtsys
6 iterations
x = 4.339563	 y = 1.676013

If we evaluate f1 and f2 at these coordinates, we should get something very close to zero:
--> f1(4.339563,1.676013)
ans =
-7.1054e-15
--> f2(4.339563,1.676013)
ans =
2.6645e-15

So, it worked! This technique can be easily extrapolated to systems of many equations and many
unknowns, with two caveats: all partial derivatives have to be known (or numerically
approximated), and in such systems, many times there are multiple solutions. This method can

diverge, and the solutions reached are usually heavily dependent on your initial guesses, so it's
necessary to have an idea of what your functions look like in the first place, and where the
solutions might be. You've been warned.
Now, let's look at a real-world example, along with a program that allows us some input and
manipulation. Kinematic analysis of the four bar mechanism shown below results in the following
system of equations:

f1(θ3, θ4) = r4·cos(θ4) - r3·cos(θ3) + r2·cos(θ2) + r1
f2(θ3, θ4) = r4·sin(θ4) - r3·sin(θ3) + r2·sin(θ2)

Here, all radii (linkage lengths) are known, as is θ2, so we really just have two functions, f1 and f2, of
two variables, θ3 and θ4. For the Jacobian, we take the partial derivatives as shown previously:

[ ∂f1/∂θ3   ∂f1/∂θ4 ]   [  r3·sin(θ3)   -r4·sin(θ4) ]
[ ∂f2/∂θ3   ∂f2/∂θ4 ] = [ -r3·cos(θ3)    r4·cos(θ4) ]

Now, if we have an initial guess at θ3 and θ4, we can plug them into our iterative technique, and
eventually get real, accurate values! Let's start by coding our functions and derivatives as
anonymous functions (functions we can enter right from the command console, and pass around
like variables). We'll assume here that you know the values for r1, r2, r3, r4 and th2. Instead of
typing their variable names here, type in their actual values (e.g.: f1=@(th3,th4)
(1.27*cos(th4)...etc.)

f1 = @(th3, th4) (r4*cos(th4) - r3*cos(th3) + r2*cos(th2) + r1);
f2 = @(th3, th4) (r4*sin(th4) - r3*sin(th3) + r2*sin(th2));
df1dth3 = @(th3, th4) (r3*sin(th3));
df2dth3 = @(th3, th4) (-r3*cos(th3));
df1dth4 = @(th3, th4) (-r4*sin(th4));
df2dth4 = @(th3, th4) (r4*cos(th4));

Next, we'll put together a function to which we can pass our equations as matrix blocks. We'll
simply define matrices for f and J like so:
f = @(th3,th4) [f1(th3,th4); f2(th3,th4)];
J = @(th3,th4) [df1dth3(th3,th4) df1dth4(th3,th4); df2dth3(th3,th4) df2dth4(th3,th4)];

and we'll design our function so we can pass these blocks to it as an argument. Here's an example:
function ret = nrsys2(f,J,guess) % Newton-Raphson for 2 equations in 2
unknowns
epsilon = 0.001; % acceptable error
counter = 1;
delta = [1;1]
while delta >
delta =
guess =
counter
end

epsilon
J(guess(1),guess(2))\f(guess(1),guess(2));
guess + delta;
= counter + 1;

fprintf('Solved in %d iterations',counter)
ret = guess;

But wait, there's more! What if we don't know the analytical derivatives of our functions? Perhaps
they're behind a black box, or just really difficult to differentiate (or perhaps we just don't have
time)? As we'll see in the next section, we can calculate the derivative by taking what's called a
finite difference. Simply stated, a derivative is the rate of change of a function with respect to a
change in its input, or rise over run. So, we could calculate an approximate partial derivative for
our four-bar function above like so:
θ3,guess = 40°,  θ4,guess = 65°

∂f1/∂θ3 ≈ [ f1(θ3,guess + Δ, θ4,guess) - f1(θ3,guess - Δ, θ4,guess) ] / (2Δ)

where Δ is relatively small. This means we can use the Newton-Raphson method for systems
(effectively the secant method here) without having to analytically calculate the derivatives! We
simply substitute our estimation of each derivative into the Jacobian matrix, and solve as usual.
We must be careful, though. Numerical differentiation can be subject to large inaccuracy for a lot

of different reasons (which will be discussed a bit in the next section), so use caution when you
use this method!
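To make this concrete, here is a minimal sketch of a finite-difference Jacobian (our own illustration, not the authors' code; it assumes the four-bar anonymous functions f1 and f2 from above are in the workspace, and d is a small step size chosen arbitrarily):

d = 1e-6; % small step for the central differences (an arbitrary choice)
Jfd = @(th3,th4) [ (f1(th3+d,th4)-f1(th3-d,th4))/(2*d), (f1(th3,th4+d)-f1(th3,th4-d))/(2*d); ...
                   (f2(th3+d,th4)-f2(th3-d,th4))/(2*d), (f2(th3,th4+d)-f2(th3,th4-d))/(2*d) ];
% Jfd can now stand in for the analytical Jacobian J, e.g.:
% ret = nrsys2(f, Jfd, guess);

Each entry is a central difference, so no analytical derivative ever has to be written down.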
One last thing to mention for multiple root finding: there is now a built-in function in FreeMat
called fsolve() that performs a Newton-Raphson very much like the scripts shown here, but with a
little more teeth and error checking. For most problems, this built-in subroutine can give you an
out-of-the-box solution should you choose to use it. Simply type 'help fsolve' (sans quotes) in your
FreeMat command window for an explanation of usage.

Section 2: Numerical Differentiation


There are several different methods that can be used to calculate a numerical derivative; the finite
differences we used to approximate the Jacobian in the last section are the most common of them.
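As a quick sketch of the most common of these, here is a central difference applied to our test function from Section 1 (the step size d is an arbitrary choice, not a recommendation):

f = @(x) ((x.^2/4) - sin(x)); % our test function from Section 1
fp = @(x) (x/2 - cos(x));     % its exact derivative, for comparison
d = 1e-5; % step size: too large adds truncation error, too small invites round-off
dfdx = @(x) ((f(x+d) - f(x-d))/(2*d)); % central difference approximation
% dfdx(1) - fp(1) comes out tiny; the central difference error shrinks like d^2

The one-sided forward difference, (f(x+d) - f(x))/d, also works, but its error shrinks only like d, which is why the central version is usually preferred.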

Section 3: Numerical Integration


Numerical integration is something that computers excel at. In fact, numerical integrals are the
basis for many of the powerful things computers are capable of today. Whether it's projecting the
position of your mouse cursor on the screen based on an infrared diode's electrical behavior or
calculating the physics effects in your favorite computer game, this basic functionality is integral
(pun intended) to the usefulness of the machine in front of you. 'Why integrate numerically?', you
might ask. Well, take a quick look at what an integral is doing; it's the sum of a given function,
evaluated an infinite (or at least a very high) number of times over very small intervals. For example, look at
this integral:
∫_0^2 x^2 dx

We can solve this analytically quite easily using the inverse power rule. There is no reason to
numerically integrate this formula, because the closed form is easily obtained. But what if you
ever encounter this mess of an equation:

∫_z^∞ (1/(σ_x·√(2π))) · exp( -(x - μ_x)^2 / (2σ_x^2) ) dx

Figure 1: Gaussian probability density function (PDF) curve with a standard deviation of 1.

This is the calculation for the area from some point z to infinity under the curve of the Gaussian or
normal probability density function (PDF). Such a curve is shown in Figure 1. This is actually a very
important function for a lot of different applications, and as an engineer or mathematician, you
will indeed use it, especially its integral. The integral of this curve is equal to the probability that a
normally distributed variable's value lies between two bounds, expressed in standard deviations
from the mean at zero. The total area under the curve, from minus infinity to plus infinity, is equal
to one (the function was designed with that specific characteristic in mind there is a 100%
probability that the variable has some value on this interval, follow?). If we change one of the
limits, however, then we may have something that looks like this:

Pr(Z ≥ z) = ∫_z^∞ (1/(σ_x·√(2π))) · exp( -(x - μ_x)^2 / (2σ_x^2) ) dx = ?

This is a rather nasty integral, but in the immortal words of the Hitchhiker's Guide to the Galaxy,
don't panic. All the equation above does is calculate the area under the Gaussian curve from some
point z up to infinity. For example, if z = 1, then the area to be calculated would be as shown in
Figure 2.

Figure 2: Graph of Gaussian curve showing the example area to be calculated using numerical integration.
This is an example of an equation for which there is no closed form solution (short of an infinite
series expansion). The only way to handle this type of equation is through numerical integration.
Thanks to computers and software such as FreeMat, that's much easier than if you happened to
be using the solar-powered alternative shown in Figure 3.

Figure 3: Hey, it's a Hemmi!

We're going to use this equation to demonstrate some of the methods of numerical
integration. Okay, back to the equation for the Gaussian, also called the normal, probability
density function. It is:

PDF = (1/(σ_x·√(2π))) · e^( -(x - μ_x)^2 / (2σ_x^2) )

where:

σ_x = the standard deviation of the distribution. This is an indication of how wide or fat the
distribution is.
μ_x = the average of the distribution. This is where the center of the distribution is located.
Using FreeMat, we can create a plot of this function, shown in Figure 4.

Figure 4: Gaussian probability density function


This was created using this small bit of code:
sigma=1;
x=-5:0.01:5;
gaussian=(1/(sigma*(2*pi)^0.5))*exp(-x.^2.0/(2*sigma^2));
plot(x,gaussian)

From this, we'll explain and demonstrate several methods for calculating the area underneath the
curve.

3.1 - Rectangular Rule


One way to add up the area is with a bunch of rectangles. Just place a bunch of equal-width
rectangles under the curve. Each rectangle stretches so that one of its top corners touches the
curve and its bottom rests on the axis, which in this case is at zero. This is shown in Figure 5.

Figure 5: Example of calculating the area from 1 to infinity using rectangular areas.
By adding up the areas of each rectangle, we can calculate the area under the curve. If we look at
each rectangle as having a width w and a height equal to the distance from zero to where it
touches the graph, we'll have something as shown in Figure 6.

Figure 6: Gaussian curve with height and width annotated.


We'll now use this graph and some FreeMat code to add up the area of all of the rectangles.
How many rectangles? Good question! The graph goes to infinity. But we don't want (or need) to
go that high. So how high do we need to go? Note that the graph falls close to zero when it gets
above 3. So, we'll go a bit above that and go to 10. And what kind of width do we want to use?
Let's just use 1/4, or 0.25.
% Script to calculate the specific area under a Gaussian probability
% density function using the Rectangular Rule.
w=0.25; % This is the width of each rectangle.
start_point=1;
stop_point=10;
x=start_point+w:w:stop_point+w; % These are the values of x. Note that
                                % each rectangle touches the graph on
                                % the *right* side. This side is at
                                % the point of x+w, which is why the
                                % counter goes from 1+w to 10+w, with
                                % a step size of just w.
sigma=1; % The standard deviation of the curve.
% The next line calculates the height of each rectangle.
h=(1/(sigma*(2*pi)^0.5))*exp(-(x.^2)/(2*sigma^2));
area_rect=w*h; % Calculate the area of each rectangle.
area_sum=cumsum(area_rect); % Sum up the areas of each rectangle.
total_area=area_sum(length(area_sum)); % This is the final area.
printf('The area under the curve from %f to %f is %f.\n',start_point,stop_point,total_area);

Saving this script as gaussian_area.m, then running it, we get the following result.
--> gaussian_area
The area under the curve from 1.000000 to 10.000000 is 0.129672.

The area we calculated is 0.129672. But is this correct? Another good question! One way to find
out is to make the width of each rectangle smaller, rerun the routine, and compare the
answer to this one. Let's make the width 0.1, which means our code will appear as follows:
% Script to calculate the specific area under a Gaussian probability
% density function using the Rectangular Rule.
w=0.1; % This is the width of each rectangle.
start_point=1;
stop_point=10;
x=start_point+w:w:stop_point+w; % These are the values of x. Note that
                                % each rectangle touches the graph on
                                % the *right* side. This side is at
                                % the point of x+w, which is why the
                                % counter goes from 1+w to 10+w, with
                                % a step size of just w.
sigma=1; % The standard deviation of the curve.
% The next line calculates the height of each rectangle.
h=(1/(sigma*(2*pi)^0.5))*exp(-(x.^2)/(2*sigma^2));
area_rect=w*h; % Calculate the area of each rectangle.
area_sum=cumsum(area_rect); % Sum up the areas of each rectangle.
total_area=area_sum(length(area_sum)); % This is the final area.
printf('The area under the curve from %f to %f is %f.\n',start_point,stop_point,total_area);

Re-running the routine, we get:


--> gaussian_area
The area under the curve from 1.000000 to 10.000000 is 0.146758.

The area has gone up to 0.146758. The difference from the old area is 0.146758 - 0.129672 =
0.017086. The difference is 11.6% of the total area we just calculated. For most math programs,
that's not good enough. But what is good enough? Typically, for numerical integration, we want
to have a difference of 10^-6 or less. That means a one in a million difference. The great thing is that
we can set up FreeMat to decide when it has reached that threshold. We can keep making the
width smaller and smaller until the difference is only a millionth of the calculated area.
We're not going to go that far yet. Instead, we'll use a less precise level of 10^-2 just to demonstrate
the principle. To make the decision when this has been reached, we'll use a while loop, as shown
below.
% Script to calculate the specific area under a Gaussian probability
% density function using the Rectangular Rule.
clear all; % Start by clearing all variables.
w=0.1; % This is the width of each rectangle.
start_point=1;
stop_point=10;
x=start_point+w:w:stop_point+w; % These are the values of x, as before.
sigma=1; % The standard deviation of the curve.
% The next line calculates the height of each rectangle.
h=(1/(sigma*(2*pi)^0.5))*exp(-(x.^2)/(2*sigma^2));
area_rect=w*h; % Calculate the area of each rectangle.
area_sum=cumsum(area_rect); % Sum up the areas of each rectangle.
total_area=area_sum(length(area_sum)); % This is the final area.
% Create a while loop that runs until the precision criterion is met.
diff_area=total_area; % This will be the difference between loops, used
                      % to tell if the precision level has been met.
last_area=total_area; % This will be used to hold the area from the
                      % previous loop. It will be compared with the
                      % area of the current loop.
precision=1e-2; % This is the desired precision.
while (diff_area>(total_area*precision)) % Start the while loop.
    w=w/2; % Start by dividing the width in half.
    x=start_point+w:w:stop_point+w; % Set the values for x, as before.
    h=(1/(sigma*(2*pi)^0.5))*exp(-(x.^2)/(2*sigma^2)); % Calculate h.
    area_rect=w*h; % Calculate the areas of each rectangle.
    area_sum=cumsum(area_rect); % Sum up the areas of each rectangle.
    total_area=area_sum(length(area_sum)); % Total area under the curve.
    diff_area=abs(total_area-last_area); % The difference from the previous
                                         % calculation.
    last_area=total_area; % Reset the last area's calculation before
                          % running the next loop.
end
printf('The area under the curve from %f to %f is %f.\n',start_point,stop_point,total_area);
printf('The number of steps required is %i for a precision of %f.\n',length(x),precision);

Next question: We're adding up our rectangles going from 1 to 10. In the actual integral, we're
going from 1 to infinity. So is 10 high enough? Let's find out. We'll set the upper limit, defined by
the variable stop_point, to 20 and see if it makes a difference. We won't show all of the code here.
Just change the fifth line above to read:
stop_point=20;

Now we'll re-run the script. Here are the results, along with the results from the previous run where
the upper limit was 10.
The area under the curve from 1.000000 to 10.000000 is 0.157146.
The number of steps required is 721 for a precision of 0.010000.
--> gaussian_area
The area under the curve from 1.000000 to 20.000000 is 0.157146.
The number of steps required is 1521 for a precision of 0.010000.

The change from 10 to 20, while it doubled the number of steps, didn't change the final answer.
It would appear that, once the upper limit is near 10, the amount of additional area above that is
negligible. For the rest of this exercise, I'm going to leave the upper limit at 20 just to keep the
number of changes low.
Next, we're going to slowly dial down the precision. Thus far, we've used a precision of 10^-2. Next,
we'll set it to 10^-3 to see how big of a change it makes. Again, I'm not going to display all of the
code; instead, just change the line that sets the variable precision to:
precision=1e-3; % This is the desired precision.

Running the script, we get:


--> gaussian_area
The area under the curve from 1.000000 to 20.000000 is 0.158561.
The number of steps required is 24321 for a precision of 0.001000.

The total area went up from 0.157146 to 0.158561. That's a fairly large increase. Also note that the
number of steps went up by a factor of almost 16. Where did this extra area come from? To
understand that, look back at the original graph showing how the rectangles fit into the curve.
Note that there is a lot of open space between the tops of the rectangles and the curve itself, as
shown in Figure 7.

Figure 7: When calculating the area of a curve using rectangles, there's a lot of space between the rectangles and the curve that is missed.
So, how do we capture that missed area? The only way to do so is to decrease the width of the
rectangle. As we make the width smaller and smaller, the amount of missed area will also
decrease. The drawback, for us, is that, as we make the width smaller, the number of steps goes
up. That means that it's going to require more calculations to get to the level of precision desired.
We want a level of precision of 10^-6. Let's set the precision as shown here:
precision=1e-6; % This is the desired precision.

Now, let's run the script again.


--> gaussian_area
The area under the curve from 1.000000 to 20.000000 is 0.158655.
The number of steps required is 24903681 for a precision of 0.000001.

Whoa! That took a little while! At least, it did on my computer (currently a Gateway 700GR Pentium 4
with hyper-threading, bought on sale at Best Buy back in 2003). Perhaps yours ran faster. Still, it
should have taken more time than when you ran with the precision set to 10^-3. And how long did it
take on my computer? Long enough for me to walk to the kitchen, grab a soda from the fridge,
and walk back. But how long was that, precisely? Let's find out. We'll use the FreeMat commands
tic and toc. The command tic starts a timer running; toc stops it and outputs the time, in
seconds. The code now looks like this, with the new code set in boldface:
% Script to calculate the specific area under a Gaussian probability
% density function using the Rectangular Rule.
tic
clear all; % Start by clearing all variables.
w=0.1; % This is the width of each rectangle.
start_point=1;
stop_point=20;
x=start_point+w:w:stop_point+w; % These are the values of x, as before.
sigma=1; % The standard deviation of the curve.
% The next line calculates the height of each rectangle.
h=(1/(sigma*(2*pi)^0.5))*exp(-(x.^2)/(2*sigma^2));
area_rect=w*h; % Calculate the area of each rectangle.
area_sum=cumsum(area_rect); % Sum up the areas of each rectangle.
total_area=area_sum(length(area_sum)); % This is the final area.
% Create a while loop that runs until the precision criterion is met.
diff_area=total_area; % The difference between loops, used to tell if
                      % the precision level has been met.
last_area=total_area; % Holds the area from the previous loop for
                      % comparison with the current loop.
precision=1e-6; % This is the desired precision.
while (diff_area>(total_area*precision))
    w=w/2; % Start by dividing the width in half.
    x=start_point+w:w:stop_point+w; % Set the values for x, as before.
    h=(1/(sigma*(2*pi)^0.5))*exp(-(x.^2)/(2*sigma^2)); % Calculate h.
    area_rect=w*h; % Calculate the areas of each rectangle.
    area_sum=cumsum(area_rect); % Sum up the areas of each rectangle.
    total_area=area_sum(length(area_sum)); % Total area under the curve.
    diff_area=abs(total_area-last_area); % The difference from the previous
                                         % calculation.
    last_area=total_area; % Reset the last area before the next loop.
end
printf('The area under the curve from %f to %f is %f.\n',start_point,stop_point,total_area);
printf('The number of steps required is %i for a precision of %f.\n',length(x),precision);
total_time=toc;
printf('The time required for this running was %f seconds.\n',total_time);

The result is:


--> gaussian_area
The area under the curve from 1.000000 to 20.000000 is 0.158655.
The number of steps required is 24903681 for a precision of 0.000001.
The time required for this running was 45.763000 seconds.

Is it time to trade in the old computer? Hardly. This beast is good for another 100 billion
computations. At least. Is there a way to improve the code so that it doesn't take so long? Yes.
That's where the Trapezoid Rule and Simpson's Rule come into play.

3.2 - Trapezoid Rule
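The Trapezoid Rule connects f(x) at the left and right edges of each step with a straight line and adds up the resulting trapezoids, capturing most of the area the rectangles miss. Here is a minimal sketch (our own, reusing the Gaussian setup from the rectangular-rule scripts above):

% Sketch of the Trapezoid Rule on the same Gaussian problem as above.
w=0.25; % width of each trapezoid
start_point=1;
stop_point=10;
x=start_point:w:stop_point; % the edges of the trapezoids
sigma=1;
h=(1/(sigma*(2*pi)^0.5))*exp(-(x.^2)/(2*sigma^2)); % heights at the edges
% Each trapezoid's area is w times the average of its two edge heights.
area_trap=w*(h(1:length(h)-1)+h(2:length(h)))/2;
total_area=sum(area_trap)

Because the tops of the strips now follow the curve, far fewer steps are needed to reach a given precision than with the rectangles.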


3.3 - Simpson's Rule
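Simpson's Rule goes one step further and fits a parabola through each pair of adjacent intervals, which gives the sample heights the classic 1, 4, 2, 4, ..., 2, 4, 1 weights, each multiplied by w/3. A minimal sketch on the same Gaussian problem (our own; note the method needs an even number of intervals):

% Sketch of Simpson's Rule on the same Gaussian problem as above.
w=0.25;
start_point=1;
stop_point=10;
x=start_point:w:stop_point; % 37 sample points, so 36 intervals (even, as required)
sigma=1;
h=(1/(sigma*(2*pi)^0.5))*exp(-(x.^2)/(2*sigma^2));
n=length(h);
coef=ones(1,n); % build the weights 1, 4, 2, 4, ..., 2, 4, 1
coef(2:2:n-1)=4;
coef(3:2:n-2)=2;
total_area=(w/3)*sum(coef.*h)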
3.4 - Gaussian Quadrature
3.5 - Monte Carlo Integration
Monte Carlo methods use random numbers to perform different mathematical tasks.
Probably the most common usage is for numerical integration, as we'll describe here. This method
can work on anything from simple functions to image analysis. Here's how it works:
Say we were to lay out the domain [0 π] onto a dart board, and plot cos(x) on that domain. If we
throw a random distribution of darts at this board, it might look like this (red crosses are darts):

Numerically, the proportion of darts under the curve (between the curve and the x-axis) to the
total number of darts thrown is equal to the proportion of area under the curve to total area of
the domain. Note that darts above the curve but below the x-axis subtract from this total.
Knowing that there are 100 darts in the above figure, we can manually count and come up with 18
darts below the curve in the left half of the graph, and 18 darts above the curve (but below the
axis) in the right half. We subtract the second count from the first and get 0. The total area of our
domain is 2π, so 0·2π is the area under the curve. We know, of course, that the value of this
integral should be zero, and see that we come up with that here.
Pretty simple, right? Let's look at some simple code then! We'll start by making a function we can
pass arguments to. We need to tell it three things: our function (which we can pass as a variable
using an anonymous function), the minimum x and the maximum x (the extrema of our domain).
function return_value = montecarlo(fx, xmin, xmax)
n_darts = 100; % number of darts to throw; more darts = more accurate
%% --- Determine Domain Size --- %%
x = linspace(xmin, xmax, 200);
for n = 1:length(x)
    y(n) = fx(x(n)); % sample the curve to be integrated
end
ymin = min(y);
if ymin > 0 % if y is all positive, min is 0
    ymin = 0;
end
ymax = max(y);
%% --- Throw Darts! --- %%
dartcounter = 0; % This will count the net number of darts below the curve
for i = 1:n_darts
    rx = rand()*(xmax-xmin) + xmin; % generates a random dart in the domain
    ry = rand()*(ymax-ymin) + ymin;
    %% --- Count Darts for Calculation --- %%
    if ry < fx(rx) && ry > 0
        dartcounter = dartcounter + 1;
    elseif ry > fx(rx) && ry < 0
        dartcounter = dartcounter - 1;
    end
end
totalarea = abs(xmax-xmin)*abs(ymax-ymin);
return_value = (dartcounter/n_darts)*totalarea; % area under curve

Now, let's pass an anonymous function (a function stored in a variable) and a domain to our
program and see what happens:
--> f = @(x) (exp(-x.^2)); % VERY hard to integrate analytically!

I'll take a quick aside and explain what I just did: the name of our function (f) can be passed
around just like any other variable. When we say f = @(x), the @ symbol denotes that it's a
function (not a vector or cell or other entity), and the (x) denotes that it's a function of x. If it were
a function of x, y and z, we would have f = @(x,y,z). The rest of the expression is simply the
function to be evaluated, in this case:

f(x) = e^(-x^2)

When we call f(3), for example, it will return exp(-3^2), or 1.2341×10^-4. Let's continue:
--> x0 = 0; x1 = 2; % evaluate integral from x = 0 to 2
--> montecarlo(f,x0,x1)
ans =
0.8840

Straight away we notice two things. First, our answer 0.8840 is close to (but not exactly) the
analytical solution to the integral, 0.8821. The more darts you throw, the more accurate the
answer will become, at the expense of computational time. Second, if we run the same input
several times, we get different answers. The random number generator produces a new
distribution of darts each run, so the count of darts inside and outside the curve in question
changes slightly from one run to the next. This method is powerful because it can integrate any
function that can be evaluated, but it's expensive in terms of time, and its precision is sometimes
unreliable.

Section 4: Initial Value Problems (Ordinary Differential Equations)


Probably one of the most significant applications of numerical techniques, especially in
engineering, is evaluation of differential equations. Many of today's pursuits in engineering involve
solution of ordinary differential equations (ODE's) and/or partial differential equations (PDE's).
Anything from dynamics and vibrations to heat transfer, fluid mechanics and even
electromagnetism can be modeled using differential equations. Many times, however, these
equations prove either very difficult or very time consuming to solve analytically, so we turn to
numerical methods for quick, approximate solutions.
Application of numerical methods to differential equations can vary in difficulty. Just as some
differential equations are hard to solve (and some solutions are hard to compute even with an
analytical solution!), we often encounter difficulty in numerics as well. Different criteria such as the
Lipschitz Condition or the Courant-Friedrichs-Lewy Condition exist as guides to using these
methods for solutions, and extensive use of numerical methods requires at the very least a basic
understanding of these. We won't discuss them in detail here; we trust that if you're using these
methods for really specific work, you've taken it upon yourself to learn when to use what. It's
easy to get models to provide an answer. It's less easy to make sure that answer is accurate.
The most common, simplest form of ODE we as engineers generally encounter is the initial value
problem (IVP): an ODE for which we know starting values. For example, if we want to
describe the acceleration of a car from rest, we would likely describe it as an IVP because we know
its initial properties (position, speed and acceleration are at zero to start with). The methods here
all attempt to approximate a solution to the basic equation
y'(t) = f(t, y(t)),    y(t_0) = y_0

As we'll see later, this single equation can translate to higher order equations (such as our car's
acceleration, or current in an RLC circuit, or motion of a mass-spring-damper system, etc.) by a
simple extension of the work in this chapter, which we'll discuss. Let's start with the most basic
method for differential equations, Euler's Method.

4.1 - Euler's Method


Euler's method takes a very simple approach to approximating solutions for IVP's. This method is
widely regarded as too simplistic and not accurate enough for most serious uses, but it makes for
easy practice for some very similar methods that are quite accurate.
We'll skip some of the theory, but a long time ago, a Swiss mathematician Leonhard Euler
(pronounced 'oyler') performed a lot of calculation and analysis in numerical integration to
produce what are now known as the Euler Approximations. Among these is one notable method
that can be used for a simple numerical solution of a basic IVP. Essentially, his approximation
works as follows: if we separate the domain we're interested in into small time steps, using the
information at the first time step and our unsolved differential equation, we can make a guess at
what the value will be at the next iteration. For small enough time steps, we can continue this over

the whole domain and get a reasonable approximation of the actual solution.
Let's start with a basic differential equation. Newton's Law of Cooling says that my cup of coffee
will cool more quickly when the difference between its temperature and the air is large, and less
quickly as the temperature approaches the ambient air temperature. In other words, the change in
temperature per unit time is proportional by a constant, k, to the difference between its current
temperature and the air temperature. Assuming the air temperature remains constant, we can
translate this into equations as follows:
dT/dt = k·(T_air - T)

T(0) = 160 [°F],  T_air = 75 [°F],  k = 0.1 [1/min]

Euler's iterative function looks like this:

y_(n+1) = y_n + h·f(t_n, y_n)

So, our first couple of iterations using Euler's method to provide an estimate would look like this:

T_1 = T_0 + h·k·(T_air - T_0)
T_2 = T_1 + h·k·(T_air - T_1)

where h is the length of our time step. Essentially, we draw a straight line from our initial point at
(t_0, T_0) to (t_0 + h, T_1), with the slope given by evaluating our first-order
derivative (dT/dt) at the initial condition. We can see right here, this looks like a pretty
simple bit of code, and in reality it is. Our script would look like this:
temp(1) = 160; % initial coffee temp, T0
tair = 75;     % air temp Tair
k = .1;        % cooling coefficient [1/min]
h = .1;        % time step [min]
t = [0:h:60];  % one hour
for i = 1:length(t)-1 % we're going one extra step with temp
    temp(i+1) = temp(i) + h*k*(tair-temp(i));
end

The next to last line of code (the only thing we loop) is identical to Euler's iterative function above.
Let's run this and plot the results:
--> plot(t, temp)

That's it! We have a numerical solution, a nice smooth curve that makes sense: as the
coffee gets closer and closer to room temperature, it cools more slowly, and our initial condition
of 160 is met. One problem, though: run this four or five times with different values for h, and plot
the results. You might get something that looks like this:
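Here is one way to produce such a comparison (a sketch of our own): rerun Euler's method with a few arbitrary step sizes and overlay the exact analytical solution, T(t) = 85·e^(-kt) + 75, which also appears in the error comparison in the next section.

tair = 75; k = .1;
hold on
for h = [0.1 2 5 10] % an arbitrary sample of step sizes
    t = [0:h:60];
    temp = 160;
    for i = 1:length(t)-1
        temp(i+1) = temp(i) + h*k*(tair-temp(i));
    end
    plot(t,temp)
end
t = [0:0.1:60];
plot(t, 85*exp(-k*t)+75) % the exact solution, for comparison
hold off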

As we make our time step bigger, our solution moves away from the actual solution (I've plotted
the exact analytical solution here; it's the one closest to the top). Our first guess at h (6
seconds) is small enough for the time scale we're looking at; this particular differential equation
isn't too sensitive. Others, though, can be really sensitive, and so Euler's method doesn't work as
well. Thus, in practice, we'll use a somewhat more sophisticated method, the Runge-Kutta.

4.2 - Runge-Kutta Methods


This is the real deal. Runge-Kutta Methods are by far the most commonly used methods in most
engineering applications today. They were developed around 100 years ago (relatively new in
terms of math history; Newton was 17th century, Euler was early 18th century), and are an
extension of the same math Euler developed. If you look at Euler's work, he seemed to love
applying Taylor series to all sorts of different problems. In fact, that's how he came up with the
method we just discussed; it's a first-order Taylor polynomial expansion, so its accuracy is limited,
and if the derivatives of the function don't behave nicely, we can get some pretty serious error.
Runge-Kutta Methods include additional calculation of slope in the middle of each time step, and
take a weighted average of the values to evaluate the function. This helps reduce our error as we
go from time step to time step, and can result in some very accurate results. These methods are
named by how many terms they use: an RK2 (2-term model) is very basic, and equivalent to the
Midpoint Method, which we didn't bother to discuss since we're doing it here. An RK4 has four
terms, and is generally the most accurate per computation time. FreeMat's (and MATLAB's)
ODE45 routine switches between an RK4 and RK5 based on which is providing a better result,
hence the name ODE45. Let's start by looking at the RK2. Runge-Kutta methods work by several
evaluations of our ODE at different points, then averaging those values:
k_1 = h·f(t_n, y_n)

k_2 = h·f(t_n + h/2, y_n + k_1/2)

y_(n+1) = y_n + (k_1 + k_2)/2

k1 is just Euler's Method. k2 is Euler's Method again, but evaluating our function at the midpoint,
using our previous calculation of y in k1. Then, we average these values to get our (more accurate)
estimation for yn+1. Let's walk through a quick example. We just did a Newton's Law of Cooling
problem with my cup of coffee in Euler's Method; let's do it again here:
dT/dt = k·(T_air - T)

T(0) = 160 [°F],  T_air = 75 [°F],  k = 0.1 [1/min]

Remember that our function, f(t,y) is the right hand side of our differential equation. So, our first
iteration looks like this (taking h = 0.1):

k_1 = h·k·(T_air - T_0) = (0.1)(0.1)(75 - 160) = -0.85

k_2 = h·k·(T_air - [T_0 + k_1/2]) = (0.1)(0.1)(75 - 159.575) ≈ -0.846

y_(n+1) = y_n + (k_1 + k_2)/2 = 160 - 0.848 = 159.152

We just take our k1 value, plug it into our k2 formula, and average the two for the total
approximation yn+1. Notice that our final temperature change is a little different from what Euler's
Method (k1) gives us. As we increase h (make our time step bigger), this difference generally
increases. We would also see bigger differences for more difficult functions, even when using a
small h. Anyway, let's write a script!
temp(1) = 160; % initial coffee temp, T0
tair = 75;     % air temp Tair
k = .1;        % cooling coefficient [1/min]
h = .1;        % time step [min]
t = [0:h:60];  % one hour
for i = 1:length(t)-1 % we're going one extra step with temp
    k1 = h*k*(tair - temp(i));
    k2 = h*k*(tair - (temp(i)+(k1/2)));
    temp(i+1) = temp(i) + .5*(k1+k2);
end

Plot your results, and you'll see a curve nearly exactly like we saw in Euler's Method. Looking at
our code, we can see that it's very similar to the setup for Euler's Method, but we calculate
temp(i+1) in a different way. Also, we have to call our function f(t,y) several times each iteration.
As we move to higher-order RK methods, it will be easier to name a function and call it, instead of
writing it out for each k value. Let's move on to an RK4, and a more sophisticated example.
The RK4 works exactly like the RK2 except for two points: there are more k terms, and we weight
our average, traditionally toward the middle. Technically, you can weight these however you like, but for
the most part, we use the weighting shown here. Here's our iterative function:

k_1 = h·f(t_n, y_n)

k_2 = h·f(t_n + h/2, y_n + k_1/2)

k_3 = h·f(t_n + h/2, y_n + k_2/2)

k_4 = h·f(t_n + h, y_n + k_3)

y_(n+1) = y_n + (k_1 + 2k_2 + 2k_3 + k_4)/6

Notice that the first two terms are exactly the same as the RK2. The third term k_3 is calculated
exactly the same way as k_2, but with k_2 as our y-value instead of k_1. This is just a refinement
of k_2's value. k_4 evaluates y at (t+h) using k_3's approximation for y, and then we take a weighted
average where the middle values are weighted more than the ends. This is shown in Figure 8.

Figure 8: RK values for a guess at a single time step. The black line is the actual solution. Note that a weighted average would be near k_2 and k_3; therefore it would have some error. This is due to a large time step (0.2 seconds).
We can quickly implement this again in our script from before, just as a demonstration:

temp(1) = 160; % initial coffee temp, T0
tair = 75;     % air temp Tair
k = .1;        % cooling coefficient [1/min]
h = .1;        % time step [min]
t = [0:h:60];  % one hour
for i = 1:length(t)-1 % we're going one extra step with temp
    k1 = h*k*(tair - temp(i));
    k2 = h*k*(tair - (temp(i)+(k1/2)));
    k3 = h*k*(tair - (temp(i)+(k2/2)));
    k4 = h*k*(tair - (temp(i)+k3)); % k4 uses k3, per the RK4 formula
    temp(i+1) = temp(i) + (1/6)*(k1+2*k2+2*k3+k4);
end

Really quickly, let's look at error. The exact solution of this equation is pretty easy to come by:

T(t) = 85·e^(-kt) + 75

We can run our three scripts, eulerex.m, rk2.m and rk4.m, and put their values up against the real
solution for a comparison. I just took h = 1, and picked a few points over the domain:

Euler      RK2        RK4        Exact
160.0000   160.0000   160.0000   160.0000
151.5000   151.7125   151.9108   151.9112
107.9307   108.7632   109.5570   109.5584
86.4822    87.1036    87.7122    87.7133

Notice Euler isn't very accurate compared to the RK2, and likewise on to the RK4. Here's a table of
absolute error (error that doesn't take into consideration the size of the values it's comparing):

Error [F]
Euler      RK2        RK4
0.0000     0.0000     0.0000
0.4112     0.1987     0.0003
1.6277     0.7952     0.0014
1.2311     0.6097     0.0011

These numbers will shrink significantly if we take h to be smaller, but we don't want to waste time
and memory. The lesson to take away here is that the more sophisticated model gives you (in
most cases) a more accurate result. In a stiffer or less simple equation, this difference would be
greatly magnified. Now let's take a look at a more advanced program with some real teeth.
function ret = rk4(f,t,f0)
y(1) = f0;
for i = 1:length(t)-1
    h = t(i+1) - t(i);
    %--- solves Runge-Kutta k values ---%
    k1 = h*f(t(i),y(i));
    k2 = h*f(t(i)+.5*h,y(i)+.5*k1);
    k3 = h*f(t(i)+.5*h,y(i)+.5*k2);
    k4 = h*f(t(i)+h,y(i)+k3);
    %--- solves for y from Runge-Kutta coefficients ---%
    y(i+1) = y(i) + (1/6)*(k1+2*k2+2*k3+k4);
end
plot(t,y); % optional plot, comment out if you don't want this
ret = [t' y'];
end

Notice that this program is the same length as our previous script, but it's much more powerful.
Using this program, we can pass any function using an anonymous function (the f argument here)
from the console, along with a row vector of time values (t), and our initial condition (f0). h is
calculated by looking at the distance between two time values at each iteration, which means we
can alter this based on how our function looks; our domain doesn't have to be equally spaced!
We can certainly pass a monotonic vector, but for functions with long, flat, boring spots, we can
make our time step much bigger there to speed up calculation. If you don't want this feature, you
can simply move h outside the loop and fix it at t(2) - t(1). We can verify that this thing works by
using our test function again:

--> h = .1; t = [0:h:60]; % one hour
--> p = @(t,y) (0.1*(75-y)); % our function; t is time, y is temperature
--> p0 = 160; % our initial condition
--> rk4(p,t,p0)

We get the exact same values we had before with our RK4 script, and a plot to match! A couple of
notes here: whether our function is a function of both t and y or not, we need to define it as such.
This test function doesn't really care about t, just y (which is temperature, T, in our case);
remember that t is included implicitly by using h in the calculation of each value for y. When our
rk4 program calls the function, though, it wants to pass two arguments, time and temperature. We
can pass fewer arguments than are defined in a function, but not more. Therefore, we defined p =
@(t,y) instead of just p = @(y). The t value doesn't do anything in our function, but rk4 doesn't
really know or care.

4.3 - Adams-Bashforth Methods
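Briefly sketched (not a full treatment): where Runge-Kutta methods make extra function evaluations inside each step, Adams-Bashforth methods reuse slopes from previous steps. The two-step method (AB2) is

y_(n+1) = y_n + (h/2)·(3·f(t_n, y_n) - f(t_(n-1), y_(n-1)))

Here is a minimal sketch of our own on the coffee problem, primed with a single Euler step:

f = @(t,y) (0.1*(75 - y)); % dT/dt = k*(Tair - T), as before
h = .1;
t = [0:h:60];
temp(1) = 160;
temp(2) = temp(1) + h*f(t(1),temp(1)); % one Euler step to get AB2 started
for i = 2:length(t)-1
    % reuse the previous slope instead of making a second evaluation per step
    temp(i+1) = temp(i) + (h/2)*(3*f(t(i),temp(i)) - f(t(i-1),temp(i-1)));
end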


4.4 - Systems of Differential Equations
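As promised in the chapter introduction, higher-order equations (like a mass-spring-damper) translate into systems of first-order equations. A hedged sketch of the idea: rewrite m·x'' + c·x' + k·x = 0 as the pair x' = v and v' = -(c·v + k·x)/m, then march both equations forward together. The values of m, c and k here are arbitrary, for illustration only:

m = 1; c = 0.5; k = 4; % arbitrary example values
h = .01;
t = [0:h:20];
x(1) = 1; % initial displacement
v(1) = 0; % initial velocity
for i = 1:length(t)-1 % Euler steps applied to both equations at once
    x(i+1) = x(i) + h*v(i);
    v(i+1) = v(i) + h*(-(c*v(i) + k*x(i))/m);
end
plot(t,x)

The same substitution works with the rk4 program from the last section once it is written to accept vectors.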

Section 5: Boundary Value Problems

5.1 - Linear Shooting Method


The linear shooting method is a relatively simple approach to solving a boundary value problem. In
general, we know conditions at two boundaries, resulting in a more complex system than a simple
Runge-Kutta can solve; we need to take into account not only the initial value of the solution, but
values at later points as well. A physical equivalent of a boundary value problem could be
throwing a baseball to a catcher, as shown in Figure <>. The thrower adjusts the speed of the ball
and the angle at which it's thrown to make the ball land where they want. In fact, the human brain
is possibly one of the fastest solvers for boundary value problems, in that it can solve these
systems in an analog fashion in milliseconds, whether it's moving one's hand to catch a ball, or
manually landing an aircraft on an aircraft carrier deck.
For our purposes here, we'll generally consider functions of the form:
y'(x) = f(x, y(x)),    y(x_0) = y_0,  y(x_1) = y_1

Here, x0 is the initial point and x1 is the final point on the independent axis (usually denoting a
length or displacement, depending on the problem). Continuing the example of a pitcher throwing
a ball, he or she knows several key pieces of information before throwing the ball: the initial height
of the ball (the position of the pitcher's hand at release, y_0 at x_0), the final height of the catcher's glove
(y_1 at x_1), and the general behavior of the ball in flight (f[x, y(x)]). With this information, the pitcher
makes a guess at the initial values for slope and ball velocity needed to hit the catcher's glove.
Shooting methods simply solve these problems the same way a pitcher would: guess the initial
conditions you need to hit the final boundary with good enough precision. Practiced pitchers are
just good at that initial guess, a product of muscle memory and lots of practice. With the shooting
method, we simply take a guess and use that guess to solve an IVP (which is usually pretty simple,
as we saw in the last chapter). We take a look at the value on the final boundary, and adjust our
guess accordingly. This process is repeated until we get within an acceptable error at the opposing
boundary, and we're left with an accurate guess at a correct initial condition as well as the
solution.
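To make this concrete, here is a minimal sketch of the linear shooting idea (a toy example of our own, not a general routine). For the linear problem y'' = -y with y(0) = 0 and y(pi/2) = 1, the exact solution is y = sin(x), so the initial slope we're hunting for is exactly 1. For a linear BVP, two trial shots are enough, because the value reached at the far boundary depends linearly on the guessed slope:

% shooting.m -- sketch of linear shooting for y'' = -y, y(0)=0, y(pi/2)=1
h = 0.0001;
x = [0:h:pi/2];
s = [0.5, 2]; % two trial slopes
for j = 1:2
    y = 0; v = s(j); % y(0) = 0, y'(0) = trial slope
    for i = 1:length(x)-1 % simple Euler integration of the system
        ynew = y + h*v;
        v = v - h*y; % since v' = y'' = -y
        y = ynew;
    end
    yend(j) = y; % value reached at the far boundary for this shot
end
% Interpolate between the two shots to hit the target value, 1:
s_correct = s(1) + (1 - yend(1))*(s(2)-s(1))/(yend(2)-yend(1))

Running this, s_correct comes out very close to 1, the exact slope. For nonlinear problems the interpolation step becomes a root-finding problem on the guess, which is the subject of the next section.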

5.2 - Shooting Method for Nonlinear Problems


5.3 - Finite-Difference Methods
Section 6: Partial Differential Equations and the Finite Element Method
