
COS2633/104/3/2019

Tutorial letter 104/3/2019

NUMERICAL METHODS I
COS2633

Semesters 1 and 2

Department of Mathematical Sciences

This tutorial letter contains important information:

• Lesson 1 - Error and Related Concepts: Additional notes to supplement the prescribed content in Chapter 1 of the textbook.

• Please read this information along with that chapter and Tutorial Letter 102 to gain other viewpoints on the discussions in these sections.

1 Error and Related Topics
1.1 Objectives and Outcomes
1.1.1 Reading: Chapter 1 of textbook
1.1.2 Objectives
The objectives of this lesson are:

• To understand the process involved in studying the various numerical methods;

• To understand the notion of error, the types of error and their effect when carrying out computations;

• To understand the notion of convergence as it relates to general numerical analysis, as well as to the
discussion of the various techniques and their algorithms.

1.1.3 Learning Outcomes


At the end of this lesson you will be able to:

• Distinguish between round-off error, truncation error, experimental error, and machine error.

• Understand the core concepts associated with each algorithm: formulation, implementation, and error
and performance measurement.

• Characterise a numerical algorithm in terms of its stability and convergence.

• Learn how to read and use a typical algorithm given in the book.

• Identify and select computer software that you will use in numerical computations.

• Access the textbook companion website at

https://sites.google.com/site/numericalanalysis1burden/

1.2 Introduction
Many of the problems used to model situations arising in real life cannot be solved by analytic
techniques, and numerical techniques are available as an alternative for solving many practical problems. An
understanding of how the various numerical schemes work is essential and brings about the need for analysis
of the various techniques and aspects of their "goodness".

Many numerical algorithms are too tedious to implement on a hand calculator because of the volume
of computations involved. The increased use of computers has made them an indispensable tool for many
computational processes. An understanding of how a computer handles numbers and computations is neces-
sary in order to know what results to expect from an algorithm.

Many of the numerical schemes are used to approximate the exact solutions of certain problems. Thus any
numerical approximation carries an error, which may at times be large depending on how good the
particular scheme is. The superiority of a numerical scheme is sometimes judged by the computational
error incurred when using it.

The process of numerical analysis involves three components:

(i) Formulation of the method;

(ii) Execution or implementation of the method;

(iii) Validation of the method - which involves analysis of convergence of the method and error thereof.

This tutorial letter supplements Tutorial Letter 102 and Chapter 1 of the textbook [1] (section 1.2) in which
a brief discussion of Error and Computer Arithmetic is given.

1.3 Error
Since in computations we work with a finite number of digits and carry out finitely many steps, numerical
methods of computation are finite processes. A numerical result is an approximate value of the unknown
exact result, except in rare cases where the exact answer is a sufficiently simple rational number and we can
use a numerical method that gives the exact result.

1.3.1 Notation
The following notation shall be adopted for this lesson and subsequent ones. If x is an
approximate value of the quantity whose exact value is X, then the difference

e = X − x

is the (actual) error of the approximation x. Hence

X = x + e;  i.e. True value = Approximation + Error.

The absolute error is defined as

|e| = |X − x|.

The relative error is

e_r = e/X = (X − x)/X = error/true value.

If the absolute value of the error is sufficiently small (i.e. much less than |x|), then

e_r ≈ e/x.

Sometimes it is useful to introduce the quantity γ = −e = x − X and call it the correction. That is

x = X + γ;  i.e. Approximation = True value + Correction.

Finally, an error bound for x is a number β such that

|X − x| ≤ β,

that is, |e| ≤ β.

1.3.2 Types and Sources of Error
We distinguish between several types of error associated with different sources as follows:
1. Experimental error is error in the given data, usually due to the imprecision of the data, which
typically arises from measurement. This type of error limits the accuracy of results in subsequent
calculations.
2. Truncation error arises from replacing an exact method with an approximate one (often by
cutting off some terms of an expression). Thus truncation error is due to the method used for
the computation. It is associated with the fact that a finite or infinite sequence of computational steps
necessary to produce an exact result is "truncated" prematurely after a certain number of steps. This
error depends on the computational method and has to be discussed individually for each method.
3. Round-off error is caused by the physical limit to the number of digits which can be retained in a
number (by choice or by machine/device limitation). It occurs in the process of rounding off during a
computation.
4. Machine error is due to data representation and manipulation in computer memory (Section 1.2 of
the textbook focuses on this type of error).
Section 1.2 of the textbook discusses in depth the subject of round-off error and computer arithmetic,
thus giving a glimpse into how error due to device limitations occurs.
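The distinction between truncation and round-off error can be made concrete with a small sketch (Python used for illustration; the choice of function and number of terms is arbitrary):

```python
import math

# Truncation error: approximate e^x by the first n terms of its Taylor
# series; the discarded tail of the series is the truncation error.
def exp_truncated(x, n_terms):
    return sum(x**k / math.factorial(k) for k in range(n_terms))

x = 1.0
approx = exp_truncated(x, 5)        # 1 + 1 + 1/2 + 1/6 + 1/24
trunc_err = math.exp(x) - approx    # error due to cutting the series off

# Round-off error: retain only 3 decimal places of the exact value.
rounded = round(math.exp(x), 3)
round_err = math.exp(x) - rounded   # |round_err| <= (1/2) * 10^-3
```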

What follows is an introduction to how error propagates as the various arithmetic operations are carried
out in calculations.

1.3.3 Propagation of Error


The question may be asked whether it is possible to "measure" the error in the final answer that results from a
sequence of computations. It is not always practical to analyse each step of the computation
individually. However, it is possible to analyse the effect of errors on each basic arithmetic operation (+, −, ×, ÷).

It is usually assumed that whenever unspecified values are used, the errors which occur in the numbers used
(in basic operations) are small.

NB: An approximate number given to k decimal places has an uncertainty of ½ × 10^−k in the last digit. Hence the
exact number is X = x + e where |e| ≤ ½ × 10^−k, i.e. x − ½ × 10^−k < X < x + ½ × 10^−k.

Let x1 and x2 be approximations to the true values X1 and X2, respectively. Let also e1 and e2 be the
corresponding errors in these approximations, i.e. let X1 = x1 + e1 and X2 = x2 + e2. Then the propagation
of error when carrying out the various arithmetic operations is discussed below for each one.

Error in Addition and Subtraction

We analyse the approximation to the sum X1 + X2 as follows:

X1 + X2 = (x1 + e1) + (x2 + e2)
        = (x1 + x2) + (e1 + e2)

so that the error is

(X1 + X2) − (x1 + x2) = e1 + e2,

i.e. the approximate sum is in error by the sum of the individual errors.

Example 1.1 Two lengths measured correct to the nearest 0.1 mm are 3.2 mm and 1.6 mm. What
is the best estimate we can obtain from these two measurements of the sum of the two (exact)
lengths? Discuss the error in the estimate.

Solution:
First length is 3.2 ± 0.05 mm i.e. between 3.15 mm and 3.25 mm
Second length is 1.6 ± 0.05 mm i.e. between 1.55 mm and 1.65 mm.
Hence the minimum value of the sum is
(3.15 + 1.55) mm = 4.70 mm
and the maximum value is
(3.25 + 1.65) mm = 4.90 mm
The approximation
(3.2 + 1.6) mm = 4.8 mm
has the possible error of 0.1 mm, noting that 0.1 mm is the sum of the maximum individual errors
in the original measurements since each is 0.05 mm.
Thus the estimate of the sum can be given as 4.8 ± 0.1 mm.

Changing the addition to subtraction gives the same analysis of error propagation in subtraction.
i.e. Minimum difference is 3.15−1.65 = 1.5; Maximum difference is 3.25−1.55 = 1.7 and the answer
for 3.2 − 1.6 is 1.6 ± 0.1 mm.
Note the order of numbers subtracted to get the minimum and maximum differences.
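Example 1.1 can also be checked with simple interval arithmetic (a Python sketch; the helper names are mine, not from the textbook):

```python
# Interval check of Example 1.1: each length is known to within ±0.05 mm.
def add_intervals(a, b):
    # endpoints of the sum add componentwise
    return (a[0] + b[0], a[1] + b[1])

def sub_intervals(a, b):
    # smallest difference pairs a's low end with b's high end, and vice versa
    return (a[0] - b[1], a[1] - b[0])

length1 = (3.15, 3.25)   # 3.2 ± 0.05 mm
length2 = (1.55, 1.65)   # 1.6 ± 0.05 mm

s = add_intervals(length1, length2)   # (4.70, 4.90), i.e. 4.8 ± 0.1 mm
d = sub_intervals(length1, length2)   # (1.50, 1.70), i.e. 1.6 ± 0.1 mm
```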

In general, if the error e1 is due to rounding-off X1 to k1 decimal places and the error e2 is due to rounding-off
X2 to k2 decimal places, then e1 and e2 respectively satisfy
−½ × 10^−k1 ≤ e1 ≤ ½ × 10^−k1

and

−½ × 10^−k2 ≤ e2 ≤ ½ × 10^−k2.

That is

|e1| ≤ ½ × 10^−k1  and  |e2| ≤ ½ × 10^−k2.

Hence

|e1 + e2| ≤ |e1| + |e2|

or equivalently,

|e1 + e2| ≤ ½ × 10^−k1 + ½ × 10^−k2.

Thus the error (e1 + e2) in the approximation (x1 + x2) to the true value (X1 + X2) lies within the limits
±(½ × 10^−k1 + ½ × 10^−k2).

In particular, as will commonly happen in practice, if both X1 and X2 are rounded to the same number, k, of
decimal places, then k1 = k2 = k and the error in x1 + x2 lies within ±10^−k.

Error in Multiplication

Using the same notation, the analysis of a product of two numbers is as follows:

X1X2 = (x1 + e1)(x2 + e2)
     = x1x2 + x1e2 + e1x2 + e1e2
     ≈ x1x2 + x1e2 + x2e1,

the last line resulting from the fact that e1e2 is very small.

The error in the approximation is therefore

X1X2 − x1x2 ≈ x1e2 + x2e1.

The absolute error is

|X1X2 − x1x2| ≈ |x1e2 + x2e1| ≤ |x1e2| + |x2e1| = |x1||e2| + |x2||e1|,

and if e1 and e2 are due to rounding off to k1 and k2 decimal places, respectively, we have

|e1x2 + e2x1| ≤ |x2| × ½ × 10^−k1 + |x1| × ½ × 10^−k2.

Furthermore, if k1 = k2 = k, then

|e1x2 + e2x1| ≤ ½(|x1| + |x2|) × 10^−k.

The approximate relative error in the product x1x2 is

(e1x2 + e2x1)/(X1X2) ≈ (e1x2 + e2x1)/(x1x2)    (X1, X2 not known)
                     = e1/x1 + e2/x2.

Thus

|(e1x2 + e2x1)/(X1X2)| ≈ |e1/x1 + e2/x2| ≤ |e1/x1| + |e2/x2|.
Example 1.2 The numbers X1 and X2, when rounded to 3 d.p., are 4.701 and 0.832, respectively.
Evaluate an approximation to X1X2 and discuss the error involved.

Solution:
Absolute error:
X1X2 is approximated by (4.701)(0.832) = 3.911232.
Here k = 3, so the approximate absolute error is less than or equal to

½(4.701 + 0.832) × 10^−3 ≈ 0.0028.

Whence

3.908 < X1X2 < 3.914,

i.e. X1X2 = 3.911 ± 0.003, or 3.91 to 2 d.p.
Relative error:
Relative error in 4.701 ≤ 0.0005/4.701 ≈ 0.00011
Relative error in 0.832 ≤ 0.0005/0.832 ≈ 0.00060
The approximate relative error in the product is less than or equal to
0.00011 + 0.00060 = 0.00071.
Therefore, the approximate absolute error in the product is less than or equal to
(0.00071)(3.911) ≈ 0.0028.
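The bound in Example 1.2 can be verified numerically (a Python sketch of the worst case, assuming both rounding errors take their maximum value):

```python
# Worst-case check of Example 1.2: x1, x2 are rounded to k = 3 d.p.
x1, x2, k = 4.701, 0.832, 3
half_ulp = 0.5 * 10**(-k)                    # maximum rounding error, (1/2)*10^-3

product = x1 * x2                            # 3.911232
abs_bound = (abs(x1) + abs(x2)) * half_ulp   # (1/2)(|x1| + |x2|)*10^-k ≈ 0.0028

# Push both true values to the far edge of their rounding intervals.
worst = (x1 + half_ulp) * (x2 + half_ulp)
# The deviation exceeds abs_bound only by the tiny e1*e2 term (2.5e-7).
assert abs(worst - product) <= abs_bound + half_ulp**2
```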

Error in Division

Using the same notation,

X1/X2 = (x1 + e1)/(x2 + e2) = (x1 + e1)(x2 + e2)^−1
      = ((x1 + e1)/x2)(1 + e2/x2)^−1.

Expanding the binomial (1 + e2/x2)^−1 yields

((x1 + e1)/x2)(1 + e2/x2)^−1 = ((x1 + e1)/x2)(1 − e2/x2 + e2^2/x2^2 − e2^3/x2^3 + ...)
                             = (x1 + e1)(1/x2 − e2/x2^2 + e2^2/x2^3 − ...)
                             = (x1 + e1)/x2 − e2(x1 + e1)/x2^2 + ...
                             = x1/x2 + e1/x2 − x1e2/x2^2 − e1e2/x2^2 + ...

Since e1 and e2 are small, their products are even smaller. So

(x1 + e1)/(x2 + e2) ≈ x1/x2 + e1/x2 − x1e2/x2^2,

hence the error

X1/X2 − x1/x2 ≈ e1/x2 − x1e2/x2^2.

The relative error is

(X1/X2 − x1/x2)/(X1/X2) ≈ (e1/x2 − x1e2/x2^2)/(x1/x2)
                        = e1/x1 − e2/x2.

This means that the approximate relative error in the quotient equals the difference of the individual relative
errors. Taking the modulus yields

|(X1/X2 − x1/x2)/(X1/X2)| ≈ |e1/x1 − e2/x2|
                          ≤ |e1/x1| + |e2/x2|;

that is, the modulus of the relative error in the quotient is less than or equal to the sum of the individual relative
errors.

It is worth noting that in division it is easier to use the relative error than it is to use the absolute error.

Example 1.3
The numbers X and Y, when rounded to 4 significant figures, are 37.26 and 0.05371, respectively.
Evaluate an approximation to X/Y and discuss the error involved.

Solution:

Approximation: 37.26/0.05371 = 693.72556 = 693.7 (4 s.f.)
Approx. relative error ≤ 0.005/37.26 + 0.000005/0.05371 ≈ 0.00023.
Hence the approx. absolute error in X/Y ≤ (0.00023)(693.7) ≈ 0.16.
The true value lies between 693.565 and 693.885, or between 693.6 and 693.9 (4 s.f.). Hence

X/Y = 693.7 ± 0.16.
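Example 1.3 translates into a few lines of code (a Python sketch, for illustration):

```python
# Division-error check for Example 1.3 (values rounded to 4 s.f.).
x1, x2 = 37.26, 0.05371
e1_max, e2_max = 0.005, 0.000005   # half a unit in the last retained digit

quotient = x1 / x2                      # ≈ 693.7256
rel_bound = e1_max / x1 + e2_max / x2   # sum of relative errors ≈ 0.00023
abs_bound = rel_bound * quotient        # ≈ 0.16
```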
The validity of the formulae for error approximation can be extended to 3 or more values; e.g. if x1, x2 and
x3 are approximate values of X1, X2 and X3, then

Addition/Subtraction:  |Abs. error| ≤ |e1| + |e2| + |e3|
Multiplication:        |Rel. error| ≤ |e1/x1| + |e2/x2| + |e3/x3|.

1.4 Convergence
As different methods and their algorithms are developed, it is natural to ask whether their
implementation will lead to the expected result - the actual answer. This aspect of analysis is convergence.

This topic is discussed in depth in Section 1.3 of the textbook. The theory of sequences comes in handy be-
cause the implementation of a numerical scheme generates a sequence of values in the search for an approx-
imate solution. The solution of the given problem will be the limiting value of the sequence of approximate
values as it converges (if it does) to the solution being sought.

It is very important that you read this section because it forms the basis of the discussion on convergence of
the various algorithms presented in subsequent sections. Of particular note are the terminology and notation
for the rate or order of convergence. Note also that the analysis of convergence may vary according to the
method under discussion - it is easier for some methods than for others.


Finally, a note on the algorithms given in the textbook. The textbook gives general-purpose algorithms (so-
called pseudocode) which outline the steps needed to implement the various methods. These algorithms are
not written in any particular computer programming language; they can be translated into the programming
language of your choice.

Your access to some of the standard software available in the institution may be restricted. However, there are
various open-source software packages suitable for numerical analysis that can be downloaded free via the
internet, and it is worth taking the time to do a little research on these. MATLAB is one of the most widely
used software packages for numerical analysis, and OCTAVE is its closest open-source equivalent. What is
good about these packages is that they can be used interactively (as a super calculator) or by programming,
and their handling of matrix computation is excellent. Learning how to use suitable software can
save you a lot of computational time.

1.5 In a Nutshell
The things to bear in mind as you study the various numerical methods are:

(i) their formulation;

(ii) their implementation;

(iii) their performance - i.e. their convergence and the error incurred in their use.

In addition, the main goal in using the various methods is efficiency, achieved by minimising the cost of
computational error.

1.5.1 A Note on Mathematical Notation


The notation used in the textbook is very extensive. In order to understand and manage the various algorithms,
it is important to understand and interpret the notation used. A quick review of the basic notation used to compact
mathematical expressions, such as Σ for summation, Π for products, subscripting (x_i), and indexing
(i = 1, 2, . . . , n or (k), k = 0, 1, 2, . . . ), is essential.
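As a quick reminder of how this notation maps onto computation (a Python sketch; the data and the iteration are made up for illustration):

```python
import math

xs = [2.0, 4.0, 6.0, 8.0]     # x_1, ..., x_n (hypothetical data)

# Sigma notation: the sum over i = 1, ..., n of x_i
total = sum(xs)               # 20.0

# Pi notation: the product over i = 1, ..., n of x_i
prod = math.prod(xs)          # 384.0

# Parenthesised index (k): successive iterates of a scheme,
# here the toy iteration x(k+1) = x(k)/2 + 1 starting at x(0) = 0.
iterates = [0.0]
for k in range(4):
    iterates.append(iterates[-1] / 2 + 1)
# the iterates approach the fixed point x = 2
```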

REFERENCES
[1] Richard L. Burden and J. Douglas Faires. Numerical Analysis, Tenth Edition. Brooks/Cole, Cengage
Learning, 2011. Chapter 1.
