
Chapter One: Basic Concepts on Error Estimation

1.1. Sources of errors

Error is the uncertainty that may be present in the solution to a problem: the difference between a true (exact) value and an estimated (approximate) value. Errors are inevitable in numerical analysis, which is largely preoccupied with understanding and controlling errors of various kinds.

The sources of error are:

i. Inaccurate representation of numbers: such errors arise from the intrinsic limitation of the finite-precision representation of numbers (except for a restricted set of integers) in computers, since computer memory has a finite capacity.
ii. The arithmetic performed by a computing machine (computer, scientific calculator, …).
iii. Using approximate formulae.
iv. Errors originally present in the statement of the problem.
1.2. Approximations of errors

There are two kinds of numbers, exact and approximate. Examples of exact numbers are 1, 2, 3, …, 1/2, 3/2, √2, π, e, etc. Approximate numbers are those that represent numbers to a certain degree of accuracy (based on the number of decimal places or the number of significant digits/figures). For instance, 3.1416 and 1.4142 are approximate values of π and √2 respectively, each correct to five significant digits/figures, or four decimal places.

Q. What are the errors in the above approximation?

Note: A significant figure is a digit used to express a number. For example, the numbers of significant figures in 3.14, 0.00046, 5.0600, 10.246 and 2500 are three, two, five, five and uncertain, respectively.

The digits 1, 2, 3, 4, 5, 6, 7, 8, 9 are significant digits. ‘0’ is also a significant figure except when
it is used to fix the decimal point or to fill the places of unknown or discarded digits.
Sometimes the term significant figures refers to the number of important single digits (0 through 9 inclusive) in the coefficient of an expression in scientific notation. The number of significant figures in an expression indicates the confidence or precision with which an engineer or scientist states a quantity.
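The counting rules above can be sketched in a short routine. Here `sig_figs` is a hypothetical helper (not from the text) that counts significant figures in a decimal numeral given as a string; bare integers with trailing zeros are reported as ambiguous, matching the remark about 2500:

```python
def sig_figs(s: str):
    """Count significant figures in a decimal numeral given as a string.

    Returns None when the count is ambiguous (a bare integer with
    trailing zeros, such as "2500").
    """
    s = s.lstrip("+-")
    if "." in s:
        digits = s.replace(".", "")
        # Leading zeros only fix the decimal point; they are not significant.
        return len(digits.lstrip("0"))
    # Integer with trailing zeros: significance is uncertain.
    if s != "0" and s.endswith("0"):
        return None
    return len(s.lstrip("0"))

print([sig_figs(s) for s in ["3.14", "0.00046", "5.0600", "10.246", "2500"]])
```

Running this on the five numbers from the note reproduces the counts three, two, five, five and "uncertain" (None).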

Any real number x can be accurately represented by an infinite sequence of digits as

x = ±(0.d1 d2 d3 …) × 10^e, where e is an integer and di ∈ {0, 1, 2, …, 9}

or, more generally in a base β,

x = ±(0.d1 d2 d3 …) × β^e, where di ∈ {0, 1, …, β − 1}.

The mantissa is given by 0.d1 d2 d3 …

The digits of the mantissa do not all have the same significance because they represent different powers of 10. Thus we say that d1 is the most significant digit, and the significance of the digits diminishes from left to right.

To represent any real number x on the computer, we associate to x a floating point representation fl(x) of a form similar to that of x, but with only t digits, so

fl(x) = ±(0.d1 d2 … dt) × 10^e.

If d1 ≠ 0, the representation carries t significant figures/digits.

If d1 = 0, it carries fewer than t significant figures/digits.

Therefore, we normalize the representation by insisting that d1 ≠ 0, i.e. 1 ≤ d1 ≤ 9.

Example: a) the numbers of significant figures in the three values given are two, two and five, respectively.

b) To represent a real number x = ±(0.d1 d2 d3 …) × 10^e with seven significant figures, we take either

fl(x) = ±(0.d1 d2 … d7) × 10^e, (chopping/rounding down)

or

fl(x) = ±(0.d1 d2 … d7 + 10^−7) × 10^e. (rounding up)

Single precision ranges roughly from 10^−38 to 10^38 and carries a seven-significant-figure number (six decimal places).

Double precision ranges roughly from 10^−308 to 10^308 and carries a sixteen-significant-figure number (fifteen decimal places). Several computers implement double precision, and it is used in problems requiring greater accuracy. Beyond these ranges a value cannot be represented (overflow), and the machine may report NaN ("not a number").

A computer can only represent a number approximately in decimal or scientific notation. For example, a number like 1/3 may be represented as 0.333333 on a PC, i.e. to six significant figures.
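A minimal sketch of the t-digit representation fl(x), assuming base 10 and decimal (not binary) rounding; `fl` is a hypothetical helper that normalizes the mantissa into [0.1, 1) and then either chops or rounds it to t digits:

```python
import math

def fl(x: float, t: int, mode: str = "round") -> float:
    """Return the t-significant-digit decimal representation of x:
    'chop' discards digits beyond the t-th; 'round' rounds to nearest."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    e = math.floor(math.log10(x)) + 1   # exponent giving mantissa in [0.1, 1)
    m = x / 10**e                        # normalized mantissa, d1 != 0
    scaled = m * 10**t
    kept = math.floor(scaled) if mode == "chop" else round(scaled)
    return sign * kept / 10**t * 10**e

print(fl(3.14159265, 7, "chop"))    # chopping keeps 3.141592...
print(fl(3.14159265, 7, "round"))   # rounding gives 3.141593...
```

Because the arithmetic itself is done in binary floating point, the returned values are only accurate to within the machine precision of the decimal targets; the sketch illustrates the idea rather than a bit-exact decimal system.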

1.3. Rounding-off errors

If we divide 2 by 7, we get 0.285714…, a quotient which is a non-terminating decimal fraction. To use such a number in practical computation, it must be cut off to a manageable size such as 0.29, 0.286, 0.2857, etc. The process of cutting off superfluous digits and retaining as many digits as desired is known as rounding off a number; in other words, the process of dropping unwanted digits is called rounding off. Numbers are rounded off according to the following rules:

To round off a number to n significant figures, discard all digits to the right of the nth digit, and:

(1) if the discarded digit in the (n + 1)th place is less than 5, leave the nth digit unaltered, e.g., 8.893 becomes 8.89 (chopping/round down);
(2) if the discarded digit in the (n + 1)th place is greater than 5, increase the nth digit by unity, e.g., 5.3456 becomes 5.346 (round up);
(3) if the discarded digit in the (n + 1)th place is exactly 5, increase the nth digit by unity if it is odd, otherwise leave it unchanged, e.g., 11.675 becomes 11.68 and 11.685 becomes 11.68.
The round-off error is the quantity that arises from this process of rounding off numbers. It can be reduced by carrying the computation to more significant figures at each step.
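Rule (3) is the classic round-half-to-even convention, which Python's `decimal` module implements directly. A sketch using exact decimal strings, so the binary representation of the inputs cannot disturb rule (3):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_off(value: str, places: int) -> str:
    """Round a decimal numeral to the given number of decimal places,
    following rules (1)-(3): keep below 5, round up above 5, half to even."""
    quantum = Decimal(1).scaleb(-places)   # e.g. places=2 -> Decimal('0.01')
    return str(Decimal(value).quantize(quantum, rounding=ROUND_HALF_EVEN))

print(round_off("8.893", 2))    # rule 1: discarded 3 < 5, digit unaltered
print(round_off("5.3456", 3))   # rule 2: discarded 6 > 5, digit increased
print(round_off("11.675", 2))   # rule 3: 7 is odd, increased to 8
print(round_off("11.685", 2))   # rule 3: 8 is even, left unchanged
```

The four calls reproduce exactly the examples given in the rules: 8.89, 5.346, 11.68 and 11.68.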

Inherent error is the error already present in the statement of the problem before its solution. It arises either from simplifying assumptions in the mathematical formulation of the problem or from errors in the physical measurements of its parameters. Inherent error can be minimized by obtaining better data, by using high-precision computing aids, and by correcting obvious errors in the data.

Truncation error is caused by using approximate formulae in computation, or by replacing an infinite process by a finite one: when a function f(x) is evaluated from an infinite series that is "truncated or chopped" at a certain stage, we get this type of error. The study of this type of error is usually associated with the problem of convergence.

1.4. Absolute and relative errors

Without any further details it should already be clear that representing x by fl(x) necessarily causes an error. A central question is how accurate the floating point representation of the real numbers is. Specifically, we ask how large the relative error, defined below, in such a representation can be.

Absolute error is the numerical difference between the true value of a quantity and its approximate value. Thus if X1 is the approximate value of the quantity X, then the absolute error is denoted by E_a and given by

E_a = |X − X1|

Relative error is denoted by E_r and given by

E_r = |X − X1| / |X| = E_a / |X|

Percentage error is denoted by E_p and given by

E_p = E_r × 100.

Example: a) Round off the numbers 979.267, 0.065738 and 56.395 correct to four significant figures and compute the absolute, relative and percentage errors.

b) Round off a given value to five significant figures and compute the same errors.

Solution: (a) 979.267 rounded to four significant figures is 979.3, so

E_a = |979.267 − 979.3| = 0.033

E_r = 0.033 / 979.267 ≈ 0.0000337

E_p ≈ 0.00337 %

Similarly, 0.065738 rounded to four significant figures is 0.06574, so

E_a = |0.065738 − 0.06574| = 0.000002

E_r = 0.000002 / 0.065738 ≈ 0.0000304

E_p ≈ 0.00304 %

The number 56.395 and part (b) are left as exercises.


c) Fill in a table of true values x, approximate values x̄, and the corresponding absolute, relative and percentage errors for the given data.

Note that, as a measure of accuracy, the absolute error can be misleading; the relative/percentage error is more meaningful, because the relative error takes the size of the value into consideration.

1.5. Propagation of errors

A numerical process is unstable if small errors made at one stage of the process are magnified and propagated in subsequent stages and seriously degrade the accuracy of the overall calculation. Whether a process is stable or unstable should be decided on the basis of relative error.

Example: let a = 0.23371258 × 10^−4, b = 0.33678429 × 10^2 and c = −0.33677811 × 10^2, and observe the following values computed with eight significant digits:

fl(fl(a + b) + c) = 0.64100000 × 10^−3

fl(a + fl(b + c)) = 0.64137126 × 10^−3

The exact result is a + b + c = 0.641371258 × 10^−3.

Here, two different but mathematically equivalent methods (by associativity) for evaluating the same expression lead to different results when floating-point arithmetic is used. For numerical purposes it is therefore important to distinguish between different evaluation schemes even if they are mathematically equivalent. Thus we call a finite sequence of elementary operations (as given, for instance, by consecutive computer instructions) which prescribes how to calculate the solution of a problem from given input data an algorithm.
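The same non-associativity can be seen in ordinary double-precision arithmetic with a well-known quick check:

```python
# Floating-point addition is not associative: regrouping changes the result.
left = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)

print(left)            # 0.6000000000000001
print(right)           # 0.6
print(left == right)   # False
```

Both groupings evaluate the same mathematical sum 0.1 + 0.2 + 0.3, yet the rounding performed after each intermediate addition differs, so the two evaluation schemes produce different machine numbers.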

1.6. Instability

An algorithm is a procedure that describes, in an unambiguous manner, a finite sequence of steps to be performed in a specified order. If small errors in the input produce small errors in the output, the problem is called stable.

Equivalently, if an algorithm satisfies the property that small changes in the initial data produce correspondingly small changes in the final results, then it is called stable; otherwise it is unstable. Some algorithms are stable only for certain choices of initial data, and are called conditionally stable. We will characterize the stability properties of algorithms whenever possible.
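A classical illustration of these ideas (an assumed example, not from the text): the integrals I_n = ∫₀¹ xⁿ/(x + 5) dx satisfy the recurrence I_n = 1/n − 5·I_{n−1} with I_0 = ln(6/5). Running the recurrence forward multiplies any rounding error by 5 at every step (unstable), while running it backward from a crude guess divides the error by 5 (stable):

```python
import math

def forward(n: int) -> float:
    """Unstable direction: I_k = 1/k - 5*I_{k-1}, from I_0 = ln(6/5).
    Each step multiplies the accumulated error by 5."""
    I = math.log(6 / 5)
    for k in range(1, n + 1):
        I = 1 / k - 5 * I
    return I

def backward(n: int, start: int = 60) -> float:
    """Stable direction: I_{k-1} = (1/k - I_k) / 5, from the crude guess
    I_start = 0. Each step divides the starting error by 5."""
    I = 0.0
    for k in range(start, n, -1):
        I = (1 / k - I) / 5
    return I

print(forward(30))            # garbage: initial rounding amplified by 5**30
print(backward(30))           # accurate despite starting from I_60 = 0
print(backward(0), math.log(6 / 5))   # recovers I_0 to full precision
```

The true I_30 lies between 1/186 and 1/155 (since 5 ≤ x + 5 ≤ 6 on [0, 1]); the backward value lands in that interval while the forward value is wildly wrong, so the same recurrence is conditionally stable depending on the direction in which it is run.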
