
ELEN 30073

Lesson 1
Approximation and Rounding Errors
Introduction
 This module covers the approximations and rounding errors that arise in the
numerical analysis of equations and formulas.

Overview
 Numerical methods must be sufficiently exact and precise to satisfy the
requirements of electrical engineering problems, whether theoretical (Ohm’s
law, Kirchhoff’s voltage and current laws, fault analysis) or practical (estimating
wire size and length, selecting the optimal brand of a device based on cost).
It is therefore important to differentiate the relevant terminology and its
applications before going into the numerical analysis itself.
Objectives
After successful completion of this module, you should be able to:
• Differentiate between accuracy, precision and bias
• Define significant figures
• Determine different error types
• Conduct an error analysis for solutions and answers
Approximations and Rounding Errors

 Precision: the ability to give multiple estimates that are near to each other
(a measure of random deviations).
 Bias: the difference between the center of the holes and the center of the
target (a systematic deviation of the values from the true value).
 Accuracy: the degree to which the measurements deviate from the true value.
Numerical methods must be sufficiently exact (without bias) and precise to satisfy the
requirements of engineering problems. From now on we will use the term error to refer to
both the inaccuracy and the lack of precision of our predictions.

[Figure: four shooting targets illustrating (a) inaccurate and imprecise, (b) accurate but imprecise, (c) inaccurate but precise, and (d) accurate and precise results.]
Approximations and Rounding Errors

Four shooting results:

 A is successful: it is both accurate and precise.
 B: the holes agree with each other (consistency or precision), but they deviate
considerably from where the shooter was aiming (no correctness); B lacks
correctness (exactness).
 C lacks both correctness and consistency.
 D lacks consistency (precision).
 The shooters of targets C and D were imprecise.
Summary of Bias, Precision and Accuracy

Target   Bias              Precision   Accuracy
A        None (unbiased)   High        High
B        High              High        Low
C        None (unbiased)   Low         Low
D        Moderate          Low         Low
Error Types

 In general, errors can be classified based on their sources as non-numerical
and numerical errors.

 Non-numerical errors:
(1) modeling errors: generated by assumptions and limitations
(2) blunders and mistakes: human errors
(3) uncertainty in information and data
 Numerical errors:
(1) round-off errors: due to the limited number of significant digits a computer
can store
(2) truncation errors: due to truncated terms, e.g. when an infinite Taylor
series is cut off after a finite number of terms
(3) propagation errors: due to a sequence of operations; they can be reduced
with a good computational order, e.g. when summing several values, we can
rank the values in ascending order before performing the summation (see the
sketch below)
(4) mathematical-approximation errors: e.g. using a linear model to represent
a nonlinear expression
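A minimal sketch of the summation-order effect (the specific numbers are illustrative, not from the module): adding many small values to one large value in single precision loses less accuracy when the values are summed in ascending order, because the small terms accumulate before meeting the large one.

```python
import numpy as np

# Illustrative data (assumed): one large value followed by many small ones.
values = np.array([1.0e6] + [0.1] * 10000, dtype=np.float32)

ascending = np.sort(values)    # small values first
descending = ascending[::-1]   # large value first

sum_asc = np.float32(0.0)
for v in ascending:
    sum_asc += v

sum_desc = np.float32(0.0)
for v in descending:
    sum_desc += v

exact = 1.0e6 + 0.1 * 10000    # reference computed in double precision
print("ascending order :", sum_asc, "error:", abs(exact - float(sum_asc)))
print("descending order:", sum_desc, "error:", abs(exact - float(sum_desc)))
```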
Measurement and Truncation Errors

 error (e): the difference between the computed (x_c) and true (x_t) values of a
number x:

e = x_c − x_t

 The relative true error (e_r):

e_r = (x_c − x_t) / x_t = e / x_t
 Example: Truncation Error in Atomic Weight
The atomic weight of oxygen is 15.9994. If we round it to 16, the error is

e = 16 − 15.9994 = 0.0006

The relative true error:

e_r = 0.0006 / 15.9994 ≈ 0.4 × 10⁻⁴
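A short sketch (illustrative, using the oxygen example above) of how the true and relative true errors can be computed:

```python
x_true = 15.9994   # true atomic weight of oxygen
x_comp = 16.0      # rounded (computed) value

e = x_comp - x_true    # true error
e_r = e / x_true       # relative true error
print(f"e = {e:.4f}, e_r = {e_r:.2e}")   # e = 0.0006, e_r ≈ 3.75e-05
```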
Approximations and Rounding Errors
 Error definitions:
 True value = approximation + absolute error
 Absolute error = true value − approximation
 Relative error = absolute error / true value

ε_t = (absolute error / true value) × 100%

 In real cases the true value is not always known, thus:

ε_a = (approximate error / approximate value) × 100%

 On many occasions, the error is calculated as the difference between the
current and the previous approximations:

ε_a = ((current approximation − previous approximation) / current approximation) × 100%
 Example: Numerical Error Analysis

x³ − 3x² − 6x + 8 = 0

Rearranging (dividing by x and taking the square root) gives the iteration

x_{i+1} = √(3x_i + 6 − 8/x_i)

With the initial estimate x₀ = 2:

x₁ = √(3x₀ + 6 − 8/x₀) = √8 = 2.828427

error: e = x₁ − x₀ = 0.828427 (see the table on the next page)
Table: Error Analysis with x_t = 4

Trial i   x_i        e_i        x_t − x_i
0         2.000000   -          2.000000
1         2.828427   0.828427   1.171573
2         3.414214   0.585786   0.585786
3         3.728203   0.313989   0.271797
4         3.877989   0.149787   0.122011
5         3.946016   0.068027   0.053984
6         3.976265   0.030249   0.023735
7         3.989594   0.013328   0.010406
8         3.995443   0.005849   0.004557
9         3.998005   0.002563   0.001995
10        3.999127   0.001122   0.000873
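A minimal sketch (assuming the iteration above) that reproduces the error analysis table, stopping when the approximate error drops below a chosen tolerance:

```python
import math

def iterate(x0=2.0, tol=1e-3, max_iter=50):
    """Fixed-point iteration x = sqrt(3x + 6 - 8/x) with error tracking."""
    x_true = 4.0                      # known root, used only for the last column
    x = x0
    print(f"{0:2d}  x = {x:.6f}              x_t - x = {x_true - x:.6f}")
    for i in range(1, max_iter + 1):
        x_new = math.sqrt(3 * x + 6 - 8 / x)
        e = x_new - x                 # approximate error (current - previous)
        print(f"{i:2d}  x = {x_new:.6f}  e = {e:.6f}  x_t - x = {x_true - x_new:.6f}")
        if abs(e) < tol:
            break
        x = x_new
    return x_new

iterate()
```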


APPROXIMATION AND ERRORS
Numerical Methods
Instead of solving for the exact solution, we solve math problems with a series
of arithmetic operations.

Example: ∫_a^b (1/x) dx
analytical solution: ln(b) − ln(a)
numerical solution: e.g., the Trapezoidal Rule

Error Analysis
(a) identify the possible sources of error
(b) estimate the magnitude of the error
(c) determine how to minimize and control the error
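A brief sketch (the interval endpoints and number of panels are illustrative choices, not from the module) comparing the composite Trapezoidal Rule with the analytical result ln(b) − ln(a):

```python
import math

def trapezoid(f, a, b, n):
    """Composite Trapezoidal Rule with n panels."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

a, b = 1.0, 2.0   # assumed limits for the demonstration
numerical = trapezoid(lambda x: 1.0 / x, a, b, n=10)
analytical = math.log(b) - math.log(a)
print(f"numerical  = {numerical:.6f}")
print(f"analytical = {analytical:.6f}")
print(f"true error = {analytical - numerical:.2e}")
```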
Approximations and Rounding Errors
 Thus, the stopping criterion of a numerical method can be:

|ε_a| < ε_s

where ε_s is a prefixed percent tolerance.

 It is convenient to relate the errors to the number of significant figures. If the
following relation holds, one can be sure that at least n significant figures are
correct:

ε_s = (0.5 × 10^(2−n))%
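A small sketch of this relation (illustrative only): the percent tolerance that guarantees at least n correct significant figures.

```python
def eps_s(n):
    """Percent tolerance guaranteeing at least n correct significant figures."""
    return 0.5 * 10 ** (2 - n)

for n in range(1, 6):
    print(f"n = {n}: eps_s = {eps_s(n)}%")   # 5.0%, 0.5%, 0.05%, 0.005%, 0.0005%
```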
Approximations and Rounding Errors
 Numerical systems:
 A numerical system is a convention to represent quantities. Since we have
10 fingers on our hands, the most popular numerical system has base 10; it
uses 10 different digits.

a) Base 10 example:
86409 = 8×10⁴ + 6×10³ + 4×10² + 0×10¹ + 9×10⁰
      = 80000 + 6000 + 400 + 0 + 9

 However, computers, due to their memory structure, can only store two
digits: 0 and 1. Thus, they use the binary system of numeric representation.

b) Base 2 example:
(10101101)₂ = 1×2⁷ + 0×2⁶ + 1×2⁵ + 0×2⁴ + 1×2³ + 1×2² + 0×2¹ + 1×2⁰
            = 128 + 0 + 32 + 0 + 8 + 4 + 0 + 1 = 173
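A quick sketch in Python of the same positional expansion (the loop below is illustrative; `int(s, 2)` is the built-in check):

```python
def binary_to_decimal(bits: str) -> int:
    """Expand a binary string positionally: sum of bit * 2**position."""
    value = 0
    for position, bit in enumerate(reversed(bits)):
        value += int(bit) * 2 ** position
    return value

print(binary_to_decimal("10101101"))   # 173
print(int("10101101", 2))              # 173, using Python's built-in conversion
```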
Approximations and Rounding Errors
 Unfortunately, computers introduce errors in the calculations. However, since
many engineering problems have no analytical solution, we are forced to use
numerical methods (approximations). The only option we have is to accept the
error and try to reduce it to a tolerable level.
 The only way of minimizing the errors is by knowing and
understanding why they occur and how we can diminish them.
 The most frequent errors are:
 Rounding errors, due to the fact that computers can work
only with a finite representation of numbers.
 Truncation errors, due to differences between the exact
and the approximate (numeric) formulations of the
mathematical problem being dealt with.
 Before analyzing each one of them, we will see two important
concepts on the computer representation of numbers.

Approximations and Rounding Errors
 Significant figures of a number:
 The significant figures of a number are those which can be used with
confidence.
 This concept has two important implications:
1. An approximation is acceptable when it is exact for a given number of
significant figures.
2. There are magnitudes or constants that cannot be represented exactly:

π = 3.14159265...
√17 = 4.123105...
Significant Figures
 If 46.23 is exact to the four digits shown, it has
four significant digits (The last digit is
imprecise). The error is no more than 0.005.
 The digits from 1 to 9 are always significant,
with zero being significant where it is not being
used to set the position of the decimal point.
 2410, 2.41, 0.00241: three significant digits
(0 in 2410 is only used to set the decimal place.)
 Scientific notation can be used to avoid
confusion:
2.41×103: three significant digits
2.410×103: four significant digits
 Computation: any mathematical operation using an imprecise digit is
imprecise.
 Example: 3 significant digits (underline indicates an imprecise digit):

 4.26     starting number
 8.39     starting number
 0.3834   0.09 times 4.26
 1.278    0.3 times 4.26
34.08     8 times 4.26
35.7414   total (product result)
• Example: Compute Ŷ = 11.587 + 1.9860x
Ŷ₃ = 11.6 + 1.99x      (three significant digits)
Ŷ₄ = 11.59 + 1.986x    (four significant digits)
Ŷ₅ = 11.587 + 1.9860x  (five significant digits)
Rounding should be done at the end of the computation, not at intermediate
calculations.

Table: Rounding Numerical Calculations

x     Ŷ₃       Ŷ₄       Ŷ₅
1     13.59    13.576   13.573
20    51.40    51.310   51.307
40    91.20    91.030   91.027
100   210.60   210.19   210.187
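A small sketch (the x values come from the table above) showing how rounding the coefficients to fewer significant digits changes the prediction:

```python
models = {
    "3 sig. digits": (11.6,   1.99),
    "4 sig. digits": (11.59,  1.986),
    "5 sig. digits": (11.587, 1.9860),
}

for x in (1, 20, 40, 100):
    row = {name: b0 + b1 * x for name, (b0, b1) in models.items()}
    print(x, row)
```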
• Example: Arithmetic Operations and Significant Digits. To compute the area
of a triangle:
base = 12.3 (3 significant digits)
height = 17.2 (3 significant digits)
area A = 0.5bh = 0.5(12.3)(17.2) = 106
(If we ignore the concept of significant digits, A = 105.78.)
The true value is expected to lie between
0.5(12.25)(17.15) = 105.04375
and
0.5(12.35)(17.25) = 106.51875
Note that 0.5 is an exact value, though it has only one significant digit.
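A quick sketch (values from the example above) of the area and the interval implied by the measurement uncertainty:

```python
base, height = 12.3, 17.2   # each known to 3 significant digits
area = 0.5 * base * height
low  = 0.5 * (base - 0.05) * (height - 0.05)   # 0.5 * 12.25 * 17.15
high = 0.5 * (base + 0.05) * (height + 0.05)   # 0.5 * 12.35 * 17.25
print(f"area = {area}  (reported to 3 sig. digits: 106)")
print(f"true area lies between {low} and {high}")
```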
Round-off Errors
Background: How are numbers stored in a computer?

• The fundamental unit, a "word," consists of a string of "bits" (binary digits).
• Because computers are made up of gates or switches which are either closed
or open, we work in the binary or base-2 system.
• A number in base q will be denoted by

(a_n a_{n−1} … a_1 a_0 . b_1 b_2 … b_k …)_q

The conversion to base 10 is, by definition,

(a_n a_{n−1} … a_1 a_0 . b_1 b_2 … b_k …)_q = a_n q^n + a_{n−1} q^{n−1} + … + a_1 q^1 + a_0 q^0 + b_1 q^{−1} + b_2 q^{−2} + …

Example:

(1011.01)₂ = 1×2³ + 0×2² + 1×2¹ + 1×2⁰ + 0×2⁻¹ + 1×2⁻² = 11.25
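A short sketch of this definition in Python (the function name and the split on '.' are illustrative choices):

```python
def to_base10(digits: str, q: int) -> float:
    """Convert a base-q string like '1011.01' to a base-10 number."""
    integer, _, fraction = digits.partition(".")
    value = 0.0
    for k, d in enumerate(reversed(integer)):    # a_k * q**k
        value += int(d) * q ** k
    for k, d in enumerate(fraction, start=1):    # b_k * q**(-k)
        value += int(d) * q ** (-k)
    return value

print(to_base10("1011.01", 2))   # 11.25
```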
Round-off Errors
Conversion from base 10 to base q.

This is the recipe for conversion (shown here for q = 2):

Integer part: divide the integer part by 2 successively and retain the remainder
at each step; the remainders, read in reverse order, give the binary digits.

Fractional part: multiply the fractional part by 2 successively and retain the
integer part at each step.

Example:

(26.1)₁₀ ≈ (11010.00011)₂ (the fractional part does not terminate in base 2)
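A minimal sketch of this recipe (the digit count for the fractional part is an arbitrary cutoff, since 0.1 does not terminate in binary):

```python
def decimal_to_binary(value: float, frac_digits: int = 5) -> str:
    """Convert a non-negative decimal number to a binary string."""
    integer, fraction = int(value), value - int(value)

    # Integer part: successive division by 2, remainders read in reverse.
    int_bits = ""
    while integer > 0:
        int_bits = str(integer % 2) + int_bits
        integer //= 2
    int_bits = int_bits or "0"

    # Fractional part: successive multiplication by 2, keeping the integer part.
    frac_bits = ""
    for _ in range(frac_digits):
        fraction *= 2
        frac_bits += str(int(fraction))
        fraction -= int(fraction)

    return f"{int_bits}.{frac_bits}"

print(decimal_to_binary(26.1))   # 11010.00011
```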
Approximations and Rounding Errors

 Representation of integer numbers:
 To represent base-10 numbers in binary form, the signed magnitude method
is used. The first bit stores the sign (0 for positive, 1 for negative); the
remaining bits are used to store the number.
 A computer working with words of 16 bits can store integer numbers in the
range -32768 to 32767.
Approximations and Rounding Errors
 Floating point representation:
 This representation is used for fractional quantities. It has a fractional part,
called the mantissa, and an integer part, called the exponent or characteristic:

m × bᵉ

 The mantissa is usually normalized, so that the value of m is limited
(b = 2 in binary):

1/b ≤ m < 1
Approximations and Rounding Errors
IEEE-floating point formats: there are two types of “precision” (simple and double). They differ in
the number of digits available for storing the numbers:
 Simple precision (32 bits): 1 bit for the sign, 8 bits for the exponent, 23 bits for the mantissa.
 Double precision (64 bits, two words of 32 bits): 1 bit for the sign, 11 bits for the exponent, 52
bits for the mantissa.

The number of bits for the exponent and the mantissa determine the “underflow” and
“overflow” numbers.
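A brief sketch (using NumPy's `finfo`, which reports these limits; not part of the module itself) of how the single- and double-precision ranges can be inspected:

```python
import numpy as np

for name, dtype in (("single (32-bit)", np.float32), ("double (64-bit)", np.float64)):
    info = np.finfo(dtype)
    print(f"{name}: smallest normal = {info.tiny}, largest = {info.max}, "
          f"machine epsilon = {info.eps}")
```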

29
Round-off Error due to Arithmetic Operations

Subtractive Cancellation (subtracting numbers of almost equal size) – too few
significant figures are left.

Examples:

1. Use of the “standard” formula for solving P(x) = 0, where P(x) is a
polynomial of degree 2.
2. Computation of f(x) = (x+1)^(1/2) − x^(1/2) for large x.
3. Computation of g(x) = (1 − cos x)/x² for small x (see the sketch below for
example 2).
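A minimal sketch of example 2 (the x value is illustrative): the naive form loses digits to cancellation, while the algebraically equivalent form 1/(√(x+1) + √x) keeps them.

```python
import math

x = 1.0e12                                        # a large x (illustrative)
naive  = math.sqrt(x + 1) - math.sqrt(x)          # subtracts two nearly equal numbers
stable = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))  # rationalized, no cancellation

print(f"naive  = {naive:.16e}")
print(f"stable = {stable:.16e}")
```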
Mathematical Models
 Comparing solutions:

t (s)   Exact       Numerical (Δt = 2 s)   Numerical (Δt = 1 s)
0       0           0                      0
2       16.422      19.62                  17.819339
4       27.798      32.037357              29.697439
6       35.678      39.896213              37.615198
8       41.137      44.870026              42.893056
10      44.919      48.017917              46.411195
12      47.539      50.010194              48.756333
14      49.353      51.271092              50.319566
16      50.611      52.069105              51.361594
18      51.481      52.574162              52.056193
20      52.085      52.893809              52.519203

[Figure: exact solution and the two numerical solutions (Δt = 1 s and Δt = 2 s) for V (m/s) versus t (s), 0–20 s.]
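The numbers above are consistent with Euler's method applied to a falling-body model dv/dt = g − (c/m)v; a minimal sketch under that assumption (the parameter values g = 9.81 m/s², m = 68.1 kg, c = 12.5 kg/s are assumed, not stated in the module) is:

```python
import math

# Assumed model parameters (not given in the module text).
g, m, c = 9.81, 68.1, 12.5

def euler(dt, t_end=20.0):
    """Euler's method for dv/dt = g - (c/m) v, starting from v(0) = 0."""
    v, t = 0.0, 0.0
    while t < t_end - 1e-9:
        v += (g - (c / m) * v) * dt
        t += dt
    return v

def exact(t):
    return g * m / c * (1.0 - math.exp(-c / m * t))

for t in (2, 4, 20):
    print(f"t = {t:2d}s  exact = {exact(t):.3f}  "
          f"Euler dt=2: {euler(2.0, t):.3f}  Euler dt=1: {euler(1.0, t):.3f}")
```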
