Approximation and Round-Off Errors

Example: a speedometer reads 48.X and an odometer reads 87324.4X — in each reading the final digit X is estimated; the digits before it are certain.

Significant Figures

• The number of significant figures indicates precision. The significant digits of a number are those that can be used with confidence: the certain digits plus one estimated digit.

53,800 — how many significant figures?
  5.38 × 10^4   →  3
  5.380 × 10^4  →  4
  5.3800 × 10^4 →  5

Zeros that are used only to locate the decimal point are not significant figures:
  0.00001753 →  4
  0.0001753  →  4
  0.001753   →  4

Approximations and Round-Off Errors
• For many engineering problems, we cannot obtain analytical solutions.
• Numerical methods yield approximate results. We cannot exactly compute the errors associated with numerical methods.
  – Only rarely are the given data exact, since they originate from measurements. Therefore there is probably error in the input information.
  – The algorithm itself usually introduces errors as well, e.g., unavoidable round-offs.
  – The output information will then contain error from both of these sources.
• How confident are we in our approximate result?
• The question is: "How much error is present in our calculation, and is it tolerable?"

• Accuracy — how close a computed or measured value is to the true value.
• Precision (or reproducibility) — how close a computed or measured value is to previously computed or measured values.
• Inaccuracy (or bias) — a systematic deviation from the actual value.
• Imprecision (or uncertainty) — the magnitude of scatter.

Fig 3.2: accuracy versus precision.

Error Definitions

True value = approximation + error
E_t = true value − approximation   (true error)
True fractional relative error = true error / true value
True percent relative error: ε_t = (true error / true value) × 100%
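These definitions translate directly into code. A minimal sketch (the function names are my own, not from the text):

```python
def true_error(true_value, approximation):
    """E_t = true value - approximation."""
    return true_value - approximation

def true_percent_relative_error(true_value, approximation):
    """epsilon_t = (true error / true value) * 100%."""
    return (true_value - approximation) / true_value * 100.0

# Example: 10,000 used as an approximation of a true value of 9,999
print(true_error(9999.0, 10000.0))                   # -1.0
print(true_percent_relative_error(9999.0, 10000.0))  # about -0.01 %
```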

• For numerical methods, the true value is known only when we deal with functions that can be solved analytically (simple systems). In real-world applications we usually do not know the answer a priori. Then
  ε_a = (approximate error / approximation) × 100%
• Iterative approach (e.g., Newton's method):
  ε_a = (current approximation − previous approximation) / (current approximation) × 100%

• Computations are repeated until a stopping criterion is satisfied:
  |ε_a| < ε_s   (a pre-specified % tolerance based on knowledge of your solution; use absolute values)
• If the criterion ε_s = (0.5 × 10^(2−n))% is met, you can be sure that the result is correct to at least n significant figures.
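As a sketch of this stopping criterion, the code below sums the Maclaurin series for e^x until |ε_a| < ε_s for n = 3 significant figures (the choice of e^0.5 as the test problem is my own, in the spirit of the chapter's iterative examples):

```python
import math

def maclaurin_exp(x, n_sig_figs):
    """Sum e^x = 1 + x + x**2/2! + ... until |eps_a| < eps_s = (0.5 * 10**(2-n))%."""
    eps_s = 0.5 * 10 ** (2 - n_sig_figs)  # stopping tolerance, in percent
    term, total = 1.0, 1.0
    k = 0
    while True:
        k += 1
        term *= x / k                     # next series term x**k / k!
        previous, total = total, total + term
        eps_a = abs((total - previous) / total) * 100.0  # approximate % error
        if eps_a < eps_s:
            return total

approx = maclaurin_exp(0.5, 3)
print(approx, math.exp(0.5))  # the two agree to at least 3 significant figures
```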

Fig 3.3: decimal and binary number systems.

Signed binary (Fig 3.4)
• 1000 0000 0000 0001 = −1
• 2's complement:
  – 0000 0000 0000 0001 = 1
  – 0000 0000 0000 0000 = 0
  – 1111 1111 1111 1111 = −1
  – 1111 1111 1111 1110 = −2
• Number range? How to compute −a from a?
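One answer to the closing question: in 2's complement, −a is obtained by inverting every bit of a and adding 1. A 16-bit sketch:

```python
def negate_16bit(a):
    """Two's-complement negation on 16 bits: invert all bits, add 1, keep 16 bits."""
    return (~a + 1) & 0xFFFF

print(format(negate_16bit(1), '016b'))  # 1111111111111111  (i.e. -1)
print(format(negate_16bit(2), '016b'))  # 1111111111111110  (i.e. -2)
```

This also answers the range question: a 16-bit 2's-complement word covers −2^15 through 2^15 − 1.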

Fractional numbers — decimal, binary

123.456 = 1×10^2 + 2×10^1 + 3×10^0 + 4×10^−1 + 5×10^−2 + 6×10^−3
101.011 = 1×2^2 + 0×2^1 + 1×2^0 + 0×2^−1 + 1×2^−2 + 1×2^−3 = 4 + 1 + 0.25 + 0.125 = 5.375

Fixed-point numbers: a fixed number of digits for the whole part and for the fraction.
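The positional expansion above can be evaluated mechanically; a small sketch (the helper name is my own):

```python
def binary_to_decimal(s):
    """Evaluate a binary string such as '101.011' by summing digit * 2**position."""
    whole, _, frac = s.partition('.')
    value = sum(int(d) * 2 ** i for i, d in enumerate(reversed(whole)))
    value += sum(int(d) * 2 ** -(i + 1) for i, d in enumerate(frac))
    return value

print(binary_to_decimal('101.011'))  # 5.375
```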

Floating-point number (base-10)

156.78 → 0.15678 × 10^3 in a floating-point base-10 system.
1/34 = 0.029411765; suppose only 4 decimal places can be stored: 0.0294 × 10^0.
Normalize to remove the leading zero (so that 1/10 ≤ m < 1): multiply the mantissa by 10 and lower the exponent by 1, giving 0.2941 × 10^−1. An additional significant figure is retained.
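The normalize-then-store step can be sketched as follows (this sketch assumes chopping and a 4-digit mantissa; the function name is my own):

```python
def normalize_chop(x, digits=4):
    """Normalize positive x to m * 10**e with 0.1 <= m < 1, then chop m to `digits` digits."""
    e = 0
    m = x
    while m >= 1.0:
        m /= 10.0
        e += 1
    while m < 0.1:
        m *= 10.0  # remove a leading zero: mantissa * 10, exponent - 1
        e -= 1
    m = int(m * 10 ** digits) / 10 ** digits  # chop (truncate) to `digits` digits
    return m, e

print(normalize_chop(1 / 34))   # (0.2941, -1)
print(normalize_chop(156.78))   # (0.1567, 3)
```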

Floating-point number (binary, base-2)

101.011 = 0.101011 × 2^3
0.00101101 = 0.101101 × 2^−2

Fig 3.5

Floating-point number
• Fractional quantities are typically represented in computers in "floating point" form: m · b^e (mantissa × base^exponent).
• Numbers such as π, e, or √7 cannot be expressed by a fixed number of significant figures.
• Computers use a base-2 representation, so they cannot precisely represent certain exact base-10 numbers.

Floating-point number

1/b ≤ m < 1; therefore 0.1 ≤ m < 1 for a base-10 system, and 0.5 ≤ m < 1 for a base-2 system.
• Floating-point representation allows both fractions and very large numbers to be stored. However:
  – Floating-point numbers take up more room.
  – They take longer to process than integer numbers.
  – Round-off errors are introduced because the mantissa holds only a finite number of significant figures.

Chopping, Rounding

Example: π = 3.14159265358… is to be stored on a base-10 system carrying 7 significant digits.
  Chopping: π = 3.141592, ε_t = 0.00000065
  Rounding: π = 3.141593, ε_t = 0.00000035
• Some machines use chopping, because rounding adds to the computational overhead. If the number of significant figures is large enough, the resulting chopping error is negligible.
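Python's decimal module can reproduce both behaviors (a sketch; here ROUND_DOWN plays the role of chopping and ROUND_HALF_UP of rounding):

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

pi = Decimal('3.14159265358')
# Quantizing to 6 decimal places keeps 7 significant digits of pi.
chopped = pi.quantize(Decimal('0.000001'), rounding=ROUND_DOWN)
rounded = pi.quantize(Decimal('0.000001'), rounding=ROUND_HALF_UP)

print(chopped, pi - chopped)  # 3.141592, error  0.00000065358
print(rounded, pi - rounded)  # 3.141593, error -0.00000034642
```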

Fig 3.6 (example).

Example 3.4 (p. 61)
– 2^−3 × (1×2^−1 + 0×2^−2 + 0×2^−3) = 0.062500 (the smallest)
– 2^−3 × (1×2^−1 + 0×2^−2 + 1×2^−3) = 0.078125
– 2^−3 × (1×2^−1 + 1×2^−2 + 0×2^−3) = 0.093750
– 2^−3 × (1×2^−1 + 1×2^−2 + 1×2^−3) = 0.109375
– Evenly spaced by 2^−3 × (0×2^−1 + 0×2^−2 + 1×2^−3) = 0.015625
– 2^−2 × (1×2^−1 + 0×2^−2 + 0×2^−3) = 0.125000
– 2^−2 × (1×2^−1 + 0×2^−2 + 1×2^−3) = 0.156250
– 2^−2 × (1×2^−1 + 1×2^−2 + 0×2^−3) = 0.187500
– 2^−2 × (1×2^−1 + 1×2^−2 + 1×2^−3) = 0.218750
– Evenly spaced by 2^−2 × (0×2^−1 + 0×2^−2 + 1×2^−3) = 0.031250
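The whole set of representable values can be enumerated. This sketch reproduces the smallest value and the spacing listed above (I assume the exponent range −3 … 3, as the example's values suggest):

```python
# Normalized 3-bit mantissas 0.100, 0.101, 0.110, 0.111 (binary) = 0.5 .. 0.875
mantissas = [(4 + 2 * b2 + b3) / 8 for b2 in (0, 1) for b3 in (0, 1)]
values = sorted(m * 2 ** e for e in range(-3, 4) for m in mantissas)

print(values[:4])             # [0.0625, 0.078125, 0.09375, 0.109375]
print(values[1] - values[0])  # 0.015625: the spacing within the smallest block
```

Note how the spacing doubles with each increment of the exponent: the values are not uniformly distributed along the number line.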

Example 3.4 (p. 61), continued
– 2^2 × (1×2^−1 + 0×2^−2 + 0×2^−3) = 2
– 2^2 × (1×2^−1 + 0×2^−2 + 1×2^−3) = 2.5
– 2^2 × (1×2^−1 + 1×2^−2 + 0×2^−3) = 3
– 2^2 × (1×2^−1 + 1×2^−2 + 1×2^−3) = 3.5
– Evenly spaced by 2^2 × (0×2^−1 + 0×2^−2 + 1×2^−3) = 0.5
– 2^3 × (1×2^−1 + 0×2^−2 + 0×2^−3) = 4
– 2^3 × (1×2^−1 + 0×2^−2 + 1×2^−3) = 5
– 2^3 × (1×2^−1 + 1×2^−2 + 0×2^−3) = 6
– 2^3 × (1×2^−1 + 1×2^−2 + 1×2^−3) = 7 (the largest)
– Evenly spaced by 2^3 × (0×2^−1 + 0×2^−2 + 1×2^−3) = 1

Fig 3.7

IEEE Standard 754 Floating-Point Numbers
• Single precision (32-bit): sign 1 bit, exponent 8 bits, mantissa 23 bits — about 7 significant base-10 digits, range roughly 10^−38 to 10^39.
• Double precision (64-bit): sign 1 bit, exponent 11 bits, mantissa 52 bits — 15-16 significant base-10 digits, range roughly 10^−308 to 10^308.
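The 64-bit layout can be verified by unpacking a double's raw bits; a sketch using the standard struct module:

```python
import struct

# Reinterpret the 8 bytes of -2.0 as one big-endian unsigned integer.
bits = struct.unpack('>Q', struct.pack('>d', -2.0))[0]
sign = bits >> 63                  # 1 sign bit
exponent = (bits >> 52) & 0x7FF    # 11 exponent bits (biased by 1023)
mantissa = bits & ((1 << 52) - 1)  # 52 mantissa bits (implicit leading 1)

print(sign, exponent - 1023, mantissa)  # 1 1 0  ->  -1.0 * 2**1
```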

Arithmetic Manipulations
• Common arithmetic operations: the mantissa of the number with the smaller exponent is modified so that the exponents are the same.
  0.1557 × 10^1 + 0.4381 × 10^−1
  0.4381 × 10^−1 = 0.004381 × 10^1
  0.1557 × 10^1 + 0.004381 × 10^1 = 0.160081 × 10^1 → stored as 0.1600 × 10^1
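A 4-digit decimal context mimics this hypothetical machine (a sketch; ROUND_DOWN stands in for chopping, though whether a real machine chops or rounds here is implementation-dependent):

```python
from decimal import Decimal, ROUND_DOWN, localcontext

with localcontext() as ctx:
    ctx.prec = 4               # 4-digit mantissa
    ctx.rounding = ROUND_DOWN  # chop rather than round
    result = Decimal('0.1557E1') + Decimal('0.4381E-1')

print(result)  # 1.600 -- the exact sum 1.60081 loses its trailing digits
```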

Arithmetic Manipulations
• Subtraction: 0.3641 × 10^2 − 0.2686 × 10^2
  0.3641 × 10^2 − 0.2686 × 10^2 = 0.0955 × 10^2 → normalized to 0.9550 × 10^1

Arithmetic Manipulations
• Subtraction: 0.7642 × 10^3 − 0.7641 × 10^3
  0.7642 × 10^3 − 0.7641 × 10^3 = 0.0001 × 10^3 → normalized to 0.1000 × 10^0
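The same 4-digit decimal sketch shows the loss of significance: only one significant digit survives the subtraction, and the zeros appended during normalization of 0.1000 × 10^0 are not significant.

```python
from decimal import Decimal, ROUND_DOWN, localcontext

with localcontext() as ctx:
    ctx.prec = 4
    ctx.rounding = ROUND_DOWN
    diff = Decimal('0.7642E3') - Decimal('0.7641E3')

print(diff)  # 0.1 -- one significant digit left out of four
```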

Arithmetic Manipulations
• Multiplication: 0.1363 × 10^3 × 0.6423 × 10^−1
  – Exponents are added; mantissas are multiplied.
  0.1363 × 0.6423 = 0.08754549, exponent 3 + (−1) = 2
  = 0.08754549 × 10^2 → normalized to 0.8754549 × 10^1 → stored as 0.8754 × 10^1
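The same 4-digit sketch for multiplication:

```python
from decimal import Decimal, ROUND_DOWN, localcontext

with localcontext() as ctx:
    ctx.prec = 4
    ctx.rounding = ROUND_DOWN
    product = Decimal('0.1363E3') * Decimal('0.6423E-1')

print(product)  # 8.754 -- the exact 8.754549 is chopped to 4 digits
```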

Errors
• Adding a large and a small number: 0.4000 × 10^4 + 0.1000 × 10^−2
  0.4000 × 10^4 + 0.0000001 × 10^4 = 0.4000001 × 10^4 → stored as 0.4000 × 10^4
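With only a four-digit mantissa the small addend vanishes entirely; the 4-digit decimal sketch again:

```python
from decimal import Decimal, ROUND_DOWN, localcontext

with localcontext() as ctx:
    ctx.prec = 4
    ctx.rounding = ROUND_DOWN
    total = Decimal('0.4000E4') + Decimal('0.1000E-2')

print(total)  # 4000 -- the small addend 0.001 is lost entirely
```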

Errors
• Subtractive cancellation
  – If b^2 >> 4ac in x = (−b ± √(b^2 − 4ac)) / (2a), then √(b^2 − 4ac) ≅ b, and −b + √(b^2 − 4ac) = "small":
      0.12345678 − 0.12345666 = 0.00000012
    but with a limited mantissa:
      0.12345??? − 0.12345??? = 0.00000???
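A standard remedy (not from these slides) is to rationalize the troublesome root so that no two nearly equal numbers are subtracted; a sketch:

```python
import math

def root_naive(a, b, c):
    """x = (-b + sqrt(b*b - 4ac)) / 2a -- cancels badly when b*b >> 4ac."""
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

def root_stable(a, b, c):
    """Algebraically equivalent form that avoids the subtraction of near-equals."""
    return -2 * c / (b + math.sqrt(b * b - 4 * a * c))

a, b, c = 1.0, 1e8, 1.0          # b*b = 1e16 >> 4ac = 4
print(root_naive(a, b, c))       # ruined by cancellation
print(root_stable(a, b, c))      # close to the true root, about -1e-08
```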

