Numerical Methods Reviewer

Published by: Kim Brian Carbo on Mar 18, 2013
Copyright: Attribution Non-Commercial







Numerical methods are techniques by which mathematical problems are reformulated so that they can be solved with arithmetic operations. Numerical methods are characterized by large numbers of tedious arithmetic calculations.

HOW ENGINEERS APPROACHED PROBLEM SOLVING BEFORE THE COMPUTER ERA:
1. Analytical or Exact Methods - applicable only to a limited class of problems (e.g., linear models, simple geometry, low dimensionality).
2. Graphical Solutions - can be used to solve complex problems, but the results are not very precise and are limited to few dimensions. Without the aid of a computer, they are tedious and awkward to implement.
3. Calculators and Slide Rules - calculations are still tedious and slow because they remain manual.

SIGNIFICANCE OF NUMERICAL METHODS:
1. Numerical methods are extremely powerful problem-solving tools and will greatly enhance your problem-solving skills.
2. If you are conversant with numerical methods and adept at computer programming, you can design your own programs to solve problems.
3. Numerical methods provide a vehicle for reinforcing your understanding of mathematics, because one function of numerical methods is to reduce higher mathematics to basic arithmetic operations.

"COMPUTERS ARE PRACTICALLY USELESS WITHOUT A FUNDAMENTAL UNDERSTANDING OF HOW ENGINEERING SYSTEMS WORK."

What are Generalizations? Generalizations serve as organizing principles that can be employed to synthesize observations and experimental results into a coherent and comprehensive framework from which conclusions can be drawn. From an engineering problem-solving perspective, a generalization is expressed in the form of a mathematical model.

Mathematical Model - a formulation or equation that expresses the essential features of a physical system or process in mathematical terms.
Dependent variable: a characteristic that usually reflects the state of the system.
Independent variables: dimensions, such as time and space, along which the system's behavior is being determined.
Parameters: reflect the system's properties or composition.
Forcing functions: external influences acting upon the system.

NEWTON'S SECOND LAW OF MOTION
States that "the time rate of change of momentum of a body is equal to the resultant force acting on it." Equivalently: "the acceleration produced by a particular force acting on a body is directly proportional to the magnitude of the force and inversely proportional to the mass of the body."

INTERPRETATION: The numerical method (approximate solution) captures the essential features of the exact solution. However, because straight-line segments are employed, there are some discrepancies between the two results. One way to minimize such discrepancies is to use a smaller step size.

PART I: BASICS OF COMPUTER PROGRAMMING & SOFTWARE

Computer - makes extremely laborious and time-consuming solutions far easier to obtain than by hand.

What can the computer provide you with?
1. Software Packages - capable of doing the basic and standard computations.
2. Programming - extends the capability of the software packages through programs written to run on them.

Computer Programs - sets of instructions that direct the computer to perform a certain task.
Structured Programming - a set of rules that prescribe good style habits for the programmer. Well-structured algorithms are invariably easier to debug and test, resulting in programs that take a shorter time to develop, test, and update.

Forms of Program Structuring:
1. Flowchart
- visual or graphical representation of an algorithm.
- employs a series of blocks, which represent particular operations or steps in the algorithm, and arrows, which represent the sequence of the algorithm.
- useful in planning, unraveling, or communicating the logic of your own or someone else's program.

Flowchart symbols:
Terminal - represents the beginning or end of a program.
Flowlines - represent the flow of logic; the humps on a horizontal arrow indicate that it passes over and does not connect with the vertical flowlines.
Process - represents calculations or data manipulations.
Input/Output - represents inputs and outputs of data and information.
Decision - represents a comparison or question that determines alternative paths to be followed.
Junction - represents the confluence of flowlines.
Off-Page Connector - represents a break that is continued on another page.
Count-Controlled Loop - used for loops that repeat a prespecified number of iterations.

2. Pseudocode
- uses code-like statements in place of the graphical symbols of the flowchart.
- bridges the gap between flowcharts and computer code.
- easier to use for developing a program than a flowchart.

Fundamental Control Structures (Logical Representation):
1. Sequence - computer code is implemented one instruction at a time (unless otherwise directed by the user).
2. Selection - provides a means to split the program's flow into branches based on the outcome of a logical condition.
   Cascade - a chain of decisions.
   Case - branching based on the value of a single test expression.
3. Repetition - provides a means to implement instructions repeatedly (loops).
   a. Decision Loop - terminates based on the result of a logical condition.
      i. DOEXIT (Break Loop) - repeats until a logical condition is true.
      ii. DOFOR (Count-Controlled Loop) - performs a specified number of repetitions or iterations.

Modular Programming - an approach that divides the computer program into small subprograms, or modules, that can be developed and tested separately. A module is independent and self-contained, performing a specific and defined function with one entry and one exit point, and is typically 50-100 instructions in length.

Procedures - the representation of modules in high-level languages; a series of computer instructions that together perform a given task.

Excel - a spreadsheet: a special type of mathematical software that allows the user to enter and perform calculations on rows and columns of data.

- has some built-in numerical capabilities, including equation solving, curve fitting, and optimization.
- includes VBA as a macro language that can be used to implement numerical calculations.
- has several visualization tools, such as graphs and three-dimensional surface plots, that serve as valuable adjuncts for numerical analysis.

MATLAB
- flagship software product of The MathWorks, Inc., which was cofounded by the numerical analysts Cleve Moler and John N. Little.
- originally developed as a matrix laboratory.
- has a variety of functions and operators that allow convenient implementation of many numerical methods.

APPROXIMATIONS AND ROUND-OFF ERRORS
Approximation - a representation of something that is not exact, but still close enough to be useful.
Significant Figures
- developed to formally designate the reliability of a numerical value.
- the digits that can be used with confidence.
- correspond to the number of certain digits plus one estimated digit.

Identifying Significant Digits
1. All non-zero digits are significant. EX: 91 has two significant digits (9 and 1); 123.45 has five (1, 2, 3, 4, and 5).
2. Zeros appearing between two non-zero digits are significant. EX: 101.12 has five significant digits (1, 0, 1, 1, and 2).
3. Leading zeros are not significant. EX: 0.00052 has two significant digits (5 and 2).
4. Trailing zeros in a number containing a decimal point are significant. EX: 12.2300 has six significant digits (1, 2, 2, 3, 0, and 0); 0.000 122 300 still has only six significant digits; 130.00 has five significant digits.
5. Zeros used only to place value are not significant unless a decimal point is included. EX: 50 has one significant digit; 50. has two significant digits.

Importance of Significant Figures in Numerical Methods
1. Important when deciding the acceptability of an approximation to a certain number of significant figures.
2. Important because computers cannot express a value with an infinite number of digits.
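The rules above can be exercised with a short Python sketch. The helper name `round_sig` is our own illustration (not something defined in this reviewer); it rounds a value to a chosen number of significant figures by shifting the rounding position according to the number's magnitude:

```python
import math

def round_sig(x, n):
    """Round x to n significant figures (illustrative helper)."""
    if x == 0:
        return 0.0
    # The decimal exponent tells us where the first significant digit sits,
    # so leading zeros (rule 3) never count toward n.
    exponent = math.floor(math.log10(abs(x)))
    return round(x, n - 1 - exponent)

print(round_sig(123.45, 3))    # rounds to three significant digits
print(round_sig(0.00052, 1))   # leading zeros are not significant
```

Note how 0.00052 keeps a single significant digit (the 5), exactly as rule 3 above prescribes.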
Accuracy and Precision
Accuracy - refers to how closely a computed or measured value agrees with the true value.
Precision - refers to how closely individual computed or measured values agree with each other.
Inaccuracy - aka bias; a systematic deviation from the truth.
Imprecision - aka uncertainty; refers to the magnitude of the scatter.
Error - represents both the inaccuracy and imprecision of predictions; arises from the use of approximations to represent exact mathematical operations and quantities.

Major Forms of Numerical Errors:
1. Round-off Error - due to the fact that computers can represent quantities with only a finite number of digits; the discrepancy introduced by the omission of significant figures.
2. Truncation Error - the discrepancy introduced by the fact that numerical methods may employ approximations to represent exact mathematical operations and quantities.

CHALLENGE OF NUMERICAL METHODS: determining error estimates in the absence of knowledge of the true value.
SOLUTION TO THE CHALLENGE: an iterative approach is used to compute the answers. The process is performed repeatedly, or iteratively, to successively compute better and better approximations.

FLOATING POINT
- represents fractional quantities in computers.

- composed of a mantissa (significand), the base of the number system, and an exponent: m * b^e.

NUMBER SYSTEM - a convention for representing quantities.
Base - the number used as the reference for constructing the system.
Place Value - determines the position and magnitude of a digit or symbol.

TRUNCATION ERRORS AND THE TAYLOR SERIES
Truncation Errors - result from using an approximation in place of an exact mathematical procedure. EXAMPLE: Euler's method used to solve the parachutist problem by approximation.

Taylor Series
- a mathematical formulation widely used in numerical methods to express functions in an approximate fashion.
- provides a means to predict a function value at one point in terms of the function value and its derivatives at another point.
- states that any smooth function can be approximated by a polynomial.

Zero-order Approximation
- indicates that the value of f at the new point is the same as its value at the old point:
  f(xi+1) ≈ f(xi)
- very useful if xi and xi+1 are close to each other.
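The parachutist example cited under TRUNCATION ERRORS can be sketched to show the truncation error shrinking with the step size. The falling-body model dv/dt = g - (c/m)v, its Euler update v(i+1) = v(i) + (g - (c/m)v(i))*dt, and the closed-form solution follow the discussion above; the parameter values below are illustrative assumptions:

```python
import math

def euler_velocity(m, c, g, dt, t_end):
    """March the velocity forward with Euler's method from v(0) = 0."""
    v, t = 0.0, 0.0
    while t < t_end - 1e-12:
        v += (g - (c / m) * v) * dt   # slope at the old point times step size
        t += dt
    return v

def exact_velocity(m, c, g, t):
    """Closed-form solution of dv/dt = g - (c/m)v with v(0) = 0."""
    return (g * m / c) * (1.0 - math.exp(-(c / m) * t))

m, c, g = 68.1, 12.5, 9.8          # illustrative mass, drag, gravity values
for dt in (2.0, 0.5):              # halving the step size shrinks the discrepancy
    approx = euler_velocity(m, c, g, dt, t_end=10.0)
    print(dt, approx, exact_velocity(m, c, g, 10.0))
```

Running with dt = 0.5 gives a noticeably smaller discrepancy than dt = 2.0, which is precisely the "use a smaller step size" remedy stated in the INTERPRETATION paragraph earlier.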

- a condition number of 1 tells us that the function's relative error is identical to the relative error in x.
- a value greater than 1 tells us that the relative error is amplified, whereas a value less than 1 tells us that it is attenuated.
- ill-conditioned functions are those with very large condition numbers.

TOTAL NUMERICAL ERROR
- the sum of the truncation and round-off errors.
- the only way to minimize round-off errors is to increase the number of significant figures.
- truncation errors can be reduced by decreasing the step size.
- truncation errors decrease as the number of computations increases, while round-off errors increase.
- DILEMMA: the strategy for decreasing one component of the total error leads to an increase in the other component.

BLUNDERS - aka gross errors; contribute to all the other components of error.
FORMULATION ERRORS - aka model errors; relate to bias that can be ascribed to incomplete mathematical models.
DATA UNCERTAINTY - associated with inconsistency in the nature of the data; can exhibit both inaccuracy and imprecision.

ROOTS OF EQUATIONS
Bracketing Methods
- require two initial guesses that bracket (enclose) the root.
- different strategies are used to systematically reduce the bracket and home in on the correct answer.

1. Graphical Methods
- a simple way to estimate the root of an equation by observing where a line or curve crosses the x-axis.
- of limited practical value because they are not precise.
- useful as starting guesses for numerical methods.

Incremental Search Method - locates an interval where the function changes sign, divides that interval into subintervals to again locate where the sign changes, and so on.

2. Bisection Method
- aka binary chopping, interval halving, or Bolzano's method.
- a type of incremental search in which the interval is always divided in half.
- if a function changes sign over an interval, the function value at the midpoint is evaluated. The root is taken to lie at the midpoint of the subinterval within which the sign change occurs, and the process is repeated to obtain refined estimates.

3. False-Position Method
- an alternative method based on graphical insight.
- addresses an inefficiency of the bisection method, which takes no account of the magnitudes of f(xl) and f(xu).
- joins f(xl) and f(xu) by a straight line; the intersection of this line with the x-axis serves as an improved estimate of the root:
  xr = xu - f(xu)(xl - xu) / (f(xl) - f(xu))
- called false position (regula falsi in Latin) because replacing the curve with a straight line yields a "false" position of the root.
- aka the Linear Interpolation Method.

NOTE: In the bisection method, the interval between xl and xu grows smaller during the course of the computation, while in the false-position method one of the initial guesses (xl or xu) may stay fixed throughout the computation as the other guess converges on the root. In general, false position converges to the desired root more readily, but not always, especially for functions with significant curvature, because of this one-sidedness.

4. Modified False-Position Method
- a combination of false position and bisection.
- mitigates the one-sidedness of false position: if one of the bounds is stuck, the bisection formula for xr is used instead.
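A minimal sketch of the two bracketing methods just described, assuming a continuous f with a sign change on [xl, xu] (the tolerances and the test function x^2 - 2 are our own choices):

```python
def bisection(f, xl, xu, tol=1e-8, max_iter=200):
    """Bisection: repeatedly halve the bracket containing the sign change."""
    if f(xl) * f(xu) > 0:
        raise ValueError("root is not bracketed")
    for _ in range(max_iter):
        xr = (xl + xu) / 2
        if f(xl) * f(xr) < 0:   # sign change in the lower subinterval
            xu = xr
        else:                   # sign change in the upper subinterval
            xl = xr
        if xu - xl < tol:
            break
    return (xl + xu) / 2

def false_position(f, xl, xu, tol=1e-8, max_iter=200):
    """Regula falsi: intersect the chord through (xl, f(xl)), (xu, f(xu))
    with the x-axis and keep the subinterval that brackets the root."""
    if f(xl) * f(xu) > 0:
        raise ValueError("root is not bracketed")
    for _ in range(max_iter):
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
        if abs(f(xr)) < tol:
            return xr
        if f(xl) * f(xr) < 0:
            xu = xr
        else:
            xl = xr
    return xr

f = lambda x: x**2 - 2          # root at sqrt(2) ≈ 1.4142
print(bisection(f, 1.0, 2.0), false_position(f, 1.0, 2.0))
```

On this example false position keeps xu = 2 fixed while xl converges on the root, illustrating the one-sidedness noted above.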


First-order Approximation
- added to the zero-order approximation to provide a better estimate.
- adds a term consisting of the slope f'(xi) multiplied by the distance between xi and xi+1:
  f(xi+1) ≈ f(xi) + f'(xi)(xi+1 - xi)
- this adds a straight line and is capable of predicting an increase or decrease of the function between xi and xi+1.

Second-order Approximation
- captures some of the curvature that the function might exhibit:
  f(xi+1) ≈ f(xi) + f'(xi)(xi+1 - xi) + [f''(xi)/2!](xi+1 - xi)^2

The Complete Taylor Series
  f(xi+1) = f(xi) + f'(xi)h + [f''(xi)/2!]h^2 + ... + [f^(n)(xi)/n!]h^n + Rn,  where h = xi+1 - xi

NOTE: The nth-order Taylor series expansion will be exact for an nth-order polynomial. For other differentiable and continuous functions, such as exponentials and sinusoids, a finite number of terms will not yield an exact estimate; only if an infinite number of terms are added will the series yield an exact result. If Rn is dropped, the right-hand side becomes an approximation whose error is proportional to the step size h raised to the (n+1)th power:
  Rn = O(h^(n+1))
We can usually assume that the truncation error is decreased by the addition of terms to the Taylor series.

Derivative Mean-Value Theorem - states that if a function f(x) and its first derivative are continuous over an interval from xi to xi+1, then there exists at least one point on the function that has a slope f'(ξ) parallel to the line joining f(xi) and f(xi+1).

STABILITY AND CONDITION
Condition - relates to the sensitivity of the mathematical problem to changes in its input values.
Numerically Unstable - describes problems in which the uncertainty of the input values is grossly magnified by the numerical method.
Condition Number
- defined as the ratio of the relative error of the function value to the relative error of x:
  CN = x f'(x) / f(x)
- provides a measure of the extent to which an uncertainty in x is magnified by f(x).
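The shrinking truncation error can be checked numerically. This sketch expands cos(x) about the point a = 0.5 (our own choice of function and expansion point) and prints the error of each truncated order, which should fall as terms are added:

```python
import math

def taylor_cos(x, a, order):
    """Taylor expansion of cos(x) about x = a, truncated at the given order.
    Derivatives of cos cycle through cos, -sin, -cos, sin."""
    derivs = [math.cos, lambda t: -math.sin(t), lambda t: -math.cos(t), math.sin]
    total, h = 0.0, x - a
    for n in range(order + 1):
        total += derivs[n % 4](a) * h**n / math.factorial(n)
    return total

x, a = 1.0, 0.5
for order in (0, 1, 2, 4):
    approx = taylor_cos(x, a, order)
    print(order, approx, abs(approx - math.cos(x)))
```

The zero-order estimate is simply cos(0.5), the first-order estimate adds the slope term, and each further order cuts the error, in line with Rn = O(h^(n+1)).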

OPEN METHODS

1. Simple Fixed-Point Iteration
- also called one-point iteration or successive substitution.
- a formula for the iterative algorithm can be derived by rearranging the function f(x) = 0 so that x is on the left-hand side of the equation, x = g(x). If the initial guess at the root is xi, the new estimate is
  xi+1 = g(xi)
that is, the new value replaces x.
- fundamental problems: (1) it is sometimes non-convergent, and (2) when it converges, it often does so very slowly. It converges linearly.

2. Newton-Raphson Method
- if the initial guess at the root is xi, a tangent can be extended from the point (xi, f(xi)). The point at which this tangent crosses the x-axis usually represents an improved approximation of the root:
  xi+1 = xi - f(xi) / f'(xi)
Figure 2.6 illustrates the graphical depiction of the method.
- converges quadratically; that is, the error is approximately proportional to the square of the previous error. It is often the most efficient root-finding algorithm available.
- pitfalls: a possibility of divergence exists, and a potential problem in implementing the method is the evaluation of the derivative, because certain functions have derivatives that are too complicated to evaluate or too inconvenient to use.

3. Secant Method
- for such cases, the derivative may be approximated by a backward finite divided difference, giving
  xi+1 = xi - f(xi)(xi-1 - xi) / (f(xi-1) - f(xi))
- note the similarity of the algorithms of the secant method and the false-position method. A critical difference is how one of the initial values is replaced by the new estimate. In false position, the initial values are updated in such a way that they always bracket the root, so there is a tendency for one of them to stay on one side of the root. In the secant method, the initial values are replaced in strict order: xi+1 replaces xi, and xi replaces xi-1. A possibility of divergence thus exists.

Stopping Criterion (Es) - the pre-specified percent tolerance against which the approximate error is compared.

LINEAR ALGEBRAIC EQUATIONS

Definitions and Notations. A matrix consists of a rectangular array of elements represented by a single symbol: [A] is the shorthand notation for the matrix, and aij designates an individual element. A horizontal set of elements is called a row, and a vertical set is called a column. The first subscript i always designates the number of the row in which the element lies; the second subscript j designates the column. Matrices with row dimension n = 1 are called row vectors, while single-column matrices are called column vectors.

The diagonal consisting of the elements a11, a22, a33, and a44 is termed the principal or main diagonal of the matrix. The trace of a matrix, designated tr[A], is the sum of the elements on its principal diagonal. A diagonal matrix is a square matrix in which all elements off the main diagonal are equal to zero. An identity matrix is a diagonal matrix in which all elements on the main diagonal are equal to 1. An upper triangular matrix has all elements below the main diagonal equal to zero; a lower triangular matrix has all elements above the main diagonal equal to zero. A banded matrix has all elements equal to zero, with the exception of a band centered on the main diagonal; a banded matrix with a bandwidth of 3 is given a special name, the tridiagonal matrix. A matrix is augmented by the addition of a column or columns to the original matrix. Transposing a matrix turns its row elements into columns and its column elements into rows.

Operations on Matrices.
1. Addition and subtraction are accomplished by adding or subtracting the corresponding elements of the matrices, as in cij = aij + bij.
2. Multiplication by a scalar quantity: the product of a scalar g and a matrix [A] is obtained by multiplying every element of [A] by g.
3. Multiplication of two matrices is represented as [C] = [A][B], where cij = Σk (aik * bkj).
4. Division of matrices is not a defined operation. However, the inverse of a square matrix [A], denoted [A]^-1, has the property that multiplying it by the original matrix yields an identity matrix. Thus, multiplication of a matrix by the inverse of another matrix is analogous to division.

Solving Small Numbers of Equations. Before proceeding to computer-aided methods, several methods appropriate for solving small (n <= 3) sets of simultaneous equations without the aid of a computer are described first: the graphical method, Cramer's rule, and the elimination of unknowns. For two equations with two unknowns, a graphical solution is obtained by plotting each equation on a Cartesian plane and looking for the intersection. Figure 3.9 demonstrates this graphically.

Naive Gauss Elimination. The technique is called Gauss elimination because it involves combining equations to eliminate unknowns. It is a systematic technique consisting of two steps: forward elimination of unknowns and obtaining solutions through back substitution. The naive Gauss elimination uses normalization to eliminate variables; the coefficient used to normalize a row is called the pivot element.

Pitfalls of Elimination Methods. Whereas many systems of equations can be solved with naive Gauss elimination, some modifications are required to avoid pitfalls:
1. Division by zero. If a pivot element is zero, normalizing the row with the pivot results in division by zero; problems can also arise when a coefficient is very close to zero. The "naive" Gauss elimination does not avoid this problem; the technique of pivoting has been developed to partially avoid it.
2. Round-off errors. Because the process involves a large number of computations, errors can propagate and round-off errors come into play, especially when working with large sets of equations. Checking whether the solutions satisfy the system of equations can help reveal whether a substantial error has occurred.
3. Ill-conditioned systems. Well-conditioned systems are those where a small change in one or more coefficients results in a similarly small change in the solutions; ill-conditioned systems are those where small changes in coefficients result in large changes in the solutions. Ill-conditioned systems have determinants close to zero and can have a wide range of apparent solutions; however, there is no general rule for how close to zero the determinant of a system must be to indicate ill-conditioning.
4. Singular systems. These are systems characterized by a zero determinant, which arise when equations in the set are identical or almost identical; degrees of freedom are lost, as one is forced to eliminate equations that duplicate others.

Error Analysis and System Condition. The inverse of a matrix provides a means to discern whether a system is ill-conditioned. Three checks are available:
1. Scale the matrix of coefficients so that the largest element in each row is 1, then invert the scaled matrix. If elements of the inverse are several orders of magnitude greater than one, the system is likely ill-conditioned.
2. Multiply the inverse by the original coefficient matrix and assess whether the result is close to the identity matrix. If not, this indicates ill-conditioning.
3. Invert the inverted matrix and assess whether the result is sufficiently close to the original matrix. If not, this again indicates that the system is ill-conditioned.

Gauss-Jordan Elimination. The Gauss-Jordan method is a variation of Gauss elimination. The major difference is that when an unknown is eliminated in the Gauss-Jordan method, it is eliminated from all other equations rather than just the subsequent ones. In addition, all rows are normalized by dividing them by their pivot elements. The elimination process thus reduces the system matrix to an identity matrix with the constants column adjusted, so it is not necessary to employ back substitution: the constants column already gives the solutions for the unknowns.

Matrix Inversion. Algorithms for computing the inverse of a matrix will be discussed in future sections; computation of the inverse will be demonstrated using LU decomposition.

Special Matrices and Gauss-Seidel. Certain matrices have special structures that can be exploited to develop efficient solution schemes. Banded and symmetric matrices are two such structures, and efficient elimination methods are described for both. An alternative to elimination methods is the Gauss-Seidel method, an iterative technique particularly well suited to large numbers of equations. Iterative or approximate methods provide an alternative to the elimination methods described to this point; such approaches are similar to the techniques developed to obtain the roots of a single equation in the previous lesson. The Gauss-Seidel method is similar in spirit to simple fixed-point iteration and uses the latest available estimates to solve for the next variable. Round-off errors are not as great an issue as in elimination methods; however, the main disadvantage is the possibility of divergence, and convergence may be slow. By developing a convergence criterion, one can know in advance whether Gauss-Seidel will lead to a converging solution.

A similar method, called Jacobi, utilizes a somewhat different tactic: instead of using the latest available values to solve for the next variable, it stores the new values first and uses them only in the next iteration. Figure 2.12 depicts the difference between the two. Although there are cases where the Jacobi method is useful, Gauss-Seidel's use of the best available estimates usually makes it the method of preference. Relaxation represents a slight modification of the Gauss-Seidel method designed to enhance convergence.

CURVE FITTING

Data are often given for discrete values along a continuum, yet estimates may be required at points between the discrete values. A simple method for fitting a curve to data is to plot the points and then sketch a line that visually conforms to the data.

THREE ATTEMPTS TO FIT A "BEST" CURVE THROUGH FIVE DATA POINTS:
1. Did not connect the points, but characterized the general upward trend of the data with a straight line.
2. Used straight-line segments, or linear interpolation, to connect the points.
3. Used curves to try to capture the meanderings suggested by the data.

CRITERIA FOR A "BEST" FIT:
1. One strategy would be to minimize the sum of the residual errors for all the available data.
2. Another logical criterion might be to minimize the sum of the absolute values of the discrepancies.
3. Minimax criterion: choose the line that minimizes the maximum distance that an individual point falls from the line. This strategy is ill-suited for regression because it gives undue influence to an outlier, that is, a single point with a large error.

Linear Regression (least squares). Fitting y = a0 + a1 x + e, the slope and intercept are
  a1 = (n Σxiyi - Σxi Σyi) / (n Σxi^2 - (Σxi)^2)
  a0 = ybar - a1 * xbar

Standard Deviation - the most common measure of spread for a sample about the mean:
  sy = sqrt(St / (n - 1)), where St = Σ(yi - ybar)^2

STANDARD ERROR OF THE ESTIMATE (Sy/x) - the error for the predicted value of y corresponding to a particular value of x:
  Sy/x = sqrt(Sr / (n - 2)), where Sr = Σ(yi - a0 - a1 xi)^2

Coefficient of Determination (r^2):
  r^2 = (St - Sr) / St
An r^2 close to 1 indicates that the linear regression model has merit; its square root r is the correlation coefficient.

POLYNOMIAL REGRESSION - used when a marked pattern is poorly represented by a straight line and needs a curve for a better fit:
  y = a0 + a1 x + a2 x^2 + ... + am x^m + e

INTERPOLATION - used when estimating intermediate values between precise data points.

Linear Interpolation - the simplest form of interpolation: connect two data points with a straight line.
  f1(x) = f(x0) + [(f(x1) - f(x0)) / (x1 - x0)] (x - x0)
The smaller the interval between the data points, the better the approximation.

Quadratic Interpolation - a strategy for improving the estimate by introducing some curvature into the line connecting the points; used when three data points are available, so that a parabola can be fitted:
  f2(x) = b0 + b1(x - x0) + b2(x - x0)(x - x1)

QUICK REVIEW
Blunders - also known as gross errors.
Independent variable - dimensions such as time and space are examples of independent variables.
Truncation Error - arises from the use of approximations to represent exact mathematical operations; the error experienced when cutting down the terms of a series of approximations.
Imprecision - refers to the magnitude of the scatter; also known as uncertainty.
Modular Programming - the type of programming that makes a subprogram independent and self-contained so it performs a specific, defined function with one entry and one exit point.
Pseudocode - code-like statements used in place of the graphical symbols of a flowchart.
Excel - computer software that allows the user to enter and perform calculations on rows and columns of data.
Error - the difference between the exact value and the approximate value.
Count-Controlled Loop (DOFOR) - a loop that performs a specified number of repetitions or iterations.
Total Numerical Error - the summation of the truncation and round-off errors.
Approximation - represents something that is not exact, but still close enough to be useful.

How are truncation and round-off errors reduced? Truncation error can be reduced by decreasing the step size, but a decreased step size increases the number of computations and can lead to subtractive cancellation, so as truncation errors are decreased, round-off errors are increased. In general, the only way to minimize round-off errors is to increase the number of significant figures.
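The Gauss elimination procedure discussed in this section, with partial pivoting added to guard against the division-by-zero pitfall, can be sketched as follows. The 3x3 test system is illustrative only:

```python
def gauss_eliminate(A, b):
    """Gauss elimination with partial pivoting: forward elimination
    followed by back substitution. A is a list of rows, b the constants."""
    n = len(b)
    A = [row[:] for row in A]   # copy so the caller's data is untouched
    b = b[:]
    for k in range(n - 1):
        # Partial pivoting: bring the row with the largest pivot to row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # Forward elimination of unknown k from the rows below
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
print(gauss_eliminate(A, b))    # solution is close to [3, -2.5, 7]
```

Substituting the computed solution back into the equations (the "check whether the solutions satisfy the system" advice above) is a cheap guard against the round-off pitfall.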

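The least-squares formulas for the slope and intercept can likewise be sketched; the function name and the sample data below are our own illustration:

```python
def linear_regression(x, y):
    """Least-squares fit of y = a0 + a1*x, returning (a0, a1, r^2)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    # Slope and intercept from the normal equations
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = sy / n - a1 * sx / n
    # Goodness of fit: r^2 = (St - Sr) / St
    ybar = sy / n
    st = sum((yi - ybar) ** 2 for yi in y)
    sr = sum((yi - a0 - a1 * xi) ** 2 for xi, yi in zip(x, y))
    return a0, a1, (st - sr) / st

x = [1, 2, 3, 4, 5]
y = [2.0, 4.1, 5.9, 8.2, 9.8]   # roughly y = 2x (illustrative data)
print(linear_regression(x, y))
```

For perfectly linear data Sr is zero and r^2 equals 1, matching the statement above that an r^2 near 1 means the linear model has merit.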