
## Creating Algorithms

Steps to create an algorithm (often applied repeatedly):

1. Solve a specific instance of the problem by hand.
2. Generalize the solution by putting in variables.
3. Execute the algorithm on several test cases.
4. Write it out as programming statements.

## Script Files and Functions

- Script files: a set of MATLAB commands; can be executed by typing the name into the command window or hitting the Run button.
- Function files: accept input arguments from the caller and return outputs to the caller; variables created and manipulated within the function reside only in the function workspace.
  - Syntax: `function [outvar] = functionname(arglist)`
  - A function file can contain a single function, or a primary function with one or more subfunctions. Subfunctions are listed below the primary function and are accessible only by the primary function or other subfunctions within the same m-file.
- Anonymous functions: one-line functions created without the need for a separate file: `fhandle = @(arg1, arg2, ...) expression`

## Decision Statements and Loops

- Decision statements: `if`, `elseif`.
- `for` loop: ends after a specific number of repetitions, established by the number of columns given to the index variable.
- `while` loop: ends on the basis of a logical condition; if the condition is true the statements run, and when they finish the loop checks the condition again.

## Roots: Bracketing and Open Methods

- The graphical method provides only a rough estimate of a root.
- Bracketing methods can find a root more exactly (up to a desired relative error) but require two initial estimates that bracket the root.
  - Conditions: two initial guesses must be known between which f(x) changes its sign, and the function must be continuous on the bracketing interval.
  - Advantage: bracketing methods always converge. Disadvantage: convergence can be slow.
- Incremental search speeds up the process of finding appropriate brackets. The critical part is the choice of interval length: if too small, it is very time consuming; if too large, some roots might be missed.
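The incremental-search idea can be sketched as follows (the notes use MATLAB; this is an illustrative Python sketch, and the name `incremental_search` is not from the notes):

```python
# Incremental search: scan [a, b] in steps of dx and record every
# subinterval where f(x) changes sign -- each one brackets a root.
# A dx that is too large can step over closely spaced roots.

def incremental_search(f, a, b, dx):
    brackets = []
    x = a
    while x + dx <= b:
        if f(x) * f(x + dx) < 0:      # sign change => a root lies inside
            brackets.append((x, x + dx))
        x += dx
    return brackets

# f(x) = x^2 - 3 has roots at +/- sqrt(3) ~ +/- 1.732
print(incremental_search(lambda x: x**2 - 3, -3.0, 3.0, 0.5))
# -> [(-2.0, -1.5), (1.5, 2.0)]
```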

## Bisection

- Concentrates on the interval between two initial guesses that give different signs for f(x).
- Always halves the interval, checks whether the sign changes in the first or second half, and repeats until the approximate error is small enough.
- The absolute error is reduced by a factor of 2 on each iteration.

## False Position

- Similar to bisection, but the root estimate is no longer the middle of the interval: the estimated root is the intersection with the x-axis of the straight line connecting (xl, f(xl)) and (xu, f(xu)).
- After each root calculation, xr replaces whichever of the two initial guesses yields a function value with the same sign as f(xr).
- Bisection vs. false position: bisection does not take the shape of the function into account.

## Open Methods

- Require only a single starting value x0 (no bracket).
- Sometimes diverge, but when they converge they are usually much faster than bracketing methods.

## Fixed-Point Iteration

- Rearrange f(x) = 0 into x = g(x); finding a root of f(x) = g(x) - x is the same as finding a fixed point of x = g(x).
- The root is found iteratively by xi = g(xi-1) until ea = |(xi - xi-1)/xi| < es.
- Fixed points are the intersections of the function y = g(x) with the straight line y = x.
- The iteration converges if |g'(x)| < 1.

## Newton-Raphson Method

- Start with an initial guess x0 and calculate the tangent of f at x0, which has slope f'(x0); the intersection of the tangent with the x-axis is the next estimate of the root. Repeat until the stopping criterion is met.
- Pros: quadratic convergence.
- Cons: some functions show slow or poor convergence; the iteration tends to oscillate around a local max or min, and it may diverge.

## The fzero Function

- Using an initial guess: `x = fzero(function, x0)`
- Using an initial bracket: `x = fzero(function, [x0 x1])`
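The two families can be sketched side by side in Python (illustrative names; `es` is the stopping tolerance on approximate relative error, as in the notes):

```python
# Bisection (bracketing: always converges) vs. Newton-Raphson
# (open: quadratic convergence when it works, but may diverge).

def bisection(f, xl, xu, es=1e-6, max_it=100):
    if f(xl) * f(xu) > 0:
        raise ValueError("initial guesses do not bracket a root")
    xr = xl
    for _ in range(max_it):
        xr_old = xr
        xr = (xl + xu) / 2                 # always halve the interval
        if xr != 0 and abs((xr - xr_old) / xr) < es:
            break
        if f(xl) * f(xr) < 0:              # sign change in first half
            xu = xr
        else:                              # sign change in second half
            xl = xr
    return xr

def newton_raphson(f, dfdx, x0, es=1e-6, max_it=50):
    x = x0
    for _ in range(max_it):
        x_new = x - f(x) / dfdx(x)         # tangent-line intersection
        if x_new != 0 and abs((x_new - x) / x_new) < es:
            return x_new
        x = x_new
    return x

f = lambda x: x**2 - 2                     # root at sqrt(2) ~ 1.4142
print(bisection(f, 0, 2))
print(newton_raphson(f, lambda x: 2 * x, 1.0))
```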

## Errors

- Accuracy: how closely a computed or measured value agrees with the true value.
- Precision: how closely individual computed or measured values agree with each other.
- True error: the difference between the true value and the approximation. Absolute error: the absolute difference between the true value and the approximation.
- True fractional relative error: true error divided by the true value; percent relative error multiplies this by 100%.
- Approximate percent relative error: ea = |(present - previous)/present| * 100%.
- Roundoff errors: arise because digital computers cannot represent some quantities exactly; they generally decrease as step size increases.
- Truncation errors: result from using an approximation in place of an exact mathematical procedure; they generally increase as step size increases.

## Optimization

- The process of creating something that is as effective as possible; deals with finding maxima and minima of a function that depends on one or more variables.
- Golden-section search: finds a minimum on an interval [xl, xu] using the golden ratio phi = 1.6180; only one new interior point is needed per iteration.
- Parabolic interpolation: uses three points to estimate the optimum location, defined as the maximum/minimum of the parabola interpolating the three points.
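A minimal Python sketch of the golden-section search described above (the function name and tolerance parameter are illustrative):

```python
# Golden-section search for a minimum on [xl, xu]. Each iteration
# shrinks the interval by a factor of phi - 1 ~ 0.618 and, thanks to
# the golden ratio, reuses one interior point so only one new
# function evaluation is needed.

PHI = (1 + 5 ** 0.5) / 2                 # golden ratio, ~1.6180

def golden_section_min(f, xl, xu, tol=1e-6):
    d = (PHI - 1) * (xu - xl)
    x1, x2 = xl + d, xu - d              # two interior points, x2 < x1
    f1, f2 = f(x1), f(x2)
    while (xu - xl) > tol:
        if f1 < f2:                      # minimum lies in [x2, xu]
            xl, x2, f2 = x2, x1, f1      # old x1 becomes new x2
            x1 = xl + (PHI - 1) * (xu - xl)
            f1 = f(x1)
        else:                            # minimum lies in [xl, x1]
            xu, x1, f1 = x1, x2, f2      # old x2 becomes new x1
            x2 = xu - (PHI - 1) * (xu - xl)
            f2 = f(x2)
    return (xl + xu) / 2

# parabola with its minimum at x = 2
print(golden_section_min(lambda x: (x - 2)**2 + 1, 0, 5))
```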

## TEST 2: Linear Algebraic Equations and Matrices

- Over-determined system: more equations than unknowns. Under-determined: more unknowns than equations.
- In MATLAB, use the inverse function, `x = inv(A)*b`, or `x = A\b`.
- If the system is over- or under-determined, A is not square, or A is singular, inv(A) cannot be defined.

## Gauss Elimination

- If the determinant of a system is close to 0, the system is singular or almost singular.
- Naïve Gauss elimination: forward elimination systematically eliminates unknowns down the matrix; the final result is an upper triangular matrix, followed by back substitution.
- Problem: it could divide by 0. Not good!

## Partial Pivoting

- Fixes the problem of dividing by 0: determine the coefficient with the largest absolute value in the column below the pivot element, and switch rows so the largest element becomes the pivot.

## LU Decomposition

Two steps:

1. Decomposition step: matrix A is decomposed into upper (U) and lower (L) triangular matrices.
2. Substitution step: L and U are used to determine a solution x for a right-hand-side vector b:
   a. The intermediate vector d is computed from Ld = b.
   b. d is used in Ux = d to solve for x.

In MATLAB: `[L,U] = lu(A)`, then `d = L\b` and `x = U\d`.
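Forward elimination with partial pivoting plus back substitution can be sketched in Python (illustrative names; the notes do this in MATLAB):

```python
# Gauss elimination with partial pivoting: reduce A to upper
# triangular form, then back-substitute for x.

def gauss_solve(A, b):
    n = len(b)
    A = [row[:] for row in A]            # work on copies
    b = b[:]
    # forward elimination with partial pivoting
    for k in range(n - 1):
        # pivot: the row with the largest |coefficient| in column k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if p != k:                       # switch rows
            A[k], A[p] = A[p], A[k]
            b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

print(gauss_solve([[3.0, -0.1, -0.2],
                   [0.1, 7.0, -0.3],
                   [0.3, -0.2, 10.0]],
                  [7.85, -19.3, 71.4]))  # solution ~ [3, -2.5, 7]
```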

## Ill-Conditioning

- Check by multiplying A by inv(A): if the result is not close to the identity matrix, the system is ill-conditioned.
- Matrix condition number. Norm: a real-valued function that provides a measure of the size or length of multicomponent mathematical entities such as vectors and matrices.

## Gauss-Seidel and Jacobi Iteration

- Gauss-Seidel method: solves each equation in the system for a particular variable and uses that value immediately in the later equations.
- Jacobi iteration: solves all equations using the previous values, then plugs all new values into the next iteration.
- Convergence of an iterative method is assessed by determining the relative percent change of each element in x.
- Diagonal dominance: the absolute value of the diagonal coefficient of each equation must be larger than the sum of the absolute values of the other coefficients in the equation.

## Newton-Raphson for Systems

- Nonlinear systems may be solved using the Newton-Raphson method extended to multiple variables.

## Statistics Review

- Measures of central tendency: arithmetic mean (sum of the individual data points divided by the number of points), median (midpoint of a group of data), mode (the value that occurs most frequently).
- Measures of spread: standard deviation.

## Linear Least-Squares Regression

- 1st strategy: minimize the sum of all residual errors for the available data.
- 2nd: minimize the sum of the absolute values of the discrepancies.
- 3rd: minimize the sum of the squares of the residuals; this method has the advantage of finding a unique line for a given set of data.
- Coefficient of determination: based on the difference between the sum of squares of the data residuals and the sum of squares of the estimate residuals.

## Linearizing Nonlinear Equations

- Exponential. Nonlinear: y = a e^(bx). Linearized: ln y = ln a + bx.
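The Gauss-Seidel idea (updated values used immediately, unlike Jacobi) can be sketched in Python; the system below is diagonally dominant, so the iteration converges. Names and tolerances are illustrative:

```python
# Gauss-Seidel: solve equation i for x[i] and use the new value
# right away in the remaining equations of the same sweep.

def gauss_seidel(A, b, es=1e-8, max_it=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(max_it):
        max_rel_change = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (b[i] - s) / A[i][i]
            if x_new != 0:
                max_rel_change = max(max_rel_change,
                                     abs((x_new - x[i]) / x_new))
            x[i] = x_new                 # used immediately (vs. Jacobi)
        if max_rel_change < es:          # relative change of each element
            break
    return x

print(gauss_seidel([[3.0, -0.1, -0.2],
                    [0.1, 7.0, -0.3],
                    [0.3, -0.2, 10.0]],
                   [7.85, -19.3, 71.4]))  # converges to ~ [3, -2.5, 7]
```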

- Power. Nonlinear: y = a x^b. Linearized: log y = log a + b log x.

## Interpolation

- Polynomial interpolation: for n data points there is only one polynomial (of order n-1) that passes through all the points; used for interpolation when the number of data points equals the number of coefficients.
- `polyfit` is the preferred way to compute it.
- Newton interpolating polynomials: linear interpolation using similar triangles; quadratic interpolation uses three points to construct a 2nd-order polynomial. The process can be generalized to fit an (n-1)th-order polynomial to n data points.
- Lagrange interpolation: a weighted average of values; for two points connected by a straight line, F(x) = L1*f(x1) + L2*f(x2), where the Li are weighting coefficients that are functions of x.

## Splines

- Connect the data point to point.
- Linear splines: straight-line equations between each pair of points; they pass through the points but have discontinuous first derivatives.
- Cubic splines: provide the simplest representation that exhibits the desired appearance and smoothness; the not-a-knot condition forces continuity of the third derivative at the second and penultimate points.
- The `interp1` function can perform several different kinds of interpolation: `yi = interp1(x, y, xi, method)`.
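The Lagrange form generalizes the two-point weighted average above to n points; a Python sketch (illustrative names, the notes would use MATLAB):

```python
# Lagrange interpolation: F(x) = sum_i Li(x) * f(xi), where each
# weighting coefficient Li(x) is 1 at xi and 0 at every other xj.

def lagrange_interp(xs, ys, x):
    n = len(xs)
    total = 0.0
    for i in range(n):
        # Li(x) = product over j != i of (x - xj) / (xi - xj)
        Li = 1.0
        for j in range(n):
            if j != i:
                Li *= (x - xs[j]) / (xs[i] - xs[j])
        total += Li * ys[i]
    return total

# three points on y = x^2: quadratic interpolation reproduces it,
# so the value at x = 2 is ~4
print(lagrange_interp([0.0, 1.0, 3.0], [0.0, 1.0, 9.0], 2.0))
```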

## NEW STUFF: Integration

## Trapezoidal Rule

- Single application: I = (b - a) * (f(a) + f(b)) / 2.
- The composite version applies the rule over many segments.
- Drawback: the error is related to the 2nd derivative of the function.

## Simpson's Rules

- The 1/3 rule corresponds to 2nd-order polynomials. The interval is divided into an even number of segments; odd interior points are weighted by 4, even interior points by 2, and the end points by 1.
- The 3/8 rule corresponds to a 3rd-order polynomial fit through four points: interior points are weighted 3/8, end points 1/8.

## Richardson Extrapolation

- Uses two estimates of an integral to compute a third, more accurate approximation.

## Gauss Quadrature

- Techniques for evaluating the area under a straight line joining two points on a curve; chooses the line that balances the positive and negative errors.
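The composite trapezoidal and Simpson's 1/3 rules can be sketched directly from the weights above (illustrative Python; `n` is the number of segments):

```python
# Composite trapezoidal rule: end points weighted 1, interior points 2.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += 2 * f(a + i * h)
    return h * s / 2

# Composite Simpson's 1/3 rule: needs an even number of segments;
# odd interior points weighted 4, even interior points 2, ends 1.
def simpson13(f, a, b, n):
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h * s / 3

f = lambda x: x**3                  # exact integral over [0, 2] is 4
print(trapezoid(f, 0, 2, 100))      # close to 4, small O(h^2) error
print(simpson13(f, 0, 2, 100))      # exact for cubics (up to roundoff)
```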

## Numerical Differentiation

- Centered differences are the best way to differentiate numerically.
- Richardson extrapolation: used to combine two lower-accuracy estimates of a derivative to produce a higher-accuracy estimate.
- Numerical differentiation tends to amplify errors in data; integration tends to smooth data errors.

## Ordinary Differential Equations

- One-step methods differ in how they estimate the slope.
- Euler's method: the slope at the beginning of the interval is taken as an approximation of the average slope over the whole interval. The first derivative provides a direct estimate of the slope at ti, and Euler's method uses that estimate as the increment function.
- Types of error in Euler's method: roundoff errors, caused by the limited number of significant digits that can be retained; truncation errors, caused by the nature of the approximation techniques employed.
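Euler's update rule, y(i+1) = y(i) + f(ti, yi) * h, can be sketched in Python (illustrative names):

```python
# Euler's method: use the slope at the start of each interval as
# the increment function for the whole step.

def euler(f, t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        y += f(t, y) * h          # slope at beginning of interval
        t += h
    return y

# dy/dt = y with y(0) = 1: the exact solution is e^t,
# so y(1) should come out near e ~ 2.71828
print(euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000))
```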

## Heun's Predictor-Corrector Method

- Determine the derivatives at the beginning and at the predicted ending of the interval, and average them.
- Relies on making a prediction of the new value of y, then correcting it based on the slope calculated at that new value.
- The predictor equation is used to calculate the slope at the end of the interval; the predictor gives not the final value but an intermediate prediction, and the two slopes are combined to get an average slope.

## Midpoint Method

- Predicts the slope at the midpoint of the interval rather than at the end.
- Cannot be applied iteratively.
- Superior to Euler's method because it utilizes a slope estimate at the midpoint of the interval; centered finite differences are better approximations of derivatives.
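Both one-step updates can be sketched as single-step functions (illustrative Python; `k1`/`k2` naming is an assumption, not from the notes):

```python
# Heun's predictor-corrector: predict y at the end of the interval,
# then correct using the average of the start and end slopes.
def heun_step(f, t, y, h):
    k1 = f(t, y)                        # slope at beginning
    y_pred = y + k1 * h                 # predictor (intermediate value)
    k2 = f(t + h, y_pred)               # slope at predicted end
    return y + (k1 + k2) / 2 * h        # corrector: average slope

# Midpoint method: use the slope estimated at the interval midpoint.
def midpoint_step(f, t, y, h):
    y_mid = y + f(t, y) * h / 2         # half-step Euler prediction
    return y + f(t + h / 2, y_mid) * h  # midpoint slope for full step

def integrate(step, f, t0, y0, h, n_steps):
    t, y = t0, y0
    for _ in range(n_steps):
        y = step(f, t, y, h)
        t += h
    return y

# dy/dt = y, y(0) = 1, so y(1) = e ~ 2.71828; with h = 0.1 both
# methods land near 2.714, closer than plain Euler's ~2.594.
print(integrate(heun_step, lambda t, y: y, 0.0, 1.0, 0.1, 10))
print(integrate(midpoint_step, lambda t, y: y, 0.0, 1.0, 0.1, 10))
```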

