
CAAM 353

Solving Systems of Non-Linear Equations

f(x)=0
• We’ve discussed so far:
– One nonlinear equation, one unknown
– n linear equations, n unknowns

• Now, we move on to:
– n nonlinear equations, n unknowns

• f(x) = 0, where f is a set of n equations and x is a vector of n unknowns
• In words, we want each $f_i$ to simultaneously equal zero when we plug in the vector x

Nonlinear Equations
• Nonlinear equations don't follow the pattern of linear equations:
– 0, 1, or infinitely many solutions
• You can have any number of solutions!
• Example: $y = x^2$ and $y = -x^2 + 1$ intersect in exactly two points

Application
• When do we need to solve a system of nonlinear equations?
• Optimization problems!
• Ex. Given a function f(x) (where x is a vector of several variables), what values of x maximize or minimize f?
• To solve, you write down all partial derivatives of f w.r.t. each $x_i$ and set them equal to 0
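For instance, minimizing $\phi(x_1, x_2) = (x_1 - 1)^2 + x_1 x_2^2$ (an illustrative choice, not from the slides) means solving the 2 x 2 nonlinear system $2(x_1 - 1) + x_2^2 = 0$, $2 x_1 x_2 = 0$.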

Applications
• Global vs. Local Maxes or Mins
[Figure: function graph with global and local maxima marked MAX]

Real Example
• Modeling the cytoskeleton: cross-linked fiber network of actin filaments
• To simulate motion and deformation, fibers are modeled as an elastic spring network
• Hooke's law type behavior
http://scienceblogs.com/transcript/2007/01/cytoskeleton.php

Fiber Network
• Total potential energy in the system is given by $E = \sum_{i=1}^{n} \frac{k}{2}\,(l_i - l_i^0)^2$ for each fiber i in the system with n fibers
• $l_i$ = current fiber length, $l_i^0$ = relaxed length
• Question is: where should the fiber endpoints be to minimize the stored energy in the system?

Fiber Network
• $l_i = \sqrt{(x_2^i - x_1^i)^2 + (y_2^i - y_1^i)^2 + (z_2^i - z_1^i)^2}$
• To find the position of minimal energy for all fiber endpoints, we must find the minimum of the energy equation
• Solve the system $\frac{\partial E}{\partial x_j^i} = 0$, with partials for the y's and z's also
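By the chain rule, fiber i contributes $k\,(l_i - l_i^0)\,\frac{x_1^i - x_2^i}{l_i}$ to $\partial E / \partial x_1^i$, so each of these equations is nonlinear in the endpoint coordinates.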

Solution Approach
• Iterative approach: begin with initial guess $x_0$ and (hopefully) walk towards the solution $x^*$
• Unfortunately, no bisection-type method exists for systems of nonlinear equations
• Newton does extend!
• Can get convergence if our initial guess is "close" to the true one
• But no bisection to help us narrow down where to start!

Newton for Systems
• Newton's method is derived from the Taylor series
• The systems equivalent can be derived from the multivariable Taylor series
• 2-variable example:
$f(x,y) = f(a,b) + f_x(a,b)(x-a) + f_y(a,b)(y-b) + \frac{1}{2}\left[f_{xx}(a,b)(x-a)^2 + 2 f_{xy}(a,b)(x-a)(y-b) + f_{yy}(a,b)(y-b)^2\right] + \dots$

Newton for Systems
• General case: $f(x + p) = f(x) + J(x)\,p + O(\|p\|^2)$
where $x = (x_1, x_2, \dots, x_n)$, $f = (f_1, f_2, \dots, f_n)$,
$p = (p_1, p_2, \dots, p_n)$ is a direction vector, and
$J(x)$ = Jacobian matrix, the matrix of partial derivatives

Jacobian Matrix
• $J(x)$ is the n x n matrix of partial derivatives: $J_{ij} = \partial f_i / \partial x_j$

Newton for Systems
• $x_0$ = initial guess vector
• Want $0 = f(x_{n+1}) = f(x_n + p) \approx f(x_n) + J(x_n)\,p$
• Solve the linear system $J(x_n)\,p = -f(x_n)$ (gives new direction vector p)
• Update $x_{n+1} = x_n + p$
• Repeat! Until when?

Stopping Criteria
• Recall the stopping criteria for the one-variable case:
– $|x_n - x_{n-1}| < tol_1$
– $\frac{|x_n - x_{n-1}|}{|x_n|} < tol_2$
– $|f(x_n)| < tol_3$
– $\frac{|x_n - x_{n-1}|}{1 + |x_n|} < tol_4$
• Simply replace absolute values with norms
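A minimal MATLAB sketch of this Newton iteration with a norm-based stopping test (fun and jac are assumed function handles for f and J; an illustration, not the course code):

    % Newton's method for a system f(x) = 0 (save as newton_system.m).
    % fun returns the n-vector f(x); jac returns the n-by-n Jacobian J(x).
    function x = newton_system(fun, jac, x0, tol, maxit)
        x = x0;
        for k = 1:maxit
            p = -(jac(x) \ fun(x));          % solve J(x_n) p = -f(x_n)
            x = x + p;                       % update x_{n+1} = x_n + p
            if norm(p) < tol*(1 + norm(x))   % norm version of the |x_n - x_{n-1}| test
                return
            end
        end
    end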

Newton for Systems
• One issue: the method will only work if J(x) is nonsingular
• Will depend on the current x vector
• Inverse Function Theorem (from calculus): if the functions in the vector f are continuously differentiable, and the Jacobian is nonsingular at a point (vector) $x^*$, then the Jacobian will be nonsingular in a neighborhood of $x^*$

Convergence
• Like in the one-function, one-variable case, Newton's method for systems converges quadratically provided the Jacobian has a bounded inverse (is nonsingular) and provided our initial guess is "close" to the root:
$\|x^* - x_{k+1}\| < M\,\|x^* - x_k\|^2$

Example
• $f_1(x) = x_1^2 + x_2^2 - 4$
• $f_2(x) = x_2 \cos(x_1)$
newtonsys.m
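Assuming the newton_system sketch above and this reading of the system (the actual newtonsys.m is not shown here), the example could be run as:

    % Example system and its Jacobian as anonymous functions.
    f = @(x) [x(1)^2 + x(2)^2 - 4;          % f1 = x1^2 + x2^2 - 4
              x(2)*cos(x(1))];              % f2 = x2*cos(x1)
    J = @(x) [2*x(1),           2*x(2);     % row 1: partials of f1
              -x(2)*sin(x(1)),  cos(x(1))]; % row 2: partials of f2
    x = newton_system(f, J, [1; 1], 1e-10, 50)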

Unconstrained Optimization
• Problem: $\min_{x \in R^n} \phi(x)$
• System of equations: $\nabla\phi(x) = 0$
• $\nabla^2\phi(x)$ = Hessian matrix of 2nd derivatives (at a minimum, this matrix is symmetric positive definite)
• Algorithm:
– Solve $\nabla^2\phi(x_k)\,p = -\nabla\phi(x_k)$
– $x_{k+1} = x_k + p$
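The same Newton loop specialized to this setting, as a minimal sketch (grad and hess are assumed handles for $\nabla\phi$ and $\nabla^2\phi$):

    % Newton's method for unconstrained minimization of phi.
    function x = newton_min(grad, hess, x0, tol, maxit)
        x = x0;
        for k = 1:maxit
            p = -(hess(x) \ grad(x));        % solve Hessian(x_k) p = -grad(x_k)
            x = x + p;
            if norm(grad(x)) < tol, return; end   % stop when the gradient is small
        end
    end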

Example
• Fiber network (mass-spring) application
• Move fibers to minimize potential energy
• $E = \frac{k}{2}\left(\sqrt{(x_1-x_3)^2 + (y_1-y_3)^2} - l_1\right)^2 + \frac{k}{2}\left(\sqrt{(x_2-x_3)^2 + (y_2-y_3)^2} - l_2\right)^2 + \frac{k}{2}\left(\sqrt{(x_4-x_3)^2 + (y_4-y_3)^2} - l_3\right)^2 + \dots$

Example
• Compute partial derivatives of E w.r.t. x1 – x4 and y1 – y4
• Compute Hessian of 2nd partials (8 x 8 matrix)
[Figure: Initial Stretched State → Min. Energy State]
basicsprings.m
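A sketch of the energy from the previous slide in a form a minimizer could call (the real basicsprings.m is not shown; the stiffness k and rest lengths l0 are placeholders):

    % Energy of three fibers joining nodes 1, 2, 4 to the shared node 3.
    % z = [x1 y1 x2 y2 x3 y3 x4 y4]'; k = stiffness; l0 = [l1 l2 l3] rest lengths.
    E = @(z, k, l0) ...
        k/2*(hypot(z(1)-z(5), z(2)-z(6)) - l0(1))^2 + ...   % fiber 1-3
        k/2*(hypot(z(3)-z(5), z(4)-z(6)) - l0(2))^2 + ...   % fiber 2-3
        k/2*(hypot(z(7)-z(5), z(8)-z(6)) - l0(3))^2;        % fiber 4-3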

Pros/Cons to Newton
• Pros:
– Easy equivalent of the one-variable case
• Cons:
– You have to compute the Hessian matrix every step
– You have to do a linear solve every step
– We have no control over convergence

Alternatives
• General idea behind optimization iteration:
– $x_{k+1} = x_k + a_k p_k$ (walk step length a in the p direction to update the x vector)
– $p_k = -B_k^{-1}\,\nabla\phi(x_k)$ (choose a new search direction)
• Gradient Descent Method
• Quasi-Newton Methods

Gradient Descent
• Given a surface in $R^n$ defined by a function f(x), the gradient of f(x) at a point $x^*$ always points in the direction of steepest ascent
• $-\nabla f(x)$ points in the direction of steepest descent
• It's a great choice for p (search direction)
• $x_{k+1} = x_k + a_k p_k$
• $p_k = -\nabla f(x_k)$

Gradient Descent
• springsmin.m
[Figure: Initial Position → Ending Position]

Step Size
• How far should you "walk" in the p direction to settle on a new position x?
• Line Search Strategies:
– Start with a max step size (like a = 1); if f(x + ap) < f(x), then update x to x + ap. If f(x + ap) > f(x), then try a smaller step size: ½, ¼, etc.
– Get 3 (a, f(x + ap)) coordinate pairs, do a quadratic interpolation and find its minimum, then utilize that a value for the step size
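A sketch of gradient descent with the step-halving line search just described (f and grad are assumed handles; springsmin.m itself may differ):

    % Gradient descent with a backtracking (step-halving) line search.
    function x = graddescent(f, grad, x0, tol, maxit)
        x = x0;
        for k = 1:maxit
            p = -grad(x);                    % steepest-descent direction
            if norm(p) < tol, return; end    % gradient is small: done
            a = 1;                           % start from the max step size
            while f(x + a*p) >= f(x) && a > 1e-12
                a = a/2;                     % try 1/2, 1/4, ...
            end
            x = x + a*p;
        end
    end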

Gradient Descent
• Pros:
– No need for a linear solve
– No need to compute the Hessian
• Cons:
– Can be very slow
– Must decide on a step size every time

Quasi-Newton Methods
• Systems equivalent of the secant method
• Instead of computing the Jacobian every time, can we just estimate it? Or update it?
• Better yet, can we update/estimate $J^{-1}(x)$ (the inverse of the Hessian matrix) to avoid the linear solves?

Quasi-Newton Methods
• Taylor expansion for the gradient $\nabla\phi(x)$:
$\nabla\phi(x_{k+1}) = \nabla\phi(x_k) + \nabla^2\phi(x_k)\,(x_{k+1} - x_k)$
• Let $B = \nabla^2\phi(x_k)$ (the Hessian matrix), $w_k = x_{k+1} - x_k$, $y_k = \nabla\phi(x_{k+1}) - \nabla\phi(x_k)$
• So, B must satisfy: $B\,w_k = y_k$

Quasi-Newton Methods
• It's the problem Ax = b, except x and b are known and A is not!
• Under-determined system...
• But $B_{k+1}$ shouldn't be too different from $B_k$
• It just has to be adjusted to satisfy $B\,w_k = y_k$ (just one direction)
• A class of methods relies on this idea via rank-one updates of B

BFGS Method
• Popular choice: Broyden-Fletcher-Goldfarb-Shanno method
• Actually updates $G = B^{-1}$ instead of B
• Algorithm:
– Choose $x_0$ and $G_0$ (usually the identity matrix)
– $p_k = -G_k\,\nabla\phi(x_k)$
– Choose step size a, $x_{k+1} = x_k + a\,p_k$
– $w_k = a\,p_k$, $y_k = \nabla\phi(x_{k+1}) - \nabla\phi(x_k)$
– $G_{k+1} = \left(I - \frac{w_k y_k^T}{w_k^T y_k}\right) G_k \left(I - \frac{y_k w_k^T}{w_k^T y_k}\right) + \frac{w_k w_k^T}{w_k^T y_k}$
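The update step as a MATLAB helper (a sketch, not springsbfgs.m):

    % One BFGS update of the inverse-Hessian approximation G.
    % w = x_{k+1} - x_k;  y = grad(x_{k+1}) - grad(x_k).
    function G = bfgs_update(G, w, y)
        rho = 1/(w'*y);                      % requires curvature w'*y > 0
        I = eye(length(w));
        G = (I - rho*(w*y'))*G*(I - rho*(y*w')) + rho*(w*w');
    end

Because G approximates the inverse Hessian directly, each iteration needs only matrix-vector products, with no linear solve.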

BFGS Method
• springsbfgs.m
[Figure: Initial Position → Ending Position]

Local vs. Global Min/Max
• We usually want a global min/max (the overall smallest or largest value of our function f(x))
• But we may end up at a local one
• Typical solution? (see the sketch below)
– Try different starting guesses and run the minimization routine
– Choose the "best" minimum from the set
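A sketch of this multi-start strategy, reusing the graddescent sketch from earlier on a toy function with two global minima (both are illustrations, not course code):

    % Multi-start: try several random guesses, keep the best minimum found.
    f = @(x) (x(1)^2 - 1)^2 + x(2)^2;            % minima at (1,0) and (-1,0)
    g = @(x) [4*x(1)*(x(1)^2 - 1); 2*x(2)];      % its gradient
    best = inf; xbest = [];
    for trial = 1:20
        x0 = 4*rand(2,1) - 2;                    % random start in [-2, 2]^2
        x  = graddescent(f, g, x0, 1e-8, 500);   % any local minimizer works here
        if f(x) < best, best = f(x); xbest = x; end
    end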
