MIT OpenCourseWare

http://ocw.mit.edu

16.323 Principles of Optimal Control
Spring 2008

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

16.323 Lecture 2
Nonlinear Optimization
• Constrained nonlinear optimization
• Lagrange multipliers
• Penalty/barrier functions also often used, but will not be discussed here.

Figure by MIT OpenCourseWare.

Constrained Optimization

• Consider a problem with the next level of complexity: optimization with equality constraints

    min_y F(y)   such that   f(y) = 0   (a vector of n constraints)

• To simplify the notation, assume that the p-state vector y can be separated into a decision m-vector u and a state n-vector x related to the decision variables through the constraints. Problem now becomes:

    min_{x,u} F(x, u)   such that   f(x, u) = 0

– Assume that p > n, otherwise the problem is completely specified by the constraints (or over-specified).

• One solution approach is direct substitution, which involves:
– Solving for x in terms of u using f
– Substituting this expression into F and solving for u using an unconstrained optimization
– Works best if f is linear (assumption is that not both of f and F are linear)

• Example: minimize F = x1² + x2² subject to the constraint that x1 + x2 + 2 = 0
– Clearly the unconstrained minimum is at x1 = x2 = 0
– Substitution in this case gives equivalent problems:

    min_{x2} F̃2 = (−2 − x2)² + x2²   or   min_{x1} F̃1 = x1² + (−2 − x1)²

for which the solution (∂F̃2/∂x2 = 0) is x1 = x2 = −1

Figure 2.8: Simple function minimization with constraint.

• Bottom line: substitution works well for linear constraints, but the process is hard to generalize for larger systems/nonlinear constraints.
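As a quick numerical check of the substitution approach (a minimal sketch, not from the original notes; fminsearch is in base MATLAB):

% Direct substitution for: min x1^2 + x2^2  s.t.  x1 + x2 + 2 = 0.
% Eliminate x1 = -2 - x2 and minimize the resulting unconstrained
% function of x2 alone.
Ft2 = @(x2) (-2 - x2).^2 + x2.^2;   % F after substituting for x1
x2s = fminsearch(Ft2, 0);           % 1-D unconstrained minimization
x1s = -2 - x2s;                     % recover x1 from the constraint
fprintf('x1* = %.4f, x2* = %.4f\n', x1s, x2s)   % both approach -1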

Lagrange Multipliers

• Need a more general strategy: using Lagrange multipliers.

• Since f(x, u) = 0, we can adjoin it to the cost with constants λᵀ = [λ1 ... λn] without changing the function value along the constraint to create the Lagrangian function

    L(x, u, λ) = F(x, u) + λᵀ f(x, u)

• Given values of x and u for which f(x, u) = 0, consider differential changes to the Lagrangian from differential changes to x and u:

    dL = (∂L/∂x) dx + (∂L/∂u) du

where ∂L/∂u = [∂L/∂u1 · · · ∂L/∂um] (row vector)

• Since u are the decision variables, it is convenient to choose λ so that

    ∂L/∂x = ∂F/∂x + λᵀ (∂f/∂x) ≡ 0        (2.1)
    ⇒ λᵀ = −(∂F/∂x)(∂f/∂x)⁻¹              (2.2)

• To proceed, must determine what changes are possible to the cost while keeping the equality constraint satisfied.
– Changes to x and u are such that f(x, u) = 0, then

    df = (∂f/∂x) dx + (∂f/∂u) du ≡ 0       (2.3)
    ⇒ dx = −(∂f/∂x)⁻¹ (∂f/∂u) du           (2.4)
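To make (2.1)-(2.2) concrete, they can be verified symbolically on the page 2-2 example, treating x1 as the state x and x2 as the decision u. A minimal sketch, assuming the Symbolic Math Toolbox is available:

% Symbolic check of (2.2) on the earlier example: F = x1^2 + x2^2,
% f = x1 + x2 + 2 = 0 (Symbolic Math Toolbox assumed).
syms x1 x2 real
Fc = x1^2 + x2^2;  fc = x1 + x2 + 2;
lamT = -diff(Fc, x1)/diff(fc, x1);        % (2.2): lam' = -dF/dx (df/dx)^-1
dLdu = diff(Fc, x2) + lamT*diff(fc, x2);  % constrained gradient wrt u
sol  = solve(dLdu == 0, fc == 0);         % stationary point on the constraint
% sol.x1 = sol.x2 = -1, matching the substitution result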

• Then the allowable cost variations are

    dF = (∂F/∂x) dx + (∂F/∂u) du                              (2.5)
       = [∂F/∂u − (∂F/∂x)(∂f/∂x)⁻¹(∂f/∂u)] du                 (2.6)
       = [∂F/∂u + λᵀ (∂f/∂u)] du                               (2.7)
       ≡ (∂L/∂u) du

• So the gradient of the cost F with respect to u while keeping the constraint f(x, u) = 0 satisfied is just ∂L/∂u, and we need this gradient to be zero to have a stationary point, so that dF = 0 ∀ du ≠ 0.

• Thus the necessary conditions for a stationary value of F are

    ∂L/∂x = 0                  (2.8)
    ∂L/∂u = 0                  (2.9)
    ∂L/∂λ = f(x, u) = 0        (2.10)

which are 2n + m equations in 2n + m unknowns.

• Note that Eqs. 2.8–2.10 can be written compactly as

    ∂L/∂y = 0                  (2.11)
    ∂L/∂λ = 0                  (2.12)

– The solutions of which give the stationary points.
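Numerically, (2.11)-(2.12) are just a square root-finding problem. A minimal sketch for the page 2-2 example, assuming fsolve from the Optimization Toolbox:

% Solve dL/dy = 0, dL/dlam = 0 for F = x1^2 + x2^2, f = x1 + x2 + 2 = 0;
% the unknowns are z = [x1; x2; lam] (fsolve assumed available).
KKT = @(z) [2*z(1) + z(3);       % dL/dx1 = 2 x1 + lam
            2*z(2) + z(3);       % dL/dx2 = 2 x2 + lam
            z(1) + z(2) + 2];    % dL/dlam = x1 + x2 + 2
z = fsolve(KKT, zeros(3,1))      % returns x1 = x2 = -1, lam = 2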

Intuition

• Can develop the intuition that the constrained solution will be a point of tangency of the constant cost curves and the constraint function
– No further improvements possible while satisfying the constraints.
– In the 2D case, this corresponds to ∂F/∂y being collinear to ∂f/∂y

• Equivalent to saying that the gradient of the cost ftn (normal to the constant cost curve) ∂F/∂y [black lines] must lie in the space spanned by the constraint gradients ∂f/∂y [red lines]
– Means cost cannot be improved without violating the constraints.

• Note: If this were not true, then it would be possible to take dy in the negative of the direction of the component of the cost gradient orthogonal to the constraint gradient, thereby reducing the cost and still satisfying the constraint.
– Can see this at the points on the constraint above and below the optimal value of x2 in Figure 2.9.

Figure 2.9: Minimization with equality constraints: shows that function and cost gradients are nearly collinear near the optimal point and clearly not far away. f(x1, x2) = x2 − ((x1)³ − (x1)² + (x1) + 2) = 0 and F = ½ xᵀ [1 1; 1 2] x

Figure 2.10: Zoomed in plot.

Figure 2.11: Change constraint: note that the cost and constraint gradients are collinear, but now aligned. f(x1, x2) = x2 − ((x1 − 2)³ − (x1 − 2)² + (x1 − 2) + 2) = 0 and F = ½ xᵀ [1 1; 1 2] x

• Generalize this intuition of being "collinear" to larger state dimensions with the notion that the cost gradient must lie in the space spanned by the constraint gradients.
– Equivalent to saying that it is possible to express the cost gradient as a linear combination of the constraint gradients
– Again, of course, if this was not the case, then improvements could be made to the cost without violating the constraints.

• So that at a constrained minimum, there must exist constants such that the cost gradient satisfies:

    ∂F/∂y = −λ1 (∂f1/∂y) − λ2 (∂f2/∂y) − · · · − λn (∂fn/∂y)    (2.13)
          = −λᵀ (∂f/∂y)

or equivalently that

    ∂F/∂y + λᵀ (∂f/∂y) = 0                                      (2.14)

which is, of course, the same as Eq. 2.11.

Constrained Example

• Minimize F(x1, x2) = x1² + x2² subject to f(x1, x2) = x1 + x2 + 2 = 0
– Form the Lagrangian

    L = F(x1, x2) + λ f(x1, x2) = x1² + x2² + λ(x1 + x2 + 2)

– Where λ is the Lagrange multiplier

• The solution approach without constraints is to find the stationary point of F(x1, x2) (∂F/∂x1 = ∂F/∂x2 = 0)
– With constraints we find the stationary points of L, with y = [x1; x2]:

    ∂L/∂y = 0,   ∂L/∂λ = 0

which gives

    ∂L/∂x1 = 2x1 + λ = 0
    ∂L/∂x2 = 2x2 + λ = 0
    ∂L/∂λ = x1 + x2 + 2 = 0

• This gives 3 equations in 3 unknowns; solve to find x1* = x2* = −1

• The key point here is that due to the constraint, the selections of x1 and x2 during the minimization are not independent
– The Lagrange multiplier captures this dependency.

• Difficulty can be solving the resulting equations for the optimal points (can be ugly nonlinear equations)
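For this example the three stationarity equations happen to be linear, so a single linear solve recovers the answer; a minimal sketch (my own check, not from the notes):

% The stationarity system for this example is linear in [x1; x2; lam]:
A = [2 0 1;        % 2 x1 + lam       = 0
     0 2 1;        %       2 x2 + lam = 0
     1 1 0];       % x1 + x2          = -2
b = [0; 0; -2];
z = A\b            % z = [-1; -1; 2]: x1* = x2* = -1 with lam = 2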

Inequality Constraints

• Now consider the problem

    min_y F(y)                 (2.15)
    such that f(y) ≤ 0         (2.16)

– Assume that there are n constraints, but do not need to constrain n with respect to the state dimension p, since not all inequality constraints will limit a degree of freedom of the solution.

• Have a similar picture as before, but now not all constraints are active
– Black line at top is inactive since x1 + x2 − 1 < 0 at the optimal value x = [1, −0.60] ⇒ it does not limit a degree of freedom in the problem.
– Blue constraint is active: cost is lower to the left, but f1 > 0 there

Figure 2.12: Cost and constraint gradients shown

Figure 2.13: Other cases of active and inactive constraints. With x1 + x2 − 1 ≤ 0, both constraints are active.

• Intuition in this case is that at the minimum, the cost gradient must lie in the space spanned by the active constraints, so split as:

    ∂F/∂y = − Σ_{i active} λi (∂fi/∂y) − Σ_{j inactive} λj (∂fj/∂y)    (2.17)

– And if the constraint is inactive, then can set λj = 0

• With equality constraints, needed the cost and function gradients to be collinear, but they could be in any orientation.

• For inequality constraints, need an additional constraint that is related to the allowable changes in the state:
– Must restrict condition 2.17 so that the cost gradient points in the direction of the "allowable side" of the constraint (f < 0).
⇒ Cost cannot be reduced without violating constraint.
⇒ Cost and function gradients must point in opposite directions.
– Given 2.17, require that λi ≥ 0 for active constraints

• Summary: Active constraints have λi ≥ 0, and inactive ones λj = 0

• Given this, we can define the same Lagrangian as before, L = F + λᵀf, and the necessary conditions for optimality are

    ∂L/∂y = 0                   (2.18)
    λi (∂L/∂λi) = 0 ∀i          (2.19)

where the second property applies to all constraints
– Active ones have λi ≥ 0 and satisfy ∂L/∂λi = fi = 0
– Inactive ones have λi = 0 and satisfy ∂L/∂λi = fi < 0

• Note that there is an implicit assumption here of regularity (that the active constraint gradients are linearly independent) for the λ's to be well defined.
– Avoids redundancy

• Equations 2.18 and 2.19 are the "essence" of the Kuhn-Tucker theorem in nonlinear programming; more precise statements are available with more careful specification of the constraint properties.
– Must also be careful in specifying the second-order conditions for a stationary point to be a minimum; see Bryson and Ho, sections 1.3 and 1.7.
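A minimal numerical check of conditions 2.18-2.19 on a one-constraint toy problem (my own example, not from the notes; fmincon reports the multipliers of the converged active set):

% Toy problem: min (y1+1)^2 + y2^2  s.t.  f = -y1 <= 0 (active at y* = 0).
F = @(y) (y(1)+1)^2 + y(2)^2;
[y,temp,temp,temp,lam] = fmincon(F,[1;1],[-1 0],0);  % linear ineq: -y1 <= 0
% y = [0;0]; lam.ineqlin = 2 >= 0 and f(y*) = 0 (active), so
% dF/dy + lam*df/dy = [2;0] + 2*[-1;0] = 0, as required by 2.18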

Cost Sensitivity

• Often find that the constraints in the problem are picked somewhat arbitrarily; some flexibility in the limits.
– Thus it would be good to establish the extent to which those choices impact the solution.

• Note that at the solution point,

    ∂L/∂y = 0 ⇒ ∂F/∂y = −λᵀ (∂f/∂y)

• If the state changes by Δy, would expect changes in the

    Cost:        ΔF = (∂F/∂y) Δy
    Constraint:  Δf = (∂f/∂y) Δy

So then we have that

    ΔF = −λᵀ (∂f/∂y) Δy = −λᵀ Δf

    ⇒ dF = −λᵀ df

– Sensitivity of the cost to changes in the constraint function is given by the Lagrange multipliers.

• For active constraints λ ≥ 0, so expect that dF/df ≤ 0
– Makes sense because if it is active, then allowing f to increase will move the constraint boundary in the direction of reducing F
– Correctly predicts that inactive constraints will not have an impact.
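This prediction is easy to test on the page 2-9 example by relaxing the constraint to x1 + x2 + 2 = c; a minimal sketch (c and dc are illustrative names, not from the notes):

% Sensitivity check of dF = -lam' df on F = x1^2 + x2^2 with the
% constraint relaxed to x1 + x2 = c - 2 (hand solution gave lam = 2).
F   = @(x) x(1)^2 + x(2)^2;
sol = @(c) fmincon(F,[0;0],[],[],[1 1],c-2);  % equality-constrained solve
dc  = 0.1;
dF  = F(sol(dc)) - F(sol(0))   % actual change: about -0.195
dF_pred = -2*dc                % first-order prediction: -0.2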

Alternative Derivation of Cost Sensitivity

• Revise the constraints so that they are of the form f ≤ c, where c ≥ 0 is a constant that is nominally 0.
– The constraints can be rewritten as f̄ = f − c ≤ 0, which means

    ∂f̄/∂y ≡ ∂f/∂y   and   ∂f̄/∂c ≡ −I

and, assuming the f̄ constraint remains active as we change c, f̄ = f − c = 0.

• Note that at the solution point,

    ∂L/∂y = 0 ⇒ ∂F/∂y = −λᵀ (∂f̄/∂y) = −λᵀ (∂f/∂y)

• To study cost sensitivity, must compute ∂F/∂c:

    ∂F/∂c = (∂F/∂y)(∂y/∂c)
          = −λᵀ (∂f/∂y)(∂y/∂c)
          = −λᵀ (∂f/∂c)
          = −λᵀ

where the last step uses the fact that the constraint stays active, so f = c and hence ∂f/∂c = I.

Spr 2008 16. June 18. 2008 .323 2–16 Figure 2.14: Shows that changes to the constraint impact cost in a way that can be predicted from the Lagrange Multiplier.

Simple Constrained Example

• Consider the case F = x1² + x1x2 + x2² with constraints x2 ≥ 1 and x1 + x2 ≤ 3

• Form the Lagrangian

    L = x1² + x1x2 + x2² + λ1(1 − x2) + λ2(x1 + x2 − 3)

• Form the necessary conditions:

    ∂L/∂x1 = 2x1 + x2 + λ2 = 0
    ∂L/∂x2 = x1 + 2x2 − λ1 + λ2 = 0
    λ1 (∂L/∂λ1) = λ1(1 − x2) = 0
    λ2 (∂L/∂λ2) = λ2(x1 + x2 − 3) = 0

• Now consider the various options:
– Assume λ1 = λ2 = 0 (both inactive):

    ∂L/∂x1 = 2x1 + x2 = 0
    ∂L/∂x2 = x1 + 2x2 = 0

gives solution x1 = x2 = 0, as expected, but it does not satisfy all the constraints
– Assume λ1 = 0 (inactive), λ2 ≥ 0 (active):

    ∂L/∂x1 = 2x1 + x2 + λ2 = 0
    ∂L/∂x2 = x1 + 2x2 + λ2 = 0
    λ2 (∂L/∂λ2) = λ2(x1 + x2 − 3) = 0

which gives solution x1 = x2 = 3/2, which satisfies the constraints, but F = 6.75 and λ2 = −9/2 < 0, so this case is not a valid minimum

– Assume λ1 ≥ 0 (active), λ2 = 0 (inactive):

    ∂L/∂x1 = 2x1 + x2 = 0
    ∂L/∂x2 = x1 + 2x2 − λ1 = 0
    λ1 (∂L/∂λ1) = λ1(1 − x2) = 0

gives solution x1 = −1/2, x2 = 1, λ1 = 3/2, which satisfies the constraints, and F = 0.75

Figure 2.15: Simple example
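The case-by-case reasoning above is exactly an active-set enumeration, which can be automated for small quadratic problems; a minimal sketch (my own illustration, not part of the original notes):

% Active-set enumeration for F = x'Hx/2 with H = [2 1; 1 2] and the
% constraints written as A*x - b <= 0: try every on/off combination,
% solve the linear stationarity system, keep feasible points with lam >= 0.
H = [2 1; 1 2];                  % grad F = H*x for this quadratic cost
A = [0 -1; 1 1];  b = [-1; 3];   % f1 = 1 - x2 <= 0, f2 = x1 + x2 - 3 <= 0
best = inf; xstar = [];
for act = 0:3                    % bitmask over the two constraints
    S   = find(bitget(act, 1:2));               % indices of the active set
    As  = A(S,:);
    z   = [H As'; As zeros(numel(S))] \ [0; 0; b(S)];   % [x; lam_active]
    x   = z(1:2);  lam = z(3:end);
    if all(A*x - b <= 1e-9) && all(lam >= -1e-9)        % feasible, lam >= 0
        Fv = x'*H*x/2;
        if Fv < best, best = Fv; xstar = x; end
    end
end
% xstar = [-0.5; 1] with best = 0.75, matching the hand solution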

Code to generate Figure 2.12

% 16.323 Spr 2008
% Plot of cost ftns and constraints
clear all; close all;
set(0,'DefaultAxesFontSize',14,'DefaultAxesFontWeight','demi')
set(0,'DefaultTextFontSize',14,'DefaultTextFontWeight','demi')
global g G f

% quadratic cost F = g'X + X'GX/2
F=[]; g=[0;0]; G=[1 1;1 2];

% cubic constraint x_2 = f(x_1)  ==>  x_2 - f(x_1) <= 0
testcase=0;
if testcase
    f=inline('(1*(x1+1).^3-1*(x1+1).^2+1*(x1+1)+2)');
    dfdx=inline('(3*1*(x1+1).^2-2*1*(x1+1)+1)');
else
    f=inline('(1*(x1-2).^3-1*(x1-2).^2+1*(x1-2)+2)');
    dfdx=inline('(3*1*(x1-2).^2-2*1*(x1-2)+1)');
end

% evaluate the cost on a grid
x1=-3:.01:5; x2=-4:.01:4;
for ii=1:length(x1)
    for jj=1:length(x2)
        X=[x1(ii) x2(jj)]';
        F(ii,jj)=g'*X+X'*G*X/2;
    end
end

figure(1); clf
contour(x1,x2,F',[min(min(F)) .05 .1 .2 .4 .5 1:1:max(max(F))]);
xlabel('x_1'); ylabel('x_2'); hold on
plot(x1,f(x1),'r-','LineWidth',2)
axis([-3 5 -4 4])

% constrained minimum
% X=FMINCON(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON,OPTIONS)
xx=fmincon('meshf',[0;0],[],[],[],[],[],[],'meshc');
plot(xx(1),xx(2),'m*','MarkerSize',12)

% cost and constraint gradients at points on the constraint near xx
gam=.2;  % line scaling
[kk,II1]=min(abs(x1-xx(1)));
[kk,II2]=min(abs(x1-1.1*xx(1)));
[kk,II3]=min(abs(x1-0.9*xx(1)));
ll=[II1 II2 II3];
Jx=[];
for ii=1:length(ll)
    X=[x1(ll(ii)); f(x1(ll(ii)))];
    Jx(ii,:)=(g+G*X)';                      % cost gradient
    X2=X+Jx(ii,:)'*gam/norm(Jx(ii,:));
    plot([X(1),X2(1)],[X(2),X2(2)],'k-','LineWidth',2)
    plot(X2(1),X2(2),'ko','MarkerSize',12)
    if ii==1, text(X2(1),X2(2),'\partial F/\partial y'), end
    Nx1=X(1);                               % constraint gradient
    df=[-dfdx(Nx1);1];
    X3=[Nx1;f(Nx1)]; X4=X3+df*gam/norm(df);
    plot([X3(1),X4(1)],[X3(2),X4(2)],'r-','LineWidth',2)
    plot(X4(1),X4(2),'rs','MarkerSize',12)
    if ii==1, text(X4(1),X4(2),'\partial f/\partial y'), end
end
hold off

%%%%%%%%%%%%%%%%%%%%%%%%
% add a second (linear) constraint f_2 and repeat for Figure 2.13
f2=inline('-1*x1-1'); global f2
df2dx=inline('-1*ones(size(x))');
gam=2;

figure(3); clf
contour(x1,x2,F',[min(min(F)) .05 .1 .2 .4 .5 1:1:max(max(F))]);
hold on; xlabel('x_1'); ylabel('x_2')
plot(x1,f(x1),'k-','LineWidth',2)
plot(x1,f2(x1),'k-','LineWidth',2)

xx=fmincon('meshf',[0;0],[],[],[],[],[],[],'meshc2');
plot(xx(1),xx(2),'m*','MarkerSize',12)

% cost gradient at the solution
Jx=(g+G*xx)';
X2=xx+Jx'*gam/norm(Jx);
plot([xx(1),X2(1)],[xx(2),X2(2)],'m-','LineWidth',2)
plot(X2(1),X2(2),'mo','MarkerSize',12)
text([X2(1)],[X2(2)],'\partial F/\partial y')

% constraint gradients at selected points on each constraint
dd=[xx(1) 0 xx(1)]';
X=[dd f(dd)]; df=[-dfdx(dd) 1*ones(size(dd))];
X2=X+gam*df/norm(df);
for ii=1
    plot([X(ii,1),X2(ii,1)],[X(ii,2),X2(ii,2)],'k-','LineWidth',2)
    text([X2(ii,1)],[X2(ii,2)],'\partial f/\partial y')
end
X=[dd f2(dd)]; df2=[-df2dx(dd) 1*ones(size(dd))];
X2=X+gam*df2/norm(df2);
for ii=3
    plot([X(ii,1),X2(ii,1)],[X(ii,2),X2(ii,2)],'k-','LineWidth',2)
    text([X2(ii,1)],[X2(ii,2)-1],'\partial f/\partial y')
end

% label the feasible side of each constraint
text(-1,3,'f_1 < 0')
if testcase, text(0.5,2,'f_1 > 0'), else text(1,2,'f_1 > 0'), end
text(3,1,'f_2 < 0'); text(-2,-2,'f_2 > 0')
hold off

%%%%%%%%%%%%%%%%%%%%%%
% repeat with the other linear constraint for figure(4)
f2=inline('-1*x1+1'); global f2
df2dx=inline('-1*ones(size(x))');
gam=2;

figure(4); clf
contour(x1,x2,F',[min(min(F)) .05 .1 .2 .4 .5 1:1:max(max(F))]);
hold on; xlabel('x_1'); ylabel('x_2')
plot(x1,f(x1),'k-','LineWidth',2)
plot(x1,f2(x1),'k-','LineWidth',2)

xx=fmincon('meshf',[0;0],[],[],[],[],[],[],'meshc2');
plot(xx(1),xx(2),'m*','MarkerSize',12)
Jx=(g+G*xx)';
X2=xx+Jx'*gam/norm(Jx);
plot([xx(1),X2(1)],[xx(2),X2(2)],'m-','LineWidth',2)
plot(X2(1),X2(2),'mo','MarkerSize',12)
text([X2(1)],[X2(2)],'\partial F/\partial y')

% complete figure(4) with the same gradient arrows and labels as figure(3)
dd=[xx(1) 0 xx(1)]';
X=[dd f(dd)]; df=[-dfdx(dd) 1*ones(size(dd))];
X2=X+gam*df/norm(df);
for ii=1
    plot([X(ii,1),X2(ii,1)],[X(ii,2),X2(ii,2)],'k-','LineWidth',2)
    text([X2(ii,1)],[X2(ii,2)],'\partial f/\partial y')
end
X=[dd f2(dd)]; df2=[-df2dx(dd) 1*ones(size(dd))];
X2=X+gam*df2/norm(df2);
for ii=3
    plot([X(ii,1),X2(ii,1)],[X(ii,2),X2(ii,2)],'k-','LineWidth',2)
    text([X2(ii,1)],[X2(ii,2)-1],'\partial f/\partial y')
end
text(-1,2,'f_1 < 0')
if testcase, text(0.5,3,'f_1 > 0'), else text(1,3,'f_1 > 0'), end
text(3,2,'f_2 < 0'); text(-2,3,'f_2 > 0')
hold off

%%%%%%%%%%%%%%%%%%%%%%%%%
% print the figures
if testcase
    figure(1), print -r300 -dpng mesh1b.png  %jpdf('mesh1b')
    figure(3), print -r300 -dpng mesh4.png   %jpdf('mesh4')
    figure(4), print -r300 -dpng mesh4a.png  %jpdf('mesh4a')
else
    figure(1), print -r300 -dpng mesh1.png   %jpdf('mesh1')
    figure(1), axis([-4 0 -1 3])
    print -r300 -dpng mesh1a.png             %jpdf('mesh1a')
    figure(1), axis([-.5 4 -2 2])
    print -r300 -dpng mesh1c.png             %jpdf('mesh1c')
    figure(3), print -r300 -dpng mesh2.png   %jpdf('mesh2')
    figure(4), print -r300 -dpng mesh2a.png  %jpdf('mesh2a')
end

% sensitivity study
% line given by x_2=f(x_1), and the constraint is that x_2-f(x_1) <= 0
% changes are made to the constraint so that x_2-f(x_1) <= alp > 0
f=inline('(1*(x1-2).^3-1*(x1-2).^2+1*(x1-2)+2)');
dfdx=inline('(3*1*(x1-2).^2-2*1*(x1-2)+1)');

figure(5); clf
contour(x1,x2,F',[min(min(F)) .05:.5:max(max(F))]);
hold on; xlabel('x_1'); ylabel('x_2')

global alp
alp=0;   % nominal constraint level
[xx0,temp,temp,temp,lam0]=fmincon('meshf',[0;0],[],[],[],[],[],[],'meshc3');
alp=1;   % relaxed constraint level
[xx1,temp,temp,temp,lam1]=fmincon('meshf',[0;0],[],[],[],[],[],[],'meshc3');
plot(x1,f(x1),'k-','LineWidth',2)        % nominal constraint boundary
plot(x1,f(x1)+alp,'k--','LineWidth',2)   % relaxed constraint boundary

% compare costs and multipliers for the two constraint levels
[meshf(xx0) lam0.ineqnonlin; meshf(xx1) lam1.ineqnonlin]

hold on
plot(xx0(1),xx0(2),'mo','MarkerSize',12,'MarkerFaceColor','m')
plot(xx1(1),xx1(2),'md','MarkerSize',12,'MarkerFaceColor','m')
text(xx0(1)+.1,xx0(2),['const=0, F^*=' num2str(meshf(xx0)) ...
     ', \lambda_0 = ' num2str(lam0.ineqnonlin)])
text(xx1(1)+.1,xx1(2),['const=1, F^*=' num2str(meshf(xx1))])
axis([0 2.5 -1 .5])
print -r300 -dpng mesh5.png  %jpdf('mesh5')

function F=meshf(X)
% quadratic cost
global g G
F=g'*X+X'*G*X/2;
return

function [c,ceq]=meshc(X)
% single constraint used for Figure 2.12 (solution lies on the curve)
global f
c=[];
ceq=X(2)-f(X(1));   %ceq=f(X(1))-X(2);
return

function [c,ceq]=meshc2(X)
% two inequality constraints used for Figures 2.13/2.14
global f f2
c=[X(2)-f(X(1)); f2(X(1))-X(2)];   %c=[f(X(1))-X(2); X(2)-f2(X(1))];
ceq=[];
return
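The sensitivity study above also calls a constraint function meshc3 whose listing does not appear in the notes; a plausible reconstruction, consistent with the comment that the relaxed constraint is x_2 - f(x_1) <= alp:

function [c,ceq]=meshc3(X)
% relaxed constraint for the sensitivity study (reconstructed; the
% original listing is not in the notes): x_2 - f(x_1) <= alp
global f alp
c=X(2)-f(X(1))-alp;
ceq=[];
return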

881]).^2+xx. axis([-3 3 -3 3 0 20])% hh=get(gcf.’LineWidth’.’m-’. 2008 16.for jj=1:length(xx).^2+xx(1).xx.[].20.’rs’. hold on plot3(xx(1).clf xx=[-3:. ceq=0.% 14 15 16 17 18 xx=fmincon(’simplecaseF’.^2+1+xx. 21 1 function F=simplecaseF(X).’LineWidth’. 2 3 4 c=[1-X(2).[].ceq]=simplecaseC(X).^2+xx(1).Spr 2008 Code for Simple Constrained Example 1 2 3 4 5 figure(1).xx.2).[-109 74].’CameraPosition’.’MarkerSize’.[].end.% 9 10 11 12 13 xlabel(’x_1’).1:3]’.% set(hh.png.ones(size(xx)).’View’.xx(2).[]. 4 5 return 1 function [c.*xx(2).*xx(2) 19 20 print -r300 -dpng simplecase.’g-’.[].% hold on.5555 13.0].[-26.^2+xx(2).% plot3(xx.[].3-xx.^2+(3-xx).’children’).323 2–23 .xx. % hold off. 5 6 return June 18.% 6 7 8 plot3(xx. 2 3 F=X(1)^2+X(1)*X(2)+X(2)^2.% hh=mesh(xx.2).jj)= xx(ii)^2+xx(ii)*xx(jj)+xx(jj)^2.5307 151.[0.’simplecaseC’). for ii=1:length(xx). % FF(ii.’MarkerFace’.*(3-xx). ylabel(’x_2’).’r’) xx(1).FF).xx(1).end.X(1)+X(2)-3].^2+xx(2).