CHAPTER 19

Constrained Optimization II

This chapter continues our presentation of the central mathematical technique in economic theory: the solution of constrained optimization problems. The last chapter introduced the Lagrangian formulation of that solution and focused on its most important aspect: the first order conditions that form the basis of a large number of economic principles. This chapter looks at three other aspects of the Lagrangian approach: (1) the sensitivity of the optimal value of the objective function to changes in the parameters of the problem, (2) the second order conditions that distinguish maxima from minima, and (3) the constraint qualifications that are a subtle but necessary hypothesis in the Lagrangian approach. The last section of this chapter contains careful proofs of the basic first order conditions of the last chapter.

19.1 THE MEANING OF THE MULTIPLIER

In solving constrained optimization problems, we seem to be deriving extraneous information in the values of the multipliers $(\mu_1^*, \ldots, \mu_m^*)$. However, the multipliers play an important role in economic analysis; in some problems, a role at least as important as that of the maximizer itself. We will see in this section that the multipliers measure the sensitivity of the optimal value of the objective function to changes in the right-hand sides of the constraints and, as a result, they provide a natural measure of value for scarce resources in economic maximization problems.

One Equality Constraint

We return to the simplest problem, two variables and one equality constraint:
$$\text{maximize } f(x, y) \quad \text{subject to} \quad h(x, y) = a. \tag{1}$$
We will consider $a$ as a parameter which varies from problem to problem. For any fixed value of $a$, write $(x^*(a), y^*(a))$ for the solution to problem (1), and write $\mu^*(a)$ for the multiplier which corresponds to this solution. Let $f(x^*(a), y^*(a))$ be the corresponding optimal value of the objective function. We prove that under reasonable conditions which hold for nearly all constrained maximization problems, $\mu^*(a)$ measures the rate of change of the optimal value of $f$ with respect to the parameter $a$, or roughly speaking, the (infinitesimal) effect of a unit increase in $a$ on $f(x^*(a), y^*(a))$.

Theorem 19.1  Let $f$ and $h$ be $C^1$ functions of two variables. For any fixed value of the parameter $a$, let $(x^*(a), y^*(a))$ be the solution of problem (1) with corresponding multiplier $\mu^*(a)$. Suppose that $x^*$, $y^*$, and $\mu^*$ are $C^1$ functions of $a$ and that NDCQ holds at $(x^*(a), y^*(a), \mu^*(a))$. Then,
$$\mu^*(a) = \frac{d}{da}\, f\bigl(x^*(a), y^*(a)\bigr). \tag{2}$$

Proof  The Lagrangian for problem (1) is
$$L(x, y, \mu; a) = f(x, y) - \mu\bigl(h(x, y) - a\bigr), \tag{3}$$
with $a$ entering as a parameter. By Theorem 18.1, the solution $(x^*(a), y^*(a), \mu^*(a))$ of (1) satisfies
$$\begin{aligned}
0 &= \frac{\partial L}{\partial x}\bigl(x^*(a), y^*(a), \mu^*(a); a\bigr) = \frac{\partial f}{\partial x}\bigl(x^*(a), y^*(a)\bigr) - \mu^*(a)\,\frac{\partial h}{\partial x}\bigl(x^*(a), y^*(a)\bigr),\\
0 &= \frac{\partial L}{\partial y}\bigl(x^*(a), y^*(a), \mu^*(a); a\bigr) = \frac{\partial f}{\partial y}\bigl(x^*(a), y^*(a)\bigr) - \mu^*(a)\,\frac{\partial h}{\partial y}\bigl(x^*(a), y^*(a)\bigr),
\end{aligned} \tag{4}$$
for all $a$. Furthermore, since $h(x^*(a), y^*(a)) = a$ for all $a$,
$$\frac{\partial h}{\partial x}\bigl(x^*(a), y^*(a)\bigr)\,\frac{dx^*}{da}(a) + \frac{\partial h}{\partial y}\bigl(x^*(a), y^*(a)\bigr)\,\frac{dy^*}{da}(a) = 1 \tag{5}$$
for every $a$. Therefore, using the Chain Rule and equations (4) and (5),
$$\begin{aligned}
\frac{d}{da} f\bigl(x^*(a), y^*(a)\bigr)
&= \frac{\partial f}{\partial x}\bigl(x^*(a), y^*(a)\bigr)\,\frac{dx^*}{da}(a) + \frac{\partial f}{\partial y}\bigl(x^*(a), y^*(a)\bigr)\,\frac{dy^*}{da}(a)\\
&= \mu^*(a)\,\frac{\partial h}{\partial x}\bigl(x^*(a), y^*(a)\bigr)\,\frac{dx^*}{da}(a) + \mu^*(a)\,\frac{\partial h}{\partial y}\bigl(x^*(a), y^*(a)\bigr)\,\frac{dy^*}{da}(a)\\
&= \mu^*(a)\left[\frac{\partial h}{\partial x}\bigl(x^*(a), y^*(a)\bigr)\,\frac{dx^*}{da}(a) + \frac{\partial h}{\partial y}\bigl(x^*(a), y^*(a)\bigr)\,\frac{dy^*}{da}(a)\right]\\
&= \mu^*(a). \qquad \blacksquare
\end{aligned}$$

Example 19.1  In Example 18.5, we found that a maximizer of $f(x_1, x_2) = x_1^2 x_2$ on the constraint set $2x_1^2 + x_2^2 = 3$ is $x_1 = 1$, $x_2 = 1$, with multiplier $\mu = 0.5$. The maximum value of $f$ is $f^* = f(1, 1) = 1$. Redo the problem, this time using the constraint $2x_1^2 + x_2^2 = 3.3$. The same computation as in Example 18.5 yields the solution $x_1 = x_2 = \sqrt{1.1}$, with maximum value $f^* = (1.1)^{3/2} \approx 1.1537$, an increase of $0.1537$ over the original $f^*$. On the other hand, Theorem 19.1 predicts that changing the right-hand side of the constraint by $0.3$ unit would change the maximum value of the objective function by roughly $0.3 \cdot \mu = 0.3 \cdot 0.5 = 0.15$ unit, an approximation correct to two decimal places.
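Example 19.1 is easy to confirm with an off-the-shelf numerical solver. The sketch below is purely illustrative: it assumes SciPy's SLSQP routine and an arbitrary starting point, solves the problem for $a = 3$ and $a = 3.3$, and compares the actual change in the optimal value with the multiplier-based estimate $\mu \cdot \Delta a$.

```python
# Numerical check of Example 19.1: maximize f(x1, x2) = x1^2 * x2
# subject to 2*x1^2 + x2^2 = a, for a = 3 and a = 3.3.
# Sketch using SciPy's SLSQP solver; solver and starting point are
# illustrative choices, not part of the text.
import numpy as np
from scipy.optimize import minimize

def max_value(a):
    obj = lambda x: -(x[0] ** 2 * x[1])          # minimize -f to maximize f
    con = {"type": "eq", "fun": lambda x: 2 * x[0] ** 2 + x[1] ** 2 - a}
    res = minimize(obj, x0=[0.9, 0.9], constraints=[con], method="SLSQP")
    return -res.fun

f3, f33 = max_value(3.0), max_value(3.3)
print(f33 - f3)          # actual change in f*: about 0.1537
print(0.3 * 0.5)         # multiplier-based estimate mu * delta a = 0.15
```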
Several Equality Constraints

The statement and proof of the natural generalization of Theorem 19.1 to several variables and several equality constraints is straightforward.

Theorem 19.2  Let $f, h_1, \ldots, h_m$ be $C^1$ functions on $\mathbf{R}^n$. Let $a = (a_1, \ldots, a_m)$ be an $m$-tuple of exogenous parameters, and consider the problem $(P_a)$ of maximizing $f(x_1, \ldots, x_n)$ subject to the constraints
$$h_1(x_1, \ldots, x_n) = a_1, \quad \ldots, \quad h_m(x_1, \ldots, x_n) = a_m.$$
Let $x_1^*(a), \ldots, x_n^*(a)$ denote the solution of problem $(P_a)$, with corresponding Lagrange multipliers $\mu_1^*(a), \ldots, \mu_m^*(a)$. Suppose further that the $x_i^*$'s and $\mu_j^*$'s are differentiable functions of $(a_1, \ldots, a_m)$ and that NDCQ holds. Then, for each $j = 1, \ldots, m$,
$$\mu_j^*(a_1, \ldots, a_m) = \frac{\partial}{\partial a_j}\, f\bigl(x^*(a_1, \ldots, a_m)\bigr). \tag{6}$$

Inequality Constraints

Theorems 19.1 and 19.2 hold equally well for inequality constraints, as the following theorem indicates. For ease of exposition, we will assume that all the constraints under consideration are inequality constraints. The statement of the corresponding result for mixed constraints is straightforward.

Theorem 19.3  Let $a^* = (a_1^*, \ldots, a_k^*)$ be a $k$-tuple. Consider the problem $(Q_{a^*})$ of maximizing $f(x_1, \ldots, x_n)$ subject to the $k$ inequality constraints
$$g_1(x_1, \ldots, x_n) \le a_1, \quad \ldots, \quad g_k(x_1, \ldots, x_n) \le a_k. \tag{7}$$
Let $x_1^*(a^*), \ldots, x_n^*(a^*)$ denote the solution of problem $(Q_{a^*})$, and let $\lambda_1^*(a^*), \ldots, \lambda_k^*(a^*)$ be the corresponding Lagrange multipliers. Suppose that, as $a$ varies near $a^*$, $x_1^*, \ldots, x_n^*$ and $\lambda_1^*, \ldots, \lambda_k^*$ are differentiable functions of $(a_1, \ldots, a_k)$ and that the NDCQ holds at $a^*$. Then, for each $j = 1, \ldots, k$,
$$\lambda_j^*(a_1^*, \ldots, a_k^*) = \frac{\partial}{\partial a_j}\, f\bigl(x^*(a^*)\bigr). \tag{8}$$

Proof (Sketch)  For ease of notation, we write $a^*$ simply as $a$. As usual, we break the inequality constraints into two groups: the binding ones and the nonbinding ones. The binding constraints can be treated as equality constraints, and so we apply Theorem 19.2 to them. Let $g_i$ be the constraint function for one of the nonbinding constraints: $g_i(x^*(a)) < a_i$. Let $C$ be the constraint set described by the inequalities (7). Let $\bar a_i$ be any number such that $g_i(x^*(a)) < \bar a_i < a_i$. As the right-hand side of the $i$th constraint varies between $\bar a_i$ and $a_i$, that constraint remains nonbinding and the solution $x^*(a)$ does not change, so $\partial f(x^*(a))/\partial a_i = 0$. Since the multiplier of a nonbinding constraint is zero, $\lambda_i^*(a) = 0$ as well, and (8) holds for the nonbinding constraints too. ■

19.2 ENVELOPE THEOREMS

The results of the last section are instances of a general principle: the effect of a small change in a parameter on the optimal value of the objective function can be computed without working out how the change affects the maximizer itself. Results of this type are called envelope theorems.

Unconstrained Problems

Consider first a problem in which the parameter enters only the objective function: for each value of a parameter $a$, maximize $f(x; a)$ over $x$ in $\mathbf{R}^n$, and write $x^*(a)$ for a solution of this problem.

Theorem 19.4  Let $f(x; a)$ be a $C^1$ function of $x \in \mathbf{R}^n$ and the scalar $a$. For each choice of the parameter $a$, consider the unconstrained problem of maximizing $f(x; a)$ with respect to $x$. Let $x^*(a)$ be a solution of this problem, and suppose that $x^*(a)$ is a $C^1$ function of $a$. Then,
$$\frac{d}{da}\, f\bigl(x^*(a); a\bigr) = \frac{\partial f}{\partial a}\bigl(x^*(a); a\bigr). \tag{10}$$

Note that the left-hand side of (10) is a total derivative while the right-hand side is a partial derivative: to estimate the effect of a change in $a$ on the optimal value of $f$, we need only differentiate $f$ with respect to $a$, holding the maximizer fixed.

For an economic illustration, consider a firm that produces microchips. It can produce $y$ chips at a cost of $c(y)$ dollars, where $c'(y) > 0$ and $c''(y) > 0$. Of the chips it produces, a fraction $1 - a$ are unavoidably defective and cannot be sold. Working chips can be sold at a price $p$, and the microchip market is highly competitive. How will an increase in production quality affect the firm's profit? The firm's profit function is
$$\pi(a) = \max_y\,\bigl[p\,a\,y - c(y)\bigr],$$
where $\max_y$ means the maximum value as $y$ varies. The conditions on the cost function guarantee that there is a nonzero profit-maximizing output $y^*(a)$ which depends smoothly on $a$. The derivative of optimal profit $\pi$ with respect to $a$ is
$$\frac{d\pi}{da} = \frac{\partial}{\partial a}\bigl(p\,a\,y - c(y)\bigr)\Big|_{y = y^*(a)} = p\,y^*(a) > 0.$$
As one would suspect, increasing the fraction of nondefective chips will increase the firm's profit. Once again, we were able to determine this without actually solving for the optimal output.
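The envelope calculation for the microchip firm is easy to check numerically once a specific cost function is chosen. The sketch below assumes $c(y) = y^2$ and $p = 10$, values used purely for illustration (the argument above needs only $c' > 0$ and $c'' > 0$), and compares a finite-difference derivative of maximized profit with $p\,y^*(a)$.

```python
# Envelope-theorem check for the microchip firm: profit(a) = max_y [p*a*y - c(y)].
# The cost function c(y) = y**2 and the price p = 10 are illustrative assumptions.
from scipy.optimize import minimize_scalar

p = 10.0

def c(y):
    return y ** 2

def y_star(a):
    # profit-maximizing output for quality level a
    return minimize_scalar(lambda y: -(p * a * y - c(y)),
                           bounds=(0, 100), method="bounded").x

def profit(a):
    y = y_star(a)
    return p * a * y - c(y)

a, da = 0.9, 1e-4
numeric = (profit(a + da) - profit(a - da)) / (2 * da)   # d(pi)/da by central differences
envelope = p * y_star(a)                                  # Theorem 19.4: p * y*(a)
print(numeric, envelope)                                  # both approximately 45
```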
Of course, Theorem 19.4 generalizes easily to the case where there is more than one parameter. One works with one parameter at a time and finds that
$$\frac{\partial}{\partial a_j}\, f\bigl(x^*(a_1, \ldots, a_m); a_1, \ldots, a_m\bigr) = \frac{\partial f}{\partial a_j}\bigl(x^*(a_1, \ldots, a_m); a_1, \ldots, a_m\bigr). \tag{11}$$

Constrained Problems

The most general envelope theorem deals with constrained problems in which there are parameters in both the objective function and the constraints. For example, consider the problem of maximizing $f(x; a)$ subject to the constraints
$$h_1(x; a) = 0, \quad \ldots, \quad h_k(x; a) = 0.$$
If $f$ does not depend on $a$ and if each $h_i(x; a)$ can be written as $h_i(x) - a_i$, then we are back to the situation in Theorem 19.2. So the case under consideration is more general than the other two cases we have looked at. However, the answer is nearly as straightforward. As the next theorem indicates, the rate of change of $f(x^*(a); a)$ with respect to $a$ equals the partial derivative with respect to $a$, not of $f$, but of the corresponding Lagrangian function.

Theorem 19.5  Let $f, h_1, \ldots, h_k \colon \mathbf{R}^n \times \mathbf{R}^1 \to \mathbf{R}^1$ be $C^1$ functions. Let $x^*(a) = (x_1^*(a), \ldots, x_n^*(a))$ denote the solution of the problem of maximizing $x \mapsto f(x; a)$ on the constraint set
$$h_1(x; a) = 0, \quad \ldots, \quad h_k(x; a) = 0,$$
for any fixed choice of the parameter $a$. Suppose that $x^*(a)$ and the Lagrange multipliers $\mu_1(a), \ldots, \mu_k(a)$ are $C^1$ functions of $a$ and that the NDCQ holds. Then,
$$\frac{d}{da}\, f\bigl(x^*(a); a\bigr) = \frac{\partial L}{\partial a}\bigl(x^*(a), \mu(a); a\bigr), \tag{12}$$
where $L$ is the natural Lagrangian for this problem.

Note, as in expression (10), that the left-hand side of (12) is a total derivative while the right-hand side is a partial derivative. The proof of Theorem 19.5 is similar to the proofs of Theorems 19.1 and 19.4 and will be left as an exercise.

Example 19.6  Change the constraint in Example 18.7 from $x^2 + y^2 = 1$ to $x^2 + 1.1y^2 = 1$, keeping the objective function $f(x, y) = xy$. If we write both constraints as $x^2 + ay^2 = 1$, the Lagrangian for the parameterized problem is
$$L(x, y, \lambda; a) = xy - \lambda\bigl(x^2 + ay^2 - 1\bigr).$$
The solution for the original ($a = 1$) problem was $x = y = 1/\sqrt 2$, $\lambda = 1/2$. The Envelope Theorem tells us that as $a$ changes from 1 to 1.1, the optimal value of $f$ changes by approximately $0.1 \cdot \dfrac{\partial L}{\partial a}(x^*, y^*, \lambda^*; 1)$. Since
$$\frac{\partial L}{\partial a}(x^*, y^*, \lambda^*; 1) = -\lambda^*\,(y^*)^2 = -\frac{1}{2} \cdot \frac{1}{2} = -\frac{1}{4},$$
the optimal value will decrease by approximately $0.1/4 = 0.025$, to $0.475$. One can calculate directly that the solution to the new problem is $x = 1/\sqrt 2$, $y = 1/\sqrt{2.2}$, with maximum objective value of $f$ approximately equal to $0.4767$.
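The estimate in Example 19.6 can be verified with the same kind of numerical sketch as before; the solver and starting point below are illustrative choices, not part of the example.

```python
# Check of Example 19.6: maximize f(x, y) = x*y subject to x^2 + a*y^2 = 1.
# Sketch using SciPy's SLSQP; starting point chosen near the known solution.
import numpy as np
from scipy.optimize import minimize

def max_xy(a):
    con = {"type": "eq", "fun": lambda v: v[0] ** 2 + a * v[1] ** 2 - 1.0}
    res = minimize(lambda v: -v[0] * v[1], x0=[0.7, 0.7],
                   constraints=[con], method="SLSQP")
    return -res.fun

actual_change = max_xy(1.1) - max_xy(1.0)        # about -0.0233
lam, y = 0.5, 1 / np.sqrt(2)                     # lambda* and y* at a = 1
envelope_estimate = -lam * y ** 2 * 0.1          # (dL/da) * da = -0.025
print(actual_change, envelope_estimate)
```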
All the theorems in the last two sections had two basic hypotheses: the smooth dependence of the maximizers and multipliers on the parameters, and the nondegenerate constraint qualification (NDCQ). In Section 19.4, we will look at these hypotheses more carefully and restate them in terms of properties of the objective and constraint functions of the problem.

EXERCISES

19.10  Write out a careful proof of Theorem 19.5.

19.11  Recover the statement of the Lagrange Multiplier Theorem (Theorem 19.1) from the statement of Theorem 19.5.

19.12  Use Exercise 18.2 and the Envelope Theorem to estimate the maximum and minimum distance from the origin to the ellipse $x^2 + xy + 0.9y^2 = 3$.

19.13  Use Example 18.13 and the Envelope Theorem to estimate the maximum value of $x^2 + x + 4.1y^2$ on the constraint set $2x + 2y \le 1$, $x \ge 0$, $y \ge 0$.

19.3 SECOND ORDER CONDITIONS

In the analysis of an economic model, the first order conditions for a maximization problem often yield an economic principle. The corresponding second order conditions provide some fine tuning on these principles. For example, as we noted in Section 3.6, for a firm in a competitive industry, the first order condition for profit maximization implies that marginal revenue equals marginal cost at the profit-maximizing output. The second order condition for profit maximization requires that at the profit-maximizing output the firm must be experiencing increasing marginal cost, as illustrated in Figure 19.1. From a computational point of view, the second order condition can often help choose a maximizer from the set of candidates which satisfy the first order conditions. For example, second order conditions would rule out $q = q_1$ as a profit-maximizing output in Figure 19.1.

[Figure 19.1  MC versus price for a competitive industry: the profit maximizer lies where the price line crosses the increasing portion of the marginal cost curve.]

Furthermore, the second order conditions in a maximization problem play a role in the comparative statics analysis of the solution of that problem. As just mentioned, the first order conditions describe the relationship that must occur between the exogenous variables and the endogenous variables at the optimizing solution. In comparative statics or sensitivity analysis, we ask how changes in the exogenous variables affect the optimal values of the endogenous variables. To answer this question, we call on the Implicit Function Theorem and compute total differentials of the first order conditions. We are then led naturally to working with a linear system of equations whose coefficient matrix is the matrix of the second order conditions of the original maximization problem. The sign of the determinant of that matrix becomes important, for example, when one uses Cramer's rule to solve the resulting system for the differentials of the endogenous variables. The next section illustrates this use of the second order conditions and relates this approach to the theorems of the first two sections.

In Section 17.3, we saw that the second order condition for maximizing an unconstrained function $f(x_1, \ldots, x_n)$ of $n$ variables is that the Hessian of $f$ at the maximizer $x^*$,
$$D^2 f(x^*) = \begin{pmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{pmatrix},$$
be negative definite. More precisely, at a maximum $x^*$, $Df(x^*)$ must be zero and $D^2 f(x^*)$ must be negative semidefinite (necessary conditions):
$$v^T\bigl(D^2 f(x^*)\bigr)v \le 0 \quad \text{for all vectors } v.$$
To guarantee that a point $x^*$ is a local maximizer, we need $Df(x^*) = 0$ and $D^2 f(x^*)$ negative definite (sufficient conditions):
$$v^T\bigl(D^2 f(x^*)\bigr)v < 0 \quad \text{for all nonzero vectors } v.$$

In this section we will focus on the sufficient second order conditions for constrained maximization and minimization problems. Just as the second order conditions for unconstrained problems lead to consideration of the definiteness of certain quadratic forms, the second order conditions for constrained problems are closely related to the definiteness of a quadratic form restricted to a linear subspace, material covered in Section 16.3.

Constrained Maximization Problems

Intuitively, the second order condition for a constrained maximization problem: (1) should involve the negative definiteness of some Hessian matrix, but (2) should only be concerned with directions along the constraint set. For example, suppose that the objective function is a quadratic function
$$Q(x) = x^T H x = \sum_{i,j} h_{ij}\,x_i x_j$$
for some symmetric matrix $H = (h_{ij})$ and that the constraint set is defined by the system of linear equations $Ax = 0$. Since $0$ is in the constraint set and since it is a critical point of $Q$, it is natural to ask whether $0$ is the constrained max. Analytically, we want to know whether
$$0 \ge x^T H x \quad \text{for all } x \text{ such that } Ax = 0.$$
As we saw in Chapter 16, we are asking whether $Q$ is negative definite on the constraint set $Ax = 0$.
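This restricted-definiteness question can also be settled directly by restricting $H$ to a basis of the nullspace of $A$ and inspecting eigenvalues; the bordered-matrix test developed next packages the same information in terms of determinants. The matrices in the sketch below are arbitrary, chosen only to illustrate the computation.

```python
# Direct check of whether Q(x) = x'Hx is negative definite on {x : Ax = 0}:
# restrict H to a basis Z of the nullspace of A and inspect the eigenvalues
# of Z'HZ.  The matrices H and A below are arbitrary illustrations.
import numpy as np
from scipy.linalg import null_space

H = np.array([[-2.0, 1.0,  0.0],
              [ 1.0, 0.0,  1.0],
              [ 0.0, 1.0, -3.0]])
A = np.array([[1.0, 1.0, 1.0]])       # one linear constraint, so k = 1, n = 3

Z = null_space(A)                     # columns span {x : Ax = 0}
restricted = Z.T @ H @ Z              # (n - k) x (n - k) matrix
eigenvalues = np.linalg.eigvalsh(restricted)
print(eigenvalues, np.all(eigenvalues < 0))   # all negative <=> Q negative definite on Ax = 0
```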
The algebraic condition for determining whether or not $x^T H x$ is negative on the linear constraint set $Ax = 0$ involves the signs of certain leading principal minors of the bordered matrix
$$\begin{pmatrix} 0 & A \\ A^T & H \end{pmatrix}.$$
For the problem of maximizing a general $f(x)$ subject to the possibly nonlinear equality constraints $h_1(x) = c_1, \ldots, h_k(x) = c_k$, the first order conditions entail finding the critical points of the Lagrangian function
$$L(x_1, \ldots, x_n, \mu_1, \ldots, \mu_k) = f(x) - \mu_1\bigl(h_1(x) - c_1\bigr) - \cdots - \mu_k\bigl(h_k(x) - c_k\bigr). \tag{13}$$
Let $(x^*, \mu^*)$ be a critical point of $L$. We expect that the second order condition involves the negative definiteness of a quadratic form along a linear constraint set. A natural candidate for the quadratic form is the Hessian of the Lagrangian function with respect to $x_1, \ldots, x_n$. The natural linear constraint set for this problem is the hyperplane which is tangent to the constraint set $\{x \in \mathbf{R}^n : h(x) = c\}$ at the point $x^*$. The following theorem states that these natural candidates do indeed yield the proper second order sufficient conditions for a constrained max. First, recall (Theorem 15.6) that the tangent space to $\{h(x) = c\}$ at the point $x^*$ is the set of vectors $v$ such that $Dh(x^*)v = 0$ (considered as vectors with their tails at $x^*$). To see this, let $x(t)$ be a curve at $x^*$ on this constraint set with tangent vector $v$, so that $x(0) = x^*$, $x'(0) = v$, and $h(x(t)) = c$. Then, by the Chain Rule,
$$0 = \frac{d}{dt}\,h\bigl(x(t)\bigr)\Big|_{t=0} = Dh\bigl(x(0)\bigr)\cdot x'(0) = Dh(x^*)\cdot v;$$
in words, the tangent vector $v$ to any curve in the constraint set $\{h(x) = c\}$ satisfies $Dh(x^*)v = 0$.

Theorem 19.6  Let $f, h_1, \ldots, h_k$ be $C^2$ functions on $\mathbf{R}^n$. Consider the problem of maximizing $f$ on the constraint set
$$C_h = \{x : h_1(x) = c_1, \ldots, h_k(x) = c_k\}.$$
Form the Lagrangian (13), and suppose that:
(a) $x^*$ lies in the constraint set $C_h$,
(b) there exist $\mu_1^*, \ldots, \mu_k^*$ such that $(x^*, \mu^*) = (x^*, \mu_1^*, \ldots, \mu_k^*)$ is a critical point of $L$; that is,
$$\frac{\partial L}{\partial x_1}(x^*, \mu^*) = \cdots = \frac{\partial L}{\partial x_n}(x^*, \mu^*) = 0,$$
(c) the Hessian of $L$ with respect to $x$ at $(x^*, \mu^*)$, $D^2_x L(x^*, \mu^*)$, is negative definite on the linear constraint set $\{v : Dh(x^*)v = 0\}$; that is,
$$v \ne 0 \ \text{and}\ Dh(x^*)v = 0 \;\Longrightarrow\; v^T\bigl(D^2_x L(x^*, \mu^*)\bigr)v < 0. \tag{14}$$
Then, $x^*$ is a strict local constrained max of $f$ on $C_h$.

In Section 16.3 we learned the condition on bordered matrices for verifying second order condition (14). Border the $n \times n$ Hessian $D^2_x L(x^*, \mu^*)$ with the $k \times n$ constraint matrix $Dh(x^*)$:
$$H \equiv \begin{pmatrix} 0 & Dh(x^*) \\ Dh(x^*)^T & D^2_x L(x^*, \mu^*) \end{pmatrix} =
\begin{pmatrix}
0 & \cdots & 0 & \dfrac{\partial h_1}{\partial x_1} & \cdots & \dfrac{\partial h_1}{\partial x_n} \\
\vdots & & \vdots & \vdots & & \vdots \\
0 & \cdots & 0 & \dfrac{\partial h_k}{\partial x_1} & \cdots & \dfrac{\partial h_k}{\partial x_n} \\
\dfrac{\partial h_1}{\partial x_1} & \cdots & \dfrac{\partial h_k}{\partial x_1} & \dfrac{\partial^2 L}{\partial x_1^2} & \cdots & \dfrac{\partial^2 L}{\partial x_1\,\partial x_n} \\
\vdots & & \vdots & \vdots & & \vdots \\
\dfrac{\partial h_1}{\partial x_n} & \cdots & \dfrac{\partial h_k}{\partial x_n} & \dfrac{\partial^2 L}{\partial x_n\,\partial x_1} & \cdots & \dfrac{\partial^2 L}{\partial x_n^2}
\end{pmatrix}. \tag{15}$$
If the last $(n - k)$ leading principal minors of matrix (15) alternate in sign, with the sign of the determinant of the $(k+n) \times (k+n)$ matrix $H$ in (15) the same as the sign of $(-1)^n$, then condition (c) of Theorem 19.6 holds.

As one more indication of the naturalness of the statement of Theorem 19.6, note that the Hessian of the Lagrangian (13) with respect to all $(n+k)$ variables $\mu_1, \ldots, \mu_k, x_1, \ldots, x_n$ is
$$D^2_{(\mu,x)} L =
\begin{pmatrix}
0 & \cdots & 0 & -\dfrac{\partial h_1}{\partial x_1} & \cdots & -\dfrac{\partial h_1}{\partial x_n} \\
\vdots & & \vdots & \vdots & & \vdots \\
0 & \cdots & 0 & -\dfrac{\partial h_k}{\partial x_1} & \cdots & -\dfrac{\partial h_k}{\partial x_n} \\
-\dfrac{\partial h_1}{\partial x_1} & \cdots & -\dfrac{\partial h_k}{\partial x_1} & \dfrac{\partial^2 L}{\partial x_1^2} & \cdots & \dfrac{\partial^2 L}{\partial x_1\,\partial x_n} \\
\vdots & & \vdots & \vdots & & \vdots \\
-\dfrac{\partial h_1}{\partial x_n} & \cdots & -\dfrac{\partial h_k}{\partial x_n} & \dfrac{\partial^2 L}{\partial x_n\,\partial x_1} & \cdots & \dfrac{\partial^2 L}{\partial x_n^2}
\end{pmatrix}. \tag{16}$$
Since $\partial^2 L/\partial \mu_j\,\partial x_i = -\partial h_j/\partial x_i$, the borders in (16) are the negatives of those in (15). If we multiply each of the last $n$ rows and each of the last $n$ columns in (16) by $-1$, we will not change the sign of $\det D^2_{(\mu,x)} L$ or of any of its principal minors, since this process involves an even number of multiplications by $-1$ in every case. The result is the bordered Hessian (15). So the bordered Hessian $H$ in (15) has the same principal minors as the full Hessian (16) of the Lagrangian $L$.
Recall, however, that the second order condition for the constrained maximization problem involves checking only the last $n - k$ of the $n + k$ leading principal minors of $D^2_{(\mu,x)} L$.

Let's work out the proof of Theorem 19.6 for the simplest constrained maximization problem: two variables and one equality constraint. The proof for the general case is presented in Chapter 30.

Theorem 19.7  Let $f$ and $h$ be $C^2$ functions on $\mathbf{R}^2$. Consider the problem of maximizing $f$ on the constraint set $C_h = \{(x, y) : h(x, y) = c\}$. Form the Lagrangian
$$L(x, y, \mu) = f(x, y) - \mu\bigl(h(x, y) - c\bigr).$$
Suppose that $(x^*, y^*, \mu^*)$ satisfies:
(a) $\dfrac{\partial L}{\partial x} = 0$, $\dfrac{\partial L}{\partial y} = 0$, and $\dfrac{\partial L}{\partial \mu} = 0$ at $(x^*, y^*, \mu^*)$, and
(b) $\det\begin{pmatrix} 0 & \dfrac{\partial h}{\partial x} & \dfrac{\partial h}{\partial y} \\[4pt] \dfrac{\partial h}{\partial x} & \dfrac{\partial^2 L}{\partial x^2} & \dfrac{\partial^2 L}{\partial x\,\partial y} \\[4pt] \dfrac{\partial h}{\partial y} & \dfrac{\partial^2 L}{\partial y\,\partial x} & \dfrac{\partial^2 L}{\partial y^2} \end{pmatrix} > 0$ at $(x^*, y^*, \mu^*)$.
Then, $(x^*, y^*)$ is a local max of $f$ on $C_h$.

Proof  Condition b implies that
$$\frac{\partial h}{\partial x}(x^*, y^*) \ne 0 \quad \text{or} \quad \frac{\partial h}{\partial y}(x^*, y^*) \ne 0.$$
We assume that $(\partial h/\partial y)(x^*, y^*) \ne 0$, without loss of generality. Then, by the Implicit Function Theorem (Theorem 15.1), the constraint set $C_h$ can be considered as the graph of a $C^1$ function $y = \varphi(x)$ around $(x^*, y^*)$; in other words,
$$h\bigl(x, \varphi(x)\bigr) = c \quad \text{for all } x \text{ near } x^*. \tag{17}$$
Differentiating (17) yields
$$\frac{\partial h}{\partial x}\bigl(x, \varphi(x)\bigr) + \frac{\partial h}{\partial y}\bigl(x, \varphi(x)\bigr)\,\varphi'(x) = 0, \tag{18}$$
or
$$\varphi'(x) = -\,\frac{\partial h/\partial x\,\bigl(x, \varphi(x)\bigr)}{\partial h/\partial y\,\bigl(x, \varphi(x)\bigr)}. \tag{19}$$
Let
$$F(x) = f\bigl(x, \varphi(x)\bigr) \tag{20}$$
be $f$ evaluated on $C_h$, a function of one unconstrained variable. By the usual first and second order conditions for such functions, if $F'(x^*) = 0$ and $F''(x^*) < 0$, then $x^*$ will be a strict local max of $F$ and $(x^*, y^*) = (x^*, \varphi(x^*))$ will be a local constrained max of $f$. So, we compute $F'(x^*)$ and $F''(x^*)$. Now,
$$F'(x) = \frac{\partial f}{\partial x}\bigl(x, \varphi(x)\bigr) + \frac{\partial f}{\partial y}\bigl(x, \varphi(x)\bigr)\,\varphi'(x). \tag{21}$$
Multiply equation (18) by $-\mu^*$ and add it to (21), evaluating both at $x = x^*$:
$$\begin{aligned}
F'(x^*) &= \left(\frac{\partial f}{\partial x}(x^*, y^*) - \mu^*\frac{\partial h}{\partial x}(x^*, y^*)\right) + \varphi'(x^*)\left(\frac{\partial f}{\partial y}(x^*, y^*) - \mu^*\frac{\partial h}{\partial y}(x^*, y^*)\right)\\
&= \frac{\partial L}{\partial x}(x^*, y^*, \mu^*) + \varphi'(x^*)\,\frac{\partial L}{\partial y}(x^*, y^*, \mu^*).
\end{aligned} \tag{22}$$
By hypothesis a of the theorem, $F'(x^*) = 0$. Now, take another derivative of $F'(x)$ at $x^*$, setting $y^* = \varphi(x^*)$ in (22); the term involving $\varphi''$ drops out because $\partial L/\partial y = 0$ at $(x^*, y^*, \mu^*)$:
$$\begin{aligned}
F''(x^*) &= \frac{\partial^2 L}{\partial x^2} + 2\,\frac{\partial^2 L}{\partial x\,\partial y}\,\varphi'(x^*) + \frac{\partial^2 L}{\partial y^2}\,\bigl(\varphi'(x^*)\bigr)^2\\
&= \frac{\partial^2 L}{\partial x^2} + 2\,\frac{\partial^2 L}{\partial x\,\partial y}\left(-\frac{\partial h/\partial x}{\partial h/\partial y}\right) + \frac{\partial^2 L}{\partial y^2}\left(-\frac{\partial h/\partial x}{\partial h/\partial y}\right)^{\!2}\\
&= \frac{1}{(\partial h/\partial y)^2}\left[\frac{\partial^2 L}{\partial x^2}\left(\frac{\partial h}{\partial y}\right)^{\!2} - 2\,\frac{\partial^2 L}{\partial x\,\partial y}\,\frac{\partial h}{\partial x}\,\frac{\partial h}{\partial y} + \frac{\partial^2 L}{\partial y^2}\left(\frac{\partial h}{\partial x}\right)^{\!2}\right],
\end{aligned}$$
which is negative, by hypothesis b of the theorem: expanding the determinant in hypothesis b along its first row shows that the bracketed expression is exactly the negative of that determinant. Since $F'(x^*) = 0$ and $F''(x^*) < 0$, $x \mapsto F(x) = f(x, \varphi(x))$ has a local max at $x^*$, and therefore $f$ restricted to $C_h$ has a local max at $(x^*, y^*)$. ■

Minimization Problems

Remark  We have been concentrating on constrained maximization problems up to this point. The second order conditions for a constrained minimization problem involve the positive definiteness of $D^2_x L(x^*, \mu^*)$ on the nullspace of $Dh(x^*)$, replacing (14) by
$$v \ne 0 \ \text{and}\ Dh(x^*)v = 0 \;\Longrightarrow\; v^T\bigl(D^2_x L(x^*, \mu^*)\bigr)v > 0 \tag{23}$$
in the statement of Theorem 19.6. By our discussion in Section 16.3, the bordered Hessian conditions for (23) are that the last $(n - k)$ leading principal minors of (15) all have the same sign as $(-1)^k$, where $k$ is the number of constraints. For the case $n = 2$ and $k = 1$ in Theorem 19.7, this positive definiteness condition requires that the determinant in condition b of Theorem 19.7 be negative.

Example 19.7  In Example 18.5, we considered the problem of maximizing $f(x_1, x_2) = x_1^2 x_2$ on the constraint set $h(x_1, x_2) = 2x_1^2 + x_2^2 = 3$. There, we found six solutions $(x_1, x_2, \mu)$ of the first order conditions (18.11):
$$(0, \pm\sqrt 3, 0), \qquad (\pm 1, 1, 0.5), \qquad (\pm 1, -1, -0.5).$$
Let us use second order conditions to decide which of these points are local maxima and which are local minima. Differentiate the first order conditions (18.11) once again to obtain the general bordered Hessian
$$H = \begin{pmatrix} 0 & h_{x_1} & h_{x_2} \\ h_{x_1} & L_{x_1 x_1} & L_{x_1 x_2} \\ h_{x_2} & L_{x_2 x_1} & L_{x_2 x_2} \end{pmatrix}
= \begin{pmatrix} 0 & 4x_1 & 2x_2 \\ 4x_1 & 2x_2 - 4\mu & 2x_1 \\ 2x_2 & 2x_1 & -2\mu \end{pmatrix}.$$
This problem has $n = 2$ variables and $k = 1$ equality constraint.
As Theorem 19.7 indicates, we need only check the sign of $n - k = 1$ determinant, the determinant of $H$ itself. If $\det H$ has the same sign as $(-1)^n = +1$, that is, if $\det H > 0$, at a candidate point, that point is a local max. If $\det H$ has the same sign as $(-1)^k = -1$, that is, if $\det H < 0$, at the candidate point, that point is a local min. At the points $(\pm 1, -1, -0.5)$,
$$H = \begin{pmatrix} 0 & \pm 4 & -2 \\ \pm 4 & 0 & \pm 2 \\ -2 & \pm 2 & 1 \end{pmatrix}.$$
In either case, $\det H = -48$; so these two points are local minima. At the points $(\pm 1, 1, 0.5)$,
$$H = \begin{pmatrix} 0 & \pm 4 & 2 \\ \pm 4 & 0 & \pm 2 \\ 2 & \pm 2 & -1 \end{pmatrix}.$$
In either case, $\det H = +48$; so these two points are local maxima. These computations support the observations we made at the end of Example 18.5. However, we were not able to determine the character of $(x_1, x_2) = (0, \pm\sqrt 3)$ by simply plugging these points into the objective function in Example 18.5. Since $\mu = 0$ for these points, the corresponding bordered Hessian is
$$H = \begin{pmatrix} 0 & 0 & \pm 2\sqrt 3 \\ 0 & \pm 2\sqrt 3 & 0 \\ \pm 2\sqrt 3 & 0 & 0 \end{pmatrix}.$$
For $(x_1, x_2) = (0, +\sqrt 3)$, $\det H = -24\sqrt 3 < 0$; this point is a local min. For $(x_1, x_2) = (0, -\sqrt 3)$, $\det H = +24\sqrt 3 > 0$; this point is a local max. These calculations, which should agree with the conclusions of the geometric approach in Exercise 18.1, illustrate that extrema computed via the first and second order conditions of Theorem 19.6 need not be global extrema.

Example 19.8  Consider the problem of maximizing $x^2 y^2 z^2$ subject to the constraint $x^2 + y^2 + z^2 = 3$, a special case of Exercise 18.9. The first order conditions are
$$\frac{\partial L}{\partial x} = 2xy^2z^2 - 2\mu x = 0, \qquad
\frac{\partial L}{\partial y} = 2x^2yz^2 - 2\mu y = 0, \qquad
\frac{\partial L}{\partial z} = 2x^2y^2z - 2\mu z = 0, \qquad
\frac{\partial L}{\partial \mu} = -\bigl(x^2 + y^2 + z^2 - 3\bigr) = 0,$$
with solution $x = y = z = \mu = 1$. The bordered Hessian for this problem is
$$H = \begin{pmatrix}
0 & 2x & 2y & 2z \\
2x & 2y^2z^2 - 2\mu & 4xyz^2 & 4xy^2z \\
2y & 4xyz^2 & 2x^2z^2 - 2\mu & 4x^2yz \\
2z & 4xy^2z & 4x^2yz & 2x^2y^2 - 2\mu
\end{pmatrix}.$$
At $x = y = z = \mu = 1$, the bordered Hessian becomes
$$H_4 = \begin{pmatrix} 0 & 2 & 2 & 2 \\ 2 & 0 & 4 & 4 \\ 2 & 4 & 0 & 4 \\ 2 & 4 & 4 & 0 \end{pmatrix}. \tag{24}$$
Since $n = 3$ and $k = 1$, we have to check the signs of two leading principal minors: the leading $3 \times 3$ submatrix $H_3$ of (24) and the complete $4 \times 4$ matrix $H_4$ in (24). One computes that $\det H_3 = 32$ and $\det H_4 = -192$. Since these determinants alternate in sign and since the sign of $\det H_4$ is the sign of $(-1)^3 = -1$, the candidate $x = y = z = 1$ is indeed a local constrained max by Theorem 19.6.
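The minor test in Example 19.8 is easy to mechanize. The sketch below rebuilds matrix (24) with NumPy and checks the signs of the last $n - k$ leading principal minors against the pattern required by Theorem 19.6; the index arithmetic in the comments spells out which minors those are.

```python
# Bordered-Hessian check for Example 19.8 at the candidate x = y = z = mu = 1.
# Builds matrix (24) and inspects the last n - k leading principal minors.
import numpy as np

n, k = 3, 1
H = np.array([[0., 2., 2., 2.],
              [2., 0., 4., 4.],
              [2., 4., 0., 4.],
              [2., 4., 4., 0.]])

# The last n - k leading principal minors have orders 2k+1, ..., n+k (here 3 and 4).
minors = [np.linalg.det(H[:m, :m]) for m in range(2 * k + 1, n + k + 1)]
print(minors)                                     # approximately [32.0, -192.0]

# For a local constrained max, the minor of order m must have the sign of (-1)**(m - k),
# so the minors alternate and the largest one has the sign of (-1)**n.
expected = [(-1) ** (m - k) for m in range(2 * k + 1, n + k + 1)]
is_local_max = all(np.sign(d) == s for d, s in zip(minors, expected))
print(is_local_max)                               # True
```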
Inequality Constraints

To include inequality constraints in the statement of Theorem 19.6, we call on the natural techniques that we used at this stage of our discussion of first order conditions, in Section 18.3. Given a solution $(x^*, \lambda^*)$ of the first order conditions, divide the inequality constraints into binding constraints and nonbinding constraints at $x^*$. On the one hand, we treat the binding inequality constraints like equality constraints; on the other hand, the multipliers for the nonbinding constraints must be zero and these constraints drop out of the Lagrangian. The following theorem summarizes these considerations for a constrained maximization problem.

Theorem 19.8  Let $f, g_1, \ldots, g_m, h_1, \ldots, h_k$ be $C^2$ functions on $\mathbf{R}^n$. Consider the problem of maximizing $f$ on the constraint set
$$C_{g,h} = \{x : g_1(x) \le b_1, \ldots, g_m(x) \le b_m,\; h_1(x) = c_1, \ldots, h_k(x) = c_k\}.$$
Form the Lagrangian
$$L(x, \lambda_1, \ldots, \lambda_m, \mu_1, \ldots, \mu_k) = f(x) - \lambda_1\bigl(g_1(x) - b_1\bigr) - \cdots - \lambda_m\bigl(g_m(x) - b_m\bigr) - \mu_1\bigl(h_1(x) - c_1\bigr) - \cdots - \mu_k\bigl(h_k(x) - c_k\bigr).$$
(a) Suppose that there exist $\lambda_1^*, \ldots, \lambda_m^*, \mu_1^*, \ldots, \mu_k^*$ such that the first order conditions of Theorem 18.5 are satisfied; that is,
$$\frac{\partial L}{\partial x_1} = \cdots = \frac{\partial L}{\partial x_n} = 0 \ \text{ at } (x^*, \lambda^*, \mu^*),$$
$$\lambda_1^* \ge 0, \ \ldots, \ \lambda_m^* \ge 0,$$
$$\lambda_1^*\bigl(g_1(x^*) - b_1\bigr) = 0, \ \ldots, \ \lambda_m^*\bigl(g_m(x^*) - b_m\bigr) = 0,$$
$$h_1(x^*) = c_1, \ \ldots, \ h_k(x^*) = c_k.$$
(b) For notation's sake, suppose that $g_1, \ldots, g_e$ are binding at $x^*$ and that $g_{e+1}, \ldots, g_m$ are not binding. Write $(g_1, \ldots, g_e)$ as $g_E$. Suppose that the Hessian of $L$ with respect to $x$ at $(x^*, \lambda^*, \mu^*)$ is negative definite on the linear constraint set
$$\{v : Dg_E(x^*)v = 0 \ \text{and}\ Dh(x^*)v = 0\};$$
that is,
$$v \ne 0, \ Dg_E(x^*)v = 0, \ Dh(x^*)v = 0 \;\Longrightarrow\; v^T\bigl(D^2_x L(x^*, \lambda^*, \mu^*)\bigr)v < 0.$$
Then $x^*$ is a strict local constrained max of $f$ on $C_{g,h}$.

To check condition (b), form the bordered Hessian
$$H = \begin{pmatrix} 0 & 0 & Dg_E(x^*) \\ 0 & 0 & Dh(x^*) \\ Dg_E(x^*)^T & Dh(x^*)^T & D^2_x L(x^*, \lambda^*, \mu^*) \end{pmatrix},$$
in which the $n \times n$ Hessian of $L$ with respect to $x$ is bordered by the derivatives of the $e$ binding inequality constraints and the $k$ equality constraints. If the last $n - (e + k)$ leading principal minors alternate in sign, with the sign of the determinant of the largest matrix the same as the sign of $(-1)^n$, then condition (b) holds.

A little care has to be taken in writing the minimization version of Theorem 19.8. In particular, the minimization problem should be presented in standard form, as in Theorem 18.6. One makes the following changes in the wording of Theorem 19.8 for an inequality-constrained minimization problem: (1) change the word "maximizing" to "minimizing" on line two, (2) write the inequality constraints as $g_i(x) \ge b_i$ in the presentation of the constraint set $C_{g,h}$, (3) change "negative definite" and "$< 0$" in condition (b) to "positive definite" and "$> 0$", and (4) change "max" to "min" in the concluding sentence. The bordered Hessian check requires that the last $n - (e + k)$ leading principal minors all have the same sign as $(-1)^{e+k}$.

Alternative Approaches to the Bordered Hessian Condition

The bordered Hessian condition for a constrained max or min can be presented in different ways. The two most common alternatives to our presentation involve the position of the border in the bordered Hessian and the rules for the signs of the leading principal minors to distinguish a max from a min. Many texts place the Jacobian $Dh(x^*)$ of the constraint functions to the right of and below the Hessian $D^2_x L(x^*, \mu^*)$ of the Lagrangian with respect to the $x$'s:
$$\begin{pmatrix} D^2_x L(x^*, \mu^*) & Dh(x^*)^T \\ Dh(x^*) & 0 \end{pmatrix}. \tag{25}$$
Instead of examining the last $n - k$ leading principal minors, this point of view looks at the $n - k$ largest principal minors which "respect the border": after computing the determinant of all of $H$, one first throws away row $n$ and column $n$ of $H$, then row $n - 1$ and column $n - 1$ of $H$, and so on. Furthermore, some texts state the sign conditions by emphasizing the sign of the smallest principal minor checked instead of the sign of the whole bordered Hessian $H$. For a constrained minimization problem, the $n - k$ principal minors all have the same sign; for a constrained maximization problem, they alternate in sign. For a constrained minimization problem, the sign of the smallest principal minor checked is the same as the sign of $(-1)^k$; for a constrained maximization problem, the sign of the smallest principal minor checked is the same as the sign of $(-1)^{k+1}$.
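One way to see that the two border conventions really are the same test is to compare them numerically. The sketch below does this for the matrix of Example 19.8; the only new bookkeeping is the construction of the border-respecting minors in the second loop.

```python
# The two border conventions give the same test.  For the Example 19.8 matrix,
# compare the last n - k leading principal minors of the border-on-top form (15)
# with the "border-respecting" minors of the border-on-bottom form (25).
import numpy as np

n, k = 3, 1
D2L = np.array([[0., 4., 4.],
                [4., 0., 4.],
                [4., 4., 0.]])          # Hessian of L in x at x = y = z = mu = 1
Dh = np.array([[2., 2., 2.]])           # Jacobian of the constraint at the candidate

top    = np.block([[np.zeros((k, k)), Dh], [Dh.T, D2L]])       # arrangement (15)
bottom = np.block([[D2L, Dh.T], [Dh, np.zeros((k, k))]])       # arrangement (25)

minors_top = [np.linalg.det(top[:m, :m]) for m in range(2 * k + 1, n + k + 1)]

# For (25), keep the border and drop the trailing x-rows/columns one at a time.
minors_bottom = []
for m in range(2 * k + 1, n + k + 1):
    keep = list(range(m - k)) + list(range(n, n + k))   # first m-k x-indices plus the border
    minors_bottom.append(np.linalg.det(bottom[np.ix_(keep, keep)]))

print(minors_top)      # approximately [32.0, -192.0]
print(minors_bottom)   # the same values
```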
Necessary Second Order Conditions

Theorems 19.6, 19.7, and 19.8 state sufficient second order conditions for a candidate point to be a solution of a constrained maximization problem, namely, that the Hessian $D^2_x L(x^*, \mu^*)$ of the Lagrangian with respect to the $x_i$'s be negative definite on the nullspace of the Jacobian $Dh(x^*)$ of the constraint functions (condition (14) in Theorem 19.6). The corresponding necessary condition that a constrained maximizer must satisfy is, of course, that $D^2_x L(x^*, \mu^*)$ be negative semidefinite on the nullspace of $Dh(x^*)$. The complete characterization of constrained negative semidefiniteness in terms of principal submatrices of the bordered Hessian is a bit too complex to state here in all its generality. It certainly requires that each of the last $n - k$ leading principal minors of the bordered Hessian either be zero or have the same sign as it would under the negative definiteness test. If $D_x L(x^*, \mu^*) = 0$ but one of the last $n - k$ leading principal minors of the bordered Hessian is nonzero and has the wrong sign for negative definiteness, then the candidate cannot be a local constrained max.
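The wrong-sign criterion in the last sentence is exactly what rules out the candidate $(0, \sqrt 3)$ of Example 19.7 as a constrained maximizer; the short check below, which reuses the numbers computed in that example, is purely illustrative.

```python
# Illustration of the necessary condition: at the Example 19.7 candidate
# (x1, x2, mu) = (0, sqrt(3), 0), the single minor to check is det H itself.
# It is nonzero with the wrong sign for a max, so the point cannot be a
# local constrained maximizer (it is in fact a local minimizer).
import numpy as np

x1, x2, mu = 0.0, np.sqrt(3.0), 0.0
H = np.array([[0.0,              4 * x1,          2 * x2],
              [4 * x1, 2 * x2 - 4 * mu,          2 * x1],
              [2 * x2,          2 * x1,         -2 * mu]])

n, k = 2, 1
d = np.linalg.det(H)
print(d)                                   # about -41.57, i.e. -24*sqrt(3)
print(np.sign(d) == (-1) ** n)             # False: wrong sign for a local max
```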
