
# AN INTRODUCTION TO

OPTIMIZATION

SOLUTIONS MANUAL

Fourth Edition

Edwin K. P. Chong and Stanislaw H. Zak

## A JOHN WILEY & SONS, INC., PUBLICATION

## 1. Methods of Proof and Some Notation

1.1

| A | B | not A | not B | A ⇒ B | (not B) ⇒ (not A) |
|---|---|-------|-------|-------|-------------------|
| F | F | T | T | T | T |
| F | T | T | F | T | T |
| T | F | F | T | F | F |
| T | T | F | F | T | T |

1.2

| A | B | not A | not B | A ⇒ B | (not A) or B |
|---|---|-------|-------|-------|--------------|
| F | F | T | T | T | T |
| F | T | T | F | T | T |
| T | F | F | T | F | F |
| T | T | F | F | T | T |

1.3

| A | B | not (A and B) | not A | not B | (not A) or (not B) |
|---|---|---------------|-------|-------|--------------------|
| F | F | T | T | T | T |
| F | T | T | T | F | T |
| T | F | T | F | T | T |
| T | T | F | F | F | F |

1.4

| A | B | A and B | A and (not B) | (A and B) or (A and (not B)) |
|---|---|---------|---------------|------------------------------|
| F | F | F | F | F |
| F | T | F | F | F |
| T | F | F | T | T |
| T | T | T | F | T |

1.5
The cards that you should turn over are 3 and A. The remaining cards are irrelevant to ascertaining the
truth or falsity of the rule. The card with S is irrelevant because S is not a vowel. The card with 8 is
irrelevant because the rule does not say that if a card has an even number on one side, then it has a vowel on
the other side.
Turning over the A card directly checks the rule, while turning over the 3 card checks its contrapositive.
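The equivalences behind Exercises 1.1–1.4 can be checked mechanically by enumerating all truth assignments. A quick sketch in Python (not part of the original solution):

```python
from itertools import product

# Enumerate all truth assignments of (A, B) and compare compound statements.
for A, B in product([False, True], repeat=2):
    implies = (not A) or B                          # A => B
    contrapositive = (not (not B)) or (not A)       # (not B) => (not A)
    assert implies == contrapositive                # Exercise 1.1
    assert implies == ((not A) or B)                # Exercise 1.2
    assert (not (A and B)) == ((not A) or (not B))  # Exercise 1.3 (DeMorgan)
    # Exercise 1.4: (A and B) or (A and (not B)) reduces to A.
    assert ((A and B) or (A and (not B))) == A
print("all equivalences hold")
```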

## 2. Vector Spaces and Matrices

2.1
We show this by contradiction. Suppose n < m. The number of columns of A is n, and rank A is
the maximum number of linearly independent columns of A, so rank A ≤ n < m,
which contradicts the assumption that rank A = m.
2.2
⇒: Since there exists a solution, then by Theorem 2.1, rank A = rank[A b]. So, it remains to prove that
rank A = n. For this, suppose that rank A < n (note that it is impossible for rank A > n since A has
only n columns). Hence, there exists y ∈ Rⁿ, y ≠ 0, such that Ay = 0 (this is because the columns of
A are linearly dependent, and Ay is a linear combination of the columns of A). Let x be a solution to
Ax = b. Then clearly x + y ≠ x is also a solution. This contradicts the uniqueness of the solution. Hence,
rank A = n.
⇐: By Theorem 2.1, a solution exists. It remains to prove that it is unique. For this, let x and y be
solutions, i.e., Ax = b and Ay = b. Subtracting, we get A(x − y) = 0. Since rank A = n and A has n
columns, the columns of A are linearly independent, so x − y = 0 and hence x = y, which shows that the solution is unique.
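The criterion of 2.2 can be sanity-checked numerically: with exact rational arithmetic, Ax = b is solvable iff rank A = rank[A b], and the solution is unique iff additionally rank A = n. A sketch (the matrix below is a made-up example, not from the exercise):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 2], [3, 4], [5, 6]]       # n = 2 columns, rank 2
b = [1, 1, 1]                      # consistent right-hand side
Ab = [row + [bi] for row, bi in zip(A, b)]
# Theorem 2.1 + Exercise 2.2: solvable and unique iff both ranks equal n = 2.
print(rank(A), rank(Ab))
```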
2.3
Consider the vectors ā_i = [1, a_i^⊤]^⊤ ∈ R^{n+1}, i = 1, …, k. Since k ≥ n + 2, the vectors ā_1, …, ā_k must
be linearly dependent in R^{n+1}. Hence, there exist α_1, …, α_k, not all zero, such that

$$\sum_{i=1}^{k} \alpha_i \bar a_i = 0.$$

The first component of the above vector equation is $\sum_{i=1}^{k} \alpha_i = 0$, while the last n components have the form
$\sum_{i=1}^{k} \alpha_i a_i = 0$, completing the proof.
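The argument in 2.3 can be illustrated numerically: for k ≥ n + 2 points in Rⁿ, the augmented vectors [1, a_i^⊤]^⊤ are dependent, and any nontrivial annihilating combination has coefficients summing to zero. A sketch with n = 1, k = 3 (the sample points are made up):

```python
from fractions import Fraction as F

# Three points a1, a2, a3 on the real line (n = 1, k = 3 >= n + 2).
a = [F(2), F(5), F(11)]
# Augmented vectors v_i = [1, a_i] in R^2.  For three vectors in R^2, the
# cofactor coefficients (det[v2 v3], -det[v1 v3], det[v1 v2]) annihilate them.
v = [(F(1), ai) for ai in a]

def det(p, q):
    return p[0] * q[1] - p[1] * q[0]

alpha = [det(v[1], v[2]), -det(v[0], v[2]), det(v[0], v[1])]
assert any(c != 0 for c in alpha)
assert sum(alpha) == 0                               # first components
assert sum(c * ai for c, ai in zip(alpha, a)) == 0   # remaining components
```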
2.4
a. We first postmultiply

$$M = \begin{bmatrix} M_{m-k,k} & I_{m-k} \\ M_{k,k} & O \end{bmatrix}$$

by the matrix

$$\begin{bmatrix} I_k & O \\ -M_{m-k,k} & I_{m-k} \end{bmatrix}$$

to obtain

$$\begin{bmatrix} M_{m-k,k} & I_{m-k} \\ M_{k,k} & O \end{bmatrix}\begin{bmatrix} I_k & O \\ -M_{m-k,k} & I_{m-k} \end{bmatrix} = \begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix}.$$

Note that the determinant of the postmultiplying matrix is 1. Next we postmultiply the resulting product by

$$\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix}$$

to obtain

$$\begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix}\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix} = \begin{bmatrix} I_{m-k} & O \\ O & M_{k,k} \end{bmatrix}.$$

Notice that

$$\det M = \det\begin{bmatrix} I_{m-k} & O \\ O & M_{k,k} \end{bmatrix}\det\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix},$$

where

$$\det\begin{bmatrix} O & I_k \\ I_{m-k} & O \end{bmatrix} = (-1)^{k(m-k)}.$$

The above easily follows from the fact that the determinant changes its sign if we interchange columns, as
discussed in Section 2.2; the matrix is obtained from I_m by k(m − k) column interchanges. Moreover,

$$\det\begin{bmatrix} I_{m-k} & O \\ O & M_{k,k} \end{bmatrix} = \det(I_{m-k})\det(M_{k,k}) = \det(M_{k,k}).$$

Hence,

$$\det M = (-1)^{k(m-k)}\det M_{k,k}.$$

b. We can see this in the following examples. We assume, without loss of generality, that M_{m−k,k} = O, and
let M_{k,k} = 2; thus k = 1. First consider the case m = 2. Then we have

$$M = \begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix}.$$

Thus,

$$\det M = -2 \neq \det(M_{k,k}).$$

Next consider the case m = 3. Then

$$\det\begin{bmatrix} O & I_{m-k} \\ M_{k,k} & O \end{bmatrix} = \det\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 2 & 0 & 0 \end{bmatrix} = 2 = \det(M_{k,k}).$$

Therefore, in general,

$$\det M \neq \det(M_{k,k});$$

the two determinants agree only up to the sign factor (−1)^{k(m−k)}, which is +1 exactly when k(m − k) is even.
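The sign factor can be confirmed numerically: for M = [[O, I_{m−k}], [M_{k,k}, O]], one gets det M = (−1)^{k(m−k)} det M_{k,k}. A sketch with a tiny recursive determinant helper (the sizes chosen are arbitrary):

```python
def det(m):
    """Laplace expansion along the first row (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def block_m(mkk, m):
    """Build M = [[O, I_{m-k}], [M_kk, O]] for a k x k lower-left block."""
    k = len(mkk)
    rows = [[0] * k + [1 if j == i else 0 for j in range(m - k)] for i in range(m - k)]
    rows += [list(r) + [0] * (m - k) for r in mkk]
    return rows

mkk = [[2]]                      # k = 1, det(M_kk) = 2
for m in (2, 3, 4):
    k = 1
    assert det(block_m(mkk, m)) == (-1) ** (k * (m - k)) * 2
```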
2.5
Let

$$M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$$

and suppose that each block is k × k. John R. Silvester showed that if at least one of the blocks is
equal to O (the zero matrix), then the desired formula holds. Indeed, if a row or column block is zero, then the
determinant is equal to zero, as follows from the determinant properties discussed in Section 2.2. That is, if
A = B = O, or A = C = O, and so on, then obviously det M = 0. This includes the case when any three
or all four blocks are zero matrices.
If B = O or C = O, then M is block triangular and

$$\det M = \det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(AD).$$

The only case left to analyze is when A = O or D = O. We will show that in either case,

$$\det M = \det(-BC).$$

Without loss of generality, suppose that D = O. Following the arguments of John R. Silvester, we premultiply M by the product of three matrices whose determinants are unity:

$$\begin{bmatrix} I_k & -I_k \\ O & I_k \end{bmatrix}\begin{bmatrix} I_k & O \\ I_k & I_k \end{bmatrix}\begin{bmatrix} I_k & -I_k \\ O & I_k \end{bmatrix}\begin{bmatrix} A & B \\ C & O \end{bmatrix} = \begin{bmatrix} -C & O \\ A & B \end{bmatrix}.$$

Hence,

$$\det\begin{bmatrix} A & B \\ C & O \end{bmatrix} = \det\begin{bmatrix} -C & O \\ A & B \end{bmatrix} = \det(-C)\det(B) = \det(-I_k)\det C\det B.$$

Thus we have

$$\det\begin{bmatrix} A & B \\ C & O \end{bmatrix} = \det(-BC) = \det(-CB).$$

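The identity det[[A, B], [C, O]] = det(−BC) can be spot-checked on random integer blocks (a sketch; the 2 × 2 blocks are arbitrary):

```python
import random

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j in range(len(m)))

def matmul(a, b):
    return [[sum(a[i][t] * b[t][j] for t in range(len(b))) for j in range(len(b[0]))]
            for i in range(len(a))]

random.seed(0)
for _ in range(20):
    A = [[random.randint(-3, 3) for _ in range(2)] for _ in range(2)]
    B = [[random.randint(-3, 3) for _ in range(2)] for _ in range(2)]
    C = [[random.randint(-3, 3) for _ in range(2)] for _ in range(2)]
    # M = [[A, B], [C, O]] assembled as a 4 x 4 integer matrix.
    M = [A[0] + B[0], A[1] + B[1], C[0] + [0, 0], C[1] + [0, 0]]
    negBC = [[-v for v in row] for row in matmul(B, C)]
    assert det(M) == det(negBC)
```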
2.6
We represent the given system of equations in the form Ax = b, where

$$A = \begin{bmatrix} 1 & 1 & 2 & 1 \\ 1 & -2 & 0 & -1 \end{bmatrix}, \qquad x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}, \qquad b = \begin{bmatrix} 1 \\ -2 \end{bmatrix}.$$

Using elementary row operations yields

$$A = \begin{bmatrix} 1 & 1 & 2 & 1 \\ 1 & -2 & 0 & -1 \end{bmatrix} \to \begin{bmatrix} 1 & 1 & 2 & 1 \\ 0 & -3 & -2 & -2 \end{bmatrix}$$

and

$$[A, b] = \begin{bmatrix} 1 & 1 & 2 & 1 & 1 \\ 1 & -2 & 0 & -1 & -2 \end{bmatrix} \to \begin{bmatrix} 1 & 1 & 2 & 1 & 1 \\ 0 & -3 & -2 & -2 & -3 \end{bmatrix},$$

from which rank A = 2 and rank[A, b] = 2. Therefore, by Theorem 2.1, the system has a solution.
We next represent the system of equations as

$$\begin{bmatrix} 1 & 1 \\ 1 & -2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 - 2x_3 - x_4 \\ -2 + x_4 \end{bmatrix}.$$

Hence, writing x₃ = d₃ and x₄ = d₄,

$$\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & -2 \end{bmatrix}^{-1}\begin{bmatrix} 1 - 2d_3 - d_4 \\ -2 + d_4 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 2 & 1 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} 1 - 2d_3 - d_4 \\ -2 + d_4 \end{bmatrix} = \begin{bmatrix} -\frac{4}{3}d_3 - \frac{1}{3}d_4 \\ 1 - \frac{2}{3}d_3 - \frac{2}{3}d_4 \end{bmatrix}.$$

Therefore, a general solution is

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} -\frac{4}{3}d_3 - \frac{1}{3}d_4 \\ 1 - \frac{2}{3}d_3 - \frac{2}{3}d_4 \\ d_3 \\ d_4 \end{bmatrix} = d_3\begin{bmatrix} -\frac{4}{3} \\ -\frac{2}{3} \\ 1 \\ 0 \end{bmatrix} + d_4\begin{bmatrix} -\frac{1}{3} \\ -\frac{2}{3} \\ 0 \\ 1 \end{bmatrix} + \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix},$$

where d₃ and d₄ are arbitrary values.
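The general solution can be verified directly: for any d₃, d₄ it must satisfy Ax = b. A sketch in exact rational arithmetic, assuming the signs A = [[1, 1, 2, 1], [1, −2, 0, −1]], b = [1, −2] indicated by the row reductions:

```python
from fractions import Fraction as F

A = [[1, 1, 2, 1], [1, -2, 0, -1]]
b = [F(1), F(-2)]

def solution(d3, d4):
    """General solution parameterized by the free variables d3, d4."""
    x1 = -F(4, 3) * d3 - F(1, 3) * d4
    x2 = F(1) - F(2, 3) * d3 - F(2, 3) * d4
    return [x1, x2, F(d3), F(d4)]

for d3, d4 in [(0, 0), (1, 0), (0, 1), (3, -2)]:
    x = solution(F(d3), F(d4))
    assert [sum(a * xi for a, xi in zip(row, x)) for row in A] == b
```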

2.7
1. Apply the definition of |−a|:

$$|-a| = \begin{cases} -a & \text{if } -a > 0 \\ 0 & \text{if } -a = 0 \\ -(-a) & \text{if } -a < 0 \end{cases} \;=\; \begin{cases} -a & \text{if } a < 0 \\ 0 & \text{if } a = 0 \\ a & \text{if } a > 0 \end{cases} \;=\; |a|.$$

2. If a ≥ 0, then |a| = a. If a < 0, then |a| = −a > 0 > a. Hence |a| ≥ a. On the other hand, |−a| ≥ −a
(by the above). Hence, a ≥ −|−a| = −|a| (by property 1).
3. We have four cases to consider. First, if a, b ≥ 0, then a + b ≥ 0. Hence, |a + b| = a + b = |a| + |b|.
Second, if a, b ≤ 0, then a + b ≤ 0. Hence |a + b| = −(a + b) = −a − b = |a| + |b|.
Third, if a ≥ 0 and b ≤ 0, then we have two further subcases:

1. If a + b ≥ 0, then |a + b| = a + b ≤ a − b = |a| + |b|.
2. If a + b ≤ 0, then |a + b| = −a − b ≤ a − b = |a| + |b|.

The fourth case, a ≤ 0 and b ≥ 0, is identical to the third case, with a and b interchanged.
4. We first show |a − b| ≤ |a| + |b|. We have

|a − b| = |a + (−b)| ≤ |a| + |−b| (by property 3) = |a| + |b| (by property 1).

To show ||a| − |b|| ≤ |a − b|, we note that |a| = |a − b + b| ≤ |a − b| + |b|, which implies |a| − |b| ≤ |a − b|. On the
other hand, from the above we have |b| − |a| ≤ |b − a| = |a − b| by property 1. Therefore, ||a| − |b|| ≤ |a − b|.
5. We have four cases. First, if a, b ≥ 0, we have ab ≥ 0 and hence |ab| = ab = |a||b|. Second, if a, b ≤ 0,
we have ab ≥ 0 and hence |ab| = ab = (−a)(−b) = |a||b|. Third, if a ≥ 0, b ≤ 0, we have ab ≤ 0 and hence
|ab| = −ab = a(−b) = |a||b|. The fourth case, a ≤ 0 and b ≥ 0, is identical to the third case, with a and b
interchanged.
6. With |a| < c and |b| < d, we have

|a + b| ≤ |a| + |b| (by property 3) < c + d.

7. ⇒: By property 2, a ≤ |a| and −a ≤ |a|. Therefore, |a| < b implies a ≤ |a| < b and −a ≤ |a| < b.
⇐: If a ≥ 0, then |a| = a < b. If a < 0, then |a| = −a < b.
For the case when < is replaced by ≤, we simply repeat the above proof with < replaced by ≤.
8. This is simply the negation of property 7 (apply DeMorgan's Law).
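Properties 3–5 and 7 are easy to sanity-check by brute force over a small integer grid (a quick sketch, not part of the proof):

```python
# Exhaustively check the absolute-value identities on a small integer grid.
vals = range(-5, 6)
for a in vals:
    for b in vals:
        assert abs(a + b) <= abs(a) + abs(b)          # property 3
        assert abs(abs(a) - abs(b)) <= abs(a - b)     # property 4
        assert abs(a * b) == abs(a) * abs(b)          # property 5
        # property 7: |a| < b  iff  -b < a < b
        assert (abs(a) < b) == (-b < a < b)
print("ok")
```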
2.8
Observe that we can represent ⟨x, y⟩₂ as

$$\langle x, y\rangle_2 = x^\top\begin{bmatrix} 2 & 3 \\ 3 & 5 \end{bmatrix}y = (Qx)^\top(Qy) = x^\top Q^2 y,$$

where

$$Q = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}.$$

Note that the matrix Q = Q^⊤ is nonsingular.
1. Now, ⟨x, x⟩₂ = (Qx)^⊤(Qx) = ‖Qx‖² ≥ 0, and

⟨x, x⟩₂ = 0 ⇔ ‖Qx‖² = 0 ⇔ Qx = 0 ⇔ x = 0,

since Q is nonsingular.
2. ⟨x, y⟩₂ = (Qx)^⊤(Qy) = (Qy)^⊤(Qx) = ⟨y, x⟩₂.
3. We have

⟨x + y, z⟩₂ = (x + y)^⊤Q²z = x^⊤Q²z + y^⊤Q²z = ⟨x, z⟩₂ + ⟨y, z⟩₂.

4. ⟨rx, y⟩₂ = (rx)^⊤Q²y = r x^⊤Q²y = r⟨x, y⟩₂.
2.9
We have ‖x‖ = ‖(x − y) + y‖ ≤ ‖x − y‖ + ‖y‖ by the Triangle Inequality. Hence, ‖x‖ − ‖y‖ ≤ ‖x − y‖. On
the other hand, from the above we have ‖y‖ − ‖x‖ ≤ ‖y − x‖ = ‖x − y‖. Combining the two inequalities,
we obtain |‖x‖ − ‖y‖| ≤ ‖x − y‖.
2.10
Let ε > 0 be given. Set δ = ε. Hence, if ‖x − y‖ < δ, then by Exercise 2.9, |‖x‖ − ‖y‖| ≤ ‖x − y‖ < δ = ε.
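The reverse triangle inequality of 2.9 (which drives the continuity argument of 2.10) can be sketched numerically for the Euclidean norm:

```python
import math, random

def norm(v):
    return math.sqrt(sum(c * c for c in v))

random.seed(1)
for _ in range(100):
    x = [random.uniform(-10, 10) for _ in range(3)]
    y = [random.uniform(-10, 10) for _ in range(3)]
    diff = [a - b for a, b in zip(x, y)]
    # | ||x|| - ||y|| |  <=  ||x - y||   (Exercise 2.9)
    assert abs(norm(x) - norm(y)) <= norm(diff) + 1e-12
```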

## 3. Transformations
3.1
Let v be the vector such that x are the coordinates of v with respect to {e₁, e₂, …, eₙ}, and x′ are the
coordinates of v with respect to {e′₁, e′₂, …, e′ₙ}. Then,

v = x₁e₁ + ⋯ + xₙeₙ = [e₁, …, eₙ]x,

and

v = x′₁e′₁ + ⋯ + x′ₙe′ₙ = [e′₁, …, e′ₙ]x′.

Hence,

[e₁, …, eₙ]x = [e′₁, …, e′ₙ]x′,

which implies

x′ = [e′₁, …, e′ₙ]⁻¹[e₁, …, eₙ]x = T x.

3.2
a. We have

$$[e'_1, e'_2, e'_3] = [e_1, e_2, e_3]\begin{bmatrix} 1 & 2 & 4 \\ 3 & 1 & 5 \\ 4 & 5 & 3 \end{bmatrix}.$$

Therefore,

$$T = [e'_1, e'_2, e'_3]^{-1}[e_1, e_2, e_3] = \begin{bmatrix} 1 & 2 & 4 \\ 3 & 1 & 5 \\ 4 & 5 & 3 \end{bmatrix}^{-1} = \frac{1}{42}\begin{bmatrix} 28 & 14 & 14 \\ 29 & 19 & 7 \\ 11 & 13 & 7 \end{bmatrix}.$$

b. We have

$$[e_1, e_2, e_3] = [e'_1, e'_2, e'_3]\begin{bmatrix} 1 & 2 & 3 \\ 1 & 1 & 0 \\ 3 & 4 & 5 \end{bmatrix}.$$

Therefore,

$$T = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 1 & 0 \\ 3 & 4 & 5 \end{bmatrix}.$$

3.3
We have

$$[e_1, e_2, e_3] = [e'_1, e'_2, e'_3]\begin{bmatrix} 2 & 2 & 3 \\ 1 & 1 & 0 \\ 1 & 2 & 1 \end{bmatrix}.$$

Therefore, the transformation matrix from {e′₁, e′₂, e′₃} to {e₁, e₂, e₃} is

$$T = \begin{bmatrix} 2 & 2 & 3 \\ 1 & 1 & 0 \\ 1 & 2 & 1 \end{bmatrix}.$$

Now, consider a linear transformation L : R³ → R³, and let A be its representation with respect to
{e₁, e₂, e₃}, and B its representation with respect to {e′₁, e′₂, e′₃}. Let y = Ax and y′ = Bx′. Then,

y′ = T y = T(Ax) = T A(T⁻¹x′) = (T AT⁻¹)x′.

Hence, the representation of the linear transformation with respect to {e′₁, e′₂, e′₃} is

$$B = TAT^{-1} = \begin{bmatrix} 3 & 10 & 8 \\ 1 & 8 & 4 \\ 2 & 13 & 7 \end{bmatrix}.$$

3.4
We have

$$[e'_1, e'_2, e'_3, e'_4] = [e_1, e_2, e_3, e_4]\begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

Therefore, the transformation matrix from {e₁, e₂, e₃, e₄} to {e′₁, e′₂, e′₃, e′₄} is

$$T = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$

Now, consider a linear transformation L : R⁴ → R⁴, and let A be its representation with respect to
{e₁, e₂, e₃, e₄}, and B its representation with respect to {e′₁, e′₂, e′₃, e′₄}. Let y = Ax and y′ = Bx′.
Then,

y′ = T y = T(Ax) = T A(T⁻¹x′) = (T AT⁻¹)x′.

Therefore,

$$B = TAT^{-1} = \begin{bmatrix} 5 & 3 & 4 & 3 \\ 3 & 2 & 1 & 2 \\ 1 & 0 & 1 & 2 \\ 1 & 1 & 1 & 4 \end{bmatrix}.$$
3.5
Let {v₁, v₂, v₃, v₄} be a set of linearly independent eigenvectors of A corresponding to the eigenvalues λ₁,
λ₂, λ₃, and λ₄. Let T = [v₁, v₂, v₃, v₄]. Then,

$$AT = A[v_1, v_2, v_3, v_4] = [Av_1, Av_2, Av_3, Av_4] = [\lambda_1 v_1, \lambda_2 v_2, \lambda_3 v_3, \lambda_4 v_4] = [v_1, v_2, v_3, v_4]\begin{bmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & \lambda_2 & 0 & 0 \\ 0 & 0 & \lambda_3 & 0 \\ 0 & 0 & 0 & \lambda_4 \end{bmatrix}.$$

Hence,

$$T^{-1}AT = \begin{bmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & \lambda_2 & 0 & 0 \\ 0 & 0 & \lambda_3 & 0 \\ 0 & 0 & 0 & \lambda_4 \end{bmatrix}.$$

Therefore, the linear transformation has a diagonal matrix form with respect to the basis formed by a linearly
independent set of eigenvectors.
Because

det(λI − A) = (λ − 2)(λ − 3)(λ − 1)(λ + 1),

the eigenvalues are λ₁ = 2, λ₂ = 3, λ₃ = 1, and λ₄ = −1.
From Avᵢ = λᵢvᵢ, where vᵢ ≠ 0 (i = 1, 2, 3, 4), the corresponding eigenvectors are

$$v_1 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}, \quad v_3 = \begin{bmatrix} 0 \\ 2 \\ 9 \\ 1 \end{bmatrix}, \quad \text{and} \quad v_4 = \begin{bmatrix} 24 \\ 12 \\ 1 \\ 9 \end{bmatrix}.$$

Therefore, the basis we are interested in is {v₁, v₂, v₃, v₄} as listed above.
3.6
Suppose v₁, …, vₙ are eigenvectors of A corresponding to λ₁, …, λₙ, respectively. Then, for each i =
1, …, n, we have

(Iₙ − A)vᵢ = vᵢ − Avᵢ = vᵢ − λᵢvᵢ = (1 − λᵢ)vᵢ,

which shows that 1 − λ₁, …, 1 − λₙ are the eigenvalues of Iₙ − A.
Alternatively, we may write the characteristic polynomial of Iₙ − A as

det(λIₙ − (Iₙ − A)) = det(−((1 − λ)Iₙ − A)) = (−1)ⁿ det((1 − λ)Iₙ − A),

which is zero exactly when 1 − λ is an eigenvalue of A, i.e., when λ = 1 − λᵢ; this shows the desired result.

3.7
Let x, y ∈ V⊥, and α, β ∈ R. To show that V⊥ is a subspace, we need to show that αx + βy ∈ V⊥. For this,
let v be any vector in V. Then,

v^⊤(αx + βy) = αv^⊤x + βv^⊤y = 0,

since v^⊤x = v^⊤y = 0 by definition. Hence αx + βy ∈ V⊥.

3.8
The null space of A is N(A) = {x ∈ R³ : Ax = 0}. Using elementary row operations and back-substitution,
we can solve the system of equations:

$$\begin{bmatrix} 4 & -2 & 0 \\ 2 & 1 & -1 \\ 2 & -3 & 1 \end{bmatrix} \to \begin{bmatrix} 4 & -2 & 0 \\ 0 & 2 & -1 \\ 0 & -2 & 1 \end{bmatrix} \to \begin{bmatrix} 4 & -2 & 0 \\ 0 & 2 & -1 \\ 0 & 0 & 0 \end{bmatrix},$$

so Ax = 0 reduces to

4x₁ − 2x₂ = 0, 2x₂ − x₃ = 0.

By back-substitution, x₂ = ½x₃ and x₁ = ½x₂ = ¼x₃, so x = x₃[¼, ½, 1]^⊤. Therefore,

$$N(A) = \left\{ c\begin{bmatrix} 1 \\ 2 \\ 4 \end{bmatrix} : c \in \mathbb{R} \right\}.$$

3.9
Let x, y ∈ R(A), and α, β ∈ R. Then, there exist v, u such that x = Av and y = Au. Thus,

αx + βy = αAv + βAu = A(αv + βu).

Hence, αx + βy ∈ R(A), which shows that R(A) is a subspace.
Let x, y ∈ N(A), and α, β ∈ R. Then, Ax = 0 and Ay = 0. Thus,

A(αx + βy) = αAx + βAy = 0.

Hence, αx + βy ∈ N(A), which shows that N(A) is a subspace.
3.10
Let v ∈ R(B), i.e., v = Bx for some x. Consider the matrix [A v]. Then, N(A^⊤) = N([A v]^⊤), since if
u ∈ N(A^⊤), then u ∈ N(B^⊤) by assumption, and hence u^⊤v = u^⊤Bx = x^⊤B^⊤u = 0. Now,

dim R(A) + dim N(A^⊤) = m

and

dim R([A v]) + dim N([A v]^⊤) = m.

Since dim N(A^⊤) = dim N([A v]^⊤), we have dim R(A) = dim R([A v]). Hence, v is a linear combination of the columns of A, i.e., v ∈ R(A), which completes the proof.
3.11
We first show V ⊆ (V⊥)⊥. Let v ∈ V, and u any element of V⊥. Then u^⊤v = v^⊤u = 0. Therefore,
v ∈ (V⊥)⊥.
We now show (V⊥)⊥ ⊆ V. Let {a₁, …, a_k} be a basis for V, and {b₁, …, b_l} a basis for (V⊥)⊥. Define
A = [a₁ ⋯ a_k] and B = [b₁ ⋯ b_l], so that V = R(A) and (V⊥)⊥ = R(B). Hence, it remains to show
that R(B) ⊆ R(A). Using the result of Exercise 3.10, it suffices to show that N(A^⊤) ⊆ N(B^⊤). So let
x ∈ N(A^⊤), which implies that x ∈ R(A)⊥ = V⊥, since R(A)⊥ = N(A^⊤). Hence, for all y, we have
(By)^⊤x = 0 = y^⊤B^⊤x, which implies that B^⊤x = 0. Therefore, x ∈ N(B^⊤), which completes the proof.
3.12
Let w ∈ W⊥, and let y be any element of V. Since V ⊆ W, we have y ∈ W. Therefore, by definition of w, we have
w^⊤y = 0. Therefore, w ∈ V⊥.
3.13
Let r = dim V. Let v₁, …, v_r be a basis for V, and V the matrix whose ith column is vᵢ. Then, clearly
V = R(V).
Let u₁, …, u_{n−r} be a basis for V⊥, and U the matrix whose ith row is uᵢ^⊤. Then, V⊥ = R(U^⊤), and
V = (V⊥)⊥ = R(U^⊤)⊥ = N(U) (by Exercise 3.11 and Theorem 3.4).
3.14
a. Let x ∈ V. Then, x = Px + (I − P)x. Note that Px ∈ V, and (I − P)x ∈ V⊥. Therefore,
x = Px + (I − P)x is an orthogonal decomposition of x with respect to V. However, x = x + 0 is also an
orthogonal decomposition of x with respect to V. Since the orthogonal decomposition is unique, we must
have x = Px.
b. Suppose P is an orthogonal projector onto V. Clearly, R(P) ⊆ V by definition. However, from part a,
x = Px for all x ∈ V, and hence V ⊆ R(P). Therefore, R(P) = V.
3.15
To answer the question, we have to represent the quadratic form with a symmetric matrix as

$$x^\top\left(\frac{1}{2}\left(\begin{bmatrix} 1 & 8 \\ -1 & 1 \end{bmatrix} + \begin{bmatrix} 1 & -1 \\ 8 & 1 \end{bmatrix}\right)\right)x = x^\top\begin{bmatrix} 1 & 7/2 \\ 7/2 & 1 \end{bmatrix}x.$$

The leading principal minors are Δ₁ = 1 and Δ₂ = 1 − 49/4 = −45/4. Therefore, the quadratic form is indefinite.
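The symmetrization step — replacing Q by ½(Q + Q^⊤) and testing leading principal minors — can be sketched as follows, assuming the nonsymmetric matrix is read as [[1, 8], [−1, 1]]:

```python
from fractions import Fraction as F

Q = [[F(1), F(8)], [F(-1), F(1)]]           # nonsymmetric matrix of the form
Qs = [[(Q[i][j] + Q[j][i]) / 2 for j in range(2)] for i in range(2)]
assert Qs == [[F(1), F(7, 2)], [F(7, 2), F(1)]]

d1 = Qs[0][0]                                # leading principal minors
d2 = Qs[0][0] * Qs[1][1] - Qs[0][1] * Qs[1][0]
assert d1 == 1 and d2 == F(-45, 4)           # mixed-sign test fails: indefinite
```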
3.16
The leading principal minors are Δ₁ = 2, Δ₂ = 0, Δ₃ = 0, which are all nonnegative. However, the
eigenvalues of A are 0, −1.4641, and 5.4641 (for example, use MATLAB to quickly check this). This implies that
the matrix A is indefinite (by Theorem 3.7). An alternative way to show that A is not positive semidefinite
is to find a vector x such that x^⊤Ax < 0. So, let x be an eigenvector of A corresponding to its negative
eigenvalue λ = −1.4641. Then, x^⊤Ax = x^⊤(λx) = λx^⊤x = λ‖x‖² < 0. For this example, we can take
x = [0.3251, 0.3251, 0.8881]^⊤, for which we can verify that x^⊤Ax = −1.4643.
3.17
a. The matrix Q is indefinite, since Δ₂ = −1 and Δ₃ = −2.
b. Let x ∈ M. Then, x₂ + x₃ = −x₁, x₁ + x₃ = −x₂, and x₁ + x₂ = −x₃. Substituting these relations into
the quadratic form shows that x^⊤Qx < 0 for all x ∈ M with x ≠ 0.
This implies that the matrix Q is negative definite on the subspace M.

3.18
a. We have

$$f(x_1, x_2, x_3) = x_2^2 = [x_1, x_2, x_3]\begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.$$

Then, Q = diag(0, 1, 0) and the eigenvalues of Q are λ₁ = 0, λ₂ = 1, and λ₃ = 0. Therefore, the quadratic form is positive
semidefinite.
b. We have

$$f(x_1, x_2, x_3) = x_1^2 + 2x_2^2 - x_1x_3 = [x_1, x_2, x_3]\begin{bmatrix} 1 & 0 & -\tfrac{1}{2} \\ 0 & 2 & 0 \\ -\tfrac{1}{2} & 0 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.$$

The eigenvalues of Q are λ₁ = 2, λ₂ = (1 − √2)/2, and λ₃ = (1 + √2)/2. Therefore, the quadratic form
is indefinite.
c. We have

$$f(x_1, x_2, x_3) = x_1^2 + x_3^2 + 2x_1x_2 + 2x_1x_3 + 2x_2x_3 = [x_1, x_2, x_3]\begin{bmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.$$

The eigenvalues of Q are λ₁ = 0, λ₂ = 1 − √3, and λ₃ = 1 + √3. Therefore, the quadratic form is
indefinite.

3.19
We have

$$f(x_1, x_2, x_3) = 4x_1^2 + x_2^2 + 9x_3^2 - 4x_1x_2 - 6x_2x_3 + 12x_1x_3 = [x_1, x_2, x_3]\begin{bmatrix} 4 & -2 & 6 \\ -2 & 1 & -3 \\ 6 & -3 & 9 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.$$

Let

$$Q = \begin{bmatrix} 4 & -2 & 6 \\ -2 & 1 & -3 \\ 6 & -3 & 9 \end{bmatrix}, \qquad x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = x_1e_1 + x_2e_2 + x_3e_3,$$

where e₁, e₂, and e₃ form the natural basis for R³.
Let v₁, v₂, and v₃ be another basis for R³. Then, the vector x is represented in the new basis as x̃, where

x = [v₁, v₂, v₃]x̃ = V x̃.

Now, f(x) = x^⊤Qx = (V x̃)^⊤Q(V x̃) = x̃^⊤(V^⊤QV)x̃ = x̃^⊤Q̃x̃, where

$$\tilde Q = \begin{bmatrix} \tilde q_{11} & \tilde q_{12} & \tilde q_{13} \\ \tilde q_{21} & \tilde q_{22} & \tilde q_{23} \\ \tilde q_{31} & \tilde q_{32} & \tilde q_{33} \end{bmatrix}$$

and q̃ᵢⱼ = vᵢ^⊤Qvⱼ for i, j = 1, 2, 3.
We will find a basis {v₁, v₂, v₃} such that q̃ᵢⱼ = 0 for i ≠ j, and which is of the form

v₁ = β₁₁e₁,
v₂ = β₂₁e₁ + β₂₂e₂,
v₃ = β₃₁e₁ + β₃₂e₂ + β₃₃e₃.

Because

q̃ᵢⱼ = vᵢ^⊤Qvⱼ = vᵢ^⊤Q(βⱼ₁e₁ + ⋯ + βⱼⱼeⱼ) = βⱼ₁(vᵢ^⊤Qe₁) + ⋯ + βⱼⱼ(vᵢ^⊤Qeⱼ),

we deduce that if vᵢ^⊤Qeⱼ = 0 for j < i, then vᵢ^⊤Qvⱼ = 0 for j < i (and, by symmetry of Q, also for j > i). We therefore impose

vᵢ^⊤Qeⱼ = 0, j < i,
vᵢ^⊤Qeᵢ = 1,

and, in this case, we get

$$\tilde Q = \begin{bmatrix} \tilde q_{11} & 0 & 0 \\ 0 & \tilde q_{22} & 0 \\ 0 & 0 & \tilde q_{33} \end{bmatrix}.$$

Case of i = 1.
From v₁^⊤Qe₁ = 1,

(β₁₁e₁)^⊤Qe₁ = β₁₁(e₁^⊤Qe₁) = β₁₁q₁₁ = 1.

Therefore,

$$\beta_{11} = \frac{1}{q_{11}} = \frac{1}{4} \quad\Rightarrow\quad v_1 = \beta_{11}e_1 = \begin{bmatrix} \tfrac{1}{4} \\ 0 \\ 0 \end{bmatrix}.$$

Case of i = 2.
From v₂^⊤Qe₁ = 0,

(β₂₁e₁ + β₂₂e₂)^⊤Qe₁ = β₂₁(e₁^⊤Qe₁) + β₂₂(e₂^⊤Qe₁) = β₂₁q₁₁ + β₂₂q₂₁ = 0.

From v₂^⊤Qe₂ = 1,

(β₂₁e₁ + β₂₂e₂)^⊤Qe₂ = β₂₁(e₁^⊤Qe₂) + β₂₂(e₂^⊤Qe₂) = β₂₁q₁₂ + β₂₂q₂₂ = 1.

Therefore,

$$\begin{bmatrix} q_{11} & q_{21} \\ q_{12} & q_{22} \end{bmatrix}\begin{bmatrix} \beta_{21} \\ \beta_{22} \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$

But, since Δ₂ = 0, this system of equations is inconsistent. Hence, in this problem v₂^⊤Qe₂ = 0 should
be satisfied instead of v₂^⊤Qe₂ = 1 so that the system can have a solution. In this case, the diagonal
matrix becomes

$$\tilde Q = \begin{bmatrix} \tilde q_{11} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \tilde q_{33} \end{bmatrix},$$

and the system of equations becomes

$$\begin{bmatrix} q_{11} & q_{21} \\ q_{12} & q_{22} \end{bmatrix}\begin{bmatrix} \beta_{21} \\ \beta_{22} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \quad\Rightarrow\quad \begin{bmatrix} \beta_{21} \\ \beta_{22} \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2}\beta_{22} \\ \beta_{22} \end{bmatrix},$$

where β₂₂ is an arbitrary real number. Thus,

$$v_2 = \beta_{21}e_1 + \beta_{22}e_2 = \begin{bmatrix} \tfrac{1}{2} \\ 1 \\ 0 \end{bmatrix}a,$$

where a is an arbitrary real number.

Case of i = 3.
Since in this case Δ₃ = det(Q) = 0, we apply the same reasoning as in the previous case and
use the condition v₃^⊤Qe₃ = 0 instead of v₃^⊤Qe₃ = 1. In this way the diagonal matrix becomes

$$\tilde Q = \begin{bmatrix} \tilde q_{11} & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$

Thus, from v₃^⊤Qe₁ = 0, v₃^⊤Qe₂ = 0, and v₃^⊤Qe₃ = 0,

$$\begin{bmatrix} q_{11} & q_{21} & q_{31} \\ q_{12} & q_{22} & q_{32} \\ q_{13} & q_{23} & q_{33} \end{bmatrix}\begin{bmatrix} \beta_{31} \\ \beta_{32} \\ \beta_{33} \end{bmatrix} = Q^\top\begin{bmatrix} \beta_{31} \\ \beta_{32} \\ \beta_{33} \end{bmatrix} = Q\begin{bmatrix} \beta_{31} \\ \beta_{32} \\ \beta_{33} \end{bmatrix} = \begin{bmatrix} 4 & -2 & 6 \\ -2 & 1 & -3 \\ 6 & -3 & 9 \end{bmatrix}\begin{bmatrix} \beta_{31} \\ \beta_{32} \\ \beta_{33} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}.$$

Therefore,

$$\begin{bmatrix} \beta_{31} \\ \beta_{32} \\ \beta_{33} \end{bmatrix} = \begin{bmatrix} \beta_{31} \\ 2\beta_{31} + 3\beta_{33} \\ \beta_{33} \end{bmatrix},$$

where β₃₁ and β₃₃ are arbitrary real numbers. Thus,

$$v_3 = \beta_{31}e_1 + \beta_{32}e_2 + \beta_{33}e_3 = \begin{bmatrix} b \\ 2b + 3c \\ c \end{bmatrix},$$

where b and c are arbitrary real numbers.

Finally,

$$V = [v_1, v_2, v_3] = \begin{bmatrix} \tfrac{1}{4} & \tfrac{a}{2} & b \\ 0 & a & 2b + 3c \\ 0 & 0 & c \end{bmatrix},$$

where a, b, and c are arbitrary real numbers.
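The construction can be verified by forming V and checking that V^⊤QV is diagonal for arbitrary choices of a, b, c (here a = b = c = 1), assuming Q = [[4, −2, 6], [−2, 1, −3], [6, −3, 9]] is the symmetric matrix of the quadratic form:

```python
from fractions import Fraction as F

Q = [[F(4), F(-2), F(6)], [F(-2), F(1), F(-3)], [F(6), F(-3), F(9)]]
a = b = c = F(1)
V = [[F(1, 4), a / 2, b],
     [F(0),    a,     2 * b + 3 * c],
     [F(0),    F(0),  c]]

def matmul(x, y):
    return [[sum(x[i][t] * y[t][j] for t in range(3)) for j in range(3)]
            for i in range(3)]

T = [[V[j][i] for j in range(3)] for i in range(3)]   # V transpose
Qt = matmul(matmul(T, Q), V)                          # congruence transform
for i in range(3):
    for j in range(3):
        if i != j:
            assert Qt[i][j] == 0                      # off-diagonals vanish
assert Qt[0][0] == F(1, 4)                            # q~_11 = beta_11^2 q_11
```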
3.20
We represent this quadratic form as f(x) = x^⊤Qx, where

$$Q = \begin{bmatrix} 1 & \xi & -1 \\ \xi & 1 & 2 \\ -1 & 2 & 5 \end{bmatrix}.$$

The leading principal minors of Q are Δ₁ = 1, Δ₂ = 1 − ξ², and Δ₃ = −5ξ² − 4ξ. For the quadratic form to
be positive definite, all the leading principal minors of Q must be positive. This is the case if and only if
ξ ∈ (−4/5, 0).
3.21
The matrix Q = Q^⊤ > 0 can be represented as Q = Q^{1/2}Q^{1/2}, where Q^{1/2} = (Q^{1/2})^⊤ > 0.
1. Now, ⟨x, x⟩_Q = (Q^{1/2}x)^⊤(Q^{1/2}x) = ‖Q^{1/2}x‖² ≥ 0, and

⟨x, x⟩_Q = 0 ⇔ ‖Q^{1/2}x‖² = 0 ⇔ Q^{1/2}x = 0 ⇔ x = 0,

since Q^{1/2} is nonsingular.
2. ⟨x, y⟩_Q = x^⊤Qy = y^⊤Q^⊤x = y^⊤Qx = ⟨y, x⟩_Q.
3. We have

⟨x + y, z⟩_Q = (x + y)^⊤Qz = x^⊤Qz + y^⊤Qz = ⟨x, z⟩_Q + ⟨y, z⟩_Q.

4. ⟨rx, y⟩_Q = (rx)^⊤Qy = r x^⊤Qy = r⟨x, y⟩_Q.

3.22
We have

‖A‖_∞ = max{‖Ax‖_∞ : ‖x‖_∞ = 1}.

We first show that ‖A‖_∞ ≤ maxᵢ Σₖ |a_{ik}|. For this, note that for each x such that ‖x‖_∞ = 1, we have

$$\|Ax\|_\infty = \max_i\left|\sum_{k=1}^n a_{ik}x_k\right| \le \max_i\sum_{k=1}^n |a_{ik}||x_k| \le \max_i\sum_{k=1}^n |a_{ik}|,$$

since |x_k| ≤ maxₖ |x_k| = ‖x‖_∞ = 1. Therefore,

$$\|A\|_\infty \le \max_i\sum_{k=1}^n |a_{ik}|.$$

To show that ‖A‖_∞ = maxᵢ Σₖ |a_{ik}|, it remains to find an x̄ ∈ Rⁿ, ‖x̄‖_∞ = 1, such that ‖Ax̄‖_∞ =
maxᵢ Σₖ |a_{ik}|. So, let j be such that

$$\sum_{k=1}^n |a_{jk}| = \max_i\sum_{k=1}^n |a_{ik}|.$$

Define x̄ by

$$\bar x_k = \begin{cases} |a_{jk}|/a_{jk} & \text{if } a_{jk} \ne 0, \\ 1 & \text{otherwise.} \end{cases}$$

Clearly ‖x̄‖_∞ = 1. Furthermore, for i ≠ j,

$$\left|\sum_{k=1}^n a_{ik}\bar x_k\right| \le \sum_{k=1}^n |a_{ik}| \le \max_i\sum_{k=1}^n |a_{ik}| = \sum_{k=1}^n |a_{jk}|,$$

and

$$\left|\sum_{k=1}^n a_{jk}\bar x_k\right| = \sum_{k=1}^n |a_{jk}|.$$

Therefore,

$$\|A\bar x\|_\infty = \max_i\left|\sum_{k=1}^n a_{ik}\bar x_k\right| = \sum_{k=1}^n |a_{jk}| = \max_i\sum_{k=1}^n |a_{ik}|.$$
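The max-row-sum formula can be checked by brute force over sign vectors, which are the extreme points of the ∞-norm unit ball where the maximum of ‖Ax‖_∞ is attained (the example matrix is arbitrary):

```python
from itertools import product

A = [[1, -2, 3], [-4, 5, -6], [7, -8, 9]]
row_sum = max(sum(abs(v) for v in row) for row in A)   # claimed norm value

best = 0
for x in product([-1, 1], repeat=3):                   # vertices of the cube
    Ax = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    best = max(best, max(abs(v) for v in Ax))
assert best == row_sum == 24
```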

3.23
We have

‖A‖₁ = max{‖Ax‖₁ : ‖x‖₁ = 1}.

We first show that ‖A‖₁ ≤ maxₖ Σᵢ |a_{ik}|. For this, note that for each x such that ‖x‖₁ = 1, we have

$$\|Ax\|_1 = \sum_{i=1}^m\left|\sum_{k=1}^n a_{ik}x_k\right| \le \sum_{i=1}^m\sum_{k=1}^n |a_{ik}||x_k| = \sum_{k=1}^n |x_k|\sum_{i=1}^m |a_{ik}| \le \left(\max_k\sum_{i=1}^m |a_{ik}|\right)\sum_{k=1}^n |x_k| = \max_k\sum_{i=1}^m |a_{ik}|,$$

since Σₖ |x_k| = ‖x‖₁ = 1. Therefore,

$$\|A\|_1 \le \max_k\sum_{i=1}^m |a_{ik}|.$$

To show that ‖A‖₁ = maxₖ Σᵢ |a_{ik}|, it remains to find an x̄ ∈ Rⁿ, ‖x̄‖₁ = 1, such that ‖Ax̄‖₁ =
maxₖ Σᵢ |a_{ik}|. So, let j be such that

$$\sum_{i=1}^m |a_{ij}| = \max_k\sum_{i=1}^m |a_{ik}|.$$

Define x̄ by

$$\bar x_k = \begin{cases} 1 & \text{if } k = j, \\ 0 & \text{otherwise.} \end{cases}$$

Clearly ‖x̄‖₁ = 1. Furthermore,

$$\|A\bar x\|_1 = \sum_{i=1}^m\left|\sum_{k=1}^n a_{ik}\bar x_k\right| = \sum_{i=1}^m |a_{ij}| = \max_k\sum_{i=1}^m |a_{ik}|.$$

## 4. Concepts from Geometry

4.1
⇒: Let S = {x : Ax = b} be a linear variety. Let x, y ∈ S and α ∈ R. Then,

A(αx + (1 − α)y) = αAx + (1 − α)Ay = αb + (1 − α)b = b.

Therefore, αx + (1 − α)y ∈ S.
⇐: If S is empty, we are done. So, suppose x₀ ∈ S. Consider the set S₀ = S − x₀ = {x − x₀ : x ∈ S}.
Clearly, for all x, y ∈ S₀ and α ∈ R, we have αx + (1 − α)y ∈ S₀. Note that 0 ∈ S₀. We claim that S₀
is a subspace. To see this, let x, y ∈ S₀, and α ∈ R. Then, αx = αx + (1 − α)0 ∈ S₀. Furthermore,
½x + ½y ∈ S₀, and therefore x + y = 2(½x + ½y) ∈ S₀ by the previous argument. Hence, S₀ is a subspace. Therefore, by
Exercise 3.13, there exists A such that S₀ = N(A) = {x : Ax = 0}. Define b = Ax₀. Then,

S = S₀ + x₀ = {y + x₀ : y ∈ N(A)} = {y + x₀ : Ay = 0} = {y + x₀ : A(y + x₀) = b} = {x : Ax = b}.

4.2
Let u, v ∈ Ω = {x ∈ Rⁿ : ‖x‖ ≤ r}, and α ∈ [0, 1]. Suppose z = αu + (1 − α)v. To show that Ω is convex,
we need to show that z ∈ Ω, i.e., ‖z‖ ≤ r. To this end,

‖z‖² = (αu^⊤ + (1 − α)v^⊤)(αu + (1 − α)v) = α²‖u‖² + 2α(1 − α)u^⊤v + (1 − α)²‖v‖².

Since u, v ∈ Ω, we have ‖u‖² ≤ r² and ‖v‖² ≤ r². Furthermore, by the Cauchy-Schwarz Inequality, we have
u^⊤v ≤ ‖u‖‖v‖ ≤ r². Therefore,

‖z‖² ≤ α²r² + 2α(1 − α)r² + (1 − α)²r² = r².

Hence, z ∈ Ω, which implies that Ω is a convex set, i.e., any point on the line segment joining u and v
is also in Ω.
4.3
Let u, v ∈ Ω = {x ∈ Rⁿ : Ax = b}, and α ∈ [0, 1]. Suppose z = αu + (1 − α)v. To show that Ω is convex,
we need to show that z ∈ Ω, i.e., Az = b. To this end,

Az = A(αu + (1 − α)v) = αAu + (1 − α)Av.

Since u, v ∈ Ω, we have Au = b and Av = b. Therefore,

Az = αb + (1 − α)b = b,

and hence z ∈ Ω.
4.4
Let u, v ∈ Ω = {x ∈ Rⁿ : x ≥ 0}, and α ∈ [0, 1]. Suppose z = αu + (1 − α)v. To show that Ω is convex,
we need to show that z ∈ Ω, i.e., z ≥ 0. To this end, write u = [u₁, …, uₙ]^⊤, v = [v₁, …, vₙ]^⊤, and
z = [z₁, …, zₙ]^⊤. Then, zᵢ = αuᵢ + (1 − α)vᵢ, i = 1, …, n. Since uᵢ, vᵢ ≥ 0 and α, 1 − α ≥ 0, we have zᵢ ≥ 0.
Therefore, z ≥ 0, and hence z ∈ Ω.

## 5. Elements of Calculus
5.1
Observe that

‖Aᵏ‖ ≤ ‖Aᵏ⁻¹‖‖A‖ ≤ ‖Aᵏ⁻²‖‖A‖² ≤ ⋯ ≤ ‖A‖ᵏ.

Therefore, if ‖A‖ < 1, then limₖ→∞ ‖Aᵏ‖ = 0, which implies that limₖ→∞ Aᵏ = O.
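The claim can be sketched numerically using the induced ∞-norm on an arbitrary small matrix with norm below 1:

```python
def matmul(a, b):
    n = len(a)
    return [[sum(a[i][t] * b[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0.5, 0.2], [0.1, 0.4]]
assert max(sum(abs(v) for v in row) for row in A) < 1   # ||A||_inf < 1

P = A
for _ in range(50):
    P = matmul(P, A)                                    # P = A^51
assert all(abs(v) < 1e-8 for row in P for v in row)     # A^k -> O
```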
5.2
For the case when A has all real eigenvalues, the proof is simple. Let λ be the eigenvalue of A with largest
absolute value, and x the corresponding (normalized) eigenvector, i.e., Ax = λx and ‖x‖ = 1. Then,

|λ| = |λ|‖x‖ = ‖λx‖ = ‖Ax‖ ≤ ‖A‖‖x‖ = ‖A‖,

which completes the proof for this case.
In general, the eigenvalues of A and the corresponding eigenvectors may be complex. In this case, we
proceed as follows. Consider the matrix

$$B = \frac{A}{\|A\| + \epsilon},$$

where ε is a positive real number. We have

$$\|B\| = \frac{\|A\|}{\|A\| + \epsilon} < 1.$$

By Exercise 5.1, Bᵏ → O as k → ∞, and thus by Lemma 5.1, |λᵢ(B)| < 1, i = 1, …, n. On the other hand,
for each i = 1, …, n,

$$\lambda_i(B) = \frac{\lambda_i(A)}{\|A\| + \epsilon},$$

and thus

$$|\lambda_i(B)| = \frac{|\lambda_i(A)|}{\|A\| + \epsilon} < 1,$$

which gives

|λᵢ(A)| < ‖A‖ + ε.

Since the above arguments hold for any ε > 0, we have |λᵢ(A)| ≤ ‖A‖.
5.3
a. ∇f(x) = (b^⊤x)a + (a^⊤x)b.
b. F(x) = ab^⊤ + ba^⊤.
5.4
We have

Df(x) = [x₁/3, x₂/2],

and

$$\frac{dg}{dt}(t) = \begin{bmatrix} 3 \\ 2 \end{bmatrix}.$$

By the chain rule,

$$\frac{d}{dt}F(t) = Df(g(t))\frac{dg}{dt}(t) = [(3t+5)/3,\ (2t-6)/2]\begin{bmatrix} 3 \\ 2 \end{bmatrix} = 5t - 1.$$
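The chain-rule computation can be checked by finite differences, taking f(x) = x₁²/6 + x₂²/4 and g(t) = [3t + 5, 2t − 6]^⊤ as functions consistent with the derivatives shown (constants could differ in the original exercise):

```python
def f(x1, x2):
    return x1 ** 2 / 6 + x2 ** 2 / 4      # so Df = [x1/3, x2/2]

def g(t):
    return (3 * t + 5, 2 * t - 6)         # so dg/dt = [3, 2]

def F(t):
    return f(*g(t))

h = 1e-6
for t in (-1.0, 0.0, 2.5):
    numeric = (F(t + h) - F(t - h)) / (2 * h)   # central difference
    assert abs(numeric - (5 * t - 1)) < 1e-4    # matches d/dt F = 5t - 1
```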

5.5
We have

Df(x) = [x₂/2, x₁/2],

and

$$\frac{\partial g}{\partial s}(s,t) = \begin{bmatrix} 4 \\ 2 \end{bmatrix}, \qquad \frac{\partial g}{\partial t}(s,t) = \begin{bmatrix} 3 \\ 1 \end{bmatrix}.$$

By the chain rule,

$$\frac{\partial}{\partial s}f(g(s,t)) = Df(g(s,t))\frac{\partial g}{\partial s}(s,t) = \frac{1}{2}[2s+t,\ 4s+3t]\begin{bmatrix} 4 \\ 2 \end{bmatrix} = 8s + 5t,$$

and

$$\frac{\partial}{\partial t}f(g(s,t)) = Df(g(s,t))\frac{\partial g}{\partial t}(s,t) = \frac{1}{2}[2s+t,\ 4s+3t]\begin{bmatrix} 3 \\ 1 \end{bmatrix} = 5s + 3t.$$

5.6
We have

Df(x) = [3x₁²x₂x₃² + x₂, x₁³x₃² + x₁, 2x₁³x₂x₃ + 1]

and

$$\frac{dx}{dt}(t) = \begin{bmatrix} e^t + 3t^2 \\ 2t \\ 1 \end{bmatrix}.$$

By the chain rule,

$$\frac{d}{dt}f(x(t)) = Df(x(t))\frac{dx}{dt}(t) = [3x_1(t)^2x_2(t)x_3(t)^2 + x_2(t),\ x_1(t)^3x_3(t)^2 + x_1(t),\ 2x_1(t)^3x_2(t)x_3(t) + 1]\begin{bmatrix} e^t + 3t^2 \\ 2t \\ 1 \end{bmatrix}$$

$$= 12t(e^t + 3t^2)^3 + 2te^t + 6t^2 + 2t + 1.$$

5.7
Let ε > 0 be given. Since f(x) = o(g(x)), we have

$$\lim_{x \to 0} \frac{\|f(x)\|}{g(x)} = 0.$$

Hence, there exists δ > 0 such that for all x with ‖x‖ < δ, x ≠ 0,

$$\frac{\|f(x)\|}{g(x)} < \epsilon,$$

which can be rewritten as

‖f(x)‖ ≤ εg(x).
5.8
By Exercise 5.7, there exists δ > 0 such that if ‖x‖ < δ, x ≠ 0, then |o(g(x))| < −g(x)/2. Hence, if ‖x‖ < δ, x ≠ 0,
then

$$f(x) \le g(x) + |o(g(x))| < g(x) - \frac{g(x)}{2} = \frac{1}{2}g(x) < 0.$$
5.9
We have that

{x : f₁(x) = 12} = {x : x₁² − x₂² = 12},

and

{x : f₂(x) = 16} = {x : x₂ = 8/x₁}.

To find the intersection points, we substitute x₂ = 8/x₁ into x₁² − x₂² = 12 to get x₁⁴ − 12x₁² − 64 = 0.
Solving gives x₁² = 16 or x₁² = −4. Clearly, the only two possibilities for x₁ are x₁ = +4 and x₁ = −4, from which we obtain
x₂ = +2 and x₂ = −2, respectively. Hence, the intersection points are located at [4, 2]^⊤ and [−4, −2]^⊤.
The level sets {x : f₁(x₁, x₂) = 12} (a hyperbola) and {x : f₂(x₁, x₂) = 16} are sketched in the figure, intersecting at (4, 2) and (−4, −2).
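The intersection points can be verified directly:

```python
# Check that (4, 2) and (-4, -2) lie on both level sets:
#   x1^2 - x2^2 = 12   and   x2 = 8/x1 (i.e., x1*x2 = 8).
for x1, x2 in [(4, 2), (-4, -2)]:
    assert x1 ** 2 - x2 ** 2 == 12
    assert x1 * x2 == 8
# And x1 = +/-4 solves the quartic x1^4 - 12*x1^2 - 64 = 0:
for x1 in (4, -4):
    assert x1 ** 4 - 12 * x1 ** 2 - 64 == 0
```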

5.10
a. We have

$$f(x) = f(x_0) + Df(x_0)(x - x_0) + \frac{1}{2}(x - x_0)^\top D^2f(x_0)(x - x_0) + \cdots.$$

We compute

Df(x) = [e^{x₂}, x₁e^{x₂} + 1],

$$D^2f(x) = \begin{bmatrix} 0 & e^{x_2} \\ e^{x_2} & x_1e^{x_2} \end{bmatrix}.$$

Hence, expanding about x₀ = [1, 0]^⊤,

$$f(x) = 2 + [1, 2]\begin{bmatrix} x_1 - 1 \\ x_2 \end{bmatrix} + \frac{1}{2}[x_1 - 1, x_2]\begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} x_1 - 1 \\ x_2 \end{bmatrix} + \cdots = 1 + x_1 + x_2 + x_1x_2 + \frac{1}{2}x_2^2 + \cdots.$$

b. We compute

Df(x) = [4x₁³ + 4x₁x₂², 4x₁²x₂ + 4x₂³],

$$D^2f(x) = \begin{bmatrix} 12x_1^2 + 4x_2^2 & 8x_1x_2 \\ 8x_1x_2 & 4x_1^2 + 12x_2^2 \end{bmatrix}.$$

Expanding f about the point x₀ = [1, 1]^⊤ yields

$$f(x) = 4 + [8, 8]\begin{bmatrix} x_1 - 1 \\ x_2 - 1 \end{bmatrix} + \frac{1}{2}[x_1 - 1, x_2 - 1]\begin{bmatrix} 16 & 8 \\ 8 & 16 \end{bmatrix}\begin{bmatrix} x_1 - 1 \\ x_2 - 1 \end{bmatrix} + \cdots = 8x_1^2 + 8x_2^2 - 16x_1 - 16x_2 + 8x_1x_2 + 12 + \cdots.$$

c. We compute

Df(x) = [e^{x₁−x₂} + e^{x₁+x₂} + 1, −e^{x₁−x₂} + e^{x₁+x₂} + 1],

$$D^2f(x) = \begin{bmatrix} e^{x_1-x_2} + e^{x_1+x_2} & -e^{x_1-x_2} + e^{x_1+x_2} \\ -e^{x_1-x_2} + e^{x_1+x_2} & e^{x_1-x_2} + e^{x_1+x_2} \end{bmatrix}.$$

Expanding f about the point x₀ = [1, 0]^⊤ yields

$$f(x) = 2 + 2e + [2e + 1, 1]\begin{bmatrix} x_1 - 1 \\ x_2 \end{bmatrix} + \frac{1}{2}[x_1 - 1, x_2]\begin{bmatrix} 2e & 0 \\ 0 & 2e \end{bmatrix}\begin{bmatrix} x_1 - 1 \\ x_2 \end{bmatrix} + \cdots = 1 + x_1 + x_2 + e(1 + x_1^2 + x_2^2) + \cdots.$$

## 6. Basics of Unconstrained Optimization
6.1
a. In this case, x* is definitely not a local minimizer. To see this, note that d = [1, 2]^⊤ is a feasible
direction at x*. However, d^⊤∇f(x*) = −1 < 0, which violates the FONC.
b. In this case, x* satisfies the FONC, and thus is possibly a local minimizer, but it is impossible to be
definite based on the given information.
c. In this case, x* satisfies the SOSC, and thus is definitely a (strict) local minimizer.
d. In this case, x* is definitely not a local minimizer. To see this, note that d = [0, 1]^⊤ is a feasible direction
at x*, and d^⊤∇f(x*) = 0. However, d^⊤F(x*)d = −1 < 0, which violates the SONC.
6.2
Because there are no constraints on x₁ or x₂, we can utilize conditions for unconstrained optimization. To
proceed, we first compute the function gradient and find the critical points, that is, the points that satisfy
the FONC ∇f(x₁, x₂) = 0. The components of the gradient ∇f(x₁, x₂) are

$$\frac{\partial f}{\partial x_1} = x_1^2 - 4 \quad\text{and}\quad \frac{\partial f}{\partial x_2} = x_2^2 - 16.$$

Thus there are four critical points:

$$x^{(1)} = \begin{bmatrix} 2 \\ 4 \end{bmatrix}, \quad x^{(2)} = \begin{bmatrix} 2 \\ -4 \end{bmatrix}, \quad x^{(3)} = \begin{bmatrix} -2 \\ 4 \end{bmatrix}, \quad\text{and}\quad x^{(4)} = \begin{bmatrix} -2 \\ -4 \end{bmatrix}.$$

We next compute the Hessian matrix of the function f:

$$F(x) = \begin{bmatrix} 2x_1 & 0 \\ 0 & 2x_2 \end{bmatrix}.$$

Note that F(x⁽¹⁾) > 0 and therefore x⁽¹⁾ is a strict local minimizer. Next, F(x⁽⁴⁾) < 0 and therefore
x⁽⁴⁾ is a strict local maximizer. The Hessian is indefinite at x⁽²⁾ and x⁽³⁾, so these points are neither
maximizers nor minimizers.
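The classification can be sketched programmatically: enumerate the roots of the gradient components and classify by the diagonal Hessian entries:

```python
from itertools import product

# Gradient components: x1^2 - 4 and x2^2 - 16, so roots are +/-2 and +/-4.
critical = list(product((-2, 2), (-4, 4)))
kinds = {}
for x1, x2 in critical:
    h1, h2 = 2 * x1, 2 * x2                 # Hessian eigenvalues (diagonal)
    if h1 > 0 and h2 > 0:
        kinds[(x1, x2)] = "min"
    elif h1 < 0 and h2 < 0:
        kinds[(x1, x2)] = "max"
    else:
        kinds[(x1, x2)] = "saddle"
assert kinds[(2, 4)] == "min"
assert kinds[(-2, -4)] == "max"
assert kinds[(2, -4)] == kinds[(-2, 4)] == "saddle"
```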
6.3
Suppose x* is a global minimizer of f over Ω, and x* ∈ Ω′ ⊆ Ω. Let x ∈ Ω′. Then, x ∈ Ω and therefore
f(x*) ≤ f(x). Hence, x* is a global minimizer of f over Ω′.
6.4
Suppose x* is an interior point of Ω. Then, there exists ε > 0 such that {y : ‖y − x*‖ < ε} ⊆ Ω. Since
x* is a local minimizer of f over Ω, there exists ε′ > 0 such that f(x*) ≤ f(x) for all x ∈ Ω ∩ {y : ‖y − x*‖ < ε′}.
Take ε″ = min(ε, ε′). Then, {y : ‖y − x*‖ < ε″} ⊆ Ω, and f(x*) ≤ f(x) for all x ∈ {y : ‖y − x*‖ < ε″}.
Thus, x* is a local minimizer of f over Ω′.
To show that we cannot make the same conclusion if x* is not an interior point, let Ω = {0}, Ω′ = [−1, 1],
and f(x) = x. Clearly, 0 is a local minimizer of f over Ω. However, 0 ∈ Ω′ is not a local minimizer of f
over Ω′.
6.5
a. The TONC is: if f″(0) = 0, then f‴(0) = 0. To prove this, suppose f″(0) = 0. Now, by the FONC, we
also have f′(0) = 0. Hence, by Taylor's theorem,

$$f(x) = f(0) + \frac{f'''(0)}{3!}x^3 + o(x^3).$$

Since 0 is a local minimizer, f(x) ≥ f(0) for all x sufficiently close to 0. Hence, for all such x,

$$\frac{f'''(0)}{3!}x^3 \ge -o(x^3).$$

Now, if x > 0, then

$$f'''(0) \ge -3!\,\frac{o(x^3)}{x^3},$$

which implies that f‴(0) ≥ 0. On the other hand, if x < 0, then

$$f'''(0) \le -3!\,\frac{o(x^3)}{x^3},$$

which implies that f‴(0) ≤ 0. This implies that f‴(0) = 0, as required.
b. Let f(x) = −x⁴. Then, f′(0) = 0, f″(0) = 0, and f‴(0) = 0, which means that the FONC, SONC, and
TONC are all satisfied. However, 0 is not a local minimizer: f(x) < 0 for all x ≠ 0.
c. The answer is yes. To see this, we first write

$$f(x) = f(0) + f'(0)x + \frac{f''(0)}{2}x^2 + \frac{f'''(0)}{3!}x^3.$$

Now, if the FONC is satisfied, then

$$f(x) = f(0) + \frac{f''(0)}{2}x^2 + \frac{f'''(0)}{3!}x^3.$$

Moreover, if the SONC is satisfied, then either (i) f″(0) > 0 or (ii) f″(0) = 0. In case (i), it is clear from
the above equation that f(x) ≥ f(0) for all x sufficiently close to 0 (because the third term on the right-hand
side is o(x²)). In case (ii), the TONC implies that f(x) = f(0) for all x. In either case, f(x) ≥ f(0) for
all x sufficiently close to 0. This shows that 0 is a local minimizer.
6.6
a. The TONC is: if f′(0) = 0 and f″(0) = 0, then f‴(0) ≥ 0. To prove this, suppose f′(0) = 0 and
f″(0) = 0. By Taylor's theorem, for x ≥ 0,

$$f(x) = f(0) + \frac{f'''(0)}{3!}x^3 + o(x^3).$$

Since 0 is a local minimizer, f(x) ≥ f(0) for sufficiently small x ≥ 0. Hence, for all x > 0 sufficiently small,

$$f'''(0) \ge -3!\,\frac{o(x^3)}{x^3}.$$

This implies that f‴(0) ≥ 0, as required.
b. Let f(x) = −x⁴. Then, f′(0) = 0, f″(0) = 0, and f‴(0) = 0, which means that the FONC, SONC, and
TONC are all satisfied. However, 0 is not a local minimizer: f(x) < 0 for all x > 0.
6.7
For convenience, let z′ = x₀ + arg minₓ∈Ω f(x). Thus we want to show that z′ = arg min_{y∈Ω′} f(y − x₀); i.e., for
all y ∈ Ω′, f(y − x₀) ≥ f(z′ − x₀). So fix y ∈ Ω′. Then, y − x₀ ∈ Ω. Hence,

$$f(y - x_0) \ge \min_{x \in \Omega} f(x) = f\left(\arg\min_{x \in \Omega} f(x)\right) = f(z' - x_0),$$

which completes the proof.
6.8
a. The gradient and Hessian of f are

$$\nabla f(x) = 2\begin{bmatrix} 1 & 3 \\ 3 & 7 \end{bmatrix}x + \begin{bmatrix} 3 \\ 5 \end{bmatrix}, \qquad F(x) = 2\begin{bmatrix} 1 & 3 \\ 3 & 7 \end{bmatrix}.$$

Hence, ∇f([1, 1]^⊤) = [11, 25]^⊤, and F([1, 1]^⊤) is as shown above.
b. The direction of maximal rate of increase is the direction of the gradient. Hence, the directional derivative
with respect to a unit vector in this direction is

$$\left(\frac{\nabla f(x)}{\|\nabla f(x)\|}\right)^\top \nabla f(x) = \frac{(\nabla f(x))^\top \nabla f(x)}{\|\nabla f(x)\|} = \|\nabla f(x)\|.$$

At x = [1, 1]^⊤, we have ‖∇f([1, 1]^⊤)‖ = √(11² + 25²) = 27.31.
c. The FONC in this case is ∇f(x) = 0. Solving, we get

$$x = \begin{bmatrix} 3/2 \\ -1 \end{bmatrix}.$$

The point above does not satisfy the SONC because the Hessian is not positive semidefinite (its determinant
is negative).
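Parts a–c can be checked numerically, using the gradient 2[[1, 3], [3, 7]]x + [3, 5]^⊤ given in the solution:

```python
import math

def grad(x):
    # gradient of f: 2 * [[1, 3], [3, 7]] x + [3, 5]
    return [2 * (x[0] + 3 * x[1]) + 3, 2 * (3 * x[0] + 7 * x[1]) + 5]

assert grad([1, 1]) == [11, 25]                       # part a
assert abs(math.hypot(11, 25) - 27.31) < 0.01         # part b: ||grad|| = sqrt(746)
x_star = [1.5, -1.0]                                  # part c: FONC solution
assert grad(x_star) == [0, 0]
# SONC fails: det of the Hessian 2*[[1,3],[3,7]] is 4*(7 - 9) = -8 < 0.
assert 4 * (1 * 7 - 3 * 3) < 0
```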
6.9
a. A differentiable function f decreases most rapidly in the direction of the negative gradient. In our problem,

$$\nabla f(x) = \left[\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}\right]^\top = [2x_1x_2 + x_2^3,\ x_1^2 + 3x_1x_2^2]^\top.$$

Hence, the direction of most rapid decrease is

$$-\nabla f(x^{(0)}) = -[5, 10]^\top.$$

b. The rate of increase of f at x⁽⁰⁾ in the direction ∇f(x⁽⁰⁾) is

$$\frac{\nabla f(x^{(0)})^\top \nabla f(x^{(0)})}{\|\nabla f(x^{(0)})\|} = \|\nabla f(x^{(0)})\| = \sqrt{125} = 5\sqrt{5}.$$

c. The rate of increase of f at x⁽⁰⁾ in the direction d is

$$\nabla f(x^{(0)})^\top \frac{d}{\|d\|} = [5, 10]\,\frac{1}{5}\begin{bmatrix} 3 \\ 4 \end{bmatrix} = 11.$$

6.10
a. We can rewrite f as

$$f(x) = \frac{1}{2}x^\top\begin{bmatrix} 4 & 4 \\ 4 & 2 \end{bmatrix}x + x^\top\begin{bmatrix} 3 \\ 4 \end{bmatrix} + 7.$$

The gradient and Hessian of f are

$$\nabla f(x) = \begin{bmatrix} 4 & 4 \\ 4 & 2 \end{bmatrix}x + \begin{bmatrix} 3 \\ 4 \end{bmatrix}, \qquad F(x) = \begin{bmatrix} 4 & 4 \\ 4 & 2 \end{bmatrix}.$$

Hence, the directional derivative at [0, 1]^⊤ in the direction [1, 0]^⊤ is

[1, 0]∇f([0, 1]^⊤) = 7.

b. The FONC in this case is ∇f(x) = 0. The only point satisfying the FONC is

$$x = \frac{1}{4}\begin{bmatrix} -5 \\ 2 \end{bmatrix}.$$

The point above does not satisfy the SONC because the Hessian is not positive semidefinite (its determinant
is negative). Therefore, f does not have a minimizer.
6.11
a. Write the objective function as f(x) = −x₂². In this problem the only feasible directions at 0 are of the
form d = [d₁, 0]^⊤. Hence, d^⊤∇f(0) = 0 for all feasible directions d at 0.
b. The point 0 is a local maximizer, because f(0) = 0, while any feasible point x satisfies f(x) ≤ 0.
The point 0 is not a strict local maximizer because for any x of the form x = [x₁, 0]^⊤, we have f(x) =
0 = f(0), and there are such points in any neighborhood of 0.
The point 0 is not a local minimizer because for any point x of the form x = [x₁, x₁²]^⊤ with x₁ > 0, we
have f(x) = −x₁⁴ < 0, and there are such points in any neighborhood of 0. Since 0 is not a local minimizer,
it is also not a strict local minimizer.
6.12
a. We have ∇f(x*) = [0, 5]^T. The only feasible directions at x* are of the form d = [d1, d2]^T with d2 ≥ 0. Therefore, for such feasible directions, d^T∇f(x*) = 5d2 ≥ 0. Hence, x* = [0, 1]^T satisfies the first order necessary condition.
b. We have F(x*) = O. Therefore, for any d, d^T F(x*)d ≥ 0. Hence, x* = [0, 1]^T satisfies the second order necessary condition.
c. Consider points of the form x = [x1, x1² + 1]^T, x1 ∈ R. Such points are in Ω, and are arbitrarily close to x*. However, for such points x ≠ x*, f(x) < f(x*). Hence, x* is not a local minimizer.

6.13
a. We have ∇f(x*) = [−3, 0]^T. The only feasible directions at x* are of the form d = [d1, d2]^T with d1 ≤ 0. Therefore, for such feasible directions, d^T∇f(x*) = −3d1 ≥ 0. Hence, x* = [2, 0]^T satisfies the first order necessary condition.
b. We have F(x*) = O. Therefore, for any d, d^T F(x*)d ≥ 0. Hence, x* = [2, 0]^T satisfies the second order necessary condition.
c. Yes, x* is a local minimizer. To see this, notice that any feasible point x = [x1, x2]^T ≠ x* is such that x1 < 2. Hence, for such points x ≠ x*, f(x) > f(x*). In fact, x* is a strict local minimizer.

6.14
a. We have ∇f(x) = [0, 1]^T, which is nonzero everywhere. Hence, no interior point satisfies the FONC. Moreover, any boundary point with a feasible direction d such that d2 < 0 cannot satisfy the FONC, because for such a d, d^T∇f(x) = d2 < 0. By drawing a picture, it is easy to see that the only boundary point remaining is x* = [0, 1]^T. For this point, any feasible direction satisfies d2 ≥ 0. Hence, for any feasible direction, d^T∇f(x*) = d2 ≥ 0. Hence, x* = [0, 1]^T satisfies the FONC, and is the only such point.
b. We have F(x) = O. So any point (and in particular x* = [0, 1]^T) satisfies the SONC.
c. The point x* = [0, 1]^T is not a local minimizer. To see this, consider points of the form x = [√(1 − x2²), x2]^T where x2 ∈ [1/2, 1). It is clear that such points are feasible, and are arbitrarily close to x* = [0, 1]^T. However, for such points, f(x) = x2 < 1 = f(x*).
6.15
a. We have ∇f(x*) = [3, 0]^T. The only feasible directions at x* are of the form d = [d1, d2]^T with d1 ≥ 0. Therefore, for such feasible directions, d^T∇f(x*) = 3d1 ≥ 0. Hence, x* = [2, 0]^T satisfies the first order necessary condition.
b. We have F(x*) = O. Therefore, for any d, d^T F(x*)d ≥ 0. Hence, x* = [2, 0]^T satisfies the second order necessary condition.
c. Consider points of the form x = [x2² + 2, x2]^T, x2 ∈ R. Such points are in Ω, and could be arbitrarily close to x*. However, for such points x ≠ x*, f(x) < f(x*). Hence, x* is not a local minimizer.

6.16
a. We have ∇f(x*) = 0. Therefore, for any feasible direction d at x*, we have d^T∇f(x*) = 0. Hence, x* satisfies the first-order necessary condition.
b. We have

F(x*) = [−8 0; 0 2].

Any feasible direction d at x* has the form d = [d1, d2]^T where d2 ≥ 2d1, d1, d2 ≥ 0. Therefore, for any feasible direction d at x*, we have

d^T F(x*)d = −8d1² + 2d2² ≥ −8d1² + 2(2d1)² = 0.

Hence, x* satisfies the second-order necessary condition.
c. We have f(x*) = 0. Any point of the form x = [x1, x1² + 2x1]^T, x1 > 0, is feasible and has objective function value given by

f(x) = 4x1² − (x1² + 2x1)² = −(x1⁴ + 4x1³) < 0 = f(x*).

Moreover, there are such points in any neighborhood of x*. Therefore, the point x* is not a local minimizer.
6.17
a. We have ∇f(x) = [1/x1, 1/x2]^T. If x* were an interior point, then ∇f(x*) = 0. But this is clearly impossible. Therefore, x* cannot possibly be an interior point.
b. We have F(x) = diag[−1/x1², −1/x2²], which is negative definite everywhere. Therefore, the second-order necessary condition is satisfied everywhere. (Note that because we have a maximization problem, negative definiteness is the relevant condition.)
6.18
Given x ∈ R, let

f(x) = Σ_{i=1}^n (x − xi)²,

so that x̄ is the minimizer of f. By the FONC,

f′(x̄) = 0,

and hence

Σ_{i=1}^n 2(x̄ − xi) = 0,

which on solving gives

x̄ = (1/n) Σ_{i=1}^n xi.
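As a quick numerical sanity check of the FONC computation above, the Python sketch below (with illustrative data values, not from the text) compares the sample mean against a brute-force grid search for the minimizer of f:

```python
# Sanity check: the sample mean minimizes f(x) = sum_i (x - x_i)^2.
# The data values below are illustrative, not from the text.

def f(x, xs):
    return sum((x - xi) ** 2 for xi in xs)

xs = [1.0, 2.0, 4.0, 7.0]
xbar = sum(xs) / len(xs)  # candidate minimizer from the FONC

# Brute-force scan over a fine grid.
grid = [i / 1000.0 for i in range(-10000, 10001)]
best = min(grid, key=lambda x: f(x, xs))

print(xbar, best)  # both 3.5
```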
6.19
Let θ1 be the angle from the horizontal to the bottom of the picture, and θ2 the angle from the horizontal to the top of the picture. Then, tan θ = (tan θ2 − tan θ1)/(1 + tan θ2 tan θ1). Now, tan θ1 = b/x and tan θ2 = (a + b)/x. Hence, the objective function that we wish to maximize is

f(x) = ((a + b)/x − b/x)/(1 + b(a + b)/x²) = a/(x + b(a + b)/x).

We have

f′(x) = (a/(x + b(a + b)/x)²)(b(a + b)/x² − 1).

Let x* be the optimal distance. Then, by the FONC, we have f′(x*) = 0, which gives

b(a + b)/(x*)² − 1 = 0  ⟹  x* = √(b(a + b)).
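A numerical spot check of the result (Python, with illustrative values a = 2 and b = 1, which are not from the text):

```python
import math

# Spot check that x* = sqrt(b(a+b)) maximizes f(x) = a/(x + b(a+b)/x).
a, b = 2.0, 1.0

def f(x):
    return a / (x + b * (a + b) / x)

xstar = math.sqrt(b * (a + b))  # sqrt(3) for these values
print(xstar, f(xstar))
```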

6.20
The squared distance from the sensor to the baby's heart is 1 + x², while the squared distance from the sensor to the mother's heart is 1 + (2 − x)². Therefore, the signal-to-noise ratio is

f(x) = (1 + (2 − x)²)/(1 + x²).

We have

f′(x) = (−2(2 − x)(1 + x²) − 2x(1 + (2 − x)²))/(1 + x²)² = 4(x² − 2x − 1)/(1 + x²)².

By the FONC, at the optimal position x*, we have f′(x*) = 0. Hence, either x* = 1 − √2 or x* = 1 + √2. From the figure, it is easy to see that x* = 1 − √2 is the optimal position.
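The conclusion can be spot-checked numerically; the Python sketch below scans a grid and confirms that the maximum of f is attained near x* = 1 − √2:

```python
import math

# Grid check that x* = 1 - sqrt(2) maximizes the signal-to-noise ratio
# f(x) = (1 + (2 - x)^2) / (1 + x^2).

def f(x):
    return (1 + (2 - x) ** 2) / (1 + x ** 2)

xstar = 1 - math.sqrt(2)
grid = [i / 1000.0 for i in range(-5000, 5001)]
best = max(grid, key=f)
print(xstar, best)
```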
6.21
a. Let x be the decision variable. Write the total travel time as f(x), which is given by

f(x) = √(1 + x²)/v1 + √(1 + (d − x)²)/v2.

Differentiating the above expression, we get

f′(x) = x/(v1√(1 + x²)) − (d − x)/(v2√(1 + (d − x)²)).

By the first order necessary condition, the optimal path satisfies f′(x*) = 0, which corresponds to

x*/(v1√(1 + (x*)²)) = (d − x*)/(v2√(1 + (d − x*)²)),

or sin θ1/v1 = sin θ2/v2. Upon rearranging, we obtain the desired equation.
b. The second derivative of f is given by

f″(x) = 1/(v1(1 + x²)^{3/2}) + 1/(v2(1 + (d − x)²)^{3/2}).

Hence, f″(x*) > 0, which shows that the second order sufficient condition holds.
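The Snell's-law condition can also be verified numerically. The Python sketch below uses illustrative values v1 = 1, v2 = 2, d = 1 (not from the text), minimizes f on a grid, and compares sin θ1/v1 with sin θ2/v2:

```python
import math

# Numerically minimize the travel time and check sin(theta1)/v1 =
# sin(theta2)/v2 at the minimizer (v1, v2, d are illustrative values).
v1, v2, d = 1.0, 2.0, 1.0

def f(x):
    return math.sqrt(1 + x ** 2) / v1 + math.sqrt(1 + (d - x) ** 2) / v2

grid = [i / 10000.0 for i in range(0, 10001)]
xs = min(grid, key=f)

sin1 = xs / math.sqrt(1 + xs ** 2)              # sin(theta1)
sin2 = (d - xs) / math.sqrt(1 + (d - xs) ** 2)  # sin(theta2)
print(sin1 / v1, sin2 / v2)
```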
6.22
a. We have f(x) = U1(x1) + U2(x2) = a1x1 + a2x2 and Ω = {x : x1, x2 ≥ 0, x1 + x2 ≤ 1}. The set Ω is the triangle with vertices [0, 0]^T, [1, 0]^T, and [0, 1]^T (sketch omitted).

b. We have ∇f(x) = [a1, a2]^T. Because ∇f(x) ≠ 0 for all x, we conclude that no interior point satisfies the FONC. Next, consider any feasible point x for which x2 > 0. At such a point, the vector d = [1, −1]^T is a feasible direction. But then d^T∇f(x) = a1 − a2 > 0, which means that the FONC is violated (recall that the problem is to maximize f). So clearly the remaining candidates are those x for which x2 = 0. Among these, if x1 < 1, then d = [0, 1]^T is a feasible direction, in which case we have d^T∇f(x) = a2 > 0. This leaves the point x = [1, 0]^T. At this point, any feasible direction d satisfies d1 ≤ 0 and d2 ≤ −d1. Hence, for any feasible direction d, we have

d^T∇f(x) = d1a1 + d2a2 ≤ d1a1 + (−d1)a2 = d1(a1 − a2) ≤ 0.

So, the only feasible point that satisfies the FONC is [1, 0]^T.
c. We have F(x) = O ≤ 0. Hence, any point satisfies the SONC (again, recall that the problem is to maximize f).
6.23
We have

∇f(x) = [4(x1 − x2)³ + 2x1 − 2, −4(x1 − x2)³ − 2x2 + 2]^T.

Setting ∇f(x) = 0 we get

4(x1 − x2)³ + 2x1 − 2 = 0
−4(x1 − x2)³ − 2x2 + 2 = 0.

Adding the two equations, we obtain x1 = x2, and substituting back yields

x1 = x2 = 1.

Hence, the only point satisfying the FONC is [1, 1]^T.
We have

F(x) = [12(x1 − x2)² + 2, −12(x1 − x2)²; −12(x1 − x2)², 12(x1 − x2)² − 2].

Hence

F([1, 1]^T) = [2 0; 0 −2].

Since F([1, 1]^T) is not positive semidefinite, the point [1, 1]^T does not satisfy the SONC.
6.24
Suppose d is a feasible direction at x. Then, there exists α0 > 0 such that x + αd ∈ Ω for all α ∈ [0, α0]. Let β > 0 be given. Then, x + α(βd) ∈ Ω for all α ∈ [0, α0/β]. Since α0/β > 0, by definition βd is also a feasible direction at x.
6.25
⇒: Suppose d is feasible at x*. Then, there exists α > 0 such that x* + αd ∈ Ω, that is, A(x* + αd) = b. Since Ax* = b and α ≠ 0, we conclude that Ad = 0.
⇐: Suppose Ad = 0. Then, for any α ∈ [0, 1], we have αAd = 0. Adding this equation to Ax = b, we obtain A(x + αd) = b, that is, x + αd ∈ Ω for all α ∈ [0, 1]. Therefore, d is a feasible direction at x.
6.26
The vector d = [1, 1]^T is a feasible direction at 0. Now,

d^T∇f(0) = ∂f/∂x1(0) + ∂f/∂x2(0).

Since ∇f(0) ≤ 0 (componentwise) and ∇f(0) ≠ 0, then

d^T∇f(0) < 0.

Hence, by the FONC, 0 is not a local minimizer.
6.27
We have ∇f(x) = c ≠ 0. Therefore, for any x ∈ Ω, we have ∇f(x) ≠ 0. Hence, by Corollary 6.1, x cannot be a local minimizer (and therefore it cannot be a solution).
6.28
The objective function is f(x) = −c1x1 − c2x2. Therefore, ∇f(x) = [−c1, −c2]^T ≠ 0 for all x. Thus, by the FONC, the optimal solution x* cannot lie in the interior of the feasible set. Next, for all x ∈ L1 ∪ L2, d = [1, 1]^T is a feasible direction. Therefore, d^T∇f(x) = −c1 − c2 < 0. Hence, by the FONC, the optimal solution x* cannot lie in L1 ∪ L2. Lastly, for all x ∈ L3, d = [1, −1]^T is a feasible direction. Therefore, d^T∇f(x) = c2 − c1 < 0. Hence, by the FONC, the optimal solution x* cannot lie in L3. Therefore, by elimination, the unique optimal feasible solution must be [1, 0]^T.
6.29
a. Writing ⟨·⟩ for the sample average (so ⟨x⟩ = (1/n)Σ_{i=1}^n xi, ⟨x²⟩ = (1/n)Σ_{i=1}^n xi², and so on), we have

f(a, b) = (1/n) Σ_{i=1}^n (a²xi² + b² + yi² + 2xi ab − 2xi yi a − 2yi b)
= a²⟨x²⟩ + b² + 2⟨x⟩ab − 2⟨xy⟩a − 2⟨y⟩b + ⟨y²⟩
= [a, b] [⟨x²⟩ ⟨x⟩; ⟨x⟩ 1] [a; b] − 2[⟨xy⟩, ⟨y⟩][a; b] + ⟨y²⟩
= z^T Qz − 2c^T z + d,

where z, Q, c and d are defined in the obvious way.
b. If the point z* = [a*, b*]^T is a solution, then by the FONC, we have ∇f(z*) = 2Qz* − 2c = 0, which means Qz* = c. Now, since ⟨x²⟩ − ⟨x⟩² = (1/n)Σ_{i=1}^n (xi − ⟨x⟩)², and the xi are not all equal, then det Q = ⟨x²⟩ − ⟨x⟩² ≠ 0. Hence, Q is nonsingular, and hence

z* = Q^{−1}c = (1/(⟨x²⟩ − ⟨x⟩²)) [1 −⟨x⟩; −⟨x⟩ ⟨x²⟩] [⟨xy⟩; ⟨y⟩]
= [(⟨xy⟩ − ⟨x⟩⟨y⟩)/(⟨x²⟩ − ⟨x⟩²); (⟨x²⟩⟨y⟩ − ⟨x⟩⟨xy⟩)/(⟨x²⟩ − ⟨x⟩²)].

Since Q > 0, then by the SOSC, the point z* is a strict local minimizer. Since z* is the only point satisfying the FONC, then z* is the only local minimizer.
c. We have

a*⟨x⟩ + b* = ((⟨xy⟩ − ⟨x⟩⟨y⟩)/(⟨x²⟩ − ⟨x⟩²))⟨x⟩ + (⟨x²⟩⟨y⟩ − ⟨x⟩⟨xy⟩)/(⟨x²⟩ − ⟨x⟩²) = ⟨y⟩.
6.30
Given x ∈ R^n, let

f(x) = (1/p) Σ_{i=1}^p ‖x − x^(i)‖²

be the average squared error between x and x^(1), . . . , x^(p). We can rewrite f as

f(x) = (1/p) Σ_{i=1}^p (x − x^(i))^T (x − x^(i))
= x^T x − 2((1/p) Σ_{i=1}^p x^(i))^T x + (1/p) Σ_{i=1}^p ‖x^(i)‖².

So f is a quadratic function. Since x̄ is the minimizer of f, then by the FONC, ∇f(x̄) = 0, i.e.,

2x̄ − 2(1/p) Σ_{i=1}^p x^(i) = 0.

Hence, we get

x̄ = (1/p) Σ_{i=1}^p x^(i),

i.e., x̄ is just the average, or centroid, or center of gravity, of x^(1), . . . , x^(p).
The Hessian of f at x̄ is

F(x̄) = 2I_n,

which is positive definite. Hence, by the SOSC, x̄ is a strict local minimizer of f (in fact, it is a strict global minimizer because f is a convex quadratic function).
6.31
Fix any x ∈ Ω. The vector d = x − x* is feasible at x* (by convexity of Ω). By Taylor's formula, we have

f(x) = f(x*) + d^T∇f(x*) + o(‖d‖) ≥ f(x*) + c‖d‖ + o(‖d‖).

Therefore, for all x sufficiently close to x*, we have f(x) > f(x*). Hence, x* is a strict local minimizer.
6.32
Since f ∈ C², F(x*) = F^T(x*). Let d ≠ 0 be a feasible direction at x*. By Taylor's theorem,

f(x* + d) − f(x*) = d^T∇f(x*) + (1/2)d^T F(x*)d + o(‖d‖²).

Using conditions a and b, we get

f(x* + d) − f(x*) ≥ c‖d‖² + o(‖d‖²).

Therefore, for all d such that ‖d‖ is sufficiently small,

f(x* + d) > f(x*),

and the proof is completed.
6.33
Necessity follows from the FONC. To prove sufficiency, we write f as

f(x) = (1/2)(x − x*)^T Q(x − x*) − (1/2)x*^T Qx*,

where x* = Q^{−1}b is the unique vector satisfying the FONC. Clearly, since −(1/2)x*^T Qx* is a constant and Q > 0, then

f(x) ≥ f(x*) = −(1/2)x*^T Qx*,

and f(x) = f(x*) if and only if x = x*.
6.34
Write u = [u1, . . . , un]^T. We have

xn = ax_{n−1} + bu_n
= a(ax_{n−2} + bu_{n−1}) + bu_n
= a²x_{n−2} + abu_{n−1} + bu_n
⋮
= a^n x0 + a^{n−1}bu1 + ⋯ + abu_{n−1} + bu_n
= a^n x0 + c^T u,

where c = [a^{n−1}b, . . . , ab, b]^T. The objective is then a positive definite quadratic in u. The solution is therefore

u = −(q/(2r)) c,

or, equivalently, ui = −qa^{n−i}b/(2r), i = 1, . . . , n.

7. One-Dimensional Search Methods

7.1
The range reduction factor for 3 iterations of the Golden Section method is ((√5 − 1)/2)³ ≈ 0.236, while that of the Fibonacci method (with ε = 0) is 1/F_{3+1} = 0.2. Hence, if the desired range reduction factor is anywhere between 0.2 and 0.236 (e.g., 0.21), then the Golden Section method requires at least 4 iterations, while the Fibonacci method requires only 3. So, an example of a desired final uncertainty range is 0.21(8 − 5) = 0.63.
7.2
a. The plot of f(x) versus x on [1, 2] shows f decreasing from about 3.16 at x = 1 to a minimum of about 2.32 near x ≈ 1.9 (plot omitted).

b. The number of steps needed for the Golden Section method is computed from the inequality:

(0.61803)^N ≤ 0.2/(2 − 1), which gives N ≥ 3.34.
Therefore, the fewest possible number of steps is 4. Applying 4 steps of the Golden Section method, we end
up with an uncertainty interval of [a4 , b0 ] = [1.8541, 2.000]. The table with the results of the intermediate
steps is displayed below:

Iteration k ak bk f(ak) f(bk) New uncertainty interval

1 1.3820 1.6180 2.6607 2.4292 [1.3820,2]
2 1.6180 1.7639 2.4292 2.3437 [1.6180,2]
3 1.7639 1.8541 2.3437 2.3196 [1.7639,2]
4 1.8541 1.9098 2.3196 2.3171 [1.8541,2]

c. The number of steps needed for the Fibonacci method is computed from the inequality:

(1 + 2ε)/F_{N+1} ≤ 0.2/(2 − 1), which gives N ≥ 4.

Therefore, the fewest possible number of steps is 4. Applying 4 steps of the Fibonacci method, we end up
with an uncertainty interval of [a4 , b0 ] = [1.8750, 2.000]. The table with the results of the intermediate steps
is displayed below:

Iteration k ρk ak bk f(ak) f(bk) New unc. int.

1 0.3750 1.3750 1.6250 2.6688 2.4239 [1.3750,2]
2 0.4 1.6250 1.7500 2.4239 2.3495 [1.6250,2]
3 0.3333 1.7500 1.8750 2.3495 2.3175 [1.7500,2]
4 0.45 1.8750 1.8875 2.3175 2.3169 [1.8750,2]

d. We have f′(x) = 2x − 4 sin x, f″(x) = 2 − 4 cos x. Hence, Newton's algorithm takes the form:

x^(k+1) = x^(k) − (x^(k) − 2 sin x^(k))/(1 − 2 cos x^(k)).

Applying 4 iterations with x^(0) = 1, we get x^(1) = −7.4727, x^(2) = 14.4785, x^(3) = 6.9351, x^(4) = 16.6354. Apparently, Newton's method is not effective in this case.
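The erratic behavior is easy to reproduce. The Python sketch below runs the same Newton recursion (the underlying objective, consistent with f′ above, is f(x) = x² + 4 cos x):

```python
import math

# Newton's iteration x <- x - (x - 2 sin x)/(1 - 2 cos x), started at
# x0 = 1, wanders far from the interval [1, 2] instead of converging.
x = 1.0
iterates = []
for _ in range(4):
    x = x - (x - 2 * math.sin(x)) / (1 - 2 * math.cos(x))
    iterates.append(x)

print(iterates)  # roughly [-7.47, 14.48, 6.94, 16.64]
```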
7.3

a. We first create the M-file f.m as follows:

% f.m
function y=f(x)
y=8*exp(1-x)+7*log(x);

fplot('f',[1 2]);
xlabel('x');
ylabel('f(x)');

The resulting plot is as follows:

(plot omitted: f decreases from 8 at x = 1 to a minimum of about 7.68 near x ≈ 1.6, then increases toward x = 2)

b. The MATLAB routine for the Golden Section method is:

%Matlab routine for Golden Section Search

left=1;
right=2;
uncert=0.23;

rho=(3-sqrt(5))/2;

N=ceil(log(uncert/(right-left))/log(1-rho)) %print N

lower='a';
a=left+(1-rho)*(right-left);
f_a=f(a);
for i=1:N,
if lower=='a'
b=a
f_b=f_a
a=left+rho*(right-left)
f_a=f(a)
else
a=b
f_a=f_b
b=left+(1-rho)*(right-left)
f_b=f(b)
end %if
if f_a<f_b
right=b;
lower='a'
else
left=a;
lower='b'
end %if
New_Interval = [left,right]
end %for i
%------------------------------------------------------------------

Using the above routine, we obtain N = 4 and a final interval of [1.528, 1.674]. The table with the results
of the intermediate steps is displayed below:

Iteration k ak bk f (ak ) f (bk ) New uncertainty interval
1 1.382 1.618 7.7247 7.6805 [1.382,2]
2 1.618 1.764 7.6805 7.6995 [1.382,1.764]
3 1.528 1.618 7.6860 7.6805 [1.528,1.764]
4 1.618 1.674 7.6805 7.6838 [1.528,1.674]

c. The MATLAB routine for the Fibonacci method is:

%Matlab routine for Fibonacci Search technique

left=1;
right=2;
uncert=0.23;
epsilon=0.05;

F(1)=1;
F(2)=1;

N=0;
while F(N+2) < (1+2*epsilon)*(right-left)/uncert
F(N+3)=F(N+2)+F(N+1);
N=N+1;
end %while

N %print N

lower='a';
a=left+(F(N+1)/F(N+2))*(right-left);
f_a=f(a);

for i=1:N,
if i~=N
rho=1-F(N+2-i)/F(N+3-i)
else
rho=0.5-epsilon
end %if
if lower=='a'
b=a
f_b=f_a
a=left+rho*(right-left)
f_a=f(a)
else
a=b
f_a=f_b
b=left+(1-rho)*(right-left)
f_b=f(b)
end %if
if f_a<f_b
right=b;
lower='a'
else
left=a;
lower='b'
end %if
New_Interval = [left,right]
end %for i
%------------------------------------------------------------------

Using the above routine, we obtain N = 3 and a final interval of [1.58, 1.8]. The table with the results of
the intermediate steps is displayed below:
Iteration k ρk ak bk f(ak) f(bk) New uncertainty interval
1 0.4 1.4 1.6 7.7179 7.6805 [1.4,2]
2 0.333 1.6 1.8 7.6805 7.7091 [1.4,1.8]
3 0.45 1.58 1.6 7.6812 7.6805 [1.58,1.8]

7.4
Now, ρk = 1 − F_{N−k+1}/F_{N−k+2}. Hence,

1 − ρk/(1 − ρk) = 1 − (1 − F_{N−k+1}/F_{N−k+2})/(F_{N−k+1}/F_{N−k+2})
= 1 − (F_{N−k+2} − F_{N−k+1})/F_{N−k+1}
= 1 − F_{N−k}/F_{N−k+1}
= ρ_{k+1}.

To show that 0 ≤ ρk ≤ 1/2, we proceed by induction. Clearly ρ1 satisfies 0 ≤ ρ1 ≤ 1/2. Suppose 0 ≤ ρk ≤ 1/2, where k ∈ {1, . . . , N − 1}. Then,

1/2 ≤ 1 − ρk ≤ 1,

and hence

1 ≤ 1/(1 − ρk) ≤ 2.

Therefore,

ρk/(1 − ρk) ≤ 2ρk ≤ 1.

Since ρ_{k+1} = 1 − ρk/(1 − ρk), then

0 ≤ ρ_{k+1} ≤ 1/2

as required.
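Both the recursion and the bounds can be checked numerically; the Python sketch below uses the Fibonacci convention F0 = F1 = 1 from Exercise 7.5:

```python
# Check rho_{k+1} = 1 - rho_k/(1 - rho_k) for the Fibonacci parameters
# rho_k = 1 - F_{N-k+1}/F_{N-k+2}, with F_0 = F_1 = 1.
F = [1, 1]
for _ in range(20):
    F.append(F[-1] + F[-2])

N = 10
rho = [1 - F[N - k + 1] / F[N - k + 2] for k in range(1, N + 1)]
print(rho[0], rho[-1])  # rho_1 about 0.382, rho_N = 0.5
```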
7.5
We proceed by induction. For k = 2, we have F0 F3 F1 F2 = (1)(3) (1)(2) = 1 = (1)2 . Suppose
Fk2 Fk+1 Fk1 Fk = (1)k . Then,

## Fk1 Fk+2 Fk Fk+1 = Fk1 (Fk+1 + Fk ) (Fk1 + Fk2 )Fk+1

= Fk1 Fk Fk2 Fk+1
= (1)k
= (1)k+1 .
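A quick computational check of the identity (using the same convention F0 = F1 = 1):

```python
# Verify F_{k-2} F_{k+1} - F_{k-1} F_k = (-1)^k for many k.
F = [1, 1]
for _ in range(30):
    F.append(F[-1] + F[-2])

checks = [F[k - 2] * F[k + 1] - F[k - 1] * F[k] == (-1) ** k
          for k in range(2, 30)]
print(all(checks))  # True
```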

7.6
Define yk = Fk and zk = F_{k−1}. Then, we have

[y_{k+1}; z_{k+1}] = A [yk; zk],

where

A = [1 1; 1 0],

with initial condition

[y0; z0] = [1; 0].

We can write

Fn = yn = [1, 0][yn; zn] = [1, 0] A^n [1; 0].

Since A is symmetric, it can be diagonalized as

A = [u v] [λ1 0; 0 λ2] [u^T; v^T],

where

λ1 = (1 + √5)/2,  λ2 = (1 − √5)/2,

and

u = 5^{−1/4} [√(2/(√5 − 1)); √((√5 − 1)/2)],  v = 5^{−1/4} [√((√5 − 1)/2); −√(2/(√5 − 1))].

Therefore, we have

Fn = [1, 0](λ1^n uu^T + λ2^n vv^T)[1; 0] = u1²λ1^n + v1²λ2^n
= (1/√5) (((1 + √5)/2)^{n+1} − ((1 − √5)/2)^{n+1}).
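The closed-form expression can be checked against the recursion, as in the short Python sketch below:

```python
import math

# Compare F_n from the recursion (F_0 = F_1 = 1) with the closed form
# F_n = (phi1^(n+1) - phi2^(n+1)) / sqrt(5).
F = [1, 1]
for _ in range(30):
    F.append(F[-1] + F[-2])

s5 = math.sqrt(5.0)
phi1 = (1 + s5) / 2
phi2 = (1 - s5) / 2
closed = [(phi1 ** (n + 1) - phi2 ** (n + 1)) / s5 for n in range(30)]
print(closed[:5])  # 1, 1, 2, 3, 5 (up to rounding)
```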

7.7
The number log 2 is the root of the equation g(x) = 0, where g(x) = exp(x) − 2. The derivative of g is g′(x) = exp(x). Newton's method applied to this root-finding problem is

x^(k+1) = x^(k) − (exp(x^(k)) − 2)/exp(x^(k)) = x^(k) − 1 + 2 exp(−x^(k)).

Performing two iterations with x^(0) = 1, we get x^(1) = 0.7358 and x^(2) = 0.6940.
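The two iterations are easy to reproduce:

```python
import math

# Newton's method for g(x) = exp(x) - 2: x <- x - 1 + 2*exp(-x),
# starting from x0 = 1.
x = 1.0
xs = []
for _ in range(2):
    x = x - 1 + 2 * math.exp(-x)
    xs.append(x)

print(xs)  # approximately [0.7358, 0.6940]
```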

7.8
a. We compute g′(x) = 2e^x/(e^x + 1)². Therefore Newton's method of tangents for this problem takes the form

x^(k+1) = x^(k) − ((e^{x^(k)} − 1)/(e^{x^(k)} + 1)) / (2e^{x^(k)}/(e^{x^(k)} + 1)²)
= x^(k) − (e^{2x^(k)} − 1)/(2e^{x^(k)})
= x^(k) − sinh x^(k).

b. By symmetry, we need x^(1) = −x^(0) for cycling. Therefore, x^(0) must satisfy

−x^(0) = x^(0) − sinh x^(0).

The algorithm cycles if x^(0) = c, where c > 0 is the solution to 2c = sinh c.

c. The algorithm converges to 0 if and only if |x(0) | < c, where c is from part b.
7.9
The quadratic function that matches the given data x^(k), x^(k−1), x^(k−2), f(x^(k)), f(x^(k−1)), and f(x^(k−2)) can be computed by solving the following three linear equations for the parameters a, b, and c:

a(x^(k−i))² + bx^(k−i) + c = f(x^(k−i)),  i = 0, 1, 2.

Then, the algorithm is given by x^(k+1) = −b/(2a) (so, in fact, we only need to find the ratio of a and b). With some elementary algebra (e.g., using Cramer's rule without needing to calculate the determinant in the denominator), the algorithm can be written as:

x^(k+1) = (β12 f(x^(k)) + β20 f(x^(k−1)) + β01 f(x^(k−2))) / (2(δ12 f(x^(k)) + δ20 f(x^(k−1)) + δ01 f(x^(k−2)))),

where βij = (x^(k−i))² − (x^(k−j))² and δij = x^(k−i) − x^(k−j).
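The update can be implemented directly. In the Python sketch below (the function name is ours), a single step applied to an exactly quadratic f lands on its minimizer, as expected:

```python
# One step of the quadratic-fit method derived above: return the vertex
# -b/(2a) of the quadratic interpolating three points.
def quadfit_step(x0, x1, x2, f0, f1, f2):
    # beta_ij = x_i^2 - x_j^2 and delta_ij = x_i - x_j, as in the text
    b12, b20, b01 = x1**2 - x2**2, x2**2 - x0**2, x0**2 - x1**2
    d12, d20, d01 = x1 - x2, x2 - x0, x0 - x1
    return (b12 * f0 + b20 * f1 + b01 * f2) / (
        2 * (d12 * f0 + d20 * f1 + d01 * f2))

def f(x):
    return x**2 - 6 * x + 1  # minimizer at x = 3

step = quadfit_step(0.0, 1.0, 4.0, f(0.0), f(1.0), f(4.0))
print(step)  # 3.0
```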

7.10
a. A MATLAB routine for implementing the secant method is as follows.
function [x,v] = secant(g,xcurr,xnew,uncert);
%Matlab routine for finding root of g(x) using secant method
%
% secant;
% secant(g);
% secant(g,xcurr,xnew);
% secant(g,xcurr,xnew,uncert);
%
% x=secant;
% x=secant(g);
% x=secant(g,xcurr,xnew);
% x=secant(g,xcurr,xnew,uncert);
%
% [x,v]=secant;
% [x,v]=secant(g);
% [x,v]=secant(g,xcurr,xnew);
% [x,v]=secant(g,xcurr,xnew,uncert);
%
%The first variant finds the root of g(x) in the M-file g.m, with
%initial conditions 0 and 1, and uncertainty 10^(-5).
%The second variant finds the root of the function in the M-file specified
%by the string g, with initial conditions 0 and 1, and uncertainty 10^(-5).
%The third variant finds the root of the function in the M-file specified
%by the string g, with initial conditions specified by xcurr and xnew, and
%uncertainty 10^(-5).
%The fourth variant finds the root of the function in the M-file specified
%by the string g, with initial conditions specified by xcurr and xnew, and
%uncertainty specified by uncert.
%
%The next four variants returns the final value of the root as x.
%The last four variants returns the final value of the root as x, and
%the value of the function at the final value as v.

if nargin < 4
uncert=10^(-5);
if nargin < 3
if nargin == 1
xcurr=0;
xnew=1;
elseif nargin == 0
g='g';
else
disp('Cannot have 2 arguments.');
return;
end
end
end

g_curr=feval(g,xcurr);

while abs(xnew-xcurr)>abs(xcurr)*uncert,
xold=xcurr;
xcurr=xnew;
g_old=g_curr;
g_curr=feval(g,xcurr);
xnew=(g_curr*xold-g_old*xcurr)/(g_curr-g_old);
end %while

%print out solution and value of g(x)

if nargout >= 1
x=xnew;
if nargout == 2
v=feval(g,xnew);
end
else
final_point=xnew
value=feval(g,xnew)
end %if
%------------------------------------------------------------------

b. We get a solution of x = 0.0039671, with corresponding value g(x) = 9.908 × 10⁻⁸.

7.11

function alpha=linesearch_secant(grad,x,d)
%Line search using secant method

epsilon=10^(-4); %line search tolerance

max = 100; %maximum number of iterations
alpha_curr=0;
alpha=0.001;
dphi_zero=feval(grad,x)*d;
dphi_curr=dphi_zero;

i=0;
while abs(dphi_curr)>epsilon*abs(dphi_zero),
alpha_old=alpha_curr;
alpha_curr=alpha;
dphi_old=dphi_curr;
dphi_curr=feval(grad,x+alpha_curr*d)*d;
alpha=(dphi_curr*alpha_old-dphi_old*alpha_curr)/(dphi_curr-dphi_old);
i=i+1;
if (i >= max) & (abs(dphi_curr)>epsilon*abs(dphi_zero)),
disp('Line search terminating with number of iterations:');
disp(i);
break;
end
end %while
%------------------------------------------------------------------

7.12
a. We could carry out the bracketing using the one-dimensional function φ0(α) = f(x^(0) + αd^(0)), where d^(0) is the negative gradient at x^(0), as described in Section 7.8. The decision variable would be α. However, here we will directly represent the points in R² (which is equivalent, though unnecessary in general).
The uncertainty interval is calculated by the following procedure:

f(x) = (1/2) x^T [2 1; 1 2] x,  ∇f(x) = [2 1; 1 2] x.

Therefore,

d = −∇f(x^(0)) = −[2 1; 1 2][0.8; −0.25] = −[1.35; 0.3].

First, we begin calculating f(x^(0)) and x^(1):

f(x^(0)) = f([0.8; −0.25]) = 0.5025,
x^(1) = x^(0) + εd = [0.8; −0.25] + 0.075(−[1.35; 0.3]) = [0.6987; −0.2725].

Then, we proceed as follows to find the uncertainty interval:

f(x^(1)) = f([0.6987; −0.2725]) = 0.3721,
x^(2) = x^(0) + 2εd = [0.8; −0.25] + 0.15(−[1.35; 0.3]) = [0.5975; −0.2950],
f(x^(2)) = f([0.5975; −0.2950]) = 0.2678,
x^(3) = x^(0) + 4εd = [0.8; −0.25] + 0.3(−[1.35; 0.3]) = [0.3950; −0.3400],
f(x^(3)) = f([0.3950; −0.3400]) = 0.1373,
x^(4) = x^(0) + 8εd = [0.8; −0.25] + 0.6(−[1.35; 0.3]) = [−0.0100; −0.4300],
f(x^(4)) = f([−0.0100; −0.4300]) = 0.1893.

Between f(x^(3)) and f(x^(4)) the function increases, which means that the minimizer must occur in the interval [x^(2), x^(4)] = [[0.5975; −0.2950], [−0.0100; −0.4300]], with d = −[1.35; 0.3].
MATLAB code to solve the problem is listed next.

% Coded by David Schvartzman
% In our case we have:
Q = [2 1; 1 2];
x0 = [0.8; -.25];
e = 0.075;
f = zeros(1,10);
X = zeros(2,10);
x1 = x0;
d = -Q*x1;
f(1) = 0.5*x1'*Q*x1;
for i=2:10
X(:,i) = x1+e*d;
f(i) = 0.5*X(:,i)'*Q*X(:,i);
e = 2*e;
if(f(i) > f(i-1))
break;
end
end
% The interval is defined by:
a = X(:,i-2);
b = X(:,i);
str = sprintf(['The minimizer is located in: [a, b], where a = [%.4f; %.4f] ' ...
'and b = [%.4f; %.4f]'], a(1,1), a(2,1), b(1,1), b(2,1));
disp(str);

b. First, we determine the number of necessary iterations:
The initial uncertainty interval width is 0.6223. This width will be (0.6223)(0.618)^N after N stages. We choose N so that

(0.618)^N ≤ 0.01/0.6223  ⟹  N = 9.

We show the first iteration of the algorithm; the rest are analogous and shown in the following table. From part a, we know that [a0, b0] = [x^(2), x^(4)], then:

[a0, b0] = [[0.5975; −0.2950], [−0.0100; −0.4300]], with f(a0) = 0.2678, f(b0) = 0.1893.

a1 = a0 + ρ(b0 − a0) = [0.3655, −0.3466]^T
b1 = a0 + (1 − ρ)(b0 − a0) = [0.2220, −0.3784]^T
f(a1) = 0.1270
f(b1) = 0.1085

We can see f(a1) > f(b1), hence the uncertainty interval is reduced to:

[a1, b0] = [[0.3655; −0.3466], [−0.0100; −0.4300]].

So, calculating the norm of (b0 − a1), we see that the uncertainty region width is now 0.38461.

Iteration  a_k                  b_k                  f(a_k)   f(b_k)   New Uncertainty Interval
1          [0.3655, −0.3466]    [0.2220, −0.3784]    0.1270   0.1085   [[0.3655, −0.3466], [−0.0100, −0.4300]]
2          [0.2220, −0.3784]    [0.1334, −0.3981]    0.1085   0.1232   [[0.3655, −0.3466], [0.1334, −0.3981]]
3          [0.2768, −0.3663]    [0.2220, −0.3784]    0.1094   0.1085   [[0.2768, −0.3663], [0.1334, −0.3981]]
4          [0.2220, −0.3784]    [0.1882, −0.3860]    0.1085   0.1117   [[0.2768, −0.3663], [0.1882, −0.3860]]
5          [0.2430, −0.3738]    [0.2220, −0.3784]    0.1079   0.1085   [[0.2768, −0.3663], [0.2220, −0.3784]]
6          [0.2559, −0.3709]    [0.2430, −0.3738]    0.1081   0.1079   [[0.2559, −0.3709], [0.2220, −0.3784]]
7          [0.2430, −0.3738]    [0.2350, −0.3756]    0.1079   0.1080   [[0.2559, −0.3709], [0.2350, −0.3756]]
8          [0.2479, −0.3727]    [0.2430, −0.3738]    0.1080   0.1079   [[0.2479, −0.3727], [0.2350, −0.3756]]
9          [0.2430, −0.3738]    [0.2399, −0.3745]    0.1079   0.1079   [[0.2479, −0.3727], [0.2399, −0.3745]]

We can now see that the minimizer is located within [[0.2479, −0.3727], [0.2399, −0.3745]], and its uncertainty interval width is 0.00819.

% Coded by David Schvartzman

% To successfully run this program, run
% the previous script to obtain a and b.
e = 0.01;
Q = [2 1; 1 2];
ro = 0.5*(3-sqrt(5));
% First we determine the number of necessary iterations.
d = norm(a-b);
N = ceil((log(e/d))/(log(1-ro)));
fa = 0.5*a'*Q*a;
fb = 0.5*b'*Q*b;
str1 = sprintf('Initial values:');
str2 = sprintf('a0 = [%.4f,%.4f].', a(1), a(2));
str3 = sprintf('b0 = [%.4f,%.4f].', b(1), b(2));
str4 = sprintf('f(a0) = %.4f.', fa);
str5 = sprintf('f(b0) = %.4f.', fb);
strn = sprintf('\n');
disp(strn);
disp(str1);disp(str2);
disp(str3);disp(str4);
disp(str5);
s = a + ro*(b-a);
t = a + (1-ro)*(b-a);
fs = 0.5*s'*Q*s;
ft = 0.5*t'*Q*t;
for i=1:N
str1 = sprintf('Iteration number: %d', i);
str2 = sprintf('a%d = [%.4f,%.4f].', i, s(1), s(2));
str3 = sprintf('b%d = [%.4f,%.4f].', i, t(1), t(2));
str4 = sprintf('f(a%d) = %.4f.', i, fs);
str5 = sprintf('f(b%d) = %.4f.', i, ft);
if (ft>fs)
b = t;
%fb = ft;
t = s;
ft = fs;
s = a + ro*(b-a);
fs = 0.5*s'*Q*s;
else
a = s;
%fa = fs;
s = t;
fs = ft;
t = a + (1-ro)*(b-a);
ft = 0.5*t'*Q*t;
end
str6 = sprintf(['New uncertainty interval: a%d = [%.4f,%.4f], ' ...
'b%d = [%.4f,%.4f].'], i, a(1), a(2), i, b(1), b(2));
disp(strn);
disp(str1)
disp(str2)
disp(str3)
disp(str4)
disp(str5)
disp(str6)
end
% The interval where the minimizer is boxed in is given by:
an = a;
bn = b;
% We can return (an+bn)/2 as the minimizer.
min = (an+bn)/2;
disp(strn);
str = sprintf('The minimizer x* is: [%.4f; %.4f]', min(1,1), min(2,1));
disp(str);

c. We need to determine the number of necessary iterations:
The initial uncertainty interval width is 0.6223. This width will be (0.6223)(1 + 2ε)/F_{N+1} after N stages, where Fk is the k-th element of the Fibonacci sequence. We choose N so that

(1 + 2ε)/F_{N+1} ≤ 0.01/0.6223 = 0.0161  ⟹  F_{N+1} ≥ (1 + 2ε)/0.0161.

For ε = 0.05, we require F_{N+1} ≥ 68.32; thus F_{10} = 89 is enough, and we have N = 10 − 1 = 9 iterations.
We show the first iteration of the algorithm; the rest are analogous and shown in the following table. From part a, we know that [a0, b0] = [x^(2), x^(4)], then:

[a0, b0] = [[0.5975; −0.2950], [−0.0100; −0.4300]], with f(a0) = 0.2678, f(b0) = 0.1893.

Recall that in the Fibonacci method, ρ1 = 1 − F_N/F_{N+1} = 1 − 55/89 = 0.3820.

a1 = a0 + ρ1(b0 − a0) = [0.3654, −0.3466]^T
b1 = a0 + (1 − ρ1)(b0 − a0) = [0.2221, −0.3784]^T
f(a1) = 0.1270
f(b1) = 0.1085

We can see f(a1) > f(b1), hence the uncertainty interval is reduced to:

[a1, b0] = [[0.3654; −0.3466], [−0.0100; −0.4300]].

So, calculating the norm of (b0 − a1), we see that the uncertainty region width is now 0.38458.

k  ρk      a_k                  b_k                  f(a_k)   f(b_k)   New Uncertainty Interval
1  0.3820  [0.3654, −0.3466]    [0.2221, −0.3784]    0.1270   0.1085   [[0.3654, −0.3466], [−0.0100, −0.4300]]
2  0.3818  [0.2221, −0.3784]    [0.1333, −0.3981]    0.1085   0.1232   [[0.3654, −0.3466], [0.1333, −0.3981]]
3  0.3824  [0.2767, −0.3663]    [0.2221, −0.3784]    0.1094   0.1085   [[0.2767, −0.3663], [0.1333, −0.3981]]
4  0.3810  [0.2221, −0.3784]    [0.1879, −0.3860]    0.1085   0.1118   [[0.2767, −0.3663], [0.1879, −0.3860]]
5  0.3846  [0.2426, −0.3739]    [0.2221, −0.3784]    0.1079   0.1085   [[0.2767, −0.3663], [0.2221, −0.3784]]
6  0.3750  [0.2562, −0.3708]    [0.2426, −0.3739]    0.1082   0.1079   [[0.2562, −0.3708], [0.2221, −0.3784]]
7  0.4000  [0.2426, −0.3739]    [0.2357, −0.3754]    0.1079   0.1080   [[0.2562, −0.3708], [0.2357, −0.3754]]
8  0.3333  [0.2494, −0.3724]    [0.2426, −0.3739]    0.1080   0.1079   [[0.2494, −0.3724], [0.2357, −0.3754]]
9  0.4500  [0.2426, −0.3739]    [0.2419, −0.3740]    0.1079   0.1079   [[0.2494, −0.3724], [0.2419, −0.3740]]

We can now see that the minimizer is located within [[0.2494, −0.3724], [0.2419, −0.3740]], and its uncertainty interval width is 0.00769.

% Coded by David Schvartzman

% To successfully run this program, run the first of the above scripts
% to obtain a and b.
% We take
e = 0.01;
Q = [2 1; 1 2];
% First determine the number of necessary iterations.
d = norm(a-b);
F_N_1 = (2*0.05+1)/(e/d);
F = zeros(1,20);
F(1) = 0;
F(2) = 1;
for i=1:20
F(i+2) = F(i+1)+F(i);
if(F(i+2) >= F_N_1)
break;
end
end
N = i-1;
ro = zeros(1, N+1);
for i = 1:N
ro(i) = 1 - F(N+3-i)/F(N+4-i);
end
ro(N) = ro(N) - 0.05;
fa = 0.5*a'*Q*a;
fb = 0.5*b'*Q*b;
str1 = sprintf('Initial values:');
str2 = sprintf('a0 = [%.4f,%.4f].', a(1), a(2));
str3 = sprintf('b0 = [%.4f,%.4f].', b(1), b(2));
str4 = sprintf('f(a0) = %.4f.', fa);
str5 = sprintf('f(b0) = %.4f.', fb);
strn = sprintf('\n');
disp(strn);
disp(str1);
disp(str2);
disp(str3);
disp(str4);
disp(str5);
s = a + ro(1)*(b-a);
t = a + (1-ro(1))*(b-a);
fs = 0.5*s'*Q*s;
ft = 0.5*t'*Q*t;
for i=1:N
str1 = sprintf('Iteration number: %d', i);
str2 = sprintf('a%d = [%.4f,%.4f].', i, s(1), s(2));
str3 = sprintf('b%d = [%.4f,%.4f].', i, t(1), t(2));
str4 = sprintf('f(a%d) = %.4f.', i, fs);
str5 = sprintf('f(b%d) = %.4f.', i, ft);
if (ft>fs)
b = t;
t = s;
ft = fs;
s = a + ro(i+1)*(b-a);
fs = 0.5*s'*Q*s;
else
a = s;
s = t;
fs = ft;
t = a + (1-ro(i+1))*(b-a);
ft = 0.5*t'*Q*t;
end
str6 = sprintf(['New uncertainty interval: a%d = [%.4f,%.4f], ' ...
'b%d = [%.4f,%.4f].'], i, a(1), a(2), i, b(1), b(2));
str7 = sprintf('Uncertainty interval width: %.5f', norm(a-b));
disp(strn);
disp(str1)
disp(str2)
disp(str3)
disp(str4)
disp(str5)
disp(str6)
disp(str7)
end
% The minimizer is boxed in the interval:
an = a;
bn = b;
% We can return (an+bn)/2 as the minimizer.
min = (an+bn)/2;
disp(strn);
str = sprintf('The minimizer x* is: [%.4f; %.4f]', min(1,1), min(2,1));
disp(str);

8. Gradient Methods
8.1
The function f is a quadratic and so we can represent it in standard form as

f = (1/2) x^T [1 0; 0 2] x − x^T [1, −1/2]^T + 3 = (1/2)x^T Qx − x^T b + c.

The first iteration is

x^(1) = x^(0) − α0 ∇f(x^(0)).

To find x^(1), we need to compute ∇f(x^(0)) = g^(0). With x^(0) = [0, 0]^T, we have

g^(0) = Qx^(0) − b = [−1, 1/2]^T.

The step size α0 can be computed as

α0 = g^(0)T g^(0) / (g^(0)T Qg^(0)) = 5/6.

Hence,

x^(1) = x^(0) − α0 g^(0) = −(5/6)[−1; 1/2] = [5/6; −5/12].

The second iteration is

x^(2) = x^(1) − α1 ∇f(x^(1)),

where

∇f(x^(1)) = g^(1) = Qx^(1) − b = [−1/6; −1/3],

and

α1 = g^(1)T g^(1) / (g^(1)T Qg^(1)) = 5/9.

Hence,

x^(2) = x^(1) − α1 g^(1) = [5/6; −5/12] − (5/9)[−1/6; −1/3] = [25/27; −25/108].

The optimal solution is x* = [1, −1/4]^T, obtained by solving the equation Qx = b.
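The two iterations can be reproduced numerically; the Python sketch below mirrors the computation (the starting point x^(0) = 0 is inferred from g^(0) = −b):

```python
# Steepest descent with exact line search for f(x) = 0.5 x'Qx - b'x + 3,
# Q = diag(1, 2), b = [1, -1/2], starting from x0 = 0.
Q = [[1.0, 0.0], [0.0, 2.0]]
b = [1.0, -0.5]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

x = [0.0, 0.0]
for _ in range(2):
    g = [matvec(Q, x)[i] - b[i] for i in range(2)]
    alpha = dot(g, g) / dot(g, matvec(Q, g))  # exact line search step
    x = [x[i] - alpha * g[i] for i in range(2)]

print(x)  # [25/27, -25/108], approximately [0.9259, -0.2315]
```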
8.2
Let s be the order of convergence of {x^(k)}. Suppose there exists c > 0 such that for all k sufficiently large,

‖x^(k+1) − x*‖ ≥ c‖x^(k) − x*‖^p.

Hence, for all k sufficiently large,

‖x^(k+1) − x*‖/‖x^(k) − x*‖^s = (‖x^(k+1) − x*‖/‖x^(k) − x*‖^p)(1/‖x^(k) − x*‖^{s−p}) ≥ c/‖x^(k) − x*‖^{s−p}.

Taking limits yields

lim_{k→∞} ‖x^(k+1) − x*‖/‖x^(k) − x*‖^s ≥ c/lim_{k→∞} ‖x^(k) − x*‖^{s−p}.

Since by definition s is the order of convergence,

lim_{k→∞} ‖x^(k+1) − x*‖/‖x^(k) − x*‖^s < ∞.

Combining the above two inequalities, we get

c/lim_{k→∞} ‖x^(k) − x*‖^{s−p} < ∞.

Therefore, since lim_{k→∞} ‖x^(k) − x*‖ = 0, we conclude that s ≤ p, i.e., the order of convergence is at most p.
8.3
We use contradiction. Suppose $x^{(k)} \to x^*$ and

$$ \lim_{k \to \infty} \frac{\|x^{(k+1)} - x^*\|}{\|x^{(k)} - x^*\|^p} > 0 $$

for some $p < 1$. We may assume that $x^{(k)} \neq x^*$ for an infinite number of $k$ (for otherwise, by convention, the ratio above is eventually 0). Fix $\varepsilon > 0$. Then, there exists $K_1$ such that for all $k \geq K_1$,

$$ \frac{\|x^{(k+1)} - x^*\|}{\|x^{(k)} - x^*\|^p} > \varepsilon. $$

Dividing both sides by $\|x^{(k)} - x^*\|^{1-p}$, we obtain

$$ \frac{\|x^{(k+1)} - x^*\|}{\|x^{(k)} - x^*\|} > \frac{\varepsilon}{\|x^{(k)} - x^*\|^{1-p}}. $$

Because $x^{(k)} \to x^*$ and $p < 1$, we have $\|x^{(k)} - x^*\|^{1-p} \to 0$. Hence, there exists $K_2$ such that for all $k \geq K_2$, $\|x^{(k)} - x^*\|^{1-p} < \varepsilon$. Combining this inequality with the previous one yields

$$ \frac{\|x^{(k+1)} - x^*\|}{\|x^{(k)} - x^*\|} > 1 $$

for all $k \geq \max(K_1, K_2)$; i.e.,

$$ \|x^{(k+1)} - x^*\| > \|x^{(k)} - x^*\|, $$

which contradicts the assumption that $x^{(k)} \to x^*$.
8.4
a. The sequence is $x^{(k)} = 2^{-2^{k^2}}$, which converges to 0 because the exponent $-2^{k^2}$ grows unboundedly negative as $k \to \infty$.
b. The order of convergence of $\{x^{(k)}\}$ is $\infty$. To see this, we first write, for $p \geq 1$,

$$ \frac{|x^{(k+1)}|}{|x^{(k)}|^p} = \frac{2^{-2^{(k+1)^2}}}{2^{-2^{k^2} p}} = 2^{-2^{k^2+2k+1} + p\,2^{k^2}} = 2^{-2^{k^2}\left(2^{2k+1} - p\right)}. $$

But notice that the exponent $-2^{k^2}(2^{2k+1} - p)$ grows unboundedly negative as $k \to \infty$, regardless of the value of $p$. Therefore, for any $p$,

$$ \lim_{k \to \infty} \frac{|x^{(k+1)}|}{|x^{(k)}|^p} = 0, $$

and hence the order of convergence is $\infty$.
8.5
a. We have

$$ x^{(k)} = a x^{(k-1)} = a\left(a x^{(k-2)}\right) = a^2 x^{(k-2)} = \cdots = a^k x^{(0)}. $$

Because $0 < a < 1$, we have $a^k \to 0$, and hence $x^{(k)} \to 0$.
b. Similarly, we have

$$ y^{(k)} = \left(y^{(k-1)}\right)^b = \left(\left(y^{(k-2)}\right)^b\right)^b = \left(y^{(k-2)}\right)^{b^2} = \cdots = \left(y^{(0)}\right)^{b^k}. $$

Because $|y^{(0)}| < 1$ and $b > 1$, we have $b^k \to \infty$ and hence $y^{(k)} \to 0$.
c. The order of convergence of $\{x^{(k)}\}$ is 1 because

$$ \lim_{k \to \infty} \frac{|x^{(k+1)}|}{|x^{(k)}|} = \lim_{k \to \infty} a = a, $$

and $0 < a < \infty$.
The order of convergence of $\{y^{(k)}\}$ is $b$ because

$$ \lim_{k \to \infty} \frac{|y^{(k+1)}|}{|y^{(k)}|^b} = \lim_{k \to \infty} 1 = 1, $$

and $0 < 1 < \infty$.
d. Suppose $|x^{(k)}| \leq c|x^{(0)}|$. Using part a, we have $a^k \leq c$, which implies that $k \geq \log(c)/\log(a)$. So the smallest number of iterations $k$ such that $|x^{(k)}| \leq c|x^{(0)}|$ is $\lceil \log(c)/\log(a) \rceil$ (the smallest integer not smaller than $\log(c)/\log(a)$).
e. Suppose $|y^{(k)}| \leq c|y^{(0)}|$. Using part b, we have $|y^{(0)}|^{b^k} \leq c|y^{(0)}|$. Taking logs (twice) and rearranging, we have

$$ k \geq \frac{1}{\log(b)} \log\left(1 + \frac{\log(c)}{\log(|y^{(0)}|)}\right). $$

Denote the right-hand side by $z$. So the smallest number of iterations $k$ such that $|y^{(k)}| \leq c|y^{(0)}|$ is $\lceil z \rceil$.
f. Comparing the answer in part e with that of part d, we can see that as $c \to 0$, the answer in part d is $\Theta(\log(c))$, whereas the answer in part e is $O(\log\log(c))$. Hence, in the regime where $c$ is very small, the number of iterations in part d (linear convergence) is (at least) exponentially larger than that in part e (superlinear convergence).
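The gap between parts d and e can be illustrated numerically. The values $a = 1/2$, $b = 2$, $|y^{(0)}| = 1/2$, and $c = 10^{-12}$ below are illustrative choices of our own, not part of the exercise:

```python
import math

# Part d: |x(k)| = a^k |x(0)| (linear); part e: |y(k)| = |y(0)|^(b^k) (superlinear).
a, b, y0, c = 0.5, 2.0, 0.5, 1e-12   # illustrative values of our own

# Smallest k with |x(k)| <= c|x(0)| (part d):
k_linear = math.ceil(math.log(c) / math.log(a))

# Smallest k with |y(k)| <= c|y(0)| (part e):
z = (1 / math.log(b)) * math.log(1 + math.log(c) / math.log(abs(y0)))
k_superlinear = math.ceil(z)
```

With these values the linearly convergent sequence needs 40 iterations while the superlinearly convergent one needs only 6, consistent with the $\Theta(\log(c))$ versus $O(\log\log(c))$ comparison in part f.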
8.6
We have $u_{k+1} = (1 - a_k)u_k$, with $a_k \to 0$, and $u_k \to 0$. Therefore,

$$ \lim_{k \to \infty} \frac{|u_{k+1}|}{|u_k|} = 1 > 0, $$

and thus the order of convergence is 1.
8.7
a. The value of $x$ (in terms of $a$, $b$, and $c$) that minimizes $f$ is $x^* = b/a$.
b. We have $f'(x) = ax - b$. Therefore, the recursive equation for the DDS algorithm is

$$ x^{(k+1)} = x^{(k)} - \alpha(ax^{(k)} - b) = (1 - \alpha a)x^{(k)} + \alpha b. $$

c. Let $\bar{x} = \lim_{k \to \infty} x^{(k)}$. Taking limits of both sides of $x^{(k+1)} = x^{(k)} - \alpha(ax^{(k)} - b)$ (from part b), we get

$$ \bar{x} = \bar{x} - \alpha(a\bar{x} - b). $$

Hence, we get $\bar{x} = b/a = x^*$.
d. To find the order of convergence, we compute

$$ \frac{|x^{(k+1)} - b/a|}{|x^{(k)} - b/a|^p} = \frac{|(1 - \alpha a)x^{(k)} + \alpha b - b/a|}{|x^{(k)} - b/a|^p} = \frac{|(1 - \alpha a)x^{(k)} - (1 - \alpha a)b/a|}{|x^{(k)} - b/a|^p} = |1 - \alpha a|\,|x^{(k)} - b/a|^{1-p}. $$

Let $z^{(k)} = |1 - \alpha a|\,|x^{(k)} - b/a|^{1-p}$. Note that $z^{(k)}$ converges to a finite nonzero number if and only if $p = 1$ (if $p < 1$, then $z^{(k)} \to 0$, and if $p > 1$, then $z^{(k)} \to \infty$). Therefore, the order of convergence of $\{x^{(k)}\}$ is 1.
e. Let $y^{(k)} = |x^{(k)} - b/a|$. From part d, after some manipulation we obtain

$$ y^{(k+1)} = |1 - \alpha a|\, y^{(k)} = |1 - \alpha a|^{k+1} y^{(0)}. $$

The sequence $\{x^{(k)}\}$ converges (to $b/a$) if and only if $y^{(k)} \to 0$. This holds if and only if $|1 - \alpha a| < 1$, which is equivalent to $0 < \alpha < 2/a$.
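The recursion from part b can be sketched in a few lines (the values $a = 4$, $b = 2$, and $\alpha = 0.3$ are our own illustrative choices):

```python
# Sketch of the DDS recursion from part b: x(k+1) = (1 - alpha*a) x(k) + alpha*b.
a, b = 4.0, 2.0          # illustrative values of our own; x* = b/a = 0.5
alpha = 0.3              # inside the convergence range (0, 2/a) = (0, 0.5)

x = 10.0
for _ in range(100):
    x = (1 - alpha * a) * x + alpha * b

ratio = abs(1 - alpha * a)   # per-step contraction factor (order of convergence 1)
```

The iterate converges to $b/a = 0.5$, and the error shrinks geometrically by the factor $|1 - \alpha a| = 0.2$ per step, matching parts d and e.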
8.8
We rewrite $f$ as $f(x) = \frac{1}{2} x^\top Q x - b^\top x$, where

$$ Q = \begin{bmatrix} 6 & 4 \\ 4 & 6 \end{bmatrix}. $$

The characteristic polynomial of $Q$ is $\lambda^2 - 12\lambda + 20$. Hence, the eigenvalues of $Q$ are 2 and 10. Therefore, the largest range of values of $\alpha$ for which the algorithm is globally convergent is $0 < \alpha < 2/10$.
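The eigenvalue computation above can be checked numerically (a sketch):

```python
import numpy as np

# Hessian of f and the resulting fixed-step-size bound 0 < alpha < 2/lambda_max.
Q = np.array([[6.0, 4.0], [4.0, 6.0]])
eigvals = np.linalg.eigvalsh(Q)     # ascending order: [2, 10]
alpha_max = 2 / eigvals[-1]         # = 2/10 = 1/5
```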
8.9
a. We can write $h(x) = Qx - b$, where $b = [4, 1]^\top$ and

$$ Q = \begin{bmatrix} 3 & 2 \\ 2 & 3 \end{bmatrix} $$

is positive definite. Hence, the solution is

$$ Q^{-1}b = \frac{1}{5}\begin{bmatrix} 3 & -2 \\ -2 & 3 \end{bmatrix}\begin{bmatrix} 4 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ -1 \end{bmatrix}. $$

b. By part a, the algorithm is a fixed-step-size gradient algorithm for a problem with gradient $h$. The eigenvalues of $Q$ are 1 and 5. Hence, the largest range of values of $\alpha$ such that the algorithm is globally convergent to the solution is $0 < \alpha < 2/5$.
c. The eigenvectors of $Q$ corresponding to eigenvalue 5 have the form $c[1, 1]^\top$, where $c \in \mathbb{R}$. Hence, to violate the descent property, we pick

$$ x^{(0)} = Q^{-1}b + c\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 0 \end{bmatrix}, $$

where we choose $c = 1$ so that $x^{(0)}$ has the specified form.
8.10
a. We have

$$ f(x) = \frac{1}{2} x^\top \begin{bmatrix} 3 & 1+a \\ 1+a & 3 \end{bmatrix} x - x^\top \begin{bmatrix} 1 \\ 1 \end{bmatrix} + b. $$

b. The unique global minimizer exists if and only if the Hessian is positive definite, which holds if and only if $(1+a)^2 < 9$ (by Sylvester's criterion). Hence, the largest set of values of $a$ and $b$ such that the global minimizer of $f$ exists is given by $-4 < a < 2$ and $b \in \mathbb{R}$ (unrestricted).
The minimizer is given by

$$ x^* = \frac{1}{9 - (1+a)^2}\begin{bmatrix} 3 & -(1+a) \\ -(1+a) & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \frac{2-a}{9 - (1+a)^2}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \frac{1}{4+a}\begin{bmatrix} 1 \\ 1 \end{bmatrix}. $$

c. The algorithm is a gradient algorithm with fixed step size $\alpha = 2/5$. The eigenvalues of the Hessian are (after some calculations) $4 + a$ and $2 - a$. For global convergence, we need $2/5 < 2/\lambda_{\max}$, or $\lambda_{\max} < 5$, where $\lambda_{\max} = \max(4 + a, 2 - a)$. From this we deduce that $-3 < a < 1$. Hence, the largest set of values of $a$ and $b$ such that the algorithm is globally convergent is given by $-3 < a < 1$ and $b \in \mathbb{R}$ (unrestricted).
8.11
a. We have

$$ f(x^{(k+1)}) = (x^{(k+1)} - c)^2/2 = (x^{(k)} - \alpha_k(x^{(k)} - c) - c)^2/2 = (1 - \alpha_k)^2 (x^{(k)} - c)^2/2 = (1 - \alpha_k)^2 f(x^{(k)}). $$

b. We have $x^{(k)} \to c$ if and only if $f(x^{(k)}) \to 0$. Hence, the algorithm is globally convergent if and only if $f(x^{(k)}) \to 0$ for any $x^{(0)}$. From part a, we deduce that $f(x^{(k)}) \to 0$ for any $x^{(0)}$ if and only if $\prod_{k=0}^{\infty} (1 - \alpha_k)^2 = 0$. Because $0 < \alpha_k < 1$, this condition is equivalent to $\prod_{k=0}^{\infty} (1 - \alpha_k) = 0$, which holds if and only if

$$ \sum_{k=0}^{\infty} \alpha_k = \infty. $$

8.12
The only local minimizer of $f$ is $x^* = 1/\sqrt{3}$. Indeed, we have $f'(x^*) = 0$ and $f''(x^*) = 2\sqrt{3}$. To find the largest range of values of $\alpha$ such that the algorithm is locally convergent, we use a linearization argument: The algorithm is locally convergent if and only if the linearized algorithm $x^{(k+1)} = x^{(k)} - \alpha f''(x^*)(x^{(k)} - x^*)$ is globally convergent. But the linearized algorithm is just a fixed-step-size algorithm applied to a quadratic with second derivative $f''(x^*)$. Therefore, the largest range of values of $\alpha$ such that the algorithm is locally convergent is $0 < \alpha < 2/f''(x^*) = 1/\sqrt{3}$.
8.13
We use the formula from Lemma 8.1:

$$ f(x^{(k+1)}) = (1 - \gamma_k)f(x^{(k)}) $$

(we have $V = f$ in this case). Using the expression for $\alpha_k$, we get, assuming $x^{(k)} \neq 1$,

$$ \gamma_k = 4 \cdot 2^{-k}\left(1 - 2^{-k}\right). $$

Hence, $\gamma_k > 0$, which means that $f(x^{(k+1)}) < f(x^{(k)})$ if $x^{(k)} \neq 1$, for $k \geq 0$. This implies that the algorithm has the descent property (for $k \geq 0$).
We also note that

$$ \sum_{k=0}^{\infty} \gamma_k = 4\left(\sum_{k=0}^{\infty} 2^{-k} - \sum_{k=0}^{\infty} 4^{-k}\right) = 4\left(2 - \frac{4}{3}\right) = \frac{8}{3} < \infty. $$

Since $\gamma_k > 0$ for all $k \geq 0$, we can apply the theorem given in class to deduce that the algorithm is not globally convergent.
8.14
We have

$$ |x^{(k+1)} - x^*| = |x^{(k)} - x^* - f'(x^{(k)})/f''(x^*)|. $$

By Taylor's theorem, using $f'(x^*) = 0$,

$$ f'(x^{(k)}) = f''(x^*)(x^{(k)} - x^*) + O(|x^{(k)} - x^*|^2). $$

Combining the above with the first equation, we get

$$ |x^{(k+1)} - x^*| = O(|x^{(k)} - x^*|^2), $$

which implies that the order of convergence is at least 2.

8.15
a. The objective function is a quadratic in the scalar $x$ that can be written as

$$ f(x) = \|xa - b\|^2 = \|a\|^2 x^2 - 2(a^\top b)x + \|b\|^2. $$

Hence, the minimizer is $x^* = a^\top b/\|a\|^2$.
b. Note that $f''(x) = 2\|a\|^2$. Thus, by the result for fixed-step-size gradient algorithms, the required largest range for $\alpha$ is $(0, 1/\|a\|^2)$.
8.16
a. We have

$$ f(x) = \|Ax - b\|^2 = (Ax - b)^\top (Ax - b) = (x^\top A^\top - b^\top)(Ax - b) = x^\top (A^\top A)x - 2(A^\top b)^\top x + b^\top b, $$

which is a quadratic function. The gradient is given by $\nabla f(x) = 2(A^\top A)x - 2(A^\top b)$ and the Hessian is given by $F(x) = 2(A^\top A)$.
b. The fixed-step-size gradient algorithm for solving the above optimization problem is given by

$$ x^{(k+1)} = x^{(k)} - \alpha\left(2(A^\top A)x^{(k)} - 2A^\top b\right) = x^{(k)} - 2\alpha A^\top (Ax^{(k)} - b). $$

c. The largest range of values for $\alpha$ such that the algorithm in part b converges to the solution of the problem is given by

$$ 0 < \alpha < \frac{2}{\lambda_{\max}(2A^\top A)} = \frac{1}{4}. $$
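The algorithm from part b can be sketched on a small illustrative instance (the matrix $A$ and vector $b$ below are our own, so the step-size bound is computed from this $A$ rather than the $1/4$ above):

```python
import numpy as np

# Fixed-step gradient descent for f(x) = ||Ax - b||^2; A and b are illustrative.
A = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
b = np.array([1.0, 2.0, 2.0])

lam_max = np.linalg.eigvalsh(2 * A.T @ A).max()   # largest Hessian eigenvalue
alpha = 1.9 / lam_max                             # any alpha in (0, 2/lam_max) works

x = np.zeros(2)
for _ in range(2000):
    x = x - alpha * 2 * A.T @ (A @ x - b)         # x(k+1) = x(k) - 2a A'(Ax - b)

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)    # least-squares solution
```

The iterate converges to the least-squares solution, as the theory predicts for any step size inside the range from part c.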

8.17
a. We use contraposition. Suppose an eigenvalue $\lambda$ of $A$ is negative: $Av = \lambda v$, where $\lambda < 0$ and $v$ is a corresponding eigenvector. Choose $x^{(0)} = v + x^*$. Then,

$$ x^{(1)} = v + x^* - \alpha(Av + Ax^* + b) = v + x^* - \alpha\lambda v, $$

and hence

$$ x^{(1)} - x^* = (1 - \alpha\lambda)(x^{(0)} - x^*). $$

Since $1 - \alpha\lambda > 1$, we conclude that the algorithm is not globally monotone.
b. Note that the algorithm is identical to a fixed-step-size gradient algorithm applied to a quadratic with Hessian $A$. The eigenvalues of $A$ are 1 and 5. Therefore, the largest range of values of $\alpha$ for which the algorithm is globally convergent is $0 < \alpha < 2/5$.
8.18
The steepest descent algorithm applied to the quadratic function $f$ has the form

$$ x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)} = x^{(k)} - \frac{g^{(k)\top} g^{(k)}}{g^{(k)\top} Q g^{(k)}} g^{(k)}. $$

$\Rightarrow$: If $x^{(1)} = Q^{-1}b$, then

$$ Q^{-1}b = x^{(0)} - \alpha_0 g^{(0)}. $$

Rearranging the above yields

$$ Qx^{(0)} - b = \alpha_0 Q g^{(0)}. $$

Since $g^{(0)} = Qx^{(0)} - b \neq 0$, we have

$$ Qg^{(0)} = \frac{1}{\alpha_0} g^{(0)}, $$

which means that $g^{(0)}$ is an eigenvector of $Q$ (with corresponding eigenvalue $1/\alpha_0$).
$\Leftarrow$: By assumption, $Qg^{(0)} = \lambda g^{(0)}$, where $\lambda \in \mathbb{R}$. We want to show that $Qx^{(1)} = b$. We have

$$ Qx^{(1)} = Q\left(x^{(0)} - \frac{g^{(0)\top} g^{(0)}}{g^{(0)\top} Q g^{(0)}} g^{(0)}\right) = Qx^{(0)} - \frac{1}{\lambda}\,Qg^{(0)} = Qx^{(0)} - g^{(0)} = b. $$
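The "$\Leftarrow$" direction can be illustrated numerically ($Q$, $b$, and the eigenvector below are our own illustrative choices):

```python
import numpy as np

# If g(0) is an eigenvector of Q, one exact-line-search step lands on x* = Q^{-1}b.
Q = np.array([[3.0, 2.0], [2.0, 3.0]])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(Q, b)

v = np.array([1.0, -1.0])                # eigenvector of Q (eigenvalue 1)
x0 = x_star + v                          # then g(0) = Q x0 - b = Q v = v
g = Q @ x0 - b
alpha = (g @ g) / (g @ Q @ g)            # steepest-descent step size
x1 = x0 - alpha * g
```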

8.19
a. Possible. Pick $f$ such that $\lambda_{\max} \geq 2\lambda_{\min}$ and $x^{(0)}$ such that $g^{(0)}$ is an eigenvector of $Q$ with eigenvalue $\lambda_{\min}$. Then,

$$ \alpha_0 = \frac{g^{(0)\top} g^{(0)}}{g^{(0)\top} Q g^{(0)}} = \frac{1}{\lambda_{\min}} \geq \frac{2}{\lambda_{\max}}. $$

b. Not possible. Indeed, using Rayleigh's inequality,

$$ \alpha_0 = \frac{g^{(0)\top} g^{(0)}}{g^{(0)\top} Q g^{(0)}} \leq \frac{1}{\lambda_{\min}}. $$

8.20
a. We rewrite $f$ as

$$ f(x) = \frac{1}{2} x^\top Q x - b^\top x - 22, $$

where

$$ Q = \begin{bmatrix} 3 & 2 \\ 2 & 3 \end{bmatrix}, \qquad b = \begin{bmatrix} 3 \\ -1 \end{bmatrix}. $$

The eigenvalues of $Q$ are 1 and 5. Therefore, the range of values of the step size $\alpha$ for which the algorithm converges to the minimizer is $0 < \alpha < 2/5$.
b. An eigenvector of $Q$ corresponding to the eigenvalue 5 is $v = -[1, 1]^\top/5$. We have $x^* = Q^{-1}b = [11, -9]^\top/5$. Hence, an initial condition that results in the algorithm diverging is

$$ x^{(0)} = x^* + v = \begin{bmatrix} 2 \\ -2 \end{bmatrix}. $$

8.21
In both cases, we compute the Hessian $Q$ of $f$ and find its largest eigenvalue $\lambda_{\max}$. Then the range we seek is $0 < \alpha < 2/\lambda_{\max}$.
a. In this case,

$$ Q = \begin{bmatrix} 6 & 4 \\ 4 & 6 \end{bmatrix}, $$

with eigenvalues 2 and 10. Hence, the answer is $0 < \alpha < 1/5$.
b. In this case, again we have

$$ Q = \begin{bmatrix} 6 & 4 \\ 4 & 6 \end{bmatrix}, $$

with eigenvalues 2 and 10. Hence, the answer is $0 < \alpha < 1/5$.
8.22
For the given algorithm we have

$$ \gamma_k = \alpha(2 - \alpha) \frac{\left(g^{(k)\top} g^{(k)}\right)^2}{\left(g^{(k)\top} Q g^{(k)}\right)\left(g^{(k)\top} Q^{-1} g^{(k)}\right)}. $$

If $0 < \alpha < 2$, then $\alpha(2 - \alpha) > 0$, and by Lemma 8.2,

$$ \gamma_k \geq \alpha(2 - \alpha) \frac{\lambda_{\min}(Q)}{\lambda_{\max}(Q)} > 0, $$

which implies that $\sum_{k=0}^{\infty} \gamma_k = \infty$. Hence, by Theorem 8.1, $x^{(k)} \to x^*$ for any $x^{(0)}$.
If $\alpha \leq 0$ or $\alpha \geq 2$, then $\alpha(2 - \alpha) \leq 0$, and by Lemma 8.2,

$$ \gamma_k \leq \alpha(2 - \alpha) \frac{\lambda_{\min}(Q)}{\lambda_{\max}(Q)} \leq 0. $$

By Lemma 8.1, $V(x^{(k)}) \geq V(x^{(0)})$. Hence, if $x^{(0)} \neq x^*$, then $\{V(x^{(k)})\}$ does not converge to 0, and consequently $x^{(k)}$ does not converge to $x^*$.
8.23
By Lemma 8.1, $V(x^{(k+1)}) = (1 - \gamma_k)V(x^{(k)})$ for all $k$. Note that the algorithm has a descent property if and only if $V(x^{(k+1)}) < V(x^{(k)})$ whenever $g^{(k)} \neq 0$. Clearly, whenever $g^{(k)} \neq 0$, $V(x^{(k+1)}) < V(x^{(k)})$ if and only if $1 - \gamma_k < 1$, i.e., $\gamma_k > 0$. The desired result follows immediately.
8.24
We have

$$ x^{(k+1)} - x^{(k)} = \alpha_k d^{(k)} $$

and hence

$$ \langle x^{(k+1)} - x^{(k)}, \nabla f(x^{(k+1)}) \rangle = \alpha_k \langle d^{(k)}, \nabla f(x^{(k+1)}) \rangle. $$

Now, let $\phi_k(\alpha) = f(x^{(k)} + \alpha d^{(k)})$. Since $\alpha_k$ minimizes $\phi_k$, then by the FONC, $\phi_k'(\alpha_k) = 0$. By the chain rule, $\phi_k'(\alpha) = d^{(k)\top} \nabla f(x^{(k)} + \alpha d^{(k)})$. Hence,

$$ 0 = \phi_k'(\alpha_k) = d^{(k)\top} \nabla f(x^{(k)} + \alpha_k d^{(k)}) = \langle d^{(k)}, \nabla f(x^{(k+1)}) \rangle, $$

and so

$$ \langle x^{(k+1)} - x^{(k)}, \nabla f(x^{(k+1)}) \rangle = 0. $$

8.25
A simple MATLAB routine for implementing the steepest descent method is as follows.
function [x,N]=steep_desc(grad,xnew,options);
% STEEP_DESC(grad,x0);
% STEEP_DESC(grad,x0,OPTIONS);
%
% x = STEEP_DESC(grad,x0);
% x = STEEP_DESC(grad,x0,OPTIONS);
%
% [x,N] = STEEP_DESC(grad,x0);
% [x,N] = STEEP_DESC(grad,x0,OPTIONS);
%
%The first variant finds the minimizer of a function whose gradient
%is described in grad (usually an M-file: grad.m), using a gradient
%descent algorithm with initial point x0. The line search used is the
%secant method.
%The second variant allows a vector of optional parameters to be
%defined. OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results, (default is no display: 0).
%OPTIONS(2) is a measure of the precision required for the final point.
%OPTIONS(3) is a measure of the precision required of the gradient.
%OPTIONS(14) is the maximum number of iterations.
%For more information type HELP FOPTIONS.
%
%The next two variants return the value of the final point.
%The last two variants return a vector of the final point and the
%number of iterations.

if nargin ~= 3
options = [];
if nargin ~= 2
disp('Wrong number of arguments.');
return;
end
end

if length(options) >= 14
if options(14)==0
options(14)=1000*length(xnew);
end
else
options(14)=1000*length(xnew);
end

clc;
format compact;
format short e;

options = foptions(options);
print = options(1);
epsilon_x = options(2);
epsilon_g = options(3);
max_iter=options(14);

for k = 1:max_iter,

xcurr=xnew;
g_curr=feval(grad,xcurr);
if norm(g_curr) <= epsilon_g
disp('Terminating: Norm of gradient less than');
disp(epsilon_g);
k=k-1;
break;
end %if

alpha=linesearch_secant(grad,xcurr,-g_curr);

xnew = xcurr-alpha*g_curr;

if print,
disp('Iteration number k =')
disp(k); %print iteration index k
disp('alpha =');
disp(alpha); %print alpha
disp('Gradient = ');
disp(g_curr); %print gradient
disp('New point =');
disp(xnew); %print new point
end %if

if norm(xnew-xcurr) <= epsilon_x*norm(xcurr)
disp('Terminating: Norm of difference between iterates less than');
disp(epsilon_x);
break;
end %if

if k == max_iter
disp('Terminating with maximum number of iterations');
end %if
end %for

if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp('Final point =');
disp(xnew);
disp('Number of iterations =');
disp(k);
end %if
%------------------------------------------------------------------

To apply the above MATLAB routine to the function in Example 8.1, f(x) = (x1 - 4)^4 + (x2 - 3)^2 + 4(x3 + 5)^4, we need the following M-file to specify the gradient.
function y=g(x)
y=[4*(x(1)-4)^3; 2*(x(2)-3); 16*(x(3)+5)^3];

We applied the algorithm as follows:

>> options(2) = 10^(-6);
>> options(3) = 10^(-6);

>> steep_desc('g',[-4;5;1],options)

Terminating: Norm of gradient less than

1.0000e-06
Final point =
4.0022e+00 3.0000e+00 -4.9962e+00
Number of iterations =
25

As we can see above, we obtained the final point $[4.002, 3.000, -4.996]^\top$ after 25 iterations. The value of the objective function at the final point is $7.2 \times 10^{-10}$.
8.26
The algorithm terminated after 9127 iterations. The final point was $[0.99992, 0.99983]^\top$.

## 9. Newton's Method
9.1
a. We have $f'(x) = 4(x - x_0)^3$ and $f''(x) = 12(x - x_0)^2$. Hence, Newton's method is represented as

$$ x^{(k+1)} = x^{(k)} - \frac{x^{(k)} - x_0}{3}, $$

which upon rewriting becomes

$$ x^{(k+1)} - x_0 = \frac{2}{3}\left(x^{(k)} - x_0\right). $$

b. From part a, $y^{(k)} = |x^{(k)} - x_0| = (2/3)|x^{(k-1)} - x_0| = (2/3)y^{(k-1)}$.
c. From part b, we see that $y^{(k)} = (2/3)^k y^{(0)}$ and therefore $y^{(k)} \to 0$. Hence $x^{(k)} \to x_0$ for any $x^{(0)}$.
d. From part b, we have

$$ \lim_{k \to \infty} \frac{|x^{(k+1)} - x_0|}{|x^{(k)} - x_0|} = \lim_{k \to \infty} \frac{2}{3} = \frac{2}{3} > 0, $$

and hence the order of convergence is 1.
e. The theorem assumes that $f''(x^*) \neq 0$. However, in this problem, $x^* = x_0$, and $f''(x^*) = 0$.
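Parts a-d can be checked numerically with a short sketch ($x_0 = 2$ and the starting point are our own illustrative choices):

```python
# Newton's method on f(x) = (x - x0)^4; the error contracts by exactly 2/3 per step.
x0 = 2.0                 # illustrative value of our own
x = 5.0                  # starting point
errs = []
for _ in range(10):
    fp = 4 * (x - x0) ** 3       # f'(x)
    fpp = 12 * (x - x0) ** 2     # f''(x)
    x = x - fp / fpp             # Newton step
    errs.append(abs(x - x0))

ratios = [errs[k + 1] / errs[k] for k in range(len(errs) - 1)]
```

Every successive error ratio equals $2/3$, confirming the linear (order-1) convergence in part d.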
9.2
a. We have

$$ |x^{(k+1)} - x^*| = |x^{(k)} - x^* - \alpha_k f'(x^{(k)})|. $$

By Taylor's theorem applied to $f'$,

$$ x^{(k)} - x^* - \alpha_k f'(x^{(k)}) = (1 - \alpha_k f''(x^*))(x^{(k)} - x^*) + \alpha_k o(|x^{(k)} - x^*|) = o(|x^{(k)} - x^*|) + \alpha_k o(|x^{(k)} - x^*|) = (1 + \alpha_k)\,o(|x^{(k)} - x^*|), $$

where we used $\alpha_k \to 1/f''(x^*)$, so that $(1 - \alpha_k f''(x^*))(x^{(k)} - x^*) = o(|x^{(k)} - x^*|)$. Because $\{\alpha_k\}$ converges, it is bounded, and so $(1 + \alpha_k)o(|x^{(k)} - x^*|) = o(|x^{(k)} - x^*|)$. Combining the above with the first equation, we get

$$ |x^{(k+1)} - x^*| = o(|x^{(k)} - x^*|), $$

which implies that the order of convergence is superlinear.
b. In the secant algorithm, if $x^{(k)} \to x^*$, then $(f'(x^{(k)}) - f'(x^{(k-1)}))/(x^{(k)} - x^{(k-1)}) \to f''(x^*)$. Since the secant algorithm has the form $x^{(k+1)} = x^{(k)} - \alpha_k f'(x^{(k)})$ with $\alpha_k = (x^{(k)} - x^{(k-1)})/(f'(x^{(k)}) - f'(x^{(k-1)}))$, we deduce that $\alpha_k \to 1/f''(x^*)$. Hence, if we apply the secant algorithm to a function $f \in \mathcal{C}^2$, and it converges to a local minimizer $x^*$ such that $f''(x^*) \neq 0$, then the order of convergence is superlinear.
9.3
a. We compute $f'(x) = 4x^{1/3}/3$ and $f''(x) = 4x^{-2/3}/9$. Therefore Newton's algorithm for this problem takes the form

$$ x^{(k+1)} = x^{(k)} - \frac{4(x^{(k)})^{1/3}/3}{4(x^{(k)})^{-2/3}/9} = x^{(k)} - 3x^{(k)} = -2x^{(k)}. $$

b. From part a, we have $x^{(k)} = (-2)^k x^{(0)}$. Therefore, as long as $x^{(0)} \neq 0$, the sequence $\{x^{(k)}\}$ does not converge to 0.
9.4
a. Clearly $f(x) \geq 0$ for all $x$. We have

$$ f(x) = 0 \iff x_2 - x_1^2 = 0 \text{ and } 1 - x_1 = 0 \iff x = [1, 1]^\top. $$

Hence, $f(x) > f([1, 1]^\top)$ for all $x \neq [1, 1]^\top$, and therefore $[1, 1]^\top$ is the unique global minimizer.
b. We compute

$$ \nabla f(x) = \begin{bmatrix} 400x_1^3 - 400x_1 x_2 + 2x_1 - 2 \\ 200(x_2 - x_1^2) \end{bmatrix}, \qquad F(x) = \begin{bmatrix} 1200x_1^2 - 400x_2 + 2 & -400x_1 \\ -400x_1 & 200 \end{bmatrix}. $$

To apply Newton's method we use the inverse of the Hessian, which is

$$ F(x)^{-1} = \frac{1}{80000(x_1^2 - x_2) + 400}\begin{bmatrix} 200 & 400x_1 \\ 400x_1 & 1200x_1^2 - 400x_2 + 2 \end{bmatrix}. $$

Applying two iterations of Newton's method starting from $x^{(0)} = [0, 0]^\top$, we have $x^{(1)} = [1, 0]^\top$, $x^{(2)} = [1, 1]^\top$. Therefore, in this particular case, the method converges in two steps! We emphasize, however, that this fortuitous situation is by no means typical, and is highly dependent on the initial condition.
c. Applying the gradient algorithm $x^{(k+1)} = x^{(k)} - \alpha_k \nabla f(x^{(k)})$ with a fixed step size of $\alpha_k = 0.05$, we obtain $x^{(1)} = [0.1, 0]^\top$, $x^{(2)} = [0.17, 0.1]^\top$.
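The two-step convergence in part b can be reproduced as follows (a sketch using the gradient and Hessian computed above):

```python
import numpy as np

# Newton's method from x(0) = [0, 0] on f(x) = 100(x2 - x1^2)^2 + (1 - x1)^2.
def grad(x):
    return np.array([400*x[0]**3 - 400*x[0]*x[1] + 2*x[0] - 2,
                     200*(x[1] - x[0]**2)])

def hess(x):
    return np.array([[1200*x[0]**2 - 400*x[1] + 2, -400*x[0]],
                     [-400*x[0], 200.0]])

x = np.zeros(2)
pts = []
for _ in range(2):
    x = x - np.linalg.solve(hess(x), grad(x))   # Newton step
    pts.append(x.copy())
```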
9.5
If $x^{(0)} = x^*$, we are done. So, assume $x^{(0)} \neq x^*$. Since the standard Newton's method reaches the point $x^*$ in one step, we have

$$ f(x^*) = f(x^{(0)} - Q^{-1}g^{(0)}) = \min_x f(x) \leq f(x^{(0)} - \alpha Q^{-1}g^{(0)}) $$

for any $\alpha \geq 0$. Hence,

$$ \alpha_0 = \arg\min_{\alpha \geq 0} f(x^{(0)} - \alpha Q^{-1}g^{(0)}) = 1. $$

Hence, in this case, the modified Newton's algorithm is equivalent to the standard Newton's algorithm, and thus $x^{(1)} = x^*$.

## 10. Conjugate Direction Methods

10.1
We proceed by induction to show that for $k = 0, \ldots, n-1$, the set $\{d^{(0)}, \ldots, d^{(k)}\}$ is $Q$-conjugate. We assume that $d^{(i)} \neq 0$, $i = 0, \ldots, k$, so that $d^{(i)\top} Q d^{(i)} \neq 0$ and the algorithm is well defined.
For $k = 0$, the statement trivially holds. So, assume that the statement is true for $k < n-1$; i.e., $\{d^{(0)}, \ldots, d^{(k)}\}$ is $Q$-conjugate. We now show that $\{d^{(0)}, \ldots, d^{(k+1)}\}$ is $Q$-conjugate. For this, we need only to show that for each $j = 0, \ldots, k$, we have $d^{(k+1)\top} Q d^{(j)} = 0$. To this end,

$$ d^{(k+1)\top} Q d^{(j)} = \left(p^{(k+1)} - \sum_{i=0}^{k} \frac{p^{(k+1)\top} Q d^{(i)}}{d^{(i)\top} Q d^{(i)}} d^{(i)}\right)^{\!\top} Q d^{(j)} = p^{(k+1)\top} Q d^{(j)} - \sum_{i=0}^{k} \frac{p^{(k+1)\top} Q d^{(i)}}{d^{(i)\top} Q d^{(i)}} d^{(i)\top} Q d^{(j)}. $$

By the induction hypothesis, $d^{(i)\top} Q d^{(j)} = 0$ for $i \neq j$. Therefore,

$$ d^{(k+1)\top} Q d^{(j)} = p^{(k+1)\top} Q d^{(j)} - \frac{p^{(k+1)\top} Q d^{(j)}}{d^{(j)\top} Q d^{(j)}} d^{(j)\top} Q d^{(j)} = 0. $$

In the above, we have assumed that the vectors $d^{(k)}$ are nonzero (so that $d^{(k)\top} Q d^{(k)} \neq 0$ and the algorithm is well defined). To prove that this assumption holds, we use induction to show that $d^{(k)}$ is a (nonzero) linear combination of $p^{(0)}, \ldots, p^{(k)}$ (which immediately implies that $d^{(k)}$ is nonzero because of the linear independence of $p^{(0)}, \ldots, p^{(k)}$).
For $k = 0$, we have $d^{(0)} = p^{(0)}$ by definition. Assume that the result holds for $k < n-1$; i.e., $d^{(k)} = \sum_{j=0}^{k} \eta_j^{(k)} p^{(j)}$, where the coefficients $\eta_j^{(k)}$ are not all zero. Consider $d^{(k+1)}$, writing $\beta_i = p^{(k+1)\top} Q d^{(i)}/(d^{(i)\top} Q d^{(i)})$:

$$ d^{(k+1)} = p^{(k+1)} - \sum_{i=0}^{k} \beta_i d^{(i)} = p^{(k+1)} - \sum_{i=0}^{k} \beta_i \sum_{j=0}^{i} \eta_j^{(i)} p^{(j)} = p^{(k+1)} - \sum_{j=0}^{k} \left(\sum_{i=j}^{k} \beta_i \eta_j^{(i)}\right) p^{(j)}. $$

So, clearly $d^{(k+1)}$ is a nonzero linear combination of $p^{(0)}, \ldots, p^{(k+1)}$.
10.2
Let $k \in \{0, \ldots, n-1\}$ and $\phi_k(\alpha) = f(x^{(k)} + \alpha d^{(k)})$. By the chain rule, we have

$$ \phi_k'(\alpha_k) = \nabla f(x^{(k)} + \alpha_k d^{(k)})^\top d^{(k)} = g^{(k+1)\top} d^{(k)}. $$

Since $g^{(k+1)\top} d^{(k)} = 0$, we have $\phi_k'(\alpha_k) = 0$. Note that

$$ \phi_k(\alpha) = \frac{1}{2}\left(d^{(k)\top} Q d^{(k)}\right)\alpha^2 + \left(g^{(k)\top} d^{(k)}\right)\alpha + \text{constant}. $$

As $\phi_k$ is a quadratic function of $\alpha$ (with positive coefficient in the quadratic term), we conclude that $\alpha_k = \arg\min_\alpha f(x^{(k)} + \alpha d^{(k)})$.
Note that since $g^{(k)\top} d^{(k)} \neq 0$ is the coefficient of the linear term in $\phi_k$, we have $\alpha_k \neq 0$. For $i \in \{0, \ldots, k-1\}$, we have

$$ d^{(k)\top} Q d^{(i)} = \frac{1}{\alpha_k}(x^{(k+1)} - x^{(k)})^\top Q d^{(i)} = \frac{1}{\alpha_k}(g^{(k+1)} - g^{(k)})^\top d^{(i)} = \frac{1}{\alpha_k}\left(g^{(k+1)\top} d^{(i)} - g^{(k)\top} d^{(i)}\right) = 0 $$

by assumption, which completes the proof.
10.3
From the conjugate gradient algorithm we have

$$ d^{(k)} = -g^{(k)} + \frac{g^{(k)\top} Q d^{(k-1)}}{d^{(k-1)\top} Q d^{(k-1)}} d^{(k-1)}. $$

Premultiplying the above by $d^{(k)\top} Q$ and using the fact that $d^{(k)}$ and $d^{(k-1)}$ are $Q$-conjugate yields

$$ d^{(k)\top} Q d^{(k)} = -d^{(k)\top} Q g^{(k)} + \frac{g^{(k)\top} Q d^{(k-1)}}{d^{(k-1)\top} Q d^{(k-1)}} d^{(k)\top} Q d^{(k-1)} = -d^{(k)\top} Q g^{(k)}. $$

10.4
a. Since $Q$ is symmetric, there exists a set of vectors $\{d^{(1)}, \ldots, d^{(n)}\}$ such that $Qd^{(i)} = \lambda_i d^{(i)}$, $i = 1, \ldots, n$, and $d^{(i)\top} d^{(j)} = 0$, $j \neq i$, where the $\lambda_i$ are (real) eigenvalues of $Q$. Therefore, if $i \neq j$, we have

$$ d^{(i)\top} Q d^{(j)} = d^{(i)\top}(\lambda_j d^{(j)}) = \lambda_j \left(d^{(i)\top} d^{(j)}\right) = 0. $$

Hence the set $\{d^{(1)}, \ldots, d^{(n)}\}$ is $Q$-conjugate.
b. Define $\lambda_i = (d^{(i)\top} Q d^{(i)})/(d^{(i)\top} d^{(i)})$. Let

$$ D = \begin{bmatrix} d^{(1)\top} \\ \vdots \\ d^{(n)\top} \end{bmatrix}. $$

Since $Q$ is positive definite and the set $\{d^{(1)}, \ldots, d^{(n)}\}$ is $Q$-conjugate, then by Lemma 10.1 the set is also linearly independent. Hence, $D$ is nonsingular. By $Q$-conjugacy, we have that for all $j \neq i$, $d^{(j)\top} Q d^{(i)} = 0$. By assumption (mutual orthogonality), we also have $d^{(j)\top} \lambda_i d^{(i)} = \lambda_i d^{(j)\top} d^{(i)} = 0$ for $j \neq i$. Moreover, for each $i = 1, \ldots, n$, we have $d^{(i)\top} Q d^{(i)} = \lambda_i d^{(i)\top} d^{(i)} = d^{(i)\top}(\lambda_i d^{(i)})$ by the definition of $\lambda_i$. We can write the above conditions in matrix form:

$$ DQd^{(i)} = D(\lambda_i d^{(i)}). $$

Since $D$ is nonsingular, we have

$$ Qd^{(i)} = \lambda_i d^{(i)}, $$

which completes the proof.
10.5
We have

$$ d^{(k)\top} Q d^{(k+1)} = \beta_k d^{(k)\top} Q g^{(k+1)} + d^{(k)\top} Q d^{(k)}. $$

Hence, in order to have $d^{(k)\top} Q d^{(k+1)} = 0$, we need

$$ \beta_k = -\frac{d^{(k)\top} Q d^{(k)}}{d^{(k)\top} Q g^{(k+1)}}. $$

10.6
We use induction. For $k = 0$, we have

$$ d^{(0)} = a_0 g^{(0)} = -a_0 b \in \mathcal{V}_1. $$

Moreover, $x^{(0)} = 0 \in \mathcal{V}_0$. Hence, the proposition is true at $k = 0$. Assume it is true at $k$. To show that it is also true at $k+1$, note first that

$$ x^{(k+1)} = x^{(k)} + \alpha_k d^{(k)}. $$

Because $x^{(k)} \in \mathcal{V}_k \subset \mathcal{V}_{k+1}$ and $d^{(k)} \in \mathcal{V}_{k+1}$ by the induction hypothesis, we deduce that $x^{(k+1)} \in \mathcal{V}_{k+1}$. Moreover,

$$ d^{(k+1)} = a_k g^{(k+1)} + b_k d^{(k)} = a_k(Qx^{(k+1)} - b) + b_k d^{(k)}. $$

But because $x^{(k+1)} \in \mathcal{V}_{k+1}$, we have $Qx^{(k+1)} - b \in \mathcal{V}_{k+2}$. Moreover, $d^{(k)} \in \mathcal{V}_{k+1} \subset \mathcal{V}_{k+2}$. Hence, $d^{(k+1)} \in \mathcal{V}_{k+2}$. This completes the induction proof.
b. The conjugate gradient algorithm is an instance of the algorithm given in the question. By the expanding subspace theorem, we can say that in the conjugate gradient algorithm (with $x^{(0)} = 0$), at each $k$, $x^{(k)}$ is the global minimizer of $f$ on the Krylov subspace $\mathcal{V}_k$. Note that for all $k \geq n$, $\mathcal{V}_{k+1} = \mathcal{V}_k$, because of the Cayley-Hamilton theorem, which allows us to express $Q^n$ as a linear combination of $I, Q, \ldots, Q^{n-1}$.
10.7
Expanding $\phi(a)$ yields

$$ \phi(a) = \frac{1}{2}(x_0 + Da)^\top Q(x_0 + Da) - (x_0 + Da)^\top b = \frac{1}{2} a^\top \left(D^\top Q D\right) a + a^\top \left(D^\top Q x_0 - D^\top b\right) + \left(\frac{1}{2} x_0^\top Q x_0 - x_0^\top b\right). $$

Clearly $\phi$ is a quadratic function on $\mathbb{R}^r$. It remains to show that the matrix in the quadratic term, $D^\top Q D$, is positive definite. Since $Q > 0$, for any $a \in \mathbb{R}^r$, we have

$$ a^\top \left(D^\top Q D\right) a = (Da)^\top Q(Da) \geq 0 $$

and

$$ a^\top \left(D^\top Q D\right) a = (Da)^\top Q(Da) = 0 $$

if and only if $Da = 0$. Since $\operatorname{rank} D = r$, $Da = 0$ if and only if $a = 0$. Hence, the matrix $D^\top Q D$ is positive definite.
10.8
a. Let $0 \leq k \leq n-1$ and $0 \leq i \leq k$. Then,

$$ g^{(k+1)\top} g^{(i)} = g^{(k+1)\top}\left(\beta_{i-1} d^{(i-1)} - d^{(i)}\right) = \beta_{i-1} g^{(k+1)\top} d^{(i-1)} - g^{(k+1)\top} d^{(i)} = 0 $$

by Lemma 10.2.
b. Let $0 \leq k \leq n-1$ and $0 \leq i \leq k-1$. Then,

$$ g^{(k+1)\top} Q g^{(i)} = \left(\beta_k d^{(k)} - d^{(k+1)}\right)^\top Q \left(\beta_{i-1} d^{(i-1)} - d^{(i)}\right) = \beta_k \beta_{i-1} d^{(k)\top} Q d^{(i-1)} - \beta_k d^{(k)\top} Q d^{(i)} - \beta_{i-1} d^{(k+1)\top} Q d^{(i-1)} + d^{(k+1)\top} Q d^{(i)} = 0 $$

by $Q$-conjugacy of $d^{(k+1)}$, $d^{(k)}$, $d^{(i)}$, and $d^{(i-1)}$ (note that the iteration indices here are all distinct).
10.9
We represent $f$ as

$$ f(x) = \frac{1}{2} x^\top \begin{bmatrix} 5 & 3 \\ 3 & 2 \end{bmatrix} x - x^\top \begin{bmatrix} 0 \\ 1 \end{bmatrix} - 7. $$

The conjugate gradient algorithm is based on the following formulas:

$$ x^{(k+1)} = x^{(k)} + \alpha_k d^{(k)}, \qquad \alpha_k = -\frac{g^{(k)\top} d^{(k)}}{d^{(k)\top} Q d^{(k)}}, $$

$$ d^{(k+1)} = -g^{(k+1)} + \beta_k d^{(k)}, \qquad \beta_k = \frac{g^{(k+1)\top} Q d^{(k)}}{d^{(k)\top} Q d^{(k)}}. $$

We have, with $x^{(0)} = 0$,

$$ d^{(0)} = -g^{(0)} = -(Qx^{(0)} - b) = b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. $$

We then proceed to compute

$$ \alpha_0 = -\frac{g^{(0)\top} d^{(0)}}{d^{(0)\top} Q d^{(0)}} = \frac{1}{2}. $$

Hence,

$$ x^{(1)} = x^{(0)} + \alpha_0 d^{(0)} = \begin{bmatrix} 0 \\ 1/2 \end{bmatrix}. $$

We next proceed by evaluating the gradient of the objective function at $x^{(1)}$:

$$ g^{(1)} = Qx^{(1)} - b = \begin{bmatrix} 5 & 3 \\ 3 & 2 \end{bmatrix}\begin{bmatrix} 0 \\ 1/2 \end{bmatrix} - \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 3/2 \\ 0 \end{bmatrix}. $$

Because the gradient is nonzero, we can proceed with the next step, where we compute

$$ \beta_0 = \frac{g^{(1)\top} Q d^{(0)}}{d^{(0)\top} Q d^{(0)}} = \frac{9}{4}. $$

Hence, the direction $d^{(1)}$ is

$$ d^{(1)} = -g^{(1)} + \beta_0 d^{(0)} = -\begin{bmatrix} 3/2 \\ 0 \end{bmatrix} + \frac{9}{4}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} -3/2 \\ 9/4 \end{bmatrix}. $$

It is easy to verify that the directions $d^{(0)}$ and $d^{(1)}$ are $Q$-conjugate. Indeed,

$$ d^{(0)\top} Q d^{(1)} = \begin{bmatrix} 0 & 1 \end{bmatrix}\begin{bmatrix} 5 & 3 \\ 3 & 2 \end{bmatrix}\begin{bmatrix} -3/2 \\ 9/4 \end{bmatrix} = 0. $$
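The computations above can be verified numerically (a sketch):

```python
import numpy as np

# Exercise 10.9 data: Q = [[5,3],[3,2]], b = [0,1], x(0) = 0.
Q = np.array([[5.0, 3.0], [3.0, 2.0]])
b = np.array([0.0, 1.0])

x = np.zeros(2)
g = Q @ x - b
d = -g                                   # d(0) = b = [0, 1]
alpha0 = -(g @ d) / (d @ Q @ d)          # = 1/2
x = x + alpha0 * d                       # x(1) = [0, 1/2]
g1 = Q @ x - b                           # g(1) = [3/2, 0]
beta0 = (g1 @ Q @ d) / (d @ Q @ d)       # = 9/4
d1 = -g1 + beta0 * d                     # d(1) = [-3/2, 9/4]
conj = d @ Q @ d1                        # Q-conjugacy check: should be 0
```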

10.10
a. We have $f(x) = \frac{1}{2} x^\top Q x - b^\top x$, where

$$ Q = \begin{bmatrix} 5 & 2 \\ 2 & 1 \end{bmatrix}, \qquad b = \begin{bmatrix} 3 \\ 1 \end{bmatrix}. $$

b. Since $f$ is a quadratic function on $\mathbb{R}^2$, we need to perform only two iterations. For the first iteration we compute

$$ d^{(0)} = -g^{(0)} = [3, 1]^\top, \quad \alpha_0 = \frac{5}{29}, \quad x^{(1)} = [0.51724, 0.17241]^\top, \quad g^{(1)} = [-0.06897, 0.20690]^\top. $$

For the second iteration we compute

$$ \beta_0 = 0.0047534, \quad d^{(1)} = [0.08324, -0.20214]^\top, \quad \alpha_1 = 5.7952, \quad x^{(2)} = [1.000, -1.000]^\top. $$

c. The minimizer is given by $x^* = Q^{-1}b = [1, -1]^\top$, which agrees with part b.

10.11
A MATLAB routine for the conjugate gradient algorithm with options for different formulas of k is:
function [x,N]=conj_grad(grad,xnew,options);
% CONJ_GRAD(grad,x0);
% CONJ_GRAD(grad,x0,OPTIONS);
%
% x = CONJ_GRAD(grad,x0);
% x = CONJ_GRAD(grad,x0,OPTIONS);
%
% [x,N] = CONJ_GRAD(grad,x0);
% [x,N] = CONJ_GRAD(grad,x0,OPTIONS);
%
%The first variant finds the minimizer of a function whose gradient
%is described in grad (usually an M-file: grad.m), using initial point
%x0.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results, (default is no display: 0).
%OPTIONS(2) is a measure of the precision required for the final point.
%OPTIONS(3) is a measure of the precision required of the gradient.
%OPTIONS(5) specifies the formula for beta:
% 0=Powell;
% 1=Fletcher-Reeves;
% 2=Polak-Ribiere;
% 3=Hestenes-Stiefel.
%OPTIONS(14) is the maximum number of iterations.
%For more information type HELP FOPTIONS.
%
%The next two variants return the value of the final point.
%The last two variants return a vector of the final point and the
%number of iterations.

if nargin ~= 3
options = [];
if nargin ~= 2
disp('Wrong number of arguments.');
return;
end

end

numvars = length(xnew);
if length(options) >= 14
if options(14)==0
options(14)=1000*numvars;
end
else
options(14)=1000*numvars;
end

clc;
format compact;
format short e;

options = foptions(options);
print = options(1);
epsilon_x = options(2);
epsilon_g = options(3);
max_iter=options(14);

g_curr=feval(grad,xnew);
if norm(g_curr) <= epsilon_g
disp('Terminating: Norm of initial gradient less than');
disp(epsilon_g);
return;
end %if

d=-g_curr;
reset_cnt = 0;
for k = 1:max_iter,

xcurr=xnew;
alpha=linesearch_secant(grad,xcurr,d);
%alpha=-(d'*g_curr)/(d'*Q*d);
xnew = xcurr+alpha*d;

if print,
disp('Iteration number k =')
disp(k); %print iteration index k
disp('alpha =');
disp(alpha); %print alpha
disp('Gradient = ');
disp(g_curr); %print gradient
disp('New point =');
disp(xnew); %print new point
end %if

if norm(xnew-xcurr) <= epsilon_x*norm(xcurr)
disp('Terminating: Norm of difference between iterates less than');
disp(epsilon_x);
break;
end %if

g_old=g_curr;
g_curr=feval(grad,xnew);

if norm(g_curr) <= epsilon_g
disp('Terminating: Norm of gradient less than');
disp(epsilon_g);
break;
end %if

reset_cnt = reset_cnt+1;
if reset_cnt == 3*numvars
d=-g_curr;
reset_cnt = 0;
else
if options(5)==0 %Powell
beta = max(0,(g_curr'*(g_curr-g_old))/(g_old'*g_old));
elseif options(5)==1 %Fletcher-Reeves
beta = (g_curr'*g_curr)/(g_old'*g_old);
elseif options(5)==2 %Polak-Ribiere
beta = (g_curr'*(g_curr-g_old))/(g_old'*g_old);
else %Hestenes-Stiefel
beta = (g_curr'*(g_curr-g_old))/(d'*(g_curr-g_old));
end %if
d=-g_curr+beta*d;
end

if print,
disp('New beta =');
disp(beta);
disp('New d =');
disp(d);
end

if k == max_iter
disp('Terminating with maximum number of iterations');
end %if
end %for

if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp('Final point =');
disp(xnew);
disp('Number of iterations =');
disp(k);
end %if
%------------------------------------------------------------------

We created the following M-file, g.m, for the gradient of Rosenbrocks function:
function y=g(x)
y=[-400*(x(2)-x(1).^2)*x(1)-2*(1-x(1)); 200*(x(2)-x(1).^2)];

We tested the above routine as follows:

>> options(2)=10^(-7);
>> options(3)=10^(-7);
>> options(14)=100;
>> options(5)=0;
>> conj_grad('g',[-2;2],options);

Terminating: Norm of difference between iterates less than
1.0000e-07
Final_Point =
1.0000e+00 1.0000e+00
Number_of_iteration =
8
>> options(5)=1;
>> conj_grad(g,[-2;2],options);

Terminating: Norm of difference between iterates less than

1.0000e-07
Final_Point =
1.0000e+00 1.0000e+00
Number_of_iteration =
10
>> options(5)=2;
>> conj_grad(g,[-2;2],options);

Terminating: Norm of difference between iterates less than

1.0000e-07
Final_Point =
1.0000e+00 1.0000e+00
Number_of_iteration =
8
>> options(5)=3;
>> conj_grad(g,[-2;2],options);

Terminating: Norm of difference between iterates less than

1.0000e-07
Final_Point =
1.0000e+00 1.0000e+00
Number_of_iteration =
8

The reader is cautioned not to draw any conclusions about the superiority or inferiority of any of the formulas for $\beta_k$ based only on the above single numerical experiment.

## 11. Quasi-Newton Methods

11.1
a. Let

$$ \phi(\alpha) = f(x^{(k)} + \alpha d^{(k)}). $$

Then, using the chain rule, we obtain

$$ \phi'(\alpha) = d^{(k)\top} \nabla f(x^{(k)} + \alpha d^{(k)}). $$

Hence

$$ \phi'(0) = d^{(k)\top} g^{(k)}. $$

Since $\phi'$ is continuous, if $d^{(k)\top} g^{(k)} < 0$, there exists $\bar{\alpha} > 0$ such that for all $\alpha \in (0, \bar{\alpha}]$, $\phi(\alpha) < \phi(0)$, i.e., $f(x^{(k)} + \alpha d^{(k)}) < f(x^{(k)})$.
b. By part a, $\phi(\alpha) < \phi(0)$ for all $\alpha \in (0, \bar{\alpha}]$. Hence,

$$ \alpha_k = \arg\min_{\alpha \geq 0} \phi(\alpha) \neq 0, $$

which implies that $\alpha_k > 0$.
c. Now,

$$ d^{(k)\top} g^{(k+1)} = d^{(k)\top} \nabla f(x^{(k)} + \alpha_k d^{(k)}) = \phi_k'(\alpha_k). $$

Since $\alpha_k = \arg\min_{\alpha \geq 0} f(x^{(k)} + \alpha d^{(k)}) > 0$, we have $\phi_k'(\alpha_k) = 0$. Hence, $g^{(k+1)\top} d^{(k)} = 0$.
d.
i. We have $d^{(k)} = -g^{(k)}$. Hence, $d^{(k)\top} g^{(k)} = -\|g^{(k)}\|^2$. If $g^{(k)} \neq 0$, then $\|g^{(k)}\|^2 > 0$, and hence $d^{(k)\top} g^{(k)} < 0$.
ii. We have $d^{(k)} = -F(x^{(k)})^{-1} g^{(k)}$. Since $F(x^{(k)}) > 0$, we also have $F(x^{(k)})^{-1} > 0$. Therefore, $d^{(k)\top} g^{(k)} = -g^{(k)\top} F(x^{(k)})^{-1} g^{(k)} < 0$ if $g^{(k)} \neq 0$.
iii. We have

$$ d^{(k)} = -g^{(k)} + \beta_{k-1} d^{(k-1)}. $$

Hence,

$$ d^{(k)\top} g^{(k)} = -\|g^{(k)}\|^2 + \beta_{k-1} d^{(k-1)\top} g^{(k)}. $$

By part c, $d^{(k-1)\top} g^{(k)} = 0$. Hence, if $g^{(k)} \neq 0$, then $\|g^{(k)}\|^2 > 0$, and

$$ d^{(k)\top} g^{(k)} = -\|g^{(k)}\|^2 < 0. $$

iv. We have $d^{(k)} = -H_k g^{(k)}$. Therefore, if $H_k > 0$ and $g^{(k)} \neq 0$, then $d^{(k)\top} g^{(k)} = -g^{(k)\top} H_k g^{(k)} < 0$.
e. We have

$$ d^{(k)\top} g^{(k+1)} = d^{(k)\top}(Qx^{(k+1)} - b) = d^{(k)\top}(Q(x^{(k)} + \alpha_k d^{(k)}) - b) = \alpha_k d^{(k)\top} Q d^{(k)} + d^{(k)\top} g^{(k)}. $$

By part c, $d^{(k)\top} g^{(k+1)} = 0$, which implies

$$ \alpha_k = -\frac{d^{(k)\top} g^{(k)}}{d^{(k)\top} Q d^{(k)}}. $$

11.2
Yes, because:
1. The search direction is of the form $d^{(k)} = -H_k \nabla f(x^{(k)})$ for the matrix $H_k = F(x^{(k)})^{-1}$;
2. The matrix $H_k = F(x^{(k)})^{-1}$ is symmetric for $f \in \mathcal{C}^2$;
3. If $f$ is quadratic, then the quasi-Newton condition is satisfied: $H_{k+1} \Delta g^{(i)} = \Delta x^{(i)}$, $0 \leq i \leq k$. To see this, note that if the Hessian is $Q$, then $Q\Delta x^{(i)} = \Delta g^{(i)}$. Multiplying both sides by $H_{k+1} = Q^{-1}$, we obtain the desired result.

11.3
a. We have

$$ f(x^{(k)} + \alpha d^{(k)}) = \frac{1}{2}\left(x^{(k)} + \alpha d^{(k)}\right)^\top Q \left(x^{(k)} + \alpha d^{(k)}\right) - \left(x^{(k)} + \alpha d^{(k)}\right)^\top b + c. $$

Using the chain rule, we obtain

$$ \frac{d}{d\alpha} f(x^{(k)} + \alpha d^{(k)}) = \left(x^{(k)} + \alpha d^{(k)}\right)^\top Q d^{(k)} - d^{(k)\top} b. $$

Equating the above to zero and solving for $\alpha$ gives

$$ -\left(x^{(k)\top} Q - b^\top\right) d^{(k)} = \alpha\, d^{(k)\top} Q d^{(k)}. $$

Taking into account that $g^{(k)\top} = x^{(k)\top} Q - b^\top$, that $d^{(k)} = -H_k g^{(k)}$, and that $d^{(k)\top} Q d^{(k)} > 0$ for $g^{(k)} \neq 0$, we obtain

$$ \alpha_k = -\frac{g^{(k)\top} d^{(k)}}{d^{(k)\top} Q d^{(k)}} = \frac{g^{(k)\top} H_k g^{(k)}}{d^{(k)\top} Q d^{(k)}}. $$

b. The matrix $Q$ is symmetric and positive definite; hence $\alpha_k > 0$ if $H_k = H_k^\top > 0$.
11.4
a. The appropriate choice is $H = F(x^*)^{-1}$. To show this, we can apply the same argument as in the proof of the theorem on the convergence of Newton's method. (We won't repeat it here.)
b. Yes (provided we incorporate the usual step size). Indeed, if we apply the algorithm with the choice of $H$ in part a, then when applied to a quadratic with Hessian $Q$, the algorithm uses $H = Q^{-1}$, which definitely satisfies the quasi-Newton condition. In fact, the algorithm then behaves just like Newton's algorithm.
11.5
Our objective is to minimize the quadratic
$$f(x) = \frac{1}{2}x^\top Qx - x^\top b + c.$$
We first compute the gradient $\nabla f$ and evaluate it at $x^{(0)}$:
$$\nabla f\left(x^{(0)}\right) = g^{(0)} = Qx^{(0)} - b = \begin{bmatrix} -1 \\ -1 \end{bmatrix}.$$
It is a nonzero vector, so we proceed with the first iteration. Let $H_0 = I_2$. Then,
$$d^{(0)} = -H_0g^{(0)} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
The step size $\alpha_0$ is
$$\alpha_0 = -\frac{g^{(0)\top}d^{(0)}}{d^{(0)\top}Qd^{(0)}} = -\frac{\begin{bmatrix} -1 & -1 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix}}{\begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix}} = \frac{2}{3}.$$
Hence,
$$x^{(1)} = x^{(0)} + \alpha_0 d^{(0)} = \begin{bmatrix} 2/3 \\ 2/3 \end{bmatrix}.$$
We evaluate the gradient $\nabla f$ at $x^{(1)}$ to obtain
$$\nabla f\left(x^{(1)}\right) = g^{(1)} = Qx^{(1)} - b = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} 2/3 \\ 2/3 \end{bmatrix} - \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1/3 \\ 1/3 \end{bmatrix}.$$
It is a nonzero vector, so we proceed with the second iteration. We compute $H_1$, where
$$H_1 = H_0 + \frac{\left(\Delta x^{(0)} - H_0\Delta g^{(0)}\right)\left(\Delta x^{(0)} - H_0\Delta g^{(0)}\right)^\top}{\Delta g^{(0)\top}\left(\Delta x^{(0)} - H_0\Delta g^{(0)}\right)}.$$
To find $H_1$ we need to compute
$$\Delta x^{(0)} = x^{(1)} - x^{(0)} = \begin{bmatrix} 2/3 \\ 2/3 \end{bmatrix} \quad\text{and}\quad \Delta g^{(0)} = g^{(1)} - g^{(0)} = \begin{bmatrix} 2/3 \\ 4/3 \end{bmatrix}.$$
Using the above, we determine
$$\Delta x^{(0)} - H_0\Delta g^{(0)} = \begin{bmatrix} 0 \\ -2/3 \end{bmatrix} \quad\text{and}\quad \Delta g^{(0)\top}\left(\Delta x^{(0)} - H_0\Delta g^{(0)}\right) = -\frac{8}{9}.$$
Then, we obtain
$$H_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \frac{1}{-8/9}\begin{bmatrix} 0 & 0 \\ 0 & 4/9 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1/2 \end{bmatrix}$$
and
$$d^{(1)} = -H_1g^{(1)} = \begin{bmatrix} 1/3 \\ -1/6 \end{bmatrix}.$$
We next compute
$$\alpha_1 = -\frac{g^{(1)\top}d^{(1)}}{d^{(1)\top}Qd^{(1)}} = 1.$$
Therefore,
$$x^{(2)} = x^{(1)} + \alpha_1 d^{(1)} = \begin{bmatrix} 1 \\ 1/2 \end{bmatrix}.$$
Note that $g^{(2)} = Qx^{(2)} - b = 0$, as expected.
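As a quick numerical cross-check of the iteration above, the following sketch (Python/NumPy; the quadratic data $Q$ and $b$ are read off the solution, and the initial point $x^{(0)} = 0$ is an assumption consistent with $g^{(0)} = [-1, -1]^\top$) runs two steps of the rank one correction algorithm:

```python
import numpy as np

# Two steps of the rank one correction quasi-Newton algorithm on
# f(x) = 0.5 x'Qx - x'b + c, with the data used in this exercise.
Q = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 1.0])

x = np.zeros(2)   # assumed initial point x^(0) = 0
H = np.eye(2)     # H_0 = I
for _ in range(2):
    g = Q @ x - b                      # g^(k) = Q x^(k) - b
    d = -H @ g                         # search direction
    alpha = -(g @ d) / (d @ Q @ d)     # exact line search on a quadratic
    x_new = x + alpha * d
    dg = (Q @ x_new - b) - g           # Delta g^(k)
    u = (x_new - x) - H @ dg           # Delta x^(k) - H_k Delta g^(k)
    if abs(dg @ u) > 1e-12:            # skip the update when the correction vanishes
        H = H + np.outer(u, u) / (dg @ u)
    x = x_new

print(x)          # x^(2), expected [1, 1/2]
print(Q @ x - b)  # g^(2), expected [0, 0]
```

The second iterate matches the hand computation, and the gradient vanishes there.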
11.6
We are guaranteed that the step size satisfies $\alpha_k > 0$ if the search direction is a descent direction, i.e., the search direction $d^{(k)} = -M_k\nabla f(x^{(k)})$ satisfies $d^{(k)\top}\nabla f(x^{(k)}) < 0$ (see Exercise 11.1). Thus, the condition on $M_k$ that guarantees $\alpha_k > 0$ is $\nabla f(x^{(k)})^\top M_k\nabla f(x^{(k)}) > 0$, which corresponds to $1 + a > 0$, or $a > -1$. (Note that if $a \leq -1$, the search direction is not a descent direction, and thus we cannot guarantee that $\alpha_k > 0$.)
11.7
Let $x \in \mathbb{R}^n$, $x \neq 0$. Then
\begin{align*}
x^\top H_{k+1}x &= x^\top H_kx + x^\top\left(\frac{\left(\Delta x^{(k)} - H_k\Delta g^{(k)}\right)\left(\Delta x^{(k)} - H_k\Delta g^{(k)}\right)^\top}{\Delta g^{(k)\top}\left(\Delta x^{(k)} - H_k\Delta g^{(k)}\right)}\right)x \\
&= x^\top H_kx + \frac{\left(x^\top\left(\Delta x^{(k)} - H_k\Delta g^{(k)}\right)\right)^2}{\Delta g^{(k)\top}\left(\Delta x^{(k)} - H_k\Delta g^{(k)}\right)}.
\end{align*}
Note that since $H_k > 0$, we have $x^\top H_kx > 0$. Hence, if $\Delta g^{(k)\top}(\Delta x^{(k)} - H_k\Delta g^{(k)}) > 0$, then $x^\top H_{k+1}x > 0$.
11.8
The complement of the rank one update equation is
$$B_{k+1} = B_k + \frac{\left(\Delta g^{(k)} - B_k\Delta x^{(k)}\right)\left(\Delta g^{(k)} - B_k\Delta x^{(k)}\right)^\top}{\Delta x^{(k)\top}\left(\Delta g^{(k)} - B_k\Delta x^{(k)}\right)}.$$
Using the matrix inverse formula, we get
\begin{align*}
B_{k+1}^{-1} &= B_k^{-1} - \frac{B_k^{-1}\left(\Delta g^{(k)} - B_k\Delta x^{(k)}\right)\left(\Delta g^{(k)} - B_k\Delta x^{(k)}\right)^\top B_k^{-1}}{\Delta x^{(k)\top}\left(\Delta g^{(k)} - B_k\Delta x^{(k)}\right) + \left(\Delta g^{(k)} - B_k\Delta x^{(k)}\right)^\top B_k^{-1}\left(\Delta g^{(k)} - B_k\Delta x^{(k)}\right)} \\
&= B_k^{-1} + \frac{\left(\Delta x^{(k)} - B_k^{-1}\Delta g^{(k)}\right)\left(\Delta x^{(k)} - B_k^{-1}\Delta g^{(k)}\right)^\top}{\Delta g^{(k)\top}\left(\Delta x^{(k)} - B_k^{-1}\Delta g^{(k)}\right)}.
\end{align*}
Substituting $H_k$ for $B_k^{-1}$, we get a formula identical to the rank one update equation. This should not be surprising, since there is only one update equation involving a rank one correction that satisfies the quasi-Newton condition.
11.9
We first compute the gradient $\nabla f$ and evaluate it at $x^{(0)}$:
$$\nabla f\left(x^{(0)}\right) = g^{(0)} = Qx^{(0)} - b = \begin{bmatrix} -1 \\ -1 \end{bmatrix}.$$
Let $H_0 = I_2$. Then,
$$d^{(0)} = -H_0g^{(0)} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
The step size $\alpha_0$ is
$$\alpha_0 = -\frac{g^{(0)\top}d^{(0)}}{d^{(0)\top}Qd^{(0)}} = -\frac{\begin{bmatrix} -1 & -1 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix}}{\begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix}} = \frac{2}{3}.$$
Hence,
$$x^{(1)} = x^{(0)} + \alpha_0 d^{(0)} = \begin{bmatrix} 2/3 \\ 2/3 \end{bmatrix}.$$
We evaluate the gradient $\nabla f$ at $x^{(1)}$ to obtain
$$\nabla f\left(x^{(1)}\right) = g^{(1)} = Qx^{(1)} - b = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}\begin{bmatrix} 2/3 \\ 2/3 \end{bmatrix} - \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1/3 \\ 1/3 \end{bmatrix}.$$
It is a nonzero vector, so we proceed with the second iteration. We compute $H_1$, where
$$H_1 = H_0 + \frac{\left(\Delta x^{(0)} - H_0\Delta g^{(0)}\right)\left(\Delta x^{(0)} - H_0\Delta g^{(0)}\right)^\top}{\Delta g^{(0)\top}\left(\Delta x^{(0)} - H_0\Delta g^{(0)}\right)}.$$
To find $H_1$ we need
$$\Delta x^{(0)} = x^{(1)} - x^{(0)} = \begin{bmatrix} 2/3 \\ 2/3 \end{bmatrix} \quad\text{and}\quad \Delta g^{(0)} = g^{(1)} - g^{(0)} = \begin{bmatrix} 2/3 \\ 4/3 \end{bmatrix}.$$
Using the above, we determine
$$\Delta x^{(0)} - H_0\Delta g^{(0)} = \begin{bmatrix} 0 \\ -2/3 \end{bmatrix} \quad\text{and}\quad \Delta g^{(0)\top}\left(\Delta x^{(0)} - H_0\Delta g^{(0)}\right) = -\frac{8}{9}.$$
Then, we obtain
$$H_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \frac{1}{-8/9}\begin{bmatrix} 0 & 0 \\ 0 & 4/9 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1/2 \end{bmatrix}$$
and
$$d^{(1)} = -H_1g^{(1)} = \begin{bmatrix} 1/3 \\ -1/6 \end{bmatrix}.$$
Note that $d^{(0)\top}Qd^{(1)} = 0$, that is, $d^{(0)}$ and $d^{(1)}$ are $Q$-conjugate.
11.10
The calculations are similar until we get to the second step, where we obtain
$$H_1 = \begin{bmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{bmatrix}, \qquad d^{(1)} = -H_1g^{(1)} = 0.$$
So the algorithm gets stuck at this point, which illustrates that it doesn't work.
11.11
a. Since $f$ is quadratic and $\alpha_k = \arg\min_{\alpha \geq 0} f(x^{(k)} + \alpha d^{(k)})$, then
$$\alpha_k = -\frac{g^{(k)\top}d^{(k)}}{d^{(k)\top}Qd^{(k)}}.$$
b. Now, $d^{(k)} = -H_kg^{(k)}$, where $H_k = H_k^\top > 0$. Substituting this into the formula for $\alpha_k$ in part a yields
$$\alpha_k = \frac{g^{(k)\top}H_kg^{(k)}}{d^{(k)\top}Qd^{(k)}} > 0.$$

11.12
(Our solution to this problem is based on a solution that was furnished to us by Michael Mera, a student in ECE 580 at Purdue in Spring 2005.) To proceed, we recall the formula of Lemma 11.1,
$$\left(A + uv^\top\right)^{-1} = A^{-1} - \frac{\left(A^{-1}u\right)\left(v^\top A^{-1}\right)}{1 + v^\top A^{-1}u}$$
for $1 + v^\top A^{-1}u \neq 0$. Recall the definitions from the hint,
$$A_0 = B_k, \quad u_0 = \frac{\Delta g^{(k)}}{\Delta g^{(k)\top}\Delta x^{(k)}}, \quad v_0^\top = \Delta g^{(k)\top},$$
and
$$A_1 = B_k + \frac{\Delta g^{(k)}\Delta g^{(k)\top}}{\Delta g^{(k)\top}\Delta x^{(k)}} = A_0 + u_0v_0^\top, \quad u_1 = -\frac{B_k\Delta x^{(k)}}{\Delta x^{(k)\top}B_k\Delta x^{(k)}}, \quad v_1^\top = \Delta x^{(k)\top}B_k.$$
Using the above notation, we represent $B_{k+1}$ as
$$B_{k+1} = A_0 + u_0v_0^\top + u_1v_1^\top = A_1 + u_1v_1^\top.$$
Applying Lemma 11.1 gives
$$H_{k+1}^{BFGS} = \left(B_{k+1}\right)^{-1} = \left(A_1 + u_1v_1^\top\right)^{-1} = A_1^{-1} - \frac{A_1^{-1}u_1v_1^\top A_1^{-1}}{1 + v_1^\top A_1^{-1}u_1}.$$
Applying Lemma 11.1 once more, this time to $A_1 = A_0 + u_0v_0^\top$, yields
$$A_1^{-1} = A_0^{-1} - \frac{A_0^{-1}u_0v_0^\top A_0^{-1}}{1 + v_0^\top A_0^{-1}u_0}.$$
Note that $A_0 = B_k$, and hence $A_0^{-1} = B_k^{-1} = H_k$. Using this and the notation introduced at the beginning of the solution, we obtain
$$A_1^{-1} = H_k - \frac{H_k\Delta g^{(k)}\Delta g^{(k)\top}H_k}{\Delta g^{(k)\top}\Delta x^{(k)} + \Delta g^{(k)\top}H_k\Delta g^{(k)}}.$$
We substitute this expression for $A_1^{-1}$, together with the definitions of $u_1$ and $v_1$, into the formula for $H_{k+1}^{BFGS}$ above, and perform the indicated multiplications, taking into account that $H_k = B_k^{-1}$ and hence
$$H_kB_k = B_kH_k = I_n.$$
After cancelling the terms that appear in both the numerator and the denominator of the last quotient, the two terms involving $H_k\Delta g^{(k)}\Delta g^{(k)\top}H_k$ cancel out each other, and we are left with
$$H_{k+1}^{BFGS} = H_k + \frac{\Delta x^{(k)}\Delta x^{(k)\top}\left(\Delta g^{(k)\top}\Delta x^{(k)} + \Delta g^{(k)\top}H_k\Delta g^{(k)}\right)}{\left(\Delta x^{(k)\top}\Delta g^{(k)}\right)\left(\Delta g^{(k)\top}\Delta x^{(k)}\right)} - \frac{H_k\Delta g^{(k)}\Delta x^{(k)\top} + \Delta x^{(k)}\Delta g^{(k)\top}H_k}{\Delta x^{(k)\top}\Delta g^{(k)}}.$$
Representing the second term on the right-hand side in an alternative manner, we obtain
$$H_{k+1}^{BFGS} = H_k + \left(1 + \frac{\Delta g^{(k)\top}H_k\Delta g^{(k)}}{\Delta g^{(k)\top}\Delta x^{(k)}}\right)\frac{\Delta x^{(k)}\Delta x^{(k)\top}}{\Delta x^{(k)\top}\Delta g^{(k)}} - \frac{H_k\Delta g^{(k)}\Delta x^{(k)\top} + \Delta x^{(k)}\Delta g^{(k)\top}H_k}{\Delta x^{(k)\top}\Delta g^{(k)}},$$
which is the desired BFGS update formula.
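The identity just derived can be sanity-checked numerically: inverting the direct (complementary) BFGS update of $B_k$ must reproduce the $H$-form update above. A minimal sketch (Python/NumPy; the matrix $B_k$ and the pair $(\Delta x, \Delta g)$ are randomly generated, not taken from the text):

```python
import numpy as np

# Check: inv(BFGS update of B) equals the H-form BFGS update of H = inv(B).
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)        # a symmetric positive definite B_k
H = np.linalg.inv(B)               # H_k = B_k^{-1}
dx = rng.standard_normal(n)        # Delta x^(k)
dg = rng.standard_normal(n)        # Delta g^(k)
if dx @ dg < 0:
    dg = -dg                       # keep the curvature term dx'dg positive

# direct (B-form) BFGS update
Bdx = B @ dx
B_next = B + np.outer(dg, dg) / (dg @ dx) - np.outer(Bdx, Bdx) / (dx @ Bdx)

# H-form update derived in the text
Hdg = H @ dg
H_next = (H
          + (1 + dg @ Hdg / (dg @ dx)) * np.outer(dx, dx) / (dx @ dg)
          - (np.outer(Hdg, dx) + np.outer(dx, Hdg)) / (dg @ dx))

print(np.allclose(H_next, np.linalg.inv(B_next)))  # True
```

Since the equality is a matrix identity, it holds for any positive definite $B_k$ and any pair with $\Delta x^\top\Delta g > 0$.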
11.13
The first step for both algorithms is clearly the same, since in either case we have
$$d^{(0)} = -g^{(0)}.$$
For the second step,
\begin{align*}
d^{(1)} &= -H_1g^{(1)} \\
&= -\left[I_n + \left(1 + \frac{\Delta g^{(0)\top}\Delta g^{(0)}}{\Delta g^{(0)\top}\Delta x^{(0)}}\right)\frac{\Delta x^{(0)}\Delta x^{(0)\top}}{\Delta x^{(0)\top}\Delta g^{(0)}} - \frac{\Delta g^{(0)}\Delta x^{(0)\top} + \left(\Delta g^{(0)}\Delta x^{(0)\top}\right)^\top}{\Delta g^{(0)\top}\Delta x^{(0)}}\right]g^{(1)} \\
&= -g^{(1)} - \left(1 + \frac{\Delta g^{(0)\top}\Delta g^{(0)}}{\Delta g^{(0)\top}\Delta x^{(0)}}\right)\frac{\Delta x^{(0)\top}g^{(1)}}{\Delta x^{(0)\top}\Delta g^{(0)}}\Delta x^{(0)} + \frac{\Delta g^{(0)}\Delta x^{(0)\top}g^{(1)} + \Delta x^{(0)}\Delta g^{(0)\top}g^{(1)}}{\Delta g^{(0)\top}\Delta x^{(0)}}.
\end{align*}
Since the line search is exact, we have
$$\Delta x^{(0)\top}g^{(1)} = \alpha_0\,d^{(0)\top}g^{(1)} = 0.$$
Hence,
\begin{align*}
d^{(1)} &= -g^{(1)} + \left(\frac{\Delta g^{(0)\top}g^{(1)}}{\Delta g^{(0)\top}\Delta x^{(0)}}\right)\Delta x^{(0)} \\
&= -g^{(1)} + \left(\frac{g^{(1)\top}\Delta g^{(0)}}{\Delta g^{(0)\top}d^{(0)}}\right)d^{(0)} \\
&= -g^{(1)} + \beta_0 d^{(0)},
\end{align*}
where
$$\beta_0 = \frac{g^{(1)\top}\Delta g^{(0)}}{d^{(0)\top}\Delta g^{(0)}} = \frac{g^{(1)\top}\left(g^{(1)} - g^{(0)}\right)}{d^{(0)\top}\left(g^{(1)} - g^{(0)}\right)}$$
is the Hestenes-Stiefel update formula for $\beta_0$. Since $d^{(0)} = -g^{(0)}$, and $g^{(1)\top}g^{(0)} = 0$, we have
$$\beta_0 = \frac{g^{(1)\top}\left(g^{(1)} - g^{(0)}\right)}{g^{(0)\top}g^{(0)}},$$
which is the Polak-Ribiere formula. Applying $g^{(1)\top}g^{(0)} = 0$ again, we get
$$\beta_0 = \frac{g^{(1)\top}g^{(1)}}{g^{(0)\top}g^{(0)}},$$
which is the Fletcher-Reeves formula.
11.14
a. Suppose the three conditions hold whenever applied to a quadratic. We need to show that when applied to a quadratic, for $k = 0, \dots, n-1$ and $i = 0, \dots, k$, $H_{k+1}\Delta g^{(i)} = \Delta x^{(i)}$. For $i = k$, we have
\begin{align*}
H_{k+1}\Delta g^{(k)} &= H_k\Delta g^{(k)} + U_k\Delta g^{(k)} &&\text{by condition 1} \\
&= H_k\Delta g^{(k)} + \Delta x^{(k)} - H_k\Delta g^{(k)} &&\text{by condition 2} \\
&= \Delta x^{(k)},
\end{align*}
as required. For the rest of the proof ($i = 0, \dots, k-1$), we use induction on $k$.
For $k = 0$, there is nothing to prove (covered by the $i = k$ case). So suppose the result holds for $k - 1$. To show the result for $k$, first fix $i \in \{0, \dots, k-1\}$. We have
\begin{align*}
H_{k+1}\Delta g^{(i)} &= H_k\Delta g^{(i)} + U_k\Delta g^{(i)} \\
&= \Delta x^{(i)} + U_k\Delta g^{(i)} &&\text{by the induction hypothesis} \\
&= \Delta x^{(i)} + a^{(k)}\Delta x^{(k)\top}\Delta g^{(i)} + b^{(k)}\Delta g^{(k)\top}H_k\Delta g^{(i)} &&\text{by condition 3.}
\end{align*}
So it suffices to show that the second and third terms are both $0$. For the second term,
$$\Delta x^{(k)\top}\Delta g^{(i)} = \Delta x^{(k)\top}Q\Delta x^{(i)} = \alpha_k\alpha_i\,d^{(k)\top}Qd^{(i)} = 0$$
because of the induction hypothesis, which implies $Q$-conjugacy (where $Q$ is the Hessian of the given quadratic). Similarly, for the third term,
\begin{align*}
\Delta g^{(k)\top}H_k\Delta g^{(i)} &= \Delta g^{(k)\top}\Delta x^{(i)} &&\text{by the induction hypothesis} \\
&= \Delta x^{(k)\top}Q\Delta x^{(i)} \\
&= \alpha_k\alpha_i\,d^{(k)\top}Qd^{(i)} \\
&= 0,
\end{align*}
again because of the induction hypothesis, which implies $Q$-conjugacy. This completes the proof.
b. All three algorithms satisfy the conditions in part a. Condition 1 holds, as described in class. Condition 2 is straightforward to check for all three algorithms. For the rank-one and DFP algorithms, this is shown in the book. For BFGS, some simple matrix algebra establishes that it holds. Condition 3 holds by appropriate definition of the vectors $a^{(k)}$ and $b^{(k)}$. In particular, for the rank-one algorithm,
$$a^{(k)} = \frac{\Delta x^{(k)} - H_k\Delta g^{(k)}}{\left(\Delta x^{(k)} - H_k\Delta g^{(k)}\right)^\top\Delta g^{(k)}}, \qquad b^{(k)} = -\frac{\Delta x^{(k)} - H_k\Delta g^{(k)}}{\left(\Delta x^{(k)} - H_k\Delta g^{(k)}\right)^\top\Delta g^{(k)}}.$$
For the DFP algorithm,
$$a^{(k)} = \frac{\Delta x^{(k)}}{\Delta x^{(k)\top}\Delta g^{(k)}}, \qquad b^{(k)} = -\frac{H_k\Delta g^{(k)}}{\Delta g^{(k)\top}H_k\Delta g^{(k)}}.$$
Finally, for the BFGS algorithm,
$$a^{(k)} = \left(1 + \frac{\Delta g^{(k)\top}H_k\Delta g^{(k)}}{\Delta g^{(k)\top}\Delta x^{(k)}}\right)\frac{\Delta x^{(k)}}{\Delta x^{(k)\top}\Delta g^{(k)}} - \frac{H_k\Delta g^{(k)}}{\Delta g^{(k)\top}\Delta x^{(k)}}, \qquad b^{(k)} = -\frac{\Delta x^{(k)}}{\Delta g^{(k)\top}\Delta x^{(k)}}.$$

11.15
a. Suppose we apply the algorithm to a quadratic. Then, by the quasi-Newton property of DFP, we have $H_{k+1}^{DFP}\Delta g^{(i)} = \Delta x^{(i)}$, $0 \leq i \leq k$. The same holds for BFGS. Thus, for the given $H_k$, we have for $0 \leq i \leq k$,
\begin{align*}
H_{k+1}\Delta g^{(i)} &= \phi H_{k+1}^{DFP}\Delta g^{(i)} + (1 - \phi)H_{k+1}^{BFGS}\Delta g^{(i)} \\
&= \phi\Delta x^{(i)} + (1 - \phi)\Delta x^{(i)} \\
&= \Delta x^{(i)},
\end{align*}
which shows that the above algorithm is a quasi-Newton algorithm (and hence also a conjugate direction algorithm).
b. By Theorem 11.4 and the discussion on BFGS, we have $H_k^{DFP} > 0$ and $H_k^{BFGS} > 0$. Hence, for any $x \neq 0$,
$$x^\top H_kx = \phi\,x^\top H_k^{DFP}x + (1 - \phi)\,x^\top H_k^{BFGS}x > 0$$
since $\phi$ and $1 - \phi$ are nonnegative (and not both zero). Hence, $H_k > 0$, from which we conclude that the algorithm has the descent property if $\alpha_k$ is computed by line search (by Proposition 11.1).
11.16
To show the result, we will prove the following precise statement: In the quadratic case (with Hessian $Q$), suppose that $H_{k+1}\Delta g^{(i)} = \rho_i\Delta x^{(i)}$, $0 \leq i \leq k$, $k \leq n-1$. If $\rho_i \neq 0$, $0 \leq i \leq k$, then $d^{(0)}, \dots, d^{(k+1)}$ are $Q$-conjugate.
We proceed by induction. We begin with the $k = 0$ case: that $d^{(0)}$ and $d^{(1)}$ are $Q$-conjugate. Because $\alpha_0 \neq 0$, we can write $d^{(0)} = \Delta x^{(0)}/\alpha_0$. Hence,
\begin{align*}
d^{(1)\top}Qd^{(0)} &= -g^{(1)\top}H_1Qd^{(0)} \\
&= -g^{(1)\top}H_1\frac{Q\Delta x^{(0)}}{\alpha_0} \\
&= -g^{(1)\top}\frac{H_1\Delta g^{(0)}}{\alpha_0} \\
&= -g^{(1)\top}\frac{\rho_0\Delta x^{(0)}}{\alpha_0} \\
&= -\rho_0\,g^{(1)\top}d^{(0)}.
\end{align*}
But $g^{(1)\top}d^{(0)} = 0$ as a consequence of $\alpha_0 > 0$ being the minimizer of $\phi(\alpha) = f(x^{(0)} + \alpha d^{(0)})$. Hence, $d^{(1)\top}Qd^{(0)} = 0$.
Assume that the result is true for $k - 1$ (where $k < n - 1$). We now prove the result for $k$, that is, that $d^{(0)}, \dots, d^{(k+1)}$ are $Q$-conjugate. It suffices to show that $d^{(k+1)\top}Qd^{(i)} = 0$, $0 \leq i \leq k$. Given $i$, $0 \leq i \leq k$, using the same algebraic steps as in the $k = 0$ case, and using the assumption that $\rho_i \neq 0$, we obtain
$$d^{(k+1)\top}Qd^{(i)} = -g^{(k+1)\top}H_{k+1}Qd^{(i)} = \cdots = -\rho_i\,g^{(k+1)\top}d^{(i)}.$$
Because $d^{(0)}, \dots, d^{(k)}$ are $Q$-conjugate by assumption, we conclude from the expanding subspace lemma (Lemma 10.2) that $g^{(k+1)\top}d^{(i)} = 0$. Hence, $d^{(k+1)\top}Qd^{(i)} = 0$, which completes the proof.
11.17
A MATLAB routine for the quasi-Newton algorithm with options for different formulas of H k is:
function [x,N]=quasi_newton(grad,xnew,H,options);
% QUASI_NEWTON(grad,x0,H0);
% QUASI_NEWTON(grad,x0,H0,OPTIONS);
%
% x = QUASI_NEWTON(grad,x0,H0);
% x = QUASI_NEWTON(grad,x0,H0,OPTIONS);
%
% [x,N] = QUASI_NEWTON(grad,x0,H0);
% [x,N] = QUASI_NEWTON(grad,x0,H0,OPTIONS);
%
%The first variant finds the minimizer of a function whose gradient
%is described in grad (usually an M-file: grad.m), using initial point
%x0 and initial inverse Hessian approximation H0.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results, (default is no display: 0).
%OPTIONS(2) is a measure of the precision required for the final point.
%OPTIONS(3) is a measure of the precision required of the gradient.
%OPTIONS(5) specifies the formula for the inverse Hessian update:
% 0=Rank One;
% 1=DFP;
% 2=BFGS;
%OPTIONS(14) is the maximum number of iterations.
%For more information type HELP FOPTIONS.
%
%The next two variants return the value of the final point.
%The last two variants return a vector of the final point and the
%number of iterations.

if nargin ~= 4
options = [];
if nargin ~= 3
disp('Wrong number of arguments.');
return;
end
end

numvars = length(xnew);
if length(options) >= 14
if options(14)==0
options(14)=1000*numvars;
end
else
options(14)=1000*numvars;
end

clc;
format compact;
format short e;

options = foptions(options);
print = options(1);
epsilon_x = options(2);
epsilon_g = options(3);
max_iter=options(14);

reset_cnt = 0;
g_curr=feval(grad,xnew);
if norm(g_curr) <= epsilon_g
disp('Terminating: Norm of initial gradient less than');
disp(epsilon_g);
return;
end %if

d=-H*g_curr;
for k = 1:max_iter,

xcurr=xnew;
alpha=linesearch_secant(grad,xcurr,d);
xnew = xcurr+alpha*d;

if print,
disp('Iteration number k =')
disp(k); %print iteration index k
disp('alpha =');
disp(alpha); %print alpha
disp('Gradient = ');
disp(g_curr); %print gradient
disp('New point =');
disp(xnew); %print new point
end %if

if norm(xnew-xcurr) <= epsilon_x*norm(xcurr)

disp('Terminating: Norm of difference between iterates less than');
disp(epsilon_x);
break;
end %if

g_old=g_curr;
g_curr=feval(grad,xnew);

if norm(g_curr) <= epsilon_g

disp('Terminating: Norm of gradient less than');
disp(epsilon_g);
break;
end %if

p=alpha*d;
q=g_curr-g_old;

reset_cnt = reset_cnt+1;
if reset_cnt == 3*numvars

d=-g_curr;
reset_cnt = 0;
else
if options(5)==0 %Rank One
H = H+(p-H*q)*(p-H*q)'/(q'*(p-H*q));
elseif options(5)==1 %DFP
H = H+p*p'/(p'*q)-(H*q)*(H*q)'/(q'*H*q);
else %BFGS
H = H+(1+q'*H*q/(q'*p))*p*p'/(p'*q)-(H*q*p'+(H*q*p')')/(q'*p);
end %if
d=-H*g_curr;
end

if print,
disp('New H =');
disp(H);
disp('New d =');
disp(d);
end

if k == max_iter
disp('Terminating with maximum number of iterations');
end %if
end %for

if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp('Final point =');
disp(xnew);
disp('Number of iterations =');
disp(k);
end %if
%------------------------------------------------------------------

We created the following M-file, g.m, for the gradient of Rosenbrock's function:
function y=g(x)
y=[-400*(x(2)-x(1).^2)*x(1)-2*(1-x(1)), 200*(x(2)-x(1).^2)];

We tested the above routine as follows:

>> options(2)=10^(-7);
>> options(3)=10^(-7);
>> options(14)=100;
>> x0=[-2;2];
>> H0=eye(2);
>> options(5)=0;
>> quasi_newton(g,x0,H0,options);

Terminating: Norm of difference between iterates less than

1.0000e-07
Final point =
1.0000e+00 1.0000e+00
Number of iterations =
8

>> options(5)=1;
>> quasi_newton(g,x0,H0,options);

Terminating: Norm of difference between iterates less than

1.0000e-07
Final point =
1.0000e+00 1.0000e+00
Number of iterations =
8
>> options(5)=2;
>> quasi_newton(g,x0,H0,options);

Terminating: Norm of difference between iterates less than

1.0000e-07
Final point =
1.0000e+00 1.0000e+00
Number of iterations =
8

The reader is again cautioned not to draw any conclusions about the superiority or inferiority of any of
the formulas for H k based only on the above single numerical experiment.
11.18
a. The plot of the level sets of f was obtained using the following MATLAB commands:
>> [X,Y]=meshdom(-2:0.1:2, -1:0.1:3);
>> Z=X.^4/4+Y.^2/2 -X.*Y +X-Y;
>> V=[-0.72, -0.6, -0.2, 0.5, 2];
>> contour(X,Y,Z,V)

The plot is depicted below. [Contour plot of the level sets of f; horizontal axis x_1, vertical axis x_2.]

b. With the initial condition $[0, 0]^\top$, the algorithm converges to $[-1, 0]^\top$, while with the initial condition $[1.5, 1]^\top$, the algorithm converges to $[1, 2]^\top$. These two points are the two strict local minimizers of $f$ (as can be checked using the SOSC). The algorithm apparently converges to the minimizer closer to the initial point.

12. Solving Ax = b

12.1
Write the least squares cost in the usual notation $\|Ax - b\|^2$, where
$$A = \begin{bmatrix} 3 \\ 5 \\ 6 \end{bmatrix}, \qquad x = [m], \qquad b = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}.$$
The least squares estimate of the mass is
$$\hat m = \left(A^\top A\right)^{-1}A^\top b = \frac{31}{70}.$$
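The closed-form answer can be confirmed with a standard least squares solver; a sketch (Python/NumPy) using the data as set up above:

```python
import numpy as np

# Least squares estimate of the scalar m from ||Am - b||^2.
A = np.array([[3.0], [5.0], [6.0]])
b = np.array([1.0, 2.0, 3.0])
m = np.linalg.lstsq(A, b, rcond=None)[0].item()
print(m)  # 31/70 ≈ 0.442857
```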
12.2
Write the least squares cost in the usual notation $\|Ax - b\|^2$, where
$$A = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 4 \end{bmatrix}, \qquad x = \begin{bmatrix} a \\ b \end{bmatrix}, \qquad b = \begin{bmatrix} 3 \\ 4 \\ 5 \end{bmatrix}.$$
The least squares estimate for $[a, b]^\top$ is
\begin{align*}
\begin{bmatrix} \hat a \\ \hat b \end{bmatrix} &= \left(A^\top A\right)^{-1}A^\top b \\
&= \begin{bmatrix} 3 & 7 \\ 7 & 21 \end{bmatrix}^{-1}\begin{bmatrix} 12 \\ 31 \end{bmatrix} \\
&= \frac{1}{14}\begin{bmatrix} 21 & -7 \\ -7 & 3 \end{bmatrix}\begin{bmatrix} 12 \\ 31 \end{bmatrix} \\
&= \frac{1}{14}\begin{bmatrix} 35 \\ 9 \end{bmatrix} = \begin{bmatrix} 5/2 \\ 9/14 \end{bmatrix}.
\end{align*}

12.3
a. We form
$$A = \begin{bmatrix} 1^2/2 \\ 2^2/2 \\ 3^2/2 \end{bmatrix}, \qquad b = \begin{bmatrix} 5.00 \\ 19.5 \\ 44.0 \end{bmatrix}.$$
The least squares estimate of $g$ is then given by
$$\hat g = \left(A^\top A\right)^{-1}A^\top b = 9.776.$$
b. We start with $P_0 = 0.040816$ and $x^{(0)} = 9.776$. We have $a_1 = 4^2/2 = 8$ and $b^{(1)} = 78.5$. Using the RLS formula, we get $x^{(1)} = 9.802$, which is our updated estimate of $g$.
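Both the batch estimate and the RLS update in parts a and b can be replayed numerically; a sketch (Python/NumPy) with the data above:

```python
import numpy as np

# Batch least squares for g, then one RLS update when the new
# measurement (regressor 4^2/2 = 8, value 78.5) arrives.
A = np.array([[1**2 / 2], [2**2 / 2], [3**2 / 2]])
b = np.array([5.00, 19.5, 44.0])

P0 = np.linalg.inv(A.T @ A)          # = 1/24.5 ≈ 0.040816
x0 = (P0 @ A.T @ b).item()           # ≈ 9.776

a1 = np.array([8.0])                 # new regressor row
b1 = 78.5                            # new measurement
P1 = P0 - (P0 @ np.outer(a1, a1) @ P0) / (1 + a1 @ P0 @ a1)
x1 = x0 + (P1 @ a1).item() * (b1 - 8.0 * x0)

print(round(x0, 3), round(x1, 3))  # 9.776 9.802
```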
12.4
Let $x = [x_1, x_2, \dots, x_n]^\top$ and $y = [y_1, y_2, \dots, y_n]^\top$. This least-squares estimation problem can be expressed as
$$\text{minimize } \|\alpha x - y\|^2,$$
with $\alpha$ as the decision variable. Assuming that $x \neq 0$, the solution is unique and is given by
$$\hat\alpha = \left(x^\top x\right)^{-1}x^\top y = \frac{x^\top y}{x^\top x} = \frac{\sum_{i=1}^n x_iy_i}{\sum_{i=1}^n x_i^2}.$$

12.5
The least squares estimate of $R$ is the least squares solution to
\begin{align*}
1\cdot R &= V_1 \\
&\ \,\vdots \\
1\cdot R &= V_n.
\end{align*}
Therefore, the least squares solution is
$$\hat R = \left([1, \dots, 1]\begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix}\right)^{-1}[1, \dots, 1]\begin{bmatrix} V_1 \\ \vdots \\ V_n \end{bmatrix} = \frac{V_1 + \cdots + V_n}{n}.$$
12.6
We represent the data in the table and the decision variables $a$ and $b$ using the usual least squares matrix notation:
$$A = \begin{bmatrix} 1 & 2 \\ 1 & 1 \\ 3 & 2 \end{bmatrix}, \qquad b = \begin{bmatrix} 6 \\ 4 \\ 5 \end{bmatrix}, \qquad x = \begin{bmatrix} a \\ b \end{bmatrix}.$$
The least squares estimate is given by
$$\hat x = \begin{bmatrix} \hat a \\ \hat b \end{bmatrix} = \left(A^\top A\right)^{-1}A^\top b = \begin{bmatrix} 11 & 9 \\ 9 & 9 \end{bmatrix}^{-1}\begin{bmatrix} 25 \\ 26 \end{bmatrix} = \frac{1}{18}\begin{bmatrix} 9 & -9 \\ -9 & 11 \end{bmatrix}\begin{bmatrix} 25 \\ 26 \end{bmatrix} = \begin{bmatrix} -1/2 \\ 61/18 \end{bmatrix}.$$
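A quick numerical check of the arithmetic (Python/NumPy):

```python
import numpy as np

A = np.array([[1.0, 2.0], [1.0, 1.0], [3.0, 2.0]])
b = np.array([6.0, 4.0, 5.0])
x = np.linalg.lstsq(A, b, rcond=None)[0]
print(x)  # ≈ [-0.5, 3.3889], i.e., [-1/2, 61/18]
```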

12.7
The problem can be formulated as a least-squares problem with
$$A = \begin{bmatrix} 0.3 & 0.1 \\ 0.4 & 0.2 \\ 0.3 & 0.7 \end{bmatrix}, \qquad b = \begin{bmatrix} 5 \\ 3 \\ 4 \end{bmatrix},$$
where the decision variable is $x = [x_1, x_2]^\top$, and $x_1$ and $x_2$ are the amounts of A and B, respectively. After some algebra, we obtain the solution:
$$x = \left(A^\top A\right)^{-1}A^\top b = \frac{1}{(0.34)(0.54) - (0.32)^2}\begin{bmatrix} 0.54 & -0.32 \\ -0.32 & 0.34 \end{bmatrix}\begin{bmatrix} 3.9 \\ 3.9 \end{bmatrix}.$$
Since we are only interested in the ratio of the first component of $x$ to the second component, we need only explicitly compute:
$$\text{Ratio} = \frac{0.54 - 0.32}{-0.32 + 0.34} = \frac{0.22}{0.02} = 11.$$
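The ratio can be confirmed numerically (Python/NumPy):

```python
import numpy as np

A = np.array([[0.3, 0.1], [0.4, 0.2], [0.3, 0.7]])
b = np.array([5.0, 3.0, 4.0])
x = np.linalg.lstsq(A, b, rcond=None)[0]
print(x[0] / x[1])  # ≈ 11
```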
12.8
For each $k$, we can write
\begin{align*}
y_k &= ay_{k-1} + bu_k + v_k \\
&= a^2y_{k-2} + abu_{k-1} + av_{k-1} + bu_k + v_k \\
&\ \,\vdots \\
&= a^{k-1}bu_1 + a^{k-2}bu_2 + \cdots + bu_k + a^{k-1}v_1 + a^{k-2}v_2 + \cdots + v_k.
\end{align*}
Write $u = [u_1, \dots, u_n]^\top$, $v = [v_1, \dots, v_n]^\top$, and $y = [y_1, \dots, y_n]^\top$. Then, $y = Cu + Dv$, where
$$C = \begin{bmatrix} b & 0 & \cdots & 0 \\ ab & b & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ a^{n-1}b & \cdots & ab & b \end{bmatrix}, \qquad D = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ a & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ a^{n-1} & a^{n-2} & \cdots & 1 \end{bmatrix}.$$
Write $\bar b = D^{-1}y$ and $A = D^{-1}C$ so that $\bar b = Au + v$. Therefore, the linear least-squares estimate of $u$ given $y$ is
$$\hat u = \left(A^\top A\right)^{-1}A^\top\bar b = \left(C^\top D^{-\top}D^{-1}C\right)^{-1}C^\top D^{-\top}D^{-1}y.$$
But $C = bD$. Hence,
$$\hat u = \frac{1}{b}D^{-1}y = \frac{1}{b}\begin{bmatrix} 1 & 0 & \cdots & 0 \\ -a & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & -a & 1 \end{bmatrix}y.$$
(Notice that $D^{-1}$ has the simple form shown above.)
An alternative solution is first to define $z = [z_1, \dots, z_n]^\top$ by $z_k = y_k - ay_{k-1}$ (where $y_0 = 0$). Then, we have $z = bu + v$. Therefore, the linear least-squares estimate of $u$ given $y$ (or, equivalently, $z$) is
$$\hat u = \frac{1}{b}z = \frac{1}{b}\begin{bmatrix} 1 & 0 & \cdots & 0 \\ -a & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & -a & 1 \end{bmatrix}y.$$

12.9
Define
$$X = \begin{bmatrix} x_1 & 1 \\ \vdots & \vdots \\ x_p & 1 \end{bmatrix}, \qquad y = \begin{bmatrix} y_1 \\ \vdots \\ y_p \end{bmatrix}.$$
Since the $x_i$ are not all equal, we have $\operatorname{rank} X = 2$. The objective function can be written as
$$f(a, b) = \left\|X\begin{bmatrix} a \\ b \end{bmatrix} - y\right\|^2.$$
Therefore, by Theorem 12.1 there exists a unique minimizer $[a^*, b^*]^\top$ given by
\begin{align*}
\begin{bmatrix} a^* \\ b^* \end{bmatrix} &= \left(X^\top X\right)^{-1}X^\top y \\
&= \begin{bmatrix} \sum_{i=1}^p x_i^2 & \sum_{i=1}^p x_i \\ \sum_{i=1}^p x_i & p \end{bmatrix}^{-1}\begin{bmatrix} \sum_{i=1}^p x_iy_i \\ \sum_{i=1}^p y_i \end{bmatrix} \\
&= \begin{bmatrix} \overline{X^2} & \overline{X} \\ \overline{X} & 1 \end{bmatrix}^{-1}\begin{bmatrix} \overline{XY} \\ \overline{Y} \end{bmatrix} \\
&= \frac{1}{\overline{X^2} - \left(\overline{X}\right)^2}\begin{bmatrix} 1 & -\overline{X} \\ -\overline{X} & \overline{X^2} \end{bmatrix}\begin{bmatrix} \overline{XY} \\ \overline{Y} \end{bmatrix} \\
&= \begin{bmatrix} \dfrac{\overline{XY} - \overline{X}\,\overline{Y}}{\overline{X^2} - \left(\overline{X}\right)^2} \\[2ex] \dfrac{\overline{X^2}\,\overline{Y} - \overline{X}\,\overline{XY}}{\overline{X^2} - \left(\overline{X}\right)^2} \end{bmatrix}.
\end{align*}
As we can see, the solution does not depend on $\overline{Y^2}$.
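The closed-form coefficients agree with a standard polynomial least squares fit; a sketch (Python/NumPy; the data points are made up for illustration, and overbars denote sample means):

```python
import numpy as np

# Closed-form line-fit coefficients vs. numpy's degree-1 least squares fit.
x = np.array([0.0, 1.0, 2.0, 4.0])
y = np.array([1.0, 3.0, 4.0, 9.0])

Xb, Yb = x.mean(), y.mean()          # Xbar, Ybar
X2b, XYb = (x**2).mean(), (x*y).mean()  # X^2 bar, XY bar
a = (XYb - Xb * Yb) / (X2b - Xb**2)
b = (X2b * Yb - Xb * XYb) / (X2b - Xb**2)

a_np, b_np = np.polyfit(x, y, 1)     # slope, intercept
print(np.allclose([a, b], [a_np, b_np]))  # True
```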
12.10
a. We wish to find $\alpha$ and $\beta$ such that
\begin{align*}
\sin(\alpha t_1 + \beta) &= y_1 \\
&\ \,\vdots \\
\sin(\alpha t_p + \beta) &= y_p.
\end{align*}
Taking arcsin, we get the following system of linear equations:
\begin{align*}
\alpha t_1 + \beta &= \arcsin y_1 \\
&\ \,\vdots \\
\alpha t_p + \beta &= \arcsin y_p.
\end{align*}
b. We may write the system of linear equations in part a as $Ax = b$, where
$$A = \begin{bmatrix} t_1 & 1 \\ \vdots & \vdots \\ t_p & 1 \end{bmatrix}, \qquad x = \begin{bmatrix} \alpha \\ \beta \end{bmatrix}, \qquad b = \begin{bmatrix} \arcsin y_1 \\ \vdots \\ \arcsin y_p \end{bmatrix}.$$
Since the $t_i$ are not all equal, the first column of $A$ is not a scalar multiple of the second column. Therefore, $\operatorname{rank} A = 2$. Hence, the least squares solution is
\begin{align*}
x^* &= \left(A^\top A\right)^{-1}A^\top b \\
&= \begin{bmatrix} \sum_{i=1}^p t_i^2 & \sum_{i=1}^p t_i \\ \sum_{i=1}^p t_i & p \end{bmatrix}^{-1}\begin{bmatrix} \sum_{i=1}^p t_i\arcsin y_i \\ \sum_{i=1}^p \arcsin y_i \end{bmatrix} \\
&= \begin{bmatrix} \overline{T^2} & \overline{T} \\ \overline{T} & 1 \end{bmatrix}^{-1}\begin{bmatrix} \overline{TY} \\ \overline{Y} \end{bmatrix} \\
&= \frac{1}{\overline{T^2} - \left(\overline{T}\right)^2}\begin{bmatrix} \overline{TY} - \overline{T}\,\overline{Y} \\ -\overline{T}\,\overline{TY} + \overline{T^2}\,\overline{Y} \end{bmatrix}.
\end{align*}

12.11
The given line can be expressed as the range of the matrix $A = [1, m]^\top$. Let $b = [x_0, y_0]^\top$ be the given point. Therefore, the problem is a linear least squares problem of minimizing $\|Ax - b\|^2$. The solution is given by
$$x^* = \left(A^\top A\right)^{-1}A^\top b = \frac{x_0 + my_0}{1 + m^2}.$$
Therefore, the point on the straight line that is closest to the given point $[x_0, y_0]^\top$ is given by $[x^*, mx^*]^\top$.
12.12
a. Write
$$A = \begin{bmatrix} x_1^\top & 1 \\ \vdots & \vdots \\ x_p^\top & 1 \end{bmatrix} \in \mathbb{R}^{p\times(n+1)}, \qquad z = \begin{bmatrix} a \\ c \end{bmatrix} \in \mathbb{R}^{n+1}, \qquad b = \begin{bmatrix} y_1 \\ \vdots \\ y_p \end{bmatrix} \in \mathbb{R}^p.$$
The objective function can then be written as $\|Az - b\|^2$.
b. Let $X = [x_1, \dots, x_p]^\top \in \mathbb{R}^{p\times n}$, and $e = [1, \dots, 1]^\top \in \mathbb{R}^p$. Then we may write $A = [X \;\; e]$. The solution to the problem is $(A^\top A)^{-1}A^\top b$. But
$$A^\top A = \begin{bmatrix} X^\top X & X^\top e \\ e^\top X & p \end{bmatrix} = \begin{bmatrix} X^\top X & 0 \\ 0^\top & p \end{bmatrix}$$
and
$$A^\top y = \begin{bmatrix} X^\top y \\ e^\top y \end{bmatrix} = \begin{bmatrix} 0 \\ e^\top y \end{bmatrix},$$
since $X^\top e = x_1 + \cdots + x_p = 0$ and $X^\top y = y_1x_1 + \cdots + y_px_p = 0$ by assumption. Therefore, the solution is given by
$$z^* = \left(A^\top A\right)^{-1}A^\top b = \begin{bmatrix} \left(X^\top X\right)^{-1} & 0 \\ 0^\top & 1/p \end{bmatrix}\begin{bmatrix} 0 \\ e^\top y \end{bmatrix} = \begin{bmatrix} 0 \\ \frac{1}{p}e^\top y \end{bmatrix}.$$
The affine function of best fit is the constant function $f(x) = c$, where
$$c = \frac{1}{p}\sum_{i=1}^p y_i.$$

12.13
a. Using the least squares formula, we have
$$\hat\theta_n = \left([u_1, \dots, u_n]\begin{bmatrix} u_1 \\ \vdots \\ u_n \end{bmatrix}\right)^{-1}[u_1, \dots, u_n]\begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix} = \frac{\sum_{k=1}^n u_ky_k}{\sum_{k=1}^n u_k^2}.$$
b. Given $u_k = 1$ for all $k$, we have
$$\hat\theta_n = \frac{1}{n}\sum_{k=1}^n y_k = \frac{1}{n}\sum_{k=1}^n (\theta + e_k) = \theta + \frac{1}{n}\sum_{k=1}^n e_k.$$
Hence, $\hat\theta_n \to \theta$ if and only if $\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n e_k = 0$.
12.14
We pose the problem as a least squares problem: minimize $\|Ax - b\|^2$, where $x = [a, b]^\top$ and
$$A = \begin{bmatrix} x_0 & 1 \\ x_1 & 1 \\ x_2 & 1 \end{bmatrix}, \qquad b = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.$$
We have
$$A^\top A = \begin{bmatrix} \sum_{i=0}^2 x_i^2 & \sum_{i=0}^2 x_i \\ \sum_{i=0}^2 x_i & 3 \end{bmatrix}, \qquad A^\top b = \begin{bmatrix} \sum_{i=0}^2 x_ix_{i+1} \\ \sum_{i=0}^2 x_{i+1} \end{bmatrix}.$$
Therefore, the least squares solution is
$$\begin{bmatrix} \hat a \\ \hat b \end{bmatrix} = \begin{bmatrix} \sum_{i=0}^2 x_i^2 & \sum_{i=0}^2 x_i \\ \sum_{i=0}^2 x_i & 3 \end{bmatrix}^{-1}\begin{bmatrix} \sum_{i=0}^2 x_ix_{i+1} \\ \sum_{i=0}^2 x_{i+1} \end{bmatrix} = \begin{bmatrix} 5 & 3 \\ 3 & 3 \end{bmatrix}^{-1}\begin{bmatrix} 18 \\ 11 \end{bmatrix} = \begin{bmatrix} 7/2 \\ 1/6 \end{bmatrix}.$$
12.15
We pose the problem as a least squares problem: minimize $\|Ax - b\|^2$, where $x = [a, b]^\top$ and
$$A = \begin{bmatrix} 0 & 1 \\ h_1 & 0 \\ \vdots & \vdots \\ h_{n-1} & 0 \end{bmatrix}, \qquad b = \begin{bmatrix} h_1 \\ h_2 \\ \vdots \\ h_n \end{bmatrix}$$
(note that $h_0 = 0$). We have
$$A^\top A = \begin{bmatrix} \sum_{i=1}^{n-1} h_i^2 & 0 \\ 0 & 1 \end{bmatrix}, \qquad A^\top b = \begin{bmatrix} \sum_{i=1}^{n-1} h_ih_{i+1} \\ h_1 \end{bmatrix}.$$
The matrix $A^\top A$ is nonsingular because we assume that at least one $h_k$ is nonzero. Therefore, the least squares solution is
$$\begin{bmatrix} \hat a \\ \hat b \end{bmatrix} = \begin{bmatrix} \sum_{i=1}^{n-1} h_i^2 & 0 \\ 0 & 1 \end{bmatrix}^{-1}\begin{bmatrix} \sum_{i=1}^{n-1} h_ih_{i+1} \\ h_1 \end{bmatrix} = \begin{bmatrix} \left(\sum_{i=1}^{n-1} h_ih_{i+1}\right)\Big/\left(\sum_{i=1}^{n-1} h_i^2\right) \\ h_1 \end{bmatrix}.$$

12.16
We pose the problem as a least squares problem: minimize $\|Ax - b\|^2$, where $x = [a, b]^\top$ and
$$A = \begin{bmatrix} 0 & 1 \\ s_1 & 1 \\ \vdots & \vdots \\ s_{n-1} & 1 \end{bmatrix}, \qquad b = \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_n \end{bmatrix}$$
(where we use $s_0 = 0$). We have
$$A^\top A = \begin{bmatrix} \sum_{i=1}^{n-1} s_i^2 & \sum_{i=1}^{n-1} s_i \\ \sum_{i=1}^{n-1} s_i & n \end{bmatrix}, \qquad A^\top b = \begin{bmatrix} \sum_{i=1}^{n-1} s_is_{i+1} \\ \sum_{i=1}^n s_i \end{bmatrix}.$$
The matrix $A^\top A$ is nonsingular because we assume that at least one $s_k$ is nonzero. Therefore, the least squares solution is
\begin{align*}
\begin{bmatrix} \hat a \\ \hat b \end{bmatrix} &= \begin{bmatrix} \sum_{i=1}^{n-1} s_i^2 & \sum_{i=1}^{n-1} s_i \\ \sum_{i=1}^{n-1} s_i & n \end{bmatrix}^{-1}\begin{bmatrix} \sum_{i=1}^{n-1} s_is_{i+1} \\ \sum_{i=1}^n s_i \end{bmatrix} \\
&= \frac{1}{n\sum_{i=1}^{n-1} s_i^2 - \left(\sum_{i=1}^{n-1} s_i\right)^2}\begin{bmatrix} n\sum_{i=1}^{n-1} s_is_{i+1} - \sum_{i=1}^{n-1} s_i\sum_{i=1}^n s_i \\ -\sum_{i=1}^{n-1} s_i\sum_{i=1}^{n-1} s_is_{i+1} + \sum_{i=1}^{n-1} s_i^2\sum_{i=1}^n s_i \end{bmatrix}.
\end{align*}

12.17
This least-squares estimation problem can be expressed as
$$\text{minimize } \|ax - y\|^2.$$
If $x = 0$, then the problem has an infinite number of solutions: any $a$ solves the problem. Assuming that $x \neq 0$, the solution is unique and is given by
$$a = \left(x^\top x\right)^{-1}x^\top y = \frac{x^\top y}{x^\top x}.$$
12.18
The solution to this problem is the same as the solution to:
$$\begin{array}{ll}\text{minimize} & \frac{1}{2}\|x - b\|^2 \\ \text{subject to} & x \in \mathcal{R}(A).\end{array}$$
Substituting $x = Ay$, we see that this is simply a linear least squares problem with decision variable $y$. The solution to the least squares problem is $y^* = (A^\top A)^{-1}A^\top b$, which implies that the solution to the given problem is $x^* = A(A^\top A)^{-1}A^\top b$.
12.19
We solve the problem using two different methods. The first method is to use the Lagrange multiplier technique to solve the equivalent problem,
$$\begin{array}{ll}\text{minimize} & \|x - x_0\|^2 \\ \text{subject to} & h(x) = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}x - 1 = 0.\end{array}$$
The Lagrangian for the above problem has the form
$$l(x, \lambda) = x_1^2 + (x_2 + 3)^2 + x_3^2 + \lambda(x_1 + x_2 + x_3 - 1).$$
Applying the FONC gives
$$\nabla_x l = \begin{bmatrix} 2x_1 + \lambda \\ 2x_2 + 6 + \lambda \\ 2x_3 + \lambda \end{bmatrix} = 0 \quad\text{and}\quad x_1 + x_2 + x_3 - 1 = 0.$$
Solving the above yields
$$x^* = \begin{bmatrix} 4/3 \\ -5/3 \\ 4/3 \end{bmatrix}.$$
The second approach is to use the well-known solution to the minimum norm problem. We first derive a general solution formula for the problem
$$\begin{array}{ll}\text{minimize} & \|x - x_0\| \\ \text{subject to} & Ax = b,\end{array}$$
where $A \in \mathbb{R}^{m\times n}$, $m \leq n$, and $\operatorname{rank} A = m$. To proceed, we first transform the above problem from the $x$ coordinates into the $z = x - x_0$ coordinates to obtain
$$\begin{array}{ll}\text{minimize} & \|z\| \\ \text{subject to} & Az = b - Ax_0.\end{array}$$
The solution to the above problem has the form
$$z^* = A^\top\left(AA^\top\right)^{-1}(b - Ax_0) = A^\top\left(AA^\top\right)^{-1}b - A^\top\left(AA^\top\right)^{-1}Ax_0.$$
Therefore, the solution to the original problem is
$$x^* = A^\top\left(AA^\top\right)^{-1}(b - Ax_0) + x_0 = A^\top\left(AA^\top\right)^{-1}b + \left(I_n - A^\top\left(AA^\top\right)^{-1}A\right)x_0.$$
We substitute into the above formula the given numerical data to obtain
$$x^* = \begin{bmatrix} 1/3 \\ 1/3 \\ 1/3 \end{bmatrix} + \begin{bmatrix} 2/3 & -1/3 & -1/3 \\ -1/3 & 2/3 & -1/3 \\ -1/3 & -1/3 & 2/3 \end{bmatrix}\begin{bmatrix} 0 \\ -3 \\ 0 \end{bmatrix} = \begin{bmatrix} 4/3 \\ -5/3 \\ 4/3 \end{bmatrix}.$$
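The projection formula, and the answer $x^* = [4/3, -5/3, 4/3]^\top$, can be verified numerically (Python/NumPy):

```python
import numpy as np

# Minimum-norm-to-x0 solution: x* = A'(AA')^{-1}(b - A x0) + x0,
# with A = [1 1 1], b = 1, and x0 = [0, -3, 0] (the point implied
# by the objective ||x - x0||^2 in this exercise).
A = np.ones((1, 3))
b = np.array([1.0])
x0 = np.array([0.0, -3.0, 0.0])

x_star = A.T @ np.linalg.solve(A @ A.T, b - A @ x0) + x0
print(x_star)  # [4/3, -5/3, 4/3]
```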

12.20
For each $x \in \mathbb{R}^n$, let $y = x - x_0$. Then, the original problem is equivalent to
$$\begin{array}{ll}\text{minimize} & \|y\| \\ \text{subject to} & Ay = b - Ax_0,\end{array}$$
in the sense that $y^*$ is a solution to the above problem if and only if $x^* = y^* + x_0$ is a solution to the original problem. By Theorem 12.2, the above problem has a unique solution given by
$$y^* = A^\top\left(AA^\top\right)^{-1}(b - Ax_0).$$
Therefore, the solution to the original problem is
$$x^* = A^\top\left(AA^\top\right)^{-1}b - A^\top\left(AA^\top\right)^{-1}Ax_0 + x_0 = A^\top\left(AA^\top\right)^{-1}b + \left(I_n - A^\top\left(AA^\top\right)^{-1}A\right)x_0.$$
Note that
$$\|x^* - x_0\| = \|y^*\| = \left\|A^\top\left(AA^\top\right)^{-1}(b - Ax_0)\right\| = \left\|A^\top\left(AA^\top\right)^{-1}b - A^\top\left(AA^\top\right)^{-1}Ax_0\right\|.$$

12.21
The objective function of the given problem can be written as
$$f(x) = \|Bx - c\|^2,$$
where
$$B = \begin{bmatrix} A \\ \vdots \\ A \end{bmatrix}, \qquad c = \begin{bmatrix} b_1 \\ \vdots \\ b_p \end{bmatrix}.$$
The solution is therefore
$$x^* = \left(B^\top B\right)^{-1}B^\top c = \left(pA^\top A\right)^{-1}A^\top(b_1 + \cdots + b_p) = \frac{1}{p}\sum_{i=1}^p \left(A^\top A\right)^{-1}A^\top b_i = \frac{1}{p}\sum_{i=1}^p \bar x_i.$$
Alternatively: Write
$$\|Ax - b_i\|^2 = x^\top A^\top Ax - 2x^\top A^\top b_i + \|b_i\|^2.$$
Therefore, the given objective function can be written as
$$f(x) = p\,x^\top A^\top Ax - 2x^\top A^\top(b_1 + \cdots + b_p) + \sum_{i=1}^p \|b_i\|^2.$$
The solution is therefore
$$x^* = \left(pA^\top A\right)^{-1}A^\top(b_1 + \cdots + b_p) = \frac{1}{p}\sum_{i=1}^p \bar x_i.$$
Note that the original problem can be written as the least squares problem
$$\text{minimize } \|Ax - \bar b\|^2,$$
where
$$\bar b = \frac{b_1 + \cdots + b_p}{p}.$$
12.22
Write
$$\|Ax - b_i\|^2 = x^\top A^\top Ax - 2x^\top A^\top b_i + \|b_i\|^2.$$
Therefore, the given objective function can be written as
$$f(x) = (\lambda_1 + \cdots + \lambda_p)\,x^\top A^\top Ax - 2x^\top A^\top(\lambda_1b_1 + \cdots + \lambda_pb_p) + \sum_{i=1}^p \lambda_i\|b_i\|^2.$$
The solution is therefore (by inspection)
$$x^* = \left((\lambda_1 + \cdots + \lambda_p)A^\top A\right)^{-1}A^\top(\lambda_1b_1 + \cdots + \lambda_pb_p) = \frac{1}{\lambda_1 + \cdots + \lambda_p}\sum_{i=1}^p \lambda_i\bar x_i = \sum_{i=1}^p \delta_i\bar x_i,$$
where $\delta_i = \lambda_i/(\lambda_1 + \cdots + \lambda_p)$.
Note that the original problem can be written as the least squares problem
$$\text{minimize } \|Ax - \bar b\|^2,$$
where
$$\bar b = \frac{\lambda_1b_1 + \cdots + \lambda_pb_p}{\lambda_1 + \cdots + \lambda_p}.$$

12.23
Let $x^* = A^\top(AA^\top)^{-1}b$. Suppose $y^*$ is a point in $\mathcal{R}(A^\top)$ that satisfies $Ay^* = b$. Then, there exists $z^* \in \mathbb{R}^m$ such that $y^* = A^\top z^*$. Subtracting the equation $A(A^\top(AA^\top)^{-1}b) = b$ from the equation $A(A^\top z^*) = b$, we get
$$\left(AA^\top\right)\left(z^* - \left(AA^\top\right)^{-1}b\right) = 0.$$
Since $\operatorname{rank} A = m$, $AA^\top$ is nonsingular. Therefore, $z^* - (AA^\top)^{-1}b = 0$, which implies that
$$y^* = A^\top z^* = A^\top\left(AA^\top\right)^{-1}b = x^*.$$
Hence, $x^* = A^\top(AA^\top)^{-1}b$ is the only vector in $\mathcal{R}(A^\top)$ that satisfies $Ax = b$.

12.24
a. We have
$$x^{(0)} = \left(A_0^\top A_0\right)^{-1}A_0^\top b^{(0)} = G_0^{-1}A_0^\top b^{(0)}.$$
Similarly,
$$x^{(1)} = \left(A_1^\top A_1\right)^{-1}A_1^\top b^{(1)} = G_1^{-1}A_1^\top b^{(1)}.$$
b. Now,
$$G_0 = \begin{bmatrix} A_1^\top & a_1 \end{bmatrix}\begin{bmatrix} A_1 \\ a_1^\top \end{bmatrix} = A_1^\top A_1 + a_1a_1^\top = G_1 + a_1a_1^\top.$$
Hence,
$$G_1 = G_0 - a_1a_1^\top.$$
c. Using the Sherman-Morrison formula,
$$P_1 = G_1^{-1} = \left(G_0 - a_1a_1^\top\right)^{-1} = G_0^{-1} - \frac{G_0^{-1}(-a_1)a_1^\top G_0^{-1}}{1 + a_1^\top G_0^{-1}(-a_1)} = P_0 + \frac{P_0a_1a_1^\top P_0}{1 - a_1^\top P_0a_1}.$$
d. We have
$$A_0^\top b^{(0)} = G_0G_0^{-1}A_0^\top b^{(0)} = G_0x^{(0)} = \left(G_1 + a_1a_1^\top\right)x^{(0)} = G_1x^{(0)} + a_1a_1^\top x^{(0)}.$$
e. Finally,
\begin{align*}
x^{(1)} &= G_1^{-1}A_1^\top b^{(1)} \\
&= G_1^{-1}\left(A_1^\top b^{(1)} + a_1b_1 - a_1b_1\right) \\
&= G_1^{-1}\left(A_0^\top b^{(0)} - a_1b_1\right) \\
&= G_1^{-1}\left(G_1x^{(0)} + a_1a_1^\top x^{(0)} - a_1b_1\right) \\
&= x^{(0)} - G_1^{-1}a_1\left(b_1 - a_1^\top x^{(0)}\right) \\
&= x^{(0)} - P_1a_1\left(b_1 - a_1^\top x^{(0)}\right).
\end{align*}
The general RLS algorithm for removals of rows is:
\begin{align*}
P_{k+1} &= P_k + \frac{P_ka_{k+1}a_{k+1}^\top P_k}{1 - a_{k+1}^\top P_ka_{k+1}}, \\
x^{(k+1)} &= x^{(k)} - P_{k+1}a_{k+1}\left(b_{k+1} - a_{k+1}^\top x^{(k)}\right).
\end{align*}
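The removal (downdating) formulas can be checked against a direct solve on the reduced data; a sketch (Python/NumPy; the data are randomly generated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
A0 = rng.standard_normal((6, 3))
b0 = rng.standard_normal(6)

# full-data least squares solution
P0 = np.linalg.inv(A0.T @ A0)
x0 = P0 @ A0.T @ b0

# remove the last row (a1, b1) using the downdating formulas
a1, b1 = A0[-1], b0[-1]
P1 = P0 + (P0 @ np.outer(a1, a1) @ P0) / (1 - a1 @ P0 @ a1)
x1 = x0 - P1 @ a1 * (b1 - a1 @ x0)

# compare with the direct solution on the reduced data
x_direct = np.linalg.lstsq(A0[:-1], b0[:-1], rcond=None)[0]
print(np.allclose(x1, x_direct))  # True
```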

12.25
Using the notation of the proof of Theorem 12.3, we can write
$$x^{(k+1)} = x^{(k)} + \mu\left(b_{R(k)+1} - a_{R(k)+1}^\top x^{(k)}\right)\frac{a_{R(k)+1}}{\|a_{R(k)+1}\|^2}.$$
Hence, with $x^{(0)} = 0$,
$$x^{(k)} = \sum_{i=0}^{k-1}\mu\left(b_{R(i)+1} - a_{R(i)+1}^\top x^{(i)}\right)\frac{a_{R(i)+1}}{\|a_{R(i)+1}\|^2},$$
which means that $x^{(k)}$ is in $\operatorname{span}[a_1, \dots, a_m] = \mathcal{R}(A^\top)$.

12.26
a. We claim that $x^*$ minimizes $\|x - x^{(0)}\|$ subject to $\{x : Ax = b\}$ if and only if $y^* = x^* - x^{(0)}$ minimizes $\|y\|$ subject to $\{y : Ay = b - Ax^{(0)}\}$.
To prove sufficiency, suppose $y^*$ minimizes $\|y\|$ subject to $\{y : Ay = b - Ax^{(0)}\}$. Let $x^* = y^* + x^{(0)}$. Consider any point $x_1 \in \{x : Ax = b\}$. Now,
$$A\left(x_1 - x^{(0)}\right) = b - Ax^{(0)}.$$
Hence, by definition of $y^*$,
$$\|x_1 - x^{(0)}\| \geq \|y^*\| = \|x^* - x^{(0)}\|.$$
Therefore $x^*$ minimizes $\|x - x^{(0)}\|$ subject to $\{x : Ax = b\}$.
To prove necessity, suppose $x^*$ minimizes $\|x - x^{(0)}\|$ subject to $\{x : Ax = b\}$. Let $y^* = x^* - x^{(0)}$. Consider any point $y_1 \in \{y : Ay = b - Ax^{(0)}\}$. Now,
$$A\left(y_1 + x^{(0)}\right) = b.$$
Hence, by definition of $x^*$,
$$\|y_1\| = \left\|\left(y_1 + x^{(0)}\right) - x^{(0)}\right\| \geq \|x^* - x^{(0)}\| = \|y^*\|.$$
Therefore, $y^*$ minimizes $\|y\|$ subject to $\{y : Ay = b - Ax^{(0)}\}$.
By Theorem 12.2, there exists a unique vector $y^*$ minimizing $\|y\|$ subject to $\{y : Ay = b - Ax^{(0)}\}$. Hence, by the above claim, there exists a unique $x^*$ minimizing $\|x - x^{(0)}\|$ subject to $\{x : Ax = b\}$.
b. Using the notation of the proof of Theorem 12.3, Kaczmarz's algorithm is given by
$$x^{(k+1)} = x^{(k)} + \mu\left(b_{R(k)+1} - a_{R(k)+1}^\top x^{(k)}\right)a_{R(k)+1}.$$
Subtracting $x^{(0)}$ from both sides gives
$$x^{(k+1)} - x^{(0)} = x^{(k)} - x^{(0)} + \mu\left(\left(b_{R(k)+1} - a_{R(k)+1}^\top x^{(0)}\right) - a_{R(k)+1}^\top\left(x^{(k)} - x^{(0)}\right)\right)a_{R(k)+1}.$$
Writing $y^{(k)} = x^{(k)} - x^{(0)}$, we obtain
$$y^{(k+1)} = y^{(k)} + \mu\left(\left(b_{R(k)+1} - a_{R(k)+1}^\top x^{(0)}\right) - a_{R(k)+1}^\top y^{(k)}\right)a_{R(k)+1}.$$
Note that $y^{(0)} = 0$. By Theorem 12.3, the sequence $\{y^{(k)}\}$ converges to the unique point $y^*$ that minimizes $\|y\|$ subject to $\{y : Ay = b - Ax^{(0)}\}$. Hence $\{x^{(k)}\}$ converges to $y^* + x^{(0)}$. From the proof of part a, $x^* = y^* + x^{(0)}$ minimizes $\|x - x^{(0)}\|$ subject to $\{x : Ax = b\}$. This completes the proof.
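The conclusion can be illustrated numerically: Kaczmarz's algorithm started from $x^{(0)}$ converges to the point of $\{x : Ax = b\}$ closest to $x^{(0)}$. A sketch (Python/NumPy; the small consistent system and the starting point below are made up for illustration):

```python
import numpy as np

# Kaczmarz's algorithm on an underdetermined consistent system,
# cycling through the rows, started from x0.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0]])
b = np.array([1.0, 2.0])
x0 = np.array([0.0, -3.0, 0.0])

mu = 1.0
x = x0.copy()
for k in range(2000):
    i = k % A.shape[0]
    a = A[i]
    x = x + mu * (b[i] - a @ x) / (a @ a) * a

# closed-form minimizer of ||x - x0|| subject to Ax = b
x_star = A.T @ np.linalg.solve(A @ A.T, b - A @ x0) + x0
print(np.allclose(x, x_star))  # True
```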
12.27
Following the proof of Theorem 12.3, assuming $\|a\| = 1$ without loss of generality, we arrive at
$$\|x^{(k+1)} - x^*\|^2 = \|x^{(k)} - x^*\|^2 - \mu(2 - \mu)\left(a^\top\left(x^{(k)} - x^*\right)\right)^2.$$
Since $x^{(k)}, x^* \in \mathcal{R}(A^\top) = \mathcal{R}([a])$ by Exercise 12.25, we have $x^{(k)} - x^* \in \mathcal{R}([a])$. Hence, the Cauchy-Schwarz inequality holds with equality:
$$\left(a^\top\left(x^{(k)} - x^*\right)\right)^2 = \|a\|^2\|x^{(k)} - x^*\|^2 = \|x^{(k)} - x^*\|^2,$$
since $\|a\| = 1$ by assumption. Thus, we obtain
$$\|x^{(k+1)} - x^*\|^2 = (1 - \mu(2 - \mu))\|x^{(k)} - x^*\|^2 = \gamma^2\|x^{(k)} - x^*\|^2,$$
where $\gamma = \sqrt{1 - \mu(2 - \mu)}$. It is easy to check that $0 \leq 1 - \mu(2 - \mu) < 1$ for all $\mu \in (0, 2)$. Hence, $0 \leq \gamma < 1$.
12.28
In Kaczmarz's algorithm with $\mu = 1$, we may write
$$x^{(k+1)} = x^{(k)} + \left(b_{R(k)+1} - a_{R(k)+1}^\top x^{(k)}\right)\frac{a_{R(k)+1}}{\|a_{R(k)+1}\|^2}.$$
Subtracting $x^*$ and premultiplying both sides by $a_{R(k)+1}^\top$ yields
\begin{align*}
a_{R(k)+1}^\top\left(x^{(k+1)} - x^*\right) &= a_{R(k)+1}^\top\left(x^{(k)} - x^* + \left(b_{R(k)+1} - a_{R(k)+1}^\top x^{(k)}\right)\frac{a_{R(k)+1}}{\|a_{R(k)+1}\|^2}\right) \\
&= a_{R(k)+1}^\top x^{(k)} - a_{R(k)+1}^\top x^* + b_{R(k)+1} - a_{R(k)+1}^\top x^{(k)} \\
&= b_{R(k)+1} - a_{R(k)+1}^\top x^*.
\end{align*}
Substituting $a_{R(k)+1}^\top x^* = b_{R(k)+1}$ gives $a_{R(k)+1}^\top(x^{(k+1)} - x^*) = 0$, which is the desired result.
12.29
We will prove this by contradiction. Suppose $Cx^*$ is not the minimizer of $\|By - b\|^2$ over $\mathbb{R}^r$. Let $\bar y$ be the minimizer of $\|By - b\|^2$ over $\mathbb{R}^r$. Then, $\|B\bar y - b\|^2 < \|BCx^* - b\|^2 = \|Ax^* - b\|^2$. Since $C$ is of full rank, there exists $\bar x \in \mathbb{R}^n$ such that $\bar y = C\bar x$. Therefore,
$$\|A\bar x - b\|^2 = \|BC\bar x - b\|^2 = \|B\bar y - b\|^2 < \|Ax^* - b\|^2,$$
which contradicts the assumption that $x^*$ is a minimizer of $\|Ax - b\|^2$ over $\mathbb{R}^n$.

12.30
a. Let $A = BC$ be a full rank factorization of $A$. Now, we have $A^\dagger = C^\dagger B^\dagger$, where $B^\dagger = (B^\top B)^{-1}B^\top$ and $C^\dagger = C^\top(CC^\top)^{-1}$. On the other hand, consider $(A^\top)^\dagger$. Since $A^\top = C^\top B^\top$ is a full rank factorization of $A^\top$, we have $(A^\top)^\dagger = (C^\top B^\top)^\dagger = (B^\top)^\dagger(C^\top)^\dagger$. Therefore, to show that $(A^\top)^\dagger = (A^\dagger)^\top$, it is enough to show that
\begin{align*}
(B^\top)^\dagger &= (B^\dagger)^\top \\
(C^\top)^\dagger &= (C^\dagger)^\top.
\end{align*}
To this end, note that $(B^\top)^\dagger = B(B^\top B)^{-1}$, and $(C^\top)^\dagger = (CC^\top)^{-1}C$. On the other hand, $(B^\dagger)^\top = ((B^\top B)^{-1}B^\top)^\top = B(B^\top B)^{-1}$, and $(C^\dagger)^\top = (C^\top(CC^\top)^{-1})^\top = (CC^\top)^{-1}C$, which completes the proof.
b. Note that $A^\dagger = C^\dagger B^\dagger$, which is a full rank factorization of $A^\dagger$. Therefore, $(A^\dagger)^\dagger = (B^\dagger)^\dagger(C^\dagger)^\dagger$. Hence, to show that $(A^\dagger)^\dagger = A$, it is enough to show that
\begin{align*}
(B^\dagger)^\dagger &= B \\
(C^\dagger)^\dagger &= C.
\end{align*}
To this end, note that $(B^\dagger)^\dagger = ((B^\top B)^{-1}B^\top)^\dagger = B$ since $B$ is a full rank matrix. Similarly, $(C^\dagger)^\dagger = (C^\top(CC^\top)^{-1})^\dagger = C$ since $C$ is a full rank matrix. This completes the proof.
12.31
⇒: We prove properties 1–4 in turn.
1. This is immediate.
2. Let A = BC be a full rank factorization of A. We have A† = C†B†, where B† = (B^T B)^{-1}B^T and C† = C^T(CC^T)^{-1}. Note that B†B = I and CC† = I. Now,

    A†AA† = C†B†BCC†B†
          = C†B†
          = A†.

3. We have

    (AA†)^T = (BCC†B†)^T
            = (BB†)^T
            = (B†)^T B^T
            = ((B^T B)^{-1}B^T)^T B^T
            = B(B^T B)^{-1}B^T
            = BB†
            = BCC†B†
            = AA†.

4. We have

    (A†A)^T = (C†B†BC)^T
            = (C†C)^T
            = C^T(C†)^T
            = C^T(C^T(CC^T)^{-1})^T
            = C^T(CC^T)^{-1}C
            = C†C
            = C†B†BC
            = A†A.

⇐: By property 1, we immediately have AA†A = A. Therefore, it remains to show that there exist matrices U and V such that A† = UA^T and A† = A^T V.
For this, we note from property 2 that A† = A†AA†. But from property 3, AA† = (AA†)^T = (A†)^T A^T. Hence, A† = A†(A†)^T A^T. Setting U = A†(A†)^T, we get that A† = UA^T.
Similarly, we note from property 4 that A†A = (A†A)^T = A^T(A†)^T. Substituting this back into property 2 yields A† = A†AA† = A^T(A†)^T A†. Setting V = (A†)^T A† yields A† = A^T V. This completes the proof.
12.32
(Taken from [23, p. 24]) Let

    A1 = [0 0 0; 0 1 1; 0 1 0],   A2 = [1 0 0; 0 1 0; 0 0 0].

We compute

    A1† = [0 0 0; 0 0 1; 0 1 −1],   A2† = [1 0 0; 0 1 0; 0 0 0] = A2.

We have

    A1A2 = [0 0 0; 0 1 0; 0 1 0] = [0; 1; 1][0 1 0],

which is a full rank factorization. Therefore,

    (A1A2)† = [0 0 0; 0 1/2 1/2; 0 0 0].

But

    A2†A1† = [0 0 0; 0 0 1; 0 0 0].

Hence, (A1A2)† ≠ A2†A1†.
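The matrices in this counterexample can be checked directly against properties 1–4 above. A small plain-Python sketch (3×3 list matrices, values taken from this solution) verifies that A1† satisfies all four properties and that (A1A2)† and A2†A1† indeed differ:

```python
# Verify the four pseudoinverse properties for (A1, A1_pinv) and the
# non-commutation (A1 A2)+ != A2+ A1+ from Exercise 12.32.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def T(A):
    return [list(row) for row in zip(*A)]

def eq(A, B, tol=1e-12):
    return all(abs(a - b) < tol for ra, rb in zip(A, B) for a, b in zip(ra, rb))

A1      = [[0, 0, 0], [0, 1, 1], [0, 1, 0]]
A1_pinv = [[0, 0, 0], [0, 0, 1], [0, 1, -1]]
A2      = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
A2_pinv = A2                          # A2 is symmetric and idempotent

conds = [
    eq(matmul(matmul(A1, A1_pinv), A1), A1),            # property 1
    eq(matmul(matmul(A1_pinv, A1), A1_pinv), A1_pinv),  # property 2
    eq(matmul(A1, A1_pinv), T(matmul(A1, A1_pinv))),    # property 3
    eq(matmul(A1_pinv, A1), T(matmul(A1_pinv, A1))),    # property 4
]
print(all(conds))

prod_pinv = [[0, 0, 0], [0, 0.5, 0.5], [0, 0, 0]]       # (A1 A2)+ from above
print(eq(prod_pinv, matmul(A2_pinv, A1_pinv)))          # False: they differ
```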

## 13. Unconstrained Optimization and Feedforward Neural Networks

13.1
a. The gradient of f is given by

    ∇f(w) = −X_d(y_d − X_d^T w).
b. The Conjugate Gradient algorithm applied to our training problem is:

1. Set k := 0; select the initial point w^(0).

2. g^(0) = −X_d(y_d − X_d^T w^(0)). If g^(0) = 0, stop, else set d^(0) = −g^(0).

3. α_k = −(g^(k)T d^(k))/(d^(k)T X_d X_d^T d^(k)).

4. w^(k+1) = w^(k) + α_k d^(k).

5. g^(k+1) = −X_d(y_d − X_d^T w^(k+1)). If g^(k+1) = 0, stop.

6. β_k = (g^(k+1)T X_d X_d^T d^(k))/(d^(k)T X_d X_d^T d^(k)).

7. d^(k+1) = −g^(k+1) + β_k d^(k).

8. Set k := k + 1; go to step 3.

c. We form the matrix X_d as

    X_d = [−0.5 −0.5 −0.5   0  0   0   0.5 0.5 0.5
           −0.5   0   0.5 −0.5 0  0.5 −0.5  0  0.5]

Running the Conjugate Gradient algorithm, we get a solution of w* = [0.8806, 0.000]^T.

d. The level sets are shown in the figure below.

[Figure: level sets of the error function in the (w_1, w_2) plane.]

The solution in part c agrees with the level sets.

e. The plot of the error function is depicted below.

[Figure: surface plot of the error function over (x_1, x_2).]

13.2
a. The expression we seek is

    e_{k+1} = (1 − μ)e_k.

To derive the above, we write

    e_{k+1} − e_k = (y_d − x_d^T w^(k+1)) − (y_d − x_d^T w^(k))
                  = −x_d^T(w^(k+1) − w^(k)).

Substituting the update w^(k+1) − w^(k) = μ e_k x_d/(x_d^T x_d), we get

    e_{k+1} − e_k = −x_d^T (μ e_k x_d/(x_d^T x_d)) = −μ e_k.

Hence, e_{k+1} = (1 − μ)e_k.

b. For e_k → 0, it is necessary and sufficient that |1 − μ| < 1, which is equivalent to 0 < μ < 2.
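The recursion can be illustrated numerically. The sketch below (plain Python; x_d and y_d are hypothetical training data) applies the normalized update w ← w + μ e_k x_d/(x_d^T x_d) several times and checks that each error is (1 − μ) times the previous one:

```python
# Normalized single-sample update; the error e_k = y_d - x_d.w shrinks
# by exactly (1 - mu) per step, as derived above.
mu = 0.3
x_d = [1.0, 2.0, -1.0]
y_d = 2.5
w = [0.0, 0.0, 0.0]

def error(w):
    return y_d - sum(xi * wi for xi, wi in zip(x_d, w))

nrm2 = sum(xi * xi for xi in x_d)
errs = []
for _ in range(5):
    e = error(w)
    errs.append(e)
    w = [wi + mu * e * xi / nrm2 for wi, xi in zip(w, x_d)]
ratios = [errs[k + 1] / errs[k] for k in range(len(errs) - 1)]
print(all(abs(r - (1 - mu)) < 1e-12 for r in ratios))
```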
13.3
a. The error satisfies

    e^(k+1) = (I_p − μΓ)e^(k),

where Γ = X_d^T X_d. To derive the above expression, we write

    e^(k+1) − e^(k) = (y_d − X_d^T w^(k+1)) − (y_d − X_d^T w^(k))
                    = −X_d^T(w^(k+1) − w^(k))
                    = −μ X_d^T X_d e^(k)
                    = −μΓe^(k).

Hence, e^(k+1) = (I_p − μΓ)e^(k).

b. From part a, we see that e^(k) = (I_p − μΓ)^k e^(0). Hence, by Lemma 5.1, a necessary and sufficient condition for e^(k) → 0 for any e^(0) is that all the eigenvalues of I_p − μΓ must be located in the open unit circle. From Exercise 3.6, it follows that the above condition holds if and only if |1 − μλ_i(Γ)| < 1 for each eigenvalue λ_i(Γ) of Γ. This is true if and only if 0 < μλ_i(Γ) < 2 for each eigenvalue λ_i(Γ) of Γ.
13.4
We modified the MATLAB routine of Exercise 8.25, by fixing the step size at a value = 100. We need the
following M-file for the gradient:
92
function y=Dfbp(w);

wh11=w(1);
wh21=w(2);
wh12=w(3);
wh22=w(4);
wo11=w(5);
wo12=w(6);

xd1=0; xd2=1; yd=1;

v1=wh11*xd1+wh12*xd2;
v2=wh21*xd1+wh22*xd2;
z1=sigmoid(v1);
z2=sigmoid(v2);
y1=sigmoid(wo11*z1+wo12*z2)
d1=(yd-y1)*y1*(1-y1);

y(1)=-d1*wo11*z1*(1-z1)*xd1;
y(2)=-d1*wo12*z2*(1-z2)*xd1;
y(3)=-d1*wo11*z1*(1-z1)*xd2;
y(4)=-d1*wo12*z2*(1-z2)*xd2;
y(5)=-d1*z1;
y(6)=-d1*z2;

y=y;

After 20 iterations of the backpropagation algorithm, we get the following weights:

    w_{11}^{o(20)} = 2.883
    w_{12}^{o(20)} = 3.194
    w_{11}^{h(20)} = 0.1000
    w_{12}^{h(20)} = 0.8179
    w_{21}^{h(20)} = 0.3000
    w_{22}^{h(20)} = 1.106.

The corresponding output of the network is y_1^{(20)} = 0.9879.
13.5
We used the following MATLAB routine:
function [x,N]=backprop(grad,xnew,options);
% BACKPROP(grad,x0);
% BACKPROP(grad,x0,OPTIONS);
%
% x = BACKPROP(grad,x0);
% x = BACKPROP(grad,x0,OPTIONS);
%
% [x,N] = BACKPROP(grad,x0);
% [x,N] = BACKPROP(grad,x0,OPTIONS);
%
%The first variant trains a net whose gradient
%is described in grad (usually an M-file: grad.m), using a backprop
%algorithm with initial point x0.
%The second variant allows a vector of optional parameters to be
%defined. OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results, (default is no display: 0).

%OPTIONS(2) is a measure of the precision required for the final point.
%OPTIONS(3) is a measure of the precision required of the gradient.
%OPTIONS(14) is the maximum number of iterations.
%For more information type HELP FOPTIONS.
%
%The next two variants return the value of the final point.
%The last two variants return a vector of the final point and the
%number of iterations.

if nargin ~= 3
options = [];
if nargin ~= 2
disp('Wrong number of arguments.');
return;
end
end

if length(options) >= 14
if options(14)==0
options(14)=1000*length(xnew);
end
else
options(14)=1000*length(xnew);
end

clc;
format compact;
format short e;

options = foptions(options);
print = options(1);
epsilon_x = options(2);
epsilon_g = options(3);
max_iter=options(14);

for k = 1:max_iter,

xcurr=xnew;

g_curr=feval(grad,xcurr);

if norm(g_curr) <= epsilon_g

disp('Terminating: Norm of gradient less than');
disp(epsilon_g);
k=k-1;
break;
end %if

alpha=10.0;

xnew = xcurr-alpha*g_curr;

if print,
disp('Iteration number k =')
disp(k); %print iteration index k
disp('alpha =');
disp(alpha); %print alpha
disp('Gradient = ');
disp(g_curr); %print gradient
disp('New point =');
disp(xnew); %print new point
end %if

if norm(xnew-xcurr) <= epsilon_x*norm(xcurr)

disp('Terminating: Norm of difference between iterates less than');
disp(epsilon_x);
break;
end %if

if k == max_iter
disp('Terminating with maximum number of iterations');
end %if
end %for

if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp('Final point =');
disp(xnew);
disp('Number of iterations =');
disp(k);
end %if
%------------------------------------------------------------------
To apply the above routine, we need the following M-file for the gradient.
function y=grad(w,xd,yd);

wh11=w(1);
wh21=w(2);
wh12=w(3);
wh22=w(4);
wo11=w(5);
wo12=w(6);
t1=w(7);
t2=w(8);
t3=w(9);

xd1=xd(1); xd2=xd(2);

v1=wh11*xd1+wh12*xd2-t1;
v2=wh21*xd1+wh22*xd2-t2;
z1=sigmoid(v1);
z2=sigmoid(v2);
y1=sigmoid(wo11*z1+wo12*z2-t3);
d1=(yd-y1)*y1*(1-y1);

y(1)=-d1*wo11*z1*(1-z1)*xd1;
y(2)=-d1*wo12*z2*(1-z2)*xd1;
y(3)=-d1*wo11*z1*(1-z1)*xd2;
y(4)=-d1*wo12*z2*(1-z2)*xd2;
y(5)=-d1*z1;
y(6)=-d1*z2;
y(7)=d1*wo11*z1*(1-z1);
y(8)=d1*wo12*z2*(1-z2);
y(9)=d1;
y=y;

We applied our MATLAB routine as follows.

>> options(2)=10^(-7);
>> options(3)=10^(-7);
>> options(14)=10000;
>> w0=[0.1,0.3,0.3,0.4,0.4,0.6,0.1,0.1,-0.1];
>> [wstar,N]=backprop('grad',w0,options)

Terminating with maximum number of iterations

wstar =
-7.7771e+00
-5.5932e+00
-8.4027e+00
-5.6384e+00
-1.1010e+01
1.0918e+01
-3.2773e+00
-8.3565e+00
5.2606e+00
N =
10000

As we can see from the above, the results coincide with Example 13.3. The table of the outputs of the
trained network corresponding to the training input data is shown in Table 13.2.

## 14. Global Search Algorithms

14.1
The MATLAB program is as follows.
function [ output_args ] = nm_simplex( input_args )
%Nelder-Mead simplex method
%Based on the program by the Spring 2007 ECE580 student, Hengzhou Ding

disp('We minimize a function using the Nelder-Mead method.')
disp('There are two initial conditions.')
disp('You can enter your own starting point.')
disp('---------------------------------------------')

% disp('Select one of the starting points')
% disp('[0.55;0.7] or [-0.9;-0.5]')
% x0=input('')
disp(' ')
clear
close all;

disp('Select one of the starting points, or enter your own point')
disp('[0.55;0.7] or [-0.9;-0.5]')
disp('(Copy one of the above points and paste it at the prompt)')
x0=input('')

hold on
axis square
%Plot the contours of the objective function
[X1,X2]=meshgrid(-1:0.01:1);

Y=(X2-X1).^4+12.*X1.*X2-X1+X2-3;
[C,h] = contour(X1,X2,Y,20);
clabel(C,h);

lambda=0.1;
rho=1;
chi=2;
gamma=1/2;
sigma=1/2;
e1=[1 0];
e2=[0 1];
%x0=[0.55 0.7];
%x0=[-0.9 -0.5];

% Plot initial point and initialize the simplex
plot(x0(1),x0(2),'--*');
x(:,3)=x0;
x(:,1)=x0+lambda*e1;
x(:,2)=x0+lambda*e2;

while 1
% Check the size of simplex for stopping criterion
simpsize=norm(x(:,1)-x(:,2))+norm(x(:,2)-x(:,3))+norm(x(:,3)-x(:,1));
if(simpsize<1e-6)
break;
end
lastpt=x(:,3);
% Sort the simplex
x=sort_points(x,3);
% Reflection
centro=1/2*(x(:,1)+x(:,2));
xr=centro+rho*(centro-x(:,3));
% Accept condition
if(obj_fun(xr)>=obj_fun(x(:,1)) && obj_fun(xr)<obj_fun(x(:,2)))
x(:,3)=xr;
% Expand condition
elseif(obj_fun(xr)<obj_fun(x(:,1)))
xe=centro+rho*chi*(centro-x(:,3));
if(obj_fun(xe)<obj_fun(xr))
x(:,3)=xe;
else
x(:,3)=xr;
end

% Outside contraction or shrink
elseif(obj_fun(xr)>=obj_fun(x(:,2)) && obj_fun(xr)<obj_fun(x(:,3)))
xc=centro+gamma*rho*(centro-x(:,3));
if(obj_fun(xc)<obj_fun(x(:,3)))
x(:,3)=xc;
else
x=shrink(x,sigma);
end
% Inside contraction or shrink
else
xcc=centro-gamma*(centro-x(:,3));
if(obj_fun(xcc)<obj_fun(x(:,3)))
x(:,3)=xcc;

else
x=shrink(x,sigma);
end
end
% Plot the new point and connect
plot([lastpt(1),x(1,3)],[lastpt(2),x(2,3)],'--*');
end
% Output the final simplex (minimizer)
x(:,1)

% obj_fun
function y = obj_fun(x)
y=(x(1)-x(2))^4+12*x(1)*x(2)-x(1)+x(2)-3;

% sort_points
function y = sort_points(x,N)
for i=1:(N-1)
for j=1:(N-i)
if(obj_fun(x(:,j))>obj_fun(x(:,j+1)))
tmp=x(:,j);
x(:,j)=x(:,j+1);
x(:,j+1)=tmp;
end
end
end
y=x;

% shrink
function y = shrink(x,sigma)
x(:,2)=x(:,1)+sigma*(x(:,2)-x(:,1));
x(:,3)=x(:,1)+sigma*(x(:,3)-x(:,1));
y=x;

When we run the MATLAB code above with initial condition [0.55, 0.7]^T, we obtain the following plot:

[Figure: contour plot of the objective function with the simplex path starting from [0.55, 0.7]^T.]

When we run the MATLAB code above with initial condition [-0.9, -0.5]^T, we obtain the following plot:

[Figure: contour plot of the objective function with the simplex path starting from [-0.9, -0.5]^T.]
Note that this function has two local minimizers. The algorithm terminates at these two minimizers with
the two different initial conditions. This behavior depends on the value of lambda, which determines the
initial simplex. It is possible to reach both minimizers starting from the same initial point by using different
values of lambda. In the solution above, the initial simplex is small: lambda is just 0.1.
14.2
A MATLAB routine for a naive random search algorithm is given by the M-file rs_demo shown below:
function [x,N]=random_search(funcname,xnew,options);
%Naive random search demo
% [x,N]=random_search(funcname,xnew,options);
% print = options(1);
% alpha = options(18);

if nargin ~= 3
options = [];
if nargin ~= 2
disp('Wrong number of arguments.');
return;
end
end

if length(options) >= 14
if options(14)==0
options(14)=1000*length(xnew);
end
else
options(14)=1000*length(xnew);
end
if length(options) < 18
options(18)=1.0; %optional step size
end

format compact;
format short e;

options = foptions(options);
print = options(1);
epsilon_x = options(2);

epsilon_g = options(3);
max_iter=options(14);

alpha0 = options(18);

if funcname == 'f_r',
ros_cnt
elseif funcname == 'f_p',
pks_cnt;
end %if

if length(xnew) == 2
plot(xnew(1),xnew(2),'o')
text(xnew(1),xnew(2),'Start Point')
xlower = [-2;-1];
xupper = [2;3];
end

f_0=feval(funcname,xnew);
xbestcurr = xnew;
xbestold = xnew;
f_best=feval(funcname,xnew);
f_best=10^(sign(f_best))*f_best;

for k = 1:max_iter,

xcurr=xbestcurr;
f_curr=feval(funcname,xcurr);

alpha = alpha0;
xnew = xcurr + alpha*(2*rand(length(xcurr),1)-1);

for i=1:length(xnew),
xnew(i) = max(xnew(i),xlower(i));
xnew(i) = min(xnew(i),xupper(i));
end %for

f_new=feval(funcname,xnew);

if f_new < f_best,

xbestold = xbestcurr;
xbestcurr = xnew;
f_best = f_new;
end

if print,
disp('Iteration number k =')
disp(k); %print iteration index k
disp('alpha =');
disp(alpha); %print alpha
disp('New point =');
disp(xnew); %print new point
disp('Function value =');
disp(f_new); %print func value at new point
end %if

if norm(xnew-xbestold) <= epsilon_x*norm(xbestold)

disp('Terminating: Norm of difference between iterates less than');
disp(epsilon_x);

break;
end %if

pltpts(xbestcurr,xbestold);

if k == max_iter
disp('Terminating with maximum number of iterations');
end %if
end %for

if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp('Final point =');
disp(xbestcurr);
disp('Number of iterations =');
disp(k);
end %if

A MATLAB routine for a simulated annealing algorithm is given by the M-file sa_demo shown below:
function [x,N]=simulated_annealing(funcname,xnew,options);
%Simulated annealing demo
% random_search(funcname,xnew,options);
% print = options(1);
% gamma = options(15);
% alpha = options(18);

if nargin ~= 3
options = [];
if nargin ~= 2
disp('Wrong number of arguments.');
return;
end
end

if length(options) >= 14
if options(14)==0
options(14)=1000*length(xnew);
end
else
options(14)=1000*length(xnew);
end

if length(options) < 15
options(15)=5.0; %
end
if options(15)==0
options(15)=5.0; %
end

if length(options) < 18
options(18)=0.5; %optional step size
end

format compact;
format short e;

options = foptions(options);
print = options(1);
epsilon_x = options(2);
epsilon_g = options(3);
max_iter=options(14);

alpha = options(18);
gamma = options(15);
k0=2;

if funcname == 'f_r',
ros_cnt
elseif funcname == 'f_p',
pks_cnt;
end %if

if length(xnew) == 2
plot(xnew(1),xnew(2),'o')
text(xnew(1),xnew(2),'Start Point')
xlower = [-2;-1];
xupper = [2;3];
end

f_0=feval(funcname,xnew);
xbestcurr = xnew;
xbestold = xnew;
xcurr = xnew;
f_best=feval(funcname,xnew);
f_best=10^(sign(f_best))*f_best;

for k = 1:max_iter,

f_curr=feval(funcname,xcurr);

xnew = xcurr + alpha*(2*rand(length(xcurr),1)-1);

for i=1:length(xnew),
xnew(i) = max(xnew(i),xlower(i));
xnew(i) = min(xnew(i),xupper(i));
end %for

f_new=feval(funcname,xnew);

if f_new < f_curr,

xcurr = xnew;
f_curr = f_new;
else
cointoss = rand(1);
Temp = gamma/log(k+k0);
Prob = exp(-(f_new-f_curr)/Temp);
if cointoss < Prob,
xcurr = xnew;
f_curr = f_new;
end
end

if f_new < f_best,

xbestold = xbestcurr;
xbestcurr = xnew;
f_best = f_new;
end

if print,
disp('Iteration number k =')
disp(k); %print iteration index k
disp('alpha =');
disp(alpha); %print alpha
disp('New point =');
disp(xnew); %print new point
disp('Function value =');
disp(f_new); %print func value at new point
end %if

if norm(xnew-xbestold) <= epsilon_x*norm(xbestold)

disp('Terminating: Norm of difference between iterates less than');
disp(epsilon_x);
break;
end %if

pltpts(xbestcurr,xbestold);

if k == max_iter
disp('Terminating with maximum number of iterations');
end %if
end %for

if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp('Final point =');
disp(xbestcurr);
disp('Objective function value =');
disp(f_best);
disp('Number of iterations =');
disp(k);
end %if
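The acceptance rule at the heart of the simulated annealing routine above (accept any improvement; accept a worsening with probability exp(−(f_new − f_curr)/T), with cooling schedule T = γ/log(k + k0)) can be sketched in plain Python. The defaults γ = 5 and k0 = 2 mirror the MATLAB code:

```python
import math
import random

# Metropolis-style acceptance: always accept a decrease; accept an
# increase with probability exp(-(f_new - f_curr)/T), where the
# temperature T = gamma / log(k + k0) cools as the iteration k grows.
def accept(f_new, f_curr, k, gamma=5.0, k0=2, rng=random.random):
    if f_new < f_curr:
        return True
    temp = gamma / math.log(k + k0)
    return rng() < math.exp(-(f_new - f_curr) / temp)

random.seed(1)
# A decrease is always accepted; a large increase is essentially never
# accepted once the temperature is low.
print(accept(1.0, 2.0, k=1))                             # True
print(accept(100.0, 0.0, k=10**6, rng=lambda: 0.5))      # False
```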

To use the above routines, we also need the following M-files:

pltpts.m:
function out=pltpts(xnew,xcurr)

plot([xcurr(1),xnew(1)],[xcurr(2),xnew(2)],'r-',xnew(1),xnew(2),'o','Erasemode','none');
drawnow; % Draws current graph now
%pause(1)

out = [];

f_p.m:
function y=f_p(x);
y=3*(1-x(1)).^2.*exp(-(x(1).^2)-(x(2)+1).^2) ...
  - 10.*(x(1)/5-x(1).^3-x(2).^5).*exp(-x(1).^2-x(2).^2) ...
  - exp(-(x(1)+1).^2-x(2).^2)/3;
y=-y;

pks_cnt.m:
echo off
X = [-3:0.2:3];
Y = [-3:0.2:3];
[x,y]=meshgrid(X,Y) ;
func = 3*(1-x).^2.*exp(-x.^2-(y+1).^2) ...
       - 10.*(x/5-x.^3-y.^5).*exp(-x.^2-y.^2) ...
       - exp(-(x+1).^2-y.^2)/3;
func = -func;
clf
levels = exp(-5:10);
levels = [-5:0.9:10];
contour(X,Y,func,levels,'k--')
xlabel('x_1')
ylabel('x_2')
title('Minimization of Peaks function')
drawnow;
hold on
plot(-0.0303,1.5455,'o')
text(-0.0303,1.5455,'Solution')

To run the naive random search algorithm, we first pick a value of α = 0.5, which involves setting options(18)=0.5. We then use the command rs_demo('f_p',[0;-2],options). The resulting plot of the algorithm trajectory is given below. As we can see, the algorithm is stuck at a local minimizer. (By running the algorithm several times, the reader can verify that this non-convergent behavior is typical.)

[Figure: "Minimization of Peaks function" contour plot; the trajectory from the start point is stuck at a local minimizer away from the marked solution.]

Next, we try α = 1.5, which involves setting options(18)=1.5. We then use the command rs_demo('f_p',[0;-2],options) again, to obtain the plot shown below. This time, the algorithm reaches the global minimizer.

[Figure: "Minimization of Peaks function" contour plot; the trajectory reaches the global minimizer.]

Finally, we again set α = 0.5, using options(18)=0.5. We then run the simulated annealing code using sa_demo('f_p',[0;-2],options). The algorithm can be seen to converge to the global minimizer, as plotted below.

[Figure: "Minimization of Peaks function" contour plot; the simulated annealing trajectory converges to the global minimizer.]

14.3
A MATLAB routine for a particle swarm algorithm is:
% A particle swarm optimizer
% to find the minimum/maximum of MATLAB's peaks function
% D---# of inputs to the function (dimension of problem)
clear
%Parameters
ps=10;
D=2;
ps_lb=-3;
ps_ub=3;
vel_lb=-1;
vel_ub=1;
iteration_n = 50;
range = [-3, 3; -3, 3]; % Range of the input variables
% Plot contours of peaks function
[x, y, z] = peaks;
%pcolor(x,y,z); shading interp; hold on;
%contour(x, y, z, 20, 'r');
mesh(x,y,z)
% hold off;
%colormap(gray);
set(gca,'Fontsize',14)
axis([-3 3 -3 3 -9 9])
%axis square;
xlabel('x_1','Fontsize',14);
ylabel('x_2','Fontsize',14);
zlabel('f(x_1,x_2)','Fontsize',14);
hold on

upper = zeros(iteration_n, 1);

average = zeros(iteration_n, 1);
lower = zeros(iteration_n, 1);

% initialize population of particles and their velocities at time

% zero,
% format of pos= (particle#, dimension)
% construct random population positions bounded by VR
% need to bound positions

ps_pos=ps_lb + (ps_ub-ps_lb).*rand(ps,D);

% need to bound velocities between -mv,mv

ps_vel=vel_lb + (vel_ub-vel_lb).*rand(ps,D);

p_best = ps_pos;

% returns column of cost values (1 for each particle)

p_best_fit=zeros(ps,1);
for i=1:ps
g1(i)=3*(1-ps_pos(i,1))^2*exp(-ps_pos(i,1)^2-(ps_pos(i,2)+1)^2);
g2(i)=-10*(ps_pos(i,1)/5-ps_pos(i,1)^3-ps_pos(i,2)^5)*exp(-ps_pos(i,1)^2-ps_pos(i,2)^2);
g3(i)=-(1/3)*exp(-(ps_pos(i,1)+1)^2-ps_pos(i,2)^2);
p_best_fit(i)=g1(i)+g2(i)+g3(i);
end
p_best_fit;
hand_p3=plot3(ps_pos(:,1),ps_pos(:,2),p_best_fit,'*k','markersize',15,'erase','xor');

% initial g_best

[g_best_val,g_best_idx] = max(p_best_fit);
%[g_best_val,g_best_idx] = min(p_best_fit); this is to minimize
g_best=ps_pos(g_best_idx,:);

% get new velocities, positions (this is the heart of the PSO
% algorithm)
for k=1:iteration_n
for count=1:ps
ps_vel(count,:) = 0.729*ps_vel(count,:)...              % prev vel
    +1.494*rand*(p_best(count,:)-ps_pos(count,:))...    % independent
    +1.494*rand*(g_best-ps_pos(count,:));               % social
end
ps_vel;
% update new position
ps_pos = ps_pos + ps_vel;

%update p_best
for i=1:ps
g1(i)=3*(1-ps_pos(i,1))^2*exp(-ps_pos(i,1)^2-(ps_pos(i,2)+1)^2);
g2(i)=-10*(ps_pos(i,1)/5-ps_pos(i,1)^3-ps_pos(i,2)^5)*exp(-ps_pos(i,1)^2-ps_pos(i,2)^2);
g3(i)=-(1/3)*exp(-(ps_pos(i,1)+1)^2-ps_pos(i,2)^2);
ps_current_fit(i)=g1(i)+g2(i)+g3(i);

if ps_current_fit(i)>p_best_fit(i)
p_best_fit(i)=ps_current_fit(i);
p_best(i,:)=ps_pos(i,:);
end
end
p_best_fit;
%update g_best
[g_best_val,g_best_idx] = max(p_best_fit);
g_best=ps_pos(g_best_idx,:);

% Fill objective function vectors

upper(k) = max(p_best_fit);
average(k) = mean(p_best_fit);
lower(k) = min(p_best_fit);

set(hand_p3,'xdata',ps_pos(:,1),'ydata',ps_pos(:,2),'zdata',ps_current_fit);
drawnow
pause
end
g_best
g_best_val
figure;
x = 1:iteration_n;
plot(x, upper, 'o', x, average, 'x', x, lower, '*');
hold on;
plot(x, [upper average lower]);
hold off;
legend('Best', 'Average', 'Poorest');
xlabel('Iterations'); ylabel('Objective function value');

When we run the MATLAB code above, we obtain a plot of the initial set of particles, as shown below.

[Figure: mesh plot of f(x_1,x_2) with the initial particle positions.]

[Figure: mesh plot of f(x_1,x_2) with the particle positions at an intermediate iteration.]

Finally, after 50 iterations, we obtain:

[Figure: mesh plot of f(x_1,x_2) with the particle positions after 50 iterations.]
A plot of the objective function values (best, average, and poorest) is shown below.

[Figure: best, average, and poorest objective function values versus iterations.]

14.4
a. Expanding the right-hand side of the second expression gives the desired result.
b. Applying the algorithm, we get a binary representation of 11111001011; i.e.,

    1995 = 2^10 + 2^9 + 2^8 + 2^7 + 2^6 + 2^3 + 2^1 + 2^0.

c. Applying the algorithm, we get a binary representation of 0.1011101; i.e.,

    0.7265625 = 2^{-1} + 2^{-3} + 2^{-4} + 2^{-5} + 2^{-7}.

d. We have 19 = 2^4 + 2^1 + 2^0; i.e., the binary representation for 19 is 10011. For the fractional part, we need at least 7 bits to keep at least the same accuracy. We have 0.95 = 2^{-1} + 2^{-2} + 2^{-3} + 2^{-4} + 2^{-7} + ···; i.e., the binary representation is 0.1111001···. Therefore, the binary representation of 19.95 with at least the same degree of accuracy is 10011.1111001.
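The conversions in parts b–d can be reproduced with a short plain-Python sketch of the two algorithms: repeated division by 2 for the integer part, and repeated multiplication by 2 for the fractional part.

```python
# Integer part: repeated division by 2 (remainders, most significant last).
def int_to_binary(n):
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits
        n //= 2
    return bits or "0"

# Fractional part: repeated multiplication by 2, keeping the integer bit.
def frac_to_binary(f, nbits):
    bits = ""
    for _ in range(nbits):
        f *= 2
        if f >= 1:
            bits += "1"
            f -= 1
        else:
            bits += "0"
    return bits

print(int_to_binary(1995))            # 11111001011
print(frac_to_binary(0.7265625, 7))   # 1011101
print(int_to_binary(19), frac_to_binary(0.95, 7))
```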
14.5
It suffices to prove the result for the case where only one symbol is swapped, since the general case is obtained by repeating the argument. We have two scenarios. First, suppose the symbol swapped is at a position corresponding to a don't-care symbol in H. Clearly, after the swap, both chromosomes will still be in H. Second, suppose the symbol swapped is at a position corresponding to a fixed symbol in H. Since both chromosomes are in H, their symbols at that position must be identical. Hence, the swap does not change the chromosomes. This completes the proof.
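The argument can also be checked by simulation. The sketch below (plain Python, with a hypothetical schema H over {0, 1, *}) swaps random subsets of positions between two chromosomes in H and confirms that both results remain in H:

```python
import random

# Schema membership: '*' is a don't-care; fixed positions must match.
def in_schema(chrom, schema):
    return all(s == "*" or c == s for c, s in zip(chrom, schema))

def swap_positions(c1, c2, positions):
    c1, c2 = list(c1), list(c2)
    for p in positions:
        c1[p], c2[p] = c2[p], c1[p]
    return "".join(c1), "".join(c2)

H = "1**0*1*"          # hypothetical schema
random.seed(0)
ok = True
for _ in range(1000):
    # Draw two random chromosomes in H and swap a random set of positions.
    c1 = "".join(s if s != "*" else random.choice("01") for s in H)
    c2 = "".join(s if s != "*" else random.choice("01") for s in H)
    pos = [p for p in range(len(H)) if random.random() < 0.5]
    d1, d2 = swap_positions(c1, c2, pos)
    ok = ok and in_schema(d1, H) and in_schema(d2, H)
print(ok)
```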
14.6
Consider a given chromosome in M(k) ∩ H. The probability that it is chosen for crossover is q_c. If neither of its offsprings is in H, then at least one of the crossover points must be between the corresponding first and last fixed symbols of H. The probability of this is 1 − (1 − δ(H)/(L − 1))². To see this, note that the probability that each crossover point is not between the corresponding first and last fixed symbols is 1 − δ(H)/(L − 1), and thus the probability that both crossover points are not between the corresponding first and last fixed symbols of H is (1 − δ(H)/(L − 1))². Hence, the probability that the given chromosome is chosen for crossover and neither of its offsprings is in H is bounded above by

    q_c(1 − (1 − δ(H)/(L − 1))²).
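The probability computation can be checked with a quick Monte Carlo sketch (plain Python; the chromosome length L and the fixed positions of H are hypothetical):

```python
import random

# Each crossover point c in {1, ..., L-1} cuts between positions c and
# c+1.  It falls between the first and last fixed symbols of H
# (positions f and l) when f <= c < l; there are delta(H) = l - f such
# cut points, so a single point misses with probability 1 - delta/(L-1).
L, f, l = 20, 5, 12                 # hypothetical schema geometry
delta = l - f                       # delta(H) = 7
p_miss_both = (1 - delta / (L - 1)) ** 2

random.seed(0)
trials = 200_000
miss = 0
for _ in range(trials):
    c1 = random.randint(1, L - 1)
    c2 = random.randint(1, L - 1)
    if not (f <= c1 < l) and not (f <= c2 < l):
        miss += 1
print(abs(miss / trials - p_miss_both) < 0.01)
```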

14.7
As for two-point crossover, the n-point crossover operation is a composition of n one-point crossover operations (i.e., n one-point crossover operations in succession). The required result for this case is as follows.
Lemma: Given a chromosome in M(k) ∩ H, the probability that it is chosen for crossover and neither of its offsprings is in H is bounded above by

    q_c(1 − (1 − δ(H)/(L − 1))^n).

For the proof, proceed as in the solution of Exercise 14.6, replacing 2 by n.
14.8

function M=roulette_wheel(fitness);
%function M=roulette_wheel(fitness)
%fitness = vector of fitness values of chromosomes in population
%M = vector of indices indicating which chromosome in the
% given population should appear in the mating pool

fitness = fitness - min(fitness); % to keep the fitness positive

if sum(fitness) == 0,
disp('Population has identical chromosomes -- STOP');
return;
else
fitness = fitness/sum(fitness);
end
cum_fitness = cumsum(fitness);
for i = 1:length(fitness),
tmp = find(cum_fitness-rand>0);
M(i) = tmp(1);
end
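A plain-Python version of the same roulette-wheel scheme (shift, normalize, then invert the cumulative distribution) might look as follows; the fitness values are hypothetical:

```python
import random
from itertools import accumulate

# Roulette-wheel selection: shift fitness to be nonnegative, normalize,
# then sample indices by comparing uniform draws to cumulative sums.
def roulette_wheel(fitness, rng=random.random):
    shifted = [f - min(fitness) for f in fitness]
    total = sum(shifted)
    if total == 0:
        raise ValueError("Population has identical chromosomes -- STOP")
    cum = list(accumulate(f / total for f in shifted))
    pool = []
    for _ in range(len(fitness)):
        r = rng()
        pool.append(next((i for i, c in enumerate(cum) if c > r),
                         len(cum) - 1))
    return pool

random.seed(0)
pool = roulette_wheel([1.0, 3.0, 2.0, 6.0])
print(len(pool), all(0 <= i < 4 for i in pool))
```

Note that, after the shift, the least-fit chromosome has zero selection probability, exactly as in the MATLAB routine.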

14.9

% parent1, parent2 = two binary parent chromosomes (row vectors)

L = length(parent1);
crossover_pt = ceil(rand*(L-1));
offspring1 = [parent1(1:crossover_pt) parent2(crossover_pt+1:L)];
offspring2 = [parent2(1:crossover_pt) parent1(crossover_pt+1:L)];
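A plain-Python equivalent of this one-point crossover (with hypothetical parents):

```python
import random

# One-point crossover: pick a cut point in {1, ..., L-1} and exchange
# the tails of the two parents.
def crossover(parent1, parent2, rng=random):
    L = len(parent1)
    cut = rng.randint(1, L - 1)
    return parent1[:cut] + parent2[cut:], parent2[:cut] + parent1[cut:]

random.seed(0)
o1, o2 = crossover([1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0])
# The two offspring jointly contain exactly the parents' genes.
print(sorted(o1 + o2) == sorted([1] * 6 + [0] * 6))
```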

14.10

% mating_pool = matrix of 0-1 elements; each row represents a chromosome

% p_m = probability of mutation
N = size(mating_pool,1);
L = size(mating_pool,2);
mutation_points = rand(N,L) < p_m;
new_population = xor(mating_pool,mutation_points);
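A plain-Python equivalent of this mutation step, flipping each bit independently with probability p_m via XOR:

```python
import random

# Bitwise mutation: XOR each bit with a random 0/1 mask whose entries
# are 1 with probability p_m.
def mutate(pool, p_m, rng=random.random):
    return [[bit ^ (1 if rng() < p_m else 0) for bit in chrom]
            for chrom in pool]

random.seed(0)
pool = [[0, 1, 0, 1], [1, 1, 0, 0]]
unchanged = mutate(pool, p_m=0.0)   # p_m = 0 leaves the pool unchanged
print(unchanged == pool)
flipped = mutate(pool, p_m=1.0)     # p_m = 1 flips every bit
print(all(b1 != b2 for c1, c2 in zip(pool, flipped)
          for b1, b2 in zip(c1, c2)))
```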

14.11
A MATLAB routine for a genetic algorithm with binary encoding is:

function [winner,bestfitness] = ga(L,N,fit_func,options)
% function winner = GA(L,N,fit_func)
% Function call: GA(L,N,f)
% L = length of chromosomes
% N = population size (must be an even number)
% f = name of fitness value function
%
%Options:
%print = options(1);
%selection = options(5);
%max_iter=options(14);
%p_c = options(18);
%p_m = p_c/100;
%
%Selection:
% options(5) = 0 for roulette wheel, 1 for tournament

clf;

if nargin ~= 4
options = [];
if nargin ~= 3
disp('Wrong number of arguments.');
return;
end
end

if length(options) >= 14
if options(14)==0
options(14)=3*N;
end
else
options(14)=3*N;
end
if length(options) < 18
options(18)=0.75; %optional crossover rate
end

%format compact;
%format short e;

options = foptions(options);
print = options(1);
selection = options(5);
max_iter=options(14);
p_c = options(18);
p_m = p_c/100;

P = rand(N,L)>0.5;
bestvaluesofar = 0;

%Initial evaluation
for i = 1:N,
fitness(i) = feval(fit_func,P(i,:));
end
[bestvalue,best] = max(fitness);
if bestvalue > bestvaluesofar,
bestsofar = P(best,:);
bestvaluesofar = bestvalue;

end

for k = 1:max_iter,

%Selection
fitness = fitness - min(fitness); % to keep the fitness positive
if sum(fitness) == 0,
disp('Population has identical chromosomes -- STOP');
disp('Number of iterations:');
disp(k);
for i = k:max_iter,
upper(i)=upper(i-1);
average(i)=average(i-1);
lower(i)=lower(i-1);
end
break;
else
fitness = fitness/sum(fitness);
end
if selection == 0,
%roulette-wheel
cum_fitness = cumsum(fitness);
for i = 1:N,
tmp = find(cum_fitness-rand>0);
m(i) = tmp(1);
end
else
%tournament
for i = 1:N,
fighter1=ceil(rand*N);
fighter2=ceil(rand*N);
if fitness(fighter1)>fitness(fighter2),
m(i) = fighter1;
else
m(i) = fighter2;
end
end
end
M = zeros(N,L);
for i = 1:N,
M(i,:) = P(m(i),:);
end

%Crossover
Mnew = M;
for i = 1:N/2
ind1 = ceil(rand*N);
ind2 = ceil(rand*N);
parent1 = M(ind1,:);
parent2 = M(ind2,:);
if rand < p_c
crossover_pt = ceil(rand*(L-1));
offspring1 = [parent1(1:crossover_pt) parent2(crossover_pt+1:L)];
offspring2 = [parent2(1:crossover_pt) parent1(crossover_pt+1:L)];
Mnew(ind1,:) = offspring1;
Mnew(ind2,:) = offspring2;
end
end

%Mutation
mutation_points = rand(N,L) < p_m;
P = xor(Mnew,mutation_points);

%Evaluation
for i = 1:N,
fitness(i) = feval(fit_func,P(i,:));
end
[bestvalue,best] = max(fitness);
if bestvalue > bestvaluesofar,
bestsofar = P(best,:);
bestvaluesofar = bestvalue;
end

upper(k) = bestvalue;
average(k) = mean(fitness);
lower(k) = min(fitness);

end %for

if k == max_iter,
disp('Algorithm terminated after maximum number of iterations:');
disp(max_iter);
end

winner = bestsofar;
bestfitness = bestvaluesofar;

if print,
iter = [1:max_iter];
plot(iter,upper,'o:',iter,average,'x-',iter,lower,'*--');
legend('Best', 'Average', 'Worst');
xlabel('Generations','Fontsize',14);
ylabel('Objective Function Value','Fontsize',14);
set(gca,'Fontsize',14);
hold off;
end

function dec = bin2dec(bin,range);
%function dec = bin2dec(bin,range);
%Function to convert from binary (bin) to decimal (dec) in a given range

index = polyval(bin,2);
dec = index*((range(2)-range(1))/(2^length(bin)-1)) + range(1);

function y=f_manymax(x);

y=-15*(sin(2*x))^2-(x-2)^2+160;

function y=fit_func1(binchrom);
%1-D fitness function

f='f_manymax';
range=[-10,10];
x=bin2dec(binchrom,range);
y=feval(f,x);

We use the following script to run the algorithm:

clear;
options(1)=1;
[x,y]=ga(8,10,'fit_func1',options);
f='f_manymax';
range=[-10,10];
disp('GA Solution:');
disp(bin2dec(x,range));
disp('Objective function value:');
disp(y);

Running the above algorithm, we obtain a solution of x = 1.6078, and an objective function value of
159.7640. The figure below shows a plot of the best, average, and worst solution from each generation of the
population.

[Figure: best, average, and worst objective function values versus generations.]

b. To run the routine, we create the following M-files (we also use the routine bin2dec from part a).

function y=f_peaks(x);

y=3*(1-x(1)).^2.*exp(-(x(1).^2)-(x(2)+1).^2) ...
  - 10.*(x(1)/5-x(1).^3-x(2).^5).*exp(-x(1).^2-x(2).^2) ...
  - exp(-(x(1)+1).^2-x(2).^2)/3;

function y=fit_func2(binchrom);
%2-D fitness function

f='f_peaks';
xrange=[-3,3];
yrange=[-3,3];
L=length(binchrom);
x1=bin2dec(binchrom(1:L/2),xrange);
x2=bin2dec(binchrom(L/2+1:L),yrange);
y=feval(f,[x1,x2]);

We use the following script to run the algorithm:

clear;
options(1)=1;
[x,y]=ga(16,20,'fit_func2',options);
f='f_peaks';
xrange=[-3,3];

yrange=[-3,3];
L=length(x);
x1=bin2dec(x(1:L/2),xrange);
x2=bin2dec(x(L/2+1:L),yrange);
disp('GA Solution:');
disp([x1,x2]);
disp('Objective function value:');
disp(y);

A plot of the objective function is shown below.

[Figure: plot of the objective function f(x) versus x.]

Running the above algorithm, we obtain a solution of x = [0.0588, 1.5412]^T, and an objective function value of 7.9815. (Compare this solution with that of Example 14.3.) The figure below shows a plot of the best, average, and worst solution from each generation of the population.

[Figure: best, average, and worst objective function values versus generations.]

14.12
A MATLAB routine for a real-number genetic algorithm:
function [winner,bestfitness] = gar(Domain,N,fit_func,options)
% function winner = GAR(Domain,N,fit_func)
% Function call: GAR(Domain,N,f)
% Domain = search space; e.g., [-2,2;-3,3] for the space [-2,2]x[-3,3]

% (number of rows of Domain = dimension of search space)
% N = population size (must be an even number)
% f = name of fitness value function
%
%Options:
%print = options(1);
%selection = options(5);
%max_iter=options(14);
%p_c = options(18);
%p_m = p_c/100;
%
%Selection:
% options(5) = 0 for roulette wheel, 1 for tournament

clf;

if nargin ~= 4
options = [];
if nargin ~= 3
disp('Wrong number of arguments.');
return;
end
end

if length(options) >= 14
if options(14)==0
options(14)=3*N;
end
else
options(14)=3*N;
end
if length(options) < 18
options(18)=0.75; %optional crossover rate
end

%format compact;
%format short e;

options = foptions(options);
print = options(1);
selection = options(5);
max_iter=options(14);
p_c = options(18);
p_m = p_c/100;
n = size(Domain,1);
lowb = Domain(:,1);
upb = Domain(:,2);
bestvaluesofar = 0;
for i = 1:N,
P(i,:) = lowb + rand(1,n).*(upb-lowb);
%Initial evaluation
fitness(i) = feval(fit_func,P(i,:));
end
[bestvalue,best] = max(fitness);
if bestvalue > bestvaluesofar,
bestsofar = P(best,:);
bestvaluesofar = bestvalue;
end

for k = 1:max_iter,

%Selection
fitness = fitness - min(fitness); % to keep the fitness positive
if sum(fitness) == 0,
disp('Population has identical chromosomes -- STOP');
disp('Number of iterations:');
disp(k);
for i = k:max_iter,
upper(i)=upper(i-1);
average(i)=average(i-1);
lower(i)=lower(i-1);
end
break;
else
fitness = fitness/sum(fitness);
end
if selection == 0,
%roulette-wheel
cum_fitness = cumsum(fitness);
for i = 1:N,
tmp = find(cum_fitness-rand>0);
m(i) = tmp(1);
end
else
%tournament
for i = 1:N,
fighter1=ceil(rand*N);
fighter2=ceil(rand*N);
if fitness(fighter1)>fitness(fighter2),
m(i) = fighter1;
else
m(i) = fighter2;
end
end
end
M = zeros(N,n);
for i = 1:N,
M(i,:) = P(m(i),:);
end

%Crossover
Mnew = M;
for i = 1:N/2
ind1 = ceil(rand*N);
ind2 = ceil(rand*N);
parent1 = M(ind1,:);
parent2 = M(ind2,:);
if rand < p_c
a = rand;
offspring1 = a*parent1+(1-a)*parent2+(rand(1,n)-0.5).*(upb-lowb)/10;
offspring2 = a*parent2+(1-a)*parent1+(rand(1,n)-0.5).*(upb-lowb)/10;
%do projection
for j = 1:n,
if offspring1(j)<lowb(j),
offspring1(j)=lowb(j);
elseif offspring1(j)>upb(j),
offspring1(j)=upb(j);
end

if offspring2(j)<lowb(j),
offspring2(j)=lowb(j);
elseif offspring2(j)>upb(j),
offspring2(j)=upb(j);
end
end
Mnew(ind1,:) = offspring1;
Mnew(ind2,:) = offspring2;
end
end

%Mutation
for i = 1:N,
if rand < p_m,
a = rand;
Mnew(i,:) = a*Mnew(i,:) + (1-a)*(lowb + rand(1,n).*(upb-lowb));
end
end

P = Mnew;

%Evaluation
for i = 1:N,
fitness(i) = feval(fit_func,P(i,:));
end
[bestvalue,best] = max(fitness);
if bestvalue > bestvaluesofar,
bestsofar = P(best,:);
bestvaluesofar = bestvalue;
end

upper(k) = bestvalue;
average(k) = mean(fitness);
lower(k) = min(fitness);

end %for

if k == max_iter,
disp('Algorithm terminated after maximum number of iterations:');
disp(max_iter);
end

winner = bestsofar;
bestfitness = bestvaluesofar;

if print,
iter = [1:max_iter];
plot(iter,upper,'o:',iter,average,'x-',iter,lower,'*--');
legend('Best','Average','Worst');
xlabel('Generations','Fontsize',14);
ylabel('Objective Function Value','Fontsize',14);
set(gca,'Fontsize',14);
hold off;
end
To run the routine, we create the following M-file for the given function.
function y=f_wave(x);

y=x(1)*sin(x(1)) + x(2)*sin(5*x(2));
We use the following script to run the algorithm:

options(1)=1;
options(14)=50;
[x,y]=gar([0,10;4,6],20,'f_wave',options);
disp('GA Solution:');
disp(x);
disp('Objective function value:');
disp(y);

Running the above algorithm, we obtain a solution of x = [7.9711, 5.3462]> , and an objective function
value of 13.2607. The figure below shows a plot of the best, average, and worst solution from each generation
of the population.

[Figure: best, average, and worst objective function values from each generation, over 50 generations.]

Using the MATLAB function fminunc (from the Optimization Toolbox), we found the optimal point to
be [7.9787, 5.3482]> , with objective function value 13.2612. We can see that this solution agrees with the
solution obtained using our real-number genetic algorithm.
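As a quick cross-check (not part of the original solution), the objective can be evaluated in Python at the point reported by fminunc; the function below mirrors the f_wave M-file above.

```python
import math

def f_wave(x1, x2):
    # Objective from the exercise: x1*sin(x1) + x2*sin(5*x2)
    return x1 * math.sin(x1) + x2 * math.sin(5 * x2)

# Point reported by fminunc in the text
val = f_wave(7.9787, 5.3482)
print(round(val, 4))  # close to the reported value 13.2612
```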

15. Introduction to Linear Programming
15.1

minimize 2x1 x2
subject to x1 + x3 = 2
x1 + x2 + x4 = 3
x1 + 2x2 + x5 = 5
x1 , . . . , x5 0

15.2
We have
x2 = ax1 + bu1 = a2 x0 + abu0 + bu1 = a2 x0 + [ab, b]u

where u = [u0 , u1 ]> is the decision variable. We can write the constraint |ui | ≤ 1 as ui ≤ 1 and -ui ≤ 1. Hence, the
problem is:

minimize a2 x0 + [ab, b]u

subject to -1 ≤ u0 ≤ 1, -1 ≤ u1 ≤ 1.

Since a2 x0 is a constant, we can remove it from the objective function without changing the solution. Intro-
ducing slack variables v1 , v2 , v3 , v4 , we obtain the standard form problem

minimize [ab, b]u

subject to u0 + v1 = 1
-u0 + v2 = 1
u1 + v3 = 1
-u1 + v4 = 1
v1 , . . . , v4 ≥ 0.

15.3

Let xi+ , xi- ≥ 0 be such that |xi | = xi+ + xi- and xi = xi+ - xi- . Substituting into the original problem, we have

minimize c1 (x1+ + x1- ) + c2 (x2+ + x2- ) + · · · + cn (xn+ + xn- )
subject to A(x+ - x- ) = b
x+ , x- ≥ 0,

where x+ = [x1+ , . . . , xn+ ]> and x- = [x1- , . . . , xn- ]> . Rewriting, we get

minimize [c> , c> ]z

subject to [A, -A]z = b
z ≥ 0,

where z = [(x+ )> , (x- )> ]> , which is an equivalent linear programming problem in standard form.

Note that although the variables xi+ and xi- in the solution are required to satisfy xi+ xi- = 0, we do
not need to explicitly include this in the constraints, because any optimal solution to the above transformed
problem automatically satisfies the condition xi+ xi- = 0. To see this, suppose we have an optimal solution
with both xi+ > 0 and xi- > 0. In this case, note that ci > 0 (for otherwise we could add an arbitrary
constant to both xi+ and xi- and still satisfy feasibility, without increasing the objective function value). Then,
by subtracting min(xi+ , xi- ) from both xi+ and xi- , we obtain a new feasible point with a lower objective function
value, contradicting the optimality assumption. [See also M. A. Dahleh and I. J. Diaz-Bobillo, Control of
Uncertain Systems: A Linear Programming Approach, Prentice Hall, 1995, pp. 189-190.]
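The shrinking argument can be illustrated numerically. The Python sketch below (illustrative values, not data from the exercise) subtracts min(xi+, xi-) from both parts and confirms that the represented value of xi is unchanged while the cost term ci(xi+ + xi-) strictly decreases when ci > 0.

```python
def shrink(x_plus, x_minus):
    """Subtract min(x+, x-) from both parts; x = x+ - x- is unchanged."""
    d = min(x_plus, x_minus)
    return x_plus - d, x_minus - d

# Illustrative split with both parts positive (represents x_i = 2 either way)
xp, xm = 5.0, 3.0
xp2, xm2 = shrink(xp, xm)
ci = 1.0  # any positive cost coefficient
assert xp - xm == xp2 - xm2               # value of x_i preserved
assert ci * (xp2 + xm2) < ci * (xp + xm)  # objective term strictly decreases
assert min(xp2, xm2) == 0                 # complementarity now holds
```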

15.4
Not every linear programming problem in standard form has a nonempty feasible set. Example:

minimize x1
subject to x1 = -1
x1 ≥ 0.

Not every linear programming problem in standard form (even assuming a nonempty feasible set) has an
optimal solution. Example:

minimize -x1
subject to x2 = 1
x1 , x2 ≥ 0.

15.5
Let x1 and x2 represent the number of units to be shipped from A to C and to D, respectively, and x3 and
x4 represent the number of units to be shipped from B to C and to D, respectively. Then, the given problem
can be formulated as the following linear program:

minimize x1 + 2x2 + 3x3 + 4x4

subject to x1 + x3 = 50
x2 + x4 = 60
x1 + x2 ≤ 70
x3 + x4 ≤ 80
x1 , x2 , x3 , x4 ≥ 0.

Introducing slack variables x5 and x6 , we obtain the equivalent problem in standard form:

minimize x1 + 2x2 + 3x3 + 4x4

subject to x1 + x3 = 50
x2 + x4 = 60
x1 + x2 + x5 = 70
x3 + x4 + x6 = 80
x1 , x2 , x3 , x4 , x5 , x6 ≥ 0.
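Although the exercise only asks for the formulation, the model can be sanity-checked by brute force. The Python sketch below (not part of the manual's solution) scans integer shipment plans satisfying the constraints and reports the least cost, which is 250, attained when x1 + x2 = 70.

```python
# Brute-force check of the transportation LP over integer shipments.
# x1: A->C, x2: A->D, x3: B->C, x4: B->D (as defined in the exercise).
best = None
for x1 in range(0, 51):          # x1 <= 50 since x1 + x3 = 50 and x3 >= 0
    for x2 in range(0, 61):      # x2 <= 60 since x2 + x4 = 60 and x4 >= 0
        x3, x4 = 50 - x1, 60 - x2
        if x1 + x2 <= 70 and x3 + x4 <= 80:
            cost = x1 + 2*x2 + 3*x3 + 4*x4
            if best is None or cost < best:
                best = cost
print(best)  # least cost; the cost equals 390 - 2*(x1 + x2)
```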

15.6

We can see that there are two paths from A to E (ACDE and ACBF DE), and two paths from B to F
(BCDF and BF ). Let x1 and x2 be the data rates for the two paths from A to E, respectively, and x3 and x4
the data rates for the two paths from B to F , respectively. The total revenue is then 2(x1 + x2 ) + 3(x3 + x4 ).
For each link, we have a data rate constraint on the sum of all xi s passing through that link. For example,

for link BC, there are two paths passing through it, with total data rate x2 + x3 . The constraint for
link BC is therefore x2 + x3 ≤ 7. Hence, the optimization problem is the following linear programming problem:

maximize 2(x1 + x2 ) + 3(x3 + x4 )
subject to x1 + x2 ≤ 10
x1 + x2 ≤ 12
x1 + x3 ≤ 8
x2 + x3 ≤ 7
x2 + x3 ≤ 4
x2 + x4 ≤ 3
x1 , . . . , x4 ≥ 0.

Converting this to standard form:

minimize -2(x1 + x2 ) - 3(x3 + x4 )
subject to x1 + x2 + x5 = 10
x1 + x2 + x6 = 12
x1 + x3 + x7 = 8
x2 + x3 + x8 = 7
x2 + x3 + x9 = 4
x2 + x4 + x10 = 3
x1 , . . . , x10 ≥ 0.

15.7
Let xi ≥ 0, i = 1, . . . , 4, be the weight in pounds of item i to be used. Then, the total weight is x1 + x2 + x3 + x4 .
To satisfy the percentage content of fiber, fat, and sugar, and the total weight of 1000, we need

3x1 + 8x2 + 16x3 + 4x4 = 10(x1 + x2 + x3 + x4 )
6x1 + 46x2 + 9x3 + 9x4 = 2(x1 + x2 + x3 + x4 )
20x1 + 5x2 + 4x3 + 0x4 = 5(x1 + x2 + x3 + x4 )
x1 + x2 + x3 + x4 = 1000.

The total cost is 2x1 + 4x2 + x3 + 2x4 . Therefore, the problem is:

minimize 2x1 + 4x2 + x3 + 2x4
subject to -7x1 - 2x2 + 6x3 - 6x4 = 0
4x1 + 44x2 + 7x3 + 7x4 = 0
15x1 - x3 - 5x4 = 0
x1 + x2 + x3 + x4 = 1000
x1 , x2 , x3 , x4 ≥ 0.

Alternatively, we could have simply replaced x1 + x2 + x3 + x4 in the first three equality constraints above
by 1000, to obtain:

3x1 + 8x2 + 16x3 + 4x4 = 10000
6x1 + 46x2 + 9x3 + 9x4 = 2000
20x1 + 5x2 + 4x3 + 0x4 = 5000
x1 + x2 + x3 + x4 = 1000.

Note that the only vector satisfying the above linear equations is approximately [179, -175, 573, 422]> , which is
not feasible (its second component is negative). Therefore, the constraint set does not contain any feasible
points, which means that the problem does not have a solution.
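The infeasibility claim can be verified with exact rational arithmetic. The sketch below (a cross-check, not part of the original solution) solves the four equations by Gaussian elimination over Fractions and confirms that the unique solution has a negative second component.

```python
from fractions import Fraction as F

def solve(aug):
    """Gauss-Jordan elimination with partial pivoting on an augmented matrix."""
    n = len(aug)
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(aug[r][i]))
        aug[i], aug[p] = aug[p], aug[i]
        for r in range(n):
            if r != i and aug[r][i] != 0:
                f = aug[r][i] / aug[i][i]
                aug[r] = [a - f*b for a, b in zip(aug[r], aug[i])]
    return [row[-1] / row[i] for i, row in enumerate(aug)]

# The four equality constraints of the exercise
A = [[3, 8, 16, 4, 10000],
     [6, 46, 9, 9, 2000],
     [20, 5, 4, 0, 5000],
     [1, 1, 1, 1, 1000]]
x = solve([[F(v) for v in row] for row in A])
assert sum(x) == 1000  # the weight equation holds exactly
assert x[1] < 0        # x2 < 0, so there is no feasible point
```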
15.8
The objective function is p1 + · · · + pn . The constraint for the ith location is gi,1 p1 + · · · + gi,n pn ≥ P .
Hence, the optimization problem is:

minimize p1 + · · · + pn
subject to gi,1 p1 + · · · + gi,n pn ≥ P, i = 1, . . . , m
p1 , . . . , pn ≥ 0.

By defining the notation G = [gi,j ] (m × n), en = [1, . . . , 1]> (with n components), and p = [p1 , . . . , pn ]> ,
we can rewrite the problem as

minimize en> p
subject to Gp ≥ P em
p ≥ 0.

15.9
It is easy to check (using MATLAB, for example) that the matrix

2 1 2 1 3
A = 1 2 3 1 0

1 0 2 0 5

is of full rank (i.e., rank A = 3). Therefore, the system has basic solutions. To find the basic solutions, we
first select bases. Each basis consists of three linearly independent columns of A. These columns correspond
to basic variables of the basic solution. The remaining variables are nonbasic and are set to 0. The matrix
A has 5 columns; therefore, we have (5 choose 3) = 10 possible candidate basic solutions (corresponding to the 10
combinations of 3 columns out of 5). It turns out that all 10 combinations of 3 columns of A are linearly
independent. Therefore, we have 10 basic solutions. These are tabulated as follows:

Columns Basic Solutions

1, 2, 3 [4/17, 80/17, 83/17, 0, 0]>
1, 2, 4 [10, 49, 0, 83, 0]>
1, 2, 5 [105/31, 25/31, 0, 0, 83/31]>
1, 3, 4 [12/11, 0, 49/11, 80/11, 0]>
1, 3, 5 [100/35, 0, 25/35, 0, 80/35]>
1, 4, 5 [65/18, 0, 0, 25/18, 49/18]>
2, 3, 4 [0, 6, 5, 2, 0]>
2, 3, 5 [0, 100/23, 105/23, 0, 4/23]>
2, 4, 5 [0, 13, 0, 21, 2]>
3, 4, 5 [0, 0, 65/19, 100/19, 12/19]>
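The enumeration procedure described above can be sketched in code. The example below uses small hypothetical data (a 2 × 3 system, not the exercise's A and b) but follows the same recipe: pick m columns, check that they are linearly independent, solve for the basic variables, and set the rest to zero.

```python
from itertools import combinations
from fractions import Fraction as F

def basic_solutions(A, b):
    """Enumerate basic solutions of Ax = b (A assumed full row rank, m x n)."""
    m, n = len(A), len(A[0])
    sols = []
    for cols in combinations(range(n), m):
        # Solve B xB = b by Cramer's rule (m = 2 here, for simplicity)
        B = [[A[i][j] for j in cols] for i in range(m)]
        det = B[0][0]*B[1][1] - B[0][1]*B[1][0]
        if det == 0:
            continue  # columns dependent: no basic solution for this choice
        xB = [F(b[0]*B[1][1] - b[1]*B[0][1], det),
              F(b[1]*B[0][0] - b[0]*B[1][0], det)]
        x = [F(0)] * n
        for k, j in enumerate(cols):
            x[j] = xB[k]
        sols.append(x)
    return sols

# Hypothetical data: x1 + x3 = 2, x2 + x3 = 3
sols = basic_solutions([[1, 0, 1], [0, 1, 1]], [2, 3])
print(len(sols))  # all three 2-column bases are nonsingular here
```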

15.10
In the figure below, the shaded region corresponds to the feasible set. We then translate the line 2x1 +5x2 = 0
across the shaded region until the line just touches the region at one point, and the line is as far as possible
from the origin. The point of contact is the solution to the problem. In this case, the solution is [2, 6]> , and
the corresponding cost is 34.

[Figure: the feasible region (shaded), bounded in part by the line x1 + x2 = 8, with the objective lines 2x1 + 5x2 = 0 and 2x1 + 5x2 = 34; the latter touches the region at the solution [2, 6]> .]

15.11
We use the following MATLAB commands:
>> f=[0,-10,0,-6,-20];
>> A=[1,-1,-1,0,0; 0,0,1,-1,-1];
>> b=[0;0];
>> vlb=zeros(5,1);
>> vub=[4;3;3;2;2];
>> x0=zeros(5,1);
>> neqcstr=2;
>> x=linprog(f,A,b,vlb,vub,x0,neqcstr)

x =

4.0000
2.0000
2.0000
0.0000
2.0000
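As a quick check (not in the manual), the reported solution can be verified to satisfy the equality constraints and bounds, and its objective value computed:

```python
x = [4.0, 2.0, 2.0, 0.0, 2.0]        # solution reported by linprog above
f = [0, -10, 0, -6, -20]
A = [[1, -1, -1, 0, 0], [0, 0, 1, -1, -1]]
ub = [4, 3, 3, 2, 2]

# Equality constraints Ax = 0 and bounds 0 <= x <= ub
for row in A:
    assert abs(sum(r*v for r, v in zip(row, x))) < 1e-9
assert all(0 <= v <= u for v, u in zip(x, ub))
print(sum(c*v for c, v in zip(f, x)))  # objective value: -60.0
```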

## 16. The Simplex Method

16.1
a. Performing a sequence of elementary row operations, we obtain

1 2 1 3 2 1 2 1 3 2 1 2 1 3 2
2 1 3 0 1 0 5 5 6 3 0 5 5 6 3

A= = B.

3 1 2 3 3 0 5 5 6 3 0 0 4 2 1
1 2 3 1 1 0 0 4 2 1 0 0 0 0 0

Because elementary row operations do not change the rank of a matrix, rank A = rank B. Therefore
rank A = 3.
b. Performing a sequence of elementary row operations, we obtain

1 1 2 1 10 6 1 1 10 6 1
A = 2 1 5 1 1 2 0 10 5 1

1 10 6 1 2 1 5 0 21 + 12 3

1 1 10 6 1 1 10 6
0 1 10 5 0 1 10 5 =B

0 3 21 + 12 0 0 3( 3) 3
Because elementary row operations do not change the rank of a matrix, rank A = rank B. Therefore
rank A = 3 if the parameter is not equal to 3, and rank A = 2 if it equals 3.
16.2
a. " # " #
3 1 0 1 4
A= , b= , c = [2, 1, 1, 0] .
6 2 1 1 5

b. Pivoting the problem tableau about the elements (1, 4) and (2, 3), we obtain

3 1 0 1 4
3 1 1 0 1
5 0 0 0 1

c. Basic feasible solution: x = [0, 0, 1, 4]> , c> x = 1.

d. [r1 , r2 , r3 , r4 ] = [5, 0, 0, 0].
e. Since the reduced cost coefficients are all ≥ 0, the basic feasible solution in part c is optimal.
f. The original problem does indeed have a feasible solution, because the artificial problem has an optimal
feasible solution with objective function value 0, as shown in the final phase I tableau.
g. Extract the submatrices corresponding to A and b, append the last row [c> , 0], and pivot about the
(2, 1)th element to obtain
0 0 1 1 3
1 1/3 1/3 0 1/3
0 5/3 5/3 0 2/3

16.3
The problem in standard form is:

minimize -x1 - x2 - 3x3
subject to x1 + x3 = 1
x2 + x3 = 2
x1 , x2 , x3 ≥ 0.

We form the tableau for the problem:

1 0 1 1
0 1 1 2
-1 -1 -3 0

Performing necessary row operations, we obtain a tableau in canonical form:

1 0 1 1
0 1 1 2
0 0 -1 3

We pivot about the (1, 3)th element to get:

1 0 1 1
-1 1 0 1
1 0 0 4

The reduced cost coefficients are all nonnegative. Hence, the current basic feasible solution is optimal:
[0, 1, 1]> . The optimal cost is -4.
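The pivoting operations used here can be mechanized; the Python sketch below implements the elementary pivot (an analogue of the pivot M-file that appears in Exercise 16.20) and applies it to a small canonical tableau. The entries of the tableau literal, including signs, are assumptions for illustration.

```python
from fractions import Fraction as F

def pivot(tableau, p, q):
    """Return the tableau after pivoting about entry (p, q) (0-indexed)."""
    piv = tableau[p][q]
    new_p = [v / piv for v in tableau[p]]
    return [new_p if i == p else [v - row[q] * w for v, w in zip(row, new_p)]
            for i, row in enumerate(tableau)]

# Canonical-form tableau with basis {a1, a2} (illustrative entries)
T = [[F(1), F(0), F(1), F(1)],
     [F(0), F(1), F(1), F(2)],
     [F(0), F(0), F(-1), F(3)]]
T = pivot(T, 0, 2)   # bring the third column into the basis
assert T[2][2] == 0  # reduced cost of the entering column is zeroed
```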
16.4
The problem in standard form is:

minimize -2x1 - x2
subject to x1 + x3 = 5
x2 + x4 = 7
x1 + x2 + x5 = 9
x1 , . . . , x5 ≥ 0.

We form the tableau for the problem:

1 0 1 0 0 5
0 1 0 1 0 7
1 1 0 0 1 9
-2 -1 0 0 0 0

The above tableau is already in canonical form, and therefore we can proceed with the simplex procedure.
We first pivot about the (1, 1)th element, to get

1 0 1 0 0 5
0 1 0 1 0 7
0 1 -1 0 1 4
0 -1 2 0 0 10

Next, we pivot about the (3, 2)th element to get

1 0 1 0 0 5
0 0 1 1 -1 3
0 1 -1 0 1 4
0 0 1 0 1 14

The reduced cost coefficients are all nonnegative. Hence, the optimal solution to the problem in standard
form is [5, 4, 0, 3, 0]> . The corresponding optimal cost is -14.

16.5
a. Let B = [a2 , a1 ] represent the first two columns of A ordered according to the basis corresponding to the
given canonical tableau, and D the second two columns. Then,

B⁻¹D = [1 2; 3 4].

Hence,

B = D [1 2; 3 4]⁻¹ = [3/2 -1/2; -2 1].

Hence,

A = [-1/2 3/2 0 1; 1 -2 1 0].

An alternative approach is to realize that the canonical tableau is obtained from the problem tableau via
elementary row operations. Therefore, we can obtain the entries of A from the 2 × 4 upper-left submatrix
of the canonical tableau via elementary row operations also. Specifically, start with

0 1 1 2
1 0 3 4

and then do two pivoting operations, one about (1, 4) and the other about (2, 3).

b. The right half of c is given by

cD> = rD> + cB> B⁻¹D = [-1, 1] + [7, 8] [1 2; 3 4] = [-1, 1] + [31, 46] = [30, 47].

So c = [8, 7, 30, 47]> .

c. First we calculate B⁻¹b, giving us the basic variable values:

B⁻¹b = [2 1; 4 3] [5; 6] = [16; 38].

Hence, the BFS is [38, 16, 0, 0]> .

d. The first two entries are 16 and 38, respectively. The last component is -cB> B⁻¹b = -(7(16) + 8(38)) =
-416. Hence, the last column is the vector [16, 38, -416]> .
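Part c's arithmetic can be double-checked with exact fractions. The sketch below is a cross-check, not part of the original solution; the entries of B (including signs) are as reconstructed in part a.

```python
from fractions import Fraction as F

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a*d - b*c
    return [[d/det, -b/det], [-c/det, a/det]]

B = [[F(3, 2), F(-1, 2)], [F(-2), F(1)]]   # basis matrix from part a (assumed signs)
Binv = inv2(B)
y0 = [sum(r*v for r, v in zip(row, [5, 6])) for row in Binv]
assert Binv == [[2, 1], [4, 3]]
assert y0 == [16, 38]   # basic variable values, as in part c
```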
16.6

The columns in the constraint matrix A corresponding to xi+ and xi- are linearly dependent. Hence they
cannot both enter the basis at the same time. This means that only one of the variables xi+ and xi- can
take a nonzero value; the nonbasic one is necessarily zero.
16.7
a. From the given information, we have the 4 6 canonical tableau

1 0 0 1/2 1
0 1 0 0 2

0 0 1 0

3
0 1 0 0 1 6

Explanations:
The given vector x indicates that A is 3 × 5.
In the above tableau, we assume that the basis is [a1 , a3 , a4 ], in this order. Other permutations of
orders will result in interchanging rows among the first three rows of the tableau.
The fifth column represents the coordinates of a5 with respect to the basis [a1 , a3 , a4 ]. Because
[-2, 0, 0, 0, 4]> lies in the nullspace of A, we deduce that -2a1 + 4a5 = 0, which can be rewritten
as a5 = (1/2)a1 + 0a3 + 0a4 , and hence the coordinate vector is [1/2, 0, 0]> .

b. Let d0 = [-2, 0, 0, 0, 4]> . Then, Ad0 = 0. Therefore, the vector x0 = x + αd0 also satisfies Ax0 = b. Now,
x0 = x + αd0 = [1 - 2α, 0, 2, 3, 4α]> . For x0 to be feasible, we must have α ≤ 1/2. Moreover, the objective
function value of x0 is c> x0 = z0 + r5 x05 = 6 - 4α, where z0 is the objective function value of x. So, if we
pick any α ∈ (0, 1/2], then x0 will be a feasible solution with objective function value strictly less than 6.
For example, with α = 1/2, x0 = [0, 0, 2, 3, 2]> is such a point. (We could also have obtained this solution by
pivoting about the element (1, 5) in the tableau of part a.)
16.8
a. The BFS is [6, 0, 7, 5, 0]> , with objective function value 8.
b. r = [0, -4, 0, 0, 4]> .
c. Yes, because the 5th column has all negative entries.
d. We pivot about the element (3, 2). The new canonical tableau is:

0 0 -1/3 1 0 8/3
1 0 -2/3 0 0 4/3
0 1 1/3 0 -1 7/3
0 0 4/3 0 0 4/3

e. First note that, based on the 5th column, the following point is feasible for any ε ≥ 0:

x = [6, 0, 7, 5, 0]> + ε[2, 0, 3, 1, 1]> .

Note that x5 = ε. Now, any solution of this form has an objective function value given by

z = z0 + r5 ε,

where z0 = 8 and r5 = 4 (from parts a and b). If z = 100, then ε = 23. Hence, the following point has
objective function value z = 100:

x = [6, 0, 7, 5, 0]> + 23 [2, 0, 3, 1, 1]> = [52, 0, 76, 28, 23]> .

f. The entries of the 2nd column of the given canonical tableau are the coordinates of a2 with respect to the
basis {a4 , a1 , a3 }. Therefore,

a2 = a4 + 2a1 + 3a3 ,

and hence the vector [2, -1, 3, 1, 0]> lies in the nullspace of A. Similarly, using the entries of the 5th
column, we deduce that [2, 0, 3, 1, 1]> also lies in the nullspace of A. These two vectors are linearly
independent. Because A has rank 3, the dimension of the nullspace of A is 2. Hence, these two vectors form
a basis for the nullspace of A.
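The computation in part e can be replayed directly (a cross-check, not part of the original solution):

```python
x0 = [6, 0, 7, 5, 0]       # BFS from part a
d = [2, 0, 3, 1, 1]        # feasible direction from the 5th column
eps = 23
x = [a + eps*b for a, b in zip(x0, d)]
assert x == [52, 0, 76, 28, 23]   # the point given in part e
assert all(v >= 0 for v in x)     # the point remains feasible in sign
z = 8 + 4*eps                     # z0 + r5*eps with z0 = 8, r5 = 4
assert z == 100
```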
16.9
a. We can convert the problem to standard form by multiplying the objective function by 1 and introducing
a surplus variable x3 . We obtain:

minimize x1 + 2x2
subject to x2 - x3 = 1
x1 , x2 , x3 ≥ 0.

Note that we do not need to deal with the absence of the constraint x2 ≥ 0 in the original problem, since
x2 ≥ 1 implies that x2 ≥ 0 also. Had we used the rule of writing x2 = u - v with u, v ≥ 0, we obtain the
standard form problem:

minimize x1 + 2u - 2v
subject to u - v - x3 = 1
x1 , u, v, x3 ≥ 0.

For phase I, we introduce an artificial variable x4 and form the initial tableau:

0 1 -1 1 1
0 0 0 1 0

Pivoting about element (1, 4), we obtain the canonical tableau:

0 1 -1 1 1
0 -1 1 0 -1

Pivoting now about element (1, 2), we obtain the next canonical tableau:

0 1 -1 1 1
0 0 0 1 0
Hence, phase I terminates, and we use x2 as our initial basic variable for phase II.
For phase II, we set up the problem tableau as:

0 1 -1 1
1 2 0 0

Pivoting about element (1, 2), we obtain

0 1 -1 1
1 0 2 -2
Hence, the BFS [0, 1, 0]> is optimal, with objective function value 2. Therefore, the optimal solution to the
original problem is [0, 1]> with objective function value 2.
16.10
a. [1, 0]>
b. [1, -1, 1]

Note that the answer is not

1 -1 1
0 -1 1

which is the canonical tableau.
c. We choose q = 2 because the only negative RCC value is r2 . However, y1,2 < 0. Therefore, the simplex
algorithm terminates with the condition that the problem is unbounded.
d. Any vector of the form [x1 , x1 - 1]> , x1 ≥ 1, is feasible. Therefore the first component can take arbitrarily
large (positive) values. Hence, the objective function, which is -x1 , can take arbitrarily negative values.
16.11
The problem in standard form is:
minimize x1 + x2
subject to x1 + 2x2 - x3 = 3
2x1 + x2 - x4 = 3
x1 , x2 , x3 , x4 0.
We will use x1 and x2 as initial basic variables. Therefore, Phase I is not needed, and we immediately
proceed with Phase II. The tableau for the problem is:
a1 a2 a3 a4 b
1 2 -1 0 3
2 1 0 -1 3
c> 1 1 0 0 0
We compute

λ> = cB> B⁻¹ = [1, 1] [1 2; 2 1]⁻¹ = [1/3, 1/3],

rD> = cD> - λ> D = [0, 0] - [1/3, 1/3] [-1 0; 0 -1] = [1/3, 1/3] = [r3 , r4 ].

The reduced cost coefficients are all nonnegative. Hence, the solution to the standard form problem is
[1, 1, 0, 0]> . Therefore, the solution to the original problem is [1, 1]> , and the corresponding cost is 2.
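The λ and rD computations can be reproduced with exact fractions. The sketch below is a cross-check; the surplus-variable columns a3 = [-1, 0]> and a4 = [0, -1]> are those of the standard form above.

```python
from fractions import Fraction as F

def inv2(M):
    (a, b), (c, d) = M
    det = a*d - b*c
    return [[d/det, -b/det], [-c/det, a/det]]

B = [[F(1), F(2)], [F(2), F(1)]]            # basic columns a1, a2
cB = [1, 1]
Binv = inv2(B)
lam = [sum(c*Binv[i][j] for i, c in enumerate(cB)) for j in range(2)]
D = [[-1, 0], [0, -1]]                      # surplus-variable columns a3, a4
rD = [0 - sum(l*D[i][j] for i, l in enumerate(lam)) for j in range(2)]
# lambda^T = [1/3, 1/3] and r3 = r4 = 1/3 >= 0, so [1, 1] is optimal
```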
16.12
a. The problem in standard form is:
minimize 4x1 + 3x2
subject to 5x1 + x2 - x3 = 11
2x1 + x2 - x4 = 8
x1 + 2x2 - x5 = 7
x1 , . . . , x5 0.
We do not have an apparent basic feasible solution. Therefore, we will need to use the two phase method.
Phase I: We introduce artificial variables x6 , x7 , x8 and form the following tableau.
a1 a2 a3 a4 a5 a6 a7 a8 b
5 1 -1 0 0 1 0 0 11
2 1 0 -1 0 0 1 0 8
1 2 0 0 -1 0 0 1 7
c> 0 0 0 0 0 1 1 1 0
We then form the following revised tableau:
Variable B 1 y0
x6 1 0 0 11
x7 0 1 0 8
x8 0 0 1 7
We compute:
λ> = [1, 1, 1]
rD> = [r1 , r2 , r3 , r4 , r5 ] = [-8, -4, 1, 1, 1].
We form the augmented revised tableau by introducing y 1 = B 1 a1 = a1 :

Variable B 1 y0 y1
x6 1 0 0 11 5
x7 0 1 0 8 2
x8 0 0 1 7 1
We now pivot about the first component of y 1 to get

Variable B⁻¹ y0
x1 1/5 0 0 11/5
x7 -2/5 1 0 18/5
x8 -1/5 0 1 24/5
We compute
λ> = [-3/5, 1, 1]
rD> = [r2 , r3 , r4 , r5 , r6 ] = [-12/5, -3/5, 1, 1, 8/5].
We bring y 2 = B 1 a2 into the basis to get

Variable B⁻¹ y0 y2
x1 1/5 0 0 11/5 1/5
x7 -2/5 1 0 18/5 3/5
x8 -1/5 0 1 24/5 9/5
We pivot about the third component of y 2 to get

Variable B⁻¹ y0
x1 2/9 0 -1/9 5/3
x7 -1/3 1 -1/3 2
x2 -1/9 0 5/9 8/3
We compute
λ> = [-1/3, 1, -1/3]
rD> = [r3 , r4 , r5 , r6 , r8 ] = [-1/3, 1, -1/3, 4/3, 4/3].
We bring y 3 = B 1 a3 into the basis to get

Variable B⁻¹ y0 y3
x1 2/9 0 -1/9 5/3 -2/9
x7 -1/3 1 -1/3 2 1/3
x2 -1/9 0 5/9 8/3 1/9
We pivot about the second component of y 3 to obtain

Variable B⁻¹ y0
x1 0 2/3 -1/3 3
x3 -1 3 -1 6
x2 0 -1/3 2/3 2
We compute
λ> = [0, 0, 0]
rD> = [r4 , r5 , r6 , r7 , r8 ] = [0, 0, 1, 1, 1] ≥ 0> .
Thus, Phase I is complete, and the initial basic feasible solution is [3, 2, 6, 0, 0]> .
Phase II
We form the tableau for the original problem:

a1 a2 a3 a4 a5 b
5 1 -1 0 0 11
2 1 0 -1 0 8
1 2 0 0 -1 7
c> 4 3 0 0 0 0
The initial revised tableau for Phase II is the final revised tableau for Phase I. We compute
λ> = [0, 5/3, 2/3]
rD> = [r4 , r5 ] = [5/3, 2/3] > 0> .
Hence, the optimal solution to the original problem is [3, 2]> .
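The final phase II computation can be checked with exact fractions. The sketch below assumes the final B⁻¹ and the basis ordering (x1, x3, x2) from the last revised tableau of phase I; the signs in the B⁻¹ literal are assumptions recovered from the pivot arithmetic.

```python
from fractions import Fraction as F

Binv = [[F(0), F(2, 3), F(-1, 3)],
        [F(-1), F(3), F(-1)],
        [F(0), F(-1, 3), F(2, 3)]]   # assumed final B^{-1}, basis (x1, x3, x2)
cB = [4, 0, 3]                        # phase-II costs of x1, x3, x2
lam = [sum(c*Binv[i][j] for i, c in enumerate(cB)) for j in range(3)]
a4, a5 = [0, -1, 0], [0, 0, -1]       # surplus-variable columns
r4 = 0 - sum(l*v for l, v in zip(lam, a4))
r5 = 0 - sum(l*v for l, v in zip(lam, a5))
# lambda^T = [0, 5/3, 2/3]; r4, r5 > 0 confirms [3, 2] is optimal
```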
b. The problem in standard form is:
minimize -6x1 - 4x2 - 7x3 - 5x4
subject to x1 + 2x2 + x3 + 2x4 + x5 = 20
6x1 + 5x2 + 3x3 + 2x4 + x6 = 100
3x1 + 4x2 + 9x3 + 12x4 + x7 = 75
x1 , . . . , x7 ≥ 0.
We have an apparent basic feasible solution: [0, 0, 0, 0, 20, 100, 75]> , corresponding to B = I 3 . We form the
revised tableau corresponding to this basic feasible solution:

Variable B 1 y0
x5 1 0 0 20
x6 0 1 0 100
x7 0 0 1 75
We compute
λ> = [0, 0, 0]
rD> = [r1 , r2 , r3 , r4 ] = [-6, -4, -7, -5].
We bring y3 = B⁻¹a3 = a3 into the basis to obtain

Variable B 1 y0 y3
x5 1 0 0 20 1
x6 0 1 0 100 3
x7 0 0 1 75 9
We pivot about the third component of y 3 to get

Variable B⁻¹ y0
x5 1 0 -1/9 35/3
x6 0 1 -1/3 75
x3 0 0 1/9 25/3
We compute
λ> = [0, 0, -7/9]
rD> = [r1 , r2 , r4 , r7 ] = [-11/3, -8/9, 13/3, 7/9].
We bring y 1 = B 1 a1 into the basis to obtain

Variable B⁻¹ y0 y1
x5 1 0 -1/9 35/3 2/3
x6 0 1 -1/3 75 5
x3 0 0 1/9 25/3 1/3

We pivot about the second component of y1 to obtain

Variable B⁻¹ y0
x5 1 -2/15 -1/15 5/3
x1 0 1/5 -1/15 15
x3 0 -1/15 2/15 10/3

We compute

λ> = [0, -11/15, -8/15]

rD> = [r2 , r4 , r6 , r7 ] = [27/15, 43/15, 11/15, 8/15] > 0> .

The optimal solution to the original problem is therefore [15, 0, 10/3, 0]> .
16.13
a. By inspection of r > , we conclude that the basic variables are x1 , x3 , x4 , and the basis matrix is

0 0 1
B = 1 0 0 .

0 1 0

Since r> ≥ 0> , the basic feasible solution corresponding to the basis B is optimal. This optimal basic
feasible solution is [8, 0, 9, 7]> .
b. An optimal solution to the dual is given by

λ> = cB> B⁻¹,

where cB> = [6, 4, 5], and

B⁻¹ = [0 1 0; 0 0 1; 1 0 0].

We obtain λ> = [5, 6, 4].

c. We have rD> = cD> - λ> D, where rD> = [1], cD> = [c2 ], λ> = [5, 6, 4], and D = [2, 1, 3]> . We get
1 = c2 - 10 - 6 - 12, which yields c2 = 29.
16.14
a. There are two basic feasible solutions: [1, 0]> and [0, 2]> .
b. The feasible set in R2 for this problem is the line segment joining the two basic feasible solutions [1, 0]>
and [0, 2]> . Therefore, if the problem has an optimal feasible solution that is not basic, then all points in
the feasible set are optimal. For this, we need
" # " #
c1 2
= ,
c2 1

where R.
c. Since all basic feasible solutions are optimal, the relative cost coefficients are all zero.
16.15
a. 2 < 0, 0, 0, and anything.
b. 2 0, = 7, and anything.
c. 2 < 0, > 0, either 0 or 5/ 4/, and anything.
16.16
a. The value of must be 0, because the objective function value is 0 (lower right corner), and is the
value of an artificial variable.
The value of must be 0, because it is the RCC value corresponding to a basic column.
The value of must be 2, because it must be a positive value. Otherwise, there is a feasible solution to
the artificial problem with objective function value smaller than 0, which is impossible.
The value of must be 0, because we must be able to bring the fourth column into the basis without
changing the objective function value.
b. The given linear programming problem does indeed have a feasible solution: [0, 5, 6, 0]> . We obtain
this by noticing that the right-most column is a linear combination of the second and third columns, with
coefficients 5 and 6.
16.17
First, we convert the inequality constraint Ax ≤ b into standard form. To do this, we introduce a vector
w ∈ Rm of slack variables to obtain the following equivalent constraint:

[A, I] [x; w] = b,  w ≥ 0.

Next, we introduce variables u, v ∈ Rn to replace the free variable x by u - v. We then obtain the following
equivalent constraint:

[A, -A, I] [u; v; w] = b,  u, v, w ≥ 0.

This form of the constraint is now in standard form. So we can now use Phase I of the simplex method to
implement an algorithm that finds vectors u, v, and w satisfying the above constraint, if such vectors exist, or
declares that none exist. If they exist, we output x = u - v; otherwise, we declare that no x exists such
that Ax ≤ b. By construction, this algorithm is guaranteed to behave in the way specified by the question.
16.18
a. We form the tableau for the problem:
1 0 0 1/4 -8 -1 9 0
0 1 0 1/2 -12 -1/2 3 0
0 0 1 0 0 1 0 1
0 0 0 -3/4 20 -1/2 6 0

The above tableau is already in canonical form, and therefore we can proceed with the simplex procedure.
We first pivot about the (1, 4)th element, to get

4 0 0 1 -32 -4 36 0
-2 1 0 0 4 3/2 -15 0
0 0 1 0 0 1 0 1
3 0 0 0 -4 -7/2 33 0

Pivoting about the (2, 5)th element, we get

-12 8 0 1 0 8 -84 0
-1/2 1/4 0 0 1 3/8 -15/4 0
0 0 1 0 0 1 0 1
1 1 0 0 0 -2 18 0

Pivoting about the (1, 6)th element, we get

-3/2 1 0 1/8 0 1 -21/2 0
1/16 -1/8 0 -3/64 1 0 3/16 0
3/2 -1 1 -1/8 0 0 21/2 1
-2 3 0 1/4 0 0 -3 0

Pivoting about the (2, 7)th element, we get

2 -6 0 -5/2 56 1 0 0
1/3 -2/3 0 -1/4 16/3 0 1 0
-2 6 1 5/2 -56 0 0 1
-1 1 0 -1/2 16 0 0 0

Pivoting about the (1, 1)th element, we get

1 -3 0 -5/4 28 1/2 0 0
0 1/3 0 1/6 -4 -1/6 1 0
0 0 1 0 0 1 0 1
0 -2 0 -7/4 44 1/2 0 0

Pivoting about the (2, 2)th element, we get

1 0 0 1/4 -8 -1 9 0
0 1 0 1/2 -12 -1/2 3 0
0 0 1 0 0 1 0 1
0 0 0 -3/4 20 -1/2 6 0

which is identical to the initial tableau. Therefore, cycling occurs.
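The cycle in part a can be reproduced mechanically. The Python sketch below (a cross-check, not part of the manual) applies the six pivots with exact Fraction arithmetic; the minus signs in the tableau literal are assumptions recovered from the pivot sequence.

```python
from fractions import Fraction as F

def pivot(T, p, q):
    """Return the tableau after pivoting about entry (p, q) (0-indexed)."""
    piv = T[p][q]
    new_p = [v / piv for v in T[p]]
    return [new_p if i == p else [v - row[q]*w for v, w in zip(row, new_p)]
            for i, row in enumerate(T)]

# Initial canonical tableau (signs assumed as discussed above)
T0 = [[F(x) for x in row] for row in [
    [1, 0, 0, F(1, 4), -8, -1, 9, 0],
    [0, 1, 0, F(1, 2), -12, F(-1, 2), 3, 0],
    [0, 0, 1, 0, 0, 1, 0, 1],
    [0, 0, 0, F(-3, 4), 20, F(-1, 2), 6, 0]]]
T = T0
for p, q in [(0, 3), (1, 4), (0, 5), (1, 6), (0, 0), (1, 1)]:  # 0-indexed pivots
    T = pivot(T, p, q)
assert T == T0   # after six pivots the tableau repeats: cycling
```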
b. We start with the initial tableau of part a, and pivot about the (1, 4)th element to obtain

4 0 0 1 -32 -4 36 0
-2 1 0 0 4 3/2 -15 0
0 0 1 0 0 1 0 1
3 0 0 0 -4 -7/2 33 0

Pivoting about the (2, 5)th element, we get

-12 8 0 1 0 8 -84 0
-1/2 1/4 0 0 1 3/8 -15/4 0
0 0 1 0 0 1 0 1
1 1 0 0 0 -2 18 0

Pivoting about the (1, 6)th element, we get

-3/2 1 0 1/8 0 1 -21/2 0
1/16 -1/8 0 -3/64 1 0 3/16 0
3/2 -1 1 -1/8 0 0 21/2 1
-2 3 0 1/4 0 0 -3 0

Pivoting about the (2, 1)th element, we get

0 -2 0 -1 24 1 -6 0
1 -2 0 -3/4 16 0 3 0
0 2 1 1 -24 0 6 1
0 -1 0 -5/4 32 0 3 0

Pivoting about the (3, 2)th element, we get

0 0 1 0 0 1 0 1
1 0 1 1/4 -8 0 9 1
0 1 1/2 1/2 -12 0 3 1/2
0 0 1/2 -3/4 20 0 6 1/2

Pivoting about the (3, 4)th element, we get

0 0 1 0 0 1 0 1
1 -1/2 3/4 0 -2 0 15/2 3/4
0 2 1 1 -24 0 6 1
0 3/2 5/4 0 2 0 21/2 5/4

The reduced cost coefficients are all nonnegative. Hence, the optimal solution to the problem is
[3/4, 0, 0, 1, 0, 1, 0]> . The corresponding optimal cost is -5/4.
16.19
a. We have

Ad(0) = A(x(1) - x(0) )/α0 = (b - b)/α0 = 0.

Hence, d(0) ∈ N (A).

b. From our discussion of moving from one BFS to an adjacent BFS, we deduce that

d(0) = [-yq ; e(q-m) ].

In other words, the first m components of d(0) are -y1q , . . . , -ymq , and all the other components are 0 except
the qth component, which is 1.
16.20
The following is a MATLAB function that implements the simplex algorithm.
function [x,v]=simplex(c,A,b,v,options)
% SIMPLEX(c,A,b,v);
% SIMPLEX(c,A,b,v,options);
%
% x = SIMPLEX(c,A,b,v);
% x = SIMPLEX(c,A,b,v,options);
%
% [x,v] = SIMPLEX(c,A,b,v);
% [x,v] = SIMPLEX(c,A,b,v,options);
%
%SIMPLEX(c,A,b,v) solves the following linear program using the
%Simplex Method:
% min cx subject to Ax=b, x>=0,
%where [A b] is in canonical form, and v is the vector of indices of
%basic columns. Specifically, the v(i)-th column of A is the i-th
%standard basis vector.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(5) specifies how the pivot element is selected;
% 0=choose the most negative relative cost coefficient;
% 1=use Bland's rule.

if nargin ~= 5
options = [];
if nargin ~= 4
disp('Wrong number of arguments.');
return;
end
end

format compact;
%format short e;

options = foptions(options);
print = options(1);

n=length(c);
m=length(b);

cB=c(v(:));

cost = -cB'*b;
r = c' - cB'*A; % relative cost coefficients (row vector)

tabl=[A b;r cost];

if print,
disp(' ');
disp('Initial tableau:');
disp(tabl);
end %if

while ones(1,n)*(r' >= zeros(n,1)) ~= n

if options(5) == 0;
[r_q,q] = min(r);
else
%Bland's rule
q=1;
while r(q) >= 0
q=q+1;
end
end %if

min_ratio = inf;
p=0;
for i=1:m,
if tabl(i,q)>0
if tabl(i,n+1)/tabl(i,q) < min_ratio
min_ratio = tabl(i,n+1)/tabl(i,q);
p = i;
end %if
end %if
end %for
if p == 0
disp('Problem unbounded');
break;
end %if

tabl=pivot(tabl,p,q);

if print,
disp('Pivot point:');
disp([p,q]);
disp('New tableau:');
disp(tabl);
end %if

v(p) = q;
r = tabl(m+1,1:n);
end %while

x=zeros(n,1);
x(v(:))=tabl(1:m,n+1);

The above function makes use of the following function that implements pivoting:
function Mnew=pivot(M,p,q)
%Mnew=pivot(M,p,q)
%Returns the matrix Mnew resulting from pivoting about the
%(p,q)th element of the given matrix M.

for i=1:size(M,1),
if i==p
Mnew(p,:)=M(p,:)/M(p,q);
else
Mnew(i,:)=M(i,:)-M(p,:)*(M(i,q)/M(p,q));
end %if
end %for
%----------------------------------------------------------------

We now apply the simplex algorithm to the problem in Example 16.2, as follows:
>> A=[1 0 1 0 0; 0 1 0 1 0; 1 1 0 0 1];
>> b=[4;6;8];
>> c=[-2;-5;0;0;0];
>> v=[3;4;5];
>> options(1)=1;
>> [x,v]=simplex(c,A,b,v,options);

Initial Tableau:
1 0 1 0 0 4
0 1 0 1 0 6
1 1 0 0 1 8
-2 -5 0 0 0 0
Pivot point:
2 2
New tableau:
1 0 1 0 0 4
0 1 0 1 0 6
1 0 0 -1 1 2
-2 0 0 5 0 30
Pivot point:
3 1
New tableau:
0 0 1 1 -1 2
0 1 0 1 0 6
1 0 0 -1 1 2
0 0 0 3 2 34
>> disp(x);
2 6 2 0 0

>> disp(v);
3 2 1

As indicated above, the solution to the problem in standard form is [2, 6, 2, 0, 0]> , and the objective
function value is 34. The optimal cost for the original maximization problem is 34.
16.21
The following is a MATLAB routine that implements the two-phase simplex method, using the MATLAB
function from Exercise 16.20.
function [x,v]=tpsimplex(c,A,b,options)
% TPSIMPLEX(c,A,b);
% TPSIMPLEX(c,A,b,options);
%
% x = TPSIMPLEX(c,A,b);
% x = TPSIMPLEX(c,A,b,options);
%
% [x,v] = TPSIMPLEX(c,A,b);
% [x,v] = TPSIMPLEX(c,A,b,options);
%
%TPSIMPLEX(c,A,b) solves the following linear program using the
%two-phase simplex method:
% min cx subject to Ax=b, x>=0.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(5) specifies how the pivot element is selected;
% 0=choose the most negative relative cost coefficient;
% 1=use Bland's rule.

if nargin ~= 4
options = [];
if nargin ~= 3
disp('Wrong number of arguments.');
return;
end
end

clc;
format compact;
%format short e;

options = foptions(options);
print = options(1);

n=length(c);
m=length(b);

%Phase I
if print,
disp(' ');
disp('Phase I');
disp('-------');
end

v=n*ones(m,1);
for i=1:m
v(i)=v(i)+i;

end

[x,v]=simplex([zeros(n,1);ones(m,1)],[A eye(m)],b,v,options);

if all(v<=n),
%Phase II
if print
disp(' ');
disp('Phase II');
disp('--------');
disp('Basic columns:')
disp(v)
end

%Convert [A b] into canonical augmented matrix

Binv=inv(A(:,[v]));
A=Binv*A;
b=Binv*b;

[x,v]=simplex(c,A,b,v,options);

if print
disp(' ');
disp('Final solution:');
disp(x);
end
else
%assumes nondegeneracy
disp('Terminating: problem has no feasible solution.');
end
%----------------------------------------------------------------
We now apply the above MATLAB routine to the problem in Example 16.5, as follows:
>> A=[1 1 1 0; 5 3 0 -1];
>> b=[4;8];
>> c=[-3;-5;0;0];
>> options(1)=1;
>> format rat;
>> tpsimplex(c,A,b,options);

Phase I
-------

Initial Tableau:
1 1 1 0 1 0 4
5 3 0 -1 0 1 8
-6 -4 -1 1 0 0 -12
Pivot point:
2 1
New tableau:
0 2/5 1 1/5 1 -1/5 12/5
1 3/5 0 -1/5 0 1/5 8/5
0 -2/5 -1 -1/5 0 6/5 -12/5
Pivot point:
1 3
New tableau:
0 2/5 1 1/5 1 -1/5 12/5
1 3/5 0 -1/5 0 1/5 8/5
0 * 0 * 1 1 *

Pivot point:
2 2
New tableau:
-2/3 0 1 1/3 1 -1/3 4/3
5/3 1 0 -1/3 0 1/3 8/3
* 0 0 * 1 1 *
Pivot point:
1 4
New tableau:
-2 0 3 1 3 -1 4
1 1 1 0 1 0 4
* 0 * 0 1 1 *

Basic columns:
4 2

Phase II
--------

Initial Tableau:
-2 0 3 1 4
1 1 1 0 4
2 0 5 0 20

Final solution:
0 4 0 4

16.22
The following is a MATLAB function that implements the revised simplex algorithm.
function [x,v,Binv]=revsimp(c,A,b,v,Binv,options)
% REVSIMP(c,A,b,v,Binv);
% REVSIMP(c,A,b,v,Binv,options);
%
% x = REVSIMP(c,A,b,v,Binv);
% x = REVSIMP(c,A,b,v,Binv,options);
%
% [x,v,Binv] = REVSIMP(c,A,b,v,Binv);
% [x,v,Binv] = REVSIMP(c,A,b,v,Binv,options);
%
%REVSIMP(c,A,b,v,Binv) solves the following linear program using the
%revised simplex method:
% min c'x subject to Ax=b, x>=0,
%where v is the vector of indices of basic columns, and Binv is the
%inverse of the basis matrix. Specifically, the v(i)-th column of
%A is the i-th column of the basis matrix.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(5) specifies how the pivot element is selected;
% 0=choose the most negative relative cost coefficient;
% 1=use Bland's rule.

if nargin ~= 6
options = [];
if nargin ~= 5
disp('Wrong number of arguments.');
return;
end
end

format compact;
%format short e;

options = foptions(options);
print = options(1);

n=length(c);
m=length(b);

cB=c(v(:));
y0 = Binv*b;
lambdaT=cB'*Binv;
r = c'-lambdaT*A; %row vector of relative cost coefficients

if print,
disp(' ');
disp('Initial revised tableau [v B^(-1) y0]:');
disp([v Binv y0]);
disp('Relative cost coefficients:');
disp(r);
end %if

while ones(1,n)*(r' >= zeros(n,1)) ~= n

if options(5) == 0;
[r_q,q] = min(r);
else
%Bland's rule
q=1;
while r(q) >= 0
q=q+1;
end
end %if

yq = Binv*A(:,q);
min_ratio = inf;
p=0;
for i=1:m,
if yq(i)>0
if y0(i)/yq(i) < min_ratio
min_ratio = y0(i)/yq(i);
p = i;
end %if
end %if
end %for
if p == 0
disp('Problem unbounded');
break;
end %if

if print,
disp('Augmented revised tableau [v B^(-1) y0 yq]:')
disp([v Binv y0 yq]);
disp('(p,q):');
disp([p,q]);
end

augrevtabl=pivot([Binv y0 yq],p,m+2);

Binv=augrevtabl(:,1:m);
y0=augrevtabl(:,m+1);

v(p) = q;

cB=c(v(:));
lambdaT=cB'*Binv;
r = c'-lambdaT*A; %row vector of relative cost coefficients

if print,
disp('New revised tableau [v B^(-1) y0]:');
disp([v Binv y0]);
disp('Relative cost coefficients:');
disp(r);
end %if

end %while

x=zeros(n,1);
x(v(:))=y0;

The function makes use of the pivoting function in Exercise 16.20.

We now apply the revised simplex algorithm to the problem in Example 16.2, as follows:
>> A=[1 0 1 0 0; 0 1 0 1 0; 1 1 0 0 1];
>> b=[4;6;8];
>> c=[-2;-5;0;0;0];
>> v=[3;4;5];
>> Binv=eye(3);
>> options(1)=1;
>> [x,v,Binv]=revsimp(c,A,b,v,Binv,options);

Initial revised tableau [v B^(-1) y0]:

3 1 0 0 4
4 0 1 0 6
5 0 0 1 8
Relative cost coefficients:
-2 -5 0 0 0
Augmented revised tableau [v B^(-1) y0 yq]:
3 1 0 0 4 0
4 0 1 0 6 1
5 0 0 1 8 1
(p,q):
2 2
New revised tableau [v B^(-1) y0]:
3 1 0 0 4
2 0 1 0 6
5 0 -1 1 2
Relative cost coefficients:
-2 0 0 5 0
Augmented revised tableau [v B^(-1) y0 yq]:
3 1 0 0 4 1
2 0 1 0 6 0
5 0 -1 1 2 1
(p,q):
3 1
New revised tableau [v B^(-1) y0]:
3 1 1 -1 2
2 0 1 0 6
1 0 -1 1 2

Relative cost coefficients:
0 0 0 3 2
>> disp(x);
2 6 2 0 0
>> disp(v);
3 2 1
>> disp(Binv);
1 1 -1
0 1 0
0 -1 1

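The quantities printed at the start of the transcript follow directly from the revised-simplex formulas lambda^T = c_B^T B^(-1) and r^T = c^T - lambda^T A. A pure-Python sketch recomputing them for the Example 16.2 data (illustrative only, not the manual's MATLAB):

```python
# Recompute the first revised-simplex quantities from the transcript:
# lambda^T = c_B^T B^(-1), then r^T = c^T - lambda^T A.
def row_times_matrix(v, M):
    # row vector v times matrix M
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

A = [[1, 0, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [1, 1, 0, 0, 1]]
c = [-2, -5, 0, 0, 0]
basis = [2, 3, 4]                  # columns 3, 4, 5 in 1-indexed MATLAB terms
Binv = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

cB = [c[j] for j in basis]         # [0, 0, 0]
lam = row_times_matrix(cB, Binv)   # lambda^T = [0, 0, 0]
r = [cj - z for cj, z in zip(c, row_times_matrix(lam, A))]
```

With the slack basis, `r` equals `c` itself, matching the first "Relative cost coefficients" line of the transcript.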
16.23
The following is a MATLAB routine that implements the two-phase revised simplex method, using the
MATLAB function from Exercise 16.22.
function [x,v]=tprevsimp(c,A,b,options)
% TPREVSIMP(c,A,b);
% TPREVSIMP(c,A,b,options);
%
% x = TPREVSIMP(c,A,b);
% x = TPREVSIMP(c,A,b,options);
%
% [x,v] = TPREVSIMP(c,A,b);
% [x,v] = TPREVSIMP(c,A,b,options);
%
%TPREVSIMP(c,A,b) solves the following linear program using the
%two-phase revised simplex method:
% min c'x subject to Ax=b, x>=0.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(5) specifies how the pivot element is selected;
% 0=choose the most negative relative cost coefficient;
% 1=use Bland's rule.

if nargin ~= 4
options = [];
if nargin ~= 3
disp('Wrong number of arguments.');
return;
end
end

clc;
format compact;
%format short e;

options = foptions(options);
print = options(1);

n=length(c);
m=length(b);

%Phase I
if print,
disp(' ');
disp('Phase I');
disp('-------');
end

v=n*ones(m,1);
for i=1:m
v(i)=v(i)+i;
end

[x,v,Binv]=revsimp([zeros(n,1);ones(m,1)],[A eye(m)],b,v,eye(m),options);

%Phase II
if print
disp(' ');
disp('Phase II');
disp('--------');
end

[x,v,Binv]=revsimp(c,A,b,v,Binv,options);

if print
disp(' ');
disp('Final solution:');
disp(x);
end
%----------------------------------------------------------------
We now apply the above MATLAB routine to the problem in Example 16.5, as follows:
>> A=[4 2 -1 0; 1 4 0 -1];
>> b=[12;6];
>> c=[2;3;0;0];
>> options(1)=1;
>> format rat;
>> tprevsimp(c,A,b,options);

Phase I
-------

Initial revised tableau [v B^(-1) y0]:

5 1 0 12
6 0 1 6
Relative cost coefficients:
-5 -6 1 1 0 0
Augmented revised tableau [v B^(-1) y0 yq]:
5 1 0 12 2
6 0 1 6 4
(p,q):
2 2
New revised tableau [v B^(-1) y0]:
5 1 -1/2 9
2 0 1/4 3/2
Relative cost coefficients:
-7/2 0 1 -1/2 0 3/2
Augmented revised tableau [v B^(-1) y0 yq]:
5 1 -1/2 9 7/2
2 0 1/4 3/2 1/4
(p,q):
1 1
New revised tableau [v B^(-1) y0]:
1 2/7 -1/7 18/7
2 -1/14 2/7 6/7

Relative cost coefficients:
0 0 0 0 1 1

Phase II
--------

Initial revised tableau [v B^(-1) y0]:

1 2/7 -1/7 18/7
2 -1/14 2/7 6/7
Relative cost coefficients:
* 0 5/14 4/7

Final solution:
18/7 6/7 0 0

17. Duality
17.1
Since x and λ are feasible, we have Ax ≥ b, x ≥ 0, λ^T A ≤ c^T, and λ ≥ 0. Postmultiplying both sides of
λ^T A ≤ c^T by x ≥ 0 yields
λ^T Ax ≤ c^T x.
Since Ax ≥ b and λ^T ≥ 0^T, we have λ^T Ax ≥ λ^T b. Hence, λ^T b ≤ c^T x.
17.2
The primal problem is:

minimize e_n^T p
subject to Gp ≥ P e_m
p ≥ 0,

where G = [g_{i,j}], e_n = [1, . . . , 1]^T (with n components), and p = [p_1, . . . , p_n]^T. The dual of the problem is
(using symmetric duality):

maximize P λ^T e_m
subject to λ^T G ≤ e_n^T
λ ≥ 0.

17.3
a. We first transform the problem into standard form:

minimize -2x_1 - 3x_2
subject to x_1 + 2x_2 + x_3 = 4
2x_1 + x_2 + x_4 = 5
x_1, x_2, x_3, x_4 ≥ 0.

The initial tableau is:

 1   2   1   0   4
 2   1   0   1   5
-2  -3   0   0   0

We now pivot about the (1, 2)th element to get:

 1/2   1   1/2   0   2
 3/2   0  -1/2   1   3
-1/2   0   3/2   0   6

Pivoting now about the (2, 1)th element gives:

 0   1   2/3  -1/3   1
 1   0  -1/3   2/3   2
 0   0   4/3   1/3   7

Thus, the solution to the standard form problem is x_1 = 2, x_2 = 1, x_3 = 0, x_4 = 0. The solution to the
original problem is x_1 = 2, x_2 = 1.
b. The dual to the standard form problem is

maximize 4λ_1 + 5λ_2
subject to λ_1 + 2λ_2 ≤ -2
2λ_1 + λ_2 ≤ -3
λ_1, λ_2 ≤ 0.

From the discussion before Example 17.6, it follows that the solution to the dual is λ^T = c_I^T - r_I^T =
[-4/3, -1/3].
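The two pivots of part a can be replayed with exact fractions in a few lines of Python; this is an independent check of the tableau arithmetic, not the book's code:

```python
from fractions import Fraction

def pivot(T, p, q):
    """Full-tableau pivot about the 0-indexed entry (p, q)."""
    T = [[Fraction(v) for v in row] for row in T]
    T[p] = [v / T[p][q] for v in T[p]]
    for i in range(len(T)):
        if i != p and T[i][q] != 0:
            f = T[i][q]
            T[i] = [a - f * b for a, b in zip(T[i], T[p])]
    return T

T = [[1, 2, 1, 0, 4],
     [2, 1, 0, 1, 5],
     [-2, -3, 0, 0, 0]]
T = pivot(T, 0, 1)   # the (1, 2)th element, 1-indexed
T = pivot(T, 1, 0)   # the (2, 1)th element, 1-indexed
```

The final rows match the last tableau above, including the reduced costs 4/3 and 1/3.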
17.4
The dual problem is

maximize 11λ_1 + 8λ_2 + 7λ_3
subject to 5λ_1 + 2λ_2 + λ_3 ≤ 4
λ_1 + λ_2 + 2λ_3 ≤ 3
λ_1, λ_2, λ_3 ≥ 0.

Note that we may arrive at the above in one of two ways: by applying the asymmetric form of duality, or
by applying the symmetric form of duality to the original problem in standard form. From the solution of
Exercise 16.11a, we have that the solution to the dual is λ^T = c_B^T B^{-1} = [0, 5/3, 2/3] (using the proof of the
duality theorem).
17.5
We represent the primal in the form

minimize c^T x
subject to Ax = b
x ≥ 0.

The corresponding dual is

maximize λ^T b
subject to λ^T A ≤ c^T,

that is,

maximize 2λ_1 + 7λ_2 + 3λ_3
subject to [λ_1 λ_2 λ_3] [2 1 1 0 0; -1 2 0 1 0; 1 0 0 0 1] ≤ [1 2 0 0 0].

The solution to the dual can be obtained using the formula λ^T = c_B^T B^{-1}, where

c_B^T = [1 2 0] and B = [2 1 1; -1 2 0; 1 0 0].

Note that because the last element in c_B^T is zero, we do not need to calculate the last row of B^{-1} when
computing λ^T; that is, these elements are "don't care" elements that we denote using the asterisk. Hence,

λ^T = c_B^T B^{-1} = [1 2 0] [0 0 1; 0 1/2 1/2; * * *] = [0 1 2].

Note that
c^T x = λ^T b = 13,
as expected.
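The dual solution can be double-checked without inverting B, by verifying λ^T B = c_B^T and λ^T b = 13 directly. A Python sketch; note that the sign of the (2, 1) entry of B is inferred from the printed B^{-1} rows and the value 13, since the original symbols are garbled in this copy:

```python
# Direct check of the dual solution: lambda^T B = c_B^T and
# lambda^T b = c^T x = 13 (B as reconstructed in this solution).
B = [[2, 1, 1],
     [-1, 2, 0],
     [1, 0, 0]]
b = [2, 7, 3]
cB = [1, 2, 0]
lam = [0, 1, 2]

lamB = [sum(lam[i] * B[i][j] for i in range(3)) for j in range(3)]
lamb = sum(li * bi for li, bi in zip(lam, b))
```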
17.6
a. Multiplying the objective function by -1, we see that the problem is of the form of the dual in the
asymmetric form of duality. Therefore, the dual to the problem is of the form of the primal in the asymmetric
form:

minimize λ^T b
subject to λ^T A = c^T
λ ≥ 0.

b. The given vector y is feasible in the dual. Since b = 0, any feasible point in the dual is optimal. Thus, y
is optimal in the dual, and the objective function value for y is 0. Therefore, by the Duality Theorem, the
primal also has an optimal feasible solution, and the corresponding objective function value is 0. Since the
vector 0 is feasible in the primal and has objective function value 0, the vector 0 is a solution to the primal.
17.7

We introduce two sets of nonnegative variables: x_i^+ ≥ 0, x_i^- ≥ 0, i = 1, 2, 3. We can then represent the
optimization problem in the form

minimize (x_1^+ + x_1^-) + (x_2^+ + x_2^-) + (x_3^+ + x_3^-)
subject to [1 1 -1 -1 -1 1; 0 -1 0 0 1 0] [x_1^+, x_2^+, x_3^+, x_1^-, x_2^-, x_3^-]^T = [2, 1]^T
x_i^+ ≥ 0, x_i^- ≥ 0.

We form the initial tableau:

x1+  x2+  x3+  x1-  x2-  x3-   b
 1    1   -1   -1   -1    1    2
 0   -1    0    0    1    0    1
c^T: 1    1    1    1    1    1    0

There is no apparent basic feasible solution. We add the second row to the first one to obtain:

 1    0   -1   -1    0    1    3
 0   -1    0    0    1    0    1
c^T: 1    1    1    1    1    1    0

We next calculate the reduced cost coefficients:

 1    0   -1   -1    0    1    3
 0   -1    0    0    1    0    1
r^T: 0    2    2    2    0    0   -4

We have zeros under the basic columns. The reduced cost coefficients are all nonnegative. The optimal
solution is

x = [3, 0, 0, 0, 1, 0]^T.

The optimal solution to the original problem is x* = [3, -1, 0]^T.

The dual of the above linear program is

maximize 2λ_1 + λ_2
subject to [λ_1 λ_2] [1 1 -1 -1 -1 1; 0 -1 0 0 1 0] ≤ [1 1 1 1 1 1].

The optimal solution to the dual is

λ^T = c_B^T B^{-1} = [1 1] [1 -1; 0 1]^{-1} = [1 2].

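The dual solution can be verified in a couple of lines: with basic columns 1 and 5, B = [[1, -1], [0, 1]] and hence B^{-1} = [[1, 1], [0, 1]]. A Python sketch (the minus signs are as reconstructed in this solution, since the symbols are garbled in this copy):

```python
# Check lambda^T = c_B^T B^(-1) for the split-variable problem above.
Binv = [[1, 1], [0, 1]]
cB = [1, 1]
b = [2, 1]

lam = [sum(cB[i] * Binv[i][j] for i in range(2)) for j in range(2)]
dual_value = sum(li * bi for li, bi in zip(lam, b))
# dual_value should equal the primal optimum |3| + |-1| + |0| = 4.
```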
17.8
a. The dual (asymmetric form) is

maximize λ
subject to λ a_i ≤ 1, i = 1, . . . , n.

We can write the constraint as

λ ≤ min{1/a_i : i = 1, . . . , n} = 1/a_n.

Therefore, the solution to the dual problem is

λ = 1/a_n.

b. Duality Theorem: If the primal problem has an optimal solution, then so does the dual, and the optimal
values of their respective objective functions are equal.
By the duality theorem, the primal has an optimal solution, and the optimal value of the objective function
is 1/a_n. The only feasible point in the primal with this objective function value is the basic feasible solution
[0, . . . , 0, 1/a_n]^T.
c. Suppose we start at a nonoptimal initial basic feasible solution, [0, . . . , 1/a_i, . . . , 0]^T, where 1 ≤ i ≤ n - 1.
The relative cost coefficient for the qth column, q ≠ i, is

r_q = 1 - a_q/a_i.

Since a_n > a_j for any j ≠ n, r_q is the most negative relative cost coefficient if and only if q = n.
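Part c can be illustrated numerically. A Python sketch with hypothetical increasing data a_1 < · · · < a_n (the values below are made up):

```python
# Starting from the basic feasible solution with basic column i, the
# relative cost r_q = 1 - a_q/a_i is most negative exactly at q = n
# when 0 < a_1 < a_2 < ... < a_n.
a = [1.0, 2.0, 5.0, 10.0]              # hypothetical increasing data
n = len(a)
i = 1                                  # a nonoptimal basic column (0-indexed)
others = [q for q in range(n) if q != i]
r = {q: 1 - a[q] / a[i] for q in others}
q_star = min(others, key=lambda q: r[q])   # column with most negative r_q
```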
17.9
a. By asymmetric duality, the dual is given by

minimize λ
subject to λ ≥ c_i, i = 1, . . . , n.

b. The constraint in part a implies that λ is feasible if and only if λ ≥ c_4. Hence, the solution is λ = c_4.
c. By the duality theorem, the optimal objective function value for the given problem is c_4. The only solution
that achieves this value is x_4 = 1 and x_i = 0 for all i ≠ 4.
17.10
a. The dual is

minimize λ^T 0
subject to λ^T A ≥ c^T
λ ≥ 0.

b. By the duality theorem, we conclude that the optimal value of the objective function is 0. The only
vector satisfying x ≥ 0 that has an objective function value of 0 is x = 0. Therefore, the solution is x = 0.
c. The constraint set contains only the vector 0. Any other vector x satisfying x ≥ 0 has at least one
positive component, and consequently has a positive objective function value. But this contradicts the fact
that the optimal solution has an objective function value of 0.
17.11
a. The artificial problem is:

minimize [0^T, e^T] z
subject to [A, I] z = b
z ≥ 0,

where e = [1, . . . , 1]^T and z = [x^T, y^T]^T.

b. The dual to the artificial problem is:

maximize λ^T b
subject to λ^T A ≤ 0^T
λ^T ≤ e^T.

c. Suppose the given original linear programming problem has a feasible solution. By the FTLP, the original
LP problem has a BFS. Then, by a theorem given in class, the artificial problem has an optimal feasible
solution with y = 0. Hence, by the Duality Theorem, the dual of the artificial problem also has an optimal
feasible solution.
17.12
a. Possible. This situation arises if the primal is unbounded, which by the Weak Duality Lemma implies
that the dual has no feasible solution.
b. Impossible, because the Duality Theorem requires that if the primal has an optimal feasible solution,
then so does the dual.
c. Impossible, because the Duality Theorem requires that if the dual has an optimal feasible solution, then
so does the primal. Also, the Weak Duality Lemma requires that if the primal is unbounded (i.e., has a feasible
solution but no optimal feasible solution), then the dual must have no feasible solution.
17.13
To prove the result, we use Theorem 17.3 (Complementary Slackness). Since μ ≥ 0, we have A^T λ = c - μ ≤
c. Hence, λ is a feasible solution to the dual. Now, (c - A^T λ)^T x = μ^T x = 0. Therefore, by Theorem 17.3,
x and λ are optimal for their respective problems.
17.14
To use the symmetric form of duality, we need to rewrite the problem as

minimize c^T (u - v)
subject to -A(u - v) ≥ -b
u, v ≥ 0,

which we represent in the form

minimize [c^T, -c^T] [u; v]
subject to [-A, A] [u; v] ≥ -b
[u; v] ≥ 0.

By the symmetric form of duality, the dual is:

maximize λ^T (-b)
subject to λ^T [-A, A] ≤ [c^T, -c^T]
λ ≥ 0.

Note that for the constraint involving A, we have

λ^T [-A, A] ≤ [c^T, -c^T] ⟺ -λ^T A ≤ c^T and λ^T A ≤ -c^T ⟺ λ^T A = -c^T.

Therefore, we can represent the dual as

minimize λ^T b
subject to λ^T A = -c^T
λ ≥ 0.

17.15
The corresponding dual can be written as:

maximize 3λ_1 + 3λ_2
subject to λ_1 + 2λ_2 ≤ 1
2λ_1 + λ_2 ≤ 1
λ_1, λ_2 ≥ 0.

To solve the dual, we refer back to the solution of Exercise 16.11. Using the idea of the proof of the duality
theorem (Theorem 17.2), we obtain the solution to the dual as λ^T = c_B^T B^{-1} = [1/3, 1/3]. The cost of the
dual problem is 2, which verifies the duality theorem.
17.16
The dual to the above linear program (asymmetric form) is

maximize λ^T 0
subject to λ^T O ≤ c^T.

The above dual problem has a feasible solution if and only if c ≥ 0. Since any feasible solution to the dual
is also optimal, the dual has an optimal solution if and only if c ≥ 0. Therefore, by the duality theorem, the
primal problem has a solution if and only if c ≥ 0.
If the solution to the dual exists, then the optimal value of the objective function in the primal is equal
to that of the dual, which is clearly 0. In this case, 0 is optimal, since c^T 0 = 0.
17.17
Consider the primal problem

minimize 0^T x
subject to Ax ≥ b
x ≥ 0,

and its corresponding dual

maximize y^T b
subject to y^T A ≤ 0^T
y ≥ 0.

⇒: By assumption, there exists a feasible solution to the primal problem. Note that any feasible solution
is also optimal, and has objective function value 0. Suppose y satisfies A^T y ≤ 0 and y ≥ 0. Then, y is a
feasible solution to the dual. Therefore, by the Weak Duality Lemma, b^T y ≤ 0.
⇐: Note that the feasible region for the dual is nonempty, since 0 is a feasible point. Also, by assumption,
0 is an optimal solution, since any other feasible point y satisfies b^T y ≤ b^T 0 = 0. Hence, by the duality
theorem, the primal problem has an (optimal) feasible solution.
17.18
a. The dual is

maximize y^T b
subject to y^T A ≤ 0^T.

b. The feasible set of the dual problem is always nonempty, because 0 is clearly guaranteed to be feasible.
c. Suppose y is feasible in the dual. Then, by assumption, b^T y ≤ 0. But the point 0 is feasible and has
objective function value 0. Hence, 0 is optimal in the dual.
d. By parts b and c, the dual has an optimal feasible solution. Hence, by the duality theorem, the primal
problem also has an optimal feasible solution.
e. By assumption, there exists a feasible solution to the primal problem. Note that any feasible solution
in the primal has objective function value 0 (and hence so does the given solution). Suppose y satisfies
A^T y ≤ 0. Then, y is a feasible solution to the dual. Therefore, by weak duality, b^T y ≤ 0.
17.19
Consider the primal problem

minimize y^T b
subject to y^T A = 0^T
y ≥ 0,

and its corresponding dual

maximize 0^T x
subject to Ax ≤ b.

⇒: By assumption, there exists a feasible solution to the dual problem. Note that any feasible solution
is also optimal, and has objective function value 0. Suppose y satisfies A^T y = 0 and y ≥ 0. Then, y is a
feasible solution to the primal. Therefore, by the Weak Duality Lemma, b^T y ≥ 0.
⇐: Note that the feasible region for the primal is nonempty, since 0 is a feasible point. Also, by as-
sumption, 0 is an optimal solution, since any other feasible point y satisfies b^T y ≥ b^T 0 = 0. Hence, by the
duality theorem, the dual problem has an (optimal) feasible solution.
17.20
Let e = [1, . . . , 1]^T. Consider the primal problem

minimize 0^T x
subject to Ax ≤ -e

and its corresponding dual

maximize e^T y
subject to y^T A = 0^T
y ≥ 0.

⇒: Suppose there exists x such that Ax < 0. Then, the vector x_0 = x/min{|(Ax)_i|} is a feasible solution to the
primal problem. Note that any feasible solution is also optimal, and has objective function value 0. Suppose
y satisfies A^T y = 0, y ≥ 0. Then, y is a feasible solution to the dual. Therefore, by the Weak Duality
Lemma, e^T y ≤ 0. Since y ≥ 0, we conclude that y = 0.
⇐: Suppose 0 is the only feasible solution to the dual. Then, 0 is clearly also optimal. Hence, by the
duality theorem, the primal problem has an (optimal) feasible solution x. Since Ax ≤ -e and -e < 0, we
get Ax < 0.
17.21
a. Rewrite the primal as

minimize (-e)^T x
subject to (P - I)^T x = 0
x ≥ 0.

By asymmetric duality, the dual is

maximize λ^T 0
subject to λ^T (P - I)^T ≤ -e^T.

b. To make the notation simpler, we rewrite the dual (with y = -λ) as:

maximize 0
subject to (P - I)y ≥ e.

Suppose the dual is feasible. Then, there exists a y such that P y ≥ y + e > y. Let y_i be the largest
element of y, and p(i)^T the ith row of P. Then, p(i)^T y > y_i. But, by definition of y_i, y ≤ y_i e. Hence,
p(i)^T y ≤ y_i p(i)^T e = y_i, which contradicts the inequality p(i)^T y > y_i. Hence, the dual is not feasible.
c. The primal is certainly feasible, because 0 is a feasible point. Therefore, by part b and strong duality, the
primal must also be unbounded.
d. Because 0 is an achievable objective function value (it is the objective function value of 0), and the problem
is unbounded, we deduce that -1 is also achievable. Hence, there exists a feasible x such that x^T e = 1. This
proves the desired result.
17.22
Write the LP problem

minimize c^T x
subject to Ax ≥ b
x ≥ 0

and the corresponding dual problem

maximize λ^T b
subject to λ^T A ≤ c^T
λ ≥ 0.

By a theorem on duality, if we can find feasible points x and λ for the primal and dual, respectively, such
that c^T x = λ^T b, then x and λ are optimal for their respective problems. We can rewrite the previous set
of relations as the single system of inequalities

[ -c^T   b^T ]          [  0 ]
[  c^T  -b^T ]          [  0 ]
[   A     0  ] [ x ]    [  b ]
[  I_n    0  ] [ λ ] ≥  [  0 ]
[   0   -A^T ]          [ -c ]
[   0    I_m ]          [  0 ]

(the first two rows together enforce c^T x = λ^T b). Therefore, writing the above as Ã y ≥ b̃, where
Ã ∈ R^{(2m+2n+2)×(m+n)} and b̃ ∈ R^{2m+2n+2}, we have that the first n components of the solution for the
data ((2m + 2n + 2), (m + n), Ã, b̃) constitute a solution to the given linear programming problem.
17.23
a. Consider the dual; b does not appear in the constraint (but it does appear in the dual objective function).
Thus, provided the level sets of the dual objective function do not exactly align with one of the faces of the
constraint set (polyhedron), the optimal dual vector λ will not change if we perturb b very slightly. Now, by
the duality theorem, z(b) = λ^T b. Because λ is constant in a neighborhood of b, we deduce that ∇z(b) = λ.
b. By part a, we deduce that the optimal objective function value will change by 3Δb_1.
17.24
a. Weak duality lemma: if x_0 and y_0 are feasible points in the primal and dual, respectively, then f_1(x_0) ≥
f_2(y_0).
Proof: Because y_0 ≥ 0 and Ax_0 - b ≥ 0, we have y_0^T (Ax_0 - b) ≥ 0. Therefore,

f_1(x_0) ≥ f_1(x_0) - y_0^T (Ax_0 - b)
         = (1/2) x_0^T x_0 - y_0^T Ax_0 + y_0^T b.

Now, we know that

(1/2) x_0^T x_0 - y_0^T Ax_0 ≥ (1/2) x̄^T x̄ - y_0^T Ax̄,

where x̄ = A^T y_0. Hence,

(1/2) x_0^T x_0 - y_0^T Ax_0 ≥ (1/2) y_0^T AA^T y_0 - y_0^T AA^T y_0 = -(1/2) y_0^T AA^T y_0.

Combining this with the above, we have

f_1(x_0) ≥ -(1/2) y_0^T AA^T y_0 + y_0^T b
         = f_2(y_0).

Alternatively, notice that

f_1(x_0) - f_2(y_0) = (1/2) x_0^T x_0 + (1/2) y_0^T AA^T y_0 - b^T y_0
                    ≥ (1/2) x_0^T x_0 + (1/2) y_0^T AA^T y_0 - x_0^T A^T y_0
                    = (1/2) ||x_0 - A^T y_0||^2
                    ≥ 0.

b. Suppose f_1(x_0) = f_2(y_0) for feasible points x_0 and y_0. Let x be any feasible point in the primal. Then,
by part a, f_1(x) ≥ f_2(y_0) = f_1(x_0). Hence x_0 is optimal in the primal.
Similarly, let y be any feasible point in the dual. Then, by part a, f_2(y) ≤ f_1(x_0) = f_2(y_0). Hence y_0 is
optimal in the dual.
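The identity f_1 - f_2 = (1/2)||x_0 - A^T y_0||^2 + y_0^T (Ax_0 - b), which underlies both derivations, can be checked numerically. A Python sketch with made-up data:

```python
# Check of the weak duality argument, with f1(x) = (1/2) x^T x subject
# to Ax >= b, and f2(y) = -(1/2) y^T A A^T y + b^T y subject to y >= 0.
A = [[1.0, 0.0], [1.0, 1.0]]
b = [1.0, 1.0]
x0 = [2.0, 1.0]               # feasible: A x0 = [2, 3] >= b
y0 = [1.0, 0.5]               # feasible: y0 >= 0

Ax0 = [sum(A[i][j] * x0[j] for j in range(2)) for i in range(2)]
ATy0 = [sum(A[i][j] * y0[i] for i in range(2)) for j in range(2)]

f1 = 0.5 * sum(v * v for v in x0)
f2 = -0.5 * sum(v * v for v in ATy0) + sum(bi * yi for bi, yi in zip(b, y0))
half_norm = 0.5 * sum((u - v) ** 2 for u, v in zip(x0, ATy0))
slack = sum(yi * (axi - bi) for yi, axi, bi in zip(y0, Ax0, b))
# f1 - f2 decomposes as half_norm + slack, so f1 >= f2.
```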

## 18. Non-Simplex Methods

18.1
The following is a MATLAB function that implements the affine scaling algorithm.
function [x,N] = affscale(c,A,b,u,options);
% AFFSCALE(c,A,b,u);
% AFFSCALE(c,A,b,u,options);
%
% x = AFFSCALE(c,A,b,u);
% x = AFFSCALE(c,A,b,u,options);
%
% [x,N] = AFFSCALE(c,A,b,u);
% [x,N] = AFFSCALE(c,A,b,u,options);
%
%AFFSCALE(c,A,b,u) solves the following linear program using the
%affine scaling method:
% min c'x subject to Ax=b, x>=0,
%where u is a strictly feasible initial solution.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(2) is a measure of the precision required for the final point.
%OPTIONS(3) is a measure of the precision required for the cost value.
%OPTIONS(14) = max number of iterations.
%OPTIONS(18) = alpha.

if nargin ~= 5
options = [];
if nargin ~= 4
disp('Wrong number of arguments.');
return;
end
end
xnew=u;

if length(options) >= 14
if options(14)==0
options(14)=1000*length(xnew);
end
else
options(14)=1000*length(xnew);
end

%if length(options) < 18
options(18)=0.99; %optional step size
%end

%clc;
format compact;
format short e;

options = foptions(options);
print = options(1);
epsilon_x = options(2);
epsilon_f = options(3);
max_iter=options(14);
alpha=options(18);

n=length(c);
m=length(b);

for k = 1:max_iter,

xcurr=xnew;
D = diag(xcurr);

Abar = A*D;
Pbar = eye(n) - Abar'*inv(Abar*Abar')*Abar;
d = -D*Pbar*D*c;
if any(d ~= 0),
nonzd = find(d<0);
r = min(-xcurr(nonzd)./d(nonzd));
else
disp('Terminating: d = 0');
break;
end
xnew = xcurr+alpha*r*d;

if print,
disp('Iteration number k =')
disp(k); %print iteration index k
disp('alpha_k =');
disp(alpha*r); %print alpha_k
disp('New point =');
disp(xnew); %print new point
end %if

if norm(xnew-xcurr) <= epsilon_x*norm(xcurr)
disp('Terminating: Relative difference between iterates <');
disp(epsilon_x);
break;
end %if

if abs(c'*(xnew-xcurr)) < epsilon_f*abs(c'*xcurr),
disp('Terminating: Relative change in objective function <');
disp(epsilon_f);
break;
end %if

if k == max_iter
disp('Terminating with maximum number of iterations');
end %if
end %for

if nargout >= 1
x=xnew;
if nargout == 2
N=k;
end
else
disp('Final point =');
disp(xnew);
disp('Number of iterations =');
disp(k);
end %if
%----------------------------------------------------------------
We now apply the affine scaling algorithm to the problem in Example 16.2, as follows:
>> A=[1 0 1 0 0; 0 1 0 1 0; 1 1 0 0 1];
>> b=[4;6;8];
>> c=[-2;-5;0;0;0];
>> u=[2;3;2;3;3];
>> options(1)=0;
>> options(2)=10^(-7);
>> options(3)=10^(-7);
>> affscale(c,A,b,u,options);
Terminating: Relative difference between iterates <
1.0000e-07
Final point =
2.0000e+00 6.0000e+00 2.0000e+00 1.0837e-09 1.7257e-08
Number of iterations =
8

The result obtained after 8 iterations as indicated above agrees with the solution in Example 16.2:
[2, 6, 2, 0, 0]> .
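A defining property of the affine-scaling direction d = -D*Pbar*D*c is that it lies in the null space of A, so each iterate remains on the constraint set Ax = b. This can be checked in pure Python on a toy problem where A has a single row, so inv(Abar*Abar') reduces to a scalar (all names and data below are illustrative):

```python
# One affine-scaling direction on a toy problem with a single equality
# constraint, checking that d = -D*Pbar*D*c satisfies A d = 0.
a = [1.0, 2.0, 1.0]          # A = a^T (one row)
c = [1.0, 1.0, 0.0]
x = [1.0, 1.0, 1.0]          # strictly interior feasible point

Dc = [xi * ci for xi, ci in zip(x, c)]            # D*c
abar = [ai * xi for ai, xi in zip(a, x)]          # Abar = A*D
s = sum(v * v for v in abar)                      # Abar*Abar' (a scalar)
t = sum(u * v for u, v in zip(abar, Dc)) / s
PDc = [v - t * u for v, u in zip(Dc, abar)]       # Pbar*(D*c)
d = [-xi * v for xi, v in zip(x, PDc)]            # d = -D*Pbar*D*c
Ad = sum(ai * di for ai, di in zip(a, d))         # should be 0
```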
18.2
The following is a MATLAB routine that implements the two-phase affine scaling method, using the MAT-
LAB function from Exercise 18.1.
function [x,N]=tpaffscale(c,A,b,options)
% March 28, 2000
%
% TPAFFSCALE(c,A,b);
% TPAFFSCALE(c,A,b,options);
%
% x = TPAFFSCALE(c,A,b);
% x = TPAFFSCALE(c,A,b,options);
%
% [x,N] = TPAFFSCALE(c,A,b);
% [x,N] = TPAFFSCALE(c,A,b,options);
%
%TPAFFSCALE(c,A,b) solves the following linear program using the
%two-phase affine scaling method:
% min c'x subject to Ax=b, x>=0.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(2) is a measure of the precision required for the final point.
%OPTIONS(3) is a measure of the precision required for the cost value.
%OPTIONS(14) = max number of iterations.
%OPTIONS(18) = alpha.

if nargin ~= 4
options = [];
if nargin ~= 3
disp('Wrong number of arguments.');
return;
end
end

%clc;
format compact;
format short e;

options = foptions(options);
print = options(1);

n=length(c);
m=length(b);

%Phase I
if print,
disp(' ');
disp('Phase I');
disp('-------');
end

u = rand(n,1);
v = b-A*u;
if v ~= zeros(m,1),
u = affscale([zeros(n,1);1],[A v],b,[u;1],options);
else
u = [u;0]; %u is already feasible; the artificial variable is 0
end

if print
disp(' ')
disp('Result of Phase I:')
disp(u)
end

if u(n+1) < options(2),
%Phase II
u(n+1) = [];
if print
disp(' ');
disp('Phase II');
disp('--------');
disp('Initial condition for Phase II:');
disp(u);
end
[x,N]=affscale(c,A,b,u,options);
if nargout == 0
disp('Final point =');
disp(x);
disp('Number of iterations =');
disp(N);
end %if
else
disp('Terminating: problem has no feasible solution.');
end
%----------------------------------------------------------------
We now apply the above MATLAB routine to the problem in Example 16.5, as follows:
>> A=[1 1 1 0; 5 3 0 -1];
>> b=[4;8];
>> c=[-3;-5;0;0];
>> options(1)=0;
>> tpaffscale(c,A,b,options);
Terminating: Relative difference between iterates <
1.0000e-07
Terminating: Relative difference between iterates <
1.0000e-07
Final point =
4.0934e-09 4.0000e+00 9.4280e-09 4.0000e+00
Number of iterations =
7

The result obtained above agrees with the solution in Example 16.5: [0, 4, 0, 4]> .
18.3
The following is a MATLAB routine that implements the affine scaling method applied to LP problems of
the form given in the question, by converting the given problem into Karmarkar's artificial form and then using
the MATLAB function from Exercise 18.1.
function [x,N]=karaffscale(c,A,b,options)
%
% KARAFFSCALE(c,A,b);
% KARAFFSCALE(c,A,b,options);
%
% x = KARAFFSCALE(c,A,b);
% x = KARAFFSCALE(c,A,b,options);
%
% [x,N] = KARAFFSCALE(c,A,b);
% [x,N] = KARAFFSCALE(c,A,b,options);
%
%KARAFFSCALE(c,A,b) solves the following linear program using the
%affine scaling method:
% min c'x subject to Ax>=b, x>=0.
%We use Karmarkar's artificial problem to convert the above problem into
%a form usable by the affine scaling method.
%The second variant allows a vector of optional parameters to be
%defined:
%OPTIONS(1) controls how much display output is given; set
%to 1 for a tabular display of results (default is no display: 0).
%OPTIONS(2) is a measure of the precision required for the final point.
%OPTIONS(3) is a measure of the precision required for the cost value.
%OPTIONS(14) = max number of iterations.
%OPTIONS(18) = alpha.

if nargin ~= 4
options = [];
if nargin ~= 3
disp('Wrong number of arguments.');
return;
end
end

%clc;
format compact;
format short e;

options = foptions(options);
print = options(1);

n=length(c);
m=length(b);

%Convert to Karmarkar's artificial problem

x0 = ones(n,1);
l0 = ones(m,1);
u0 = ones(n,1);
v0 = ones(m,1);

AA = [
c' -b' zeros(1,n) zeros(1,m) (-c'*x0+b'*l0);
A zeros(m,m) zeros(m,n) -eye(m) (b-A*x0+v0);
zeros(n,n) A' eye(n) zeros(n,m) (c-A'*l0)
];
bb = [0; b; c];
cc = [zeros(2*m+2*n,1); 1];
y0 = [x0; l0; u0; v0; 1];

[y,N]=affscale(cc,AA,bb,y0,options);

if cc'*y <= options(3),
x = y(1:n);
if nargout == 0
disp('Final point =');
disp(x);
disp('Final cost =');
disp(c'*x);
disp('Number of iterations =');
disp(N);
end %if
else
disp('Terminating: problem has no optimal feasible solution.');
end

We now apply the above MATLAB routine to the problem in Example 15.15, as follows:
>> c=[-3;-5];
>> A=[-1 -5; -2 -1; -1 -1];
>> b=[-40;-20;-12];
>> options(2)=10^(-4);
>> karaffscale(c,A,b,options);
Terminating: Relative difference between iterates <
1.0000e-04
Final point =
5.1992e+00 6.5959e+00
Final cost =
-4.8577e+01
Number of iterations =
3

The solution from Example 15.15 is [5, 7]> . The accuracy of the result obtained above is disappointing.
We believe that the inaccuracy here may be caused by our particularly simple numerical implementation of
the affine scaling method. This illustrates the numerical issues that must be dealt with in any practically
useful implementation of the affine scaling method.
18.4
a. Suppose T(x) = T(y). Then, T_i(x) = T_i(y) for i = 1, . . . , n + 1. Note that for i = 1, . . . , n, T_i(x) =
(x_i/a_i)T_{n+1}(x) and T_i(y) = (y_i/a_i)T_{n+1}(y). Therefore,

T_i(x) = (x_i/a_i)T_{n+1}(x) = T_i(y) = (y_i/a_i)T_{n+1}(y) = (y_i/a_i)T_{n+1}(x),

which implies that x_i = y_i, i = 1, . . . , n. Hence x = y.

b. Let y ∈ {x : x_{n+1} > 0}. Hence y_{n+1} > 0. Define x = [x_1, . . . , x_n]^T by x_i = a_i y_i/y_{n+1}, i = 1, . . . , n.
Then, T(x) = y. To see this, note that

T_{n+1}(x) = 1/(y_1/y_{n+1} + · · · + y_n/y_{n+1} + 1) = y_{n+1}/(y_1 + · · · + y_n + y_{n+1}) = y_{n+1}.

Also, for i = 1, . . . , n,
T_i(x) = (y_i/y_{n+1})T_{n+1}(x) = y_i.
c. An immediate consequence of the solution to part b.
d. We have

T_{n+1}(a) = 1/(a_1/a_1 + · · · + a_n/a_n + 1) = 1/(n + 1),

and, for i = 1, . . . , n,

T_i(a) = (a_i/a_i)T_{n+1}(a) = 1/(n + 1).

e. Since y = T(x), we have that for i = 1, . . . , n, y_i = (x_i/a_i)y_{n+1}. Therefore, x′_i = y_i a_i = x_i y_{n+1}, which
implies that x′ = y_{n+1} x. Hence, Ax′ = y_{n+1} Ax = b y_{n+1}.
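The transformation T can be written out in a few lines, which also gives a quick check of part d: T(a) has all components equal to 1/(n + 1). A Python sketch with exact rationals (the data values are made up):

```python
from fractions import Fraction

def T(x, a):
    """T_i(x) = (x_i/a_i)/(sum_j x_j/a_j + 1) for i <= n, and
    T_{n+1}(x) = 1/(sum_j x_j/a_j + 1)."""
    ratios = [Fraction(xi) / Fraction(ai) for xi, ai in zip(x, a)]
    s = sum(ratios) + 1
    return [rho / s for rho in ratios] + [Fraction(1) / s]

a = [2, 3, 4]
y = T(a, a)        # part d: every component should be 1/(n+1) = 1/4
```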
18.5
Let x ∈ R^n, and y = T(x). Let a_i be the ith column of A, i = 1, . . . , n. As in the hint, let A′ be given by

A′ = [a_1 a_1, . . . , a_n a_n, -b].

Then,

Ax = b ⟺ Ax - b = 0
      ⟺ [a_1, . . . , a_n, -b] [x_1, . . . , x_n, 1]^T = 0
      ⟺ [a_1 a_1, . . . , a_n a_n, -b] [x_1/a_1, . . . , x_n/a_n, 1]^T = 0
      ⟺ A′ [(x_1/a_1)y_{n+1}, . . . , (x_n/a_n)y_{n+1}, y_{n+1}]^T = 0
      ⟺ A′ y = 0.
18.6
The result follows from Exercise 18.5 by setting A := c^T and b := 0.
18.7
Consider the set Ω = {x ∈ R^n : e^T x = 1, x ≥ 0, x_1 = 0}, which can be written as {x ∈ R^n : Ax = b, x ≥ 0},
where

A = [e^T; e_1^T], b = [1; 0],

with e = [1, . . . , 1]^T, e_1 = [1, 0, . . . , 0]^T. Let a_0 = e/n. By Exercise 12.20, the closest point on the set
{x : Ax = b} to the point a_0 is

x* = A^T (AA^T)^{-1} (b - Aa_0) + a_0 = [0, 1/(n - 1), . . . , 1/(n - 1)]^T.

Since x* ∈ {x : Ax = b, x ≥ 0} ⊂ {x : Ax = b}, the point x* is also the closest point on the set
{x : Ax = b, x ≥ 0} to the point a_0.
Let r = ||a_0 - x*||. Then, the sphere of radius r centered at a_0 is inscribed in Ω. Note that

r = ||a_0 - x*|| = 1/√(n(n - 1)).

Hence, the radius of the largest sphere inscribed in Ω is larger than or equal to 1/√(n(n - 1)). It remains to
show that this largest radius is less than or equal to 1/√(n(n - 1)). To this end, we show that this largest
radius is less than or equal to 1/√(n(n - 1)) + ε for any ε > 0. For this, it suffices to show that the sphere of
radius 1/√(n(n - 1)) + ε is not inscribed in Ω. To show this, consider the point

x = x* + ε (x* - a_0)/||x* - a_0||
  = x* + ε √(n(n - 1)) (x* - a_0)
  = x* + ε [-√((n - 1)/n), 1/√(n(n - 1)), . . . , 1/√(n(n - 1))]^T.

It is easy to verify that the point x above is on the sphere of radius 1/√(n(n - 1)) + ε centered at a_0.
However, clearly the first component of x is negative. Therefore, the sphere of radius 1/√(n(n - 1)) + ε is
not inscribed in Ω. Our proof is thus completed.
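The closed-form radius can be verified numerically: the distance from a_0 = e/n to x* = [0, 1/(n-1), ..., 1/(n-1)] should equal 1/√(n(n-1)). A short Python check:

```python
import math

# Check ||a0 - x*|| = 1/sqrt(n(n-1)) for a0 = e/n and
# x* = [0, 1/(n-1), ..., 1/(n-1)].
n = 5
a0 = [1.0 / n] * n
xstar = [0.0] + [1.0 / (n - 1)] * (n - 1)
r = math.sqrt(sum((u - v) ** 2 for u, v in zip(a0, xstar)))
```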
18.8
We first consider the constraints. We claim that x ∈ Δ ⟺ x̄ ∈ Δ. To see this, note that if x ∈ Δ, then
e^T x = 1 and hence

e^T x̄ = e^T D^{-1} x / e^T D^{-1} x = 1,

which means that x̄ ∈ Δ. The same argument can be used for the converse. Next, we claim that Ax = 0 ⟺
AD x̄ = 0. To see this, write

Ax = ADD^{-1} x = (AD x̄)(e^T D^{-1} x).

Since e^T D^{-1} x > 0, we have Ax = 0 ⟺ AD x̄ = 0. Finally, we claim that if x* is an optimal solution to the
original problem, then x̄* = U(x*) is an optimal solution to the transformed problem. To see this, recall
that the problem is a Karmarkar's restricted problem, and hence by Assumption (B) we have c^T x* = 0. We
now note that the minimum value of the objective function c^T D x̄ in the transformed problem is zero. This
is because c^T D x̄ = c^T x / e^T D^{-1} x, and e^T D^{-1} x > 0. Finally, we observe that at the point x̄* = U(x*)
the objective function value for the transformed problem is zero. Indeed,

c^T D x̄* = c^T DD^{-1} x* / e^T D^{-1} x* = c^T x* / e^T D^{-1} x* = 0.

Therefore, the two problems are equivalent.

18.9
Let v ∈ R^{m+1} be such that v>B = 0>. We will show that this implies

v>[A; e>] = 0>,

where [A; e>] denotes the matrix obtained by stacking A on top of e>. Since

rank [A; e>] = m + 1,

this in turn gives us the desired result (v = 0, and hence rank B = m + 1).
To proceed, write v as

v = [u; v_{m+1}],

where u ∈ R^m constitutes the first m components of v. Then,

v>B = u>AD + v_{m+1}e> = 0>.

Postmultiplying the above by e, and using the facts that De = x0, Ax0 = 0, and e>e = n, we get

u>Ax0 + v_{m+1}n = v_{m+1}n = 0,

which implies that v_{m+1} = 0. Hence, u>AD = 0>, which after postmultiplying by D^{-1} gives u>A = 0>.
Hence,

v>[A; e>] = 0>,

which implies that v = 0. Hence, rank B = m + 1.
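A quick numerical sanity check (ours; the instance below is arbitrary) builds a random A whose rows are orthogonal to a strictly interior x0 and confirms that B = [AD; e>] has rank m + 1:

```python
import numpy as np

# Sketch (ours): random instance with A x0 = 0, x0 > 0, e^T x0 = 1.
rng = np.random.default_rng(0)
m, n = 2, 5
x0 = rng.uniform(0.5, 1.5, n)
x0 /= x0.sum()                          # e^T x0 = 1, x0 > 0
A = rng.standard_normal((m, n))
A -= np.outer(A @ x0, x0) / (x0 @ x0)   # project rows so that A x0 = 0
D = np.diag(x0)
e = np.ones(n)

B = np.vstack([A @ D, e])               # the matrix B = [AD; e^T]
rank_B = np.linalg.matrix_rank(B)
```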
18.10
We proceed by induction. For k = 0, the result is true because x(0) = a0. Now suppose that x(k) is a strictly
interior point of Ω. We first show that x̄(k+1) is a strictly interior point. Now,

x̄(k+1) = a0 − αr ĉ(k).

Then, since α ∈ (0, 1) and ‖ĉ(k)‖ = 1, we have

‖x̄(k+1) − a0‖ = αr ‖ĉ(k)‖ < r.

Since r is the radius of the largest sphere inscribed in Ω, x̄(k+1) is a strictly interior point of Ω. To complete
the proof, we write

x(k+1) = U_k^{-1}(x̄(k+1)) = D_k x̄(k+1) / (e>D_k x̄(k+1)).

We already know that x(k+1) ∈ Ω. It therefore remains to show that it is strictly interior, i.e., x(k+1) > 0.
To see this, note that e>D_k x̄(k+1) > 0. Furthermore, we can write

D_k x̄(k+1) = [x1(k) x̄1(k+1), . . . , xn(k) x̄n(k+1)]>.

Since x(k) = [x1(k), . . . , xn(k)]> > 0 by the induction hypothesis, and x̄(k+1) = [x̄1(k+1), . . . , x̄n(k+1)]> > 0 by the
above, D_k x̄(k+1) > 0 and hence x(k+1) > 0; i.e., it is a strictly interior point of Ω.

19. Integer Linear Programming

19.1
The result follows from the simple observation that if M is a submatrix of A, then any submatrix of M is
also a submatrix of A. Therefore, any property involving all submatrices of A also applies to all submatrices
of M .
19.2
The result follows from the simple observation that any submatrix of A> is the transpose of a submatrix of
A, and that the determinant of the transpose of a matrix equals the determinant of the original matrix.
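Both observations (Exercises 19.1 and 19.2) can be checked by brute force on small matrices. The helper below is our own sketch: it enumerates every square submatrix and tests whether its determinant lies in {−1, 0, 1}.

```python
import itertools
import numpy as np

def totally_unimodular(A, tol=1e-9):
    """Brute-force check: every square submatrix of A has det in {-1, 0, 1}."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if min(abs(d), abs(d - 1), abs(d + 1)) > tol:
                    return False
    return True

# A network-style incidence matrix is totally unimodular, and so is its
# transpose, consistent with Exercise 19.2.
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
```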
19.3
The claim that A is totally unimodular if [A, I] is totally unimodular follows from Exercise 19.1. To show
the converse, suppose that A is totally unimodular. We will show that any p × p invertible submatrix of
[A, I], p ≤ min(m, n), has determinant ±1. We first note that any p × p invertible submatrix of [A, I] that
consists only of columns of A has determinant ±1 because A is totally unimodular. Moreover, any p × p
invertible submatrix of I has determinant ±1.
Consider now a p p invertible submatrix of [A, I] composed of k columns of A and p k columns of I.
Without loss of generality, suppose that this submatrix is composed of the first p rows of [A, I], the last k
columns of A, and the first p k columns of I. (This choice of rows and columns is without loss of generality
because we can exchange rows and columns to arrive at this form, and each exchange only changes the sign
of the determinant.) We now proceed as in the proof of Proposition 19.1.
19.4
The result follows from these properties of determinants: (1) that exchanging columns only changes the sign
of the determinant; (2) the determinant of a block triangular matrix is the product of the determinants of
the diagonal blocks; and (3) the determinant of the identity matrix is 1. See also Exercise 2.4.
19.5
The vectors x and z together satisfy
Ax + z = b,
which means that z = b − Ax. Because the right-hand side involves only integers, z is an integer vector.
19.6
The following MATLAB code generates the figures.
%-----------------------------------------------
% The vertices of the feasible set
x = [2/5 1; 2/5 -2/5]\[3; 1];
X = [0 0 x(1) 2.5];
Y = [0 3 x(2) 0];
fs = 16; % Fontsize
% Draw the feasible set for x1, x2 \in R.
vi = convhull(X,Y);
plot(X, Y, 'o');
axis on; axis equal;
axis([-0.2 4.2 -0.2 3.2]);
hold on
fill(X(vi), Y(vi), 'b', 'FaceAlpha', 0.2);
text(.1, .5, '\fontsize{48}\Omega', 'Position', [1.5 1.25])
set(gca, 'FontSize', fs)
hold off
% The optimal solution has to be one of the extreme points
c = [-3 -4];
% Draw the feasible set for the noninteger problem
figure
axis on; axis equal;
x = [-0.5:0.1:x(1)];
y1 = -x*0.4 + 3;
y2 = x - 2.5;
fs = 16; % Fontsize
plot(x, y1, '--b', x, y2, '--b', 'LineWidth', 2);
axis([-0.2 4.2 -0.2 3.2]);
set(gca, 'FontSize', fs)
hold on
X = [zeros(1,4) ones(1,3) 2*ones(1,3) 3];
Y = [0:max(floor(Y)) 0:max(floor(Y)-1) 0:max(floor(Y)-1) 1];
plot(X, Y, 'bs', 'LineWidth', 2, ...
    'MarkerEdgeColor', 'k', ...
    'MarkerFaceColor', 'g', ...
    'MarkerSize', 10)
% Plot of the cost function
xc = [-1:0.5:5];
yc = -0.75*xc + (14/4)*ones(1,length(xc));
yc0 = -0.75*xc + (17.5/4)*ones(1,length(xc));
fs = 16; % Fontsize
plot(xc, yc, 'r', xc, yc0, 'o-k', 'LineWidth', 2);
set(gca, 'FontSize', fs)
%text(.1,.5,'\fontsize{48}\Omega','Position',[1.5 1.25])
[~,Xmin] = min(c*[X; Y]);
str = sprintf('The maximizer is [%d, %d] and the maximum is %.4f', ...
    X(Xmin), Y(Xmin), -c*[X(Xmin); Y(Xmin)]);
disp(str);
%-----------------------------------------------

19.7
It suffices to show the following claim: if we introduce the equation

x_i + Σ_{j=m+1}^{n} ⌊y_ij⌋ x_j + x_{n+1} = ⌊y_i0⌋

into the original constraint, then the result holds. The reason this suffices is that the Gomory cut is obtained
by subtracting this equation from an equation obtained by elementary row operations on [A, b] (hence is
equivalent to premultiplication by an invertible matrix).
To show the above claim, let x_{n+1} satisfy this constraint with an integer vector x. Then,

x_i + Σ_{j=m+1}^{n} ⌊y_ij⌋ x_j + x_{n+1} = ⌊y_i0⌋,

which implies that

x_{n+1} = ⌊y_i0⌋ − x_i − Σ_{j=m+1}^{n} ⌊y_ij⌋ x_j.

Because the right-hand side involves only integers, x_{n+1} is an integer.

19.8
If there is only one Gomory cut, then the result follows directly from Exercise 19.7. The general result
follows by induction on the number of Gomory cuts, using Exercise 19.7 at each inductive step.
19.9
The result follows from Exercises 19.5 and 19.8.
19.10
The dual problem is:

minimize 3λ1 + λ2
subject to (2/5)λ1 + (2/5)λ2 ≥ 3
           λ1 − (2/5)λ2 ≥ 4
           λ1, λ2 ≥ 0
           λ1, λ2 ∈ Z.

The problem is solved graphically using the same approach as in Example 19.5. We proceed by calculating
the extreme points of the feasible set. We first assume that λ1, λ2 ∈ R. The extreme points are calculated
by intersecting the given constraints, and they are:

λ(1) = [5, 5/2]>,  λ(2) = [15/2, 0]>.

In Figure 24.10, we show the feasible set for the case when λ1, λ2 ∈ R.
Next we sketch the feasible set for the case when λ1, λ2 ∈ Z and solve the problem graphically. The
graphical solution is depicted in Figure 24.11. We can see in Figure 24.11 that the optimal integer solution
is

λ* = [6, 2]>.
The following MATLAB code generates the figures.

%--------------------------------------------------------
% The vertices of the feasible set are:
x = [2/5 2/5; 1 -2/5]\[3; 4];
X = [7.5 x(1) 20 20];

Figure 24.10 Feasible set for the case when λ1, λ2 ∈ R in Example 19.5.

Figure 24.11 Real feasible set with λ1, λ2 ∈ Z.
Y = [0 x(2) 0 40];
fs = 16; % Fontsize
% Now we draw the set Omega, supposing x1, x2 \in R.
vi = convhull(X,Y);
plot(X(1:2), Y(1:2), 'rX', 'LineWidth', 4);
axis on; axis equal;
axis([-.2 18 -0.2 18]);
set(gca, 'FontSize', fs)
%title('Feasible set supposing x1, x2 in R.','FontSize',14,'FontName','Avant-garde');
hold on
fill(X(vi), Y(vi), 'b', 'FaceAlpha', 0.2);
text(.1, .5, '\fontsize{48}\Omega', 'Position', [12 5])
hold off

c = [3 1];

% Now we draw the real feasible set for the problem.
figure
axis on; axis equal;
axis([-.2 18 -0.2 18]);
set(gca, 'FontSize', fs)
%title('Feasible set and cost function','FontSize',14,'FontName','Avant-garde');
hold on

X = [];
Y = [];

for i=1:18
  j=0;
  while ((j<=((i-4)*2.5)) && (j<18))
    if (j>=(7.5-i))
      X = [X i];
      Y = [Y j];
    end
    j = j+1;
  end
end

plot(X, Y, 'bs', 'LineWidth', 1, ...
    'MarkerEdgeColor', 'k', ...
    'MarkerFaceColor', 'g', ...
    'MarkerSize', 10)
x = [5:0.1:18];
y1 = (x-4)*2.5;
y2 = (7.5-x);

plot(x, y1, '--b', x, y2, '--b', 'LineWidth', 2);
set(gca, 'FontSize', fs)
%text(.1,.5,'\fontsize{48}\Omega','Position',[12 5])

% Plot of the cost function at level 17.5
xc = [-1:0.5:18];
yc = (35/2)*ones(1,length(xc)) - 3*xc;
plot(xc, yc, 'dk', 'LineWidth', 2);

% Plot of the cost function
xc = [-1:0.5:18];
yc = (20)*ones(1,length(xc)) - 3*xc;
plot(xc, yc, 'r', 'LineWidth', 2);

[~,Xmin] = min(c*[X; Y]); % index of the minimizing integer point
str = sprintf('The minimizer is [%d, %d] and the minimum is %.4f', ...
    X(Xmin), Y(Xmin), c*[X(Xmin); Y(Xmin)]);
disp(str);
%--------------------------------------------------------

20. Problems with Equality Constraints
20.1
The feasible set consists of the points

x = [2, a]>,  a ≥ −1.

We next find the gradients:

∇h(x) = [2(x1 − 2), 0]>  and  ∇g(x) = [0, 3(x2 + 1)²]>.

All feasible points fail to be regular because at these points the gradients of h and g are not linearly
independent (indeed, ∇h(x) = 0 at every feasible point). There are no regular points of the constraints.
20.2
a. As usual, let f be the objective function, and h the constraint function. We form the Lagrangian
l(x, λ) = f(x) + λ>h(x), and then find critical points by solving the following equations (Lagrange condition):

Dx l(x, λ) = 0>,  Dλ l(x, λ) = 0>.
We obtain

[2 2 0 1 4] [x1]   [4]
[2 6 0 2 0] [x2]   [5]
[0 0 0 0 5] [x3] = [6] .
[1 2 0 0 0] [λ1]   [3]
[4 0 5 0 0] [λ2]   [6]
The unique solution to the above system is

Note that x* is a regular point. We now apply the SOSC. We compute

L(x*, λ*) = F(x*) + [λ1* H1(x*) + λ2* H2(x*)] = [2 2 0; 2 6 0; 0 0 0].

The tangent plane is

T(x*) = {y ∈ R³ : [1 2 0; 4 0 5] y = 0} = {a[5/4, −5/8, −1]> : a ∈ R}.

Let y = a[5/4, −5/8, −1]> ∈ T(x*), a ≠ 0. We have

y>L(x*, λ*)y = (75/32) a² > 0.

Therefore, x* is a strict local minimizer.
b. The Lagrange condition for this problem is

4 + 2λx1 = 0
2x2 + 2λx2 = 0
x1² + x2² − 9 = 0.
We have four points satisfying the Lagrange condition:

x(1) = [3, 0]>,    λ(1) = −2/3
x(2) = [−3, 0]>,   λ(2) = 2/3
x(3) = [2, √5]>,   λ(3) = −1
x(4) = [2, −√5]>,  λ(4) = −1.

Note that all four points x(1), . . . , x(4) are regular. We now apply the SOSC. We have

L(x, λ) = [0 0; 0 2] + λ[2 0; 0 2],
T(x) = {y : [2x1, 2x2]y = 0}.

For the first point, we have

L(x(1), λ(1)) = [−4/3 0; 0 2/3]
T(x(1)) = {a[0, 1]> : a ∈ R}.

Let y = a[0, 1]> ∈ T(x(1)), a ≠ 0. Then

y>L(x(1), λ(1))y = (2/3)a² > 0.

Hence, x(1) is a strict local minimizer.
For the second point, we have

L(x(2), λ(2)) = [4/3 0; 0 10/3] > 0.

Hence, x(2) is a strict local minimizer.
For the third point, we have

L(x(3), λ(3)) = [−2 0; 0 0]
T(x(3)) = {a[−√5, 2]> : a ∈ R}.

Let y = a[−√5, 2]> ∈ T(x(3)), a ≠ 0. Then

y>L(x(3), λ(3))y = −10a² < 0.

Hence, x(3) is a strict local maximizer.
For the fourth point, we have

L(x(4), λ(4)) = [−2 0; 0 0]
T(x(4)) = {a[√5, 2]> : a ∈ R}.

Let y = a[√5, 2]> ∈ T(x(4)), a ≠ 0. Then

y>L(x(4), λ(4))y = −10a² < 0.

Hence, x(4) is a strict local maximizer.

c. The Lagrange condition for this problem is

x2 + 2λx1 = 0
x1 + 8λx2 = 0
x1² + 4x2² − 1 = 0.

We have four points satisfying the Lagrange condition:

x(1) = [1/√2, 1/(2√2)]>,    λ(1) = −1/4
x(2) = [−1/√2, −1/(2√2)]>,  λ(2) = −1/4
x(3) = [1/√2, −1/(2√2)]>,   λ(3) = 1/4
x(4) = [−1/√2, 1/(2√2)]>,   λ(4) = 1/4.

Note that all four points x(1), . . . , x(4) are regular. We now apply the SOSC. We have

L(x, λ) = [0 1; 1 0] + λ[2 0; 0 8],
T(x) = {y : [2x1, 8x2]y = 0}.

Note that

L(x, −1/4) = [−1/2 1; 1 −2]
L(x, 1/4) = [1/2 1; 1 2].

After standard manipulations, we conclude that the first two points are strict local maximizers, while the
last two points are strict local minimizers.
20.3
We form the Lagrangian

l(x, λ) = f(x) + λ1 h1(x) + λ2 h2(x).

The Lagrange conditions take the form

∇x l = (ab> + ba>)x + λ1 ∇x h1(x) + λ2 ∇x h2(x)
     = [0 1 0; 1 0 1; 0 1 0]x + λ1[1, 1, 0]> + λ2[0, 1, 1]>
     = [x2 + λ1, x1 + x3 + λ1 + λ2, x2 + λ2]>
     = [0, 0, 0]>,

h(x) = [x1 + x2; x2 + x3] = [0; 0].

It is easy to see that x* = 0 and λ* = 0 satisfy the Lagrange (FONC) conditions.
The Hessian of the Lagrangian is

L(x*, λ*) = ab> + ba> = [0 1 0; 1 0 1; 0 1 0]

and the tangent space is

T(x*) = {y : y = a[1, −1, 1]>, a ∈ R}.

To verify whether the critical point satisfies the SOSC, we evaluate

y>L(x*, λ*)y = −4a² < 0.

Thus the critical point is a strict local maximizer.
20.4
By the Lagrange condition, x* = [x1*, x2*]> satisfies

x1* + λ = 0
x1* + 4 + 4λ = 0.

Eliminating λ, we get 3x1* − 4 = 0, which implies that x1* = 4/3. Therefore, ∇f(x*) = [4/3, 16/3]>.

20.5
a. The Lagrange condition for this problem is:

2(x* − x0) + 2λx* = 0
‖x*‖² = 9,

where λ ∈ R. Rewriting the first equation we get (1 + λ)x* = x0, which when combined with the second
equation gives two values for 1 + λ: 1 + λ1 = 2/3 and 1 + λ2 = −2/3. Hence there are two solutions to the
Lagrange condition: x(1) = (3/2)[1, √3]>, and x(2) = −(3/2)[1, √3]>.
b. We have L(x(i), λi) = (1 + λi)I. To apply the SONC theorem, we need to check regularity. This is
easy, since the gradient of the constraint function at any point x is 2x, which is nonzero at both the points
in part a.
For the second point, 1 + λ2 = −2/3 < 0, which implies that the point is not a local minimizer because the
SONC does not hold.
On the other hand, the first point satisfies the SOSC (since 1 + λ1 = 2/3 > 0), which implies that it is a strict
local minimizer.
20.6
a. Let x1 , x2 , and x3 be the dimensions of the closed box. The problem is
minimize 2(x1 x2 + x2 x3 + x3 x1 )
subject to x1 x2 x3 = V.
We denote f(x) = 2(x1x2 + x2x3 + x3x1), and h(x) = x1x2x3 − V. We have ∇f(x) = 2[x2 + x3, x1 + x3,
x1 + x2]> and ∇h(x) = [x2x3, x1x3, x1x2]>. By the Lagrange condition, the dimensions of the box with
minimum surface area satisfy
2(b + c) + λbc = 0
2(a + c) + λac = 0
2(a + b) + λab = 0
abc = V,

where λ ∈ R.
b. Regularity of x* means ∇h(x*) ≠ 0 (since there is only one scalar equality constraint in this case). Since
x* = [a, b, c]> is a feasible point, we must have a, b, c ≠ 0 (for otherwise the volume would be 0). Hence,
∇h(x*) ≠ 0, which implies that x* is regular.
c. Multiplying the first equation by a and the second equation by b, and then subtracting the first from the
second, we obtain:
c(a b) = 0.
Since c 6= 0 (see part b), we conclude that a = b. By a similar procedure on the second and third equations,
we conclude that b = c. Hence, substituting into the fourth (constraint) equation, we obtain

a = b = c = V^{1/3},

with λ = −4V^{−1/3}.
d. The Hessian of the Lagrangian is given by

L(x*, λ*) = [0, 2+λc, 2+λb; 2+λc, 0, 2+λa; 2+λb, 2+λa, 0] = [0 −2 −2; −2 0 −2; −2 −2 0]
          = −2[0 1 1; 1 0 1; 1 1 0].

The matrix L(x*, λ*) is not positive definite (there are several ways to check this: we could use Sylvester's
criterion, or we could compute the eigenvalues of L(x*, λ*), which are 2, 2, −4). Therefore, we need to compute
the tangent space T(x*). Note that

Dh(x*) = ∇h(x*)> = [bc, ac, ab] = V^{2/3}[1, 1, 1].

Hence,

T(x*) = {y : Dh(x*)y = 0} = {y : [1, 1, 1]y = 0} = {y : y3 = −(y1 + y2)}.

Let y ∈ T(x*), y ≠ 0. Note that either y1 ≠ 0 or y2 ≠ 0. We have

y>L(x*, λ*)y = −2y>[0 1 1; 1 0 1; 1 1 0]y = −4(y1y2 + y1y3 + y2y3).

Substituting y3 = −(y1 + y2), we obtain

y>L(x*, λ*)y = −4(y1y2 − (y1 + y2)²) = 4(y1² + y1y2 + y2²) = 4z>Qz,

where z = [y1, y2]> ≠ 0 and

Q = [1 1/2; 1/2 1] > 0.

Therefore, y>L(x*, λ*)y > 0, which shows that the SOSC is satisfied.
An alternative (simpler) calculation:

y>L(x*, λ*)y = −2y>[0 1 1; 1 0 1; 1 1 0]y = −2(y1(y2 + y3) + y2(y1 + y3) + y3(y1 + y2)).

Substituting y1 = −(y2 + y3), y2 = −(y1 + y3), and y3 = −(y1 + y2) in the first, second, and third terms,
respectively, we obtain

y>L(x*, λ*)y = 2(y1² + y2² + y3²) > 0.
20.7
a. We first compute critical points by applying the Lagrange conditions. These are:

2x1 + 2λx1 = 0
6x2 + 2λx2 = 0
1 + 2λx3 = 0
x1² + x2² + x3² − 16 = 0.

There are six points satisfying the Lagrange condition:

x(1) = [√63/2, 0, 1/2]>,    λ(1) = −1
x(2) = [−√63/2, 0, 1/2]>,   λ(2) = −1
x(3) = [0, 0, 4]>,          λ(3) = −1/8
x(4) = [0, 0, −4]>,         λ(4) = 1/8
x(5) = [0, √575/6, 1/6]>,   λ(5) = −3
x(6) = [0, −√575/6, 1/6]>,  λ(6) = −3.

All the above points are regular. We now apply second order conditions to establish their nature. For this,
we compute

F(x) = [2 0 0; 0 6 0; 0 0 0],  H(x) = [2 0 0; 0 2 0; 0 0 2],

and

T(x*) = {y ∈ R³ : [2x1*, 2x2*, 2x3*]y = 0}.

For the first point, we have

L(x(1), λ(1)) = [0 0 0; 0 4 0; 0 0 −2]
T(x(1)) = {[−a/√63, b, a]> : a, b ∈ R}.

Let y = [−a/√63, b, a]> ∈ T(x(1)), where a and b are not both zero. Then

y>L(x(1), λ(1))y = 4b² − 2a²,  which is > 0 if |a| < |b|√2, = 0 if |a| = |b|√2, and < 0 if |a| > |b|√2.

From the above, we see that x(1) does not satisfy the SONC. Therefore, x(1) cannot be an extremizer.
Performing similar calculations for x(2), we conclude that x(2) cannot be an extremizer either.
For the third point, we have

L(x(3), λ(3)) = [7/4 0 0; 0 23/4 0; 0 0 −1/4]
T(x(3)) = {[a, b, 0]> : a, b ∈ R}.

Let y = [a, b, 0]> ∈ T(x(3)), where a and b are not both zero. Then

y>L(x(3), λ(3))y = (7/4)a² + (23/4)b² > 0.
Hence, x(3) is a strict local minimizer. Performing similar calculations for the remaining points, we conclude
that x(4) is a strict local minimizer, and x(5) and x(6) are both strict local maximizers.
b. The Lagrange condition for the problem is:

2x1 + λ(6x1 + 4x2) = 0
2x2 + λ(4x1 + 12x2) = 0
3x1² + 4x1x2 + 6x2² − 140 = 0.

We represent the first two equations as

[2 + 6λ, 4λ; 4λ, 2 + 12λ][x1; x2] = [0; 0].

From the constraint equation, we note that x = [0, 0]> cannot satisfy the Lagrange condition. Therefore,
the determinant of the above matrix must be zero. Solving for λ yields two possible values: −1/7 and −1/2.
We then have four points satisfying the Lagrange condition:

x(1) = [2, 4]>,           λ(1) = −1/7
x(2) = [−2, −4]>,         λ(2) = −1/7
x(3) = [2√14, −√14]>,     λ(3) = −1/2
x(4) = [−2√14, √14]>,     λ(4) = −1/2.

Applying the SOSC, we conclude that x(1) and x(2) are strict local minimizers, and x(3) and x(4) are strict
local maximizers.
20.8
a. We can represent the problem as

minimize f (x)
subject to h(x) = 0,

where f(x) = 2x1 + 3x2 − 4, and h(x) = x1x2 − 6. We have Df(x) = [2, 3], and Dh(x) = [x2, x1]. Note
that 0 is not a feasible point. Therefore, any feasible point is regular. If x* is a local extremizer, then by
the Lagrange multiplier theorem, there exists λ ∈ R such that Df(x*) + λDh(x*) = 0>, or

2 + λx2 = 0
3 + λx1 = 0.

Solving, we get two possible extremizers: x(1) = [3, 2]>, with corresponding Lagrange multiplier λ(1) = −1,
and x(2) = [−3, −2]>, with corresponding Lagrange multiplier λ(2) = 1.
b. We have F(x) = O, and

H(x) = [0 1; 1 0].

First, consider the point x(1) = [3, 2]>, with corresponding Lagrange multiplier λ(1) = −1. We have

L(x(1), λ(1)) = [0 −1; −1 0],

and

T(x(1)) = {y : [2, 3]y = 0} = {σ[3, −2]> : σ ∈ R}.

Let y = σ[3, −2]> ∈ T(x(1)), σ ≠ 0. We have

y>L(x(1), λ(1))y = 12σ² > 0.
Therefore, by the SOSC, x(1) = [3, 2]> is a strict local minimizer.
Next, consider the point x(2) = [−3, −2]>, with corresponding Lagrange multiplier λ(2) = 1. We have

L(x(2), λ(2)) = [0 1; 1 0],

and

T(x(2)) = {y : [−2, −3]y = 0} = {σ[3, −2]> : σ ∈ R} = T(x(1)).

Let y = σ[3, −2]> ∈ T(x(2)), σ ≠ 0. We have

y>L(x(2), λ(2))y = −12σ² < 0.

Therefore, by the SOSC, x(2) = [−3, −2]> is a strict local maximizer.

c. Note that f(x(1)) = 8, while f(x(2)) = −16. Therefore, x(1), although a strict local minimizer, is not a
global minimizer. Likewise, x(2), although a strict local maximizer, is not a global maximizer.
20.9
We observe that f(x1, x2) is a ratio of two quadratic functions; that is, we can represent f(x1, x2) as

f(x1, x2) = (x>Qx)/(x>Px).

Therefore, if a point x is a maximizer of f(x1, x2), then so is any nonzero multiple of this point, because

((tx)>Q(tx))/((tx)>P(tx)) = (t²x>Qx)/(t²x>Px) = (x>Qx)/(x>Px).
Thus any nonzero multiple of a solution is also a solution. To proceed, represent the original problem in an
equivalent form:

maximize x>Qx = 18x1² − 8x1x2 + 12x2²
subject to x>Px = 2x1² + 2x2² = 1.

Thus, we wish to maximize f(x1, x2) = 18x1² − 8x1x2 + 12x2² subject to the equality constraint h(x1, x2) =
1 − 2x1² − 2x2² = 0. We apply Lagrange's method to solve the problem. We form the Lagrangian function

l(x, λ) = f + λh,

compute its gradient, and find critical points. We have

∇x l = ∇x (x>[18 −4; −4 12]x + λ(1 − x>[2 0; 0 2]x))
     = 2[18 −4; −4 12]x − 2λ[2 0; 0 2]x
     = 0.

We represent the above in an equivalent form:

(λI − [2 0; 0 2]^{-1}[18 −4; −4 12]) x = 0.

That is, solving the problem reduces to solving an eigenvalue-eigenvector problem:

(λI − [9 −2; −2 6]) x = [λ−9, 2; 2, λ−6] x = 0.
The characteristic polynomial is

det [λ−9, 2; 2, λ−6] = λ² − 15λ + 50 = (λ − 5)(λ − 10).

The eigenvalues are 5 and 10. Because we are interested in finding a maximizer, we conclude that the value
of the maximized function is 10, while the corresponding maximizer is an appropriately scaled (so as to satisfy
the constraint) eigenvector for this eigenvalue. An eigenvector can easily be found by taking any
nonzero column of the adjoint matrix of

10I − [9 −2; −2 6] = [1 2; 2 4].

Performing simple manipulations gives

adj [1 2; 2 4] = [4 −2; −2 1].

Thus,

x = √0.1 [2, −1]>

is a maximizer for the equivalent problem. Any multiple of the above vector is a solution of the original
maximization problem.
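The eigenvalue computation can be confirmed numerically. The sketch below (ours, not part of the manual) solves the equivalent eigenproblem P^{-1}Qx = λx and checks that the largest eigenvalue, 10, is attained along the direction [2, −1]>:

```python
import numpy as np

# Sketch (ours): maximize x'Qx / x'Px via the eigenproblem P^{-1}Q x = lambda x.
Q = np.array([[18.0, -4.0], [-4.0, 12.0]])
P = np.array([[2.0, 0.0], [0.0, 2.0]])

M = np.linalg.solve(P, Q)            # equals [[9, -2], [-2, 6]], symmetric
eigvals, eigvecs = np.linalg.eigh(M) # ascending eigenvalues
lam_max = eigvals[-1]
x = eigvecs[:, -1]                   # eigenvector for the largest eigenvalue
value = (x @ Q @ x) / (x @ P @ x)    # generalized Rayleigh quotient
```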
20.10
We use the technique of Example 20.8. First, we write the objective function in the form x> Qx, where
" #
> 3 2
Q=Q = .
2 3

The characteristic polynomial of Q is 2 6 + 5, and the eigenvalues of Q are 1 and 5. The solutionsto
the problem are the unit length eigenvectors of Q corresponding to the eigenvalue 5, which are [1, 1]> / 2.
20.11
Consider the problem

minimize kAxk2
subject to kxk2 = 1.

The optimal objective function value of this problem is the smallest value that ‖y‖² can take. The above
can be solved easily using Lagrange multipliers. The Lagrange conditions are

x>A>A − λx> = 0>
1 − x>x = 0.

The first equation can be rewritten as A>Ax = λx, which implies that λ is an eigenvalue of A>A. Moreover,
premultiplying by x> yields x>A>Ax = λx>x = λ, which indicates that the Lagrange multiplier λ is equal
to the optimal objective function value. Hence, the range of values that ‖y‖ = ‖Ax‖ can take is 1 to 20.
20.12
Consider the following optimization problem (we need to use squared norms to make the functions differen-
tiable):

maximize ‖Ax‖²
subject to ‖x‖² = 1.

As usual, write f(x) = ‖Ax‖² and h(x) = ‖x‖² − 1. We have ∇f(x) = 2A>Ax and ∇h(x) = 2x. Note
that all feasible points are regular. Let x* be an optimal solution. Note that the optimal value of the
objective function is f(x*) = ‖A‖2². The Lagrange condition for the above problem is:

2A>Ax* − λ*(2x*) = 0
‖x*‖² = 1.

From the first equation, we see that

A>Ax* = λ*x*,

which implies that λ* is an eigenvalue of A>A, and x* is a corresponding eigenvector. Premultiplying the
above equation by x*> and combining the result with the constraint equation, we obtain

λ* = x*>A>Ax* = ‖Ax*‖² = f(x*) = ‖A‖2².

Therefore, because x* maximizes f, we deduce that λ* must be the largest eigenvalue of A>A; i.e.,
λ* = λ1. Therefore,

‖A‖2 = √λ1.
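This characterization of the spectral norm is easy to confirm numerically (our sketch, on an arbitrary random matrix):

```python
import numpy as np

# Sketch (ours): ||A||_2 equals the square root of the largest eigenvalue of A^T A.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
lam1 = np.linalg.eigvalsh(A.T @ A).max()   # largest eigenvalue of A^T A
```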

20.13
Let h(x) = 1 − x>Px = 0. Let x0 be such that h(x0) = 0. Then, x0 ≠ 0. For x0 to be a regular point, we
need to show that {∇h(x0)} is a linearly independent set, i.e., ∇h(x0) ≠ 0. Now, ∇h(x) = −2Px. Since
P is nonsingular, and x0 ≠ 0, we have ∇h(x0) = −2Px0 ≠ 0.
20.14
Note that the point [1, 1]> is a regular point. Applying the Lagrange multiplier theorem gives

a + 2 = 0

b + 2 = 0.

Hence, a = b.
20.15
a. Denote the solution by [x1*, x2*]>. The Lagrange condition for this problem has the form

−x2* + 2 + 2λx1* = 0
−x1* − 2λx2* = 0
(x1*)² − (x2*)² = 0.

From the first and third equations it follows that x1*, x2* ≠ 0. Then, combining the first and second equations,
we obtain

λ = (x2* − 2)/(2x1*) = −x1*/(2x2*),

which implies that 2x2* − (x2*)² = (x1*)². Hence, x2* = 1, and by the third Lagrange equation, (x1*)² = 1.
Thus, the only two points satisfying the Lagrange condition are [1, 1]> and [−1, 1]>. Note that both points
are regular.
b. Consider the point x* = [1, 1]>. The corresponding Lagrange multiplier is λ* = −1/2. The Hessian of
the Lagrangian is

L(x*, λ*) = [0 −1; −1 0] − (1/2)[2 0; 0 −2] = [−1 −1; −1 1].

The tangent plane is given by

T(x*) = {y : [2, −2]y = 0} = {[a, a]> : a ∈ R}.

Let y ∈ T(x*), y ≠ 0. Then, y = [a, a]> for some a ≠ 0. We have y>L(x*, λ*)y = −2a² < 0. Hence, the
SONC does not hold in this case, and therefore x* = [1, 1]> cannot be a local minimizer. In fact, the point
is a strict local maximizer.
c. Consider the point x* = [−1, 1]>. The corresponding Lagrange multiplier is λ* = 1/2. The Hessian of the
Lagrangian is

L(x*, λ*) = [0 −1; −1 0] + (1/2)[2 0; 0 −2] = [1 −1; −1 −1].

The tangent plane is given by

T(x*) = {y : [−2, −2]y = 0} = {[a, −a]> : a ∈ R}.

Let y ∈ T(x*), y ≠ 0. Then, y = [a, −a]> for some a ≠ 0. We have y>L(x*, λ*)y = 2a² > 0. Hence, by the
SOSC, the point x* = [−1, 1]> is a strict local minimizer.
20.16
a. The point x is the solution to the optimization problem
minimize (1/2)‖x − x0‖²
subject to Ax = 0.

Since rank A = m, any feasible point is regular. By the Lagrange multiplier theorem, there exists λ ∈ R^m
such that

(x* − x0)> − λ>A = 0>.

Postmultiplying both sides by x* and using the fact that Ax* = 0, we get

(x* − x0)>x* = 0.

b. From part a, we have

x* − x0 = A>λ.

Premultiplying both sides by A and using Ax* = 0, we get

−Ax0 = (AA>)λ,

from which we conclude that λ = −(AA>)^{-1}Ax0. Hence,

x* = x0 + A>λ = x0 − A>(AA>)^{-1}Ax0 = (I_n − A>(AA>)^{-1}A)x0.
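The formula from part b is the familiar orthogonal projector onto the null space of A; a quick numerical check (ours, on a random instance):

```python
import numpy as np

# Sketch (ours): x* = (I - A^T (A A^T)^{-1} A) x0 satisfies A x* = 0 and
# (x* - x0)^T x* = 0, as derived in parts a and b.
rng = np.random.default_rng(2)
m, n = 2, 5
A = rng.standard_normal((m, n))   # rank A = m with probability 1
x0 = rng.standard_normal(n)

P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)
x_star = P @ x0
```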

20.17
a. The Lagrange condition is (omitting all superscript stars for convenience):

(Ax − b)>A + λ>C = 0>
Cx = d.

For simplicity, write Q = A>A, which is positive definite. From the first equation, we have

x = Q^{-1}A>b − Q^{-1}C>λ.

Premultiplying by C and using the constraint Cx = d, we get

d = CQ^{-1}A>b − CQ^{-1}C>λ,

from which we obtain

λ = (CQ^{-1}C>)^{-1}(CQ^{-1}A>b − d).

Substituting back into the equation for x, we obtain

x = Q^{-1}A>b − Q^{-1}C>(CQ^{-1}C>)^{-1}(CQ^{-1}A>b − d).

b. Rewrite the objective function as

(1/2)x>A>Ax − b>Ax + (1/2)‖b‖².

As before, write Q = A>A. Completing the square and setting y = x − Q^{-1}A>b, the objective function
can be written as

(1/2)y>Qy + const.

Hence, the problem can be converted to the equivalent QP:

minimize (1/2)y>Qy
subject to Cy = d − CQ^{-1}A>b.

The solution to this QP is

y = Q^{-1}C>(CQ^{-1}C>)^{-1}(d − CQ^{-1}A>b).

Hence, the solution to the original problem is:

x = Q^{-1}A>b + Q^{-1}C>(CQ^{-1}C>)^{-1}(d − CQ^{-1}A>b)
  = (A>A)^{-1}A>b + (A>A)^{-1}C>(C(A>A)^{-1}C>)^{-1}(d − C(A>A)^{-1}A>b),

which agrees with the solution obtained in part a.
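Both closed forms can be cross-checked against a direct solve of the KKT system [Q C>; C 0][x; λ] = [A>b; d] (our sketch; the random instance is arbitrary):

```python
import numpy as np

# Sketch (ours): minimize ||Ax - b||^2 subject to Cx = d.
rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)
C = rng.standard_normal((2, 4))
d = rng.standard_normal(2)

Q = A.T @ A
Qi = np.linalg.inv(Q)
lam = np.linalg.solve(C @ Qi @ C.T, C @ Qi @ A.T @ b - d)
x_star = Qi @ A.T @ b - Qi @ C.T @ lam           # part a's closed form

K = np.block([[Q, C.T], [C, np.zeros((2, 2))]])  # KKT system
x_kkt = np.linalg.solve(K, np.r_[A.T @ b, d])[:4]
```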

20.18
Write f(x) = (1/2)x>Qx − c>x + d (actually, we could have ignored d) and h(x) = b − Ax. The Lagrange
condition is

Qx* − c − A>λ* = 0
b − Ax* = 0.

From the first equation we get

x* = Q^{-1}(A>λ* + c).

Multiplying both sides by A and using the second equation (constraint), we get

λ* = (AQ^{-1}A>)^{-1}(b − AQ^{-1}c).

Hence,

x* = Q^{-1}c + Q^{-1}A>(AQ^{-1}A>)^{-1}(b − AQ^{-1}c).

Alternatively, we could have rewritten the given problem in our usual quadratic programming form with
variable y = x − Q^{-1}c.
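A numerical check of the closed form (our sketch; Q is made positive definite by construction):

```python
import numpy as np

# Sketch (ours): minimize (1/2) x'Qx - c'x subject to Ax = b, with Q > 0.
rng = np.random.default_rng(4)
n, m = 5, 2
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)          # positive definite
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

Qi = np.linalg.inv(Q)
lam = np.linalg.solve(A @ Qi @ A.T, b - A @ Qi @ c)
x_star = Qi @ c + Qi @ A.T @ lam
grad = Q @ x_star - c - A.T @ lam    # Lagrange-condition residual
```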
20.19
Clearly, we have M = R(B), i.e., y ∈ M if and only if there exists x ∈ R^m such that y = Bx. Hence,

L is positive semidefinite on M
⇔ for all y ∈ M, y>Ly ≥ 0
⇔ for all x ∈ R^m, (Bx)>L(Bx) ≥ 0
⇔ for all x ∈ R^m, x>(B>LB)x ≥ 0
⇔ for all x ∈ R^m, x>L_M x ≥ 0
⇔ L_M ≥ 0.

For positive definiteness, the same argument applies, with ≥ replaced by >.
20.20
a. By simple manipulations, we can write

x2 = a2 x0 + abu0 + bu1 .

## Therefore, the problem is

minimize (1/2)(u0² + u1²)
subject to a²x0 + abu0 + bu1 = 0.

## Alternatively, we may use a vector notation: writing u = [u0 , u1 ]> , we have

minimize f (u)
subject to h(u) = 0,

where f (u) = 21 kuk2 , and h(u) = a2 x0 + [ab, b]u. Since the vector h(u) = [ab, b]> is nonzero for any u,
then any feasible point is regular. Therefore, by the Lagrange multiplier theorem, there exists R such
that

u0 + λab = 0
u1 + λb = 0
a²x0 + abu0 + bu1 = 0.

We have three linear equations in three unknowns, which upon solving yield

u0* = −a³x0/(b(1 + a²)),  u1* = −a²x0/(b(1 + a²)).

b. The Hessians of f and h are F (u) = I 2 (2 2 identity matrix) and H(u) = O, respectively. Hence, the
Hessian of the Lagrangian is L(u , ) = I 2 , which is positive definite. Therefore, u satisfies the SOSC,
and is therefore a strict local minimizer.
20.21
Letting z = [x2, u1, u2]>, the objective function is z>Qz, where

Q = [1 0 0; 0 1/2 0; 0 0 1/3].

The linear constraint on z is obtained by writing

x2 = 2x1 + u2 = 2(2 + u1) + u2,

which can be written as Az = b, where

A = [1, −2, −1],  b = 4.

Hence, using the method of Section 20.6, the solution is

z* = Q^{-1}A>(AQ^{-1}A>)^{-1}b = [1, −4, −3]>(12)^{-1}(4) = [1/3, −4/3, −1]>.

Thus, the optimal controls are u1* = −4/3 and u2* = −1.
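The arithmetic can be verified directly (our sketch):

```python
import numpy as np

# Sketch (ours): z* = Q^{-1} A^T (A Q^{-1} A^T)^{-1} b for the data above.
Q = np.diag([1.0, 0.5, 1.0 / 3.0])
A = np.array([[1.0, -2.0, -1.0]])
b = np.array([4.0])

Qi = np.linalg.inv(Q)
z_star = Qi @ A.T @ np.linalg.solve(A @ Qi @ A.T, b)
```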

20.22
The composite input vector is

u = [u0, u1, u2]>.

The performance index is J = (1/2)u>u. To obtain the constraint Au = b, where A ∈ R^{1×3}, we proceed as
follows. First, we write

x2 = x1 + 2u1 = x0 + 2u0 + 2u1.

Using the above, we obtain

x3 = 9 = x2 + 2u2 = x0 + 2u0 + 2u1 + 2u2.

We represent the above in the format Au = b as follows:

[2 2 2][u0, u1, u2]> = 6.

Thus we have formulated the problem of finding the optimal control sequence as the constrained optimization
problem

minimize (1/2)u>u
subject to Au = b.

To solve the above problem, we form the Lagrangian

l(u, λ) = (1/2)u>u + λ(Au − b),

where λ is the Lagrange multiplier. Applying the Lagrange first-order condition yields

u + A>λ = 0 and Au = b.

From the first of the above conditions, we calculate u = −A>λ. Substituting this into the second of
the Lagrange conditions gives

λ = −(AA>)^{-1}b.

Combining the last two equations, we obtain a closed-form formula for the optimal input sequence

u* = A>(AA>)^{-1}b.

In our problem,

u* = [u0*, u1*, u2*]> = A>(AA>)^{-1}b = [1, 1, 1]>.
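And a direct check of the final numbers (our sketch):

```python
import numpy as np

# Sketch (ours): minimum-norm control u* = A^T (A A^T)^{-1} b.
A = np.array([[2.0, 2.0, 2.0]])
b = np.array([6.0])
u_star = A.T @ np.linalg.solve(A @ A.T, b)
```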

21. Problems with Inequality Constraints

21.1
a. We form the Lagrangian function

l(x, μ) = f(x) + μg(x).

The KKT conditions take the form

Dx l(x, μ) = [2x1 − 2μx1, 8x2 − 4μx2] = 0>
μ(4 − x1² − 2x2²) = 0
μ ≥ 0
4 − x1² − 2x2² ≤ 0.

From the first of the above equalities, we obtain

2(1 − μ)x1 = 0 and 4(2 − μ)x2 = 0.

We first consider the case when μ = 0. Then, we obtain the point x(1) = 0, which does not satisfy the
constraint.
The next case is μ = 1. Then we must have x2 = 0, and using μ(4 − x1² − 2x2²) = 0 gives

x(2) = [2, 0]> and x(3) = [−2, 0]>.

Finally, when μ = 2, we must have x1 = 0, and the same complementarity condition gives

x(4) = [0, √2]> and x(5) = [0, −√2]>.

b. The Hessian of l is

L = [2 0; 0 8] + μ[−2 0; 0 −4].

When μ = 1,

L = [0 0; 0 4].

We next find the subspace

T̃ = T = {y : [4, 0]y = 0} = {y = a[0, 1]> : a ∈ R}.

We then check for positive definiteness of L on T̃:

y>Ly = a²[0, 1][0 0; 0 4][0, 1]> = 4a² > 0.

Hence, x(2) and x(3) satisfy the SOSC to be strict local minimizers.
When μ = 2,

L = [−2 0; 0 0],

and

T̃ = {y = a[1, 0]> : a ∈ R}.

We have

y>Ly = −2a² < 0.

Thus, x(4) and x(5) do not satisfy the SONC to be minimizers.
In summary, only x(2) and x(3) are strict local minimizers.
21.2
a. We first find critical points by applying the Karush-Kuhn-Tucker conditions, which are
2x1 2 21 x1 + 52 = 0
1 1
2x2 10 + 1 + 2 = 0
   5 2 
1 1
1 x2 x21 + 2 5x1 + x2 5 = 0
5 2
0.
We have to check four possible combinations.
Case 1: (1 = 0, 2 = 0) Solving the first and second Karush-Kuhn-Tucker equations yields x(1) = [1, 5]> .
However, this point is not feasible and is therefore not a candidate minimizer.
Case 2: (1 > 0, 2 = 0) We have two possible solutions:
(2)
x(2) = [0.98, 4.8]> 1 = 2.02
(3)
x(3) = [0.02, 0]> 1 = 50.
Both x(2) and x(3) satisfy the constraints, and are therefore candidate minimizers.
Case 3: (1 = 0, 2 > 0) Solving the corresponding equation yields:
(4)
x(4) = [0.5050, 4.9505]> 1 = 0.198.
The point x(4) is not feasible, and hence is not a candidate minimizer.
Case 4: (1 > 0, 2 > 0) We have two solutions:
x(5) = [0.732, 2.679]> (5) = [13.246, 3.986]>
x(6) = [2.73, 37.32]> (6) = [188.8, 204]> .
The point x(5) is feasible, but x(6) is not.
We are left with three candidate minimizers: x(2) , x(3) , and x(5) . It is easy to check that they are regular.
We now check if each satisfies the second order conditions. For this, we compute
" #
2 21 0
L(x, ) = .
0 2

## For x(2) , we have

" #
2.04 0
L(x(2) , (2) ) =
0 2
T(x(2) ) = {a[0.1021, 1]> : a R}.
Let y = a[0.1021, 1]> T(x(2) ) with a 6= 0. Then
y > L(x(2) , (2) )y = 1.979a2 > 0.
Thus, by the SOSC, x(2) is a strict local minimizer.
For x(3) , we have
" #
(3) (3) 97.958 0
L(x , ) =
0 2
T(x(3) ) = {a[4.898, 1]> : a R}.
184
Let y = a[4.898, 1]> T(x(3) ) with a 6= 0. Then,

## y > L(x(3) , (3) )y = 2347.9a2 < 0.

Thus, x(3) does not satisfy the SOSC. In fact, in this case, we have T (x(3) ) = T(x(3) ), and hence x(3) does
not satisfy the SONC either. We conclude that x(3) is not a local minimizer. We can easily check that x(3)
is not a local maximizer either.
For x(5), we have

L(x(5), μ(5)) = diag[-24.4919, 2],
T(x(5)) = {0}.

The SOSC is trivially satisfied, and therefore x(5) is a strict local minimizer.
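The Case 4 candidate can be sanity-checked by plugging it back into the stationarity equation numerically. The problem data below are the ones inferred in this solution (f(x) = (x1 - 1)^2 + (x2 - 5)^2, g1(x) = (1/5)x2 - x1^2 ≤ 0, g2(x) = 5x1 + (1/2)x2 - 5 ≤ 0), so treat this as an illustrative sketch rather than an authoritative restatement of the exercise:

```python
# Check the Case 4 candidate x(5) of 21.2(a) against stationarity.
# Problem data are an assumption of this sketch (see lead-in above).
x1, x2 = 0.732, 2.679
mu1, mu2 = 13.246, 3.986

# Gradient of the Lagrangian; it should vanish up to the 3-digit
# rounding of the tabulated values.
r1 = (2*x1 - 2) + mu1*(-2*x1) + mu2*5
r2 = (2*x2 - 10) + mu1*(1/5) + mu2*(1/2)
print(abs(r1) < 0.05, abs(r2) < 0.05)  # True True
```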
b. The Karush-Kuhn-Tucker conditions are:

2x1 - μ1 - μ3 = 0
2x2 - μ2 - μ3 = 0
x1 ≥ 0
x2 ≥ 0
-x1 - x2 + 5 ≤ 0
-μ1 x1 - μ2 x2 + μ3 (-x1 - x2 + 5) = 0
μ1, μ2, μ3 ≥ 0.

It is easy to verify that the only combination of Karush-Kuhn-Tucker multipliers resulting in a feasible point is μ1 = μ2 = 0, μ3 > 0. For this case, we obtain x* = [2.5, 2.5]^T, μ* = [0, 0, 5]^T. We have

L(x*, μ*) = diag[2, 2] > 0.

Hence, x* is a strict local minimizer (in fact, the only one for this problem).
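The candidate x* = [2.5, 2.5]^T can also be confirmed by brute force. The sketch below assumes the problem data inferred above (minimize x1^2 + x2^2 subject to x1 + x2 ≥ 5, x ≥ 0) and checks that no point on a feasible grid does better:

```python
import itertools

# Brute-force sanity check: on a 0.1-spaced feasible grid over [0, 10]^2,
# no point has a smaller objective value than the KKT candidate (2.5, 2.5).
f = lambda x1, x2: x1**2 + x2**2
best = min(f(i/10, j/10)
           for i, j in itertools.product(range(101), repeat=2)
           if i/10 + j/10 >= 5)
print(best, f(2.5, 2.5))  # 12.5 12.5
```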
c. The Karush-Kuhn-Tucker conditions are:

2x1 + 6x2 - 4 + 2μ1 x1 + 2μ2 = 0
6x1 - 2 + 2μ1 - 2μ2 = 0
x1^2 + 2x2 - 1 ≤ 0
2x1 - 2x2 - 1 ≤ 0
μ1 (x1^2 + 2x2 - 1) + μ2 (2x1 - 2x2 - 1) = 0
μ1, μ2 ≥ 0.

It is easy to verify that the only combination of Karush-Kuhn-Tucker multipliers resulting in a feasible point is μ1 = 0, μ2 > 0. For this case, we obtain x* = [9/14, 2/14]^T, μ* = [0, 13/14]^T. We have

L(x*, μ*) = [2 6; 6 0],
T(x*) = {a[1, 1]^T : a ∈ R}.

Let y = a[1, 1]^T ∈ T(x*) with a ≠ 0. Then

y^T L(x*, μ*) y = 14a^2 > 0.

Hence, x* is a strict local minimizer (in fact, the only one for this problem).
21.3
The Karush-Kuhn-Tucker conditions are:

2x1 + 2λ(x1 + x2) + 2μ x1 = 0
2x2 + 2λ(x1 + x2) - μ = 0
x1^2 + 2x1 x2 + x2^2 - 1 = 0
x1^2 - x2 ≤ 0
μ (x1^2 - x2) = 0
μ ≥ 0.

We have two cases to consider.

Case 1: (μ > 0) Substituting x2 = x1^2 into the third equation and combining the result with the first two yields two possible points:

x(1) = [-1.618, 2.618]^T,   μ(1) = -3.7889
x(2) = [0.618, 0.382]^T,   μ(2) = -0.2111.

Note that the resulting values of μ violate the condition μ > 0. Hence, neither of the points is a minimizer (although they are candidates for maximizers).
Case 2: (μ = 0) Subtracting the second equation from the first yields x1 = x2, which upon substituting into the third equation gives two possible points:

x(3) = [1/2, 1/2]^T,   x(4) = [-1/2, -1/2]^T.

Note that x(4) is not a feasible point, and is therefore not a candidate minimizer.
Therefore, the only remaining candidate is x(3), with corresponding λ(3) = -1/2 (and μ = 0). We now check second order conditions. We have

L(x(3), λ(3), 0) = [1 -1; -1 1],
T(x(3)) = {a[1, -1]^T : a ∈ R}.

For y = a[1, -1]^T ∈ T(x(3)) with a ≠ 0, we have y^T L y = 4a^2 > 0. Therefore, by the SOSC, x(3) is a strict local minimizer.
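The final quadratic-form computation is easy to verify mechanically. This snippet evaluates y^T L y for L = [[1, -1], [-1, 1]] and tangent vectors y = a[1, -1]^T:

```python
# SOSC check for 21.3: y' L y = 4a^2 > 0 on the tangent-space directions.
L = [[1.0, -1.0], [-1.0, 1.0]]

def quad_form(a):
    y = [a, -a]                      # y = a * [1, -1]
    return sum(y[i] * L[i][j] * y[j] for i in range(2) for j in range(2))

print(quad_form(1.0), quad_form(-2.5))  # 4.0 25.0
```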

21.4
The optimization problem is:

minimize    e_n^T p
subject to  Gp ≥ P e_m
            p ≥ 0,

where G = [g_ij], e_n = [1, . . . , 1]^T (with n components), and p = [p1, . . . , pn]^T. The KKT conditions for this problem are:

e_n^T - μ1^T G - μ2^T = 0^T
μ1^T (P e_m - Gp) - μ2^T p = 0
Gp ≥ P e_m
μ1, μ2, p ≥ 0.

21.5
a. We have f(x) = x2 - (x1 - 2)^3 + 3 and g(x) = 1 - x2. Hence, ∇f(x) = [-3(x1 - 2)^2, 1]^T and ∇g(x) = [0, -1]^T. The KKT conditions are

μ ≥ 0
-3(x1 - 2)^2 = 0
1 - μ = 0
μ(1 - x2) = 0
1 - x2 ≤ 0.

The only solution to the above conditions is x* = [2, 1]^T, μ* = 1.

To check that x* is regular, we note that the constraint is active. We have ∇g(x*) = [0, -1]^T, which is nonzero. Hence, x* is regular.
b. We have

L(x*, μ*) = F(x*) + μ* G(x*) = [0 0; 0 0].

Hence, the point x* satisfies the SONC.
c. Since μ* > 0, we have T̃(x*, μ*) = T(x*) = {y : [0, -1]y = 0} = {y : y2 = 0}, which contains nonzero vectors. Hence, the SOSC does not hold at x*.
21.6
a. Write f(x) = x2 and g(x) = -(x2 + (x1 - 1)^2 - 3). We have ∇f(x) = [0, 1]^T and ∇g(x) = [-2(x1 - 1), -1]^T. The KKT conditions are:

μ ≥ 0
-2μ(x1 - 1) = 0
1 - μ = 0
μ(x2 + (x1 - 1)^2 - 3) = 0
x2 + (x1 - 1)^2 - 3 ≥ 0.

From the third equation we get μ = 1. The second equation then gives x1 = 1, and the fourth equation gives x2 = 3. Therefore, the only point that satisfies the KKT conditions is x* = [1, 3]^T, with KKT multiplier μ* = 1.
b. Note that the constraint x2 + (x1 - 1)^2 - 3 ≥ 0 is active at x*. We have T(x*) = {y : [0, -1]y = 0} = {y : y2 = 0}, and N(x*) = {y : y = [0, -1]^T z, z ∈ R} = {y : y1 = 0}. Because μ* > 0, we have T̃(x*) = T(x*) = {y : y2 = 0}.
c. We have

L(x*, μ*) = O + 1 · diag[-2, 0] = diag[-2, 0].

From part b, T̃(x*) = {y : y2 = 0}. Therefore, for any y = [y1, 0]^T ∈ T̃(x*) with y1 ≠ 0, we have y^T L(x*, μ*) y = -2y1^2 < 0, which means that x* does not satisfy the SONC.
21.7
a. We need to consider two optimization problems. We first consider the minimization problem

minimize    (x1 - 2)^2 + (x2 - 1)^2
subject to  x1^2 - x2 ≤ 0
            x1 + x2 - 2 ≤ 0
            -x1 ≤ 0.

Then, we form the Lagrangian function l(x, μ) = (x1 - 2)^2 + (x2 - 1)^2 + μ1(x1^2 - x2) + μ2(x1 + x2 - 2) + μ3(-x1). The KKT conditions take the form

∇x l(x, μ)^T = [2(x1 - 2) + 2μ1 x1 + μ2 - μ3,  2(x2 - 1) - μ1 + μ2] = 0^T
μ1 (x1^2 - x2) = 0
μ2 (x1 + x2 - 2) = 0
μ3 (-x1) = 0
μi ≥ 0.

The point x = 0 satisfies the first four conditions only with μ1 = -2, μ2 = 0, and μ3 = -4, which violates μi ≥ 0. Thus the point x = 0 does not satisfy the KKT conditions for a minimizer.
We next consider the maximization problem, restated as

minimize    -(x1 - 2)^2 - (x2 - 1)^2
subject to  x1^2 - x2 ≤ 0
            x1 + x2 - 2 ≤ 0
            -x1 ≤ 0.

The KKT conditions take the form

∇x l(x, μ)^T = [-2(x1 - 2) + 2μ1 x1 + μ2 - μ3,  -2(x2 - 1) - μ1 + μ2] = 0^T
μ1 (x1^2 - x2) = 0
μ2 (x1 + x2 - 2) = 0
μ3 (-x1) = 0
μi ≥ 0.

The point x = 0 satisfies the above conditions with μ1 = 2, μ2 = 0, and μ3 = 4. Hence, the point x = 0 satisfies the KKT conditions for a maximizer.
b. We next compute the Hessian, with respect to x, of the Lagrangian to obtain

L = F + μ1 G1 = diag[-2, -2] + 2 diag[2, 0] = diag[2, -2],

which is indefinite on R^2. We next find the subspace

T = {y : ∇g1(x*)^T y = 0, ∇g3(x*)^T y = 0} = {y : [0, -1]y = 0, [-1, 0]y = 0} = {0}.

That is, T is a trivial subspace that consists only of the zero vector. Thus the SOSC for x* = 0 to be a strict local maximizer is trivially satisfied.
21.8
a. Write h(x) = x1 - x2 and g(x) = -x1. We have Df(x) = [x2^2, 2x1 x2], Dh(x) = [1, -1], and Dg(x) = [-1, 0].

Note that all feasible points are regular. The KKT conditions are:

x2^2 + λ - μ = 0
2x1 x2 - λ = 0
μ x1 = 0
μ ≥ 0
x1 - x2 = 0
-x1 ≤ 0.

We first try x1 = 0 (active inequality constraint). Substituting and manipulating, we have the solution x1 = x2 = 0 with λ = μ = 0, which is a legitimate solution. If we then try x1 > 0 (inactive inequality constraint, μ = 0), we find that there is no consistent solution to the KKT conditions. Thus, there is only one point satisfying the KKT conditions: x* = 0.
c. The tangent space at x* = 0 is given by

T(x*) = {y : [1, -1]y = 0, [-1, 0]y = 0} = {0}.

Therefore, the SONC holds trivially for the solution in part a.

d. We have

L(x, λ, μ) = [0 2x2; 2x2 2x1].

Hence, at x* = 0, we have L(0, 0, 0) = O. Since the active inequality constraint at x* = 0 is degenerate (μ = 0), we have

T̃(0, 0) = {y : [1, -1]y = 0},

which is nontrivial. Hence, for any nonzero vector y ∈ T̃(0, 0), we have y^T L(0, 0, 0) y = 0, which is not > 0. Thus, the SOSC does not hold for the solution in part a.
21.9
a. The KKT conditions for the problem are:

μ ≥ 0
μ^T x = 0
e^T x - 1 = 0
x ≥ 0,

where e = [1, . . . , 1]^T.

b. A feasible point x* is regular in this problem if the vectors e, ei, i ∈ J(x*), are linearly independent, where J(x*) = {i : xi* = 0} and ei is the vector with 0 in all components except the ith component, which is 1.
In this problem, all feasible points are regular. To see this, note that 0 is not feasible. Therefore, any feasible point x* results in the set J(x*) having fewer than n elements, which implies that the vectors e, ei, i ∈ J(x*), are linearly independent.
21.10
By the KKT Theorem, there exists μ ≥ 0 such that

(x* - x0) + μ∇g(x*) = 0
μ g(x*) = 0.

Premultiplying both sides of the first equation by (x* - x0)^T, we obtain

||x* - x0||^2 + μ(x* - x0)^T ∇g(x*) = 0.
Since ||x* - x0||^2 > 0 (because g(x0) > 0, so x0 is infeasible and x* ≠ x0) and μ ≥ 0, we deduce that (x* - x0)^T ∇g(x*) < 0 and μ > 0. From the second KKT condition above, we conclude that g(x*) = 0.
21.11
a. By inspection, we guess the point [2, 2]> (drawing a picture may help).
b. We write f(x) = (x1 - 3)^2 + (x2 - 4)^2, g1(x) = -x1, g2(x) = -x2, g3(x) = x1 - 2, g4(x) = x2 - 2, and g = [g1, g2, g3, g4]^T. The problem becomes

minimize    f(x)
subject to  g(x) ≤ 0.
We now check the SOSC for the point x* = [2, 2]^T. We have two active constraints: g3, g4. Regularity holds, since ∇g3(x*) = [1, 0]^T and ∇g4(x*) = [0, 1]^T are linearly independent. We have ∇f(x*) = [-2, -4]^T. We need to find a μ ∈ R^4, μ ≥ 0, satisfying the FONC. From the condition μ^T g(x*) = 0, we deduce that μ1 = μ2 = 0. Hence, Df(x*) + μ^T Dg(x*) = 0^T if and only if μ = [0, 0, 2, 4]^T. Now,

F(x*) = diag[2, 2],   [μ^T G(x*)] = O.

Hence

L(x*, μ*) = diag[2, 2],

which is positive definite on R^2. Hence, the SOSC is satisfied, and x* is a strict local minimizer.
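Since the constraint set here is the box [0, 2] × [0, 2], the minimizer is just the projection of the point (3, 4) onto the box, and projecting onto a box is coordinatewise clipping. A small sketch confirming the guess from part a:

```python
# Projection onto a box: clip each coordinate to its interval.
def clip(v, lo, hi):
    return max(lo, min(hi, v))

target = (3.0, 4.0)                       # the point being approached
x_star = tuple(clip(v, 0.0, 2.0) for v in target)
print(x_star)  # (2.0, 2.0)
```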
21.12
The KKT conditions are

x^T Q + μ^T A = 0^T
μ^T (Ax - b) = 0
μ ≥ 0
Ax - b ≤ 0.

Postmultiplying the first equation by x gives

x^T Q x + μ^T A x = 0.

We note from the second equation that μ^T A x = μ^T b. Hence,

x^T Q x + μ^T b = 0.

Since Q > 0, the first term is nonnegative. The second term is also nonnegative because μ ≥ 0 and b ≥ 0. Hence, both terms must be zero. Because Q > 0, we must have x = 0.
Aside: Actually, we can deduce that the only solution to the KKT condition must be 0, as follows. The
problem is convex; thus, the only points satisfying the KKT condition are global minimizers. However, we
see that 0 is a feasible point, and is the only point for which the objective function value is 0. Further, the
objective function is bounded below by 0. Hence, 0 is the only global minimizer.
21.13
a. We have one scalar equality constraint with h(x) = [c, d]^T x - e and two scalar inequality constraints with g(x) = -x. Hence, there exist μ ∈ R^2 and λ ∈ R such that

μ ≥ 0
a + λc - μ1 = 0
b + λd - μ2 = 0
μ^T x = 0
cx1 + dx2 = e
x ≥ 0.
b. Because x* is a basic feasible solution, and the equality constraint precludes the point 0, exactly one of the inequality constraints is active. The vectors ∇h(x*) = [c, d]^T and ∇g1 = [-1, 0]^T are linearly independent. Similarly, the vectors ∇h(x*) = [c, d]^T and ∇g2 = [0, -1]^T are linearly independent. Hence, x* must be regular.
c. The tangent space is given by

## T (x ) = {y Rn : Dh(x )y = 0, Dgj (x )y = 0, j J(x )}

= N (M ),

where M is a matrix with the first row equal to Dh(x ) = [c, d], and the second row is either Dg1 = [1, 0]
or Dg2 = [0, 1]. But, as we have seen in part b, rank M = 2 Hence, T (x ) = {0}.
d. Recall that we can take μ to be the relative cost coefficient vector (i.e., the KKT conditions are satisfied with μ being the relative cost coefficient vector). If the relative cost coefficients of all nonbasic variables are strictly positive, then μj > 0 for all j ∈ J(x*). Hence, T̃(x*, μ*) = T(x*) = {0}, which implies that L(x*, λ*, μ*) > 0 on T̃(x*, μ*). Hence, the SOSC is satisfied.
21.14
Let x* be a solution. Since A is of full rank, x* is regular. The KKT Theorem states that x* satisfies:

μ ≥ 0
c^T + λ^T A - μ^T = 0^T
μ^T x* = 0.

If we postmultiply the second equation by x* and subtract the third from the result, we get c^T x* + λ^T A x* = 0; since Ax* = b, this gives

c^T x* = -λ^T b.

21.15
a. We can write the LP as

minimize f (x)
subject to h(x) = 0,
g(x) ≤ 0,

where f(x) = c^T x, h(x) = Ax - b, and g(x) = -x. Thus, we have Df(x) = c^T, Dh(x) = A, and Dg(x) = -I. The Karush-Kuhn-Tucker conditions for the above problem have the form: if x* is a local minimizer, then there exist λ and μ such that

μ ≥ 0
c^T + λ^T A - μ^T = 0^T
μ^T x* = 0.

b. Let x* be an optimal feasible solution. Then, x* satisfies the Karush-Kuhn-Tucker conditions listed in part a. Since μ ≥ 0, from the second condition in part a we obtain (-λ)^T A ≤ c^T. Hence, λ̂ = -λ is a feasible solution to the dual (see Chapter 17). Postmultiplying the second condition in part a by x*, we have

0 = c^T x* + λ^T A x* - μ^T x* = c^T x* + λ^T b,

which gives

c^T x* = λ̂^T b.

Hence, λ̂ achieves the same objective function value for the dual as x* for the primal.

c. From part a, we have μ^T = c^T + λ^T A. Substituting this into μ^T x* = 0 yields the desired result.

21.16
By definition of J(x*), we have gi(x*) < 0 for all i ∉ J(x*). Since by assumption gi is continuous for all i, there exists ε > 0 such that gi(x) < 0 for all i ∉ J(x*) and all x in the set B = {x : ||x - x*|| < ε}. Let S1 = {x : h(x) = 0, gj(x) ≤ 0, j ∈ J(x*)}. We claim that S ∩ B = S1 ∩ B. To see this, note that clearly S ∩ B ⊂ S1 ∩ B. To show that S1 ∩ B ⊂ S ∩ B, suppose x ∈ S1 ∩ B. Then, by definition of S1 and B, we have h(x) = 0, gj(x) ≤ 0 for all j ∈ J(x*), and gi(x) < 0 for all i ∉ J(x*). Hence, x ∈ S ∩ B.
Since x* is a local minimizer of f over S, and S ∩ B ⊂ S, x* is also a local minimizer of f over S ∩ B = S1 ∩ B. Hence, we conclude that x* is a regular local minimizer of f on S1. Note that S' ⊂ S1 and x* ∈ S'. Therefore, x* is a regular local minimizer of f on S'.
21.17
Write f(x) = x1^2 + x2^2, g1(x) = x1^2 - x2 - 4, g2(x) = x2 - x1 - 2, and g = [g1, g2]^T. We have ∇f(x) = [2x1, 2x2]^T, ∇g1(x) = [2x1, -1]^T, ∇g2(x) = [-1, 1]^T, and D^2 f(x) = diag[2, 2]. We compute

∇f(x) + μ1 ∇g1(x) + μ2 ∇g2(x) = [2x1 + 2μ1 x1 - μ2, 2x2 - μ1 + μ2]^T.

We use the FONC to find critical points. Setting the above to 0, we obtain

x1 = μ2 / (2 + 2μ1),   x2 = (μ1 - μ2) / 2.
We also use μ^T g(x) = 0 and μ ≥ 0, giving

μ1 (x1^2 - x2 - 4) = 0,   μ2 (x2 - x1 - 2) = 0.

The vector μ has two components; therefore, we try four different cases.
Case 1: (μ1 > 0, μ2 > 0) We have

x1^2 - x2 - 4 = 0,   x2 - x1 - 2 = 0.

We obtain two solutions: x(1) = [-2, 0]^T and x(2) = [3, 5]^T. For x(1), the two FONC equations give μ1 = μ2 and -2(2 + 2μ1) = μ2, which yield μ1 = μ2 = -4/5. This is not a legitimate solution since we require μ ≥ 0. For x(2), the two FONC equations give μ1 - μ2 = 10 and 3(2 + 2μ1) = μ2, which yield μ = [-16/5, -66/5]^T. Again, this is not a legitimate solution.
Case 2: (μ1 = 0, μ2 > 0) We have

x2 - x1 - 2 = 0,   x1 = μ2/2,   x2 = -μ2/2.

Hence, x1 = -x2, and thus x = [-1, 1]^T, μ2 = -2. This is not a legitimate solution since we require μ ≥ 0.
Case 3: (μ1 > 0, μ2 = 0) We have

x1^2 - x2 - 4 = 0,   x1 = 0,   x2 = μ1/2.

Therefore, x2 = -4, μ1 = -8, and again we don't have a legitimate solution.
Case 4: (μ1 = 0, μ2 = 0) We have x1 = x2 = 0, and all constraints are inactive. This is a legitimate candidate for the minimizer. We now apply the SOSC. Note that since the candidate is an interior point of the constraint set, the SOSC for the problem is equivalent to the SOSC for unconstrained optimization. The Hessian matrix D^2 f(x) = diag[2, 2] is symmetric and positive definite. Hence, by the SOSC, the point x* = [0, 0]^T is a strict local minimizer (in fact, it is easy to see that it is a global minimizer).
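The interiority claim is quick to verify numerically: both constraints are strictly negative at x* = 0, and since f(x) = x1^2 + x2^2 ≥ 0 = f(x*) everywhere, global optimality follows. A tiny check:

```python
# Constraint values at the candidate x* = (0, 0) for 21.17.
g1 = lambda x1, x2: x1**2 - x2 - 4
g2 = lambda x1, x2: x2 - x1 - 2
print(g1(0, 0), g2(0, 0))  # -4 -2  (both strictly negative: inactive)
```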
21.18
Write f(x) = x1^2 + x2^2, g1(x) = x1 + x2^2 + 4, g2(x) = -x1 - 10, and g = [g1, g2]^T. We have ∇f(x) = [2x1, 2x2]^T, ∇g1(x) = [1, 2x2]^T, ∇g2(x) = [-1, 0]^T, D^2 f(x) = diag[2, 2], D^2 g1(x) = diag[0, 2], and D^2 g2(x) = O. We compute

∇f(x) + μ1 ∇g1(x) + μ2 ∇g2(x) = [2x1 + μ1 - μ2, 2x2 + 2μ1 x2]^T.

We use the FONC to find critical points. Setting the above to 0, we obtain

x1 = (μ2 - μ1)/2,   x2 (1 + μ1) = 0.
Since we require μ ≥ 0, we deduce that x2 = 0. Using μ^T g(x) = 0 gives

μ1 (x1 + 4) = 0,   μ2 (-x1 - 10) = 0.

If μ2 > 0, then x1 = -10, which leads to a contradiction with the first FONC equation; hence μ2 = 0. We are left with two cases.

Case 1: (μ1 > 0, μ2 = 0) We have x1 + 4 = 0 and μ1 = 8, which gives a legitimate candidate.
Case 2: (μ1 = 0, μ2 = 0) We have x1 = x2 = 0, which is not a legitimate candidate, since it is not a feasible point.
We now apply the SOSC to our candidate x* = [-4, 0]^T, μ* = [8, 0]^T. Now,

L([-4, 0]^T, [8, 0]^T) = diag[2, 2] + 8 diag[0, 2] = diag[2, 18],

which is positive definite on all of R^2. The point [-4, 0]^T is clearly regular. Hence, by the SOSC, x* = [-4, 0]^T is a strict local minimizer.
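The candidate x* = [-4, 0]^T can be cross-checked by brute force over a feasible grid (problem data as reconstructed above, so this is a sketch under that assumption):

```python
# Brute-force check for 21.18: minimize x1^2 + x2^2 subject to
# x1 + x2^2 + 4 <= 0 and -x1 - 10 <= 0, on a 0.1-spaced grid.
f = lambda x1, x2: x1**2 + x2**2
feasible = [(i/10, j/10)
            for i in range(-100, 101) for j in range(-100, 101)
            if i/10 + (j/10)**2 + 4 <= 0 and -(i/10) - 10 <= 0]
best = min(f(x1, x2) for x1, x2 in feasible)
print(best, f(-4.0, 0.0))  # 16.0 16.0
```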
21.19
Write f(x) = x1^2 + x2^2, g1(x) = -x1 - x2^2 + 4, g2(x) = 3x2 - x1, g3(x) = -3x2 - x1, and g = [g1, g2, g3]^T. We have ∇f(x) = [2x1, 2x2]^T, ∇g1(x) = [-1, -2x2]^T, ∇g2(x) = [-1, 3]^T, ∇g3(x) = [-1, -3]^T, D^2 f(x) = diag[2, 2], D^2 g1(x) = diag[0, -2], and D^2 g2(x) = D^2 g3(x) = O.
From the figure, we see that the two candidates are x(1) = [3, 1]^T and x(2) = [3, -1]^T. Both points are easily verified to be regular.
For x(1), we have μ3 = 0. Now,

Df(x(1)) + μ^T Dg(x(1)) = [6 - μ1 - μ2, 2 - 2μ1 + 3μ2] = 0^T,

which yields μ1 = 4, μ2 = 2. Now, T̃(x(1)) = {0}. Therefore, any matrix is positive definite on T̃(x(1)). Hence, by the SOSC, x(1) is a strict local minimizer.
For x(2), we have μ2 = 0. Now,

Df(x(2)) + μ^T Dg(x(2)) = [6 - μ1 - μ3, -2 + 2μ1 - 3μ3] = 0^T,

which yields μ1 = 4, μ3 = 2. Again we have T̃(x(2)) = {0}. Therefore, any matrix is positive definite on T̃(x(2)). Hence, by the SOSC, x(2) is a strict local minimizer.
21.20
a. Write f(x) = 3x1 and g(x) = 2 - x1 - x2^2. We have ∇f(x*) = [3, 0]^T and ∇g(x*) = [-1, 0]^T. Hence, letting μ* = 3, we have ∇f(x*) + μ* ∇g(x*) = 0. Note also that μ* ≥ 0 and μ* g(x*) = 0. Hence, x* = [2, 0]^T satisfies the KKT (first order necessary) conditions.
b. We have F(x*) = O and G(x*) = diag[0, -2]. Hence, L(x*, μ*) = O + 3 diag[0, -2] = diag[0, -6]. Also, T(x*) = {y : [-1, 0]y = 0} = {y : y1 = 0}. For y = [0, y2]^T with y2 ≠ 0, we have y^T L(x*, μ*) y = -6y2^2 < 0. Hence, x* = [2, 0]^T does not satisfy the second order necessary condition.
c. No. Consider points of the form x = [2 - x2^2, x2]^T, x2 ∈ R. Such points are feasible, and can be arbitrarily close to x*. However, for such points with x ≠ x*,

f(x) = 3(2 - x2^2) = 6 - 3x2^2 < 6 = f(x*).

Hence, x* is not a local minimizer.

21.21
The KKT conditions for the problem are

μ ≥ 0
x* + λa - μ = 0
μ^T x* = 0.

Premultiplying the second KKT condition above by μ^T and using the third condition, we get

λ μ^T a = ||μ||^2.

Also, premultiplying the second KKT condition above by x*^T and using the feasibility condition a^T x* = b, we get

λ = -||x*||^2 / b < 0.

We conclude that μ = 0. For if not, the equation λ μ^T a = ||μ||^2 > 0 together with λ < 0 implies that μ^T a < 0, which contradicts μ ≥ 0 and a ≥ 0.
Rewriting the second KKT condition with μ = 0 yields

x* = -λa.

Using the feasibility condition a^T x* = b, we get

x* = (b / ||a||^2) a.
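The closed form x* = (b/||a||^2) a is easy to sanity-check: it satisfies the constraint a^T x = b exactly. The vectors below are arbitrary example data (with a ≥ 0 and b > 0, as in the problem):

```python
# Verify that x* = (b/||a||^2) a lies on the hyperplane a'x = b.
a = [1.0, 2.0, 2.0]                      # example data
b = 3.0
nrm2 = sum(ai * ai for ai in a)          # ||a||^2 = 9
x_star = [b / nrm2 * ai for ai in a]     # (1/3, 2/3, 2/3)
print(round(sum(ai * xi for ai, xi in zip(a, x_star)), 10))  # 3.0
```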

21.22
a. Suppose (x1*)^2 + (x2*)^2 < 1. Then, the point x* = [x1*, x2*]^T lies in the interior of the constraint set Ω = {x : ||x||^2 ≤ 1}. Hence, by the FONC for unconstrained optimization, we have ∇f(x*) = 0, where f(x) = ||x - [a, b]^T||^2 is the objective function. Now, ∇f(x*) = 2(x* - [a, b]^T) = 0 implies x* = [a, b]^T, which violates the assumption that (x1*)^2 + (x2*)^2 < 1.
b. First, we need to show that x* is a regular point. For this, note that if we write the constraint as g(x) = ||x||^2 - 1 ≤ 0, then ∇g(x*) = 2x* ≠ 0. Therefore, x* is a regular point. Hence, by the Karush-Kuhn-Tucker theorem, there exists μ ∈ R, μ ≥ 0, such that

∇f(x*) + μ∇g(x*) = 0,

which gives

x* = (1/(1 + μ)) [a, b]^T.

Hence, x* is unique, and we can write x1* = γa, x2* = γb, where γ = 1/(1 + μ) ≥ 0.
c. Using part b and the fact that ||x*|| = 1, we get ||x*||^2 = γ^2 ||[a, b]^T||^2 = 1, which gives γ = 1/||[a, b]^T|| = 1/√(a^2 + b^2).
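The result of part c is a projection onto the boundary of the unit disk: scale [a, b] by 1/√(a^2 + b^2). A quick sketch with example values (a, b) = (3, 4), chosen arbitrarily:

```python
import math

# Project [a, b] (a point outside the unit disk) onto the disk boundary.
a, b = 3.0, 4.0                              # example values
gamma = 1.0 / math.sqrt(a**2 + b**2)         # 1/5
x_star = (round(gamma * a, 10), round(gamma * b, 10))
print(x_star, round(x_star[0]**2 + x_star[1]**2, 10))  # (0.6, 0.8) 1.0
```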
21.23
a. The Karush-Kuhn-Tucker conditions for this problem are

2x1 + μ exp(x1) = 0
2(x2 + 1) - μ = 0
μ (exp(x1) - x2) = 0
exp(x1) - x2 ≤ 0
μ ≥ 0.

b. From the second equation in part a, we obtain μ = 2(x2 + 1). Since x2 ≥ exp(x1) > 0, we have μ > 0. Hence, by the third equation in part a, we obtain x2 = exp(x1).
c. Since μ = 2(x2 + 1) = 2(exp(x1) + 1), by the first equation in part a we have

2x1 + 2(exp(x1) + 1) exp(x1) = 0,

which implies

x1 = -(exp(2x1) + exp(x1)).
Since exp(x1), exp(2x1) > 0, we have x1 < 0, and hence exp(x1), exp(2x1) < 1. Therefore, x1 > -2.
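The bracketing argument can be confirmed numerically by solving x1 + exp(2x1) + exp(x1) = 0 by bisection; the function is negative at -2, positive at 0, and increasing, so the root lies in (-2, 0):

```python
import math

# Bisection for the 21.23(c) equation x1 + exp(2*x1) + exp(x1) = 0.
phi = lambda t: t + math.exp(2 * t) + math.exp(t)
lo, hi = -2.0, 0.0
for _ in range(60):
    mid = (lo + hi) / 2
    if phi(mid) > 0:
        hi = mid
    else:
        lo = mid
print(-2 < lo < 0)  # True
```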
21.24
a. We rewrite the problem as

minimize f (x)
subject to g(x) ≤ 0,

where f(x) = c^T x and g(x) = (1/2)||x||^2 - 1. Hence, ∇f(x) = c and ∇g(x) = x. Note that x* ≠ 0 (for otherwise it would not be feasible), and therefore it is a regular point. By the KKT theorem, there exists μ ≥ 0 such that c + μx* = 0 and μ g(x*) = 0. Since c ≠ 0, we must have μ ≠ 0. Therefore, g(x*) = 0, which implies that ||x*||^2 = 2.
b. From part a, writing x* = γe, we have γ^2 ||e||^2 = 2. Since ||e||^2 = n, we have γ = √(2/n).
To find c, we use

4 = c^T x* + 8 = -μ||x*||^2 + 8 = -2μ + 8,

and thus μ = 2. Hence, c = -μγe = -2√(2/n) e.
21.25
We can represent the equivalent problem as

minimize f (x)
subject to g(x) ≤ 0,

where g(x) = (1/2)||h(x)||^2. Note that

∇g(x) = Dh(x)^T h(x).

Therefore, the KKT conditions are:

μ ≥ 0
∇f(x*) + μ Dh(x*)^T h(x*) = 0
μ · (1/2)||h(x*)||^2 = 0.

Note that for a feasible point x*, we have h(x*) = 0. Therefore, the KKT conditions reduce to

μ ≥ 0
∇f(x*) = 0.

Note that ∇g(x*) = 0. Therefore, no feasible point x* is regular. Hence, the KKT theorem cannot be applied in this case. This should be clear, since obviously ∇f(x*) = 0 is not necessary for optimality in general.

## 22. Convex Optimization Problems

22.1
The given function is a quadratic, which we represent in the form

f = x^T [-1 -γ 1; -γ -1 -2; 1 -2 -5] x.

A quadratic function is concave if and only if it is negative semidefinite; equivalently, if and only if its negative is positive semidefinite. On the other hand, a symmetric matrix is positive semidefinite if and only if all its principal minors, not just the leading principal minors, are nonnegative. Thus we will determine the range of the parameter γ for which

-f = x^T [1 γ -1; γ 1 2; -1 2 5] x

is positive semidefinite. Write -f = x^T F x with F the matrix displayed above. It is easy to see that the three first-order principal minors (the diagonal elements of F) are all positive. There are three second-order principal minors. Only one of them, the leading principal minor, is a function of the parameter γ:

det F(1:2, 1:2) = det [1 γ; γ 1] = 1 - γ^2,

which is nonnegative if and only if γ ∈ [-1, 1]. The other two second-order principal minors are

det F([1,3], [1,3]) = 4   and   det F([2,3], [2,3]) = 1,

and they are positive. There is only one third-order principal minor, det F, where

det F = det [1 2; 2 5] - γ det [γ 2; -1 5] - det [γ 1; -1 2]
      = 1 - γ(5γ + 2) - (2γ + 1)
      = -5γ^2 - 4γ.

The third-order principal minor is nonnegative if and only if -γ(5γ + 4) ≥ 0, that is, if and only if

γ ∈ [-4/5, 0].

Combining this with γ ∈ [-1, 1] from above, we conclude that -f is positive semidefinite (equivalently, the quadratic function f is concave) if and only if

γ ∈ [-4/5, 0].
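The minor computation can be cross-checked numerically; the snippet below compares a direct 3×3 determinant of the matrix above (parameter γ written as `gamma`) against the closed form -5γ^2 - 4γ:

```python
# Cross-check of the 22.1 determinant: det F(gamma) = -5*gamma^2 - 4*gamma.
def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

for gamma in (-1.0, -0.8, -0.4, 0.0, 0.3):
    M = [[1.0, gamma, -1.0], [gamma, 1.0, 2.0], [-1.0, 2.0, 5.0]]
    closed_form = -5*gamma**2 - 4*gamma
    print(gamma, abs(det3(M) - closed_form) < 1e-9)  # True for each gamma
```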

22.2
We have

φ(α) = (1/2)(x + αd)^T Q (x + αd) - (x + αd)^T b
     = (1/2)(d^T Q d) α^2 + (d^T (Qx - b)) α + ((1/2) x^T Q x - x^T b).

This is a quadratic function of α. Since Q > 0,

d^2 φ(α) / dα^2 = d^T Q d > 0,

and hence by Theorem 22.5, φ is strictly convex.
22.3
Write f(x) = x^T Q x, where

Q = (1/2) [0 1; 1 0].

Let x, y ∈ Ω. Then, x = [a1, ma1]^T and y = [a2, ma2]^T for some a1, a2 ∈ R. By Proposition 22.1, it is enough to show that (y - x)^T Q (y - x) ≥ 0. By substitution,

(y - x)^T Q (y - x) = (a2 - a1)^2 [1, m] Q [1, m]^T = m (a2 - a1)^2 ≥ 0

(because m ≥ 0), which completes the proof.

22.4
Let x, y ∈ Ω and α ∈ (0, 1). Then, h(x) = h(y) = c. By convexity of Ω, αx + (1 - α)y ∈ Ω, and hence h(αx + (1 - α)y) = c. Therefore,

h(αx + (1 - α)y) = c = αh(x) + (1 - α)h(y),

so the convexity inequality holds with equality; this shows that h is convex, and the same argument shows that -h is convex, i.e., h is concave.

22.5
At x = 0, for any g ∈ [-1, 1], we have, for all y ∈ R,

|y| ≥ |0| + g(y - 0) = gy.

Thus, in this case any g in the interval [-1, 1] is a subgradient of f at x = 0.

At x = 1, g = 1 is the only subgradient of f, because, for all y ∈ R,

|y| ≥ 1 + 1 · (y - 1) = y.

22.6
Let x, y ∈ Ω and α ∈ (0, 1). For convenience, write f = max{f1, . . . , fℓ} = max_i fi. We have

f(αx + (1 - α)y) = max_i fi(αx + (1 - α)y)
                 ≤ max_i (αfi(x) + (1 - α)fi(y))        (by convexity of each fi)
                 ≤ α max_i fi(x) + (1 - α) max_i fi(y)  (by a property of max)
                 = αf(x) + (1 - α)f(y),

which implies that f is convex.
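The inequality chain above can be illustrated numerically with two sample convex functions (chosen arbitrarily here):

```python
# Numeric illustration of 22.6: the pointwise max of convex functions
# satisfies the convexity inequality on sampled points.
f1 = lambda t: t * t
f2 = lambda t: abs(t - 1)
fmax = lambda t: max(f1(t), f2(t))

ok = True
for i in range(-20, 21):
    for j in range(-20, 21):
        x, y = i / 4, j / 4
        for alpha in (0.25, 0.5, 0.75):
            z = alpha * x + (1 - alpha) * y
            if fmax(z) > alpha * fmax(x) + (1 - alpha) * fmax(y) + 1e-12:
                ok = False
print(ok)  # True
```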

22.7
⇒: This is true by definition.
⇐: Let d ∈ Rn be given. We want to show that d^T Q d ≥ 0. Fix some vector x ∈ Ω. Since Ω is open, there exists α ≠ 0 such that y = x + αd ∈ Ω. By assumption,

(y - x)^T Q (y - x) = α^2 d^T Q d ≥ 0,

which implies that d^T Q d ≥ 0.

22.8
Yes, the problem is a convex optimization problem.
First we show that the objective function f(x) = (1/2)||Ax - b||^2 is convex. We write

f(x) = (1/2) x^T (A^T A) x - (b^T A) x + constant,

which is a quadratic function with Hessian A^T A. Since the Hessian A^T A is positive semidefinite, the objective function f is convex.
Next we show that the constraint set is convex. Consider two feasible points x and y, and let α ∈ (0, 1). Then, x and y satisfy e^T x = 1, x ≥ 0 and e^T y = 1, y ≥ 0, respectively. We have

e^T (αx + (1 - α)y) = αe^T x + (1 - α)e^T y = α + (1 - α) = 1.

Moreover, each component of αx + (1 - α)y is given by αxi + (1 - α)yi, which is nonnegative because every term here is nonnegative. Hence, αx + (1 - α)y is a feasible point, which shows that the constraint set is convex.
22.9
We need to show that Ω is a convex set, and that f is a convex function on Ω.
To show that Ω is a convex set, we need to show that for any y, z ∈ Ω and α ∈ (0, 1), we have αy + (1 - α)z ∈ Ω. Let y, z ∈ Ω and α ∈ (0, 1). Thus, y1 = y2 ≥ 0 and z1 = z2 ≥ 0. Hence,

x = αy + (1 - α)z = [αy1 + (1 - α)z1; αy2 + (1 - α)z2].

Now,

x1 = αy1 + (1 - α)z1 = αy2 + (1 - α)z2 = x2,

and since α, 1 - α ≥ 0,

x1 ≥ 0.

Hence, x ∈ Ω and therefore Ω is convex.
To show that f is convex on Ω, we need to show that for any y, z ∈ Ω and α ∈ [0, 1], f(αy + (1 - α)z) ≤ αf(y) + (1 - α)f(z). Let y, z ∈ Ω and α ∈ [0, 1]. Thus, y1 = y2 ≥ 0 and z1 = z2 ≥ 0, so that f(y) = y1^3 and f(z) = z1^3. On [0, ∞), the scalar function t ↦ t^3 has nondecreasing derivative 3t^2 and is therefore convex there. Since y1, z1 ≥ 0, this gives

f(αy + (1 - α)z) = (αy1 + (1 - α)z1)^3 ≤ αy1^3 + (1 - α)z1^3 = αf(y) + (1 - α)f(z).

Hence, f is convex.
22.10
Since the problem is a convex optimization problem, we know for sure that any point of the form αy + (1 - α)z, α ∈ (0, 1), is a global minimizer. However, any other point may or may not be a minimizer. Hence, the largest set of points G for which we can be sure that every point in G is a global minimizer is given by

G = {αy + (1 - α)z : 0 ≤ α ≤ 1}.

22.11
a. Let f be the objective function and Ω the constraint set. Consider the set Γ = {x ∈ Ω : f(x) ≤ 1}. This set contains all three of the given points. Moreover, by Lemma 22.1, Γ is convex. Now, if we take the average of the first two points (which is a convex combination of them), the resulting point (1/2)[1, 0, 0]^T + (1/2)[0, 1, 0]^T = (1/2)[1, 1, 0]^T is in Γ, because Γ is convex. Similarly, the point (2/3)(1/2)[1, 1, 0]^T + (1/3)[0, 0, 1]^T = (1/3)[1, 1, 1]^T is also in Γ, because Γ is convex. Hence, the objective function value of (1/3)[1, 1, 1]^T must be ≤ 1.
b. If the three points are all global minimizers, then the point (1/3)[1, 1, 1]^T, which cannot have a higher objective function value than the given three points (by part a), must also be a global minimizer.
22.12
a. The Lagrange condition for the problem is given by:

x^T Q + λ^T A = 0^T
Ax = b.

From the first equation above, we obtain

x = -Q^{-1} A^T λ.
Applying the second equation (the constraint on x), we have

-(A Q^{-1} A^T) λ = b.

Since rank A = m, the matrix A Q^{-1} A^T is invertible. Therefore, the only solution to the Lagrange condition is

x* = Q^{-1} A^T (A Q^{-1} A^T)^{-1} b.

b. The point in part a above is a global minimizer because the problem is a convex optimization problem (the constraint set {x : Ax = b} is convex; the objective function is convex because its Hessian, Q, is positive definite).
22.13
By Theorem 22.4, for all x ∈ Ω, we have

f(x) ≥ f(x*) + Df(x*)(x - x*).

Substituting for Df(x*) from the equation Df(x*) + Σ_{j∈J(x*)} μj aj^T = 0^T into the above inequality yields

f(x) ≥ f(x*) - Σ_{j∈J(x*)} μj aj^T (x - x*).
For each j ∈ J(x*), we have

aj^T x* + bj = 0,

and, since x is feasible,

aj^T x + bj ≤ 0.

Hence, for each j ∈ J(x*),

aj^T (x - x*) ≤ 0.

Since μj ≥ 0, we get

f(x) ≥ f(x*) - Σ_{j∈J(x*)} μj aj^T (x - x*) ≥ f(x*),

and the proof is complete.

22.14
a. Let Ω = {x ∈ Rn : a^T x ≥ b}, x1, x2 ∈ Ω, and α ∈ [0, 1]. Then, a^T x1 ≥ b and a^T x2 ≥ b. Therefore,

a^T (αx1 + (1 - α)x2) = αa^T x1 + (1 - α)a^T x2 ≥ αb + (1 - α)b = b,

which means that αx1 + (1 - α)x2 ∈ Ω. Hence, Ω is a convex set.

b. Rewrite the problem as

minimize    f(x)
subject to  g(x) ≤ 0,

where f(x) = ||x||^2 and g(x) = b - a^T x. Now, ∇g(x) = -a ≠ 0. Therefore, any feasible point is regular. By the Karush-Kuhn-Tucker theorem, there exists μ ≥ 0 such that

2x* - μa = 0
μ(b - a^T x*) = 0.

Since x* is a feasible point, x* ≠ 0. Therefore, by the first equation, we see that μ ≠ 0. The second equation then implies that b - a^T x* = 0.
c. By the first Karush-Kuhn-Tucker equation, we have x* = μa/2. Since a^T x* = b, we have μa^T a/2 = a^T x* = b, and therefore μ = 2b/||a||^2. Since x* = μa/2, x* is uniquely given by x* = ba/||a||^2.
22.15
a. Let f(x) = c^T x and Ω = {x : x ≥ 0}. Suppose x, y ∈ Ω and α ∈ (0, 1). Then, x, y ≥ 0. Hence, αx + (1 - α)y ≥ 0, which means αx + (1 - α)y ∈ Ω. Furthermore,

f(αx + (1 - α)y) = αc^T x + (1 - α)c^T y = αf(x) + (1 - α)f(y).

Therefore, f is convex. Hence, the problem is a convex programming problem.

b. ⇒: We use contraposition. Suppose ci < 0 for some i. Let d = [0, . . . , 1, . . . , 0]^T, where the 1 appears in the ith component. Clearly d is a feasible direction at any point x ≥ 0. However, d^T ∇f(x*) = d^T c = ci < 0. Therefore, the FONC does not hold, and any point x* ≥ 0 cannot be a minimizer.
⇐: Suppose c ≥ 0. Let x* = 0, and let d be a feasible direction at x*. Then, d ≥ 0. Hence, d^T ∇f(x*) = c^T d ≥ 0. Therefore, by Theorem 22.7, x* is a solution.
The above also proves that if a solution exists, then 0 is a solution.
c. Write g(x) = -x so that the constraint can be expressed as g(x) ≤ 0.
⇒: We have Dg(x) = -I, which has full rank. Therefore, any point is regular. Suppose a solution x* exists. Then, by the KKT theorem, there exists μ ≥ 0 such that c^T - μ^T = 0^T and μ^T x* = 0. Hence, c = μ ≥ 0.
⇐: Suppose c ≥ 0. Let x* = 0 and μ = c. Then, μ ≥ 0, c^T - μ^T = 0^T, and μ^T x* = 0, i.e., the KKT conditions are satisfied. By part a, x* is a solution to the problem.
The above also proves that if a solution exists, then 0 is a solution.
22.16
a. The standard form problem is

minimize c> x
subject to Ax = b
x 0,

## which can be written as

minimize f (x)
subject to h(x) = 0,
g(x) 0,

where f(x) = c^T x, h(x) = Ax - b, and g(x) = -x. Thus, we have Df(x) = c^T, Dh(x) = A, and Dg(x) = -I. The Karush-Kuhn-Tucker conditions for the above problem have the form:

μ ≥ 0
c^T + λ^T A - μ^T = 0^T
μ^T x* = 0.

b. The Karush-Kuhn-Tucker conditions are sufficient for optimality in this case because the problem is a
convex optimization problem, i.e., the objective function is a convex function, and the feasible set is a convex
set.
c. The dual problem is

maximize    λ^T b
subject to  λ^T A ≤ c^T.

d. Let

μ^T = c^T - λ^T A.
Since λ is feasible for the dual, we have μ ≥ 0. Rewriting the above equation, we get

c^T + (-λ)^T A - μ^T = 0^T.

The complementary slackness condition (c^T - λ^T A)x = 0 can be written as μ^T x = 0. Therefore, the Karush-Kuhn-Tucker conditions hold. By part b, x is optimal.
22.17
a. We can treat s(1) and s(2) as vectors in Rn. Let a = [a, . . . , a]^T. The optimization problem is:

minimize    (1/2)(x1^2 + x2^2)
subject to  x1 s(1) + x2 s(2) ≥ a.

b. The KKT conditions are:

x1 - μ^T s(1) = 0
x2 - μ^T s(2) = 0
μ^T (x1 s(1) + x2 s(2) - a) = 0
μ ≥ 0
x1 s(1) + x2 s(2) ≥ a.

c. Yes, because the Hessian of the Lagrangian is I (identity), which is positive definite.
d. Yes, this is a convex optimization problem. The objective function is quadratic, with identity Hessian (hence positive definite), and so is convex. The constraint set is of the form {x : x1 s(1) + x2 s(2) ≥ a}, an intersection of half-spaces, and hence convex.
22.18
a. We first show that the set of probability vectors

Ω = {q ∈ Rn : q1 + · · · + qn = 1, qi > 0, i = 1, . . . , n}

is a convex set. Let y, z ∈ Ω, so y1 + · · · + yn = 1, yi > 0, z1 + · · · + zn = 1, and zi > 0. Let α ∈ (0, 1) and x = αy + (1 - α)z. We have

x1 + · · · + xn = αy1 + (1 - α)z1 + · · · + αyn + (1 - α)zn
               = α(y1 + · · · + yn) + (1 - α)(z1 + · · · + zn)
               = α + (1 - α)
               = 1.

Also, because yi > 0, zi > 0, α > 0, and 1 - α > 0, we conclude that xi > 0. Thus, x ∈ Ω, which shows that Ω is convex.
b. We next show that the function f is a convex function on Ω. For this, we compute

F(q) = diag[p1/q1^2, . . . , pn/qn^2],

which shows that F(q) > 0 for all q in the open set {q : qi > 0, i = 1, . . . , n}, which contains Ω. Therefore, f is convex on Ω.
c. Fix a probability vector p. Consider the optimization problem

minimize    p1 log(p1/x1) + · · · + pn log(pn/xn)
subject to  x1 + · · · + xn = 1
            xi > 0, i = 1, . . . , n.

By parts a and b, the problem is a convex optimization problem. We ignore the constraint xi > 0 and write down the Lagrange conditions for the equality-constrained problem:

-pi/xi + λ = 0, i = 1, . . . , n
x1 + · · · + xn = 1.

Rewrite the first set of equations as xi = pi/λ. Combining this with the constraint and the fact that p1 + · · · + pn = 1, we obtain λ = 1, which means that xi = pi. Therefore, the unique global minimizer is x* = p.
Note that f(x*) = 0. Hence, we conclude that f(x) ≥ 0 for all x ∈ Ω. Moreover, f(x) = 0 if and only if x = p. This proves the required result.
d. Given two probability vectors p and q, the number

D(p, q) = p1 log(p1/q1) + · · · + pn log(pn/qn)

is called the relative entropy (or Kullback-Leibler divergence) between p and q. It is used in information theory to measure the "distance" between two probability vectors. The result of part c justifies the use of D as a measure of distance (although D is not a metric because it is not symmetric).
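The property proved in part c (D(p, q) ≥ 0, with equality exactly when q = p) is easy to illustrate; the probability vectors below are arbitrary examples:

```python
import math

# Illustration of 22.18(c): D(p, q) >= 0, with D(p, p) = 0.
def relative_entropy(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
print(relative_entropy(p, p), relative_entropy(p, q) > 0)  # 0.0 True
```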
22.19
We claim that a solution exists, and that it is unique. To prove the first claim, choose ε > 0 such that there exists x ∈ Ω satisfying ||x - z|| ≤ ε. Consider the modified problem

minimize    ||x - z||
subject to  x ∈ Ω ∩ {y : ||y - z|| ≤ ε}.

If this modified problem has a solution, then clearly so does the original problem. The objective function here is continuous, and the constraint set is closed and bounded. Hence, by Weierstrass's Theorem, a solution to the problem exists.
Let f be the objective function. Next, we show that f is convex (and hence the problem is a convex optimization problem). Let x, y ∈ Ω and α ∈ (0, 1). Then,

f(αx + (1 - α)y) = ||αx + (1 - α)y - z||
                 = ||α(x - z) + (1 - α)(y - z)||
                 ≤ α||x - z|| + (1 - α)||y - z||
                 = αf(x) + (1 - α)f(y),

which shows that f is convex.
To prove uniqueness, let x1 and x2 be solutions to the problem. Then, by convexity, x3 = (x1 + x2)/2 is also a solution. But

||x3 - z|| = ||(x1 + x2)/2 - z||
           = ||(x1 - z)/2 + (x2 - z)/2||
           ≤ (1/2)(||x1 - z|| + ||x2 - z||)
           = ||x3 - z||,

from which we conclude that the triangle inequality above holds with equality, implying that x1 - z = γ(x2 - z) for some γ ≥ 0. Because ||x1 - z|| = ||x2 - z||, we have γ = 1. From this, we obtain x1 = x2, which proves uniqueness.
22.20
a. Let A, B ∈ R^{n×n} be symmetric and A ≥ 0, B ≥ 0. Fix α ∈ (0, 1), x ∈ R^n, and let
C = αA + (1 − α)B. Then,

x^T Cx = x^T[αA + (1 − α)B]x
= αx^T Ax + (1 − α)x^T Bx.

Since x^T Ax ≥ 0, x^T Bx ≥ 0, and α, (1 − α) > 0 by assumption, then x^T Cx ≥ 0, which proves the required
result.
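A quick numerical sanity check of part a (not from the manual): a convex combination of two randomly generated positive semidefinite matrices remains positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(0)

# A = M1 M1^T and B = M2 M2^T are symmetric positive semidefinite
M1 = rng.standard_normal((4, 4))
M2 = rng.standard_normal((4, 4))
A = M1 @ M1.T
B = M2 @ M2.T

alpha = 0.3
C = alpha * A + (1 - alpha) * B  # convex combination

# x^T C x = alpha x^T A x + (1 - alpha) x^T B x >= 0 for every x
assert np.all(np.linalg.eigvalsh(C) >= -1e-10)
for _ in range(100):
    x = rng.standard_normal(4)
    assert x @ C @ x >= -1e-10
```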
b. We first show that the constraint set Ω = {x : F0 + Σ_{j=1}^n x_j F_j ≥ 0} is convex. So, let x, y ∈ Ω and
α ∈ (0, 1). Let z = αx + (1 − α)y. Then,

F0 + Σ_{j=1}^n z_j F_j = F0 + Σ_{j=1}^n [αx_j + (1 − α)y_j]F_j
= F0 + α Σ_{j=1}^n x_j F_j + (1 − α) Σ_{j=1}^n y_j F_j
= α[F0 + Σ_{j=1}^n x_j F_j] + (1 − α)[F0 + Σ_{j=1}^n y_j F_j].

By assumption, we have

F0 + Σ_{j=1}^n x_j F_j ≥ 0,
F0 + Σ_{j=1}^n y_j F_j ≥ 0.

Hence, by part a,

F0 + Σ_{j=1}^n z_j F_j ≥ 0,

which implies that z ∈ Ω.
To show that the objective function f(x) = c^T x is convex on Ω, let x, y ∈ Ω and α ∈ (0, 1). Then,

f(αx + (1 − α)y) = c^T(αx + (1 − α)y)
= αc^T x + (1 − α)c^T y
= αf(x) + (1 − α)f(y),

which shows that f is convex.
c. The objective function is already in the required form. To rewrite the constraint, let a_{i,j} be the (i, j)th
entry of A, i = 1, . . . , m, j = 1, . . . , n. Then, the constraint Ax ≤ b can be written as

b_i − a_{i,1}x_1 − · · · − a_{i,n}x_n ≥ 0,  i = 1, . . . , m.

Now form the diagonal matrices

F0 = diag{b_1, . . . , b_m}
F_j = diag{−a_{1,j}, . . . , −a_{m,j}},  j = 1, . . . , n.

Note that a diagonal matrix is positive semidefinite if and only if every diagonal element is nonnegative.
Hence, the constraint Ax ≤ b can be written as F0 + Σ_{j=1}^n x_j F_j ≥ 0. The left-hand side is a diagonal
matrix, and the ith diagonal element is simply b_i − a_{i,1}x_1 − a_{i,2}x_2 − · · · − a_{i,n}x_n.
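The reduction in part c can be checked numerically. The sketch below is not from the manual; A and b are arbitrary choices, and it uses the sign convention F_j = diag{−a_{1,j}, . . . , −a_{m,j}}, under which positive semidefiniteness of the diagonal LMI coincides with feasibility of Ax ≤ b.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [-1.0, 1.0],
              [0.0, -1.0]])
b = np.array([4.0, 2.0, 0.0])

F0 = np.diag(b)
F = [np.diag(-A[:, j]) for j in range(A.shape[1])]  # F_j = diag{-a_{1,j},...,-a_{m,j}}

def lmi(x):
    return F0 + sum(xj * Fj for xj, Fj in zip(x, F))

x_feas = np.array([1.0, 1.0])    # satisfies A x <= b
x_infeas = np.array([5.0, 1.0])  # violates the first inequality

# The LMI matrix is diagonal with ith entry b_i - a_{i,1} x_1 - a_{i,2} x_2
assert np.allclose(np.diag(lmi(x_feas)), b - A @ x_feas)
assert np.min(np.linalg.eigvalsh(lmi(x_feas))) >= 0
assert np.min(np.linalg.eigvalsh(lmi(x_infeas))) < 0
```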
22.21
a. We have
Ω = {x : x_1 + · · · + x_n = 1; x_1, . . . , x_n > 0; x_1 ≤ 2x_i, i = 2, . . . , n}.
So let x, y ∈ Ω and α ∈ (0, 1). Consider z = αx + (1 − α)y. We have

z_1 + · · · + z_n = αx_1 + (1 − α)y_1 + · · · + αx_n + (1 − α)y_n
= α(x_1 + · · · + x_n) + (1 − α)(y_1 + · · · + y_n)
= α + (1 − α)
= 1.

Moreover, for each i, because x_i > 0, y_i > 0, α > 0 and 1 − α > 0, we have z_i > 0. Finally, for each i,

z_1 = αx_1 + (1 − α)y_1 ≤ α(2x_i) + (1 − α)(2y_i) = 2z_i.

Hence, z ∈ Ω, which implies that Ω is convex.
b. We first show that the negative of the objective function is convex. For this, we compute its Hessian,
which turns out to be a diagonal matrix with ith diagonal entry 1/x_i², which is strictly positive. Hence, the
Hessian is positive definite, which implies that the negative of the objective function is convex.
Combining the above with part a, we conclude that the problem is a convex optimization problem. Hence,
the FONC (for set constraints) is necessary and sufficient. Let x* be a given allocation. The FONC at x*
is d^T ∇f(x*) ≥ 0 for all feasible directions d at x*. But because Ω is convex, the FONC can be written as
(y − x*)^T ∇f(x*) ≥ 0 for all y ∈ Ω. Computing ∇f(x*) for f(x) = −Σ_{i=1}^n log(x_i), we get the proportional
fairness condition.
22.22
a. We rewrite the problem as a minimization problem by multiplying the objective function by −1. Thus,
the new objective function is the sum of the functions −U_i. Because each U_i is concave, −U_i is convex, and
hence their sum is convex.
To show that the constraint set Ω = {x : e^T x ≤ C} (where e = [1, . . . , 1]^T) is convex, let x1, x2 ∈ Ω, and
α ∈ [0, 1]. Then, e^T x1 ≤ C and e^T x2 ≤ C. Therefore,

e^T(αx1 + (1 − α)x2) = αe^T x1 + (1 − α)e^T x2
≤ αC + (1 − α)C
= C,

which means that αx1 + (1 − α)x2 ∈ Ω. Hence, Ω is a convex set.
b. Because the problem is a convex optimization problem, the following KKT condition is necessary and
sufficient for x* to be a global minimizer:

μ ≥ 0
−U_i′(x_i*) + μ = 0,  i = 1, . . . , n
μ(Σ_{i=1}^n x_i* − C) = 0
Σ_{i=1}^n x_i* ≤ C.

Note that because U_i(x_i) − μx_i is a concave function of x_i, the second line above can be written as
x_i* = arg max_x (U_i(x) − μx).
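For concrete utilities the KKT system above can be solved in closed form. The sketch below is an illustration, not from the manual; it uses the hypothetical choice U_i(x) = a_i log x, for which U_i′(x_i*) = μ gives x_i* = a_i/μ and the budget constraint determines μ.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])  # utilities U_i(x) = a_i log(x): concave, increasing
C = 6.0

# KKT: a_i / x_i* = mu and sum_i x_i* = C  =>  mu = sum(a) / C
mu = a.sum() / C
x_star = a / mu

assert np.isclose(x_star.sum(), C)  # the budget is fully used
assert np.allclose(a / x_star, mu)  # stationarity with a single multiplier

# x_star should beat any random feasible allocation with sum = C
rng = np.random.default_rng(1)
best = float((a * np.log(x_star)).sum())
for _ in range(1000):
    y = rng.random(3)
    y = C * y / y.sum()
    assert float((a * np.log(y)).sum()) <= best + 1e-12
```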
c. Because each U_i is concave and increasing, we conclude that Σ_{i=1}^n x_i* = C; for otherwise, we could increase
some x_i* and hence U_i(x_i*) and also Σ_{i=1}^n U_i(x_i*), contradicting the optimality of x*.
22.23
First note that the optimization problem that you construct cannot be a convex problem (for otherwise, the
FONC implies that x* is a global minimizer, which then implies that the SONC holds). Let f(x) = x_2,
g(x) = −(x_2 + x_1²), and x* = 0. Then, ∇f(x*) = [0, 1]^T. Any feasible direction d at x* is of the form
d = [d_1, d_2]^T with d_2 ≥ 0. Hence, d^T ∇f(x*) ≥ 0, which shows that the FONC holds.
Because ∇g(x*)^T = [0, −1] (so x* is regular), we see that if μ* = 1, then ∇f(x*) + μ*∇g(x*) = 0, and so
the KKT condition holds.
Because F(x*) = O, the SONC for the set constraint holds. However,

L(x*, μ*) = O + μ* [ −2 0 ; 0 0 ] = [ −2 0 ; 0 0 ]

and T(x*) = {y : y_2 = 0}, on which y^T L(x*, μ*)y = −2y_1² < 0 for y ≠ 0, which shows that the SONC for
the inequality constraint g(x) ≤ 0 does not hold.
22.24
a. Let x0 and μ0 be feasible points in the primal and dual, respectively. Then, g(x0) ≤ 0 and μ0 ≥ 0, and
so μ0^T g(x0) ≤ 0. Hence,

f(x0) ≥ f(x0) + μ0^T g(x0)
= l(x0, μ0)
≥ min_{x∈R^n} l(x, μ0)
= q(μ0).

b. Suppose f(x0) = q(μ0) for feasible points x0 and μ0. Let x be any feasible point in the primal. Then,
by part a, f(x) ≥ q(μ0) = f(x0). Hence x0 is optimal in the primal.
Similarly, let μ be any feasible point in the dual. Then, by part a, q(μ) ≤ f(x0) = q(μ0). Hence μ0 is
optimal in the dual.
c. Let x* be optimal in the primal. Then, by the KKT Theorem, there exists μ* ∈ R^m such that

∇_x l(x*, μ*) = (Df(x*) + μ*^T Dg(x*))^T = 0
μ*^T g(x*) = 0
μ* ≥ 0.

Therefore, μ* is feasible in the dual. Further, we note that l(·, μ*) is a convex function (because f is convex,
μ*^T g is convex being the sum of convex functions, and hence l is the sum of two convex functions). Hence,
we have l(x*, μ*) = min_{x∈R^n} l(x, μ*). Therefore,

q(μ*) = min_{x∈R^n} l(x, μ*)
= l(x*, μ*)
= f(x*) + μ*^T g(x*)
= f(x*).

By part b, μ* is optimal in the dual.
22.25
a. The Schur complement of M(1, 1) is

Δ11 = M(2:3, 2:3) − M(2:3, 1)M(1, 1)^{-1}M(1, 2:3)
= [ 1 2 ; 2 5 ] − [ −α ; 1 ](1)[ −α 1 ]
= [ 1 − α²  2 + α ; 2 + α  4 ].

b. The Schur complement of M(2:3, 2:3) is

Δ22 = M(1, 1) − M(1, 2:3)M(2:3, 2:3)^{-1}M(2:3, 1)
= 1 − [ −α 1 ] [ 1 2 ; 2 5 ]^{-1} [ −α ; 1 ]
= 1 − [ −α 1 ] [ 5 −2 ; −2 1 ] [ −α ; 1 ]
= −α(5α + 4).
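Both Schur complements can be verified numerically. The check below assumes the matrix of the exercise is M = [ 1 −α 1 ; −α 1 2 ; 1 2 5 ], which is consistent with every entry appearing in the computations above.

```python
import numpy as np

def check(alpha):
    # Assumed exercise matrix, parameterized by alpha
    M = np.array([[1.0, -alpha, 1.0],
                  [-alpha, 1.0, 2.0],
                  [1.0, 2.0, 5.0]])
    # Schur complement of M(1,1)
    d11 = M[1:, 1:] - np.outer(M[1:, 0], M[0, 1:]) / M[0, 0]
    assert np.allclose(d11, [[1 - alpha**2, 2 + alpha],
                             [2 + alpha, 4.0]])
    # Schur complement of M(2:3, 2:3)
    d22 = M[0, 0] - M[0, 1:] @ np.linalg.inv(M[1:, 1:]) @ M[1:, 0]
    assert np.isclose(d22, -alpha * (5 * alpha + 4))

for alpha in (-1.0, 0.5, 2.0):
    check(alpha)
```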

22.26
Let
P = [ x1 x2 ; x2 x3 ]
and let
P1 = [ 1 0 ; 0 0 ],  P2 = [ 0 1 ; 1 0 ],  P3 = [ 0 0 ; 0 1 ].
Then, we can represent the Lyapunov inequality A^T P + PA < 0 as

A^T P + PA = x1(A^T P1 + P1 A) + x2(A^T P2 + P2 A) + x3(A^T P3 + P3 A)
= −x1 F1 − x2 F2 − x3 F3
< 0,

where
F_i = −A^T P_i − P_i A,  i = 1, 2, 3.
Equivalently,
P = P^T and A^T P + PA < 0
if and only if
F(x) = x1 F1 + x2 F2 + x3 F3 > 0.
22.27
The quadratic inequality
A^T P + PA + PBR^{-1}B^T P < 0
can be equivalently represented by the following LMI:

[ −R  B^T P ; PB  A^T P + PA ] < 0,

or as the following LMI:

[ A^T P + PA  PB ; B^T P  −R ] < 0.

It is easy to verify, using Schur complements, that the above two LMIs are equivalent to the following
quadratic inequality:

[ A^T P + PA + PBR^{-1}B^T P  O ; O  −R ] < 0.
22.28
The MATLAB code is as follows:

A = [-0.9501 -0.4860 -0.4565
     -0.2311 -0.8913 -0.0185
     -0.6068 -0.7621 -0.8214];
setlmis([]);
P = lmivar(1,[3 1]);
lmiterm([1 1 1 P],1,A,'s')
lmiterm([2 1 1 0],0.1)
lmiterm([-2 1 1 P],1,1)
lmiterm([3 1 1 P],1,1)
lmiterm([-3 1 1 0],1)
lmis = getlmis;
[tmin,xfeas] = feasp(lmis);
P = dec2mat(lmis,xfeas,P)

## 23. Algorithms for Constrained Optimization

23.1
a. By drawing a simple picture, it is easy to see that Π[x] = x/||x||, provided x ≠ 0.
b. By inspection, we see that the solutions are [0, 1]^T and [0, −1]^T. (Or use Rayleigh's inequality.)
c. Now,

x^(k+1) = Π[x^(k) + α∇f(x^(k))] = γ_k(x^(k) + αQx^(k)) = γ_k(I + αQ)x^(k),

where γ_k = 1/||(I + αQ)x^(k)||. For the particular given form of Q, we have

x1^(k+1) = γ_k(1 + α)x1^(k)
x2^(k+1) = γ_k(1 + 2α)x2^(k).

Hence,

y^(k+1) = ((1 + α)/(1 + 2α)) y^(k).

d. Assuming x2^(0) ≠ 0, y^(0) is well defined. Hence, by part c, we can write

y^(k) = ((1 + α)/(1 + 2α))^k y^(0).

Because α > 0,
(1 + α)/(1 + 2α) < 1,
which implies that y^(k) → 0. But

1 = ||x^(k)||
= sqrt((x1^(k))² + (x2^(k))²)
= sqrt((x2^(k))² [(x1^(k))²/(x2^(k))² + 1])
= |x2^(k)| sqrt((y^(k))² + 1),

which implies that

|x2^(k)| = 1/sqrt((y^(k))² + 1).

Because y^(k) → 0, we have |x2^(k)| → 1. By the expression for x2^(k+1) in part c, we see that the sign of x2^(k) does
not change with k. Hence, we deduce that either x2^(k) → 1 or x2^(k) → −1. This also implies that x1^(k) → 0.
Hence, x^(k) converges to a solution to the problem.
e. If x2^(0) = 0, then x2^(k) = 0 for all k, which means that x1^(k) = 1 or −1 for all k. In this case, the algorithm
is stuck at the initial condition [1, 0]^T or [−1, 0]^T (which are in fact the minimizers).
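The behavior derived in parts c–e can be reproduced in a few lines. The simulation below is not part of the manual; it takes Q = diag(1, 2) and α = 0.1 as a concrete instance, assumed consistent with the update equations in part c.

```python
import numpy as np

Q = np.diag([1.0, 2.0])   # concrete Q (assumption consistent with part c)
alpha = 0.1

x = np.array([0.8, 0.6])  # x2^(0) != 0, so y^(0) is well defined
for _ in range(300):
    z = x + alpha * (Q @ x)     # gradient step, with grad f(x) = Q x
    x = z / np.linalg.norm(z)   # projection onto the unit sphere

# y^(k) = x1/x2 decays by the factor (1 + alpha)/(1 + 2 alpha) < 1,
# so x^(k) converges to the solution [0, 1]^T from part b
assert np.allclose(x, [0.0, 1.0], atol=1e-8)
```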
23.2
a. Yes. To show this, suppose that x^(k) ∈ Ω is a global minimizer of the given problem. Then, for all x ∈ Ω,
x ≠ x^(k), we have c^T x ≥ c^T x^(k). Rewriting, we obtain c^T(x − x^(k)) ≥ 0. Recall that

Π[x^(k) − α∇f(x^(k))] = arg min_{x∈Ω} ||x − (x^(k) − α∇f(x^(k)))||²
= arg min_{x∈Ω} ||x − x^(k) + αc||².

But, for any x ∈ Ω, x ≠ x^(k),

||x − x^(k) + αc||² = ||x − x^(k)||² + α²||c||² + 2αc^T(x − x^(k))
> α²||c||²,

where we used the facts that ||x − x^(k)||² > 0, α > 0, and c^T(x − x^(k)) ≥ 0. On the other hand,
||x^(k) − x^(k) + αc||² = α²||c||². Hence,

x^(k+1) = Π[x^(k) − α∇f(x^(k))] = x^(k).
b. No. Counterexample:

23.3
a. Suppose x^(k) satisfies the FONC. Then, ∇f(x^(k)) = 0. Hence, x^(k+1) = x^(k). Conversely, suppose x^(k)
does not satisfy the FONC. Then, ∇f(x^(k)) ≠ 0. Hence, α_k > 0, and so x^(k+1) ≠ x^(k).
b. Case (i): Suppose x^(k) is a corner point. Without loss of generality, take x^(k) = [1, 1]^T. (We can do this
because any other corner point can be mapped to this point by changing variables x_i to −x_i as appropriate.)
Note that any feasible direction d at x^(k) = [1, 1]^T satisfies d ≤ 0. Therefore,

x^(k+1) = x^(k) ⟺ −∇f(x^(k)) ≥ 0
⟺ d^T ∇f(x^(k)) ≥ 0 for all feasible d at x^(k)
⟺ x^(k) satisfies FONC.

Case (ii): Suppose x^(k) is not a corner point (i.e., is an edge point). Without loss of generality, take
x^(k) ∈ {x : x1 = 1, −1 < x2 < 1}. (We can do this because any other edge point can be mapped to this
point by changing variables x_i to −x_i as appropriate.) Note that any feasible direction d at x^(k) ∈ {x : x1 =
1, −1 < x2 < 1} satisfies d1 ≤ 0. Therefore,

x^(k+1) = x^(k) ⟺ −∇f(x^(k)) = [a, 0]^T, a ≥ 0
⟺ d^T ∇f(x^(k)) ≥ 0 for all feasible d at x^(k)
⟺ x^(k) satisfies FONC.
23.4
By definition of Π, we have

Π[x0 + y] = arg min_{x∈Ω} ||x − (x0 + y)||
= arg min_{x∈Ω} ||(x − x0) − y||.

Because Ax0 = b, any x ∈ Ω can be written as x = x0 + z with z = x − x0 ∈ N(A). Hence,

arg min_{x∈Ω} ||(x − x0) − y|| = x0 + arg min_{z∈N(A)} ||z − y||.

The term arg min_{z∈N(A)} ||z − y|| is simply the orthogonal projection of y onto N(A). By Exercise 6.7, we
have
arg min_{z∈N(A)} ||z − y|| = Py,
where P = I − A^T(AA^T)^{-1}A. Hence,

Π[x0 + y] = x0 + Py.
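The formula Π[x0 + y] = x0 + Py is easy to test numerically; the sketch below (with an arbitrary A, b, and y, not from the manual) confirms that x0 + Py is feasible and is no farther from x0 + y than any other feasible point.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])

x0 = np.linalg.lstsq(A, b, rcond=None)[0]  # a particular solution of Ax = b
P = np.eye(3) - A.T @ np.linalg.inv(A @ A.T) @ A  # orthogonal projector onto N(A)

y = np.array([0.3, -1.2, 0.7])
proj = x0 + P @ y   # claimed projection of x0 + y onto {x : Ax = b}

assert np.allclose(A @ proj, b)  # proj is feasible
d0 = np.linalg.norm(proj - (x0 + y))
rng = np.random.default_rng(2)
for _ in range(50):
    w = proj + P @ rng.standard_normal(3)  # any other feasible point
    assert np.linalg.norm(w - (x0 + y)) >= d0 - 1e-12
```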

23.5
Since α_k ≥ 0 is a minimizer of φ_k(α) = f(x^(k) − αPg^(k)), we apply the FONC to φ_k to obtain

0 = φ_k′(α_k) = −(Q(x^(k) − α_k Pg^(k)) − b)^T Pg^(k).

Note that x^(k)T Q − b^T = g^(k)T. Hence,

α_k = (g^(k)T Pg^(k))/(g^(k)T PQPg^(k)).
23.6
By Exercise 23.5, the projected steepest descent algorithm applied to this problem (for which Q = I_n and
g^(k) = x^(k), so that α_k = 1) takes the form

x^(k+1) = x^(k) − Px^(k)
= (I_n − P)x^(k)
= A^T(AA^T)^{-1}Ax^(k).

Because Ax^(k) = b, this gives x^(k+1) = A^T(AA^T)^{-1}b, which solves the problem (see Section 12.3).

23.7
a. Define
φ_k(α) = f(x^(k) − αP∇f(x^(k))).
By the chain rule,

φ_k′(α_k) = (dφ_k/dα)(α_k)
= −(∇f(x^(k) − α_k P∇f(x^(k))))^T P∇f(x^(k))
= −(∇f(x^(k+1)))^T P∇f(x^(k)).

Since α_k minimizes φ_k, φ_k′(α_k) = 0, and thus g^(k+1)T Pg^(k) = 0.
b. We have x^(k+1) − x^(k) = −α_k Pg^(k) and x^(k+2) − x^(k+1) = −α_{k+1} Pg^(k+1). Therefore,

(x^(k+2) − x^(k+1))^T(x^(k+1) − x^(k)) = α_{k+1}α_k g^(k+1)T P^T Pg^(k)
= α_{k+1}α_k g^(k+1)T Pg^(k)
= 0

by part a, and the fact that P = P^T = P².

23.8
a. minimize f(x) + γP(x).
b. Let x̄ be a global minimizer of the unconstrained problem in part a. Suppose x̄ ∉ Ω. Then, P(x̄) > 0
by definition of P. Because x̄ is a global minimizer of the unconstrained problem, we have

f(x̄) + γP(x̄) ≤ f(x*) + γP(x*)
= f(x*),

which implies that

f(x̄) ≤ f(x*) − γP(x̄) < f(x*).
23.9
We use the penalty method. First, we construct the unconstrained objective function with penalty parameter
γ:
f_γ(x) = x1² + 2x2² + γ(x1 + x2 − 3)².
Setting the gradient of f_γ to zero gives

2(1 + γ)x1 + 2γx2 − 6γ = 0
2γx1 + 2(2 + γ)x2 − 6γ = 0.

Because f_γ is a quadratic with positive definite quadratic term, it is easy to find its minimizer:

x^γ = (1/(1 + 2/(3γ))) [ 2 ; 1 ].

Now letting γ → ∞, we obtain

x = [ 2 ; 1 ].

(It is easy to verify, using other means, that this is indeed the correct solution.)
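The limit above can be confirmed numerically by solving the FONC system for increasing γ (a check, not part of the manual):

```python
import numpy as np

def penalty_minimizer(gamma):
    # FONC for f_gamma(x) = x1^2 + 2 x2^2 + gamma (x1 + x2 - 3)^2
    H = np.array([[2 * (1 + gamma), 2 * gamma],
                  [2 * gamma, 2 * (2 + gamma)]])
    return np.linalg.solve(H, np.array([6 * gamma, 6 * gamma]))

# Closed form: x^gamma = (1/(1 + 2/(3 gamma))) [2, 1]^T
g = 10.0
assert np.allclose(penalty_minimizer(g),
                   (1 / (1 + 2 / (3 * g))) * np.array([2.0, 1.0]))

# As gamma grows, the minimizer approaches the constrained solution [2, 1]^T
assert np.allclose(penalty_minimizer(1e8), [2.0, 1.0], atol=1e-6)
```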
23.10
Using the penalty method, we construct the unconstrained problem

minimize x + γ(max(a − x, 0))².

To find the solution to the above problem, we use the FONC. It is easy to see that the solution x^γ satisfies
x^γ < a. The derivative of the above objective function in the region x < a is 1 + 2γ(x − a). Thus, by the
FONC, we have x^γ = a − 1/(2γ). Since the true solution is at a, the difference is 1/(2γ). Therefore, for
the difference to satisfy 1/(2γ) ≤ ε, we need γ ≥ 1/(2ε). The smallest such γ is 1/(2ε).
23.11
a. We have

(1/2)||x||² + γ||Ax − b||² = (1/2) x^T [ 1 + 2γ  2γ ; 2γ  1 + 2γ ] x − x^T [ 2γ ; 2γ ] + γ.

The above is a quadratic with positive definite Hessian. Therefore, the minimizer is

x^γ = [ 1 + 2γ  2γ ; 2γ  1 + 2γ ]^{-1} [ 2γ ; 2γ ]
= (1/(2 + 1/(2γ))) [ 1 ; 1 ].

Hence,

lim_{γ→∞} x^γ = (1/2)[ 1 ; 1 ].

The solution to the original constrained problem is (see Section 12.3)

x* = A^T(AA^T)^{-1}b = (1/2)[ 1 ; 1 ].

b. We represent the objective function of the associated unconstrained problem as

(1/2)||x||² + γ||Ax − b||² = (1/2) x^T(I_n + 2γA^T A)x − x^T(2γA^T b) + γb^T b.

The above is a quadratic with positive definite Hessian. Therefore, the minimizer is

x^γ = (I_n + 2γA^T A)^{-1}(2γA^T b)
= ((1/(2γ))I_n + A^T A)^{-1} A^T b.

Let A = U[S O]V^T be the singular value decomposition of A. For simplicity, denote ε = 1/(2γ). We have

x^γ = (εI_n + A^T A)^{-1} A^T b
= (εI_n + V [ S ; O ] U^T U [ S  O ] V^T)^{-1} A^T b
= (εI_n + V [ S²  O ; O  O ] V^T)^{-1} A^T b
= V [ εI_m + S²  O ; O  εI_{n−m} ]^{-1} V^T A^T b.

Note that
V^T A^T = [ S ; O ] U^T.
Also,
[ εI_m + S²  O ; O  εI_{n−m} ]^{-1} = [ (εI_m + S²)^{-1}  O ; O  ε^{-1}I_{n−m} ],
where (εI_m + S²)^{-1} is diagonal. Hence,

x^γ = V [ (εI_m + S²)^{-1}  O ; O  ε^{-1}I_{n−m} ] [ S ; O ] U^T b
= V [ S ; O ] (εI_m + S²)^{-1} U^T b
= V [ S ; O ] U^T U(εI_m + S²)^{-1} U^T b
= A^T U(εI_m + S²)^{-1} U^T b.

Note that as γ → ∞, ε → 0, and

U(εI_m + S²)^{-1}U^T → U(S²)^{-1}U^T.

But,
U(S²)^{-1}U^T = U S^{-2} U^T = (AA^T)^{-1}.
Therefore,
x^γ → A^T(AA^T)^{-1}b = x*.
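Part b's conclusion can be sanity-checked on the concrete instance of part a, taking A = [1 1] and b = 1 (a numerical check, not part of the manual):

```python
import numpy as np

A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def x_gamma(gamma):
    # Minimizer of (1/2)||x||^2 + gamma ||Ax - b||^2
    n = A.shape[1]
    return np.linalg.solve(np.eye(n) + 2 * gamma * A.T @ A, 2 * gamma * A.T @ b)

# gamma = 1 reproduces the closed form (1/(2 + 1/(2 gamma))) [1, 1]^T
assert np.allclose(x_gamma(1.0), np.ones(2) / 2.5)

# As gamma -> infinity, x^gamma -> A^T (A A^T)^{-1} b = x*
x_star = A.T @ np.linalg.inv(A @ A.T) @ b
assert np.allclose(x_gamma(1e8), x_star, atol=1e-6)
```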

## 24. Multi-Objective Optimization

24.1
The MATLAB code is as follows:
function multi_op
%MULTI_OP, illustrates multi-objective optimization.

clear
clc
disp(' ')
disp('This is a demo illustrating multi-objective optimization.')
disp('The numerical example is a modification of the example')
disp('from the 2002 book by A. Osyczka,')
disp('Example 5.1 on pages 101--105')
disp('------------------------------------------------------------')
disp('Select the population size denoted POPSIZE, for example, 50.')
disp(' ')
POPSIZE=input('Population size POPSIZE = ');
disp('------------------------------------------------------------')
disp('Select the number of iterations denoted NUMITER; e.g., 10.')
disp(' ')
NUMITER=input('Number of iterations NUMITER = ');
disp(' ')
disp('------------------------------------------------------------')

% Main
for i = 1:NUMITER
fprintf('Working on Iteration %.0f...\n',i)
xmat = genxmat(POPSIZE);
if i~=1
for j = 1:length(xR)
xmat = [xmat;xR{j}];
end
end
[xR,fR] = Select_P(xmat);
fprintf('Number of Pareto solutions: %.0f\n',length(fR))
end
disp(' ')
disp('------------------------------------------------------------')

fprintf(' Pareto solutions \n')
celldisp(xR)
disp(' ')
disp('------------------------------------------------------------')
fprintf(' Objective vector values \n')
celldisp(fR)
xlabel('f_1','Fontsize',16)
ylabel('f_2','Fontsize',16)
title('Pareto optimal front','Fontsize',16)
set(gca,'Fontsize',16)
grid
for i=1:length(xR)
xx(i)=xR{i}(1);
yy(i)=xR{i}(2);
end
XX=[xx; yy];
figure
axis([1 7 5 10])
hold on
for i=1:size(XX,2)
plot(XX(1,i),XX(2,i),'marker','o','markersize',6)
end
xlabel('x_1','Fontsize',16)
ylabel('x_2','Fontsize',16)
title('Pareto optimal solutions','Fontsize',16)
set(gca,'Fontsize',16)
grid
hold off
figure
axis([-2 10 2 13])
hold on
plot([2 6],[5 5],'marker','o','markersize',6)
plot([6 6],[5 9],'marker','o','markersize',6)
plot([2 6],[9 9],'marker','o','markersize',6)
plot([2 2],[5 9],'marker','o','markersize',6)
for i=1:size(XX,2)
plot(XX(1,i),XX(2,i),'marker','x','markersize',10)
end
x1=-2:.2:10;
x2=2:.2:13;
[X1, X2]=meshgrid(x1,x2);
Z1=-X1.^2 - X2;
v=[0 -5 -7 -10 -15 -20 -30 -40 -60];
cs1=contour(X1,X2,Z1,v);
clabel(cs1)
Z2=X1+X2.^2;
v2=[20 25 35 40 60 80 100 120];
cs2=contour(X1,X2,Z2,v2);

clabel(cs2)
xlabel('x_1','Fontsize',16)
ylabel('x_2','Fontsize',16)
title('Level sets of f_1 and f_2, and Pareto optimal points','Fontsize',16)
set(gca,'Fontsize',16)
grid
hold off

function xmat0 = genxmat(POPSIZE)

xmat0 = rand(POPSIZE,2);
xmat0(:,1) = xmat0(:,1)*4+2;
xmat0(:,2) = xmat0(:,2)*4+5;

function [xR,fR] = Select_P(xmat)

% Declaration
J = size(xmat,1);

% Init
Rset = 1;  % the first candidate point starts the Pareto set
j = 1;
isstep7 = 0;

% Step 1
x{1} = xmat(1,:);
f{1} = evalfcn(x{1});

% Step 2
while j < J
j = j+1;

% Step 3
r = 1;
rdel = [];
q = 0;
R = length(Rset);

for k = 1:size(xmat,1)
x{k} = xmat(k,:);
f{k} = evalfcn(x{k});
end

% Step 4
while 1
%for r=1:R
if all(f{j}<f{Rset(r)})
q = q+1;
rdel = [rdel r];

else

% Step 5
if all(f{j}>=f{Rset(r)})
break
end
end

% Step 6

r=r+1;
if r > R
isstep7 = 1;
break
end
end

% Step 7
if isstep7 == 1
isstep7 = 0;
if (q~=0)
Rset(rdel) =[];
Rset = [Rset j];
else

%Step 8
Rset = [Rset j];
end
end

for k = 1:size(xmat,1)
x{k} = xmat(k,:);
f{k} = evalfcn(x{k});
end

R = length(Rset);
end

% Return the Pareto solution.

for i = 1:length(Rset)
xR{i} = x{Rset(i)};
fR{i} = f{Rset(i)};
end

x1 = [];
y1 = [];
x2 = [];
y2 = [];
for k = 1:size(xmat,1)
if ismember(k,Rset)
x1 = [x1 f{k}(1)];
y1 = [y1 f{k}(2)];
else
x2 = [x2 f{k}(1)];
y2 = [y2 f{k}(2)];
end
end
%newplot
plot(x1,y1,'xr',x2,y2,'.b')
drawnow

function y = f1(x)
% y = x(1)^2+x(2);
% The above function is the original function in Osyczka's 2002 book
% (Example 5.1, page 101).
% Its negative makes a much more interesting example.

y = -(x(1)^2+x(2));

function y = f2(x)
y = x(1)+x(2)^2;

function y = evalfcn(x)
y(1) = f1(x);
y(2) = f2(x);

24.2
a. We proceed using contraposition. Assume that x* is not Pareto optimal. Therefore, there exists a point
x̃ ∈ Ω such that
f_i(x̃) ≤ f_i(x*) for all i = 1, 2, . . . , ℓ,
and for some j, f_j(x̃) < f_j(x*). Since c > 0,

c^T f(x*) > c^T f(x̃),

which implies that x* is not a global minimizer for the weighted-sum problem.
For the converse, consider the following counterexample: Ω = {x ∈ R² : ||x|| ≥ 1, x ≥ 0} and f(x) =
[x1, x2]^T. It is easy to see that the Pareto front is {x : ||x|| = 1, x ≥ 0} (i.e., the part of the unit circle in the
nonnegative quadrant). So x* = (1/√2)[1, 1]^T is a Pareto minimizer. However, there is no c > 0 such that
x* is a global minimizer of the weighted-sum problem. To see this, fix c > 0 (assuming c1 ≤ c2 without loss
of generality) and consider the objective function value f(x*) = (c1 + c2)/√2 for the weighted-sum problem.
Now, the point x0 = [1, 0]^T is also a feasible point. Moreover, f(x0) = c1 < (c1 + c2)/√2 = f(x*). So x* is
not a global minimizer of the weighted-sum problem.
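The counterexample can be verified numerically (a check, not part of the manual; it assumes the feasible set Ω = {x : ||x|| ≥ 1, x ≥ 0} with f(x) = [x1, x2]^T as above): for every weight vector c > 0, one of the corner points of the arc achieves a strictly smaller weighted sum than x*.

```python
import numpy as np

# Omega = {x : ||x|| >= 1, x >= 0}, f(x) = [x1, x2]^T; x* = (1/sqrt(2))[1, 1]^T
x_star = np.array([1.0, 1.0]) / np.sqrt(2)

theta = np.linspace(0, np.pi / 2, 1001)
arc = np.column_stack((np.cos(theta), np.sin(theta)))  # the Pareto front

rng = np.random.default_rng(4)
for _ in range(200):
    c = rng.random(2) + 1e-3  # arbitrary weights c > 0
    corner = np.array([1.0, 0.0]) if c[0] <= c[1] else np.array([0.0, 1.0])
    assert c @ corner < c @ x_star                # the corner strictly beats x*
    assert np.all(arc @ c >= c @ corner - 1e-12)  # and is best on the front
```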
b. We proceed using contraposition. Assume that x* is not Pareto optimal. Therefore, there exists a point
x̃ ∈ Ω such that
f_i(x̃) ≤ f_i(x*) for all i = 1, 2, . . . , ℓ,
and for some j, f_j(x̃) < f_j(x*). By assumption, for all i = 1, . . . , ℓ, f_i(x*) ≥ 0, which implies that

Σ_{i=1}^ℓ (f_i(x*))^p > Σ_{i=1}^ℓ (f_i(x̃))^p

(because p > 0). Hence, x* is not a global minimizer for the minimum-norm problem.
For the converse, consider the following counterexample: Ω = {x ∈ R² : x1 + 2x2 ≥ 2, x ≥ 0} and
f(x) = [x1, x2]^T. It is easy to see that the Pareto front is {x : x1 + 2x2 = 2, x ≥ 0}. So x* = [1, 1/2]^T is
a Pareto minimizer. However, there is no p > 0 such that x* is a global minimizer of the minimum-norm
problem. To see this, fix p > 0 and consider the objective function value f(x*) = 1 + (1/2)^p for the minimum-
norm problem. Now, the point x0 = [0, 1]^T is also a feasible point. Moreover, f(x0) = 1 < 1 + (1/2)^p = f(x*).
So x* is not a global minimizer of the minimum-norm problem.
c. For the first part, consider the following counterexample: Ω = {x ∈ R² : x1 + x2 ≥ 2, x ≥ 0} and
f(x) = [x1, x2]^T. The Pareto front is {x : x1 + x2 = 2, x ≥ 0}, and x* = [1/2, 3/2]^T is a Pareto minimizer.
But

f(x*) = max{f1(x*), f2(x*)}
= max{1/2, 3/2}
= 3/2.

However, x0 = [1, 1]^T is also a feasible point, and f(x0) = 1 < f(x*). Hence, x* is not a global minimizer
of the minimax problem.
For the second part, suppose Ω = {x ∈ R² : x1 ≤ 2} and f(x) = [x1, 2]^T. Then, for any x ∈ Ω,
max{f1(x), f2(x)} = 2. So any x ∈ Ω is a global minimizer of the minimax (single-objective) problem.
However, consider another point x̃ ∈ Ω such that x̃1 < x1*. Then, f1(x̃) < f1(x*) and f2(x̃) = f2(x*). Hence,
x* is not a Pareto minimizer.
In fact, in the above example, no Pareto minimizer exists. However, if we set Ω = {x ∈ R² : 1 ≤ x1 ≤ 2},
then the counterexample is still valid, but in this case any point of the form [1, x2]^T is a Pareto minimizer.
24.3
Let
f̃(x) = c^T f(x),
where x ∈ Ω = {x : h(x) = 0}. The function f̃ is convex because all the functions f_i are convex and
c_i > 0, i = 1, 2, . . . , ℓ. We can represent the given first-order condition in the following form: for any feasible
direction d at x*, we have
d^T ∇f̃(x*) ≥ 0.
By Theorem 22.7, the point x* is a global minimizer of f̃ over Ω. Therefore,

f̃(x*) ≤ f̃(x) for all x ∈ Ω.

That is,

Σ_{i=1}^ℓ c_i f_i(x*) ≤ Σ_{i=1}^ℓ c_i f_i(x) for all x ∈ Ω.

To finish the proof, we now assume that x* is not Pareto optimal and the above condition holds. We then
proceed using proof by contradiction. Because, by assumption, x* is not Pareto optimal, there exists a
point x̃ ∈ Ω such that
f_i(x̃) ≤ f_i(x*) for all i = 1, 2, . . . , ℓ,
and for some j, f_j(x̃) < f_j(x*). Since for all i = 1, 2, . . . , ℓ, c_i > 0, we must have

Σ_{i=1}^ℓ c_i f_i(x*) > Σ_{i=1}^ℓ c_i f_i(x̃),

which contradicts the above condition, Σ_{i=1}^ℓ c_i f_i(x*) ≤ Σ_{i=1}^ℓ c_i f_i(x) for all x ∈ Ω. This completes the
proof. (See also Exercise 24.2, part a.)
24.4
Let
f̃(x) = c^T f(x),
where x ∈ Ω = {x : h(x) = 0}. The function f̃ is convex because all the functions f_i are convex and c_i > 0,
i = 1, 2, . . . , ℓ. We can represent the given Lagrange condition in the form

Df̃(x*) + λ*^T Dh(x*) = 0^T
h(x*) = 0.

Because the problem of minimizing the convex function f̃ over Ω is a convex problem, the Lagrange condition
above implies that the point x* is a global minimizer of f̃ over Ω. Therefore,

f̃(x*) ≤ f̃(x) for all x ∈ Ω.

That is,

Σ_{i=1}^ℓ c_i f_i(x*) ≤ Σ_{i=1}^ℓ c_i f_i(x) for all x ∈ Ω.

To finish the proof, we now assume that x* is not Pareto optimal and the above condition holds. We then
proceed using proof by contradiction. Because, by assumption, x* is not Pareto optimal, there exists a
point x̃ ∈ Ω such that
f_i(x̃) ≤ f_i(x*) for all i = 1, 2, . . . , ℓ,
and for some j, f_j(x̃) < f_j(x*). Since for all i = 1, 2, . . . , ℓ, c_i > 0, we must have

Σ_{i=1}^ℓ c_i f_i(x*) > Σ_{i=1}^ℓ c_i f_i(x̃),

which contradicts the above condition, Σ_{i=1}^ℓ c_i f_i(x*) ≤ Σ_{i=1}^ℓ c_i f_i(x) for all x ∈ Ω. This completes the
proof. (See also Exercise 24.2, part a.)
24.5
Let
f̃(x) = c^T f(x),
where x ∈ Ω = {x : g(x) ≤ 0}. The function f̃ is convex because all the functions f_i are convex and c_i > 0,
i = 1, 2, . . . , ℓ. We can represent the given KKT condition in the form

μ* ≥ 0
Df̃(x*) + μ*^T Dg(x*) = 0^T
μ*^T g(x*) = 0
g(x*) ≤ 0.

Because the problem of minimizing the convex function f̃ over Ω is a convex problem, the KKT condition
above implies that the point x* is a global minimizer of f̃ over Ω. Therefore,

f̃(x*) ≤ f̃(x) for all x ∈ Ω.

That is,

Σ_{i=1}^ℓ c_i f_i(x*) ≤ Σ_{i=1}^ℓ c_i f_i(x) for all x ∈ Ω.

To finish the proof, we now assume that x* is not Pareto optimal and the above condition holds. We then
proceed using proof by contradiction. Because, by assumption, x* is not Pareto optimal, there exists a
point x̃ ∈ Ω such that
f_i(x̃) ≤ f_i(x*) for all i = 1, 2, . . . , ℓ,
and for some j, f_j(x̃) < f_j(x*). Since for all i = 1, 2, . . . , ℓ, c_i > 0, we must have

Σ_{i=1}^ℓ c_i f_i(x*) > Σ_{i=1}^ℓ c_i f_i(x̃),

which contradicts the above condition, Σ_{i=1}^ℓ c_i f_i(x*) ≤ Σ_{i=1}^ℓ c_i f_i(x) for all x ∈ Ω. This completes the
proof. (See also Exercise 24.2, part a.)
24.6
Let
f̃(x) = c^T f(x),
where x ∈ Ω = {x : h(x) = 0, g(x) ≤ 0}. The function f̃ is convex because all the functions f_i are convex
and c_i > 0, i = 1, 2, . . . , ℓ. We can represent the given KKT-type condition in the form

μ* ≥ 0
Df̃(x*) + λ*^T Dh(x*) + μ*^T Dg(x*) = 0^T
μ*^T g(x*) = 0
h(x*) = 0
g(x*) ≤ 0.

Because the problem of minimizing the convex function f̃ over Ω is a convex problem, the condition above
implies that the point x* is a global minimizer of f̃ over Ω. Therefore,

f̃(x*) ≤ f̃(x) for all x ∈ Ω.

That is,

Σ_{i=1}^ℓ c_i f_i(x*) ≤ Σ_{i=1}^ℓ c_i f_i(x) for all x ∈ Ω.

To finish the proof, we now assume that x* is not Pareto optimal and the above condition holds. We then
proceed using proof by contradiction. Because, by assumption, x* is not Pareto optimal, there exists a
point x̃ ∈ Ω such that
f_i(x̃) ≤ f_i(x*) for all i = 1, 2, . . . , ℓ,
and for some j, f_j(x̃) < f_j(x*). Since for all i = 1, 2, . . . , ℓ, c_i > 0, we must have

Σ_{i=1}^ℓ c_i f_i(x*) > Σ_{i=1}^ℓ c_i f_i(x̃),

which contradicts the above condition, Σ_{i=1}^ℓ c_i f_i(x*) ≤ Σ_{i=1}^ℓ c_i f_i(x) for all x ∈ Ω. This completes the
proof. (See also Exercise 24.2, part a.)
24.7
The given minimax problem is equivalent to the problem given in the hint:

minimize z
subject to f_i(x) − z ≤ 0,  i = 1, 2.

Suppose [x*^T, z*]^T is a local minimizer for the above problem (which is equivalent to x* being a local
minimizer for the original problem). Then, by the KKT Theorem, there exists μ* ≥ 0, where μ* ∈ R²,
such that

[0^T, 1] + μ*^T [ ∇f1(x*)^T  −1 ; ∇f2(x*)^T  −1 ] = 0^T
μ*^T [ f1(x*) − z* ; f2(x*) − z* ] = 0.

Rewriting the first equation above, we get

μ1*∇f1(x*) + μ2*∇f2(x*) = 0,  μ1* + μ2* = 1.

Rewriting the second equation, we get

μi*(f_i(x*) − z*) = 0,  i = 1, 2.

Suppose f_i(x*) < max{f1(x*), f2(x*)}, where i ∈ {1, 2}. Then, z* > f_i(x*). Hence, by the above equation
we conclude that μi* = 0.