
# Math 404

## Linear and Nonlinear Programming

Classical Optimization Techniques
Inequality Constraints

## Dr. Ahmed Sayed AbdelSamea

Giza, Egypt, Spring 2018
aabdelsamea@zewailcity.edu.eg
## Multivariable Optimization with Equality Constraints

### Solution by the Method of Lagrange Multipliers

Joseph-Louis Lagrange (1736–1813) was an Italian/French
mathematician and astronomer. He made significant
contributions to the fields of analysis, number theory,
variational calculus, mathematical physics, and both classical
and celestial mechanics.

**Problem with two variables and one constraint (n = 2, m = 1)**

$$\min f(x_1, x_2) \quad \text{s.t.} \quad g(x_1, x_2) = 0$$

Construct the Lagrange function $L(x_1, x_2, \lambda)$, where $\lambda$ is the
Lagrange multiplier:

$$L(x_1, x_2, \lambda) = f(x_1, x_2) + \lambda\, g(x_1, x_2)$$

The necessary conditions for an extreme point are:

$$\frac{\partial L}{\partial x_1}(x_1, x_2, \lambda) = \frac{\partial f}{\partial x_1}(x_1, x_2) + \lambda \frac{\partial g}{\partial x_1}(x_1, x_2) = 0$$

$$\frac{\partial L}{\partial x_2}(x_1, x_2, \lambda) = \frac{\partial f}{\partial x_2}(x_1, x_2) + \lambda \frac{\partial g}{\partial x_2}(x_1, x_2) = 0$$

$$\frac{\partial L}{\partial \lambda}(x_1, x_2, \lambda) = g(x_1, x_2) = 0$$
**Example**

$$\min f(x, y) = k x^{-1} y^{-2} \quad \text{s.t.} \quad g(x, y) = x^2 + y^2 - a^2 = 0$$
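As an illustration (the worked steps are not reproduced on these slides), the three necessary conditions for this example can be solved symbolically. The sketch below fixes $k = a = 1$ for concreteness and assumes `sympy` is available.

```python
import sympy as sp

# Slide example with k = a = 1 (values chosen here purely for illustration):
#   min k/(x*y**2)  s.t.  x**2 + y**2 - a**2 = 0
x, y = sp.symbols('x y', positive=True)  # restrict to the positive branch
lam = sp.symbols('lam')

f = 1 / (x * y**2)
g = x**2 + y**2 - 1
L = f + lam * g  # Lagrange function

# Necessary conditions: dL/dx = dL/dy = dL/dlam = 0
sol = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]
# Stationary point: x = 1/sqrt(3), y = sqrt(2/3)
```

The solver recovers $y^2 = 2x^2$, so the constraint gives $x^* = a/\sqrt{3}$, $y^* = a\sqrt{2/3}$ in general.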
$$\min f(x_1, x_2) \quad \text{s.t.} \quad g_1(x_1, x_2) = 0, \quad g_2(x_1, x_2) = 0$$

The necessary condition for $(x_1^*, x_2^*)$ on the feasible set $C$ to be an extreme point is:

$$-\nabla f = \lambda_1 \nabla g_1 + \lambda_2 \nabla g_2$$
**Problem with n variables and m constraints**

$$\min_{\boldsymbol{x}} f(\boldsymbol{x}) \quad \text{s.t.} \quad g_j(\boldsymbol{x}) = 0, \quad j = 1, 2, \ldots, m$$

Construct the Lagrange function $L$ with a Lagrange multiplier
$\lambda_j$ for each constraint $g_j$:

$$L(x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_m) = f(\boldsymbol{x}) + \sum_{j=1}^{m} \lambda_j g_j(\boldsymbol{x})$$

$$L(\boldsymbol{x}, \boldsymbol{\lambda}) = f(\boldsymbol{x}) + \boldsymbol{\lambda}^T \boldsymbol{g}(\boldsymbol{x}), \qquad \boldsymbol{x} \in \mathbb{R}^n,\ \boldsymbol{\lambda}, \boldsymbol{g} \in \mathbb{R}^m$$
**Necessary conditions for the general problem (n, m)**

Treating $L$ as a function of the $n + m$ unknowns:

$$\frac{\partial L}{\partial x_i} = \frac{\partial f}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n \quad (n \text{ equations})$$

$$\frac{\partial L}{\partial \lambda_j} = g_j(\boldsymbol{x}) = 0, \quad j = 1, 2, \ldots, m \quad (m \text{ equations})$$

**Sufficient conditions for the general problem (n, m)**

$$Q = d\boldsymbol{x}^T\, \nabla_x^2 L(\boldsymbol{x}^*, \boldsymbol{\lambda}^*)\, d\boldsymbol{x} = \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\partial^2 L}{\partial x_i \partial x_j}(\boldsymbol{x}^*, \boldsymbol{\lambda}^*)\, dx_i\, dx_j > 0$$

for all admissible variations $d\boldsymbol{x}$, i.e., $\nabla_x^2 L \succ 0$ on the subspace of admissible variations.
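To make the sufficiency check concrete, here is a minimal numerical sketch on a toy problem (not from the slides): for $\min x_1^2 + x_2^2$ s.t. $x_1 + x_2 - 2 = 0$, admissible variations satisfy $\nabla g^T d\boldsymbol{x} = 0$, and $Q$ must be positive on that subspace.

```python
import numpy as np

# Toy problem (illustrative): min x1^2 + x2^2  s.t.  g(x) = x1 + x2 - 2 = 0
# Optimum: x* = (1, 1), lambda* = -2 (with L = f + lam*g).
hess_L = np.array([[2.0, 0.0],
                   [0.0, 2.0]])       # Hessian of L w.r.t. x (here simply 2I)
grad_g = np.array([1.0, 1.0])

# Admissible variations: dx with grad_g . dx = 0, spanned by (1, -1)
dx = np.array([1.0, -1.0])
assert abs(grad_g @ dx) < 1e-12       # dx is admissible

Q = dx @ hess_L @ dx                  # quadratic form on the admissible subspace
print(Q)                              # 4.0 > 0 -> constrained local minimum
```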
Equivalently, one can examine the roots $z$ of the determinantal equation formed from $\nabla_x^2 L(\boldsymbol{x}^*, \boldsymbol{\lambda}^*)$ bordered by the constraint gradients: all values of $z$ should be positive (negative) to have a local min (max).
**Example**

Find the dimensions of a cylindrical tin (with top and bottom)
made from sheet metal that maximize its volume, subject to a
total surface area of $24\pi$.

**Solution**

Let $x_1, x_2$ denote the radius of the base and the length of the tin,
respectively. The problem can be stated as

$$\max_{\boldsymbol{x}} f(x_1, x_2) = \pi x_1^2 x_2 \quad \text{s.t.} \quad 2\pi x_1^2 + 2\pi x_1 x_2 = 24\pi$$

The optimum is at $x_1^* = 2$, $x_2^* = 4$, $\lambda^* = -1$, and $f^* = 16\pi$.
(Check the sufficiency condition: $z < 0$, consistent with a maximum.)
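The stationarity system for this example can also be solved symbolically; the following sketch (assuming `sympy` is available) reproduces the optimum quoted above.

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)

f = sp.pi * x1**2 * x2                          # volume of the tin
g = 2*sp.pi*x1**2 + 2*sp.pi*x1*x2 - 24*sp.pi    # surface-area constraint = 0
L = f + lam * g                                 # Lagrange function

# Necessary conditions: dL/dx1 = dL/dx2 = dL/dlam = 0
sols = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)

# Keep the physically meaningful root (positive radius and length)
opt = [s for s in sols if s[x1] > 0][0]
# opt: x1 = 2, x2 = 4, lam = -1, giving f = 16*pi
```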
**Interpretation of Lagrange Multipliers**

Consider the problem with a single equality constraint

$$\min_{\boldsymbol{x}} f(\boldsymbol{x}) \quad \text{s.t.} \quad g(\boldsymbol{x}) = b \;\rightarrow\; \tilde{g}(\boldsymbol{x}) = b - g(\boldsymbol{x}) = 0$$

It can be shown that

$$df^* = \lambda^*\, db$$

Thus $\lambda^*$ denotes the sensitivity (rate of change) of $f^*$ w.r.t. $b$,
i.e., the marginal or incremental change in $f^*$ w.r.t. $b$ at $\boldsymbol{x}^*$. In
other words, $\lambda^*$ indicates how tightly the constraint is binding
at the optimum point: it measures the effect of a small relaxation or
tightening of the constraint on the optimum value of the objective.
**Proof**

We want the effect of a small change in $b$ on $f^*$:

$$\frac{d f(\boldsymbol{x}^*(b))}{db} = \nabla_x f(\boldsymbol{x}^*(b))^T \cdot \nabla_b \boldsymbol{x}^*(b)$$

From the Lagrange conditions, $\nabla_x f(\boldsymbol{x}^*(b)) = -\lambda^* \nabla_x \tilde{g}(\boldsymbol{x}^*(b))$.
However, $\tilde{g}(\boldsymbol{x}) = b - g(\boldsymbol{x})$, so $\nabla_x \tilde{g}(\boldsymbol{x}^*(b)) = -\nabla_x g(\boldsymbol{x}^*(b))$. Then

$$\frac{d f(\boldsymbol{x}^*(b))}{db} = \lambda^* \nabla_x g(\boldsymbol{x}^*(b))^T \cdot \nabla_b \boldsymbol{x}^*(b) = \lambda^* \frac{d\, g(\boldsymbol{x}^*(b))}{db} = \lambda^* (1)$$

since $g(\boldsymbol{x}^*(b)) = b$ along the optimum. Hence

$$df^* = \lambda^*\, db$$
The sensitivity of the optimal value $f^*$ w.r.t. the right-hand side $b$ at $\boldsymbol{x}^*$ is

$$df^* = \lambda^*\, db$$

where the following physical meaning can be attributed to $\lambda^*$:

1. $\lambda^* > 0$: a unit decrease in $b$ is positively valued, since it yields a
   smaller minimum value of the objective $f$. Hence $\lambda^*$ may be
   interpreted as the marginal gain (further reduction) in $f^*$ due to
   tightening the constraint. For a unit increase in $b$, $\lambda^*$ may be
   thought of as the marginal cost (increase) in $f^*$ due to relaxing
   the constraint.
2. $\lambda^* < 0$: the opposite interpretation holds.
3. $\lambda^* = 0$: a change in $b$ has no effect; the constraint is not binding.
**Example**

Solve

$$\max_{\boldsymbol{x}} f(x_1, x_2) = 2x_1 + x_2 + 10 \quad \text{s.t.} \quad x_1 + 2x_2^2 = 3$$

and find the effect of changing the right-hand side of the constraint
on the optimum value $f^*$.

**Solution**

The optimum: $x_1^* = 2.97$, $x_2^* = 0.13$, $\lambda^* = 2$, $f^* = 16.07$, with $z = -6.29$.

- If the constraint is tightened by 1 unit, then $db = -1$ and
  $df^* = \lambda^* db = -2 \rightarrow f^* + df^* = 14.07$.
- If the constraint is relaxed by 2 units, then $db = 2$ and
  $df^* = \lambda^* db = 4 \rightarrow f^* + df^* = 20.07$.
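Because stationarity fixes $\lambda^* = 2$ independently of the right-hand side $b$, the optimal value can be written in closed form and the sensitivity relation checked directly. A sketch (exact values here; the slide rounds $x_2^* = 1/8$ to 0.13):

```python
# Closed-form optimum of: max 2*x1 + x2 + 10  s.t.  x1 + 2*x2**2 = b
# Stationarity of L = f + lam*(b - x1 - 2*x2**2):
#   dL/dx1 = 2 - lam = 0        -> lam = 2
#   dL/dx2 = 1 - 4*lam*x2 = 0   -> x2 = 1/(4*lam) = 1/8
LAM_STAR = 2.0

def f_star(b):
    x2 = 1.0 / (4 * LAM_STAR)
    x1 = b - 2 * x2**2          # constraint solved for x1
    return 2 * x1 + x2 + 10

# Sensitivity check: df* = lam* * db (exact here, since f*(b) is linear in b)
base = f_star(3)                # 16.0625, slide rounds to 16.07
print(f_star(2) - base)         # tighten by 1: -2.0 = lam* * (-1)
print(f_star(5) - base)         # relax by 2:    4.0 = lam* * 2
```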
## Multivariable Optimization with Inequality Constraints

### Karush-Kuhn-Tucker (KKT) Conditions

Harold W. Kuhn (American mathematician, 1925–2014) and
Albert W. Tucker (Canadian mathematician, 1905–1995) first
published the conditions in 1951. Later scholars discovered that
the necessary conditions for this problem had been stated by
William Karush (American mathematician, 1917–1997) in his
master's thesis in 1939.

Consider the problem

$$\min_{\boldsymbol{x}} f(\boldsymbol{x}) \quad \text{s.t.} \quad g_j(\boldsymbol{x}) \le 0, \quad j \in J = \{1, 2, \ldots, m\}$$

The necessary conditions to be satisfied at a relative minimum of $f$ are

$$\frac{\partial f}{\partial x_i} + \sum_{j \in J_1} \lambda_j \frac{\partial g_j}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n$$

$$\lambda_j > 0, \quad j \in J_1$$

where $J_1 \subseteq J$ is the set of active constraints.
**Feasible directions**

A vector $\boldsymbol{S}$ is called a feasible direction from a point $\boldsymbol{x}$ if at least
a small step can be taken along $\boldsymbol{S}$ without immediately leaving the
feasible region. Thus, for problems with sufficiently smooth constraint
surfaces, a vector $\boldsymbol{S}$ satisfying

$$\boldsymbol{S}^T \nabla g_j < 0$$

for every active constraint $g_j$ is a feasible direction.
For directions that remain on the boundary of an active constraint, the weak form holds:

$$\boldsymbol{S}^T \nabla g_j \le 0$$

At a constrained local minimum,

$$-\nabla f = \sum_{j \in J_1} \lambda_j \nabla g_j, \qquad \lambda_j > 0, \; j \in J_1$$

For positive $\lambda$'s, there is no direction in the feasible domain along
which the function value can be decreased further.
**Notes**

- The KKT condition in vector form is

  $$-\nabla f = \sum_{j \in J_1} \lambda_j \nabla g_j, \qquad \lambda_j > 0, \; j \in J_1$$

  It states that the negative gradient of the objective function can be
  expressed as a linear combination of the gradients of the active constraints.
- These conditions are, in general, not sufficient to ensure a relative
  (local) minimum.
- For convex programming problems, the KKT conditions are
  necessary and sufficient for a global minimum.
- For a relative maximum, $\lambda_j < 0$, $j \in J_1$.
**Notes (continued)**

- If the set of active constraints is not known a priori, the KKT conditions are

  $$\frac{\partial f}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n$$

  $$\lambda_j g_j = 0, \quad g_j(\boldsymbol{x}) \le 0, \quad \lambda_j \ge 0, \qquad j = 1, 2, \ldots, m$$

- If the problem is one of maximization, or if the constraints are of the
  type $g_j(\boldsymbol{x}) \ge 0$, the $\lambda_j$ have to be non-positive.
- If the problem is one of maximization with constraints of the form
  $g_j(\boldsymbol{x}) \ge 0$, the $\lambda_j$ have to be non-negative.
**Example**

Consider the following optimization problem:

$$\min_{\boldsymbol{x}} f(x_1, x_2) = x_1^2 + x_2^2 \quad \text{s.t.} \quad x_1 + 2x_2 \le 15, \quad 1 \le x_i \le 10, \; i = 1, 2$$

Derive the conditions to be satisfied at $\boldsymbol{x} = (1, 7)$ for a search
direction $\boldsymbol{S}$ to be

a) a usable direction ($\boldsymbol{S}^T \nabla f < 0$)

b) a feasible direction ($\boldsymbol{S}^T \nabla g_j < 0$)

**Solution**

One such direction is $\boldsymbol{S} = (1, -1)$. Are there others?
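A quick numerical sketch (with the bound constraints rewritten in the standard $g(\boldsymbol{x}) \le 0$ form) confirms that $\boldsymbol{S} = (1, -1)$ is both usable and feasible at $\boldsymbol{x} = (1, 7)$:

```python
import numpy as np

x = np.array([1.0, 7.0])
S = np.array([1.0, -1.0])

grad_f = 2 * x                          # gradient of x1^2 + x2^2 -> (2, 14)

# Constraints active at (1, 7), written as g(x) <= 0:
#   g1 = x1 + 2*x2 - 15  (equals 0 here), grad = (1, 2)
#   g2 = 1 - x1          (equals 0 here), grad = (-1, 0)
active_grads = [np.array([1.0, 2.0]), np.array([-1.0, 0.0])]

usable = S @ grad_f < 0                          # -12 < 0 -> True
feasible = all(S @ g < 0 for g in active_grads)  # -1 < 0 and -1 < 0 -> True
print(usable, feasible)  # True True
```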
## Multivariable Optimization with Mixed Constraints

## Consider the optimization problem with both equality

and inequality constraints:

min 𝑓 𝒙
𝒙

𝐬. 𝐭. ℎ𝑘 𝒙 = 0, 𝑘 = 1,2, . . , 𝑝
𝑔𝒋 𝒙 ≤ 0, 𝑗 = 1,2, . . , 𝑚

24
### Karush-Kuhn-Tucker (KKT) Conditions
The necessary conditions to be satisfied at a relative minimum of $f$:

$$\frac{\partial f}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i} - \sum_{k=1}^{p} \beta_k \frac{\partial h_k}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n$$

$$\lambda_j g_j = 0, \quad g_j(\boldsymbol{x}) \le 0, \quad \lambda_j \ge 0, \qquad j = 1, 2, \ldots, m$$

$$h_k(\boldsymbol{x}) = 0, \qquad k = 1, 2, \ldots, p$$
### Constraint Qualification
**Theorem**

Let $\boldsymbol{x}^*$ be a feasible solution to the mixed-constraints problem.
If $\nabla g_j(\boldsymbol{x}^*)$, $j \in J_1$, and $\nabla h_k(\boldsymbol{x}^*)$, $k = 1, 2, \ldots, p$, are linearly
independent, then there exist $\boldsymbol{\lambda}^*$ and $\boldsymbol{\beta}^*$ such that $(\boldsymbol{x}^*, \boldsymbol{\lambda}^*, \boldsymbol{\beta}^*)$ satisfies

$$\frac{\partial f}{\partial x_i} + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i} - \sum_{k=1}^{p} \beta_k \frac{\partial h_k}{\partial x_i} = 0, \quad i = 1, 2, \ldots, n$$

$$\lambda_j g_j = 0, \quad g_j(\boldsymbol{x}) \le 0, \quad \lambda_j \ge 0, \qquad j = 1, 2, \ldots, m$$

$$h_k(\boldsymbol{x}) = 0, \qquad k = 1, 2, \ldots, p$$
**Notes**

- The requirement that $\nabla g_j(\boldsymbol{x}^*)$, $j \in J_1$, and $\nabla h_k(\boldsymbol{x}^*)$, $k = 1, 2, \ldots, p$,
  be linearly independent is called the constraint qualification.
- If the constraint qualification holds, then there exists $(\boldsymbol{x}^*, \boldsymbol{\lambda}^*, \boldsymbol{\beta}^*)$
  satisfying the KKT conditions; the converse may not be true.
- The constraint qualification is always satisfied for problems in which:
  - all the inequality and equality constraint functions are linear; or
  - all the inequality constraint functions are convex, all the equality
    constraint functions are linear, and at least one feasible point lies
    strictly inside the inequality constraints.
**Example**

Consider the following optimization problem:

$$\min_{\boldsymbol{x}} f(x_1, x_2) = (x_1 - 1)^2 + x_2^2 \quad \text{s.t.} \quad x_1^3 + 2x_2 \le 0, \quad x_1^3 - 2x_2 \le 0$$

Determine whether the constraint qualification and the KKT
conditions are satisfied at the optimum point.

**Solution**

Both are violated at $(0, 0)$, although that point is the minimum: the
active-constraint gradients $\nabla g_1(0,0) = (0, 2)$ and $\nabla g_2(0,0) = (0, -2)$
are linearly dependent.
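A small numerical sketch makes the failure concrete: the active-constraint gradients at $(0, 0)$ span only one dimension, and no multipliers can reproduce $-\nabla f$.

```python
import numpy as np

# At x* = (0, 0): f = (x1 - 1)^2 + x2^2, g1 = x1^3 + 2*x2, g2 = x1^3 - 2*x2
grad_f = np.array([-2.0, 0.0])          # (2*(x1 - 1), 2*x2) at (0, 0)
G = np.array([[0.0,  2.0],              # grad g1 = (3*x1^2,  2)
              [0.0, -2.0]])             # grad g2 = (3*x1^2, -2)

# Constraint qualification: the gradients should be linearly independent
print(np.linalg.matrix_rank(G))         # 1, not 2 -> qualification fails

# KKT would need -grad_f = lam1*grad_g1 + lam2*grad_g2, but every
# combination of the rows has first component 0, while -grad_f = (2, 0)
# has first component 2 -> no multipliers exist, KKT fails as well.
lam, *_ = np.linalg.lstsq(G.T, -grad_f, rcond=None)
print(np.linalg.norm(G.T @ lam + grad_f))   # residual stays ~2, unsolvable
```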