
Duality

INSEAD, Spring 2006

Jean-Philippe Vert

Ecole des Mines de Paris

Jean-Philippe.Vert@mines.org

Nonlinear optimization

© 2006 Jean-Philippe Vert (Jean-Philippe.Vert@mines.org), p.1/67

Outline

The Lagrange dual function

Weak and strong duality

Geometric interpretation

Saddle-point interpretation

Optimality conditions

Perturbation and sensitivity analysis


The Lagrange dual function


Setting

We consider an equality and inequality constrained optimization problem:

minimize f(x)
subject to h_i(x) = 0 , i = 1, . . . , m ,
           g_j(x) ≤ 0 , j = 1, . . . , r .

We denote by f* the optimal value of f under the constraints, i.e., f* = f(x*) if the minimum is reached at a global minimum x*.

Lagrange dual function

Remember that the Lagrangian of this problem is the function L : R^n × R^m × R^r → R defined by:

L(x, λ, μ) = f(x) + Σ_{i=1}^m λ_i h_i(x) + Σ_{j=1}^r μ_j g_j(x) .

The Lagrange dual function q : R^m × R^r → R is defined as:

q(λ, μ) = inf_{x ∈ R^n} L(x, λ, μ)
        = inf_{x ∈ R^n} ( f(x) + Σ_{i=1}^m λ_i h_i(x) + Σ_{j=1}^r μ_j g_j(x) ) .

Properties of the dual function

When L is unbounded below in x, the dual function q(λ, μ) takes on the value −∞. It has two important properties:

1. q is concave in (λ, μ), even if the original problem is not convex.

2. The dual function yields lower bounds on the optimal value f* of the original problem when μ is nonnegative:

q(λ, μ) ≤ f* ,  ∀λ ∈ R^m , ∀μ ∈ R^r , μ ≥ 0 .
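Both properties can be checked on a tiny toy problem (an assumed example, not from the slides): minimize f(x) = x² subject to g(x) = 1 − x ≤ 0, whose optimum is x* = 1, f* = 1, and whose dual function works out to q(μ) = μ − μ²/4. A minimal numerical sketch:

```python
import numpy as np

# Toy problem (assumed example): minimize f(x) = x^2 s.t. g(x) = 1 - x <= 0.
# Optimum: x* = 1, f* = 1. Dual function:
# q(mu) = inf_x { x^2 + mu (1 - x) } = mu - mu^2 / 4  (minimizer x = mu/2).

def q(mu):
    x = mu / 2.0                     # unconstrained minimizer of the Lagrangian
    return x**2 + mu * (1.0 - x)

f_star = 1.0
mus = np.linspace(0.0, 10.0, 1001)
vals = q(mus)

# Property 2: every mu >= 0 gives a lower bound on f*
assert np.all(vals <= f_star + 1e-12)

# Property 1: q is concave (midpoint value lies above the chord)
assert q(2.0) >= 0.5 * (q(1.0) + q(3.0))

print(max(vals))    # best bound over the grid, attained at mu = 2
```

Here the best bound q(2) = 1 equals f*, a first glimpse of strong duality.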

Proof

1. For each x, the function (λ, μ) ↦ L(x, λ, μ) is linear, and therefore both convex and concave in (λ, μ). The pointwise minimum of concave functions is concave, therefore q is concave.

2. Let x̄ be any feasible point, i.e., h(x̄) = 0 and g(x̄) ≤ 0. Then we have, for any λ and μ ≥ 0:

Σ_{i=1}^m λ_i h_i(x̄) + Σ_{j=1}^r μ_j g_j(x̄) ≤ 0 ,

⟹  L(x̄, λ, μ) = f(x̄) + Σ_{i=1}^m λ_i h_i(x̄) + Σ_{j=1}^r μ_j g_j(x̄) ≤ f(x̄) ,

⟹  q(λ, μ) = inf_x L(x, λ, μ) ≤ L(x̄, λ, μ) ≤ f(x̄) .

Since this holds for every feasible x̄, we obtain q(λ, μ) ≤ f*.

Proof complement

We used the fact that the pointwise maximum (resp. minimum) of convex (resp. concave) functions is itself convex (concave).

To prove this, suppose that for each y ∈ A the function f(x, y) is convex in x, and define the function:

g(x) = sup_{y ∈ A} f(x, y) .

Then, for any θ ∈ [0, 1] and x1, x2 in the domain of g:

g(θ x1 + (1 − θ) x2) = sup_{y ∈ A} f(θ x1 + (1 − θ) x2, y)
                     ≤ sup_{y ∈ A} ( θ f(x1, y) + (1 − θ) f(x2, y) )
                     ≤ θ sup_{y ∈ A} f(x1, y) + (1 − θ) sup_{y ∈ A} f(x2, y)
                     = θ g(x1) + (1 − θ) g(x2) .

Illustration

(figure not reproduced in this transcription)

Example 1

Least-squares solution of linear equations:

minimize x⊤x
subject to Ax = b ,

with A ∈ R^{p×n}. The Lagrangian, with domain R^n × R^p, is:

L(x, λ) = x⊤x + λ⊤(Ax − b) .

L is a convex quadratic function of x, so we can minimize it by setting its gradient to zero:

∇_x L(x, λ) = 2x + A⊤λ = 0  ⟹  x = −(1/2) A⊤λ .

Example 1 (cont.)

Plug this into L to obtain the dual function:

q(λ) = L(−(1/2) A⊤λ, λ) = −(1/4) λ⊤AA⊤λ − b⊤λ ,

which is a concave quadratic function of λ. The lower bound property therefore holds:

f* ≥ −(1/4) λ⊤AA⊤λ − b⊤λ ,  ∀λ ∈ R^p .
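This bound can be verified numerically. The sketch below uses a randomly generated A and b (assumed example data), checks that q(λ) ≤ f* for arbitrary λ, and checks that the bound is tight at the dual maximizer λ* = −2(AA⊤)⁻¹b:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))        # p = 3 equations, n = 5 unknowns
b = rng.standard_normal(3)

# Minimum-norm solution of Ax = b: x* = A^T (A A^T)^{-1} b
K = A @ A.T
x_star = A.T @ np.linalg.solve(K, b)
f_star = x_star @ x_star

def q(lam):
    """Dual function q(lam) = -1/4 lam^T A A^T lam - b^T lam."""
    return -0.25 * lam @ K @ lam - b @ lam

# Any lam gives a lower bound on f* ...
for _ in range(100):
    lam = rng.standard_normal(3)
    assert q(lam) <= f_star + 1e-9

# ... and the bound is tight at lam* = -2 (A A^T)^{-1} b
lam_star = -2.0 * np.linalg.solve(K, b)
assert abs(q(lam_star) - f_star) < 1e-9
print("f* =", f_star)
```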

Example 2

Standard form LP:

minimize c⊤x
subject to Ax = b ,
           x ≥ 0 .

Introducing a multiplier λ ∈ R^p for the equality constraints and a multiplier μ ∈ R^n, μ ≥ 0, for the inequality constraints, the Lagrangian, with domain R^n × R^p × R^n, is:

L(x, λ, μ) = c⊤x + λ⊤(Ax − b) − μ⊤x
           = −λ⊤b + (c + A⊤λ − μ)⊤x .

Example 2 (cont.)

L is linear in x, so its infimum can only be −λ⊤b or −∞:

q(λ, μ) = inf_{x ∈ R^n} L(x, λ, μ) = { −λ⊤b   if c + A⊤λ − μ = 0 ,
                                       −∞     otherwise.

The lower bound is non-trivial when λ and μ satisfy μ ≥ 0 and c + A⊤λ − μ = 0, giving the following bound:

f* ≥ −λ⊤b   if A⊤λ + c ≥ 0 .
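As a concrete check (assumed example, not from the slides), take A = 1⊤ and b = 1, i.e., minimization of c⊤x over the probability simplex; the optimum puts all mass on the cheapest coordinate, so f* = min_i c_i, and the bound f* ≥ −λb holds whenever μ = c + λ1 ≥ 0:

```python
import numpy as np

# Simplex LP (assumed example): minimize c^T x s.t. 1^T x = 1, x >= 0.
# Optimal value: f* = min_i c_i.
c = np.array([3.0, -1.0, 2.0, 0.5])
f_star = c.min()

# Dual bound from the slide, with A = 1^T and b = 1:
# f* >= -lam * 1 whenever mu = c + lam * 1 >= 0.
def dual_bound(lam):
    mu = c + lam                     # must be elementwise >= 0
    assert np.all(mu >= -1e-12)
    return -lam

# Any admissible lam lower-bounds f* ...
for lam in [-c.min(), -c.min() + 1.0, -c.min() + 5.0]:
    assert dual_bound(lam) <= f_star + 1e-12

# ... and lam* = -min_i c_i makes the bound tight
assert abs(dual_bound(-c.min()) - f_star) < 1e-12
print(f_star)
```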

Example 3

Inequality form LP:

minimize c⊤x
subject to Ax ≤ b .

Introducing a multiplier μ ∈ R^p, μ ≥ 0, for the inequality constraints, the Lagrangian, with domain R^n × R^p, is:

L(x, μ) = c⊤x + μ⊤(Ax − b)
        = −μ⊤b + (A⊤μ + c)⊤x .

Example 3 (cont.)

L is linear in x, so its infimum can only be −μ⊤b or −∞:

q(μ) = inf_{x ∈ R^n} L(x, μ) = { −μ⊤b   if A⊤μ + c = 0 ,
                                 −∞     otherwise.

The lower bound is non-trivial when μ satisfies μ ≥ 0 and A⊤μ + c = 0, giving the following bound:

f* ≥ −μ⊤b   if μ ≥ 0 and A⊤μ + c = 0 .

Example 4

Two-way partitioning:

minimize x⊤Wx
subject to x_i² = 1 , i = 1, . . . , n .

This problem is not convex: the feasible set contains 2^n discrete points (x_i = ±1).

Interpretation: partition {1, . . . , n} into two sets; W_ij is the cost of assigning i, j to the same set, −W_ij the cost of assigning them to different sets. The Lagrangian, with domain R^n × R^n, is:

L(x, λ) = x⊤Wx + Σ_{i=1}^n λ_i (x_i² − 1)
        = x⊤(W + diag(λ)) x − 1⊤λ .

Example 4 (cont.)

For M symmetric, the infimum of x⊤Mx is 0 if all eigenvalues of M are nonnegative, −∞ otherwise. We therefore get the following dual function:

q(λ) = { −1⊤λ   if W + diag(λ) ⪰ 0 ,
         −∞     otherwise.

We obtain a non-trivial lower bound f* ≥ −1⊤λ for any λ such that W + diag(λ) ⪰ 0. This holds in particular for λ = −λ_min(W) 1, resulting in:

f* ≥ −1⊤λ = n λ_min(W) .
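The eigenvalue bound f* ≥ n λ_min(W) is easy to check by brute force for small n. The sketch below uses a random symmetric W (assumed example data) and enumerates all 2^n sign vectors:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2.0                  # symmetric cost matrix (example data)

# Dual lower bound: f* >= n * lambda_min(W)
bound = n * np.linalg.eigvalsh(W)[0]     # eigvalsh returns ascending eigenvalues

# Brute force over all 2^n sign vectors (feasible only for tiny n)
f_star = min(x @ W @ x
             for x in (np.array(s) for s in product((-1.0, 1.0), repeat=n)))

assert bound <= f_star + 1e-9
print(bound, f_star)
```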

Weak and strong duality


Dual problem

For the (primal) problem:

minimize f(x)
subject to h(x) = 0 ,  g(x) ≤ 0 ,

the Lagrange dual problem is:

maximize q(λ, μ)
subject to μ ≥ 0 ,

where λ and μ are the Lagrange multipliers associated to the constraints h(x) = 0 and g(x) ≤ 0.

Weak duality

Let d* be the optimal value of the Lagrange dual problem. Each q(λ, μ) with μ ≥ 0 is a lower bound for f*, and by definition d* is the best lower bound that can be obtained this way. The following weak duality inequality therefore always holds:

d* ≤ f* .

The difference f* − d* is called the optimal duality gap of the original problem.

Application of weak duality

For any optimization problem, we always have:

- the dual problem is a convex optimization problem (= easy to solve);
- weak duality holds.

Hence solving the dual problem can provide useful lower bounds for the original problem, no matter how difficult it is. For example, solving the following SDP (using a classical optimization toolbox) provides a non-trivial lower bound for the optimal value of the two-way partitioning problem:

maximize −1⊤λ
subject to W + diag(λ) ⪰ 0 .

Strong duality

We say that strong duality holds if the optimal duality gap is zero, i.e.:

d* = f* .

In that case the best lower bound that can be obtained from the Lagrange dual function is tight.

- Strong duality does not hold for general nonlinear problems.
- It usually holds for convex problems.
- Conditions that ensure strong duality for convex problems are called constraint qualifications.

Slater's constraint qualification

Strong duality holds for a convex problem:

minimize f(x)
subject to g_j(x) ≤ 0 , j = 1, . . . , r ,
           Ax = b ,

if it is strictly feasible, i.e., if there exists a point that satisfies:

g_j(x) < 0 , j = 1, . . . , r ,  Ax = b .

Remarks

- Slater's conditions also ensure that the maximum d* (if d* > −∞) is attained, i.e., there exists a point (λ*, μ*) with q(λ*, μ*) = d* = f*.
- Strict feasibility is not required for affine constraints.
- There exist many other types of constraint qualifications.

Example 1

Least-squares solution of linear equations:

minimize x⊤x
subject to Ax = b .

The dual problem is:

maximize −(1/4) λ⊤AA⊤λ − b⊤λ .

Slater's condition is here simply that the primal problem is feasible, i.e., b ∈ Im(A). In that case strong duality holds.

In fact strong duality also holds if f* = +∞: there exists then z with A⊤z = 0 and b⊤z ≠ 0, so the dual is unbounded above and d* = +∞ = f*.

Example 2

Inequality form LP:

minimize c⊤x
subject to Ax ≤ b .

Its dual function is:

q(μ) = inf_{x ∈ R^n} L(x, μ) = { −μ⊤b   if A⊤μ + c = 0 ,
                                 −∞     otherwise.

Example 2 (cont.)

The dual problem is therefore equivalent to the following standard form LP:

minimize b⊤μ
subject to A⊤μ + c = 0 ,  μ ≥ 0 .

By Slater's condition, strong duality holds for any LP provided the primal problem is feasible.

In fact, f* = d* except when both the primal LP and the dual LP are infeasible.

Example 3

Quadratic program (QP), with P symmetric positive definite:

minimize x⊤Px
subject to Ax ≤ b .

For μ ≥ 0, the Lagrangian is:

L(x, μ) = x⊤Px + μ⊤(Ax − b) ,

a convex quadratic function of x, minimized at:

x*(μ) = −(1/2) P⁻¹A⊤μ .

Example 3 (cont.)

The dual function is therefore:

q(μ) = −(1/4) μ⊤AP⁻¹A⊤μ − b⊤μ ,

and the dual problem:

maximize −(1/4) μ⊤AP⁻¹A⊤μ − b⊤μ
subject to μ ≥ 0 .

By Slater's condition, strong duality holds if the primal problem is feasible: f* = d*.

In fact, strong duality always holds, even if the primal is not feasible (in which case f* = d* = +∞), cf. the LP case.
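Weak duality for this QP can be sketched numerically: the code below builds a random instance (assumed example data) in which a chosen point x_feas is feasible by construction, and checks that q(μ) never exceeds f(x_feas) ≥ f* for random μ ≥ 0:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 4, 6
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)        # symmetric positive definite (example data)
A = rng.standard_normal((p, n))

x_feas = rng.standard_normal(n)    # build b so that x_feas is primal feasible
b = A @ x_feas + rng.uniform(0.1, 1.0, p)

Pinv = np.linalg.inv(P)

def q(mu):
    """Dual function q(mu) = -1/4 mu^T A P^-1 A^T mu - b^T mu."""
    return -0.25 * mu @ A @ Pinv @ A.T @ mu - b @ mu

f_upper = x_feas @ P @ x_feas      # f(x_feas) >= f* >= q(mu) for all mu >= 0
for _ in range(200):
    mu = rng.uniform(0.0, 3.0, p)
    assert q(mu) <= f_upper + 1e-9
print("weak duality verified")
```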

Example 4

The following QCQP problem is not convex if A is symmetric but not positive semidefinite:

minimize x⊤Ax + 2b⊤x
subject to x⊤x ≤ 1 .

Its dual problem can be expressed as an SDP:

maximize −t − μ

subject to ( A + μI   b )
           ( b⊤       t ) ⪰ 0 ,  μ ≥ 0 .

In fact, strong duality holds in this case (more generally for the minimization of a quadratic objective under a single quadratic constraint), even though the primal problem is not convex.

Geometric interpretation


Setting

We consider the simple problem with a single inequality constraint:

minimize f(x)
subject to g(x) ≤ 0 ,

in order to give a geometric interpretation of weak and strong duality.

Optimal value f*

We consider the subset of R² defined by:

S = {(g(x), f(x)) | x ∈ R^n} .

Then:

f* = inf {t | (u, t) ∈ S , u ≤ 0} .

Dual function q(μ)

The dual function for μ ≥ 0 is:

q(μ) = inf_{x ∈ R^n} {f(x) + μ g(x)}
     = inf_{(u,t) ∈ S} {μu + t} .

(figure: the set S and a line μu + t = cte)

Dual optimal d*

d* = sup_{μ ≥ 0} q(μ)
   = sup_{μ ≥ 0} inf_{(u,t) ∈ S} {μu + t} .

Weak duality

d* ≤ f* .

Strong duality

For convex problems with strictly feasible points:

d* = f* .

Saddle-point interpretations


Setting

We consider a general optimization problem with inequality constraints:

minimize f(x)
subject to g_j(x) ≤ 0 , j = 1, . . . , r .

Its Lagrangian is:

L(x, μ) = f(x) + Σ_{j=1}^r μ_j g_j(x) .

Inf-sup form of f*

We note that, for any x ∈ R^n:

sup_{μ ≥ 0} L(x, μ) = sup_{μ ≥ 0} ( f(x) + Σ_{j=1}^r μ_j g_j(x) )

                    = { f(x)   if g_j(x) ≤ 0 , j = 1, . . . , r ,
                        +∞     otherwise.

Therefore:

f* = inf_{x ∈ R^n} sup_{μ ≥ 0} L(x, μ) .

Duality

By definition we also have:

d* = sup_{μ ≥ 0} inf_{x ∈ R^n} L(x, μ) .

Weak duality can therefore be written:

sup_{μ ≥ 0} inf_{x ∈ R^n} L(x, μ) ≤ inf_{x ∈ R^n} sup_{μ ≥ 0} L(x, μ) ,

and strong duality:

sup_{μ ≥ 0} inf_{x ∈ R^n} L(x, μ) = inf_{x ∈ R^n} sup_{μ ≥ 0} L(x, μ) .

Max-min inequality

In fact weak duality does not depend on any property of L; it is just an instance of the general max-min inequality, which states that for any function f : W × Z → R:

sup_{z ∈ Z} inf_{w ∈ W} f(w, z) ≤ inf_{w ∈ W} sup_{z ∈ Z} f(w, z) .

The strong max-min property, i.e., the case where equality holds:

sup_{z ∈ Z} inf_{w ∈ W} f(w, z) = inf_{w ∈ W} sup_{z ∈ Z} f(w, z) ,

holds only in special cases.

Proof of max-min inequality

For any (w0, z0) ∈ W × Z we have, by definition of the infimum in w:

inf_{w ∈ W} f(w, z0) ≤ f(w0, z0) .

For w0 fixed, this holds for any choice of z0, so we can take the supremum in z0 on both sides to obtain:

sup_{z ∈ Z} inf_{w ∈ W} f(w, z) ≤ sup_{z ∈ Z} f(w0, z) .

The right-hand side is now a function of w0 alone. The inequality is valid for any w0, so we can take the infimum over w0 to obtain the result:

sup_{z ∈ Z} inf_{w ∈ W} f(w, z) ≤ inf_{w ∈ W} sup_{z ∈ Z} f(w, z) .
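For finite W and Z the max-min inequality reduces to a statement about a payoff matrix, which is easy to verify numerically (random matrix as assumed example data):

```python
import numpy as np

rng = np.random.default_rng(3)
F = rng.standard_normal((5, 7))   # f(w, z) tabulated: rows index w, columns z

sup_inf = F.min(axis=0).max()     # sup_z inf_w f(w, z)
inf_sup = F.max(axis=1).min()     # inf_w sup_z f(w, z)

assert sup_inf <= inf_sup + 1e-12
print(sup_inf, inf_sup)
```

Equality generally fails for a random matrix; it holds only in special cases, e.g. when F has a saddle-point.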

Saddle-point interpretation

A pair (w*, z*) ∈ W × Z is called a saddle-point for f if:

f(w*, z) ≤ f(w*, z*) ≤ f(w, z*) ,  ∀w ∈ W , ∀z ∈ Z .

In that case the strong max-min property holds, because:

inf_{w ∈ W} sup_{z ∈ Z} f(w, z) ≤ sup_{z ∈ Z} f(w*, z) = f(w*, z*)
                                = inf_{w ∈ W} f(w, z*) ≤ sup_{z ∈ Z} inf_{w ∈ W} f(w, z) ,

which, combined with the max-min inequality, gives equality. If strong duality holds and x*, μ* are primal and dual optimal, then (x*, μ*) is a saddle-point of the Lagrangian. Conversely, if the Lagrangian has a saddle-point, then strong duality holds.

Game interpretation

Consider a game with two players:

1. Player 1 chooses w ∈ W;
2. then Player 2 chooses z ∈ Z;
3. then Player 1 pays f(w, z) to Player 2.

Player 1 wants to minimize f, while Player 2 wants to maximize it. If Player 1 chooses w, then Player 2 will choose z ∈ Z to obtain the maximum payoff sup_{z ∈ Z} f(w, z). Knowing this, Player 1 must choose w to make this payoff minimal, equal to:

inf_{w ∈ W} sup_{z ∈ Z} f(w, z) .

Game interpretation (cont.)

If Player 2 plays first, following a similar argument, the payoff will be:

sup_{z ∈ Z} inf_{w ∈ W} f(w, z) .

The max-min inequality says that it is better for a player to know his or her opponent's choice before choosing. The optimal duality gap is the advantage afforded to the player who plays second. If there is a saddle-point, then there is no advantage to the players of knowing their opponent's choice.

Optimality conditions


Setting

We consider an equality and inequality constrained optimization problem:

minimize f(x)
subject to h_i(x) = 0 , i = 1, . . . , m ,
           g_j(x) ≤ 0 , j = 1, . . . , r .

We will revisit the optimality conditions in the light of duality.

Dual optimal pairs

Suppose that strong duality holds, x* is primal optimal, and (λ*, μ*) is dual optimal. Then we have:

f(x*) = q(λ*, μ*)

      = inf_{x ∈ R^n} ( f(x) + Σ_{i=1}^m λ*_i h_i(x) + Σ_{j=1}^r μ*_j g_j(x) )

      ≤ f(x*) + Σ_{i=1}^m λ*_i h_i(x*) + Σ_{j=1}^r μ*_j g_j(x*)

      ≤ f(x*) .

Both inequalities are therefore equalities.

Complementary slackness

The first equality shows that:

L(x*, λ*, μ*) = inf_{x ∈ R^n} L(x, λ*, μ*) ,

i.e., x* minimizes the Lagrangian. The second equality shows that:

μ*_j g_j(x*) = 0 , j = 1, . . . , r .

This condition is known as complementary slackness: the jth optimal Lagrange multiplier is zero unless the jth constraint is active at the optimum.
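Complementary slackness can be checked by hand on a toy problem (an assumed example, not from the slides): minimize x² subject to 1 − x ≤ 0 and x − 5 ≤ 0. The first constraint is active at x* = 1 (multiplier μ*₁ = 2), the second is inactive (μ*₂ = 0):

```python
import numpy as np

# Toy problem (assumed example): minimize x^2 subject to
#   g1(x) = 1 - x <= 0   (active at the optimum)
#   g2(x) = x - 5 <= 0   (inactive at the optimum)
x_star = 1.0
mu_star = np.array([2.0, 0.0])           # optimal multipliers

g = np.array([1.0 - x_star, x_star - 5.0])
assert np.all(g <= 0)                     # primal feasibility
assert np.all(mu_star >= 0)               # dual feasibility
assert np.allclose(mu_star * g, 0.0)      # complementary slackness
# stationarity: d/dx [x^2 + mu1 (1 - x) + mu2 (x - 5)] = 2x - mu1 + mu2 = 0
assert abs(2 * x_star - mu_star[0] + mu_star[1]) < 1e-12
print("KKT verified")
```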

KKT conditions

If the functions f, g, h are differentiable and there is no duality gap, then we have seen that x* minimizes L(x, λ*, μ*), therefore:

∇_x L(x*, λ*, μ*) = ∇f(x*) + Σ_{i=1}^m λ*_i ∇h_i(x*) + Σ_{j=1}^r μ*_j ∇g_j(x*) = 0 .

Together with the feasibility and complementary slackness conditions, we recover the KKT optimality conditions that x* must fulfill. λ* and μ* now have the interpretation of dual optimal variables.

KKT conditions for convex problems

Suppose now that the problem is convex, i.e., f and g are convex functions and h is affine, and let x* and (λ*, μ*) satisfy the KKT conditions:

h_i(x*) = 0 ,  i = 1, . . . , m ,
g_j(x*) ≤ 0 ,  j = 1, . . . , r ,
μ*_j ≥ 0 ,  j = 1, . . . , r ,
μ*_j g_j(x*) = 0 ,  j = 1, . . . , r ,

∇f(x*) + Σ_{i=1}^m λ*_i ∇h_i(x*) + Σ_{j=1}^r μ*_j ∇g_j(x*) = 0 .

Then x* and (λ*, μ*) are primal and dual optimal with zero duality gap (the KKT conditions are sufficient in this case).

Proof

The first two conditions show that x* is feasible. Because μ* ≥ 0, the Lagrangian L(x, λ*, μ*) is convex in x. The last equality therefore shows that x* minimizes it, hence:

q(λ*, μ*) = L(x*, λ*, μ*)

          = f(x*) + Σ_{i=1}^m λ*_i h_i(x*) + Σ_{j=1}^r μ*_j g_j(x*)

          = f(x*) ,

using h_i(x*) = 0 and μ*_j g_j(x*) = 0. x* and (λ*, μ*) are therefore primal and dual optimal.

Summary

For any problem with differentiable objective and constraints:

- If x and (λ, μ) satisfy f(x) = q(λ, μ) (which implies in particular that x is optimal), then (x, λ, μ) satisfy the KKT conditions.
- For a convex problem the converse is true: x and (λ, μ) satisfy f(x) = q(λ, μ) if and only if they satisfy the KKT conditions.
- For a convex problem where Slater's condition holds, we know that strong duality holds and that the dual optimum is attained, so x is optimal if and only if there are (λ, μ) that together with x satisfy the KKT conditions.
- We showed previously that, without convexity assumptions, if x is optimal and regular, then there exists (λ, μ) that together with x satisfy the KKT conditions. In that case, however, we do not have in general f(x) = q(λ, μ) (otherwise strong duality would hold).

Example 1

Equality constrained quadratic minimization:

minimize (1/2) x⊤Px + q⊤x + r
subject to Ax = b .

This convex problem has only affine constraints, so the KKT conditions are necessary and sufficient:

( P   A⊤ ) ( x* )   ( −q )
( A   0  ) ( λ* ) = (  b ) .

This is a set of m + n equations with m + n variables.
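The KKT system above is one linear solve. The sketch below builds a random instance (assumed example data, with P made positive definite) and verifies feasibility and stationarity of the solution:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 5, 2
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)            # positive definite (example data)
qv = rng.standard_normal(n)            # the linear term q of the objective
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# KKT system  [P  A^T; A  0] [x; lam] = [-q; b]
KKT = np.block([[P, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-qv, b])
sol = np.linalg.solve(KKT, rhs)
x_star, lam_star = sol[:n], sol[n:]

assert np.allclose(A @ x_star, b)                        # feasibility
assert np.allclose(P @ x_star + qv + A.T @ lam_star, 0)  # stationarity
print(x_star)
```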

Example 2

Water-filling problem (assuming α_i > 0):

minimize −Σ_{i=1}^n log(α_i + x_i)
subject to x ≥ 0 ,  1⊤x = 1 .

Since this is a convex problem satisfying Slater's conditions, x is optimal iff x ≥ 0, 1⊤x = 1, and there exist λ ∈ R and μ ∈ R^n s.t.:

μ ≥ 0 ,  μ_i x_i = 0 ,  1/(x_i + α_i) + μ_i = λ .

Example 2 (cont.)

This problem is easily solved directly:

- If λ < 1/α_i: μ_i = 0 and x_i = 1/λ − α_i.
- If λ ≥ 1/α_i: μ_i = λ − 1/α_i and x_i = 0.
- We determine λ from 1⊤x = Σ_{i=1}^n max{0, 1/λ − α_i} = 1.

Interpretation:

- n patches; the ground level of patch i is at height α_i;
- flood the area with a unit amount of water;
- the resulting water level is 1/λ.
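The one-dimensional equation in λ can be solved by bisection, since Σ_i max{0, 1/λ − α_i} is decreasing in λ. A minimal sketch, with assumed example levels α:

```python
import numpy as np

def water_fill(alpha, total=1.0, iters=200):
    """Water-filling by bisection on lam (a sketch, assuming alpha_i > 0).
    Returns x with x_i = max(0, 1/lam - alpha_i) and sum(x) = total."""
    alpha = np.asarray(alpha, dtype=float)
    # sum(x) is decreasing in lam; it exceeds total at lam -> 0+
    # and vanishes at lam = 1 / min(alpha), so this brackets the root.
    lo, hi = 1e-12, 1.0 / alpha.min()
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        if np.maximum(0.0, 1.0 / lam - alpha).sum() > total:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    return np.maximum(0.0, 1.0 / lam - alpha), lam

alpha = np.array([0.1, 0.5, 1.0, 4.0])   # example patch heights
x, lam = water_fill(alpha)

assert abs(x.sum() - 1.0) < 1e-6          # budget constraint
flooded = x > 0
# KKT: on flooded patches the level alpha_i + x_i equals 1/lam
assert np.allclose((alpha + x)[flooded], 1.0 / lam)
print(x, 1.0 / lam)
```

For this α the water level is 1/λ = 0.8: patches 1 and 2 get flooded (x = 0.7 and 0.3), patches 3 and 4 stay dry.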

Perturbation and sensitivity analysis


Unperturbed optimization problem

We consider the general problem:

minimize f(x)
subject to h_i(x) = 0 , i = 1, . . . , m ,
           g_j(x) ≤ 0 , j = 1, . . . , r ,

with optimal value f*, and its Lagrange dual problem:

maximize q(λ, μ)
subject to μ ≥ 0 .

Perturbed problem and its dual

The perturbed problem is:

minimize f(x)
subject to h_i(x) = u_i , i = 1, . . . , m ,
           g_j(x) ≤ v_j , j = 1, . . . , r ,

with optimal value f*(u, v). Its Lagrange dual problem is:

maximize q(λ, μ) − u⊤λ − v⊤μ
subject to μ ≥ 0 .

Interpretation

- When u = v = 0, this coincides with the original problem: f*(0, 0) = f*.
- When v_j > 0, we have relaxed the jth inequality constraint.
- When v_j < 0, we have tightened the jth inequality constraint.
- We are interested in information about f*(u, v) that can be obtained from the solution of the unperturbed problem and its dual.

A global sensitivity result

Now we assume that:

- strong duality holds, i.e., f* = d*;
- the dual optimum is attained, i.e., there exists (λ*, μ*) such that d* = q(λ*, μ*).

Applying weak duality to the perturbed problem, we obtain:

f*(u, v) ≥ q(λ*, μ*) − u⊤λ* − v⊤μ* = f* − u⊤λ* − v⊤μ* .

Global sensitivity interpretation

- If μ*_j is large: f* increases greatly if we tighten the jth inequality constraint (v_j < 0).
- If μ*_j is small: f* does not decrease much if we loosen the jth inequality constraint (v_j > 0).
- If λ*_i is large and positive: f* increases greatly if we decrease the ith equality constraint (u_i < 0).
- If λ*_i is large and negative: f* increases greatly if we increase the ith equality constraint (u_i > 0).
- If λ*_i is small and positive: f* does not decrease much if we increase the ith equality constraint (u_i > 0).
- If λ*_i is small and negative: f* does not decrease much if we decrease the ith equality constraint (u_i < 0).

Local sensitivity analysis

If (in addition) we assume that f*(u, v) is differentiable at (0, 0), then the following holds:

λ*_i = −∂f*(0, 0)/∂u_i ,  μ*_j = −∂f*(0, 0)/∂v_j .

In that case, the Lagrange multipliers are exactly the local sensitivities of the optimal value with respect to constraint perturbations. Tightening the jth inequality constraint by a small amount v_j < 0 yields an increase in f* of approximately −μ*_j v_j.
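Both the local equality and the global inequality can be verified on a toy perturbed problem (an assumed example, not from the slides): minimize x² subject to 1 − x ≤ v, for which f*(v) = max(0, 1 − v)² and the unperturbed multiplier is μ* = 2:

```python
import numpy as np

# Toy perturbed problem (assumed example): minimize x^2 s.t. 1 - x <= v.
# For v < 1 the optimum is x*(v) = 1 - v, hence f*(v) = (1 - v)^2.
def f_star(v):
    return max(0.0, 1.0 - v) ** 2

mu_star = 2.0   # optimal multiplier of the unperturbed problem (x* = 1)

# local sensitivity: the finite-difference slope of f* at v = 0 is -mu*
eps = 1e-6
slope = (f_star(eps) - f_star(-eps)) / (2 * eps)
assert abs(slope + mu_star) < 1e-4

# global sensitivity bound: f*(v) >= f* - mu* v for every v
for v in np.linspace(-2.0, 3.0, 101):
    assert f_star(v) >= f_star(0.0) - mu_star * v - 1e-9
print(slope)
```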

Proof

For λ*_i: from the global sensitivity result, it holds for t > 0 that:

( f*(t e_i, 0) − f*(0, 0) ) / t ≥ −λ*_i ,

and therefore, letting t → 0⁺:

∂f*(0, 0)/∂u_i ≥ −λ*_i .

A similar analysis with t < 0 yields ∂f*(0, 0)/∂u_i ≤ −λ*_i, and therefore:

∂f*(0, 0)/∂u_i = −λ*_i .

A similar proof holds for μ*_j.

Shadow price interpretation

We assume the following problem is convex and Slater's condition holds:

minimize f(x)
subject to g_j(x) ≤ v_j , j = 1, . . . , r .

- The objective f is the cost, i.e., −f is the profit.
- Each constraint g_j(x) ≤ 0 is a limit on some resource (labor, steel, warehouse space, ...).

Shadow price interpretation (cont.)

- −f*(v) is how much more or less profit could be made if more or less of each resource were made available to the firm.
- μ*_j = −∂f*(0, 0)/∂v_j is how much more profit the firm could make for a small increase in availability of resource j.
- μ*_j is therefore the natural or equilibrium price for resource j.
