
Equality
Linear programming with equality constraints
Published by pacman123454544, 02/20/2013
https://www.scribd.com/doc/126326146/Equality

11. Equality constrained minimization

• equality constrained minimization
• eliminating equality constraints
• Newton's method with equality constraints
• infeasible start Newton method
• implementation

Equality constrained minimization    11–1

Equality constrained minimization

\[ \begin{array}{ll} \mbox{minimize} & f(x) \\ \mbox{subject to} & Ax = b \end{array} \]

• $f$ convex, twice continuously differentiable
• $A \in \mathbf{R}^{p \times n}$ with $\mathbf{rank}\, A = p$
• we assume $p^\star$ is finite and attained

optimality conditions: $x^\star$ is optimal iff there exists a $\nu^\star$ such that

\[ \nabla f(x^\star) + A^T \nu^\star = 0, \qquad Ax^\star = b \]

11–2

Equality constrained quadratic minimization (with $P \in \mathbf{S}^n_+$)

\[ \begin{array}{ll} \mbox{minimize} & (1/2) x^T P x + q^T x + r \\ \mbox{subject to} & Ax = b \end{array} \]

optimality condition:

\[ \begin{bmatrix} P & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} x^\star \\ \nu^\star \end{bmatrix} = \begin{bmatrix} -q \\ b \end{bmatrix} \]

• coefficient matrix is called KKT matrix
• KKT matrix is nonsingular if and only if $Ax = 0$, $x \neq 0 \implies x^T P x > 0$
• equivalent condition for nonsingularity: $P + A^T A \succ 0$

11–3
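The optimality condition above is a single linear system in $(x^\star, \nu^\star)$. A minimal numpy sketch, with made-up problem data (the sizes and random matrices are purely illustrative):

```python
# Solve an equality constrained QP by forming and solving the KKT system.
# P, q, A, b below are made-up illustration data.
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 2
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)          # P > 0, so the KKT matrix is nonsingular
q = rng.standard_normal(n)
A = rng.standard_normal((p, n))  # rank A = p almost surely
b = rng.standard_normal(p)

# KKT matrix [[P, A^T], [A, 0]] and right-hand side (-q, b)
KKT = np.block([[P, A.T], [A, np.zeros((p, p))]])
sol = np.linalg.solve(KKT, np.concatenate([-q, b]))
x_star, nu_star = sol[:n], sol[n:]

# check optimality: P x* + q + A^T nu* = 0 and A x* = b
print(np.linalg.norm(P @ x_star + q + A.T @ nu_star))
print(np.linalg.norm(A @ x_star - b))
```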

Eliminating equality constraints

represent the solution set of $Ax = b$ as

\[ \{ x \mid Ax = b \} = \{ Fz + \hat{x} \mid z \in \mathbf{R}^{n-p} \} \]

• $\hat{x}$ is (any) particular solution
• range of $F \in \mathbf{R}^{n \times (n-p)}$ is nullspace of $A$ ($\mathbf{rank}\, F = n - p$ and $AF = 0$)

reduced or eliminated problem

\[ \mbox{minimize} \quad f(Fz + \hat{x}) \]

• an unconstrained problem with variable $z \in \mathbf{R}^{n-p}$
• from solution $z^\star$, obtain $x^\star$ and $\nu^\star$ as

\[ x^\star = F z^\star + \hat{x}, \qquad \nu^\star = -(AA^T)^{-1} A \nabla f(x^\star) \]

11–4
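The elimination recipe can be sketched in numpy: build $F$ from an SVD nullspace basis, solve the reduced problem, and recover $(x^\star, \nu^\star)$. The quadratic objective and the data are made up for illustration; for a quadratic $f$, the reduced problem is solved in closed form.

```python
# Eliminate Ax = b by parametrizing the feasible set as F z + x_hat,
# with F a nullspace basis of A from the SVD (made-up quadratic data).
import numpy as np

rng = np.random.default_rng(1)
n, p = 6, 2
A = rng.standard_normal((p, n))
b = rng.standard_normal(p)
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)          # f(x) = (1/2) x^T P x + q^T x
q = rng.standard_normal(n)

x_hat = np.linalg.lstsq(A, b, rcond=None)[0]   # a particular solution
_, _, Vt = np.linalg.svd(A)
F = Vt[p:].T                                   # columns span null(A); AF = 0

# reduced problem: unconstrained quadratic in z, solved exactly
z_star = np.linalg.solve(F.T @ P @ F, -F.T @ (P @ x_hat + q))
x_star = F @ z_star + x_hat
# nu* = -(A A^T)^{-1} A grad f(x*), with grad f(x) = P x + q
nu_star = -np.linalg.solve(A @ A.T, A @ (P @ x_star + q))

print(np.linalg.norm(A @ x_star - b))                  # feasible
print(np.linalg.norm(P @ x_star + q + A.T @ nu_star))  # stationary
```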

example: optimal allocation with resource constraint

\[ \begin{array}{ll} \mbox{minimize} & f_1(x_1) + f_2(x_2) + \cdots + f_n(x_n) \\ \mbox{subject to} & x_1 + x_2 + \cdots + x_n = b \end{array} \]

eliminate $x_n = b - x_1 - \cdots - x_{n-1}$, i.e., choose

\[ \hat{x} = b e_n, \qquad F = \begin{bmatrix} I \\ -\mathbf{1}^T \end{bmatrix} \in \mathbf{R}^{n \times (n-1)} \]

reduced problem:

\[ \mbox{minimize} \quad f_1(x_1) + \cdots + f_{n-1}(x_{n-1}) + f_n(b - x_1 - \cdots - x_{n-1}) \]

(variables $x_1, \ldots, x_{n-1}$)

11–5

Newton step

Newton step $\Delta x_{\rm nt}$ of $f$ at feasible $x$ is given by solution $v$ of

\[ \begin{bmatrix} \nabla^2 f(x) & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} v \\ w \end{bmatrix} = \begin{bmatrix} -\nabla f(x) \\ 0 \end{bmatrix} \]

interpretations

• $\Delta x_{\rm nt}$ solves the second order approximation (with variable $v$)

\[ \begin{array}{ll} \mbox{minimize} & \hat{f}(x+v) = f(x) + \nabla f(x)^T v + (1/2) v^T \nabla^2 f(x) v \\ \mbox{subject to} & A(x+v) = b \end{array} \]

• $\Delta x_{\rm nt}$ equations follow from linearizing the optimality conditions

\[ \nabla f(x+v) + A^T w \approx \nabla f(x) + \nabla^2 f(x) v + A^T w = 0, \qquad A(x+v) = b \]

11–6

Newton decrement

\[ \lambda(x) = \left( \Delta x_{\rm nt}^T \nabla^2 f(x) \Delta x_{\rm nt} \right)^{1/2} \]

properties

• gives an estimate of $f(x) - p^\star$ using the quadratic approximation $\hat{f}$:

\[ f(x) - \inf_{Ay = b} \hat{f}(y) = \frac{1}{2} \lambda(x)^2 \]

• directional derivative in Newton direction:

\[ \left. \frac{d}{dt} f(x + t \Delta x_{\rm nt}) \right|_{t=0} = -\lambda(x)^2 \]

• in general, $\lambda(x) \neq \left( \nabla f(x)^T \nabla^2 f(x)^{-1} \nabla f(x) \right)^{1/2}$

11–7

Newton's method with equality constraints

given starting point $x \in \mathbf{dom}\, f$ with $Ax = b$, tolerance $\epsilon > 0$.

repeat
1. Compute the Newton step and decrement $\Delta x_{\rm nt}$, $\lambda(x)$.
2. Stopping criterion. quit if $\lambda^2 / 2 \leq \epsilon$.
3. Line search. Choose step size $t$ by backtracking line search.
4. Update. $x := x + t \Delta x_{\rm nt}$.

• a feasible descent method: $x^{(k)}$ feasible and $f(x^{(k+1)}) < f(x^{(k)})$
• affine invariant

11–8
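The algorithm above can be sketched in numpy for the analytic centering problem, minimize $-\sum_i \log x_i$ subject to $Ax = b$ (this instance appears later in the slides; the data, starting point, and line-search parameters here are made up):

```python
# Feasible-start Newton's method for minimize -sum(log x) s.t. Ax = b.
# Problem data and parameters are made-up illustration values.
import numpy as np

rng = np.random.default_rng(2)
p, n = 3, 8
A = rng.uniform(0.1, 1.0, (p, n))
x = np.ones(n)                      # positive starting point
b = A @ x                           # makes x feasible by construction

alpha, beta, eps = 0.1, 0.5, 1e-10
f = lambda v: -np.sum(np.log(v))
for _ in range(50):
    g = -1.0 / x                    # gradient of f
    H = np.diag(1.0 / x**2)         # Hessian of f
    KKT = np.block([[H, A.T], [A, np.zeros((p, p))]])
    dx = np.linalg.solve(KKT, np.concatenate([-g, np.zeros(p)]))[:n]
    lam2 = dx @ H @ dx              # Newton decrement squared
    if lam2 / 2 <= eps:             # stopping criterion
        break
    t = 1.0                         # backtracking line search on f
    while np.min(x + t * dx) <= 0 or f(x + t * dx) > f(x) + alpha * t * (g @ dx):
        t *= beta
    x = x + t * dx

print(np.linalg.norm(A @ x - b))    # iterates stay feasible
print(lam2 / 2)                     # stopping criterion met
```

Since $A\,\Delta x_{\rm nt} = 0$, every iterate remains feasible, matching the "feasible descent method" property.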

Newton's method and elimination

Newton's method for the reduced problem

\[ \mbox{minimize} \quad \tilde{f}(z) = f(Fz + \hat{x}) \]

• variables $z \in \mathbf{R}^{n-p}$
• $\hat{x}$ satisfies $A\hat{x} = b$; $\mathbf{rank}\, F = n - p$ and $AF = 0$
• Newton's method for $\tilde{f}$, started at $z^{(0)}$, generates iterates $z^{(k)}$

Newton's method with equality constraints: when started at $x^{(0)} = F z^{(0)} + \hat{x}$, iterates are

\[ x^{(k+1)} = F z^{(k+1)} + \hat{x} \]

hence, don't need separate convergence analysis

11–9

Newton step at infeasible points

2nd interpretation of page 11–6 extends to infeasible $x$ (i.e., $Ax \neq b$)

linearizing the optimality conditions at infeasible $x$ (with $x \in \mathbf{dom}\, f$) gives

\[ \begin{bmatrix} \nabla^2 f(x) & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} \Delta x_{\rm nt} \\ w \end{bmatrix} = - \begin{bmatrix} \nabla f(x) \\ Ax - b \end{bmatrix} \tag{1} \]

primal-dual interpretation

• write optimality condition as $r(y) = 0$, where

\[ y = (x, \nu), \qquad r(y) = (\nabla f(x) + A^T \nu, \; Ax - b) \]

• linearizing $r(y) = 0$ gives $r(y + \Delta y) \approx r(y) + Dr(y) \Delta y = 0$:

\[ \begin{bmatrix} \nabla^2 f(x) & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} \Delta x_{\rm nt} \\ \Delta \nu_{\rm nt} \end{bmatrix} = - \begin{bmatrix} \nabla f(x) + A^T \nu \\ Ax - b \end{bmatrix} \]

same as (1) with $w = \nu + \Delta \nu_{\rm nt}$

11–10

Infeasible start Newton method

given starting point $x \in \mathbf{dom}\, f$, $\nu$, tolerance $\epsilon > 0$, $\alpha \in (0, 1/2)$, $\beta \in (0, 1)$.

repeat
1. Compute primal and dual Newton steps $\Delta x_{\rm nt}$, $\Delta \nu_{\rm nt}$.
2. Backtracking line search on $\|r\|_2$. $t := 1$.
   while $\|r(x + t \Delta x_{\rm nt}, \nu + t \Delta \nu_{\rm nt})\|_2 > (1 - \alpha t) \|r(x, \nu)\|_2$, $t := \beta t$.
3. Update. $x := x + t \Delta x_{\rm nt}$, $\nu := \nu + t \Delta \nu_{\rm nt}$.

until $Ax = b$ and $\|r(x, \nu)\|_2 \leq \epsilon$.

• not a descent method: $f(x^{(k+1)}) > f(x^{(k)})$ is possible
• directional derivative of $\|r(y)\|_2$ in direction $\Delta y = (\Delta x_{\rm nt}, \Delta \nu_{\rm nt})$ is

\[ \left. \frac{d}{dt} \|r(y + t \Delta y)\|_2 \right|_{t=0} = -\|r(y)\|_2 \]

11–11
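A minimal numpy sketch of this method for the same analytic centering problem, minimize $-\sum_i \log x_i$ subject to $Ax = b$; the data are made up, and the starting point is positive but deliberately infeasible:

```python
# Infeasible start Newton method for minimize -sum(log x) s.t. Ax = b.
# Problem data are made-up illustration values; x0 > 0 but Ax0 != b.
import numpy as np

rng = np.random.default_rng(3)
p, n = 3, 8
A = rng.uniform(0.1, 1.0, (p, n))
b = A @ rng.uniform(0.5, 2.0, n)    # guarantees a feasible positive point

x = np.ones(n)                      # infeasible positive start
nu = np.zeros(p)
alpha, beta, eps = 0.1, 0.5, 1e-10

def res(x, nu):                     # r(y) = (grad f + A^T nu, Ax - b)
    return np.concatenate([-1.0 / x + A.T @ nu, A @ x - b])

for _ in range(100):
    if np.linalg.norm(res(x, nu)) <= eps:
        break
    H = np.diag(1.0 / x**2)
    KKT = np.block([[H, A.T], [A, np.zeros((p, p))]])
    step = np.linalg.solve(KKT, -res(x, nu))
    dx, dnu = step[:n], step[n:]
    t = 1.0                         # backtracking line search on ||r||_2
    while (np.min(x + t * dx) <= 0 or
           np.linalg.norm(res(x + t * dx, nu + t * dnu))
           > (1 - alpha * t) * np.linalg.norm(res(x, nu))):
        t *= beta
    x, nu = x + t * dx, nu + t * dnu

print(np.linalg.norm(A @ x - b))    # feasibility reached
print(np.linalg.norm(res(x, nu)))   # residual driven to ~0
```

Once a full step $t = 1$ is accepted, $Ax = b$ holds exactly and all later iterates stay feasible.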

Solving KKT systems

\[ \begin{bmatrix} H & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} v \\ w \end{bmatrix} = - \begin{bmatrix} g \\ h \end{bmatrix} \]

solution methods

• $LDL^T$ factorization
• elimination (if $H$ nonsingular):

\[ AH^{-1}A^T w = h - AH^{-1}g, \qquad Hv = -(g + A^T w) \]

• elimination with singular $H$: write as

\[ \begin{bmatrix} H + A^T Q A & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} v \\ w \end{bmatrix} = - \begin{bmatrix} g + A^T Q h \\ h \end{bmatrix} \]

with $Q \succeq 0$ for which $H + A^T Q A \succ 0$, and apply elimination

11–12
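The elimination formulas can be checked against a direct dense solve of the full KKT system; all problem data below are made up:

```python
# Block elimination for [[H, A^T], [A, 0]] [v; w] = -[g; h], H nonsingular,
# verified against a direct dense solve (made-up data).
import numpy as np

rng = np.random.default_rng(4)
n, p = 7, 3
M = rng.standard_normal((n, n))
H = M @ M.T + np.eye(n)             # nonsingular (here positive definite)
A = rng.standard_normal((p, n))
g, h = rng.standard_normal(n), rng.standard_normal(p)

# elimination: A H^{-1} A^T w = h - A H^{-1} g, then H v = -(g + A^T w)
Hinv_g = np.linalg.solve(H, g)
Hinv_At = np.linalg.solve(H, A.T)
w = np.linalg.solve(A @ Hinv_At, h - A @ Hinv_g)
v = np.linalg.solve(H, -(g + A.T @ w))

# compare with solving the full KKT system directly
KKT = np.block([[H, A.T], [A, np.zeros((p, p))]])
direct = np.linalg.solve(KKT, -np.concatenate([g, h]))
print(np.linalg.norm(np.concatenate([v, w]) - direct))
```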

Equality constrained analytic centering

primal problem: minimize $-\sum_{i=1}^n \log x_i$ subject to $Ax = b$

dual problem: maximize $-b^T \nu + \sum_{i=1}^n \log (A^T \nu)_i + n$

three methods for an example with $A \in \mathbf{R}^{100 \times 500}$, different starting points

1. Newton method with equality constraints (requires $x^{(0)} \succ 0$, $Ax^{(0)} = b$)

[figure: $f(x^{(k)}) - p^\star$ versus $k$, dropping from about $10^5$ to below $10^{-5}$ within roughly 20 iterations]

11–13

2. Newton method applied to dual problem (requires $A^T \nu^{(0)} \succ 0$)

[figure: $p^\star - g(\nu^{(k)})$ versus $k$, converging within roughly 10 iterations]

3. infeasible start Newton method (requires $x^{(0)} \succ 0$)

[figure: $\|r(x^{(k)}, \nu^{(k)})\|_2$ versus $k$, converging within roughly 25 iterations]

11–14

complexity per iteration of the three methods is identical

1. use block elimination to solve the KKT system

\[ \begin{bmatrix} \mathbf{diag}(x)^{-2} & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ w \end{bmatrix} = \begin{bmatrix} \mathbf{diag}(x)^{-1} \mathbf{1} \\ 0 \end{bmatrix} \]

reduces to solving $A \,\mathbf{diag}(x)^2 A^T w = b$

2. solve the Newton system

\[ A \,\mathbf{diag}(A^T \nu)^{-2} A^T \Delta \nu = -b + A \,\mathbf{diag}(A^T \nu)^{-1} \mathbf{1} \]

3. use block elimination to solve the KKT system

\[ \begin{bmatrix} \mathbf{diag}(x)^{-2} & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} \Delta x \\ w \end{bmatrix} = \begin{bmatrix} \mathbf{diag}(x)^{-1} \mathbf{1} \\ b - Ax \end{bmatrix} \]

reduces to solving $A \,\mathbf{diag}(x)^2 A^T w = 2Ax - b$

conclusion: in each case, solve $ADA^T w = h$ with $D$ positive diagonal

11–15

Network flow optimization

\[ \begin{array}{ll} \mbox{minimize} & \sum_{i=1}^n \phi_i(x_i) \\ \mbox{subject to} & Ax = b \end{array} \]

• directed graph with $n$ arcs, $p + 1$ nodes
• $x_i$: flow through arc $i$; $\phi_i$: cost flow function for arc $i$ (with $\phi_i''(x) > 0$)
• node-incidence matrix $\tilde{A} \in \mathbf{R}^{(p+1) \times n}$ defined as

\[ \tilde{A}_{ij} = \begin{cases} 1 & \mbox{arc } j \mbox{ leaves node } i \\ -1 & \mbox{arc } j \mbox{ enters node } i \\ 0 & \mbox{otherwise} \end{cases} \]

• reduced node-incidence matrix $A \in \mathbf{R}^{p \times n}$ is $\tilde{A}$ with last row removed
• $b \in \mathbf{R}^p$ is (reduced) source vector
• $\mathbf{rank}\, A = p$ if graph is connected

11–16

KKT system

\[ \begin{bmatrix} H & A^T \\ A & 0 \end{bmatrix} \begin{bmatrix} v \\ w \end{bmatrix} = - \begin{bmatrix} g \\ h \end{bmatrix} \]

• $H = \mathbf{diag}(\phi_1''(x_1), \ldots, \phi_n''(x_n))$, positive diagonal
• solve via elimination: $AH^{-1}A^T w = h - AH^{-1}g$, $Hv = -(g + A^T w)$

sparsity pattern of the coefficient matrix is given by graph connectivity:

\[ (AH^{-1}A^T)_{ij} \neq 0 \iff (AA^T)_{ij} \neq 0 \iff \mbox{nodes } i \mbox{ and } j \mbox{ are connected by an arc} \]

11–17
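The sparsity claim is easy to check on a small example. Below, the graph (a 4-node path) and the diagonal Hessian values are made up; node 3 plays the role of the removed reference node:

```python
# Sparsity of A H^{-1} A^T matches graph connectivity (made-up example:
# a path graph 0 -> 1 -> 2 -> 3, with node 3 as the removed last row).
import numpy as np

arcs = [(0, 1), (1, 2), (2, 3)]
n, nodes = len(arcs), 4
A_tilde = np.zeros((nodes, n))
for j, (u, v) in enumerate(arcs):
    A_tilde[u, j] = 1.0              # arc j leaves node u
    A_tilde[v, j] = -1.0             # arc j enters node v
A = A_tilde[:-1]                     # reduced node-incidence matrix

H = np.diag([2.0, 1.0, 3.0])         # diag(phi_i''(x_i)), positive
S = A @ np.linalg.inv(H) @ A.T       # Schur complement A H^{-1} A^T

# nonzero pattern of S equals that of A A^T; nodes 0 and 2 share no arc
print(np.abs(S) > 1e-12)
print(np.abs(A @ A.T) > 1e-12)
```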

Analytic center of linear matrix inequality

\[ \begin{array}{ll} \mbox{minimize} & -\log \det X \\ \mbox{subject to} & \mathbf{tr}(A_i X) = b_i, \quad i = 1, \ldots, p \end{array} \]

variable $X \in \mathbf{S}^n$

optimality conditions:

\[ X^\star \succ 0, \qquad -(X^\star)^{-1} + \sum_{j=1}^p \nu_j^\star A_j = 0, \qquad \mathbf{tr}(A_i X^\star) = b_i, \quad i = 1, \ldots, p \]

Newton equation at feasible $X$:

\[ X^{-1} \Delta X \, X^{-1} + \sum_{j=1}^p w_j A_j = X^{-1}, \qquad \mathbf{tr}(A_i \Delta X) = 0, \quad i = 1, \ldots, p \]

• follows from the linear approximation $(X + \Delta X)^{-1} \approx X^{-1} - X^{-1} \Delta X \, X^{-1}$
• $n(n+1)/2 + p$ variables $\Delta X$, $w$

11–18

solution by block elimination

• eliminate $\Delta X$ from the first equation: $\Delta X = X - \sum_{j=1}^p w_j X A_j X$
• substitute $\Delta X$ in the second equation:

\[ \sum_{j=1}^p \mathbf{tr}(A_i X A_j X) \, w_j = b_i, \quad i = 1, \ldots, p \tag{2} \]

a dense positive definite set of linear equations with variable $w \in \mathbf{R}^p$

flop count (dominant terms) using Cholesky factorization $X = LL^T$:

• form $p$ products $L^T A_j L$: $(3/2) p n^3$
• form $p(p+1)/2$ inner products $\mathbf{tr}((L^T A_i L)(L^T A_j L))$: $(1/2) p^2 n^2$
• solve (2) via Cholesky factorization: $(1/3) p^3$

11–19
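The block elimination step above can be sketched in numpy: form the $p \times p$ system (2), solve for $w$, recover $\Delta X$, and verify both Newton equations at a feasible $X$. All data below (sizes, random $A_i$, $X$) are made up:

```python
# Block elimination for the LMI analytic center Newton equation
# (made-up symmetric data matrices A_i and feasible X > 0).
import numpy as np

rng = np.random.default_rng(5)
n, p = 4, 2
As = []
for _ in range(p):
    M = rng.standard_normal((n, n))
    As.append((M + M.T) / 2)                   # symmetric A_i
L0 = rng.standard_normal((n, n))
X = L0 @ L0.T + np.eye(n)                      # X > 0
bs = np.array([np.trace(Ai @ X) for Ai in As])  # makes X feasible

# dense p x p positive definite system (2): G w = b
G = np.array([[np.trace(Ai @ X @ Aj @ X) for Aj in As] for Ai in As])
w = np.linalg.solve(G, bs)
dX = X - sum(wj * X @ Aj @ X for wj, Aj in zip(w, As))

# check both Newton equations at X
Xinv = np.linalg.inv(X)
r1 = Xinv @ dX @ Xinv + sum(wj * Aj for wj, Aj in zip(w, As)) - Xinv
print(np.linalg.norm(r1))                            # first equation
print(max(abs(np.trace(Ai @ dX)) for Ai in As))      # tr(A_i dX) = 0
```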
