
Contents:
1. Lagrange relaxation. Global optimality conditions. KKT conditions for convex problems. Applications.
2. KKT conditions for general nonlinear optimization problems.

Lagrange relaxation

We consider the optimization problem

    minimize  f(x)
    s.t.      g_i(x) ≤ 0,  i = 1, ..., m        (1)

where f(x) and the g_i(x) are real-valued functions. If g(x) = (g_1(x), ..., g_m(x))ᵀ, the problem can be written

    minimize  f(x)
    s.t.      g(x) ≤ 0        (2)

The idea behind Lagrange relaxation is to put non-negative prices y_i ≥ 0 on the constraints and then add the priced constraints to the objective function. This gives the (unconstrained) optimization problem:

    minimize  f(x) + Σ_{i=1}^m y_i g_i(x)        (3)

which, using y = (y_1, ..., y_m)ᵀ, can be written

    minimize  f(x) + yᵀg(x)

The price y_i is called a Lagrange multiplier.

Definition 1. The function L : Rⁿ × Rᵐ → R defined by L(x, y) = f(x) + yᵀg(x) is called the Lagrange function of (2).

Opt & Syst, KTH: Lagrange relaxation and KKT conditions

Weak duality

Theorem 1 (Weak duality). For an arbitrary y ≥ 0 it holds that

    min_x L(x, y) ≤ f(x̄),        (4)

where x̄ is an optimal solution to (2).

Proof: Since x̄ is a feasible solution to (2) it holds that g(x̄) ≤ 0. Since also y ≥ 0, we get

    min_x L(x, y) ≤ L(x̄, y) = f(x̄) + yᵀg(x̄) ≤ f(x̄).

Hence, minimizing the Lagrange function provides lower bounds on the optimal value of (2). By an appropriate choice of y, a good approximation of the optimal value of (2) is searched for. In practical algorithms one tries to solve the dual problem

    max_{y ≥ 0} min_x L(x, y).

The next theorem gives conditions under which the Lagrange multipliers provide equality in (4).
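As a quick numerical illustration of the weak-duality bound (a sketch on a toy problem of our own, not from the notes: minimize x² s.t. 1 − x ≤ 0, whose optimum is x̄ = 1 with f(x̄) = 1), one can evaluate min_x L(x, y) for a few prices y ≥ 0 and check that each value lies below f(x̄):

```python
from scipy.optimize import minimize_scalar

# Toy problem (our own example): minimize x^2  s.t.  1 - x <= 0.
# The optimal solution is x* = 1 with f(x*) = 1.
f = lambda x: x**2
g = lambda x: 1.0 - x
L = lambda x, y: f(x) + y * g(x)   # Lagrange function

f_opt = 1.0
for y in [0.0, 1.0, 2.0, 3.0]:
    res = minimize_scalar(lambda x: L(x, y))
    print(f"y = {y}: min_x L(x, y) = {res.fun:.3f}  <=  f(x*) = {f_opt}")
# The bound is tight at y = 2, the optimal multiplier: min_x L(x, 2) = 1.
```

Here min_x L(x, y) = y − y²/4 (attained at x = y/2), so the printed bounds are 0, 0.75, 1, 0.75, all below f(x̄) = 1 as Theorem 1 guarantees.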

Global optimality conditions

Theorem 2. If (x̄, ȳ) ∈ Rⁿ × Rᵐ satisfies the conditions

(1) L(x̄, ȳ) = min_x L(x, ȳ),
(2) g(x̄) ≤ 0,
(3) ȳ ≥ 0,
(4) ȳᵀg(x̄) = 0,

then x̄ is an optimal solution to (2).

Proof: If x is an arbitrary feasible solution to (2) it holds that g(x) ≤ 0, which shows that

    f(x) ≥ f(x) + ȳᵀg(x) = L(x, ȳ) ≥ L(x̄, ȳ) = f(x̄),

where the first inequality follows from (3) and g(x) ≤ 0, the second inequality follows from (1), and the last equality from (4).
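The four conditions of Theorem 2 are easy to check mechanically. A minimal sketch on a toy problem of our own (not from the notes): minimize (x − 3)² s.t. x − 1 ≤ 0, with the candidate pair x̄ = 1, ȳ = 4:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy problem (our own example): minimize (x-3)^2  s.t.  x - 1 <= 0.
f = lambda x: (x - 3.0)**2
g = lambda x: x - 1.0
L = lambda x, y: f(x) + y * g(x)

xbar, ybar = 1.0, 4.0
res = minimize_scalar(lambda x: L(x, ybar))
assert np.isclose(L(xbar, ybar), res.fun)   # (1) x̄ minimizes L(., ȳ)
assert g(xbar) <= 0                         # (2) feasibility
assert ybar >= 0                            # (3) non-negative multiplier
assert np.isclose(ybar * g(xbar), 0.0)      # (4) complementarity
# By Theorem 2, x̄ = 1 is globally optimal; indeed f(1) = 4.
```

Note that the unconstrained minimizer x = 3 is infeasible, so the constraint is active and the multiplier ȳ = 4 is strictly positive, as complementarity suggests.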


Convex optimization problems

If the functions f and g_1, ..., g_m are convex and continuously differentiable, then condition (1) in Theorem 2 is equivalent to the condition

    ∇f(x̄) + Σ_{i=1}^m ȳ_i ∇g_i(x̄) = 0ᵀ        (5)

This follows since L(x, ȳ) is convex in x when ȳ ≥ 0, and then x̄ is a minimum point of L(x, ȳ) if, and only if, ∇_x L(x̄, ȳ) = 0ᵀ, i.e., if and only if (5) is satisfied. The global optimality conditions in Theorem 2 are sufficient conditions for optimality, but in general not necessary. The next theorem shows that they are often also necessary conditions for convex optimization problems.


Definition 2. The optimization problem (1) is a regular convex optimization problem if the functions f and g_1, ..., g_m are convex and continuously differentiable and there exists a point x₀ ∈ Rⁿ such that g_i(x₀) < 0, i = 1, ..., m.

Theorem 3 (KKT for convex problems). Assume that (1) is a regular convex problem. Then x̄ is a (global) optimal solution if, and only if, there exists a vector ȳ ∈ Rᵐ such that

(1) ∇f(x̄) + Σ_{i=1}^m ȳ_i ∇g_i(x̄) = 0ᵀ,
(2) g(x̄) ≤ 0,
(3) ȳ ≥ 0,
(4) ȳᵀg(x̄) = 0.

Proof: Sufficiency was shown previously. Necessity is shown in the book.

The conditions (2)-(4) can be made more explicit. We have that

    ȳᵀg(x̄) = Σ_{i=1}^m ȳ_i g_i(x̄) = 0.

Since ȳ_i ≥ 0 and g_i(x̄) ≤ 0, every term in the sum is non-positive, so the sum is zero if, and only if, ȳ_i g_i(x̄) = 0 for each i = 1, ..., m.

Geometric interpretation

The complementarity condition (4) implies that if g_i(x̄) < 0 then ȳ_i = 0. Therefore, condition (1) can be written

    ∇f(x̄) = − Σ_{i: g_i(x̄)=0} ȳ_i ∇g_i(x̄),

i.e., the gradient of the objective is a negative linear combination of the gradients of the binding (active) constraints.

[Figure: a point x̄ on the boundary of the feasible region, where ∇f(x̄) lies in the cone spanned by −∇g_1(x̄) and −∇g_2(x̄), the gradients of the active constraints.]

In particular, if no constraint is active at x̄, then ∇f(x̄) = 0ᵀ.

Quadratic programming

Consider the quadratic programming problem

    minimize  (1/2)xᵀHx + cᵀx
    s.t.      Ax ≥ b        (6)

If H is positive semi-definite, then (6) is a convex optimization problem and we can apply Theorem 3.

Theorem 4. x̄ is a (global) optimal solution to (6) if, and only if, there exists a vector ȳ ∈ Rᵐ such that

(1) Hx̄ + c = Aᵀȳ,
(2) Ax̄ ≥ b,
(3) ȳ ≥ 0,
(4) ȳᵀ(Ax̄ − b) = 0.
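The four conditions of Theorem 4 form a linear system once the active set is guessed. A minimal sketch on a toy QP of our own (not from the notes), with a single constraint assumed active:

```python
import numpy as np

# Toy QP (our own example), in the form of (6):
#   minimize (1/2) x^T H x + c^T x   s.t.   A x >= b
H = np.eye(2)
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

# Guessing the single constraint active, the KKT system of Theorem 4 becomes
#   H x = A^T y,  A x = b,  which gives x = (1, 1), y = (1,).
x = np.array([1.0, 1.0])
y = np.array([1.0])

assert np.allclose(H @ x + c, A.T @ y)      # (1) stationarity
assert np.all(A @ x >= b - 1e-12)           # (2) primal feasibility
assert np.all(y >= 0)                       # (3) dual feasibility
assert np.isclose(y @ (A @ x - b), 0.0)     # (4) complementarity
# By Theorem 4, x = (1, 1) is the global optimum.
```

Since y = 1 ≥ 0 the active-set guess is consistent, so by Theorem 4 the candidate is globally optimal.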


Traffic control in communication systems

We consider a communication network consisting of two links. Three sources send data over the network to three different destinations.

[Figure: Source 1 sends to Destination 1 over Link 1 and Link 2 in series; Source 2 sends to Destination 2 over Link 1; Source 3 sends to Destination 3 over Link 2.]

Source 1 uses both links, Source 2 uses link 1, and Source 3 uses link 2.


Link 1 has capacity 2 (in a normalized unit [data/s]) and link 2 has capacity 1. The three sources send data with rates x_r, r = 1, 2, 3, and each source has a utility function U_r(x_r). A common choice of utility function is U_r(x_r) = w_r log(x_r). For efficient and fair use of the available capacity, the data rates are chosen using the following optimization criterion:

    maximize  U_1(x_1) + U_2(x_2) + U_3(x_3)
    s.t.      x_1 + x_2 ≤ 2
              x_1 + x_3 ≤ 1
              x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0


Assume U_r(x_r) = log(x_r), r = 1, 2, 3. The optimization problem can then be written

    minimize  −log(x_1) − log(x_2) − log(x_3)
    s.t.      x_1 + x_2 ≤ 2
              x_1 + x_3 ≤ 1

We have relaxed the constraints x_r ≥ 0, r = 1, ..., 3, since they will be automatically satisfied (−log(x) → ∞ as x → 0⁺). The optimization problem is convex since the constraints are linear inequalities and the objective function is a sum of convex functions, and hence convex.
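Before deriving the solution by hand, the problem can be cross-checked numerically. A minimal sketch using scipy's SLSQP solver (our own code, not part of the notes; the starting point and bounds are arbitrary choices):

```python
import numpy as np
from scipy.optimize import minimize

# Relaxed rate-allocation problem:
#   minimize -log(x1) - log(x2) - log(x3)
#   s.t.     x1 + x2 <= 2,  x1 + x3 <= 1
obj = lambda x: -np.log(x[0]) - np.log(x[1]) - np.log(x[2])
cons = [{'type': 'ineq', 'fun': lambda x: 2.0 - x[0] - x[1]},   # x1 + x2 <= 2
        {'type': 'ineq', 'fun': lambda x: 1.0 - x[0] - x[2]}]   # x1 + x3 <= 1
bounds = [(1e-6, None)] * 3   # keep the iterates in the domain of log

res = minimize(obj, x0=np.array([0.5, 0.5, 0.4]),
               method='SLSQP', bounds=bounds, constraints=cons)
print(res.x)   # approximately (0.423, 1.577, 0.577)
```

The numerical solution agrees with the closed-form rates derived from the KKT conditions below, with both link constraints active.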


By Theorem 3, x̄ is optimal if, and only if, there exists ȳ = (y_1, y_2)ᵀ such that

    −1/x_1 + y_1 + y_2 = 0
    −1/x_2 + y_1 = 0        (1)
    −1/x_3 + y_2 = 0

    x_1 + x_2 − 2 ≤ 0
    x_1 + x_3 − 1 ≤ 0        (2)

    y_1 ≥ 0,  y_2 ≥ 0        (3)

    y_1(x_1 + x_2 − 2) = 0
    y_2(x_1 + x_3 − 1) = 0        (4)

The stationarity conditions (1) give x_1 = 1/(y_1 + y_2), x_2 = 1/y_1, x_3 = 1/y_2; in particular y_1 = 1/x_2 > 0 and y_2 = 1/x_3 > 0. Hence the complementarity conditions (4) show that both inequalities in (2) are satisfied with equality. We get

    1/(y_1 + y_2) + 1/y_1 = 2
    1/(y_1 + y_2) + 1/y_2 = 1

with solution

    y_1 = √3/(√3 + 1),  y_2 = √3,

which in turn gives the optimal data rates

    x̄_1 = (√3 + 1)/(√3(√3 + 2)) ≈ 0.42,  x̄_2 = (√3 + 1)/√3 ≈ 1.58,  x̄_3 = 1/√3 ≈ 0.58.
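The closed-form multipliers and rates can be verified directly against the KKT system and the link capacities (a sketch of our own, using the expressions above):

```python
import numpy as np

# Closed-form solution of the rate-allocation example.
s3 = np.sqrt(3.0)
y1, y2 = s3 / (s3 + 1.0), s3
x1, x2, x3 = 1.0 / (y1 + y2), 1.0 / y1, 1.0 / y2

assert np.isclose(-1.0/x1 + y1 + y2, 0.0)   # stationarity w.r.t. x1
assert np.isclose(-1.0/x2 + y1, 0.0)        # stationarity w.r.t. x2
assert np.isclose(-1.0/x3 + y2, 0.0)        # stationarity w.r.t. x3
assert np.isclose(x1 + x2, 2.0)             # link 1 used at full capacity
assert np.isclose(x1 + x3, 1.0)             # link 2 used at full capacity
print(f"x1 = {x1:.4f}, x2 = {x2:.4f}, x3 = {x3:.4f}")
```

Both constraints are active with strictly positive multipliers, so complementarity holds and, by Theorem 3, the rates are globally optimal.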


Problems with equality constraints

We now consider the optimization problem

    minimize  f(x)
    s.t.      h_i(x) = 0,  i = 1, ..., m        (7)

The feasible region F = {x ∈ Rⁿ : h_i(x) = 0, i = 1, ..., m} is not convex unless the functions h_i are affine, i.e., h_i(x) = a_iᵀx + b_i. We assume that n > m. We need the following technical assumption:

Definition 3. A feasible solution x ∈ F is a regular point of (7) if the gradients ∇h_i(x), i = 1, ..., m, are linearly independent.

Theorem 5 (KKT for problems with equality constraints). Assume that x̄ ∈ F is a regular point and a local optimal solution of (7). Then there exists ū ∈ Rᵐ such that

(1) ∇f(x̄) + Σ_{i=1}^m ū_i ∇h_i(x̄) = 0ᵀ,
(2) h_i(x̄) = 0, i = 1, ..., m.

Proof idea: Let x(t) be an arbitrary parameterized curve in the feasible set F = {x ∈ Rⁿ : h_i(x) = 0, i = 1, ..., m} such that x(0) = x̄. The figure on the next page illustrates how this curve is mapped to a curve f(x(t)) in the range space of the objective function. The feasible set F is in general of dimension higher than one, which is illustrated in the right figure.


[Figure: left, a curve x(t) in F through x̄ and its image f(x(t)) in the range space of f; right, the feasible set F as a surface in Rⁿ.]

Since x(0) = x̄ is a local optimal solution it holds that

    d/dt f(x(t))|_{t=0} = ∇f(x̄) x'(0) = 0.

Furthermore, x(t) ∈ F, which leads to

    h_i(x(t)) = 0,  i = 1, ..., m,  t ∈ (−ε, ε),

for some ε > 0. This means that

    d/dt h_i(x(t))|_{t=0} = ∇h_i(x̄) x'(0) = 0,  i = 1, ..., m,

which in turn leads to x'(0) ∈ N(A), where A is the matrix whose rows are ∇h_1(x̄), ..., ∇h_m(x̄). Conversely, the implicit function theorem can be used to show that if p ∈ N(A), then there exists a parameterized curve x(t) ∈ F with x(0) = x̄ and x'(0) = p.
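The tangent-space argument can be checked numerically on a small example of our own (not from the notes): minimize x_1² + x_2² subject to h(x) = x_1 + x_2 − 2 = 0, whose optimum is x̄ = (1, 1):

```python
import numpy as np

# Toy equality-constrained problem (our own example):
#   minimize x1^2 + x2^2   s.t.   h(x) = x1 + x2 - 2 = 0
x_bar = np.array([1.0, 1.0])     # the optimal point
grad_f = 2.0 * x_bar             # gradient of f at x_bar: (2, 2)
A = np.array([[1.0, 1.0]])       # rows of A are the gradients of h_i at x_bar

# Any tangent direction p lies in N(A); here N(A) is spanned by (1, -1).
p = np.array([1.0, -1.0])
assert np.isclose(A @ p, 0.0)        # p is feasible to first order
assert np.isclose(grad_f @ p, 0.0)   # f is stationary along every such p

# Hence grad f(x_bar) lies in R(A^T), and condition (1) of Theorem 5
# holds with the multiplier u = -2:
u = -2.0
assert np.allclose(grad_f + u * A[0], 0.0)
```

The check mirrors the proof idea: ∇f(x̄) annihilates every direction in N(A), which forces it into the row space of A.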

Since ∇f(x̄)p = 0 for every p ∈ N(A), it follows that ∇f(x̄)ᵀ ∈ N(A)^⊥ = R(Aᵀ), i.e., there exists ū ∈ Rᵐ such that

    ∇f(x̄) + Σ_{i=1}^m ū_i ∇h_i(x̄) = 0ᵀ.

General problems with inequality constraints

We finally consider the optimization problem

    minimize  f(x)
    s.t.      g_i(x) ≤ 0,  i = 1, ..., m        (8)

where f(x) and g_i(x) are real-valued functions. If the problem is not convex (i.e., if F = {x ∈ Rⁿ : g_i(x) ≤ 0, i = 1, ..., m} is not convex and/or f is not convex on F), it is in general only possible to derive necessary optimality conditions. The regularity condition in Definition 2 has to be replaced with a stronger condition.

Definition 4. For x ∈ F we let I_a(x) denote the index set of the active constraints at the point x, i.e., I_a(x) = {i ∈ {1, ..., m} : g_i(x) = 0}.

Definition 5. A feasible solution x ∈ F is a regular point of (8) if the gradients ∇g_i(x), i ∈ I_a(x), are linearly independent.


Theorem 6 (KKT for general problems with inequality constraints). Assume that x̄ is a regular point of (8) and a local optimal solution. Then there exists a vector ȳ ∈ Rᵐ such that

(1) ∇f(x̄) + Σ_{i=1}^m ȳ_i ∇g_i(x̄) = 0ᵀ,
(2) g_i(x̄) ≤ 0, i = 1, ..., m,
(3) ȳ_i ≥ 0, i = 1, ..., m,
(4) ȳ_i g_i(x̄) = 0, i = 1, ..., m.

Proof: The proof is similar to the proof of Theorem 3.

