Let A be an m × n matrix, and consider the following primal problem P in standard form and its dual D:

    P: min  c′x                    D: max  p′b
       s.t. Ax = b                    s.t. p′A ≤ c′
            x ≥ 0                          p free

The primal-dual algorithm is initialized with a dual feasible solution p. Define the index set J of admissible columns to be

    J = { j : p′A_j = c_j },

where A_j denotes the jth column of A. The set J is the set of dual constraints that are tight at p.

Suppose that x is a primal feasible solution. Then complementary slackness states that x and p are optimal for their respective problems if and only if

    p_i (a_i x − b_i) = 0  for i = 1, …, m,   and   x_j (p′A_j − c_j) = 0  for j = 1, …, n.

(Here, a_i denotes the ith row of A.) The first set of equations is always satisfied, since x is feasible and hence Ax = b. The second set of equations is equivalent to x_j = 0 for j ∉ J. Thus, we seek a primal feasible x with x_j = 0 for j ∉ J. If such an x exists, then by complementary slackness, x and p are optimal for their respective problems. We determine whether such an x exists by testing whether the constraints

    FP: Ax = b
        x ≥ 0
        x_j = 0  if j ∉ J
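As a concrete illustration of the admissible set and the complementary slackness test, here is a small Python sketch. The matrix A and the vectors b, c, p, x below are made-up example data for illustration, not taken from the notes.

```python
# Hypothetical worked example: compute the admissible set
# J = { j : p'A_j = c_j } and check complementary slackness.

A = [[1, 0, 1],
     [0, 1, 1]]
b = [3, 2]
c = [1, 1, 3]
p = [1, 1]          # dual feasible: p'A_j <= c_j for every column j

def column(A, j):
    return [row[j] for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def admissible_set(A, c, p):
    """Indices j whose dual constraint p'A_j <= c_j is tight at p."""
    return {j for j in range(len(c)) if dot(p, column(A, j)) == c[j]}

J = admissible_set(A, c, p)       # column 2 has p'A_2 = 2 < 3 = c_2

# x is primal feasible (Ax = b, x >= 0) with x_j = 0 off J, so by
# complementary slackness x and p are optimal; their costs agree.
x = [3, 2, 0]
assert all(dot(row, x) == bi for row, bi in zip(A, b))      # Ax = b
assert all(x[j] == 0 for j in range(len(c)) if j not in J)  # x_j = 0, j not in J
assert dot(c, x) == dot(p, b)                               # c'x = p'b
```

For this data the admissible set is J = {0, 1}, and the common optimal cost is 5.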

are feasible, using Phase I of the two-phase simplex algorithm. In Phase I, we solve an auxiliary linear program formed by adding m artificial variables y_i and changing the objective function. We call this auxiliary linear program the restricted primal RP:

    RP: min  1′y
        s.t. Ax + Iy = b
             x ≥ 0
             y ≥ 0
             x_j = 0  if j ∉ J.

(Here 1 denotes the vector of all 1s.) The auxiliary linear program of Phase I always has a finite optimum. Let ξ be the optimal cost of RP. If ξ = 0, then there exists a feasible solution x to the primal P such that x and p satisfy the complementary slackness conditions; hence x and p are optimal. Note that x is obtained by solving RP (and driving the artificial variables out of the basis if necessary). If ξ > 0, then we use a solution to the dual of the restricted primal to improve our solution p to the dual. The dual of the restricted primal is

    DRP: max  p′b
         s.t. p′A_j ≤ 0  for j ∈ J,
              p ≤ 1,
              p free.
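The construction of the restricted primal RP described above can be sketched in a few lines. The data A, b, and the admissible set J below are made-up example values, continuing the hypothetical example used earlier.

```python
# Sketch of forming RP: append m artificial columns I so that
# Ax + Iy = b, give the artificials cost 1 and every original
# variable cost 0, and record which x_j are frozen at 0 (j not in J).

A = [[1, 0, 1],
     [0, 1, 1]]
b = [3, 2]
J = {0, 1}                        # admissible columns for the current p

m, n = len(A), len(A[0])

# Constraint matrix of RP: [A | I].
rp_matrix = [A[i] + [1 if k == i else 0 for k in range(m)]
             for i in range(m)]

# Phase-I objective: cost 0 on each x_j, cost 1 on each artificial y_i.
rp_cost = [0] * n + [1] * m

# Variables fixed at zero: x_j for j not in J; the y_i stay free (>= 0).
fixed_to_zero = [j for j in range(n) if j not in J]
```

Feeding `rp_matrix`, `rp_cost`, and `fixed_to_zero` to any simplex routine then solves RP; the names here are illustrative, not part of any particular library.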


Let p̄ be an optimal solution to DRP, with cost p̄′b = ξ. We update our dual feasible vector by setting p̂ = p + θp̄ for some θ > 0. Note that the cost of p̂ is

    p̂′b = p′b + θp̄′b > p′b,

since p̄′b = ξ > 0; hence we have improved the cost of our dual feasible solution.

We wish to make θ as large as possible, to increase the cost of p̂ while still maintaining dual feasibility. If j ∈ J, then p̄′A_j ≤ 0, and so

    p̂′A_j = p′A_j + θp̄′A_j ≤ c_j

for every θ > 0. If j ∉ J, then we require that p̂′A_j = p′A_j + θp̄′A_j ≤ c_j. When p̄′A_j ≤ 0 this holds for every θ > 0, but when p̄′A_j > 0 we need

    θ ≤ (c_j − p′A_j) / p̄′A_j.

If p̄′A_j ≤ 0 for all j ∉ J, then θ can be chosen arbitrarily large: the dual has optimal cost +∞, the primal is infeasible, and the algorithm terminates. Otherwise, we choose the most restrictive bound, setting

    θ* = min { (c_j − p′A_j) / p̄′A_j : j ∉ J, p̄′A_j > 0 }

and p̂ = p + θ*p̄. Note that c_j − p′A_j > 0 for j ∉ J, so the value of θ* is strictly positive. Moreover, for any index j attaining the minimum,

    p̂′A_j = p′A_j + θ*p̄′A_j = c_j,

and so A_j is an admissible column at the start of the next iteration. We repeat this process until obtaining an optimal primal solution x or proving that the primal is infeasible.

We summarize an iteration of the primal-dual algorithm:

1. Given a dual feasible solution p, find the set J = { j : p′A_j = c_j } of admissible columns; J is found by considering which dual constraints are tight at p. Solve the restricted primal problem RP using the simplex method.

2. If the optimal cost ξ of RP is 0, then the optimal solution x to RP is also an optimal solution to the primal problem (note that the cost in RP of x is 0), and the algorithm terminates.

3. If the optimal cost ξ of RP is positive, find an optimal solution p̄ to the dual of the restricted primal problem DRP. If p̄′A_j ≤ 0 for all j ∉ J, then the dual has optimal cost +∞, the primal is infeasible, and the algorithm terminates. Otherwise, set

       θ* = min { (c_j − p′A_j) / p̄′A_j : j ∉ J, p̄′A_j > 0 },

   set p̂ = p + θ*p̄, and repeat Step 1 with the new dual feasible solution p̂.

To discuss finiteness and the advantages of the primal-dual algorithm, we have the following

Lemma 1. Every admissible column in the optimal basis of the restricted primal RP remains admissible at the start of the next iteration.

Proof. If A_j is in the optimal basis of RP, then the reduced cost of x_j in the optimal tableau is c̄_j = 0 − p̄′A_j = 0 (the cost of x_j in RP is 0), and so p̄′A_j = 0. Hence

    p̂′A_j = p′A_j + θ*p̄′A_j = p′A_j = c_j,

and A_j remains admissible at the start of the next iteration.
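The step-size computation θ* and the dual update can be traced numerically. The data below (A, b, c, the starting point p, and the DRP solution p_bar) are made-up example values chosen so that J is empty and the arithmetic is easy to follow.

```python
# Hypothetical trace of the dual update step p_hat = p + theta* p_bar.

A = [[1, 0, 1],
     [0, 1, 1]]
b = [3, 2]
c = [1, 1, 3]
p = [0.0, 0.0]       # dual feasible with no tight constraints: J is empty
p_bar = [1.0, 1.0]   # an optimal DRP solution; p_bar'b = 5 = xi > 0
J = set()

def column(A, j):
    return [row[j] for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# theta* = min over j not in J with p_bar'A_j > 0 of (c_j - p'A_j)/(p_bar'A_j).
# (If this list were empty, the primal would be infeasible.)
ratios = [(c[j] - dot(p, column(A, j))) / dot(p_bar, column(A, j))
          for j in range(len(c))
          if j not in J and dot(p_bar, column(A, j)) > 0]
theta_star = min(ratios)          # min(1/1, 1/1, 3/2) = 1.0 here

p_hat = [pi + theta_star * pbi for pi, pbi in zip(p, p_bar)]

# The dual cost strictly improves, and a minimizing column becomes tight.
assert dot(p_hat, b) > dot(p, b)
assert dot(p_hat, column(A, 0)) == c[0]   # column 0 is now admissible
```

With this data θ* = 1, p̂ = (1, 1), and the dual cost jumps from 0 to 5.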

Theorem 2. The primal-dual algorithm terminates in a finite number of iterations.

Proof. We view the primal-dual algorithm as providing a sequence of bases for the Phase I auxiliary linear program AP for the original primal problem. Lemma 1 states that every column in an optimal basis of the restricted primal remains admissible at the start of the next iteration; that is, that optimal basis can be used as the initial basic feasible solution when solving the subsequent restricted primal problem. Thus, the primal-dual algorithm can be thought of as pivoting in a primal tableau for Phase I, with restrictions on which columns may enter the basis; the final tableau simultaneously satisfies primal and dual feasibility and hence is optimal. Suppose that A_j is newly admissible; then from the previous iteration we have p̄′A_j > 0, so the reduced cost of x_j is c̄_j = 0 − p̄′A_j < 0, and x_j is eligible to enter the basis. If no basic feasible solution of AP is degenerate, then each basis corresponds to a basic feasible solution with strictly decreasing costs ξ, so no basis will be repeated, and the primal-dual algorithm terminates using only a finite number of pivots solving restricted primal problems. In the presence of degeneracy, the use of anticycling rules guarantees that the primal-dual algorithm terminates in a finite number of iterations.

We conclude by discussing three advantages of the primal-dual algorithm.

1. The restricted primal problems have the same number of rows as the primal problem, but they have significantly fewer variables.

2. By Lemma 1, every admissible column in the optimal basis of the restricted primal remains admissible at the start of the next iteration, so that optimal basis can be used as the initial basic feasible solution when solving the subsequent restricted primal problem. The tableau needs to be adjusted by adding columns for newly admissible columns and removing the columns that are no longer admissible; the rest of a new column j in the tableau can be computed using B⁻¹A_j.

3. In forming the restricted primal problems, we replace the original cost function c with the simpler cost function prescribed by Phase I of the simplex method, assigning a cost of 1 to each artificial variable y_i and a cost of 0 to every other variable x_j; that is, we "combinatorialize the cost".

References

[1] C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, Englewood Cliffs, NJ, 1982.
