Lec 25 Linear Programming


Email: nasirmm@yahoo.com

Introduction to LP Problem

When the objective function and all constraints are linear functions of the variables, the problem is known as a linear programming (LP) problem. A large number of engineering applications have been successfully formulated and solved as LP problems. LP problems also arise during the solution of nonlinear problems as a result of linearizing functions around a given point.

It is important to recognize that a problem is of the LP type because of the availability of well-established methods, such as the simplex method, for solving such problems. Problems with thousands of variables and constraints can be handled with the simplex method.

Introduction to LP Problem

The simplex method requires that the LP problem be stated in a standard form that involves only equality constraints. Conversion of a given LP problem to this form is discussed first.

In the standard LP form, since the constraints are linear equalities, the simplex method essentially boils down to solving systems of linear equations. A review of solving linear systems of equations using the Gauss-Jordan form and the LU decomposition will help.

Here we present methods for solving linear programming (LP) problems expressed in the following standard form:

Find x in order to
Minimize: f(x) = cᵀx
Subject to: Ax = b and x ≥ 0

where:

x = [x1, x2, ..., xn]ᵀ is the vector of optimization variables;
c = [c1, c2, ..., cn]ᵀ is the vector of objective (cost) coefficients;
b = [b1, b2, ..., bm]ᵀ ≥ 0 is the vector of right-hand sides of the constraints;
A is the m × n matrix of constraint coefficients:

    | a11  a12  ...  a1n |
A = | a21  a22  ...  a2n |
    | ...                |
    | am1  am2  ...  amn |

Note that in this standard form, the problem is of the minimization type, all constraints are expressed as equalities with the right-hand side greater than or equal to (≥) 0, and all optimization variables are restricted to be nonnegative.
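As a quick sanity check, the standard form can be encoded directly. The sketch below (plain Python; the helper name `check_standard` is my own) verifies that a candidate x satisfies Ax = b and x ≥ 0, and evaluates f = cᵀx:

```python
def check_standard(c, A, b, x, tol=1e-9):
    """Return (feasible, f) for the standard LP form: minimize c.x
    subject to A x = b and x >= 0."""
    equalities = all(
        abs(sum(aij * xj for aij, xj in zip(row, x)) - bi) <= tol
        for row, bi in zip(A, b)
    )
    nonnegative = all(xj >= -tol for xj in x)
    f = sum(ci * xi for ci, xi in zip(c, x))
    return equalities and nonnegative, f

# Tiny example: minimize x1 + 2*x2 subject to x1 + x2 = 1, x >= 0
feasible, f = check_standard([1, 2], [[1, 1]], [1], [1.0, 0.0])
print(feasible, f)   # True 1.0
```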

Maximization Problem

As already mentioned, a maximization problem can be converted to a minimization problem simply by multiplying the objective function by a negative sign. For example,

Maximize: z(x, y) = 3x + 5y is the same as
Minimize: f(x, y) = -3x - 5y.

From the optimality conditions, it is easy to see that the optimum solution x* does not change if a constant is added to or subtracted from the objective function. Thus, a constant in the objective function can simply be ignored; after the solution is obtained, the optimum value of the objective function is adjusted to account for this constant.

Alternatively, a new dummy optimization variable can be defined to multiply the constant, with a constraint added to set the value of this variable to 1. For example, consider the following objective function of two variables:

Minimize: f(x, y) = 3x + 5y + 7

In standard LP form, it can be written as follows:

Minimize: f(x, y, z) = 3x + 5y + 7z
Subject to: z = 1

The standard form requires that all constraints be arranged such that the constant term, if any, is a positive quantity on the right-hand side. If a constant appears as negative on the right-hand side of a given constraint, multiply the constraint by a negative sign. Keep in mind that the direction of the inequality changes (≤ becomes ≥, and vice versa) when both sides are multiplied by a negative sign. For example,

3x1 + 5x2 ≤ -7 is the same as -3x1 - 5x2 ≥ 7

Less-than Type Constraints

Add a new nonnegative variable (called a slack variable) to convert a less-than-or-equal (LE) constraint to an equality. For example, 3x + 5y ≤ 7 is converted to 3x + 5y + z = 7, where z ≥ 0 is a slack variable.

Subtract a new nonnegative variable (called a surplus variable) to convert a greater-than-or-equal (GE) constraint to an equality. For example, 3x + 5y ≥ 7 is converted to 3x + 5y - z = 7, where z ≥ 0 is a surplus variable.

Note that, since the right-hand sides of the constraints are restricted to be positive, we cannot simply multiply both sides of the GE constraints by -1 to convert them into the LE type.
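This conversion is mechanical, so it can be sketched in a few lines. The snippet below (plain Python; the function name `to_equality` is my own) appends one new nonnegative variable per inequality row:

```python
def to_equality(coeffs, sense, rhs):
    """Convert the constraint row `coeffs . x  sense  rhs` into an equality
    by appending one nonnegative variable:
    +1 coefficient (slack) for <=, -1 coefficient (surplus) for >=."""
    if sense == "<=":
        return coeffs + [1.0], rhs
    if sense == ">=":
        return coeffs + [-1.0], rhs
    return coeffs + [0.0], rhs   # already an equality: new variable is unused

print(to_equality([3, 5], "<=", 7))   # ([3, 5, 1.0], 7)   i.e. 3x + 5y + z = 7
print(to_equality([3, 5], ">=", 7))   # ([3, 5, -1.0], 7)  i.e. 3x + 5y - z = 7
```

In a full converter, each constraint would get its own slack or surplus column so the new variables stay independent.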

Unrestricted Variables

The standard LP form restricts all variables to be nonnegative. If an actual optimization variable is unrestricted in sign, it can be converted to the standard form by defining it as the difference of two new nonnegative variables. For example, if variable x is unrestricted in sign, it is replaced by two new variables y1 and y2 with x = y1 - y2; both of the new variables are nonnegative. After the solution is obtained, if y1 > y2 then x is positive, and if y1 < y2 then x is negative.
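A minimal sketch of the substitution x = y1 - y2 (stdlib Python; `split_free` is a name of my own):

```python
def split_free(x):
    """Represent a sign-unrestricted x as (y1, y2) with y1, y2 >= 0
    and x = y1 - y2."""
    return (float(x), 0.0) if x >= 0 else (0.0, float(-x))

# Round-trip check for positive, negative, and zero values
for x in (3.0, -4.2, 0.0):
    y1, y2 = split_free(x)
    assert y1 >= 0 and y2 >= 0 and y1 - y2 == x
```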

Example

Convert the following problem to the standard LP form.

Maximize: z = 3x + 8y

Subject to:
3x + 4y ≥ -20
x + 3y ≥ 6
x ≥ 0

Note that y is unrestricted in sign. Define new variables (all ≥ 0): x = y1; y = y2 - y3.

Substituting these variables, and multiplying the first constraint by a negative sign so that its right-hand side is positive, the problem is as follows:

Maximize: z = 3y1 + 8y2 - 8y3

Subject to:
-3y1 - 4y2 + 4y3 ≤ 20
y1 + 3y2 - 3y3 ≥ 6
y1, y2, y3 ≥ 0

Example

Multiplying the objective function by a negative sign and introducing slack and surplus variables in the constraints, the problem in the standard LP form is as follows:

Minimize: f = -3y1 - 8y2 + 8y3

Subject to:
-3y1 - 4y2 + 4y3 + y4 = 20
y1 + 3y2 - 3y3 - y5 = 6
y1, ..., y5 ≥ 0
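A spot check of this conversion can be coded directly. The sketch below (plain Python; `standard_point` is my own name, and it assumes the original constraints read 3x + 4y ≥ -20 and x + 3y ≥ 6 with y free in sign) maps an original point into the standard-form variables and verifies the equalities:

```python
def standard_point(x, y):
    """Map an original (x, y) to the standard-form variables (y1,...,y5)."""
    y1 = x
    y2, y3 = (y, 0.0) if y >= 0 else (0.0, -y)   # y = y2 - y3
    y4 = 20 - (-3*y1 - 4*y2 + 4*y3)              # slack for constraint 1
    y5 = (y1 + 3*y2 - 3*y3) - 6                  # surplus for constraint 2
    return y1, y2, y3, y4, y5

x, y = 0.0, 2.0                                  # satisfies both constraints
y1, y2, y3, y4, y5 = standard_point(x, y)
assert all(v >= 0 for v in (y1, y2, y3, y4, y5)) # standard-form feasible
assert -3*y1 - 4*y2 + 4*y3 + y4 == 20            # equality constraint 1
assert y1 + 3*y2 - 3*y3 - y5 == 6                # equality constraint 2
assert (-3*y1 - 8*y2 + 8*y3) == -(3*x + 8*y)     # objectives agree: f = -z
```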

Since linear functions are always convex, the LP problem is a convex programming problem. This means that if an optimum solution exists, it is a global optimum. The optimum solution of an LP problem always lies on the boundary of the feasible domain; we can easily prove this by contradiction.

LP Problems

Once an LP problem is converted to its standard form, the constraints represent a system of m equations in n unknowns.

When m = n (i.e., the number of constraints is equal to the number of optimization variables), the solution for all variables is obtained from the constraint equations alone, with no consideration of the objective function. This situation clearly does not represent an optimization problem.

On the other hand, m > n does not make sense, because in this case some of the constraints must be linearly dependent on the others.

Thus, from an optimization point of view, the only meaningful case is when the number of constraints is smaller than the number of variables (after the problem has been expressed in the standard LP form), that is, m < n.

So, solving LP problems involves solving a system of underdetermined linear equations (the number of equations is less than the number of unknowns). A review of the Gauss-Jordan procedure for solving a system of linear equations is presented next.

Consider the solution of the following system of equations: Ax = b, where A is an m × n coefficient matrix, x is an n × 1 vector of unknowns, and b is an m × 1 vector of known right-hand sides.

Basic Principles

The general description of a set of linear equations in matrix form is

[A][X] = [B]

where [A] is an m × n matrix, [X] is an n × 1 vector, and [B] is an m × 1 vector. To set up a system:

Identify the unknowns and order them.
Isolate the unknowns.
Write the equations in matrix form.

Matrix Representation

[A]{x} = {b}

| a11  a12  ...  a1n | | x1 |   | b1 |
| a21  a22  ...  a2n | | x2 | = | b2 |
| ...                | | .. |   | .. |
| an1  an2  ...  ann | | xn |   | bn |

Gaussian Elimination

One of the most popular techniques for solving simultaneous linear equations of the form [A][X] = [C]. It consists of two steps:

1. Forward elimination of unknowns
2. Back substitution

Forward Elimination

The goal of forward elimination is to transform the coefficient matrix into an upper triangular matrix. For example:

| 25   5   1 |       | 25    5      1    |
| 64   8   1 |  -->  |  0   -4.8  -1.56  |
| 144  12  1 |       |  0    0     0.7   |

Forward Elimination

Linear equations: a set of n equations in n unknowns (Eq. 1, Eq. 2, ..., Eq. n).

Computer Program

function x = gaussE(A,b,ptol)
% gaussE  Show steps in Gauss elimination and back substitution.
%         No pivoting is used.
%
% Synopsis:  x = gaussE(A,b)
%            x = gaussE(A,b,ptol)
%
% Input:   A,b  = coefficient matrix and right-hand-side vector
%          ptol = (optional) tolerance for detection of a zero pivot;
%                 default: ptol = 50*eps
%
% Output:  x = solution vector, if a solution exists
%
% Example data:  A = [25 5 1; 64 8 1; 144 12 1],  b = [106.8; 177.2; 279.2]

if nargin<3, ptol = 50*eps; end
[m,n] = size(A);
if m~=n, error('A matrix needs to be square'); end
nb = n+1; Ab = [A b];   % Augmented system
fprintf('\n Begin forward elimination with augmented system:\n'); disp(Ab);

% --- Elimination
for i = 1:n-1
    pivot = Ab(i,i);
    if abs(pivot)<ptol, error('zero pivot encountered'); end
    for k = i+1:n
        factor = Ab(k,i)/pivot;
        Ab(k,i:nb) = Ab(k,i:nb) - factor*Ab(i,i:nb);
        fprintf('Multiplication factor is %g\n',factor);
        disp(Ab); pause;
    end
    fprintf('\n After elimination in column %d with pivot = %f \n\n',i,pivot);
    disp(Ab); pause;
end

% --- Back substitution
x = zeros(n,1);              % initialize the x vector to zero
x(n) = Ab(n,nb)/Ab(n,n);
for i = n-1:-1:1
    x(i) = (Ab(i,nb) - Ab(i,i+1:n)*x(i+1:n))/Ab(i,i);
end
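For readers working outside MATLAB, here is a rough Python port of the routine above (the name `gauss_elim` is my own), applied to the same 3×3 example system:

```python
def gauss_elim(A, b, ptol=1e-12):
    """Gaussian elimination without pivoting, followed by back substitution."""
    n = len(A)
    # Build the augmented matrix [A | b]
    Ab = [list(map(float, row)) + [float(rhs)] for row, rhs in zip(A, b)]
    # Forward elimination: zero out the entries below each pivot
    for i in range(n - 1):
        pivot = Ab[i][i]
        if abs(pivot) < ptol:
            raise ZeroDivisionError("zero pivot encountered")
        for k in range(i + 1, n):
            factor = Ab[k][i] / pivot
            Ab[k] = [akj - factor * aij for akj, aij in zip(Ab[k], Ab[i])]
    # Back substitution on the upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(Ab[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (Ab[i][n] - s) / Ab[i][i]
    return x

A = [[25, 5, 1], [64, 8, 1], [144, 12, 1]]
b = [106.8, 177.2, 279.2]
print(gauss_elim(A, b))   # approximately [0.2905, 19.6905, 1.0857]
```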

Solving an LP problem therefore amounts to solving a system of underdetermined (fewer equations than variables) linear equations: we can solve for m of the variables in terms of the remaining n - m variables. The variables that we choose to solve for are called basic, and the remaining variables are called non-basic.

Example

Minimize: f = -x + y

Subject to:
x - 2y ≥ 2
x + y ≤ 4
x ≤ 3
x, y ≥ 0

In standard form this LP problem becomes:

Minimize: f = -x1 + x2

Subject to:
x1 - 2x2 - x3 = 2
x1 + x2 + x4 = 4
x1 + x5 = 3
xi ≥ 0 for all i = 1, 2, 3, 4, 5

where x3 is a surplus variable for the first constraint, and x4 and x5 are slack variables for the two less-than type constraints.

The number of equations is m = 3 and the number of variables is n = 5. Thus, we can have three basic variables and two non-basic variables.

If we arbitrarily choose x3, x4, and x5 as basic variables, a general solution of the constraint equations can readily be written as follows:

x3 = -2 + x1 - 2x2
x4 = 4 - x1 - x2
x5 = 3 - x1

Basic Solutions

The general solution is valid for any values of the non-basic variables. Since all variables must be nonnegative and we are interested in minimizing the objective function, we assign the value 0 to the non-basic variables.

A solution of the constraint equations obtained by setting the non-basic variables to zero is called a basic solution.

Therefore, one possible basic solution for the above example is as follows: x3 = -2; x4 = 4; x5 = 3. Since all variables must be ≥ 0, this basic solution is not feasible, because x3 is negative.

Basic Solutions

Let's find another basic solution by choosing (again arbitrarily) x1, x4, and x5 as basic variables and x2 and x3 as non-basic. Setting x2 = x3 = 0, the constraint equations give x1 = 2, x1 + x4 = 4, and x1 + x5 = 3, so x1 = 2, x4 = 2, x5 = 1. Since all variables have nonnegative values, this basic solution is feasible.

The number of possible basic solutions depends on the number of constraints and the number of variables in the problem:

Possible basic solutions = n! / [ m! (n - m)! ]

For this example, the number of basic solutions is 5! / [ 3! 2! ] = 10.
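The count can be checked directly with the standard-library binomial coefficient:

```python
import math

# Number of ways to choose m = 3 basic variables out of n = 5
print(math.comb(5, 3))   # 10
```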

All 10 basic solutions are computed from the constraint equations and are summarized in the following table. The set of basic variables for a particular solution is called a basis for that solution.

No.   Basis             Solution {x1, x2, x3, x4, x5}   Status
1     {x1, x2, x3}      {3, 1, -1, 0, 0}                infeasible
2     {x1, x2, x4}      {3, 1/2, 0, 1/2, 0}             feasible
3     {x1, x2, x5}      {10/3, 2/3, 0, 0, -1/3}         infeasible
4     {x1, x3, x4}      {3, 0, 1, 1, 0}                 feasible
5     {x1, x3, x5}      {4, 0, 2, 0, -1}                infeasible
6     {x1, x4, x5}      {2, 0, 0, 2, 1}                 feasible
7     {x2, x3, x4}      {-}                             no solution
8     {x2, x3, x5}      {0, 4, -10, 0, 3}               infeasible
9     {x2, x4, x5}      {0, -1, 0, 5, 3}                infeasible
10    {x3, x4, x5}      {0, 0, -2, 4, 3}                infeasible

Graphical Solution

[Figure: the feasible region in the (x1, x2) plane, bounded by the constraint lines x2 = 0.5x1 - 1.0, x2 = 4.0 - x1, and x1 = 3, with objective contours f = -1.0, -1.5, -2.0, -2.5, -3.0, -3.5. The basic feasible solutions (Sol 2, Sol 4, and Sol 6) lie at vertices of the feasible region; Sol 4 is the optimum.]

Final Thoughts

One of the basic feasible solutions must be the optimum. Thus, a brute-force method to find an optimum is to compute all possible basic solutions. The one that is feasible and has the lowest value of the objective function is the optimum solution.

For the example problem, the fourth basic solution is feasible and has the lowest value of the objective function. Thus, it represents the optimum solution for the problem.

Optimum solution: x1 = 3, x2 = 0, x3 = 1, x4 = 1, x5 = 0, f = -3.
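The brute-force method can be sketched directly for the example problem (stdlib Python only; the helper names are my own). It enumerates all C(5,3) = 10 candidate bases, solves each 3×3 system exactly with rational arithmetic, and keeps the feasible solution with the lowest objective value:

```python
from fractions import Fraction
from itertools import combinations

# Standard-form data: minimize f = -x1 + x2 subject to
#   x1 - 2x2 - x3      = 2
#   x1 +  x2      + x4 = 4
#   x1            + x5 = 3,   all xi >= 0
A = [[1, -2, -1, 0, 0],
     [1,  1,  0, 1, 0],
     [1,  0,  0, 0, 1]]
b = [2, 4, 3]
c = [-1, 1, 0, 0, 0]
m, n = len(A), len(A[0])

def solve_square(M, rhs):
    """Gauss-Jordan elimination with exact Fraction arithmetic.
    Returns the solution vector, or None if the matrix is singular."""
    M = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    size = len(M)
    for col in range(size):
        pivot_row = next((r for r in range(col, size) if M[r][col] != 0), None)
        if pivot_row is None:
            return None
        M[col], M[pivot_row] = M[pivot_row], M[col]
        piv = M[col][col]
        M[col] = [v / piv for v in M[col]]
        for r in range(size):
            if r != col and M[r][col] != 0:
                fac = M[r][col]
                M[r] = [a - fac * p for a, p in zip(M[r], M[col])]
    return [row[-1] for row in M]

best = None
for basis in combinations(range(n), m):          # all C(5,3) = 10 bases
    cols = [[A[i][j] for j in basis] for i in range(m)]
    sol = solve_square(cols, b)
    if sol is None:
        continue                                  # singular basis: no solution
    x = [Fraction(0)] * n
    for j, v in zip(basis, sol):
        x[j] = v
    if all(v >= 0 for v in x):                    # basic *feasible* solution
        f = sum(ci * xi for ci, xi in zip(c, x))
        if best is None or f < best[0]:
            best = (f, x)

print(best[0], best[1])   # optimum: f = -3 at x = (3, 0, 1, 1, 0)
```

This reproduces the table above: three feasible bases, with the minimum f = -3 attained by the basis {x1, x3, x4}.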
