
Linear Programming

Optimization Problem: Problems which seek to maximize or minimize a numerical function of a finite number of variables subject to certain constraints are called optimization problems.
Programming Problem: Programming problems deal with determining optimal allocations of limited resources to meet given objectives. The constraints on the limited resources are given by linear or non-linear inequalities or equations. The given objective may be to maximize or minimize a certain function of a finite number of variables.
Linear Programming and Linear Programming Problem: Suppose we are given m linear inequalities or equations in n variables and we wish to find non-negative values of these variables which satisfy the constraints and maximize or minimize some linear function of these variables (the objective function). This procedure is known as linear programming, and the problem so described is known as a linear programming problem.
Mathematically it can be described as follows. Suppose we have m linear inequalities or equations in n unknown variables of the form

∑_{j=1}^{n} a_ij x_j {≤, =, ≥} b_i;  i = 1, 2, ..., m,

where for each constraint one and only one of the signs ≤, =, ≥ holds. We wish to find the non-negative values of x_j, j = 1, 2, ..., n, which satisfy the constraints and maximize or minimize the linear function

z = ∑_{j=1}^{n} c_j x_j.

Here a_ij, b_i and c_j are known constants.


Applications:
(i) Linear programming is widely applicable in business and economic activities.
(ii) It is also applicable in government, military and industrial operations.
(iii) It is also extensively used in development planning.
Objective Function: In a linear programming problem, the linear function z = ∑_{j=1}^{n} c_j x_j of the variables x_j, j = 1, 2, ..., n, which is to be optimized is called the objective function. No constant term appears in an objective function, i.e. we cannot write an objective function of the type z = ∑_{j=1}^{n} c_j x_j + k.

Example of Linear Programming Problem:


Machine Type    Time Required Per Unit of Product (hours)    Total Time Available
                1       2       3       4                    Per Week (hours)
A               1.5     1       2.4     1                    2000
B               1       5       1       3.5                  8000
C               1.5     3       3.5     1                    5000
Unit Profit     5.24    7.30    8.34    4.18

Suppose three types of machines A, B and C turn out four products 1, 2, 3 and 4. The above table shows (i) the hours required on each machine type to produce one unit of each product, (ii) the total available machine hours per week, and (iii) the per unit profit on the sale of each product.

Suppose x_j (j = 1, 2, 3, 4) is the number of units of product j produced per week. Then we have the following linear constraints:

1.5x1 + x2 + 2.4x3 + x4 ≤ 2000    (i)
x1 + 5x2 + x3 + 3.5x4 ≤ 8000      (ii)
1.5x1 + 3x2 + 3.5x3 + x4 ≤ 5000   (iii)

Since the amount of production cannot be negative, x_j ≥ 0 (j = 1, 2, 3, 4) (iv). The weekly profit is given by z = 5.24x1 + 7.3x2 + 8.34x3 + 4.18x4 (v). Now we wish to determine the values of the variables x_j for which (i), (ii), (iii) and (iv) are satisfied and (v) is maximized.
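As an illustration, here is a minimal sketch of this model in Python, assuming SciPy is available; scipy.optimize.linprog minimizes, so the unit profits are negated:

```python
# A minimal sketch of the machine-products LP with SciPy (assumes scipy installed).
from scipy.optimize import linprog

c = [-5.24, -7.30, -8.34, -4.18]        # negated unit profits (maximize -> minimize)
A_ub = [[1.5, 1.0, 2.4, 1.0],           # machine A hours per unit of each product
        [1.0, 5.0, 1.0, 3.5],           # machine B hours per unit
        [1.5, 3.0, 3.5, 1.0]]           # machine C hours per unit
b_ub = [2000, 8000, 5000]               # weekly machine-hour capacities

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print(res.x)          # optimal weekly production of products 1-4
print(-res.fun)       # maximum weekly profit
```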

Formulation of Linear Programming Problem


(i) Transportation Problem:
Suppose a given amount of a uniform product is available at each of a number of origins, say warehouses. We wish to send specified amounts of the product to each of a number of different destinations, say retail stores. We are interested in determining the minimum-cost routing from the warehouses to the retail stores.
Let us define
m = the number of warehouses
n = the number of retail stores
x_ij = the amount of product shipped from the ith warehouse to the jth retail store.
Since negative amounts cannot be shipped, we have x_ij ≥ 0 for all i, j.
a_i = the total number of units of the product available for shipment at the ith warehouse (i = 1, 2, ..., m).
b_j = the number of units of the product required at the jth retail store.
Since we cannot supply more than the available amount of the product from the ith warehouse to the different retail stores, we have
x_i1 + x_i2 + ... + x_in ≤ a_i;  i = 1, 2, ..., m
We must supply each retail store with the number of units desired, therefore
x_1j + x_2j + ... + x_mj = b_j;  j = 1, 2, ..., n

The total amount received at any retail store is the sum of the amounts received from each warehouse. The needs of the retail stores can be satisfied only if

∑_{i=1}^{m} a_i ≥ ∑_{j=1}^{n} b_j

Let us define c_ij as the per unit cost of shipping from the ith warehouse to the jth retail store. Then the total cost of shipping is

z = ∑_{i=1}^{m} ∑_{j=1}^{n} c_ij x_ij

Now we wish to determine the x_ij which minimize the cost z = ∑_{i=1}^{m} ∑_{j=1}^{n} c_ij x_ij subject to the constraints

x_i1 + x_i2 + ... + x_in ≤ a_i;  i = 1, 2, ..., m
x_1j + x_2j + ... + x_mj = b_j;  j = 1, 2, ..., n
x_ij ≥ 0

It is a linear programming problem in mn variables with (m + n) constraints.
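The sketch below solves a small hypothetical instance (2 warehouses, 3 stores, made-up costs and amounts) with scipy.optimize.linprog, flattening x_ij row-major:

```python
# A hypothetical transportation instance; x is flattened as x[i*n + j].
import numpy as np
from scipy.optimize import linprog

m, n = 2, 3
cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 3.0, 7.0]])      # c_ij: per-unit shipping cost (made up)
a = [60, 50]                            # supply at each warehouse
b = [30, 40, 30]                        # demand at each store (total <= total supply)

# Supply rows: x_i1 + ... + x_in <= a_i
A_ub = np.zeros((m, m * n))
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0

# Demand columns: x_1j + ... + x_mj = b_j
A_eq = np.zeros((n, m * n))
for j in range(n):
    A_eq[j, j::n] = 1.0

res = linprog(cost.ravel(), A_ub=A_ub, b_ub=a, A_eq=A_eq, b_eq=b)
print(res.x.reshape(m, n))              # optimal shipment plan
print(res.fun)                          # minimum total cost
```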


(2) The Diet Problem
Suppose we are given the nutrient content of a number of different foods, the minimum daily requirement for each nutrient, and the quantity of each nutrient contained in one unit of each food being considered. Since we know the cost per ounce of each food, the problem is to determine the diet that satisfies the minimum daily requirements of the nutrients at minimum cost.
Let us define
m = the number of nutrients
n = the number of foods
a_ij = the quantity (mg) of the ith nutrient per ounce (oz) of the jth food
b_i = the minimum daily quantity of the ith nutrient
c_j = the cost per ounce of the jth food
x_j = the quantity (oz) of the jth food to be purchased
The total amount of the ith nutrient contained in all the purchased foods cannot be less than the minimum daily requirement. Therefore we have

a_i1 x1 + a_i2 x2 + ... + a_in xn = ∑_{j=1}^{n} a_ij x_j ≥ b_i;  i = 1, 2, ..., m

The total cost of all purchased foods is given by

z = ∑_{j=1}^{n} c_j x_j

Now our problem is to minimize the cost z = ∑_{j=1}^{n} c_j x_j subject to the constraints

∑_{j=1}^{n} a_ij x_j ≥ b_i;  i = 1, 2, ..., m  and
x_j ≥ 0

This is a linear programming problem.
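A minimal sketch with made-up numbers (2 nutrients, 3 foods); since linprog accepts only ≤ rows, each ≥ requirement is multiplied by −1:

```python
# A small hypothetical diet instance solved with scipy.optimize.linprog.
from scipy.optimize import linprog

A = [[2.0, 1.0, 3.0],     # a_ij: mg of nutrient i per oz of food j (made up)
     [1.0, 4.0, 2.0]]
b = [20.0, 25.0]          # minimum daily mg of each nutrient
c = [0.6, 0.4, 0.9]       # cost per oz of each food

A_ub = [[-v for v in row] for row in A]   # flip each ">=" row into "<="
b_ub = [-v for v in b]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)    # x_j >= 0 is linprog's default bound
print(res.x, res.fun)                     # cheapest diet and its cost
```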

Feasible Solution:
Any set of values of the variables x_j which satisfies the constraints ∑_{j=1}^{n} a_ij x_j {≤, =, ≥} b_i, where the a_ij and b_i are constants, is called a solution to the linear programming problem, and any solution which also satisfies the non-negativity restrictions x_j ≥ 0 is called a feasible solution.

Optimal Feasible Solution

In general a linear programming problem has an infinite number of feasible solutions, and out of all these solutions we must find one feasible solution which optimizes the objective function z = ∑_{j=1}^{n} c_j x_j; such a solution is called an optimal feasible solution.

In other words, any feasible solution which satisfies the following conditions is called an optimal feasible solution:
(i) ∑_{j=1}^{n} a_ij x_j {≤, =, ≥} b_i
(ii) x_j ≥ 0
(iii) it optimizes the objective function z = ∑_{j=1}^{n} c_j x_j.

Corner Point Feasible Solution:

A feasible solution which does not lie on the line segment connecting any other two feasible solutions is called a corner point feasible solution.
Properties:
(i) If there is exactly one optimal solution of the linear programming problem, then it is a corner point feasible solution.
(ii) If there is more than one optimal solution of the given problem, then at least two of them are adjacent corner point feasible solutions.
(iii) In a linear programming problem there are only a finite number of corner points.
(iv) If a corner point feasible solution is at least as good as all of its adjacent corner point feasible solutions, then it is at least as good as all other feasible solutions.
Methods for Solving Linear Programming Problems
(1) Graphical Method
(2) Algebraic Method
(3) Simplex Method
Graphical Method:
The graphical method of solving a linear programming problem involves two basic steps.
Step 1: Determine the feasible solution space.
We represent the values of the variable x1 on the X axis and the corresponding values of the variable x2 on the Y axis. Any point lying in the first quadrant satisfies x1 ≥ 0 and x2 ≥ 0. The easiest way of accounting for the remaining constraints is to replace the inequalities with equations and then plot the resulting straight lines. An example is worked below.

Next we consider the effect of each inequality. All the inequality does is divide the (x1, x2)-plane into two half-spaces, one on each side of the plotted line: one side satisfies the inequality and the other does not. For a ≤ constraint of the type considered here, any point lying on or below the line satisfies the inequality. A convenient procedure for determining the feasible side is to use the origin (0, 0) as a reference point.
Step 2: Determine the optimal solution.
Problem: Find the non-negative values of the variables x1 and x2 which satisfy the constraints
3x1 + 5x2 ≤ 15
5x1 + 2x2 ≤ 10
and which maximize the objective function z = 5x1 + 3x2.
Solution: We introduce an x1x2 co-ordinate system. Any point lying in the first quadrant has x1, x2 ≥ 0. Now we draw the straight lines 3x1 + 5x2 = 15 and 5x1 + 2x2 = 10 on the graph. Any point lying on or below the line 3x1 + 5x2 = 15 satisfies 3x1 + 5x2 ≤ 15. Similarly, any point lying on or below the line 5x1 + 2x2 = 10 satisfies the constraint 5x1 + 2x2 ≤ 10.

[Figure: the feasible region OBAC bounded by the axes and the lines 3x1 + 5x2 = 15 and 5x1 + 2x2 = 10, with corner points O(0, 0), B(0, 3), A(1.053, 2.368) and C(2, 0); the objective line z = 5x1 + 3x2 touches the region at A.]

The region OBAC contains the set of points satisfying both constraints and the non-negativity restrictions, so the points in this region are the feasible solutions. Now we wish to find the line with the largest value of z = 5x1 + 3x2 which has at least one point in common with the region of feasible solutions. That line is drawn in the graph above; it shows that the values of x1 and x2 at the point A are the required solution.
Here A is the intersection of the two constraint lines, so x1 = 20/19 ≈ 1.053 and x2 = 45/19 ≈ 2.368.
From the objective function we get the maximum value of z, which is given by
z = 5(1.053) + 3(2.368) = 12.37 (approximately).

Algebraic Method: In LP problems the constraints are generally not all equations. Since equations are easier to handle than inequalities, a simple conversion is used to turn the inequalities into equalities. Consider first the constraints having less-than-or-equal signs (≤). Any constraint of this category can be written as
a_h1 x1 + a_h2 x2 + ... + a_hn xn ≤ b_h   (1)
Let us introduce a new variable x_{n+h} ≥ 0, defined by
x_{n+h} = b_h − ∑_{j=1}^{n} a_hj x_j ≥ 0,
which converts the inequality into the equality
a_h1 x1 + a_h2 x2 + ... + a_hn xn + x_{n+h} = b_h   (2)
The new variable x_{n+h} is the difference between the amount of the resource available and the amount actually used; it is called a slack variable.
Next we consider the constraints having greater-than-or-equal signs (≥). A typical inequality of this set can be written as
a_k1 x1 + a_k2 x2 + ... + a_kn xn ≥ b_k   (3)
Introducing a new variable x_{n+k} ≥ 0, the inequality can be written as the equality
a_k1 x1 + a_k2 x2 + ... + a_kn xn − x_{n+k} = b_k   (4)
The variable x_{n+k} is called a surplus variable, because it is the difference between the amount actually produced and the minimum amount required.
Therefore, in the algebraic method for solving a linear programming problem, the LP problem with the original constraints is transformed into an LP problem whose constraints are simultaneous linear equations, by using slack and surplus variables.
Example: Consider the LP problem
Min: -x1 - 3x2
s.t. x1 - 2x2 ≤ 4
     -x1 + x2 ≥ 3
     x1, x2 ≥ 0
Introducing two new variables x3 and x4, the problem can be written as
Min: -x1 - 3x2 + 0.x3 + 0.x4
s.t. x1 - 2x2 + x3 = 4
     -x1 + x2 - x4 = 3
     x1, x2, x3, x4 ≥ 0
Here x3 is the slack variable and x4 is the surplus variable.
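A small sketch of this bookkeeping in NumPy: a +1 column is appended for the ≤ row and a −1 column for the ≥ row, turning both constraints into equations:

```python
# Appending slack/surplus columns to the constraint matrix of the example above.
import numpy as np

A = np.array([[1.0, -2.0],     # x1 - 2x2 <= 4
              [-1.0, 1.0]])    # -x1 + x2 >= 3
signs = ["<=", ">="]

cols = []
for h, s in enumerate(signs):
    e = np.zeros(len(signs))
    e[h] = 1.0 if s == "<=" else -1.0   # slack (+1) or surplus (-1)
    cols.append(e)

A_std = np.hstack([A, np.column_stack(cols)])
print(A_std)   # [[ 1. -2.  1.  0.], [-1.  1.  0. -1.]] -> equality constraints
```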
Effect of Introducing Slack and Surplus Variables
Suppose we have a linear programming problem P1:
Optimize
Z = c1 x1 + c2 x2 + ... + cn xn   (1)
subject to the conditions
a_h1 x1 + a_h2 x2 + ... + a_hn xn {≤, =, ≥} b_h;  h = 1, 2, ..., m   (2)
where one and only one of the signs in the bracket holds for each constraint.
The problem is converted to another linear programming problem P2:
Optimize
Z = c1 x1 + c2 x2 + ... + cn xn + 0.x_{n+1} + ... + 0.x_{n+m}   (3)
subject to the conditions
AX = b, i.e. a_h1 x1 + a_h2 x2 + ... + a_hn xn ± x_{n+h} = b_h;  h = 1, 2, ..., m   (4)
(the slack or surplus term being omitted for equality constraints), where A = (a_ij) is the coefficient matrix of the enlarged system and a_j (j = 1, 2, ..., n + m) is the jth column of A.

We claim that optimizing (3) subject to (4) with x_j ≥ 0 is completely equivalent to optimizing (1) subject to (2) with x_j ≥ 0.

To prove this, we first note that if we have any feasible solution to the original constraints, then our method of introducing slack and surplus variables yields a set of non-negative slack and surplus variables such that equation (4) is satisfied with all variables non-negative. Conversely, if we have a feasible solution to (4) with all variables non-negative, then its first n components yield a feasible solution to (2). Thus there is a one-to-one correspondence between the feasible solutions to the original set of constraints and the feasible solutions to the set of simultaneous linear equations. Now if

X* = (x1*, x2*, ..., x*_{n+m}) ≥ 0 is an optimal feasible solution to the linear programming problem P2, then the first n components of X*, that is (x1*, x2*, ..., xn*), form an optimal solution to P1; and by annexing the slack and surplus variables to any optimal solution to P1 we obtain an optimal solution to P2.

Therefore, we may conclude that if slack and surplus variables having zero cost are introduced to convert the original set of constraints into a set of simultaneous linear equations, the resulting problem is equivalent to the original problem.
Existence of Extreme Basic Feasible Solution: Reduction of any feasible solution to a basic feasible solution
Let us consider a linear programming problem with m linear equations in n unknowns,
AX = b
X ≥ 0
which has at least one feasible solution. Without loss of generality suppose that Rank(A) = m, and let X = (x1, x2, ..., xn) be a feasible solution. Further suppose that x1, x2, ..., xp > 0 and that x_{p+1}, x_{p+2}, ..., xn = 0, and let a1, a2, ..., ap be the respective columns of A corresponding to the variables x1, x2, ..., xp. If a1, a2, ..., ap are linearly independent then X is a basic feasible solution; in such a case p ≤ m. If p = m then, from the theory of systems of linear equations, the solution is a non-degenerate basic feasible solution.
If p < m, the system has a degenerate basic feasible solution with (m − p) of the basic variables equal to zero.

If a1, a2, ..., ap are linearly dependent, then there exist scalars α1, α2, ..., αp, with at least one αj positive, such that

∑_{j=1}^{p} αj aj = 0

Consider the following point X′ with

x′_j = x_j − θ0 αj;  j = 1, 2, ..., p
x′_j = 0;            j = p + 1, p + 2, ..., n

where θ0 = min_{j=1,2,...,p} { x_j / αj : αj > 0 } = x_k / α_k > 0.

If αj ≤ 0, then x′_j > 0, since both x_j and θ0 are positive. If αj > 0, then by the definition of θ0 we have x_j / αj ≥ θ0, so that x′_j = x_j − θ0 αj ≥ 0. Furthermore,

x′_k = x_k − θ0 α_k = x_k − (x_k / α_k) α_k = 0.

Hence X′ has at most (p − 1) positive components. Also, since αj = 0 for j > p,

AX′ = ∑_{j=1}^{n} aj x′_j
    = ∑_{j=1}^{n} aj (x_j − θ0 αj)
    = ∑_{j=1}^{n} aj x_j − θ0 ∑_{j=1}^{p} αj aj
    = b

Thus we have constructed a feasible solution X′ (since AX′ = b, X′ ≥ 0) with at most (p − 1) positive components. If the columns of A corresponding to these positive components are linearly independent, then X′ is a basic feasible solution. Otherwise the process is repeated; eventually a basic feasible solution (BFS) will be obtained.
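The argument above is constructive, so it can be coded directly. A sketch (NumPy assumed; reduce_to_bfs is a name chosen here, not from the text) that repeatedly applies the θ0-step until the support columns are independent:

```python
import numpy as np

def reduce_to_bfs(A, x, tol=1e-9):
    """Turn a feasible x (Ax = b, x >= 0) into a basic feasible solution."""
    x = np.asarray(x, dtype=float).copy()
    while True:
        P = np.flatnonzero(x > tol)                 # indices of positive x_j
        if np.linalg.matrix_rank(A[:, P]) == len(P):
            return x                                # support columns independent
        alpha = np.linalg.svd(A[:, P])[2][-1]       # null vector: A_P @ alpha ~ 0
        if alpha.max() <= tol:
            alpha = -alpha                          # ensure some alpha_j > 0
        pos = alpha > tol
        theta0 = np.min(x[P][pos] / alpha[pos])     # theta_0 from the text
        x[P] -= theta0 * alpha                      # zeroes x_k, preserves Ax = b
        x[np.abs(x) < tol] = 0.0

# Example using the constraint matrix from the next section; x = (1, 2, 3, 1)
# is feasible since 1 + 2 + 3 = 6 and 2 + 1 = 3.
A = np.array([[1.0, 1, 1, 0],
              [0.0, 1, 0, 1]])
print(reduce_to_bfs(A, [1, 2, 3, 1]))   # a BFS, e.g. [3. 3. 0. 0.]
```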
Example: Consider the following inequalities
x1 + x2 ≤ 6
x2 ≤ 3
x1, x2 ≥ 0
Find the basic solutions, the BFS and the extreme points.
Solution: By introducing slack variables x3 and x4, the problem is put into the following standard format
x1 + x2 + x3 = 6
x2 + x4 = 3
x1, x2, x3, x4 ≥ 0
So, the constraint matrix A is given by;
1 1 1 0 6
A=   = (a1 , a 2 , a 3 , a 4 ) , b=   Rank(A) = 2
0 1 0 1  3
Page | 8
Therefore, the basic solutions corresponding to finding a 2  2 basis B. Following are the
possible ways of extracting B out of A

 1 1 -1  1 -1  1 -1 6   3   x3   0 
(i) B=(a1, a 2 ) =   , B =  , x B =B b= 
-1
  =   , x n =  x  =  0 
 0 1 0 1   0 1  3   3   4  

1 1
(ii) B=(a1 , a 3 )=   , Since |B|=0, it is not possible to find B and hence x B
-1

0 0

1 0 -1  1 0  x1  -1  1 0  6   6   x2   0 
(iii) B=(a1, a 4 )=   ; B =   x B =   =B b=    =   x n =  = 
0 1 0 1  x4   0 1  3   3   x3   0 

1 1  -1  0 1   x 2  -1  0 1  6   3   x1   0 
(iv) B=(a 2 , a 3 )=   B =  x B =  x  =B b=  1 1 3  =  3  x n =  x  =  0 
1 0   1 1  3       4  

1 0  -1  1 0  x 2  -1  1 0  6   6   x1   0 
(v) B=(a 2 , a 4 )=   ; B =   ; x B =   =B b=    =   x n =  = 
1 1   -1 1   x4   -1 1  3   -3   x3   0 

1 0 -1  1 0  x 3  -1  1 0  6   6   x1   0 
(vi) B=(a 3 , a 4 )=   ; B =  ; x B =  x  =B b=  0 1  3  =  3  x n =  x  =  0 
0 1 0 1  4       2  

Hence we have the following five basic solutions:

X1 = (3, 3, 0, 0); X2 = (6, 0, 0, 3); X3 = (0, 3, 3, 0); X4 = (0, 6, 0, -3); X5 = (0, 0, 6, 3)

All of these except X4 are BFS; X4 violates the non-negativity restrictions. The basic solutions belong to a four dimensional space. Projecting the basic feasible solutions onto the (x1, x2) space gives rise to the following four points:

(3, 3), (6, 0), (0, 3), (0, 0)

From the graphical representation the extreme points are (0, 0), (0, 3), (3, 3) and (6, 0), which are the same as the BFS. Therefore the extreme points are precisely the BFS. The number of BFS is 4, which is less than the maximum possible number of bases, C(4, 2) = 6.
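The whole table can be reproduced mechanically; a sketch that enumerates every 2 × 2 basis with NumPy and flags which basic solutions are feasible:

```python
# Enumerating all 2x2 bases of A = (a1 a2 a3 a4) to reproduce the cases above.
import itertools
import numpy as np

A = np.array([[1.0, 1, 1, 0],
              [0.0, 1, 0, 1]])
b = np.array([6.0, 3.0])

for cols in itertools.combinations(range(4), 2):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        print(cols, "singular: no basic solution")
        continue
    x = np.zeros(4)
    x[list(cols)] = np.linalg.solve(B, b)           # basic variables x_B
    print(cols, x, "BFS" if (x >= 0).all() else "infeasible (negative entry)")
```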

The Simplex Method:


General Mathematical Formulation for Linear Programming
Let us define the objective function which is to be optimized:
z = c1 x1 + c2 x2 + ... + cn xn
We have to find the values of the decision variables x1, x2, ..., xn subject to the following m constraints:

a11 x1 + a12 x2 + ... + a1n xn (≤, =, ≥) b1
a21 x1 + a22 x2 + ... + a2n xn (≤, =, ≥) b2
..................
am1 x1 + am2 x2 + ... + amn xn (≤, =, ≥) bm

and

x_j ≥ 0;  j = 1, 2, ..., n

The above formulation can be written in the following compact form by using the summation sign:

Optimize (maximize or minimize) z = ∑_{j=1}^{n} c_j x_j

subject to the conditions

∑_{j=1}^{n} a_ij x_j (≤, =, ≥) b_i;  i = 1, 2, ..., m

and x_j ≥ 0;  j = 1, 2, ..., n

The constants c_j; j = 1, 2, ..., n are called the cost coefficients; the constants b_i; i = 1, 2, ..., m are called the stipulations; and the constants a_ij; i = 1, 2, ..., m, j = 1, 2, ..., n are called the structural coefficients. In matrix notation the above formulation can be written as:


Optimize z = CX
Subject to the conditions
AX (≤, =, ≥) B
with
C = (c1, c2, ..., cn), a 1 × n row vector;  X = (x1, x2, ..., xn)ᵀ, an n × 1 column vector;  A = (a_ij), an m × n matrix;  B = (b1, b2, ..., bm)ᵀ, an m × 1 column vector.

Here A is called the coefficient matrix, X the decision vector, B the requirement vector and C the cost vector of the linear programming problem.
The Standard Form of LP Problem
The use of basic solutions to solve the general LP model requires putting the problem in standard form. The following are the characteristics of the standard form:
(i) All the constraints are expressed as equations, except the non-negativity restrictions on the decision variables, which remain inequalities.
(ii) The right-hand side of each constraint equation is non-negative.

(iii) All the decision variables are non-negative
(iv) The objective function may be of the maximization or the minimization type
Conversion of Inequalities into Equations:
An inequality constraint of the type ≤ (or ≥) can be converted into an equation by adding a variable to (or subtracting a variable from) the left-hand side of the constraint. These new variables are called slack variables, or simply slacks. They are added if the constraints are of the ≤ type and subtracted if the constraints are of the ≥ type. Since in the ≥ case the subtracted variable represents the surplus of the left-hand side over the right-hand side, it is commonly known as a surplus variable and is in fact a negative slack.
For example,
x1 + x2 ≤ b1
is equivalent to
x1 + x2 + s1 = b1,  s1 ≥ 0.
Similarly, x1 + x2 ≥ b2
is equivalent to
x1 + x2 − s1 = b2,  s1 ≥ 0.
The general LP problem discussed above can be expressed in the following standard form:

Optimize z = ∑_{j=1}^{n} c_j x_j

subject to the conditions

∑_{j=1}^{n} a_ij x_j ± s_i = b_i;  i = 1, 2, ..., m
x_j ≥ 0;  j = 1, 2, ..., n

and

s_i ≥ 0;  i = 1, 2, ..., m

In matrix notation, the general LP problem can be written in the following standard form:
Optimize z = CX
Subject to the conditions
AX ± S = B
X ≥ 0
S ≥ 0
Example: Express the following LP problem in standard form:
Maximize z = 3x1 + 2x2
subject to the conditions
2x1 + x2 ≤ 2
3x1 + 4x2 ≥ 12
x1, x2 ≥ 0
Solution: Introducing a slack variable s1 and a surplus variable s2, the problem can be expressed in standard form as follows:
Maximize z = 3x1 + 2x2
subject to the conditions
2x1 + x2 + s1 = 2
3x1 + 4x2 − s2 = 12
x1, x2, s1, s2 ≥ 0
Conversion of an Unrestricted Variable into Non-negative Variables
An unrestricted variable x_j can be expressed in terms of two non-negative variables by using the substitution x_j = x_j⁺ − x_j⁻;  x_j⁺, x_j⁻ ≥ 0.
For example, if x_j = −10, then x_j⁺ = 0 and x_j⁻ = 10; if x_j = 10, then x_j⁺ = 10 and x_j⁻ = 0.
The substitution is carried out in all the constraints and in the objective function. After solving the problem in terms of x_j⁺ and x_j⁻, the value of the original variable x_j is determined by back substitution.


Example: Express the following linear programming problem in standard form:
Maximize z = 3x1 + 2x2 + 5x3
subject to
2x1 − 3x2 ≤ 3
x1 + 2x2 + 3x3 ≥ 5
3x1 + 2x3 ≤ 2
x1, x2 ≥ 0, x3 unrestricted.
Solution: Here x1 and x2 are restricted to be non-negative while x3 is unrestricted. Let us express it as x3 = x3⁺ − x3⁻, where x3⁺ ≥ 0 and x3⁻ ≥ 0. Now, introducing slack and surplus variables, the problem can be written in standard form as follows:

Maximize z = 3x1 + 2x2 + 5(x3⁺ − x3⁻)

subject to the conditions

2x1 − 3x2 + s1 = 3
x1 + 2x2 + 3x3⁺ − 3x3⁻ − s2 = 5
3x1 + 2x3⁺ − 2x3⁻ + s3 = 2
x1, x2, x3⁺, x3⁻, s1, s2, s3 ≥ 0
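Under the sign reading chosen above (≤, ≥, ≤), the standard-form system can be checked numerically; the sketch below verifies that a hand-picked feasible point, together with its slack and surplus values, satisfies the equations:

```python
# Checking the standard-form system  A_std @ [x1,x2,x3p,x3m,s1,s2,s3] = b.
import numpy as np

A_std = np.array([
    [2, -3, 0,  0, 1,  0, 0],   # 2x1 - 3x2 + s1 = 3
    [1,  2, 3, -3, 0, -1, 0],   # x1 + 2x2 + 3(x3p - x3m) - s2 = 5
    [3,  0, 2, -2, 0,  0, 1],   # 3x1 + 2(x3p - x3m) + s3 = 2
], dtype=float)
b = np.array([3.0, 5.0, 2.0])

# One feasible point: x1 = 0, x2 = 1, x3 = 1 (so x3p = 1, x3m = 0), with the
# corresponding slack/surplus values s1 = 6, s2 = 0, s3 = 0.
v = np.array([0, 1, 1, 0, 6, 0, 0], dtype=float)
print(np.allclose(A_std @ v, b))   # True: the converted system holds
```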

Conversion of Maximization to Minimization:

The maximization of a function f(x1, x2, ..., xn) is equivalent to the minimization of −f(x1, x2, ..., xn), in the sense that both problems yield the same optimal values of x1, x2, ..., xn; for example, maximizing z = 3x1 + 2x2 is the same as minimizing −z = −3x1 − 2x2 and negating the optimal objective value.
