
LINEAR PROGRAMMING: INTRODUCTION TO LINEAR PROGRAMMING

AN INTRODUCTION TO OPTIMIZATION
LECTURE 10

Krishnendu Chandra
Indian Statistical Institute
North-East Centre at Tezpur

Fall 2019
Outline

1 Introduction

2 Problem Formulation

3 Simple Examples of Linear Programs

4 Standard Form Linear Programs

5 Basic Solutions

6 Properties of Basic Solutions

7 Geometric View of Linear Programs



Introduction
The goal of linear programming is to determine the values of
decision variables that maximize or minimize a linear objective
function, where the decision variables are subject to linear
constraints.
A linear programming problem is a special case of a general
constrained optimization problem.
In the general setting, the goal is to find a point that minimizes
the objective function and at the same time satisfies the
constraints. We refer to any point that satisfies the constraints
as a feasible point.
In a linear programming problem, the objective function is linear,
and the set of feasible points is determined by a set of linear
equations and/or inequalities.
In this lecture we study methods for solving linear programming
problems.
Problem Formulation
Formally, a linear program is an optimization problem of the form

minimize   c^T x
subject to Ax = b
           x ≥ 0,

where c ∈ R^n, b ∈ R^m, and A ∈ R^{m×n}. The vector inequality x ≥ 0
means that each component of x is nonnegative.
Several variations of this problem are possible; for example,
instead of minimizing, we can maximize, or the constraints may
be in the form of inequalities, such as Ax ≥ b or Ax ≤ b . We
also refer to these variations as linear programs.
In fact, as we shall see later, these variations can all be rewritten
into the standard form shown above.
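To make the formulation concrete, here is a minimal computational sketch, assuming NumPy and SciPy are available; the data c, A, and b below are arbitrary placeholders.

# Solve  minimize c^T x  subject to  Ax = b, x >= 0  with SciPy (placeholder data).
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 0.0]])
b = np.array([4.0, 1.0])

# bounds=(0, None) encodes the nonnegativity constraint x >= 0.
res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None))
print("minimizer:", res.x, "optimal value:", res.fun)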
Simple Examples of Linear Programs I

Example: A manufacturer produces four different products: X1, X2, X3, and
X4. There are three inputs to this production process: labor in
person-weeks, kilograms of raw material A, and boxes of raw material B.
Each product has different input requirements. In determining each week’s
production schedule, the manufacturer cannot use more than the available
amounts of labor and the two raw materials. The relevant information is
presented in the following table:

                             X1   X2   X3   X4   Available
  Person-weeks of labor       1    2    1    2       20
  Kilograms of material A     6    5    3    2      100
  Boxes of material B         3    4    9   12       75



Simple Examples of Linear Programs II
Every production decision must satisfy the restrictions on the
availability of inputs. These constraints can be written using the data
in the corresponding table above. In particular, we have

x1 + 2x2 + x3 + 2x4 ≤ 20
6x1 + 5x2 + 3x3 + 2x4 ≤ 100
3x1 + 4x2 + 9x3 + 12x4 ≤ 75.

Because negative production levels are not meaningful, we must impose the
following nonnegativity constraints on the production levels:

xi ≥ 0,   i = 1, 2, 3, 4.



Simple Examples of Linear Programs III

Now, suppose that one unit of product X1 sells for $6, and X2 , X3
and X4 sell for $4, $7, and $5, respectively. Then, the total revenue
for any production decision (x1 , x2 , x3 , x4 ) is

f (x1 , x2 , x3 , x4 ) = 6x1 + 4x2 + 7x3 + 5x4 .


The problem is then to maximize f subject to the given constraints
(the three inequalities and four nonnegativity constraints). Using
vector notation, the problem can be written in the compact form

maximize   c^T x
subject to Ax ≤ b
           x ≥ 0,
Simple Examples of Linear Programs IV

where

c^T = [6, 4, 7, 5],

A = \begin{bmatrix} 1 & 2 & 1 & 2 \\ 6 & 5 & 3 & 2 \\ 3 & 4 & 9 & 12 \end{bmatrix},
b = \begin{bmatrix} 20 \\ 100 \\ 75 \end{bmatrix}.
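For a numerical check of this example, here is a minimal sketch assuming NumPy and SciPy are available; since linprog minimizes, the objective vector is negated.

import numpy as np
from scipy.optimize import linprog

c = np.array([6.0, 4.0, 7.0, 5.0])          # revenue per unit of X1..X4
A = np.array([[1.0, 2.0, 1.0, 2.0],          # person-weeks of labor
              [6.0, 5.0, 3.0, 2.0],          # kilograms of material A
              [3.0, 4.0, 9.0, 12.0]])        # boxes of material B
b = np.array([20.0, 100.0, 75.0])            # available amounts

res = linprog(-c, A_ub=A, b_ub=b, bounds=(0, None))   # maximize c^T x
print("production levels:", res.x)
print("maximum revenue:", -res.fun)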



Simple Examples of Linear Programs I

Example: Assume that n different food types are available. The jth
food sells at a price cj per unit. In addition, there are m basic
nutrients. To achieve a balanced diet, you must receive at least bi
units of the ith nutrient per day. Assume that each unit of food j
contains aij units of the ith nutrient. Denote by xj the number of
units of food j in the diet. The objective is to select the xj to
minimize the total cost of the diet:

minimize c1 x1 + c2 x2 + · · · + cn xn



Simple Examples of Linear Programs II

subject to the nutritional constraints

a11 x1 + a12 x2 + · · · + a1n xn ≥ b1
a21 x1 + a22 x2 + · · · + a2n xn ≥ b2
        ...
am1 x1 + am2 x2 + · · · + amn xn ≥ bm ,

and the nonnegativity constraints

x1 ≥ 0, x2 ≥ 0, . . . , xn ≥ 0.



Simple Examples of Linear Programs III

In the more compact vector notation, this problem becomes

minimize   c^T x
subject to Ax ≥ b
           x ≥ 0.
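Here is a minimal sketch of the diet problem, assuming SciPy is available; the prices, nutrient contents, and requirements are made-up illustrative numbers, and the ≥ constraints are passed to linprog as −Ax ≤ −b.

import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0, 1.5])             # hypothetical unit prices c_j
A = np.array([[4.0, 2.0, 1.0],             # hypothetical nutrient contents a_ij
              [1.0, 3.0, 2.0]])
b = np.array([20.0, 15.0])                 # hypothetical daily requirements b_i

# linprog only accepts "<=" rows, so Ax >= b becomes -Ax <= -b.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=(0, None))
print("units of each food:", res.x)
print("minimum cost of the diet:", res.fun)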



Simple Examples of Linear Programs I

Example:
A manufacturer produces two different products, X1 and X2 , using
three machines: M1 , M2 and M3 . Each machine can be used for only
a limited amount of time. Production times of each product on each machine,
together with the time available on each machine, are given in the following
table:

             X1   X2   Available time
        M1    1    1         8
        M2    1    3        18
        M3    2    1        14



Simple Examples of Linear Programs II
The objective is to maximize the combined time of utilization of all
three machines. Every production decision must satisfy the
constraints on the available time. These restrictions can be written
down using data from the above table. In particular, we have

x1 + x2 ≤ 8,
x1 + 3x2 ≤ 18,
2x1 + x2 ≤ 14,

where x1 and x2 denote the production levels. The combined production time
of all three machines is

f (x1 , x2 ) = 4x1 + 5x2 .



Simple Examples of Linear Programs III
The problem in compact notation has the form

maximize   c^T x
subject to Ax ≤ b
           x ≥ 0,
where

c^T = [4, 5],

A = \begin{bmatrix} 1 & 1 \\ 1 & 3 \\ 2 & 1 \end{bmatrix},
b = \begin{bmatrix} 8 \\ 18 \\ 14 \end{bmatrix}.
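As with the earlier production example, a minimal sketch (assuming SciPy is available) can check this program numerically, again negating c because linprog minimizes.

import numpy as np
from scipy.optimize import linprog

c = np.array([4.0, 5.0])
A = np.array([[1.0, 1.0],
              [1.0, 3.0],
              [2.0, 1.0]])
b = np.array([8.0, 18.0, 14.0])

res = linprog(-c, A_ub=A, b_ub=b, bounds=(0, None))
print("production levels:", res.x)
print("combined machine time:", -res.fun)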



Simple Examples of Linear Programs I
Example: Consider the following linear program

maximize   c^T x
subject to Ax ≤ b
           x ≥ 0,
where

c^T = [1, 5],

A = \begin{bmatrix} 5 & 6 \\ 3 & 2 \end{bmatrix},
b = \begin{bmatrix} 30 \\ 12 \end{bmatrix}.



Simple Examples of Linear Programs II
Figure: Geometric solution of a linear program in R^2.



Simple Examples of Linear Programs

Geometrically, maximizing c^T x = x1 + 5x2 subject to the constraints can
be thought of as finding the straight line f = x1 + 5x2 that intersects the
shaded region and has the largest f.

The coordinates of the point of intersection will then yield a maximum
value of c^T x.

In our example, the point [0, 5]^T is the solution.
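This can be checked by brute force: the sketch below (assuming NumPy is available) intersects every pair of constraint boundaries, including the coordinate axes, keeps the feasible intersection points, and evaluates c^T x at each.

import itertools
import numpy as np

c = np.array([1.0, 5.0])
A = np.array([[5.0, 6.0],
              [3.0, 2.0]])
b = np.array([30.0, 12.0])

# Constraint boundaries a_i^T x = b_i together with the axes x1 = 0, x2 = 0.
lines = np.vstack([A, np.eye(2)])
rhs = np.concatenate([b, np.zeros(2)])

best = None
for i, j in itertools.combinations(range(4), 2):
    M = lines[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                                   # parallel boundaries
    x = np.linalg.solve(M, rhs[[i, j]])
    if np.all(x >= -1e-9) and np.all(A @ x <= b + 1e-9):
        if best is None or c @ x > c @ best:
            best = x

print("best corner point:", best)                  # expected: close to [0, 5]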



Standard Form Linear Programs I
We refer to a linear program of the form

minimize   c^T x
subject to Ax = b
           x ≥ 0,
as a linear program in standard form. Without loss of generality, we
assume that b ≥ 0. If a component of b is negative, say the ith
component, we multiply the ith constraint by −1 to obtain a positive
right-hand side.

Theorems and solution techniques for linear programs are usually stated for
problems in standard form.


Standard Form Linear Programs II

Other forms of linear programs can be converted to the standard form, as we
now show. If a linear program is in the form

minimize   c^T x
subject to Ax ≥ b
           x ≥ 0,
then by introducing surplus variables yi , we can convert the original
problem into the standard form



Standard Form Linear Programs III

minimize   c^T x
subject to ai1 x1 + ai2 x2 + . . . + ain xn − yi = bi ,   i = 1, . . . , m
           x1 ≥ 0, x2 ≥ 0, . . . , xn ≥ 0
           y1 ≥ 0, y2 ≥ 0, . . . , ym ≥ 0.

In more compact notation, the formulation above can be represented as

minimize   c^T x
subject to Ax − I_m y = [A, −I_m] [x^T, y^T]^T = b
           x ≥ 0, y ≥ 0,
Standard Form Linear Programs IV
where I_m is the m × m identity matrix. If, on the other hand, the
constraints have the form

Ax ≤ b
x ≥ 0,

then we introduce slack variables yi to convert the constraints into
the form

Ax + I_m y = [A, I_m] [x^T, y^T]^T = b
x ≥ 0, y ≥ 0,

where y is the vector of slack variables. Note that neither surplus nor
slack variables contribute to the objective function c^T x.
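Both conversions are mechanical, as the following sketch illustrates; to_standard_form is a hypothetical helper name, and the usage example reuses the machine-time data from the earlier example. For a maximization problem, one would also negate c before minimizing.

import numpy as np

def to_standard_form(c, A, b, sense):
    """Build standard-form data: sense="geq" appends surplus columns (-I_m),
    sense="leq" appends slack columns (+I_m)."""
    m, _ = A.shape
    sign = -1.0 if sense == "geq" else 1.0
    A_s = np.hstack([A, sign * np.eye(m)])
    c_s = np.concatenate([c, np.zeros(m)])   # surplus/slack variables cost nothing
    return c_s, A_s, b.copy()

c = np.array([4.0, 5.0])
A = np.array([[1.0, 1.0], [1.0, 3.0], [2.0, 1.0]])
b = np.array([8.0, 18.0, 14.0])
c_s, A_s, b_s = to_standard_form(c, A, b, "leq")
print(A_s)                                   # the block matrix [A, I_3]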
Standard Form Linear Programs I

Example: Consider the following optimization problem

maximize x2 − x1
subject to 3x1 = x2 − 5
|x2 | ≤ 2
x1 ≤ 0.

To convert the problem into a standard-form linear programming problem, we
perform the following steps:
1  Change the objective function to: minimize x1 − x2 .
2  Substitute x1 = −x1' , where x1' ≥ 0.
3  Write |x2 | ≤ 2 as x2 ≤ 2 and −x2 ≤ 2.



Standard Form Linear Programs II

4  Introduce slack variables x3 and x4 , and convert the inequalities above
   to x2 + x3 = 2 and −x2 + x4 = 2.
5  Write x2 = u − v , u, v ≥ 0.

Hence, we obtain

minimize   −x1' − u + v
subject to 3x1' + u − v = 5
           u − v + x3 = 2
           v − u + x4 = 2
           x1' , u, v , x3 , x4 ≥ 0.
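As a sanity check, the converted program can be solved numerically (assuming SciPy is available) and the original variables recovered via x1 = −x1' and x2 = u − v.

import numpy as np
from scipy.optimize import linprog

# Variables: z = (x1', u, v, x3, x4), all >= 0.
c = np.array([-1.0, -1.0, 1.0, 0.0, 0.0])        # minimize -x1' - u + v
A_eq = np.array([[3.0, 1.0, -1.0, 0.0, 0.0],      # 3x1' + u - v      = 5
                 [0.0, 1.0, -1.0, 1.0, 0.0],      #        u - v + x3 = 2
                 [0.0, -1.0, 1.0, 0.0, 1.0]])     #        v - u + x4 = 2
b_eq = np.array([5.0, 2.0, 2.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x1p, u, v = res.x[:3]
print("x1 =", -x1p, " x2 =", u - v)               # expected: x1 = -1, x2 = 2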



Basic Solutions I
We have seen in the previous section that any linear programming
problem involving inequalities can be converted to standard form, that
is, a problem involving linear equations with nonnegative variables:

minimize   c^T x
subject to Ax = b
           x ≥ 0,

where c ∈ R^n, A ∈ R^{m×n}, b ∈ R^m, m < n, rank A = m, b ≥ 0. In
the following discussion we only consider linear programming
problems in standard form. Consider the system of equalities

Ax = b,

where rank A = m.
Basic Solutions II
In dealing with this system of equations, we frequently need to
consider a subset of columns of the matrix A.
For convenience, we often reorder the columns of A so that the
columns we are interested in appear first.
Specifically, let B be a square matrix whose columns are m
linearly independent columns of A.
If necessary, we reorder the columns of A so that the columns in
B appear first: A has the form A = [B , D ], where D is an
m × (n − m) matrix whose columns are the remaining columns
of A.
The matrix B is nonsingular, and thus we can solve the equation

B x_B = b

for the m-vector x_B . The solution is x_B = B^{-1} b.
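In code, computing a basic solution amounts to selecting the basic columns and solving one m × m system, as in the sketch below; basic_solution is a hypothetical helper name, and the small system at the end is made-up data.

import numpy as np

def basic_solution(A, b, basic_cols):
    """Basic solution of Ax = b whose basis consists of the given columns of A."""
    m, n = A.shape
    cols = list(basic_cols)
    B = A[:, cols]                         # must be m x m and nonsingular
    x = np.zeros(n)
    x[cols] = np.linalg.solve(B, b)        # x_B = B^{-1} b
    return x

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 4.0])
print(basic_solution(A, b, (0, 2)))        # -> [3. 0. 4.]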
Basic Solutions III

Let x be the n-vector whose first m components are equal to x_B
and the remaining components are equal to zero; that is,
x = [x_B^T, 0^T]^T. Then, x is a solution to Ax = b.



Basic Solutions IV
Definition
We call x = [x_B^T, 0^T]^T a basic solution to Ax = b with respect to
the basis B . We refer to the components of the vector x_B as
basic variables and the columns of B as basic columns.
If some of the basic variables of a basic solution are zero, then
the basic solution is said to be a degenerate basic solution.
A vector x satisfying Ax = b , x ≥ 0, is said to be a feasible
solution.
A feasible solution that is also basic is called a basic feasible
solution.
If the basic feasible solution is a degenerate basic solution, then
it is called a degenerate basic feasible solution.
Note that in any basic feasible solution, x_B ≥ 0.



Basic Solutions
Example: Consider the equation Ax = b with

A = [a1, a2, a3, a4] = \begin{bmatrix} 1 & 1 & -1 & 4 \\ 1 & -2 & -1 & 1 \end{bmatrix},
b = \begin{bmatrix} 8 \\ 2 \end{bmatrix},

where ai denotes the ith column of the matrix A.


x = [6, 2, 0, 0]^T is a basic feasible solution with respect to the
basis B = [a1 , a2 ].
x = [0, 0, 0, 2]^T is a degenerate basic feasible solution with
respect to the basis B = [a3 , a4 ] (as well as [a1 , a4 ] and
[a2 , a4 ]).
x = [3, 1, 0, 1]^T is a feasible solution that is not basic.
x = [0, 2, −6, 0]^T is a basic solution with respect to the basis
B = [a2 , a3 ], but is not feasible.
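These statements can be reproduced by enumerating all candidate bases, as in the following sketch: it tries every choice of two columns of A, skips singular choices, and labels each basic solution as feasible and/or degenerate.

import itertools
import numpy as np

A = np.array([[1.0, 1.0, -1.0, 4.0],
              [1.0, -2.0, -1.0, 1.0]])
b = np.array([8.0, 2.0])
m, n = A.shape

for cols in itertools.combinations(range(n), m):
    idx = list(cols)
    B = A[:, idx]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                                   # columns not independent
    x = np.zeros(n)
    x[idx] = np.linalg.solve(B, b)
    tags = []
    if np.all(x >= -1e-9):
        tags.append("feasible")
    if np.any(np.abs(x[idx]) < 1e-9):
        tags.append("degenerate")
    print("basis columns", cols, "x =", x, " ".join(tags))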
Properties of Basic Solutions
Definition
Any vector x that yields the minimum value of the objective
function c^T x over the set of vectors satisfying the constraints
Ax = b, x ≥ 0, is said to be an optimal feasible solution.
An optimal feasible solution that is basic is said to be an optimal
basic feasible solution.

Theorem
Fundamental Theorem of LP.
Consider a linear program in standard form.
1 If there exists a feasible solution, then there exists a basic
feasible solution.
2 If there exists an optimal feasible solution, then there exists an
optimal basic feasible solution.



Geometric View of Linear Programs
Theorem
Let Ω be the convex set consisting of all feasible solutions, that is, all
n-vectors x satisfying

Ax = b, x ≥ 0,
where A ∈ R^{m×n} , m < n. Then, x is an extreme point of Ω if and
only if x is a basic feasible solution to Ax = b , x ≥ 0.

From the above theorem it follows that the set of extreme points
of the constraint set Ω = {x : Ax = b , x ≥ 0} is equal to the
set of basic feasible solutions to Ax = b , x ≥ 0.
Combining this observation with the fundamental theorem of LP,
we can see that in solving linear programming problems we need
only examine the extreme points of the constraint set.
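This suggests a (very inefficient) brute-force method: enumerate all basic feasible solutions and keep the best one. The sketch below does this for a tiny standard-form problem; brute_force_lp is a hypothetical helper name, and the usage data are made up.

import itertools
import numpy as np

def brute_force_lp(c, A, b):
    """Minimize c^T x over Ax = b, x >= 0 by checking every basic feasible solution."""
    m, n = A.shape
    best_x, best_val = None, np.inf
    for cols in itertools.combinations(range(n), m):
        idx = list(cols)
        B = A[:, idx]
        if abs(np.linalg.det(B)) < 1e-12:
            continue                               # not a basis
        x = np.zeros(n)
        x[idx] = np.linalg.solve(B, b)
        if np.all(x >= -1e-9) and c @ x < best_val:
            best_x, best_val = x, c @ x
    return best_x, best_val

c = np.array([1.0, 2.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])
print(brute_force_lp(c, A, b))                     # all weight on the zero-cost column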

