
Module 9

Lecture 5

This lecture introduces another special class of linear programming problems,
namely the assignment problem (in short, AP). Traditionally, an AP is described in
terms of assigning n persons to n jobs in an optimal way. Let us explain this a bit more. Suppose
we have n persons denoted by P1 , P2 , . . . , Pn to do n jobs denoted by J1 , J2 , . . . , Jn , such that
each person is capable of doing any job (though later we shall see that this assumption can
be relaxed) and any job can be assigned to any person. But the assignment is to be done in
such a way that each person is assigned exactly one complete job, and each job is
assigned to only one person and cannot be shared. In other words, the assignment can be seen
as a one-one onto (bijective) map between two sets of nodes, one for persons and the other for jobs. For example,
suppose we have two persons P1 and P2 and two jobs J1 and J2 to be assigned to them. Initially,
the network looks as follows.

The possible assignments are

Note that no other arrangement is possible, so there are a total of 2! = 2 assignments. If we have 3 persons and
3 jobs then the possible arrangements are the following

Copyright © Reserved IIT Delhi

P1 → J1    P1 → J1    P1 → J2    P1 → J2    P1 → J3    P1 → J3
P2 → J2    P2 → J3    P2 → J1    P2 → J3    P2 → J2    P2 → J1
P3 → J3    P3 → J2    P3 → J3    P3 → J1    P3 → J1    P3 → J2

basically all possible permutations of {1, 2, 3}, of which there are 3! = 6. In general, if we have n persons and n jobs
then we have n! assignments. To choose one, we need to define an “optimal” assignment. For
that, we assume that doing any job by any person (or even machine) incurs some cost. So,
let us take cij as the cost of doing the j th job by the ith person. The cost matrix so worked out is C = [cij ]n×n.

So, an optimal assignment means the one that yields the lowest total cost of completing all jobs by
the persons.
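In principle, the optimal assignment can be found by brute force over all n! permutations. A minimal sketch in Python (the cost matrix below is an illustrative example of our own, not from the lecture):

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Try all n! assignments; return (best permutation, best total cost).
    perm[i] is the index of the job given to person i."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost

# hypothetical 3x3 cost matrix for illustration
C = [[4, 2, 8],
     [4, 3, 7],
     [3, 1, 6]]
perm, value = brute_force_assignment(C)
```

Of course, this is feasible only for small n, since n! grows very fast; the Hungarian method discussed later solves the problem in polynomial time.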

To make a mathematical model of it, we introduce the decision variables

    xij = 1, if the ith person is assigned the j th job,
          0, otherwise,

where i, j = 1, . . . , n.
 
Let X = [xij ]n×n =

    ( x11  x12  · · ·  x1n )
    ( x21  x22  · · ·  x2n )
    (  ⋮     ⋮           ⋮  )
    ( xn1  xn2  · · ·  xnn )

As only one person is to be assigned only one job and vice versa, if there is a 1 in the ith row and
j th column of the matrix X then all other entries in that row and that column must be zero. The final
AP model is as follows:

(AP)   Min   Σ_{i=1}^{n} Σ_{j=1}^{n} cij xij

       subject to   Σ_{i=1}^{n} xij = 1,   j = 1, 2, . . . , n,
                    Σ_{j=1}^{n} xij = 1,   i = 1, 2, . . . , n,
                    xij ∈ {0, 1}, ∀ (i, j).

The above problem is an integer linear program, in fact a binary linear program. But, as we
shall see shortly, (AP) can be converted into a linear program. For this, we
need the concept of a “doubly stochastic matrix”.

Definition 9.5.1: An n×n matrix M = (mij ) is called a doubly stochastic matrix (DSM) if its
entries are non-negative and all its row sums and column sums are equal to one, that is,

    Σ_{i=1}^{n} mij = 1, ∀ j,   and   Σ_{j=1}^{n} mij = 1, ∀ i.

For example,

    ( 2/3  1/3 )        ( 4/7  2/7  1/7 )        ( 1  0 )
    ( 1/3  2/3 ),       ( 1/7  4/7  2/7 ),       ( 0  1 )
                        ( 2/7  1/7  4/7 )

are examples of DSMs.
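The defining conditions are easy to check mechanically; here is a small Python check written for this note (exact fractions avoid floating-point issues):

```python
from fractions import Fraction as F

def is_doubly_stochastic(M):
    """Check that M is square with non-negative entries and that
    every row sum and every column sum equals 1."""
    n = len(M)
    if any(len(row) != n for row in M):
        return False                      # must be square
    if any(v < 0 for row in M for v in row):
        return False                      # non-negative entries
    rows_ok = all(sum(row) == 1 for row in M)
    cols_ok = all(sum(M[i][j] for i in range(n)) == 1 for j in range(n))
    return rows_ok and cols_ok

# the two example matrices from the text
M1 = [[F(2, 3), F(1, 3)], [F(1, 3), F(2, 3)]]
M2 = [[F(4, 7), F(2, 7), F(1, 7)],
      [F(1, 7), F(4, 7), F(2, 7)],
      [F(2, 7), F(1, 7), F(4, 7)]]
```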

Another matrix of importance in the AP context is the ‘permutation matrix’.

Definition 9.5.2: An n×n matrix P = (pij ) is called a permutation matrix if its entries are 0
or 1, and all its row sums and column sums are 1.

Note that a permutation matrix is a special case of a DSM. Another important observation is that
if M denotes the set of all n×n DSMs and P denotes the set of all n×n permutation matrices, then M
is a convex set and P is the set of extreme points of M. We are skipping the proof here
because it is rather hard (urge to take it on face value!), and we encourage you to try it
for the case n = 2, when

    P = { (1 0; 0 1), (0 1; 1 0) },

where the rows of each matrix are separated by semicolons.
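For n = 2 the extreme-point claim can be seen directly: every 2×2 DSM has diagonal entries a and off-diagonal entries 1 − a, hence equals a·I + (1 − a)·P, a convex combination of the two permutation matrices above. A quick numerical check (our own illustration, not part of the lecture):

```python
def decompose_2x2_dsm(a):
    """Build the 2x2 DSM [[a, 1-a], [1-a, a]] as the convex combination
    a * I + (1 - a) * P_swap of the two 2x2 permutation matrices."""
    I = [[1, 0], [0, 1]]
    P_swap = [[0, 1], [1, 0]]
    return [[a * I[i][j] + (1 - a) * P_swap[i][j] for j in range(2)]
            for i in range(2)]

M = decompose_2x2_dsm(2 / 3)   # reproduces the first example DSM
```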

Now go back to problem (AP). Carefully observe that what we are looking for is nothing more than
a permutation matrix which yields an optimal objective value. Since P forms the extreme-point
set of M, and in any LPP (and so also in (AP)) an optimal solution (if it exists) lies at an extreme
point of the feasible set of the LPP, we can equivalently view the (AP) problem as an LPP with feasible
set M; on optimizing it we obtain an optimal solution which obviously lies in P. Finally we get the
following model for the assignment problem, which we continue to write as (AP).

(AP)   Min   z = Σ_{i=1}^{n} Σ_{j=1}^{n} cij xij

       subject to   Σ_{i=1}^{n} xij = 1,   j = 1, . . . , n,
                    Σ_{j=1}^{n} xij = 1,   i = 1, . . . , n,
                    xij ≥ 0, ∀ (i, j).


Note that the new model of (AP) can be treated as a special case of the transportation problem (TP) with m = n and
ai = bj = 1, ∀ (i, j). So we can always solve an (AP) problem by the method for a balanced (TP)
studied in earlier lectures. But still we do not advise solving (AP) by the (TP) method. The
reason is simply that the basic feasible solutions of (AP) (if solved by the method for solving (TP))
are highly degenerate: for (AP), the number of basic variables is m + n − 1 = n + n − 1 =
2n − 1, with only n entries equal to 1 and the remaining n − 1 basic variables equal to 0. Such a high level of degeneracy in
any basic feasible solution of (AP) makes the (TP) solving method inefficient for solving (AP). We
thus need to relook at the (AP) problem and think of a better method for solving it.

The following lemma will help us in designing an algorithm for (AP).

Lemma 5.1: Let AP(C) denote an assignment problem (AP) with cost matrix C, and let
Ĉ = [ĉij ], where ĉij = cij + αi + βj , αi ∈ R, βj ∈ R, i = 1, . . . , n, j = 1, . . . , n. Then the assignment
problems AP(C) and AP(Ĉ) have the same optimal assignments.

Proof: Note that

    Σ_{i=1}^{n} Σ_{j=1}^{n} ĉij xij = Σ_{i=1}^{n} Σ_{j=1}^{n} (cij + αi + βj ) xij
                                    = Σ_{i=1}^{n} Σ_{j=1}^{n} cij xij + Σ_{i=1}^{n} αi Σ_{j=1}^{n} xij + Σ_{j=1}^{n} βj Σ_{i=1}^{n} xij
                                    = Σ_{i=1}^{n} Σ_{j=1}^{n} cij xij + Σ_{i=1}^{n} αi + Σ_{j=1}^{n} βj

for all X = [xij ] feasible for AP(C), using the constraints Σ_{j=1}^{n} xij = 1 and Σ_{i=1}^{n} xij = 1.

Since Σ_{i=1}^{n} αi and Σ_{j=1}^{n} βj are constants, free of [xij ], minimizing Σ_{i=1}^{n} Σ_{j=1}^{n} cij xij and
Σ_{i=1}^{n} Σ_{j=1}^{n} ĉij xij are equivalent tasks. It means both AP(C) and AP(Ĉ) have the same optimal
solutions, the only difference being in their optimal values, which differ by a constant, that is,

    Min Σ_{i=1}^{n} Σ_{j=1}^{n} cij xij = Min Σ_{i=1}^{n} Σ_{j=1}^{n} ĉij xij − V,

where V = Σ_{i=1}^{n} αi + Σ_{j=1}^{n} βj ∈ R, and the minima are taken over the feasible set of AP(C), which is the same as the feasible
set of AP(Ĉ), both equal to M (remember, the set of all n×n DSMs).

In view of the above lemma, we can always assume that cij ≥ 0, ∀ (i, j), in a given (AP) problem.
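The lemma is easy to test numerically: reduce a cost matrix by its row and column minima (taking αi, βj to be the negatives of those minima) and check that the brute-force optimal assignment is unchanged. A sketch with an illustrative matrix of our own:

```python
from itertools import permutations

def best_assignment(cost):
    """Return the permutation (job of person i) minimizing total cost."""
    n = len(cost)
    return min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))

def reduce_matrix(cost):
    """Subtract each row minimum, then each column minimum
    (an instance of the lemma's cost shift)."""
    n = len(cost)
    C = [[v - min(row) for v in row] for row in cost]
    for j in range(n):
        m = min(C[i][j] for i in range(n))
        for i in range(n):
            C[i][j] -= m
    return C

C = [[9, 2, 7], [6, 4, 3], [5, 8, 1]]     # hypothetical cost data
```

After the reduction, the optimal assignment is the same, and its cost in the reduced matrix is zero, which is exactly what the Hungarian method below exploits.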

We shall now discuss an algorithm called the “Hungarian Method” to solve (AP).

Hungarian Method
This method is a combinatorial algorithm which solves an (AP) in polynomial time. It was
developed and first published by Harold Kuhn in 1955 in the journal Naval Research Logistics Quarterly,
under the name Hungarian method, because the algorithm is largely based on the works
of two Hungarian mathematicians, Dénes Kőnig and Jenő Egerváry. We shall surely talk about this later,
but for the time being let us understand the working of the algorithm. Once through
with it, we will come back and see how the works of the two Hungarians are related to the method
we have learned.

The main idea in the Hungarian Method (HM) is to generate ‘enough’ zeros in the matrix C (again,
why??). There are plenty of questions that we shall not answer immediately; believe us, we will
have answers to all legitimate questions by the end of this lecture and the next. Patience pays.
The step-wise procedure of HM is as follows:

Step 1: Choose the smallest element in each row of C and subtract the same from all the
elements of the corresponding row. The resultant matrix C1 has at least one zero in each row.

Step 2: Choose the smallest element in each column of C1 and subtract the same from all
elements of the corresponding column. The resultant matrix C2 has at least one zero in each
row and in each column.

Step 3: Draw the minimum number of horizontal and vertical lines needed to cover all zeros in C2 . (Now
what is the significance of this step? Again, we urge you to put this question in your piggy bag of
questions on (AP).) Obviously this minimum number of lines, say r, satisfies r ≤ n. If r < n, go
to Step 4; else go to Step 5.

Step 4: Select the smallest element from the uncovered elements (remember, all zeros are
covered). Subtract this smallest element from all those elements which are not covered, add
it to all those elements which lie at the intersection of two covering lines,
and leave the rest of the elements as they are. (One may wonder what the purpose and
interpretation of this step is, but again we urge you to have patience; we shall definitely
find answers to all genuine queries arising in this algorithm.)

Go back to Step 3 with the modified assignment matrix.

Step 5: This step can be treated as the actual “assignment”.

(i) Starting with the first row of the matrix, examine each row one by one until a row containing
exactly one zero is found. Identify/mark this zero by encircling it. Now cross off all the other
zeros in the column in which the assignment is made. The latter step is obvious: if a job is
assigned to a person then that person is not available to do more jobs.

(ii) When the row examination is complete, adopt an identical procedure with the columns,
starting from the first column. Once a column is found with exactly one zero, encircle this zero
and cross off the other zeros in the corresponding row in which the assignment is made.

Continue these successive operations on rows and columns; if a situation arises in which
there is no row or column in the matrix with exactly one zero and not all zeros are crossed
off so far, then break the tie randomly by encircling any one of the uncrossed zeros and then
proceed. Finally, stop when all zeros are either assigned or crossed off.

Let us illustrate the above procedure by the following example.

(Example 9.5.1) Suppose four persons A,B,C,D are to be assigned four jobs J1 , J2 , J3 , J4 . The
cost matrix is as under.

The minimum element in each row is indicated in red on the right side.


Subtracting these from their corresponding rows we get the following matrix

The minimum element in each column is indicated in red at the bottom. So, subtracting these from
their respective column entries we get the following matrix. The steps below are self-explanatory.

The next step is to cover the zeros by horizontal and vertical lines only.


We need a minimum of 4 lines (try fewer!) to cover all zeros. So n = r = 4. We are ready to make the
assignment.

Thus the optimal assignment is

(Example 9.5.2) Consider the following cost matrix for assigning five jobs to five machines


Following the procedure laid down above, the steps below are self-explanatory.

Subtracting 2 (the least value among the uncovered elements in the above matrix) from all those
elements which are not covered, adding it to all those elements which are at the intersection of the
horizontal and vertical lines drawn above, and leaving all remaining elements as they are,
we obtain the following matrix.

Minimum uncovered element is 2


Now we can make the optimal assignment.

Thus, the optimal assignment is


Let us try to provide answers to a few of the questions posed in this lecture before we move on to the
next lecture.

1. Why is the above described method called the Hungarian method?

As already mentioned, the name acknowledges the contributions made by the two Hungarians,
D. Kőnig and J. Egerváry. To understand their contributions, we need a few concepts from
Graph Theory. These concepts are discussed here to provide insight into the close connection
between (AP) and a specific problem in Graph Theory. You can easily skip this part if you so wish, but
it is an exciting connection.

(Definition 9.5.3) A vertex cover of a graph G = (V, E) is a set Ve of vertices such that each
edge of the graph is incident to at least one vertex of the set Ve . Here, by a graph we mean a set
of vertices V connected by certain edges forming an edge set E.
Suppose the graph is as follows

Let Ve = {v1 , v2 , v5 }, that is, vertices marked in red

then each edge in G has at least one incident vertex in red.

The vertices in red and in green form a vertex cover of the graph given leftmost.

(Definition 9.5.4) Given a graph G = (V, E), a matching M in G is a set of pairwise non-
adjacent edges, that is, no two edges share a common vertex.

(Definition 9.5.5) A graph G = (V, E) is called a bipartite graph if its vertex set V can be
divided into two disjoint sets V1 and V2 such that every edge in G connects a vertex of V1 to a
vertex in V2 . Note that the networks in (TP) and (AP) are bipartite graphs.

Thus the (AP) under consideration possesses an underlying structure of a bipartite graph.

Now, these two famous mathematicians proved a milestone result, as follows.


(Theorem 9.5.1) (Kőnig–Egerváry theorem) In any bipartite graph, the maximum size of a
matching is equal to the minimum size of a vertex cover.
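On a small graph the theorem can be checked exhaustively. The snippet below (our own illustration, not from the lecture) brute-forces both quantities on a tiny bipartite graph with left vertices 'a', 'b', 'c' and right vertices 1, 2, 3:

```python
from itertools import combinations

# A small bipartite graph given as an edge list.
edges = [('a', 1), ('a', 2), ('b', 1), ('c', 3)]
vertices = {v for e in edges for v in e}

def max_matching_size(edges):
    """Largest set of pairwise non-adjacent edges (Definition 9.5.4)."""
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            used = [v for e in subset for v in e]
            if len(used) == len(set(used)):   # no two edges share a vertex
                return k
    return 0

def min_vertex_cover_size(edges, vertices):
    """Smallest vertex set touching every edge (Definition 9.5.3)."""
    for k in range(len(vertices) + 1):
        for cover in combinations(vertices, k):
            if all(u in cover or v in cover for (u, v) in edges):
                return k
```

For this graph both brute-force searches return 3, in agreement with the theorem.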

Here we urge you to observe that solving an assignment problem (AP) (i.e., optimally assign-
ing jobs to persons) is fundamentally equivalent to finding a maximum matching in a weighted
bipartite graph whose edge weights are the cij .

In view of Theorem 9.5.1, solving (AP) is equivalent to finding a minimum-size vertex cover in the
underlying weighted bipartite graph of (AP).

When we draw the horizontal and vertical lines to cover the ‘zeros’ in the resultant matrix of (AP)
in Step 3 of the Hungarian method, what precisely we are doing is finding a maximum
number of independent zeros (no two in the same row or column), because it is precisely this which defines the minimum
cover. In fact, a minimum cover corresponds to the maximum number of
independent zeros, and the moment we achieve it (i.e. when r = n = the maximum number of
independent zeros) we obtain a minimum cover and hence an optimal solution of (AP).

We will try to answer some other questions posed during the lecture in another class. Till then, keep
practising some more assignment problems; believe us, they are fun to solve.
