Q.1:- What are the important features of Operations Research? Describe in detail the different phases of Operations Research.

Ans:- The important features of OR are:

(i) It is system oriented: OR studies the problem from the overall point of view of the organisation or situation, since the optimum result for one part of the system may not be optimum for some other part.
(ii) It imbibes an interdisciplinary team approach. Since no single individual can have thorough knowledge of all fast-developing scientific know-how, personalities from different scientific and managerial cadres form a team to solve the problem.
(iii) It makes use of scientific methods to solve problems.
(iv) OR increases the effectiveness of management decision making.
(v) It makes use of computers to solve large and complex problems.
(vi) It gives quantitative solutions.
(vii) It also considers the human factors.

Phases of Operations Research The scientific method in OR study generally

involves the following three phases:

(i) Judgment Phase: This phase consists of:

(a) Determination of the operation.
(b) Establishment of the objectives and values related to the operation.
(c) Determination of the suitable measures of effectiveness.
(d) Formulation of the problems relative to the objectives.

(ii) Research Phase: This phase utilizes:

(a) Operations and data collection for a better understanding of the problems.
(b) Formulation of hypotheses and models.
(c) Observation and experimentation to test the hypotheses on the basis of additional data.
(d) Analysis of the available information and verification of the hypotheses using pre-established measures of effectiveness.
(e) Prediction of various results and consideration of alternative methods.

(iii) Action Phase: It consists of making recommendations for the decision process by those who first posed the problem for consideration, or by anyone in a position to make a decision influencing the operation in which the problem occurred.

Q.2:- Describe a Linear Programming Problem in detail in canonical form.

Ans:- Linear Programming: The Linear Programming Problem (LPP) is a class of mathematical programming in which the functions representing the objectives and the constraints are linear. Here, by optimization, we mean either to maximize or minimize the objective function. The general linear programming model is usually defined as follows:

Maximize or Minimize

Z = c1x1 + c2x2 + ... + cnxn

subject to the constraints,

a11x1 + a12x2 + ... + a1nxn ~ b1
a21x1 + a22x2 + ... + a2nxn ~ b2
...
am1x1 + am2x2 + ... + amnxn ~ bm

and x1 ≥ 0, x2 ≥ 0, ..., xn ≥ 0,

where cj, bi and aij (i = 1, 2, ..., m; j = 1, 2, ..., n) are constants determined from the technology of the problem, and xj (j = 1, 2, ..., n) are the decision variables. Here ~ is either ≤ (less than or equal), ≥ (greater than or equal) or = (equal). In terms of the above formulation, the coefficients cj, aij, bi are interpreted physically as follows: if bi is the available amount of resource i, and aij is the amount of resource i that must be allocated to each unit of activity j, then cj is the worth per unit of activity j.

Canonical form:

The general Linear Programming Problem (LPP) defined above can always be put in the following form, which is called the canonical form:

Maximize Z = c1x1 + c2x2 + ... + cnxn

subject to

a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm

x1, x2, ..., xn ≥ 0.

The characteristics of this form are:

1) all decision variables are nonnegative;
2) all constraints are of the ≤ type;
3) the objective function is of the maximization type.
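A canonical-form LP can be checked numerically. The sketch below (assuming SciPy is available; the particular numbers are an illustrative example, not from the text) solves maximize Z = 3x1 + 5x2 subject to x1 ≤ 4, 2x2 ≤ 12, 3x1 + 2x2 ≤ 18. Since `linprog` minimizes, we pass the negated objective.

```python
# Minimal sketch: solving a canonical-form LP with SciPy.
# linprog minimises, so to maximise Z = 3x1 + 5x2 we minimise -Z.
from scipy.optimize import linprog

c = [-3, -5]                        # negated objective coefficients
A_ub = [[1, 0],                     # x1           <= 4
        [0, 2],                     #        2*x2  <= 12
        [3, 2]]                     # 3*x1 + 2*x2  <= 18
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)              # optimum x = (2, 6), Z = 36
```

Note that the nonnegativity condition x ≥ 0 is expressed through `bounds`, matching characteristic 1) of the canonical form.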

Any LPP can be put in the canonical form by the use of five elementary transformations:

1. The minimization of a function is mathematically equivalent to the maximization of the negative of that function. That is,

Minimize Z = c1x1 + c2x2 + ... + cnxn is equivalent to

Maximize (−Z) = −c1x1 − c2x2 − ... − cnxn.

2. Any inequality in one direction (≤ or ≥) may be changed to an inequality in the opposite direction (≥ or ≤) by multiplying both sides of the inequality by −1.

For example, 2x1 + 3x2 ≥ 5 is equivalent to −2x1 − 3x2 ≤ −5.

3. An equation can be replaced by two inequalities in opposite directions. For example, 2x1 + 3x2 = 5 can be written as 2x1 + 3x2 ≤ 5 and 2x1 + 3x2 ≥ 5, or as 2x1 + 3x2 ≤ 5 and −2x1 − 3x2 ≤ −5.

4. An inequality constraint with its left-hand side in absolute-value form can be changed into two regular inequalities. For example, |2x1 + 3x2| ≤ 5 is equivalent to 2x1 + 3x2 ≤ 5 and 2x1 + 3x2 ≥ −5, i.e. −2x1 − 3x2 ≤ 5.

5. A variable which is unconstrained in sign (i.e., it may be positive, negative or zero) is equivalent to the difference between two nonnegative variables. For example, if x is unconstrained in sign, then x = x⁺ − x⁻, where x⁺ ≥ 0, x⁻ ≥ 0.
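Transformations 1–3 can be mechanised. Below is a minimal sketch (the function name `to_canonical` and its input format are illustrative, not from the text) that rewrites an LP as a maximization with only ≤ constraints; absolute-value constraints (rule 4) and sign-unconstrained variables (rule 5) are deliberately left out.

```python
def to_canonical(sense, c, constraints):
    """Rewrite an LP as: maximise c.x subject to A x <= b, x >= 0.
    `constraints` is a list of (coeffs, relation, rhs) with relation
    one of '<=', '>=', '='.  Applies rules 1-3 above; rules 4 and 5
    (absolute values, free variables) are not handled in this sketch."""
    if sense == 'min':                      # rule 1: min c.x == max (-c).x
        c = [-v for v in c]
    A, b = [], []
    for coeffs, rel, rhs in constraints:
        if rel in ('<=', '='):              # keep the <= side as-is
            A.append(list(coeffs)); b.append(rhs)
        if rel in ('>=', '='):              # rules 2-3: multiply by -1
            A.append([-v for v in coeffs]); b.append(-rhs)
    return c, A, b

# Example: minimise x1 + 2x2 with 2x1 + 3x2 >= 5 and x1 + x2 = 4
c, A, b = to_canonical('min', [1, 2], [([2, 3], '>=', 5), ([1, 1], '=', 4)])
```

An '=' constraint contributes two rows (rule 3), so the example produces three ≤ constraints from the original two.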

Q.3:- What are the different steps needed to solve a system of equations by
the simplex method?

Ans:- To solve a problem by the Simplex Method:

1. Introduce slack variables (Si) for ≤ type constraints.
2. Introduce surplus variables (Si) and artificial variables (Ai) for ≥ type constraints.
3. Introduce only artificial variables for = type constraints.
4. The cost (Cj) of slack and surplus variables will be zero and that of artificial variables will be −M. Find Zj − Cj for each variable.
5. Slack and artificial variables form the basic variables for the first simplex table. Surplus variables never become basic variables in the first simplex table.
6. Zj = sum of [cost of each basic variable × its coefficient in the column of that variable]; Zj − Cj is Zj minus the profit or cost coefficient of the variable.
7. Select the most negative value of Zj − Cj. That column is called the key column. The variable corresponding to this column will become a basic variable in the next table.
8. Divide the solution quantities by the corresponding entries of the key column to get ratios, and select the minimum ratio. This row becomes the key row. The basic variable corresponding to this row will be replaced by the variable found in step 7.
9. The element lying on both the key column and the key row is called the pivotal element.
10. Ratios corresponding to negative or zero entries in the key column are not considered for determining the key row.
11. Once an artificial variable leaves the basis, its column is deleted from subsequent iterations.
12. For maximisation problems the decision variables' coefficients are the same as in the objective function. For minimisation problems the decision variables' coefficients have opposite signs compared to the objective function.
13. The objective coefficient of an artificial variable is always −M, for both maximisation and minimisation problems.
14. The process is continued until all Zj − Cj ≥ 0.
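The steps above can be sketched as a compact tableau simplex for the all-≤ case with nonnegative right-hand sides (so a slack basis is immediately feasible and no artificial variables or Big-M are needed). This is a minimal sketch, not a production solver; the function name and example data are illustrative.

```python
import numpy as np

def simplex_max(c, A, b):
    """Tableau simplex for: maximise c.x subject to A x <= b, x >= 0,
    with every b_i >= 0 (slack basis is feasible, no Big-M needed)."""
    A = np.asarray(A, float)
    m, n = A.shape
    # Tableau [A | I | b]; bottom row holds Zj - Cj and the current Z.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -np.asarray(c, float)
    basis = list(range(n, n + m))          # slacks form the first basis
    while True:
        col = int(np.argmin(T[-1, :-1]))   # key column: most negative Zj - Cj
        if T[-1, col] >= -1e-9:
            break                          # all Zj - Cj >= 0: optimal
        pos = T[:m, col] > 1e-9            # negative/zero entries give no ratio
        if not pos.any():
            raise ValueError("problem is unbounded")
        ratios = np.full(m, np.inf)
        ratios[pos] = T[:m, -1][pos] / T[:m, col][pos]
        row = int(np.argmin(ratios))       # key row: minimum ratio
        basis[row] = col                   # entering variable replaces leaving
        T[row] /= T[row, col]              # pivot on the pivotal element
        for r in range(m + 1):
            if r != row:
                T[r] -= T[r, col] * T[row]
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]

# Example: maximise Z = 3x1 + 5x2 with x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18
x, z = simplex_max([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
```

Steps 7, 8, 10 and 14 correspond directly to the key-column choice, the minimum-ratio test, the skipped nonpositive ratios, and the stopping rule in the loop.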

Q.4:- What do you understand by the transportation problem? What is the basic assumption behind the transportation problem? Describe the MODI method of solving a transportation problem.

Ans:- This model studies the minimization of the cost of transporting a commodity
from a number of sources to several destinations. The supply at each source and
the demand at each destination are known. The transportation problem involves
m sources, each of which has ai (i = 1, 2, ..., m) units of a homogeneous product available, and n destinations, each of which requires bj (j = 1, 2, ..., n) units of the product. Here ai and bj are positive integers. The cost cij of transporting one unit of the product from the ith source to the jth destination is given for each i and j. The objective is to develop an integral transportation schedule that meets all demands from the available supply at a minimum total transportation cost. The basic assumption is that the total supply and the total demand are equal (the balance condition).

The balance condition is guaranteed by creating either a fictitious (dummy) destination with a demand equal to the surplus, if total demand is less than total supply, or a fictitious (dummy) source with a supply equal to the shortage, if total demand exceeds total supply. The cost of transportation from every source to the fictitious destination, and from the fictitious source to every destination, is taken to be zero, so that the total cost of transportation remains the same.
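The dummy-row/column device can be written down directly. A minimal sketch (the function name `balance` is illustrative):

```python
def balance(supply, demand, cost):
    """Pad an unbalanced transportation problem with a dummy source or
    destination whose unit costs are all zero, so that
    total supply == total demand afterwards."""
    supply, demand = list(supply), list(demand)
    cost = [row[:] for row in cost]
    diff = sum(supply) - sum(demand)
    if diff > 0:                    # surplus supply: add a dummy destination
        demand.append(diff)
        for row in cost:
            row.append(0)           # zero cost to the fictitious destination
    elif diff < 0:                  # shortage: add a dummy source
        supply.append(-diff)
        cost.append([0] * len(demand))
    return supply, demand, cost

# Example: supply 50 vs demand 40 -> a dummy destination absorbs 10 units
s, d, c = balance([20, 30], [15, 25], [[4, 6], [5, 3]])
```

Because every dummy cost is zero, allocations to the dummy row or column change nothing in the total transportation cost, exactly as stated above.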

The Transportation Algorithm (MODI Method)

The first approximation to the transportation problem is always integral and therefore always a feasible solution. Rather than determining a first approximation by a direct application of the simplex method, it is more efficient to work with the transportation table. The transportation algorithm is the simplex method specialized to the format of this table; it involves:

(i) finding an integral basic feasible solution;
(ii) testing the solution for optimality;
(iii) improving the solution when it is not optimal;
(iv) repeating steps (ii) and (iii) until the optimal solution is obtained.

The solution to a transportation problem (T.P.) is obtained in two stages. In the first stage we find a basic feasible solution by any one of the following methods: (a) the North-West Corner rule, (b) the Matrix Minima (least cost) method, or (c) Vogel's approximation method. In the second stage we test the basic feasible solution for optimality, either by the MODI method or by the stepping-stone method.

Modified Distribution (MODI) Method / u-v Method

Step 1: Construct penalties for rows and columns by subtracting the least cost in each row/column from the next least cost.
Step 2: Select the highest penalty over all rows and columns. Enter that row/column, select the minimum-cost cell in it, and allocate min(ai, bj) there.
Step 3: Delete the row or column (or both) whose rim availability/requirement is met.
Step 4: Repeat steps 1 to 3 until all allocations are over. (Steps 1–4 construct the initial basic feasible solution by Vogel's approximation method.)
Step 5: For each allocated cell form the equation ui + vj = cij; set one of the dual variables ui/vj to zero and solve for the others.
Step 6: Use these values to find Δij = cij − ui − vj for the unallocated cells. If all Δij ≥ 0, the solution is optimal.
Step 7: If any Δij < 0, select the most negative cell and form a loop. The starting cell of the loop is marked +ve, and the other corners of the loop are marked −ve and +ve alternately. Examine the quantities allocated at the −ve places and select the minimum; add it at the +ve places and subtract it at the −ve places.
Step 8: Form the new table and repeat steps 5 to 7 until all Δij ≥ 0.
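The optimality test (solving ui + vj = cij over the allocated cells, then computing the net evaluations) can be sketched as follows. The function name is illustrative; `None` marks an empty cell, and a connected basis of m + n − 1 allocations is assumed.

```python
def modi_deltas(cost, alloc):
    """MODI optimality test: solve u_i + v_j = c_ij over the allocated
    (basic) cells with u_0 fixed at 0, then return
    delta_ij = c_ij - u_i - v_j for every empty cell.
    `alloc[i][j] is None` marks an empty cell; a connected basis of
    m + n - 1 allocations is assumed."""
    m, n = len(cost), len(cost[0])
    u, v = [None] * m, [None] * n
    u[0] = 0                                    # set one dual variable to zero
    basic = [(i, j) for i in range(m) for j in range(n)
             if alloc[i][j] is not None]
    changed = True
    while changed:                              # propagate u, v over the basis
        changed = False
        for i, j in basic:
            if u[i] is not None and v[j] is None:
                v[j] = cost[i][j] - u[i]; changed = True
            elif v[j] is not None and u[i] is None:
                u[i] = cost[i][j] - v[j]; changed = True
    return {(i, j): cost[i][j] - u[i] - v[j]
            for i in range(m) for j in range(n) if alloc[i][j] is None}

# Example: a North-West Corner solution for supply (20, 30), demand (15, 35)
deltas = modi_deltas([[4, 6], [5, 3]], [[15, 5], [None, 30]])
# every delta >= 0 means the current allocation is optimal
```

In the example, u = (0, −3), v = (4, 6), and the single empty cell has Δ = 5 − (−3) − 4 = 4 ≥ 0, so no improving loop exists.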

Q.5:- Describe the North-West Corner rule for finding the initial basic feasible solution of a transportation problem.

Ans:- North-West Corner Rule

Step 1: The first assignment is made in the cell occupying the upper left-hand (north-west) corner of the transportation table. The maximum feasible amount is allocated there, that is, x11 = min(a1, b1), so that either the capacity of origin O1 is used up or the requirement at destination D1 is satisfied, or both. This value of x11 is entered in the upper left-hand corner (small square) of cell (1, 1) in the transportation table.

Step 2: If b1 > a1, the capacity of origin O1 is exhausted but the requirement at destination D1 is still not satisfied, so that at least one more variable in the first column will have to take on a positive value. Move down vertically to the second row and make the second allocation, of magnitude x21 = min(a2, b1 − x11), in cell (2, 1). This either exhausts the capacity of origin O2 or satisfies the remaining demand at destination D1.

If a1 > b1, the requirement at destination D1 is satisfied but the capacity of origin O1 is not completely exhausted. Move right horizontally to the second column and make the second allocation, of magnitude x12 = min(a1 − x11, b2), in cell (1, 2). This either exhausts the remaining capacity of origin O1 or satisfies the demand at destination D2.

If b1 = a1, the capacity of origin O1 is completely exhausted and the requirement at destination D1 is completely satisfied. There is then a tie for the second allocation, and an arbitrary tie-breaking choice is made: make the second allocation of magnitude x12 = min(a1 − x11, b2) = 0 in cell (1, 2), or x21 = min(a2, b1 − x11) = 0 in cell (2, 1).

Step 3: Starting from the new north-west corner of the transportation table, satisfying destination requirements and exhausting origin capacities one at a time, move towards the lower right corner of the table until all the rim requirements are satisfied.
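The rule can be sketched in a few lines (a minimal illustration; ties are broken by moving down first, which produces exactly the degenerate zero allocation described above):

```python
def northwest_corner(supply, demand):
    """Initial basic feasible solution by the North-West Corner rule.
    Assumes a balanced problem (total supply == total demand)."""
    a, b = list(supply), list(demand)
    m, n = len(a), len(b)
    x = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        q = min(a[i], b[j])            # allocate x_ij = min(a_i, b_j)
        x[i][j] = q
        a[i] -= q
        b[j] -= q
        if a[i] == 0:                  # row capacity exhausted: move down
            i += 1
        else:                          # column requirement met: move right
            j += 1
    return x

# Example: three origins with supply (7, 9, 18), four destinations
# with demand (5, 8, 7, 14)
x = northwest_corner([7, 9, 18], [5, 8, 7, 14])
```

The allocation sweeps from cell (1, 1) down towards the lower right corner, as in Step 3, and yields m + n − 1 = 6 occupied cells here.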

Q.6:- Describe the Branch and Bound technique for solving an I.P.P.

Ans:- The Branch and Bound Technique: Sometimes a few or all of the variables of an IPP are constrained by their upper or lower bounds, or by both. The most general technique for the solution of such constrained optimization problems is the branch and bound technique. The technique is applicable to both pure (all-integer) IPPs and mixed IPPs. The technique for a maximization problem is discussed below.

Let the I.P.P. be:

Maximize z = c1x1 + c2x2 + ... + cnxn ... (1)

subject to the constraints

ai1x1 + ai2x2 + ... + ainxn ≤ bi, i = 1, 2, ..., m ... (2)

xj is integer valued, j = 1, 2, ..., r (r ≤ n) ... (3)

xj ≥ 0, j = r + 1, ..., n ... (4)

Further, let us suppose that for each integer-valued xj we can assign lower and upper bounds for the optimum value of the variable:

Lj ≤ xj ≤ Uj, j = 1, 2, ..., r ... (5)

The idea behind the branch and bound technique is the following. Consider any integer-valued variable xj, and let I be some integer value satisfying Lj ≤ I ≤ Uj − 1. Then clearly an optimum solution to (1) through (5) also satisfies either the linear constraint

xj ≥ I + 1 ... (6)

or the linear constraint

xj ≤ I ... (7)

To explain how this partitioning helps, let us assume that there were no integer restrictions (3), and suppose that solving the resulting L.P.P. (1), (2), (4) and (5) yields an optimal solution with, say, x1 = 1.66. Then we formulate and solve two L.P.P.s, each containing (1), (2) and (4), but with (5) for j = 1 modified to 2 ≤ x1 ≤ U1 in one problem and L1 ≤ x1 ≤ 1 in the other. Suppose further that each of these problems possesses an optimal solution satisfying the integer constraints (3).

Then the solution having the larger value of z is clearly optimum for the given I.P.P. However, it usually happens that one (or both) of these problems has no optimal solution satisfying (3), and thus some more computations are necessary. We now discuss, step by step, the algorithm that specifies how to apply the partitioning (6) and (7) in a systematic manner to finally arrive at an optimum integer-valued solution.

We start with an initial lower bound for z, say z(0), at the first iteration, which is less than or equal to the optimal value z*. This lower bound may be taken as the starting Lj for some xj.

In addition to the lower bound z(0), we also have a list of L.P.P.s (called the master list) differing only in the bounds (5). To start with (the 0th iteration), the master list contains a single L.P.P. consisting of (1), (2), (4) and (5). The partitioning (6) and (7) is then applied systematically to subproblems drawn from the master list until an optimum integer-valued solution is obtained.