
ALAGAPPA UNIVERSITY

[Accredited with ‘A+’ Grade by NAAC (CGPA: 3.64) in the Third Cycle
and Graded as Category–I University by MHRD-UGC]
(A State University Established by the Government of Tamil Nadu)
KARAIKUDI – 630 003

Directorate of Distance Education

B.Sc. (Mathematics)
V - Semester
113 52

OPERATIONS RESEARCH
Authors:
P

"The copyright shall be vested with Alagappa University"

All rights reserved. No part of this publication which is material protected by this copyright notice
may be reproduced or transmitted or utilized or stored in any form or by any means now known or
hereinafter invented, electronic, digital or mechanical, including photocopying, scanning, recording
or by any information storage or retrieval system, without prior written permission from the Alagappa
University, Karaikudi, Tamil Nadu.

Information contained in this book has been published by VIKAS® Publishing House Pvt. Ltd. and has
been obtained by its Authors from sources believed to be reliable and are correct to the best of their
knowledge. However, the Alagappa University, Publisher and its Authors shall in no event be liable for
any errors, omissions or damages arising out of use of this information and specifically disclaim any
implied warranties of merchantability or fitness for any particular use.

Vikas® is the registered trademark of Vikas® Publishing House Pvt. Ltd.


VIKAS® PUBLISHING HOUSE PVT. LTD.
E-28, Sector-8, Noida - 201301 (UP)
Phone: 0120-4078900  Fax: 0120-4078999
Regd. Office: A-27, 2nd Floor, Mohan Co-operative Industrial Estate, New Delhi - 110 044
 Website: www.vikaspublishing.com  Email: helpline@vikaspublishing.com

Work Order No. AU/DDE/DE12-27/Preparation and Printing of Course Materials/2020 Dated 12.08.2020 Copies - ......
SYLLABI-BOOK MAPPING TABLE
Operations Research

Syllabi                                                        Mapping in Book

BLOCK I: SIMPLEX, BIG M AND TWO PHASE METHODS IN LPP
UNIT - 1: Introduction – Origin and Development of Operations Research (OR) –
Nature and Features of OR – Scientific Methods in OR – Modeling in OR –
Advantages and Limitations of Models – General Solution Methods of OR Models –
Applications of Operations Research.
        Unit 1: Operations Research: An Introduction (Pages 1-30)
UNIT - 2: Linear Programming Problem – Mathematical Formulation of the Problem –
Illustration on Mathematical Formulation of Linear Programming Problems (LPP) –
Graphical Solution Method – Some Exceptional Cases.
        Unit 2: Linear Programming Problems (Pages 31-50)
UNIT - 3: General Linear Programming Problem – Canonical and Standard Forms of
LPP – Simplex Method.
        Unit 3: Linear Programming and Simplex Method (Pages 51-78)
UNIT - 4: Linear Programming using Artificial Variables – Big M Method –
Two Phase Method – Problems.
        Unit 4: Artificial Variable Techniques (Pages 79-93)

BLOCK II: DUALITY AND INTEGER PROGRAMMING
UNIT - 5: Duality in Linear Programming (LP) – General Primal and Dual Pair –
Formulating a Dual Problem – Primal–Dual Pair in Matrix Form – Duality Theorems –
Complementary Slackness Theorem.
        Unit 5: Duality in Linear Programming (Pages 94-120)
UNIT - 6: Integer Programming – Cutting Plane Technique, Dual Simplex Method.
        Unit 6: Integer Programming (Pages 121-156)
UNIT - 7: Introduction – LP Formulation of Transportation Problem (TP) – Existence
of Solution in TP – The Transportation Table – Loops in TP – Solution of a
Transportation Problem – Finding an Initial Basic Feasible Solution (NWCM – LCM – VAM).
        Unit 7: Linear Programming and Transportation Problem (Pages 157-193)
UNIT - 8: Degeneracy in TP – Transportation Algorithm (MODI Method) –
Unbalanced TP – Maximization TP.
        Unit 8: Transportation Problems (Pages 194-223)

BLOCK III: ASSIGNMENT AND SEQUENCING PROBLEM
UNIT - 9: Assignment Problem – Introduction – Mathematical Formulation of the
Problem – Test for Optimality by using Hungarian Method – Maximization Case in
Assignment Problem.
        Unit 9: Assignment Problems (Pages 224-237)
UNIT - 10: Sequencing Problem – Introduction – Problem of Sequencing – Basic Terms
used in Sequencing – n Jobs to be Operated on Two Machines – Problems – n Jobs to
be Operated on K Machines – Problems – Two Jobs to be Operated on K Machines
(Graphical Method) – Problems.
        Unit 10: Sequencing Models (Pages 238-269)
UNIT - 11: Game Theory – Two Person Zero-Sum Games – Basic Terms –
Maximin–Minimax Principle.
        Unit 11: Game Theory (Pages 270-289)

BLOCK IV: DOMINANCE IN GAMES AND NETWORK ANALYSIS
UNIT - 12: Games Without Saddle Points – Mixed Strategies – Graphical Solution of
2 × n and m × 2 Games.
        Unit 12: Saddle Points and Mixed Strategies (Pages 290-313)
UNIT - 13: Dominance Property – General Solution of m × n Rectangular Games –
Problems.
        Unit 13: Dominance Property (Pages 314-326)
UNIT - 14: Network Scheduling by PERT / CPM – Network Basic Components –
Drawing Network – Critical Path Analysis – PERT Analysis – Distinction between
PERT and CPM.
        Unit 14: Network Analysis: CPM and PERT (Pages 327-380)
CONTENTS
INTRODUCTION
BLOCK I: SIMPLEX, BIG M AND TWO PHASE METHODS IN LPP

UNIT 1 OPERATIONS RESEARCH: AN INTRODUCTION 1-30


1.0 Introduction
1.1 Objectives
1.2 Operations Research: Meaning, Nature and Origin
1.2.1 Nature of Operations Research
1.3 Development of Operational Research
1.4 Operations Research in India
1.5 Operations Research as a Tool in Decision-Making
1.6 Operations Research and Management
1.6.1 Significance of Operations Research
1.6.2 Operations Research and Modern Business Management
1.7 Features and Methodology of Operations Research and Phases of Operations Research Study
1.7.1 Methodology of Operations Research
1.8 Models in Operations Research and Methods of Deriving the Solution
1.8.1 Advantages of a Model
1.8.2 Classification of Models
1.9 Limitations of Operations Research
1.10 Answers to Check Your Progress Questions
1.11 Summary
1.12 Key Words
1.13 Self-Assessment Questions and Exercises
1.14 Further Readings

UNIT 2 LINEAR PROGRAMMING PROBLEMS 31-50


2.0 Introduction
2.1 Objectives
2.2 Linear Programming Problem
2.2.1 Meaning of Linear Programming Problem
2.2.2 Fields Where Linear Programming can be Used
2.3 Mathematical Formulation of the Problem
2.3.1 Basic Concepts and Notations
2.3.2 General Form of the Linear Programming Model
2.4 Illustration on Mathematical Formulation of Linear Programming Problems
2.5 Graphical Solution Method
2.5.1 Graphic Solution
2.5.2 Some Exceptional Cases
2.6 Answers to Check Your Progress Questions
2.7 Summary
2.8 Key Words
2.9 Self-Assessment Questions and Exercises
2.10 Further Readings

UNIT 3 LINEAR PROGRAMMING AND SIMPLEX METHOD 51-78


3.0 Introduction
3.1 Objectives
3.2 General Linear Programming Problem
3.2.1 Graphical Solution
3.2.2 Some Important Definitions
3.3 Canonical and Standard forms of LPP
3.4 Simplex Method
3.5 Answers to Check Your Progress Questions
3.6 Summary
3.7 Key Words
3.8 Self-Assessment Questions and Exercises
3.9 Further Readings

UNIT 4 ARTIFICIAL VARIABLE TECHNIQUES 79-93


4.0 Introduction
4.1 Objectives
4.2 Linear Programming Using Artificial Variable
4.3 Big M Method
4.4 Two-Phase Method
4.5 Answers to Check Your Progress Questions
4.6 Summary
4.7 Key Words
4.8 Self Assessment Questions and Exercises
4.9 Further Readings

BLOCK II: DUALITY AND INTEGER PROGRAMMING

UNIT 5 DUALITY IN LINEAR PROGRAMMING 94-120


5.0 Introduction
5.1 Objectives
5.2 Duality and Linear Programming
5.3 General Primal and Dual Pair
5.4 Formulating a Dual Problem
5.5 Dual Pair in Matrix Form
5.6 Duality Theorem
5.7 Complementary Slackness Theorem
5.8 Answers to Check Your Progress Questions
5.9 Summary
5.10 Key Words
5.11 Self Assessment Questions and Exercises
5.12 Further Readings

UNIT 6 INTEGER PROGRAMMING 121-156


6.0 Introduction
6.1 Objectives
6.2 Integer Programming and Cutting Plane Techniques
6.2.1 Importance of Integer Programming Problems
6.2.2 Applications of Integer Programming
6.2.3 Methods of Integer Programming Problem
6.2.4 Mixed Integer Programming Problem
6.2.5 Branch and Bound Method
6.3 Dual Simplex Method
6.4 Answers to Check Your Progress Questions
6.5 Summary
6.6 Key Words
6.7 Self Assessment Questions and Exercises
6.8 Further Readings

UNIT 7 LINEAR PROGRAMMING AND TRANSPORTATION PROBLEM 157-193


7.0 Introduction
7.1 Objectives
7.2 Linear Programming Formulation of Transportation Problems
7.3 Existence of Solution of Transportation Problems
7.4 Solution of a Transportation Problem
7.4.1 Transhipment Model
7.5 Feasible Solution (NWCM - LCM - VAM)
7.6 Answers to Check Your Progress Questions
7.7 Summary
7.8 Key Words
7.9 Self Assessment Questions and Exercises
7.10 Further Readings

UNIT 8 TRANSPORTATION PROBLEMS 194-223


8.0 Introduction
8.1 Objectives
8.2 Degeneracy in Transportation Problems
8.3 Transportation Algorithm (MODI Method)
8.4 Unbalanced and Maximization of Transportation Problems
8.5 Answers to Check Your Progress Questions
8.6 Summary
8.7 Key Words
8.8 Self Assessment Questions and Exercises
8.9 Further Readings

BLOCK III: ASSIGNMENT AND SEQUENCING PROBLEMS

UNIT 9 ASSIGNMENT PROBLEMS 224-237


9.0 Introduction
9.1 Objectives
9.2 Assignment Problems
9.3 Test for Optimality by using Hungarian Method
9.4 Maximization in Assignment Problems
9.5 Answers to Check Your Progress Questions
9.6 Summary
9.7 Key Words
9.8 Self Assessment Questions and Exercises
9.9 Further Readings

UNIT 10 SEQUENCING MODELS 238-269


10.0 Introduction
10.1 Unit Objectives
10.2 Sequencing Models: Basic Concepts
10.2.1 Definition
10.2.2 Terminology and Notations
10.2.3 Principal Assumptions
10.2.4 Job Sequence Problems
10.3 Processing of n Jobs through Two Machines
10.4 Processing of n Jobs through Three Machines
10.5 Processing of n Jobs through m Machines
10.6 Processing of Two Jobs through m Machines
10.7 Maintenance Crew Scheduling
10.8 Answers to Check Your Progress Questions
10.9 Summary
10.10 Key Words
10.11 Self Assessment Questions and Exercises
10.12 Further Readings

UNIT 11 GAME THEORY 270-289


11.0 Introduction
11.1 Objectives
11.2 Game Theory
11.3 Basic Terms in Game Theory
11.4 Two-Person Zero-Sum Games
11.4.1 Sum Games
11.5 The Maximin-Minimax Principle
11.6 Answers to Check Your Progress Questions
11.7 Summary
11.8 Key Words
11.9 Self Assessment Questions and Exercises
11.10 Further Readings

BLOCK IV: DOMINANCE IN GAMES AND NETWORK ANALYSIS

UNIT 12 SADDLE POINTS AND MIXED STRATEGIES 290-313


12.0 Introduction
12.1 Objectives
12.2 Games without Saddle Points
12.3 Mixed Strategies
12.3.1 Pure and Mixed Strategies with Saddle Point
12.3.2 Mixed Strategy Problems by Arithmetic Method
12.4 Graphic Solution of 2 × n and m × 2 Games
12.5 Answers to Check Your Progress Questions
12.6 Summary
12.7 Key Words
12.8 Self Assessment Questions and Exercises
12.9 Further Readings

UNIT 13 DOMINANCE PROPERTY 314-326


13.0 Introduction
13.1 Objectives
13.2 Dominance Property
13.2.1 Rule for Dominance
13.3 Principle of Dominance and General Solution of m × n Rectangular Games
13.4 Answers to Check Your Progress Questions
13.5 Summary
13.6 Key Words
13.7 Self Assessment Questions and Exercises
13.8 Further Readings

UNIT 14 NETWORK ANALYSIS: CPM AND PERT 327-380


14.0 Introduction
14.1 Objectives
14.2 Introduction to Network Concept
14.2.1 Development of Network Analysis - CPM and PERT
14.3 Network Analysis and Rules of Network Construction
14.3.1 Rules of Network Construction
14.3.2 Time Analysis
14.3.3 Network Diagram
14.4 Critical Path Method (CPM)
14.4.1 Computations for Critical Path
14.4.2 Applications of CPM Analysis
14.5 Programme Evaluation and Review Technique (PERT)
14.5.1 PERT Procedure
14.6 Comparison and Limitations of PERT and CPM
14.7 Answers to Check Your Progress Questions
14.8 Summary
14.9 Key Words
14.10 Self Assessment Questions and Exercises
14.11 Further Readings
INTRODUCTION

Operational Research, or simply OR, originated in the context of military operations,


but today it is widely accepted as a powerful tool for planning and decision-
making, especially in business and industry. The OR approach has provided a
new tool for managing conventional management problems. In fact, operational
research techniques do constitute a scientific methodology of analysing the problems
of the business world. They provide an improved basis for taking management
decisions. The practice of OR helps in tackling intricate and complex problems,
such as that of resource allocation, product mix, inventory management, sequencing
and scheduling, replacement and a host of similar problems of modern business
and industry.
Operations research is an interdisciplinary branch of applied mathematics
and formal science that uses mathematical methods, such as mathematical modelling,
statistics and algorithms to arrive at optimal or near optimal solutions to complex
problems. Basically, it is concerned with maximizing quantities such as profit,
assembly-line performance or bandwidth, or minimizing quantities such as loss or
risk, through an objective function. It also helps management achieve its goals using
scientific methods. The field of operations research is closely related to industrial
engineering, and hence industrial engineers consider operations research techniques
a major part of their toolset. Some of the primary tools used by operations researchers are
statistics, optimization, probability theory, queuing theory, game theory, graph theory,
decision analysis and simulation. Because of the computational nature of these
fields, OR is linked to computer science and OR professionals use specific custom-
written software for computation of data and decision-making. The uniqueness of
OR has prompted industries to adopt its formal approaches, variously known as operations
analysis, systems analysis, management science, decision science, etc. Commercial industries, such
as airlines, automobiles, communications, electronics, transportation, chemicals
and mining use OR techniques to optimally utilize their limited resources and thereby
maximize profits. Hence, OR is the application of the methods of science to complex
problems arising in the direction and management of large systems of men, machines,
materials and money in industry, business, government and defence. The distinctive
approach is to develop a scientific model of the system, incorporating measurement
of factors, such as chance and risk, with which to predict and compare the outcomes
of alternative decision strategies and controls.
Operations research provides top-level administrators a quantitative basis
for taking decisions which will help organizations to carry out their functions, such
as planning, controlling and organizing, effectively. Decision-making is the key
responsibility of managers and OR provides a scientific approach to them for
solving problems. Decisions in an organization should be such that they can compete
in the market. We can say that the OR and decision-making processes are

interlinked. There are intangible factors also, such as human behaviour, which OR
has to take into account when calculating a solution.
This book, Operations Research, is divided into four blocks, which are
further subdivided into fourteen units. This book provides a basic understanding
of the subject and helps to grasp its fundamentals. In a nutshell, it explains various
aspects, such as introduction to operations research, origin, development, nature
and features of Operations Research (OR), scientific methods in OR, modeling in
OR, general solution methods of OR models, applications of OR, Linear
Programming Problem (LPP), mathematical formulation of the problem, graphical
solution method, canonical and standard forms of LPP, simplex method, linear
programming using artificial variables, big M method, two phase method, duality
in Linear Programming (LP), general primal and dual pair, formulating a dual
problem, primal, dual pair in matrix form, duality theorems, complementary
slackness theorem, integer programming, cutting plane technique, dual simplex
method, LP formulation of Transportation Problem (TP), existence of solution in
TP, transportation table, feasible solution (NWCM – LCM – VAM), degeneracy
in TP, transportation algorithm (MODI method), unbalanced TP, assignment
problem, mathematical formulation of the problem, test for optimality by using
Hungarian method, maximization in assignment problem, sequencing problem,
problem of sequencing, basic terms used in sequencing, n jobs to be operated on
two machines and on k machines, game theory, two-person zero-sum games,
basic terms, maximin– minimax principle, games without saddle points, mixed
strategies, graphical solution of 2 × n and m × 2 games, dominance property,
network scheduling by Programme Evaluation and Review Technique (PERT) /
Critical Path Method (CPM), network basic components, drawing network, critical
path analysis, and PERT analysis.
The book follows the Self-Instructional Mode (SIM) wherein each unit
begins with an ‘Introduction’ to the topic. The ‘Objectives’ are then outlined before
going on to the presentation of the detailed content in a simple and structured
format. ‘Check Your Progress’ questions are provided at regular intervals to test
the student’s understanding of the subject. ‘Answers to Check Your Progress
Questions’, a ‘Summary’, a list of ‘Key Words’, and a set of ‘Self-Assessment
Questions and Exercises’ are provided at the end of each unit for effective
recapitulation. This book provides a good learning platform to the people who
need to be skilled in the area of operations research. Logically arranged
topics, relevant examples and illustrations have been included for better
understanding of the topics and for effective recapitulation.

BLOCK - I
SIMPLEX, BIG M AND TWO PHASE METHODS IN LPP

UNIT 1 OPERATIONS RESEARCH:


AN INTRODUCTION
Structure
1.0 Introduction
1.1 Objectives
1.2 Operations Research: Meaning, Nature and Origin
1.2.1 Nature of Operations Research
1.3 Development of Operational Research
1.4 Operations Research in India
1.5 Operations Research as a Tool in Decision-Making
1.6 Operations Research and Management
1.6.1 Significance of Operations Research
1.6.2 Operations Research and Modern Business Management
1.7 Features and Methodology of Operations Research and Phases of
Operations Research Study
1.7.1 Methodology of Operations Research
1.8 Models in Operations Research and Methods of Deriving the Solution
1.8.1 Advantages of a Model
1.8.2 Classification of Models
1.9 Limitations of Operations Research
1.10 Answers to Check Your Progress Questions
1.11 Summary
1.12 Key Words
1.13 Self-Assessment Questions and Exercises
1.14 Further Readings

1.0 INTRODUCTION

The term Operations Research (OR) was coined by J.F. McCloskey and F.N.
Trefethen in 1940 in Bawdsey in the United Kingdom. This innovative science
was discovered during World War II for a specific military situation, when military
management sought decisions based on the optimal consumption of limited military
resources with the help of an organized and systematized scientific approach. This
was termed as operations research or operational research. Thus, OR was known
as an ability to win a war without really going into a battlefield or fighting it. It is a
new field of scientific and managerial application.

The different phases of the application of OR include formulating the problem,
constructing a mathematical model, deriving the result from the model, testing and
updating the model, and controlling and implementing the final output or solution. A
good model must be capable of working on a new formulation without any changes in
its frame, with minimum assumptions and minimum variables, and must not take an
extraordinary amount of time to solve the problem.
The various models of OR and the techniques they adopt are also discussed. These
techniques are: linear programming, waiting line or queuing theory, inventory control/
planning, game theory, decision theory, network analysis, simulation, integrated
production models, non-linear programming, dynamic programming, heuristic
programming, integer programming, algorithmic programming, quadratic
programming, parametric programming, probabilistic programming, search theory
and theory of replacement. All these techniques involve higher mathematics. In
real practice these techniques are used in combination to form more sophisticated
and advanced programming models.
In this unit, you will study about the concepts of origin and development of
operations research, nature and features of operations research, scientific method
in operations research, modelling in operations research, advantages and limitations
of models, general solution methods of operations research, models and applications
of operations research.

1.1 OBJECTIVES

After going through this unit, you will be able to:


 Understand the definition and scope of Operations Research (OR)
 Define development of OR
 Analyse the nature of OR
 Know about OR in India
 Describe OR as a tool in decision-making
 Explain the relation between OR and management
 Know about the phases and applications of OR
 Understand the requirements of a good model
 Understand the working of various models and techniques of OR
 Derive solutions using OR

1.2 OPERATIONS RESEARCH: MEANING,


NATURE AND ORIGIN

The term Operations Research (OR) was first coined in 1940 by J.F. McCloskey
and F.N. Trefethen in a small town, Bawdsey, in the United Kingdom. This new
science came into existence in a military context. During World War II, military
management called on scientists from various disciplines and organized them into
teams to assist in solving strategic and tactical problems, relating to air and land
defence. Their mission was to formulate specific proposals and plans for aiding
the Military commands to arrive at decisions on optimal utilization of scarce military
resources and to implement these decisions effectively. This new approach
to the systematic and scientific study of the operations of the system was called
Operations Research (OR) or Operational Research. Hence, OR can be
associated with ‘an art of winning the war without actually fighting it’.
Definitions
Operations Research (OR) has been defined so far in various ways and it is perhaps
still too young to be defined in some authoritative way. It is not possible to give
uniformly acceptable definitions of OR. A few opinions about the definition of OR
are given below. These have been changed according to the development of the
subject.
OR is a scientific method of providing executive departments with a
quantitative basis for decisions regarding the operations under their control.
Morse and Kimball (1946)
OR is the scientific method of providing executive with an analytical and
objective basis for decisions.
P.M.S. Blackett (1948)
OR is a systematic method-oriented study of the basic structures,
characteristics, functions and relationships of an organization to provide the
executive with a sound, scientific and quantitative basis for decision-making.
E.L. Arnoff and M.J. Netzorg
OR is a scientific approach to problem solving for executive management.
H.M. Wagner
OR is an aid for the executive in making his decisions by providing him with
the quantitative information based on the scientific method of analysis.

C. Kittee
OR is the scientific knowledge through interdisciplinary team effort for the
purpose of determining the best utilization of limited resources.

H.A. Taha
The various definitions given here bring out the following essential characteristics
of operations research:
(i) System orientation
(ii) Use of interdisciplinary terms
(iii) Application of scientific methods
(iv) Uncovering new problems
(v) Quantitative solutions
(vi) Human factors
Scope of Operations Research
There is a great scope for economists, statisticians, administrators and the technicians
working as a team to solve problems of defence by using the OR approach.
Besides this, OR is useful in various other important fields like:
(i) Agriculture
(ii) Finance
(iii) Industry
(iv) Marketing
(v) Personnel Management
(vi) Production Management
(vii) Research and Development
Phases of Operations Research
The procedure to be followed in the study of OR generally involves the following
major phases:
(i) Formulating the problem
(ii) Constructing a mathematical model
(iii) Deriving the solution from the model
(iv) Testing the model and its solution (updating the model)
(v) Controlling the solution
(vi) Implementation
1.2.1 Nature of Operations Research
Looking to the basic features of the definitions concerning OR, we can state that,
‘Operational Research can be considered as the application of scientific method by
interdisciplinary teams to problems involving the control of organized (man-machine)
systems to provide solutions, which best serve the purposes of the organization as a
whole’.
Different characteristics constituting the nature of OR can be summed up as follows:
1. Interdisciplinary Team Approach: Operations Research has the
characteristic that it is carried out by a team of scientists drawn from various
disciplines such as mathematics, statistics, economics, engineering, physics,
etc. It is essentially an interdisciplinary team approach. Each member of the
OR team benefits from the viewpoint of the others, so that a workable
solution obtained through such collaborative study has a greater chance of
acceptance by management.

2. Systems Approach: Operations Research emphasizes the overall
approach to the system. This characteristic of OR is often referred to as system
orientation. The orientation is based on the observation that in the organized
systems the behaviour of any part ultimately has some effect on every other
part. But not all these effects are significant, or even capable of detection.
Therefore, the essence of system orientation lies in the systematic search
for significant interactions in evaluating actions of any part of the organization.
In OR an attempt is made to take account of all the significant effects and to
evaluate them as a whole. OR thus considers the total system for getting the
optimum decisions.
3. Helpful in Improving the Quality of Solution: Operations Research cannot
give perfect answers or solutions to problems. It merely gives bad answers
to problems which otherwise have worse answers. Thus, OR simply helps
in improving the quality of the solution but does not result in a perfect solution.
4. Scientific Method: Operations Research involves scientific and systematic
attack of complex problems to arrive at the optimum solution. In other
words, Operations Research or OR uses techniques of scientific research.
Thus OR comprehends both aspects, i.e., it includes both scientific research
on the phenomena of operating systems and the associated engineering
activities aimed at applying the results of research.
5. Goal Oriented Optimum Solution: Operations Research tries to optimize
a well-defined function subject to given constraints and as such is concerned
with the optimization theory.
6. Use of Models: Operations Research uses models built by quantitative
measurement of the variables concerning a given problem and also derives
a solution from the model using one or more of the diversified solution
techniques. A solution may be extracted from a model either by conducting
experiments on it or by mathematical analysis. The purpose is to help the
management to determine its policy and actions scientifically.
7. Require Willing Executives: Operations Research does require the
willingness on the part of the executive for experimentation to evaluate the
costs and the consequences of the alternative solutions of the problem. It
enables the decision-maker to be objective in choosing an alternative from
among many possible alternatives.
8. Reduces Complexity: Operations Research tries to reduce the complexity
of business operations and does help the executive in correcting a
troublesome function and in considering innovations which are too costly and
complicated to experiment with in actual practice.
In view of the above, OR must be viewed as both a science and an art. As
science, OR provides mathematical techniques and algorithms for solving
appropriate decision problems. OR is an art because success in all the
phases that precede and succeed the solution of a problem largely depends
on the creativity and personal ability of the decision-making analysts.
Check Your Progress
1. How was the concept of operations research started?
2. State one definition of OR.
3. Mention the essential characteristics of operations research.
4. Apart from defence, mention other areas where OR is applied.
5. What are major phases in the application of OR?

1.3 DEVELOPMENT OF OPERATIONAL


RESEARCH

The subject of Operational Research (OR) was developed in a military context during
World War II, pioneered by British scientists. At that time, the military management
in England appointed a study group of scientists to deal with the strategic and tactical
problems related to air and land defence of the country. The main reason for
conducting the study was that they were having very limited military resources. It
was, therefore, necessary to decide upon the most effective way of utilizing these
resources. As the name implies, Operations Research was apparently invented
because the team was dealing with research on military operations. The scientists
studied the various problems and on the basis of quantitative study of operations
suggested certain approaches which showed remarkable success. The encouraging
results obtained by the British operations research teams consisting of personnel
drawn from various fields like Mathematics, Physics, Biology, Psychology and other
physical sciences, quickly motivated the United States military management to start
similar activities. Successful innovations of the US teams included the development
of new flight patterns, planning sea mining and effective utilization of electronic
equipment. Similar OR teams also started functioning in Canada and France. These
OR teams were usually assigned to the executive-in-charge of operations and as
such their work came to be known as ‘Operational Research’ in the UK and by a
variety of names in the United States, such as Operational Analysis, Operations Evaluation,
Operations Research, Systems Analysis, Systems Evaluation and Systems Research.
The name ‘Operational Research’ or ‘Operations Research’ or simply OR is most
widely used nowadays all over the world for the systematic and scientific study of
the operations of the system. Till the fifties, the use of OR was mainly confined to military
purposes.
After the end of the second world war, the success of military teams attracted
the attention of industrial managers who were seeking solutions to their complex
managerial problems. At the end of the war, expenditures on defence research
were reduced in the UK and this led to the release of many operations research
workers from the military at a time when industrial managers were confronted
with the need to reconstruct most of Britain’s manufacturing industries and plants
that had been damaged in war. Executives in such industries sought assistance
from the said operations research workers. But in the USA most of the war-
experienced operations research workers remained in military service as the defence
research was increased and consequently operations research was expanded at
the end of the war. It was only in the early 1950s that industry in the USA began to
absorb operations research workers, under the pressure of increased demands
for greater productivity that originated because of the outbreak of the Korean conflict
and because of technological developments in industry. Thus, OR began to develop
in the industrial field in the United States from the year 1950 onwards. The Operations Research
Society of America was formed in 1953 and in 1957 the International Federation
of Operational Research Societies was established. Various journals relating to
OR began to appear in different countries in the years that followed the mid-fifties.
Courses and curricula in OR in different universities and other academic institutions
began to proliferate in the United States. Other countries rapidly followed this and
after the late fifties Operations Research was applied for solving business and
industrial problems. Introduction of Electronic Data Processing (EDP) methods
further enlarged the scope for application of OR techniques. With the help of a
digital computer, many complex problems can be studied on a day-to-day basis.
As a result, many industrial concerns are adopting OR as an integrated decision-
making tool for their routine decision procedures.

1.4 OPERATIONS RESEARCH IN INDIA

Today, the impact of OR in Indian business and industry can be felt in many areas.
A large number of management consulting firms are now engaged in OR
activities. Apart from military and business applications, OR activities include
transportation systems, libraries, hospitals, city planning, financial institutions, etc.
With increasing use of computers the operations research techniques have started
playing a noticeable role in our country as well. The major Indian industries such
as Delhi Cloth Mills, Indian Railways, Indian Airlines, Defence Organizations,
Hindustan Lever, Tata Iron and Steel Company, Fertilizer Corporation of India
and similar industries make use of operations research techniques for solving
problems and making decisions.
Historically, Operations Research started developing in India after
independence, specially with the setting up of an Operations Research Unit at the
Regional Research Laboratory at Hyderabad in 1949. Operations Research
activities gained further impetus with the establishment of an Operations Research
Unit in 1953 in the Indian Statistical Institute (ISI), Calcutta for applying Operations
Research techniques in national planning and survey. Operational Research Society
of India was formed in 1957 which joined International Federation of Operational
Research Societies in 1960 by becoming its member. The said society helped the
cause of the development of Operations Research activities in India in several
ways and started publishing a journal of Operations Research entitled

‘OPSEARCH’ from 1963. Besides, the Indian Institute of Industrial Engineers
has also promoted Operations Research in India and its journals viz., ‘Industrial
Engineering’ and ‘Management’ are considered as important key journals relating
to Operations Research in the country. Other important journals which deal with
Operations Research in our country are the Journal of the National Productivity
Council, Materials Management Journal of India and the Defence Science Journal.
There are several institutions which train and produce people in the field of
Operations Research to meet the need of OR practitioners in the country.
So far as the application of Operations Research in India is concerned it
was Professor P.C. Mahalanobis of ISI, Calcutta, who made the first important
application. He formulated the Second Five Year Plan of our country with the help
of OR technique to forecast the trends of demand, availability of resources and
for scheduling the complex scheme necessary for developing our country’s economy.
It was estimated that India could become self-sufficient in food and solve her
foreign exchange problems merely by reducing the wastage of food by 15%.
Operational Research Commission made use of OR techniques for planning
the optimum size of the Caravelle fleet of Indian Airlines. Kirloskar company made
use of assignment models for allocation of their salesmen to different areas so as
to maximize their profit. Linear Programming (LP) models were also used by
them to assemble various diesel engines at minimum possible cost. Various cotton
textile leaders such as Binny, DCM, Calico, etc., are using linear programming
techniques in cotton blending. Many other firms like Union Carbide, ICI, TELCO
and Hindustan Liver, etc., are making use of OR techniques for solving many of
their complex business problems. State Trading Corporation of India (STCI) has
also set up a Management Sciences Group with the idea of promoting and developing
the use of OR techniques in solving its management decision problems. Besides,
many Universities and professional academic institutions are imparting training in
OR in our country. The subject of OR has been included in the courses of such
institutions. But in comparison with the western world, the present state of OR in
our country lags much behind. Operations Research activities are very much limited
and confined only to the big organized industries. The most popular practical application
of Operations Research has been that of Linear Programming. There is a
relative scarcity of well-trained operational researchers. The use of Operations
Research is also a relatively costly affair. In spite of several limitations, our
industrialists are gradually becoming conscious of the role of Operations Research
techniques and in the coming years such techniques will have an increasingly
important role to play in Indian business and industry.

1.5 OPERATIONS RESEARCH AS A TOOL IN


DECISION-MAKING

Mathematical models have been constructed for OR problems and methods for
solving the models are available in many cases. Such methods are usually termed
as OR techniques. Some of the important OR techniques often used by decision-
makers in modern times in business and industry are as under:
1. Linear Programming: This technique is used in finding a solution for
optimizing a given objective, such as profit maximization or cost minimization
under certain constraints. This technique is primarily concerned with the
optimal allocation of limited resources for optimizing a given function. The
name linear programming is given because of the fact that the model in such
cases consists of linear equations indicating linear relationship between the
different variables of the system. Linear programming technique solves
product-mix and distribution problems of business and industry. It is a
technique used to allocate scarce resources in an optimum manner in
problems of scheduling, product-mix, and so on. Key factors under this
technique include an objective function, choice among several alternatives,
limits or constraints stated in symbols and variables assumed to be linear. (A small worked sketch of such a problem follows this list.)
2. Waiting Line or Queuing Theory: Waiting line or queuing theory deals
with mathematical study of queues. The queues are formed whenever the
current demand for service exceeds the current capacity to provide that
service. Waiting line technique concerns itself with the random arrival of
customers at a service station where the facility is limited. Providing too
much of capacity will mean idle time for servers and will lead to waste of
‘money’. On the other hand, if the queue becomes long there will be a cost
due to waiting of units in the queue. Waiting line theory, therefore, aims at
minimizing the costs of both servicing and waiting. In other words, this
technique is used to analyse the feasibility of adding facilities and to assess
the amount and cost of waiting time. With its help we can find the optimal
capacity to be installed which will lead to a sort of an economic balance
between cost of service and cost of waiting. (A worked single-server sketch follows this list.)
3. Inventory Control/Planning: Inventory planning aims at optimizing
inventory levels. Inventory may be defined as a useful idle resource which
has economic value, e.g., raw materials, spare parts, finished products, etc.
Inventory planning, in fact, answers the two questions, viz., how much to
buy and when to buy. Under this technique the main emphasis is on minimizing
costs associated with holding of inventories, procurement of inventories
and the shortage of inventories. (A worked order-quantity sketch follows this list.)
4. Game Theory: Game theory is used to determine the optimum strategy in
a competitive situation. Simplest possible competitive situation is that of
two persons playing zero-sum game, i.e., a situation in which two persons
are involved and one person wins exactly what the other loses. More
complex competitive situations in real life can be imagined where game
theory can be used to determine the optimum strategy. (A small payoff-matrix sketch follows this list.)
5. Decision Theory: Decision theory is concerned with making sound decisions
under conditions of certainty, risk and uncertainty. As a matter of fact there
are three different kinds of states under which decisions are made, viz.,
deterministic, stochastic and uncertainty, and the decision theory explains
how to select a suitable strategy to achieve some object or goal under each
of these three states.
6. Network Analysis: Network analysis involves the determination of an
optimum sequence of performing certain operations concerning some jobs
in order to minimize overall time and/or cost. Programme Evaluation and
Review Technique (PERT), Critical Path Method (CPM) and other network
techniques such as Gantt Chart come under network analysis. Key concepts
under this technique are network of events and activities, resource allocation,
time and cost considerations, network paths and critical paths. (A small critical-path sketch follows this list.)
7. Simulation: Simulation is a technique of testing a model, which resembles
a real life situation. This technique is used to imitate an operation prior to
actual performance. Two methods of simulation are used—Monte Carlo
method and the System simulation method. The former uses random numbers
to solve problems which involve conditions of uncertainty and where
mathematical formulation is impossible. In the case of system simulation, there
is a reproduction of the operating environment and the system allows for
analysing the response from the environment to alternative management
actions. This method draws samples from a real population instead of from
a table of random numbers. (A small Monte Carlo sketch follows this list.)
8. Integrated Production Models: This technique aims at minimizing cost
with respect to work force, production and inventory. This technique is
highly complex and is used only by big business and industrial units. This
technique can be used only when sales and cost statistics for a considerably
long period are available.
9. Some Other OR Techniques: In addition, there are several other techniques
such as non-linear programming, dynamic programming, search theory, the
theory of replacement, etc. A brief mention of some of these is as follows:
(i) Non-Linear Programming: A form of programming in which some
or all of the variables are curvilinear. In other words, this means that
either the objective function or constraints or both are not in linear
form. In most of the practical situations, we encounter non-linear
programming problems but for computation purpose we approximate
them as linear programming problems. Even then there may remain
some non-linear programming problems which may not be fully solved
by presently known methods.
(ii) Dynamic Programming: It refers to a systematic search for optimal
solutions to problems that involve many highly complex interrelations
that are, moreover, sensitive to multistage effects such as successive
time phases.

(iii) Heuristic Programming: It is also known as the discovery method and
refers to step-by-step search towards an optimum when a problem
cannot be expressed in mathematical programming form. The search
procedure examines successively a series of combinations that lead to
stepwise improvements in the solution and the search stops when a
near optimum has been found.
(iv) Integer Programming: It is a special form of linear programming in
which the solution is required in terms of integral numbers (i.e., whole
numbers) only.
(v) Algorithmic Programming: It is just the opposite of Heuristic
programming. It may also be termed as similar to mathematical
programming. This programming refers to a thorough and exhaustive
mathematical approach to investigate all aspects of the given variables
in order to obtain optimal solution.
(vi) Quadratic Programming: It refers to a modification of linear
programming in which the objective equations appear in quadratic
form, i.e., they contain squared terms.
(vii) Parametric Programming: It is the name given to linear programming
when it is modified for the purpose of inclusion of several objective
equations with varying degrees of priority. The sensitivity of the solution
to these variations is then studied.
(viii) Probabilistic Programming: It is also known as stochastic
programming and refers to linear programming that includes an
evaluation of relative risks and uncertainties in various alternatives of
choice for management decisions.
(ix) Search Theory: It concerns itself with search problems. A search
problem is characterized by the need for designing a procedure to
collect information on the basis of which one or more decisions are
made. This theory is useful in places in which some events are known
to occur but the exact location is not known. The first search model
was developed during World War II to solve decision problems
connected with air patrols and their search for submarines. Advertising
agencies searching for customers and personnel departments searching for good
executives are some of the examples of search theory’s application in
business.
(x) The Theory of Replacement: It is concerned with the prediction of
replacement costs and determination of the most economic
replacement policy. There are two types of replacement models. The first
type of model deals with replacing equipment that deteriorates with time,
and the other type helps in establishing a replacement policy
for equipment which fails completely and instantaneously.
All these techniques are not simple but involve higher mathematics. The
tendency today is to combine several of these techniques into more
sophisticated and advanced programming models.
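
To make the linear programming technique concrete, here is a minimal sketch in Python of a small two-variable product-mix problem solved by checking the corner points of the feasible region, just as the graphical method of Unit 2 does by hand. All figures, the profit coefficients and the two resource limits, are assumed purely for illustration.

# A hypothetical product-mix problem (all figures assumed):
#   Maximize profit 3x + 5y
#   subject to   x + 2y <= 8   (machine hours)
#                3x + y  <= 9   (raw material)
#                x >= 0, y >= 0
from itertools import combinations

# Every constraint, including non-negativity, written as a*x + b*y <= c
constraints = [(1, 2, 8), (3, 1, 9), (-1, 0, 0), (0, -1, 0)]

def intersection(c1, c2):
    # Point where the boundary lines of two constraints meet (None if parallel)
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

corners = []
for c1, c2 in combinations(constraints, 2):
    p = intersection(c1, c2)
    if p is not None and feasible(p):
        corners.append(p)

best = max(corners, key=lambda p: 3 * p[0] + 5 * p[1])
print("optimum at x =", best[0], "y =", best[1],
      "profit =", 3 * best[0] + 5 * best[1])

The optimum lies at a corner of the feasible region; the simplex method of Unit 3 reaches the same corner algebraically without enumerating every intersection, which is what makes it practical when there are many variables.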
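
For the waiting line technique, the standard single-server queue (Poisson arrivals at rate lam, exponential service at rate mu) gives the waiting-time and queue-length measures in closed form. The sketch below simply evaluates these formulas for assumed rates.

# Single-server queue (M/M/1): Poisson arrivals, exponential service.
# The arrival and service rates below are assumed for illustration.
lam = 8.0            # customers arriving per hour
mu = 10.0            # customers the server can handle per hour
rho = lam / mu       # utilization of the server (must be below 1)

L = rho / (1 - rho)            # average number of customers in the system
Lq = rho ** 2 / (1 - rho)      # average number waiting in the queue
W = 1 / (mu - lam)             # average time spent in the system (hours)
Wq = rho / (mu - lam)          # average waiting time in the queue (hours)

print(f"utilization = {rho:.0%}, queue length = {Lq:.1f} customers,"
      f" average wait = {Wq * 60:.0f} minutes")

Raising mu (adding service capacity) shortens the wait but increases idle time, which is exactly the balance between cost of service and cost of waiting that the technique seeks.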
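
The inventory question of how much to buy has a classical answer, the Economic Order Quantity (EOQ), which balances the annual ordering cost against the annual holding cost. The demand and cost figures in the sketch below are assumed for illustration.

from math import sqrt

# Economic Order Quantity under the usual textbook assumptions:
# steady demand, a fixed cost per order, and a linear holding cost.
D = 12000        # annual demand, in units (assumed)
Co = 150.0       # cost of placing one order (assumed)
Ch = 2.4         # cost of holding one unit in stock for a year (assumed)

eoq = sqrt(2 * D * Co / Ch)       # how much to buy at a time
orders_per_year = D / eoq         # how often to buy
annual_cost = Co * orders_per_year + Ch * eoq / 2

print(f"order about {eoq:.0f} units, roughly {orders_per_year:.1f} times a year;"
      f" ordering plus holding cost is about {annual_cost:.0f} per year")

At the EOQ the two cost components are equal, so the printed total is simply twice either one of them.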
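
The maximin–minimax reasoning of game theory (developed fully in Unit 11) can be illustrated on a small payoff matrix; the numbers used below are purely illustrative.

# Two-person zero-sum game: rows are player A's strategies, columns are
# player B's, and each entry is the (assumed) payoff to A.
payoff = [
    [4, 2, 3],
    [1, 5, 0],
    [3, 2, 2],
]

row_minima = [min(row) for row in payoff]         # worst outcome of each A strategy
col_maxima = [max(col) for col in zip(*payoff)]   # worst outcome of each B strategy
maximin = max(row_minima)                         # best of the worst for A
minimax = min(col_maxima)                         # best of the worst for B

print("maximin =", maximin, " minimax =", minimax)
if maximin == minimax:
    print("saddle point exists; the value of the game is", maximin)
else:
    print("no saddle point; mixed strategies are required (see Unit 12)")

Here maximin (2) and minimax (3) differ, so this particular game has no saddle point and the players must mix their strategies, a case taken up in Unit 12.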
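
A Monte Carlo simulation imitates an uncertain operation a large number of times with random numbers and reads the answer off the results. In the sketch below, both the stock level and the daily demand distribution are assumed for illustration; the run estimates how often a stockout would occur.

import random

# Imitate 10,000 days of operation to estimate how often a stock of
# 60 units runs out, when daily demand is normal with mean 50 and s.d. 8
# (both the stock level and the demand distribution are assumed).
random.seed(1)           # fixed seed so the run is repeatable
STOCK = 60
DAYS = 10_000

stockouts = 0
for _ in range(DAYS):
    demand = random.gauss(50, 8)     # one simulated day's demand
    if demand > STOCK:
        stockouts += 1

print("estimated probability of a stockout:", stockouts / DAYS)

For these figures the estimate (roughly 0.10) could also be read from normal tables; the value of simulation is that the same recipe still works when the system is too complicated for a formula.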
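
Finally, the earliest-time and latest-time calculations behind CPM and PERT (treated in full in Unit 14) amount to a forward pass and a backward pass over the activity network. The activities, durations and precedence relations below are assumed for illustration.

# Critical path by a forward pass (earliest times) and a backward pass
# (latest times).  Each entry: activity -> (duration, immediate predecessors).
activities = {
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
    "E": (1, ["D"]),
}

# Forward pass: earliest finish times.  The dictionary lists every activity
# after its predecessors, so one left-to-right scan is enough.
EF = {}
for act, (dur, preds) in activities.items():
    ES = max((EF[p] for p in preds), default=0)     # earliest start
    EF[act] = ES + dur
project_length = max(EF.values())

# Backward pass: latest start times.
LS = {}
for act in reversed(list(activities)):
    dur, _ = activities[act]
    successors = [a for a, (_, ps) in activities.items() if act in ps]
    LF = min((LS[s] for s in successors), default=project_length)
    LS[act] = LF - dur

# Critical activities have no float: earliest start equals latest start.
critical = [a for a, (dur, _) in activities.items() if LS[a] == EF[a] - dur]
print("project duration =", project_length, "; critical path:", " -> ".join(critical))

For these figures the critical path is A-B-D-E with a project duration of 13; delaying any activity on it delays the whole project, while activity C has two units of float.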

Check Your Progress
6. When was the subject of operational research developed?
7. Why is linear programming technique used?
8. Define waiting line theory.
9. Why is game theory used?
10. What does dynamic programming refer to?

1.6 OPERATIONS RESEARCH AND


MANAGEMENT

Operations Research plays a significant role in industries and empowers management
to make better decisions. The following are the important applications of OR in
business and management:
1.6.1 Significance of Operations Research
Operations Research has gained increasing importance since World War II in the
technology of business and industry administration. It greatly helps in tackling the
intricate and complex problems of modern business and industry. OR techniques
are, in fact, examples of the use of scientific method of management. The significance
of OR can be well understood under the following heads:
1. OR provides a tool for scientific analysis. OR provides the executives
with a more precise description of the cause and effect relationship and
risks underlying the business operations in measurable terms and this
eliminates the conventional intuitive and subjective basis on which
managements used to formulate their decisions decades ago. In fact, OR
replaces the intuitive and subjective approach of decision-making by an
analytical and objective approach. The use of OR has transformed the
conventional techniques of operational and investment problems in business
and industry. As such OR encourages and enforces disciplined thinking
about organizational problems.
2. OR provides solutions for various business problems. OR techniques
are being used in production, procurement, marketing, finance and other
allied fields. It can be used to solve problems like how best can managers
and executives allocate the available resources to various products so that
in a given time the profits are maximum or the cost is minimum, possibility of
Self-Instructional
12 Material
an industrial enterprise to arrange the time and quantity of orders of its Operations Research:
An Introduction
stocks such that the overall profit with given resources is maximum,
competence of business managers to determine the number of men and
machines to be employed and used in such a manner that neither remains
idle and at the same time the customer or the public has not to wait unduly NOTES
long for service, and similar other problems. Similarly we might have a
complex of industries; steel, machine tools and others, all employed in the
production of one item, say, steel. At any particular time we have a number
of choices of allocating resources such as money, steel and tools for
producing autos, building steel factories or tool factories. What should be
the policy which optimizes the total number of autos produced over a given
period? OR techniques are capable of providing an answer in such a situation.
Planning decisions in business and industry are largely governed by
the picture of anticipated demands. The potential long range profits of the
business may vary in accordance with different possible demand patterns.
The OR techniques serve to develop a scientific basis for coping with the
uncertainties of future demands. Thus, in dealing with the problem of
uncertainty over future sales and demands, OR can be used to generate ‘A
Least Risk’ Plan.
At times there may be a problem of finding an acceptable definition of
long range company objectives. Management may be confronted with
different viewpoints; some may stress the desirability of maximizing net
profit whereas others may focus attention primarily on the minimization of
costs. OR techniques, specially that of mathematical programming such as
linear programming can help resolve such dilemmas by permitting systematic
evaluation of the best strategies for attaining different objectives. These
techniques can also be used for estimating the worth of technical innovations
as also of potential profits associated with the possible changes in rules and
policies.
How much change can there be in the data on which a planning
formulation is based without undermining the soundness of the plan itself?
How accurately must managements know cost coefficients, production
performance figures and other factors before it can make planning decisions
with confidence? Many of the basic data required for the development of
long-range plans are uncertain. Though such uncertainties cannot be avoided,
through various OR techniques the management can know how critical
they are, and this in itself is a great help to business planners.
3. OR enables proper deployment of resources. OR renders valuable help in
proper deployment of resources. For example, Programme Evaluation and
Review Technique (PERT) enables us to determine the earliest and the latest
times for each of the events and activities and thereby helps in identification
of the critical path. All this helps in the deployment of resources from one
activity to another to enable the project completion on time. This technique,
thus provides for determining the probability of completing an event or project
itself by a specified date.
4. OR helps in minimizing waiting and servicing costs. The waiting line or
queuing theory helps management in minimizing the total waiting and servicing
costs. This technique also analyses the feasibility of adding facilities and
thereby helps businesspeople take correct and profitable decisions.
5. OR enables management to decide when to buy and how much to buy.
The main objective of inventory planning is to achieve a balance between
the cost of holding stocks and the benefits from stock-holding.
Hence, the technique of inventory planning enables the management to decide
when to buy and how much to buy.
6. OR assists in choosing an optimum strategy. Game theory is specially
used to determine the optimum strategy in a competitive situation and enables
businesspeople to maximize profits or minimize losses by adopting the
optimum strategy.
7. OR renders great help in optimum resource allocation. Linear
programming technique is used to allocate scarce resources in an optimum
manner in problems of scheduling, product-mix and so on. This technique
is popularly used by modern management in resource allocation and in
ensuring optimal assignments.
8. OR facilitates the process of decision-making. Decision theory enables
businessmen to select the best course of action when information is given in
a probabilistic form. Through decision tree (a network showing the logical
relationship between the different parts of a complex decision and the
alternative courses of action in any phase of a decision situation) technique
executive’s judgement can systematically be brought into the analysis of the
problems. Simulation is another important technique used to imitate an
operation or process prior to actual performance. The significance of
simulation lies in the fact that it enables finding out the effect of alternative
courses of action in a situation involving uncertainty where mathematical
formulation is not possible. Even complex groups of variables can be handled
through this technique.
9. Through OR management can know the reactions of the integrated
business systems. The Integrated Production Models technique is used to
minimize cost with respect to work force, production and inventory. This
technique is quite complex and is usually used by companies having detailed
information concerning their sales and costs statistics over a long period.
Besides, various other OR techniques also help management people in taking
decisions concerning various problems of business and industry. The
techniques are designed to investigate how the integrated business system
would react to variations in its component elements and/or external factors.
10. OR techniques help a lot in the preparation of future (or would be)
managers. In fact, OR techniques provide a means for improving the
knowledge and skill of youngsters in the field of management.
1.6.2 Operations Research and Modern Business Management
From what has been stated above, we can say that operational research renders
valuable service in the field of business management. It ensures improvement in
the quality of managerial decisions in all functional areas of management. The role
of OR in business management can be summed up as under:
OR techniques help the directing authority in optimum allocation of
various limited resources, viz., men, machines, money, material, time, etc., to
different competing opportunities on an objective basis for achieving effectively
the goal of a business unit. They help the chief executive in broadening management
vision and perspectives in the choice of alternative strategies to the decision
problems such as forecasting manpower, production capacities, capital requirements
and plans for their acquisition.
OR is useful to the production management in (i) Selecting the building
site for a plant, scheduling and controlling its development and designing its layout;
(ii) Locating within the plant and controlling the movements of required production
materials and finished goods inventories; (iii) Scheduling and sequencing production
by adequate preventive maintenance with optimum number of operatives by proper
allocation of machines; and (iv) Calculating the optimum product-mix.
OR is useful to the personnel management to find out (i) Optimum
manpower planning; (ii) The number of persons to be maintained on the permanent
or full time roll; (iii) The number of persons to be kept in a work pool intended for
meeting absenteeism; (iv) The optimum manner of sequencing and routing of
personnel to a variety of jobs; and (v) In studying personnel recruiting procedures,
accident rates and labour turnover.
OR techniques equally help the marketing management to determine
(i) Where distribution points and warehousing should be located; their size, quantity
to be stocked and the choice of customers; (ii) The optimum allocation of sales
budget to direct selling and promotional expenses; (iii) The choice of different
media of advertising and bidding strategies; and (iv) The consumer preferences
relating to size, colour, packaging, etc., for various products as well as to outbid
and outwit competitors.
OR is also very useful to the financial management in (i) Finding long
range capital requirements as well as how to generate these requirements;
(ii) Determining optimum replacement policies; (iii) Working out a profit plan for
the firm; (iv) Developing capital investments plans; and (v) Estimating credit and
investment risks.
In addition to all this, OR provides the business executives such an
understanding of the business operations which gives them new insights and
capability to determine better solutions for several decision-making problems
with great speed, competence and confidence. When applied on the level of
management where policies are formulated, OR assists the executives in an
advisory capacity, but on the operational level where production, personnel,
purchasing, inventory and administrative decisions are made, it provides
management with a means for handling and processing information. Thus, in
brief, OR can be considered as a scientific method of providing executive
departments with a quantitative basis for taking decisions regarding operations
under their control.

1.7 FEATURES AND METHODOLOGY OF OPERATIONS RESEARCH AND PHASES OF OPERATIONS RESEARCH STUDY
OR study generally involves three phases viz., the judgement phase, the research
phase and the action phase. Of these three, the research phase is the longest and
the largest, but the remaining two phases are very important since they provide the
basis for and implementation of the research phase respectively.
The judgement phase includes (i) The determination of the problem;
(ii) The establishment of the objectives and values related to the operation; and
(iii) The determination of suitable measures of effectiveness.
The research phase utilizes (i) Observations and data collection for better
understanding of the problem; (ii) Formulation of hypothesis and models;
(iii) Observation and experimentation to test the hypothesis on the basis of
additional data; and (iv) Predictions of various results from the hypothesis,
generalization of the result and consideration of alternative methods.
The action phase in the OR consists of making recommendations for
decision process. As such this phase deals with the implementation of the tested
results of the model. This phase is executed primarily through the cooperation of
the OR experts on the one hand and those who are responsible for operating the
system on the other.
1.7.1 Methodology of Operations Research
In view of the above referred phases the methodology of OR generally involves
the following steps:
1. Formulating the Problem: The first step in an OR study is to formulate
the problem in an appropriate form. Formulating a problem consists in
identifying, defining and specifying the measures of the components of a

decision model. This means that all quantifiable factors which are pertinent
to the functioning of the system under consideration are defined in
mathematical language as variables (factors which are controllable) and
parameters or coefficients (factors which are not controllable), along with the
constraints on the variables and the determination of suitable measures of
effectiveness.
2. Constructing the Model: The second step consists in constructing the
model by which we mean that appropriate mathematical expressions are
formulated which describe interrelations of all variables and parameters.
In addition, one or more equations or inequalities are required to express
the fact that some or all of the controlled variables can only be manipulated
within limits. Such equations or inequalities are termed as constraints or
the restrictions. The model must also include an objective function which
defines the measure of effectiveness of the system. The objective function
and the constraints, together constitute a model of the problem that we
want to solve. This model describes the technology and the economics of
the system under consideration through a set of simultaneous equations
and inequalities.
3. Deriving the Solution: Once the model is constructed the next step in an
OR study is that of obtaining the solution to the model, i.e., finding the
optimal values of the controlled variables—values that produce the best
performance of the system for specified values of the uncontrolled variables.
In other words, an optimum solution is determined on the basis of the various
equations of the model satisfying the given constraints and inter-relations of
the system and at the same time maximizing profit or minimizing cost or
coming as close as possible to some other goal or criterion. How the solution
can be derived depends on the nature of the model. In general, there are
three methods available for the purpose viz., the analytical methods, the
numerical methods and the simulation methods. Analytical methods involve
expressions of the model by mathematical computations and the kind of
mathematics required depends upon the nature of the model under
consideration. This sort of mathematical analysis can be conducted only in
some cases without any knowledge of the values of the variables but in
others the values of the variables must be known concretely or numerically.
In the latter case, we use numerical methods, which are concerned with
iterative procedures through the use of numerical computations at each step.
The algorithm (or the set of computational rules) is started with a trial or
initial solution and continued with a set of rules for improving it towards
optimality. The initial solution is then replaced by the improved one and the
process is repeated until no further improvement is possible. But in those
cases where the analytical as well as the numerical methods cannot be used
for deriving the solution, we use simulation methods, i.e., we conduct
experiments on the model in which we select values of the uncontrolled
variables with the relative frequencies dictated by their probability
distributions. The simulation methods involve the use of probability and
sampling concepts and are generally used with the help of computers.
Whichever method is used, our objective is to find an optimal or near-optimal
solution, i.e., a solution which optimizes the measure of effectiveness
in a model. (A minimal sketch of the iterative-improvement idea underlying
the numerical methods is given after this list.)
4. Testing the Validity: The solution values of the model, obtained as stated
in step three above, are then tested against actual observations. In other
words, effort is made to test the validity of the model used. A model is
supposed to be valid if it can give a reliable prediction of the performance
of the system represented through the model. If necessary, the model may
be modified in the light of actual observations and the whole process is
repeated till a satisfactory model is attained. The operational researcher
quite often realizes that his model must be a good representation of the
system and must correspond to reality which in turn requires this step of
testing the validity of the model in an OR study. In effect, performance of
the model must be compared with the policy or procedure that it is meant to
replace.
5. Controlling the Solution: This step of an OR study establishes control
over the solution by proper feedback of information on variables which
might have deviated significantly. As such the significant changes in the
system and its environment must be detected and the solution must
accordingly be adjusted.
6. Implementing the Results: Implementing the results constitutes the last
step of an OR study. The objective of OR is not merely to produce reports
but to improve the performance of systems. The results of the research
must be implemented if they are accepted by the decision-makers. It is
through this step that the ultimate test and evaluation of the research is
made and it is in this phase of the study the researcher has the greatest
opportunity for learning.
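To make the iterative numerical methods mentioned in Step 3 concrete, the following is a minimal illustrative sketch in Python (not part of the original text; the objective function and step rule are purely hypothetical) of the generic procedure of starting with a trial solution and repeatedly improving it until no further improvement is possible:

# A minimal iterative-improvement loop: start with a trial solution and
# repeatedly replace it by a better neighbouring solution until no further
# improvement is possible. The objective below is a hypothetical example.

def objective(x):
    # Hypothetical measure of effectiveness to be maximized.
    return -(x - 3.0) ** 2 + 9.0

def iterative_improvement(x0, step=1.0, tol=1e-6):
    x, best = x0, objective(x0)
    while step > tol:
        improved = False
        for candidate in (x + step, x - step):   # neighbouring trial solutions
            value = objective(candidate)
            if value > best:                     # keep the improved solution
                x, best, improved = candidate, value, True
        if not improved:
            step /= 2.0                          # refine the search when stuck
    return x, best

solution, value = iterative_improvement(x0=0.0)
print(solution, value)    # approaches x = 3 with objective value 9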
Thus, the procedure for an OR study generally involves some major steps viz.,
formulating the problem, constructing the mathematical model to represent the
system under study, deriving a solution from the model, testing the model and the
solution so derived, establishing controls over the solution and lastly putting the
solution to work, i.e., implementation. Although the said phases and steps are usually
initiated in the order listed in an OR study, it should always be kept in mind that
they are likely to overlap in time and to interact; each phase usually continues until
the study is completed.

Flow Chart Showing OR Approach
OR approach can be well illustrated through the following flow chart:
[The flow chart depicts the OR approach as follows.] Facts, opinions and symptoms of the problem (the information needs) feed into: Define the problem → Determine the factors affecting the problem (variables, constraints and assumptions), using technical, financial and economic information from all sources → Model development, using the tools of the trade → Develop the objective (maximise or minimise some function) and alternative solutions → Analyse the alternatives and derive the optimum solution, drawing on detailed information on the factors (computer services may be used) → Determination of validity, recommended action, control and implementation.

Check Your Progress


11. What is the significance of operations research?
12. How does operations research provide solutions for various business
problems?
13. How is operations research useful to production management?
14. What does judgement phase include?
15. What is the first step in OR study?

1.8 MODELS IN OPERATIONS RESEARCH AND METHODS OF DERIVING THE SOLUTION
A model in OR is a simplified representation of an operation or a process in
which only the basic aspects or the most important features of a typical problem
under investigation are considered. The objective of a model is to identify significant
factors and interrelationships. The reliability of the solution obtained from a model
depends on the validity of the model representing the real system.
A good model must possess the following characteristics:
(i) It should be capable of taking into account new formulation without
having any changes in its frame.
(ii) Assumptions made in the model should be as few as possible.
(iii) Variables used in the model should be few in number, ensuring that it is
simple and coherent.
(iv) It should be open to parametric type of treatment.
(v) It should not take much time in its construction for any problem.
1.8.1 Advantages of a Model
There are certain significant advantages gained when using a model. These are:
(i) Problems under consideration become controllable through a model.
(ii) It provides a logical and systematic approach to the problem.
(iii) It provides the limitations and scope of an activity.
(iv) It helps in finding useful tools that eliminate duplication of methods applied
to solve problems.
(v) It helps in finding solutions for research and improvements in a system.
(vi) It provides an economic description and explanation of either the operation,
or the systems they represent.
1.8.2 Classification of Models
The classification of models is a subjective problem. They may be distinguished as
follows:
(i) Models by Degree of Abstraction
(ii) Models by Function
(iii) Models by Structure
(iv) Models by Nature of an Environment
(v) Models by the Extent of Generality
Models by Function
These models consist of (i) Descriptive Models, (ii) Predictive Models, and
(iii) Normative or Optimization Models.
Descriptive and Predictive Models: These models describe and predict
facts and relationships among the various activities of the problem. These models
do not have an objective function as a part of the model to evaluate decision
alternatives. In this model, it is possible to get information as to how one or more
factors change as a result of changes in other factors.
Normative or Optimization Models: They are prescriptive in nature and
develop objective decision-rule for optimum solutions.

Models by Structure
These models are represented by (i) Iconic or Physical models, (ii) Analog models,
and (iii) Mathematic or Symbolic models.
Iconic or Physical Models: These are pictorial representations of real
systems and have the appearance of the real thing. An iconic model is said to be
scaled down or scaled up according to the dimensions of the model which may be
smaller or greater than that of the real item, e.g., city maps, blue prints of houses,
globe and so on. These models are easy to observe and describe but are difficult
to manipulate and are not very useful for the purposes of prediction.
Analog Models: These are more abstract than the iconic ones. There is no look-
alike correspondence between these models and real life items. The models in
which one set of properties is used to represent another set of properties are
called analog models. After the problem is solved, the solution is reinterpreted in
terms of the original system. These models are less specific, less concrete but
easier to manipulate than iconic models.
Mathematic or Symbolic Models: These are most abstract in nature in
comparison to others. They employ a set of mathematical symbols to represent
the components of the real system. These variables are related together by means
of mathematical equations to describe the behaviour of the system. The solution of
the problem is then obtained by applying well developed mathematical techniques
to the model. The symbolic model is usually the easiest to manipulate experimentally
and it is the most general and abstract. Its function is more explanatory than
descriptive.
Models by Nature of an Environment
These models can be classified into (i) Deterministic models, and (ii) Probabilistic
or Stochastic models.
Deterministic Models: In these models, all parameters and functional
relationships are assumed to be known with certainty when the decision is to be
made. Linear programming and break even models are the examples of deterministic
models.
Probabilistic or Stochastic Models: These models are those in which
at least one parameter or decision variable is a random variable. These models
reflect to some extent the complexity of the real world and the uncertainty
surrounding it.
Models by the Extent of Generality
These models can be categorized into (i) Specific models, and (ii) General models.
Specific Models: When a model presents a system at some specific time,
it is known as a specific model. In these models, if the time factor is not considered,
they are termed as static models. An inventory problem of determining
economic order quantity for the next period, assuming that the demand in the planning
period would remain the same as that of today, is an example of a static model. Dynamic
programming may be considered as an example of a dynamic model.
General Models: Simulation and heuristic models fall under the category
of general models. These models are used to explore alternative strategies which
have been overlooked previously.

1.9 LIMITATIONS OF OPERATIONS RESEARCH

OR, though a great aid to management as outlined above, still cannot be a
substitute for decision-making. The choice of a criterion as to what is actually best
for a business enterprise is still that of an executive who has to fall back upon his
experience and judgement. This is so because of the several limitations of OR.
Important limitations are given below:
1. The inherent limitations concerning mathematical expressions. OR
involves the use of mathematical models, equations and similar other
mathematical expressions. Assumptions are always incorporated in the
derivation of an equation or model and such an equation or model may be
correctly used for the solution of the business problems when the underlying
assumptions and variables in the model are present in the concerned problem.
If due care is not taken in this regard, there always remains the possibility
of wrong application of OR techniques. Quite often the operations
researchers have been accused of having many solutions without being able
to find problems that fit.
2. High costs are involved in the use of OR techniques. OR techniques
usually prove very expensive. Services of specialized persons are invariably
called for, and along with this the cost of using and maintaining a computer must also be
considered while using OR techniques. Hence, only big concerns can think
of using such techniques. Even in big business organizations we can expect
that OR techniques will continue to be of limited use simply because they
are not in many cases worth their cost. As opposed to this a typical manager,
exercising intuition and judgement, may be able to make a decision very
inexpensively. Thus, the use of OR is a costlier affair and this constitutes an
important limitation of OR.
3. OR does not take into consideration the intangible factors, i.e., non-
measurable human factors. OR makes no allowance for intangible factors
such as skill, attitude, vigour of the management in taking decisions but in
many instances success or failure hinges upon the consideration of such
non-measurable intangible factors. There cannot be any magic formula for
getting an answer to management problems but it depends upon proper
managerial attitudes and policies.

4. OR is only a tool of analysis and not the complete decision-making
process. It should always be kept in mind that OR alone cannot make the
final decision. It is just a tool and simply suggests best alternatives. In the
final analysis many business decisions will involve human element. Thus,
OR is at best a supplement to rather than a substitute for management;
subjective judgement is likely to remain a principal approach to decision-
making.
5. Other limitations. Among other limitations of OR, the following deserve
mention:
(i) Bias. The operational researchers must be unbiased. An attempt to
shoehorn results into a confirmation of management’s prior preferences
can greatly increase the likelihood of failure.
(ii) Inadequate objective functions. The use of a single objective function
is often an insufficient basis for decisions. Laws, regulations, public
relations, market strategies, etc., may all serve to overrule a choice
arrived at in this way.
(iii) Internal resistance. The implementation of an optimal decision may
also confront internal obstacles such as trade unions or individual
managers with strong preferences for other ways of doing the job.
(iv) Competence. Competent OR analysis calls for the careful specification
of alternatives, a full comprehension of the underlying mathematical
relationships and a huge mass of data. Formulating an industrial
problem as an OR programme is quite often a difficult task.
(v) Reliability of the prepared solution. At times a non-linear relationship
is changed to linear for fitting the problem to the LP pattern. This may
disturb the solution.

Check Your Progress


16. What is an OR model? What is its objective?
17. How are the models classified?
18. Define descriptive models.
19. Define deterministic models.
20. OR is only a tool of analysis and not the complete decision-making process.
How?

1.10 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. The concept of Operations Research (OR) came into existence in a military
context during World War II, when military management wanted to arrive
at decisions on optimal utilization of scarce military resources with a new
approach to the systematic and scientific study of the operations of the
system.
2. OR is the application of scientific knowledge through interdisciplinary team effort for the
purpose of determining the best utilization of limited resources.
3. Essential characteristics of operations research are:
(i) System orientation
(ii) Use of interdisciplinary terms
(iii) Application of scientific methods
(iv) Uncovering new problems
(v) Quantitative solutions
(vi) Human factors
4. Following are the areas where concept of OR is applied:
(i) Agriculture
(ii) Finance
(iii) Industry
(iv) Marketing
(v) Personnel Management
(vi) Production Management
(vii) Research and Development
5. Major phases involved in the application of OR are:
(i) Formulating the problem.
(ii) Constructing a mathematical model
(iii) Deriving the solution from the model
(iv) Testing the model and its solution (updating the model)
(v) Controlling the solution
(vi) Implementation
6. The subject of Operational Research (OR) was developed in military context
during World War II, pioneered by the British scientists.
7. Linear programming technique is used in finding a solution for optimizing a
given objective, such as profit maximization or cost minimization under certain
constraints. This technique is primarily concerned with the optimal allocation
of limited resources for optimizing a given function.
8. Waiting line or queuing theory deals with mathematical study of queues.
The queues are formed whenever the current demand for service exceeds
the current capacity to provide that service. Waiting line technique concerns
itself with the random arrival of customers at a service station where the
facility is limited.
9. Game theory is used to determine the optimum strategy in a competitive
situation.
10. Dynamic programming refers to a systematic search for optimal solutions to
problems that involve many highly complex interrelations that are, moreover,
sensitive to multistage effects, such as successive time phases.
11. Operations Research has gained increasing importance since World War II
in the technology of business and industry administration. It greatly helps in
tackling the intricate and complex problems of modern business and industry.
12. OR techniques are being used in production, procurement, marketing, finance
and other allied fields. It can be used to solve problems like how best can
managers and executives allocate the available resources to various products
so that in a given time the profits are maximum or the cost is minimum.
13. OR is useful to the production management in:
(i) Selecting the building site for a plant, scheduling and controlling its
development and designing its layout
(ii) Locating within the plant and controlling the movements of required
production materials and finished goods inventories
(iii) Scheduling and sequencing production by adequate preventive
maintenance with optimum number of operatives by proper allocation
of machines
(iv) Calculating the optimum product-mix.
14. The judgement phase includes:
(i) A determination of the problem
(ii) The establishment of the objectives and values related to the operation
(iii) The determination of suitable measures of effectiveness.
15. The first step in an OR study is to formulate the problem in an appropriate
form. Formulating a problem consists in identifying, defining and specifying
the measures of the components of a decision model.
16. A model in OR is a simplified representation of an operation or as a process
in which only the basic aspects or the most important features of a typical
problem under investigation are considered. The objective of a model is to
identify significant factors and interrelationships.
17. The classification of models is a subjective problem. They may be
distinguished as follows:
(i) Models by degree of abstraction
(ii) Models by function
(iii) Models by structure
(iv) Models by nature of an environment
(v) Models by the extent of generality
18. Descriptive models describe facts and relationships among the various
activities of the problem. These models do not have an objective function
as a part of the model to evaluate decision alternatives. In this model, it is
possible to get information as to how one or more factors change as a result
of changes in other factors.
19. In deterministic models, all parameters and functional relationships are
assumed to be known with certainty when the decision is to be made. Linear
programming and break even models are the examples of deterministic
models.
20. It should always be kept in mind that OR alone cannot make the final
decision. It is just a tool and simply suggests best alternatives. In the final
analysis many business decisions will involve human element. Thus, OR is
at best a supplement to rather than a substitute for management; subjective
judgement is likely to remain a principal approach to decision-making.

1.11 SUMMARY

 The term Operations Research (OR) was first coined in 1940 by J.F.
McCloskey and F.N. Trefethen.
 OR is a scientific method of providing executive departments with a
quantitative basis for decisions regarding the operations under their control.
 OR can be considered as being the application of scientific method by
interdisciplinary teams to problems involving the control of organized (man-
machine) systems to provide solutions which best serve the purpose of the
organization as a whole.
 OR emphasizes on the overall approach to the system. This characteristic
of OR is often referred as system oriented.
 OR involves scientific and systematic attack of complex problems to arrive
at the optimum solution.
 Linear programming technique is used in finding a solution for optimizing a
given objective, such as profit maximization or cost minimization under certain
constraints.
 In OR, waiting line or queuing theory deals with mathematical study of queues
which are formed whenever the current demand for service exceeds the
current capacity to provide that service.
 In OR, decision theory concerns with making sound decisions under
conditions of certainty, risk and uncertainty.
 OR techniques are being used in production, procurement, marketing, finance
and other allied fields. Through OR, management can know the reactions
of the integrated business systems. The Integrated Production Models
technique is used to minimize cost with respect to work force, production
and inventory.
 OR provides the business executives such an understanding of the business
operations which gives them new insights and capability to determine better
solutions for several decision-making problems with great speed,
competence and confidence.
 OR study generally involves three phases, viz., the judgement phase, the
research phase and the action phase.
 The procedure for an OR study generally involves some major steps viz.,
formulating the problem, constructing the mathematical model to represent
the system under study, deriving a solution from the model, testing the model
and the solution so derived, establishing controls over the solution and putting
the solution to work, i.e., implementation.
 OR is generally concerned with problems that are tactical rather than strategic
in nature.
 A model in OR is a simplified representation of an operation or a process in
which only the basic aspects or the most important features of a typical
problem under investigation are considered.
 Models by function consist of descriptive models, predictive models and
normative models.
 Models by structure are represented by iconic or physical models, analog
models and mathematic or symbolic models.
 Models by nature of an environment can be classified into deterministic
models and probabilistic models.
 Models by the extent of generality can be categorized into specific models
and general models.

1.12 KEY WORDS

 Operations research: The application of scientific knowledge through
interdisciplinary team effort for the purpose of determining the best utilization
of limited resources
 Model: A simplified representation of an operation or a process that
considers only the basic aspects or the most important features of a typical
problem under investigation with an objective to identify significant factors
and their interrelationships
 Descriptive model: A type of model that describes facts and relationships
among the various activities of the problem and collects information on
factors that change as a result of changes in other factors
 Normative or optimization models: These models are prescriptive in
nature and develop objective decision-rule for optimum solutions

 Iconic or physical models: Pictorial representations of real systems that
have the appearance of the real thing.
 Analog models: These are abstract models and one set of properties is
used to represent another set of properties
 Mathematic or symbolic models: These models are most abstract in
nature and employ a set of mathematical symbols to represent the
components of the real system

1.13 SELF-ASSESSMENT QUESTIONS AND EXERCISES
Short-Answer Questions
1. Where did the concept of operations research originate?
2. Name the fields where operations research can be used.
3. What do you mean by nature of operations research?
4. Which Indian companies use operations research?
5. What is inventory control? What is its importance?
6. What is decision theory?
7. What do you understand by network analysis?
8. What is non-linear programming?
9. What is integer programming?
10. Describe heuristic programming and algorithmic programming. When are
these used?
11. What is search theory? In which situation is it applied?
12. What do you understand by theory of replacement? What is its significance?
13. How does operations research enable proper deployment of resources?
14. How do operations research techniques help the directing authority?
15. Define research phase.
16. What do you mean by testing the validity?
17. How do competitive problems arise?
18. What is a model in operations research?
19. Write any one limitation of operations research.
Long-Answer Questions
1. Explain the meaning and origin of operations research with the help of
definitions and examples.
2. Discuss the nature of operations research with the help of examples.
3. Write an essay on the development of operations research.
4. Discuss operations research in India.
5. How is operations research used as a tool in decision-making?
6. Explain the various types of operations research techniques.
7. How is operations research useful to management?
8. Explain the significance of operations research in management process.
9. Explain the role of operations research in modern business management.
10. Explain the various phases of operations research study.
11. Describe operations research approach with the help of a flow chart.
12. Explain the various categories under which operations research problems
are classified.
13. How can you classify operations research models? Explain each type with
the help of an example.
14. What should be the characteristics of a good operations research model?
Explain.
15. What are the advantages of using a model? Explain and also give your own
opinion.
16. Write about application of operations research in modern business
environment. Explain with suitable examples.
17. Explain the limitations of operations research with the help of suitable
examples.

1.14 FURTHER READINGS

Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New
Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.
Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.
Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata
McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi:
Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.

UNIT 2 LINEAR PROGRAMMING PROBLEMS
Structure
2.0 Introduction
2.1 Objectives
2.2 Linear Programming Problem
2.2.1 Meaning of Linear Programming Problem
2.2.2 Fields Where Linear Programming can be Used
2.3 Mathematical Formulation of the Problem
2.3.1 Basic Concepts and Notations
2.3.2 General Form of the Linear Programming Model
2.4 Illustration on Mathematical Formulation of Linear Programming Problems
2.5 Graphical Solution Method
2.5.1 Graphic Solution
2.5.2 Some Exceptional Cases
2.6 Answers to Check Your Progress Questions
2.7 Summary
2.8 Key Words
2.9 Self-Assessment Questions and Exercises
2.10 Further Readings

2.0 INTRODUCTION

Linear Programming (LP) method was first formulated by a Russian mathematician


L.V. Kantorovich, but it was developed later in 1947 by George B. Dantzig. ‘For
the purpose of scheduling the complicated procurement activities of the United
States Air Force’. Today, this method is being used in solving a wide range of
practical business problems. The advent of electronic computers has further
increased its applications to solve many other problems in industry. It is being
considered as one of the most versatile management tools.
Linear Programming (LP) is essentially a technique of decision-making. For a manufacturing process,
a production manager has to take decisions as to what quantities and which process
or processes are to be used so that the cost is minimum and profit is maximum.
Currently, this method is used in solving a wide range of practical business problems.
The word ‘Linear’ means that the relationships are represented by straight lines.
The word ‘Programming’ means following a method for taking decisions
systematically. Linear programming problem may be solved using a simplified
version of the simplex technique called the transportation method. Linear programming
is a mathematical method for determining the best way to achieve a desired outcome,
i.e., maximum profit at lowest cost. Graphical methods with a mathematical basis
in operations research include diagram techniques, chart techniques, plot
techniques, and other forms of visualization.
In this unit, you will study about the concepts of linear programming problem,
mathematical formulation of the problem, illustration on mathematical formulation
of linear programming problem, and graphical solution method.

2.1 OBJECTIVES

After going through this unit, you will be able to:


 Understand the significance of linear programming problem
 Elaborate on the concept of linear programming
 Analyse the mathematical formulation of the problem
 Know about the general form of the linear programming model
 Illustrate about the mathematical formulation of linear programming
 Explain the graphical solution method for solving linear programming
problems

2.2 LINEAR PROGRAMMING PROBLEM

Decision-making has always been very important in the business and industrial
world, particularly with regard to the problems concerning production of
commodities. Which commodity/commodities to produce, in what quantities
and by which process or processes, are the main questions before a production
manager. English economist Alfred Marshall pointed out that the businessman
always studies his production function and his input prices and substitutes one
input for another till his costs become the minimum possible. All this sort of
substitution, in the opinion of Marshall, is being done by businessman’s trained
instinct rather than with formal calculations. But now there does exist a method
of formal calculations often termed as Linear Programming. This method was
first formulated by a Russian mathematician L.V. Kantorovich, but it was
developed later in 1947 by George B. Dantzig ‘for the purpose of scheduling
the complicated procurement activities of the United States Air Force’. Today,
this method is being used in solving a wide range of practical business problems.
The advent of electronic computers has further increased its applications to
solve many other problems in industry. It is being considered as one of the most
versatile management tools.
2.2.1 Meaning of Linear Programming
Linear Programming (LP) is a major innovation since World War II in the field
of business decision-making, particularly under conditions of certainty. The
word ‘Linear’ means that the relationships are represented by straight lines,
i.e., the relationships are of the form y = a + bx and the word ‘Programming’
means taking decisions systematically. Thus, LP is a decision-making technique
under given constraints on the assumption that the relationships amongst the
variables representing different phenomena happen to be linear. In fact, Dantzig
originally called it ‘programming of interdependent activities in a linear structure’
but later shortened it to ‘Linear Programming’. LP is generally used in solving
maximization (sales or profit maximization) or minimization (cost minimization)
problems subject to certain assumptions. Putting in a formal way, ‘Linear
Programming is the maximization (or minimization) of a linear function of
variables subject to a constraint of linear inequalities.’ Hence, LP is a
mathematical technique designed to assist the organization in optimally allocating
its available resources under conditions of certainty in problems of scheduling,
product-mix and so on.
2.2.2 Fields Where Linear Programming can be Used
The problem for which LP provides a solution may be stated to maximize or
minimize for some dependent variable which is a function of several independent
variables when the independent variables are subject to various restrictions. The
dependent variable is usually some economic objectives, such as profits, production,
costs, work weeks, tonnage to be shipped, etc. More profits are generally preferred
to less profits and lower costs are preferred to higher costs. Hence, it is appropriate
to represent either maximization or minimization of the dependent variable as one
of the firm’s objective. LP is usually concerned with such objectives under given
constraints with linearity assumptions. In fact, it is powerful to take in its stride a
wide range of business applications. The applications of LP are numerous and are
increasing every day. LP is extensively used in solving resource allocation problems.
Production planning and scheduling, transportation, sales and advertising, financial
planning, portfolio analysis, corporate planning, etc., are some of its most fertile
application areas. More specifically, LP has been successfully applied in the
following fields:
(i) Agricultural applications: LP can be applied in farm management
problems as it relates to the allocation of resources such as acreage,
labour, water supply or working capital in such a way that it maximizes
net revenue.
(ii) Contract awards: Evaluation of tenders by recourse to LP guarantees
that the awards are made in the cheapest way.
(iii) Industrial applications: Applications of LP in business and industry
are of most diverse kind. Transportation problems concerning cost
minimization can be solved by this technique. The technique can also
be adopted in solving the problems of production (product-mix) and
inventory control.
Thus, LP is the most widely used technique of decision-making in business and
industry in modern times in various fields as stated above.

2.3 MATHEMATICAL FORMULATION OF THE PROBLEM
The following are the concepts, notations and forms used in a linear programming
model:
2.3.1 Basic Concepts and Notations
There are certain basic concepts and notations to be first understood for easy
adoption of the LP technique. A brief mention of such concepts is as follows:
1. Linearity: The term linearity implies straight line or proportional relationships
among the relevant variables. Linearity in economic theory is known as
constant returns which means that if the amount of input doubles, the
corresponding output and profit are also doubled. Linearity assumption,
thus, implies that if two machines and two workers can produce twice as
much as one machine and one worker; four machines and four workers
twice as much as two machines and two workers, and so on.
2. Process and its Level: Process means the combination of particular inputs
to produce a particular output. In a process, factors of production are used
in fixed ratios, of course, depending upon technology and as such no
substitution is possible with a process. There may be many processes open
to a firm for producing a commodity and one process can be substituted for
another. There is, thus, no interference of one process with another when
two or more processes are used simultaneously. If a product can be produced
in two different ways, then there are two different processes (or activities
or decision variables) for the purpose of a linear programme.
3. Criterion Function: Criterion function is also known as objective function
which states the determinants of the quantity either to be maximized or to
be minimized. For example, revenue or profit is such a function when it is to
be maximized or cost is such a function when the problem is to minimize it.
An objective function should include all the possible activities with the revenue
(profit) or cost coefficients per unit of production or acquisition. The goal
may be either to maximize this function or to minimize this function. In
symbolic form, let ZX denote the value of the objective function at the X
level of the activities included in it. This is the total sum of individual activities
produced at a specified level. The activities are denoted as j =1, 2,..., n.
The revenue or cost coefficient of the jth activity is represented by Cj.
Thus, 2X1 implies that X1 units of activity j = 1 yield a profit (or loss) of C1 = 2 per unit.
4. Constraints or Inequalities: These are the limitations under which one
has to plan and decide, i.e., restrictions imposed upon decision variables.
For example, a certain machine requires one worker to be operated upon;
another machine requires at least four workers (i.e., ≥ 4); there are at most
20 machine hours (i.e., ≤ 20) available; the weight of the product should be,
say, 10 lbs, and so on. These are all examples of constraints, or what are known as
inequalities. Inequalities like X > C (reads X is greater than C) or X < C
(reads X is less than C) are termed as strict inequalities. The constraints
may be in the form of weak inequalities like X ≤ C (reads X is less than or equal
to C) or X ≥ C (reads X is greater than or equal to C). Constraints may also be
in the form of strict equalities like X = C (reads X is equal to C).
Let bi denote the quantity of resource i available for use in the various
production processes. The coefficient aij attached to resource i in activity j is the quantity
of resource i required for the production of one unit of product j.
5. Feasible Solutions: Feasible solutions are all those possible solutions which
can be worked upon under given constraints. The region comprising of all
feasible solutions is referred as Feasible Region.
6. Optimum Solution: Optimum solution is the best of the feasible solutions.
2.3.2 General Form of the Linear Programming Model
Linear Programming problem mathematically can be stated as under:
Choose the quantities,
Xj ≥ 0 (j = 1,..., n) ...(2.1)
This is also known as the non-negativity condition and in simple terms means
that no X can be negative.
To maximize,
Z = C1X1 + C2X2 + ... + CnXn    ...(2.2)
Subject to the constraints,
ai1X1 + ai2X2 + ... + ainXn ≤ bi    (i = 1,..., m)    ...(2.3)
The above is the usual structure of a linear programming model in the simplest
possible form. This model can be interpreted as a profit maximization situation
where n production activities are pursued at level Xj which have to be decided
upon, subject to a limited amount of m resources being available. Each unit of the
jth activity yields a return Cj and uses an amount aij of the ith resource. Z denotes
the optimal value of the objective function for a given system.
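The general model above translates directly into software. The following minimal Python sketch (not part of the original text; all numerical values are hypothetical and chosen only to illustrate the structure) supplies the coefficients Cj, aij and bi of such a model to the linprog routine of the SciPy library. Since linprog minimizes, the maximization is performed by negating the Cj:

# Sketch of the general model: maximize Z = sum of Cj*Xj
# subject to sum of aij*Xj <= bi and Xj >= 0 (hypothetical coefficients).
from scipy.optimize import linprog

C = [5, 4, 3]                     # Cj: return per unit of activity j
A = [[2, 3, 1],                   # aij: resource i used per unit of activity j
     [4, 1, 2],
     [3, 4, 2]]
b = [5, 11, 8]                    # bi: amount of resource i available

result = linprog(c=[-cj for cj in C],          # negate Cj to maximize
                 A_ub=A, b_ub=b,
                 bounds=[(0, None)] * len(C),  # non-negativity condition
                 method="highs")

print("Optimal activity levels:", result.x)    # about (2, 0, 1) for these numbers
print("Maximum Z:", -result.fun)               # about 13 for these numbers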
Assumptions or the Conditions to be Fulfilled Underlying the LP Model
LP model is based on the assumptions of proportionality, additivity, certainty,
continuity and finite choices.
Proportionality is assumed in the objective function and the constraint
inequalities. In economic terminology this means that there are constant returns to
scale, i.e., if one unit of a product contributes ₹ 5 toward profit, then 2 units will
contribute ₹ 10, 4 units ₹ 20, and so on.
Certainty assumption means the prior knowledge of all the coefficients in
the objective function, the coefficients of the constraints and the resource values.
LP model operates only under conditions of certainty.
Additivity assumption means that the total of all the activities is given by the
sum total of each activity conducted separately. For example, the total profit in the
objective function is equal to the sum of the profit contributed by each of the
products separately.
Continuity assumption means that the decision variables are continuous.
Accordingly the combinations of output with fractional values, in case of product-
mix problems, are possible and obtained frequently.
Finite choices assumption implies that finite number of choices are available
to a decision-maker and the decision variables do not assume negative values.

Check Your Progress


1. What is linear programming problem?
2. Define linear programming.
3. What are the fields where linear programming can be used?
4. Explain the term linearity.
5. State about the criterion function.
6. What do you mean by the constraints or inequalities?
7. Elaborate on the term feasible solutions.

2.4 ILLUSTRATION ON MATHEMATICAL FORMULATION OF LINEAR PROGRAMMING PROBLEMS
The applications of linear programming problems are based on linear
programming matrix coefficients and data transmission prior to solving the simplex
algorithm. The problem can be formulated from the problem statement using
linear programming techniques. The following are the steps in formulating a linear programming problem:
 Identify the objective of the linear programming problem, i.e., which quantity
is to be optimized. For example, maximize the profit.
 Identify the decision variables and constraints used in linear programming,
for example, production quantities and production limitations are taken as
decision variables and constraints.

 Identify the objective functions and constraints in terms of decision variables
using information from the problem statement to determine the proper
coefficients.
 Add implicit constraints, such as non-negative restrictions.
 Arrange the system of equations in a consistent form and place all the
variables on the left side of the equations.
Applications of Linear Programming
Linear programming problems are associated with the efficient use of allocation of
limited resources to meet desired objectives. A solution required to solve the linear
programming problem is termed as an optimal solution. Linear programming
problems form a very special subclass of mathematical models in which the
relationships involved are straight-line, i.e., linear. The following are the applications of linear programming:
 Transportation problem
 Diet problem
 Matrix games
 Portfolio optimization
 Crew scheduling
Linear programming problem may be solved using a simplified version of the simplex
technique called transportation method. Because of its major application in solving
problems involving several product sources and several destinations of products,
this type of problem is frequently called the transportation problem. It gets its
name from its application to problems involving transporting products from several
sources to several destinations. The formulation is used to represent more general
assignment and scheduling problems as well as transportation and distribution
problems. The two common objectives of such problems are:
 To minimize the cost of shipping m units to n destinations.
 To maximize the profit of shipping m units to n destinations.
The goal of the diet problem is to find the cheapest combination of foods
that will satisfy all the daily nutritional requirements of a person. The problem is
formulated as a linear program where the objective is to minimize cost and meet
constraints which require that nutritional needs be satisfied. The constraints are
used to regulate the number of calories and amounts of vitamins, minerals, fats,
sodium and cholesterol in the diet.
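As a concrete sketch of the diet problem just described (the foods, nutrient contents, costs and requirements below are entirely hypothetical and not taken from the text), the cheapest combination of foods can be found with SciPy's linprog; because linprog accepts only 'less than or equal to' rows, each nutritional requirement of the form 'at least r units' is multiplied by –1:

# Hypothetical diet problem: choose quantities of two foods at minimum cost
# while meeting minimum daily requirements of two nutrients.
from scipy.optimize import linprog

cost = [0.6, 1.0]             # cost per unit of food 1 and food 2 (hypothetical)
nutrients = [[30, 20],        # units of nutrient A supplied per unit of each food
             [10, 50]]        # units of nutrient B supplied per unit of each food
requirement = [60, 100]       # minimum daily units of nutrients A and B

# ">= requirement" constraints are negated to fit linprog's A_ub @ x <= b_ub form.
A_ub = [[-a for a in row] for row in nutrients]
b_ub = [-r for r in requirement]

result = linprog(c=cost, A_ub=A_ub, b_ub=b_ub,
                 bounds=[(0, None), (0, None)], method="highs")
print("Quantities of each food:", result.x)
print("Minimum cost of the diet:", result.fun)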
Game method is used to turn a matrix game into a linear programming
problem. It is based on the Min-Max theorem which suggests that each player
determines the choice of strategies on the basis of a probability distribution over
the player’s list of strategies.

The portfolio optimization template calculates the optimal capital of
investments that gives the highest return for the least risk. The unique design of the
portfolio optimization technique helps in financial investments or business portfolios.
The optimization analysis is applied to a portfolio of businesses to represent a
desired and beneficial framework for driving capital allocation, investment and
divestment decisions.
Crew scheduling is an important application of linear programming problem.
It helps an airline deal with the very large number of potential crew schedules.
Crew scheduling models are a key to airline competitive cost advantage
these days because crew costs are the second largest flying cost after fuel costs.
Limitations of Linear Programming Problems
Linear programming is applicable if constraints and objective functions are linear,
but there are some limitations of this technique which are as follows:
 All the uncertain factors, such as weather conditions, growth rate of industry,
etc., are not taken into consideration.
 Integer values are not guaranteed in the solution, e.g., if a fractional value is
obtained, rounding it to the nearest integer need not give the optimal solution.
 The linear programming technique may give fractional-valued answers that
are really not desirable for the linear programming problem at hand.
 It deals with one single objective, whereas real-life problems are more often
multi-objective.
 In linear programming, coefficients and parameters are assumed to be constants,
but in reality they may not be.
 Blending is a frequently encountered problem in linear programming. For
example, if different commodities are purchased which have different
characteristics and costs, then the problem helps to decide how much of
each commodity would be purchased and blended within specified bound
so that the total purchase cost is minimized.

2.5 GRAPHICAL SOLUTION METHOD

Linear programming is a mathematical method for determining the best way to achieve a desired outcome, i.e., maximum profit at lowest cost.
2.5.1 Graphic Solution
The procedure for mathematical formulation of an LPP consists of the following
steps:
Step 1: The decision variables of the problem are noted.
Step 2: The objective function to be optimized (maximized or minimized) as a
linear function of the decision variables is formulated.
Step 3: The other conditions of the problem, such as resource limitation, market
constraints, interrelations between variables, etc., are formulated as linear
inequations or equations in terms of the decision variables.
Step 4: The non-negativity constraint is added from the consideration that negative values of the decision variables do not have any valid physical interpretation.
The objective function, the set of constraints and the non-negative constraint
together form a linear programming problem.
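The graphical method itself, taken up later in this unit, rests on the fact that the optimum of a two-variable LPP, when it exists, occurs at a corner point (vertex) of the feasible region, so it is enough to draw the constraints and evaluate the objective function at each corner. The following minimal Python sketch (an illustrative problem of its own, not one of the examples in this unit) enumerates the corner points numerically instead of drawing them:

# Corner-point view of the graphical method for a hypothetical two-variable LPP:
#   Maximize Z = 2x + 3y  subject to  x + y <= 4,  x + 3y <= 6,  x >= 0, y >= 0.
from itertools import combinations
import numpy as np

# Every condition written as a*x + b*y <= c, including the axes x >= 0 and y >= 0.
constraints = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def feasible(p, tol=1e-9):
    return all(a * p[0] + b * p[1] <= c + tol for a, b, c in constraints)

corners = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    A = np.array([[a1, b1], [a2, b2]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:        # parallel boundary lines: no corner
        continue
    p = np.linalg.solve(A, np.array([c1, c2], dtype=float))
    if feasible(p):
        corners.append(p)

best = max(corners, key=lambda p: 2 * p[0] + 3 * p[1])
print("Corner points:", [tuple(np.round(p, 2)) for p in corners])
print("Optimum corner:", tuple(np.round(best, 2)))   # (3, 1) with Z = 9 here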
2.5.2 Some Exceptional Cases
The general formulation of the LPP can be stated as follows:
In order to find the values of n decision variables X1, X2, ..., Xn to maximize or
minimize the objective function
Z = C1X1 + C2X2 + ... + CnXn    ...(2.4)
and also satisfy the m constraints
a11X1 + a12X2 + ... + a1nXn (≤, =, ≥) b1
a21X1 + a22X2 + ... + a2nXn (≤, =, ≥) b2
:
ai1X1 + ai2X2 + ... + ainXn (≤, =, ≥) bi    ...(2.5)
:
am1X1 + am2X2 + ... + amnXn (≤, =, ≥) bm
Here, the constraints can be an inequality (≤ or ≥) or even in the form of an equation (=), and finally the variables satisfy the non-negative restrictions:
X1 ≥ 0, X2 ≥ 0, ..., Xn ≥ 0    ...(2.6)
Matrix Form of Linear Programming Problem
The LPP can be expressed in the matrix form as follows:
Maximize or minimize Z = CX  → Objective function
Subject to AX (≤, =, ≥) B  → Constraint equations
B > 0, X ≥ 0  → Non-negativity restrictions
Where,
X = (X1, X2, ..., Xn)
C = (C1, C2, ..., Cn)
B = (b1, b2, ..., bm), a column vector, and
A = [aij], the m × n matrix of coefficients with rows (a11, a12, ..., a1n), (a21, a22, ..., a2n), ..., (am1, am2, ..., amn).
Example 2.1: A manufacturer produces two types of models M1 and M2. Each
model of the type M1 requires 4 hours of grinding and 2 hours of polishing;
whereas each model of the type M2 requires 2 hours of grinding and 5 hours of
polishing. The manufacturers have 2 grinders and 3 polishers. Each grinder works
40 hours a week and each polisher works for 60 hours a week. The profit on
M1 model is ₹ 3.00 and on model M2 is ₹ 4.00. Whatever is produced in a week
is sold in the market. How should the manufacturer allocate his production
capacity to the two types of models, so that he may make the maximum profit in
a week?
Solution:
Decision Variables: Let X1 and X2 be the number of units of M1 and M2.
Objective Function: Since the profit on both the models are given, we
have to maximize the profit, viz.,
Max Z = 3X1 + 4X2
Constraints: There are two constraints: one for grinding and the other for
polishing.
The number of hours available on each grinder for one week is 40 hours.
There are 2 grinders. Hence, the manufacturer does not have more than 2 × 40 =
80 hours for grinding. M1 requires 4 hours of grinding and M2 requires 2 hours of
grinding.
The grinding constraint is given by,
4X1 + 2X2 ≤ 80
Since there are 3 polishers, the available time for polishing in a week is
given by 3 × 60 = 180. M1 requires 2 hours of polishing and M2 requires 5 hours
of polishing. Hence, we have 2X1 + 5X2 ≤ 180
Thus, we have,
Max Z = 3X1 + 4X2
Subject to, 4X1 + 2X2 ≤ 80
2X1 + 5X2 ≤ 180
X1, X2 ≥ 0
Example 2.2: A company manufactures two products A and B. These products
are processed in the same machine. It takes 10 minutes to process one unit of
product A and 2 minutes for each unit of product B and the machine operates for
a maximum of 35 hours in a week. Product A requires 1 kg and B 0.5 kg of raw
material per unit, the supply of which is 600 kg per week. The market constraint
on product B is known to be 800 units every week. Product A costs ₹ 5 per unit
and is sold at ₹ 10. Product B costs ₹ 6 per unit and can be sold in the market at
a unit price of ₹ 8. Determine the number of units of A and B that should be
manufactured per week to maximize the profit.

Solution:
Decision Variables: Let X1 and X2 be the number of products of A and B.
Objective Function: Cost of product A per unit is ₹ 5 and it is sold at ₹ 10
per unit.
∴ Profit on one unit of product A = ₹ 10 – ₹ 5 = ₹ 5
∴ X1 units of product A contribute a profit of ₹ 5X1.
Similarly, profit on one unit of B = ₹ 8 – ₹ 6 = ₹ 2
∴ X2 units of product B contribute a profit of ₹ 2X2.
∴ The objective function is given by,
Max Z  5 X 1  2 X 2
Constraints: Time requirement constraint is given by,
10X1 + 2X2 ≤ (35 × 60)
10X1 + 2X2 ≤ 2100
Raw material constraint is given by,
X1 + 0.5X2 ≤ 600
Market demand on product B is 800 units every week.
∴ X2 ≤ 800
The complete LPP is,
Max Z = 5X1 + 2X2
Subject to, 10X1 + 2X2 ≤ 2100
X1 + 0.5X2 ≤ 600
X2 ≤ 800
X1, X2 ≥ 0

Example 2.3: A person requires 10, 12 and 12 units of chemicals A, B and C


respectively for his garden. A liquid product contains 5, 2 and 1 units of A, B and
C respectively per jar. A dry product contains 1, 2 and 4 units of A, B, C per
carton. If the liquid product sells for ₹3 per jar and the dry product sells for ₹2
per carton, what should be the number of jars and cartons that need to be purchased,
in order to bring down the cost and meet the requirements?
Solution:
Decision Variables: Let X1 and X2 be the number of units of liquid and
dry products.
Objective Function: Since the cost for the products are given, we have to
minimize the cost.
Min Z = 3X1 + 2X2
Constraints: As there are 3 chemicals and their requirements are given,
we have three constraints for these three chemicals.
5X1 + X2 ≥ 10
2X1 + 2X2 ≥ 12
X1 + 4X2 ≥ 12
Hence, the complete LPP is,
Min Z = 3X1 + 2X2
Subject to, 5X1 + X2 ≥ 10
2X1 + 2X2 ≥ 12
X1 + 4X2 ≥ 12
X1, X2 ≥ 0

Example 2.4: A paper mill produces two grades of paper, X and Y. Because of
raw material restrictions, it cannot produce more than 400 tonnes of grade X and
300 tonnes of grade Y in a week. There are 160 production hours in a week. It
requires 0.2 and 0.4 hours to produce a tonne of products X and Y respectively
with corresponding profits of 200 and 500 per tonne. Formulate this as a LPP
to maximize profit and find the optimum product mix.
Solution:
Decision Variables: Let X1 and X2 be the number of units of the two
grades of paper, X and Y.
Objective Function: Since the profit for the two grades of paper X and Y
are given, the objective function is to maximize the profit.
Max Z = 200X1 + 500X2
Constraints: There are 2 constraints one with reference to raw material,
and the other with reference to production hours.
Max Z = 200X1 + 500X2
Subject to,

X 1  400
X 2  300
0.2 X 1  0.4 X 2  160

Non-negative restriction X1, X2  0


Example 2.5: A company manufactures two products A and B. Each unit of B
takes twice as long to produce as one unit of A and if the company were to produce
only A it would have time to produce 2000 units per day. The availability of the raw
material is enough to produce 1500 units per day of both A and B together. Product
B requires a special ingredient, so only 600 units of it can be made per day. If A fetches
a profit of ₹2 per unit and B a profit of ₹4 per unit, find the optimum product mix by
graphical method.
Solution: Let X1 and X2 be the number of units of the products A and B respectively.
The profit after selling these two products is given by the objective function,
Max Z = 2X1 + 4X2
Since the company can produce at the most 2000 units of the product in a
day and product B requires twice as much time as that of product A, production
restriction is given by,
X 1  2 X 2  2000
Since the raw material is sufficient to produce 1500 units per day of both A
and B, we have X 1  X 2  1500.
There are special ingredients for the product B we have X2  600.
Also, since the company cannot produce negative quantities X1  0 and X2
 0.
Hence, the problem can be finally put in the form:
Find X1 and X2 such that the profits, Z = 2X1 + 4X2 is maximum.

Subject to, X 1  2 X 2  2000


X 1  X 2  1500
X 2  600
X1, X 2  0

Example 2.6: A firm manufactures 3 products A, B and C. The profits are ₹3,
₹2 and ₹4, respectively. The firm has 2 machines and the following is the required
processing time in minutes for each machine on each product.

Product
A B C
Machines C 4 3 5
D 3 2 4

Machine C and D have 2000 and 2500 machine minutes, respectively. The
firm must manufacture 100 units of A, 200 units of B and 50 units of C, but not
more than 150 units of A. Set up an LP problem to maximize the profit.
Solution: Let X1, X2, X3 be the number of units of the product A, B, C respectively.

Since the profits are ₹3, ₹2 and ₹4 respectively, the total profit gained by
the firm after selling these three products is given by,
Z = 3X1 + 2X2 + 4X3
The total number of minutes required in producing these three products at
machine C is given by 4X1 + 3X2 + 5X3 and at machine D is given by 3X1 + 2X2
+ 4X3.
The restrictions on the machine C and D are given by 2000 minutes and
2500 minutes.
4X1 + 3X2 + 5X3 ≤ 2000
3X1 + 2X2 + 4X3 ≤ 2500
Also, since the firm manufactures 100 units of A, 200 units of B and 50 units
of C, but not more than 150 units of A, the further restriction becomes,
100 ≤ X1 ≤ 150
X2 ≥ 200
X3 ≥ 50
Hence, the allocation problem of the firm can be finally put in the form:
Find the value of X1, X2, X3 so as to maximize,
Z = 3X1 + 2X2 + 4X3
Subject to the constraints,
4X1 + 3X2 + 5X3 ≤ 2000
3X1 + 2X2 + 4X3 ≤ 2500
100 ≤ X1 ≤ 150, X2 ≥ 200, X3 ≥ 50
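If a numerical check is wanted, the sketch below is an illustration only; it assumes SciPy and the bounds reconstructed above (100 ≤ X1 ≤ 150, X2 ≥ 200, X3 ≥ 50), which are passed through linprog's bounds argument.

from scipy.optimize import linprog

c = [-3, -2, -4]                         # maximize 3X1 + 2X2 + 4X3
A_ub = [[4, 3, 5],                       # machine C minutes <= 2000
        [3, 2, 4]]                       # machine D minutes <= 2500
b_ub = [2000, 2500]
bounds = [(100, 150), (200, None), (50, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)   # expected: about X1 = 100, X2 = 200, X3 = 200, Z = 1500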

Check Your Progress


8. What are the objectives of linear programming?
9. Write the applications of linear programming.
10. State the limitations of linear programming problems.
11. Explain the graphical solution method LPP.
12. Define the matrix form of linear programming problem.

2.6 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Decision-making has always been very important in the business and


industrial world, particularly with regard to the problems concerning
production of commodities. Which commodity/commodities to produce, in
what quantities and by which process or processes, are the main questions
before a production manager. English economist Alfred Marshall pointed
out that the businessman always studies his production function and his
input prices and substitutes one input for another till his costs become the
minimum possible. All this sort of substitution, in the opinion of Marshall, is
being done by businessman’s trained instinct rather than with formal
calculations. But now there does exist a method of formal calculations often
termed as Linear Programming.
2. The word ‘Linear’ means that the relationships are represented by straight
lines, i.e., the relationships are of the form y = a + bx and the word
‘Programming’ means taking decisions systematically.
3. The applications of LP are numerous and are increasing every day. LP is
extensively used in solving resource allocation problems. Production planning
and scheduling, transportation, sales and advertising, financial planning,
portfolio analysis, corporate planning, etc., are some of its most fertile
application areas. More specifically, LP has been successfully applied in the
following fields:
(i) Agricultural applications: LP can be applied in farm management
problems as it relates to the allocation of resources such as acreage,
labour, water supply or working capital in such a way that it maximizes
net revenue.
(ii) Contract awards: Evaluation of tenders by recourse to LP guarantees
that the awards are made in the cheapest way.
(iii) Industrial applications: Applications of LP in business and industry are
of most diverse kind. Transportation problems concerning cost
minimization can be solved by this technique. The technique can also
be adopted in solving the problems of production (product-mix) and
inventory control.
4. Linearity: The term linearity implies straight line or proportional relationships
among the relevant variables. Linearity in economic theory is known as
constant returns which means that if the amount of input doubles, the
corresponding output and profit are also doubled. Linearity assumption,
thus, implies that if two machines and two workers can produce twice as
much as one machine and one worker; four machines and four workers
twice as much as two machines and two workers, and so on.
5. Criterion function is also known as objective function which states the
determinants of the quantity either to be maximized or to be minimized.
6. These are the limitations under which one has to plan and decide, i.e.,
restrictions imposed upon decision variables.
7. Feasible Solutions: Feasible solutions are all those possible solutions which
can be worked upon under given constraints. The region comprising of all
feasible solutions is referred to as the Feasible Region.
8. The applications of linear programming problems are based on linear
programming matrix coefficients and data transmission prior to solving the
simplex algorithm. The problem can be formulated from the problem
statement using linear programming techniques. The following are the
objectives of linear programming:
 Identify the objective of the linear programming problem, i.e., which
quantity is to be optimized. For example, maximize the profit.
 Identify the decision variables and constraints used in linear programming,
for example, production quantities and production limitations are taken
as decision variables and constraints.
 Identify the objective functions and constraints in terms of decision
variables using information from the problem statement to determine the
proper coefficients.
 Add implicit constraints, such as non-negative restrictions.
 Arrange the system of equations in a consistent form and place all the
variables on the left side of the equations.
9. The following are the applications of linear programming:
 Transportation problem
 Diet problem
 Matrix games
 Portfolio optimization
 Crew scheduling
10.  All the uncertain factors, such as weather conditions, growth rate of
industry, etc., are not taken into consideration.
 Solutions need not be integer-valued; e.g., where an integer value is required,
a fractional value may be obtained, and rounding it to the nearest integer need not give the optimal solution.
11. The procedure for mathematical formulation of an LPP consists of the
following steps:
Step 1: The decision variables of the problem are noted.
Step 2: The objective function to be optimized (maximized or minimized) as
a linear function of the decision variables is formulated.
Step 3: The other conditions of the problem, such as resource limitation,
market constraints, interrelations between variables, etc., are formulated as
linear inequations or equations in terms of the decision variables.
Step 4: The non-negativity constraint from the considerations is added so
that the negative values of the decision variables do not have any valid
physical interpretation.
12. The LPP can be expressed in the matrix form as follows:
Maximize or minimize Z = CX → Objective function
Subject to AX (≤, =, ≥) B → Constraint equations
B > 0, X ≥ 0 → Non-negativity restrictions
Where, X = (X1, X2, ..., Xn)
C = (C1, C2, ..., Cn)
B = (b1, b2, ..., bm)T, a column vector, and
      | a11  a12  ...  a1n |
A =   | a21  a22  ...  a2n |
      |  :    :         :  |
      | am1  am2  ...  amn |

2.7 SUMMARY

 Decision-making has always been very important in the business and


industrial world, particularly with regard to the problems concerning
production of commodities. Which commodity/commodities to produce, in
what quantities and by which process or processes, are the main questions
before a production manager.
 The word ‘Linear’ means that the relationships are represented by straight
lines, i.e., the relationships are of the form y = a + bx and the word
‘Programming’ means taking decisions systematically.
 The problem for which LP provides a solution may be stated to maximize
or minimize for some dependent variable which is a function of several
independent variables when the independent variables are subject to various
restrictions.
 Agricultural applications: LP can be applied in farm management
problems as it relates to the allocation of resources such as acreage,
labour, water supply or working capital in such a way that it maximizes
net revenue.
 Industrial applications: Applications of LP in business and industry are
of most diverse kind. Transportation problems concerning cost
minimization can be solved by this technique. The technique can also be
adopted in solving the problems of production (product-mix) and
inventory control.
 Linearity: The term linearity implies straight line or proportional
relationships among the relevant variables. Linearity in economic theory
is known as constant returns which means that if the amount of input
doubles, the corresponding output and profit are also doubled. Linearity
assumption, thus, implies that if two machines and two workers can
produce twice as much as one machine and one worker; four machines
and four workers twice as much as two machines and two workers,
and so on.
 Process and its Level: Process means the combination of particular inputs
to produce a particular output. In a process, factors of production are used
in fixed ratios, of course, depending upon technology and as such no
substitution is possible with a process.
 Criterion function is also known as objective function which states the
determinants of the quantity either to be maximized or to be minimized.
 These are the limitations under which one has to plan and decide, i.e.,
restrictions imposed upon decision variables.
 Feasible Solutions: Feasible solutions are all those possible solutions which
can be worked upon under given constraints. The region comprising of all
feasible solutions is referred as Feasible Region.
 The applications of linear programming problems are based on linear
programming matrix coefficients and data transmission prior to solving the
simplex algorithm. The problem can be formulated from the problem
statement using linear programming techniques.
 Linear programming problems are associated with the efficient use of
allocation of limited resources to meet desired objectives. A solution required
to solve the linear programming problem is termed as optimal solution.
 All the uncertain factors, such as weather conditions, growth rate of industry,
etc., are not taken into consideration.
 Solutions need not be integer-valued; e.g., where an integer value is required,
a fractional value may be obtained, and rounding it to the nearest integer need not give the optimal solution.

2.8 KEY WORDS

 Linear programming: the word ‘Linear’ means that the relationships are
represented by straight lines, i.e., the relationships are of the form y = bx
and the word ‘Programming’ means taking decisions systematically.
 Linearity: The term linearity implies straight line or proportional relationships
among the relevant variables.
 Criterion function: Criterion function is also known as objective function
which states the determinants of the quantity either to be maximized or to
be minimized.
 Constraints or inequalities: These are the limitations under which one
has to plan and decide, i.e., restrictions imposed upon decision variables.
 Feasible solutions: Feasible solutions are all those possible solutions which
can be worked upon under given constraints. The region comprising of all
feasible solutions is referred as feasible region.

2.9 SELF-ASSESSMENT QUESTIONS AND EXERCISES
Short-Answer Questions

1. What is meant by proportionality in linear programming?


2. Mention two areas where linear programming finds application.
3. Explain the term of linearity.
4. Define the term of constraints or inequalities.
5. What are the basic constituents of an LP model?
6. Write the objectives of linear programming problems.
7. State about application of linear programming.
8. Write the limitations of linear programming problems.
9. Explain the term of graphical solution.
Long-Answer Questions
1. Describe the areas where linear programming can be used?
2. Explain the basic concepts and notations in linear programming.
3. Explain the applications of linear programming.
4. What are the limitations of linear programming problems?
5. Discuss briefly about the graphical solution method.
6. A company produces two types of leather belts A and B. A is of superior
quality and B is of inferior quality. The respective profits are 10 and 5 per
belt. The supply of raw material is sufficient for making 850 belts per day. For
belt A, a special type of buckle is required and 500 are available per day.
There are 700 buckles available for belt B per day. Belt A needs twice as
much time as that required for belt B and the company can produce 500 belts
if all of them were of the type A. Formulate a LP Model for the given problem.
7. The standard weight of a special purpose brick is 5 kg and it contains two
ingredients B1 and B2, where B1 costs ₹5 per kg and B2 costs ₹8 per kg.
Strength considerations dictate that the brick should contain not more than 4 kg
of B1 and a minimum of 2 kg of B2, since the demand for the product is likely
to be related to the price of the brick. Formulate the given problem as a LP
Model to minimize the cost of the brick.
8. Solve the following by graphical method:
(i) Max Z = X1 – 3X2
Subject to, X1 + X2 300
X1 – 2X2  200
2X1 + X2  100
X2  200
X1, X2  0 Self-Instructional
Material 49
Linear Programming (ii) Max Z = 5X + 8Y
Problems
Subject to, 3X + 2Y  36
X + 2Y  20
NOTES 3X + 4Y  42
X, Y  0
9. Solve graphically the following LPP:
Max Z = 20X1 + 10X2
Subject to, X1 + 2X2 ≤ 40
3X1 + X2 ≥ 30
4X1 + 3X2 ≥ 60
and X1, X2 ≥ 0

2.10 FURTHER READINGS

Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New


Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.
Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.
Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata
McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi:
Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.
UNIT 3 LINEAR PROGRAMMING AND SIMPLEX METHOD
Structure
3.0 Introduction
3.1 Objectives
3.2 General Linear Programming Variables
3.2.1 Graphical Solution
3.2.2 Some Important Definitions
3.3 Canonical and Standard forms of LPP
3.4 Simplex Method
3.5 Answers to Check Your Progress Questions
3.6 Summary
3.7 Key Words
3.8 Self-Assessment Questions and Exercises
3.9 Further Readings

3.0 INTRODUCTION

This unit presents the procedure for solving a Linear Programming Problem (LPP)
by the graphical method; any solution to a LPP which satisfies its constraints and
non-negativity restrictions is called a feasible solution.
Simplex method is an iterative procedure for solving LPP in a finite number of
steps. This method provides an algorithm which consists of moving from one vertex
of the region of feasible solution to another in such a manner that the value of the
objective function at the succeeding vertex is less or more as the case may be that
at the previous vertex.
Linear Programming (LP) and simplex method linearity implies straight line
or proportional relationships among the relevant variables. Process means the
combination of one or more inputs to produce a particular output. Criterion function
is an objective function which is to be either maximized or minimized.
Constraints are limitations under which one has to plan and decide. There
are restrictions imposed upon decision variables. Feasible solutions are all those
possible solutions considering given constraints. An optimum solution is considered
the best among feasible solutions.
A canonical, normal, or standard form of a mathematical object is a standard
way of presenting that object as a mathematical expression. Often, it is one which
provides the simplest representation of an object and which allows it to be identified
in a unique way. The distinction between “Canonical” and “Normal” forms varies
from subfield to subfield. In most fields, a canonical form specifies
a unique representation for every object, while a normal form simply specifies its
form, without the requirement of uniqueness.
Simplex method is a popular algorithm for linear programming. The name
of the algorithm is derived from the concept of a simplex and was suggested by T.
S. Motzkin. Simplices are not actually used in the method, but one interpretation
of it is that it operates on simplicial cones, and these become proper simplices
with an additional constraint. The simplicial cones in question are the corners (i.e.,
the neighbourhoods of the vertices) of a geometric object called a polytope. The
shape of this polytope is defined by the constraints applied to the objective function.
In this unit, you will study about the concepts of general linear programming
problem, canonical and standard forms of LPP and the simplex method.

3.1 OBJECTIVES

After going through this unit, you will be able to:


 Understand the significance of general linear programming problem
 Explain the canonical and standard forms of LPP
 Define meaning of simplex method

3.2 GENERAL LINEAR PROGRAMMING VARIABLES

The Linear Programming Problems (LPP) can be solved as follows using the
graphical solution and simplex method:
3.2.1 Graphical Solution
Simple linear programming problem with two decision variables can be easily
solved by graphical method.
Procedure for Solving LPP by Graphical Method
The steps involved in the graphical method are as follows:
Step 1: Consider each inequality constraint as an equation.
Step 2: Plot each equation on the graph as each will geometrically represent
a straight line.
Step 3: Mark the region. If the inequality constraint corresponding to that
line is , then the region below the line lying in the first quadrant (due to non-
negativity of variables) is shaded. For the inequality constraint  sign, the region
above the line in the first quadrant is shaded. The points lying in the common
region will satisfy all the constraints simultaneously. The common region thus
obtained is called the feasible region.
Step 4: Allocate an arbitrary value, say zero, for the objective function.
Step 5: Draw the straight line to represent the objective function with the
arbitrary value (i.e., a straight line through the origin).
Step 6: Stretch the objective function line till the extreme points of the feasible
region. In the maximization case, this line will stop farthest from the origin and
passes through at least one corner of the feasible region. In the minimization case
this line will stop nearest to the origin and passes through at least one corner of the
feasible region.
Step 7: Find the coordinates of the extreme points selected in Step 6 and
find the maximum or minimum value of Z.
Note: As the optimal values occur at the corner points of the feasible region, it is enough to
calculate the value of the objective function of the corner points of the feasible region and
select the one which gives the optimal solution, i.e., in the case of maximization problem,
optimal point corresponds to the corner point at which the objective function has a maximum
value and in the case of minimization, the corner point which gives the objective function the
minimum value is the optimal solution.
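The corner-point rule in the Note can also be mechanized. The sketch below is an illustrative addition with hypothetical helper names (it assumes NumPy is available): it intersects every pair of boundary lines, keeps the feasible intersections and evaluates Z there, for a two-variable problem with '<=' constraints.

import itertools
import numpy as np

def corner_points(A, b):
    """Vertices of {x >= 0 : A x <= b} for a 2-variable problem."""
    lines = np.vstack([A, [[1.0, 0.0]], [[0.0, 1.0]]])   # treat x1 = 0, x2 = 0 as lines
    rhs = np.concatenate([b, [0.0, 0.0]])
    pts = []
    for i, j in itertools.combinations(range(len(rhs)), 2):
        M = lines[[i, j]]
        if abs(np.linalg.det(M)) < 1e-9:
            continue                                      # parallel lines, no vertex
        x = np.linalg.solve(M, rhs[[i, j]])
        if np.all(x >= -1e-9) and np.all(A @ x <= b + 1e-9):
            pts.append(x)
    return pts

# Usage with illustrative data (max Z = 3x1 + 4x2 subject to the two rows below):
A = np.array([[4.0, 2.0], [2.0, 5.0]]); b = np.array([80.0, 180.0])
best = max(corner_points(A, b), key=lambda x: 3 * x[0] + 4 * x[1])
print(best, 3 * best[0] + 4 * best[1])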
Example 3.1: Solve the following LPP by graphical method.
Minimize, Z = 20X1 + 10X2
Subject to, X1 + 2X2 ≤ 40
3X1 + X2 ≥ 30
4X1 + 3X2 ≥ 60
X1, X2 ≥ 0
Solution: Replace all the inequalities of the constraints by equations,
X1 + 2X2 = 40: If X1 = 0 ⇒ X2 = 20
If X2 = 0 ⇒ X1 = 40
∴ X1 + 2X2 = 40 passes through (0, 20) and (40, 0)
3X1 + X2 = 30 passes through (0, 30) (10, 0)
4X1+ 3X2 = 60 passes through (0, 20) (15, 0)
Plot each equation on the graph.
The feasible region is ABCD.
C and D are points of intersection of lines.
X1+ 2X2 = 40, 3X1+ X2 = 30
And, 4X1+ 3X2= 60

[Graph: the lines X1 + 2X2 = 40, 3X1 + X2 = 30 and 4X1 + 3X2 = 60 plotted in the first quadrant; the feasible region ABCD is marked, with C at (4, 18).]
On solving, we get C (4, 18) and D (6, 12)


Corner Points Value of Z = 20X1 + 10X2
A (15, 0) 300
B (40, 0) 800
C (4, 18) 260
D (6, 12) 240 (Minimum value)
 The minimum value of Z occurs at D (6, 12). Hence, the optimal solution
is X1 = 6, X2= 12.
Example 3.2: Find the maximum value of Z = 5X1 + 7X2
Subject to the constraints,
X1 + X2 ≤ 4
3X1 + 8X2 ≤ 24
10X1 + 7X2 ≤ 35
X1, X2 ≥ 0
Solution: Replace all the inequalities of the constraints by forming equations.
X1 + X2 = 4 passes through (0, 4) and (4, 0)
3X1 + 8X2 = 24 passes through (0, 3) and (8, 0)
10X1 + 7X2 = 35 passes through (0, 5) and (3.5, 0)
Plot these lines in the graph and mark the region below the line as the inequality
of the constraint is ≤ and is also lying in the first quadrant.
[Graph: the lines X1 + X2 = 4, 3X1 + 8X2 = 24 and 10X1 + 7X2 = 35 plotted in the first quadrant; the shaded feasible region is OABCD.]
The feasible region is OABCD.


B and C are points of intersection of lines,
X1 + X2 = 4, 10X1 + 7X2 = 35
And, 3X1 + 8X2 = 24
On solving we get,
B (1.6, 2.3)
C (1.6, 2.4)
Corner Points Value of Z = 5X1 + 7X2
O (0, 0) 0
A (3.5, 0) 17.5
B (1.6, 2.3) 24.1
C (1.6, 2.4) 24.8 (Maximum value)
D (0, 3) 21
 The maximum value of Z occurs at C (1.6, 2.4) and the optimal solution
is X1 = 1.6, X2 = 2.4.
Example 3.3: A company makes 2 types of hats. Each hat A needs twice as
much labour time as the second hat B. If the company is able to produce only hat
B, then it can make about 500 hats per day. The market limits daily sales of the hat
A and hat B to 150 and 250 hats. The profits on hat A and hat B are 8 and 5,
respectively. Solve graphically to get the optimal solution.
Solution: Let X1 and X2 be the number of units of type A and type B hats,
respectively.
Maximize Z = 8X1 + 5X2
Subject to, 2X1 + X2 ≤ 500
X1 ≤ 150
X2 ≤ 250
X1, X2 ≥ 0
First rewrite the inequality of the constraint into an equation and plot the
lines in the graph.
2X1 + X2 = 500 passes through (0, 500) (250, 0)
X 1 = 150 passes through (150, 0)
X 2 = 250 passes through (0, 250)
We mark the region below the lines lying in the first quadrant as the inequalities
of the constraints are ≤. The feasible region is OABCD. B and C are points of
intersection of lines:
2X1 + X2 = 500, where X1 = 150 and X2 = 250
On solving, we get B = (150, 200)
C = (125, 250)
[Graph: the lines 2X1 + X2 = 500, X1 = 150 and X2 = 250 plotted in the first quadrant; the feasible region OABCD has corners O (0, 0), A (150, 0), B (150, 200), C (125, 250) and D (0, 250).]

Corner Points Value of Z = 8X1 + 5X2


O (0, 0) 0
A (150, 0) 1200
B (150, 200) 2200
C (125, 250) 2250 (Maximum Z = 2250)
D (0, 250) 1250
The maximum value of Z is attained at C (125, 250)
∴ The optimal solution is X1 = 125, X2 = 250
i.e., the company should produce 125 hats of type A and 250 hats of type
B in order to get the maximum profit of ₹2250.
Example 3.4: By graphical method solve the following LPP.
Maximize Z = 3X1 + 4X2
Subject to, 5X1 + 4X2 ≤ 200
3X1 + 5X2 ≤ 150
5X1 + 4X2 ≥ 100
8X1 + 4X2 ≥ 80
and X1, X2 ≥ 0
Solution:
[Graph: the lines 5X1 + 4X2 = 200, 3X1 + 5X2 = 150, 5X1 + 4X2 = 100 and 8X1 + 4X2 = 80 plotted in the first quadrant; the feasible region is OABCD with B at (30.8, 11.5).]
Feasible region is given by OABCD.


Corner Points Value of Z = 3X1 + 4X2
O (20, 0) 60
A (40, 0) 120
B (30.8, 11.5) 138.4 (Maximum value)
C (0, 30) 120
D (0, 25) 100
 The maximum value of Z is attained at B (30.8, 11.5)
 The optimal solution is X1 = 30.8, X2 = 11.5
Example 3.5: Use graphical method to solve the following LPP.
Maximize, Z = 6X1 + 4X2
Subject to, –2X1 + X2 ≤ 2
X1 – X2 ≥ 2
3X1 + 2X2 ≤ 9
X1, X2 ≥ 0
Solution:
[Graph: the lines –2X1 + X2 = 2, X1 – X2 = 2 and 3X1 + 2X2 = 9 plotted in the first quadrant; the feasible region is the triangle ABC with C at (13/5, 3/5).]
The feasible region is given by ABC.


Corner Points Value of Z = 6X1 + 4X2
A (2, 0) 12
B (3, 0) 18
C (13/5, 3/5) 90/5 = 18 (Maximum value)
The maximum value of Z is attained at C (13/5, 3/5)
∴ The optimal solution is X1 = 13/5, X2 = 3/5
Example 3.6: Use graphical method to solve the following LPP.
Minimize Z = 3X1 + 2X2
Subject to, 5X1 + X2 ≥ 10
X1 + X2 ≥ 6
X1 + 4X2 ≥ 12
X1, X2 ≥ 0
Solution: Corner Points Value of Z = 3X1 + 2X2
A (0, 10) 20
B (1, 5) 13 (Minimum value)
C (4, 2) 16
D (12, 0) 36
[Graph: the lines 5X1 + X2 = 10, X1 + X2 = 6 and X1 + 4X2 = 12 plotted in the first quadrant; the feasible region lies above the lines, with corner points (0, 10), (1, 5), (4, 2) and (12, 0).]
Since the minimum value is attained at B (1,5), the optimum solution is,
X1 = 1, X2 = 5
Note: In this problem, if the objective function is maximization then the solution is unbounded,
as maximum value of Z occurs at infinity.

3.2.2 Some Important Definitions


1. A set of values X1, X2, ..., Xn which satisfies the constraints of the LPP is
called its solution.
2. Any solution to a LPP which satisfies the non-negativity restrictions of the
LPP is called its feasible solution.
3. Any feasible solution which optimizes (minimizes or maximizes) the objective
function of the LPP is called its optimum solution.
4. Given a system of m linear equations with n variables (m < n), any solution
which is obtained by solving m variables keeping the remaining n – m
variables zero is called a basic solution. Such m variables are called basic
variables and the remaining variables are called non-basic variables.
5. A basic feasible solution is a basic solution in which all the basic variables
are non-negative.
Basic feasible solutions are of two types:
(i) Non-Degenerate: A non-degenerate basic feasible solution is the basic
feasible solution which has exactly m positive Xi (i = 1, 2, ..., m), i.e.,
none of the basic variables is zero.
(ii) Degenerate: A basic feasible solution is said to be degenerate if one
or more basic variables are zero.
6. If the value of the objective function Z can be increased or decreased
indefinitely, such solutions are called unbounded solutions.
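For a concrete feel for definitions (4) and (5), the following illustrative Python sketch (my own addition, assuming NumPy; the 2 × 4 system is only an example of the kind obtained after adding slack variables) enumerates all basic solutions and marks the basic feasible ones.

import itertools
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],     # e.g. X1 + X2 + S1 = 4
              [1.0, -1.0, 0.0, 1.0]])   #      X1 - X2 + S2 = 2
b = np.array([4.0, 2.0])
m, n = A.shape

for basis in itertools.combinations(range(n), m):
    B = A[:, list(basis)]
    if abs(np.linalg.det(B)) < 1e-9:
        continue                          # these m columns cannot form a basis
    xB = np.linalg.solve(B, b)            # remaining n - m variables are set to zero
    feasible = np.all(xB >= -1e-9)
    print("basic variables", basis, "values", xB, "feasible:", feasible)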

Check Your Progress
1. Explain the term graphical solution.
2. State the solution of LPP.
3. Differentiate between the feasible and optimum solution.
4. Explain the term basic solution.
5. Differentiate the basic and non-basic variables.
6. State the degenerate and non-degenerate feasible solution.

3.3 CANONICAL AND STANDARD FORMS OF LPP

The general LPP can be put in either canonical or standard forms.


In the standard form, irrespective of the objective function, namely maximize
or minimize, all the constraints are expressed as equations. Moreover, RHS of
each constraint and all variables are non-negative.
Characteristics of the Standard Form
(i) The objective function is of maximization type.
(ii) All constraints are expressed as equations.
(iii) Right hand side of each constraint is non-negative.
(iv) All variables are non-negative.
In the canonical form, if the objective function is of maximization, all the
constraints other than the non-negativity conditions are of ‘≤’ type. If the objective
function is of minimization, all the constraints other than the non-negativity conditions
are of ‘≥’ type.
Characteristics of the Canonical Form
(i) The objective function is of maximization type.
(ii) All constraints are of ‘≤’ type.
(iii) All variables Xi are non-negative.
Notes:
1. Minimization of a function Z is equivalent to maximization of the negative expression
of this function, i.e., Min Z = –Max (–Z).
2. An inequality in one direction can be converted into an inequality in the opposite
direction by multiplying both sides by (–1).
3. Suppose we have the constraint equation,
a11 X1+a12X2 +...... +a1n Xn = b1

This equation can be replaced by two weak inequalities in opposite directions:
a11X1 + a12X2 + ... + a1nXn ≤ b1
a11X1 + a12X2 + ... + a1nXn ≥ b1
4. If a variable is unrestricted in sign, then it can be expressed as a difference of two
non-negative variables, i.e., if Xi is unrestricted in sign, then Xi = Xi′ – Xi″, where Xi′, Xi″ ≥ 0.
5. In standard form, all the constraints are expressed as equations, which is possible by
introducing some additional variables called slack variables and surplus variables so
that a system of simultaneous linear equations is obtained. The necessary
transformation will be made to ensure that bi ≥ 0.

Definition
(i) If the constraints of a general LPP be
Σ (j = 1 to n) aij Xj ≤ bi (i = 1, 2, ..., m),
then the non-negative variables Si, which are introduced to convert the inequalities
(≤) to the equalities Σ (j = 1 to n) aij Xj + Si = bi (i = 1, 2, ..., m), are called slack variables.
Slack variables are also defined as the non-negative variables which are
added in the LHS of the constraint to convert the inequality ‘≤’ into an equation.
(ii) If the constraints of a general LPP be
Σ (j = 1 to n) aij Xj ≥ bi (i = 1, 2, ..., m),
then the non-negative variables Si which are introduced to convert the
inequalities ‘≥’ to the equalities Σ (j = 1 to n) aij Xj – Si = bi (i = 1, 2, ..., m) are called surplus
variables.
Surplus variables are defined as the non-negative variables which are
subtracted from the LHS of the constraint to convert the inequality ‘≥’ into an
equation.

3.4 SIMPLEX METHOD

Simplex method is an iterative procedure for solving LPP in a finite number of


steps. This method provides an algorithm which consists of moving from one vertex
of the region of feasible solution to another in such a manner that the value of the
objective function at the succeeding vertex is less or more as the case may be that
at the previous vertex. This procedure is repeated and since the number of vertices
is finite, the method leads to an optimal vertex in a finite number of steps or indicates
the existence of unbounded solution.

Definition
(i) Let XB be a basic feasible solution to the LPP.
Max Z = CX
Subject to AX = b and X ≥ 0, such that it satisfies XB = B–1b,
Where B is the basic matrix formed by the columns of the basic variables.
The vector CB = (CB1, CB2, ..., CBm), where CBj are components of C
associated with the basic variables, is called the cost vector associated with the
basic feasible solution XB.
(ii) Let XB be a basic feasible solution to the LPP.
Max Z = CX, where AX = b and X ≥ 0.
Let CB be the cost vector corresponding to XB. For each column vector aj
in A, which is not a column vector of B, let
aj = Σ (i = 1 to m) aij bi, where b1, b2, ..., bm are the column vectors of B.
Then the number Zj = Σ (i = 1 to m) CBi aij is called the evaluation corresponding to
aj, and the number (Zj – Cj) is called the net evaluation corresponding to j.
Simplex Algorithm
For the solution of any LPP by simplex algorithm, the existence of an initial basic
feasible solution is always assumed. The steps for the computation of an optimum
solution are as follows:
Step 1: Check whether the objective function of the given LPP is to be
maximized or minimized. If it is to be minimized then we convert it into a problem
of maximization by,
Min Z = –Max (–Z)
Step 2: Check whether all bi (i = 1, 2, …, m) are positive. If any one of bi
is negative, then multiply the inequation of the constraint by –1 so as to get all bi to
be positive.
Step 3: Express the problem in the standard form by introducing slack/
surplus variables to convert the inequality constraints into equations.
Step 4: Obtain an initial basic feasible solution to the problem in the form
XB = B–1b and put it in the first column of the simplex table. Form the initial simplex
table shown as follows:

[The general form of the initial simplex table, with columns Cj, CB, B (basis), XB and a1, a2, ..., an, together with the net evaluation row Zj – Cj, appears here.]
Step 5: Compute the net evaluations Zj – Cj by using the relation:


Zj – Cj = CB aj – Cj
Examine the sign of Zj – Cj:
(i) If all Zj – Cj ≥ 0, then the initial basic feasible solution XB is an optimum
basic feasible solution.
(ii) If at least one Zj – Cj < 0, then proceed to the next step as the solution
is not optimal.
Step 6: To find the entering variable, i.e., key column.
If there are more than one negative Zj – Cj choose the most negative of
them. Let it be Zr – Cr for some j = r. This gives the entering variable Xr and is
indicated by an arrow at the bottom of the rth column. If there are more than one
variable having the same most negative Zj – Cj, then any one of the variable can be
selected arbitrarily as the entering variable.
(i) If all Xir ≤ 0 (i = 1, 2, …, m) then there is an unbounded solution to the
given problem.
(ii) If at least one Xir > 0 (i = 1, 2, …, m), then the corresponding vector
Xr enters the basis.
Step 7: To find the leaving variable or key row:
Compute the ratios (XBi / Xir, Xir > 0).
If the minimum of these ratios is XBk / Xkr, then choose the variable Xk to
leave the basis; its row is called the key row and the element at the intersection of the key
row and the key column is called the key element.
Step 8: Form a new basis by dropping the leaving variable and introducing
the entering variable along with the associated value under CB column. The leaving
element is converted to unity by dividing the key equation by the key element and
all other elements in its column to zero by using the formula:
New element = Old element – (Product of elements in key row and key column) / (Key element)

Step 9: Repeat the procedure of Step (5) until either an optimum solution
is obtained or there is an indication of unbounded solution.
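The nine steps above can be condensed into a short program. The following Python sketch is an illustrative rendering of the algorithm (not taken from the text); it assumes NumPy and handles only maximization problems with '<=' constraints and non-negative right-hand sides, so slack variables supply the starting basis. The function name simplex_max is a hypothetical helper.

import numpy as np

def simplex_max(c, A, b, tol=1e-9):
    """Tableau simplex for: maximize c'x subject to A x <= b, x >= 0, b >= 0."""
    c = np.asarray(c, float); A = np.asarray(A, float); b = np.asarray(b, float)
    m, n = A.shape
    # Tableau rows: [A | I | b]; the bottom row holds the net evaluations Zj - Cj and Z.
    T = np.hstack([A, np.eye(m), b.reshape(-1, 1)])
    T = np.vstack([T, np.concatenate([-c, np.zeros(m + 1)])])
    basis = list(range(n, n + m))              # Step 4: slack variables form the basis
    while True:
        znc = T[-1, :-1]
        if np.all(znc >= -tol):                # Step 5(i): all Zj - Cj >= 0 -> optimal
            break
        col = int(np.argmin(znc))              # Step 6: most negative -> key column
        ratios = [T[i, -1] / T[i, col] if T[i, col] > tol else np.inf for i in range(m)]
        if np.all(np.isinf(ratios)):           # Step 6(i): unbounded solution
            raise ValueError("unbounded solution")
        row = int(np.argmin(ratios))           # Step 7: minimum ratio -> key row
        T[row] /= T[row, col]                  # Step 8: key element becomes 1
        for r in range(m + 1):
            if r != row:
                T[r] -= T[r, col] * T[row]     # other key-column entries become 0
        basis[row] = col
    x = np.zeros(n + m)
    x[basis] = T[:-1, -1]
    return x[:n], T[-1, -1]                    # decision variables and Max Z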
Example 3.7: Use simplex method to solve the following LPP.
Maximize, Z = 3X1 + 2X2
Subject to, X1 + X2 ≤ 4
X1 – X2 ≤ 2
X1, X2 ≥ 0

Solution: By introducing the slack variables S1 and S2 convert the problem into
standard form.
Max, Z = 3X1 + 2X2 + 0S1 + 0S2

Subject to, X1 + X2 + S1 = 4
X1 – X2 + S2 = 2
X1, X2, S1, S2 ≥ 0
In matrix form,
      X1  X2  S1  S2
A = [  1   1   1   0 ],   X = (X1, X2, S1, S2)T,   b = (4, 2)T
    [  1  –1   0   1 ]
An initial basic feasible solution is given by,


XB = B–1b,
Where, B = I2, XB = (S1, S2)
i.e., (S1, S2) = I2 (4, 2) = (4, 2)
Initial Simplex Table
Zj = CB aj
Z1 – C1 = CB a1 – C1 = (0, 0)(1, 1)T – 3 = –3
Z2 – C2 = CB a2 – C2 = (0, 0)(1, –1)T – 2 = –2
Z3 – C3 = CB a3 – C3 = (0, 0)(1, 0)T – 0 = 0
Z4 – C4 = CB a4 – C4 = (0, 0)(0, 1)T – 0 = 0

Cj 3 2 0 0
CB B XB X1 X2 S1 S2 Min (XB/X1)
0 S1 4 1 1 1 0 4/1 = 4
0 S2 2 1 –1 0 1 2/1 = 2
Zj 0 0 0 0 0
Zj – Cj –3 –2 0 0

Since some Zj – Cj < 0, the current basic feasible solution is not
optimum.
Since, Z1 – C1= –3 is the most negative, the corresponding non-basic variable
X1 enters the basis.
The column corresponding to this X1 is called the key column.

Ratio = Min (XBi / Xir, Xir > 0)
= Min (4/1, 2/1) = 2, which corresponds to S2
∴ The leaving variable is the basic variable S2. This row is called the key
row. Convert the leading element X21 to unity and all other elements in its
column, i.e., (X1), to zero by using the formula:
New element = Old element – (Product of elements in key row and key column) / (Key element)
To apply this formula, first we find the ratio, namely
(The element to be converted to zero) / (Key element) = 1/1 = 1
Apply this ratio for the number of elements that are converted in the key
row. Multiply this ratio by key row element shown as follows:
1×2
1×1
1 × –1
1×0
1×1
Now, subtract this element from the old element. The element to be converted
into zero is called the old element row. Finally, we have
4 – 1 × 2 = 2
1 – 1 × 1 = 0
1 – 1 × (–1) = 2
1 – 1 × 0 = 1
0 – 1 × 1 = –1
 The improved basic feasible solution is given in the following simplex
table.
First Iteration
Cj 3 2 0 0
CB B XB X1 X2 S1 S2 Min (XB/X2)
0 S1 2 0 2 1 –1 2/2 = 1
3 X1 2 1 –1 0 1 –
Zj 6 3 –3 0 3
Zj – Cj 0 –5 0 3

Since, Z2 – C2 is the most negative, X2 enters the basis.


To find Min (XBi / Xi2, Xi2 > 0)
= Min (2/2, –) = 1
(Negative or zero values are not considered.)
This gives the outgoing variables. Convert the leaving element into one. This
is done by dividing all the elements in the key row by 2. The remaining elements
are converted to zero by using the following formula.

Here, –1/2 is the common ratio. Put this ratio 5 times and multiply each
ratio by the key row element:
–1/2 × 2
–1/2 × 0
–1/2 × 2
–1/2 × 1
–1/2 × –1
Subtract this from the old element. (All the row elements which are converted
into zero are called the old elements.)
2 – (–1/2 × 2) = 3
1 – (–1/2 × 0) = 1
–1 – (–1/2 × 2) = 0
0 – (–1/2 × 1) = 1/2
1 – (–1/2 × –1) = 1/2
Second Iteration
Cj 3 2 0 0
CB B XB X1 X2 S1 S2
2 X2 1 0 1 1/2 –1/2
3 X1 3 1 0 1/2 1/2
Zj 11 3 2 5/2 1/2
Zj – Cj 0 0 5/2 1/2
Since all Zj – Cj ≥ 0, the solution is optimum. The optimal solution is Max
Z = 11, X1 = 3 and X2 = 1.
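For comparison, the simplex_max sketch given after the algorithm steps above (a hypothetical helper, not part of the book) reproduces this result when it is fed the data of Example 3.7; the snippet assumes that sketch has already been run.

x, z = simplex_max(c=[3, 2], A=[[1, 1], [1, -1]], b=[4, 2])
print(x, z)   # expected: approximately X1 = 3, X2 = 1 and Max Z = 11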
Example 3.8: Solve the LPP
Max Z = 3x1 + 2x2
Subject to, 4x1 + 3x2 ≤ 12
4x1 + x2 ≤ 8
4x1 – x2 ≤ 8
x1, x2 ≥ 0
Solution: Convert the inequality of the constraint into an equation by adding slack
variables S1, S2, S3.
Max Z= 3x1 + 2x2 + 0S1 + 0S2 + 0S3
Subject to, 4x1 + 3x2 + S1 = 12
4x1 + x2 + S2 = 8
4x1 – x2 + S3 = 8
x1, x2, S1, S2, S3 ≥ 0

In matrix form,
      x1  x2  S1  S2  S3
    [  4   3   1   0   0 ]        [ 12 ]
A = [  4   1   0   1   0 ],   b = [  8 ]
    [  4  –1   0   0   1 ]        [  8 ]
Initial Table
Cj 3 2 0 0 0
CB Basis xB x1 x2 S1 S2 S3 Min (xB/x1)
0 S1 12 4 3 1 0 0 12/4 = 3
0 S2 8 4 1 0 1 0 8/4 = 2
0 S3 8 4 –1 0 0 1 8/4 = 2
Zj 0 0 0 0 0 0
Zj – Cj –3 –2 0 0 0

Since Z1 – C1 is most negative, x1 enters the basis, and Min (xB/xi1, xi1 > 0) =
min (3, 2, 2) = 2 gives S3 as the leaving variable.
Convert the leading element into 1, by dividing key row element by 4 and
the remaining elements into 0.

Initial Simplex Table


Cj 3 2 0 0 0

CB Basis xB x1 x2 S1 S2 S3 Min (xB/x2)

0 S1 4 0 4 1 0 –1 4/4 = 1

0 S2 0 0 2 0 1 –1 0/2 = 0

3 x1 2 1 –1/4 0 0 ¼ —

Zj 6 3 –3/4 0 0 ¾
Zj – Cj 0 –11/4 0 0 ¾

The new elements are computed as follows (S2 row on the left, S1 row on the right):
8 – (4/4) × 8 = 0        12 – (4/4) × 8 = 4
4 – (4/4) × 4 = 0          4 – (4/4) × 4 = 0
1 – (4/4) × (–1) = 2       3 – (4/4) × (–1) = 4
0 – (4/4) × 0 = 0          1 – (4/4) × 0 = 1
1 – (4/4) × 0 = 1          0 – (4/4) × 0 = 0
0 – (4/4) × 1 = –1         0 – (4/4) × 1 = –1
Since Z2 – C2 = –11/4 is the most negative, x2 enters the basis.
To find the outgoing variable, find Min (xB/xi2, xi2 > 0)
= Min (4/4, 0/2, –) = 0

First Iteration
Therefore, S2 leaves the basis. Convert the leading element into 1 by dividing the
key row elements by 2 and make the remaining elements in that column as zero using
the formula.
New element = Old element – (Product of elements in key row and key column) / (Key element)

Cj 3 2 0 0 0

CB Basis xB x1 x2 S1 S2 S3 Min (xB/S3)
0 S1 4 0 0 1 –2 1 4/1 = 4
2 x2 0 0 1 0 1/2 –1/2 —
3 x1 2 1 0 0 1/8 1/8 2/(1/8) = 16

Zj 6 3 2 0 11/8 – 5/8
Zj – Cj 0 0 0 11/8 –5/8

Second Iteration
Since Z5 – C5 = –5/8 is most negative, S3 enters the basis and
Min (xB/Si3, Si3 > 0) = Min (4/1, 16) = 4.
Therefore, S1 leaves the basis. Convert the leading element into one and remaining
elements as zero.
Third Iteration

Cj 3 2 0 0 0

CB Basis xB x1 x2 S1 S2 S3
0 S3 4 0 0 1 –2 1
2 x2 2 0 1 1/2 –1/2 0
3 x1 3/2 1 0 –1/8 3/8 0
Zj 17/2 3 2 5/8 1/8 0
Zj – Cj 0 0 5/8 1/8 0
Since all Zj – Cj ≥ 0, the solution is optimum and it is given by x1 = 3/2, x2 =
2 and Max Z = 17/2.

Example 3.9: Using simplex method solve the LPP.


Max Z = x1 + x2 + 3x3
Subject to, 3x1 + 2x2 + x3 ≤ 3
2x1 + x2 + 2x3 ≤ 2
x1, x2, x3 ≥ 0
Solution: Rewrite the inequality of the constraints into an equation by adding
slack variables.
Max Z= x1 + x2 + 3x3 + 0S1 + 0S2
Subject to, 3x1 + 2x2 + x3 + S1 = 3
2x1 + x2 + 2x3 + S2 = 2
Initial basic feasible solution is,
x 1 = x2 = x3 = 0
S1 = 3, S2 = 2 and Z = 0
x1 x2 x3 S1 S2
3 2 1 1 0
2 1 2 0 1
1 1 3 0 0

Cj 1 1 3 0 0

CB Basis xB x1 x2 x3 S1 S2 Min (xB/x3)

0 S1 3 3 2 1 1 0 3/1 = 3
0 S2 2 2 1 2 0 1 2/2 = 1

Zj 0 0 0 0 0 0
Zj – Cj –1 –1 –3 0 0


Since Z3 – C3 = –3 is the most negative, the variable x3 enters the basis. The
column corresponding to x3 is called the key column.
To determine the key row or leaving variable, find Min (xB/xi3, xi3 > 0) = Min (3/1, 2/2) = 1.
Therefore, the leaving variable is the basic variable S2, the row is called the key
row and the intersection element 2 is called the key element.
Convert this element into one by dividing each element in the key row by 2
and the remaining elements in that key column as zero using the formula
New element = Old element – (Product of elements in key row and key column) / (Key element)
First Iteration
Cj 1 1 3 0 0
CB Basis xB x1 x2 x3 S1 S2
0 S1 2 2 3/2 0 1 –1/2


3 x3 1 1 1/2 1 0 1/2

Zj 3 3 3/2 3 0 3/2
Z j – Cj 2 1/2 0 0 3/2

Since all Zj – Cj0, the solution is optimum and it is given by x1 = 0, x2 = 0,


x3 = 1, Max Z = 3.

Example 3.10: Use simplex method to solve the LPP.


Min Z = x2 – 3x3 + 2x5
Subject to, 3x2 – x3 + 2x5 ≤ 7
–2x2 + 4x3 ≤ 12
–4x2 + 3x3 + 8x5 ≤ 10
x2, x3, x5 ≥ 0
Solution: Since the given objective function is of minimization, we shall convert
it into maximization using Min Z = –Max(–Z) = –Max Z*
Max Z* = – x2 + 3x3 – 2x5
Subject to, 3x2 – x3 + 2x5 ≤ 7
–2x2 + 4x3 ≤ 12
–4x2 + 3x3 + 8x5 ≤ 10
We rewrite the inequality of the constraints into an equation by adding slack
variables S1, S2, S3 and the standard form of LPP becomes.
Max Z* = –x2 + 3x3 – 2x5 + 0S1 + 0S2 + 0S3
Subject to, 3x2 – x3 + 2x5 + S1 = 7
–2x2 + 4x3 + S2 = 12
–4x2 + 3x3 + 8x5 + S3 = 10
x2, x3, x5, S1, S2, S3 ≥ 0
 The initial basic feasible solution is given by S1 = 7, S2 = 12, S3 = 10. (x2 =
x3 = x5 = 0)

Initial Table
Cj –1 3 –2 0 0 0
CB Basis xB x2 x3 x5 S1 S2 S3 Min (xB/x3)

0 S1 7 3 –1 2 1 0 0 —

0 S2 12 –2 4 0 0 1 0 12/4 = 3

0 S3 10 –4 3 8 0 0 1 10/3 = 3.33

Zj 0 0 0 0 0 0

Zj – Cj 1 –3 2 0 0 0


Since Z2 – C2 = –3 < 0, the solution is not optimum.
The incoming variable is x3 (key column) and the outgoing variable (key row)
is given by,
Min (xB/xi3, xi3 > 0) = Min (–, 12/4, 10/3) = 3.
Hence, S2 leaves the basis.
First Iteration
Cj –1 3 –2 0 0 0

CB B xB x2 x3 x5 S1 S2 S3 Min (xB/x2)
0 S1 10 5/2 0 2 1 1/4 0 10/(5/2) = 4
3 x3 3 –1/2 1 0 0 1/4 0 —
0 S3 1 –5/2 0 8 0 –3/4 1 —

Zj 9 –3/2 3 0 0 3/4 0

Zj – Cj –1/2 0 2 0 3/4 0

Since Z1 – C1 < 0, the solution is not optimum. Improve the solution by allowing
the variable x2 to enter into the basis and the variable S1 to leave the basis.
Second Iteration
Cj –1 3 –2 0 0 0

CB B xB x2 x3 x5 S1 S2 S3

–1 x2 4 1 0 4/5 2/5 1/10 0
3 x3 5 0 1 2/5 1/5 3/10 0
0 S3 11 0 0 10 1 –1/2 1

Zj 11 –1 3 2/5 1/5 8/10 0


Zj – Cj 0 0 12/5 1/5 8/10 0
Since all Zj – Cj0, the solution is optimum. Linear Programming
and Simplex Method
 The optimal solution is given by Max Z* = 11
x 2 = 4, x3 = 5, x5 = 0
 Min Z = – Max (–Z) = – 11 NOTES
 Min Z = –11, x2 = 4, x3 = 5, x5 = 0.
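As an illustrative cross-check added here (not part of the original solution, and assuming SciPy is available), the result can be confirmed numerically; the comment also restates the identity Min Z = –Max(–Z) used above.

from scipy.optimize import linprog

A = [[3, -1, 2], [-2, 4, 0], [-4, 3, 8]]
b = [7, 12, 10]
c = [1, -3, 2]                     # Z = x2 - 3x3 + 2x5

res = linprog(c, A_ub=A, b_ub=b)   # variables are >= 0 by default
print(res.fun, res.x)              # expected: about -11 at (4, 5, 0)
# At this point -Z = 11, i.e. Max(-Z) = 11, so Min Z = -Max(-Z) = -11 as in the table.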
Example 3.11: Solve the following LPP using simplex method.
Max Z = 15x1 + 6x2 + 9x3 + 2x4
Subject to, 2x1 + x2 + 5x3 + 6x4 ≤ 20
3x1 + x2 + 3x3 + 25x4 ≤ 24
7x1 + x4 ≤ 70
x1, x2, x3, x4 ≥ 0
Solution: Rewriting the inequality of the constraint into an equation by adding
slack variables S1, S2 and S3, the standard form of LPP becomes.
Max Z = 15x1 + 6x2 + 9x3 + 2x4 + 0S1 + 0S2 + 0S3
Subject to, 2x1 + x2 + 5x3 + 6x4 + S1 = 20
3x1 + x2 + 3x3 + 25x4 + S2 = 24
7x1 + x4 + S3 = 70
x1, x2, x3, x4, S1, S2, S3 ≥ 0
The initial basic feasible solution is S1 = 20, S2 = 24, S3 = 70 (x1 = x2 = x3 =
x4 = 0 non-basic)
The initial simplex table is given by:
Cj 15 6 9 2 0 0 0

CB Basis xB x1 x2 x3 x4 S1 S2 S3 Min (xB/x1)
0 S1 20 2 1 5 6 1 0 0 20/2 = 10
0 S2 24 3 1 3 25 0 1 0 24/3 = 8
0 S3 70 7 0 0 1 0 0 1 70/7 = 10

Zj 0 0 0 0 0 0 0 0
Z j – Cj –15 –6 –9 –2 0 0 0


 s some of Zj – Cj  0 the current basic feasible solution is not optimum.
Z1 – C1 = –15 is the most negative value and hence x1 enters the basis and the
variable S2 leaves the basis.

First Iteration
Cj 15 6 9 2 0 0 0
CB Basis xB x1 x2 x3 x4 S1 S2 S3 Min (xB/x2)
0 S1 4 0 1/3 3 –32/3 1 –2/3 0 4/(1/3) = 12
15 x1 8 1 1/3 1 25/3 0 1/3 0 8/(1/3) = 24
0 S3 14 0 – 7/3 –7 – 172/3 0 – 7/3 1 —
Zj 120 15 5 15 125 0 5 0
Z j – Cj 0 –1 6 123 0 5 0

Since Z2 – C2 = –1 < 0 the solution is not optimal therefore, x2 enters the basis and the
basic variable S1 leaves the basis.
Second Iteration
Cj 15 6 9 2 0 0 0
CB B xB x1 x2 x3 x4 S1 S2 S3
6 x2 12 0 1 9 –32 3 –2 0
15 x1 4 1 0 –2 57/3 –1 1 0
0 S3 42 0 0 14 –132 7 –7 1
Zj 132 15 6 24 93 3 3 0
Z j – Cj 0 0 15 91 3 3 0

Since all Zj – Cj  0, the solution is optimal and is given by,


Max Z= 132, x1 = 4, x2 = 12, x3 = 0,
x4 = 0.

Check Your Progress


7. Write the characteristics of the standard form.
8. Explain the characteristics of the canonical form.
9. Define simplex method.
10. How is a leaving element converted to unity in a simplex algorithm?

3.5 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Simple linear programming problem with two decision variables can be


easily solved by graphical method.
2. A set of values X1, X2, ..., Xn which satisfies the constraints of the LPP is
called its solution.

3. Any solution to a LPP which satisfies the non-negativity restrictions of the
LPP is called its feasible solution.
Any feasible solution which optimizes (minimizes or maximizes) the objective
function of the LPP is called its optimum solution.
4. Given a system of m linear equations with n variables (m < n), any solution
which is obtained by solving m variables keeping the remaining n – m
variables zero is called a basic solution.
5. Given a system of m linear equations with n variables (m < n), any solution
which is obtained by solving m variables keeping the remaining n – m
variables zero is called a basic solution. Such m variables are called basic
variables and the remaining variables are called non-basic variables.
6. Basic feasible solutions are of two types:
(i) Non-Degenerate: A non-degenerate basic feasible solution is the basic
feasible solution which has exactly m positive Xi (i = 1, 2, ..., m), i.e.,
none of the basic variables is zero.
(ii) Degenerate: A basic feasible solution is said to be degenerate if one or
more basic variables are zero.
If the value of the objective function Z can be increased or decreased
indefinitely, such solutions are called unbounded solutions.
7. Characteristics of the Standard Form
(i) The objective function is of maximization type.
(ii) All constraints are expressed as equations.
(iii) Right hand side of each constraint is non-negative.
(iv) All variables are non-negative.
8. Characteristics of the Canonical Form
(i) The objective function is of maximization type.
(ii) All constraints are of ‘≤’ type.
(iii) All variables Xi are non-negative.
9. Simplex method is an iterative procedure for solving LPP in a finite number
of steps. This method provides an algorithm which consists of moving from
one vertex of the region of feasible solution to another in such a manner that
the value of the objective function at the succeeding vertex is less or more
as the case may be that at the previous vertex.
10. The leaving element is converted to unity by dividing the key equation by
the key element and all other elements in its column to zero by using the
formula:
New element = Old element – (Product of elements in key row and key column) / (Key element)
3.6 SUMMARY

 Simple linear programming problem with two decision variables can be


easily solved by graphical method.
 A set of values X1, X2, ..., Xn which satisfies the constraints of the LPP is
called its solution.
 Any solution to a LPP which satisfies the non-negativity restrictions of the
LPP is called its feasible solution.
Any feasible solution which optimizes (minimizes or maximizes) the objective
function of the LPP is called its optimum solution.
 Given a system of m linear equations with n variables (m < n), any solution
which is obtained by solving m variables keeping the remaining n – m
variables zero is called a basic solution.
 A non-degenerate basic feasible solution is the basic feasible solution which
has exactly m positive Xi (i = 1, 2, ..., m), i.e., none of the basic variables
is zero.
 A basic feasible solution is said to be degenerate if one or more basic variables
are zero.
 Simplex method is an iterative procedure for solving LPP in a finite number
of steps. This method provides an algorithm which consists of moving from
one vertex of the region of feasible solution to another in such a manner that
the value of the objective function at the succeeding vertex is less or more
as the case may be that at the previous vertex.

3.7 KEY WORDS

 Graphical solution: The linear programming problems can be solved as


follows using the graphical solution and simplex method.
 Feasible solution: Any solution to a LPP which satisfies the non-negativity
restrictions of the LPP is called its feasible solution.
 Non-degenerate feasible solution: A non-degenerate basic feasible
solution is the basic feasible solution which has exactly m positive Xi (i =
1,2,…,m), i.e., none of the basic variables is zero.
 Degenerate feasible solution: A basic feasible solution is said to be
degenerate if one or more basic variables are zero.
 Simplex method: Simplex method is an iterative procedure for solving LPP
in a finite number of steps. This method provides an algorithm which consists
of moving from one vertex of the region of feasible solution to another in such
a manner that the value of the objective function at the succeeding vertex is
less or more as the case may be that at the previous vertex.
3.8 SELF-ASSESSMENT QUESTIONS AND EXERCISES
Short-Answer Questions

1. How to solve linear programming problem for graphical method?


2. Define the following terms:
(a) Feasible solution
(b) Optimum solution
(c) Basic and non-basic solution
3. What is the canonical form of a LPP?
4. Explain the characteristics of the standard form.
5. What are characteristics of the canonical form?
6. State about the simplex method.
7. Write the simplex algorithm.
Long-Answer Questions
1. Describe briefly the graphical solutions with the help of examples.
2. Discuss the types of basic feasible solutions.
3. Explain briefly the canonical and standard forms of LPP.
4. State the simplex method with appropriate examples.
5. Elaborate on the simplex algorithm giving examples.

3.9 FURTHER READINGS

Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New


Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.
Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.
Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata
McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi:
Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.
UNIT 4 ARTIFICIAL VARIABLE TECHNIQUES
Structure
4.0 Introduction
4.1 Objectives
4.2 Linear Programming Using Artificial Variable
4.3 Big M Method
4.4 Two-Phase Method
4.5 Answers to Check Your Progress Questions
4.6 Summary
4.7 Key Words
4.8 Self Assessment Questions and Exercises
4.9 Further Readings

4.0 INTRODUCTION

Artificial variable techniques consider the LPP in which constraints may also have
the ≥ and = signs, after ensuring that all bi ≥ 0. In such cases, the basis matrix
cannot be obtained as an identity matrix in the starting simplex table, therefore we
introduce a new type of variable called the artificial variable.
In operations research, the Big M method is a method of solving linear
programming problems using the simplex algorithm. The Big M method extends
the simplex algorithm to problems that contain “greater-than” constraints. It does
so by associating the constraints with large negative constants which would not be
part of any optimal solution, if one exists. Both the simplex method and the Big M
method are iterative procedures that reach a solution in a finite number of steps
using matrix operations. If at least one artificial variable appears in the basis at a
positive level and the optimality condition is satisfied, then the original problem has
no feasible solution; the solution satisfies the constraints but does not optimize the
objective function since it carries the very large penalty M. The Big M method
introduces surplus and artificial variables to convert all inequalities into equations.
The “Big M” refers to a large number associated with the artificial variables,
represented by the letter M.
Two-phase simplex method is another method to solve a given LPP involving
some artificial variables.
In this unit, you will study about the concepts of linear programming using
artificial variables, big M method and two-phase method problem.
4.1 OBJECTIVES
After going through this unit, you will be able to:
 Understand the significance of linear programming using artificial variables
 Know about the Big M method
 Solve problems using the two-phase method
4.2 LINEAR PROGRAMMING USING ARTIFICIAL VARIABLE
This section will consider the LPP in which constraints may also have the ≥ and =
signs, after ensuring that all bi ≥ 0. In such cases, the basis matrix cannot be obtained
as an identity matrix in the starting simplex table, therefore we introduce a new
type of variable called the artificial variable. These variables are fictitious and
cannot have any physical meaning. The artificial variable technique is merely a
device to get the starting basic feasible solution so that the simplex procedure may
be adopted as usual until the optimal solution is obtained. To solve such LPP,
there are following two methods:
(i) The Big M Method or the Method of Penalties
(ii) The Two-Phase Simplex Method

4.3 BIG M METHOD

The following steps are involved in solving an LPP using the Big M method.
Step 1: Express the problem in the standard form.
Step 2: Add non-negative artificial variables to the left side of each of the
equations corresponding to constraints of the type ≥ or =. However, the addition
of these artificial variables causes a violation of the corresponding constraints.
Therefore, we would like to get rid of these variables and not allow them to appear
in the final solution. This is achieved by assigning a very large penalty (–M for
maximization and M for minimization) in the objective function.
Step 3: Solve the modified LPP by the simplex method, until one of the
following three cases arises.
1. If no artificial variable appears in the basis and the optimality conditions are
satisfied, then the current solution is an optimal basic feasible solution.
2. If at least one artificial variable appears in the basis at the zero level and the
optimality condition is satisfied, then the current solution is an optimal basic
feasible solution (though a degenerated solution).

Self-Instructional
80 Material
3. If at least one artificial variable appears in the basis at the positive level and Artificial Variable
Techniques
the optimality condition is satisfied, then the original problem has no feasible
solution. The solution satisfies the constraints but does not optimize the
objective function since it contains a very large penalty M, and is called a
pseudo-optimal solution. NOTES
Note: While applying the simplex method, whenever an artificial variable happens to leave
the basis, we drop that artificial variable and omit all the entries corresponding to its column
from the simplex table.
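Before working through the examples by hand, it may help to see the penalty idea numerically. The sketch below is not the tableau procedure described above; it simply attaches a large numeric penalty M to an artificial variable and lets a solver drive that variable to zero. It assumes SciPy is available, and the tiny LP and the value M = 10^6 are made up purely for illustration.

from scipy.optimize import linprog

# Hypothetical LP: maximize 3x1 + 5x2 subject to x1 + x2 = 4, x1, x2 >= 0.
# An artificial variable A1 is attached to the equality constraint and
# penalized with a large numeric M in the (minimization) objective.
M = 1e6
c = [-3, -5, M]                  # linprog minimizes, so 3 and 5 are negated; +M penalizes A1
A_eq = [[1, 1, 1]]               # x1 + x2 + A1 = 4
b_eq = [4]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
print(res.x)                     # A1 (last entry) is driven to 0; x2 = 4 gives Max Z = 20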
Example 4.1: Use the penalty method to
Maximize Z = 3x1 + 2x2
Subject to the constraints,
2x1 + x2 ≤ 2
3x1 + 4x2 ≥ 12
x1, x2 ≥ 0
Solution: By introducing the slack variable S1 ≥ 0, the surplus variable S2 ≥ 0 and
the artificial variable A1 ≥ 0, the given LPP can be reformulated as:
Maximize Z = 3x1 + 2x2 + 0S1 + 0S2 – MA1
Subject to 2x1 + x2 + S1 = 2
3x1 + 4x2 – S2 + A1 = 12
The starting feasible solution is S1= 2, A1= 12.
Initial Table

Cj 3 2 0 0 –M
CB B xB x1 x2 S1 S2 A1 Min xB/x2

0 S1 2 2 1 1 0 0 2/1 = 2
–M A1 12 3 4 0 –1 1 12/4 = 3

Zj –12M –3M –4M 0 M –M


Zj – Cj – –3M – 3 –4M – 2 0 M 0

Since some of Zj – Cj < 0, the current feasible solution is not optimum.
Choose the most negative Zj – Cj = –4M – 2.
∴ The x2 variable enters the basis and the basic variable S1 leaves the basis.

Cj 3 2 0 0 –M
CB B xB x1 x2 S1 S2 A1
2 x2 2 2 1 1 0 0
NOTES –M A1 4 –5 0 –4 –1 1

Zj 4 – 4M 4 + 5M 2 2+4M M –M
Zj – Cj 5M + 1 0 4M+2 M 0

Since all Zj – Cj ≥ 0 and an artificial variable appears in the basis at a positive
level, the given LPP does not possess any feasible solution, but it does possess a
pseudo-optimal solution.
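For readers who want a quick cross-check, the infeasibility found above can also be detected with an off-the-shelf solver. The snippet below is only a verification sketch (it assumes SciPy is installed); linprog reports an infeasible status rather than a pseudo-optimal solution.

from scipy.optimize import linprog

# Example 4.1: maximize 3x1 + 2x2 subject to 2x1 + x2 <= 2, 3x1 + 4x2 >= 12, x1, x2 >= 0.
c = [-3, -2]                 # maximize -> minimize the negated objective
A_ub = [[2, 1],              # 2x1 +  x2 <= 2
        [-3, -4]]            # 3x1 + 4x2 >= 12  ->  -3x1 - 4x2 <= -12
b_ub = [2, -12]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.status, res.message)   # status 2: the problem is infeasible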
Example 4.2: Solve the following LPP.
Minimize Z = 4x1 + x2
Subject to 3x1 + x2 = 3
4x1 + 3x2 ≥ 6
x1 + 2x2 ≤ 3
x1, x2 ≥ 0
Solution: Since the objective function is minimization, we convert it into maximization
using,
Min Z = –Max (–Z)
Maximize Z = –4x1 – x2
Subject to 3x1 + x2 = 3
4x1 + 3x2 ≥ 6
x1 + 2x2 ≤ 3
x1, x2 ≥ 0
Convert the given LPP into the standard form by adding the artificial variables A1,
A2, the surplus variable S1 and the slack variable S2 to get the initial basic feasible
solution.
Maximize Z = –4x1 – x2 + 0S1 + 0S2 – MA1 – MA2
Subject to 3x1 + x2 + A1 = 3
4x1 + 3x2 – S1 + A2 = 6
x1 + 2x2 + S2 = 3
The starting feasible solution is A1 = 3, A2 = 6, S2 = 3.

Initial Solution
Cj –4 –1 –M 0 –M 0
xB
CB B xB x1 x2 A1 S1 A2 S2 Min
x2
–M A1 3 3 1 1 0 0 0 3/3 = 1
–M A2 6 4 3 0 –1 1 0 6/4 = 1.5
0 S2 3 1 2 0 0 0 1 3/1 = 3
Zj –9M –7M –4M –M M –M 0
Zj – Cj –7M + 4 –4M + 1 0 M 0 0

Since some of Zj – Cj < 0, the current feasible solution is not optimum. As


Z1 – C1 is the most negative, x1 enters the basis and the basic variable A2 leaves
the basis.
First Iteration
Cj –4 –1 –M 0 –M 0
x
CB B xB x1 x2 A1 S1 A2 S2 Min B
x1

–M A1 3/2 5/2 0 1 0 0 –1/2 3/5

–M A2 3/2 5/2 0 0 –1 1 –3/2 3/5

–1 x2 3/2 3/2 1 0 0 0 1.2 3


Zj –3M–3/2 –5M–1/2 –1 –M +M –M 2M–1/2
Zj – Cj –5M–7/2 0 0 M 0 2M–1/2

Since Z1 – C1 is negative, the current feasible solution is not optimum.


Therefore, the x1 variable enters the basis and the artificial variable A2 leaves the
basis.
Second Iteration
Cj –4 –1 –M 0 0
x
CB B xB x1 x2 A1 S1 S2 Min B
x1

–M A1 0 0 0 1 1 1 0

3 2 3
–M x1 1 0 0 –
5 5 5
6 1 4
–1 x2 0 1 0 –
5 5 5
Zj 18 –4 –1 –M 9
–M+
Zj – Cj 5 5

0 0 0 9
–M 
5
Since Z4 – C4 is the most negative, S1 enters the basis and the artificial variable A1
leaves the basis.
leaves the basis.
Third Iteration
Cj –4 –1 0 0
x
CB B xB x1 x2 S1 S2 Min B
x2

0 S1 0 0 0 1 1 0

3 1
–4 x1 1 0 0 –
5 5
–1 x2 6/5 0 1 0 1 6/5
Zj 18 –4 –1 0 –1/5
Zj – Cj 5 0 0 0 –1/5

Since Z4 – C4 is the most negative, S2 enters the basis and S1 leaves the
basis.
Fourth Iteration

Cj –4 –1 0 0
CB B xB x1 x2 S1 S2

0 S2 0 0 0 1 1
1
–4 x1 3/5 1 0 0
5
–1 x2 6/5 0 1 –1 0
Zj –18/5 –4 –1 1/5 0
Zj – C j 0 0 1/5 0

Since all Zj – Cj ≥ 0, the solution is optimum and is given by x1 = 3/5, x2 = 6/5
and Max Z = –18/5.
∴ Min Z = –Max (–Z) = 18/5.
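As a sanity check on the hand computation, the same LPP can be handed directly to a solver. This is only a verification sketch and assumes SciPy is installed; linprog handles the = and ≥ constraints itself, without explicit artificial variables.

from scipy.optimize import linprog

# Example 4.2: minimize 4x1 + x2
# subject to 3x1 + x2 = 3, 4x1 + 3x2 >= 6, x1 + 2x2 <= 3, x1, x2 >= 0.
c = [4, 1]
A_eq = [[3, 1]]; b_eq = [3]
A_ub = [[-4, -3],            # 4x1 + 3x2 >= 6  ->  -4x1 - 3x2 <= -6
        [1, 2]]              # x1 + 2x2 <= 3
b_ub = [-6, 3]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 2)
print(res.x, res.fun)        # expected: x1 = 3/5, x2 = 6/5, Min Z = 18/5 = 3.6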
Example 4.3: Solve by Big M method.
Maximize Z = x1 + 2x2 + 3x3 – x4
Subject to, x1 + 2x2 + 3x3 = 15
2x1 + x2 + 5x3 = 20
x1 + 2x2 + x3 + x4 = 10
Solution: Since the constraints are equations, introduce artificial variables A1,
A2 ≥ 0. The reformulated problem is given as follows.

Maximize Z = x1 + 2x2 + 3x3 – x4 – MA1 – MA2
Subject to, x1 + 2x2 + 3x3 + A1 = 15
2x1 + x2 + 5x3 + A2 = 20
x1 + 2x2 + x3 + x4 = 10
Initial solution is given by A1 = 15, A2 = 20 and x4 = 10.
Cj 1 2 3 –1 –M –M
xB
CB B xB x1 x2 x3 x4 A1 A2 Min
x3

–M A1 15 1 2 3 0 1 0 15/3 = 5
 –M A2 20 2 1 5 0 0 1 20/5 = 4

–1 x4 10 1 2 1 1 0 0 10/1 = 10

Zj – 35M – 3M – 3M – 8M –1 –M –M
– 10 – 1 – 2 –1
Z j – Cj – 3M – 2 – 3M – 4 – 8M – 4 0 0 0

Since Z3 – C3 is most negative, x3 enters the basis and the basic variable A2
leaves the basis.
First Iteration
Cj 1 2 3 –1 –M

x
CB B xB x1 x2 x3 x4 A1 Min xB
2

3 15
– M A1 3 –1/5 7/5 0 0 1 =
7/5 7
4 20
3 x3 4 2/5 1/5 1 0 0 =
1/ 5 1
6 30
–1 x4 6 3/5 9/5 0 1 0 =
9/5 9
1 3 7 6
Zj – 3M + 6 M   M  3 –1 –M
5 5 5 5
1 2 7 16
Z j – Cj M   M 0 0 0
5 5 5 5

Since Z2 – C2 is most negative, x2 enters the basis and the basic variable A1
leaves the basis.

Second Iteration
Cj 1 2 3 –1

CB B xB x1 x2 x3 x4 x
Min xB
1
1
2 x2 15/7  1 0 0 —
7
3 x3 25/7 3/7 0 1 0 25/3
6
– 1 x4 15/7 0 0 1 15/6
7
90 1
Zj 2 3 –1
7 7
Z j – Cj –6/7 0 0 0


Since Z1 – C1 = –6/7 is negative, the current feasible solution is not optimum.
Therefore, x1 enters the basis and the basic variable x4 leaves the basis.
Third Iteration
Cj 1 2 3 –1

CB B xB x1 x2 x3 x4

2 x2 15/6 0 1 0 1/6
3 x3 15/6 0 0 1 3/6
3 x1 15/6 1 0 0 7/6

Zj 15 1 2 3 3
Z j – Cj 0 0 0 4

Since all Zj – Cj ≥ 0, the solution is optimum and is given by x1 = x2 = x3 = 15/6
= 5/2, and Max Z = 15.
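The equality-constrained problem above can also be checked with a solver. The sketch below assumes SciPy is available; the non-negativity of the variables is supplied through the bounds argument.

from scipy.optimize import linprog

# Example 4.3: maximize x1 + 2x2 + 3x3 - x4 subject to the three equality constraints.
c = [-1, -2, -3, 1]                       # negate the coefficients to minimize
A_eq = [[1, 2, 3, 0],
        [2, 1, 5, 0],
        [1, 2, 1, 1]]
b_eq = [15, 20, 10]
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
print(res.x, -res.fun)                    # expected: x1 = x2 = x3 = 5/2, x4 = 0, Max Z = 15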
Example 4.4: Use the penalty method to solve the following LPP:
Minimize Z = 5x + 3y
Subject to, 2x + 4y ≤ 12
2x + 2y = 10
5x + 2y ≥ 10
x, y ≥ 0
Solution: First we convert the objective function from minimization to
maximization using
Min Z = – Max(– Z).
Rewrite the given LPP into standard form by adding the slack variable S1 ≥ 0, the
surplus variable S2 ≥ 0 and the artificial variables A1, A2 ≥ 0.

Maximize Z = – 5x – 3y + 0S1 + 0S2 – MA1 – MA2
Subject to, 2x + 4y + S1 = 12
2x + 2y + A1 = 10 NOTES
5x + 2y – S2 + A2 = 10
x, y, S1, S2, A1, A2 ≥ 0
Initial feasible solution is given by S1 = 12, A1 = 10 and A2 = 10
Since all Zj – Cj 0 (see Table below) and no artificial variable is in the basis, the
solution is optimum and is given by:
x = 4, y = 1, Max Z = – 23
Min Z = – Max (– Z) = 23
Table
Cj –5 –3 0 –M 0 –M
x
CB B xB x y S1 A1 S2 A2 Min xB

0 S1 12 2 4 1 0 0 0 12/2 = 6
–M A1 10 2 2 0 1 0 0 10/2 = 5
 –M A2 10 5 2 0 0 –1 1 10/5 = 2

Zj 20M –7M – 4M 0 –M M –M
Z j – Cj –7M + 5 – 4M + 3 0 0 M 0
x
Min yB

85
0 S1 8 0 16/5 1 0 2/5 = 5/2
16
–M A1 6 0 6/5 0 1 2/5 — 30/6 = 5
–5 x 2 1 2/5 0 0 – 1/5 — 10/2 = 5


Cj –5 –3 0 –M 0 –M
x
CB B xB x y S1 A1 S2 A2 Min xB

Zj –6M – 10 –5 – 6/5 M – 2 0 –M – 2/5 M + 1 —


2
Zj – Cj 0 – 6/5 M + 1 0 0  M 1
5

Artificial Variable xB
Techniques Min
S2

–3 y 5/2 0 1 5/16 0 1/8 — 20


 –M A1 3 0 0 – 3/8 1 – 1/4 — 12
NOTES –5 x 1 1 0 – 1/8 0 –1/4 — —

Zj – 3M – 25/2 –5 –3 –3/8M –M + 7/8 —


– 5/16 0 – 1/4M
Zj – Cj 0 0 3/8M 0 7/8 —
– 5/16 – M/4

–3 y 1 0 1 1/2 — 0 —
0 S2 12 0 0 – 3/2 — 1 —
–5 x 4 1 0 – 1/2 — 0 —

Zj – 23 –5 –3 1 — 0 —
Zj – Cj 0 0 1 — 0 —

4.4 TWO PHASE METHOD

The two-phase simplex method is another method to solve a given LPP involving
some artificial variables. The solution is obtained in two phases.
Phase I
In this phase, we construct an auxiliary LPP leading to a final simplex table
containing a basic feasible solution to the original problem.
Step 1: Assign a cost –1 to each artificial variable and a cost 0 to all other
variables and get a new objective function Z* = – A1 – A2 – A3...where Ai are
artificial variables.
Step 2: Write down the auxiliary LPP in which the new objective function is
to be maximized, subject to the given set of constraints.
Step 3: Solve the auxiliary LPP by the simplex method until one of the
following three cases arises:
(i) Max Z* < 0 and at least one artificial variable appears in the optimum
basis at the positive level.
(ii) Max Z* = 0 and at least one artificial variable appears in the optimum
basis at the zero level.
(iii) Max Z* = 0 and no artificial variable appears in the optimum basis.
In case (i), the given LPP does not possess any feasible solution, whereas in
cases (ii) and (iii), we go to Phase II.
Phase II
Use the optimum basic feasible solution of Phase I as a starting solution for the
original LPP. Assign the actual costs to the variable in the objective function and a

zero cost to every artificial variable in the basis at the zero level. Delete the artificial
variable column that is eliminated from the basis in Phase I from the table. Apply
the simplex method to the modified simplex table obtained at the end of Phase I till
an optimum basic feasible solution is obtained or till there is an indication of an
unbounded solution.
If, at the end of Phase I, all Zj – Cj ≥ 0 (so that an optimum basic feasible solution
to the auxiliary LPP has been attained) but Max Z* is negative and an artificial
variable remains in the basis at a positive level, then the original problem does not
possess any feasible solution.
Example 4.5: Use the two-phase simplex method to solve:
Maximize Z = 5x1 – 4x2 + 3x3
Subject to 2x1 + x2 – 6x3 = 20
6x1 + 5x2 + 10x3 ≤ 76
8x1 – 3x2 + 6x3 ≤ 50
x1, x2, x3 ≥ 0
Solution: Introducing the slack variables S1, S2 ≥ 0 and an artificial variable A1 ≥ 0
in the constraints of the given LPP, the problem is reformulated in the standard
form.
The initial basic feasible solution is given by A1 = 20, S1 = 76 and S2 = 50.
Phase I
Assigning a cost –1 to the artificial variable A1 and a cost 0 to the other variables,
the objective function of the auxiliary LPP becomes,
Maximize Z* = 0x1 + 0x2 + 0x3 + 0S1 + 0S2 – 1A1
Subject to 2x1 + x2 – 6x3 + A1 = 20
6x1 + 5x2 + 10x3 + S1 = 76
8x1 – 3x2 + 6x3 + S2 = 50
x1, x2, x3, S1, S2, A1  0
Cj 0 0 0 –1 0 0
xB
CB B xB x1 x2 x3 A1 S1 S2 Min
x1
– 1 A1 20 2 1 –6 1 0 0 20/2 = 10
0 S1 76 6 5 10 0 1 0 76/6 = 12.66
 0 S2 50 8 –3 6 0 0 1 50/8 = 6.25
Zj – 20 –2 –1 6 –1 0 0
Z j – Cj –2 –1 6 0 0 0

Cj 0 0 0 –1 0 0
xB
CB B xB x1 x2 x3 A1 S1 S2 Min
x2

 –1 A1 15/2 0 7/4 –15/2 1 0 –1/4 30/7


0 S1 77/2 0 29/4 11/2 0 1 –3/4 154/29
0 x1 25/4 1 –3/8 3/4 0 0 1/8 —
Zj –15/2 0 –7/4 15/2 –1 0 1/4
Z j – Cj 0 –7/4 15/2 0 0 1/4
0 x2 30/7 0 1 –30/7 4/7 0 –1/7
0 S1 52/7 0 1 256/7 –29/7 1 2/7
0 x1 55/7 1 0 –6/7 3/4 0 1/14
Zj 0 0 0 0 0 0 0
Z j – Cj 0 0 0 0 1 0

Since all Zj Cj 0, an optimum solution to the auxiliary LPP has been obtained.
Also, Max Z* 0 with no artificial variable in the basis. We now go to Phase II.
Phase II
Consider the final simplex table of Phase I. Consider the actual cost associated
with the original variables. Delete the artificial variable A1 column from the table as
it is eliminated in Phase I.
Cj 5 –4 3 0 0
CB B xB x1 x2 x3 S1 S2
–4 x2 30/7 0 1 – 30/7 0 – 1/7
0 S1 52/7 0 0 256/7 1 2/7
5 x1 55/7 1 0 –6/7 0 1/14
Zj 155/7 5 –4 90/7 0 13/4
Z j – Cj 0 0 0 69/7 0 13/14

Since all Zj – Cj 0, an optimum basic feasible solution has been reached. Hence,
an optimum feasible solution to the given LPP is x1 = 55/7, x2 = 30/7, x3 = 0 and
Max Z = 155/7.
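The two phases can also be mimicked numerically. The sketch below assumes SciPy; it first minimizes the artificial variable A1 (Phase I, equivalent to maximizing Z* up to sign) and then, having confirmed that a feasible start exists, optimizes the true objective (Phase II). This is only a didactic re-enactment, since linprog can handle the equality constraint directly.

from scipy.optimize import linprog

# Phase I: minimize the artificial variable A1 attached to 2x1 + x2 - 6x3 = 20.
# Decision vector: [x1, x2, x3, A1].
c1 = [0, 0, 0, 1]
A_eq = [[2, 1, -6, 1]]; b_eq = [20]
A_ub = [[6, 5, 10, 0],
        [8, -3, 6, 0]]; b_ub = [76, 50]
phase1 = linprog(c1, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                 bounds=[(0, None)] * 4)
print(phase1.fun)               # 0, i.e., Max Z* = 0: a feasible starting solution exists

# Phase II: optimize the actual objective over the original variables.
c2 = [-5, 4, -3]                # maximize 5x1 - 4x2 + 3x3 -> minimize the negation
phase2 = linprog(c2, A_ub=[[6, 5, 10], [8, -3, 6]], b_ub=[76, 50],
                 A_eq=[[2, 1, -6]], b_eq=[20], bounds=[(0, None)] * 3)
print(phase2.x, -phase2.fun)    # expected: x1 = 55/7, x2 = 30/7, x3 = 0, Max Z = 155/7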

Check Your Progress


1. What is an artificial variable?
2. State the two methods used to solve an LPP involving artificial variables.
3. Explain the term pseudo-optimal solution.
4. Discuss the Phase I of two-phase method.

4.5 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. The basis matrix cannot be obtained as an identity matrix in the starting
simplex table, therefore we introduce a new type of variable called the
artificial variable. These variables are fictitious and cannot have any physical
meaning.
2. To solve such LPP, there are following two methods:
(i) The Big M Method or the Method of Penalties
(ii) The Two-Phase Simplex Method
3. If at least one artificial variable appears in the basis at the positive level and
the optimality condition is satisfied, then the original problem has no feasible
solution. The solution satisfies the constraints but does not optimize the
objective function since it contains a very large penalty M, and is called a
pseudo-optimal solution.
4. In this phase, we construct an auxiliary LPP leading to a final simplex table
containing a basic feasible solution to the original problem.
Step 1: Assign a cost –1 to each artificial variable and a cost 0 to all other
variables and get a new objective function Z* = – A1 – A2 – A3...where Ai
are artificial variables.
Step 2: Write down the auxiliary LPP in which the new objective function is
to be maximized, subject to the given set of constraints.
Step 3: Solve the auxiliary LPP by the simplex method until either of the
following three cases arise:
(i) Max Z* < 0 and at least one artificial variable appears in the optimum
basis at the positive level.
(ii) Max Z* = 0 and at least one artificial variable appears in the optimum
basis at the zero level.
(iii) Max Z* = 0 and no artificial variable appears in the optimum basis.

4.6 SUMMARY

 The basis matrix cannot be obtained as an identity matrix in the starting
simplex table, therefore we introduce a new type of variable called the
artificial variable. These variables are fictitious and cannot have any physical
meaning.
 Add non-negative artificial variables to the left side of each of the equations
corresponding to constraints of the type ≥ or =. However, the addition of
these artificial variables causes a violation of the corresponding constraints.
Therefore, we would like to get rid of these variables and not allow them to
appear in the final solution. This is achieved by assigning a very large penalty
(–M for maximization and M for minimization) in the objective function.
 If at least one artificial variable appears in the basis at the positive level and
the optimality condition is satisfied, then the original problem has no feasible
solution. The solution satisfies the constraints but does not optimize the
objective function since it contains a very large penalty M, and is called a
pseudo-optimal solution.
 The two-phase simplex method is another method to solve a given LPP
involving some artificial variables. The solution is obtained in two phases.
 Since all Zj – Cj  0, an optimum basic feasible solution to the auxiliary LPP
has been attained. But since max Z* is negative and the artificial variable A1
appears in the basis at the positive level, the original problem does not
possess any feasible solution.

4.7 KEY WORDS

 Artificial variable: These variables are fictitious and cannot have any
physical meaning. The artificial variable technique is merely a device to get
the starting basic feasible solution so that the simplex procedure may be
adopted as usual until the optimal solution is obtained.
 Big M method: The big M method is a method of solving linear
programming problems using the simplex algorithm. The big M method
extends the simplex algorithm to problems that contain “greater-than”
constraints.
 Two phase method: The two phase simplex method is another method to
solve a given LPP involving some artificial variables.

4.8 SELF ASSESSMENT QUESTIONS AND EXERCISES
Short-Answer Questions
1. How are artificial variables used in linear programming?
2. Elaborate on the big M method.
3. Describe the two-phase method.
4. How is the solution obtained in Phase I of the two-phase method?
Long-Answer Questions
1. Explain linear programming using artificial variables.
2. Describe briefly the Big M method for LPP, giving examples.
3. Explain Phase I of the two-phase method with an appropriate example.
4. Elaborate on Phase II of the two-phase method, giving suitable examples.

4.9 FURTHER READINGS

Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New
Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.
Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.
Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata
McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi:
Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.

BLOCK - II
DUALITY AND INTEGER PROGRAMMING

UNIT 5 DUALITY IN LINEAR PROGRAMMING
Structure
5.0 Introduction
5.1 Objectives
5.2 Duality and Linear Programming
5.3 General Primal and Dual Pair
5.4 Formulating a Dual Problem
5.5 Dual Pair in Matrix Form
5.6 Duality Theorem
5.7 Complementary Slackness Theorem
5.8 Answers to Check Your Progress Questions
5.9 Summary
5.10 Key Words
5.11 Self Assessment Questions and Exercises
5.12 Further Readings

5.0 INTRODUCTION

The dual of a given Linear Programming (LP) is another LP that is derived from
the original (the primal) LP in the following schematic way: Each variable in the
primal LP becomes a constraint in the dual LP; Each constraint in the primal LP
becomes a variable in the dual LP; The objective direction is inversed – maximum
in the primal becomes minimum in the dual and vice-versa.
The weak duality theorem states that the objective value of the dual LP at
any feasible solution is always a bound on the objective of the primal LP at any
feasible solution (upper or lower bound, depending on whether it is a maximization
or minimization problem).
In fact, this bounding property holds for the optimal values of the dual and
primal LPs. The strong duality theorem states that, moreover, if the primal has an
optimal solution then the dual has an optimal solution too, and the two optima are
equal. These theorems belong to a larger class of duality theorems in optimization.
The strong duality theorem is one of the cases in which the duality gap (the gap
between the optimum of the primal and the optimum of the dual) is 0.
The duality theorem states that ‘for every maximization (or minimization)
problem in linear programming, there is a unique similar problem of minimization
(or maximization) involving the same data which describes the original problem’.
The original problem is referred to as the ‘Primal’. The ‘Dual’ of a dual problem is
the primal. Thus the primal and dual problems are replicas of each other. Further,
the maximum feasible value of the primal objective function equals the minimum
feasible value of the dual objective function. This means that the solutions of the
primal and the dual problems are related, which in fact yields several advantages.
Every LPP (called the primal) is associated with another LPP (called its dual).
Either of the problems can be considered as primal with the other as dual.
In this unit, you will study about the duality in linear programming, general
primal and dual pair, formulating a dual problem, dual pair in matrix form, duality
theorem, and complementary slackness theorem.

5.1 OBJECTIVES

After going through this unit, you will be able to:


 Understand the duality in linear programming
 Explain the general primal and dual pair
 Analyse the formulation of a dual problem
 Illustrate the dual pair in matrix form
 State the duality theorem
 Define the complementary slackness theorem

5.2 DUALITY AND LINEAR PROGRAMMING

For every given linear programming problem, there is another intimately related
linear programming problem referred to as its dual. The duality theorem states that
‘for every maximization (or minimization) problem in linear programming, there is
a unique similar problem of minimization (or maximization) involving the same data
which describes the original problem’. The original problem is referred to as the
‘Primal’. The ‘Dual’ of a dual problem is the primal. Thus the primal and dual
problems are replicas of each other. Further, the maximum feasible value of the
primal objective function equals the minimum feasible value of the dual objective
function. This means that the solutions of the primal and the dual problems are
related, which in fact yields several advantages.
The transformation of a given primal problem into a dual problem involves the
following considerations:
(1) If the objective of the primal is maximization, the objective of the dual is
minimization.
(2) The primal has m-constraints while its dual has m-unknowns.
(3) The primal has n-unknowns while its dual has n-constraints.

Duality in Linear (4) The n-coefficients of the objective function of primal (Cj) become the n-
Programming
constant terms (bi) of its dual.
(5) The m-constant terms of the primal (bi) become the m-constant terms of
the objective function (Cj) of its dual.
NOTES (6) The coefficients of the variables of the primal are transformed in their position
in the dual. This means that the first column of the coefficients in the primal
becomes the first row in the dual, the second column becomes the second
row and so on.
(7) The n-variables (Xn) of the primal are replaced by the m new variables (Ym)
of its dual. This change affects the system of restrictions as well as the
objective function.
(8) The sign of the inequalities in the set of restrictions of the primal (<) is
reversed in the set of restrictions in its dual (>). In other words, if the
inequalities in the primal are of the type <, then, they are of > type in the
dual.
(9) The sign of the inequalities restricting the variable (Xj) to non-negative
values in the primal is equal to the inequality sign of the new variable
( Yj)of its dual.
(10) For writing the dual of the given maximization problem, we should first
ensure that all the constraint inequalities are of the < type and for writing the
dual of the given minimization problem, the constraint inequalities should be
of the > type. We can see the application of these considerations with the
help of given examples.
Duality in Linear Programming
Every LPP (called the primal) is associated with another LPP (called its dual).
Either of the problems can be considered as primal with the other as dual.
The importance of the duality concept is due to two main reasons:
(i) If the primal contains a large number of constraints and a smaller number of
variables, the labour of computation can be considerably reduced by converting it
into the dual problem and then solving it.
(ii) The interpretation of the dual variables from the cost or economic point of view
proves extremely useful in making future decisions in the activities being programmed.
Formulation of Dual Problems
For formulating a dual problem, we first write the problem in the canonical form.
The following changes are used in formulating the dual problem:
(1) Change the objective function of maximization in the primal into
minimization in the dual, and vice versa.
(2) The number of variables in the primal equals the number of constraints in
the dual, and vice versa.

(3) The cost coefficients C1, C2 ... Cn in the objective function of the primal Duality in Linear
Programming
should be the RHS constant of the constraints in the dual and vice versa.
(4) In forming the constraints for the dual, we consider the transpose of the
body matrix of the primal problem.
NOTES
(5) The variables in both the problems are non-negative.
(6) If the variable in the primal is unrestricted in sign, then the corresponding
constraint in the dual will be an equation, and vice versa.
Definition of the Dual Problem
A dual problem refers to a linear program in which the objective function is a linear
combination of m values that are the limits in the m constraints of the primal problem.
Let the primal problem be:
Max Z = C1x1 + C2x2 + ... + Cnxn

Subject to, a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm
x1, x2, ..., xn ≥ 0

Dual: The dual problem is defined as,


Min Z = b1w1 + b2w2 + ... + bmwm
Subject to a11w1 a21w2 K am1wm C1
a12 w1 a22 w2 K am 2 wm C2
M
a1n w1 a2 n w2 K amn wn Cn
w1 , w2 L wm 0

where w1, w2, w3 ... wm are called dual variables.
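The mechanical part of this transformation (transpose the body matrix, swap the objective coefficients and the right-hand sides, and reverse the optimization direction) can be expressed in a few lines of code. The sketch below assumes NumPy and uses made-up primal data purely for illustration.

import numpy as np

# Primal in canonical form: maximize C^T x subject to A x <= b, x >= 0.
A = np.array([[2.0, 1.0, 1.0],      # hypothetical body matrix (m = 2, n = 3)
              [3.0, 4.0, 2.0]])
b = np.array([10.0, 24.0])          # right-hand sides
C = np.array([5.0, 4.0, 3.0])       # objective coefficients

# Dual: minimize b^T w subject to A^T w >= C, w >= 0.
A_dual = A.T                        # columns of the primal become rows of the dual
b_dual = C                          # primal costs become dual right-hand sides
C_dual = b                          # primal right-hand sides become dual costs
print(A_dual, b_dual, C_dual)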


Example 5.1: Write the dual of the primal LP problem given as follows:
Max Z = x1+ 2x2 + x3
Subject to, 2x1 + x2 – x3 ≤ 2
–2x1 + x2 – 5x3 ≥ –6
4x1 + x2 + x3 ≤ 6
x1, x2, x3 ≥ 0
Solution: Since the problem is not in the canonical form, we interchange the
inequality of the second constraint.
Max Z = x1+ 2x2 + x3
Subject to, 2x1 + x2 – x3 ≤ 2
2x1 – x2 + 5x3 ≤ 6
4x1 + x2 + x3 ≤ 6
and x1, x2, x3 ≥ 0

Dual: Let w1, w2, w3 be the dual variables.


Min Z = 2w1+ 6w2 + 6w3

Subject to, 2w1 + 2w2 + 4w3 ≥ 1
w1 – w2 + w3 ≥ 2
–w1 + 5w2 + w3 ≥ 1
w1, w2, w3 ≥ 0

Example 5.2: Write the dual of the following LPP.


Min Z = 2x2 + 5x3
Subject to, x1 x2 2
2 x1 x2 6 x3 6
x1 x2 3 x3 4
x1 , x2 , x3 0
Solution: Since the given primal problem is not in the canonical form, we interchange
the inequality of the constraint. Also, the third constraint is an equation. This equation
can be converted into two inequations.
Min Z = 0x1 + 2x2 + 5x3
Subject to, x1 x2 0 x3 2
2 x1 x2 6 x3 6
x1 x2 3x3 4
x1 x2 3x3 4
x1 , x2 , x3 0

Again, on rearranging the constraint, we have,


Min Z = 0x1 + 2x2 + 5x3
Subject to,
x1  x2  0 x3  2
2 x1  x2  6 x3  6

x1  x2  3 x3  4
 x1  x2  3 x3  4
x1 , x2 , x3  0
Dual: Since there are four constraints in the primal, we have four dual Duality in Linear
Programming
variables, namely w1, w2, w3, w3
Max Z = 2w1 – 6w2 + 4w3 – 4w3
Subject to w1 2w2 w3 w3 0 NOTES
w1 w2 w3 w3 2
0 w1 6 w2 3w3 3w3 5
w1 , w2 , w3 , w3 0
Let w3 w3 w3

Max Z 2 w1 6w2 4( w3 w3 )

Subject to w1 2 w2 ( w3 w3 ) 0
w1 w2 ( w3 w3 ) 2
Finally, we have, 0 w1 6w2 3( w3 w3 ) 5
Max Z 2 w1 6w2 4 w3

Subject to w1 2w2 w3 0
w1 w2 w3 2
0w1 6 w2 3w3 5
w1 , w2 0, w3 is unrestricted.

Example 5.3: Find the dual of the LPP given as follows:


Max Z = 3x1– x2 + x3

Subject to 4 x1 x2 8
8 x1 x2 3x3 12
5 x1 6 x3 13
x1 , x2 , x3 0

Solution: Since the problem is not in the canonical form, we interchange the
inequality of the second constraint.
Max Z = 3x1– x2 + x3

Subject to 4 x1 x2 0 x3 8
8 x1 x2 3 x3 12
5 x1 0 x2 6 x3 13
x1 , x2 , x3 0

Max Z = Cx
Subject to Ax B
x 0
NOTES
x1 8
C (3 11) x2 b 12
x3 13

4 1 0
A 8 1 3
5 0 6

Dual: Let w1, w2, w3 be the dual variables. The dual problem is:
Min Z = bTW
Subject to ATW  CT and W  0

w1
i.e., Min Z = (8 12 13) w2
w3

4 8 5 w1 3
Subject to 1 1 0 w2 1
0 3 6 w3 1

Min Z1 = 8w1 – 12w2 + 13w3


Subject to 4 w1  8w2  5w3  3
 w1  w2  0w3  1
0 w1  3w2  6 w3  1
w1 , w2 , w3  0
Example 5.4: Give the dual of the problem given as follows:
Max Z = x + 2y

Subject to 2x 3 y 4
3x 4 y 5
x 0 and y unrestricted.

Duality in Linear
Solution: Since the variable y is unrestricted, it can be expressed as y y y , Programming
y,y 0 . On reformulating the given problem, we have,

Max Z x 2( y y )
NOTES
Subject to 2 x  3 ( y   y)  4
3 x  4 ( y  y)  5

3 x  4 ( y  y)  5
x, y , y   0
Since the problem is not in the canonical form, we rearrange the constraints.
Max Z = x + 2y – 2y
Subject to 2x 3y 3y 4
3x 4 y 4y 5
3x 4 y 4y 5
Dual: Since there are three variables and three constraints, in the dual we
have three variables namely, w1, w2, w2.
Min Z 4 w1 5w2 5 w2

Subject to 2w1 3w2 3w2 1


3w1 4w2 4w2 2
3w1 4w2 4w2 2
w1 , w2 , w2 0
Let w2 = w2 – w2, so that the dual variable w2 is unrestricted in sign.
Hence, the dual is:
Min Z 4 w1 5 ( w2 w2 )

Subject to 2 w1 3 ( w2 w2 ) 1
3w1 4 ( w2 w ) 2
3w1 4 ( w2 w ) 2

i.e., Min Z = –4w1 + 5w2

Subject to 2 w1 3w2 1
3w1 4 w2 2
3w1 4 w2 2

Duality in Linear w1  0 and w2 is unrestricted.
Programming
i.e., Min Z = –4w1 + 5w2
Subject to 2 w1 3w2 1
NOTES 3w1 4 w2 2
3w1 4 w2 2
i.e., Min Z = –4w1 + 5w2
Subject to –2w1 + 3w2  1
–3w1 + 4w2  2, w1  0 and w2 is unrestricted.
Example 5.5: Write the dual of the following primal LPP.
Min Z = 4x1 + 5x2 – 3x3

Subject to x1 x2 x3 22
3 x1 5 x2 2 x3 65
x1 7 x2 4 x3 120
x1 + x2  0 and x3 is unrestricted.
Solution: Since the variable x3 is unrestricted, x3 x3 x3 . Also, bring the
problem into canonical form by rearranging the constraints.
Min Z = 4x1 + 5x2 – 3 ( x3 x3 )

Subject to x1 x2 ( x3 x3 ) 22
x1 x2 x3 x3 22
3x1 5 x2 2 ( x3 x3 ) 65
x1 7 x2 4 ( x3 x3 ) 120
x1 , x2 , x3 x3 0

Min Z = 4x1 + 5x2 – 3 x3 3x3


Subject to x1 x2 x3 x3 22
x1 x2 x3 x3 22
3 x1 5 x2 2 x3 2 x3 65
x1 7 x2 4 x3 4 x3 120
x1 , x2 , x3 x3 0
Dual: Since there are four constraints in the primal problem, in the dual
there are four variables, namely w1, w1, w2 , w3 so that the dual is given by:
Max Z 22( w1 w1 ) 65w2 120 w3

Duality in Linear
Subject to w1 w1 3w2 w3 4 Programming

w1 w1 5w2 7 w3 5
w1 w1 2w2 4 w3 3
NOTES
w1 w1 2 w2 4 w3 3
w1 , w1 , w2 , w3 0

Let w1 w1 w11 , i.e., the variable w1 is unrestricted.


i.e., Max Z 22( w1 w1 ) 65w2 120 w3

Subject to w1 w1 3w2 w3 4
w1 w1 5w2 7 w3 5
( w1 w1 ) 2w2 4 w3 3
( w1 w1 ) 2w2 4 w3 3

i.e., Max Z 22 w1 65w2 120 w3

Subject to w1 3w2 w3 4
w1 5w2 7 w3 5
w1 2 w2 4 w3 3
w1 2 w2 4 w3 3
Thus, we have,
Min Z 22 w1 65w2 120 w3

Subject to w1 3w2 4 w3 4
w1 5w2 7 w2 5
w1 2 w2 4 w3 3
w2, w3  0 and w1 is unrestricted.
Important Results in Duality
1. The dual of the dual is primal.
2. If one is a maximization problem, then the other is a minimization one.
3. The necessary and sufficient condition for any LPP and its dual to have an
optimal solution is that both must have a feasible solution.
4. The fundamental duality theorem states that if either the primal or the dual
problem has a definite optimal solution, then the other problem also has a
definite optimal solution and the maximum values of the objective function
in both the problems are the same, i.e., Max Z = Min Z. The solution of the

Duality in Linear other problem can be read from the Zj – Cj row below the columns of slack
Programming
and surplus variables.
5. The existence theorem states that if either problem has an unbounded solution,
then the other problem has no feasible solution.
NOTES 6. Complementary slackness theorem: According to this,
(i) If a primal variable is positive, then the corresponding dual constraint
is an equation at the optimum, and vice versa.
(ii) If a primal constraint is a strict inequality, then the corresponding dual
variable is zero at the optimum, and vice versa.
Economic Interpretation of Dual Variables
If we interpret our primal LP problem as a classical ‘Resource Allocation’ problem,
then its dual can be interpreted as a ‘Resource Valuation’ problem. Thus the values
of the optimal dual variables have fascinating economic interpretation. The primal
problem is used to describe a production problem in which the objective function
represents the gain obtained from the production of goods, while the constraints
characterize bounds on the production amounts due to the presence of limited
resources. The available quantity of each resource is then measured.
Economic Interpretation: Consider the following primal problem:
Maximize C1x1 + …+ Cnxn
Subject to, All xi  0
a11x1 + …+ a1nxn  b1
am1x1 + …+ amnxn  bm
Where,
n = Economic activities
m = Resources
Cj = Revenue per unit of activity j
bi = Maximum availability of resource i
aij = Consumption of resource i per unit of activity j
If (x1, …, xn) is optimal for the primal and (y1, …, ym) is optimal for the dual, then
we can state that:
C1x1 + …+ Cnxn = b1y1 + …+ bnym
Here,
Left hand side = Maximal revenue
Right hand side = Resource i (Availability of resource i) × Revenue per unit of
resource i. In other words we can say that the value of yi at optimal is dual price
of resource i.

Duality in Linear
5.3 GENERAL PRIMAL AND DUAL PAIR Programming

Every LPP (called the primal) is associated with another LPP (called its dual).
Either of the problems can be considered as primal with the other as dual.
The importance of the duality concept is due to two main reasons:
(i) If the primal contains a large number of constraints and a smaller number of
variables, the labour of computation can be considerably reduced by converting it
into the dual problem and then solving it.
(ii) The interpretation of the dual variables from the cost or economic point of view
proves extremely useful in making future decisions in the activities being programmed.

5.4 FORMULATING A DUAL PROBLEM

For formulating a dual problem, we first write the problem in the canonical form.
The following changes are used in formulating the dual problem:
(1) Change the objective function of maximization in the primal into
minimization in the dual, and vice versa.
(2) The number of variables in the primal equals the number of constraints in
the dual, and vice versa.
(3) The cost coefficients C1, C2 ... Cn in the objective function of the primal
should be the RHS constant of the constraints in the dual and vice versa.
(4) In forming the constraints for the dual, we consider the transpose of the
body matrix of the primal problem.
(5) The variables in both the problems are non-negative.
(6) If the variable in the primal is unrestricted in sign, then the corresponding
constraint in the dual will be an equation, and vice versa.

5.5 DUAL PAIR IN MATRIX FORM

A dual problem refers to a linear program in which the objective function is a linear
combination of m values that are the limits in the m constraints of the primal problem.
Let the primal problem be:
Max Z = C1x1 + C2x2 + ... + Cnxn
Subject to a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm
x1, x2, ..., xn ≥ 0
Dual: The dual problem is defined as,
Min Z = b1w1 + b2w2 + ... + bmwm
Subject to a11w1 + a21w2 + ... + am1wm ≥ C1
a12w1 + a22w2 + ... + am2wm ≥ C2
...
a1nw1 + a2nw2 + ... + amnwm ≥ Cn
w1, w2, ..., wm ≥ 0

where w1, w2, w3 ... wm are called dual variables.


Example 5.6: Find the maximum of Z = 6x + 8y.
Subject to 5x + 2y ≤ 20
x + 2y ≤ 10
x, y ≥ 0 by solving its dual problem.
Solution: The following is the dual of the given primal problem. As there are two
constraints in the primal, we have two dual variables, namely w1 and w2.
Min Z = 20wl + 10w2
Subject to 5w1  2w2  6
w1  2w2  8
w1 , w2  0
We solve the dual problem using the Big M method. Since this method
involves artificial variables, the problem is reformulated and we have
Max Z = –20w1 – 10w2 + 0S1 + 0S2 – MA1 – MA2
Subject to 5w1 + w2 – S1 + A1 = 6
2w1 + 2w2 – S2 + A2 = 8
w1, w2, S1, S2, A1, A2 ≥ 0

Cj –20 –10 0 0 –M –M

CB B XB w1 w2 S1 S2 A1 A2
xB
Min
w1

–M A1 6 5 1 –1 0 1 0 6/5 = 1.02


–M A2 8 2 2 0 –1 0 1 8/2 = 4
Zj –14M –7M –3M M M –M –M
Zj – Cj –7M+20 –3M–10 M M 0 0

–20 w1 6/5  1/5 –1/5 0 – 0 6/5×5/1 = 6

Cj –20 –10 0 0 –M –M Duality in Linear
Programming
CB B XB w1 –w2 S1 S2 A1 A2
xB
Min
w1

–M A2
28
0 8/5 2/5 –1 – 1
28 5 NOTES
5 5 8
28
=
8

25
Zj – M–24 –20 –4–8/5M 4–2/5M M – –M
8

8 2
Z j – Cj 0 – M+6 4– M M – 0
5 5

 
1 1
–20 w1  0 –1/4 – –
2 8

7 –5
–10 w2  1 1/4 – –
2 8
Zj –45 –20 –10 5/2 15/4 – –
Z j – Cj  0 5/2 15/4

Since all Zj – Cj ≥ 0, the solution is optimum. Therefore, the optimal solution
of the dual is:
w1 = 1/2, w2 = 7/2, Min Z = 45
The optimum solution of the primal problem is given by the value of Zj – Cj in
the optimal table corresponding to the column of the surplus variables, S1 and S2.
x = 5/2, y = 15/4
Max Z = 6 × (5/2) + 8 × (15/4) = 45
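The primal–dual relationship in this example can also be confirmed numerically. The sketch below assumes SciPy and solves the primal and its dual separately; both optimal values come out to 45, illustrating the duality theorem.

from scipy.optimize import linprog

# Primal: maximize 6x + 8y subject to 5x + 2y <= 20, x + 2y <= 10, x, y >= 0.
primal = linprog([-6, -8], A_ub=[[5, 2], [1, 2]], b_ub=[20, 10],
                 bounds=[(0, None)] * 2)

# Dual: minimize 20w1 + 10w2 subject to 5w1 + w2 >= 6, 2w1 + 2w2 >= 8, w1, w2 >= 0.
dual = linprog([20, 10], A_ub=[[-5, -1], [-2, -2]], b_ub=[-6, -8],
               bounds=[(0, None)] * 2)

print(primal.x, -primal.fun)    # expected: x = 5/2, y = 15/4, Max Z = 45
print(dual.x, dual.fun)         # expected: w1 = 1/2, w2 = 7/2, Min Z = 45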
Example 5.7: Apply the principle of duality to solve the LPP,
Max Z = 3x1 + 2x2
subject to x1 + x2 ≥ 1
x1 + x2 ≤ 7
x1 + 2x2 ≤ 10
x2 ≤ 3, x1, x2 ≥ 0
Solution: First, we convert the given (primal) problem into its dual. As there are 4
constraints in the primal problem, we have four variables w1 w2 w3 w4 in its dual.
We convert the given problem into its canonical form by rearranging some of the
constraints:

Max Z = 3x1 + 2x2
Subject to –x1 – x2 ≤ –1
x1 + x2 ≤ 7
x1 + 2x2 ≤ 10
0x1 + x2 ≤ 3
x1, x2 ≥ 0
Dual:
Min Z = –w1 + 7w2 + 10w3 + 3w4
Subject to –w1 + w2 + w3 + 0w4 ≥ 3
–w1 + w2 + 2w3 + w4 ≥ 2
w1, w2, w3, w4 ≥ 0
We apply the Big M method to get the solution of the dual problem, as it
involves artificial variables.
Max Z = w1 – 7w2 – 10w3 – 3w4 + 0S1 + 0S2 – MA1 – MA2
Subject to –w1 + w2 + w3 + 0w4 – S1 + A1 = 3
–w1 + w2 + 2w3 + w4 – S2 + A2 = 2
w1, w2, w3, w4, S1, S2, A1, A2 ≥ 0
Since all Zj – Cj ≥ 0, the solution is optimum. The optimal solution of the
dual problem is:
w1 = w3 = w4 = 0, w2 = 3, Min Z = 21
Also, from the optimum simplex table of the dual problem, the optimal solution
of the primal problem is given by the value Zj – Cj corresponding to the column of
the surplus variables, S1 and S2.
∴ x1 = 7, x2 = 0, Max Z = 21
Example 5.8: Write down the dual of the following LPP and solve it.
Max Z = 4x1 + 2x2
Subject to x1 + x2 ≥ 3
x1 – x2 ≥ 2
x1, x2 ≥ 0

Solution: The dual of the given (primal) problem is as follows. First we convert
the given problem into its canonical form by rearranging the constraints.
Duality in Linear Max Z = 4x1 +2x2
Programming
Subject to –x1 – x2 ≤ –3
–x1 + x2 ≤ –2
x1, x2 ≥ 0
Dual:
Min Z = –3w1 – 2w2
Subject to –w1 – w2 ≥ 4
–w1 + w2 ≥ 2
w1, w2 ≥ 0
Introducing the surplus variables S1, S2 ≥ 0 and artificial variables A1, A2 ≥ 0,
the problem becomes,
Max Z = 3w1 + 2w2 + 0S1 + 0S2 – MA1 – MA2
Subject to –w1 – w2 – S1 + A1 = 4
–w1 + w2 – S2 + A2 = 2
w1, w2, S1, S2, A1, A2 ≥ 0

∴ All Zj – Cj ≥ 0 and an artificial variable A1 is in the basis at a positive
level. Thus, the dual problem does not possess any optimum basic feasible solution.
Consequently, there exists no finite optimum solution to the given LP problem, or
the solution of the given LPP is unbounded.
Example 5.9: Prove using the duality theory that the following LPP is feasible but
has no optimal solution.
Min Z = x1 – x2 + x3
Subject to x1 – x3 ≥ 4
x1 – x2 + 2x3 ≥ 3
and x1, x2, x3 ≥ 0
Solution: The given primal LPP is:
Min Z = x1 – x2 + x3
Subject to x1 + 0x2 – x3 ≥ 4
x1 – x2 + 2x3 ≥ 3
x1, x2, x3 ≥ 0
Dual: Since there are two constraints, there are two variables, w1 and w2,
in the dual, given by:
Max Z = 4w1 + 3w2
Subject to w1 + w2 ≤ 1
0w1 – w2 ≤ –1
–w1 + 2w2 ≤ 1
w1, w2 ≥ 0
To solve the dual problem,
Max Z = 4w1 + 3w2
Subject to w1 + w2 + S1 = 1
0w1 + w2 – S2 + A1 = 1
–w1 + 2w2 + S3 = 1
where S1, S3 are the slack variables, S2 the surplus variable and A1 is the
artificial variable.
Cj 4 3 0 0 –M 0

CB B XB W1 W2 S1 S2 A2 S 3 Min XB/w2

0 S1 1 1 1 1 0 0 0 1
–M A1 1 0 1 0 –1 1 0 1
0 S3 1 –1 2 0 0 0 1 1/2

Zj –M 0 –M 0 M –M 0
Z j–C j –4 –M –3 0 M 0 0
XB
 Min
W1
0 S1 1/2 3/2 0 1 0 0 – 1/2 1/3
–M A1 1/2 1/2 0 0 –1 1 – 1/2 1
3 W2 1/2 –1/2 1 0 0 0 1/2 –

1 1 1
Zj M– 3/2 M– 3/2 3 0 M –M M+ 3/2
2 2 2

M 5 M 3
Z j–C j   0 M 0
2 2 2

5.6 DUALITY THEOREM

As per the standard form, the dual to the LP
(P)  Maximize cTx
     Subject to Ax ≤ b, 0 ≤ x
is the LP
(D)  Minimize bTy
     Subject to ATy ≥ c, 0 ≤ y.
Because the problem D is a Linear Program (LP), it too has a dual. The
duality terminology treats the problems P and D as a pair, implying that the dual
to D should be P. This is indeed the case, since D can be rewritten as
–Maximize (–b)Ty  Subject to (–AT)y ≤ (–c), 0 ≤ y.
The problem inside the maximization is in standard form, so we can take its dual
to get the LP
–Minimize (–c)Tx  Subject to (–A)x ≥ (–b), 0 ≤ x,
which is the same as
Maximize cTx  Subject to Ax ≤ b, 0 ≤ x,
that is, the primal P.
The primal-dual pair of LPs P – D are related via the Weak Duality Theorem.
Theorem 5.1 (Weak Duality Theorem): If x ∈ Rn is feasible for P and y ∈ Rm
is feasible for D, then
cTx ≤ yTAx ≤ bTy.
Thus, if P is unbounded, then D is necessarily infeasible, and if D is unbounded,
then P is necessarily infeasible. Moreover, if cT x = bT y with x feasible for P
and y feasible for D, then x must solve P and y must solve D.
We now use The Weak Duality Theorem in conjunction with The
Fundamental Theorem of Linear Programming to prove the Strong Duality
Theorem. The key ingredient in this proof is the general form for simplex tableaus.
Theorem 5.2 (The Strong Duality Theorem) If either P or D has a finite optimal
value, then so does the other, the optimal values coincide, and optimal solutions to
both P and D exist.
Note: This result states that the finiteness of the optimal value implies the existence
of a solution. This is not always the case for nonlinear optimization problems.
Certainly, consider the problem,
min e^x over x ∈ R.
This problem has a finite optimal value, namely zero; however, this value is
not attained by any point x ∈ R. That is, it has a finite optimal value, but a solution
does not exist. The existence of solutions when the optimal value is finite is one of
the many special properties of linear programs.
Proof: Since the dual of the dual is the primal, we may as well assume that the
primal has a finite optimal value. In this case, the Fundamental Theorem of Linear
Programming says that an optimal basic feasible solution exists. By the formula for
the general form of simplex tableaus, we know that there exists a nonsingular
matrix R and a vector y ∈ Rm such that the optimal tableau has the
form,

[ R    0 ] [ A   I   b ]   [ RA         R     Rb   ]
[ –yT  1 ] [ cT  0   0 ] = [ cT – yTA   –yT   –yTb ].

Since this is an optimal tableau, we have
cT – yTA ≤ 0 and –yT ≤ 0,
with yTb equal to the optimal value in the primal problem. But then ATy ≥ c and
0 ≤ y, so that y is feasible for the dual problem D. In addition, the Weak Duality
Theorem implies that,
bTy = maximize { cTx : Ax ≤ b, 0 ≤ x } ≤ bTŷ
for every vector ŷ that is feasible for D. Therefore, y solves D.

5.7 COMPLEMENTARY SLACKNESS

The Strong Duality Theorem tells us that optimality is equivalent to equality


in the Weak Duality Theorem. That is, x solves P and y solves D if and only if
(x, y) is a P – D feasible pair and
cTx = yTAx = bTy.
We now carefully examine the consequences of this equivalence. Here the
equation cTx = yTAx implies that,
0 = xT(ATy – c) = Σ (j=1 to n) xj [ Σ (i=1 to m) aij yi – cj ]    ...(5.1)

In addition, feasibility implies that,
0 ≤ xj and 0 ≤ Σ (i=1 to m) aij yi – cj for j = 1, ..., n,
and so,
xj [ Σ (i=1 to m) aij yi – cj ] ≥ 0 for j = 1, ..., n.
Hence, the only way Equation (5.1) can hold is if
xj [ Σ (i=1 to m) aij yi – cj ] = 0 for j = 1, ..., n,
or equivalently,
xj = 0 or Σ (i=1 to m) aij yi = cj or both, for j = 1, ..., n.    ...(5.2)
Similarly, the equation yTAx = bTy implies that,
0 = yT(b – Ax) = Σ (i=1 to m) yi [ bi – Σ (j=1 to n) aij xj ].

Again, feasibility implies that,
0 ≤ yi and 0 ≤ bi – Σ (j=1 to n) aij xj for i = 1, ..., m.

Thus, we must have,
yi [ bi – Σ (j=1 to n) aij xj ] = 0 for i = 1, ..., m,

or equivalently,
yi = 0 or Σ (j=1 to n) aij xj = bi or both, for i = 1, ..., m.    ...(5.3)

The two Equations (5.2) and (5.3) combine to yield the following theorem.
Theorem 5.3 (The Complementary Slackness Theorem): The vector
x ∈ Rn solves P and the vector y ∈ Rm solves D if and only if x is feasible for P,
y is feasible for D, and
(i) Either 0 = xj or Σ (i=1 to m) aij yi = cj or both, for j = 1, ..., n, and
(ii) Either 0 = yi or Σ (j=1 to n) aij xj = bi or both, for i = 1, ..., m.

Proof: If x solves P and y solves D, then by the Strong Duality Theorem we have
equality in the Weak Duality Theorem. But we have just observed that this implies
Equations (5.2) and (5.3) which are equivalent to conditions (i) and (ii) mentioned
above.
Conversely, if conditions (i) and (ii) are satisfied, then we get equality in the
Weak Duality Theorem. Therefore, by Theorem 5.2, x solves P and y solves D.
The Complementary Slackness Theorem can be used to develop a test of
optimality for a putative solution to P (or D).
Corollary: The vector x ∈ Rn solves P if and only if x is feasible for P and there
exists a vector y ∈ Rm feasible for D such that,
(i) For each i ∈ {1, 2, ..., m}, if Σ (j=1 to n) aij xj < bi, then yi = 0, and
(ii) For each j ∈ {1, 2, ..., n}, if 0 < xj, then Σ (i=1 to m) aij yi = cj.

Proof: Conditions (i) and (ii) imply equality in the Weak Duality Theorem. The
primal feasibility of x and the dual feasibility of y combined with Theorem 5.1 yield
the result.
We now show how to apply this Corollary to test whether or not a given
point solves an LP. Recall that all of the nonbasic variables in an optimal BFS take
the value zero, and, if the BFS is nondegenerate, then all of the basic variables are
nonzero. That is, m of the variables in the optimal BFS are nonzero since every
BFS has m basic variables. Consequently, among the n original decision variables
and the m slack variables, m variables are nonzero at a nondegenerate optimal
BFS. That is, among the constraints
0 ≤ xj, j = 1, ..., n,
0 ≤ xn+i = bi – Σ (j=1 to n) aij xj, i = 1, ..., m,

m of them are strict inequalities. Considering the Corollary given above, we


see that every nondegenerate optimal basic feasible solution yields a total of m
equations that an optimal dual solution y must satisfy. Therefore, this Corollary
states that the m optimal dual variables yi satisfy m equations. Hence, we can
write an m × m system of equations to solve for y. The Corollary can be illustrated
in the form of the following LP:
Maximize 7x1 + 6x2 + 5x3 – 2x4 + 3x5
Subject to x1 + 3x2 + 5x3 – 2x4 + 2x5 ≤ 4
4x1 + 2x2 – 2x3 + x4 + x5 ≤ 3
2x1 + 4x2 + 4x3 – 2x4 + 5x5 ≤ 5
3x1 + x2 + 2x3 – x4 – 2x5 ≤ 1
0 ≤ x1, x2, x3, x4, x5
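The complementary slackness conditions for this LP can be verified numerically. The sketch below assumes SciPy and NumPy: it solves the primal and the dual with linprog and then checks that every product in conditions (i) and (ii) of Theorem 5.3 vanishes. It is only an illustrative check, not part of the textbook's derivation.

import numpy as np
from scipy.optimize import linprog

# Data of the illustrative LP (primal P: maximize c^T x subject to Ax <= b, x >= 0).
c = np.array([7, 6, 5, -2, 3], dtype=float)
A = np.array([[1, 3, 5, -2, 2],
              [4, 2, -2, 1, 1],
              [2, 4, 4, -2, 5],
              [3, 1, 2, -1, -2]], dtype=float)
b = np.array([4, 3, 5, 1], dtype=float)

primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 5)      # max -> min(-c)
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 4)     # min b^T y, A^T y >= c
x, y = primal.x, dual.x

print(-primal.fun, dual.fun)                  # equal optimal values (strong duality)
print(np.round(x * (A.T @ y - c), 8))         # condition (ii): each product is ~0
print(np.round(y * (b - A @ x), 8))           # condition (i): each product is ~0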

Check Your Progress
1. Explain the duality in linear programming.
2. Describe the formulation of a dual problem.
3. Discuss some important results in duality.
4. Explain the economic interpretation of dual variables.
5. State the general primal and dual pair.
6. Define the dual pair in matrix form.
7. Explain the weak duality theorem.
8. Define the strong duality theorem.
9. State the complementary slackness theorem.

5.8 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. For every given linear programming problem, there is another intimately


related linear programming problem referred to as its dual. The duality
theorem states that ‘for every maximization (or minimization) problem in
linear programming, there is a unique similar problem of minimization (or
maximization) involving the same data which describes the original problem’.
2. The following changes are used in formulating the dual problem:
(i) Change the objective function of maximization in the primal into
minimization in the dual, and vice versa.
(ii) The variables in the primal should be equal to the constraints in the
dual and vice versa.
(iii) The cost coefficients C1, C2 ... Cn in the objective function of the
primal should be the RHS constant of the constraints in the dual and
vice versa.
(iv) In forming the constraints for the dual, we consider the transpose of
the body matrix of the primal problem.
(v) The variables in both the problems are non-negative.
(vi) If the variable in the primal is unrestricted in sign, then the corresponding
constraint in the dual will be an equation, and vice versa.
3. (i) The dual of the dual is primal.
(ii) If one is a maximization problem, then the other is a minimization
one.

(iii) The necessary and sufficient condition for any LPP and its dual to Duality in Linear
Programming
have an optimal solution is that both must have a feasible solution.
(iv) The fundamental duality theorem states that if either the primal or the
dual problem has a definite optimal solution, then the other problem
NOTES
also has a definite optimal solution and the maximum values of the
objective function in both the problems are the same, i.e., Max Z =
Min Z. The solution of the other problem can be read from the Zj – Cj
row below the columns of slack and surplus variables.
4. If we interpret our primal LP problem as a classical ‘Resource Allocation’
problem, then its dual can be interpreted as a ‘Resource Valuation’ problem.
Thus the values of the optimal dual variables have fascinating economic
interpretation. The primal problem is used to describe a production problem
in which the objective function represents the gain obtained from the
production of goods, while the constraints characterize bounds on the
production amounts due to the presence of limited resources. The available
quantity of each resource is then measured.
5. The importance of the duality concept is due to two main reasons:
(i) If the primal contains a large number of constraints and a smaller number
of variables, the labour of computation can be considerably reduced by
converting it in to the dual problem and then solving it. (ii) The interpretation
of the dual variables from the cost or economic point of view, proves
extremely useful in making future decisions in the activities being programmed.
6. A dual problem refers to a linear program in which the objective function is
a linear combination of m values that are the limits in the m constraints of the
primal problem.
7. (Weak Duality Theorem) If x ∈ Rn is feasible for P and y ∈ Rm is feasible
for D, then
cTx ≤ yTAx ≤ bTy.
Thus, if P is unbounded, then D is necessarily infeasible, and if D is
unbounded, then P is necessarily infeasible. Moreover, if cT x = bT y
with x feasible for P and y feasible for D, then x must solve P and y
must solve D.
We now use The Weak Duality Theorem in conjunction with The Fundamental
Theorem of Linear Programming to prove the Strong Duality Theorem. The
key ingredient in this proof is the general form for simplex tableaus.
8. (The Strong Duality Theorem) If either P or D has a finite optimal value,
then so does the other, the optimal values coincide, and optimal solutions to
both P and D exist.

9. The Complementary Slackness Theorem: The vector x ∈ Rn solves P and
the vector y ∈ Rm solves D if and only if x is feasible for P, y is feasible
for D, and
(i) Either 0 = xj or Σ (i=1 to m) aij yi = cj or both, for j = 1, ..., n, and
(ii) Either 0 = yi or Σ (j=1 to n) aij xj = bi or both, for i = 1, ..., m.

5.9 SUMMARY

 For every given linear programming problem, there is another intimately


related linear programming problem referred to as its dual. The duality
theorem states that ‘for every maximization (or minimization) problem in
linear programming, there is a unique similar problem of minimization (or
maximization) involving the same data which describes the original problem’.
 Every LPP (called the primal) is associated with another LPP (called its
dual). Either of the problem can be considered as primal with the other as
dual.
 If the variable in the primal is unrestricted in sign, then the corresponding
constraint in the dual will be an equation, and vice versa.
 The fundamental duality theorem states that if either the primal or the dual
problem has a finite optimal solution, then the other problem also has a finite optimal solution, and the optimal values of the objective functions in both problems are equal, i.e., Max Z of the primal equals Min Z of the dual. The solution of the
other problem can be read from the Zj – Cj row below the columns of slack
and surplus variables.
 The existence theorem states that if either problem has an unbounded solution,
then the other problem has no feasible solution.
 If we interpret our primal LP problem as a classical ‘Resource Allocation’
problem, then its dual can be interpreted as a ‘Resource Valuation’ problem.
Thus the values of the optimal dual variables have fascinating economic
interpretation. The primal problem is used to describe a production problem
in which the objective function represents the gain obtained from the
production of goods, while the constraints characterize bounds on the
production amounts due to the presence of limited resources. The optimal dual variable associated with each resource then measures the worth (shadow price) of one additional unit of that resource.
 A dual problem refers to a linear program in which the objective function is
a linear combination of m values that are the limits in the m constraints of
the primal problem.

 Thus, if P is unbounded, then D is necessarily infeasible, and if D is unbounded, Duality in Linear
Programming
then P is necessarily infeasible. Moreover, if cT x = bT y with x feasible
for P and y feasible for D, then x must solve P and y must solve D.
 (The Strong Duality Theorem) If either P or D has a finite optimal value, NOTES
then so does the other, the optimal values coincide, and optimal solutions to
both P and D exist.

5.10 KEY WORDS

 Duality in LPP: Every LPP (called the primal) is associated with another
LPP (called dual). Either of the problem can be considered as primal with
the other as dual.
 Economic interpretation of dual variables: If we interpret our LP
problem as a classical ‘Resource Allocation’ problem, then its dual can be
interpreted as a ‘Resource Valuation’ problem.
 Dual pair in matrix form: A dual problem refers to a linear program in
which the objective function is a linear combination of m values that are the
limits in the m constraints of the primal problem.
 Strong duality theorem: If either P or D has a finite optimal value, then so
does the other, the optimal values coincide, and optimal solution to both P
and D exist.

5.11 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Discuss the duality in linear programming.
2. Explain the formulation of dual problem.
3. Define some important results in duality.
4. State the general primal and dual pair.
5. Explain the dual pair in matrix form.
6. Analyse the strong duality theorem.
7. State the complementary slackness theorem.
Long-Answer Questions
1. Discuss briefly the duality in linear programming with the help of examples.
2. Illustrate the economic interpretation of dual variables.
3. Briefly discuss the general primal and dual pair. Give appropriate examples.
4. Analyse the dual pair in matrix form, giving examples.
5. State the duality theorem.
6. What is weak duality theorem? Discuss giving its proof.
7. Elaborate on the strong duality theorem.

5.12 FURTHER READINGS

Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.
Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.
Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata
McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi:
Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.

UNIT 6 INTEGER PROGRAMMING


Structure
6.0 Introduction
6.1 Objectives
6.2 Integer Programming and Cutting Plane Techniques
6.2.1 Importance of Integer Programming Problems
6.2.2 Applications of Integer Programming
6.2.3 Methods of Integer Programming Problem
6.2.4 Mixed Integer Programming Problem
6.2.5 Branch and Bound Method
6.3 Dual Simplex Method
6.4 Answers to Check Your Progress Questions
6.5 Summary
6.6 Key Words
6.7 Self Assessment Questions and Exercises
6.8 Further Readings

6.0 INTRODUCTION

An integer programming problem is a mathematical optimization or feasibility


program in which some or all of the variables are restricted to be integers. In many
settings the term refers to Integer Linear Programming (ILP), in which the objective
function and the constraints (other than the integer constraints) are linear.
Integer programming is NP-complete. In particular, the special case of 0-1 integer linear programming, in which the unknowns are binary and only the constraints must be satisfied (there is no objective function to optimize), is one of Karp’s 21 NP-complete problems. If some decision variables are not discrete, the problem is known as a mixed-integer programming problem.
This is a type of linear programming in which all or some variables are constrained to assume non-negative integer values. Problems related to such programming are known as integer programming problems. If all variables must assume integer values, it is called a ‘Pure Integer Programming Problem’. If the integer restriction applies only to some of the variables, while the remaining variables may assume non-integer values in the optimal solution, then it is a case of a mixed integer programming problem. If the integer variables are limited to either of the two values 0 or 1, then these are known as ‘0–1 Programming Problems’ or ‘Standard Discrete Programming Problems’. Such problems arise naturally in decision making of the ‘do or do not do’ type, where the choice is between two outcomes and one outcome has to be selected.
Integer programming problems are applied in business and industries. Problems like the travelling salesman problem, the assignment problem and transportation problems are part of integer programming where the decision variables are either 0 or 1. If xij represents an activity from i to j, then xij = 1 specifies that the activity is performed, and xij = 0 that the activity is not performed.
In this unit, you will study integer programming, cutting plane techniques and the dual simplex method.

NOTES
6.1 OBJECTIVES

After going through this unit, you will be able to:


 Understand the significance of Integer Programming
 Apply principles of IPP in real life situations
 Solve problems on IPP using the cutting plane method
 Resolve the problems on IPP using branch and bound or search methods
 Analyse the dual simplex method

6.2 INTEGER PROGRAMMING AND CUTTING PLANE TECHNIQUES
A linear programming problem in which all or some of the decision variables are
constrained to assume non-negative integer values is called an Integer
Programming Problem (IPP).
In a Linear Programming Problem (LPP) if all variables are required to take
integer values then it is called the Pure (all) Integer Programming Problem
(Pure IPP).
If only some of the variables in the optimal solution of a LPP are restricted
to assume non-negative integer values, while the remaining variables are free to
take any non-negative values, then it is called a Mixed Integer Programming
Problem (Mixed IPP).
Further, if all the variables in the optimal solution are allowed to take values
0 or 1, then the problem is called the 0–1 Programming Problem or Standard
Discrete Programming Problem.
The general integer programming problem is given by,
Max Z = CX
Subject to constraints,
AX ≤ B
X ≥ 0, where some or all of the variables are integers.
6.2.1 Importance of Integer Programming Problems
In an LPP, all the decision variables are allowed to take any non-negative real values, since fractional values are quite possible and appropriate in many situations. There are, however, several frequently occurring circumstances in business and industries that lead to planning models involving integer-valued variables. For example, in production, manufacturing is frequently scheduled in terms of batches, lots or runs. In allocation of goods, a shipment must involve a discrete number of trucks or aircraft. In such cases fractional values of variables, like 13/3, may be meaningless in the context of the actual decision problem.
This is the main reason why integer programming is so important for such decisions.
6.2.2 Applications of Integer Programming
Integer programming is applied in business and industries. All assignment and
transportation problems are integer programming problems, because in the
assignment and travelling salesmen problem all the decision variables are either
zero or one.
i.e., xij = 0 or 1
Other examples include capital budgeting and production scheduling
problems. In fact, any situation involving decisions of the type ‘either to do a job
or not’ can be viewed as an IPP. In all such situations,
xij = 1, if the jth activity is performed.
xij = 0, if the jth activity is not performed.
In addition, allocation problems involving the allocation of men or machines
give rise to IPP, since such commodities can be assigned only in integers and not in
fractions.
Note: If a non-integer variable is simply rounded off, feasibility may be violated, and there is no guarantee that the rounded-off solution will be optimal. Due to these difficulties, a systematic and efficient procedure is needed for obtaining the exact optimal integer solution to such problems.

6.2.3 Methods of Integer Programming Problem


Two methods are used to solve IPP:
(i) Gomory’s Cutting Plane Method
(ii) Branch and Bound Method or Search Method
Gomory’s Cutting Plane Method
A systematic procedure for solving pure IPP was first developed by R.E. Gomory
in 1956, which he later used to deal with the more complicated cases of mixed
integer programming problems. This method consists of first solving the IPP as an
ordinary LPP by ignoring the restriction of integer values and then introducing a
new constraint to the problem such that the new set of feasible solution includes all
the original feasible integer solutions, but does not include the optimum non-integer
solution initially found. This new constraint is called ‘Fractional Cut’ or ‘Gomorian
constraint’. Then the revised problem is solved using the simplex method, till an
optimum integer solution is obtained.

Branch and Bound Method
This is an enumeration method in which all feasible integer points are enumerated. It is the widely used search method based on the Branch and Bound technique, developed in 1960 by A.H. Land and A.G. Doig. This method is applicable to both pure and mixed IPP. It first divides the feasible region into smaller subsets and eliminates parts containing no feasible integer solution.
Gomory’s Fractional Cut Algorithm or Cutting Plane Method for Pure
(All) IPP
Step 1: Convert the minimization IPP into an equivalent maximization LPP. Ignore
the integrality condition.
Step 2: Introduce slack and/or surplus variables if necessary to convert the given
LPP in its standard form and obtain the optimum solution of the given LPP by
using simplex method.
Step 3: Test the integrality of the optimum solution.
(i) If all xBi ≥ 0 and all are integers, then an optimum integer solution is obtained.
(ii) If all xBi ≥ 0 and at least one xBi is not an integer, then go to the next step.
Step 4: Rewrite each xBi as xBi = [xBi] + fi, where [xBi] is the integral part of xBi and fi is the positive fractional part of xBi, 0 < fi < 1.
Choose the largest fraction of xBi i.e., choose Max ( fi ). If there is a
tie, then select arbitrarily. Let Max ( fi ) = fk, corresponding to xBk (the kth row is
called the ‘source row’).
Step 5: Express each negative fraction, if any, in the source row of the optimum
simplex table as the sum of a negative integer and a non-negative fraction.
Step 6: Find the fractional cut constraint or Gomorian constraint.
From the source row, Σj=1..n akj xj = xBk
i.e., Σj=1..n ([akj] + fkj) xj = [xBk] + fk
This can be written as Σj=1..n fkj xj ≥ fk, i.e., –Σj=1..n fkj xj ≤ –fk
Or, –Σj=1..n fkj xj + G1 = –fk
Where, G1 is the Gomorian slack.


Step 7: Add the fractional cut constraint obtained in Step (6) at the bottom of the
simplex table obtained in Step (2). Find the new feasible optimum solution using
dual simplex method.
Step 8: Go to Step (3) and repeat the procedure until an optimum integer solution is obtained.
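The arithmetic of Steps 4–6 can be automated. The short Python sketch below is an added illustration (the function name and data layout are my own); it takes the coefficients of one source row, extracts the fractional parts, and returns the Gomorian cut in the form –Σ fkj xj + G1 = –fk. The data fed to it are those of the source row of Example 6.1 below, so the result can be compared with the worked solution.

# Building a Gomory fractional cut from one source row of an optimal tableau.
from fractions import Fraction
from math import floor

def gomory_cut(row_coeffs, rhs):
    """Given the non-basic coefficients a_kj and the RHS x_Bk of the source row,
    return (cut_coefficients, cut_rhs) for  -sum(f_kj * x_j) + G = -f_k."""
    f_k = rhs - floor(rhs)                     # fractional part of x_Bk
    f = [a - floor(a) for a in row_coeffs]     # fractional parts f_kj (all >= 0)
    return [-fj for fj in f], -f_k

# Source row of Example 6.1 below:  1/3 = x1 + (1/3)S1 - (2/3)S2
coeffs, rhs = gomory_cut([Fraction(1, 3), Fraction(-2, 3)], Fraction(1, 3))
print(coeffs, rhs)   # gives the cut -(1/3)S1 - (1/3)S2 + G1 = -1/3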
Example 6.1: Find the optimum integer solution to the following IPP.
Max Z = x1 + x2
Subject to constraints, 3x1 + 2x2 ≤ 5
x2 ≤ 2
x1, x2 ≥ 0 and are integers.
Solution: After introducing the non-negative slack variables S1, S2 ≥ 0, the standard form of the IPP becomes,
Max Z = x1 + x2 + 0S1 + 0S2
Subject to constraints, 3x1 + 2x2 + S1 = 5
0x1 + x2 + S2 = 2
x1, x2, S1, S2 ≥ 0
Ignoring the integrality condition, solve the problem by simplex method. The
initial basic feasible solution is given by putting x1 = 0 and x2 = 0.
Hence,
S1 = 5 and S2 = 2
Cj 1 1 0 0

xB
CB B xB x1 x2 S1 S2 Min
xi
0 S1 5 3 2 1 0 5/3
0 S2 2 0 1 0 1 —
Zj 0 0 0 0 0
Z j – Cj –1 –1 0 0

1 x1 5/3 1 2/3 1/3 0 5/2 = 2.5


0 S2 2 0 1 0 1 2/1 = 2
Zj 5/3 1 2/3 1/3 0
Z j – Cj 0 –1/3 1/3 0
1 x1 1/3 1 0 1/3 –2/3
1 x2 2 0 1 0 1
Zj 7/3 1 1 1/3 1/3
Z j – Cj 0 0 1/3 1/3

Since all Zj – Cj ≥ 0, an optimum solution is obtained, which is given by:


Max Z = 7/3, x1 = 1/3, x2 = 2
To obtain an optimum integer solution, we have to add a fractional cut constraint
in the optimum simplex table.
Since xB1 = 1/3, the source row is the first row (the x1 row).
Expressing the negative fraction –2/3 as a sum of a negative integer and a positive fraction, we get –2/3 = –1 + 1/3.
From the x1 row we have,
1/3 = x1 + 1/3 S1 – 2/3 S2
i.e., 1/3 = x1 + 1/3 S1 + (–1 + 1/3) S2
The fractional cut (Gomorian constraint) is given by,
1/3 S1 + 1/3 S2 ≥ 1/3
i.e., –1/3 S1 – 1/3 S2 ≤ –1/3
i.e., –1/3 S1 – 1/3 S2 + G1 = –1/3
Where, G1 is the Gomorian slack. Add this fractional cut constraint at the bottom of the above optimal simplex table.
We apply the dual simplex method. Since G1 = –1/3, G1 leaves the basis. To find the entering variable, we compute
Max { (Zj – Cj)/aij : aij < 0 } = Max { (1/3)/(–1/3), (1/3)/(–1/3) } = Max {–1, –1} = –1
We choose S1 as the entering variable arbitrarily.
Cj 1 1 0 0 0

CB B xB x1 x2 S1 S2 G1
1 x1 1/3 1 0 1/3 –2/3 0
1 x2 2 0 1 0 1 0
0 G1 –1/3 0 0 –1/3 –1/3 1
Zj 7/3 1 1 1/3 1/3 0
Z j – Cj 0 0 1/3 1/3 0

1 x1 0 1 0 0 –1 1
1 x2 2 0 1 0 1 1
0 S1 1 0 0 1 1 –3

Zj 2 1 1 0 0 1
Z j – Cj 0 0 0 0 1

Since all Zj – Cj ≥ 0 and all xBi ≥ 0, we obtain an optimal feasible integer solution.
 The optimum integer solution is,
Max Z = 2, x1 = 0, x2 = 2.
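Since the feasible region of this example is tiny, the result can also be double-checked by brute force. The snippet below is an added check: it enumerates the integer points directly and confirms that the best attainable value is Max Z = 2, while the LP relaxation value 7/3 found above is only an upper bound.

# Brute-force check of Example 6.1: Max x1 + x2 with 3x1 + 2x2 <= 5, x2 <= 2.
from itertools import product

best = max((x1 + x2, x1, x2)
           for x1, x2 in product(range(3), repeat=2)   # 0..2 covers the region
           if 3 * x1 + 2 * x2 <= 5 and x2 <= 2)
print(best)   # (2, 1, 1): Max Z = 2, also attained at x1 = 0, x2 = 2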
Example 6.2: Find an optimum integer solution to the following IPP.

Max Z = x1 + 2x2
Subject to constraints, 2x2 ≤ 7
x1 + x2 ≤ 7
2x1 ≤ 11
x1, x2 ≥ 0 and are integers.
Solution: Introducing slack variables S1, S2, S3 ≥ 0, we get,
Max Z = x1 + 2x2 + 0S1 + 0S2 + 0S3
Subject to constraints, 2x2 + S1 = 7
x1 + x2 + S2 = 7
2x1 + S3 = 11
Ignoring the integrality condition, we get the optimum solution of the given IPP
with initial basic feasible solution obtained by putting x1 = 0 and x2 = 0 as S1 =
7, S2 = 7 and S3 = 11.
Cj 1 2 0 0 0

xB
CB B xB x1 x2 S1 S2 S3 Min
xi

0 S1 7 0 2 1 0 0 7/2 = 3.5
0 S2 7 1 1 0 1 0 7/1 = 7
0 S3 11 2 0 0 0 1 —

Zj 0 0 0 0 0 0
Z j – Cj –1 –2 0 0 0

2 x2 7/2 0 1 1/2 0 0 —
0 S2 7/2 1 0 –1/2 1 0 7/2 = 3.5
0 S3 11 2 0 0 0 1 11/2 = 5.5

Zj 7 0 2 1 0 0
Z j – Cj –1 0 1 0 0

2 x2 7/2 0 1 1/2 0 0
1 x1 7/2 1 0 –1/2 1 0
0 S3 4 0 0 1 –2 1

Zj 21/2 1 2 1/2 1 0
Z j – Cj 0 0 1/2 1 0

Since all Zj – Cj ≥ 0, an optimum solution is obtained, which is given by,
Max Z = 21/2, x1 = 7/2, x2 = 7/2
Since the optimum solution obtained above is not an integer, we now select a constraint corresponding to Max(fi) = Max(f1, f2, f3).
x1 = 7/2 = 3 + 1/2
x2 = 7/2 = 3 + 1/2
S3 = 4 = 4 + 0
∴ Max(fi) = Max(1/2, 1/2, 0) = 1/2
Since the maximum fraction is the same for both the x1 and x2 rows, we choose the x1 row as the source row arbitrarily. From this row we have,
7/2 = x1 + 0x2 – 1/2 S1 + 1S2 + 0S3
On expressing the negative fraction as a sum of negative integer and a positive
fraction, we have,
3 + 1/2 = x1 + 0x2 + (–1 + 1/2) S1 + 1S2 + 0S3
 The Gomorian constraint is given by,
1/2 S1 ≥ 1/2
i.e., –1/2 S1 ≤ –1/2, i.e., –1/2 S1 + G1 = –1/2
Here, G1 is the Gomorian slack. Adding this new constraint at the bottom of the
above optimal simplex table, we get a new table.

We apply the dual simplex method. Since G1 = –1/2, G1 leaves the basis. The entering variable is given by,
Max { (Zj – Cj)/aij : aij < 0 } = Max { (1/2)/(–1/2) } = –1
This gives the non-basic variable S1 to enter into the basis. Drop G1 and introduce S1.
Cj 1 2 0 0 0 0

CB B xB x1 x2 S1 S2 S3 G1
2 x2 7/2 0 1 1/2 0 0 0
1 x1 7/2 1 0 –1/2 1 0 0
0 S3 4 0 0 1 –2 1 0
0 G1 –1/2 0 0 –1/2 0 0 1
Zj 21/2 1 2 1/2 1 0 0
Z j – Cj 0 0 1/2  1 0 0

2 x2 3 0 1 0 0 0 1

1 x1 4 1 0 0 1 0 –1
0 S3 3 0 0 0 –2 1 2
0 S1 1 0 0 1 0 0 –2
NOTES
Zj 10 1 2 0 1 0 1
Z j – Cj 0 0 0 1 0 1

Since all Zj – Cj ≥ 0, an optimum solution has been obtained in integers. Hence, the integer optimum solution is given by,
Max Z = 10, x1 = 4, x2 = 3
Example 6.3: Solve the following integer programming problem.
Max Z = 2x1 + 20x2 – 10x3
Subject to constraints, 2x1 + 20x2 + 4x3  15
6x1 + 20x2 + 4x3 = 20
x1, x2, x3  0 and are integers.
Solution: Introducing a slack variable S1 ≥ 0 and an artificial variable A1 ≥ 0, the initial basic feasible solution becomes S1 = 15, A1 = 20 by putting x1 = x2 = x3 = 0. Ignoring the integer condition, solve the problem using the simplex method.
Max Z = 2x1 + 20x2 – 10x3 + 0S1 – MA1
Subject to constraints, 2x1 + 20x2 + 4x3 + S1 = 15
6x1 + 20x2 + 4x3 + A1= 20
x1, x2, x3, S1, A1  0

Cj 2 20 –10 0 –M

xB
CB B xB x1 x2 x3 S1 A1 Min
xi

0 S1 15 2 20 4 1 0 15/20 = 3/4 
–M A1 20 6 20 4 0 1 20/20 = 1

Zj –20M –6M –20M –4M 0 –M


Z j – Cj –6M – 2 –20M – 20 –4M + 10 0 0

Integer Programming 1 3
20 x2 3/4 1/10 1 1/20 0  10 = 7.5
5 4
5
–M A1 5 4 0 0 –1 1 = 1.25
4
NOTES Zj 15 – 5M 2 – 4M 20 4 1+M –M
Z j – Cj –4 M 0 14 M +1 0

20 x2 5/8 0 1 1/5 3/40 —
2 x1 5/4 1 0 0 –1/4 —
Zj 15 2 20 4 1 —
Z j – Cj 0 0 14 1

Since all Zj – Cj ≥ 0, an optimum solution is obtained, which is given by,
Max Z = 15, x1 = 5/4, x2 = 5/8, x3 = 0
Since the optimum solution obtained above is not an integer, we select a constraint corresponding to Max(fi) = Max(f1, f2).
x1 = 5/4 = 1 + 1/4 and x2 = 5/8 = 0 + 5/8
∴ Max(f1, f2) = Max(1/4, 5/8) = 5/8
 The source row is the first row, namely x2 row. From this source row
we have,
5/8 = 0x1 + 1x2 + (1/5)x3 + (3/40)S1
The fractional cut constraint is given by,
(1/5)x3 + (3/40)S1 ≥ 5/8
i.e., –(1/5)x3 – (3/40)S1 ≤ –5/8, i.e., –(1/5)x3 – (3/40)S1 + G1 = –5/8
Here, G1 is the Gomorian slack.
Adding this additional constraint in the optimum simplex table, we obtain the
new table as given below.
We apply the dual simplex method. Since G1 = –5/8, G1 leaves the basis.
Also, Max { (Zj – Cj)/aij : aij < 0 } = Max { 14/(–1/5), 1/(–3/40) } = Max(–70, –40/3) = –40/3
This gives the non-basic variable S1, which enters the basis.
Cj 2 20 –10 0 0

CB B xB x1 x2 x3 S1 G1
20 x2 5/8 0 1 1/5 3/40 0
2 x1 5/4 1 0 0 –1/4 0
0 G1 –5/8 0 0 –1/5 –3/40 1
Zj 15 2 20 4 1 0
Z j – Cj 0 0 14 1 0

20 x2 0 0 1 0 0 1
2 x1 10/3 1 0 2/3 0 –10/3
0 S1 25/3 0 0 8/3 1 –40/3
Zj 20/3 2 20 4/3 0 40/3
Z j – Cj 0 0 34/3 0 40/3

Again, since the solution is non-integer, we add one more fractional cut constraint.
x1 = 10/3 = 3 + 1/3, x2 = 0 and S1 = 25/3 = 8 + 1/3
∴ Max{fi} = Max(1/3, 0, 1/3) = 1/3
Since, the Max fraction is same for both the rows x1 and S1, we choose S1
arbitrarily.
 From the source row we have,
25/3 = 0x1 + 0x2 + (8/3)x3 + 1S1 – (40/3)G1
Expressing the negative fraction as the sum of negative integer and positive
fraction we have,
(8 + 1/3) = 0x1 + 0x2 + (2 + 2/3)x3 + 1S1 + (–14 + 2/3)G1
The corresponding fractional cut is given by,
–2/3x3 – 2/3 G1 + G2 = –1/3.
Add this second Gomorian constraint at the bottom of the above simplex table
and apply dual simplex method.

Since G2 = –1/3, G2 leaves the basis. Also,
Max { (Zj – Cj)/aij : aij < 0 } = Max { (34/3)/(–2/3), (40/3)/(–2/3) } = Max(–17, –20) = –17
This gives the non-basic variable x3, which enters the basis. Using the dual simplex method, introduce x3 and drop G2.
Cj 2 20 –10 0 0 0

CB B xB x1 x2 x3 S1 G1 G2
20 x2 0 0 1 0 0 1 0
2 x1 10/3 1 0 2/3 0 –10/3 0
0 S1 25/3 0 0 8/3 1 –40/3 0
0 G2 –1/3 0 0 –2/3 0 –2/3 1
Zj 20/3 2 20 4/3 0 40/3 0
Z j – Cj 0 0 34/3 0 40/3 0
20 x2 0 0 1 0 0 1 0
2 x1 3 1 0 0 0 –4 1
0 S1 7 0 0 0 1 – 16 4
–10 x3 1/2 0 0 1 0 1 –3/2
Zj 1 2 20 –10 0 2 17
Z j – Cj 0 0 0 0 2 17

Since the solution is still non-integer, a third fractional cut is required. It is obtained from the source row (the x3 row) as,
1/2 = x3 + 0S1 + 1G1 + (–2 + 1/2)G2
The Gomorian constraint is 1/2 G2 ≥ 1/2, i.e., –1/2 G2 ≤ –1/2
Or, –1/2 G2 + G3 = –1/2
Insert this additional constraint at the bottom of the table. The modified
simplex table is shown below.
Using dual simplex method, we drop G3 and introduce G2.
Cj 2 20 –10 0 0 0 0
CB B xB x1 x2 x3 S1 G1 G2 G3
20 x2 0 0 1 0 0 1 0 0
2 x1 3 1 0 0 0 –4 1 0
0 S1 7 0 0 0 1 –16 4 0
–10 x3 1/2 0 0 1 0 1 –3/2 0
0 G3 –1/2 0 0 0 0 0 –1/2 1
Zj 1 2 20 –10 0 2 17 0
Z j – Cj 0 0 0 0 2 17 0
20 x2 0 0 1 0 0 1 0 0
2 x1 2 1 0 0 0 –4 0 2
0 S1 3 0 0 0 1 –16 0 8
–10 x3 2 0 0 1 0 1 0 –3
0 G2 1 0 0 0 0 0 1 –2
Zj –16 2 20 –10 0 2 0 34
Z j – Cj 0 0 0 0 2 0 34
Since all Zj – Cj ≥ 0 and all the variables are integers, the optimum integer solution is obtained and is given by x1 = 2, x2 = 0, x3 = 2 and Max Z = –16.

Example 6.4: Solve the integer programming problem.
Max Z = 7x1 + 9x2
Subject to constraints, –x1 + 3x2 ≤ 6
7x1 + x2 ≤ 35
x1, x2 ≥ 0 and are integers.
Solution: Ignoring the integer conditions and introducing slack variables S1, S2 ≥ 0, we get the standard form of the IPP as,
Max Z = 7x1 + 9x2 + 0S1 + 0S2
Subject to constraints, – x1 + 3x2 + S1 = 6
7x1 + x2 + S2 = 35
x1, x2, S1, S2  0
The given IPP is solved using simplex method.
Cj 7 9 0 0
xB
CB B xB x1 x2 S1 S2 Min
xi
0 S1 6 –1 3 1 0 6/3 = 2
0 S2 35 7 1 0 1 35/1 = 35
Zj 0 0 0 0 0
Z j – Cj –7 – 9 0 0
9 x2 2 –1/3 1 1/3 0 –
3
0 S2 33 22/3 0 –1/3 1 33  
22
Zj 18 –3 9 3 0
Z j – Cj –10 0 3 0

9 x2 7/2 0 1 7/22 1/22


7 x1 9/2 1 0 –1/22 3/22
Zj 63 7 9 28/11 15/11
Z j – Cj 0 0 28/11 15/11

Since all Zj – Cj ≥ 0, an optimum solution is obtained as,
x1 = 9/2, x2 = 7/2 and Max Z = 63.
Since the optimum solution obtained above is not an integer solution, we select a constraint corresponding to Max(fi) = Max(f1, f2).
xB1 = 7/2 = 3 + 1/2, xB2 = 9/2 = 4 + 1/2
∴ Max(f1, f2) = Max(1/2, 1/2) = 1/2
Since both rows have the same value of fi, either one of the two rows can be used. Let us consider the x2 row as the source row.
From the x2 row we have,
3 + 1/2 = 0x1 + x2 + (7/22)S1 + (1/22)S2
There is no negative fraction in this row.
∴ The Gomorian constraint is given by,
(7/22)S1 + (1/22)S2 ≥ 1/2
i.e., –(7/22)S1 – (1/22)S2 ≤ –1/2
i.e., –(7/22)S1 – (1/22)S2 + G1 = –1/2
Here, G1 is the Gomorian slack. Adding this new constraint at the bottom of the
above optimal simplex table, we have the new table.
We apply the dual simplex method. Since G1 = –1/2, G1 leaves the basis. Also,
Max { (Zj – Cj)/aij : aij < 0 } = Max { (28/11)/(–7/22), (15/11)/(–1/22) } = Max(–8, –30) = –8
This gives the non-basic variable S1 to enter into the basis.
Applying dual simplex method drop G1 and introduce S1.

Cj 7 9 0 0 0

CB B xB x1 x2 S1 S2 G1
9 x2 7/2 0 1 7/22 1/22 0
7 x1 9/2 1 0 –1/22 3/22 0
0 G1 –1/2 0 0 –7/22 –1/22 1
Zj 63 7 9 28/11 15/11 0
Z j – Cj 0 0 28/11 15/11 0
9 x2 3 0 1 0 0 1
7 x1 32/7 1 0 0 1/7 –1/7
0 S1 11/7 0 0 1 +1/7 –22/7
Zj 59 7 9 0 1 8
Z j – Cj 0 0 0 1 8
The optimal solution obtained above by the dual simplex method is still non-integer. Thus, a new Gomorian constraint has to be constructed.
Here, x2 = 3, x1 = 32/7 = 4 + 4/7 and S1 = 11/7 = 1 + 4/7
Max(fi) = Max(0, 4/7, 4/7) = 4/7
Choose the x1 row as the source row arbitrarily, as both fractional values are the same. From the source row we have,
4 + 4/7 = 1x1 + 0x2 + 0S1 + (1/7)S2 + (–1 + 6/7)G1
where the negative coefficient –1/7 of G1 has been expressed as –1 + 6/7.
∴ The Gomorian constraint is given by,
(1/7)S2 + (6/7)G1 ≥ 4/7, i.e., –(1/7)S2 – (6/7)G1 + G2 = –4/7
Here, G2 is the Gomorian slack. Adding this constraint in the above simplex table,
we get a modified table.
We again apply the dual simplex method. Since G2 = –4/7, G2 leaves the basis. Also,
Max { (Zj – Cj)/aij : aij < 0 } = Max { 1/(–1/7), 8/(–6/7) } = Max(–7, –28/3) = –7
This gives the non-basic variable S2 to enter into the basis.
Cj 7 9 0 0 0 0

CB B xB x1 x2 S1 S2 G1 G2
9 x2 3 0 1 0 0 1 0
7 x1 32/7 1 0 0 1/7 –1/7 0
0 S1 11/7 0 0 1 1/7 –22/7 0
0 G2 –4/7 0 0 0 –1/7 –6/7 1
Zj 59 7 9 0 1 8 0
Z j – Cj 0 0 0 1 8 0

9 x2 3 0 1 0 0 1 0
7 x1 4 1 0 0 0 –1 1
0 S1 1 0 0 1 0 –4 1
0 S2 4 0 0 0 1 6 –7
Zj 55 7 9 0 0 2 7
Z j – Cj 0 0 0 0 2 7
Since all Zj – Cj ≥ 0 and the solution is an integer, we obtain an optimum integer solution given by x1 = 4, x2 = 3 and Max Z = 55.
6.2.4 Mixed Integer Programming Problem
In a mixed IPP only some of the variables are restricted to integer values, while the other variables may take integer or any other non-negative real values.
Mixed integer cutting plane procedure: The iterative procedure for the solution
of mixed integer programming problem is as follows.
Step 1: Reformulate the given IPP into a standard maximization LPP form and
then determine an optimum solution using simplex method.
Step 2: Test the integrality of the optimum solution.
(i) If all xBi ≥ 0 (i = 1, 2, ..., m) and all the integer-restricted variables are integers, then the current solution is an optimum one.
(ii) If all xBi ≥ 0 (i = 1, 2, ..., m) but some of the integer-restricted variables are not integers, then go to the next step.
Step 3: Among those xBi which are restricted to integers, choose the one with the largest fractional part. Let this fractional part be fk, corresponding to xBk.
Step 4: Find the fractional cut constraint from the source row, namely the kth row.
From the source row, Σj=1..n akj xj = xBk
i.e., Σj=1..n ([akj] + fkj) xj = [xBk] + fk
Let J+ = { j : akj ≥ 0 } and J– = { j : akj < 0 }. Then the required Gomorian cut is
Σ(j in J+) fkj xj + [ fk / (fk – 1) ] Σ(j in J–) akj xj ≥ fk
i.e., –Σ(j in J+) fkj xj – [ fk / (fk – 1) ] Σ(j in J–) akj xj ≤ –fk
i.e., –Σ(j in J+) fkj xj – [ fk / (fk – 1) ] Σ(j in J–) akj xj + Gk = –fk
Here, Gk is the Gomorian slack.
Step 5: Add this cutting plane generated in Step (4) at the bottom of the optimum
simplex table obtained in Step (1). Find the new optimum solution using dual
simplex method.
Step 6: Go to Step (2) and repeat the procedure until all xBi ≥ 0 (i = 1, 2, ..., m) and all the restricted variables are integers.

Example 6.5: Find the optimum integer solution of the following IPP.
Max Z = x1 + x2
Subject to constraints, 3x1 + 2x2 ≤ 5
x2 ≤ 2
x1, x2 ≥ 0 and x1 is an integer.
Solution: Introducing slack variables S1, S2 ≥ 0, the standard form of the IPP becomes,
Max Z = x1 + x2 + 0S1 + 0S2
Subject to constraints, 3x1 + 2x2 + S1 = 5
x2 + S2 = 2
x1, x2, S1, S2 ≥ 0
Initial basic feasible solution,
S1 = 5, S2 = 2
Ignore the integer condition and solve the problem using simplex method to
obtain optimum solution.
Cj 1 1 0 0
xB
CB B xB x1 x2 S1 S2 Min
xi

0 S1 5 3 2 1 0 5/3
0 S2 2 0 1 0 1 —
Zj 0 0 0 0 0
Z j – Cj –1 –1 0 0
1 x1 5/3 1 2/3 1/3 0 5/2
0 S2 2 0 1 0 1 5/2
Zj 5/3 1 2/3 1/3 0
Z j – Cj 0 –1/3 1/3 0

1 x1 1/3 1 0 1/3 –2/3


1 x2 2 0 1 0 1
Zj 7/3 1 1 1/3 1/3
Z j – Cj 0 0 1/3 1/3

Since all Zj – Cj ≥ 0, the current basic feasible solution is optimum. But x1 is non-integer. From the source row (the first row) we have,
1/3 = x1 + 0 x2 + 1/3 S1 – 2/3 S2

The Gomorian constraint is given by,
(1/3)S1 + [ (1/3) / ((1/3) – 1) ] (–2/3)S2 ≥ 1/3
i.e., (1/3)S1 + (1/3)S2 ≥ 1/3, i.e., –(1/3)S1 – (1/3)S2 ≤ –1/3
i.e., –(1/3)S1 – (1/3)S2 + G1 = –1/3
Here, G1 is the Gomorian slack.
Adding this Gomorian constraint at the bottom of the above simplex table, we solve the enlarged problem using the dual simplex method. Since G1 = –1/3 < 0, G1 leaves the basis. Also,
Max { (Zj – Cj)/aij : aij < 0 } = Max { (1/3)/(–1/3), (1/3)/(–1/3) } = Max(–1, –1) = –1
As this corresponds to both S1 and S2, we choose S1 arbitrarily as the entering
variable.
Drop G1 and introduce S1.
Cj 1 1 0 0 0

CB B xB x1 x2 S1 S2 G1

1 x1 1/3 1 0 1/3 –2/3 0


1 x2 2 0 1 0 1 0
0 G1 –1/3 0 0 –1/3 –1/3 1

Zj 7/3 1 1 1/3 1/3 0


Z j – Cj 0 0 1/3 1/3 0

1 x1 0 1 0 0 –1 1
1 x2 2 0 1 0 1 0
0 S1 1 0 0 1 1 –3

Zj 2 1 1 0 0 1
Z j – Cj 0 0 0 0 1

Since all Zj – Cj ≥ 0 and all xBi ≥ 0, the current solution is feasible and optimal.
The required optimal integer solution is given by,
x1 = 0, x2 = 2 and Max Z = 2.

Example 6.6: Find the optimum integer solution of the given IPP.
Max Z = 4x1 + 6x2 + 2x3
Subject to constraints, 4x1 – 4x2 ≤ 5
–x1 + 6x2 ≤ 5
–x1 + x2 + x3 ≤ 5
x1, x2, x3 ≥ 0 and x1, x3 are integers.
Solution: Introducing slack variables S1, S2, S3 ≥ 0, the standard form of the IPP becomes,
Max Z = 4x1 + 6x2 + 2x3 + 0S1 + 0S2 + 0S3
Subject to constraints, 4x1 – 4x2 + S1 = 5
–x1 + 6x2 + S2 = 5
–x1 + x2 + x3 + S3 = 5
The initial basic feasible solution is given by S1 = 5, S2 = 5 and S3 = 5. Ignoring the integer condition, the optimum solution of the given IPP is obtained by the simplex method.

Cj 4 6 2 0 0 0

CB B xB x1 x2 x3 S1 S2 S3 Min xB/xi

0 S1 5 4 –4 0 1 0 0 —
0 S2 5 –1 6 0 0 1 0 5/6
0 S3 5 –1 1 1 0 0 1 5/1

Zj 0 0 0 0 0 0 0
Z j – Cj –4 –6 –2 0 0 0

0 S1 25/3 10/3 0 0 1 2/3 0 25/10


6 x2 5/6 –1/6 1 0 0 1/6 0 —
0 S3 25/6 –5/6 0 1 0 –1/6 1 —

Zj 5 –1 6 0 0 1 0
Z j – Cj –5 0 –2 0 1 0

4 x1 5/2 1 0 0 3/10 1/5 0 —


6 x2 5/4 0 1 0 1/20 1/5 0
25
0 S3 25/4 0 0 1 1/4 0 1 
4 /1
Zj 35/2 4 6 0 3/2 2 0
Z j – Cj 0 0 –2 3/2 2 0

4 x1 5/2 1 0 0 3/10 1/5 0


6 x2 5/4 0 1 0 1/20 1/5 0
2 x3 25/4 0 0 1 1/4 0 1

Zj 30 4 6 2 2 2 0
Z j – Cj 0 0 0 2 2 0

Since all Zj – Cj ≥ 0, the solution is optimum. But the integer-constrained variables x1 and x3 are non-integer.
∴ x1 = 5/2 = 2 + 1/2, x2 = 5/4 = 1 + 1/4 and x3 = 25/4 = 6 + 1/4
Max(f1, f2, f3) = Max(1/2, 1/4, 1/4) = 1/2
From the first row we have,
(2 + 1/2 ) = x1 + 0x2 + 0x3 + (3/10) S1 + (1/5) S2
The Gomorian constraint is given by,
(3/10)S1 + (1/5)S2 ≥ 1/2
i.e., –(3/10)S1 – (1/5)S2 ≤ –1/2
i.e., –(3/10)S1 – (1/5)S2 + G1 = –1/2, where G1 is the Gomorian slack.
Introduce this new constraint at the bottom of the above simplex table and apply the dual simplex method. Since G1 = –1/2 < 0, G1 leaves the basis. Also,
Max { (Zj – Cj)/aij : aij < 0 } = Max { 2/(–3/10), 2/(–1/5) } = Max(–20/3, –10) = –20/3
This gives the non-basic variable S1 to enter into the basis.
Cj 4 6 2 0 0 0 0

CB B xB x1 x2 x3 S1 S2 S3 G1

4 x1 5/2 1 0 0 3/10 1/5 0 0


6 x2 5/4 0 1 0 1/20 1/5 0 0
2 x3 25/4 0 0 1 1/4 0 1 0
0 G1 –1/2 0 0 0 –3/10 –1/5 0 1

Zj 30 4 6 2 2 2 2 0
Z j – Cj 0 0 0 2 2 2 0

4 x1 2 1 0 0 0 0 0 1
6 x2 7/6 0 1 0 0 1/6 0 1/6
2 x3 35/6 0 0 1 0 –1/6 1 5/6
0 S1 5/3 0 0 0 1 2/3 0 –10/3

Zj 80/3 4 6 2 0 2/3 2 20/3


Z j – Cj 0 0 0 0 2/3 2 20/3

Since all Zj – Cj ≥ 0, the solution is optimum, but the integer-restricted variable x3 = 35/6 is not an integer; therefore, we add another Gomorian constraint.
Here, x2 = 7/6 = 1 + 1/6, x3 = 35/6 = 5 + 5/6, and S1 = 5/3 = 1 + 1/3
Max(f1, f2, f3) = Max(1/6, 5/6, 1/3) = 5/6
Therefore, the source row is the third row (the x3 row). From this row we have,
5 + 5/6 = 0x1 + 0x2 + x3 + 0S1 – (1/6)S2 + S3 + (5/6)G1
The Gomorian constraint is given by,
[ (5/6) / ((5/6) – 1) ] (–1/6)S2 + (5/6)G1 ≥ 5/6
i.e., (5/6)S2 + (5/6)G1 ≥ 5/6
i.e., –(5/6)S2 – (5/6)G1 + G2 = –5/6
Here, G2 is the Gomorian slack.
Add this second cutting plane constraint at the bottom of the above optimum
simplex table.
Use the dual simplex method. Since G2 = –5/6 < 0, G2 leaves the basis. Also,
Max { (Zj – Cj)/aij : aij < 0 } = Max { (2/3)/(–5/6), (20/3)/(–5/6) } = Max(–4/5, –8) = –4/5
This maximum corresponds to S2.
Drop G2 and introduce S2.
Cj 4 6 2 0 0 0 0 0
CB B xB x1 x2 x3 S1 S2 S3 G1 G2
4 x1 2 1 0 0 0 0 0 1 0
6 x2 7/6 0 1 0 0 1/6 0 1/6 0
2 x3 35/6 0 0 1 0 –1/6 1 5/6 0
0 S1 5/3 0 0 0 1 2/3 0 –10/3 0
0 G2 –5/6 0 0 0 0 –5/6 0 –5/6 1
Zj 80/3 4 6 2 2/3 2/3 2 20/3 0
Z j – Cj 0 0 0 2/3 2/3 2 20/3 0

4 x1 2 1 0 0 0 0 0 1 0
6 x2 1 0 1 0 0 0 0 0 1/5
2 x3 6 0 0 1 0 0 1 1 –1/5
0 S1 1 0 0 0 1 0 0 –4 4/5
0 S2 1 0 0 0 0 1 0 1 –6/5
Zj 26 4 6 2 0 0 2 6 4/5
Z j – Cj 0 0 0 0 0 2 6 4/5

Since all Zj – Cj ≥ 0 and the integer-restricted variables x1 and x3 (as well as x2) now take integer values, an optimum integer solution is obtained.
The optimum integer solution is,
x1 = 2, x2 = 1, x3 = 6 and Max Z = 26.
6.2.5 Branch and Bound Method
This method is applicable to both pure and mixed IPP. Sometimes a few or all of the variables of an IPP are constrained by their upper or lower bounds. The most general method for the solution of such constrained optimization problems is called the ‘Branch and Bound method’.
This method first divides the feasible region into smaller subsets and then
examines each of them successively until a feasible solution that gives an optimal
value of objective function is obtained.
Let the given IPP be,
Max Z = CX
Subject to constraints, AX ≤ b
X ≥ 0 and integer-valued.
In this method, we first solve the problem by ignoring the integrality condition.
(i) If the solution is in integers, the current solution is optimum for the given IPP.
(ii) If the solution is not in integers, say some variable xr is not an integer, then x*r < xr < x*r + 1, where x*r and x*r + 1 are consecutive non-negative integers.
Hence, any feasible integer value of xr must satisfy one of the two conditions,
xr ≤ x*r or xr ≥ x*r + 1
These two conditions are mutually exclusive (both cannot be true
simultaneously). By adding these two conditions separately to the given IPP, we
form different sub-problems.
Sub-problem 1: Max Z = CX, subject to AX ≤ b, xr ≤ x*r, X ≥ 0.
Sub-problem 2: Max Z = CX, subject to AX ≤ b, xr ≥ x*r + 1, X ≥ 0.
Thus, we have branched or partitioned the original problem into two sub-problems.
Each of these sub-problems is then solved separately as IPP.
If any sub-problem yields an optimum integer solution, it is not branched further. But if any sub-problem yields a non-integer solution, it is branched further into two sub-problems. This branching process is continued until each sub-problem terminates either with an integer optimal solution or with evidence that it cannot yield a better solution. The integer-valued solution which gives the best value of the objective function among all the sub-problems is then selected as the optimum solution.
Note: For minimization problem, the procedure is the same except that upper bounds are
used. The sub-problem is said to be fathomed and is dropped from further consideration
if it yields a value of the objective function lower than that of the best available integer
solution and it is useless to explore the problem any further.
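The branching scheme just described can be sketched in code. The function below is an added illustration (names and structure are my own, and it assumes SciPy for the LP relaxations): each sub-problem's relaxation is solved with linprog, a fractional variable xr is chosen, and the two mutually exclusive sub-problems xr ≤ x*r and xr ≥ x*r + 1 are explored, keeping the best integer solution found.

# A minimal branch-and-bound sketch for Max c'x, Ax <= b, x >= 0 and integer.
import math
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A, b, tol=1e-6):
    n = len(c)
    best = {"z": -math.inf, "x": None}

    def solve(bnds):
        res = linprog(-np.asarray(c), A_ub=A, b_ub=b, bounds=bnds, method="highs")
        if not res.success:
            return                              # infeasible sub-problem: discard
        z, x = -res.fun, res.x
        if z <= best["z"] + tol:
            return                              # cannot beat the incumbent: fathomed
        frac = [i for i in range(n) if abs(x[i] - round(x[i])) > tol]
        if not frac:
            best["z"], best["x"] = z, np.round(x)   # integer solution: keep it
            return
        i = frac[0]                             # branch on a fractional variable
        lo, hi = bnds[i]
        down = math.floor(x[i])
        if down >= lo:                          # sub-problem 1: x_i <= [x_i]
            solve(bnds[:i] + [(lo, down)] + bnds[i + 1:])
        if hi is None or down + 1 <= hi:        # sub-problem 2: x_i >= [x_i] + 1
            solve(bnds[:i] + [(down + 1, hi)] + bnds[i + 1:])

    solve([(0, None)] * n)
    return best

# Example 6.1 again: Max x1 + x2, 3x1 + 2x2 <= 5, x2 <= 2 -> z = 2
print(branch_and_bound([1, 1], [[3, 2], [0, 1]], [5, 2]))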

6.3 DUAL SIMPLEX METHOD

The dual simplex method is very similar to the regular simplex method, the only
difference lies in the criterion used for selecting a variable to enter the basis and to
leave the basis. In the dual simplex method, we first select the variable to leave the
basis and then the variable to enter the basis. This method yields an optimal solution
to the given LPP in a finite number of steps, provided no basis is repeated.
The dual simplex method is used to solve problems that start as dual feasible
(i.e., whose primal is optimal but infeasible). In this method, the solution starts as
optimum, but infeasible, and remains infeasible until the true optimum is reached,
at which point the solution becomes feasible. The advantage of this method is that
it avoids the artificial variables introduced in the constraints along with the surplus
variables, as all ‘≥’ constraints are converted into the ‘≤’ type.
Dual Simplex Algorithm
The iterative procedure for the dual simplex method is listed as follows:
Step 1: Convert the problem into the maximization form if it is initially in the
minimization form.
Step 2: Convert the ‘’ type constraints, if any, to ‘’ types by multiplying both
sides by –1.
Step 3: Express the problem in the standard form by introducing slack variables.
Obtain the initial basic solution and display this solution in the simplex table.
Step 4: Test the nature of Zj – Cj (optimality condition).
Case (i): If all Zj – Cj ≥ 0 and all xBi ≥ 0, then the current solution is an optimum feasible solution.
Case (ii): If all Zj – Cj ≥ 0 and at least one xBi < 0, then the current solution is not an optimum basic feasible solution. In this case, go to the next step.
Case (iii): If any Zj – Cj < 0, then the method fails.
Step 5: In this step, we find the leaving variable, which is the basic variable corresponding to the most negative value of xBi. Let xk be the leaving variable, i.e., xBk = min{ xBi : xBi < 0 }.
To find the variable entering the basis, we compute the ratio between the Zj – Cj row and the key row, i.e., we compute Max { (Zj – Cj)/aik : aik < 0 } (consider only the ratios with negative denominators). The entering variable is the one having the maximum ratio. If there is no ratio with a negative denominator, then the problem does not have a feasible solution.
Step 6: Convert the key (pivot) element into unity and all the other elements of the key column into zero to get an improved solution.
Step 7: Repeat Steps 4 and 5 until either an optimum basic feasible solution is attained or an indication of no feasible solution is obtained.
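The two selection rules of Steps 4 and 5 can be written out directly. The helper below is an added sketch with made-up names: it chooses the leaving row as the most negative xBi and the entering column by the maximum ratio (Zj – Cj)/arj taken over the negative entries of that row, exactly as described above.

# Pivot selection for the dual simplex method (assumes all Zj - Cj >= 0 already).
def dual_simplex_pivot(zj_cj, xb, rows):
    """Return (leaving_row, entering_col), or None if all xBi >= 0 (optimal)."""
    r = min(range(len(xb)), key=lambda i: xb[i])          # most negative xBi
    if xb[r] >= 0:
        return None
    ratios = [(zj_cj[j] / rows[r][j], j)
              for j in range(len(zj_cj)) if rows[r][j] < 0]
    if not ratios:
        raise ValueError("no negative entry in the key row: no feasible solution")
    return r, max(ratios)[1]                              # maximum-ratio column

# Initial table of Example 6.7 below: xB = (-1, -2), columns (x1, x2, S1, S2).
print(dual_simplex_pivot([3, 1, 0, 0], [-1, -2],
                         [[-1, -1, 1, 0], [-2, -3, 0, 1]]))   # (1, 1): S2 out, x2 in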
Example 6.7: Use the dual simplex method to solve the following LPP.
Max Z = –3x1 – x2
Subject to x1 + x2  1
2x1 + 3x2  2
x1, x2  0
Solution: Convert the given constraints into < type.
Max Z = –3x1 – x2
Subject to –x1 – x2  –1
–2x1 – 3x2  –2
x1, x2  0
Introducing slack variables S1 and S2  0, we get:
Max Z = –3x1 – x2 + 0S1, + 0S2
Subject to –x1 – x2 + S1 = –1
–2x1 – 3x2 + S2 = –2
x1, x2, S1, S2  0
An initial basic (infeasible) solution of the modified LPP is S1 = – 1, S2= – 2.
Cj –3 –1 0 0

CB B XB X1 X2 S1 S2

0 S1 –1 –1 –1 1 0
0 S2 –2 –2 –3 0 1

Zj 0 0 0 0 0
Z j–C j 3 1 0 0

Since all Zj – Cj ≥ 0 and all xBi < 0, the current solution is not an optimum basic feasible solution.
Since xB2 = –2 is the most negative, the corresponding basic variable S2 leaves the basis. Also, Max { (Zj – Cj)/a2j : a2j < 0 } = Max { 3/(–2), 1/(–3) } = –1/3 = (Z2 – C2)/a22, so the non-basic variable x2 enters the basis.
Drop S2 and introduce x2.
First iteration NOTES
Cj –3 –1 0 0

CB B XB X1 X2 S1 S2

0 S1 –1/3 –1/3 0 1 –1/3


–1 X2 2/3 2/3 1 0 –1/3

Zj – 2/3 – 2/3 –1 0 1/3


Z j–C j 7/3 0 0 1/3

Since all Zj – Cj ≥ 0 and xB1 = –1/3 < 0, the current solution is not an optimum basic feasible solution.
Since xB1 = –1/3 < 0, the basic variable S1 leaves the basis. Also, Max { (Zj – Cj)/a1j : a1j < 0 } = Max { (7/3)/(–1/3), (1/3)/(–1/3) } = –1, which corresponds to the non-basic variable S2.
∴ Drop S1 and introduce S2.
Second iteration
Cj –3 –1 0 0

CB B xB x1 x2 S1 S2

0 S2 1 1 0 –3 1
–1 x2 1 1 1 –1 0

Zj –1 –1 –1 1 0
Z j–C j 2 0 1 0

Since all Zj – Cj  0 and also xBi  0, an optimum basic feasible solution has
been reached. The optimal solution to the given LPP is x1 = 0; x2 = 1, Maximum
Z = – 1.
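As a cross-check, the same LPP can be handed to an off-the-shelf solver; the snippet below is an added verification (assuming SciPy) and reproduces Max Z = –1 at x1 = 0, x2 = 1.

# Verifying Example 6.7: Max Z = -3x1 - x2 with x1 + x2 >= 1, 2x1 + 3x2 >= 2.
from scipy.optimize import linprog

# linprog minimizes over <= constraints, so minimize 3x1 + x2 subject to
# -x1 - x2 <= -1 and -2x1 - 3x2 <= -2.
res = linprog([3, 1], A_ub=[[-1, -1], [-2, -3]], b_ub=[-1, -2],
              bounds=[(0, None), (0, None)], method="highs")
print("x =", res.x, " Max Z =", -res.fun)   # x = [0, 1], Max Z = -1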
Example 6.8: Solve by the dual simplex method the following LPP.
Min Z = 5x1 + 6x2
Subject to x1 + x2  2
4x1 + x2  4
x1, x2  0
Solution: The given LPP is equivalent to Max Z = –5x1 – 6x2
Subject to – x1 – x2  –2
–4x1 – x2 ≤ –4 and
x1, x2 ≥ 0
By introducing slack variables S1 and S2, the standard form of LPP becomes,
NOTES Max Z = –5x1 – 6x2 + 0S1 + 0S2
Subject to – x1 – x2 + S1 = –2
– 4x1 – x2 + S2 = –4
Initial table
Cj –5 –6 0 0

CB B XB X1 X2 S1 S2

0 S1 –2 –1 –1 1 0
0 S2 –4 –4 –1 0 1

Zj 0 0 0 0 0
Z j–C j 5 6 0 0

Since all Zj – Cj ≥ 0 and some xBi < 0, the current solution is not an optimum basic feasible solution.
Since xB2 = –4 is the most negative, the corresponding basic variable S2 leaves the basis.
Also, Max { (Zj – Cj)/a2j : a2j < 0 } = Max { 5/(–4), 6/(–1) } = –5/4, which corresponds to the non-basic variable x1, so x1 enters the basis.
First iteration
Cj –5 –6 0 0

CB B XB X1 X2 S1 S2

0 S1 –1 0 –3/4 1 –1/4
–5 X1 1 1 1/4 0 –1/4

Zj –5 –5 – 5/4 0 5/4
Z j–C j 0 19/4 0 5/4

Since all Zj – Cj  0 and also xB1= – 1<0, the current basic feasible solution
is not optimum. As xB1 = – 1 < 0, the basic variable S1 leaves the basis.
Z j  Cj   19 5 
Also, since Max  , ai1  0  = Max  4 , 4  = 5
4 , it corresponds
 ai1   3 4 14 
to the non-basic variable S2.
:. Drop S1 and introduce S2.
Second iteration
Cj –5 –6 0 0

CB B XB X1 X2 S1 S2
NOTES
0 S2 4 0 3 –4 1
–5 X1 2 1 1 –1 0

Zj – 10 –5 –5 5 0
Z j–C j 0 1 5 0

Since all Zj – Cj  0 and also all xBi  0, the current basic feasible solution
is optimum. The optimal solution is given by x1 = 2, x2 = 0, Max Z = – 10
i.e., Min Z = 10.
Example 6.9: Use the dual simplex method to solve the following LPP.
Max Z = –3x1 – 2x2
Subject to x1 + x2 ≥ 1
x1 + x2 ≤ 7
x1 + 2x2 ≥ 10
x2 ≤ 3
x1, x2 ≥ 0
Solution: Converting the ‘≥’ inequalities of the constraints into ‘≤’ form, the given LPP becomes,
Max Z = –3x1 – 2x2
Subject to –x1 – x2 ≤ –1
x1 + x2 ≤ 7
–x1 – 2x2 ≤ –10
0x1 + x2 ≤ 3
By introducing the non-negative slack variables S1, S2, S3 and S4, the standard
form of the LPP becomes,
Max Z = – 3x1 – 2x2 + 0S1 + 0S2 + 0S3 + 0S4
Subject to – x1 – x2 + S1 = –1
x1 + x2 + S2 = 1
–x1 – 2x2 + S3 = –10
0x1 + x2 + S4 = 3
The initial solution is given by:
S1= –1, S2 = 7, S3 = –10, S4 = 3

Integer Programming Initial table
Cj –3 –2 0 0 0 0

CB B XB X1 X2 S1 S2 S3 S4
0 S1 –1 –1 –1 1 0 0 0
0 S2 7 1 1 0 1 0 0
0 S3 – 10 –1 –2 0 0 1 0
0 S4 3 0 1 0 0 0 1
Zj 0 0 0 0 0 0 0
Zj – Cj 3 2 0 0 0 0

Since all Zj – Cj ≥ 0 and some xBi < 0, the current solution is not an optimum basic feasible solution. Since xB3 = –10 is the most negative, the basic variable S3 leaves the basis.
Also, since Max { (Zj – Cj)/a3j : a3j < 0 } = Max { 3/(–1), 2/(–2) } = –1, the non-basic variable x2 enters the basis.
First iteration
Cj –3 –2 0 0 0 0

CB B XB X1 X2 S1 S2 S3 S4

0 S1 4 – 1/2 0 1 0 – 1/2 0
0 S2 2 1/2 0 0 1 1/2 0
–2 X2 5 1/2 1 0 0 – 1/2 0
0 S4 –2 –1/2 0 0 0 1/2 1
Zj – 10 –1 –2 0 0 1 0
Zj – Cj 2 0 0 0 1 0

Second iteration
Since xB4 = –2 < 0, S4 leaves the basis.
Max { (Zj – Cj)/a4j : a4j < 0 } = Max { 2/(–1/2) } = –4
Hence, x1 enters the basis. Drop S4 and introduce x1.
Cj 3 2 0 0 0 0

CB B XB X1 X2 S1 S2 S3 S4

0 S1 2 0 0 1 0 –1 –1
0 S2 0 0 0 0 1 1 1
–2 X2 3 0 1 0 0 0 1
–3 X1 4 1 0 0 0 –1 –2
Zj – 18 –3 –2 0 0 3 4
Zj – Cj 0 0 0 0 3 4 NOTES
Since all Zj – Cj ≥ 0 and all xBi ≥ 0, the current solution is an optimum basic feasible solution.
∴ The optimum solution is Max Z = –18, x1 = 4, x2 = 3.
Example 6.10: Use dual simplex method to solve the following LPP.
Max Z = – 2x1 – x3
Subject to x1 + x2 – x3 ≥ 5
x1 – 2x2 + 4x3 ≥ 8
x1, x2, x3 ≥ 0
Solution: The given problem can be written as:
Max Z = – 2x1 – 0x2 – x3
Subject to –x1 – x2 + x3 ≤ –5
–x1 + 2x2 – 4x3 ≤ –8
x1, x2, x3  0
Adding the slack variables S1 and S2, we get the constraints,
–x1 – x2 + x3+ S1 = –5
–x1 + 2x2 – 4x3 + S2 = –8
Cj –2 0 –1 0 0

CB B XB X1 X2 X3 S1 S2

0 S1 –5 –1 –1 1 1 0
0 S2 –8 –1 2 –4 0 1

Zj 0 0 0 0 0 0
Zj – Cj 2 0 1 0 0

Since all Zj – Cj ≥ 0 and some xBi < 0, the solution is not optimum.
Since xB2 = –8 is the most negative, the basic variable S2 leaves the basis. Also,
Max { (Zj – Cj)/a2j : a2j < 0 } = Max { 2/(–1), 1/(–4) } = –1/4
∴ x3 enters the basis. Drop S2 and introduce x3.
First iteration
Cj –2 0 –1 0 0

CB B XB X1 X2 X3 S1 S2
NOTES
0 S1 –7 – 5/4 – 1/2 1 1 1/4
–1 X3 2 1/4 – 1/2 0 0 – 1/4
Zj –2 – 1/4 1/2 –1 0 1/4
Zj – Cj 7/4 1/2 0 0 1/4

Since xB1 = –7 < 0, the basic variable S1 leaves the basis. Also,
Max { (Zj – Cj)/a1j : a1j < 0 } = Max { (7/4)/(–5/4), (1/2)/(–1/2) } = Max(–7/5, –1) = –1
Therefore, the non-basic variable x2 enters the basis. Drop S1 and introduce x2.
Second iteration
Cj –2 0 –1 0 0

CB B XB X1 X2 X3 S1 S2
0 X2 14 5/2 1 0 –2 –1/2
–1 X3 9 3/2 0 1 –1 – 1/1
Zj –9 – 3/2 0 –1 1 1/2
Zj – Cj 1/2 0 0 1 1

Since all Zj – Cj ≥ 0 and all xBi ≥ 0, the current feasible solution is optimum.
The optimal solution is given by x1 = 0, x2 = 14, x3 = 9.
Max Z = – 9.

Check Your Progress


1. Define integer programming.
2. Explain the importance of integer programming problems.
3. State the applications of integer programming.
4. Analyse the Gomory's cutting method.
5. What is Branch and Bound method?
6. Define the mixed integer programming problem.
7. Write first four steps of dual simplex algorithm.
6.4 ANSWERS TO CHECK YOUR PROGRESS
QUESTIONS

1. A linear programming problem in which all or some of the decision variables NOTES
are constrained to assume non-negative integer values is called an Integer
Programming Problem (IPP).
In a Linear Programming Problem (LPP) if all variables are required to take
integer values then it is called the Pure (all) Integer Programming Problem
(Pure IPP).
2. In an LPP, all the decision variables are allowed to take any non-negative real values, since fractional values are quite possible and appropriate in many situations. There are several frequently occurring circumstances in
business and industries that lead to planning models involving integer-valued
variables. For example, in production, manufacturing is frequently scheduled
in terms of batches, lots or runs. In allocation of goods, a shipment must
involve a discrete number of trucks or aircrafts. In such cases the fractional
values of variables like 13/3 may be meaningless in the context of the actual
decision problem.
3. Integer programming is applied in business and industries. All assignment
and transportation problems are integer programming problems, because
in the assignment and travelling salesmen problem all the decision variables
are either zero or one.
i.e., xij = 0 or 1
Other examples include capital budgeting and production scheduling
problems. In fact, any situation involving decisions of the type ‘either to do
a job or not’ can be viewed as an IPP. In all such situations,
xij = 1, if the jth activity is performed.
xij = 0, if the jth activity is not performed.
In addition, allocation problems involving the allocation of men or machines
give rise to IPP, since such commodities can be assigned only in integers
and not in fractions.
4. A systematic procedure for solving pure IPP was first developed by R.E.
Gomory in 1956, which he later used to deal with the more complicated
cases of mixed integer programming problems. This method consists of first
solving the IPP as an ordinary LPP by ignoring the restriction of integer
values and then introducing a new constraint to the problem such that the
new set of feasible solution includes all the original feasible integer solutions,
but does not include the optimum non-integer solution initially found. This
new constraint is called ‘Fractional cut’ or ‘Gomorian constraint’. Then the
revised problem is solved using the simplex method, till an optimum integer
solution is obtained.
5. This is an enumeration method in which all feasible integer points are
enumerated. This is the widely used search method based on Branch and
Bound technique. It was developed in 1960 by A.H. Land and A.G. Doig.
This method is applicable to both pure and mixed IPP. It first divides the
NOTES feasible region into smaller subsets that eliminate parts containing no feasible
integer solution.
6. In a mixed IPP only some of the variables are restricted to integer values, while the other variables may take integer or any other non-negative real values. Such problems are solved by an iterative mixed integer cutting plane procedure, in which the fractional cuts are generated only for the integer-restricted variables.
7. The dual simplex method is very similar to the regular simplex method, the
only difference lies in the criterion used for selecting a variable to enter the
basis and to leave the basis. In the dual simplex method, we first select the
variable to leave the basis and then the variable to enter the basis. This
method yields an optimal solution to the given LPP in a finite number of
steps, provided no basis is repeated.
The dual simplex method is used to solve problems that start as dual feasible
(i.e., whose primal is optimal but infeasible). In this method, the solution
starts as optimum, but infeasible, and remains infeasible until the true optimum
is reached, at which point the solution becomes feasible.

6.5 SUMMARY

 A linear programming problem in which all or some of the decision variables


are constrained to assume non-negative integer values is called an Integer
Programming Problem (IPP).
 In a Linear Programming Problem (LPP) if all variables are required to
take integer values then it is called the Pure (all) Integer Programming
Problem (Pure IPP).
 In IPP, all the decision variables are allowed to take any non-negative real
values as it is quite possible and appropriate to have fractional values in
many situations. There are several frequently occurring circumstances in
business and industries that lead to planning models involving integer-valued
variables.
 A systematic procedure for solving pure IPP was first developed by R.E.
Gomory in 1956, which he later used to deal with the more complicated
cases of mixed integer programming problems.
 This is an enumeration method in which all feasible integer points are
enumerated. This is the widely used search method based on Branch and
Bound technique. It was developed in 1960 by A.H. Land and A.G. Doig.

 Mixed integer cutting plane procedure: The iterative procedure for the solution Integer Programming

of mixed integer programming problem is as follows.


 Some times a few or all the variables of an IPP are constrained by their
upper or lower bounds. The most general method for the solution of
NOTES
such constrained optimization problems is called ‘Branch and Bound
method’.
 The dual simplex method is very similar to the regular simplex method, the
only difference lies in the criterion used for selecting a variable to enter the
basis and to leave the basis. In the dual simplex method, we first select the
variable to leave the basis and then the variable to enter the basis. This
method yields an optimal solution to the given LPP in a finite number of
steps, provided no basis is repeated.
 The dual simplex method is used to solve problems that start as dual feasible
(i.e., whose primal is optimal but infeasible). In this method, the solution
starts as optimum, but infeasible, and remains infeasible until the true optimum
is reached, at which point the solution becomes feasible.

6.6 KEY WORDS

 Integer Programming Problem (IPP): A problem in which all or some


variables are constrained to assume non-negative integer values.
 Pure IPP: In an IPP when all variables are constraint to non-negative integer
values.
 Mixed IPP: In this type of IPP, some variables are allowed to assume
nonnegative non-integer values.
 Standard discrete programming problem: When integer values of
variables are restricted to either of the two discrete values, 0 or 1, it is
known as standard discrete programming problem. It is also known by
another name ‘0 – 1 programming problem’.
 Cutting plane method: A method of solving integer programming problem.
In this method, solution is first made by ignoring the constraint of having
integer values of the variables initially and then new constraint is applied to
solve it.
 Branch and Bound method: An enumeration method in which a feasible
region is divided into smaller subsets and each of these is examined
successively to find a feasible solution. Initially, it is solved ignoring integer
constraints. Then, solution so obtained is divided into two disjoint subsets
of two consecutive non-negative integers. This method is used both for
pure and mixed integer programming problems.

6.7 SELF ASSESSMENT QUESTIONS AND
EXERCISES

NOTES Short-Answer Questions


1. How is an IPP different from a linear programming problem?
2. What is ‘0 - 1 programming problem’?
3. What is a Gomorian constraint?
4. Which method is widely used for solving integer programming problems?
5. Describe Gomory's cutting plane method.
6. Explain the Branch and Bound method?
7. Define dual simplex method.
8. Illustrate the dual simplex algorithm.
Long-Answer Questions
1. Find the optimum integer solution of the following pure integer programming
problems.
Max Z = 4x1 + 3x2
Subject to constraints, x1 + 2x2  4
2x1 + x2  6
x1, x2  0 and are integers.
[Ans. x1 = 3, x2 = 0 and Max Z = 12]
2. Find the optimum integer solution of the following pure integer programming
problems.
Max Z = 3x1 + 4x2
Subject to constraints, 3x1 + 2x2  8
x1 + 4x2  10
x1, x2  0 and are integers.
[Ans. Max Z = 16, x1 = 0, x2 = 4]
3. Find the optimum integer solution of the following pure integer programming
problems.
Max Z = 3x1 – 2x2 + 5x3
Subject to constraints, 4x1 + 5x2 + 5x3  30
5x1 + 2x2 + 7x3  28
x1, x2, x3  0 and are integers.
[Ans. x1 = x2 = 0, x3 = 4 and Max Z = 20]

4. Find the optimum integer solution of the following pure integer programming

problems.
Min Z = –2x1 – 3x2
Subject to constraints, 2x1 + 2x2  7
NOTES
x1  2
x2  2
x1, x2  0 and are integers.
[Ans. Min Z = –8, x1 = 1, x2 = 2]
5. Solve the following mixed integer programing problems using Gomory’s
cutting plane method.
Max Z = 7x1 + 9x2
Subject to constraints, –x1 + 3x2  6
7x1 + x2  35
x1, x2  0 and x1 is an integer.
[Ans. x1 = 3, x2 = 2, Max Z = 5 or x1 = 4, x2 = 1, Max Z = 5]
6. Solve the following mixed integer programing problems using Gomory’s
cutting plane method.
Max Z = 3x1 + x2 + 3x3
Subject to constraints, –x1 + 2x2 + x3 ≤ 4
4x2 – 3x3 ≤ 2
x1 – 3x2 + 2x3 ≤ 3
x1, x2, x3 ≥ 0, where x1 and x3 are integers.
[Ans. x1 = 5, x2 = 11/4, x3 = 3, Max Z = 107/4]
7. Solve the following mixed integer programing problems using Gomory’s
cutting plane method.
Max Z = x1 + x2
Subject to constraints, 2x1 + 5x2 ≤ 16
6x1 + 5x2 ≤ 30
x1, x2 ≥ 0 and x1 is an integer.
[Ans. x1 = 4, x2 = 6/5, Max Z = 26/5]
8. Solve the following mixed integer programing problems using Gomory’s
cutting plane method.
Min Z = 10x1 + 9x2
Subject to constraints, x1 ≤ 8
x2 ≤ 10
5x1 + 3x2 ≥ 45
x1, x2 ≥ 0 and x1 is an integer.
[Ans. x1 = 8, x2 = 5/3, Min Z = 95]
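Small pure IPPs such as Question 1 above can be checked by brute-force enumeration of the integer points. The sketch below is an illustration in Python (not part of the original exercises) and assumes the ‘≤’ direction of the constraints as printed:

# Brute-force check of Question 1: Max Z = 4x1 + 3x2
# subject to x1 + 2x2 <= 4, 2x1 + x2 <= 6, x1, x2 >= 0 and integer.
best = None
for x1 in range(0, 4):            # 2x1 <= 6 implies x1 <= 3
    for x2 in range(0, 3):        # x1 + 2x2 <= 4 implies x2 <= 2
        if x1 + 2 * x2 <= 4 and 2 * x1 + x2 <= 6:
            z = 4 * x1 + 3 * x2
            if best is None or z > best[0]:
                best = (z, x1, x2)
print(best)                       # (12, 3, 0): Max Z = 12 at x1 = 3, x2 = 0

The same loop, with the objective and constraints edited, can be used to confirm the other pure integer answers in this list.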

6.8 FURTHER READINGS

Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.
Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.
Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata
McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi:
Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.

UNIT 7 LINEAR PROGRAMMING AND TRANSPORTATION PROBLEM
Structure
7.0 Introduction
7.1 Objectives
7.2 Linear Programming Formulation of Transportation Problems
7.3 Existence of Solution of Transportation Problems
7.4 Solution of a Transportation Problem
7.4.1 Transhipment Model
7.5 Feasible Solution (NWCM - LCM - VAM)
7.6 Answers to Check Your Progress Questions
7.7 Summary
7.8 Key Words
7.9 Self Assessment Questions and Exercises
7.10 Further Readings

7.0 INTRODUCTION

Linear Programming (LP, also called linear optimization) is a method to achieve


the best outcome (such as, maximum profit or lowest cost) in a mathematical
model whose requirements are represented by linear relationships. Linear
programming is a special case of mathematical programming (also known
as mathematical optimization). The transportation problem as it is stated in modern
or more technical literature looks somewhat different because of the development
of Riemannian geometry and measure theory. The mines-factories example, simple
as it is, is a useful reference point when thinking of the abstract case. In this setting,
we allow the possibility that we may not wish to keep all mines and factories open
for business, and allow mines to supply more than one factory, and factories to
accept iron from more than one mine.
There are numerous categories of Linear Programming (LP) models that
exhibit an exceptional and unique structure that helps in the formulation of efficient
algorithms for finding their solutions. These structures helped to solve larger
problems which otherwise would not have been possible to solve using the existing
technology. Traditionally, the first or initial of these special and unique structures
that were typically analysed is termed as the Transportation Problem (TP) are
considered as a particular type of network problem. Principally, it provided an
efficient solution to the problem to be solved and was considered as the initial
most widespread application of Linear Programming Problems (LPP) that were
typically used for the industrial logistics.
Fundamentally, the Transportation Problem (TP) is usually concerned with the distribution of a certain commodity/product from several origins/sources to
several destinations with minimum total cost through single mode of transportation.
A necessary and sufficient condition for the existence of a feasible solution to a
transportation problem is that the transportation problem must be balanced. A
Balanced Transportation Problem (BTP) is a transportation problem in which the
total supply is equivalent to the total demand.
In the transportation problems, the term ‘Loop’ or ‘Path’ is defined as an
ordered sequence of at least four different cells that satisfy all the concerned
conditions.
In this unit, you will study about the linear programming formulation of
transportation problem, loops in the transportation problems, finding an initial base
in the transportation problems, existence of solution of transportation problems,
transhipment problem, and feasible solution (NWCM-LCM-VAM).

7.1 OBJECTIVES

After going through this unit, you will be able to:


 Understand the linear programming formulation of transportation problem
 Identify the loops in the transportation problems
 Finding an initial base in the transportation problem
 Define the existence of solution of transportation problem
 Explain the transhipment problem
 Illustrate the feasible solution (NWCM - LCM - VAM)

7.2 LINEAR PROGRAMMING FORMULATION OF


TRANSPORTATION PROBLEMS

There are numerous categories of Linear Programming (LP) models that exhibit
an exceptional and unique structure that helps in the formulation of efficient algorithms
for finding their solutions. These structures helped to solve larger problems which
otherwise would not have been possible to solve using the existing technology.
Traditionally, the first of these special structures to be analysed was the Transportation Problem (TP), which is considered a particular type of network problem. Principally, it provided an efficient solution and was among the earliest widespread applications of Linear Programming Problems (LPP), typically used in industrial logistics.

In the field of linear programming, the ‘Network Models’ are considered as the most significant and unique structures. The problems are solved using the special rules of the simplex method, which yield the result on the basis of the structure of the network model.
The network model of transportation problems typically describes the transportation of a product manufactured at different plants or factories (supply origins) to a number of warehouses or markets (demand destinations). This is solved using the transhipment method
or the transhipment problem. In this case we know or assume about the total
number of units that can be produced at each plant or factory and the total number
of units that are required in the market. The product is not sent directly from
source (plant or factory) to destination (market), but it is first stored at the
intermediary points namely the warehouses or distribution centers, and then routed
to the required destination market.
The objective is to minimize the variable cost of producing the product and shipping it to meet the consumers’ demand. The
different sources, destinations, and the intermediate points are jointly termed as
the ‘Nodes’ of the network, and the various transportation links which connect
these nodes are termed as ‘Arcs’.
Fundamentally, the Transportation Problem (TP) is usually concerned with
the distribution of a certain commodity/product from several origins/sources to
several destinations with minimum total cost through single mode of transportation.
A necessary and sufficient condition for the existence of a feasible solution to a
transportation problem is that the transportation problem must be balanced. A
Balanced Transportation Problem (BTP) is a transportation problem in which the
total supply is equivalent to the total demand.
Loops in the Transportation Problems
In the transportation problems, the term ‘Loop’ or ‘Path’ is defined as an ordered
sequence of at least four different cells that satisfy all the following three conditions:
1. Any two consecutive cells lie in either the same row or same column.
2. No three or more consecutive cells lie in the same row or column.
3. The last cell is in the same row or column as the first cell.
The loops can be used to improve the basic feasible solution, and the process is described in the following four steps.
Step 1: Find the only loop involving the entering variable and some of the basic
feasible variables.
Step 2: Count the cells in the loop (starting from 0), label them as odd cells or
even cells.
Step 3: Find the odd cell with the smallest value. Call this value, say θ. This cell corresponds to the leaving variable.
Step 4: Decrease each odd cell in the loop by θ and increase each even cell in the loop by θ.
Since the position of the entering variable is already known, one can find the only possible loop. We start by moving up from the entering variable and then move to the right to the last basic variable in that row; the initial directions are chosen arbitrarily. Now mark the loop cells as even and odd.
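As a small illustration of Steps 2 to 4 (a Python sketch, not part of the original text; the function name and data layout are my own), the adjustment along an already identified loop can be applied as follows. Position 0 of the loop is the entering cell, so even positions are increased and odd positions decreased by θ:

def apply_loop(allocation, loop):
    # allocation: dict mapping (row, column) -> shipped quantity
    # loop: list of cells, starting with the entering cell (position 0)
    theta = min(allocation.get(cell, 0) for cell in loop[1::2])   # smallest odd cell
    for k, cell in enumerate(loop):
        allocation[cell] = allocation.get(cell, 0) + (theta if k % 2 == 0 else -theta)
    return theta

# The chain reaction cycle worked out later in this unit: entering cell (1, 3),
# with the loop (1, 3) -> (1, 2) -> (3, 2) -> (3, 3), written here in 0-based indices.
bfs = {(0, 0): 400, (0, 1): 100, (1, 1): 700, (2, 1): 100, (2, 2): 200, (2, 3): 500}
print(apply_loop(bfs, [(0, 2), (0, 1), (2, 1), (2, 2)]))          # 100
print(bfs)       # the cell that drops to zero would leave the basis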
Finding an Initial Base in the Transportation Problems
We can determine an initial basic feasible solution in the transportation problems
using any one of the following three methods:
1. North West Corner Rule (NWCR)
2. Least Cost Method (LCM) or Matrix Minimum Method
3. Vogel Approximation Method (VAM)
Once an initial basic feasible solution has been obtained by one of these methods, it can be tested for optimality and improved using the following steps.
Step 1: Determine the values of dual variables.
Step 2: Compute the opportunity cost.
Step 3: Now check the sign of each opportunity cost as follows:
 If the opportunity costs of all the unoccupied cells are either positive or
zero, then the given solution is the optimal solution.
 If one or more unoccupied cell has negative opportunity cost, then the given
solution is not an optimal solution and further savings in transportation cost
are possible.
Step 4: Select the unoccupied cell with the smallest negative opportunity cost as
the cell to be included in the next solution.
Step 5: Draw a closed path or loop for the unoccupied cell selected in the Step 4.
The right angle turn in this path is permitted only at occupied cells and at the
original unoccupied cell.
Step 6: Assign alternate plus and minus signs at the unoccupied cells on the corner
points of the closed path or loop with a plus sign at the cell being evaluated.
Step 7: Determine the maximum number of units that should be shipped to this
unoccupied cell.
The smallest value with a negative position on the closed path or loop specifies
the number of units that can be shipped to the entering cell. An unoccupied cell
becomes an occupied cell by adding the quantity to all the cells on the corner
points of the closed path or loop that are marked with plus signs, and then by
subtracting it from those cells which are marked with minus signs.

7.3 EXISTENCE OF SOLUTION OF TRANSPORTATION PROBLEMS

The linear programming model of a transport problem is:


Minimize Z = Σi Σj Cij Xij, where i = 1, 2, ……, n and j = 1, 2, ….., m.
Here, Xij is the amount to be moved from source i to destination j, and Cij is the cost of movement from source i to destination j.
The constraints are:
Σj Xij = ai, where i = 1, 2, ……, n. This is the row sum and shows total supply.
Σi Xij = bj, where j = 1, 2, ….., m. This is the column sum and shows total demand.
And Xij ≥ 0.
A transport problem is balanced if,
Σ ai = Σ bj, which means that the Total supply = Total demand.
Let us take the example of a balanced problem. The total supply equals the
total demand.
In the simplex method, at each step, we send units along a route that is unused in the current basic feasible solution and eliminate one of the routes that is currently in use.
This may be visualized as follows:

[Figure: bipartite network linking demand nodes d1, …, dn with supply nodes s1, …, sm.]

Let us take the following supply chain network to solve using the simplex
method.

Storage        Dealers                         Supply
               1      2      3      4
   1          12     13      4      6          500
   2           6      4     10     11          700
   3          10      9     12      4          800
Demand       400    900    200    500         2000
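Before working through the two tableau phases below, note that the same instance can be posed directly in the linear programming form of Section 7.3 and handed to a general LP solver. The following Python sketch is only an illustration (it assumes NumPy and SciPy are available and is not part of the original text); it flattens Xij row by row:

import numpy as np
from scipy.optimize import linprog

cost = np.array([[12, 13, 4, 6],
                 [6, 4, 10, 11],
                 [10, 9, 12, 4]])
supply = [500, 700, 800]
demand = [400, 900, 200, 500]
m, n = cost.shape

A_eq = []
for i in range(m):                       # row sums equal the supplies
    row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1; A_eq.append(row)
for j in range(n):                       # column sums equal the demands
    col = np.zeros(m * n); col[j::n] = 1; A_eq.append(col)

res = linprog(cost.flatten(), A_eq=np.array(A_eq), b_eq=supply + demand,
              bounds=(0, None), method="highs")
print(res.fun)                           # minimum total transportation cost
print(res.x.reshape(m, n).round())       # optimal shipment plan Xij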

The solution is reached in two phases.


Phase I
Step 1 We start with the cell in the North West corner.
500
700
800
400 900 200 500

Step 2 Now, we allocate as many units as possible that are consistent with demand
and the available supply.
Step 3 Move one cell right if supply still remains, else move one cell down.
Step 4 Repeat Step 2.
Construction of initial basic feasible solution (BFS)
400 100
700
800
0 900 200 500

400 100 0
700
800
0 800 200 500

400 100 0
700 0
800
0 100 200 500

400 100 0
700 0
100 700
0 0 200 500

400 100 0
700 0
100 200 500
0 0 0 500
400 100 0
700 0
100 200 500 0
0 0 0 0
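The Phase I iterations shown above can be reproduced with a small North West corner routine. This is a Python sketch (not from the text); on a tie, where a row and a column are exhausted together, it simply moves down, which is enough for this example:

def north_west_corner(supply, demand):
    supply, demand = supply[:], demand[:]       # work on copies
    allocation, i, j = {}, 0, 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])           # allocate as much as possible
        allocation[(i, j)] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:                      # row exhausted: move down
            i += 1
        else:                                   # column exhausted: move right
            j += 1
    return allocation

print(north_west_corner([500, 700, 800], [400, 900, 200, 500]))
# {(0, 0): 400, (0, 1): 100, (1, 1): 700, (2, 1): 100, (2, 2): 200, (2, 3): 500}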
The basic feasible solution is like a tree network.
[Figure: tree network of the basic feasible solution linking demand nodes d1–d4 with supply nodes s1–s3; the branch from d1 to s1 carries C11, X11.]

Phase II
Step 1 Find the shadow price for each supply side ui and each demand side vj
such that ui + vj = Cij for every used cell, which is the basic variable. Set vn = 0.
Step 2 Now, make calculations for reduced costs rij = Cij – ui – vj for the unused
cells that are the non-basic variables. Now, if the reduced cost for every unused
cell is non-negative, then the solution is optimal.
Step 3 Now, select an unused cell with the most negative reduced cost. Use a
chain reaction cycle to find the maximum number of units (µ) that can be allocated
to the cell and then adjust the allocation appropriately. Update the values of the
new set of used cells (BFS).
Step 4 Move to Step 1.
Now find the shadow prices:
12 13 u1
4 u2
9 12 4 u3
v1 v2 v3 v4 = 0

12 13 u1
4 u2
9 12 4 u3 = 4
v1 v2 v3 v4 = 0

12 13 u1
4 u2
9 12 4 u3 = 4
v1 v2 v3 = 8 v4 = 0

12 13 u1
4 u2
9 12 4 u3 = 4
v1 v2 = 5 v3 = 8 v4 = 0

12 13 u1
4 u2 = –1
9 12 4 u3 = 4
v1 v2 = 5 v3 = 8 v4 = 0

12 13 u1 = 8
4 u2 = –1
9 12 4 u3 = 4
v1 v2 = 5 v3 = 8 v4 = 0

12 13 u1 = 8
4 u2 = –1
9 12 4 u3 = 4
v1 = 4 v2 = 5 v3 = 8 v4 = 0

Reduced cost coefficients


12 13 4 6 u1 = 8
6 4 10 11 u2 = –1
10 9 12 4 u3 = 4
v1 = 4 v2 = 5 v3 = 8 v4 = 0

12 / 0 13 / 0 4 / –12 6 / –2 u1 = 8
6/3 4 10 / 3 11 / 12 u2 = –1
10 / 2 9/0 12 / 0 4/0 u3 = 4
v1 = 4 v2 = 5 v3 = 8 v4 = 0
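The shadow prices and reduced costs shown above can be checked with a short routine. This Python sketch is an illustration only (the variable names are my own); it takes the cost matrix, the set of used (basic) cells, and the convention that the dual of the last column is fixed at zero:

cost = [[12, 13, 4, 6],
        [6, 4, 10, 11],
        [10, 9, 12, 4]]
basis = {(0, 0), (0, 1), (1, 1), (2, 1), (2, 2), (2, 3)}    # used cells of the BFS

m, n = 3, 4
u, v = [None] * m, [None] * n
v[n - 1] = 0                                  # convention: v4 = 0
while None in u or None in v:                 # solve ui + vj = Cij on the used cells
    for i, j in basis:
        if u[i] is None and v[j] is not None:
            u[i] = cost[i][j] - v[j]
        elif v[j] is None and u[i] is not None:
            v[j] = cost[i][j] - u[i]
print(u, v)                                   # [8, -1, 4] [4, 5, 8, 0]

reduced = {(i, j): cost[i][j] - u[i] - v[j]
           for i in range(m) for j in range(n) if (i, j) not in basis}
print(reduced)                                # cell (0, 2) is the most negative, -12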
Chain reaction cycle
400 100 (–) (+) 0
700 0
100 (+) 200 (–) 500 0
0 0 0 0

Here, µ = 100
[Figure: tree network of the updated basic feasible solution linking the demand and supply nodes.]

The updated modified BFS is given as:


400 100 0
700 0
200 100 500 0
0 0 0 0

7.4 SOLUTION OF A TRANSPORTATION


PROBLEM

In a transportation problem, shipments are allowed only between source-sink pairs. There may, however, exist intermediate points via which units of goods/merchandise can be transhipped from a source to a sink. A natural relaxation is to allow shipments between sources, between sinks, and between any source and sink. Transportation models which have these additional features
are called transhipment problems. Often, we see a gradual shift towards conversion
from a transhipment problem to a transportation problem. This conversion
procedure is of great significance as it broadens the applicability of the transportation algorithm as a
solution for transportation problems. This conversion procedure can be well defined
with the following example:
Transhipment Problem-to-Transportation Problem
An organic food company manufactures cereals in two cities, Leeds and Kent.
The daily cereal production capacity at Leeds and Kent are 160 and 200 packets,
respectively. Cereals are shipped by air to consumers in London and New York.
The consumers in each city require 140 packets of cereals per day. However, due
to the deregulation of air fares, the organic food company believes that it may be
cheaper to fly some variety of cereals to Leeds or Dallas, do the final packaging there, and then fly the packets of cereals to London and New York (final destinations).
The table given below shows the cost of flying one packet of the cereal between
these cities:

From / To   Leeds   Kent   London   Dallas   New York

Leeds £0 — £9 £ 14 £ 29
Kent — £0 £ 16 £ 13 £ 26
London — — £0 £7 £ 18
Dallas — — £7 £0 £ 17
New York — — — — £0

Now, to minimize the total incurred cost of daily shipments of the cereals to
its consumers, we first need to understand terminologies, such as source and sink.
Source is a city that can send products, however cannot receive any product from
any other city. Whereas, sink is a city that can receive products but cannot send to
any other city.
So, in this example we can say that Leeds and Kent are sources, Leeds and Dallas are transhipment points, and finally, London and New York are sinks (each with a daily requirement of 140 packets of cereals).
So, we see a mismatch in demand and supply, with the total supply equal to 360 packets and the total demand equal to 280 packets.
Now, to resolve this imbalance we need to create a dummy sink with a demand of 80 packets. We would now have 2 sources, 2 sinks, and 2 transhipment points.
As discussed before, transhipment points can act in dual roles, both as sources and sinks. As there are no transportation costs from a transhipment point to itself, the primary objective of reducing costs remains intact.
Therefore, we reformulate the problem, using the transhipment points, so that the imbalance between demand and supply is handled and the total transportation cost is minimized.
7.4.1 Transhipment Model
In a transhipment model, the objects are supplied from various specific sources to
various specific destinations. It is also economic if the shipment passes via the
transient nodes which are in between the sources and the destinations. It is different
from transportation problem where the shipments are directly sent from a specific source to a specific destination, whereas in the transhipment problem the main
goal is to reduce the total cost of shipments. Hence, the shipment passes via one
or more intermediary nodes before it reaches its desired specific destination.
Basically, there are two methods of evaluating transhipment problems as discussed
below.
The following is the schematic illustration of the sources and destinations
acting as transient nodes of a simple transhipment problem.

Fig. 7.1 Schematic Diagram of Simple Transhipment Model

The figure shows the shipment of objects from source S1 to destination D2.
Shipment from source S1 can pass via S2 and D1 before it reaches the desired
destination D2. Because the shipment passes via the particular transient nodes, this
arrangement is named as transhipment model. The goal of the transhipment problem
is to discover the optimal shipping model so that the total transportation cost is
reduced.
Figure 7.2 shows a different approach where the number of first starting
nodes and also the number of last ending nodes is the sum of the total number of
sources and destinations of the original problem. Let B be the buffer which should
be maintained at every transient source and transient destination. Considering it as
a balanced problem, buffer B at the least may be equal to the sum of total supplies
or the sum of total demands. Therefore, a constant B is further added to all the
starting nodes and the ending nodes as shown below:

Fig. 7.2 Modified Version of Simple Transhipment Problem

Here in the modified version of simple transhipment model, the destinations


D1, D2, D3, ..., Di,..., Dn are incorporated as added starting nodes which basically
acts as the transient nodes. Hence, these do not have the original supplies and at
least the supply of every transient node must be equal to B. Therefore, every
transient node is assigned B units of supply value. Also, the sources S1, S2, S3,...,
Sj..., Sm are incorporated as added ending nodes which basically act as the transient
nodes. These nodes too do not have the original demands but every transient
node is assigned B units of demand value. To make it a balanced problem, B is
further added to every starting node and to the ending nodes. Hence, the problem
resembles a usual transportation problem and can be solved to obtain the optimum
shipping plan.
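The buffer construction described above can be written as a small preprocessing step. The following Python sketch is an illustration (the function name is my own, not from the text); it uses the data of Example 7.1 below, for which B = 800:

def expand_transhipment(supply, demand):
    # Returns the row totals and column totals of the expanded transportation
    # table: every node also acts as a transient node carrying the buffer B.
    B = sum(supply)                                      # balanced: sum(supply) = sum(demand)
    rows = [s + B for s in supply] + [B] * len(demand)   # original sources, then destinations
    cols = [B] * len(supply) + [d + B for d in demand]
    return rows, cols

rows, cols = expand_transhipment([100, 200, 150, 350], [350, 450])
print(rows)   # [900, 1000, 950, 1150, 800, 800]
print(cols)   # [800, 800, 800, 800, 1150, 1250]

The cost matrix itself is carried over unchanged, with zero cost in the diagonal cells, as shown in the table of Example 7.1.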
Example 7.1: The following is the transhipment problem with 4 sources and
2 destinations. The supply values of the sources S1, S2, S3 and S4 are 100,
200, 150 and 350 units respectively. The demand values of destinations D1
and D2 are 350 and 450 units respectively. Transportation cost per unit between
various defined sources and destinations are given in the following table. Solve
the transhipment problem.

Destination
            S1    S2    S3    S4    D1    D2
      S1     0     4    20     5    25    12
      S2    10     0     6    10     5    20
Source S3   15    20     0     8    45     7
      S4    20    25    10     0    30     6
      D1    20    18    60    15     0    10
      D2    10    25    30    23     4     0

Solution: In the above table the number of sources is 4 and the number of
destinations is 2. Therefore, the total number of starting nodes and the ending
nodes of the transshipment problem will be 4 + 2 = 6. We also have,
B = Σ(i=1 to n) ai = Σ(j=1 to m) bj = 800

The following is the detailed format of the transshipment problem after


including transient nodes for the sources and destinations. Here, the value of B is
added to all the rows and columns.

Destination Supply

S1 S2 S3 S4 D1 D2

S1 0 4 20 5 25 12 100+800=900

S2 10 0 6 10 5 20 200+800=1000

S3 15 20 0 8 45 7 150+800=950
Source
S4 20 25 10 0 30 6 350+800=1150

D1 20 18 60 15 0 10 800

D2 10 25 30 23 4 0 800

800 800 800 800 350+800=1150 450+800=1250

The optimal solution and the corresponding total cost of transportation is


Rs 5,600. The allocations defined in the main diagonal cells are ignored. The
diagrammatic representation of the optimal shipping pattern of the shipments related
to the off-diagonal cells is shown in Figure 7.3:

[Figure: network showing the optimal shipments from sources S1–S4 to destinations D1 and D2.]
Fig. 7.3 Optimal Shipping Pattern

7.5 FEASIBLE SOLUTION (NWCM - LCM - VAM)

Optimal solution is a feasible solution (not necessarily basic) which minimizes the
total cost.
The solution of a Transportation Problem (TP) can be obtained in two stages,
namely initial solution and optimum solution.
Initial solution can be obtained by using any one of the three methods, viz.
(i) North West Corner Rule (NWCR)
(ii) Least Cost Method (LCM) or Matrix Minima Method
(iii) Vogel’s Approximation Method (VAM)
VAM is preferred over the other two methods, since the initial basic feasible
solution obtained by this method is either optimal or very close to the optimal
solution.
The cells in the transportation table can be classified as occupied cells and unoccupied cells. The allocated cells in the transportation table are called occupied cells and the empty cells are called unoccupied cells.
The improved solution obtained from the initial basic feasible solution is called the optimal solution; this second stage of the solution can be obtained by the MODI (Modified Distribution) method.
1. North West Corner Rule (NWCR)
Step 1: Starting with the cell at the upper left corner (North West) of the
transportation matrix we allocate as much as possible so that either the capacity of
the first row is exhausted or the destination requirement of the first column is
satisfied, i.e., X11 = Min (a1,b1).
Step 2: If b1 > a1, we move down vertically to the second row and make the second allocation of magnitude X21 = Min (a2, b1 – X11) in the cell (2, 1).
If b1 < a1, move right horizontally to the second column and make the second allocation of magnitude X12 = Min (a1 – X11, b2) in the cell (1, 2).
If b1 = a1, there is a tie for the second allocation. We make the second allocation of magnitude,
X12 = Min (a1 – a1, b2) = 0 in the cell (1, 2)
or, X21 = Min (a2, b1 – b1) = 0 in the cell (2, 1)
Step 3: Repeat steps 1 and 2 moving down towards the lower right corner of the
transportation table until all the rim requirements are satisfied.
Example 7.2: Obtain the initial basic feasible solution of a transportation problem
whose cost and rim requirement table is as follows:

Solution: Since ai = 34 = bj, there exists a feasible solution to the transportation
problem. We obtain initial feasible solution as follows.
The first allocation is made in the cell (1, 1) the magnitude being,
X11 = Min (5, 7) = 5.
The second allocation is made in the cell (2, 1) and the magnitude of the
allocation is given by X21 = Min (8, 7 – 5) = 2.

The third allocation is made in the cell (2, 2) the magnitude X22 = Min
(8 – 2, 9) = 6.
The magnitude of the fourth allocation is made in the cell (3, 2) given by
X32 = Min (7, 9 – 6) = 3.
The fifth allocation is made in the cell (3, 3) with magnitude X33 = Min
(7 – 3, 14) = 4.
The final allocation is made in the cell (4, 3) with magnitude X43 = Min
(14, 18 – 4) = 14.

Hence, we get the initial basic feasible solution to the given TP and it is given by,
X11 = 5; X21 = 2; X22 = 6; X32 = 3; X33 = 4; X43 = 14
Total Cost = 2 × 5 + 3 × 2 + 3 × 6 + 3 × 4 + 4 × 7 + 2 × 14
= 10 + 6 + 18 + 12 + 28 + 28 = Rs 102
Example 7.3: Determine an initial basic feasible solution to the following
transportation problem using North West Corner Rule (NWCR).

Solution: The problem is a balanced TP as the total supply is equal to the total
demand. Using the steps we find the initial basic feasible solution as given in the
following table.

The solution is given by,


X11 = 6; X12 = 8; X22 = 2; X23 = 14; X33 = 1; X34 = 4
Total Cost = 6 × 6 + 4 × 8 + 2 × 9 + 2 × 14 +6 × 1 + 2 × 4
= 36 + 32 + 18 + 28 + 6 + 8 = Rs 128.
Least Cost Method (LCM) or Matrix Minima Method
Step 1: Determine the smallest cost in the cost matrix of the transportation table.
Let it be Cij. Allocate Xij = Min (ai, bj) in the cell (i, j).
Step 2: If Xij = ai cross off the ith row of the transportation table and decrease
bj by ai. Then go to Step 3.
If Xij = bj cross off the jth column of the transportation table and decrease
ai by bj. Go to Step 3.
If Xij = ai = bj cross off either the ith row or the jth column but not both.
Step 3: Repeat Steps 1 and 2 for the resulting reduced transportation table until
all the rim requirements are satisfied. Whenever the minimum cost is not unique,
make an arbitrary choice among the minima.
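These steps translate directly into code. The following Python function is a minimal sketch of the Least Cost Method (not from the text); ties among equal minimum costs are broken arbitrarily, as Step 3 permits, so hand computations may legitimately differ while remaining valid initial solutions. A balanced problem is assumed:

def least_cost_method(cost, supply, demand):
    supply, demand = supply[:], demand[:]
    rows, cols = set(range(len(supply))), set(range(len(demand)))
    allocation = {}
    while rows and cols:
        # Step 1: smallest cost among the cells not yet crossed off.
        i, j = min(((r, c) for r in rows for c in cols),
                   key=lambda cell: cost[cell[0]][cell[1]])
        q = min(supply[i], demand[j])
        allocation[(i, j)] = q
        supply[i] -= q
        demand[j] -= q
        # Step 2: cross off the exhausted row or column (only one of them on a tie).
        if supply[i] == 0 and len(rows) > 1:
            rows.discard(i)
        else:
            cols.discard(j)
    return allocation

The total cost of the resulting solution is sum(cost[i][j] * q for (i, j), q in allocation.items()).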
Example 7.4: Obtain an initial feasible solution to the following TP using Matrix Minima Method.


Solution: Since ai = bj = 24, there exists a feasible solution to the TP using the
steps in the least cost method, the first allocation is made in the cell (3, 1) the
magnitude being X31 = 4. This satisfies the demand at the destination D1 and we
delete this column from the table as it is exhausted.

The second allocation is made in the cell (2, 4) with magnitude X24 = Min
(6, 8) = 6. Since it satisfies the demand at the destination D4, it is deleted from the
table. From the reduced table, the third allocation is made in the cell (3, 3) with
magnitude X33 = Min (8, 6) = 6. The next allocation is made in the cell (2, 3) with
magnitude X23 of Min (2, 2) = 2. Finally, the allocation is made in the cell (1, 2)
with magnitude X12 = Min (6, 6) = 6. Now, all the requirements have been satisfied
and hence, the initial feasible solution is obtained.
The solution is given by,
X12 = 6; X23 = 2; X24 = 6; X31 = 4; X33 = 6
Since the total number of occupied cells = 5 < m + n – 1
We get a degenerate solution.
Total Cost = Rs 28.
Example 7.5: Determine an initial basic feasible solution for the following TP,
using the Least Cost Method (LCM).

Solution: Since Σai = Σbj, there exists a basic feasible solution. Using the steps in the least cost method we make the first allocation to the cell (1, 3) with magnitude
X13 = Min (14, 15) = 14 (as it is the cell having the least cost).
This allocation exhausts the first row supply. Hence, the first row is deleted.
From the reduced table, the next allocation is made in the next least cost cell (2, 3)
which is chosen arbitrarily with magnitude X23 = Min (1, 16) = 1. This exhausts
the 3rd column destination.
From the reduced table the next least cost cell is (3, 4) for which allocation
is made with magnitude Min (4, 5) = 4. This exhausts the destination D4 requirement.
Delete this fourth column from the table. The next allocation is made in the cell
(3, 2) with magnitude X32 = Min (1, 10) = 1 which exhausts the 3rd origin capacity.
Hence, the 3rd row is exhausted. From the reduced table the next allocation is
given to the cell (2,1) with magnitude X21 = Min (6, 15) = 6. This exhausts the first
column requirement. Hence, it is deleted from the table.
Finally, the allocation is made to the cell (2, 2) with magnitude X22 = Min
(9, 9) = 9 which satisfies the rim requirement. These allocations are shown in the transportation table as follows:

(I Allocation) (II Allocation)

(III Allocation) (IV Allocation)

(V, VI Allocation)
The following table gives the initial basic feasible solution.


The solution is given by,


X13=14; X21=6; X22= 9; X23= 1; X32= 1; X34= 4
 Transportation Cost
= 14 × 1 + 6 × 8 + 9 × 9 + 1 × 2 + 3 × 1 + 4 × 2
= 14 + 48 + 81 + 2 + 3 + 8 = Rs 156.
Vogel’s Approximation Method (VAM)
Vogel Approximation Method (VAM) is used to find the feasible solution for
transportation of goods where the solution is either optimal or near to the optimal
solution. Typically, this method is used to reduce the transportation costs by tabulating, in a mathematical table, the transportation costs from one place to
another. In the table, the column represents the demand centres while the row
represents the supply points. The following are the general steps used in VAM:
Step 1: Identify the minimum and next minimum numbers in a column and repeat
the same for the row.
Step 2: The above step is repeated for all other columns and rows.
Step 3: Now, subtract the two numbers identified for each column and each row
such that the difference is positive.
Step 4: Identify the maximum difference among all the rows and also among all the
columns.
Step 5: Assign all the demand units for that minimum number in that column which
has got the maximum difference (repeat the same for the row).
Step 6: Remove that column and row completely and repeat the above process
until all the demand units are filled up completely.
Vogel’s Approximation Method (VAM) also takes costs into account in allocation.
The steps involved in this method for finding the initial solution are as follows.
Step 1: Find the penalty cost, namely the difference between the smallest and next
smallest costs in each row and column.
Step 2: Among the penalties as found in Step (1) choose the maximum penalty. If
this maximum penalty is more than one (i.e., if there is a tie) choose any one
arbitrarily.
Self-Instructional
Material 175
Linear Programming Step 3: In the selected row or column as by Step (2) find out the cell having the
and Transportation
Problem least cost. Allocate to this cell as much as possible depending on the capacity and
requirements.
Step 4: Delete the row or column which is fully exhausted. Again, compute the
NOTES
column and row penalties for the reduced transportation table and then go to Step
(2). Repeat the procedure until all the rim requirements are satisfied.
Note: If the column is exhausted, then there is a change in row penalty and vice
versa.
Example 7.6: Find the initial basic feasible solution for the following transportation
problem using VAM.

Solution: Since ai = bj = 950, the problem is balanced and there exists a
feasible solution to the problem.
First, we find the row and column penalty PI as the difference between the
least and the next least cost. The maximum penalty is 5. Choose the first column
arbitrarily. In this column, choose the cell having the least cost name (1, 1). Allocate
to this cell with minimum magnitude (i.e., Min (250, 200) = 200.) This exhausts
the first column. Delete this column. Since a column is deleted, there is a change in the row penalties (PII) while the column penalties remain the same. Continuing in this manner, we get the remaining allocations as given in the following table.
I Allocation II Allocation


III Allocation IV Allocation

V Allocation VI Allocation

Finally, we arrive at the initial basic feasible solution which is shown in the
following table.

There are 6 positive independent allocations, which equals m + n – 1 = 3 + 4 – 1 = 6. This ensures that the solution is a non-degenerate basic feasible solution.
Transportation Cost = 11 × 200 + 13 × 50 + 18 × 175 + 10 × 125 + 13 × 275 + 10 × 125 = Rs 12,075.
Example 7.7: Find the initial solution to the following TP using VAM.

Solution: Since Σai = Σbj, the problem is a balanced TP. Hence, there exists a feasible solution.


Finally, we have the initial basic feasible solution as given in the following
table.

There are 6 independent non-negative allocations equal to m + n – 1 = 3 +


4 – 1 = 6. This ensures that the solution is non-degenerate basic feasible.
 Transportation Cost
= 3 × 45 + 4 × 30 + 1 × 25 + 2 × 80 + 4 × 45 + 1 × 75
= 135 + 120 + 25 + 160 + 180 + 75
= 695.
Example 7.8: An organization has four destinations (D1, D2, D3 and D4) and
three sources (S1, S2 and S3) for supply of goods. The transportation cost per unit
is given below. The total availability is 700 units which exceeds the cumulative
demand of 600 units. Find the optimal transportation scheme for this condition
using the Vogel’s Approximation Method or VAM.


Solution: The solution is obtained as follows.


Step 1: First check for balance of supply and demand
Supply = 250 + 200 + 250 = 700 units
Demand = 100 + 150 + 250 + 100 = 600 units
Decision Rule
(i) If Supply = Demand then go to next step.
(ii) If Supply > Demand then add a ‘Dummy Destination’ with zero transportation
cost.
(iii) If Supply < Demand then add a ‘Dummy Source’ with zero transportation
cost.
In the given problem, Supply > Demand.
Hence, we add a ‘Dummy Destination’ say D5 with zero transportation cost and
balance demand which is difference in supply and demand (= 100 units). The
initial transportation matrix is now formulated with transportation cost in each route.
Each cell of the transportation matrix represents a possible route. In the following
table, dummy column is introduced for balancing the supply and demand.
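This decision rule is easy to automate. The following Python sketch (not from the text) pads the cost matrix with a zero-cost dummy destination or dummy source, exactly as D5 is introduced here; the placeholder cost matrix in the usage line is hypothetical and only stands in for the unit costs of the problem:

def balance(cost, supply, demand):
    cost = [row[:] for row in cost]
    supply, demand = supply[:], demand[:]
    gap = sum(supply) - sum(demand)
    if gap > 0:                        # Supply > Demand: add a dummy destination
        for row in cost:
            row.append(0)
        demand.append(gap)
    elif gap < 0:                      # Supply < Demand: add a dummy source
        cost.append([0] * len(demand))
        supply.append(-gap)
    return cost, supply, demand

# Supply and demand of this example; a placeholder 3 x 4 cost matrix is used here.
_, s, d = balance([[0] * 4 for _ in range(3)], [250, 200, 250], [100, 150, 250, 100])
print(s, d)                            # [250, 200, 250] [100, 150, 250, 100, 100]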

Step 2: (i) Decide the nature of the problem, i.e., minimization of transportation cost.
(ii) Make the initial assignment using the Vogel’s approximation method.
(i) Select the lowest transportation cost route in the initial matrix. For example, it
is route S1D5, S2D5 and S3D5 in the given problem with zero transportation cost.
Allocate the minimum of remaining balance of supply (in last column) and demand
(in last row).
In this method, we calculate the difference between the two least-cost routes
for each row and column. The difference is called the penalty cost for not using the
least-cost route. Following table shows the first calculation of ‘Penalty’ cost in
VAM.

Highest of all calculated penalty costs is for S3 and S2. Therefore, allocation
is to be made in row of source S3. The route or cell which should be selected
should be the lowest cost of this row, i.e., the route S3D5. Hence, first allocation in
Vogel’s method is as follows.

With the first allocation, destination D5 is used. Leave out this column and
develop the remaining matrix for calculating the penalty cost. We obtain the
following matrix.
Now for this, source S1 has highest penalty cost. For this row, the least cost
route is S1D1. Hence, next assignment is due in this route:

Second Calculation of Penalty Cost in VAM


Second allocation in Vogel’s method is obtained as follows:

After second allocation, since destination D1 is used, leave this column and
proceed for calculation of next penalty cost. Allocation is done in route S1D2.
Since there is tie between all routes, break the tie by arbitrarily selecting any route,
for example S1D2 in this case.
Third Calculation of Penalty Cost in VAM

Third Allocation in VAM

Fourth Calculation of Penalty Cost in VAM

Fourth Allocation in VAM

With the fourth allocation, column D4 is used. In the only left column D3, the
allocations of 100 units and 150 units are done in route S2D3 and S4D3, respectively.
Thus, we obtain the following allocations using the Vogel’s Approximation Method or VAM.
Final Allocation Through Vogel’s Method

The initial cost for this allocation is:


(13 × 100 + 16 × 150 + 16 × 100 + 15 × 100 + 17 × 150 + 0 × 100), which is equal to 9350.
Step 3: Verify for degeneracy, (m + n – 1) = 7.
Number of filled cells = 6, which is one less than (m + n – 1). Hence, go to Step 4 for removing the degeneracy.
Step 4: Now we allocate in the least cost unfilled cell. This cell is route S1D5 or
S2D5. Select route S1D5. We obtain the following matrix after removing degeneracy.
Final Allocation After Removing Degeneracy in Vogel’s Method

Optimization of Initial Assignment
The initial feasible assignment done by using Vogel’s approximation method does not guarantee an optimal solution. Hence, the next step is to check the optimality of the initial solution.
Step 5: Check the optimality of the initial solution. For this, calculate the opportunity
cost of un-occupied routes.
First, we start with any row (or column). Select row 1, i.e., source S1. For this row define the row value, u1 = 0. Now consider all filled routes of this row. For these routes, calculate the column values vj using the following equation:
ui + vj = Cij (For any filled route)
Where ui = Row value
vj = Column value
Cij = Unit cost of the assigned route
Once the first set of column values vj is known, locate the other filled routes in these columns and calculate the next ui or vj values using the above equation. In this way, the ui and vj values are determined for all rows and columns of a non-degenerate initial solution.
Step 6: Check the optimality.
Calculate the opportunity cost of non-allocated or unfilled routes. For this, use the following equation:
Opportunity Cost of an Unassigned Route = ui + vj – Cij
Where, ui = Row value
vj = Column value
Cij = Unit cost of unassigned route
If the opportunity cost is negative for all unassigned routes, the initial solution is
optimal. If any of the opportunity costs is positive, then go to the next step.

Step 7: Make a loop of horizontal and vertical lines which joins some filled routes with the unfilled route which has a positive opportunity cost. Note that all the corner points of the loop are either filled cells or positive-opportunity-cost unassigned cells.
For this, we start with row, S1 and take u1 = 0. Now S1D1, S1D2, and S1D5 are
filled cells. Hence, for filled cells (vj = Cij – ui).
v1 = 13 – 0 = 13
v2 = 16 – 0 = 16
v5 = 0 – 0 = 0
The optimality of Vogel’s method’s initial solution is as follows.
Calculation of ui and vj for Vogel Approximation Method’s Initial Solutions

Opportunity cost of above assignment using VAM is as follows:

Unassigned Route Opportunity Cost (ui + vj – Cij)


S1D3 0 + 17 – 19 = –2

S1D4 0 + 16 – 17 = –1

S2D1 –1 + 13 – 17 = –5

S2D2 –1 + 16 – 19 = –4

S2D5 –1 + 0 – 0 = –1

S3D1 0 + 13 – 15 = –2

S3D2 0 + 16 – 17 = –1

S3D4 0 + 16 – 16 = 0

Since all opportunity costs are negative or zero, the initial assignment of Vogel’s solution is optimal with a total cost of 9350.
Example 7.9: Distances between factory and its warehouses and demand at
each warehouse are given in the following table. Calculate the values of penalty to
all the rows and columns for the reduced transportation problem and repeat the
same procedure till the entire requirement has been met. Solve this problem using
Vogel’s Approximation Method or VAM.
Table Transportation Table

Factory/Warehouse W1 W2 W3 Supply
F1 16 22 14 200
F2 18 14 18 150
F3 8 14 16 100
Demand 175 125 150

Solution: The solution is obtained as follows.


Step 1: Compute the penalty for each row and column of the transportation
problems. The penalty for the first row is, (16 - 14) = 2. Similarly the values of
penalty for the second and the third row are 4 and 6.

W1 W2 W3 Supply Penalty

F1 16 22 14 200 2

F2 18 14 18 150 4

F3 8 14 16 100 6

Demand 175 125 150

8 0 2

Similarly, the values of penalty for the first, second and the third columns are 8, 0
and 2, respectively.
Step 2: Identify the row or column with the largest penalty value. In this case, the
first column with a penalty value is 8.
Step 3: The cell with the least cost is chosen and the possible number of goods is
assigned to that cell. Therefore, assign 100 to the cell (F3, W1).
Step 4: If the remaining row supply or column demand is zero, remove that row/
column.
Now, the transportation problem can be reduced as illustrated in the following
table:

W1 W2 W3 Supply Penalty
F1 16 22 14 200 2
F2 18 14 18 150 4
Demand 75 125 150
Penalty 2 8 4
Step 5: The process is repeated for the reduced transportation problem till the
entire supply at the factories is assigned to satisfy the demand at different
warehouses.
Now, the W2 column has the highest penalty, i.e., 8. Therefore, assign 125 units to
the cell (F2, W2) since the cell has the least cost in the W2 column.
Then the transportation problem can further be reduced as illustrated in the following
table:
W1 W3 Supply Penalty
F1 16 14 200 2
F2 18 18 25 0
Demand 75 150
Penalty 2 4

Now, the W3 column has the highest penalty, i.e., 4. Next assign 150 units
to the cell (F1, W3) since the cell has the least cost. Then remove the W3 column
and the remaining units are assigned to the cells (F1, W1) and (F2, W1). Thus, 50
units are assigned to the cell (F1, W1) and 25 units to the cell (F2, W1).
Since the number of cells occupied is 5, i.e., (3 + 3 – 1), the solution obtained is a feasible solution. Thus, the cost obtained using VAM is:
(50 × 16) + (25 × 18) + (100 × 8) + (125 × 14) + (150 × 14) = Rs 5,900.
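The whole of Example 7.9 can be reproduced with a compact VAM routine. The Python sketch below is an illustration (not the book’s statement of the method); when only one cell remains in a row or column it uses that cost itself as the penalty, and other ties are broken arbitrarily, as Step 2 allows. On this data it returns the same allocation and the same total cost of Rs 5,900:

def vogel(cost, supply, demand):
    supply, demand = supply[:], demand[:]
    rows, cols = set(range(len(supply))), set(range(len(demand)))
    allocation = {}

    def penalty(values):
        ordered = sorted(values)
        return ordered[1] - ordered[0] if len(ordered) > 1 else ordered[0]

    while rows and cols:
        # Steps 1-2: penalties of every remaining row and column; pick the largest.
        cand = [(penalty([cost[i][j] for j in cols]), 'row', i) for i in rows]
        cand += [(penalty([cost[i][j] for i in rows]), 'col', j) for j in cols]
        _, kind, k = max(cand)
        # Step 3: least-cost cell of the selected row or column.
        if kind == 'row':
            i, j = k, min(cols, key=lambda c: cost[k][c])
        else:
            i, j = min(rows, key=lambda r: cost[r][k]), k
        q = min(supply[i], demand[j])
        allocation[(i, j)] = q
        supply[i] -= q
        demand[j] -= q
        # Step 4: cross off the exhausted row or column (only one of them on a tie).
        if supply[i] == 0 and len(rows) > 1:
            rows.discard(i)
        else:
            cols.discard(j)
    return allocation

cost = [[16, 22, 14], [18, 14, 18], [8, 14, 16]]
alloc = vogel(cost, [200, 150, 100], [175, 125, 150])
print(alloc)
print(sum(cost[i][j] * q for (i, j), q in alloc.items()))   # 5900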

Check Your Progress


1. Explain the linear programming formulation of transportation problem.
2. Discuss the loops in the transportation problem.
3. Illustrate the finding of an initial base in the transportation problem.
4. Define the transhipment problem.
5. Analyse the feasible solution (NWCM - LCM - VAM).
6. Define the first two steps of least cost method.
7. Explain the first three steps of Vogel’s approximation method.

7.6 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. There are numerous categories of Linear Programming (LP) models that
exhibit an exceptional and unique structure that helps in the formulation of
efficient algorithms for finding their solutions. These structures helped to
solve larger problems which otherwise would not have been possible to
solve using the existing technology. Traditionally, the first of these special structures to be analysed was the Transportation Problem (TP), which is considered a particular type of network problem. Principally, it provided an efficient solution and was among the earliest widespread applications of Linear Programming Problems (LPP), typically used in industrial logistics.
2. In the transportation problems, the term ‘Loop’ or ‘Path’ is defined as an
ordered sequence of at least four different cells that satisfy all the following
three conditions:
(i) Any two consecutive cells lie in either the same row or same column.
(ii) No three or more consecutive cells lie in the same row or column.
(iii) The last cell is in the same row or column as the first cell.
3. We can determine an initial basic feasible solution in the transportation
problems using any one of the following three methods:
(i) North West Corner Rule (NWCR)
(ii) Least Cost Method (LCM) or Matrix Minimum Method
(iii) Vogel Approximation Method (VAM)
4. In a transportation problem, shipments are allowed only between source-sink pairs. There may, however, exist intermediate points via which units of goods/merchandise can be transhipped from a source to a sink. Transhipment models additionally allow shipments between sources, between sinks, and between any source and sink.
5. Optimal solution is a feasible solution (not necessarily basic) which minimizes
the total cost.
The solution of a Transportation Problem (TP) can be obtained in two
stages, namely initial solution and optimum solution.
Initial solution can be obtained by using any one of the three methods, viz.
(i) North West Corner Rule (NWCR)
(ii) Least Cost Method (LCM) or Matrix Minima Method
(iii) Vogel’s Approximation Method (VAM)
6. Step 1: Determine the smallest cost in the cost matrix of the transportation
table. Let it be Cij. Allocate Xij = Min (ai, bj) in the cell (i, j).
Step 2: If Xij = ai cross off the ith row of the transportation table and decrease bj by ai. Then go to Step 3.
If Xij = bj cross off the jth column of the transportation table and decrease
ai by bj. Go to Step 3.
If Xij = ai = bj cross off either the ith row or the jth column but not both.
7. The steps involved in this method for finding the initial solution are as follows.
Step 1: Find the penalty cost, namely the difference between the smallest
and next smallest costs in each row and column.
Step 2: Among the penalties as found in Step (1) choose the maximum
penalty. If this maximum penalty is more than one (i.e., if there is a tie)
choose any one arbitrarily.
Step 3: In the selected row or column as by Step (2) find out the cell having
the least cost. Allocate to this cell as much as possible depending on the
capacity and requirements.

7.7 SUMMARY

 Linear Programming (LP, also called linear optimization) is a method to


achieve the best outcome (such as maximum profit or lowest cost) in
a mathematical model whose requirements are represented by linear
relationships.
 There are numerous categories of Linear Programming (LP) models that
exhibit an exceptional and unique structure that helps in the formulation of
efficient algorithms for finding their solutions.
The network model of transportation problems typically describes the transportation of a product manufactured at different plants or factories (supply origins) to a number of warehouses or markets (demand destinations).
 The smallest value with a negative position on the closed path or loop specifies
the number of units that can be shipped to the entering cell.
 In a transportation problem, where shipments are allowed only between
source-sink pairs, there is a possibility of existing points via which units of a
goods/merchandise may be transhipped from a source to a sink.
 In a transhipment model, the objects are supplied from various specific
sources to various specific destinations. It is also economic if the shipment
passes via the transient nodes which are in between the sources and the
destinations.
 Optimal solution is a feasible solution (not necessarily basic) which minimizes
the total cost. The solution of a Transportation Problem (TP) can be obtained
in two stages, namely initial solution and optimum solution.
Starting with the cell at the upper left corner (North West) of the transportation matrix we allocate as much as possible so that either the capacity of the first row is exhausted or the destination requirement of the first column is satisfied, i.e., X11 = Min (a1, b1).
 Vogel Approximation Method (VAM) is used to find the feasible solution
for transportation of goods where the solution is either optimal or near to
the optimal solution. Typically, this method is used to reduce the
transportation costs by tabulating, in a mathematical table, the transportation costs from one place to another.
 The initial feasible assignment done by using Vogel’s approximation method
does not guarantee optimal solution.

7.8 KEY WORDS

 Nodes and arcs: The different sources, destinations, and the intermediate
points are jointly termed as the ‘Nodes’ of the network, and the various
transportation links which connect these nodes are termed as ‘Arcs’.
 Loops or path: In the transportation problems, the term ‘Loop’ or ‘Path’
is defined as an ordered sequence of at least four different cells that satisfy
some concerned conditions.
Optimal solution: A feasible solution (not necessarily basic) which minimizes the total cost.
 VAM: Vogel’s Approximation Method (VAM) is used to find the feasible
solution for transportation of goods where the solution is either optimal or
near to the optimal solution.

7.9 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short-Answer Questions
1. Explain the linear programming formulation of transportation problem.
2. Discuss the loops in transportation problem.
3. Analyse the finding of an initial base in the transportation problem.
4. Elaborate on the existence of solution of transportation problem.
5. Define the transhipment problem.
6. State the North West corner rule.
7. Explain the Vogel’s approximation method.

Long-Answer Questions
1. Discuss the transportation model, giving appropriate examples.
2. Define feasible solution, basic solution, non-degenerate solution, optimal
solution in a transportation problem.
3. Explain the following terms briefly with examples:
(i) North West Corner Rule
(ii) Least Cost Method
(iii) Vogel’s Approximation Method
4. Give the mathematical formulation of a transportation problem.
5. Write an algorithm to solve a transportation problem.
6. Obtain the initial solution for the following transportation problem using
(i) North West Corner Rule (ii) Least Cost Method (iii) VAM
Destination
A B C Supply
1 2 7 4 5
Source

2 3 3 1 8
3 5 4 7 7
4 1 6 2 14
Demand 7 9 18 34
[Ans.
(i) X11 = 5, X21 = 2, X22 = 6, X32 = 3, X33 = 4, X43 = 14 and the transportation cost 102.
(ii) X12 = 2, X13 = 3, X23 = 8, X32 = 7, X41 = 7, X43 = 7 and transportation cost 83.
(iii) X11 = 5, X23 = 8, X32 = 7, X41 = 2, X42 = 2, X43 = 10 and transportation cost 80.]
7. Solve the following transportation problem where the cell entries denote
the unit transportation costs.
Destination
A B C D Supply
P 5 4 2 6 20
Origin Q 8 3 5 7 30
R 5 9 4 6 50
Demand 10 40 20 30 100

[Ans. X12 = 10, X13 = 10, X22 = 30, X31 = 10, X33 = 10, X34 = 30. The optimum
transportation cost is 420.]
8. Solve the following transportation problem.
Destination
1 2 3 Capacity
1 2 2 3 10
Source 2 4 1 2 15
3 1 3 1 40
Demand 20 15 30

[Ans. X12 = 10, X23 = 15, X31 = 20, X33 = 15, X32 = 5. The transportation cost is 100.]
9. Find the minimum transportation cost.
Warehouse
D1 D2 D3 D4 Supply
F1 19 30 50 10 7
Factory F2 70 30 40 60 9
F3 40 8 70 30 18
Demand 5 8 7 14

[Ans. X11 = 5, X14 = 2, X22 = 2, X23 = 7, X32 = 6, X34 = 12


and the minimum transportation cost = 743.]
10. Solve the following transportation problem using Vogel’s Approximation
Method.
Warehouse
A B C D E F Available
1 9 12 9 6 9 10 5
2 7 3 7 7 5 5 6
Factory
3 6 5 9 11 3 11 2
4 6 8 11 2 2 10 9
Requirement 4 4 6 2 4 2

[Ans. X13 = 5, X22 = 4, X26 = 2, X31 = 1, X32 = ε, X33 = 1, X41 = 3, X44 = 2, X45 = 4
and the min. transportation cost is 112.]
11. Solve the following transportation problem.
Destination
A B C D Supply
1 1 2 3 4 6
Source 2 4 3 2 0 8
3 0 2 2 1 10
Demand 4 6 8 6

[Ans. X12 = 6, X23 = 2, X24 = 6, X31 = 4, X32 = ε, X33 = 6. The min.


transportation cost is 28.]
12. Solve the following transportation problem.
Destination
A B C D Supply
1 11 20 7 8 50
Source 2 21 16 20 12 40
3 8 12 8 9 70
Demand 30 25 35 40

[Ans. X13 = 35, X14 = 15, X24 = 10, X25 = 30, X31 = 30, X32 = 25, X34 = 15
min. transportation cost = 1,160.]

13. Solve the following transportation problem to maximize the profit.
Destination

A B C D Supply
1 40 25 22 33 100
Source 2 44 35 30 30 30
3 38 38 28 30 70
Demand 40 20 60 30

[Ans. X11 = 20, X14 = 30, X15 = 50, X21 = 20, X23 = 10, X32 = 20, X33 = 50
and the optimum profit is 5,130.]

7.10 FURTHER READINGS

Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New


Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.
Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.
Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata
McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi:
Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.

UNIT 8 TRANSPORTATION PROBLEMS
Structure
8.0 Introduction
8.1 Objectives
8.2 Degeneracy in Transportation Problems
8.3 Transportation Algorithm (MODI Method)
8.4 Unbalanced and Maximization of Transportation Problems
8.5 Answers to Check Your Progress Questions
8.6 Summary
8.7 Key Words
8.8 Self Assessment Questions and Exercises
8.9 Further Readings

8.0 INTRODUCTION

Transportation Problems (TP) deals with study of optimal transportation and


allocation of resources. The problem was formalized by the
French mathematician Gaspard Monge in 1781. In the 1920s A.N. Tolstoi was
one of the first to study the transportation problem mathematically. In 1930, in the
collection Transportation Planning Volume I for the National Commissariat of
Transportation of the Soviet Union, he published a paper “Methods of Finding the
Minimal Kilometrage in Cargo-transportation in space”. Major advances were
made in the field during World War II by the Soviet mathematician and
economist Leonid Kantorovich. Consequently, the problem as it is stated is
sometimes known as the Monge–Kantorovich transportation problem. The linear
programming formulation of the transportation problem is also known as
the Hitchcock–Koopmans transportation problem.
The transportation problem is one of the subclasses of LPP (Linear
Programming Problem) in which the objective is to transport various quantities of
a single homogeneous commodity that are initially stored at various origins to
different destinations in such a way that the transportation cost is minimum. To
achieve this objective we must know the amount and location of available supplies
and the quantities demanded. In addition, we must know the costs that result from
transporting one unit of commodity from various origins to various destinations.
Any set of non-negative allocations (Xij>0) which satisfies the row and
column sum (rim requirement) is called a feasible solution. A feasible solution is
called a basic feasible solution if the number of non-negative allocations is equal to
m + n – 1, where m is the number of rows and n the number of columns in a
transportation table. If a basic feasible solution contains less than m + n – 1 non-
negative allocations it is said to be degenerate.
Self-Instructional
194 Material
An optimality test can be conducted on any initial basic feasible solution of a TP
provided the solution has exactly m + n – 1 non-negative allocations, where
m is the number of origins and n is the number of destinations. Also, these allocations
must be in independent positions. To perform this optimality test, we shall discuss
the Modified Distribution (MODI) method in this unit.
An unbalanced transportation problem is one in which the total supply and the total
demand are unequal. A maximization problem is solved by converting it into an
equivalent minimization problem, using regret values in place of the given profit values.
There are two ways of doing this: in the first method we put a negative sign before
every value in the given matrix and then solve the problem as a minimization case;
in the second method we find the largest value in the given matrix and subtract each
element of the matrix from this value, and the problem is then solved as a minimization
case using this new modified matrix.
In this unit, you will study about the degeneracy in transportation problems, the
transportation algorithm (MODI method), unbalanced transportation problems,
and maximization of transportation problems.

8.1 OBJECTIVES

After going through this unit, you will be able to:


 Understand the degeneracy in TP
 Analyse transportation algorithm (MODI Method)
 Explain the unbalanced transportation problem
 Illustrate the maximization transportation problem

8.2 DEGENERACY IN TRANSPORTATION


PROBLEMS

The Transportation Problem (TP) is one of the subclasses of LPP (Linear


Programming Problem) in which the objective is to transport various quantities of
a single homogeneous commodity that are initially stored at various origins to different
destinations in such a way that the transportation cost is minimum. To achieve this
objective we must know the amount and location of available supplies and the
quantities demanded. In addition we must know the costs that result from
transporting one unit of commodity from various origins to various destinations.
Elementary Transportation Problem
Consider a transportation problem with m origins (rows) and n destinations
(columns). Let Cij be the cost of transporting one unit of the product from the ith
origin to jth destination, ai the quantity of commodity available at origin i, bj the
quantity of commodity needed at destination j, and Xij the quantity transported from
ith origin to jth destination. This transportation problem can be stated in the
following tabular form.

             D1     D2    ...    Dn     Supply
     O1     X11    X12    ...   X1n       a1
     O2     X21    X22    ...   X2n       a2
     ...
     Om     Xm1    Xm2    ...   Xmn       am
  Demand     b1     b2    ...    bn

The linear programming model representing the transportation problem is


given by,
Minimize Z = Σi Σj Cij Xij

Subject to the constraints,

Σj Xij = ai,  i = 1, 2, ..., m   (Row sum)

Σi Xij = bj,  j = 1, 2, ..., n   (Column sum)

Xij ≥ 0 for all i and j

The given transportation problem is said to be balanced if Σai = Σbj,
i.e., the total supply is equal to the total demand.
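The model above translates directly into a small computation. The following Python sketch, using purely hypothetical data, checks the balance condition and the rim requirements of a candidate allocation and evaluates the objective value Z:

# Illustrative sketch of the transportation model (hypothetical data).
cost   = [[2, 3, 11], [1, 0, 6]]     # C[i][j]: unit costs
supply = [6, 1]                      # a_i
demand = [3, 3, 1]                   # b_j
alloc  = [[3, 2, 1], [0, 1, 0]]      # a candidate X[i][j]

balanced = sum(supply) == sum(demand)          # balanced TP condition
rows_ok  = all(sum(alloc[i]) == supply[i] for i in range(len(supply)))
cols_ok  = all(sum(alloc[i][j] for i in range(len(supply))) == demand[j]
               for j in range(len(demand)))
z = sum(cost[i][j] * alloc[i][j]
        for i in range(len(supply)) for j in range(len(demand)))
print(balanced, rows_ok, cols_ok, z)           # True True True 23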


Definitions
Feasible Solution: Any set of non-negative allocations (Xij>0) which
satisfies the row and column sum is called a feasible solution.
Basic Feasible Solution: A feasible solution is called a basic feasible
solution if the number of non-negative allocations is equal to m + n – 1, where m
is the number of rows and n the number of columns in a transportation table.
Non-Degenerate Basic Feasible Solution: Any feasible solution to a
transportation problem containing m origins and n destinations is said to be non-
degenerate, if it contains m + n – 1 occupied cells and each allocation is in
independent positions.
The allocations are said to be in independent positions if it is impossible to form a
closed path among them. A closed path is formed by horizontal and vertical lines
whose corner cells are all occupied.
The allocations in the following tables are not in independent positions.

The allocations in the following tables are in independent positions.

Degenerate Basic Feasible Solution: If a basic feasible solution contains


less than m + n – 1 non-negative allocations it is said to be degenerate.
Transportation Algorithm
This algorithm can be used for minimizing the cost of transporting goods from O
origins to D destinations, with O × D possible direct routes between them. The
problem is balanced when the total supply at the O origins equals the total demand
at the D destinations; otherwise it is unbalanced. There are two such situations: if
supply is less than demand, the problem is balanced by adding a dummy supply node;
if demand is less than supply, a dummy demand node is added. Thus, before applying
this algorithm, the problem should be made balanced if it is not already balanced.
Data is presented in tabular form. As a convention, origins are put on the left
side of the table with the quantity to be supplied listed towards the right side, and
demands are put on top with the quantity demanded towards the bottom. The unit cost
of transportation is put at the top of every cell within a small box. A zero unit cost
marks the unshipped-supply column when supply is in excess of demand. Similarly,
a unit cost of either a penalty value or zero marks the shortage row when supply is
less than demand.
The algorithm has two phases. In Phase I, supplies are allocated to demands
using a minimum unit cost approach to generate a feasible solution. This feasible
solution may not be optimal. Optimization is done in the second phase: the optimality
conditions are checked and, if they are not satisfied, the allocation is improved to
reduce the cost. This second phase is iterative and stops only when the optimality
conditions are satisfied. Once that happens, no further steps are required.
Basic and Non-Basic Cells
Basic cells are those that carry a positive flow value, and non-basic cells have zero
flow. In a transportation problem the number of basic cells will be
exactly m + n – 1.
Algorithm follows as below:
Step 0
Initialization: Before starting to solve the problem, it should be balanced. If it is not,
make it balanced by adding an 'Unshipped Supply' column in case demand is less than
supply, or by adding a 'Shortage' row in case supply is less than demand. Put zero
for the unit costs in the column for unshipped supply. Put either penalty costs or zero
in a row that shows a shortage.
Phase I: To Find Initial ‘Feasible Solution’
Step 1: Locate the cell with minimum cost having positive residual supply as well as
demand, and allocate to that cell the minimum of the two residuals.
Step 2: Reduce the residual supply and demand by the allocation made above (Step 1).
If all demands are met, proceed to Phase II; otherwise go back to Step 1.
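A minimal Python sketch of Phase I as described above, assuming the problem has already been balanced; the cost matrix, supplies and demands shown are hypothetical:

# Phase I sketch: least-cost (minimum unit cost) initial feasible solution.
# Assumes a balanced problem; the data is hypothetical.
def least_cost_initial(cost, supply, demand):
    supply, demand = supply[:], demand[:]
    alloc = [[0] * len(demand) for _ in supply]
    while any(demand):
        # Step 1: cell of minimum cost with positive residual supply and demand.
        i, j = min(((i, j) for i in range(len(supply)) for j in range(len(demand))
                    if supply[i] > 0 and demand[j] > 0),
                   key=lambda ij: cost[ij[0]][ij[1]])
        q = min(supply[i], demand[j])          # allocate as much as possible
        alloc[i][j] = q
        supply[i] -= q                         # Step 2: reduce the residuals
        demand[j] -= q
    return alloc

print(least_cost_initial([[4, 6, 8], [5, 3, 2]], [10, 15], [8, 9, 8]))
# -> [[8, 2, 0], [0, 7, 8]]   (m + n - 1 = 4 basic cells)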
Phase II: Optimal Solution
Carry out check for Optimality.
Step 3: Find the dual values for the rows and columns, ui for rows and vj for columns.
For this, set u1 (the first dual value) to 0 and then solve the triangular dual equations
one by one. These dual equations apply to the basic cells only:
Cij = ui + vj
Here Cij denotes the unit cost given for that cell, vj the dual value of column j and
ui the dual value of row i. In each equation one of vj or ui is already known, so the
other dual value is computed from
Cij = ui + vj
Thus, by setting the first dual value to zero and taking the dual equations of the basic
cells in an appropriate order, the dual values of every row and column can be computed.
Self-Instructional
198 Material
Step 4: The optimality conditions are expressed as reduced costs of all the non-basic
cells, given by
Δij = Cij – (ui + vj)
The reduced cost Δij of a non-basic cell represents the net change in total cost per
unit moved into cell ij, adjusting the basic cells around the cycle created in this way
(this is shown in Step 5). Hence, if Δij is positive, using this cell would increase the
total transportation cost, while a negative Δij would reduce it. If all reduced costs are
non-negative, i.e., Δij ≥ 0, the optimality conditions are satisfied, no further
improvement is possible and the algorithm terminates at this point. But if at least one
Δij is negative, the optimality conditions are not satisfied and a reduction in cost is
possible; the algorithm then continues as given in Step 5.
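The dual values and reduced costs of Steps 3 and 4 can be computed mechanically once the basic cells are known. The Python sketch below assumes the basic cells form the usual connected (spanning-tree) pattern and reuses the hypothetical data of the previous sketch:

# Sketch of Steps 3 and 4: dual values u_i, v_j from the basic cells, then
# reduced costs d_ij = C_ij - (u_i + v_j) for the non-basic cells.
def duals_and_reduced_costs(cost, basic):
    m, n = len(cost), len(cost[0])
    u, v = [None] * m, [None] * n
    u[0] = 0                                   # set the first dual value to 0
    while None in u or None in v:
        for i, j in basic:                     # solve C_ij = u_i + v_j one by one
            if u[i] is not None and v[j] is None:
                v[j] = cost[i][j] - u[i]
            elif v[j] is not None and u[i] is None:
                u[i] = cost[i][j] - v[j]
    reduced = {(i, j): cost[i][j] - (u[i] + v[j])
               for i in range(m) for j in range(n) if (i, j) not in basic}
    return u, v, reduced

u, v, d = duals_and_reduced_costs([[4, 6, 8], [5, 3, 2]],
                                  basic={(0, 0), (0, 1), (1, 1), (1, 2)})
print(u, v, d)   # every reduced cost is non-negative, so that solution is optimal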
Adjustment for Reducing Cost
Step 5: Select the cell 'ij' having the most negative Δij. This becomes the entering
variable; put a (+) sign on it to identify it. To maintain the balance constraints on
supply and demand, locate basic cells in the ith row and the jth column that
compensate for the increase in the value of cell 'ij', and put a negative (–) sign in
these cells. Continue this process until one cycle is obtained in which (+) and (–)
signs alternate. Such a cycle is unique; if a 'dead end' is encountered in the process,
backtrack and try one of the other alternatives. Apart from the entering cell, every
corner of the cycle must be a basic cell. Finding the cycle may require some trial
and error.
Step 6: After determining every basic cell within this cycle, the adjustment amount,
say 'aa', is the minimum value among the basic cells marked with a negative (–) sign.
Add 'aa' to every cell marked with a (+) sign and subtract it from every cell marked
with a (–) sign; then drop from the basis a cell whose value becomes zero. If two or
more cells become zero after such an adjustment, drop only one of them, the one with
the greatest Cij value. This is necessary to maintain exactly m + n – 1 basic cells for
computing the dual values. The reduction in cost associated with such a change equals
the reduced cost Δij of the incoming cell multiplied by the value previously held by
the outgoing cell.
After finding this new solution, return to Step 3 to check the optimality
conditions. If the optimality conditions are satisfied, the algorithm terminates.
Applications of Transportation Problem
The following is an application of the transportation problem:
The Travelling Salesman Problem
Assume that a salesman has to visit n cities. He wishes to start from a particular
city, visit each city once and then return to his starting point. His objective is to
select the sequence in which the cities are visited in such a way that his total
travelling time is minimized.
To visit 2 cities A and B, there is no choice. To visit 3 cities we have 2!
possible routes. For 4 cities we have 3! possible routes. In general to visit n cities
there are (n – 1)! possible routes.
Mathematical Formulation
Let Cij be the distance, time or cost of going from city i to city j. The decision
variable Xij is 1 if the salesman travels from city i to city j, and 0 otherwise.
The objective is to minimize the travelling time.
Minimize Z = Σi Σj Cij Xij,  i, j = 1, 2, ..., n

Subject to the constraints,

Σj Xij = 1,  i = 1, 2, ..., n

Σi Xij = 1,  j = 1, 2, ..., n

together with the additional constraint that the Xij are so chosen that no city is visited
twice before all the cities are completely visited.
In particular, going directly from i to i is not permitted, which means Cij = ∞
when i = j.
In the travelling salesman problem we cannot choose any element along the
diagonal, and this is ensured by filling the diagonal with infinitely large elements.
The travelling salesman problem is very similar to the assignment problem
except that in the former case, there is an additional restriction that Xij is so chosen
that no city is visited twice before the tour of all the cities is completed.
Treat the problem as an assignment problem and solve it using the same
procedures. If the optimal solution of the assignment problem satisfies the additional
constraint, then it is also an optimal solution of the given travelling salesman problem.
If the solution to the assignment problem does not satisfy the additional restriction
then after solving the problem by assignment technique we use the method of
enumeration.
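For small n, the enumeration mentioned above can be carried out directly. The Python sketch below simply evaluates all (n – 1)! routes of a hypothetical cost matrix; it illustrates the search itself rather than the assignment-based shortcut:

# Brute-force enumeration sketch for the travelling salesman problem
# (practical only for small n, in line with the (n - 1)! count above).
from itertools import permutations

def best_tour(cost, start=0):
    n = len(cost)
    cities = [c for c in range(n) if c != start]
    best = None
    for perm in permutations(cities):          # the (n - 1)! candidate routes
        tour = (start,) + perm + (start,)
        length = sum(cost[tour[k]][tour[k + 1]] for k in range(n))
        if best is None or length < best[0]:
            best = (length, tour)
    return best

INF = float("inf")                             # forbids staying in the same city
cost = [[INF, 10, 15, 20],                     # hypothetical 4-city matrix
        [10, INF, 35, 25],
        [15, 35, INF, 30],
        [20, 25, 30, INF]]
print(best_tour(cost))                         # -> (80, (0, 1, 3, 2, 0))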
Example 8.1: A travelling salesman has to visit 5 cities. He wishes to start from a
particular city, visit each city once and then return to his starting point. Cost of
going from one city to another is shown below. You are required to find the least
cost route.

(Cost matrix: rows = From City, columns = To City)

Solution: First we solve this problem as an assignment problem.


Subtract the minimum element in each row from all the elements in its row.

Subtract the minimum element in each column from all the elements in its
column.

We have the first modified matrix. Draw minimum number of lines to cover
all zeros.

Here, N = 4 < n = 5. Subtract the smallest uncovered element from all the
uncovered elements and add it to the elements at the points of intersection of the
lines. Hence, we get the second modified matrix.

N = 5 = n = 5 = Order of matrix. We make assignment.

Assignment

This assignment requires the salesman to go from A to E and then come back to A
without covering B, C and D, which contradicts the condition that no city is visited
twice before all the cities are visited.
Hence, we obtain the next best solution by bringing in the next minimum non-zero
element, namely 4.

A → B, B → C, C → D, D → E, E → A
Since all the cities have been visited and no city is visited twice before
completing the tour of all the cities, we have an optimal solution to the travelling
salesman problem.
The least cost route is A → B → C → D → E → A.
Total Cost = 4 + 6 + 8 + 10 + 2 = 30.
Example 8.2: A machine operator processes five types of items on his machine
each week and must choose a sequence for them. The set-up cost per change
depends on the items presently on the machine and the set-up to be made according
to the following table.
(Set-up cost matrix: From Item → To Item)

If he processes each type of item once and only once in each week, how
should he sequence the items on his machine in order to minimize the total set-up
cost?
Solution: Reduce the cost matrix and make assignments in rows and columns
having a single zero.

Modify the matrix by subtracting the least element from all the elements in
its row and also in its column.

Here, N = 4 < n = 5, i.e., N < n.


Subtract the smallest uncovered element from all the uncovered elements
and add to the element which is at the point of intersection of lines and get the
reduced second modified matrix.

Here, N= 5 = n = 5 = Order of matrix. We make the assignment.


Assignment
We obtain the next best solution by bringing in the next smallest non-zero element,
namely 1.

A → E, E → C, C → B, B → D, D → A
We get the solution A → E → C → B → D → A.
This schedule provides the required solution, in which each item is processed once
and only once in a week,
i.e., A → E → C → B → D → A.
The total set-up cost comes to 21.

Check Your Progress


1. What do you understand by the transportation problem?
2. Differentiate between a feasible and a basic feasible solution.
3. Explain the non-degenerate basic feasible solution.
4. State the degenerate basic feasible solution.
5. What are basic and non-basic cells?
6. Define the term initialization.
7. Elaborate on the travelling salesman problem.

8.3 TRANSPORTATION ALGORITHM (MODI


METHOD)

Optimality Test
Once the initial basic feasible solution has been computed, the next step in the
problem is to determine whether the solution obtained is optimum or not.
Optimality test can be conducted on any initial basic feasible solution of a
TP provided the solution has exactly m + n – 1 non-negative allocations,
where m is the number of origins and n is the number of destinations. Also, these
allocations must be in independent positions.
To perform this optimality test, we shall discuss the MOdified DIstribution
(MODI) method. The various steps involved in the MODI method for performing
optimality test are as follows:
MODI Method
Step 1: Find the initial basic feasible solution of a TP by using any one of the three
methods.
Step 2: Find out a set of numbers ui and vj for each row and column satisfying
ui + vj = Cij for each occupied cell. To start with, we assign a number ‘0’ to any
row or column having maximum number of allocations. If this maximum number of
allocations is more than one, choose any one arbitrarily.
Step 3: For each empty (unoccupied) cell, we find the sum ui + vj and write it in the
bottom left corner of that cell.
Step 4: Find for each empty cell the net evaluation value Δij = Cij – (ui + vj),
which is written at the bottom right corner of that cell. This step gives the optimality
conclusion:
(i) If all Δij > 0 (i.e., all the net evaluation values are positive), the solution is
optimum and a unique solution exists.

(ii) If all Δij ≥ 0 with at least one Δij = 0, then the solution is optimum, but an
alternate solution exists.
(iii) If at least one Δij < 0, the solution is not optimum. In this case, we go
to the next step to improve the total transportation cost.
Step 5: Select the empty cell having the most negative value of Δij. From this cell
we draw a closed path by drawing horizontal and vertical lines whose corner cells
are occupied. Assign '+' and '–' signs alternately and find the minimum allocation
among the cells having the negative sign. This allocation should be added to the
allocations having the positive sign and subtracted from the allocations having the
negative sign.
Step 6: The previous step yields a better solution by making one (or more) occupied
cell as empty and one empty cell as occupied. For this new set of basic feasible
allocations repeat from the Step (2) till an optimum basic feasible solution is
obtained.
Example 8.3: Solve the following transportation problem.

Solution: We first find the initial basic feasible solution by using VAM. Since
Σai = Σbj, the given TP is a balanced one. Therefore, there exists a feasible solution.
Finally, we have the initial basic feasible solution as given in the following
table:

From this table, we see that the number of non-negative independent
allocations is 6 = m + n – 1 = 3 + 4 – 1.
Hence, the solution is non-degenerate basic feasible.
The Initial Transportation Cost
= 11 × 13 + 3 × 14 + 4 × 23 + 6 × 17 + 17 × 10 + 18 × 9 = 711.
To Find the Optimal Solution
We apply the MODI method in order to determine the optimum solution. We
determine a set of numbers ui and vj for each row and column, with ui+vj = Cij for
each occupied cell. To start with, we give u2 = 0 as the second row has the maximum
number of allocations.
Now, we find the sum ui + vj for each empty cell and enter it at the bottom
left corner of that cell.
C21 = u2 + v1 = 17 ⇒ 0 + v1 = 17 ⇒ v1 = 17
C23 = u2 + v3 = 14 ⇒ 0 + v3 = 14 ⇒ v3 = 14
C24 = u2 + v4 = 23 ⇒ 0 + v4 = 23 ⇒ v4 = 23
C14 = u1 + v4 = 13 ⇒ u1 + 23 = 13 ⇒ u1 = –10
C33 = u3 + v3 = 18 ⇒ u3 + 14 = 18 ⇒ u3 = 4
Next, we find the net evaluations Δij = Cij – (ui + vj) for each unoccupied
cell and enter them at the bottom right corner of that cell.
Initial Table

Since all Δij > 0, the solution is optimal and unique. The optimum solution is

given by,
X14 = 11; X21 = 6; X23 = 3; X24 = 4; X32 = 10; X33 = 9
The Minimum Transportation Cost
= 11 × 13 + 17 × 6 + 3 × 14 + 4 × 23 + 10 × 17 + 9 × 18 = 711.
Degeneracy in Transportation Problem
In a TP, if the number of non-negative independent allocations is less than m + n – 1,
where m is the number of origins (rows) and n is the number of destinations
(columns) there exists a degeneracy. This may occur either at the initial stage or at
the subsequent iteration.
To resolve this degeneracy, we adopt the following steps.
Step 1: Among the empty cells, we choose an empty cell having the least cost
which is in an independent position. If there is more than one such cell, choose any
one arbitrarily.
Step 2: To the cell chosen in Step (1) we allocate a small positive quantity ε > 0.
The cells containing 'ε' are treated like other occupied cells, and degeneracy is
removed by adding one (or more) such ε-cells accordingly. For this modified solution,
we adopt the steps involved in the MODI method till an optimum solution is obtained.
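A minimal Python sketch of this ε-allocation, assuming for simplicity that the least-cost empty cell may be chosen directly (the check that the cell is in an independent position is omitted):

# Sketch of resolving degeneracy: while the number of occupied cells is less
# than m + n - 1, allocate a tiny quantity eps to a least-cost empty cell.
# The independence check of Step 1 is omitted here for brevity.
def resolve_degeneracy(cost, alloc, eps=1e-9):
    m, n = len(cost), len(cost[0])
    occupied = {(i, j) for i in range(m) for j in range(n) if alloc[i][j] > 0}
    while len(occupied) < m + n - 1:
        i, j = min(((i, j) for i in range(m) for j in range(n)
                    if (i, j) not in occupied),
                   key=lambda ij: cost[ij[0]][ij[1]])
        alloc[i][j] = eps                      # treat eps like a real allocation
        occupied.add((i, j))
    return alloc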
Example 8.4: Solve the transportation problem for minimization.

Solution: Since Σai = Σbj, the problem is a balanced TP. Hence, there exists
a feasible solution. We find the initial solution by the North West Corner rule as
given here.

Since the number of occupied cells = 5 = m + n – 1 and all the allocations
are independent we get an initial basic feasible solution.
The Initial Transportation Cost
= 10 × 2 + 4 × 10 + 5 × 1 + 10 × 3 + 1 × 30 = 20 + 40 + 5 + 30 + 30 = 125.
To Find the Optimal Solution (MODI Method)
We use the previous table to apply the MODI method. We find a set of
numbers ui and vj for which ui + vj = Cij for the occupied cells only. To start with,
as the maximum number of allocations is 2 in more than one row and column, we
arbitrarily choose column 1 and assign the number 0 to this column, i.e., v1 = 0.
The remaining numbers can be obtained as follows.
C11 = u1 + v1 = 2 ⇒ u1 + 0 = 2 ⇒ u1 = 2
C21 = u2 + v1 = 4 ⇒ u2 = 4 – 0 = 4
C22 = u2 + v2 = 1 ⇒ v2 = 1 – u2 = 1 – 4 = –3
C32 = u3 + v2 = 3 ⇒ u3 = 3 – v2 = 3 – (–3) = 6
C33 = u3 + v3 = 1 ⇒ v3 = 1 – u3 = 1 – 6 = –5
Initial Table


We find the sum ui + vj for each empty cell and write it at the bottom left
corner of that cell. Find the net evaluations Δij = Cij – (ui + vj) for each empty cell
and enter them at the bottom right corner of the cell.
The solution is not optimum as the cell (3, 1) has a negative Δij value. We
improve the allocation by making this cell, namely (3, 1), an allocated cell. We
draw a closed path from this cell and assign '+' and '–' signs alternately. From the
cells having the '–' sign we find the minimum allocation, given by Min (10, 10) = 10.
Hence, two occupied cells, (2, 1) and (3, 2), become empty and the cell (3, 1)
becomes occupied, resulting in a degenerate solution.
Number of allocated cells = 4 < m + n – 1 = 5.

We get degeneracy. To resolve it we add the empty cell (1, 2) and allocate
ε > 0 to it. This cell, namely (1, 2), is added as it satisfies the two steps for resolving
the degeneracy. We assign the number 0 to the first row, namely u1 = 0, and we get
the remaining numbers as follows.
C11 = u1 + v1 = 2 ⇒ v1 = 2
C12 = u1 + v2 = 2 ⇒ v2 = 2
C31 = u3 + v1 = 1 ⇒ u3 = –1
C33 = u3 + v3 = 1 ⇒ v3 = 2
C22 = u2 + v2 = 1 ⇒ u2 = –1
Next, we find the sum ui + vj for each empty cell and enter it at the bottom
left corner of that cell, and also the net evaluation Δij = Cij – (ui + vj) for each empty
cell, entered at the bottom right corner of the cell.
Iteration Table

The modified solution is given in the following table. This solution is also
optimal and unique as it satisfies the optimality condition that all Δij > 0.

X11 = 10; X22 = 15; X33 = 30; X12 = ε; X31 = 10


Total Cost = 10 × 2 + 2 × ε + 15 × 1 + 10 × 1 + 30 × 1
= 75 + 2ε = 75 (as ε → 0).
8.4 UNBALANCED AND MAXIMIZATION OF
TRANSPORTATION PROBLEM

Unbalanced Transportation


The given TP is said to be unbalanced if Σai ≠ Σbj, i.e., if the total supply is not
equal to the total demand.
There are following two possible cases:
Case (i): Σai < Σbj

If the total supply is less than the total demand, a dummy source (row) is
included in the cost matrix with zero cost; the excess demand is entered as a rim
requirement for this dummy source (origin). Hence, the unbalanced transportation
problem can be converted into a balanced TP.
Case (ii): Σai > Σbj

Therefore, the total supply is greater than the total demand. In this case, the
unbalanced TP can be converted into a balanced TP by adding a dummy destination
(column) with zero cost. The excess supply is entered as a rim requirement for the
dummy destination.
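Both cases reduce to appending one zero-cost dummy row or column. A small Python sketch of this balancing step, with hypothetical data:

# Sketch of balancing an unbalanced problem with a dummy row or column of
# zero costs, covering Case (i) and Case (ii).
def balance(cost, supply, demand):
    cost = [row[:] for row in cost]
    supply, demand = supply[:], demand[:]
    gap = sum(supply) - sum(demand)
    if gap > 0:                        # supply exceeds demand: dummy destination
        for row in cost:
            row.append(0)
        demand.append(gap)
    elif gap < 0:                      # demand exceeds supply: dummy origin
        cost.append([0] * len(demand))
        supply.append(-gap)
    return cost, supply, demand

print(balance([[2, 7, 4], [3, 3, 1]], [5, 8], [7, 9, 18]))
# -> ([[2, 7, 4], [3, 3, 1], [0, 0, 0]], [5, 8, 21], [7, 9, 18])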
Example 8.5: Solve the transportation problem when the unit transportation costs,
demands and supplies are as given below:

Solution: Since the total demand Σbj = 215 is greater than the total supply
Σai = 195, the problem is an unbalanced TP.
We convert this into a balanced TP by introducing a dummy origin O4 with
cost zero and giving it a supply equal to 215 – 195 = 20 units. Hence, we have the
converted problem as follows:

As this problem is balanced, there exists a feasible solution to this problem.

Using VAM we get the initial solution.


The initial solution to the problem is given by,

There are 7 independent non-negative allocations equal to m + n – 1. Hence,


the solution is a non-degenerate one.
The Total Transportation Cost,
= 6 × 65 + 5 × 1 + 5 × 30 + 2 × 25 + 4 × 25 + 7 × 45 + 20 × 0
= 1010.
To Find the Optimal Solution
We apply the steps in the MODI method to the previous table.


Since not all Δij ≥ 0, the solution is not optimum. To improve it we introduce the
cell (3, 1), as this cell has the most negative value of Δij. We modify the solution by
adding and subtracting the minimum allocation given by Min (65, 30, 25) = 25. While
doing this, the occupied cell (3, 3) becomes empty.
First Iteration Table

As the number of independent allocations is equal to m + n – 1, we check
the optimality.
Since all Δij ≥ 0, the solution is optimal, and an alternate solution exists as
Δ14 = 0. Therefore, the optimum allocation is given by,
X11 = 40; X12 = 30; X22 = 5; X23 = 50; X31 = 25; X34 = 45; X41 = 20
The Optimum Transportation Cost
= 6 × 40 + 1 × 30 + 5 × 5 + 2 × 50 + 10 × 25 + 7 × 45 + 0 × 20
= 960.
Example 8.6: A product is produced by 4 factories F1, F2, F3 and F4. Their unit
production cost are 2, 3, 1 and 5 respectively. Production capacity of the factories
are 50, 70, 40 and 50 units respectively. The product is supplied to 4 stores S1,
S2, S3 and S4 and the requirements of which are 25, 35, 105 and 20, respectively.
Find the transportation plan such that the total production and transportation
cost is minimum. Unit costs of transportation are given as follow:


Solution: We form the transportation table which consists of both production


and transportation costs.

Total capacity = 200 units


Total demand = 185 units
Therefore, Σai > Σbj. Hence, the problem is unbalanced. We convert it
into a balanced one by adding a dummy store S5 with cost 0, and the excess supply
is given as the rim requirement for this store, namely (200 – 185) = 15 units.

The initial basic feasible solution is obtained by the least cost method. We get
a solution containing 8 non-negative independent allocations, equal to m + n – 1.
So the solution is a non-degenerate solution.
The Total Transportation Cost,
= 4 × 25 + 6 × 5 + 8 × 20 + 10 × 50 + 8 × 20 + 4 × 30 + 13 × 35 +
0 × 15
= 1525.

To Find the Optimal Solution
We apply MODI method to the previous table as it has m + n – 1 independent
non-negative allocations.
Initial Table

The solution is not optimum as the cell (4, 4) has a negative net evaluation
value, i.e., Δ44 = –3. We draw a closed path from this cell and obtain a modified
allocation by adding and subtracting the allocation Min (35, 20) = 20. This modified
allocation is given in the table.
First Iteration Table

Since all the values of Δij ≥ 0, the solution is optimum, but an alternate solution
exists.
The optimum solution or the transportation plan is given by,
X11 = 25 units; X32 = 30 units
X12 = 5 units; X43 = 15 units

X13 = 20 units; X44 = 20 units

X23 = 70 units; X45 = 15 units


The allocation X45 = 15 units is the surplus capacity of factory F4 that is not
transported. The optimum production with transportation cost is as follows:
= 4 × 25 + 6 × 5 + 8 × 20 + 10 × 70 + 4 × 30 + 8 × 20 + 13 × 15 + 0
× 15
= 1465.
Maximization of Transportation Problems
In a maximization problem, the objective is to maximize the total profit for which
the profit matrix is given. For this, we first convert the maximization problem into
a minimization problem by subtracting all the elements from the highest element in
the given transportation table. This modified minimization problem can be solved
in the usual manner.
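The conversion itself is a one-line operation. The Python sketch below, with hypothetical profit values, subtracts every entry from the highest entry to obtain the equivalent loss matrix:

# Sketch of converting a profit (maximization) table into an equivalent
# loss (minimization) table by subtracting every entry from the largest entry.
def profit_to_loss(profit):
    highest = max(max(row) for row in profit)
    return [[highest - p for p in row] for row in profit]

print(profit_to_loss([[6, 6, 4], [4, 2, 7], [3, 8, 5]]))
# -> [[2, 2, 4], [4, 6, 1], [5, 0, 3]]
# A minimizing allocation on the loss table maximizes profit on the original.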
Example 8.7: There are three factories A, B and C which supply goods to four
dealers D1, D2, D3 and D4. The production capacities of these factories are
1000, 700 and 900 units per month, respectively. The requirements from the
dealers are 900, 800, 500 and 400 units per month, respectively. The per unit
returns (excluding transportation cost) are 8, 7 and 9 at the three factories, respectively.
The following table gives the unit transportation costs from the factories to the
dealers.

Determine the optimum solution to maximize the total returns.


Solution: Use the relation Profit = Return – Transportation Cost. With this we form
a transportation table of profits.

Profit Matrix

This profit matrix is converted into its equivalent loss matrix by subtracting all
the elements from the highest element 8. Hence, we have the following loss matrix.


We use VAM to get the initial basic feasible solution.

The initial solution is given in the following table:

Since the number of allocated cells = 5 < m + n – 1 = 6, the solution is
degenerate. To resolve this degeneracy we add the empty cell (1, 3) by allocating
a small non-negative quantity ε.
This cell is the least cost cell and is of independent position. The initial basic
feasible solution is given as follows:

Initial Table

Number of allocations = 6 = m + n – 1 and the 6 allocations are in independent
positions. Hence, we can perform the optimality test using the MODI method.
(Loss-matrix MODI table: rows A, B, C; columns D1–D4; occupied cells
A–D1 = 200, A–D2 = 800, A–D3 = ε, B–D1 = 700, C–D3 = 500, C–D4 = 400;
u = (0, 2, –1), v = (2, 2, 2, 1).)

Since all the net evaluations Δij are non-negative, the initial solution is optimum.
The optimum distribution is,
A → D1 = 200 units
A → D2 = 800 units
A → D3 = ε units
B → D1 = 700 units
C → D3 = 500 units
C → D4 = 400 units
Total profit or the maximum return,
= 200 × 6 + 800 × 6 + 700 × 4 + 500 × 7 + 400 × 8
= 15,500.

Check Your Progress


8. How can we conduct an optimality test for a solution?
9. State the first four steps of MODI method.
10. Explain the unbalanced transportation.
11. How can an unbalanced TP be converted into a balanced TP?
12. Describe the maximization in transportation problem.
8.5 ANSWERS TO CHECK YOUR PROGRESS
QUESTIONS

1. The Transportation Problem (TP) is one of the subclasses of LPP (Linear
Programming Problem) in which the objective is to transport various quantities
of a single homogeneous commodity that are initially stored at various origins
to different destinations in such a way that the transportation cost is minimum.
To achieve this objective we must know the amount and location of available
supplies and the quantities demanded. In addition we must know the costs
that result from transporting one unit of commodity from various origins to
various destinations.
2. Feasible Solution: Any set of non-negative allocations (Xij>0) which satisfies
the row and column sum is called a feasible solution.
Basic Feasible Solution: A feasible solution is called a basic feasible solution
if the number of non-negative allocations is equal to m + n – 1, where m is
the number of rows and n the number of columns in a transportation table.
3. Non-Degenerate Basic Feasible Solution: Any feasible solution to a
transportation problem containing m origins and n destinations is said to be
non-degenerate, if it contains m + n – 1 occupied cells and each allocation
is in independent positions.
4. Degenerate Basic Feasible Solution: If a basic feasible solution contains
less than m + n – 1 non-negative allocations it is said to be degenerate.
5. Basic cells are those that indicate positive values and non-basic cells have
zero value for flow. According to transportation problem number of basic
cells will be exactly m + n – 1.
6. Initialization: Before starting to solve the problem, it should be balanced. If
not then make it balanced by ‘Unshipped Supply’ column in case demand
is less than supply, or by adding a 'Shortage' row in case supply is less than
the demand. Put zero for the unit costs in the column for unshipped supply. Put
either penalty costs or zero in a row that shows shortage.
7. Assume that a salesman has to visit n cities. He wishes to start from a
particular city, visits each city once and then returns to his starting point. His
objective is to select the sequence in which the cities are visited in such a
way that his total travelling time is minimized.
To visit 2 cities A and B, there is no choice. To visit 3 cities we have 2!
possible routes. For 4 cities we have 3! possible routes. In general to visit
n cities there are (n – 1)! possible routes.
8. Once the initial basic feasible solution has been computed, the next step in
the problem is to determine whether the solution obtained is optimum or
not.
Optimality test can be conducted on any initial basic feasible solution of a
TP provided the solution has exactly m + n – 1 non-negative allocations,
where m is the number of origins and n is the number of destinations. Also,
these allocations must be in independent positions.
To perform this optimality test, we shall discuss the MOdified DIstribution
(MODI) method. The various steps involved in the MODI method for
performing optimality test are as follows:
9. Step 1: Find the initial basic feasible solution of a TP by using any one of the
three methods.
Step 2: Find out a set of numbers ui and vj for each row and column
satisfying ui + vj = Cij for each occupied cell. To start with, we assign a
number ‘0’ to any row or column having maximum number of allocations. If
this maximum number of allocations is more than one, choose any one
arbitrarily.
Step 3: For each empty (unoccupied) cell, we find the sum ui and vj written
in the bottom left corner of that cell.
Step 4: Find for each empty cell the net evaluation value Δij = Cij – (ui + vj),
which is written at the bottom right corner of that cell. This step gives the optimality
conclusion:
(i) If all Δij > 0 (i.e., all the net evaluation values are positive), the solution is
optimum and a unique solution exists.
(ii) If all Δij ≥ 0 with at least one Δij = 0, then the solution is optimum, but an
alternate solution exists.
(iii) If at least one Δij < 0, the solution is not optimum. In this case, we go
to the next step to improve the total transportation cost.
10. The given TP is said to be unbalanced if Σai ≠ Σbj, i.e., if the total supply is
not equal to the total demand.
11. There are following two possible cases:
Case (i): Σai < Σbj

If the total supply is less than the total demand, a dummy source (row) is
included in the cost matrix with zero cost; the excess demand is entered as
a rim requirement for this dummy source (origin). Hence, the unbalanced
transportation problem can be converted into a balanced TP.
Case (ii): Σai > Σbj

Therefore, the total supply is greater than the total demand. In this case, the
unbalanced TP can be converted into a balanced TP by adding a dummy
destination (column) with zero cost. The excess supply is entered as a rim
requirement for the dummy destination.
12. In this, the objective is to maximize the total profit for which the profit
matrix is given. For this, first we have to convert the maximization problem
into minimization by subtracting all the elements from the highest element in
the given transportation table. This modified minimization problem can be
solved in the usual manner.

8.6 SUMMARY

 The Transportation Problem (TP) is one of the subclasses of LPP (Linear


Programming Problem) in which the objective is to transport various quantities
of a single homogeneous commodity that are initially stored at various origins
to different destinations in such a way that the transportation cost is minimum.
 Feasible Solution: Any set of non-negative allocations (Xij>0) which satisfies
the row and column sum is called a feasible solution.
 Basic Feasible Solution: A feasible solution is called a basic feasible solution
if the number of non-negative allocations is equal to m + n – 1, where m is
the number of rows and n the number of columns in a transportation table.
 Non-Degenerate Basic Feasible Solution: Any feasible solution to a
transportation problem containing m origins and n destinations is said to be
non-degenerate, if it contains m + n – 1 occupied cells and each allocation
is in independent positions.
 Degenerate Basic Feasible Solution: If a basic feasible solution contains
less than m + n – 1 non-negative allocations it is said to be degenerate.
 Basic cells are those that indicate positive values and non-basic cells have
zero value for flow. According to transportation problem number of basic
cells will be exactly m + n – 1.
 Initialization: Before starting to solve the problem, it should be balanced. If
not then make it balanced by ‘Unshipped Supply’ column in case demand
is less than supply or by adding a 'Shortage' row in case supply is less than
the demand. Put zero for the unit costs in the column for unshipped supply. Put
either penalty costs or zero in a row that shows shortage.
 Optimality test can be conducted to any initial basic feasible solution of a
TP provided such allocations has exactly m + n – 1 non-negative allocations,
where m is the number of origins and n is the number of destinations. Also,
these allocations must be in independent positions.
• The given TP is said to be unbalanced if Σai ≠ Σbj, i.e., if the total supply is
not equal to the total demand.
 In this, the objective is to maximize the total profit for which the profit
matrix is given. For this, first we have to convert the maximization problem
into minimization by subtracting all the elements from the highest element in
the given transportation table. This modified minimization problem can be
solved in the usual manner.
8.7 KEY WORDS

 Transportation problem: A problem for transportation of various quantities


of a single homogeneous commodity, initially stored at various origins, to
different destinations at the minimum cost.
 Feasible solution: A set of non-negative allocations where some quantity
is transferred from an origin i to a destination j, (Xij > 0) and satisfies the
row and column sum is a feasible solution.
 Basic feasible solution: A feasible solution where number of non-negative
allocations is equal to m + n – 1, where m is the number of rows and n is
the number of columns in a transportation table.
 Non-degenerate basic feasible solution: Any feasible solution to a
transportation problem containing m origins and n destinations is known as
non-degenerate, if it contains m + n – 1 occupied cells and each allocation
is in independent positions.
 Degenerate basic feasible solution: A basic feasible solution that
contains less than m + n – 1 non-negative allocations.
• Unbalanced transportation: The given TP is said to be unbalanced if
Σai ≠ Σbj, i.e., if the total supply is not equal to the total demand.
 Maximization in TP: In this, the objective is to maximize the total profit
for which the profit matrix is given.

8.8 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short-Answer Questions
1. When does degeneracy occur in transportation problem?
2. What is transportation problem?
3. What is transportation algorithm?
4. How will you identify if a transportation problem has an alternate optimal
solution or not?
5. What is the coefficient of Xij of constraints in a transportation problem?
6. What is an initial basic feasible solution?
Long-Answer Questions
1. Give the mathematical formulation of a transportation problem.
2. Write an algorithm to solve a transportation problem.

3. Define the terms feasible solution, basic solution, non-degenerate solution
and optimal solution in a transportation problem.
4. Explain degeneracy in a transportation problem. Describe a method to resolve
it.
5. What do you mean by an unbalanced transportation problem? Explain the
process of converting an unbalanced transportation problem into a balanced
one.
6. What do you understand by transportation model?
7. Solve the following transportation problem where the cell entries denote
the unit transportation costs.

Destination
A B C D Supply
P 5 4 2 6 20
Origin Q 8 3 5 7 30
R 5 9 4 6 50
Demand 10 40 20 30 100

8. Solve the following transportation problem.


Destination
1 2 3 Capacity
1 2 2 3 10
Source 2 4 1 2 15
3 1 3 1 40
Demand 20 15 30

8.9 FURTHER READINGS

Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New


Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.

Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.


Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata
McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi:
Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.

BLOCK - III
ASSIGNMENT AND SEQUENCING PROBLEMS

UNIT 9 ASSIGNMENT PROBLEMS
Structure
9.0 Introduction
9.1 Objectives
9.2 Assignment Problems
9.3 Test for Optimality by using Hungarian Method
9.4 Maximization in Assignment Problems
9.5 Answers to Check Your Progress Questions
9.6 Summary
9.7 Key Words
9.8 Self Assessment Questions and Exercises
9.9 Further Readings

9.0 INTRODUCTION

The assignment problem can be presented in the form of an n × n cost matrix of
real numbers. For solving an assignment problem, the Hungarian method is used.
The assignment problem is a fundamental combinatorial optimization problem. In
its most general form, the problem is as follows:
The problem instance has a number of agents and a number of tasks. Any
agent can be assigned to perform any task, incurring some cost that may vary
depending on the agent-task assignment. It is required to perform as many tasks
as possible by assigning at most one agent to each task and at most one task to
each agent, in such a way that the total cost of the assignment is minimized.
The Hungarian method is a combinatorial optimization algorithm that solves
the assignment problem in polynomial time and which anticipated later primal–
dual methods. It was developed and published in 1955 by Harold Kuhn, who
gave the name “Hungarian Method” because the algorithm was largely based on
the earlier works of two Hungarian mathematicians: Dénes Kőnig and Jenő
Egerváry.
An assignment problem is balanced if the cost matrix is a square matrix,
otherwise, it is termed as unbalanced. To make an unbalanced assignment
problem balanced, dummy rows or dummy columns are added with all entries
as zeroes.
In this unit, you will study about the concepts of assignment problem, test
for optimality by using Hungarian method, maximization case in assignment
problem.
9.1 OBJECTIVES

After going through this unit, you will be able to:

• Understand the significance of the assignment problem
• Analyse the mathematical formulation of the assignment problem
• Explain the test for optimality by using the Hungarian method
• Define maximization in assignment problems

9.2 ASSIGNMENT PROBLEMS

The assignment problem is used to find the best possible assignment for the given
situations.
Basics of Assignment Problems
The assignment problem is one of the fundamental combinatorial optimization
problems. It amounts to finding a maximum weight matching in a weighted
bipartite graph. The assignment problem is also termed a special case of
transportation problem.
Suppose there are n jobs to be performed and n persons are available for
doing these jobs. Assume that each person can do each job at a time, though
with varying degrees of efficiency. Let Cij be the cost if the ith person is assigned
to the jth job. The solution to the problem is to find an assignment (which job
should be assigned to which person on one-one basis) so that the total cost of
performing all jobs is minimum. Problems of this kind are known as assignment
problems.
The assignment problem can be stated in the form of n × n cost matrix [Cij]
of real numbers as given in the following table:
Jobs
                 1     2     3    ...    j    ...    n
            1   C11   C12   C13   ...   C1j   ...   C1n
            2   C21   C22   C23   ...   C2j   ...   C2n
Persons     3   C31   C32   C33   ...   C3j   ...   C3n
            i   Ci1   Ci2   Ci3   ...   Cij   ...   Cin
            ...
            n   Cn1   Cn2   Cn3   ...   Cnj   ...   Cnn

Self-Instructional
Material 225
Mathematical Formulation of the Assignment Problem
Mathematically, the assignment problem can be stated as follows:

Minimize Z = Σi Σj Cij xij,  i = 1, 2, ..., n; j = 1, 2, ..., n

Subject to the restrictions:

xij = 1 if the ith person is assigned the jth job, and xij = 0 if not,

Σj xij = 1 (only one job is done by the ith person), and

Σi xij = 1 (only one person should be assigned the jth job),

where xij = 1 denotes that the jth job is assigned to the ith person.
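Because every feasible solution corresponds to a permutation (one job per person and one person per job), a very small instance can be solved by brute force. The Python sketch below, with hypothetical costs, illustrates the model itself; it is not the Hungarian method:

# Brute-force sketch of the assignment model: each permutation of the jobs
# automatically satisfies both sets of restrictions.
from itertools import permutations

def min_assignment(cost):
    n = len(cost)
    best = None
    for perm in permutations(range(n)):        # person i gets job perm[i]
        z = sum(cost[i][perm[i]] for i in range(n))
        if best is None or z < best[0]:
            best = (z, perm)
    return best

print(min_assignment([[9, 2, 7], [6, 4, 3], [5, 8, 1]]))   # -> (9, (1, 0, 2))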

9.3 TEST FOR OPTIMALITY BY USING


HUNGARIAN METHOD

The solution of an assignment problem can be arrived at by using the Hungarian


method. The steps involved in this method are as follows:
Step 1: Prepare a cost matrix. If the cost matrix is not a square matrix then
add a dummy row (column) with zero cost element.
Step 2: Subtract the minimum element in each row from all the elements of
the respective rows.
Step 3: Further modify the resulting matrix by subtracting the minimum
element of each column from all the elements of the respective columns. Thus,
obtain the modified matrix.
Step 4: Then, draw minimum number of horizontal and vertical lines to
cover all zeroes in the resulting matrix. Let the minimum number of lines be N.
Now, there are two possible cases.
Case (i) If N = n, where n is the order of matrix an optimal assignment
can be made. So make the assignment to get the required solution.
Case (ii) If N < n, then proceed to Step 5.
Step 5: Determine the smallest uncovered element in the matrix (element
not covered by N lines). Subtract this minimum element from all uncovered elements
and add the same element at the intersection of horizontal and vertical lines. Thus,
the second modified matrix is obtained.
Step 6: Repeat Steps (3) and (4) until we get the Case (i) of Step 4.
Step 7: To make the zero assignment, examine the rows successively until a
row with exactly one zero is found. Circle (O) this zero to make the assignment.
Then mark a cross (×) over all zeroes lying in the column of the circled zero,
showing that they cannot be considered for future assignment. Continue in this
manner until all the zeroes have been examined. Repeat the same procedure for
the columns also.
Step 8: Repeat the Step 6 successively until one of the following situation NOTES
arises:
Case (i) If no unmarked zero is left, then the process ends.
Case (ii) If more than one unmarked zero lies in any column or row, then circle
one of the unmarked zeroes arbitrarily and mark a cross in the cells of the remaining
zeroes in its row and column. Repeat the process until no unmarked
zero is left in the matrix.
Step 9: Thus, exactly one marked circled zero in each row and each
column of the matrix is obtained. The assignment corresponding to these marked
circled zeroes will give the optimal assignment.
Example 9.1: Using the following cost matrix, determine (i) the optimal job
assignment and (ii) the cost of assignment.
Jobs
1 2 3 4 5
A 10 3 3 2 8
B  9 7 8 2 7 
Mechanics C  7 5 6 2 4
 
D 3 5 8 2 4
E  9 10 9 6 10 

Solution: Select the smallest element in each row and subtract this smallest element
from all the other elements in its row.

Select the minimum element in each column and subtract it from all the elements
in its column. With this we get the first modified matrix.

In this modified matrix, we draw the minimum number of lines to cover all
zeroes (horizontal and vertical).


Number of lines drawn to cover all zeroes is N = 4.


The order of matrix is n = 5.
Hence, N < n, the order of the matrix.
Now, we get the second modified matrix by subtracting the smallest
uncovered element from the remaining uncovered elements and adding it to the
element at the point of intersection of lines.

Number of lines drawn to cover all zeroes is N = 5


The order of matrix is n = 5
Hence, N = n, i.e., order of matrix. Now, we determine the optimum
assignment.
Assignment

The first row contains more than one zero. So, proceed to the second row. It
has exactly one zero. The corresponding cell is (B, 4). Circle this zero thus making
an assignment. Mark (×) for all other zeroes in its column to show that they cannot
be used for making other assignments. Now, row 5 has a single zero in the cell
(E, 3). Make an assignment in this cell and cross the 2nd zero in the 3rd column.
Now, row 1 has a single zero in column 2, i.e., in the cell (A, 2).

Make an assignment in this cell and cross the other zeroes in the 2nd column.
This leads to a single zero in the column 1 of the cell (D, 1). Make an assignment
in this cell and cross the other zeroes in the 4th row. Finally, we have a single zero
left in the 3rd row making an assignment in the cell (C, 5). Thus, we have the
following assignment.
Job Mechanic Cost
1 D 3
2 A 3
3 E 9
4 B 2
5 C 4
21
Therefore, 1 → D, 2 → A, 3 → E, 4 → B, 5 → C with minimum cost 21.
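As a cross-check of this result, the same cost matrix can be handed to a library routine. The sketch below uses SciPy's linear_sum_assignment, a solver for exactly this n × n assignment problem, in place of the hand-worked Hungarian steps; it reproduces the minimum cost of 21:

# Cross-check of Example 9.1 with SciPy (not a hand-coded Hungarian method).
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[10,  3, 3, 2,  8],    # mechanics A-E (rows) x jobs 1-5 (columns)
                 [ 9,  7, 8, 2,  7],
                 [ 7,  5, 6, 2,  4],
                 [ 3,  5, 8, 2,  4],
                 [ 9, 10, 9, 6, 10]])
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), cost[rows, cols].sum())   # total cost 21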

9.4 MAXIMIZATION IN ASSIGNMENT PROBLEMS

The following situations are considered as variations in assignment problem:


Unbalanced Assignment Problems
Any assignment problem is said to be unbalanced if the cost matrix is not a square
matrix, i.e., the number of rows and columns are not equal. To make it balanced,
we add a dummy row or dummy column with all the entries as zero.
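A short Python sketch of this padding step; the pad_square helper below is hypothetical and simply appends zero-cost rows or columns until the matrix is square:

# Sketch of balancing a non-square assignment matrix with zero-cost dummies.
def pad_square(cost):
    rows, cols = len(cost), len(cost[0])
    size = max(rows, cols)
    padded = [row + [0] * (size - cols) for row in cost]
    padded += [[0] * size for _ in range(size - rows)]
    return padded

print(pad_square([[18, 24, 28, 32],     # the 3 x 4 matrix of Example 9.2
                  [ 8, 13, 17, 18],
                  [10, 15, 19, 22]]))   # gains one all-zero dummy row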
Example 9.2: A company has 4 machines to do 3 jobs. Each job can be assigned
to only one machine. The cost of each job on each machine is given below.
Determine the job assignments that will minimize the total cost.
Machines
W X Y Z
A  18 24 28 32 
 
Jobs B  8 13 17 18 
C  10 15 19 22 
Solution: Since the cost matrix is not a square matrix, we add a dummy row D
with all the elements 0.
W X Y Z
A  18 24 28 32 
B  8 13 17 18 
C  10 15 19 22 
 
D 0 0 0 0

Subtract the minimum element in each row from all the elements in its row.
W X Y Z
A 0 6 10 14 
B 0 5 9 10 
C 0 5 9 12 
 
D 0 0 0 0
Since each column has a minimum element 0, we draw minimum number of lines
to cover all zeros.
W X Y Z
A 0 6 10 14 
B 0 5 9 10 
C 0 5 9 12 
 
D 0 0 0 0
 The number of lines drawn to cover all zeroes is N = 2 < n = 4, which is less
than the order of matrix, hence we form a second modified matrix.
W X Y Z
A 0 1 5 9
B  0 0 4 5 
C 0 0 4 7
 
D 5 0 0 0
Here, N = 3 < n = 4, which is less than the order of matrix.
Again we subtract the smallest uncovered element from all the uncovered elements
and add to the element at the point of intersection
W X Y Z
A 0 1 1 5
B  0 0 0 1 
C 0 0 0 3
 
D  9 4 0 0 

Here, N = 4 = n = 4 = Order of matrix. Hence, we make an assignment.


Assignment
W X Y Z
A 0 1 1 5
B 0 0 0 1
C 0 0 0 3
D 9 4 0 0
A → W, B → X, C → Y, D → Z   or   A → W, B → Y, C → X, D → Z

Since D is a dummy job, machine Z is assigned no job.


Therefore, optimum cost = 18 + 13 + 19 = 50.

Maximization in Assignment Problems

In this, the objective is to maximize the profit. To solve this, we first convert the
given profit matrix into the loss matrix by subtracting all the elements from the
highest element. For this converted loss matrix we apply the steps in Hungarian NOTES
method to get the optimum assignment.
Example 9.3: The owner of a small machine shop has four mechanics available
to assign jobs for the day. Five jobs are offered with expected profits for each
mechanic on each job, which are as follows:
Jobs
A B C D E
1 62 78 50 111 82
Mechanics 2 71 84 61 73 59
3 87 92 111 71 81
4 48 64 87 77 80

By using the assignment method, find the assignment of mechanics to the jobs
that will result in maximum profit. Which job should be declined?
Solution: The given profit matrix is not a square matrix as the number of jobs is
not equal to the number of mechanics. Hence, we introduce a dummy mechanic 5
with all the elements 0.
Jobs

A B C D E
1  62 78 50 111 82 
2  71 84 61 73 59 
Mechanics 
3 87 92 111 71 81
 
4  48 64 87 77 80 
5  0 0 0 0 0 

Now we convert this profit matrix into loss matrix by subtracting all the elements
from the highest element 111.
Loss Matrix

A B C D E
1  49 33 61 0 29 
2  40 27 50 38 52 

3  24 19 0 40 30 
 
4  63 47 24 34 31 
5 111 111 111 111 111

We subtract the smallest element from all the elements in the respective
rows.
A B C D E
1  49 33 61 0 29 
2  13 0 23 11 25 
3  24 19 0 40 30 
 
4  39 23 0 10 7 
5  0 0 0 0 0 

Since each column has minimum element as zero, we draw minimum number
of lines to cover all zeros.
           A    B    C    D    E
      1   49   33   61    0   29
      2   13    0   23   11   25
      3   24   19    0   40   30
      4   39   23    0   10    7
      5    0    0    0    0    0
Here the minimum number of lines needed to cover all zeros is N = 4, which is less than the order of the matrix, n = 5.
We form the second modified matrix by subtracting the smallest uncovered
element from the remaining uncovered elements and adding to the element that is
at the point of intersection of lines.
           A    B    C    D    E
      1   49   40   68    0   29
      2    6    0   23    4   18
      3   17   19    0   33   23
      4   32   23    0    3    0
      5    0    7    7    0    0
Here, N = 5 = n = 5 = Order of matrix.
We make the assignment.
           A    B    C    D    E
      1   49   40   68  [0]   29
      2    6  [0]   23    4   18
      3   17   19  [0]   33   23
      4   32   23    0    3  [0]
      5  [0]    7    7    0    0
The optimum assignment is:

      Jobs    Mechanics
        A         5
        B         2
        C         3
        D         1
        E         4
Since the 5th mechanic is a dummy, job A, which is assigned to the 5th mechanic, is declined.
The maximum profit is given by 84 + 111 + 111 + 80 = 386.
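As a quick cross-check (an illustrative addition, not part of the original text), the same optimum can be recovered with SciPy's linear_sum_assignment routine, which solves rectangular assignment problems directly and accepts maximize=True for profit matrices. The sketch below assumes NumPy and SciPy are available; the profit matrix is the 4 x 5 table of Example 9.3.

    # Verification sketch for Example 9.3 (assumes numpy and scipy are installed).
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    profit = np.array([[62, 78,  50, 111, 82],
                       [71, 84,  61,  73, 59],
                       [87, 92, 111,  71, 81],
                       [48, 64,  87,  77, 80]])

    # linear_sum_assignment handles the rectangular (4 x 5) case directly,
    # so no dummy mechanic is needed when using this routine.
    rows, cols = linear_sum_assignment(profit, maximize=True)
    for r, c in zip(rows, cols):
        print("Mechanic", r + 1, "-> Job", "ABCDE"[c], "with profit", profit[r, c])
    print("Total profit:", profit[rows, cols].sum())   # expected: 386

The one job left unassigned (job A here) is the job to be declined, in agreement with the Hungarian-method solution above.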
Check Your Progress
1. What do you understand by an assignment problem?
2. Explain the mathematical formulation of the assignment problem.
3. Describe the unbalanced assignment problem.
4. State the maximization in assignment problem.

9.5 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. The assignment problem is one of the fundamental combinatorial optimization
problems. It consists of finding a maximum (or minimum) weight matching in a weighted
bipartite graph. The assignment problem is also termed a special case of the
transportation problem.
Suppose there are n jobs to be performed and n persons are available for
doing these jobs. Assume that each person can do each job, one at a time,
though with varying degrees of efficiency. Let Cij be the cost if the ith person
is assigned to the jth job. The solution to the problem is to find an assignment
(which job should be assigned to which person on one-one basis) so that
the total cost of performing all jobs is minimum. Problems of this kind are
known as assignment problems.
2. Mathematically, the assignment problem can be stated as follows:

   Minimize Z = Sum over i = 1 to n, Sum over j = 1 to n, of Cij xij

   Subject to the restrictions:

   xij = 1 if the ith person is assigned the jth job, and xij = 0 if not,

   Sum over j = 1 to n of xij = 1  (only one job is done by the ith person), and

   Sum over i = 1 to n of xij = 1  (only one person is assigned the jth job),

   where xij denotes whether the jth job is assigned to the ith person.

3. Any assignment problem is said to be unbalanced if the cost matrix is not a
square matrix, i.e., the number of rows and columns are not equal. To make
it balanced, we add a dummy row or dummy column with all the entries as
zero.
4. Maximization in Assignment Problems
In this, the objective is to maximize the profit. To solve this, we first convert
the given profit matrix into the loss matrix by subtracting all the elements
from the highest element. For this converted loss matrix we apply the steps
in Hungarian method to get the optimum assignment.

9.6 SUMMARY

 The assignment problem is one of the fundamental combinatorial optimization problems. It consists of finding a maximum (or minimum) weight matching in a weighted bipartite graph. The assignment problem is also termed a special case of the transportation problem.
 The assignment problem can be stated in the form of n × n cost matrix [Cij]
of real numbers.
 Determine the smallest uncovered element in the matrix (element not covered
by N lines). Subtract this minimum element from all uncovered elements
and add the same element at the intersection of horizontal and vertical lines.
Thus, the second modified matrix is obtained.
 To make the zero assignment, examine the rows successively until a row with
exactly one zero is found. Circle (O) this zero to make the assignment.
Then mark a cross (×) over all zeros lying in the column of the circled
zero, showing that they cannot be considered for future assignment. Continue
in this manner until all the zeros have been examined. Repeat the same
procedure for the columns also.
 Any assignment problem is said to be unbalanced if the cost matrix is not a
square matrix, i.e., the number of rows and columns are not equal. To make
it balanced, we add a dummy row or dummy column with all the entries as
zero.
 Maximization in Assignment Problems
In this, the objective is to maximize the profit. To solve this, we first convert
the given profit matrix into the loss matrix by subtracting all the elements
from the highest element. For this converted loss matrix we apply the steps
in Hungarian method to get the optimum assignment.

9.7 KEY WORDS

 Assignment problem: A problem that can be presented in the form of an n × n cost matrix of real numbers.
 Basics of assignment problem: The assignment problem is one of the
fundamental combinatorial optimization problems. It consists of finding a maximum
weight matching in a weighted bipartite graph.
 Hungarian method: The Hungarian method is a combinatorial optimization
algorithm that solves the assignment problem in polynomial time and which
anticipated later primal-dual methods.
 Unbalanced assignment problem: Any assignment problem is said to be
unbalanced if the cost matrix is not a square matrix, i.e., the number of rows
and columns are not equal; it is balanced by adding a dummy row or column with all the entries as zero.
 Maximization in assignment problem: The objective is to maximize the
profit. To solve this, we first convert the given matrix into the loss matrix by
subtracting all the elements from the highest element.

9.8 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Explain basics of assignment problem.
2. Write the first three steps of the Hungarian procedure for solving assignment
problem.
3. How many steps are there in the Hungarian method?
4. What do you mean by the unbalanced assignment problem?
5. Elaborate on the maximization in assignment problem.
Long-Answer Questions
1. Discuss in brief the assignment problem with the help of examples.
2. How can we solve an assignment problem using Hungarian method? Discuss
with the help of examples.
3. What do you mean by an unbalanced assignment problem?
4. Explain the process of converting an unbalanced assignment problem into a
balanced one.
5. Analyse the maximization in assignment problem. Give appropriate
examples.

6. Solve the following assignment problems:
   (i)
                 Men
             A    B    C    D
        I    1    4    6    3
Jobs   II    9    7   10    9
      III    4    5   11    7
       IV    8    7    8    5

   (ii)
                 Men
             A    B    C    D
        I   10   25   15   20
Jobs   II   15   30    5   15
      III   35   20   12   24
       IV   17   25   24   20
   (iii)
             A    B    C    D    E
         I   1    3    2    8    8
        II   2    4    3    1    5
Tasks  III   5    6    3    4    6
        IV   3    1    4    2    2
         V   1    5    6    5    4
7. There are five jobs to be assigned one each to 5 machines. The associated
cost matrix is as follows.
                Machines
            1    2    3    4    5
       A   11   17    8   16   20
       B    9    7   12    6   15
Jobs   C   13   16   15   12   16
       D   21   24   17   28   26
       E   14   10   12   11   15
How should the job be assigned to various machines?
8. A company is faced with the problem of assigning 4 machines to 6 different
jobs (one machine to one job only). The profits are estimated as follows.
                Machines
            A    B    C    D
       1    3    6    2    6
       2    7    1    4    4
       3    3    8    5    8
Jobs   4    6    4    3    7
       5    5    2    4    3
       6    5    7    6    4

Solve the problem to maximize the profit.


9.9 FURTHER READINGS

Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.
Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.
Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata
McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi:
Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.

UNIT 10 SEQUENCING MODELS

Structure
10.0 Introduction
10.1 Objectives
10.2 Sequencing Models: Basic Concepts
10.2.1 Definition
10.2.2 Terminology and Notations
10.2.3 Principal Assumptions
10.2.4 Job Sequence Problems
10.3 Processing of n Jobs through Two Machines
10.4 Processing of n Jobs through Three Machines
10.5 Processing of n Jobs through m Machines
10.6 Processing of Two Jobs through m Machines
10.7 Maintenance Crew Scheduling
10.8 Answers to Check Your Progress Questions
10.9 Summary
10.10 Key Words
10.11 Self Assessment Questions and Exercises
10.12 Further Readings

10.0 INTRODUCTION

Sequencing models determine an appropriate order (sequence) for a series of
jobs to be done on a finite number of service facilities in some pre-assigned order,
so as to optimize the total cost (time) involved. The algorithm, which is used to
optimize the total elapsed time for processing n jobs through two machines is
called ‘Johnson’s Algorithm’. Consider n jobs (1, 2, ..., n) processing on three
machines A, B, C in the order ABC. The optimal sequence can be obtained by
converting the problem into a two-machine problem. Two jobs may also each be
processed on K machines M1, M2, ..., MK in two different orders. The ordering
of each of the two jobs through the K machines is known in advance.
Maintenance crew scheduling becomes more and more complex as you
add variables to the problem. These variables can be as simple as, 1 location, 1
skill requirement, 1 shift of work and 1 set roster of people. Crew expenses are
wages and overnight costs while away from their crew base. In the transportation
industries, such as rail or mainly air travel, these variables become very complex.
In air travel for instance, there are numerous rules or ‘Constraints’ that are
introduced. Basically, these deal with legalities relating to work shifts and time,
and crew members qualifications for working on a particular aircraft. Fuel is
also a major consideration as aircrafts and other vehicles require a lot of costly
fuel to operate. Finding the most efficient route and staffing it with properly
Self-Instructional
238 Material
qualified personnel is a critical financial consideration. The same applies to rail travel.
In this unit, you will study about the concept of sequencing models, definition,
terminology and notations, principal assumptions, job sequence problem,
processing of n jobs through two machines, processing of n jobs through three
machines, processing of n jobs through m machines, processing of two jobs through
m machines, and maintenance crew scheduling.

10.1 OBJECTIVES
After going through this unit, you will be able to:
 Understand the basic concept of sequencing models
 Analyse the processing of n jobs through two as well as three machines
 Understand the processing of n as well as two jobs through m machines
 Know about the maintenance crew scheduling

10.2 SEQUENCING MODELS: BASIC CONCEPTS


Sequencing models determine an appropriate order (sequence) for a series of
jobs to be done on a finite number of service facilities in some pre-assigned order,
so as to optimize the total cost (time) involved.
10.2.1 Definition
Suppose there are n jobs (1, 2, ..., n), each of which has to be processed one at
a time at m machines (A, B, C, ...). The order of processing each job through each
machine is given. The problem is to find a sequence among the (n!)^m number of all
possible sequences for processing the jobs so that the total elapsed time for all the
jobs will be minimum.
10.2.2 Terminology and Notations
The following are the terminologies and notations used in this unit.
Number of Machines: It means the service facilities through which a job must
pass before it is completed.
Processing Order: It refers to the order in which various machines are required
for completing the job.
Processing Time: It means the time required by each job on each machine.
Idle Time on a Machine: This is the time for which a machine remains idle
during the total elapsed time. The notation xij is used to denote the idle time of a
machine j between the end of the (i – 1)th job and the start of the ith job.
Total Elapsed Time: This is the time between starting the first job and completing
the last job, which also includes the idle time, if present.
No Passing Rule: It means passing is not allowed, i.e., maintaining the same
order of jobs over each machine. If each of N-jobs is to be processed through 2
machines M1 and M2 in the order M1 M2, then this rule will mean that each job will
go to machine M1 first and then to M2. If a job is finished on M1, it goes directly to
machine M2 if it is free, otherwise it starts a waiting line or joins the end of the
waiting line, if one already exists. Jobs that form a waiting line are processed on
machine M2 when it becomes free.
10.2.3 Principal Assumptions
(i) No machine can process more than one operation at a time.
(ii) Each operation once started must be performed till completion.
(iii) Each operation must be completed before starting any other operation.
(iv) Time intervals for processing are independent of the order in which operations
are performed.
(v) There is only one machine of each type.
(vi) A job is processed as soon as possible, subject to the ordering requirements.
(vii) All jobs are known and are ready for processing, before the period under
consideration begins.
(viii) The time required to transfer jobs between machines is negligible.
10.2.4 Job Sequence Problems
Job sequencing is basically the planning of the jobs in sequential manner and is an
essential part of any work. Without proper planning and scheduling one can not
achieve the desired output and profit. For sequencing a job, generally the two
techniques are used termed as Priority Rules and Johnson’s Rules. Priority rules
give the guidelines for properly sequencing the job, where as Johnson’s rule is
used to minimize the completion time for a set of jobs to be done on two different
machines. Using these rules one can assign jobs and maximize product and profit.
Basic characteristics of job sequencing
1. Only one single job should be scheduled for a machine at a time.
2. Do not stop the process in between before completion.
3. New processing can be started after the completion of the previous
processing.
4. Any job is scheduled for processing as per the order and due date
requirements.
5. If the jobs are transferred from one machine to another due to some reason,
then the time involved in transferring the jobs is considered negligible.
Priority Rules: These rules are used to get specific guidelines for job sequencing.
The rules do not consider job setup cost and time while analysing processing
times. In it job processing time and due dates are given importance because the
due dates are fixed to give delivery in time to the customers. The rules are very

useful for process-focussed amenities, for example health clinics, print shops and
manufacturing industries. Hence, priority rules minimize the time for completing a
job, sequences the jobs in the organization, checks if any job is late and maximizes
resource utilization. The most popular priority rules are as follows:
 First Come First Serve (FCFS): The job to be processed first is the job
that turned up first in the organization.
 Earliest Due Date (EDD): The job to be processed first is the job that
has earliest due date.
 Shortest Processing Time (SPT): The job to be processed first and
completed is the job that is shortest in nature; in other words the job can be
processed in short time.
 Longest Processing Time (LPT): The job to be processed first is the
job that is very important or of high priority though it can take longer
processing time.
 Critical Ratio (CR): The job to be processed first is analysed on the
basis of critical ratio, which is an index number calculated from time remaining
until due date divided by the remaining work time.
Johnson’s Rule: This rule is applied to minimize the completion time for a set of
jobs that are to be processed on two different machines or at two consecutive
work stations. The main objectives of the rules are,
 To minimize the processing time while sequencing a set of jobs on two
different machines or work stations.
 To minimize the complete idle time on the processing machines.
 To minimize the flow time of the job, i.e., from the start of the first job until
the completion of the last job.
Necessary Conditions for Johnson’s Rules: The necessary conditions to
efficiently complete the processing of the jobs are as follows:
 Knowledge about job time for each job at the specific work station.
 Job time must not depend on sequencing of jobs.
 All the jobs to follow the predefined work sequence.
 Avoid job priority.
Four Steps Johnson’s Rule: The following are the important four steps in
Johnson’s rule:
Step 1: List all the jobs and the processing time of each machine to which these
jobs are scheduled.
Step 2: Choose the job which has the shortest processing time. If the shortest
time has been scheduled on the first machine or work station then the job is selected
first for processing. In case the shortest time is scheduled on the second machine
or work station then the job is processed at the end.
Step 3: After scheduling the job for processing go to Step 4.
Step 4: Repeat Step 2 again to schedule the processing of remaining jobs and fill
the sequence columns towards the centre till all the jobs are scheduled.
The following example will help you to understand how the sequences are
scheduled.
For example, there are five jobs to be done at a factory and each job must
be processed through two work stations at two different machines, drill machine
and lathe machine. Using Johnson’s rule we can schedule the sequence of jobs.
The time (in hours) for processing each job is given in the following table:

Jobs    Work Station 1 (Drill)    Work Station 2 (Lathe)
  A               5                          2
  B               3                          6
  C               9                          4
  D              12                          8
  E               8                         14

Using the Steps of Johnson’s Rule, the job processing sequences are scheduled as
follows:
Step 1: In the given table, the job with the shortest processing time is job A, in
work station 2 (with a time of 2 hours). Because it is at the second work station,
schedule A last.

A
Step 2: Next shortest time is of job B (with a time of 3 hours). Because it is at the
first work station, schedule it at first priority and eliminate it from the list.

B A

Step 3: The next shortest time is that of job C (with a time of 4 hours), but it is at the
second work station. Therefore, place it as late as possible, just before A.

B C A

Step 4: There is a tie between job D (with a time of 8 hours at work station 2) and
job E (with a time of 8 hours at work station 1) for the shortest remaining job.
Because job E is at the first work station, so place it first after job B. Then place
job D in the last sequencing position. You will get the job sequence schedule as
follows:

B E D C A
The final sequential times at both the work stations will be:

Jobs                        B    E    D    C    A
Work Station 1 (Drill)      3    8   12    9    5
Work Station 2 (Lathe)      6   14    8    4    2

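The four-step procedure above is easy to automate. The following short Python sketch (an illustration added here, not part of the original text) applies Johnson's rule to the drill/lathe data of the preceding example; ties are broken simply by the order in which the jobs are listed, which happens to reproduce the schedule B-E-D-C-A obtained above, but it does not implement the full tie-breaking rules given later in Section 10.3.

    # A minimal sketch of Johnson's rule for n jobs on two machines.
    def johnsons_rule(jobs):
        """jobs: dict mapping job name -> (time on machine 1, time on machine 2)."""
        front, back = [], []
        remaining = dict(jobs)
        while remaining:
            # Pick the job with the smallest processing time on either machine.
            job = min(remaining, key=lambda j: min(remaining[j]))
            t1, t2 = remaining.pop(job)
            if t1 <= t2:
                front.append(job)       # shortest time on machine 1: schedule early
            else:
                back.insert(0, job)     # shortest time on machine 2: schedule late
        return front + back

    times = {"A": (5, 2), "B": (3, 6), "C": (9, 4), "D": (12, 8), "E": (8, 14)}
    print(johnsons_rule(times))         # expected: ['B', 'E', 'D', 'C', 'A']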
Check Your Progress


1. What is basic concept of sequencing models?
2. Give the definition of sequencing models.
3. State terminologies and notations regarding sequencing models.
4. Explain the principal assumptions.
5. Define the job sequence problem.
6. What are basic characteristics of job sequencing?
7. Elaborate on the priority rules.
8. State the Johnson’s rule.
9. What are the necessary conditions for Johnson’s rule?
10. State the four steps Johnson’s rule.

10.3 PROCESSING OF n JOBS THROUGH TWO MACHINES
The algorithm, which is used to optimize the total elapsed time for processing n
jobs through two machines is called ‘Johnson’s Algorithm’ and has the following
steps.
Consider n jobs (1, 2, 3, ..., n) processing on two machines A and B in the
order AB. The processing periods (time) are A1 A2 ... An and B1 B2 ... Bn as given
in the following Table.
Machine/Job 1 2 3 ... n

A A1 A2 A3 ... An
B B1 B2 B3 ... Bn

The problem is to sequence the jobs so as to minimize the total elapsed time.
The solution procedure adopted by Johnson is given below:
Step 1: Select the least processing time occurring in the list A1 A2 ... An and B1 B2
... Bn. Let this minimum processing time occur for a job K.
Step 2: If the shortest processing time is for machine A, process the Kth job first and
place it in the beginning of the sequence. If it is for machine B, process the
Kth job last and place it at the end of the sequence.
Step 3: When there is a tie in selecting the minimum processing time, there may be three cases.
(i) If the equal minimum values occur only for machine A, select the job with the larger processing time on B to be placed first in the job sequence.
(ii) If the equal minimum values occur only for machine B, select the job with the larger processing time on A to be placed last in the job sequence.
(iii) If there are equal minimum values, one for each machine, then place the job on machine A first and the one on machine B last.
Step 4: Delete the jobs already sequenced. If all the jobs have been sequenced,
go to the next Step. Otherwise, repeat Steps 1 to 3.
Step 5: In this Step, determine the overall or total elapsed time and also the idle
time on machines A and B as follows.
Total elapsed time = The time between starting the first job in the optimal sequence
on machine A and completing the last job in the optimal sequence on machine B.
Idle time on A = (Time when the last job in the optimal sequence is completed on
machine B) – (Time when the last job in the optimal sequence is completed on
machine A)
Idle time on B = (Time when the first job in the optimal sequence starts on machine B) + Sum over K = 2 to n of [Time the Kth job starts on machine B – Time the (K – 1)th job finishes on machine B]
Example 10.1: There are five jobs, each of which must go through the two machines
A and B in the order AB. Processing times are given below.
Job 1 2 3 4 5

Machine A 5 1 9 3 10

Machine B 2 6 7 8 4

Determine a sequence for the five jobs that will minimize the total elapsed time.
Solution: The shortest processing time in the given problem is 1 on machine A.
So perform job 2 in the beginning, as shown below.
2

The reduced list of processing time becomes


Job 1 3 4 5

Machine A 5 9 3 10
Machine B 2 7 8 4

Again the shortest processing time in the reduced list is 2 for job 1 on machine B.

So place job 1 as the last.


2 1

Continuing in the same manner the next reduced list is obtained as

Job 3 4 5

Machine A 9 3 10
Machine B 7 8 4

Leading to the sequence


2 4 1

and the list


Job 3 5

Machine A 9 10
Machine B 7 4

gives rise to the sequence


2 4 5 1

Finally, the optimal sequence is obtained as

2 4 3 5 1

Flow of jobs through machines A and B using the optimal sequence is
2 -> 4 -> 3 -> 5 -> 1.
Computation of the total elapsed time and the machine’s idle time.
Job    Machine A         Machine B           Idle time
       In     Out        In     Out          A            B
 2      0      1          1      7           0            1
 4      1      4          7     15           0            0
 3      4     13         15     22           0            0
 5     13     23         23     27           0            1
 1     23     28         28     30       30 – 28 = 2      1
                                              2            3

From the above Table we find that the total elapsed time is 30 hours and the idle
time on machine A is 2 hours and on machine B is 3 hours.
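The in/out table above can also be generated mechanically. The sketch below (an illustrative addition, with data taken from Example 10.1) simulates the optimal sequence 2-4-3-5-1 on machines A and B and returns the total elapsed time together with the idle times of both machines.

    # Simulate a given sequence on two machines (order AB) and compute idle times.
    def two_machine_schedule(sequence, a_times, b_times):
        t_a = t_b = 0        # completion clocks for machines A and B
        idle_b = 0
        for job in sequence:
            t_a += a_times[job]            # job leaves machine A at time t_a
            start_b = max(t_a, t_b)        # B starts when free and the job is ready
            idle_b += start_b - t_b        # waiting gap accumulated on machine B
            t_b = start_b + b_times[job]
        idle_a = t_b - t_a                 # A idles after finishing its last job
        return t_b, idle_a, idle_b

    a = {1: 5, 2: 1, 3: 9, 4: 3, 5: 10}
    b = {1: 2, 2: 6, 3: 7, 4: 8, 5: 4}
    print(two_machine_schedule([2, 4, 3, 5, 1], a, b))   # expected: (30, 2, 3)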

Example 10.2: Find the sequence that minimizes the total elapsed time (in hours)
required to complete the following tasks on two machines.
Task          A   B   C   D   E   F   G   H   I
Machine I     2   5   4   9   6   8   7   5   4
Machine II    6   8   7   4   3   9   3   8  11

Solution: The shortest processing time is 2 on machine I for job A. Hence, process
this job first.
A

Deleting these jobs, we get the reduced list of processing time.


Task          B   C   D   E   F   G   H   I
Machine I     5   4   9   6   8   7   5   4
Machine II    8   7   4   3   9   3   8  11

The next minimum processing time, 3, is the same for jobs E and G on machine II. The
corresponding processing times on machine I for these jobs are 6 and 7. The larger is 7
(job G), so sequence job G at the end and E just before it.
A E G

Deleting the jobs that are sequenced, the reduced processing list is,
Job B C D F H I

Machine I 5 4 9 8 5 4

Machine II 8 7 4 9 8 11

The minimum processing time is 4 for job C, I and D. For job C and I it is on
machine I and for job D it is on machine II. There is a tie in sequencing jobs C and
I. To break this, we consider the corresponding time on machine II, the longest
time is 11 (eleven). Hence, sequence job I in the beginning followed by job C. For
job D, as it is on machine II, sequence it last.
A I C D E G

Deleting the jobs that are sequenced, the reduced processing list is,
Job B F H

Machine I 5 8 5

Machine II 8 9 8

The next minimum processing time is 5 on machine I for jobs B and H, which is
again a tie. To break it, we consider the corresponding times on the other
machine (II); as these are also equal, job B or H may be sequenced first. Finally, job F is sequenced.
The optimal sequence is,

A I C B H F D E G

The total elapsed time and idle time for both the machines are calculated from the following table.
Job    Machine I          Machine II           Idle time
       In     Out         In     Out           I              II
 A      0      2           2      8            0              2
 I      2      6           8     19            0              0
 C      6     10          19     26            0              0
 B     10     15          26     34            0              0
 H     15     20          34     42            0              0
 F     20     28          42     51            0              0
 D     28     37          51     55            0              0
 E     37     43          55     58            0              0
 G     43     50          58     61       61 – 50 = 11        0
                                            11 hours        2 hours

Total elapsed time = 61 hours.
Idle time for Machine I = 11 hours; Idle time for Machine II = 2 hours.
Example 10.3: A company has six jobs, A to F. All the jobs have to go through
two machines. The time required for the jobs on each machine in hours is given
below. Find the optimum sequence that minimizes the total elapsed time.
Job A B C D E F
Machine I 1 4 6 3 5 2
Machine II 3 6 8 8 1 5

Solution: The minimum processing time is 1 on machine I for job A and on machine
II for job E. So process job A first and sequence it in the beginning and process
job E last.
A E

Deleting these jobs, the reduced processing time list is,

Job           B   C   D   F
Machine I     4   6   3   2
Machine II    6   8   8   5

The next minimum processing time is 2, on machine I for job F, so sequence this job next, immediately after A.

A   F               E

Continuing this way we get the optimum sequence as follows.


A F D B C E

The total elapsed time and idle time for each machine can be obtained as follows.
Job    Machine I          Machine II           Idle time
       In     Out         In     Out           I               II
 A      0      1           1      4            –               1
 F      1      3           4      9            –               –
 D      3      6           9     17            –               –
 B      6     10          17     23            –               –
 C     10     16          23     31            –               –
 E     16     21          31     32       32 – 21 = 11         –
                                            11 hours         1 hour

Total elapsed time = 32 hours.
Idle time for Machine I = 11 hours; Idle time for Machine II = 1 hour.

10.4 PROCESSING OF n JOBS THROUGH THREE MACHINES
Consider n jobs (1, 2, ..., n) processing on three machines A, B, C in the order
ABC. The optimal sequence can be obtained by converting the problem into a
two-machine problem. From this, we get the optimum sequence using Johnson’s
algorithm.
The following steps are used to convert the given problem into a two-machine
problem.
Step 1: Find the minimum processing time for the jobs on the first and last machine
and the maximum processing time for the second machine.
i.e., find Min (Ai, Ci) and Max (Bi), for i = 1, 2, ..., n.
Step 2: Check the following inequality:
        Min Ai >= Max Bi
   or
        Min Ci >= Max Bi
(the minimum and maximum being taken over all jobs i).
Self-Instructional
248 Material
Step 3: If none of the inequalities in Step 2 is satisfied, this method cannot be applied.
Step 4: If at least one of the inequalities in Step 2 is satisfied, we define two machines G and H, such that the processing times on G and H are given by,
        Gi = Ai + Bi,  i = 1, 2, ..., n
        Hi = Bi + Ci,  i = 1, 2, ..., n
Step 5: For the converted machines G and H, we obtain the optimum sequence
using two-machine algorithm.
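As a brief illustration (an addition, not part of the original text), the reduction in Steps 1 to 5 can be coded directly. The sketch below re-implements Johnson's two-machine rule in compact form, builds the fictitious machines G and H, and is checked against the turning/threading/knurling data of Example 10.4, which follows.

    # Sketch of the three-machine reduction of Section 10.4 (ties broken by listing order).
    def johnsons_rule(jobs):
        front, back, rem = [], [], dict(jobs)
        while rem:
            j = min(rem, key=lambda k: min(rem[k]))
            t1, t2 = rem.pop(j)
            if t1 <= t2:
                front.append(j)
            else:
                back.insert(0, j)
        return front + back

    def three_machine_sequence(a, b, c):
        """a, b, c: dicts job -> times on machines A, B, C (processing order ABC)."""
        if not (min(a.values()) >= max(b.values()) or
                min(c.values()) >= max(b.values())):
            raise ValueError("the reduction of Step 2 does not apply")
        gh = {j: (a[j] + b[j], b[j] + c[j]) for j in a}   # machines G and H
        return johnsons_rule(gh)

    a = {1: 3, 2: 12, 3: 5, 4: 2, 5: 9, 6: 11}     # turning
    b = {1: 8, 2: 6,  3: 4, 4: 6, 5: 3, 6: 1}      # threading
    c = {1: 13, 2: 14, 3: 9, 4: 12, 5: 8, 6: 13}   # knurling
    print(three_machine_sequence(a, b, c))         # expected: [4, 3, 1, 6, 2, 5]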
Example 10.4: A machine operator has to perform three operations, turning,
threading and knurling, on a number of different jobs. The time required to perform
these operations (in minutes) on each job is known. Determine the order in which
the jobs should be processed in order to minimize the total time required to turn
out all the jobs. Also find the minimum elapsed time.

Job 1 2 3 4 5 6

Turning 3 12 5 2 9 11

Threading 8 6 4 6 3 1

Knurling 13 14 9 12 8 13

Solution: Let us consider the three machines as A, B and C.


A = Turning; B = Threading; C = Knurling
Step 1: Min (Ai, Ci) = (2, 8) and Max (Bi) = 8.
Step 2: Min Ai = 2 is not >= Max Bi = 8, but
        Min Ci = 8 >= Max Bi = 8 is satisfied.
We define two machines G and H
Such that, Gi = Ai + Bi
Hi = Bi + Ci
Job 1 2 3 4 5 6
G 11 18 9 8 12 12
H 21 20 13 18 11 14

We adopt Johnson’s algorithm steps to get the optimum sequence.


4 3 1 6 2 5

Self-Instructional
Material 249
To find the minimum total elapsed time and idle time for machines A, B and C,
Job    Machine A       Machine B       Machine C             Idle time
       In    Out       In    Out       In    Out        A          B          C
 4      0     2         2     8         8    20         –          2          8
 3      2     7         8    12        20    29         –          –          –
 1      7    10        12    20        29    42         –          –          –
 6     10    21        21    22        42    55         –          1          –
 2     21    33        33    39        55    69         –         11          –
 5     33    42        42    45        69    77     77 – 42        3          –
                                                      = 35     77 – 45 = 32
Total                                                  35         49          8

Total elapsed time = 77 minutes
Idle time for machine A = 35 minutes; Idle time for machine B = 49 minutes;
Idle time for machine C = 8 minutes.
Example 10.5: We have five jobs, each of which must go through the machines
A, B and C in the order ABC. Determine the sequence that will minimize the total
elapsed time.
Job No. 1 2 3 4 5
Machine A 5 7 6 9 5
Machine B 2 1 4 5 3
Machine C 3 7 5 6 7

Solution: The optimum sequence can be obtained by converting the problem into
that of two-machines, by using the following steps.
Step 1: Find Min (Ai, Ci) = (5, 3), i = 1, 2, ..., 5.
Step 2: Max (Bi) = 5.
        Min Ai = 5 = Max Bi = 5,
so the condition Min Ai >= Max Bi is satisfied.
We convert the problem into a two-machine problem by defining two machines G
and H, such that the processing time on G and H are given by,
Gi = Ai + Bi,  Hi = Bi + Ci,  i = 1, 2, ..., 5

Self-Instructional
250 Material
Job 1 2 3 4 5
G 7 8 10 14 8
H 5 8 9 11 10

We obtain the optimum sequence by using the steps in Johnson’s algorithm.

5 2 4 3 1

To find the total elapsed time and idle time on three machines.
Job    Machine A       Machine B       Machine C             Idle time
       In    Out       In    Out       In    Out        A          B          C
 5      0     5         5     8         8    15         –          5          8
 2      5    12        12    13        15    22         –          4          –
 4     12    21        21    26        26    32         –          8          4
 3     21    27        27    31        32    37         –          1          –
 1     27    32        32    34        37    40     40 – 32        1          –
                                                      = 8      40 – 34 = 6
Total                                                  8          25         12

Total elapsed time = 40 hours.
Idle time for machine A = 8 hours; Idle time for machine B = 25 hours; Idle
time for machine C = 12 hours.
Example 10.6: A readymade garments manufacturer has to process 7 items
through two stages of production, namely, cutting and sewing. The time taken for
each of these at the different stages are given below in appropriate units.
Item 1 2 3 4 5 6 7

Process Cutting 5 7 3 4 6 7 12

Time Sewing 2 6 7 5 9 5 8

(a) Find an order in which these items are to be processed through these
stages, so as to minimize the total processing time.
(b) Suppose a third stage of production is added, namely, pressing and packing
with the processing time as follows.

Item 1 2 3 4 5 6 7

Processing time 10 12 11 13 12 10 11
(Pressing and
packing)

Find an order in which these seven items are to be processed so as to minimize the
time taken to process all the items through all the three stages.
Solution:
1. We consider the two stages of cutting and sewing by the machines A and B.
The optimum sequence for these 7 items can be given as follows using the
steps involved in Johnson’s algorithm.
3 4 5 7 2 6 1

Job    Machine A         Machine B            Idle time
       In    Out         In    Out            A              B
 3      0     3           3    10             –              3
 4      3     7          10    15             –              –
 5      7    13          15    24             –              –
 7     13    25          25    33             –              1
 2     25    32          33    39             –              –
 6     32    39          39    44             –              –
 1     39    44          44    46        46 – 44 = 2         –
                                               2              4

Total elapsed time = 46 hours.
Idle time for machine A = 2 hours; Idle time for machine B = 4 hours.
2. To get the optimum sequence for including the third stage, namely, pressing
and packing, we use the optimum sequence for three-machine problem by
considering the stage pressing and packing for machine C. We convert the
problem into a two-machine problem using the following steps.
Min (Ai, Ci) = (3, 10) and Max (Bi) = 9, for i = 1, 2, ..., 7.
Since Min Ci = 10 > Max Bi = 9 is satisfied, we convert it into a two-machine
problem with machines G and H such that,
        Gi = Ai + Bi,  Hi = Bi + Ci,  i = 1, 2, ..., 7
Item 1 2 3 4 5 6 7
Machine G 7 13 10 9 15 12 20
Machine H 12 18 18 18 21 15 19

1 4 3 6 2 5 7

To compute the total elapsed time:

Job    Machine A       Machine B       Machine C             Idle time
       In    Out       In    Out       In    Out        A          B          C
 1      0     5         5     7         7    17         –          5          7
 4      5     9         9    14        17    30         –          2          –
 3      9    12        14    21        30    41         –          –          –
 6     12    19        21    26        41    51         –          –          –
 2     19    26        26    32        51    63         –          –          –
 5     26    32        32    41        63    75         –          –          –
 7     32    44        44    52        75    86     86 – 44        3          –
                                                      = 42     86 – 52 = 34
Total                                                  42         44          7
Total elapsed time = 86 hours
Idle time for machine A = 42 hours; Idle time for machine B = 44 hours; Idle
time for machine C = 7 hours.

10.5 PROCESSING OF n JOBS THROUGH m MACHINES
Consider n jobs (1, 2, ..., n) processing through k machines M1 M2 ... Mk in the
same order. The iterative procedure of obtaining an optimal sequence is as follows:
Step 1: Find Min Mi1, Min Mik, and the Max of each of Mi2, Mi3, ..., Mi(k–1), for i = 1, 2, ..., n.
Step 2: Check whether
        Min Mi1 >= Max Mij, for j = 2, 3, ..., k – 1, or
        Min Mik >= Max Mij, for j = 2, 3, ..., k – 1.
Step 3: If neither of the inequalities in Step 2 is satisfied, the method fails; otherwise,
go to the next Step.
Step 4: In addition to Step 2, if Mi2 + Mi3 + ... + Mi(k–1) = C, where C is a positive
fixed constant, for all i = 1, 2, ..., n, then determine the optimal sequence for the n jobs
on the two machines M1 and Mk, in the order M1 Mk, by using the two-machine optimum
sequence algorithm.
Step 5: If the condition Mi2 + Mi3 + ... + Mi(k–1) = C does not hold for all i = 1, 2, ..., n, we
define two machines G and H such that,
        Gi = Mi1 + Mi2 + ... + Mi(k–1)
        Hi = Mi2 + Mi3 + ... + Mik,    i = 1, 2, ..., n.
Determine the optimal sequence of all jobs on G and H using the two-machine optimum sequence algorithm.
Example 10.7: Four jobs 1, 2, 3 and 4 are to be processed on each of the five
machines A, B, C, D and E in the order A B C D E. Find the total minimum
elapsed time if no passing of jobs is permitted. Also find the idle time for each
machine.
                      Jobs
Machines        1    2    3    4
     A          7    6    5    8
     B          5    6    4    3
     C          2    4    5    3
     D          3    5    6    2
     E          9   10    8    6

Solution: Since the problem is to be sequenced on five machines, we convert the
problem into a two-machine problem by adopting the following steps.
Step 1: Find Min (Ai, Ei) = (5, 6) and Max (Bi, Ci, Di) = (6, 5, 6), for i = 1, 2, 3, 4.
Step 2: The inequality Min Ei = 6 >= Max (Bi, Ci, Di) is satisfied. Therefore, we can
convert the problem into a two-machine problem.
Step 3: Since Bi + Ci + Di is not equal to a fixed constant C for all i, we define two
machines G and H such that,
        Gi = Ai + Bi + Ci + Di
        Hi = Bi + Ci + Di + Ei,   i = 1, 2, 3, 4.
Job 1 2 3 4
G 17 21 20 16
H 19 25 23 14

1 3 2 4

Job    Machine A      Machine B      Machine C      Machine D      Machine E
       In   Out       In   Out       In   Out       In   Out       In   Out
 1      0    7         7   12        12   14        14   17        17   26
 3      7   12        12   16        16   21        21   27        27   35
 2     12   18        18   24        24   28        28   33        35   45
 4     18   26        26   29        29   32        33   35        45   51

                   Idle time
    A        B        C        D        E
    –        7       12       14       17
    –        –        2        4        1
    –        2        3        1        –
    –        2        1        –        –
51 – 26  51 – 29  51 – 32  51 – 35      –
 = 25     = 22     = 19     = 16
   25       33       37       35       18

Total elapsed time = 51 hours.
Idle time for machine A = 25 hours; Idle time for machine B = 33 hours; Idle
time for machine C = 37 hours; Idle time for machine D = 35 hours; Idle time for
machine E = 18 hours.
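As with the three-machine case, the m-machine reduction is easy to automate. The short Python sketch below (an illustration added here, not from the original text) forms the fictitious machines G and H from the five-machine data of Example 10.7 and recovers the optimal sequence 1-3-2-4; the johnsons_rule helper is the same compact version used in the earlier sketches.

    # Sketch of the n-jobs, m-machines reduction of Section 10.5 (Example 10.7 data).
    def johnsons_rule(jobs):
        front, back, rem = [], [], dict(jobs)
        while rem:
            j = min(rem, key=lambda k: min(rem[k]))
            t1, t2 = rem.pop(j)
            if t1 <= t2:
                front.append(j)
            else:
                back.insert(0, j)
        return front + back

    # One dict per machine, listed in the processing order A, B, C, D, E.
    machine_times = [
        {1: 7, 2: 6,  3: 5, 4: 8},    # A
        {1: 5, 2: 6,  3: 4, 4: 3},    # B
        {1: 2, 2: 4,  3: 5, 4: 3},    # C
        {1: 3, 2: 5,  3: 6, 4: 2},    # D
        {1: 9, 2: 10, 3: 8, 4: 6},    # E
    ]
    first, last, middle = machine_times[0], machine_times[-1], machine_times[1:-1]
    ok = (min(first.values()) >= max(max(m.values()) for m in middle) or
          min(last.values())  >= max(max(m.values()) for m in middle))
    assert ok, "the reduction of Step 2 does not apply"
    gh = {j: (sum(m[j] for m in machine_times[:-1]),   # G = M1 + ... + M(k-1)
              sum(m[j] for m in machine_times[1:]))    # H = M2 + ... + Mk
          for j in first}
    print(johnsons_rule(gh))    # expected: [1, 3, 2, 4]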

Example 10.8: When passing is not allowed, solve the following problem giving
an optimal solution.
Machine
Job
M1 M2 M3 M4
A 24 7 7 29
B 16 9 5 15
C 22 8 6 14
D 21 6 8 32

Solution: The given problem lists four jobs on four machines. The optimum
sequence can be obtained by converting it into a two-machine problem. The
following steps are adopted to find the optimum sequence.
Step 1: Min (Mi1, Mi4) = (16, 14) and Max (Mi2, Mi3) = (9, 8).
Step 2: Both the inequalities
        Min Mi1 = 16 >= Max (Mi2, Mi3) = (9, 8)
and
        Min Mi4 = 14 >= Max (Mi2, Mi3) = (9, 8)
are satisfied.
Step 3: In addition, we also have Mi2 + Mi3 = 14, a fixed constant, for all i = 1, 2, 3, 4.
So we work with the two machines M1 and M4 in the order M1 M4.

Job A B C D
M1 24 16 22 21
M4 29 15 14 32

D A B C

Job    Machine M1      Machine M2      Machine M3      Machine M4
       In    Out       In    Out       In    Out       In    Out
 D      0    21        21    27        27    35        35    67
 A     21    45        45    52        52    59        67    96
 B     45    61        61    70        70    75        96   111
 C     61    83        83    91        91    97       111   125

                Idle time
    M1       M2       M3       M4
     –       21       27       35
     –       18       17        –
     –        9       11        –
     –       13       16        –
    42       34       28        –
    42       95       99       35

Total elapsed time = 125 hours.
Idle time for machine M1 = 42 hours; Idle time for machine M2 = 95 hours;
Idle time for machine M3 = 99 hours; Idle time for machine M4 = 35 hours.
Example 10.9: Find an optimal sequence for processing nine jobs through the
machines A, B, C in the order ABC. Processing times are given below in hours.
Find the total elapsed time for the optimal sequence.

Jobs          1   2   3   4   5   6   7   8   9
Machine A     4   9   5  10   6  12   8   3   8
Machine B     6   4   8   9   4   6   2   6   4
Machine C    10  12   9  11  14  15  10  14  12
Solution: The optimal sequence can be obtained by converting the problem into a
two-machine problem, using the following steps.
Step 1: Find Min (Ai, Ci) = (3, 9) and Max (Bi) = 9, for i = 1, 2, ..., 9.
Step 2: The inequality Min Ci = 9 >= Max Bi = 9 is satisfied. Hence, we convert the
given problem into a two-machine problem.

Step 3: Define two machines G and H, such that the processing times on G and H
are given by,
        Gi = Ai + Bi,  Hi = Bi + Ci,  i = 1, 2, ..., 9
Job          1   2   3   4   5   6   7   8   9
Machine G   10  13  13  19  10  18  10   9  12
Machine H   16  16  17  20  18  21  12  20  16

8 1 5 7 9 3 2 6 4

Total elapsed time can be calculated from the following table:


Job    Machine A       Machine B       Machine C              Idle time
       In    Out       In    Out       In    Out         A          B          C
 8      0     3         3     9         9    23          –          3          9
 1      3     7         9    15        23    33          –          –          –
 5      7    13        15    19        33    47          –          –          –
 7     13    21        21    23        47    57          –          2          –
 9     21    29        29    33        57    69          –          6          –
 3     29    34        34    42        69    78          –          1          –
 2     34    43        43    47        78    90          –          1          –
 6     43    55        55    61        90   105          –          8          –
 4     55    65        65    74       105   116     116 – 65        4          –
                                                       = 51    116 – 74 = 42
Total                                                   51         67          9

Total elapsed time = 116 hours.
Idle time for machine A = 51 hours; Idle time for machine B = 67 hours; Idle
time for machine C = 9 hours.

10.6 PROCESSING OF TWO JOBS THROUGH m MACHINES
Consider two jobs, each of which is to be processed on K machines M1, M2 ,...,
MK in two different orders. The ordering of each of the two jobs through K machines
is known in advance. Such ordering may not be the same for both the jobs. The
exact or expected processing times on all the given machines are known.
Each machine can perform only one job at a time. The objective is to
determine the optimal sequence of processing the jobs so as to minimize total
elapsed time.
The optimal sequence in this case can be obtained by making use of a graph.
The procedure is given in the following steps.
Step 1: First draw a set of axes, where the horizontal axis represents processing
time on job 1 and the vertical axis represents processing time on job 2.
Step 2: Mark the processing time for job 1 and job 2 on the horizontal and vertical
lines respectively, according to the given order of machines.
Step 3: Construct various blocks starting from the origin (starting point), by pairing
the same machines until the end point.
Step 4: Draw the line starting from the origin to the end point by moving horizontally,
vertically and diagonally along a line which makes an angle of 45º with the
horizontal line (base). The horizontal segment of this line indicates that the
first job is under process while second job is idle. Similarly, the vertical
line indicates that the second job is under process while first job is idle.
The diagonal segment of the line shows that the jobs are under process
simultaneously.
Step 5: An optimum path is one that minimizes the idle time for both the jobs.
Thus, we must choose the path on which diagonal movement is maximum.
Step 6: The total elapsed time is obtained by adding the idle time for either
job to the processing time for that job.

Example 10.10: Use graphical method to minimize the time needed to process
the following jobs on the machines shown below, i.e., for each machine find the
job that should be done first. Also calculate the total time needed to complete both
the jobs.
Job 1 Sequence of machine A B C D E
Time 2 3 4 6 2
Job 2 Sequence of machine C A D E B
Time 4 5 3 2 6

Solution: The given information is shown in the Figure. The shaded blocks represent
the overlaps that are to be avoided.
[Figure: Graphical solution for Example 10.10 – the processing times of Job 1 (machines A, B, C, D, E) are marked along the horizontal axis and those of Job 2 (machines C, A, D, E, B) along the vertical axis; the shaded machine blocks are avoided, and a path of horizontal, vertical and 45° diagonal segments is drawn from the origin to the finish point, the horizontal segments showing the idle time for Job 1.]
An optimal path is one that minimizes the idle time for Job 1 (horizontal
movement) and, similarly, the idle time for Job 2 (vertical movement).
For the elapsed time, we add the idle time for either job to the processing time of that job.
In this problem, the idle time for the chosen path is seen to be 3 hours for
Job 1 and zero for Job 2.
Thus, the total elapsed time = 17 + 3 = 20 hours.
Example 10.11: Use the graphical method to minimize the time needed to process
the following jobs on the machines shown, that is, for each machine find the job
that should be done first. Also calculate the total elapsed time to complete both
the jobs.
Job 1 Sequence of machine A B C D E
Time 3 4 2 6 2
Job 2 Sequence of machine B C A D E
Time 5 4 3 2 6

Solution: The given information is shown in the following Figure. The shaded
blocks represent the overlaps that are to be avoided.
[Figure: Graphical solution for Example 10.11 – Job 1 (machines A, B, C, D, E) on the horizontal axis and Job 2 (machines B, C, A, D, E) on the vertical axis, with the shaded machine blocks avoided and the chosen path drawn from the origin to the finish point.]

An optimal path is one that minimizes the idle time for Job 1 (horizontal
movement). Similarly, an optimal path is one that minimizes the idle time for Job 2
(vertical movement).
For the elapsed time, we add the idle time for either job to the processing time of that job.
In this problem the idle time for the chosen path is 5 hours for Job 1 and 2
hours for Job 2. Therefore, the total elapsed time is obtained as follows.
Processing time of Job 1 + idle time for Job 1 = 17 + (2 + 3) = 22 hours
Processing time of Job 2 + idle time for Job 2 = 20 + 2 = 22 hours.
10.7 MAINTENANCE CREW SCHEDULING

Crew scheduling is the process of assigning crews to operate transportation systems,
such as rail lines or airlines. Most transportation systems use software to manage
the crew scheduling process. Crew scheduling becomes more and more complex
as you add variables to the problem. These variables can be as simple as 1 location,
1 skill requirement, 1 shift of work and 1 set roster of people. For example, in air
lines crew scheduling consists of deciding the flight schedules of the crew. This is
done according to their qualification in flying certain types of fleets (aircraft type
rating), by respecting labor and contractual rules and minimizing crew expenses.
Crew expenses are wages and overnight costs while away from their crew base.
In the Transportation industries, such as Rail or mainly Air Travel, these variables
become very complex. In Air travel for instance, there are numerous rules or
‘Constraints’ that are introduced. Basically, these deal with legalities relating to
work shifts and time, and crew members qualifications for working on a particular
aircraft. Fuel is also a major consideration as aircrafts and other vehicles require a
lot of costly fuel to operate. Finding the most efficient route and staffing it with
properly qualified personnel is a critical financial consideration. The same applies
to rail travel.
The problem is computationally difficult and various competent mathematical
methods are used to solve the problem. Within a set of constraints and rules, move
a set roster of people with certain qualifications, from place to place with the least
amount of personnel and aircraft or vehicles in the least amount of time. Lowest
cost has traditionally been the major driver for any crew scheduling solution. The
following four equations are must for the computational process:
 People and their qualifications and abilities.
 Aircraft or vehicles, qualification requirements of people and their cost to
operate over distance.
 Locations and the time, and distance between each location.
 Work rules for the personnel, including shift hours and seniority.
In crew scheduling the rules and constraints are typically a combination of the
following:
 Government regulations concerning flight time, duty time and required rest,
designed to promote aviation safety and limit crew fatigue.
 Crew bid requests and vacations.
 Labor agreements.
 Aircraft maintenance schedules.
 Crew member qualification and licensing.
 Other constraints related to training.
 Pairing experienced crew members with more junior crew members.
 Returning crew to their base at the end of their trip, called deadheading.
All of these issues must be addressed in order to create a satisfactory solution for
personnel and management of the organization. For crew members in a seniority-based
system, schedules are decided largely on workplace seniority. Those at the
top of a seniority list are allowed some choices. As assignments are made and the
remaining roster of personnel becomes fewer, managements’ systems start to assign
the remaining trips based on a weighting of the 4 previously mentioned variables,
without any input from personnel. This does not allow the personnel to have any
choice or voice in the schedules they receive. This lack of scheduling awareness
until the end of each scheduling period is a major workforce issue and an employee
morale problem, often creating a tenuous situation especially where a collective
bargaining agreement is in place and particularly at negotiation time. Crew members
and management can interact with schedules and data in a real time Web interface.
Status can be seen and acted upon as it develops and changes rather than requesting
or bidding for a schedule once then waiting to see the outcome. Additional unplanned
disruptions in schedules due to weather and air traffic control delays can disrupt
schedules, so crew scheduling software remains an area for ongoing research.
Need for Integrated Optimization
The need for optimization integration comes from the fact that crew of a specific
type rating can only fly a subset of available aircraft types. As a result fleet assignment
decisions influence the cost of crew scheduling. Another reason for using optimization
integration is the fact that it is possible for the crew to remain on the same aircraft
instead of commuting within the airport for their next flight. The aircraft needed for
their next flight is predetermined by the aircraft routing, thus once more upstream
decisions influence crew scheduling.
Scheduling (Integrated Optimization): Airlines typically need to schedule their
flights, aircraft and crew. Due to its mathematical complexity, airline scheduling
has traditionally been broken into the following stages:
 Fleet assignment
 Aircraft rotations with maintenance
 Crew scheduling
Airline Crew Scheduling: From Planning to Operations
Crew scheduling problems at the planning level are typically solved in two steps:
first, creating working patterns and then assigning these to individual crew. The
first step is solved with a set covering model and the second with a set partitioning
model. At the operational level, the re-planning period is considerably smaller
than during the strategic planning phase. Both models are integrated to solve time
critical crew recovery problems arising on the day of operations.

Check Your Progress
11. Describe the first three steps for processing n jobs through two machines.
12. Define the first three steps for processing n jobs through three machines.
13. What are the first three steps for processing n jobs through m machines?
14. Write down the first three steps for processing two jobs through m machines.
15. What do you understand by the maintenance crew scheduling?
16. Define the need for integrated optimization.

10.8 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. Sequencing models determine an appropriate order (sequence) for a series
of jobs to be done on a finite number of service facilities in some pre-
assigned order, so as to optimize the total cost (time) involved.
2. Suppose there are n jobs (1, 2, ..., n), each of which has to be processed
one at a time at m machines (A, B, C, ...). The order of processing each job
through each machine is given. The problem is to find a sequence among
(n!)^m number of all possible sequences for processing the jobs so that the
total elapsed time for all the jobs will be minimum.
3. The following are the terminologies and notations used in this unit.
Number of Machines: It means the service facilities through which a job
must pass before it is completed.
Processing Order: It refers to the order in which various machines are
required for completing the job.
Processing Time: It means the time required by each job on each machine.
Idle Time on a Machine: This is the time for which a machine remains idle
during the total elapsed time. The notation xij is used to denote the idle time
of a machine j between the end of the (i – 1)th job and the start of the ith
job.
Total Elapsed Time: This is the time between starting the first job and
completing the last job, which also includes the idle time, if present.
No Passing Rule: It means, passing is not allowed, i.e., maintaining the same
order of jobs over each machine. If each of N-jobs is to be processed
through 2 machines M1 and M2 in the order M1 M2, then this rule will mean
that each job will go to machine M1 first and then to M2. If a job is finished
on M1, it goes directly to machine M2 if it is free, otherwise it starts a waiting
line or joins the end of the waiting line, if one already exists. Jobs that form
a waiting line are processed on machine M2 when it becomes free.
4. (i) No machine can process more than one operation at a time.
(ii) Each operation once started must be performed till completion.
(iii) Each operation must be completed before starting any other operation.
(iv) Time intervals for processing are independent of the order in which
operations are performed.
(v) There is only one machine of each type.
(vi) A job is processed as soon as possible, subject to the ordering
requirements.
(vii) All jobs are known and are ready for processing, before the period
under consideration begins.
(viii) The time required to transfer jobs between machines is negligible.
5. Job sequencing is basically the planning of the jobs in sequential manner
and is an essential part of any work. Without proper planning and scheduling
one can not achieve the desired output and profit. For sequencing a job,
generally the two techniques are used termed as Priority Rules and Johnson’s
Rules. Priority rules give the guidelines for properly sequencing the job,
where as Johnson’s rule is used to minimize the completion time for a set of
jobs to be done on two different machines. Using these rules one can assign
jobs and maximize product and profit.
6. (i) Only one single job should be scheduled for a machine at a time.
(ii) Do not stop the process in between before completion.
(iii) New processing can be started after the completion of the previous
processing.
(iv) Any job is scheduled for processing as per the order and due date
requirements.
(v) If the jobs are transferred from one machine to another due to some
reason, then the time involved in transferring the jobs is considered
negligible.
7. Priority Rules: These rules are used to get specific guidelines for job
sequencing. The rules do not consider job setup cost and time while analysing
processing times. In it job processing time and due dates are given
importance because the due dates are fixed to give delivery in time to the
customers. The rules are very useful for process-focussed amenities, for
example health clinics, print shops and manufacturing industries. Hence,
priority rules minimize the time for completing a job, sequences the jobs in
the organization, checks if any job is late and maximizes resource utilization.
8. Johnson’s Rule: This rule is applied to minimize the completion time for a
set of jobs that are to be processed on two different machines or at two
consecutive work stations. The main objectives of the rules are,
 To minimize the processing time while sequencing a set of jobs on two
different machines or work stations.
Sequencing Problem  To minimize the complete idle time on the processing machines.
 To minimize the flow time of the job, i.e., from the start of the first job
until the completion of the last job.
9. Necessary Conditions for Johnson’s Rules: The necessary conditions
to efficiently complete the processing of the jobs are as follows:
 Knowledge about job time for each job at the specific work station.
 Job time must not depend on sequencing of jobs.
 All the jobs to follow the predefined work sequence.
 Avoid job priority.
10. Step 1: List all the jobs and the processing time of each machine to which
these jobs are scheduled.
Step 2: Choose the job which has the shortest processing time. If the shortest
time has been scheduled on the first machine or work station then the job is
selected first for processing. In case the shortest time is scheduled on the
second machine or work station then the job is processed at the end.
Step 3: After scheduling the job for processing go to Step 4.
Step 4: Repeat Step 2 again to schedule the processing of remaining jobs
and fill the sequence columns towards the centre till all the jobs are scheduled.
11. Step 1: Select the least processing time occurring in the list A1 A2 ... An and
B1 B2 ... Bn. Let this minimum processing time occur for a job K.
Step 2: If the shortest processing is for machine A, process the Kth job first
and place it in the beginning of the sequence. If it is for machine B, process
the Kth job last and place it at the end of the sequence.
Step 3: When there is a tie in selecting the minimum processing time, then
there may be three solutions.
(i) If the equal minimum values occur only for machine A, select the job
with larger processing time in B to be placed first in the job sequence.
(ii) If the equal minimum values occur only for machine B, select the job
with larger processing time in A to be placed last in the job sequence.
(iii) If there are equal minimum values, one for each machine, then place
the job in machine A first and the one in machine B last.
12. Step 1: Find the minimum processing time for the jobs on the first and last
machine and the maximum processing time for the second machine.
i.e., find Min (Ai, Ci) and Max (Bi), for i = 1, 2, ..., n.
Step 2: Check the following inequality:
        Min Ai >= Max Bi
or
        Min Ci >= Max Bi
Step 3: If none of the inequalities in Step 2 is satisfied, this method cannot
be applied.
13. Step 1: Find Min Mi1, Min Mik, and the Max of each of Mi2, Mi3, ..., Mi(k–1),
for i = 1, 2, ..., n.
Step 2: Check whether
        Min Mi1 >= Max Mij, for j = 2, 3, ..., k – 1, or
        Min Mik >= Max Mij, for j = 2, 3, ..., k – 1.
Step 3: If the inequalities in Step 2 are not satisfied, the method fails,
otherwise, go to the next Step.
14. Step 1: First draw a set of axes, where the horizontal axis represents
processing time on job 1 and the vertical axis represents processing time on
job 2.
Step 2: Mark the processing time for job 1 and job 2 on the horizontal and
vertical lines respectively, according to the given order of machines.
Step 3: Construct various blocks starting from the origin (starting point), by
pairing the same machines until the end point.
15. Crew scheduling is the process of assigning crews to operate transportation
systems, such as rail lines or airlines. Most transportation systems use
software to manage the crew scheduling process. Crew scheduling becomes
more and more complex as you add variables to the problem. These variables
can be as simple as 1 location, 1 skill requirement, 1 shift of work and 1 set
roster of people. For example, in air lines crew scheduling consists of deciding
the flight schedules of the crew. This is done according to their qualification
in flying certain types of fleets (aircraft type rating), by respecting labor and
contractual rules and minimizing crew expenses.
16. The need for optimization integration comes from the fact that crew of a
specific type rating can only fly a subset of available aircraft types. As a
result fleet assignment decisions influence the cost of crew scheduling.
Another reason for using optimization integration is the fact that it is possible
for the crew to remain on the same aircraft instead of commuting within the
airport for their next flight. The aircraft needed for their next flight is
predetermined by the aircraft routing, thus once more upstream decisions
influence crew scheduling.
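Answers 10–13 above describe Johnson’s two-machine rule and the condition under which a problem with three or more machines can be reduced to an equivalent two-machine problem. The following is a minimal illustrative sketch in Python (an addition, not part of the original text; the function names and the simplified tie-breaking are assumptions):

def johnsons_sequence(times):
    """Johnson's rule for n jobs processed first on machine A, then on machine B.

    times: list of (a_i, b_i) processing-time pairs.
    Returns the job indices in the order they should be processed.
    """
    remaining = list(range(len(times)))
    front, back = [], []
    while remaining:
        # Shortest processing time among all remaining jobs, on either machine.
        job = min(remaining, key=lambda j: min(times[j]))
        a, b = times[job]
        if a <= b:
            front.append(job)      # shortest time on machine A: place as early as possible
        else:
            back.insert(0, job)    # shortest time on machine B: place as late as possible
        remaining.remove(job)
    return front + back

def reducible_to_two_machines(matrix):
    """Condition of answers 12 and 13: matrix[i][j] is the time of job i on machine j."""
    first = min(row[0] for row in matrix)
    last = min(row[-1] for row in matrix)
    middle = max(t for row in matrix for t in row[1:-1])
    return first >= middle or last >= middle

# The six-job data of Long-Answer Question 2 below (Machine I, Machine II).
print(johnsons_sequence([(5, 7), (9, 4), (4, 8), (7, 3), (8, 9), (6, 5)]))

The tie rules of answer 11 (Step 3) can be incorporated by refining the key used in min().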

10.9 SUMMARY
 Sequencing models determine an appropriate order (sequence) for a series
of jobs to be done on a finite number of service facilities in some pre-
assigned order, so as to optimize the total cost (time) involved.
 Suppose there are n jobs (1, 2, ..., n), each of which has to be processed
one at a time at m machines (A, B, C, ...). The order of processing each job
through each machine is given. The problem is to find a sequence among
(n!)m number of all possible sequences for processing the jobs so that the
total elapsed time for all the jobs will be minimum.
 Number of Machines: It means the service facilities through which a job
must pass before it is completed.
 Processing Order: It refers to the order in which various machines are
required for completing the job.
 Processing Time: It means the time required by each job on each machine.
 Idle Time on a Machine: This is the time for which a machine remains idle
during the total elapsed time. The notation xij is used to denote the idle time
of a machine j between the end of the (i – 1)th job and the start of the ith
job.
 Total Elapsed Time: This is the time between starting the first job and
completing the last job, which also includes the idle time, if present.
 Job sequencing is basically the planning of the jobs in sequential manner
and is an essential part of any work. Without proper planning and scheduling
one can not achieve the desired output and profit. For sequencing a job,
generally the two techniques are used termed as Priority Rules and Johnson’s
Rules.
 Priority Rules: These rules are used to get specific guidelines for job
sequencing. The rules do not consider job setup cost and time while analysing
processing times. In it job processing time and due dates are given
importance because the due dates are fixed to give delivery in time to the
customers.
 Johnson’s Rule: This rule is applied to minimize the completion time for a set
of jobs that are to be processed on two different machines or at two
consecutive work stations.
 Crew scheduling is the process of assigning crews to operate transportation
systems, such as rail lines or airlines. Most transportation systems use
software to manage the crew scheduling process. Crew scheduling becomes
more and more complex as you add variables to the problem.

10.10 KEY WORDS


 Sequencing models: Sequencing models determine an appropriate order
(sequence) for a series of jobs to be done on a finite number of services
facilities in some pre-assigned order, so as to optimize the total cost (time)
involved.
 Number of machines: It means the service facilities through which a job

must pass before it is completed.


 Processing order: It refers to the order in which various machines are
required for completing the job.
 Processing time: It means the time required by each job on each machine.
 Idle time on a machine: This is the time for which a machine remains idle
during the total elapsed time.
 Total elapsed time: This is the time between starting the first job and
completing the last job, which also includes the idle time, if present.
 No passing rule: It means, passing is not allowed, i.e., maintaining the
same order of jobs, over each machine.
 Priority rules: These rules are used to get specific guidelines for job
sequencing. The job rules do not consider job setup cost and time while
analysing processing times.
 Johnson’s rule: This rule is applied to minimize the completion time for a
set of jobs.
 Maintenance crew scheduling: Crew scheduling is the process of assigning
crews to operate transportation systems, such as rail lines or airlines.

10.11 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. What is no passing rule in a sequencing algorithm?
2. State the principle assumptions made while dealing with a sequencing
problem.
3. What is sequencing problem?
4. Define crew scheduling.
Long-Answer Questions
1. Explain the graphical method to solve two jobs on machines with given
technological ordering for each job. What are the limitations of the method?
2. Six jobs go first through machine I and then through machine II. The order
of completion of jobs has no significance. The following gives the machine
times in hours, for six jobs and the two machines.
Job 1 2 3 4 5 6
Machine I 5 9 4 7 8 6
Machine II 7 4 8 3 9 5
Find the sequence of jobs that minimizes the total elapsed time to complete
the jobs.
3. We have seven jobs, each of which has to go through the machines M1 and
M2 in the order M1 M2. Processing time (in hours) are given as:
Job 1 2 3 4 5 6 7
M1 3 12 15 6 10 11 9
M2 8 10 10 6 12 1 3
Determine a sequence of these jobs that will minimize the total elapsed
time.
4. Find the sequence that minimizes the total elapsed time required to complete
the following tasks.
Tasks A B C D E F G
Time on machine I 3 8 7 4 9 8 7
Time on machine II 4 3 2 5 1 4 3
Time on machine III 6 7 5 11 5 6 12
5. Find the sequence for the following eight jobs that will minimize the total
elapsed time for the completion of all the jobs. Each job is processed in the
same order.
Time for Jobs
machines 1 2 3 4 5 6 7 8
A 4 6 7 4 5 3 6 2
B 8 10 7 8 11 8 9 13
C 5 6 2 3 4 9 15 11
The entries give the time in hours on the machine.
6. We have 4 jobs, each of which has to go through the machines Mj, j = 1, 2,
..., 6 in the order M1 M2 ... M6.
Processing time (in hours) is given below.
M1 M2 M3 M4 M5 M6
Job A 18 8 7 2 10 25
Job B 17 6 9 6 8 19
Job C 11 5 8 5 7 15
Job D 20 4 3 4 8 12
Determine a sequence of these four jobs that minimizes the total elapsed
time.

7. Two jobs are to be processed on four machines A, B, C and D. The

technological order for these machines is as follows.


Job 1 A B C D
Job 2 D B A C
Processing periods (time) are given in the following table.
Machines
A B C D
Job 1 4 6 7 3
Job 2 4 7 5 8
Find the optimal sequence of jobs on each of the machines.
8. What are the rules and constraints of crew scheduling made up of? Elaborate
with the help of examples.

10.12 FURTHER READINGS

Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New


Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.
Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.
Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata
McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi:
Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.

UNIT 11 GAME THEORY


Structure
11.0 Introduction
11.1 Objectives
11.2 Game Theory
11.3 Basic Terms in Game Theory
11.4 Two-Person Zero-Sum Games
11.4.1 Sum Games
11.5 The Maximin-Minimax Principle
11.6 Answers to Check Your Progress Questions
11.7 Summary
11.8 Key Words
11.9 Self Assessment Questions and Exercises
11.10 Further Readings

11.0 INTRODUCTION

Game theory is the study of mathematical models of strategic interaction


among rational decision-makers. It has applications in all fields of social science,
as well as in logic, systems science and computer science. Originally, it
addressed zero-sum games, in which each participant’s gains or losses are exactly
balanced by those of the other participants. In the 21st century, game theory
applies to a wide range of behavioural relations, and is now an umbrella term for
the science of logical decision making in humans, animals, and computers.
Modern game theory began with the idea of mixed-strategy equilibria in
two-person zero-sum games and its proof by John von Neumann. Von Neumann’s
original proof used the Brouwer fixed-point theorem on continuous mappings into
compact convex sets, which became a standard method in game theory
and mathematical economics. His paper was followed by the 1944 book Theory
of Games and Economic Behaviour, co-written with Oskar Morgenstern, which
considered cooperative games of several players. The second edition of this book
provided an axiomatic theory of expected utility, which allowed mathematical
statisticians and economists to treat decision-making under uncertainty.
Game theory was developed extensively in the 1950s by many scholars. It
was explicitly applied to evolution in the 1970s, although similar developments go
back at least as far as the 1930s. Game theory has been widely recognized as an
important tool in many fields. As of 2014, with the Nobel Memorial Prize in
Economic Sciences going to game theorist Jean Tirole, eleven game theorists have
won the economics Nobel Prize. John Maynard Smith was awarded the Crafoord
Prize for his application of evolutionary game theory.
Zero-sum games are a special case of constant-sum games in which choices
by players can neither increase nor decrease the available resources. In zero-sum
games, the total benefit to all players in the game, for every combination of

strategies, always adds to zero (more informally, a player benefits only at the equal
expense of others). Poker exemplifies a zero-sum game (ignoring the possibility
of the house’s cut), because one wins exactly the amount one’s opponents lose.
Other zero-sum games include matching pennies and most classical board games NOTES
including Go and chess.
Frequently, in game theory, maximin is distinct from minimax. Minimax is
used in zero-sum games to denote minimizing the opponent’s maximum payoff. In
a zero-sum game, this is identical to minimizing one’s own maximum loss, and to
maximizing one’s own minimum gain. “Maximin” is a term commonly used for
non-zero-sum games to describe the strategy which maximizes one’s own minimum
payoff. In non-zero-sum games, this is not generally the same as minimizing the
opponent’s maximum gain, nor the same as the Nash equilibrium strategy.
Minimax (sometimes MinMax, MM or saddle point) is a decision rule used
in artificial intelligence, decision theory, game theory, statistics,
and philosophy for minimizing the possible loss for a worst case (maximum loss)
scenario. When dealing with gains, it is referred to as “Maximin”—to maximize
the minimum gain. Originally formulated for n-player zero-sum game theory, covering
both the cases where players take alternate moves and those where they make
simultaneous moves, it has also been extended to more complex games and to
general decision-making in the presence of uncertainty.
In this unit, you will study game theory, the basic terms of game theory,
two-person zero-sum games, and the maximin-minimax principle.

11.1 OBJECTIVES

After going through this unit, you will be able to:
 Understand the concept and significance of game theory
 Analyse the basic terms of game theory
 Define two-person zero-sum games
 Explain the maximin-minimax principle

11.2 GAME THEORY

Competition is a watchword of modern life. We say that a competitive situation


exists if two or more individuals are making decisions in situation that involves
conflicting interests and in which the outcome is controlled by the decisions of all
parties concerned. We assume that in a competitive situation, each participant acts
in a rational manner and tries to resolve the conflict of interests in his favour. It is in
this context that game theory has developed. Professor John von Neumann and
Oskar Morgenstern published their book entitled ‘The Theory of Games and
Economic Behaviour’ wherein they provided a new approach to many problems
involving conflict situations—an approach now widely used in Economics, Business
Administration, Sociology, Psychology and Political Science as well as in Military
Training. Fundamentally, the theory of games attempts to provide an answer to the
question: What may be considered a rational course of action for an individual
confronted with a situation whose outcome depends not only upon his own actions
but also upon the actions of others, who in turn are faced with a similar problem
of choosing a rational course of action? In fact, the theory of games is simply the
logic of rational decisions.
The term ‘Game’ represents a conflict between two or more parties. Game
theory is really the ‘Science of Conflict’. It is not concerned with finding an optimum
or winning strategy for a particular conflict situation but it provides general rules
concerning the logic that underlies strategic behaviour of all types.
Game theory applies to those competitive situations that are technically
known as ‘Competitive Games’ or simply ‘Games’. Situations, in order to be
termed games, must possess the following properties:
(i) The number of competitors is finite.
(ii) There is a conflict of interests between the participants.
(iii) Each of the participants has available to him a finite list of possible
courses of action, i.e., several choices of appropriate actions; this list
being not necessarily the same for each competitor.
(iv) The rules governing these choices are specified and known to all the
players; a play of the game results when each of the players chooses
a single course of action from the list of courses available to him.
(v) The outcome of the game is affected by the choices made by all the
players; the choices are made simultaneously so that no competitor
knows his opponent’s choice until he is already committed to his own.
(vi) The outcome for all the specific sets of choices by all the players is
known in advance and numerically defined. The outcome of a play
consists of the particular set of courses of action undertaken by the
competitors. Each outcome determines a set of payments (+ve, –ve
or zero), one to each competitor.
Illustration of a Game
When a competitive situation meets all the above stated criteria, we can call it
a game. This can be made clear by an example of a simple game. Suppose
there are two opponents X and Y. We can think of them as sitting across a
table from each other and each with two buttons in front of him. We shall
denote player X’s buttons as m and n and player Y’s buttons as r and t; thus
each player has two choices open to him. We also presume a partition between
them so that neither can see in advance which button his opponent is going to

press. At a signal from a third party, each player presses one of his buttons.

The results of each of the possible four combinations is known in advance to


both of the players; the uncertainty inherent in the game arises from the fact
that neither player knows what button his opponent will press next. Every
time the third party signals, each player presses one of his buttons and the
game thus continues. At the end of, say, a hundred ‘plays’, the game is over
and the points won by each of the players are totalled and the winner is
determined. It is assumed that both players are of equal intelligence and that
each actively attempts to win the game.
This sort of simple game can be illustrated in tabular form as follows:
A Simple Game in Tabular Form
                        Player Y
                  Button r           Button t
         Button m  X wins 2 points    X wins 3 points
Player X
         Button n  Y wins 3 points    Y wins 1 point

In the above table, the plays open to each of the opponents and the resulting
gains or losses (called payoffs) have been shown. This game is biased against
Player Y because if Player X presses button m for each play of the game, Player Y
cannot win; in fact, Player Y faces a choice between losing 2 points on each play
if he responds by pressing Button r or losing 3 points on each play if he responds
by pressing Button t. Player Y will respond each time by pressing Button r since
this represents his least loss alternative. Thus, Player X will win 2 points on each
play of the game. We can similarly present a game biased against Player X wherein
X has no chance of winning and can at best minimize his losses, like Player Y in the
above stated case.
But all simple games cannot be said to have such simple solutions. This can
be illustrated by the following example of a game:
A Simple Game in Tabular Form
                        Player Y
                  Button r           Button t
         Button m  Y wins 4 points    X wins 3 points
Player X
         Button n  X wins 1 point     Y wins 2 points
In the above case, it is not as simple as it was in the earlier example to
determine the individual player’s strategies. We can see from the game that Player
X would not press his button m, hoping to win 3 points on each play simply
because Player Y could counter with button r and win 4 points himself. Similarly,
Player Y would not press button r hoping to win 4 points on every play because
Player X could counter with his button n and win 1 point himself. In such a situation,
therefore, it is advantageous for the players to play each of their choices (buttons)
a part of the time only. How to calculate the proportion of time to allot to each
choice shall be discussed a little later.
Standard Conventions in Game Theory
In order to eliminate the necessity for written descriptions of the ‘payoffs’ (as we
have shown in the above two examples), a standard set of conventions has been
established in game theory. It is the usual practice to omit a description like ‘Player
X wins two points’ and replace it with integer 2. The positive algebraic sign which
is assumed to accompany this number indicates that it is Player X who benefits
from this payoff. Similarly, instead of saying ‘Player Y wins three points’ one simply
indicates this with the value –3, the minus sign indicating that it is Player Y who
benefits from this particular payoff.
Another standard convention that is usually followed is that Player X has
choices between the rows and Player Y has choices between the columns.
Keeping these two conventions in view we can write the above stated two
illustrations of games as follows:
Illustration 1                          Illustration 2
            Player Y                                 Player Y
Player X     2     3                    Player X     –4     3
            –3    –1                                  1    –2
A further convention in game theory is known as matrix notation. In this
case, games are represented in the form of a matrix (a rectangular array of numbers
deriving from matrix algebra). When games are expressed in this fashion, the resulting
matrix is commonly known as a payoff matrix. The above stated two illustrations
can be put in the form of payoff matrices as follows:
Illustration 1                          Illustration 2
            Player Y                                 Player Y
Player X     2     3                    Player X     –4     3
            –3    –1                                  1    –2
The term strategy is often talked about in game theory. It refers to the
total pattern of choices employed by any player. It may be defined as a complete
set of plans of action specifying precisely what the player will do under every
possible future contingency that might occur during the play of the game. For
example, if Player X chooses to play his first row half of the time and his second

row half of the time, his strategy for the game is 1/2, 1/2. Thus, the strategy of a
player is the decision rule he uses for making the choice from his list of courses
of action. The strategy could be a pure or mixed one. When a player plays one
row all the time (or one column all the time) he is said to be adopting a pure
strategy. In a mixed strategy, Player X will play each of his rows a certain part of
the time and Player Y will play each of his columns a certain part of the time. In
business, a close analogy is when, for example, a manager follows a certain
course of action A until an alternate course of action B appears to be more
profitable. Later on, should Action A appear more attractive again, the manager
switches back to it.
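In code, the payoff conventions described in this section can be kept exactly as they are: the matrix stores Player X’s payoffs, rows are X’s choices and columns are Y’s. A small illustrative sketch (not part of the original text), using the two illustrations above:

# Payoff matrices under the usual convention: a positive entry is a gain for
# Player X and a negative entry a gain for Player Y.
illustration_1 = [[2, 3],
                  [-3, -1]]
illustration_2 = [[-4, 3],
                  [1, -2]]

# In a zero-sum game, Player Y's payoff matrix is simply the negative of X's.
payoffs_to_Y = [[-entry for entry in row] for row in illustration_1]
print(payoffs_to_Y)   # [[-2, -3], [3, 1]]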

11.3 BASIC TERMS IN GAME THEORY

Games can be of several types. The important ones are as follows:


1. Two-person games and n-person games. In two-person games, the players
may have many possible choices open to them for each play of the game
but the number of players remains only two. But games can also involve
many people as active participants, each with his own set of choices for
each play of the game. A game of three players can be named as three-
person game. Thus, in case of more than two persons, the game is generally
named an n person game.
2. Zero-sum and non-zero-sum games. A zero-sum game is one in which the
sum of the payments to all the competitors is zero for every possible outcome
of the game. In other words, in such a game the sum of the points won
equals the sum of the points lost, i.e., one player wins at the expense of the
other (others). A two-person matrix game is always a zero-sum game since
one player loses what the other wins. In a non-zero-sum game, the sum of
the payoffs from any play of the game may be either positive or negative but
not zero. A well known example of a non-zero-sum game is the case of two
competing firms, each with a choice regarding its advertising campaign. We
may assume that the market is quite small so that sales remain at nearly the
same level regardless of whether the firms engage in advertising or not. If
neither firm advertises, they both save the cost of advertising and thus each
receives a positive benefit or payoff (representing a non-zero-sum outcome).
In the event both firms spend equal amounts of money on advertisement,
neither firm’s sales will increase (given the equal quality of the advertising
campaigns) and thus both firms will have incurred the cost of advertising
without any benefits. Since both firms will lose money, this is clearly a non-
zero-sum outcome.
3. Games of perfect information and games of imperfect information.
Whatever strategy is adopted by either player, if the same can also be
discovered by his competitor, then such games are known as games of
perfect information. In games of imperfect information, neither player knows
the entire situation and must be guided in part by guess work as to what the
real situation is.
4. Games with finite, i.e., limited number of moves (or plays) and games
with unlimited number of moves. Games with finite number of moves are
those where the number of moves is limited to a fixed magnitude before
play begins. If the game could continue over an extended period of time
and no limit is put on the number of moves, it is referred to as a game with
an unlimited number of moves.
5. Constant-sum games. In some situations, a zero-sum game is correctly
referred to as a constant-sum game. Take, for example, a competitive struggle
for market share by two firms (a duopoly). In such a case, every percentage
point gained by one of the firms is necessarily lost by the other. This situation
is ordinarily called a two-person, zero-sum game. But strictly speaking, this
is a constant-sum rather than a zero-sum game because the sum of the two
market shares is the fixed number 100 (per cent), not zero. It should,
however, be remembered that ‘There is no significant analytic difference
between the constant-sum and the zero-sum games. This is because there
is no change in strategic possibilities from a given game to another game in
which some constant amount that cannot be changed by players is added
to the original payoffs.’
6. 2 × 2 two-person games and 2 × m and m × 2 games. Two person zero-
sum games with only two choices open to each player are denoted as 2 ×
2 two person games but games in which one of the players has more than
two choices of rows or columns and in which the other player has exactly
two choices are referred to as m × 2 or 2 × m games respectively.
7. 3 × 3 and larger games. Two-person zero-sum games can be of size 3 ×
3 and larger. If each of the two players has three choices then the game is of
3 × 3 type and if the choices open to any of the player or to both the players
are more than three, the game is referred to as of a larger size. 3 × 3 and
larger games quite often present complications and in such situations, linear
programming as a solution method may allow us to find the optimum
strategies.
8. Negotiable (or cooperative) and non-negotiable (or non-cooperative)
games. Negotiation amongst players is possible in n-person and non-zero-
sum games but the same is not necessarily required. On this basis, we can
divide the analysis of such games into two parts—games in which the
participants can negotiate and games in which negotiation is not permitted.
The former type of games are known as negotiable games and the latter are
known as non-negotiable games.


Check Your Progress


1. What do you understand by the game theory?
2. How can we illustrate a game?
3. State the standard conventions in game theory.
4. Explain the two-person games and n- person games.
5. Define the zero-sum and non-zero-sum games.
6. What are constant-sum games?
7. Elaborate on the 3 × 3 and larger games.

11.4 TWO-PERSON ZERO-SUM GAMES

In game theory, a zero-sum game is a mathematical representation of a situation in


which each participant’s gain or loss of utility is exactly balanced by the losses or
gains of the utility of the other participants. Alternatively, we can say that the zero-
sum games are the games in which one player’s win is the other player’s loss. If
the total gains of the participants are added up and the total losses are subtracted,
then they will sum to zero. Thus, for example, cutting a cake, where taking a larger
piece reduces the amount of cake available for others as much as it increases the
amount available for that taker, is a zero-sum game if all participants value each
unit of cake equally.
In contrast, non-zero-sum describes a situation in which the interacting
parties’ aggregate gains and losses can be less than or more than zero. A zero-sum
game is also called a strictly competitive game while non-zero-sum games can be
either competitive or non-competitive. Zero-sum games are most often solved
with the ‘Minimax Theorem’ which is closely related to linear programming duality
or with Nash equilibrium. Many mathematicians have a reasoning prejudice
towards seeing situations as zero-sum, known as zero-sum bias.
Definition: A two player game is called a zero-sum game if the sum of the
payoffs to each player is constant for all possible outcomes of the game. More
specifically, the terms or coordinates in each payoff vector must add up to the
same value for each payoff vector. Such games are sometimes called constant-
sum games as an alternative.
The zero-sum property (if one gains, another loses) means that any result of
a zero-sum situation is Pareto optimal. Generally, any game where all strategies
are Pareto optimal is called a conflict game. Following is the example of generic
zero-sum game:
Choice 1 Choice 2

Choice 1 −A, A B, −B

Choice 2 C, −C −D, D
Zero-sum games are a specific example of constant-sum games where the
sum of each outcome is always zero. Such games are distributive, not integrative;
the pie cannot be enlarged by good negotiation.
Situations where participants can all gain or suffer together are referred to
as non-zero-sum. Other non-zero-sum games are games in which the sum of
gains and losses by the players are sometimes more or less than what they began
with.
The idea of Pareto optimal payoff in a zero-sum game gives rise to a
generalized relative selfish rationality standard, the punishing-the-opponent
standard, where both players always seek to minimize the opponent’s payoff at a
favourable cost to himself rather than prefer more over less. The punishing-the-
opponent standard can be used in both zero-sum games, for example warfare
game and chess, and non-zero-sum games, for example pooling selection games.
Basic Concepts of Two-Person Zero-Sum Games
Following are the basic and significant concepts of simple two-person zero-sum
games:
 A two-person game is characterized by the strategies of each player and
the payoff matrix.
 The payoff matrix shows the gain (positive or negative) for player 1 that
would result from each combination of strategies for the two players.
Remember that the matrix for player 2 is the negative of the matrix for
player 1 in a zero-sum game.
 The entries in the payoff matrix can be in any units as long as they represent
the utility (or value) to the player.
 There are two key assumptions about the behaviour of the players. The
first is that both players are rational. The second is that both players are
materialistic meaning that they choose their strategies in their own interest.
11.4.1 Sum Games
In game theory, the concept ‘Value of a Game’ is considered very important. It
refers to the average payoff per play of the game over an extended period of time.
This can be explained by an example. Suppose the two games are as follows:
          Game One                            Game Two
  (With a Positive Value)              (With a Negative Value)
            Player Y                               Player Y
Player X     3     4                   Player X    –7     2
            –6     2                               –3     1
In game one, Player X would play his first row on each play of the game and

Y would respond by playing his first column each time in order to minimize his
losses. Since Player X wins three points on each play of the game, his average
winnings per play will also be three as long as the game is played. The value of the
game is thus three with an implicit positive algebraic sign which denotes that Player
X wins the game. But in game two, Player Y plays his first column on each play of
the game and Player X responds by playing his second row on each play of the
game in order to minimize losses. As a result, Y wins 3 points on each play of the
game. Since the average payoff per play is –3, the value of this game is –3, the
minus sign indicating that Y is the winner.
But determining the value of the game is not always as simple as in the case
of these two examples. In case the players determine that their best alternative is
to play each row or each column a certain part of the time, calculating the value of
the game becomes a bit more complex. We shall take them up a little later in this
unit.
Now we shall see the process of determination of optimum strategies and
the value of a game with the help of some illustrations.
Example 11.1: Determine the optimum strategies for the two players X and Y and
find the value of the game from the following payoff matrix:
              Player Y
             3    –1     4     2
Player X     1    –3    –7     0
             4    –6     2    –9

Solution: For determining the optimum strategies, the cautious approach is to assume
the worst and act accordingly. If Player X plays with first row strategy, then Player
Y will play with second column to win one point, otherwise he will lose 3, 4 and 2 points
if he plays with columns 1, 3 and 4 respectively. If Player X plays with second row
strategy, then the worst would happen to him only when Player Y plays with the third
column because in that case Y would win 7 points. If Player X plays with third row
strategy, then the worst he can expect is losing 9 points when Y plays with the fourth
column. In this problem then, Player X should adopt first row strategy because only
then his loss will be minimum. Thus, Player X can make the best of the situation by
aiming at the highest of these minimal payoffs. This decision rule is known as ‘maximin
strategy’.
Looking from the perspective of Player Y, we can say that if Y plays with
column first strategy the maximum he can lose is 4 points if Player X adopts the
strategy of row three. If Player Y plays with column two strategy, there is no
question of any loss whatever may be the strategy of Player X. In such a case
player X will adopt the strategy of row one, for only then his loss will be minimum.
If Player Y plays with column third strategy, the maximum he can lose is 4 points if
X adopts the strategy of row one. If Y adopts the strategy of column four, he can
lose at the most 2 points if X adopts the strategy of row one. In this problem then,
Y should adopt the second column strategy and thereby ensure a victory of 1 point
which is the maximum in the given case. Thus, Player Y can make the best of the
situation by aiming at the lowest of these maximum payoffs (viz. 4, –1, 4, 2). Thus,
he should seek the minimum among the maximum payoffs. This decision rule is
known as ‘minimax strategy’.
Thus, Player Y will play his second column on each play and Player X will
respond by playing his first row on each play. In this way, Y will win 1 point and X
will lose 1 point in each play. Hence, this is a two-person zero-sum game with a
pure strategy. Since Y will win 1 point and X will lose 1 point in each play, the
value of the game is –1. This payoff –1 is then a saddle point in the given game and
can be marked (encircled) as under:
              Player Y
             3   (–1)    4     2
Player X     1    –3    –7     0
             4    –6     2    –9
(the encircled saddle point –1 is shown here in parentheses)
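The maximin and minimax reasoning used in this example is mechanical enough to automate. The following minimal sketch (an illustrative addition, not part of the original text) computes the row minima and column maxima and reports the saddle point when the two coincide, taking the payoff matrix of Example 11.1 as input:

def saddle_point(payoff):
    """Return (value, (row, column)) if a pure-strategy saddle point exists, else None."""
    row_minima = [min(row) for row in payoff]
    col_maxima = [max(col) for col in zip(*payoff)]
    if max(row_minima) != min(col_maxima):
        return None
    for i, row in enumerate(payoff):
        for j, entry in enumerate(row):
            if entry == row_minima[i] == col_maxima[j]:
                return entry, (i, j)

# Payoff matrix of Example 11.1 (rows: Player X, columns: Player Y).
print(saddle_point([[3, -1, 4, 2],
                    [1, -3, -7, 0],
                    [4, -6, 2, -9]]))   # (-1, (0, 1)): value -1 at row one, column two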

11.5 THE MAXIMIN-MINIMAX PRINCIPLE

This principle is used for the selection of optimal strategies by two players. Consider
two players A and B. A is a player who wishes to maximize his gains, while player
B wishes to minimize his losses. Since A would like to maximize his minimum gain,
we obtain for player A, the value called maximin value and the corresponding
strategy is called the maximin strategy.
On the other hand, since player B wishes to minimize his losses, a value
called the minimax value, which is the minimum of the maximum losses is found.
The corresponding strategy is called the minimax strategy. When these two are
equal (maximin value = minimax value), the corresponding strategies are called
optimal strategies and the game is said to have a saddle point. The value of the
game is given by the saddle point.
The selection of maximin and minimax strategies by A and B is based upon
the so called maximin-minimax principle, which guarantees the best of the worst
results.
Saddle Point: A saddle point is a position in the payoff matrix where the maximum
of row minima coincides with the minimum of column maxima. The payoff at the
saddle point is called the value of the game.
We shall denote the maximin value by α, the minimax value of the game by β
and the value of the game by ν.
Notes:
1. A game is said to be fair if,
Maximin value = Minimax value = 0, i.e., if α = β = 0
2. A game is said to be strictly determinable if,
Maximin value = Minimax value ≠ 0, i.e., if α = β = ν.

Example 11.2: Solve the game whose payoff matrix is given by,
                Player B
             B1    B2    B3
       A1     1     3     1
Player A
       A2     0    –4    –3
       A3     1     5    –1

Solution:
                Player B
             B1    B2    B3    Row minima
       A1     1     3     1        1
Player A
       A2     0    –4    –3       –4
       A3     1     5    –1       –1
Column maxima 1     5     1

Maxi (minimum) = Max (1, – 4, – 1) = 1
Mini (maximum) = Min (1, 5, 1) = 1
i.e., Maximin value α = 1 = Minimax value β
∴ Saddle point exists. The value of the game is the saddle point, which is 1.
The optimal strategy is the position of the saddle point and is given by (A1, B1).

Example 11.3: For what value of λ is the game with the following matrix strictly
determinable?
                Player B
             B1    B2    B3
       A1     λ     6     2
Player A
       A2    –1     λ    –7
       A3    –2     4     λ
Solution: Ignoring the value of λ, the payoff matrix is given by,
                Player B
             B1    B2    B3    Row minima
       A1     λ     6     2        2
Player A
       A2    –1     λ    –7       –7
       A3    –2     4     λ       –2
Column maxima –1     6     2
The game is strictly determinable if,
α = β = ν. Ignoring λ, α = 2 and β = – 1; for a saddle point to exist, λ must lie
between these two values. Hence, – 1 ≤ λ ≤ 2.
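The range –1 ≤ λ ≤ 2 can also be checked numerically. A small illustrative sketch (not part of the original text) that tests, for a few trial values of λ, whether the maximin and minimax values of the matrix of Example 11.3 coincide:

def has_saddle_point(payoff):
    # A pure-strategy saddle point exists exactly when maximin = minimax.
    row_minima = [min(row) for row in payoff]
    col_maxima = [max(col) for col in zip(*payoff)]
    return max(row_minima) == min(col_maxima)

for lam in (-3, -1, 0, 2, 3):
    matrix = [[lam, 6, 2], [-1, lam, -7], [-2, 4, lam]]
    print(lam, has_saddle_point(matrix))
# Of the trial values, only those with -1 <= lam <= 2 report True.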
Example 11.4: Determine which of the following two-person zero-sum games
are strictly determinable and fair. Give the optimum strategy for each player in the
case of strictly determinable games.
(i)             Player B          (ii)            Player B
               B1     B2                         B1     B2
         A1    –5      2                   A1     1      1
Player A                          Player A
         A2    –7     –4                   A2     4     –3
Solution:
(i)             Player B
               B1     B2    Row minima
         A1    –5      2        –5
Player A
         A2    –7     –4        –7
Column maxima  –5      2
Maxi (minimum) = α = Max (– 5, – 7) = – 5
Mini (maximum) = β = Min (– 5, 2) = – 5
Since α = β = – 5 ≠ 0, the game is strictly determinable. There exists a
saddle point = – 5. Hence, the value of the game is –5. The optimal strategy
is the position of the saddle point given by (A1, B1).
(ii)            Player B
               B1     B2    Row minima
         A1     1      1         1
Player A
         A2     4     –3        –3
Column maxima   4      1
Maxi (minimum) = α = Max (1, – 3) = 1.
Mini (maximum) = β = Min (4, 1) = 1.
Since α = β = 1 ≠ 0, the game is strictly determinable. Value of the game
is 1. The optimal strategy is (A1, B2).
Example 11.5: Solve the game whose payoff matrix is given below.
     –2    0    0    5    3
      3    2    1    2    2
     –4   –3    0   –2    6
      5    3   –4    2   –6

Solution:
                        Player B
             B1    B2    B3    B4    B5    Row minima
       A1    –2     0     0     5     3       –2
       A2     3     2     1     2     2        1
Player A
       A3    –4    –3     0    –2     6       –4
       A4     5     3    –4     2    –6       –6
Column maxima 5     3     1     5     6
Maxi (minimum) = α = Max (– 2, 1, – 4, – 6) = 1.
Mini (maximum) = β = Min (5, 3, 1, 5, 6) = 1.
Since α = β = 1, there exists a saddle point. Value of the game is 1. The
position of the saddle point is the optimal strategy and is given by [A2, B3].
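As a quick numerical check (an illustrative addition, not part of the original text), the same row-minima and column-maxima computation for this 4 × 5 matrix:

payoff = [[-2, 0, 0, 5, 3],
          [3, 2, 1, 2, 2],
          [-4, -3, 0, -2, 6],
          [5, 3, -4, 2, -6]]
row_minima = [min(row) for row in payoff]          # [-2, 1, -4, -6]
col_maxima = [max(col) for col in zip(*payoff)]    # [5, 3, 1, 5, 6]
print(max(row_minima), min(col_maxima))            # 1 1, so a saddle point exists with value 1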

Check Your Progress


8. Explain the two-person zero-sum games.
9. What are the basic concepts of two-person zero-sum games?
10. What do you mean by the value of a game?
11. State the maximin value.
12. Define the minimax value.
13. Elaborate on the saddle point.

11.6 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. The term ‘Game’ represents a conflict between two or more parties. Game
theory is really the ‘Science of Conflict’. It is not concerned with finding an
optimum or winning strategy for a particular conflict situation but it provides
general rules concerning the logic that underlies strategic behaviour of all
types.
Game theory applies to those competitive situations that are technically
known as ‘Competitive Games’ or simply ‘Games’.
2. Suppose there are two opponents X and Y. We can think of them as sitting
across a table from each other and each with two buttons in front of him.
We shall denote player X’s buttons as m and n and player Y’s buttons as r
and t; thus each player has two choices open to him. We also presume a
partition between them so that neither can see in advance which button his
opponent is going to press. At a signal from a third party, each player presses
one of his buttons.
3. In order to eliminate the necessity for written descriptions of the ‘payoffs’
(as we have shown in the above two examples), a standard set of conventions
has been established in game theory. It is the usual practice to omit a
description like ‘Player X wins two points’ and replace it with integer 2.
The positive algebraic sign which is assumed to accompany this number
indicates that it is Player X who benefits from this payoff. Similarly, instead
of saying ‘Player Y wins three points’ one simply indicates this with the
value –3, the minus sign indicating that it is Player Y who benefits from this
particular payoff.
Another standard convention that is usually followed is that Player X has
choices between the rows and Player Y has choices between the columns.
4. In two-person games, the players may have many possible choices open to
them for each play of the game but the number of players remains only two.
But games can also involve many people as active participants, each with
his own set of choices for each play of the game. A game of three players
can be named as three-person game. Thus, in case of more than two persons,
the game is generally named an n person game.
5. Zero-sum and non-zero-sum games. A zero-sum game is one in which the
sum of the payments to all the competitors is zero for every possible outcome
of the game. In other words, in such a game the sum of the points won
equals the sum of the points lost, i.e., one player wins at the expense of the
other (others). A two-person matrix game is always a zero-sum game since
one player loses what the other wins. In a non-zero-sum game, the sum of
the payoffs from any play of the game may be either positive or negative but
not zero.
6. In some situations, a zero-sum game is correctly referred to as a constant-
sum game. Take, for example, a competitive struggle for market share by two
firms (a duopoly). In such a case, every percentage point gained by one of
the firms is necessarily lost by the other. This situation is ordinarily called a
two-person, zero-sum game. But strictly speaking, this is a constant-sum
rather than a zero-sum game because the sum of the two market shares is the
fixed number 100 (per cent), not zero. It should, however, be remembered
that ‘there is no significant analytic difference between the constant-sum and
the zero-sum games. This is because there is no change in strategic possibilities
from a given game to another game in which some constant amount that
cannot be changed by players is added to the original payoffs.’
7. Two-person zero-sum games can be of size 3 × 3 and larger. If each of the
two players has three choices then the game is of 3 × 3 type and if the
choices open to any of the player or to both the players are more than
three, the game is referred to as of a larger size. 3 × 3 and larger games
quite often present complications and in such situations, linear programming
as a solution method may allow us to find the optimum strategies.
8. In game theory, a zero-sum game is a mathematical representation of a

situation in which each participant’s gain or loss of utility is exactly balanced


by the losses or gains of the utility of the other participants. Alternatively, we
can say that the zero-sum games are the games in which one player’s win is
the other player’s loss.
9. Following are the basic and significant concepts of simple two-person zero-
sum games:
 A two-person game is characterized by the strategies of each player
and the payoff matrix.
 The payoff matrix shows the gain (positive or negative) for player 1 that
would result from each combination of strategies for the two players.
Remember that the matrix for player 2 is the negative of the matrix for
player 1 in a zero-sum game.
 The entries in the payoff matrix can be in any units as long as they represent
the utility (or value) to the player.
 There are two key assumptions about the behaviour of the players.
The first is that both players are rational. The second is that both players
are materialistic meaning that they choose their strategies in their own
interest.
10. In game theory, the concept ‘Value of a Game’ is considered very
important. It refers to the average payoff per play of the game over an
extended period of time. But determining the value of the game is not
always as simple as in the case of these two examples. In case the players
determine that their best alternative is to play each row or each column a
certain part of the time, calculating the value of the game becomes a bit
more complex.
11. Consider two players A and B. A is a player who wishes to maximize his
gains, while player B wishes to minimize his losses. Since A would like
to maximize his minimum gain, we obtain for player A, the value called
maximin value and the corresponding strategy is called the maximin
strategy.
12. Since player B wishes to minimize his losses, a value called the minimax
value, which is the minimum of the maximum losses is found. The
corresponding strategy is called the minimax strategy. When these two are
equal (maximin value = minimax value), the corresponding strategies are
called optimal strategies and the game is said to have a saddle point. The
value of the game is given by the saddle point.
13. A saddle point is a position in the payoff matrix where the maximum of row
minima coincides with the minimum of column maxima. The payoff at the
saddle point is called the value of the game.

11.7 SUMMARY

 The term ‘Game’ represents a conflict between two or more parties. Game
theory is really the ‘Science of Conflict’. It is not concerned with finding an
optimum or winning strategy for a particular conflict situation but it provides
general rules concerning the logic that underlies strategic behaviour of all
types.
Game theory applies to those competitive situations that are technically
known as ‘Competitive Games’ or simply ‘Games’.
 Suppose there are two opponents X and Y. We can think of them as sitting
across a table from each other and each with two buttons in front of him.
We shall denote player X’s buttons as m and n and player Y’s buttons as r
and t; thus each player has two choices open to him. We also presume a
partition between them so that neither can see in advance which button his
opponent is going to press. At a signal from a third party, each player presses
one of his buttons.
 In order to eliminate the necessity for written descriptions of the ‘payoffs’
(as we have shown in the above two examples), a standard set of conventions
has been established in game theory. It is the usual practice to omit a
description like ‘Player X wins two points’ and replace it with integer 2.
 In two-person games, the players may have many possible choices open to
them for each play of the game but the number of players remains only two.
But games can also involve many people as active participants, each with
his own set of choices for each play of the game. A game of three players
can be named as three-person game. Thus, in case of more than two persons,
the game is generally named an n person game.
 Zero-sum and non-zero-sum games. A zero-sum game is one in which the
sum of the payments to all the competitors is zero for every possible outcome
of the game. In other words, in such a game the sum of the points won
equals the sum of the points lost, i.e., one player wins at the expense of the
other (others).
 Games of perfect information and games of imperfect information.
Whatever strategy is adopted by either player, if the same can also be
discovered by his competitor, then such games are known as games of
perfect information. In games of imperfect information, neither player
knows the entire situation and must be guided in part by guess work as to
what the real situation is.
 Games with finite, i.e., limited number of moves (or plays) and games with
unlimited number of moves. Games with finite number of moves are those
where the number of moves is limited to a fixed magnitude before play
begins. If the game could continue over an extended period of time and no
limit is put on the number of moves, it is referred to as a game with an

unlimited number of moves.


 2 × 2 two-person games and 2 × m and m × 2 games. Two person zero-
sum games with only two choices open to each player are denoted as 2 × 2
two person games but games in which one of the players has more than two
choices of rows or columns and in which the other player has exactly two
choices are referred to as m × 2 or 2 × m games respectively.
 In game theory, a zero-sum game is a mathematical representation of a
situation in which each participant’s gain or loss of utility is exactly balanced
by the losses or gains of the utility of the other participants. Alternatively, we
can say that the zero-sum games are the games in which one player’s win is
the other player’s loss.
 A two player game is called a zero-sum game if the sum of the payoffs to
each player is constant for all possible outcomes of the game. More
specifically, the terms or coordinates in each payoff vector must add up to
the same value for each payoff vector. Such games are sometimes called
constant-sum games as an alternative.
 But determining the value of the game is not always as simple as in the case
of these two examples. In case the players determine that their best
alternative is to play each row or each column a certain part of the time,
calculating the value of the game becomes a bit more complex.
 Consider two players A and B. A is a player who wishes to maximize his
gains, while player B wishes to minimize his losses. Since A would like to
maximize his minimum gain, we obtain for player A, the value called maximin
value and the corresponding strategy is called the maximin strategy.
 Since player B wishes to minimize his losses, a value called the minimax
value, which is the minimum of the maximum losses is found. The
corresponding strategy is called the minimax strategy. When these two are
equal (maximin value = minimax value), the corresponding strategies are
called optimal strategies and the game is said to have a saddle point. The
value of the game is given by the saddle point.
 A saddle point is a position in the payoff matrix where the maximum of row
minima coincides with the minimum of column maxima. The payoff at the
saddle point is called the value of the game.

11.8 KEY WORDS

 Game theory: Game theory is really the ‘Science of Conflicts’.


 Game: A conflict between two or more parties.
 Zero-sum and non-zero games: A zero-sum game is one in which the
sum of the payments to all the competitors is zero for every possible outcome
of the game.
 Value of a game: It refers to the average payoff per play of the game over
an extended period of time.
 Saddle point: A saddle point is a position in the payoff matrix where the
maximum of row minima coincides with the minimum of column maxima.

11.9 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. What is game theory?
2. Define the illustration of a game.
3. State the standard conventions that are used in game theory.
4. Explain the basic terms of game theory.
5. Elaborate on the two-person zero-sum games.
6. What are the basic concepts of two-person zero-sum games?
7. Define the value of a game.
8. State the maximin-minimax principle.
Long-Answer Questions
1. Briefly define game theory, giving examples.
2. Explain the two-person zero-sum games with the help of examples.
3. Discuss briefly the basic concepts of two-person zero-sum games.
4. Analyse the maximin-minimax principle. Give appropriate examples.

11.10 FURTHER READINGS

Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New


Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.
Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.
Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata

McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi:
Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.

BLOCK - IV
DOMINANCE IN GAMES AND NETWORK ANALYSIS

UNIT 12 SADDLE POINTS AND
MIXED STRATEGIES
Structure
12.0 Introduction
12.1 Objectives
12.2 Games without Saddle Points
12.3 Mixed Strategies
12.3.1 Pure and Mixed Strategies with Saddle Point
12.3.2 Mixed Strategy Problems by Arithmetic Method
12.4 Graphic Solution of 2 × n and m × 2 Games
12.5 Answers to Check Your Progress Questions
12.6 Summary
12.7 Key Words
12.8 Self Assessment Questions and Exercises
12.9 Further Readings

12.0 INTRODUCTION

The saddle point in a payoff matrix is one which is the smallest value in its row and
the largest value in its column. The saddle point is also known as equilibrium point
in the theory of games. An element of a matrix that is simultaneously minimum of
the row in which it occurs and the maximum of the column in which it occurs is a
saddle point of the matrix game. In a game having a saddle point optimum strategy
for player X is always to play the row containing a saddle point and for the player
Y to play the column that contains a saddle point. Saddle point also gives the
values of such a game. Saddle point in a payoff matrix concerning a game may be
there and may not be there. If there is a saddle point we can easily find out the
optimum strategies and the value of the game by what is known as the solution by
saddle point without doing much calculations. But when saddle point is not there
we have to use algebraic methods for working out the solutions concerning game
problems.
A saddle point or minimax point is a point on the surface of the graph of a
function where the slopes (derivatives) in orthogonal directions are all zero (a critical
point), but which is not a local extremum of the function. An example of a saddle
point is when there is a critical point with a relative minimum along one axial direction
(between peaks) and at a relative maximum along the crossing axis. However, a
saddle point need not be in this form.
A mixed strategy is an assignment of a probability to each pure strategy.
When enlisting a mixed strategy, it is often because the game doesn’t allow for a
rational description in specifying a pure strategy for the game. This allows for a
player to randomly select a pure strategy. Since probabilities are continuous, there
are infinitely many mixed strategies available to a player. Since probabilities are
being assigned to strategies for a specific player, when discussing the payoffs of
certain scenarios the payoff must be referred to as the ‘Expected Payoff’.
one can regard a pure strategy as a degenerate case of a mixed strategy, in which
that particular pure strategy is selected with probability 1 and every other strategy
with probability 0. A totally mixed strategy is a mixed strategy in which the player
assigns a strictly positive probability to every pure strategy. (Totally mixed strategies
are important for equilibrium refinement such as trembling hand perfect equilibrium.)
In this unit, you will study about the concepts of games without saddle
points, mixed strategies, and graphic solution of 2 × n and m × 2 games.

12.1 OBJECTIVES

After going through this unit, you will be able to:
• Analyse the games without saddle points
• Understand the significance of mixed strategies
• Explain the graphic solution of 2 × n and m × 2 games

12.2 GAMES WITHOUT SADDLE POINTS

A game without saddle point can be solved by various solution methods.


2 × 2 Games Without Saddle Point
Consider a 2 × 2 two-person zero-sum game without any saddle point, having the
payoff matrix for player A as,
        B1    B2
A1   [ a11   a12 ]
A2   [ a21   a22 ]

The optimum mixed strategies are

SA = ( A1  A2 ; p1  p2 )   and   SB = ( B1  B2 ; q1  q2 )

where
p1 = (a22 − a21) / [(a11 + a22) − (a12 + a21)],   p1 + p2 = 1, so p2 = 1 − p1
q1 = (a22 − a12) / [(a11 + a22) − (a12 + a21)],   q1 + q2 = 1, so q2 = 1 − q1

The value of the game, ν = (a11 a22 − a12 a21) / [(a11 + a22) − (a12 + a21)]
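These formulas translate directly into a few lines of code. The sketch below is illustrative only (the function name solve_2x2 is not from any library), and it assumes the 2 × 2 matrix has already been checked and found to have no saddle point.

# A minimal sketch of the 2 x 2 mixed-strategy formulas above.
# Function and variable names are illustrative, not from any library.

def solve_2x2(a11, a12, a21, a22):
    """Return (p1, p2), (q1, q2) and the value for a 2 x 2
    two-person zero-sum game that has no saddle point."""
    d = (a11 + a22) - (a12 + a21)          # common denominator
    p1 = (a22 - a21) / d                   # player A's probability for row A1
    q1 = (a22 - a12) / d                   # player B's probability for column B1
    value = (a11 * a22 - a12 * a21) / d    # value of the game
    return (p1, 1 - p1), (q1, 1 - q1), value

# Example 12.1 below: payoff matrix [[4, -4], [-4, 4]]
print(solve_2x2(4, -4, -4, 4))             # ((0.5, 0.5), (0.5, 0.5), 0.0)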

Example 12.1: Solve the following game and determine its value.

          B
A   [  4   −4 ]
    [ −4    4 ]

Solution: It is clear that the payoff matrix does not possess any saddle point. The players will use mixed strategies. The optimum mixed strategies for the players are
SA = ( A1  A2 ; p1  p2 ), p1 + p2 = 1
SB = ( B1  B2 ; q1  q2 ), q1 + q2 = 1

p1 = (a22 − a21) / [(a11 + a22) − (a12 + a21)] = [4 − (−4)] / [(4 + 4) − (−4 − 4)] = 8/16 = 1/2
p2 = 1 − p1 = 1 − 1/2 = 1/2
q1 = (a22 − a12) / [(a11 + a22) − (a12 + a21)] = [4 − (−4)] / [(4 + 4) − (−4 − 4)] = 8/16 = 1/2
q2 = 1 − q1 = 1 − 1/2 = 1/2

The optimum strategy is SA = (1/2, 1/2); SB = (1/2, 1/2).

The value of the game is
ν = (a11 a22 − a12 a21) / [(a11 + a22) − (a12 + a21)] = [4 × 4 − (−4)(−4)] / [(4 + 4) − (−4 − 4)] = 0.

Example 12.2: In a game of matching coins with two players, suppose A wins
one unit of value when there are two heads, wins nothing when there are two tails
and loses 1/2 unit of value when there is one head and one tail. Determine the
payoff matrix, the best strategies for each player and the value of the game to A.

Solution: The payoff matrix for player A is given by

                 Player B
                  H        T
Player A    H  [  1      −1/2 ]
            T  [ −1/2      0  ]

Let this be

        B1    B2
A1   [ a11   a12 ]
A2   [ a21   a22 ]

The optimum mixed strategies are
SA = ( A1  A2 ; p1  p2 ), p1 + p2 = 1
SB = ( B1  B2 ; q1  q2 ), q1 + q2 = 1

p1 = (a22 − a21) / [(a11 + a22) − (a12 + a21)] = [0 − (−1/2)] / [(1 + 0) − (−1/2 − 1/2)] = (1/2)/2 = 1/4
p2 = 1 − p1 = 1 − 1/4 = 3/4
q1 = (a22 − a12) / [(a11 + a22) − (a12 + a21)] = [0 − (−1/2)] / 2 = 1/4
q2 = 1 − q1 = 1 − 1/4 = 3/4

Value of the game = (a11 a22 − a12 a21) / [(a11 + a22) − (a12 + a21)]
                  = [1 × 0 − (−1/2)(−1/2)] / 2 = −1/8

Therefore, the optimum mixed strategy is given by SA = (1/4, 3/4); SB = (1/4, 3/4), and the value of the game is −1/8.
Example 12.3: Solve the following payoff matrix. Also determine the optimal strategies and value of the game.

          B
A   [ 5   1 ]
    [ 3   4 ]

Solution: Let the payoff matrix be

        B1    B2
A1   [ a11   a12 ]
A2   [ a21   a22 ]

The optimum mixed strategies are
SA = ( A1  A2 ; p1  p2 )  and  SB = ( B1  B2 ; q1  q2 )

where
p1 = (a22 − a21) / [(a11 + a22) − (a12 + a21)] = (4 − 3) / [(5 + 4) − (1 + 3)] = 1/5
p2 = 1 − p1 = 1 − 1/5 = 4/5
q1 = (a22 − a12) / [(a11 + a22) − (a12 + a21)] = (4 − 1) / [(5 + 4) − (1 + 3)] = 3/5
q2 = 1 − q1 = 1 − 3/5 = 2/5

Value of the game, ν = (a11 a22 − a12 a21) / [(a11 + a22) − (a12 + a21)] = (5 × 4 − 1 × 3) / 5 = 17/5

Therefore, the optimum mixed strategies are SA = (1/5, 4/5); SB = (3/5, 2/5), and the value of the game is 17/5.
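As a quick check, the illustrative solve_2x2 sketch given after the general formulas reproduces this result:

# Verifying Example 12.3 with the illustrative solve_2x2 sketch from above.
pA, pB, value = solve_2x2(5, 1, 3, 4)
print(pA)     # (0.2, 0.8)  ->  SA = (1/5, 4/5)
print(pB)     # (0.6, 0.4)  ->  SB = (3/5, 2/5)
print(value)  # 3.4         ->  17/5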

12.3 MIXED STRATEGIES

In game theory, the strategy of a player in a game is a complete plan of action for
any situation that may occur. This determines the complete behaviour of the player, and the player's strategy determines the action that the player will take at any stage of the game.
A strategy profile is also sometimes termed a strategy combination. It is a
set of strategies for each player that specifies all actions in a game. It must include
one and only one strategy for every player. Sometimes the concept of a strategy is mistakenly confused with that of a move. A move is an action taken by a player at some point during the play of a game, whereas a strategy is a complete algorithm for playing the game because it tells the player what to do in every possible
situation throughout the game. A player’s strategy set describes what strategies
are available for playing the game. Strategies are of two types, pure and mixed. A
pure strategy provides a complete definition of how a player will play a game. A
mixed strategy is an assignment of a probability to each pure strategy. This allows
for a player to randomly select a pure strategy. Since probabilities are continuous,
there are infinitely many mixed strategies available to a player, even if their strategy
set is finite. Certainly, a pure strategy can be considered as a degenerate case of a
mixed strategy in which that specific pure strategy is selected with probability 1
and every other strategy with probability 0.
A totally mixed strategy is a mixed strategy in which the player assigns a
strictly positive probability to every pure strategy. The totally mixed strategies are
important for equilibrium refinement.
Consider the payoff matrix table of pure coordination game (Refer
Table 12.1). Here one player chooses the row and the other chooses a column.
The row player receives the first payoff, the column player the second. If row opts
to play A with probability 1, i.e., play A for sure then the player is said to be
playing a pure strategy. If column opts to flip a coin and play A if the coin lands
heads and B if the coin lands tails then the player is said to be playing a mixed
strategy and not a pure strategy.
Table 12.1 Pure Coordination Game
         A        B
A      1, 1     0, 0
B      0, 0     1, 1

Example 12.4: Find the optimum strategies and the value of the game from the
following payoff matrix concerning two-person game:
             Player Y
Player X   [ 1   4 ]
           [ 5   3 ]
Solution: In the given game, there is no saddle point because there is no one value
which is smallest value in its row and largest in its column. Therefore the players
will resort to what is known as mixed strategy, i.e., player X will play each of his
rows a certain portion of time and player Y will play each of his columns a certain
part of the time. The question then is to determine what proportion of the time a
player should spend on his respective rows and columns. This can be done by the
use of the algebraic method stated as follows.
Saddle Points and Mixed Let Q equal the proportion of time player X spends playing the first row,
Strategies
then 1–Q must equal the time he spends playing his second row (because one
equals the time available for play). Similarly, suppose player Y spends time R in
playing first column and 1–R proportion of time he spends playing the second
NOTES column. All this can be stated as under:
                    Player Y
                   R      1 − R
            Q    [ 1        4 ]
Player X
          1 − Q  [ 5        3 ]
Now, we must find out the values of Q and R. Let us analyse the situation
from X’s view point. He would like to devise a strategy that will maximize his
winning (or minimize his losses) irrespective of what his opponent Y does. For this
X would like to divide his play between his rows in such a manner that his expected
winnings or losses when Y plays the first column will equal his expected winnings
or losses when Y plays the second column. Expected winnings indicate the sum, over time, of the payoffs multiplied by the probabilities that these payoffs will be obtained.
This can be calculated as shown below:
X's Expected Winnings
                                       When Y Plays       When Y Plays
                                       Column One         Column Two
X plays row one, Q of the time            (1)(Q)             (4)(Q)
X plays row two, 1 − Q of the time       5(1 − Q)           3(1 − Q)
X's total expected winnings            Q + 5(1 − Q)       4Q + 3(1 − Q)
Equating the expected winnings of X when Y plays column one with when Y plays
column two we can find the value of Q as follows:
Q + 5(1 – Q) = 4Q + 3(1 – Q)
or Q + 5 – 5Q = 4Q + 3 – 3Q
or –5Q = –2
or Q = 2/5
Therefore, 1 – Q = 3/5
This means that player X should play his first row 2/5 of the time and his second
row 3/5 of the time if he wants to maximize his expected winnings from the game.
On a similar basis, the expected losses of Y can be worked out as under:
Y's Expected Losses
                            Y Plays Column One     Y Plays Column Two     Y's Total Expected
                            R of the Time          1 − R of the Time      Losses
When X plays row one             1(R)                   4(1 − R)          R + 4(1 − R)
When X plays row two             5(R)                   3(1 − R)          5R + 3(1 − R)
Equating the expected losses of Y when X plays row one with when X plays Saddle Points and Mixed
Strategies
row two, we can find the value of R as follows:
R + 4(1 − R) = 5R + 3(1 − R)
R + 4 − 4R = 5R + 3 − 3R
or −5R = −1
or R = 1/5
Therefore, 1 − R = 4/5
This means that player Y should play his first column 1/5 of the time and his second
column 4/5 of the time if he wants to minimize his expected losses in the game.
Now we can illustrate the original game with the appropriate strategies for
each of the player as follows:
                 Player Y
                1/5    4/5
          2/5  [ 1      4 ]
Player X
          3/5  [ 5      3 ]
Alternative Method or Short Cut Method for Finding the Above Strategies
Original game:
          Y
X   [ 1   4 ]
    [ 5   3 ]

Step 1: Subtract the smaller payoff in each row from the larger one, and the smaller payoff in each column from the larger one.
          Y
X   [ 1   4 ]   3   (i.e., 4 − 1 = 3)
    [ 5   3 ]   2   (i.e., 5 − 3 = 2)
      4     1
(i.e., 5 − 1 = 4)   (i.e., 4 − 3 = 1)

Step 2: Interchange each of these pairs of subtracted numbers found in Step 1 above.
          Y
X   [ 1   4 ]   2
    [ 5   3 ]   3
      1     4

Step 3: Put each of the interchanged numbers over the sum of the pair of numbers.
          Y
X   [ 1   4 ]   2/(2 + 3)
    [ 5   3 ]   3/(2 + 3)
   1/(1 + 4)   4/(1 + 4)

Step 4: Simplify the fractions to obtain the proper proportions or the required strategies.
          Y
X   [ 1   4 ]   2/5
    [ 5   3 ]   3/5
     1/5    4/5
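The four steps of this short-cut can also be written as a small routine. The sketch below uses an illustrative function name (oddments_2x2, not a library call) and assumes a 2 × 2 game with no saddle point:

# A sketch of the short-cut (oddments) method for a 2 x 2 game with no saddle point.
# Names are illustrative only.

def oddments_2x2(matrix):
    (a, b), (c, d) = matrix
    # Step 1: absolute differences of each row and each column
    row_diffs = [abs(a - b), abs(c - d)]
    col_diffs = [abs(a - c), abs(b - d)]
    # Step 2: interchange the pairs
    row_odds = [row_diffs[1], row_diffs[0]]
    col_odds = [col_diffs[1], col_diffs[0]]
    # Steps 3 and 4: divide by the sums to get the strategies
    x = [o / sum(row_odds) for o in row_odds]   # row player's strategy
    y = [o / sum(col_odds) for o in col_odds]   # column player's strategy
    # Value of the game: row player's expectation against column one
    value = a * x[0] + c * x[1]
    return x, y, value

print(oddments_2x2([[1, 4], [5, 3]]))   # ([0.4, 0.6], [0.2, 0.8], 3.4)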
Now we determine the value of the game. Looking at the game from X’s point of
view we can argue as follows:
(i) During the 1/5 of the time Y plays column one, X wins 1 point 2/5 of the
time (when X plays row one) and 5 points 3/5 of the time (when X plays
row second).
(ii) During the 4/5 of the time Y plays column two, X wins 4 points 2/5 of the
time (when X plays row one) and 3 points 3/5 of the time (when X plays
row second).
Thus, the total expected winnings of player X are the sum of the above two statements:
(1/5)[1 × (2/5) + 5 × (3/5)] + (4/5)[4 × (2/5) + 3 × (3/5)]
= (1/5)(17/5) + (4/5)(17/5)
= 17/25 + 68/25
= 85/25
= 17/5
Thus the value of the game is 17/5 which means that player X can expect to
win an average payoff of 17/5 points for each play of the game if he adopts the
strategy we have determined as stated above. If the value of the game determined
above had a negative sign, it would simply signify that Y was the winner. The same result can also be obtained by looking at the game from Y's point of view and doing similar calculations.
12.3.1 Pure and Mixed Strategies with Saddle Point
The saddle point in a payoff matrix is the one which is the smallest value in its row
and the largest value in its column. The saddle point is also known as the equilibrium
point in the theory of games. An element of a matrix that is simultaneously the
minimum of the row in which it occurs and the maximum of the column in which it
occurs is a saddle point of the matrix game. In a game having a saddle point, the
optimum strategy for Player X is always to play the row containing a saddle point
and for Player Y to play the column that contains a saddle point. The saddle point
also gives the value of such a game. The saddle point in the payoff matrix of a
game may or may not exist. If there is a saddle point, we can easily find out the
optimum strategies and the value of the game by what is known as solution by NOTES
saddle point without having to do too many calculations. But when the saddle
point is not there, then we have to use algebraic methods for working out the
solutions of the game problems.
Game Problems of Mixed Strategy
Example 12.5: Find the optimum strategies and the value of the game from the
following payoff matrix concerning a two-person game:
             Player Y
Player X   [ 1   4 ]
           [ 5   3 ]
Solution: In the given game, there is no saddle point because there is no one value
which is smallest in its row and largest in its column. Therefore, the players have to
resort to mixed strategy, i.e., Player X will play each of his rows a certain part of
time and Player Y will play each of his columns a certain part of the time. The
question then is to determine the proportion of time a player should spend on his
respective rows and columns. This can be done by the use of the following algebraic
method:
Let Q equal the proportion of time Player X spends playing the first row,
then 1 – Q must equal the time he spends playing his second row (because one
equals the time available for play). Similarly, suppose Player Y spends time R in
playing the first column and 1 – R be the time he spends on playing the second
column. All this can be stated as follows:
                    Player Y
            Q     [ 1   4 ]
Player X
          1 − Q   [ 5   3 ]
Now, we must find the values of Q and R. Let us analyse the situation from
X’s view point. He would like to devise a strategy that would maximize his winning
(or minimize his losses) irrespective of what his opponent Y does. For this, X
would like to divide his play between his rows in such a manner that his expected winnings or losses when Y plays the first column will equal his expected winnings or losses when Y plays the second column. (Expected winnings indicate the sum,
overtime, of the payoffs that will be obtained multiplied by the probabilities that
these payoffs will obtain). In our case, we can calculate the same as follows:

X's Expected Winnings
                                       When Y plays       When Y plays
                                       column one         column two
X plays row one, Q of the time            (1)(Q)             (4)(Q)
X plays row two, 1 − Q of the time       5(1 − Q)           3(1 − Q)
X's total expected winnings            Q + 5(1 − Q)       4Q + 3(1 − Q)
Equating the expected winnings of X when Y plays column one with when Y
plays column two, we can find the value of Q as follows:
Q + 5(1 − Q) = 4Q + 3(1 − Q)
Q + 5 − 5Q = 4Q + 3 − 3Q
−5Q = −2
Or Q = 2/5
1 − Q = 3/5
This means that Player X should play his first row 2/5 of the time and his
second row 3/5 of the time if he wants to maximize his expected winnings from the
game.
Similarly, the losses of Y can be worked out as follows:
Y’s Expected Losses
Equating the expected losses of Y when X plays row one with when X plays
row two, we can find the value of R as follows:
R + 4(1 − R) = 5R + 3(1 − R)
R + 4 − 4R = 5R + 3 − 3R
Or −5R = −1
Or R = 1/5
1 − R = 4/5
This means that Player Y should play his first column 1/5 of the time and his
second column 4/5 of the time if he wants to minimize his expected losses in the
game.
Now we can illustrate the original game with the appropriate strategies for each of the players as follows:

                 Player Y
                1/5    4/5
          2/5  [ 1      4 ]
Player X
          3/5  [ 5      3 ]
An alternative method (or the short-cut method) for finding the above strategies is as follows:

Original game:
          Y
X   [ 1   4 ]
    [ 5   3 ]

Step 1: Subtract the smaller payoff in each row from the larger one and the smaller payoff in each column from the larger one.
          Y
X   [ 1   4 ]   3   (i.e., 4 − 1 = 3)
    [ 5   3 ]   2   (i.e., 5 − 3 = 2)
      4     1
(i.e., 5 − 1 = 4)   (i.e., 4 − 3 = 1)

Step 2: Interchange each of these pairs of subtracted numbers found in Step 1 above.
          Y
X   [ 1   4 ]   2
    [ 5   3 ]   3
      1     4

Step 3: Put each of the interchanged numbers over the sum of the pair of the numbers.
          Y
X   [ 1   4 ]   2/(2 + 3)
    [ 5   3 ]   3/(2 + 3)
   1/(1 + 4)   4/(1 + 4)

Step 4: Simplify the fractions to obtain the proper proportions or the required strategies.
          Y
X   [ 1   4 ]   2/5
    [ 5   3 ]   3/5
     1/5    4/5
Now we determine the value of the game. Looking at the game from X's point of view we can argue that:
(i) During the 1/5 of the time that Y plays column one, X wins 1 point 2/5 of the time (when X plays row one) and 5 points 3/5 of the time (when X plays row two).
(ii) During the 4/5 of the time that Y plays column two, X wins 4 points 2/5 of the time (when X plays row one) and 3 points 3/5 of the time (when X plays row two).
Thus, the total expected winnings of player X are the sum of the above two statements, that is,
(1/5)[1 × (2/5) + 5 × (3/5)] + (4/5)[4 × (2/5) + 3 × (3/5)]
= (1/5)(17/5) + (4/5)(17/5)
= 17/25 + 68/25
= 85/25
= 17/5
Thus, the value of the game is 17/5 which means that Player X can expect
to win an average payoff of 17/5 points for each play of the game if he adopts the
strategy we have determined above. If the value of the game determined above
had a negative sign, it would simply signify that Y was the winner. The same result
can also be achieved by looking at the game from Y’s point of view and carrying
out similar calculations.
First Alternative Method (or Short-cut Method) for Determining the Value
of a Game
Under this, we calculate:
X's expectations when Y plays column one,
X's expectations when Y plays column two,
Y's expectations when X plays row one, and
Y's expectations when X plays row two.
In all these four cases the answer remains the same. Hence, any one of
these calculations is sufficient to determine the value of the game. The logic
behind this approach is the same as that of determining the optimum strategies.
(In the case of Player X, we determined a strategy that guaranteed X the same
winnings, irrespective of his opponent’s choice of columns). This can be illustrated
as follows:

For example, X's expectation when Y plays column one is (1 × 2/5) + (5 × 3/5) = 17/5, and each of the other three calculations likewise gives 17/5.

Fig. 12.1 Diagrammatic Form of Determining the Value of a Game

Second Alternative Method for Determining the Value of a Game Using the Probabilities of Each Payoff
In a simple 2 × 2 game without a saddle point, each player’s strategy consists of
two probabilities denoting the portion of the time he spends on each of his rows or
columns. Since each player plays a random pattern, we can list the probabilities of
each payoff (in our given question) as follows:

Payoff      Strategies which produce this payoff       Probability of this payoff
  1         Row 1, column 1 : (2/5)(1/5)                         2/25
  4         Row 1, column 2 : (2/5)(4/5)                         8/25
  5         Row 2, column 1 : (3/5)(1/5)                         3/25
  3         Row 2, column 2 : (3/5)(4/5)                        12/25
                                                           Sum = 1.0

The value of the game can be found out by taking the sum of the products of each of these payoffs and their respective probabilities.
For the given problem, this can be worked out as follows:

Payoff      Probability of the payoff       Product of the first two columns
  1                  2/25                                2/25
  4                  8/25                               32/25
  5                  3/25                               15/25
  3                 12/25                               36/25
                                                Total = 85/25
                                                      = 17/5  (value of the game)
12.3.2 Mixed Strategy Problems by Arithmetic Method
By mixed strategy is meant a situation in which the course of action is selected
with some fixed probability. As such, there is a probabilistic situation. In such a
strategy, the objective of the players is to maximize the expected gain or minimize
the expected loss by choosing among various pure strategies with fixed
probabilities.
In mathematical terms, a mixed strategy for a player having two or more courses of action can be thought of as a set S of n probabilities (whose sum is unity), where n is the number of pure strategies available to the player. Let pj be the probability of selecting strategy j, where j = 1, 2, 3, ..., n. Then S = {p1, p2, ..., pn}, with p1 + p2 + ... + pn = 1 and pj ≥ 0 for every j.
There are cases when a pure strategy for a game may not exist. Hence, no
saddle point exists. In such cases, both the players choose an optimal mixture of
strategies to find an equilibrium point. The optimal mixed strategy may be determined
for each player in this case by assigning the probability of it being chosen to each
strategy. This is known as mixed strategy since this is a probabilistic combination
of the available choices of strategy.
Value of the game
When the mixed strategies are obtained, the value of the game is the least payoff that Player A can expect to gain and that Player B can expect to lose. The expected payoff with an arbitrary payoff matrix [aij] of order m × n is given by
E(P, Q) = Σi Σj pi aij qj = P^T A Q,
where i = 1, 2, ..., m and j = 1, 2, ..., n; P = (p1, p2, ..., pm) and Q = (q1, q2, ..., qn).
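For a small game, this expected payoff can be evaluated directly. The following sketch uses NumPy (an assumption about the available tooling) on the 2 × 2 game and strategies worked out earlier:

# Sketch: expected payoff E(P, Q) = P^T A Q for the game worked out earlier.
import numpy as np

A = np.array([[1, 4],
              [5, 3]])          # payoff matrix for the row player
P = np.array([2/5, 3/5])        # row player's mixed strategy
Q = np.array([1/5, 4/5])        # column player's mixed strategy

E = P @ A @ Q                   # sum_i sum_j p_i a_ij q_j
print(E)                        # 3.4, i.e., 17/5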
Arithmetic Method of Solving Mixed Strategy Problems
This method is also known as the short-cut method. It is a simple method in which the optimal strategy is found for each player in a payoff matrix of order 2 × 2 with no saddle point. The following steps are adopted in this method:
1. Calculate the difference between the two values of the first row, neglecting
the negative sign, and put it against the second row.
2. Calculate the difference between the two values of the second row, neglecting
the negative sign, and put it against the first row.
3. Repeat the above two steps for the two columns also.
The values obtained by swapping the differences, as stated above, are the optimal
relative frequencies for play for the strategies of both the players. These are then
converted into probabilities by dividing each by their sum.
Note: This method cannot be used to solve a 2 × 2 game having a saddle point.
Example 12.6: The following is the payoff matrix of two competitor companies in
terms of their advertising plan.

                                      Company B
                           Normal                Special
                           Advertisement         Advertisement
Company A                      B1                    B2
Normal Advertisement    A1     12                    15
Special Advertisement   A2     14                    10

Suggest the optimal strategies for the two companies and the net outcome.
Solution: The payoff matrix has no saddle point and so mixed strategies and
arithmetic methods can be used.
The solution reveals that Company A should adopt strategy A1, 57% of the
time and A2, 43% of the time. In the same way, Company B should adopt strategy
B1, for 71% of the time and B2, 29% of the time.
We can now calculate the expected gain for Company A as follows:
(i) 12 × 4/7 + 14 × 3/7 = 90/7, when Company B adopts B1
(ii) 15 × 4/7 + 10 × 3/7 = 90/7, when Company B adopts B2

We proceed as follows:

                    Company B
Company A         B1      B2
    A1            12      15        14 − 10 = 4      p(A1) = 4/(4 + 3) = 4/7
    A2            14      10        15 − 12 = 3      p(A2) = 3/(4 + 3) = 3/7
              15 − 10 = 5     14 − 12 = 2
        p(B1) = 5/(5 + 2) = 5/7        p(B2) = 2/(5 + 2) = 2/7

The expected loss for Company B is:
(i) 12 × 5/7 + 15 × 2/7 = 90/7, when Company A adopts A1
(ii) 14 × 5/7 + 10 × 2/7 = 90/7, when Company A adopts A2
Thus, the net outcome (value of the game) is 90/7.
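The same short-cut can be checked numerically with the illustrative oddments_2x2 sketch given earlier:

# Checking Example 12.6 with the illustrative oddments_2x2 sketch.
x, y, value = oddments_2x2([[12, 15], [14, 10]])
print(x)      # [0.5714..., 0.4285...]  ->  A1 : A2 = 4/7 : 3/7
print(y)      # [0.7142..., 0.2857...]  ->  B1 : B2 = 5/7 : 2/7
print(value)  # 12.857...               ->  90/7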

12.4 GRAPHIC SOLUTION OF 2 × n AND m × 2 GAMES

The graphic method can only be used in games with no saddle point and having payoff m × n matrices where either m or n is two. The graphic method enables us to substitute a much simpler 2 × 2 matrix for the original m × 2 or 2 × n matrix. We will illustrate this procedure with a 2 × 5 game.
We apply the graphic short-cut by plotting, on two different vertical axes, the two payoffs corresponding to each of the five columns. The payoff numbers in the first row are plotted on axis 1 and those in the second row on axis 2, which should be drawn at some distance away from the first axis but parallel to it, as shown in the following figure:
[Figure: Two parallel vertical axes, Axis 1 and Axis 2, each scaled from −6 to 6. The two payoffs of each column are plotted on the two axes and joined by a straight line; point A (6) on Axis 1 and point B (3) on Axis 2 mark the first column. The segments bounding the figure from the bottom meet at the points K, T and L.]
Thus, the 2 payoff numbers 6 and 3 in the first column are denoted
respectively by point A on axis 1 and point B on axis 2. Line AB then denotes Y’s
move of the first column. By plotting the payoff numbers of each of the remaining
4 columns on the 2 axes, we obtain five lines like the line AB, which correspond to
the given 5 moves of Y.
If, using a thick line, we draw the segments which bound the figure from the bottom, namely the segments AT and LT, and mark the highest point (T) on this boundary, the two lines passing through it identify the two critical moves of Y, which, combined with the two of X, yield the following 2 × 2 matrix:

          Y
X   [ −1   −3 ]
    [ −4   −1 ]

The optimal strategies now can be determined in the way explained above.
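Numerically, the graphic construction for a 2 × n game amounts to scanning the lower envelope of the column lines and taking its highest point. The sketch below is a rough, illustrative grid search (not an exact construction), applied to the 2 × 5 game described above with the columns as reconstructed here:

# Sketch: locating the maximin point of a 2 x n game by scanning the lower envelope.
# This is an approximate, illustrative procedure, not an exact algorithm.

def lower_envelope_maximin(columns, steps=10001):
    best_x, best_value = None, float("-inf")
    for k in range(steps):
        x = k / (steps - 1)                       # probability of playing row 1
        worst = min(a * x + b * (1 - x) for a, b in columns)
        if worst > best_value:
            best_x, best_value = x, worst
    return best_x, best_value

# The 2 x 5 game discussed above, columns written as (row-1 payoff, row-2 payoff)
cols = [(6, 3), (3, -2), (-1, -4), (0, 2), (-3, -1)]
print(lower_envelope_maximin(cols))   # approximately (0.6, -2.2), i.e., x = 3/5, value = -11/5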
Example 12.7: Determine the optimum strategies and the value of the game from
the following payoff matrix concerning a 2 person 4×2 game.

          Y
    [ −6   −2 ]
    [ −3   −4 ]
X
    [  2   −9 ]
    [ −7   −1 ]

Solution: There is no saddle point in the given game. This game also cannot be
reduced by dominance because there is no row which is always preferred by X
irrespective of what Y might do.
Player X can actually think of this 4 × 2 game as six sub-games, each of size 2 ×
2 (because he can choose not to play any two of his rows if he so desires). These
six sub-games can be listed as follows:

Sub-game No. (i): rows 1 and 2
              Y
            2/5    3/5
   1/5   [ −6     −2 ]
X
   4/5   [ −3     −4 ]
The strategies have been noted against this payoff matrix.
The value of the game is (1/5)(−6) + (4/5)(−3) = −6/5 − 12/5 = −18/5 = −3.60

Sub-game No. (ii): rows 1 and 3
              Y
            7/15   8/15
  11/15  [ −6     −2 ]
X
   4/15  [  2     −9 ]
The strategies have been noted against this payoff matrix.
The value of the game is (11/15)(−6) + (4/15)(2) = −66/15 + 8/15 = −58/15 ≈ −3.87

Sub-game No. (iii): rows 1 and 4
          Y
X   [ −6   −2 ]
    [ −7   −1 ]
There is a saddle point here. Hence, the value of this sub-game is −6.

Sub-game No. (iv): rows 2 and 3
          Y
X   [ −3   −4 ]
    [  2   −9 ]
There is a saddle point here. Hence, the value of this sub-game is −4.

Sub-game No. (v): rows 2 and 4
              Y
            3/7    4/7
   6/7   [ −3     −4 ]
X
   1/7   [ −7     −1 ]
The strategies have been noted against this payoff matrix.
The value of the game is (6/7)(−3) + (1/7)(−7) = −18/7 − 7/7 = −25/7 ≈ −3.57

Sub-game No. (vi): rows 3 and 4
              Y
            8/17   9/17
   6/17  [  2     −9 ]
X
  11/17  [ −7     −1 ]
The strategies have been noted against this payoff matrix.
The value of the game is (6/17)(2) + (11/17)(−7) = 12/17 − 77/17 = −65/17 ≈ −3.82
If we look at the values of all these sub-games, we find that, the values being negative, Y will win and X will lose in all the cases. Player X, who is making the choice, will naturally prefer the sub-game with the smallest negative value, and this is sub-game No. (v) in our example. Thus, X will play a two-row mixed strategy between the second and fourth rows of the original game and he will expect to lose an average of 3.57 points per play of the game, which will be his minimum possible loss. Player Y will also adopt a strategy consisting of the same number of columns. In brief, the strategies of sub-game No. (v) will be adopted by players X and Y, with a game value equal to −3.57 in the case of the given example.
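Player X's reasoning, namely evaluating every 2 × 2 sub-game and keeping the one with the least damaging value, can be automated. The sketch below uses illustrative names and the 2 × 2 value formula, falling back to the saddle-point value when one exists:

# Sketch: choosing the best 2 x 2 sub-game of an m x 2 game for the row player.
from itertools import combinations

def value_2x2(m):
    (a, b), (c, d) = m
    maximin = max(min(a, b), min(c, d))
    minimax = min(max(a, c), max(b, d))
    if maximin == minimax:                      # saddle point exists
        return maximin
    den = (a + d) - (b + c)
    return (a * d - b * c) / den                # mixed-strategy value

rows = [(-6, -2), (-3, -4), (2, -9), (-7, -1)]  # the 4 x 2 game of Example 12.7
best = max(combinations(range(4), 2),
           key=lambda pair: value_2x2([rows[pair[0]], rows[pair[1]]]))
print(best, value_2x2([rows[best[0]], rows[best[1]]]))
# (1, 3) -3.571..., i.e., rows 2 and 4 with value -25/7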
Solution through Graphic Method
The above example can also be easily worked out with the help of the graphic
method as follows.
The case of payoff matrices having only two columns but more than two
rows is similar, except that in the diagram, we thicken the line segments which
bound the figure from the top and take the lowest point on this boundary. The
following diagram shows the payoff numbers from each row represented as points
on two vertical axes, 1 and 2.

[Figure: The two payoffs of each row are plotted on two parallel vertical axes, Axis 1 and Axis 2, and joined by lines B1 to B4. The segments bounding the figure from the top meet at the points K, P, M and L; the lowest point of this upper boundary is M.]
Thus, line B1 joins the first payoff numbers −6 and −2 of the first row. Similarly, the other lines have been drawn representing the payoff numbers in the 2nd, 3rd and 4th rows. The segments KP, PM and ML, drawn in thick lines, bound the figure from the top, and their lowest point M, through which two lines pass, defines the following 2 × 2 matrix relevant for our purpose:

          Y
X   [ −3   −4 ]
    [ −7   −1 ]

The optimal strategies can now be determined as usual.
Check Your Progress
1. How can we solve 2 × 2 games without saddle points?
2. Explain about the strategy used in game theory.
3. Define the term strategy combination.
4. Elaborate on the term mixed strategies.
5. What do you understand by the totally mixed strategy?

12.5 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Consider a 2 × 2 two-person zero-sum game without any saddle point, having the payoff matrix for player A as
        B1    B2
A1   [ a11   a12 ]
A2   [ a21   a22 ]
The optimum mixed strategies are
SA = ( A1  A2 ; p1  p2 )  and  SB = ( B1  B2 ; q1  q2 )
where
p1 = (a22 − a21) / [(a11 + a22) − (a12 + a21)],   p1 + p2 = 1, so p2 = 1 − p1
q1 = (a22 − a12) / [(a11 + a22) − (a12 + a21)],   q1 + q2 = 1, so q2 = 1 − q1
The value of the game, ν = (a11 a22 − a12 a21) / [(a11 + a22) − (a12 + a21)]

2. In game theory, the strategy of a player in a game is a complete plan of action for any situation that may occur. This determines the complete behaviour of the player, and the player's strategy determines the action that the player will take at any stage of the game.
3. A strategy profile is also sometimes termed a strategy combination. It is a set of strategies for each player that specifies all actions in a game. It must include one and only one strategy for every player. Sometimes the concept of a strategy is mistakenly confused with that of a move. A move is an action taken by a player at some point during the play of a game, whereas a strategy is a complete algorithm for playing the game because it tells the player what to do in every possible situation throughout the game. A player's strategy set describes what strategies are available for playing the game. Strategies are of two types, pure and mixed.
4. A mixed strategy is an assignment of a probability to each pure strategy. This allows a player to randomly select a pure strategy. Since probabilities are continuous, there are infinitely many mixed strategies available to a player, even if the strategy set is finite. Certainly, a pure strategy can be considered as a degenerate case of a mixed strategy in which that specific pure strategy is selected with probability 1 and every other strategy with probability 0.
5. A totally mixed strategy is a mixed strategy in which the player assigns a strictly positive probability to every pure strategy. Totally mixed strategies are important for equilibrium refinement.

12.6 SUMMARY

• In game theory, the strategy of a player in a game is a complete plan of action for any situation that may occur. This determines the complete behaviour of the player, and the player's strategy determines the action that the player will take at any stage of the game.
• A strategy profile is also sometimes termed a strategy combination. It is a set of strategies for each player that specifies all actions in a game. It must include one and only one strategy for every player. Sometimes the concept of a strategy is mistakenly confused with that of a move.
• A mixed strategy is an assignment of a probability to each pure strategy. This allows a player to randomly select a pure strategy. Since probabilities are continuous, there are infinitely many mixed strategies available to a player, even if the strategy set is finite. Certainly, a pure strategy can be considered as a degenerate case of a mixed strategy in which that specific pure strategy is selected with probability 1 and every other strategy with probability 0.
• A totally mixed strategy is a mixed strategy in which the player assigns a strictly positive probability to every pure strategy. Totally mixed strategies are important for equilibrium refinement.

12.7 KEY WORDS

• Strategy: The strategy of a player in a game is a complete plan of action for any situation that may occur.
• Strategy combination: It is a set of strategies for each player that specifies all actions in a game.
• Mixed strategy: A mixed strategy is an assignment of a probability to each pure strategy.
• Totally mixed strategy: A totally mixed strategy is a mixed strategy in which the player assigns a strictly positive probability to every pure strategy.
• Graphic method: The graphic method can only be used in games with no saddle point and having payoff m × n matrices where either m or n is two.
12.8 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Explain 2 × 2 games without saddle point.
2. What are mixed strategies?
3. Define the meaning of the term ‘game’.
4. Write the properties of game.
5. State about the constant-sum games.
Long-Answer Questions
1. Elaborate 2 × 2 games without saddle point with the help of examples.
2. Discuss briefly about the standard conventions that are used in game theory.
3. Explain the term 'game' and its types, giving examples.
4. Use graphic methods in solving the following games:
B
 3 3 4 
(a) A 
 1 1 3 
Y
 2 5
(b) X 
 4 1
5. The following matrix represents the payoff to P1 in a rectangular game
between two persons P1 and P2.
P2
 8 15 4 –2 
P1 19 15 17 16 
 0 20 15 5

By the notion of dominance, reduce the game to a 2 × 4 game and solve it graphically.
6. Solve the following game:
Player B
I II III IV
I 3 2 4 0
II  3 4 2 4 
III  4 2 4 0
 
IV  0 4 0 8
12.9 FURTHER READINGS
Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.
Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.
Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata
McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi:
Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.


UNIT 13 DOMINANCE PROPERTY


Structure
13.0 Introduction
13.1 Objectives
13.2 Dominance Property
13.2.1 Rule for Dominance
13.3 Principle of Dominance and General Solution of m × n Rectangular
Games
13.4 Answers to Check Your Progress Questions
13.5 Summary
13.6 Key Words
13.7 Self Assessment Questions and Exercises
13.8 Further Readings

13.0 INTRODUCTION

In game theory, dominance occurs when one strategy is better than another strategy
for one player, no matter how that player’s opponents may play. Many simple
games can be solved using dominance. The opposite, intransitivity, occurs in games
where one strategy may be better or worse than another strategy for one player,
depending on how the player’s opponents may play. When a player tries to choose
the “Best” strategy among a multitude of options, that player may compare two strategies A and B to see which one is better. A strategy is a complete contingent plan for a player in the game. A complete contingent plan is a full specification of a player’s behaviour, describing each action a player would take at every possible decision
point. Because information sets represent points in a game where a player must
make a decision, a player’s strategy describes what that player will do at each
information set.
The concept of dominance is very useful for reducing the size of the game.
Applying this concept, we can convert a bigger game into a smaller game. Hence,
we must examine the possibility of reducing the game size using the principle of
dominance. If all the elements in a column are greater than or equal to the
corresponding elements in another column, then that column is dominated. Similarly,
if all the elements in a row are less than or equal to the corresponding elements in
another row, then that row is dominated. Dominated rows or columns may be
deleted which reduces the size of the game. Always look for dominance when
solving a game.
In this unit, you will study about the dominance property, rule for dominance,
principal of dominance, and general solution of m × n rectangular games.

13.1 OBJECTIVES

After going through this unit, you will be able to:
• Understand the dominance property
• Explain the rule for dominance
• Define the principle of dominance
• Analyse the general solution of m × n rectangular games

13.2 DOMINANCE PROPERTY

The concept of dominance is very useful for reducing the size of the game. Applying
this concept, we can convert a bigger game into a smaller game. Hence, we must
examine the possibility of reducing the game size using the principle of dominance.
13.2.1 Rule for Dominance
(i) If all the elements in a column are greater than or equal to the corresponding
elements in another column, then that column is dominated.
(ii) Similarly, if all the elements in a row are less than or equal to the corresponding
elements in another row, then that row is dominated.
Dominated rows or columns may be deleted which reduces the size of
the game. Always look for dominance when solving a game.
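These two rules translate directly into code. The sketch below uses illustrative names and applies only plain row and column dominance (not dominance by linear combinations of rows or columns):

# Sketch: iterative deletion of dominated rows and columns (plain dominance only).

def reduce_by_dominance(matrix):
    m = [row[:] for row in matrix]
    changed = True
    while changed:
        changed = False
        # A row is dominated if every element is <= the corresponding element of another row.
        for i in range(len(m)):
            if any(k != i and all(m[i][j] <= m[k][j] for j in range(len(m[0])))
                   for k in range(len(m))):
                del m[i]
                changed = True
                break
        if changed:
            continue
        # A column is dominated if every element is >= the corresponding element of another column.
        for j in range(len(m[0])):
            if any(l != j and all(m[i][j] >= m[i][l] for i in range(len(m)))
                   for l in range(len(m[0]))):
                for row in m:
                    del row[j]
                changed = True
                break
    return m

print(reduce_by_dominance([[1, 7, 2], [6, 2, 7], [5, 1, 6]]))   # [[1, 7], [6, 2]]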
Example 13.1: Determine the optimum strategies and the value of the game from
the following payoff matrix concerning a two person 4 × 2 game:
          Y
    [ −6   −2 ]
    [ −3   −4 ]
X
    [  2   −9 ]
    [ −7   −1 ]

Solution: There is no saddle point in the given game. This game also cannot be reduced by dominance because there is no row which is always preferred by X irrespective of what Y might do.
Player X can actually think of this 4 × 2 game as being six sub-games each
of size 2 × 2 (because he can choose not to play any two of his rows if he so
desires). These six sub-games can be enlisted as follows:
Sub-game No. (i): rows 1 and 2
              Y
            2/5    3/5
   1/5   [ −6     −2 ]
X
   4/5   [ −3     −4 ]
Strategies have been noted in this payoff matrix.
Value of the game = (1/5)(−6) + (4/5)(−3) = −6/5 − 12/5 = −18/5 = −3.60

Sub-game No. (ii): rows 1 and 3
              Y
            7/15   8/15
  11/15  [ −6     −2 ]
X
   4/15  [  2     −9 ]
Strategies have been noted in this payoff matrix.
Value of the game = (11/15)(−6) + (4/15)(2) = −66/15 + 8/15 = −58/15 ≈ −3.87

Sub-game No. (iii): rows 1 and 4
          Y
X   [ −6   −2 ]
    [ −7   −1 ]
There is a saddle point here. Hence, the value of this sub-game is −6.

Sub-game No. (iv): rows 2 and 3
          Y
X   [ −3   −4 ]
    [  2   −9 ]
There is a saddle point here. Hence, the value of this sub-game is −4.

Sub-game No. (v): rows 2 and 4
              Y
            3/7    4/7
   6/7   [ −3     −4 ]
X
   1/7   [ −7     −1 ]
Strategies have been noted in this payoff matrix.
Value of the game = (6/7)(−3) + (1/7)(−7) = −18/7 − 7/7 = −25/7 ≈ −3.57

Sub-game No. (vi): rows 3 and 4
              Y
            8/17   9/17
   6/17  [  2     −9 ]
X
  11/17  [ −7     −1 ]
Strategies have been noted in this payoff matrix.
Value of the game = (6/17)(2) + (11/17)(−7) = 12/17 − 77/17 = −65/17 ≈ −3.82
If we look at the values of all these sub-games we find that, the values being
negative, Y will win and X will lose in all the cases. Player X who is making the
choice will naturally prefer the sub-game with the smallest negative value and this
is sub-game No. (v) in the example. Thus X will play a two-row mixed strategy
between the second and fourth rows of the original game and he will expect to lose an average of 3.57 points per play of the game, which will be his minimum possible loss. Being an intelligent opponent, player Y will also adopt a strategy consisting of the same number of columns. In brief, the strategies of sub-game No. (v) will be adopted by players X and Y, with a game value equal to −3.57.
Example 13.2: Determine the optimum strategies and the value of the game from
the following 2 × m payoff matrix game for X:
             Player Y
Player X   [ 6    3   −1   0   −3 ]
           [ 3   −2   −4   2   −1 ]
Solution: There is no saddle point in the above game. Hence, mixed strategies
will have to be adopted by the two players. Further, if we look at the given payoff
matrix from Y’s point of view, we find that he chooses not to play columns 1, 2 or
4 since either column 3 or column 5 (both with two negative payoffs) offers a
better alternative irrespective of the actions of Player X. We can say then that
columns 1, 2 and 4 are dominated by the remaining two columns viz., number 3
and 5, and hence, would never be played by Y. As soon as Y decides never to play
his first, second and fourth columns, the game is reduced to size 2 × 2 as stated
below:
             Player Y
Player X   [ −1   −3 ]
           [ −4   −1 ]
Now the optimum strategies and the value of the game can easily be found by the short-cut method illustrated earlier. The calculations can be shown as follows.
The game as stated above:
          Y
X   [ −1   −3 ]
    [ −4   −1 ]

Step 1: Subtract the payoffs.
          Y
X   [ −1   −3 ]   2   (i.e., −1 − (−3) = 2)
    [ −4   −1 ]   3   (i.e., −1 − (−4) = 3)
      3     2
(i.e., −1 − (−4) = 3)   (i.e., −1 − (−3) = 2)

Step 2: Interchange the pairs.
          Y
X   [ −1   −3 ]   3
    [ −4   −1 ]   2
      2     3

Step 3: Put each number over the sum of the pair.
          Y
X   [ −1   −3 ]   3/(3 + 2)
    [ −4   −1 ]   2/(3 + 2)
   2/(2 + 3)   3/(3 + 2)

Step 4: Simplify the fractions and determine the optimum strategies.
             Y
           2/5    3/5
    3/5  [ −1     −3 ]
X
    2/5  [ −4     −1 ]

Value of the game
= (3/5)(−1) + (2/5)(−4)
= −3/5 − 8/5 = −11/5

Example 13.2 is an example of solving the problem by what is known as the method of dominance.

13.3 PRINCIPLE OF DOMINANCE AND GENERAL SOLUTION OF m × n RECTANGULAR GAMES

Sometimes, it is observed that one of the pure strategies of either player is always
inferior to at least one of the remaining ones. The superior strategies are said to
dominate the inferior ones. In such cases of dominance, we reduce the size of the
payoff matrix by deleting those strategies which are dominated by others. The
general rules for dominance are:
(i) If all the elements of a row, say kth row, are less than or equal to the
corresponding elements of any other row, say rth row, then the kth row is
dominated by the rth row.

(ii) If all the elements of a column, say kth column, are greater than or equal to

the corresponding elements of any other column, say rth column, then the
kth column is dominated by the rth column.
(iii) Dominated rows and columns may be deleted to reduce the size of the
NOTES
payoff matrix as the optimal strategies will remain unaffected.
(iv) If some linear combinations of some rows dominate the ith row, then the ith
row will be deleted. Similar arguments follow for columns.
Example 13.3: Using the principle of dominance, solve the following game:
              Player B
         [  3   −2    4 ]
Player A [ −1    4    2 ]
         [  2    2    6 ]
Solution: In the given payoff matrix, all the elements in the third column are greater than or equal to the corresponding elements in the first column. Therefore, column three is dominated by the first column. Delete column three. The reduced payoff matrix is given by
              Player B
         [  3   −2 ]
Player A [ −1    4 ]
         [  2    2 ]

Since no row (or column) dominates another row (or column), the 3 × 2 game can
now be solved by the graphical method. Since Player B wishes to minimize his
maximum loss, we find the lowest point of the upper boundary. The expected
payoff equations are then plotted as follows:
[Figure: The expected payoff lines A1, A2 and A3 are plotted between two parallel vertical axes, Axis I and Axis II. The upper boundary of the figure and its minimax (lowest) point are marked.]
The lowest point on the upper boundary is given by the intersection of lines A1 and A3. The solution of the original game thus reduces to that of the following 2 × 2 matrix:

      B1    B2
A1  [  3   −2 ]
A3  [  2    2 ]

The optimum strategies for A and B are given by
SA = ( A1  A2  A3 ; p1  0  p2 ), p1 + p2 = 1
SB = ( B1  B2 ; q1  q2 ), q1 + q2 = 1

p1 = (2 − 2) / [(3 + 2) − (−2 + 2)] = 0
p2 = 1 − p1 = 1 − 0 = 1
q1 = [2 − (−2)] / [(3 + 2) − (−2 + 2)] = 4/5
q2 = 1 − q1 = 1 − 4/5 = 1/5

Therefore, SA = ( A1  A2  A3 ; 0  0  1 ) and SB = ( B1  B2 ; 4/5  1/5 )

Value of the game, ν = [3 × 2 − (−2)(2)] / [(3 + 2) − (−2 + 2)] = 10/5 = 2

Example 13.4: Solve the following game:

              Player B
         [ 1   7   2 ]
Player A [ 6   2   7 ]
         [ 5   1   6 ]
Solution: Since all the elements in the third row are less than or equal to the
corresponding elements of the second row, therefore, the third row is dominated by
the second row. Delete this dominated row. The reduced payoff matrix is given by,
              Player B
Player A [ 1   7   2 ]
         [ 6   2   7 ]

The elements of the third column are greater than or equal to the corresponding
elements of the first column, which indicate that column three is dominated by
column one. This dominated column is deleted and the reduced payoff matrix is
given by,

              Player B
Player A [ 1   7 ]
         [ 6   2 ]
The reduced payoff matrix is a 2 × 2 matrix. The optimal strategies for players A and B are given by
SA = ( A1  A2  A3 ; p1  p2  0 ), p1 + p2 = 1
SB = ( B1  B2  B3 ; q1  q2  0 ), q1 + q2 = 1

p1 = (2 − 6) / [(2 + 1) − (7 + 6)] = −4/−10 = 2/5
p2 = 1 − 2/5 = 3/5
q1 = (2 − 7) / [(2 + 1) − (7 + 6)] = −5/−10 = 1/2
q2 = 1 − q1 = 1 − 1/2 = 1/2

The value of the game, ν = (2 × 1 − 7 × 6) / [(2 + 1) − (7 + 6)] = −40/−10 = 4.

The optimal strategies are given by
SA = ( A1  A2  A3 ; 2/5  3/5  0 )
SB = ( B1  B2  B3 ; 1/2  1/2  0 )

The value of the game is ν = 4.


Example 13.5: Is the following two-person zero-sum game stable? Solve the game.

              Player B
         [ −5   −10    9    0 ]
Player A [  6     7    8    1 ]
         [  8     7   15    1 ]
         [  3     4   −1    4 ]
Solution: Since the game has no saddle point, it is not stable. All the elements of the first row and of the second row are less than or equal to the corresponding elements of the third row. Hence, these two rows are dominated rows. Deleting these two rows from the payoff matrix, the reduced payoff matrix is given by

              Player B
Player A [ 8   7   15   1 ]
         [ 3   4   −1   4 ]

In this modified payoff matrix, we observe that all the elements of the second column are greater than or equal to the corresponding elements of the fourth column. Hence, this dominated column (the 2nd column) is deleted from the payoff matrix. The reduced payoff matrix is given by

Player A [ 8   15   1 ]
         [ 3   −1   4 ]

Now we observe that no row or column dominates another row or column. However, we note that a convex combination of the second and third columns is given by
(1/2)(15) + (1/2)(1) = 8 ≤ 8
(1/2)(−1) + (1/2)(4) = 3/2 ≤ 3
and hence the elements of the first column are greater than or equal to the corresponding elements of this combination. Deleting this dominated column, the reduced payoff matrix is given by

              Player B
Player A [ 15   1 ]
         [ −1   4 ]

SA = ( A3  A4 ; p1  p2 ), p1 + p2 = 1
p1 = [4 − (−1)] / [(15 + 4) − (1 − 1)] = 5/19
p2 = 1 − 5/19 = 14/19

SB = ( B3  B4 ; q1  q2 ), q1 + q2 = 1
q1 = (4 − 1) / [(15 + 4) − (1 − 1)] = 3/19
q2 = 1 − 3/19 = 16/19

The optimum strategy of the given payoff matrix is given by
SA = ( A1  A2  A3  A4 ; 0  0  5/19  14/19 )
SB = ( B1  B2  B3  B4 ; 0  0  3/19  16/19 )

and the value of the game is ν = [(4 × 15) − (1 × −1)] / 19 = 61/19.

Check Your Progress


1. Explain the dominance property.
2. What are the rules of dominance?
3. Define the principle of dominance.

13.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. The concept of dominance is very useful for reducing the size of the game.
Applying this concept, we can convert a bigger game into a smaller game.
Hence, we must examine the possibility of reducing the game size using the
principle of dominance.
2. (i) If all the elements in a column are greater than or equal to the
corresponding elements in another column, then that column is
dominated.
(ii) Similarly, if all the elements in a row are less than or equal to the
corresponding elements in another row, then that row is dominated.
Dominated rows or columns may be deleted which reduces the size of the
game. Always look for dominance when solving a game.
3. (i) If all the elements of a row, say kth row, are less than or equal to the
corresponding elements of any other row, say rth row, then the kth
row is dominated by the rth row.
(ii) If all the elements of a column, say kth column, are greater than or
equal to the corresponding elements of any other column, say rth
column, then the kth column is dominated by the rth column.
(iii) Dominated rows and columns may be deleted to reduce the size of
the payoff matrix as the optimal strategies will remain unaffected.
(iv) If some linear combinations of some rows dominate the ith row, then
the ith row will be deleted. Similar arguments follow for columns.

13.5 SUMMARY

• The concept of dominance is very useful for reducing the size of the game. Applying this concept, we can convert a bigger game into a smaller game. Hence, we must examine the possibility of reducing the game size using the principle of dominance.
• If all the elements in a column are greater than or equal to the corresponding elements in another column, then that column is dominated.
• Dominated rows or columns may be deleted, which reduces the size of the game. Always look for dominance when solving a game.
• Sometimes, it is observed that one of the pure strategies of either player is always inferior to at least one of the remaining ones. The superior strategies are said to dominate the inferior ones.
• If all the elements of a row, say the kth row, are less than or equal to the corresponding elements of any other row, say the rth row, then the kth row is dominated by the rth row.
• If all the elements of a column, say the kth column, are greater than or equal to the corresponding elements of any other column, say the rth column, then the kth column is dominated by the rth column.
• If some linear combination of some rows dominates the ith row, then the ith row will be deleted. Similar arguments follow for columns.

13.6 KEY WORDS

• Dominance: The concept of dominance is very useful for reducing the size of the game. Applying this concept, we can convert a bigger game into a smaller game.
• Rule of dominance: Dominated rows or columns may be deleted, which reduces the size of the game. Always look for dominance when solving a game.
• Principle of dominance: Sometimes, it is observed that one of the pure strategies of either player is always inferior to at least one of the remaining ones. The superior strategies are said to dominate the inferior ones.

13.7 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Explain about the dominance property.
2. List the general rules for dominance.
3. Describe the principle of dominance.
Long-Answer Questions

1. Discuss briefly the dominance property with the help of examples.


2. State the principle of dominance.
3. The following matrix represents the payoff to P1 in a rectangular game between two persons P1 and P2.

P2
 8 15 4 –2 
P1 19 15 17 16 
 0 20 15 5

By the notion of dominance, reduce the game to a 2 × 4 game and solve it graphically.
4. Solve the following game:
Player B
I II III IV
I 3 2 4 0
II  3 4 2 4 
III  4 2 4 0
 
IV  0 4 0 8
5. Using dominance solve the payoff matrix, given by,
(i) Player B
 2 2 4 1 
 6 1 12 3
Player A  
 3 2 0 6 
 
 2 3 7 7 
(ii) Player B
1 7 3 4
Player A  5 6 4 5 
7 2 0 3

13.8 FURTHER READINGS

Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.
Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.
Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata
McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi:
Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.

UNIT 14 NETWORK ANALYSIS: CPM AND PERT
Structure
14.0 Introduction
14.1 Objectives
14.2 Introduction to Network Concept
14.2.1 Development of Network Analysis - CPM and PERT
14.3 Network Analysis and Rules of Network Construction
14.3.1 Rules of Network Construction
14.3.2 Time Analysis
14.3.3 Network Diagram
14.4 Critical Path Method (CPM)
14.4.1 Computations for Critical Path
14.4.2 Applications of CPM Analysis
14.5 Programme Evaluation and Review Technique (PERT)
14.5.1 PERT Procedure
14.6 Comparison and Limitations of PERT and CPM
14.7 Answers to Check Your Progress Questions
14.8 Summary
14.9 Key Words
14.10 Self Assessment Questions and Exercises
14.11 Further Readings

14.0 INTRODUCTION

Network analysis is a method of planning and controlling projects by recording their interdependence in a diagrammatic form that enables each fundamental
problem involved to be tackled separately. Network analysis clearly shows the
interdependences between jobs to be performed in the context of a project and thus
enables people to see not only the overall plan but the ways in which their own
activities depend upon or influence those of others. It allows the total requirements
of men, materials, money, machinery and space resources to be readily
calculated and also indicates where the delaying of non-critical jobs (i.e., jobs
which do not immediately affect the duration of the project) may be used for
optimal utilisation of resources. Network analysis, as stated above, is a technique
related to sequencing problems which are concerned with minimizing some measure
or performance of the system such as the total completion time of the project, the
overall cost and so on. The technique is useful for describing the elements in a
complex situation for the purpose of designing, planning, coordinating, controlling and making decisions. Network analysis is especially suited for projects which are
not routine or repetitive and which will be conducted only once or a few times. A
network is a graphic representation of logically and sequentially connected arrows
and nodes, representing the activities and events, respectively of a project. An
event is the beginning or end point of an activity and is represented by a node.
Learn how to construct a network after going through the rules for network
construction.
Network scheduling is a technique used for planning and scheduling large
projects in the field of construction, maintenance, fabrication, etc. It is a tool for
minimizing problems in the execution and controlling critical factors in a project.
Program Evaluation Review Technique (PERT) and Critical Path Method (CPM)
are two planning and control techniques for keeping a project schedule on track
to complete within the scheduled time.
The Critical Path Method (CPM), or Critical Path Analysis (CPA), is
an algorithm for scheduling a set of project activities. It is commonly used in
conjunction with the Program Evaluation and Review Technique (PERT). A critical
path is determined by identifying the longest stretch of dependent activities and
measuring the time required to complete them from start to finish. The Program
(or project) Evaluation and Review Technique (PERT) is a statistical tool used
in project management, which was designed to analyse and represent
the tasks involved in completing a given project.
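As a preview of the computations developed later in this unit, 'the longest stretch of dependent activities' can be sketched as a longest-path calculation over a small activity list. The data and function below are purely illustrative and are not taken from the text:

# Sketch: finding the critical (longest) path length of a small activity network.
# The activity list is an illustrative example only.
from functools import lru_cache

activities = {
    # activity: (duration, list of predecessors)
    "A": (3, []),
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
}

@lru_cache(maxsize=None)
def earliest_finish(act):
    duration, preds = activities[act]
    return duration + max((earliest_finish(p) for p in preds), default=0)

print(max(earliest_finish(a) for a in activities))   # 12, via the critical path A -> B -> D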
In this unit, you will study about the concept of network scheduling by
PERT/ CPM, network basic components, drawing network, Critical Path Analysis
or CPM, PERT analysis, and the distinction between PERT and CPM.

14.1 OBJECTIVES
After going through this unit, you will be able to:
• Know how to do network analysis for large projects
• Understand PERT and CPM for handling projects
• Construct a network of activities and events for analysis
• Understand the importance of CPM and PERT analysis
• Explain the PERT procedure
• Compare PERT and CPM

14.2 INTRODUCTION TO NETWORK CONCEPT

Meaning and Objectives of Network Analysis


Network analysis is a method of planning and controlling projects by recording their
interdependence in a diagrammatic form that enables each fundamental problem
involved to be tackled separately. The main objectives of network analysis are:
1. To foster increased orderliness and consistency in the planning and evaluating
of all areas in the project
2. To provide an automatic mechanism for the identification of potential trouble spots in all areas, which arise as a result of a failure in one of them.
3. To structure a method to give operational flexibility to the programme by
allowing for experimentation in a simulated sense.
4. To effect speedy handling and analysis of the integrated data, thus allowing
for expeditious correction of recognised trouble areas in project management.
Network analysis, thus, plays an important role in project management.
Through network analysis, which is a graphic depiction of ‘Activities’ and
‘Events’ related to a project, planning, scheduling and control of project
becomes easier and effective.
Steps Involved in Network Analysis
Network analysis achieves its purpose in three broad steps:
1. They present in diagrammatic form, a picture of all the jobs (or activities) to
be accomplished and of their dependence on one another. The way in which
this is done is to construct what is known as a ‘Network Diagram’ in which
each job is represented by an arrow on the diagram. The way in which the
arrows are linked indicates the dependencies of the jobs on each other.
2. They consider the limitations imposed by the availability of resources viz.,
of men, machine, money and material and in view of these estimate the time
required to do each job.
3. They apply the estimated job time to the network diagram and then analyse
the network. Analysis in this case means the calculation of the total length of
time involved in each path through the network.
Significance of Network Analysis
1. Network analysis clearly shows the interdependences between jobs to be
performed in context of a project and thus enables people to see not only
the overall plan but the ways in which their own activities depend upon or
influence those of others.
2. By splitting up the project into smaller activities, it assists in the estimation of their durations, thereby leading to more accurate target dates.
3. It enables stricter controls to be applied since any deviation from schedule
is quickly noticed.
4. It allows the total requirements of men, materials, money, machinery and
space resources to be readily calculated and also indicates where the delaying
of non-critical jobs (i.e., jobs which do not immediately affect the duration
of the project) may be used for optimal utilisation of resources.
5. Its identification of the critical path has two advantages: (i) If the completion
date has to be advanced, attention can be concentrated on speeding up the

relatively few 'Critical' jobs, (ii) Money is not wasted on speeding up 'Non-Critical' jobs.
6. It allows modifications of policy to be built easily and their impact can also
be assessed quickly.
7. It allows schedules to be based on considerations of costs so as to complete
projects in a given time at minimum expense.
8. It separates the planning of the sequence of jobs from the scheduling of
times for the jobs and thus it makes planning and scheduling effective.
Limitations of Network Analysis
The only real disadvantage of network analysis as a planning tool is that it is a
tedious and exacting task if attempted manually. The calculations are done in terms
of the sequence of activities and, if this is all that is required, a project involving
several hundred activities may be attempted manually. However, the possibility of
error is high, and if the results are to be sorted, the cost of manual operation
rapidly becomes uneconomic. The consideration of various alternative plans also
becomes impossible, because of the large volumes involved.
But now we have standard computer programmes for network analysis,
which can handle project plans of upto 5000 activities and more, and produce
‘Output’ in various forms. Even then it must be emphasised, that a computer only
assists with the calculations and with the printing of plans of operation sorted into various orders. The project manager is still responsible for the planning and must still make the necessary decisions based upon the information supplied by the computer. The computer cannot take over this responsibility. Equally important is
the fact that the computer output is only as accurate as its input which is supplied
in the first instance by human beings.
14.2.1 Development of Network Analysis - CPM and PERT
Network analysis, as stated above, is a technique related to sequencing problems
which are concerned with minimizing some measure of performance of the system
such as the total completion time of the project, the overall cost and so on. The
technique is useful for describing the elements in a complex situation for the purpose
of designing, planning, coordinating controlling and making decision. Network
analysis is specially suited for projects which are not routine or repetitive and
which will be conducted only once or a few times. Two most popular forms of this
technique now used in many scheduling situations are the Critical Path Method (or
simply CPM) and the Programme Evaluation and Review Technique (popularly
known as PERT).
Critical Path Method or CPM was developed in 1956 at the E.I. du Pont
de Nemours & Co., USA, to aid in the scheduling of routine plant overhaul
maintenance and construction work. This method differentiates between planning
and scheduling. Planning refers to the determination of activities that must be
accomplished and the order in which such activities should be performed to achieve
the objective of the project. Scheduling refers to the introduction of time into the
plan thereby creating a time table for the various activities to be performed. CPM
uses two time and two cost estimates for each activity (one time-cost estimate for
the normal situation and the other estimate for the crash situation) but does not
incorporate any statistical analysis in determining such time estimates. CPM
operates on the assumption that there is a precise known time that each activity in
the project will take.
Programme Evaluation and Review Technique or PERT was first
developed in 1958 for use in defence projects specifically in the development of
Polaris fleet ballistic missile programme. But now this technique is very popular in
the hands of project planner and controller. PERT, now assists a business manager
in planning and controlling a project. It allows a manager to calculate the expected
total amount of time that the entire project will take to complete at the stage of
formulation and planning a project and at the same time highlights the critical or the
bottleneck activities in the project so that a manager may either allocate more
resources for them or keep a careful watch on such activities as the project
progresses. In PERT, we usually assume that the time to perform each activity is
uncertain and as such three time estimates (the optimistic, the pessimistic and the
most likely) are used. PERT is often described as an approach of multiple time
estimates to scheduling problems of long-range research and development projects.
PERT incorporates the statistical analysis in determining time estimates and enables
the determination of the probabilities concerning the time by which each activity as
well as the entire project would be completed. As such it can be taken as an
advancement over the CPM. PERT is equally unique as a control device for it
assists the management in controlling a project, once it has begun, by calling attention
as a result of constant review to such delays in activities which might cause a delay
in the project’s completion date.

14.3 NETWORK ANALYSIS AND RULES OF


NETWORK CONSTRUCTION

Network scheduling is a technique used for planning and scheduling large projects in the fields of construction, maintenance, fabrication, purchasing of computer systems, etc. The technique is a method of minimizing trouble spots, such as production delays and interruptions, by determining critical factors and coordinating various parts of the overall job.
There are two basic planning and control techniques that utilize a network to
complete a predetermined project or schedule. These are Programme Evaluation
Review Technique (PERT) and Critical Path Method (CPM).
A project is defined as a combination of interrelated activities all of which
must be executed in a certain order for its completion.
The work involved in a project can be divided into three phases
corresponding to the management functions of planning, scheduling and control.
Planning: This phase involves setting the objectives of the project and the
assumptions to be made. Also it involves the listing of tasks or jobs that must be
performed to complete a project under consideration. In this phase, men, machines
and materials required for the project, in addition to the estimates of costs and
duration of the various activities of the project, are also determined.
Scheduling: This consists of laying the activities according to the precedence
order and determining,
(i) The start and finish times for each activity.
(ii) The critical path on which the activities require special attention.
(iii) The slack and float for the non-critical paths.
Controlling: This phase is exercised after the planning and scheduling, which
involves the following:
(i) Making periodical progress reports.
(ii) Reviewing the progress.
(iii) Analysing the status of the project.
(iv) Management decisions regarding updating, crashing and resource allocation.
Basic Terms
To understand the network techniques, of which both CPM and PERT are special applications, one should be familiar with a few basic terms.
Network: It is the graphic representation of logically and sequentially connected
arrows and nodes representing activities and events of a project. Networks are
also called arrow diagrams.
Activity: An activity represents some action and is a time consuming effort
necessary to complete a particular part of the overall project. Thus, each and
every activity has a point of time where it begins and a point where it ends.
It is represented in the network by an arrow drawn from the tail event i to the head event j; the label on the arrow, here A, is called the activity.


Event: The beginning and end points of an activity are called events or nodes.
Event is a point in the time and does not consume any resource. It is represented
by a numbered circle. The head event called the jth event has always a number
higher than the tail event called the ith event.
Merge and Burst Events: It is not necessary for an event to be the ending event of only one activity; it can be the ending event of two or more activities. Such an event is defined as a merge event.

If the event happens to be the beginning event of two or more activities, it is defined as a burst event.

Preceding, Succeeding and Concurrent Activities: Activities which must be accomplished before a given event can occur are termed as preceding activities.
Activities which cannot be accomplished until an event has occurred are termed as succeeding activities.
Activities which can be accomplished concurrently are known as concurrent activities.
This classification is relative, which means that one activity can be preceding
to a certain event, and the same activity can be succeeding to some other event or
it may be a concurrent activity with one or more activities.
Dummy Activity: Certain activities, which neither consume time nor resources
but are used simply to represent a connection or a link between the events are
known as dummies. It is shown in the network by a dotted line. The purpose of
introducing dummy activity is as follows:
(i) To maintain uniqueness in the numbering system as every activity may have
distinct set of events by which the activity can be identified.
(ii) To maintain a proper logic in the network.

Common Errors
Following are the three common errors in a network construction:
Looping (Cycling): Drawing an endless loop in a network diagram is known as the error of looping, also called the cycling error. A loop is formed if an activity is represented as going back in time.

Dangling: To disconnect an activity before the completion of all the activities in a network diagram is known as dangling.

Redundancy: If a dummy activity is the only activity emanating from an event, it can be eliminated; such an unnecessary dummy is known as redundancy.

14.3.1 Rules of Network Construction


There are a number of rules in connection with the handling of events and activities
of a project network that should be followed.
(i) Try to avoid arrows which cross each other.
(ii) Use straight arrows.
(iii) No event can occur until every activity preceding it has been completed.
(iv) An event cannot occur twice, i.e., there must be no loops.
(v) An activity succeeding an event cannot be started until that event has
occurred.
(vi) Use arrows from left to right. Avoid mixing two directions. Vertical and
standing arrows may be used if necessary.
(vii) Dummies should be introduced only if it is extremely necessary.
(viii) The network has only one entry point called the start event and one point of
emergence called the end or terminal event.
Numbering the Events (Fulkerson's Rule)
After the network is drawn in a logical sequence, every event is assigned a number.
The number sequence must reflect the flow of the network. In numbering the
events the following rules should be observed:
(i) Event numbers should be unique.
(ii) Event numbering should be carried out on a sequential basis from left to
right.
(iii) The initial event which has all outgoing arrows with no incoming arrow is
numbered as 1.
(iv) Delete all arrows emerging from all the numbered events. This will create at
least one new start event out of the preceding events.
(v) Number all new start events 2, 3 and so on. Repeat this process until the terminal event, which has no successor activity, is reached. Number the terminal node suitably.
Note: The head of an arrow should always bear a number higher than the one assigned to the
tail of the arrow.
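Fulkerson's rule is, in effect, a level-by-level numbering of the events: the start events receive the smallest numbers, their outgoing arrows are deleted, and the newly created start events are numbered next. The following Python sketch is only an illustration of this idea; the function name fulkerson_numbers and the small activity list at the end are hypothetical, not taken from the text or from any library.

from collections import defaultdict

def fulkerson_numbers(activities):
    # activities: list of (tail, head) pairs using provisional labels.
    succ = defaultdict(set)      # outgoing arrows of each event
    indeg = defaultdict(int)     # number of incoming arrows of each event
    events = set()
    for tail, head in activities:
        succ[tail].add(head)
        indeg[head] += 1
        events.update((tail, head))

    number, next_no = {}, 1
    start = sorted(e for e in events if indeg[e] == 0)   # initial event(s)
    while start:
        new_start = []
        for e in start:
            number[e] = next_no
            next_no += 1
            for h in succ[e]:            # delete arrows emerging from e
                indeg[h] -= 1
                if indeg[h] == 0:        # h has become a new start event
                    new_start.append(h)
        start = sorted(new_start)
    return number

# Provisional labels s, p, q, r, t for the events of a small network
print(fulkerson_numbers([('s', 'p'), ('s', 'q'), ('p', 'r'), ('q', 'r'), ('r', 't')]))
# {'s': 1, 'p': 2, 'q': 3, 'r': 4, 't': 5}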
Construction of network
Example 14.1: Construct a network for the project whose activities and their
precedence relationships are as given below:

Activities A B C D E F G H I
Immediate Predecessor – A A – D B,C,E F D G,H

Solution: From the given constraints, it is clear that A and D are the starting activities and I is the terminal activity. B and C start from the same event and are both predecessors of the activity F; E is also a predecessor of F, while D precedes both E and H. Hence, we have to introduce a dummy activity.


D1 is the dummy activity.


Finally we have the following network:

Example 14.2: Construct a network for each of the projects whose activities and
their precedence relationships are given below.

Activity A B C D E F G H I J K
Predecessor – – – A B B C D E H,I F,G

Solution: A, B, C are concurrent activities as they start simultaneously. B becomes the predecessor of activities E and F. Since the activities J and K each have two preceding activities, dummies may be introduced (if needed).


Finally we have,
[Network diagram: events 1 to 9 joined by the activities A to K as per the precedence table above.]

Example 14.3: Construct a network of the project whose activities are given as
below.
A<C, D, I; B<G, F; D<G, F; F<H, K; G, H<J; I, J, K<E
Solution: Given A<C which means that C cannot be started until A is completed,
i.e., A is the preceding activity to C. The above constraints can be given in the
following table:
Activity A B C D E F G H I J K
Predecessor – – A A I, J, K B,D B,D F A G,H F

A and B are the starting activities, and E is the terminal activity.

Finally we have,

Example 14.4: Construct the network for the project whose activities and
precedence relationship is given below. Show also the dummy activity.

Activities A B C D E F G H I
Immediate Predecessor – – A,B B B A,B F,D F,D C,G

Solution: A, B are concurrent activities as they start simultaneously. I is the terminal activity. Since the activities C and F are coming from both the activities A and B, we need to introduce a dummy activity.


Example 14.5: Make a network of the project having activities and precedence
relationship as given below:
A, B, C can start simultaneously,
A<D, I; B<G, F; D<G, F; C<E; E<H, K; F<H, K; G, H<J
Solution: The above constraints can be formatted into a table.
Activity A B C D E F G H I J K
Predecessor Activity – – – A C B, D B, D E,F A G, H E, F


14.3.2 Time Analysis


Once the network of a project is constructed the time analysis of the network
becomes essential for planning various activities of the project. An activity time is
a forecast of the time an activity is expected to take from its starting point to its
completion (under normal conditions).
We shall use the following notation for basic scheduling computations.
(i, j) = Activity (i, j) with tail event i and head event j
tij = Estimated completion time of activity (i, j)
(ES)ij = Earliest starting time of activity (i, j)
(EF)ij = Earliest finishing time of activity (i, j)
(LS)ij = Latest starting time of activity (i, j)
(LF)ij = Latest finishing time of activity (i, j)
The basic scheduling computation can be put under the following three groups.
Forward Pass Computations (for earliest event time)
Before starting computations, the occurrence time of the initial network event is
fixed. The forward pass computation yields the earliest start and the earliest finish
time for each activity (i, j) and indirectly the earliest occurrence time for each
event namely Ei. This consists of the following three steps:
Step 1: The computations begin from the start node and move towards the end
node. Let zero be the starting time for the project.
Step 2: Earliest starting time (ES)ij = Ei is the earliest possible time when an
activity can begin assuming that all of the predecessors are also started at their

earliest starting time. The earliest finish time of activity (i, j) is the earliest starting time plus the activity time,
(EF)ij = (ES)ij + tij
Step 3: The earliest event time for event j is the maximum of the earliest finish times of all the activities ending at that event,
Ej = Maxi (Ei + tij)

The computed ‘E’ values are put over the respective rectangle representing
each event.
Backward Pass Computations (for latest allowable time)
The latest event time (L) indicates the time by which all activities entering into that
event must be completed without delaying the completion of the project. These
can be calculated by reversing the method of calculations used for the earliest
event time. This is done in the following steps:
Step 1: For ending event assume E = L.
Step 2: Latest finish time for activity (i, j) is the target time for completing the
project
(LF)ij = Lj
Step 3: The latest starting time of activity (i, j) = the latest finish time of (i, j) – the activity time,
(LS)ij = (LF)ij – tij = Lj – tij
Step 4: The latest event time for event i is the minimum of the latest start times of all activities originating from that event,
Li = Minj (Lj – tij)

The computed 'L' values are put over the respective triangle representing
each event.
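These two passes can be carried out mechanically. The Python sketch below is a minimal illustration (the function name event_times and the dictionary-based input are our own choices, not part of the text or of any library); it assumes the events are already numbered by Fulkerson's rule so that i < j for every activity (i, j). The data used are those of the arrow diagram in Section 14.3.3, which is also the network of Example 14.6 later in this unit.

def event_times(activities):
    # activities: dict mapping (i, j) -> t_ij, with events numbered so that i < j.
    # Returns (E, L): earliest and latest occurrence times of every event.
    events = sorted({e for pair in activities for e in pair})

    # Forward pass: E_j = max over i of (E_i + t_ij), with E = 0 at the start event.
    E = {events[0]: 0}
    for j in events[1:]:
        E[j] = max(E[a] + t for (a, b), t in activities.items() if b == j)

    # Backward pass: L_i = min over j of (L_j - t_ij), with L = E at the end event.
    L = {events[-1]: E[events[-1]]}
    for i in reversed(events[:-1]):
        L[i] = min(L[b] - t for (a, b), t in activities.items() if a == i)
    return E, L

# Network of Section 14.3.3 / Example 14.6: (tail event, head event): duration
acts = {(1, 2): 3, (1, 3): 5, (1, 4): 4, (2, 5): 2, (3, 5): 3,
        (4, 6): 9, (5, 7): 8, (3, 6): 7, (6, 7): 9}
E, L = event_times(acts)
print(E)                             # {1: 0, 2: 3, 3: 5, 4: 4, 5: 8, 6: 13, 7: 22}
print({e: L[e] for e in sorted(L)})  # {1: 0, 2: 12, 3: 6, 4: 4, 5: 14, 6: 13, 7: 22}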
Determination of Floats and Slack Times
Float is defined as the difference between the latest and the earliest activity time.
Slack is defined as the difference between the latest and the earliest event
time.
Hence, the basic difference between the slack and the float is that slack is
used for events only whereas float is used for activities.
There are mainly three kinds of floats as given below:

Total Float: It refers to the amount of time by which the completion of an activity
could be delayed beyond the earliest expected completion time without affecting
the overall project duration time.
Mathematically, the Total Float (TF) of an activity (i, j) is the difference
between the latest start time and the earliest start time of that activity.
Hence, the total float for an activity (i, j) denoted by (TF)ij is calculated by
the formula,
(TF)ij = (Latest start – Earliest start) for activity (i, j)
i.e., (TF)ij = (LS)ij – (ES)ij
or, (TF)ij = Lj – Ei – tij
Where Ei, Lj are the earliest time and latest time for the tail event i and head event j, and tij is the normal time for the activity (i, j). This is the most important type of float as it is concerned with the overall project duration.
Free Float: The time by which the completion of an activity can be delayed
beyond the earliest finish time without affecting the earliest start of a subsequent
succeeding activity.
Mathematically, the Free Float for activity (i, j) denoted by (FF)ij can be calculated by the formula,
(FF)ij = Ej – Ei – tij
or, (FF)ij = Total float – Head event slack
where, Head event slack = Lj – Ej


This float is concerned with the commencement of subsequent activity.
The free float can take values from zero up to total float, but it cannot
exceed total float. This float is very useful for rescheduling the activities with minimum
disruption of earlier plans.
Independent Float: The amount of time by which the start of an activity can be
delayed without affecting the earliest start time of any immediately following activities
assuming that the preceding activity has finished at its latest finish time.
Mathematically, the Independent Float of an activity (i, j) denoted by (IF)ij can be calculated by the formula,
(IF)ij = Ej – Li – tij
or, (IF)ij = Free float – Tail event slack

Where tail event slack is given by,
Tail event slack = Li – Ei
The negative independent float is always taken as zero. This float is concerned
with prior and subsequent activities.
(IF)ij ≤ (FF)ij ≤ (TF)ij
Notes:
1. If the total float (TF)ij of an activity (i, j) is zero, then that activity is called a critical activity.
2. The float can be used to reduce project duration. While doing this, the float of not
only that activity but that of other activities would also change.
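Given Ei, Li and tij, the three floats follow directly from the formulas above. The short Python sketch below is purely illustrative (the function name floats is ours); the event times used are those obtained by the forward and backward passes for the network of Example 14.6 later in this unit, and the negative independent float is clipped to zero as noted.

# Event times of the Example 14.6 network, as obtained by the two passes
E = {1: 0, 2: 3, 3: 5, 4: 4, 5: 8, 6: 13, 7: 22}
L = {1: 0, 2: 12, 3: 6, 4: 4, 5: 14, 6: 13, 7: 22}
acts = {(1, 2): 3, (1, 3): 5, (1, 4): 4, (2, 5): 2, (3, 5): 3,
        (4, 6): 9, (5, 7): 8, (3, 6): 7, (6, 7): 9}

def floats(activities, E, L):
    # (i, j) -> (TF, FF, IF) with TF = Lj - Ei - t, FF = Ej - Ei - t, IF = Ej - Li - t
    result = {}
    for (i, j), t in activities.items():
        tf = L[j] - E[i] - t
        ff = E[j] - E[i] - t
        if_ = max(E[j] - L[i] - t, 0)   # negative independent float is taken as zero
        result[(i, j)] = (tf, ff, if_)
    return result

flt = floats(acts, E, L)
critical = [a for a, (tf, ff, if_) in flt.items() if tf == 0]
print(critical)   # [(1, 4), (4, 6), (6, 7)], i.e. the zero-float (critical) activities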
Critical Activity: An activity is said to be critical if a delay in its start will cause a
further delay in the completion of the entire project.
Critical Path: The sequence of critical activities in a network is called the critical
path. It is the longest path in the network from the starting event to the ending
event and defines the minimum time required to complete the project. In the network,
it is denoted by a double line. This path identifies all the critical activities of the
project. Hence, for the activity (i, j) to lie on the critical path, following conditions
must be satisfied.
(i) ESi = LFi
(ii) ESj = LFj
(iii) ESj – ESi = LFj – LFi = tij
where ESi, ESj are the earliest occurrence times of the events i and j, and LFi, LFj are the latest occurrence times of the events i and j.
14.3.3 Network Diagram
[Arrow diagram: Start = event 1, Finish = event 7, with activities A(1–2, 3), B(1–3, 5), C(1–4, 4), D(2–5, 2), E(3–5, 3), F(4–6, 9), G(5–7, 8), H(3–6, 7) and I(6–7, 9); the number after each activity is its duration.]

In the above diagram, each arrow represents an activity and each circle an event.
Circle 1 represents the starting event and circle 7 represents the ending event. The
names of the activities are generally stated just above the corresponding arrows.
Thus A in the above diagram is the name of the activity represented by the arrow
just drawn below it.
 Merge and Burst Events: It may be pointed out that it is not necessary for
an event to be the ending event of only one activity but an event can be the
ending event of two or more activities in which case the said event is
technically described as a merge event. Similarly, if the event happens to be the beginning event of two or more activities, it is technically called the
‘Burst Event’.
 Preceding, Succeeding and Concurrent Activities: The activities can be
classified as preceding activities, succeeding activities and concurrent activities. Activities which must be accomplished before a given event can
occur are termed as preceding activities; activities which cannot be
accomplished until an event has occurred are termed as succeeding
activities and activities which can be accomplished concurrently are known
as concurrent activities. This classification is relative which means that
one activity can be preceding to a certain event and the same activity can be
succeeding to some other event or it may be a concurrent activity with one
or more of the activities.
 Dummy Activities: Sometimes we use dummy activities in the preparation of the network diagram. Such activities designate a precedence relationship and, in the network diagram, are shown as broken lines. They
are characterized by their use of zero time and zero resource. Their main
function is to help in assuring that the activities and events in a network
diagram are in proper sequence.
 Path and Critical Path: A path is a continuous chain of activities through a
network which connects the first event to the last event. Critical path consists
of the sequence of those events and connected activities that require the
maximum time in the completion of the project. It is that path which takes
the longest time. It is known as critical because it controls the completion
date of the project. The length of this path determines the minimum time in
which the project may be completed.
 Critical Activities or Bottleneck Activities: All the activities associated
with the critical path are called critical or bottleneck activities. Any
delay in the completion of one or more of these activities will cause delay in
the completion date of the project. Hence, such activities require special
attention of the project incharge.
 Earliest Start Time or Est: Est for an activity is the earliest possible time an activity can begin on the assumption that all activities preceding it started at the earliest possible times.
 Earliest Finish Time or Eft: Eft is the sum of the earliest start time and the
estimated time to perform the concerning activity.
 Latest Finish Time or Lft: Lft for an activity is the latest possible time an
activity can finish without delaying the project beyond its dead line on the
assumption that all the subsequent activities are performed as planned.
 Latest Start Time (or Lst): Lst for an activity is the difference between the
latest finish time and the estimated time for the activity to be performed.
 Float (Total, Interfering, Independent and Free Floats): Quite often the term float (in CPM terminology) is used in the context of network analysis.
Float may be understood as total float, interfering float, free float and
independent float. Total float is the duration by which an activity can be
delayed without delaying the project and can be worked out as either (Lst-
Est) or (Lft-Eft). Interfering float is that part of the total float which causes
a reduction in the float of the successor activity or activities. In other words,
it is that portion of the activity float which cannot be consumed without
affecting adversely the float of the succeeding activity or activities. It is
worked out as a difference between the Lft of the activity and the Est of
the following activity or zero whichever is larger. Interfering float is also
known as the head event slack of an activity. Free float is that portion of
the total float within which an activity can be manipulated without affecting
the float of subsequent activities. It is worked out by subtracting the head
event slack from the total float. The head event slack is its latest event time
minus earliest event time or (LT – ET). Independent float is that portion of
the total float within which the start of an activity can be delayed without affecting the float of the preceding activities. It is worked out by subtracting the tail event slack from the free float. If it obtains a negative value then it is
taken as equal to zero. Tail event refers to the event where an activity begins and head event is the event where an activity comes to an end. If
we have events (1) and (2) then (1) is the tail event and (2) is the head event
of an activity A.
Float may be positive or negative. Positive float indicates that the activities
concerned have certain amount of spare time and can be delayed without
affecting the project duration. On the other hand, negative float highlights
the situation in which the activities concerned are short of time and unless
their duration (to the extent of negative float) is reduced, completion of the
project by the target time cannot be assured. Thus, negative float indicates
the extent of criticality of the activities.
 Slack: The term slack is normally associated with events. It indicates the
amount of latitude that is available for an event to occur. It is worked out as
under:
Slack of an event = (Latest occurrence time of the event) – (Earliest occurrence time of the event), or simply slack of event = (LT – ET). Slack can be positive or negative depending upon whether the targeted date of completion is later or earlier
than the earliest finish time of the task respectively.
When used for activities, the term slack should be used for activity slack
(activity slack is synonymous to float). Since slack is associated with the events,
each activity will have two slacks which include the slack of its head event or the
head slack and the slack of its tail event or the tail slack.

Preparation of the Network Arrow Diagram
We require the following information for each activity in the project for the
preparation of the network diagram:
(a) The sequencing requirements for an activity must be known, i.e., the set of
activities which must be completed prior to the beginning of each specific
activity should be known.
(b) An estimate of the time each activity will take should also be known.
Keeping all that has been stated above in view, the network diagram can easily be prepared. But the following rules of constructing network diagrams will have to be invariably adhered to:
(i) Each activity is shown by an arrow only once in the network.
(ii) Network has to be developed on the basis of logical dependencies between
various activities.
(iii) The length of arrows representing various activities has no significance;
they only indicate the logical precedence.
(iv) Arrow direction shows the general progression in time.
(v) Events in the network are shown by numbers.
(vi) Activities are identified by the numbers of their starting and the ending events.
(vii) Parallel activities between two events without intervening events are not
permitted. In such a situation dummy activities may have to be introduced.
(viii) Looping is not permitted in a network. This means that if activity A precedes
B and B precedes C, then C cannot precede A.
Now construct the network diagrams using the above stated rules.
Example 14.6: Prepare a network arrow diagram for the following information.
Activity            Name of the    Pre-Requisite    Estimated Time
Event    Event      Activity       Activity         (Weeks)
1        2          A              None             3
1        3          B              None             5
1        4          C              None             4
2        5          D              A                2
3        5          E              B                3
4        6          F              C                9
5        7          G              D, E             8
3        6          H              B                7
6        7          I              F, H             9

Solution: Draw the following network arrow diagram to solve the problem:
[Arrow diagram: Start = event 1, Finish = event 7; A(1–2, 3), B(1–3, 5), C(1–4, 4), D(2–5, 2), E(3–5, 3), F(4–6, 9), G(5–7, 8), H(3–6, 7), I(6–7, 9); the estimated time in weeks is written below each arrow.]

The above is the required network diagram for the given problem. The immediate preceding activity of activity D is activity A, which means that activity A must be completed before activity D can begin; the arrow from circle 1 to circle 2 (activity A) must therefore be completed before the arrow from circle 2 to circle 5 (activity D) can begin. Similarly, activity B must be completed before activities E and/or H can begin; C must be completed before F can begin; activities H and F must be completed before I can begin; and activities D and E must be completed before activity G can begin. The estimated time for each activity has been placed just below the arrow representing that activity.
Example 14.7: Draw a network arrow diagram for the following information
concerning some project:
Activity Predecessor Activity or Activities
A None
B A
C A
D B,C
E C
F D
G E
H F,G.
Solution: Draw the following network arrow diagram to solve the problem:
[Arrow diagram: A(1–2), B(2–4), C(2–3), dummy(3–4), D(4–5), E(3–6), F(5–7), G(6–7), H(7–8).]
In this diagram, activity 3-4 is the dummy activity shown as a broken line. It
is required because activities B and C both precede activity D but activity C alone
precedes activity E.

Check Your Progress
1. What are the management functions for the three phases of work involved
in a project?
2. Define planning.
3. What is an activity?
4. What is an event?
5. What do you mean by a network?
6. What are merge events and burst events?
7. What is a dummy activity?
8. What is a redundant activity or redundancy?

14.4 CRITICAL PATH METHOD (CPM)

Critical Path Method (CPM) is a graphical technique for planning and scheduling
of projects. This technique involves the preparation of the network in the form
of arrow diagram and its analysis to indicate the critical path. It has the potential
for scheduling of a task in minimum time and/or cost in accordance with specified
constraints.
After preparing the network diagram and indicating the times for each
activity, we can now mention the various possible paths for determining the critical
path. The critical path being the longest path can easily be found out from the
possible paths as the one taking the maximum time in the completion of the project.
In the network diagram of Example 14.6 there are in all four paths viz.,
1. A → D → G requiring 3 + 2 + 8 = 13 weeks in completion of the project.
2. B → E → G requiring 5 + 3 + 8 = 16 weeks in completion of the project.
3. B → H → I requiring 5 + 7 + 9 = 21 weeks in completion of the project.
4. C → F → I requiring 4 + 9 + 9 = 22 weeks in completion of the project.
As the path C → F → I takes the longest time in the completion of the project, it is the critical path. The activities C, F and I are associated with it and hence they are critical or bottleneck activities. All other activities, viz., A, B, D, E, G and H, are non-critical activities. Non-critical activities have a certain amount of spare time or float available. These activities can be delayed to the extent of the float available without affecting the overall completion time of the project.
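These four paths can be checked by enumerating every route from the start event to the end event of the Example 14.6 network. The sketch below is a minimal illustration with an invented helper name (all_paths); the event pairs and durations are those tabulated in Example 14.6.

def all_paths(activities, start, end):
    # Enumerate every path of events from start to end together with its length.
    succ = {}
    for (i, j), t in activities.items():
        succ.setdefault(i, []).append((j, t))

    def walk(node, path, length):
        if node == end:
            yield path, length
            return
        for nxt, t in succ.get(node, []):
            yield from walk(nxt, path + [nxt], length + t)

    return list(walk(start, [start], 0))

acts = {(1, 2): 3, (1, 3): 5, (1, 4): 4, (2, 5): 2, (3, 5): 3,
        (4, 6): 9, (5, 7): 8, (3, 6): 7, (6, 7): 9}
for path, weeks in all_paths(acts, 1, 7):
    print(path, weeks)
# [1, 2, 5, 7] 13   (A-D-G)
# [1, 3, 5, 7] 16   (B-E-G)
# [1, 3, 6, 7] 21   (B-H-I)
# [1, 4, 6, 7] 22   (C-F-I), the longest and hence the critical path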

The critical path is important as its length determines the minimum time
required for the completion of the project. The critical path requires greater attention
because of the following reasons:
(a) The critical path highlights those activities which must be performed more
rapidly if the total project completion time is to be reduced;
(b) Any delay in activities which are on the critical path will produce delay in
the completion of the project, i.e., will postpone the final completion date
of the project on the other hand delays in non-critical activities may not
actually delay the completion of the project;
(c) Advance planning and improvement along the critical path may cause another
path to become critical.
In brief, the critical path directs management’s attention to important facts,
spots potential bottlenecks and avoids unnecessary pressure on other paths that
will not result in an earlier final completion date of the project.
14.4.1 Computations for Critical Path
The computations to be accomplished for critical path are as follows: The Earliest
Start Time (or Est) and the Earliest Finish Time (or Eft) for each activity are to be
obtained first. For this purpose, we set the Est of the first activity equal to zero.
Then add the estimated time to perform the first activity to its Est and the result is
the Eft for the first activity. Now, take any activity for which all of its immediate
preceding activities have Est and Eft values. The Est of such an activity is equal
to the largest of the Eft values of its immediate preceding activities. If we
proceed this way (i.e., from left to right), finding the Est and Eft of all activities in
the network, we are said to adopt what is known as the Forward Pass.
Similarly, we will have to work out the Latest Start Time (or Lst) and the
Latest Finish Time (or Lft) for each activity to be performed in the completion of
the project. This can be done as under:
Start at the end of the network diagram and first set the Lft for the last
activity equal to the Eft for that activity. Then subtract the estimated time to perform
the last activity from its Lft to obtain its Lst. Now take any activity for which all of
its immediate succeeding activities have Lst and Lft values. The Lft of such an
activity is equal to the smallest of the Lst values of its immediate succeeding
activities. If we proceed this way (i.e., from right to left) finding the Lst and Lft of
all activities in the network, we are said to adopt what is known as the Backward
Pass.
Activity Floats: Often the completion date of the project is determined from the length of the critical path; in that case the float will be positive except along the critical path, where it will be zero. The float for each activity can be calculated by taking the difference either between the Lst and the Est or between the Lft and Eft for that activity. Float will always be zero for those activities which are on the critical path. This follows from the definition of the critical path, since any delay in a critical activity will cause delay in the completion date of the project.

'Slack' in case of events: In case of events we usually talk of slack and for
activities we can think of their head events slacks and tail events slacks.
Slack for the events of Example 14.6 can be shown as under:
For Event     LT     ET     Slack (i.e., LT – ET)
1              0      0     0
2             12      3     9
3              6      5     1
4              4      4     0
5             14      8     6
6             13     13     0
7             22     22     0
LT = Latest Event Time
ET = Earliest Event Time
Head event slack and tail event slack relating to each activity can be shown as under:
For Activity Head event Slack Tail event Slack
A 9 0
B 1 0
C 0 0
D 6 9
E 6 1
F 0 0
G 0 6
H 0 1
I 0 0
Float or slack, whether positive or negative, is generally considered undesirable and should be avoided to the extent possible. Positive float simply means that there is idle time and idle resources, with a corresponding implicit cost burden. When
is required as per the critical path analysis for the completion of the project, we
have negative float which simply represents that project requires more resources
than are normally available. In such a situation, the project manager can either
choose not to meet the completion date and bear the burden of penalties, if any,
that may be imposed or to use more resources, i.e., to work on the basis of crash
plan and absorb the corresponding increase in costs in order to complete the
project within the stipulated time. Negative float is a sort of warning that final
event will not be completed on schedule with the existing plan. It thus serves to
indicate the extent of criticality of the activity. Once we have determined the critical
path and have worked out the floats in respect of each activity, then adjustments
can be made for better utilization of resources and time. Some of the possible
adjustments can be as under:
(i) Reduction of time estimates of bottleneck activities.
(ii) Eliminations of some activities, if possible.
(iii) Bringing in some more resources.

(iv) Transferring resources from activities having float to critical activities with zero float.
(v) Restructuring of the network with a view to reduce completion time of the
project.
With one or more of the above stated adjustments, the CPM analysis can
result in producing an improved plan for the completion of the project in time and
that too in an economical manner. Besides, when new information comes as the
project progresses, the plan can be re-evaluated and revised to incorporate the new
developments. If used in this manner, the network analysis proves to be a dynamic
device for effecting control over the project.
Resource Allocation and Levelling
Resource Allocation (also known as resource scheduling) implies the task of
allocation of resources to various activities in such a manner that the allocation is
considered as acceptable under the given situation. The task of allocation of
resources is of vital importance as the final schedule depends upon the quantity of
deployment of resources. The basic question then is: how should the resources be allocated? This depends upon several factors like availability of resources,
requirements, restrictions in regard to completion date, etc. Various types of
problems may be encountered in this connection, but we shall consider only two
of such problems: (i) Resource levelling and (ii) Limited resource allocation.
The Problem of Resource Levelling
Resource Levelling (also known as local smoothing) means the resource scheduling
exercise in which the resource demand is evened out or levelled as much as possible.
In other words, resource levelling refers to the scheduling of activities within the
limits of the available floats in such a way that variations in resource requirements
are minimized. Though no constraint is put on the availability of resources in the context of resource levelling, the aggregate demand of each of the important resources needs to be levelled so as to minimize resource costs.
The Problem of Resource Allocation
The problem arises when the resources are limited. We quite often find that
projects require costly items of plant and equipments for execution of the work of
which only a limited number are available. Such limited resources must be allocated
with a lot of care so that the total requirement should not exceed the ceiling and
the utilization factor remains high. This necessitates rescheduling of some or all of
the activities and may even involve delay in overall completion of the project.
The methodology for resource levelling involves the following steps:
(i) Prepare the list of the resources that would be required for execution of the
various activities.

(ii) Prepare the resource profiles for each resource by a resource aggregation exercise.
(iii) Identify the periods of peak and low demands.
(iv) Make an attempt to lower the demand in peak periods to fill up the troughs, i.e., to make the demand as uniform as possible. This can be done by altering the times of start and finish of non-critical activities in accordance with their floats without affecting the overall completion date of the project.
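As a small illustration of steps (i) and (ii) above, resource aggregation simply adds up, period by period, the demand of every activity scheduled in that period; the peaks and troughs of the resulting profile are what the levelling exercise then tries to smooth by shifting non-critical activities within their floats. The Python sketch below uses purely hypothetical activity data (three activities sharing one crew) and an invented function name, only to show the mechanics of the aggregation step.

from collections import Counter

def resource_profile(schedule):
    # schedule: list of (start_day, finish_day, crew_required_per_day).
    # Returns a Counter mapping each day to the aggregate crew demand.
    profile = Counter()
    for start, finish, crew in schedule:
        for day in range(start, finish):
            profile[day] += crew
    return profile

# Hypothetical schedule: activity 1 on days 0-3, activity 2 on days 2-5, activity 3 on days 4-8
demand = resource_profile([(0, 4, 2), (2, 6, 3), (4, 9, 1)])
print(dict(demand))   # peak of 5 crew on days 2 and 3; trough of 1 crew on days 6-8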
14.4.2 Applications of CPM Analysis
The iterative procedure of determining the critical path is as follows.
Step 1: List all the jobs and then draw arrow (network) diagram. Each job is
indicated by an arrow with the direction of the arrow showing the sequence of
jobs. The length of the arrows has no significance. The arrows are placed based
on the predecessor, successor, and concurrent relation within the job.
Step 2: Indicate the normal time (tij) for each activity (i, j) above the arrow which
is deterministic.
Step 3: Calculate the earliest start time and the earliest finish time for each event
and write the earliest time Ei for each event i in the rectangle over that event. Also calculate the latest finish and latest start times. From this we calculate the latest time Lj for each event j and put it in the triangle over that event.
Step 4: Tabulate various times namely normal time, earliest time and latest time on
the arrow diagram.
Step 5: Determine the total float for each activity by taking the difference between
the earliest start and the latest start time.
Step 6: Identify the critical activities and connect them with the beginning event
and the ending event in the network diagram by double-line arrows, which gives the critical path.
Step 7: Calculate the total project duration.
Note: The earliest start, finish time of an activity, and the latest start, finish time of an activity
are shown in the table. These are calculated by using the following hints.
To find the earliest time we consider the tail event of the activity. Let the
starting time of the project, namely ESi = 0. Add the normal time with the starting
time to get the earliest finish time. The earliest starting time for the tail event of the
next activity is given by the maximum of the earliest finish time for the head event of
the previous activity.
Similarly, to get the latest time, we consider the head event of the activity.
The latest finish time of the head event of the final activity is given by the
target time of the project. The latest start time can be obtained by subtracting the
normal time of that activity. The latest finish time for the head event of the next
activity is given by the minimum of the latest start time for the tail event of the
previous activity.
Example 14.8: A project schedule has the following characteristics.
Activity       1–2   1–3   2–4   3–4   3–5   4–9   5–6   5–7
Time (days)     4     1     1     1     6     5     4     8
Activity       6–8   7–8   8–10  9–10
Time (days)     1     2     5     7

From the above information, you are required to:


(i) Construct a network diagram.
(ii) Compute the earliest event time and latest event time.
(iii) Determine the critical path and total project duration.
(iv) Compute total, free float for each activity.
Solution: First we construct the network with the given constraints. Here we get
this by just connecting the event numbers.

The following table gives the critical path, total, and free floats calculation.

[Table: activity-wise earliest/latest start and finish times with total and free floats; not reproduced here.]

The earliest and latest calculations are shown below:


Forward Pass Calculation: In this we estimate the earliest start and the earliest finish time of each activity. The earliest time for an event j is given by Ej = Maxi (Ei + tij), with E1 = 0. This gives
E1 = 0, E2 = 4, E3 = 1, E4 = 5, E5 = 7, E6 = 11, E7 = 15, E8 = 17, E9 = 10, E10 = 22.

Backward Pass Calculation: In this, we calculate the latest finish and the latest
start time. The latest time L for an event i is given by Li = Minj (LFj – tij)
Where LFj is the latest finish time for the event j, tij is the normal time of the
activity.
L10 = 22
L9 = L10 – t9,10 = 22 – 7 = 15
L8 = L10 – t8,10 = 22 – 5 = 17
L7 = L8 – t7,8 = 17 – 2 = 15
L6 = L8 – t6,8 = 17 – 1 = 16
L5 = Min (L6 – t5,6, L7 – t5,7) = Min (16 – 4, 15 – 8) = 7
L4 = L9 – t4,9 = 15 – 5 = 10
L3 = Min (L4 – t3,4, L5 – t3,5) = Min (10 – 1, 7 – 6) = 1
L2 = L4 – t2,4 = 10 – 1 = 9
L1 = Min (L2 – t1,2, L3 – t1,3) = Min (9 – 4, 1 – 1) = 0.
These calculations are shown in the given table.
To find the TF (Total Float): Considering the activity 1–2, TF of (1–2) =
Latest start–Earliest start.
So, TF = 5 – 0 = 5
Similarly, TF(2–4) = LS – ES
Self-Instructional
Material 355
Network Analysis: CPM So, TF = 9 – 4 = 5
and PERT
Free float = TF – Head event slack.
Consider the activity 1 – 2
NOTES FF of 1 – 2 = TF of 1 – 2 – Slack for the head event 2
So, FF = 5 – (9 – 4) (from the figure for event 2)
Therefore, FF = 5 – 5 = 0
FF of 2 – 4 = TF of 2 – 4 – Slack for the head event 4
So, FF = 5 – (10 – 5) = 5 – 5 = 0
Like this we calculate the TF and FF for the remaining activities.
From the above table we observe that the activities 1–3, 3–5, 5–7, 7–8, 8–10 are
the critical activities as their total float is 0.
Hence, we have the following critical path.
1 → 3 → 5 → 7 → 8 → 10, with the total project duration of 22 days.
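The same two-pass computation can also be checked mechanically. The sketch below (an illustrative helper, not a library routine) applies the forward and backward passes to the activity data of Example 14.8 and lists the zero-total-float activities, reproducing the critical path and the 22-day duration obtained above.

def cpm(activities):
    # Forward/backward passes; returns E, L and the zero-float (critical) activities.
    events = sorted({e for pair in activities for e in pair})
    E = {events[0]: 0}
    for j in events[1:]:
        E[j] = max(E[a] + t for (a, b), t in activities.items() if b == j)
    L = {events[-1]: E[events[-1]]}
    for i in reversed(events[:-1]):
        L[i] = min(L[b] - t for (a, b), t in activities.items() if a == i)
    critical = [(a, b) for (a, b), t in activities.items() if L[b] - E[a] - t == 0]
    return E, L, critical

# Data of Example 14.8: (tail event, head event): duration in days
acts = {(1, 2): 4, (1, 3): 1, (2, 4): 1, (3, 4): 1, (3, 5): 6, (4, 9): 5,
        (5, 6): 4, (5, 7): 8, (6, 8): 1, (7, 8): 2, (8, 10): 5, (9, 10): 7}
E, L, critical = cpm(acts)
print(E[10], critical)
# 22 [(1, 3), (3, 5), (5, 7), (7, 8), (8, 10)]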
Example 14.9: A small maintenance project consists of the following jobs whose
precedence relationships are given below:

Job              1–2   1–3   2–3   2–5   3–4   3–6   4–5   4–6   5–6   6–7
Duration (days)   15    15     3     5     8    12     1    14     3    14

(i) Draw an arrow diagram representing the project.


(ii) Find the total float for each activity.
(iii) Find the critical path and the total project duration.

Solution:

Forward Pass Calculation: In this we estimate the earliest start and the earliest
finish time, ESj, given by
ESj = Maxi (ESi + tij), where ESi is the earliest start time and tij is the normal time for the activity (i, j).


ES1 = 0
ES2 = ES1 + t1,2 = 0 + 15 = 15
ES3 = Max (ES2 + t2,3, ES1 + t1,3) = Max (15 + 3, 0 + 15) = 18
ES4 = ES3 + t3,4 = 18 + 8 = 26
ES5 = Max (ES2 + t2,5, ES4 + t4,5) = Max (15 + 5, 26 + 1) = 27
ES6 = Max (ES3 + t3,6, ES4 + t4,6, ES5 + t5,6) = Max (18 + 12, 26 + 14, 27 + 3) = 40
ES7 = ES6 + t6,7 = 40 + 14 = 54
Backward Pass Calculation: In this we calculate the latest finish and latest start
time LFi, given by LFi = Minj (LFj – tij), where LFj is the latest finish time for the
event j.
LF7 = 54
LF6 = LF7 – t6,7 = 54 – 14 = 40
LF5 = LF6 – t5,6 = 40 – 3 = 37
LF4 = Min (LF5 – t4,5, LF6 – t4,6) = Min (37 – 1, 40 – 14) = 26
LF3 = Min (LF4 – t3,4, LF6 – t3,6) = Min (26 – 8, 40 – 12) = 18
LF2 = Min (LF5 – t2,5, LF3 – t2,3) = Min (37 – 5, 18 – 3) = 15
LF1 = Min (LF3 – t1,3, LF2 – t1,2) = Min (18 – 15, 15 – 15) = 0
The following table gives the calculation for critical path and total float.

[Table: activity-wise earliest/latest times and total float; not reproduced here.]
From the above table we observe that the activities 1–2, 2–3, 3–4, 4–6 and 6–7 are the critical activities and the critical path is given by 1 → 2 → 3 → 4 → 6 → 7.
The total project completion time is 54 days.
Example 14.10: Tasks A, B, ... H, I constitute a project. The notation X<Y means
that the task X must be completed before Y is started. With the notation,
A<D; A<E; B<F; D<F; C<G; C<H; F<I; G<I
Draw a graph to represent the sequence of tasks and find the minimum time of
completion of the project, when the time (in days) of completion of each task is as
follows:
Task A B C D E F G H I
Time (days) 8 10 8 10 16 17 18 14 9

Solution: The above constraints can be given in the following table.


Activity A B C D E F G H I
Preceding Activity – – – A A B,D C C F,G

Time Calculation: Using forward and backward pass calculation, we first estimate the earliest and the latest time for each event.
E1 = 0
E2 = E1 + t1,2 = 0 + 8 = 8
E3 = Max (E1 + t1,3, E2 + t2,3) = Max (0 + 10, 8 + 10) = 18
E4 = E1 + t1,4 = 0 + 8 = 8
E5 = Max (E3 + t3,5, E4 + t4,5) = Max (18 + 17, 8 + 18) = 35
E6 = Max (E2 + t2,6, E4 + t4,6, E5 + t5,6) = Max (8 + 16, 8 + 14, 35 + 9) = 44

The value of the latest time can now be obtained.
L6 = E6 = 44 (target completion time for the project)
L5 = L6 – t5,6 = 44 – 9 = 35
L4 = Min (L6 – t4,6, L5 – t4,5) = Min (44 – 14, 35 – 18) = 17
L3 = L5 – t3,5 = 35 – 17 = 18
L2 = Min (L6 – t2,6, L3 – t2,3) = Min (44 – 16, 18 – 10) = 8
L1 = Min (L4 – t1,4, L3 – t1,3, L2 – t1,2) = Min (17 – 8, 18 – 10, 8 – 8) = 0
To evaluate the critical events, all these calculations are put in the following table.
[Table: normal time (days), earliest and latest times and total float for each task; not reproduced here.]

The above table shows that the critical activities are the tasks 1–2, 2–3, 3–5 and 5–6, as their total float is zero.
The critical path is given by 1 → 2 → 3 → 5 → 6 or A → D → F → I, with the total project duration as 44 days.

14.5 PROGRAMME EVALUATION AND REVIEW
TECHNIQUE (PERT)

The network methods discussed so far may be termed as deterministic, since
estimated activity times are assumed to be known with certainty. However, in
a research project or in the design of a gear box for a new machine, various activities are based on judgement. It is difficult to obtain a reliable time estimate due to the
changing technology. Time values are subject to chance variations. For such cases
where the activities are non-deterministic in nature, PERT was developed. Hence,
PERT is a probabilistic method where the activity times are represented by a
probability distribution. This probability distribution of activity times is based upon
three different time estimates made for each activity. These are as follows:
(i) Optimistic time estimate
(ii) Most likely time estimate
(iii) Pessimistic time estimate
Optimistic Time Estimate: It is the smallest time taken to complete the activity
if everything goes on well. There is very little chance that activity can be done in
time less than the optimistic time. It is denoted by t0 or a.
Most Likely Time Estimate: It refers to the estimate of the normal time the
activity would take. This assumes normal delays. It is the mode of the probability
distribution. It is denoted by tm or (m).
Pessimistic Time Estimate: It is the longest time that an activity would take if
everything goes wrong. It is denoted by tp or b. These three time values are shown
in the following figure.
[Figure: Time distribution curve showing frequency plotted against time, with the three estimates t0, tm and tp marked on the time axis.]

From these three time estimates, we have to calculate the expected time of
an activity. It is given by the weighted average of the three time estimates,

te = (t0 + 4tm + tp)/6
This weighted average is based on the β-distribution, with weights of 1, 4 and 1 for the t0, tm and tp estimates respectively.


Variance of the activity is given by,
σ² = ((tp – t0)/6)²
The expected length (duration), denoted by Te, of the entire project is the length of the critical path, i.e., the sum of the te's of all the activities along the critical path.
The main objective in the analysis through PERT is to find the probability of completing the project (or reaching a particular event) within the specified date Ts. This probability is given by P(Z ≤ D), where
D = (Due date – Expected date of completion) / √(Project variance)
and Z stands for the standard normal variable.
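The two formulas for the expected time and variance can be wrapped in a small helper; the check below uses the first activity of Example 14.11 that follows (a = 1, m = 7, b = 13), for which te = 7 and σ² = 4. The function name pert_estimates is ours, purely for illustration.

def pert_estimates(a, m, b):
    # Expected time te = (a + 4m + b) / 6 and variance ((b - a) / 6) ** 2
    return (a + 4 * m + b) / 6, ((b - a) / 6) ** 2

print(pert_estimates(1, 7, 13))   # (7.0, 4.0)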


14.5.1 PERT Procedure
Step 1: Draw the project network.
Step 2: Compute the expected duration of each activity using the formula,
te = (t0 + 4tm + tp)/6
Also calculate the expected variance σ² of each activity,
i.e., σ² = ((tp – t0)/6)²
Step 3: Compute the earliest start, earliest finish, latest start, latest finish and total
float of each activity.
Step 4: Find the critical path and identify the critical activities.
Step 5: Compute the project length variance σ², which is the sum of the variances of all the critical activities, and hence find the standard deviation σ of the project length.
Step 6: Calculate the standard normal variable Z = (Ts – Te)/σ, where
Ts = Scheduled time to complete the project,
Te = Normal expected project duration,
σ = Expected standard deviation of the project length.
Using the normal curve, we can estimate the probability of completing the
project within a specified time.
Example 14.11: The following table shows the jobs of a network along with their
time estimates.
Job        1–2   1–6   2–3   2–4   3–5   4–5   6–7   5–8   7–8
a (days)     1     2     2     2     7     5     5     3     8
m (days)     7     5    14     5    10     5     8     3    17
b (days)    13    14    26     8    19    17    29     9    32
Here, a is the optimistic time, m is the most likely time and b is the pessimistic time
estimate.
Draw the project network and find the probability that the project is
completed in 40 days.
Solution: First we calculate the expected time and standard deviation for each
activity.
Activity     te = (a + 4m + b)/6     σ² = ((b – a)/6)²
1–2                  7                        4
1–6                  6                        4
2–3                 14                       16
2–4                  5                        1
3–5                 11                        4
4–5                  7                        4
6–7                 11                       16
5–8                  4                        1
7–8                 18                       16

Expected project duration = 36 days.
Critical path: 1 → 2 → 3 → 5 → 8
Project length variance σ² = 4 + 16 + 4 + 1 = 25
σ = 5
Probability that the project will be completed in 40 days is given by P(Z ≤ D), where
D = (Ts – Te)/σ = (40 – 36)/5 = 4/5 = 0.8
Area under the normal curve for Z = 0.8:
P(Z ≤ 0.8) = 0.5 + φ(0.8)   [φ(0.8) = 0.2881 (refer Z-table)]
= 0.5 + 0.2881 = 0.7881 = 78.81%
Conclusion: If the project is performed 100 times under the same conditions,
there will be 78.81 occasions for this job to be completed in 40 days.
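As a quick numerical cross-check of this example (my own addition, not part of the original solution), the same figure can be reproduced with Python's statistics.NormalDist, which evaluates the area under the normal curve directly instead of reading a Z-table.

from statistics import NormalDist

# (a, m, b) estimates of the critical activities 1-2, 2-3, 3-5 and 5-8
critical = [(1, 7, 13), (2, 14, 26), (7, 10, 19), (3, 3, 9)]

Te = sum((a + 4 * m + b) / 6 for a, m, b in critical)     # expected length: 36 days
var = sum(((b - a) / 6) ** 2 for a, m, b in critical)     # project variance: 25
D = (40 - Te) / var ** 0.5                                # (Ts - Te)/sigma = 0.8
print(round(NormalDist().cdf(D), 4))                      # 0.7881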
Example 14.12: A small project is composed of seven activities whose time estimates are listed in the following table.
[Table: optimistic, most likely and pessimistic durations (in weeks) for each of the seven activities]
You are required to:
(i) Draw the project network.
(ii) Find the expected duration and variance of each activity.
(iii) Calculate the early and late occurrence for each event and the expected
project length.
(iv) Calculate the variance and standard deviation of the project length.
(v) What is the probability that the project will be completed:
(a) 4 weeks earlier than expected?
(b) Not more than 4 weeks later than expected?
(c) If the project due date is 19 weeks, what is the probability of meeting the due date?
Solution: The expected time and variance of each activity are computed as shown in the table below:
Activity    a    m    b    te = (a + 4m + b)/6    σ² = ((b – a)/6)²
The earliest and the latest occurrence times for each event are calculated as below:
E 1 = 0; E2= 0 + 2 = 2
E3 = 0 + 4 = 4
E4 = 0 + 3 = 3
E 5 = Max (2 + 1, 4 + 6) = 10
E 6 = Max (10 + 7, 3 + 5) = 17
To determine the latest expected time we start from E6 being the last event
and move backwards subtracting te from each activity. Hence, we have
L 6 = E6 = 17
L 5 = L6 – 7 = 17 – 7 = 10
L 4 = 17 – 5 = 12
L 3 = 10 – 6 = 4
L 2 = 10 – 1 = 9
L 1 = Min (9 – 2, 4 – 4, 12 – 3) = 0
Using the above information, we get the following network, where the critical
path is shown by the double line arrow.

We observe the critical path of the above network as 1 → 3 → 5 → 6.
The expected project duration is 17 weeks, i.e., Te = 17 weeks.
The variance of the project length is given by,
σ² = 1 + 4 + 4 = 9
Hence, σ = 3
(i) The probability of completing the project 4 weeks earlier than expected is given by P(Z ≤ D), where
D = (Ts – Te)/σ = (Due date – Expected date of completion)/√(Project variance)
Here, Ts = 17 – 4 = 13, so
D = (13 – 17)/3 = –4/3 = –1.33
P(Z ≤ –1.33) = 0.5 – φ(1.33)
= 0.5 – 0.4082 (from the table)
= 0.0918 = 9.18%
Conclusion: If the project is performed 100 times under the same conditions,
then there will be about 9 occasions on which this job is completed 4 weeks earlier than expected.
(ii) The probability of completing the project not more than 4 weeks later than expected is given by P(Z ≤ D), where
D = (Ts – Te)/σ
Here, Ts = 17 + 4 = 21
D = (21 – 17)/3 = 4/3 = 1.33
P(Z ≤ 1.33) = 0.5 + φ(1.33)
= 0.5 + 0.4082 (from the table)
= 0.9082 = 90.82%
Conclusion: If the project is performed 100 times under the same conditions,
then there will be 90.82 occasions when this job will be completed not more than
4 weeks later than expected.
(iii) The probability of completing the project within 19 weeks is given by P(Z ≤ D) where, since Ts = 19,
D = (19 – 17)/3 = 2/3 = 0.666
P(Z ≤ 0.666) = 0.5 + φ(0.666)
= 0.5 + 0.2514 (from the table)
= 0.7514 = 75.14%
Conclusion: If the project is performed 100 times under the same conditions,
then there will be 75.14 occasions for this job to be completed in 19 weeks.
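The three probabilities can be checked in the same way (again my own illustration, not part of the text). Because statistics.NormalDist works with the exact Z values rather than the rounded table entries used above, the results differ slightly, in the second or third decimal place, from the printed answers.

from statistics import NormalDist

Te, sigma = 17, 3           # expected project length (weeks) and standard deviation
phi = NormalDist().cdf      # cumulative area of the standard normal curve

print(round(phi((13 - Te) / sigma), 4))   # (i)   completed 4 weeks early     ~0.0912
print(round(phi((21 - Te) / sigma), 4))   # (ii)  at most 4 weeks late        ~0.9088
print(round(phi((19 - Te) / sigma), 4))   # (iii) within the 19-week due date ~0.7475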
Example 14.13: Consider the following project.
Activity    to    tm    tp    Predecessor
            (time estimates in weeks)
A            3     6     9    None
B            2     5     8    None
C            2     4     6    A
D            2     3    10    B
E            1     3    11    B
F            4     6     8    C, D
G            1     5    15    E
Find the critical path and the standard deviation. Also find the probability of completing the project by 18 weeks.
Solution: First we calculate the expected time and variance of each activity as in the following table:
Activity   to   tm   tp   te = (to + 4tm + tp)/6   σ² = ((tp – to)/6)²
A           3    6    9   36/6 = 6                 ((9 – 3)/6)² = 1
B           2    5    8   30/6 = 5                 ((8 – 2)/6)² = 1
C           2    4    6   24/6 = 4                 ((6 – 2)/6)² = 0.444
D           2    3   10   24/6 = 4                 ((10 – 2)/6)² = 1.777
E           1    3   11   24/6 = 4                 ((11 – 1)/6)² = 2.777
F           4    6    8   36/6 = 6                 ((8 – 4)/6)² = 0.444
G           1    5   15   36/6 = 6                 ((15 – 1)/6)² = 5.444
We construct the network with the help of the predecessor relations given in the data.

Critical path is 1 → 2 → 4 → 6, i.e., A → C → F.
The project length = 16 weeks.
Project length variance σ² = 1 + 0.444 + 0.444 = 1.888
Standard deviation σ = 1.374
The probability of completing the project in 18 weeks is given by P(Z ≤ D), where
D = (Ts – Te)/σ
Ts = 18; Te = 16; σ = 1.374
D = (18 – 16)/1.374 = 1.4556
P(Z ≤ D) = P(Z ≤ 1.4556) = 0.5 + φ(1.4556)
= 0.5 + 0.4265 (from the table)
= 0.9265 = 92.65%
Conclusion: If the project is performed 100 times under the same conditions,
then there will be 92.65 occasions when this job will be completed by 18 weeks.
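Since this example specifies the network through predecessor lists, the critical path can also be found programmatically rather than by inspection. The sketch below is my own illustration (it assumes Python 3.9+ for graphlib, and uses the te values already computed in the table above): a forward pass over a topological order gives each activity's earliest finish time, and tracing back the longest chain should reproduce A → C → F with a length of 16 weeks.

from graphlib import TopologicalSorter

# Expected duration te and predecessors of each activity (from the table above)
acts = {
    "A": (6, []), "B": (5, []), "C": (4, ["A"]), "D": (4, ["B"]),
    "E": (4, ["B"]), "F": (6, ["C", "D"]), "G": (6, ["E"]),
}

# Forward pass in topological order: earliest finish time of every activity
order = TopologicalSorter({name: preds for name, (_, preds) in acts.items()}).static_order()
finish, best_pred = {}, {}
for name in order:
    te, preds = acts[name]
    finish[name] = max((finish[p] for p in preds), default=0) + te
    best_pred[name] = max(preds, key=lambda p: finish[p]) if preds else None

# Trace back from the activity that finishes last to recover the critical path
node = max(finish, key=finish.get)
length, path = finish[node], []
while node is not None:
    path.append(node)
    node = best_pred[node]
print(" -> ".join(reversed(path)), length)    # A -> C -> F 16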
Example 14.14: Assuming that the expected times are normally distributed, find the probability of meeting the scheduled date given for the network.
Activity (i–j)   Optimistic to   Most Likely tm   Pessimistic tp   (days)
1–2              2               5                14
1–3              9               12               15
2–4              5               14               17
3–4              4               4                10
4–5              8               17               20
3–5              6               6                12
Scheduled project completion date is 30 days. Also, find the date on which
the project manager can complete the project with a probability of 0.90.
Solution: The expected time te and variance σ² for each activity are calculated in the following table:
Activity   te = (to + 4tm + tp)/6   σ² = ((tp – to)/6)²
1–2        6                        4
1–3        12                       1
2–4        13                       4
3–4        5                        1
3–5        16                       4
4–5        7                        1
To determine the critical path, we compute the earliest expected time and the latest allowable time for each event. First we draw the project network as follows:
The critical path is given by 1 → 3 → 5 and the project duration is 28 days.
Project length variance σ² = 1 + 4 = 5. Standard deviation σ = √5 = 2.236.
The probability of completing the project within 30 days is given by P(Z ≤ D), where
D = (Ts – Te)/σ = (30 – 28)/2.236 = 0.8944
P(Z ≤ 0.8944) = 0.5 + φ(0.8944)
= 0.8133
= 81.33%
Conclusion: If the project is performed 100 times under the same conditions,
then there will be 81.33 occasions when the project will be completed in 30 days.
If the probability for the completion of the project is to be 0.90, then the corresponding value of Z is 1.29.
Z = (Ts – Te)/σ = 1.29
i.e., (Ts – 28)/2.236 = 1.29
⇒ Ts = (1.29)(2.236) + 28
⇒ Ts = 30.88 days
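This last step inverts the normal distribution, which NormalDist().inv_cdf does directly (my own check, not part of the text). The library gives Z ≈ 1.2816 rather than the rounded table value 1.29, so the computed date comes out a few hundredths of a day earlier than 30.88.

from statistics import NormalDist

Te, sigma = 28, 5 ** 0.5           # expected duration (days) and standard deviation
z = NormalDist().inv_cdf(0.90)     # Z value with 90% of the normal curve to its left
Ts = Te + z * sigma                # scheduled date that gives a 0.90 probability
print(round(z, 4), round(Ts, 2))   # 1.2816 30.87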

14.6 COMPARISON AND LIMITATIONS OF PERT AND CPM
1. CPM is activity oriented, i.e., the CPM network is built on the basis of activities. Also, the results of the various calculations are considered in terms of the activities of the project. On the other hand, PERT is event oriented.

2. CPM is a deterministic model, i.e., it does not take into account the uncertainties involved in the estimation of time for the execution of a job or an activity. It completely ignores the probabilistic element of the problem. PERT, however, is a probabilistic model. It uses three estimates of the activity time (optimistic, pessimistic and most likely) with a view to taking time uncertainty into account. Thus the expected duration of each activity is probabilistic, and it indicates that there is a fifty per cent probability of getting the job done within that time.
3. CPM places dual emphasis on time and cost and evaluates the trade off
between project cost and project time. It allows the project manager to
manipulate project duration within certain limits so that project duration can
be shortened to an optimal cost. On the other hand PERT is primarily
concerned with time. It helps the manager to schedule and coordinate various
activities so that the project can be completed on scheduled time.
4. Since the Critical Path Method does not account for uncertainty, it is best
used in projects where the activity time estimate can be predicted fairly
accurately. For example, for repetitive projects you can estimate the time
for each activity quite accurately from past experience. Whereas for projects
that have a higher degree of uncertainty, use the PERT Network. Most
software projects will require you to account for a high degree of uncertainty.
5. Another difference in PERT and CPM is in how the diagrams are drawn. In
PERT, events are placed in circles or rectangles to emphasize a point in
time. Tasks are indicated by the lines connecting the network of events. In
CPM the emphasis is on the tasks, which are placed in circles. The circles
are then connected with lines to indicate the relationship between the tasks.
CPM use has become more widespread than the use of PERT applications.
PERT and CPM are used together because they have similarities. For example,
PERT and CPM both assume that a small set of activities, which make up the
longest path through the activity network control the entire project. In addition to
that, PERT and CPM also share the following six key assumptions:
1. All tasks have distinct beginning and end points.
2. All estimates can be mathematically derived.
3. Tasks must be able to be arranged in a defined sequence that produces a
pre defined result.
4. Resources may be shifted to meet the need.
5. Cost and time share a direct relationship, i.e., cost of each activity is evenly
spread over time.
6. Time, of itself, has no value.
When used together, PERT and CPM can provide:
 A range of time estimates (by PERT).
 Likely time estimates (by PERT and CPM).
 Cost estimates (by CPM).
 Time and costs if crashed (by CPM).
 Probabilities of completion on time for a range of times (by PERT).
 A clear path of tasks that are critical to the project (by PERT and CPM).
 A central focus for solid communications on project issues (by PERT and
CPM).
Limitations of PERT/CPM
 Clearly defined, independent and stable activities.
 Specified precedence relationships.
 Over emphasis on critical paths.
 Deterministic CPM model.
 Activity time estimates are subjective and depend on judgment. If the
estimates are subjective, then it compromises the purpose of the formula.
The weighted estimate and standard deviation will not accurately depict the
amount of time required for each task. In case where there is little experience
in performing the activity, these estimates may be only a guess. Moreover if
the person or group performing the activity estimates the time, there may be
a bias in the estimate.
 PERT assumes a beta distribution for these time estimates, but the actual
distribution may be different.
 Even if the beta distribution assumption holds, PERT assumes that the
probability distribution of the project completion time is the same as that of
the critical path. PERT consistently underestimates the expected project
completion time due to alternate paths becoming critical. Underestimation of time can cause huge problems in project management. Not only can it cause the project to fall behind, but it can also cause budget overruns when employees are forced to work overtime to meet project deadlines, or when the project overextends what was budgeted resource-wise, thus causing a problem with over-allocation.

Check Your Progress


9. What is a critical activity and critical path?
10. What is PERT? Where is it used?
11. What are optimistic time, most likely time and pessimistic time for activities
in a project?
12. State about the expected time for an activity in a project.
13. Define variance of an activity.
14. What is the standard normal variable of a project?
15. Why are PERT and CPM used together?
14.7 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. The management functions involved in the three phases of work in a project are planning, scheduling and controlling.
2. Planning is the setting of the objectives of the project by listing the tasks to be performed and the resources available to complete the project.
3. An activity represents an action. It is an effort that consumes time that is
needed to complete a part of the overall project.
4. An event is either start or end of an activity.
5. Network is a graphic representation of logically connected activities and
events where activities are presented as arrows and events as nodes.
6. Events that are ending of more than one activity are known as merge events
and those which are beginning of more than one activity are known as burst
events.
7. An activity that consumes neither any resource nor time, but is there on the network to show a link between events, is known as a dummy activity.
8. A dummy activity that is the only activity emanating from an event is a
redundant activity which can be eliminated, and this phenomenon is known
as redundancy.
9. An activity is critical if delay in its start will cause further delay in completion
of the entire project and critical path is the sequence of all such activities in
the network.
10. PERT stands for Program Evaluation and Review Technique. It is a
probabilistic method where activity times are represented by a probability
distribution. PERT is used where activities involved in a project are non-
deterministic in nature.
11. Smallest time taken to complete an activity assuming that everything goes
well is optimistic time. Most likely time is the normal time taken by an activity
assuming normal delay. Pessimistic time is the longest time required for an
activity to complete assuming everything going wrong.
12. Expected time for an activity in a project is denoted in a PERT chart and is
given by the formula as given below:
Expected time = (Optimistic time + 4 × Most likely time + Pessimistic time)/6.
13. Variance of an activity is given by σ² = [(Pessimistic time – Optimistic time)/6]².
14. Standard normal variable is given by the formula; (Scheduled time for project
completion – Normal expected time for the project)/Expected standard
deviation for the project. Expected standard deviation for the project is
calculated by the square root of sum of variance of all the critical activities.
15. PERT and CPM are used together because they have similarities.
14.8 SUMMARY
 Network analysis is a method of planning and controlling projects by recording their interdependence in a diagrammatic form that enables each fundamental problem involved to be tackled separately.
 Network analysis clearly shows the interdependences between jobs to be
performed in the context of a project and thus enables people to see not only
the overall plan but the ways in which their own activities depend upon or
influence those of others.
 By splitting up the project into smaller activities, network analysis assists in
the estimation of their durations leading to more accurate target dates.
Network analysis allows schedules to be based on considerations of costs
so as to complete projects in a given time at minimum expense.
 Network analysis separates the planning of the sequence of jobs from the
scheduling of times for the jobs and thus it makes planning and scheduling
effective.
 Network analysis is specially suited for projects which are not routine or
repetitive and which will be conducted only once or a few times. Two most
popular forms of this technique now used in many scheduling situations are
the Critical Path Method (or simply CPM) and the Programme Evaluation
and Review Technique (popularly known as PERT).
 Critical Path Method or CPM was developed to aid in the scheduling of
routine plant overhaul, maintenance and construction work. This method
differentiates between planning and scheduling.
 Planning refers to the determination of activities that must be accomplished
and the order in which such activities should be performed to achieve the
objective of the project whereas scheduling refers to the introduction of
time into the plan thereby creating a time table for the various activities to be
performed.
 Programme Evaluation and Review Technique or PERT was developed for
use in defence projects specifically but now this technique assists a business
manager in planning and controlling a project.
 In PERT, the time assumed to perform each activity is uncertain and as such
three time estimates (the optimistic, the pessimistic and the most likely) are
used. It is often described as an approach of multiple time estimates to
scheduling problems of long-range research and development projects.
 A project is defined as a combination of interrelated activities, all of which
must be executed in a certain order for its completion.
 Network is the graphic representation of logically and sequentially connected
arrows and nodes representing activities and events of a project. Networks
are also called arrow diagrams.
 An activity represents some action and is a time-consuming effort necessary to complete a particular part of the overall project. Thus, each and every activity has a point of time where it begins and a point where it ends.
 The beginning and end points of an activity are called events or nodes. An event is a point in time and does not consume any resource. It is represented by a numbered circle.
 Activities, which must be accomplished before a given event can occur are
termed as preceding activities. Activities, which cannot be accomplished
until an event has occurred are termed as succeeding activities. Activities,
which can be accomplished concurrently are known as concurrent activities.
 Certain activities which neither consume time nor resources but are used
simply to represent a connection or a link between the events are known as
dummies. They are shown in the network by dotted lines.
 In a network diagram looping error is also known as cycling error. Drawing
an endless loop in a network is known as error of looping.
 To disconnect an activity before the completion of all the activities in a
network diagram is known as dangling.
 As per Fulkerson’s rule, after the network is drawn in a logical sequence
every event is assigned a number. The number sequence must reflect the
flow of the network.
 Once the network of a project is constructed, the time analysis of the network
becomes essential for planning various activities of the project. An activity
time is a forecast of the time an activity is expected to take from its starting
point to its completion (under normal conditions).
 An activity is said to be critical if a delay in its start will cause a further delay
in the completion of the entire project.
 The sequence of critical activities in a network is called the critical path. It is
the longest path in the network from the starting event to the ending event
and defines the minimum time required to complete the project. In the
network, it is denoted by a double line.
 The critical path highlights those activities which must be performed more
rapidly if the total project completion time is to be reduced.
 The term slack is normally associated with events. It indicates the amount
of latitude that is available for an event to occur.

14.9 KEY WORDS
 Network: A graphic representation of logically connected activities and events where activities are presented as arrows and events as nodes.
 Activity: An activity represents an action. It is an effort that consumes time that is needed to complete a part of the overall project.
 Preceding activity: An activity that must be accomplished for an event to occur.
 Succeeding activity: An activity that cannot occur until an event has occurred.
 Concurrent activities: Activities which can be accomplished concurrently.
 Critical activity: An activity is critical if delay in its start will cause further
delay in completion of the entire project.
 Event: An event is either start or end of an activity.
 Critical path: It is the path connecting all critical events of the project from
start to the completion of the project.
 PERT: It stands for Program Evaluation Review Technique. It is a
probabilistic method where activity times are represented by a probability
distribution.
 CPM: It stands for Critical Path Method and is based on determination of
the critical path.
 Dummy activity: An activity that consumes neither any resource nor time,
but it is there on the network to show a link between events.
 Redundancy: A dummy activity that is the only activity emanating from an
event is a redundant activity which can be eliminated.
 Optimistic time: The minimum time taken to complete an activity assuming
that everything goes well.
 Most likely time: The normal time taken by an activity assuming normal
delay.
 Pessimistic time: The longest time required for an activity to complete
assuming everything going wrong.
 Expected time: The time for an activity in a project, denoted in a PERT
chart and is given by the formula:
Expected time = (Optimistic time + 4 × Most likely time + Pessimistic time)/6.

14.10 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. What is understood by a project?
2. What is dangling in a network? How can it be avoided?
3. Write two basic differences between PERT and CPM.
4. How are time estimates used in PERT and CPM?
5. How many types of float are there?
6. Differentiate between float and slack times.
7. Define critical activity and critical path.
8. The total float of an activity i – j is 18. The latest and earliest occurrence of
events i and j are 15, 12 and 22, 10, respectively. Find the free float.
9. What is independent float when the total float of an activity i – j is 18?
Latest and earliest occurrence of events i and j are 15, 12 and 22, 10,
respectively.
10. What are the limitations of PERT and CPM?
Long-Answer Questions
1. The following table gives the activities and duration of a construction
project.

Activity          1-2  1-3  2-3  2-4  3-4  4-5
Duration (days)    20   25   10   12    6   10
(i) Draw the network for the project.
(ii) Find the critical path.
2. A small project consists of 11 activities A, B, C, ..., K. The precedence relationships are: A and B can start simultaneously; A < C, D, I; B < G, F; D < G, F; F < H, K; G, H < J; I, J, K < E. The durations of the activities are as follows.
Activity          A   B   C   D   E   F   G   H   I   J   K
Duration (days)   5   3  10   2   8   4   5   6  12   8   9
Draw the network of the project. Summarise the CPM calculations in a tabular form, computing the total and free floats of the activities, and hence determine the critical path.
3. Draw the network and determine the critical path for the given data. Also
calculate all the floats involved in CPM.

Jobs        1-2  1-3  2-4  3-4  3-5  4-5  4-6  5-6
Duration      6    5   10    3    4    6    2    9
4. A small maintenance project consists of the following 12 jobs.


Jobs              1-2  2-3  2-4  3-4  3-5  4-6  5-8  6-7  6-10  7-9  8-9  9-10
Duration (days)     2    7    3    3    5    3    5    8     4    4    1     7
Draw the arrow network of the project. Summarize the CPM calculations in a tabular form, calculating the three types of floats, and hence determine the critical path.
5. Consider the following data for activities in a given project.
Activity      A   B   C   D     E   F
Predecessor   –   A   –   B,C   C   D,E
Time (days)   5   4   7   3     4   2
Draw the arrow diagram for the project. Compute the earliest and the latest
event times. What is the minimum project completion time? List the activities
on the critical path.
6. For the following project, determine the critical path and its duration.
Activity       A   B   C   D   E   F     G   H
Predecessors   –   A   A   B   B   D,E   D   C,F,G
Time (days)    2   4   8   3   2   3     4   8
7. A project has the following time schedule.

Activity            1-2  1-3  1-4  2-5  3-6  3-7  4-6  5-8  6-9  7-8  8-9
Duration (months)     2    2    1    4    8    5    3    1    5    4    3
Construct the network and compute:
(i) Total float for each activity.
(ii) Critical path and its duration.
8. The data for a small PERT project is as given below, where a represents
optimistic time, m the most likely time and b the pessimistic time. Estimates
(in days) of the activities A, B, ..., J, K.

Activity   A   B   C   D   E   F   G   H   I   J   K
a          3   2   6   2   5   3   3   1   4   1   2
m          6   5  12   5  11   6   9   4  19   2   4
b          5  14  30   8  17  15  27   7  28   9  12
A, B, C can start simultaneously; A < D, I; B < G, F; D < G, F; C < E; E < H, K; F < H, K; G, H < J.
(i) Draw the arrow network of the project.
(ii) Calculate the earliest and the latest expected times to each event and find the critical path.
(iii) What is the probability that the project will be completed 2 days later than expected?
9. The three estimates for the activities of a project are given below:
Estimated Duration (Days)
Activity    a    m    b
1–2 5 6 7
1–3 1 1 7
1–4 2 4 12
2–5 3 6 15
3–5 1 1 1
4–6 2 2 8
5–6 1 4 7

Draw the project network. Find out the critical path of the project and
project duration. What is the probability that the project will be completed
at least 5 days earlier than expected?
What is the probability that the project will be completed by 22 days?
10. Consider the network shown in the figure below. The estimates to, tm and tp are shown in this order for each of the activities on top of the arcs denoting the respective activities.
Find the probability of completing the project in 25 days.

11. A project is represented by the network shown below and has the following
table:
Task A B C D E F G H I
Least time 5 18 26 16 15 6 7 7 3
Greatest time 10 22 40 20 25 12 12 9 5
Most likely time 8 20 33 18 20 9 10 8 4
Determine the following:
(i) Expected task times and their variances.
(ii) The earliest and the latest expected times to reach each node.

(iii) The critical path.
(iv) The probability of completing the project within 41.5 weeks.
12. Consider a project having the following activities and their time estimates. Draw an arrow diagram for the project. Identify the critical path and compute the expected project completion time. What is the probability that the project will require at least 75 days?
Activity   Predecessor   t0   tm   tp   (days)
A – 2 4 6
B A 8 12 16
C A 14 16 30
D B 4 10 16
E C,B 6 12 18
F E 6 8 22
G D 18 18 30
H F,G 8 14 32

13. Compare PERT and CPM with the help of examples.

14.11 FURTHER READINGS
Arumugam, R. S. 2006. Operations Research. Palayamkottai (Tamil Nadu): New Gamma Publications.
Sundharesan, V., K. S. Ganapathy and K. Ganesan. 2017. Resource Management
Techniques (Operations Research). Chennai: A. R. Publications.
Swaroop, Kanti, P. K. Gupta and Man Mohan. 2007. Operations Research,
13th Edition. New Delhi: Sultan Chand & Sons.
Taha, Hamdy A. 1992. Operations Research: An Introduction. New York:
Macmillan.
Sharma, S. D. 2006. Operations Research. Uttar Pradesh: Kedar Nath Ram
Nath & Co.
Gupta, P. K. and D. S. Hira. 2002. Introduction to Operations Research. New
Delhi: S. Chand And Company Limited.
Gillett, Billy E. 2007. Introduction to Operations Research. New Delhi: Tata
McGraw-Hill.
Ackoff, R. L. and M. W. Sasieni. 1968. Fundamentals of Operations Research.
New York: John Wiley & Sons Inc.
Kothari, C. R. 1992. An Introduction to Operational Research. New Delhi: Vikas Publishing House Pvt. Ltd.
Kalavathy, S. 2002. Operations Research. New Delhi: Vikas Publishing House
Pvt. Ltd.
Jensen, Paul A., and Jonathan F. Bard. 2003. Operations Research Models and
Methods. New York: John Wiley & Sons.
Sharma, J. K. 2001. Operations Research: Theory and Applications. New
Delhi: Macmillan India Ltd.
