
Industrial Operations Management

Dilnesaw Samuel (MA)
Tariku Jebena (PhD)

Distance Learning Module
MA Program in Logistics and Supply Chain Management
School of Commerce
Addis Ababa University

Addis Ababa, Ethiopia

1/1/2015
INDUSTRIAL OPERATIONS MANAGEMENT

DISTANCE LEARNERS MODULE

General Introduction to the Module


Operations management is about how organizations produce goods and services. Everything you
wear, eat, sit on, use or read comes to you as the result of operations management, which has
designed, planned and controlled its production. Every treatment you receive at the hospital,
every service you expect in the shops and every tutorial class you attend at university has been
produced. While the function may not always be called operations management, that is what it
actually represents. And that is what this module is concerned with: the tasks, issues and
decisions involved in the operations management function, which produces the services and
goods on which we all depend.

Industrial Operations Management is a course designed to help learners understand the Toyota
Production System and its Just-in-Time manufacturing philosophy, the contributions of the
quality gurus to managing quality, the total quality management concept vis-à-vis BPR,
benchmarking and ISO standards in maintaining quality, quality maintenance and effective
failure mode analysis, the theory of constraints, and Lean and Six Sigma practices. In addition,
the course enables students to make industrial visits (service, manufacturing, construction,
mining, etc.) in Ethiopia to augment their theoretical knowledge of operations management in
Ethiopian industries.
Upon completion of the course, learners will be able to:
 Describe operations management, its scope and activities
 Appreciate the importance of operations management in a business environment
 Identify the decisions involved in designing and controlling the operations system
 Analyze and apply selected quantitative tools and models in the analysis of decisions for the
design, planning and control of operations systems
 Develop a skill set for quality and process improvement
 Broaden a working knowledge of concepts and methods related to improving operations in
organizations.

This module is, thus, organized around nine chapters to achieve the aforementioned objectives.
Chapter 1 orients learners to the basic issues of industrial operations management, its historical
evolution, and the scope of the function.

Chapter 2, Strategic Operations Management, focuses on the role of operations in the
competitive and long-term performance of an organization. It also addresses the strategic design
of operations and its management.

Chapter 3 addresses design issues such as product and process design, covering the design
processes themselves and the tools that support them. The chapter also addresses process
analysis concepts, along with layout design, location decisions and capacity planning.

Chapter 4 introduces operations planning, beginning with the aggregate planning process. The
chapter covers resource planning with discussions of MRP, MRP II and ERP, and includes
production control topics such as scheduling, loading and sequencing.
Chapter 5, Quality Management, emphasizes quality management systems, with expanded
coverage of inspection, quality control, TQM, benchmarking, continuous improvement/kaizen
and BPR.
Chapter 6, Six Sigma, covers the Six Sigma process, Six Sigma tools, Six Sigma and
profitability, and Lean Six Sigma.

Chapter 7 has sections on just-in-time and lean operations, the elements of lean operations,
tools for successfully implementing JIT, and the theory of constraints.

Chapter 8, Mechanization and Automation, addresses the historical background of the
mechanization and automation of operations and how these developments have affected
productivity. Different types of automation, as well as automation strategies, are also covered.

Chapter 9, The Future of Operations Management, emphasizes the challenges that operations
managers will face in the near future as a result of global growth, the Internet and the
information and communication revolution, and environmentalism.

The organization of this module reflects the emergence of Strategic Operations Management,
Quality Management, Six Sigma, and JIT/Lean Operations as the modern state of operations
management knowledge. Hence, each of these concepts is discussed in detail in a separate
chapter. Other issues such as Business Process Reengineering (BPR), kaizen, benchmarking,
Enterprise Resource Planning (ERP), mass customization, agile manufacturing and flexible
manufacturing are discussed as subchapters of the module. The future of operations management
is addressed in the final chapter of the module, which has, of course, been written based on the
best guesses of scholars, surely not on the predictions of witches.

Learners should note, however, that there might be a different approach to ordering these topics
than presented in this module. This is perfectly understandable given the interdependency of
decisions in operations management.

Table of Contents
CHAPTER I .................................................................................................................................................. 1
INTRODUCTION .................................................................................................................................... 1
1.1 Understanding the Nature of Operations Management ................................................................... 1
1.2 The Operations System ................................................................................................................... 2
1.3 Operations Function in Organizations ............................................................................................ 4
1.4 A typology of Operations ................................................................................................................ 5
1.5 Historical Background and the Evolution of Operations Management .......................................... 8
1.6 Related Issues of Operations Management ................................................................................... 11
1.7 The Scope of Operations Management Decisions ......................................................... 15
Chapter Summary ............................................................................................................................... 17
Review Questions ............................................................................................................................... 17
CHAPTER II............................................................................................................................................... 18
STRATEGIC OPERATIONS MANAGEMENT ................................................................................... 18
2.1 What is operations strategy? ......................................................................................................... 19
2.2 The Strategic Role of Operations .................................................................................................. 20
2.3 Performance Objectives of Operations [Strategic] ....................................................................... 22
2.4 Approaches to Operations Strategy ............................................................................................... 26
2.5 The process of operations strategy ................................................................................................ 33
2.6 Strategic Resonance ...................................................................................................................... 34
2.7 Operations Strategy Implementation ............................................................................................ 35
Chapter Summary ............................................................................................................................... 35
Review Questions ............................................................................................................................... 36
CHAPTER III ................................................................................................................................................. 37
STRATEGIC DECISIONS IN OPERATIONS MANAGEMENT ...................................................................... 37
3.1 Product Design .............................................................................................................................. 38
3.2 Process Design .............................................................................................................................. 57
3.3 Long-term Capacity Planning ....................................................................................................... 71
3.4 Facilities Location Decisions ........................................................................................................ 79
3.5 Facility Layout .............................................................................................................................. 86
3.6 Assembly-line Balancing ........................................................................................................ 96

Chapter Summary ............................................................................................................................... 98
Review Questions ............................................................................................................................... 99
CHAPTER IV ............................................................................................................................................... 100
OPERATING DECISIONS ........................................................................................................................ 100
4.1 Production Planning and Control Systems: An Overview ......................................................... 100
4.1.1 Aggregate Production/ Operations Planning ............................................................................ 103
4.1.2 Master Production Schedule (MPS) ........................................................................ 111
4.1.3. Manufacturing Resource Planning: A transition From MRP to MRPII.................................. 122
4.1.4 Enterprise Resource Planning (ERP) ....................................................................................... 125
4.2 Shop floor Planning and Control ................................................................................................ 129
Chapter Summary ............................................................................................................................. 143
Review questions .............................................................................................................................. 143
CHAPTER V ................................................................................................................................................ 145
QUALITY MANAGEMENT...................................................................................................................... 145
5.1 Meaning and Nature of Quality .................................................................................................. 145
5.2 Dimensions of Quality ................................................................................................................ 147
5.3 Historical Perspective on Quality and Its Management .............................................................. 148
5.4 Quality Gurus .............................................................................................................................. 152
5.5 Typology of approaches to quality management ........................................................................ 157
5.6 Quality Standards, Certification and Awards.............................................................................. 164
5.7 Total Quality Management ......................................................................................................... 165
5.8 Quality circles ............................................................................................................................. 168
5.9 Total Quality Management (TQM) and Continuous Improvement/ Kaizen ............................... 169
5.10 TQM and Benchmarking....................................................................................................... 172
5.11 TQM and Business Process Re-engineering ............................................................................. 174
5.12 The Cost of Quality ................................................................................................................... 177
Chapter Summary ............................................................................................................................. 179
Review Questions ............................................................................................................................. 180
CHAPTER VI ............................................................................................................................................... 181
SIX SIGMA ............................................................................................................................................. 181
6.1 Introduction ................................................................................................................................. 181
6.2 The Variants of Six Sigma .......................................................................................................... 183

6.3 The Six Sigma Process................................................................................................................ 185
6.4 The Six Sigma Team ................................................................................................................... 186
6.5 Six Sigma Methodologies ........................................................................................................... 188
6.6 Measuring Performance .............................................................................................................. 191
6.7 Six Sigma versus TQM ............................................................................................................... 192
6.8 Six sigma Success factors ........................................................................................................... 193
Chapter Summary ............................................................................................................................. 193
Review Questions ............................................................................................................................. 193
CHAPTER VII .......................................................................................................................................... 195
JUST-IN-TIME AND LEAN PRODUCTION ..................................................................................... 195
7.1 Introduction to the Early Periods of JIT ...................................................................................... 195
7.2 JIT today .................................................................................................................................... 197
7.3 Lean Production .......................................................................................................................... 198
7.4 JIT Goals ..................................................................................................................................... 199
7.5 The Basic Elements of JIT/ Lean Production ............................................................................. 202
7.6 JIT Tools ..................................................................................................................................... 204
7.7 Advantages and Disadvantages of JIT ........................................................................................ 215
7.8 Basics of Constraints Management ............................................................................................. 217
Chapter Summary ............................................................................................................................. 221
Review Questions ............................................................................................................................. 222
CHAPTER VIII ........................................................................................................................................ 223
MECHANIZATION AND AUTOMATION ....................................................................................... 223
8.1 Introduction ................................................................................................................................. 223
8.2 Assembly Line ............................................................................................................................ 223
8.3 Industrial Robot .......................................................................................................................... 224
8.4. The Age of Automation ............................................................................................................. 224
8.5 Types of Automation .................................................................................................................. 225
8.6 Reasons for Automation.............................................................................................................. 226
8.7 Advantages and Disadvantages of Automation .......................................................................... 227
8.8 Automation Strategies ................................................................................................................. 228
Chapter Summary ............................................................................................................................. 229
Review Questions ............................................................................................................................. 229

CHAPTER IX ........................................................................................................................................... 230
THE FUTURE OF OPERATIONS MANAGEMENT......................................................................... 230
9.1 The Challenge of Global Growth ................................................................................................ 231
9.2 The challenge of the Internet ...................................................................................................... 235
9.3 The challenge of the environment ............................................................................................... 237
Chapter Review ................................................................................................................................. 240
Review Questions ............................................................................................................................. 241
Hints for Activities .................................................................................................................................... 242
Answer Key for Multiple Choice Review Questions ................................................................................ 243
References ................................................................................................................................... 244

CHAPTER I

INTRODUCTION

This is an introductory chapter, so we will examine what we mean by operations management,


how operations processes can be found everywhere, how they are all similar yet different, and
what it is that operations managers do.

Upon completion of this chapter, you will be able to:


 Define operations and operations management.
 Explore the history and present-day context of operations management
 Describe the role of operations in different types of organizations
 Explain how operations management is relevant to organizations, managers and
individuals
 Describe basic concepts in operations management.
 Distinguish the key decisions of operations management.

1.1 Understanding the Nature of Operations Management

The term operations refers to the transformation process that converts resources into finished
goods and services. The word operation as used in this module encompasses two areas:
manufacture in the manufacturing sector and backroom activities in the service sector. In
manufacturing industries, operations are those activities, typically carried out in a factory, which
transform material into the final product. In service industries, operations are those activities
which process customer transactions but which do not involve direct contact with external
customers (e.g., backroom activities such as customer order preparation and payment
processing).
That said, what is operations management? Operations management is defined as the process
whereby resources, flowing within a defined system, are combined and transformed in a
controlled manner to add value in accordance with policies communicated by management.

Operations management focuses on managing the process of transforming materials, labor, and
capital into useful goods and/or services. The product outputs can be either goods or services;
effective operations management is a concern for both manufacturing and service organizations.
The resource inputs, or factors of production, include the wide variety of raw materials,
technologies, capital information, and people needed to create finished products. The
transformation process, in turn, is the actual set of operations or activities through which various
resources are utilized to produce finished goods or services of value to customers or clients.

Operations management is concerned with the conversion of inputs into outputs, using physical
resources, so as to provide the desired utilities to the customer while meeting the other
organizational objectives of effectiveness, efficiency and adaptability. It distinguishes itself from
other functions such as personnel, marketing and finance by its primary concern for
‘conversion by using physical resources’.

The set of interrelated management activities involved in manufacturing certain products is
called production management. If the same concept is extended to services, then the
corresponding set of management activities is called operations management.

1.2 The Operations System

Generally speaking, systems are arrangements of components designed to achieve objectives
according to plan. A business system is a subsystem of a larger social system. In turn, it contains
subsystems such as personnel, engineering, finance and operations, which function for the good
of the organization. A systems approach to operations management recognizes these hierarchical
management responsibilities. If subsystem goals are pursued independently, the result is
sub-optimization; a consistent and integrative approach leads to optimization of overall system
goals.

The operations system of an organization is the part that produces the organization’s products.
It converts inputs in order to provide the outputs required by a customer; that is, it converts
physical resources into outputs whose function is to satisfy customer wants. An operations
system can also be defined as a configuration of resources combined for the provision of goods
or services.

Operations management systems contain five basic elements: inputs, transformation processes,
outputs, control systems, and feedback. These elements must be brought together and
coordinated into a system to produce the product or service, the reason for the business to exist.

Figure 1.1 below portrays the process of operations management in a simplified fashion. The
system takes in inputs—people, technology, capital, equipment, materials, and information—and
transforms them through various processes, procedures, work activities, and so forth into
finished goods and services.

[Figure 1.1: Operations System. Inputs (land, labor, capital, management) pass through a
conversion process to produce outputs (goods and services). A monitor compares actual output
against desired output; feedback drives the needed adjustments to the inputs, while random
fluctuations act on the conversion process.]


Inputs/ Resources
Resources are the human, material and capital inputs to the production process. Human resources
are the key assets of an organization. As technology advances, a large proportion of human
input goes into planning and controlling activities. By drawing on the intellectual capabilities of
people, managers can multiply the value of their employees many times over. Material resources
include the physical facilities and materials such as plant, equipment, inventories and supplies.
These are the major assets of an organization. Raw materials are necessary as the things that will
be transformed in a business. Capital, in the form of stock, bonds, and/or taxes and
contributions, is a vital asset; it is a store of value used to regulate the flow of the other
resources. Information and energy are also resources, each needed in varying degrees. Besides
these, the inputs to an operations management system include intangible resources that come
into a business as well.

Inputs are important to the quality of the finished product of the business. Remember the
computer cliché “garbage in, garbage out” (GIGO)? The idea holds true for operations
management, too: you can’t produce high-quality outputs from inferior inputs.

Transformation Processes
Once we have identified the inputs of a business, we can look at the processes that are used to
transform them into finished products. Transformation processes are the active practices
(concepts, procedures, and technologies) that are implemented to produce outputs. Dry
cleaners, for instance, take soiled clothing (inputs) and use chemicals, equipment, and know-how
to transform it into clean clothing (the outputs of the business). The transformation process,
therefore, can be:
 physical, as in manufacturing operations;
 locational, as in transportation or warehouse operations;
 exchange, as in retail operations;
 physiological, as in health care;
 psychological, as in entertainment; or
 informational, as in communication.
Outputs
Outputs, the result of the transformation processes, are what your business produces. Outputs can
be tangible, such as a CD, or intangible, such as a doctor’s diagnosis or an entertainment
experience. Since a business’s social responsibility, or obligations to the community, has
become as serious a matter as product-liability and other lawsuits, we need to consider all the
outputs a business produces, not just the beneficial or intended ones. When we look at the big
picture of the transformation process, we see that employee accidents, consumer injuries,
pollution, and waste are also outputs.

Control Systems
Control systems provide the means to monitor and correct problems or deviations when they
occur in the operating system. Controls are integrated into all three stages of production: input,
transformation, and output (see Figure 1.1). An example of a control system would be the use of
electronic monitors in a manufacturing process to tell a machine operator that the product is not
being made within the allowed size tolerance. In service companies, employee behavior is part of
the transformation process to be controlled. Control systems ensure that the product or service
meets the quality the customer expects, every time.
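As a minimal sketch of such an in-process control, the size-tolerance check described above might look like this in code (the nominal size, tolerance and measurements are all invented for illustration):

```python
# Sketch of an in-process control: flag parts made outside a size tolerance.
# The nominal size, tolerance and measurements below are illustrative only.
NOMINAL_MM = 50.0
TOLERANCE_MM = 0.5

def within_tolerance(measured_mm: float) -> bool:
    """Return True if a measured part is inside the allowed size tolerance."""
    return abs(measured_mm - NOMINAL_MM) <= TOLERANCE_MM

# Measurements streaming in from an electronic monitor:
measurements = [49.8, 50.2, 50.7, 49.4, 50.0]
flagged = [m for m in measurements if not within_tolerance(m)]
print("Out-of-tolerance parts:", flagged)  # the operator would be alerted
```

Real in-process controls usually work with statistical control limits rather than a single pass/fail threshold, but the compare-and-flag logic is the same.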

The objective of combining resources under controlled conditions is to transform them into
goods and services having a higher value than the original inputs. For all operations, the goal is
to create some kind of added value, so that the outputs are worth more to consumers than the
sum of the individual inputs. The transformation process applies technology to the inputs. The
effectiveness of the production factors in the transformation process is known as productivity,
the ratio of output to input; the issue of productivity is discussed in later sections of the module.
Operations managers should concentrate on improving transformation efficiency so as to
increase this ratio.
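Since productivity is conventionally measured as a ratio of output to input, a small worked example may make the idea concrete (all figures are invented for illustration):

```python
# Productivity as an output/input ratio; the figures are illustrative only.
def productivity(output_units: float, input_units: float) -> float:
    """Partial productivity measure: units produced per unit of input."""
    return output_units / input_units

# A plant producing 2,400 units from 300 labor-hours:
before = productivity(2400, 300)   # 8.0 units per labor-hour
# After a process improvement, the same 2,400 units from 250 labor-hours:
after = productivity(2400, 250)    # 9.6 units per labor-hour

improvement = (after - before) / before
print(f"Productivity rose from {before} to {after} units/hour ({improvement:.0%})")
```

This is a single-factor (labor) productivity measure; total productivity divides output by the sum of all inputs expressed in a common unit.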

Feedback
Feedback is the information that a manager receives in monitoring the operations system. It can
be verbal, written, electronic, or observational. Feedback is the necessary communication that
links a control system to the inputs, transformation, and outputs. Once feedback is received, the
cycle begins again, since transformation is a continuous process, with any needed changes
taking place throughout.
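Purely as an illustrative sketch, the five elements can be tied together as a simple loop in which the control system compares actual with desired output and feedback adjusts the inputs (every name and figure here is hypothetical):

```python
# Toy model of the five-element operations system: inputs pass through a
# transformation, a control compares actual with desired output, and
# feedback adjusts the inputs for the next cycle. All figures are invented.

def transform(inputs: float, efficiency: float) -> float:
    """Conversion process: value of output produced from the inputs."""
    return inputs * efficiency

def operations_cycle(inputs: float, efficiency: float,
                     desired_output: float, cycles: int = 5) -> list:
    history = []
    for _ in range(cycles):
        actual = transform(inputs, efficiency)   # transformation -> output
        deviation = desired_output - actual      # control: compare actual/desired
        inputs += 0.5 * deviation                # feedback: adjust the inputs
        history.append(round(actual, 2))
    return history

print(operations_cycle(inputs=100.0, efficiency=0.9, desired_output=120.0))
```

Run over a few cycles, the actual output climbs toward the desired output, which is the corrective role the control system and feedback play in Figure 1.1.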

Activity 1
Describe how the input/transformation/output/control/feedback model can be used in School of
Commerce, Addis Ababa University. Identify the inputs, transformation process and the outputs
of the system.

1.3 Operations Function in Organizations


The operations function is central to an organization because it produces the goods and services
which are its reason for existing, though it is not the only function. The operations activities are
closely intertwined with other functional areas of a firm.

For most firms, operations are the technical core or “hub” of the organization, interacting with
the other functional areas and suppliers to produce goods and provide services for customers. For
example, to obtain monetary resources for production, operations provides finance and
accounting with production and inventory data, capital budgeting requests, and capacity
expansion and technology plans. Finance pays workers and suppliers, performs cost analyses,
approves capital investments, and communicates requirements of shareholders and financial
markets. Marketing provides operations with sales forecasts, customer orders, customer
feedback, and information on promotions and product development. Operations, in turn, provides
marketing with information on product or service availability, lead-time estimates, order status,
and delivery schedules. For personnel needs, operations relies on human resources to recruit,
train, evaluate, and compensate workers and to assist with legal issues, job design, and union
activities. Outside the organization, operations interacts with suppliers to order materials or
services, communicate production and delivery requirements, certify quality, negotiate
contracts, and finalize design specifications.

Operations management is important in all types of organization, including business, industry,
and government, regardless of differences in size. Operations management uses the
organization’s resources to create outputs that fulfill defined market requirements. This is the
fundamental activity of any type of enterprise.

Besides, operations management is relevant to all parts of a business organization. Because every
unit in an organization produces something, managers need to be familiar with operations
management concepts in order to achieve goals efficiently and effectively. It is not just the
operations function that manages processes; all functions manage processes. For example, the
marketing function will have processes that produce demand forecasts, processes that produce
advertising campaigns and processes that produce marketing plans. These processes in the other
functions also need managing using similar principles to those within the operations function.
Each function will have its ‘technical’ knowledge. In marketing, this is the expertise in designing
and shaping marketing plans; in finance, it is the technical knowledge of financial reporting.

Yet each will also have a ‘process management’ role of producing plans, policies, reports and
services. The implications of this are very important. Because all managers have some
responsibility for managing processes, they are, to some extent, operations managers. They all
should want to give good service to their (often internal) customers, and they all will want to do
this efficiently. So, operations management is relevant for all functions, and all managers
should have something to learn from the principles, concepts, approaches and techniques of
operations management. It also means that we must distinguish between two meanings of
‘operations’:
 ‘Operations’ as a function, meaning the part of the organization which produces the
products and services for the organization’s external customers;
 ‘Operations’ as an activity, meaning the management of the processes within any of the
organization’s functions.

1.4 A typology of Operations


If you were asked to describe an operation with which you have come into contact, how would
you do this? You might describe the operation in terms of your experience with it, or its size or
reputation. A number of basic elements are helpful in describing operations. The first is whether
it is a manufacturing or a service operation or, as will be seen in the following section, if there
are elements of each in the organization.

Another aspect is the nature of the process taking place. Two characteristics describe this –
volume and variety. High volume products such as cars, consumer electronic devices and fast
food are typical examples of this. In order to achieve what economists describe as ‘economies of
scale’, these are usually produced in low variety. The number of variations of a car may be
significant when considering the different body styles, engine sizes and types, colours and
options available, but the reality is that the variety is limited by the choices available, and so the
variety is perceived rather than actual. Similarly, low volume products and services are generally
available in a higher variety.
The relationship between volume and variety is described often as the higher the volume the
lower the variety and vice versa. Supermarkets offer a high variety of products and yet sell in
high volumes. Doesn’t this rather change the rule? Not in this case, although there may well be
examples where both are offered. The point here is that the process is the same for all customers-
it is standardized. Everyone is treated in the same way – it is not tailored for the individual.
Therefore, from a process perspective, the variety is low, and the general finding still stands.

5
There are two other dimensions that provide insight into the nature of the operations environment
in which the organization operates. The first is the degree of competition in the market for the
organization’s goods or services. Generally, high volume organizations operate in highly
competitive markets, with many offerings competing for market share. The extreme is the mass-
market for cars and computers, where global hyper-competition exists. This is not the case for all
firms, as many operate in niche markets, often serving local customers. The second dimension is
that of position in the supply chain or supply network.

Regardless of whether an operation is manufacturing or service-based, it is part of a network or


chain of activities. These may be serving end-users directly, or providing a contribution towards
that directly or indirectly through their products.

In summary, the typology of operations is shown in Figure 1.2. This classification is useful as it
will tell us something about the general characteristics of the operations that we describe in this
way.

Business Manufacturing Service

Volume High Low

Variety Low High

Environment Global hyper- Stable, niche market


competition

Position in Start of supply-chain Suppliers end-user


Supply chain directly

Figure 1.2 A typology of operations.

These are summarized in Table 1.1.

Table 1.1 The general characteristics of operations


Task Manufacturing: the creation of physical Service: all work not concerned with
products. the creation of physical products
Volume : High volume – low variety: high levels of Low volume – high variety: usually
variety capital investment, systemization, routinized flexible technology, people and
work and flow through transformation system, systems performing high value-
resulting in low unit costs. adding work resulting in high unit
costs
Environment Hyper-competition: organizations are pursuing Niche: organizations optimize
any possible avenue to create competitive existing systems to maximize return
advantage, or simply survive on their investment
Position in Supply end customer/user: driven by needs of Removed from final customer/user:
supply chain consumers, must integrate supply networks to driven by needs of intermediaries in
deliver these needs the process, work as part of supply
networks

6
Considering the first element of this typology, manufacturing and service operations are
different, yet both are important to the success of an organization. As we have mentioned,
operations management isn’t just about managing manufacturing operations; service operations
are equally important. We usually describe organizations that transform physical materials into
tangible products (goods) as manufacturing. In contrast, organizations that influence materials,
people or information without physically transforming them may be termed as service
organizations.
It is important to bear in mind two major differences between services and manufacturing, which
are:
1. Tangibility – whether the output can be physically touched; services are usually
intangible, whilst products are usually concrete
2. Customer contact with the operation – whether the customer has a low or high level of
contact with the operation that produced the output.
These two factors – intangibility and customer contact – lead to other differences between
manufacturing and service operations, as shown in the following list:
a. Storability – whether the output can be physically stored
b. Transportability – whether the output can be physically moved (rather than the means of
producing the output)
c. Transferability – ownership of products is transferred when they are sold, but ownership
of services is not usually transferred
d. Simultaneity of production and consumption – whether the output can be produced prior
to customer receipt
e. Quality – whether the output is judged on solely the output itself or on the means by
which it was produced.
Another insight in distinguishing between manufacturing and service environments under a
number of key headings has been shown in Table 1.2 below.

Table 1.2 Typical differences between manufacturing and services


Manufacturing Service
Tangibility Product is generally concrete Service is intangible
Ownership Transferred when sold Not generally transferred.
Resale Can be resold Cannot generally be resold
Demonstrability Can be demonstrated Does not exist before purchase
Storability Can be stored by providers and Cannot be stored
customers.
Simultaneity Production precedes consumption Generally coincide
Location Production/selling/ consumption Spatially united
locally differentiable
Transportability Can be transported Generally not (but producers might be)
Production Seller alone produces Buyer/client takes part
Contact Indirect contact possible between Direct contact usually necessary
client and provider
Internationalization Can be exported Service usually cannot (but delivery
system often can)
Quality Can be inspected Cannot be checked before being
supplied

7
In real life most organizations produce both services and products for their customers and only a
few could be called ‘pure manufacturing’ or ‘pure services’. As noted previously, even
manufactured products are now surrounded by complex and sophisticated service packages, and
manufacturing organizations are being transformed into service operations surrounding a
manufacturing core. For example, services such as installation, maintenance and repair and
technical advice are usually provided with household appliances such as refrigerators and
washing machines. Software applications such as word-processing or spreadsheet programs
generally come on physical media such as CD-ROMs, accompanied by technical documentation
manuals.
Even though some aspects of the production of goods and services will differ, the operations
function itself is becoming increasingly similar for goods and services. Recognizing this, Chase
(1983) suggested that operations could be ranged along a continuum from pure manufacturing to
pure services, with quasi-manufacturing in the middle, as shown in Figure 1.3.

The continuum helps us to understand where a firm’s operations line up in terms of the emphasis
on manufacturing or services. Hence, we can say that operations management includes both
manufacturing and services activities, and that these need to be integrated into a combined,
holistic manner. These sectors may process different things. This may have implications for the
specific implementation of strategy, but not for operations management principles or issues per
se.
In a nutshell, the key point that the learner should bear in mind is that though it is customary to
make a distinction between manufacturing and services, it should be noted that this is not always
helpful when trying to manage operations. What provides better insight is in viewing
manufacturing and service operations as collaborative activities in providing goods and services
to customers. A more relevant distinction is to differentiate between those operations that process
materials and those that process customers. It needs to be remembered that materials do not think
or act for themselves, whereas customers can and do. Service companies that forget this and start
to treat their customers as if they were materials will not survive in the long term, even if they
provide excellent value.

1.5 Historical Background and the Evolution of Operations Management


Operations management has made many contributions to the development of modern
management theory, beginning with scientific management and industrial engineering early in
the twentieth century, through to the influence of Japanese management at the end of the century.

8
Operations, broadly defined, may be argued to have existed as long ago as the Pyramids and
other great works projects, but the academic study of operations management only took off after
World War II.

For over two centuries, operations management has been recognized as an important factor in
economic development of a country. Operations management has passed through a series of
names like: manufacturing management, production management, and operations management.
All of these describe the same general discipline.

The traditional view of manufacturing management began in the 18th century when Adam Smith
recognized the economic benefits of specialization of labor. He recommended breaking jobs
down into subtasks and reassigning workers to specialized tasks in which they become highly
skilled and efficient.

In the early 20th century, Fredrick W. Taylor implemented Smith’s theories and crusaded for
scientific management in the manufacturing sectors of his day. From then until about 1930, the
traditional view prevailed, and many techniques we still use today were developed. A brief
sketch of these and other contributions to manufacturing management is given in Table 1.2.

Table 1.2. Historical summary of Operations Management

Date Contribution Contributor


(appr
ox)
1776 Specialization of labor in manufacturing Adam Smith
1799 Interchangeable parts, cost accounting Eli Whitney and others
1832 Division of labor by skill; assignment of jobs by skill; Charles Babbage
basics of time study
1900 Scientific management; time study and work study Frederick W. Taylor
developed; dividing planning and doing of work
1900 Motion study of jobs Frank B. Gilbreth
1901 Scheduling techniques for employees, machines, jobs Henry L. Gantt
in manufacturing
1915 Economic lot sizes for inventory control F. W. Harris
1927 Human relations; the Hawthorne studies Elton Mayo
1931 Statistical inference applied to product quality; quality Walter A. Shewhart
control charts
1935 Statistical sampling applied to quality control; H. F. Dodge and plans H. G.
inspection sampling Romig
1940 Operations research applications in World War II P. M. S. Blacket and others
1946 Digital computer John Mauchly and J. P.
Eckert
1947 Linear programming George B. Dantzig, William
Orchard Hays, and others
1950 Mathematical programming, nonlinear and stochastic A. Charnes, W. W. Cooper,

9
processes H. Raiffa, and others
1951 Commercial digital computer; large-scale Sperry Univac
computations available
1960 Organizational behavior; continued study of people at L. Cummings, L. Porter, and
work others
1970 Integrating operations into overall strategy and policy W. Skinner J. Orlicky and O.
Computer applications to manufacturing, scheduling, Wright
and control, material requirements planning (MRP)
1980 Quality and productivity applications from Japan; W. E. Deming and J. Juran
robotics, computer-aided design and manufacturing
(CAD/CAM)

Production management became the more widely accepted term from 1930s through the 1950s.
As Frederick Taylor’s work became more widely known, managers developed techniques that
focused on economic efficiency in manufacturing. Workers were ‘put under a microscope’ and
studied in great detail to eliminate wasteful efforts and achieve greater efficiency. At this same
time, however, management also began discovering that workers have multiple needs, not just
economic needs.

Psychologists, sociologists, and other social scientists began to study people and human behavior
in the work environment. In addition, economists, mathematicians, and computer scientists
contributed newer, more sophisticated analytical approaches.

With the 1970’s emerges two distinct changes. The most obvious of these, reflected in the new
name-operations management-was a shift in the service and manufacturing sectors of the
economy. As the service sector became more prominent, the change from ‘production’ to
‘operations’ emphasized the broadening of our field to service organizations. The second, more
subtle change was the beginning of an emphasis on synthesis, rather than just analysis, in
management practices. These days, organizational goals are more focused to meet consumers’
needs throughout the world. Quality concepts like TQM, ISO-9000, Quality function
deployment, etc. are all examples of this attitude of management.

It is vital to note at this juncture that it is also customary that some describe the evolution of
operations management as the Craft Era, Mass Production Era and the Modern era. The scheme
is based on the major changes observed in the changes of operations/ production systems. It
shows that over time, operations have evolved from craft production, to mass production, to the
systems in use today. Though most concepts do overlap with the previously discussed historical
background issues, hereunder we have presented pertinent issues in the so called modern era of
operations management.
The modern era
The third era (the current and, for the foreseeable future at least, the likely scenario) is more
difficult to name and has been called various things. The terms used to describe the current era
include:
 Mass customization– reflecting the need for volume combined with recognition of
customers’ (or consumers’) wishes.

10
 Flexible specialization – related to the manufacturing strategy of firms (especially small
firms) to focus on parts of the value-adding process and collaborate within networks to
produce whole products.
 Lean production – developed from the massively successful Toyota Production System,
focusing on the removal of all forms of waste from a system (some of them difficult to
see).
 Agile – emphasizing the need for an organization to be able to switch frequently from one
market-driven objective to another.
 Strategic – in which the need for the operations to be framed in a strategy is brought to
the fore.

Whatever it is called, the paradigm for the current era addresses the need to combine high
volume and variety together with high levels of quality as the norm, and rapid, ongoing
innovation in many markets. It is, as mass production was a hundred years ago, an innovation
that makes the system it replaces largely redundant.

As each era appeared, however, it did not entirely replace the former era. As we have seen, a few
pockets of craft manufacture still exist. Mass production is still apparent in chemical plants and
refineries and other high-volume/low-variety environments. However, many are changing
fundamentally as existing economies of scale are questioned: thus, steel manufacture faces
variety requirements and has to develop ‘minimills’ to lower economic batch sizes; the same is
true for brewers and pharmaceutical companies.

1.6 Related Issues of Operations Management

1. Productivity
Productivity is a composite of people and operations variables. To improve productivity,
managers must focus on both. The late W. Edwards Deming, a renowned quality expert, believed
that managers, not workers, were the primary source of increased productivity.
Some of his suggestions for managers included planning for the long-term future, never being
complacent about product quality, understanding whether problems were confined to particular
parts of the production process or stemmed from the overall process itself, training workers for
the job they’re being asked to perform, raising the quality of line supervisors, requiring workers
to do quality work, and so forth. It has been appreciated that there is strong interplay between
people and operations. High productivity can’t come solely from good “people management.”
The truly effective organization will maximize productivity by successfully integrating people
into the overall operations system.

As a manager, you are involved in planning, organizing, leading, and controlling. But how do
you tell if and when you are reaching the goals that you have set? You can measure your success
by assessing your productivity, the measure of output per hour worked which is labor
productivity, the most common productivity measure. Or, from another perspective, it is the
amount of output produced compared to the amount of inputs used. Productivity can be
described numerically as the ratio of inputs used to outputs produced. The higher the ratio, the
more efficient is your operating system. You should constantly look for ways to increase outputs

11
while keeping inputs constant or to keep outputs constant while decreasing inputs, both of which
will increase your productivity. In a market that becomes more competitive daily, increasing
productivity is key to profitability. The company that can do more with less is the company that
succeeds. Increasing the productivity of labor revolves around investing in capital, improving
technology, and making sure workers have appropriate and better skill levels.

It is a very comprehensive concept, both in its aim and also in its operational content. It is a
matter of common knowledge that higher productivity leads to a reduction in cost of production,
reduces the sales price of an item, expands markets, and enables the goods to compete effectively
in the world market. It yields more wages to the workers, shorter working hours and greater
leisure time for the employees. In fact the strength of a country, prosperity of its economy,
standard of living of the people and the wealth of the nation are very largely determined by the
extent and measure of its production and productivity. By enabling an increase in the output of
goods or services for existing resources, productivity decreases the cost of goods per unit, and
makes it possible to sell them at lower prices, thus benefiting the consumers while at the same
time leaving a margin for increase in the wages of the workers.

Productivity can be defined in many ways. Some of them are as follows:


a. Productivity is nothing but the reduction in wastage of resources such as labor, machines,
materials, power, space, time, capital, etc.
b. Productivity can also be defined as human endeavor (effort) to produce more and more with
less and less inputs of resources so that the products can be purchased by a large number of
people at affordable price.
c. Productivity implies development of an attitude of mind and a constant urge to find better,
cheaper, easier, quicker, and safer means of doing a job, manufacturing a product and
providing service.
d. Productivity aims at the maximum utilization of resources for yielding as many goods and
services as possible, of the kinds most wanted by consumers at lowest possible cost.
e. Productivity processes more efficient works involving less fatigue to workers due to
improvements in the layout of plant and work, better working conditions and simplification
of work. In a wider sense productivity may be taken to constitute the ratio of all available
goods and services to the potential resources of the group.

A variety of factors can go into lowering the productivity of your business, literally producing
less output or using more resources to produce the same amount of outputs:
• Older technology, tools, or out-of-date processes can decrease the amount of output
produced, increasing the costs of production. This in turn decreases your profitability.
• Lack of key materials or suppliers for the materials can stop production if the needed
resources are not available.
• Lack of employees with the appropriate skills or employees who are not proficient in
those skills can slow or even stop production in some instances.
• Not enough dollars to provide the needed resources. Money is also a necessary input
and too little can make all the other resources also unavailable or not available in the
quantities needed.

12
Productivity can be measured for your entire business or for a specific portion of it. Because
many inputs go into your business, the input you choose determines the productivity you are
measuring. The goal becomes to produce the optimal amount of output and to minimize costs in
the process. Total productivity can be determined by dividing total outputs by total inputs:

The ratio between output and one of these factors of input is usually known as productivity of the
factor considered. Productivity may also be considered as a measure of performance of the
economy as a whole. Mathematically,

An example to illustrate the difference between production and productivity follows: For
instance, 50 persons employed in an industry may be producing the same volume of goods over
the same period as 75 persons working in another similar industry. Productions of these two
industries are equal, but productivity of the former is higher than that of the latter.
In order to assure that productivity measurement captures what the company is trying to do with
respect to such vague issues as customer satisfaction and quality, some firms redefined
productivity as

As it has been said so many times productivity measurement is the ratio of organizational outputs
to organizational inputs. Thus productivity ratios can be
 Partial productivity measurement
 Multi-factor productivity measurement
 Total productivity measurement

a. Partial Productivity Measurement


Partial productivity measurement is used when the firm is interested in the productivity of a
selected input factor. It is the ratio of output values to one class of input.

b. Multi-factor Productivity Measurement


This productivity measurement technique is used when the firm is interested to know the
productivity of a group of input factors but not all input factors.

13
c. Total (Composite) Productivity Measures
A firm deals about composite productivity when it is interested to know about the overall
productivity of all input factors. This technique will give us the productivity of an entire
organization or even a nation.

The above measurement techniques can be grouped into two popular productivity measurement
approaches the first uses a group-generated model and is called normative productivity
measurement methodology. The second is less participative in that one model can be modified to
fit any organization scheme. It is called multi-factor productivity measurement model.

If your company sold $500,000 worth of products and used $100,000 in resources, your total
productivity ratio would be 5. But you may not always want to consider all of your inputs every
time. For example, because materials may account for as much as 90 percent of operating costs
in businesses that use little labor, materials productivity would be an important ratio to track.

Materials productivity = Outputs/Materials

If 4,000 pounds of sugar are used to produce 1,000 pounds of candy, the materials productivity is
1,000 divided by 4,000, or 0.25, which becomes a base figure for comparing increases or
decreases in productivity. Stated simply, you can increase the productivity of your business by
increasing outputs, decreasing inputs, or a combination of both. Most productivity improvements
come from changing processes used by your business, from your employees accomplishing
more, or from technology that speeds production.

2. Effectiveness
It is the degree of accomplishment of the objectives that is: How well a set of result is
accomplished? How well are the resources utilized? Effectiveness is obtaining the desired
results. It may reflect output quantities, perceived quality or both. Effectiveness can also be
defined as doing the right things.

3. Efficiency
This occurs when a certain output is obtained with a minimum of inputs. The desired output can
be increased by minimizing the down times as much as possible (down times are coffee breaks,
machine failures, waiting time, etc). But as we decrease down times the frequency of occurrence
of defective products will increase due to fatigue. The production system might efficiently
produce defective (ineffective) products. Efficiency can be defined as doing things right.
Operational efficiency refers to a ratio of outputs to inputs (like land, capital, labor, etc.)

14
Illustration: Management of a hotel is concerned with labor efficiency, especially when labor is
costly. To determine how efficient labor is in a given situation, management sets an individual
standard; a goal reflecting an average worker’s output per unit of time under normal working
conditions. Say that the standard in a cafeteria is the preparation of 200 salads per hour. If a labor
input produces 150 salads per hour, how efficient is the salad operation?

So, compared with the standard, this operation is 75% efficient in the preparation of salads.

1.7 The Scope Operations Management Decisions


Operations managers have some responsibility for all the activities in the organization which
contribute to the effective production of products and services. And while the exact nature of the
operations function’s responsibilities will, to some extent, depend on the way the organization
has chosen to define the boundaries of the function, there are some general classes of activities
that apply to all types of operation.

1. Strategic (long-term) Decisions

A decision is said to be strategic if it has a long-term impact; influences a larger part of the
system and is difficult to undo once implemented. These decisions in the context of production
systems are essentially those which deal with the Design and Planning (long-range or
intermediate range) aspects. Some examples of these decisions are:
a) Product selection and design: What products or services are to be offered constitute a
crucial decision? A wrong choice of product or poor design of the product may render our
systems operations ineffective and non-competitive. A careful evaluation of product/service
alternatives on the multiple objective bases can help in choosing right product(s).
Techniques of value engineering can be useful in creating a good design which does not
incorporate unnecessary features and can attain the intended functions at lowest costs.

b) Process selection and planning: Choosing optimal (best under the circumstances and for
the purpose) process of conversion system is an important decision concerning choice of
technology, equipment and machines. Process planning pertains to careful detailing of
processes of resource conversion required and their sequence. Included in such decisions are
the aspects of mechanization and automation.

c) Facilities location: It concerns decision regarding location of production system or its


facilities. A poor location may spell operating disadvantages for all times to come.
Therefore it is important to choose a right location which will minimize total ‘delivered-to-
customer’ cost (production and distribution cost) by virtue of location. Evidently such a
decision calls for evaluation of location alternatives against multiplicity of relevant factors
considering their relative importance for the system under consideration.

d) Facilities layout and materials handling: Facilities layout planning problems are
concerned with relative location of one department (activity centre) with another in order to

15
facilitate material flow, reduce handling cost, delays and congestion, provide good house-
keeping, facilitate coordination etc. A detailed layout plan gives a blueprint of how actual
factors of production are to be integrated. The types of layout will depend upon tile nature
of production systems. Most of the concepts used in layout planning models are based on
the importance of locating departments close to each other in order to minimize the cost of
Materials handling.

e) Capacity planning: It Concerns the acquisition of productive resources. Capacity may be


considered as the maximum available amount of output of the conversion process over some
specified time span. Capacity planning may be over short-term as well as on a long-term
basis. In service systems the concept of capacity and hence capacity planning is a bit more
difficult problem, Long-term capacity planning includes expansion and contraction of major
facilities required in conversion process, determination of economics of multiple shift
operation etc. Break even analysis is a valuable tool for capacity planning.

2. Operational Decisions
Operational level decisions deal with short-term planning and control problem. Some of these
are:
a) Production planning, scheduling and control: In operation scheduling we wish to
determine the optimal schedule and sequence of operations, economic batch quantity,
machine assignment and dispatching priorities for sequencing Production control is a
complementary activity to production planning and involves follow up of the production
plans.
b) Inventory planning and control: This problem deals with determination of optimal
inventory levels at raw material, in process and finished goods stages of a production.
Particularly, how much to order, and when to order are two typical decisions involving
inventories. Materials requirement planning (MRP) is an important upcoming concept in
such a situation.
c) Quality management: Quality is an important aspect of production systems and we must
ensure that whatever product or service is produced satisfies the quality requirements of
the customer at the lowest cost. This may be termed quality management. Setting quality
standards and controlling the quality of products and processes are some aspects of quality
assurance. Value engineering considerations are related issues in quality management.
d) Work and job design: These are problems concerning design of work methods, systems
and procedures, methods improvement, elimination of avoidable delays, work measurement,
work place layout, ergonomic considerations in job design, work and job restructuring, job
enlargement etc. Design and operation of wage incentives is an associated problem area.
e) Maintenance and replacement: These include decisions regarding optimal policies for
preventive, scheduled and breakdown maintenance of machines, repair policies and
replacement decisions. Maintenance manpower scheduling, sequencing of repair jobs,
preventive replacement and condition monitoring of equipment and machines are some
other important decisions involving equipment maintenance. Maintenance is an extremely
crucial problem area, particularly for a developing economy such as ours, because it is only
through very effective maintenance management that we can improve capacity utilization
and keep our plant and machinery productive and available for use.

f) Cost reduction and control: For an ongoing production system the role of cost reduction
is prominent because, through effective control of the total cost of production, we can offer
more competitive products and services. Cost avoidance and cost reduction can be achieved
through various productivity techniques. Value engineering is a prominent technique for
cost reduction. Concepts like standard costing and budgetary control help in monitoring
and controlling the costs of labour, materials, etc., and suggest appropriate follow-up
action to keep these costs within limits.
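The inventory decisions in (b) above, how much to order and when, are classically answered by the economic order quantity (EOQ) model, which balances the cost of placing orders against the cost of holding stock. A hedged sketch, with purely illustrative numbers and the model's usual constant-demand assumptions:

```python
import math

def economic_order_quantity(annual_demand, order_cost, holding_cost_per_unit):
    """Classic EOQ formula: sqrt(2 * D * S / H).

    D = annual demand (units), S = cost per order placed,
    H = holding cost per unit per year. Minimizes total ordering
    plus holding cost under constant, known demand.
    """
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

# Illustrative: 10,000 units/year demand, 50 per order, holding cost 4/unit/year.
q = economic_order_quantity(10_000, 50, 4)
print(round(q))  # order about 500 units at a time
```

The "when to order" half of the decision is then handled by a reorder point (demand during the replenishment lead time), which MRP systems compute from the production schedule rather than from historical averages.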

Operations managers implement these decisions by identifying key tasks and the staffing needed to
achieve them. However, the implementation of decisions is influenced by a variety of issues,
including a product’s proportion of goods and services. Few products are either all goods or all
services. While these decisions remain the same for goods and services, their relative importance and
method of implementation depend upon this ratio of goods and services.
Chapter Summary
Operations management is important because when an operation works well, goods and services
are delivered to customers when they want them, with something extra that delights the customer
and creates customer loyalty. The challenge for operations managers is to make this happen. If
an organization has outstanding financials, human resources and market plans, and utilizes the
very latest IT system but can’t deliver products and services, then it will not succeed. Operations
management makes this happen. The study of operations management shows us how to
accomplish and improve the operations task of the organization. Operations are changing as fast
as organizations themselves change – every day, products and processes are being improved.
Operations management contributes to organizational success or failure. Every organization has
an operations function, which is what the company does. Within an organization, operations
produce the organization’s goods and services for internal and organizational customers or
clients. Operations management focuses on the processes by which work gets done. The operations
function of an organization can thus be described as a system with inputs (resources), a
transformation process, outputs (goods and services), and feedback and control. Operations
can be categorized using the volume–variety matrix and other characteristics. Operations
managers are responsible for managing human resources, assets and costs. The operations
process itself can be described using the transformation model, which applies to both services
and manufacturing. In the wider perspective, operations managers bring together resources,
knowledge and market opportunities.

Review Questions
Multiple Choice Questions
1. Which one of the following statements is correct?
a. Productivity is the total value of all inputs to the transformation process divided by the
total value of the outputs produced.
b. Measuring the impact of a capital acquisition on productivity is an example of multi-
factor productivity.
c. Shewhart’s contributions to operations management came during the Scientific
Management Era.
d. "How much inventory of this item should we have?" is within the critical decision area
of managing quality.
e. None of the above.

2. Operations management is applicable
a. mostly to the service sector
b. to services exclusively
c. mostly to the manufacturing sector
d. to all firms, whether manufacturing or service
e. to the manufacturing sector exclusively
3. Which of the following would not be an operations function in a fast-food restaurant?
a. advertising and promotion
b. designing the layout of the facility
c. maintaining equipment
d. making hamburgers and fries
e. purchasing ingredients
4. An operations manager is not likely to be involved in
a. the design of goods and services to satisfy customers' wants and needs
b. the quality of goods and services to satisfy customers' wants and needs
c. the identification of customers' wants and needs
d. work scheduling to meet the due dates promised to customers
e. maintenance schedules
5. Which of the following statements is true?
a. Almost all services and almost all goods are a mixture of a service and a tangible
product.
b. A pure good has no tangible product component.
c. A pure service has only a tangible product component.
d. There is no such thing as a pure good.
e. None of the above is a true statement.
Discussion Questions

1. Discuss the operations system in an organization.
2. Explain the operations functions in an organization.
3. Explain manufacturing operations versus service operations.
4. Explain the historical evolution of production and operations management.
5. Briefly discuss the strategic decisions of operations management.

CHAPTER II

STRATEGIC OPERATIONS MANAGEMENT

In this chapter we will expand upon a number of key issues faced by operations managers in both
manufacturing and service environments that we discussed in Chapter 1. We will explore the
strategic contribution of operations within the business. This chapter, therefore, explores
strategy, and in particular its relevance to operations and operations management. Strategy and
operations strategy are first defined, and then the developments of operations strategy are
explored.

Learning Objectives
The purpose of this chapter is to enable learners to:

 Define the strategic role of operations and operations management
 Understand why there is a need for all organizations to develop operations strategies;
 Appreciate the importance of strategy to operations and operations management
 Describe the major perspectives of strategies and strategy processes
 Provide indications of the process and content of strategy;
 Appreciate why some organizations struggle with devising and implementing operations
strategies.
2.1 What is operations strategy?
Here we develop the theme of how operations management must be seen in terms of its strategic
importance and how strategies have to be in place if the organization wants to be able to compete
in the modern business world.
Surprisingly, ‘strategy’ is not particularly easy to define. Linguistically the word derives from
the Greek word ‘strategos’ meaning ‘leading an army’. And although there is no direct historical
link between Greek military practice and modern ideas of strategy, the military metaphor is
powerful.

Strategy is clearly a complex issue, thus it can be described in numerous ways. Often it is
described as how the mission of a company is accomplished. It is the total pattern of decisions
and actions that position the organization in its environment and that are intended to achieve its
long-term goals. Strategy is important as it unites an organization, provides consistency in
decisions, and keeps the organization moving in the right direction.

Narrowing the scope of our discussion, we will address the concept of strategic operations
management as follows. Strategic operations management concerns the pattern of strategic
decisions and actions which set the role, objectives and activities of the operation. The term
‘operations strategy’ sounds at first like a contradiction. How can ‘operations’, a subject that is
generally concerned with the day-to-day creation and delivery of goods and services, be
strategic? ‘Strategy’ is usually regarded as the opposite of those day-to-day routine activities. But
‘operations’ is not the same as ‘operational’. ‘Operations’ are the resources that create products
and services. ‘Operational’ is the opposite of strategic, meaning day-to-day and detailed. So, one
can examine both the operational and the strategic aspects of operations. Furthermore, a common
source of confusion is that managers often find it hard to distinguish between operations strategy
and improvement approaches, such as JIT, that might be included within it.

It is also conventional to distinguish between the content and the process of operations
strategy. The content of operations strategy is the specific decisions and actions which set the
operations role, objectives and activities. The process of operations strategy is the method that is
used to make the specific ‘content’ decisions.

In line with the former perspective of strategic operations management, these specific decisions
and actions refer particularly to those which are widespread in their effect on the organization to
which the strategy refers, define the position of the organization relative to its environment, and
move the organization closer to its long-term goals. But ‘strategy’ is more than a single decision;

it is the total pattern of the decisions and actions that influence the long-term direction of the
business. Thinking about strategy in this way helps us to discuss an organization’s strategy even
when it has not been explicitly stated. Observing the total pattern of decisions gives an indication
of the actual strategic behavior.

2.2 The Strategic Role of Operations


To begin with, the very idea that operations should be seen as a ‘strategic’ factor is still a
problem for some firms, whose overall strategy may be governed by a few people at the top of
the hierarchy of the firm who might know very little about production and operations
management. As a result of this, the rationale behind, and the measurement of the success of,
business decisions may be driven almost entirely by short-term financial criteria.

Nevertheless, the potential contribution of the operations function to the strategic success of an
organization has become a clear issue today. Put shortly, operations management can either
‘make or break’ any business. This is not only because the operations function is large and, in
most businesses, represents the bulk of a firm’s assets, but also because it gives the business the
ability to compete, by responding to customers and by developing the capabilities that will keep
it ahead of its competitors in the future. Besides,
the aforementioned indications of what strategy is about are all linked to operations management
in various ways. Hence, businesses expect their operations strategy to improve operations
performance over time. In doing this they should be progressing from a state where they are
contributing very little to the competitive success of the business through to the point where they
are directly responsible for its competitive success. This means that they should be able to, in
turn, master the skills to first implement, then support, and then drive operations strategy.

Implementing business strategy. The most basic role of operations is to implement strategy.
Most companies will have some kind of strategy but it is the operation that puts it into practice.
You cannot, after all, touch a strategy; you cannot even see it; all you can see is how the
operation behaves in practice. For example, if an insurance company has a strategy of moving to
an entirely online service, its operations function will have to supervise the design of all the
processes which allow customers to access online information, issue quotations, request further
information, check credit details, send out documentation and so on. Without effective
implementation even the most original and brilliant strategy will be rendered totally ineffective.

Supporting business strategy. Support strategy goes beyond simply implementing strategy. It
means developing the capabilities which allow the organization to improve and refine its
strategic goals. For example, a mobile phone manufacturer wants to be the first in the market
with new product innovations so its operations need to be capable of coping with constant
innovation. It must develop processes flexible enough to make novel components, organize its
staff to understand the new technologies, develop relationships with its suppliers which help
them respond quickly when supplying new parts, and so on. The better the operation is at doing
these things, the more support it is giving to the company’s strategy.

Driving business strategy. The third, and most difficult, role of operations is to drive strategy
by giving it a unique and long-term advantage. For example, a specialist food service company
supplies restaurants with frozen fish and fish products. Over the years it has built up close
relationships with its customers (chefs) as well as its suppliers around the world (fishing

companies and fish farms). In addition it has its own small factory which develops and produces
a continual stream of exciting new products. The company has a unique position in the industry
because its exceptional customer relationships, supplier relationship and new product
development are extremely difficult for competitors to imitate. In fact, the whole company’s
success is based largely on these unique operations capabilities. The operation drives the
company’s strategy.

The four stages of operations contribution


The ability of any operation to play these roles within the organization can be judged by
considering the organizational aims or aspirations of the operations function. Professors
Hayes and Wheelwright of Harvard University developed a four-stage model which can be
used to evaluate the role and contribution of the operations function. The model traces the
progression of the operations function from what is the largely negative role of stage 1
operations to its becoming the central element of competitive strategy in excellent stage 4
operations.

Stage 1: Internal neutrality. This is the very poorest level of contribution by the operations
function. It is holding the company back from competing effectively. It is inward-looking and, at
best, reactive with very little positive to contribute towards competitive success. Paradoxically,
its goal is ‘to be ignored’ (or ‘internally neutral’). At least then it isn’t holding the company back
in any way. It attempts to improve by ‘avoiding making mistakes’.

Stage 2: External neutrality. The first step of breaking out of stage 1 is for the operations
function to begin comparing itself with similar companies or organizations in the outside market
(being ‘externally neutral’). This may not immediately take it to the ‘first division’ of companies
in the market, but at least it is measuring itself against its competitors’ performance and trying to
implement ‘best practice’.

Stage 3: Internally supportive. Stage 3 operations are amongst the best in their market. Yet,
stage 3 operations still aspire to be clearly and unambiguously the very best in the market. They
achieve this by gaining a clear view of the company’s competitive or strategic goals and
supporting it by developing appropriate operations resources. The operation is trying to be
‘internally supportive’ by providing a credible operations strategy.

Stage 4: Externally supportive. Yet Hayes and Wheelwright suggest a further stage - stage 4,
where the company views the operations function as providing the foundation for its competitive
success. Operations looks to the long term. It forecasts likely changes in markets and supply, and
it develops the operations-based capabilities which will be required to compete in future market
conditions. Stage 4 operations are innovative, creative and proactive and are driving the
company’s strategy by being ‘one step ahead’ of competitors – what Hayes and Wheelwright call
‘being externally supportive’.

Hayes and Wheelwright mapped how manufacturing’s role linked with business strategy, from
being passive and reactive (stage 1) to a full, pivotal, involvement in the planning stages of
business strategy (stage 4). The model is important as a mapping exercise so that firms can
realize where manufacturing/operations lines up within the business strategy process. The level

of operations contribution also indicates what operations strategy should contain: operations
strategy consists of a sequence of decisions that, over time, enables a business unit to achieve a
desired manufacturing structure, infrastructure and set of specific capabilities.

2.3 Performance Objectives of Operations [Strategic]


Operations performance is vital to any organization: operations management can either
‘make or break’ any business. This is not only because the operations function is large and, in
most businesses, represents the bulk of a firm’s assets, but also because it gives the business the
ability to compete, by responding to customers and by developing the capabilities that will keep
it ahead of its competitors in the future.
This shows that the performance of operations should be closely monitored, and the aspects of
performance used in the judgment are called performance objectives. To understand the
strategic contribution of the operations function, it is important to understand how we can assess
its performance.

Imagine that you are an operations manager in any kind of business – a hospital administrator, for
example, or a production manager at a car plant. What kind of things are you likely to want to do
in order to satisfy customers and contribute to competitiveness?
 You would want to do things right; that is, you would not want to make mistakes, and
would want to satisfy your customers by providing error-free goods and services which
are ‘fit for their purpose’. This is giving a quality advantage.
 You would want to do things fast, minimizing the time between a customer asking for
goods or services and the customer receiving them in full, thus increasing the availability
of your goods and services and giving a speed advantage.
 You would want to do things on time, so as to keep the delivery promises you have made.
If the operation can do this, it is giving a dependability advantage.
 You would want to be able to change what you do; that is, being able to vary or adapt the
operation’s activities to cope with unexpected circumstances or to give customers
individual treatment. Being able to change far enough and fast enough to meet customer
requirements gives a flexibility advantage.
 You would want to do things cheaply; that is, produce goods and services at a cost which
enables them to be priced appropriately for the market while still allowing for a return to
the organization; or, in a not-for-profit organization, give good value to the taxpayers or
whoever is funding the operation. When the organization is managing to do this, it is
giving a cost advantage.

These are the five basic performance objectives and they apply to all types of operations.

1. The quality objective

Quality is consistent conformance to customers’ expectations – in other words, ‘doing things
right’ – but the things which the operation needs to do right will vary according to the kind of
operation. All operations regard quality as a particularly important objective. In some ways
quality is the most visible part of what an operation does. Furthermore, it is something that a
customer finds relatively easy to judge about the operation. Is the product or service as it is

supposed to be? Is it right or is it wrong? There is something fundamental about quality. Because
of this, it is clearly a major influence on customer satisfaction or dissatisfaction. A customer
perception of high-quality products and services means customer satisfaction and therefore the
likelihood that the customer will return.

When quality means consistently producing services and products to specification it not only
leads to external customer satisfaction, but makes life easier inside the operation as well.
 Quality reduces costs. The fewer mistakes made by each process in the operation, the less time
will be needed to correct the mistakes and the less confusion and irritation will be spread. For
example, if a supermarket’s regional warehouse sends the wrong goods to the supermarket, it
will mean staff time, and therefore cost, being used to sort out the problem.
 Quality increases dependability. Increased costs are not the only consequence of poor quality.
At the supermarket it could also mean that goods run out on the supermarket shelves with a
resulting loss of revenue to the operation and irritation to the external customers. Sorting the
problem out could also distract the supermarket management from giving attention to the other
parts of the supermarket operation. This in turn could result in further mistakes being made.
So, quality has both an external impact which influences customer satisfaction and an internal
impact which leads to stable and efficient processes.

2. The speed objective


Speed means the elapsed time between customers requesting products or services and receiving
them. The main benefit to the operation’s (external) customers of speedy delivery of goods and
services is that the faster they can have the product or service, the more likely they are to buy it,
or the more they will pay for it, or the greater the benefit they receive.

Besides, speed is important inside the operation. Fast response to external customers is greatly
helped by speedy decision-making and speedy movement of materials and information inside the
operation. And there are other benefits.

Speed reduces inventories. Take, for example, the automobile plant. Steel for the vehicle’s door
panels is delivered to the press shop, pressed into shape, transported to the painting area, coated
for colour and protection, and moved to the assembly line where it is fitted to the automobile.
This is a simple three-stage process, but in practice material does not flow smoothly from one
stage to the next. First, the steel is delivered as part of a far larger batch containing enough steel
to make possibly several hundred products. Eventually it is taken to the press area, pressed into
shape, and again waits to be transported to the paint area. It then waits to be painted, only to wait
once more until it is transported to the assembly line. Yet again, it waits by the trackside until it
is eventually fitted to the automobile. The material’s journey time is far longer than the time
needed to make and fit the product. It actually spends most of its time waiting as stocks
(inventories) of parts and products. The longer items take to move through a process, the more
time they will be waiting and the higher inventory will be.
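The relationship stated here, that longer throughput times mean higher inventories, is commonly formalized as Little's Law (not named in the text): average work-in-process equals throughput rate multiplied by average flow time. A sketch with assumed figures for the door-panel example:

```python
def work_in_process(throughput_rate_per_day, flow_time_days):
    """Little's Law: average inventory (WIP) = throughput rate x flow time."""
    return throughput_rate_per_day * flow_time_days

# Assumed figures: the plant fits 400 door panels per day.
# Compare a six-week (30 working-day) journey with a one-week (5-day) journey:
print(work_in_process(400, 30))  # 12000 panels tied up in the process
print(work_in_process(400, 5))   # 2000 panels - speed directly reduces inventory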

Speed reduces risks. Forecasting tomorrow’s events is far less of a risk than forecasting next
year’s. The further ahead companies forecast, the more likely they are to get it wrong. The faster
the throughput time of a process the later forecasting can be left. Consider the automobile plant
again. If the total throughput time for the door panel is six weeks, door panels are being

processed through their first operation six weeks before they reach their final destination. The
quantity of door panels being processed will be determined by the forecasts for demand six
weeks ahead. If instead of six weeks, they take only one week to move through the plant, the
door panels being processed through their first stage are intended to meet demand only one week
ahead. Under these circumstances it is far more likely that the number and type of door panels
being processed are the number and type which eventually will be needed.

3. The dependability objective


Dependability means doing things in time for customers to receive their goods or services
exactly when they are needed, or at least when they were promised. Customers might only judge
the dependability of an operation after the product or service has been delivered. Initially this
may not affect the likelihood that customers will select the service – they have already
‘consumed’ it. Over time, however, dependability can override all other criteria. No matter how
cheap or fast a bus service is, if the service is always late (or unpredictably early) or the buses are
always full, then potential passengers will be better off calling a taxi.

Inside the operation internal customers will judge each other’s performance partly by how
reliable the other processes are in delivering material or information on time. Operations where
internal dependability is high are more effective than those which are not, for a number of
reasons.

Dependability saves time. Take, for example, the maintenance and repair centre for the city bus
company. If the centre runs out of some crucial spare parts, the manager of the centre will need
to spend time trying to arrange a special delivery of the required parts and the resources allocated
to service the buses will not be used as productively as they would have been without this
disruption. More seriously, the fleet will be short of buses until they can be repaired and the fleet
operations manager will have to spend time rescheduling services. So, entirely due to the one
failure of dependability of supply, a significant part of the operation’s time has been wasted
coping with the disruption.

Dependability saves money. Ineffective use of time will translate into extra cost. The spare parts
might cost more to be delivered at short notice and maintenance staff will expect to be paid even
when there is not a bus to work on. Nor will the fixed costs of the operation, such as heating and
rent, be reduced because the two buses are not being serviced. The rescheduling of buses will
probably mean that some routes have inappropriately sized buses and some services could have
to be cancelled. This will result in empty bus seats (if too large a bus has to be used) or a loss of
revenue (if potential passengers are not transported).

Dependability gives stability. The disruption caused to operations by a lack of dependability goes
beyond time and cost. It affects the ‘quality’ of the operation’s time. If everything in an operation
is always perfectly dependable, a level of trust will have built up between the different parts of
the operation. There will be no ‘surprises’ and everything will be predictable. Under such
circumstances, each part of the operation can concentrate on improving its own area of
responsibility without having its attention continually diverted by a lack of dependable service
from the other parts.

4. The flexibility objective
Flexibility means being able to change the operation in some way. This may mean changing
what the operation does, how it is doing it, or when it is doing it. Specifically, customers will
need the operation to change so that it can provide four types of requirement:
 product/service flexibility - the operation’s ability to introduce new or modified
products and services;
 mix flexibility -the operation’s ability to produce a wide range or mix of products and
services;
 volume flexibility -the operation’s ability to change its level of output or activity to
produce different quantities or volumes of products and services over time;
 delivery flexibility- the operation’s ability to change the timing of the delivery of its
services or products.
Developing a flexible operation can also have advantages to the internal customers within the
operation.
Flexibility speeds up response. Fast service often depends on the operation being flexible. For
example, if the hospital has to cope with a sudden influx of patients from a road accident, it
clearly needs to deal with injuries quickly. Under such circumstances a flexible hospital which
can speedily transfer extra skilled staff and equipment to the Accident and Emergency
department will provide the fast service which the patients need.

Flexibility saves time. In many parts of the hospital, staff have to treat a wide variety of
complaints. Fractures, cuts or drug overdoses do not come in batches. Each patient is an
individual with individual needs. The hospital staff cannot take time to ‘get into the routine’ of
treating a particular complaint; they must have the flexibility to adapt quickly. They must also
have sufficiently flexible facilities and equipment so that time is not wasted waiting for
equipment to be brought to the patient. The time of the hospital’s resources is being saved
because they are flexible in ‘changing over’ from one task to the next.

Flexibility maintains dependability. Internal flexibility can also help to keep the operation on
schedule when unexpected events disrupt the operation’s plans. For example, if the sudden influx
of patients to the hospital requires emergency surgical procedures, routine operations will be
disrupted. This is likely to cause distress and considerable inconvenience. A flexible hospital
might be able to minimize the disruption by possibly having reserved operating theatres for such
an emergency, and being able to bring in medical staff quickly that are ‘on call’.

5. The cost objective


For companies which compete directly on price, cost will clearly be the major operations
objective. The lower the cost of producing their goods and services, the lower can be the price to
their customers. Even those companies which do not compete on price will be interested in
keeping costs low. Every euro or dollar removed from an operation’s cost base is a further euro
or dollar added to its profits. Not surprisingly, low cost is a universally attractive objective.
The ways in which operations management can influence cost will depend largely on where the
operation costs are incurred. The operation will spend its money on staff (the money spent on
employing people), facilities, technology and equipment (the money spent on buying, caring for,
operating and replacing the operation’s ‘hardware’) and materials (the money spent on the
‘bought-in’ materials consumed or transformed in the operation).

All operations have an interest in keeping their costs as low as is compatible with the levels of
quality, speed, dependability and flexibility that their customers require. The measure that is
most frequently used to indicate how successful an operation is at doing this is productivity. All
operations are increasingly concerned with cutting out waste, whether it is waste of materials,
waste of staff time, or waste through the under-utilization of facilities.
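Productivity, the measure mentioned above, is output divided by input. Single-factor productivity uses one input (for example, labour hours); multi-factor productivity divides output by the combined value of several inputs. A minimal illustration with invented figures:

```python
def single_factor_productivity(output_units, input_units):
    """Output per unit of a single input, e.g. units per labour hour."""
    return output_units / input_units

def multi_factor_productivity(output_value, *input_values):
    """Output value divided by the combined value of all inputs
    (labour + materials + energy, etc.), measured in the same currency."""
    return output_value / sum(input_values)

# Invented figures: 1,000 units produced from 250 labour hours -> 4 units/hour.
print(single_factor_productivity(1_000, 250))  # 4.0
# Output worth 50,000 from inputs of 20,000 labour, 15,000 materials, 5,000 energy.
print(multi_factor_productivity(50_000, 20_000, 15_000, 5_000))  # 1.25
```

An operation improves cost performance when either ratio rises over time, that is, when it produces more output from the same inputs, or the same output from fewer inputs.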

All of the performance objectives discussed in the previous sections affect cost. So, one
important way to improve cost performance is to improve the performance of the other
operations objectives.
 High-quality operations do not waste time or effort having to re-do things, nor are their
internal customers inconvenienced by flawed service.
 Fast operations reduce the level of in-process inventory between and within processes, as
well as reducing administrative overheads.
 Dependable operations do not spring any unwelcome surprises on their internal
customers. They can be relied on to deliver exactly as planned. This eliminates wasteful
disruption and allows the other micro-operations to operate efficiently.
 Flexible operations adapt to changing circumstances quickly and without disrupting the
rest of the operation. Flexible micro-operations can also change over between tasks
quickly and without wasting time and capacity.

Trade-offs between performance objectives


Improving the performance of one objective inside the operation could also improve other
performance objectives. Most notably, better quality, speed, dependability and flexibility can
improve cost performance. But externally this is not always the case. In fact there may be
compromises or trade-offs between performance objectives. In other words improving the
performance of one performance objective might only be achieved by sacrificing the
performance of another. So, for example, an operation might wish to improve its cost efficiencies
by reducing the variety of products or services that it offers to its customers. ‘There is no such
thing as a free lunch’ could be taken as a summary of this approach.

But there are two views of trade-offs. The first emphasizes repositioning performance objectives
by trading off improvements in some objectives for a reduction in performance in others. The
other emphasizes increasing the effectiveness of the operation by overcoming trade-offs so that
improvements in one or more aspects of performance can be achieved without any reduction in
the performance of others. Most businesses at some time or other will adopt both approaches.

Activity 2.
In the Ethiopian Textile industry, which objective should be prioritized? Why?

2.4 Approaches to Operations Strategy


Different authors have slightly different views and definitions of operations strategy. Between
them, four ‘perspectives’ emerge:
 Operation strategy is a top-down reflection of what the whole group or business wants
to do.
 Operations strategy is a bottom-up activity where operations improvements
cumulatively build strategy.
 Operations strategy involves translating market requirements into operations decisions.
 Operations strategy involves exploiting the capabilities of operations resources in
chosen markets.

None of these four perspectives alone gives the full picture of what operations strategy is. But
together they provide some idea of the pressures which go to form the content of operations
strategy. We will treat each in turn.

The ‘top-down’ and ‘bottom-up’ perspectives


Everyone in the organization is ultimately affected by strategy. Who should be involved in
forming strategy in the first place? Both the top-down approach to strategic planning and the
bottom-up approach have been advocated, and many organizations combine the two. It is easier
to suggest a particular strategy, however, then actually to realize it. To achieve a particular
business goal, strategies have to be in place throughout the entire organization.

Top-down Perspective
A large corporation will need a strategy to position itself in its global, economic, political and
social environment. This will consist of decisions about what types of business the group wants
to be in, what parts of the world it wants to operate in, how to allocate its cash between its
various businesses, and so on. Decisions such as these form the corporate strategy of the
corporation. Each business unit within the corporate group will also need to put together its own
business strategy which sets out its individual mission and objectives. This business strategy
guides the business in relation to its customers, markets and competitors, and also the strategy of
the corporate group of which it is a part. Similarly, within the business, functional strategies
need to consider what part each function should play in contributing to the strategic objectives of
the business. The operations, marketing, product/service development and other functions will all
need to consider how best they should organize themselves to support the business’s objectives.

So, one perspective on operations strategy is that it should take its place in this hierarchy of
strategies. Its main influence, therefore, will be whatever the business sees as its strategic
direction. The perspective states that strategy starts at the top (the corporate level); it then passes
DOWN to business levels (where business strategy is devised) and then passes DOWN again to
functional levels, including operations. Some publications say that there should, ideally, be
dialogue in the process- particularly where a resource-driven (not necessarily including
operations capabilities, by the way) strategy is being pursued. However, in the main, the top-
down model of strategy remains the dominant model. The person at the top of the organizational
hierarchy will create a strategy.

As the model involves very few people in strategy crafting, excellent communication
processes also need to be in place so that all employees own the change. The top-down view
results in a false division between corporate and functional strategies in general and operations
strategies in particular.

Bottom-up Perspective

The ‘top-down’ perspective provides an orthodox view of how functional strategies should be
put together. But in fact the relationship between the levels in the strategy hierarchy is more
complex than this. Although it is a convenient way of thinking about strategy, this hierarchical
model is not intended to represent the way strategies are always formulated. When any group is
reviewing its corporate strategy, it will also take into account the circumstances, experiences and
capabilities of the various businesses that form the group. Similarly, businesses, when reviewing
their strategies, will consult the individual functions within the business about their constraints
and capabilities. They may also incorporate the ideas which come from each function’s day-to-
day experience. Therefore an alternative view to the top-down perspective is that many strategic
ideas emerge over time from operational experience. Sometimes companies move in a particular
strategic direction because the ongoing experience of providing products and services to
customers at an operational level convinces them that it is the right thing to do.

There may be no high-level decisions examining alternative strategic options and choosing the
one which provides the best way forward. Instead, a general consensus emerges from the
operational level of the organization. The ‘high-level’ strategic decision-making, if it occurs at
all, may confirm the consensus and provide the resources to make it happen effectively.

Suppose a printing services company succeeds in its expansion plans. However, in doing so it
finds that having surplus capacity and a distributed network of factories allows it to offer an
exceptionally fast service to customers. It also finds that some customers are willing to pay
considerably higher prices for such a responsive service. Its experiences lead the company to set
up a separate division dedicated to providing fast, high margin printing services to those
customers willing to pay. The strategic objectives of this new division are not concerned with
high-volume growth but with high profitability.

This idea of strategy being shaped by operational level experience over time is sometimes called
the concept of emergent strategies. Strategy is gradually shaped over time and based on real-life
experience rather than theoretical positioning. Indeed, strategies are often formed in a relatively
unstructured and fragmented manner to reflect the fact that the future is at least partially
unknown and unpredictable.

This view of operations strategy is perhaps more descriptive of how things really happen, but at
first glance it seems less useful in providing a guide for specific decision-making. Yet while
emergent strategies are less easy to categorize, the principle governing a bottom-up perspective
is clear: shape the operation’s objectives and actions, at least partly, by the knowledge it gains
from its day-to-day activities. The key virtues required for shaping strategy from the bottom up
are an ability to learn from experience and a philosophy of continual and incremental
improvement.

The market requirements and operations resources perspectives


Market-requirements Based Strategies
One of the obvious objectives for any organization is to satisfy the requirements of its markets.
No operation that continually fails to serve its markets adequately is likely to survive in the long
term. And although understanding markets is usually thought of as the domain of the marketing
function, it is also of importance to operations management. Without an understanding of what
markets require, it is impossible to ensure that operations is achieving the right priority between
its performance objectives (quality, speed, dependability, flexibility and cost).

Strategy is sometimes seen as an either/or scenario. The firm can either compete on its
capabilities- a resource-based strategy - or pursue a market-driven strategy. There has been
considerable debate on the conflict between the two strategies. The latter can be seen as an
‘outside-in’ approach (market-driven); the former can be viewed as an ‘inside-out’ approach
(resource-driven). Each approach has distinct advantages and disadvantages.

The ‘outside-in’, market-based strategies were popularized by Michael Porter. Its main advocates
today are those who concentrate on marketing strategy. The market-based view of strategy
proposes that the firm should seek external opportunities in new and existing markets, or market
niches, and then align itself with these opportunities. This requires evaluating which markets
are attractive and which markets the firm should exit.

A market-led strategy does not ignore a firm’s capabilities. Indeed, a market-led strategy
demands that a coherent, unifying and integrative framework needs to be in place if the transition
from market requirements to in-house capabilities is to be realized. However, this is done only
when particular market opportunities have been deemed to be ‘attractive’ for the firm.

The danger with market-led strategies is that the firm may end up competing in markets in which
it does not have sufficient capabilities to compete effectively. Thus there will be a strategic gap
between what the firm would like to do (and may have chosen to do) and what it can actually do.
This perspective presupposes that the competitive priorities of operations strategy relate to cost,
quality, speed and flexibility. These are the factors that enable a product to win in the market.

The market influence on performance objectives


Operations seek to satisfy customers through developing their five performance objectives.
For example, if customers particularly value low-priced products or services, the operation will
place emphasis on its cost performance. Alternatively, a customer emphasis on fast delivery will
make speed important to the operation, and so on. These factors which define the customers’
requirements are called competitive factors (also called ‘critical success factors’ by some
authors). Whatever competitive factors are important to customers should influence the priority
of each performance objective.

Competitive factors                          Performance objectives
If customers value these…                    …then the operation needs to excel at these

Low price                                    Cost
High quality                                 Quality
Fast delivery                                Speed
Reliable delivery                            Dependability
Innovative products                          Flexibility (product)
Wide range of products                       Flexibility (mix)
The ability to change the timing
  or quantity of products                    Flexibility (volume and/or delivery)

Figure 2.1: Different competitive factors imply different performance objectives
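The correspondence in Figure 2.1 amounts to a simple lookup from competitive factors to performance objectives. A minimal sketch; the factor names follow the figure, while the function name is our own:

```python
# Mapping from competitive factors (what customers value) to the
# performance objective the operation must excel at (after Figure 2.1)
COMPETITIVE_FACTOR_TO_OBJECTIVE = {
    "low price": "cost",
    "high quality": "quality",
    "fast delivery": "speed",
    "reliable delivery": "dependability",
    "innovative products": "flexibility (product)",
    "wide range of products": "flexibility (mix)",
    "changeable timing or quantity": "flexibility (volume and/or delivery)",
}

def objectives_for(customer_values):
    """Return the performance objectives implied by what a customer values."""
    return [COMPETITIVE_FACTOR_TO_OBJECTIVE[v] for v in customer_values]

print(objectives_for(["low price", "reliable delivery"]))
# ['cost', 'dependability']
```

A customer segment that values several competitive factors thus implies a set of objectives, whose relative priority the operation must still judge.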

Order-winning and qualifying objectives


A particularly useful way of determining the relative importance of competitive factors is to
distinguish between ‘order-winning’ and ‘qualifying’ factors. Order-winning factors are those
things which directly and significantly contribute to winning business. They are regarded by
customers as key reasons for purchasing the product or service. Raising performance in an order-
winning factor will either result in more business or improve the chances of gaining more
business. Qualifying factors may not be the major competitive determinants of success, but are
important in another way. They are those aspects of competitiveness where the operation’s
performance has to be above a particular level just to be considered by the customer.
Performance below this ‘qualifying’ level of performance will possibly disqualify the company
from being considered by many customers. But any further improvement above the qualifying
level is unlikely to gain the company much competitive benefit. To order-winning and qualifying
factors can be added less important factors which are neither order-winning nor qualifying.
They do not influence customers in any significant way. They are worth mentioning here only
because they may be of importance in other parts of the operation’s activities.

Order-winning factors show a steady and significant increase in their contribution to
competitiveness as the operation gets better at providing them. Qualifying factors are ‘givens’;
they are expected by customers and can severely disadvantage the competitive position of the
operation if it cannot raise its performance above the qualifying level. Less important objectives
have little impact on customers no matter how well the operation performs in them.

If an operation produces goods or services for more than one customer group, it will need to
determine the order-winning, qualifying and less important competitive factors for each group.
This shows that different customer needs imply different objectives.
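The distinction above can be pictured as different benefit curves: order-winners pay off steadily as performance improves, qualifiers pay off fully once the qualifying level is met and not at all below it, and less important factors pay off not at all. The 0–10 scoring scale and numbers below are illustrative assumptions, not a prescribed method:

```python
def competitive_benefit(factor_type, performance, qualifying_level=5):
    """Stylized competitive benefit curve (performance scored 0-10).

    - order-winner:   benefit rises steadily with performance
    - qualifier:      full benefit at or above the qualifying level, none below
    - less important: no benefit however well the operation performs
    """
    if factor_type == "order-winner":
        return performance / 10
    if factor_type == "qualifier":
        return 1.0 if performance >= qualifying_level else 0.0
    return 0.0  # less important

print(competitive_benefit("order-winner", 8))  # 0.8
print(competitive_benefit("qualifier", 4))     # 0.0 (below the level: disqualified)
print(competitive_benefit("qualifier", 9))     # 1.0 (no extra gain above the level)
```

The flat top of the qualifier curve is the key point: investing beyond the qualifying level buys little, so improvement effort is better spent on order-winners.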

The product/service life cycle influence on performance objectives

One way of generalizing the behaviour of both customers and competitors is to link it to the life
cycle of the products or services that the operation is producing. The exact form of
product/service life cycles will vary, but generally they are shown as the sales volume passing
through four stages – introduction, growth, maturity and decline. The important implication of
this for operations management is that products and services will require operations strategies in
each stage of their life cycle.

Introduction stage. When a product or service is first introduced, it is likely to be offering
something new in terms of its design or performance, with few competitors offering the same
product or service. The needs of customers are unlikely to be well understood, so the operations
management needs to develop the flexibility to cope with any changes and be able to give the
quality to maintain product/service performance.
Growth stage. As volume grows, competitors may enter the growing market. Keeping up with
demand could prove to be the main operations preoccupation. Rapid and dependable response to
demand will help to keep demand buoyant, while quality levels must ensure that the company
keeps its share of the market as competition starts to increase.
Maturity stage. Demand starts to level off. Some early competitors may have left the market and
the industry will probably be dominated by a few larger companies. So operations will be
expected to get the costs down in order to maintain profits or to allow price cutting, or both.
Because of this, cost and productivity issues, together with dependable supply, are likely to be
the operation’s main concerns.
Decline stage. After time, sales will decline with more competitors dropping out of the market.
There might be a residual market, but unless a shortage of capacity develops the market will
continue to be dominated by price competition. Operations objectives continue to be dominated
by cost.
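The stage-by-stage shifts described above can be summarized as a simple lookup. The groupings below are an illustrative reading of the four stages, not a fixed rule:

```python
# Illustrative mapping of product/service life-cycle stage to the
# performance objectives most likely to dominate operations strategy
LIFE_CYCLE_PRIORITIES = {
    "introduction": ["flexibility", "quality"],
    "growth": ["speed", "dependability", "quality"],
    "maturity": ["cost", "dependability"],
    "decline": ["cost"],
}

def dominant_objectives(stage):
    """Look up the likely dominant objectives for a life-cycle stage."""
    return LIFE_CYCLE_PRIORITIES[stage.lower()]

print(dominant_objectives("Maturity"))  # ['cost', 'dependability']
```

A product portfolio spread across several stages therefore demands different operations strategies running side by side.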

The operations resources perspective


The fourth and final perspective of operations strategy is based on a particularly influential
theory of business strategy- the resource-based view (RBV) of the firm. Put simply, the RBV
holds that firms with an ‘above-average’ strategic performance are likely to have gained their
sustainable competitive advantage because of the core competences (or capabilities) of their
resources. This means that the way an organization inherits, or acquires, or develops its
operations resources will, over the long term, have a significant impact on its strategic success.
Furthermore, the impact of its ‘operations resource’ capabilities will be at least as great as, if not
greater than, that which it gets from its market position. So understanding and developing the
capabilities of operations resources is an important perspective on operations strategy.

Advocates of this perspective argue that firms are collections of productive resources that
provide them with their uniqueness and, by implication, their means of competitive advantage.
The role of internal resource-based strategies gained prominence in the early 1990s with the
emphasis on ‘core competencies’, which argued that the chief means of sustaining competitive
advantage for a firm comes from developing and guarding core capabilities and competencies. A
successful resource-based strategy process requires that strategists need to be fully aware of, and
make the best possible use of, the firm’s capabilities.

The danger of adopting a resource-based strategy is that basing strategy on existing resources,
looking inwards, risks building a company that achieves excellence in providing products and
services that nobody wants.

Resource constraints and capabilities


No organization can merely choose which part of the market it wants to be in without
considering its ability to produce products and services in a way that will satisfy that market.
In other words, the constraints imposed by its operations must be taken into account.

For example, a small translation company offers general translation services to a wide range of
customers who wish documents such as sales brochures to be translated into another language. A
small company, it operates an informal network of part-time translators who enable the company
to offer translation into or from most of the major languages in the world. Some of the
company’s largest customers want to purchase their sales brochures on a ‘one-stop shop’ basis
and have asked the translation company whether it is willing to offer a full service, organizing
the design and production, as well as the translation, of export brochures. This is a very
profitable market opportunity; however, the company does not have the resources, financial or
physical, to take it up. From a market perspective, it is good business; but from an operations
resource perspective, it is not feasible.

However, the operations resource perspective is not always so negative. This perspective may
identify constraints to satisfying some markets but it can also identify capabilities which can be
exploited in other markets. For example, the same translation company has recently employed
two new translators who are particularly skilled at web site development. To exploit this, the
company decides to offer a new service whereby customers can transfer documents to the
company electronically, which can then be translated quickly. This new service is a ‘fast
response’ service which has been designed specifically to exploit the capabilities within the
operations resources. Here the company has chosen to be driven by its resource capabilities
rather than the obvious market opportunities.

Intangible resources
An operations resource perspective must start with an understanding of the resource capabilities
and constraints within the operation. It must answer the simple questions, what do we have, and
what can we do? An obvious starting point here is to examine the transforming and transformed
resource inputs to the operation. These, after all, are the ‘building blocks’ of the operation.
However, merely listing the type of resources an operation has does not give a complete picture
of what it can do. Trying to understand an operation by listing its resources alone is like trying to
understand an automobile by listing its component parts.
To describe it more fully, we need to describe how the component parts form the internal
mechanisms of the motor car. Within the operation, the equivalent of these mechanisms is its
processes. Yet, even for an automobile, a technical explanation of its mechanisms still does not
convey everything about its style or ‘personality’. Something more is needed to describe these.
In the same way, an operation is not just the sum of its processes. In addition, the operation has
some intangible resources. An operation’s intangible resources include such things as its
relationship with suppliers, the reputation it has with its customers, its knowledge of its process
technologies and the way its staff can work together in new product and service development.
These intangible resources may not always be obvious within the operation, but they are
important and have real value. It is these intangible resources, as well as its tangible resources,
that an operation needs to deploy in order to satisfy its markets. The central issue for operations
management, therefore, is to ensure that its pattern of strategic decisions really does develop
appropriate capabilities within its resources and processes.

Structural and infrastructural decisions


A distinction is often drawn between the strategic decisions which determine an operation’s
structure and those which determine its infrastructure. An operation’s structural decisions are
those which we have classed as primarily influencing design activities, while infrastructural
decisions are those which influence the workforce organization and the planning and control, and
improvement activities. This distinction in operations strategy has been compared to that
between ‘hardware’ and ‘software’ in computer systems. The hardware of a computer sets limits
to what it can do. In a similar way, investing in advanced technology and building more or better
facilities can raise the potential of any type of operation. Within the limits which are imposed by
the hardware of a computer, the software governs how effective the computer actually is in
practice. The most powerful computer can only work to its full potential if its software is capable
of exploiting its potential. The same principle applies with operations. The best and most costly
facilities and technology will only be effective if the operation also has an appropriate
infrastructure which governs the way it will work on a day-to-day basis.

2.5 The process of operations strategy


The ‘process’ of operations strategy refers to the procedures which are, or can be, used to
formulate those operations strategies which the organization should adopt. It is concerned with
‘how’ operations strategies are put together. It is important because, although strategies will vary
from organization to organization, they are usually trying to achieve some kind of alignment, or
‘fit’, between what the market wants and what the operation can deliver, and to sustain that
alignment over time. So the process of operations strategy should both satisfy market
requirements through appropriate operations resources, and also develop those resources so that,
in the longer term, they provide competitive capabilities powerful enough to achieve sustainable
competitive advantage.
There are many ‘formulation processes’ which are, or can be, used to formulate operations
strategies. Most consultancy companies have developed their own frameworks, as have several
academics. Typically, these formulation processes include the following elements:
 A process which formally links the total organization strategic objectives (usually a
business strategy) to resource-level objectives.
 The use of competitive factors (called various things such as order winners, critical
success factors, etc.) as the translation device between business strategy and operations
strategy.
 A step which involves judging the relative importance of the various competitive factors
in terms of customers’ preferences.
 A step which includes assessing current achieved performance, usually as compared
against competitor performance levels.
 An emphasis on operations strategy formulation as an iterative process.
 The concept of an ‘ideal’ or ‘greenfield’ operation against which to compare current
operations. Very often the question asked is: ‘If you were starting from scratch on a
Greenfield site, how, ideally, would you design your operation to meet the needs of the
market?’ This can then be used to identify the differences between current operations and
this ideal state.
 A ‘gap-based’ approach which involves comparing what is required of the operation by
the marketplace against the levels of performance the operation is currently achieving.
 A strategic resonance approach (see discussion below).
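The ‘gap-based’ step in the list above can be sketched as subtracting achieved performance from required performance for each objective. The 1–10 scoring scale and figures are invented for illustration:

```python
def performance_gaps(required, achieved):
    """Gap = what the market requires minus what the operation achieves.

    Positive gaps flag objectives needing improvement; both dicts score
    each performance objective on the same (here 1-10) scale.
    """
    return {obj: required[obj] - achieved[obj] for obj in required}

# Hypothetical scores for one market segment
required = {"quality": 9, "speed": 7, "dependability": 8, "flexibility": 5, "cost": 6}
achieved = {"quality": 8, "speed": 4, "dependability": 8, "flexibility": 6, "cost": 5}

gaps = performance_gaps(required, achieved)
# Rank objectives by shortfall, largest first
priorities = sorted(gaps, key=gaps.get, reverse=True)
print(priorities[0])  # 'speed' is the largest shortfall here
```

In practice the ‘required’ scores would come from competitive-factor analysis and the ‘achieved’ scores from performance measurement, often benchmarked against competitors.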

There is no one best way to formulate strategy and the debate on whether strategy should be
internal, resource-based or fully externally market-driven may be seen as of intellectual interest
only. In practice, many organizations will combine both internal and external considerations in
the same way that they tend to innovate as a result of both.

Clearly, the process of operations strategy formulation should provide a set of actions that, with
hindsight, have provided the ‘best’ outcome for the organization. But that really does not help us.
What do we mean by ‘the best’, and what good is a judgement that can only be applied in
hindsight? Yet, even if we cannot assess the ‘goodness’ of a strategy for certain in advance, we
can check it out for some attributes that could stop it being a success.
1. First, is the operations strategy comprehensive?
2. Second, is there internal coherence between the various actions it is proposing?
3. Third, do the actions being proposed as part of the operations strategy correspond to the
appropriate priority for each performance objective?
4. Fourth, does the strategy prioritize the most critical activities or decisions?

2.6 Strategic Resonance


The term strategic resonance describes how world-class firms devise and implement strategies.
World-class firms do not see strategy as either a market-driven or a resource-based process, but
create resonance between the two. World-class firms both seek new market opportunities and
have in place capabilities poised to be used. Strategic resonance is an ongoing, dynamic,
strategic process whereby customer requirements and organizational capabilities are in harmony
and resonate. Strategic resonance is more than strategic fit – a term that we mentioned earlier to
describe the ‘fit’ between the firm’s capabilities and the market that it serves. Strategic resonance
goes beyond that. Strategic fit may be likened to a jigsaw where all parts fit together; this is a
useful view, but it can have – as has been noted in interviews with key staff – a
very static feel to it. In strategic fit it is as if, once the ‘bits’ are in place, the strategic planning is
done. By contrast, strategic resonance is a dynamic, organic process, which is about ensuring
continuous linkages and harmonization between:
 The market and the firm’s operations capabilities
 The firm’s strategy and its operations capabilities
 All functions and all levels within the firm.
Firms need to find and exploit their strategic resonance- between markets and the firm; within
the firm itself; and between senior-level strategists and plant-level operations capabilities.
Therein lies the problem – sometimes those who are in the position to make strategic decisions
know little or nothing about the strategic opportunities and strategic power that lie within the
firm’s operations resources and capabilities.

As a result there is no strategic resonance between strategy and operations, and consequently
senior level strategists articulate a mission and a strategy that has no chance of being realized. It
will not be realized because the firm does not know what the capabilities are in the first place, or
the firm simply does not possess the necessary operations know-how and capability, or the firm
seems incapable of seeking partnerships with other firms that do.

2.7 Operations Strategy Implementation


A large number of authors, writing about all forms of strategy, have discussed the importance of
effective implementation. This reflects an acceptance that no matter how sophisticated the
intellectual and analytical underpinnings of a strategy, it remains only a document until it has
been implemented. The essential elements of operations strategy that can affect its
implementation include:

1. Purpose. As with any form of project management, the more clarity that exists around the
ultimate goal, the more likely it is that the goal will be achieved. In this context, a shared
understanding of the motivation, boundaries and context for developing the operations
strategy is crucial.
2. Point of entry. Linked with the above point, any analysis, formulation and
implementation process is potentially politically sensitive and the support that the process
has from within the hierarchy of the organization is central to the implementation
success.
3. Process. Any formulation process must be explicit. It is important that the managers who
are engaged in putting operations strategies together actively think about the process in
which they are participating.
4. Project management. There is a cost associated with any strategy process. Indeed one of
the reasons why operations have traditionally not had explicit strategies relates to the
difficulty of releasing sufficient managerial time. The basic disciplines of project
management such as resource and time planning, controls, communication mechanisms,
reviews and so on, should be in place.
5. Participation. Intimately linked with the above points, the selection of staff to participate
in the implementation process is also critical. So, for instance, the use of external
consultants can provide additional specialist expertise, the use of line managers (and
indeed staff) can provide ‘real-world’ experience and the inclusion of cross-functional
managers (and suppliers etc.) can help to integrate the finished strategy.

Chapter Summary
Strategy is the total pattern of decisions and actions that position the organization in its
environment and that are intended to achieve its long-term goals. A strategy has content and
process. The content of a strategy concerns the specific decisions which are taken to achieve
specific objectives. The process of a strategy is the procedure which is used within a business to
formulate its strategy. Strategy matters, because without it a firm does not have direction.
Consequently, any successes that it may gain will be a matter of luck. Any operations function
has three main roles to play within an organization: as an implementer of the organization’s
strategies, as a
supporter of the organization’s overall strategy and as a leader or driver of strategy. The extent to
which an operations function fulfils these roles, together with its aspirations, can be used to
judge the operations function’s contribution to the organization. Hayes and Wheelwright provide
a four-stage model for doing this. At a strategic level, performance objectives relate to the
interests of the operation’s stakeholders. These relate to the company’s responsibility to
customers, suppliers, shareholders, employees and society in general. Specifically, these
objectives are related to quality, cost, speed, dependability, and flexibility. These performance
objectives involve trade-offs: the extent to which improvements in one performance objective
can only be achieved by sacrificing performance in others. There are four perspectives to the process
and overall conception of operations strategy. The ‘top-down’ perspective views strategic
decisions at a number of levels. The ‘bottom-up’ view of operations strategy sees overall
strategy as emerging from day-today operational experience. Third, the ‘market requirements’
perspective of operations strategy sees the main role of operations as satisfying markets. Finally,
the ‘operations resource’ perspective of operations strategy is based on the resource-based view
of the firm and sees the operation’s core competences (or capabilities) as being the main
influence on operations strategy. There are many different procedures which are used by
companies, consultancies and academics to formulate operations strategies. Although differing in
the stages that they recommend, many of these models have similarities.

Review Questions
Multiple Choice Questions
1. Operations where internal dependability is high are more effective than those which are
not, for the following reasons except:
a. Dependability saves money
b. Dependability saves time
c. Dependability speeds up response
d. Dependability gives stability
e. All of the above.
2. Customers will need the operation to change so that it can provide one of the following
types of requirement
a. Service flexibility
b. Mix flexibility
c. Delivery flexibility
d. Volume flexibility
e. All of the above.
3. Those factors that are regarded by customers as key reasons for purchasing the product or
service are:
a. Order qualifiers
b. Order winners
c. Operations Objectives
d. Price
e. Quality
4. Those factors that may not be the major competitive determinants of success, but
important in another way are:
a. Order qualifiers
b. Order winners
c. Operations Objectives
d. Price
e. Quality
5. One of the following is not among the attributes of a good operations strategy
formulation process.
a. Comprehensive
b. Coherence
c. Priority
d. Critical
e. None of the above.

Discussion Questions

1. What is strategy and what is operations strategy?


2. What is the difference between a ‘market requirements’ and an ‘operations resource’
view of operations strategy?
3. How can an operations strategy be put together?
4. Discuss the requirements from an operations perspective of competing on (a) quality, (b)
cost, (c) flexibility, (d) speed, and (e) dependability. Give examples of manufacturing or
service firms that successfully compete on each of the criteria listed.
5. What role should operations play in corporate strategy?

CHAPTER III
STRATEGIC DECISIONS IN OPERATIONS MANAGEMENT

All operations managers are designers, because design is the process of satisfying people’s
requirements through the shaping or configuring of products, services, and processes. This chapter
looks at how managers can manage the design of the products and services they produce and the
processes that produce them. At the most strategic level ‘design’ means shaping the network of
operations that supply products and services. At a more operational level it means the
arrangement of the processes, technology and people that constitute operations processes.
Thus, designing is a crucial part of operations managers’ activities and is thus discussed in this
Chapter.

Learning Objectives
After completing this chapter, students will be able to:
 Distinguish the different approaches of product designing
 Understand how to plan processes
 Explain the techniques and concepts of capacity planning
 Enunciate models of location decisions
 Discuss the common approaches to layout designing

3.1 Product Design

Product design is the process of deciding on the unique characteristics and features of the
company’s product; in other words, it is the process of defining all of the product’s characteristics.
Consumers respond to a product’s appearance, color, texture and performance, and all of its
features, summed up, are the product’s design. Product design therefore defines a product’s
characteristics, such as its appearance, the materials it is made of, its dimensions and tolerances,
and its performance standards.

Products and services are often the first thing that customers see of a company, so they should
have an impact. And although operations managers may not have direct responsibility for
product and service design, they always have an indirect responsibility to provide the
information and advice upon which successful product or service development depends.
But, increasingly, operations managers are expected to take a more active part in product and
service design. Unless a product, however well designed, can be produced to a high standard,
and unless a service, however well conceived, can be implemented, the design can never bring its
full benefits.

Product design, together with other design issues, affects product quality, product cost, and
customer satisfaction. Further, the product has to be manufactured using materials, equipment,
and labor skills that are efficient and affordable; otherwise, its cost will be too high for the
market. We call this the product’s manufacturability: the ease with which the product can be
made.

Finally, if a product is to achieve customer satisfaction, it must have the combined characteristics
of good design, competitive pricing, and the ability to fill a market need. This is true whether the
product is pizzas or cars. Most of us might think that the design of a product is not that
interesting. After all, it probably involves materials, measurements, dimensions, and blueprints.
When we think of design we usually think of car design or computer design and envision
engineers working on diagrams. However product design is much more than that.

The Process of Product Design


Design has a tremendous impact on the quality of a product or service. Poor designs may not
meet customer needs or may be so difficult to make that quality suffers. Costly designs can result
in overpriced products that lose market share. If the design process is too lengthy, a competitor
may capture the market by being the first to introduce new products, services, or features.
However, rushing to be first to the market can result in design flaws and poor performance,
which totally negate first-mover advantages. Design may be an art, but the design process must
be managed effectively.

Product design defines the appearance of the product, sets standards for performance, specifies
which materials are to be used, and determines dimensions and tolerances. To get to a final
design of a product or service, the design activity must pass through several key stages. These
form an approximate sequence, although in practice designers will often recycle or backtrack
through the stages. We will describe them in the order in which they usually occur, as shown in
Figure 3.1. First, comes the concept generation stage that develops the overall concept for the
product or service. The concepts are then screened to try to ensure that, in broad terms, they will
be a sensible addition to its product/service portfolio and meet the concept as defined. The
agreed concept has then to be turned into a preliminary design that then goes through a stage of
evaluation and improvement to see if the concept can be served better, more cheaply or more
easily. An agreed design may then be subjected to prototyping and final design. Product
designs are never finished, but are always updated with new ideas.

• Concept Generation – transforms an idea for a product or service into a concept which
captures the nature of the product or service and provides an overall specification for its design.

• Concept Screening – involves examining the concept’s feasibility, acceptability and
vulnerability in broad terms to ensure that it is a sensible addition to the company’s product
or service portfolio.

• Preliminary Design – involves the identification of all the component parts of the product or
service and the way they fit together. Typical tools used during this phase include component
structures and flow charts.

• Evaluation & Improvement – involves re-examining the design to see if it can be done in a
better way, more cheaply or more easily. Typical techniques used here include quality function
deployment, value engineering and Taguchi methods.

• Prototyping & Final Design – involves providing the final details which allow the product or
service to be produced. The outcome of this stage is a fully developed specification for the
package of products and services, as well as a specification for the processes that will make
and deliver them to customers.

Figure 3.1: The new product design process

The design process itself is beneficial because it encourages companies to look outside their
boundaries, bring in new ideas, challenge conventional thinking, and experiment. Product and
service design provide a natural venue for learning, breaking down barriers, working in teams,
and integrating across functions.

Stage 1: Concept generation


The ideas for new product or service concepts can come from sources outside the organization,
such as customers or competitors, and from sources within the organization, such as staff (for
example, from sales staff and front-of-house staff) or from the R&D department.
All product designs begin with an idea. The idea might come from a product manager who
spends time with customers and has a sense of what customers want, from an engineer with a
flare for inventions, or from anyone else in the company. To remain competitive, companies
must be innovative and bring out new products regularly.

Ideas from customers. The early steps in the product design process create a direct link between
customers and product design. Market researchers collect customer information by studying customer buying patterns
and using tools such as customer surveys and focus groups. Management may love an idea, but if
market analysis shows that customers do not like it, the idea is not viable. Analyzing customer
preferences is an ongoing process. Customer preferences next year may be quite different from
what they are today. For this reason, the related process of forecasting future consumer
preferences is important, though difficult. Marketing, the function generally responsible for
identifying new product or service opportunities may use many market research tools for
gathering data from customers in a formal and structured way, including questionnaires and
interviews.

These techniques, however, usually tend to be structured in such a way as only to test out ideas
or check products or services against predetermined criteria. Listening to the customer, in a less
structured way, is sometimes seen as a better means of generating new ideas. Focus groups, for
example, are one formal but unstructured way of collecting ideas and suggestions from
customers. A focus group typically comprises seven to ten participants who are unfamiliar with
each other but who have been selected because they have characteristics in common that relate to
the particular topic of the focus group. Participants are invited to ‘discuss’ or ‘share ideas with
others’ in a permissive environment that nurtures different perceptions and points of view,
without pressurizing participants. The group discussion is conducted several times with similar
types of participants in order to identify trends and patterns in perceptions.

Listening to customers. Ideas may come from customers on a day-to-day basis. They may write
to complain about a particular product or service, or make suggestions for its improvement. Ideas
may also come in the form of suggestions to staff during the purchase of the product or delivery
of the service. Although some organizations may not see gathering this information as important
(and may not even have mechanisms in place to facilitate it), it is an important potential source
of ideas.

Ideas from competitor activity. Competitors are another source of ideas. A company learns by
observing its competitors’ products and their success rate. This includes looking at product
design, pricing strategy, and other aspects of the operation. Perceptual maps, benchmarking, and
reverse engineering can help companies learn from their competitors. All market-aware
organizations follow the activities of their competitors. If a new idea gives a competitor an
edge in the marketplace, even if it is only a temporary one, competing organizations will
have to decide whether to imitate it or to come up with a better or different idea.
Sometimes this involves reverse engineering, that is taking apart a product to understand how a
competing organization has made it. Some aspects of services may be more difficult to reverse-
engineer (especially back-office services) as they are less transparent to competitors. However,
by consumer testing a service, it may be possible to make educated guesses about how it has
been created. Many service organizations employ ‘testers’ to check out the services provided by
competitors. Studying the practices of companies considered “best in class” and comparing the
performance of our company against them is called benchmarking. We can benchmark against a
company in a completely different line of business and still learn from some aspect of that
company’s operation. Perceptual maps compare customer perceptions of a company’s products
with competitors’ products.

Ideas from staff. The contact staff in a service organization or the salesperson in a product
oriented organization could meet customers every day. These staff may have good ideas about
what customers like and do not like. They may have gathered suggestions from customers or
have ideas of their own as to how products or services could be developed to meet the needs of
their customers more effectively.

Ideas from research and development. Ideas are also generated by a company’s R & D (research
and development) department, whose role is to develop product and process innovation. One
formal function found in some organizations is research and development (R&D). As its name
implies, its role is twofold. Research usually means attempting to develop new knowledge and
ideas in order to solve a particular problem or to grasp an opportunity. Development is the
attempt to try to utilize and operationalize the ideas that come from research. Companies mainly
use the ‘development’ part of R&D – for example, exploiting new ideas that might be afforded
by new materials or new technologies. And although ‘development’ does not sound as exciting
as ‘research’, it often requires as much creativity and even more persistence.

A practice called Zenbara, which involves dismantling old products to obtain ideas for new ones,
is also commonly used. Its proponents claim that it can save significant investment in overly
complex technology that may not provide competitive advantage, in addition to reducing
development time.

Ideas from Suppliers. Suppliers are another source of product design ideas. To remain
competitive more companies are developing partnering relationships with their suppliers, to
jointly satisfy the end customer.

Open-sourcing – using a ‘development community’. Not all ‘products’ or services are created
by professional, employed designers for commercial purposes. Many of the software applications
that we all use, for example, are developed by an open community, including the people who use
the products. If you use Google, the Internet search facility, or use Wikipedia, the online
encyclopaedia, or shop at Amazon, you are using open-source software. The basic concept of
open-source software is extremely simple. Large communities of people around the world, who
have the ability to write software code, come together and produce a software product. The
finished product is not only available to be used by anyone or any organization for free but is
regularly updated to ensure it keeps pace with the necessary improvements. The production of
open-source software is very well organized and, like its commercial equivalent, is continuously
supported and maintained.

However, unlike its commercial equivalent, it is absolutely free to use. Over the last few years
the growth of open-source has been phenomenal with many organizations transitioning over to
using this stable, robust and secure software. With the maturity open-source software now has to
offer, organizations have seen the true benefits of using free software to drive down costs and to
establish themselves on a secure and stable platform. Open-source has been the biggest change in
software development for decades and is setting new open standards in the way software is used.
The open nature of this type of development also encourages compatibility between products.
BMW, for example, was reported to be developing an open-source platform for vehicle
electronics. Using an open-source approach, rather than using proprietary software, BMW can
allow providers of ‘infotainment’ services to develop compatible, plug-and-play applications.

Other sources of ideas include the patent office and old products, which may have contained a
feature that has since gone out of common usage but would provide a differentiator today. Many
new product ideas emerge from off-the-wall people or groups, or at least what might be
considered ‘nonconventional’ sources. Indeed, breaking with convention lets people create the
necessary differentiation between existing and new products.
Firms such as Disney Corp. and Orange (the mobile communications company) hire people to
blue-sky new project ideas – so called because they spend time gazing into the sky, waiting for a
blinding flash of inspiration about what the next product will be! These are not marginal roles in
either firm. In Orange, the blue-sky department is located next to the main board offices.

The type of process for idea creation should depend on what the organization is trying to
achieve. Many firms, particularly those that operate in niche markets, are happy to evolve their
products and services continuously. Indeed, many larger firms prefer this gentle evolution to a
more radical approach – often referred to as discontinuous, because it is not based on any
previous experience of the firm. However, at the outset of the process the objective is to create as
many new ideas as possible, both for radical and evolutionary innovation. Traditional work
environments rarely provide the degree of inspiration for such creativity. Many organizations
have found it necessary to create apparent chaos by stripping away many of the constraints to
creative work, and have targeted the working environment as one of these constraints. Creativity
has become a key attribute for many modern businesses.

Stage 2 – Concept Screening


After a product idea has been developed it is evaluated to determine its likelihood of success.
Not all concepts which are generated will necessarily be capable of further development into
products and services. Designers need to be selective as to which concepts they progress to the
next design stage. The purpose of the concept-screening stage is to take the flow of concepts and
evaluate them. This is called product screening. Evaluation in design means assessing the worth
or value of each design option, so that a choice can be made between them. This involves
assessing each concept or option against a number of design criteria. While the criteria used in
any particular design exercise will depend on the nature and circumstances of the exercise, it is
useful to think in terms of three broad categories of design criteria:
 The feasibility of the design option – can we do it?
 Do we have the skills (quality of resources)?
 Do we have the organizational capacity (quantity of resources)?
 Do we have the financial resources to cope with this option?
 The acceptability of the design option – do we want to do it?
 Does the option satisfy the performance criteria which the design is trying to
achieve? (These will differ for different designs.)
 Will our customers want it?
 Does the option give a satisfactory financial return?
 The vulnerability of each design option – do we want to take the risk?
 Do we understand the full consequences of adopting the option?
 Being pessimistic, what could go wrong if we adopt the option? What would be the
consequences of everything going wrong? (This is called the ‘downside risk’ of an
option.)

• Feasibility (‘How difficult is it?’) – what INVESTMENT, both managerial and financial,
will be needed?

• Acceptability (‘How worthwhile is it?’) – what RETURN, in terms of benefits to the
operation, will it give?

• Vulnerability (‘What could go wrong?’) – what RISKS do we run if things go wrong?

Together, these three screening criteria feed into an overall evaluation of the concept.

Figure 3.2: Broad categories of evaluation criteria for assessing concepts
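To make the comparison between competing concepts concrete, the three criteria above can be
combined in a simple weighted-scoring sketch. The weights, scores and concept names below are
purely hypothetical; the text prescribes no particular scoring scheme.

```python
# Hypothetical weighted-scoring sketch for concept screening.
# Each concept is scored 1-10 on each criterion; vulnerability is
# scored so that a HIGHER number means LESS risk.
WEIGHTS = {"feasibility": 0.4, "acceptability": 0.4, "vulnerability": 0.2}

def screening_score(scores):
    """Weighted sum of the criterion scores for one concept."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

concepts = {
    "Concept A": {"feasibility": 8, "acceptability": 6, "vulnerability": 7},
    "Concept B": {"feasibility": 5, "acceptability": 9, "vulnerability": 4},
}

# Rank concepts from most to least promising.
ranked = sorted(concepts, key=lambda name: screening_score(concepts[name]),
                reverse=True)
```

Here Concept A (score 7.0) would be screened ahead of Concept B (score 6.4); in practice the
weights should reflect the organization’s own priorities.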

Besides, the product design idea may be evaluated according to the needs of the major business
functions. In their evaluation, executives from each function area may explore issues such as the
following:

Operations. What are the production needs of the proposed new product and how do they match our
existing resources? Will we need new facilities and equipment? Do we have the labor skills to make the
product? Can the material for production be readily obtained?

Marketing. What is the potential size of the market for the proposed new product? How much effort will
be needed to develop a market for the product and what is the long-term product potential?

Finance. The production of a new product is a financial investment like any other. What is the proposed
new product’s financial potential, cost, and return on investment?

Unfortunately, there is no magic formula for deciding whether or not to pursue a particular product idea.
Managerial skill and experience, however, are key. Companies generate new product ideas all the time,
whether for a new brand of cereal or a new design for a car door. Approximately 80 percent of ideas do
not make it past the screening stage. Management analyzes operations, marketing, and financial factors,
and then makes the final decision. In general, most authors make the point that every business needs a
formal, structured evaluation process: fit with facility and labor skills, size of market,
contribution margin, break-even analysis, and return on sales. A popular one is break-even analysis,
which we look at next.

Break-even analysis is a technique that can be useful when evaluating a new product. This technique
computes the quantity of goods a company needs to sell just to cover its costs, or break even, called the
break-even point. When evaluating an idea for a new product it is helpful to compute its break-even
quantity. An assessment can then be made as to how difficult or easy it will be to cover costs and make a
profit. A product with a break-even quantity that is hard to attain might not be a good product choice to
pursue.
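As a quick sketch (the cost and price figures below are invented for illustration), the break-even
quantity is simply fixed cost divided by the contribution margin per unit:

```python
def break_even_quantity(fixed_cost, price, variable_cost_per_unit):
    """Units that must be sold for total revenue to cover total cost."""
    if price <= variable_cost_per_unit:
        raise ValueError("price must exceed variable cost per unit")
    return fixed_cost / (price - variable_cost_per_unit)

# Hypothetical product: $100,000 fixed cost, $15 price, $5 variable cost.
q = break_even_quantity(100_000, 15, 5)  # 10,000 units
```

A demand forecast well above this quantity suggests the idea can cover its costs; a forecast near
or below it is a warning sign for the product choice.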

The design ‘funnel’


Applying these evaluation criteria progressively reduces the number of options which will be
available further along in the design activity. For example, deciding to make the outside casing
of a camera case from aluminium rather than plastic limits later decisions, such as the overall
size and shape of the case. This means that the uncertainty surrounding the design reduces as the
number of alternative designs being considered decreases. The design funnel analogy is
important to undertake a progressive reduction of design options from many to one. But reducing
design uncertainty also impacts on the cost of changing one’s mind on some detail of the design.
In most stages of design the cost of changing a decision is bound to incur some sort of rethinking
and recalculation of costs. Early on in the design activity, before too many fundamental decisions
have been made, the costs of change are relatively low. However, as the design progresses the
interrelated and cumulative decisions already made become increasingly expensive to change.

Ultimately, the performance specifications should be developed. Performance specifications are
written for product concepts that pass the feasibility study and are approved for development.
They describe the function of the product; that is, what the product should do to satisfy customer
needs.

Stage 3: Preliminary Design

Having generated an acceptable, feasible and viable product or service concept the next stage is
to create a preliminary design. The objective of this stage is to have a first attempt at both
specifying the component products and services in the package, and defining the processes to
create the package. The preliminary design stage begins with form and functional design.

a) Form and functional design


Form design refers to the physical appearance of a product-its shape, color, size, and style.
Aesthetics such as image, market appeal, and personal identification are also part of form design.
In many cases, functional design must be adjusted to make the product look or feel right. Apple
products have great form and functional design.

Functional design is concerned with how the product performs. It seeks to meet the
performance specifications of fitness for use by the customer. Three performance characteristics
considered during this phase of design are reliability, maintainability, and usability.

Reliability is the probability that a given part or product will perform its intended function for a specified
length of time under normal conditions of use. You may be familiar with reliability information from
product warranties. A car warranty might extend for three years or 50,000 miles. Normal conditions of
use would include regularly scheduled oil changes and other minor maintenance activities. A missed oil
change or mileage in excess of 50,000 miles in a three-year period would not be considered “normal” and
would nullify the warranty.

A product or system’s reliability is a function of the reliabilities of its component parts and how
the parts are arranged. If all parts must function for the product or system to operate, then the
system reliability is the product of the component part reliabilities.
For example, if two component parts are required and each has a reliability of 0.90, the reliability
of the system is 0.90 ×0.90= 0.81, or 81%.

Note that the system reliability of 0.81 is considerably less than the component reliabilities of
0.90. As the number of serial components increases, system reliability will continue to
deteriorate. This makes a good argument for simple designs with fewer components!

Failure of some components in a system is more critical than others-the brakes on a car, for
instance. To increase the reliability of individual parts (and thus the system as a whole),
redundant parts can be built in to back up a failure. Providing emergency brakes for a car is an
example. Consider the following redundant design with R1 representing the reliability of the
original component and R2 the reliability of the backup component. These components are said
to operate in parallel. Suppose R1 = 0.95 and R2 = 0.90. If the original component fails (a 5%
chance), the backup component will automatically kick in to take its place, but only 90% of the
time. Thus, the reliability of the system is R1 + (1 - R1)(R2) = 0.95 + (1 - 0.95)(0.90) = 0.995.
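The two arrangements discussed above can be sketched as short functions; the numbers reproduce
the examples in the text:

```python
def series_reliability(*component_reliabilities):
    """All components must work, so multiply their reliabilities."""
    result = 1.0
    for r in component_reliabilities:
        result *= r
    return result

def parallel_reliability(r_primary, r_backup):
    """Backup is needed only when the primary fails: R1 + (1 - R1) * R2."""
    return r_primary + (1 - r_primary) * r_backup

series_reliability(0.90, 0.90)    # about 0.81, as in the text
parallel_reliability(0.95, 0.90)  # about 0.995, as in the text
```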

Reliability can also be expressed as the length of time a product or service is in operation before
it fails, called the mean time between failures (MTBF). In this case, we are concerned with the
distribution of failures over time, or the failure rate. The MTBF is the reciprocal of the failure
rate (MTBF = 1/failure rate). For example, if your laptop battery fails four times in 20 hours of
operation, its failure rate would be 4/20 = 0.20, and its MTBF = 1/0.20 = 5 hours.

Reliability can be improved by simplifying product design, improving the reliability of
individual components, or adding redundant components. Products that are easier to manufacture
or assemble, are well maintained, and have users who are trained in proper use have higher
reliability.

Maintainability (also called serviceability) refers to the ease and/or cost with which a product or
service is maintained or repaired. Products can be made easier to maintain by assembling them in
modules, like computers, so that entire control panels, cards, or disk drives can be replaced when
they malfunction. The location of critical parts or parts subject to failure affects the ease of
disassembly and, thus, repair. Instructions that teach consumers how to anticipate malfunctions
and correct them themselves can be included with the product. Specifying regular maintenance
schedules is part of maintainability, as is proper planning for the availability of critical
replacement parts.

One quantitative measure of maintainability is mean time to repair (MTTR). Combined with the
reliability measure of MTBF, we can calculate the average availability or “uptime” of a system
as Availability = MTBF / (MTBF + MTTR).
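The laptop battery example from the text, extended with a hypothetical one-hour repair time (the
MTTR figure is invented for illustration), shows how MTBF and MTTR combine into availability:

```python
def mtbf(hours_of_operation, failures):
    """Mean time between failures: operating hours per failure."""
    return hours_of_operation / failures

def availability(mtbf_hours, mttr_hours):
    """Fraction of time the system is up: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

m = mtbf(20, 4)           # 5 hours, as in the laptop battery example
a = availability(m, 1.0)  # about 0.83 with a hypothetical 1-hour MTTR
```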

Usability: All of us have encountered products or services that are difficult or cumbersome to
use: the 19-inch laptop I remember as weighing half as much as I did; my father’s gun, almost as
tall as I was at age 10; doors that give no clue whether to pull or push; remote controls with
ever more, ever smaller buttons serving multiple products. These are all usability issues in
design. Usability is what makes a product or service easy to use and
a good fit for its targeted customer. It is a combination of factors that affect the user’s experience
with a product, including ease of learning, ease of use, and ease of remembering how to use,
frequency and severity of errors, and user satisfaction with the experience. Apple revolutionized
the computer industry with its intuitive, easy-to-use designs and continues to do so with its sleek
and functional iPods, iPads, and iPhones. Microsoft employs over 140 usability engineers.

Before a design is deemed functional, it must go through usability testing. Simpler, more
standardized designs are usually easier to use. They are also easier to produce, as we’ll see in the
next section.

Activity 3
Evaluate the design of any local product in terms of form and functional designs. Give a practical
example.

b. Specify the components of the package


The first task in this stage of design is to define exactly what will go into the product or service:
that is, specifying the components of the package. This will require the collection of information
about such things as the constituent component parts which make up the product or service
package and the component (or product) structure, the order in which the component parts of
the package have to be put together. For example, the components for a remote mouse for a
computer may include upper and lower casings, a control unit and packaging, which are
themselves made up of other components. The product structure shows how these components fit
together to make the mouse.
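The component structure described above is essentially a tree. A minimal sketch, using the
remote-mouse parts from the text plus sub-parts of the control unit invented for illustration:

```python
# Component (product) structure for the remote mouse example.
# The circuit board and battery holder are hypothetical sub-parts.
product_structure = {
    "remote mouse": ["upper casing", "lower casing", "control unit", "packaging"],
    "control unit": ["circuit board", "battery holder"],
}

def all_components(item, structure):
    """Walk the tree and list every part needed to build the item."""
    parts = []
    for part in structure.get(item, []):
        parts.append(part)
        parts.extend(all_components(part, structure))
    return parts
```

Calling all_components("remote mouse", product_structure) lists the casings, the control unit
with its sub-parts, and the packaging, in assembly order.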

c. Reducing design complexity


Simplicity is usually seen as a virtue amongst designers of products and services. The most
elegant design solutions are often the simplest. However, when an operation produces a variety
of products or services (as most do) the range of products and services considered as a whole can
become complex, which, in turn, increases costs. Designers adopt a number of approaches to
reducing the inherent complexity in the design of their product or service range. The most
common approaches to complexity reduction include standardization, commonality and
modularization.

Standardization

Using standard parts in a product or throughout many products saves design time, tooling costs,
and production worries. Standardization makes possible the interchangeability of parts among
products, resulting in higher-volume production and purchasing, lower investment in inventory,
easier purchasing and material handling, fewer quality inspections, and fewer difficulties in
production. Some products, such as light bulbs, batteries, and DVDs, benefit from being totally
standardized. For others, being different is a competitive advantage. The question becomes how
to gain the cost benefits of standardization without losing the market advantage of variety and
uniqueness.

Commonality
Using common elements within a product or service can also simplify design complexity. Using
the same components across a range of automobiles is a common practice. Likewise,
standardizing the format of information inputs to a process can be achieved by using
appropriately designed forms or screen formats. The more different products and services can be
based on common components, the less complex it is to produce them. For example, the
European aircraft maker Airbus has designed its new generation of jetliners with a high degree of
commonality. Airbus developed full design and operational commonality with the introduction
of fly-by-wire technology on its civil aircraft in the late 1980s. This meant that ten aircraft
models ranging from the 100-seat A318 through to the world’s largest aircraft, the 555-seat
A380, feature virtually identical flight decks, common systems and similar handling
characteristics.

Modularization
The use of modular design principles involves designing standardized ‘sub-components’ of a
product or service which can be put together in different ways. It is possible to create wide
choice through the fully interchangeable assembly of various combinations of a smaller number
of standard sub-assemblies; computers are designed in this way, for example. These standardized
modules, or sub-assemblies, can be produced in higher volume, thereby reducing their cost.
Similarly, the package holiday industry can assemble holidays to meet a specific customer
requirement, from pre-designed and purchased air travel, accommodation, insurance, and so on.
In education, too, there is increasing use of modular courses which give ‘customers’ choice
while allowing each module to run at economical student volumes. This issue is taken up again
in later sections of this chapter.
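
The arithmetic behind modular variety is simple multiplication: the number of distinct end products is the product of the number of interchangeable options for each module. A sketch with invented module names and option counts:

```python
# Variety from modularization: with a few interchangeable options per
# standard module, the number of distinct end products is the product
# of the option counts. Module names and counts are hypothetical.
modules = {"casing": 3, "processor": 4, "memory": 3, "storage": 4}

def variants(modules):
    total = 1
    for options in modules.values():
        total *= options
    return total

print(variants(modules))  # 3 * 4 * 3 * 4 = 144 distinct products
```

Only 14 sub-assembly designs (3 + 4 + 3 + 4) yield 144 distinct products, which is why each module can still be produced at economical volumes.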

Design for manufacture (DFM)


Design for manufacture is the process of designing a product so that it can be produced easily
and economically. The term was coined in an effort to emphasize the importance of
incorporating production/process design early in the design process. Design for
Manufacturability (DFM) is based on the premise that product designers can develop a product
design at the same time that the manufacturing engineers are developing a process to
manufacture the product. This allows both groups to more accurately track product cost and to
ensure that the final product can be produced. This requires teamwork between the two groups of
engineers.

When successful, DFM not only improves the quality of product design but also reduces both the
time and cost of product design and manufacture. Specific DFM software can recommend
materials and processes appropriate for a design and provide manufacturing cost estimates
throughout the design process. More generally, DFM guidelines promote good design practice,
such as:
 Minimize the number of parts and subassemblies.
 Avoid tools, separate fasteners, and adjustments.
 Use standard parts when possible and repeatable, well-understood processes.
 Design parts for many uses, and modules that can be combined in different ways.
 Design for ease of assembly, minimal handling, and proper presentation.
 Allow for efficient and adequate testing and replacement of parts.

d. Define the process to create the package


The product/service structure and bill of materials specify what goes into a product. It is
at this stage in the design process that it becomes necessary to examine how a process could put
together the various components to create the final product or service. At one time this activity
would have been delayed until the very end of the design process. However, this can cause
problems if the designed product or service cannot be produced to the required quality and cost
constraints. Late changes in design are both costly and disruptive. An adjustment in one part may
necessitate an adjustment in other parts, “unraveling” the entire product design. That’s why
production design is considered in the preliminary design phase. For now, what is important to
understand is that processes should at least be examined in outline well before any product or
service design is finalized.

Stage 4: Design Evaluation and Improvement


The purpose of this stage in the design activity is to take the preliminary design and see if it can
be improved before the product or service is tested in the market. There are a number of
techniques that can be employed at this stage to evaluate and improve the preliminary design.
Here we treat the following which have proved particularly useful:
 Quality function deployment (QFD)
 Value engineering (VE)
 Taguchi methods
 Failure mode and effects analysis, and
 Fault tree analysis.

Quality Function Deployment (QFD)


The key purpose of quality function deployment (QFD) is to try to ensure that the eventual
design of a product or service actually meets the needs of its customers. It is a technique that was
developed in Japan at Mitsubishi’s Kobe shipyard and used extensively by Toyota, the motor
vehicle manufacturer, and its suppliers.

It is also known as the ‘house of quality’ (because of its shape) and the ‘voice of the customer’
(because of its purpose). The technique tries to capture what the customer needs and how it
might be achieved. The QFD matrix is a formal articulation of how the company sees the
relationship between the requirements of the customer (the whats) and the design characteristics
of the new product (the hows).

Although the details of QFD may vary between its different variants, the principle is generally
common, namely to identify the customer requirements for a product or service (together with
their relative importance) and to relate them to the design characteristics which translate those
requirements into practice. In fact, this principle can be continued by making the hows from one
stage become the whats of the next.

QFD is particularly valuable when design trade-offs are necessary to achieve the best overall
solution, e.g. because some requirements conflict with others. QFD also enables a great deal of
information to be summarized in the form of one or more charts. These charts capture customer
and product data gleaned from many sources, as well as the design parameters chosen for the
new product. In this way they provide a solid foundation for further improvement in subsequent
design cycles.
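
The arithmetic at the heart of the QFD matrix can be sketched simply: each design characteristic (a how) receives a technical importance score equal to the sum, over all customer requirements (the whats), of the requirement's weight multiplied by the strength of its relationship to that characteristic. The requirements, weights, and 9/3/1 relationship scores below are invented for illustration:

```python
# QFD relationship matrix: rows are customer requirements (whats) with
# importance weights; columns are design characteristics (hows).
# Relationship strengths use the conventional 9/3/1 scale (absent = 0).
# All names and numbers here are illustrative assumptions.
whats = {"easy to hold": 5, "long battery life": 4, "low cost": 3}
hows = ["casing shape", "battery capacity", "component count"]
relationships = {
    "easy to hold":      {"casing shape": 9, "component count": 1},
    "long battery life": {"battery capacity": 9, "component count": 3},
    "low cost":          {"component count": 9, "casing shape": 1},
}

def technical_importance(whats, hows, rel):
    """Weighted technical-importance score for each design characteristic."""
    return {
        how: sum(w * rel[what].get(how, 0) for what, w in whats.items())
        for how in hows
    }

scores = technical_importance(whats, hows, relationships)
for how, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(how, s)
```

The ranked scores indicate where design effort should concentrate; in a full QFD exercise these hows would become the whats of the next-stage matrix.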

Value Engineering
The purpose of value engineering is to try to reduce costs, and prevent any unnecessary costs,
before producing the product or service. Simply put, it tries to eliminate any costs that do not
contribute to the value and performance of the product or service. (‘Value analysis’ is the name
given to the same process when it is concerned with cost reduction after the product or service
has been introduced.) Value-engineering programs are usually conducted by project teams
consisting of designers, purchasing specialists, operations managers and financial analysts. The
chosen elements of the package are subject to rigorous scrutiny, by analysing their function and
cost, then trying to find any similar components that could do the same job at lower cost. The
team may attempt to reduce the number of components, or use cheaper materials, or simplify
processes.
Value engineering requires innovative and critical thinking, but it is also carried out using a
formal procedure. The procedure examines the purpose of the product or service, its basic
functions and its secondary functions. Taking the example of the remote mouse used previously:
 The purpose of the remote mouse is to communicate with the computer.
 The basic function is to control presentation slide shows.
 The secondary function is to be plug-and-play-compatible with any system.
Team members would then propose ways to improve the secondary functions by combining,
revising or eliminating them. All ideas would then be checked for feasibility, acceptability,
vulnerability and their contribution to the value and purpose of the product or service.
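
The scrutiny of component cost against function can be made quantitative. A minimal sketch, assuming a simple worksheet in which each component carries a cost and a functional-importance rating (all names and figures are invented):

```python
# Value-analysis worksheet: flag components whose share of total cost
# exceeds their share of total functional importance. Such components
# are candidates for cost reduction. All figures are hypothetical.
components = {
    # name: (unit cost, functional importance rating)
    "upper casing": (1.20, 3),
    "lower casing": (1.00, 3),
    "control unit": (4.50, 9),
    "packaging":    (2.30, 1),
}

def cost_targets(components):
    total_cost = sum(c for c, _ in components.values())
    total_imp = sum(i for _, i in components.values())
    flagged = []
    for name, (cost, imp) in components.items():
        if cost / total_cost > imp / total_imp:
            flagged.append(name)  # costs more than its function justifies
    return flagged

print(cost_targets(components))
```

Here the packaging absorbs about a quarter of the cost while delivering little function, so the team would look for a cheaper material or a simpler design for it first.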

Taguchi methods
The main purpose of Taguchi methods, as advocated by Genichi Taguchi, is to test the
robustness of a design. The basis of the idea is that the product or service should still perform in
extreme conditions. A telephone, for example, should still work even when it has been knocked
onto the floor. Although one does not expect customers to knock a telephone to the floor, this
does happen, and so the need to build strength into the casing should be considered in its design.
Product and service designers therefore need to brainstorm to try to identify all the possible
situations that might arise and check that the product or service is capable of dealing with those
that are deemed to be necessary and cost-effective. The major problem designers face is that the
number of design factors which they could vary to try to cope with the uncertainties, when taken
together, is very large. For example, in designing the telephone casing there could be many
thousands of combinations of casing size, casing shape, casing thickness, materials, jointing
methods, etc. Performing all the investigations (or experiments, as they are called in the Taguchi
technique) to try to find a combination of design factors which gives an optimum design can be a
lengthy process. The Taguchi procedure is a statistical procedure for carrying out relatively few
experiments while still being able to determine the best combination of design factors. Here
‘best’ means the lowest cost and the highest degree of uniformity.
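
The scale of the problem, and the saving from a fractional design, can be illustrated with a small sketch. The factor names follow the telephone-casing example; the half-fraction shown is the classic two-level fractional factorial rather than a full Taguchi orthogonal array, but it conveys the same idea of running far fewer experiments:

```python
# Full factorial vs. half-fraction experimental design.
# Each factor takes two levels, coded -1 and +1. A full factorial over
# k factors needs 2**k runs; keeping only runs whose level product is +1
# gives the classic half-fraction (2**(k-1) runs), in the same spirit as
# Taguchi's orthogonal arrays. Factor names are illustrative.
from itertools import product
from math import prod

factors = ["casing size", "casing shape", "casing thickness", "material"]

full = list(product([-1, +1], repeat=len(factors)))
half = [run for run in full if prod(run) == +1]

print(len(full), len(half))  # 16 runs vs. 8 runs
```

The half-fraction remains balanced: each factor still appears at each level in exactly half of the retained runs, which is what allows the main effects to be estimated from far fewer experiments.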

Failure mode and effects analysis (FMEA)


Failure mode and effects analysis (FMEA) is a procedure for analyzing potential failure modes
within a system, classifying them by severity, and determining their effects upon the
system. It is widely used in the manufacturing industries in various phases of the product life
cycle and is now increasingly finding use in the service industry as well. Failure causes are any
errors or defects in process, design, or item especially ones that affect the customer, and can be
potential or actual. Effects analysis refers to studying the consequences of those failures. It
begins with listing the functions of the product and each of its parts. Failure modes are then
defined and ranked in order of their seriousness and likelihood of failure. Failures are addressed
one by one (beginning with the most catastrophic), causes are hypothesized, and design changes
are made to reduce the chance of failure. The objective of FMEA is to anticipate failures and
prevent them from occurring.
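
Although the text above ranks failures by seriousness and likelihood, FMEA practice commonly quantifies the ranking with a risk priority number (RPN): the product of severity, occurrence, and detection ratings, each typically scored from 1 to 10. The sketch below illustrates the arithmetic; the failure modes and ratings are invented:

```python
# FMEA ranking by risk priority number: RPN = severity * occurrence * detection.
# Ratings run 1-10; higher means worse (for detection, harder to detect).
# Failure modes and ratings are hypothetical.
failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("casing cracks on impact",  7, 4, 3),
    ("battery contact corrodes", 5, 6, 6),
    ("button sticks",            3, 5, 2),
]

def ranked_by_rpn(modes):
    scored = [(name, s * o * d) for name, s, o, d in modes]
    return sorted(scored, key=lambda x: -x[1])  # most critical first

for name, rpn in ranked_by_rpn(failure_modes):
    print(f"{name}: RPN={rpn}")
```

Design changes are then addressed starting from the top of the ranked list, and the RPN is recalculated afterwards to confirm the risk has actually been reduced.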

Fault tree analysis (FTA)


Fault tree analysis is a visual method of analyzing the interrelationships among failures. FTA
lists failures and their causes in a tree format using two hat-like gate symbols: an AND gate
(straight line on the bottom), whose output occurs only if all of its inputs occur, and an OR
gate (curved line on the bottom), whose output occurs if any one of its inputs occurs.
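
The gate logic translates directly into probability arithmetic for independent basic events: an AND gate multiplies the input probabilities, while an OR gate combines them as one minus the product of the survival probabilities. The events and probabilities in this sketch are invented for illustration:

```python
# Fault-tree evaluation for independent basic events.
# AND gate: all inputs must fail  -> product of probabilities.
# OR gate:  any input failing is enough -> 1 - product of (1 - p).
# Event names and probabilities are illustrative assumptions.
from math import prod

def gate_and(probs):
    return prod(probs)

def gate_or(probs):
    return 1 - prod(1 - p for p in probs)

# Top event "device dead": both the main and the spare battery are flat
# (AND), or the cable is broken (OR with the battery subtree).
p_main, p_spare, p_cable = 0.05, 0.05, 0.01
p_top = gate_or([gate_and([p_main, p_spare]), p_cable])
print(p_top)
```

Working the probabilities up through the gates like this shows which branch dominates the top-event risk and therefore where redundancy or redesign pays off most.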

Stage 5: Prototyping and Final Design

At around this stage in the design activity it is necessary to turn the improved design into a
prototype so that it can be tested. It may be too risky to go into full production of the product
before testing it out, so it is usually more appropriate to create a prototype. Product prototypes
include everything from clay models to computer simulations. In the final design stage,
prototypes are built and tested. After several iterations, a pilot run of the process is conducted.
Adjustments are made as needed before the final design is agreed on. In this way, the design
specifications for the new product have considered how the product is to be produced, and the
manufacturing or delivery specifications more closely reflect the intent of the design. This should
mean fewer revisions in the design as the product is manufactured and service provided.

Service prototypes may also include computer simulations but also the actual implementation of
the service on a pilot basis. Many retailing organizations pilot new products and services in a
small number of stores in order to test customers’ reaction to them. Increasingly, it is possible to
store the data that define a product or service in a digital format on computer systems, which
allows this virtual prototype to be tested in much the same way as a physical prototype. This is
a familiar idea in some industries such as magazine publishing, where images and text can be
rearranged and subjected to scrutiny prior to them existing in any physical form. This allows
them to be amended right up to the point of production without incurring high costs. Now this
same principle is applied to the prototype stage in the design of three-dimensional physical
products and services. Virtual-reality-based simulations allow businesses to test new products
and services as well as visualize and plan the processes that will produce them. Individual
component parts can be positioned together virtually and tested for fit or interference. Even
virtual workers can be introduced into the prototyping system to check for ease of assembly or
operation. The final design consists of detailed drawings and specifications for the new product
or service.

At this juncture it is vital to note that the overall product design process should address two
important issues: the ability to get the product quickly to market and friendliness to the
ecosystem.

Reducing Time-to-Market


The ability to get new products to the market quickly has revolutionized the competitive
environment and changed the nature of manufacturing. Its benefits come from the reduction in
the elapsed time for the whole design activity, from concept through to market introduction. This
is often called the time to market (TTM). The argument in favour of reducing time to market is
that doing so gives increased competitive advantage. In other words, shorter TTM means that
companies get more opportunities to improve the performance of their products or services.

If the development process takes longer than expected (or even worse, longer than competitors’)
two effects are likely to show. The first is that the costs of development will increase. Having to
use development resources, such as designers, technicians, subcontractors, and so on, for a
longer development period usually increases the costs of development. Perhaps more seriously,
the late introduction of the product or service will delay the revenue from its sale (and possibly
reduce the total revenue substantially if competitors have already got to the market with their
own products or services). The net effect of this could be not only a considerable reduction in
sales but also reduced profitability – an outcome which could considerably extend the time
before the company breaks even on its investment in the new product or service.
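
The way a delay extends the break-even point can be illustrated with a deliberately simple cash-flow sketch. All figures are invented: the delay both adds development cost and postpones revenue, so break-even slips by more than the length of the delay itself.

```python
# Break-even effect of a late launch: a hypothetical cash-flow model.
# A delay of `delay` months keeps incurring monthly development cost and
# postpones the start of sales margin. All figures are illustrative.
def months_to_break_even(dev_cost_per_month, dev_months, delay,
                         margin_per_month):
    total_dev = dev_cost_per_month * (dev_months + delay)
    months_selling = total_dev / margin_per_month  # to recover the outlay
    return dev_months + delay + months_selling

on_time = months_to_break_even(100_000, 12, 0, 50_000)
late = months_to_break_even(100_000, 12, 3, 50_000)
print(on_time, late)  # 36.0 45.0: a 3-month delay slips break-even by 9 months
```

In this toy example the three-month slip costs three extra months of development spend plus six more months of selling to recover it, before even counting the revenue lost to faster competitors.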

A number of factors have been suggested which can significantly reduce time to market for a
product or service, including the following:
 Using design technologies
 Simultaneous development of the various stages in the overall process;
 Heavyweight cross-functional teams
 Skunkworks
 A strategic management of the development project.

a. Using design technologies:

The ability to get new products to the market quickly has been often influenced by the
advancement of technology available for designing products. It begins with computer-aided
design (CAD) and includes related technologies such as computer-aided engineering (CAE),
computer-aided manufacturing (CAM), and collaborative product design (CPD).

Computer-aided design (CAD) is a software system that uses computer graphics to assist in the
creation, modification, and analysis of a design. A geometric design is generated that includes
not only the dimensions of the product but also tolerance information and material specifications.
The ability to sort, classify, and retrieve similar designs from a CAD database facilitates
standardization of parts, prompts ideas, and eliminates building a design from scratch.

CAD systems provide the computer-aided ability to create and modify product drawings.
These systems allow conventionally used shapes such as points, lines, arcs, circles and text, to be
added to a computer-based representation of the product. Once incorporated into the design,
these entities can be copied, moved about, rotated through angles, magnified or deleted.

The designs thus created can be saved in the memory of the system and retrieved for later use.
This enables a library of standardized drawings of parts and components to be built up. The
simplest CAD systems model only in two dimensions in a similar way to a conventional
engineering ‘blueprint’. More sophisticated systems model products in three dimensions.
The most obvious advantage of CAD systems is that their ability to store and retrieve design data
quickly, as well as their ability to manipulate design details, can considerably increase the
productivity of the design activity. In addition to this, however, because changes can be made
rapidly to designs, CAD systems can considerably enhance the flexibility of the design activity,
enabling modifications to be made much more rapidly. Further, the use of standardized libraries
of shapes and entities can reduce the possibility of errors in the design.

CAD-generated products can also be tested more quickly. Engineering analysis, performed with
a CAD system, is called computer-aided engineering (CAE). CAE retrieves the description and
geometry of a part from a CAD database and subjects it to testing and analysis on the computer
screen without physically building a prototype. CAE can maximize the storage space in a car
trunk, detect whether plastic parts are cooling evenly, and determine how much stress will cause
a bridge to crack. With CAE, design teams can watch a car bump along a rough road, the pistons
of an engine move up and down, a golf ball soar through the air, or the effect of new drugs on
virtual DNA molecules.

Computer-aided manufacturing (CAM) involves the automatic conversion of CAD design data
into processing instructions for computer-controlled equipment and the subsequent manufacture
of the part as it was designed. This integration of design and manufacture (CAD/CAM) can save
enormous amounts of time, ensure that parts and products are produced precisely as intended,
and facilitate revisions in design or customized production.
Besides the time savings, CAD and its related technologies have also improved the quality of
designs and the products manufactured from them. The communications capabilities of CAD
may be more important than its processing capabilities in terms of design quality. CAD systems
enhance communication and promote innovation in multifunctional design teams by providing a
visual, interactive focus for discussion. Watching a vehicle strain its wheels over mud and ice
prompts ideas on product design and customer use better than stacks of consumer surveys or
engineering reports. New ideas can be suggested and tested immediately, allowing more
alternatives to be evaluated. To facilitate discussion or clarify a design, CAD data can be sent
electronically between designer and supplier or viewed simultaneously on computer screens by
different designers in physically separate locations. Rapid prototypes can be tested more
thoroughly with CAD/CAE.

More prototypes can be tested as well. CAD improves every stage of product design and is
especially useful as a means of integrating design and manufacture. With so many new designs
and changes in existing designs, a system is needed to keep track of design revisions. Such a
system is called product lifecycle management (PLM). PLM stores, retrieves, and updates
design data from the product concept, through manufacturing, revision, service, and retirement of
the product.

Collaborative product design systems: The benefits of CAD-designed products are magnified
when combined with the ability to share product-design files and work on them in real time from
physically separate locations. Collaborative design can take place between designers in the same
company, between manufacturers and suppliers, or between manufacturers and customers.
Manufacturers can send out product designs electronically with request for quotes (RFQ) from
potential component suppliers. Or performance specs can be posted to a Web site from which
suppliers can create and transmit their own designs. Designs can receive final approval from
customers before expensive processing takes place. A complex design can involve hundreds of
suppliers. The Web allows them to work together throughout the design and manufacturing
processes, not just at the beginning and the end.

Software systems for collaborative design are loosely referred to as collaborative product
design (CPD). These systems provide the interconnectivity and translation capabilities necessary
for collaborative work across platforms, departments, and companies. In conjunction with PLM
systems, they also manage product data, set up project workspaces, and follow product
development through the entire product lifecycle. Collaborative design accelerates product
development, helps to resolve product launch issues, and improves the quality of the design.
Designers can conduct virtual review sessions, test “what if ” scenarios, assign and track design
issues, communicate with multiple tiers of suppliers, and create, store, and manage project
documents.

Skunkworks
A ‘Skunkworks’ is taken to mean a small team who are taken out of their normal work
environment and granted freedom from their normal management activities and constraints. It is
a well-known approach to releasing the design and development creativity of a group. It was an
idea that originated in the Lockheed aircraft company in the 1940s, where designers were set up
outside the normal organizational structure and given the task of designing a high-speed fighter
plane. The experiment was so successful that the company continued with it to develop other
innovative products. Encouraging creativity in design while at the same time recognizing the
constraints of everyday business life has always been one of the great challenges of industrial
design; the skunkworks approach is one response to that challenge.

Concurrent Engineering/ Simultaneous development


Earlier in the chapter we described the design process as essentially a set of individual,
predetermined stages. Sometimes one stage is completed before the next one commences. This
step-by-step, or sequential, approach has traditionally been the typical form of product/ service
development. It has some advantages. It is easy to manage and control design projects organized
in this way, since each stage is clearly defined. In addition, each stage is completed before the
next stage is begun, so each stage can focus its skills and expertise on a limited set of tasks. The
main problem of the sequential approach is that it is both time-consuming and costly. When each
stage is separate, with a clearly defined set of tasks, any difficulties encountered during the
design at one stage might necessitate the design being halted while responsibility moves back to
the previous stage.

Yet often there is really little need to wait until the absolute finalization of one stage before
starting the next. For example, perhaps while generating the concept, the evaluation activity of
screening and selection could be started. It is likely that some concepts could be judged as ‘non-
starters’ relatively early on in the process of idea generation. Similarly, during the screening
stage, it is likely that some aspects of the design will become obvious before the phase is finally
complete. Therefore, the preliminary work on these parts of the design could be commenced at
that point. This principle can be taken right through all the stages, one stage commencing before
the previous one has finished, so there is simultaneous or concurrent work on the stages. (Note
that simultaneous development is often called simultaneous (or concurrent) engineering in
manufacturing operations.)

It is an approach that brings many people together in the early phase of product design in order to
simultaneously design the product and the process. This type of approach has been found to
achieve a smooth transition from the design stage to actual production in a shorter amount of
development time with improved quality results. The old approach to product and process design
was to first have the designers of the idea come up with the exact product characteristics. Once
their design was complete they would pass it on to operations who would then design the
production process needed to produce the product. This was called the “over-the-wall” approach,
because the designers would throw their design “over-the-wall” to operations who then had to
decide how to produce the product.
There are many problems with the old approach. First, it is very inefficient and costly. A second
problem is that the “over-the-wall” approach takes a longer amount of time. The third problem is
that the old approach does not create a team atmosphere, which is important in today’s work
environment. Rather, it creates an atmosphere where each function views its role separately in a
type of “us versus them” mentality. With the old approach, when the designers were finished
with the designs, they considered their job done. If there were problems, each group blamed the
other. With concurrent engineering the team is responsible for designing and getting the product
to market. Team members continue working together to resolve problems with the product and
improve the process.
In concurrent engineering, early involvement is also promoted as early conflict resolution,
providing a forum for discussion of issues and the raising of assumptions that each group has
about others’ processes.

Heavyweight cross-functional teams

A major operational issue concerns the structure of the organization that is created for the design.
Whilst many operations activities are functional (under the direct control of the operations
manager), new product design involves many other parts of the organization (particularly
marketing, R&D and finance), and often people and resources outside the organization
(particularly suppliers and customers). A success story using this approach was in the
development of the Ford Taurus, which replaced the Honda Accord as the biggest selling car in
the USA in 1992. Ford used Team Taurus to bring together representatives from design,
engineering, manufacturing, sales, marketing and service and suppliers in the earliest stages of
the car’s design to bring about this success.

Heavyweight cross-functional teams are drawn from many functions within the organization, but
arranged so that the new product design manager has direct authority over them, over and above
that of their own functional managers. In effect, for the duration of that design project, those
people are transferred to the new product design team full-time. The role of the new product
design manager can change in this instance from one of coordination to one of having full
control over the team.

Strategic management of development projects

Development projects are key in terms of a firm developing competitive advantage through its
operations. Moving development projects from an isolated area of the firm into a mainstream
process requires a different approach to their management. Hence, an organization needs to have
a design strategy so that each project can be managed strategically. The product design strategy
is the master plan that links product design project to the corporate plan. It is this integration with
the corporate plan that is a common theme throughout the discussion of modern operations
management. In taking this strategy and operationalizing it, there are a large number of
considerations for the manager. These include the number and type of tools and techniques that
the design team will use to assist in the process.

Design for Environment

Design for environment (DFE) involves many aspects of design, such as designing products
from recycled material, reducing hazardous chemicals, using materials or components that can be
recycled after use, designing a product so that it is easier to repair than discard, and minimizing
unnecessary packaging.

As society becomes more environmentally conscious and focuses on efforts such as recycling
and eliminating waste, images of overflowing landfills, toxic streams,
and global warming have prompted governments worldwide to enact laws and regulations
protecting the environment and rewarding environmental stewardship.

Extended producer responsibility (EPR) is a concept that holds companies responsible for
their product even after its useful life. German law mandates the collection, recycling, and safe
disposal of computers and household appliances, including stereos and video appliances,
televisions, washing machines, dishwashers, and refrigerators. Some manufacturers pay a tax for
recycling; others include the cost of disposal in a product's price. Norwegian law requires
producers and importers of electronic equipment to recycle or reuse 80% of the product.
Nineteen U.S. states now have take-back laws that require the return and recycling of batteries,
appliances, and other electronics. Brazil considers all packaging that cannot be recycled
hazardous waste. The European Union requires that 80% of the weight of discarded cars must be

56
reused or recycled. Companies responsible for disposing of their own products are more
conscious of the design decisions that generated the excess and toxic waste that can be expensive
to process.

Hence, it became evident that the place to start meeting environmental requirements is with
green product design, or what is more globally referred to as design for environment (DFE).
Similarly, the closely related concept known as remanufacturing has also been gaining
increasing importance. Remanufacturing uses components of old products in the production of
new ones. In addition to the environmental benefits, there are significant cost benefits because
remanufactured products can be half the price of their new counterparts. Remanufacturing has
been quite popular in the production of computers, televisions, and automobiles.

3.2 Process Design


What is a process?
There are different schools of thought on what constitutes a process.

ISO 9000 defines a process as a set of interrelated or interacting activities which transforms
inputs into outputs, and goes on to state that processes in an organization are generally planned
and carried out under controlled conditions to add value. The inclusion of the word ‘generally’
suggests that organizations may have processes that are not planned, are not carried out under
controlled conditions and do not add value, and indeed they do!

Juran defines a process (Juran, J. M., 1992) as a systematic series of actions directed to the
achievement of a goal. In Juran’s model the inputs are the goals and product features and the
outputs are product features required to meet customer needs. The ISO 9000 definition does not
refer to goals or objectives.

Hammer defines a process (Hammer, Michael and Champy, James, 1993) as a collection of
activities that takes one or more kinds of inputs and creates an output that is of value to the
customer. Hammer places customer value as a criterion for a process unlike the ISO 9000
definition.

The concept of adding value and the party receiving the added value is seen as important in these
definitions. This distinguishes processes from procedures.

It is easy to see how these definitions can be misinterpreted and result in people simply drawing
flowcharts and calling them processes. They may describe the process flow but they are not in
themselves processes because they simply define transactions.

The concept of process may be conceived from different perspectives. For example,
conceptualizing it at the macro and micro levels is very common. Macro-processes are multi-functional in nature, consisting of numerous micro-processes. Macro-processes deliver business outputs and have been referred to as business processes for a decade or more. For processes to be classed as business processes they need to form a chain with the same stakeholder at each end: the input is an input to the business and the output is an output from the business, so that the outputs can be measured in terms of the inputs. If the outputs were merely a translation of the inputs they could not be measured against them.
There is a view that design is not a business process because the stakeholders are different at each end: the input end could be marketing and the output end production. By the same logic, production alone would not be a business process either, because its input could come from sales and its output go to the customer. The business process flow is therefore: customer – sales – production – distribution – finance – customer.

The sales process takes the order from the customer and routes it to the production process. The
production process supplies product to the distribution process and the distribution process
delivers product to the customer, collects the cheque and routes it to the finance process where it
is put into the bank and turned into cash. The business process is therefore ‘order to cash’, or order fulfillment. With this convention, there would be only four business processes in most organizations: business management (owners/shareholders at both ends), marketing (the managing director at both ends), order fulfillment (linking customer to customer), and resource management (resource users at both ends).

Micro-processes deliver departmental outputs and are task oriented. These are sometimes called
work processes.

Accordingly, ‘process’ in the case of operations management refers to this second category: processes that convert inputs into outputs of added value for the external interested parties. It is from this perspective that we discuss processes in this section.

What is process design?


Operations managers are responsible for the design, and /or redesign of processes. Through
processes, we operationalize strategies (turn them into reality) and create and deliver the
product/ service offerings required. The operations manager’s role is vitally important in
integrating all the contributors into the design/redesign process.

To ‘design’ is to conceive the looks, arrangement, and workings of something before it is created. In that sense it is a conceptual exercise, yet one which must deliver a solution that will work in practice. Design is also an activity that can be approached at different levels of detail: one may envisage the general shape and intention of something before getting down to defining its details. This is certainly true for process design. At the start of the process design activity it is important to understand the design objectives, especially when the overall shape and nature of the process is being decided. The most common way of doing this is by positioning the process according to its volume and variety characteristics. Eventually the details of the process must be analyzed to ensure that it fulfills its objectives effectively.

As defined previously, a process is a group of related tasks with specific inputs and outputs, existing to create value for the customer, the shareholder, or society. Process design, then, defines what tasks need to be done and how they are to be coordinated among functions, people, and organizations.
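The view of a process as a coordinated series of tasks transforming inputs into outputs can be sketched in code. This is only an illustrative model; the function and task names (`run_process`, `take_order`, and so on) are invented for this example.

```python
# A minimal sketch of a process as a coordinated series of tasks (names are
# illustrative). Each task transforms its input and hands the result to the
# next, mirroring the "inputs -> interrelated activities -> outputs" view.

def run_process(tasks, raw_input):
    """Apply each task in sequence, so value is added step by step."""
    result = raw_input
    for task in tasks:
        result = task(result)
    return result

# Example: a simplified order-fulfillment chain (order to cash), as in the text.
take_order = lambda order: {**order, "status": "accepted"}
produce    = lambda order: {**order, "status": "produced"}
deliver    = lambda order: {**order, "status": "delivered"}

final = run_process([take_order, produce, deliver],
                    {"item": "chair", "status": "new"})
print(final["status"])  # delivered
```

The point of the sketch is simply that the output is measured against the input of the same chain: the order enters, value is added at each step, and the completed order leaves.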

Factors Affecting Process Design


The operations manager has a key decision to make regarding process design that will determine
the future success or failure of the operation. Specific decisions are made concerning the
capacities and capabilities that the operation should have.

One input is the combination of the organization’s and the customer’s requirements. These
requirements are often expressed in terms of:
• Scope of operations – how much of the task will be done by the firm, in-house, and how
much by suppliers and customers
• Scale of operations – the required capacity of the operation (e.g. how many products per
hour or customers per day the system is required to handle)
• The cost of the products and services
• The time allowable for delivery of products and services to customers.

Each of these will in turn be influenced by the organizational strategy, including the level of investment that the organization is prepared to make in these processes.
Nature of the product includes the characteristics of what is to be transformed. The production of
large-scale engineering products is clearly a quite different matter from the delivery of Internet-
based services. However, these characteristics will change with time, particularly with advances
in new technology – for example, the way that metal products are made today in highly
automated processes is very different from the processes that existed even 30 years ago. As this factor plays a very important role, we discuss further how product design influences the type of process to be selected in the forthcoming section.

Existing processes provide a major input to the design decision. The level of investment that
many firms have made in processes can and does provide a level of inertia, which can work
against change or improvement. A production line or any piece of technology may take many
years to repay the investment made in it. Furthermore, a process may have been in existence for
some time and the firm has gained considerable knowledge about how it works. Thus there can
sometimes be a dangerous mentality of ‘if it ain’t broke, don’t fix it’, which is totally alien to the practices of continuous improvement.

Supplier capacity and capabilities is the last major input to this decision. As well as considering
the operation itself, process decisions should include a consideration of the entire supply network
of which the operation is a part. Greater levels of integration with suppliers and their inclusion as
part of the process design are powerful and important inputs to process design.

Once we know what the requirements are, we can select the best process type for those
requirements. As with product design the decision will rarely be clear-cut, and we will have to
make trade-off decisions, such as between unit cost and flexibility.

Process Types
Process types refer to broad categories of operations configurations, which are available for the
operations manager to select from. They are general approaches to designing and managing
processes. The approach has been developed based on the effect that the position of a process on
the volume-variety continuum would have on the overall design and the general approach to
managing its activities. Different terms are sometimes used to identify process types depending
on whether they are predominantly manufacturing or service processes, and there is some
variation in the terms used. For example, it is not uncommon to find the ‘manufacturing’ terms
used in service industries. Figure 3.4 illustrates how these ‘process types’ are used to describe
different positions on the volume–variety spectrum. We will consider the five process types.
Process choice provides essential clues about how a firm competes and what it can, and cannot, do. The choice of transformation process dictates, to a large extent, what the company ‘sells’ in terms of its capabilities and how it can compete. There may
be more than one process type being used within the same company, but there will usually be a
dominant ‘core’ process that is best suited to support the company in the market.

There are five basic types of process choice:


1. Project.
2. Job.
3. Batch.
4. Line.
5. Continuous process.

Project Processes
In project manufacturing environments, the nature of the products is often large-scale and
complex. The designs of the products undertaken in project manufacturing are, essentially,
unique by virtue of their not being repeated in exactly the same way. The distinguishing feature
between project and job manufacture is that, during the process of completion, the product in
project manufacture tends to be ‘fixed’. Scheduling of projects tends to be undertaken in a
‘phased completion’ programme, where each phase of completion will be distinct and separate
from other subsequent, or parallel, stages. At the simplest level of management, tools such as
Gantt charts will be used. Alternatively, more complicated programmes such as project network
planning will be employed.

Examples:
In manufacture this includes civil engineering of various types, aerospace and some major high-
tech projects – flight simulator manufacture would tend to fall into this category, for example.
Projects tend to be ‘one-offs’, where repetition in terms of the product being exactly the same is
unlikely. Construction in all forms – bridge manufacture, tunnel construction and shipbuilding –
is a common application of project process choice.

In services, all types of consulting would fall into this category. The relationship, expectations
and outcomes with each client should be seen as ‘unique’; each session with a client should be
seen as unique. This means that the project process links to Schmenner’s ‘professional services’
category within the matrix (see Figure 3.3).

Figure 3.3. Service Matrix (after Schmenner). The matrix positions services by degree of labour intensity and degree of customer interaction:
• Service factory (low labour intensity, low customer interaction) – examples: airlines, hotels, trucking, fast food, amusement parks.
• Service shop (low labour intensity, high customer interaction) – examples: hospitals, auto repair, upscale restaurants, copy shops, dentists.
• Mass service (high labour intensity, low customer interaction) – examples: retailing, wholesaling, schools, dry cleaners, film developers.
• Professional services (high labour intensity, high customer interaction) – examples: doctors, lawyers, counsellors/psychiatrists, investment bankers, realtors.

Job processes
In manufacturing, job processes are used for ‘one-off’ or very small order requirements, similar
to project manufacture. However, the difference is that the product can often be moved during
manufacture. Perceived uniqueness is often a key factor for job manufacture. The volume is very
small and, as with project manufacture, the products tend to be a ‘one-off’ in terms of design; it
is very unlikely that they will be repeated in the short term and therefore investment in dedicated
technology for a particular product is unlikely. Investment in automation is for general purpose
process technology rather than product specific investment. Many different products are run
throughout the plant, and materials handling has to be modified and adjusted to suit many
different products and types. Detailed planning will revolve around sequencing requirements for each product, capacities for each work centre and order priorities; because of this, scheduling is relatively complicated in comparison to repetitive ‘line’ manufacture.
Examples:
In manufacture, job processes are linked to traditional craft manufacture. Making special haute
couture clothing is a clear example. Job processes are common in the following:
a. Making prototypes of new products- even if the end volume is likely to be high for the
product, it makes sense to produce a ‘one-off’ or very low volume, which lends itself to
job manufacture.
b. Making unique products such as machines, tools and fixtures to make other products. The
process choice (job) is linked to the process layout.
In services, a job process is linked to the ‘service shop’ in Schmenner’s matrix. Car repairs and
many hospital service activities are job processes.

Batch Processes
As volume begins to increase, either in terms of individual products (i.e. total volume) or in the
manufacture of similar ‘types’ or ‘families’ of products (i.e. greater number of products in any
one group or family), the process will develop into batch manufacture. The difficulty in batch
manufacturing is that competitive focus can often become blurred – management attention
becomes fixed upon optimizing the batch conditions to the detriment of customer service. The
batch process is therefore often difficult to manage; the key is to map the range of products in
terms of either ‘job’ or ‘line’ characteristics.

Batch production may be arranged either in terms of the similarity of finished products or by
common process groupings. As a starting point, each product has to be determined by its
volume; focused ‘cells’ of manufacture will then be arranged so that low and high volumes can
be separated. Automation, especially for lower volumes of batch manufacturing, tends to be
general purpose rather than dedicated to a particular product whose volume does not demand
product-specific investment in automation. Scheduling is often complicated and has to be
completely reviewed on a regular basis – this applies to new products, to ‘one-offs’ and to higher
volume, standard products: all of these types will need to be scheduled.

In batch production, operators have to be able to perform a number of functions. This is clearly
also true for ‘job’-type processes, but in batch this flexibility is crucial, as it allows operators to
move to various workstations, as and when required. Where automation is being used, set-up
times need to be short, the ideal set-up time being that necessary to accommodate run lengths of
just one unit, switching over to other models and volumes as required.

Examples:
Batch is the most common form of process in engineering and the most difficult to manage.
Typical examples of this in manufacture will be in plastic moulding production – these would be
distinguished by determining those products that need much labour input (hand laminating in
glass-reinforced plastic, for example) and high-volume ‘standard’ products, where considerable
automation would be appropriate. Other examples include bread making – where batches of
similar types are produced. In general, batch processes link to process layout, although high-
volume batch will tend to have a type of line (product) layout, depending upon how often the
product is reproduced.

In services, ‘batching a process’ has become common in routing procedures for call centres. The
response message to many telephone call centres is: ‘press “1” for this service’, ‘press “2” for
that service’ and so on. If the service centre adds the message: ‘press “0” for all other enquiries’,
this puts the service provision back into a job-type service. This will equate either with a mass
service or a service shop in Schmenner’s matrix, depending on the extent of customization
involved with the customer.

Line processes
A line process becomes more appropriate as the volume of a particular product increases, leading
to greater standardization than in low batch volumes. Each stage of manufacture will be distinct
from the next; value and cost are added at each stage of manufacture until the product is
completed. The line is dedicated to a particular product (with possible variations of models) and
introducing new products that are significantly different from the former product is difficult or
even impossible to realize on an existing line manufacturing process. Individual operation
process times should be short – in order to satisfy delivery expectations. Competitive advantages
may be gained from simplification in production planning and control, and the tasks themselves
should also be simplified for each workstation. In line production, there should only be very
small amounts of work in process: where it does exist, it represents a poorly balanced line
loading and is seen as a signal for necessary improvement. Work in process is counted as an
asset by traditional accounting systems, but is actually a liability to the company as it represents
unsellable materials: unmanaged, this can ruin cash-flow and stifle quick response to market
requirements. Workstations should be located as closely as possible to each other to minimize
materials handling between them. Materials flow and control is critical and stock-outs have to be
avoided.
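The link between work in process and a poorly balanced line can be quantified with the standard line-balancing efficiency measure: total task time divided by the number of stations multiplied by the cycle time, where the cycle time is set by the slowest station. A minimal sketch, with invented station times:

```python
# Line-balancing efficiency: how well work is spread across stations.
#   efficiency = total task time / (number of stations x cycle time)
# The cycle time is set by the slowest station, which paces the whole line.
# Station times below are invented for illustration.

def balance_efficiency(station_times):
    cycle_time = max(station_times)   # the slowest station sets line speed
    total_work = sum(station_times)
    return total_work / (len(station_times) * cycle_time)

stations = [1.8, 2.0, 1.5, 1.9]       # minutes of work at each of four stations
print(round(balance_efficiency(stations), 2))  # 0.9
```

An efficiency of 0.9 means 10 per cent of the line’s paid capacity is idle time at the faster stations; rebalancing tasks between stations reduces both the idle time and the work in process that builds up in front of the slowest station.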

The disadvantages of line manufacture include the following:


• There can often be a lack of process flexibility and introducing new products on existing
technology can be difficult. This is alleviated to some degree by similar sub-components
which become included in the design for new products and which then allow the new
product to be made on existing lines.
• As standardization and volumes both increase, relative to batch and job manufacturing
processes, investment in technology also increases. Special product-specific technology
is used and this often involves vast amounts of firm-specific investment. Each workstation is dependent upon the next; consequently, the speed of the line is determined by the lowest-capacity work centre. Moreover, in ‘standard’ lines, if one set of machines is not operating, the whole line can come to a stop, preventing any production.
Examples:
High-volume, ‘standard’ products – such as particular models of cars, TVs, hi-fi, VCRs and
computers – lend themselves to line processes, often arranged in a U-shape. The process choice
(line) ties it to the product type of layout. In services, a sequential, line-type process can be put in
place where there is high standardization of the service offering.

This equates to Schmenner’s service factory quadrant. Where there is a high tangible element
within the offering – e.g. fast foods – the back-room facilities will resemble a factory and the
mode of delivery will go through specific stages. In less tangible elements, the service may
resemble a line process in that there may be, for example, set procedures to adopt for a particular
type of service process. For example, in dealing with high-volume, ‘standard’ applications – for
a mortgage – there will often be set sequences of events.

Continuous Processes

This is used when a process can (or must) run all day for each day of the year, on a continuous
basis. The volume of the product is typically very high and the process is dedicated to making
only one product. Huge investment in dedicated plant is often required. Much automation tends
to be evident and labour input is one of ‘policing’ rather than being highly skilled as an integral
input to the overall process.

Examples:
In manufacturing, a chemical refining plant, a blast furnace or steel works, and very-high-volume
food processing are all examples where a continuous process would be in place.
In services, strictly speaking, there is no real equivalent. For example, even though technology
might be in place to allow financial transactions to take place on a 24-hour basis, the amounts
being transferred from one account to another would vary: it is not a case of one transaction
being conducted many thousands of times.

Table 3.1. Summary of process types

1. Project: Highly flexible. Individualized output results in high unit costs. Mobile and flexible staff required. Quality determined by individual customer requirements.
2. Jobbing: Significant flexibility required, though volume is generally higher than for projects. Some repetition in the system, and many more common elements to the process than occur with projects. High unit costs relative to higher-volume processes, but low set-up costs.
3. Batch: Some flexibility to handle differences between batches still necessary, requiring some investment in set-up for each batch. Higher levels of specialization required in both people and machines.
4. Line: Highly specialized people and machines allow high rates of throughput and low unit costs. Limited flexibility usually associated with this process. Quality levels consistent.
5. Continuous: Usually non-discrete products produced over a significant period of time. Very high levels of investment required and limited possibility for flexibility due to highly dedicated processes. Commonly highly automated.

The product–process matrix


Making comparisons between different processes along a spectrum which goes, for example,
from shipbuilding at one extreme to electricity generation at the other has limited value. No one
grumbles that yachts are so much more expensive than electricity. The real point is that because
the different process types overlap, organizations often have a choice of what type of process to
employ. This choice will have consequences for the operation, especially in terms of its cost and flexibility. The classic representation of how cost and flexibility vary with process choice is the product–process matrix developed by Professors Hayes and Wheelwright of Harvard University. They represent process choices on a matrix with volume–variety as one
dimension, and process types as the other. Figure 3.4 shows their matrix adapted to fit with the
terminology used here. Most operations stick to the ‘natural’ diagonal of the matrix, and few, if
any, are found in the extreme corners of the matrix. However, because there is some overlap
between the various process types, operations might be positioned slightly off the diagonal.

The diagonal of the matrix shown in Figure 3.4 represents a ‘natural’ lowest cost position for an
operation. Operations which are on the right of the ‘natural’ diagonal have processes which
would normally be associated with lower volumes and higher variety. This means that their
processes are likely to be more flexible than seems to be warranted by their actual volume–
variety position. Put another way, they are not taking advantage of their ability to standardize
their processes. Because of this, their costs are likely to be higher than they would be with a
process that was closer to the diagonal. Conversely, operations that are on the left of the diagonal
have adopted processes which would normally be used in a higher-volume and lower-variety
situation. Their processes will therefore be ‘over-standardized’ and probably too inflexible for
their volume–variety position. This lack of flexibility can also lead to high costs because the
process will not be able to change from one activity to another as efficiently as a more flexible
process.
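The logic of the ‘natural diagonal’ can be illustrated as a rough lookup from a volume–variety position to the process type normally found there. The thresholds below are invented for illustration; in practice, positioning an operation on the matrix is a matter of judgement, not formula.

```python
# A rule-of-thumb sketch of the product-process matrix's natural diagonal:
# as volume rises and variety falls, the expected process type shifts from
# project towards continuous. All thresholds are invented for illustration.

def natural_process_type(annual_volume, distinct_variants):
    if distinct_variants == 1 and annual_volume > 1_000_000:
        return "continuous"   # one dedicated product, very high volume
    if annual_volume > 100_000:
        return "line"         # high volume, low variety
    if annual_volume > 1_000:
        return "batch"        # medium volume, families of products
    if annual_volume > 10:
        return "job"          # very small orders, general-purpose equipment
    return "project"          # one-offs, e.g. shipbuilding

print(natural_process_type(5, 5))        # project
print(natural_process_type(500_000, 8))  # line
```

An operation whose actual process type sits above or below what such a rule suggests is, in the matrix’s terms, off the diagonal, and is likely to carry either excess flexibility or excess rigidity, both of which raise cost.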

Figure 3.4: Deviating from the ‘natural’ diagonal on the product–process matrix has consequences for cost and flexibility. The matrix plots increasing volume against decreasing variety. Along its natural diagonal, the manufacturing process types run from project through jobbing, batch and line to continuous, with the corresponding service process types running from professional service through service shop and service factory to mass service. Positions to one side of the diagonal carry more process flexibility than needed, and positions to the other side less flexibility than needed; both raise cost.

Trends in Process Design
In many industries, mass production is still the dominant way of organizing the production of
high volumes of standardized products. A major change is that recently in many markets
customers or competitors have forced mass producers to change their approach to process design,
to allow them to compete in new ways. These new approaches have been made possible through
the application of computer and communications technology, coupled with new ways of thinking
about operations. Three new approaches to designing processes have created new options. Many operations are actively pursuing one or more of the following alternatives to traditional mass production:
 Mass customization
 Flexible manufacturing, and
 Agile manufacturing.

Mass customization
Mass customization describes the provision of what customers perceive as customized goods and services in high volume, without the operation incurring additional costs to change the output’s design or appearance. Some in fact argue that mass customization is not a specific process type; it depends fundamentally upon the transformation process. Davis (1987), who coined the term, stated that mass customization of markets means that the same large number of customers can be reached as in the mass markets of the industrial economy, and simultaneously they can be treated individually as in the customized markets of pre-industrial economies.

In essence, this present era of mass customization combines the best of the craft era, where
products were individualized but at high cost, with the best of mass production, where products
were affordable but highly standardized.

Mass customization firms produce a diverse range of products and services, and cannot be identified as a homogeneous group. Customer involvement in the production process is argued to be one of the defining characteristics of mass customization. Mass customization overcomes what
has typically been seen as a trade-off between volume and variety. It achieves this in a number of
ways, such as flexible manufacturing and agile production, which we discuss below. There are
actually five basic ways in which mass customization can be achieved, derived from how six key
operations processes are configured. The six key processes are:
1. Product development and design.
2. Product validation or manufacturing engineering (translates product design into a bill of
materials and set of manufacturing processes).
3. Order taking and co-ordination.
4. Order fulfilment management (schedules activities within the operation).
5. Order fulfilment realization (manages actual production and delivery).
6. Post-order processes (such as technical assistance, warranties and maintenance).

The best method for achieving mass customization is by creating modular components that can
be configured into a wide variety of end products and services. Such standardization of parts not
only reduces production costs and increases customizable output; it also reduces new product
development time and accommodates short life cycles. There are six kinds of modularity; they are not mutually exclusive and may be combined within one operation:
1. Component-sharing modularity. This refers to the same component being used in
multiple products, thereby reducing inventory costs and simplifying production.
2. Component-swapping modularity. In this instance, as opposed to different products
sharing the same components (as above), the same products have different components in
order to differentiate or customize them from each other. The classic example of this is
Swatch, who produces a range of standard watches, but with a wide range of colours and
faces.
3. Cut-to-fit modularity. This modularity is based around the ability to adapt or vary a
component to individual needs and wants, within preset or practical limits.
4. Mix modularity. This modularity is based on the concept of a recipe, so that components,
when mixed together, become something different. This can be applied to paints,
fertilizer, restaurant menu items, breakfast cereals and any other process in which
ingredients are mixed.
5. Bus modularity. This is based around the concept of a standard structure to which
different components can be added. The obvious example of this is the lighting track, to
which different light fittings can be attached. The term ‘bus’ derives from the electronics
industry, which uses this as the base from which computers and other electronic devices
are built up. This type of modularity allows ‘variations in the type, number and location
of modules that can plug into the product’.
6. Sectional modularity. This type of modularity is based on different types of components
fitting together in any number of possible ways through the use of standard interfaces.
Lego building blocks are the classic example of this. Whilst this achieves the greatest
degree of variety and customization, it is the most difficult to achieve.
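The power of these forms of modularity is combinatorial: with independent choices per module, the number of distinct end products is the product of the option counts at each ‘slot’. A small sketch, with option counts invented to loosely echo the Swatch example:

```python
# Component-swapping and sectional modularity create variety multiplicatively:
# the number of end products equals the product of the options at each module
# slot. The option counts below are invented for illustration.
from math import prod

def variant_count(options_per_module):
    """Distinct end products from independent module choices."""
    return prod(options_per_module)

# A watch with 10 faces, 8 strap colours and 4 case styles:
print(variant_count([10, 8, 4]))  # 320
```

Here 22 stocked components (10 + 8 + 4) yield 320 distinct watches, which is how modularity reduces inventory cost while increasing customizable output, as the text describes.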

There are four faces of customization: collaborative (designers working closely with customers);
adaptive (where standard products are changed by customers during use); cosmetic (where
packaging of standard products is unique for each customer); and transparent (where products
are modified to specific individual needs).

The requirements for mass customization are:


1. Customer demand for variety and customization must exist.
2. Market conditions must be appropriate.
3. The value chain should be ready.
4. Technology must be available.
5. Products should be customizable.
6. Knowledge must be shared.

Undoubtedly, mass customization presents firms with a number of challenges. These challenges are often categorized as follows:
1. Elicitation. The requirement for an elaborate system for eliciting customers’ needs and wants. Capturing customer input into the production process can prove difficult.
2. Process flexibility. The requirement of highly flexible production technology. Often
developing such technologies can be expensive and time-consuming.
3. Logistics. The requirement of a strong direct-to-customer logistics system. Zipkin argues
that processing and tracking individual customer orders through the supply chain presents
a variety of challenges.

Flexible manufacturing
Flexible manufacturing is an element of mass customization. The move towards flexible manufacturing was one of the major competitive advantages of Japanese car manufacturers, and it subsequently appeared in Western manufacturing. Flexible manufacturing can be applied to
high- or low-volume batch processes. It is normally applied by installing flexible manufacturing
systems (FMS)- groups of machines and other equipment that would usually include the
following:
 A number of workstations, such as computer numerically controlled machines, each
performing a wide range of operations.
 A transport system that will move material from one machine to another; loading and
unloading stations where completed or partially completed components will be housed
and worked upon.
 A comprehensive computer control system that will co-ordinate all the activities.

Flexible manufacturing systems are typically arranged in small, U-shaped cells. The reasons for
this shape include:
 reduction of space;
 shorter workflow paths;
 increased teamwork and better communication and motivation brought about by seeing a
completed product in the cell.

Workers are arranged into teams to operate the cells. A single cell can manufacture, inspect and
package finished products or components. Every cell is responsible for the quality of its products
and each worker will normally be able to perform a range of tasks. Once again, process choice is
the key insight in cell arrangements; under line processes, introducing variety or changing the
product meant stopping the entire assembly line. Such breakdowns and shortages are very costly
overheads for mass-producers, intent on low-cost production. To compensate for this, they have
to carry large stocks of parts and spares, ‘just-in-case’. Stocks of partly finished products also
tend to be high under traditional line processes. Components that have undergone part of the
production process often sit idle, waiting for the next stage. This is a major source of waste.
Large amounts of inventories, some sitting in large warehouses, are a feature of mass production.
By contrast, flexible manufacturing, via U-shape cells, and low inventory levels go hand in hand.

The advantages that FMS can provide go beyond the flexibility of the hardware. The real
advantage comes with the plant-specific know-how and enhanced skills that accompany FMS.
Consequently, investment in technologies such as computer-integrated manufacture (CIM) and
advanced manufacturing technology is seen as strategically important because it can provide
competitive options for the firm.

FMS allows the firm to compete on economies of scope rather than economies of scale. Because
technologies are more flexible, allowing numerous product variations to be made, the overall
volume achieved can be almost as great as manufacturing large volumes of standardized
products. This means that the basis of competition moves from a strategy of low-priced,
commodity products to an emphasis on low-cost special options and customized products.

Agile Production
Like mass customization, agile production is not a particular process choice but it is wholly
dependent upon the transformation process for agility to become a reality within the offer to
customers. It is clear that we are in an era that has evolved from mass production offering ‘any
colour of car as long as it is black’ to that of customer-centric offerings. It is commonly realized
now that any customer, in any industry, in any market wants stuff that is both cheaper and better,
and they want it yesterday. This comes under the umbrella of mass customization and agile
production.

Agility in manufacturing involves being able to respond quickly and effectively to the current
configuration of market demand, and also to be proactive in developing and retaining markets in
the face of extensive competitive forces. Agile manufacturing can be said to be a relatively new,
post-mass production concept for the creation and distribution of goods and services. It is the
ability to thrive in a competitive environment of continuous and unanticipated change and to
respond quickly to rapidly changing markets driven by customer-based valuing of products and
services. It includes rapid product realization, highly flexible manufacturing, and distributed
enterprise integration.

The model of agile manufacturing capabilities consists of four key interlinked parameters:
1. Agile strategy- involving the processes for understanding the firm’s situation within its
sector, committing to agile strategy, aligning it to a fast-moving market, and
communicating and deploying it effectively.
2. Agile processes- the provision of the actual facilities and processes to allow agile
functioning of the organization.
3. Agile linkages- intensively working with and learning from others outside the company,
especially customers and suppliers.
4. Agile people- developing a flexible and multi-skilled workforce, creating a culture that
allows initiative, creativity and supportiveness to thrive throughout the organization.

It is evident, therefore, that technology alone cannot make an agile enterprise. Companies should
find the right combination of strategies, culture, business practices and technology necessary to
become agile, taking into account the characteristics of their markets.

Dear learner, have you experienced a sort of confusion about the concepts of mass customization
and agile production? Well, that is very normal as it is recognized largely that there seems to be
no firm agreement as to the definitions for, and major differences between, these terms. Some
say mass customization also includes ‘Agile Supply Networks’ as a necessary factor. Contrary to
this, others argue that mass customization is best viewed as a powerful example of a firm’s
ability to be agile. Of course, there is considerable overlap between mass customization and agile
practices, and one will feed the other. It is best, therefore, not to see these paradigms as
conflicting and competing approaches, but rather as complementary sets of skills and abilities
that need to be in place for today’s highly competitive and demanding conditions.

Process Mapping
After the overall design of a process has been determined, its individual activities must be
configured. At its simplest this detailed design of a process involves identifying all the individual
activities that are needed to fulfill the objectives of the process and deciding on the sequence in
which these activities are to be performed and who is going to do them. There will, of course, be
some constraints on this. Some activities must be carried out before others and some activities
can only be done by certain people or machines. Nevertheless, for a process of any reasonable
size, the number of alternative process designs is usually large. Because of this, process design is
often done using some simple visual approach such as process mapping.

Process mapping simply involves describing processes in terms of how the activities within the
process relate to each other. There are many techniques which can be used for process mapping
(or process blueprinting, or process analysis, as it is sometimes called). However, all the
techniques identify the different types of activity that take place during the process and show the
flow of materials or people or information through the process.

Process mapping symbols are used to classify different types of activity. And although there is
no universal set of symbols used all over the world for any type of process, there are some that
are commonly used. Most of these derive either from the early days of ‘scientific’ management
around a century ago or, more recently, from information system flowcharting.

The classic process flowchart looks at the manufacture of a product or delivery of a service
from a broad perspective. The chart uses five standard symbols, shown in Figure 3.5. The details
of each process are not necessary for this chart; however, the time required to perform each
process and the distance between processes are often included. By incorporating nonproductive
activities (inspection, transportation, delay, storage), as well as productive activities
(operations), process flowcharts may be used to analyze the efficiency of a series of processes
and to suggest improvements.

Operation (an activity that directly adds value)

Transport (movement of something)

Delay (a wait, e.g. for a material)

Inspection (a check of some sort)

Storage (deliberate storage as opposed to a delay)

Figure 3.5. Some common process mapping symbols

They also provide a standardized method for documenting the steps in a process and can be used
as a training tool. Automated versions of these charts are available that will superimpose the
charts on floor plans of facilities. In this fashion, bottlenecks can be identified and layouts can be
adjusted. Process flowcharts are used in both manufacturing and service operations. They are a
basic tool for process innovation, as well as for job design.
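To illustrate the kind of efficiency analysis a process flowchart supports, the sketch below records a hypothetical six-step process using the five classic activity types and computes the share of throughput time that is actually value-adding (all step names and durations are invented for illustration):

```python
# Hypothetical process flowchart: (activity type, description, minutes).
# Only "operation" steps directly add value; transport, delay,
# inspection and storage are nonproductive.
steps = [
    ("operation",  "cut material",        5.0),
    ("transport",  "move to assembly",    2.0),
    ("delay",      "wait for operator",   6.0),
    ("operation",  "assemble component", 10.0),
    ("inspection", "check dimensions",    1.5),
    ("storage",    "hold in buffer",      8.0),
]

total_time = sum(t for _, _, t in steps)
value_added = sum(t for kind, _, t in steps if kind == "operation")
efficiency = value_added / total_time

print(f"Total throughput time: {total_time:.1f} min")
print(f"Value-added time:      {value_added:.1f} min")
print(f"Process efficiency:    {efficiency:.1%}")
```

Raising this ratio, typically by attacking delays and storage rather than by speeding up the operations themselves, is the usual starting point for process improvement.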

Process improvement teams are likely to make a first pass at diagramming a process, with
adhesive notes plastered on large sheets of paper connected with hand-drawn arrows. As
documentation of the process becomes more complete, departments or companies may prefer
particular symbols to represent inputs, outputs, decisions, activities, and resources.
Process Performance Measures
In process analysis, we focus on three fundamental performance measures. (As a running
example, imagine a simple three-person process that cuts, fills and wraps bagels.)
a. The number of flow units contained within the process is called the inventory (I), or
work-in-process (WIP). If we define the process boundary just before cutting and just after
wrapping, this inventory includes bagels currently being worked on by any of the three
workers and bagels waiting between operations.
b. The time it takes a flow unit to get through the process is called the flow time (T). An
interesting question to ask is "how long does it take one bagel to move from the beginning to
the end of the process?" Although this question is somewhat hypothetical in the present
example, it would be an important variable if you were selling bagels made to order.
c. Finally, the most important measure is the rate (measured in flow units per unit of time) at
which the process delivers output, which we will call the flow rate (R). R is sometimes
referred to as the throughput rate. The maximum rate at which the process can generate
output is also called the capacity of the process.

Note that any improvement in inventory, flow rate, or flow time will have a direct impact on
cost, or even better, on profit. Shorter flow times will make it easier to rapidly respond to
customers (especially in make-to-order environments and service operations). Typically, shorter
flow time will result in additional unit sales and/or higher prices. Lower inventory results in
lower working capital requirements as well as many quality advantages that we will explore in
this course. Higher inventory is also directly related to longer flow times. Thus a reduction in
inventory also yields a reduction in flow time.

Higher flow rate translates directly into more revenues, assuming your process is currently
capacity constrained, i.e. there is sufficient demand that you could sell any additional output you
make.

These three measures are linked by a simple mathematical relationship, known as Little’s law:
I = R × T (inventory = flow rate × flow time). Equivalently, throughput time = work-in-process
× cycle time, since the cycle time is the reciprocal of the flow rate.
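Little’s law can be applied directly once any two of the three measures are known. A minimal sketch, with invented figures (40 units of inventory, a flow rate of 8 units per hour):

```python
def flow_time(inventory, flow_rate):
    """Little's law: I = R * T, rearranged as T = I / R."""
    return inventory / flow_rate

I = 40.0  # average inventory in the process (flow units)
R = 8.0   # flow rate (units per hour)
print(f"Average flow time: {flow_time(I, R):.1f} hours")  # 40 / 8 = 5.0
```

The same relationship can be rearranged to find inventory (I = R × T) or flow rate (R = I / T) when the other two measures are easier to observe.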

3.3 Long-term Capacity Planning

The next set of operations decisions concern the size or capacity of each part of the operations.
Capacity is the maximum capability to produce. Capacity planning takes place at several levels
of detail. Here we shall treat capacity in a general long-term sense.

Capacity defined
In operations management, the term capacity describes the level of output that the organization
can achieve over a specified period of time. It can be considered as the potential output of a
system that may be produced in a specified time, determined by the size, scale and configuration
of the system’s transformation inputs. At all stages of any process, limitations are placed on
capacity. A machine has a maximum output per hour, a truck has a maximum load, a production
line has a limit to its speed of operation, an aeroplane has a certain number of seats for
passengers, a computer processes a specified number of bytes per second, and so on.

In general, capacity can be defined in several different ways, and some of these are described
below.

Theoretical capacity
One definition of capacity is the maximum level of output that can be attained by the
organization, theoretical capacity, which is the level of output that can be achieved if the
organization’s resources are used fully. This would mean operating 24 hours per day, 7 days per
week, 365 days per year, and for all but continuous production this is clearly unrealistic. Many
manufacturing operations and most service operations operate either during fixed hours, such as
8 am to 5 pm, Monday to Friday, or have some periods where operations are minimal.

Design capacity
Even facilities running continuously find it difficult to achieve 100 per cent productive time; they
must generally shut down at least periodically for maintenance and cleaning. A second definition
of capacity is design capacity. This is the level of output that the operation was designed to have,
which includes allowances for planned nonproductive time. For example, a cinema might
calculate its capacity based not only on the length of the average film, but also including the time
for the audience to leave at the end of the film, the room to be cleaned and the audience for the
next film to be seated. This level of capacity is usually the one selected for planning purposes.
However, a drawback of design capacity is that it does not account for unplanned nonproductive
time, such as unscheduled outages. These can result from internal factors, such as unplanned
staff shortages, or external factors, such as extreme weather or transportation disruptions. Given
these considerations, a practical definition of capacity might be the amount of resource inputs
relative to output requirements at a particular time.

Actual capacity
Operations managers and other decision-makers often need to know what level of outputs an
operation has produced or will produce over a certain period - its actual capacity – as well as
what it can produce theoretically or by design.

Capacity is therefore normally measured by considering how much can be processed in any
given time period. This is commonly the case in materials processing operations, many
information processing operations and some customer processing operations. For example, a car
plant is designed to produce a certain number of cars per shift; the work pattern of an insurance
company worker is designed to process a certain number of claims per hour, and fast-food stores
expect to be able to serve a certain number of customers in a defined time period (typically at a
rate of one every 90 seconds).

There is a difference between ‘designed capacity’, defined as ‘the maximum output of a process
under ideal conditions’, and ‘effective capacity’, defined as ‘maximum output that can be
realistically expected under normal conditions’. Usually, effective capacity is less than designed
capacity, due to set-up times, breakdowns, stoppages, maintenance, and so on. Whilst this is true
in many cases, especially in materials processing operations, there are instances in which
effective capacity may be greater than designed capacity. For example, there are many mass
transit systems around the world, such as the London Underground and metropolitan railways in
the Far East, where more passengers routinely travel than the system was designed for.

By distinguishing between designed capacity and effective capacity, we can establish the
difference between ‘utilization’ and ‘efficiency’.

Utilization is the ratio of actual output to design capacity, whilst efficiency is the ratio of actual
output to effective capacity:

Utilization = actual output ÷ design capacity
Efficiency = actual output ÷ effective capacity

In some operations, management focuses very much on utilization. For example, key
performance measures in many capacity-constrained services, such as hotels, airlines and
theatres, are utilization measures, namely room occupancy, passenger load and seat occupancy.
In other operations, especially those adopting high-volume production processes, the focus is
often upon efficiency measures.
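These two ratios are easy to confuse, so the sketch below computes both for an invented operation with a design capacity of 200 units per day, an effective capacity of 160 and an actual output of 140:

```python
def utilization(actual_output, design_capacity):
    """Actual output as a fraction of design capacity."""
    return actual_output / design_capacity

def efficiency(actual_output, effective_capacity):
    """Actual output as a fraction of effective capacity."""
    return actual_output / effective_capacity

design, effective, actual = 200.0, 160.0, 140.0  # assumed units per day
print(f"Utilization: {utilization(actual, design):.1%}")    # 140/200 = 70.0%
print(f"Efficiency:  {efficiency(actual, effective):.1%}")  # 140/160 = 87.5%
```

The gap between the two figures shows how much output is lost to planned allowances (the difference between design and effective capacity) as opposed to day-to-day performance.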

Rated capacity

When capacity is measured relative to equipment alone, the appropriate measure is rated
capacity: an engineering assessment of maximum annual output, assuming continuous
operation except for an allowance for normal maintenance and repair downtime.

Rated capacity will always be less than or equal to effective capacity.

Importance of Capacity Decisions


1. Capacity decisions have a real impact on the ability of the organization to meet future
demands for products and services; capacity essentially limits the rate of output possible.
Having capacity to satisfy demand can allow a company to take advantage of tremendous
opportunities.
2. Capacity decisions affect operating costs. Ideally, capacity and demand requirements will
be matched, which will tend to minimize operating costs. In practice, this is not always
achieved because actual demand either differs from expected demand or tends to vary (e.g.,
cyclically). In such cases, a decision might be made to attempt to balance the costs of
overcapacity and undercapacity.
3. Capacity is usually a major determinant of initial cost. Typically, the greater the capacity of
a productive unit, the greater its cost. This does not necessarily imply a one-for-one
relationship; larger units tend to cost proportionately less than smaller units.
4. Capacity decisions often involve long-term commitment of resources and the fact that, once
they are implemented, it may be difficult or impossible to modify those decisions without
incurring major costs.
5. Capacity decisions can affect competitiveness. If a firm has excess capacity, or can quickly
add capacity, that fact may serve as a barrier to entry by other firms. Then too, capacity can
affect delivery speed, which can be a competitive advantage.
6. Capacity affects the ease of management; having appropriate capacity makes management
easier than when capacity is mismatched.

Process of Capacity Planning and Strategies

Capacity planning is concerned with defining the long-term and the short-term capacity needs of
an organization and determining how those needs will be satisfied. Capacity planning decisions
are based upon consumer demand, matched against the human, material and
financial resources of the organization. As a planning function, both capacity available and
capacity required can be measured in the short term (capacity requirements plan), intermediate
term (rough-cut capacity plan), and long term (resource requirements plan). The objective of
long term capacity planning is to specify the overall capacity level of resources- facilities,
equipment and labour force size – that best supports the company’s long-range competitive
strategy for production. Long-term capacity planning is a strategic decision that establishes a
firm’s overall level of resources. It extends over a time horizon long enough to obtain those
resources-usually a year or more for building or expanding facilities or acquiring new businesses.
Capacity requirements can be evaluated from two perspectives-long-term capacity strategies and
short-term capacity strategies.

1. Long-term capacity strategies: Long-term capacity requirements are more difficult to


determine because future demand and technology are uncertain. Forecasting five or ten
years into the future is risky and difficult; a company's current products may not even
exist by then. Long-range capacity requirements depend on marketing
plans, product development and the life-cycle of the product. Long-term capacity planning is
concerned with accommodating major changes that affect the overall level of output in the
long term. Assessing the market environment and implementing long-term capacity plans in a
systematic manner are major responsibilities of management. The following parameters affect
long-range capacity decisions.

 Multiple products: Companies produce more than one product using the same facilities
in order to increase profit. Manufacturing multiple products also reduces the
risk of failure, and having more than one product helps capacity planners do a better
job: because products are in different stages of their life cycles, it is easier to schedule
them to achieve maximum capacity utilisation.
 Phasing in capacity: In high-technology industries, and in industries where technology
develops very fast, the rate of obsolescence is high and products must be
brought into the market quickly. Since the time needed to construct facilities is long and
products must be introduced quickly, the solution is to phase in capacity on a modular
basis: commitments of funds and personnel towards facilities are made over a period of
3-5 years. This is an effective way of capitalizing on technological breakthroughs.
 Phasing out capacity: Outdated manufacturing facilities cause excessive plant
closures and downtime, and the impact of closures is not limited to the fixed costs of plant
and machinery. Phasing out should therefore be done in a humane way, without harming
the community, by making alternative arrangements for employees, such as shifting them
to other jobs or locations, compensating them, and so on.

2. Short-term capacity strategies: Managers often use forecasts of product demand to estimate
the short-term workload the facility must handle. Looking ahead up to 12 months, managers
anticipate output requirements for different products and services. They then compare these
requirements with existing capacity and decide when capacity adjustments are needed.
For short-term periods of up to one year, fundamental capacity is fixed: major facilities will not
be changed. Many short-term adjustments for increasing or decreasing capacity are, however,
possible. The adjustments required depend upon the conversion process: whether it is capital
intensive or labour intensive, and whether the product can be stored as inventory.
Capital-intensive processes depend on physical facilities, plant and equipment; short-term
capacity can be modified by operating these facilities more or less intensively than normal. In
labour-intensive processes, short-term capacity can be changed by laying off or hiring people or
by giving overtime to workers. The strategies for changing capacity also depend upon how long
the product can be stored as inventory.
The short-term capacity strategies are:
1. Inventories: Stock finished goods during slack periods to meet the demand during peak
period.
2. Backlog: During peak periods, the willing customers are requested to wait and their
orders are fulfilled after a peak demand period.
3. Employment level (hiring or firing): Hire additional employees during peak demand
period and layoff employees as demand decreases.
4. Employee training: Develop multi skilled employees through training so that they can
be rotated among different jobs. The multi skilling helps as an alternative to hiring
employees.
5. Subcontracting: During peak periods, hire the capacity of other firms temporarily to
make the component parts or products.
6. Process design: Change job contents by redesigning the job.

The optimum capacity level

The optimum, or best, operating level is the capacity for which the average unit cost is at its
minimum. Note that as we move down the unit cost curve for each plant size, we achieve
economies of scale until we reach the best operating level, and then diseconomies of scale as we
exceed this point.

Economies and diseconomies of scale

The basic notion is well known: as a plant gets larger and volume increases, the average cost per
unit of output drops because each succeeding unit absorbs part of the fixed costs.

Economies of scale: Economies of scale is the concept that the average unit cost of
goods or services can be reduced by increasing the output rate. There are four principal reasons
why economies of scale can drive cost down when output increases:

 Fixed costs are spread over more units: fixed costs include heating, debt
service and management salaries. Depreciation of plant and equipment already
owned is also a fixed cost in the accounting sense. When the output rate increases,
the average unit cost drops because fixed costs are spread over more units.
 Construction costs are reduced: certain activities and expenses are required in
building small and large facilities alike: building permits, architects’ fees, rental of
building equipment, and the like. Industries such as breweries and oil refineries
benefit from strong economies of scale because of this phenomenon.
 Costs of purchased materials are cut: higher volumes can reduce the cost of purchased
materials and services. They give a purchaser a better bargaining position and the
opportunity to take advantage of quantity discounts.
 Process advantages are found: high-volume production provides many opportunities
for cost reduction. At a higher output rate, the process shifts towards a line process,
with resources dedicated to individual products. The benefits of dedicating
resources to individual products or services may include speeding up the learning
effects, lowering inventory, improving process and job design, and reducing the
number of changeovers.

Diseconomies of scale: At some point a facility can become so large that diseconomies of scale
set in; that is, the average cost per unit increases as the facility size increases. The reason is that
excessive size can bring complexity, loss of focus, and inefficiencies that raise the average unit
cost of a product or service. There may be too many layers of employees and bureaucracy, and
management loses touch with employees and customers. The organization becomes less agile
and loses the flexibility needed to respond to changing demand. Many large companies become
so involved in analysis and planning that they innovate less and avoid risks. The result is that
small companies outperform corporate giants in numerous industries.
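The U-shaped average cost curve implied by this discussion can be sketched with a toy cost model in which spreading fixed costs produces economies of scale and a congestion term produces diseconomies at excessive volume (all cost figures are invented):

```python
def average_cost(q, fixed=50000.0, variable=4.0, congestion=0.0005):
    """Toy unit-cost model: fixed/q falls with volume (economies of scale);
    congestion*q rises with volume (diseconomies of scale)."""
    return fixed / q + variable + congestion * q

# Search a grid of volumes for the best operating level.
volumes = range(1000, 30001, 1000)
best = min(volumes, key=average_cost)
print(f"Best operating level: {best} units "
      f"(average cost {average_cost(best):.2f} per unit)")
```

With these assumed parameters the curve bottoms out at 10,000 units: below that volume fixed-cost spreading dominates, above it the congestion term dominates.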

Learning (experience) curve

Learning (experience) curve theory has a wide range of application in the business world. In
manufacturing, it can be used to estimate the capacity requirement and the time for product
design. Learning curves can be applied to individuals or organizations. Individual learning is
improvement that results when people repeat a process and gain skill or efficiency from their
own experience; that is, ‘practice makes perfect’. Organizational learning results from practice
as well, but it also comes from changes in administration, equipment, and product design.
organizational settings, we expect to see both kinds of learning occurring simultaneously and
often describe the combined effect with the single learning curve.

Generally, it is quite possible that an operator initially takes longer to accomplish a job
than in subsequent cycles, by which time the necessary skill and feel for the job have been
acquired through learning. This learning curve is usually hyperbolic in nature. Though the
learning curve concept is an important one, it has often not been given due consideration;
scholars feel it is unfair if the learning phase is not accounted for when determining capacity
requirements and time standards.
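The most common formal model is the log-linear learning curve, under which every doubling of cumulative output cuts the unit time to a fixed percentage of its previous level. A sketch with assumed figures (10 hours for the first unit, an 80 per cent learning rate):

```python
import math

def unit_time(n, first_unit_time, learning_rate):
    """Time for the n-th unit under the log-linear learning curve:
    T_n = T_1 * n ** b, where b = log(learning_rate) / log(2)."""
    b = math.log(learning_rate) / math.log(2)
    return first_unit_time * n ** b

t1, rate = 10.0, 0.80  # assumed: 10 h for unit 1, 80% learning rate
for n in (1, 2, 4, 8):
    print(f"Unit {n}: {unit_time(n, t1, rate):.2f} h")
# Each doubling of cumulative output cuts the unit time to 80%
# of its previous level: 10.0 -> 8.0 -> 6.4 -> 5.12 hours.
```

Summing `unit_time` over the planned production quantity gives the total labour hours required, which feeds directly into the capacity calculation.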

Capacity flexibility

Capacity flexibility means having the ability to rapidly increase or decrease production levels, or
to shift production capacity quickly from one product or service to another. Such flexibility is
achieved through:

 Flexible plants: perhaps the ultimate in plant flexibility is the zero-changeover-time plant.
Using movable equipment, knockdown walls, and easily accessible and re-routable
utilities (e.g. tents), such a plant can adapt to change in real time.
 Flexible processes: flexible processes are epitomized by flexible manufacturing systems on
the one hand and simple, easily set-up equipment on the other. Both of these
technological approaches permit rapid, low-cost switching from one product line to
another, enabling what is referred to as economies of scope. By definition, economies of
scope exist when multiple products can be produced at a lower cost in combination than
they can separately.
 Flexible workers: flexible workers have multiple skills and the ability to switch easily
from one kind of task to another. They require broader training than specialized
workers and need managers and staff support to facilitate quick changes in their work
assignments.

The timing of capacity change


Changing the capacity of an operation is not just a matter of deciding on the best size of a
capacity increment. The operation also needs to decide when to bring ‘on-stream’ new capacity.
The three basic strategies for the timing of capacity expansion in relation to a steady growth in
demand are:
 Capacity leads demand- timing the introduction of capacity in such a way that there is
always sufficient capacity to meet forecast demand. Capacity is expanded in anticipation
of demand growth. This aggressive strategy is used to lure customers from competitors
who are capacity constrained or to gain a foothold in a rapidly expanding market. It also
allows companies to respond to unexpected surges in demand and to provide superior
levels of service during peak demand periods.
 Average capacity strategy- timing the introduction of capacity to coincide with average
expected demand. This is a moderate strategy in which managers are certain they will be
able to sell at least some portion of the expanded output, and endure some periods of
unmet demand. Approximately half of the time capacity leads demand, and half of the
time capacity lags demand.
 Capacity lags demand- timing the introduction of capacity so that demand is always equal
to or greater than capacity. Capacity is increased after an increase in demand has been
documented. This conservative strategy produces a higher return on investment but may
lose customers in the process. It is used in industries with standard products and cost-based
or weak competition. The strategy assumes that lost customers will return from competitors
after capacity has expanded.

Each strategy has its own advantages and disadvantages. The actual approach taken by any
company will depend on how it views these advantages and disadvantages. For example, if the
company’s access to funds for capital expenditure is limited, it is likely to find the delayed
capital expenditure requirement of the capacity-lagging strategy relatively attractive.

Tools for capacity planning

Long-term capacity planning requires demand forecasts for an extended period of time.
Unfortunately, forecast accuracy declines as the forecasting horizon lengthens. In addition,
anticipating what competitors will do increases the uncertainty of demand forecasts. Finally,
demand during any period of time is not evenly distributed; peaks and valleys of demand may
(and often do) occur within the time period. These realities necessitate the use of a capacity
cushion. In this section, three types of tools that deal more formally with demand uncertainty and
variability are introduced.

Waiting line models: waiting line models are often useful in capacity planning. Waiting lines
tend to develop in front of a work centre, such as an airport ticket counter, a machine centre, or a
central computer. The reason is that arrival times between jobs or customers vary and the
processing time may vary from one customer to the next. Waiting line models use probability
distributions to estimate the average customer delay time, the average length of waiting lines,
and the utilization of the work centre. Managers can use this information to choose the most cost-
effective capacity, balancing customer service and the cost of adding capacity.
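For the simplest case, a single server with Poisson arrivals and exponential service times, the standard M/M/1 formulas give these averages directly. The arrival and service rates below are invented for illustration:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 results (requires arrival_rate < service_rate)."""
    rho = arrival_rate / service_rate   # utilization of the work centre
    lq = rho ** 2 / (1 - rho)           # average number waiting in queue
    wq = lq / arrival_rate              # average wait in queue (Little's law)
    return rho, lq, wq

# Assumed rates: 8 arrivals and 10 service completions per hour.
rho, lq, wq = mm1_metrics(arrival_rate=8.0, service_rate=10.0)
print(f"Utilization:           {rho:.0%}")          # 80%
print(f"Average queue length:  {lq:.1f} customers")  # 3.2
print(f"Average wait in queue: {wq * 60:.0f} min")   # 24
```

Note how a utilization of 80 per cent already produces a 24-minute average wait; waiting time grows explosively as utilization approaches 100 per cent, which is why capacity cushions matter.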

Simulation: more complex waiting line problems must be analyzed with simulation. It can
identify a process's bottlenecks and the appropriate capacity cushion, even for complex processes
with random demand patterns and predictable surges in demand during a typical day.

Decision trees: a decision tree can be particularly valuable for evaluating different capacity
expansion alternatives when demand is uncertain and sequential decisions are involved. A
decision tree is a systematic model of the sequence of steps in a problem and the conditions and
consequences of each step.
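At its core, evaluating a decision tree means computing the expected payoff at each chance node and choosing the best alternative at each decision node. A one-stage sketch with invented payoffs and probabilities:

```python
# Hypothetical capacity decision: payoffs (profit, $000) by demand state.
p_high = 0.6  # assumed probability of high demand
alternatives = {
    "build large plant": {"high": 800.0, "low": -120.0},
    "build small plant": {"high": 450.0, "low":  220.0},
}

def expected_value(payoffs, p_high):
    """Expected payoff at a chance node with two demand states."""
    return p_high * payoffs["high"] + (1 - p_high) * payoffs["low"]

for name, payoffs in alternatives.items():
    print(f"{name}: EV = {expected_value(payoffs, p_high):.1f}")

best = max(alternatives, key=lambda a: expected_value(alternatives[a], p_high))
print(f"Choose: {best}")
```

Real capacity problems add further stages (e.g. the option to expand the small plant later), which a decision tree handles by folding back the expected values from the leaves toward the root.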

3.4 Facilities Location Decisions

Introduction
Facility location decisions are strategic, long-term and non-repetitive in nature. Without sound
and careful location planning at the outset, a new facility may impose continuing operating
disadvantages on future operations. Location decisions are affected by many
factors, both internal and external to the organization’s operations. Internal factors include the
technology used, the capacity, the financial position, and the work force required. External
factors include the economic, political and social conditions in the various localities. Most of the
fixed and some of the variable costs are determined by the location decision. The efficiency,
effectiveness, productivity and profitability of the facility are also affected by the location
decision.

The facilities location problem is concerned primarily with the best (or optimal) location
depending on appropriate criteria of effectiveness. Location decisions are based on a host of
factors, some subjective, qualitative and intangible while some others are objective, quantitative
and tangible.

When does a location decision arise?

The impetus to embark upon a facility location study can usually be attributed to various reasons:
i. It may arise when a new facility is to be established.
ii. In some cases, the facility or plant operations and subsequent expansion are restricted
by a poor site, thereby necessitating the setting up of the facility at a new site.
iii. The growing volume of business makes it advisable to establish additional facilities in
new territories.
iv. Decentralization and dispersal of industries reflected in the Industrial Policy
resolution so as to achieve an overall development of a developing country would
necessitate a location decision at a macro level.
v. It could happen that the original advantages of the plant have been outweighed due to
new developments.
vi. New economic, social, legal or political factors could suggest a change of location of
the existing plant.

Some or all the above factors could force a firm or an organization to question whether the
location of its plant should be changed or not. Whenever the plant location decision arises, it
deserves careful attention because of the long term consequences. Any mistake in selection of a
proper location could prove to be costly. Poor location could be a constant source of higher cost,
higher investment, difficult marketing and transportation, dissatisfied and frustrated employees
and consumers, frequent interruptions of production, abnormal wastages, delays and substandard
quality, denied advantages of geographical specialization and so on. Once a facility is set up at a
location, it is very difficult to shift later to a better location because of numerous economic,
political and sociological reasons.

Location theory: Alfred Weber’s theory of the location of industries

Alfred Weber (1868–1958), with the publication of Theory of the Location of Industries in 1909,
put forth the first developed general theory of industrial location. His model took into account
several spatial factors for finding the optimal location and minimal cost for manufacturing plants.
The point for locating an industry that minimizes costs of transportation and labor requires
analysis of three factors:
1. The point of optimal transportation based on the costs of distance to the ‘material
index’: the ratio of the weight of intermediate products (raw materials) to that of the finished product.
2. The labour distortion, in which more favorable sources of lower cost of labour may
justify greater transport distances.
3. Agglomeration and deglomeration.

Agglomeration or concentration of firms in a locale occurs when there is sufficient demand for
support services for the company and labour force, including new investments in schools and
hospitals. Also supporting companies, such as facilities that build and service machines and
financial services, prefer closer contact with their customers.

Deglomeration occurs when companies and services leave because of over concentration of
industries or of the wrong types of industries, or shortages of labour, capital, affordable land, etc.
Weber also examined factors leading to the diversification of an industry in the horizontal
relations between processes within the plant.
The issue of industry location is increasingly relevant to today’s global markets and transnational
corporations. Focusing only on the mechanics of the Weberian model could justify greater
transport distances for cheap labour and unexploited raw materials. When resources are
exhausted or workers revolt, industries move to different countries.

Factors influencing Location Decisions


Facility location is the process of determining a geographic site for a firm’s operations.
Managers of both service and manufacturing organizations must weigh many factors when
selecting a suitable location.

Location selections are usually made in two phases, namely (i) the general territory selection
phase, and (ii) the exact site/community selection phase amongst those available in the general
locale. The considerations vary at the two levels, though there is substantial overlap, as discussed
below.

A. Territory Selection
For the general territory/region/area selection, the following are some of the important factors
that influence the selection decision.

Markets: There has to be some customer/market for your product/service. The market growth
potential and the location of competitors are important factors that could influence the location.
Locating a plant or facility nearer to the market is preferred if promptness of service is required,
if the product is fragile, or if it is susceptible to spoilage. Moreover, if the product is relatively
inexpensive and transportation costs add substantially to the cost, a location close to the markets
is desirable. Assembly type industries also tend to locate near markets.

Raw Materials and Supplies: Sometimes accessibility to vendors/suppliers of raw materials,
parts supplies, tools, equipment etc. may be very important. The issue here is promptness and
regularity of delivery and inward freight cost minimization.

If the raw material is bulky or low in cost, if it is greatly reduced in bulk (i.e. transformed into
various products and by-products), or if it is perishable and processing makes it less so, then
location near raw material sources is important. If raw materials come from a variety of
locations, the plant/facility may be situated so as to minimize total transportation costs. The costs
vary depending upon specific routes, mode of transportation and specific product classifications.

Transportation Facilities: Adequate transportation facilities are essential for the economic
operation of a production system. For companies that produce or buy heavy, bulky and
low-value-per-ton commodities, water transportation could be an important factor in locating
plants. It can
be seen that civilizations grew along rivers/waterways etc. Many facilities/plants are located
along river banks.

Manpower Supply: The availability of skilled manpower, the prevailing wage pattern, living
costs and the industrial relations situation influence the location.

Infrastructure: This factor refers to the availability and reliability of power, water, fuel and
communication facilities in addition to transportation facilities.

Legislation and Taxation: Factors such as financial and other incentives for new industries in
backward areas or no-industry-district centers, exemption from certain state and local taxes, etc.
are important.

Climate: Climatic factors could dictate the location of certain type of industries like textile
industry which requires high humidity zones.

B. Site/Community Selection
Having selected the general territory/region, next we would have to go in for site/community
selection. Let us discuss some factors relevant for this stage.

Community Facilities: These involve factors such as quality of life which in turn depends on
availability of facilities like schools, places of worship, medical services, police and fire stations,
cultural, social and recreation opportunities, housing, good streets and good communication and
transportation facilities.

Community Attitudes: These can be difficult to evaluate. Most communities usually welcome
setting up of a new industry especially since it would provide opportunities to the local people
directly or indirectly. However, in the case of polluting or ‘dirty’ industries, they would try their
utmost to locate them as far away as possible.
Sometimes because of prevailing law and order situation, companies have been forced to relocate
their units. The attitude of people as well as the state government has an impact on industrial
location.

Waste Disposal: The facilities required for the disposal of process waste including solid, liquid
and gaseous effluents need to be considered. The plant should be positioned so that prevailing
winds carry any fumes away from populated areas, and so that waste may be disposed of
properly and at reasonable expense.

Ecology and Pollution: These days there is a great deal of awareness of the need to maintain the
natural ecological balance. There are quite a few agencies propagating these concepts to make
society at large more conscious of the dangers of certain avoidable actions.

Site Size: The plot of land must be large enough to hold the proposed plant, parking and
access facilities, and to provide room for future expansion. These days a lot of industrial
areas/parks are being earmarked, in which standard sheds are provided to
entrepreneurs (especially small-scale ones).

Topography: The topography, soil structure and drainage must be suitable. If considerable land
improvement is required, low priced land might turn out to be expensive.

Transportation Facilities: The site should preferably be accessible by road and rail.
The dependability and character of the available transport carriers, frequency of service, and
freight and terminal facilities are also worth considering.

Supporting Industries and Services: The availability of supporting services such as tool rooms,
plant services etc. need to be considered.

Land Costs: These are generally of lesser importance as they are non-recurring and possibly
make up a relatively small proportion of the total cost of locating a new plant.

Location Decision Techniques


The decision where to locate is based on many different types of information and inputs. There is
no single model or technique that will select the “best” location from a group. However,
techniques are available that help to organize site information and that can be used as a starting
point for comparing different locations.

a. Subjective Techniques
Three subjective techniques used for facility location are Industry Precedence, Preferential
Factor and Dominant Factor. Most of us are always looking for some precedents. So in the
industry precedence subjective technique, the basic assumption is that if a location was best for
similar firms in the past, it must be the best for us now. As such, there is no need for conducting
a detailed location study and the location choice is thus subject to the principle of precedence,
good or bad. However, in the case of the preferential factor, the location decision is dictated by a
personal factor. It depends on the individual whims or preferences e.g. if one belongs to a
particular state, he may like to locate his unit only in that state. Such personal factors may
override factors of cost or profit in taking a final decision. This could hardly be called a
professional approach though such methods are probably more common in practice than
generally recognized. However, in some cases of plant location there could be a certain dominant
factor (in contrast to the preferential factor) which could influence the location decision. In a true

dominant sense, mining or petroleum drilling operations must be located where the mineral
resource is available. The decision in this case is simply whether to locate or not at the source.

b. Systematic Techniques
Although operations managers must exercise considerable judgement in the choice of alternative
locations, there are some systematic and quantitative techniques which can help the decision
process. We will discuss selected techniques to help make a location decision—the location
rating factor, the center-of-gravity technique, and the load-distance technique. The location
factor rating mathematically evaluates location factors, such as those identified in the previous
section. The center-of gravity and load-distance techniques are quantitative models that centrally
locate a proposed facility among existing facilities.

I. Location Factor Rating Method

In the location factor rating system, factors that are important in the location decision are
identified. Each factor is weighted from 0 to 1.00 to prioritize the factor and reflect its
importance. A subjective score is assigned (usually between 0 and 100) to each factor based on
its attractiveness compared with other locations, and the weighted scores are summed. Decisions
typically will not be made based solely on these ratings, but they provide a good way to organize
and rank factors.
For evaluating factors, factor ranking and factor weight rating systems may be used. In the
ranking procedure, a location is judged better or worse than another for a particular factor. By
weighting the factors and rating each location against those weights, a comparison of locations is possible.

Factor ratings are widely used to evaluate location alternatives because (i) their simplicity helps
decide why one site is better than another; (ii) they enable managers to bring diverse locational
considerations into the evaluation process; and (iii) they foster consistency of judgment about
location alternatives.

The following steps are involved in factor rating:


• Develop a list of relevant factors.
• Assign a weight to each factor to indicate its relative importance (weights may total 1.00).
• Assign a common scale to each factor (e.g., 0 to 100 points), and designate any minimums.
• Score each potential location according to the designated scale, and multiply the scores by
the weights.
• Total the points for each location, and either (a) choose the location with the maximum
points, using the ratings in conjunction with a separate economic analysis, or (b) include an
economic factor in the list of factors and choose the location on the basis of maximum
points.
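The steps above can be sketched in a few lines of code. The factors, weights and scores below are illustrative assumptions, not data from this module.

```python
# Illustrative factor-rating comparison of two candidate locations.
# Weights sum to 1.00; scores are on a 0-100 scale.
factors = {  # factor: (weight, score for site A, score for site B)
    "labour availability": (0.30, 80, 60),
    "transport access":    (0.25, 70, 90),
    "market proximity":    (0.25, 60, 85),
    "utilities":           (0.20, 90, 70),
}

def weighted_score(site_index):
    """Total weighted score for one site (site_index 1 = A, 2 = B)."""
    return sum(row[0] * row[site_index] for row in factors.values())

score_a = weighted_score(1)
score_b = weighted_score(2)
print(f"Site A: {score_a:.2f}, Site B: {score_b:.2f}")
```

With these invented numbers site B edges ahead (75.75 versus 74.50), which shows how a small change in weights or scores can tip the decision, hence the advice to pair the ratings with a separate economic analysis.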

II. Quantitative Techniques


Various quantitative models are used to help determine the best location of facilities. Sometimes,
models are tailor-made to meet the specific circumstances of a unique problem. There are some
general models that can be adapted to the needs of a variety of systems. These are center of
gravity model, load distance, linear programming, and simulation. All these models focus on

transportation costs, although each considers a different version of the basic problem. In the next
section, we briefly introduce two types of models that have been applied to the location problem.

a. Center-of-Gravity Technique
The centre-of-gravity method is used to find a location which minimizes transportation costs. It
is based on the idea that all possible locations have a ‘value’ which is the sum of all
transportation costs to and from that location. The best location, the one which minimizes costs,
is represented by what in a physical analogy would be the weighted centre of gravity of all points
to and from which goods are transported. So, for example, two suppliers, each sending 20 tonnes
of parts per month to a factory, are located at points A and B. The factory must then assemble
these parts and send them to one customer located at point C. Since point C receives twice as
many tonnes as points A and B (transportation cost is assumed to be directly related to the tonnes
of goods shipped) then it has twice the weighting of point A or B. The lowest transportation cost
location for the factory is at the centre of gravity of a (weightless) board where the two suppliers’
and one customer’s locations are represented to scale and have weights equivalent to the
weightings of the number of tonnes they send or receive.

In general, transportation costs are a function of distance, weight, and time. The center-of-
gravity, or weight center, technique is a quantitative method for locating a facility such as a
warehouse at the center of movement in a geographic area based on weight and distance. This
method identifies a set of coordinates designating a central location on a map relative to all other
locations.
The starting point for this method is a grid map set up on a Cartesian plane. There are three
locations, 1, 2, and 3, each at a set of coordinates (xi, yi) identifying its location in the grid. The
value Wi is the annual weight shipped from that location. The objective is to determine a central
location for a new facility.

The coordinates for the location of the new facility are computed using the following formulas:

x = Σ xiWi / Σ Wi        y = Σ yiWi / Σ Wi

where
x, y = coordinates of the new facility at the center of gravity
xi, yi = coordinates of existing facility i
Wi = annual weight shipped from facility i

Example: A refining company needs to locate an intermediate warehouse facility between its
refinery plant at place M and its major distributors B, C, D and E. The following shows the
coordinate map for both the plant and the distributors.

Figure 3.6: Center of gravity (coordinate map on a 0–500 grid, plotting the refinery M at
(325,75), distributors B (400,150), C (450,350), D (350,400) and E (25,450), and the computed
centre of gravity at (308,217))

Moreover, shipping volumes from plant M to the major distributors are given as follows:

Location                                     M      B      C      D      E
Gallons of gasoline per month (millions)   1500    250    450    350    450

Solution
Using the above formulas, we can calculate the coordinates of the centre of gravity, i.e. the site
of the new location:

x = (325×1500 + 400×250 + 450×450 + 350×350 + 25×450) / 3000 = 923,750 / 3,000 ≈ 308
y = (75×1500 + 150×250 + 350×450 + 400×350 + 450×450) / 3000 = 650,000 / 3,000 ≈ 217

The location of the new facility should therefore be at approximately (308, 217).
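The same calculation can be checked with a short script, using the coordinates from the map and the shipping volumes from the table above.

```python
# Center-of-gravity calculation for the refinery example.
# Each entry: (x, y) coordinates and monthly shipping volume (million gallons).
points = {
    "M": ((325, 75), 1500),
    "B": ((400, 150), 250),
    "C": ((450, 350), 450),
    "D": ((350, 400), 350),
    "E": ((25, 450), 450),
}

total_w = sum(w for _, w in points.values())
x = sum(xy[0] * w for xy, w in points.values()) / total_w  # Σ xiWi / Σ Wi
y = sum(xy[1] * w for xy, w in points.values()) / total_w  # Σ yiWi / Σ Wi
print(round(x), round(y))  # → 308 217
```

The result matches the centre of gravity marked on the coordinate map.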

b. Load-Distance Technique
A variation of the center-of-gravity method for determining the coordinates of a facility location
is the load-distance technique. In this method, a single set of location coordinates is not
identified. Instead, various locations are evaluated using a load-distance value that is a measure
of weight and distance. For a single potential location, a load-distance value is computed as
follows:

LD = Σ li di

where
LD = the load-distance value
li = the load expressed as a weight, number of trips, or units being shipped from the proposed site
to location i
di = the distance between the proposed site and location i

The distance di in this formula can be the travel distance, if that value is known, or can be
determined from a map. It can also be computed using the following formula for the straight-line
distance between two points, which is also the hypotenuse of a right triangle:

di = √((x − xi)² + (y − yi)²)

where

(x, y) = coordinates of proposed site

(xi, yi) = coordinates of existing facility i

The load-distance technique is applied by computing a load-distance value for each potential
facility location. The implication is that the location with the lowest value would result in the
minimum transportation cost and thus would be preferable.
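A minimal sketch of the technique, assuming straight-line distances; the candidate sites, facility coordinates and loads below are hypothetical.

```python
import math

# Existing facilities: ((x, y) coordinates, load shipped to/from each).
facilities = [((400, 150), 250), ((450, 350), 450), ((350, 400), 350)]

def load_distance(site, facilities):
    """LD = sum over facilities of load * straight-line distance to the site."""
    sx, sy = site
    return sum(
        load * math.hypot(sx - x, sy - y)
        for (x, y), load in facilities
    )

# Evaluate two hypothetical candidate sites and keep the lower LD value.
candidates = {"P1": (300, 200), "P2": (400, 300)}
best = min(candidates, key=lambda c: load_distance(candidates[c], facilities))
print(best)  # → P2
```

Unlike the center-of-gravity method, this evaluates only the sites actually proposed, so it can respect practical constraints such as available plots of land.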

3.5 Facility Layout


Introduction
The process type is reflected in how the operation arranges its activities, or its layout.
Specifically, we now have to decide how the tasks that make up the operation are to be delivered.
The process type determines the nature of the tasks that will be performed – for example, will
these be project activities or will they be part of the work in a line process? The layout
determines where and in what sequence activities that make up a process are located.

Facility layout is a plan of physical arrangement of facilities including human resources,


operating equipment, storage space, material handling equipment and all other supporting
services along with the design of best structure to contain all these facilities. It is the spatial
arrangement of physical resources used to create the product. It also means how the space needed
for material movement, storage, indirect labor, etc is arranged in a factory. Plant layout refers to
the physical arrangement of production facilities. It is the configuration of departments, work
centres and equipment in the conversion process. It is a floor plan of the physical facilities,
which are used in production.

For a factory which is already in operation, this may mean the arrangement that is already
present. However, for a new factory this means the plan of how the machines, equipment, etc
will be arranged in the different sections or shops. These should be arranged in such a way that
material movement costs, the cost of storage between processes, the investment in machines and
equipment, etc. are optimal and the product is as cheap as possible.
The need to plan a layout can emerge due to various reasons. Some of them could be:
 Need to make minor changes in present layout due to method improvement, new type of
inspection plan, and new type of product,

 Need to rearrange the existing layout due to marketing and technological change,
 Re-allocating the existing facilities due to new location, or
 Building a new plant.

Principles of Layout Design


Decisions about layout are made only periodically, but since they have long-term consequences,
they must be made with careful planning.

Generally speaking, therefore, layout design must be done per the following principles:

1. Principle of integration: A good layout is one that integrates men, materials, machines
and supporting services and others in order to get the optimum utilisation of resources
and maximum effectiveness.
2. Principle of minimum distance: This principle is concerned with the minimum travel
(or movement) of man and materials. The facilities should be arranged such that, the total
distance travelled by the men and materials should be minimum and as far as possible
straight line movement should be preferred.
3. Principle of cubic space utilisation: A good layout is one that utilises both horizontal
and vertical space. It is not enough if only the floor space is utilised optimally; the
third dimension, i.e., the height, is also to be utilised effectively.
4. Principle of flow: A good layout is one that makes the materials move in a forward
direction towards the completion stage, i.e., there should not be any backtracking.
5. Principle of maximum flexibility: A good layout is one that can be altered without
much cost and time, i.e., future requirements should be taken into account while
designing the present layout.
6. Principle of safety, security and satisfaction: A good layout is one that gives due
consideration to workers safety and satisfaction and safeguards the plant and machinery
against fire, theft, etc.
7. Principle of minimum handling: A good layout is one that reduces the material
handling to the minimum.

Layouts are designed to meet these principles. After initial designs are developed, improved
designs are sought. This can be a tedious and cumbersome task because the number of possible
designs is so large. For this reason, quantitative and computer-based models are often used.
Nevertheless, as only a few of the layout types can be modeled mathematically, layout design of
physical facilities is still something of an art.

Factors Affecting Layout


Layouts are affected by types of industry, production systems, types of products, volume of
production, and types of manufacturing processes used to get the final products.

Types of Layout
Layout decisions include the best placement of machines (in production settings), offices and
furniture (in office settings), or service centers (in hospitals or department stores). We will
discuss the following layouts in this section:
1. Fixed position layout: addresses the layout requirements of large, bulky projects such as
ships and buildings.
2. Process-oriented layout: deals with low-volume, high-variety production (called
‘job shop’ or intermittent production).
3. Product-oriented: seeks the best personnel and machine utilization in repetitive or
continuous production.
4. Hybrid Layout: the combination of two of the basic layouts listed above.
5. Office layout: positions workers, their equipment, and spaces/offices to provide for
movement and information.
6. Retail layout: allocates shelf space and responds to customer behavior.
7. Warehouse layout: addresses trade-offs between space and material handling.

The first three layout types are called the basic layouts. They are differentiated by the types of
work flows they entail; the work flow, in turn, is dictated by the nature of the product. Services
have work flows, just as manufacturing does. Often the work flow is paper, information, or even
the customers.

Fixed-position layout

In this layout type, the major part of the product remains in a fixed place. All the tools, machines,
workers and smaller pieces of materials are brought to it and the product is completed with the
major part staying in one place. There may be more than one operation performed on the product
at the same time; hence, workers may carry out single or multiple activities to modify a product
or provide a service until completion. Each operation adds to the product until it is completed.
The breakdown of a particular machine will not halt an entire process and work can therefore be
transferred to other machines in the department.

Fixed-position layouts are associated generally with lower volume process types – most usually
projects (as in construction), but sometimes with jobbing processes (specialized contractors in
construction) and batch processes (as with the production of aeroplanes or construction of many
types of the same house on a housing development). Very heavy assemblies (e.g. ship, aircraft,
cranes, rail coaches, highway, a bridge, a house, an oil well, etc) requiring small and portable
tools are made by this method.

Examples:
In manufacturing, the production of heavy, bulky or fragile products, such as ships and airplanes,
and most construction projects take place with the people and machines moving around the
product.

Fixed-position layouts are used in services, e.g. in dental or surgical treatments where the patient
remains in a single location whilst being treated.

Advantages
 Very easy and cheap to arrange.
 Can be easily changed if the product design is changed.
 Since the workers work at one place, supervision is easy.

 Cost of transporting heavy materials is reduced.
 Responsibility for quality is easily fixed on the worker or group of workers which make
the assembly.

Limitations
 Only components which need small and portable tools can be made by this method.
 Skilled workers and complicated jigs and fixtures are required.

Process layout
In a process layout, specific types of operations are grouped together within the manufacturing
or service facility. The machines are not laid out in a particular, sequential process. Therefore,
the product does not move in a specified sequence but would go to a machine centre as and when
required for the particular product. Products move around according to processing requirements.
In manufacturing this allows a range or variety of products to be made. In this type, all the
machines and equipment of the same type are grouped together in one section or area or
department. For example, all welding equipment is kept in one section; all drilling machines in
another; all lathes in a third, and so on. It is used in the intermittent (discontinuous) type of
production. It is most efficient when making products with different requirements or when
handling customers, patients, or clients with different needs.

This layout type is commonly used in hospitals, where specialisms are grouped together – e.g.
accident and emergency, X-ray facilities, pediatrics etc. Since few patients will have identical
problems and so will not receive identical treatment, wards and departments are laid out to
accommodate a wide range of potential patient requirements. Many retail operations, especially
department stores, use a process layout, where the customers move between areas dedicated to
different goods, such as kitchenware, furniture and clothing.

In manufacturing, a process layout is commonly associated with jobbing production, where low
volumes of products such as furniture, high-fashion clothing and jewellery are produced to
individual requirements. In general, low-volume batch production will also be associated with
process layout, although high-volume batch production may follow the product layout that is
described next.

Process layouts are associated with flexible equipment and workers, so that even if a single
operation breaks down the whole process does not have to stop. The problem is that even with
only two products running through this system the flow becomes complex and difficult to
manage. If you move between the different functions, such as in a hospital, you may have to
queue before being ‘processed’ by each of the specialist functions. This layout type is sometimes
known as functional layout; some authors even consider the term ‘process layout’ superseded. This
functional approach is usually not the fastest at handling throughput, and often requires
people to ‘progress chase’ items through the system, or develop complex IT systems to keep
track of the location of particular items.

The law firm in the example described at the start of this chapter originally used a process layout.
The redesign involved splitting the process to meet the needs of different client groups, and the

following two designs were used to provide the necessary focus for each microoperation (or
operation within an operation).

Advantages
 Different products can be made on the same machine, so the number of machines needed
is reduced. This gives lots of flexibility with less capital needed.
 When one machine goes out of order, the job can be done on other similar machines.
 When a worker is absent, another worker of the same section can do the job.
 A worker becomes more skilled and can earn more money by working harder on his
machine.
 Varieties of job make the work more interesting for the workers.
 Layout is flexible with respect to the rate of production, design and methods of
production.
Limitations
 General purpose equipment requires high labor skills, and WIP inventories are higher
because of imbalances in the production processes.
 This layout needs more space.
 Automation of material handling is extremely difficult.
 Completion of a product takes more time due to difficult scheduling, changing setups,
and unique material handling. Total production cycle time is more also due to long
distances and waiting time.
 Raw material has to travel longer distances, thus the material handling cost is high.
 Needs more inspection and coordination.

When designing a process layout, the most common tactic is to arrange departments or work
centers so that the cost of material handling is minimized. For this, departments with large flows
of parts or people between them should be placed next to one another. Material handling costs in
this approach depend on:
 The number of loads or people to be moved between two departments during some period
of time, and
 The distance linked costs of moving loads or people between departments. Cost is
considered to be a function of distance between departments.
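This tactic (cost as loads times distance, summed over department pairs) can be sketched as follows; the flows and distances below are hypothetical.

```python
# Total material-handling cost for a layout: for each pair of departments,
# multiply the loads moved between them by the distance between their
# assigned positions, then sum over all pairs.
flows = {("A", "B"): 100, ("A", "C"): 40, ("B", "C"): 60}      # loads per period
distances = {("A", "B"): 10, ("A", "C"): 20, ("B", "C"): 10}   # metres

def handling_cost(flows, distances):
    """Sum of load * distance over all department pairs."""
    return sum(load * distances[pair] for pair, load in flows.items())

cost = handling_cost(flows, distances)  # 100*10 + 40*20 + 60*10 = 2400
```

Search procedures such as CRAFT repeatedly swap department positions (changing the distances while the flows stay fixed) and keep any swap that lowers this total.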

Computer software is available to deal with layout problems involving up to 40 departments. The
most popular one is CRAFT (Computerized Relative Allocation of Facilities Technique). It’s a
program that produces ‘good’ but not always the ‘optimal’ solutions. CRAFT is a search
technique that examines the alternative layouts systematically to reduce the total material
handling cost. Other software packages include Automated Layout Design Program (ALDEP),
Computerized Relationship Layout Planning (CORELAP), and Factory Flow.

Product layout
In a product layout, machines are dedicated to a particular product, or a very similar small range
of products, and each stage of manufacture is distinct from the next. Product layout involves
locating the transforming resources entirely for the convenience of the transformed resources.
The product layout was developed during mass production, as an extension of the principles of
scientific management in the context of assembly-line production. In a product layout, people
and machines are dedicated to a single product or small range of similar products.
Each workstation is laid out in a sequence that matches the requirements of the product exactly,
and each stage is separate from the next stage.

In product layout, equipment or departments are dedicated to a particular product line, duplicate
equipment is employed to avoid backtracking, and a straight-line flow of material movement is
achievable. Adopting a product layout makes sense when the batch size of a given product or
part is large relative to the number of different products or parts produced.

The sequence of operations in a product layout follows a straightforward sequence, where one
activity in the line cannot be started unless the previous activity has already been completed. In
manufacturing, the product layout is common in automobile assembly and other high-volume
applications. In services, this layout can be found in high-volume, standard services, especially
where there is a tangible element, such as fast-food preparation. IKEA, the furniture retailer, has
a product layout for its stores. People have to follow a pre-defined route through the store, from
one area to another. In this way IKEA achieve rates of customer throughput that few other
retailers can match.

The operation does not need to be laid out in a straight line – indeed, space restrictions often
dictate that a straight line cannot be used. The divisions between the operations remain,
but the overall layout can take a ‘U’ or an ‘S’ shape. In line operations, workstations should be
located close together to minimize materials movement. Materials flow and control is critical,
especially in ensuring that there is a steady flow of work to do and that both stock-outs (where
materials run out) and large piles of work-in-process (WIP) are minimized. Because each
workstation is dependent on the next, the speed of the entire line is determined by the
workstation with the lowest capacity. Furthermore, if a single work centre is not operating the
entire line comes to a halt very rapidly. Japanese automotive manufacturers have made a feature
of this for some time – if there is a problem with any part of the operation, any worker can stop
the line. This focuses attention on removing and preventing recurrence of the problem, which
would be hidden if the line were allowed to continue working.

In this type of layout, one product or one type of product is produced in a given area. This is used
in case of repetitive and continuous production or mass production type industries. The machines
and equipment are arranged in the order in which they are needed to perform operations on a
product. The raw material is taken at one end of the line and goes from one operation to the next
very rapidly with little material handling required.

This layout assumes that:


 Volume of production is adequate for high equipment utilization.
 Product demand is stable enough to justify high investment in specialized equipment.
 Product is standardized or approaching a phase of its life cycle that justifies investment in
specialized equipment.
 Supplies of raw materials and components are adequate and of uniform quality
(adequately standardized) to ensure that they will work with the specialized equipment.

Advantages
 Material handling cost is minimal.
 Workers perform the same operations repeatedly, so they become specialized and do the
job very quickly.
 Since each worker has to do only one type of job, training is easy.
 Control of production becomes very easy.
 Reduced WIP inventories, so the cost of storing materials between operations is lower.
 Less space is required.
 Smooth and continuous work flow.
 Rapid throughput: product completion time is short.

Limitations
 No flexibility: unsuitable if the product design changes.
 Line balancing is difficult to achieve.
 Very costly, because separate machines are needed to do the same operation on different
products.
 If one machine in the line fails, or if one operator in the line is absent, the output is
immediately affected.
 Specialized and strict supervision is required.

Two types of product layout are fabrication and assembly lines. The fabrication line builds
components (viz. car tires, parts of a refrigerator, etc.) on a series of machines. An assembly line
puts the fabricated parts together at a series of workstations. Both are repetitive processes, and in
both cases, the line must be ‘balanced’- that is, the time spent to perform work on one machine
must equal or ‘balance’ the time spent to perform work on the next machine in the fabrication
line.

Assembly lines are a special case of product layout. In a general sense, the term assembly line
refers to progressive assembly linked by some material-handling device. The usual assumption is
that some form of pacing is present and the allowable processing time is equivalent for all
workstations. Within this broad definition, there are important differences among line types. A
few of these are material handling devices (belt or roller conveyor, overhead crane); line
configuration (U-shape, straight, branching); pacing (mechanical, human); product mix (one
product or multiple products); workstation characteristics (workers may sit, stand, walk with the
line, or ride the line); and length of the line (few or many workers). The range of products
partially or completely assembled on lines includes toys, appliances, autos, clothing and a wide
variety of electronic components. In fact, virtually any product that has multiple parts and is
produced in large volume uses assembly lines to some degree.

A more-challenging problem is the determination of the optimum configuration of operators and
buffers in a production flow process. A major design consideration in production lines is the
assignment of operations so that all stages are more or less equally loaded.

Assembly-line systems work well when there is a low variance in the times required to perform
the individual subassemblies. If the tasks are somewhat complex, thus resulting in a higher
assembly-time variance, operators down the line may not be able to keep up with the flow of

parts from the preceding workstation or may experience excessive idle time. An alternative to a
conveyor-paced assembly-line is a sequence of workstations linked by gravity conveyors, which
act as buffers between successive operations.

Assembly lines can be balanced by moving tasks from one individual to another. The central
problem in product layout planning, then, is to balance the output at each workstation on the
production line so that it is nearly the same, while obtaining the desired amount of output. A
well-balanced assembly line has the advantage of high personnel and facility utilization and
equity between employees’ workloads. We will discuss more about assembly line balancing in
one of the forthcoming sections.

The hybrid (process/product) cell layout


In large or complex operations, neither the process nor product layout may be entirely
satisfactory. The machines or work centres (operating theatres, departments in a store) are
designed to accommodate a range of products, not a particular product family (or customer
grouping). This leads to too many compromises in the process characteristics. An approach that
has tried to eliminate these compromises is the adoption of cells, which are designed to meet the
needs of limited range of products or customers. By doing so they can be far more focused on
those needs, rather than trying to meet a much wider range.

The cell layout has a number of features. These include the layout in a U-shape, which allows
one operator to carry out more than one function and to maintain all operations within sight of
each other (facilitates communication and control). The facilities are more flexible than would be
found in a product layout and the operators are multi-skilled (they can carry out more than one
task).

There are other aspects of cell working that firms have found beneficial. These include the team-
working benefits that go with having a small group of people working together, and the increased
autonomy that such cells permit. They can, for example, considerably simplify the scheduling
process, allowing managers to schedule by cell rather than by scheduling each machine. In
addition, it is often found that the work moves faster through cells than is the case in more
traditional line processes, thus achieving short lead-times for customers.

In manufacturing, machines are grouped together in a cell to support the production of a single
product family. This approach is common in high-tech manufacturing environments, where high
volume and moderate variety can be achieved simultaneously. In services, activities are grouped
together to produce similar services or handle the requirements of a particular customer group.
Some high volume, batch-type services such as call centres use a cell layout, where calls are
routed through to specific areas. Department stores are clusters of cells, where each holds the
goods needed by a particular customer group. Each cell also has the capability to deal with the
entire customer transaction, including taking payment and dealing with any after-sales issues.

With the above approach, the machines or points of activity (operating theatres, sections in the
department store) are not dedicated to a particular product family (customer) but are available for
a range of products. Another approach is to group machines or activities together around a
focused, product family cell.

Office Layout
The main difference between office and factory layouts is the importance placed on information.
However, in some office environments, just as in manufacturing, production relies on the flow of
material. Office layout deals with the grouping of workers, their equipment, and spaces/offices to
provide for comfort, safety, and movement of information.

We should note two major trends in case of office layout. First, technology, such as cellular
phones, beepers, faxes, the Internet, home offices, laptop computers, and PDAs, allows
increasing layout flexibility by moving information electronically. The technological change is
altering the way offices function. Second, virtual companies create dynamic needs for space and
services. These two changes require fewer office employees on-site.

Even though the movement of information is increasingly electronic, analysis of office layouts
still requires a task-based approach. Managers, therefore, examine both electronic and
conventional communication patterns, separation needs, and other conditions affecting employee
effectiveness by using a tool called a relationship chart.
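A relationship chart can be represented very simply in code. The sketch below uses the common A/E/I/O/U/X closeness ratings (A = absolutely necessary through X = undesirable); the departments, ratings, adjacency set and scoring scheme are all invented for illustration:

```python
# A relationship (REL) chart assigns each pair of groups a closeness rating.
# One simple way to compare office layouts is to convert the ratings to
# weights and reward layouts that place highly rated pairs in adjacent
# spaces. All data below are invented for illustration.
weights = {"A": 4, "E": 3, "I": 2, "O": 1, "U": 0, "X": -4}

rel_chart = {                       # desired closeness between groups
    ("Accounts", "Payroll"): "A",
    ("Accounts", "Reception"): "U",
    ("Payroll", "Reception"): "X",
}

adjacent = {("Accounts", "Payroll"), ("Payroll", "Reception")}  # a proposed layout

def layout_score(chart, adjacency):
    """Sum the weights of the ratings whose pairs end up adjacent."""
    return sum(weights[r] for pair, r in chart.items() if pair in adjacency)

print(layout_score(rel_chart, adjacent))  # 4 + (-4) = 0: the X pair hurts
```

Moving Payroll away from Reception (so only the A-rated pair stays adjacent) would raise the score to 4, which is how such a chart guides the layout decision.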

General office-area guidelines allot an average of about 100 square feet per person (including
corridors). A major executive is allotted about 400 square feet, and a conference room area is
based on 25 square feet per person, up to 30 people. By making effective use of the vertical
dimension in a workstation, some office designers expand upward instead of outward. This keeps
each workstation unit (what designers call the ‘footprint’) as small as possible.
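The guideline figures above can be combined into a rough space estimate. The function below is a sketch, and the headcounts in the example are invented:

```python
def office_area(staff, executives, conference_seats):
    """Rough floor-area estimate (sq ft) using the guideline figures above:
    ~100 sq ft per person including corridors, ~400 per major executive,
    and 25 sq ft per conference seat, capped at 30 seats."""
    return (staff * 100
            + executives * 400
            + min(conference_seats, 30) * 25)

print(office_area(staff=20, executives=2, conference_seats=12))
# 20*100 + 2*400 + 12*25 = 3100 sq ft
```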

Retail Layout
Retail layouts are based on the idea that sales and profitability vary directly with customer
exposure to products. Thus, most retail managers try to expose customers to as many products as
possible. Studies show that the greater the rate of exposure, the greater the sales and the higher
the return on investment.
The following ideas are helpful for determining the overall arrangement of many stores:
 Locate the high-draw items around the periphery of the store. Thus, we tend to find dairy
products on one side of a supermarket and bread and bakery products on another.
 Use prominent locations for high-impulse and high-margin items such as house-wares,
beauty aids, and shampoos.
 Distribute what are known in the trade as ‘power items’-items that may dominate a
purchasing trip- to both sides of an aisle, and disperse them to increase the viewing of
other items.
 Use end aisle locations because they have a very high exposure rate.
 Convey the mission of the store by careful selection in the positioning of the lead-off
department. For instance, if prepared foods are part of the mission, position the bakery up
front to appeal to convenience-oriented customers.

Once the overall layout of a retail store has been decided, products need to be arranged for sale.
Many considerations go into this arrangement. However, the main objective of retail layout is to
maximize profitability per square foot of floor space (or, in some stores, per linear foot of shelf
space). Big-ticket, or expensive, items may yield greater dollar sales, but the profit per square
foot may be lower.
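A quick, invented illustration of this trade-off:

```python
# A big-ticket item can produce more dollar profit in total yet less profit
# per square foot of shelf space. All numbers are invented for illustration.
def profit_per_sq_ft(weekly_profit, floor_area_sq_ft):
    return weekly_profit / floor_area_sq_ft

tv = profit_per_sq_ft(weekly_profit=900.0, floor_area_sq_ft=60.0)      # 15.0
snacks = profit_per_sq_ft(weekly_profit=300.0, floor_area_sq_ft=10.0)  # 30.0

# On the retail layout objective, the smaller item earns its space better.
print(tv, snacks)
```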

Although the main objective of retail layout is to maximize profit, there are other aspects of the
service that managers need to consider. The term servicescape describes the physical
surroundings in which the service is delivered and the humanistic effect those surroundings have
on customers and employees. Its proponents argue that, in order to provide a good service layout, a firm
must consider these three elements:
 Ambient conditions, such as lighting, sound, smell, and temperature. All of these affect
workers and customers and can affect how much is spent and how long a person stays in
the building. Think, for example, of a fine-dining restaurant with linen tablecloths, a candlelit
atmosphere, and light music.
 Spatial layout and functionality, which involve customer circulation path planning, aisle
characteristics (such as width, direction, angle, and shelf-spacing), and product grouping.
 Signs, symbols, and artifacts encourage shoppers to slow down and browse. For example,
greeter at the door, flower vases at the approach to the office.

The links between process choice and layout


There are clear links between the basic choice of process and type of layout. We can summarize
this in Figure 3.7

Process type      Type of layout
------------      --------------
project           fixed-position
job               process
batch             process/product
line              product
continuous        product

Figure 3.7: The link between process choice and layout.

It should be noted however that the link between batch and the type of layout would depend
upon volume and variety –in low-volume/high-variety batch, process layouts would be used; in
high-volume/low-variety batch, product layouts would be appropriate. Besides, continuous
process differs from line due to the fact that a line process can be stopped at a particular stage

and the product will be at that stage of production; in continuous process, stopping the process is
an exception and is very costly (e.g. shutting down a blast furnace).

It is important to note that operations and industries are not forever tied to one type of process or
one type of layout. Nor is it the case that every operation involves only one process type or only
one layout. This is most obviously the case in those service operations which have a
back and front office, such as a bank, or a front-of-house and back-of-house, such as a restaurant.
In these operations it may be that customers are processed in one, typically as a job shop, whilst
the materials and information processing that occurs to support the front office may be batch or
even line production.

3.6 Assembly-line Balancing

Product layouts (assembly lines) are used for high-volume production. To attain the required
output rate as efficiently as possible, jobs are broken down into their smallest indivisible
portions, called work elements. Work elements are so small that they cannot be performed by
more than one worker or at more than one workstation. But it is common for one worker to
perform several work elements as the product passes through his or her workstation. Part of the
layout decision is concerned with grouping these work elements into workstations so products
flow through the assembly line smoothly. A workstation is any area along the assembly line that
requires at least one worker or one machine. If each workstation on the assembly line takes the
same amount of time to perform the work elements that have been assigned, then products will
move successively from workstation to workstation with no need for a product to wait or a
worker to be idle. The process of equalizing the amount of work at each workstation is called
line balancing.

Line balancing is done to minimize imbalance between machines or personnel while meeting a
required output from the line. The balancing effort operates under two constraints: precedence
requirements and cycle time restrictions.
Precedence requirements are physical restrictions on the order in which operations are
performed on the assembly line. For example, we would not ask a worker to package a product
before all the components were attached, even if he or she had the time to do so before passing
the product to the next worker on the line. To facilitate line balancing, precedence requirements
are often expressed in the form of a precedence diagram. The precedence diagram is a network,
with work elements represented by circles or nodes and precedence relationships represented by
directed line segments connecting the nodes.

Cycle time, the other restriction on line balancing, refers to the maximum amount of time the
product is allowed to spend at each workstation if the targeted production rate is to be reached.
Desired cycle time is calculated by dividing the time available for production by the number of
units scheduled to be produced:

    Cd = production time available / desired units of output

Suppose a company wanted to produce 120 units in an 8-hour day. The cycle time necessary to
achieve the production quota is

    Cd = (8 hours × 60 minutes/hour) / 120 units = 480 / 120 = 4 minutes per unit
Cycle time can also be viewed as the time between completed items rolling off the assembly line.
Consider the three-station assembly line shown here.

    Station 1 --> Station 2 --> Station 3
      4 min         4 min         4 min

It takes 12 minutes (i.e., 4+ 4+ 4) for each item to pass completely through all three stations the
assembly line. The time required to complete an item is referred to as its flow time. However, the
assembly line does not work on only one item at a time. When fully operational, the line will be
processing three items at a time, one at each workstation, in various stages of assembly.
Every 4 minutes a new item enters the line at workstation 1, an item is passed from workstation 1
to workstation 2, another item is passed from workstation 2 to workstation 3, and a completed
item leaves the assembly line. Thus, a completed item rolls off the assembly line every 4
minutes. This 4-minute interval is the actual cycle time of the line.
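The relationships in this example can be checked with a few lines of code. This is a sketch; the 8-hour, 120-unit quota repeats the earlier desired-cycle-time calculation:

```python
# The three-station line above: flow time is the sum of the station times,
# while the line's actual cycle time is the largest station time.
station_times = [4, 4, 4]               # minutes at stations 1, 2, 3

flow_time = sum(station_times)          # time for one item to pass through: 12 min
actual_cycle_time = max(station_times)  # interval between finished items: 4 min

# Desired cycle time from a production quota, e.g. 120 units in an 8-hour day:
desired_cycle_time = (8 * 60) / 120     # 4.0 minutes per unit

print(flow_time, actual_cycle_time, desired_cycle_time)
```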

The actual cycle time, Ca, is the maximum workstation time on the line. It differs from the
desired cycle time when the production quota does not match the maximum output attainable by
the system. Sometimes the production quota cannot be achieved because the time required for
one work element is too large. To correct the situation, the quota can be revised downward or
parallel stations can be set up for the bottleneck element.

Line balancing is basically a trial-and-error process. We group elements into workstations,
recognizing time and precedence constraints. For simple problems, we can evaluate all feasible
groupings of elements. For more complicated problems, we need to know when to stop trying
different workstation configurations. The efficiency of the line can provide one type of guideline;
the theoretical minimum number of workstations provides another. The formulas for efficiency,
E, and minimum number of workstations, N, are

    E = Σti / (n × Ca)        N = Σti / Cd

Where
    ti = completion time for element i
    j = number of work elements (the sum Σti runs over i = 1, …, j)
    n = actual number of workstations
    Ca = actual cycle time
    Cd = desired cycle time

The total idle time of the line, called balance delay, is calculated as (1 - efficiency). Efficiency
and balance delay are usually expressed as percentages. In practice, it may be difficult to attain
the theoretical number of workstations or 100% efficiency.
The line balancing process can be summarized as follows:
1. Draw and label a precedence diagram.
2. Calculate the desired cycle time required for the line.
3. Calculate the theoretical minimum number of workstations.
4. Group elements into workstations, recognizing cycle time and precedence constraints.
5. Calculate the efficiency of the line.
6. Determine if the theoretical minimum number of workstations or an acceptable efficiency
level has been reached. If not, go back to step 4.
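The six steps above can be sketched in code using the longest-operation-time heuristic for step 4. The work elements, times (in minutes) and precedence data below are invented for illustration:

```python
# Sketch of the line-balancing procedure with a longest-operation-time
# heuristic. Element times and precedence relationships are illustrative.
import math

tasks = {"a": 3, "b": 4, "c": 2, "d": 5, "e": 1, "f": 3}   # element: time (min)
preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"], "e": ["d"], "f": ["e"]}

cycle_time = (8 * 60) / 80          # step 2: 480 min available / 80 units = 6 min
n_min = math.ceil(sum(tasks.values()) / cycle_time)   # step 3: theoretical minimum

# Step 4: open stations one at a time; at each point assign the longest
# eligible element (all predecessors assigned, fits in the remaining time).
stations, assigned = [], set()
while len(assigned) < len(tasks):
    station, time_left = [], cycle_time
    while True:
        eligible = [t for t in tasks if t not in assigned
                    and all(p in assigned for p in preds[t])
                    and tasks[t] <= time_left]
        if not eligible:
            break
        pick = max(eligible, key=lambda t: tasks[t])
        station.append(pick)
        assigned.add(pick)
        time_left -= tasks[pick]
    stations.append(station)

efficiency = sum(tasks.values()) / (len(stations) * cycle_time)   # step 5
print(stations, n_min, round(efficiency, 2))
```

Here the heuristic needs four stations against a theoretical minimum of three, giving 75% efficiency (a 25% balance delay), so step 6 would send us back to try other groupings.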

Computerized Line Balancing


Line balancing by hand becomes unwieldy as the problems grow in size. Fortunately, there are
software packages that will balance large lines quickly. IBM’s COMSOAL (Computer Method
for Sequencing Operations for Assembly Lines) and GE’s ASYBL (Assembly Line
Configuration Program) can assign hundreds of work elements to workstations on an assembly
line. These programs, and most that are commercially available, do not guarantee optimal
solutions. They use various heuristics, or rules, to balance the line at an acceptable level of
efficiency. Five common heuristics are: longest operation time, shortest operation time, most
number of following tasks, least number of following tasks, and ranked positional weight.
Positional weights are calculated by summing the processing times of those tasks that follow an
element. These heuristics specify the order in which work elements are considered for allocation
to workstations. Elements are assigned to workstations in the order given until the cycle time is
reached or until all tasks have been assigned.
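As an illustration of the ranked-positional-weight heuristic, the sketch below computes weights for a small, invented precedence network. One common convention, used here, includes the element's own time in its weight:

```python
# Ranked positional weight (RPW): an element's weight is its own time plus
# the times of every element that follows it in the precedence network.
# Elements are then considered for assignment in decreasing weight order.
# Times and precedence data are invented for illustration.
tasks = {"a": 3, "b": 4, "c": 2, "d": 5}
succs = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}   # immediate followers

def followers(x):
    """All transitive successors of element x, each counted once."""
    out = set(succs[x])
    for s in succs[x]:
        out |= followers(s)
    return out

def positional_weight(t):
    return tasks[t] + sum(tasks[s] for s in followers(t))

ranked = sorted(tasks, key=positional_weight, reverse=True)
print(ranked)   # order in which elements are considered for assignment
```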

In addition to balancing a line for a given cycle time, managers must also consider four other
options: pacing, behavioral factors, number of models produced, and cycle times.
Pacing is the movement of product from one station to the next after the cycle time has elapsed.
Paced lines have no buffer inventory. Unpaced lines require inventory storage areas to be placed
between stations.

Chapter Summary
Design is the activity which shapes the physical form and purpose of both products and services
and the processes that produce them. Good design makes good business sense because it
translates customer needs into the shape and form of the product or service and so enhances
profitability. Product design includes processes such as concept generation, screening,
Preliminary design, design evaluation and improvement, and prototyping and final design.
Typical techniques such as quality function deployment, value engineering and Taguchi methods
are used in design evaluation and improvement. In practice product design and process design
should be carried out interactively to improve the quality of both product and service design and
process design. In manufacturing, these process types are (in order of increasing volume and
decreasing variety) project, jobbing, batch, mass and continuous processes. In service operations,

although there is less consensus on the terminology, the terms often used (again in order of
increasing volume and decreasing variety) are professional services, service shops and mass
services. The overall nature of any process is strongly influenced by the volume and variety of
what it has to process. The amount of capacity an organization will have depends on its view of
current and future demand. It is when its view of future demand is different from current demand
that this issue becomes important. When an organization has to cope with changing demand, a
number of capacity decisions need to be taken. These include choosing the optimum capacity for
each site, balancing the various capacity levels of the operation in the network, and timing the
changes in the capacity of each part of the network. Important influences on these decisions
include the concepts of economy and diseconomy of scale, supply flexibility if demand is
different from that forecast, and the profitability and cash-flow implications of capacity timing
changes. Facility location decisions are strategic, long-term and non-repetitive in nature. There
are four location decision hierarchies: national, regional, community, and site considerations.
There are some methods which are used to evaluate and compare potential site locations: factor
rating method, centre of gravity etc. Subjective techniques are also used. Facility layout refers
to an optimum arrangement of different facilities including man, machine, equipment, materials,
etc. There are four basic layout types. They are fixed-position layout, functional layout, cell
layout and product layout. The type of layout that an operations system would consider is partly
influenced by the nature of the process type, which in turn depends on the volume–variety
characteristics of the operation. Partly, also, the decision will depend on the objectives of the
operation. Cost and flexibility are particularly affected by the layout decision.

Review Questions
Multiple Choice Questions

1. An organization's process strategy


a. will have long-run impact on efficiency and flexibility of production
b. is the same as its transformation strategy
c. must meet various constraints, including cost
d. is concerned with how resources are transformed into goods and services
e. All of the above are true.
2. The layout approach that addresses trade-offs between space and material handling is called
the fixed position layout. A. True B. False
3. A product-focused process is commonly used to produce
a. high-volume, high-variety products
b. low-volume, high-variety products
c. high-volume, low-variety products
d. low-variety products at either high- or low-volume
e. high-volume products of either high- or low-variety
4. What is sometimes referred to as rated capacity?
a. efficiency
b. utilization
c. effective capacity
d. expected output
e. design capacity
5. Which of the following is false regarding capacity expansion?
a. "Average" capacity sometimes leads demand, sometimes lags it.

b. If "lagging" capacity is chosen, excess demand can be met with overtime or
subcontracting.
c. Total cost comparisons are a rather direct method of comparing capacity alternatives.
d. Capacity may only be added in large chunks.
e. All of the above are true.
Discussion Questions
1. Describe the process choice that you would expect to find in the following:
a. A fast food restaurant
b. A general hospital
c. A car repair workshop.
2. What is process capability? What is process capability analysis, and when should it be
conducted? What are some of its benefits?
3. How are modules useful in manufacturing processes?
4. What is mass customization?
5. Compare an intermittent process to a continuous process on the basis of variety, volume,
equipment utilization, and inventory.

CHAPTER IV
OPERATING DECISIONS

The production planning system (PPS) or, as it is sometimes referred to, the production planning
and control system (PPCS), is critical to a firm’s success. This system includes many
management activities, such as planning production and materials, monitoring equipment utilization,
maintaining inventories, scheduling, setting customer due dates, and supplying information to the
other functions. The PPCS involves the manager in two of the typical activities of
management: planning and control. The PPCS is an integrated system that both creates plans and
controls activities to ensure that they adhere to the plan. This chapter introduces and provides an
overview of some of the principles and methods of planning and control.
Learning Objectives
After learning this chapter, students will be able to:
 Explain the concept and elements of operations planning systems
 Formulate aggregate production planning
 Distinguish the elements and features of master production schedule
 Explain the process and elements of Material Requirement Planning and ERP
 Describe the process of operations schedule.

4.1 Production Planning and Control Systems: An Overview

Within the constraints imposed by its design, an operation has to be run on an ongoing basis.
‘Planning and control’ is concerned with managing the ongoing activities of the operation so as
to satisfy customer demand. All operations require plans and require controlling, although the
degree of formality and detail may vary.

In this chapter, planning and control will be treated as two separate activities. Planning describes
the activities that take place in order for the transformation process to occur, whilst control
describes those activities that take place during the conversion of inputs into outputs. However,
you should be aware that in practice it is not always possible to separate planning activities and
control activities.

[Figure 4.1 shows a hierarchy of planning activities arranged by time horizon:]

Long range:
    Process planning
    Strategic capacity planning
Intermediate range:
    Forecasting and demand management
    Sales and operations (aggregate) planning, yielding a sales plan and an
    aggregate operations plan
    Manufacturing: master scheduling, then material requirements planning,
    then order scheduling
    Services: weekly workforce and customer scheduling
Short range:
    Daily workforce and customer scheduling

Figure 4.1 Production Planning System

Process planning deals with determining the specific technologies and procedures required to
produce a product. Strategic capacity planning deals with determining the long-term capabilities
(such as the size and scope) of the production systems. Aggregate planning, sometimes known as
sales and operations planning, involves taking the sales plan from marketing and developing an
aggregate operations plan that balances demand and supply. For service and manufacturing, the
aggregate operations plan is essentially the same, the major exception being manufacturing’s use
of inventory buildups and cutback to smooth production. After the aggregate operations plan is
developed, manufacturing and service planning are generally quite different.
More specifically, in manufacturing, the planning process can be summarized as follows: the
production control group inputs existing or forecast orders into a master production schedule
(MPS). The MPS generates the amount and dates of specific items required for each order.
Rough Cut Capacity planning (RCCP) then verifies that production and warehouse facilities,
equipment, and labor are available and that key vendors have allocated sufficient capacity to
provide materials when required. Material requirement planning (MRP) takes the end product
requirement from MPS and breaks them down into their component parts and subassemblies to
create a material plan. This plan specifies when production and purchase orders must be placed
for each part and subassembly to complete the product on schedule. Most MRP systems also

allocate capacity to each order (this is called capacity requirements planning). The final planning
activity is daily or weekly order scheduling of jobs to specific machines, production lines, or
work centers.
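The MRP explosion step described above can be sketched as a recursive walk of a bill of materials (BOM), turning an MPS end-item quantity into gross component requirements. The bicycle product structure and quantities are invented for illustration:

```python
# Sketch of the MRP explosion: master-schedule quantities for an end item
# are broken down through the BOM into gross component requirements.
# The product structure below is invented for illustration.
bom = {
    "bicycle": {"frame": 1, "wheel": 2},
    "wheel": {"rim": 1, "spoke": 36},
    "frame": {}, "rim": {}, "spoke": {},
}

def explode(item, qty, req=None):
    """Accumulate gross requirements for every component of `item`."""
    req = {} if req is None else req
    for part, per_unit in bom[item].items():
        req[part] = req.get(part, 0) + qty * per_unit
        explode(part, qty * per_unit, req)     # recurse into subassemblies
    return req

print(explode("bicycle", 50))   # 50 frames, 100 wheels, 100 rims, 3600 spokes
```

A real MRP system would also net these gross figures against on-hand inventory and offset them by lead times; this sketch covers only the BOM breakdown.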

In services, once the aggregate staffing level is determined, the focus is on workforce and
customer scheduling during the week, or even hour by hour during the day. Workforce schedules
are a function of the hours the service is available to a customer, the particular skills needed at
particular times over the relevant period, and so on. Customer (demand) scheduling deals with
setting appointments and reservations for customers to use the service and assigning priorities
when they arrive at the service facility.
Long-term plans are usually made for all of the operation’s outputs, or for general product or
service groupings. Medium-term plans are more detailed, and usually concern product families
or service offerings. Short-term plans for manufacturing usually involve individual products,
components or customer orders, whilst short-term plans for services are usually made for
individual staff members and customers.

[Figure 4.2 pairs each level of the planning hierarchy with its capacity
management technique:]

Planning activity                                 Capacity management technique
Production planning (fed by demand management)    Resource requirement planning
Master production scheduling (with final          Rough-cut capacity planning
  assembly scheduling)
Material requirements planning                    Capacity requirements planning
Production activity control                       Input/output control
  (operations sequencing)

Figure 4.2 Capacity Decisions over the Time Horizon

In summary, the major activities in the production planning and control system are, thus, as
illustrated in Figure 4.1 and Figure 4.2. The figures are hierarchical frameworks reflecting the
interrelated characteristics of these activities. The top level decisions are made first and then the
increasingly detailed decisions of the lower levels are made. Note that the appropriate capacity
management technique is used at the same time as each level of the plan is developed.

4.1.1 Aggregate Production/ Operations Planning
Aggregate Production/ Operations Planning is a planning process that determines the
resource capacity a firm will need to meet its demand over an intermediate time horizon—6 to 12
months in the future. Within this time frame, it is usually not feasible to increase capacity by
building new facilities or purchasing new equipment; however, it is feasible to hire or lay off
workers, increase or reduce the workweek, add an extra shift, subcontract out work, use
overtime, or build up and deplete inventory levels. It is also known as aggregate planning.

Aggregate production planning is also defined by some authors as the process of determining
output levels of product groups over the coming six to eighteen months on a weekly or monthly
basis. It identifies the overall level of outputs in support of the business plan.

We use the term aggregate because the plans are developed for product lines or product families,
rather than individual products. An aggregate operations plan might specify how many bicycles
are to be produced but would not identify them by color, size, tires, or type of brakes. Resource
capacity is also expressed in aggregate terms, typically as labor or machine hours. Labor hours
would not be specified by type of labor, nor machine hours by type of machine. And they may be
given only for critical processes.

Please note that the terms operations plan, production plan, and aggregate plan are used
interchangeably in this module.

The objective of aggregate operations planning is to develop an economic strategy for meeting
demand. An economic strategy for meeting demand can be attained either by adjusting capacity
or by managing demand. Hence, aggregate planning basically evaluates alternative capacity
sources to find an economic strategy for satisfying demand.

If demand for a company’s products or services is stable over time, then the resources necessary
to meet demand are acquired and maintained over the time horizon of the plan, and minor
variations in demand are handled with overtime or undertime. Aggregate planning becomes more
of a challenge when demand fluctuates over the planning horizon.

Strategies of APP
Seasonal demand patterns can be met by:
1. Producing at a constant rate and using inventory to absorb fluctuations in demand (level
production)
2. Hiring and firing workers to match demand (chase demand).
3. Increasing or decreasing working hours (overtime and undertime)
4. Subcontracting work to other firms
5. Using part-time workers
6. Providing the service or product at a later time period (backordering)

When one of these alternatives is selected, a company is said to have a pure strategy for
meeting demand. When two or more are selected, a company has a mixed strategy.

Level Production

The level production strategy sets production at a fixed rate (usually to meet average demand)
and uses inventory to absorb variations in demand. During periods of low demand,
overproduction is stored as inventory, to be depleted in periods of high demand. The cost of this
strategy is the cost of holding inventory, including the cost of obsolete or perishable items that
may have to be discarded.

Chase Demand
The chase demand strategy matches the production plan to the demand pattern and absorbs
variations in demand by hiring and firing workers. During periods of low demand, production is
cut back and workers are laid off. During periods of high demand, production is increased and
additional workers are hired. The cost of this strategy is the cost of hiring and firing workers.
This approach would not work for industries in which worker skills are scarce or competition for
labor is intense, but it can be quite cost-effective during periods of high unemployment or for
industries with low-skilled workers.

Overtime and Undertime


Overtime and undertime are common strategies when demand fluctuations are not extreme. A
competent staff is maintained, hiring and firing costs are avoided, and demand is met temporarily
without investing in permanent resources. Disadvantages include the premium paid for overtime
work, a tired and potentially less efficient workforce, and the possibility that overtime alone may
be insufficient to meet peak demand periods. Undertime can be achieved by working fewer hours
during the day or fewer days per week. In addition, vacation time can be scheduled during
months of slow demand. Employers use shorter workweeks and mandatory vacations in
economic downturns.

Subcontracting
Subcontracting or outsourcing is a feasible alternative if a supplier can reliably meet quality and
time requirements. This is a common solution for component parts when demand exceeds
expectations for the final product. The subcontracting decision requires maintaining strong ties
with possible subcontractors and first-hand knowledge of their work. Disadvantages of
subcontracting include reduced profits, loss of control over production, long lead times, and the
potential that the subcontractor may become a future competitor.

Part-Time Workers
Using part-time workers is feasible for unskilled jobs or in areas with large temporary labor
pools (such as students, homemakers, or retirees). Part-time workers are less costly than full-time
workers—they receive no health-care or retirement benefits—and are more flexible—their hours
usually vary considerably. Part-time workers have been the mainstay of retail, fast-food, and
other services for some time and are becoming more accepted in manufacturing and government
jobs. Japanese manufacturers traditionally use a large percentage of part-time or temporary
workers. In the United States, part-time and temporary workers now account for about one-third
of the workforce, and their use is expected to increase as companies gingerly recover from recession.

Backlogs, Backordering, and Lost Sales


Companies that offer customized products and services accept customer orders and fill them at a
later date. The accumulation of these orders creates a backlog that grows during periods of high
demand and is depleted during periods of low demand. The planned backlog is an important part
of the aggregate plan. For make-to-stock companies, customers who request an item that is
temporarily out-of-stock may have the option of backordering the item. If the customer is
unwilling to wait for the backordered item, the sale will be lost. Although in general both
backorders and lost sales should be avoided, the aggregate plan may include an estimate of both.
Backorders are added to the next period’s requirements; lost sales are not.

Factors Affecting Aggregate Planning


The aggregate operations plan should reflect company policy (such as avoiding layoffs, limiting
inventory levels, and maintaining a specified customer service level) and strategic objectives
(such as capturing a certain share of the market or achieving targeted levels of quality or profit).
Other inputs include financial constraints, demand forecasts (from sales), and capacity
constraints (from operations).
Put shortly, the following factors should be considered carefully before the aggregate planning
process actually starts:
 Complete information about available production facilities and raw materials
 A solid demand forecast covering the medium-range period
 Financial planning surrounding production costs, including raw materials, labor,
inventory planning, etc.
 Organizational policy on labor management, quality management, etc.

Techniques in Aggregate Production Planning Decisions


One aggregate planning strategy is not always preferable to another. The most effective strategy
depends on the demand distribution, competitive position, and cost structure of a firm or product
line. The process of aggregate planning involves formulating strategies for meeting demand,
constructing production plans from those strategies, determining the cost and feasibility of each
plan, and selecting the lowest cost plan from among the feasible alternatives. The effectiveness
of the aggregate planning process is directly related to management’s understanding of the cost
variables involved and the reasonableness of the scenarios tested.

The following are relevant costs that planners may consider in APP decisions. These are:
a. Regular-time costs: regular-time wages paid to employees, as well as contributions to
benefits such as health insurance, dental care, social security, and retirement funds, and
pay for vacations, holidays, and certain other types of absence.
b. Overtime costs: overtime wages, which typically are 150% of regular-time wages for
evening work, 200% of regular-time wages for weekend work, and 250% for work
during holidays.
c. Hiring and layoff costs: hiring costs include the costs of advertising jobs, interviews,
training programs for new employees, scrap caused by the inexperience of new
employees, lost production, and initial paperwork. Layoff costs include the costs of exit
interviews, severance pay, retraining of remaining workers and managers, and lost
productivity.
d. Inventory holding costs: costs that vary with the level of inventory investment, i.e., the
cost of capital tied up in inventory, various storage and warehousing costs, pilferage
and obsolescence costs, insurance costs, and taxes.
e. Backorder and stockout costs: the costs of expediting past-due orders, the costs of lost
sales, and the potential cost of losing future sales to competitors (sometimes called loss
of goodwill). These costs are usually very hard to measure.
f. Subcontracting costs: incurred when there is a temporary capacity shortage and the
company's operations management decides to outsource some portion of production.

Several quantitative techniques are available to help with aggregate planning decisions.
These techniques include simple quantitative methods, linear programming, and the
transportation method.

The following are illustrations using simple quantitative methods.


Example 1: The Good and Rich Candy Company makes a variety of candies in three factories
worldwide. Its line of chocolate candies exhibits a highly seasonal demand pattern, with peaks
during the winter months (for the holiday season and Valentine’s Day) and valleys during the
summer months (when chocolate tends to melt and customers are watching their weight). Given
the following costs and quarterly sales forecasts, determine whether (a) level production, or (b)
chase demand would more economically meet the demand for chocolate candies:

Quarter                   1        2        3         4
Sales forecast (lb)       80,000   50,000   120,000   150,000

Hiring cost               = $100 per worker
Firing cost               = $500 per worker
Inventory carrying cost   = $0.50 per pound per quarter
Regular production cost   = $2.00 per pound
Beginning workforce       = 100 workers
Production per employee   = 1,000 pounds per quarter
Solution
a. For the level production strategy, we first calculate average quarterly demand: (80,000 + 50,000 + 120,000 + 150,000) / 4 = 100,000 pounds per quarter.

This becomes our planned production for each quarter. Since each worker can produce 1000
pounds a quarter, 100 workers will be needed each quarter to meet the production requirements
of 100,000 pounds. Production in excess of demand is stored in inventory, where it remains until
it is used to meet demand in a later period. Demand in excess of production is met by using
inventory from the previous quarter. The production plan and resulting inventory costs are as
follows:

Quarter              1         2         3         4         Total
Demand               80,000    50,000    120,000   150,000   400,000
Regular production   100,000   100,000   100,000   100,000   400,000
Inventory            20,000    70,000    50,000    0         140,000

Inventory carrying cost = 140,000 lb x $0.50 = $70,000

b. For the chase demand strategy, production each quarter matches demand. To accomplish
this, workers are hired at a cost of $100 each and fired at a cost of $500 each. Since each
worker can produce 1000 pounds per quarter, we divide the quarterly sales forecast by
1000 to determine the required workforce size each quarter. We begin with 100 workers
and hire and fire as needed. The production plan and resulting hiring and firing costs are
given here.

Quarter           1        2        3         4         Total
Demand            80,000   50,000   120,000   150,000   400,000
Production        80,000   50,000   120,000   150,000   400,000
Workers needed    80       50       120       150
Workers hired     -        -        70        30        100
Workers fired     20       30       -         -         50

Hiring cost = 100 x $100 = $10,000; firing cost = 50 x $500 = $25,000; total = $35,000

Comparing the cost of level production ($70,000) with chase demand ($35,000), we find that
chase demand is the best strategy for the Good and Rich line of candies.
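The two strategy costs above can be recomputed with a short script. This is only a sketch of the example's arithmetic; the variable names are ours, and the cost rates are hard-coded from the data given:

```python
# Level vs. chase strategy costs for the Good and Rich example.
demand = [80_000, 50_000, 120_000, 150_000]    # quarterly sales forecast (lb)

# (a) Level production: produce average demand every quarter and carry
#     the surplus as inventory at $0.50 per pound per quarter.
level_production = sum(demand) // len(demand)  # 100,000 lb per quarter
inventory = 0
pounds_carried = 0
for d in demand:
    inventory += level_production - d          # surplus (or drawdown) this quarter
    pounds_carried += inventory                # pounds held at quarter end
level_cost = pounds_carried * 0.50

# (b) Chase demand: size the workforce to demand each quarter
#     (1,000 lb per worker), hiring at $100 and firing at $500 per worker.
workers, hired, fired = 100, 0, 0
for d in demand:
    needed = d // 1_000
    if needed > workers:
        hired += needed - workers
    else:
        fired += workers - needed
    workers = needed
chase_cost = hired * 100 + fired * 500

print(level_cost, chase_cost)                  # 70000.0 35000
```

The script confirms the comparison: chase demand ($35,000) costs half as much as level production ($70,000) for this demand pattern.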

Example 2: XYZ Company's six-month forecasted demand and the costs of various resources are
given below:

Period (month)   1     2     3     4     5     6     Total
Forecast         200   200   300   400   500   200   1,800

Costs of regular, overtime and subcontracted output, inventory, and backorders:

Regular output    Birr 2 per tractor
Overtime output   Birr 3 per tractor
Subcontracting    Birr 6 per tractor
Inventory         Birr 1 per tractor per period, on average inventory
Backorder         Birr 5 per tractor per period

XYZ Company produces tractors for customers in the world market. The company has 15 regular
workers, and the planner assumes that there is no beginning inventory in the first period and
that the planned ending inventory is zero. The regular-time output rate is 300 units per period.

Required
Prepare an aggregate production plan and determine its total cost.

Solution

PERIOD               1      2      3      4      5       6      TOTAL
Forecast             200    200    300    400    500     200    1,800
Regular output       300    300    300    300    300     300    1,800
Overtime output      -      -      -      -      -       -      -
Subcontracting       -      -      -      -      -       -      -
Output - Forecast    100    100    0      (100)  (200)   100    0
Beginning inventory  0      100    200    200    100     0
Ending inventory     100    200    200    100    (100)   0
Average inventory    50     150    200    150    50      0      600
Backorders           0      0      0      0      100     0      100
Regular cost         $600   $600   $600   $600   $600    $600   $3,600
Overtime cost        -      -      -      -      -       -      -
Subcontracting cost  -      -      -      -      -       -      -
Hiring/layoff cost   -      -      -      -      -       -      -
Inventory cost       $50    $150   $200   $150   $50     $0     $600
Backorder cost       -      -      -      -      $500    -      $500
Total                $650   $750   $800   $750   $1,150  $600   $4,700

Note:
Ending inventory = Beginning inventory + Output - Forecast.
If Output - Forecast is negative, inventory decreases by that amount in that period, as in
periods 4 and 5.
If insufficient inventory exists, the backlog (backorder) equals the shortage, as in period 5.
Output is the sum of all output produced using regular time, overtime, and subcontracting.

Example 3: Using the same information as in Example 2, XYZ Company produces tractors for
customers in the world market, with no beginning inventory in the first period and a planned
ending inventory of zero. Now, however, the planner wants to develop an alternative plan. One
person is about to retire from the company; rather than replace that person, the company would
like to stay with a smaller workforce and use overtime to make up for the lost output.
Regular-time output is therefore reduced by 20 units per period. The maximum overtime output
per period is 40 units.

Required:
A. Develop the aggregate production plan and determine its total cost.
B. Compare Plan 2 [the APP in Example 2] and Plan 3 [the APP in Example 3] in terms of total cost.

Solution:
A. Develop and prepare the aggregate plan and its cost:
In this case, regular output is reduced to 280 units [300 units - 20 units] because one worker is
retiring. Based on this, the aggregate production plan is summarized below.

Period               1      2      3      4      5       6      Totals
Forecast             200    200    300    400    500     200    1,800
Regular output       280    280    280    280    280     280    1,680
Overtime output      -      -      40     40     40      -      120
Subcontracting       -      -      -      -      -       -      -
Output - Forecast    80     80     20     (80)   (180)   80     0
Beginning inventory  0      80     160    180    100     0
Ending inventory     80     160    180    100    (80)    0
Average inventory    40     120    170    140    50      0      520
Backorders           0      0      0      0      80      0      80
Regular cost         $560   $560   $560   $560   $560    $560   $3,360
Overtime cost        $0     $0     $120   $120   $120    $0     $360
Subcontracting cost  -      -      -      -      -       -      -
Hiring/layoff cost   -      -      -      -      -       -      -
Inventory cost       $40    $120   $170   $140   $50     $0     $520
Backorder cost       $0     $0     $0     $0     $400    $0     $400
Total cost of APP    $600   $680   $850   $820   $1,130  $560   $4,640

Note
In periods 1, 2 and 6 there is no need for overtime, because regular output exceeds the
forecasted demand. In periods 3, 4 and 5, overtime is used because regular output alone
cannot cover the forecasted demand.

B. Comparison between Plan 2 and Plan 3

Plan 3's total cost ($4,640) is lower than Plan 2's. Although Plan 3 incurs an overtime cost of
$360, its regular-time, inventory, and backorder costs are all lower than Plan 2's, so Plan 3 is
the less costly plan.
Example 4: Again using the same information as in Example 3, the aggregate planner now
wants to subcontract up to a maximum of 120 units per period instead of using overtime.
Prepare an aggregate production plan and calculate its total cost.

Solution:
Prepare an aggregate production plan and calculate cost of the plan.

Period               1      2      3      4      5       6      Totals
Forecast             200    200    300    400    500     200    1,800
Regular output       280    280    280    280    280     280    1,680
Overtime output      -      -      -      -      -       -      -
Subcontracting       -      -      -      -      120     -      120
Output - Forecast    80     80     (20)   (120)  (100)   80     0
Beginning inventory  0      80     160    140    20      0
Ending inventory     80     160    140    20     (80)    0
Average inventory    40     120    150    80     10      0      400
Backorders           -      -      -      -      80      -      80
Regular cost         $560   $560   $560   $560   $560    $560   $3,360
Overtime cost        -      -      -      -      -       -      -
Subcontracting cost  -      -      -      -      $720    -      $720
Inventory cost       $40    $120   $150   $80    $10     $0     $400
Hiring/layoff cost   -      -      -      -      -       -      -
Backorder cost       $0     $0     $0     $0     $400    $0     $400
Total cost of APP    $600   $680   $710   $640   $1,690  $560   $4,880

Note
Subcontracting (rather than overtime) is needed only in period 5: in periods 3 and 4 the
shortfall is covered by inventory carried over from earlier periods, while in period 5 even the
maximum subcontracted amount of 120 units leaves a backorder of 80 units.
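The period-by-period bookkeeping used in the examples above follows one mechanical recipe: ending inventory = beginning inventory + output - forecast, average inventory (with negative balances treated as zero) priced per period, and shortages priced as backorders. A minimal sketch of that recipe, with our own function name and argument layout:

```python
def plan_cost(forecast, regular, overtime, subcontract,
              reg_cost=2, ot_cost=3, sub_cost=6, inv_cost=1, bo_cost=5):
    """Total cost of an aggregate plan (Birr), using the examples' cost data."""
    inventory = 0
    total = 0
    for f, r, o, s in zip(forecast, regular, overtime, subcontract):
        output = r + o + s
        ending = inventory + output - f                    # may go negative
        avg_inv = (max(inventory, 0) + max(ending, 0)) / 2
        backorder = max(-ending, 0)                        # units short this period
        total += (r * reg_cost + o * ot_cost + s * sub_cost
                  + avg_inv * inv_cost + backorder * bo_cost)
        inventory = ending
    return total

forecast = [200, 200, 300, 400, 500, 200]
# Overtime plan of Example 3: 280 regular plus 40 overtime in periods 3-5
print(plan_cost(forecast, [280]*6, [0, 0, 40, 40, 40, 0], [0]*6))   # 4640.0
# Subcontracting plan above: 280 regular plus 120 subcontracted in period 5
print(plan_cost(forecast, [280]*6, [0]*6, [0, 0, 0, 0, 120, 0]))    # 4880.0
```

Hiring and layoff costs are omitted because each of these plans keeps the workforce constant over the horizon.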

Linear programming for aggregate planning

Linear programming models for production planning seek the optimal production plan for a
linear objective function and a set of linear constraints; that is, there can be no cross-products,
powers of decision variables, or other nonlinear terms in the problem formulation. Linear
programming models can be used to determine optimal inventory levels, backorders,
subcontracted quantities, production quantities, overtime production, hires, and layoffs. The
main drawbacks are that all relationships between variables must be linear and that the optimal
values of the decision variables may be fractional.
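As a sketch, the aggregate planning problem solved informally in the preceding examples can be written as a linear program. The symbols are ours: D_t is demand in period t; R_t, O_t, S_t are regular-time, overtime, and subcontracted output; I_t is ending inventory; B_t is the backorder quantity; the c's are the corresponding unit costs (inventory is priced here on ending inventory for simplicity, rather than the average-inventory convention of the examples).

```latex
\min \sum_{t=1}^{T} \bigl( c_R R_t + c_O O_t + c_S S_t + c_I I_t + c_B B_t \bigr)
\quad \text{subject to} \quad
\begin{aligned}
  I_t - B_t &= I_{t-1} - B_{t-1} + R_t + O_t + S_t - D_t, && t = 1,\dots,T,\\
  R_t \le R_t^{\max},\quad O_t &\le O_t^{\max},\quad S_t \le S_t^{\max}, && t = 1,\dots,T,\\
  R_t,\, O_t,\, S_t,\, I_t,\, B_t &\ge 0, && t = 1,\dots,T.
\end{aligned}
```

With the cost data of Example 2 (c_R = 2, c_O = 3, c_S = 6, c_I = 1, c_B = 5) and I_0 = B_0 = I_T = 0, any LP solver returns the least-cost mixed strategy directly.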

Transportation method of aggregate planning

The transportation method can be used to solve production planning problems, assuming that a
demand forecast is available for each period, along with a workforce-level plan for regular time.
Capacity limits on overtime and subcontracted production are also needed for each period. The
other assumption is that all costs are linearly related to the amount of goods produced; that is, a
change in the amount of goods produced creates a proportionate change in costs. With these
assumptions, the transportation method yields the optimal mixed-strategy production plan for
the planning horizon.

Activity 4.
Suppose you are requested to prepare APP for a given manufacturing firm in Ethiopia, what
information do you collect as a requirement for your assignment? Explain.
4.1.2 Master Production Schedule (MPS)
Master scheduling follows aggregate planning. It expresses the overall plans in terms of specific
end items or models that can be assigned priorities. It is useful for planning material and
capacity requirements. The master production schedule (MPS) specifies which end items or
finished products a firm is to produce, how many are needed, and when they are needed. Recall
that the aggregate operations plan is a similar schedule for product lines or families, given by
months or quarters of a year. The master production schedule works within the constraints of the
aggregate production plan but produces a more specific schedule by individual products. The
time frame is more specific, too. An MPS is usually expressed in days or weeks and may extend
over several months to cover the complete manufacture of the items contained in the MPS. The
total length of time required to manufacture a product is called its cumulative lead time.

Time interval used in master scheduling depends upon the type, volume, and component lead
times of the products being produced. Normally weekly time intervals are used. The time horizon
covered by the master schedule also depends upon product characteristics and lead times. Some
master schedules cover a period as short as few weeks and for some products it is more than a
year.

The master production schedule drives the MRP process. The schedule of finished products
provided by the master schedule is needed before the MRP system can do its job of generating
production schedules for component items. Table … shows a sample master production schedule
consisting of four end items produced by a manufacturer of specialty writing accessories.

Functions of MPS
Master Production Schedule (MPS) gives formal details of the production plan and converts this
plan into specific material and capacity requirements. The requirements with respect to labour,
material and equipment are then assessed.

The main functions of MPS are:


1. To translate aggregate plans into specific end items: Aggregate plan determines level of
operations that tentatively balances the market demands with the material, labour and
equipment capabilities of the company. A master schedule translates this plan into specific
number of end items to be produced in specific time period.
2. Evaluate alternative schedules: Master schedule is prepared by trial and error. Many
computer simulation models are available to evaluate the alternate schedules.
3. Generate material requirement: It forms the basic input for material requirement planning
(MRP).
4. Generate capacity requirements: Capacity requirements are directly derived from MPS.
Master scheduling is thus a prerequisite for capacity planning.
5. Facilitate information processing: by controlling the load on the plant, the master schedule
determines when deliveries should be made. It coordinates with other management
information systems, such as marketing, finance and personnel.
6. Effective utilization of capacity: By specifying end item requirements schedule establishes
the load and utilization requirements for machines and equipment.

This process of identifying the amount of each specific end item to be produced in each time
period is called disaggregation. The MPS is changed and updated more frequently than the
production plan. It is a detailed plan and does not extend as far into the horizon as the production
plan. It is often viewed as a contract between production and the rest of the firm. In the time
horizon in which the MPS is frozen, production will produce everything that is stated on the
schedule and the rest of the firm will sell and/or accept what is produced. Firms set up policies
about when and the extent to which an MPS can be revised.

Table 4.1 Master Production Schedule (MPS)

                       Period
MPS Item       1      2      3      4      5
Pencil case    125    125    125    125    125
Clipboard      85     95     120    100    100
Lapboard       75     120    47     20     17
Lap desk       0      60     0      60     0

4.1.3. Materials Requirements Planning (MRP)


MRP was developed and refined by Joseph Orlicky at IBM and by Oliver Wight, a consultant, in
the 1960s and 1970s. It replaced re-order point systems by deriving dependent demand for parts
and raw materials from production schedules and determining order points based on delivery
lead times and production needs. Material requirements planning (MRP) came about with the
recognition that, in high-volume manufacturing environments, the assumptions that underpinned
materials management in the craft era did not apply in the mass production era. The inventory
control systems of the earlier era treated demand as if it were independent, that is, generated
externally, directly by the customer; as a consequence of the aggregation of such demand over
time, it is generally smoother.
Materials requirements planning (MRP) is an approach to calculating how many parts or
materials of particular types are required and at what times they are required. Material requirements
planning (MRP) is a set of techniques that uses bill of material data, inventory data, and the
master production schedule to calculate requirements for materials. It makes recommendations to
release replenishment orders for material. Further, because it is time-phased, it makes
recommendations to reschedule open orders when due dates and need dates are not in phase.
Time-phased MRP begins with the items listed on the MPS and determines (1) the quantity of all
components and materials required to fabricate those items and (2) the date that the components
and material are required. Time-phased MRP is accomplished by exploding the bill of material,
adjusting for inventory quantities on hand or on order, and offsetting the net requirements by the
appropriate lead times.
The first inputs to materials requirements planning are customer orders and forecast demand.
MRP performs its calculations based on the combination of these two parts of future demand. All
other requirements are derived from, and dependent on, this demand information.

When demand is dependent, the amount of material needed can be calculated and not forecasted.
The end items produced by the factory (i.e., those scheduled in the master production schedule)
are the only independent-demand items in the factory. The components needed to produce those
end items are dependent on the demand for the end items. It is the task of materials requirements
planning to calculate how many of each component are needed and when they are needed.
There is some confusion about Materials Requirements Planning (MRP), because much of the
software that has been developed to do materials requirements planning uses the same name.
But, the task of materials requirements planning must be done regardless of whether there is
software to do it. Whether it is done by hand or using computer software, all MRP systems use
the same logic to calculate when items needed to make another item must arrive at the factory or
warehouse to be available. The materials requirements planning stage considers both purchased
materials and manufactured components.

Objectives of MRP System


In any manufacturing operation, the questions of what materials and components are needed, in
what quantities, and when-and the answer to these questions are vital. An MRP system is
designed to provide just these answers. MRP provides the following:
• Inventory reduction: MRP determines how many components are needed, and when, in
order to meet the master schedule.
• Reduction in production and delivery lead time: MRP identifies material and component
quantities, timings, availabilities, and the procurement and production actions required
to meet delivery deadlines. By coordinating inventory, procurement, and production
decisions, MRP helps avoid delays in production. It prioritizes production activities by
putting due dates on customer job orders.
• Realistic commitments: Realistic delivery promises can enhance customer satisfaction.
• Increased efficiency: MRP provides close coordination among various work centers as
products progress through them.

The MRP System

Customer orders + Forecast demand
            |
            v
  Master Production Schedule
            |
Bill of Materials --> Materials Requirements Planning <-- Inventory records
            |
            v
Work Orders, Purchase Orders, Material Plans

Figure 4.3 Materials requirements planning (MRP) schematic

MRP Inputs
The inputs to the MRP system are the master production schedule (MPS), the bill of materials
(BOM), and an inventory records file, which has both the quantity of inventory in stock and the
lead times of each item.

The master production schedule (MPS) forms the main input to materials requirements planning
and (as discussed previously) contains a statement of the volume and timing of the end-products
to be made. It drives all the production and supply activities that eventually will come together to
form the end-products. It is the basis for the planning and utilization of labour and equipment,
and it determines the provisioning of materials and cash. The MPS should include all sources of
demand, such as spare parts, internal production promises, etc. It is generally recommended that
the planning horizon for the MPS be at least as large as the average lead time of the end items
produced by the facility. This allows the MPS to be stable, which then means that the
requirements computed in the MRP have some validity.

MRP requires that each end item have a BOM, which clearly identifies the components needed
to make the end item. The BOM can be represented in two different ways. One method is to
show it as a hierarchical table. The other is to show it as a product structure tree as in Figure 4.4.
Both of these represent the same BOM. When each item is needed can best be described in the
form of a product structure diagram, shown in Figure 4.4 for end product "A". An assembled
item is sometimes referred to as a parent, and a component as a child. The number in
parentheses beside each item is the quantity of that component needed to make one parent.
Thus, to make each unit of product "A", two B's, one C, one D, one E, and three F's are needed.
Materials D and E appear at the same level of the product structure because they are to be
assembled together.

A
├── B(2)
└── C(1)
    ├── D(1)
    └── E(1)
        └── F(3)

Figure 4.4 Bill of Materials for End Item A
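The "explosion" of such a bill of materials can be sketched in a few lines. The parent-child links below are an assumption consistent with the quantities quoted above (D and E as children of C, F as a child of E); the traversal logic is the point:

```python
# Assumed product structure for end item A (hypothetical link placement):
# parent -> list of (child, quantity per one parent)
bom = {
    "A": [("B", 2), ("C", 1)],
    "C": [("D", 1), ("E", 1)],
    "E": [("F", 3)],
}

def explode(item, qty, needs=None):
    """Accumulate total component quantities needed for `qty` units of `item`."""
    if needs is None:
        needs = {}
    for child, per_parent in bom.get(item, []):
        needs[child] = needs.get(child, 0) + qty * per_parent
        explode(child, qty * per_parent, needs)   # recurse down the tree
    return needs

print(explode("A", 1))   # {'B': 2, 'C': 1, 'D': 1, 'E': 1, 'F': 3}
```

Multiplying quantities down each branch is exactly what the explosion step of MRP does; for 10 units of A the same call returns 30 F's, 20 B's, and so on.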

The inventory records for all the end items and for the components of these end items need to be
available. The item master file, or inventory file, contains an extensive amount of information
on every item that is produced, ordered, or inventoried in the system. It includes such data as on-
hand quantities, on-order quantities, lot sizes, safety stock, lead time, and past usage figures. It
provides a detailed description of the item, specifies the inventory policy, updates the physical
inventory count, summarizes the item’s year-to-date or month-to-date usage, and provides
internal codes to link this file with other related information in the MRP database. Needless to
say, they need to be accurate. Otherwise, creating an MRP will be an exercise in futility. Garbage
in gives garbage out. When computerized MRP failures are discussed, the lack of accuracy of the
inventory records is usually one of the major reasons a system failed.

The MRP Process


The MRP system is responsible for scheduling the production of all items beneath the end item
level. It recommends the release of work orders and purchase orders, and issues rescheduling
notices when necessary.

The MRP process consists of four basic steps: (1) exploding the bill of material, (2) netting out
inventory, (3) lot sizing, and (4) time-phasing requirements. The process is performed again and
again, moving down the product structure until all items have been scheduled. An MRP matrix,
as shown in Table 4.2, is completed for each item starting with level zero items. Identifying
information at the top of the matrix includes the item name or number, the lowest level at which
the item appears in the product structure (called low level code or LLC), the time required to
make or purchase an item (called lead time or LT), and the quantities in which an item is usually
made or purchased (called lot size).

Table 4.2 The MRP Matrix

Item: _______   LLC: ___   Lot size: _______   LT: ___
                                               Period: 1  2  3  4  5
Gross Requirements       Derived from the MPS or from planned order releases of the parent
Scheduled Receipts       On order and scheduled to be received
Projected on Hand        (Beginning inventory) Anticipated quantity on hand at the end of each period
Net Requirements         Gross requirements net of inventory and scheduled receipts
Planned Order Receipts   When orders need to be received
Planned Order Releases   When orders need to be placed in order to be received on time

Entries in the matrix include gross requirements, scheduled receipts, projected on hand, net
requirements, planned order receipts, and planned order releases. Gross requirements begin the
MRP process. They are given in the master production schedule (MPS) for end items and derived
from the parent for component items. Scheduled receipts are items on order that are scheduled to
arrive in future time periods. Projected on hand is inventory currently on hand or projected to be
on hand at the end of each time period as a result of the MRP schedule. Net requirements are
what actually need to be produced after on-hand and on-order quantities have been taken into
account. Planned order receipts represent the quantities that will be ordered and when they must
be received. These quantities differ from net requirements by lot sizing rules when production or
purchasing is made in predetermined batches or lots. Common lot sizing rules include ordering
in minimum or multiple quantities, using an EOQ or periodic order quantity, or ordering the
exact quantities needed (called lot-for-lot or L4L). The last row of the matrix, planned order
releases, determines when orders should be placed (i.e., released) so that they are received when
needed. This involves offsetting or time phasing the planned order receipts by the item’s lead
time. Planned order releases at one level of a product structure generate gross requirements at the
next lower level. When the MRP process is complete, the planned order releases are compiled in
a planned order report.

More specifically, MRP can be computed as follows:


Gross Requirement: The projected need for raw materials, components, subassemblies, or
finished goods by the end of the period shown. The gross requirement comes from the master
schedule (for end items) or from the combined needs of other items. In MRP it is the quantity
of the item that will have to be disbursed, i.e. issued to support a parent order (or orders),
rather than the total quantity of the end product.

Scheduled Receipts: Materials already on order from a vendor or an in-house shop, due to
be received at the beginning of the period. Put differently, they are open orders scheduled to
arrive from vendors or elsewhere in the pipeline.

On Hand or Available: The expected amount of inventory that will be on hand at the beginning
of each time period. This equals the amount available from the previous period, plus planned
order receipts and scheduled receipts, less gross requirements.

Net requirement: The actual amount needed in each time period.

Planned order receipt: The quantity expected to be received by the beginning of the period in
which it is shown. Under lot-for-lot ordering, this quantity will equal the net requirement. Any
excess is added to available inventory in the next time period.

Planned order release: It indicates the planned amount to order in each time period; it equals
the planned order receipts offset by the lead time. This amount generates gross requirements at
the next level in the assembly or production chain. When an order is executed, it is removed
from the “planned order receipt” and “planned order release” rows and entered in the
“scheduled receipt” row.
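The row-by-row logic described above can be sketched in Python. This is an illustrative sketch only, not vendor MRP code; the item data in the usage lines (demand stream, scheduled receipt, lead time and lot size) are invented for demonstration.

```python
def mrp_record(gross, scheduled_receipts, on_hand, lead_time, lot_size=1):
    """Fill in one MRP matrix (one item), period by period.

    gross, scheduled_receipts -- lists indexed by period (0-based here).
    lot_size -- orders are rounded up to a multiple of this quantity;
                lot_size=1 gives lot-for-lot (L4L) ordering.
    """
    n = len(gross)
    net = [0] * n                # net requirements
    receipts = [0] * n           # planned order receipts
    releases = [0] * n           # planned order releases
    projected = [0] * n          # projected on hand at end of each period
    available = on_hand
    for t in range(n):
        available += scheduled_receipts[t]
        if gross[t] > available:
            net[t] = gross[t] - available
            # round the net requirement up to a multiple of the lot size
            receipts[t] = -(-net[t] // lot_size) * lot_size
            available += receipts[t]
            # time-phase: release the order lead_time periods earlier
            # (needs falling inside the lead time would require expediting)
            if t - lead_time >= 0:
                releases[t - lead_time] = receipts[t]
        available -= gross[t]
        projected[t] = available
    return net, receipts, releases, projected

# A hypothetical item: LT = 2 periods, ordered in multiples of 50
net, rec, rel, proj = mrp_record(
    gross=[0, 0, 120, 0, 60], scheduled_receipts=[0, 40, 0, 0, 0],
    on_hand=30, lead_time=2, lot_size=50)
print(rel)    # [50, 0, 100, 0, 0] -- releases in periods 1 and 3 (1-based)
```

Note how the planned order release row is simply the planned order receipt row shifted back by the lead time, exactly as in the definitions above.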

Example: Company X specializes in producing product A. The product structure is as follows:

    A (LT = 4)
    ├── B (1), LT = 3
    └── C (2), LT = 2
        ├── D (1), LT = 1
        └── E (2), LT = 1

Figure 4.5. Bill of Materials for End Item A
(The number in parentheses is the quantity of each component required per unit of its parent.)

Suppose 100 units of product A must be available in period 8. Assuming there is no stock on
hand and no orders outstanding, 1) determine the order size for each item, and 2) determine
when to release orders for each component shown in the product structure record.

Solution:
1) Quantity determination
A is made from components B and C; C is made from components D and E. By simple
computation, the quantity requirements of each component are:
Component B = (1)(number of A’s) = 1(100) = 100
Component C = (2)(number of A’s) = 2(100) = 200
Component D = (1)(number of C’s) = 1(200) = 200
Component E = (2)(number of C’s) = 2(200) = 400

2) Time determination/when to release orders for each item/


Now the time element for all the items must be considered. The following tables create a
material requirements plan based on the demand for A, the knowledge of how A is made,
and the time needed to obtain each component. They show which items are needed, how
many are needed, and when they are needed to complete 100 units of A in period 8.

MRP for 100 units of Product A in period 8:

Item A (LT = 4)
Period                   1    2    3    4    5    6    7    8
Gross requirements                                        100
Planned order receipts                                    100
Planned order releases                100

Item B (LT = 3)
Period                   1    2    3    4    5    6    7    8
Gross requirements                    100
Planned order receipts                100
Planned order releases 100

Item C (LT = 2)
Period                   1    2    3    4    5    6    7    8
Gross requirements                    200
Planned order receipts                200
Planned order releases      200

Item D (LT = 1)
Period                   1    2    3    4    5    6    7    8
Gross requirements          200
Planned order receipts      200
Planned order releases 200

Item E (LT = 1)
Period                   1    2    3    4    5    6    7    8
Gross requirements          400
Planned order receipts      400
Planned order releases 400

The MRP schedule developed for product A is based on the product structure of A and the lead
time needed to obtain each component. Planned order releases of the parent item are used to
determine gross requirements for its component items: a planned order release generates gross
requirements in the same time period for the item’s lower-level components. In order to have
100 units of product A available in period 8, it is necessary to release orders for 100 units of B
in period 1, 200 units of C in period 2, 200 units of D in period 1, and 400 units of E in period 1.
Planned order release dates are obtained simply by offsetting the lead times (moving the
requirement earlier in the schedule by the lead time). A component’s gross requirements fall in
the planned order release period of its parent. Planned order releases indicated for the first
period are in the “action bucket,” where immediate action is mandatory. Planned order releases
for two or more periods into the future do not require immediate action.
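The level-by-level logic of this example can be reproduced in a few lines of Python. This is a sketch of the lot-for-lot, zero-inventory case of the example above; the `bom` and `lead_time` structures simply encode Figure 4.5.

```python
# Product structure: child -> (parent, quantity per unit of parent)
bom = {"B": ("A", 1), "C": ("A", 2), "D": ("C", 1), "E": ("C", 2)}
lead_time = {"A": 4, "B": 3, "C": 2, "D": 1, "E": 1}

# Start from the MPS: 100 units of A due in period 8
# (lot-for-lot ordering, no stock on hand), so A is released in period 8 - 4
release = {"A": (8 - lead_time["A"], 100)}     # item -> (release period, qty)

# Explode top-down: a child's gross requirement falls in the period in which
# its parent's order is released, and is offset by the child's own lead time
for item, (parent, qty_per) in bom.items():
    parent_period, parent_qty = release[parent]
    need = parent_qty * qty_per                # gross requirement of the child
    release[item] = (parent_period - lead_time[item], need)

for item in "ABCDE":
    period, qty = release[item]
    print(f"Release {qty} units of {item} in period {period}")
```

Running the script prints the same release schedule as the tables: 100 units of A in period 4, 100 of B, 200 of D and 400 of E in period 1, and 200 of C in period 2.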

Other Issues in MRP


Management of dependent demand inventory is an important concern for manufacturing operations. The
following are a few additional issues that must at least be mentioned:
• Safety stock
• Lot sizing.

Let’s briefly examine each of these issues.


Safety Stock

Under independent demand, safety stocks are held to offset variability in demand and/or lead time.
Under dependent demand, demand quantities tend to be less variable, although lead times may vary.
Consequently, under those circumstances it may make more sense to include some safety time in
ordering; that is, when lead times tend to vary, order a bit earlier to allow for this rather than hold
additional stock. In some cases safety stock may be held; this usually occurs at the lowest levels of the
product structure (e.g. raw materials) or for purchased parts that are used in several products (e.g. engines
for several different lawnmower models), where safety stock may represent a reasonable response to
variation in lead time. In practical terms, however, safety stocks should be viewed as exceptions rather
than the general rule in dependent demand situations.

Lot Sizing
Selecting order (or production) quantities is referred to as lot sizing. In general, the primary
goal in choosing a lot size is to minimize the sum of ordering (or setup) costs and inventory
carrying costs. When demand tends to be lumpy, as it often does with dependent-demand items,
the task of identifying an appropriate lot size is more difficult than when demand tends to be
uniformly distributed over time. A variety of different approaches are used in practice, none of
which is necessarily better than the others. Judgment and experience often are factors in
choosing a particular approach.
Economic Order Quantity (EOQ) models sometimes are employed. The closer demand is to
being uniformly distributed, the better such models work. EOQ models seem to work best either
for components at the lower levels of the product tree or for purchased parts that are used in
multiple products, for which demand tends to be the most uniform. For other components and
for end items, demand tends to be too lumpy to use an EOQ approach to ordering. When demand
is lumpy and/or uneven, the average inventory is less likely to equal one-half the order quantity,
and the EOQ logic breaks down.

Lot-for-lot ordering often is used. It involves using an order size that is equal to demand. This
method was illustrated in the preceding example. It is by far the simplest approach; no
computations are needed. It has the advantage of holding inventory only for short periods, unlike
the EOQ approach, which would result in carrying inventories on a continual basis. One
disadvantage is the inability to take advantage of economies related to a fixed order size, such as
standard container sizes.
Fixed-period ordering involves ordering enough inventory for a set number of periods, e.g. a
one-month supply. The number of periods may be arbitrary, or it may reflect an effort to
incorporate knowledge of demand patterns, such as cycles of demand.
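The three lot-sizing rules just described can be compared on a small, lumpy net-requirements stream. The demand figures and the EOQ value below are invented for illustration; the functions are sketches of the rules, not a complete lot-sizing module.

```python
import math

def lot_for_lot(net):
    """Order exactly what is needed in each period."""
    return list(net)

def fixed_period(net, periods=2):
    """Combine each group of `periods` requirements into one order.
    Groups are counted from the first period, for simplicity."""
    orders = [0] * len(net)
    for start in range(0, len(net), periods):
        orders[start] = sum(net[start:start + periods])
    return orders

def eoq_multiples(net, eoq):
    """Order in whole multiples of the EOQ; carry any excess forward."""
    orders, excess = [0] * len(net), 0
    for t, need in enumerate(net):
        short = need - excess
        if short > 0:
            orders[t] = math.ceil(short / eoq) * eoq
            excess += orders[t]
        excess -= need
    return orders

net = [35, 0, 80, 10, 0, 60]      # a lumpy net-requirements stream
print(lot_for_lot(net))            # [35, 0, 80, 10, 0, 60]
print(fixed_period(net, 2))        # [35, 0, 90, 0, 60, 0]
print(eoq_multiples(net, 50))      # [50, 0, 100, 0, 0, 50]
```

Lot-for-lot places the most orders but carries the least inventory; the other two rules trade extra carrying cost for fewer orders, which is exactly the balance lot sizing tries to strike.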

MRP Implementation
Although MRP is easy to understand conceptually, it can be used and interpreted in a variety of
ways. Generally, an MRP system may be interpreted in three different ways.
1. An Inventory Control System
An inventory control system releases manufacturing and purchase orders at the right time to
support the master schedule. This system launches orders to control work-in-process and raw
materials inventories through proper timing of order placement, but it does not include capacity
planning.
2. A Production and Inventory Control System
This is an information system used to plan and control inventories and capacities in
manufacturing companies. In this type of system the orders resulting from the parts explosion
are checked to see whether sufficient capacity is available. If there is not, either the capacity or
the master schedule is changed. The system has a feedback loop to the master schedule to adjust
for capacity availability; as a result, this type of MRP system is called a closed-loop MRP
system. It controls both inventories and capacity.
3. A Manufacturing Resource Planning System
This type of MRP system is used to plan and control all manufacturing resources: inventory,
capacity, cash, personnel, facilities and capital equipment. In this case the MRP parts-explosion
system also drives all other resource-planning subsystems in the company.
It takes a great deal of effort to make MRP successful. As a matter of fact, research indicates
that five elements are essential for successful implementation of MRP:

 Implementation planning
 Adequate computer support
 Accurate data
 Management support
 User knowledge

MRP operates best under the following conditions:


a. High-volume line processes.
b. Product structure is complex and there are many levels of bills of materials.
c. Production is carried out in relatively large batch sizes.
d. There is limited volatility. Bottlenecks, rush jobs, high scrap rates and unreliable
suppliers create volatile conditions unsuited for the MRP system.

MRP also requires high data integrity; that is, the accuracy of the data must be high and
consistent. Since inventory level data are traditionally poor, and quoted lead times from
suppliers often worse, the general failure of MRP should not surprise us. The precision of such
data was also inherently poor.

Relaxing MRP Assumptions


The MRP process makes certain assumptions about production resources and how they should be
allocated. Today’s planning technologies allow us to relax some of the more restrictive
assumptions of MRP. For example, we have learned the following:
• Material is not always the most constraining resource. The iterative procedure described in
the previous section for determining material availability first, then verifying capacity may not
be relevant to some industries. If there are particular processes that constrain the system or other
capacity constraints that are difficult to relax, then they should drive the schedule rather than the
availability of materials. Similarly, a bill of material may not be as important as a bill of labor, a
bill of resources, a bill of distribution, or a bill of information.
• Lead times can vary. Fixed lead times assume that either lot sizes will continue unchanged or
that they have no bearing on lead time. Under this assumption, the lead time necessary to process
an order would remain the same whether that order consists of 1 unit or 100 units, and whether
the shop is operating empty or at capacity. ERP processors today are able to handle variable lead
times, but users must determine how sensitive the system should be to parameters that change.
• Not every transaction needs to be recorded. MRP tries to keep track of the status of all jobs in
the system and reschedules jobs as problems occur. In a manufacturing environment of speed
and small lot sizes, this is cumbersome. It might take as long to record the processing of an item
at a workstation as it does to process the item. Managers must assess how much processing detail
is really needed in the common database and how much control is enough.
• The shop floor may require a more sophisticated scheduling system. Dynamic scheduling
environments require a level of sophistication not present in most MRP systems.
• Scheduling in advance may not be appropriate for on-demand production. Many companies
today produce products on-demand from customers. The just-in-time or lean production
environment, discussed in the next chapter, may produce better results under those
circumstances. Whereas the master scheduling and bill-of-material explosion aspects of MRP are
used in virtually all manufacturing environments, the MRP/CRP (Capacity Requirement
Planning) process is unnecessary in repetitive manufacturing driven by customer orders.

4.1.3. Manufacturing Resource Planning: A Transition from MRP to MRPII


MRP evolved into MRPII which, in essence, included MRP and added other management
ingredients such as tooling, routing procedures, capacity availability and labour-hours
requirement. MRP is therefore a subset of MRPII.
MRP-II has not replaced MRP, nor is it an improved version of MRP; rather, it represents an
effort to expand the scope of production resource planning, and to involve other functional areas
of the firm in the planning process. When the concepts of MRP are extended to the entire range
of manufacturing operations, and when corrective action is incorporated into the system, the
result is a ‘closed-loop MRP system’. In this, the various functions of ‘operations planning and
control (forecasting, operations planning, inventory management, MRP calculations, dispatching,
and progress control)’ have been integrated into one unified system. It also provides for feedback
from vendors (delayed order deliveries, etc) and from customers (changes in orders, models, etc).
Further, extension to the MRP concept can be made by linking the closed loop MRP system to
the financial function of the organization. This type of structure is called Manufacturing
Resource Planning or MRP-II.
MRP-II software package includes a simulation capability. This allows the manager to explore
various ‘what if ’ questions. For example, if a particular customer requests that its order for 500
units be delivered 3 weeks earlier than original contract date, the simulator allows the manager to
determine whether this request can be granted without an adverse effect on other customer
orders.
The major purpose of MRP-II is to integrate the primary functions (marketing and finance) as
well as other functions such as personnel, engineering, and purchasing in the planning process.
MRP is at the heart of the process. The process begins with an aggregation of demand from all
sources (e.g. Firm orders, forecasts, safety stock requirement) production, marketing, and finance
personnel work toward developing a master schedule. Although manufacturing people will have
a major input in determining that schedule and major responsibility for making it work,
marketing and finance also have an important input and responsibilities.
The rationale for having these functional areas work together is the increased likelihood of
developing a plan that will work, and one that everyone can live with. In order for the plan to
work, it must be determined that all necessary resources will be available as needed. After this,
material requirements planning comes into play, generating material and schedule
requirements. Then more detailed capacity requirements planning (CRP) has to be done to
determine whether these more specific capacity requirements can be met. Again, some
adjustment in the master schedule may be required.
In effect, this is a continuous process, with the master schedule updated and revised as necessary
to achieve corporate goals. MRPII systems have the capability to perform simulations. This
enables managers to answer a variety of “what if” questions, thereby gaining a better
appreciation of available options and their consequences.
When executed properly, MRPII can make a powerful contribution to materials planning and
capacity management.
However, both MRP and MRPII have been severely criticized. The real problem is that often
managers expect an instant solution to poor management of inventory. They suspect that
software alone, via MRP/MRPII, will solve these problems. The lack of strategic importance
given to materials management by senior managers becomes a key reason for failure. But when
there is a strategic and holistic approach to managing inventory, the ‘closed loop’ system
becomes a reality.
In addition, MRP should facilitate better relationships with suppliers because, in theory, all lead
times are known and therefore unreasonable delivery requirements are not made on suppliers.

Admittedly, shorter lead times are preferable, especially when MRP is used alongside JIT, but
that has more to do with an ongoing pursuit of improvement in delivery performance via
relationships with suppliers than as a reflection on MRPII itself.

Figure 4.6. MRP as a subset of MRP II. (In the figure, MRP sits inside the larger MRP II
system, which adds routing, tooling, capacity requirements and labour-hours requirements.)

Resolving problems of MRP

One resolution is for MRP still to be used as the planning system, with the tools and
techniques of JIT then used to actually ‘pull’ the materials only when needed. At any rate,
there must be some sort of master plan for a given time period in order for the firm to
know what is to be made at a particular time. MRP can therefore be used as an exhaustive
management tool whereby the numbers of products and, consequently, sub-components
can be determined and tracked throughout the process. MRP should not be used to ‘push’
components or materials onto a workstation before they are required.

MRP can provide a discipline so that key areas such as master production schedules, bill
of materials, lead times with suppliers and other data integrity are reliable, accurate,
relevant and known to all parties, which is essential to any well-run management
information system. MRP encourages a holistic approach within the firm itself. The
introduction of MRP needs considerable changes to an organization and these require
commitment from all areas. The MRP system can also serve to highlight business
performance problems with delivery speed and reliability. In this connection, it has been
suggested that not only can an MRP system detail what should be ordered and when, but
also it can indicate how and when late items will affect other aspects of production. It can
signal how tardiness will alter the existing production schedule. Since delivery speed and
reliability are crucial in many markets, it is clear that MRP can play an important role in
achieving these market requirements. MRP also becomes a powerful ally to just-in-time
management.

MRPII initiates production of various components, releases orders, and offsets inventory
reductions. MRPII breaks the final product down into its parts, orders their delivery to
operators, keeps track of inventory positions in all stages of production, and determines
what needs to be added to existing inventories.

MRP became an important step in the evolution toward the strategic management of
inventory (which has resulted in modern approaches such as JIT and lean production
systems), as shown in Figure 4.7.

Figure 4.7 traces this development along a timeline from the 1920s to the 1990s:

Economic Order Quantity (1920s): EOQ is essentially tactical, a quick-fix formula. It is
not strategic in scope.

MRP/MRP II (1950s–1970s): In order for these to be successful, an integrated approach
both within the company and with suppliers is vital.

Just-In-Time (1990s): JIT is fully dependent on an integrated and strategic approach
linking customer requirements with supplier capabilities and excellence in internal
operations.

Figure 4.7. The development of the strategic importance of inventory management.

These issues will be explored in the forthcoming chapters of the module. We wish you bon
appétit as you read these chapters!

4.1.4 Enterprise Resource Planning (ERP)


Enterprise resource planning (ERP) is software that organizes and manages a company’s
business processes by sharing information across functional areas. It transforms transactional
data like sales into useful information that supports business decisions in other parts of the
company, such as manufacturing, inventory, procurement, invoicing, distribution, and
accounting. In addition to managing all sorts of back-office functions, ERP connects with supply
chain and customer management applications, helping businesses share information both inside
and outside the company. Thus, ERP serves as the backbone for an organization’s information
needs, as well as its e-business initiatives.
Enterprise resource planning (ERP) systems go beyond MRP and MRPII to integrate internal and
external business processes. ERP has, in fact, evolved from manufacturing resource planning
versions. SAP AG, a German software company, created a generic ERP software package to
integrate all business processes together for use by any business in the world. SAP sells the most
popular ERP system, R/3. With essentially one product, SAP became the third largest software
company in the world.
ERP systems provide the information infrastructure for companies. They bring functions,
processes, and resources together to meet customer needs and provide value to shareholders.
Many corporations around the world have adopted enterprise resource planning (ERP) software.
ERP unites all of a company’s major business activities, from order processing to production,
within a single family of software modules. The system provides instant access to critical
information to everyone in the organization, from the CEO to the factory floor worker. Because
of the ability of ERP software to use a common information system throughout a company’s
many operations around the world, it is becoming the business information systems’ global
standard.
With ERP, companies could integrate their accounting, sales, distribution, manufacturing,
planning, purchasing, human resources, and other transactions into one application software.
This enabled transactions to be synchronized throughout the entire system. For example, a
customer order entered into an ERP system would ripple through the company, adjusting
inventory, parts supplies, accounting entries, production schedules, shipping schedules, and
balance sheets.

Figure 4.8. ERP systems link the supply chain. (ERP sits at the centre of the diagram,
connected to customers, retailers, purchasing, human resources, physical distribution and
the production process.)

ERP has gained in popularity over MRP to some extent, although MRP and MRPII
remain in use. The basic problem with MRP is that it can be a ‘push’ system of inventory
management. This means there is a danger of ordering materials and then ‘pushing’ them
through the system before an operator is ready. Consequently, it may force the company
to rely on larger economies of scale to compensate for the use of push-based selling.
Furthermore, the company may lose sight of real customer requirements because it sells
too many products from stock.

ERP Modules
ERP systems consist of a series of application modules that can be used alone or in concert. The
modules are fully integrated, use a common database, and support processes that extend across
functional areas. Transactions in one module are immediately available to all other modules at all
relevant sites: corporate headquarters, manufacturing plants, suppliers, sales offices, and
subsidiaries.
Although ERP modules differ by vendor, they are typically grouped into four main categories:
(1) finance and accounting, (2) sales and marketing, (3) production and materials management,
and (4) human resources.

Together, these modules form an integrated information technology strategy for effectively
managing the entire enterprise. ERP connects processes that belong together, giving every
employee fast, convenient access to the information required for their jobs. ERP creates a central
repository for the company’s data, which enables the company to perform various business
analyses. A company can quickly access data in real time related to forecasting and planning,
purchasing and materials management, product distribution, and accounting and financial
management, so that it can deploy its resources quickly and efficiently. It can help schedule its
production capacity to meet demand and reduce inventory levels. By consolidating information
from sales the company can better negotiate contracts and product prices, and determine their
effect on the company’s financial position. These types of decisions often require advanced
analytical capabilities collectively called business intelligence.

The Second Generation ERP


The second generation of ERP is Web-centric systems that enable consolidating data and
allowing dynamic access from various clients of an organization. Software vendors have
developed powerful new analytic tools and applications that capitalize on ERP’s infrastructure.
Examples of such software systems are customer relationship management, supply chain
management, and product lifecycle management. Customer relationship management (CRM) is
software that plans and executes business processes involving customer interaction, such as
sales, marketing, fulfillment, and customer service. Supply chain management (SCM) is
software that plans and executes business processes related to supply chains. Similarly, product
lifecycle management (PLM) is software that manages the product development process, product
lifecycles, and design collaboration with suppliers and customers. PLM manages product data
through the life of the product, coordinates product and processes redesign, and collaborates with
suppliers and customers in the design process.

ERP software can be quite expensive and time consuming to install, maintain and operate. The
decomposition of large software systems into services has prompted another revolution in the
way software is delivered to customers. Many companies prefer accessing business software,
such as ERP or CRM, from the vendor’s site or a third-party host site, instead of installing,
running and maintaining it in-house. This approach, known as software as a service (SaaS), is
gaining popularity in both large and small companies. In addition to providing the software on-
demand to its clients, vendors or third party providers also maintain and run the IT infrastructure,
including the networks, servers, operating systems, and storage necessary to run the software.
This broader view of on demand IT services, usually delivered by a provider over the Internet, is
known as cloud computing.

Benefits of ERP
ERP systems help companies manage their resources efficiently and, at the same time, better
serve their customers. ERP is generally seen as having the potential to very significantly improve
the performance of many companies in many different sectors. This is partly because of the very
much enhanced visibility that information integration gives, but it is also a function of the
discipline that ERP demands. Yet this discipline is itself a ‘double-edged’ sword. On one hand, it
‘sharpens up’ the management of every process within an organization, allowing best practice (or
at least common practice) to be implemented uniformly through the business. No longer will
individual idiosyncratic behaviour by one part of a company’s operations cause disruption to all
other processes. On the other hand, it is the rigidity of this discipline that is both difficult to
achieve and (arguably) inappropriate for all parts of the business.
Nevertheless, the generally accepted benefits of ERP are usually held to be the following.
 Because software communicates across all functions, there is absolute visibility of what
is happening in all parts of the business.
 The discipline of forcing business-process-based changes is an effective mechanism for
making all parts of the business more efficient.
 There is better ‘sense of control’ of operations that will form the basis for continuous
improvement (albeit within the confines of the common process structures).
 It enables far more sophisticated communication with customers, suppliers and other
business partners, often giving more accurate and timely information.
 It is capable of integrating whole supply chains including suppliers’ suppliers and
customers’ customers.
In addition to the integration of systems, ERP usually includes other features which make it a
powerful planning and control tool:
 It is based on a client–server architecture; that is, access to the information systems is
open to anyone whose computer is linked to central computers.
 It can include decision support facilities which enable operations decision makers to
include the latest company information.
 It is often linked to external extranet systems, such as the electronic data interchange
(EDI) systems, which are linked to the company’s supply chain partners.
 It can be interfaced with standard applications programs which are in common use by
most managers, such as spreadsheets etc.
 Often, ERP systems are able to operate on most common platforms such as Windows or
UNIX, or Linux.

4.2 Shop Floor Planning and Control


In many types of operations, processing is either completely or partly performed to specific
customer orders –make-to-order is a typical example. In such operations, a major task for
operations management is to determine when and in what order to perform customer jobs, which
falls into the area of operations scheduling and control. An operations scheduling and control
system generally involves the following activities:

1. Sequencing – deciding on the order in which jobs will be initiated and processed at each
stage
2. Loading – determining the amount of work to be assigned to each stage of the process,
whether to work centres or staff groups
3. Scheduling – allocating start and finishing times to each job.

4.2.1. Routing or Sequencing

Sequencing procedures seek to determine the best order for processing a set of jobs through a set
of facilities. Two types of problems can be identified. First, the static case, in which all jobs to be
processed are known and are available, and in which no additional jobs arrive in the queue
during the exercise. Second, the dynamic case, which allows for the continuous arrival of jobs in
the queue. Associated with these two cases are certain objectives. In the static case the problem
is merely to order a given queue of jobs through a given number of facilities, each job passing
through the facilities in the required order and spending the necessary amount of time at each.
The objective in such a case is usually to minimize the total time required to process all jobs: the
throughput time. In the dynamic case the objective might be to minimize facility idle time, to
minimize work in progress or to achieve the required completion or delivery dates for each job.
Sequencing procedures are relevant primarily for static cases.

Several simple techniques have been developed for solving simple sequencing problems, for
example the sequencing of jobs through two facilities, where each job must visit each facility in
the same order. Fairly complex mathematical procedures are available to deal with more realistic
problems, but in all cases either a static case is assumed or some other simplifying assumptions
are made.
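For the two-facility case mentioned above, the classic approach is Johnson's rule: jobs whose first-facility time is the smaller are scheduled as early as possible, and jobs whose second-facility time is the smaller are scheduled as late as possible. A minimal sketch in Python (the job names and processing times are hypothetical, for illustration only):

```python
def johnsons_rule(jobs):
    """Order jobs through two facilities (M1 then M2) so as to minimize
    total throughput time. `jobs` maps job name -> (time_on_M1, time_on_M2)."""
    front, back = [], []
    remaining = dict(jobs)
    while remaining:
        # Pick the job holding the smallest remaining processing time anywhere
        name = min(remaining, key=lambda j: min(remaining[j]))
        t1, t2 = remaining.pop(name)
        if t1 <= t2:
            front.append(name)      # small M1 time: schedule as early as possible
        else:
            back.insert(0, name)    # small M2 time: schedule as late as possible
    return front + back

# Hypothetical processing times (hours) on machines M1 and M2
jobs = {"A": (3, 6), "B": (5, 2), "C": (1, 2), "D": (7, 5), "E": (4, 4)}
print(johnsons_rule(jobs))  # ['C', 'A', 'E', 'D', 'B']
```

The resulting order is optimal for the two-machine flow-shop makespan, which is exactly the static throughput-time objective described above.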

Route or sequencing depends on the nature and type of industries as discussed below:

Continuous Industry: In this type of industry, once the route is decided in the beginning,
generally no further control over the route is needed. The raw material enters the plant, moves
through different processes automatically till it gets final shape, e.g. soft drink bottling plant,
brewery, food processing unit, etc.

Assembly Industry: Such industries need various components to be assembled at a particular
time, so it is essential that no component fails to reach the proper place, at the proper time, in
the required quantity; otherwise the production line will be held up, resulting in wasted time and
production delay (e.g. assembly of bikes, scooters, cars, radios, typewriters, watches). If all
batches visit the same sequence of workstations, the system is called a flow shop. In these
industries much attention is paid to routing. A work-flow sheet for every component is prepared,
giving full information about the processes, the machines and the sequence in which parts will
reach a particular place at a particular time. This type of routing needs good technical
knowledge, so the staff of the production control department must be qualified and experienced.

Job-shop Industry: This is also called sequencing and scheduling situation with many products.
The general job shop problem is to schedule production times for N jobs on M machines. At time
0, we have a set of N jobs. For each job we have knowledge of the sequence of machines
required by the job and the processing time on each of those machines. Due dates may also be
known. The objective may be to minimize the make span for completion of all jobs, minimizing
the number of tardy jobs or average tardiness, minimizing the average flow time, or achieving
some weighted combination of these criteria.

This problem is very complex and difficult to solve. On each of the M machines there are N!
possible job orderings, making a total of (N!)^M possible solutions. For just 10 jobs on 5 machines
there are over 6 × 10^32 choices. Optimization techniques such as dynamic programming and
branch and bound have been attempted for scheduling in random or job-shop environments.
Since such industries always handle different types of products, after receiving the
manufacturing orders the planning department has to prepare the detailed drawings and
planning each time.
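The combinatorial explosion quoted above is easy to verify directly:

```python
import math

# Total number of conceivable schedules for N jobs on M machines: (N!)^M
n_jobs, n_machines = 10, 5
n_schedules = math.factorial(n_jobs) ** n_machines
print(f"{n_schedules:.2e}")  # about 6.29e+32
```

This is why exact methods are limited to small instances and practical job shops rely on dispatching heuristics instead.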

This will indicate the proper sequence of routes for the job. In a job shop, each part type has its
own route. These individual routes may be carefully planned by an experienced process planner,
so in such industries the PPC department must be expert in extensive planning work. Because of
the variety of product requirements, job shops must be designed for high flexibility. Batches
must be free to move between any pair of workstations and normally may be processed in any
order. Individual workstations must be capable of performing a wide variety of tasks. Expertise
should be process related rather than product related. Job shops tend to be based on a process
layout.

It is observed that in a job shop, jobs spend 95 per cent of their time in non-productive activity,
much of it waiting in queues. The remaining 5 per cent is divided between lot setup and
processing. Many facilities do not produce high enough volumes of a particular part to justify a
product layout. Random batch arrival rates and processing times mean variability in resource
requirements through time.

The complexities of scheduling and engineering change orders in this environment aggravate the
problem. One solution could be to use the group technology (GT) approach to classify product
similarities that will allow medium volume/variety facilities to implement multiple product
layouts. However, this approach will not be of much use in case of highly customized job shop,
where each customer needs a specific feature in the product. Thus, even though the job shop
production system is less efficient, it has a place in the existing environment.

Sequencing and Dispatching Phase


Sequencing activities specify the order in which jobs are to be processed at the various work
centers. Dispatching is concerned with starting the processes: it gives the necessary authority to
start a particular piece of work, with the essential orders and instructions issued for starting it.
Therefore, dispatching is defined as the release of orders and instructions for starting production
of an item in accordance with the route sheet and schedule charts.

Dispatching functions include:


 Implementing the schedule in a manner that retains any order priorities assigned at the
planning phase.
 Moving the required materials from stores to the machines, and from operation to
operation.
 Authorizing people to take work in hand as per schedule
 Distributing machine loading and schedule charts, route sheet, and other instructions and
forms.
 Issuing inspection orders, stating the type of inspection at various stages.
 Ordering tool-section to issue tools, jigs and fixtures.

Priority Decision Rules


Operations generally may have many jobs waiting to be processed. The principal method of job
dispatching is by means of priority rules, which are simplified guidelines (heuristics) to
determine the sequence in which jobs will be processed. The use of priority rule dispatching is an
attempt to formalize the decisions of the experienced human dispatcher. Most of the simple
priority rules used in job assignment are: first come, first served (FCFS), earliest due date
(EDD), longest processing time (LPT), and preferred customer order (PCO). Along with others,
these commonly used rules are as described below.

Customer priority
Operations will sometimes use customer priority sequencing, which allows an important or
aggrieved customer, or item, to be ‘processed’ prior to others, irrespective of the order of arrival

of the customer or item. This approach is typically used by operations whose customer base is
skewed, containing a mass of small customers and a few large, very important customers.
Some banks, for example, give priority to important customers. Similarly, in hotels, complaining
customers will be treated as a priority because their complaint may have an adverse effect on the
perceptions of other customers. More seriously, the emergency services often have to use their
judgement in prioritizing the urgency of requests for service. For example, Figure 10.8 shows the
priority system used by a police force. Here the operators receiving emergency and other calls
are trained to grade the calls into one of five categories. The response by the police is then
organized to match the level of priority. The triage system in hospitals operates in a similar way
(see short case below). However, customer priority sequencing, although giving a high level of
service to some customers, may erode the service given to many others. This may lower the
overall performance of the operation if work flows are disrupted to accommodate important
customers.

Due date (DD)


Prioritizing by due date means that work is sequenced according to when it is ‘due’ for delivery,
irrespective of the size of each job or the importance of each customer. For example, a support
service in an office block, such as a reprographic unit, will often ask when photocopies are
required, and then sequence the work according to that due date. Due date sequencing usually
improves the delivery reliability of an operation and improves average delivery speed.
However, it may not provide optimal productivity, as a more efficient sequencing of work may
reduce total costs. However, it can be flexible when new, urgent work arrives at the work centre.

Last-in first-out (LIFO)


Last-in first-out (LIFO) is a method of sequencing usually selected for practical reasons.
For example, unloading an elevator is more convenient on a LIFO basis, as there is only one
entrance and exit. However, it is not an equitable approach. Patients at hospital clinics may be
infuriated if they see newly arrived patients examined first. This sequencing rule is not
determined for reasons of quality, flexibility or cost, and none of these performance objectives is
well served by this method.

First-in first-out (FIFO)


Some operations serve customers in exactly the sequence they arrive in. This is called first-in
first-out sequencing (FIFO), or sometimes ‘first come, first served’ (FCFS). For example,
UK passport offices receive mail, and sort it according to the day when it arrived. They work
through the mail, opening it in sequence, and process the passport applications in order of arrival.
Queues in theme parks may be designed so that one long queue snakes around the lobby area
until the row of counters is reached. When customers reach the front of the queue, they are
served at the next free counter.

Longest operation time (LOT)


Operations may feel obliged to sequence their longest jobs first in the system called longest
operation time sequencing. This has the advantage of occupying work centres for long periods.
By contrast, relatively small jobs progressing through an operation will take up time at each
work centre because of the need to change over from one job to the next. However, although
longest operation time sequencing keeps utilization high, this rule does not take into account

delivery speed, reliability or flexibility. Indeed, it may work directly against these performance
objectives.

Shortest operation time first (SOT)


Most operations at some stage become cash-constrained. In these situations, the sequencing rules
may be adjusted to tackle short jobs first in the system, called shortest operation time
sequencing. These jobs can then be invoiced and payment received to ease cash-flow problems.
Larger jobs that take more time will not enable the business to invoice as quickly.
This has the effect of improving delivery performance, if the unit of measurement of delivery is
jobs. However, it may adversely affect total productivity and can damage service to larger
customers.

Physical constraints
The physical nature of the materials being processed may determine the priority of work. For
example, in an operation using paints or dyes, lighter shades will be sequenced before darker
shades. On completion of each batch, the colour is slightly darkened for the next batch. This is
because darkness of colour can only be added to and not removed from the colour mix.
Similarly, the physical nature of the equipment used may determine sequence. For example, in
the paper industry, the cutting equipment is set to the width of paper required. It is easier and
faster to move the cutting equipment to an adjacent size (up or down) than it is to reset the
machine to a very different size. Sometimes the mix of work arriving at a part of an operation
may determine the priority given to jobs. For example, when fabric is cut to a required size and
shape in garment manufacture, the surplus fabric would be wasted if it is not used for another
product. Therefore, jobs that physically fit together may be scheduled together to reduce waste.

Table 4.3. Standard Dispatching Rules

Rule     Full form                       Description of the rule
SPT      Shortest Processing Time        Select the job with the minimum processing time.
EDD      Earliest Due Date               Select the job which is due first.
FCFS     First Come, First Served        Select the job that has been in the workstation's
                                         queue the longest.
FISFS    First in System, First Served   Select the job that has been on the shop floor the
                                         longest.
S/RO     Slack per Remaining Operation   Select the job with the smallest ratio of slack to
                                         operations remaining to be performed.
COVERT                                   Order jobs based on a ratio-based priority to
                                         processing time.
LTWK     Least Total Work                Select the job with the smallest total processing
                                         time (SPT).
LWKR     Least Work Remaining            Select the job with the smallest total processing
                                         time for unfinished operations.
MOPNR    Most Operations Remaining       Select the job with the most operations remaining
                                         in its processing sequence.
MWKR     Most Work Remaining             Select the job with the most total processing time
                                         remaining.
RANDOM   Random                          Select a job at random.
WINQ     Work in Next Queue              Select the job whose subsequent machine currently
                                         has the shortest queue.

It is customary to classify these rules as static or dynamic. Static rules do not incorporate an
updating feature. They have priority indices that stay constant as jobs travel through the plant,
whereas dynamic rules change with time and queue characteristics. LTWK and EDD (assuming
due dates are fixed) are static rules. LWKR is dynamic, since the remaining processing time
decreases as the job progresses through the shop, i.e. through time. Slack-based rules are also
dynamic.
Slack = due date – current time – remaining work

1. Job slack (S): This is the amount of contingency or free time, over and above the
expected processing time, available before the job must be completed at a predetermined
date (t0), i.e. S = (t0 − ti) − Σp, where ti = present date (e.g. day or week number, where
ti < t0) and Σp = sum of remaining processing times. Where delays are associated with each
operation, e.g. delays caused by inter-facility transport, this rule is less suitable, hence the
following rule may be used.
2. Job slack per operation, i.e. S/N, where N = number of remaining operations. Therefore,
where S is the same for two or more jobs, the job having the most remaining operations is
processed first.
3. Job slack ratio, or the ratio of the remaining slack time to the total remaining time, i.e.
S/(t0 − ti). In all the above cases, where the priority index is negative the job cannot be
completed by the requisite date. The rule will therefore be to process first those jobs
having negative indices.
4. Shortest imminent operation (SIO), i.e. process first the job with the shortest processing
time.
5. Longest imminent operation (LIO). This is the converse of SIO.
6. Scheduled start date. This is perhaps the most frequently used rule. The date at which
operations must be started in order that a job will meet a required completion date is
calculated, usually by employing reverse scheduling from the completion date, e.g. Xi =
t0 − Σp or, Xi = t0 − Σ(p + f), where Xi = scheduled start date for an operation and fi =
delay or contingency allowance. Usually some other rule is also used, e.g. first come, first
served, to decide priorities between jobs having equal Xi values.
7. Earliest due date, i.e. process first the job that is required first or is the most urgent.
8. Subsequent processing times. Process first the job that has the longest remaining process
time, i.e. the largest Σp or, in modified form, the largest Σ(p + f).
9. Value. To reduce work in progress inventory cost, process first the job which has the
highest value.
10. Minimum total float. This rule is the one usually adopted when scheduling by network
techniques.
11. Subsequent operation. Look ahead to see where the job will go after this operation has
been completed and process first the job which goes to a ‘critical’ queue, that is a facility
having a small queue of available work, thus minimizing the possibility of facility idle
time.

12. First come, first served (FCFS).
13. Random (e.g. in order of job number, etc.).
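The slack-based rules (1–3 above) can be computed mechanically once due dates and remaining operation times are known. A small sketch, with hypothetical job data:

```python
def priorities(jobs, now):
    """Compute the three slack-based priority indices for each job.
    `jobs` maps name -> (due_date, list of remaining operation times)."""
    out = {}
    for name, (due, ops) in jobs.items():
        remaining = sum(ops)
        slack = due - now - remaining   # rule 1: job slack S = (t0 - ti) - sum(p)
        per_op = slack / len(ops)       # rule 2: slack per operation, S/N
        ratio = slack / (due - now)     # rule 3: slack ratio, S/(t0 - ti)
        out[name] = (slack, per_op, ratio)
    return out

# Hypothetical data: due dates (day numbers) and remaining operation times
jobs = {"J1": (20, [3, 4]), "J2": (18, [2, 2, 2]), "J3": (15, [6])}
for name, (s, spn, ratio) in sorted(priorities(jobs, now=10).items()):
    print(name, s, round(spn, 2), round(ratio, 2))
```

Note that J3 comes out with negative slack: as the text says, a negative index means the job cannot meet its due date and should therefore be processed first.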

Priority rules can be classified further as follows:


1. Local rules depend solely on data relating to jobs in the queue at any particular facility.
Also known as myopic rules, they look only at the individual machine.
2. General rules depend on data relating to jobs in the queue at any particular facility and/or
data for jobs in queues at other facilities. They look at the entire shop.
Local rules, because of the smaller amount of information used, are easier and cheaper to
calculate than general (sometimes called global) rules. All of the above rules, with the exception
of rule 11, are local rules. Similarly, SPT is myopic whereas WINQ is global.

Another classification scheme of these rules is as follows:


1. Static rules are those in which the priority index for a job does not change with the
passage of time while waiting in any one queue.
2. Dynamic rules are those in which the priority index is a function of the present time.
Rules 4, 5, 6, 7, 8, 9, 10, 11, 12 and 13 are all static, whereas the remainder are dynamic.
Perhaps the most effective rule according to present research is the SIO rule and, more
particularly, the various extensions of this rule. Massive simulation studies have shown that, of
all 'local' rules, those based on the SIO rule are perhaps the most effective, certainly when
considered against criteria such as minimizing the number of jobs in the system, the mean of the
'completion distribution' and the throughput time. The SIO rule appears to be particularly
effective in reducing throughput time, the 'truncated SIO' and the 'two-class SIO' rules being
perhaps the most effective derivatives, having the additional advantage of reducing throughput-
time variance and lateness.
The ‘first come, first served’ priority rule has been shown to be particularly beneficial in
reducing average lateness, whereas the ‘scheduled start date and total float’ rule has been proved
effective where jobs are of the network type.

Example: Let the current time be 10. Machine B has just finished a job, and it is time to select its
next job. Table 4.4 provides information on the four jobs available. For each of the dispatching
rules discussed above, determine the corresponding sequence.

Table 4.4. Available Jobs

                                                     Operation (machine, pij)
Job   Arrival to system   Arrival at B   Due date      1         2         3
1            10                10            30      (B, 5)    (A, 1)    (D, 6)
2             0                 5            20      (A, 5)    (B, 3)    (C, 2)
3             0                 9            10      (C, 3)    (D, 2)    (B, 2)
4             0                 8            25      (E, 6)    (B, 4)    (C, 4)

Solution: pij = processing time for job i on machine j


SPT: Looking at machine B, we find that jobs (1, 2, 3, 4) have processing times of (5, 3, 2, 4).

Placing jobs in increasing order of processing time results in the job sequence (3, 2, 4, 1). So
load job 3 on machine B.
EDD: Jobs (1, 2, 3, 4) have due dates (30, 20, 10, 25) respectively. Arranging in increasing order
of the due dates, we have the job sequence (3, 2, 4, 1), which means job 3 should be loaded next
on machine B.
FCFS: Jobs arrived at machine B at times (10, 5, 9, 8). Placing earliest arrivals first, we obtain
the job sequence (2, 4, 3, 1).
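These sequences can be reproduced directly from the data in Table 4.4, since each dispatching rule is simply a sort key:

```python
# From Table 4.4: job -> (arrival_at_B, due_date, processing_time_on_B)
jobs = {1: (10, 30, 5), 2: (5, 20, 3), 3: (9, 10, 2), 4: (8, 25, 4)}

spt = sorted(jobs, key=lambda j: jobs[j][2])   # shortest processing time on B first
edd = sorted(jobs, key=lambda j: jobs[j][1])   # earliest due date first
fcfs = sorted(jobs, key=lambda j: jobs[j][0])  # earliest arrival at B first

print(spt)   # [3, 2, 4, 1]
print(edd)   # [3, 2, 4, 1]
print(fcfs)  # [2, 4, 3, 1]
```

In each case the first element of the sequence is the job to load next on machine B.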

Judging sequencing rules


Some general guidelines for when certain sequencing rules may be appropriate are given below:
1. SPT is most useful when the shop is highly congested. SPT tends to minimize mean flow
time, mean number of jobs in the system (and thus work-in-process inventory), and percent of
jobs tardy. By completing more jobs quickly, it theoretically satisfies a greater number of
customers than the other rules. However, with SPT some long jobs may be completed very late,
resulting in a small number of very unsatisfied customers.
For this reason, when SPT is used in practice, it is usually truncated (or stopped), depending on
the amount of time a job has been waiting or the nearness of its due date. For example, many
shared computer services process jobs by SPT. Jobs that are submitted are placed in several
categories (A, B, or C) based on expected CPU time. The shorter jobs, or A jobs, are processed
first, but every couple of hours the system stops processing A jobs and picks the first job from
the B stack to run. After the B job is finished, the system returns to the A stack and continues
processing. C jobs may be processed only once a day. Other systems that have access to due date
information will keep a long job waiting until its SLACK is zero or its due date is within a
certain range.
2. Use SLACK for periods of normal activity. When capacity is not severely restrained, a
SLACK oriented rule that takes into account both due date and processing time will produce
good results.
3. Use DDATE when only small tardiness values can be tolerated. DDATE tends to minimize
mean tardiness and maximum tardiness. Although more jobs will be tardy under DDATE than
SPT, the degree of tardiness will be much less.
4. Use LPT if subcontracting is anticipated so that larger jobs are completed in-house, and
smaller jobs are sent out as their due date draws near.
5. Use FCFS when operating at low-capacity levels. FCFS allows the shop to operate essentially
without sequencing jobs. When the workload at a facility is light, any sequencing rule will do,
and FCFS is certainly the easiest to apply.
6. Do not use SPT to sequence jobs that have to be assembled with other jobs at a later date.
For assembly jobs, a sequencing rule that gives a common priority to the processing of different
components in an assembly, such as assembly DDATE, produces a more effective schedule.
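Guideline 1 above, that SPT tends to minimize mean flow time, can be checked on a single machine with a handful of jobs (the processing times below are hypothetical):

```python
def mean_flow_time(processing_times):
    """Mean flow time for jobs processed in the given order on one machine,
    with all jobs assumed available at time zero."""
    clock, total = 0, 0
    for p in processing_times:
        clock += p        # completion time of this job
        total += clock    # accumulate flow times
    return total / len(processing_times)

times_in_arrival_order = [6, 2, 8, 4]                  # FCFS sequence
print(mean_flow_time(times_in_arrival_order))          # 12.5
print(mean_flow_time(sorted(times_in_arrival_order)))  # 10.0 under SPT
```

Sorting shortest-first always pushes long jobs to the back, so fewer jobs wait behind them; this is the intuition behind SPT's optimality for mean flow time on a single machine.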

4.2.2 Loading or Assignment


Loading assigns jobs to work centres, within the limits of the maximum amount of work that
each work centre can perform. It is the amount of work allocated to a work centre, i.e. the
assignment of a job to a facility such as a machine, a group of workers or a department.
Assigning a subject to a teacher is loading. Loading should be done at a relatively high level.
Frequently, when attempting to decide how orders are to be scheduled onto available facilities,
one is faced with various alternative solutions. For example, many different facilities may be
capable of performing the operations
required on one customer or item. Operations management must then decide which jobs are to be
scheduled onto which facilities in order to achieve some objective, such as minimum cost or
minimum throughput time.

There are two types of loading. Finite loading refers to assigning work to a work centre with a
fixed capacity limit, such as a machine with a maximum processing rate. Infinite loading applies
where there is no maximum capacity limit at a particular work centre or activity, such as a queue
at a cash machine (ATM) that is allowed to grow longer and longer. Loading must take account
of time not worked, set-ups and changeovers, and machine down time, which can limit both the
planned time available and the actual time available.

Finite loading is an approach which only allocates work to a work centre (a person, a machine,
or perhaps a group of people or machines) up to a set limit. This limit is the estimate of capacity
for the work centre (based on the times available for loading). Work over and above this capacity
is not accepted. Finite loading is particularly relevant for operations where:
 it is possible to limit the load – for example, it is possible to run an appointment system
for a general medical practice or a hairdresser;
 it is necessary to limit the load – for example, for safety reasons only a finite number of
people and weight of luggage are allowed on an aircraft;
 the cost of limiting the load is not prohibitive – for example, the cost of maintaining a
finite order book at a specialist sports car manufacturer does not adversely affect demand,
and may even enhance it.

Infinite loading is an approach to loading work which does not limit accepting work, but instead
tries to cope with it. The second diagram in Figure 10.7 illustrates this loading pattern where
capacity constraints have not been used to limit loading so the work is completed earlier. Infinite
loading is relevant for operations where:
 it is not possible to limit the load – for example, an accident and emergency department
in a hospital should not turn away arrivals needing attention;
 it is not necessary to limit the load – for example, fast-food outlets are designed to flex
capacity up and down to cope with varying arrival rates of customers. During busy
periods, customers accept that they must queue for some time before being served.
Unless this is extreme, the customers might not go elsewhere;
 the cost of limiting the load is prohibitive – for example, if a retail bank turned away
customers at the door because a set amount were inside, customers would feel less than
happy with the service.

In complex planning and control activities where there are multiple stages, each with different
capacities and with a varying mix arriving at the facilities, such as a machine shop in an
engineering company, the constraints imposed by finite loading make loading calculations
complex and not worth the considerable computational power which would be needed.
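The finite-loading idea can be sketched as a simple accept/reject check against the work centre's capacity (the job list and capacity below are hypothetical):

```python
def finite_load(jobs, capacity):
    """Accept jobs in order until the work-centre capacity (hours) is
    exhausted; jobs that would exceed it are rejected (finite loading)."""
    accepted, rejected, load = [], [], 0.0
    for name, hours in jobs:
        if load + hours <= capacity:
            accepted.append(name)
            load += hours
        else:
            rejected.append(name)
    return accepted, rejected, load

# Hypothetical jobs (name, hours) offered to an 80-hour work centre
jobs = [("J1", 30), ("J2", 25), ("J3", 40), ("J4", 20)]
print(finite_load(jobs, capacity=80))  # (['J1', 'J2', 'J4'], ['J3'], 75.0)
```

Infinite loading, by contrast, would simply accept every job and let the queue grow, as at the ATM example above.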

Example: A company must complete five orders during a particular period. Each order consists
of several identical products, and each order can be made on one of several facilities. Table 4.5
gives the operation time for each product on each of the available facilities (OPT = order
processing time, in hours). The available capacity for these facilities for the period in question is:
A = 100 hours, B = 80 hours, and C = 150 hours.
The index number for a facility is a measure of the cost disadvantage of using that facility for
processing, and is obtained by using this formula:
Ij = (Xij − Ximin) / Ximin

Table 4.5. Operation Time Per Item on Each Facility

          No of items       Facility A             Facility B             Facility C
Order     per order, Qi   Xij   OPT    Ij        Xij   OPT    Ij        Xij    OPT     Ij
1              30         5.0   150    1.0       4.0   120    0.60      2.5    75      0
2              25         1.5   37.5   0         2.5   62.5   0.67      4.0    100     1.67
3              45         2.0   90     0         4.0   180    1.00      4.5    202.5   1.25
4              15         3.0   45     0.2       2.5   37.5   0         3.5    52.5    0.40
5              10         4.0   40     1.0       3.5   35     0.75      2.0    20      0
Capacity (hours)          100                    80                     150
% use                     90                     78                     98

(Xij is the operation time per item in hours; OPT = Qi × Xij, the order processing time in hours.)

Where
Ij = Index number for facility j
Xij = Operation time for item i on facility j
Ximin = Minimum operation time for item i

For order 1: IA = (5.0 – 2.5)/2.5 = 1.0


IB = (4.0 – 2.5)/2.5 = 0.6
IC = (2.5 – 2.5)/2.5 = 0
The table above shows the index numbers for all facilities and orders. Using the table, and
remembering that the index number is a measure of the cost disadvantage of using a facility,
we can now allocate orders to facilities.

• The best facility for order 1 is C (IC = 0); the processing time for that order (75 hours) is less
than the available capacity. We can therefore schedule the processing of this order on this
facility.
• Facility A is the best facility for order 2, but it is also the best for order 3. Both cannot be
accommodated because of limitations on available capacity, so we must consider allocating
one of the orders to another facility. The next best facility for order 2 is facility B
(IB = 0.67) and for order 3 the next best facility is also facility B (IB = 1.00).
Because the cost disadvantage on B is less for order 2, allocate order 2 to B and order 3 to A,
as shown in the table.
• The best facility for order 4 is B but there is now insufficient capacity available on this
facility. The alternatives now are to reallocate order 2 to another facility or to allocate order 4
elsewhere. In the circumstances it is better to allocate order 4 to facility C.
• Finally order 5 can be allocated to its best facility, namely, facility C.
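The index numbers used in this example are straightforward to compute; the sketch below reproduces the Ij columns of Table 4.5:

```python
# Operation time per item (hours) on facilities (A, B, C), from Table 4.5
times = {1: (5.0, 4.0, 2.5), 2: (1.5, 2.5, 4.0), 3: (2.0, 4.0, 4.5),
         4: (3.0, 2.5, 3.5), 5: (4.0, 3.5, 2.0)}

def index_numbers(row):
    """Ij = (Xij - Ximin) / Ximin: the cost disadvantage of each facility."""
    x_min = min(row)
    return tuple(round((x - x_min) / x_min, 2) for x in row)

for order, row in times.items():
    print(order, index_numbers(row))
# The best facility for each order scores 0, e.g. order 1 -> (1.0, 0.6, 0.0)
```

The allocation step itself (respecting capacity limits) is then the greedy procedure walked through in the bullet points above.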

4.2.3 Scheduling
Having determined the sequence that work is to be tackled in, some operations require a detailed
timetable showing at what time or date jobs should start and when they should end- this is
scheduling. Schedules are familiar statements of volume and timing in many consumer
environments. For example, a bus schedule shows that more buses are put on routes at more
frequent intervals during rush-hour periods. The bus schedule shows the time each bus is due to
arrive at each stage of the route. Schedules of work are used in operations where some planning
is required to ensure that customer demand is met. Other operations, such as rapid-response
service operations where customers arrive in an unplanned way, cannot schedule the operation in
a short-term sense. They can only respond at the time demand is placed upon them.

Scheduling is the time phase of loading: the assignment of a job to a facility, specifying the
particular sequence of the work and the times of actual performance. Examples of schedules
include a railway timetable, an examination schedule and the timetable for teaching various
subjects. Scheduling should be done at a relatively low level of the organization.
Scheduling activities are highly dependent on the type of production system and the output
volume delivered by the system. Scheduling activities differ in:

a) High-volume system
b) Intermediate-volume system, and
c) Low-volume Systems

High-Volume (flow) Systems: They make use of specialized equipment that routes work on a
continuous basis through the same fixed path of operations, generally at a rapid rate. The
problems of order release, dispatching, and monitoring are less complex than in low-volume,
make-to-order systems. However, material flows must be well coordinated, inventories carefully
controlled, and extra care taken to avoid equipment breakdowns, material shortages, etc. to avoid
production-line downtime.

Intermediate-volume (flow and batch) Systems: They utilize a mixture of equipment and
similar processes to produce an intermittent flow of similar products on the same facilities. The

sequencing of jobs and production-run lengths are of significant concern to schedulers, as they
attempt to balance the costs of changeover time against those of inventory accumulations.

Low-Volume (batch or single job) Systems: They use general-purpose equipment that must
route orders individually through a unique combination of work centers. The variable work-flow
paths and processing times generate queues, work-in-process inventories, and capacity utilization
concerns that can require more day-to-day attention than in the high- or intermediate-volume
systems.

There may be some overlap among the high-, intermediate-, and low-volume production system
classifications. For example, some intermittent operations are much like job shops, whereas
some low-volume operations are done in batches, and job shops often exist within continuous
systems. The table also gives some comparative data for projects.

Techniques of Scheduling
Backward or Reverse Scheduling: External due date considerations will directly influence
activity scheduling in certain structures. The approach adopted in scheduling activities in such
cases will often involve a form of reverse scheduling with the use of bar or Gantt charts. A major
problem with such reverse or ‘due date’ scheduling is in estimating the total time to be allowed
for each operation, in particular the time to be allowed for waiting or queuing at facilities. Some
queuing of jobs (whether items or customers) before facilities is often desirable since, where
processing times on facilities are uncertain, high utilization is achieved only by the provision of
such queues.
Operation times are often available, but queuing times are rarely known initially. The only
realistic way in which queuing allowances can be obtained is by experience. Experienced
planners will schedule operations, making allowances which they know from past performances
to be correct. Such allowances may vary from 50 per cent to 2000 per cent of operation times
and can be obtained empirically or by analysis of the progress of previous jobs. It is normally
sufficient to obtain and use allowances for groups of similar facilities or for particular
departments, since delays depend not so much on the nature of the job, as on the amount of work
passing through the departments and the nature of the facilities.

Operations schedules of this type are usually depicted on Gantt or bar charts. The advantage of
this type of presentation is that the load on any facility or any department is clear at a glance, and
available or spare capacity is easily identified. The major disadvantage is that the dependencies
between operations are not shown and, consequently, any readjustment of such schedules
necessitates reference back to operation planning documents. Notice that, in scheduling the
processing of items, total throughput time can be minimized by the batching of similar items to
save set-up time, inspection time, etc.

Forward Scheduling: For a manufacturing or supply organization, a forward scheduling
procedure will in fact be the opposite of that described above. This approach will be particularly
relevant where scheduling is undertaken on an internally oriented basis and the objective is to
determine the dates or times for subsequent activities, given the time for an earlier activity, e.g. a
starting time.

In the case of supply or transport organizations, the objective will be to schedule forward from a
given start date, where that start date will often be the customer due date, e.g. the date at which
the customer arrives into the system. In these circumstances, therefore, forward scheduling will
be an appropriate method for dealing with externally oriented scheduling activities.

Forward scheduling involves starting work as soon as it arrives. Backward scheduling involves
starting jobs at the last possible moment to prevent them from being late. For example, assume
that it takes six hours for a contract laundry to wash, dry and press a batch of overalls. If the
work is collected at 8.00 am and is due to be picked up at 4.00 pm, there are more than six hours
available to do it.

The choice of backward or forward scheduling depends largely upon the circumstances.
Table 4.6 lists some advantages and disadvantages of the two approaches. In theory, both
materials requirements planning (MRP) and just-in-time planning (JIT) use backward
scheduling, only starting work when it is required. In practice, however, users of MRP have
tended to allow too long for each task to be completed, and therefore each task is not started at
the latest possible time. In comparison, JIT is started, as the name suggests, just in time.

Table 4.6. Advantages of forward and backward scheduling

Advantages of forward scheduling:
• High labour utilization – workers always start work as soon as it arrives, to keep busy
• Flexible – the time slack in the system allows unexpected work to be loaded

Advantages of backward scheduling:
• Lower material costs – materials are not used until they have to be, therefore delaying added value until the last moment
• Less exposed to risk in case of schedule change by the customer
• Tends to focus the operation on customer due dates

Example 1. Assume that it takes six hours for a contract laundry to wash, dry and press a batch
of overalls. If the work is collected at 8.00 am and is due to be picked up at 4.00 pm, there are
more than six hours available to do it. Table 4.7 shows the different start times of each job,
depending on whether they are forward- or backward-scheduled.

Table 4.7 Forward and Backward Scheduling

Task     Duration (hrs)   Start time (backwards)   Start time (forwards)
Press    1 hr             3.00 pm                  1.00 pm
Dry      2 hr             1.00 pm                  11.00 am
Wash     3 hr             10.00 am                 8.00 am

Example 2. A job is due to be delivered at the end of the 12th week. It requires a lead time of 2 weeks for material acquisition, 1 week of run time for operation 1, 2 weeks for operation 2, and 1 week for final assembly. Allow 1 week of transit time prior to each operation. Illustrate the completion schedule under (a) forward and (b) backward scheduling methods.
Solution: The solution is shown in Table 4.8.
Table 4.8 Forward and Backward Scheduling

Task                   Duration (weeks)   Start time (backwards)   Start time (forwards)
Material acquisition   2                  4th week                 1st week
Transit time 1         1                  6th week                 3rd week
Operation 1            1                  7th week                 4th week
Transit time 2         1                  8th week                 5th week
Operation 2            2                  9th week                 6th week
Transit time 3         1                  11th week                8th week
Assembly               1                  12th week                9th week
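The logic behind Table 4.8 can be sketched in a few lines of code. The sketch below assumes discrete week numbers, with each task starting in the week after its predecessor finishes; the task names and data are those of Example 2.

```python
# Forward and backward scheduling for the Example 2 job (durations in weeks).
tasks = [
    ("Material acquisition", 2),
    ("Transit 1", 1),
    ("Operation 1", 1),
    ("Transit 2", 1),
    ("Operation 2", 2),
    ("Transit 3", 1),
    ("Assembly", 1),
]
DUE_WEEK = 12

def forward_schedule(tasks, start_week=1):
    """Start each task as early as possible, beginning at start_week."""
    schedule, week = {}, start_week
    for name, dur in tasks:
        schedule[name] = week          # start week of this task
        week += dur                    # successor starts once this one ends
    return schedule

def backward_schedule(tasks, due_week):
    """Start each task as late as possible so the last one ends at due_week."""
    schedule, week = {}, due_week
    for name, dur in reversed(tasks):
        week -= dur - 1                # start week so the task ends at 'week'
        schedule[name] = week
        week -= 1                      # predecessor must end the week before
    return schedule

fwd = forward_schedule(tasks)
bwd = backward_schedule(tasks, DUE_WEEK)
for name, _ in tasks:
    print(f"{name:22s} backward: week {bwd[name]:2d}   forward: week {fwd[name]:2d}")
```

Running the sketch reproduces the start weeks shown in Table 4.8 (e.g. material acquisition in the 4th week backwards and the 1st week forwards).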

Gantt charts
The most common method of scheduling is by use of the Gantt chart. This is a simple device
which represents time as a bar, or channel, on a chart. Often the charts themselves are made up
of long plastic channels into which coloured pieces of paper can be slotted to indicate what is
happening with a job or a work centre. The start and finish times for activities can be indicated
on the chart and sometimes the actual progress of the job is also indicated. The advantages of
Gantt charts are that they provide a simple visual representation both of what should be
happening and of what actually is happening in the operation. Furthermore, they can be used to
‘test out’ alternative schedules. It is a relatively simple task to represent alternative schedules
(even if it is a far from simple task to find a schedule which fits all the resources satisfactorily).
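A bar-per-task chart of this kind is easy to mock up in code. The minimal sketch below (an illustration, not taken from the module) renders the forward-scheduled laundry jobs of Example 1 as text bars on an hourly axis; the hours are on a 24-hour clock.

```python
# A minimal text Gantt chart: each task is a bar of '#' characters
# positioned on an hourly time axis.
def gantt_lines(tasks, axis_start, axis_end, cell=4):
    """Build the chart as a list of strings; tasks are (name, start hour, duration)."""
    header = "      " + "".join(f"{h:<{cell}d}" for h in range(axis_start, axis_end + 1))
    lines = [header]
    for name, start, dur in tasks:
        bar = " " * cell * (start - axis_start) + "#" * cell * dur
        lines.append(f"{name:<6s}{bar}")
    return lines

laundry = [("Wash", 8, 3), ("Dry", 11, 2), ("Press", 13, 1)]  # forward-scheduled
for line in gantt_lines(laundry, axis_start=8, axis_end=16):
    print(line)
```

Shifting a task is just a matter of changing its start hour and reprinting, which is exactly the 'testing out' of alternative schedules described above.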

Automated Scheduling Systems


Scheduling large numbers of workers at numerous locations requires a computerized scheduling
system. Sophisticated employee scheduling software is available as a stand-alone system or as
part of an ERP package. For example, Workbrain, part of Infor’s ERP system, provides:
• Staff scheduling assigns qualified workers to standardized shift patterns, taking into account leave requests and scheduling conflicts. The solutions include social constraints such as labor laws for minors, overtime payment regulations, and public or religious holidays that may differ by global location.
• Schedule bidding puts certain shift positions or schedule assignments up for bid by workers;
allows workers to post and trade schedules with others as long as coverage and skill criteria
are met.
• Schedule optimization creates demand-driven forecasts of labor requirements and assigns workers to variable schedules (in some cases, in blocks of time as small as 15 minutes) that change dynamically with demand. It uses mathematical programming and artificial intelligence techniques.
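The spirit of demand-driven assignment can be illustrated with a toy greedy sketch. To be clear, this is NOT Workbrain's actual algorithm; the worker names, the forecast, and the two-block cap are all hypothetical, chosen only to show demand coverage under a per-worker constraint.

```python
# Toy greedy assignment: staff each 15-minute block until forecast demand
# is covered, respecting a (hypothetical) per-worker block limit.
forecast = {"09:00": 2, "09:15": 3, "09:30": 1}  # staff needed per block
workers = ["Abel", "Marta", "Sara"]
MAX_BLOCKS = 2                                   # hypothetical labour-rule cap

assignments = {w: [] for w in workers}
shortfalls = {}
for block, needed in forecast.items():
    staffed = 0
    for w in workers:
        if staffed == needed:
            break
        if len(assignments[w]) < MAX_BLOCKS:     # respect the per-worker cap
            assignments[w].append(block)
            staffed += 1
    if staffed < needed:
        shortfalls[block] = needed - staffed     # record understaffed blocks

for w, blocks in assignments.items():
    print(f"{w}: {blocks}")
print("Shortfalls:", shortfalls or "none")
```

Commercial systems replace this greedy pass with optimization models, but the inputs (a demand forecast per block plus worker constraints) are the same.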

Chapter Summary
Production Planning and control is the reconciliation of the potential of the operation to supply
products and services, and the demands of its customers on the operation. It is the set of day-to-
day activities that run the operation on an ongoing basis. Process planning deals with
determining the specific technologies and procedures required to produce a product. Strategic
capacity planning deals with determining the long-term capabilities (such as the size and scope)
of the production systems. Sales and operations planning involves taking the sales plan from
marketing and developing an aggregate operations plan that balances demand and supply. For
service and manufacturing, the aggregate operations plan is essentially the same, the major
exception being manufacturing’s use of inventory buildups and cutbacks to smooth production.
After the aggregate operations plan is developed, manufacturing and service planning are
generally quite different. In manufacturing, the planning process can be summarized as follows:
the production control group inputs existing or forecast orders into a master production schedule
(MPS). The MPS generates the amount and dates of specific items required for each order.
Rough Cut Capacity planning then verifies that production and warehouse facilities, equipment,
and labor are available and that key vendors have allocated sufficient capacity to provide
materials when required. Material requirements planning (MRP) takes the end-product
requirements from the MPS and breaks them down into their component parts and subassemblies to
create a material plan. This plan specifies when production and purchase orders must be placed
for each part and subassembly to complete the product on schedule. Most MRP systems also
allocate capacity to each order. The final planning activity is daily or weekly order scheduling of
jobs to specific machines, production lines, or work centers. In planning and controlling the volume and timing of activity in a job shop, four distinct activities are necessary: loading, which dictates the amount of work that is allocated to each part of the operation; sequencing, which decides the order in which work is tackled within the operation; scheduling, which determines the detailed timetable of activities and when activities are started and finished; and monitoring and control, which checks whether activities are going to plan and adjusts the operation where necessary.

Review questions
Multiple Choice Questions
1. Aggregate planning for fast food restaurants is very similar to aggregate planning in
manufacturing, but with much smaller units of time.
a) True    b) False
2. Planning tasks associated with loading, sequencing, expediting, and dispatching typically
fall under
a. short-range plans
b. intermediate-range plans
c. long-range plans
d. mission-related planning
e. strategic planning
3. Dependence on an external source of supply is found in which of the following aggregate
planning strategies?
a. varying production rates through overtime or idle time
b. subcontracting
c. using part-time workers
d. back ordering during high demand periods

e. hiring and laying off
4. Which of the following statements regarding aggregate planning is true?
a. In a pure level strategy, production rates or work force levels are adjusted to
match demand requirements over the planning horizon.
b. A pure level strategy allows lower inventories when compared to pure chase and
hybrid strategies.
c. In a mixed strategy, there are changes in both inventory and in work force and
production rate over the planning horizon.
d. Because service firms have no inventory, the pure chase strategy does not apply.
e. All of the above are true.
5. A document calls for the production of 50 small garden tractors in week 1; 50 small garden
tractors and 100 riding mowers in week 2; 100 riding mowers and 200 garden utility carts in
week 3; and 100 riding mowers in week 4. This document is most likely a(n)
a. net requirements document
b. resource requirements profile
c. aggregate plan
d. master production schedule
e. Wagner-Whitin finite capacity document

Discussion Questions
1. How is product development and design associated with production planning?
2. Explain how aggregate plans and MPS initiate functional activities.
3. What are priority-sequencing rules? Why are they needed?
4. Discuss the major differences between finite and infinite loading.
5. Suppose Alemeda Textile Share Company is engaged in the gentlemen’s garment business. At the beginning of the working day, the order-receiving unit has received orders from its customers for coats, gowns, shirts, trousers and jackets, in that order. There is only one work center, in which all kinds of jobs are performed. The following is the time (in days) required to complete each job in the work center.

Activity/Job    Processing Time   Number of Days to Due Date
Coat /C/        7                 8
Gown /G/        2                 3
Shirts /S/      5                 7
Trousers /T/    3                 9
Jacket /J/      6                 6
Schedule the jobs using the first come, first served and critical ratio rules, and evaluate each schedule in terms of performance measures.
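As a hint for the two rules named in the question, a generic sketch follows, using hypothetical jobs (deliberately not the exercise data, so the exercise is left to the reader). Critical ratio is taken here as (days to due date) / (processing days), with the lowest ratio sequenced first.

```python
# Two priority-sequencing rules on hypothetical jobs:
# (name, processing days, days to due date)
jobs = [("A", 4, 6), ("B", 2, 9), ("C", 5, 5)]
times = {name: p for name, p, _ in jobs}

fcfs_order = [name for name, _, _ in jobs]            # arrival order as given
cr_order = [name for name, _, _ in
            sorted(jobs, key=lambda j: j[2] / j[1])]  # lowest critical ratio first

def flow_times(sequence, times):
    """Completion (flow) time of each job when processed in the given order."""
    clock, out = 0, {}
    for name in sequence:
        clock += times[name]
        out[name] = clock
    return out

for label, seq in [("FCFS", fcfs_order), ("Critical ratio", cr_order)]:
    ft = flow_times(seq, times)
    print(f"{label}: {seq}, mean flow time = {sum(ft.values()) / len(seq):.2f}")
```

The same flow-time function supports other performance measures (e.g. lateness is flow time minus days to due date), which is what the question asks you to compare.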

CHAPTER V
QUALITY MANAGEMENT

This chapter will introduce you to the basic concepts and approaches associated with the
operations’ perspective on quality management. The chapter begins by presenting definitions of
quality, and describing the historical evolution of quality. The second section compares different
applications of quality management to managing the transformation process. The last section
presents a selection of tools and techniques for managing quality.

Learning Objectives
After reading this chapter you will be able to:
• Define quality from different perspectives
• Describe how quality management has evolved over time
• Identify different ways to manage quality within operations
• Distinguish some common techniques and tools for managing quality in operations.

5.1 Meaning and Nature of Quality


Quality is not an option in most walks of life. For example, it would be unthinkable for airline
pilots or hospital midwives to aim for anything less than perfection in what they do, and
nonsense to think of only trying for an ‘acceptable’ level of failure – one plane crash in 100 or
one baby dropped per 500 deliveries! In similar fashion, no artist who is serious about his or her
work would think of producing something that did not reflect their best endeavours and provide
an object or artifact of lasting value.

One of the annoying factors about quality is that seemingly unimportant details can have an
astonishing impact on how quality is perceived. For example, when Concorde crashed it was as a
result of a lack of attention to a piece of debris on the runway. The consequences of this were
both profound and fatal. Concorde had not suddenly become a poor quality product, but the issue
of safety (the most important element of travel in our evaluation of service quality) now became
paramount.

Quality has been defined as the totality of characteristics of an entity that bear on its ability to
satisfy stated or implied needs (FDRE, Federal Negarit Gazeta - No. 26, 3rd March, 1998, p
700)

American Society for Quality defines quality as a subjective term for which each person has his
or her own definition. In technical usage, quality can have two meanings: a. The characteristics
of a product or service that bear on its ability to satisfy stated or implied needs; b. A product or
service free of deficiencies. The definition by American Society talks about subjectivity of
quality. Quality has no specific meaning unless related to a specific function and/or object.
Quality is a perceptual, conditional and somewhat subjective attribute.

Quality has proved more difficult to define than other operations concepts. However, a number
of writers have attempted to clarify the nature of quality. The following definitions are taken as
examples:
• Fitness for purpose or use (Juran, an early doyen of quality management).
• The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs (ISO 9000:2000, 1986).
• ‘Quality should be aimed at the needs of the consumer, present and future’ (Deming, another early doyen of quality management).
• ‘The total composite product and service characteristics of marketing, engineering, manufacture and maintenance through which the product and service in use will meet the expectation of the customer’ (Feigenbaum, author of Total Quality Control).
• Conformance to requirements (Crosby, an American quality management consultant famous in the 1980s).

More broadly, Garvin (1983) stated that quality can be defined through five principal
approaches: (1) Transcendent quality is an ideal, a condition of excellence. (2) Product-based
quality is based on a product attribute. (3) User-based quality is fitness for use. (4)
Manufacturing-based quality is conformance to requirements. (5) Value-based quality is the
degree of excellence at an acceptable price.

1. Transcendent quality is ‘innate excellence’ – an absolute and universally recognizable


high level of achievement. Examples of this come from various artistic achievements that
have had profound emotional impact, which cannot necessarily be measured but is real
nonetheless. Transcendent quality often depends highly on intangible, subjective
elements.
2. User-based quality ‘lies in the eye of the beholder’, so that each person will have a
different idea of quality, based on its fitness for use by the individual.
3. Value-based quality is performance or conformance at an acceptable price or cost. In a
sense the distinction between ‘high’ and ‘low’ quality is largely meaningless – quality is
no longer a term associated with ‘high end’ market tastes, but rather is measured by each
particular customer segment within an overall market. Someone who owns a Lada, for
example, might be equally satisfied with their car’s performance as someone who owns a
Mercedes, since they may be willing to put up with a lower level of finish and
performance for the lower price.
4. Product-based quality is a precise and measurable variable, and goods can be ranked
according to how they score on this measure. This allows customers and manufacturers to
compare products, sometimes without even using or experiencing the product. Magazines
such as Which are good examples of this quality focus, as they provide summary tables
for different products based on measuring and comparing goods such as household
appliances, automobiles and home entertainment equipment.
5. Manufacturing-based quality is ‘conformance to requirements’, adhering to a design or
specification. This view of quality takes little account of customer needs or preferences.
A popular example is the ‘cement lifejacket’: an operation could claim quality products
under the manufacturing-based definition even if the products were completely useless,
as long as they adhered to the standards that had been set for their manufacture.

Notably, the latter four definitions of quality can be arranged along a continuum from the
customer or client’s perception of the product (or service) to the producer’s perception. In reality,
successful quality management is achieved by linking the needs of the customer with operations
capabilities. This match is addressed by Feigenbaum (1983) in his definition of the term quality, which goes as follows: quality is the total composite product and service characteristics through
which the product or service in use will meet the expectations of the customer. He also stated
that quality control must start with identification of customer quality requirements and end only
when the product has been placed in the hands of a customer who remains satisfied.

Thus, user-based and value-based quality are defined externally to the producing organization,
whilst product-based and manufacturing based quality are defined internally. Can these two
perspectives be reconciled? The bridge model in Figure 5.1 suggests a way of taking both into
account.

Bridge (Customer): Quality is the opinion of the totality of goods and service provision as determined by the customer; quality is affected by the concept of value.

Internal (Operations): Quality is conformance to internal procedures; quality is not making mistakes and maximizing internal efficiency; quality is ‘fitness for purpose’.

External (Customers): Quality is a set of expectations and perceptions, which we have a role in managing.

Figure 5.1 Bridge model of quality

This bridge model emphasizes the need for operations to manage the intangible aspects of quality
as well as those definable and measurable characteristics that can be controlled or at least
affected by operations, whilst marketing must have a good understanding of customer
requirements. These requirements must then be fed into operations so that customer satisfaction
can be achieved.

5.2 Dimensions of Quality


Dimensions of Quality for Manufactured Products
The dimensions of quality that a consumer looks for in a manufactured product include the following:
1. Performance: The basic operating characteristics of a product; for example, how well a car
handles or its gas mileage.

2. Features: The “extra” items added to the basic features, such as a stereo CD or a leather
interior in a car.
3. Reliability: The probability that a product will operate properly within an expected time
frame; that is, a TV will work without repair for about seven years.
4. Conformance: The degree to which a product meets preestablished standards.
5. Durability: How long the product lasts; its life span before replacement. A pair of Anbessa
Shoes Sco boots, with care, might be expected to last a lifetime.
6. Serviceability: The ease of getting repairs, the speed of repairs, and the courtesy and
competence of the repair person.
7. Aesthetics: How a product looks, feels, sounds, smells, or tastes.
8. Safety: Assurance that the customer will not suffer injury or harm from a product; an
especially important consideration for automobiles.
9. Other perceptions: Subjective perceptions based on brand name, advertising, and the like.

Dimensions of Quality for Services


The dimensions of quality for a service differ somewhat from those of a manufactured product.
Service quality is more directly related to time, and the interaction between employees and the
customer. Evans and Lindsay identify the following dimensions of service quality.
1. Time and timeliness: How long must a customer wait for service, and is it completed on
time? For example, is an overnight package delivered overnight?
2. Completeness: Is everything the customer asked for provided? For example, is a mail
order from a catalogue company complete when delivered?
3. Courtesy: How are customers treated by employees? For example, are frontline employees of the Commercial Bank of Ethiopia nice and pleasant?
4. Consistency: Is the same level of service provided to each customer each time? Is your
newspaper delivered on time every morning?
5. Accessibility and convenience: How easy is it to obtain the service?
6. Accuracy: Is the service performed right every time? Is your bank or credit card
statement correct every month?
7. Responsiveness: How well does the company react to unusual situations, which can
happen frequently in a service company? For example, how well is a telephone operator
at a given company able to respond to a customer’s questions about a catalogue item not
fully described in the catalogue?

These quality characteristics are weighed by the customer relative to the cost of the product.
In general, customers will pay for the level of quality they can afford. If they feel they are
getting what they paid for (or more), then they tend to be satisfied with the quality of the
product.
5.3 Historical Perspective on Quality and Its Management
It would not be an exaggeration to say that there has been a revolution in thinking and practice
around the theme of quality.
We have seen in previous chapters how the transition from craft to mass production through to
the modern era has had profound influence on operations management. This is clearly evident in
quality. In the earliest days of manufacturing, quality was essentially built into the work of the
craftsman. For example, the notion of ‘taking pride in work’ was a central pillar of the medieval

guild system, whereby concern for quality was trained into the hearts and minds of apprentices
onwards. The Industrial Revolution destroyed much of this one-to-one identification with the
product and led to a loss of the craft ethic to be gradually replaced by the factory system.
Although quality was important, especially in the pioneering applications of new technologies
evident in the bridges, machinery and other products of that period, it was often in competition
with the demands of high productivity for satisfying massively expanding demand.

In the latter part of the nineteenth century, the focus of attention in manufacturing shifted to the
USA, where the ideas of Taylor and Ford were of particular importance. In Taylor’s model of the
effective factory, quality was one of eight key functions identified as of critical importance for
shop foremen to manage, while Radford’s influential book, The Control of Quality in
Manufacturing, placed further emphasis on the task of inspection as a separate function.

In services at this time, quality remained strongly associated with the traditional values: a high-
quality solicitor or bank would be one that exhibited a haughtiness and aplomb, rather than
measurably excellent service. A ‘quality’ school would be one to which well-known people sent
their sons or daughters, rather than one in which the excellence of education could be assessed.
This reflected the immaturity of markets (e.g. their inability to demand or complain), as well as
some deeply entrenched vested interests (e.g. the superiority of some public schools).

Lastly, the concept of ‘professions’ and the deference they commanded meant that service providers could get away with poor quality, simply telling the customer that failure was attributable to factors that could not be explained to the layman.

Taylor’s model became the blueprint not only for the mass production factories of the 1920s and
1930s, but also for many other types of business. Typically, emphasis was placed on inspection
as the main control mechanism for quality, supporting a process of gradual refinement in product
and process design that aimed to eliminate variation and error. The essential character remained
one in which the majority of people were not involved; the task of managing quality fell to a
handful of specialists.

In 1931, in perhaps the most significant development, Walter Shewhart wrote a book based on
his experience in the Bell Telephone Laboratories entitled The Economic Control of
Manufactured Products. This study of methods for monitoring and measuring quality marked the
emergence of the concept of statistical quality control as a sophisticated replacement for the
simple inspection procedures of the 1920s. The development reinforced the idea of quality
needing specialists (able to understand and use statistical methods) to manage it, further
separating it from the labourer, the machinist and even the foreman.

Of particular interest was the work of a group of quality experts, including William Edwards
Deming (1986) and Joseph Juran (who worked for a while with Bell Labs in the quality
assurance department set up by Shewhart), who were involved in wartime training and who
helped establish the American Society for Quality Control. Within this forum, many of the key
ideas underpinning quality management today were first articulated, but their impact was limited
and little understanding of quality control principles extended beyond the immediate vicinity of
the shopfloor.

In 1951, Juran published his Quality Control Handbook, in which he highlighted not only the
principles of quality control, but also the potential economic benefits of a more thorough
approach to preventing defects and managing quality on a company-wide basis. He suggested
that failure costs were often avoidable, and the economic pay-off from preventive measures to
reduce or eliminate failures could be between $500 and $1000 per operator – what he referred to
as the ‘gold in the mine’. A few years later, Armand Feigenbaum extended these ideas into the
concept of ‘total quality control’, in which he drew attention to the fact that quality was not
determined simply in manufacturing, but began in the design of the product and extended
throughout the entire factory (Feigenbaum, 1956). As he put it, the first principle to recognize is
that quality is everybody’s job.

Strangely, these ideas were not taken up with any enthusiasm in the West – but they did find a
ready audience in post-war Japan, which was facing the twin problems of catching up with
Western practice and rebuilding its shattered industrial base. Much of the reason for the relative
lack of interest amongst Western firms can be traced back to economic factors. For most firms,
the 1950s were a boom period. One consequence of this relatively easy market environment was
that the stringencies of the war years were relaxed and there was a general slowdown in effort in
both productivity growth and quality improvement practices.

In the 1960s, the concept of ‘quality assurance’ began to be promoted by the defence industry in
response to pressure from the NATO defence ministries for some guarantees of quality and
reliability. This grew out of work on ‘reliability engineering’ in the USA, which led to a number
of military specifications establishing the requirements for reliability programmes in
manufacturing organizations. (Some indication of the size of the problem can be gauged from the
fact that, in 1950, only 30 per cent of the US Navy’s electronics devices were working properly
at any given time.) Such approaches were based on extensive application of statistical techniques
to problems like that of predicting the reliability and performance of equipment over time.

This link with the defence sector (and latterly, by association, with the aerospace industry) led to
the formalizing of quality standards for products, including components and materials, supplied
by subcontractors for military customers. In the USA and the UK, the so-called ‘military
specifications’ and ‘defence standards’ gave rise to the practice of formal assessment of
suppliers, for purposes of accreditation as acceptable sources.

Quality assurance (QA) is the name given to the set of systems (embodying rules and
procedures) that are used by a firm to assure the manufacture of quality products. Although
clearly a sound idea in principle, by 1969 it had become enshrined in an increasingly
bureaucratic set of rules and procedures that suppliers needed to go through to obtain
certification by defence agencies. Consequently, in the firms themselves, QA became an
increasingly dogmatic, bureaucratic and specialized function – a book of rules rather than a live
principle.

The combination of QA and the supplier assessment initiatives described above gave rise to the
concept of supplier quality assurance (SQA). In order to ensure compliance with increasingly
rigorous standards, certification and checking of suppliers began to take place, where the onus

was placed upon suppliers to provide evidence of their ability to maintain quality in products and
processes. Such vendor appraisal was often tied to the award of important contracts, and
possession of certification could also be used as a marketing tool to secure new business because
it provided an indication of the status of a quality supplier.

In keeping with the general tenor of quality management to date, however, SQA maintained the
idea that quality was something ‘outside’ the process – as if it were the result of inspection (this
time, with the customer wearing the ‘white coat’).

By the mid to late 1970s, there were many of these SQA schemes in operation, all complex and
often different for each major customer. As a result, suppliers faced a major task in trying to
ensure compliance and certification. Such congestion led to the need for some form of central
register of approved schemes and some common agreement on the rules of good QA practice.
There are now a number of national and international standards which relate to the whole area of
quality assurance and require the establishment and codification of complete quality assurance
systems, and achievement of certification (e.g. ISO 9000) has become a prerequisite for
participation in many global markets.

Since the 1980s, quality has emerged as a major strategic factor due to increased competition on a
global scale, which in turn has given greater amounts of choice and power to consumers. When
Henry Ford declared that a customer can have a car painted any colour he likes ‘as long as it is
black!’, he was reflecting the competitive conditions of the time; the market was immature and
would do what it was told to do. However, in the new millennium many markets have numerous
competitors within them.

The oil shocks of the 1970s created a crisis in mass production, which led Western firms to adopt
many of the more visible elements of Japanese production (including quality) in the hopes of
regaining competitive advantage. Among these were quality circles (QCs) and total quality
management (TQM), along with a number of ideas developed by Japan’s own quality experts,
Kaoru Ishikawa and Genichi Taguchi, during the 1970s.

Both just-in-time and total quality management have been widely adopted by organizations.
Today, new variants on quality, including quality awards and quality certification systems, are
widely promoted as the new keys to organizational success, but their long-term worth has yet to
be proved. There are as many success stories as there are stories of failure. Motorola, Xerox, and
Ritz-Carlton Hotels are well-known examples of firms that have successfully implemented
quality efforts, and where quality has become part of the organizational culture as well as a top
organizational goal. A brief history of the development of quality is illustrated in Table 5.1.

Table 5.1 Major events in quality evolution

Emphasis                        Major themes                       Dates            Key figures
1 Inspection                    Craft production                   Prior to 1900s
                                Inspection                         1900s
                                Standardized parts and gauging     1900s
                                Control charts and acceptance      1920s            Walter Shewhart, Harold Dodge,
                                sampling                                            Harry Romig
2 Statistical process control   Theory of SPC                      1931             Walter Shewhart
                                US experts visit Japan             1940s            W. Edwards Deming, Joseph Juran,
                                                                                    Armand Feigenbaum
3 Quality assurance             Cost of quality                    1950s            Joseph Juran
                                Total quality control              1950s            Armand Feigenbaum
                                Quality control circles in Japan                    Kaoru Ishikawa, Taiichi Ohno
                                Reliability engineering            1960s
                                Zero defects
4 Total quality management      Robust design                      1960s            Genichi Taguchi
                                Quality function deployment        1970s
                                Design for manufacture/assembly    1980s
                                TQM in the West                    1980s–present

In a nutshell, since the 1970s, competition based on quality has grown in importance and has
generated tremendous interest, concern, and enthusiasm. Companies in every line of business are
focusing on improving quality in order to be more competitive. In many industries quality
excellence has become a standard for doing business. Companies that do not meet this standard
simply will not survive. As you will see later in the chapter, the importance of quality is
demonstrated by national quality awards and quality certifications that are coveted by businesses.
The term used for today’s new concept of quality is total quality management or TQM. Table 5.1
above presents a timeline of the old and new concepts of quality. You can see that the old
concept is reactive, designed to correct quality problems after they occur. The new concept is
proactive, designed to build quality into the product and process design. Next, we look at the
individuals who have shaped our understanding of quality.

5.4 Quality Gurus


To fully understand the quality movement, we need to look at the philosophies of notable
individuals who have shaped the evolution of quality management. Their philosophies and
teachings have contributed to our knowledge and understanding of quality today.

Walter A. Shewhart

Shewhart was a statistician at Bell Labs during the 1920s and 1930s. Shewhart studied
randomness and recognized that variability existed in all manufacturing processes. He developed
quality control charts that are used to identify whether the variability in the process is random or
due to an assignable cause, such as poor workers or miscalibrated machinery. He stressed that
eliminating variability improves quality. His work created the foundation for today’s statistical
process control, and he is often referred to as the “grandfather of quality control.”

W. Edwards Deming
Deming is often referred to as the “father of quality control.” Deming (1986) stressed the
responsibilities of top management to take the lead in changing processes and systems. It is the
top management’s responsibility to create and communicate a vision to move the firm towards
continuous improvement. Top management is responsible for most quality problems; it should
give employees clear standards for what is considered acceptable work, and provide the methods
to achieve it. These methods include an appropriate working environment and climate for work
free of faultfinding, blame or fear.
Deming (1986) also emphasized the importance of identification and measurement of customer
requirements, creation of supplier partnership, use of functional teams to identify and solve
quality problems, enhancement of employee skills, participation of employees, and pursuit of
continuous improvement.
The means to improve quality lie in the ability to control and manage systems and processes
properly, and in the role of management responsibilities in achieving this. Deming (1986)
advocated methodological practices, including the use of specific tools and statistical methods in
the design, management, and improvement of process, which aim to reduce the inevitable
variation that occurs from “common causes” and “special causes” in production.
“Common causes” of variations are systemic and are shared by many operators, machines, or
products. They include poor product design, non-conforming incoming materials, and poor
working conditions. These are the responsibilities of management.
“Special causes” relate to the lack of knowledge or skill, or poor performance. These are the
responsibilities of employees.

Deming proposed 14 points as the principles of quality management, which are listed below:
1. Create constancy of purpose towards improvement of product and service, with the aim
to become competitive and to stay in business, and to provide jobs.
2. Adopt the new philosophy. We are in a new economic age. Western management must
awaken to the challenge, must learn their responsibilities, and take on leadership for
change.
3. Cease dependence on mass inspection to achieve quality. Eliminate the need for inspection on a
mass basis by building quality into the product in the first place.
4. End the practice of awarding business on the basis of price tag. Instead, minimize total
cost. Move towards a single supplier for any one item, on a long-term relationship of
loyalty and trust.
5. Improve constantly and forever the system of production and service, to improve quality
and productivity, and thus constantly decrease costs.
6. Institute training on the job.

7. Institute leadership. The aim of supervision should be to help people and machines and
gadgets to do a better job. Supervision of management is in need of overhaul, as well as
supervision of production workers.
8. Drive out fear, so that people may work effectively for the company.
9. Break down barriers between departments. People in research, design, sales, and
production must work as a team, to foresee problems of production and in use that may
be encountered with the product or service.
10. Eliminate slogans, exhortations, and targets for the workforce asking for zero defects and
new levels of productivity. Such exhortations only create adversarial relationships, as the
bulk of the causes of low quality and low productivity belong to the system and thus lie
beyond the power of the workforce.
11. Eliminate work standards (quotas) on the factory floor. Substitute leadership. Eliminate
management by objective. Eliminate management by numbers, numerical goals.
Substitute leadership.
12. Remove barriers that rob the hourly worker of his right to pride of workmanship. The
responsibility of supervisors must be changed from sheer numbers to quality. Remove
barriers that rob people in management and in engineering of their right to pride of
workmanship. This means, inter alia, abolishment of the annual or merit rating and of
management by objective.
13. Institute a vigorous program of education and self-improvement.
14. Put everybody in the company to work to accomplish the transformation. The
transformation is everybody’s job.

These points are principles that help guide companies in achieving quality improvement. The
principles are founded on the idea that upper management must develop a commitment to quality
and provide a system to support this commitment that involves all employees and suppliers.
Deming stressed that quality improvements cannot happen without organizational change that
comes from upper management.

Joseph M. Juran
After W. Edwards Deming, Dr. Joseph Juran is considered to have had the greatest impact on
quality management. Juran originally worked in the quality program at Western Electric. He
became better known in 1951, after the publication of his book Quality Control Handbook. In
1954 he went to Japan to work with manufacturers and teach classes on quality. Though his
philosophy is similar to Deming’s, there are some differences. Whereas Deming stressed the
need for an organizational “transformation,” Juran believes that implementing quality initiatives
should not require such a dramatic change and that quality management should be embedded in
the organization.

One of Juran’s significant contributions is his focus on the definition of quality and the cost of
quality. Juran is credited with defining quality as fitness for use rather than simply conformance
to specifications. As we have learned in this chapter, defining quality as fitness for use takes into
account customer intentions for use of the product, instead of only focusing on technical
specifications. Juran is also credited with developing the concept of cost of quality, which allows
us to measure quality in dollar terms rather than on the basis of subjective evaluations.

Juran is well known for originating the idea of the quality trilogy: quality planning, quality
control, and quality improvement. The first part of the trilogy, quality planning, is necessary so
that companies identify their customers, product requirements, and overriding business goals.
Processes should be set up to ensure that the quality standards can be met. The second part of the
trilogy, quality control, stresses the regular use of statistical control methods to ensure that
quality standards are met and to identify variations from the standards. The third part of the
quality trilogy is quality improvement. According to Juran, quality improvements should be
continuous as well as breakthrough. Together with Deming, Juran stressed that to implement
continuous improvement workers need to have training in proper methods on a regular basis.

Phillip B. Crosby

Philip B. Crosby is another recognized guru in the area of TQM. He worked in the area of
quality for many years, first at Martin Marietta and then, in the 1970s, as the vice president for
quality at ITT. He developed the phrase “Do it right the first time” and the notion of zero defects,
arguing that no amount of defects should be considered acceptable. He scorned the idea that a
small number of defects is a normal part of the operating process because systems and workers
are imperfect. Instead, he stressed the idea of prevention.

To promote his concepts, Crosby wrote a book titled Quality Is Free, which was published in
1979. He became famous for coining the phrase “quality is free” and for pointing out the many
costs of quality, which include not only the costs of wasted labor, equipment time, scrap, rework,
and lost sales, but also organizational costs that are hard to quantify. Crosby stressed that efforts
to improve quality more than pay for themselves because these costs are prevented. Therefore,
quality is free.
Like Deming and Juran, Crosby stressed the role of management in the quality improvement
effort and the use of statistical control tools in measuring and monitoring quality. Crosby (1979)
identified a number of important principles and practices for a successful quality improvement
program, which include, for example, management participation, management responsibility for
quality, employee recognition, education, reduction of the cost of quality (prevention costs,
appraisal costs, and failure costs), emphasis on prevention rather than after-the-event inspection,
doing things right the first time, and zero defects.

Crosby claimed that mistakes have two causes: lack of knowledge and lack of attention.
Education and training can eliminate the first cause, while a personal commitment to
excellence (zero defects) and attention to detail will cure the second. Crosby also stressed the
importance of management style to successful quality improvement. The key to quality
improvement is to change the thinking of top managers: to get them to stop accepting mistakes
and defects, since tolerating them lowers work expectations and standards throughout the firm.
Understanding, commitment, and communication are all essential. Crosby presented the quality
management maturity grid, which can be used by firms to evaluate their quality management
maturity.

The five stages are: Uncertainty, awakening, enlightenment, wisdom and certainty. These stages
can be used to assess progress in a number of measurement categories such as management
understanding and attitude, quality organization status, problem handling, cost of quality as a
percentage of sales, and summation of firm quality posture. The quality management maturity
grid and cost of quality measures are the main tools for managers to evaluate their quality status.

Armand V. Feigenbaum

Another quality leader is Armand V. Feigenbaum, who introduced the concept of total quality
control. Feigenbaum (1991) defined quality management as: an effective system for integrating
the quality development, quality-maintenance, and quality-improvement efforts of the various
groups in a firm so as to enable marketing, engineering, production, and service at the most
economical levels which allow for full customer satisfaction. He claimed that effective quality
management consists of four main stages, described as follows:
• Setting quality standards;
• Appraising conformance to these standards;
• Acting when standards are not met;
• Planning for improvement in these standards.

In his 1961 book Total Quality Control, he outlined his quality principles in 40 steps.
Feigenbaum took a total system approach to quality. He promoted the idea of a work
environment where quality developments are integrated throughout the entire organization,
where management and employees have a total commitment to improve quality, and people learn
from each other’s successes. This philosophy was adapted by the Japanese and termed
“company-wide quality control.”

Kaoru Ishikawa

Ishikawa (1985) argued that quality management extends beyond the product and encompasses
aftersales service, the quality of management, the quality of individuals and the firm itself. He
claimed that the success of a firm is highly dependent on treating quality improvement as a
never-ending quest.

Kaoru Ishikawa is best known for the development of quality tools called cause-and-effect
diagrams, also called fishbone or Ishikawa diagrams. These diagrams are used for quality
problem solving, and we will look at them in detail later in the chapter. He was the first quality
guru to emphasize the importance of the “internal customer,” the next person in the production
process. He was also one of the first to stress the importance of total company quality control,
rather than just focusing on products and services.
Dr. Ishikawa believed that everyone in the company needed to be united with a shared vision and
a common goal. He stressed that quality initiatives should be pursued at every level of the
organization and that all employees should be involved. Dr. Ishikawa was a proponent of
implementation of quality circles, which are small teams of employees that volunteer to solve
quality problems.

Ishikawa’s concept of quality management contains the following six fundamental principles:
• Quality first – not short-term profits first;
• Customer orientation – not producer orientation;
• The next step is your customer – breaking down the barrier of sectionalism;
• Using facts and data to make presentations – utilization of statistical methods;
• Respect for humanity as a management philosophy – full participatory management;
• Cross-functional management.

Genichi Taguchi
Dr. Genichi Taguchi is a Japanese quality expert known for his work in the area of product
design. He estimates that as much as 80 percent of all defective items are caused by poor product
design. Taguchi stresses that companies should focus their quality efforts on the design stage, as
it is much cheaper and easier to make changes during the product design stage than later during
the production process.

Taguchi is known for applying a concept called design of experiment to product design.
This method is an engineering approach that is based on developing robust design, a design that
results in products that can perform over a wide range of conditions. Taguchi’s philosophy is
based on the idea that it is easier to design a product that can perform over a wide range of
environmental conditions than it is to control the environmental conditions. Taguchi has also had
a large impact on today’s view of the costs of quality. He pointed out that the traditional view of
costs of conformance to specifications is incorrect, and proposed a different way to look at these
costs.

5.5 Typology of approaches to quality management

Several methods have evolved to achieve, sustain and improve quality; they are quality control,
quality improvement and quality assurance, which collectively are known as quality
management. Operations managers must ensure that the goods or services produced by the
transformation process meet quality specifications, i.e., they have the responsibility of managing
quality. Many different techniques and tools for managing quality have emerged to support this
responsibility. One way to organize the different approaches to quality management is to show
where they are normally used within the transformation process (Figure 5.2).

[Figure 5.2 is a diagram of the transformation model (inputs, transformation process, outputs,
with a feedback loop from customers). It places each quality management approach where it is
normally used: design and supply chain control act on the inputs, process control acts during the
transformation process, inspection and sampling act on the outputs, and complaints and recovery
sit in the feedback loop.]

Figure 5.2: Quality management and the transformation model.


1. Outputs
Quality management includes ensuring the quality of the outputs of the transformation process
by sorting them into acceptable or unacceptable categories before they are delivered to customers
or clients. This is most related to Garvin’s manufacturing-based definition of quality – quality as
meeting specifications.
Conformity describes the degree to which the design specifications are met in the production of
the product or service, and is, again, highly influenced by operations capabilities. Although
specifications are initially set in the design process, operations managers are responsible for
ensuring that the products and services that are delivered to customers meet those specifications.

Two kinds of specifications can be identified for products or services, attributes and variables.
Attributes are aspects of a product or service that can be checked quickly and a simple yes or no
decision made as to whether the quality is acceptable. Thus, attributes are quality aspects of a
product or service that are either met or not met.
Variable measures, on the other hand, are aspects of a product or service that can be measured
on a continuous scale, including factors of weight, length, speed, energy consumption and so on.
Rather than a simple yes/no judgement, variables indicate the degree to which a standard is met.

The responsibility for conformity within manufacturing operations is sometimes assigned to a
specific quality control (QC) department. The QC department may be responsible for a variety
of activities, including assessing the level of quality of goods and services, and of the processes
that produce those goods and services. The tools used by the QC department are described later
in the chapter.

Quality control is usually associated with two types of quality management:
a. Inspection
b. Acceptance sampling.

Inspection
The most basic way of measuring quality is through inspection: measuring the level of quality of
each unit of output of the operation and deciding whether it does or does not meet quality
specifications. Inspection classifies each product as good or bad. Products that fail inspection
may be reworked to meet quality standards, sold as seconds (at reduced prices) or scrapped
altogether. One hundred per cent inspection requires checking every unit of output. This is
clearly impractical in many circumstances -for example, a brewery would probably go out of
business quickly if inspectors had to take a sip from every cask or bottle of beer! In general,
inspection requires too many organizational resources to be used as a method of quality control
except when the consequences of nonconformance are significant. This may come into play with
very expensive products, or when there are high risks associated with failure.

Inspection is the art of determining actual conformity of product to the specifications laid down
for it. It is a tool to control the quality of a product. Thus, inspection means checking the
acceptability of the manufactured product. It measures the quality of a product or service in
terms of predefined standards. Product quality may be specified in terms of strength, hardness,
shape, surface finish, etc.

Types of Inspection
Inspection can be either preventive or corrective. Preventive inspection is concerned with
discovering defects, the cause of defects, and helping in the removal of such causes. Corrective
(remedial) inspection, on the other hand, deals with sorting out good parts from the bad ones. Its
primary purpose is to discover the defective parts that have already been manufactured and
prevent their use in the final product. Many firms use both types of inspection, but the emphasis
is often on preventive inspection.
The idea is to prevent the inferior parts from further processing down the production line. This,
in turn, will reduce the labor cost.

Purpose of Inspection
• It separates defective components from non-defective ones and thus ensures the adequate
quality of products.
• It locates the defects in raw materials and flaws in process which otherwise cause
problems at the final stage. For example, detecting parts with improper tolerances during
processing itself, will minimize the troubles at the time of assembly.
• It prevents further work being done on semi-finished products already detected as spoiled.
• It ensures that the product works without hurting anybody.
• It detects sources of weakness and trouble in the finished products and thus checks the
work of designers.
• It builds up the reputation of the company with the customers by reducing the number of
complaints from them.

Acceptance sampling
Acceptance sampling is a technique for determining whether to accept a batch of items after
inspecting a sample of the items. The level of quality of a sample taken from a batch of products
or services is measured, and the decision as to whether the entire batch meets or does not meet
quality specifications is based on the sample. Acceptance sampling is used instead of inspection
when the cost of inspection is high relative to the consequences of accepting a defective item.

Rather than relying on guesswork, acceptance sampling is a statistical procedure based on one or
more samples. Acceptance sampling begins with the development of a sampling plan, which
specifies the sample size and the acceptance criterion. The maximum allowable
percentage of defective (non-conforming) items in a batch for it still to be considered good is
called the acceptable quality level (AQL). This is the quality level acceptable to the consumer
and the quality level the producer aims for. On the other hand, the worst level of quality that the
consumer will accept is called the lot tolerance percent defective (LTPD) level.

Since the sample is smaller than the entire batch, there is a risk that the sample will not
correctly represent the quality of the batch. The producer’s risk is the probability of rejecting a

lot whose quality meets or exceeds the acceptable quality level (AQL). The consumer’s risk is
the probability of accepting a lot whose level of defects is at or higher than the lot tolerance per
cent defective (LTPD). These may sometimes be described as Type I (alpha) and Type II (beta)
errors, terms that are derived from statistical theory.

In order to be useful, a sampling plan must balance the risk of mistakenly rejecting a good batch
(producer’s risk) and the risk of mistakenly accepting a bad batch (consumer’s risk). Together,
the AQL, LTPD and the two levels of risk define an operating characteristic (OC) curve, which
is a statistical representation of the probability of accepting a batch based on the actual
percentage defective.
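To make these ideas concrete, here is an illustrative sketch (not from the module; the plan numbers are invented) that computes points on an OC curve for a single sampling plan using the binomial distribution: sample n items and accept the lot if at most c defectives are found.

```python
# Probability of accepting a lot under a single sampling plan (n, c),
# as a function of the true fraction defective p.
from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    """P(accept lot) = P(at most c defectives in a sample of n)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Hypothetical plan: n = 50, c = 2, with AQL = 2% and LTPD = 10% defective.
n, c, aql, ltpd = 50, 2, 0.02, 0.10

pa_at_aql = prob_accept(n, c, aql)    # high: good lots usually pass
pa_at_ltpd = prob_accept(n, c, ltpd)  # low: bad lots usually fail

producers_risk = 1 - pa_at_aql   # alpha: rejecting a lot at the AQL
consumers_risk = pa_at_ltpd      # beta: accepting a lot at the LTPD
print(f"alpha = {producers_risk:.3f}, beta = {consumers_risk:.3f}")
```

Sweeping p from 0 to 1 and plotting `prob_accept` against p traces out the full OC curve for the plan.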

Sampling plans
One, two or more samples might be taken under different sampling plans. The number of
samples can be known in advance or determined by the results of each sample. In a single
sampling plan the decision to accept or reject a lot is made on the basis of one sample. This is the
simplest type of sampling plan.
In a double sampling plan, after the first sample the batch is accepted, rejected or, if the result
is inconclusive, a second sample is taken; the lot is then accepted or rejected on the basis of the
combined samples.

A sequential sampling plan extends the logic of the double sampling plan. Each time an item is
inspected, a decision is made to accept the lot, reject the lot, or continue sampling.
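The decision logic of a double sampling plan can be sketched as follows (a hypothetical illustration; the plan numbers c1, r1 and c2 are invented, not from the module):

```python
from typing import Optional

def double_sample_decision(d1: int, d2: Optional[int],
                           c1: int = 1, r1: int = 4, c2: int = 4) -> str:
    """d1/d2: defectives found in the first/second sample (d2 None if not taken).
    Accept after sample 1 if d1 <= c1; reject if d1 >= r1; otherwise a
    second sample is needed, and the lot is accepted iff d1 + d2 <= c2."""
    if d1 <= c1:
        return "accept"
    if d1 >= r1:
        return "reject"
    if d2 is None:
        return "take second sample"
    return "accept" if d1 + d2 <= c2 else "reject"

print(double_sample_decision(0, None))  # clearly good: accept on first sample
print(double_sample_decision(2, None))  # inconclusive: sample again
print(double_sample_decision(2, 1))     # combined 3 <= 4: accept
print(double_sample_decision(2, 5))     # combined 7 > 4: reject
```

A sequential plan simply repeats this accept/reject/continue decision after every single item inspected.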

Cost of quality
Inspection and acceptance sampling are two quality management techniques whose main
emphasis is on conformity. The level of quality aimed for in conformity-centred approaches is
often determined using economic analyses of the costs of quality. Hence, managers must know
the costs of quality in order to manage quality effectively. These costs can be divided into the
costs of making sure that quality mistakes do not happen, and the costs of fixing quality
mistakes. The costs of making sure mistakes do not happen can be divided into the costs of
appraisal and prevention. These issues have been described in detail in one of the forthcoming
sections of this chapter.

2. Process control

The conformity-based approaches to quality management described above merely sort acceptable
from unacceptable outputs, but do not address the underlying causes of poor quality. Quality
management can be more proactive through addressing quality defects during the production
process, rather than after it.

Take a simple example; eating a meal out in a restaurant. If the server waited until the end of the
meal to see if there were any complaints or problems, then he or she wouldn’t have a chance to
correct any problems that had occurred. However, if checks were made regularly during the

meal – that the food is what has been ordered, that it has arrived without too much delay, and that
it is of the right temperature and tastes good – then any problems could be dealt with
immediately.

The key concepts associated with process control were developed by Walter Shewhart at Bell
Laboratories in the 1920s. Some important techniques associated with process control include:
a) statistical process control
b) quality at the source.

Statistical process control


Statistical process control (SPC) measures the performance of a process, and can be used to
monitor and correct quality as the product or service is being produced, rather than at the
conclusion of the process. SPC uses control charts to track the performance of one or more
quality variables or attributes. Samples are taken of the process outputs and, if they fall outside
the acceptable range, corrections to the process are made. This allows operations to improve
quality during production itself.

Control charts
Control charts support process control through the graphical presentation of process measures
over time. They show both current data and past data as a time series. Both upper and lower
process control limits are shown for the process that is being controlled. If the data being plotted
fall outside of these limits, then the process is described as being ‘out of control’.

The statistical basis of control charts, and the insight that led to statistical process control rather
than process control based on guesswork or rule of thumb, is that the variation in process outputs
can be described statistically. Process variation results from one of two kinds of causes: common
(random) causes or assignable (special) causes. There will always be some variation in the
process due to random or uncontrollable changes in factors that influence the process, such as
temperature, but there will also be changes due to assignable factors that can be controlled or
corrected, including machine wear, incorrect adjustments and so on.

The goal of SPC is for the process to remain in control as much of the time as possible, which
means reducing or eliminating those causes of variation that can be controlled. For example,
wear over time can lead to a process going out of control.

SPC relies on a very simple graphical tool, the control chart, to track process variation. Control
charts plot the result of the average of small samples from the process over time, so that trends
can be easily identified. Managers are interested in the following:
 Is the mean stable over time?
 Is the standard deviation stable over time?

Two different types of control chart have been developed, for measurements of variables and
measurements of attributes.

Control charts for variables

Two kinds of control chart are usually associated with variable measures of quality, which
include physical measures of weight or length. Sample measurements can be described as a
normal distribution with a mean (μ) and a standard deviation (σ) (the mean describes the average
value of the process, and the standard deviation describes the variation around the mean). The
mean and standard deviation of the process can be used to determine whether a process is staying
within its tolerance range, the acceptance range of performance for the operation.

Control charts are based on sample means (X̄) and ranges (R) for ‘m’ samples of ‘n’ items each.
Besides the norm for the process, both upper and lower control limits that the process should not
exceed are also defined. Control limits are usually set at three standard deviations on either
side of the population mean. In addition, warning lines may be in place so that operators can see
a trend in the sampling process that might result in movement toward either the upper or lower
control limit.

An x-chart plots the sample mean to determine whether it is in control, or whether the mean of
the process samples is changing from the desired mean. Manufacturers often measure product
weights, such as bags of flour, to make sure that the right amount (on average) is packaged.

Though sample means may vary around the long-run process mean, if they stay within the upper
and lower control limits, the process is said to be in control. If the means of one or more samples had
been outside the control limits, then the process would have been out of control and it would
have been necessary for the process operator to take some action to get it back in control.

Managers may also be interested in how much the variance of the process is changing – that is,
whether the process range (highest to lowest) is stable. A range chart (R-chart) for variable
measures plots the average range (the difference between the largest and smallest values in a
sample) on a chart to determine whether it is in control. The purpose is to detect changes in the
variation of the process.
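A minimal sketch of the x-bar and R chart limits described above (the sample data are invented; the constants A2 = 0.577, D3 = 0 and D4 = 2.114 are the standard published control chart constants for samples of size n = 5):

```python
samples = [  # m = 6 samples of n = 5 weights (e.g. bags of flour, in kg)
    [1.01, 0.99, 1.00, 1.02, 0.98],
    [1.00, 1.01, 0.97, 1.03, 1.00],
    [0.99, 1.00, 1.01, 1.00, 0.99],
    [1.02, 0.98, 1.00, 1.01, 0.99],
    [1.00, 1.00, 1.02, 0.97, 1.01],
    [0.98, 1.01, 1.00, 1.00, 1.02],
]
A2, D3, D4 = 0.577, 0.0, 2.114  # control chart constants for n = 5

xbars = [sum(s) / len(s) for s in samples]   # sample means
ranges = [max(s) - min(s) for s in samples]  # sample ranges
xbarbar = sum(xbars) / len(xbars)            # grand mean (centre line)
rbar = sum(ranges) / len(ranges)             # average range

# x-bar chart limits: grand mean +/- A2 * R-bar
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
# R chart limits: D3 * R-bar (lower) and D4 * R-bar (upper)
ucl_r, lcl_r = D4 * rbar, D3 * rbar

out_of_control = [x for x in xbars if not (lcl_x <= x <= ucl_x)]
print(f"x-bar limits: ({lcl_x:.4f}, {ucl_x:.4f}); R chart UCL: {ucl_r:.4f}")
print("out-of-control sample means:", out_of_control)
```

With these invented data every sample mean falls inside the limits, so the process would be judged in control; a mean outside either limit would call for investigation of an assignable cause.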

Attribute charts
Process control using control charts can be done for attributes as well as variable measures. A p-
chart plots the sample proportion defective to determine whether the process is in control. The
population proportion defective is estimated by the average proportion defective (p̄) of m
samples of n items, from which the standard deviation (σp) can also be calculated. This sort of
chart is similar to the x-chart described above.
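An illustrative p-chart calculation (the defect counts are invented): the centre line is the average proportion defective p̄, and the three-sigma limits are p̄ ± 3·sqrt(p̄(1 − p̄)/n).

```python
from math import sqrt

n = 100                                # items inspected per sample
defectives = [4, 6, 3, 7, 5, 4, 6, 5]  # defectives found in m = 8 samples

p_bar = sum(defectives) / (n * len(defectives))  # average proportion defective
sigma_p = sqrt(p_bar * (1 - p_bar) / n)          # standard deviation of p

ucl = p_bar + 3 * sigma_p
lcl = max(0.0, p_bar - 3 * sigma_p)  # a proportion cannot be negative

for d in defectives:
    p = d / n
    status = "in control" if lcl <= p <= ucl else "OUT OF CONTROL"
    print(f"p = {p:.2f}: {status}")
```

Here p̄ = 0.05, the lower limit is truncated at zero, and all eight samples fall inside the limits.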

Statistical process control (SPC), a manufacturing concept, has been applied to services
(especially in quasi-manufacturing or back-office environments) with mixed levels of success.

Process capability
Process capability describes the extent to which a process is capable of producing items within
the specification limits, and can be represented as:

Cp = (UTL – LTL) / 6σ

where UTL = upper tolerance level, LTL = lower tolerance level and σ = the standard deviation
of the process.
A general rule of thumb is that Cp should be greater than one (three-sigma quality), i.e. the
process should remain within three standard deviations of the mean as much as possible; output
then falls within the specification limits about 99.7 per cent of the time. However, based on the
quality example established by Japanese firms, six-sigma quality is a more ambitious target. The
six-sigma target for process capability is associated with the American electronics firm
Motorola, which set a target of 3.4 defects per million. This underlines Motorola’s view that
defects should be very, very rare.

A related idea in services is service reliability - the ability of the service provider to deliver the
results that customers want time after time, without unpleasant surprises.

Other Quality Control Techniques and Tools


There are several other basic quality tools that allow data to be collected and factually
analysed. These quality tools and techniques play an important part in quality management.
Together with the control charts described above, they make up the ‘seven basic tools’ of
quality management:

1. Pareto analysis – this recognizes that it is often the case that 80 per cent of failures are
due to 20 per cent of problems, and therefore tries to find those 20 per cent and solve
them first.
2. Histograms – used to represent the frequency distribution of the data in visual form.
3. Cause and effect diagrams (fishbone charts or Ishikawa diagrams)– used to identify the
effect and work backwards, through symptoms to the root cause of the problem.
4. Stratification – identifying different levels of problems and symptoms using statistical
techniques applied to each layer.
5. Check sheets – structured lists or frameworks of likely causes which can be worked
through systematically. When new issues are found, they are added to the list.
6. Scatter diagrams – used to plot variables against each other and help identify where
there is a correlation or other pattern.
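Pareto analysis (item 1 above) can be sketched as follows: rank causes by frequency and keep accumulating them until roughly 80 per cent of failures are accounted for. The failure causes and counts are invented for illustration.

```python
# Sketch of Pareto analysis: rank failure causes by frequency and find
# the 'vital few' accounting for ~80 per cent of failures (data invented).
failures = {
    'loose fitting': 120,
    'scratch': 45,
    'wrong label': 25,
    'misalignment': 200,
    'discolouration': 10,
}

total = sum(failures.values())
ranked = sorted(failures.items(), key=lambda kv: kv[1], reverse=True)

vital_few, cumulative = [], 0
for cause, count in ranked:
    if cumulative / total >= 0.8:     # stop once ~80% is covered
        break
    vital_few.append(cause)
    cumulative += count

print(vital_few)   # the causes to try to solve first
```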

There are also many new and powerful tools associated with more advanced forms of
approach linked to the wider questions of quality management – sometimes referred to as
the ‘seven advanced tools’ of quality management. These include affinity diagrams,
relations diagrams, matrix diagrams, tree diagrams, arrow diagrams, matrix data analysis
and process decision program charts. All of these take a system-wide perspective and
provide ways of relating different elements in the quality process.
Quality at the source
As quality management shifts from process outputs to the process itself, there is a corresponding
change in the responsibility for quality. Inspection and sampling techniques supported the old
quality-control view that management or specialized QC personnel should be responsible for
ensuring quality. SPC highlights the idea that the people actually running the process should be
responsible for managing the quality associated with that process. This idea has been formalized
as quality at the source, the idea that each person involved in the production process is
responsible for making sure that their contribution meets specifications.

3. Design Quality

The importance of managing the inputs to the transformation process (besides those issues
having to do with the supply chain) should be noted. This includes the quality of the design of
the product or service in the first place. Design quality describes how the marketplace perceives
the product. Chapter 3 introduced practices that are associated with quality of design, including:
 Quality Function Deployment (QFD)
 Taguchi Methods
 Failure Mode Evaluation Analysis (FMEA).

5.6 Quality Standards, Certification and Awards


A quality management approach that is often associated with quality control and conformity is
quality certification to particular quality standards. Quality standards are codes that specify
certain organizational and operational practices and require that these practices are in place
and being followed.
Quality certification certifies organizational compliance with industry, national or international
quality standards.

One of the best-known standards is ISO 9000, which began in the UK (in the 1960s) but has been
widely adopted around the world. ISO 9000 provides generic guidelines and models for
accrediting the company’s quality management system. Its focus is conformity to practices
specified in the company’s own quality systems, setting out how the company will establish,
document and maintain an effective quality system that will demonstrate to its customers that it
is committed to quality and is able to supply their quality needs. The company defines the quality
system and can meet the requirements of the standard’s elements in various ways, but all of the
appropriate elements must be documented, the documentation must cover all of the requirements,
and the company must do what it has documented. ISO 9000 certification follows a satisfactory
audit by registrars.

Proponents of ISO 9000 certification have claimed that it leads to both internal and external
benefits, and to better quality through better quality systems, and that it signals that the firm has
better quality systems and thus enhances the marketing of the firm’s products and services. The
Department of Trade and Industry (1993) even proposed that a marketing advantage from BS
5750 could last for 18 months to 2 years simply by being the first in an industry to be
registered! Internally, the systemization of processes and procedures within the organization
ensures ‘the continued repeatability of a set of product and service characteristics that have been
explicitly or implicitly agreed to by a customer and a supplier’. Externally, quality certification
systems give purchasers confidence in the quality of suppliers’ products, since they have been
certified to meet a common set of system quality standards.

ISO 9000 quality standards and certifications do at least ensure that a minimum level of practices
is being followed. They can be appropriate when applied in the right circumstances. The
environmental standard ISO 14000 has been developed to certify environmentally responsible
practices by organizations, and has been widely adopted by companies.

Similarly, quality awards have become popular as a way of recognizing outstanding


achievements in quality management, and as a way for organizations to assess their own quality
performance. These frameworks may be used as the basis for awards, as a form of ‘self-assessment’,
or as a description of what should be in place.
Three important quality awards are the Deming Prize (the most desirable industrial quality award
in Japan), the Baldrige Award, and the European Quality Award.
• The Deming Prize in Japan was the first formal quality award framework established by
JUSE in 1950. The examination viewpoints include: top management leadership and
strategies; TQM frameworks, concepts and values; QA and management systems; human
resources; utilization of information; scientific methods; organizational powers;
realization of corporate objectives.
• The USA Baldrige Award aims to promote performance excellence and improvement in
competitiveness through a framework of seven categories which are used to assess
organizations: leadership; strategic planning; customer and market focus; information and
analysis; human resource focus; process management; business results.
• The European (EFQM) Excellence Model operates through a simple framework of
performance improvement through involvement of people in improving processes. The
full Excellence Model is a non-prescriptive framework for achieving good results –
customers, people, society, key performance – through the enablers – leadership, policy
and strategy, people, processes, partnerships and resources. The framework includes
proposed weightings for assessment.

5.7 Total Quality Management


Total quality management incorporates a holistic set of ideas about quality that go well beyond
operations. TQM goes beyond the idea of quality as conformance to some set of specifications to
that of quality as excelling on all dimensions that are important to the customer. TQM describes
an organizational culture as well as the sort of tools, techniques and organizational structures that
are associated with the conformance-based approaches above.

The development from inspection to TQM reveals the increasing strategic importance of quality
over time. As we mentioned earlier, quality used to mean conformance to specification: the
nature of this was conforming to process quality criteria (in-house). By the 1980s and 1990s,
however, quality became seen in terms of a total commitment from all areas including the supply
chain.

Total quality management is a term coined to describe Japanese style management approaches to
quality improvement. Since then, total quality management (TQM) has taken on many meanings
and interpretations. Some of these are presented here.

TQM is a management philosophy for continuously improving overall business performance


based on leadership, supplier quality management, vision and plan statement, evaluation, process
control and improvement, product design, quality system improvement, employee participation,
recognition and reward, education and training, and customer focus.

Total quality management (TQM) has been the most prominent and visible approach to quality to
evolve from the work of Deming and the early quality gurus. TQM originated in the 1980s as a
Japanese style management approach to quality improvement, and became very popular during
the 1990s, being adopted by thousands of companies. Although it has taken on many meanings,
it was (and still is) a philosophy for managing an organization centered on quality and customer
satisfaction as “the” strategy for achieving long-term success. It requires the active involvement,

participation and cooperation of everyone in the organization, and encompasses virtually all of
its activities and processes.

It is best thought of as a philosophy of how to approach quality improvement. This philosophy,


above everything, stresses the ‘total’ of TQM. It is an approach that puts quality at the heart of
everything that is done by an operation and including all activities within an operation.

TQM can be viewed as a logical extension of the way in which quality-related practice has
progressed. Originally quality was achieved by inspection – screening out defects before they
were noticed by customers. The quality control (QC) concept developed a more systematic
approach to not only detecting, but also treating quality problems. Quality assurance (QA)
widened the responsibility for quality to include functions other than direct operations. It also
made increasing use of more sophisticated statistical quality techniques. TQM included much of
what went before but developed its own distinctive themes. We will use some of these themes to
describe how TQM represents a clear shift from traditional approaches to quality. These
distinctive themes include the following:
 Top management commitment;
 all aspects of the business;
 meeting the needs and expectations of customers;
 covering all parts of the organization;
 including every person in the organization;
 managing supplier quality; and
 developing a continuous process of improvement.

Leadership and top management commitment. Senior managers must set an example in their
commitment to quality, particularly through their willingness to invest in training and other
important features of TQM. The ‘quality gurus’, including Deming, Juran and, more recently,
Crosby, all agreed that there must be senior management commitment to quality within the firm.
The senior executives must take personal charge of managing for quality and train their entire
managerial hierarchies in how to manage for quality. TQM requires extensive personal leadership
and participation by managers. A key factor in translating senior commitment to quality into
‘front-line’ operations comes from group-based activities, which provide a focus for much of the
powerful continuous improvement effort characteristic of TQM.

All aspects of the business. The quality drive relates to all personnel within the firm and also outside – all
aspects of the supply chain. Specifically, TQM is customer driven, values employee involvement, and
manages supplier relations.

a. Customer-driven quality. TQM means meeting the needs and expectations of customers.
In TQM, customers include both the external customers who purchase the products and
services, and the internal customers who receive the output of internal processes.
Customer-driven quality means that the organization listens to the ‘voice of the
customer’ in everything that it does. Techniques such as quality function deployment
(Chapter 3) support this customer focus.

Earlier in this chapter we defined quality as ‘consistent conformance to customers’ expectations’.


Therefore any approach to quality management must necessarily include the customer

perspective. In TQM this customer perspective is particularly important. It may be referred to as
‘customer-centricity’ or the ‘voice of the customer’. Whatever it is called, TQM stresses the
importance of starting with an insight into customer needs, wants, perceptions and preferences.
This can then be translated into quality objectives and used to drive quality improvement.

b. TQM means covering all parts of the organization. For an organization to be truly


effective, every single part of it, each department, each activity, and each person and
each level, must work properly together, because every person and every activity
affects and in turn is affected by others. One of the most powerful concepts that has
emerged from various improvement approaches is the concept of the internal
customer or supplier. This is recognition that everyone is a customer within the
organization and consumes goods or services provided by other internal suppliers, and
everyone is also an internal supplier of goods and services for other internal customers.
The implication of this is that errors in the service provided within an organization will
eventually affect the product or service which reaches the external customer.
Some organizations bring a degree of formality to the internal customer concept by encouraging
(or requiring) different parts of the operation to agree service-level agreements (SLAs) with
each other. SLAs are formal definitions of the dimensions of service and the relationship
between two parts of an organization. The type of issues which would be covered by such an
agreement could include response times, the range of services, dependability of service supply,
and so on. Boundaries of responsibility and appropriate performance measures could also be
agreed.
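As a small illustration of monitoring one SLA dimension such as response time, an internal supplier might compute the fraction of requests meeting an agreed target; the target and the response times below are assumptions for the sketch.

```python
# Sketch of checking response times against an internal service-level
# agreement; the 30-minute target and the data are illustrative assumptions.
sla_target_minutes = 30
response_times = [12, 25, 18, 41, 22, 28, 9, 33, 27, 15]

within = sum(1 for t in response_times if t <= sla_target_minutes)
compliance = within / len(response_times)
print(compliance)   # fraction of requests meeting the agreed target
```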
c. Employee involvement. TQM means including every person in the organization. Every
person in the organization has the potential to contribute to quality. Although it may be
necessary to develop some specialists to assist with maintaining quality levels, TQM
was amongst the first approaches to stress the centrality of harnessing everyone’s
impact on quality and therefore their potential contribution to quality. There is scope
for creativity and innovation even in relatively routine activities, claim TQM
proponents. The shift in attitude which is needed to view employees as the most
valuable intellectual and creative resource which the organization possesses can still
prove difficult for some organizations. But ideas contributed are just one measure of
individual contributions. Other contributions may be even more important. These
include participation on quality improvement and quality planning teams, work on
statistical quality control and self-control of their own work processes, and working as
members of high-performance or self-directing work teams. The concept of the quality
circle is discussed further in a forthcoming section of this chapter.

d. Managing Supplier Quality: TQM extends the concept of quality to a company’s


suppliers. Traditionally, companies tended to have numerous suppliers that engaged in
competitive price bidding. When materials arrived, an inspection was performed to
check their quality. TQM views this practice as contributing to poor quality and wasted
time and cost. The philosophy of TQM extends the concept of quality to suppliers and
ensures that they engage in the same quality practices. If suppliers meet preset quality
standards, materials do not have to be inspected upon arrival. Today, many companies
have a representative residing at their supplier’s location, thereby involving the supplier
in every stage from product design to final production.

Continuous improvement. It is an attitude that sees improvement as a never-ending process of
small gains. Continuous improvement relies on committed and involved employees, who
contribute suggestions and ideas for improving products and processes. This is built on the
Japanese idea of kaizen, which emphasizes providing workers with various tools for improving
operations (one of the forthcoming sections of the chapter deals with continuous improvement
issues in depth).

5.8 Quality circles


Quality circles began in Japan in the 1960s. The concept of quality circles is based on a
participative style of management. It assumes that productivity will improve through an uplift of
morale and motivation, which are in turn achieved through consultation and discussion in
informal groups. One organizational mechanism for worker participation in quality is the quality
circle. It is typically an informal group of people that consists of operators, supervisors,
managers and so on who get together to improve ways to make the product or deliver the service.

According to Juran, a quality circle is defined as a group of workforce-level people, usually from
within one department, who volunteer to meet weekly (on company time) to address quality
problems that occur within their department.

It involves a small group (five to ten people) who gather regularly in the firm’s time to examine
problems and discuss solutions to quality problems. They are usually drawn from the same area
of the factory and participate voluntarily in the circle. The circle is usually chaired by a foreman
or deputy and uses SQC methods and problem solving aids as the basis of their problem-solving
activity. An important feature, often neglected in considering QCs, is that there is an element of
personal development involved, through formal training but also through having the opportunity
to exercise individual creativity in contributing to improvements in the area in which participants
work.
The basic activity cycle of a QC goes from selection of a problem through analysis, solution
generation, presentations to management and implementation by management. Once the problem
is analysed and the root problem identified, ways of dealing with it can be identified.
The valuable techniques here include brainstorming (in its many variants) and goal orientation.
However, it is important that the structure and operation of the group support suggestions from
anyone (irrespective of levels in the organization, functional or craft skills background, etc.) and
allow for high levels of creativity – even if some of the ideas appear wild and impractical at the
time. The principles of brainstorming, especially regarding expert facilitation and enforcement of
a ‘no criticism’ rule during idea generation sessions, are important.
The circle does not have to confine itself to current problems – it can also involve itself in
forecasting. Here, the possible future problems resulting from each stage can be anticipated and
explored, perhaps employing failure mode effects analysis (see above). Finally, the group
presents the solution to management, who are expected to implement it. A key success factor in
QCs’ survival and effectiveness is the willingness of management to be seen to be committed to
the principles of TQM and to act on suggestions for improvement.

5.9 Total Quality Management (TQM) and Continuous Improvement/ Kaizen
TQM and kaizen are closely related concepts. TQM as a system that drives improvement is very
analogous to a Kaizen approach. Their elements and characteristics are mutually supportive, and
the two philosophies demand a similar organizational mindset. Consequently, on a company’s
road to TQM, a Kaizen approach – and any of the tools under its umbrella – is a compatible and
valuable complement.

But what is Kaizen? Below we present some important concepts of Kaizen (we are already
familiar with the concept of TQM from the preceding sections).

In the 1980s, management techniques focusing on employee involvement, empowerment
through teamwork, interactive communications and improved job design were not new, but
Japanese companies seemed to implement such techniques much more effectively than others.
The business lesson of the 1980s was that Japanese firms, in their quest for global
competitiveness, demonstrated a greater commitment to the philosophy of continuous
improvement than Western companies did. For such a philosophy the Japanese used the term
Kaizen.

Kaizen means improvement, continuous improvement involving everyone in the organization


from top management to managers, to supervisors and to workers. In Japan, the concept of
Kaizen is so deeply engrained in the minds of both managers and workers that they often do not
even realize they are thinking Kaizen, as a customer-driven strategy for improvement. This
philosophy assumes, according to Imai, that ‘‘our way of life – be it our working life, our social
life or our home life – deserves to be constantly improved’’.

There is a lot of controversy in the literature as well as the industry as to what Kaizen signifies.
Kaizen is a Japanese philosophy for process improvement that can be traced to the meaning of
the Japanese words ‘Kai’ and ‘Zen’, which translate roughly into ‘to break apart and investigate’
and ‘to improve upon the existing situation’. The Kaizen Institute defines Kaizen as the Japanese
term for continuous improvement. It is using common sense and is both a rigorous, scientific
method using statistical quality control and an adaptive framework of organizational values and
beliefs that keeps workers and management focused on zero defects. It is a philosophy of never
being satisfied with what was accomplished last week or last year.
Improvement begins with the admission that every organization has problems, which provide
opportunities for change. It evolves around continuous improvement involving everyone in the
organization and largely depends on cross-functional teams that can be empowered to challenge
the status quo.

The essence of Kaizen is that the people that perform a certain task are the most knowledgeable
about that task; consequently, by involving them and showing confidence in their capabilities,
ownership of the process is raised to its highest level. In addition, the team effort encourages
innovation and change and, by involving all layers of employees, the imaginary organizational
walls disappear to make room for productive improvements. From such a perspective, Kaizen is
not only an approach to manufacturing competitiveness but also everybody's business, because
its premise is based on the concept that every person has an interest in improvement. The

premise of a Kaizen workshop is to make people's jobs easier by taking them apart, studying
them, and making improvements.
The message is extended to everyone in the organization, and thus everyone is a contributor. So,
just as Kaizen for each individual is an attitude of continuous improvement, for the company it
is also a corporate attitude of continuous improvement. As presented by Imai, Kaizen is an
umbrella concept that embraces different continuous improvement activities in an organization.
The constituents of Kaizen are presented in Figure 5.3.

Figure 5.3 Constituents of Kaizen: the Kaizen philosophy at the centre, surrounded by
leadership, improvements, cross-functional teams, teams, 5S, discipline in the workplace,
productivity improvement and process focus.

A roadmap for Kaizen journey


As far as the roadmap is concerned, a model composed of five levels or stages of evolution of CI
has been developed. Each of these takes time to move through, and there is no guarantee that
organizations will progress to the next level. Moving on means having to find ways of
overcoming the particular obstacles associated with different stages.

The first stage – level 1 – is what we might call ‘unconscious CI’. There is little, if any, CI
activity going on, and when it does happen it is essentially random in nature and occasional in
frequency. People do help to solve problems from time to time – for example, they will pull
together to iron out problems with a new system or working procedure, or getting the bugs out of
a new product. But there is no formal attempt to mobilize or build on this activity, and many
organizations may actively restrict the opportunities for it to take place. The normal state is one
in which CI is not looked for, not recognized, not supported – and often, not even noticed. Not
surprisingly, there is little impact associated with this kind of change.

Level 2, on the other hand, represents an organization’s first serious attempts to mobilize CI. It
involves setting up a formal process for finding and solving problems in a structured and

systematic way – and training and encouraging people to use it. Supporting this will be some
form of reward/recognition arrangement to motivate and encourage continued participation.
Ideas will be managed through some form of system for processing and progressing as many as
possible and handling those that cannot be implemented. Underpinning the whole set-up will be
an infrastructure of appropriate mechanisms (teams, task forces or whatever), facilitators and
some form of steering group to enable CI to take place and to monitor and adjust its operation
over time. None of this can happen without top management support and commitment of
resources to back that up.

Level 2 is all about establishing the habit of CI within at least part of the organization. It
certainly contributes improvements but these may lack focus and are often concentrated at a local
level, having minimal impact on the more strategic concerns of the organization. The danger is
that, once the habit of CI has been established, it may lack any clear target and begin to fall
away. In order to maintain progress there is a need to move to the next level of CI – concerned
with strategic focus and systematic improvement.

Level 3 involves coupling the CI habit to the strategic goals of the organization such that all the
various local level improvement activities of teams and individuals can be aligned. In order to do
this, two key behaviours need to be added to the basic suite – those of strategy deployment and
of monitoring and measuring. Strategy (or policy) deployment involves communicating the
overall strategy of the organization and breaking it down into manageable objectives towards
which CI activities in different areas can be targeted. Linked to this is the need to learn to
monitor and measure the performance of a process and use this to drive the continuous
improvement cycle.

Level 3 activity represents the point at which CI makes a significant impact on the bottom line –
for example, in reducing throughput times, scrap rates, excess inventory, etc. It is particularly
effective in conjunction with efforts to achieve external measurable standards (such as ISO
9000), where the disciplines of monitoring and measurement provide drivers for eliminating
variation and tracking down root cause problems. The majority of ‘success stories’ in CI can be
found at this level – but it is not the end of the journey. One of the limits of level 3 CI is that the
direction of activity is still largely set by management and within prescribed limits. Activities
may take place at different levels, from individuals through small groups to cross-functional
teams, but they are still largely responsive and steered externally. The move to level 4 introduces
a new element – that of ‘empowerment’ of individuals and groups to experiment and innovate on
their own initiative.
Clearly, this is not a step to be taken lightly, and there are many situations where it would be
inappropriate – for example, where established procedures are safety critical. But the principle of
‘internally directed’ CI as opposed to externally steered activity is important, since it allows for
the open-ended learning behaviour that we normally associate with professional research
scientists and engineers. It requires a high degree of understanding of, and commitment to, the
overall strategic objectives, together with training to a high level to enable effective
experimentation.
It is at this point that the kinds of ‘fast learning’ organizations described in some ‘state-of-the-
art’ innovative company case studies can be found – places where everyone is a researcher and
where knowledge is widely shared and used.

Level 5 is a notional end-point for the journey – a condition where everyone is fully involved in
experimenting and improving things, in sharing knowledge and in creating the complete learning
organization. No such organization exists in our experience, but it represents the ideal towards
which CI development can be directed.

5.10 TQM and Benchmarking


Benchmarking is the process of measuring an organization's internal processes and then
identifying, understanding, and adapting outstanding practices from other organizations
considered to be best-in-class. Benchmarking is a continuous, systematic process of evaluating
and comparing the capability of one organization with others normally recognized as industry
leaders, for insights for optimizing the organization’s processes. Benchmarking is the process
of comparing the cost,
time or quality of what one organization does against what another organization does. The result
is often a business case for making changes in order to make improvements. It is the systematic
process of comparing an organization’s products, services and practices against those of
competitor organizations or other industry leaders to determine what it is they do that allows
them to achieve high levels of performance.
The use of benchmarking to compare performance on quality indicators – for example, defective
parts per million – and the practices which different firms use to achieve such performance has
been used commonly in total quality management. This approach – which was originally
developed in the Xerox Corporation – provides a powerful learning and development aid to
quality improvement. Regular benchmarking can provide both the stimulus for improvement
(because of the performance gap that has to be closed) and new ideas about things to try in terms
of organizational tools, mechanisms and practices.
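The performance-gap comparison on an indicator such as defective parts per million can be sketched as follows; both figures are illustrative assumptions, not real benchmark data.

```python
# Sketch of a benchmark gap calculation on one quality indicator
# (defective parts per million); the figures are illustrative.
our_dppm = 480
benchmark_dppm = 150      # assumed best-in-class performance

gap = our_dppm - benchmark_dppm
if gap > 0:
    verdict = 'negative gap: close it'          # the benchmark firm is better
elif gap < 0:
    verdict = 'positive gap: capitalize on it'  # we lead on this indicator
else:
    verdict = 'parity'
print(gap, verdict)
```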

Advantages of benchmarking
 Benchmarking opens organizations to new methods, ideas and tools to improve their
effectiveness. It helps crack through resistance to change by demonstrating other methods.
 Allows employees to visualise the improvement which can be a strong motivator for
change
 Helps to identify weak areas and indicates what needs to be done to improve.

The Benchmarking Process


The benchmarking process consists of following phases:
1. Planning. The essential steps are those of any plan development: what, who and how.
 What is to be benchmarked? Every function of an organization has or delivers a
product or output. Benchmarking is appropriate for any output of a process or
function, whether it’s a physical good, an order, a shipment, an invoice, a service or a
report.
 To whom or what will we compare? Business-to-business, direct competitors are
certainly prime candidates to benchmark. But they are not the only targets.
Benchmarking must be conducted against the best companies and business functions
regardless of where they exist.
 How will the data be collected? There’s no one way to conduct benchmarking
investigations. There’s an infinite variety of ways to obtain required data - and most

of the data you’ll need are readily and publicly available. Recognize that
benchmarking is a process not only of deriving quantifiable goals and targets, but
more importantly, it’s the process of investigating and documenting the best industry
practices, which can help you achieve goals and targets.
2. Analysis. The analysis phase must involve a careful understanding of your current process
and practices, as well as those of the organizations being benchmarked. What is desired is
an understanding of internal performance on which to assess strengths and weaknesses. Ask:
 Is this other organization better than we are?
 Why are they better?
 By how much?
 What best practices are being used now or can be anticipated?
 How can their practices be incorporated or adapted for use in our organization?
Answers to these questions will define the dimensions of any performance gap: negative,
positive or parity. The gap provides an objective basis on which to act: to close the gap or to
capitalize on any advantage your organization has.
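The gap classification above can be made operational with a small helper that compares a quality indicator such as defects per million. This is an illustrative sketch only: the function name, the parity tolerance, and the sign convention are assumptions, not part of any benchmarking standard.

```python
def performance_gap(own_dpm, benchmark_dpm, tolerance=0.05):
    """Classify a benchmarking gap on a defects-per-million metric.

    Hypothetical helper: 'negative' means the benchmark partner performs
    better (a gap to close), 'positive' means our process leads, and
    'parity' means the two are within the relative tolerance.
    """
    if own_dpm == 0 and benchmark_dpm == 0:
        return "parity", 0.0
    # Relative gap: positive when our defect rate is higher (worse).
    gap = (own_dpm - benchmark_dpm) / max(own_dpm, benchmark_dpm)
    if abs(gap) <= tolerance:
        return "parity", gap
    return ("negative", gap) if gap > 0 else ("positive", gap)
```

For example, a firm at 500 defective parts per million benchmarking a partner at 100 would see a negative gap to close, while two firms at 100 and 102 would be rated at parity.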
3. Integration. Integration is the process of using benchmark findings to set operational
targets for change. It involves careful planning to incorporate new practices in the operation
and to ensure benchmark findings are incorporated in all formal planning processes.
Steps include:
 Gain operational and management acceptance of benchmark findings. Clearly and
convincingly demonstrate findings as correct and based on substantive data.
 Develop action plans.
 Communicate findings to all organizational levels to obtain support, commitment and
ownership.
4. Action. Convert benchmark findings, and operational principles based on them, to specific
actions to be taken. Put in place a periodic measurement and assessment of achievement.
Use the creative talents of the people who actually perform work tasks to determine how the
findings can be incorporated into the work processes.
Any plan for change also should contain milestones for updating the benchmark findings,
and an ongoing reporting mechanism. Progress toward benchmark findings must be reported
to all employees.
5. Maturity. Maturity will be reached when best industry practices are incorporated in all
business processes, thus ensuring superiority.
Tests for superiority:
 If the now-changed process were to be made available to others, would a knowledgeable
businessperson prefer it?
 Do other organizations benchmark your internal operations?
Maturity also is achieved when benchmarking becomes an ongoing, essential and self-initiated
facet of the management process. Benchmarking becomes institutionalized and is done at all
appropriate levels of the organization, not by specialists.

Types of Benchmarking
Process benchmarking - the initiating firm focuses its observation and investigation of business
processes with a goal of identifying and observing the best practices from one or more
benchmark firms.

Activity analysis will be required where the objective is to benchmark cost and efficiency;
increasingly applied to back-office processes where outsourcing may be a consideration.
Financial benchmarking - performing a financial analysis and comparing the results in an effort
to assess your overall competitiveness.
Performance benchmarking - allows the initiating firm to assess its competitive position by
comparing products and services with those of target firms.
Product benchmarking - the process of designing new products or upgrades to current ones.
This process can sometimes involve reverse engineering which is taking apart competitors
products to find strengths and weaknesses.
Strategic benchmarking - involves observing how others compete. This type is usually not
industry specific, meaning it is often best to look at other industries.
Functional benchmarking - a company will focus its benchmarking on a single function in
order to improve the operation of that particular function. Complex functions such as Human
Resources, Finance and Accounting and Information and Communication Technology are
unlikely to be directly comparable in cost and efficiency terms and may need to be disaggregated
into processes to make valid comparisons.

5.11 TQM and Business Process Re-engineering


Many companies in the West had adopted quality management initiatives in the 1980s hoping to
win back business lost to Japanese competition. When Ford benchmarked Mazda’s accounts
payable department, however, they discovered a business process being run by five people,
compared to Ford’s 500. Even with the difference in scale of the two companies, this still
demonstrated the relative inefficiency of Ford’s accounts payable process. At Xerox, taking a
customer’s perspective of the company identified the need to develop systems rather than stand-
alone products, which highlighted Xerox’s own inefficient office systems.

Both Ford and Xerox realized that incremental improvement alone was not enough. They had
developed high infrastructure costs and bureaucracies that made them relatively unresponsive in
customer service. Focusing on internal customer/supplier interfaces improved quality, but
preserved the current process structure and they could not hope to achieve in a few years what
had taken the Japanese 30 years. To achieve the necessary improvements required a radical
rethink and redesign of processes.

As a result, arguably, at the beginning of the 1990s, business process re-engineering (BPR)
emerged as a threat to quality. BPR is not meant to be the same as, or a replacement for, TQM.
Although both TQM and BPR are strategic in scope, BPR has more fundamental consequences
in terms of immediate–and sometimes radical – outcomes. Of course, BPR and quality
management are complementary under the umbrella of process management.

Hammer and Champy (1993) coined the term business process re-engineering. They then defined
it as the fundamental rethink and radical redesign of a business process, its structure and
associated management systems, to deliver major or step improvements in performance (which
may be in process, customer, or business performance terms). The concept of BPR was
introduced to the world via two articles that described the radical changes to business processes
being performed by a handful of Western businesses. These were also among the first to embark on
quality management initiatives in the 1980s and included Xerox, Ford, AT&T, Baxter
Healthcare, and Hewlett-Packard.

BPR is a means of aligning work processes with customer requirements in a dynamic, flexible
way, in order to achieve long-term corporate objectives. This requires the involvement of
customers and suppliers and thinking about future requirements. Indeed the secrets to
redesigning a process successfully lie in thinking about how to reshape it for the future. BPR
then challenges managers to rethink their traditional methods of doing work and to commit to
customer-focused processes. BPR uses recognized methods for improving business results and
questions the effectiveness of the traditional organizational structure.

BPR breaks down these internal barriers and encourages the organization to work in cross-
functional teams with a shared horizontal view of the business. This requires shifting the work
focus from managing functions to managing processes. Process owners, accountable for the
success of major cross-functional processes, are charged with ensuring that employees
understand how their individual work processes affect customer satisfaction. The
interdependence between one group’s work and the next becomes quickly apparent when
everyone understands who the customer is and the value they add to the entire process of
satisfying that customer.

Principles of BPR
The main principles of BPR have been summarized as follows:

 Rethink business processes in a cross-functional manner which organizes work around


the natural flow of information (or materials or customers). This means organizing
around outcomes of a process rather than the tasks which go into it.
 Strive for dramatic improvements in the performance by radically rethinking and
redesigning the process.
 Have those who use the output from a process perform the process. Check to see whether
all internal customers can be their own supplier rather than depending on another
function in the business to supply them (which takes longer and separates out the stages
in the process).
 Put decision points where the work is performed. Do not separate those who do the work
from those who control and manage the work. Control and action are just one more type
of supplier–customer relationship which can be merged.

The redesign process


Central to BPR is an objective overview of the processes to be redesigned. While information
needs to be obtained from the people directly involved in those processes, the redesign is never
initiated by them. Even at its lowest level, BPR has a top-down approach and most BPR efforts, therefore,
take the form of a major project. There are numerous methodologies proposed, but all share
common elements. Typically, the project takes the form of seven phases.
1. Discover
This involves first identifying a problem or unacceptable outcome, followed by determining the
desired outcome. This usually requires an assessment of the business need and will certainly
include determining the processes involved, including the scope, identifying process customers
and their requirements, and establishing effectiveness measurements.

2. Establish redesign team


Any organization, even a small company, is a complex system. There are customers, suppliers,
employees, functions, processes, resources, partnerships, finances, etc. and many large
organizations are incomprehensible – no one person can easily get a clear picture of all the
separate components. Critical to the success of the redesign is the makeup of a redesign team.
The team should comprise as a minimum the following:
 senior manager as sponsor;
 steering committee of senior managers to oversee overall re-engineering strategy;
 process owner;
 team leader;
 redesign team members.
It is generally recommended that the redesign team comprise between five and ten people;
represent the scope of the process (that is, if the process to be re-engineered is cross-functional,
so is the team); work on only one redesign at a time; and include both insiders and outsiders.
Insiders are people currently working within the process concerned who help gain credibility
with co-workers. Outsiders are people from outside the organization who bring objectivity and
can ask the searching questions necessary for the creative aspects of the redesign. Many
companies use consultants for this purpose.

3. Analyze and document process(es)

Making visible the invisible: documenting the process(es) through mapping and/or flowcharting
is the first crucial step that helps an organization see the way work really is done, not the way
one thinks or believes it is done. Seeing the process as it is provides a baseline from which to
measure, analyze, test and improve. Collecting supporting process data, including benchmarking
information and IT possibilities, allows people to weigh the value each task adds to the total
process, to rank and select areas for the greatest improvement, and to spot unnecessary work and
points of unclear responsibility. Clarifying the root causes of problems, particularly those that
cross department lines, safeguards against quick-fix remedies and assures proper corrective
action, including the establishment of the right control systems.

4. Innovate and rebuild


In this phase the team rethinks and redesigns the new process, using the same process mapping
techniques, in an iterative approach involving all the stakeholders, including senior management.
A powerful method for challenging existing practices and generating breakthrough ideas is
‘assumption busting’. Assumption busting, as it was named by Hammer and Champy, aims to
identify the rules that govern the way we do business and then uncover the real underlying
assumptions behind the adoption of these rules. Business processes are governed by a number of
rules that determine the way the process is designed, how it interfaces with other activities within
the organization, and how it is operated. These rules can exist in the form of explicit policies and
guidelines or, what is more often the case, in the mind of the people who operate the process.
These unwritten rules are the product of assumptions about the process environment that have
been developed over a number of years and often emerge from uncertainties surrounding trading
relationships, capabilities, resources, authorities, etc. Once these underlying assumptions are
uncovered they can be challenged for relevance and, in many cases, can be found to be false.
This opens up new opportunities for process redesign and, as a consequence, the creation of new
value and improved performance.

5. Reorganize and retrain


This phase includes piloting the changes and validating their effectiveness. The new process
structure and operation/system will probably lead to some reorganization, which may be
necessary for reinforcement of the process strategy and to achieve the new levels of
performance. Training and/or retraining for the new technology and roles play a vital part in
successful implementation. People need to be equipped to assess, re-engineer, and support – with
the appropriate technology – the key processes that contribute to customer satisfaction and
corporate objectives. Therefore, BPR efforts can involve substantial investment in training but
they also require considerable top management support and commitment.

6. Measure performance
It is necessary to develop appropriate metrics for measuring the performance of the new
process(es), subprocesses, activities, and tasks. These must be meaningful in terms of the inputs
and outputs of the processes, and in terms of the customers of and suppliers to the
process(es).

7. Continuous redesign and improvement


The project approach to BPR suggests a one-off approach. When the project is over, the team is
disbanded, and business returns to normal, albeit a radically different normal. It is generally
recommended that an organization does not attempt to re-engineer more than one major process
at a time, because of the disruption and stress caused. Therefore, in major re-engineering efforts
of more than one process, as one team is disbanded, another is formed to redesign yet another
process.
Considering that Ford took five years to redesign its accounts payable process, BPR on a large
scale is clearly a long-term commitment.
In a rapidly changing, ever more competitive business environment, it is becoming more likely
that companies will re-engineer one process after another. Once a process has been redesigned,
continuous improvement of the new process by the team of people working in the process should
become the norm.

5.12 The Cost of Quality


According to legendary quality guru Armand Feigenbaum, quality costs are the foundation for
quality systems economics. Quality costs have traditionally served as the basis for evaluating
investments in quality programs. The costs of quality are those incurred to achieve good quality
and to satisfy the customer, as well as costs incurred when quality fails to satisfy the customer.
Thus, quality costs fall into two categories: the cost of achieving good quality, also known as the
cost of quality assurance, and the cost associated with poor-quality products, also referred to as
the cost of not conforming to specifications.

A. The cost of achieving good quality


The costs of a quality management program are prevention costs and appraisal costs.

Prevention costs are those costs of all activities needed to prevent defects, including identifying
the causes of defects, corrective actions, and redesign. Managers must put in place measures to
prevent defects occurring – including company-wide training, planning and implementing
quality procedures. Prevention reflects the quality philosophy of “do it right the first time.” A
study of Japanese versus American manufacturing showed that the added cost of prevention
(which resulted in better quality Japanese goods) was half the cost of rectifying defective goods
made by American manufacturers.

Examples of prevention costs include:


 Quality planning costs: The costs of developing and implementing the quality
management program.
 Product-design costs: The costs of designing products with quality characteristics.
 Process costs: The costs expended to make sure the productive process conforms to
quality specifications.
 Training costs: The costs of developing and putting on quality training programs for
employees and management.
 Information costs: The costs of acquiring and maintaining (typically on computers) data
related to quality, and the development and analysis of reports on quality performance.

Appraisal costs are the costs of measuring, testing, and analyzing materials, parts, products, and
the productive process to ensure that product-quality specifications are being met. Examples of
appraisal costs include:
 Inspection and testing: The costs of testing and inspecting materials, parts, and the
product at various stages and at the end of the process.
 Test equipment costs: The costs of maintaining equipment used in testing the quality
characteristics of products.
 Operator costs: The costs of the time spent by operators to gather data for testing product
quality, to make equipment adjustments to maintain quality, and to stop work to assess
quality.
Appraisal costs tend to be higher in a service organization than in a manufacturing company and,
therefore, are a greater proportion of total quality costs. Quality in services is related primarily to
the interaction between an employee and a customer, which makes the cost of appraising quality
more difficult. Quality appraisal in a manufacturing operation can take place almost exclusively
in-house; appraisal of service quality usually requires customer interviews, surveys,
questionnaires, and the like.

B. The cost of poor quality


The cost of poor quality is the difference between what it actually costs to produce a product or
deliver a service and what it would cost if there were no defects. Most companies find that
defects, reworks and other unnecessary activities related to quality problems significantly inflate
costs; estimates range as high as 20 to 30% of total revenues. This is generally the largest quality
cost category in a company, frequently accounting for 70 to 90% of total quality costs. This is
also where the greatest cost improvement is possible. The cost of poor quality can be categorized
as internal failure costs or external failure costs.

Internal failure costs are incurred when poor-quality products are discovered before they are
delivered to the customer. These are costs of deficiencies discovered before delivery which are
associated with the failure (nonconformities) to meet explicit requirements or implicit needs of
external or internal customers. Also included are avoidable process losses and inefficiencies that
occur even when requirements and needs are met. These are costs that would disappear if no
deficiencies existed.
Examples of internal failure costs include:
 Scrap costs: The costs of poor-quality products that must be discarded, including labor,
material, and indirect costs.
 Rework costs: The costs of fixing defective products to conform to quality specifications.
 Process failure costs: The costs of determining why the production process is producing
poor quality products.
 Process downtime costs: The costs of shutting down the productive process to fix the
problem.
 Price-downgrading costs: The costs of discounting poor-quality products—that is, selling
products as “seconds”.
External failure costs are incurred after the customer has received a poor-quality product and
are primarily related to customer service. They are costs of defects once they have reached the
consumer, including replacements, warranty and repair costs, and the loss of customer goodwill.
Examples of external failure costs include:
 Customer complaint costs: The costs of investigating and satisfactorily responding to a
customer complaint resulting from a poor-quality product.
 Product return costs: The costs of handling and replacing poor-quality products returned
by the customer.
 Warranty claims costs: The costs of complying with product warranties.
 Product liability costs: The litigation costs resulting from product liability and customer
injury.
 Lost sales costs: The costs incurred because customers are dissatisfied with poor-quality
products and do not make additional purchases.
Internal failure costs tend to be low for a service, whereas external failure costs can be quite
high.
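The relative sizes of the four cost categories can be made concrete with a small tally. All figures below are purely hypothetical, invented for illustration rather than drawn from any real company; the point is the arithmetic, not the numbers.

```python
# Hypothetical quality-cost figures for one year (all values illustrative).
cost_of_quality = {
    "prevention": {"quality planning": 40_000, "training": 25_000},
    "appraisal": {"inspection and testing": 60_000, "test equipment": 15_000},
    "internal failure": {"scrap": 120_000, "rework": 80_000},
    "external failure": {"warranty claims": 150_000, "product returns": 50_000},
}

# Total per category, overall cost of quality, and the failure-cost share.
totals = {cat: sum(items.values()) for cat, items in cost_of_quality.items()}
total_coq = sum(totals.values())
failure = totals["internal failure"] + totals["external failure"]

revenue = 2_000_000
print(f"Total cost of quality: {total_coq:,} ({total_coq / revenue:.1%} of revenue)")
print(f"Failure costs as share of total quality cost: {failure / total_coq:.1%}")
```

With these invented figures the cost of quality comes to 27% of revenue and failure costs make up about 74% of the total, which happens to fall inside the 20 to 30% and 70 to 90% ranges mentioned above.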

Activity 5
Are good quality products costly? Are expensive products superior quality products? Argue.
Give example from your local products.

Chapter Summary
Quality is defined in a number of ways, the commonly used being given by ISO as the totality of
features and characteristics of a product or service that bear on its ability to satisfy stated or
implied needs. The basic principles for managing quality have been evolved through time and
shaped by a wave of factors and thinkers. The seven most notable individuals who shaped
today’s concept of quality are: Walter A. Shewhart, W. Edwards Deming, Joseph M. Juran,
Armand V. Feigenbaum, Philip B. Crosby, Kaoru Ishikawa, and Genichi Taguchi. Total quality
management (TQM) is different from the old concept of quality because its focus is on serving
customers, identifying the causes of quality problems, and building quality into the production
process. The features that combine to create the TQM philosophy are customer focus,
continuous improvement, employee empowerment, use of quality tools, product design, process
management, and managing supplier quality. TQM as a management philosophy can be
contrasted against benchmarking and BPR, as the latter are more recent versions of the former. There
are two general and four specific categories of quality costs. The first two are prevention and
appraisal costs, which are incurred by a company in attempting to improve quality. The last two
costs are internal and external failure costs, which are the costs of quality failures that the
company wishes to prevent.

Review Questions
Multiple Choice Questions
1. Which of the following statements best describes the relationship between quality
management and product strategy?
a. Product strategy is set by top management; quality management is an independent
activity.
b. Quality management is important to the low-cost product strategy, but not to the
response or differentiation strategies.
c. High quality is important to all three strategies, but it is not a critical success factor.
d. Managing quality helps build successful product strategies.
e. Companies with the highest measures of quality were no more productive than other
firms.
2. “Quality is defined by the customer” is
a. an unrealistic definition of quality
b. a user-based definition of quality
c. a manufacturing-based definition of quality
d. a product-based definition of quality
e. all of the above.
3. "Making it right the first time" is
a. an unrealistic definition of quality
b. a user-based definition of quality
c. a manufacturing-based definition of quality
d. a product-based definition of quality
e. none of the above.
4. A recent consumer survey conducted for a car dealership indicates that, when buying a car,
customers are primarily concerned with the salesperson's ability to explain the car's features, the
salesperson's friendliness, and the dealer's honesty. The dealership should be especially
concerned with which determinants of service quality?
a. communication, courtesy, and credibility
b. competence, courtesy, and security
c. competence, responsiveness, and reliability
d. communication, responsiveness, and reliability
e. understanding/knowing customer, responsiveness, and reliability
5. The goal of inspection is to
a. detect a bad process immediately
b. add value to a product or service
c. correct deficiencies in products
d. correct system deficiencies
e. all of the above

Discussion questions
1. ‘Quality is free!’ proclaimed the title of Philip Crosby’s book in the 1970s. Argue. What
is inspection? What is the basic difference between inspection and quality control?
2. Describe the various kinds of inspections.
3. Elaborate at least five tools of quality control.
4. Identify the four costs of quality. Which one is hardest to evaluate? Explain.
5. Define quality circles. Explain the objectives of quality circles. What is the difference
between quality circles and quality improvement teams?

CHAPTER VI
SIX SIGMA

Even when an operation is designed and its activities planned and controlled, the operations
manager’s task is not finished. All operations, no matter how well managed, are capable of
improvement. In fact, in recent years the emphasis has shifted markedly towards making
improvement one of the main responsibilities of operations managers. In this chapter we treat an
improvement model that advocates perfection of the process. It is known as Six Sigma.

Learning Objectives
After reading this chapter, you should be able to
 Distinguish six sigma from other management philosophies
 Discuss the evolution of six sigma as a business improvement tool
 Describe the process to implement six sigma as a business improvement methodology

6.1 Introduction
Six Sigma has been an industry term ever since Motorola introduced the concept in 1986. By
general definition Six Sigma is the measure of quality that strives for near perfection. It is a
disciplined, data-driven methodology focused on eliminating defects. The term ‘six sigma’ is
largely symbolic, referring to a methodology and a culture for continuous quality improvement,
as well as referring to the statistical goal, six sigma. Put shortly, the concept of Six Sigma refers
to a quality program designed to reduce defects to help lower costs, save time, and improve
customer satisfaction. It’s based on the statistical standard that establishes a goal of no more than
3.4 defects per million units or procedures.

Motorola began in the late 1920s as a small manufacturer of car radios (hence the name
Motorola). It has grown to a $30 billion corporation with more than 68,000 employees at 320
facilities in 73 countries around the world, manufacturing such products as semiconductors,
integrated circuits, paging systems, cellular telephones, computers and wireless communication
systems. Motorola was an engineering-oriented company that focused on product development to
create new markets. In the mid-1970s it changed its focus from products to customers, with an
objective of total customer satisfaction. Motorola has since been recognized as having one of the
best quality management systems in the world. In 1988 it was among the first group of winners
of the prestigious Malcolm Baldrige National Quality Award and in 2002 it was one of the very
few companies to win the Baldrige Award a second time. It may surprise those who have come to
know Motorola for its cool cell phones, but the company's more lasting contribution to the world
is the quality-improvement process called Six Sigma.
Concomitant to setting its quality objective as ‘total customer satisfaction’, Motorola started to
explore what the slogan would mean to its operations processes. They decided that true customer
satisfaction would only be achieved when its products were delivered as promised, with no
defects, with no early-life failures and when the product did not fail excessively in service. To
achieve this, Motorola initially focused on removing manufacturing defects. However, it soon
came to realize that many problems were caused by latent defects, hidden within the design of its
products. These may not show initially but eventually could cause failure in the field. The only
way to eliminate these defects was to make sure that design specifications were tight (i.e. narrow
tolerances) and its processes very capable. Put simply, initially Motorola introduced the Six
Sigma system to reduce defects in manufactured products to only a few parts per million. Later,
it extended the system or the concept to business processes and service operations.

As other companies have taken ideas from Motorola and other leading companies and added
their own variations, six sigma has come to be “a programme aimed at the near-elimination of
defects from every product, process and transaction”. Six sigma has thus become a disciplined,
quantitative approach for improving operations in all types of industries and business functions.
It is labeled as one of the recent management revolutions which is widely applied. It is currently
one of the most popular quality management systems in the world.
Six Sigma became well known after Jack Welch made it a central focus of his business strategy
at General Electric in 1995, and today it is used in different sectors of industry.

The term Six Sigma originated from terminology associated with manufacturing, specifically
terms associated with statistical modeling of manufacturing processes. The maturity of a
manufacturing process can be described by a sigma rating indicating its yield or the percentage
of defect-free products it creates. A six sigma process is one in which 99.99966% of the products
manufactured are statistically expected to be free of defects (3.4 defects per million). Say, for
example, a company is manufacturing steel rods of 1m length. Because the process is not perfect,
the lengths of some steel rods will be 0.998m, some 0.999m length and so on. As far as the
customer is concerned, this is OK as long as the length of each steel rod is between 0.997m and
1.003m. If the process is a ‘Six Sigma’ process, it will produce only 3.4 bad rods – rods shorter
than 0.997m and longer than 1.003m – for every million rods made.
The table given below maps sigma levels to the corresponding % accuracy.

Table 6.1: Sigma and Accuracy of a Process

Sigma Level    Defects per Million Opportunities (DPMO)    % Accuracy
One Sigma      691,500                                     30.85%
Two Sigma      308,500                                     69.15%
Three Sigma    66,810                                      93.32%
Four Sigma     6,210                                       99.38%
Five Sigma     233                                         99.977%
Six Sigma      3.4                                         99.9997%
Seven Sigma    0.020                                       99.999998%
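The figures in Table 6.1 follow the common Six Sigma convention of allowing a 1.5-sigma long-term shift in the process mean, so a "Six Sigma" process corresponds to the one-sided normal tail beyond 4.5 standard deviations. The short sketch below, using only the Python standard library, reproduces the table from that convention; small differences from the printed figures are due to rounding.

```python
from statistics import NormalDist


def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities at a given sigma level.

    Applies the conventional 1.5-sigma shift: the defect rate is the
    one-sided standard-normal tail beyond (sigma_level - shift).
    """
    tail = 1.0 - NormalDist().cdf(sigma_level - shift)
    return tail * 1_000_000


# Reproduce Table 6.1 for one through six sigma.
for level in range(1, 7):
    print(f"{level} sigma: {dpmo(level):,.1f} DPMO, "
          f"{100 * (1 - dpmo(level) / 1_000_000):.4f}% accurate")
```

Running this yields roughly 66,807 DPMO at three sigma and 3.4 DPMO at six sigma, matching the table's 93.32% and 99.9997% accuracy figures.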

Now the definition of Six Sigma has widened well beyond this rather narrow statistical
perspective. General Electric (GE), probably the best known of the early adopters of Six Sigma
as mentioned previously, defined it as a disciplined methodology of defining, measuring,
analysing, improving, and controlling the quality in every one of the company’s products,
processes, and transactions – with the ultimate goal of virtually eliminating all defects.

So, now Six Sigma is being seen as a broad improvement concept rather than a simple
examination of process variation, even though this is still an important part of process control,
learning and improvement. For example, Six Sigma has been defined as a comprehensive and
flexible system for achieving, sustaining and maximizing business success. Six Sigma is
uniquely driven by close understanding of customer needs, disciplined use of facts, data, and
statistical analysis, and diligent attention to managing, improving, and reinventing business
processes. Six Sigma uses facts and data from measured processes inside an organization, not
comparisons with some external standard. In other words, it precisely measures what is actually
happening and determines how it can be improved.
6.2 The Variants of Six Sigma
The era from 1986 to 1990 is often loosely referred to as the first generation of Six Sigma, or SSG1
for short. Then, in the 1990s, General Electric ushered in the second generation of Six Sigma, or
SSG2 as it is now known. The focus of Six Sigma shifted from product quality to business
quality; in this sense, Six Sigma became a business-centric system of management. The results
that world-class companies such as General Electric, Johnson & Johnson, Honeywell, Motorola,
and many others have accomplished speak for themselves. Six Sigma has become a synonym for
improving quality, reducing cost, improving customer loyalty, and achieving bottom-line results.
The original goal of Six Sigma was to focus on manufacturing processes; however, marketing,
purchasing, billing, and invoicing functions were also involved in SSG2.
There is a new brand of Six Sigma emerging now that promises to deliver even more powerful
results than before. Dubbed Third Generation Six Sigma, or just Gen III, it can show companies
how to deliver products or services that, in the eyes of customers, have real value. Korean steel
maker Posco, the third-largest steel maker in the world, is implementing Gen III techniques
corporation-wide. Moreover, the Korean Standards Association has adopted Gen III techniques
and is trying to propagate these methods throughout that country. Electronics maker Samsung,
also in Korea, has begun a Gen III programme. And the government of India has bought into the
idea and has begun promoting it in both private and government-owned industries there.
The word "value", in the context of Gen III, is best understood by analogy to previous Six Sigma
efforts. As practiced in the 1980s and '90s, Six Sigma focused first on reducing defects. Later,
the emphasis was on minimizing costs. Six Sigma efforts at such companies as Motorola, GE,
and DuPont were successful at reaching both goals. One difficulty with both first- and second-
generation efforts is that they did not address some of the larger issues that make for commercial
success. Gen III introduces the concept of the White Belt Six Sigma practitioner: an individual
who facilitates the use of Six Sigma in work cells or similar settings. Higher-level White Belts
typically ferret out small benefits by applying Six Sigma to problems that would not justify the
time and attention of a Six Sigma Black Belt.
Recently, Honeywell has developed a new generation of Six Sigma. Six Sigma Plus is
Honeywell's principal engine for driving growth and productivity across all its businesses,
including aerospace, performance polymers, chemicals, automation and control, transportation,
and power systems, among others. In addition to manufacturing, Honeywell applies Six Sigma
Plus to all of its administrative functions.

6.3 Key Elements of Six Sigma


Basically, Six Sigma is a project-oriented methodology (or system) that provides businesses with
the tools and expertise to improve their processes. This increase in performance through a
decrease in process variation leads to defect reduction (to near zero), an increase in product
and service quality, and increased profits. In its simplest form, Six Sigma is based on Deming's
PDCA cycle and Joseph Juran's assertion that 'all quality improvement occurs on a project-by-
project basis', with elements of kaizen-type employee involvement.

Although the scope of Six Sigma is disputed, the elements frequently associated with Six
Sigma include the following:
 Customer-driven objectives – Six Sigma is sometimes defined as the process of
comparing process outputs against customer requirements. It uses a number of measures
to assess the performance of operations processes. In particular it expresses performance
in terms of defects per million opportunities (DPMO). This is exactly what it says, the
number of defects which the process will produce if there were one million opportunities
to do so. This is then related to the ‘Sigma measurement’ of a process and is the number
of standard deviations of the process variability that will fit within the customer
specification limits. Customers are at the center of Six Sigma companies: customers define
quality. Customers' expectations of performance, reliability, competitive prices, on-time
delivery, service, and clear and correct transaction processing must all be recognized. Being
good is not enough for Six Sigma companies; delighting customers is a necessity.
 Outside-in thinking: Six Sigma companies look at their business from the customer's
perspective, not their own. In other words, Six Sigma companies must look at their
processes from the outside in. By understanding the transaction lifecycle from the
standpoint of the customer's needs and processes, they discover what the customers are
seeing and feeling. With this knowledge, they can identify areas where they can add
significant value or improvement from the customers' perspective.
 Process capability and control – Not surprisingly, given its origins, process capability
and control is important within the Six Sigma approach. For some processes, shifts in the
process average are so common that such shifts should be recognized in setting
acceptable values of Cp. The Motorola Company’s “six-sigma” approach recognizes the
likelihood of these shifts in the process average and makes use of a variety of quality
engineering techniques to change the product, the process, or both in order to achieve a
Cp of at least 2.0.
 Process design – Latterly Six Sigma proponents also include process design into the
collection of elements that define the Six Sigma approach.
 Employees: People create results. Involving all employees is essential to Six Sigma's
quality approach. Quality is the responsibility of every employee, and every employee
must be involved, motivated and knowledgeable if the initiative is to succeed. All
employees must be trained in the strategy, statistical tools and techniques of Six Sigma
quality, supported by structured training and organizational development. The Six Sigma
approach holds that improvement initiatives can only be successful if significant
resources and training are devoted to their management. It recommends a specially
trained cadre of practitioners and internal consultants named after 'martial arts' grades.
 Use of evidence – Although Six Sigma is not the first of the new approaches to operations
to use statistical methods it has done a lot to emphasize the use of quantitative evidence.
In fact much of the considerable training required by Six Sigma consultants is devoted to
mastering quantitative analytical techniques.
 Structured improvement cycle – The structured improvement cycle used in Six Sigma is
the DMAIC cycle.
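The process capability element above can be made concrete with the standard capability indices, where Cp = (USL - LSL) / 6 sigma and Cpk adjusts for an off-centre mean. A minimal sketch (the specification numbers below are invented purely for illustration, not taken from the module):

```python
def cp(usl, lsl, sigma):
    """Process capability: specification width over six process standard
    deviations. Cp >= 2.0 is the Six Sigma target mentioned above."""
    return (usl - lsl) / (6.0 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Capability adjusted for an off-centre process mean, capturing the
    shifts in the process average discussed above."""
    return min(usl - mean, mean - lsl) / (3.0 * sigma)

# Invented example: specification limits 94-106, process sigma = 1.0
print(cp(106, 94, 1.0))          # 2.0, i.e. a 'Six Sigma capable' process
print(cpk(106, 94, 101.5, 1.0))  # 1.5 once the mean drifts by 1.5 sigma
```

Note how a process with Cp = 2.0 still delivers only Cpk = 1.5 once the mean shifts by 1.5 standard deviations, which is the shift convention behind Table 6.1.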

6.4 The Six Sigma Process


The basic steps of the six sigma process are quite similar to the quality improvement processes
and quality control processes. The implementation stresses leadership at the highest levels of the
company. For many companies this has been the CEO, such as Jack Welch at General Electric,
and Bob Galvin at Motorola. The implementation is then cascaded throughout every level of
management, and clear responsibilities are understood.

As implemented by Motorola, Six Sigma follows four basic steps: align, mobilize, accelerate,
and govern.
1. In the first step, “align,” senior executives create a balanced scorecard of strategic goals,
metrics and initiatives to identify the areas of improvement that will have the greatest impact
on the company’s bottom line. Process owners (i.e, the senior executives who supervise the
processes) “champion” the creation of high-impact improvement projects that will achieve
the strategic goals.
2. In the second step, "mobilize," project teams are formed and empowered to act. The process
owners select "black belts" to lead well-defined improvement projects. The teams follow a
step-by-step, problem-solving approach. The Six Sigma breakthrough methodologies referred
to as DMAIC (define, measure, analyse, improve, and control) and/or DMADV (define,
measure, analyse, design, verify) may be applied. These are often labelled as the five basic
steps in Six Sigma projects. In this problem-solving phase (particularly in DMAIC), projects
are first defined from the perspective of customers or of the process (Define). Second, based
on the defined projects, the current level of product quality is measured as a sigma level
(Measure). Third, the causes of the problems are detected through analysis so as to improve
the sigma level (Analyse). Fourth, efforts are made to improve the situation by working on
the causes of the problems (Improve). Finally, the optimal condition generated by the
preceding phases is controlled, maintained and monitored (Control).

3. In the third step, “accelerate,” improvement teams made up of black belt and green belt team
members with appropriate expertise use an action-learning approach to build their capability
and execute the project. This approach combines training and education with project work
and coaching. Ongoing reviews with project champions ensure that projects progress
according to an aggressive timeline.

4. In the final step, “govern,” executive process owners monitor and review the status of
improvement projects to make sure the system is functioning as expected. Leaders share the
knowledge gained from the improvement projects with other parts of the organization to
maximize benefit.
In the next few sections we describe some components of the Six Sigma process in detail.

Improvement Projects

The first step in the Six Sigma process is the identification of improvement projects. These
projects are selected according to business objectives and the goals of the company. As such,
they normally have a significant financial impact. These projects are not one-time, unique
activities as projects are typically thought of, but team-based activities directed at the continuing
improvement of a process.

Once projects are identified, they are assigned a champion from upper management who is
responsible for project success, providing resources and overcoming organizational barriers.
Champions are typically paid a bonus tied to the successful achievement of Six Sigma goals.

6.5 The Six Sigma Team


According to the discipline of Six Sigma, project teams must be formed and empowered to act.
Analogous to the martial arts, Master Black Belt, Black Belt and Green Belt grades (denoting the
level of a team member's expertise) are used to identify members of a team. In addition, a Six
Sigma implementation involves Executive Leadership and Project Champions.

Executive Leadership
Six Sigma involves changing major business value streams that cut across organizational
barriers. It is the means by which the organization's strategic goals are to be achieved. This effort
cannot be led by anyone other than the CEO, who is responsible for the performance of the
organization as a whole. Six Sigma must be implemented from the top down.
Project Champions

Project Champions take their company's vision, missions, goals, and metrics and translate them
into individual unit tasks. Additionally, Champions must remove any roadblocks to the
programme's success. Project Champions are involved in selecting projects and identifying
Black and Green Belt candidates. They set improvement targets, provide resources, and review
the projects on a regular basis so that they can transfer the knowledge gained throughout the
organisation.

Master Black Belt


This is the highest level of technical and organisational proficiency. Master Black Belts are
experts in the use of Six Sigma tools and techniques as well as how such techniques can be used
and implemented. Primarily Master Black Belts are seen as teachers who can not only guide
improvement projects, but also coach and mentor Black Belts and Green Belts who are closer to
the day-to-day improvement activity. Masters must be able to assist black belts in applying the
methods correctly in unusual situations. Whenever possible, statistical training should be
conducted only by master black belts. They are expected to have the quantitative analytical skills
to help with Six Sigma techniques and also the organizational and interpersonal skills to teach
and mentor. Given their responsibilities, it is expected that Master Black Belts are employed full-
time on their improvement activities. They are usually certified after participating in about 20
successful projects, half while a Black Belt and half as a Master Black Belt.

Black Belt (Technical Leader)


The project leader who implements the DMAIC steps is called a Black Belt. Black Belts hold
fulltime positions and are extensively trained in the use of statistics and quality-control tools, as
well as project and team management. A Black Belt assignment normally lasts two years during
which the Black Belt will lead 8 to 12 projects from different areas in the company, each lasting
about one quarter. A Black Belt is certified after two successful projects. Black Belts are
typically highly focused change agents who are on the fast track to company advancement.
Black belts are technically oriented individuals held in high regard by their peers. They are the
doers. They should be actively involved in the organizational change and development process.
Candidates may come from a wide range of disciplines and need not be formally trained
statisticians or engineers. Six Sigma technical leaders work to extract actionable knowledge
from an organization's information warehouse. Good computer skills are vital. Probably more
important than their technical skills, however, are their people management skills: implementing
change successfully demands the ability to involve people and persuade them of the necessity
for change.
Black Belts can take a direct hand in organizing improvement teams. Like Master Black Belts,
Black Belts are expected to develop their quantitative analytical skills and also act as coaches for
Green Belt. Black Belts are dedicated full-time to improvement, and although opinions vary on
how many Black Belts should be employed in an operation, some organizations recommend one
Black Belt for every hundred employees.

Green Belts
Green Belts provide internal team support to Black Belts. While they are not trained to the same
depth as Black Belts, they are able to assist in data collection, computer data input, analysis of
data using software, and preparation of reports for management. Typically, a Green Belt will be
a respected worker who can manage the team in the absence of the Black Belt. Green Belts may
migrate to this position because of their skills in using basic quality analysis tools and methods
and their ability to facilitate team activities. Many become Black Belts over time as they build a
personal base of experience that boosts them into a more technical role.
Green Belts work within improvement teams, possibly as team leaders, and receive significant
amounts of training, although less than Black Belts. Green Belt is not a full-time position: Green
Belts retain their normal day-to-day process responsibilities but are expected to spend at least
twenty per cent of their time on improvement projects.

At General Electric employees are not considered for promotion to any management position
without Black Belt or Green Belt training. It is part of the Six Sigma overall strategy that as
Black Belts and Green Belts move into management positions they will continue to promote and
advance Six Sigma in the company. A generally held perception is that companies that have
successfully implemented Six Sigma have one Black Belt for every 100 employees and one
Master Black Belt for every 100 Black Belts. This will vary according to the size of the company
and the number of projects regularly undertaken. At GE, black belt projects typically save
$250,000 or more and green belt projects frequently yield savings in the $50,000 to $75,000
range.

In Six Sigma all employees receive training in the Six Sigma breakthrough strategy, statistical
tools, and quality-improvement techniques. Employees are trained to participate on Six Sigma
project teams. Because quality is considered to be the responsibility of every employee, every
employee must be involved in, motivated by, and knowledgeable about Six Sigma.

6.6 Six Sigma Methodologies


At the heart of Six Sigma is the breakthrough strategy, a set of processes applied to improvement
projects. The Six Sigma quality improvement strategy is supported by two sub-methodologies
called DMAIC (define, measure, analyse, improve and control) and DMADV (define, measure,
analyse, design, verify). DMAIC, shown in Figure 6.1, is an improvement system for existing
processes which fall below specification and need to be improved incrementally. DMADV is an
improvement system designed to develop new processes and/or products at Six Sigma quality
levels. The five steps of both sub-methodologies are very similar to Deming's four-stage PDCA
cycle, although more specific and detailed. In both, the objective is to continually find ways to
improve and refine processes, reduce defects and increase savings.

The breakthrough strategy steps, namely define, measure, analyse, improve and control
(DMAIC), are shown in Figure 6.1 below.

Perhaps one of the key contributions to its success has been the highly disciplined approach
taken to implementation and ongoing measurement. Taking a framework from the martial arts,
Six Sigma involves a rigorous training and development process in which capability is measured
in terms of grades, from beginner through to Black Belt.

Define
1. Identify a project that is measurable
2. Develop the team charter
3. Define the process map

Measure
1. Define performance standards
2. Measure the current level of quality into a Sigma level

Analyze
1. Establish process capability
2. Define performance objectives
3. Identify variation sources

Improve
1. Screen potential causes
2. Discover variable relationships
3. Establish operating tolerances

Control
1. Ensure that the result is sustained
2. Share the lessons learnt

Figure 6.1. The DMAIC Cycle

Step 1: Define

This step involves identifying projects that are measurable. Projects are defined in terms of the
demands of the customer and of the process. It is the initial stage of the project and the most
significant step.
The problem is defined, including who the customers are and what they want, in order to
determine what needs to improve. It is important to know which quality attributes are most
important to the customer, what the defects are, and what the improved process can deliver.

Step 2: Measure
At the second stage of the project, the current level of quality would be measured into Sigma
level. This step precisely pinpoints the area causing problems. It forms the basis of the problem-
solving. Project defects must be precisely defined and all possible and potential causes for such
problems must be identified in this step. Subsequently such problems are analysed statistically.
Shortly put, in this step the process would be measured, data are collected, and compared to the
desired state.

Step 3: Analyse
In this step, when and where the defects occur is investigated. The data are analysed in order to
determine the cause of the problem. Projects are statistically analysed and the problems are
documented. The major elements of this step are as follows:
 Projects must be statistically and precisely defined in terms of sigma.
 The gap between the target and the actual state is clearly defined in statistical terms such
as the mean and moving average.
 A comprehensive list of the potential causes of the problems is created.
 Statistical analysis is carried out to narrow the potential causes down to a vital few.
 Finally, based on the above steps, the financial implication of the project is calculated
and further review is carried out if necessary.
Different tools can be applied in the analysis step of Six Sigma projects. Among the popular
tools are process mapping, failure mode and effect analysis (FMEA), statistical tests, design of
experiments, control charts and quality function deployment (QFD).

Step 4: Improve
Improvements addressing the potential causes identified in the 'Analyse' step are carried out in
this step. Solutions to all the potential problems must be found; the choices concern how to
change, fix and modify the process. A trial run must be carried out for a planned period of time
to ensure that the revisions and improvements implemented in the process achieve the targeted
values. The steps are repeated if necessary.
The team brainstorms to develop solutions to problems; changes are made to the process, and
the results are measured to see if the problems have been eliminated. If not, more changes may
be necessary.

Step 5: Control
Proper control and maintenance of the improved state are established in this step. It is also the
step in which the new method is regularised. The results and accomplishments of all the
improvement activities are documented, and there is continuous monitoring of whether the
improved process is well maintained. If the process is operating at the desired level of
performance, it is monitored to make sure the improvement is sustained and no unexpected or
undesirable changes occur.
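The monitoring described in the Control step is typically done with control charts. The sketch below is a minimal illustration, assuming individual measurements and simple three-sigma limits; real charts usually estimate variability from subgroup or moving ranges, and the data here are invented:

```python
from statistics import mean, stdev

def control_limits(baseline):
    """Centre line and three-sigma control limits from baseline data."""
    centre = mean(baseline)
    s = stdev(baseline)
    return centre - 3 * s, centre, centre + 3 * s

def out_of_control(observations, lcl, ucl):
    """Return the observations that breach the control limits."""
    return [x for x in observations if x < lcl or x > ucl]

# Invented baseline data collected from the 'improved' process
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]
lcl, centre, ucl = control_limits(baseline)
print(out_of_control([10.0, 10.1, 11.5], lcl, ucl))  # 11.5 breaches the UCL
```

Any flagged observation is the signal, in the language of the Control step, that an unexpected and undesirable change has occurred and the process needs attention.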

Design for Six Sigma


An important element of the Six Sigma system is Design for Six Sigma (DFSS), a systematic
methodology for designing products and processes that meet customer expectations and can be
produced at Six Sigma quality levels. It follows the same basic approach as the breakthrough
strategy with Master Black Belts, Black Belts, and Green Belts and makes extensive use of
statistical tools and design techniques, training, and measurement. However, it employs this
strategy earlier, up front in the design phase and developmental stages. This is a more effective
and less expensive way to achieve the Six Sigma goal than fixing problems after the product or
process is already developed and in place.

6.7 Measuring Performance


The Six Sigma approach uses a number of related measures to assess the performance of
operations processes.

 A defect is a failure to meet customer-required performance (defining performance
measures from a customer’s perspective is an important part of the Six Sigma approach).
 A defective unit or item is any unit of output that contains a defect (i.e. only units of
output with no defects are non-defective; a defective unit will have one or more defects).
 A defect opportunity is the number of different ways a unit of output can fail to meet
customer requirements (simple products or services will have few defect opportunities,
but very complex products or services may have hundreds of different ways of being
defective).
 Proportion defective is the percentage or fraction of units that have one or more defects.
 Process yield is the percentage or fraction of total units produced by a process that are
defect-free (i.e. 1-proportion defective).
 Defect per unit (DPU) is the average number of defects on a unit of output (the number
of defects divided by the number of items produced).

 Defects per opportunity is the number of defects divided by the total number of defect
opportunities (i.e. the number of defects divided by (the number of items produced × the
number of opportunities per item)).
 Defects per million opportunities (DPMO) is exactly what it says, the number of defects
which the process will produce if there were one million opportunities to do so.
 The Sigma measurement is derived from the DPMO and is the number of standard
deviations of the process variability that will fit within the customer specification limits.
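The measures listed above chain together naturally. A short sketch using invented counts (1,000 items, 10 defect opportunities each, 50 defects found on 40 defective items; the function name is ours):

```python
def six_sigma_measures(defects, units, opps_per_unit, defective_units):
    """Compute the related Six Sigma performance measures defined above."""
    dpu = defects / units                        # defects per unit
    dpo = defects / (units * opps_per_unit)      # defects per opportunity
    dpmo = dpo * 1_000_000                       # defects per million opportunities
    proportion_defective = defective_units / units
    process_yield = 1.0 - proportion_defective   # fraction of defect-free units
    return dpu, dpo, dpmo, process_yield

dpu, dpo, dpmo, process_yield = six_sigma_measures(50, 1000, 10, 40)
print(dpu, dpo, dpmo, process_yield)  # 0.05 0.005 5000.0 0.96
```

Note that DPU counts defects against units while DPMO counts them against opportunities, which is why a complex product with many opportunities can report a low DPMO despite a non-trivial DPU.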

6.8 Six Sigma versus TQM


Six Sigma has roots back to the teachings of Dr. Joseph Juran and Dr. W. Edwards Deming. Six
Sigma is a high performance, data driven method for improving quality by removing defects and
their causes in business process activities. The higher the number of sigmas, the more consistent
is the process output or the smaller the variation. It is particularly powerful when measuring the
performance of a process with a high volume of outputs. Six Sigma links customer requirements
and process improvements with financial results while simultaneously providing the desired
speed, accuracy and agility in today’s e-age. Some authors assert that Six Sigma is essentially a
methodology within, not an alternative to, TQM. Because quality improvement is a prime
ingredient of TQM, many firms have found that adding a Six Sigma programme to their current
business gives them all, or almost all, of the elements of a TQM programme. Thus it can be
concluded that:

Six Sigma uses a project based structured problem solving method linking customer
requirements with processes and tangible results. It selects the appropriate tools from a wide
variety of statistical tools. One of the most common methodologies used is Define, Measure,
Analyze, Improve, and Control (DMAIC). Yang (2004) developed an integrated model of TQM
and GE-Six Sigma based on 12 dimensions: development, principles, features, operation, focus,
practices, techniques, leadership, rewards, training, change and culture. The author concluded
that although management principles of TQM and GE-Six Sigma are somewhat different, there is
congruence among their quality principles, techniques, and culture. Hence the integration of
TQM and GE-Six Sigma is not difficult.
The Six Sigma initiatives may sound quite familiar to many leading companies with successful
total quality management systems, but often sound quite new to those companies that have
merely dabbled in quality management in the past.

The main focus of six sigma, like many other quality initiatives, is on cost and waste reduction,
yield improvements, capacity improvements, and cycle-time reductions. Heavy emphasis is put
on satisfying customer needs. Organizations try to estimate the financial impact of each
operation. These companies also establish clear performance metrics for each improvement in
costs, quality, yields, and capacity improvements. Financial figures are absolutely required. The
projects undertaken are usually substantial with improvements commonly in the $50,000 to
$100,000 range.

Another difference between the Six Sigma initiatives and many total quality management
programs is the assignment of full-time staff. The team leaders and facilitators (often called
black belts and master black belts) are chosen carefully and work 50 to 100 percent of their time
on the improvement projects. The training for these people is also extensive, usually 4 or 5
weeks of intensive, highly quantitative training. Some companies have actually implemented
training programs lasting up to 6 months for their new black belts.

6.9 Six Sigma Success Factors


Research into what makes a Six Sigma implementation successful has revealed ten critical
success factors. In order of importance, they are:

 top management leadership and commitment
 a well-implemented customer management system
 a continuous education and training system
 a well-organized information and analysis system
 a well-implemented process management system
 a well-developed strategic planning system
 a well-developed supplier management system
 equipping everyone in the organization, from top management to employees, with a
working knowledge of the quality tools
 a well-developed human resource management system
 a well-developed competitive benchmarking system.

Activity 6
Do you think that Six-Sigma can be implemented in Ethiopia? How? Justify.

Chapter Summary
Six Sigma has evolved over the last two decades and so has its definition. Six Sigma has literal,
conceptual, and practical definitions. Six Sigma is often interpreted at three different levels: a) as
a metric; b) as a methodology and c) as a management system. Essentially, Six Sigma is all three
at the same time. Features that set Six Sigma apart from previous quality improvement initiatives
include: a clear focus on achieving measurable and quantifiable financial returns from any
project; an increased emphasis on strong and passionate management leadership and support; a
special infrastructure of "Champions," "Master Black Belts," "Black Belts," etc. to lead and
implement the Six Sigma approach; and a clear commitment to making decisions on the basis of
verifiable data, rather than assumptions and guesswork.
Review Questions
Multiple Choice Questions
1. If 1 million passengers pass through the Bole International Airport with checked
baggage each month, a successful Six Sigma program for baggage handling would result
in how many passengers with misplaced luggage?
a. 3.4
b. 6.0
c. 34
d. 2700
e. 6 times the monthly standard deviation of passengers
2. Suppose that a firm has historically been achieving “three-sigma” quality. If the firm later
changes its quality management practices such that it begins to achieve “six-sigma” quality,
which of the following phenomena will result?
a. The average number of defects will be cut in half.
b. The specification limits will be moved twice as far from the mean.
c. The average number of defects will be cut by 99.9997%.
d. The average number of defects will be cut by 99.87%.
e. The average number of defects will be cut by 99.73%.
3. All of the following are elements frequently associated with Six Sigma except:
a. Customer-driven objectives
b. Process capability
c. Design quality
d. Cost reduction
e. All of the above.
4. One of the following is not among the basic steps of the Six Sigma process as implemented
by Motorola:
a. Mobilize
b. Align
c. Govern
d. Accelerate
e. None of the above.
5. The project leader who implements the DMAIC steps is a:
a. Black Belt.
b. Master black belt
c. Green belt
d. Project champion
e. All of the above.

Discussion Questions

1. Discuss the key elements of Six Sigma.
2. Identify the five steps of DMAIC.
3. Explain the six sigma measures used to assess the performance of operations processes.
4. What is the relationship and differences between TQM and Six Sigma?
5. Compare and contrast the Six Sigma team against the one-to-five team formation as practiced
by FDRE civil service bureaus.

CHAPTER VII

JUST-IN-TIME AND LEAN PRODUCTION

Lean production has truly changed the face of manufacturing and transformed the global
economy. Originally known as just-in-time (JIT), it began at Toyota Motor Company as an effort
to eliminate waste (particularly inventories), but it evolved into a system for the continuous
improvement of all aspects of manufacturing operations. Lean production is both a philosophy
and a collection of management methods and techniques. In this chapter, we explore the
elements of JIT and/or lean production. We also explore the benefits and drawbacks of lean
production and its implementation.

The learning objectives of the chapter are:

 To introduce the concepts of JIT and lean production
 To trace the evolution of JIT/lean production
 To describe the basic elements of lean production
 To discuss the benefits of lean production

7.1 Introduction to the Early Periods of JIT


Just-in-Time (JIT) originated in Japan. It is recognized as a technique, a philosophy and a way of
working, and is generally associated with the Toyota Motor Company. In fact, JIT was initially
known as the ‘Toyota Production System’. Within Toyota, Taiichi Ohno is credited as the
originator of this way of working. The beginnings of this production system are rooted in the
historical situation that Toyota faced. After the Second World War the president of Toyota said,
‘Catch up with America in three years, otherwise the automobile industry of Japan will not
survive.’ At that time one American worker produced approximately nine times as much as a
Japanese worker. In the 1950s, the entire Japanese automobile industry produced 30,000
vehicles, fewer than a half day’s production for U.S. automakers.

Taiichi Ohno found that American manufacturers made great use of economic order quantities-
the idea that it is best to make a ‘lot’ or ‘batch’ of an item (such as a particular model of car or a
particular component) before switching to a new item. They also made use of economic order
quantities in terms of ordering and stocking the many parts needed to assemble a car. Ohno felt
that such methods would not work in Japan - total domestic demand was low and the domestic
market demanded production of small quantities of many different models. Accordingly, Ohno
devised a new system of production based on the elimination of waste. In his system waste was
eliminated by:

• Just-In-Time. Items only move through the production system as and when they are
needed
• Autonomation. Automating the production system so as to include inspection- human
attention only being needed when a defect is automatically detected whereupon the
system will stop and not proceed until the problem has been solved.
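The batch-size logic Ohno moved away from is usually formalized as the economic order
quantity (EOQ). A minimal sketch of the classic formula follows; the demand and cost figures
are hypothetical, chosen only to show why high setup costs push toward large batches:

```python
import math

def eoq(annual_demand: float, setup_cost: float, holding_cost: float) -> float:
    """Classic economic order quantity: the batch size that balances
    setup/ordering cost against inventory holding cost per unit per year."""
    return math.sqrt(2 * annual_demand * setup_cost / holding_cost)

# Hypothetical figures: a high setup cost implies a large economic batch --
# exactly the effect JIT attacks by reducing setup times instead.
print(round(eoq(annual_demand=12_000, setup_cost=90, holding_cost=2.4)))  # 949
```

Note that in the EOQ view the setup cost is taken as fixed; the JIT view treats it as a variable
to be driven down, which shrinks the economic batch toward one.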

Ohno identified a number of sources of waste that he felt should be eliminated: Overproduction,
time spent waiting, transporting, movement, processing time, inventory and defects (explained
further in the forthcoming sections).

At that time, car prices in the USA were typically set using selling price = cost + profit
mark-up. In Japan, however, low demand meant that manufacturers faced price resistance. If
the selling price is fixed, how can one increase the profit mark-up? Obviously by reducing costs,
and hence a large focus of the system that Toyota implemented was cost reduction.
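The shift from cost-plus pricing to cost reduction at a fixed market price is just a
rearrangement of the same equation. A minimal sketch, with hypothetical figures for
illustration only:

```python
# Cost-plus pricing: the seller sets the price from cost plus a desired mark-up.
def cost_plus_price(cost: float, markup: float) -> float:
    return cost + markup

# The JIT view: the market fixes the price, so profit can only grow
# by driving cost down.
def profit_at_fixed_price(market_price: float, cost: float) -> float:
    return market_price - cost

# Hypothetical figures: at a fixed price of 10,000, cutting cost from
# 9,000 to 8,400 raises profit from 1,000 to 1,600.
before = profit_at_fixed_price(10_000, 9_000)   # 1000
after = profit_at_fixed_price(10_000, 8_400)    # 1600
```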

To aid in cost reduction, Toyota instituted production leveling: eliminating unevenness in the
flow of items. So, if a component which required assembly had an associated requirement of 100
during a 25-day working month, then 4 were assembled per day, one every two hours in an
eight-hour working day. Leveling also applied to the flow of finished goods out of the factory
and to the flow of raw materials into the factory.
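The leveling arithmetic in the paragraph above can be checked with a short sketch (the figures
are the ones from the example):

```python
# Production leveling: spread a monthly requirement evenly over working time.
monthly_requirement = 100   # units needed per month
working_days = 25           # working days per month
hours_per_day = 8           # working hours per day

units_per_day = monthly_requirement / working_days    # 4 units per day
hours_between_units = hours_per_day / units_per_day   # one unit every 2 hours

print(units_per_day, hours_between_units)  # 4.0 2.0
```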

Toyota changed their factory layout. Previously all machines of the same type, e.g. presses, were
together in the same area of the factory. This meant that items had to be transported back and
forth as they needed processing on different machines. To eliminate this transportation, different
machines were clustered together so items could move smoothly from one machine to another as
they were processed.

This meant that workers had to become skilled on more than one machine - previously workers
were skilled at operating just one type of machine. Although this initially met resistance from the
workforce it was eventually overcome.

History of Relations between Management and Workers


In the immediate post Second World War period, for example, Japan had one of the worst strike
records in the world. In 1953, the car maker Nissan suffered a four month strike - involving a
lockout and barbed wire barricades to prevent workers returning to work. That dispute ended
with the formation of a company backed union, formed initially by members of the Nissan
accounting department. Striking workers who joined this new union received payment for the
time spent on strike, a powerful financial incentive to leave their old union during such a long
dispute. The slogan of this new union was ‘Those who truly love their union love their company’.

Adaptation to New Production Environment


In order to help the workforce to adapt to what was a very different production environment
Ohno introduced the analogy of teamwork in a baton relay race. As you are probably aware,
typically in such races four runners pass a baton between themselves and the winning team is the
one that crosses the finishing line first carrying the baton and having made valid baton exchanges
between runners. Within the newly rearranged factory floor workers were encouraged to think of
themselves as members of a team - passing the baton (processed items) between themselves with
the goal of reaching the finishing line appropriately. If one worker flagged (e.g. had an off day)
then the other workers could help him, perhaps setting a machine up for him so that the team
output was unaffected.

The Kanban Control


In order to have a method of controlling production (the flow of items) in this new environment
Toyota introduced the kanban. The kanban is essentially information as to what has to be done.
Within Toyota the most common form of kanban was a rectangular piece of paper within a
transparent vinyl envelope.
The information listed on the paper basically tells a worker what to do - which items to collect or
which items to produce. In Toyota two types of kanban are distinguished for controlling the flow
of items:
• A withdrawal kanban. This details the items that should be withdrawn from the
preceding step in the process.
• A production ordering kanban. This details the items to be produced.

All movement throughout the factory is controlled by these kanbans. In addition, since the
kanbans specify item quantities precisely, no defects can be tolerated; if a defective
component is found when processing a production ordering kanban, then obviously the quantity
specified on the kanban cannot be produced. Hence the importance of autonomation (as referred
to above): the system must detect and highlight defective items so that the problem that caused
the defect can be resolved.
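The two kanban types described above can be sketched as a simple data structure. This is only
an illustration: the field names are assumptions, not Toyota's own card layout.

```python
from dataclasses import dataclass

@dataclass
class Kanban:
    """One card = authority to move or make a fixed quantity of one part."""
    part_number: str
    description: str
    quantity: int            # container quantity the card authorizes
    kind: str                # "withdrawal" or "production-ordering"
    preceding_station: str   # where items are withdrawn from
    subsequent_station: str  # where items are delivered to

# A withdrawal kanban authorizes pulling parts from the preceding step...
move = Kanban("B-7", "brake drum", 20, "withdrawal", "machining", "assembly")
# ...and a production-ordering kanban authorizes that step to replace them.
make = Kanban("B-7", "brake drum", 20, "production-ordering", "machining", "machining")
```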

Another aspect of the Toyota Production System is the reduction of setup time. Machines and
processes must be re-engineered so as to reduce the setup time required before processing of a
new item can start.

In the Western world, JIT only began to impact on manufacturing in the late 1970s and early
1980s. Even then it went under a variety of names; Hewlett Packard, for example, called it
'stockless production'. Such adaptation by Western industry was based on informal analysis of
the systems being used in Japanese companies.

7.2 JIT Today


Just-In-Time (JIT) is a very popular term these days among the managers of various industries.
It is seen in various ways by the practitioners in manufacturing, services and administrative
sectors. JIT is a system, a concept, a philosophy, a set of tools, a way of life and so on. No two
JIT implementations are the same; they vary according to the places and conditions in which
they are applied.
JIT is both a philosophy and a set of methods for manufacturing. JIT emphasizes waste
reduction, total quality control, and devotion to the customer. It strives to eliminate sources of
manufacturing waste by producing the right part in the right place at the right time. Waste results
from any activity that adds cost without adding value, such as the moving and storing of an item.

Other names for adaptations of JIT include stockless production (Hewlett-Packard), material as
needed (Harley Davidson), and continuous- flow manufacturing (IBM). JIT forms a key part of
lean production. Sometimes the terms big JIT and little JIT are used to differentiate attempts to
eliminate waste in operations. Big JIT focuses on vendor relationships, human relations,
technology management, and materials and inventory management. Little JIT is more narrowly
focused on scheduling materials and services for operations.

Others also refer to JIT as lean production or a stockless production system. JIT should
improve profits and return on investment by reducing inventory levels (or increasing the
inventory turnover rate), improving product quality, reducing production and delivery lead times,
and reducing other costs (such as those associated with machine setup and equipment
breakdown). In a JIT system, underutilized (or excess) capacity is used instead of buffer
inventories to hedge against problems that may arise. JIT applies primarily to repetitive
manufacturing processes in which the same products and components are produced over and
over again. The general idea is to establish flow processes (even when the facility uses a jobbing
or batch process layout) by linking work centers so that there is an even, balanced flow of
materials throughout the entire production process, similar to that found in an assembly line. To
accomplish this, an attempt is made to reach the goals of driving all queues toward zero and
achieving the ideal lot size of one unit.

7.3 Lean Production


Shortened product lifecycles, demanding customers, globalization, and e-commerce have placed
intense pressure on companies for quicker response and shorter cycle times. One way to ensure a
quick turnaround is by holding inventory. But inventory costs can easily become prohibitive,
especially when product obsolescence is considered. A wiser approach is to make your operating
system lean and agile, able to adapt to changing customer demands. Collaboration along a supply
chain can work only if the participants coordinate their production and operate under the same
rhythm. Companies have found this rhythm in a well-respected but difficult to implement
philosophy called lean production.

Lean production means doing more with less: less inventory, fewer workers, less space. The term
was coined by James Womack and Daniel Jones to describe the Toyota Production System,
widely recognized as the most efficient manufacturing system in the world. The Toyota
Production System evolved slowly over a span of years. Initially known as just-in-time (JIT) it
emphasized minimizing inventory and smoothing the flow of materials so that material arrived
just as it was needed or “just-in-time.” As the concept widened in scope, the term lean
production became more prevalent. Now the terms are often used interchangeably. What is
significant is that a system originally designed to reduce inventory levels eventually became a
system for continually improving all aspects of operations. Please note also that we have used
the terms interchangeably in this module as well unless explicitly stated otherwise.

The major difference between JIT and lean production/operations is that JIT is a philosophy of
continuing improvement with an internal focus, while lean production/operations begins
externally with a focus on the customer. Understanding what the customer wants and ensuring
customer input and feedback are starting points for lean production. Lean Production/operations
means identifying customer value by analyzing all of the activities required to produce the
product and then optimizing the entire process from the view of the customer. In other words,
lean production focuses on the removal of waste, which is defined as anything not necessary to
produce the product or service. Within lean production, waste (muda in Japanese) is defined
as an activity which absorbs resources but creates no value. Waste can be anything (time, cost,
work) that adds no value in the eyes of the customer. The manager finds what creates value for
the customer and what does not. Lean production is an integrated set of activities designed to
achieve high-volume flexible production using minimal inventories of raw materials. It is based
on the premise that nothing will be produced until it is needed. A signal is generated when
materials and components are needed at a workstation, and they arrive “just-in-time” to be used.
Ideally, lean production is implemented throughout the supply chain, with the signal moving
backward from the customer to the basic raw materials.

Lean production/operations began as lean manufacturing in the mid-twentieth century. It was
developed by the Japanese automobile manufacturer Toyota. The development in Japan was
influenced by the limited resources available at the time. Not surprisingly, the Japanese were
very sensitive to waste and inefficiency. The term lean production, one of the most popular
labels, was introduced by Krafcik in 1988. Widespread interest in lean manufacturing occurred
after the book about automobile production, "The Machine That Changed the World" by James
Womack, Daniel Jones, and Daniel Roos, was published in 1990. As described in the book,
Toyota's focus was on the elimination of all waste from every aspect of the process. Waste was
defined as anything that interfered with, or did not add value to, the process of producing
automobiles. Toyota learned a great deal from studying Ford's operations and based its JIT
approach on what it saw. However, Toyota was able to accomplish something that Ford couldn't:
a system that could handle variety.

That is why lean production is sometimes called the Toyota Production System (TPS), with
Toyota Motor Company's Eiji Toyoda, Taiichi Ohno and Shigeo Shingo given credit for its
approach and innovations. TPS represents a philosophy that encompasses every aspect of the
process, from design to after the sale of a product. There are 10 elements of the lean production
philosophy.

If there is any distinction between JIT, lean production, and TPS it is that a) JIT emphasizes
continuous improvement, b) Lean production emphasizes understanding the customer, whereas
c) TPS emphasizes employee learning and empowerment in an assembly line environment. In
practice, there is little difference, and the terms are often used interchangeably.
7.4 JIT Goals
The goal of JIT is to produce only the necessary item in the necessary quantity at the necessary
time. Achieving this goal can radically increase the responsiveness of a company to the demands
of its customers and improve its ability to compete on cost, quality, dependability, flexibility, and
speed. JIT aims to meet demand instantaneously, with perfect quality and no waste. JIT will not
achieve these aims immediately. Rather, it describes a state that a JIT approach helps to work
towards. No definition of JIT fully conveys its full implications for operations practice, however.

This is why so many different phrases and terms exist to describe JIT-type approaches, for
example:
 lean operations
 continuous flow manufacture
 high value-added manufacture
 stockless production
 war on waste
 fast-throughput manufacturing
 short cycle time manufacturing.

Generally, JIT manufacturing seeks to achieve the following goals:


1. To produce the required quality or zero defects. Traditionally, people in manufacturing
thought that zero-defect production was impossible: at some level of output it would no longer
be possible to produce without defects, and even with some defects the product would still meet
customer expectations. Under JIT there should no longer be any cause of a defect, and therefore
all products should meet or exceed expectations. This is closely related to quality management.
JIT takes an approach to quality control which starts from the premise that if quality cannot be
built into the process, then the only way to ensure that no defective products are passed on to the
customer (the downstream process) is to inspect every part made. In a just-in-time environment,
where waiting for an inspector would be intolerable, the alternatives are self-inspection and error
proofing. Inspection at source also improves the likelihood of discovering the root cause of a
problem so it can be eliminated. Each individual and function involved in the manufacturing
system must, therefore, accept responsibility for the quality level of its products. Traditional
companies believe quality is costly, defects are caused by workers, and the minimum level of
quality that can satisfy the customer is enough. Companies practicing JIT believe quality leads
to lower costs, that systems cause most defects, and that quality can be improved within the
Kaizen framework. This concept introduces the correction of a problem before many other
defective units have been completed.

2. Zero set-up time. Reducing set-up times leads to more predictable production. Zero set-up
time also leads to a shorter production cycle and lower inventories. To effectively
implement a low-inventory system, the common practice of lot sizing through the economic
order quantity model must be forgotten. Therefore, the time to set up for a different product in
the line needed to be significantly reduced. Innovative designs and changeover techniques are
critical.

3. To produce the required items or zero inventories. Inventories, including work-in-progress,
finished goods and sub-assemblies, have to be reduced to zero. There will be no sub-assemblies,
no work-in-progress and no finished goods. This means a different view than in traditional
manufacturing, where inventories are seen as a buffer against a fluctuating demand, or as a
buffer against non-reliable suppliers. Also, in traditional manufacturing inventory was built up to
make sure expensive machines were running for full capacity, because only then the hourly costs
were as low as possible. In JIT, the inventory is minimized and thus, throughput and cycle times
improved significantly. Also, through elimination of large inventories, huge space savings are
realized because there is no need for large warehouses.

4. Zero handling. Zero handling in JIT means eliminating all non-value adding activities. So,
zero-handling means reducing (by redesigning) non-value adding activities.

5. Zero lead-time. Lead time is the time between ordering a product and receiving it. The time
taken to process orders, order parts, manufacture goods, store, pick and dispatch goods all
impacts the customer lead-time. Near-zero lead-time results from the use of small lots and
increases the flexibility of the system. As lead times shrink, planning can increasingly be based
on actual demand rather than on forecasts. The JIT philosophy recognizes that in some markets
zero lead-time is impossible, but makes clear that a firm which focuses on reducing lead-times
can compete effectively in those markets.

6. Enable Productivity in Diversified Small-Quantity Production. With every customer
desiring a customized product, diversity is extremely important. Many product variations can be
made on a single line, with short changeover times.

7. Reducing Manufacturing Cost. JIT involves designing products that facilitate and ease
manufacturing processes. This will help to reduce the cost of manufacturing and building the
product to specifications. One aspect in designing products for manufacturability is the need to
establish a good employer and employee relationship. This is to cultivate and tap the resources of
the production experts (production floor employee), and the line employees to develop cost
saving solutions. Participatory quality programs utilize employee knowledge about their job
functions and review departmental performance, rewarding suggested cost-saving solutions.

8. Jidohka (automation with a human touch). Technological advancements need to be taken
advantage of along with improving worker skill levels, without sacrificing employee morale.

Table 7.1: The effect of JIT on operations

Quality
  Traditional manufacturing: Acceptable levels of rejects and rework; an inevitability that
  failures will occur; a specialist function.
  JIT/enlightened approach: 'Right first time, every time'; constant, ongoing pursuit of process
  improvement; everybody responsible for ensuring quality.

Inventory
  Traditional manufacturing: An asset, part of the balance sheet and therefore part of the value
  of the firm; buffers necessary to keep production running.
  JIT/enlightened approach: A liability, masking operational performance by hiding a number of
  problems.

Batch sizes
  Traditional manufacturing: An economic order can be determined to show the balance between
  set-up time and production runs.
  JIT/enlightened approach: Batch sizes must be as small as possible, aiming toward a batch
  size of 1.

Materials ordering
  Traditional manufacturing: Determined by the economic order quantity.
  JIT/enlightened approach: Supply exactly meets demand, no more no less, in terms of quantity;
  delivery is exactly when required, not before and not after.

Bottlenecks
  Traditional manufacturing: Inevitable; shows that machine utilization is high.
  JIT/enlightened approach: No queues; production is at the rate which prevents delays and
  queues.

Workforce
  Traditional manufacturing: A cost which can be reduced by introducing more automation.
  JIT/enlightened approach: A valuable asset, able to problem solve, and should be supported
  by managers.

7.5 The Basic Elements of JIT/ Lean Production


JIT/ Lean production is the result of the mandate to eliminate waste. It is composed of ten
elements:

1. Flexible resources
2. Cellular layouts
3. Pull system
4. Kanbans
5. Small lots
6. Quick setups
7. Uniform production levels
8. Quality at the source
9. Total productive maintenance
10. Supplier networks

These elements can be loosely organized into three phases, as shown in Figure 7.1. Let’s explore
each of these elements and determine how they work in concert.

[Figure 7.1 groups the ten elements into three phases, all serving the goal of eliminating waste:
increase flexibility (flexible resources, cellular layouts); smooth the flow (pull system, kanbans,
small lots, quick setups, uniform production); and continuously improve (quality at the source,
total productive maintenance, supplier networks).]

Figure 7.1: Elements of Lean Production

Eliminate waste
Arguably the most significant part of the lean philosophy is its focus on the elimination of all
forms of waste. Waste can be defined as any activity that does not add value. For example,
studies often show that as little as 5 per cent of total throughput time is actually spent directly
adding value. This means that for 95 per cent of its time, an operation is adding cost to the
product or service, not adding value. Such calculations can alert even relatively efficient
operations to the enormous waste which is dormant within all operations. This same
phenomenon applies as much to service processes as it does to manufacturing ones. Relatively
simple requests, such as applying for a driving licence, may only take a few minutes to actually
process, yet take days (or weeks) to be returned.

The seven types of waste

Just-in-time production (JIT) was conceived by Taiichi Ohno, the former head of production at
Toyota, in the decades following the Second World War. World-class JIT streamlines production,
exposes problems and bottlenecks, and attacks waste. The seven types of waste are shown in
Table 7.2.

Table 7.2 Seven categories of waste

1. Overproduction
 More than customer needs
 Out of sequence
 The wrong part
 Early or late

2. Waiting
 By people
 By products
 By machines (bottlenecks)
 By customer

3. Transportation
 Not value added
 Effort and cost
 Inventory
 No control, no ownership

4. The process itself
 Basic raw material
 Basic process
 Value engineering, value analysis
 Make/buy
 Why do it at all?
 Process choice

5. Stock on hand
 Buffer against variability
 Store excess parts
 WIP deadens responsiveness
 Money tied up

6. Motion
 Process choice
 Efficiency of task
 Maintain operator flow
 Maintain work flow
 Improve the method first, then inject capital

7. Defective goods
 Cost of scrap
 Creates inventory (just in case!)
 Cost of rectification
 Causes poor delivery performance

Identifying waste is the first step towards eliminating it. Toyota have identified seven types of
waste, which have been found to apply in many different types of operations- both service and
production- and which form the core of lean philosophy:
1. Over-production. Producing more than is immediately needed by the next process in the
operation is the greatest source of waste according to Toyota.

2. Waiting time. Equipment efficiency and labour efficiency are two popular measures which
are widely used to measure equipment and labour waiting time, respectively. Less obvious
is the amount of waiting time of items, disguised by operators who are kept busy
producing WIP which is not needed at the time.
3. Transport. Moving items around the operation, together with the double and triple
handling of WIP, does not add value. Layout changes which bring processes closer
together, improvements in transport methods and workplace organization can all reduce
waste.
4. Process. The process itself may be a source of waste. Some operations may only exist
because of poor component design, or poor maintenance, and so could be eliminated. The
critical metric for any process is what percentage of the total cycle time is spent in
value-added activities and how much of it is waste. Process Cycle Efficiency (PCE), which
relates the amount of value-added time to the total lead time of the process, is used as an
indicator:
Process cycle efficiency = Value-added time/Total lead time
Lead time is how long it takes to deliver the product or service once the order is triggered. A
PCE of less than 10% indicates that the process has a lot of non-value-added waste to remove
and that the process is 'un-lean'.
5. Inventory. All inventory should become a target for elimination. However, it is only by
tackling the causes of inventory that it can be reduced.
6. Motion. An operator may look busy but sometimes no value is being added by the work.
Simplification of work is a rich source of reduction in the waste of motion.
7. Defectives. Quality waste is often very significant in operations. Total costs of quality
are much greater than has traditionally been considered, and it is therefore more important
to attack the causes of such costs.

Between them, these seven types of waste contribute to four barriers to any operation achieving
lean synchronization: waste from irregular (non-streamlined) flow, waste from inexact supply,
waste from inflexible response, and waste from variability. We will examine each of these
barriers to achieving lean synchronization.
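The Process Cycle Efficiency measure from item 4 above can be computed directly. A minimal
sketch, with hypothetical figures:

```python
def process_cycle_efficiency(value_add_time: float, total_lead_time: float) -> float:
    """PCE = value-added time / total lead time (same time units for both)."""
    return value_add_time / total_lead_time

# Hypothetical process: 2 hours of value-added work in a 40-hour lead time.
pce = process_cycle_efficiency(2, 40)   # 0.05, i.e. 5%
if pce < 0.10:
    # Per the rule of thumb above, under 10% signals an "un-lean" process.
    print(f"PCE {pce:.0%}: the process is 'un-lean'")
```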


7.6 JIT Tools


There is a lot of confusion between JIT the philosophy and the tools used by JIT to achieve its
goal of waste elimination. There is a distinction between these two. Any firm can use some of
the JIT tools and not create a JIT system. These tools are techniques to fulfill the goals stated
above. One of the most widely known JIT tools, for example, is the pull system.

There are some prerequisites for successful JIT implementation. Industries need to do the
following:
Flexible Resources
The concept of flexible resources, in the form of multifunctional workers and general-purpose
machines, is recognized as a key element of lean production, but most people do not realize that
it was the first element to fall into place. Taiichi Ohno had transferred to Toyota from Toyoda
textile mills with no knowledge of (or preconceived notions about) automobile manufacturing.
His first attempt to eliminate waste concentrated on worker productivity.

Borrowing heavily from U.S. time and motion studies, he set out to analyze every job and every
machine in his shop. He quickly noted a distinction between the operating time of a machine and
the operating time of the worker. Initially, he asked each worker to operate two machines rather
than one. To make this possible, he located the machines in parallel lines or in L-formations.
After a time, he asked workers to operate three or four machines arranged in a U-shape. The
machines were no longer of the same type (as in a process layout) but represented a series of
different processes common to a group of parts (i.e., a cellular layout). The operation of
different, multiple machines required additional training for workers and specific rotation
schedules. The time required for the worker to complete one pass through the operations
assigned is called the operator cycle time.

Closely related to the concept of cycle time is takt time. 'Takt' is the German word for the beat
or meter a conductor sets with a baton, signaling the timing at which musicians play. Takt time,
then, is the pace at which production should take place to match the rate of customer demand. An
operator's cycle time is coordinated with the takt time of the product or service being produced.
With single workers operating multiple machines, the machines themselves also required some
adjustments. Limit switches were installed to turn off machines automatically after each
operation was completed. Changes in jigs and fixtures allowed machines to hold a workpiece in
place, rather than rely on the presence of an operator. Extra tools and fixtures were purchased
and placed at their point of use so that operators did not have to leave their stations to retrieve
them when needed. By the time Ohno was finished with this phase of his improvement efforts, it
was possible for one worker to operate as many as 17 machines (the average was 5 to 10).
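Takt time is simply available working time divided by customer demand over the same period.
A small sketch, with hypothetical figures:

```python
def takt_time(available_minutes: float, demand_units: float) -> float:
    """Pace (minutes per unit) at which production must run to match demand."""
    return available_minutes / demand_units

# Hypothetical day: 480 minutes of working time, customer demand of 240 units.
takt = takt_time(480, 240)   # 2.0 minutes per unit
# An operator's cycle time through his or her machines should be
# coordinated with (i.e. not exceed) this takt time.
```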

The flexibility of labor brought about by Ohno’s changes prompted a switch to more flexible
machines. Thus, although other manufacturers were interested in purchasing more specialized
automated equipment, Toyota preferred small, general-purpose machines. A general-purpose
lathe, for example, might be used to bore holes in an engine block and then do other drilling,
milling, and threading operations at the same station. The waste of movement to other machines,
setting up other machines, and waiting at other machines was eliminated.

Cellular Layouts

While it is true that Ohno first reorganized his shop into manufacturing cells to use labor more
efficiently, the flexibility of the new layout proved to be fundamental to the effectiveness of the
system as a whole. The concept of cellular layouts did not originate with Ohno. It was first
described by a U.S. engineer in the 1920s, but it was Ohno’s inspired application of the idea that
brought it to the attention of the world.
Cells group dissimilar machines together to process a family of parts with similar shapes or
processing requirements. The layout of machines within the cell resembles a small assembly line
and is usually U-shaped. Work is moved within the cell, ideally one unit at a time, from one
process to the next by a worker as he or she walks around the cell in a prescribed path.

Work normally flows through the cell in one direction and experiences little waiting. In a one
person cell, the cycle time of the cell is determined by the time it takes for the worker to
complete his or her path through the cell. This means that, although different items produced in
the cell may take different amounts of time to complete, the time between successive items
leaving the cell remains virtually the same because the worker’s path remains the same. Thus,
changes of product mix within the cell are easy to accommodate. Changes in volume or takt time
can be handled by adding workers to or subtracting workers from the cell and adjusting their
walking routes accordingly.
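The volume adjustment described above, adding or removing workers as takt time changes, can
be sketched as a small calculation. This is an illustrative rule of thumb, not a formula from the
source: the cell's total manual work content per unit divided by takt time gives the minimum
whole number of workers.

```python
import math

def workers_required(cell_work_content: float, takt: float) -> int:
    """Minimum whole workers so the cell keeps pace with demand:
    total manual work content per unit divided by takt time, rounded up."""
    return math.ceil(cell_work_content / takt)

# Hypothetical cell with 6 minutes of manual work per unit:
print(workers_required(6.0, takt=3.0))   # 2 workers at a 3-minute takt
print(workers_required(6.0, takt=1.5))   # demand doubles -> 4 workers
```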

Because cells produce similar items, setup time requirements are low and lot sizes can be
reduced. Movement of output from the cells to subassembly or assembly lines occurs in small
lots and is controlled by kanbans. Cellular layouts, because of their manageable size, workflow,
and flexibility, facilitate another element of lean production, the pull system.

The Pull System

A major problem in automobile manufacturing is coordinating the production and delivery of
materials and parts with the production of subassemblies and the requirements of the final
assembly line. It is a complicated process, not because of the technology, but because of the
thousands of large and small components produced by thousands of workers for a single
automobile. Traditionally, inventory has been used to cushion against lapses in coordination, and
these inventories can be quite large. Ohno struggled for five years trying to come up with a
system to improve the coordination between processes and thereby eliminate the need for large
amounts of inventory. He finally got the idea for his pull system from another American classic,
the supermarket. Ohno read (and later observed) that Americans do not keep large stocks of food
at home. Instead, they make frequent visits to nearby supermarkets to purchase items as they
need them. The supermarkets, in turn, carefully control their inventory by replenishing items on
their shelves only as they are removed. Customers actually “pull through” the system the items
they need, and supermarkets do not order more items than can be sold.

Applying this concept to manufacturing requires a reversal of the normal process/information
flow, called a push system. In a push system, a schedule is prepared in advance for a series of
workstations, and each workstation pushes its completed work to the next station. With the pull
system, workers go back to previous stations and take only the parts or materials they need and
can process immediately. When their output has been taken, workers at the previous station
know it is time to start producing more, and they replenish the exact quantity that the subsequent
station just took away. If their output is not taken, workers at the previous station simply stop
production; no excess is produced. This system forces operations to work in coordination with
one another. It prevents overproduction and underproduction; only necessary quantities are
produced. “Necessary” is not defined by a schedule that specifies what ought to be needed;
rather, it is defined by the operation of the shop floor, complete with unanticipated occurrences
and variations in performance.

Although the concept of pull production seems simple, it can be difficult to implement because it
is so different from normal scheduling procedures. After several years of experimenting with the
pull system, Ohno found it necessary to introduce kanbans to exercise more control over the pull
process on the shop floor.
Kanbans

Kanban is the Japanese word for card. In the pull system, each kanban corresponds to a standard
quantity of production or size of container. A kanban contains basic information such as part
number, brief description, type of container, unit load (i.e., quantity per container), preceding
station (where it came from), and subsequent station (where it goes to). Sometimes the kanban is
color-coded to indicate raw materials or other stages of manufacturing. The information on the
kanban does not change during production. The same kanban can rotate back and forth between
preceding and subsequent workstations.

Kanbans are closely associated with the fixed-quantity inventory system. In the fixed-quantity
system, a certain quantity, Q, is ordered whenever the stock on hand falls below a reorder point.
The reorder point is determined so that demand can be met while an order for new material is
being processed. Thus, the reorder point corresponds to demand during lead time. A visual fixed-
quantity system, called the two-bin system, illustrates the concept nicely. The first (and usually
larger) bin contains the order quantity minus the reorder point, and the second bin contains the
reorder point quantity. At the bottom of the first bin is an order card that describes the item and
specifies the supplier and the quantity that is to be ordered. When the first bin is empty, the card
is removed and sent to the supplier as a new order. While the order is being filled, the quantity in
the second bin is used. If everything goes as planned, when the second bin is empty, the new
order will arrive and both bins will be filled again.
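The two-bin logic described above can be expressed numerically. A minimal sketch, with assumed demand, lead-time, and order-quantity figures:

```python
# Fixed-quantity (two-bin) system: the reorder point R covers
# demand during the supplier's lead time.
daily_demand = 40      # units consumed per day (assumed)
lead_time_days = 3     # time for the supplier to fill an order (assumed)
order_quantity = 200   # Q, the fixed quantity ordered each cycle (assumed)

reorder_point = daily_demand * lead_time_days   # R = 120 units

# Bin sizes in the visual two-bin system:
first_bin = order_quantity - reorder_point      # Q - R = 80 units
second_bin = reorder_point                      # R = 120 units

# When the first bin is empty, the order card is sent to the supplier;
# the second bin covers usage until the new order of Q units arrives.
print(reorder_point, first_bin, second_bin)
```

This makes Ohno's later insight concrete: the first bin (Q - R units) is the portion of inventory he saw no purpose for and eliminated.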

Ohno looked at this system and liked its simplicity, but he could not understand the purpose of
the first bin. By eliminating the first bin and placing the order card (which he called a kanban) at
the top of the second bin, (Q - R) units of inventory could be eliminated. In this system, an order is
continually in transit. When the new order arrives, the supplier is reissued the same kanban to fill
the order again. The only inventory that is maintained is the amount needed to cover usage until
the next order can be processed. This concept is the basis for the kanban system.

Kanbans do not make the schedule of production; they maintain the discipline of pull production
by authorizing the production and movement of materials. If there is no kanban, there is no
production. If there is no kanban, there is no movement of material. There are many different
types and variations of kanbans. The most sophisticated is probably the dual kanban system used
by Toyota, which uses two types of kanbans: production kanbans and withdrawal kanbans. As
their names imply, a production kanban is a card authorizing production of goods, and a
withdrawal kanban is a card authorizing the movement of goods. Each kanban is physically
attached to a container or cart. An empty cart signals production or withdrawal of goods.
Kanbans are exchanged between containers as needed to support the pull process.

The dual kanban approach is used when material is not necessarily moving between two
consecutive processes, or when there is more than one input to a process and the inputs are
dispersed throughout the facility (as for an assembly process). If the processes are tightly linked,
other types of kanbans can be used.

A kanban square is a marked area that will hold a certain number of output items (usually one
or two). If the kanban square following his or her process is empty, the worker knows it is time
to begin production again. Kanban racks are also known as supermarkets. When the allocated
slots on a rack or shelf are empty, workers know it is time to begin a new round of production to
fill up the slots. Oftentimes, these racks or shelves are open-backed and placed between two
operations. If the distance between stations prohibits the use of kanban squares or racks, the
signal for production can be a colored golf ball rolled down a tube, a flag on a post, a light
flashing on a board, or an electronic or verbal message requesting more.

Signal kanbans are used when inventory between processes is still necessary. A signal kanban
closely resembles the reorder point system. A triangular marker is placed at a certain level of inventory.
When the marker is reached (a visual reorder point), it is removed from the stack of goods and
placed on a kanban post, thereby generating a replenishment order for the item. A rectangular-shaped
kanban, called a material kanban, is used when it is necessary to order the material for a
process in advance of the initiation of the process.

Kanbans can also be used outside the factory to order material from suppliers. The supplier
brings the order (e.g., a filled container) directly to its point of use in the factory and then picks
up an empty container with its kanban to fill and return later. It would not be unusual for 5000 to
10,000 of these supplier kanbans to rotate between the factory and suppliers. To handle this
volume of transactions, a kind of kanban “post office” can be set up, with the kanbans sorted by
supplier. The supplier then checks his or her “mailbox” to pick up new orders before returning to
the factory. Bar-coded kanbans and electronic kanbans can also be used to facilitate
communication between customer and supplier.

It is easy to get caught up with the technical aspects of kanbans and lose sight of the objective of
the pull system, which is to reduce inventory levels. The kanban system is actually very similar
to the reorder point system. The difference is in application. The reorder point system attempts to
create a permanent ordering policy, whereas the kanban system encourages the continual
reduction of inventory.

To force the improvement process, the container size is usually much smaller than the demand
during lead time. At Toyota, containers can hold at most 10% of a day’s demand. This allows the
number of kanbans (i.e., containers) to be reduced one at a time. The smaller number of kanbans
(and corresponding lower level of inventory) causes problems in the system to become visible.
Workers and managers then attempt to solve the problems that have been identified.
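The trade-off between lead time, safety stock, and the number of circulating kanbans can be sketched with the formula commonly used in operations management texts: number of kanbans = demand during lead time × (1 + safety factor) ÷ container size. The figures below are assumed for illustration:

```python
import math

# Commonly used sizing formula for a kanban loop:
#   N = demand during lead time * (1 + safety factor) / container size
demand_rate = 200          # units per hour (assumed)
lead_time_hours = 0.5      # time to produce and move one container (assumed)
safety_factor = 0.10       # buffer against variability (assumed)
container_size = 25        # units per container (assumed; kept small
                           # relative to a day's demand)

kanbans = math.ceil(demand_rate * lead_time_hours * (1 + safety_factor)
                    / container_size)

print(kanbans)   # 5 containers circulating between the two stations
```

Reducing the safety factor or the lead time lowers N, which is exactly how kanbans (and the inventory they represent) are removed one at a time to expose problems.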

Small Lots

Small-lot production requires less space and capital investment than systems that incur large
inventories. By producing small amounts at a time, processes can be physically moved closer
together and transportation between stations can be simplified. In small-lot production, quality
problems are easier to detect and workers show less tendency to let poor quality pass (as they
might in a system that is producing huge amounts of an item anyway). Lower inventory levels
make processes more dependent on each other. This is beneficial because it reveals errors and
bottlenecks more quickly and gives workers an opportunity to solve them.

The analogy of water flowing over a bed of rocks is useful here. The inventory level is like the
level of water. It hides problems but allows for smooth sailing. When the inventory level is
reduced, the problems (or rocks) are exposed. After the exposed rocks are removed from the
river, the boat can again progress, this time more quickly than before.

Although it is true that a company can produce in small lot sizes without using the pull system or
kanbans, it is obvious that small-lot production in a push system is difficult to coordinate.
Similarly, using large lot sizes with a pull system and kanbans would not be advisable. Let’s look
more closely at the relationship between small lot sizes, the pull system, and kanbans.

From the kanban system, it becomes clear that a reduction in the number of kanbans (given a
constant container size) requires a corresponding reduction in safety stock or in lead time itself.
The need for safety stock can be reduced by making demand and supply more certain. Flexible
resources allow the system to adapt more readily to unanticipated changes in demand. Demand
fluctuations can also be controlled through closer contact with customers and better forecasting
systems. Deficiencies in supply can be controlled through eliminating mistakes, producing only
good units, and reducing or eliminating machine breakdowns.

Lead time is typically made up of four components:


• Processing time
• Move time
• Waiting time
• Setup time

Processing time can be reduced by reducing the number of items processed and the efficiency or
speed of the machine or worker. Move time can be decreased if machines are moved closer
together, the method of movement is simplified, routings are standardized, or the need for
movement is eliminated. Waiting time can be reduced through better scheduling of materials,
workers and machines, and sufficient capacity. In many companies, however, lengthy setup times
are the biggest bottleneck. Reduction of setup time is an important part of lean production.

Quick Setups

Several processes in automobile manufacturing defy production in small lots because of the
enormous amount of time required to set up the machines. Convinced that major improvements
could be made, a consultant, Shigeo Shingo, was hired to study die setup systematically, to
reduce changeover times further, and to teach these techniques to production workers and Toyota
suppliers.

Shingo proved to be a genius at the task. He reduced setup time on a 1000-ton press from 6 hours
to 3 minutes using a system he called SMED (single-minute exchange of dies). SMED is based
on the following principles, which can be applied to any type of setup:

1. Separate internal setup from external setup. Internal setup has to be performed while the
machine is stopped; it cannot take place until the machine has finished with the previous
operation. External setup, on the other hand, can be performed in advance, while the
machine is running. By the time a machine has finished processing its current operation, the
worker should have completed the external setup and be ready to perform the internal setup
for the next operation. Applying this concept alone can reduce setup time by 30 to 50%.
2. Convert internal setup to external setup. This process involves making sure that the
operating conditions, such as gathering tools and fixtures, preheating an injection mold,
centering a die, or standardizing die heights, are prepared in advance.

3. Streamline all aspects of setup. External setup activities can be reduced by organizing the
workplace properly, locating tools and dies near their points of use, and keeping machines
and fixtures in good repair. Internal setup activities can be reduced by simplifying or
eliminating adjustments. Examples include precoding desired settings, using quick fasteners
and locator pins, preventing misalignment, eliminating tools, and making movements easier.

4. Perform setup activities in parallel or eliminate them entirely. Adding an extra person to the
setup team can reduce setup time considerably. In most cases, two people can perform a
setup in less than half the time needed by a single person. In addition, standardizing
components, parts, and raw materials can reduce and sometimes eliminate setup
requirements.

Uniform Production Levels


The flow of production created by the pull system, kanbans, small lots, and quick setups can only
be maintained if production is relatively steady. Lean production systems attempt to maintain
uniform production levels by smoothing the production requirements on the final assembly
line. Changes in final assembly often have dramatic effects on component production upstream.
When this happens in a kanban system, kanbans for certain parts will circulate very quickly at
some times and very slowly at others. Adjustments of plus or minus 10% in monthly demand can
be absorbed by the kanban system, but wider demand fluctuations cannot be handled without
substantially increasing inventory levels or scheduling large amounts of overtime.

One way to reduce variability in production is to guard against unexpected demand through more
accurate forecasts. To accomplish this, the sales division of Toyota takes the lead in production
planning. Toyota Motor Sales conducts surveys of tens of thousands of people twice a year to
estimate demand for Toyota cars and trucks. Monthly production schedules are drawn up from
the forecasts two months in advance. The plans are reviewed one month in advance and then
again 10 days in advance. Daily production schedules, which by then include firm orders from
dealers, are finalized four days from the start of production. Model mix changes can still be made
the evening before or the morning of production. This flexibility is possible because schedule
changes are communicated only to the final assembly line. Kanbans take care of dispatching
revised orders to the rest of the system.

Another approach to achieving uniform production is to level or smooth demand across the
planning horizon. Demand is divided into small increments of time and spread out as evenly as
possible so that the same amount of each item is produced each day, and item production is
mixed throughout the day in very small quantities. The mix is controlled by the sequence of
models on the final assembly line.

Toyota assembles several different vehicle models on each final assembly line. The assembly
lines were initially designed this way because of limited space and resources and lack of

sufficient volume to dedicate an entire line to a specific model. However, the mixed-model
concept has since become an integral part of lean production systems. Daily production is
arranged in the same ratio as monthly demand, and jobs are distributed as evenly as possible
across the day’s schedule. This means that at least some quantity of every item is produced daily,
and the company will always have some quantity of an item available to respond to variations in
demand. The mix of assembly also steadies component production, reduces inventory levels, and
supports the pull system of production.
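Spreading a demand mix as evenly as possible across the day's sequence can be sketched with a simple "furthest behind its ideal share" heuristic, in the spirit of Toyota's mixed-model sequencing. The model names and the 3:2:1 mix below are assumed for illustration:

```python
# Mixed-model sequencing: distribute models across the day's schedule
# in the same ratio as demand, as evenly as possible.
demand = {"sedan": 3, "wagon": 2, "truck": 1}   # assumed daily mix (3:2:1)
total = sum(demand.values())

sequence = []
produced = {m: 0 for m in demand}
for slot in range(1, total + 1):
    # Pick the model furthest behind its ideal cumulative share so far.
    model = max(demand, key=lambda m: demand[m] * slot / total - produced[m])
    produced[model] += 1
    sequence.append(model)

print(sequence)
# -> ['sedan', 'wagon', 'sedan', 'truck', 'wagon', 'sedan']
```

Note how no model is batched: some quantity of every item appears throughout the sequence, which is what steadies component production upstream and supports the pull system.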

Quality at the Source

For lean systems to work well, quality has to be extremely high. There is no extra inventory to
buffer against defective units. Producing poor-quality items and then having to rework or reject
them is a waste that should be eliminated. Producing in smaller lots encourages better quality.
Workers can observe quality problems more easily; when problems are detected, they can be
traced to their source and remedied without reworking too many units. Also, by inspecting the
first and the last unit in a small batch or by having a worker make a part and then use the part,
virtually 100% inspection can be achieved.

Visual Control

Quality improves when problems are made visible and workers have clear expectations of
performance. Production systems designed with quality in mind include visible instructions for
worker or machine action, and direct feedback on the results of that action. This is known as
visual control. Examples include kanbans, standard operation sheets, andons, process control
charts, and tool boards. A factory with visual control will look different from other factories.
You may find machines or stockpoints in each section painted different colors, material-handling
routes marked clearly on the floor, demonstration stands and instructional photographs placed
near machines, graphs of quality or performance data displayed at each workstation, and
explanations and pictures of recent improvement efforts posted by work teams.

Visual control of quality often leads to what the Japanese call a poka-yoke. A poka-yoke is any
foolproof device or mechanism that prevents defects from occurring. For example, a dial on
which desired ranges are marked in different colors is an example of visual control. A dial that
shuts off a machine whenever the instrument needle falls above or below the desired range is a
poka-yoke. Machines set to stop after a certain amount of production are poka-yokes, as are
sensors that prevent the addition of too many items into a package or the misalignment of
components for an assembly.

Kaizen

Quality in lean systems is based on kaizen, the Japanese term for “change for the good of all” or
continuous improvement. It is a monumental undertaking that requires the participation of every
employee at every level. The essence of lean success is the willingness of workers to spot quality
problems, halt operations when necessary, generate ideas for improvement, analyze processes,
perform different functions, and adjust their working routines.

One of the keys to an effective kaizen is finding the root cause of a problem and eliminating it so
that the problem does not reoccur. A simple, yet powerful, technique for finding the root cause is
the 5 Why’s, a practice of asking “why?” repeatedly until the underlying cause is identified
(usually requiring five questions).

Jidoka

It was the idea that workers could identify quality problems at their source, solve them, and
never pass on a defective item that led Ohno to believe in zero defects. To that end, Ohno was
determined that the workers, not inspectors, should be responsible for product quality. To go
along with this responsibility, he also gave workers the unprecedented authority of jidoka—the
authority to stop the production line if quality problems were encountered.

To encourage jidoka, each worker is given access to a switch that can be used to activate call
lights or to halt production. The call lights, called andons, flash above the workstation and at
several andon boards throughout the plant. Green lights indicate normal operation, yellow lights
show a call for help, and red lights indicate a line stoppage. Supervisors, maintenance personnel,
and engineers are summoned to troubled workstations quickly by flashing lights on the andon
board. At Toyota, the assembly line is stopped for an average of 20 minutes a day because of
jidoka. Each jidoka drill is recorded on easels kept at the work area. A block of time is reserved
at the end of the day for workers to go over the list and work on solving the problems raised. For
example, an eight-hour day might consist of seven hours of production and one hour of problem
solving.

This concept of allocating extra time to a schedule for nonproductive tasks is called
undercapacity scheduling. Another example of undercapacity scheduling is producing for two
shifts each day and reserving the third shift for preventive maintenance activities. Making time to
plan, train, solve problems, and maintain the work environment is an important part of lean’s
success.

Total Productive Maintenance

Machines cannot operate continuously without some attention. Maintenance activities can be
performed when a machine breaks down to restore the machine to its original operating
condition, or at different times during regular operation of the machine in an attempt to prevent a
breakdown from occurring. The first type of activity is referred to as breakdown maintenance;
the second is called preventive maintenance. Breakdowns seldom occur at convenient times.
Lost production, poor quality, and missed deadlines from an inefficient or broken-down machine
can represent a significant expense. In addition, the cost of breakdown maintenance is usually
much greater than preventive maintenance. For these reasons, most companies do not find it
cost-effective to rely solely on breakdown maintenance. The question then becomes, how much
preventive maintenance is necessary and when should it be performed? With accurate records on
the time between breakdowns, the frequency of breakdowns, and the cost of breakdown and
preventive maintenance, we can mathematically determine the best preventive maintenance
schedule. But even with this degree of precision, breakdowns can still occur. Lean production
requires more than preventive maintenance; it requires total productive maintenance.
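The kind of calculation referred to above can be illustrated with a standard expected-cost model: for each candidate preventive maintenance (PM) interval, compare the expected cost of breakdowns against the cost of servicing. The breakdown probabilities and costs below are hypothetical:

```python
# Choosing a preventive maintenance interval from breakdown records.
# Hypothetical data: probability that the machine breaks down in the
# n-th week since its last service, plus the two repair costs.
breakdown_prob = {1: 0.10, 2: 0.20, 3: 0.30, 4: 0.40}  # assumed
cost_breakdown = 2500   # cost per breakdown repair (assumed)
cost_pm = 500           # cost of one preventive service (assumed)

def weekly_cost(interval):
    """Expected weekly cost if PM is performed every `interval` weeks."""
    expected_breakdowns = sum(breakdown_prob[w] for w in range(1, interval + 1))
    return (expected_breakdowns * cost_breakdown + cost_pm) / interval

# Evaluate each candidate interval and pick the cheapest.
best = min(breakdown_prob, key=weekly_cost)
print(best, round(weekly_cost(best), 2))
```

With these assumed figures, servicing every two weeks minimizes expected weekly cost; with different breakdown histories the best interval shifts, which is why accurate records matter. Even so, as the text notes, breakdowns can still occur, which is what motivates TPM.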

Total productive maintenance (TPM) combines the practice of preventive maintenance with
the concepts of total quality—employee involvement, decisions based on data, zero defects, and
a strategic focus. Machine operators maintain their own machines with daily care, periodic
inspections, and preventive repair activities. They compile and interpret maintenance and
operating data on their machines, identifying signs of deterioration prior to failure. They also
scrupulously clean equipment, tools, and workspaces to make unusual occurrences more
noticeable. Oil spots on a clean floor may indicate a machine problem, whereas oil spots on a
dirty floor would go unnoticed.

In Japan this is known as the 5 S's—seiri, seiton, seiso, seiketsu, and shitsuke—roughly translated
as sort, set, shine, standardize, and sustain. Table 7.3 explains the 5 S's in more detail.

Table 7.3: 5S Workplace Scan


1. Seiri (sort)
   Goal: Keep only what you need.
   Eliminate or correct: unneeded equipment, tools, and furniture; unneeded items on walls and bulletins; items blocking aisles or stacked in corners; unneeded inventory, supplies, and parts; safety hazards.

2. Seiton (set in order)
   Goal: A place for everything and everything in its place.
   Eliminate or correct: items not in their correct places; correct places not obvious; aisle, workstation, and equipment locations not indicated; items not put away immediately after use.

3. Seiso (shine)
   Goal: Cleaning, and looking for ways to keep clean and organized.
   Eliminate or correct: floors, walls, stairs, equipment, and surfaces not clean; cleaning materials not easily accessible; lines, labels, or signs broken or unclean; other cleaning problems.

4. Seiketsu (standardize)
   Goal: Maintaining and monitoring the first three categories.
   Eliminate or correct: necessary information not visible; standards not known; checklists missing; quantities and limits not easily recognizable; items that cannot be located within 30 seconds.

5. Shitsuke (sustain)
   Goal: Sticking to the rules.
   Eliminate or correct: workers without 5S training; daily 5S inspections not performed; personal items not stored; job instructions not available or not up to date.

The 5 S's can be thought of as a simple housekeeping methodology for organizing work areas that
focuses on visual order, organization, cleanliness, and standardization. It helps to eliminate all
types of waste relating to uncertainty, waiting, searching for relevant information, creating
variation, and so on. By eliminating what is unnecessary and making everything clear and
predictable, clutter is reduced, needed items are always in the same place, and work is made
easier and faster.

In addition to operator involvement and attention to detail, TPM requires management to take a
broader, strategic view of maintenance. That means:
• Designing products that can easily be produced on existing machines;
• Designing machines for easier operation, changeover, and maintenance;
• Training and retraining workers to operate and maintain machines properly;
• Purchasing machines that maximize productive potential; and
• Designing a preventive maintenance plan that spans the entire life of each machine.

Supplier Networks

Supplier support is essential to the success of lean production. Not only must suppliers be
reliable; their production must also be synchronized with the needs of the customer they are
supplying. Toyota understood this and developed strong long-term working relationships with a
select group of suppliers. Supplier plants were located within a 50-mile radius of Toyota City,
making deliveries several times a day. Bulky parts such as engines and transmissions were
delivered every 15 to 30 minutes. Suppliers who met stringent quality standards could forgo
inspection of incoming goods. That meant goods could be brought right to the assembly line or
area of use without being counted, inspected, tagged, or stocked.

Suppliers who try to meet the increasing demands of a lean customer without being lean
themselves are overrun with inventory and exorbitantly high production and distribution costs.
Lean supply involves:
1. Long-term supplier contracts. Suppliers are chosen on the basis of their ability to meet
delivery schedules with high quality at a reasonable cost, and their willingness to adapt their
production system to meet increasingly stringent customer requirements. Typical contracts are
for three to five years, although some companies will choose a supplier for the life of the
product.

2. Synchronized production. With longer term contracts, suppliers are able to concentrate on
fewer customers. Guaranteed, steady demand with advanced notice of volume changes allows
the supplier to synchronize their production with that of the customer. Engineering and quality
management assistance may also be provided to the supplier.

3. Supplier certification. Suppliers go through several stages before certification. Typically, their
products undergo quality tests, their production facilities and quality systems are examined, and
statistical measures of quality are sent with each shipment. After six months or so with no
complications, a certification is issued that exempts the supplier from incoming quality and
quantity inspections. In spite of certification, many companies bill their suppliers for the damage
incurred by a defective part, such as the cost of a line shutdown or product recall.

4. Mixed loads and frequent deliveries. A lean supplier is an extension of the customer’s
assembly line. Small quantities may be delivered several times a day (or even hourly) directly to
their point of use in the customer’s factory. This usually involves smaller trucks containing a
mixed load of goods. Different suppliers often join together to consolidate deliveries or share
local warehouses.

5. Precise delivery schedules. Delivery windows to specific locations (docks, bays, or areas
along an assembly line) can be as short as 15 minutes. Penalties for missing delivery times are
high. Chrysler penalizes its suppliers $32,000 for each hour a delivery is late. With such tight
schedules, signing for and paying for a shipment at the time of delivery is too time-consuming.

6. Standardized, sequenced delivery. Using standardized containers and exchanging full
containers with empty ones upon delivery also speeds the delivery and replenishment process. In
some cases, deliveries made directly to the manufacturer are sequenced in the order of assembly.
Nissan, for example, receives deliveries of vehicle seats four times an hour and notifies the
supplier two hours in advance with the exact sequence (size and color) in which seats are to be
unloaded.

7. Locating in close proximity to the customer. With the increased number of deliveries in lean
production, it is imperative that the source of supply be located close to the customer. When
geographic distances between supplier and customer prohibit daily deliveries, suppliers may
need to establish small warehouses near to the customer or consolidate warehouses with other
suppliers. Trucking firms increasingly use consolidation warehouses as load-switching points for
JIT delivery to various customers. Maintaining close proximity can mean relocating around the
world, as shown by the number of suppliers who have moved to China and other Asian countries
in support of their customers.

JIT relies on a small network of reliable suppliers who can deliver parts frequently without the
need for inspection. Practices associated with JIT supply include:
 Locating near to the customer
 Using specially adapted vehicles
 Establishing small warehouses near the customer
 Using standardized containers
 Certification.

Activity 7
Which elements of lean are the most difficult to implement in Ethiopia? Why?

7.7 Advantages and Disadvantages of JIT


Advantages of JIT
Advocates of JIT claim it is a revolutionary concept that all manufacturers will have to adopt in
order to remain competitive. JIT encompasses the successful execution of all production
activities required to produce a product, from designing to delivery. Its benefits are many:
1. Shortened lead time.
2. Reduced time spent on non-process work.
3. Eliminated waste and rework, and consequently reduced requirements for raw materials,
manpower, and machine capacity
4. It increases worker motivation and teamwork.
5. Reduced inventory. As a result:
 Frees up working capital for other projects.
 Less space is needed.

 Customer responsiveness increases.
6. Reduce or eliminate setup times
7. Reduce lot sizes (manufacturing and purchase): reducing setup times allows economical
production of smaller lots; close cooperation with suppliers is necessary to achieve
reductions in order lot sizes for purchased items, since this will require more frequent
deliveries.
8. Problem clarification.
9. Cost savings
(a) Materials Cost Savings: Materials cost savings comprise reductions in purchasing, receiving,
inspection, and stockroom costs. Elements of materials cost savings are:
 Reduction of Suppliers
 Long-term Contracts
 Reduce Order Scheduling
 Simplify Receiving Systems
 Eliminate unpacking
 Eliminate Inspection
 Eliminate inventory Stocking
 Eliminate Excess Material.
(b) Manufacturing Cost Savings: Manufacturing cost savings cover savings in engineering,
production, and quality control activities. A major part of manufacturing cost savings is
maintaining a high level of quality; quality reduces cost and increases revenue.
(c) Sales Cost Savings: Sales cost savings come from reducing the overlap between supplier and
customer, such as duplicate inspection and testing. The most effective situation the sales
department can establish is finding customers that also use JIT systems.
a. Shorter total product cycle time
b. Improved product quality
c. Reduced scrap and rework
d. Smoother production flow
e. Less inventory of raw materials, work-in-progress, and finished goods
f. Higher productivity
g. Higher worker participation
h. A more skilled workforce, able and willing to switch roles
i. Reduced space requirements
j. Improved relationships with suppliers

Disadvantages of JIT
There are often a number of barriers that also have to be overcome to achieve the final goal.
 The JIT method demands a highly disciplined assembly-line process. The entire factory
has to be in sync to successfully exploit its methods. Manufacturers can afford fewer
errors in the delivery of suppliers’ components; if a part isn’t there, the assembly line
stops, and that can result in the loss of manpower and cash.
 Changes in production planning, inaccurate forecasting procedures resulting in under or
over forecasting of demand, equipment failures creating capacity problems and employee
absenteeism all create problems in implementing JIT.
 JIT requires special training and the reorganization of policies and procedures.
 Organizational cultures vary from firm to firm. Some cultures are conducive to JIT
success, but it is difficult for an organization to change its culture within a short time.
 Differences in implementation. Because JIT was originally developed in Japan, the
benefits realized elsewhere may vary.
 Resistance to change. JIT involves change throughout the whole organization, but
human nature resists change. The most common forms are emotional resistance
and rational resistance. Emotional resistance covers the psychological feelings that
hinder performance, such as anxiety. Rational resistance is the lack of the
information workers need to perform the job well.
 JIT requires workers to be multi-skilled and flexible to change.

Even with these drawbacks, however, we have found that most types of businesses can find some
parts or processes that can benefit from lean concepts. That includes service industries.

7.8 Basics of Constraints Management


The strength of any chain is determined by its weakest link. That fundamental truth is at the root
of the management concept called the theory of constraints. The implication in the management
philosophy is that all systems will have at least one constraint and it is better to manage it rather
than to constantly try to eliminate it. Once you strengthen the weakest link, some other link will
become the weakest. In every system, regardless of its size or complexity, there will be a
constraint. Realizing that the concept is not a theory but a reality, the names most often used
today for this business philosophy are constraints management and synchronous flow.

Timely presentation to the market is the greatest opportunity for competitive advantage
available to manufacturing industries in the modern world. If you can satisfy market demand
now, assuming good quality and competitive pricing, you are likely to have all the business you
can handle. This means operating under a system that allows very short production cycle times,
so that your lead time to the customer is significantly better than that of the competition. Lead
time is defined as the number of days (or hours) you can quote to your customer for delivery of a
product or production order. Lead time covers the total production cycle, which includes all the
work-in-process currently in the system, the raw material purchase cycle, and any order backlog
position that exists at the time. If an order is placed today, without pulling it ahead of any orders
already in the pipeline, how long will it take to deliver it to the customer?

The need to address this dilemma was the basis for development of constraints management.
How do we operate a manufacturing system so that it will meet short cycle production demands
without the need to maintain large stocks of finished goods inventory and, at the same time, is
resistant to the effects of variability? The concept of constraints management offers a unique
approach to this dilemma. The entire system is like a chain.

The strength of a chain, like the strength of a manufacturing system, is dependent on the weakest
link. You cannot get more through the system than the capacity of the weakest link or constraint
of the system. By focusing on the performance of the system’s constraint, rather than on the

performance of each resource, the highest total system productivity is achieved with the available
resources.

Protective Capacity
Synchronizing a manufacturing system involves selecting a constraint and operating the system
with unbalanced capacity, also called protective capacity.

The closer the system is to a balanced state, the more unstable it becomes. In other words, since a
truly balanced capacity system is a practical impossibility, when there is an attempt to create
balance, a condition called the wandering bottleneck occurs. Wherever “Murphy” last visited is
the current bottleneck. The management/supervisory staff finds itself always reacting to the latest
production problem. There is little opportunity to be proactive, because there is no way to predict
where or to what degree normal variability will appear.

Let us be clear on the definition of a constraint. A constraint is anything that limits the degree to
which an organization can satisfy its purpose. This is similar to the commonly known notion of a
bottleneck, but the difference is that a constraint can be strategically located rather than having
its location determined by chance. This will be explained in greater detail in the next
section.

Types of Constraints
There are three types of constraints. Physical (logistical) constraints are the most obvious. They
are resources within the system whose capacity is equal to or less than the demand placed upon
them. Physical constraints can be both internal (the capacity of a given resource) and external
(the capacity of a supplier to provide the necessary raw material, or even the market itself when
manufacturing capacity exceeds market demand). Policy (managerial) constraints are decrees or
rules from the management staff that limit the system’s performance because they do not lead
directly to achieving the goals and objectives of the system. Causing the system to emphasize
efficiency or resource utilization is an example of a policy constraint.

Paradigm (behavior) constraints are entrenched habits or assumptions of people in the system
that “things must be done this way because they have always been done this way.” Ironically,
paradigm constraints often lead to policy constraints, which may lead to physical constraints.
One thing is for sure: If the goal of the organization is to continually increase value added, and
its actual value added is something less than infinite, then a constraint exists. Every system,
without exception, has a constraint.

The synchronous manufacturing approach requires that the constraint be clearly visible. The
difference in capacity between the constraint and the nonconstraints must be significant enough that
normal production problems (such as machine malfunctions and operator absenteeism) do not
disrupt the product flow. There must be “protective capacity” at all the nonconstraints. The
capacity of these resources must be sufficient to absorb the normal system variability without
starving the constraint of work. Also, there must be a sufficient buffer of work preceding the
constraint to act as a shock absorber for these normal fluctuations in production. In fact, there is
a dynamic relationship between productive capacity (the capacity of the constraint which is the

capacity of the system), protective capacity (the capacity of the nonconstraints which must be
greater than that of the constraint), and inventory (the amount of buffer within the system to
protect the constraint). The greater the differential between protective and productive capacities,
the smaller the buffer inventory needs to be. Conversely, the closer this differential is to being
even, the greater the inventory must be to protect the constraint from starvation.

Variability is the compounding factor in this relationship. The greater the system variability
(normal statistical fluctuations), the more protective capacity and/or inventory is needed to
maintain stability. Keep in mind that anytime the constraint is starved, production is lost for the
whole system.
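The trade-off among productive capacity, protective capacity, and buffer inventory can be made concrete with a rough calculation. This is a sketch under simplified assumptions (constant rates, a single feeding resource); the rates and the disruption duration are hypothetical:

```python
def buffer_needed(constraint_rate: float, disruption_hours: float) -> float:
    """Units of buffer consumed while a feeding resource is down.

    The constraint keeps drawing from the buffer during the disruption,
    so the buffer must hold at least this much to avoid starvation.
    """
    return constraint_rate * disruption_hours

def replenish_hours(buffer_units: float, nonconstraint_rate: float,
                    constraint_rate: float) -> float:
    """Time for the repaired nonconstraint to rebuild the buffer.

    It refills at the *differential* between its rate and the
    constraint's rate -- i.e., the protective capacity.
    """
    return buffer_units / (nonconstraint_rate - constraint_rate)

# The constraint runs at 10 units/h; a feeding resource goes down for 2 hours.
b = buffer_needed(10, 2)           # 20 units must be buffered to survive it
print(replenish_hours(b, 12, 10))  # small differential: slow recovery
print(replenish_hours(b, 15, 10))  # larger differential: faster recovery
```

The larger the protective-capacity differential, the faster the buffer recovers between disruptions, which is why less buffer inventory is needed when the system is deliberately unbalanced.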

Drum–Buffer–Rope
The concept just described is called Drum–Buffer–Rope (DBR) scheduling. The constraint is the
drumbeat of the system. Ideally, the constraint never stops working. It is always producing or
setting up to be producing products that the system can ship. It is protected by a buffer of work-
in-process to assure that it always has work to do. As variability occurs at any of the resources
feeding the constraint, the buffer is depleted. By definition, the nonconstraints have a greater
capacity than the constraint, so when the problem is corrected, the nonconstraint has the time to
catch up before the constraint depletes the buffer. When the buffer is replenished to its specified
level, the nonconstraint stops working on that operation to avoid unnecessary inventory buildup.
In fact, the amount of raw material released into the system is controlled by the rope based on the
consumption of that raw material at the constraint. The rope is like a signal from the constraint
indicating the amount of raw material to be released.
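The drum-buffer-rope mechanics just described can be sketched as a toy hourly simulation. All rates, the buffer target, and the release rule are illustrative assumptions, not a production scheduling algorithm:

```python
def simulate_dbr(hours: int, release_per_hour_cap: float,
                 constraint_rate: float, nonconstraint_rate: float,
                 target_buffer: float) -> float:
    """Toy DBR loop: the rope releases raw material only to replace
    what the constraint (the drum) consumed from its buffer."""
    buffer = target_buffer   # work-in-process protecting the constraint
    shipped = 0.0
    for _ in range(hours):
        # Drum: the constraint works whenever the buffer can feed it.
        consumed = min(constraint_rate, buffer)
        buffer -= consumed
        shipped += consumed
        # Rope: release just enough for the nonconstraints to refill
        # the buffer to its target -- never more, to cap inventory.
        refill = min(nonconstraint_rate, target_buffer - buffer,
                     release_per_hour_cap)
        buffer += refill
    return shipped

# The constraint at 10/h paces the system even though feeders can do 14/h.
print(simulate_dbr(hours=8, release_per_hour_cap=14, constraint_rate=10,
                   nonconstraint_rate=14, target_buffer=30))  # -> 80.0
```

Note that the nonconstraints' spare capacity is never used to build extra inventory; it exists only to refill the buffer after a disruption.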

There are several types of buffers. The constraint buffer protects the constraint’s ability to meet
its schedule. The shipping buffer protects due date integrity, given that due date performance is
the first constraint imposed on the system. The assembly buffer prevents constraint parts from
waiting on nonconstraint parts at assembly. A raw material buffer will protect the ability to meet
the release schedule against nonperformance of the raw material suppliers.

The size of the buffer is expressed in terms of time. The level of inventory in the buffer is
converted into time by determining how long it would take the bottleneck or constraint to
produce that amount of inventory. For example, if the shipping buffer has 20 products in it and
each of these requires 30 minutes of processing on the bottleneck, then the shipping buffer is a
10-hour buffer.
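The worked example above (20 products at 30 minutes each on the bottleneck) translates directly into a small helper, a minimal sketch:

```python
def buffer_in_hours(units_in_buffer: int,
                    minutes_per_unit_at_constraint: float) -> float:
    """Express a buffer's inventory level as time: how long the
    constraint would take to produce that amount of inventory."""
    return units_in_buffer * minutes_per_unit_at_constraint / 60

# 20 products, each requiring 30 minutes on the bottleneck:
print(buffer_in_hours(20, 30))  # -> 10.0 (a 10-hour shipping buffer)
```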

The concept of DBR contains several basic algorithms, which are:


 The customer due date minus the shipping buffer equals the constraint’s due date.
 The constraint’s due date minus the constraint processing and setup times equals the
constraint start date.
 The constraint start date minus the constraint buffer equals the material release date.
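The three scheduling rules above are ordinary backward date arithmetic. A sketch (the buffer sizes and processing times are illustrative):

```python
from datetime import datetime, timedelta

def dbr_schedule(customer_due: datetime,
                 shipping_buffer: timedelta,
                 constraint_process_and_setup: timedelta,
                 constraint_buffer: timedelta):
    """Work backward from the customer due date using the DBR rules."""
    constraint_due = customer_due - shipping_buffer
    constraint_start = constraint_due - constraint_process_and_setup
    material_release = constraint_start - constraint_buffer
    return constraint_due, constraint_start, material_release

due, start, release = dbr_schedule(
    customer_due=datetime(2015, 6, 10, 17, 0),
    shipping_buffer=timedelta(hours=8),
    constraint_process_and_setup=timedelta(hours=5),
    constraint_buffer=timedelta(hours=12),
)
# Raw material must be released 8 + 5 + 12 = 25 hours before the due date.
print(release)
```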

The Five Focusing Steps


There are five focusing steps to applying this concept to a manufacturing system. It is impossible
to “focus” on everything (as is the normal strategy) so we should decide what resources are truly
determining the capacity of the system and concentrate our efforts there. Just as the weakest link

of a chain determines the strength of the chain, the system’s constraint determines the capacity of
the system. Every system has a constraint, just as every chain has only one weakest link.
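The chain analogy has a direct numerical counterpart: the system's capacity is the minimum of its resources' capacities. The station names and rates below are hypothetical:

```python
station_capacity = {   # units per hour (hypothetical plant)
    "cutting": 14,
    "welding": 9,      # the weakest link
    "painting": 12,
    "assembly": 11,
}

# The constraint is the station with the lowest capacity,
# and it alone determines the capacity of the whole system.
constraint = min(station_capacity, key=station_capacity.get)
system_capacity = station_capacity[constraint]
print(constraint, system_capacity)  # -> welding 9
```

Raising capacity anywhere except at "welding" would not ship a single extra unit, which is exactly why the five focusing steps concentrate on the constraint.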

Before applying the five focusing steps, it is important to define the system or the scope of the
process to be synchronized. The system could be a plant, a value stream within a plant, or a
supply chain composed of several plants including suppliers and the customer. In any case, it is
important to first decide what the boundaries of the system to be synchronized are. Next, we
must describe the purpose of the system and decide how to measure it. As stated earlier, we must
know just what we are seeking and we must set up measurements to achieve those results. It
cannot be overemphasized that the measurements will determine the performance, so we had
better be monitoring the important things and not just the traditional cost accounting factors such
as efficiency and utilization.

Having defined the system and its purpose including the measurements to be achieved, the first
of the focusing steps is to identify the constraint. Find it or pick it. In other words, determine
where the constraint naturally exists or pick a place where you strategically want the constraint
to be. This will be the focal point of the entire synchronized system, so it is an important
decision. It is important to note that, in the context of synchronous manufacturing, a constraint is
actually a good thing as long as we have correctly identified it and we use its performance to
manage the rest of the system.

The second of the focusing steps is to exploit the constraint. Exploitation means assuring that the
constraint is working only on products that will ship and for which the system will receive
payment. It means assuring that only first quality parts are allowed to pass through the constraint.
We would not want to waste any of this valuable constraint time on products that will not be
shippable. Also, exploit means working the constraint during every available hour of the work
day. Any time lost at the constraint is time lost for the system.

The third focusing step is to subordinate everything else in the system to the decision on where
the constraint is to be located. Subordination is the most difficult of the synchronous flow
methods. Every other management decision including the release of raw materials into the
system must be based on the consumption at the constraint. Releasing raw materials at a faster
rate just to keep some resource busy will only add unneeded inventory in the system, which will
lengthen the customer lead times. If only we worried as much about idle inventory as we do
about idle workers, our plants would be much more responsive to the market.

Having completed the first three focusing steps, the only way to get more out of the system is to
elevate the constraint. That means to add more capacity at this critical resource. This could mean
adding equipment or people, or it could mean offloading some of the constraint duties to other
resources. The net result is more capacity at the constraint, which always results in more capacity
for the entire system. However, if we elevate the constraint, it is possible that it will be broken.
That is, the constraint we identified may no longer be the constraint. Whenever this happens, the
logistical constraint will move to some other location in the system, internally as another
resource or externally as at a supplier or in the market itself. We must then return to step one and
rethink the entire system identifying the new constraint. It is important here to watch out for
inertia in the form of policy or paradigm constraints that may not have been issues when the

constraint was in its original location, but now may well be. Because everything in the system
must be subordinated to the performance of the constraint, we often create rules to assure that
this happens. When the constraint moves, those rules are no longer valid, so it is imperative that
we review the policies (both formal and informal) that have been developed. Where they are no
longer needed, they must be voided.

Chapter Summary
Lean production has truly changed the face of manufacturing and transformed the global
economy. Originally known as just-in-time (JIT), it began at Toyota Motor Company as an effort
to eliminate waste (particularly inventories), but it evolved into a system for the continuous
improvement of all aspects of manufacturing operations. Lean production is both a philosophy
and a collection of management methods and techniques; the main advantage of the system is
derived from the integration of those techniques into a focused, smooth-running management
system.

In lean systems, workers are multifunctional and are required to perform different tasks,
as well as aid in the improvement process. Machines are also multifunctional and are arranged in
small, U-shaped work cells that enable parts to be processed in a continuous flow through the
cell. Workers produce parts one at a time within the cells and transport parts between cells in
small lots as called for by subassembly lines, assembly lines, or other work cells. The
environment is kept clean, orderly, and free of waste so that unusual occurrences are visible.
Schedules are prepared only for the final assembly line, on which several different models are
assembled. Requirements for component parts and subassemblies are then pulled through the
system with kanbans. The principle of the pull system is not to make anything until requested to
do so by the next station. The pull system will not work unless production is uniform, setups are
quick, and lot sizes are low.

The pull system and kanbans are also used to order materials from outside suppliers. Suppliers
are fewer in number and must be very reliable. They may be requested to make multiple
deliveries of the same item in the same day, so their manufacturing system must be flexible, too.
Deliveries are made directly to the factory floor, eliminating stockrooms and the waste of
counting, inspecting, recording, storing, and transporting. Lean production does not produce in
anticipation of need; it produces only necessary items in necessary quantities at necessary times.
Inventory is viewed as a waste of resources and an obstacle to improvement. Because there is
little buffer inventory between workstations, quality must be extremely high, and every effort is
made to prevent machine breakdowns. When all these elements are in place, lean systems
produce high-quality goods quickly and at low cost, and they are able to respond to changes in
customer demand. Lean production systems are most effective in repetitive environments, but
elements of lean can be applied to almost any operation, including service operations: lean
retailing, lean banking, and lean health care are good examples.

The process of synchronous manufacturing is largely intuitive, but it is usually contrary to the
accepted practices of management, particularly if management decisions are made based on cost
accounting practices. If we can look beyond the policies and rules that govern our organizations,
the idea of synchronizing the flow of material through the system based on consumption at an
identified constraint seems very intuitive. However, most companies do not look at their systems
in this manner. The constraints management business philosophy is the process by which an
organization can take a commonsense look at the whole system and achieve results that are
uncommon by today’s standards. Constraints management is not limited to manufacturing
organizations: service companies, distribution systems, and even not-for-profit organizations are
using the same principles, previously applied only in manufacturing, to help them reach their
goals. Every system, no matter what its process or ultimate objective, has a constraint, and its
performance can be maximized by applying the principles of constraints management. This is a
process of continuous improvement, so it is being utilized in all types of organizations as a
methodology for continually raising the bar of performance. Constraints management techniques
offer proven solutions to the problems facing the world’s manufacturing base. The commonsense
approaches of this philosophy allow higher productivity, lower work-in-process levels, and faster
processing times, which lead to shorter customer lead times. Application to the entire supply
chain offers increased advantages by providing the tools of communication and synchronization
that are critical to optimum performance.

Review Questions
Multiple Choice Questions
1. All of the following statements are true EXCEPT:
a) With level schedules, a few large batches, rather than frequent small batches, are
processed.
b) The number of kanbans decreases as safety stock is decreased.
c) A kanban system requires little variability in lead time because shortages have their
impact on the entire productive system.
d) Inventory has only one positive aspect, which is availability; inventory has several
negatives, including increased material handling, obsolescence, and damage.
e) None of the above.
2. Which of the following is generally found in most JIT environments?
a. a push or pull system, depending upon the rate of demand
b. a push system for high margin items and a pull system for low margin items
c. a push system for purchased parts and a pull system for manufactured parts
d. push systems
e. pull systems
3. Which one of the following is not a benefit of the implementation of JIT?
a. cost reduction
b. variability increase
c. rapid throughput
d. quality improvement
e. rework reduction
4. Which of the following is specifically characterized by continuous and forced problem
solving via a focus on throughput and reduced inventory?
a. Just-in-time (JIT)
b. Toyota Production System (TPS)
c. Lean operations
d. Material requirements planning (MRP)
e. kanban
5. Which of the following statements regarding a pull system is true?
a. Large lots are pulled from upstream stations.
b. Work is pulled to the downstream stations before it is actually needed.
c. Manufacturing cycle time is increased.
d. Problems become more obvious.

e. None of the above is true of a pull system.

Discussion Questions
1. Explain how just-in-time processes relate to the quality of an organization's outputs.
2. What is the purpose of lean production?
3. Why are flexible resources essential to lean production?
4. What does a cellular layout contribute to lean production?
5. Differentiate between a push and a pull production system.

CHAPTER VIII

MECHANIZATION AND AUTOMATION

Mechanization and automation are two notable means that have historically influenced the
performance of operations management. This chapter addresses both topics; in addition, the
effects of automation and the strategies of automation are covered.

Learning Objectives
After studying this chapter, students will be able to:
 Understand the historical background of mechanization
 Distinguish the concepts and growth of automation
 Discuss the effects of automation

8.1 Introduction
Mechanization refers to the use of powered machinery to help a human operator in some task. It
also refers to the replacement of human (or animal) power with mechanical power of some form.
The driving force behind mechanization has been humankind’s propensity to create tools and
mechanical devices. The term is most often used in industry. The addition of powered machine
tools, such as the steam-powered lathe, dramatically reduced the amount of time needed to carry
out various tasks and improved productivity. Today very little construction of any sort is carried
out with hand tools. The term is also used in the military where it refers to the use of vehicles,
notably armored personnel carriers (APCs). The use of hand powered tools, however, is not an
example of mechanization.

8.2 Assembly Line


An assembly line is a manufacturing process in which interchangeable parts are added to a
product in a sequential manner to create an end product. The assembly line was first introduced
by Eli Whitney to create muskets for the U.S. Government. Henry Ford later introduced the
moving assembly line for his automobile factory to cut manufacturing costs and deliver a
cheaper product.

History of the Assembly Line

Until the 1800s, craftsmen would create each part of a product individually, and assemble them,
making changes in the parts so that they would fit together - the so-called English System of
manufacture.
 Eli Whitney invented the American System of manufacturing in 1799, using the ideas of
division of labor and of engineering tolerance, to create assemblies from parts in a
repeatable manner.
 This linear assembly process, or assembly line, allowed relatively unskilled laborers to
add simple parts to a product. As all the parts were already made, they just had to be
assembled.
 While originally not of the quality found in hand-made units, designs using an assembly
line process required less knowledge from the assemblers, and therefore could be created
for a lower cost.
Henry Ford installed the world’s first moving assembly line on December 1, 1913, as one of
several innovations intended to cut costs and permit mass production. The idea was an
adaptation of the system used in the meat processing factories of Chicago and of the conveyor
belts used in grain mills. By bringing the parts to the workers, considerable time was saved.
Although Whitney was first to use the assembly line in the industrial age, the idea of
interchangeable parts and the assembly line was not new, though it was little used. The idea was
first developed in Venice several hundred years earlier, where ships were produced using pre-
manufactured parts, assembly lines, and mass production; the Venice Arsenal apparently
produced nearly one ship every day, in what was effectively the world’s first factory.

8.3 Industrial Robot


An industrial robot is defined (in the ISO 8373 standard) as an automatically controlled,
reprogrammable, multipurpose manipulator programmable in three or more axes. Put simply,
industrial robotics refers to the study, design, and use of robots for manufacturing. Typical
applications of industrial robots include welding, painting, ironing, assembly, palletizing,
product inspection, and testing.

There are a small number of commonly used robot configurations for industrial automation,
including articulated robots (the original, and most common), SCARA (Selective Compliance
Assembly Robot Arm) robots and gantry robots (Cartesian robots, or x-y-z robots). In the context
of general robotics, most types of industrial robots would fall into the category of robot arms
(inherent in the use of the word manipulator in the above-mentioned ISO standard).

Industrial robot actions are determined by programmed routines that specify the direction, speed,
and distance of a series of coordinated motions. For more precise guidance, robots are often
assisted by machine vision systems acting as their “eyes”. The setup of motions and sequences
for an industrial robot is sometimes done by an operator using a teaching pendant, a handheld
control and programming unit.
The first company to produce an industrial robot was Unimation.

8.4. The Age of Automation


Automation or Industrial Automation is the use of computers to control industrial machinery and
processes, replacing human operators. It is a step beyond mechanization, where human operators
are provided with machinery to help them in their jobs. Automation is a technology concerned

224
with the application of mechanical, electronic, and computer based systems to operate and
control production. This technology includes automatic machine tools to process parts, automatic
assembly machines, industrial robots, automatic material handling and storage systems,
automatic inspection systems for quality control, feedback control and computer process control,
computer systems for planning, data collection and decision-making to support manufacturing
activities. The most visible part of automation can be said to be industrial robotics. Some
advantages are repeatability, tighter quality control, waste reduction, integration with business
systems, increased productivity and reduction of labor. Some disadvantages are high initial costs
and increased dependence on maintenance.

8.5 Types of Automation


Automated production systems can be classified into three basic types:
1. Fixed automation,
2. Programmable automation, and
3. Flexible automation.

1. Fixed Automation
It is a system in which the sequence of processing (or assembly) operations is fixed by the
equipment configuration. The operations in the sequence are usually simple. It is the integration
and coordination of many such operations into one piece of equipment that makes the system
complex. The typical features of fixed automation are:
a. High initial investment for custom-engineered equipment;
b. High production rates; and
c. Relatively inflexible in accommodating product changes.
The economic justification for fixed automation is found in products with very high demand
rates and volumes. The high initial cost of the equipment can be spread over a very large number
of units, thus making the unit cost attractive compared to alternative methods of production.
Examples of fixed automation include mechanized assembly and machining transfer lines.

2. Programmable Automation
In programmable automation, the production equipment is designed with the capability to change the sequence of
operations to accommodate different product configurations. The operation sequence is
controlled by a program, which is a set of instructions coded so that the system can read and
interpret them.

New programs can be prepared and entered into the equipment to produce new products. Some
of the features that characterize programmable automation are:
(a) High investment in general-purpose equipment;
(b) Low production rates relative to fixed automation;
(c) Flexibility to deal with changes in product configuration; and
(d) Most suitable for batch production.
Automated production systems that are programmable are used in low and medium volume
production. The parts or products are typically made in batches. To produce each new batch of a
different product, the system must be reprogrammed with the set of machine instructions that
correspond to the new product. The physical setup of the machine must also be changed over:

225
Tools must be loaded, fixtures must be attached to the machine table, and machine
settings must be entered. This changeover procedure takes time. Consequently, the typical cycle
for a given product includes a period during which the setup and reprogramming take place,
followed by a period in which the batch is produced. Examples of programmable automation
include numerically controlled machine tools and industrial robots.

3. Flexible Automation
It is an extension of programmable automation. A flexible automated system is one that is
capable of producing a variety of products (or parts) with virtually no time lost for changeovers
from one product to the next. There is no production time lost while reprogramming the system
and altering the physical setup (tooling, fixtures, and machine setting). Consequently, the system
can produce various combinations and schedules of products instead of requiring that they be
made in separate batches. The features of flexible automation can be summarized as follows:
a. High investment for a custom-engineered system.
b. Continuous production of variable mixtures of products.
c. Medium production rates.
d. Flexibility to deal with product design variations.

The essential features that distinguish flexible automation from programmable automation are:
(1) the capacity to change part programs with no lost production time; and (2) the capability to
changeover the physical setup, again with no lost production time. These features allow the
automated production system to continue production without the downtime between batches that
is characteristic of programmable automation. Changing the part programs is generally
accomplished by preparing the programs off-line on a computer system and electronically
transmitting the programs to the automated production system. Therefore, the time required to do
the programming for the next job does not interrupt production on the current job. Advances in
computer systems technology are largely responsible for this programming capability in flexible
automation. Changing the physical setup between parts is accomplished by making the
changeover off-line and then moving it into place simultaneously as the next part comes into
position for processing. The use of pallet fixtures that hold the parts and transfer into position at
the workplace is one way of implementing this approach. For these approaches to be successful,
the variety of parts that can be made on a flexible automated production system is usually more
limited than on a system controlled by programmable automation.

8.6 Reasons for Automation


Following are some of the reasons for automation:
1. Increased productivity: Automation of manufacturing operations holds the promise of
increasing the productivity of labor. This means greater output per hour of labor input.
Higher production rates (output per hour) are achieved with automation than with the
corresponding manual operations.
2. High cost of labor: The trend in the industrialized societies of the world has been toward
ever-increasing labor costs. As a result, higher investment in automated equipment has
become economically justifiable to replace manual operations. The high cost of labour is
forcing business leaders to substitute machines for human labor. Because machines can
produce at higher rates of output, the use of automation results in a lower cost per unit of
product.

3. Labor shortages: In many advanced nations there has been a general shortage of labor.
Labor shortages stimulate the development of automation as a substitute for labor.
4. Trend of labor toward the service sector: There has been a tendency for people to view
factory work as tedious, demeaning, and dirty. This view has caused them to seek
employment in the service sector of the economy: government, insurance, personal
services, legal, sales, etc. Hence, the proportion of the work force employed in
manufacturing is declining.
5. Safety: By automating the operation and transferring the operator from an active
participation to a supervisory role, work is made safer.
6. High cost of raw materials: The high cost of raw materials in manufacturing results in
the need for greater efficiency in using these materials. The reduction of scrap is one of
the benefits of automation.
7. Improved product quality: Automated operations not only produce parts at faster rates
but they produce parts with greater consistency and conformity to quality specifications.
8. Reduced manufacturing lead time: With reduced manufacturing lead time automation
allows the manufacturer a competitive advantage in promoting good customer service.
9. Reduction of in-process inventory: Holding large inventories of work-in-process
represents a significant cost to the manufacturer because it ties up capital. In-process
inventory is of no value. It serves none of the purposes of raw materials stock or finished
product inventory. Automation tends to reduce in-process inventory by shortening the time a
workpart spends in the factory.
10. High cost of not automating: A significant competitive advantage is gained by
automating a manufacturing plant. The benefits of automation show up in intangible and
unexpected ways, such as improved quality, higher sales, better labour relations, and
better company image. All of these factors act together to make production automation a
feasible and attractive alternative to manual methods of manufacture.
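Reasons 1 and 2 above are essentially quantitative: automation spreads labor cost over more units of output. A minimal sketch with illustrative figures (not taken from the module) of how unit cost can fall even when hourly equipment cost rises:

```python
# Hypothetical cost-per-unit comparison between a manual and an automated
# operation. All figures are illustrative.

def cost_per_unit(labor_rate_per_hr, machine_rate_per_hr, units_per_hr, material_cost):
    """Unit cost = (labor + machine cost per hour) / output rate + material cost."""
    return (labor_rate_per_hr + machine_rate_per_hr) / units_per_hr + material_cost

manual = cost_per_unit(labor_rate_per_hr=20.0, machine_rate_per_hr=5.0,
                       units_per_hr=10, material_cost=2.0)
automated = cost_per_unit(labor_rate_per_hr=4.0,    # one operator tends several machines
                          machine_rate_per_hr=30.0,  # higher equipment cost per hour
                          units_per_hr=40,           # much higher production rate
                          material_cost=2.0)
print(round(manual, 2))     # 4.5
print(round(automated, 2))  # 2.85
```

The automated operation costs more per hour to run, but its higher production rate more than compensates, illustrating why a high cost of labor makes investment in automation economically justifiable.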

8.7 Advantages and Disadvantages of Automation


Advantages
Following are some of the advantages of automation:
1) Automation is the key to the shorter workweek. Automation will allow the average
number of working hours per week to continue to decline, thereby allowing greater leisure
time and a higher quality of life.
2) Automation brings safer working conditions for the worker. Since there is less direct
physical participation by the worker in the production process, there is less chance of
personal injury to the worker.
3) Automated production results in lower prices and better products. It has been estimated
that the cost to machine one unit of product by conventional general-purpose machine
tools requiring human operators may be 100 times the cost of manufacturing the same unit
using automated mass-production techniques. The electronics industry offers many
examples of improvements in manufacturing technology that have significantly reduced
costs while increasing product value (e.g., colour TV sets, stereo equipment, calculators,
and computers).
4) Automation is the only means of raising the standard of living. Only through the
productivity increases brought about by new automated methods of production is it
possible to advance the standard of living. Granting wage increases without a
commensurate increase in productivity results in inflation. To afford a better society,
productivity must increase.

Disadvantages
Following are some of the disadvantages of automation:
1) Automation will result in the subjugation of the human being by a machine. Automation
tends to transfer the skill required to perform work from human operators to machines. In
so doing, it reduces the need for skilled labour. The manual work left by automation
requires lower skill levels and tends to involve rather menial tasks (e.g., loading and
unloading workpart, changing tools, removing chips, etc.). In this sense, automation tends
to downgrade factory work.
2) There will be a reduction in the labour force, with resulting unemployment. It is logical to
argue that the immediate effect of automation will be to reduce the need for human labour,
thus displacing workers.
3) Automation will reduce purchasing power. As machines replace workers and these
workers join the unemployment ranks, they will not receive the wages necessary to buy
the products brought by automation. Markets will become saturated with products that
people cannot afford to purchase. Inventories will grow. Production will stop.
Unemployment will reach epidemic proportions and the result will be a massive economic
depression.

8.8 Automation Strategies


There are certain fundamental strategies that can be employed to improve productivity in
manufacturing operations. These are referred to as automation strategies.
1. Specialization of operations: The first strategy involves the use of special purpose equipment
designed to perform one operation with the greatest possible efficiency. This is analogous to the
concept of labour specialization, which has been employed to improve labour productivity.
2. Combined operations: Production occurs as a sequence of operations. Complex parts may
require dozens, or even hundreds, of processing steps. The strategy of combined operations
involves reducing the number of distinct production machines or workstations through which the
part must be routed. This is accomplished by performing more than one operation at a given
machine, thereby reducing the number of separate machines needed. Since each machine
typically involves a setup, setup time can be saved as a consequence of this strategy. Material
handling effort and nonoperation time are also reduced.
3. Simultaneous operations: A logical extension of the combined operations strategy is to
perform at the same time the operations that are combined at one workstation. In effect, two or
more processing (or assembly) operations are being performed simultaneously on the same
workpart, thus reducing total processing time.
4. Integration of operations: Another strategy is to link several workstations into a single
integrated mechanism using automated work handling devices to transfer parts between stations.
In effect, this reduces the number of separate machines through which the product must be
scheduled. With more than one workstation, several parts can be processed simultaneously,
thereby increasing the overall output of the system.
5. Increased flexibility: This strategy attempts to achieve maximum utilisation of equipment for
job shop and medium volume situations by using the same equipment for a variety of products. It
involves the use of flexible automation concepts. Prime objectives are to reduce setup time
and programming time for the production machine. This normally translates into lower
manufacturing lead time and lower work-in-process.
6. Improved material handling and storage systems: A great opportunity for reducing non-
productive time exists in the use of automated material handling and storage systems. Typical
benefits include reduced work-in-process and shorter manufacturing lead times.
7. On-line inspection: Inspection for quality of work is traditionally performed after the process.
This means that any poor quality product has already been produced by the time it is inspected.
Incorporating inspection into the manufacturing process permits corrections to the process as
product is being made. This reduces scrap and brings the overall quality of product closer to the
nominal specifications intended by the designer.
8. Process control and optimization: This includes a wide range of control schemes intended to
operate the individual processes and associated equipment more efficiently. By this strategy,
individual process times can be reduced and product quality improved.
9. Plant operations control: Whereas the previous strategy was concerned with the control of
individual manufacturing processes, this strategy is concerned with control at the plant level; its
implementation typically involves computer networking within the factory.
10. Computer integrated manufacturing (CIM): Taking the previous strategy one step further,
this strategy involves the integration of factory operations with engineering design and many of
the other business functions of the firm. CIM involves extensive use of computer applications,
computer databases, and computer networking in the company.
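Several of these strategies (combined operations, increased flexibility, improved material handling) act through the same few quantities: the number of operations a part passes through, the setup time per operation, and the nonoperation time. A common textbook model of manufacturing lead time per batch is MLT = n_o x (T_su + Q x T_o + T_no). The sketch below uses hypothetical values to show how fewer operations and shorter setups shorten lead time:

```python
# Manufacturing lead time (MLT) for one batch routed through n_o operations.
# A common textbook model: MLT = n_o * (T_su + Q*T_o + T_no), where
#   T_su = setup time per operation (hr), Q = batch size,
#   T_o  = operation time per part (hr),
#   T_no = nonoperation time (handling, waiting, inspection) per operation (hr).
# All numbers below are illustrative.

def mlt(n_o, t_su, q, t_o, t_no):
    return n_o * (t_su + q * t_o + t_no)

before = mlt(n_o=8, t_su=2.0, q=50, t_o=0.1, t_no=6.0)  # many stations, long waits
# Combined operations halve the routing; flexible automation and better
# material handling cut setup and nonoperation time.
after = mlt(n_o=4, t_su=0.5, q=50, t_o=0.1, t_no=2.0)
print(before)  # 104.0 hours
print(after)   # 30.0 hours
```

Note that the pure machining content (Q x T_o) is unchanged; the improvement comes entirely from attacking setups, routings, and waiting time, which is exactly what strategies 2, 5, and 6 do.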

Activity 8
Would you suggest automation for Ethiopian manufacturing firms at the present time? Why or
why not?

Chapter Summary
Mechanization refers to the use of powered machinery to help a human operator in some task. It
also refers to the replacement of human (or animal) power with mechanical power of some form.
The driving force behind mechanization has been humankind’s propensity to create tools and
mechanical devices. The term is most often used in industry. The addition of powered machine
tools, such as the steam powered lathe dramatically reduced the amount of time needed to carry
out various tasks, and improved productivity. Automation or Industrial Automation is the use of
computers to control industrial machinery and processes, replacing human operators. It is a step
beyond mechanization, where human operators are provided with machinery to help them in
their jobs. The most visible part of automation can be said to be industrial robotics. Some
advantages are repeatability, tighter quality control, waste reduction, integration with business
systems, increased productivity and reduction of labor. Some disadvantages are high initial costs
and increased dependence on maintenance. Both mechanization and automation have
advantages in terms of productivity improvement in different sectors of an economy. However,
they have limitations as well.

Review Questions
Multiple Choice Questions
1. Automation has brought about changes in the worker’s relation to the job. A. True B.
False

2. Mechanization is a step beyond automation, where human operators are provided with
machinery to help them in their jobs. A. True B. False
3. The most visible part of mechanization can be said to be industrial robotics. A. True B.
False
4. ________is the use of computers to control industrial machinery and processes, replacing
human operators.
A. Industrial Automation
B. Automation
C. Mechanization
D. A and B
E. None of the above.
5. One of the following is not among the breakthroughs that characterize the
development of automation over time.
A. Interchangeable manufacture
B. Assembly lines
C. Robots
D. Mechanized automation
E. None of the above

Discussion
1. What is the basic difference between mechanization and automation?
2. Discuss different types of automation.
3. Discuss the reasons for automation.
4. Discuss the different strategies of automation.
5. What are the advantages and disadvantages of automation?

CHAPTER IX

THE FUTURE OF OPERATIONS MANAGEMENT

In this module, we have developed an understanding of the future role of operations
management by considering how manufacturing has developed over the years. Finally, we will
try to spotlight “what lies ahead for managers concerned with the value creation processes?”
Even though foretelling the future is not possible, a scene of the future of operations
management can be set by understanding the expected challenges related to the following three
issues: managing global growth, understanding and gaining competitive advantage from
e-commerce, and achieving environmental soundness in operations. We shall examine each of
these in turn.

Learning Objectives
The purpose of this chapter is, therefore, to acquaint you with:
 The challenges of operations management as a result of global growth
 The issues of internet in the future of operations management
 Environmental concerns of operations management.

9.1 The Challenge of Global Growth
As newly industrializing countries seek to achieve widespread improvements in quality of life
and political stability, so their industries simultaneously present new consumer markets and new
sources of products and services to the world. In such countries, labour costs are low,
development grants are available to tempt inwards investors, and local markets are homogeneous
and tolerant.

These were the conditions within which mass production was born at the end of the nineteenth
century. Now, widespread communications mean that consumers in newly developing countries
are aware of the array of products and services from which their counterparts in the West may
choose, and want some of the same for themselves. While it may be true, at the end of the
twentieth century, that three-quarters of the people in the world had never used a telephone,
influences such as television and American feature films are effective stimulants for demand in
new consumer markets. More significantly, mobile phone technology has become commonplace
throughout the world.

Within this context, while many of the challenges relate to international (or global) marketing, it
is clear that operations strategy is of pivotal importance. The windows of opportunity for
commercial exploitation that appear as developing countries emerge as potential manufacturing
bases may be brief. Developing countries must export to earn foreign capital, but will inevitably
import goods as well. Once goods produced for Western markets are available in developing
countries, the nascent consumers there will begin to demand Western levels of quality. This may
seem obvious but the implications for operations management are immense.

Under mass production principles, the typical practice in setting up a manufacturing plant in a
developing country would be a compromise –based upon low expectations of competence in
workforces and local management. The local government’s motivation for allowing in a foreign
investor would be jobs (and thus prosperity) and a chance to export – at least to elsewhere in its
region. For the investor, however, the primary motivation would be to sell goods in the host country
– and perhaps in those surrounding it: the expectation would be that products made in the new
plant would not be acceptable for import to the home country of the investor. The opportunity for
its firms to hit the affluent export markets of the West (where the real money lay) would thus be
very limited for the host country. In this practice, products or component parts that were
considered good enough for export would typically be separated out from a batch – leaving those
of inferior quality to be sold to the local market. Now, as soon as the local market has
experienced (albeit vicariously) export quality, and has the income to allow it, such consumers
may be expected not to accept second best.

The wish of the developing country’s government to achieve export income is, of course, very
much in tune with foreign investors’ wishes to play global games with their capacity – reducing
the need to transport goods around the world. The strategy in some manufacturing industries is
now to produce for local requirements in each region and then cross-ship products from one
region to another to create niche (smaller quantity) markets.
While playing a global game with capacity has been a central part of corporate strategy since the
days of Henry Ford, actually exploiting global positions and developing economies in the present
day requires redefined excellence in operating strategy. In the expansion of mass production in
the early part of the twentieth century, American firms showed Europeans (and subsequently
others) how to apply Taylorism and Fordism, both for local consumption and for regional
exports. The same happened with the Japanese in the 1970s and 1980s, as lean production was
rolled into North America and Europe.

Some of the rationale for global manufacturing relates more to financial and marketing matters
than operations- and there are cases where, despite excellence in manufacturing, firms have met
failure. For example, Hewlett Packard moved production of its disk drives from the USA to
Malaysia in the early 1990s, in order to benefit from low taxation in the developing country. In
fact, the market for these products collapsed shortly afterwards and HP made losses in the
Malaysian operation. Had the production facility been in the USA, HP could have set these
losses against its profits there. Having moved to Malaysia, however, this was not possible. HP
was thus unable to reduce its tax burden in the USA and was faced with operating losses in
Malaysia – the worst of both worlds. The lesson is that, no matter how good the manufacturing
is, globalizing business may not always be the best path.

This has developed further into the partial or even complete exit from manufacturing of many
product-related firms (including HP). Instead of producing laptop computers, for example, most
of the main brands have moved to outsourced production – thereby avoiding some of the
difficulties of global networking, by leaving them to the specialist firms that have grown up to
address them. This is a trend that may be expected to continue- at least until international
agreements on such things as environmental impacts (and thus costs) of large-scale logistics, and
perhaps the intolerable risks of terrorism, are established and addressed.

Globalization and service operations


It can be argued that the internationalization of manufacturing has been one of the key drivers of
service firm internationalization. Airlines, hotels and banks have expanded into countries in
order to serve their globalizing customers, especially business travellers, who visit and work
outside their home country. Thus, the average split between domestic and foreign business in the
hotel industry is 50:50 worldwide. Some of the early growth in international hotel development
was linked directly to the airlines’ desire to have suitable accommodation available for their
passengers. Increasingly, service firm international expansion is fuelled by scale economies that
can be leveraged from strong brands, loyal customers and centralized facilities, especially
reservations and distribution systems.

Inward investment is both capital intensive and risky. The political, financial, economic and
social circumstances in a country may vary widely. Hence, service firms have developed
alternative strategies for creating brand presence without a high degree of financial investment.
The two most common alternatives are franchising and management contracting. These two
alternative business formats both involve collaboration between the international service firm
and a local investor. In the case of a franchise, the franchisee typically builds or leases the local
infrastructure, such as the restaurant, hotel or retail outlet, and agrees to run the business in
conformance with the precise system developed by the franchisor. Sheraton Addis is an example
of franchising system. A management contract, on the other hand, involves the local developer
appointing an international firm to manage and operate their business on their behalf.

It is clear that internationalization for service firms is of itself a challenge. The geographic
distance between operating units, the differences in the environmental context, local market and
labour conditions all create an operations challenge with regards to successful, consistent
performance, especially if the service is strongly branded. However, the lack of direct control
over the business through the adoption of these alternative business formats creates an additional
challenge. There are three main forms of control: centralizing, which means that all decisions are
taken by senior managers, usually at corporate headquarters; bureaucratic, which involves highly
detailed policies and procedures limiting subordinate discretion; and socialization, which is
largely through the adoption and dissemination of a shared organizational culture, especially
amongst operational managers.

There is some evidence emerging to suggest that hybrid firms, those that own and manage their
own operations and franchise them, have so called plural processes that help them outperform
firms that rely solely on one business format. Such processes include:
 modelling – franchise operators model themselves on company run units, thereby
encouraging the adoption of system-wide standards;
 ratcheting – the juxtaposition of company-owned and franchised units encourages
benchmarking across the two, thereby creating a climate of friendly competition, with
each trying to outperform the other;
 local learning – the franchisee’s closeness to their market enables the firm to learn
quickly about local market conditions;
 market pressure – corporate staff services developed to support operations are exposed to
market conditions when franchisees can opt out of utilizing these services;
 mutual learning – hybrid firms have a more diverse source of new ideas and alternative
range of screening processes than those available to firms operating within one business
format.

The impact of globalization on operations management


The rise of globalization has clearly necessitated a complete rethink for some firms in terms of
how they can organize and reconfigure themselves. Firms have to equip themselves in order to
compete in these global markets and two other factors – focus and agility, which we will discuss
later in the chapter – are often key means of doing so. Many markets now contain a number of
global players. This is true of both manufacturing and service sectors.

Table 9.1: Taxonomy of international operations strategies

Domestic location
 Domestic/multi-domestic orientation: Home country operations with domestic customers and
suppliers. General-purpose or no focused plant strategy.
 Global orientation: Home country operations with global sourcing, marketing and distribution.
Product or process plant strategy.

Regional location
 Domestic/multi-domestic orientation: The world is divided into regions containing common
requirements, culture and practices. Operations are located in each region and tailored to them,
with a market plant strategy. Very few links between regions.
 Global orientation: Operations are regionally divided but with product or process plant
strategies for each site. Each site can thus serve the rest of the world. Global sourcing.

Multinational location
 Domestic/multi-domestic orientation: Multiple locations, dispersed internationally, taking
advantage of low-cost resources. May adopt a general-purpose plant strategy or a strategy
focused on product or process. Sourcing may be local or a combination of local/international.
 Global orientation: Corporate value-adding chains are located to exploit optimal resources and
strategic capability. Global logistics, global sourcing and global brands.

Worldwide location
 Domestic/multi-domestic orientation: Market plant strategies with maximum market coverage
worldwide. Separate, autonomous plant strategies.
 Global orientation: Product market, product and process plant strategies employed, providing
global products as well as global brands.

Globalization impacts on operations management in a number of ways, including:


 capacity (locations and levels at each plant);
 skills requirements and employment levels in production;
 plant technology and supplier relations – which have to be configured in a plant-specific
way in order to deal with the peculiar requirements of each plant but managed as a global
network.

Investment in global operations


We need to bear in mind that the presence of globalization may determine the very definition of
what manufacturing operations are about. Globalization can have massive
implications for operations management and this can entail substantial investment in plant. This
investment by itself will not ensure success of global aspirations, but it is a necessary feature in
many industries. There are many sectors where firms continue to struggle to globalize their
business and establish their brand. Despite huge investment and an established brand name, some
companies have undergone significant difficulties. Major international firms, such as British
Airways, the retailer Sears and DuPont, are currently engaged in establishing their global
presence, not without problems. Underpinning their success is a range of operations-specific
areas, which combine to form a major part of operations strategy. These areas include capacity,
skill requirements, plant technology, and strategic alliances with suppliers and other, long-term
partnerships with key stakeholders in the business.

Developing an operations strategy for growth


A comparison of the traditional and modern aspects of this management task may lead us to
identify key parts of a strategy for growth. This is shown in Table 9.2.

Table 9.2: Comparing traditional and modern approaches to expanding operations

Human resources
 Traditional approach: Recruit locals and train them to necessary levels to operate equipment;
limited learning or skill development.
 Modern approach: Train and educate locals to operate equipment and develop ideas in a
participative manner (quality circles, kaizen, employee involvement, policy deployment, etc.).

Capacity
 Traditional approach: Transfer old equipment from home and produce for the local market;
limit export to that demanded by the local government.
 Modern approach: Set up world-class facilities to compete with products made anywhere. Plan
regional and even global (niche) exports as well as local consumption.

Process technology
 Traditional approach: Use old equipment to limit investment risks and satisfy local market
requirements.
 Modern approach: Invest for world-class operation and export; achieve payback within a short
product life.

Product technology
 Traditional approach: Retain at home. Overseas plants make old products, no longer sellable
at home.
 Modern approach: A compromise: technology high enough to satisfy local government
stipulations and market demands, without releasing advanced R&D.

Supply
 Traditional approach: Import kits of parts for assembly; buy locally only those items necessary
to appease local content requirements (generally low-value, heavy, bulky items).
 Modern approach: Set up supply lines to deliver, in a lean supply manner, all items to support
world-class operations; retain high-technology items at home and import them to overseas
plants.

Market impacts
 Traditional approach: Overseas plants do not affect markets for home plants.
 Modern approach: Overseas plants have an impact on international markets and require
strategies that develop network potential without harming home employment.

Management
 Traditional approach: Second managers from the home country.
 Modern approach: Train and educate local managers; gradually withdraw home managers.

Environmental impact
 Traditional approach: Convince local people that the impact is justified by the benefits of the
investment.
 Modern approach: Design environmentally sound operations and supply chains to minimize
impact while maintaining economic performance and remaining a ‘good corporate citizen’.

Information
 Traditional approach: Secrecy.
 Modern approach: Transparency.

9.2 The challenge of the Internet


The rate of change in management techniques reflects that in the commercial world itself and it
has probably never been greater. Whereas it is generally agreed that the technological changes in
the West were actually of greater significance in the first part of the twentieth century than in the
second, the arrival of developing nations in the commercial world – as independent players
rather than outposts of empires – is something new and is likely to be more pronounced in the
early decades of the twenty-first century than ever before.

Perhaps the most potentially significant technology facing operations managers at this moment
in time is that of the worldwide communications available from the Internet: in practice, this
translates to e-commerce (or e-business). Originally a military device, the Internet has captured
the imagination of businesses and consumers on a massive scale.
The nature of exchange on the Internet is very different:
 it is not just an economic exchange, but an informational and emotional exchange;
 sellers and buyers mutually create and consume;
 communication between parties is interactive;
 exchange can be carried out at any time and in any place.
This raises new challenges for operations management; three of these are designing the offer, speed of response and transparency.

a. Designing the offer


If the Internet delivers things that cannot be distinguished as either products or services, we need
a term to describe what is delivered. Davis and Meyer (1999) suggest the term ‘offer’. Offers
deliver economic utility just as products and services have always done, but they include two
other features, namely information capital and emotional capital. Moreover, the flow of utility,
information and emotion is not just from the provider to the consumer, but in both directions.
Value is derived in all three areas to the mutual benefit of both parties. For some time now, firms
have been paying their customers as well as customers paying them. For example, in the food
industry suppliers pay supermarket chains for premium space on their shelves and the
supermarkets ‘pay’ their customers with discounts in proportion to their spending power. This
happens in many other industries. In the first example the supplier is paying for the marketing of
their product, and in the second the chain is buying the loyalty of its customers. However, in the
world of blur, these exchanges can take on other dimensions.

b. Speed of response
The ability to conduct business at any time of day or night and in any part of the world (i.e. from
a laptop computer via wireless communication) is now taken for granted. With the notable
exception of aeroplanes, there are almost no locations on earth where one cannot receive, process
and transmit information in the form of text, spreadsheets, databases, illustrations, video and
audio clips, and so forth.

The operations manager can no longer rely upon reasonable expectations for response time:
requests for services and information may arrive at any time and require immediate action. The
consumer is thus being tempted with shorter lead times for manufactured items: in the fashion
industry, this threatens to upset the age-old tradition of seasons.

In retailing, the arrival of firms such as Amazon has destroyed traditional assumptions about
both high-street shopping and mail order: from the PC in the bedroom one can literally ‘shop the
world’ and have books, CDs – potentially anything – delivered to the door within a few days.
Elsewhere, grocers now make arrangements for goods to be ordered via the Internet and
delivered to the home. In the motor industry, the race is on to become the first company to make
the ‘3-day car’ – literally a personally specified car, available to the customer within 3 days.

The ability to communicate has been seen before as a prime force on operations: the demise of
mass production was brought about when consumers could no longer be kept in the dark about
alternatives to Henry Ford’s ‘black cars’. The Internet once again shows the power of the
unrestricted consumer to steer the development of operations by its demands. Achieving
sufficient speed of response for these consumer demands, once the possibilities are realized, may
not simply be a matter of working faster: traditional operations may need to be scaled down or
even destroyed and recreated.

c. Transparency
Traditional business has always assumed that confidential information can be kept secret. The
flip side of the Internet is that this comforting assumption may no longer be true. The amount of
information available and the technology for espionage combine to make almost any factor in an
organization impossible to conceal. If nothing can be kept secret, then the operations manager
has to learn to deal with managed risk. This opens up a new set of challenges – but also
opportunities for operations to become a genuine part of the organization’s competitive efforts.

9.3 The challenge of the environment


Without doubt one of the key phrases of the 1990s was sustainable development, often
abbreviated to the single word sustainability. This is generally agreed to mean meeting the needs
of the present without compromising the ability of future generations to meet their own needs.

In the early 1990s, 75 per cent of North American consumers claimed that their own purchasing
decisions were influenced by their perception of an organization’s environmental correctness,
and 80 per cent said they would pay more for environmentally ‘friendlier’ goods. A survey on
public attitudes showed that concern about the environment had remained during the years of
recession and was growing in line with the economic recovery; it was given as the third most
important issue that the public believed government should be addressing, behind only
unemployment and health, and above crime, education and the economy in general. The survey
also showed a dramatic shift in public opinion to favour the ‘polluter pays’ principle.

Commercial organizations must comply with policy and its related legislation, of course, and be
‘good corporate citizens’, but they are answerable to shareholders and many of the aspects of
sustainability, for good or ill, are in conflict with shareholders’ interests (for example, provision
for long-term considerations may reduce short-term financial returns).

To address this problem and give managers a reasonable (but still not easy) target on which to
focus, the concept of environmental soundness was developed. This starts with a division of
sustainability into three types of consideration: economic, environmental, and social (especially
social justice). Addressing the first two may result in environmental soundness. There are, of
course, many examples of large firms contributing to social policy – especially where there is a
high degree of development going on in the local economy. Thus, for example, large firms
wishing to exploit local natural mineral resources will build schools, roads and hospitals for the
host country, as a means of getting to the minerals. The danger here is that the large firm imposes
its culture (often Western) on the host country and begins to change morals and ways of
working. Such cultural imperialism may well be met with sectarian reactions – at the least it will
cause friction in the host country.

The environmental and economic areas of concern may be seen as the responsibility of the firm:
on the one hand, complying with – or exceeding – regulatory requirements on biophysical
impacts and, on the other, behaving in an appropriate manner to ensure value is returned to its
shareholders. Operations managers have two concerns in achieving environmental soundness (in
the eyes of their customers and regulators): ensuring that the operations in their own organization
are appropriate (or, perhaps, ‘green’) and dealing with the problems (and opportunities) that may
exist in the supply chains which feed it. We shall examine each of these, starting with the
organization itself.

Standards
For the operations manager’s own organization, there is a relatively straightforward approach –
that of complying with regulations and achieving accreditation to an approved standard. Just as
the business world has adopted, on a global level, the ISO 9000 series of standards in quality
(developed from the British Standard, BS 5750, during the early 1980s), so environmental
considerations have been covered by ISO 14000 (developed from BS 7750 in the early 1990s).
This is a series of standards, with certification awarded by accredited bodies. ISO 14001,
for example, is the core certifiable standard within the ISO 14000 series, under which the
inspectors will assess systems in use to monitor such factors as energy use, waste disposal,
recycling, and air and water pollution. Inevitably, perhaps, such standards are applied not to the
outputs of an operation (i.e. how ‘green’ are the firm’s products or services), but to the
management systems and procedures used to generate them. The criticism of this approach
(which is perhaps the only really practical approach for such standards) is that an organization
may develop excellent systems but still not produce environmentally sound outputs – good
paperwork does not guarantee good products and services. Nevertheless, the standards are a very
positive development for the environmental agenda and are widely respected.

Environmental concern can spill over very quickly into ‘ethical’ concern – especially in the eyes
of the popular press and therefore the consumers. This moves the focus away from
environmental soundness and towards sustainability – a goal that the individual firm may not be
able to espouse, as we have seen above. Another standard has been developed for this – SA
8000 (modelled after ISO 9000 and 14000, but with performance-based provisions). It was
launched in 1997 by the Council on Economic Priorities as a universal, independently verifiable
standard for social accountability, covering such matters as child labour, forced labour, health
and safety, freedom of association and the right to collective bargaining, discrimination,
disciplinary practices, working hours and compensation. SA 8000 is based on conventions
ratified at the International Labour Organization and related human rights instruments including
the Universal Declaration of Human Rights and the UN Convention on the Rights of the Child.

The combination of ethical and environmental concerns is known as corporate social
responsibility (CSR) – and is recognized as a prime issue for firms. However, while there may be
defined ‘scientific’ measurements of performance for ‘environmental’ and ‘economic’
responsibilities (e.g. particles in the atmosphere, levels of chemicals in discharges, and return on
net assets or simply profitability), for social and ethical activity no universal value systems can
exist.

Recycling
One of the most stringent legal penalties has arisen from the European Union legislation (2001)
that has required product manufacturers to take responsibility for recycling the products they sell
to consumers.

Quite simply, this will eventually mean that, at the end of the product’s useful life, the consumer
can return – free of charge – the fridge, or television, or lawnmower, or article of clothing, to the
place where it was bought, from whence it will be passed to the manufacturer, who must recycle
it in some way. This ‘reverse logistics’ operation is, of course – like the practice of taking empty
bottles back to the shop to claim a refund of the deposit – still common practice in parts of North
America and Europe, although largely phased out in the UK.

The EU regulators have also focused on the enemy of environmental soundness – the motor car.
Since 2001, under EU Directives launched progressively during the 1990s, vehicle producers
have begun to take responsibility for every car they produce at the end of its working life. This
regulation is intended to encourage producers to design cars for recycling.

The recycling directives go further in the car industry, however, with the threat of vehicle
manufacturers eventually having to take back every vehicle they have ever made that is still
running on European roads. Provision for such costs could clearly ruin some commercial
organizations.

In the consumer electronics field (next in line for the EU, after the car industry) manufacturers
face similar problems. The response of one producer, Panasonic, selling some 6000 televisions
each year, reveals an interesting twist to this challenge: if the products remain their
responsibility, perhaps they should also remain their property. Thus, rather than selling the
televisions to consumers, Panasonic is considering leasing or renting them – thus improving the
firm’s ability to keep records and discharge its responsibilities. A similar approach has been
taken in the UK by the communications giant BT. With one of the largest fleets of vehicles in the
country to manage, BT is reconsidering its policy on lubricants and other fluids. In the future, it
may effectively rent the oil from suppliers, returning it to them for cleaning and reprocessing
after a specified period of use in the engines of its vehicles.

The implications of all this for operations managers are clearly immense. Having grasped the
notions of just-in-time and leanness in operations, the challenge now must be to reduce the
biophysical impact of all activities and to consider the future state of products beyond the sale
and use by the consumer, working with materials which may not perform or behave in the same
way as virgin matter. Management systems must be designed to track a product, sometimes for
many years, in order to deal with it when it returns. This will always be a greater concern for
industries in which product lives are short or shortening, as they will present the greatest
potential environmental problems for society.

The supply chain

Another concern for operations managers is the environmental soundness of the origins of the
goods and services upon which the organization relies. This is a more complex matter than the
first concern, since the supply base is a more complex entity to address and it may not be
managed in the ‘planning and control’ way. The international standards described above make
some provision for this and, indeed, many organizations have sought to require their suppliers of
goods and services to be accredited to ISO 14000, in the same way that they used ISO 9000 a
decade earlier. However, as we saw above, these systems cannot guarantee environmental
soundness in the products and services, only in the systems which their producers and providers
use to manage their processes.

In the light of this realization, some organizations have developed their own systems for
measuring the environmental performance of their suppliers – often as an extension of their
existing vendor assessment schemes – and have also sought more effective accreditation.

It is possible to portray almost any management problem as a supply chain issue. This illustrates
how the environmental (and ethical) agenda must be dealt with in greater scope than simply that
of the immediate arena (i.e. the firm). Operations managers have thus to concern themselves with
their own responsibilities, and those of their suppliers of goods and services, if they are to avoid
the finger of public accusation being pointed at them for not being ‘environmentally friendly’ – or
perhaps environmentally sound.

Activity 9
Identify three key issues for operations managers in Ethiopia in trying to manage the future.
Explain.

Chapter Review
In the future, the role of operations managers will carry far greater responsibility than before.
Industrial operations/manufacturing used to be pretty simple. The factory manager or the
production director rarely had to think about suppliers or customers. All he did was to make sure
that his machinery was producing widgets at the maximum hourly rate. Once he had worked out
how to stick to that ‘standard rate’ of production, he could sit back and relax. Customer needs?
Delivery times? Efficient purchasing? That was what the purchasing department and the sales
department were there for. Piles of inventory lying around, both raw materials and finished
goods? Not his problem. Now it is. The 1980s were the decade of lean production and right-
first-time quality management. In the 1990s the game grew even tougher. Customers are
more and more demanding. They increasingly want the basic product to be enhanced by some
individual variation, or some special service. Companies sweat to keep up with their demands, in
terms both of the actual products and of the way they are delivered. A number of these issues
have been addressed in previous chapters of this module. But the point to bear in mind here is
this: not only do operations managers have to take on board these additional, major competitive
requirements, there are also other vitally important social/ environmental pressures that need to
be managed, as we have outlined in this chapter. In the future, the pressure put on
production/operations managers will be greater than ever. A key issue for operations managers in
trying to manage the future is that operations strategy must be in place to enable the firm to deal
with such changes. Undoubtedly, having strategic operations in place will decide the fate of
firms in both manufacturing and services settings, and combinations of both.

Review Questions
Multiple Choices Questions

1. The challenge of operations management arising because the way in which an operation is
managed has a significant impact on its customers, the individuals who work for it, the
individuals who work for its suppliers and the local community in which the operation is
located refers to:
a. Globalization
b. Corporate social responsibility
c. Environmental concern
d. All of the above
e. None of the above.
2. One of the following is not among the nature of exchange on the Internet.
a. It is not just an economic exchange, but an informational and emotional exchange;
b. Sellers and buyers mutually create and consume;
c. Communication between parties is interactive;
d. Exchange can be carried out at any time and in any place.
e. None of the above.
3. Meeting the needs of the present without compromising the ability of future generations
to meet their own needs refers to
a. Sustainability
b. Sustainable development
c. Recycling
d. A and B
e. All of the above.
4. One of the following practices does not reflect the modern approaches to expanding
operations globally
a. Training and educating locals to operate equipment and develop ideas in
participative manner
b. Recruiting locals and training them to necessary levels to operate equipment only.
c. Setting up world-class facilities to compete with products made anywhere.
d. Planning regional and even global (niche) exports as well as local consumption.
e. None of the above.
5. The presence of globalization determines the very definition of what manufacturing
operations are about. a) True b) False

Discussion questions
1. What impact will globalization and an increasingly international perspective on business
have on operations management?
2. How does a wider view of corporate social responsibility influence operations
management?
3. Why is it important for operations management to take its environmental responsibility
seriously?
4. What will new information and communication technologies mean for operations
management?

5. How does the issue of the supply chain affect operations management in the future? Discuss.

Hints for Activities

Activity 1:
The inputs for the model would be students, staff, materials such as stationery and other
teaching supplies, and capital in the form of government budgets. The transformation process
would take place in the form of knowledge creation (a version of informational transformation as
discussed in this module). The outputs are graduates, research findings, and community services.
Controlling takes place in the form of assessments and examinations, audits, performance
evaluations of the staff, and the like. Feedback to the system can be represented when the
school head holds discussion meetings with students, academia and other stakeholders.

Activity 2:
As most customers of the industry, particularly in the domestic market, belong to low-earning
groups, it would be most strategic to prioritize the cost objective. Nevertheless, the other
objectives should all be pursued as well, so that a tolerable level of quality, speed, flexibility,
and dependability is achieved.

Activity 3
Hint: Any product may be used for the purpose. For example, we can take the Bic (pen) product
for our purpose, hoping that these products are very frequently used by students like you.

Form design refers to the physical appearance of a product – its shape, color, size, and style. To
that end, the design of the Bic pen is very generic: it is neither a breakthrough nor a poorly
designed product. However, as compared to other ballpoint designs (e.g., the Lexi ballpoint), its
comfort in handling and even its aesthetics can be criticized.
In terms of functional design, which refers to how the product performs, the Bic ballpoint has an
outstanding image, as confirmed by its market presence over a very long time.

Activity 4
To prepare APP for a given manufacturing firm in Ethiopia, the information you need to collect
as requirements are
a. Details of the available production facility and raw materials.
b. The forecasted demand level covering the medium-range period
c. Financial planning information, including production costs such as raw material,
labor, and inventory planning costs. Specific cost information such as regular-time
labor costs, overtime costs, hiring and layoff costs, inventory holding costs, backorder
and stockout costs, and subcontracting costs is also relevant.
d. Organization policy around labor management, quality management, etc.
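Once these inputs are collected, evaluating a candidate aggregate plan is a matter of costing it out. The sketch below costs a simple level-production plan in which regular-time output is constant each period, shortfalls are covered by overtime, and surpluses are carried as inventory. All demand figures and unit costs here are hypothetical, invented purely for illustration.

```python
# Hypothetical aggregate production plan (APP) cost evaluation.
# All demand figures and unit costs below are invented for illustration.

demand = [400, 500, 650, 550]      # forecast units per period
regular_capacity = 500             # units producible at regular time per period
cost_regular = 20                  # cost per unit, regular time
cost_overtime = 30                 # cost per unit, overtime
cost_holding = 2                   # cost per unit carried to the next period

def level_plan_cost(demand, capacity, c_reg, c_ot, c_hold):
    """Produce at capacity every period; cover any shortfall with overtime;
    carry any surplus as inventory. Returns (total_cost, ending_inventories)."""
    inventory = 0
    total = 0.0
    inventories = []
    for d in demand:
        total += capacity * c_reg          # regular-time production cost
        available = inventory + capacity
        if available < d:                  # shortfall covered by overtime
            total += (d - available) * c_ot
            available = d
        inventory = available - d
        total += inventory * c_hold        # holding cost on end-of-period stock
        inventories.append(inventory)
    return total, inventories

cost, inv = level_plan_cost(demand, regular_capacity,
                            cost_regular, cost_overtime, cost_holding)
print(cost, inv)   # → 43400.0 [100, 100, 0, 0]
```

A planner would compute this total for several alternative strategies (chase demand, level production, mixed) and pick the cheapest one that respects the organization's labor and quality policies.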

Activity 5
The relationship between cost and quality is not necessarily linear. Some products are expensive
simply because of the prices charged by businesspersons or as a result of weak market
mechanisms. Hence, expensive products are not necessarily superior-quality products. Likewise,
not all superior-quality products are necessarily costly to produce. A good example can be drawn
from local products: for instance, Jebena Buna versus a cup of coffee in star-rated hotels. Jebena
Buna is likely the more organic product, and pure coffee grains might be used. In the case of the
latter, the powder used might not be pure, organic coffee, as these hotels may not use the locally
produced coffee bean. The powder used may also not suit the local tastes of Ethiopian
coffee users.

Activity 6
Yes, Six Sigma can be implemented in Ethiopia or in any country. What is necessary to
implement Six Sigma includes: top management leadership and commitment; a well-implemented
customer management system; a continuous education and training system; a well-organized
information and analysis system; a well-implemented process management system; a well-
developed strategic planning system; a well-developed supplier management system; equipping
everyone in the organization, from top management to employees, with a working knowledge of
the quality tools; a well-developed human resource management system; and a well-developed
competitive benchmarking system. Hence, any organization that can realize these preconditions
can implement Six Sigma in Ethiopia and reap the associated benefits.
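Although these preconditions are organizational, Six Sigma itself rests on a simple quantitative yardstick: defects per million opportunities (DPMO), with the conventional 1.5-sigma shift relating DPMO to a process sigma level. The sketch below computes both; the inspection figures are hypothetical, invented for illustration.

```python
# Defects per million opportunities (DPMO) – the basic Six Sigma yardstick.
# A process at the full Six Sigma level produces about 3.4 DPMO.
# The inspection figures below are hypothetical.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Approximate process sigma, using the conventional 1.5-sigma shift."""
    yield_fraction = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + 1.5

# e.g. 25 defects found across 2,000 garments, each with 5 inspection points
rate = dpmo(defects=25, units=2000, opportunities_per_unit=5)
print(round(rate, 1), round(sigma_level(rate), 2))   # 2500 DPMO ≈ 4.3 sigma
```

An Ethiopian firm starting out at three or four sigma would use measurements like these to set a baseline before any improvement project, which is why the information and analysis systems listed above are preconditions.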

Activity 7
Among the elements of JIT/lean production, the most difficult aspects to implement in Ethiopia
may vary based on the nature of the industry and the firm per se. On average, however, the
elements that help to achieve smooth flow – such as pull systems, kanban, uniform production,
flexible resources including quick setups, and effective and reliable supplier networks – can be
mentioned as the most difficult aspects challenging the implementation of JIT/lean production in
Ethiopia, though this remains a subjective judgement by the writers of the module.

Activity 8
Yes, automation is highly recommended for some manufacturing firms in Ethiopia. Though it has
limitations, it would help Ethiopian firms increase productivity. It would also help them cope
with global competition, as it would improve the quality of the products produced. However, a
careful analysis should be performed in selecting an appropriate automation strategy, one that
balances the negative effects of automation, particularly in terms of labor turnover.

Activity 9
Just like operations managers in other countries, those assuming responsibility in the FDRE
would face challenges in their day-to-day activities resulting from globalization, corporate
social responsibility, environmental concern and supply chain integrity.

Answer Key for Multiple Choice Review Questions

Question        1  2  3  4  5
Chapter I       e  d  a  c  a
Chapter II      c  e  b  a  e
Chapter III     e  b  c  d  d
Chapter IV      a  a  b  c  d
Chapter V       d  b  c  a  a
Chapter VI      a  d  d  e  a
Chapter VII     a  e  b  a  d
Chapter VIII    a  b  b  d  e
Chapter IX      b  e  d  b  a

References
Taylor, Bernard W. and Russell, Roberta S. (2011). Operations Management: Creating Value
along the Supply Chain. John Wiley and Sons, United States of America.

Slack, Nigel, Chambers, Stuart and Johnston, Robert (2010). Operations Management. 6th ed.,
Pearson Education Limited. England.
