Operations Management - Policy, Practice and Performance Improvement

Operations managers must ensure that the goods or services produced
by the transformation process meet quality specifications. Many
different techniques and tools for managing quality have emerged to
support this responsibility (Figure 9.2). One way to organize the
different approaches to quality management is to show where they are
normally used within the transformation process.

Outputs

Quality management includes ensuring the quality of the outputs of
the transformation process by sorting them into acceptable or
unacceptable categories before they are delivered to customers or
clients (Deming called them 'the final inspectors'). This is most
closely related to Garvin's manufacturing-based definition of quality:
quality as meeting specifications.
Conformity describes the degree to which the design specifications
are met in the production of the product or service, and is, again,
highly influenced by operations capabilities. Although specifications
are initially set in the design process, operations managers are
responsible for ensuring that the products and services that are
delivered to customers meet those specifications.


Figure 9.2 Quality management and the transformation model.

OPERATIONS MANAGEMENT

Two kinds of specifications can be identified for products or services:
attributes and variables. Attributes are aspects of a product or service that
can be checked quickly, with a simple yes or no decision made as to
whether the quality is acceptable. Thus, attributes are quality aspects of
a product or service that are either met or not met.
Variable measures, on the other hand, are aspects of a product or
service that can be measured on a continuous scale, including factors
such as weight, length, speed, energy consumption and so on. Variables,
too, are standards that can be met or not met.
The responsibility for conformity within manufacturing operations
is sometimes assigned to a specific quality control (QC) department. The
QC department may be responsible for a variety of activities, including
assessing the level of quality of goods and services, and of the processes
that produce those goods and services. The tools used by the QC
department are described later in the chapter.
Quality control is usually associated with two types of quality
management:

1 Inspection
2 Acceptance sampling.

Inspection

The most basic way of measuring quality is through inspection:
measuring the level of quality of each unit of output of the operation
and deciding whether it does or does not meet quality specifications.
Inspection classifies each product as good or bad. Products that fail
inspection may be reworked to meet quality standards, sold as seconds
(at reduced prices) or scrapped altogether.
One hundred per cent inspection requires checking every unit of
output. This is clearly impractical in many circumstances – for
example, a brewery would probably go out of business quickly if
inspectors had to take a sip from every cask or bottle of beer! In general,
inspection requires too many organizational resources to be used as a
method of quality control except when the consequences of
non-conformance are significant. This may come into play with very
expensive products, or when there are high risks associated with failure.

Acceptance sampling

Acceptance sampling is a technique for determining whether to accept a
batch of items after inspecting a sample of the items. The level of
quality of a sample taken from a batch of products or services is
measured, and the decision as to whether the entire batch meets or
does not meet quality specifications is based on the sample. Acceptance
sampling is used instead of inspection when the cost of inspection is
high relative to the consequences of accepting a defective item.

Rather than relying on guesswork, acceptance sampling is a
statistical procedure based on one or more samples. Acceptance
sampling begins with the development of a sampling plan, which
specifies the size of the sample and the maximum number of defective
items allowed. The maximum allowable percentage of defective
(non-conforming) items in a batch for it still to be considered good is
called the acceptable quality level (AQL). This is the quality level
acceptable to the consumer and the quality level the producer aims for.
On the other hand, the worst level of quality that the consumer will
accept is called the lot tolerance percent defective (LTPD) level.
Since the sample is smaller than the entire batch, there is a risk
that the sample will not correctly represent the quality of the batch.
The producer's risk is the probability of rejecting a lot whose quality
meets or exceeds the acceptable quality level (AQL). The consumer's
risk is the probability of accepting a lot whose level of defects is at or
higher than the lot tolerance percent defective (LTPD) (Figure 9.3).


Figure 9.3 Producer's and consumer's risk.



These may sometimes be described as Type I (alpha) and Type II
(beta) errors, terms that are derived from statistical theory.
In order to be useful, a sampling plan must balance the risk of
mistakenly rejecting a good batch (producer's risk) and the risk of
mistakenly accepting a bad batch (consumer's risk). Together, the
AQL, LTPD and the two levels of risk define an operating characteristics
(OC) curve, which is a statistical representation of the probability of
accepting a batch based on its actual percentage defective. Figure 9.4
presents an operating characteristics curve where the producer's risk
has been set at 0.05 and the consumer's risk at 0.10. The acceptable
quality level is 20 per cent defective, and the lot tolerance percent
defective is 80 per cent.
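The probability of acceptance plotted on an OC curve can be computed directly from the binomial distribution. The sketch below uses a hypothetical single sampling plan (sample size n = 10, acceptance number c = 3; these values are illustrative, not taken from the text) to show how the probability of accepting a batch falls as its true percentage defective rises:

```python
from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    """Probability of accepting a lot under a single sampling plan:
    inspect n items and accept if at most c are defective (binomial model)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Trace points on the OC curve for the hypothetical plan (n = 10, c = 3):
# acceptance probability falls as the lot's true defect rate rises.
for p in (0.05, 0.20, 0.50, 0.80):
    print(f"{p:.0%} defective -> P(accept) = {prob_accept(10, 3, p):.3f}")
```

With a truly bad lot (80 per cent defective) the acceptance probability is close to zero, which is exactly the behaviour the OC curve describes.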

Sampling plans

One, two or more samples might be taken under different sampling
plans. The number of samples can be known in advance or
determined by the results of each sample.
In a single sampling plan the decision to accept or reject a lot is made
on the basis of one sample. This is the simplest type of sampling
plan.

In a double sampling plan a decision to accept or reject a lot can be
made on the first sample but, if it is not, a second sample is taken and
a decision is made on the basis of the combined samples. Thus, after
the first sample the batch is accepted, rejected, or another sample is
taken; once a second sample has been taken, the lot is either accepted
or rejected.


Figure 9.4 An acceptance sampling plan.

9 • MANAGING QUALITY

A sequential sampling plan extends the logic of the double sampling
plan. Each time an item is inspected, a decision is made to accept the
lot, reject the lot, or continue sampling.
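The accept/reject logic of the single and double plans can be sketched in a few lines of Python. The thresholds below (c1, r1, c2) are hypothetical illustrations, not values from the text:

```python
def single_plan(defects: int, c: int) -> str:
    """Single sampling plan: one sample, accept if at most c defectives."""
    return "accept" if defects <= c else "reject"

def double_plan(d1: int, d2: int, c1: int, r1: int, c2: int) -> str:
    """Double sampling plan with illustrative thresholds:
    accept outright if d1 <= c1, reject outright if d1 >= r1;
    otherwise combine the second sample's count and compare with c2."""
    if d1 <= c1:
        return "accept"
    if d1 >= r1:
        return "reject"
    # First sample was inconclusive: decide on the combined count
    return "accept" if d1 + d2 <= c2 else "reject"

# 2 defects in the first sample neither accepts (c1 = 1) nor rejects
# (r1 = 4), so a second sample with 1 defect settles it: 3 <= 3 -> accept
print(double_plan(2, 1, c1=1, r1=4, c2=3))
```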


Cost of quality

Inspection and acceptance sampling are two quality management
techniques whose main emphasis is on conformity. The level of quality
aimed for in conformity-centred approaches is often determined using
economic analyses of the costs of quality.
Quality creates a significant level of cost to the organization. Juran
(1951) argued in his Quality Control Handbook that managers must
know the costs of quality in order to manage quality effectively. These
costs can be divided into the costs of making sure that quality mistakes
do not happen, and the costs of fixing quality mistakes. The costs of
making sure mistakes do not happen can be divided into the costs of
appraisal and prevention.
Prevention costs are the costs of all activities needed to prevent
defects, including identifying the causes of defects, corrective actions
and redesign. Managers must put in place measures to prevent defects
occurring, including company-wide training, planning and
implementing quality procedures. In Garvin's (1983) study of Japanese
versus American manufacturing, he found that the added cost of
prevention (which resulted in better-quality Japanese goods) was
half the cost of rectifying defective goods made by American
manufacturers.

Appraisal costs are the costs of inspections, tests and any other
activities needed to make sure that the product or process meets the
specified level of quality. Quality laboratories may also be part of the
appraisal process, whereby a product or component is analysed outside
of the immediate production area. Inspection, in addition to in-built
statistical processes, will often take place in the early stages of a 'quality
drive' in critical areas of production, for example:

In operations that have historically caused problems

Before costly operations take place, since reworking in a costly area is
particularly expensive

Before an assembly operation which would make 'disassembly'
difficult

With regard to finished goods, where the extent of inspection will
diminish over time as the disciplines of quality management become
integral to the operational process.



The costs of fixing quality mistakes can be classified as the costs of
internal and external failures. The costs of internal failures include
the costs of defects that are detected before products or services
reach the consumer, such as reworking or scrapping defective
products. These costs will appear as an overhead, which will
impact on pricing strategy. The costs of external failures are those
costs of defects once they have reached the consumer, including
replacements, warranty and repair costs, and the loss of customer
goodwill. Hutchins (1988, p. 39) makes an important point on the
real cost of external failure:

It is most unusual to find any computations which take into account the
consequential losses. For example, there is the time spent in placating
an irate customer; the loss of machine time; the effect on scheduling;
the costs associated with the purchase of replacement materials ... the
cost of stockholding associated products which must be held in
temporary storage awaiting the arrival of satisfactory replacement
parts is never included in the figures. Neither are any estimates relating
to the loss of sales revenue.

Figure 9.5 A cost of quality model of optimum level of quality.

Figure 9.5 suggests that the optimum level of quality will always be
set so that some level of defects is acceptable, since the costs of
prevention approach infinity as 100 per cent conformance is
approached. However, under the quality management philosophy of
zero defects the ultimate goal of operations is 100 per cent conformity.
In part, this is based on Philip Crosby's idea that 'quality is free', in
which he argues that the benefits from improving quality more than
pay for their costs (Crosby, 1979, p. 2):

Quality is free. It is not a gift, but it is free. What costs money are the
unquality things – all the actions that involve not doing jobs right the
first time.

Crosby discusses how firms can evolve into becoming enlightened. He
spoke of five stages of development, and in the first stage the cost of
quality was reckoned to be about 20 per cent of sales. At this first stage,
Crosby argued, management has no real comprehension of quality.
However, by the fifth stage, the final stage, the cost of quality should
fall to about 2.5 per cent.
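To make Crosby's percentages concrete, consider a hypothetical firm with £100 million of annual sales (the sales figure is invented for illustration; the 20 per cent and 2.5 per cent rates are Crosby's):

```python
sales = 100_000_000  # hypothetical annual sales (£)

# Crosby's estimates: cost of quality is roughly 20% of sales at the
# first stage of development, falling to roughly 2.5% by the fifth
stage1_cost = 0.20 * sales
stage5_cost = 0.025 * sales

print(f"Stage 1 cost of quality: £{stage1_cost:,.0f}")
print(f"Stage 5 cost of quality: £{stage5_cost:,.0f}")
print(f"Potential saving:        £{stage1_cost - stage5_cost:,.0f}")
```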
When an organization commits to quality, costs will come down.
However, it is not just the total sum of the costs that is important; the
composition of these costs also provides insight. Under traditional
approaches, the largest cost will be external failure – with all of the
strategic losses this may bring. As firms evolve into becoming
enlightened in quality, the largest portion of cost changes from
external failure to that of prevention (Figure 9.6).


Figure 9.6 A comparison of quality costs in traditional and enlightened
approaches to quality.



Process control

The conformity-based approaches to quality management described
above merely sort acceptable from unacceptable outputs; they do not
address the underlying causes of poor quality. Quality management
can be more proactive by addressing quality defects during the
production process, rather than after it.
Take a simple example: eating a meal out in a restaurant. If the server
waited until the end of the meal to see if there were any complaints or
problems, then he or she wouldn't have a chance to correct any
problems that had occurred. However, if checks were made regularly
during the meal (that the food is what was ordered, that it has
arrived without too much delay, and that it is at the right temperature
and tastes good), then any problems could be dealt with immediately.
The key concepts associated with process control were developed by
Walter Shewhart at Bell Laboratories in the 1920s. Some important
techniques associated with process control include:

statistical process control

quality at the source.

Statistical process control

Statistical process control (SPC) measures the performance of a process,
and can be used to monitor and correct quality as the product or
service is being produced, rather than at the conclusion of the process.
SPC uses control charts to track the performance of one or more
quality variables or attributes. Samples are taken of the process
outputs, and if they fall outside the acceptable range then corrections
to the process are made. This allows operations to improve quality
through a sequence of activities (Figure 9.7).


Figure 9.7 Process control activities: analyse current process
performance, determine the cause of variation, and identify and
correct variations.


Control charts

Control charts support process control through the graphical
presentation of process measures over time. They show both current
data and past data as a time series. Both upper and lower process control
limits are shown for the process that is being controlled. If the data
being plotted fall outside of these limits, then the process is described
as being 'out of control'.

The statistical basis of control charts, and the insight that led to
statistical process control rather than process control based on
guesswork or rule of thumb, is that the variation in process outputs can
be described statistically. Process variation results from one of two
causes: common (random) causes or assignable (special) causes.
Although there will always be some variation in the process due to
random or uncontrollable changes in factors that influence the
process, such as temperature, there will also be changes due to
factors that can be controlled or corrected, including machine wear,
adjustments and so on.
The goal of SPC is for the process to remain in control as much of
the time as possible, which means reducing or eliminating those causes
of variation that can be controlled. For example, wear over time can
lead to a process going out of control.

Process control charts

SPC relies on a very simple graphical tool, the control chart, to track
process variation. Control charts plot the result of the average of small
samples from the process over time, so that trends can be easily
identified. Managers are interested in the following:

Is the mean stable over time?

Is the standard deviation stable over time?

Two different types of control chart have been developed, for
measurements of variables and measurements of attributes.

Control charts for variables

Two kinds of control chart are usually associated with variable
measures of quality, which include physical measures such as weight or
length. Sample measurements can be described as a normal
distribution with a mean (μ) and a standard deviation (σ): the mean
describes the average value of the process, and the standard deviation
describes the variation around the mean. The mean and standard
deviation of the process can be used to determine whether a process is
staying within its tolerance range, the acceptable range of performance
for the operation.

Control charts are based on sample means (x̄) and ranges (R) for
every n items and m samples. Besides the norm for the process,
upper and lower control limits that the process should not exceed are
also defined. Control limits are usually set at three standard deviations
on either side of the population mean. In addition, warning lines may
be in place so that operators can see a trend in the sampling process
that might result in movement toward either the upper or lower
control limit.

An x-chart plots the sample mean to determine whether it is in
control, or whether the mean of the process samples is changing from
the desired mean. Manufacturers often measure product weights, such
as bags of flour, to make sure that the right amount (on average) is
packaged. Figure 9.8 shows an x-chart for a process where the desired
process mean is set at 10. Samples of five items were taken at regular
intervals, and the average of the five items was calculated and
plotted on the chart. The middle line plots the long-run average of the
process output. The upper and lower control limits, which are set at
three standard deviations from the average, are shown on either
side.

Figure 9.8 Control charts: an x-chart.

From Figure 9.8 it can be seen that the sample means vary around
the long-run process mean, but they stay within the upper and lower
control limits, so the process is said to be in control. If the means of one
or more samples had been outside the control limits, then the process
would have been out of control and it would have been necessary for
the process operator to take some action to get it back in control.
Managers may also be interested in how much the variance of the
process is changing, that is, whether the process range (highest to
lowest) is stable. A range chart (R-chart, Figure 9.9) for variable
measures plots the range (the difference between the largest
and smallest values in a sample) on a chart to determine whether it
is in control. The purpose is to detect changes in the variation of the
process.

As you can see in Figure 9.9, the range falls below the lower control
limit towards the end of the observation period. The process operator
would need to take corrective action to bring the process back into
control.
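The chart calculations described above can be sketched in Python. The sketch follows the text's simple rule of limits at three standard deviations either side of the long-run mean (published control-chart tables use slightly different constants), and the sample data are invented for illustration:

```python
from statistics import mean, stdev

def xbar_limits(samples: list[list[float]]):
    """Centre line and 3-sigma control limits for an x-chart, using the
    text's rule: three standard deviations of the sample means on either
    side of the long-run process mean."""
    means = [mean(s) for s in samples]
    centre = mean(means)
    s = stdev(means)  # standard deviation of the sample means
    return centre - 3 * s, centre, centre + 3 * s

def r_chart_points(samples):
    """Sample ranges (max - min) to be plotted on an R-chart."""
    return [max(s) - min(s) for s in samples]

# Hypothetical process targeting a value of 10, samples of five items each
samples = [[10.1, 9.9, 10.0, 10.2, 9.8],
           [10.0, 10.1, 9.9, 10.0, 10.1],
           [9.9, 10.0, 10.2, 9.8, 10.0]]
lcl, centre, ucl = xbar_limits(samples)
print(f"LCL={lcl:.3f}  centre={centre:.3f}  UCL={ucl:.3f}")
print("ranges:", r_chart_points(samples))
```

A plotted point outside [LCL, UCL] would signal an out-of-control process, as described in the text.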

Attribute charts

Process control using control charts can be done for attributes as well
as variable measures. A p-chart plots the sample proportion defective to
determine whether the process is in control. The mean proportion
defective (p̄) can be calculated from the average proportion defective
of m samples of n items, as can its standard deviation (σp). This sort of
chart is similar to the x-chart described above.
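A minimal sketch of the p-chart limits, assuming the usual binomial formula for the standard deviation of a sample proportion (the defect counts below are invented for illustration):

```python
from math import sqrt

def p_chart_limits(defectives: list[int], n: int):
    """Centre line and 3-sigma limits for a p-chart. The standard
    deviation of a sample proportion is sqrt(p(1-p)/n) under the
    binomial model."""
    p_bar = sum(defectives) / (len(defectives) * n)
    s = sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - 3 * s)  # a proportion cannot fall below zero
    return lcl, p_bar, p_bar + 3 * s

# Hypothetical data: defectives found in m = 5 samples of n = 100 items
lcl, centre, ucl = p_chart_limits([4, 2, 5, 3, 6], n=100)
print(f"LCL={lcl:.4f}  centre={centre:.4f}  UCL={ucl:.4f}")
```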


Figure 9.9 Control charts: a range chart.


Statistical process control (SPC), a manufacturing concept, has been
applied to services (especially in quasi-manufacturing or back-office
environments) with mixed levels of success.

Process capability

Process capability describes the extent to which a process is capable of
producing items within the specification limits, and can be
represented as:

Cp = (UTL − LTL) / 6σ

where UTL = upper tolerance level, LTL = lower tolerance level and
σ = standard deviation.

A general rule of thumb is that Cp should be greater than one
(three-sigma quality), i.e. the process should remain within three
standard deviations of the mean as much as possible. Such a process
stays within its control limits around 99.7 per cent of the time.
However, following the quality example established by Japanese
manufacturers, six-sigma quality is a more ambitious target. The
six-sigma target for process capability is associated with the American
electronics firm Motorola, which set a target of 3.4 defects per
million. This underlines Motorola's view that defects should be very,
very rare.
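The process capability index follows directly from the formula above. The tolerance band and standard deviation in this sketch are invented for illustration:

```python
def process_capability(utl: float, ltl: float, sigma: float) -> float:
    """Cp = (UTL - LTL) / (6 * sigma): how comfortably the natural
    6-sigma spread of the process fits inside the tolerance band."""
    return (utl - ltl) / (6 * sigma)

# Hypothetical tolerance of 10.0 +/- 0.3 with process sigma = 0.05:
# the band (0.6) is twice the 6-sigma spread (0.3), giving Cp close to 2,
# the level usually associated with six-sigma quality
print(process_capability(10.3, 9.7, 0.05))
```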

A related idea in services is service reliability: the ability of the service
provider to deliver the results that customers want time after time,
without unpleasant surprises. We will expand on ideas related to
services later in this chapter.
