
Precise and basic knowledge of total quality management

The term quality means ‘fitness for use’ or ‘customer’s requirement/satisfaction’. Mathematically, quality can be expressed as P/E, where P is the performance of the product and E is the expectation of the customer. There are several tools for identifying the causes of problems or defects in products (goods or services), such as the Fishbone diagram (to identify the root cause of problems or defects) and Pareto analysis (to identify the vital few factors responsible for the majority of the problems, i.e. the selection of a limited number of tasks that produce a significant overall effect).
Quality control and quality assurance are the basic backbone of quality management. For quality control of products, process control is the most effective way of controlling the defects or rejects in a production process. For a control chart for the number of defectives in a lot, the control limits are calculated from the binomial distribution; when the sample size is large and the proportion defective is small, the binomial distribution can be approximated by the Poisson distribution. Control charts use statistics of a sample of data, namely the mean (x̄), the standard deviation (σ) and the range (R), which is the difference between the highest and lowest values in the data sample. The standard deviation is the root mean square (RMS) deviation of the individual data from the mean: the squared deviations from the mean are summed, divided by the number of observations, and the square root is taken. The upper control limit (UCL) and the lower control limit (LCL) are separated from the mean (the central line of a control chart) by 3σ on each side, so the total spread between the UCL and LCL is 6σ. The percentage of the data sample that falls within this 6σ band amounts to 99.73%.

The random variation in the value of the variables in a production process occurs due to chance causes (also called non-assignable, natural or common causes); variations beyond this natural level are due to artificial, special or assignable causes. When a process is under control, i.e. stable, it is then necessary to assess the process capability (Cp) to satisfy the customers’ requirement. Process capability is normally measured against the tolerance of the customer’s specification limits as Cp = (USL − LSL)/6σ. When (USL − LSL) corresponds to 6σ, Cp = 1; when the process is not capable, Cp < 1; and when it is capable, Cp ≥ 1.

The 6σ variation is chosen for control charts, but Six Sigma has also emerged as a methodology for process improvement to maintain the level of defects in a manufacturing or service sector. In fact, the Six Sigma methodology is used for process improvement and not just for reducing the number or proportion of defects.
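As a quick numerical check of the 99.73% figure quoted above, the short Python sketch below (an illustration added here, not part of any standard) uses the standard normal distribution to compute the fraction of values expected to fall within the mean ± 3σ band:

from statistics import NormalDist

# Fraction of a normally distributed characteristic expected to fall within
# +/- 3 standard deviations of the mean, i.e. inside the 6-sigma spread
# between the LCL and UCL of a control chart.
nd = NormalDist(mu=0.0, sigma=1.0)
within_3_sigma = nd.cdf(3) - nd.cdf(-3)
print(f"{within_3_sigma:.4%}")   # prints about 99.73%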
When an experiment is conducted with a number of trials for evaluation purposes, a 5% level of significance of the output result of the experiment corresponds to a confidence interval of 95%.
Of all the resources used in industry or business, resources like machines, materials, energy, etc. depreciate over time, except the human resource. Humans gather experience over time while working, and experienced people become more valuable to the industry or the business; hence human resources appreciate.
Ultimately the success of TQM depends on the participation of all, inside and outside the organisation. Awareness of quality is very important among the employees of the organisation, who should volunteer for quality awareness programmes like training, quizzes and brainstorming sessions. This sort of approach is normally conducted through the participation of workers under the umbrella of the ‘Quality Circle’, headed by the supervisors/leaders of the workers.

In total quality management, the Japanese introduced a very valuable set of process steps to follow, which are:
1. Kaizen – Focuses on "Continuous Process Improvement", to make processes visible, repeatable and measurable.
2. Atarimae Hinshitsu – The idea that "things will work as they are supposed to" (for example, a pen will write).
3. Kansei – Examining the way the user applies the product leads to improvement in the product itself.
4. Miryokuteki Hinshitsu – The idea that "things should have an aesthetic quality" (for example, a pen will write in a way that is pleasing to the writer).

Subsequent to that, after each step of activities in the overall process, the PDCA (Plan, Do, Check, Act) cycle is followed for continuously improving the quality. TQM not only cares for the in-house quality of the organisation, its products and its resources, but also takes care of the environment, following the guidelines of ISO 14000. All other quality activities inside the organisation are guided by the norms of ISO 9000 and some other standards.

Paradigm shift of quality


Or
The change of concept of quality of products over time
The concept of quality has existed for many years, though its meaning has changed and evolved over time. The paradigm shifts over time may be categorised as under:

1. Pre-industrial paradigm: At that time the reputation of the artisan was measured through the quality characteristics of the product. Trademarks, guilds and punitive measures were used to defend the interests of consumers.

2. Industrial paradigm of quality control: The industrial revolution (18th/19th century) raised the level of product and process complexity with the boom in production volume. A new paradigm of quality control was born, involving a broader set of changes. Practices like sampling inspection, the use of statistical methods and standardization techniques became quite familiar, and from that time these tools have remained in use.

3. Post-industrial paradigm – Total Quality Management: A third paradigm came into force – Total Quality Management (TQM). TQM brought the awareness and practice of quality principles to a new level, and emphasized practices such as organizational learning and participative management.

To fully understand the ‘Quality Movement’, the evolution of the concepts of quality of manufactured goods and delivered services over time is summarized in the table below.
Time:   Early 1900s    1940s                   1960s                           1980s and Beyond
Focus:  Inspection     Statistical sampling    Organizational quality focus    Customer-driven quality

Old concept of quality: Inspect for quality after production.
New concept of quality: Build quality into the process; identify and correct causes of quality problems.
The people who were the pioneers and proponents in accelerating those evolving paradigms of
quality are given below.

Walter A. Shewhart –Contributed to understanding of process variability.
–Developed concept of statistical control charts.
W. Edwards Deming –Stressed management’s responsibility for quality.
–Developed “14 Points” to guide companies in quality
improvement.
Joseph M. Juran –Defined quality as “fitness for use.”
–Developed concept of cost of quality.
Armand V. Feigenbaum –Introduced concept of total quality control.
Philip B. Crosby –Coined phrase “quality is free.”
–Introduced concept of zero defects.
Kaoru Ishikawa –Developed cause-and-effect diagrams.
–Identified concept of “internal customer.”
Genichi Taguchi –Focused on product design quality.
–Developed Taguchi loss function.

Companies in every line of business are focusing on improving quality in order to be more
competitive. In many industries quality excellence has become a standard for doing business.
Companies that do not meet this standard simply will not survive. The term used for today’s new
concept of quality is total quality management or TQM. TQM is “The management of quality
at every stage of operations, from planning and design through self-inspection, to continual
process monitoring for improvement opportunities”. The table above presents a timeline of the
old and new concepts of quality. It is seen that the old concept is reactive, designed to correct
quality problems after they occur. The new concept is proactive, designed to build quality into
the product and process design. The eminent personalities who have shaped our understanding of quality are called the “Quality Gurus”; a list of them is given above.
The current paradigm of quality is based on the underlying principles of Total Quality Management, which is implemented to maintain and assure the quality of products, i.e. goods and services, as per the requirements of customers both internal and external to the organisation. This approach towards the quality of products, the organisation itself and society has made it possible for organisations to satisfy the direct and indirect customers in the marketplace and to capture a greater share of the market for sustainable business.

Definition of total quality management (TQM) in the light of a modern


manufacturing organisation
Total Quality Management is the organization-wide management of quality. Management consists of planning, organizing, directing, controlling, and assurance. Total quality is called total because it consists of various scopes of improvement in a multidimensional way. First of all, two qualities: quality of return to satisfy the needs of the shareholders, and quality of products to satisfy customers’ requirements; next, the internal employees of the organisation, in regard to their upkeep, training and skill development for better output from them; and last but not least, the external customers, including the environment.

The current paradigm of TQM consists of three terms – Total, Quality and Management – all of which have evolved and shifted from ‘no idea’ of quality, through the concept of quality control, to the present-day concept of total quality control and assurance for combating the challenges of the forthcoming competitive edge of the business and survival thereof.
TQM is composed of three paradigms:
 Total: Involving the entire organization, supply chain, and/or product life cycle
 Quality: With its usual definitions, with all its complexities
 Management: The system of managing, with steps like planning, organizing, staffing, leading and controlling.
The meaning of quality has changed over time. TQM is defined by the International
Organization for Standardization (ISO) as under:
"TQM is a management approach for an organization, centered on quality, based on the
participation of all its members and aiming at long-term success through customer satisfaction,
and benefits to all members of the organization and to society." (ISO 8402:1994). One major aim is
to reduce variation from every process so that greater consistency of effort is obtained.

In Japan, TQM comprises four process steps, namely:

1. Kaizen – Focuses on "Continuous Process Improvement", to make processes visible, repeatable and measurable.
2. Atarimae Hinshitsu – The idea that "things will work as they are supposed to" (for example, a pen will write).
3. Kansei – Examining the way the user applies the product leads to improvement in the product itself.
4. Miryokuteki Hinshitsu – The idea that "things should have an aesthetic quality" (for example, a pen will write in a way that is pleasing to the writer).
TQM requires that the company maintain this quality standard in all aspects of its business. This
requires ensuring that things are done right the first time and that defects and waste are
eliminated from operations.

The core concepts of TQM can be classified into two broad categories or dimensions: social or soft TQM and technical or hard TQM. The social issues are centered on human resource management and emphasize leadership, teamwork, training and employee involvement. The technical issues reflect an orientation toward improving production methods and operations.
Secondly, the management of social or technical TQM issues cannot be performed in isolation.
Social and technical dimensions (and the core concepts that form them) should be interrelated
and mutually support one another reflecting the holistic character of TQM initiatives. Thirdly,
the literature suggests that the optimal management of TQM core concepts will lead to better
organizational performance.

Therefore, TQM includes both an empirical component associated with statistics and an
explanatory component that is associated with management of both people and processes. TQM
is an approach to improving the quality of goods and services through continuous improvement
of all processes, customer-driven quality, production without defects, focus on improvement of
processes rather than criticism of people and data-driven decision making.

Objectives of TQM in an organisation:


TQM Objective

i. Total customer satisfaction


ii. Totality of functions
iii. Total range of products and services
iv. Addressing all aspects of dimensions of quality
v. Addressing the quality aspect in everything – products, services, processes, people,
resources and interactions.
vi. Satisfying all customers – internal as well as external
vii. Addressing the total organizational issue of retaining customers and
viii. Improving profits, as well as generating new business for the future.
ix. Involving everyone in the organization in the attainment of the said objective.
x. Demanding total commitment from all in the organization towards the achievement of the
objective.

Total means 100%, so TQM is about managing all aspects of quality, and the ultimate goal should be ‘Total Customer Satisfaction’. Every functional area should stick to the quality plan of the
organization and strive to attain the planned quality target. Each offering from the organization
should be of optimum quality. Because, “one rotten apple can spoil the whole basket.”
TQM is about addressing all aspects of dimensions of quality. If there is a good product in bad
packaging it is not going to give the desired returns to the organization. A good car with a bad
bumper will tarnish the image of the company. An ill tempered receptionist can turn away
potential customers from a nice 5-star hotel. So people and process should match the quality of
the product being offered by the organization. A satisfied employee will always bring a satisfied
customer, so internal customers are also important. All hygiene factors and motivation factors
should be maintained to satisfy the needs of the internal customer. Retaining internal customer is
important for better knowledge management and continuity of the process. Retaining external
customer is important to get repeat sales. It is always easier to get repeat sales from existing
customers than to get sales from a new customer. Everybody, right from the shop-floor employee
to the top management, should have total commitment to the predetermined quality goals.
Quality has different meanings for different people. In spite of this, any organization aiming for sustainable competitive advantage needs to assess customers’ needs to fix a quality objective. Immaculate planning is required to attain the pre-decided quality goals. Proper monitoring and
people’s involvement can ultimately enable an organization to achieve the desired results. In the
long run the good quality always wins the customer’s heart.

Seven Basic tools for quality control & improvement


Practitioners of statistical process control (SPC) have many names for these seven basic tools of quality, first emphasized by Kaoru Ishikawa, a professor of engineering at Tokyo University and the father of "quality circles." These are typical process control techniques and there are many ways to implement process control. Key monitoring and investigating tools include:
 Histograms
 Check Sheets
 Pareto Charts
 Cause and Effect Diagrams
 Stratification (alternatively flow chart or run chart)
 Scatter Diagrams
 Control Charts

Check Sheet
Check sheets are simple forms with certain formats that can help the user to record data in a firm systematically. Data are “collected and tabulated” on the check sheet to record the frequency of specific events during a data collection period. They provide a “consistent, effective, and economical approach” that can be applied in the auditing of quality assurance and for reviewing and following the steps in a particular process. They also help the user to arrange the data for later utilization. The main advantages of check sheets are that they are very easy to apply and understand, and they can give a clear picture of the situation and condition of the organization. They are efficient and powerful tools for identifying frequent problems, but they do not have an effective ability to analyse the quality problems in the workplace.
Check sheets are of three major types: defect-location check sheets, tally check sheets, and defect-cause check sheets. The figure below depicts a tally check sheet that can be used for collecting data during a production process.

Figure: Check sheet (tally) for telephone interruptions

Histogram: The histogram is a very useful tool for describing the frequency distribution of observed values of a variable. It is a type of bar chart that visualizes both attribute and variable data of a product or process, and it assists users in showing the distribution of data and the amount of variation within a process. It displays the measures of central tendency (mean, median and mode). It should be designed properly so that those working in the operation process can easily utilize it and understand the distribution of the variable being explored. The figure below illustrates a histogram of the frequency of defects in a manufacturing process.
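The class-interval counting behind a histogram can be sketched in a few lines of Python (using NumPy; the measurement values below are purely hypothetical):

import numpy as np

# Hypothetical observed values of a process variable (e.g. a dimension in mm)
values = np.array([9.8, 10.0, 10.1, 9.9, 10.2, 10.0, 10.1, 9.7, 10.0, 10.3,
                   9.9, 10.0, 10.1, 10.2, 9.8, 10.0, 9.9, 10.1, 10.0, 10.2])

# Bin the data into 6 equal-width classes; the counts per class are the bar heights
counts, edges = np.histogram(values, bins=6)
for low, high, c in zip(edges[:-1], edges[1:], counts):
    print(f"{low:5.2f} - {high:5.2f}: {'#' * int(c)}  ({int(c)})")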

Pareto Analysis
It was introduced by an Italian economist named Vilfredo Pareto, who studied income and other unequal distributions in the 19th century; he noticed that 80% of the wealth was owned by only 20% of the population. Later, the Pareto principle was developed by Juran in 1950. A Pareto chart is a special type of histogram that can easily be applied to find and prioritize quality problems, conditions, or their causes in the organization (Juran and Godfrey, 1998). In other words, it is a type of bar chart that shows the relative importance of variables, prioritized in descending order from left to right across the chart. The aim of a Pareto chart is to figure out the different kinds of “nonconformity” from data figures, maintenance data, repair data, parts scrap rates, or other sources. A Pareto chart can also provide a means for investigating quality improvement and improving efficiency, “material waste, energy conservation, safety issues, cost reductions”, etc.; as demonstrated in Figure 4, a Pareto chart can compare production before and after changes.
Fishbone Diagram
Kaoru Ishikawa is considered by many researchers to be the founder and first promoter of the ‘Fishbone’ diagram (or cause-and-effect diagram) for root cause analysis, and of the concept of Quality Control (QC) circles. The cause-and-effect diagram was developed by Dr. Kaoru Ishikawa in 1943. It also has two other names, the Ishikawa diagram and the fishbone diagram, because the shape of the diagram looks like the skeleton of a fish; it is used to identify quality problems based on their degree of importance. The cause-and-effect diagram is a problem-solving tool that systematically investigates and analyses all the potential or real causes that result in a single effect. In other words, it is an efficient tool that equips the organization's management to explore the possible causes of a problem (Juran and Godfrey, 1998). This diagram can support problem-solving efforts by “gathering and organizing the possible causes, reaching a common understanding of the problem, exposing gaps in existing knowledge, ranking the most probable causes, and studying each cause”. The generic categories of the cause-and-effect diagram are usually six elements (causes): environment, materials, machine, measurement, man and method, as indicated in Figure 5. Furthermore, “potential causes” can be indicated by arrows entering the main cause arrow.
Figure: The cause-and-effect diagram (Fishbone/Ishikawa diagram)

Scatter Diagram:
The scatter diagram is a powerful tool for drawing the distribution of information in two dimensions, which helps to detect and analyse a pattern of relationship between two quality and compliance variables (an independent variable and a dependent variable), to understand whether there is a relationship between them and, if so, what kind of relationship it is (weak or strong, positive or negative). The shape of the scatter diagram often shows the degree and direction of the relationship between the two variables, and the correlation may reveal the causes of a problem. Scatter diagrams are very useful in regression modeling (Montgomery, 2009; Oakland, 2003). The scatter diagram can indicate which of the following correlations exists between the two variables: a) positive correlation; b) negative correlation; or c) no correlation, as demonstrated in the figure below.

Figure: Scatter Diagrams
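The strength and direction of the relationship seen in a scatter diagram is usually summarised by the correlation coefficient, as in the small Python sketch below (the paired temperature/defect-rate data are hypothetical):

import numpy as np

# Hypothetical paired observations: x = oven temperature, y = defect rate (%)
x = np.array([150, 160, 170, 180, 190, 200, 210])
y = np.array([4.1, 3.8, 3.5, 3.1, 2.9, 2.4, 2.0])

# Pearson correlation coefficient: the sign gives the direction of the
# relationship and the magnitude gives its strength.
r = np.corrcoef(x, y)[0, 1]
if r > 0.7:
    kind = "strong positive"
elif r < -0.7:
    kind = "strong negative"
elif abs(r) < 0.3:
    kind = "little or no"
else:
    kind = "moderate"
print(f"r = {r:.2f} -> {kind} correlation")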

Flowchart
A flowchart presents a diagrammatic picture that uses a series of symbols to describe the sequence of steps existing in an operation or process. In other words, a flowchart visualizes a picture including the inputs, activities, decision points and outputs, so that the overall flow of the process can be used and understood easily. As a problem-solving tool, this chart can be applied methodically to detect and analyse the areas or points of a process that may have potential problems, by “documenting” and explaining the operation, so it is very useful for finding and improving quality in a process, as shown in the figure below.

Figure: Flow chart of review process


Control Chart (Shewhart Chart):
The control chart, or Shewhart control chart, was introduced and developed by Walter A. Shewhart in the 1920s at the Bell Telephone Laboratories, and it is probably the most “technically sophisticated” of these tools for quality management (Montgomery, 2009). Control charts are a special form of “run chart that illustrates the amount and nature of variation in the process over time”. They can also draw and describe what has been happening in the process. It is therefore very important to apply a control chart, because it can observe and monitor a process to study whether it is in “statistical control” (no problem with quality), i.e. whether the sample points fall between the upper control limit (UCL) and the lower control limit (LCL). If the points do not lie between the UCL and LCL, the process is out of control, and control can then be applied to find the causes of the quality problem; in Figure 8, point A is in control and point B is out of control. In addition, this chart can be utilized for estimating “the parameters” and “reducing the variability” of a process (Omachonu and Ross, 2004). The main aim of the control chart is to prevent defects in the process. It is very essential for different businesses and industries, because unsatisfactory products or services cost more than the expense of prevention by tools like control charts (Juran and Godfrey, 1998). A control chart is presented in the following figure.
CONCLUSION
This study identified that it is very essential to apply all seven QC tools for troubleshooting issues within production processes in organizations. Doubtless, all of the aforementioned quality tools should be considered and used by management for identifying and solving quality problems during the production of products and services. Thus, the production processes can be affected and improved by these statistical QC tools. Also, Mirko et al. (2009) designed and developed an effective layout for using these QC tools in organizations, based on their performance, in order to apply these quality tools appropriately for solving quality problems and for quality improvement, as demonstrated in Figure 9. Accordingly, that figure interprets how the seven QC tools should be employed, from the first step to the end of the production processes, for identifying problems of quality performance and controlling them.

Basic Statistics (for understanding, you may consult books on basic statistics)

Population and Sample


Population and sample are two basic concepts of statistics. Population can be characterized as
the set of individual persons or objects in which an investigator is primarily interested during
his or her research problem. Sometimes the desired measurements are obtained for all individuals in the population, but often only a subset of individuals of that population is observed; such a set of individuals constitutes a sample. This gives us the following definitions of population and sample.
Definition Population: Population is the collection of all individuals or items under consideration in a statistical study. A (statistical) population is the set of measurements (or record of some qualitative trait) corresponding to the entire collection of units for which inferences are to be made.
Example Finite population: In many cases the population under consideration is one which
could be physically listed. For example:
–The students of the University of Tampere,
–The books in a library.
Example Hypothetical population: Also, in many cases the population is much more abstract and may arise from the phenomenon under consideration. Consider, e.g., a factory producing light bulbs. If the factory keeps using the same equipment, raw materials and methods of production in the future, then the bulbs that will be produced in the factory constitute a hypothetical population. That is, a sample of light bulbs taken from the current production line can be used to make inferences about the quality of light bulbs produced in the future.
Definition Sample: Sample is that part of the population from which information is collected. A
sample from statistical population is the set of measurements that are actually collected in the
course of an investigation.

Variables
A characteristic that varies from one person or thing to another is called a variable, i.e., a variable is any characteristic that varies from one individual member of the population to another. Examples of variables for humans are height, weight, number of siblings, sex, marital status, and eye color. The first three of these variables yield numerical information (numerical measurements) and are examples of quantitative (or numerical) variables, while the last three yield non-numerical information (non-numerical measurements) and are examples of qualitative (or categorical) variables.
Quantitative variables can be classified as either discrete or continuous.
Discrete variables: Some variables, such as the number of children in a family, the number of car accidents on a certain road on different days, or the number of students taking a basic statistics course, are the results of counting and thus are discrete variables. Typically, a discrete variable is a variable whose possible values are some or all of the ordinary counting numbers 0, 1, 2, 3, . . . . As a definition, we can say that a variable is discrete if it has only a countable number of distinct possible values. That is, a variable is discrete if it can assume only a finite number of values or as many values as there are integers.
Continuous variables: Quantities such as length, weight, or temperature can in principle be measured arbitrarily accurately. There is no indivisible unit. Weight may be measured to the nearest gram, but it could be measured more accurately, say to the tenth of a gram. Such a variable, called continuous, is intrinsically different from a discrete variable.

Sample and Population Distributions


Frequency distributions for a variable apply both to a population and to samples from that
population. The first type is called the population distribution of the variable, and the second
type is called a sample distribution.
In a sense, the sample distribution is a blurry photograph of the population distribution. As the
sample size increases, the sample relative frequency in any class interval gets closer to the true
population relative frequency. Thus, the photograph gets clearer, and the sample distribution
looks more like the population distribution.
When a variable is continuous, one can choose class intervals in the frequency distribution and for the histogram as narrow as desired. Now, as the sample size increases indefinitely and the number of class intervals simultaneously increases, with their width narrowing, the shape of the sample histogram gradually approaches a smooth curve. We use such curves to represent population distributions. Figure 6 shows two sample histograms, one based on a sample of size 100 and the second based on a sample of size 2000, and also a smooth curve representing the population distribution.
The bell-shaped and U-shaped distributions may be symmetric or non-symmetric. A symmetric distribution has the same shape on both sides of the mean position, whereas a non-symmetric distribution is said to be skewed to the right or skewed to the left, according to which tail is longer.

Measures of central tendency


Descriptive measures that indicate where the center or the most typical value of the variable lies in a collected set of measurements are called measures of center. Measures of center are often referred to as averages. There are three different measures of central tendency of a population of data, namely the mean, the median and the mode.
The Mean
The most commonly used measure of center for a quantitative variable is the (arithmetic) sample mean. When people speak of taking an average, it is the mean that they are most often referring to.
Definition of Mean: The sample mean of the variable is the sum of observed values in a data set divided by the number of observations.
If the sample size is n, and the symbol $x_i$ denotes the i-th observation of that variable in the data set, then the mean of the variable x is

$$\bar{x} = \frac{x_1 + x_2 + x_3 + \cdots + x_n}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i$$

The Median:
The sample median of a quantitative variable is that value of the variable in a data set that
divides the set of observed values in half, so that the observed values in one half are less than or
equal to the median value and the observed values in the other half are greater or equal to the
median value.
To obtain the median of the variable, we arrange observed values in a data set in increasing
order and then determine the middle value in the ordered list.
Definition of Median: Arrange the observed values of the variable in a data set in increasing order.
1. If the number of observations is odd, then the sample median is the observed value exactly in the middle of the ordered list.
2. If the number of observations is even, then the sample median is the number halfway between the two middle observed values in the ordered list.
In both cases, if we let n denote the number of observations in a data set, then the sample
median is at position (n+1)/2 in the ordered list.
The median is a "central" value – there are as many values greater than it as there are less than
it.
The Mode
The sample mode of a qualitative or a discrete quantitative variable is that value of the variable
which occurs with the greatest frequency in a data set.
A more exact definition of the mode is given below.
Definition Mode: Obtain the frequency of each observed value of the variable in a data set and note the greatest frequency.
1. If the greatest frequency is 1 (i.e. no value occurs more than once), then the variable has no mode.
2. If the greatest frequency is 2 or greater, then any value that occurs with that greatest frequency is called a sample mode of the variable.
To obtain the mode(s) of a variable, we first construct a frequency distribution for the data using classes based on a single value. The mode(s) can then be determined easily from the frequency distribution.
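As a concrete illustration of the three measures of central tendency, here is a minimal Python sketch using a hypothetical sample of defect counts:

import statistics

# Hypothetical sample of a discrete quantitative variable (defects per lot)
sample = [2, 3, 3, 4, 5, 5, 5, 6, 7, 9]

print("mean  :", statistics.mean(sample))    # sum of values / number of values = 4.9
print("median:", statistics.median(sample))  # middle of the ordered list = 5
print("mode  :", statistics.mode(sample))    # most frequent value = 5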

Measures of variation:
Just as there are several different measures of center, there are also several different measures of variation. Of these, the range and the standard deviation are the most commonly encountered in simple statistical analysis. Measures of variation are used mostly for quantitative variables.

Definition Range: The sample range of the variable is the difference between its maximum and
minimum values in a data set: Range = Max −Min.
Standard deviation: The sample standard deviation is the most frequently used measure of variability, although it is not as easily understood as the range. It can be considered as a kind of average of the absolute deviations of observed values from the mean of the variable in question.
Definition Standard deviation: For a variable x, the sample standard deviation, denoted by $s_x$ (or, when no confusion arises, simply by s), is

$$s_x = \sqrt{\frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1}} \qquad (1)$$

In the formula of the standard deviation, the sum of the squared deviations from the mean,

$$\sum_{i=1}^{n}(x_i - \bar{x})^2,$$

is called the sum of squared deviations and provides a measure of the total deviation from the mean for all the observed values of the variable. Once the sum of squared deviations is divided by n − 1, we get

$$s_x^2 = \frac{\sum_{i=1}^{n}(x_i - \bar{x})^2}{n-1},$$

which is called the sample variance. The sample standard deviation has the following alternative formula:

$$s_x = \sqrt{\frac{x_1^2 + x_2^2 + \cdots + x_n^2 - 2\bar{x}(x_1 + x_2 + \cdots + x_n) + n\bar{x}^2}{n-1}} = \sqrt{\frac{\sum_{i=1}^{n} x_i^2 - n\bar{x}^2}{n-1}} \qquad (2)$$
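The equivalence of formulas (1) and (2) can be checked numerically; the following Python sketch uses a small hypothetical sample:

import math

# Hypothetical sample
x = [4.0, 6.0, 5.0, 7.0, 8.0]
n = len(x)
xbar = sum(x) / n

# Formula (1): square root of the sum of squared deviations divided by n - 1
s1 = math.sqrt(sum((xi - xbar) ** 2 for xi in x) / (n - 1))

# Formula (2): equivalent shortcut using the sum of squares
s2 = math.sqrt((sum(xi ** 2 for xi in x) - n * xbar ** 2) / (n - 1))

print(s1, s2)   # both print the same sample standard deviation (about 1.58)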

Sample statistics and population parameters


Of the measures of center and variation, the sample mean x̄ and the sample standard deviation
s are the most commonly reported. Since their values depend on the sample selected, they vary in
value from sample to sample. In this sense, they are called random variables to emphasize that
their values vary according to the sample selected. Their values are unknown before the sample
is chosen. Once the sample is selected and they are computed, they become known sample
statistics. A statistic describes a sample, while a parameter describes the population from which
the sample was taken.
Notation for parameters: Let μ and σ denote the mean and standard deviation of a variable for
the population.
We call μ and σ the population mean and population standard deviation. The population mean is
the average of the population measurements. The population standard deviation describes the
variation of the population measurements about the population mean.
Whereas the statistics x̄ & s are variables, with values depending on the sample chosen, the parameters μ and σ are constants. This is because μ and σ refer to just one particular group of measurements, namely, measurements for the entire population. Of course, parameter values are usually unknown, which is the reason for sampling and calculating sample statistics as estimates of their values. That is, we make inferences about unknown parameters (such as μ and σ) using sample statistics (such as x̄ and s).
Problem: Explain the central tendency & dispersion with example of a group of data
collected as sample from a population as under:
Marks obtained by 80 students are given below. Read off the value of the median of the distribution by graphical means and compare it with that obtained by calculation.

Marks      0-10   10-20   20-30   30-40   40-50   50-60
Frequency    3       9      15      30      18       5

Ans: Let us draw the cumulative frequency table as under:

Marks       Frequency    Cumulative frequency
0 – 10          3                 3
10 – 20         9                12
20 – 30        15                27
30 – 40        30                57
40 – 50        18                75
50 – 60         5                80

Plot the points (10, 3), (20, 12), (30, 27), (40, 57), (50, 75), (60, 80). The points are connected by a free-hand smooth curve to obtain the ogive.

To determine the median, we have N/2 = 40th and (N/2 + 1) = 41st, so we have to consider the average of the marks obtained by the 40th and 41st students.

From the points 40 and 41 on the y-axis, draw straight lines parallel to the x-axis cutting the ogive curve. Read the abscissae of these points on the x-axis; the marks are very close to each other, and their average is found to be about 34.4.

This is verified by mathematical calculation: the 40th student gets 30 + 10 × (40 − 27)/30 = 34.33 and the 41st student gets 30 + 10 × (41 − 27)/30 = 34.67.
The average of the two marks = 34.5.
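The interpolation used in the calculation above can be written as a short Python sketch (the helper name grouped_value is only illustrative):

# Class intervals of the marks data above: (lower boundary, upper boundary, frequency)
classes = [(0, 10, 3), (10, 20, 9), (20, 30, 15), (30, 40, 30), (40, 50, 18), (50, 60, 5)]

def grouped_value(position, classes):
    """Interpolated mark of the observation at the given (1-based) position."""
    cum = 0
    for low, high, freq in classes:
        if cum + freq >= position:
            # lower boundary + class width * (position - cumulative freq. below) / class freq.
            return low + (high - low) * (position - cum) / freq
        cum += freq
    raise ValueError("position beyond total frequency")

n = sum(f for _, _, f in classes)          # 80 observations in all
m40 = grouped_value(n // 2, classes)       # 40th observation -> 34.33
m41 = grouped_value(n // 2 + 1, classes)   # 41st observation -> 34.67
print((m40 + m41) / 2)                     # median of about 34.5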

Explanation of the 80-20 rule in regard to the usefulness of Pareto


diagram for problem solving with an example

Pareto analysis:
Pareto analysis is a formal technique useful where many possible courses of action are
competing for attention. In essence, the problem-solving process estimates the benefit delivered
by each action, then selects a number of the most effective actions that deliver a total benefit
reasonably close to the maximal possible one. However, it can be limited by its exclusion of
possibly important problems which may be small initially, but which grow with time. It should
be combined with other analytical tools such as failure mode and effects analysis and fault tree
analysis for example.
In order to illustrate the operation of the Pareto diagram, consider the following example of a support officer who has to analyse and solve various product defects. In this case, the support officer considered 845 defects, which are grouped into the following categories.
Serial   Defect Category   Total   Cumulative Number of Defects

1 Fuel System 320 320


2 Suspension 210 530
3 Tyres 92 622
4 Driver Error 75 697
5 Engine 60 757
6 Air System 53 810
7 Software 18 828
8 Hydraulics 13 841
9 Others 4 845

Using a bar chart, plot the failure categories against the total and cumulative numbers of defects, and mark on it the line showing 80% (676 defects) of the cumulative total; the categories to the left of the point where the cumulative curve crosses this line are those responsible for 80% of the failures.
This technique helps to identify the top portion of causes that need to be addressed to resolve the majority of problems. Once the predominant causes are identified, then tools like the Ishikawa diagram or fish-bone analysis can be used to identify the root causes of the problems. While it is common to refer to Pareto as the "80/20" rule, under the assumption that, in all situations, 20% of causes determine 80% of problems, this ratio is merely a convenient rule of thumb and is not, nor should it be considered, an immutable law of nature.
The application of the Pareto analysis in risk management allows management to focus on those
risks that have the most impact on the project.

Steps to identify the important causes using 80/20 rule


1. Form an explicit table listing the causes and their frequency as a percentage.
2. Arrange the rows in the decreasing order of importance of the causes (i.e., the most
important cause first)
3. Add a cumulative percentage column to the table
4. Plot with causes on x- and cumulative percentage on y-axis
5. Join the above points to form a curve
6. Plot (on the same graph) a bar graph with causes on x- and percent frequency on y-axis
7. Draw a line at 80% on the y-axis parallel to the x-axis. Then drop a vertical line from its point of intersection with the curve to the x-axis. This point on the x-axis separates the important causes (on the left) from the trivial causes (on the right)
8. Explicitly review the chart to ensure that causes for at least 80% of the problems are
captured.
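Applying these steps to the defect data of the support-officer example above, a minimal Python sketch of the cumulative tabulation might look like this (the 80% cut-off and the resulting 'vital few' list follow the rule of thumb, not an exact law):

# Defect counts per category from the example above (845 defects in total)
defects = {"Fuel System": 320, "Suspension": 210, "Tyres": 92, "Driver Error": 75,
           "Engine": 60, "Air System": 53, "Software": 18, "Hydraulics": 13, "Others": 4}

total = sum(defects.values())
cumulative = 0
vital_few = []
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:12s} {count:4d}   cumulative {100 * cumulative / total:5.1f}%")
    if cumulative - count < 0.8 * total:   # categories needed to reach ~80% of defects
        vital_few.append(cause)

print("Vital few:", vital_few)   # the first four categories in this data set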
The Pareto principle (also known as the 80–20 rule, the law of the vital few, and the principle
of factor sparsity) states that, for many events, roughly 80% of the effects come from 20% of
the causes. Management consultant Joseph M. Juran suggested the principle and named it after
Italian economist Vilfredo Pareto, who, while at the University of Lausanne in 1896, published
his first paper "Cours d'économie politique." Essentially, Pareto showed that approximately 80%
of the land in Italy was owned by 20% of the population; Pareto developed the principle by
observing that 20% of the peapods in his garden contained 80% of the peas.
It is a common rule of thumb in business; e.g., "80% of your sales come from 20% of your
clients." Mathematically, the 80–20 rule is roughly followed by a power law distribution (also
known as a Pareto distribution) for a particular set of parameters, and many natural phenomena
have been shown empirically to exhibit such a distribution.
The Pareto principle is only tangentially related to Pareto efficiency. Pareto developed both
concepts in the context of the distribution of income and wealth among the population.
The distribution is claimed to appear in several different aspects relevant to entrepreneurs and
business managers. For example:
 80% of a company's profits come from 20% of its customers
 80% of a company's complaints come from 20% of its customers
 80% of a company's profits come from 20% of the time its staff spend
 80% of a company's sales come from 20% of its products
 80% of a company's sales are made by 20% of its sales staff
Therefore, many businesses have an easy access to dramatic improvements in profitability by
focusing on the most effective areas and eliminating, ignoring, automating, delegating or
retraining the rest, as appropriate.

Fish-bone diagram for the resolving the problem of manufacturing


defect in a product with example:

This fishbone diagram was drawn by a manufacturing team to try to understand the source of a manufacturing problem. The team used the six generic headings to prompt ideas. Layers of branches show thorough thinking about the causes of the problem.

Ishikawa diagram, in fishbone shape, shows factors of Equipment, Process, People, Materials,
Environment and Management, all affecting the overall problem. Smaller arrows connect the
sub-causes to major causes.

Criticism of Ishikawa Diagrams


In a discussion of the nature of a cause it is customary to distinguish between necessary and
sufficient conditions for the occurrence of an event. A necessary condition for the occurrence of
a specified event is a circumstance in whose absence the event cannot occur. A sufficient
condition for the occurrence of an event is a circumstance in whose presence the event must
occur. Ishikawa diagrams have been criticized for failing to make the distinction between
necessary conditions and sufficient conditions. It seems that Ishikawa was not even aware of this
distinction.

Explanation of a process control chart with usual notations of the


parameters used for identifying the assignable (special) and non-
assignable (common) causes of variations in a process

Control Chart

Differentiate between variables and attributes as quality parameters.


Ans:
The data obtained from measuring, counting or perceptive observations in industry for controlling process outputs with a given input are utilised for the preparation of a control chart, an important quality control tool. A control chart is a statistical tool used to distinguish between process variation resulting from common causes and that resulting from special causes.
There are two main categories of data – attribute data and variables data – that are used in control chart preparation.

Attribute Data: A category of Control Chart displays data that result from counting the number
of occurrences or items in a single category of similar items or occurrences. These “count” data
may be expressed as pass/fail, yes/no, or presence/absence of a defect.

 Variables Data: This category of Control Chart displays values resulting from the
measurement of a continuous variable. Examples of variables data are Length, Weight,
Volume, Temperature, Elapsed Time, and radiation dose.
Variable Control Charts - are used when the quality characteristic can be measured and
expressed in numbers with usual units.
 Examples of Variable Control Charts
o X and R
o X and s (s stands for sample standard deviation, as against population standard deviation)
o Delta
o X and Moving Range
Attribute Control Charts - are used for product characteristics that can be evaluated with a
discrete response (pass/fail, yes/no, good/bad, number defective)
Examples of Attribute Control Charts are
 p chart
 np chart
 c chart
 u chart
Limitations of the two types of data charts:
 Variable Control Charts
o must be able to measure the quality characteristics in numbers
o may be impractical and uneconomical
 e.g. a manufacturing plant responsible for 100,000 dimensions
 Attribute Control Charts
o In general are less costly when it comes to collecting data
o Can plot multiple characteristics on one chart
o But, loss of information vs. variable chart

Process control chart with usual notations of the parameters used for
identifying the assignable (special) and non-assignable (common)
causes of variations in a process

Different steps in calculating and plotting an X̄ and R chart for variable data, and drawing a control chart for variables with mean (X̄) and range (R), with control limits and specification limits.

The steps for constructing this type of Control Chart are:

Step 1 - Determine the data to be collected. Decide what questions about the
process you plan to answer. Refer to the Data Collection module for information on
how this is done.
Step 2 - Collect and enter the data by subgroup. A subgroup is made up of
variables data that represent a characteristic of a product produced by a process. The
sample size relates to how large the subgroups are. Enter the individual subgroup
measurements in time sequence in the portion of the data collection section of the
Control Chart labeled MEASUREMENTS.
Step 3 - Calculate and enter the average for each subgroup. Use the formula below to calculate the average (mean) for each subgroup and enter it on the line labelled Average in the data collection section:

$$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}$$

where x̄ is the average of the measurements within each subgroup, $x_i$ are the individual measurements within a subgroup, and n is the number of measurements within a subgroup.

Constructing an X-Bar & R Chart


Step 5 - Calculate grand mean
Step 6 - Calculate average of subgroup ranges
Step 7 - Calculate UCL and LCL for subgroup averages
Step 8 - Calculate UCL for ranges
Step 9 - Select scales and plot
Step 10 - Document the chart

Example of a control chart:


Explanation for the assignable and non-assignable causes of
variability in the process control activities in a quality system:
Ans:
Assignable causes of variations: Assignable causes of variation are present in most
production processes. These causes of variability are also called special causes of
variation. Assignable causes of variability can be detected leading to their correction through the
use of control charts.
Tool wear, equipment that needs adjustment, defective materials, or operator error are typical
sources of assignable variation. If assignable causes are present, the process cannot operate at
its best. A process that is operating in the presence of assignable causes is said to be “out of
statistical control.” Walter A. Shewhart (1931) suggested that assignable causes, or local
sources of trouble, must be eliminated before managerial innovations leading to improved
productivity can be achieved.
Special-cause variation is characterised by:
 New, unanticipated, emergent or previously neglected phenomena within the system;
 Variation inherently unpredictable, even probabilistically;
 Variation outside the historical experience base; and
 Evidence of some inherent change in the system or our knowledge of it.
Special-cause variation always arrives as a surprise. It is the signal within a system.
Walter A. Shewhart originally used the term assignable cause. The term special cause was
coined by W. Edwards Deming. The Western Electric Company used the term unnatural pattern.

Non-assignable causes of Variation: This type of variation is also called common-cause variation. Common-cause variation is characterised by:
 Phenomena constantly active within the system;
 Variation predictable probabilistically;
 Irregular variation within a historical experience base; and
 Lack of significance in individual high or low values.
The outcomes of a perfectly balanced roulette wheel are a good example of common-cause
variation. Common-cause variation is the noise within the system.
Walter A. Shewhart originally used the term chance cause. The term common cause was coined by Harry Alpert in 1947, and the Western Electric Company used the term natural pattern. Shewhart described a process that features only common-cause variation as being in statistical control.

Problem 1: Construct X bar and R charts from the following table. For n = 5, A2 = 0·58, D4 =
2·11, D3 = 0. Comment on the state of control.

Sample 1 2 3 4 5 6 7 8 9
No.
X bar 50.4 26.0 86.6 95.6 39.2 88.9 61.3 22.5 59.4
R 35 44 23 65 18 26 51 19 33

Ans:

The grand mean of the subgroup means is

$$\bar{\bar{X}} = \frac{\sum \bar{X}}{9} = \frac{529.9}{9} = 58.87$$

and the average range is

$$\bar{R} = \frac{\sum R}{9} = \frac{314}{9} = 34.88$$

In the X̄ control chart, the upper and lower control limits (UCL and LCL) are respectively $\bar{\bar{X}} + A_2\bar{R}$ and $\bar{\bar{X}} - A_2\bar{R}$, where $\bar{\bar{X}}$ is the central line as shown in the diagram.
UCL = 58.87 + 0.58 × 34.88 = 79.10
LCL = 58.87 − 20.23 = 38.64
CL = 58.87
Similarly, for the R chart of the given observations, the central line is $\bar{R}$, and
UCL for the range chart = $D_4\bar{R}$ = 2.11 × 34.88 = 73.6
LCL for the range chart = $D_3\bar{R}$ = 0

Now plot the data in the control chart on a graph paper with the above UCL, CL & LCL. See
if any assignable and non-assignable causes are there or not and accordingly comment on the
same.
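A short Python sketch of the same X̄ and R limit calculations for Problem 1 (the check at the end simply flags subgroup means falling outside the limits):

# Subgroup means and ranges from Problem 1 (subgroup size n = 5)
xbar = [50.4, 26.0, 86.6, 95.6, 39.2, 88.9, 61.3, 22.5, 59.4]
rng  = [35, 44, 23, 65, 18, 26, 51, 19, 33]
A2, D3, D4 = 0.58, 0.0, 2.11            # control chart constants for n = 5

grand_mean = sum(xbar) / len(xbar)      # ~58.87
r_bar = sum(rng) / len(rng)             # ~34.89

ucl_x = grand_mean + A2 * r_bar         # ~79.1
lcl_x = grand_mean - A2 * r_bar         # ~38.6
ucl_r = D4 * r_bar                      # ~73.6
lcl_r = D3 * r_bar                      # 0

# Subgroups whose mean lies outside the X-bar limits (several do here,
# so the process is not in a state of statistical control)
out = [i + 1 for i, x in enumerate(xbar) if not lcl_x <= x <= ucl_x]
print(ucl_x, lcl_x, ucl_r, lcl_r, out)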
Problem 2: The following data show the rejection pattern of a food product, based on its odour, for various subgroups produced in a single day. Assume that the subgroup size is constant at 96. Determine the revised average fraction rejected for that day after excluding, once, those subgroups having ‘p’ above the upper control limit.
Subgroup No. 1 2 3 4 5 6 7
Number rejected 4 3 12 2 6 3 2
Ans: Find the proportion defective in each subgroup, $p_1, p_2, p_3, \ldots, p_7$, and then find $\bar{p}$, the average proportion defective. Use this $\bar{p}$ to estimate the standard deviation σ:

$$\sigma = \sqrt{\frac{\bar{p}(1-\bar{p})}{n}}$$

Use UCL = $\bar{p} + 3\sigma$

CL = $\bar{p}$

and LCL = $\bar{p} - 3\sigma$

Plot the observed points $p_1, p_2, p_3, \ldots, p_7$ and see whether any point is out of control. Comment on the observations: points beyond the UCL and LCL indicate assignable causes, while variation within the UCL and LCL reflects non-assignable causes.
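A possible Python sketch of this p-chart calculation for the data of Problem 2 (the revised average is obtained by dropping, once, any subgroup whose fraction rejected exceeds the UCL):

from math import sqrt

rejected = [4, 3, 12, 2, 6, 3, 2]                 # number rejected per subgroup
n = 96                                            # constant subgroup size

p_bar = sum(rejected) / (n * len(rejected))       # average fraction rejected
sigma = sqrt(p_bar * (1 - p_bar) / n)

ucl = p_bar + 3 * sigma
lcl = max(0.0, p_bar - 3 * sigma)                 # a negative LCL is taken as zero

# Exclude subgroups whose fraction rejected is above the UCL, then recompute
kept = [r for r in rejected if r / n <= ucl]
revised_p_bar = sum(kept) / (n * len(kept))
print(p_bar, ucl, lcl, revised_p_bar)             # subgroup 3 (12/96) is excluded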

Problem 3: 20 successive wafers (100 chips on each) are inspected. The numbers of defects found in the wafers are given below. Draw a suitable control chart and comment.

Wafer No. 1 2 3 4 5 6 7 8 9 10
No. of defects 16 14 28 16 12 20 10 12 10 17
Wafer No. 11 12 13 14 15 16 17 18 19 20
No. of defects 19 17 14 16 15 13 14 16 31 20

Ans:


This is a problem of Control chart using counts. We are inspecting 20 successive wafers, each
containing 100 chips; the wafer is the inspection unit. The observed number of defects are given
in the table of the problem.

It is known that, assuming the distribution of defects in the sample follows a Poisson distribution, both the mean and the variance of this distribution are equal to c. Then the k-sigma control chart is
UCL = c + k√c
Center Line = c
LCL = c − k√c.
If the LCL comes out negative, then there is no lower control limit. This control scheme assumes that a standard value for c is normally available. If this is not the case, as here, then c may be estimated as the average of the number of defects in a preliminary sample of inspection units, designated c̄. Usually k is set to 3 by most practitioners.

From this table we have

$$\bar{c} = \frac{\text{total number of defects}}{\text{total number of samples}} = \frac{330}{20} = 16.5$$

UCL = $\bar{c} + 3\sqrt{\bar{c}}$ = 16.5 + 3√16.5 = 16.5 + 12.18 = 28.68

LCL = $\bar{c} - 3\sqrt{\bar{c}}$ = 16.5 − 12.18 = 4.32

Control Chart for Counts

We have seen that the 3-sigma limits for the c chart, where c represents the number of nonconformities, are given by

$$\bar{c} \pm 3\sqrt{\bar{c}}$$

where it is assumed that the normal approximation to the Poisson distribution holds, hence the symmetry of the control limits. It is shown in the literature that the normal approximation to the Poisson is adequate when the mean of the Poisson is at least 5. When applied to the c chart, this implies that the mean number of defects should be at least 5. This requirement will often be met in practice; still, when the mean is smaller than 9 (solving the above equation), there will be no lower control limit.
It is observed from the above control chart that one data point, at sample no. 19, has gone above the UCL.
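The c-chart calculation for Problem 3 can be reproduced with a few lines of Python (with these data only wafer 19, with 31 defects, plots outside the limits):

from math import sqrt

# Number of defects found on each of the 20 wafers (100 chips per wafer)
defects = [16, 14, 28, 16, 12, 20, 10, 12, 10, 17,
           19, 17, 14, 16, 15, 13, 14, 16, 31, 20]

c_bar = sum(defects) / len(defects)        # 330 / 20 = 16.5
ucl = c_bar + 3 * sqrt(c_bar)              # ~28.7
lcl = max(0.0, c_bar - 3 * sqrt(c_bar))    # ~4.3

out = [i + 1 for i, c in enumerate(defects) if c > ucl or c < lcl]
print(c_bar, ucl, lcl, out)                # out == [19]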

Enumeration of the role of suppliers of input materials into the


manufacturing process in generating output quality for customers’
satisfaction:
Supplier partnership is one of the TQM tools. One of the reasons for the success of many leading companies of the world, like Toyota Motors, Mercedes-Benz, BMW and DuPont, is developing and nurturing reliable and good vendors of materials, parts, components and subassemblies. It is therefore essential for a TQM company to evaluate the vendors’ supplies and rate them according to grade, for selection by the company as its rated vendors.
Importance of the vendor/suppliers: The dynamics of market forces demand that every organisation convert itself into a system of operations for cost effectiveness and improved quality. In all such organisations, the effectiveness and efficiency depend largely on the network of suppliers involved. A large proportion of the work is outsourced in both service industries and manufacturing, but it is more critical in manufacturing, because the company’s operations with the vendors’ materials will determine the ultimate quality of the products. A long-term contract between the vendor and the company, including the implementation of a prevalent quality system such as the ISO 9000 quality standards, is more beneficial for both; for that to happen, both should work as a team for mutual benefit so that they can maintain the partnership. In such cases, the vendor can be involved in the development of new products or service tools right from the design stage and subsequently in design review, and by that the vendors will be able to understand the requirements of the supplies. It will help in reducing the design cycle time for the company, and the vendors will also grow with the organisation/company. As a part of the quality system, a quality audit of the company will identify non-conformances of the company and the vendors, and effective, timely communication will reduce the degree of non-conformance on either side.
Incoming inspection:
Vendors’ materials will be inspected on the agreed terms of the sampling plan and all other aspects, with appraisal of any nonconformance, dispute or issues of delivery, price and quantity, so that the manufacturing or service providing system remains smooth.
Vendor Rating: For the selection of vendors, the company will conduct periodic assessment and rating of the vendors with respect to the appropriate parameters as under, and the results should be duly communicated to the vendors:
 Quality,
 Price,
 Delivery &
 Service.
Vendor rating is an important part of the quality system; it is an objective-oriented method and a continuous process.
Quality rating: The quality rating, QR, for a lot or consignment is given by:

$$QR = \frac{Q_1 + X_1 Q_2 + X_2 Q_3}{Q}$$
Where, Q1 = quantity accepted
Q2 = quantity accepted with deviation,
Q3 = quantity accepted with rectification,
Q4 = quantity rejected,
Q = total quantity supplied
X1 = demerit factor ( less than one) when material is accepted with deviation, and
X2 = demerit factor (less than one) when material is accepted with rectification
The values of X1 & X2 are decided by the management.
Price Rating: The price rating, PR, for a lot or consignment is given by

$$PR = \frac{P_L}{P}$$

where P_L = the lowest of the prices quoted by vendors for the item, and
P = the price quoted by the vendor being rated.

Delivery rating: The delivery rating, DR, for a lot or consignment depends on the quantity supplied within the stipulated delivery time and also on the actual delivery time for the full consignment. The delivery rating may, therefore, be obtained by the following formula:

$$DR = \frac{Q_1}{Q} \times \frac{T}{T \cdot p + 1.5\, T_1 \cdot q}$$

where Q = quantity promised to be supplied within the stipulated delivery time,
Q1 = actual quantity supplied within the stipulated delivery time,
T = promised delivery time for the full consignment,
T1 = actual delivery time for the full consignment,
p = Q1/Q, and q = 1 − p
Composite vendor rating: We have to assign a weightage to each of the ratings to get a composite vendor rating for each lot. For some supplies price may be more important, for some quality may be the most important, and so on. Depending on the relative importance, management may assign a weightage to each of the ratings. For instance, suppose the company arrives at the following ratings and weightages for a component type X:
Rating                          Weightage
Quality, QR = 60% (0.60)        Qw = 40
Price, PR = 80% (0.80)          Pw = 40
Delivery, DR = 25% (0.25)       Dw = 20
Now we can arrive at a composite vendor rating using the formula for the vendor A discussed in
the following example:
VR = Qw*QR + Pw*PR + Dw*DR = 40x0.6 + 40 x 0.8 + 20 x 0.25 = 61
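A small sketch of this composite rating calculation for vendor A, using the weightages (40, 40, 20) and the ratings (0.60, 0.80, 0.25) shown above; it only restates the arithmetic of the example.

def composite_rating(qr, pr, dr, qw=40, pw=40, dw=20):
    # VR = Qw*QR + Pw*PR + Dw*DR
    return qw * qr + pw * pr + dw * dr

print(composite_rating(0.60, 0.80, 0.25))   # 61.0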
Vendor rating for a product is obtained as weighted average of the ratings for the lots received
over a period of time. This may be computed by the following formula:
Average rating = (n1*VR1 + n2*VR2 + n3*VR3 + ... + nk*VRk) / (n1 + n2 + n3 + ... + nk)
Where, VRi = vendor rating of the ith lot, and
ni = lot size of the ith lot
Based on the vendor rating as above, the vendors may be classified into three classes. An
example to illustrate the classification scheme is as under
Rating obtained       Class of vendor
90 & above            A
80 – 90               B
Below 80              C
Vendor Rating Example:
Work out the vendor rating for the following data of three companies X, Y and Z. Weightage is Quality –
50%, Price – 20%, Delivery – 20%, TQM System – 10%.
Table 1: Vendor rating example

Service Details                   Company X      Company Y      Company Z
Total quantity supplied           100 Nos.       90 Nos.        80 Nos.
Total quantity accepted           95 Nos.        88 Nos.        76 Nos.
Unit price of item                Rs.10.00       Rs.9.80        Rs.10.20
Delivery expected as per PO       4 weeks        4 weeks        4 weeks
Delivery arranged                 5 weeks        4-5 weeks      6 weeks
Care taken for TQM system         75%            80%            70%
Solution:

                                    Company X               Company Y               Company Z
(a) Quality Rating                  95/100 x 100 = 95%      88/90 x 100 = 97.8%     76/80 x 100 = 95%
    Multiply by 50% weightage       95 x 0.5 = 47.5%        97.8 x 0.5 = 48.9%      95 x 0.5 = 47.5%
(b) Price Rating =
    (Lowest/Supplier's price) x 100 9.8/10 x 100 = 98%      9.8/9.8 x 100 = 100%    9.8/10.2 x 100 = 96%
    Multiply by 20% weightage       98 x 0.2 = 19.6%        100 x 0.2 = 20%         96 x 0.2 = 19.2%
(c) Delivery Rating                 4/5 x 100 = 80%         4/4.5 x 100 = 88.9%     4/6 x 100 = 66.7%
    Multiply by 20% weightage       80 x 0.2 = 16%          88.9 x 0.2 = 17.8%      66.7 x 0.2 = 13.3%
(d) TQM System, 10% weightage       75 x 0.1 = 7.5%         80 x 0.1 = 8.0%         70 x 0.1 = 7.0%
Total rating percentage             X = 90.6%               Y = 94.7%               Z = 87.0%

Hence, vendor Y, having the highest composite rating (94.7%), is preferred for supply of the item under
consideration.
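As a cross-check, here is a short sketch (not part of the original solution) that reproduces these totals; the 4-5 week delivery of company Y is taken as 4.5 weeks, as in the solution, and the lowest quoted price (Rs. 9.80) and the promised 4-week delivery come from Table 1.

vendors = {                       # supplied, accepted, unit price, delivery weeks, TQM %
    "X": (100, 95, 10.00, 5.0, 75),
    "Y": ( 90, 88,  9.80, 4.5, 80),
    "Z": ( 80, 76, 10.20, 6.0, 70),
}
lowest_price, promised_weeks = 9.80, 4.0

for name, (supplied, accepted, price, weeks, tqm) in vendors.items():
    quality  = accepted / supplied * 100
    price_r  = lowest_price / price * 100
    delivery = promised_weeks / weeks * 100
    total = 0.5 * quality + 0.2 * price_r + 0.2 * delivery + 0.1 * tqm
    print(f"{name}: {total:.1f}%")   # X: 90.6%, Y: 94.7%, Z: 87.0%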


Definition of process capability and explanation of the role of standard
deviation (σ) in determining process capability:

A process is a unique combination of tools, materials, methods, and people engaged in
producing a measurable output; for example, a manufacturing line for machine parts. All
processes have inherent statistical variability which can be evaluated by statistical methods. Two
parts of process capability are: 1) measure the variability of the output of a process, and 2)
compare that variability with a proposed specification or product tolerance.
The ability of a process to meet specifications can be expressed as a single number using
a process capability index or it can be assessed using control charts. Either case requires running
the process to obtain enough measurable output so that engineering is confident that the process
is stable and so that the process mean and variability can be reliably estimated. 
A capable process meets customer requirements 100% of the time. Customer requirements are
defined using an upper specification limit (USL) and a lower specification limit (LSL).
Process capability metrics Cp (process potential) and Cpk (process capability index) are used to
determine how well the output of a stable process meets these specifications.
Cp tells us how well the data would fit within the specification limits (USL, LSL), and Cpk tells us
how centred the data are between the specification limits. The two indices for measuring capability
are therefore Cp and Cpk. The formulas used to calculate them assume that the process is stable and
the data are normally distributed.
 Cp: a measure of the ability of a process to produce consistent results and meet
specifications – the ratio between the permissible spread and the actual spread of a process

 Cpk: a variation of Cp which takes off-centeredness into account

In the denominator of the expressions for both Cp & Cpk, 6σ refers to the natural variability of
the stated process. The standard deviation, σ, is a measure of the natural variability of the output of a
process subject to a definite set of parameters which are measurable and controllable. The process
variation is designated by an unbiased estimate of σ, expressed as
σ = sqrt[ Σi=1..n (xi − x̄)² / (n − 1) ],
where x̄ denotes the mean of the outputs and xi is an individual output.

This σ variation is taken on either side of the mean with different multiples of σ, like σ, 2σ, 3σ, to
get a total width of variation around the mean of 2σ, 4σ, 6σ, etc. Each total width of variation
corresponds to a fraction of the total number of outputs remaining within it. Statistically, it can be
calculated that the 6σ width covers 99.73% of the outputs of a normally distributed process.
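A minimal sketch of the (n − 1) estimate above, run on a made-up sample of process outputs; Python's statistics.stdev uses exactly this denominator.

import statistics

outputs = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.05, 9.95]   # hypothetical measurements
mean = statistics.mean(outputs)
sigma = statistics.stdev(outputs)       # sample standard deviation with the (n - 1) denominator
print(f"mean = {mean:.3f}, sigma = {sigma:.3f}, 6-sigma spread = {6 * sigma:.3f}")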

The table below shows the percent defective and the percentage yield at different sigma levels
(allowing for the conventional 1.5σ shift used in Six Sigma):

Sigma level    Percent defective    Percentage yield
1              69%                  31%
2              31%                  69%
3              6.7%                 93.3%
4              0.62%                99.38%
5              0.023%               99.977%
6              0.00034%             99.99966%

The term "six sigma process" comes from the notion that if one has six standard
deviations between the process mean and the nearest specification limit, as shown in the graph,
practically noitems will fail to meet specifications. This is based on the calculation method
employed in process capability studies. Capability studies measure the number of standard
deviations between the process mean and the nearest specification limit in sigma units,
represented by the Greek letter σ (sigma). As process standard deviation goes up, or the mean of
the process moves away from the center of the tolerance, fewer standard deviations will fit
between the mean and the nearest specification limit, decreasing the sigma number and
increasing the likelihood of items outside specification.

Differentiation between the process potential index Cp and the process
capability index Cpk of a process:
A process is undertaken in an organisation for manufacturing or assembling products or
preparing service packages, etc., to satisfy customers' requirements. The customer sets his
requirements by means of an upper limit and a lower limit, the difference of which is called the
tolerance, and the process delivers products (goods or services) with quality limits depending on its
natural variations, which are not known. Therefore the process variations are studied and the process
parameters are reset time and again to keep it within the limits of tolerance. After a reasonable time
of operation, the process attains stability within the limits of tolerance.

After establishing stability - a process in control - the process can be compared to the tolerance
to see how much of the process falls inside or outside of the specifications. It is to be noted that
this analysis requires that the process be normally distributed. Distributions with other shapes are
beyond the scope of this material. Specifications are not related to control limits - they are
completely separate. Specifications reflect "what the customer wants", while control limits tell us
"what the process can deliver".

The first step is to compare the natural six-sigma spread of the process to the tolerance. This
index is known as Cp.

Here is the information you will need to calculate the Cp and Cpk:

 Process average, or x̄
 Upper Specification Limit (USL) and Lower Specification Limit (LSL).
 The Process Standard Deviation (σest). This can be calculated directly from the individual
data, or can be estimated by: σest = R̄ / d2

Cp is calculated as follows:
Cp = (USL − LSL) / (6 × σest)
Following is an illustration of the Cp concept:

Cp is often referred to as "Process Potential" because it describes how capable the process could
be if it were centered precisely between the specifications. A process can have a Cp in excess of
one but still fail to consistently meet customer expectations, as shown by the illustration below:
The measurement that assesses process centering in addition to spread, or variability, is Cpk. Cpk
is calculated as follows:
Cpk = min[ (USL − x̄) / (3 × σest), (x̄ − LSL) / (3 × σest) ]

The illustrations below provide graphic examples of Cp and Cpk calculations using hypothetical
data:

The Lower Specification Limit is 48

The Nominal or Target Specification is 55

The Upper Specification Limit is 60

Therefore, the Tolerance is 60 - 48, or 12

As seen in the illustration, the 6-Sigma process spread is 9.

Therefore, the Cp is 12/9 or 1.33.

The next step is to calculate the Cpk index:


Cpk is the minimum of: (57 − 48)/4.5 = 2, and (60 − 57)/4.5 = 0.67

So Cpk is 0.67, indicating that a small percentage of the process output is defective (about 2.3%).
Without reducing variability, the Cpk could be improved to a maximum of 1.33, the Cp value, by
centering the process. Further improvement beyond that level will require actions to reduce
process variability.
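A small sketch reproducing the Cp and Cpk arithmetic of this example (LSL = 48, USL = 60, process mean = 57, σest = 1.5, i.e. a 6-sigma spread of 9):

LSL, USL, mean, sigma_est = 48, 60, 57, 1.5

cp  = (USL - LSL) / (6 * sigma_est)
cpk = min((USL - mean) / (3 * sigma_est), (mean - LSL) / (3 * sigma_est))
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")   # Cp = 1.33, Cpk = 0.67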

When the capability of a process is understood and documented, it can be used for measuring
continual improvement using trends over time, prioritizing the order of process improvements to
be made, and determining whether or not a process is capable of meeting customer requirements.

Narrate and illustrate the differences between TQM and 6σ, where σ
stands for the estimate of the standard deviation of the population of data:
Ans:
TQM Six Sigma
A functional specialty within the An infrastructure of dedicated change
organization. agents. Focuses on cross-functional value
delivery streams rather than functional
division of labour.
Focuses on quality. Focuses on strategic goals and applies them
to cost, schedule and other key business
metrics.
Motivated by quality idealism. Driven by tangible benefits for a major
stakeholder group (customers,
shareholders, and employees).
Loosely monitors progress toward goals. Ensures that the investment produces the
expected return.
People are engaged in routine duties “Slack” resources are created to change key
(Planning, improvement, and control). business processes and the organization
itself.
Emphasizes problem solving. Emphasizes breakthrough rates of
improvement.
Focuses on standard performance, e.g. ISO Focuses on world class performance, e.g.,
9000. 3.4 PPM error rate.
Quality is a permanent, full-time job. Six Sigma job is temporary. Six-Sigma is a
Career path is in the quality profession. stepping-stone; career path leads
elsewhere.
Provides a vast set of tools and techniques Provides a selected subset of tools and
with no clear framework for using them techniques and a clearly defined framework
effectively. for using them to achieve results (DMAIC).
Goals are developed by quality department Goals flow down from customers and
based on quality criteria and the senior leadership's strategic objectives.
assumption that what is good for quality is Goals and metrics are reviewed at the
good for the organization. enterprise level to assure that local sub-
optimization does not occur.
Developed by technical personnel. Developed by CEOs.
Focuses on long-term results. Expected Six Sigma looks for a mix of short-term
payoff is not well-defined. and long-term results, as dictated by
business demands.

Business Process Benchmarking:


Benchmarking is a process of comparison of two or more brands of products or services or
processes or organisational practices. Business process benchmarking (BPB) is comparing a
business process with the best in that market area. Benchmarking is basically selecting a
reference point for standards. It is one of the tools of TQM. For example, running the same
computation with the same software on different brands of computer hardware measures the relative
speed of computation and hence sets the benchmark for that job. Likewise, measuring the mileage of
automobile cars of different brands with the same capacity of fuel combustion chamber sets a
benchmark. This idea is applied in every field of business or manufacturing practice.
There are basically two types of benchmarking: i) Problem-based benchmarking; ii) Process-based
benchmarking.
Problem-based benchmarking arises out of a) adverse customer feedback; b) increasing quality
cost; c) alarming error rates; d) increase in cycle time.
Process-based benchmarking arises out of a) defined mission; b) defined objectives; c) defined
priorities.
Benchmarking does not merely compare performance standards; it also raises the organisation's
standards towards those of the best competitors. Therefore, benchmarking helps organisations make a
quantum jump and reach the level of the best practices in the industry.

There are three types of benchmarking practices as given below:

i) Internal, ii) Competitive, iii) Functional

Internal benchmarking involves the teams or divisions of workgroups within the organisation
in a comparison of the performance of the groups, so as to improve the overall performance of the
organisation and hence achieve a higher return on investment. A few vital processes out of all the
processes in the process flow chart should be selected for the purpose and involved in the
benchmarking activities.

Competitive benchmarking refers to selecting a process for improvement and then evaluating and
documenting the measures of its activities. For instance, in the case of a diagnostic centre, many
measures could be thought of, like waiting time, late arrival of patients or of the staff, machine
downtime and so on. Thus all processes, step by step, can be involved, and the overall job could
then be done at lower cost with higher efficiency. The efficiency level should then be compared
with that of the other best relevant diagnostic centres or competitors of the zone for better
performance.

Functional benchmarking compares the performance of similar processes of one organisation
with that of the best performing organisation in different other fields of business or
manufacturing. For example, the process of calibration (receiving the instruments, checking the
calibration, setting them right and then despatching the instruments) is similar to the process of
registering patients, testing the clinical parameters of the patients and despatching the test
reports to the patients. In such cases, the processing systems may be compared between the two,
setting the timeline and the actions to be taken for improving the system.

Identifying the processes for benchmarking is an important job in the organisation. An input
into a process results in an output from the process, which cyclically becomes an input into another
process resulting in another output. The series of such inputs and outputs makes up the whole process
in the organisation. So, on the completion of one process the output should be reported to the next
process, where the percentage of errors in the reported output should be assessed and reported
back to the earlier process for corrective actions. The bottlenecks in the overall process can thus
be identified through a series of assessments of the processes over some time.
For such identification, selection of a process, determination of the vital errors therein and
prioritising the processes for corrective actions are the three steps involved in benchmarking the
entire set of processes one after the other.

The next steps are to select the role model partners specific to a particular processing job and to
select the team for conducting such activities. The project of benchmarking for a vital process or
processes is planned, scheduled, directed and controlled by the constituted team, with due
documentation of the purpose, the process under consideration, the reasons for selection, the scope,
the description of the key practices, the process measures identified, the estimated opportunity for
improvement and the anticipated impact after project completion.

The benchmarking process should adopt a model, like Motorola's five-step model, Westinghouse's
seven-step model, Xerox's 10-step model or Deming's wheel based on Shewhart's PDCA cycle, for
conducting the improvement activities.

Explain the Deming cycle (the wheel of continuous improvement) in relation to quality improvement:
Definition of Taguchi’s loss function:
Ans: Dr. Genichi Taguchi's work on experimental design in Japanese manufacturing companies, dating
back to the mid 1950s, developed an interesting quality cost model that today is called Taguchi's loss
function.
The basic philosophy is that whenever something is produced, a cost is imposed on society; part
of the cost is borne by the producer and part is borne by the customer as the result of using the
product or service. If these costs are plotted as a function of "quality", the producer's cost tends to
increase with increased quality, while the customer's costs tend to decrease because of greater
efficiency, fewer breakdowns, and so on. The total cost, or loss to society, is the sum of these two
cost functions. It tends to decrease over some range of increasing "quality" until a minimum is
reached and then increases beyond that point.
He views that the customer becomes increasingly dissatisfied as the performance of the product
or process moves away from the target, say τ. He suggests a quadratic curve to represent customer
dissatisfaction with a process or product's performance. The quadratic curve is called the
quadratic loss function, as shown in the diagram below.

The quadratic loss function is also centered on the target τ. As the performance moves away
from the target, there are losses. Therefore, producing within specification limits is not good
enough. The premise of loss function is that at some point as a process moves away from the
target value, there is a corresponding decrease in the quality. The quality loss may be difficult to
discern by the customer, but eventually it reaches a threshold where a complaint is made or the
customer is dissatisfied. The ideal quality defined by Taguchi "is that quality which the customer
would experience when the product performs on target every time it is used, under all
intended operating conditions, throughout its intended life, without causing harmful side effects to
society."
The cost of a product consists of two elements as given below:
 Unit manufacturing cost – the cost incurred in manufacturing the product, including design
cost, material cost, manufacturing cost, depreciation of the machinery, etc. These are the
costs incurred before delivery to the customer. A low manufacturing cost satisfies the
customer.
 Quality cost – the cost incurred on the product after delivery to the customer, including the cost
of operating the product (energy, environmental control like temperature and humidity control,
and cost of repairs). A low quality loss satisfies the customer.

The financial loss due to variation is called societal loss. It is approximately proportional to the
square of the deviation from the target.

Thus, quality loss occurs even when the product performs within the specification limits but
away from the target. Taguchi's loss function recognises the customer's desire to have products
that are consistent and the producer's drive to control manufacturing cost. The goal of the quality
loss function is to reduce the societal loss.
Types of loss function:
Loss functions enable calculation of the societal loss when products deviate from the target value.
Taguchi developed many loss functions with different equations to suit different applications.
There are three types of loss functions as given below:
 Nominal-the-best
 Lower-the-better (smaller-the-better)
 Higher-the-better
Each one of them is suitable for a class of applications.

Nominal-the-best: This loss function is applicable to those parameters which have a central
value and an allowable tolerance on either side. The target τ is not necessarily the average process
performance, but the choice of the customers. It is that value with which the majority of
customers will be satisfied. It may not be directly derivable. The quadratic loss function in the
case of nominal-the-best is given by
Loss = K (Y − τ)²
Where Loss = cost incurred as performance deviates from the customer's target value,
Y = actual performance,
τ = target value,
K = Rs./∆², where ∆ = USL − τ or τ − LSL.
K is also called the quality loss coefficient. Let us look at an example to understand this type of
loss function.
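A rough sketch of the nominal-the-best loss (not from the text): the Rs. 50 cost at the specification limit and the half-tolerance ∆ = 0.5 used to derive K are hypothetical numbers, chosen only to show how the formula is applied.

cost_at_limit = 50.0            # Rs. lost when Y sits exactly at USL or LSL (assumed)
delta = 0.5                     # USL - tau, the half tolerance (assumed)
K = cost_at_limit / delta**2    # quality loss coefficient

def taguchi_loss(y, tau):
    # Loss = K * (Y - tau)^2
    return K * (y - tau) ** 2

print(round(taguchi_loss(10.2, 10.0), 2))   # 8.0, a loss even if 10.2 is within specification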
Smaller-the-better:
This is useful, for instance, in many day-to-day applications such as:
 Waiting time for a bus
 Waiting time in a restaurant, etc.
Here the target is ideally zero, so the loss reduces to Loss = K*Y². The loss function is shown below:

Explanation of the bath-tub curve in relation to the failure rate over time of
an industrial product, say a computer monitor:
A bathtub curve is a graph of failure rate over time. Most industrial products follow the human
failure pattern in the form of the bathtub curve, as shown below:

There are three regions in the bathtub graph. Region 1 is called the infant-mortality failure region.
As we know, the failure rate is relatively high during the infant stages even in human beings. A
similar phenomenon can be expected in every manufactured product. Region 3 is called the
wear-out failure period. During the wear-out period, since the product's life has been utilised fully,
the product fails rapidly due to wear & tear. The period in between, i.e., region 2, is called the
useful life period of the product. This could be even of the order of 20 years for electronic
products or 10 years for an automobile product. The shape of the curve may also vary with the type
of product. The product, when sold to a customer, should have crossed the infant-mortality period,
which is achieved by the industries through burn-in tests or stress screening. Some manufacturing
companies remove those products which could fail due to infant mortality and deliver the
products in the useful life period, where we expect fewer failures. These failures will be due to
chance causes.
Region 2 is also called the constant failure rate region since the failure rate will be nearly
constant. Reliability of a product during the region 2 life period can be estimated as a measure of
the probability that the product will not fail to perform as intended for a period of time ‘t’ under
stated operating conditions.
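Since region 2 has a nearly constant failure rate, reliability over the useful life is commonly modelled with the exponential law R(t) = exp(−λt); the minimal sketch below assumes that model and a hypothetical failure rate.

import math

failure_rate = 0.0001                     # failures per hour (assumed value)

def reliability(t_hours):
    # R(t) = exp(-lambda * t) for a constant failure rate lambda
    return math.exp(-failure_rate * t_hours)

print(f"R(1000 h) = {reliability(1000):.3f}")   # about 0.905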

Relation of the quality of a product with its ultimate cost to the
customers:
Quality, cost & profit relationship: Many people think that quality costs money and adversely
affects profits. But those costs are the costs of doing it wrong the first time. Quality in the long run
results in increased profitability.
For example, if we design the product right the first time and build it right the first time, we save
all the costs of redesign, rework, scrap, resetting, repair, warranty work, etc.

Quality and Profit:

Traditional thinking is that a little quality improvement is achieved only at higher cost and hence
the profit becomes less. But the paradigm shift in quality dictates that a little increase in quality will
bring down the overall cost and hence bring more profit.
Quality leads to higher productivity through higher production due to:
• Improved cycle time and reduced errors and defects,
• Increased use of machines and resources,
• Improved material use from reduced scrap and rejects,
• Increased use of personnel resources,
• Lower level of asset investment required to support operations,
• Lower service and support costs from eliminated waste, rework and non-value-added
activities.
Increased profitability is due to:
• Larger sales,
• Faster turnover,
• Lower production costs.

Higher quality at lower price:

• If the organization does not offer a high quality product or service, it will soon go out of
business.
• But just having high quality will not be enough, because the competitors will also offer
high quality, possibly at a lower price.
• To win, the organization will have to offer high quality at a lower price than its
competitors.

The concept of quality costs was first mentioned by Juran and this concept was primarily applied
in the manufacturing industry. The price of nonconformance (Philip Crosby) or the cost of poor
quality (Joseph Juran), the term 'Cost of Quality', referred to the costs associated with providing
poor quality product or service.
Juran advocated the measurement of costs on a periodic basis as a management control tool.
Quality processes cannot be justified simply because "everyone else is doing them" - but return
on quality (ROQ) has dramatic impacts as companies mature. Research shows that the costs of
poor quality can range from 15%-40% of business costs (e.g., rework, returns or complaints,
reduced service levels, lost revenue). Most businesses do not know what their quality costs are
because they do not keep reliable statistics. Finding and correcting mistakes consumes an
inordinately large portion of resources. Typically, the cost to eliminate a failure in the customer
phase is five times greater than it is at the development or manufacturing phase. Effective quality
management decreases production costs because the sooner an error is found and corrected, the
less costly it will be. 

Cost of quality comprises four elements:

1. External Failure Cost: cost associated with defects found after the customer receives the
product or service ex: processing customer complaints, customer returns, warranty claims,
product recalls.
2. Internal Failure Cost: Cost associated with defects found before the customer receives the
product or service ex: scrap, rework, re-inspection, re-testing, material review, material
downgrades.
3. Inspection (appraisal) Cost: cost incurred to determine the degree of conformance to quality
requirements (measuring, evaluating or auditing) ex: inspection, testing, process or service
audits, calibration of measuring and test equipment.
4. Prevention Cost: cost incurred to prevent (keep failure and appraisal cost to a minimum) poor
quality ex: new product review, quality planning, supplier surveys, process reviews, quality
improvement teams, education and training.

The most widely accepted method for measuring and classifying quality costs is the prevention,
appraisal, and failure (PAF) model. The following five-step process can be used:
1. Gather some basic information about the number of failures in the system
2. Apply some assumptions to that data in order to quantify the data
3. Chart the data based on the four elements listed above and study it
4. Allocate resources to combat the weak-spots
5. Do this study on a regular basis and evaluate your performance

  Figure 1: Cost and Value of Quality

It is believed that the customer places a certain value on quality. At first, if the quality is too
low, the customer wouldn't buy the products. When quality is improved, the costs also increase,
but beyond a point the customer wouldn't pay the higher prices one would have to charge. It is found
that profitability is maximized when the total cost of poor quality is about 25 percent of sales. The
problem is that there is very little profit, even at that cost level.
It is known that the typical three-sigma company spends about 25 percent of each sales rupee on
the cost of poor quality. Before starting the six sigma journey, Peerless was in a similar shape. The
figure below illustrates the difference between three-sigma and six-sigma quality.

 Figure 2: Three Sigma Profits vs. Six Sigma Profits

Right now, suppose a business is only capable of operating at a level equivalent to about three-
sigma quality. Trying to get better quality out of the existing systems only adds cost. To
develop new systems that deliver better quality and lower costs simultaneously, implementing
six sigma is a must.

Six sigma is not a destination but a journey of continuous improvement. Of course, no
company will go from 3-sigma to 6-sigma in one big jump. Instead, overall performance
will move from three sigma to 4-sigma, then to 5-sigma and so on as people are trained and
systems are redesigned and improved. Figure 3 illustrates the expected progress toward 6-sigma.

To summarize, six-sigma is not about quality for the sake of quality; it is about providing
better value to customers, investors and employees.
To paraphrase: "Six sigma is a journey of a thousand miles. Creating a roadmap that links
customer satisfaction, quality and costs is the first step."

Figure 3: The Journey to Six-Sigma

Discussion about the six sigma spread of the control limits in a process
control chart:

Statistical process control (SPC) is a method of quality control which employs statistical
methods to monitor and control a process. This helps to ensure that the process operates
efficiently, producing more specification-conforming products with less waste (rework or scrap)
and time. As a statistical process control tool used to determine whether a manufacturing or business
process is in a state of control, control charts, also known as Shewhart charts or process-
behaviour charts, are most appropriately used.
An advantage of SPC over other methods of quality control, such as "inspection", is that it
emphasizes early detection and prevention of problems, rather than the correction of problems
after they have occurred.
Shewhart concluded that while every process displays variation, some processes display
variation that is natural to the process ("common" /natural / non-assignable causes of variation);
these processes he described as being in (statistical) control. Other processes additionally display
variation that is not present in the causal system of the process at all times ("special"/assignable
causes of variation), which Shewhart described as not in control.

SPC uses statistical tools to observe the performance of the production process in order to detect
significant variations before they result in the production of a sub-standard article. Any source of
variation at any point of time in a process will fall into one of the above two classes, assignable
or non-assignable

An example of a Shewhart control chart is shown for clarity.

Choice of limits
Shewhart set 3-sigma (3-standard deviation) limits on the following basis.
 The coarse result of Chebyshev's inequality that, for any probability distribution,
the probability of an outcome greater than k standard deviations from the mean is at most
1/k².
 The finer result of the Vysochanskii–Petunin inequality, that for any unimodal probability
distribution, the probability of an outcome greater than k standard deviations from
the mean is at most 4/(9k²).
 In the Normal distribution, a very common probability distribution, 99.7% of the
observations occur within three standard deviations of the mean.
Calculation of standard deviation
The standard deviation (error) for the common-cause variation in the process is used to calculate
the control limits. Hence, the usual estimator, in terms of sample variance, is not used as this
estimates the total squared-error loss from both common- and special-causes of variation.
An alternative method is to use the relationship between the range of a sample and its standard
deviation derived by Leonard H. C. Tippett, as an estimator which tends to be less influenced by
the extreme observations which typify special-causes
The application of SPC involves three main phases of activity:
1. Understanding the process and the specification limits.
2. Eliminating assignable (special) sources of variation, so that the process is stable.
3. Monitoring the ongoing production process, assisted by the use of control charts, to
detect significant changes of mean or variation.
99.7% of observations sampled from a Normal distribution fall within 3 standard deviations of
the mean, i.e. between the limits µ ± 3σ. Thus, if observations started to appear outside these
limits we would suspect that the process is no longer in control, and that the distribution of
X had changed. When the distribution of X is not normal, a famous result called Chebyshev's
inequality tells us that, irrespective of the distribution, at least 89% of observations fall within
the 3-sigma limits. The probability of falling within the 3-sigma limits increases towards 99.7%
as the distribution becomes more and more Normal.
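As an illustrative sketch only, the following computes 3-sigma limits for an x̄ chart from hypothetical subgroup data, estimating σ by R̄/d2 as mentioned earlier (d2 = 2.326 for subgroups of size 5).

subgroups = [                              # made-up measurements, 3 subgroups of 5
    [10.1, 9.9, 10.0, 10.2, 9.8],
    [10.0, 10.1, 9.9, 10.0, 10.1],
    [9.9, 10.2, 10.0, 9.8, 10.1],
]
d2 = 2.326                                 # control-chart constant for subgroup size 5
xbars  = [sum(s) / len(s) for s in subgroups]
ranges = [max(s) - min(s) for s in subgroups]

grand_mean = sum(xbars) / len(xbars)
sigma_est  = (sum(ranges) / len(ranges)) / d2
half_width = 3 * sigma_est / (len(subgroups[0]) ** 0.5)   # 3 sigma of the subgroup mean
print(f"CL = {grand_mean:.3f}, UCL = {grand_mean + half_width:.3f}, LCL = {grand_mean - half_width:.3f}")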

The different core values of a TQM company, along with the tools & techniques generally employed
in the system, shown with their inter-relationships.

Ans: TQM-Tools & Techniques


Total Quality Management (TQM) is a management strategy aimed at embedding awareness of
quality in all organizational processes. TQM defined as: “a set of systematic activities carried out
by the entire organization to effectively and efficiently achieve the organization’s objectives so
as to provide products and services with a level of quality that satisfies customers, at the
appropriate time and price”.
There are many proposed tools and techniques to achieve the TQM promises. Generally, a
technique can be considered as a number of activities performed in a certain order to reach the
values (Hellsten & Klefsjö, 2000). On the other hand, tools sometimes have statistical basis to
support decision making or facilitate analysis of data.
Most of the studies in TQM implementation focus on the concept of TQM. There are very few
studies in the literature that directly suggest an implementation roadmap of TQM tools and
techniques and usually they are not a complete roadmap. Therefore, a comprehensive roadmap
for TQM implementation is proposed that covers all the cited tools and techniques.
The management focus and commitment phase requires the use of data analysis tools (e.g. cause
& effect analysis, flow charts and Pareto analysis) to identify problem areas, quantify their
effects and prioritize the need for solution. During the intensive improvement phase the
introduction of more complex tools (e.g. statistical process control (SPC) and failure mode and
effects analysis (FMEA)) help to facilitate company-wide improvement.
Bunney and Dale (1997) also categorized TQM tools and techniques in two different ways: first
into five categories according to their application, and second into seven categories according to
the function in which they can be used. Table 1 shows the TQM tools according to their application.

Table 1: Analysis of Tools and Techniques in Application

Table 2: Analysis of Tools and Techniques used within each Function

In another study, TQM can be defined as a management system, which consists of three
interdependent units, namely core values, techniques and tools. The idea is that the core values
must be supported by techniques, such as process management, benchmarking, and customer
focused planning, or improvement teams, and tools, such as control charts, the quality house or
Ishikawa diagrams, in order to be part of a culture. They emphasized that this systematic
definition will facilitate for organizations the understanding and implementation of TQM.
Therefore, the implementation work should begin with the acceptance of the core values that
characterizing the culture of organization. The next step is to continuously choose techniques
that are suitable for supporting the selected values. Ultimately, suitable tools have to be
identified and used in an efficient way in order to support the chosen techniques.

Figure 1: TQM as a Management System Consists of Values, Techniques and Tools (Hellsten &
Klefsjö, 2000)
According to this view, the core values are the basis for the culture of the organization. Another
component is techniques, i.e. ways of working within the organization to reach the values. A
technique consists of a number of activities performed in a certain order. The important concept
here is that TQM really should be looked on as a system. The values are supported by techniques
and tools to form a whole. We have to start with the core values and ask: Which core values
should characterize our organization? When this is decided, we have to identify techniques that
are suitable for our organization to use and support our values. Finally, from that decision the
suitable tools have to be identified and used in an efficient way to support the techniques (see
Figure 2).

Figure 2: TQM Implementation Steps (Hellsten & Klefsjö, 2000)


As an example, “Benchmarking” should not be used without seeing the reason for using that
technique and an organization should not use just control charts without seeing the core value
behind the choice and a systematic implementation roadmap of the techniques and tools. It is, of
course, important to note that a particular technique can support different core values and the
same tool can be useful within many techniques.
Another work identified 15 frequently used TQM tools and classified them into qualitative
TQM tools and quantitative TQM tools. Qualitative tools consist mainly of subjective inputs,
which often do not intend to measure something of a numerical nature. Quantitative tools, on the
other hand, involve either the extension of historical data or the analysis of objective data, which
usually avoids the personal biases that sometimes contaminate qualitative tools. The tools are
categorized as below:
Qualitative tools: flow charts; Shewhart cycle (PDCA); cause-and-effect diagrams; multi-voting;
affinity diagrams; process action teams; brainstorming; selection grids; task lists.
Quantitative tools: control charts; scatter diagrams; Pareto charts; sampling; run charts; histograms.

Differences between Manufacturing and Service Organizations:

Defining quality in manufacturing organizations is often different from that of services.


Manufacturing organizations produce a tangible product that can be seen, touched, and directly
measured. Examples include cars, CD players, clothes, computers, and food items. Therefore,
quality definitions in manufacturing usually focus on tangible product features. The most
common quality definition in manufacturing is conformance, which is the degree to which a
product characteristic meets preset standards. Other common definitions of quality in
manufacturing include performance—such as acceleration of a vehicle; reliability—that the
product will function as expected without failure; features—the extras that are included beyond
the basic characteristics; durability— expected operational life of the product; and serviceability
—how readily a product can be repaired. The relative importance of these definitions is based on
the preferences of each individual customer. It is easy to see how different customers can have
different definitions in mind when they speak of high product quality.
In contrast to manufacturing, service organizations produce a product that is intangible. Usually,
the complete product cannot be seen or touched. Rather, it is experienced.
Examples include delivery of health care, experience of staying at a vacation resort, and learning
at a university. The intangible nature of the product makes defining quality difficult. Also, since
a service is experienced, perceptions can be highly subjective. In addition to tangible factors,
quality of services is often defined by perceptual factors. These include responsiveness to
customer needs, courtesy and friendliness of staff, promptness in resolving complaints, and
atmosphere. Other definitions of quality in services include time—the amount of time a customer
has to wait for the service; and consistency—the degree to which the service is the same each
time. For these reasons, defining quality in services can be especially challenging. Dimensions of
quality for manufacturing versus service organizations are shown in Table 5-1.

Dimensions of Quality for Manufacturing versus Service Organizations

Manufacturing Organizations        Service Organizations
Conformance to specifications      Intangible factors
Performance                        Consistency
Reliability                        Responsiveness to customer needs
Features                           Courtesy/friendliness
Durability                         Timeliness/promptness
Serviceability                     Atmosphere

Producer’s risk & Consumer’s risk;

Ans: Producer’s risk & Consumer’s risk:

Type I Error (Producer's Risk): This is the probability, for a given (n, c) sampling plan, of
rejecting a lot that has a defect level equal to the AQL. The producer suffers when this occurs,
because a lot with acceptable quality was rejected. The symbol α is commonly used for the Type
I error, and typical values for α range from 0.2 to 0.01.

Type II Error (Consumer's Risk): This is the probability, for a given (n, c) sampling plan, of
accepting a lot with a defect level equal to the LTPD. The consumer suffers when this occurs,
because a lot with unacceptable quality was accepted. The symbol β is commonly used for the
Type II error, and typical values range from 0.2 to 0.01.

Out of various methods of quality checking, Acceptance Sampling is just a simple recipe that is
followed, and may not be the best thing to do.
Situation: large batches of items are produced. We must sample a small proportion of each batch
to check that the proportion of defective items is sufficiently low.

One-stage sampling plans


Sample n items, and let X = number of defective items in the sample. The batch is rejected if X > c
and accepted if X ≤ c.
To choose appropriate values for n and c, let p = proportion of defective items in the batch
(typically small). Then X ~ B(n, p) if the population the samples are drawn from is large.
Operating characteristic (OC): L(p) = probability of accepting the batch.

Plot of a typical OC curve (n = 100, c = 3):
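As a sketch (not part of the original text), the operating characteristic L(p) = P(X ≤ c) of this plan can be computed from the binomial model:

from math import comb

def oc(p, n=100, c=3):
    # probability of accepting a batch whose true defective fraction is p
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

for p in (0.01, 0.02, 0.05, 0.08):
    print(f"L({p}) = {oc(p):.3f}")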

The Producer and Consumer of the items have to agree some unacceptable defective fraction that
should be rejected with high probability, and some good low defective fraction that should be
accepted with high probability. So the Producer and Consumer of items have to agree what
constitutes:
 Acceptable quality level: p1 (consumer happy, want to accept with high probability)
 Unacceptable quality level: p2 (consumer unhappy, want to reject with high probability)

Ideal sampling scheme: always accept the batch if p ≤ p1 and always reject it if p ≥ p2,
i.e. L(p) = 1 for p ≤ p1 and L(p) = 0 for p ≥ p2. However, the only way to guarantee this would be to
inspect the whole batch, which is usually not desirable (especially if testing requires destruction of
the item!). We therefore want to use a sampling scheme optimized so that the risk of each of these
undesirable outcomes is minimized:
α = P(Reject batch when p = p1) = 1 – L(p1): the Producer's Risk.
β = P(Accept batch when p = p2) = L(p2): the Consumer's Risk.
Of course the Producer really cares about rejecting the batch when p ≤ p1, but taking p = p1 is
conservative, as the probability of rejection is always lower for p < p1.
Similarly for the Consumer's risk.
Once the Producer and Consumer have agreed the values of p1, p2, α and β, the values of n and c
can be calculated.
Two-stage sampling plan:
 Sample n1 items; let X1 = number of defectives in the sample.
 Accept the batch if X1 ≤ c1; reject it if X1 > c2 (where c2 > c1).
 If c1 < X1 ≤ c2, sample a further n2 items; let X2 = number of defectives in the 2nd sample.
 Accept the batch if X2 ≤ c3, otherwise reject the batch.

Although more complicated, by suitable choice of n1, n2, c1, c2 and c3, it is practicable to find a
plan with similar L(p) to a single stage design but smaller average sample size.
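A minimal sketch of the two-stage decision rule stated above; the plan parameters c1 = 1, c2 = 3 and c3 = 4 are hypothetical values chosen only for illustration.

def two_stage_decision(x1, x2=None, c1=1, c2=3, c3=4):
    """x1: defectives in the 1st sample; x2: defectives in the 2nd sample, if taken."""
    if x1 <= c1:
        return "accept"
    if x1 > c2:
        return "reject"
    # c1 < x1 <= c2: a second sample of n2 items is required
    if x2 is None:
        return "take a second sample"
    return "accept" if x2 <= c3 else "reject"

print(two_stage_decision(1))      # accept on the first sample
print(two_stage_decision(2))      # take a second sample
print(two_stage_decision(2, 3))   # accept, since X2 <= c3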

Quality
Acceptance sampling is a rather limited method of ensuring good quality:
 It is too far downstream in the production process; a method is needed which identifies
where things are going wrong.
 It is 0/1 (i.e. defective/OK) and so does not make efficient use of the data; large samples are
therefore required.
It is better to have quality measurements on a continuous scale; there will then be an earlier
warning of deteriorating quality and less need for large sample sizes.

Problem 4: It has been decided to sample 100 items at random from each large batch and
to reject the batch if more than 2 defectives are found. The acceptable quality level is 1%
and the unacceptable quality level is 5%. Find the Producer’s and Consumer’s risks.

Answer:
n = 100, c = 2, p1 = 0.01, p2 = 0.05.

For the Producer's Risk, X ~ B(100, 0.01):

α = P(Reject batch when p = 0.01) = 1 – L(0.01)
  = 1 – P(X = 0) – P(X = 1) – P(X = 2)
  = 1 – C(100,0)(0.01)^0(0.99)^100 – C(100,1)(0.01)^1(0.99)^99 – C(100,2)(0.01)^2(0.99)^98
  = 1 – 0.3660 – 0.3697 – 0.1849 = 0.079

For the Consumer's Risk, X ~ B(100, 0.05):

β = P(Accept batch when p = 0.05) = L(0.05)
  = P(X = 0) + P(X = 1) + P(X = 2)
  = C(100,0)(0.05)^0(0.95)^100 + C(100,1)(0.05)^1(0.95)^99 + C(100,2)(0.05)^2(0.95)^98
  = 0.118
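The same figures can be reproduced with a short sketch, assuming the binomial model used above:

from math import comb

def L(p, n=100, c=2):
    # probability of accepting the batch when the true defective fraction is p
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

producer_risk = 1 - L(0.01)   # alpha, evaluated at the acceptable quality level
consumer_risk = L(0.05)       # beta, evaluated at the unacceptable quality level
print(f"alpha = {producer_risk:.3f}, beta = {consumer_risk:.3f}")   # 0.079, 0.118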

SAMPLING & SAMPLING PLANS:

What is sampling and explain the use of different simple sampling plans for
sampling of attributes and variables:

Sampling and Non-Sampling errors:

SAMPLING ERRORS: Sampling errors occur as a result of calculating the estimate (estimated
mean, total, proportion, etc) based on a sample rather than the entire population. This is due to
the fact that the estimated figure obtained from the sample may not be exactly equal to the true
value of the population. Three factors affect sampling errors with respect to the design of
samples – the sampling procedure, the variation within the sample with respect to the variate of
interest, and the size of the sample. A large sample results in lesser sampling error.
The statistics based on samples drawn from the same population always vary from each other
(and from the true population value) simply because of chance. This variation is sampling error
and the measure used to estimate the sampling error is the standard error.
Sampling error is one of two reasons for the difference between an estimate of a population
parameter and the true, but unknown, value of the population parameter. Sampling errors and
biases are common in any sampling method. Sampling error is one which occurs due to
unrepresentativeness of the sample selected for observation.
Sampling errors and biases are induced by the sample design. They include:
1. Selection bias: When the true selection probabilities differ from those assumed in
calculating the results.
2. Random sampling error: Random variation in the results due to the elements in the
sample being selected at random.
Sampling Error denotes a statistical error arising out of a certain sample selected being
unrepresentative of the population of interest. In simple terms, it is an error which occurs
when the sample selected does not contain the true characteristics, qualities or figures of the
whole population.
NONSAMPLING ERRORS
The accuracy of an estimate is also affected by errors arising from causes such as incomplete
coverage and faulty procedures of estimation, and together with observational errors, these make
up what are termed nonsampling errors. The aim of a survey is always to obtain information on
the true population value. The idea is to get as close as possible to the latter within the resources
available for survey. The discrepancy between the survey value and the corresponding true value
is called the observational error or response error. Response Nonsampling errors occur as a
result of improper records on the variate of interests, careless reporting of the data, or deliberate
modification of the data by the data collectors and recorders to suit their interests. Nonresponse
error occurs when a significant number of people in a survey sample are either absent; do not
respond to the questionnaire; or, are different from those who do in a way that is important to the
study.
Non-sampling error is an error arising from human error, such as error in problem identification,
method or procedure used, etc. Non-sampling errors are other errors which can impact the final
survey estimates, caused by problems in data collection, processing, or sample design. They
include:
1. Over coverage: Inclusion of data from outside of the population.
2. Under coverage: Sampling frame does not include elements in the population.
3. Measurement error: e.g. when respondents misunderstand a question, or find it difficult
to answer.
4. Processing error: Mistakes in data coding.
5. Non-response: Failure to obtain complete data from all selected individuals.
Non-Sampling Error is an umbrella term which comprises of all the errors, other than the
sampling error. They arise due to a number of reasons, i.e. error in problem definition,
questionnaire design, approach, coverage, information provided by respondents, data
preparation, collection, tabulation, and analysis.
The two main types of non-sampling error are response error and non-response error, as described
above. The following table compares sampling and non-sampling errors:
Basis for comparison   Sampling error                                  Non-sampling error
Meaning                A type of statistical error that occurs         An error that occurs due to sources other
                       because the sample selected does not            than sampling while conducting survey
                       perfectly represent the population of           activities.
                       interest.
Cause                  Deviation between sample mean and               Deficiency and inappropriate analysis
                       population mean.                                of data.
Type                   Random.                                         Random or non-random.
Occurrence             Arises only when a sample is selected as        Occurs both in samples and in a census.
                       a representative of the population.
Sample size            The possibility of error is reduced with        It has nothing to do with the sample size.
                       an increase in sample size.

A “lot,” or batch, of items can be inspected in several ways, including the use of single, double,
or sequential sampling. Types of acceptance plans to choose from LASPs fall into the following
categories:

Single sampling plans: One sample of items is selected at random from a lot and the disposition
of the lot is determined from the resulting information. Two numbers specify a single sampling
plan: the number of items to be sampled (n) and a pre-specified acceptable number of
defects (c). If the number of defectives found in the sample is less than or equal to the acceptance
number c, then the whole lot will be accepted. If there are more than c defectives, the whole lot will
be rejected or subjected to 100% screening. These are the most common (and easiest) plans to use,
although not the most efficient in terms of the average number of samples needed.

Double sampling plans: After the first sample is tested, there are three possibilities:

1. Accept the lot


2. Reject the lot
3. No decision

If the outcome is (3), and a second sample is taken, the procedure is to combine the results of
both samples and make a final decision based on that information. Often a lot of items is so good
or so bad that we can reach a conclusion about its quality by taking a smaller sample than would
have been used in a single sampling plan. If the number of defects in this smaller sample (of size
n1) is less than or equal to some lower limit (c1), the lot can be accepted. If the number of defects
exceeds an upper limit (c2), the whole lot can be rejected. But if the number of defects in the n1
sample is between c1 and c2, a second sample (of size n2) is drawn. The cumulative results
determine whether to accept or reject the lot. The concept is called double sampling.

Multiple sampling plans: This is an extension of the double sampling plans where more than
two samples are needed to reach a conclusion. The advantage of multiple sampling is smaller
sample sizes.

Sequential sampling plans: This is the ultimate extension of multiple sampling where items are
selected from a lot one at a time and after inspection of each item a decision is made to accept or
reject the lot or select another unit. Multiple sampling is an extension of double sampling, with
smaller samples used sequentially until a clear decision can be made. When units are randomly
selected from a lot and tested one by one, with the cumulative number of inspected pieces and
defects recorded, the process is called sequential sampling. If the cumulative number of defects
exceeds an upper limit specified for that sample, the whole lot will be rejected. Or if the
cumulative number of rejects is less than or equal to the lower limit, the lot will be accepted. But
if the number of defects falls within these two boundaries, we continue to sample units from the
lot. It is possible in some sequential plans for the whole lot to be tested, unit by unit, before a
conclusion is reached.
Selection of the best sampling approach—single, double, or sequential—depends on the types of
products being inspected and their expected quality level. A very low-quality batch of goods, for
example, can be identified quickly and more cheaply with sequential sampling. This means that
the inspection, which may be costly and/or destructive, can end sooner. On the other hand, in
many cases a single sampling plan is easier and simpler for workers to conduct even though the
number sampled may be greater than under other plans.
Skip-lot sampling plans: Skip-lot sampling means that only a fraction of the submitted lots are
inspected. This mode of sampling is of the cost-saving variety in terms of time and effort.
However, skip-lot sampling should only be used when it has been demonstrated that the quality
of the submitted product is very good. A skip-lot sampling plan is implemented as follows:

1. Design a single sampling plan by specifying the desired producer's risk (alpha) and
consumer's risk (beta). This plan is called "the reference sampling plan".
2. Start with normal lot-by-lot inspection, using the reference plan.
3. When a pre-specified number, i, of consecutive lots are accepted, switch to inspecting
only a fraction f of the lots. The selection of the members of that fraction is done at
random.
4. When a lot is rejected return to normal inspection.

Acceptance sampling

ACCEPTANCE SAMPLING: Acceptance sampling is a major statistical tool of quality


control. Sampling plans and operating characteristic (OC) curves facilitate acceptance sampling
and provide the manager with tools to evaluate the quality of a production run or shipment.
Acceptance sampling is an important field of statistical quality control that was popularized by
Dodge and Romig and originally applied by the U.S. military to the testing of bullets during
World War II.
Acceptance sampling is a form of testing that involves taking random samples of “lots,” or
batches, of finished products and measuring them against predetermined standards. Dodge
reasoned that a sample should be picked at random from the lot, and on the basis of information
that was yielded by the sample, a decision should be made regarding the disposition of the lot. In
general, the decision is either to accept or reject the lot. This process is called Lot Acceptance
Sampling or just Acceptance Sampling.
There are two major classifications of acceptance plans: by attributes ("go, no-go") and by
variables. The attribute case is the most common for acceptance sampling. An important point to
remember is that the main purpose of acceptance sampling is to decide whether or not
the lot is likely to be acceptable, not to estimate the quality of the lot.

Scenarios leading to acceptance sampling:


Acceptance sampling is employed when one or several of the following hold: 
 Testing is destructive
 The cost of 100% inspection is very high
 100% inspection takes too long
It was pointed out by Harold Dodge in 1969 that Acceptance Quality Control is not the same as
Acceptance Sampling. The latter depends on specific sampling plans, which when implemented
indicate the conditions for acceptance or rejection of the immediate lot that is being inspected.
The former may be implemented in the form of an Acceptance Control Chart. The control limits
for the Acceptance Control Chart are computed using the specification limits and the standard
deviation of what is being monitored.

A lot acceptance sampling plan (LASP) is a sampling scheme and a set of rules for making
decisions. The decision, based on counting the number of defectives in a sample, can be to
accept the lot, reject the lot, or even, for multiple or sequential sampling schemes, to take another
sample and then repeat the decision process.

Definitions of basic Acceptance Sampling terms:


All derivations depend on the properties you want the plan to have. These are described using the
following terms:

 Acceptable Quality Level (AQL): The AQL is a percent defective that is the base line
requirement for the quality of the producer's product. The producer would like to design a
sampling plan such that there is a high probability of accepting a lot that has a defect
level less than or equal to the AQL.
 Lot Tolerance Percent Defective (LTPD): The LTPD is a designated high defect level
that would be unacceptable to the consumer. The consumer would like the sampling plan
to have a low probability of accepting a lot with a defect level as high as the LTPD.
 Type I Error (Producer's Risk): This is the probability, for a given (n,c) sampling plan,
of rejecting a lot that has a defect level equal to the AQL. The producer suffers when this
occurs, because a lot with acceptable quality was rejected. The symbol α is commonly
used for the Type I error, and typical values for α range from 0.2 to 0.01.
 Type II Error (Consumer's Risk): This is the probability, for a given (n,c) sampling
plan, of accepting a lot with a defect level equal to the LTPD. The consumer suffers when
this occurs, because a lot with unacceptable quality was accepted. The symbol β is
commonly used for the Type II error, and typical values range from 0.2 to 0.01.
 Operating Characteristic (OC) Curve: This curve plots the probability of accepting the
lot (Y-axis) versus the lot fraction or percent defectives (X-axis). The OC curve is the
primary tool for displaying and investigating the properties of a LASP.
 Average Outgoing Quality (AOQ): A common procedure when sampling and testing is
non-destructive is to 100% inspect rejected lots and replace all defectives with good
units. The AOQ is then the average quality of the outgoing lots (accepted lots plus rectified
rejected lots) for a given incoming quality p; under this rectification scheme it is
approximately pa p (N − n)/N, where N is the lot size and pa the probability of acceptance.
 Average Outgoing Quality Level (AOQL): A plot of the AOQ (Y-axis) versus the
incoming lot p (X-axis) will start at 0 for p = 0, and return to 0 for p = 1 (where every lot
is 100% inspected and rectified). In between, it will rise to a maximum. This maximum,
which is the worst possible long term AOQ, is called the AOQL.
 Average Total Inspection (ATI): When rejected lots are 100% inspected, it is easy to
calculate the ATI if lots come consistently with a defect level of p. For a LASP (n,c) with
a probability pa of accepting a lot with defect level p, we have

ATI = n + (1 - pa) (N - n), where N is the lot size (see the computational sketch after this list of definitions).

 Average Sample Number (ASN): For a single sampling LASP (n,c) we know each and
every lot has a sample of size n taken and inspected or tested. For double, multiple and
sequential LASP's, the amount of sampling varies depending on the number of defects
observed. For any given double, multiple or sequential plan, a long term ASN can be
calculated assuming all lots come in with a defect level of p. A plot of the ASN, versus
the incoming defect level p, describes the sampling efficiency of a given LASP scheme.
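The quantities defined above can be computed directly for a single sampling plan. The short Python sketch below is illustrative only: the plan (n = 80, c = 2), the lot size N = 1000 and the AQL/LTPD figures are assumed values, not taken from this text, and the AOQ expression assumes the rectifying-inspection scheme described under AOQ (rejected lots 100% inspected and all defectives replaced).

from math import comb

def prob_accept(n, c, p):
    # One point on the OC curve: binomial probability of finding c or
    # fewer defectives in a random sample of n when the lot fraction
    # defective is p.
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

def aoq(n, c, p, N):
    # Average outgoing quality under rectifying inspection.
    return prob_accept(n, c, p) * p * (N - n) / N

def ati(n, c, p, N):
    # Average total inspection: ATI = n + (1 - pa)(N - n).
    return n + (1 - prob_accept(n, c, p)) * (N - n)

n, c, N = 80, 2, 1000            # assumed single sampling plan and lot size
AQL, LTPD = 0.01, 0.06           # assumed quality levels for illustration

alpha = 1 - prob_accept(n, c, AQL)   # producer's risk: rejecting a lot at AQL
beta = prob_accept(n, c, LTPD)       # consumer's risk: accepting a lot at LTPD
aoql = max(aoq(n, c, p / 1000, N) for p in range(1001))   # worst-case AOQ

print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
print(f"AOQL = {aoql:.4f}, ATI at p = AQL: {ati(n, c, AQL, N):.1f} units")

Plotting prob_accept against p over a range of p values traces out the OC curve for the plan.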
Short notes
Ishikawa (Fishbone) Diagram:

The fishbone diagram has already been covered above.

Pareto Analysis
Already covered above.
Reliability & hazard function
Reliability is one of the dimensions of quality of products, i.e., goods and services. The buyer
expects that the goods or services will satisfy the requirement for quite some time or a specified
period of time, say 10 years. The measure of the probability of the item satisfying the
customer at a given time 't' is described by the term reliability. Thus, reliability is quite an
important characteristic of any product. Reliability can be defined as:
“Reliability of an entity is defined as the probability that it will perform its intended functions for
a specified period of time, under the stated operating conditions.” Thus reliability is a measure of
the ability of the product to function as intended at a given time. It is time dependent. Since the
reliability of any product at any given time is a probability, it will be equal to or less than 1.

Probability Density Function:


Reliability is the probability of survival of a product. The survival of a product is characterised by a
random variable. There are two types of variables, as given below:
 Continuous
 Discrete.
A function f(x) is said to be the probability density function of a continuous variable x, if x takes any
value from –infinity to +infinity such that
f(x) ≥ 0 and ∫ f(x) dx = 1
The normal distribution, exponential distribution, etc. are examples of probability density functions of
continuous random variables. The probability density function of an exponential distribution is
given below.
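As a quick check against this definition, the exponential density introduced below, f(t) = λe^(–λt) for t ≥ 0 (and 0 for t < 0), satisfies both conditions: f(t) ≥ 0 everywhere, and the integral of λe^(–λt) dt from 0 to ∞ equals [–e^(–λt)] evaluated from 0 to ∞, which is 1.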

Figure: Exponential probability density function

The above pdf can be expressed as f(t) = λe^(–λt), where λ = 1/µ and µ is the mean.
In the reliability context, we can call µ the mean time between failures and λ the failure
rate.
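For instance, taking an assumed (purely illustrative) mean time between failures of µ = 200 hours, the failure rate is λ = 1/200 = 0.005 failures per hour, and the density becomes f(t) = 0.005 e^(–0.005t).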

Hazard function:
The hazard function is a measure of the tendency of a product to fail. If the value of the hazard function
is high, the probability of failure will be greater. The hazard function of an exponential
distribution is shown in the diagram below:

h(t) = λ, and it is constant in the case of the exponential probability distribution.
Reliability equation:
The reliability can be mathematically described as: R(t) = f(t)/h(t)
where f(t) is the probability density function and h(t) is the hazard function. All of these vary with time.

Figure: Hazard function of an exponential distribution

We know that in the case of exponential probability density function


f(t) = λe^(–λt) and h(t) = λ
Therefore, R(t) = f(t)/h(t) = e^(–λt), where µ is the mean time between failures and λ is the failure
rate.
A plot of reliability against time when the failure pattern follows the exponential distribution will
appear as given in the figure below:

µ is also called θ. µ represents the mean time between failures (MTBF).
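A minimal numerical sketch of these relationships in Python, assuming an illustrative MTBF of 1000 hours (so λ = 0.001 per hour), is given below; it tabulates R(t), f(t) and h(t) and confirms that f(t)/h(t) reproduces R(t) for the exponential model.

import math

def exponential_reliability(t, mtbf):
    # R(t) = exp(-lam * t) for a constant failure rate lam = 1 / MTBF.
    lam = 1.0 / mtbf
    return math.exp(-lam * t)

mtbf = 1000.0                     # assumed MTBF in hours
lam = 1.0 / mtbf                  # failure rate, failures per hour
for t in (0, 500, 1000, 3000):
    r = exponential_reliability(t, mtbf)
    f = lam * math.exp(-lam * t)  # probability density f(t)
    h = lam                       # hazard function, constant for the exponential
    print(f"t={t:>5} h  R(t)={r:.3f}  f(t)={f:.6f}  h(t)={h:.4f}  f/h={f / h:.3f}")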


Since the exponential distribution describes a constant failure rate, it corresponds to the
flat (useful-life) portion of the bathtub curve. It is an important assumption for FMEA of
products, particularly electronic products. Essentially, R(t) refers to the
instantaneous reliability at any time t. At time t = 0 the reliability is 1,
i.e., 100% reliable, as shown in the curve. As time passes with the use of
the product, the reliability goes down, and after a large time has elapsed
from the start of use, i.e., at infinite time, the reliability tends to zero.
Let the random variable X be the lifetime or the time to failure of a component. The probability
that the component survives until some time t is called the reliability R(t) of the component:
R(t) = P(X > t) = 1 − F(t)
where F is the distribution function of the component lifetime X.
 The component is assumed to be working properly at time t = 0, and no component can work forever without failure: R(0) = 1 and lim (t→∞) R(t) = 0.
 R(t) is a monotone non-increasing function of t.
 For t less than zero, reliability has no meaning, but sometimes we let R(t) = 1 for t < 0.
 F(t) will often be called the unreliability.
 Consider a fixed number of identical components, N0, under test.
 After time t, Nf (t) components have failed and Ns(t) components have survived
∴ Nf (t)+ Ns(t) = N0
 The estimated probability of survival: P̂(survival) = Ns(t) / N0
In the limit as N0 → ∞, we expect P̂(survival) to approach R(t). As the test progresses, Ns(t) gets
smaller and R(t) decreases.
R(t) = Ns(t) / N0 = (N0 − Nf(t)) / N0 = 1 − Nf(t) / N0
(N0 is constant, while the number of failed components Nf increases with time.)
• Taking derivatives:
R′(t) = −(1/N0) N′f(t)
N′f(t) is the rate at which components fail.
As N0 tends to ∞, the right-hand side may be interpreted as the negative of the failure density
function fX(t): R′(t) = −fX(t)
• Note: f(t) Δt is the (unconditional) probability that a component will fail in the interval (t, t + Δt).
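The Ns(t)/N0 argument above can also be checked by simulation. The Python sketch below draws an assumed population of 10,000 exponential lifetimes with λ = 0.001 per hour and compares the empirical survival fraction Ns(t)/N0 with the theoretical R(t) = e^(–λt); the sample size, failure rate and time points are illustrative assumptions only.

import math
import random

def empirical_reliability(n0, lam, t):
    # Estimate R(t) as Ns(t) / N0 from n0 simulated exponential lifetimes.
    lifetimes = [random.expovariate(lam) for _ in range(n0)]
    survivors = sum(1 for x in lifetimes if x > t)   # Ns(t)
    return survivors / n0

random.seed(1)
n0, lam = 10_000, 0.001           # assumed: 10,000 components, MTBF = 1000 h
for t in (250, 1000, 2000):
    estimate = empirical_reliability(n0, lam, t)
    print(f"t={t:>5} h  Ns/N0={estimate:.3f}  exact e^(-lam*t)={math.exp(-lam * t):.3f}")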

Kaizen:
Kaizen is a very significant concept within quality management and deserves specific
explanation:
Kaizen (usually pronounced 'kyzan' or 'kyzen' in the western world) is a Japanese word,
commonly translated to mean 'continuous improvement'.
Kaizen is a core principle of quality management generally, and specifically within the methods
of Total Quality Management and 'Lean Manufacturing'.
Originally developed and applied by Japanese industry and manufacturing in the 1950s and 60s,
Kaizen continues to be a successful philosophical and practical aspect of some of the best known
Japanese corporations, and has for many years since been interpreted and adopted by 'western'
organizations all over the world.
Kaizen is a way of thinking, working and behaving, embedded in the philosophy and values of
the organization. Kaizen should be 'lived' rather than imposed or tolerated, at all levels.
The aims of a Kaizen organization are typically defined as:
 To be profitable, stable, sustainable and innovative.
 To eliminate waste of time, money, materials, resources and effort and increase
productivity.
 To make incremental improvements to systems, processes and activities before problems
arise rather than correcting them after the event.
 To create a harmonious and dynamic organization where every employee participates and
is valued.
Key concepts of Kaizen:
 ‘Every’ is a key word in Kaizen: improving everything that everyone does in every
aspect of the organization in every department, every minute of every day.
 Evolution rather than revolution: continually making small, 1% improvements to 100
things is more effective, less disruptive and more sustainable than improving one thing by
100% when the need becomes unavoidable.
 Everyone involved in a process or activity, however apparently insignificant, has
valuable knowledge and participates in a working team or Kaizen group (see also Quality
Circles below).
 Everyone is expected to participate, analysing, providing feedback and suggesting
improvements to their area of work.
 Every employee is empowered to participate fully in the improvement process: taking
responsibility, checking and co-ordinating their own activities. Management practice
enables and facilitates this.
 Every employee is involved in the running of the company, and is trained and informed
about the company. This encourages commitment and interest, leading to fulfillment and
job satisfaction.
Kaizen teams use analytical tools and techniques to review systems and look for ways to
improve.

At its best, Kaizen is a carefully nurtured philosophy that works smoothly and steadily, and
which helps to align 'hard' organizational inputs and aims (especially in process-driven
environments), with 'soft' management issues such as motivation and empowerment.
Like any methodology, however, poor interpretation and implementation can limit the usefulness
of Kaizen practices, or, worse, cause them to be counter-productive.
Kaizen is unsuccessful typically where:
 Kaizen methods are added to an existing failing structure, without fixing the basic
structure and philosophy.
 Kaizen is poorly integrated with processes and people's thinking.
 Training is inadequate.
 Executive/leadership doesn't understand or support Kaizen.
 Employees and managers regard Kaizen as some form of imposed procedure, lacking
meaningful purpose.
Kaizen works best when it is 'owned' by people, who see the concept as both empowering of
individuals and teams, and a truly practical way to improve quality and performance, and thereby
job satisfaction and reward. As ever, such initiatives depend heavily on commitment from above,
critically:
 to encourage and support Kaizen, and
 to ensure improvements produce not only better productivity and profit for the
organization, but also better recognition and reward and other positive benefits for
employees, whose involvement drives the change and improvement in the first place.
Interestingly, the spirit of Kaizen, which is distinctly Japanese in origin - notably its significant
emphasis upon individual and worker empowerment in organizations - is reflected in many
'western' concepts of management and motivation, for example the Y-Theory principles
described by Douglas McGregor; Herzberg's Motivational Theory, Maslow's Needs Hierarchy
and related thinking; Adams' Equity Theory; and Charles Handy's motivational theories.
Fascinatingly, we can now see that actually very close connections exist between:
 the fundamental principles of Quality Management - which might be regarded as cold
and detached and focused on 'things' not people, and
 progressive 'humanist' ideas about motivating and managing people - which might
be regarded as too compassionate and caring to have a significant place in the
optimization of organizational productivity and profit.
The point is that in all effective organizations a very strong mutual dependence exists between:
 systems, processes, tools, productivity, profit - the 'hard' inputs and outputs (some say
'left-side brain'), and
 people, motivation, teamwork, communication, recognition and reward - the 'soft' inputs
and outputs ('right-side brain')
Kaizen helps to align these factors, and keep them aligned.

Discussion about the ISO 9000 family of standards in regard to


implementation of TQM:

The abbreviation ISO stands for the “International Organization for
Standardization”. The ISO 9000 family of standards relate to quality management systems and
are designed to help organizations ensure they meet the needs of customers and other
stakeholders. The standards are published by ISO, the International Organization for
Standardization and available through National standards bodies. ISO 9000 deals with the
fundamentals of quality management systems including the eight management principles on
which the family of standards is based.

Mention of a few important standards with their scope of


implementation areas under the family of ISO 9000:
Ans:
In 1987, ISO (Geneva) published a series of quality system models to enable the world
community to standardise on a common set of quality system requirements. The following
constitute the ISO 9000 series.

ISO 9000-1: Quality Management and Quality Assurance standards - Part 1: Guidelines for
selection and use.
ISO 9000-2: Quality Management and Quality Assurance standards - Part 2: General guidelines
for the application of ISO 9001, ISO 9002 and ISO 9003.
ISO 9000-3: Quality Management and Quality Assurance standards - Part 3: Guidelines for the
application of ISO 9001 to the development, supply and maintenance of software.
ISO 9001: Quality systems - Model for quality assurance in design/development, production,
installation and servicing.
ISO 9002: Quality systems - Model for quality assurance in production, installation and
servicing.
ISO 9003: Quality systems - Model for quality assurance in final inspection and test.
ISO 9004: Quality management and quality system elements - Guidelines.
The standards are reviewed every few years by the ISO authority. The 1994 version was called
the ISO 9000:1994 series, consisting of the ISO 9001:1994, 9002:1994 and 9003:1994 versions.
In the major revision of 2000 the series became the ISO 9000:2000 series, and the ISO 9002 and
9003 standards were integrated into one single certifiable standard, ISO 9001:2000; the last major
revision, in 2008, updated this to ISO 9001:2008.

The ISO 9004:2009 document gives guidelines for performance improvement over and above the
basic standard (ISO 9001:2000). The Quality Management System standards created by ISO are
meant to certify the processes and the system of an organization, not the product or service itself.
ISO 9000 standards do not certify the quality of the product or service. In 2005 the
International Organization for Standardization released a standard, ISO 22000, meant for the
food industry. ISO has also released standards for other industries. For example Technical
Standard TS 16949 defines requirements in addition to those in ISO 9001:2008 specifically for
the automotive industry. ISO has a number of standards that support quality management.

d) What are the activities involved in implementing an ISO 9000 Quality System?


Ans:
There is a need for a structured approach which includes preparation, implementation and
certification.
a) Preparation - Awareness Training Programme
This calls for creating awareness amongst all employees, including those at the grassroots level,
about why the organisation should go for ISO 9000 certification, where it stands and what its
desired objective is. The next step is the choice of model. This is normally decided based on
certain parameters such as customer needs and the interface with the customer, complexity of the
design process and its maturity, complexity of the production process, characteristics of the
product or service, and safety and economic considerations.
b) Implementation:
First and foremost, the organisation should review its business plan, policies and strategies and
how they fit with the implementation. The organisation has to write down its standards and
assess their effectiveness, and then draw up an action plan approved by the steering committee.
Once the action plan is drawn up, the next step is documentation. Documentation involves preparation of the
quality manual, procedures, work instruction, quality plans and quality records. ISO 9000 calls
for documentation of all work that will affect quality. Standard systems are available as a
guideline for this purpose.
i) Quality Manual: This explains the Quality policy, quality aims and objectives, the
organisational structure and fulfilling the ISO 9000 requirement.
ii) Quality Control Procedures are the documents which describe how an activity is to be
carried out.
iii) Work instructions are documents which specify how a job is to be done.
iv) Quality plans describe what the quality requirements are, how they are to be verified, who will
do the verification, and what measuring instruments/equipment will be used.
v) Quality records are the results of activities carried out as per the documented system. The next
step is not mandatory but advisable, i.e., getting a second-party audit done. This is done by a
customer auditing a supplier, by persons from a sister organisation, or by an external agency
(not the certifying agency) knowledgeable on the subject.
c) Certification
For this, contact an accredited certification body and remit the required fee. On application, the
certifying agency will carry out a document review (also called an adequacy audit), i.e., check the
documentation to ensure that it covers all the aspects and fulfils the requirements of the chosen
standard. Once the adequacy audit is satisfactory, they carry out a conformity or compliance
audit, which is a ‘third-party audit’. Such an audit is carried out in accordance with the ISO
10011 standard. On a satisfactory compliance audit, the agency grants the certificate and includes the
organisation in the certified list. Such a certification has a validity period (currently 3 years).
Hence the certifying agency carries out surveillance audits as part of the certification scheme. To
ensure continuity, it is important to carry out internal quality audits regularly as per the requirement
of the ISO 9000 standard. Audits by customers can also be conducted on a regular basis. After the
expiry of the 3-year period, a recertification audit is carried out.
Preparation of the Quality Manual should be taken up once the Management Representative has
acquired an appropriate level of knowledge of the document structure required by the ISO 9000
standard; in parallel, management commitment and employees’ cooperation and involvement
must be ensured for the implementation of the quality system.
Besides the step-by-step detail given in the literature on ISO 9000, a general guideline
is given in ISO 10013. The contents of the manual are:
Introduction:
This normally contains profile of company’s activities, its products and market, top
management’s commitment and a table of contents.
Quality Manual Management
This gives manual review, history of revisions carried out and names of registered holders.
Object of the Quality Manual

Up to this point it is common for manufacturing as well as service organisations. Quality system
requirements vary depending upon the model chosen. A summary of the ISO 9000 standards should be
apprised to all categories of employees for their awareness, as under:

• Standards on quality system and not on products


• Complementary to products specifications.
• Twenty elements affecting quality.
• Organisations to establish their own procedures and implement.
• Management responsibility.
• Responsibility and authority of the departments well defined.
• Documented Quality System - quality manual and procedures.
• Training needed to all on quality system standard and organisation procedures.
• User friendly, logical and easily understood format.
