The term quality means ‘fitness for use’ or ‘meeting the customer’s requirement / satisfaction’. In mathematical form, quality can be expressed as Q = P/E, where P is the performance of the product and E is the expectation of the customer. There are several tools for identifying the causes of problems or defects in products (goods or services), such as the Fishbone diagram (to identify the root cause of problems or defects) and Pareto analysis (to identify the vital few factors responsible for the majority of the problems, i.e., to select a limited number of tasks that produce a significant overall effect).
Quality control and quality assurance are the backbone of quality management. For quality control of products, process control is the most effective way of controlling defects or rejects in a production process. For a quality control chart for the number of defectives in a lot, the control limits are calculated from the binomial distribution; when the sample size is large (say, more than 20) and the proportion defective is small, the binomial distribution may be approximated by the Poisson distribution. Control charts use statistics of a sample of data: the mean (x̄), the standard deviation (σ) and the range (R), which is the difference between the highest and lowest values in the data sample. The standard deviation is calculated from the squared deviations of the individual data from the mean, divided by the number of observations; this is also the RMS (root mean square) deviation, and it equals the standard deviation because the deviations are taken between the mean and the individual data. The upper control limit (UCL) and lower control limit (LCL) are separated from the mean (the central line in a control chart) by 3σ on each side, so the total spread between the UCL and LCL is 6σ. The percentage of the data sample that falls within this 6σ spread amounts to 99.73%. Random variation in the value of the variables in a production process occurs due to chance causes (non-assignable, natural or common causes), while variations beyond these natural limits are due to special (assignable) causes. When a process is under control, i.e. the process is stable, it is then necessary to assess the process capability (Cp) against the customers’ requirements. Process capability is normally measured against the tolerance of the customer’s specification limits as Cp = (USL − LSL)/6σ. When (USL − LSL) corresponds to exactly 6σ, Cp = 1; when Cp < 1 the process is not capable, and when Cp ≥ 1 it is capable.
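As a rough sketch (the specification limits and sigma below are illustrative numbers, not taken from the text), the Cp computation can be coded as:

```python
def process_capability(usl, lsl, sigma):
    """Cp = (USL - LSL) / (6*sigma): ratio of the spec width to the process spread."""
    return (usl - lsl) / (6 * sigma)

# Illustrative values: spec limits 10.0 +/- 0.3, process sigma = 0.1
cp = process_capability(10.3, 9.7, 0.1)
print(round(cp, 2))  # 1.0 -> spec width exactly matches the 6-sigma spread

# With a larger sigma the same spec gives Cp < 1: the process is not capable
print(round(process_capability(10.3, 9.7, 0.15), 2))
```

A Cp of exactly 1 means the specification width equals the 6σ natural spread; any drift of the mean then pushes output beyond a limit, which is why Cp > 1 is preferred in practice.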
The 6σ spread is the basis of control charts, but ‘Six Sigma’ has also emerged as a methodology for process improvement to reduce the level of defects in the manufacturing and service sectors. In fact, the Six Sigma methodology is used for overall process improvement, not just for reducing the number or proportion of defects.
When an experiment with a number of trials is conducted for evaluation purposes, a 5% level of significance of the result corresponds to a confidence interval of 95%.
Of all the resources used in industry or business, every resource (machines, materials, energy, etc.) depreciates over time except the human resource. Humans gather experience over time while working, and experienced people become more valuable to the industry or business; hence human resources appreciate.
Ultimately the success of TQM depends on the participation of everyone inside and outside the organisation. Quality awareness is very important among the employees of the organisation, who should volunteer for quality awareness programmes like training, quizzes and brainstorming sessions. This sort of approach is normally conducted with the participation of workers under the umbrella of a ‘Quality Circle’, headed by supervisors or leaders of the workforce.
In total quality management, the Japanese introduced some very valuable process steps to follow:
1. Kaizen – Focuses on "Continuous Process Improvement", to make
processes visible, repeatable and measurable.
2. Atarimae Hinshitsu – The idea that "things will work as they are supposed to" (for
example, a pen will write).
3. Kansei – Examining the way the user applies the product leads to
improvement in the product itself.
4. Miryokuteki Hinshitsu–The idea that "things should have an aesthetic quality" (for
example, a pen will write in a way that is pleasing to the writer).
Subsequent to each step of activities in the overall process, the PDCA (Plan, Do, Check, Act) cycle is followed for continuously improving quality. TQM not only cares for the in-house quality of the organisation, its products and resources, but also takes care of the environment, following the guidelines of ISO 14000. All other quality activities inside the organisation are guided by the norms of ISO 9000 and some other standards.
1. Pre-industrial paradigm: At that time the reputation of the artisan was measured through the quality characteristics of the product. Trademarks, guilds and punitive measures were used to defend the interests of consumers.
2. Industrial paradigm of quality control: The industrial revolution (18th/19th century) raised the level of product and process complexity along with a boom in production volume. A new paradigm of quality control was born, bringing a broader set of changes. Practices like sampling inspection, the use of statistical methods and standardization techniques became familiar, and these tools have remained in use ever since.
3. Postindustrial paradigm, Total Quality Management: A third paradigm came into force, Total Quality Management (TQM). TQM brought the awareness and practice of quality principles to a new level and emphasized organizational learning and participative management.
To fully understand the ‘Quality Movement’, the evolution of the concepts of quality of manufactured goods and delivered services over time is summarized in the table below.
[Table: Evolution of quality concepts over time: Early 1900s, 1940s, 1960s, 1980s and beyond]
The people who were the pioneers and proponents in accelerating those evolving paradigms of
quality are given below.
Companies in every line of business are focusing on improving quality in order to be more
competitive. In many industries quality excellence has become a standard for doing business.
Companies that do not meet this standard simply will not survive. The term used for today’s new
concept of quality is total quality management or TQM. TQM is “The management of quality
at every stage of operations, from planning and design through self-inspection, to continual
process monitoring for improvement opportunities”. The table above presents a timeline of the
old and new concepts of quality. It is seen that the old concept is reactive, designed to correct
quality problems after they occur. The new concept is proactive, designed to build quality into
the product and process design. The eminent personalities who have shaped our understanding of quality are called the “Quality Gurus”; a list of them is given above.
The current paradigm of quality rests on the underlying principles of Total Quality Management, which is implemented to maintain and assure the quality of products, i.e., goods and services, as per the requirements of customers both internal and external to the organisation. This approach towards the quality of the products, the organisation itself and society has made it possible for organisations to satisfy direct and indirect customers in the market place and to capture a greater share of the market for sustainable business.
The current paradigm of TQM consists of three terms, Total, Quality and Management, all of which have evolved from ‘no idea’ of quality, through the concept of quality control, to the present-day concept of total quality control and assurance for combating the challenges of an increasingly competitive business environment and ensuring survival.
TQM is composed of three paradigms:
Total: Involving the entire organization, supply chain, and/or product life cycle
Quality: With its usual definitions, with all its complexities
Management: The system of managing, with steps like planning, organizing, leading, staffing and controlling.
The meaning of quality has changed over time. TQM is defined by the International
Organization for Standardization (ISO) as under:
"TQM is a management approach for an organization, centered on quality, based on the
participation of all its members and aiming at long-term success through customer satisfaction,
and benefits to all members of the organization and to society." (ISO 8402:1994). One major aim is to reduce variation in every process so that greater consistency of effort is obtained.
The core concepts of TQM can be classified into two broad categories or dimensions: social or
soft TQM and technical or hard TQM. The social issues are centered on human resource management and emphasize leadership, teamwork, training and employee involvement. The technical issues reflect an orientation toward improving production methods and operations.
Secondly, the management of social or technical TQM issues cannot be performed in isolation.
Social and technical dimensions (and the core concepts that form them) should be interrelated
and mutually support one another reflecting the holistic character of TQM initiatives. Thirdly,
the literature suggests that the optimal management of TQM core concepts will lead to better
organizational performance.
Therefore, TQM includes both an empirical component associated with statistics and an
explanatory component that is associated with management of both people and processes. TQM
is an approach to improving the quality of goods and services through continuous improvement
of all processes, customer-driven quality, production without defects, focus on improvement of
processes rather than criticism of people and data-driven decision making.
Total means 100%, so TQM is about managing all aspects of quality, and the ultimate goal should be ‘Total Customer Satisfaction’. Every functional area should stick to the quality plan of the
organization and strive to attain the planned quality target. Each offering from the organization
should be of optimum quality. Because, “one rotten apple can spoil the whole basket.”
TQM is about addressing all aspects of dimensions of quality. If there is a good product in bad
packaging it is not going to give the desired returns to the organization. A good car with a bad
bumper will tarnish the image of the company. An ill-tempered receptionist can turn away
potential customers from a nice 5-star hotel. So people and process should match the quality of
the product being offered by the organization. A satisfied employee will always bring a satisfied
customer, so internal customers are also important. All hygiene factors and motivation factors
should be maintained to satisfy the needs of the internal customer. Retaining internal customer is
important for better knowledge management and continuity of the process. Retaining external
customer is important to get repeat sales. It is always easier to get repeat sales from existing
customers than to get sales from a new customer. Everybody, right from the shop-floor employee
to the top management, should have total commitment to the predetermined quality goals.
Quality has different meanings for different people. In spite of this, any organization aiming for sustainable competitive advantage needs to assess customers’ needs to fix a quality objective. Immaculate planning is required to attain the pre-decided quality goals. Proper monitoring and
people’s involvement can ultimately enable an organization to achieve the desired results. In the
long run, good quality always wins the customer’s heart.
Check Sheet
Check sheets are simple forms with certain formats that aid the user in recording data in a firm systematically. Data are “collected and tabulated” on the check sheet to record the frequency of specific events during a data collection period. They provide a “consistent, effective, and economical approach” that can be applied in quality assurance audits for reviewing and following the steps in a particular process. They also help the user to organize the data for later use. The main advantages of check sheets are that they are very easy to apply and understand, and they can give a clear picture of the situation and condition of the organization. They are efficient and powerful tools for identifying frequent problems, but they have limited ability to analyze a quality problem in the workplace.
Check sheets are of three major types: defect-location check sheets, tally check sheets and defect-cause check sheets. The figure below depicts a tally check sheet that can be used for collecting data during a production process.
Figure : Check sheet (Tally) for telephone interruptions
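A tally check sheet like the one in the figure can be sketched in code; the interruption reasons below are hypothetical stand-ins for the rows of the figure:

```python
from collections import Counter

# Hypothetical log of telephone interruptions for one day (reasons are illustrative)
log = ["wrong number", "info request", "boss", "wrong number",
       "info request", "wrong number", "info request", "boss"]

tally = Counter(log)  # frequency of each event, exactly what a tally sheet records
for reason, count in tally.most_common():
    print(f"{reason:<14} {'|' * count}  ({count})")
```

Each `|` corresponds to one tally mark; sorting by frequency already previews the ranking a Pareto chart would show.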
Histogram: The histogram is a very useful tool for describing the frequency distribution of the observed values of a variable. It is a type of bar chart that visualizes both attribute and variable data of a product or process, and it assists users in showing the distribution of the data and the amount of variation within a process. It displays measures of central tendency (mean, median and mode). It should be designed properly so that those working in the operation process can easily use it to understand the distribution of the variable being explored. The figure below illustrates a histogram of the frequency of defects in a manufacturing process.
Pareto Analysis
Pareto analysis was introduced by the Italian economist Vilfredo Pareto, who studied income and other unequal distributions in the 19th century; he noticed that 80% of the wealth was owned by only 20% of the population. The Pareto principle was later developed by Juran in 1950. A Pareto chart is a special type of histogram that can easily be applied to find and prioritize quality problems, conditions, or their causes in the organization (Juran and Godfrey, 1998). In other words, it is a type of bar chart that shows the relative importance of variables, prioritized in descending order from left to right. The aim of the Pareto chart is to figure out the different kinds of “nonconformity” from data figures, maintenance data, repair data, parts scrap rates, or other sources. A Pareto chart can also provide a means for investigating quality improvement and for improving efficiency, “material waste, energy conservation, safety issues, cost reductions”, etc.; as the figure below demonstrates, a Pareto chart can be used to compare production before and after changes.
Fishbone Diagram
Kaoru Ishikawa is considered by many researchers to be the founder and first promoter of the
‘Fishbone’ diagram (or Cause-and-Effect Diagram) for root cause analysis and the concept of
Quality Control (QC) circles. The cause and effect diagram was developed by Dr. Kaoru Ishikawa in 1943. It is also known as the Ishikawa diagram or fishbone diagram, because the shape of the diagram looks like the skeleton of a fish, and it is used to identify quality problems based on their degree of importance. The cause and effect diagram is a problem-solving tool that systematically investigates and analyzes all the potential or real causes that result in a single effect. On the other hand, it is
an efficient tool that equips the organization's management to explore for the possible causes of a
problem (Juran and Godfrey, 1998). This diagram can provide the problem-solving efforts by
“gathering and organizing the possible causes, reaching a common understanding of the problem,
exposing gaps in existing knowledge, ranking the most probable causes, and studying each
cause”. The generic categories of the cause and effect diagram are usually six elements (causes)
such as environment, materials, machine, measurement, man, and method, as indicated in Figure
5. Furthermore, “potential causes” can be indicated by arrows entering the main cause arrow.
Figure: The cause and effect diagram (Fishbone/Ishikawa diagram)
Scatter Diagram:
The scatter diagram is a powerful tool for drawing the distribution of information in two dimensions, which helps to detect and analyze a pattern of relationship between two variables (an independent variable and a dependent variable), to understand whether there is a relationship between them, and, if so, what kind of relationship it is (weak or strong, positive or negative). The shape of the scatter diagram often shows the degree and direction of the relationship between the two variables, and the correlation may reveal the causes of a problem. Scatter diagrams are very useful in regression modeling (Montgomery, 2009; Oakland, 2003). The scatter diagram can indicate which of the following correlations holds between the two variables: a) positive correlation; b) negative correlation; c) no correlation, as demonstrated in the figure below.
Flowchart
A flowchart presents a diagrammatic picture that uses a series of symbols to describe the sequence of steps in an operation or process. In other words, a flowchart visualizes the inputs, activities, decision points, and outputs of a process, making the overall flow easy to use and understand. As a problem-solving tool, this chart can be applied methodically to detect and analyze the areas or points of a process that may have potential problems, by documenting and explaining an operation, so it is very useful for finding and improving quality in a process, as shown in the figure below.
Basic Statistics (for understanding, you may consult books on basic statistics)
Variables
A characteristic that varies from one person or thing to another is called a variable; i.e., a variable is any characteristic that varies from one individual member of the population to another. Examples of variables for humans are height, weight, number of siblings, sex, marital status, and eye color. The first three of these variables yield numerical information (numerical measurements) and are examples of quantitative (or numerical) variables; the last three yield non-numerical information (non-numerical measurements) and are examples of qualitative (or categorical) variables.
Quantitative variables can be classified as either discrete or continuous.
Discrete variables: Some variables, such as the number of children in a family, the number of car accidents on a certain road on different days, or the number of students taking a basic statistics course, are the results of counting and thus are discrete variables. Typically, a discrete variable is a variable whose possible values are some or all of the ordinary counting numbers like 0, 1, 2, 3, . . . . As a definition, we can say that a variable is discrete if it has only a countable number of distinct possible values. That is, a variable is discrete if it can assume only a finite number of values, or as many values as there are integers.
Continuous variables: Quantities such as length, weight, or temperature can in principle be
measured arbitrarily accurately. There is no indivisible unit. Weight may be measured to the
nearest gram, but it could be measured more accurately, say to the tenth of a gram. Such a
variable, called continuous, is intrinsically different from a discrete variable.
The Median:
The sample median of a quantitative variable is that value of the variable in a data set that
divides the set of observed values in half, so that the observed values in one half are less than or
equal to the median value and the observed values in the other half are greater or equal to the
median value.
To obtain the median of the variable, we arrange observed values in a data set in increasing
order and then determine the middle value in the ordered list.
Definition of Median: Arrange the observed values of the variable in a data set in increasing order.
1. If the number of observations is odd, then the sample median is the observed value exactly in the middle of the ordered list.
2. If the number of observations is even, then the sample median is the number halfway between the two middle observed values in the ordered list.
In both cases, if we let n denote the number of observations in a data set, then the sample
median is at position (n+1)/2 in the ordered list.
The median is a "central" value – there are as many values greater than it as there are less than
it.
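The positional rule above can be sketched in code; a minimal example:

```python
def sample_median(values):
    """Median per the rule above: the middle value for odd n,
    or the number halfway between the two middle values for even n."""
    s = sorted(values)          # arrange the observed values in increasing order
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

print(sample_median([7, 1, 5]))     # 5   (odd n: middle of 1, 5, 7)
print(sample_median([7, 1, 5, 3]))  # 4.0 (even n: halfway between 3 and 5)
```

Both cases agree with the (n+1)/2 position rule: for n = 3 the median sits at position 2; for n = 4 it sits at position 2.5, i.e. between the 2nd and 3rd ordered values.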
The Mode
The sample mode of a qualitative or a discrete quantitative variable is that value of the variable
which occurs with the greatest frequency in a data set.
A more exact definition of the mode is given below.
Definition Mode: Obtain the frequency of each observed value of the variable in a data set and note
the greatest frequency.
1. If the greatest frequency is 1 (i.e. no value occurs more than once), then the variable has no
mode.
2. If the greatest frequency is 2 or greater, then any value that occurs with that greatest
frequency is called a sample mode of the variable.
To obtain the mode(s) of a variable, we first construct a frequency distribution for the data using
classes based on single value. The mode(s) can then be determined easily from the frequency
distribution.
Measures of variation:
Just as there are several different measures of center, there are also several different measures of variation. Of the most frequently used measures of variation, the range and the standard deviation are the most commonly encountered in simple statistical analysis.
Measures of variation are used mostly for quantitative variables.
Definition Range: The sample range of the variable is the difference between its maximum and
minimum values in a data set: Range = Max −Min.
Standard deviation: The sample standard deviation is the most frequently used measure of variability, although it is not as easily understood as the range. It can be considered as a kind of average of the deviations of the observed values from the mean of the variable in question.
Definition Standard deviation: For a variable x, the sample standard deviation, denoted by sx (or, when no confusion arises, simply by s), is given by formula (1) below.
In the formula for the standard deviation, the sum of the squared deviations from the mean, Σ(xᵢ − x̄)², is called the sum of squared deviations and provides a measure of the total deviation from the mean for all the observed values of the variable. Once the sum of squared deviations is divided by n − 1, we get s², which is called the sample variance. The sample standard deviation has the following alternative formulas:
Sx = √[ Σᵢ₌₁ⁿ (xᵢ − x̄)² / (n − 1) ] --------------------------- (1)
   = √[ (x₁² + x₂² + ... + xₙ² − 2x̄(x₁ + x₂ + ... + xₙ) + n·x̄²) / (n − 1) ]
   = √[ (Σᵢ₌₁ⁿ xᵢ² − n·x̄²) / (n − 1) ] -------------- (2)
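A quick numerical check (with arbitrary illustrative data) that formulas (1) and (2) give the same value:

```python
import math

def std_dev_definitional(xs):
    """Formula (1): square root of the sum of squared deviations from the mean, over n - 1."""
    n = len(xs)
    xbar = sum(xs) / n
    return math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))

def std_dev_shortcut(xs):
    """Formula (2): square root of (sum of squares minus n * xbar^2), over n - 1."""
    n = len(xs)
    xbar = sum(xs) / n
    return math.sqrt((sum(x * x for x in xs) - n * xbar ** 2) / (n - 1))

data = [4.0, 7.0, 6.0, 3.0, 5.0]      # illustrative sample, mean = 5
print(std_dev_definitional(data))      # both print the same value
print(std_dev_shortcut(data))
```

The shortcut form (2) is algebraically identical to (1) after expanding the square, but it avoids a second pass over the data once Σxᵢ and Σxᵢ² are accumulated.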
Plot the points (10, 3), (20, 12), (30, 27), (40, 57), (50, 75), (60, 80). The points are connected by a freehand smooth curve to obtain the ogive.
To determine the median, we have N/2 = 40th and (N/2 + 1) = 41st, so we have to consider the average of the marks obtained by the 40th and 41st students, which can be read from the graph.
From the points 40 and 41 on the y-axis, draw straight lines parallel to the x-axis cutting the ogive at two points. Read the abscissae of these points on the x-axis; the two marks are very close to each other, and their average is found to be about 34.4.
This is verified by calculation: the 40th student gets 30 + 10 × (40 − 27)/30 = 34.33 and the 41st student gets 30 + 10 × (41 − 27)/30 = 34.67.
The average of the two marks = 34.5.
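The interpolation used in this verification can be sketched as follows; the class boundaries and frequencies (class 30–40, 27 students below it, 30 students in it) are inferred from the figures shown and are otherwise assumptions:

```python
def grouped_value(L, h, f, F, k):
    """Mark of the k-th student by linear interpolation within its class:
    L + h * (k - F) / f, where L is the lower class boundary, h the class
    width, f the class frequency and F the cumulative frequency below the class."""
    return L + h * (k - F) / f

# Values implied by the worked figures: class 30-40, 27 students below it, 30 in it
m40 = grouped_value(30, 10, 30, 27, 40)   # ~34.33
m41 = grouped_value(30, 10, 30, 27, 41)   # ~34.67
print(round((m40 + m41) / 2, 1))
```

The computed average agrees closely with the 34.4 read graphically from the ogive, as expected, since the freehand curve is only an approximation of the same interpolation.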
Pareto analysis:
Pareto analysis is a formal technique useful where many possible courses of action are
competing for attention. In essence, the problem-solving process estimates the benefit delivered
by each action, then selects a number of the most effective actions that deliver a total benefit
reasonably close to the maximal possible one. However, it can be limited by its exclusion of
possibly important problems which may be small initially, but which grow with time. It should
be combined with other analytical tools such as failure mode and effects analysis and fault tree
analysis for example.
To illustrate the operation of the Pareto diagram, consider the following example of a support officer who has to analyse and solve various product defects. In this case, the support officer considered 845 defects, which are grouped into the following categories.
[Table: Serial, defect category, total and cumulated number of defects]
Using a bar chart to plot failure category against total and cumulative number of defects, and marking on it the vertical line showing 80% (676 defects) of the cumulative total, the categories to the left of the line indicate those responsible for 80% of the failures.
This technique helps to identify the top portion of causes that need to be addressed to resolve the majority of problems. Once the predominant causes are identified, tools like the Ishikawa diagram or fishbone analysis can be used to identify the root causes of the problems. While it is common to refer to Pareto as the “80/20” rule, under the assumption that, in all situations, 20% of causes determine 80% of problems, this ratio is merely a convenient rule of thumb and is not, nor should it be considered, an immutable law of nature.
The application of the Pareto analysis in risk management allows management to focus on those
risks that have the most impact on the project.
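The “vital few” selection behind the 80% line can be sketched as below; the defect categories and counts are hypothetical (the 845-defect table is not reproduced in the text), chosen only so that they total 845:

```python
def vital_few(defect_counts, cutoff=0.80):
    """Return the smallest set of categories (ranked by count, descending)
    whose cumulative share of all defects reaches the cutoff (the '80%' line)."""
    ranked = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(defect_counts.values())
    selected, running = [], 0
    for category, count in ranked:
        selected.append(category)
        running += count
        if running / total >= cutoff:
            break
    return selected

# Hypothetical defect tallies summing to 845
counts = {"scratch": 400, "dent": 280, "misalign": 90, "stain": 50, "other": 25}
print(vital_few(counts))  # ['scratch', 'dent']
```

Here two of the five categories already account for just over 80% of the 845 defects, which is precisely the prioritization a Pareto chart makes visible.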
This fishbone diagram was drawn by a manufacturing team to try to understand the source of a manufacturing problem. The team used the six generic headings to prompt ideas. Layers of branches show thorough thinking about the causes of the problem.
Ishikawa diagram, in fishbone shape, shows factors of Equipment, Process, People, Materials,
Environment and Management, all affecting the overall problem. Smaller arrows connect the
sub-causes to major causes.
Control Chart
Attribute Data: This category of control chart displays data that result from counting the number of occurrences or items in a single category of similar items or occurrences. These “count” data may be expressed as pass/fail, yes/no, or presence/absence of a defect.
Variables Data: This category of control chart displays values resulting from the measurement of a continuous variable. Examples of variables data are length, weight, volume, temperature, elapsed time, and radiation dose.
Variable Control Charts - are used when the quality characteristic can be measured and
expressed in numbers with usual units.
Examples of Variable Control Charts
o X and R
o X and s (s stands for sample standard deviation, as opposed to population standard deviation)
o Delta
o X and Moving Range
Attribute Control Charts - are used for product characteristics that can be evaluated with a
discrete response (pass/fail, yes/no, good/bad, number defective)
Examples of Attribute Control Charts are
p chart
np chart
c chart
u chart
Limitations of the two types of data charts:
Variable Control Charts
o must be able to measure the quality characteristics in numbers
o may be impractical and uneconomical, e.g., a manufacturing plant responsible for 100,000 dimensions
Attribute Control Charts
o In general are less costly when it comes to collecting data
o Can plot multiple characteristics on one chart
o But there is a loss of information compared with a variables chart
A process control chart, with the usual notations of the parameters, is used for identifying the assignable (special) and non-assignable (common) causes of variation in a process.
The different steps in calculating and plotting an X bar and R chart for variable data, and drawing a control chart for variables with mean (X̄) and range (R) with control limits and specification limits, are given below.
Step 1 - Determine the data to be collected. Decide what questions about the
process you plan to answer. Refer to the Data Collection module for information on
how this is done.
Step 2 - Collect and enter the data by subgroup. A subgroup is made up of
variables data that represent a characteristic of a product produced by a process. The
sample size relates to how large the subgroups are. Enter the individual subgroup
measurements in time sequence in the portion of the data collection section of the
Control Chart labeled MEASUREMENTS.
STEP 3 - Calculate and enter the average for each subgroup. Use the formula
below to calculate the average (mean) for each subgroup and enter it on the line
labelled Average in the data collection section.
x̄ = (x₁ + x₂ + ... + xₙ)/n
Where: x̄ = the average of the measurements within each subgroup
xᵢ = the individual measurements within a subgroup
n = the number of measurements within a subgroup
Problem 1: Construct X bar and R charts from the following table. For n = 5, A2 = 0·58, D4 =
2·11, D3 = 0. Comment on the state of control.
Sample No.  1     2     3     4     5     6     7     8     9
X bar       50.4  26.0  86.6  95.6  39.2  88.9  61.3  22.5  59.4
R           35    44    23    65    18    26    51    19    33
Ans: Grand average X̄̄ = (50.4 + 26.0 + 86.6 + 95.6 + 39.2 + 88.9 + 61.3 + 22.5 + 59.4)/9 = 58.88; average range R̄ = 314/9 = 34.89.
X bar chart: UCL = X̄̄ + A2·R̄ = 58.88 + 0.58 × 34.89 = 79.11; CL = 58.88; LCL = X̄̄ − A2·R̄ = 38.64.
R chart: UCL = D4·R̄ = 2.11 × 34.89 = 73.62; CL = 34.89; LCL = D3·R̄ = 0.
Now plot the data in the control chart on graph paper with the above UCL, CL and LCL. Several X bar points (samples 3, 4 and 6 above the UCL; samples 2 and 8 below the LCL) fall outside the limits, so assignable causes are present and the process is not in a state of control.
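The control-limit arithmetic of Problem 1 can be sketched as:

```python
# X-bar and R chart limits for Problem 1 (n = 5, A2 = 0.58, D4 = 2.11, D3 = 0)
xbars = [50.4, 26.0, 86.6, 95.6, 39.2, 88.9, 61.3, 22.5, 59.4]
ranges = [35, 44, 23, 65, 18, 26, 51, 19, 33]
A2, D3, D4 = 0.58, 0.0, 2.11

xbarbar = sum(xbars) / len(xbars)   # grand average = CL of the X-bar chart
rbar = sum(ranges) / len(ranges)    # average range = CL of the R chart

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar

print(f"X-bar chart: CL={xbarbar:.2f}, UCL={ucl_x:.2f}, LCL={lcl_x:.2f}")
print(f"R chart:     CL={rbar:.2f}, UCL={ucl_r:.2f}, LCL={lcl_r:.2f}")

# Samples whose X-bar falls outside the limits point to assignable causes
out = [i + 1 for i, x in enumerate(xbars) if not lcl_x <= x <= ucl_x]
print("X-bar points outside the limits:", out)
```

The factors A2, D3 and D4 are the standard tabulated control-chart constants for subgroup size n = 5, so the limits follow directly from the grand average and the average range.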
Problem 2: The following data shows rejection pattern of a food product based on its odour
for various subgroups produced in a single day. Assume that subgroup size is constant at 96.
Determine revised average fraction rejected for that day on excluding those subgroups having ‘p’
above upper control limit for once.
Subgroup No. 1 2 3 4 5 6 7
Number rejected 4 3 12 2 6 3 2
Ans: Find the proportion defective in each subgroup, p₁, p₂, p₃, ..., p₇, and then find p̄, the average proportion defective. Use this p̄ to estimate the standard deviation σ:
σ = √[ p̄(1 − p̄)/n ]
Use UCL = p̄ + 3σ
CL = p̄
and LCL = p̄ − 3σ
Plot the observed points p₁, p₂, p₃, ..., p₇. See if any point is out of control. Comment on the observations: points beyond the UCL or LCL indicate assignable causes, while points within the UCL and LCL reflect non-assignable causes.
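A minimal sketch of the p-chart computation for Problem 2 (subgroup size 96, rejects as tabulated):

```python
import math

n = 96                                   # constant subgroup size
rejects = [4, 3, 12, 2, 6, 3, 2]         # number rejected in subgroups 1..7
ps = [r / n for r in rejects]            # fraction rejected per subgroup

pbar = sum(rejects) / (n * len(rejects)) # average fraction rejected
sigma = math.sqrt(pbar * (1 - pbar) / n)
ucl = pbar + 3 * sigma
lcl = max(pbar - 3 * sigma, 0.0)         # negative LCL is replaced by 0

over = [i + 1 for i, p in enumerate(ps) if p > ucl]
print("UCL =", round(ucl, 4), "-> subgroups above UCL:", over)

# Revised average after excluding the out-of-control subgroup(s) once
kept = [r for i, r in enumerate(rejects) if i + 1 not in over]
p_revised = sum(kept) / (n * len(kept))
print("revised p-bar =", round(p_revised, 4))
```

Only subgroup 3 (12/96 = 0.125) exceeds the UCL; dropping it once and recomputing gives the revised daily average fraction rejected the problem asks for.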
Problem 3: 20 successive wafers (100 chips on each) are inspected. The numbers of defects found in the wafers are given below. Draw the suitable control chart and comment.
Wafer No. 1 2 3 4 5 6 7 8 9 10
No. of defects 16 14 28 16 12 20 10 12 10 17
Wafer No. 11 12 13 14 15 16 17 18 19 20
No. of defects 19 17 14 16 15 13 14 16 31 20 Ans:
This is a problem of Control chart using counts. We are inspecting 20 successive wafers, each
containing 100 chips; the wafer is the inspection unit. The observed number of defects are given
in the table of the problem.
It is known that, by assuming the distribution of defects in the sample follows the Poisson distribution, both the mean and the variance of this distribution are equal to c. Then the k-sigma control chart is
UCL = c + k√c
Center Line = c
LCL = c − k√c
If the LCL comes out negative, then there is no lower control limit. This control scheme assumes that a standard value for c is normally available. If this is not the case, as here, then c may be estimated as the average of the number of defects in a preliminary sample of inspection units, designated ć. Usually k is set to 3 by practitioners.
We have seen that the 3-sigma limits for the c chart, where c represents the number of
nonconformities, are given by
ć ± 3√ć
where it is assumed that the normal approximation to the Poisson distribution holds, hence the
symmetry of the control limits. The literature shows that this approximation is adequate when the
mean of the Poisson is at least 5; applied to the c chart, this implies that the mean number of
defects should be at least 5. This requirement will often be met in practice, but when the mean is
smaller than 9 the quantity ć − 3√ć is negative (since 3√ć > ć whenever ć < 9), so there is no
lower control limit.
Here ć = 330/20 = 16.5, giving UCL = 16.5 + 3√16.5 ≈ 28.69 and LCL = 16.5 − 3√16.5 ≈ 4.31. It is
observed from the control chart that one data point, wafer no. 19 with 31 defects, falls above the
UCL, so the process is out of control at that point.
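As a check on the worked example above, here is a minimal Python sketch of the c-chart
calculation, with the 3-sigma limits estimated from the data themselves:

```python
import math

# Observed defect counts for the 20 wafers from the worked example above.
defects = [16, 14, 28, 16, 12, 20, 10, 12, 10, 17,
           19, 17, 14, 16, 15, 13, 14, 16, 31, 20]

c_bar = sum(defects) / len(defects)           # estimate of the Poisson mean
ucl = c_bar + 3 * math.sqrt(c_bar)            # upper control limit
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))  # lower limit, floored at zero

# Wafers whose defect count falls outside the control limits.
out_of_control = [i + 1 for i, c in enumerate(defects) if c > ucl or c < lcl]

print(f"c-bar = {c_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
print("Out-of-control samples:", out_of_control)  # only wafer 19 (31 > UCL)
```

Note that wafer 3, with 28 defects, stays just inside the upper limit of about 28.69.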
Delivery rating: The delivery rating (DR) for a lot or consignment depends on the quantity
supplied within the stipulated delivery time and also on the actual delivery time for the full
consignment. The delivery rating may, therefore, be obtained by the following formula:

DR = (Q1 / Q2) × T / (T×p + 1.5×T1×q)

where Q2 = quantity promised to be supplied within the stipulated delivery time,
Q1 = actual quantity supplied within the stipulated delivery time,
T = promised delivery time for the full consignment,
T1 = actual delivery time for the full consignment,
p = Q1 / Q2, and q = 1 − p.
Composite vendor rating: We have to assign a weightage to each of the ratings to get a composite
vendor rating for each lot. For some supplies price may be more important, for others quality may
be the most important, and so on. Depending on the relative importance, management may assign a
weightage to each of the ratings. For instance, suppose the company arrives at the following
weightages for the component type X:

Rating           Value    Weightage
QR – Quality     60%      Qw = 40
PR – Price       80%      Pw = 40
DR – Delivery    25%      Dw = 20

Now we can arrive at a composite vendor rating for vendor A, using the formula discussed in the
following example:
VR = Qw×QR + Pw×PR + Dw×DR = 40×0.6 + 40×0.8 + 20×0.25 = 61
The vendor rating for a product is obtained as the weighted average of the ratings for the lots
received over a period of time. This may be computed by the following formula:

Average rating = (n1×VR1 + n2×VR2 + n3×VR3 + ... + nk×VRk) / (n1 + n2 + n3 + ... + nk)

where VRi = vendor rating of the ith lot, and
ni = lot size of the ith lot.
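The composite-rating and lot-weighted average formulas above can be sketched in Python. The
vendor-A figures follow the text's example; the lot sizes and per-lot ratings in the second call
are invented purely for illustration:

```python
def composite_rating(ratings, weightages):
    """VR = sum of weightage_i x rating_i over quality, price and delivery."""
    return sum(r * w for r, w in zip(ratings, weightages))

# Vendor A from the text: ratings 60%, 80%, 25%; weightages 40, 40, 20.
vr_a = composite_rating([0.60, 0.80, 0.25], [40, 40, 20])
print(vr_a)  # 61.0, matching VR = 40x0.6 + 40x0.8 + 20x0.25

def average_rating(lot_sizes, lot_ratings):
    """Lot-size-weighted average of the per-lot vendor ratings."""
    return sum(n * vr for n, vr in zip(lot_sizes, lot_ratings)) / sum(lot_sizes)

# Lot sizes and per-lot ratings below are assumed for illustration only.
print(average_rating([100, 200, 100], [61, 75, 80]))  # 72.75
```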
Based on the vendor rating as above, the vendors may be classified into three classes. An example
to illustrate the classification scheme is as under:

Rating obtained    Class of vendor
90 & above         A
80 – 90            B
Below 80           C
Vendor Rating Example:
Work out the vendor rating for the following data of three companies X, Y and Z. Weightages:
Quality – 50%, Price – 20%, Delivery – 20%, TQM System – 10%.
Table 1: Vendor rating example:
Solution:
In the denominator of the expressions for both Cp and Cpk, 6σ refers to the natural variability of
the stated process. The standard deviation σ is a measure of the natural variability of the output
of a process subject to a definite set of parameters which are measurable and controllable. It is
computed as

σ = √[ Σ (xi − x̄)² / (n − 1) ],  summed over i = 1 to n,

where x̄ denotes the mean of the outputs, xi is the individual output and n is the number of
observations.
This σ variation is taken on either side of the mean with different multiples of σ, like σ, 2σ,
3σ, to get total widths of variation around the mean of 2σ, 4σ, 6σ, etc. Each such width
corresponds to a definite fraction of the total output falling within it. Statistically, the
percentage of output at each sigma level (with the conventional 1.5σ shift of the mean included)
works out as follows:

Sigma level    % defective    % within specification
1              69%            31%
2              31%            69%
3              6.7%           93.3%
4              0.62%          99.38%
5              0.023%         99.977%
6              0.00034%       99.99966%
The term "six sigma process" comes from the notion that if one has six standard
deviations between the process mean and the nearest specification limit, as shown in the graph,
practically no items will fail to meet specifications. This is based on the calculation method
employed in process capability studies. Capability studies measure the number of standard
deviations between the process mean and the nearest specification limit in sigma units,
represented by the Greek letter σ (sigma). As process standard deviation goes up, or the mean of
the process moves away from the center of the tolerance, fewer standard deviations will fit
between the mean and the nearest specification limit, decreasing the sigma number and
increasing the likelihood of items outside specification.
After establishing stability - a process in control - the process can be compared to the tolerance
to see how much of the process falls inside or outside of the specifications. It is to be noted that
this analysis requires that the process be normally distributed. Distributions with other shapes are
beyond the scope of this material. Specifications are not related to control limits - they are
completely separate. Specifications reflect "what the customer wants", while control limits tell us
"what the process can deliver".
The first step is to compare the natural six-sigma spread of the process to the tolerance. This
index is known as Cp.
Here is the information you will need to calculate the Cp and Cpk:
Process average, or x̄
Upper Specification Limit (USL) and Lower Specification Limit (LSL).
The Process Standard Deviation (σest). This can be calculated directly from the individual
data, or can be estimated by: σest = R̄ / d2
Cp is calculated as follows: Cp = (USL − LSL) / (6σest)
Cp is often referred to as "Process Potential" because it describes how capable the process could
be if it were centered precisely between the specifications. A process can have a Cp in excess of
one but still fail to consistently meet customer expectations, as shown by the illustration below:
The measurement that assesses process centering in addition to spread, or variability, is Cpk. Cpk
is calculated as follows: Cpk = min[(USL − x̄), (x̄ − LSL)] / (3σest)
The illustrations below provide graphic examples of Cp and Cpk calculations using hypothetical
data:
So Cpk is 0.67, indicating that a small percentage of the process output is defective (about
2.3%). Without reducing variability, the Cpk could be improved to a maximum of 1.33, the Cp value,
by centering the process. Further improvement beyond that level would require action to reduce
process variability.
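The two indices can be sketched as follows. The specification limits, mean and σ below are made-up
illustration values, chosen so that the result reproduces the Cp = 1.33 / Cpk = 0.67 situation
discussed above:

```python
def cp(usl, lsl, sigma):
    """Process potential: tolerance width over the natural 6-sigma spread."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Process capability: distance from the mean to the nearest
    specification limit, measured in units of 3 sigma."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Assumed illustration values: USL = 10, LSL = 2, mean = 8, sigma = 1.
usl, lsl, mean, sigma = 10.0, 2.0, 8.0, 1.0
print(f"Cp  = {cp(usl, lsl, sigma):.2f}")         # 1.33: the potential is fine
print(f"Cpk = {cpk(usl, lsl, mean, sigma):.2f}")  # 0.67: the process is off-centre
```

Centering the process at 6.0 would raise Cpk to the Cp value of 1.33, exactly as the text argues.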
When the capability of a process is understood and documented, it can be used for measuring
continual improvement using trends over time, prioritizing the order of process improvements to
be made, and determining whether or not a process is capable of meeting customer requirements.
Internal benchmarking involves teams, divisions or workgroups within the organisation comparing
their performance with one another so as to improve the overall performance of the organisation
and hence obtain a higher return on investment. A few vital processes, selected from all the
processes in the process flow chart, should be chosen for the purpose and involved in the
benchmarking activities.
Competitive benchmarking refers to selecting a process for improvement; the measures of its
activities are evaluated and documented. For instance, in the case of a diagnostic centre, many
measures could be thought of, such as waiting time, late arrival of patients or staff, machine
downtime and so on. Thus all steps of the process can be involved, and the overall job could then
be done at lower cost and higher efficiency. The efficiency level should then be compared with
that of the best comparable diagnostic centres or competitors in the zone for better performance.
Identifying the processes for benchmarking is an important job in the organisation. An input into
a process results in an output from the process, which in turn becomes an input into another
process, producing another output. The series of such inputs and outputs makes up the whole
process chain of the organisation. So, on the completion of one process, the output should be
reported to the next process, where the percentage of errors in the reported output should be
assessed and reported back to the earlier process for corrective action. The bottlenecks in the
overall process can thus be identified through a series of assessments of the processes over time.
For such identification, selecting a process, determining the vital errors therein and
prioritising the processes for corrective action are the three steps involved in benchmarking the
entire set of processes one after the other.
The next steps are to select role-model partners specific to a particular process and to select
the team for conducting the benchmarking activities. The benchmarking project for a vital process
is planned, scheduled, directed and controlled by the constituted team, with due documentation of
the purpose, the process under consideration, the reasons for its selection, the scope, a
description of the key practices, the process measures identified, the estimated opportunity for
improvement and the anticipated impact after project completion.
The benchmarking process should adopt a model such as Motorola's five-step model, Westinghouse's
seven-step model, Xerox's 10-step model or the Deming wheel (Shewhart's PDCA cycle) for conducting
the improvement activities.
The quadratic loss function is also centered on the target τ. As the performance moves away from
the target, losses occur; therefore, producing within specification limits is not good enough. The
premise of the loss function is that, at some point as a process moves away from the target value,
there is a corresponding decrease in quality. The quality loss may be difficult for the customer
to discern at first, but eventually it reaches a threshold where a complaint is made or the
customer is dissatisfied. The ideal quality defined by Taguchi "is that quality which the customer
would experience when the product performs on target every time it is used, under all intended
operating conditions, throughout its intended life, without causing harmful side effects to
society."
The cost of a product consists of two elements, as given below:
Unit manufacturing cost – the cost incurred in manufacturing the product, including design cost,
material cost, manufacturing cost, depreciation of machinery, etc. These are the costs incurred
before delivery to the customer. A low manufacturing cost satisfies the customer.
Quality cost – the cost incurred on the product after delivery to the customer, including the
cost of operating the product (energy, environmental control such as temperature and humidity
control, and the cost of repairs). A low quality loss satisfies the customer.
The financial loss due to variation is called societal loss. It is approximately proportional to
the square of the deviation from the target.
Thus, quality loss occurs even when the product performs within the specification limits but away
from the target. Taguchi's loss function recognises both the customer's desire to have products
that are consistent and the producer's drive to control manufacturing cost. The goal of the
quality loss function is to reduce the societal loss.
Types of loss function:
Loss functions enable the calculation of societal loss when products deviate from the target
value. Taguchi developed many loss functions with different equations to suit different
applications. There are three types of loss functions, as given below:
Nominal –the-best
Lower-the-better
Higher-the-better
Each one of them is suitable for a class of applications.
Nominal-the-best: This loss function is applicable tpo those parameters, which have a central
value, and allowable tolerance on either side. The target τ is not necessarily the average process
performance, but it is the choice of the customers. It is that value with which majority of
customers will be satisfied. It may not be directly derivable. The quadratic loss function in the
case of nominal-the –best is given by
Loss = K (Y- τ)2
Where Loss = cost incurred asd performance deviates from the customer’s target value.
Y = actual performance
Τ = target value
K = Rs./∆ 2 , where ∆=USL−τ or, τ – LSL
K is also called the quality loss coefficient. Let us look at an example to understand this types of
loss function.
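As one such example, here is a hedged numerical sketch of the nominal-the-best loss. The target,
tolerance and repair cost are assumed illustration values, not taken from the text:

```python
# Assume a part with target tau = 10 mm, USL = 10.5 mm (so delta = 0.5 mm),
# and a repair cost of Rs. 50 when performance reaches the specification limit.
repair_cost = 50.0      # Rs., loss at the specification limit (assumed)
tau, delta = 10.0, 0.5  # target and half-tolerance (assumed)
K = repair_cost / delta ** 2  # quality loss coefficient, Rs. per mm^2

def loss(y):
    """Quadratic loss L = K * (Y - tau)^2 for actual performance y."""
    return K * (y - tau) ** 2

print(loss(10.0))   # on target: zero loss
print(loss(10.25))  # halfway to the limit: a quarter of the limit loss (12.5)
print(loss(10.5))   # at the specification limit: the full Rs. 50 loss
```

Note how halving the deviation quarters the loss: that quadratic growth is the whole point of
Taguchi's argument that merely being inside the specification limits is not good enough.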
Smaller-the-better:
This is useful, for instance, in many day-to-day applications such as:
Waiting time for a bus
Waiting time in a restaurant, etc.
Here the target is ideally zero. The loss function is shown below:
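The equation usually quoted for this curve is Loss = K·Y², with the target at zero. A small
sketch, with an assumed loss coefficient and assumed waiting times:

```python
K = 2.0  # Rs. of lost goodwill per (minute of waiting)^2, assumed for illustration

def loss_smaller_better(y):
    """Smaller-the-better loss: target is zero, so L = K * Y^2."""
    return K * y ** 2

for wait in (0, 5, 10):  # minutes waited for a bus
    print(wait, loss_smaller_better(wait))  # 0.0, 50.0, 200.0
```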
The concept of quality costs was first mentioned by Juran, and the concept was primarily applied
in the manufacturing industry. Whether called the price of nonconformance (Philip Crosby) or the
cost of poor quality (Joseph Juran), the term 'cost of quality' refers to the costs associated
with providing a poor-quality product or service.
Juran advocated the measurement of costs on a periodic basis as a management control tool.
Quality processes cannot be justified simply because "everyone else is doing them" - but return
on quality (ROQ) has dramatic impacts as companies mature. Research shows that the costs of
poor quality can range from 15%-40% of business costs (e.g., rework, returns or complaints,
reduced service levels, lost revenue). Most businesses do not know what their quality costs are
because they do not keep reliable statistics. Finding and correcting mistakes consumes an
inordinately large portion of resources. Typically, the cost to eliminate a failure in the customer
phase is five times greater than it is at the development or manufacturing phase. Effective quality
management decreases production costs because the sooner an error is found and corrected, the
less costly it will be.
1. External Failure Cost: cost associated with defects found after the customer receives the
product or service ex: processing customer complaints, customer returns, warranty claims,
product recalls.
2. Internal Failure Cost: Cost associated with defects found before the customer receives the
product or service ex: scrap, rework, re-inspection, re-testing, material review, material
downgrades.
3. Inspection (appraisal) Cost: cost incurred to determine the degree of conformance to quality
requirements (measuring, evaluating or auditing) ex: inspection, testing, process or service
audits, calibration of measuring and test equipment.
4. Prevention Cost: cost incurred to prevent (keep failure and appraisal cost to a minimum) poor
quality ex: new product review, quality planning, supplier surveys, process reviews, quality
improvement teams, education and training.
The most widely accepted method for measuring and classifying quality costs is the prevention,
appraisal and failure (PAF) model. Follow this five-step process:
1. Gather some basic information about the number of failures in the system
2. Apply some assumptions to that data in order to quantify the data
3. Chart the data based on the four elements listed above and study it
4. Allocate resources to combat the weak-spots
5. Do this study on a regular basis and evaluate your performance
It is believed that the customer places a certain value on quality. At first, if the quality is
too low, the customer will not buy the product. When quality is improved, costs also increase, but
the customer will not pay higher prices, so there is a limit to what one can charge. It is found
that profitability is maximized when the total cost of poor quality is about 25 percent of sales.
The problem is that there is very little profit, even at that cost level.
It is known that the typical three-sigma company spends about 25 percent of each sales rupee on
the cost of poor quality. Before starting its six-sigma journey, Peerless was in similar shape.
The slide prepared for the staff, shown in the figure below, illustrates the difference between
three-sigma and six-sigma quality.
Right now, suppose a business is only capable of operating at a level equivalent to about
three-sigma quality. Trying to get better quality out of the existing systems only adds cost. To
develop new systems that deliver better quality and lower costs simultaneously, implementing
six-sigma systems is a must.
Six sigma is not a destination but a journey of continuous improvement. Of course, no company goes
from 3-sigma to 6-sigma in one big jump. Instead, overall performance moves from three sigma to
4-sigma, then to 5-sigma and so on, as people are trained and systems are redesigned and improved.
Figure 3 illustrates the expected progress toward 6-sigma.
To summarize, six sigma is not about quality for the sake of quality; it is about providing better
value to customers, investors and employees.
To paraphrase: "Six sigma is a journey of a thousand miles. Creating a roadmap that links customer
satisfaction, quality and costs is the first step."
SPC uses statistical tools to observe the performance of the production process in order to detect
significant variations before they result in the production of a sub-standard article. Any source of
variation at any point of time in a process will fall into one of the above two classes, assignable
or non-assignable
Choice of limits
Shewhart set 3-sigma (3-standard deviation) limits on the following basis.
The coarse result of Chebyshev's inequality that, for any probability distribution, the
probability of an outcome more than k standard deviations from the mean is at most 1/k².
The finer result of the Vysochanskii–Petunin inequality that, for any unimodal probability
distribution, the probability of an outcome more than k standard deviations from the mean is at
most 4/(9k²).
In the Normal distribution, a very common probability distribution, 99.7% of the
observations occur within three standard deviations of the mean.
Calculation of standard deviation
The standard deviation (error) for the common-cause variation in the process is used to calculate
the control limits. Hence, the usual estimator, in terms of sample variance, is not used as this
estimates the total squared-error loss from both common- and special-causes of variation.
An alternative method is to use the relationship between the range of a sample and its standard
deviation derived by Leonard H. C. Tippett, as an estimator which tends to be less influenced by
the extreme observations which typify special-causes
The application of SPC involves three main phases of activity:
1. Understanding the process and the specification limits.
2. Eliminating assignable (special) sources of variation, so that the process is stable.
3. Monitoring the ongoing production process, assisted by the use of control charts, to
detect significant changes of mean or variation.
99.7% of observations sampled from a Normal distribution fall within 3 standard deviations of the
mean, i.e. between the limits µ ± 3σ. Thus, if observations started to appear outside these
limits, we would suspect that the process is no longer in control and that the distribution of X
had changed. When the distribution of X is not normal, a famous result called Chebyshev's
inequality tells us that, irrespective of the distribution, at least 89% of observations fall
within the 3-sigma limits. The probability of falling within the 3-sigma limits increases towards
99.7% as the distribution becomes more and more Normal.
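The three guarantees just quoted can be checked numerically for k = 3 using only the standard
library:

```python
import math

# For k = 3: Chebyshev's bound (any distribution), the Vysochanskii-Petunin
# bound (any unimodal distribution), and the exact Normal coverage.
k = 3
chebyshev_within = 1 - 1 / k ** 2           # at least 8/9, i.e. about 88.9%
vp_within = 1 - 4 / (9 * k ** 2)            # at least about 95.1%
normal_within = math.erf(k / math.sqrt(2))  # exactly about 99.73%

print(f"Chebyshev            >= {chebyshev_within:.1%}")
print(f"Vysochanskii-Petunin >= {vp_within:.1%}")
print(f"Normal               =  {normal_within:.2%}")
```

The Chebyshev figure is where the "at least 89%" in the text comes from; the Normal figure is the
familiar 99.7%.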
The different core values of a TQM company, along with the tools and techniques generally employed
in the system, can be shown in a diagram with their inter-relationships.
In another study, TQM can be defined as a management system, which consists of three
interdependent units, namely core values, techniques and tools. The idea is that the core values
must be supported by techniques, such as process management, benchmarking, and customer
focused planning, or improvement teams, and tools, such as control charts, the quality house or
Ishikawa diagrams, in order to be part of a culture. They emphasized that this systematic
definition will facilitate for organizations the understanding and implementation of TQM.
Therefore, the implementation work should begin with the acceptance of the core values that
characterize the culture of the organization. The next step is to continuously choose techniques
that are suitable for supporting the selected values. Ultimately, suitable tools have to be
identified and used in an efficient way in order to support the chosen techniques.
Figure 1: TQM as a Management System Consists of Values, Techniques and Tools (Hellsten &
Klefsjö, 2000)
According to this study, the core values are the basis for the culture of the organization. Another
component is techniques, i.e. ways to work within the organization to reach the values. A
technique consists of a number of activities performed in a certain order. The important concept
here is that TQM really should be looked on as a system. The values are supported by techniques
and tools to form a whole. We have to start with the core values and ask: Which core values
should characterize our organization? When this is decided, we have to identify techniques that
are suitable for our organization to use and support our values. Finally, from that decision the
suitable tools have to be identified and used in an efficient way to support the techniques (see
Figure 2).
Performance Consistency
Type I Error (Producer's Risk): This is the probability, for a given (n, c) sampling plan, of
rejecting a lot that has a defect level equal to the AQL. The producer suffers when this occurs,
because a lot with acceptable quality was rejected. The symbol α is commonly used for the Type I
error, and typical values for α range from 0.2 to 0.01.
Type II Error (Consumer's Risk): This is the probability, for a given (n, c) sampling plan, of
accepting a lot with a defect level equal to the LTPD. The consumer suffers when this occurs,
because a lot with unacceptable quality was accepted. The symbol β is commonly used for the Type
II error, and typical values for β range from 0.2 to 0.01.
Out of the various methods of quality checking, acceptance sampling is just a simple recipe that
is followed, and may not be the best thing to do.
Situation: large batches of items are produced. We must sample a small proportion of each batch
to check that the proportion of defective items is sufficiently low.
The Producer and Consumer of the items have to agree some unacceptable defective fraction that
should be rejected with high probability, and some good low defective fraction that should be
accepted with high probability. So the Producer and Consumer of items have to agree what
constitutes:
Acceptable quality level: p1 (consumer happy; want to accept with high probability)
Unacceptable quality level: p2 (consumer unhappy; want to reject with high probability)
Ideal sampling scheme: always accept the batch if p ≤ p1 and always reject it if p ≥ p2, i.e.
L(p) = 1 for p ≤ p1 and L(p) = 0 for p ≥ p2. However, the only way to guarantee this would be to
inspect the whole batch, which is usually not desirable (especially if testing requires
destruction of the item!). We therefore use a sampling scheme optimized so that the risk of each
of these undesirable outcomes is minimized:
α = P(Reject batch when p = p1) = 1 − L(p1): the Producer's Risk.
β = P(Accept batch when p = p2) = L(p2): the Consumer's Risk.
Of course the Producer really cares about rejecting the batch whenever p ≤ p1, but taking p = p1
is conservative, as the probability is always lower for p < p1.
Similarly for the Consumer's risk.
Once the Producer and Consumer have agreed the values of p1, p2, α and β, the values of n and c
can be calculated.
Two-stage sampling plan:
Sample n1 items; let X1 = number of defectives in the sample.
Accept the batch if X1 ≤ c1; reject it if X1 > c2 (where c2 > c1).
If c1 < X1 ≤ c2, sample a further n2 items; let X2 = number of defectives in the 2nd sample.
Accept the batch if X2 ≤ c3; otherwise reject the batch.
Although more complicated, by suitable choice of n1, n2, c1, c2 and c3 it is practicable to find a
plan with a similar L(p) to a single-stage design but a smaller average sample size.
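The acceptance probability L(p) of such a two-stage plan can be sketched directly from the rule
above (accept at stage 1 if X1 ≤ c1, go to stage 2 if c1 < X1 ≤ c2, then accept if X2 ≤ c3). The
plan parameters below are assumed illustration values:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k defectives in a sample of n, defect rate p."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def accept_prob(p, n1=50, n2=50, c1=1, c2=3, c3=2):
    """L(p) for the two-stage plan described above (parameters assumed)."""
    # Stage 1: accept outright if X1 <= c1.
    p_accept_1 = sum(binom_pmf(x, n1, p) for x in range(c1 + 1))
    # Proceed to stage 2 if c1 < X1 <= c2 (X1 > c2 rejects outright).
    p_second_stage = sum(binom_pmf(x, n1, p) for x in range(c1 + 1, c2 + 1))
    # Stage 2: accept if X2 <= c3.
    p_accept_2 = sum(binom_pmf(x, n2, p) for x in range(c3 + 1))
    return p_accept_1 + p_second_stage * p_accept_2

print(f"L(0.01) = {accept_prob(0.01):.3f}")  # good batches: almost always accepted
print(f"L(0.10) = {accept_prob(0.10):.3f}")  # bad batches: almost always rejected
```

Tuning n1, n2 and the three acceptance numbers trades average sample size against the steepness of
this L(p) curve.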
Quality
Acceptance sampling is a rather limited method of ensuring good quality:
It is too far downstream in the production process; a method is needed that identifies where
things are going wrong.
It is 0/1 (i.e. defective/OK) and so does not make efficient use of the data.
Large samples are required. It is better to have quality measurements on a continuous scale:
there will be an earlier warning of deteriorating quality and less need for large sample sizes.
Problem 4: It has been decided to sample 100 items at random from each large batch and
to reject the batch if more than 2 defectives are found. The acceptable quality level is 1%
and the unacceptable quality level is 5%. Find the Producer’s and Consumer’s risks.
Answer:
n = 100, c = 2, p1 = 0.01, p2 = 0.05.
Producer's risk:
α = 1 − L(p1) = 1 − [C(100,0)(0.01)^0(0.99)^100 + C(100,1)(0.01)^1(0.99)^99
+ C(100,2)(0.01)^2(0.99)^98] ≈ 0.079
Consumer's risk:
β = L(p2) = C(100,0)(0.05)^0(0.95)^100 + C(100,1)(0.05)^1(0.95)^99
+ C(100,2)(0.05)^2(0.95)^98 ≈ 0.118
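The two risks can be verified directly from the binomial distribution:

```python
from math import comb

def prob_accept(p, n=100, c=2):
    """L(p): probability of finding c or fewer defectives in a sample of n."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(c + 1))

producer_risk = 1 - prob_accept(0.01)  # alpha: rejecting a lot at the AQL
consumer_risk = prob_accept(0.05)      # beta: accepting a lot at the bad level

print(f"alpha = {producer_risk:.3f}")  # about 0.079
print(f"beta  = {consumer_risk:.3f}")  # about 0.118
```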
What is sampling and explain the use of different simple sampling plans for
sampling of attributes and variables:
SAMPLING ERRORS: Sampling errors occur as a result of calculating the estimate (estimated
mean, total, proportion, etc) based on a sample rather than the entire population. This is due to
the fact that the estimated figure obtained from the sample may not be exactly equal to the true
value of the population. Three factors affect sampling errors with respect to the design of
samples – the sampling procedure, the variation within the sample with respect to the variate of
interest, and the size of the sample. A larger sample results in a smaller sampling error.
The statistics based on samples drawn from the same population always vary from each other
(and from the true population value) simply because of chance. This variation is sampling error
and the measure used to estimate the sampling error is the standard error.
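The effect of sample size can be seen from the standard error of the sample mean, σ/√n; the
population σ = 10 below is assumed for illustration:

```python
import math

# Standard error of the sample mean: SE = sigma / sqrt(n). Quadrupling the
# sample size halves the sampling error. The population sigma is assumed.
sigma = 10.0
standard_errors = {n: sigma / math.sqrt(n) for n in (25, 100, 400)}
for n, se in standard_errors.items():
    print(f"n = {n:3d}: SE = {se:.2f}")  # 2.00, 1.00, 0.50
```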
Sampling error is one of two reasons for the difference between an estimate of a population
parameter and the true, but unknown, value of the population parameter. Sampling errors and
biases are common in any sampling method. Sampling error is one which occurs due to
unrepresentativeness of the sample selected for observation.
Sampling errors and biases are induced by the sample design. They include:
1. Selection bias: When the true selection probabilities differ from those assumed in
calculating the results.
2. Random sampling error: Random variation in the results due to the elements in the
sample being selected at random.
Sampling Error denotes a statistical error arising out of a certain sample selected being
unrepresentative of the population of interest. In simple terms, it is an error which occurs
when the sample selected does not contain the true characteristics, qualities or figures of the
whole population.
NONSAMPLING ERRORS
The accuracy of an estimate is also affected by errors arising from causes such as incomplete
coverage and faulty procedures of estimation, and together with observational errors, these make
up what are termed nonsampling errors. The aim of a survey is always to obtain information on
the true population value. The idea is to get as close as possible to the latter within the resources
available for survey. The discrepancy between the survey value and the corresponding true value
is called the observational error or response error. Response Nonsampling errors occur as a
result of improper records on the variate of interests, careless reporting of the data, or deliberate
modification of the data by the data collectors and recorders to suit their interests. Nonresponse
error occurs when a significant number of people in a survey sample are either absent; do not
respond to the questionnaire; or, are different from those who do in a way that is important to the
study.
Non-sampling error is an error arising from human error, such as error in problem identification,
method or procedure used, etc. Non-sampling errors are other errors which can impact the final
survey estimates, caused by problems in data collection, processing, or sample design. They
include:
1. Over coverage: Inclusion of data from outside of the population.
2. Under coverage: Sampling frame does not include elements in the population.
3. Measurement error: e.g. when respondents misunderstand a question, or find it difficult
to answer.
4. Processing error: Mistakes in data coding.
5. Non-response: Failure to obtain complete data from all selected individuals.
Non-sampling error is an umbrella term which comprises all the errors other than sampling error.
They arise for a number of reasons, i.e. errors in problem definition, questionnaire design,
approach, coverage, information provided by respondents, data preparation, collection, tabulation
and analysis.
Sampling and non-sampling errors may be compared as follows:

Basis for comparison   Sampling error                            Non-sampling error
Meaning                A statistical error that occurs because   An error arising, while conducting
                       the selected sample does not perfectly    survey activities, from sources
                       represent the population of interest.     other than sampling.
Cause                  Deviation between the sample mean and     Deficient or inappropriate
                       the population mean.                      analysis of data.
Type                   Random                                    Random or non-random
Occurrence             Arises only when a sample is selected     Occurs in both sample surveys
                       as a representative of the population.    and censuses.
Sample size            The possibility of error reduces as the   Has nothing to do with the
                       sample size increases.                    sample size.
A “lot,” or batch, of items can be inspected in several ways, including the use of single, double,
or sequential sampling. The types of acceptance plans (LASPs) to choose from fall into the
following categories:
Single sampling plans: One sample of items is selected at random from a lot, and the disposition
of the lot is determined from the resulting information. Two numbers specify a single sampling
plan: the number of items to be sampled (n) and a pre-specified acceptable number of defects (c).
If the number of defects found in the sample is less than or equal to the acceptance number c, the
whole batch is accepted; if there are more than c defects, the whole lot is rejected or subjected
to 100% screening. These are the most common (and easiest) plans to use, although not the most
efficient in terms of the average number of samples needed.
Double sampling plans: After the first sample is tested, there are three possibilities: accept the
lot, reject the lot, or take a second sample.
If the outcome is (3), and a second sample is taken, the procedure is to combine the results of
both samples and make a final decision based on that information. Often a lot of items is so good
or so bad that we can reach a conclusion about its quality by taking a smaller sample than would
have been used in a single sampling plan. If the number of defects in this smaller sample (of size
n1) is less than or equal to some lower limit (c1), the lot can be accepted. If the number of defects
exceeds an upper limit (c2), the whole lot can be rejected. But if the number of defects in the n1
sample is between c1 and c2, a second sample (of size n2) is drawn. The cumulative results
determine whether to accept or reject the lot. The concept is called double sampling.
Multiple sampling plans: This is an extension of the double sampling plans where more than
two samples are needed to reach a conclusion. The advantage of multiple sampling is smaller
sample sizes.
Sequential sampling plans: This is the ultimate extension of multiple sampling, where items are
selected from a lot one at a time, and after inspection of each item a decision is made to accept
the lot, reject the lot, or select another unit. Multiple sampling is an extension of double
sampling, with
smaller samples used sequentially until a clear decision can be made. When units are randomly
selected from a lot and tested one by one, with the cumulative number of inspected pieces and
defects recorded, the process is called sequential sampling. If the cumulative number of defects
exceeds an upper limit specified for that sample, the whole lot will be rejected. Or if the
cumulative number of rejects is less than or equal to the lower limit, the lot will be accepted. But
if the number of defects falls within these two boundaries, we continue to sample units from the
lot. It is possible in some sequential plans for the whole lot to be tested, unit by unit, before a
conclusion is reached.
Selection of the best sampling approach—single, double, or sequential—depends on the types of
products being inspected and their expected quality level. A very low-quality batch of goods, for
example, can be identified quickly and more cheaply with sequential sampling. This means that
the inspection, which may be costly and/or destructive, can end sooner. On the other hand, in
many cases a single sampling plan is easier and simpler for workers to conduct even though the
number sampled may be greater than under other plans.
Skip-lot sampling plans: Skip-lot sampling means that only a fraction of the submitted lots are
inspected. This mode of sampling saves time and inspection effort. However, skip-lot sampling
should only be used when it has been demonstrated that the quality of the submitted product is
very good. A skip-lot sampling plan is implemented as follows:
1. Design a single sampling plan by specifying the producer's risk (alpha) and the
consumer's risk (beta). This plan is called "the reference sampling plan".
2. Start with normal lot-by-lot inspection, using the reference plan.
3. When a pre-specified number, i, of consecutive lots are accepted, switch to inspecting
only a fraction f of the lots. The selection of the members of that fraction is done at
random.
4. When a lot is rejected return to normal inspection.
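The four steps above can be sketched in code. Here `inspect_lot` is a hypothetical stand-in for the reference single sampling plan, returning True when the lot passes:

```python
# Minimal sketch of the skip-lot switching logic (steps 1-4 above).
# `inspect_lot` stands in for the reference single sampling plan.
import random

def skip_lot_inspection(lots, inspect_lot, i=5, f=0.25, seed=0):
    """Yield (lot, inspected, accepted) tuples following skip-lot rules."""
    rng = random.Random(seed)
    consecutive_passes = 0
    skipping = False
    for lot in lots:
        if skipping and rng.random() >= f:
            yield lot, False, True        # lot skipped: accepted without inspection
            continue
        accepted = inspect_lot(lot)
        if accepted:
            consecutive_passes += 1
            if consecutive_passes >= i:
                skipping = True           # i consecutive passes: inspect only fraction f
        else:
            consecutive_passes = 0
            skipping = False              # rejection: return to normal inspection
        yield lot, True, accepted

# With f = 0 every lot after the first i passes is skipped:
demo = list(skip_lot_inspection(range(12), lambda lot: True, i=5, f=0.0))
print(sum(r[1] for r in demo))  # 5
```

The design choice here is that a single rejection immediately restores lot-by-lot inspection, matching step 4.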
Acceptance sampling
A lot acceptance sampling plan (LASP) is a sampling scheme and a set of rules for making
decisions. The decision, based on counting the number of defectives in a sample, can be to
accept the lot, reject the lot, or even, for multiple or sequential sampling schemes, to take another
sample and then repeat the decision process.
Acceptable Quality Level (AQL): The AQL is a percent defective that is the base line
requirement for the quality of the producer's product. The producer would like to design a
sampling plan such that there is a high probability of accepting a lot that has a defect
level less than or equal to the AQL.
Lot Tolerance Percent Defective (LTPD): The LTPD is a designated high defect level
that would be unacceptable to the consumer. The consumer would like the sampling plan
to have a low probability of accepting a lot with a defect level as high as the LTPD.
Type I Error (Producer's Risk): This is the probability, for a given (n,c) sampling plan,
of rejecting a lot that has a defect level equal to the AQL. The producer suffers when this
occurs, because a lot with acceptable quality was rejected. The symbol α is commonly
used for the Type I error, and typical values for α range from 0.2 to 0.01.
Type II Error (Consumer's Risk): This is the probability, for a given (n,c) sampling
plan, of accepting a lot with a defect level equal to the LTPD. The consumer suffers when
this occurs, because a lot with unacceptable quality was accepted. The symbol β is
commonly used for the Type II error, and typical values for β range from 0.2 to 0.01.
Operating Characteristic (OC) Curve: This curve plots the probability of accepting the
lot (Y-axis) versus the lot fraction or percent defectives (X-axis). The OC curve is the
primary tool for displaying and investigating the properties of a LASP.
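As a rough illustration, the OC curve of a single sampling plan (n, c) can be computed from the binomial distribution: Pa(p) is the probability of finding at most c defectives in a sample of n. The plan values n = 50, c = 2 below are assumed for illustration only:

```python
# Sketch: OC curve of a single sampling plan (n, c) under the binomial
# model: Pa(p) = P(at most c defectives in a sample of n).
from math import comb

def prob_accept(p, n, c):
    """Probability of accepting a lot whose fraction defective is p."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Illustrative plan: n = 50, c = 2
for p in (0.01, 0.05, 0.10):
    print(f"p = {p:.2f}: Pa = {prob_accept(p, 50, 2):.3f}")
```

Plotting Pa against p over a fine grid produces the OC curve itself; the steeper the curve between the AQL and the LTPD, the better the plan discriminates between good and bad lots.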
Average Outgoing Quality (AOQ): A common procedure, when sampling and testing are
non-destructive, is to 100% inspect rejected lots and replace all defectives with good
units. The AOQ is then the average fraction defective in the lots leaving the inspection
station; for a plan (n,c) applied to lots of size N with incoming fraction defective p,
AOQ = pa * p * (N - n)/N, where pa is the probability of accepting the lot.
Average Outgoing Quality Level (AOQL): A plot of the AOQ (Y-axis) versus the
incoming lot p (X-axis) will start at 0 for p = 0, and return to 0 for p = 1 (where every lot
is 100% inspected and rectified). In between, it will rise to a maximum. This maximum,
which is the worst possible long term AOQ, is called the AOQL.
Average Total Inspection (ATI): When rejected lots are 100% inspected, it is easy to
calculate the ATI if lots come consistently with a defect level of p. For a LASP (n,c) with
a probability pa of accepting a lot with defect level p, we have ATI = n + (1 - pa)(N - n),
where N is the lot size.
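Assuming the standard rectifying-inspection formulas, ATI = n + (1 - pa)(N - n) and AOQ = pa * p * (N - n)/N, a minimal numerical sketch with illustrative plan values is:

```python
# Sketch: AOQ, AOQL and ATI for a rectifying single sampling plan (n, c)
# on lots of size N; the plan values below are illustrative, not standard.
from math import comb

def pa(p, n, c):
    """Binomial probability of acceptance."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

def aoq(p, n, c, N):
    """Average outgoing quality: accepted lots still carry defectives in
    the uninspected (N - n) portion; rejected lots are fully rectified."""
    return pa(p, n, c) * p * (N - n) / N

def ati(p, n, c, N):
    """Average total inspection: n units per accepted lot, N per rejected lot."""
    return n + (1 - pa(p, n, c)) * (N - n)

n, c, N = 50, 2, 1000
# AOQL: the maximum of the AOQ curve, found here by a coarse grid search.
aoql = max(aoq(p / 1000, n, c, N) for p in range(1, 200))
print(f"AOQL ~= {aoql:.4f}")
print(f"ATI at p = 0.05: {ati(0.05, n, c, N):.1f}")
```

The grid search makes the AOQL definition concrete: AOQ starts at 0, rises to a maximum (the AOQL), and falls back toward 0 as rectification takes over.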
Average Sample Number (ASN): For a single sampling LASP (n,c) we know each and
every lot has a sample of size n taken and inspected or tested. For double, multiple and
sequential LASP's, the amount of sampling varies depending on the number of defects
observed. For any given double, multiple or sequential plan, a long term ASN can be
calculated assuming all lots come in with a defect level of p. A plot of the ASN, versus
the incoming defect level p, describes the sampling efficiency of a given LASP scheme.
Short notes
Ishikawa (Fishbone) Diagram:
Pareto Analysis:
Already given above.
Reliability & hazard function
Reliability is one of the dimensions of quality of products, i.e., goods and services. The buyer
expects that the goods or services will satisfy the requirement for quite some time, or for a
specified period of time, say 10 years. The probability of the item satisfying the customer at a
given time t is described by the term reliability. Thus, reliability is quite an important
characteristic of any product. Reliability can be defined as:
“Reliability of an entity is defined as the probability that it will perform its intended functions for
a specified period of time, under the stated operating conditions.” Thus reliability is a measure of
the ability of the product to function as intended at a given time. It is time dependent. Since the
reliability of any product at any given time is a probability, it will be equal to or less than 1.
Hazard function:
Hazard function is a measure of the tendency of a product to fail. If the value of the hazard
function is high, the probability of failure will be greater. The hazard function of an exponential
distribution is shown in the diagram below:
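A minimal sketch of the exponential case, with an assumed illustrative failure rate lam = 0.002 per hour, shows the flat (constant) hazard that the diagram depicts:

```python
# Sketch: reliability R(t) = e^(-lambda*t) and hazard h(t) = f(t)/R(t)
# for an exponential failure-time model; lam = 0.002 per hour is an
# assumed illustrative failure rate.
from math import exp

def reliability(t, lam):
    return exp(-lam * t)                 # probability of surviving to time t

def hazard(t, lam):
    pdf = lam * exp(-lam * t)            # failure density f(t)
    return pdf / reliability(t, lam)     # constant (= lam) for the exponential

lam = 0.002
for t in (0, 500, 1000):
    print(f"t = {t:5d} h: R = {reliability(t, lam):.3f}, h = {hazard(t, lam):.4f}")
```

Reliability decays toward 0 as t grows, while the hazard stays equal to lam at every t: the exponential model describes items that do not wear out.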
Kaizen:
Kaizen is a very significant concept within quality management and deserves specific
explanation:
Kaizen (usually pronounced 'kyzan' or 'kyzen' in the western world) is a Japanese word,
commonly translated to mean 'continuous improvement'.
Kaizen is a core principle of quality management generally, and specifically within the methods
of Total Quality Management and 'Lean Manufacturing'.
Originally developed and applied by Japanese industry and manufacturing in the 1950s and 60s,
Kaizen continues to be a successful philosophical and practical aspect of some of the best known
Japanese corporations, and has for many years since been interpreted and adopted by 'western'
organizations all over the world.
Kaizen is a way of thinking, working and behaving, embedded in the philosophy and values of
the organization. Kaizen should be 'lived' rather than imposed or tolerated, at all levels.
The aims of a Kaizen organization are typically defined as:
To be profitable, stable, sustainable and innovative.
To eliminate waste of time, money, materials, resources and effort and increase
productivity.
To make incremental improvements to systems, processes and activities before problems
arise rather than correcting them after the event.
To create a harmonious and dynamic organization where every employee participates and
is valued.
Key concepts of Kaizen:
‘Every’ is a key word in Kaizen: improving everything that everyone does in every
aspect of the organization in every department, every minute of every day.
Evolution rather than revolution: continually making small, 1% improvements to 100
things is more effective, less disruptive and more sustainable than improving one thing by
100% when the need becomes unavoidable.
Everyone involved in a process or activity, however apparently insignificant, has
valuable knowledge and participates in a working team or Kaizen group (see also Quality
Circles below).
Everyone is expected to participate, analysing, providing feedback and suggesting
improvements to their area of work.
Every employee is empowered to participate fully in the improvement process: taking
responsibility, checking and coordinating their own activities. Management practice
enables and facilitates this.
Every employee is involved in the running of the company, and is trained and informed
about the company. This encourages commitment and interest, leading to fulfillment and
job satisfaction.
Kaizen teams use analytical tools and techniques to review systems and look for ways to
improve.
At its best, Kaizen is a carefully nurtured philosophy that works smoothly and steadily, and
which helps to align 'hard' organizational inputs and aims (especially in process-driven
environments), with 'soft' management issues such as motivation and empowerment.
Like any methodology however, poor interpretation and implementation can limit the usefulness
of Kaizen practices, or worse cause them to be counter-productive.
Kaizen is unsuccessful typically where:
Kaizen methods are added to an existing failing structure, without fixing the basic
structure and philosophy.
Kaizen is poorly integrated with processes and people's thinking.
Training is inadequate.
Executive/leadership doesn't understand or support Kaizen.
Employees and managers regard Kaizen as some form of imposed procedure, lacking
meaningful purpose.
Kaizen works best when it is 'owned' by people, who see the concept as both empowering of
individuals and teams, and a truly practical way to improve quality and performance, and thereby
job satisfaction and reward. As ever, such initiatives depend heavily on commitment from above,
critically:
to encourage and support Kaizen, and
to ensure improvements produce not only better productivity and profit for the
organization, but also better recognition and reward and other positive benefits for
employees, whose involvement drives the change and improvement in the first place.
Interestingly, the spirit of Kaizen, which is distinctly Japanese in origin - notably its significant
emphasis upon individual and worker empowerment in organizations - is reflected in many
'western' concepts of management and motivation, for example the Theory Y principles
described by Douglas McGregor; Herzberg's Motivational Theory; Maslow's Hierarchy of Needs
and related thinking; Adams' Equity Theory; and Charles Handy's motivational theories.
Fascinatingly, we can now see that actually very close connections exist between:
the fundamental principles of Quality Management - which might be regarded as cold
and detached and focused on 'things' not people, and
progressive 'humanist' ideas about motivating and managing people - which might
be regarded as too compassionate and caring to have a significant place in the
optimization of organizational productivity and profit.
The point is that in all effective organizations a very strong mutual dependence exists between:
systems, processes, tools, productivity, profit - the 'hard' inputs and outputs (some say
'left-side brain'), and
people, motivation, teamwork, communication, recognition and reward - the 'soft' inputs
and outputs ('right-side brain')
Kaizen helps to align these factors, and keep them aligned.
ISO stands for the “International Organization for Standardization”. The ISO 9000 family
of standards relates to quality management systems and is designed to help organizations
ensure they meet the needs of customers and other stakeholders. The standards are
published by ISO and are available through national standards bodies. ISO 9000 deals with the
fundamentals of quality management systems, including the eight management principles on
which the family of standards is based.
ISO 9000 - 1: Quality Management and Quality Assurance standards-Part1 which mentions
about the guidelines for selection and use.
ISO 9000 - 2: Quality Management and Quality Assurance standards - Part 2 General guide lines
for the application of ISO 9001, ISO 9002, and ISO 9003.
ISO 9000 – 3: Quality Management and Quality Assurance standards-Part 3 Guide lines for the
application of ISO 9001 to the development, supply and maintenance of software.
ISO 9001 -: Quality systems-Model for Quality Assurance in design / development, production,
installation and servicing.
ISO 9002 -: Quality Systems - Model for Quality Assurance in production installation and
servicing.
ISO 9003-: Quality Systems - Model for Quality Assurance in final inspection and test.
ISO 9004-: Quality Management and Quality system elements - Guidelines.
The standards are reviewed every few years by ISO. The 1994 version was called the ISO
9000:1994 series, consisting of ISO 9001:1994, 9002:1994 and 9003:1994. In the 2000 revision
(the ISO 9000:2000 series), the ISO 9002 and 9003 standards were integrated into one single
certifiable standard, ISO 9001:2000; the next major revision, in 2008, produced ISO 9001:2008.
The ISO 9004:2009 document gives guidelines for performance improvement over and above the
basic standard (ISO 9001:2008). The Quality Management System standards created by ISO are
meant to certify the processes and the system of an organization, not the product or service itself.
ISO 9000 standards do not certify the quality of the product or service. In 2005 the
International Organization for Standardization released a standard, ISO 22000, meant for the
food industry. ISO has also released standards for other industries. For example Technical
Standard TS 16949 defines requirements in addition to those in ISO 9001:2008 specifically for
the automotive industry. ISO has a number of standards that support quality management.
Up to this point, the discussion applies to manufacturing as well as service organisations. Quality
system requirements vary depending upon the model chosen. A summary of the ISO 9000
standards should be communicated to all categories of employees to build awareness, as under: