
Quality Systems & Leaders

W. Edwards Deming

Joseph Juran

Philip Crosby

Malcolm Baldrige

W. Edwards Deming

Who was he?

Born in 1900, Edwards Deming was an American engineer, professor, statistician, lecturer, author, and
management consultant.

What was his philosophy?

Deming opined that by embracing certain principles of management, organizations can improve product quality and concurrently reduce costs. Cost reduction would include reducing the production of waste, staff attrition and litigation, while simultaneously increasing customer loyalty. The key, in Deming’s opinion, was to practice constant improvement, and to imagine the manufacturing process as a seamless whole, rather than as a system made up of incongruent parts.

In the 1970s, some of Deming's Japanese proponents summarized his philosophy in a two-part comparison:

• When organizations focus primarily on quality, defined by the equation ‘Quality = Results of work efforts / Total costs’, quality improves and costs fall over time.
• When organizations focus primarily on costs, costs rise and quality drops over time.

The Deming Cycle

Also known as the Shewhart Cycle, the Deming Cycle, often called the PDCA, was a result of the need to link the
manufacture of products with the needs of the consumer along with focusing departmental resources in a
collegial effort to meet those needs.

The steps that the cycle follows are:

Plan: Design a consumer research methodology that will inform business process components.
Do: Implement the plan and measure its performance.
Check: Check the measurements and report the findings to the decision makers.
Act/Adjust: Decide on the changes that need to be made and implement them.

The 14 Points for Management

Deming’s other chief contribution came in the form of his 14 Points for Management, which consists of a set of
guidelines for managers looking to transform business effectiveness.

1. Create Constancy of Purpose for Improvement of Product and Service
2. Follow A New Philosophy
3. Discontinue Dependence on Mass Inspection
4. Cease the Practice of Awarding Business on the Basis of Price Tag Alone
5. Strive Always to Improve the Production and Service of The Organization
6. Introduce New and Modern Methods of On-The-Job Training
7. Devise Modern Methods of Supervision
8. Let Go of Fear
9. Destroy Barriers Among the Staff Areas
10. Dispose of The Numerical Goals Created for The Workforce
11. Eradicate Work Standards and Numerical Quotas
12. Abolish the Barriers That Burden the Workers
13. Devise A Vigorous Education and Training Program
14. Cultivate Top Management That Will Strive Toward These Goals

The 7 Deadly Diseases for Management

The 7 Deadly Diseases for Management defined by Deming are the most serious and fatal barriers that management faces in attempting to increase effectiveness and institute continual improvement.

1. Lack of Constancy of Purpose to Plan Products and Services
2. Organizations Giving Importance to Short-Term Profits
3. Employing Personnel Review Systems to Evaluate Performance, Merit Ratings, and Annual Reviews for Employees
4. Constant Job Hopping
5. Use of Visible Figures Only for Management, With Little or No Consideration of Figures That Are Unknown or Unknowable
6. An Overload of Medical Costs
7. Excessive Costs of Liability

Joseph Juran

Who was he?

Born in 1904, Joseph Juran was a Romanian-born American engineer and management consultant of the 20th century, and a missionary for quality and quality management. Like Deming's, Juran's philosophy also took root in Japan. He stressed the importance of a broad, organizational-level approach to quality, stating that total quality management begins with the highest levels of management and continues all the way to the bottom.

Influence of the Pareto Principle

In 1941, Juran was introduced to the work of Vilfredo Pareto. He studied the Pareto principle (the 80-20 law),
which states that, for many events, roughly 80% of the effects follow from 20% of the causes and applied the
concept to quality issues. Thus, according to Juran, 80% of the problems in an organization are caused by 20%
of the causes. This is also known as the rule of the "Vital Few and the Trivial Many". In his later years, Juran preferred "the Vital Few and the Useful Many", suggesting that the remaining 80% of the causes should not be completely ignored.

Pareto Cycle/ Analysis

Pareto analysis is a formal technique useful where many possible courses of action are competing for attention.
In essence, the problem-solver estimates the benefit delivered by each action, then selects a number of the most
effective actions that deliver a total benefit reasonably close to the maximal possible one.

Pareto analysis is a creative way of looking at the causes of problems because it helps stimulate thinking and organize thoughts. However, it can be limited by its exclusion of possibly important problems which may be small initially but grow with time. It should therefore be combined with other analytical tools, such as failure mode and effects analysis and fault tree analysis.

This technique helps to identify the top portion of causes that need to be addressed to resolve the majority of problems. Once the predominant causes are identified, tools like the Ishikawa (fishbone) diagram can be used to identify the root causes of the problems. While it is common to refer to Pareto as the "80/20" rule, under the assumption that in all situations 20% of causes determine 80% of problems, this ratio is merely a convenient rule of thumb and should not be treated as an immutable law of nature.

The application of the Pareto analysis in risk management allows management to focus on those risks that have
the most impact on the project.
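The ranking logic described above is easy to automate. The short Python sketch below, with a purely hypothetical defect tally, sorts causes by frequency and returns the "vital few" whose cumulative share first reaches a chosen threshold (80% by default); the function name and the data are illustrative only, not part of Juran's original method.

from typing import Dict, List, Tuple

def pareto_analysis(cause_counts: Dict[str, int], threshold: float = 0.80) -> List[Tuple[str, float]]:
    """Rank causes by frequency and return the 'vital few' whose
    cumulative share of all occurrences first reaches the threshold."""
    total = sum(cause_counts.values())
    ranked = sorted(cause_counts.items(), key=lambda kv: kv[1], reverse=True)
    vital_few = []
    cumulative = 0.0
    for cause, count in ranked:
        cumulative += count / total
        vital_few.append((cause, cumulative))
        if cumulative >= threshold:
            break
    return vital_few

# Hypothetical defect tally from an inspection log
defects = {"scratches": 120, "misalignment": 90, "porosity": 40,
           "wrong label": 15, "burrs": 10, "discoloration": 5}

for cause, cum_share in pareto_analysis(defects):
    print(f"{cause}: cumulative share {cum_share:.0%}")

In this made-up tally, two of the six causes already account for 75% of all defects, which is exactly the kind of concentration a Pareto analysis is meant to expose.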

What was Juran’s philosophy?


The primary focus of every business in Juran's time was the quality of the end product, which is what Deming stressed. Juran shifted track to focus instead on the human dimension of quality management. He laid emphasis on the importance of educating and training managers. For Juran, the root causes of quality issues were resistance to change and human relations problems.

His approach to quality management drew one outside the walls of a factory and into the non-manufacturing
processes of the organization, especially those that were service related.

The Juran Quality Trilogy

One of the first to write about the cost of poor quality, Juran developed an approach for cross-functional management that comprises three managerial processes:

Quality Planning: this is a process that involves creating awareness of the necessity to improve, setting certain
goals and planning ways to reach those goals. This process has its roots in the management's commitment to
planned change that requires trained and qualified staff.
Quality Control: this is a process to develop the methods to test the products for their quality. Deviation from
the standard will require change and improvement.
Quality Improvement: this is a process that involves the constant drive to perfection. Quality improvements
need to be continuously introduced. Problems must be diagnosed to the root causes to develop solutions. The
Management must analyze the processes and the systems and report back with recognition and praise when
things are done right.

Juran Trilogy

Three Steps to Progress


Juran also introduced the Three Basic Steps to Progress, which, in his opinion, companies must implement if
they are to achieve high quality.

1. Accomplish Improvements That Are Structured On A Regular Basis With Commitment And A Sense Of
Urgency.
2. Build An Extensive Training Program.
3. Cultivate Commitment And Leadership At The Higher Echelons Of Management.

Ten Steps to Quality


Juran devised ten steps for organizations to follow to attain better quality.

1. Establish Awareness For The Need To Improve And The Opportunities For Improvement.
2. Set Goals for Improvement.
3. Organize To Meet The Goals That Have Been Set.
4. Provide Training.
5. Implement Projects Aimed At Solving Problems.
6. Report Progress.
7. Give Recognition.
8. Communicate Results.
9. Keep Score.
10. Maintain Momentum By Building Improvement Into The Company's Regular Systems.

Philip Crosby

Who was he?

Born in 1926, Philip Crosby was an author and businessman who contributed to management theory and
quality management practices. He started his career in quality much later than Deming and Juran. He founded
Philip Crosby and Associates, which was an international consulting firm on quality improvement.

Philip Crosby Philosophy/ Theory

Crosby's principle, Doing It Right the First Time, was his answer to the quality crisis. He defined quality as full
and perfect conformance to the customers' requirements. The essence of his philosophy is expressed in what
he called the Absolutes of Quality Management and the Basic Elements of Improvement.

The Absolutes of Quality Management

Crosby defined Four Absolutes of Quality Management, which are

The First Absolute: The definition of quality is conformance to requirements
The Second Absolute: The system of quality is prevention
The Third Absolute: The performance standard is zero defects
The Fourth Absolute: The measurement of quality is the price of non-conformance

Zero Defects

Crosby's Zero Defects is a performance method and standard stating that people should commit themselves to closely monitoring details and avoiding errors. By doing this, they move closer to the zero-defects goal. According to Crosby, Zero Defects was not just a manufacturing principle but an all-pervading philosophy that ought to influence every decision we make. It reinforces the managerial notions that defects are unacceptable and that everyone should do ‘things right the first time’.

The Fourteen Steps to Quality Improvement

1. Make it clear that management is committed to quality for the long term.
2. Form cross-departmental quality teams.
3. Identify where current and potential problems exist.
4. Assess the cost of quality and explain how it is used as a management tool.
5. Improve the quality awareness and personal commitment of all employees.
6. Take immediate action to correct problems identified.
7. Establish a zero-defect program.
8. Train supervisors to carry out their responsibilities in the quality program.
9. Hold a Zero Defects Day to ensure all employees are aware there is a new direction.
10. Encourage individuals and teams to establish both personal and team improvements.
11. Encourage employees to tell management about obstacles they face in trying to meet quality goals.
12. Recognize employees who participate.
13. Implement quality controls to promote continual communication.
14. Repeat everything to illustrate that quality improvement is a never-ending process.

The Quality Vaccine

Crosby explained that this vaccination was the medicine for organizations to prevent poor quality.

Integrity: Quality must be taken seriously throughout the entire organization, from the highest levels to the
lowest. The company's future will be judged by the quality it delivers.

Systems: The right measures and systems are necessary for quality costs, performance, education,
improvement, review, and customer satisfaction.

Communication: Communication is a very important factor in an organization. It is required to communicate the specifications, requirements and improvement opportunities of the organization. Listening to customers and operatives intently and incorporating feedback will give the organization an edge over the competition.

Operations: A culture of improvement should be the norm in any organization, and the process should be solid.

Policies: Policies that are implemented should be consistent and clear throughout the organization.

Malcolm Baldrige
Who was he?

Baldrige was born on October 4, 1922, in Omaha, Nebraska. He attended The Hotchkiss School and Yale University, where he was a member of Delta Kappa Epsilon. During World War II, he served in combat in the Pacific as a Captain in the 27th Infantry Division. Baldrige began his career in manufacturing in 1947 as a foundry hand at an iron company in Connecticut and rose to the presidency of that company by 1960. Prior to entering the Cabinet, Baldrige was chairman and chief executive officer of the Waterbury, Connecticut-based brass company Scovill, Inc. Having joined Scovill in 1962, he is credited with leading its transformation from a financially troubled brass mill into a highly diversified manufacturer of consumer, housing and industrial goods.

Baldrige Excellence Framework

The Baldrige Excellence Framework has three parts:

1. The Criteria For Performance Excellence,
2. Core Values And Concepts, And
3. Scoring Guidelines.

The framework serves two main purposes:

1. To Help Organizations Assess Their Improvement Efforts, Diagnose Their Overall Performance
Management System, And Identify Their Strengths And Opportunities For Improvement And
2. To Identify Baldrige Award Recipients That Will Serve As Role Models For Other Organizations.

The criteria for performance excellence are based on a set of core values:

1. Systems Perspective
2. Visionary Leadership
3. Customer-Focused Excellence
4. Valuing People
5. Organizational Learning And Agility
6. Focus On Success
7. Managing For Innovation
8. Management By Fact
9. Societal Responsibility
10. Ethics And Transparency
11. Delivering Value And Results

The questions that make up the criteria represent seven aspects of organizational management and
performance:

1. Leadership
2. Strategy
3. Customers
4. Measurement, Analysis, And Knowledge Management
5. Workforce
6. Operations
7. Results

The three sector-specific versions of the Baldrige framework are revised every two years:

1. Baldrige Excellence Framework (Business/Non-profit)
2. Baldrige Excellence Framework (Education)
3. Baldrige Excellence Framework (Health Care)

Fishbone (Ishikawa) Diagram

Also Called: Cause–and–Effect Diagram, Ishikawa Diagram

Variations: cause enumeration diagram, process fishbone, time–delay fishbone, CEDAC (cause–and–effect
diagram with the addition of cards), desired–result fishbone, reverse fishbone diagram
The fishbone diagram identifies many possible causes for an effect or problem. It can be used to structure a
brainstorming session. It immediately sorts ideas into useful categories.

When to Use a Fishbone Diagram


• When identifying possible causes for a problem.
• Especially when a team’s thinking tends to fall into ruts.

Fishbone Diagram Procedure

Materials needed: flipchart or whiteboard, marking pens.


1. Agree on a problem statement (effect). Write it at the centre right of the flipchart or whiteboard. Draw a
box around it and draw a horizontal arrow running to it.
2. Brainstorm the major categories of causes of the problem. If this is difficult, use generic headings:
   • Methods
   • Machines (equipment)
   • People (manpower)
   • Materials
   • Measurement
   • Environment
3. Write the categories of causes as branches from the main arrow.
4. Brainstorm all the possible causes of the problem. Ask: “Why does this happen?” As each idea is given,
the facilitator writes it as a branch from the appropriate category. Causes can be written in several
places if they relate to several categories.
5. Again ask “why does this happen?” about each cause. Write sub–causes branching off the causes.
Continue to ask “Why?” and generate deeper levels of causes. Layers of branches indicate causal
relationships.
6. When the group runs out of ideas, focus attention on places on the chart where ideas are few.

Fishbone Diagram Example

This fishbone diagram was drawn by a manufacturing team to try to understand the source of periodic iron
contamination. The team used the six generic headings to prompt ideas. Layers of branches show thorough
thinking about the causes of the problem.

For example, under the heading “Machines,” the idea “materials of construction” shows four kinds of equipment
and then several specific machine numbers.

Note that some ideas appear in two different places. “Calibration” shows up under “Methods” as a factor in the
analytical procedure, and also under “Measurement” as a cause of lab error. “Iron tools” can be considered a
“Methods” problem when taking samples or a “Manpower” problem with maintenance personnel.

Seven Basic Tools of Quality

The Seven Basic Tools of Quality is a designation given to a fixed set of graphical techniques identified as being
most helpful in troubleshooting issues related to quality. They are called basic because they are suitable for
people with little formal training in statistics and because they can be used to solve the vast majority of quality-
related issues.

The seven tools are:

1. Cause-and-effect diagram (also known as the "fishbone" or Ishikawa diagram)
2. Check sheet
3. Control chart
4. Histogram
5. Pareto chart
6. Scatter diagram
7. Stratification (alternately, flow chart or run chart)

The designation arose in post war Japan, inspired by the seven famous weapons of Benkei. It was possibly
introduced by Kaoru Ishikawa who in turn was influenced by a series of lectures W. Edwards Deming had given
to Japanese engineers and scientists in 1950. At that time, companies that had set about training their
workforces in statistical quality control found that the complexity of the subject intimidated the vast majority
of their workers and scaled back training to focus primarily on simpler methods which suffice for most quality-
related issues.

The Seven Basic Tools stand in contrast to more advanced statistical methods such as survey sampling,
acceptance sampling, statistical hypothesis testing, design of experiments, multivariate analysis, and various
methods developed in the field of operations research.

Examples (figures)

1. Cause-and-effect diagram
2. Check sheet
3. Control chart
4. Histogram
5. Pareto chart
6. Scatter diagram
7. Flow chart
8. Run chart

Six Sigma

Six Sigma is a disciplined, statistically based, data-driven approach and continuous improvement methodology for eliminating defects in a product, process or service. It was developed by Motorola in the early to mid-1980s, based on quality management fundamentals, and later became a popular management approach at General Electric (GE) in the early 1990s. Hundreds of companies around the world have adopted Six Sigma as a way of doing business.

Sigma represents the population standard deviation, which is a measure of the variation in a data set collected
about the process. If a defect is defined by specification limits separating good from bad outcomes of a process,
then a six sigma process has a process mean (average) that is six standard deviations from the nearest
specification limit. This provides enough buffer between the process natural variation and the specification
limits.

For example, if a product must have a thickness between 10.32 and 10.38 cm to meet customer requirements, then the process mean should be around 10.35 cm, with a standard deviation no greater than 0.005 cm (10.38 is then at least 6 standard deviations away from 10.35).

Six Sigma can also be thought of as a measure of process performance, with Six Sigma being the goal, based on
the defects per million. Once the current performance of the process is measured, the goal is to continually
improve the sigma level striving towards 6 sigma. Even if the improvements do not reach 6 sigma, the
improvements made from 3 sigma to 4 sigma to 5 sigma will still reduce costs and increase customer
satisfaction.
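Using the thickness example above, the relationship between the specification limits, the mean and the standard deviation can be checked in a few lines of Python. This is only a minimal sketch using SciPy's normal distribution, and it ignores the 1.5-sigma long-term shift discussed later in this document.

from scipy.stats import norm

# Thickness example from the text: specs 10.32-10.38 cm,
# process mean 10.35 cm, standard deviation 0.005 cm (assumed exact here)
lsl, usl = 10.32, 10.38
mean, sigma = 10.35, 0.005

# Distance from the mean to the nearest specification limit, in standard deviations
z_nearest = min(usl - mean, mean - lsl) / sigma
print(f"Nearest spec limit is {z_nearest:.1f} sigma away")      # 6.0

# Expected out-of-spec fraction for a centred, stable process
p_out = norm.sf((usl - mean) / sigma) + norm.cdf((lsl - mean) / sigma)
print(f"Expected defects: {p_out * 1e6:.4f} per million")       # about 0.002 PPM

The commonly quoted figure of 3.4 defects per million for a six sigma process assumes the 1.5-sigma shift; without it, a perfectly centred six sigma process yields only about 0.002 defects per million.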

Statistical Process Control (SPC)
Cm (capability machine)

The Cm index describes machine capability; it is the number of times the spread of the machine fits into the tolerance width. The higher the value of Cm, the better the machine.

Example: if Cm = 2.5, the spread fits 2½ times into the tolerance width, while Cm = 1 means that the spread is equal to the tolerance width.

Note that even if the spread is off-centre, it is still the same size (Cm index). The figure takes no account of where the spread is positioned in relation to the upper and lower tolerance limits, but simply expresses the relationship between the width of the spread and the tolerance width (see Fig. 1).
Cmk (capability machine index)

If you also want to study the position of the machine’s capability in relation to the tolerance limits, you use the Cmk index, which describes the capability corrected for position. It is not much use having a high Cm index if the machine setting is way off centre in relation to the middle of the tolerance range.

A high Cmk index means, then, that you have a good machine with a small spread in relation to the tolerance width, and also that it is well centred within that width. If Cmk is equal to Cm, the machine is set to produce exactly in the middle of the tolerance range (see Fig. 2).

A normal requirement is that Cmk should be at least 1.67.

Cp (capability process)

The Cp index describes process capability; it is the number of times the spread of the process fits into the tolerance width. The higher the value of Cp, the better the process.

Example: if Cp = 2.5, the spread of the process fits 2½ times into the tolerance width, while Cp = 1 means that the spread is equal to the tolerance width.

Note that even if the spread is off-centre, it is still the same size (Cp index). The figure takes no account of where the spread is positioned in relation to the upper and lower tolerance limits, but simply expresses the relationship between the width of the spread and the tolerance width (see Fig. 3).
Cpk (capability process index)

If you also want to study the position of the process in relation to the tolerance limits, you use the Cpk index, which describes the process capability corrected for position. It is not much use having a high Cp index if the process setting is way off centre in relation to the middle of the tolerance range.

A high Cpk index means, then, that you have a good process with a small spread in relation to the tolerance width, and also that it is well centred within that width. If Cpk is equal to Cp, the process is set to produce exactly in the middle of the tolerance range (see Fig. 4).

A normal requirement is that Cpk should be at least 1.33.
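The text above describes Cp and Cpk (and, for short machine-only runs, Cm and Cmk) verbally; the usual textbook formulas are Cp = (USL − LSL) / (6s) and Cpk = min(USL − mean, mean − LSL) / (3s). The Python sketch below applies them to simulated measurements; the data and the 9.95-10.05 mm tolerance are invented purely for illustration.

import numpy as np

def capability_indices(samples, lsl, usl):
    """Cp and Cpk from measured samples, using the usual textbook formulas.
    Cm and Cmk are computed the same way from a short, machine-only run."""
    mean = np.mean(samples)
    sigma = np.std(samples, ddof=1)                    # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)                     # spread vs. tolerance width
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)    # corrected for position
    return cp, cpk

# Hypothetical shaft diameters (mm) against a 9.95-10.05 mm tolerance
rng = np.random.default_rng(seed=1)
shafts = rng.normal(loc=10.01, scale=0.01, size=50)
cp, cpk = capability_indices(shafts, lsl=9.95, usl=10.05)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")   # Cpk < Cp because the mean is off-centre

Because the simulated mean sits above the middle of the tolerance range, Cpk comes out lower than Cp, which is exactly the "corrected for position" effect described above.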
The six factors

These are the factors that are generally regarded as causing variation in capability measurements:

• Machine (e.g. degree of wear and choice of tooling);
• Measurement (e.g. resolution and spread of measuring instrument);
• Operator (e.g. how experienced and careful he/she is);
• Material (e.g. variations in surface smoothness and hardness);
• Environment (e.g. variations in temperature, humidity and voltage);
• Method (e.g. type of machining operation).

Capability
Machine capability

Machine capability is measured in Cm and Cmk; it is a snapshot picture that shows how well a machine is performing right now in relation to the tolerance limits. Fig. 6 shows some examples.

When measuring machine capability you must not alter machine settings or change tools, materials, operators or measurement methods, stop the machine, etc. In other words: out of the six factors, only machine and measurement are allowed to affect the result.
Process capability

Process capability is a long-term study, measured in Cp and Cpk, that shows how well a process is performing in relation to the tolerance limits while the study is in progress, as well as indicating likely performance in the immediate future.

You could say that process capability is the sum of an index of machine capabilities measured over a period of time (see Fig. 7). When measuring process capability, you must include everything that affects the process, i.e. all six factors.
                              Machine capability       Process capability
Index                         Cm and Cmk               Cp and Cpk
Factors influencing result    Machine & measurement    All six factors
Stoppages                     Not to be included       Included
Adjustments                   Not to be included       Included
No. of components             20–50 in succession      50–250
Time                          Short                    Long

Standard deviation
Target value

Imagine a shaft fitted in a round hole.

If the diameter of the shaft is on the large side, it leaves less-than-optimum room for lubricant between the shaft and the hole. This results in poorer lubrication, faster wear and shorter lifetime.

A smallish diameter, on the other hand, means greater-than-optimum play. The play tends to increase faster, which also shortens lifetime.

The assembly works best at the target value T, which in this case is in the middle of the tolerance range (see Fig. 8).

For unilateral properties such as run-out, surface smoothness or mechanical strength, the target value is zero (see Fig. 9).

Statistical process control lets you center your process on the target value.

Centering value for target value

This is the distance from the target value T to the mean value of the machine or process spread (the hump on the normal distribution curve), expressed as a percentage of the tolerance width (see Fig. 10).

There is less talk of the centering value for the target value nowadays, but the maximum permitted deviation used to be in the region of ±15%.

Normal distribution curve

Also called the bell curve because of its shape, this is the pattern in which measurement readings are distributed in most cases, because of random variations about the mean value (the highest point on the hump, see Fig. 11).

Note that most of the readings are grouped near the hump; the farther out toward the edges, the fewer the readings. In other words, it is not very likely that you will find any components at all giving widely deviant readings when making normal spot-check measurements. So it is not enough that the components you happen to measure are all within the tolerances.

It takes measurements of a large number of components to determine the size and shape of the bell curve, and that can be time-consuming. But the standard deviation offers you a shortcut!
PPM (parts per million)

In a quality control context, PPM stands for the number of parts per million (cf. percent) that lie outside the tolerance limits. Cpk = 1.00 means that 2,700 PPM (0.27%) of the manufactured parts are out of tolerance, while Cpk = 1.33 means that 63 PPM (0.0063%) are rejects.

Note that the PPM index can be drastically improved by a relatively small improvement in the capability index (see Fig. 12).

Six Sigma

Six Sigma is a philosophy and a mindset for quality improvement used by companies and organizations. The method focuses on minimizing waste by minimizing variations in processes.

In concrete terms, Six Sigma means that the company's processes maintain six standard deviations from the mean value of the process to the nearest tolerance limit. It follows from this that Cpk = 2.0 gives 6 sigma, while, for example, Cpk = 1.33 gives 4 sigma (see Fig. 13).

One standard deviation

This is a statistical function used to calculate the normal distribution curve, for example. The procedure is that you measure the distance from the mean value (highest point on the hump) to the point where the curve changes direction and starts to swing outward. This distance constitutes one standard deviation (see Fig. 14). This means that you do not need to measure hundreds of components to find out how much the machine or process is varying. Instead you can calculate the spread using the standard deviation (see below).
Six standard deviations

To calculate the normal distribution spread, you simply multiply the standard deviation by 6 to get the total width of the normal distribution curve. If you had gone on making measurements you could have plotted the curve, but now you have calculated it instead (see Fig. 15).

The normal distribution curve is thus derived from one standard deviation and consists of six of them. These six standard deviations account for 99.73% of the actual result, which also means that 0.27% of the outcome is not included in the normal distribution curve.
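The 99.73% figure can be verified numerically; the check below simply evaluates the standard normal distribution between −3 and +3 standard deviations.

from scipy.stats import norm

# Share of a normal distribution inside the six-standard-deviation width described above
inside = norm.cdf(3) - norm.cdf(-3)
print(f"Inside the curve: {inside:.4%}")       # 99.7300%
print(f"Outside the curve: {1 - inside:.4%}")  # 0.2700%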

Control limits

Control limits are an important aspect of statistical process control. They have nothing to do with tolerance limits; instead, they are designed to call your attention to changes in the behavior of the process.

An important principle is that control limits are used along with the mean value on the control graph to control the process, unlike tolerance limits, which are used along with individual measurements to determine whether a given part meets specifications or not.

The function of control limits is to center the process on the target value, which is usually the same as the middle of the tolerance width, and to show where the limit of a stable process lies. This means, in principle, that you have no reason to react until the control chart signals certain behavior.

A commonly used control graph is the XR graph, where the position and spread of the process are monitored with the help of sub-groups and control limits.

If a point falls outside a control limit on the X graph, the position of the process has changed (see Fig. 16).

If a point falls outside a control limit on the R graph, the spread of the process has changed (see Fig. 17).

How are control limits determined?

The correct way is to let the control limits adapt to the process. That way, a smaller spread in the process gives a narrower control zone, while a greater spread gives a wider control zone (see Fig. 18).

It is a widespread myth that this will cause the operator to adjust the process more often, but in practice the reverse is true; the process is adjusted less often compared to operation without SPC. If you let the control limits follow the process, you will react neither too early nor too late when the behavior of the process changes.
Other ways of determining control limits

In some cases, there may be difficulties about letting the control limits adapt to the process. One such case is where the process uses tools that are not easily adjustable, such as fixed reamers or punches.

Since such tools often produce very little variation in the process and therefore allow a narrow control zone without the possibility of adjusting the tool, it may be better to cut the control limits loose from the process and lock them to a given distance from the tolerance limits instead (see Fig. 19).
Sub group

A sub group consists of a number of individual measurements (normally three, four or five) made in sequence from a process. The mean value of these sub-groups, like the process itself, follows a normal distribution curve (see Fig. 20).

Mean values show much less variation compared to individual measurements. This fact, combined with control limits which follow the process in a control graph, means that the machine operator no longer over-reacts compared to when adjustment of the process is based on individual measurements. The operator will therefore, as a rule, both measure and adjust the process less often, while at the same time quality will be better.
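As a concrete illustration of how control limits can follow the process, the sketch below computes X-bar and R chart limits from subgroups of five consecutive measurements. The A2, D3 and D4 constants come from standard control-chart tables (they are not given in this text), and the measurement data are simulated purely for illustration.

import numpy as np

# Standard control-chart constants for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Centre lines and control limits for an Xbar-R chart from equal-size subgroups."""
    data = np.asarray(subgroups, dtype=float)
    xbars = data.mean(axis=1)                        # subgroup means (X chart points)
    ranges = data.max(axis=1) - data.min(axis=1)     # subgroup ranges (R chart points)
    xbar_bar, r_bar = xbars.mean(), ranges.mean()
    return {
        "X centre": xbar_bar,
        "X UCL": xbar_bar + A2 * r_bar,
        "X LCL": xbar_bar - A2 * r_bar,
        "R centre": r_bar,
        "R UCL": D4 * r_bar,
        "R LCL": D3 * r_bar,
    }

# Simulated measurements: 20 subgroups of 5 consecutive parts
rng = np.random.default_rng(seed=7)
subgroups = rng.normal(loc=156.0, scale=0.4, size=(20, 5))
for name, value in xbar_r_limits(subgroups).items():
    print(f"{name}: {value:.3f}")

A point outside the X limits would signal a change in the position of the process, and a point outside the R limits a change in its spread, exactly as described for Figs. 16 and 17.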

Cp, Cpk, Pp and Ppk: Know How and When
to Use Them
For many years industries have used Cp, Cpk, Pp and Ppk as statistical measures of process quality capability. Some manufacturing segments have specified minimal requirements for these parameters, even in some of their key documents, such as advanced product quality planning and ISO/TS-16949. Six Sigma, however, suggests a different evaluation of process capability by measuring against a sigma level, also known as sigma capability.

Incorporating metrics that differ from traditional ones may lead some companies to wonder about the
necessity and adaptation of these metrics. It is important to emphasize that traditional capability studies and sigma capability measures serve a similar purpose. Once the process is under statistical control and showing only common causes of variation, it is predictable. This is when it becomes worthwhile for companies to predict the current process's probability of meeting customer specifications or requirements.

Capability Studies
Traditional capability rates are calculated when a product or service feature is measured through a
quantitative continuous variable, assuming the data follows a normal probability distribution. A normal
distribution features the measurement of a mean and a standard deviation, making it possible to
estimate the probability of an incident within any data set.

The most interesting values relate to the probability of data occurring outside of customer
specifications. These are data appearing below the lower specification limit (LSL) or above the upper
specification limit (USL). A common mistake is to use capability studies to deal with categorical data, turning the data into rates or percentiles. In such cases, determining specification limits becomes
complex. For example, a billing process may generate correct or incorrect invoices. These represent
categorical variables, which by definition carry an ideal USL of 100 percent error free processing,
rendering the traditional statistical measures (Cp, Cpk, Pp and Ppk) inapplicable to categorical variables.
When working with continuous variables, the traditional statistical measures are quite useful,
especially in manufacturing. The difference between capability rates (Cp and Cpk) and performance
rates (Pp and Ppk) is the method of estimating the statistical population standard deviation. The
difference between the centralized rates (Cp and Pp) and unilateral rates (Cpk and Ppk) is the impact of
the mean decentralization over process performance estimates.
The following example details the impact that the different methods of calculating capability may have on the results of a capability study. A company manufactures a product whose acceptable dimensions, previously specified by the customer, range from 155 mm to 157 mm. The first 10 parts made each day by a machine that manufactures the product and runs during a single period were collected as samples over 28 days. Evaluation data taken from these parts was used to make an Xbar-S control chart (Figure 1).

Figure 1: Xbar-S Control Chart of Evaluation Data

This chart presents only common cause variation and as such, leads to the conclusion that the process
is predictable. Calculation of process capability presents the results in Figure 2.

Figure 2: Process Capability of Dimension

Calculating Cp
The Cp capability rate is calculated from the formula:

Cp = (USL − LSL) / (6s)

where s represents the standard deviation of the population, estimated as s = s-bar / c4, with s-bar representing the mean of the standard deviations of the rational subgroups and c4 representing a statistical coefficient of correction.

In this case, the formula considers the quantity of variation given by the standard deviation and the acceptable gap allowed by the specified limits, regardless of where the mean lies. In the example, the population's standard deviation, estimated from the mean of the standard deviations within the subgroups, is 0.413258, which generates a Cp of 0.81.

Rational Subgroups
A rational subgroup is a concept developed by Shewhart while he was defining control charts. It consists of a sample in which the differences in the data within a subgroup are minimized and the differences between subgroups are maximized. This allows a clearer identification of how the process parameters change along a time continuum. In the example above, the way the samples were collected allows each daily collection to be treated as a particular rational subgroup.

The Cpk capability rate is calculated by the formula:

Cpk = min [ (USL − mean) / (3s), (mean − LSL) / (3s) ]

considering the same criteria for estimating the standard deviation (s = s-bar / c4).

In this case, besides the quantity of variation, the process mean also affects the indicators. Because the process is not perfectly centralized, the mean is closer to one of the limits and, as a consequence, presents a higher possibility of not reaching the process capability targets. In the example above, the specification limits are 155 mm and 157 mm. The mean (155.74) is closer to the LSL than to the USL, leading to a Cpk factor (0.60) that is lower than the Cp value (0.81). This implies that conformance to the LSL is more difficult to achieve than conformance to the USL, although non-conformities exist at both ends of the histogram.

Estimating Pp

Similar to the Cp calculation, the performance rate Pp is found as follows:

Pp = (USL − LSL) / (6s)

where s is the standard deviation of all the data taken together (the overall standard deviation).


The main difference between the Pp and Cp studies is that within a rational subgroup where samples are
produced practically at the same time, the standard deviation is lower. In the Pp study, variation between
subgroups enhances the s value along the time continuum, a process which normally creates more
conservative Pp estimates. The inclusion of between-group variation in the calculation of Pp makes the
result more conservative than the estimate of Cp.
With regard to centralization, Pp and Cp measures have the same limitation, where neither considers
process centralization (mean) problems. However, it is worth mentioning that Cp and Pp estimates are
only possible when upper and lower specification limits exist. Many processes, especially in
transactional or service areas, have only one specification limit, which makes using Cp and Pp impossible
(unless the process has a physical boundary [not a specification] on the other side). In the example
above, the population’s standard deviation, taken from the standard deviation of all data from all
samples, is 0.436714 (overall), giving a Pp of 0.76, which is lower than the obtained value for Cp.

Estimating Ppk

The difference between Cp and Pp lies in the method for calculating s, and whether or not the existence of
rational subgroups is considered. Calculating Ppk presents similarities with the calculation of Cpk. The
capability rate for Ppk is calculated using the formula:

Ppk = min [ (USL − mean) / (3s), (mean − LSL) / (3s) ]

where s is the overall standard deviation of all data.

Once more it becomes clear that this estimate is able to diagnose decentralization problems, aside from
the quantity of process variation. Following the tendencies detected in Cpk, notice that the Pp value (0.76)
is higher than the Ppk value (0.56), due to the fact that the rate of discordance with the LSL is higher.
Because the calculation of the standard deviation is not related to rational subgroups, the standard
deviation is higher, resulting in a Ppk (0.56) lower than the Cpk (0.60), which reveals a more negative
performance projection.
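The contrast the article draws between the two standard-deviation estimates can be made explicit in code. The sketch below computes Cp/Cpk from the within-subgroup estimate (s-bar / c4) and Pp/Ppk from the overall standard deviation; the c4 value is the standard table constant for subgroups of 10, and the simulated data merely stand in for the article's 28 daily samples (they will not reproduce its exact figures).

import numpy as np

C4 = 0.9727   # bias-correction constant for subgroup size n = 10 (standard tables)

def capability_and_performance(subgroups, lsl, usl):
    """Cp/Cpk from the within-subgroup estimate (s-bar / c4),
    Pp/Ppk from the overall standard deviation of all data."""
    data = np.asarray(subgroups, dtype=float)
    mean = data.mean()
    sigma_within = data.std(axis=1, ddof=1).mean() / C4   # s-bar / c4
    sigma_overall = data.std(ddof=1)                      # all data pooled

    def indices(sigma):
        return (usl - lsl) / (6 * sigma), min(usl - mean, mean - lsl) / (3 * sigma)

    cp, cpk = indices(sigma_within)
    pp, ppk = indices(sigma_overall)
    return cp, cpk, pp, ppk

# Simulated stand-in for the article's data: 28 daily subgroups of 10 parts,
# specification limits 155 mm and 157 mm, with some day-to-day drift added
rng = np.random.default_rng(seed=3)
daily = rng.normal(loc=155.74, scale=0.40, size=(28, 10)) + rng.normal(scale=0.12, size=(28, 1))
cp, cpk, pp, ppk = capability_and_performance(daily, lsl=155, usl=157)
print(f"Cp={cp:.2f}  Cpk={cpk:.2f}  Pp={pp:.2f}  Ppk={ppk:.2f}")

Because the day-to-day drift inflates only the overall estimate, Pp and Ppk come out lower than Cp and Cpk, mirroring the tendency described in the article.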

Calculating Sigma Capability
In the example above, it is possible to observe the incidence of faults caused by discordance, whether to
the upper or lower specification limits. Although flaws caused by discordance to the LSL have a greater
chance of happening, problems caused by the USL will continue to occur. When calculating Cpk and Ppk,
this is not considered, because rates are always calculated based on the more critical side of the
distribution.
In order to calculate the sigma level of this process it is necessary to estimate the Z bench. This will allow
the conversion of the data distribution to a normal and standardized distribution while adding the
probabilities of failure above the USL and below the LSL. The calculation is as follows:

Above the USL: P(above USL) = P(Z > (USL − mean) / s) = P(Z > (157 − 155.74) / 0.436714) = P(Z > 2.89) ≈ 0.0020

Below the LSL: P(below LSL) = P(Z < (LSL − mean) / s) = P(Z < (155 − 155.74) / 0.436714) = P(Z < −1.69) ≈ 0.0451

Summing both kinds of flaws gives a total probability of non-conformance of about 0.047, which corresponds to Zbench ≈ 1.67 (Figure 3).

Figure 3: Distribution Z

The calculation to achieve the sigma level is represented below:

Sigma level = Zbench + 1.5 = 1.6695 + 1.5 = 3.1695
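The Zbench arithmetic above can be reproduced from the article's own summary statistics (mean 155.74, overall standard deviation 0.436714, limits 155 and 157); the sketch below uses SciPy's normal distribution and adds the conventional 1.5 shift discussed in the next paragraph.

from scipy.stats import norm

mean, sigma = 155.74, 0.436714   # figures quoted earlier in the article
lsl, usl = 155.0, 157.0

p_above_usl = norm.sf((usl - mean) / sigma)    # probability of exceeding the USL
p_below_lsl = norm.cdf((lsl - mean) / sigma)   # probability of falling below the LSL
p_total = p_above_usl + p_below_lsl

z_bench = norm.isf(p_total)        # Z value leaving p_total in one tail
sigma_level = z_bench + 1.5        # conventional long-term shift

print(f"P(out of spec) = {p_total:.4f}")                            # about 0.047
print(f"Zbench = {z_bench:.2f}, sigma level = {sigma_level:.2f}")   # about 1.67 and 3.17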


There is great controversy about the 1.5 shift that is usually added to the sigma level. When a large amount of data is collected over a long period of time, multiple sources of variability appear that are not present when the projection covers only a few weeks or months. The benefit of adding 1.5 to the sigma level is seen when assessing a database with a long historical view. Short-term performance is typically better, as many of the variables will change over time to
reflect changes in business strategy, systems enhancements, customer requirements, etc. The addition of
the 1.5 value was intentionally chosen by Motorola for this purpose and the practice is now common
throughout many sigma level studies.

Comparing the Methods


When calculating Cp and Pp, the evaluation considers only the quantity of process variation related to
the specification limit ranges. This method, besides being applicable only in processes with upper and
lower specification limits, does not provide information about process centralization. At this point, Cpk
and Ppk metrics are wider ranging because they set rates according to the most critical limit. The
difference between Cp and Pp, as well as between Cpk and Ppk, results from the method of calculating
standard deviation. Cp and Cpk consider the deviation mean within rational subgroups, while Pp and Ppk
set the deviation based on studied data. It is worth working with more conservative Pp and Ppk data in
case it is unclear if the sample criteria follow all the prerequisites necessary to create a rational
subgroup.
Cpk and Ppk rates assess process capability based on process variation and centralization. However, they consider only the more critical specification limit, unlike the sigma metric, which accounts for both. When a process has only one
specification limit, or when the incidence of flaws over one of the two specification limits is insignificant,
sigma level, Cpk and Ppk bring very similar results. When faced with a situation where both specification
limits are identified and both have a history of bringing restrictions to the product, calculating a sigma
level gives a more precise view of the risk of not achieving the quality desired by customers.
As seen in the examples above, traditional capability rates are only valid when using quantitative
variables. In cases using categorical variables, calculating a sigma level based on flaws, defective
products or flaws per opportunity is recommended.

