Abstract: This paper gives a summary of the entire DMAIC framework of Six Sigma, applying statistical software tools such as Minitab, SigmaXL and Quality Companion, and discussing the intricacies of the Define, Measure, Analyse, Improve and Control phases with examples.

I. INTRODUCTION

The paper gives a clear concept of the various phases of Six Sigma and the statistical calculations used in it. It has been derived by referring to various books and concisely put together so that Six Sigma is easy to understand. The concept of Six Sigma begins with a process model, and its implementation is called Business Process Management (BPM); the practice of Six Sigma began at Motorola in the 1980s. Six Sigma takes its name from the letter σ in the Greek alphabet, used to denote standard deviation, a statistical measurement of variation. Standard deviation can be thought of as a comparison between expected results or outcomes in a group of operations versus those that fail. Bill Smith of Motorola coined the term Six Sigma and is also called the father of Six Sigma. The quality leader responsible for the term Total Quality Management is Feigenbaum; robustness in quality management was pioneered by Dr Taguchi; the quality leader who did extensive work in Japanese industry and paved the way for the Quality Management concept is Dr Deming; Walter Shewhart is also one of the contributors to Six Sigma, and he is sometimes referred to as the father of Statistical Control. With the Six Sigma level of standard deviation we arrive at 3.4 defects per million opportunities, or 99.9997 percent; this would mean that at six sigma, an airline would only lose three pieces of luggage for every one million that it handles. The three primary elements of Six Sigma are customers, processes and employees.

Comparing Six Sigma and Lean

Essentially, Six Sigma and Lean systems have the same goal. They both seek to eliminate waste and create the most efficient system possible, but they take different approaches toward achieving this goal. In simplest terms, the main difference between Lean and Six Sigma is that they identify the root cause of waste differently.

Lean practitioners believe that waste comes from unnecessary steps in the production process that do not add value to the finished product, while Six Sigma proponents assert that waste results from variation within the process.

Of course, there is truth in both of these assessments, which is why both Lean and Six Sigma methodologies have been so successful in improving overall business performance in a variety of fields. In fact, these two disciplines have proven to be especially successful when working in tandem - hence the creation of Lean Six Sigma.

*The Author © Suresh V. Menon is a Lean Six Sigma Black Belt certified and accredited by IASSC, has 23 years of experience in the IT industry, is currently pursuing a PhD in Computer Science, and is also a Six Sigma trainer and consultant.

Maturity Models – Process Based

A wide variety of process-based maturity models have been available for some time. Some are focused specifically on departments such as information technology, while others include more of the broader aspects of the organization. Regardless, most of these models have their roots in the area of software development. Common process-based models include:

ITIL (Information Technology Infrastructure Library)
ISO/IEC 15504 (Software Process Improvement and Capability Determination)
CMMI (Capability Maturity Model Integration)

Most of the above models are level based. For example the CMMI model, which has grown significantly in popularity, is structured into five levels:

Initial (Ad Hoc) – Level 0: Person Dependent Practices
Managed (Repeatable) – Level 1: Documented Process
Defined (Defined and Measured) – Level 2: Partial Deployment
Quantitatively Managed (Managed) – Level 3: Full Deployment
Optimizing (Self-Optimizing) – Level 4: Measured and Automated
Level 5: Continuously Improving

Six Sigma Encourages Improvement within the CMMI Framework

Six Sigma success is not equivalent to compliance with standards and models, and vice versa. Industry models and standards frequently demand measurements, monitoring, and control. Frequently used standards include CMMI models, ISO, IEEE standards, and ISO 12207. Six Sigma can be used to achieve compliance with aspects of each of these standards. However, interpreting Six Sigma usage as achievement of model compliance, and likewise assuming Six Sigma when compliance is found, is a mistake. Six Sigma is not limited to use in high maturity organizations. In organizations that primarily use CMMI, many people associate Six Sigma with the high maturity process areas. However, there is a direct connection between Six Sigma and the generic practices, which are used for process areas at all maturity levels. Six Sigma enables a tactical approach to the implementation of the generic practices, and therefore much of the intent of the high-maturity process areas is implemented at lower maturity or within the continuous representation. This drastically accelerates the cycle time required for the final steps to high maturity by putting the building blocks for the high-maturity process areas in place.

Six Sigma is not a competitor to CMMI or other process models and standards. There are many domain-specific models and standards. Six Sigma is not domain specific and can be a governance model or a tactical improvement engine. It can provide the problem definition and statement of benefit against which a decision about adopting a technology can be made. It can help solve specific problems and improve specific products or processes within the larger context of overall organizational process improvement. Or, in more general terms, it can serve as an enabler for the successful implementation of domain-specific improvement models.

Integrate CMMI, Six Sigma, and all other improvement initiatives to provide a standard for the execution of every project throughout its life cycle. While the previous three approaches are tactical (i.e., they provide a course of action), this is an approach for setting an organization's strategy. It is longer term and more visionary. It promotes the idea that an organization should take control of its destiny and manage its initiatives rather than be managed by them. Six Sigma methods can be leveraged to design the organization's standard processes, but the focus here is embedding Six Sigma alongside other initiatives in the organizational business and engineering processes. This approach can be executed at any maturity level, with any maturity level as the end goal. When possible, it's best to start while at low maturity.

CMMI is used to create an organizational process infrastructure by addressing particular domains, such as software and systems engineering. Six Sigma is a top-down initiative that cuts across the entire enterprise, including areas such as engineering, sales, marketing, and research. Six Sigma is intended to be implemented with a focus on problems and opportunities, often with narrow scopes, that will yield significant business benefits. It focuses on the performance of processes and practices as implemented rather than checking for compliance against a definition or model. While these two improvement initiatives are different by design, they are interdependent in their use. In practice, a back and forth focus is often effective. For instance, Six Sigma could be used to discover that processes need to be more repeatable, CMMI could be used to institute processes based on community best practice, and then Six Sigma could be used to optimize those processes.

1.1.1 Challenges in Implementing Six Sigma

There are several roadblocks if you try to implement Six Sigma in an organization. There will be resistance stating that Six Sigma is too statistical, and non-cooperation from employees, as they do not want to come out of the comfort zone of their work and are afraid that their performance will be measured during the rigorous process of implementing the Six Sigma framework, DMAIC (Define-Measure-Analyse-Improve-Control). Because Lean Six Sigma is all about reducing waste, redundant processes and variation in the process, people from all departments, such as Production, Finance and Planning, have to cooperate in improving the process, which they are likely to resist, as people in the organization feel that they might lose their jobs during the implementation activity. Typically, in an organization, about 70% of people will have a neutral attitude towards the implementation of Six Sigma, 15% will be resistant to the Six Sigma initiative, and the remaining 15% will be supportive of the implementation initiative.

There are some areas where the Six Sigma consultant finds it difficult to formulate the problem statement: resistance to implementing the concepts at all levels of management, leading to political intelligence coming into play, and finding and freezing the right team members to implement and spearhead the selected projects.

Resistance from employees or employers will also arise when they disagree with the benefits and return on investment of a Six Sigma project, criticizing the statistical tools as cumbersome to learn and questioning whether the employees are trainable on these tools; sometimes employees will delay the implementation to show that Six Sigma is just a fad and that there are no benefits to implementing it.

There is also a misunderstanding that Six Sigma is meant for large manufacturing companies and not small and medium companies, so employers of small and medium companies resist the implementation of Six Sigma.

There can be resistance if the management is not qualified to understand the various business intricacies of their company.

Stakeholders (people who are investing money in the Six Sigma project) can also resist the change, stating that Six Sigma is very complex and questioning whether the company has the resources to understand the DMAIC framework.

Another kind of resistance will come from workers in the factory, because they do not understand the value of implementing Six Sigma and do not want interference in their day-to-day work.

The degree of resistance to implementing Six Sigma can vary from organization to organization, depending on the environment, the domain of the organization, and the people working there.

1.1.2 How to overcome the challenges in implementing Six Sigma
These are some strategies, as listed below, to overcome the challenges while implementing Six Sigma:

1. Convince the management of the return on investment based on operating costs and productivity efficiency; also demonstrate the Minitab software and explain its simplicity by displaying the various graphs in the software, so that they do not avoid statistics.
2. Convince the employees that nobody will be out of a job, and that their participation will earn rewards in the form of financial bonuses from top management.
3. Select the best qualified team in the organization, mentor them, and train and certify them as Black Belts or Green Belts.
4. Form a very good standard operating procedure for imparting training to the employees, and prepare training manuals that are easily understood by the staff, with day-to-day life examples.
5. In line with the respective organizational process assets, the Six Sigma consultant has to bring about a cultural and smooth transition in the organization by getting buy-in from top management, and by further convincing the staff of the importance of change management for the better and for future prospects, not short-term benefits.

Six Sigma Deployment & Planning Readiness Assessment

Cultural Assessment:
How do senior leaders cascade information?
How are important decisions made?
Who makes the decisions?
How fast are the decisions made?
How are success and failure recognized?
How does the organization handle failures?
Does everyone understand the mission, vision and strategy?
Is everyone aware of critical customer, revenue, and expense issues?
How are corporate goals established?
How clear are the goals?
Are the goals measurable?

Operations Assessment:
How is success measured?
Are the right things measured?
How often are things measured?
Are there a few metrics that all employees understand and use?
Are decisions based on assumptions or data?
Who owns each critical process?
Who owns the data?
Are the data on spreadsheets, laptops, or in a data warehouse?
Have the data been validated?
Are reports written in simple terms and free from fuzzy language?
Are updated process maps available for most of the critical processes?
Do executives know what a process map is?

The Five Processes of PMP (Initiation, Planning, Execution, Monitor & Control, Closing Phase)

The relationship of the project stages to the life cycle phases:

Stage: Starting the Project
Outputs: 1. Charter; 2. Documentation of need; 3. Clear goals; 4. Quality requirements; 5. Scope; 6. Initial resource requirements; 7. Cost-benefit analysis; 8. Approval to proceed to next stage
Life cycle phase: Initiating process

Stage: Organizing and Preparing
Outputs: 1. Communication plan; 2. Quality plan; 3. WBS; 4. Resource plan; 5. Project plan; 6. Risk management plan; 7. Approval to proceed to next stage
Life cycle phase: Planning process

Stage: Carrying out the Work
Outputs: 1. Project results; 2. Progress reports; 3. Communications; 4. Develop tracking systems; 5. Team management/development; 6. Approval to proceed to next stage
Life cycle phases: Execution process; Controlling and monitoring process

Stage: Closing out the Work
Outputs: 1. Demonstrating results; 2. External and internal approval for the project and results; 3. Lessons learned; 4. Transitioning team members to new assignments; 5. Closing project accounts; 6. Post-mortem and lessons learned; 7. Recognizing the team; 8. Adjourning the team
Life cycle phase: Closing process

Project Planning

The planning process group has more processes than any other process group; as a result, a lot of time and effort goes into the planning process group of any company. On some projects you might spend almost as much time planning as executing; in a way, that is a good sign, as the project is properly planned.

The Define Activities process is the further breakdown of the work package elements of the WBS.

Tools and techniques for defining activities:
Decomposition
Rolling wave planning
Templates
Expert judgement

Precedence Diagramming Method (PDM)

The PDM method is what most project management software programs use to sequence activities.
Precedence diagrams use boxes or rectangles to represent the activities, called nodes; the nodes are connected with arrows showing the dependencies. This method is called AON, activity on node. A diagram is given below.

The four dependencies are as follows:

Finish to Start (FS): The finish-to-start relationship is the most frequently used relationship. It says that the predecessor (or "from") activity must finish before the successor (or "to") activity can start. In PDM diagrams this is the most often used logical relationship.

Start to Finish (SF): The start-to-finish relationship says that the predecessor activity must start before the successor activity can finish. This logical relationship is seldom used.

Finish to Finish (FF): The finish-to-finish relationship says that the predecessor activity must finish before the successor activity finishes.

Start to Start (SS): The start-to-start relationship says that the predecessor activity must start before the successor activity can start.

Critical Path Method (CPM) is a schedule network analysis technique. It determines the amount of float, or schedule flexibility, for each network path by calculating the earliest start date, earliest finish date, latest start date and latest finish date for each activity. The critical path is generally the longest full path on the project; any project activity with a float time of zero is considered a critical path task.

Calculating Expected Value: Say, for example, for a particular project activity you are given 38 days as the optimistic time, 57 days as the pessimistic time, and 45 days as the most likely time. The formula for calculating the expected value is (optimistic + pessimistic + (4 × most likely)) / 6.

The expected value for the sample project activity is therefore:

(38 + 57 + (4 × 45)) / 6 = 45.83

The formula for the standard deviation, which helps to determine the confidence level, is (pessimistic − optimistic) / 6.

Every project requires some measure of progress, and in many cases multiple forms of measurement must be used to gauge the overall health of the project. Project measures might be used to answer basic questions like:

Is the project on track to finish on time?
Is the project expending funds within budget?

Let us consider some of the common forms of measures used on most projects at the task (work package) level:

Planned Value (PV). This is the approved budget for work scheduled for completion by a specific date. It is also known as the budgeted cost of work scheduled (BCWS).

Earned Value (EV). This is the approved budget for the work actually completed by a specific date. It is also known as the budgeted cost of work performed (BCWP).

Actual Value (AV). This is the actual cost incurred for work completed by a specific date. It is also known as the actual cost of work performed (ACWP).

Schedule Variance (SV). The schedule variance is the difference between the budgeted amount for the work performed and the budgeted amount for the work planned. It is calculated as SV = EV − PV. If SV is positive, the project is ahead of schedule; if SV is zero, the project is on schedule; and if SV is negative, the project is behind schedule and corrective action must be taken.

Cost Performance Index (CPI). This is a dimensionless index used to measure the project's cost efficiency. It is calculated as CPI = EV/AC = BCWP/ACWP. If the CPI is greater than 1, the project is under budget; if the CPI is equal to 1, the project is on budget; and if the CPI is less than 1, the project is over budget and corrective action must be taken.

Schedule Performance Index (SPI). This is a dimensionless index used to measure the project's schedule efficiency: SPI = EV/PV = BCWP/BCWS. If SPI is greater than 1, the project is ahead of schedule; if the SPI is equal to 1, the project is on schedule; and if the SPI is less than 1, the project is behind schedule and corrective action should be taken.
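The three-point estimate and the earned-value indices described above can be sketched in a few lines of code. This is an illustrative sketch; the function names are not from any particular library, and the PV/EV/AC figures in the usage example are hypothetical.

```python
# Sketch of the formulas in this section: the three-point (PERT) duration
# estimate and the earned-value indices SV, CPI and SPI.

def expected_duration(optimistic, pessimistic, most_likely):
    """Three-point estimate: (O + P + 4*ML) / 6."""
    return (optimistic + pessimistic + 4 * most_likely) / 6

def duration_std_dev(optimistic, pessimistic):
    """Standard deviation used for confidence levels: (P - O) / 6."""
    return (pessimistic - optimistic) / 6

def earned_value_indices(pv, ev, ac):
    """Return schedule variance, CPI and SPI for a work package."""
    sv = ev - pv    # > 0: ahead of schedule, < 0: behind schedule
    cpi = ev / ac   # > 1: under budget,     < 1: over budget
    spi = ev / pv   # > 1: ahead of schedule, < 1: behind schedule
    return sv, cpi, spi

# Worked example from the text: O = 38, P = 57, ML = 45 days.
print(round(expected_duration(38, 57, 45), 2))  # 45.83
print(round(duration_std_dev(38, 57), 2))       # 3.17

# Hypothetical work package: PV = 1000, EV = 900, AC = 1200.
sv, cpi, spi = earned_value_indices(1000, 900, 1200)
print(sv, round(cpi, 2), round(spi, 2))         # -100 0.75 0.9
```

Here SV and SPI both signal a project behind schedule, and CPI below 1 signals a cost overrun, matching the interpretation rules given above.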
…the project per week and estimated project duration. Therefore we can budget labour dollars for the project, and in instances where doubt exists it may be possible to establish a project reserve budget. Such budgets may be set as a percentage (for example, 10%, 15%, or 20%) of the actual dollar budget. These budgets are typically set aside as contingency dollars to help mitigate risk.

Supply/Demand Management

Two important aspects of supply and demand management are the ability to keep the pipeline full and the development of trained resources to sustain the Lean Six Sigma deployment initiative.

1.1.3 Basic Questions to be asked before Implementing Six Sigma

Six Sigma has been successfully applied in banking, healthcare, the military, fast food chains, airlines, hotels and retail stores; if there is a repetitive process with a problem, you can apply Six Sigma.

1.1.4 Below is a basic questionnaire which can be used by a Six Sigma consultant:

1. List four problems your business is facing right now.
2. Now think about the day-to-day operations of your organization; list four of the major repetitive processes.
3. Next, think about the defects affecting your products, services or outputs; what four defects do you see on a regular basis? List them out.
4. Finally, what variations do you see in your business processes? List four variations; these might include the differences in the way you do things from day to day, or in your outputs.

Dr Taguchi's Loss Equation: Dr Taguchi defined quality as "loss imparted by the product to society from the time the product is shipped". This loss includes not only the loss to the company but also costs to the customer through poor product performance and reliability, leading to further loss to the manufacturer. According to Dr Taguchi, the mathematical equation for the quality loss function is given by L = k(Y − T)^2, where L is loss, k is a constant, and Y − T is the distance of the characteristic from the target; thus the increase in loss is proportional to the square of the distance from the target. Taguchi's contribution later became the foundation for Design for Six Sigma (DFSS).

II. DEFINE PHASE – SOME TOPICS

2.1 The objectives within the Define Phase, which is the first phase in the DMAIC framework of Six Sigma, are:

Define the project charter
Define scope, objectives, and schedule
Define the process (top level) and its stakeholders
Select team members
Obtain authorization from the sponsor
Assemble and train the team

Project charters: the charter documents the why, how, who and when of a project, and includes the following elements:

Problem statement
Project objective or purpose, including the business need addressed
Scope
Deliverables
Sponsor and stakeholder groups
Team members
Project schedule (using GANTT or PERT as an attachment)
Other resources required

2.2 Work Breakdown Structure

It is a process for defining the final and intermediate products of a project and their relationships. Defining project tasks is typically complex and is accomplished by a series of decompositions followed by a series of aggregations; this is also called a top-down approach and can be used in the Define phase of the Six Sigma framework.

2.3 The second is Pareto Analysis, which is a vertical bar graph showing problems in prioritized order, so that it can be determined which problems or opportunities should be tackled first. In the Define phase the Pareto diagram is used in conjunction with the WBS to quantify the opportunities inherent in each of the tiny elements resulting from the WBS.

2.3.1 Deliverables

Six Sigma projects focus on one or more of the following three deliverables: cost, quality and schedule. Factors Critical to Cost (CTC) include parameters that impact work in progress, finished goods inventory, overhead, delivery, material and labour, even when the costs can be passed on to the customer. Critical to Quality (CTQ) factors are perhaps most familiar to operational personnel, since they directly impact the functional requirements specified by the internal and external customers. Critical to Schedule (CTS) factors impact the delivery time of the product or service.

2.3.2 Throughput Yield

For a process, throughput is a measure of the productivity of the process in producing error-free product or service. For a product, throughput is a measure of the quality of the entire product as a function of the quality of its various features. The throughput yield is calculated based on the defects per unit (DPU) for each process step:

DPU = #errors / #items
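The DPU definition above can be turned into a short calculation. Note that this excerpt stops at the DPU formula itself; the conversion from DPU to throughput yield via Ytp = e^(−DPU) is the conventional Six Sigma formula and is an assumption here, as are the defect counts in the example.

```python
import math

# DPU and throughput yield for one process step.
# The text defines DPU = #errors / #items; the standard Six Sigma
# relation Ytp = e^(-DPU) is assumed for the yield step.

def dpu(errors, items):
    """Defects per unit for a process step."""
    return errors / items

def throughput_yield(errors, items):
    """Probability of a unit passing the step defect-free (assumed e^-DPU)."""
    return math.exp(-dpu(errors, items))

# Hypothetical step: 30 defects observed across 1000 units.
print(round(dpu(30, 1000), 3))               # 0.03
print(round(throughput_yield(30, 1000), 4))  # 0.9704
```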
Quality Function Deployment

Quality function deployment (QFD) is a customer-driven process for planning innovation in products or services. It starts with the voice of the customer, which becomes the basis of the requirements. QFD matrices are sometimes referred to as the "House of Quality". In its simplest form, QFD involves a matrix that represents customer requirements as rows and product or service features as columns. The cell where a row and column intersect shows the correlation between an individual customer requirement and a product or service requirement. The matrix is sometimes called the "requirement matrix".

Delay time – time spent waiting for an operator or another process to finish, or for parts or raw material availability; and

Machine time – time spent by a machine working on the part.

In an ideal production process, the cycle time must be as close to the takt time as possible.

Lead time is the time between customer order and delivery of the product. For example, in delivering a McDonald's burger to the customer, the company takes 30 minutes, so 30 minutes is the lead time.

Reducing lead time should be the target, but to do so you should reduce cycle time, setup time, and wait time; lead time will automatically be impacted.
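The cycle-time-versus-takt comparison above can be sketched numerically. This excerpt uses takt time without defining it, so the standard lean definition (takt = available production time / customer demand) is assumed here, and the line figures are hypothetical.

```python
# Takt time vs. cycle time check, as a rough sketch.
# Assumes the standard lean definition:
#   takt = available production time / customer demand.

def takt_time(available_minutes, daily_demand):
    """Minutes available per unit if demand is to be met exactly."""
    return available_minutes / daily_demand

# Hypothetical line: 480 minutes available per day, demand of 20 units/day.
takt = takt_time(480, 20)
print(takt)  # 24.0 minutes per unit

cycle_time = 2  # minutes per part, hypothetical
# Ideally cycle time should be as close to takt as possible; a cycle time
# far below takt indicates spare capacity (or over-production risk).
print(cycle_time <= takt)  # True
```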
Calculating Batch Size

Number of machines – 10
Cycle time per part – 2 minutes
Average daily demand – 20
Work hours – 480 minutes
Average changeover time – 60 minutes (once at the end)

Solution

Step 1 – Production time needed

Production time needed = no. of machines × cycle time per part × average daily demand
= 10 × 2 × 20 = 400 minutes

Time available for setup = available production time − production time needed
= 480 − 400 = 80 minutes

Step 2 – Calculate EPEI

EPEI = number of setups possible in the 80 minutes of setup time
= (total setup time available) / (average setup time per changeover)
= 80/60 = 1.33

Number of machines = 10; if there are 10 machines, there would be 10 changeovers. EPEI is the time required to cycle all the parts. Working at 1.33 changeovers per day, it would take 7.5 days to complete all changeovers. That is, EPEI = 7.5 days.

Batch size = EPEI × daily demand. From the calculation, batch size = 7.5 × 20 = 150.

Interpretation: The daily demand is 20, but the batch size is 150. That is over-production.

Crew size is the number of people needed to man the production line.

Example: A product works on a takt of 100 minutes and has a production cycle time of 400 minutes. What is the crew size?

Solution: Crew size = 400/100 = 4. Four staff are needed on the line to ensure the takt target is met.

III. COMMON FINANCIAL MEASURES

Revenue growth is the projected increase in income that will result from the project, calculated as the increase in gross income minus the cost. Revenue growth may be stated in dollars per year or as a percentage per year.

Conceptually, margin refers to the difference between income and cost; however, there are multiple financial concepts of margin, which are used to measure the efficiency and effectiveness of an organization's financial capabilities and finesse.

Percent profit margin: This metric provides an overall sense of how efficiently an organization converts sales into profits. It is computed as:

Percent profit margin = ((Sales of a product or service − Cost of a product or service) / Sales of a product or service) × 100
= (Net profit / Net sales) × 100

Percent gross margin: This metric measures the efficiency of producing sales. It is computed as:

Percent gross margin = (Gross profit / Net sales) × 100

Percent operating margin: This metric measures how effectively an organization controls costs not related to the production and sales of a product or service. It is computed as:

Percent operating margin = (Operating profit / Net sales) × 100

Gross profit = Net sales − Cost of goods sold
Cost of goods sold = Cost of raw goods + Cost of labour + Cost of inventory
Net profit = Net income = Net earnings = Bottom line = Total revenue − Total expenses
Operating profit = Operating income = Earnings = Gross profit − Operating expenses

3.1 Cost Benefit Analysis

Top management must approve funding for projects that require significant financial resources; therefore, before a proposed project is initiated, a cost-benefit analysis using projected revenues, costs and net cash flows is performed to determine the project's financial feasibility. The analysis is based on estimates of the project's benefits and costs and the timing thereof. The results of the analysis are a key factor in deciding which projects will be funded. Upon project completion, accounting data are analysed to verify the project's financial impact on the organization. In other words, management must confirm whether the project added to or detracted from the company's financial well-being.

A cost-benefit analysis attempts to evaluate the benefits and costs associated with a project or initiative. Two types of benefits can be evaluated: direct and indirect. Direct benefits are easily measured and include issues such as increased production, higher quality, increased sales, reduced delivery costs and lower warranty costs. Indirect benefits are more difficult to measure and include such items as improved quality of work life, increased internal customer satisfaction and better trained employees.

The simple formula for computing the benefit-cost ratio is:

(∑ NPV of all benefits anticipated) / (∑ NPV of all costs anticipated)
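The discounting formula P = F(1 + i)^−n and the benefit-cost ratio from this section can be combined into a small sketch. The cash-flow figures below are hypothetical, used only to illustrate the arithmetic.

```python
# NPV discounting and benefit-cost ratio, per the formulas in the text:
#   P = F * (1 + i) ** -n   (annual compounding)
#   BCR = sum(NPV of benefits) / sum(NPV of costs)

def present_value(future_amount, annual_rate, years):
    """Discount a future amount F received n years from now."""
    return future_amount * (1 + annual_rate) ** -years

def benefit_cost_ratio(benefits, costs, annual_rate):
    """benefits/costs: lists of (amount, years_from_now) pairs."""
    npv_benefits = sum(present_value(f, annual_rate, n) for f, n in benefits)
    npv_costs = sum(present_value(f, annual_rate, n) for f, n in costs)
    return npv_benefits / npv_costs

# Hypothetical project: 50,000 in benefits after 1 and 2 years,
# 80,000 cost up front, discounted at 10% per year.
ratio = benefit_cost_ratio([(50_000, 1), (50_000, 2)], [(80_000, 0)], 0.10)
print(round(ratio, 2))  # 1.08
```

A ratio above 1 indicates the discounted benefits exceed the discounted costs, which is the usual funding threshold in the analysis described above.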
Note: NPV stands for Net Present Value, where NPV is calculated using the formula P = F(1 + i)^−n, where P = net present value, F = the amount to be received n years from now (also called the future value), and i = the annual interest rate expressed as a decimal. The formula assumes that the money is compounded annually, and n stands for the number of years.

3.1.1 Strategizing

SWOT

A SWOT analysis is an effective strategic planning tool applicable to an organization or project objective. SWOT stands for: strengths, weaknesses, opportunities, and threats. It is attributed to Albert Humphrey of Stanford University during the 1950s and 1960s.

Strengths and weaknesses are identified with respect to the internal capabilities of an organization. They provide an introspective view. Strengths and weaknesses should not be considered opposites. This will become apparent in the example below.

Strengths address what the organization does well. This might be: research and development, production, marketing, acquisitions, partnering, and so on. Strengths need to be assessed fairly and objectively to bring credibility to the SWOT analysis.

Weaknesses address the organization in terms of skills, capabilities, and competencies. For example, an organization continually fails to deliver its products to its customers on time. This might be due to a lack of skill, incapable processes, or poor training of employees. Note: a weakness should never be improved at the expense of a strength!

On the other hand, opportunities and threats provide the organization with an external view. Quite simply, we are trying to identify opportunities for the organization to grow to new levels, and threats to the organization that will impede its progress. Threats are often barriers to an organization's growth and can represent a competitive disadvantage. When analysing a SWOT, the question to ask is, "How can the organization leverage its strengths and improve its weaknesses to take advantage of the opportunities while mitigating or eliminating the threats?"

PEST

An analysis similar to SWOT is the PEST analysis, which brings together four environmental scanning perspectives that serve as useful input into an organization's strategic planning process:

• Political. This perspective looks at political stability, threats of wars, regulatory issues, tax considerations, labour problems, laws, and so forth. For example, as this book is being written, we have:

– Political stability. Mid-term congressional elections have shifted the dominating party from Democratic to Republican in the U.S. House of Representatives.

– Threat of wars. The war in Afghanistan continues; arguably, the war in Iraq is over; an unstable ruler of North Korea recently died and leadership was assumed by his hardliner son; Iran is clamouring for its own nuclear weapons and attempting to threaten the world's oil supply; and terrorist bombings are rampant.

– Regulatory issues. The recent financial meltdown has resulted in strengthening the power and authority of regulatory agencies.

– Tax considerations. In an attempt to stimulate the economy, the federal government has proposed special depreciation considerations designed to motivate organizations to purchase capital equipment.

– Labour problems. Eight and a half percent unemployment continues to strain our government welfare systems, and public pension legacy issues continue to mount.

– Laws. Broad-sweeping healthcare and financial reform laws were passed that will have significant impact on both organizations and individuals.

• Economic. This perspective looks at currency exchange rates, market conditions, unemployment rates, inflation factors, interest rates, and so forth. Current examples include: the U.S. dollar against other major international currencies, rates of unemployment in the United States and in other major countries of the world, almost zero percent inflation, low GDP growth rates in the developed world, outsourcing to low-cost nations such as India, China, and those of Southeast Asia, and, in some cases, negative interest rates on bonds.

• Social. This perspective looks at education considerations, demographics, health factors, cultural implications, and so forth. Current examples include: the widening educational gap between the United States and other first-world countries, and shifting demographics such that "baby boomers" are now the largest group, and consequently place the biggest strain on the U.S. healthcare system.

• Technological. This perspective looks at new technology developments, the rate of technological change, the cost impact of technology, and so forth.

Table 1.1 Comparing DMAIC and DFSS methodologies.

DMAIC: Define, Measure, Analyse, Improve, Control
DFSS – DMADV: Define, Measure, Analyse, Design, Verify**
DFSS – DMADOV: Define, Measure, Analyse, Design, Optimize, Verify**
DFSS – DMEDI: Define, Measure, Explore*, Develop, Implement
DFSS – IDOV: Identify, Design, Optimize, Validate***

Aligning Training to Strategy: If training is to be positioned as supporting the needs of the organization, senior leaders must be involved with and stand behind the training plans. Technically, training plans are action plans that support the strategic plans of the organization; if the plans do not have strategic outcomes, then the organization should not be supporting the training. So the following sequence should be followed: 1] Strategic plan → Human Resource strategic plan → Training plan → Training needs analysis.

Training Plans Are Not a Panacea: Often poor performance is attributed to a skill or knowledge deficiency that can be corrected by training. Nonetheless, many times the root cause of poor performance is not skill related at all. However, training cannot remedy a situation caused by poor performance or other root causes that are not skill
related. For example, to expect training can fix Control
operating systems problems is unrealistic. Consider
the following points below where training would not DMADV is a well-recognized Design for Six Sigma (DFSS)
be beneficial. methodology and an acronym for define, measure, analyse,
design, and verify. Note that the ASQ Black Belt Body of
Increased defects in a manufacturing environment Knowledge replaces ―verify‖ with ―validate.‖ The difference
caused by outdated or inadequate equipment 2] is likely because, although different, ―verify‖ and ―validate‖
unsatisfactory help desk customer support resulting are often used synonymously.
from poor morale and compensation issues 3]
Mistakes caused by employees working overtime on • Design. Quite simply, this means carrying out the
a sustained basis. process of designing a product or process. Many
organizations have well-established policies and
IV. DFSS procedures for their respective design processes. One
valuable Lean Six Sigma technique that supports the
While DMAIC may be traditionally viewed as the foundation design process is quality function deployment (QFD),
for Lean Six Sigma, its application is primarily limited to the details of which were discussed in Chapter 15.
improving existing processes; it does little to address the Additional tools useful in this phase include pilot runs,
design of new products or processes. This concept is simulations, prototypes, and models.
illustrated in the flowchart given in Figure 1.0
• Verify. This phase is directed at ensuring that the
Fortunately, several additional structured design output meets the design requirements and
methodologies exist. Several of the common specifications and is performed on the final product or
methodologies are depicted in Figure 1.0. Each of process. Basically, verification speaks to the design
these methodologies has its usefulness as long as the meeting customer requirements and ensures that the
nuances of each are fully understood. design yields the correct product or process. By
Table 1.1 has been created to provide a loose contrast, validation speaks to the effectiveness of the
comparison between the DMAIC and DFSS design process itself and is intended to ensure that it is
methodologies. These comparisons are not exact, but capable of meeting the requirements of the final
do provide a sense of how the methodologies align. product or process.
This table may be beneficial for readers who may be
required to use multiple methodologies or find Both verification and validation are necessary functions in
themselves in need of selecting one suitable for a any design process. As such, this suggests that DMADV
particular project. might be more appropriately named DMADVV.
DFSS
V. SIX SIGMA (SARBANES OXLEY ACT-2002)
DMADV The Sarbanes–Oxley Act of 2002 changed many of the rules
under which corporations and executives were required to
DMADOV provide information to investors and to others. In many
respects, the Act defined a customer service approach that
corporations would be required to adopt (in addition to
DMEDI
attempting to prevent a repeat of many abuses among
corporate executives, accounting firms, and analysts that came
IDOV
to light in 2001 and 2002). For example, the new law
Figure 1.0 The DFSS family of methodologies. ―requires steps to enhance the direct responsibility of senior
Hypothesis Testing | SIPOC
Meeting Minutes | Tollgate review
Multi vari Studies | Tree diagram
Multiple regression | Waste analysis

6.4 Tools used in Improve Phase

Activity network Diagrams | PERT
Anova | Process Capability Analysis
Brainstorming | Project Management
Control Charts | Project tracking
Design of Experiments (DOE) | Risk Analysis
FMEA | Taguchi designs
Process Mapping (to be - Future) | Tollgate review
Gantt charts | Value Stream maps
Hypothesis Testing | Work Breakdown Structure (WBS)
Measurement System Analysis (MSA) | Y = f(x)
Meeting Minutes |
Pareto charts |

6.5 Tools used in Control Phase

5S | Pre control
Basic Statistics | Project management
Communication Plan | Project tracking
Control Charts | Run Charts
Control Plan | Six Sigma Story board
Data Collection | Standard Operating Procedures (SOP)
Data Collection plan | Statistical Process Control (SPC)
Kaizen | Tollgate review
Kanban | Total Productive Maintenance
Lessons Learned | Training Plan deployment
MSA | Work instructions
Meeting Minutes |
Mistake proofing / Poka Yoke |

VII. MEASURE PHASE SOME TOPICS

As we have already discussed the DMAIC (Define-Measure-Analyse-Improve-Control) phases of Six Sigma in my previous articles, in this paper we will discuss the formulas and the deliverables of the Measure phase of Six Sigma.

Before proceeding, a brief recap of the Measure phase. Every organization measures various performance parameters during the course of the day to assess whether their plans are working and achieving the desired results. If there are no measurements, it will not be possible to know whether the organization is meeting customer requirements and making profits. At the same time, it is not economical to measure every activity and control all the parameters. When there is a specific problem in the organization to be eliminated, the first step is to measure the extent of the problem. Measurement of key performance parameters of any process is a key to finding solutions and eliminating the problem in the process. During the Measure phase, the performance of the current process is measured to identify the problem area.

We ask the following questions in the Measure phase:
What needs to be measured?
How to measure it?
What is the customer requirement?
What is the variation between the customer requirement and the current level of performance?
Is there any additional measurement required to precisely know the problem?
What measurements are required after the process is improved?

In the Measure phase we will be faced with two types of data, that is 1] continuous data and 2] discrete data. Continuous data are data which are measured to the finer elements and expressed in units of a physical property such as length, height, diameter, temperature, and weight. Discrete data are data which cannot be measured; these are data regarding characteristics such as colour and odour, and count data such as pass/fail.

To measure data to improve the process of a part-manufacturing factory, we should also determine the sampling plan from the sampling types, such as: 1] Simple random sample, where the lot we collect is homogeneous, that is, from the same category. 2] Stratified sample, a refined method of random sampling where the lot is segregated based on certain criteria from a heterogeneous lot. 3] Systematic sample, in which samples are taken after every fixed interval of time as the machine produces the part. 4] Rational sample, done when the team finds there are large numbers of data already existing; it is possible to identify a critical area from the past records and collect samples only from that specific area.

Central Limit Theorem: The Central Limit Theorem states that for sufficiently large sample sizes, the normal distribution can be used to analyse samples drawn from populations that are not normally distributed or whose shapes are unknown. It can also be shown that the average of the sample averages equals the average of the population, and that the standard deviation of the sample averages equals the standard deviation of the population divided by the square root of the sample size.

Formulas used in Measure Phase

Some Little Statistics

Central tendency is defined as the tendency for the values of a random variable to cluster round its mean, mode, or median.

Central Tendency | Population | Sample
Mean/Average | µ = ∑Xi / N | X̄ = ∑Xi / n
Median | Middle value of the data
Mode | Most occurring value in the data

Dispersion | Population | Sample
Variance | σ² = ∑(Xi − µ)² / N | S² = ∑(Xi − X̄)² / (n − 1)
Standard Deviation | σ = √(∑(Xi − µ)² / N) | S = √(∑(Xi − X̄)² / (n − 1))
Range | Max − Min

Calculate the Process Capability – Process capability is measured by comparing the variation in the product to the upper and lower specification limits, assuming that the product mean value is exactly centred between the upper and lower specification limits.

The process capability is calculated using the formula Cp = (USL − LSL) / (6 × standard deviation), where Cp = process capability index, USL = Upper Specification Limit and LSL = Lower Specification Limit.

The process capability measure indicates one of the following:
1. Process is fully capable
2. Process could fail at any time
3. Process is not capable.

When the process spread is well within the customer specification, the process is considered fully capable; that means the Cp is more than 2. In this case, the process standard deviation is so small that 6 times the standard deviation with reference to the mean is within the customer specification.

Example: The specified limits for the diameter of car tires are 15.6 for the upper limit and 15 for the lower limit, with a process mean of 15.3 and a standard deviation of 0.09. Find Cp and Cr. What can we say about the process capabilities?

Cp = (USL − LSL) / (6 × standard deviation) = (15.6 − 15) / (6 × 0.09) = 0.6 / 0.54 = 1.111
Cr = 1 / 1.111 = 0.9

Since Cp is greater than 1 and therefore Cr is less than 1, we can conclude that the process is potentially capable.

Long term Potential Performance

The long term process performance is calculated using similar methods as Cp and Cpk, with the difference being that the standard deviation used in this case applies to long term variations: Cp = (USL − LSL) / (6 × long term standard deviation).

Calculation of Cp by the formula applied above would only say whether the process is stable; it would not say whether the process is generating defects. For that we use Cpk = (1 − k) × Cp, where k is called the k factor; it measures the level of variation of the process mean from the specified target:

k = |(USL + LSL)/2 − X̿| / ((USL − LSL)/2)

k = 0 means that the process is perfectly centred, and therefore Cpk = Cp.

Before and After Capability Analysis Using Minitab

Open Minitab -> select Assistant -> select Before/After Capability Analysis; in Baseline process data enter the before column values, and in Improved process data select the column where the improved data is; then specify the lower specification limit, the upper specification limit, and the target value which you want to achieve, and click OK.

How to Determine Sample Size

In order to prove that a process has been improved, you must measure the process capability before and after improvements are implemented. This allows you to quantify the process improvement (e.g., defect reduction or productivity increase) and translate the effects into an estimated financial result – something business leaders can understand and appreciate. If data is not readily available for the process, how many members of the population should be selected to ensure that the population is properly represented? If data has been collected, how do you determine if you have enough data? Determining sample size is a very important issue because samples that are too large may waste time, resources and money, while samples that are too small may lead to inaccurate results. In many cases, we can easily determine the minimum sample size needed to estimate a process parameter, such as the population mean.

When sample data is collected and the sample mean is calculated, that sample mean is typically different from the population mean. This difference between the sample and population means can be thought of as an error.
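The tire example can be cross-checked with a short Python sketch of the capability formulas above (the function name is my own; the k-factor expression follows the text):

```python
def capability(mean, sigma, usl, lsl):
    """Short-term capability indices from the Cp/Cr/Cpk formulas above."""
    cp = (usl - lsl) / (6 * sigma)            # potential capability
    cr = 1 / cp                               # capability ratio
    k = abs((usl + lsl) / 2 - mean) / ((usl - lsl) / 2)
    cpk = (1 - k) * cp                        # penalizes an off-centre mean
    return cp, cr, cpk

# Tire example: USL = 15.6, LSL = 15.0, mean = 15.3, sigma = 0.09
cp, cr, cpk = capability(15.3, 0.09, 15.6, 15.0)
```

Because the process mean sits exactly on the midpoint of the specification, k = 0 and Cpk equals Cp (1.111).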
experiment. Determining the number of defects out of a sample of 100 is an experiment, and there are many possible events; the possible outcomes can be anywhere between 0 and 100.

Discrete probability distributions

A distribution is said to be discrete if it is built on discrete random variables. The discrete probability distributions are the binomial and the Poisson.

The binomial distribution is one of the simplest probability distributions. It assumes an experiment with n identical trials, with each trial having only 2 possible outcomes, considered as success or failure. Each trial is independent of the previous one.

Problem: An operations manager has 16 production lines that produce the exact same type of product. The production lines are supposed to be operational at 6:00 AM every day. The probability for a machine on a line to go down is 0.15. The operations manager wants to know how many lines will be operational at the start of the shift. He also wants to have the distribution for the probability of the lines not operating.

Solution: Open Minitab -> click on Calc, then click on Probability Distribution and then on Binomial; in column C1 enter values 1 to 16. When the Binomial Distribution box appears, in number of trials enter 16, in probability of success enter 0.15, and in input column enter C1; you get the probability as shown in the table below.

Cumulative Distribution Function
Binomial with n = 16 and p = 0.15
x P( X ≤ x )
1 0.28390
2 0.56138
3 0.78989
4 0.92095
5 0.97646
6 0.99441
7 0.99894
8 0.99984
9 0.99998
10 1.00000
11 1.00000
12 1.00000
13 1.00000
14 1.00000
15 1.00000
16 1.00000

Poisson distribution: The binomial distribution is used to calculate the probability for one outcome out of two, but not all situations exhibit only two alternatives. For instance, an engineer might want to know the number of defects in an engine. Such examples do not state a number of trials, nor do they state the number of alternative outcomes; they only describe the occurrence of an event within a time frame.

Problem: For example, the employee attrition rate at a company follows a Poisson distribution with a mean of 4 a month; the HR Director wants to know the probability that between five and seven employees would leave the company next month.

We will use the SigmaXL tool to do the Poisson distribution: open SigmaXL -> Templates & Calculators -> Poisson Distribution Calculator, enter the values in the macro-enabled sheet, and you will find the probability of five employees leaving is 0.156.

Exponential Distribution: The exponential distribution is close to the Poisson distribution, but the Poisson distribution is built on discrete random variables, while the exponential distribution is built on continuous variables.

Solution: Open Minitab -> click on Calc, then click on Probability Distribution and then on Exponential; in column C1 enter the values from the file Acid1.

Output as shown below:

Results for: Acid1.MTW
Cumulative Distribution Function
Exponential with mean = 1
x P( X ≤ x )
0.123 0.115736
0.109 0.103270
0.110 0.104166
0.109 0.103270
0.112 0.105956
0.109 0.103270
0.110 0.104166
0.110 0.104166
0.110 0.104166
0.112 0.105956
0.110 0.104166
0.101 0.096067
0.110 0.104166
0.110 0.104166
0.110 0.104166
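The Minitab and SigmaXL figures above follow directly from the textbook formulas; a small self-contained Python sketch (helper names are my own) reproduces them:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for a binomial(n, p) variable."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def poisson_pmf(k, mu):
    """P(X = k) for a Poisson variable with mean mu."""
    return math.exp(-mu) * mu**k / math.factorial(k)

def expon_cdf(x, mean=1.0):
    """P(X <= x) for an exponential variable with the given mean."""
    return 1 - math.exp(-x / mean)

# 16 production lines, each down with probability 0.15: P(at most 2 down)
p_upto_2 = binom_cdf(2, 16, 0.15)
# Attrition averages 4 per month: P(between five and seven employees leave)
p_5_to_7 = sum(poisson_pmf(k, 4) for k in (5, 6, 7))
# Exponential with mean 1, first row of the Minitab output above
p_exp = expon_cdf(0.123)
```

The results match the tables: P(X ≤ 2) = 0.56138 for the binomial, P(X = 5) = 0.156 for the Poisson, and P(X ≤ 0.123) = 0.115736 for the exponential.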
VIII. PARAMETRIC HYPOTHESIS TESTING (ANALYSE PHASE)

A hypothesis is a value judgement made about a circumstance, a statement made about a population. Based on experience, an engineer can, for instance, assume that the amount of carbon monoxide emitted by a certain engine is twice the maximum allowed legally. However, his assertion can only be ascertained by conducting a test to compare the carbon monoxide generated by the engine with the legal requirements.

Number of Tails: The number of tails refers to the distribution used for testing the hypothesis. There are left and right tails for a distribution. Depending on the objective of the test, the left, right, or both tails are considered in a hypothesis test, and the number of tails is decided. The number can be either 2 or 1: the inequality sign ≠ indicates two tails, and less than or greater than (<, >) indicates one tail. In statistical significance testing, a one-tailed test and a two-tailed test are alternative ways of computing the statistical significance of a parameter inferred from a data set, in terms of a test statistic. A two-tailed test is used if deviations of the estimated parameter in either direction from some benchmark value are considered theoretically possible; in contrast, a one-tailed test is used if only deviations in one direction are considered possible.

If the data used to make the comparison are parametric data, that is, data that can be used to derive the mean and the standard deviation, and the populations from which the data are taken are normally distributed with equal variances, a standard-error-based hypothesis test such as the t-test can be used to test the validity of the hypothesis made about the population. There are at least five steps to follow when conducting a hypothesis test.

1. Null Hypothesis: The first step consists of stating the null hypothesis, which is the hypothesis being tested. In the case of the engineer making a statement about the level of carbon monoxide generated by the engine, the null hypothesis is H0: the level of carbon monoxide generated by the engine is twice as great as the legally required amount. The null hypothesis is denoted by H0.

2. Alternate Hypothesis: The alternate (or alternative) hypothesis is the opposite of the null hypothesis. It is assumed valid when the null hypothesis is rejected after testing. In the case of the engineer testing the carbon monoxide, the alternate hypothesis would be H1: the level of carbon monoxide generated by the engine is not twice as great as the legally required amount.

3. Testing the Hypothesis: The objective of the test is to generate a sample test statistic that can be used to reject or fail to reject the null hypothesis. The test statistic is derived from the Z formula if the samples are greater than 30:

Z = (X̄ − µ) / (σ / √n)

If the samples are less than 30, then the t-test is used:

t = (X̄ − µ) / (s / √n)

4. Level of Risk: The level of risk addresses the kinds of errors that can be made while making an inference based on the test statistics obtained from the testing. Two types of errors can be made. The experimenter can falsely reject a hypothesis that is true; in that case we say that he made a Type I error, or α error. If he fails to reject a hypothesis that is actually false, he makes a Type II error, or β error. In the case of the engineer, if the level of carbon monoxide is in fact twice as great as the prescribed level and he rejected the null hypothesis, he would have made a Type I error. If the carbon monoxide generated by the engine is less than the legally prescribed level and the experimenter fails to reject the null hypothesis, he would have made a Type II error: he would have failed to reject a null hypothesis that happened to be false.

5. Decision Rule: Only two decisions are considered: rejecting the hypothesis or failing to reject it. The decision rule determines the conditions under which the null hypothesis is rejected or failed to be rejected. The decision to reject the null hypothesis is based on the alpha level. Before conducting the test, the experimenter must set the confidence level for the test. He can decide to test the hypothesis with a 95% confidence level; that means he would be 95% sure that the decision to reject or fail to reject the null hypothesis is correct. However, a 95% confidence level means there is a 5% chance that an error will be made.

Example: A machine used to average a production rate of 245 units per hour before it went for repair. After it came back from repair, over a period of 25 hours it produced an average of 249 units with a standard deviation of 8. Determine if there is a statistically significant difference between the machine's productivity before and after repair at a confidence level of 95%.

Solution: Since the sample is smaller than 30, we will use the t-test.

The null hypothesis is H0: the productivity before repair = the productivity after repair.

The alternate hypothesis is the opposite, HA: the productivity before repair ≠ the productivity after repair.
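A quick Python sketch of the computation for the machine-repair example above (the 2.064 critical value is the standard two-tailed t-table entry for 24 degrees of freedom at α = 0.05):

```python
import math

# Machine repair example: historical mean 245 units/h; after repair,
# a 25-hour sample averaged 249 units/h with s = 8.
n, xbar, mu, s = 25, 249.0, 245.0, 8.0
t_stat = (xbar - mu) / (s / math.sqrt(n))   # (249-245)/(8/5) = 2.5
# Two-tailed critical t for alpha = 0.05 and n-1 = 24 degrees of
# freedom is about 2.064; since 2.5 > 2.064, the null hypothesis is
# rejected: the productivity difference is statistically significant.
```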
Step 2: Test statistic to be used: The test statistic used to conduct the hypothesis test is based on the calculated

χ² = ∑ (f0 − fe)² / fe

where fe represents the expected frequencies and f0 represents the observed frequencies.

Step 3: Calculating the χ² test statistic: We will use the same table as given above; a sample of 80 customers was surveyed, with the data in the table.

Table A
Categories | Rating % | Expected frequencies fe
Excellent | 10 | 0.10 * 80 = 8
Very good | 45 | 0.45 * 80 = 36
Good | 15 | 0.15 * 80 = 12
Fair | 5 | 0.05 * 80 = 4
Poor | 10 | 0.10 * 80 = 8
Very poor | 15 | 0.15 * 80 = 12
Total | 100 | 80

We can summarize the observed frequencies and the expected frequencies in the table given below:

Categories | Observed frequencies f0 | Expected frequencies fe | (f0 − fe)²/fe
Excellent | 8 | 8 | 0
Very good | 37 | 36 | 0.028
Good | 11 | 12 | 0.083
Fair | 7 | 4 | 2.25
Poor | 9 | 8 | 0.125

Mood's Median Test

This test is used to determine whether there is sufficient evidence to conclude that samples are drawn from populations with different medians.

H0: The populations from which the samples are drawn have the same median.
Ha: At least one of the populations has a different median.

The procedure is best illustrated with an example.

Example: Three machines produce plastic parts with a critical location dimension. Samples are collected from each machine, with each sample size at least 10. The dimension is measured on each part. Does Mood's median test permit rejection, at the 90% significance level, of the hypothesis that the median dimensions of the three machines are the same?

The data:

Machine #1: 6.48, 6.42, 6.47, 6.49, 6.47, 6.48, 6.46, 6.45, 6.46
N = 10

Machine #2: 6.42, 6.46, 6.41, 6.47, 6.45, 6.46, 6.42, 6.46, 6.46, 6.48, 6.47, 6.43, 6.48, 6.47
N = 14

Machine #3: 6.46, 6.48, 6.45, 6.41, 6.47, 6.44, 6.47, 6.42, 6.43, 6.47, 6.41
N = 11

Procedure:
1. Find the overall median of all 35 readings: overall median = 6.46.
2. Construct a table with one column showing the number of readings above the overall median and another showing the number of readings below the overall median for each machine. Half of the readings that are equal to the median should be counted in the "above" column and half in the "below" column.

Observed Values
| Number above the overall median | Number below the overall median
Machine #1 | 7 | 3
Machine #2 | 7 | 7
Machine #3 | 4.5 | 6.5

If the three populations have the same median, each should have half their readings in the "above" column and half "below". Therefore the expected value for the number of readings would be half the sample size in each case, so the expected values would be as shown in the table.

Expected Values
| Number above overall median | Number below overall median
Machine #1 | 5 | 5
Machine #2 | 7 | 7
Machine #3 | 5.5 | 5.5

The next step is to calculate (observed − expected)²/expected for each cell in the table.

(Observed − Expected)²/Expected
| Number above overall median | Number below overall median
Machine #1 | 2²/5 = 0.8 | 2²/5 = 0.8
Machine #2 | 0 | 0
Machine #3 | 1/5.5 = 0.18 | 1/5.5 = 0.18

The sum of the values in these cells = 1.96. This is the χ² test statistic, with degrees of freedom k − 2, where k is the number of machines. Since the critical value χ²(1, 0.95) = 3.84, the data do not justify rejecting the null hypothesis.

To find the p-value for 1.96 using MS Excel, choose the statistical function CHIDIST and enter 1.96 for χ² and 1 for the degrees of freedom. You get the value 0.161513.

Mann-Whitney Test (Non-Parametric Test)

The Mann-Whitney test is better explained through an example.

Example: An operations manager wants to compare the number of inventory discrepancies found in two operating shifts. The inventory discrepancies are not normally distributed. The manager takes a sample of discrepancies found over 7 days for the first shift and 5 days for the second shift, and tabulates the data as shown below.

First Shift | Second Shift
15 | 17
24 | 23
19 | 10
9 | 11
12 | 18
13 |
16 |

We can make several observations from this table. First, the sample sizes are small and we only have 2 samples, so the first thing that comes to mind would be to use the t-test. However, the t-test assumes that the populations from which the samples are taken are normally distributed, which is not the case here; therefore the t-test cannot be used, and instead the Mann-Whitney U test should be used. The Mann-Whitney U test assumes that the samples are independent and from dissimilar populations.

Step 1: Define the null hypothesis: Just as in the case of the t-test, the Mann-Whitney U test is a hypothesis test; the null and alternate hypotheses are

H0: The number of discrepancies in the first shift is the same as the one in the second shift.
H1: The number of discrepancies in the first shift is different from the ones in the second shift.

The result of the test will lead to the rejection of the null hypothesis or a failure to reject the null hypothesis.

Step 2: Analyse the data: The first step in the analysis of the data consists of naming the groups; in our case, they are already named first and second shift. The next step consists of grouping the two columns into one and sorting the observations in ascending order, ranked from 1 to n. Each observation is paired with the name of the original group to which it belonged. The table below illustrates this:

Observations | Group | Ranks
9 | First Shift | 1
10 | Second Shift | 2
11 | Second Shift | 3
12 | First Shift | 4
13 | First Shift | 5
15 | First Shift | 6
16 | First Shift | 7
17 | Second Shift | 8
18 | Second Shift | 9
19 | First Shift | 10
23 | Second Shift | 11
24 | First Shift | 12

We will call v1 the sum of the ranks of the observations of the first group (first shift) and v2 the sum of the ranks of the observations of the second group (second shift):

v1 = 1 + 4 + 5 + 6 + 7 + 10 + 12 = 45
v2 = 2 + 3 + 8 + 9 + 11 = 33

Step 3: Determine the values of the U statistic by the formulas given below:

U1 = n1 n2 + n1 (n1 + 1) / 2 − v1
U2 = n1 n2 + n2 (n2 + 1) / 2 − v2

If either or both of the sample sizes are greater than 10, then U will be approximately normally distributed and we could use the Z transformation; but in our case both sample sizes are less than 10, therefore

U1 = 7 × 5 + 7 × (7 + 1) / 2 − 45 = 35 + 28 − 45 = 18
U2 = 7 × 5 + 5 × (5 + 1) / 2 − 33 = 35 + 15 − 33 = 17

Since the calculated test statistic should be the smaller of the two, we will use U = 17, with n1 = 7 and n2 = 5. From the Mann-Whitney table we find that the critical value is 5; since 17 is greater than 5, we fail to reject the null hypothesis and have to conclude that the number of discrepancies in the first shift is the same as in the second shift.

IX. NORMALIZING DATA

When the data being analysed are not normal and the tools required for analysis require normality, one option would be to normalize them. Normalizing the data means transforming them from non-normal to normal. This can be done using the Box-Cox transformation or the Johnson transformation.

From the menu bar of Minitab, click on Stat, then click on Quality Tools, and from the drop-down list select Johnson Transformation. The graph shows the probability plot before and after transformation; the p-value now becomes greater than the standard (0.05), and the data are therefore concluded to be normally distributed.

Anova

If, for instance, 3 sample means A, B, C are being compared, using the t-test is cumbersome; for this we can use analysis of variance. ANOVA can be used instead of multiple t-tests. ANOVA is a hypothesis test used when more than 2 means are being compared.

If k samples are being tested, the null hypothesis will be in the form given below:

H0: µ1 = µ2 = … = µk

and the alternate hypothesis will be

H1: At least one sample mean is different from the others.

ANOVAs are one-way, two-way, three-way, etc.; a one-way ANOVA has one independent factor, and a two-way ANOVA has two independent factors.

The sum of squares between samples (or treatments) is noted as SST and is obtained from the formula SST = ∑ nj (X̄j − X̿)².

Example: An operations manager is comparing the productivity per hour of four machines. He takes samples of units produced from each machine and tabulates them in the table given below. Can we say that the production of the four machines is equal with a confidence level of 95%?

Table A:
Sample 1 | Sample 2 | Sample 3 | Sample 4
23 | 25 | 25 | 23
24 | 22 | 25 | 23
25 | 22 | 25 | 23
24 | 23 | 24 | 23
The null hypothesis will be

H0: mean of sample 1 = mean of sample 2 = mean of sample 3 = mean of sample 4

and the alternate hypothesis HA: at least one sample mean is different from the others.

Table B: Sum of squares for treatments
Sample 1 | Sample 2 | Sample 3 | Sample 4
23 | 25 | 25 | 23
24 | 22 | 25 | 23
25 | 22 | 25 | 23
24 | 23 | 24 | 23
24 | 24 | 24 | 24
25 | 22 | 23 | 24
26 | 23 | 26 | 25
X̄j | = 24.429 | 23.00 | 24.571 | 23.571
(X̄j − X̿)² | = 0.287 | 0.797 | 0.460 | 0.103

∑ (X̄j − X̿)² = 1.648

SST = ∑ nj (X̄j − X̿)² = 7 × 1.648 = 11.536

Table C: Sum of squares for error
Sample 1 | Sample 2 | Sample 3 | Sample 4

Degrees of freedom within treatments: Since there are 28 observations and 4 treatments, the degrees of freedom = 28 − 4 = 24.

Mean squares: MST = SST / DFtreatment = 11.536 / 3 = 3.845
MSE = SSE / DFerror = 23.143 / 24 = 0.963

Calculated F statistic: F = MST / MSE = 3.845 / 0.963 = 3.993

The computed statistics can be summarized in the ANOVA table:

Sources of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F-Statistic
Between Treatments | 11.536 | 3 | 3.845 | 3.993
Within Treatments | 23.143 | 24 | 0.963 |
Total | 34.679 | 27 | |

We can find the F-statistic in the F-table. The critical F statistic is based on the degrees of freedom of the treatment and the degrees of freedom for error. Based on the table, the critical F-statistic is equal to 3.01, which we get from the F (ANOVA) table, while the calculated F-statistic is 3.993. The calculated F statistic is larger than the critical F-statistic; therefore we have to reject the null hypothesis and conclude that at least one sample mean is different from the others.
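As a cross-check on the hand calculation, a short Python sketch (plain Python, no statistical package) reproduces SST, SSE, and F for the four-machine example; the slight difference from the 3.993 above comes from rounding the intermediate mean squares:

```python
# One-way ANOVA for the four-machine productivity example above.
samples = [
    [23, 24, 25, 24, 24, 25, 26],   # Sample 1
    [25, 22, 22, 23, 24, 22, 23],   # Sample 2
    [25, 25, 25, 24, 24, 23, 26],   # Sample 3
    [23, 23, 23, 23, 24, 24, 25],   # Sample 4
]
n_total = sum(len(s) for s in samples)
grand_mean = sum(sum(s) for s in samples) / n_total

# Sum of squares between treatments (SST in the paper's notation)
sst = sum(len(s) * (sum(s) / len(s) - grand_mean) ** 2 for s in samples)
# Sum of squares for error (within treatments)
sse = sum((x - sum(s) / len(s)) ** 2 for s in samples for x in s)

df_t, df_e = len(samples) - 1, n_total - len(samples)   # 3 and 24
f_stat = (sst / df_t) / (sse / df_e)                    # about 3.99
```

With F ≈ 3.99 against the critical value of 3.01, the same conclusion follows: reject the null hypothesis.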
Degree of freedom between treatments: Since there are Magnesium 1 21.125 21.125 24.14 0.008
four treatments (Samples), the degree of freedom = 4-1 Interaction 1 15.125 15.125 17.29 0.014
=3.
A portion of the data collected (quantity of chemical in gm against the measured specific gravity of the solution) is shown below:

Quantity of chemical (gm)   Specific gravity
            6                    1.04
            7                    1.07
            8                    1.12
            9                    1.15

The regression output gave the following coefficients:

Term                         Coefficient
Intercept                    0.803939394
Quantity of chemical in gm   0.038848485

The table above clearly shows that the intercept in this case was 0.803939394 and the coefficient of the x variable was 0.038848485. The Six Sigma team therefore calculated y by the formula y = a + bx, where a = 0.803 and b = 0.039, i.e. y = 0.803 + 0.039 * x. The relationship between y (the specific gravity of the solution) and the quantity of the chemical was established, and it shows that as the quantity of chemical present increases, the specific gravity of the solution increases proportionately.

Coefficient of Correlation: The statistic used to determine the measure of relatedness between factors is the coefficient of correlation, generally noted R; it is a number between -1 and +1. When the coefficient of correlation is equal to -1 there is a perfect negative correlation between the factors; in other words, an increase in the magnitude of the independent factor will necessarily lead to a decrease in the dependent factor in the exact same proportion, and a decrease in the independent factor will lead to an increase in the dependent factor in the exact same proportion. If the coefficient of correlation is equal to 0 there is absolutely no relationship between the two factors, and when it is equal to +1 there is a positive correlation between the factors: an increase in the independent factor leads to an increase in the dependent factor in the same proportion.

Coefficient of Determination: While the coefficient of correlation measures the strength of the relation between the two factors, the coefficient of determination shows what proportion of the variation in the dependent factor (y) is due to the variation in the independent factor (x). The coefficient of determination, noted r^2, is the square of the coefficient of correlation.

Multiple regression analysis: When more than one independent factor is used to explain the dependent factor, a multiple regression analysis is used. The principle followed when conducting a multiple regression analysis is the same as when conducting a simple regression, with the difference that more input factors (x's) are used.

FMEA is one of the special tools used during the analyse/improve phase to verify whether the experiment being planned will have its effect and what the chances of failure are. Failure mode effect analysis is a tool for facilitating the process of predicting failures, planning preventive measures, estimating the cost of failure, and planning redundant systems or system responses to failures. The primary objective is to identify ways in which the process input variables (x's) can fail and to determine what effect that has on the process outputs.

Types of FMEA: The design FMEA and the process FMEA are the two tools most commonly used by Six Sigma teams. A design FMEA, called a DFMEA, analyses the design for potential failures that could arise as a result of weakness in the design of the product rather than due to its processes or manufacture. If both FMEAs are used, the design FMEA is usually done first, before production begins. The process FMEA, called a PFMEA, is used to identify failures caused by weakness in processes such as transaction, assembly or production processes. During the process FMEA it is presumed that the product meets the intent of the design.

XI. IMPROVE PHASE

In many projects, when data is analysed using the Six Sigma tools, there is a good chance that a solution will emerge in the analyse phase itself. This solution may pertain to anything from, say, finding a new supplier, changing a process parameter or modifying a layout, to providing a new software application or a new poka-yoke gadget. However, even when the solution is known, implementing it to get the desired result requires a great deal of effort. Take for instance a change in the measuring equipment: it may take a few weeks or even a month or two to procure or develop a new system. Similarly, if we want to switch to a new supplier, it may involve negotiation, capacity creation etc. In some projects we may not be lucky enough to arrive at the final solution at this stage. In that case we need to use some more tools, such as design of experiments (DOE).

Let us consider the example of an experimenter who is trying to optimize the production of organic foods. After screening to determine the factors that are significant for his experiment, he narrows the main factors that affect the production of fruits down to "light" and "water". He wants to optimize the time that it takes to produce the fruits, defining the optimum as the minimum time necessary to yield comestible fruits.

To conduct his experiment he runs several tests combining the two factors (water and light) at different levels. To minimize the cost of the experiments he decides to use only 2 levels of each factor: high and low. In this case we have two factors at two levels, so the number of runs will be 2^2 = 4. After conducting observations he obtains the results tabulated in the table below.

Factors                       Response
Water high – Light high       10 days
Water high – Light low        20 days
Water low – Light high        15 days
Water low – Light low         25 days
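Returning to the simple regression example earlier, a least-squares fit can be sketched in plain Python. Note this uses only the four (quantity, specific gravity) pairs visible in the excerpt, so the coefficients come out slightly different from the paper's full-data fit of y = 0.803 + 0.039x.

```python
# Least-squares fit of specific gravity against chemical quantity,
# using only the four data points quoted in the excerpt.
x = [6, 7, 8, 9]                 # quantity of chemical in gm
y = [1.04, 1.07, 1.12, 1.15]     # measured specific gravity

n = len(x)
x_mean = sum(x) / n
y_mean = sum(y) / n
# Slope b = covariance / variance; intercept a = y_mean - b * x_mean.
b = (sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
     / sum((xi - x_mean) ** 2 for xi in x))
a = y_mean - b * x_mean
print(round(a, 3), round(b, 3))   # near the paper's 0.803 and 0.039
```

With just these four points the fit is y = 0.81 + 0.038x, consistent in direction and magnitude with the reported equation.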
The coded form of the table above is given below:

Water   Light   Response in Days
  +       +          10
  +       -          20
  -       +          15
  -       -          25

The reading we make of the table is that when the level of light is high and the level of water is high, it takes the fruits 10 days to become comestible, and when the level of water is low and the level of light is low it takes 25 days for the fruit to become comestible. So the factors water and light have a significant effect on the response variable, which in this experiment is days. You can also illustrate this experiment using Minitab 17.

Example: In Minitab open the file Yield.mtw and take the three factors time, temperature and catalyst, which affect the yield of the chemical. Go to Stat -> DOE -> Factorial -> Create Factorial Design, select 2-level factorial, and set the number of factors to 3; if you want to repeat the experiment, choose the number of replicates as 2 or 3, etc. After designing the factorial, select the option Analyse Factorial Design to find out which factor (time, temperature or catalyst) gives the maximum yield of the chemical; in the response column give yield as the response factor and select the Pareto graph. The graph will show which factors contribute to high yield. By selecting the Terms option you can remove the non-contributing factors by double clicking on them and get the result for the contributing factors which give the maximum yield.

Design of Experiments using the Factorial Method

How to Plan a Six Sigma 2k Factorial Experiment

As in most other endeavours, time spent planning for Six Sigma is rewarded with better results in a shorter period of time. Planning 2k factorial experiments follows a simple pattern: choosing the factors you want to experiment with, establishing the high and low levels for those factors, and creating the coded design matrix.

Select the experiment factors

One good strategy is to include all potential Xs in a first screening experiment — even the ones you're sceptical about. You then use the analysis of the experiment results to tell you objectively, without any guessing, which variables to keep pursuing and which ones to set aside. Remember, in Six Sigma, you let the data do the talking.

Plackett-Burman experiment designs are an advanced method you may hear about for efficiently screening dozens of potential Xs. Although they don't reveal all the detailed knowledge provided by a 2k factorial design, Plackett-Burman experiments quickly identify which experimental variables are active in your system or process. You then follow these screening studies up with more detailed characterization experiments.

Set the factor levels

2k factorial experiments all have one thing in common: they use only two levels for each input factor. For each X in your experiment, you select a high and a low value that bound the scope of your investigation.

For example, suppose you're working to improve an ice cream carton filling process. Each filled half-gallon carton needs to weigh between 1,235 and 1,290 grams. Your Six Sigma work up to this point has identified ice cream flavour, the time setting on the filling machine, and the pressure setting on the filling machine as possible critical Xs to the Y output of weight.

For each of these three factors, you need to select a high and a low value for your experiment. With only two values for each factor, you want to select high and low values that bracket the expected operating range for each variable. For the ice cream flavour variable, for example, you may select vanilla and strawberry to book-end the range of possible ice cream consistencies.

Variable               Symbol   Low Setting   High Setting
Ice cream flavour        X1       Vanilla      Strawberry
Fill time (seconds)      X2         0.5            1.1
Pressure (psi)           X3         120            140
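Before moving on, the two-factor water/light experiment described earlier can be checked numerically. In coded units, a factor's main effect is the average response at its high setting minus the average at its low setting; the sketch below computes this for the four runs in the results table.

```python
# Main-effect estimates for the two-factor water/light experiment above.
# Keys are coded settings (+1 = high, -1 = low); values are days to comestible.
runs = {
    (+1, +1): 10,
    (+1, -1): 20,
    (-1, +1): 15,
    (-1, -1): 25,
}
# Effect = (mean response at high) - (mean response at low)
#        = sum of (coded level * response) / 2 for a 2^2 design.
water_effect = sum(r * w for (w, _), r in runs.items()) / 2
light_effect = sum(r * l for (_, l), r in runs.items()) / 2
print(water_effect, light_effect)   # -5.0 -10.0
```

Raising water from low to high cuts about 5 days off the ripening time, and raising light cuts about 10 days, matching the qualitative reading of the table.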
A 2k factorial experiment has 2^k unique runs, where k is the number of variables included in your experiment.

For the ice cream carton filler example, then, you have 2^3 = 2 × 2 × 2 = 8 runs in the experiment because you have three input variables. For an experiment with two variables, you have 2^2 = 2 × 2 = 4 runs, and so on.

Each of these 2^k experimental runs corresponds to a unique combination of the variable settings. In a full 2k factorial experiment, you conduct a run or cycle of your experiment at each of these unique combinations of factor settings. In a two-factor, two-level experiment, the four unique setting combinations are:

• Both factors at their low setting
• The first factor at its high setting and the second factor at its low setting
• The first factor at its low setting and the second factor at its high setting
• Both factors at their high setting

These groupings are the only ways that these two factors can combine. For a three-factor experiment, eight such unique variable setting combinations exist.

A quick way to create a complete table of an experiment's run combinations is to create a table called the coded design matrix. Make a column for each of the experiment variables and a row for each of the 2^k runs. Using -1s as code for the low variable settings and +1s as code for the high settings, fill in the left-most variable column cells with alternating -1s and +1s. Repeat the process with the next column to the right, this time with alternating pairs of -1s and +1s. Fill in the next column to the right with alternating quadruplets of -1s and +1s, and so on, repeating this process from left to right until, in the right-most column, the first half of the runs are marked as -1s and the bottom half are listed as +1s.

Run   X1   X2   X3
 1    -1   -1   -1
 2    +1   -1   -1
 3    -1   +1   -1
 4    +1   +1   -1
 5    -1   -1   +1
 6    +1   -1   +1
 7    -1   +1   +1
 8    +1   +1   +1

Remember that these factor settings are coded values: when you see a -1 under the X1 column, it really represents a discrete value, such as "vanilla" in the ice cream experiment, and a +1 really represents the other value, like "strawberry".

XII. CONTROL PHASE

Control Charts, Control Plan, Poka-Yoke, Project Closing.

This section covers the Monitor and Control phase of the project, which comes before project closure.

Maintaining Gains

If your project has successfully entered the Monitor and Control phase, you have to make sure that you maintain this stability. Here are some suggested ways to protect hard-won gains:

Policy Changes – Which corporate policies should be changed as a result of the project? Have some policies become obsolete? Are new policies needed?

New Standards – Did the project bring the organization into compliance with a standard? If so, having the company adopt the standard might prevent backsliding. There are standards like ISO, ANSI, ASTM etc. from which you can take your pick.

Modify Procedures – Procedures describe the way things are supposed to be done. Since the project produced better results, update the procedures to reflect the modified way of working that produced them.

Change Manufacturing Planning – An organization's manufacturing plan describes in detail how a product is to be processed and produced. Often the Six Sigma team will discover a better way of doing things.

Modify Training – Personnel need to become familiar with the new way of doing things. Be sure all employees are retrained. Evaluate the existing training materials and revise them as necessary.

12.1 Tools and techniques useful for Control Planning

Project Planning – Many of the Six Sigma tools and techniques used during the define, measure, analyse and improve phases can also be used to develop a control plan.

Failure Mode and Effect Analysis – Use FMEA as in the improve phase; please refer to my earlier articles on FMEA.

Force Field Diagram – A force field diagram can be very useful at this point. Show the forces that will push to undo the changes, and create counterforces that will maintain them.

12.2 Preparing the Process Control Plan

A process control plan should be prepared for each key process. The plans should be prepared by teams of people who understand the process. The team should begin by creating a flow chart of the process using the process elements. The flow chart will show how the process elements relate to each other, which will help in the selection of control points.
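The column-alternation recipe above can be written as a short sketch; the function name `coded_design_matrix` is my own label, not from the text.

```python
# A sketch of the coded design matrix construction described above,
# reproducing the same -1/+1 pattern (X1 alternates fastest, X3 slowest).
def coded_design_matrix(k):
    # Bit j of the run index decides column Xj+1: 0 -> low (-1), 1 -> high (+1).
    return [[+1 if (run >> j) & 1 else -1 for j in range(k)]
            for run in range(2 ** k)]

for row in coded_design_matrix(3):
    print(row)
# The first run is [-1, -1, -1] (all factors low) and the last is
# [1, 1, 1] (all factors high), matching the eight-run table above.
```

Generating the matrix programmatically scales to any k, which is handy once the run count grows past what is comfortable to write by hand.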
point. It will also show the point of delivery to the customer 1. If only one point is outside the UCL , it indicates
which is usually an important control point. one of the following:
a. The Control limits have been miscalculated or
Process Audits
have been drawn incorrectly.
The requirement for all processes should be documented. A b. The plotted sample that is above the control
process audit check list should be prepared and used to limit has been miscalculated or plotted
determine the condition of the Process prior to production. improperly.
c. The within sample variation has increased from
12.3 SPC (Statistical Process Control)
that of the past and the nature of excessive
Control Charts variation has to be investigated.
2. An Upward trend of seven or more data points in the
Variable Charts R-Bar Chart indicates that the process spread is
The Xbar and R Chart is the flagship of the control charts. It increasing, this has to be investigated immediately
is called a variables chart because the data to be plotted result because process capability is degrading and the
from measurement on a variable on continuous scale. This variation is increasing.
type of measurement occurs when for each pair of values 3. A downward trend of seven (7) or more data points
there are infinite number of values between them. On an inch in the R-bar chart means that the process spread has
scale for instance there are infinite number of values between decreased. The Process is improving and the control
1.250 and 1.251, values such as 1.2503, 1.2508 and so on. limits may need to be recalculated
The Xbar and S chart is another variable control chart with Attribute Charts
this chart sample standard deviation s is used to indicate
dispersion instead of the range. The Standard deviation is a Attribute Charts are used for Count Data
better measure of spread especially if the sample size is large
In attribute Control Charts every item would be either in one
so this chart is somewhat more precise than the xbar and R
or two categories such as ―good‖ or ―bad‖.
chart. TheXbar and S chart should be used when calculation
of the standard deviation is feasible. Users who are P control charts
comfortable with the standard deviation function on a
The P Chart is used to chart Binary Data where each item is
handheld calculator will be able to calculate S almost as
in one or two categories. This chart would be the appropriate
easily as R. The Xbar and S chart is also the preferred chart
chart for plotting number of defectives.
when the chart is constructed using a software package.
UCL for P Chart = PBar + 3 √pBar(1-pBar)/ nbar
XBar and R Control Charts:
LCL for P Chart = PBar – 3 √Pbar (1-PBar)/ nbar
To get Control charts in Minitab please do the following
open Minitab Click on Assistant? Click on Control Where nbar = Sum of Sample Sizes / Number of samples.
Charts You will get flow chart of various types of
And Pbar = Sum of Discrepancies / Sum of sample Sizes
Control Charts-> Select the data from the sheet column
and then specify estimate control limits from the data Analyse the P Chart for Process Control: - If all of the plotted
you will get the different types of control charts which can data on the p-
determine whether the process is in control.
Chart are within control limits and randomly distributed
Control limits for Xbar and R control charts are given by the around the centre line
following formulas
The defective rate for the process is stable and predictable.
Upper Control Limit for the average chart UCLx =Xbar + A2
If there is a run of seven or more proportions (p) on one side
* RBar
of the centre line,
Lower Control Limit for the average Chart LCLx = Xbar –
A2 * RBar It means one or more of the following.
Upper Control Limit for the Range Chart UCLr = D4 * Rbar 1. The process defective rate has changed (or is still
different) from past.
Lower Control Limit for Range Chart LCLr = D3 * R Bar 2. The measurement system or the definition of a
nonconforming part
Where A2 and D3 are got from the Statistical constants table
Has changed.
and XBr and RBar are calculated from the problem given to
3. The Control limits calculated from these data are
you arrived by collecting samples.
questionable and may
Interpreting the X-Bar and R-bar chart:- Require recalculation.
4. A trend (up or down) of seven or more sample
proportions means that
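The P-chart limit formulas above translate directly into code. The defect counts and sample sizes below are made-up illustration data, not figures from the paper.

```python
# Sketch of the P-chart limits quoted above:
# UCL/LCL = Pbar +/- 3 * sqrt(Pbar * (1 - Pbar) / nbar).
from math import sqrt

def p_chart_limits(discrepancies, sample_sizes):
    nbar = sum(sample_sizes) / len(sample_sizes)   # average sample size
    pbar = sum(discrepancies) / sum(sample_sizes)  # overall defective rate
    half_width = 3 * sqrt(pbar * (1 - pbar) / nbar)
    # The LCL is floored at zero, since a proportion cannot be negative.
    return max(0.0, pbar - half_width), pbar + half_width

# Illustrative data: 10 samples of 50 items, with a few defectives each.
defects = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]
sizes = [50] * 10
lcl, ucl = p_chart_limits(defects, sizes)
print(round(lcl, 4), round(ucl, 4))   # 0.0 0.1349
```

Here Pbar = 23/500 = 0.046, and the computed lower limit goes negative, so it is clamped to zero, a common convention for P charts with small defect rates.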
A brief tollgate review table is given below; a tollgate review is done after every phase of DMAIC is implemented.

Component: Purpose
  Define  – Provide a compelling business case, appropriately scoped, complete with SMART goals and linked to a hoshin plan.
  Measure – Collect process performance data and primary and secondary metrics to gain insight and understanding into root causes and to establish baseline performance.
  Analyse – Analyse and establish optimal performance settings for each X and verify root causes.
  Improve – Identify and implement process improvement solutions.
  Control – Establish and deploy a control plan to ensure that gains in performance are maintained.

Component: Participants Required
  Define  – 1. Sponsor  2. Process owner  3. MBB/BB coach  4. Deployment Champion  5. Project team  6. Finance partner
  Measure – 1. Sponsor  2. Process owner  3. MBB/BB  4. Deployment Champion  5. Project team  6. Finance partner
  Analyse – 1. Sponsor  2. Process owner  3. MBB/BB  4. Deployment Champion  5. Project team
  Improve – 1. Sponsor  2. Process owner  3. MBB/BB  4. Deployment Champion  5. Project team
  Control – 1. Sponsor  2. Process owner  3. MBB/BB  4. Deployment Champion  5. Project team  6. Finance partner
XIII. KEY PERFORMANCE INDICATORS (KPI'S)

KPIs are measurements defined to show the state of the strategic goals or targets the organization has set for itself.

What is a metric? A combination of two or more measures used to compare software processes, projects and products. A set of metrics is derived from data collected from past projects at the organization level. These baselines are used as a basis to arrive at project-specific quality goals for KPIs.

Effort Variation

This metric is the difference between Estimated and Actual effort, as compared against the Estimated Effort.

Objective: The objective of this metric is to study the distribution of workload by stage and to reduce the deviation of the actual effort expended as against the estimated effort.

When it should be measured: It should be measured at overall project level, stage level and task level (process level and sub-process level for SDLC stages).

Input/Measure: Actual Effort; Estimated Effort
Formula: ((Actual Effort - Estimated Effort) / Estimated Effort) * 100
Example: Estimated Effort = 5 person days; Actual Effort = 7 person days; Effort Variation% = (7 - 5) / 5 * 100 = 40%

Schedule Variation

This metric is the ratio of the difference between the Actual End Date and the Planned End Date to the difference between the Planned End Date and the Planned Start Date for the project.

Objective: The objective of this metric is to reduce the schedule variation by tracking it from the beginning of the project through to its end, thereby reducing time overruns. The Schedule Variation metric is mainly used as an indicator of the capability to meet milestones.

When it should be measured: It should be measured at overall project level, stage level and task level (process level and sub-process level for SDLC stages). Schedule variation needs to be calculated only when the stage is completed.

Input/Measure: Actual End Date; Planned Start Date; Planned End Date
Formula: ((Actual End Date - Planned End Date) / (Planned End Date - Planned Start Date)) * 100
Example: Planned Start Date = 1-Jan-13; Planned End Date = 31-Jan-13; Actual Start Date = 2-Jan-13; Actual End Date = 1-Feb-13; Schedule Variation % = (1 / 31) * 100 = 3.22%

Duration Variation

This metric is the difference between the Total Planned and Actual duration for the project.
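The Schedule Variation example above can be reproduced with standard-library date arithmetic. The planned duration of 31 days implies the worked example counts both endpoint days of January, so one day is added to the raw date difference; that inclusive-count convention is an assumption read off the example, not stated in the text.

```python
# Schedule Variation computed with stdlib dates, following the worked example.
from datetime import date

planned_start = date(2013, 1, 1)
planned_end = date(2013, 1, 31)
actual_end = date(2013, 2, 1)

slip_days = (actual_end - planned_end).days                # 1 day late
planned_duration = (planned_end - planned_start).days + 1  # 31 days, inclusive
variation_pct = slip_days / planned_duration * 100
print(round(variation_pct, 2))   # 3.23 (the paper truncates this to 3.22%)
```

Without the inclusive count the denominator would be 30 days and the result 3.33%, so the convention chosen for measuring planned duration should be fixed up front and applied consistently across projects.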
Benefits:

• Gives insight into efficiency, helping to mitigate risk and take further actions.

Review Efficiency

It is the ratio of the number of review defects to total defects in the software (review and testing).

Objective: This metric shows the efficiency of the review process. A higher ratio of review defects to total defects indicates a very efficient review process. However, a lower ratio need not necessarily mean that the review is inadequate; it may be due to frequent changes in scope, requirements etc. Typically, the review defects include all reviews, starting from the requirements stage. All review comments that arise from the review of code / unit / integration / system test plans / procedures / cases shall be counted under review defects. The total number of defects includes all the defects found in reviews and testing.

When it should be measured: It should be measured at overall project level and stage level.

Input/Measure: Number of review defects; Total number of testing defects including customer reported test defects
Formula: (Number of review defects) / (Total number of review + testing defects [including customer reported test defects]) * 100
Example: Number of review defects = 10; Total number of testing defects including customer reported test defects = 100; Review Efficiency% = (10 / 110) * 100 = 9.09%

Benefits:

• This helps to know the effectiveness of the reviews performed and to improve the review process.
• An increase in review efficiency decreases the defect leakage to the next stage.

Case Study:

Power Equipment Limited (PEL) is an Indian subsidiary of an MNC that has its headquarters in Europe. The company customizes designs of its parent company and manufactures machines for industrial use. Its annual turnover is Rs 1500 crore. The company ships about 600 machines every month. Approximately 30% of this volume is exported to various countries such as the UK, the US and other European countries. Its turnover from the domestic market in India is approximately 60%. PEL was a market leader and enjoyed a monopolistic position from 1970 to 1990. However, competition entered the scenario in 1990, resulting in pressures on quality, cost and delivery. As a result, in the past few years profit did not increase in proportion to turnover. The company has been certified to ISO 9000 since 1993 and has a fairly strong quality management system. PEL embraced various quality improvement programs and obtained fair gains from these. The engineers, workers and staff were given training in technology and management practices such as JIT, Kaizen and Six Sigma. The company evolved its own excellence model and developed a checklist of excellence norms. In spite of that, PEL has been losing some of its key personnel to its competitors. Retention of people has become a concern and a priority for the HR head. PEL spends about 3% of its sales turnover on warranty.

PEL purchases about 60% of parts by value from suppliers. These parts include pistons, crankshafts, heavy castings like the cylinder block and cylinder head, electrical parts, starters, radiators, cables, belts, hardware etc. The company has developed a good supplier base and has enjoyed a good relationship with most of them. In the past few years the company has been forced to negotiate with the suppliers to reduce costs. With the new initiative of senior management, the supply base has been rationalized significantly. PEL has a modern machine shop that has the capability to manufacture machine parts with high precision; this includes heavy, medium and small parts.

PEL ships its machines to several original equipment manufacturers (OEMs), who assemble them on their equipment. Most of the OEMs are not happy about the quality performance of PEL now, since the inferior quality machine parts have resulted in dissatisfied end customers. Apart from the defects reported, OEMs have been constantly complaining about the quality of the paint. They reported that the end customers have given feedback that the machines do not look new. Moreover, the replacement of defective parts typically takes 2 weeks to reach the OEMs. Under pressure of the delivery of the equipment, OEMs sometimes have to remove parts from other machines and use them in the place of defective ones. The top management of the company has assured the OEMs that it will make efforts to improve its performance within a year. They have assigned the responsibility of improving the performance to the existing Six Sigma Steering Committee and have asked the committee to assess the current situation, with inputs from the quality manager, and recommend an action plan.

After the review of the inputs, the steering committee identified and finalized two Six Sigma projects:

1. Reducing shipment defects
2. Improvement in the quality of the painting

Project Execution Phase 2: Improvement in the Quality of Painting

PEL paints machines before dispatch. These machines are dispatched to the original equipment manufacturers (OEMs) for installation on their equipment. Many OEMs had expressed dissatisfaction regarding the quality of the paint. They had informed PEL that the end users were unhappy. Some customers even doubted whether the machines were new.
Project Charter: PEL Ltd

Category (BB/GB): BB
Sponsor: ABC
Start Date: 5th September 2016
Opportunity for Improvement: Paint gloss improvement and cost reduction due to excess coating
Short statement of the project deliverable (problem to be solved): Improve average paint gloss from 75 to 85 and capability better than 1.5
Reduction in COPQ: from 250000 to 25000 US $/year
DPMO/DPU/% Defective: from 27 to 1
Cycle time: same or better
Productivity: same or better
Payback: -
Project Scope: Painting of machines of Model V series
Each machine was inspected for gloss and was reworked for conformance. However, approximately 27% of machines did not meet the specification, with an estimated sigma level of 2.1. The target sigma level was 4.5. The existing process was mapped to identify KPOVs and KPIVs as shown in the table below.

KPIVs (Xs)                      C/U   Process                KPOVs
X1  Type of Paint Gun            C    Painting of Machines   Y1 Gloss
X2  Type of Paint                C                           Y2 Adhesion
X3  Surface Cleanliness          C                           Y3 Dry Film Thickness
X4  Paint Viscosity              C
X5  Distance of the Gun          C
X6  Painting time                C
X7  Painter Skill                C
X8  Drying oven Temperature      C
X9  Drying time                  C
X10 Ambient Temperature          U

Measurement system analysis showed that the measurement system was adequate, with an R&R value of 9%.

The input/process variables were then prioritized with weighted scores:

Input/Process Variables (X's)               Weighted Score
Type of Paint Gun          3  9  1                86
Type of Paint              3  9  3               102
Surface Cleanliness        1  1  9                84
Paint Viscosity            9  9  1               116
Distance of the Gun        9  3  3                90
Painter Skill              3  9  3               102
Drying oven Temperature    0  9  9               135
Drying time                0  9  0                63
Ambient Temperature        0  3  3                45
Ambient Humidity           0  3  3                45

The Improve Phase

The team then decided to conduct an experiment with 5 factors; refer to the table given below.

Factor   Name           Type      Low    High
A        Paint Source   Numeric     1      2
B        Gun type       Numeric     1      2
C        Distance       Numeric    50     80
D        Viscosity      Numeric    20     24
E        Temp           Numeric   120    140

REFERENCES

[1]. The Certified Six Sigma Master Black Belt by Kubiak; Lean Six Sigma by Lawton; Six Sigma for Business Excellence by Hemant Urdhwareshe; Kaner & Walter P. Bond, Software Engineering Metrics: What Do They Measure and How Do We Know?
[2]. Reference software used (30-day free trials): Minitab 17; SigmaXL; Quality Companion 3.
[3]. Statistical tables used as reference for critical statistic calculations: t-table; Mann-Whitney table; Chi-Square table; F (ANOVA) table, as available via Google search.