
Control charts, including identification of chance and assignable causes of variation
- Chance (common) causes account for roughly 85% of variation; assignable causes (anything 3σ or greater) account for roughly 15%.
- A process is in control when:
  - about two-thirds of data points are near the center, with few data points on/near the control limits
  - data floats back and forth across the centerline
  - points are balanced on both sides of the centerline
  - no points fall beyond the control limits
  - no patterns or trends appear in the chart
- A process must be in control before you can predict what will happen next.
- s (the sample standard deviation) is used to estimate σ and then establish the 3σ control limits.
- After a process change, run an F-test for variance on two samples to determine whether the change was statistically significant.

Variable control charts, including X-bar & R, X-bar & s, and I & mR
- X-bar & R: use when sample sizes are < 10; easiest to build. X-bar charts control central tendency; R charts show dispersion.
- X-bar & s: use when sample sizes are 10+; more accurate as sample size grows.
- I & mR (Individuals and Moving Range): use when samples are single values or sample sizes vary.
- Use an X-bar chart combined with an R or s chart when:
  - the characteristic can be measured
  - the process is unable to hold tolerances
  - the process must be monitored for adjustments
  - changes are being made to the process and need to be monitored
  - process stability and process capability must be monitored and demonstrated to a customer
  - the process average and variation must be measured

Attribute control charts, including P, NP, C, and U
- p-chart: fraction nonconforming (or percent nonconforming)
- np-chart: number nonconforming
- Use p, np, or 100p charts when:
  1. There is a need to monitor the portion of the lot that is nonconforming
  2. The characteristics under study can be judged either conforming or nonconforming
  3. The sample size varies (p chart)
  4. Process monitoring is desired but measurement data cannot be obtained
- c-chart: number of nonconformities (in a single unit, so n = 1); as sample size grows, the control limits tighten
- u-chart: number of nonconformities per unit, where u = c/n for a count c of nonconformities found in n units
- Defectives vs. defects: a defective is a whole unit that is unusable (customer quality); defects can be many in a single unit, each any failure to meet a customer requirement (process quality). DPMO = defects per million opportunities.
- Use c or u charts when:
  1. There is a need to monitor the number of nonconformities in a process (c charts for counts of nonconformities, u charts for nonconformities per unit)
  2. The characteristics under study can be judged as having one or more nonconformities
  3. Process monitoring is desired but measurement data cannot be obtained
  4. The sample size varies (u chart)
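The X-bar & R limits described above can be sketched in a few lines. This is a minimal illustration, not a full SPC implementation; the function name `xbar_r_limits` is made up here, and the constants A2 = 0.577, D3 = 0, D4 = 2.114 are the standard table values for subgroups of n = 5.

```python
# Table constants for subgroup size n = 5 (standard SPC tables).
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Centerlines and 3-sigma control limits from a list of equal-size subgroups."""
    xbars = [sum(g) / len(g) for g in subgroups]    # subgroup means
    ranges = [max(g) - min(g) for g in subgroups]   # subgroup ranges
    xbarbar = sum(xbars) / len(xbars)               # grand average: CL of the X-bar chart
    rbar = sum(ranges) / len(ranges)                # average range: CL of the R chart
    return {
        "xbar_CL": xbarbar,
        "xbar_UCL": xbarbar + A2 * rbar,
        "xbar_LCL": xbarbar - A2 * rbar,
        "R_CL": rbar,
        "R_UCL": D4 * rbar,
        "R_LCL": D3 * rbar,
    }
```

Note that the R-chart limits come from D3 and D4 alone, which is why small subgroup sizes (D3 = 0) give an R chart with a lower limit of zero.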

Process Capability, including Cp and Cpk indices and their interpretation
- Process capability: the range of expected results that can be achieved by the process; the ability of the process to meet specs.
- The process must be normal and in control before evaluating whether it is capable of meeting customer requirements.
- Control limits are the voice of the process; specification limits are the voice of the customer.
- Cp measures variation (spread); Cpk accounts for centering and defects.
- The long-term process shift is assumed to be about 1.5σ, so aim for Cp = 1.5, or 2.0+.
- Cp = (USL - LSL) / 6σ
  - If Cp = 0.333, the customer tolerance is 1/3 of the process width: high defect rate.
  - If Cp = 3, the customer tolerance is 3x the process width: virtually no defects, even with a process shift.

- Cpk = min(USL - X-bar, X-bar - LSL) / 3σ
  - If Cpk = 0, the process average equals one of the spec limits.
  - If Cpk is negative, the defect rate is 50%+.
  - If Cpk is -1.0 or lower, the defect rate is nearly 100%.
  - The process is centered when Cpk = Cp; want both values to be 1.0+.

Specification Limits, including their relationships to control limits in 3-sigma and 6-sigma processes
- Control limits are based on data; specification limits are based on the customer.
- The spread of the individuals in a process, 6σ, is the measure used to compare the realities of production with the desires of the customers.
- 6σ < USL - LSL: process spread is less than the specification spread. Most desirable case; allows room for process shifts while still staying within specification limits.
- 6σ = USL - LSL: process spread equals the specification spread. A process shift or increase in variation will result in out-of-spec product.
- 6σ > USL - LSL: process spread exceeds the specification spread. Undesirable situation; natural variation alone will produce out-of-spec product.

Attribute chart selection:

  Focus                                  Chart     Measures                            Distribution  Sample size
  Proportion defective (defectives)      np chart  # of nonconforming items            Binomial      Constant (usually > 50)
  Proportion defective (defectives)      p chart   Proportion of nonconforming items   Binomial      Variable (usually > 50)
  Nonconformity itself (defects / DPU)   c chart   Total # of nonconformities          Poisson       Constant
  Nonconformity itself (defects / DPU)   u chart   Nonconformities per unit            Poisson       Variable
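The Cp and Cpk formulas above translate directly into code. A minimal sketch (the function names `cp` and `cpk` are illustrative, not from any particular library):

```python
def cp(usl, lsl, sigma):
    """Cp = (USL - LSL) / 6*sigma: compares spec spread to process spread, ignores centering."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Cpk = min(USL - mean, mean - LSL) / 3*sigma: penalizes an off-center process."""
    return min(usl - mean, mean - lsl) / (3 * sigma)
```

For a perfectly centered process, `cpk(...)` equals `cp(...)`; as the mean drifts toward either spec limit, Cpk drops while Cp stays constant, which is exactly the "Cp is variation, Cpk is centering" distinction.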

Probability distributions, including Hypergeometric, Binomial, Poisson, and Normal
- The sum of all probabilities = 1.0.
- Probability that event A will not occur: P(A') = 1.00 - P(A)
- Mutually exclusive events cannot occur simultaneously: P(A or B) = P(A) + P(B)
- NOT mutually exclusive (either or both will occur): P(A or B or both) = P(A) + P(B) - P(both)
- Dependent events (both will occur): P(A and B) = P(A) x P(B|A), where P(B|A) = P(A and B) / P(A)
- Independent events: P(A|B) = P(A), P(B|A) = P(B), and P(A and B) = P(A) x P(B)
- Permutations: the number of arrangements of n objects when r of them are used; order matters. nPr = n! / (n - r)!
- Combinations: when order is not important for n objects when r of them are used. nCr = n! / (r!(n - r)!). The number of combinations is never larger than the number of permutations.
- Hypergeometric: P(d) = [C(D, d) x C(N - D, n - d)] / C(N, n), where D is defects in the population, d defects in the sample, N the population size, and n the sample size. Use when n/N > 0.1, with a random sample drawn without replacement.
- When n/N <= 0.10, the hypergeometric can be approximated with the binomial.
- Binomial: only two possible outcomes. P(d) = C(n, d) p^d q^(n-d), where d is the number of defectives sought, n the sample size, p the proportion defective, and q = 1 - p.
- Poisson: P(c) = (np)^c e^(-np) / c!, where np is the average number of events in a sample, c is the number of events, and e = 2.718281.
- Continuous probability distribution (Normal): approximates the binomial with mean np and standard deviation sqrt(np(1 - p)).
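The three discrete distributions above can be computed with nothing but the standard library. A sketch, using `math.comb` for the C(n, r) combination terms (function names here are illustrative):

```python
import math

def hypergeometric_pmf(d, N, D, n):
    """P(d defects in a sample of n, drawn without replacement from N items containing D defects)."""
    return math.comb(D, d) * math.comb(N - D, n - d) / math.comb(N, n)

def binomial_pmf(d, n, p):
    """P(exactly d defectives in a sample of n, with proportion defective p)."""
    return math.comb(n, d) * p**d * (1 - p)**(n - d)

def poisson_pmf(c, mean):
    """P(exactly c events, where mean = np is the average number of events in a sample)."""
    return mean**c * math.exp(-mean) / math.factorial(c)
```

These can also be used to check the n/N <= 0.10 rule of thumb numerically: for a small sample drawn from a large lot, `hypergeometric_pmf` and `binomial_pmf` give nearly identical probabilities.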

System reliability, including the impact of sequential, parallel, and backup components
- Reliability follows the binomial distribution.
- In series: Rs = r1 x r2 x r3 x ... x rn
- In parallel: Rp = 1 - (1 - r1)(1 - r2)(1 - r3)...
- With backup: Rb = r1 + rb(1 - r1); a backup will always increase the reliability of the system.
- The reliability of a process will always fall somewhere between the UCL and LCL of an attribute control chart.
- When combining multiple components, assuming each is in control, the reliability should fall at the centerline, where reliability = 1 - p-bar.
- If risk is high (critical component), use the UCL; if there is no risk, use the LCL.

Measures of Reliability, including failure rate, availability with MTTF and MTTR, and mean life
- Failure rate: λ = number of failures / total operating time
- Average life (MTBF, also known as mean time between failures) = 1/λ
- Availability = MTTF / (MTTF + MTTR)
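The series, parallel, backup, and availability formulas above can be sketched directly (the function names are illustrative):

```python
def series(rs):
    """Series system: every component must work, so reliabilities multiply."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(rs):
    """Parallel system: it fails only if every component fails."""
    fail = 1.0
    for r in rs:
        fail *= (1 - r)
    return 1 - fail

def with_backup(r1, rb):
    """Primary with reliability r1, plus a backup rb that takes over if the primary fails."""
    return r1 + rb * (1 - r1)

def availability(mttf, mttr):
    """Fraction of time the system is up: MTTF / (MTTF + MTTR)."""
    return mttf / (mttf + mttr)
```

Note how two 0.9 components give 0.81 in series but 0.99 in parallel: series composition always lowers system reliability, while redundancy raises it.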

Product lifecycle curve, including how it can be used to plan to deliver high perceived reliability
- Product Life Cycle Curve (bathtub curve) has three phases: early failure, chance failure, and wear-out.

Costs of Quality (COQ), including Prevention, Appraisal, Failure, and Intangible forms of Internal or External elements of Conformance and Nonconformance costs
- A quality cost is any cost incurred to ensure that the quality of the product is perfect.
- Quality costs are the portion of operating costs brought about by providing a product that does not conform to performance standards.
- Quality costs are also the costs associated with the prevention of poor quality.
- Types of costs: prevention, appraisal (testing and evaluation), failure (internal, external), and intangible (the largest category and the hardest to measure).
- Quality costs increase the further a faulty product travels toward the customer.
- After improving the process, invest more in prevention and appraisal to bring the total cost of quality down.

How an analysis of quality costs can impact a decision to offer a warranty or defend against a liability claim
- Prefer the warranty period to fall before the LCL of the failure rate.
- Lawsuits are a major reason companies pay attention to reliability.

Quality Function Deployment (QFD), including the interaction of multiple Houses of Quality
- QFD is the process of connecting multiple Houses of Quality in successive layers, where the winners from each layer move on to the next. The requirements produced by the first house become the starting Whats for the second, and the Hows of the first become the Whats of the second.
[Diagram: the QFD cascade of four linked matrices. Customer Requirements / Customer CTQs feed the Design House of Quality (QFD Matrix 1), which produces Design Characteristics; those feed the Functional House of Quality (QFD Matrix 2), which produces Functional Requirements; those feed the Process House of Quality (QFD Matrix 3), which produces Process Requirements; those feed the Operational House of Quality (QFD Matrix 4). CTQ = Critical-To-Quality; the diagram labels the cascading CTQs as Customer, Functional, Technical, System, and Control CTQs.]
- QFD is a technique that seeks to bring the voice of the customer into the process of designing and developing a product or service.
- QFD is a planning process for guiding the design or redesign of a product or service. The principal objective of a QFD is to enable a company to organize and analyze pertinent information associated with its product or service.
- QFD has two principal parts:
  - The horizontal component records information related to the customer (the WHAT).
  - The vertical component records the technical information that responds to customer inputs (the HOW).
- Relationships (center of the house): an empty row may be a need with no measure; an empty column may be an unnecessary measure (a design variable with no requirement); a diagonal pattern suggests someone picked one design variable per requirement, or the other way around.
- Correlations (the roof): used to include support features necessary to implement the design variables identified with high scores.
- Technical impacts (basement): where DOE helps identify the impact or significance of design variables; helps with prioritization of variables and the impact they have on the design.
- Target values / operational goals (sub-basement): how the design variables are measured.

Design of Experiments (DOE), including full and fractional factorial designs; and the relationship between experimental factors and experimental levels
- Full factorial designs include all possible combinations of all selected levels of the factors to be investigated. They allow the most complete analysis and can determine main effects of factors and factor interactions, but are time-consuming and expensive.
- Fractional factorial designs allow a large-scale experiment to be condensed into several 3x2, 3x3, or 2x2 experiments, where the combined results show the critical factors and the interactions between factors.
- A factor or interaction is significant when its confidence interval does not include 0.
- When the results are plotted, intersecting lines indicate an interaction between factors.
- Conjoint analysis (by definition 3-factor, 2-level) is a specific kind of DOE for subjective human valuation, used to measure customer perception.
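A full factorial design is just the cross product of all factor levels, and for a 2-level design a main effect is the average response at the high level minus the average at the low level. A sketch using `itertools.product` (the factor names and responses below are made up for illustration):

```python
import itertools

def full_factorial(levels):
    """All combinations of the given levels. levels: dict of factor name -> list of levels."""
    names = list(levels)
    runs = itertools.product(*(levels[n] for n in names))
    return [dict(zip(names, run)) for run in runs]

def main_effect(runs, results, factor, high, low):
    """Average response at the factor's high level minus average at its low level."""
    hi = [y for run, y in zip(runs, results) if run[factor] == high]
    lo = [y for run, y in zip(runs, results) if run[factor] == low]
    return sum(hi) / len(hi) - sum(lo) / len(lo)
```

A k-factor, 2-level full factorial has 2^k runs, which is why the run count explodes and fractional designs become attractive as k grows.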

Failure Mode & Effects Analysis (FMEA), including the distinctions among failure modes, effects, and causes; and the meaning and implications of the Risk Priority Number (RPN)
- The purpose is to identify and prioritize the vital few.
- Failure mode (the defect), cause (a product or process weakness), effect (on the external customer or business performance).
- Develop a process map; establish Severity/Occurrence/Detection (SOD) scales and definitions; list the process steps on the FMEA; identify potential failures, effects, causes of failure, and means of detection for each step. Score each failure on S, O, and D, then take the product to find the RPN.
- For RPNs above the threshold, fill out recommended actions, the responsible party, the action taken, and a new estimated RPN.
- A Pareto chart may be used to prioritize the RPNs.
- Control types: 1. Prevention countermeasure (addresses the cause) 2. Detection of the error condition (failure mode) 3. Reaction (detection; focus on monitoring/feedback; addresses the effect)

Six Sigma, including the purpose and impact of the Define, Measure, Analyze, Improve, and Control process improvement lifecycle phases
- Six Sigma identifies key performance variables, reduces variation, and improves quality while lowering costs and increasing satisfaction.
- Focuses on changes to processes that are already out of control.
- The Six Sigma model is an improvement method that uses the DMAIC approach (Define, Measure, Analyze, Improve, Control).
- Most DMAIC lifecycles add 1 to 1.5 sigma to the process, with diminishing returns at the 5-sigma wall.
- Reduces defects and reacts to failures.
- Design for Six Sigma forces you to rely on data, not intuition, and is essential to reach 6 sigma (scrap the process, start new).
- SIPOC (Suppliers, Inputs, Process, Outputs, Customers) is the most important tool in the Six Sigma process.
  - Read from the left: the process wants suppliers to provide inputs that satisfy the requirements.
  - Read from the right: the customers want the process to provide outputs that satisfy the requirements.

Quality Management Systems (QMS), focusing on how they generally impact organizational processes and culture, including ISO 9000, ISO 14000, and Baldrige
- A QMS is what takes us from vision and mission down through all levels of the organization to decisions such as which control charts to build.
- ISO focuses on how to control processes:
  1. Say what you do (document processes)
  2. Do what you say (follow the documentation)
  3. Do it well (be consistent)
  4. Provide evidence that you did it well (audit results and improve processes)
- ISO 9000: general, process-oriented approach.
- ISO 14000: adds the environmental aspect.
- Baldrige is not a quality tool; it is a measure of outputs and of the management system, not a measure of products or of how quality is measured/controlled.
- Baldrige categories: 1) Leadership, 2) Strategic Planning, 3) Customer Focus, 4) Measurement, Analysis, and Knowledge Management, 5) Workforce Focus, 6) Operations Focus, 7) Results

Why an organization might benchmark itself against different types of other organizations; and how a benchmarking study might differ when an external standard is selected for comparison

How Dashboards and Scorecards differ in how and when they are used in the QMS
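The RPN scoring and thresholding described in the FMEA notes can be sketched as follows. The function names, the example failure modes, and the threshold of 100 are all illustrative assumptions, not part of any standard:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number = Severity x Occurrence x Detection, each typically scored 1-10."""
    return severity * occurrence * detection

def prioritize(failure_modes, threshold=100):
    """Return (name, RPN) pairs above the threshold, highest risk first.
    failure_modes: dict of failure-mode name -> (S, O, D) tuple."""
    scored = {name: rpn(*sod) for name, sod in failure_modes.items()}
    return sorted(((n, r) for n, r in scored.items() if r > threshold),
                  key=lambda item: item[1], reverse=True)
```

Sorting the surviving RPNs descending is effectively the Pareto prioritization the notes mention: attack the vital few failure modes with the highest scores first.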


Tips:
- Answer the questions. Don't repeat them.
- Show as much of your thought process as possible.
- Clearly label all diagram and spreadsheet components.
- If you are asked to interpret or compare a concept, don't respond by defining it.
- If you are asked to describe how you would do something, answer in terms of the theories and methods taught in this class, not the Excel or other tool functions and features you would use.