Statistical process control (SPC) involves using statistical techniques to measure and analyze the variation in processes. Most often used for manufacturing processes, the intent of SPC is to monitor product quality and maintain processes to fixed targets. Statistical quality control refers to using statistical techniques for measuring and improving the quality of processes and includes SPC in addition to other techniques, such as sampling plans, experimental design, variation reduction, process capability analysis, and process improvement plans.

SPC is used to monitor the consistency of processes used to manufacture a product as designed. It aims to get and keep processes under control. No matter how good or bad the design, SPC can ensure that the product is being manufactured as designed and intended. Thus, SPC will not improve a poorly designed product's reliability, but it can be used to maintain the consistency of how the product is made and, therefore, of the manufactured product itself and its as-designed reliability.

A primary tool used for SPC is the control chart, a graphical representation of certain descriptive statistics for specific quantitative measurements of the manufacturing process. These descriptive statistics are displayed in the control chart in comparison to their "in-control" sampling distributions. The comparison detects any unusual variation in the manufacturing process, which could indicate a problem with the process. Several different descriptive statistics can be used in control charts, and there are several different types of control charts that can test for different causes, such as how quickly major vs. minor shifts in process means are detected. Control charts are also used with product measurements to analyze process capability and for continuous process improvement efforts.
Typical charts and analyses used to monitor and improve manufacturing process consistency and capability (produced with Minitab statistical software).
• Provides surveillance and feedback for keeping processes in control
• Signals when a problem with the process has occurred
• Detects assignable causes of variation
• Accomplishes process characterization
• Reduces the need for inspection
• Monitors process quality
• Provides a mechanism to make process changes and track the effects of those changes
• Once a process is stable (assignable causes of variation have been eliminated), provides process capability analysis with comparison to the product tolerance
• All forms of SPC control charts
  o Variable and attribute charts
  o Average (X̄), range (R), standard deviation (s), Shewhart, CuSum, combined Shewhart-CuSum, exponentially weighted moving average (EWMA)
• Selection of measures for SPC
• Process and machine capability analysis (Cp and Cpk)
• Process characterization
• Variation reduction
• Experimental design
• Quality problem solving
• Cause and effect diagrams
Statistical process control, process characterization, and process capability analysis for Sandia's neutron tube manufacturing process
2. Mention the seven basic Quality control tools. Describe briefly the Cause and Effect Diagram
In 1950, the Japanese Union of Scientists and Engineers (JUSE) invited legendary quality guru W. Edwards Deming to go to Japan and train hundreds of Japanese engineers, managers and scholars in statistical process control. Deming also delivered a series of lectures to Japanese business managers on the subject, and during his lectures he would emphasise the importance of what he called the "basic tools" that were available for use in quality control. One of the members of the JUSE was Kaoru Ishikawa, at the time an associate professor at the University of Tokyo. Ishikawa had a desire to "democratise quality": that is to say, he wanted to make quality control comprehensible to all workers, and, inspired by Deming's lectures, he formalised the Seven Basic Tools of Quality Control. Ishikawa believed that 90% of a company's problems could be improved using these seven tools, and that, with the exception of Control Charts, they could easily be taught to any member of the organisation. This ease of use, combined with their graphical nature, makes statistical analysis easier for all. The seven tools are:
• Cause and Effect Diagrams
• Pareto Charts
• Flow Charts
• Check Sheets
• Scatter Plots
• Control (Run) Charts
• Histograms
What follows is a brief overview of each tool.
Cause and Effect Diagrams
Also known as Ishikawa and Fishbone Diagrams
First used by Ishikawa in the 1940s, they are employed to identify the underlying symptoms of a problem, or "effect", as a means of finding the root cause. The structured nature of the method forces the user to consider all the likely causes of a problem, not just the obvious ones, by combining brainstorming techniques with graphical analysis. It is also useful in unravelling the convoluted relationships that may, in combination, drive the problem. The basic Cause and Effect Diagram places the effect at one end. The causes feeding into it are then identified, via brainstorming, by working backwards along the "spines" (sometimes referred to as "vertebrae"), as in the diagram below:
Basic Cause and Effect Diagram
For more complex process problems, the spines can each be allocated a category and the causes/inputs of each then identified. There are several standard sets of categorisations that can be used, but the most common is Material, Machine/Plant, Measurement/Policies, Methods/Procedures, Men/People and Environment, easily remembered as the "5 M's and an E", as shown below:
Process Cause and Effect Diagram Each spine can then be further sub-divided, as necessary, until all the inputs are identified. The diagram is then used to highlight the causes that are most likely a contributory factor to the problem/effect, and these can be investigated for inefficiencies/optimization.
Control (Run) Charts
Dating back to the work of Shewhart and Deming, there are several types of Control Chart. They are reasonably complex statistical tools that measure how a process changes over time. By plotting this data against pre-defined upper and lower control limits, it can be determined whether the process is consistent and under control, or whether it is unpredictable and therefore out of control. The type of chart to use depends upon the type of data to be measured, i.e. whether it is attribute or variable data. The most frequently used Control Chart is a Run Chart, which is suitable for both types of data. They are useful in identifying trends in data over long periods of time, thus identifying variation. Data is collected and plotted over time with the upper and lower limits set (from past performance or statistical analysis) and the average identified, as in the diagram below.
Example of a Run Chart
Pareto Charts
Based upon the Pareto Principle, which states that 80% of a problem is attributable to 20% of its causes, or inputs, a Pareto Chart organises and displays information in order to show the relative importance of various problems or causes of problems. It is a vertical bar chart with items organised in order from the highest to the lowest, relative to a measurable effect: i.e. frequency, cost, time. A Pareto Chart makes it easier to identify where the greatest possible improvement gains can be achieved. By showing the highest incidences or frequencies first and relating them to the overall percentage for the samples, it highlights what is known as the "vital few". Factors are then prioritised, and effort focused upon them.
An example of a Pareto Chart
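As a minimal sketch of the ordering and cumulative-percentage calculation behind a Pareto Chart, consider the following Python snippet; the complaint categories and counts here are invented purely for illustration:

```python
# Hypothetical complaint counts per category (invented data).
complaints = {"documents": 120, "delivery": 45, "packaging": 20,
              "billing": 10, "other": 5}

# Order the categories from highest to lowest frequency.
ordered = sorted(complaints.items(), key=lambda kv: kv[1], reverse=True)

# A running cumulative percentage identifies the "vital few".
total = sum(complaints.values())
cumulative = 0
for category, count in ordered:
    cumulative += count
    print(f"{category:10s} {count:4d} {100 * cumulative / total:6.1f}%")
```

With these invented numbers, "documents" alone accounts for 60% of all complaints, so improvement effort would be focused there first.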
Scatter Diagrams
A Scatter Diagram, or Chart, is used to identify whether there is a relationship between two variables. It does not prove that one variable directly affects the other, but it is highly effective in confirming that a relationship exists between the two. It is a graphical more than a statistical tool. Points are plotted on a graph with the two variables as the axes. If the points form a narrow "cloud", then there is a direct correlation. If there is no discernible pattern, or a wide spread, then there is little or no correlation. If one variable increases as the other increases, i.e. the cloud extends at roughly 45 degrees from the point where the x and y axes cross, then they are said to be positively correlated. If one variable decreases as the other increases, then they are said to be negatively correlated. These are linear correlations; variables may also be non-linearly correlated. Below is an example of a Scatter Diagram where the two variables have a positive linear correlation.
An example of a Scatter Diagram
Histograms
Like Pareto Charts, Histograms are a form of bar chart. They are used to measure the frequency distribution of data that is commonly grouped together in ranges or "bins". Most commonly they are used to discern the frequency of occurrence in long lists of data. For instance, in the list 2, 2, 3, 3, 3, 3, 4, 4, 5, 6, the number 3 occurs most frequently. However, if that list comprised several hundred data points, or more, it would be difficult to ascertain the frequency. Histograms provide an effective visual means of doing so. "Bins" are used when the data is spread over a wide range. For example, in the list 3, 5, 9, 12, 14, 17, 20, 24, 29, 31, 45, 49, instead of looking for the occurrence of each number from 1 to 49, which would be meaningless, it is more useful to group the numbers such that the frequencies of occurrence of the ranges 1-10, 11-20, 21-30, 31-40 and 41-50 are measured. These ranges are called bins. Histograms are very useful in discerning the distribution of data and therefore patterns of variation. They monitor the performance of a system and present it in a graphical way which is far easier to understand and read than a table of data. Once a problem has been identified, they can then also be used to check that the solution has worked.
An example of a Histogram
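The binning example from the paragraph above can be reproduced in a few lines of Python, using only the list given in the text:

```python
# The example data list from the text.
data = [3, 5, 9, 12, 14, 17, 20, 24, 29, 31, 45, 49]

# Bin edges for the ranges 1-10, 11-20, 21-30, 31-40 and 41-50.
bins = [(1, 10), (11, 20), (21, 30), (31, 40), (41, 50)]

# Count how many data points fall in each bin.
frequencies = {f"{lo}-{hi}": sum(lo <= x <= hi for x in data)
               for lo, hi in bins}
print(frequencies)
```

The resulting counts (3, 4, 2, 1, 2) are the bar heights a histogram of this data would display.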
Flow Charts
A flow chart is a visual representation of a process. It is not statistical, but is used to piece together the actual process as it is carried out, which quite often varies from how the process owner imagines it is. Seeing it visually makes identifying both inefficiencies and potential improvements easier. A series of shapes is used to depict every step of the process; mental decisions are captured as well as physical actions and activities. Arrows depict the movement through the process. Flow charts vary in complexity, but when used properly can prove useful for identifying non-value-adding or redundant steps, the key parts of a process, and the interfaces between other processes. Problems with flow charts occur when the desired process is depicted instead of the actual one. For this reason, it is better to brainstorm the process with a group to make sure everything is captured.
Check Sheets
Also known as Data Collection Sheets and Tally Charts
Like flow charts, check sheets are non-statistical and relatively simple. They are used to capture data in a manual, reliable, formalised way so that decisions can be made based on facts. As the data is collected, it becomes a graphical representation of itself. Areas for improvement can then be identified, either directly from the check sheet, or by feeding the data into one of the other seven basic tools. Simply, a table is designed to capture the incidences of the variable(s) to be measured. Tick marks are then manually put in the relevant boxes. As the ticks build up, they give a graphical representation of the frequency of incidences. Below is a typical example.
An example of a Check Sheet
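The tick-mark tally a check sheet produces can be sketched programmatically; the defect names below are invented for illustration:

```python
from collections import Counter

# Hypothetical defect observations recorded on the shop floor
# (names and counts are invented for illustration).
observations = ["scratch", "dent", "scratch", "misalignment",
                "scratch", "dent", "scratch"]

# Each tick mark on a check sheet is one observation in its row;
# Counter produces the same tally programmatically.
tally = Counter(observations)
for defect, ticks in tally.most_common():
    print(f"{defect:14s} {'|' * ticks}  ({ticks})")
```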
3. What are the causes of variation in a process? Differentiate between ‘accuracy’ and ‘precision’?
Accuracy means getting a result that is close to the real answer. Precision means getting a similar result every time you try. Think of shooting at a target: Being accurate means you hit the bull's eye. Being precise means hitting the same spot on the target every time.
Accuracy versus precision: the target analogy
High accuracy, but low precision
High precision, but low accuracy
Accuracy is the degree of veracity, while in some contexts precision may mean the degree of reproducibility. The analogy used here to explain the difference between accuracy and precision is the target comparison. In this analogy, repeated measurements are compared to arrows that are shot at a target. Accuracy describes the closeness of arrows to the bullseye at the target center. Arrows that strike closer to the bullseye are considered more accurate. The closer a system's measurements are to the accepted value, the more accurate the system is considered to be.

To continue the analogy, if a large number of arrows are shot, precision is the size of the arrow cluster. (When only one arrow is shot, precision is the size of the cluster one would expect if the shot were repeated many times under the same conditions.) When all arrows are grouped tightly together, the cluster is considered precise, since they all struck close to the same spot, even if not necessarily near the bullseye. The measurements are precise, though not necessarily accurate.

However, it is not possible to reliably achieve accuracy in individual measurements without precision: if the arrows are not grouped close to one another, they cannot all be close to the bullseye. (Their average position might be an accurate estimate of the bullseye, but the individual arrows are inaccurate.) See also circular error probable for an application of precision to the science of ballistics.
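The target analogy can be made numerical: treat each shot as an (x, y) coordinate with the bullseye at the origin, measure accuracy as the distance of the cluster centre from the bullseye, and precision as the spread of the shots around their own centre. The coordinates below are invented to show a "high precision, low accuracy" case:

```python
from math import sqrt

# Hypothetical arrow strike coordinates; bullseye at (0, 0).
# Tightly grouped, but centred well away from the bullseye.
shots = [(2.1, 1.9), (2.0, 2.2), (1.9, 2.0), (2.2, 2.1)]

n = len(shots)
mean_x = sum(x for x, _ in shots) / n
mean_y = sum(y for _, y in shots) / n

# Accuracy: how far the cluster centre is from the bullseye.
accuracy_error = sqrt(mean_x ** 2 + mean_y ** 2)

# Precision: average spread of the shots around their own centre.
precision_spread = sqrt(
    sum((x - mean_x) ** 2 + (y - mean_y) ** 2 for x, y in shots) / n
)

print(accuracy_error)    # large: the cluster is far from the bullseye
print(precision_spread)  # small: the shots are tightly grouped
```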
4. What is a Control chart? Describe the structure and construction of control chart.
Ans: Control Charts for Variables
Control charts based upon measurements of quality characteristics are called control charts for variables. Control charts for variables are often found to be a more economical means of controlling quality than control charts based on attributes. The variable control charts most commonly used are average or X̄-charts, range or R-charts, and σ-charts or standard deviation charts.
Benefits of Control Charts
• Help you recognize and understand variability and how to control it
• Identify "special causes" of variation and changes in performance
• Keep you from fixing a process that is varying randomly within control limits (that is, when no "special causes" are present); if you want to improve such a process, you have to objectively identify and eliminate the root causes of the process variation
• Assist in the diagnosis of process problems
• Determine if process improvement efforts are having the desired effects
Objectives of the Control Charts
Control charts are based on statistical techniques. In general, control charts for variables, either X̄ and R charts or X̄ and σ charts, are used for some or all of the following purposes.
1. X̄ and R charts (or X̄ and σ charts) are used in combination for control of a process.
The X̄-chart shows the centering of the process, i.e., it shows the variation in the averages of samples. It is the most commonly used variables chart. The R-chart shows the uniformity or consistency of the process, i.e., it shows the variations in the ranges of samples. It is a chart for a measure of spread. The σ-chart shows the variation of the process.
2. The control charts are used to determine whether a given process can meet the existing specifications without a fundamental change in the production process. In other words, they tell whether the process is in control and, if so, at what dispersion.
3. To secure information to be used in establishing or changing production procedures. Such changes may include the elimination of assignable causes of variation, which may be called for whenever the control chart makes it clear that specifications cannot be met with present methods.
Example: When both upper and lower values are specified for a quality characteristic, as in the case of dimensional tolerances, if the basic variability of the process is so great that it is impossible to make all the products within the specification limits, and the specification cannot be changed, then the alternatives are: (a) to make a fundamental change in the production process that will reduce the basic variability, or (b) to suffer and sort out the good (non-defective) products from the bad (defective) products.
4. To secure information on when it is necessary to widen the tolerances. Sometimes the control chart shows so much basic variability that some product is sure to be made outside the tolerances; a review of the situation may show that the tolerances are tighter than necessary for the functioning of the product. The appropriate action will then be to change the specifications to widen the tolerances for the sake of economy.
5. To secure information to be used in establishing or changing inspection procedures or acceptance procedures or both.
6. To provide a basis for current decisions on acceptance or rejection of manufactured or purchased product. It is possible to reduce inspection costs by using the control charts for variables for acceptance.
7. To provide a basis for current decisions during production as to when to hunt for causes of variation and take action to correct them, and when to leave a process alone.
8. To familiarize personnel with the use of the control charts.

5.3.2 Starting the control charts
Making and recording measurements. The information given by a control chart is influenced by variations in quality as well as variations in measurement. Any measuring system will have its own inherent variability, which should not be increased by assignable causes such as errors in reading or recording.

5.3.3 Calculation procedure
1. Calculate the average X̄ and range R for each sub-group. A good number of samples of manufactured items are collected at random, at different intervals of time, and their quality characteristics (say diameter, thickness, weight, length, etc.) are measured. For each sample the mean value and the range are calculated. For example, if a sample contains 5 items whose dimensions are X1, X2, X3, X4 and X5, the sample average is

X̄ = (X1 + X2 + X3 + X4 + X5) / 5

The range is computed by subtracting the lowest value from the highest value:

R = highest value - smallest value

2. Calculate the grand average X̿ and the average range R̄. After calculating the average and range of each sub-group, the next step is to find X̿ and R̄, where X̿ is the average of the X̄ values for the sub-groups (the average of averages), i.e., the sum of the X̄ values divided by the number of sub-groups:

X̿ = ΣX̄ / N, where N = number of sub-groups

Similarly, the average range R̄ is the sum of the ranges of the sub-groups divided by the number of sub-groups:

R̄ = ΣR / N
3. Calculation of 3-sigma limits on the control chart for the X̄ chart. Tables A, B, C and D may be used to obtain the relevant factors A, A1, A2, D1, D2, D3 and D4 for a particular sample size, according to the method used. If Table A is to be used, the next step is to estimate σ' (the standard deviation of the universe from which the samples are taken). From Table A find the value of the factor d2 for the particular sample size. Then

σ' = R̄ / d2

and the 3-sigma limits for the X̄ chart are X̿ ± 3σ'/√n.

To shorten the calculation of control limits from R̄, the factor 3 / (d2√n), the multiplier of R̄, has been computed for each value of n from 2 to 20 and tabulated in Table B. This factor is designated A2. The formulae for the 3-sigma control limits on X̄ charts then become:

UCL = X̿ + A2 R̄
LCL = X̿ - A2 R̄

If the control limits are to be calculated from s̄ rather than from R̄, where s̄ is the average of the sub-group standard deviations (the sum of the s values divided by N, the number of sub-groups), then σ' is estimated using the c2 factor from Table A:

σ' = s̄ / c2

To shorten the calculation of control limits from s̄, the factor 3 / (c2√n), the multiplier of s̄, has been computed for each value of n from 2 to 25, and thence by 5's to 100, and tabulated in Table C. This factor is designated A1. The formulae for the 3-sigma control limits using factor A1 are:

UCL = X̿ + A1 s̄
LCL = X̿ - A1 s̄

For those situations where it is desired to calculate control limits directly from known or standard values of σ' and X̄', the factor 3/√n has been computed and tabulated in Table D. This factor is designated A. The formulae for the 3-sigma control limits using factor A are:

UCL = X̄' + A σ'
LCL = X̄' - A σ'
5.3.4 Calculate the Control Limits for the R chart
The control limits on the chart for ranges (R chart) are given by:

UCLR = D4 R̄
LCLR = D3 R̄

Factors D4 and D3 are given in Table B. For calculating the control limits on the R chart directly from known or assumed values of σ', the control limits are given by:

UCLR = D2 σ'
LCLR = D1 σ'

where the factors D1 and D2 can be obtained from Table D for a particular sample size.

5.3.5 Plot the X̄ and R charts
While plotting the X̄ chart, the central line should be drawn as a solid horizontal line at X̿. The upper and lower control limits for the X̄ chart should be drawn as dotted horizontal lines at the computed values. Similarly, for the R chart the central line should be drawn as a solid horizontal line at R̄. The upper control limit should be drawn as a dotted horizontal line at the computed value of UCLR. If the sub-group size is seven or more, the lower control limit should be drawn as a dotted horizontal line at LCLR; however, if the sub-group size is six or less, the lower control limit for R is zero. Plot the averages of the sub-groups on the X̄ chart, in the order collected, and the ranges on the R chart, which should be placed below the X̄ chart so that the sub-groups correspond to one another in both charts. Points outside the control limits are indicated with a cross ( × ) on the X̄ chart, and with a circle ( О ) on the R chart.
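The calculation procedure above (sub-group averages and ranges, grand average, average range, and the A2/D3/D4 shortcut limits) can be sketched in Python. The measurement data here is invented; A2 = 0.577, D3 = 0 and D4 = 2.114 are the standard Table B factors for sub-groups of size n = 5:

```python
# Hypothetical sub-groups of 5 measurements each.
subgroups = [
    [5.02, 5.01, 4.99, 5.00, 5.03],
    [4.98, 5.00, 5.02, 5.01, 4.99],
    [5.00, 5.04, 4.97, 5.01, 5.00],
]

A2, D3, D4 = 0.577, 0.0, 2.114  # standard factors for n = 5

xbars = [sum(g) / len(g) for g in subgroups]   # sub-group averages
ranges = [max(g) - min(g) for g in subgroups]  # sub-group ranges

grand_avg = sum(xbars) / len(xbars)            # X-double-bar
avg_range = sum(ranges) / len(ranges)          # R-bar

# 3-sigma limits via the Table B shortcut factors.
ucl_x = grand_avg + A2 * avg_range
lcl_x = grand_avg - A2 * avg_range
ucl_r = D4 * avg_range
lcl_r = D3 * avg_range  # zero, since n <= 6

print(round(ucl_x, 4), round(lcl_x, 4), round(ucl_r, 4), round(lcl_r, 4))
```

In practice, many more sub-groups (20-25 is a common rule of thumb) would be collected before fixing the limits.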
Table A: Factors for estimating σ' from R̄ or s̄

n (observations in sub-group)    d2       c2
 2                               1.128    0.5642
 3                               1.693    0.7236
 4                               2.059    0.7979
 5                               2.326    0.8407
 6                               2.534    0.8686
 7                               2.704    0.8882
 8                               2.847    0.9027
 9                               2.970    0.9139
10                               3.078    0.9227
11                               3.173    0.9300
12                               3.258    0.9359
13                               3.336    0.9410
14                               3.407    0.9453
15                               3.472    0.9490
16                               3.532    0.9523
17                               3.588    0.9551
18                               3.640    0.9576
19                               3.689    0.9599
20                               3.735    0.9619
21                               3.778    0.9638
22                               3.819    0.9655
23                               3.858    0.9670
24                               3.895    0.9684
25                               3.931    0.9696
30                               4.086    0.9748
35                               4.213    0.9784
40                               4.322    0.9811
45                               4.415    0.9832
50                               4.498    0.9849
55                               4.573    0.9863
60                               4.639    0.9874
65                               4.699    0.9884
70                               4.755    0.9892
75                               4.806    0.9900
80                               4.854    0.9906
85                               4.898    0.9912
90                               4.939    0.9916
95                               4.978    0.9921
100                              5.015    0.9925

Estimate of σ' = R̄/d2 or s̄/c2
These factors assume sampling from a normal universe.

Table B: Factors for determining the 3-sigma control limits for X̄ and R charts from R̄

Upper Control Limit for X̄ = UCLX̄ = X̿ + A2 R̄
Lower Control Limit for X̄ = LCLX̄ = X̿ - A2 R̄
Upper Control Limit for R = UCLR = D4 R̄
Lower Control Limit for R = LCLR = D3 R̄
All factors in Table B are based on the normal distribution.

Table C: Factors for determining the 3-sigma control limits for X̄ and σ charts from s̄
All factors in Table C are based on the normal distribution.

Table D: Factors for determining the 3-sigma control limits for X̄, R and s charts from σ'
5. The following numbers indicate the number of defectives in 20 samples, each containing 2000 items: 425, 430, 216, 341, 225, 322, 280, 306, 337, 305, 356, 402, 216, 264, 126, 409, 193, 280, 389, 326. Calculate the values for the central line and control limits for a p chart and construct the control chart.
Ans: Total items inspected = 20 × 2000 = 40,000

Total number of defectives in the 40,000 items: ∑d = 425 + 430 + 216 + … + 326 = 6148

p̄ = 6148 / 40000 = 0.1537

Therefore, the central line is CL = p̄ = 0.1537

The 3-sigma control limits for a p chart are p̄ ± 3√(p̄(1 − p̄)/n), with n = 2000:

3√(0.1537 × 0.8463 / 2000) = 0.0242

UCL = 0.1537 + 0.0242 = 0.1779
LCL = 0.1537 − 0.0242 = 0.1295
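The p-chart arithmetic for this problem can be checked directly, using the defective counts given in the question:

```python
from math import sqrt

# Data from the problem: 20 samples of 2000 items each.
defectives = [425, 430, 216, 341, 225, 322, 280, 306, 337, 305,
              356, 402, 216, 264, 126, 409, 193, 280, 389, 326]
n = 2000

# Average fraction defective (central line of the p chart).
p_bar = sum(defectives) / (len(defectives) * n)

# Standard deviation of the sample fraction defective.
sigma_p = sqrt(p_bar * (1 - p_bar) / n)

ucl = p_bar + 3 * sigma_p
lcl = p_bar - 3 * sigma_p

print(round(p_bar, 4), round(ucl, 4), round(lcl, 4))
```

The computed values match the hand calculation above: CL = 0.1537, UCL = 0.1779, LCL = 0.1295.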
6. Write a note on the following: a. Pareto Chart
The Pareto chart is simply a frequency distribution (or histogram) of attribute data arranged by category. Example #1 shows how many customer complaints were received in each of five categories. Example #2 takes the largest category, "documents," from Example #1, breaks it down into six categories of document-related complaints, and shows cumulative values. If all complaints cause equal distress to the customer, working on eliminating document-related complaints would have the most impact, and of those, working on quality certificates should be most fruitful.
Excerpted from Nancy R. Tague’s The Quality Toolbox, Second Edition, ASQ Quality Press, 2004, pages 376-378. Note that the Pareto chart does not automatically identify the most important defects, but rather only those that occur most frequently. In general, the Pareto chart is one of the most useful of the “magnificent seven”. Its applications to quality improvement are limited only by the ingenuity of the analyst.
b. Scatter diagram
The Scatter Diagram is another quality tool that can be used to show the relationship between "paired data", and can provide more useful information about a production process. What is meant by "paired data"? The term refers to a "cause-and-effect" relationship between two kinds of data; it may also refer to a relationship between one cause and another, or between one cause and several others. For example, you could consider the relationship between an ingredient and the product hardness; between the cutting speed of a blade and the variations observed in the length of parts; or the relationship between the illumination levels on the production floor and the mistakes made in quality inspection of the product produced. To illustrate this relationship, below are a few examples of scatter diagrams indicating the relationships between paired data. We will discuss how to interpret these charts, and then we will learn how to make one with paper and pencil.
No Correlation
In the above examples, you can see that the dots, which are actually data points, have various relationships. Strong correlation indicates that there is a close relationship between the data that is paired together. In the middle diagram, you see a slightly different pattern, indicating that in some cases there is a relationship and in other cases there is none. The last diagram on the right indicates that there is no correlation, or no relationship at all, between the paired data. In the first diagram on the left, you would be able to determine that one measurement has a strong relationship to the other, and thus infer that the two items are closely associated. In the last diagram on the right, you would be able to determine that there is absolutely no relationship between the two items, and you would need to review the Cause-and-Effect Diagram or brainstorming session to try to find another item that the primary item you measured might have a relationship to. The middle diagram is the one that is going to cause you some grief. This particular diagram is more difficult to interpret, and actually requires a more detailed investigation into which data points correlate and which have no relationship at all. Then you need to try to determine why certain ones reveal a relationship and others do not.
How To Make A Scatter Diagram
The Basic Scatter Diagram Layout Once again, it is best if you have graph paper to make your diagram with. However, I am going to show you how to do this with a spreadsheet form, and at the bottom of this lesson, there is a blank spreadsheet that you can use for the production floor. On gridline or graph paper:
STEP #1 - Draw an "L" form just like you did for the Pareto diagram (see the figure below). Make your scale units even multiples, such as 10, 20, etc., so as to have an even scale system.
STEP #2 - On the Horizontal axis (Known as the "X" axis, from Left to Right) you place the Independent or "cause" variable.
STEP #3 - On the Vertical axis (Known as the "Y" axis, from Bottom to Top) you place the Dependent or "effect" variable.
STEP #4 - Plot your data points at the intersection of your data plots of the X
and Y values. For example, if X = 5 and Y = 2, go right 5 spaces and then up 2 spaces to plot the point.
Linear Relationship: does the data "line up"? Linearity has four parameters:
1. Correlation - Measures how well the data line up. The more the data resemble a straight line, the higher their correlation to each other.
2. Slope - Measures the steepness of the data. The steeper the slope, assuming the correlation is good, the greater the importance of the relationship: a change in the "X" variable will have a larger impact on the "Y" variable.
3. Direction - The "X" variable can have a positive or a negative impact on the "Y" variable. In a negative correlation, as one factor goes up, the other goes down; in a positive correlation, both factors move in the same direction. In the graph examples below, you can see that the positive correlation moves from the lower left toward the upper right. The negative correlation moves from the lower right toward the upper left.
4. Y Intercept - Where a line drawn through the data crosses the "Y" axis. For a positive correlation, it represents the minimum "Y" value; for a negative correlation, it represents the maximum "Y" value.
You can see that a data pattern moving from the bottom left upward to the top right indicates a positive correlation between the data. This is an upward-sloping data grouping.
Conversely, a data pattern moving from the top left downward to the bottom right indicates a negative correlation between the data, and hence a downward-sloping data grouping.
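The four linearity parameters described above (correlation, slope, direction, Y intercept) can all be computed from the paired data with a simple least-squares fit; the data points here are invented to show a strong positive correlation:

```python
from math import sqrt

# Hypothetical paired data (invented for illustration).
xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Sums of squared deviations and cross-products.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)
syy = sum((y - mean_y) ** 2 for y in ys)

slope = sxy / sxx                    # steepness of the relationship
intercept = mean_y - slope * mean_x  # where the line crosses the Y axis
correlation = sxy / sqrt(sxx * syy)  # sign gives the direction;
                                     # +1 = perfect positive correlation

print(round(slope, 3), round(intercept, 3), round(correlation, 3))
```

A correlation near +1 with a positive slope corresponds to the upward-sloping "strong positive correlation" pattern; a correlation near 0 corresponds to the "no correlation" cloud.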
ASSIGNMENT- Set 2
1. Explain the concept of process with an example. Write a brief note on SIPOC.
A SIPOC diagram is a type of process map typically used in Lean Six Sigma projects to identify the primary elements of a process. It provides a macro view that brings together Suppliers, Inputs, Process, Outputs, and Customers. What follows is a simple and effective method for building a SIPOC with your team.

Preparation. Gather all of your supplies and make sure you have ample space for the team to work. Hang your paper on the wall and write the words "Suppliers", "Inputs", "Process", "Outputs", and "Customers" along the top of the paper, leaving ample room below for plenty of notes. Give each team member a stack of post-its and markers.

Process. Resist the urge to start on the left with your suppliers. Instead, start with the process first. Use post-it notes to create a high-level process map, sticking to no more than 5-7 steps. Make sure the team agrees that you have created an accurate representation of the process. Once you are satisfied, move on to the Outputs.

Outputs. Have the team brainstorm the outputs of the process. Each output should be written out and posted to the wall. Outputs of the process don't just include the product or service you are delivering, and not all are desirable. They can include paperwork, approvals, scrap, and just about anything else that results from your process.

Customers. In this step, you'll want to look at the outputs of the process and determine who your customers are. In most cases, the customer isn't the person who will eventually buy your product or service, but the recipient of each output of your process. Think about where each output goes and you will know who the customer is for your process.

Inputs. For the inputs, review each step of the process map to determine what is necessary to complete it. Inputs can include materials, people, machines, IT systems, information, or anything else that is necessary for the process to run. Take some extra time with the inputs and write down everything you can think of.

Suppliers. Finally, list all of the suppliers who provide your inputs. These might include the company that supplies your widgets, the team that performed previous steps, or the IT department. Don't forget your customers; they are often suppliers to a process as well.
2. What is Normal distribution? What are the properties of Normal distribution?
The Normal Distribution (Bell Curve)
In many natural processes, random variation conforms to a particular probability distribution known as the normal distribution, which is the most commonly observed probability distribution. The mathematicians de Moivre and Laplace used this distribution in the 1700s. In the early 1800s, the German mathematician and physicist Carl Gauss used it to analyze astronomical data, and it consequently became known as the Gaussian distribution among the scientific community.
The shape of the normal distribution resembles that of a bell, so it sometimes is referred to as the "bell curve". Such a curve is centered at the mean of the data set. In general, the normal distribution curve is described by the probability density function

f(x) = (1 / (σ√(2π))) · exp(−(x − μ)² / (2σ²))

where μ is the mean and σ is the standard deviation.
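This density can be evaluated directly in code. A minimal Python sketch (the function name and default parameters, giving the standard normal, are our own choices):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Probability density of the normal distribution at x."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# The peak of the standard normal curve sits at the mean:
print(normal_pdf(0.0))                       # 1/sqrt(2*pi) ~ 0.3989
# The curve is symmetric about the mean:
print(normal_pdf(1.5) == normal_pdf(-1.5))   # True
```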
Bell Curve Characteristics The bell curve has the following characteristics:
• Symmetric
• Unimodal
• Extends to +/- infinity
• Area under the curve = 1
Completely Described by Two Parameters
The normal distribution can be completely specified by two parameters:
• mean (μ)
• standard deviation (σ)
If the mean and standard deviation are known, then one essentially knows as much as if one had access to every point in the data set.
The Empirical Rule
The empirical rule is a handy quick estimate of the spread of the data given the mean and standard deviation of a data set that follows the normal distribution. The empirical rule states that for a normal distribution:
• 68% of the data will fall within 1 standard deviation of the mean
• 95% of the data will fall within 2 standard deviations of the mean
• Almost all (99.7%) of the data will fall within 3 standard deviations of the mean
Note that these values are approximations. For example, according to the normal curve probability density function, 95% of the data will fall within 1.96 standard deviations of the mean; 2 standard deviations is a convenient approximation.
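These percentages follow from the normal cumulative distribution, which can be computed with the standard error function. A small Python sketch (the helper name is ours):

```python
import math

def within_k_sigma(k):
    """P(|X - mu| < k*sigma) for a normally distributed X."""
    return math.erf(k / math.sqrt(2.0))

for k in (1, 2, 3):
    print(f"within {k} sigma: {within_k_sigma(k):.4%}")
# within 1 sigma: 68.2689%
# within 2 sigma: 95.4500%
# within 3 sigma: 99.7300%

# The exact multiplier for 95% is 1.96, as noted above:
print(f"within 1.96 sigma: {within_k_sigma(1.96):.4%}")   # 95.0004%
```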
Normal Distribution and the Central Limit Theorem
The normal distribution is a widely observed distribution. Furthermore, it frequently can be applied to situations in which the data is distributed very differently. This extended applicability is possible because of the central limit theorem, which states that regardless of the distribution of the population, the distribution of the means of random samples approaches a normal distribution for a large sample size.
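The theorem is easy to see in a small simulation. A Python sketch (the exponential population, sample size, and seed are arbitrary illustrative choices):

```python
import random
import statistics

random.seed(42)

def sample_means(n, num_samples):
    """Means of num_samples random samples of size n drawn from a
    clearly non-normal population: the exponential distribution Exp(1)."""
    return [statistics.fmean(random.expovariate(1.0) for _ in range(n))
            for _ in range(num_samples)]

means = sample_means(n=50, num_samples=5000)

# Exp(1) has mean 1 and standard deviation 1, so by the central limit
# theorem the sample means should be roughly normal with mean 1 and
# standard deviation 1/sqrt(50) ~ 0.141.
print(round(statistics.fmean(means), 3))
print(round(statistics.stdev(means), 3))
```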
Applications to Business Administration
The normal distribution has applications in many areas of business administration. For example:
• Modern portfolio theory commonly assumes that the returns of a diversified asset portfolio follow a normal distribution.
• In operations management, process variations often are normally distributed.
• In human resource management, employee performance sometimes is considered to be normally distributed.
The normal distribution often is used to describe random variables, especially those having symmetrical, unimodal distributions. In many cases, however, the normal
distribution is only a rough approximation of the actual distribution. For example, the physical length of a component cannot be negative, but the normal distribution extends indefinitely in both the positive and negative directions. Nonetheless, the resulting errors may be negligible or within acceptable limits, allowing one to solve problems with sufficient accuracy by assuming a normal distribution.
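The size of such errors can be checked directly. A Python sketch of the component-length example (the mean and standard deviation are hypothetical values chosen for illustration):

```python
import math

def normal_cdf(x, mu, sigma):
    """P(X <= x) for X ~ N(mu, sigma^2), via the complementary error function."""
    return 0.5 * math.erfc((mu - x) / (sigma * math.sqrt(2.0)))

# Suppose a component length is modeled as normal with mean 10 mm and
# standard deviation 1 mm. The model technically allows negative lengths,
# but the probability it assigns to them is utterly negligible:
p_negative = normal_cdf(0.0, mu=10.0, sigma=1.0)
print(p_negative)   # on the order of 1e-23
```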
5. What is meant by acceptance sampling? Explain the various quality indices for acceptance sampling plan.
Ans A. Advantages of sampling
1. Items which are subjected to destructive testing can be inspected only by sampling inspection.
2. The cost and time required for sampling inspection are much less than for 100% inspection.
3. The problem of inspection fatigue which occurs in 100% inspection is eliminated.
4. A smaller inspection staff is necessary.
5. There is less damage to products, because only a few items are subjected to handling during inspection.
6. The problem of monotony and inspector error introduced by 100% inspection is minimized.
7. The most important advantage of sampling inspection is that it exerts more effective pressure on quality improvement, since the rejection of an entire lot on the basis of a sample brings much stronger pressure for quality improvement than the rejection of individual articles.

Limitations of sampling
(1) Risk of making wrong decisions: In sampling inspection, since only a part of the lot is inspected, the sample may not always represent the exact picture prevailing in the lot, and hence there is a risk of making a wrong decision about the lot. This wrong decision can be made in two ways. Firstly, a really good lot (that is, one containing a smaller proportion of defectives than specified) may be rejected because the sample drawn happens to be bad. Secondly, a really bad lot (that is, one containing a greater proportion of defectives than specified) may be accepted because the sample drawn happens to be good. In the former case, the producer suffers the risk of his good lots being rejected, and hence the associated risk (chance) is called the producer's risk. In the latter case, the consumer runs the risk of accepting bad lots, and hence the associated risk is called the consumer's risk.
(2) The sample usually provides less information about the product than 100% inspection.
(3) Some extra planning and documentation are necessary. However, in scientific sampling plans these risks are quantified, and the sampling criteria are adjusted to balance these risks in the light of the economic factors involved. Although this last point is often mentioned as a disadvantage of acceptance sampling, proper design of an acceptance-sampling plan usually requires study of the actual level of quality required by the consumer. The resulting knowledge is often a useful input into the overall quality planning and engineering process. Thus, in many applications, it may not be a significant disadvantage.

The success of a sampling scheme depends upon the following factors:
i) Randomness of samples.
ii) Sample size.
iii) Quality characteristic to be tested.
iv) Acceptance criteria.
v) Lot size.

Ans B. Guidelines for using Acceptance Sampling
An acceptance-sampling plan is a statement of the sample size to be used and the associated acceptance or rejection criteria for sentencing individual lots. A sampling scheme is defined as a set of procedures consisting of acceptance-sampling plans in which lot sizes, sample sizes, and acceptance or rejection criteria, along with the amount of 100% inspection and sampling, are related. Finally, a sampling system is a unified collection of one or more acceptance-sampling schemes.

Table 9.1: Acceptance-Sampling Procedures

Objective | Attributes Procedure | Variables Procedure
Assure quality levels for consumer/producer | Select plan for specific OC curve | Select plan for specific OC curve
Maintain quality at a target | AQL system; MIL STD 105E, ANSI/ASQC Z1.4 | AQL system; MIL STD 414, ANSI/ASQC Z1.9
Assure average outgoing quality level | AOQL system; Dodge-Romig plans | AOQL system
Reduce inspection, with small sample sizes, good-quality history | Chain sampling | Narrow-limit gaging
Reduce inspection after good-quality history | Skip-lot sampling; double sampling | Skip-lot sampling; double sampling
Assure quality no worse than target | LTPD plan; Dodge-Romig plans | LTPD plan; hypothesis testing
The major types of acceptance-sampling procedures and their applications are shown in Table 9.1. In general, the selection of an acceptance-sampling procedure depends on both the objective of the sampling organization and the history of the organization whose product is sampled. Furthermore, the application of sampling methodology is not static; that is, there is a natural evolution from one level of sampling effort to another. For example, if we are dealing with a vendor who enjoys an excellent quality history, we might begin with an attributes sampling plan. As our experience with the vendor grows, and its good-quality reputation is proved by the results of our sampling activities, we might transition to a plan that requires less sampling, such as skip-lot sampling. Finally, after extensive experience with the vendor, and if its process capability is extremely good, we might stop all acceptance-sampling activities on the product. In another situation, where we have little knowledge of or experience with the vendor's quality-assurance efforts, we might begin with an attributes sampling plan that assures us that the quality of accepted lots is no worse than a specified target value. If this plan proves successful, and if the vendor's performance is satisfactory, we might transition from attributes to variables inspection, particularly as we learn more about the nature of the vendor's process. Finally, we might use the information gathered in variables sampling plans in conjunction with efforts directed at the vendor's manufacturing facility to assist in the installation of process control. A successful program of process control at the vendor level might improve the vendor's process capability to the point where inspection could be discontinued. These examples illustrate that there is a life cycle of application of acceptance-sampling techniques. Typically, we find that organizations with relatively new quality-assurance efforts place a great deal of reliance on acceptance sampling.
As their maturity grows and the quality organization develops, they begin to rely less on acceptance sampling and more on statistical process control and experimental design. Manufacturers try to improve the quality of their products by reducing the number of vendors from whom they buy their components, and by working more closely with the ones they retain. Once again, the key tool in this effort to improve quality is statistical process control. Acceptance sampling can be an important ingredient of any quality-assurance program; however, remember that it is an activity you should try to avoid doing. It is much more cost-effective to use statistically based process monitoring at the appropriate stage of the manufacturing process. Sampling methods can in some cases be a tool that you employ along the road to that ultimate goal.
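The producer's risk, consumer's risk, and the AOQL quality index mentioned in Table 9.1 can be made concrete for a hypothetical single-sampling plan. A hedged Python sketch using the binomial model (the plan parameters n and c and the AQL/LTPD quality levels are illustrative assumptions, not values from the text):

```python
from math import comb

def prob_accept(n, c, p):
    """P(accept lot): at most c defectives found in a random sample of
    n items, when the lot fraction defective is p (binomial model)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Hypothetical plan: sample n=50 items, accept the lot if <= c=2 defectives.
n, c = 50, 2

# Producer's risk: a good lot (p at the AQL) is rejected.
# Consumer's risk: a bad lot (p at the LTPD) is accepted.
aql, ltpd = 0.01, 0.08                       # illustrative quality levels
producer_risk = 1 - prob_accept(n, c, aql)   # ~0.014
consumer_risk = prob_accept(n, c, ltpd)      # ~0.226

# Average outgoing quality under rectifying inspection (rejected lots are
# 100% screened), large-lot approximation: AOQ(p) ~ Pa(p) * p.
# The AOQL is the worst-case AOQ over all incoming quality levels p.
aoql_p, aoql = max(((p / 1000, prob_accept(n, c, p / 1000) * p / 1000)
                    for p in range(1, 301)), key=lambda t: t[1])

print(f"producer's risk {producer_risk:.3f}, consumer's risk {consumer_risk:.3f}")
print(f"AOQL ~ {aoql:.4f} at incoming fraction defective ~ {aoql_p:.3f}")
```

Evaluating prob_accept over a range of p values traces out the plan's OC curve, which is how the "select plan for specific OC curve" entries in Table 9.1 are used in practice.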