

QUALITY ASSURANCE AND RELIABILITY LAB

Sub Code : 20IM5DLQAR    IA Marks : 50


Hours/week : 02 Exam Marks : 50

1. To test the Goodness of fit for the given quality characteristic using Uniform distribution

2. To test the Goodness of fit for the given quality characteristic using Binomial distribution

3. To test the Goodness of fit for the given quality characteristic using Poisson distribution

4. To test the Goodness of fit for the given quality characteristic using Normal distribution
5. Construction of control charts for attribute and variable quality characteristics (p, np, c, X̄–R charts)
6. Construction of control charts using Minitab Software
7. Conduction of Repeatability and Reproducibility studies for appraiser
8. Assessing Process Capability of the given manufacturing process using Normal Probability
paper method and process capability indices

9. Attribute sampling Plans–Single, Double sampling plans.

10. Conduction of Design of Experiments–Full Factorial approach for the given quality
characteristic for Catapult and Golf Experiment

11. Building a simple linear regression model and finding the dependent variable with the help of the independent variable

REFERENCE BOOKS

1. Introduction to Statistical Quality Control, D. C. Montgomery, 3rd Edition, John Wiley and Sons.

2. Quality Planning and Analysis, J. M. Juran and Frank M. Gryna, 3rd Edition, Tata McGraw-Hill.

3. Statistical Quality Control, Grant and Leavenworth, 6th Edition, McGraw-Hill.


Exercise No. 1

UNIFORM DISTRIBUTION
The simplest of all discrete probability distributions is one where the random variable assumes all
its values with equal probability.

If the random variable X assumes the values x1, x2, x3, …, xk with equal probability, then the discrete uniform distribution is given by

f(x; k) = 1/k,  for x = x1, x2, x3, …, xk.

Goodness-of-fit test: A test to determine whether a population has a specified theoretical distribution. The test is based on how good a fit we have between the frequencies of occurrence of observations in an observed sample and the expected frequencies obtained from the hypothesized distribution.

To illustrate, consider the tossing of a die. We hypothesize that the die is honest, which is equivalent to testing the hypothesis that the distribution of outcomes is uniform. Suppose that the die is tossed 120 times and each outcome is recorded.

Theoretically, if the die is balanced, we would expect each face to occur 20 times.

By comparing the observed frequencies with the corresponding expected frequencies, we must decide whether the discrepancies are small enough to be attributed to sampling fluctuations (the die is balanced) or large enough to indicate that the die is not honest and the distribution of outcomes is not uniform. It is common practice to refer to each possible outcome of an experiment as a cell; hence in our illustration we have six cells. The appropriate statistic on which we base our decision criterion for an experiment involving k cells is defined by the following theorem.

A goodness-of-fit test between observed and expected frequencies is based on the quantity

χ² = Σ (oi − ei)² / ei.

Aim: To check whether the distribution of outcomes of tossing a die is uniform or not.

Apparatus: A fair die.

Sl. No. | Outcome | Observed frequency (fo) | Expected frequency (fe) | (fo − fe)²/fe
1 | 1 | 21 | 20 | 0.05
2 | 2 | 25 | 20 | 1.25
3 | 3 | 19 | 20 | 0.05
4 | 4 | 17 | 20 | 0.45
5 | 5 | 24 | 20 | 0.80
6 | 6 | 14 | 20 | 1.80
Total |   | 120 | 120 | Σχ² = 4.40

Formulae:

1) Probability of each face occurring, P(r) = 1/6
2) No. of trials = 120
3) Degrees of freedom, ν = k − 1 = 6 − 1 = 5, where k = no. of cells
4) Expected frequency, fe = P(r) × no. of trials = (1/6) × 120 = 20
5) χ² = Σ (fo − fe)² / fe

Calculation:

Trial 1: for outcome 1, fo = 21, fe = 20

(fo − fe)²/fe = (21 − 20)²/20 = 0.05

Here χ² is a value of the random variable whose sampling distribution is approximated very closely by the chi-square distribution. The symbols oi and ei represent the observed and expected frequencies, respectively, for the i-th cell.


If the observed frequencies are close to the corresponding expected frequencies, the χ² value will be small, indicating a good fit. If the observed frequencies differ considerably from the expected frequencies, the χ² value will be large and the fit is poor. A good fit leads to the acceptance of H0, whereas a poor fit leads to its rejection. The critical region therefore falls in the right tail of the chi-square distribution. For a level of significance equal to α, we find the critical value χ²α from the table; χ² > χ²α then constitutes the critical region. The decision criterion described here should not be used unless each of the expected frequencies is at least equal to 5.

The number of degrees of freedom associated with the chi-square distribution used here depends on two factors: the number of cells in the experiment and the number of quantities estimated from the observed data that are needed in calculating the expected frequencies.

Result: The calculated χ² value is 4.4, which is less than the table value of 11.07 (ν = 5, 95% confidence level); hence the fit is good and the distribution of outcomes may be taken as uniform.
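As a quick cross-check of this exercise, the short sketch below (an illustration assuming Python with numpy and scipy, not part of the original manual) recomputes the χ² statistic for the tallied die data and compares it with the tabulated critical value.

```python
# Chi-square goodness-of-fit test for the die-tossing data (uniform hypothesis).
# A minimal sketch assuming numpy/scipy; observed counts are from the table above.
import numpy as np
from scipy.stats import chisquare, chi2

observed = np.array([21, 25, 19, 17, 24, 14])    # fo for faces 1..6
expected = np.full(6, observed.sum() / 6)        # fe = 120/6 = 20 under H0: uniform

stat, p_value = chisquare(observed, expected)    # stat = sum((fo - fe)^2 / fe)
critical = chi2.ppf(0.95, df=len(observed) - 1)  # table value at 95%, nu = k - 1 = 5

print(f"chi-square = {stat:.2f}, critical = {critical:.2f}, p = {p_value:.3f}")
# chi-square = 4.40 < 11.07, so the honest-die (uniform) hypothesis is not rejected.
```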

Exercise No. 2

BINOMIAL DISTRIBUTION


An experiment often consists of repeated trials, each with two possible outcomes, which may be labelled success or failure. This is the case, for example, in testing items as they come off an assembly line, where each test or trial may indicate a defective or a non-defective item.

The binomial experiment is one that possesses the following properties:

1. The experiment consists of "n" repeated trials.
2. Each trial results in an outcome that may be classified as a success or a failure.
3. The probability of success, denoted by 'p', remains constant from trial to trial.
4. The repeated trials are independent.

Goodness of fit for the binomial distribution:

Aim: To conduct a goodness-of-fit test for the outcomes of tossing five dice, which are expected to follow a binomial distribution.

Apparatus: Five fair dice.

Formulae used:

P(r) = nCr · p^r · q^(n−r)

Degrees of freedom, ν = k − 1 = 6 − 1 = 5

p = probability of success = 0.5

q = probability of failure = 0.5

n = no. of dice tossed per trial = 5; no. of trials = 120

TABULAR COLUMN:

Sl. No. | Outcome (r) | Observed frequency (fo) | Expected frequency fe = P(r) × no. of trials | (fo − fe)²/fe
1 | 0 | 4 | 3.75 | 0.0167
2 | 1 | 20 | 18.75 | 0.0833
3 | 2 | 30 | 37.50 | 1.5000
4 | 3 | 38 | 37.50 | 0.0066
5 | 4 | 22 | 18.75 | 0.5633
6 | 5 | 6 | 3.75 | 1.3500

Σχ² = 3.5199

P(0) = 5C0 (0.5)^0 (0.5)^5 = 0.03125

P(1) = 5C1 (0.5)^1 (0.5)^4 = 0.15625

P(2) = 5C2 (0.5)^2 (0.5)^3 = 0.31250

P(3) = 5C3 (0.5)^3 (0.5)^2 = 0.31250

P(4) = 5C4 (0.5)^4 (0.5)^1 = 0.15625

P(5) = 5C5 (0.5)^5 (0.5)^0 = 0.03125

Expected frequency, fe = P(r) × no. of trials = P(r) × 120

1. fe = 0.03125 × 120 = 3.75
2. fe = 0.15625 × 120 = 18.75
3. fe = 0.3125 × 120 = 37.5
4. fe = 0.3125 × 120 = 37.5
5. fe = 0.15625 × 120 = 18.75
6. fe = 0.03125 × 120 = 3.75

Total fe = 120

Criterion is > 3 (4,5,6)

Calculation:

Trial 1: for outcome r = 0, fo = 4, fe = 3.75

(fo − fe)²/fe = (4 − 3.75)²/3.75 = 0.0167

Result: The calculated χ² value is 3.5199, which is less than the table value of 11.07 (ν = 5, 95% confidence level); hence the fit is good.
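The binomial probabilities and expected frequencies above can also be generated programmatically. The following is a minimal sketch assuming Python with numpy and scipy; the observed counts are those tabulated above.

```python
# Expected frequencies for the five-dice binomial experiment (n = 5, p = 0.5, 120 trials)
# and the resulting chi-square statistic; a sketch assuming numpy/scipy.
import numpy as np
from scipy.stats import binom

n_dice, p, n_trials = 5, 0.5, 120
r = np.arange(n_dice + 1)                 # r = 0..5 successes
prob = binom.pmf(r, n_dice, p)            # P(r) = 5Cr * 0.5^r * 0.5^(5-r)
fe = prob * n_trials                      # 3.75, 18.75, 37.5, 37.5, 18.75, 3.75
fo = np.array([4, 20, 30, 38, 22, 6])     # observed counts from the tabular column

chi_sq = ((fo - fe) ** 2 / fe).sum()
print(np.round(fe, 2), f"chi-square = {chi_sq:.4f}")   # ~3.52 < 11.07
```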

Exercise No. 3

POISSON DISTRIBUTION


Experiments yielding numerical values of a random variable X, the number of successes occurring during a given time interval or in a specified region, are often called "Poisson experiments". The given time interval may be of any length, such as a minute, a day, a week, a month or even a year. Hence a Poisson experiment might generate observations for the random variable X representing the number of telephone calls per hour received by an office, the number of days school is closed due to snow during the winter, or the number of postponed games due to rain during a baseball season.

The specified region could be a line segment, an area, a volume or perhaps a piece of material. In this case X might represent the number of field mice per acre, the number of bacteria in a given culture, or the number of typing errors per page.

A Poisson experiment is one that possesses the following properties:

1. The number of successes occurring in one time interval or specified region is independent of the number occurring in any other disjoint time interval or region of space.

2. The probability of a single success occurring during a very short time interval or in a small region is proportional to the length of the time interval or the size of the region, and does not depend on the number of successes occurring outside this time interval or region.

3. The probability of more than one success occurring in such a short time interval or falling in such a small region is negligible.

GOODNESS OF FIT FOR POISSON DISTRIBUTION

Aim: To check whether the distribution of the outcomes of defective marbles follows a Poisson distribution.

Formula used:

P(r) = e^(−µ) µ^r / r!, where µ = n × p (mean)

p = % defective = 5% = 0.05

n = sample size = 20

No. of trials (samples drawn) = 100

Total no. of marbles = 500 = lot size N

Tabular column:

Sl. No. | Outcome (r) | Observed frequency (fo) | P(r) | Expected frequency fe = P(r) × no. of trials | (fo − fe)²/fe
1 | 0 | 33 | 0.3678 | 36.78 | 0.3885
2 | 1 | 36 | 0.3678 | 36.78 | 0.0165
3 | 2 | 18 | 0.1839 | 18.39 | 0.0083
4 | 3 | 7 | 0.0613 | 6.13 | 0.1235
5 | 4 | 4 | 0.0153 | 1.53 | –
6 | 5 | 1 | 0.0030 | 0.30 | –
7 | 6 | 1 | 0.0030 | 0.51 | –

(The last three cells, outcomes 4–6, are pooled because of their small expected frequencies: fo = 6, fe = 1.53 + 0.30 + 0.51 = 2.34, giving a contribution of 5.7246.)

Σχ² = 6.26

Observation:

Degrees of freedom, ν = k − m − 1, where k = no. of class intervals and m = 1 (the parameter µ being estimated); with k = 7, ν = 7 − 1 − 1 = 5.

For α = 0.05 (95% confidence level) the table value is 11.07.

Calculation:

(1) µ = n × p = 20 × 0.05 = 1

(2) P(0) = e^(−µ) µ⁰ / 0! = e^(−1) = 0.3678

P(1) = e^(−1) (1)¹ / 1! = 0.3678


P(2) = 0.1839

P(3) = 0.0613

P(4) = 0.0153

fe = P(r) × no. of trials = 0.3678 × 100 = 36.78

(fo − fe)²/fe = (33 − 36.78)²/36.78 = 0.3885

Result: The calculated χ² value is 6.26, which is less than the table value of 11.07 for the 95% confidence level; hence the fit is good and the null hypothesis is accepted.
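The Poisson probabilities and expected frequencies can likewise be generated with a short script. This is a sketch assuming Python with numpy and scipy; the observed frequencies are taken from the tabular column above.

```python
# Poisson probabilities and expected frequencies for the defective-marble experiment
# (mu = n*p = 20*0.05 = 1, with 100 samples); a sketch assuming numpy/scipy.
import numpy as np
from scipy.stats import poisson

mu, trials = 20 * 0.05, 100
r = np.arange(0, 7)                        # number of defectives found in a sample
p_r = poisson.pmf(r, mu)                   # P(r) = e^(-mu) * mu^r / r!
fe = p_r * trials                          # expected frequencies 36.79, 36.79, 18.39, ...
fo = np.array([33, 36, 18, 7, 4, 1, 1])    # observed frequencies from the tabular column

for ri, p, e, o in zip(r, p_r, fe, fo):
    print(f"r = {ri}: P(r) = {p:.4f}, fe = {e:.2f}, fo = {o}")
# Cells with very small fe (r >= 4) are pooled before the chi-square statistic is
# computed, as in the tabular column; the statistic is then compared with 11.07.
```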

Exercise No. 4

NORMAL DISTRIBUTION
The pattern of variation obtained for a set of quality-characteristic data produced by a process subject to chance causes alone is known as the normal distribution.


The Properties or Characteristics of Normal Distribution

1) Mean, Median and mode are identical


2) It is a bell shaped curve
3) It is symmetric about the mean
4) The curve extends from −∞ to +∞.
5) It represents populations of infinite size.
6) It is defined by two parameters namely mean and standard deviation.
7) The distribution is unimodal.

GOODNESS OF FIT FOR NORMAL DISTRIBUTION

AIM: To conduct a goodness-of-fit study for a quality characteristic and to ascertain whether the fit is good or not.

APPARATUS REQUIRED:- Electronic balance, Metal plates and statistical tables.

OBSERVATIONS: (e.g. weights of metal plates – 50 values)

35.74 32.32 32.97 31.83 32.49 33.80 32.77 33.36 33.81 31.34
33.56 30.24 30.57 29.81 32.59 31.93 35.16 31.95 30.97 30.91
33.32 31.76 31.61 31.19 31.69 33.34 32.21 33.41 29.74 29.95
28.96 36.01 35.39 34.86 34.16 33.91 29.10 29.12 33.81 28.79
29.20 28.59 33.56 31.59 31.53 30.67 31.59 31.36 30.53 30.59
TABULAR COLUMN:

Class Interval (C.I.) | Mid point | Observed frequency (fo) | di | di² | fo·di | fo·di²
27.9717 – 29.2084 | 28.5901 | 6 | −2 | 4 | −12 | 24
29.2084 – 30.4451 | 29.8268 | 4 | −1 | 1 | −4 | 4
30.4451 – 31.6818 | 31.0635 | 13 | 0 | 0 | 0 | 0
31.6818 – 32.9185 | 32.3002 | 10 | 1 | 1 | 10 | 10
32.9185 – 34.1552 | 33.5369 | 11 | 2 | 4 | 22 | 44
34.1552 – 35.3919 | 34.7736 | 4 | 3 | 9 | 12 | 36
35.3919 – 36.6284 | 36.0103 | 2 | 4 | 16 | 8 | 32
Total |   | 50 |   |   | 36 | 150

TABULAR COLUMN

P(z1) | P(z2) | P(r) = P(z2) − P(z1) | fe = P(r) × n | (fo − fe)²/fe
0.0207 | 0.0808 | 0.0601 | 3.005 | 2.985
0.0808 | 0.2206 | 0.1398 | 6.990 | 1.279
0.2206 | 0.4483 | 0.2277 | 11.385 | 0.2291
0.4483 | 0.6879 | 0.2396 | 11.980 | 0.3271
0.6879 | 0.8708 | 0.1829 | 9.145 | 0.3763
0.8708 | 0.9608 | 0.0900 | 4.500 | 0.0556
0.9608 | 0.9864 | 0.0256 | 1.280 | 0.4050

Σχ² = 5.65

Calculations:

Highest value = 36.01, lowest value = 28.59, n = 50

Range = highest value − lowest value = 36.01 − 28.59 = 7.42

No. of class intervals ≈ √50 ≈ 7

Class width (h) = 1.2367

HSV = highest value + ½(h) = 36.01 + ½(1.2367) = 36.6284

LSV = lowest value − ½(h) = 28.59 − ½(1.2367) = 27.9717

A (assumed mean, mid point of the third class) = 31.0635

Mean (µ) = A + (Σfo·di / Σfo) × h = 31.0635 + (36/50) × 1.2367 = 31.9538

Std. deviation (σ) = h × √[ Σfo·di²/n − (Σfo·di/n)² ] = 1.2367 × √[ 150/50 − (36/50)² ] = 1.9482

Specimen calculation:

z1 = (27.9717 − 31.9538)/1.9482 = −2.04, so P(z1) = 0.0207

z2 = (29.2084 − 31.9538)/1.9482 = −1.41, so P(z2) = 0.0808


P(r) = P(z2) − P(z1) = 0.0808 − 0.0207 = 0.0601

fe = P(r) × 50 = 0.0601 × 50 = 3.005

(fo − fe)²/fe = (6 − 3.005)²/3.005 = 2.985

Degrees of freedom, ν = k − m − 1 = 7 − 2 − 1 = 4

Result: The calculated χ² value is 5.6572, which is less than the table value of 9.48 (ν = 4, 95% confidence level); hence the fit is good.

Graph:- Plot Observed frequency and Expected Frequency
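For comparison with the hand calculation, the class probabilities and expected frequencies can be computed from the fitted normal distribution as in the sketch below (assuming Python with numpy/scipy; small rounding differences from the tabulated values are expected because the hand calculation uses rounded table entries).

```python
# Expected class frequencies for the normal goodness-of-fit study, a sketch assuming
# numpy/scipy; mean and standard deviation are the values estimated above.
import numpy as np
from scipy.stats import norm

mu, sigma, n = 31.9538, 1.9482, 50
boundaries = np.array([27.9717, 29.2084, 30.4451, 31.6818, 32.9185,
                       34.1552, 35.3919, 36.6284])          # class limits (7 cells)
fo = np.array([6, 4, 13, 10, 11, 4, 2])                     # observed frequencies

cdf = norm.cdf(boundaries, loc=mu, scale=sigma)
p_r = np.diff(cdf)                  # P(r) = P(z2) - P(z1) for each class
fe = p_r * n
chi_sq = ((fo - fe) ** 2 / fe).sum()
print(np.round(fe, 2), f"chi-square = {chi_sq:.2f}")
# chi-square comes out near the hand-calculated 5.65; compare with 9.48 for nu = 4.
```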

Exercise No 5

CONTROL CHARTS – p CHART


Aim: To conduct 100% inspection of manufactured washers, with the data summarized by the hour, and to plot a p chart for the data and interpret it.

Apparatus: washer, inspection equipment

Theory: The terminology defective or non-defective is often used to identify the two classifications of product. More recently the terminology conforming and non-conforming has become popular. Quality characteristics of this type are called attributes. The first control chart of this type relates to the fraction of non-conforming or defective product produced by a manufacturing process and is called the control chart for fraction non-conforming, or p chart.

The actual operation of this chart consists of taking successive samples of 'n' units, computing the sample fraction non-conforming p̂, and plotting the statistic on the chart. As long as p̂ remains within the control limits and the sequence of plotted points does not exhibit any systematic non-random pattern, we can conclude that the process is in control at the level p̄. If a point plots outside the control limits, we conclude that the process fraction non-conforming has most likely shifted to a new level and the process is out of control.

The fraction non-conforming is given by

p̂i = Di / n,  i = 1, 2, …, m

and the average fraction non-conforming is given by

p̄ = ΣDi / (m·n)

The control limits are given by the following formulae:

UCLp = p̄ + 3·√[ p̄(1 − p̄) / n ]

CLp = p̄

LCLp = p̄ − 3·√[ p̄(1 − p̄) / n ]

FORMULAE:

1. p̂i = d / n, where d = no. of defectives, n = no. of items inspected, and p̂i = fraction defective of each sample.

2. p̄ = Σd / Σn, where p̄ is the standard (average) fraction defective.

3. Upper Control Limit: UCLp = p̄ + 3·√[ p̄(1 − p̄) / n ]

4. Centre line: CLp = p̄

5. Lower Control Limit: LCLp = p̄ − 3·√[ p̄(1 − p̄) / n ]

OBSERVATION:

Sl No | Sample number | No. of units inspected (n) | No. of defective units (d) | Fraction defective (p̂i)
1 | 1 | 20 | 0 | 0
2 | 2 | 20 | 0 | 0
3 | 3 | 20 | 1 | 0.05
4 | 4 | 20 | 1 | 0.05
5 | 5 | 20 | 5 | 0.25
6 | 6 | 20 | 4 | 0.20
7 | 7 | 20 | 0 | 0
8 | 8 | 20 | 4 | 0.20
9 | 9 | 20 | 2 | 0.10
10 | 10 | 20 | 0 | 0
11 | 11 | 20 | 3 | 0.15
12 | 12 | 20 | 2 | 0.10
13 | 13 | 20 | 3 | 0.15
14 | 14 | 20 | 0 | 0
15 | 15 | 20 | 2 | 0.10
16 | 16 | 20 | 0 | 0
17 | 17 | 20 | 3 | 0.15
18 | 18 | 20 | 1 | 0.05
19 | 19 | 20 | 1 | 0.05
20 | 20 | 20 | 1 | 0.05

Σn = 400, Σd = 33
CALCULATION:

1) Standard fraction defective, p̄ = Σd / Σn = 33/400 = 0.0825

2) Fraction defective of a sample, p̂i = d/n = 1/20 = 0.05

3) Upper Control Limit: UCLp = p̄ + 3·√[ p̄(1 − p̄)/n ] = 0.0825 + 3·√[0.0825 × (1 − 0.0825)/20] = 0.267

4) Centre line: CLp = p̄ = 0.0825

5) Lower Control Limit: LCLp = p̄ − 3·√[ p̄(1 − p̄)/n ] = 0.0825 − 3·√[0.0825 × (1 − 0.0825)/20] = −0.1021 ≈ 0.00 (a negative limit is taken as zero)


Procedure:
1. Washers are inspected in subgroups of constant sample size for 20 hours.
2. At the end of each hour, the units inspected are checked for defectives using a Go/No-Go gauge and the values are noted down.
3. 20 bowls of washers are given, from which washers are taken at random and inspected.
4. The fraction defective is calculated as the ratio of the number of defective units to the total number of units checked.
5. The total number of washers inspected (Σn) and the total number of defectives (Σd) are calculated.
6. A graph of fraction defective versus the hour (sample) number is plotted to determine whether the process is in control or not.

Result:
It is seen from the graph that all the points are within the control limits; therefore the process is in a state of statistical control.
UCLp = 0.267
CLp = 0.0825
LCLp = −0.1021 ≈ 0
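The p-chart limits above can be verified with a short script. The sketch below assumes Python with numpy and uses the hourly defect counts from the observation table.

```python
# p-chart trial control limits for the washer data, a sketch assuming numpy;
# defect counts per 20-piece subgroup are the ones tabulated above.
import numpy as np

d = np.array([0, 0, 1, 1, 5, 4, 0, 4, 2, 0, 3, 2, 3, 0, 2, 0, 3, 1, 1, 1])  # defectives
n = 20                                   # subgroup size
p_i = d / n                              # fraction defective per subgroup
p_bar = d.sum() / (len(d) * n)           # 33/400 = 0.0825

sigma_p = np.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + 3 * sigma_p                # 0.267
lcl = max(p_bar - 3 * sigma_p, 0.0)      # negative limit is taken as 0
print(f"CL = {p_bar:.4f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")
print("out-of-control subgroups:", np.where((p_i > ucl) | (p_i < lcl))[0] + 1)
```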

Exercise No 6

CONTROL CHARTS – np CHART


Aim: To plot an np chart, determine the trial control limits, and ascertain whether the production of washers is in statistical control or not.

APPARATUS : washers, weighing machine.

THEORY: It is always possible to base a control chart on the number non-conforming rather than the fraction non-conforming. This is often called a number non-conforming control chart, or np chart. The parameters of this chart are as follows:

UCLnp = n·p̄ + 3·√[ n·p̄(1 − p̄) ]

CLnp = n·p̄

LCLnp = n·p̄ − 3·√[ n·p̄(1 − p̄) ]

where p̄ = ΣDi / (m·n)
Observation:

Sl No | Sample number | No. of units inspected (n) | No. of defective units (d)
1 | 1 | 20 | 7
2 | 2 | 20 | 8
3 | 3 | 20 | 10
4 | 4 | 20 | 2
5 | 5 | 20 | 12
6 | 6 | 20 | 8
7 | 7 | 20 | 11
8 | 8 | 20 | 11
9 | 9 | 20 | 5
10 | 10 | 20 | 6
11 | 11 | 20 | 8
12 | 12 | 20 | 9
13 | 13 | 20 | 9
14 | 14 | 20 | 8
15 | 15 | 20 | 12

Σn = 300, Σd = 126

If a standard value for p is unavailable, then p̄ can be used to estimate it. Many non-statistically trained personnel find the np chart easier to interpret than the usual fraction non-conforming control chart.

Procedure:
1. A group of washers is inspected with a constant sample size of 20.
2. The washers are weighed using a weighing machine and a tolerance limit of 6.5 ± 0.6 is fixed.
3. The washers exceeding the tolerance are identified and taken as defective. This procedure is repeated for 15 subgroups.
4. The fraction defective p̄ is then calculated as the ratio of the total number of defective units to the total number of units inspected.
5. The control limits are calculated using the formulae.
6. The np chart is plotted for the 15 subgroups and the inference is drawn.

CALCULATION:

1) Standard fraction defective, p̄ = Σd / Σn = 126/300 = 0.42

   n·p̄ = 20 × 0.42 = 8.4

2) Upper Control Limit: UCLnp = n·p̄ + 3·√[ n·p̄(1 − p̄) ] = 8.4 + 3·√[8.4 × (1 − 0.42)] = 15.02

3) Centre line: CLnp = n·p̄ = 20 × 0.42 = 8.4

4) Lower Control Limit: LCLnp = n·p̄ − 3·√[ n·p̄(1 − p̄) ] = 8.4 − 3·√[8.4 × (1 − 0.42)] = 1.78

Result:

It is seen from the graph that all the points are within the control limits; therefore the process is in a state of statistical control.

UCLnp = 15.02, CLnp = 8.4, LCLnp = 1.78.
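A corresponding sketch for the np chart (assuming Python with numpy; the per-subgroup defect counts are from the observation table) is:

```python
# np-chart limits for the washer-weight data, a sketch assuming numpy;
# counts of out-of-tolerance washers per subgroup of 20 come from the observation table.
import numpy as np

d = np.array([7, 8, 10, 2, 12, 8, 11, 11, 5, 6, 8, 9, 9, 8, 12])
n = 20
p_bar = d.sum() / (len(d) * n)            # 126/300 = 0.42
np_bar = n * p_bar                        # centre line = 8.4

half_width = 3 * np.sqrt(np_bar * (1 - p_bar))
ucl, lcl = np_bar + half_width, max(np_bar - half_width, 0.0)   # 15.02 and 1.78
print(f"CL = {np_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
```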

Exercise No 7

CONTROL CHARTS – C CHART


AIM: To plot a control chart for the number of defects (c chart).

Apparatus: An article with spelling mistakes.

Theory: c charts are used for the number of non-conformities when the sample size is constant. The number of non-conformities 'x' follows the Poisson distribution with

p(x) = e^(−c) c^x / x!,  x = 0, 1, 2, …

The sample size for each trial is a constant, i.e., n = constant.

Tabular column

Sl No. No. of Defects (C)


1 13
2 10
3 15
4 16
5 9
6 14
7 15
8 16
9 9
10 7

Procedure:
1. A set of 10 pages from a certain article is taken.
2. The spelling mistakes on each page are found and counted; they are considered as defects.
3. The number of errors/defects on each page is recorded.
4. The total number of defects and the number of subgroups are used to find c̄ using the formula.
5. The control limits are found and a graph is plotted to determine whether the process is in control.

CALCULATION:

Average number of defects per unit, c̄ = Σc / k = 124/10 = 12.4

Upper Control Limit: UCLc = c̄ + 3·√c̄ = 12.4 + 3·√12.4 = 22.96

Centre line: CLc = c̄ = 12.4

Lower Control Limit: LCLc = c̄ − 3·√c̄ = 12.4 − 3·√12.4 = 1.835

Result: It is seen from the graph that all the points are within the control limits; therefore the process is in a state of statistical control.

UCLc = 22.96, CLc = 12.4, LCLc = 1.835.
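The c-chart limits can be checked the same way (a sketch assuming Python with numpy; the per-page defect counts are from the tabular column):

```python
# c-chart limits for the spelling-mistake (defect) counts, a sketch assuming numpy;
# the ten per-page defect counts are taken from the tabular column above.
import numpy as np

c = np.array([13, 10, 15, 16, 9, 14, 15, 16, 9, 7])
c_bar = c.mean()                           # 124/10 = 12.4
ucl = c_bar + 3 * np.sqrt(c_bar)           # 22.96
lcl = max(c_bar - 3 * np.sqrt(c_bar), 0)   # ~1.84 (a negative value would be set to 0)
print(f"CL = {c_bar:.1f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
```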

Exercise No 8

CONTROL CHARTS – X̄-R CHART

AIM: To plot control charts for variables for the measured sizes of bolt heads and to ascertain whether the bolt-manufacturing process is in a state of statistical control.


Apparatus: 100 Bolts, vernier calipers.

Theory: Many quality characteristics can be expressed in terms of numerical measurements. A single measurable quality characteristic, such as a dimension, weight or volume, is called a variable. When dealing with a quality characteristic that is a variable, it is usually necessary to monitor both the mean value of the quality characteristic and its variability. Control of the process average or mean quality level is usually done with the control chart for means, or X̄ chart. Process variability can be monitored with either a control chart for the standard deviation or a control chart for the range, called an R chart. Usually separate X̄ and R charts are maintained for each quality characteristic of interest.

The control limits for the X̄ chart are (X̿ is the grand average of the subgroup means and R̄ is the average range):

UCLx̄ = X̿ + A2·R̄

CLx̄ = X̿

LCLx̄ = X̿ − A2·R̄

The constant A2 is tabulated for various sample sizes.

The process variability may be monitored by plotting the value of the sample range R on a control chart. The centre line and control limits for the R chart are as follows:

UCLr = D4·R̄

CLr = R̄

LCLr = D3·R̄

The constants D3 and D4 are tabulated for various values of n.

Observation:

Sl No   X1      X2      X3      X4      X5      X̄ (mean)   Range R = Xmax − Xmin
1  31.99 30.32 30.49 31.90 30.28  30.99  1.71

2 31.45 30.93 31.85 30.00 29.98 30.84 1.47


3 30.33 30.23 29.16 30.02 30.12 29.97 1.17
4 31.88 30.05 31.98 31.52 31.87 31.26 1.93
5 31.92 31.58 31.47 31.87 30.85 31.53 1.07
6 29.86 31.80 31.56 31.94 31.49 31.33 2.08
7 31.95 31.94 31.93 31.88 31.52 31.84 0.43
8 31.84 31.87 31.95 30.23 31.94 31.56 1.71
9 31.41 31.42 29.95 28.16 28.11 29.81 3.31
10 31.88 31.57 30.09 31.49 29.96 30.99 1.92
11 31.80 31.83 31.92 31.96 31.40 31.78 0.56
12 31.97 30.03 30.20 28.21 31.84 30.65 3.76
13 30.96 31.81 31.36 31.82 31.89 31.61 0.93
14 31.54 30.07 31.56 31.54 29.90 30.88 1.66
15 30.20 30.24 31.80 31.88 30.31 30.88 1.68
16 31.45 30.20 31.45 31.58 30.4 30.94 1.54
17 29.80 31.41 31.80 31.88 31.58 31.29 2.08
18 30.01 31.53 31.49 29.94 31.58 30.87 1.64
19 30.11 31.84 28.00 31.51 29.59 30.19 3.84
20 30.11 30.02 31.59 29.99 29.96 30.33 1.61
ΣX̄ = 619.49,  ΣR = 36.04

USL = 32, LSL = 30, k = 20, n = 5

Procedure :

1. The 100 bolts are placed in 20 groups, each group consisting of 5 bolts.
2. The first group is selected and the size of each bolt head is measured with the help of the vernier caliper; the obtained values are noted down, which provide the X values.
3. The average of these 5 values gives X̄, and the difference between the maximum and minimum values gives the range R.
4. The above procedure is repeated for all 20 subgroups.
5. Using these values, X̿ and R̄ are calculated.
6. The upper and lower control limits are then calculated for both the X̄ and R charts.
7. The control limits and the values of X̄ are plotted on a graph. Similarly, the values of R and the control limits are plotted on another graph to determine whether the process is in control or not.

CALCULATION:

X̿ = ΣX̄ / k = 619.49/20 = 30.974

R̄ = ΣR / k = 36.04/20 = 1.802

From the table, for n = 5: A2 = 0.577, d2 = 2.326, D3 = 0, D4 = 2.114

The control limits for the X̄ chart:

UCLx̄ = X̿ + A2·R̄ = 30.974 + (0.577 × 1.80) = 32.01

CLx̄ = X̿ = 30.974

LCLx̄ = X̿ − A2·R̄ = 30.974 − (0.577 × 1.80) = 29.93

The control limits for the R chart:

UCLr = D4·R̄ = 2.114 × 1.80 = 3.804

CLr = R̄ = 1.802

LCLr = D3·R̄ = 0

σ = R̄ / d2 = 1.802/2.326 = 0.774

Natural tolerance, 6σ = 6 × 0.774 = 4.648

Process capability index, Cp = (USL − LSL) / 6σ = (32 − 30)/4.648 = 0.43 < 1

Since Cp < 1, the process is not capable.

Result:

It is seen from the graph that all the points are within the control limits; therefore the process is in a state of statistical control.

The recommended values for the R chart are:
UCLr = 3.804, CLr = 1.802, LCLr = 0

The recommended values for the X̄ chart are:
UCLx̄ = 32.01, CLx̄ = 30.974, LCLx̄ = 29.93

Since the process capability index is less than 1, the process is not capable.

Inference: All the points are within the control limits, so the process is in a state of statistical control; however, since Cp < 1, the process is not capable of meeting the specification.
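The control limits and the capability index can be recomputed from the subgroup totals, as in the sketch below (assuming Python with numpy; the totals, control-chart constants and specification limits are the ones used above).

```python
# X-bar/R chart limits and process capability for the bolt-head data, a sketch assuming
# numpy; the 20 subgroup means and ranges are summarized by their totals from the table.
import numpy as np

x_bar_sum, r_sum, k, n = 619.49, 36.04, 20, 5
A2, d2, D3, D4 = 0.577, 2.326, 0.0, 2.114       # control-chart constants for n = 5

x_double_bar = x_bar_sum / k                    # 30.974
r_bar = r_sum / k                               # 1.802

ucl_x, lcl_x = x_double_bar + A2 * r_bar, x_double_bar - A2 * r_bar   # ~32.01 / ~29.93
ucl_r, cl_r, lcl_r = D4 * r_bar, r_bar, D3 * r_bar                    # ~3.81 / 1.802 / 0

sigma = r_bar / d2                              # within-subgroup sigma estimate ~0.774
usl, lsl = 32.0, 30.0
cp = (usl - lsl) / (6 * sigma)                  # ~0.43 < 1, so the process is not capable
print(round(ucl_x, 2), round(lcl_x, 2), round(ucl_r, 2), round(cp, 2))
```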

Exercise No 9

Repeatability and Reproducibility Study


Aim: To conduct a Repeatability and Reproducibility Study using range and average method

Apparatus: Vernier calipers, 20 bolts , 3 operators

Theory:


Reproducibility is defined as the variability due to different operators using the gauge (i.e., due to different time periods, environments or conditions).

Repeatability is defined as the inherent precision of the gauge itself. The range and average method allows the total measurement-system variability to be separated into repeatability, reproducibility and part variation.

Repeatability is given by

σ_repeatability = R̄ / d2, where R̄ = average of the ranges over all parts and operators, and d2 is taken from the table for the given n (number of trials).

Reproducibility is given by

σ_reproducibility = R_x̄ / d2, where R_x̄ = range of the operator averages (X̄max − X̄min), and d2 is taken from the table for the given n (number of operators).

The total variability of the gauge is given by

σ²_gauge = σ²_repeatability + σ²_reproducibility

Observation (TR = trial; X̄ and R are the mean and range of the two trials):

Part | Operator A: TR1, TR2, X̄, R | Operator B: TR1, TR2, X̄, R | Operator C: TR1, TR2, X̄, R
1 | 7.82, 7.80, 7.81, 0.02 | 7.92, 7.86, 7.89, 0.06 | 7.80, 7.78, 7.79, 0.02
2 | 7.90, 7.86, 7.88, 0.04 | 7.90, 7.92, 7.91, 0.01 | 7.80, 7.80, 7.80, 0
3 | 7.86, 7.82, 7.84, 0.04 | 7.90, 7.90, 7.90, 0 | 7.90, 7.86, 7.88, 0.04
4 | 7.80, 7.86, 7.83, 0.06 | 7.96, 7.90, 7.95, 0.06 | 7.88, 7.90, 7.89, 0.02
5 | 7.82, 7.84, 7.83, 0.04 | 7.86, 7.80, 7.83, 0.06 | 7.86, 7.90, 7.89, 0.02
6 | 7.90, 7.70, 7.80, 0.20 | 7.90, 7.90, 7.90, 0 | 7.88, 7.90, 7.89, 0.02
7 | 7.88, 7.80, 7.84, 0.08 | 7.80, 7.90, 7.85, 0.01 | 7.86, 7.84, 7.85, 0.02
8 | 7.86, 7.80, 7.83, 0.06 | 7.86, 7.90, 7.88, 0.04 | 7.90, 7.92, 7.91, 0.02
9 | 7.80, 7.70, 7.75, 0.10 | 7.88, 7.70, 7.79, 0.02 | 8.00, 8.00, 8.00, 0
10 | 7.90, 7.88, 7.89, 0.02 | 7.70, 7.88, 7.79, 0.02 | 7.86, 7.88, 7.87, 0.02
11 | 7.80, 7.78, 7.79, 0.02 | 8.00, 8.00, 8.00, 0 | 7.90, 7.80, 7.85, 0.01
12 | 7.80, 7.78, 7.79, 0.02 | 7.80, 7.86, 7.83, 0.06 | 7.86, 7.88, 7.87, 0.02
13 | 7.80, 7.78, 7.79, 0.02 | 7.86, 7.86, 7.86, 0 | 7.86, 7.88, 7.87, 0.02
14 | 7.86, 7.80, 7.83, 0.06 | 7.90, 7.86, 7.88, 0.04 | 7.80, 7.92, 7.86, 0.12
15 | 7.70, 7.84, 7.77, 0.14 | 7.80, 7.86, 7.83, 0.06 | 7.86, 7.88, 7.87, 0.02
16 | 7.80, 7.90, 7.85, 0.10 | 7.86, 7.90, 7.88, 0.06 | 7.86, 7.86, 7.86, 0
17 | 7.70, 7.72, 7.71, 0.02 | 7.70, 7.80, 7.75, 0.01 | 7.86, 7.86, 7.86, 0
18 | 7.80, 7.90, 7.85, 0.10 | 7.88, 7.86, 7.87, 0.02 | 7.76, 7.70, 7.73, 0.06
19 | 7.82, 7.84, 7.83, 0.02 | 7.86, 7.90, 7.88, 0.06 | 7.86, 7.82, 7.84, 0.04
20 | 8.00, 8.00, 8.00, 0 | 7.90, 7.88, 7.89, 0.02 | 7.86, 7.84, 7.82, 0.04

X̄1 = 7.827, X̄2 = 7.867, X̄3 = 7.850 (operator averages for A, B, C)
R̄1 = 0.06, R̄2 = 0.04, R̄3 = 0.03 (average ranges for A, B, C)

Procedure:

1. Three operators are each provided with the same 20 bolts.
2. Each operator measures the bolt diameter using the vernier caliper and makes 2 trials for all 20 bolts.
3. The readings are tabulated and the range of each operator's readings is found for every bolt. R̄ is then found, which leads to repeatability.
4. Reproducibility is calculated using the range of the operator averages (X̄ values).
5. Finally, the total variability of the gauge is calculated using the formulae.

Calculation:

Specification limits: 7.92 ± 0.2 mm

R̄ = (R̄1 + R̄2 + R̄3)/3 = (0.06 + 0.04 + 0.03)/3 = 0.0433

X̄max = max(7.827, 7.867, 7.850) = 7.867

X̄min = min(7.827, 7.867, 7.850) = 7.827

R_x̄ = X̄max − X̄min = 7.867 − 7.827 = 0.04

From the table, d2 = 1.128 for n = 2 (trials):

σ_repeatability = R̄ / d2 = 0.0433/1.128 = 0.0384

From the table, d2 = 1.693 for n = 3 (operators):

σ_reproducibility = R_x̄ / d2 = 0.04/1.693 = 0.0236

σ²_gauge = σ²_repeatability + σ²_reproducibility = (0.0384)² + (0.0236)² = 0.002

σ_gauge = 0.045

P/T = 6·σ_gauge / (USL − LSL) = (6 × 0.045)/0.4 = 0.67 > 0.1


Result: From conducting the experiment we have found that the,

 repeatability = 0.0384,  reproducibility = 0.0236,  gage = 0.045

Since P/T = 0.67 > 0.1, the gauge cannot be used

Inference: Since  repeatability is more than  reproducibility and the P/T ratio is 0.67 which is greater than
0.1, we can conclude that the gauge capability is not good.
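The range-and-average computation can be scripted as follows. This is a sketch assuming plain Python; the operator averages, average ranges and d2 constants are the values used in the calculation above.

```python
# Range-and-average gauge R&R calculation, a sketch assuming Python only;
# operator averages and average ranges are the summary values computed above.
import math

r_bar_ops = [0.06, 0.04, 0.03]        # average range per operator (A, B, C)
x_bar_ops = [7.827, 7.867, 7.850]     # overall average per operator
usl, lsl = 7.92 + 0.2, 7.92 - 0.2     # specification 7.92 +/- 0.2 mm

d2_trials, d2_operators = 1.128, 1.693    # d2 for n = 2 trials and n = 3 operators

r_bar = sum(r_bar_ops) / len(r_bar_ops)           # 0.0433
r_xbar = max(x_bar_ops) - min(x_bar_ops)          # 0.04

sigma_repeat = r_bar / d2_trials                  # ~0.0384
sigma_repro = r_xbar / d2_operators               # ~0.0236
sigma_gauge = math.sqrt(sigma_repeat**2 + sigma_repro**2)   # ~0.045

pt_ratio = 6 * sigma_gauge / (usl - lsl)          # ~0.67 > 0.1, gauge not acceptable
print(round(sigma_repeat, 4), round(sigma_repro, 4),
      round(sigma_gauge, 3), round(pt_ratio, 2))
```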

Exercise No 10

Process capability analysis using probability plotting



Probability plotting is an informal way of conducting a process capability analysis, especially with a small number of data points, say 30 or fewer. As in the previous case, the process must be in control to use the technique.

Monographs by Nelson (1979) and Shapiro (1980), both sponsored by the ASQC, describe probability plotting in considerable detail. The basic idea behind the method is that a special transformation has been applied to the vertical scale of a graph of the assumed cumulative distribution function. This transforms all cumulative distribution functions of the assumed type into straight lines.

If the assumed distribution is correct, the sample order statistics will be nearly linear when plotted on probability paper. There will usually be random deviations from linearity, but the larger the sample size, the greater the tendency toward linearity. If the distributional assumption is incorrect, the plotted points will be nonlinear in a systematic manner.

Shapiro (1980) gives the basic steps in the preparation of probability plots:

1. Make a distributional assumption and obtain the relevant probability paper.

2. Let {xi, i = 1, 2, …, n} be the sample data, with n observations. Order the observations from the smallest to the largest. Denote these as {x[i], i = 1, 2, …, n}, where x[1] ≤ x[2] ≤ … ≤ x[n]. Let [i] = 1 for the smallest and [i] = n for the largest.

3. Plot the x[i] values against the quantity P[i] = 100([i] − 0.5)/n on probability paper of the correct type. (To simplify this computation, determine P[1] = 50/n and P[i+1] = P[i] + 100/n, i = 1, 2, …, n − 1.)

4. If the assumed distribution is appropriate, the plotted points will appear as a straight line. If the assumed distribution is inappropriate, the points will deviate from a straight line, usually in a systematic manner. The decision on whether or not to reject the hypothesized model is subjective.

USING PROBABILITY PLOTTING TO ESTIMATE STATISTICS

Since we do not need 100 data points for probability plotting, we have randomly selected 20 points,
without replacement from table 9.2 and listed them (in order selected) as follows:

102.90 104.45 104.13 106.81 99.51


100.57 95.89 102.13 105.19 104.58
106.94 113.56 107.41 108.12 97.05
104.71 100.12 94.59 93.21 102.02
The observations are now ranked from smallest to largest as follows:

Rank i | Observation x[i] | Rank i | Observation x[i]
1 | 93.21 | 11 | 104.13
2 | 94.59 | 12 | 104.45
3 | 95.89 | 13 | 104.58
4 | 97.05 | 14 | 104.71
5 | 99.51 | 15 | 105.19
6 | 100.12 | 16 | 106.81
7 | 100.57 | 17 | 106.94
8 | 102.02 | 18 | 107.41
9 | 102.13 | 19 | 108.12
10 | 102.90 | 20 | 113.56

The ordered observations are then plotted against P[i]. The special graph paper for this example has a relabeled horizontal axis for plotting the P[i] values, and the vertical scale is drawn to accommodate the smallest and largest values. The smallest sample value, x[1] = 93.21, is plotted against P[1] = 50/20 = 2.5; the second smallest, x[2] = 94.59, is plotted against P[2] = 2.5 + 100/20 = 7.5; and so on, until finally the highest value, x[20] = 113.56, is plotted against P[20] = 97.5. The plotted values are shown in Fig 9.6.

Using the "eyeball" method, a straight line has been drawn through the plotted points. In evaluating the linearity of the plot, the following should be considered:

1. The observed values are random and will never fall exactly on a straight line.
2. The ordered values are not independent, since they have been ranked. Hence, if one point lies above the line, the neighbouring points are also likely to lie above it; it is unlikely that the points will be scattered randomly about the line.
3. The variances of the extremes (largest and smallest values) are much higher than the variances in the middle of the plot. Greater discrepancies can therefore be accepted at the extremes; the linearity of the points in the middle of the plot is more important than the linearity at the extremes.

Outliers in the data should be treated with caution. Nelson (1979) states that all too often "inexperienced analysts tend to over-interpret plots and expect them to be more orderly than they usually are." If a data point is suspected of being an outlier, investigate the causes behind it rather than discarding it without a second thought. Remember the first of Shapiro's points presented above: the data will show some variability, since they are random values.

The plotted data points in Fig 9.6 fall close to a straight line, so informally the assumption of normality is not rejected. Now µ and σ can be estimated from the probability plot. We approximate µ ≈ 102.9 hours as the point at which P = 50 ("50 percent under"), and the point at which P = 84.1 percent gives µ + σ ≈ 108.2 hours. Thus

σ = 108.2 − 102.9 = 5.3 hours

This mean is rather far from the value determined using all of the data, but the estimated standard deviation is quite close.


[Fig 9.6 shows a Minitab normal probability plot (with 95% CI) of the 20 sampled values, "percent(2)"; the legend reports Mean = 102.7, StDev = 5.024, N = 20, AD = 0.256, P-Value = 0.688.]

Fig 9.6 Normal probability plot of engine overhaul times.

Aim: To determine the process capability using the normal probability plotting method.

Apparatus: M.S. plates, electronic balance, normal probability paper, etc.

Theory: Process capability is defined as the minimum spread of a specific measurement variation which will include 99.7% of the measurements from the given process; in other words, the process capability is 6σ, since 6σ is taken as the measure of the spread of the process and is also called the natural tolerance. A process capability study is carried out to measure the ability of the process to meet the specified tolerances. From such a study it becomes possible to know the percentage of product that will be produced within the 3σ limits on either side of the mean. The study is aimed at determining the standard deviation of individual measurements of the product when the process is in control.

Median ranking = (i − 0.3)/(n + 0.4) × 100, where i = serial number and n = sample size.

Observation:


Sl No | Weight | Median ranking = (i − 0.3)/(n + 0.4) × 100
1 28.58 2.3
2 28.78 5.5
3 29.73 8.8
4 29.8 12.1
5 29.95 15.4
6 30.23 18.7
7 30.54 22
8 30.58 25.3
9 30.66 28.6
10 30.96 31.9
11 31.32 35.2
12 31.36 38.4
13 31.58 41.7
14 31.59 45
15 31.61 48.3
16 31.68 51.6
17 31.92 54.9
18 31.95 58.2
19 32.21 61.5
20 32.48 64.8
21 32.58 68
22 32.76 71.3
23 33.13 74.6
24 33.31 77.9
25 33.55 81.2
26 33.82 84.5
27 33.91 87.8
28 34.16 91.1
29 35.39 94.4
30 35.73 97.7

Procedure:
1. A set of 30 M.S. plates is taken and weighed.
2. The weights are noted down from the electronic weighing machine.
3. The weights are ranked in ascending order and the median ranking is calculated using the formula.
4. Using the weights, the tolerance limits for the M.S. plates are set.
5. The values are plotted on normal probability paper with weight on the x-axis and the probability values (median ranking) on the y-axis.
6. The mean µ and standard deviation σ are found from the graph.
7. The process capability is then found using the formula.


CALCULATION:

The tolerance limit is 30 ± 5, i.e., USL = 35, LSL = 25.

From the graph: µ50 = 31.96, µ90 = 34.4

µ90 − µ50 = 1.28σ, hence σ = (µ90 − µ50)/1.28 = 1.593

Process capability index, Cp = (USL − LSL)/6σ = (35 − 25)/(6 × 1.593) = 1.04

RESULT:

From the graph (normal probability paper): µ50 = 31.96, µ90 = 34.4

σ = 1.593, process capability index = 1.04.

Since Cp is greater than 1, the process is capable.
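The median ranks, and a numerical version of the straight-line estimate read off the probability paper, can be obtained as in the sketch below (assuming Python with numpy/scipy; fitting the ordered weights against normal quantiles plays the role of the eyeball line, so the estimates will differ slightly from the graph readings above).

```python
# Median ranks and a least-squares version of the normal probability plot estimate,
# a sketch assuming numpy/scipy; the 30 plate weights are the ones tabulated above.
import numpy as np
from scipy.stats import norm

weights = np.sort(np.array([
    28.58, 28.78, 29.73, 29.80, 29.95, 30.23, 30.54, 30.58, 30.66, 30.96,
    31.32, 31.36, 31.58, 31.59, 31.61, 31.68, 31.92, 31.95, 32.21, 32.48,
    32.58, 32.76, 33.13, 33.31, 33.55, 33.82, 33.91, 34.16, 35.39, 35.73]))
n = len(weights)
i = np.arange(1, n + 1)
median_rank = (i - 0.3) / (n + 0.4) * 100          # plotting positions, in percent

# Fitting weight against the standard normal quantile of the plotting position
# gives slope ~ sigma and intercept ~ mu (the "eyeball" line done numerically).
z = norm.ppf(median_rank / 100)
sigma, mu = np.polyfit(z, weights, 1)
usl, lsl = 35.0, 25.0
cp = (usl - lsl) / (6 * sigma)
print(round(mu, 2), round(sigma, 3), round(cp, 2))
```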

Exercise No 11


SINGLE SAMPLING PLAN

Aim: To construct the operating characteristic (OC) curve and determine the producer's risk (α) and the consumer's risk (β) for the given sampling plan:

N = 1000, n = 25, c = 2, with AQL = 1.5% and LTPD = 6.5%

Apparatus: Sampling plan gadget, different colour beads, bins, scoop.

Theory: Acceptance sampling is the process of evaluating a portion of the product in a lot for the purpose of accepting or rejecting the lot as either conforming or non-conforming to a quality specification. Inspection for acceptance purposes is carried out at many stages in manufacturing. There are generally two ways in which inspection is carried out: (i) 100% inspection and (ii) sampling inspection. Sampling plans may be grouped into three categories: (i) single sampling plans, (ii) double sampling plans and (iii) multiple sampling plans. When a decision on acceptance or rejection of the lot is made on the basis of only one sample, the acceptance plan is known as a single sampling plan. In a single sampling plan three numbers are specified: N = lot size, n = sample size and c = acceptance number. The OC curve for an attribute sampling plan is a graph of the fraction defective in a lot against the probability of acceptance Pa. For any fraction defective p' in a submitted lot, the OC curve shows the probability Pa that such a lot will be accepted by the sampling plan. Using the sampling plan and the corresponding OC curve, the producer's risk and consumer's risk can be calculated for the given AQL and LTPD.

Procedure: Preparation of the lot.

For simulating the manufacturing situation in this exercise, good pieces are represented by brown colour beads and defectives are represented by white beads. A mixture of brown and white beads thus constitutes a real-time lot containing good and bad pieces. For preparing a lot of size 1000 containing 1% defectives, take 990 brown beads and 10 white beads. Similarly, lots with up to 9% defectives can be prepared by mixing the appropriate number of defective beads into the lot. Selection of the sampling plan: N = 1000, n = 25, c = 2. Conduct of the experiment:

Lots of size 1000 with different proportions of defectives are prepared as per the procedure explained above.

1. The number of cavities in the scoop determines the sample size; the scoop selected has 25 cavities, i.e. n = 25.
2. Starting with the 1% defective lot, the scoop is dipped into the bin containing the beads and taken out.
3. The number of defectives is noted in the observation sheet. If the number of white beads is less than or equal to the acceptance number (2), the lot is accepted; otherwise it is rejected.
4. The beads collected in the scoop are put back into the bin and the mixture is stirred well to ensure a uniform distribution of defectives in the lot. The above steps are repeated 25 times and the number of times the lot is accepted or rejected is noted down.
5. Steps 2–4 are repeated for lots with up to 9% defectives.

6. Check whether the obtained data follow the Poisson distribution using the chi-square goodness-of-fit test.
7. For the different values of percent defective (p') and probability of acceptance Pa, draw the OC curve.
8. For AQL = 1.5% and LTPD = 6.5%, read the corresponding points on the OC curve; the probability values give the producer's risk and the consumer's risk respectively.

Tabular column:

p' | np' | Pa | oi | ei = Pa × 25 | (oi − ei)²/ei
1% | 0.25 | 0.986 | 25 | 24.65 | 0.00496
2% | 0.50 | 0.920 | 25 | 23.00 | 0.1739
3% | 0.75 | 0.809 | 22 | 20.225 | 0.143
4% | 1.00 | 0.677 | 17 | 16.925 | 0.000378
5% | 1.25 | 0.544 | 18 | 13.60 | 1.423
6% | 1.50 | 0.423 | 10 | 10.575 | 0.0312
7% | 1.75 | 0.321 | 5 | 8.025 | 1.14026
8% | 2.00 | 0.238 | 10 | 5.95 | –
9% | 2.25 | 0.178 | 4 | 4.45 | –

(The last two cells are pooled: oi = 10 + 4 = 14, ei = 5.95 + 4.45 = 10.4, giving a contribution of 1.2461.)

Pa = P(0) + P(1) + P(2)

χ²calculated = Σ(oi − ei)²/ei = 4.1627

χ²critical = 11.92 for α = 0.05 and DOF = 8 − 1 − 1 = 6

Since χ²calculated < χ²critical:

Conclusion: Since the calculated chi-square value is less than the critical chi-square value, the observed data follow the Poisson distribution.
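The acceptance probabilities used for the OC curve can be generated with the Poisson approximation, as in the sketch below (assuming Python with numpy/scipy; the values obtained this way may not match the tabulated Pa column exactly, since that column was read from cumulative probability tables).

```python
# Probability of acceptance Pa for the single sampling plan (n = 25, c = 2) using the
# Poisson approximation Pa = P(0) + P(1) + P(2); a sketch assuming numpy/scipy.
import numpy as np
from scipy.stats import poisson

n, c = 25, 2
p_prime = np.arange(0.01, 0.10, 0.01)          # lot fraction defective 1% .. 9%
pa = poisson.cdf(c, n * p_prime)               # acceptance probability for each p'

for p, prob in zip(p_prime, pa):
    print(f"p' = {p:.0%}  np' = {n*p:.2f}  Pa = {prob:.3f}")
# Reading Pa at AQL = 1.5% gives the producer's risk alpha = 1 - Pa(AQL);
# reading Pa at LTPD = 6.5% gives the consumer's risk beta = Pa(LTPD).
```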

Exercise No 12

DESIGN OF EXPERIMENT: CATAPULT EXPERIMENT


Aim : To study the Design of experiment technique and to find the optimum value using catapult.
Equipment and material used: catapult equipment, different size balls, different size rubber
bands, scale.

Theory: Design of experiments (DOE) is a powerful technique used for discovering the set of process variables (or factors) which are most important to the process (or system) and then determining the levels at which these factors must be kept to optimize the process (or system) performance. It provides a quick and cost-effective method of understanding and optimizing any manufacturing process. It is a direct replacement for the hit-or-miss approach to experimentation, which requires a lot of guesswork and luck for its success in real-life situations. Moreover, the hit-or-miss approach does not take into account interactions among the factors (or variables), and therefore there is always a risk of arriving at false optimum conditions for the process under investigation.

In the past decade or so, DOE has gained increasing importance in the reduction of variability in core processes, whereby consistent product quality can be achieved. Moreover, companies striving for a six-sigma approach to achieving quality treat DOE as a key player. The author believes that DOE must be a key element of the management strategy in the 21st century in many manufacturing companies during new product and process introduction, so that robust and consistent performance of the product can be achieved in the user environment. The purpose of this experiment is to provide an insight into the role of DOE for a group of engineers and managers with the help of a simple experiment. The results have been extracted from a real experiment performed by a group of engineers in a company during a training program on DOE. The first experiment was performed to identify the key variables or factors which affect the response (in-flight distance) of interest. This is called a screening experiment, and the objective is to separate the key factors (or variables) from the trivial. Having identified the key factors, the team performed a second experiment with the objective of understanding the nature of the interactions (if present) among the key factors. The results of the experiment were analyzed using Minitab software for rapid and easier understanding of the results.

DESIGN OF EXPERIMENT APPROACH:


In this case, a design of experiments approach required 6 man hours and accuracy was within 3
inches. The design of experiment approach effectively defines a response envelope for the system.
Once the envelope is defined, any combination of factor setting can be estimated.

Factors – The following factors are considered in this case:
Stop angle
Hook attachment point (arm length)
Type of ball

Levels – These factors are each tested at only two levels, high and low. A complete set of tests is run as shown in Table 1. In Table 1, a plus sign indicates a high level and a minus sign indicates a low level. Thus, in test #1 the stop angle, the arm length and the ball are all at the low level; in test #8 all the factors are at the high setting.

Interactive effects – You will notice three other columns in Table 1: AB, BC and AC. AB is the product of the level values for A and B; the other columns are also products. These products are indicators of interactive effects between the factors.

Table 1: Test matrix

Test # | Stop Angle (A) | Arm Length (B) | Type of Ball (C) | AB | BC | AC
1 | −1 | −1 | −1 | +1 | +1 | +1
2 | −1 | −1 | +1 | +1 | −1 | −1
3 | −1 | +1 | −1 | −1 | −1 | +1
4 | −1 | +1 | +1 | −1 | +1 | −1
5 | +1 | −1 | −1 | −1 | +1 | −1
6 | +1 | −1 | +1 | −1 | −1 | +1
7 | +1 | +1 | −1 | +1 | −1 | −1
8 | +1 | +1 | +1 | +1 | +1 | +1

For each test number, three repetitions were made. In other words, the ball was hit three times at
each setting and the travel distance was recorded. From these an average value for each test is
determined.

Table 2: Measured values (all dimensions in cm)

Sl No | Trial No | Stop angle | Type of ball | Trial 1 | Trial 2 | Trial 3 | Avg (Yr)
1 | 6 | 60° | Golf | 50.5 | 51 | 51.5 | 51
2 | 8 | 60° | TT | 24 | 23.5 | 24 | 24
3 | 2 | 60° | Golf | 90 | 94 | 90.25 | 91
4 | 5 | 60° | TT | 39.5 | 42 | 40 | 41
5 | 7 | 45° | Golf | 76.5 | 76 | 76.5 | 76
6 | 1 | 45° | TT | 48.5 | 48.5 | 50 | 49
7 | 4 | 45° | Golf | 117.5 | 116 | 117 | 117
8 | 3 | 45° | TT | 84 | 81.5 | 82 | 82

Ȳ = 66.4
Data Reduction - The interesting thing is how the data is analyzed. If we average all the distances
from tests in which the hook is at the high level, then compare that with the average from all the
tests in which the hook is at the low level, we would expect to see some difference. Indeed, this
difference between high and low setting is the key to this technique. Table 3 shows the results for
those calculations.


Important variables (screening) – If we see only a small difference, we can say that the factor is not so important. Thus, the difference defines the relative importance of each factor. This also works for the combined factors, so we can see the relative importance of the interactive effects.

Table 3: Relative effects of each factor

           | A    | B    | C     | AB   | BC   | AC
Avg Y−     | 51.8 | 50.1 | 83.8  | 64.4 | 70.4 | 64.5
Avg Y+     | 81.1 | 82.8 | 49.1  | 68.5 | 62.5 | 68.4
Y (effect) | 29.4 | 32.7 | −34.7 | 4.1  | −7.9 | 3.9

From the Y values we can see that the three factors are of relatively equal importance to the
distance travelled by the ball. The interactive effects are of relatively small importance, however.

ANALYSIS OF RESULTS:
Predicting with the Empirical Model:
The information in Table 3 can be combined to produce a predictive equation for the travel distance:

Ŷ = Ȳ + (ΔA/2)·A + (ΔB/2)·B + (ΔC/2)·C + (ΔAB/2)·AB + (ΔBC/2)·BC + (ΔAC/2)·AC      (eqn. 2)

where Ŷ is the predicted travel distance of the ball, ΔA is the Y (effect) value for factor A from Table 3, and A is the scaled factor setting (scaled between −1 and +1); the other terms follow from those definitions.

Substituting the settings of test #1 and the effect values from Table 3 gives Ŷ ≈ 52.4, which is close to the 51 that was the experimental average for that test. For test #8, equation 2 predicts a distance of 80.1, less than 2 units away from the experimental value of 82. Thus we can see that the equation is a reasonable predictor of the past. Now we need to see whether it can predict intermediate values of the different factors.

Predicting the future: To keep things simple, let us set the first factor at its low level (A = −1), the third factor at its high level (C = +1), and the second factor halfway between high and low (B = 0). Therefore AB = 0, BC = 0 and AC = −1. Equation 2 then predicts a value of 32.4.

It may be noted that there is an assumption of linearity for the effect of each of the factors. If this expectation is not true, more runs are needed to reduce the variability of the results. With only two levels, a linear model is the most complex model possible.

Conclusion:

The goal was to introduce you to the field of Design of Experiments and to convince you that it has
value for engineers and scientists. The catapult experiment shows how DOE can be used to build
predictive models and show interactive effects.
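The screening calculation of Tables 1–3 and the prediction of eqn. 2 can be reproduced with a short script. The sketch below assumes Python with numpy and uses the coded design matrix of Table 1 together with the average distances of Table 2 (taking row i of Table 2 to correspond to test #i, as the document's Table 3 does); the computed effects match Table 3 up to rounding.

```python
# Main and interaction effects for the 2^3 catapult experiment, a sketch assuming numpy;
# the design matrix is Table 1 and the average distances Yr are from Table 2.
import numpy as np

A = np.array([-1, -1, -1, -1, 1, 1, 1, 1])       # coded levels for tests 1..8
B = np.array([-1, -1, 1, 1, -1, -1, 1, 1])
C = np.array([-1, 1, -1, 1, -1, 1, -1, 1])
y = np.array([51, 24, 91, 41, 76, 49, 117, 82], dtype=float)   # Avg (Yr)

def effect(col):
    """Average response at the high level minus average response at the low level."""
    return y[col == 1].mean() - y[col == -1].mean()

effects = {"A": effect(A), "B": effect(B), "C": effect(C),
           "AB": effect(A * B), "BC": effect(B * C), "AC": effect(A * C)}
y_bar = y.mean()                                   # 66.4
print(y_bar, {k: round(v, 1) for k, v in effects.items()})

# Prediction for a coded setting (eqn. 2): y_hat = y_bar + sum(effect/2 * level)
setting = {"A": -1, "B": -1, "C": -1, "AB": 1, "BC": 1, "AC": 1}   # test #1
y_hat = y_bar + sum(effects[k] / 2 * v for k, v in setting.items())
print(round(y_hat, 1))                             # close to the observed 51
```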

Exercise No 13

DESIGN OF EXPERIMENT: GOLF EXPERIMENT


Aim : To study the Design of experiment technique and to find the optimum value using Golf.

EXPERIMENTAL WORK:
Factors – The following factors are considered in this case:
Swinging angle
Length of club (club position)
Type of ball

Levels – These factors are each tested at only two levels, high and low. A complete set of tests is run as shown in Table 1. In Table 1, a plus sign indicates a high level and a minus sign indicates a low level. Thus, in test #1 the club, the swinging angle and the ball are all at the low level; in test #8 all the factors are at the high setting.

Experimentation on Equipment

Table 1: Test matrix for the golf test

Test # | Golf Ball (A) | Club Position (B) | Swinging Angle (C) | AB | BC | AC
1 | −1 | −1 | −1 | +1 | +1 | +1
2 | −1 | −1 | +1 | +1 | −1 | −1
3 | −1 | +1 | −1 | −1 | −1 | +1
4 | −1 | +1 | +1 | −1 | +1 | −1
5 | +1 | −1 | −1 | −1 | +1 | −1
6 | +1 | −1 | +1 | −1 | −1 | +1
7 | +1 | +1 | −1 | +1 | −1 | −1
8 | +1 | +1 | +1 | +1 | +1 | +1

For each test number, three repetitions were made. In other words, the ball was hit three times at
each setting and the travel distance was recorded. From these an average value for each test is
determined.

Table 2: Measured values (all dimensions in cm)

Sl No | Trial No | Swinging angle | Type of ball | Trial 1 | Trial 2 | Trial 3 | Avg (Yr)
1 | 6 | 60° | Golf | 250 | 250 | 250 | 250
2 | 8 | 60° | TT | 150 | 150 | 150 | 150
3 | 2 | 60° | Golf | 300 | 300 | 325 | 308.33
4 | 5 | 60° | TT | 150 | 125 | 125 | 133.33
5 | 7 | 45° | Golf | 150 | 156 | 175 | 160.33
6 | 1 | 45° | TT | 100 | 100 | 100 | 100
7 | 4 | 45° | Golf | 125 | 125 | 125 | 125
8 | 3 | 45° | TT | 100 | 100 | 100 | 100

Ȳ = 165.87

Data Reduction - The interesting thing is how the data is analyzed. If we average all the distances
from tests in which the hook is at the high level, then compare that with the average from all the
tests in which the hook is at the low level, we would expect to see some difference. Indeed, this
difference between high and low setting is the key to this technique. Table 3 shows the results for
those calculations.

Important variables (screening) – If we see only a small difference, we can say that the factor is not so important. Thus, the difference defines the relative importance of each factor. This also works for the combined factors, so we can see the relative importance of the interactive effects.

Table 3: Relative effects of each factor

           | Type of Ball (A) | Club Position (B) | Swinging Angle (C) | AB | BC | AC
Avg Y−     | 210.4  | 165.08 | 210.9  | 175.49 | 170.82 | 142.16
Avg Y+     | 121.33 | 166.65 | 120.75 | 156.25 | 160.91 | 189.57
Y (effect) | −89.07 | +1.57  | −90.15 | −19.24 | −9.91  | 47.41

From the Y values we can see that the type of ball and the swinging angle are the dominant factors for the distance travelled by the ball, while the club position and the interactive effects are of relatively small importance.

ANALYSIS OF RESULTS:
Predicting with the Empirical Model:
The information in Table 3 can be combined to produce a predictive equation for the travel distance:

Ŷ = Ȳ + (ΔA/2)·A + (ΔB/2)·B + (ΔC/2)·C + (ΔAB/2)·AB + (ΔBC/2)·BC + (ΔAC/2)·AC      (eqn. 2)

where Ŷ is the predicted travel distance of the ball, ΔA is the Y (effect) value for factor A from Table 3, and A is the scaled setting for the type of ball (scaled between −1 and +1); the other terms follow from those definitions. With Ȳ = 165.87, the predicted values for the eight tests are compared with the actual averages below.

Sl.no Y predicted Y actual


1 263.85 250
2 136.17 150
3 294.54 308.33
4 147.07 133.33
5 146.58 160.33
6 113.75 100
7 138.82 125
8 86.17 100

It may be noted that there is an assumption of linearity for the effect of each of the factors. If this expectation is not true, more runs are needed to reduce the variability of the results. With only two levels, a linear model is the most complex model possible.

Conclusion:
The goal was to introduce the field of Design of Experiments and to show that it has value for engineers and scientists. The golf experiment shows how DOE can be used to build predictive models and reveal interactive effects.

Exercise No 14

Simple Linear Regression


Aim : To build Simple Linear Regression model and find the dependent variable with help of
independent variable.
Apparatus: LVDT, slip gauge set.

Theory: Regression analysis is a technique for the modeling and analysis of numerical data consisting of values of a dependent variable (also called the response variable or measurement) and of one or more independent variables (also known as explanatory variables or predictors). The dependent variable in the regression equation is modeled as a function of the independent variables and the corresponding parameters (constants). The parameters are estimated so as to give a best fit to the data; most commonly the best fit is evaluated using the least-squares method.

Regression can be used for prediction, inference, hypothesis testing, and modeling of causal relationships.

Procedure:
1. Clean the slip gauges.
2. Measure each slip gauge with the help of the LVDT.
3. Tabulate the LVDT readings for 15 slip gauges.
4. Build the simple regression model.
5. Plot the graph.

Observation:

SI No Slip gauge LVDT reading


1
2
3
4
5
6
7
8
9
10
11
12
13
14
15

Calculation
Sl No | X | Y | X² | XY | Y²


∑X = ---- ∑Y= ---- ∑X2 = ---- ∑XY = ----- ∑Y2 = ----

Regression Equation

Equation of a straight line: Y = b0 + b1·X

where

b1 = [ n·ΣXY − ΣX·ΣY ] / [ n·ΣX² − (ΣX)² ]

and

b0 = Ȳ − b1·X̄
Expected graph: plot of Y (LVDT reading) against X (slip gauge size), showing the fitted straight line and its slope.
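A least-squares fit for this model can be scripted as below. This is a sketch assuming Python with numpy and scipy; since the observation table is left blank, the x and y arrays shown are hypothetical placeholder values, to be replaced by the recorded slip gauge sizes and LVDT readings.

```python
# Least-squares fit for the simple linear regression model Y = b0 + b1*X, a sketch
# assuming numpy/scipy; x and y below are hypothetical placeholders for the recorded
# slip gauge sizes and LVDT readings.
import numpy as np
from scipy.stats import linregress

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # hypothetical slip gauge sizes (mm)
y = np.array([1.02, 2.01, 2.97, 4.05, 4.99])  # hypothetical LVDT readings (mm)

n = len(x)
b1 = (n * (x * y).sum() - x.sum() * y.sum()) / (n * (x ** 2).sum() - x.sum() ** 2)
b0 = y.mean() - b1 * x.mean()
print(f"Y = {b0:.3f} + {b1:.3f} X")

# The same coefficients (plus the correlation r) can be obtained directly:
fit = linregress(x, y)
print(fit.slope, fit.intercept, fit.rvalue)
```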

