
AUTOMOTIVE ENGINEERING

Mario Vianello
vianello.clm@tin.it

PRODUCT
QUALITY DESIGN

01OFHLO

These slides can not be reproduced or distributed without permission in writing from the author. 1
COURSE CONTENTS

1. Design for Quality considering customer needs and product targets
2. Refresher on Applied Statistics fundamentals
3. Fundamentals of Reliability and of Robust Design
4. Measurement and prevention methodological instruments for Reliability
5. Criteria and methods to plan reliability experimental verifications
6. Managerial considerations: overview of Problem Solving, Lessons Learned,
   Experience accumulation, Technical Memo, global approaches like Six Sigma

PRODUCT QUALITY DESIGN

4
MAIN STATISTICAL TOOLS
(PROACTIVE ACTIONS)
TO PREVENT FAILURES
AND ENSURE RELIABILITY

CONTENTS OF THIS CHAPTER

 F.M.E.A. (Failure Mode and Effects Analysis)


- Classical approach to F.M.E.A.
- Classical F.M.E.A. sheet
- Process F.M.E.A. (in comparison with Design F.M.E.A.)
- Practical guidelines about F.M.E.A.
- 2nd generation F.M.E.A. (short mention)
- Correlation Matrix
- Risk Matrix
- Main benefits of the 2nd generation F.M.E.A.
- System F.M.E.A.
- Conclusions on F.M.E.A.
- From F.M.E.A. to F.M.E.A.
- F.M.E.A. validation
- Limits and benefits of F.M.E.A.
- Automotive regulations and related Institutions
- Ending considerations
 Parts Count Method [MIL-STD 756 B]
 Worst Case Analysis (W.C.A.):
- Intuitively
- Chain of tolerances
- Computerized mathematical models
 Fault Tree Analysis (F.T.A.)

CONTENTS OF THIS CHAPTER

 Experimental Design
- Concepts
- Interactions
- Full factorial plans
- Two quick clarifications on statistical matters
- Analysis of results (AN.O.VA. + AN.O.M.)
- Pooling
- Importance and use of “Percentage of contribution”
- Fractional factorial plans and Confounding (or Aliasing)
- Experimental Design applied to Robust Design
- Concluding remarks on Experimental Design
 Multiple Regression
 Effective approach to a complex problem
 Technical Memo and Lessons Learned
 Summary of proactive actions
 Preliminary Reliability predictions for a new product
(Reliability Plans using a Bayesian approach)

4.1. Main statistical tools aimed to prevention
The final experimental verification of reliability,
besides being demanding and expensive,
comes very close to the commercial launch,
when corrective actions have very high costs.
Therefore the emphasis must necessarily move upstream,
onto proactive activities.

Proactive activities:
1. allow a preliminary theoretical reliability prediction,
   to be compared with the customer's appreciation level
   and with the planned costs;
2. require an experimental validation that the reliability
   targets have actually been achieved on the product
   (final experimental verification).

4.1. Main statistical tools aimed to prevention

In the following we will discuss the main proactive activities
listed below (a total of 6 tools: neither 3 nor 20!):

1. F.M.E.A.: both classical and 2nd generation
2. Parts Count Method
3. W.C.A. = Worst Case Analysis
4. F.T.A. = Fault Tree Analysis
5. Experimental Design (D.O.E.) and Robust Design
6. Multiple Regression

4. Main statistical tools to prevent failure and ensure reliability

4.1
F.M.E.A.
(Failure Mode and Effects Analysis)

4.1. Main statistical tools aimed to prevention

4.1.1
F.M.E.A.
(Failure Mode and Effects Analysis)
“classical approach”

4.1.1. Classical F.M.E.A.
In corporate life, it is simply a fact of life that unforeseen events occur.

A poorly managed company is often unprepared to deal with an
emergency.
a) Since the trouble is little known, it is not immediately detected.
b) When we finally realize what is happening, it is easy to get confused,
   because we do not know exactly what remedial action to take.
c) The news runs across the whole hierarchy from bottom to top and reaches
   the Top Management.
d) The Top Management reacts with arrogance (from top to bottom),
   sharply addressing employees ("You must ...", "... for now", "Woe to you if
   you do not ...", and so on).
e) But data collected too hastily are often misleading, so the hypothesized
   causes are not the true ones and the corrective actions are ineffective.
f) At this point, we have to go back over points c to e
   under new hypotheses (and further investigation).

In the end we will manage to find the right solution, but only after a long
time and with considerable damage/costs!

4.1.1. Classical F.M.E.A.
In corporate life, it is simply a fact of life that unforeseen events occur.

Instead, a well managed company tries to predict in advance all the
adverse events (emergencies) that may occur. It draws up a
complete list of them and prescribes, for each situation, what needs
to be done and who must do it. In this way, the severity of each
emergency is reduced to a minimum.

At this point, however, for the envisaged potential emergencies,
we have enough information to take a further step: that is, to try to
reduce their probability (or frequency) of occurrence.

The F.M.E.A. (Failure Mode and Effects Analysis) is the main method,
used worldwide, to perform these two steps.

4.1.1. Classical F.M.E.A.

FMEA (Failure Mode and Effects Analysis)


is the best way to set up the design
in order to satisfy all expected functions

FMEA manages all potential failures related both to the design


(Design FMEA) and to the process (Process FMEA); evaluating
all solutions, so that preventive actions are effective.

FMEA must be managed as a main part of product/process


design, and must follow all its evolutions, in order to get a ro-
bust design and a process with a suitable capability.

4.1.1. Classical F.M.E.A.

4.1.1.1
Classical F.M.E.A. sheet

4.1.1.1. Classical F.M.E.A. sheet

Every FMEA, regardless of the level analyzed (system, subsystem
or component), is developed in two steps:

1. an analysis aimed at listing all potential failures and their
   causes (root causes);
2. a constructive step aimed at finding the most suitable
   countermeasures to avoid every failure, taking into account,
   of course, technical feasibility and costs.

A failure happens when the product does not meet a customer
requirement: so great care must be taken in capturing all the
customer's requirements in detail.

4.1.1.1. Classical F.M.E.A. sheet
The worksheet is divided into 2 macro areas

PRESENT SITUATION | RECOMMENDED ACTIONS

4.1.1.1. Classical F.M.E.A. sheet

Area for the PRESENT SITUATION


and evaluation of “RPN”

4.1.1.1. Classical F.M.E.A. sheet

Grouping of related items/functions


useful as an index

4.1.1.1. Classical F.M.E.A. sheet

Failure description:
it may be considered the engineer's perception of the failure.
It is a more detailed description than the customer's one,
but it does not get to the "root cause".
It can be the output of a brainstorming
and can be used as a starting point for drawing up the FMEA.
(This definition is not unequivocal
and may create some trouble for beginners.)

4.1.1.1. Classical F.M.E.A. sheet

Overall consequences
perceived by the (final and/or
intermediate and/or internal) customer

4.1.1.1. Classical F.M.E.A. sheet

Severity (S) of the failure mode effect
from the customer's point of view
(1…10)

It takes an equal value for all causes of the same effect,
regardless of the failure mode (and of the market class).

The SEVERITY value associated with a FAILURE MODE is that of its most
"severe" EFFECT and therefore it is always only one.
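A minimal sketch of this rule in Python (not part of the original slides; the effect names are just examples):

```python
# Illustrative sketch: the Severity of a failure mode is the Severity
# of its most severe effect, on the usual 1...10 scale.
effect_severities = {"the car cannot start": 9, "the car stalls": 10}

failure_mode_severity = max(effect_severities.values())  # -> 10: one single value per failure mode
```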
4.1.1.1. Classical F.M.E.A. sheet

The table above
is supplemented
with examples
to improve
the uniformity
of the
assessments

4.1.1.1. Classical F.M.E.A. sheet

Characteristics
classification

4.1.1.1. Classical F.M.E.A. sheet
CHARACTERISTICS CLASSIFICATION

CC - Critical Characteristic: generally safety characteristics.

SC - Significant Characteristic: generally performance characteristics.

HIC - High Impact Characteristic: for instance, characteristics
highlighted only on the supplier's designs.

4.1.1.1. Classical F.M.E.A. sheet

Technical root causes:
the "5 whys" rule to reach the true root cause
along the chain of intermediate causes.
They are detected or confirmed in the laboratory.
Each of them heads an FMEA row.
4.1.1.1. Classical F.M.E.A. sheet
The “5 whys” rule and the intermediate causes chain
(example)

Engine seizure → Why did it happen ? (1st why)

Exhaust valve hits the piston crown and breaks inside the
cylinder → Why did it happen ? (2nd why)

Badly timed actuation of the valve → Why did it happen ? (3rd why)

Camshaft jamming → Why did it happen ? (4th why)

Camshaft lubrication failed → Why did it happen ? (5th why)

Something wrong with the oil pump → ROOT CAUSE, because the oil
pump is a "buy" component, whose FMEA is a supplier's task.
4.1.1.1. Classical F.M.E.A. sheet
Of course the number of "WHYs"
may be lower or greater than 5:
the goal is to reach the true root cause.
To solve the problem, we must tackle
the root cause
(in the previous example, if we had improved
the valve or the camshaft and not the oil pump,
the results obtained would have been very poor).
In other words, the analysis must reach
the root cause,
because only this points out
effective recommended actions
(to be inserted in the right side of the FMEA worksheet):
this is a criterion to understand whether the root cause
has been reached or not.
4.1.1.1. Classical F.M.E.A. sheet
Differences among FAILURE MODES, EFFECTS and CAUSES

POTENTIAL FAILURE MODE: Power doesn't reach the spark plugs
  EFFECT (on the customer) → failure mode severity:
    The car cannot start (S = 9)
      MACROCAUSES (*): battery discharged or failed; ignition barrel contact missing;
      ignition control unit defect; electric wiring cut/shorted
    The car stalls (S = 10)
      MACROCAUSES (*): ignition control unit defect; electric wiring disjunction

POTENTIAL FAILURE MODE: Petrol doesn't reach the cylinders
  EFFECT: The car cannot start (S = 9) or the car stalls (S = 10)
    MACROCAUSES (*): petrol pump failure; ignition control unit defect;
    injectors clogged/jammed; fuel pipe obstructed/broken (leaks);
    fuel tank vent pipe clogged (suction stopped); fuel tank breaking (leaks)

POTENTIAL FAILURE MODE: Torque doesn't reach the wheels
  EFFECT: The car cannot start (S = 9) or the car stalls (S = 10)
    MACROCAUSES (*): clutch slips; accelerator cable disjunction;
    gearbox shifting impossible

(*) These "MACROCAUSES" are too general to get effective recommended actions. Each of them becomes a
POTENTIAL FAILURE MODE in a SUBASSEMBLY FMEA.
4.1.1.1. Classical F.M.E.A. sheet
Examples of product POTENTIAL FAILURE MODES
related with their EFFECTS and DESIGN CAUSES

POTENTIAL FAILURE MODE | EFFECT (on the customer) | DESIGN CAUSES (*)
Screw unloosed | vibrations | prescribed torque is incorrect; ...
Screw yielded | ... | prescribed minor diameter is too small; ...
Cracked weld bead on a cushioned support | loss of driving control | prescribed weld material is unsuitable; ...
A sheet steel welding spot broke loose | water infiltrations | prescribed minor diameter is too small; ...
Oxidized sheet steel | poor appearance | a galvanized sheet steel is not prescribed; ...
Door sheet steel strained | over-strength needed to close the door | prescribed sheet steel thickness is too thin; ...

(*) For each POTENTIAL FAILURE MODE we should list all the possible CAUSES, generally more than one.
That does not happen here, which is the shortcoming of this slide.
4.1.1.1. Classical F.M.E.A. sheet
Failure causes, failure modes and failure effects
can change according to
the point of view from which FMEA is made,
both in managerial field …

Point of view | FAILURE CAUSE | FAILURE MODE | FAILURE MODE EFFECT
Clerk | Worries | Ulcer | Stomachache
Head clerk | Ulcer | Stomachache | Absence from work
Manager | Stomachache | Absence from work | Loss of production
4.1.1.1. Classical F.M.E.A. sheet
Failure causes, failure modes and failure effects
can change according to
the point of view from which FMEA is made,
… and in technical field
Point of view | FAILURE CAUSE | FAILURE MODE | FAILURE MODE EFFECT
Customer | Gasket breakage | Oil leakage | Car stalling
Engineer | Gasket wrong mounting | Gasket breakage | Oil leakage
Company | Gasket wrong mounting | Car stalling | Customer's complaints
4.1.1.1. Classical F.M.E.A. sheet

Occurrence (O) of a specific root cause
(1…10)

It can be evaluated according to the results in the field,
and also (with a certain degree of subjectivity)
in comparison with the occurrences of other kinds of failures.

4.1.1.1. Classical F.M.E.A. sheet

We have to quantify the "occurrence" according to the probability of
the root cause (which has to be indicated for each line, since each
line of the FMEA sheet is devoted to a single root cause).
More precisely, we have to specify the probability that the root cause
produces the effect (of the failure mode). This distinction is important
in cases where the occurrence of the root cause does not always produce
the unwanted effect (but only with a certain probability).
For example, deficiencies in the signs along the production line, which recall
some warnings, may mislead the workers (who perform a certain task) more
frequently than with the correct signs, but not always.

In such cases, the probability to be indicated in the FMEA sheet is
given by the probability that the root cause (e.g. poor signage)
happens multiplied by the (conditional) probability that the effect
actually occurs (e.g. the worker actually makes a mistake), i.e. this
probability is given by the product:

P = P(effect | cause) · P(cause)
4.1.1.1. Classical F.M.E.A. sheet
Another example to illustrate the expression just seen can be the following.
Suppose that, in a production process, there is a 2% probability
that some production chips end up inside a small tank.

Based on experience, it can be assumed that:
 70% of these chips stop near the edges without creating problems (case a);
 the remaining 30% go on to occlude the exhaust pipe, which is a serious
defect (case b).

Case a (70%)    Case b (30%)

The probability that the fault actually occurs is given by the probability of having
chips inside the tank multiplied by the probability that the chips are positioned as in
case b, so as to occlude the exhaust pipe:

P = 0.02 × 0.30 = 0.006, i.e. 0.6%.
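A minimal Python sketch of this calculation, using the figures of the chips example (the function name is ours, purely illustrative):

```python
def occurrence_probability(p_cause: float, p_effect_given_cause: float) -> float:
    """P = P(cause) * P(effect | cause): probability that the root cause
    occurs AND actually produces the unwanted effect."""
    return p_cause * p_effect_given_cause

# Chips example: 2% of units get chips into the tank; 30% of those occlude the pipe.
p = occurrence_probability(0.02, 0.30)
print(round(p, 4))  # 0.006, i.e. 0.6%
```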
4.1.1.1. Classical F.M.E.A. sheet

OCCURRENCE (Fiat Group Automobiles)

APPRAISAL      FREQUENCY f [repairs/100 units]   RANKING
ALMOST ZERO    f < 0.001                          1
VERY REMOTE    0.001 < f ≤ 0.005                  2
REMOTE         0.005 < f ≤ 0.025                  3
VERY LOW       0.025 < f ≤ 0.100                  4
LOW            0.10  < f ≤ 0.25                   5
MODERATE       0.25  < f ≤ 1.25                   6
HIGH           1.25  < f ≤ 2.50                   7
QUITE HIGH     2.5   < f ≤ 10.0                   8
VERY HIGH      10    < f ≤ 20                     9
HIGHEST        f > 20                             10
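Such a banded table can be applied mechanically; below is an illustrative Python lookup assuming the thresholds above (the code is ours, not part of the Fiat Group material):

```python
# Illustrative lookup of the OCCURRENCE ranking from the field frequency
# f [repairs/100 units], following the banded table above.
OCCURRENCE_BANDS = [
    (0.001, 1), (0.005, 2), (0.025, 3), (0.100, 4), (0.25, 5),
    (1.25, 6), (2.50, 7), (10.0, 8), (20.0, 9),
]

def occurrence_ranking(f: float) -> int:
    for upper_bound, ranking in OCCURRENCE_BANDS:
        if f <= upper_bound:
            return ranking
    return 10  # HIGHEST: f > 20 repairs/100 units

print(occurrence_ranking(0.0005), occurrence_ranking(0.5), occurrence_ranking(30))  # 1 6 10
```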

4.1.1.1. Classical F.M.E.A. sheet

Photograph of the present situation


as a starting point to:
1) evaluating Detection (D);
2) and then defining “recommended
corrective actions”.
In both cases, it is useful to (separately) consider
proactive activities
and control/experimental activities.

4.1.1.1. Classical F.M.E.A. sheet

Detection (D),
i.e. the probability to detect (and eliminate) the fault
before delivering the product to the final customer
(10…1)

4.1.1.1. Classical F.M.E.A. sheet

The Detection (D):

 at first it was defined as the probability of detecting the
fault generated by the considered root cause, before
delivering the product to the final customer;
 now:
 Design FMEA: detection is defined as the probability of
highlighting the fault during the design verification
experimental tests;
 Process FMEA: detection is defined as the probability
that the production controls detect the fault.

4.1.1.1. Classical F.M.E.A. sheet

DETECTION (Fiat Group Automobiles)

APPRAISAL      PROBABILITY [%]   RANKING
ZERO           < 15 %            10
VERY REMOTE    15 ÷ 24 %         9
REMOTE         25 ÷ 34 %         8
VERY LOW       35 ÷ 44 %         7
LOW            45 ÷ 54 %         6
MODERATE       55 ÷ 64 %         5
HIGH           65 ÷ 74 %         4
QUITE HIGH     75 ÷ 84 %         3
VERY HIGH      85 ÷ 94 %         2
HIGHEST        ≥ 95 %            1

4.1.1.1. Classical F.M.E.A. sheet

Risk Priority Number


RPN = S x O x D

1…1000

4.1.1.1. Classical F.M.E.A. sheet

The risk defined by RPN is evaluated as follows

4.1.1.1. Classical F.M.E.A. sheet

To sum up, each FMEA row regards only one root cause
and contains a value of RPN, calculated as:

• POTENTIAL EFFECT(s) OF FAILURE → Severity, S
• POTENTIAL (root) CAUSE(s) (mechanism of failure) → Occurrence, O
• CURRENT DESIGN CONTROLS (DETECTION) → Detection, D

• Risk Priority Number, RPN = S x O x D

(Every line of the FMEA regards a single root cause.)
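To make the structure of one row concrete, here is a small illustrative Python sketch (the field names are ours, chosen to mirror the sheet, not an official format):

```python
from dataclasses import dataclass

@dataclass
class FmeaRow:
    """One row of the FMEA sheet = one single root cause."""
    item_function: str
    potential_effect: str   # effect as perceived by the customer
    severity: int           # S, 1..10 (that of the most severe effect)
    root_cause: str         # potential (root) cause / mechanism of failure
    occurrence: int         # O, 1..10
    current_controls: str   # current design controls (detection)
    detection: int          # D, 10..1

    @property
    def rpn(self) -> int:
        """Risk Priority Number, 1..1000."""
        return self.severity * self.occurrence * self.detection

# Invented example row, loosely inspired by the engine seizure example.
row = FmeaRow("Oil pump / lubricate the camshaft", "Engine seizure", 10,
              "Oil pump drive undersized", 3, "Bench endurance test", 4)
print(row.rpn)  # 120
```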


4.1.1.1. Classical F.M.E.A. sheet

 RPN (Risk Priority Number) is the first FMEA output.
It is the product of Severity, Occurrence and Detection
and gives the priorities for eliminating failure (root) causes.
 Causes which produce failures of Severity 9 and 10
must always be faced, in order to minimize their
occurrence (regardless of their RPN values).
 Although there is no scientific reason, it is advisable to
consider all the causes with RPN greater than or equal to
100 (reference value commonly used by the Certification
Inspector).
 In case of lack of time or resources, we should always try
to act at least on all RPN values greater than 80% of the
highest RPN calculated: e.g. if the maximum RPN value in the
FMEA is 600, we should examine all possible causes down to a
value of RPN = 0.8 × 600 = 480; if we cannot do that, we have to
provide a warning (see the sketch below).
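A sketch of these prioritization rules in Python, reusing the FmeaRow sketch shown earlier (the thresholds are the ones quoted above; the function itself is only illustrative):

```python
def causes_to_address(rows, rpn_threshold=100, fraction_of_max=0.8):
    """Select the FMEA rows (root causes) that must be faced:
    - causes whose failures have Severity 9 or 10, regardless of RPN;
    - causes with RPN >= 100 (common reference value);
    - in any case, causes with RPN >= 80% of the highest RPN in the sheet."""
    max_rpn = max(r.rpn for r in rows)
    return [r for r in rows
            if r.severity >= 9
            or r.rpn >= rpn_threshold
            or r.rpn >= fraction_of_max * max_rpn]
```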
4.1.1.1. Classical F.M.E.A. sheet

PRESENT SITUATION | RECOMMENDED CORRECTIVE ACTIONS

Recommended actions are the second output of FMEA

4.1.1.1. Classical F.M.E.A. sheet

Area of recommended
corrective actions
and
of new “RPN” evaluation

4.1.1.1. Classical F.M.E.A. sheet

Corrective actions
proposed
by the Team

4.1.1.1. Classical F.M.E.A. sheet

Approved corrective actions,


responsibility assignment
and execution scheduling

4.1.1.1. Classical F.M.E.A. sheet

Monitoring
of actions actually implemented
and, if necessary,
reminders to latecomers

4.1.1.1. Classical F.M.E.A. sheet

New evaluation
of “RPN”

STEPS THROUGHOUT TIME

1. List of suggested improvement actions.
2. Approval of (some) improvement actions: responsibility
   assignment and scheduling.
3. New RPN evaluation (supposing the improvement actions
   have been implemented).
4. Monitoring of what is actually implemented.

4.1.1.1. Classical F.M.E.A. sheet
It’s easy to understand that …
• the analysis developed in the Present Situation Area
  (with RPN evaluation) is necessary in order to focus on
  all the main points, keeping in mind the root causes of all
  failures, but it’s only the premise of the actual result;
• the actual result on the product is the analysis developed
  in the Area of recommended corrective actions and
  of new “RPN” evaluation; this area must:
   contain a list of recommended actions, which have to be
     technically feasible and compatible with the budget;
   identify a responsible person, define a deadline for each approved
     recommended action and identify somebody who monitors the work
     progress, in order to be sure that what has been approved
     will actually be done.
Of course filling in the first Area (premise) needs some skill,
but the realization of the second one (result) needs even more.
4.1.1.1. Classical F.M.E.A. sheet

The result of the whole activity must be ...
… to introduce some (few) changes to the usual
way of working.

• If no recommended action has been found, the analysis
  has not been detailed enough.
• If too many improvement actions have been identified,
  they are probably very superficial and not effective
  in increasing the product quality.
• A credible number of improvement actions may be
  around 2 to 5.

4.1.1. Classical F.M.E.A.

4.1.1.2
Process F.M.E.A.
(in comparison with Design F.M.E.A.)

4.1.1.2. Process F.M.E.A. (in comparison with Design F.M.E.A.)

The main difference of a Process FMEA
(compared to a Design FMEA) is the analyzed object,
which, in this case, is not a physical component,
but a phase of a production process.

With the Process FMEA, the concept of customer expands, because
it now includes not only the final customer, but also the following
process phases, which can be considered as "internal customers"
of the previous ones.
For this reason, in the Process FMEA, failures are listed according
to the process lay-out (instead of in a morphological order as in
the Design FMEA).
Moreover, the meaning of DETECTION is different. In the Design FMEA
it refers to the design verification experimental tests, while in the
Process FMEA it regards the effectiveness of the set process controls.

4.1.1.2. Process F.M.E.A. (in comparison with Design F.M.E.A.)

All the following processes must be considered in the Design FMEA
and have to be validated by specific Process FMEAs.

[Diagram: the chain systems/components SUPPLIERS → PLANTS → YARDS → (sales) NETWORK,
with the related processes (production/assembly and packing & transport of "buy"
components; assembly on car; production/assembly/delivery; handling/storing;
transport; storing; car final tuning up and delivery), all of which must be covered
by specific Process FMEAs.]

 In the FMEA development and validation, not only the Design, Production and Purchasing
Departments are involved, but also the Organization and Sales Departments.
 Besides component Suppliers, service Suppliers (e.g. Dealers) must be involved too.
4.1.1.2. Process F.M.E.A. (in comparison with Design F.M.E.A.)

IMPORTANT RULE

In the PROCESS FMEA,
the analysis of each process phase
must be conducted under the assumption that
the previous phases have been carried out
correctly, without any kind of fault
(because otherwise we would be investigating
corrective actions for the previous phases).
Of course this does not apply to the controls
of the investigated phase.
In essence, each single phase
is considered as a complete micro-company,
for which the previous phase is the supplier
and the next phase is the customer.
4.1.1.2. Process F.M.E.A. (in comparison with Design F.M.E.A.)

In the analysis of a whole company,
the percentage of incoming defectives constitutes a problem
attributable to the company (too) and not only to the supplier.
For example, compared to an excellent but expensive supplier,
a much cheaper supplier may be more appropriate,
even if he systematically provides lots
with a certain percentage of defectives
(which will then be managed by appropriate acceptance testing
and/or by suitable devices along the whole production process).

Similarly, when examining a single phase of a process,
the systematic (or "natural") defectiveness
incoming from the previous phase is the responsibility
of the investigated phase (too)
and not just of the previous phase.

4.1.1.2. Process F.M.E.A. (in comparison with Design F.M.E.A.)

This approach is not inconsistent with the RULE stated earlier,
because it is still assumed that no abnormality has occurred
in the previous phase:
it simply takes into account that the average quality
of what is provided by the previous phase
has some inherent limitations (systematic and non-exceptional).

It is therefore justified to try to limit, in the investigated phase,
the negative effects of the poor quality of the previous phase
(generated by common causes and not by special causes).

4.1.1.2. Process F.M.E.A. (in comparison with Design F.M.E.A.)

On the other hand, only in the analysis of this phase


we can realize the severity of the effects
due to the non-conformity
of what has been received from the previous phase.

This severity value


will be submitted to the FMEA Team of the previous phase,
where it will be used
as severity from the customer’s point of view,
just because each phase is seen
as customer of the previous one and supplier of the next.

4.1.1. Classical F.M.E.A.

4.1.1.3
Practical guidelines
about F.M.E.A.

4.1.1.3. Practical guidelines about F.M.E.A.
GUIDE:
SITUATION (analysis of the present situation) → OBSERVATION (cause search) →
ANALYSIS (hypotheses submitted to critical analysis) → ACTION (performing the proposed solutions)

KEY QUESTIONS:  WHY … ?  |  WHAT HAPPENS IF … ?  |  WHAT CAN WE DO TO … ?

WHAT (they do that):
  Why do they do that ? | What happens if they don't do that ? | What can we do to eliminate … ?
WHERE (they do that):
  Why do they do that in this place ? | What happens if they do it in another place ? | What can we do to move and/or regroup ?
WHEN (they do that):
  Why do they do that at this moment ? | What happens if they do it at another moment ? | What can we do to anticipate/delay ?
WHO (does that):
  Why is this person doing that ? | What happens if another person does it ? | What can we do to substitute/reassign ?
HOW (they do that):
  Why do they do that this way ? | What happens if they do it another way ? | What can we do to simplify ?
HOW MANY TIMES (they do that):
  Why do they do that so many times ? | What happens if they do it a lower/greater number of times ? | What can we do to decrease/increase ?

(The "WHY" column applies the "5 whys" rule; the other two columns push us to "think out of the box".)
4.1.1.3. Practical guidelines about F.M.E.A.

Example of
“thinking out of the box”

The first Fiat 600 of 1955 had a chromed aluminium profile along the sides. At
the time, mouldings of this type were considered a "shiny" frieze that was almost
essential. However, these profiles were so delicate that they dented at the
slightest touch, so their only function, which was merely aesthetic, rapidly degraded.
Today's car models often have profiles in black rubber. They would have been
considered horrible in 1955, but they are functional, as they protect the door from
minor impacts (e.g. while parking) and remain practically undeformed. At the time,
anyone who had conceived a solution of this type, with functional characteristics
(appreciated by customers), would have "thought out of the box" and would have
introduced an innovation able to drive up sales.

4.1.1.3. Practical guidelines about F.M.E.A.

WHEN MUST FMEA BE DONE ?

As it is an optimizing preventive action, FMEA must be
done before or together with the design/process definition:
after that, it cannot add any value !
In practice, the Design FMEA must be started as soon as the idea
of a new product begins to take shape. It must be systematically
updated and must be concluded before equipping starts.

4.1.1.3. Practical guidelines about F.M.E.A.

IN WHICH SITUATIONS MUST FMEA BE DONE ?

FMEA must be done (or revised) whenever something has
changed. Practically, we have to do it for:
 all new components;
 components already analyzed and experienced, but used
   in a new way (application and/or environment);
 existing components to be improved.

(That can be a very hard job with the classical FMEA, but not
with the 2nd generation FMEA.)

4.1.1.3. Practical guidelines about F.M.E.A.

WHAT KINDS OF FAILURES MUST BE CONSIDERED
IN A FMEA ?

 Of course every failure which has been detected
in the field,
 but not only; we also have to consider (for instance):
 all failures that have never happened, only by virtue of the
   good practice of the company (but what if that practice were
   dropped, e.g. for cost reduction …);
 all failures (that have never or rarely happened) which can
   produce severe effects (e.g. on safety), in order to achieve a
   "robust" design;
 etc…

PRACTICALLY ALL KINDS OF FAILURE !


4.1.1.3. Practical guidelines about F.M.E.A.

NEVERTHELESS WE MUST BE CAREFUL
NOT TO EXAGGERATE

First of all, FMEA is a checklist to consult, in order to
verify that we have avoided all the (attractive) choices which
have produced (or could produce) some criticalities.
Every system/component can have only a very small number
of failure modes: if we eliminate them, the failure probability
of the system/component is reduced by one order of magnitude.

It’s useful to consider 3 different categories of FMEA
(design, process and installation):

DESIGN FMEA
 BUY components/systems (responsibility of the SUPPLIER)
 MAKE components/systems (responsibility of the COMPANY)
 "Interfaces" for INSTALLATION (responsibility of the COMPANY,
   supported by the Supplier)

PROCESS FMEA
 BUY components/systems (responsibility of the SUPPLIER)
 MAKE components/systems (responsibility of the COMPANY)
 INSTALLATION (responsibility of the COMPANY, supported by the Supplier)

4.1. Main statistical tools aimed to prevention

4.1.2
2nd generation F.M.E.A.
(short mention)

4.1.2. 2nd generation F.M.E.A. (short mention)
Classical (or 1st generation) FMEA
should be completely rewritten
(often with few alterations)
every time a component has to be redesigned.
2nd generation FMEA is aimed at defining
“standards” and becomes the “Risk Matrix”,
which contains the best state of the art
available in the company at the moment.
Thus, on the one hand, it is not necessary to
remake the Risk Matrix for each new design,
and, on the other, it must be
systematically updated, in accordance with
“continuous improvement“ purposes.
4.1.2. 2nd generation F.M.E.A. (short mention)

Another important difference between


classical and 2nd generation FMEA
is that the latter must be preceded
by a short preliminary
Functional Analysis.
Before starting a “Risk Matrix”,
a “Correlation Matrix”
is performed.
It is intended as a simplified and condensed
Functional Analysis
mainly aimed at finding out
all the customer’s expected functions.
4.1.2. 2nd generation F.M.E.A. (short mention)

Functional Analysis BLOCK DIAGRAM (simplified)

[Simplified Functional Analysis block diagram of a SUN VISOR: blocks include HEAD,
EYE, HAND, SOLAR RADIATION, ELECTRICAL SYSTEM, TOP, HINGE & CLAMPS, SUN VISOR,
HANDHOLD, LINK & CLAMPS, INTERNAL MIRROR, TOP BAR, CEILING LIGHT, WINDSCREEN UPRIGHTS.]

4.1.2. 2nd generation F.M.E.A. (short mention)

4.1.2.1
Correlation Matrix

4.1.2.1. Correlation Matrix

The real innovation of the 2nd generation FMEA


is the introduction
of a synthetic preliminary Functional Analysis,
called Correlation Matrix.

The Correlation Matrix relates


design/process parameters
to the customer requirements (VOC),
with the aim to assure the identification
of all the functions/performances
demanded by the customers
(expected functions).

4.1.2.1. Correlation Matrix


Components
to be analyzed

4.1.2.1. Correlation Matrix


Description of the function


that the system must accomplish
using “customer words”

4.1.2.1. Correlation Matrix


Splitting
into intermediate functions
not too detailed

4.1.2.1. Correlation Matrix


Splitting
into elementary functions
in a very detailed/technical way
(measurable functions)

4.1.2.1. Correlation Matrix


Parts and physical aspects


which affect
elementary functions

4.1.2.1. Correlation Matrix

Usually, in the Correlation Matrix, we tend not to indi-


cate, in the columns of customer needs, requirements
such as cost, reliability and durability, implying that
we should always pursue the minimum cost and highest
reliability and durability.

Exception is made (and thus these types of require-


ments are indicated as well) if this is the only way to
justify the presence of some components (e.g.: anti-
friction material, washers, etc.).

4.1.2.1. Correlation Matrix


Subsystems and elements


which are not part
of the analyzed system,
but which can interact with it
and affect its performances:
it’s important to take into account
these interactions
also in their own norms

4.1.2.1. Correlation Matrix

Highlighting of the mutual relations between
components/parts and elementary functions.

These mutual relations can be indicated by an "X".
Often it's better to evaluate them by a value chosen
between 1, 3 and 9 (as in Q.F.D.): these values can be
helpful in a progressive construction and refinement of
mathematical simulation models, starting from the
strongest relations (9).

4.1.2.1. Correlation Matrix

Meaning of the values 1, 3, 9, both in


Quality Function Deployment (Q.F.D.)
and in Correlation Matrix:

9 = strong relationship, determining in a mathematical


model;
3 = medium relationship, with an impact less important
than previous one, but with effects not negligible (if
not too expensive);
1 = weak relationship, with little or dubious influence
(negligible, at least in the first rough mathematical
models).
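As an illustration of how these weights might be recorded and used, here is a small Python sketch (the functions and components named are invented, loosely inspired by the sun visor example):

```python
# The 1/3/9 weights of the Correlation Matrix stored as a dictionary:
# (elementary function, component/part) -> strength of the relation.
correlation = {
    ("shield the eyes from solar radiation", "sun visor panel"): 9,
    ("shield the eyes from solar radiation", "hinge & clamps"): 3,
    ("allow a quick look in the mirror", "internal mirror"): 9,
    ("allow a quick look in the mirror", "ceiling light"): 1,
}

# A first, rough simulation model can start from the strongest relations only (value 9).
strong_relations = {pair: w for pair, w in correlation.items() if w == 9}
print(strong_relations)
```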

4.1.2.1. Correlation Matrix
Scheme of the Correlation Matrix:

• Matrix columns: ELEMENTS / PARTS / INTERFACES resulting from the practical review
  of the system, i.e. the physical disassembly into components, parts and interfaces
  (to be inserted with the help of the Base List and of the Designs). They cover both
  the analyzed system (components and interfaces) and what lies outside of the system,
  at the system boundaries.

• Matrix rows: CUSTOMER EXPECTED FUNCTIONS, obtained by a functional disassembly from
  the basic technical function into intermediate functions and then into elementary
  functions (by sub-parts and/or areas), covering:
   final customer needs (if possible, write the first system functions with the
     words of the final customer);
   intermediate customer needs (e.g. ease of work, etc.);
   internal customer needs (e.g. workers' safety, low cost, preventing reductions
     in productivity, etc.): as these are essentially "technical" problems, they can
     be expressed directly in a technical language;
   and any other needs able to justify the presence of all the components heading
     the matrix columns; components that are not justified must be eliminated,
     according to Value Analysis principles.
4.1.2.1. Correlation Matrix

Strictly speaking, we must use 3 different Correlation Matrices:

• Design: to relate the customer expected functions with the subsystems into which
  the examined system is divided. This is the basic information and therefore it can
  never be omitted (its responsibility lies with the system-company).

• Installation: as above, but restricted to the functions affected by the
  installation only.

• Process: to identify the minimal (= least expensive) process characteristics
  consistent with the customer needs (converted into technical terms by the Design
  Correlation Matrix).

4.1.2. 2nd generation F.M.E.A. (short mention)

4.1.2.2
Risk Matrix

4.1.2.2. Risk Matrix

RISK MATRIX should be completed


by developing, one after the other,
the cells of the Correlation Matrix.

In the following, we will show the successive steps


by which we can transform the classical FMEA sheet
into the RISK MATRIX sheet.

4.1.2.2. Risk Matrix

Not necessary any more,


because “Potential Failure Modes”
have been replaced
by expected functions “denied”,
which are the
“Potential Effects of Failure”
deduced by Correlation Matrix

4.1.2.2. Risk Matrix

Not necessary any more,


because Risk Matrix
is aimed at defining
“STANDARDS”

4.1.2.2. Risk Matrix

Not necessary any more,


because Risk Matrix
is a guideline
and therefore unrelated
with the present situation

4.1.2.2. Risk Matrix

Moved to the right side

4.1.2.2. Risk Matrix

All
RECOMMENDED CORRECTIVE ACTIONS
defined with time
become standards
for designing and testing

4.1.2.2. Risk Matrix
Remaining columns of the classical FMEA (PRESENT SITUATION, CORRECTIVE ACTIONS,
and SITUATION after they have been implemented):

Item / Function | Potential Effect(s) of Failure (customer perception) | Severity |
Potential Cause(s) / Mechanism(s) of Failure | Occurrence | Detection | R.P.N. |
RECOMMENDED (and adopted) CORRECTIVE ACTIONS

RISK MATRIX columns:

ELEMENTARY FUNCTION | EFFECT perceived by the customer: denying (reduction or loss)
of an expected function | SEVERITY | Potential CAUSES of failure | CHARACTERISTIC
CLASSIFICATION | REQUIRED DESIGN AND INSTALLATION CHARACTERISTICS (their
non-observance may cause failures) | REFERENCE CRITERIA for defining the
Design/Installation characteristics or for prescribing specifications | THEORETICAL
values on the prototypes (even though they correspond to design specifications) |
PROBABILITY that the failure occurs | EXPERIMENTAL TESTS necessary for the design
validation | PROBABILITY OF NOT DETECTING THE DEFECT through the standardized
experimental tests | R.P.N.


4.1.2.2. Risk Matrix


Taken from
Correlation Matrix

4.1.2.2. Risk Matrix


The effect
perceived by the customer
is an elementary function “denied”:
in general, there can be
more than one way to deny it

4.1.2.2. Risk Matrix


As in the
classical FMEA

4.1.2.2. Risk Matrix


These columns substitute


those of corrective actions in a classical Design FMEA,
highlighting, in terms of “standard”,
the best state of art in the company

4.1.2.2. Risk Matrix


Probability (1÷10)
that a “failure root cause” occurs.
Although prescriptions of the previous three columns
have been followed,
the physical failure probability
can be rarely reduced to zero !

4.1.2.2. Risk Matrix


Standardization
of “classical FMEA corrective actions” ,
as regards the best state of art in the company
in order to plan and conduct
design verification experimental tests

4.1.2.2. Risk Matrix


Probability not to detect the defect,


supposed to be present (= conditional probability),
during the prescribed experimental tests:
it replaces “detection” of a classical FMEA,
but according to a different criterion

4.1.2.2. Risk Matrix

1. In the classical FMEA, the values (1 to 10) to be attributed to the DETECTION
   must express the danger of the situation and therefore must be high
   when DETECTION is low and low when DETECTION is high (this reversal
   does not happen with SEVERITY or with OCCURRENCE). With the RISK
   MATRIX, this problem has been eliminated by using, as column heading, the
   NO-DETECTION instead of the DETECTION. The NO-DETECTION is the
   PROBABILITY OF NOT DETECTING THE DEFECT (during the experimental
   tests to validate the design) and can be calculated as the complement to 1 of
   the DETECTION.
2. An example to clarify the meaning of conditional probability regarding the
   DETECTION. Suppose you have tested 30 components, knowing that 20 of
   them are without defects and 10 with defects. At the end of the tests, only 8
   defects have been detected in all. In this case, the DETECTION is equal to
   8/10 = 80% (and not to 8/30 ≅ 27%) and the NO-DETECTION is equal
   to (10-8)/10 = 20% (and not to (10-8)/30 ≅ 6.7%).
3. In conclusion, the use of NO-DETECTION or of DETECTION is a convention
   that does not affect the substance of things. What matters is to refer to
   the design experimental validation and to the conditional probability
   (see the small numerical sketch below).
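A small Python sketch reproducing the numbers of point 2 (the function is ours, only illustrative):

```python
def detection_and_no_detection(defective_tested: int, defects_found: int):
    """Both are conditional probabilities, referred to the defective units only
    (not to the whole tested sample)."""
    detection = defects_found / defective_tested
    no_detection = (defective_tested - defects_found) / defective_tested  # = 1 - DETECTION
    return detection, no_detection

# 30 components tested, 10 of them defective; 8 defects actually detected.
d, nd = detection_and_no_detection(defective_tested=10, defects_found=8)
print(d, nd)  # 0.8 (80%) and 0.2 (20%) -- not 8/30 nor 2/30
```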

4.1.2.2. Risk Matrix


Similar to the RPN of a classical FMEA,


but not exactly the same,
mainly for the different meaning
of “detection”

4.1.2. 2nd generation F.M.E.A. (short mention)

4.1.2.3
Main benefits
of the 2nd generation F.M.E.A.

4.1.2.3. Main benefits of the 2nd generation F.M.E.A.

Obviously, the failure modes produced by a classical brainstorming
are only a part (the failure-oriented part)
of the whole list of customer expected functions
obtained by a Correlation Matrix.

A classical brainstorming is usually aimed specifically at
failure modes and is not able to point out features
like the following:
• shape
• colours
• brightness
• ergonomic features
• tactile pleasure
• handling defects
• etc.

4.1.2.3. Main benefits of the 2nd generation F.M.E.A.
As it crosses
expected functions with product subsystems,
the Correlation Matrix provides a good completeness
of the expected functions list,
that can not be assured by classical brainstorming.

An empty column may mean:


• a component to eliminate
• or a missing customer requirement

4.1.2.3. Main benefits of the 2nd generation F.M.E.A.

We have already seen that it is not very easy


to do a new FMEA for:
 all new components;
 old components used in a new way;
 existing components to be improved.

The value of the 2nd generation FMEA as a norm


allows us
to act only on the differences (which are few):
so we need to update FMEA
only on these differences,
which can be done for all the above.
4.1. Main statistical tools aimed to prevention

4.1.3
System F.M.E.A.
(Design F.M.E.A. only)

4.1.3. (Design) System F.M.E.A.

The (Design) System FMEA links


customer needs at system level
(where it’s easier to identify them)
with the system functional specifications,
and, hence,
with the functional specifications of each
subsystems/components of the system,
without considering
design and production modes,
because this is the task of specific FMEA
for each subsystem/component.

4.1.3. (Design) System F.M.E.A.

SYSTEM FMEA

PURPOSE
Maximum compatibility of components,
in order to achieve the planned targets for the whole system

OUTPUTs
Specifications of the components to the suppliers

4.1.3. (Design) System F.M.E.A.
To perform a (Design) System FMEA is not so easy. We must follow these 3 principles.
1. “Failure” is not necessarily something “broken”. It is sufficient that a “re-
sponse” to one of the customer’s expected functions is outside its “ac-
ceptability range”. That helps to define the acceptability range (i.e. the
“minimal requirements”) for the responses to the functions required in
each subsystem.
2. Therefore, in the System FMEA, analysis does not go into design/pro-
cess details, but must stop at the functional requirements (i.e. at the
specifications for Suppliers). Otherwise, there is the risk that the System
FMEA becomes the sum of all the subsystems' specific FMEAs (which
would generate a mess of paper, both enormous and useless!). Some-
thing similar is done for all the “buy” components, whose features,
performances, mission profiles and validation tests we specify to the
Suppliers. The technical solutions are not specified, because Suppliers
are the specialists!
3. Before beginning a System FMEA, it’s necessary to have performed a
functional block diagram of the whole system in order to optimize
its deployment into subsystems with their own detailed FMEAs (made
by Suppliers for “buy” components). An important preliminary to that is
to have performed a System Functional Analysis.
These slides can not be reproduced or distributed without permission in writing from the author. 110
4.1.3. (Design) System F.M.E.A.

[ Source: VDA - Verband der Automobilindustrie, Qualitätsmanagement in der Automobilindustrie - Sicherung der Qualität, vol. 4, System FMEA ]

These slides can not be reproduced or distributed without permission in writing from the author. 111
4.1.3. (Design) System F.M.E.A.

Operational steps are listed below and illustrated in the next slide:

1. First of all, a FUNCTIONAL BLOCK DIAGRAM must be defined.


Designs and the Bill of Materials may give useful support (the Bill of Materials
is mainly oriented to commercial purposes and therefore, for this application,
it often has to be adjusted to a functional logic).
2. Then a Functional Analysis (Interface Matrix or Correlation Matrix)
is to be drawn up at the system level.
3. At this point, we are ready to perform a System FMEA (or a Risk Ma-
trix at the system level), taking care to highlight the minimal require-
ments of each subsystem/component at a functional level (without
specifying technical details).
4. Now we can pass to the Functional Analysis (Interface Matrix or
Correlation Matrix) of each subsystem/component into which the
whole system has been split.
5. After that, a specific FMEA (or a Risk Matrix) may be performed, this
time examining in detail all technical aspects.
These slides can not be reproduced or distributed without permission in writing from the author. 112
4.1.3. (Design) System F.M.E.A.

These slides can not be reproduced or distributed without permission in writing from the author. 113
4.1. Main statistical tools aimed to prevention

4.1.4
Conclusions on F.M.E.A.

These slides can not be reproduced or distributed without permission in writing from the author. 114
4.1.4. Conclusions on F.M.E.A.

4.1.4.1
From F.M.E.A. to F.M.E.A.

These slides can not be reproduced or distributed without permission in writing from the author. 115
4.1.4.1. From F.M.E.A. to F.M.E.A.

It’s worth observing that almost every proactive activity


originates from FMEA.

After these activities have been concluded, it’s always the FMEA
which gets and organizes the results, converting them into stan-
dard prescriptions (Technical Memo).

It doesn’t make any sense to repeat demanding investigations (ex-


perimental or analytical), when it is possible to use (or at least to ar-
range) the results of similar studies already developed.

The philosophy of 2nd generation FMEA is that Risk Matrix sum-


marizes the results of all prediction/prevention activities so far
developed.

These slides can not be reproduced or distributed without permission in writing from the author. 116
4.1.4. Conclusions on F.M.E.A.

4.1.4.2
F.M.E.A. validation

These slides can not be reproduced or distributed without permission in writing from the author. 117
4.1.4.2. F.M.E.A. validation

If FMEA was done in a correct way,


during experimental tests,
no (or very few) defects would occur.

But we have to distinguish between a simple component


(with a very small number of potential failures)
and a complex system (with many potential failures).

Regarding the above, the SIX SIGMA approach has introduced


the concept of opportunities, which are the
“chances of failure that can be potentially attributed
to a given component/subsystem”.

These slides can not be reproduced or distributed without permission in writing from the author. 118
4.1.4.2. F.M.E.A. validation
The number of the opportunities
of a given component/subsystem can be taken from its FMEA.
The number of opportunities may be considered
equal to one of the following:
• the number of effects (of the failure modes)
• or the number of failure modes
• or the number of root causes of failure modes.

Every company can choose, among these three alternatives,


the one nearest to its situation and use it as a fixed reference.

The ratio
of the number of defects experimentally detected
to the number of opportunities
is called DPO (Defects Per Opportunity)
and is a good indicator of the (poor) validity of the FMEA.
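
As a worked illustration (the figures below are purely hypothetical, not taken from the course material), the DPO indicator can be computed like this:

```python
# Minimal sketch of the DPO (Defects Per Opportunity) indicator.
# Both numbers are invented, only to show the calculation.
defects_detected = 3     # defects found during the experimental tests
opportunities = 120      # opportunities counted from the FMEA (e.g. number of root causes)

dpo = defects_detected / opportunities
print(f"DPO = {dpo:.3f}")   # 0.025 -> the lower the DPO, the better the FMEA anticipated the defects
```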
These slides can not be reproduced or distributed without permission in writing from the author. 119
4.1.4. Conclusions on F.M.E.A.

4.1.4.3
Limits and benefits
of F.M.E.A.

These slides can not be reproduced or distributed without permission in writing from the author. 120
4.1.4.3. Limits and benefits of F.M.E.A.
The most important limits, still relevant today, are:
 the scales from 1 to 10 of Occurrence and Detection can
be defined according to criteria which are rather free (whi-
le for Severity the criteria always follow the standards pre-
viously exposed);
 evaluating the RPN as the product S x O x D implies attributing
to S, O and D the same weight in the RPN evaluation: which
is not always true.

The consequence is that the RPN does not always give an intervention


priority consistent with the choices the Designers would make, based on
good sense. For instance, the sets of three S, O, D values equal to
9, 3, 1 and equal to 3, 3, 3 both give a RPN value equal to 27, but
a result of an equal priority value is questionable !
Several studies have been carried out to overcome these limits, but
the proposals were too complicated to be implemented worldwide.
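
The arithmetic of that example can be checked directly; a minimal sketch in Python:

```python
# Two (S, O, D) triples from the example above: both give RPN = 27,
# although a Severity of 9 intuitively deserves a higher priority than a Severity of 3.
def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

print(rpn(9, 3, 1))  # 27
print(rpn(3, 3, 3))  # 27
```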
These slides can not be reproduced or distributed without permission in writing from the author. 121
4.1.4.3. Limits and benefits of F.M.E.A.

To minimize the consequences of these subjective aspects, a


good way of action can be the following:

1. create a Team;
2. before starting the FMEA, share the basic criteria to
give each score (to S, O and D);
3. start individually, giving preliminary scores (accor-
ding to the shared criteria) and calculating consequent
RPN values;
4. discuss together all the previous values individually
assigned;
5. assign jointly the final values and calculate the final
RPN values.

These slides can not be reproduced or distributed without permission in writing from the author. 122
4.1.4.3. Limits and benefits of F.M.E.A.

The benefits of following the FMEA methodology (either classical


or 2nd generation), instead of proceeding in an intuitive way,
are essentially two:

 an analytical approach which gives considerable help


to make decisions on an objective basis and to follow
continuous improvement;
 the creation of a systematic and well organized docu-
mentation of the design (of both the product and its
production process).

Of course the quality of results is related to


the competence and the creativity of the Team.

These slides can not be reproduced or distributed without permission in writing from the author. 123
4.1.4. Conclusions on F.M.E.A.

4.1.4.4
Automotive regulations
and related Institutions

These slides can not be reproduced or distributed without permission in writing from the author. 124
[Diagram: automotive regulation Institutions at national and international level.
Recoverable labels: GERMANY, ITALY, NATIONAL LEVEL, and the Automotive Industry
Action Group (DaimlerChrysler, Ford Motor Company, General Motors Corp.).]
These slides can not be reproduced or distributed without permission in writing from the author. 125
4.1.4.4. Automotive regulations and related Institutions
SAE International (Society of Automotive Engineers) - U.S.A. Society
operating at international level with the support of international Contributors.
It promotes important Congresses and publishes many influential journals.
AIAG (Automotive Industry Action Group) - Organization providing norms
shared by DaimlerChrysler, Ford Motor Company and General Motors Cor-
poration: it’s closely connected with SAE.
ISO (International Organisation for Standardization) - Worldwide Organiza-
tion aimed at defining International Standards regarding products, services,
processes, materials, systems, organizational and managerial methods, etc.
VDA (Verband der Automobilindustrie) - German Association of the Automo-
tive Industry, which connects vehicle manufacturers, partners and suppliers.
It is the editor of many interesting publications (but in German language only):
of special interest are a fundamental Manual on the Weibull distribution and
its applications and a Norm on System FMEA.
ANFIA (Associazione Nazionale Filiera Industria Automobilistica) - Italian
Association, which is the support and mouthpiece of the associated Indu-
stries for all problems (technical and otherwise) regarding the mobility and
the transport of people and goods.
These slides can not be reproduced or distributed without permission in writing from the author. 126
4.1.4.4. Automotive regulations and related Institutions

These slides can not be reproduced or distributed without permission in writing from the author. 127
4.1.4.4. Automotive regulations and related Institutions

These slides can not be reproduced or distributed without permission in writing from the author. 128
These slides can not be reproduced or distributed without permission in writing from the author. 129
4.1.4.4. Automotive regulations and related Institutions

These slides can not be reproduced or distributed without permission in writing from the author. 130
4.1.4.4. Automotive regulations and related Institutions

These slides can not be reproduced or distributed without permission in writing from the author. 131
4.1.4.4. Automotive regulations and related Institutions

These slides can not be reproduced or distributed without permission in writing from the author. 132
4.1.4.4. Automotive regulations and related Institutions

FMEA
(Failure Mode and Effects Analysis)

Second edition, January 2007

These slides can not be reproduced or distributed without permission in writing from the author. 133
4.1.4. Conclusions on F.M.E.A.

4.1.4.5
Ending considerations

These slides can not be reproduced or distributed without permission in writing from the author. 134
4.1.4.5. Ending considerations

Among different tools of prediction/prevention,


FMEA is the most widespread and used,
because it’s easy to understand and not demanding.

In practice, the only reason not to develop a FMEA


is to have one (valid and updated) already available !

Nevertheless it’s clear that filling in a sheet is not enough


to be sure of obtaining a highly reliable product.

These slides can not be reproduced or distributed without permission in writing from the author. 135
4.1.4.5. Ending considerations

The sheet is only a guideline


(although precious)
to an ordered and consistent
development of ideas.

Only the Team competence produces the ideas


which allow excellence to be reached !
These slides can not be reproduced or distributed without permission in writing from the author. 136
4. Main statistical tools to prevent failure and ensure reliability

4.2
Parts Count Method
[MIL-STD 756 B, Reliability Modeling & Prediction, nov. 1991]

These slides can not be reproduced or distributed without permission in writing from the author. 137
4.2. Parts Count Method

In the preliminary stage of a project, the “Parts Count Method“ rou-


ghly compares and estimates the reliability of complex units by adding
the “unreliabilities” (i.e. the failure probabilities)
of all the elementary components used.

For the failure frequency of each elementary component, the average


values of the related kind of element are assumed (e.g. derived from
MIL-HDBK-217 or, better, from Customer Service warranty data),
always taking into account the operating environment of the
component. The method is so simple that it may seem trivial. This is an ad-
ditional reason never to forget to apply it in Reliability verifications.

This method has proved useful for reliability predictions based on a


comparison between systems of different configuration (usually one
known and one in development), but having the same uses. The
German VDA recommends this method for a quick and simple
preliminary evaluation of Reliability.

These slides can not be reproduced or distributed without permission in writing from the author. 138
4.2. Parts Count Method
EXAMPLE

PARTS COUNT METHOD


application example on two car ELECTRICAL SYSTEMS

                                  Failure frequency        POWER system                    SIGNAL system
CLASSES OF COMPONENT              of each unit             ( known )                       ( new = unknown )
                                  [ repairs /              No. of       Overall failure    No. of       Overall failure
                                  (100 pieces x year) ]    components   probability for    components   probability for
                                                                        all components                  all components
                                                                        of a class                      of a class

Wired cable                             0.0228                400            9.120             12            0.274

ECU (Electronic Control Unit)           0.192                   6            1.152             40            7.680

TOTALS                                                                       10.27                           7.95

These slides can not be reproduced or distributed without permission in writing from the author. 139
4.2. Parts Count Method

As we have seen, the Parts Count Method is a "proactive action"


aimed at highlighting the most critical points since the beginning of
the design, before going on with the Product Development Process.
Omitting the Parts Count Method involves the risk of not immediate-
ly intercepting some big problem, which would be detected only during
the final reliability experimental tests (after testing prototypes and
then close to the Commercial Launch!). But, at this point, the pos-
sible corrective actions (assuming, without conceding, that they are
still feasible!) almost always have astronomical costs!
Although its results are essentially rough, the application of the Parts
Count Method is of the utmost importance for prevention and
therefore it must be carried out every time it is possible, as a rou-
tine verification of the design (similar to the quick check we do when
performing a multiplication manually!).

These slides can not be reproduced or distributed without permission in writing from the author. 140
4.2. Parts Count Method
MAIN STEPS OF PARTS COUNT METHOD
1. Identify an already known system, suitable to be compared with the one in
development.
2. Decompose the two systems into a (different) number of parts of the same kinds:
this activity requires some experience.
3. Estimate, even roughly (since this same value will be used for both systems),
the failure frequency of each kind of part.
4. Count the number of parts in the designs of the two compared systems.
5. Perform calculations separately for each compared system, multiplying the
number of presences (in the designs) of each part by the corresponding failure
frequency and then summing the products.

Remark - Comparing the system in development with one already on the market (and
therefore known), rather than just working on the system in development, makes the
estimate much more meaningful. Furthermore, given that we know the actual failure
frequency of the known system, we can divide this failure frequency by the one (again
for the known system) calculated with the Parts Count Method and thus obtain a
ratio between reality and model. This ratio, at least to a first approximation, can be
applied also to the system in development in order to get a more realistic estimate.
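
A minimal sketch of these steps in Python, using the figures from the electrical-system example above (the "actual" field failure frequency of the known system is an invented value, added only to illustrate the correction ratio described in the Remark):

```python
# Parts Count Method: sum of (number of parts x failure frequency) for each class.
# Failure frequencies in repairs / (100 pieces x year), as in the example table.
failure_freq = {"wired cable": 0.0228, "ECU": 0.192}

power_system  = {"wired cable": 400, "ECU": 6}    # known system
signal_system = {"wired cable": 12,  "ECU": 40}   # system in development

def parts_count(parts):
    return sum(n * failure_freq[kind] for kind, n in parts.items())

pc_known = parts_count(power_system)     # 10.27
pc_new   = parts_count(signal_system)    #  7.95

# Remark: scale the estimate of the new system by the "reality / model" ratio of the known one.
actual_known = 12.0                      # invented field value for the known system
corrected_new = pc_new * actual_known / pc_known
print(round(pc_known, 2), round(pc_new, 2), round(corrected_new, 2))
```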
These slides can not be reproduced or distributed without permission in writing from the author. 141
4. Main statistical tools to prevent failure and ensure reliability

4.3
Worst Case Analysis
(W.C.A.)

These slides can not be reproduced or distributed without permission in writing from the author. 142
4.3. Worst Case Analysis (W.C.A.)

D E F I N I T I O N


The Worst Case Analysis


is a method to identify and analyze
all worst possible operating conditions
in which a system can reasonably operate.

These slides can not be reproduced or distributed without permission in writing from the author. 143
4.3. Worst Case Analysis (W.C.A.)

E X A M P L E
Let us suppose we want to define/find the worst steering
column that can reasonably be produced by our production
process. Let’s suppose we know that the failure probability of
the steering column depends on:
• Size: diameter near the lower limit of tolerance.
• Intrusions and blowholes: related to the quality of the
material provided by the supplier.
• Surface roughness: that can generate cracks (which
propagate, stop and then re-start to propagate ...).
Assuming that each of these conditions has a probability of 1
per thousand of occurring, to find the worst steering column
we should examine (with non-destructive tests) at least 10^9
pieces!
These slides can not be reproduced or distributed without permission in writing from the author. 144
4.3. Worst Case Analysis (W.C.A.)

WORST CASE ANALYSIS


This principle can be applied (at least)
to 3 increasing levels of commitment:

 Intuitively
 Chain of tolerances
 Mathematical models

These slides can not be reproduced or distributed without permission in writing from the author. 145
4.3. Worst Case Analysis (W.C.A.)

4.3.1
Intuitively

These slides can not be reproduced or distributed without permission in writing from the author. 146
4.3.1. Intuitively

We must not assume the mean values of the loads as design parameters,
but their most suitable percentiles (quantiles): e.g. the 90th or 95th percentile.

At an intuitive level it is important to choose nominal values


and tolerances (design) and to plan the reliability experimen-
tal verifications (testing) in conditions considered to be the
worst that can reasonably occur.

This is fine as long as the design (and/or test) parameters are very few
(1 or 2). If they are more, we risk guarding against really unlikely events.
For example, let us consider 3 different parameters, each with a probability
of 2.7‰ of being in the worst conditions. The amount 2.7‰ is related to the
interval mean ± 3s, typical of the Plant-Quality approach. The proba-
bility of finding all three parameters simultaneously in their worst condi-
tions on the same piece is (2.7x2.7x2.7)/(1000x1000x1000), correspon-
ding to 19.7 billionths, i.e. 0.0197 ppm !

These slides can not be reproduced or distributed without permission in writing from the author. 147
4.3. Worst Case Analysis (W.C.A.)

4.3.2
Chain of tolerances

These slides can not be reproduced or distributed without permission in writing from the author. 148
4.3.2. Chain of tolerances

The chain of tolerances breaks the deadlock of the prece-


ding paragraph, allowing us to assess, in probabilistic terms
and by using appropriate software, the combined effect of
the parameters affecting the investigated phenomenon.

In other words, we can evaluate what is the probability that a “customer


expected function” exceeds a set limit, depending on the process capa-
bility (actual or estimated) for each parameter(*).

(*) The process capability (at the time of sampling, which must occur in the absence of
"special causes" of variation) is defined by the interval, 6 s wide, ideally between:
mean - 3 s and mean + 3 s,
where the standard deviation s (as well as the mean) is estimated on the basis of samples.

These slides can not be reproduced or distributed without permission in writing from the author. 149
W.C.A.: example of application of CHAIN OF TOLERANCES
to a HOUSING FOR BATTERIES OF A RADIO RECORDER

The length of each battery has a relatively high variability,
defined by the mean value l and by the standard deviation sl :
each of the three lengths l1, l2, l3 is a random variable z(l , sl ).

If the housing has a total length l tot ...
 … too small: the insertion of the last battery may be difficult;
 … too large: the force exerted by the spring may be insufficient
   to ensure the contact in presence of vibrations.

l ± 3 sl is the interval that defines the capability of the batteries
production process. Outside of this range only 2.7 ‰ of the pieces
are produced.

The calculation of the distribution of the sum of several distributions
is quite easy:
        l tot = l1 + l2 + l3 = 3 l
        s tot = √( s1² + s2² + s3² ) = √3 · sl

The wrong choice of assuming, as a worst case, that all three batteries
simultaneously have their length at the same limit of the natural
tolerances (e.g. the upper one, l + 3 sl , with a total length of 3 l + 9 sl )
means guarding against an event with a probability of occurrence of:
        (2.7‰ / 2)^3  ≈ 0.0025 ppm, i.e.
less than 1 in 400 millions !

Unless we manage safety problems,
it is preferable to consider only events with
a probability of occurrence of a few per thousand.
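
A minimal numerical sketch of this chain of tolerances in Python (the battery mean length, standard deviation and housing length below are assumed values, used only for illustration):

```python
import math
from statistics import NormalDist

# Assumed battery data (illustrative, not from the slides)
l_mean, l_std = 50.0, 0.2          # mm, single battery
n = 3                              # batteries in series

# Chain of tolerances: means add up, variances add up
tot_mean = n * l_mean                      # 150.0 mm
tot_std = math.sqrt(n) * l_std             # sqrt(s1^2 + s2^2 + s3^2) ≈ 0.346 mm

# Probability that the stack does not fit an assumed housing length of 150.8 mm
housing = 150.8
p_too_long = 1 - NormalDist(tot_mean, tot_std).cdf(housing)
print(f"{p_too_long:.2%}")                 # ≈ 1% of stacks would be hard to insert

# Naive worst case (all three batteries at l_mean + 3*l_std): an almost impossible event
print(3 * (l_mean + 3 * l_std))            # 151.8 mm, probability ≈ (2.7‰ / 2)^3
```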
These slides can not be reproduced or distributed without permission in writing from the author. 150
4.3.2. Chain of tolerances
In complex cases, the expressions already seen in the PROBABILISTIC DESIGN
can be used also for the CHAIN OF TOLERANCES.

[Table: useful expressions to evaluate the means and standard deviations of
algebraic functions of independent, normally distributed random variables
(and of generic real deterministic constants); columns: ALGEBRAIC FUNCTION,
Mean, Standard deviation.]

These expressions can be applied to the algebraic variables
in the individual steps of the usual mathematical expressions
(maybe with the help of EXCEL).
Obviously, for complex cases, it is preferable
to use appropriate software, commercially available.

These slides can not be reproduced or distributed without permission in writing from the author. 151
4.3. Worst Case Analysis (W.C.A.)

4.3.3
Computerized
mathematical models

These slides can not be reproduced or distributed without permission in writing from the author. 152
4.3.3. Computerized mathematical models

In all cases (certainly not many) when we have a mathematical


model able to provide the “customer expected functions” de-
pending on different design/process parameters, we can con-
duct simulated studies.

Mathematical/functional models such as C.A.E. (Computer Aided


Engineering) are very important. They apply the Monte Carlo
method with the following purpose:
• having as INPUT the statistical distribution of de-
sign/process parameters (which is something more
than the mere process capability of the process),
• they provide as OUTPUT the distribution of the cu-
stomer expected function, which can be compared
with its tolerability range.

These slides can not be reproduced or distributed without permission in writing from the author. 153
4.3.3. Computerized mathematical models

Compared to traditional software, current mathematical models


have the following new features:
 they relate the customer expected functions with the design
parameters;
 and, since the beginning of the project, they take into account the
actual capability of the production process that will be used …

… in full agreement with requirements


of WORST CASE ANALYSIS
just seen.

Their construction requires finding sufficient experimental information (e.g.


by applying Experimental Design, which will be discussed later) and also
requires a laborious preparation of the software. So these activities are
challenging and consequently such models are not so many.

These slides can not be reproduced or distributed without permission in writing from the author. 154
4.3.3. Computerized mathematical models

Monte Carlo Method (nothing to do with the casino)

It is a general purpose simulation method, because it is conceptually


simple. But it requires a very large number of calculation iterations.
This is not a problem if we use a computer (which is however ne-
cessary).

For example, let us suppose we want to determine the “response” Y of an expected


function, having a mathematical model able to calculate it as a function of some
input parameters (e.g. 3: A, B and C).

Using the average values of these parameters (A, B and C in the example), with
only one sequence of calculation, we would be able to determine the average
value of the response Y, but we can not know anything about its distribution.

The input of the Monte Carlo method is the distribution (PDF = Probability Density
Function) of each considered parameter (A, B and C in the example). Obviously
these distributions originate from their process capability (or have been estimated).

These slides can not be reproduced or distributed without permission in writing from the author. 155
4.3.3. Computerized mathematical models
Monte Carlo Method

[Figure: distributions of parameters A, B and C, each with its PROCESS
CAPABILITY interval from m - 3s to m + 3s.]

Application of the Monte Carlo method requires 6 basic steps:


1. From each of the distributions of the input parameters we extract a certain
number, nA, nB, nC, ….. of random values consistent with their specific
distribution (in the example let us suppose nA = nB = nC = 20, although in
the figure, for simplicity, only 5 points are graphically represented).

These slides can not be reproduced or distributed without permission in writing from the author. 156
4.3.3. Computerized mathematical models
Monte Carlo Method
[Figure: inverse-transform sampling of random numbers. INPUT: 50 random
numbers uniformly distributed between 0 and 1 (interpreted as cumulative
probabilities) and any cumulative distribution function (e.g. normal, with
mean m = 77.42 mm and std deviation s = 10.04 mm). OUTPUT: 50 random
numbers distributed according to the set normal distribution.]

Using the method illustrated by the diagram above (almost the same as that
already used for the Normal Probability Plot, GPN), it is possible to extract
an n-tuple of random numbers distributed according to an assigned
distribution. Many software packages include a special command to do this
automatically.

These slides can not be reproduced or distributed without permission in writing from the author. 157
4.3.3. Computerized mathematical models
Monte Carlo Method
2. We form all the nA . nB . nC . ….. combinations of values (in this example, there are:
20 . 20 . 20 = 8,000 triples of values of the parameters A, B and C) and, for each triple,
the value of the response y is determined by the mathematical model.
3. We get nA . nB . nC . ….. (8,000 in this example) values of the response y and we can
arrange them in a histogram.
4. We interpolate the histogram with a continuous curve, obtaining the distribution of the
response y.
5. We can superimpose the “tolerability range" on this distribution.
6. Finally, we calculate the area of the two tails outside the “tolerability range”, obtaining a
good estimate of the percentage of dissatisfied customers.

If the number of compute cycles is too heavy and too long even for a computer, we can try to reduce it
by using fractional factorial plans (D.O.E. = Design Of Experiments), which will be discussed later in
connection with the Experimental Design.
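
A minimal sketch of these six steps in Python (the parameter distributions, the response model y = f(A, B, C) and the tolerability range are all invented, purely illustrative choices):

```python
import itertools
import random

random.seed(0)

# Step 1: draw random values of each parameter from its (assumed) normal distribution
nA = nB = nC = 20
A = [random.gauss(10.0, 0.5) for _ in range(nA)]
B = [random.gauss(5.0, 0.2) for _ in range(nB)]
C = [random.gauss(2.0, 0.1) for _ in range(nC)]

def response(a, b, c):
    # Assumed mathematical model of the customer expected function
    return a * b - 3.0 * c

# Steps 2-3: all 20 x 20 x 20 = 8,000 combinations and their responses
y = [response(a, b, c) for a, b, c in itertools.product(A, B, C)]

# Steps 5-6: fraction of responses outside an assumed tolerability range
lo, hi = 40.0, 48.0
outside = sum(1 for v in y if v < lo or v > hi) / len(y)
print(f"Estimated dissatisfied customers: {outside:.1%}")
```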
These slides can not be reproduced or distributed without permission in writing from the author. 158
4. Main statistical tools to prevent failure and ensure reliability

4.4
Fault Tree Analysis
(F.T.A.)

These slides can not be reproduced or distributed without permission in writing from the author. 159
F.T.A.
Fault Tree Analysis: example of a FAULT TREE for an ELECTRIC FIRE

[Fault tree diagram.
Top event (SYSTEM / product level): “The electric fire does not heat up any more”.
Intermediate events (SUBSYSTEM / parts level): “Both electrical resistances
do not work” and “Power does not reach the electrical resistances”.
Elementary events (COMPONENT / elements level): “Upper electrical resistance
does not work”, “Lower electrical resistance does not work”, “Plug breakage”,
“A broken wire”, “Switch breakage”, “Lack of power supply (e.g. grid under repair)”.]

We consider here the Top event “The electric fire
does not warm up any more”. Another possible
Top event may be “The electric fire heats up in
degraded mode (only one resistance working)”.
These slides can not be reproduced or distributed without permission in writing from the author. 160
4.4. Fault Tree Analysis (F.T.A.)

F.T.A. is a top-down approach which provides a separate analysis for each


failure mode of the examined system. Of course, we analyze only the fai-
lure modes considered as significant. The failure mode at the top of each
analysis is called top event and it is regarded as an unwanted effect (root)
from which all possible causes must be deployed (branches), until we reach
the root-causes (= elementary events). Appropriate symbolism highlights
the logical links, AND (conjunction) or OR (disjunction), between the
different levels of causes.

The F.T.A. analysis provides two distinct OUTPUTs:


 Quantitative: estimate of the top event probability from the know-
ledge of elementary events probabilities.
 Qualitative: each critical path set connecting elementary events
with the top event is characterized by a minimum number of inde-
pendent circumstances, called minimal cut set, which must occur
simultaneously to produce the top event. This minimum value is called
the rank (or the level) of the critical path set. Regardless of probabili-
ties of elementary events, we must act starting from the critical path
set of lowest rank, providing some redundancy.
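
A minimal sketch of the quantitative output in Python, with invented probabilities for the elementary events of the electric fire example (assuming independent events, an AND gate multiplies probabilities, an OR gate combines them as 1 − Π(1 − p)):

```python
# Invented elementary event probabilities, for illustration only
p_upper_resistance = 0.02
p_lower_resistance = 0.02
p_plug   = 0.001
p_wire   = 0.005
p_switch = 0.003
p_grid   = 0.010

def gate_and(*probs):            # all input events must occur
    result = 1.0
    for p in probs:
        result *= p
    return result

def gate_or(*probs):             # at least one input event occurs (independent events)
    none = 1.0
    for p in probs:
        none *= (1.0 - p)
    return 1.0 - none

both_resistances = gate_and(p_upper_resistance, p_lower_resistance)   # rank-2 cut set
no_power = gate_or(p_plug, p_wire, p_switch, p_grid)                  # rank-1 cut sets
p_top = gate_or(both_resistances, no_power)
print(f"Top event probability ≈ {p_top:.4f}")
```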

These slides can not be reproduced or distributed without permission in writing from the author. 161
4.4. Fault Tree Analysis (F.T.A.)

COMMONLY USED SYMBOLS

These slides can not be reproduced or distributed without permission in writing from the author. 162
4.4. Fault Tree Analysis (F.T.A.)

SIMPLE EXAMPLE

These slides can not be reproduced or distributed without permission in writing from the author. 163
4.4. Fault Tree Analysis (F.T.A.)

WEAK POINTS & STRONG POINTS

 The F.T.A approach requires a “tree” for every conside-


red top event.
 The F.T.A approach is usually used for relatively complex
systems, especially where safety is involved, because their
failure frequency is extremely low and therefore can not be
experimentally verified: so F.T.A. appears to be the only
way.
 Generally F.T.A. requires the cooperation of 2 Specialists:
an Expert of the design and an Expert of the F.T.A. method.
 For the same system, the application of the F.T.A. method
requires about 1÷2 months for the tree of the first top
event and about half the time for each top event after the
first.
These slides can not be reproduced or distributed without permission in writing from the author. 164
4.4. Fault Tree Analysis (F.T.A.)

One might ask: "Which is most used: F.M.E.A. or F.T.A. ? And


what are the circumstances in which we prefer a method rather
than the other? “

 F.M.E.A. is always used. It is the preliminary activity for any


other analysis method and the situations in which it can be
omitted without risk are very rare.
 F.T.A. generally requires more time than F.M.E.A. and moreo-
ver it covers only one top-event. It is imperative to develop it
for safety components, because their failure frequency is very
low and can not be experimentally verified (paradoxically, for
components for which reliability is crucial, we must give up the
experimental verifications and be content with calculated esti-
mates!).

These slides can not be reproduced or distributed without permission in writing from the author. 165
4. Main statistical tools to prevent failure and ensure reliability

4.5
EXPERIMENTAL DESIGN

These slides can not be reproduced or distributed without permission in writing from the author. 166
4.5. Experimental Design

4.5.1
Concepts

These slides can not be reproduced or distributed without permission in writing from the author. 167
4.5.1. Concepts

Experimental Design is a method


 experimental;
 based on Statistics;

aimed at identifying:
 the variables most influential on a not well
known phenomenon;
 and the values that should be assigned to
them for the best results on the product,
with the minimum number of tests sufficient to
ensure the validity of results in order to make correct
decisions.
These slides can not be reproduced or distributed without permission in writing from the author. 168
4.5.1. Concepts

Experimental Design includes 4 basic steps:


 design of experimental test, whose purpose is the deter-
mination of:
 experimental conditions,
 minimum number of tests needed,
 sequence of tests,
taking into account all available assessments on the variables
(and their interactions) that may affect the system response;
 execution of tests;
 analysis of test results, aimed at determining the optimi-
zed set of values to be assigned to the variables recognized
as influential;
 the execution of verification tests, which are needed
because of the unavoidable simplifications introduced in the
previous steps.
These slides can not be reproduced or distributed without permission in writing from the author. 169
4.5.1. Concepts
V e r i f i c a t i o n   t e s t s

[Figure: axis of the values of the quality characteristic to be optimized
(smaller the better), here the failure frequency in the field
[ faults / 100 cars ], from 30 to 200. Marked on the axis: the photograph
of the present situation (before DOE), the prediction of the model obtained
by the results analysis, and the verification tests results; d is the distance
between the present situation and the prediction, and the verification
is OK if X ≤ 2/3 · d.]
These slides can not be reproduced or distributed without permission in writing from the author. 170
4.5.1. Concepts

Until a few years ago


(and sometimes even now!),
it was common practice for the Experimenter
to change one variable at a time.
If we do not have suitable software
(based on statistical tools)
for further processing of experimental data,
changing one variable at a time is in fact the only way
to discriminate the effects of every variable.

These slides can not be reproduced or distributed without permission in writing from the author. 171
4.5.1. Concepts
On the contrary, if we have an analytical tool,
as when using Experimental Design,
we can assign to each variable its own effect,
even if we vary
more than one of them at the same time.
This gives significant benefits in terms of:
 significant reduction of the number of tests (and of re-
lated costs);
 ability to recognize the presence of possible interactions
between the involved variables: i.e. whether their combined effect
is different from that deduced from the simple “effects superpo-
sition (additivity)” law.
It should be emphasized that this second benefit is obtainable only if more varia-
bles vary simultaneously. The Experimental Design is the only method able to
deal with problems where the insidious presence of interactions prevents the resolu-
tion by traditional methods.
These slides can not be reproduced or distributed without permission in writing from the author. 172
4.5. Experimental Design

4.5.2
Interactions

These slides can not be reproduced or distributed without permission in writing from the author. 173
4.5.2. Interactions
Example of “positive interaction”
A car brakes from a certain speed (always the same) until it stops,
in different conditions of asphalt and tires(1).
The stopping distances are measured.

                        A S P H A L T
  FACTORS / levels        dry      wet
  TIRES       new          44       51
              worn         53        ?

Effects superposition:
• 51 - 44 = 7  (wet effect)
• 53 - 44 = 9  (worn tires effect)
 44 + 7 + 9 = 60  (effects superposition)

[Graph: STOPPING DISTANCE (from 30 to 90 m) versus ASPHALT condition
(dry, wet). With new tires the distance goes from 44 m (dry) to 51 m (wet);
with worn tires it goes from 53 m (dry) to 80 m (wet), while without
interaction it would be 60 m.]

(1) In this example, the tires are considered “worn“ when the tread blocks
are fully consumed and thus the tire has become completely smooth.
These slides can not be reproduced or distributed without permission in writing from the author. 174
4.5.2. Interactions

We say that there is an interaction between two (or more) factors


when their combined effects give a result different from that expec-
ted from the simple effects superposition (additivity).
For example, let us study the braking distance of a car from a given speed. To a first approxi-
mation we can assume that it depends on the tire conditions (new or worn: factor A with 2
levels) and on the asphalt conditions (dry or wet: factor B with 2 levels). Suppose we have
experimented first on dry asphalt and measured braking distances of 44 meters with new
tires and of 53 meters with worn tires. In a further test on wet asphalt with new tires, a braking
distance of 51 meters was measured.

To stop the car with worn tires on dry asphalt, we need 53 - 44 = 9 meters more than with new
tires. Using the law of effects superposition, the same difference should also apply on wet
asphalt and then we would expect 51 + 9 = 60 meters to stop on wet asphalt with worn tires.
Instead, because of the interaction between worn tire and wet asphalt, we measure a stop-
ping distance of 80 meters. This interaction is due to a layer of water that is interposed
between the worn tire (tread too thin) and the wet asphalt ("aquaplaning“).

The most dangerous interactions are those that give rise to a


reversal of the effects of factors (as if, on wet asphalt, the new tire re-
quires a stopping distance greater than that of a worn tire). The interac-
tions of this kind are called negative.
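
A minimal sketch of this additivity check in Python, using the stopping distances of the example:

```python
# Stopping distances [m] from the example: (tires, asphalt) -> distance
distance = {
    ("new", "dry"): 44, ("new", "wet"): 51,
    ("worn", "dry"): 53, ("worn", "wet"): 80,
}

wet_effect  = distance[("new", "wet")] - distance[("new", "dry")]     # 7 m
worn_effect = distance[("worn", "dry")] - distance[("new", "dry")]    # 9 m

expected = distance[("new", "dry")] + wet_effect + worn_effect        # 60 m (effects superposition)
measured = distance[("worn", "wet")]                                  # 80 m

interaction = measured - expected                                     # +20 m -> positive interaction
print(expected, measured, interaction)
```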
These slides can not be reproduced or distributed without permission in writing from the author. 175
4.5.2. Interactions

To be able to detect the presence of possible


interactions,
we must think in terms of:

 factors: “cause” variables (generally not more


than 8÷12);
 levels: values (always the same) assigned to
each factor in the experiments plan (ge-
nerally we try to use no more than 2 or 3
levels): a factorial plan may contain factors
both with 2 and with 3 (or more) levels together.

These slides can not be reproduced or distributed without permission in writing from the author. 176
4.5.2. Interactions
Graphical illustration of an example of 2 factors both with 2 levels (4 experiments)

[Graph: SPECIFIC HUMIDITY (45% to 65%) versus TEMPERATURE (-15 to +25 °C).
The 4 experiments combine:
  SPECIFIC HUMIDITY   Level 1 (49% ÷ 51%)    Level 2 (59% ÷ 61%)
  TEMPERATURE         Level 1 (-12 ÷ -8 °C)  Level 2 (17 ÷ 23 °C) ]

Note that each level is defined by a range of values


and that the width of each range is very small
compared to the distance between the centers of the corresponding levels.
These slides can not be reproduced or distributed without permission in writing from the author. 177
4.5.2. Interactions
Example of a “weak interaction”

WELDING PROCESS FOR A CAR SILENCER PICK-UP
(tungsten electrode, inert gas flow)

[Graph: "weak" interaction between VOLTAGE and ELECTRODE CONDITION.
WELD STRENGTH [kN] plotted against ELECTRODE CONDITION
(G1 = new electrode, G2 = old electrode) for the two voltage levels
A1 = 28 volts and A2 = 26 volts; the plotted strengths are
63.00, 36.96, 35.58 and 33.33 kN.]
These slides can not be reproduced or distributed without permission in writing from the author. 178
4.5.2. Interactions
Example of a “negative interaction”
OPTIMIZATION OF MACHINE SETTINGS
TO FIX A PLASTIC END CAP TO A METAL PUNCH

[Graph: RESISTANCE OF THE CAP TO BE REMOVED [Newton] versus AIR PRESSURE
[kPascal], from A1 = 200 to A2 = 700. B2 = large size hollow is almost flat
(112.0 N at A1, 112.4 N at A2); B1 = small size hollow rises from 86.5 N at A1
to 119.5 N at A2, crossing the B2 line.]

From a managerial point of view, the solution to be adopted is B2, which is “robust” because it gives
approximately the same result regardless of pressure. However if it is “vital” to achieve the maximum
strength, we must choose B1 but keeping the pressure under control.
These slides can not be reproduced or distributed without permission in writing from the author. 179
4.5.2. Interactions
Here the interaction is absent = additivity of effects
Trends are parallel
(it is not necessary that they are straight lines:
in general, trends are not linear).

[Graph: Response y versus the levels of factor A (A1, A2, A3);
the curves for B1 and B2 are parallel.]
These slides can not be reproduced or distributed without permission in writing from the author. 180
4.5.2. Interactions
This interaction is positive = non-additivity of effects
Trends (generally not linear) are not parallel.

[Two graphs: Response y versus the levels of factor A (A1, A2, A3),
with non-parallel curves for B1 and B2: one case “increasing”,
the other “decreasing”.]

It is incorrect to extrapolate outside the range of values


covered by the chosen levels,
because we do not know what happens outside this interval.
Instead it is allowed to interpolate (inside the interval).
These slides can not be reproduced or distributed without permission in writing from the author. 181
4.5.2. Interactions
This interaction is negative = non-additivity of effects
Trends (generally not linear) are intersecting
(within the range of values covered by the levels).

[Graph: Response y versus the levels of factor A (A1, A2, A3);
the curves for B1 and B2 intersect within the explored range.]

As it is easy to understand, this is the most insidious interaction.


If we neglect it in planning experiments (when it exists),
almost certainly we will get misleading results.
These slides can not be reproduced or distributed without permission in writing from the author. 182
4.5. Experimental Design

4.5.3
Full factorial plans

These slides can not be reproduced or distributed without permission in writing from the author. 183
4.5.3. Full factorial plans
An example to … start
In an extrusion process, the following key factors have been
identified:
A. Delivery pressure of the extruded material;
B. Temperature of the extruded material;
C. Temperature of the mold.
Let us suppose we have defined 3 values (or 3 narrow ranges of
values), called levels, for every factor. All these levels are plausibly
valid and checkable during production. Each triplet of levels of each
factor can be considered as consisting of the values:
 low;
 medium (optional);
 high.
In order to achieve an optimized solution for setting the production factors, we
may establish a plan of experiments in which there are (only once) all pos-
sible different combinations of factors with all their levels.

These slides can not be reproduced or distributed without permission in writing from the author. 184
4.5.3. Full factorial plans
For simplicity, we consider first, for each factor, only 2 levels (low
and high). We obtain:

[Tree diagram: A1 and A2, each combined with B1 and B2,
each in turn combined with C1 and C2]

A full factorial plan
with 3 factors, all with 2 levels,
requires
2 x 2 x 2 = 2^3 = 8 tests
These slides can not be reproduced or distributed without permission in writing from the author. 185
4.5.3. Full factorial plans
Similarly, if the factors were all at 3 levels (low, medium and high),
we would have obtained:

[Tree diagram: A1, A2, A3, each combined with B1, B2, B3,
each in turn combined with C1, C2, C3]

3 x 3 x 3 = 3^3 = 27 tests

Similarly, if the factors were chosen with different numbers of levels,
e.g. A and C with 2 levels and B with 3 levels, we would have obtained:

[Tree diagram: A1, A2, each combined with B1, B2, B3,
each in turn combined with C1, C2]

2 x 3 x 2 = 2^2 x 3^1 = 12 tests
These slides can not be reproduced or distributed without permission in writing from the author. 186
4.5.3. Full factorial plans
General rules

We define “Full Factorial Plan” a plan of experiments


in which every level of every factor is examined
in combination with all levels of the other factors;
i.e. all possible combinations are taken into account
and each of them appears only once in the plan.
 A Full Factorial Plan requires a number of experi-
ments equal to the product of the number of levels
of all its factors.
It may be convenient to express this product in terms of powers of sin-
gle number of levels. In this case, the exponents indicate the number of
factors which have the same number of levels (indicated by the base).
 Full Factorial Plans generally allow us to analyze the main
effects (of the considered factors) and the interactions.
These slides can not be reproduced or distributed without permission in writing from the author. 187
4.5.3. Full factorial plans

Mathematical expressions for the number of experiments

We have just seen that the number of experiments n


required by a Full Factorial Plan
is the product of the number of levels of all factors.

If all factors of the plan have the same number of levels, we


can write:
n = L^k
where: L = levels number and k = factors number.

As we tend to assign to the factors 2 or at most 3 levels,


the most used plans are of the kind 2^k or 3^k.
And, if a plan uses factors both with 2 and with 3 levels,
we will have: n = 2^k2 x 3^k3 .
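
A minimal sketch in Python of how such a plan can be generated with itertools.product (the factor names and level counts are simply those of the extrusion example):

```python
from itertools import product

# A (extruded pressure) and C (mold temperature) with 2 levels,
# B (extruded temperature) with 3 levels  ->  n = 2^2 x 3^1 = 12 runs
levels = {"A": [1, 2], "B": [1, 2, 3], "C": [1, 2]}

plan = list(product(*levels.values()))
print(len(plan))                                  # 12
for run, combo in enumerate(plan, start=1):
    print(run, dict(zip(levels.keys(), combo)))
```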
These slides can not be reproduced or distributed without permission in writing from the author. 188
4.5.3. Full factorial plans
Conventional representation of Factorial Plans
It is usual to represent Factorial Plans (both Full Factorial Plans and Fractional
Factorial Plans, which are discussed below) with a matrix in which:
 every column is headed by a factor,
 while the cells below contain the levels with which each factor is present in
each experiment.

Example of Full Factorial Plan with 3 factors:
A (extruded pressure) with 2 levels, B (extruded temperature) with 3 levels,
C (mold temperature) with 2 levels

               F A C T O R S
  # test         A       B       C
  1              1       1       1
  2              1       1       2
  3              1       2       1
  4              1       2       2
  5              1       3       1
  6              1       3       2
  7              2       1       1
  8              2       1       2
  9              2       2       1
  10             2       2       2
  11             2       3       1
  12             2       3       2
These slides can not be reproduced or distributed without permission in writing from the author. 189
4.5.3. Full factorial plans
Manual construction of a Full Factorial Plan
Full factorial plan with 3 factors: A with 2 levels; B with 3 levels; C with 2 levels

               F A C T O R S
  # test         A (extruded    B (extruded     C (mold
                 pressure)      temperature)    temperature)
  1              1              1               1
  2              1              1               2
  3              1              2               1
  4              1              2               2
  5              1              3               1
  6              1              3               2
  7              2              1               1
  8              2              1               2
  9              2              2               1
  10             2              2               2
  11             2              3               1
  12             2              3               2
These slides can not be reproduced or distributed without permission in writing from the author. 190
4.5.3. Full factorial plans
With the standardized construction of a full factorial plan,
the speed of change of factors increases from left to right
For better highlighting, we refer here to a Full Factorial Plan with 3 factors
all with 2 levels. Like before, we obtain:

               F A C T O R S
  # test         A       B       C
  1              1       1       1
  2              1       1       2
  3              1       2       1
  4              1       2       2
  5              2       1       1
  6              2       1       2
  7              2       2       1
  8              2       2       2

We see, more clearly than in the previous
example, that the vertical sequences of a
same value (1 or 2) are long:
 4 (equal) values (2 sequences) for the
factor A;
 2 (equal) values (4 sequences) for the
factor B;
 1 value only (8 sequences) for the fac-
tor C.
The factor A is the slowest, the factor C
is the quickest and the factor B changes
its level with an intermediate speed.

This property can be practically exploited by placing as factor A (first column from left) something
requiring long time to be disassembled/reassembled (e.g. two kinds of engine), as factor C, so-
mething very fast to change (e.g. different spark plugs) and as factor B something in between
(e.g. two different types of clutch).
These slides can not be reproduced or distributed without permission in writing from the author. 191
4.5.3. Full factorial plans
With the standardized construction of a full factorial plan,
the speed of change of factors increases from left to right
The benefits of this technique are obviously a saving of time and therefore
of cost.
The price to pay for these benefits is the risk of some wrong
disassembly/reassembly. For example, if the engine corresponding to the
level 2 is wrongly installed, as it is installed only once, we can not distinguish
whether the change of the response at the level 2 (in comparison with that
at the level 1) is attributable to the specific characteristics of the second
engine (level 2) or to an assembly mistake!
The classic countermeasure to the risk described above is the RANDOMI-
ZATION OF THE EXPERIMENTS. This implies performing the experiments
in a random order and not in the previous one which was economically opti-
mized by exploiting the different speeds of level changing for each factor.
Repeating several times the assembly and disassembly operations, we re-
duce the effects of a wrong assembling, but we lose the benefits of time and
cost.
In practice, randomization is not usually used. We prefer to avoid bad in-
stallations, by ensuring, as far as possible, that the assembling is always per-
formed in a workmanlike manner, through the adoption of appropriate as-
sembly checks (if possible), the use of particularly reliable workforce, etc..
These slides can not be reproduced or distributed without permission in writing from the author. 192
4.5.3. Full factorial plans
Analysis of the results of a full factorial plan
Of course, a Factorial Plan is the design of experiments, to which we must
match the experimental results, in order to properly analyze them.
As an example, here is a Factorial Plan with 4 factors, all with 2 levels.

  Design of experiments          Experimental       Factor A
  # test    A    B    C    D       results        Lev. 1   Lev. 2
  1         1    1    1    1         y1            y1
  2         1    1    1    2         y2            y2
  3         1    1    2    1         y3            y3
  4         1    1    2    2         y4            y4
  5         1    2    1    1         y5            y5
  6         1    2    1    2         y6            y6
  7         1    2    2    1         y7            y7
  8         1    2    2    2         y8            y8
  9         2    1    1    1         y9                     y9
  10        2    1    1    2         y10                    y10
  11        2    1    2    1         y11                    y11
  12        2    1    2    2         y12                    y12
  13        2    2    1    1         y13                    y13
  14        2    2    1    2         y14                    y14
  15        2    2    2    1         y15                    y15
  16        2    2    2    2         y16                    y16
                                                  ylev.1   ylev.2

To understand whether a factor (e.g. factor A) is influential or not on the
investigated phenomenon, we may compare the mean of the results of all
experiments where the factor is at level 1 with the mean of the results of all
experiments where the factor is at level 2:
 a large difference between the two means (or a ratio much greater than 1)
means that the factor has a decisive influence on the result;
 instead a small difference between the two means (or a ratio close to 1)
means that the factor has poor influence on the result.
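
A minimal sketch in Python of this comparison of level means (the 16 response values are invented, only to show the mechanics):

```python
from itertools import product
from statistics import mean

# 2^4 full factorial plan in standard order: columns A, B, C, D
plan = list(product([1, 2], repeat=4))

# Invented responses y1 ... y16, in the same order as the plan
y = [44, 51, 53, 80, 45, 50, 55, 78, 47, 52, 54, 79, 46, 53, 56, 81]

def level_means(factor_index):
    m1 = mean(v for run, v in zip(plan, y) if run[factor_index] == 1)
    m2 = mean(v for run, v in zip(plan, y) if run[factor_index] == 2)
    return m1, m2

for index, name in enumerate("ABCD"):
    m1, m2 = level_means(index)
    print(name, round(m1, 2), round(m2, 2), "difference:", round(m2 - m1, 2))
```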
These slides can not be reproduced or distributed without permission in writing from the author. 193
4.5.3. Full factorial plans
  Design of experiments, experimental results and 2-by-2 interactions
  [Table: the same 2^4 plan (16 runs, factors A, B, C, D, results y1 … y16)
  extended with the six 2-by-2 interaction columns A·B, A·C, A·D, B·C, B·D, C·D,
  whose levels are derived with the rule described below.]

If all the factors have 2 levels, the
levels of the interaction between
two factors may be obtained from
their levels with the following rule
of setting each interaction level:
 equal to 1 if the levels of the interac-
ting factors are equal (no matter if
1 or 2);
 equal to 2 if the levels of the interac-
ting factors are different.
In Boolean Algebra, this rule corresponds
to the XOR operator, also called exclusive
OR.

For the interactions of higher or-
der (e.g. among factors taken 3 at
a time), we can still use the previous
rule but applied to the interaction of
lower order (2 by 2) and to the mis-
sing factor.

If factors are not all at 2 levels, the
calculation is very complex (and it is
better to use the computer), but, in
essence, it is quite similar to that ex-
posed.
These slides can not be reproduced or distributed without permission in writing from the author. 194
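
A minimal sketch of this rule in Python, extending the 2^4 plan of the previous slides with the A·D interaction column as an example:

```python
from itertools import product

plan = list(product([1, 2], repeat=4))   # columns A, B, C, D in standard order

def interaction_level(level_1, level_2):
    # 1 if the two factor levels are equal, 2 if they are different
    return 1 if level_1 == level_2 else 2

# A·D column (factor indices 0 and 3); the same rule gives A·B, A·C, B·C, B·D and C·D
ad_column = [interaction_level(run[0], run[3]) for run in plan]
print(ad_column)   # [1, 2, 1, 2, 1, 2, 1, 2, 2, 1, 2, 1, 2, 1, 2, 1]
```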
4.5.3. Full factorial plans
  Design of experiments, experimental results and the interaction A·D
  [Table: the 2^4 plan (16 runs, factors A, B, C, D, results y1 … y16) with the
  interaction column A·D (levels 1, 2, 1, 2, 1, 2, 1, 2, 2, 1, 2, 1, 2, 1, 2, 1);
  each result yi is assigned to the column Lev. 1 or Lev. 2 of the interaction,
  and the two column means ylev.1 and ylev.2 are computed.]

At this point, the presence or
absence of an interaction
can be treated with the same
logic of the influence of a fac-
tor, considering the differen-
ce (or the ratio) between the
mean of the results of experi-
ments in which the interaction
occurs at level 1 and the
mean of the results of experi-
ments in which the interaction
occurs at level 2.

NOTE - It is commonplace to believe that
higher order interactions (between 3 or
more factors) are of little importance in
the industrial phenomena. The truth is
that there are only very few packages a-
ble to analyze the presence or absence of
interactions among 3 or more factors.
Therefore we have to be on guard and, if
the data are not convincing, it is recom-
mended to verify that there are no higher
order interactions: but there are no esta-
blished rules in this regard.
These slides can not be reproduced or distributed without permission in writing from the author. 195
4.5.3. Full factorial plans
NO INTERACTION          POSITIVE INTERACTION          NEGATIVE INTERACTION

[Three graphs: Response y versus the levels of factor A, with the midpoint
of each pair of diagonals marked.]

No interaction:                             Interaction (positive or negative):

  y̅(A1,B1) + y̅(A2,B2)     y̅(A1,B2) + y̅(A2,B1)        y̅(A1,B1) + y̅(A2,B2)     y̅(A1,B2) + y̅(A2,B1)
  --------------------  =  --------------------        --------------------  ≠  --------------------
            2                        2                           2                        2

where y̅(Ai,Bj) is the mean of the results with factor A at level i and factor B at level j.

Because in a parallelogram
the diagonals are mutually cut in half,
the ordinate of the intersection point
has the same value, both if it is
calculated as belonging to the trapeze
A1 B1 B2 A2 or if it is calculated as
belonging to the trapeze A1 B2 B1 A2.

In the presence of interaction, the previous equality becomes an inequality.
When we detect a significant difference between the two members of the inequality,
we can say that there is interaction, even if we can not yet say what kind it is
until we have drawn the ANOM (= Analysis Of Means) graphs
to study its effects.

Note that in the left side of the equality,
the two pairs A B always present the
same value (1 or 2), while in the right
side they always have different values.
This explains, in some way, the choice
made at the previous slide to define
the levels of interaction.

As already said, if factors are not all at 2 levels,
the calculation is quite complex
(and it is better to use the computer),
but, in essence, it is quite similar to that exposed.
These slides can not be reproduced or distributed without permission in writing from the author. 196
4.5.3. Full factorial plans
  Analysis of FACTORS and INTERACTIONS: SUMMARY

  [Table: the 2^4 plan (16 runs, factors A, B, C, D, results y1 … y16) with,
  side by side, the columns used to analyze factor A (results split between
  Lev. 1 and Lev. 2 of A) and the columns used to analyze the interaction A·D
  (results split between Lev. 1 and Lev. 2 of A·D); at the bottom, the level
  means ylev.1 and ylev.2 of each column pair.]
These slides can not be reproduced or distributed without permission in writing from the author. 197
4.5.3. Full factorial plans

It remains to be determined
a rational criterion for deciding when
the differences between the means related to the results
of a single level of each factor/interaction
are sufficiently “large” to allow us to recognize that
the factor/interaction “significantly” contributes
to the investigated phenomenon.

The mathematical tool to deal with this problem is


the ANALYSIS OF VARIANCE (AN.O.VA.)

These slides can not be reproduced or distributed without permission in writing from the author. 198
4.5. Experimental Design

4.5.4
Two quick clarifications
on statistical matters

These slides can not be reproduced or distributed without permission in writing from the author. 199
4.5.4. Two quick clarifications on statistical matters

4.5.4.1
Variance and deviance

These slides can not be reproduced or distributed without permission in writing from the author. 200
4.5.4.1. Variance and deviance
Starting from the definition of the standard deviation ...

        s = √[ Σ (xi − x̄)² / (n − 1) ]

… and eliminating the root sign, we obtain the variance ...

        s² = Σ (xi − x̄)² / (n − 1)

… from which, considering only the numerator, we obtain the sum of squares, or deviance ...

        Q = Σ (xi − x̄)²
These simple definitions will help us to work with
the Analysis of Variance (AN.O.VA.)
These slides can not be reproduced or distributed without permission in writing from the author. 201
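A minimal sketch (not part of the original slides; Python with numpy is an assumption, the sample values are arbitrary) of the three definitions above:

    import numpy as np

    x = np.array([20, 19, 20, 23, 24, 23], dtype=float)   # arbitrary sample
    n = len(x)

    deviance = np.sum((x - x.mean()) ** 2)   # Q = sum of squared deviations
    variance = deviance / (n - 1)            # s^2
    std_dev = np.sqrt(variance)              # s

    print(deviance, variance, std_dev)
    print(np.var(x, ddof=1), np.std(x, ddof=1))   # same values via numpy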
4.5.4. Two quick clarifications on statistical matters

4.5.4.2
Degrees of freedom (df)

These slides can not be reproduced or distributed without permission in writing from the author. 202
4.5.4.2. Degrees of freedom (df)

Statistics has borrowed the concept of degrees of freedom (df)


from other disciplines such as Mathematics and Physics.

In Physics, to locate:

 a point along a straight line, a single coordinate is sufficient and we say


that the points of a line have only one degree of freedom;

 a point of a plane, 2 coordinates are needed and we say that it has 2 de-
grees of freedom;

 a point in the space, 3 coordinates are needed and we say that it has 3
degrees of freedom;

 a rigid body in the space, 6 coordinates are needed and we say that it
has 6 degrees of freedom.

These slides can not be reproduced or distributed without permission in writing from the author. 203
4.5.4.2. Degrees of freedom (df)

In Mathematics, we say that:

 the points of a straight line are ∞¹ (only 1 dimension),
 the points of a plane are ∞² (2 dimensions),
 the points of the space are ∞³ (3 dimensions);
 a rigid body has ∞⁶ different ways to position itself in the space.

This is a concise way to classify the orders of magnitude of the different categories of infinity, which is very often useful.
For example, let us suppose we have fixed 4 coordinates of a rigid body in the space. Its total degrees of freedom are 6, so it still has 2 actual df. This means that, despite the constraints imposed, this rigid body has as many different ways to position itself in the space as there are points in a plane (∞²).
These slides can not be reproduced or distributed without permission in writing from the author. 204
4.5.4.2. Degrees of freedom (df)

In a statistical experiment of n trials (usually n measurements), we obtain n results, each of which can typically take ∞¹ values, i.e. as many as there are points on a segment (or real numbers), and the whole set of n trials has ∞ⁿ possible n-tuples of results.

1st conclusion - In Statistics, each trial (measurement) corresponds to 1 degree of freedom and therefore the total number of degrees of freedom of the whole experiment is equal to the total number of trials (measurements) carried out.

If we know the mean of the n results, we only need to know n − 1 results in order to be able to rebuild them all.
We say that, once the mean has been obtained (and fixed), the statistical experiment remains with n − 1 degrees of freedom.
2nd conclusion - Any known relationship among the
observations decreases the degrees
of freedom.
These slides can not be reproduced or distributed without permission in writing from the author. 205
4.5. Experimental Design

4.5.5
Analysis of results
(AN.O.VA. + AN.O.M.)

These slides can not be reproduced or distributed without permission in writing from the author. 206
4.5.5. Analysis of results (AN.O.VA. + AN.O.M.)

For the analysis of results, we have two main tools:

 the ANALYSIS OF VARIANCE (AN.O.VA.) to determine which factors are “significant”, i.e. really influential in the investigated phenomenon;
 and the ANALYSIS OF MEANS (AN.O.M.) to quantify their effects (no need to make this analysis on factors recognized as not significant!).

These slides can not be reproduced or distributed without permission in writing from the author. 207
4.5.5. Analysis of results (AN.O.VA. + AN.O.M.)

4.5.5.1
Analysis Of Variance
(AN.O.VA.)

These slides can not be reproduced or distributed without permission in writing from the author. 208
4.5.5.1. Analysis Of Variance (AN.O.VA.)

The ANALYSIS OF VARIANCE


divides the total variability
into fractions “explained”
by each considered factor/interaction
and into an additional fraction
“unexplained or residual” due to the “error”.

These slides can not be reproduced or distributed without permission in writing from the author. 209
4.5.5.1. Analysis Of Variance (AN.O.VA.)
INTUITIVE CONCEPT of the Analysis of Variance
(to be then translated in mathematical terms)

True announcement of
a radio news “U.S. presidential election:
(on November 4, 2000) the advantage
of Bush over Gore
is less than the margin of error
and therefore not significant".

This communication pays particular attention to statistical aspects but it is easily


understandable by all. It well highlights the basic principle of Analysis Of Variance:
the validity of information is determined
by how it rises above the typical uncertainties
caused by the methods of collection and processing data.
These slides can not be reproduced or distributed without permission in writing from the author. 210
4.5.5. Analysis of results (AN.O.VA. + AN.O.M.)

4.5.5.2
An example to understand
(AN.O.VA. & AN.O.M.)

These slides can not be reproduced or distributed without permission in writing from the author. 211
4.5.5.2. An example to understand
Let us suppose we want to know
if 4 gasolines (of 4 different brands)
give rise to a really different consumption [km/l] or not.
To this purpose, every gasoline has been tested
on 3 different engines (2000, 1600 and 1200 c.c.).
For every combination gasoline/engine,
3 tests have been conducted
under “nominally” equal conditions (repetitions)
and consumption was observed.
Results are shown in the following slide.
We want to understand
whether the differences in consumption
observed on each combination gasoline/engine,
may be attributable to the "randomness of sampling"
or express a real difference in performance.
These slides can not be reproduced or distributed without permission in writing from the author. 212
4.5.5.2. An example to understand
RESULTS [km/l]

Identity number of the test   FACTOR A (different brand   FACTOR B (different kind
(with replication)            of gasoline: 4 levels)      of engine: 3 levels)        y1   y2   y3
        1                              1                           1                  20   19   20
        2                              1                           2                  23   24   23
        3                              1                           3                  23   24   26
        4                              2                           1                  17   18   17
        5                              2                           2                  20   20   19
        6                              2                           3                  20   23   21
        7                              3                           1                  16   16   15
        8                              3                           2                  17   16   17
        9                              3                           3                  24   14   18
       10                              4                           1                  21   20   21
       11                              4                           2                  26   25   24
       12                              4                           3                  25   27   29

Overall mean       20,778
Overall variance   13,9492   (this value is reported as an interesting reference, but it is not so directly used in the AN.O.VA. calculations)
These slides can not be reproduced or distributed without permission in writing from the author. 213
4.5.5.2. An example to understand
RESULTS [km/l] and "WITHIN" variance (variance among replications, by rows)

Identity number   A   B    y1   y2   y3      x̄        s²
        1         1   1    20   19   20    19,67    0,3333
        2         1   2    23   24   23    23,33    0,3333
        3         1   3    23   24   26    24,33    2,3333
        4         2   1    17   18   17    17,33    0,3333
        5         2   2    20   20   19    19,67    0,3333
        6         2   3    20   23   21    21,33    2,3333
        7         3   1    16   16   15    15,67    0,3333
        8         3   2    17   16   17    16,67    0,3333
        9         3   3    24   14   18    18,67   25,3333
       10         4   1    21   20   21    20,67    0,3333
       11         4   2    26   25   24    25,00    1,0000
       12         4   3    25   27   29    27,00    4,0000

Overall mean       20,778          Average of the 12 variances = 3,111  (i.e. variance of ERROR)
Overall variance   13,9492

Each of these variances is related to 3 replications conducted under "nominally" equal conditions. If the conditions were exactly the same, all these variances would be zero. In fact the results of experiments conducted in nominally equal conditions vary as a result of all the factors (with low influence) not taken into account. We can say that each of these variances quantifies the variability associated with the measurement method, broadly construed.

Because all these variances relate to experiments in nominally equal conditions, at least in first approximation, the differences among these variances are due to the randomness of sampling. Therefore the best way to estimate the variability associated with the adopted measurement method in the broad sense is to calculate the average (3,111) of these variances.
These slides can not be reproduced or distributed without permission in writing from the author. 214
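A minimal sketch (not part of the original slides; Python with numpy is an assumption) that reproduces the WITHIN (error) variance of the table above as the average of the 12 row variances:

    import numpy as np

    results = np.array([                         # rows = gasoline/engine combinations
        [20, 19, 20], [23, 24, 23], [23, 24, 26],   # gasoline 1, engines 1..3
        [17, 18, 17], [20, 20, 19], [20, 23, 21],   # gasoline 2
        [16, 16, 15], [17, 16, 17], [24, 14, 18],   # gasoline 3
        [21, 20, 21], [26, 25, 24], [25, 27, 29],   # gasoline 4
    ], dtype=float)

    row_var = results.var(axis=1, ddof=1)        # variance of each triple of replications
    error_variance = row_var.mean()              # average of the 12 variances

    print(np.round(row_var, 4))                  # 0.3333, 0.3333, 2.3333, ...
    print(round(error_variance, 3))              # 3.111
    print(round(results.mean(), 3))              # overall mean: 20.778
    print(round(results.var(ddof=1), 4))         # overall variance: 13.9492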
4.5.5.2. An example to understand
EXAMPLES OF CAUSES OF VARIABILITY RELATED TO THE MEASUREMENT METHOD IN THE BROAD SENSE (in this case)

[Cause-effect diagram: 1. Methods, 2. Men, 3. Machinery, 4. Materials, 5. Milieu (environment), 6. Measurements → EFFECT or RESPONSE]

Machinery
 Mileage - The covered distance is read on the odometer, which has an error of about 50 m. Remedy: more sophisticated measuring system.

 Method of two fills - The filling of the tank "to the brim" may have small differences from time to time. Remedy: more sophisticated measuring system.

Men
 Driver - The use of more than one driver can be a source of variability due to the different driving habits, but also the driving style of a single driver can vary from day to day depending on his mood. Remedy: utilize bench tests.

Methods
 Itinerary - It should always be the same, but it may be necessary to accept a few minor variations (e.g. due to road repairs). Remedy: exclude tests with abnormal routes or utilize bench tests.

 Time - The traffic intensity changes during the day, so tests have to be carried out always at the same hour: but even so, from day to day, traffic conditions may present changes. Remedy: exclude tests with abnormalities of some importance, or utilize bench tests.

Milieu (environment)
 Weather - External temperature, wind, humidity (not to mention the rain) can affect test results: even excluding tests conducted under particular conditions, some variability however remains. Remedy: utilize bench tests.
These slides can not be reproduced or distributed without permission in writing from the author. 215
4.5.5.2. An example to understand
"WITHIN" Variance "AMONG" different brands of GASOLINE
variance Level 1
Now we have
to calculate the
x s2 differences
19,67 0,3333 between
23,33
24,33
0,3333
2,3333
the means
17,33 0,3333 (as already seen)
19,67 0,3333
21,33 2,3333
15,67 0,3333
16,67 0,3333 Definizione
Design of

prova
Risultati di results
experiments 11 Interazioni
18,67 25,3333 prove
44Factors
Fattori 2 bya22factors
a2 a3a3
20,67 0,3333

Experimental
25,00 1,0000 3,111 represents the re- #
#
test
identif. A B C D
A A A B B C
Fattore
Factor
A
A
Interazione
Interaction
AD
ident.
27,00 4,0000 B C D C D D
ference suitable to
means
prova
N.o

y1
Lev. 1 Lev. 2
Liv. 1 Liv. 2 Lev. 11 Lev.
Liv. Liv. 22
1 1 1 1 1 1 y1 y1
Average s2
i.e. variance
3,111 decide if differences 2 1 1 1 2 y2 2 y2 y2
3 1 1 2 1 y3 1 y3 y3
of ERROR
between means are 4 1 1 2 2 y4 2 y4 y4

large or not. 5
6
1
1
2
2
1
1
1
2
y5
y6
1
2
y5
y6
y5
y6
7 1 2 2 1 y7 1 y7 y7
8 1 2 2 2 y8 2 y8 y8
9 2 1 1 1 y9 2 y9 y9
10 2 1 1 2 y10 1 y10 y10
11 2 1 2 1 y11 2 y11 y11
12 2 1 2 2 y12 1 y12 y12
13 2 2 1 1 y13 2 y13 y13
14 2 2 1 2 y14 1 y14 y14
15 2 2 2 1 y15 2 y15 y15
16 2 2 2 2 y16 1 y16 y16
yliv.1 yliv.2 yliv.1 yliv.2

These slides can not be reproduced or distributed without permission in writing from the author. 216
4.5.5.2. An example to understand
RESULTS [km/l] - Variance "BETWEEN" different brands of GASOLINE

The 36 results are grouped by level of factor A (brand of gasoline), 9 results per level.

                  Level 1    Level 2    Level 3    Level 4
Level means       22,444     19,444     17,000     24,222

Overall mean = 20,778
Variance "between" the 4 level means                       = 10,230   (of sample means)
Variance "BETWEEN" means (referred to single elements)     = 92,074   (the previous variance multiplied by the sample size, equal to 9)

The variance BETWEEN the level means (across the rows) emphasizes the differences between the levels and thus quantifies the influence of a factor/interaction on the investigated phenomenon.
This slide shows the calculation for the factor A - Gasoline.
These slides can not be reproduced or distributed without permission in writing from the author. 217
4.5.5.2. An example to understand
RESULTS [km/l] - Variance "BETWEEN" different kinds of ENGINE

The 36 results are grouped by level of factor B (kind of engine), 12 results per level.

                  Level 1    Level 2    Level 3
Level means       18,333     21,167     22,833

Overall mean = 20,778
Variance "between" the 3 level means                       = 5,176    (of sample means)
Variance "BETWEEN" means (referred to single elements)     = 62,111   (the previous variance multiplied by the sample size, equal to 12)

In this slide the same is done for the factor B - Engine.
These slides can not be reproduced or distributed without permission in writing from the author. 218
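A minimal sketch (not part of the original slides; Python with numpy is an assumption) that reproduces the two "BETWEEN" variances above from the raw results:

    import numpy as np

    results = np.array([
        [20, 19, 20], [23, 24, 23], [23, 24, 26],
        [17, 18, 17], [20, 20, 19], [20, 23, 21],
        [16, 16, 15], [17, 16, 17], [24, 14, 18],
        [21, 20, 21], [26, 25, 24], [25, 27, 29],
    ], dtype=float)
    data = results.reshape(4, 3, 3)            # [gasoline, engine, replication]

    gasoline_means = data.mean(axis=(1, 2))    # 4 level means (9 results each)
    engine_means = data.mean(axis=(0, 2))      # 3 level means (12 results each)

    # variance "between" the level means, then scaled by the sample size per level
    var_between_A = gasoline_means.var(ddof=1) * 9    # 10.230 * 9  = 92.074
    var_between_B = engine_means.var(ddof=1) * 12     #  5.176 * 12 = 62.111

    print(np.round(gasoline_means, 3))    # 22.444, 19.444, 17.000, 24.222
    print(np.round(engine_means, 3))      # 18.333, 21.167, 22.833
    print(round(var_between_A, 3), round(var_between_B, 3))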
4.5.5.2. An example to understand

One could proceed similarly


also for the interaction AB
between gasoline and engine,
but is omitted for brevity.
Moreover, as we will see in the following,
the interaction is not significant in this case
(remember that this case is invented and not real).

These slides can not be reproduced or distributed without permission in writing from the author. 219
4.5.5.2. An example to understand

Note that the overall variance (= 13.9492 as already seen) is:

 greater than the within (or error) variance (= 3.111)


 less than the between variances of both the gasoline
(= 92.074) and the engine (= 62.111)

This happens because the ANOVA approach to calculations


tends to minimize the within variance and to maximize the
between variance (with respect to the overall variance).

These slides can not be reproduced or distributed without permission in writing from the author. 220
4.5.5.2. An example to understand
As previously mentioned, we want to assess, for each factor/interaction,
if the variance between the means at different levels (“BETWEEN” variance)
rises above or not the intrinsic variance
of the measurement method in the broader sense
(called “WITHIN” variance or “ERROR" variance).
To do this, we can calculate the ratio between the BETWEEN variance of each
factor/interaction and the ERROR (or WITHIN) variance. This ratio is called the F-
ratio (defined by R.A. Fisher) and denoted by F. In our case, we have:

SOURCE VARIANCE F-ratio

Gasoline 92.0741 92.0741/3.1111 = 29.595

Engine 62.1111 62.1111/3.1111 = 19.964

ERROR 3.1111

If the F-ratio assumes high values, we say that the factor/interaction is


significant (= influential). Otherwise we simply say that the experiment
does not allow us to recognize the factor/interaction as significant. In any
case, we can never prove that the factor is not statistically significant.
These slides can not be reproduced or distributed without permission in writing from the author. 221
4.5.5.2. An example to understand

The use of the variance of the means of the 4 gasolines,


with respect to the overall mean,
is not very different from what was proposed
at the end of the previous Paragraph on Full Factorial Plans:
to make the difference between
the mean of all experiments where a factor is at level 1
and the mean of all experiments where the same factor is at level 2.
Instead of calculating the difference between two means,
here, by means of the variance, we calculate the difference
between each mean and the overall mean:
in this way, the technique applies
even when there are more than 2 levels.

These slides can not be reproduced or distributed without permission in writing from the author. 222
4.5.5.2. An example to understand
The problem now is reduced to establish a criterion to define
when the Fisher’s F-ratio can be considered large.

Fisher defined a probability distribution that,


according to the degrees of freedom of
numerator (BETWEEN variance)
and denominator (WITHIN or ERROR variance),
gives the probability density of each value of the ratio:
        F = s²BETWEEN / s²ERROR
assuming that the factor/interaction has no influence
on the investigated phenomenon,
so that, on average, the ratio is close to 1.

These slides can not be reproduced or distributed without permission in writing from the author. 223
4.5.5.2. An example to understand
FISHER'S F DISTRIBUTION constructed under the hypothesis that the gasolines are not influencing

[Figure: probability density of the F-ratio F = s²BETWEEN / s²ERROR, with the mean marked and the significance level α = 5% shaded in the right tail; the corresponding threshold FT is marked on the horizontal axis (the plotted generic example has 3 degrees of freedom for the numerator and 8 for the denominator, for which FT = 4,07 and FC = 2,25).]

Degrees of freedom of the NUMERATOR = no. of levels − 1
   → Gasoline → 4 − 1 = 3
   → (Engine → 3 − 1 = 2)
Degrees of freedom of the DENOMINATOR = (repetitions − 1) × no. of different experiments
   → Error → (3 − 1) × 12 = 24

For more details, see next slide.
Almost the same for engines.
These slides can not be reproduced or distributed without permission in writing from the author. 224
4.5.5.2. An example to understand
EXPLANATION OF THE CALCULATION OF THE DEGREES OF FREEDOM

[Recalled: the "BETWEEN" tables of the gasoline and engine (numerators) and the "WITHIN" table (denominator).]

NUMERATORS
 GASOLINE → variance between the 4 level means → 4 − 1 = 3 degrees of freedom
 ENGINE → variance between the 3 level means → 3 − 1 = 2 degrees of freedom

DENOMINATOR: the ERROR of the measurement method in the broad sense (the same for both, gasolines and engines)
 it is the mean of 12 variances;
 each variance is calculated between 3 values (one row), but before calculating the variance we need to calculate the mean: so the degrees of freedom of each variance become 2;
 therefore the total number of degrees of freedom of the denominator is 12 × 2 = 24.

These slides can not be reproduced or distributed without permission in writing from the author. 225
4.5.5.2. An example to understand
At this point we can proceed as follows.
 we assume an acceptable risk of error (significance α): a value of
5% is commonly used;
 on the Tables of the Fisher’s distribution, we find the threshold at 5%
for the F-ratio relative to our experiment (see next slide): the value so
found is called F-Table and is denoted by FT;
 we compare FT with the F-ratio actually observed in our experiment
for each factor/interaction (denoted by FC = F-ratio calculated):
 if FC ≥ FT we consider the factor/interaction as significant (i.e. influen-
tial with a risk of error of 5%);
 otherwise we can not conclude anything, but usually we behave as if
the factor/interaction is not influential.

The logic is that if we are faced with an experimental event which is very rare under the assumption that the factor/interaction is not influential (FC large), we bet that the factor is influential (= significant). It is a bit as if, playing roulette, the zero came out 10 times: it is certainly a possible event, but most people would call the police!

These slides can not be reproduced or distributed without permission in writing from the author. 226
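A minimal sketch (not part of the original slides; Python with scipy is an assumption) of how the 5% thresholds FT could be read from Fisher's distribution instead of the printed table, and compared with the observed FC:

    from scipy.stats import f

    alpha = 0.05
    df_error = 24                         # (3 - 1) replications x 12 experiments

    FT_gasoline = f.ppf(1 - alpha, 3, df_error)    # numerator df = 3 -> ~3.01
    FT_engine = f.ppf(1 - alpha, 2, df_error)      # numerator df = 2 -> ~3.40

    FC_gasoline, FC_engine = 29.595, 19.964        # observed F-ratios

    print(round(FT_gasoline, 2), FC_gasoline >= FT_gasoline)   # 3.01 True -> significant
    print(round(FT_engine, 2), FC_engine >= FT_engine)         # 3.40 True -> significant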
4.5.5.2. An example to understand
[Figure: the same Fisher's F density of the previous slide, with the 5% significance area in the right tail and the threshold FT marked.]

Since the experimentally observed F-ratio values are much larger than the respective threshold values read on the Fisher's Table, for both gasolines and engines, we can conclude that both these factors should be considered as significant (with a risk of error less than 5%).

Gasoline experimental F-ratio = 29.595        Gasoline 5% threshold = 3.01
Engine experimental F-ratio   = 19.964        Engine 5% threshold   = 3.40
These slides can not be reproduced or distributed without permission in writing from the author. 227
4.5.5.2. An example to understand
FISHER/SNEDECOR distribution - SIGNIFICANCE level 5%

Columns: DEGREES OF FREEDOM FOR THE NUMERATOR. Rows: degrees of freedom for the denominator.
In this example: Error (common denominator) → row 24; Gasoline (numerator) → column 3; Engine (numerator) → column 2.

den.\num.  1     2     3     4     5     6     7     8     9     10    12    15    20    24    30    40    60    120   ∞
1 161,4 199,5 215,7 224,6 230,2 234,0 236,8 238,9 240,5 241,9 243,9 245,9 248,0 249,1 250,1 251,1 252,2 253,3 254,3
2 18,51 19,00 19,16 19,25 19,30 19,33 19,35 19,37 19,38 19,40 19,41 19,43 19,45 19,45 19,46 19,47 19,48 19,49 19,50
3 10,13 9,55 9,28 9,12 9,01 8,94 8,89 8,85 8,81 8,79 8,74 8,70 8,66 8,64 8,62 8,59 8,57 8,55 8,53
4 7,71 6,94 6,59 6,39 6,26 6,16 6,09 6,04 6,00 5,96 5,91 5,86 5,80 5,77 5,75 5,72 5,69 5,66 5,63
5 6,61 5,79 5,41 5,19 5,05 4,95 4,88 4,82 4,77 4,74 4,68 4,62 4,56 4,53 4,50 4,46 4,43 4,40 4,36
6 5,99 5,14 4,76 4,53 4,39 4,28 4,21 4,15 4,10 4,06 4,00 3,94 3,87 3,84 3,81 3,77 3,74 3,70 3,67
7 5,59 4,74 4,35 4,12 3,97 3,87 3,79 3,73 3,68 3,64 3,57 3,51 3,44 3,41 3,38 3,34 3,30 3,27 3,23
8 5,32 4,46 4,07 3,84 3,69 3,58 3,50 3,44 3,39 3,35 3,28 3,22 3,15 3,12 3,08 3,04 3,01 2,97 2,93
9 5,12 4,26 3,86 3,63 3,48 3,37 3,29 3,23 3,18 3,14 3,07 3,01 2,94 2,90 2,86 2,83 2,79 2,75 2,71
10 4,96 4,10 3,71 3,48 3,33 3,22 3,14 3,07 3,02 2,98 2,91 2,85 2,77 2,74 2,70 2,66 2,62 2,58 2,54
11 4,84 3,98 3,59 3,36 3,20 3,09 3,01 2,95 2,90 2,85 2,79 2,72 2,65 2,61 2,57 2,53 2,49 2,45 2,40
12 4,75 3,89 3,49 3,26 3,11 3,00 2,91 2,85 2,80 2,75 2,69 2,62 2,54 2,51 2,47 2,43 2,38 2,34 2,30
13 4,67 3,81 3,41 3,18 3,03 2,92 2,83 2,77 2,71 2,67 2,60 2,53 2,46 2,42 2,38 2,34 2,30 2,25 2,21
14 4,60 3,74 3,34 3,11 2,96 2,85 2,76 2,70 2,65 2,60 2,53 2,46 2,39 2,35 2,31 2,27 2,22 2,18 2,13
15 4,54 3,68 3,29 3,06 2,90 2,79 2,71 2,64 2,59 2,54 2,48 2,40 2,33 2,29 2,25 2,20 2,16 2,11 2,07
16 4,49 3,63 3,24 3,01 2,85 2,74 2,66 2,59 2,54 2,49 2,42 2,35 2,28 2,24 2,19 2,15 2,11 2,06 2,01
17 4,45 3,59 3,20 2,96 2,81 2,70 2,61 2,55 2,49 2,45 2,38 2,31 2,23 2,19 2,15 2,10 2,06 2,01 1,96
18 4,41 3,55 3,16 2,93 2,77 2,66 2,58 2,51 2,46 2,41 2,34 2,27 2,19 2,15 2,11 2,06 2,02 1,97 1,92
19 4,38 3,52 3,13 2,90 2,74 2,63 2,54 2,48 2,42 2,38 2,31 2,23 2,16 2,11 2,07 2,03 1,98 1,93 1,88
20 4,35 3,49 3,10 2,87 2,71 2,60 2,51 2,45 2,39 2,35 2,28 2,20 2,12 2,08 2,04 1,99 1,95 1,90 1,84
21 4,32 3,47 3,07 2,84 2,68 2,57 2,49 2,42 2,37 2,32 2,25 2,18 2,10 2,05 2,01 1,96 1,92 1,87 1,81
22 4,30 3,44 3,05 2,82 2,66 2,55 2,46 2,40 2,34 2,30 2,23 2,15 2,07 2,03 1,98 1,94 1,89 1,84 1,78
23 4,28 3,42 3,03 2,80 2,64 2,53 2,44 2,37 2,32 2,27 2,20 2,13 2,05 2,01 1,96 1,91 1,86 1,81 1,76
24 4,26 3,40 3,01 2,78 2,62 2,51 2,42 2,36 2,30 2,25 2,18 2,11 2,03 1,98 1,94 1,89 1,84 1,79 1,73
25 4,24 3,39 2,99 2,76 2,60 2,49 2,40 2,34 2,28 2,24 2,16 2,09 2,01 1,96 1,92 1,87 1,82 1,77 1,71
30 4,17 3,32 2,92 2,69 2,53 2,42 2,33 2,27 2,21 2,16 2,09 2,01 1,93 1,89 1,84 1,79 1,74 1,68 1,62
40 4,08 3,23 2,84 2,61 2,45 2,34 2,25 2,18 2,12 2,08 2,00 1,92 1,84 1,79 1,74 1,69 1,64 1,58 1,51
60 4,00 3,15 2,76 2,53 2,37 2,25 2,17 2,10 2,04 1,99 1,92 1,84 1,75 1,70 1,65 1,59 1,53 1,47 1,39
120 3,92 3,07 2,68 2,45 2,29 2,18 2,09 2,02 1,96 1,91 1,83 1,75 1,66 1,61 1,55 1,50 1,43 1,35 1,25
∞ 3,84 3,00 2,60 2,37 2,21 2,10 2,01 1,94 1,88 1,83 1,75 1,67 1,57 1,52 1,46 1,39 1,32 1,22 1,00

These slides can not be reproduced or distributed without permission in writing from the author. 228
4.5.5.2. An example to understand

In practice, these steps are not necessary if we use a computer,
because the software provides an output like the one below.

[Software AN.O.VA. output: for each Source, the table reports the Degrees of freedom, the DEVIANCE (sum of squares), the VARIANCE (Dev./d.f.), the calculated F-ratio and the TAIL's area (significance, to be compared with the 5% limit).]
These slides can not be reproduced or distributed without permission in writing from the author. 229
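A minimal sketch (not part of the original slides, and not the author's software; Python with numpy/scipy is an assumption) that builds an AN.O.VA. table of this kind from the raw results:

    import numpy as np
    from scipy.stats import f

    data = np.array([
        [[20, 19, 20], [23, 24, 23], [23, 24, 26]],   # gasoline 1, engines 1..3
        [[17, 18, 17], [20, 20, 19], [20, 23, 21]],   # gasoline 2
        [[16, 16, 15], [17, 16, 17], [24, 14, 18]],   # gasoline 3
        [[21, 20, 21], [26, 25, 24], [25, 27, 29]],   # gasoline 4
    ], dtype=float)
    a, b, r = data.shape                   # 4 gasolines, 3 engines, 3 replications
    grand = data.mean()

    mean_A = data.mean(axis=(1, 2))        # gasoline level means
    mean_B = data.mean(axis=(0, 2))        # engine level means
    cell = data.mean(axis=2)               # 4 x 3 cell means

    SS_A = b * r * np.sum((mean_A - grand) ** 2)
    SS_B = a * r * np.sum((mean_B - grand) ** 2)
    SS_AB = r * np.sum((cell - mean_A[:, None] - mean_B[None, :] + grand) ** 2)
    SS_E = np.sum((data - cell[:, :, None]) ** 2)
    df_E = a * b * (r - 1)                 # 24
    MS_E = SS_E / df_E                     # 3.1111

    for name, ss, df_ in [("Gasoline A", SS_A, a - 1),
                          ("Engine B", SS_B, b - 1),
                          ("Interaction AB", SS_AB, (a - 1) * (b - 1))]:
        F_ratio = (ss / df_) / MS_E
        tail = f.sf(F_ratio, df_, df_E)    # right-tail area = significance
        print(f"{name:15s} SS={ss:8.3f}  df={df_}  F={F_ratio:6.3f}  tail={tail:.2%}")
    print(f"{'Error':15s} SS={SS_E:8.3f}  df={df_E}  MS={MS_E:.4f}")
    # Gasoline and Engine: very small tail areas (significant);
    # Interaction AB: F below 1, tail around 65% (not significant).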
4.5.5.2. An example to understand

The previous TAIL’s AREA, Q(F), is called significance.

Significance is the risk of being wrong


when we consider two things as different.

In our particular case, the significance represents


the risk of being wrong
in considering a factor/interaction as “significant”:
in fact, this implies that there is an important difference
between
the results obtained when the factor/interaction is at level 1
compared to the results obtained when it is at level 2
(or at other levels).

These slides can not be reproduced or distributed without permission in writing from the author. 230
4.5.5.2. An example to understand

The penultimate right column shows the area of the tail to the right
of the F value observed in the experiment. So we no longer need to
search the FT value in a Fisher’s Table, but we only need to read
this column of AN.O.VA. and follow the criteria listed below.
Usually with:

α ≤ 1%            the factor/interaction is undoubtedly recognized as statistically significant.

1% < α ≤ 5%       the factor/interaction is "accepted" as statistically significant: for the significance, 5% is the standard generally assumed (except in very special cases).

5% < α ≤ 10%      this is an area of uncertainty: thus, before the final decision, we try to carry out further investigations or to provide further experiments or, if possible, to take into account other technical information.

α > 10%           strictly speaking, we should "stay the proceedings", but in practice one is forced to behave as if the factor/interaction was not statistically significant.

The AN.O.VA. Table of the previous slide also presents the analysis
of the interaction AB between gasoline and engine, which, for bre-
vity, we have never treated. From the AN.O.VA., it follows that the
factors A and B are both very significant, while their interaction is
not significant (risk of error in considering it as significant greater
than 65% !).
These slides can not be reproduced or distributed without permission in writing from the author. 231
4.5.5.2. An example to understand
Generally, the software also provides a predictive model. This is an example:

      ŷ = Ȳ + [effects of factor A] + [effects of factor B] + [effects of interaction AB]

where Ȳ is the overall mean and the arrays (brackets) contain the effects of the factor A, of the factor B and of the interaction AB respectively.

For example, the prediction for A at level 3 and B at level 2 (3rd gasoline and 2nd engine), taking into account the interaction, is obtained as shown below:

   Overall mean                        20.7778
   Factor A gasoline effect (3rd)      -3.7778
   Factor B engine effect (2nd)         0.3889
   Interaction AB                      -0.7222
                                    ---------------
   Prediction (A=3 and B=2)            16.6667
These slides can not be reproduced or distributed without permission in writing from the author. 232
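A minimal sketch (not part of the original slides; Python with numpy is an assumption) of how the effects and the additive prediction can be computed from the raw results:

    import numpy as np

    data = np.array([
        [[20, 19, 20], [23, 24, 23], [23, 24, 26]],
        [[17, 18, 17], [20, 20, 19], [20, 23, 21]],
        [[16, 16, 15], [17, 16, 17], [24, 14, 18]],
        [[21, 20, 21], [26, 25, 24], [25, 27, 29]],
    ], dtype=float)

    grand = data.mean()                               # 20.7778
    eff_A = data.mean(axis=(1, 2)) - grand            # effects of the 4 gasolines
    eff_B = data.mean(axis=(0, 2)) - grand            # effects of the 3 engines
    eff_AB = data.mean(axis=2) - grand - eff_A[:, None] - eff_B[None, :]

    i, j = 2, 1                                       # 3rd gasoline, 2nd engine (0-based)
    print(round(eff_A[i], 4), round(eff_B[j], 4), round(eff_AB[i, j], 4))
    # -3.7778  0.3889  -0.7222
    print(round(grand + eff_A[i] + eff_B[j] + eff_AB[i, j], 4))   # 16.6667
    print(round(grand + eff_A[i] + eff_B[j], 4))                  # 17.3889 (AB omitted)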
4.5.5.2. An example to understand
But in the previous ANOVA the interaction AB is not significant, and therefore we have to exclude from the calculations the array of the interaction AB: i.e. we have to omit the value -0.7222 from the previous calculations.

   Overall mean                        20.7778
   Factor A gasoline effect (3rd)      -3.7778
   Factor B engine effect (2nd)         0.3889
   Interaction AB                     (-0.7222, omitted)
                                    ---------------
   Prediction (A=3 and B=2)            17.3889
These slides can not be reproduced or distributed without permission in writing from the author. 233
4.5.5.2. An example to understand
Of course we can simply repeat the calculations on the PC, with the additional information that the interaction AB does not exist. In this way we get:

[Software AN.O.VA. output recomputed without the interaction AB, whose degrees of freedom and sum of squares are merged into the error.]
These slides can not be reproduced or distributed without permission in writing from the author. 234
4.5.5. Analysis of results (AN.O.VA. + AN.O.M.)

4.5.5.3
Pooling

These slides can not be reproduced or distributed without permission in writing from the author. 235
4.5.5.3. Pooling

The calculation that merges into the “error”
the factors/interactions
recognized as not significant
(as we have just done)
is called POOLING.

The detail of calculations is shown on the usual example


(see next slide)

These slides can not be reproduced or distributed without permission in writing from the author. 236
4.5.5.3. Pooling

Since the interaction could not be considered significant,
we may eliminate it from the calculations by inserting it into the ERROR.
In other words,
its degrees of freedom and the pertaining sum of squares
will be added to those of the ERROR:

   Residual DEGREES OF FREEDOM = 6 + 24 = 30
   Residual SUM OF SQUARES     = 13.1111 + 74.6667 = 87.7778
   Residual VARIANCE           = 87.7778 / 30 = 2.9259
These slides can not be reproduced or distributed without permission in writing from the author. 237
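A minimal sketch (not part of the original slides; Python is an assumption) of the pooling arithmetic quoted above:

    df_AB, SS_AB = 6, 13.1111        # interaction AB (not significant)
    df_E, SS_E = 24, 74.6667         # error before pooling

    df_pooled = df_E + df_AB         # 30
    SS_pooled = SS_E + SS_AB         # 87.7778
    var_pooled = SS_pooled / df_pooled    # 2.9259

    print(df_pooled, round(SS_pooled, 4), round(var_pooled, 4))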
4.5.5.3. Pooling

Pooling should be used with extreme caution, because:


 can never "prove" that a factor/interaction is not significant, but only that it is
(with a predetermined risk);
 there is always the risk of adding to the error not only "unexplained" deviances (linked to the measurement method in the broader sense), but also deviances of the pooled factors/interactions: these certainly do not affect the phenomenon much, but they may do so in a "systematic" way.

In conclusion, we recommend to use POOLING as verification of the


stability of the results, bearing in mind that we can not provide
general rules, when stability is not verified.

These slides can not be reproduced or distributed without permission in writing from the author. 238
4.5.5.3. Pooling

It is worth noting that if in the AN.O.VA.:

 the interaction AB between factors A and B is significant,


 but the two factors A and B are not significant,

in making the pooling, we must continue to consider


also the factors A and B (although not significant),
otherwise the AN.O.VA. is unable to handle the interaction
(which is significant).

These slides can not be reproduced or distributed without permission in writing from the author. 239
4.5.5.3. Pooling
REMARK
Now, it would not be difficult to verify that the variance of error can
be interpreted as a measure of deviations between the experi-
mental results and those calculated by the predictive model.
In the previous slide (214), the variance of the error was taken as an indica-
tor of the uncertainty (or noise) attributable to the inherent variability of
the measurement method in the broad sense.
If we take account of the interaction, as we did before the pooling, the two
interpretations lead to the same value for the variance of error, because, in
both cases, it is calculated from the variances among experiments conduc-
ted in “nominally equal conditions” (and the means of the results of each
triple of experiments coincide with the values calculated by the predictive
model).
Instead, after the pooling, the degrees of freedom and the sum of squares
of the interaction have been incorporated in the error. Obviously this does
slightly change both the predictions of the model and the value of the error
variance (which becomes 2.9259 instead of 3.1111).

These slides can not be reproduced or distributed without permission in writing from the author. 240
4.5.5. Analysis of results (AN.O.VA. + AN.O.M.)

4.5.5.4
Analysis of Means
(AN.O.M.)

These slides can not be reproduced or distributed without permission in writing from the author. 241
4.5.5.4. Analysis of Means (AN.O.M.)
Once we have recognized factors and interactions which are
significant, we can use the predictive model to construct the
graph of the Analysis Of Means (AN.O.M.)
[AN.O.M. graph: km/l (vertical axis) versus the LEVELS OF FACTORS A1 A2 A3 A4 and B1 B2 B3 (horizontal axis).]
These slides can not be reproduced or distributed without permission in writing from the author. 242
4.5.5.4. Analysis of Means (AN.O.M.)
The predictive model can also be used to predict the combination of
gasoline and engine that leads to lower consumption.
The parallelism of the curves confirms the absence of interaction al-
ready found by ANOVA.
[Graph: km/l (vertical axis) versus the LEVELS OF FACTOR A (A1 A2 A3 A4), with one curve for each level of factor B (B1, B2, B3); the curves are practically parallel.]
These slides can not be reproduced or distributed without permission in writing from the author. 243
4.5.5. Analysis of results (AN.O.VA. + AN.O.M.)

4.5.5.5
Importance and use
of
“Percentages of contribution”

These slides can not be reproduced or distributed without permission in writing from the author. 244
4.5.5.5. Importance and use of “Percentages of contribution”
Once we have established which factors/interactions are significant, i.e. those which have a real influence on the investigated phenomenon, it may be interesting to determine how much each of them impacts on the phenomenon, i.e. what their percentages of contribution are.

Source A - Variability due to the factor A (type of fuel): df = k − 1 = 3; Q (deviance, sum of squares) QA = 276.2222; s² (variance = Q/df) sA² = 92.0741; Fisher's F-ratio FA = sA²/se² = 31.468; rough estimate of the percentage of contribution = 56.58%.
Source B - Variability due to the factor B (type of engine): df = n − 1 = 2; QB = 124.2222; sB² = 62.1111; FB = sB²/se² = 21.228; 25.44%.
Interaction AB - Variability due to the interaction between factors A and B: not significant.
Error e - Residual "unexplained" variability: df = dfT − (dfA + dfB) = 35 − (3 + 2) = 30; Qe = 87.7778; se² = Qe/dfe = 2.9259; 17.98%.
Total T - Total (or overall) variability, considering the totality of observations as a single set: df = total number of experiments − 1 = 35; QT = 488.2222; sT² = QT/dfT = 13.9492; 100.00%.

Where: k = number of levels for factor A, fuel; n = number of levels for factor B, engine.

Since the total deviance (= sum of squares) is equal to the sum of the other
deviances, a simple but very rough method could be to calculate the percenta-
ges of total deviance explained by each significant factor/interaction (see right
column).
These slides can not be reproduced or distributed without permission in writing from the author. 245
4.5.5.5. Importance and use of “Percentages of contribution”
More accurately, the percentages of contribution are calculated through the "corrected sums of squares",
i.e. purified as much as possible from the variability due to the randomness of sampling.
Of course, the calculation is made only on the significant factors/interactions and after a pooling.

"Corrected" sums of squares and percentages of contribution (rough percentages in brackets):

A (fuel):    QAc = QA − dfA · se² = 276.2222 − 3 · 2.9259 = 267.4445    →  54.78%  (56.58%)
B (engine):  QBc = QB − dfB · se² = 124.2222 − 2 · 2.9259 = 118.3704    →  24.25%  (25.44%)
AB:          not significant
Error:       Qec = Qe + (dfA + dfB) · se² = 87.7778 + 5 · 2.9259 = 102.4073  →  20.98%  (17.98%)
Total:       QT = 488.2222                                               → 100.00%

The treatment consists in subtracting from the sum of squares, Q, of each factor/interaction the product of the error variance (always the same), se², by the degrees of freedom, df, of each single factor/interaction.
In this case (which is not real but invented), the differences between the percentages of contribution calculated according to the two different methods are rather small. But there are frequent cases where the differences are very important.
These slides can not be reproduced or distributed without permission in writing from the author. 246
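A minimal sketch (not part of the original slides; Python is an assumption) of the corrected sums of squares and percentages of contribution, with the figures of the example:

    SS = {"Gasoline A": (276.2222, 3), "Engine B": (124.2222, 2)}   # (Q, df)
    SS_err, df_err = 87.7778, 30          # pooled error
    SS_tot = 488.2222
    var_err = SS_err / df_err             # 2.9259

    corrected = {name: q - df * var_err for name, (q, df) in SS.items()}
    corrected["Error"] = SS_err + sum(df for _, df in SS.values()) * var_err

    for name, qc in corrected.items():
        print(f"{name:12s} Qc = {qc:9.4f}   contribution = {qc / SS_tot:6.2%}")
    # Gasoline A  267.4444  54.78%;  Engine B  118.3704  24.25%;  Error  102.4074  20.98%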
4.5.5.5. Importance and use of “Percentages of contribution”

COMPARISON BETWEEN THE TWO CALCULATIONS

[The slide places side by side the two previous tables: for each source, the rough percentage of contribution (based on Q/QT) and the percentage based on the "corrected" sums of squares.]

A (fuel):    QAc = 267.4445   →  54.78%   (rough: 56.58%)
B (engine):  QBc = 118.3704   →  24.25%   (rough: 25.44%)
AB:          not significant
Error:       Qec = 102.4073   →  20.98%   (rough: 17.98%)
Total:       QT  = 488.2222   → 100.00%

In this case (which is not real but invented), the differences between the percentages of contribution calculated according to the two different methods are rather small. But there are frequent cases where the differences are very important.
These slides can not be reproduced or distributed without permission in writing from the author. 247
4.5.5.5. Importance and use of “Percentages of contribution”

The thumb rule (used in the previous slide) to calculate the "corrected sum
of squares" in order to purge deviances from the "tare" introduced by the
randomness of sampling can be justified as follows.
The randomness of sampling generates the variability which would result if we
were to repeat many times the whole set of all tests. For each repetition of
the experiment, the deviances would change, because the means of all tests
performed in nominally identical conditions would change (slightly) at each
repetition.
Empirically, we can observe that the changes introduced by the randomness of
sampling increase with:
 the variance of error, which quantifies the inherent inaccuracy of the
measurement method in a broad sense;
 the degrees of freedom of each significant factor/interaction: the larger
the number of degrees of freedom, the greater the number of levels and
therefore the larger the number of possibilities for a factor/interaction
to vary.
Hence the thumb rule adopted at the previous slide
These slides can not be reproduced or distributed without permission in writing from the author. 248
4.5.5.5. Importance and use of “Percentages of contribution”

The estimate of the percentage of contribution


is empirical and precautionary,
in the sense that the actual contribution on the phenomenon
of each significant factor/interaction
could be a little larger than the value
estimated by means of the percentages of contributions.

These slides can not be reproduced or distributed without permission in writing from the author. 249
4.5.5.5. Importance and use of “Percentages of contribution”

BAR GRAPH OF THE PERCENTAGES OF CONTRIBUTION

[Bar graph, percentage of contribution versus factor/interaction: Gasoline 54,78%, Engine 24,25%, Interaction 0,00%, Error 20,98%.]
These slides can not be reproduced or distributed without permission in writing from the author. 250
4.5.5.5. Importance and use of “Percentages of contribution”

THUMB RULE - If the percentage of contribution of the factor/inte-


raction that has the smallest contribution is larger than the percen-
tage contribution of the error (or at least not too much smaller),
then we can reasonably conclude that the model used is able to
capture at least the bulk of the investigated phenomenon.
If this does not occur, it means that:
 some factor / interaction of any importance has been omitted,
 or the phenomenon has a very large intrinsic variability that is
not well captured by all the factors/interactions considered.

On this basis, the model in the above graph can be considered


just acceptable.

REMARK - The construction of a variance bar graph (instead of


the percentage of contribution) would not allow to draw any conclu-
sion, because the variances of the significant factors/interactions
are, by definition, much higher than those of the error (according to
the Fisher's F distribution).
These slides can not be reproduced or distributed without permission in writing from the author. 251
4.5.5.5. Importance and use of “Percentages of contribution”
When results do not meet the criterion of the previous rule, we may wonder whether we
have really omitted some important factor/interaction or whether the phenomenon
has a large intrinsic variability. It is not easy to answer. To provide at least a small
contribution in these cases of uncertainty, the following Table schematically compacts
the various types of phenomena into 3 categories:

TYPES OF       NUMBER OF FACTORS
PHENOMENA      very important | of medium importance | of little importance (negligible) | COMMENTS

COMMON         2÷12           | 0                    | many   | Typical to deal with Experimental Design.

MALIGNANT      3÷4            | 5÷20                 | many   | Usually there is no way to deal with the mid-level factors by using Experimental Design, and so we have to be satisfied with a fairly large residual error.

RARE           -              | several              | many   | These phenomena have a very high intrinsic variability and therefore we can not use Experimental Design (unless we are able to decompose them into smaller problems, like those mentioned above in this Table).
These slides can not be reproduced or distributed without permission in writing from the author. 252
4.5.5.5. Importance and use of “Percentages of contribution”

Computerized calculation of the PERCENTAGE OF CONTRIBUTION

SV = Source of variation; DF = degrees of freedom; SS = sum of squares (Deviance); V = mean square (variance) = SS/DF; FC = calculated Fisher's F-ratio = s²explained / s²error; α = significance; Qc = corrected sum of squares; r = percentage of contribution.

SV               DF     SS        V         FC       α        Qc        r
Gasoline A        3   276,222    92,0741   31,468   0,000%   267,444   54,78%
Engine B          2   124,222    62,1111   21,228   0,000%   118,370   24,25%
Interaction AB              n o t   s i g n i f i c a n t
Error E          30    87,778     2,9259                     102,407   20,98%
Total T          35   488,222    13,9492                     488,222  100,00%

df tot. related to the explained variability = 5

NOTE - Input cells are in yellow (the others should not be touched).
For each new application, insert new rows or delete existing ones starting from the central ones (e.g. from that of "Engine"), so as not to affect the validity of the formulas.
These slides can not be reproduced or distributed without permission in writing from the author. 253
4.5. Experimental Design

4.5.6
Fractional factorial plans

These slides can not be reproduced or distributed without permission in writing from the author. 254
4.5.6. Fractional factorial plans

4.5.6.1
Reducing
the number of experiments

These slides can not be reproduced or distributed without permission in writing from the author. 255
4.5.6.1. Reducing the number of experiments

If we need to consider all interactions


(maybe even some among those of higher order
in addition to all interactions between only two factors),
then we must test all possible combinations
of factor’s values (levels),
i.e. we must design a Full Factorial Plan.

Instead, if it is possible
to rule out the existence of some interactions,
it is worth verifying the possibility of using
a Fractional Factorial Plan.
Of course there are risks,
to be carefully evaluated before making a final choice.

These slides can not be reproduced or distributed without permission in writing from the author. 256
4.5.6. Fractional factorial plans

4.5.6.2
Minimum
number of experiments
according to
degrees of freedom (df)

These slides can not be reproduced or distributed without permission in writing from the author. 257
4.5.6.2. Minimum number of experiments
Statistical theory leads to the conclusion that:
The analysis of each factor (e.g. A) requires a
number of experiments (i.e. degrees of freedom) dfA = levelsA - 1
equal to the number of its levels minus 1.
The analysis of each interaction, AB, requires a
number of experiments (i.e. degrees of freedom)
equal to the product of degrees of freedom of the dfAB = dfA x dfB
two interacting factors.

We then must add 1 experiment (i.e. 1 degree of


freedom) for the mean. dfmean = 1 (fixed)
We must still add at least 1 experiment (i.e. at
least 1 degree of freedom) for the error. Better if dferror ≥ 1
more than one (recommended 2 or 3 !)

Adding the number of experiments (or degrees of freedom) required by


all factors and interactions that we decided to consider and adding 1 ex-
periment for the mean (i.e. to pass from degrees of freedom to number of
experiments) and adding at least a small number of experiments for the
error, we can calculate the minimum required number of experiments.
It must immediately be noted that this number may be insufficient (and thus
will have to be increased) in order to obtain an orthogonal plan (see shortly). Situa-
tions of this kind occur especially when the plan contains factors with different
numbers of levels.
These slides can not be reproduced or distributed without permission in writing from the author. 258
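A minimal sketch (not part of the original slides; Python is an assumption, and the factor list is hypothetical) of the degrees-of-freedom bookkeeping above, before any adjustment required by orthogonality:

    factors = {"A": 2, "B": 2, "C": 3}            # hypothetical factor -> number of levels
    interactions = [("A", "B"), ("A", "C")]       # interactions we decide to consider

    df_factors = sum(levels - 1 for levels in factors.values())
    df_inter = sum((factors[x] - 1) * (factors[y] - 1) for x, y in interactions)
    df_mean = 1
    df_error = 2                                  # recommended: at least 2 or 3

    min_experiments = df_factors + df_inter + df_mean + df_error
    print(min_experiments)    # (1 + 1 + 2) + (1 + 2) + 1 + 2 = 10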
4.5.6. Fractional factorial plans

4.5.6.3
Meaning of
"orthogonal” factorial plan

These slides can not be reproduced or distributed without permission in writing from the author. 259
4.5.6.3. Meaning of "orthogonal” factorial plan

 A plan is called "orthogonal" when each factor is


tested in such a way that its effect can be judged
independently of other factors.

 Practically, this means that every level of each factor


appears in the plan the same number of times.

These slides can not be reproduced or distributed without permission in writing from the author. 260
4.5.6.3. Meaning of "orthogonal” factorial plan

To better illustrate the meaning of orthogonal plan,


we will refer to the following case of
a reduction of noise inside the passenger compartment.

The initial brainstorming has led to the following 3 factors,


each with 2 levels (= normal and improved).
FACTOR                         Level 1              Level 2
A  Engine elastic anchors      standard             with damper
B  Tires                       standard             low rolling noise
C  Gearbox ties                standard (Ø 8 mm)    larger diameter (Ø 10 mm)

These slides can not be reproduced or distributed without permission in writing from the author. 261
4.5.6.3. Meaning of "orthogonal” factorial plan

Full factorial plan (always orthogonal)

A C
# engine B gearbox
elastic tires
ties
anchors
1 1 1 1
2 1 1 2
3 1 2 1
4 1 2 2
5 2 1 1
6 2 1 2
7 2 2 1
8 2 2 2
These slides can not be reproduced or distributed without permission in writing from the author. 262
4.5.6.3. Meaning of "orthogonal” factorial plan
It should be said immediately that
it is not scientifically possible to address this problem
with Fractional Factorial Plans.

Nevertheless,
three different kinds of “reduced” plans,
all with only 4 experiments,
will be discussed later,
in order to give
an intuitive but quite clear idea
about the meaning of “orthogonal plan”
and to highlight the differences
among the three alternative criteria.
These slides can not be reproduced or distributed without permission in writing from the author. 263
4.5.6.3. Meaning of "orthogonal” factorial plan
1st criterion: PARAMETRIC PLAN
We vary one thing at a time.
The unbalancing is evident: the level 2 is present only 3 times out of 12.

                                                             A engine           B       C gearbox
                                                             elastic anchors    tires   ties
1st experiment: all standard, as a reference
of the present situation                                           1              1         1
2nd experiment: engine elastic anchors improved
(with damper)                                                      2              1         1
3rd experiment: tires improved (low rolling noise),
after restoring the standard elastic anchors                       1              2         1
4th experiment: gearbox ties improved (larger
diameter), after restoring the standard tires                      1              1         2
These slides can not be reproduced or distributed without permission in writing from the author. 264
4.5.6.3. Meaning of "orthogonal” factorial plan
2nd criterion: similar to the PARAMETRIC PLAN
Development in order of importance: the improvement is progressively increasing.
The plan is good enough, but it requires knowing the ranking of importance of the corrective actions.
The columns remain unbalanced.

                                                             A engine           B       C gearbox
                                                             elastic anchors    tires   ties
1st experiment: all standard                                       1              1         1
2nd experiment: engine elastic anchors improved                    2              1         1
3rd experiment: engine elastic anchors and tires improved          2              2         1
4th experiment: all improved                                       2              2         2
pneumatici normali.
These slides can not be reproduced or distributed without permission in writing from the author. 265
4.5.6.3. Meaning of "orthogonal” factorial plan
3rd criterion: ORTHOGONAL FRACTIONAL PLAN
Full symmetry for each factor-level
The total number of 1 and 2 is the same,
both in the plan and in each column. A B C
Each level of each factor appears in the engine
tasselli pneumatici tiranti
gearbox
plan the same number of times. elastic tires ties
motoprop.
anchors cambio
1a prova: tutto standard,
1st experiment:
quale riferimento della
all standard 1 1 1
situazione attuale.
2a prova: tasselli elastici
2nd experiment
motopropulsore con 1
2 2
1 21
smorzatore.
3a prova: pneumatici a
bassa rumorosità
3rd experiment
di 1
2 2
1 21
rotolamento, dopo ripristino
tasselli elastici normali.
4a prova: tiranti cambio
4th experiment
maggiorati, dopo ripristino 1
2 1
2 12
pneumatici normali.
These slides can not be reproduced or distributed without permission in writing from the author. 266
4.5.6.3. Meaning of "orthogonal” factorial plan
An Orthogonal Fractional Plan consists of
several rows (suitably chosen) of a Full Factorial Plan.

Full factorial plan                              Orthogonal fraction
#    A (engine elastic anchors)  B (tires)  C (gearbox ties)
1    1                           1           1      ← 1st experiment: 1 1 1
2    1                           1           2
3    1                           2           1
4    1                           2           2      ← 2nd experiment: 1 2 2
5    2                           1           1
6    2                           1           2      ← 3rd experiment: 2 1 2
7    2                           2           1      ← 4th experiment: 2 2 1
8    2                           2           2

The algorithm for extraction, in general, is not easy


and, for the construction of a Fractional Factorial Plan,
we usually use a computer.
These slides can not be reproduced or distributed without permission in writing from the author. 267
4.5.6.3. Meaning of "orthogonal” factorial plan
3rd criterion: ORTHOGONAL FRACTIONAL PLAN

An Orthogonal Fractional Plan


is an effective and efficient plan
of general use
because it requires no foreknowledge.
Practically, it is always valid.
If plans are not orthogonal, we can not perform the ANOVA

The minimum number of experiments calculated according


to the Table of degrees of freedom (see previous slide 258)
may increase because of the need to satisfy the requirement
of orthogonality.

These slides can not be reproduced or distributed without permission in writing from the author. 268
4.5.6.3. Meaning of "orthogonal” factorial plan
A way to find the Fractional Factorial Plan of interest is to use a
"generator". Usually it is the highest-order interaction (in this case ABC).

#    A   B   C    ABC (generator)
1    1   1   1         1
2    1   1   2         2
3    1   2   1         2
4    1   2   2         1
5    2   1   1         2
6    2   1   2         1
7    2   2   1         1
8    2   2   2         2

The Fractional Factorial Plans are produced by the generator (whose levels are obtained with the same rules already seen for the interactions).
The levels of the interaction ABC are used as a generator: we can select the experiments with ABC at level 1 or those with ABC at level 2.
These slides can not be reproduced or distributed without permission in writing from the author. 269
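A minimal sketch (not part of the original slides; Python is an assumption) of the extraction of the half fraction of a 2³ plan by means of the generator ABC, keeping the runs where ABC is at level 1:

    from itertools import product

    def interaction_level(*levels):
        # level 1 when an even number of factors is at level 2, otherwise level 2
        # (the same rule used in the slides for the levels of an interaction)
        return 1 if sum(l == 2 for l in levels) % 2 == 0 else 2

    full_plan = list(product((1, 2), repeat=3))            # the 8 runs of A, B, C
    half_fraction = [run for run in full_plan
                     if interaction_level(*run) == 1]      # keep ABC at level 1

    for run in half_fraction:
        print(run)      # (1,1,1), (1,2,2), (2,1,2), (2,2,1)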
4.5.6. Fractional factorial plans

4.5.6.4
Confounding & Aliasing

These slides can not be reproduced or distributed without permission in writing from the author. 270
4.5.6.4. Confounding & Aliasing

The price we pay to use a Fractional Factorial Plan


is, of course, that we have
a smaller amount of data for analysis.

In practice, it may happen to fail to discriminate whether


a particular effect is attributable
to a factor rather than to an interaction.

Such a situation is said to be of CONFOUNDING.

AN EXAMPLE FOR
CLARIFICATION

These slides can not be reproduced or distributed without permission in writing from the author. 271
4.5.6.4. Confounding & Aliasing
Let us consider the following Fractional Factorial Plan consisting of
4 experiments (the Full Factorial Plan would require 23 = 8 experiments).

calculated as described above.


levels of interaction were
FATTORI
FACTOR RISULTATI
TEST INTERAZIONI
INTERACTION
A B C RESULTS
DI PROVA AB AC BC

2 levels factors,
In dealing with
1 1 1 y1 1 1 1
1 2 2 y2 2 2 1
2 1 2 y3 2 1 2
2 2 1 y4 1 2 2

SAME VALUES !
It would be easy to verify that 4 experiments are not sufficient to verify the
presence of interactions and even for the analysis of the effects of the
factors. But here we just want to explain the confounding!
These slides can not be reproduced or distributed without permission in writing from the author. 272
4.5.6.4. Confounding & Aliasing
The following table shows the calculation
of average values of factors/interactions at various levels
(see also previous slides 193, 195 and/or 197 and/or 216 in the right).

FATTORI
FACTOR RISULTATI
TEST INTERAZIONI
INTERACTION
A B C RESULTS
DI PROVA AB AC BC
MEANS
1 1 1 y1 1 1 1
Level 1 Level 2 1 2 2 y2 2 2 1
2 1 2 y3 2 1 2
2 2 1 y4 1 2 2
FACTORS

We see that the same nu-


merical values are used to
evaluate the effect of:
INTERACTIONS

 factor A and interaction BC;


 factor B and interaction AC;
 factor C and interaction AB.

These slides can not be reproduced or distributed without permission in writing from the author. 273
4.5.6.4. Confounding & Aliasing
NOTE 1 - With a full factorial plan (of 8 experiments), there is no con-
founding, because every numerator has 4 terms (instead of 2): 2 of them
are different and thus we are able to determine the effect of the factor and/or
of the interaction, without confounding.
Referring to the previous example, it is shown below the absence of confounding
between the factor A and the interaction BC, but obviously the same happens for the
other two factors and their interactions with corresponding signs.
# A B C y BC

1 1 1 1 y1 1

2 1 1 2 y2 2
y1 + y2 + y3 + y4 y5 + y6 + y7 + y8
A1 = A2 =
4 4 3 1 2 1 y3 2

4 1 2 2 y4 1

5 2 1 1 y5 1
y1 + y4 + y5 + y8 y2 + y3 + y6 + y7
BC1 = BC2 =
4 4 6 2 1 2 y6 2

7 2 2 1 y7 2

8 2 2 2 y8 1
These slides can not be reproduced or distributed without permission in writing from the author. 274
4.5.6.4. Confounding & Aliasing
NOTE 2 - Using a fractional factorial plan (of 4 experiments), even
adding a replication for each test, we can not avoid confounding.
Referring to the previous example, it is shown below the presence of confounding
between the factor A and the interaction BC, but obviously the same happens for
the other two factors and their interactions with corresponding signs.

# A B C y BC

1 1 1 1 y11 y21 1

2 1 2 2 y12 y22 1

3 2 1 2 y13 y23 2

4 2 2 1 y14 y24 2

y11 + y12 + y21 + y22 y31 + y32 + y41 + y42


A1 = A2 =
4 4

y11 + y12 + y21 + y22 y31 + y32 + y41 + y42


BC1 = BC2 =
4 4

These slides can not be reproduced or distributed without permission in writing from the author. 275
4.5.6.4. Confounding & Aliasing

This is confounding.

We are not able to determine if a certain effect


is attributable to a factor rather than to an interaction
or maybe both.
On the other hand, we are not even able to determine whether
the absence of an effect
originates from the lack of influence
both of a factor and of an interaction
or instead from
the contrasting effect of the factor and the interaction.
But if we can rule out the existence of interactions,
then the problem falls, because it is limited to factors only.
It is important to remember the risk of taking a wrong hypothesis
is always lurking!

These slides can not be reproduced or distributed without permission in writing from the author. 276
4.5.6.4. Confounding & Aliasing

The confounding is often represented


by the structure of aliasing,
which, in our example, is the following table:

A  BC
B  AC
C  AB

These slides can not be reproduced or distributed without permission in writing from the author. 277
4.5.6.4. Confounding & Aliasing

Generalizing, a Fractional Factorial Plan


may be accepted only on condition that:
 factors should not be “confused” with
interactions which could be significant;
 nor the interactions between them.

These slides can not be reproduced or distributed without permission in writing from the author. 278
4.5. Experimental Design

4.5.7
Experimental Design applied
to Robust Design

These slides can not be reproduced or distributed without permission in writing from the author. 279
4.5.7. Experimental design applied to Robust Design

The purpose of ROBUST DESIGN


is to
optimize the choices of project/process in order to
maximize the performance (response)
minimizing the effect of the inevitable variability,
through a careful consideration of noise factors,
dominated by selecting the best values of control factors.

These slides can not be reproduced or distributed without permission in writing from the author. 280
4.5.7. Experimental design applied to Robust Design

Operationally, this requires to provide 2 arrays


(each with a full or fractional factorial plan):
 one for the control factors (inner array);
 and one for the identified noise factors (outer array).

For each configuration of the control factors in the inner array,


we must replicate the experiments
in all noise conditions planned in the outer array.

These slides can not be reproduced or distributed without permission in writing from the author. 281
4.5.7. Experimental design applied to Robust Design
Example of plates welded with TIG (Tungsten Inert Gas) technique

TestIdentificativo
condition identification number
configuraz. 1 2 3 4
FATTORI E AlRiv. all.
cladding 1 1 2 2
NOISE
DI F Sporco sup.
Dirt surface 1 2 1 2
FACTORS
DISTURBO G Stato
Electr.elettr.
wear 1 2 2 1

Test
Identifi- FATTORI DI CONTROLLO
CONTROL FACTORS
setup
identi-
cativo A B C D
fication
prove
number Voltage
Corrente Gas flow
Portata gas Electr.
Pick-uppick-up
elettr. Cooling mode
Raffreddam.
1 1 1 1 1 y1.1 y2.1 y3.1 y4.1
2 1 1 2 2 y1.2 y2.2 y3.2 y4.2
3 1 2 3 3 y1.3 y2.3 y3.3 y4.3
4 1 2 1 1 y1.4 y2.4 y3.4 y4.4
5 1 3 2 2 y1.5 y2.5 y3.5 y4.5
6 1 3 3 3 y1.6 y2.6 y3.6 y4.6
7 2 1 1 3 y1.7 y2.7 y3.7 y4.7
8 2 1 2 3 y1.8 y2.8 y3.8 y4.8
9 2 2 3 2 y1.9 Resultsy2.9of experiments
y3.9 y4.9
10 2 2 1 2 y1.10 (replicated
y2.10 for each
y3.10 y4.10
11 2 3 2 1 y1.11 considered
y2.11 noise condition)
y3.11 y4.11
12 2 3 3 1 y1.12 y2.12 y3.12 y4.12
13 3 1 3 2 y1.13 y2.13 y3.13 y4.13
14 3 1 3 1 y1.14 y2.14 y3.14 y4.14
15 3 2 2 3 y1.15 y2.15 y3.15 y4.15
16 3 2 2 1 y1.16 y2.16 y3.16 y4.16
17 3 3 1 2 y1.17 y2.17 y3.17 y4.17
18 3 3 1 3 y1.18 y2.18 y3.18 y4.18

These slides can not be reproduced or distributed without permission in writing from the author. 282
4.5. Experimental Design

4.5.8
Concluding remarks on
Experimental Design

These slides can not be reproduced or distributed without permission in writing from the author. 283
4.5.8. Concluding remarks on Experimental Design
Sequence of operational steps for Experimental Design
1. Definition of quality characgteristics and their measurement method.
2. Kind of target:
• nominal the best (e.g.: a part to be matched with an other);
• smaller the better (e.g.: fuel consumption);
• larger the better (e.g.: performances).
3. List of all noise factors (if necessary/useful: Robust Design).
4. Collection of all available information on the subject..
5. Picture of the “current situation”.
6. Brainstorming.
7. Selection of control and noise factors and their levels and interactions.
8. Designing the optimal test plan.
9. Carrying out experiments.
10. Compilation of results.
11. Analysis of results (AN.O.V.A. + AN.O.M.).
12. Selection of the optimal configuration and prediction of achievable results.
13. Comparison with the current situation.
14. Additional verification experiments.
15. Conclusions and operational consolidation.
When there are several quality characteristics to be optimized, with conflicting demands (for the
levels of involved factors), the tools used are the Loss Function and Q.F.D..
These slides can not be reproduced or distributed without permission in writing from the author. 284
4.5.8. Concluding remarks on Experimental Design

These slides can not be reproduced or distributed without permission in writing from the author. 285
4. Main statistical tools to prevent failure and ensure reliability

4.6
MULTIPLE REGRESSION

These slides can not be reproduced or distributed without permission in writing from the author. 286
4.6. Multiple Regression
It is a method very similar to EXPERIMENTAL DESIGN, but
with less need for input.
The characteristic parameters of each test
are no longer defined by "levels“
(as in Experimental Design),
but are measured in physical units (in continuous)
of the values found in the test
for each independent variable.

This implies:
 possibility of using existing tests available at the
company;
 inability to verify the existence of interactions.

These slides can not be reproduced or distributed without permission in writing from the author. 287
4.6. Multiple Regression

4.6.1
Correlation coefficient

These slides can not be reproduced or distributed without permission in writing from the author. 288
4.6.1. Multiple Regression: correlation coefficient

Meaning of the correlation coefficient r

In essence, the correlation


coefficient r indicates how
much the interpolation with the
straight line of least squares
is better than that with a
horizontal straight line
passing through the center of
Comparison between and
data.

Usually, a correlation is considered "good“ when it is


r > 75%
These slides can not be reproduced or distributed without permission in writing from the author. 289
4.6.1. Multiple Regression: correlation coefficient

As a guideline, to assess the quality of a correlation,


we can refer to the table below:

Strength of correlation
r R2
(indicative)

0.8÷1.0 0.64÷1.00 High

0.5÷0.8 0.25÷0.64 Moderate

0.2÷0.5 0.04÷0.25 Weak

0.0÷0.2 0.00÷0.04 Negligible


These slides can not be reproduced or distributed without permission in writing from the author. 290
4.6.1. Multiple Regression: correlation coefficient

The standard error of the estimate


is usually denoted by se.

S (y – y’)2
se = --------------
n-2
It basically expresses the error that we make
in referring to the interpolating straight line,
rather than to the individual observations.

NOTE - The denominator is n - 2, because here, in order to define the straight line
from which the distances (of the single points) are calculated, 2 constraints are
needed: for example, the intercept and the slope of the straight line (and not just 1,
like when calculating differences from the mean).

These slides can not be reproduced or distributed without permission in writing from the author. 291
4.6. Multiple Regression

4.6.2
Fundamentals

These slides can not be reproduced or distributed without permission in writing from the author. 292
4.6.2. Multiple Regression: fundamentals

Here again it is typical to start with a table quite similar


to that used for DOE.

x1 x2 ..... xi ..... xn y y’
serie di datitest
Experimental 11 d1 r1 r’1
serie di datitest
Experimental 22 d2 r2 r’2
serie di datitest
Experimental 33 d3 r3 r’3
....................... ..... ..... .....
....................... ..... ..... .....
serie di dati
Experimental test(m-1)
(m-1) dm-1 rm-1 r’m-1
serie di datitest
Experimental mm dm rm r’m
As in the DOE, each of n independent variables x1, x2, ..., xn, is the title of a
column, but here each row contains the data obtained from a specific test.
The last column (in gray) contains the actual test results and the column
outside (in yellow) shows the forecasts of the same results calculated by the
regression model.
But here the test values of variables are no longer defined by the levels (which,
with the DOE, are mandatory and have constant values), but they are
replaced by the values (in physical units and therefore in continuous)
observed during the test.
These slides can not be reproduced or distributed without permission in writing from the author. 293
4.6.2. Multiple Regression: fundamentals
The use of measures of physical units (in continuous) removes
the constraint of the levels(1),
allowing us to use results
of tests already available in the company
(where, of course, were measured all the n variables
that we have decided to keep under control
or to consider as potentially influential
in the examined phenomenon).
On the other hand,
precisely the measurement in the continuous
can not distinguish
between tests at low level and tests at high level,
making it impossible to assess
the presence or not of interactions.
(1) Let us suppose to have three factors (temperature, pressure and humidity) and to have
established two levels each: e.g. the temperature at Level 1 has values between -5°C and
0°C and at Level 2 values between +30°C and +35°C. All tests available in the company
with temperatures outside these two ranges should be discarded. Those remaining will be
further reduced by repeating the selection with reference to the pressure and to the humi-
dity, so at the end would remain little or nothing !
These slides can not be reproduced or distributed without permission in writing from the author. 294
4.6.2. Multiple Regression: fundamentals

To assess the influence of each variable xi on the investigated


phenomenon, we build the linear model:

y’ = f(xi) = a0 + a1 x1 + a2 x2 + … + an xn
after optimizing the coefficients ai
using the least squares method.
As in the Experimental Design,
here too, the variables significance is quantified
(by means of Fisher’s F distribution)
and the “corrected” percentages of contribution
are an often automatically provided output.

There are also software with nonlinear models, however, the


constraint of linearity can be largely overcome by introducing
additional variables such as:
x 5 = x 32 or x9 = x7 / x8 etc..
These slides can not be reproduced or distributed without permission in writing from the author. 295
4.6. Multiple Regression

4.6.3
”Dummy” variables

These slides can not be reproduced or distributed without permission in writing from the author. 296
4.6.3. Multiple Regression: “dummy” variables

Although the MULTIPLE REGRESSION


is mainly oriented
to the elaboration of mathematical variables,
it is possible use qualitative variables too
(the ones that are usually called “attribute“).

Examples of qualitative variables:


 different types of vents;
 set of design alternatives related to each other;
 different suppliers;
 etc..

These slides can not be reproduced or distributed without permission in writing from the author. 297
4.6.3. Multiple Regression: “dummy” variables

 In these cases, it would be wrong to assign (to the va-


riables) progressive integer values such as 1, 2, 3, …
as if they were levels, because, in practice, the impor-
tance of the variables would be measured by the ratio
between these numbers: which does not exist at all
in the physical phenomenon;

 instead it is proper to use only the values 0 and 1, indi-


cating the presence (1) or the absence (0) of a variant
compared at the basic solution (0). If the alternatives
are only two, it is easy, but if they are more than 2 ?!?

These slides can not be reproduced or distributed without permission in writing from the author. 298
For example, we want to analyze the effect of different weekdays (from Monday
to Friday) on the defects of a certain production process.
If we had coded weekdays with a sequence of integers (for example in the variable x2)
as follows:
Monday = 1
Tuesday = 2
Wednesday = 3 y’ = f(xi) = a0 + a1 x1 + a2 x2 + … + an xn
Thursday = 4
Friday = 5
whatever the value that the system assigns to a2, the effects of Tuesday would always
be twice than those of Monday and those of Thursday would always be twice than
those of Tuesday and so on. This is meaningless in the real world, where, at most, one
could expect abnormal results in the first (Monday) or in the last (Friday) working day !
If we use several columns, in this case 5-1 = 4: x2, x3, x4, x5, we can use the
following code:
x2 x3 x4 x5
Monday = 0 0 0 0 a2 x2 + a 3 x3 + a 4 x4 + a 5 x5
Tuesday = 1 0 0 0
Wednesday = 0 1 0 0
Thursday = 0 0 1 0 y’ = f(xi) = a0 + a1 x1 + a2 x2 + … + an xn
Friday = 0 0 0 1
we see that the effect of Monday (taken as the reference) is expressed only by the
value of a0 because all its xi, always equal to zero, cancel any effect of coefficients
ai; while the effect of Tuesday is expressed (only) by the value of a2, and so on.
These slides can not be reproduced or distributed without permission in writing from the author. 299
With reference to the last coding:
x2 x3 x4 x5
Monday = 0 0 0 0
Tuesday = 1 0 0 0
Wednesday = 0 1 0 0
Thursday = 0 0 1 0
Friday = 0 0 0 1
it remains to clear up how to interpret the significance of each xi.

Although quite obvious, it is worth to stress that, for the software, the variables x2,
x3, x4, x5 are like all other variables: only the User knows that, in fact, they represent
the qualitative variable "weekday" with 5 levels. At this point, we can say that:
 if all four variables (from x2 to x5) are significant, it (obviously) means that the production
process generates different amounts of defective in each weekday;
 if all four variables (from x2 to x5) are not significant, it (obviously) means that there is no
significant difference in the production of defective units on different weekdays;
 if, for example, only the variable x4 is significant (and the others are not), this means that on
Thursday (which is coded with a value equal to 1 in the third column, x4) the production is
significantly different (= with a greater or lesser percentage of defectives) than that on Mon-
day (assumed as reference); however, the production of Monday is no different from those
of Tuesday, Wednesday and Friday and then only Thursday's production is different;
 similarly, if, for example, only the variables x2 e x5 are significant, it means that the produc-
tion on Tuesday and on Friday is different from those of other days and so on.
These slides can not be reproduced or distributed without permission in writing from the author. 300
4.6.3. Multiple Regression: “dummy” variables

The above example (of "the days of the week") was targeted for educatio-
nal purposes. In a company, if we really wanted to investigate whether the
percentage of defectives varies with the days of the week, almost certainly
would be more effective focusing solely on three types of days:
 Monday: production startup;
 midweek (Tuesday/Wednesday/Thursday): current production;
 Friday: the end of production and closing.

Of course, in this case, it is appropriate to refer to the midweek day and


then we will have:
x2 x3
midweek = 0 0
Monday = 1 0
Friday = 0 1

These slides can not be reproduced or distributed without permission in writing from the author. 301
4.6.3. Multiple Regression: “dummy” variables

Qualitative variables are to be transformed


into dummy variables.

Everything is very simple, once built the matrix below, which


consists of all 0 except for values 1 in diagonal from the second row.
Number
Numeroofcolonne
columns ==>→ 1 2 3 4 5 6 7
Numero
Number livelli→==>
of levels 2 3 4 5 6 7 8
==>

 0 0 0 0 0 0 0
2 1 0 0 0 0 0 0
3 0 1 0 0 0 0 0
4 0 0 1 0 0 0 0
5 0 0 0 1 0 0 0
6 0 0 0 0 1 0 0
7 0 0 0 0 0 1 0
8 0 0 0 0 0 0 1

Generally, for a qualitative variable with n levels,


are used n-1 dummy variables,
These slides can not be reproduced or distributed without permission in writing from the author. 302
4.6.3. Multiple Regression: “dummy” variables
Example of dummy variables
The variable "Suppliers" has 4 levels
corresponding to 4 suppliers: A B C D.
Numeroofcolonne
Number →
columns ==> 1 2 3 4 5 6 7 DUMMY VARIABLE
Numero
Number livelli→==>
of levels 2 3 4 5 6 7 8
Meaning dummy sub-variables
==>


2
0
1
0
0
0
0
0
0
0
0
0
0
0
0
x4 x5 x6
3 0 1 0 0 0 0 0 Supplier A 0 0 0
4 0 0 1 0 0 0 0
5 0 0 0 1 0 0 0 Supplier B 1 0 0
6 0 0 0 0 1 0 0
7 0 0 0 0 0 1 0 Supplier C 0 1 0
0 0 0 0 0 0 1
8 Supplier D 0 0 1

T E S T V A R I A B L E S
N° of Experim.
other variables SUPPLIERS other variables RESULTS
test x1 x2 x3 x4 x5 x6 xn-1 xn xn+1 xn+2
1 0 0 0 y1
2 0 0 0 y2
3 1 0 0 y3
4 1 0 0 y4
m-2 0 1 0 ym-2
m-1 0 0 1 ym-1
m 0 0 1 ym

These slides can not be reproduced or distributed without permission in writing from the author. 303
4.6.3. Multiple Regression: “dummy” variables

Minitab Help: dummy variabiles

These slides can not be reproduced or distributed without permission in writing from the author. 304
4.6. Multiple Regression

4.6.4
Summary
and comparison with DOE

These slides can not be reproduced or distributed without permission in writing from the author. 305
4.6.4. Multiple Regression: summary and comparison with DOE
Benefits:
1. use of data already available in the company, with
occasional need for additional testing because the system
does not require the use of levels, but it directly operates
on the physical values of variables;
2. easy and quick application.

Limits:
1. the number of available tests must be at least from 4
to 5 times the number of investigated variables;
2. except in very special cases, it is impossible to detect in-
teractions between variables (because we are working on
the continuous and not by levels and therefore it is not
possible to cross the effects of high and low levels of the
involved variables).

Applications:
1. approach is often used directly;
2. but the approach is also commonly used as a prelimi-
nary step before starting an Experimental Design.
These slides can not be reproduced or distributed without permission in writing from the author. 306
4. Main statistical tools to prevent failure and ensure reliability

4.7
EFFECTIVE APPROACH
TO A COMPLEX PROBLEM

These slides can not be reproduced or distributed without permission in writing from the author. 307
4.7. Effective approach to complex problems

Quality
CONTINUOUS
PROBLEM DEFINITION Function CUSTOMERS
IMPROVEMENT
Deployment

COLLECTION OF AVAILABLE INFORMATION


AND CURRENT SITUATION ANALYSIS
DATA ANALYSIS
- Capability studies
- Control Chart yes
Is it available a system that is already operating ?
- Multi-Vari Chart and/or Com-
(e.g. a production process)
ponents Search [Shainin ]
- Multiple Regression
no

Solution no
found?
EXPERIMENTAL DESIGN

yes
PROBLEM ANALYSIS

DEFINITION OF
THE DESIGN OF EXPERIMENTS
(D.O.E.)

CARRYING OUT EXPERIMENTS

SOLUTION OF
ANALYSIS OF RESULTS
(AN.O.VA. + AN.O.M.)

COMPLEX PROBLEMS:
general pattern
VERIFICATION EXPERIMENTS

These slides can not be reproduced or distributed without permission in writing from the author. 308
4.7. Effective approach to complex problems
 Classical statistics infers any information solely from samples examination:
 this often involves samples of industrially unacceptable size.
 We must then try to use all prior knowledge, taking them into account when planning experiments, in
order to contain the total amount of experimentation within industrially acceptable limits :
 this implies a bet on the validity of prior knowledge, which involves risks hard to be quantified in a
precise way.

BET
CLASSICAL
STATISTICS + to reduce the
number of  VERIFICATION
TESTS
experiments

Risk Unique
Confidence level difficult to assess lifebelt

 It is true that the reduction in testing is paid with a loss of credibility of the results, but the additio-
nal risks are fully explained in the definition of the terms of the bet, thus maintaining the rationality
of the approach.
 Moreover, verification experiments ensures consistent results, regardless of approximations/inac-
curacies used to achieve them.

These slides can not be reproduced or distributed without permission in writing from the author. 309
4. Main statistical tools to prevent failure and ensure reliability

4.8
TECHNICAL MEMO
AND LESSONS LEARNED

These slides can not be reproduced or distributed without permission in writing from the author. 310
4.8. Technical Memo and Lessons Learned

TECHNICAL MEMO means


“hoard the past failures to prevent
they happen again in the future”
Technical Memo are the standards of design and experimentation
that brings together in an organized way, all the results of problem
solving studies, so that, since their acquisition, they become a com-
pany's asset immediately available for all future applications.
The problems, whose solutions are the Lessons Learned to be inclu-
ded in the Technical Memo, can occur both in the products already
commercialized that during the development process of a new pro-
duct.

These slides can not be reproduced or distributed without permission in writing from the author. 311
4.8. Technical Memo and Lessons Learned

from FMEA to FMEA


We repeat that every prediction/prevention activity
originates from FMEA.

After these activities have been concluded, it’s always the FMEA
which gets and organizes the results, converting them into stan-
dard prescriptions (Technical Memo).

It doesn’t make any sense to repeat demanding investigations (ex-


perimental or analytical), when it is possible to use (or at least to ar-
range) the results of similar studies already developed.

The philosophy of 2nd generation FMEA is that Risk Matrix sum-


marizes the results of all prediction/prevention activities so far
developed.

These slides can not be reproduced or distributed without permission in writing from the author. 312
4.8. Technical Memo and Lessons Learned

TECHNICAL MEMO helps to expand our growing expertise,


leveraging on the experience of those who came before us:
it must be internalized and, where appropriate,
updated and expanded.
It is a virtuous spiral that resembles a relay race.

Thinking of starting each time from the beginning,


covering alone the entire race course,
would clearly be a loser choice !
These slides can not be reproduced or distributed without permission in writing from the author. 313
4. Main statistical tools to prevent failure and ensure reliability

4.9
SUMMARY
OF PROACTIVE ACTIONS

These slides can not be reproduced or distributed without permission in writing from the author. 314
4.9. Summary of proactive actions

 Standards (from Technical Memo)


 Proactive actions
1. FMEA
2. Parts Count Method
3. W.C.A. = Worst Case Analysis and computerized mathematical
models, taking into account the process capability
4. F.T.A. = Fault Tree Analysis
5. Multiple Regression in all
6. Experimental Design (D.O.E.) and Robust Design
6
instruments,
 Testing and production start-up not 3 or 20 !
(topics discussed later)
1. Success Run
2. Reliability Growth Testing: both for useful life (design) and for
infant mortality (production process)
 Lessons Learned (immediately after the launch of a new product)
These slides can not be reproduced or distributed without permission in writing from the author. 315
4.9. Summary of proactive actions

 To be used practically always:


• FMEA (Failure Mode and Effects Analysis);
• Parts Count Method (rough but fast and cheap verification);
• Worst Case Analysis (as a sizing guideline).

 To be used whenever it is necessary/useful:


• Worst Case Analysis, especially in the evaluation of tolerance chains or
of the relationships between of the characteristics variability and the ex-
pected functions distributions;
• FTA (Fault Tree Analysis), especially to verify the reliability of the safety
components: for its particularly high values in these cases, reliability could
not be verified experimentally.

 To be used to address and rationally solve problems:


• Multiple Regression: cheap, to be used in cases presumably without
interactions;
• Experimental Design: expensive, in most cases it is used without taking
into account noise factors.
These slides can not be reproduced or distributed without permission in writing from the author. 316
4. Strumenti di misura e di prevenzione per l’Affidabilità

4.10
PRELIMINARY
RELIABILITY PREDICTIONS
FOR A NEW PRODUCT
(“Reliability Plans”
using a Bayesian approach)

These slides can not be reproduced or distributed without permission in writing from the author. 317
4.10. Preliminary reliability predictions for a new product
A classical analytical method for the preliminary estimate of the on field failure fre-
quency at the beginning of the development of a new subsystem/component consists
of the following steps.
1. Collection, based on data from the field (Dealers and Customers Office),
of the failure frequencies of subsystems/components already on the
market, at the elementary level of component/defect.
2. “Virtual” construction of the archetype, as close as possible to the de-
sign (in progress) of the subsystem/component in development.
3. List of all changes (improvements and/or cost containment) provided at
the design and process level for the subsystem/component under deve-
lopment, with reference to the archetype.
4. “Subjective” prediction by Experts (designers, technologists, etc.) of the
failure frequency for each elementary item (component/defect), based
on the collected data and on all considered changes.
5. Prediction of the failure frequency for the whole subsystem in develop-
ment through the sum of the predicted failure frequencies for the indivi-
dual elementary items and subsequent comparison with the desired tar-
get.
6. If the result is unsatisfactory, repeat the above steps, starting from step
3, after changing some variants (with reference to the archetype), or
have introduced new ones.
These slides can not be reproduced or distributed without permission in writing from the author. 318
4.10. Preliminary reliability predictions for a new product
Logical structure of analytic prediction module (fictitious example, only indicative)
Examined system XXXXXXXXXXXXXXXXXXXX

Reliability target = maximum failure frequency in the warranty period = 0,35 repairs / 100 units

A R C H E T Y P E NEW PRODUCT IN DEVELOPMENT


List of all changes
Failure frequencies observed Expected failure frequencies
Subsystems (both on the design and
on the archetype after applying changes
or Kind of failure [repairs / 100 units]
on the process) planned for [repairs / 100 units]
components the new product
individual kind of failure totals in development individual kind of failure totals

Subsystem A 0,09 0,03


Tear on the body 0,01 0,00
Loosening/coming adrift 0,01 0,00
Loose/inefficient fastening 0,04 0,02
Broken fastening 0,01 0,00
Excessive play 0,02 0,01

Details are not exposed


Subsystem B 0,05 0,03
Excessive play 0,02 0,01
Seizure/stiffness 0,01 0,01
Noise 0,02 0,01

Subsystem C 0,06 0,03


Failure 0,00 0,00
Fraying 0,01 0,01
Loose/inefficient fastening 0,02 0,01
Excessive play 0,02 0,01
Seizure/stiffness 0,01 0,00

Subsystem D 0,23 0,12


Loose/inefficient fastening 0,05 0,03
Damaged/stripped fastening 0,01 0,00
Deformations 0,00 0,00
Excessive play 0,14 0,08
Early wear 0,02 0,01
Seizure/stiffness 0,01 0,00
Locking 0,00 0,00

Subsystem E 0,05 0,02


Lubrication missing/insufficient 0,01 0,00
Noise 0,03 0,02
Drift during motion 0,01 0,00

Subsystem F - 0,14 0,14 0,10 0,10

Grand total [repairs / 100 units] 0,62 0,33

These slides can not be reproduced or distributed without permission in writing from the author. 319
4.10. Preliminary reliability predictions for a new product

This method has:

• the advantage of being applied in the choice of initial design


features, with great effectiveness of preventive actions;
• the obvious disadvantage of using subjective assumptions
and therefore it requires a subsequent experimental verifi-
cation;
• insidious disadvantage (why not so obvious) of being sy-
stematically optimistic, since it can not take into account
that implemented changes reduce many of the current failure
modes (reported by Dealers), but they can also introduce
some new failure mode, difficult to identify at this stage !

These slides can not be reproduced or distributed without permission in writing from the author. 320
4.10. Preliminary reliability predictions for a new product
We can provide a value for the statistical uncertainty, s, in the fol-
lowing way:
1. The uncertainty, in the predicted failure frequency, is objectively
related to the predictable difference between the achievable target
(= what the experts have predicted the new model in development could reach)
and the result (known) obtained on the archetype, where:
 the achievable target is understood as the best result theoretically achieva-
ble and therefore it is considered as the lower limit of the confidence
interval for the failure frequency;
 the result of the archetype is considered, at least at first approximation, as
the worst achievable result (only worsened with cost reductions) and there-
fore assumed as the upper limit of the confidence interval for the failure
frequency
The basic value of Bayesian uncertainty, sbasic is given by this dif-
ference.
2. We acquire the subjective opinion of the Experts (Design and Techno-
logies) about the credibility of the estimate of the achievable target
(item by item) and, on this basis, we modify the previous basic value of
Bayesian uncertainty, sbasic, defining the value staken, to be assumed.
Since employing subjective opinions of Experts, this method is called Bayesian:
from Thomas Bayes (1702-1761), author of a famous theorem on conditional probability
which was published posthumously in 1763.
These slides can not be reproduced or distributed without permission in writing from the author.321
4.10. Preliminary reliability predictions for a new product
For each achievable target the Designer is asked on what his prediction is based and what
is the resulting level of confidence that we can attribute to it.

Adjustments of s
Kind of (decreasing from the initial
Description adopted value sbasic equal to the
forecast
difference between the result of the
archetype and the achievable target)

Sure Predicted targets are certainly achievable: or even already achieved. staken = 0,25 . sbasic

Predictions should be very reliable, because we actually have to identify,


develop and test articulate modifications, but, in all comparable historical
Plain cases, we have almost always achieved the expected benefits and it is highly
staken = 0,50 . sbasic
unlikely that the changes lead to new failure modes.

Innovative/new solutions, on which we do not have much experience, but


supported by a series of studies (FMEA, FTA, WCA, etc.) and/or
Uncertain experimental tests (Multiple Regression, Experimental Design, etc.) already
staken = 0,75 . sbasic
available, which are considered probative.
Innovative/new solutions, on which we do not have much experience, and that
Risky we do not feel sufficiently supported by experiments or studies already staken = sbasic
completed.

In the absence of statements, by default,


it is assumed a kind of forecast of level 3, uncertain.
These slides can not be reproduced or distributed without permission in writing from the author. 322
4.10. Preliminary reliability predictions for a new product

Beta distribution
(not covered in this course)

Assunption of
“uncertain prediction”

mean
s = 0,75 sbasic

Failure
Bayesian confidence interval frequency
Analytical for a “risky” estimate Result of
forecast sbasic archetype

LOWER UPPER
LIMIT LIMIT
For new suppliers or hard cost
reductions, the Upper Limit can
grow (= move right)

These slides can not be reproduced or distributed without permission in writing from the author. 323
4.10. Preliminary reliability predictions for a new product

Statistically summing the Beta distributions of all individual


failures (component + defect), we obtain the failure probability
distribution for the whole investigated subsystem.

Statistically summing distributions of all the subsystems, we


obtain the distribution (which may no longer be a Beta distribution!)
for the whole vehicle.

These Bayesian distributions are called “prior” (”a priori”).

The combination of prior distributions with the appropriate pro-


cessing of experimental results, will provide the “posterior”
(”a posteriori”) distributions.

For each statistical distribution so defined,


we can calculate the “Bayesian” confidence interval.
These slides can not be reproduced or distributed without permission in writing from the author. 324
A R C H E T Y P E NEW PRODUCT IN DEVELOPMENTNEW PRODUCT IN DEVELOPMENT

Failure frequencies observed Expected failure frequencies


Subsystems
on the archetype after applying changes
or Kind of failure [repairs / 100 units] [repairs / 100 units]
components
individual kind of failure totals individual kind of failure totals

Subsystem A 0,09 0,03


Tear on the body 0,01 0,00
Loosening/coming adrift 0,01 0,00
Loose/inefficient fastening 0,04 0,02
Broken fastening 0,01 0,00
Excessive play 0,02 0,01

Subsystem B 0,05 0,03


Excessive play 0,02 0,01

CHANGES
Seizure/stiffness 0,01 0,01
Noise 0,02 0,01

Subsystem C 0,06 0,03


Failure 0,00 0,00
Fraying 0,01 0,01
Loose/inefficient fastening 0,02 0,01
Excessive play 0,02 0,01
Seizure/stiffness 0,01 0,00

Subsystem D 0,23 0,12


Loose/inefficient fastening 0,05 0,03
Damaged/stripped fastening 0,01 0,00
Deformations 0,00 0,00
Excessive play 0,14 0,08
Early wear 0,02 0,01
Seizure/stiffness 0,01 0,00
Locking 0,00 0,00

Subsystem E
Lubrication missing/insufficient 0,01
0,05
Value, 0,00
0,02

Noise
Drift during motion
0,03
0,01
after all, 0,02
0,00

Subsystem F - 0,14 0,14


more likely 0,10 0,10

Grand total [repairs / 100 units] 0,62 0,33

RISCHIO STIMA:
ESTIMATE RISK Valore Atteso
Expected =
value 0,411 C.L. 75% = 0,425  0,33

Confidence interval Upper Limit


Deterministic sum (optimistic)
These slides can not be reproduced or distributed without permission in writing from the author. 325
4.10. Preliminary reliability predictions for a new product

SUMMARY BLOCK DIAGRAM

Archetype field
data Analytical
Bayesian Prior
punctual Assessment
estimates of distribution
estimates of of Bayesian
failure of failure
failure uncertainty
frequencies frequencies
frequencies
Compari-
Posterior sons with

+ Highlighting of
distribution
of failure
frequencies
pre-assi-
gned
reliability
targets
the most
Estimates
important Conducting
Experimen- of failure
Changes criticalities of
tal tests frequencies
planned for and, if useful, experimen-
planning by test
definition tal tests
the new results
of additional
product in tests(1)
development

(1) E.g.: climatic chambers, specific proving grounds, etc..


These slides can not be reproduced or distributed without permission in writing from the author. 326
4.10. Preliminary reliability predictions for a new product

 If the initial forecast (prior) is too optimistic, as almost


always happens if you do not use the Bayesian approach, the
experimental results will be inconsistent with that forecast
and all the burden of proof will fall on the experimental tests,
resulting in higher costs and times;
 Instead, if the initial forecast (prior) is realistic, it is likely that
it is not contradicted by the experimental results, which will be
merged with the forecast (posterior), providing a credible re-
sult from two results individually not very sure (but cheap!):
the first because largely subjective and the second being ba-
sed on samples rather small.

These slides can not be reproduced or distributed without permission in writing from the author. 327

You might also like