
UNIVERSIDAD AUTÓNOMA DE QUERÉTARO

GRADUATE SCHOOL OF ENGINEERING

M. Sc. DESIGN OF EXPERIMENTS
Prof. Eric L. Huerta, Eng. D.

FINAL REPORT

Rafael Ortiz Hernández

December 11th, 2021
FOREWORD

This is my Final Report assignment for the Design of Experiments class. It is written as a journal, with annotations on each important subject learnt in each class. The subjects covered in class relate to statistical tools and concepts used in experiments in order to reach conclusions.

Some important subjects seen in class are hypothesis testing (null and alternate hypotheses), Analysis of Variance (ANOVA), 2ᵏ and 3ᵏ factorial experiments, Central Composite Design, and Response Surfaces with their analytical tools. They appear in chronological order and are marked in the Report's Index. These notes are based on paper notes and short video recordings.

At the end of this Report there are several Appendices:

The first appendix contains all the assignments posted on the VirtualUAQ page of the class, which consisted of small essays and some ANOVA exercises.

Appendix B contains the reports of all the practices performed as homework. Some practices are ANOVA problems from books and the web; others are factorial experiments using paper helicopter designs based on G. E. P. Box's classical experiment and NASA's Jet Propulsion Laboratory, plus some Central Composite Design and Response Surface practices using the paper helicopter and a virtual catapult.

Appendices C and E are the reports for Exam 1 and Exam 2, where we show our work for each; I also include the corrections made on some of the problems.

Appendix D is the final experiment slideshow; this experiment is alluded to but not included in this report. As instructed by Prof. Huerta, the experiment's development and results can be seen in my Group Report, which is not included here.

Special care was taken to make this report readable and coherent, and all English spelling and grammar was reviewed. All external material was sourced and referenced to the best of my abilities and cited as such.

This was one of my favorite classes: apart from using new software such as R and RStudio, it also made us think about some philosophical concepts such as the nature of science, experiments, errors, measurements and statistics.

Thank you for your teaching, Prof. Huerta; may you keep enriching the lives of other students by your teaching!

– Rafael Ortiz Hernández


Postgraduate Student
M.Sc. Geotechnical Engineering
TABLE OF CONTENTS

CLASS 1 – 20210730
  The VirtualUAQ page
  English test
  Bibliography
  Statistical Significance
  The Black Swan
  Statistical effects
  The Law of Large Numbers
  Statistical Hypothesis
  Homework Assignments
CLASS 2 – 20210813
  Free and Open Software
  R Software
  MIT, Harvard and Elsevier
  Web of Science and Scopus
  Data Science
  RStudio
CLASS 3 – 20210827
  Microsoft R Open
  What is ANOVA?
  One-Way ANOVA
  Teaching Problem
  Applying ANOVA
  What does the ANOVA table mean?
  Alternate hypothesis verification
  What is a significant data point?
  How many data points would we need to ensure we do not have a false positive?
CLASS 4 – 20210903
  Measurements
  Mock-up and Prototype
  Fundamental properties of measurement
  Margin of errors and ANOVA
  Probability Distributions
  Graphical Analysis of ANOVA
  ANOVA Assumptions
  Bartlett test
  Fligner test
  The ANOVA guide
  Homework Assignments
  ANOVA Problems Procedure
CLASS 5 – 20210910
  Box plot
  Statistical model
  Uncertainty
  ANOVA and Student's t-test
  ANOVA and unbalanced data
  Normality
  Residuals vs Fitted plot
  Normal Q-Q plot
  Scale-Location plot
  Cook's distance plot
  Constant Leverage: Residuals vs Factor Levels plot
  Cook's distance vs Leverage plot
  Parameters of Normality
  What is the purpose of a research graduate program?
EXAM 1 – 20210924
  Question 1
  Question 2
  Question 3
  Question 4
CLASS 6 – 20211001
  What is science? How do we distinguish science from non-science?
  Falsifiability
  The Catapult Problem as a Factorial Problem
  The Half-normal plot
  ANOVA Results
  The 2⁵ Catapult experiment
  The Paper Helicopter experiment
CLASS 7 – 20211008
  3ᵏ Factorial Experiments
  What is the meaning of a relationship between factors?
  Reply and Repeat
  Scientific focus
  Scientific tactics
  The Factor vs Mean plot
  The 3ᵏ Paper Helicopter Experiment
  Factorial Design
  Phyphox
  The experiment assignment
  What should I see in my experiment if I made a mistake?
CLASS 8 – 20211015
  Response Surface
  Central Composite Design
  Choosing the right variables
  The Paper Helicopter Central Composite Design Experiment
  The experiment proposal
CLASS 9 – 20211022
CLASS 10 – 20211029
  Response Surfaces
  What does lack of fit mean and how do you interpret that line?
  Stationary points
  Eigenvalues and eigenvectors
  Model Improvement
  The contour plot
  The 3D contour plot
  Central Composite Designs
  The Paper Helicopter Composite Design
CLASS 11 – 20211105
  Replies
  Comma Separated Value Files in R
  Lack of fit
  Statistical Models
CLASS 12 – 20211112
  Uncertainty, Accuracy and Precision
  Linearity
  Classification as a measurement
  CENAM's GUM
  Non-linear effects
  ANOVA Results
  Theory
  Central Composite Design Question
  Remaining tasks
  Important pending dates
CLASS 13 – 20211119
EXAM 2 – 20211126
  Problem 1
  Problem 2
REFERENCES
APPENDIX A: Assignments from VirtualUAQ
  Assignment 1: Academic Integrity
  Assignment 2: Upload your signed honor code declaration
  Assignment 3: Install R and RStudio
  Assignment 4: Training Example
  Assignment 5: TI-2.1 FDCER
  Assignment 6: TI-2.2 R and RStudio
APPENDIX B: Assignments from Class
  Assignment 1: Read Gutierrez Chapter 2
  Assignment 2: Do some ANOVA problems
  Assignment 3: The 2ᵏ and 3ᵏ paper helicopter experiment
    2ᵏ Experiment
    3ᵏ Experiment
  Assignment 4: The Paper Helicopter Response Surface
  Assignment 5: The Paper Helicopter Central Composite Design
    Central Composite Circumscribed
    Central Composite Inscribed
    Central Composite Face-centered
  Assignment 6: The Virtual Catapult
    Factorial Design
    Response Surface
APPENDIX C: Revised Exam 1
  Question 1
  Question 2
  Question 3
  Question 4
APPENDIX D: Experiment Proposal Slideshow
APPENDIX E: Exam 2
  Problem 1
    The hypotheses that can be verified
    The complete ANOVA tables
    The significant factors
    The analytical evaluation of the ANOVA assumptions x ∈ N(µ, σ)
    The linear models and their evaluation based on the corresponding criteria
    A report of the analysis, including a flow diagram, the R code, the results and the conclusions
  Problem 2
    Table of factors and levels
    The R code to generate the experimental table
    The experiment table
    A complete report, including a flow diagram, the R code and the rationale for the selection of the factors and levels
CLASS 1 – 20210730

The VirtualUAQ page

We started the class by logging into the VirtualUAQ website and entering the class page, where we will find the subjects, assignments and resources for the course. The URL for the site is: http://uaqedvirtual.uaq.mx/campusvirtual/ingenieria/course/view.php?id=488.

English test

My English test result (EML01 – Examen para cumplir con el requisito de manejo de la lengua de los programas educativos) for applying to UAQ was -9. I performed the test on May 31st, 2021 with Group 2 (ML 310521 GRUPO 2 POSGRADO FI).

As recommended by Prof. Huerta, I applied for the November test.

Bibliography

Prof. Huerta recommended two main books and one supplemental book for this course:

• Gutiérrez Pulido, H., & Vara Salazar, R. de la (2004). Análisis y diseño de experimentos.
• Box, G. E. P., Hunter, J. S., & Hunter, W. G. (2005). Statistics for experimenters: Design, innovation, and discovery (pp. 235-273). New York, NY, USA: Wiley.
• Montgomery, D. C. (2017). Design and analysis of experiments. John Wiley & Sons.

For the next class Prof. Huerta recommended that we read Chapter 2 from Gutierrez and Chapter 3
from Hunter to understand the t-test.

Statistical Significance

Prof. Huerta asked the class if we had heard the term "Statistical Significance". I googled the term, found an article about it in the Harvard Business Review, and shared it with the class: https://hbr.org/2016/02/a-refresher-on-statistical-significance

He then explained that most data sets can be described with three parameters:

1. μ — a central tendency measurement
2. σ — a dispersion parameter, often referred to as the variance
3. F — the frequency distribution, often represented as a histogram or number table

The Frequency distribution is often based on a known distribution model, such as the normal
distribution curve.

The normal distribution curve has the following properties:

• It is symmetrical.
• The average is at the center of the distribution.
• About 99.7% of its values lie within 3 standard deviations of the mean.

One of the most useful relationships between this distribution and statistical significance is the confidence interval. The most used confidence interval in statistics is 95%, leaving a 2.5% interval at each tail.
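
As a minimal sketch in R, the bounds of that central 95% region of a standard normal distribution can be computed with qnorm:

qnorm(c(0.025, 0.975))  # -1.959964  1.959964, i.e. roughly ±2 standard deviations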

Standard Deviation Curve, with multiple Standard Deviation intervals (UCLA).

If one data point lies outside the distribution, does this data point signify an important event, or is it just an anomaly?

The Black Swan

Prof. Huerta related to us the tale of the black swan, an animal that English taxonomists considered impossible because all English swans were white. When they found a black swan, their entire taxonomy model had to change due to this event.

Eric Marsden wrote an excellent presentation about black swans, where he relates such events to statistics:

Black Swans in “Thin-tailed” probability distributions

• The “tail” of a probability distribution is the part which is far away from the mean
• Typical examples of thin-tailed probability distributions:
  o normal (Gaussian) distribution
  o exponential distribution
• For a normal distribution:
  o the probability of an observation which is more than 6 standard deviations from the mean (a “six-σ event”) is 9.9×10⁻¹⁰
  o a 10-σ event is less frequent than one day in the life of the universe

Black Swans in “Fat-tailed” probability distributions

• Extreme events carry significant weight
  o fairly high probability of something “unusual” occurring
• More formally, the tail of the distribution follows a power law:
  Pr[X > x] ∼ x^(−α) as x → ∞, α > 0
  where ∼ means “is asymptotically similar to”.
• Example: the Pareto distribution
  o designed by economist V. Pareto to model the distribution of wealth in capitalist societies
  o a good fit for the severity of large losses in worker compensation, the level of damage from forest fires, the energy released by earthquakes…
• Example: the oil shock in 1973 was a 37-sigma event
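
The six-σ figure quoted above is easy to reproduce in R; a one-line check (upper tail of a standard normal):

pnorm(6, lower.tail = FALSE)  # 9.865876e-10, the ~9.9×10⁻¹⁰ cited above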

Statistical effects

When do we say we have a significant effect?

In the example regarding the distribution curve, if the extreme point really belongs to another distinct category, then the blue curve represents a separate distribution with its own confidence interval.

Finding a pattern of behavior is a significant event. One data point is not enough to reach this conclusion, as it might be an outlier, an atypical data point. We need more data to identify it.

We perform experiments to obtain these significant events, in order to confirm or deny a hypothesis, bringing new knowledge to the field or discipline in case the confirmation is successful.

The x values are the probable ranges or treatments that might cause significant changes in the result.

Why are some vaccines more efficient than others?

Any hypothesis must have a theoretical foundation based on previous data and publications.

What is a doubt and what is a research question? Are they the same?

A research question is an informed doubt: it is born out of knowledge. We know enough of a subject that, based on the available knowledge, we have no ordinary doubts, and yet we have only a possible answer to a question that is still open.

We have the State of the Art and the State of the Technique on which to base our research questions.

To perform research we must start with research questions, for which we formulate a hypothesis: a possible answer to the question, with the characteristics we stated above. With it we design an experiment, which requires:

1. Previous information, the State of the Art
2. The variables and their relationships

The experiment must answer the research questions, and in answering them it will provide new knowledge and insights.

The Law of Large Numbers

The law of large numbers, in statistics, is the theorem that, as the number of identically distributed, randomly generated variables increases, their sample mean (average) approaches their theoretical mean.
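
A small simulation in R illustrates the theorem (my own sketch, assuming normal variables with theoretical mean 5 and arbitrary sample sizes):

set.seed(42)
sapply(c(10, 100, 1000, 100000), function(n) mean(rnorm(n, mean = 5)))
# The sample means approach the theoretical mean of 5 as n grows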

Statistical Hypothesis

The contingency table states that the response variable y is proportional to the effect of the control variable x:

y ∝ x

The pair of statistical hypotheses:

Ho: μa = μb
Ha: μa ≠ μb

If there is no significant difference between the means of a and b, the null hypothesis holds.

Since we don't have enough resources to run a large number of experiments, we must use probability to assess whether it is true or false.

              | Ho is true            | Ho is false
Accept Ho (A) | Correct decision      | Type II Error
              | Confidence level      | False negative
              | Probability p = 1 − α | Probability p = β
Reject Ho (R) | Type I Error          | Correct decision
              | Significance level    | Power of the test
              | False positive        |
              | Probability p = α     | Probability p = 1 − β

We want a high confidence level, declaring a result significant only when its p-value falls in the rejection region R.

α is the probability of making a Type I error: rejecting the null hypothesis when it is actually true.
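
As an illustrative sketch (a simulation I add here, not part of the class material): if Ho is true and we test at α = 0.05, ANOVA should commit a Type I error about 5% of the time.

set.seed(1)
p <- replicate(10000, {
  g <- gl(3, 5)   # 3 groups of 5 observations
  y <- rnorm(15)  # pure noise: Ho is true, there is no real effect
  summary(aov(y ~ g))[[1]][["Pr(>F)"]][1]
})
mean(p < 0.05)    # ≈ 0.05, the empirical Type I error rate α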

Homework Assignments

Read about the following concepts:

• The Law of Large Numbers
• The foundations of statistical hypotheses
• Statistical significance (Chapters 1, 2 and 3 of Hunter's book)

CLASS 2 – 20210813

Originally Class 2 was supposed to be on August 6th, but due to external issues the class had to be moved to August 13th.

Free and Open Software

Prof. Huerta asked what the similarities between free software and open software are.

Classmate Ricardo Mendez stated that while both do the same basic thing, the difference is in the interface with the user, such as specialty tools.

Classmate Carlos said that open software allows other users to modify its code, while free software does not necessarily allow it. Free software developers usually have some functions that can be unlocked if one pays for them.

I mentioned that open software does not necessarily mean free; it just means that you can modify its source code once it is paid for.

Prof. Huerta mentioned that in the academic research field software is fundamental. There is no scientific field that does not use software for its analysis. Knowing software (e.g., ANSYS) and computer languages (e.g., Python) are among the skills needed to perform analysis.

Free and open software are not the same. Free software might be a program that at some point needs to be purchased to be fully used, such as freeware and freemium. Open software refers to licenses of use and modification such as GPL1, GPL2, GPL3, MIT, Chicago, etc. Under these licenses, intellectual products can be shared with others under certain conditions.

We must build knowledge together, but retain licensing rights.

Free software also has the problem that it rarely comes with maintenance or technical support.

R Software

R is a programming language and free software environment for statistical computing and graphics. It is widely used among statisticians and data miners for developing statistical software and data analysis (Wikipedia, 2021).

R is open software where many people, usually scientists from many fields, collaborate in making packages and expanding the use of R. Genomics, finance and econometrics, and clinical trials are some of these fields. Some countries won't even accept a statistical analysis if it isn't done in R, as R has no "black box" routines in it.

In R, each user is responsible for the use of the software and its results. One of the characteristics of open software is that its authors are not liable for the use made of it by its users.

One can also create a module or package for R and share it with others, collaborating to create new information.

The version of R used was the 4.1.1 release, downloaded from the Comprehensive R Archive Network servers at https://cloud.r-project.org/bin/windows/base/R-4.1.1-win.exe.

MIT, Harvard and Elsevier

Science must be open; economic factors must not be a limit for disciplines. Web of Science and Scopus rank journals in quartiles, a metric of scientific impact.

Web of Science and Scopus

Web of Science is a website that provides subscription-based access to multiple databases that provide comprehensive citation data for many different academic disciplines.

Scopus is Elsevier's abstract and citation database launched in 2004. Scopus covers nearly 36,377 titles (22,794 active titles and 13,583 inactive titles) from approximately 11,678 publishers, of which 34,346 are peer-reviewed journals in top-level subject fields: life sciences, social sciences, physical sciences and health sciences. It covers three types of sources: book series, journals, and trade journals.

Data Science

"Data Science" means the scientific study of the creation, validation and transformation of data to
create meaning

RStudio

RStudio is an Integrated Development Environment (IDE) for R, a programming language for statistical computing and graphics (Wikipedia, 2021). It includes a console and a syntax-highlighting editor that supports direct code execution, as well as tools for plotting, history, debugging and workspace management.

We downloaded the RStudio Desktop Open Source License Edition 1.4.1717 version from the RStudio
developer page: https://download1.rstudio.org/desktop/windows/RStudio-1.4.1717.exe

CLASS 3 – 20210827
In the August 27th class we also downloaded Microsoft R Open version 4.0.2, which is a version based on R-4.0.2, released in July 2020. The software was downloaded from Microsoft's R Application Network at https://mran.blob.core.windows.net/install/mro/4.0.2/Windows/microsoft-r-open-4.0.2.exe.

The main difference between R and Microsoft R Open is that R is the current version, while MS R
Open might not work with more modern packages due to incompatibility in the code.

Microsoft R Open

Microsoft provides various open source tools to help people use R. This includes R packages published on CRAN and on GitHub.

Microsoft R Open is Microsoft's distribution of open-source R. The only difference from CRAN R is that it comes bundled with the Intel Math Kernel Libraries (which make vector and matrix operations faster on multi-core machines), and that it uses a static mirror of CRAN so packages don't change from day to day (but you can always use the "checkpoint" package described below to get the latest-and-greatest, if you want).

MRAN is the download repository for Microsoft R Open, and also hosts daily archive snapshots of the entire CRAN system (from 2015 to the present). These snapshots are used for reproducibility by Microsoft R Open, the checkpoint package (see below), and anyone who wants a non-changing CRAN image. (The Rocker docker images are configured to use these static snapshots, for example.)

Base R does not have multi-core or multi-threaded options, so complex mathematical operations can take a great amount of time.

Prof. Huerta asked about these terms; I gave the example of video game engines and the rendering of complex 3D surfaces.

Performance. RAM can overflow and the process stops. Multi-threading helps distribute the task, ensuring the process gets finished and the calculations are completed.

Reproducibility. Since Microsoft keeps a fixed archive of R packages at a certain date, scripts can still work even if a newer package from CRAN has a mistake in it. The disadvantage is that one must always explicitly tell MRO to search for a newer version of a package.

What is ANOVA?

An ANOVA test is a way to find out if survey or experiment results are significant. In other words, it helps you figure out whether you need to reject the null hypothesis or accept the alternate hypothesis. Basically, you're testing groups to see if there's a difference between them.

There are one-way and two-way tests, which refers to the number of independent variables (IVs) in your Analysis of Variance test:

• One-way has one independent variable (with 2 or more levels).
• Two-way has two independent variables (each can have multiple levels).

One-Way ANOVA

A one-way ANOVA is used to compare two or more means from independent (unrelated) groups using the F-distribution. The null hypothesis for the test is that all the means are equal. Therefore, a significant result means that at least two means are unequal.

In practical terms, it is the ratio between the variance between groups and the natural variance of the system.

Teaching Problem

A gray-iron casting product is being molded with a green sand stacking molding process, which is known to produce pin-hole defects. The current yield is 76%. The plant manager wants to increase the plant's yield.

An experiment was conducted to verify yield = f(temp) + ϵ at three different temperatures. The runs' results are shown in Table 1:

Table 1. Yield (%) at three operating temperatures

1127   1178   1204
73.4   74.4   78.9
76.3   74.2   79.3
76.9   72.7   77.9
77.9   73.5   78.9
77.7   71.4   79.5
R data input:
data.set <- data.frame(yield=
c(73.4,76.3,76.9,77.9,77.7,
74.4,74.2,72.7,73.5,71.4,
78.9,79.3,77.9,78.9,79.5),
temp=as.factor(
c(rep('1127',5),
rep('1178',5),
rep('1204',5)
)))

data.set

8
We also defined a model formula relating the response to the factor, for use in later function calls:

fn <- yield~temp

Applying ANOVA

ANOVA in R is very simple:

fit_1 <- aov(fn,data.set)
summary(fit_1)

What does the ANOVA table mean?

The R output for ANOVA:

Analysis of Variance Table

Response: yield
Df Sum Sq Mean Sq F value Pr(>F)
temp 2 80.545 40.273 23.319 7.345e-05 ***
Residuals 12 20.724 1.727
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

One can calculate the variance of the 5 values in the first column, and likewise for columns 2 and 3. One can then calculate a weighted average of these 3 variances, which represents the natural variance of these yields: 1.727.

The Mean Sq column means variance; in every ANOVA table the variance is called Mean Square. It therefore holds the variance of the temp values and the variance of the Residuals. The variance of the error within groups is 1.727. The variance of the comparison between groups is 40.273.

The 40.273 is significant, since it means that the change caused by modifying the operating temperature of the furnace is considerably bigger than the natural variation of the process without any temperature change.

If we divide 40.273 by 1.727 we get 23.319. Note that this is the value found in the F value column. This means that the change brought about by the temperature in the process yield is 23.319 times bigger than the natural variation in the yield of the normal process.
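
As a quick check, the Pr(>F) in the table can be reproduced in R as the upper-tail probability of an F distribution with 2 and 12 degrees of freedom, evaluated at 23.319:

pf(23.319, df1 = 2, df2 = 12, lower.tail = FALSE)  # ≈ 7.345e-05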

Alternate hypothesis verification

Now, to verify the alternate hypothesis, we need to perform Tukey's test on the fitted model:

TukeyHSD(fit_1)

Tukey multiple comparisons of means
    95% family-wise confidence level

Fit: aov(formula = fn, data = data.set)

$temp
diff lwr upr p adj
1178-1127 -3.20 -5.4173783 -0.9826217 0.0060433
1204-1127 2.46 0.2426217 4.6773783 0.0297891
1204-1178 5.66 3.4426217 7.8773783 0.0000517

What does the Tukey test tell us? It compares the means of each pair of groups and gives the difference between the averages, with the lower and upper bounds of the 95% confidence interval of that difference, along with an adjusted p-value.

Each line is a comparison between 2 groups. The confidence interval in line 1 does not include zero, meaning that there is a significant difference between them. Lines 2 and 3 also do not have zero in their confidence intervals, meaning that there are significant differences there as well.

In conclusion: there are significant differences between all the groups, and each temperature produces a different yield.

What is a significant data point?

The easiest method is defining a significance value, in this case 5% or 0.05. All p-values below that threshold are significant.

The 7.345×10⁻⁵ value obtained in the ANOVA test is the probability (p-value) that the observed change is part of the natural variation, i.e., that the change occurs without changing the temperature. "What is the probability that, doing nothing, that change happens?"

How many data points would we need to ensure we do not have a false positive?

A method for confirming that we have enough data points to ensure that we do not have a false positive is the power of the test:

power.anova.test(groups = #, n = #, between.var = #, within.var = #,
                 sig.level = #, power = #)

Where groups is the number of groups (3), n is the number of data points in each group (5), between.var is the variance between groups (Mean Sq of temp = 40.273), within.var is the variance within each group (Mean Sq of Residuals = 1.727), sig.level is the significance level (0.05) and power is left as NULL to get a value.

One needs the following values from the ANOVA table:

Analysis of Variance Table

Response: yield
Df Sum Sq Mean Sq F value Pr(>F)
temp 2 80.545 40.273 23.319 7.345e-05 ***
Residuals 12 20.724 1.727
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

We now have:

power.anova.test(groups = 3, n = 5, between.var = 40.273, within.var = 1.727,
                 sig.level = 0.05, power = NULL)

Balanced one-way analysis of variance power calculation

groups = 3
n = 5
between.var = 40.273
within.var = 1.727
sig.level = 0.05
power = 1

NOTE: n is number in each group

The power is 1 (100%): the large variance between temperatures means that only 5 data points per group are enough to ensure the effect is real and not a false positive.

CLASS 4 – 20210903

Measurements

One of the fundamental principles of engineering is measurement. A measurement obtains or assigns a numerical value to a property of matter or energy by means of a method, an instrument or physical principles.

Mock-up and Prototype

A mock-up is an unstable, unpredictable and non-repeatable engineering system; it is the step previous to a prototype.

A prototype is a robust system that is stable, predictable and repeatable, close to the full version of the engineering system. It can also be characterized by the three fundamental properties of measurement.

Fundamental properties of measurement

Accuracy, Precision, and Uncertainty

The degree of accuracy and precision of a measuring system are related to the uncertainty in the
measurements. Uncertainty is a quantitative measure of how much your measured values deviate
from a standard or expected value. If your measurements are not very accurate or precise, then the
uncertainty of your values will be very high. In more general terms, uncertainty can be thought of as
a disclaimer for your measured values.

For example, if someone asked you to provide the mileage on your car, you might say that it is 45,000
miles, plus or minus 500 miles. The plus or minus amount is the uncertainty in your value. That is,
you are indicating that the actual mileage of your car might be as low as 44,500 miles or as high as
45,500 miles, or anywhere in between.

All measurements contain some amount of uncertainty. In our example of measuring the length of the paper, we might say that the length of the paper is 11 in., plus or minus 0.2 in. The uncertainty in a measurement, A, is often denoted as δA ("delta A"), so the measurement result would be recorded as A ± δA. In our paper example, the length of the paper could be expressed as 11 in. ± 0.2 in.

The factors contributing to uncertainty in a measurement include:

1. Limitations of the measuring device,
2. The skill of the person making the measurement,
3. Irregularities in the object being measured,
4. Any other factors that affect the outcome (highly dependent on the situation).

In our example, such factors contributing to the uncertainty could be the following: the smallest
division on the ruler is 0.1 in., the person using the ruler has bad eyesight, or one side of the paper is
slightly longer than the other. At any rate, the uncertainty in a measurement must be based on a
careful consideration of all the factors that might contribute and their possible effects.

These three elements can be summarized as intrinsic error in the data measurement, which will be
reflected in the observed variance.

Margin of errors and ANOVA

Last class we saw a 7.345×10⁻⁵ p-value, but with no margin of error; we just assumed it was a correct value. In fact, when stating this value we should also include the following: "Of the observed variation we do not know what percentage is due to measurement error and how much is due to the variation of the temperature levels."
Analysis of Variance Table

Response: yield
Df Sum Sq Mean Sq F value Pr(>F)
temp 2 80.545 40.273 23.319 7.345e-05 ***
Residuals 12 20.724 1.727
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

In the ANOVA table, in the "Residuals" row under "Mean Sq", the 1.73 value can be interpreted as the "error", but in reality it represents the natural variance of the system; the error is embedded in that value.

In cases where the variance between groups is very large, the error in measurement is not significant; it is when the "Mean Sq" values are very close that this becomes important and might cause a false positive.

Probability Distributions

The calculation of Pr(>F) cannot easily be performed manually, so one can use a mobile app called "Probability Distributions", created by Matthew Bognar, Ph.D. (we used version 5.6.3).

"Compute probabilities and plot the probability mass function for the binomial, geometric, Poisson, hypergeometric, and negative binomial distributions. Compute probabilities, determine percentiles, and plot the probability density function for the normal (Gaussian), t, chi-square, F, exponential, gamma, beta, and log-normal distributions."

https://play.google.com/store/apps/details?id=com.mbognar.probdist

One starts the application and selects the F category from the "Select a Distribution" drop-down menu located in the upper part.

d1 is the number of degrees of freedom of the groups (number of groups − 1).

d2 is the number of degrees of freedom of the residuals (number of data points − number of groups).

x is the F value calculated from the ANOVA test (Mean Sq of Groups / Mean Sq of Residuals).

P(X>x) is the p-value obtained for the previous values.

One can also calculate the critical F value at a desired α, such as 5% or 0.05. We just input the desired α in the P(X>x) pink box.

The critical F value is useful to establish the distance between the confidence interval and the value obtained from ANOVA.
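
The same computation can be done in R instead of the app; a minimal sketch for our example (d1 = 2, d2 = 12, α = 0.05):

qf(0.95, df1 = 2, df2 = 12)  # critical F ≈ 3.885; our F of 23.319 lies far beyond it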

Graphical Analysis of ANOVA

[Plot: graphical analysis of the ANOVA, showing each data point, the group means, the grand mean, a box for the grand mean ± the residual variance, and the variances between factors.]

The horizontal axis has a scale that represents a zero effect (base effect), aligned with the average of averages, the green point. The average of averages is obtained from the 15 data points in the experimental set and corresponds to a yield of 76.2.

This means that the average of averages of good pieces is 76.2. The left vertical scale is associated with the yield; the horizontal scale is associated with the changes in temperature.

First element: there is an average of averages (the grand mean).

Second element: there are three red triangles, which represent the average associated with each temperature. For example, the average for the temperature of 1178 degrees turned out to be 73.2, and it appears bounded on the subscale on the right. The 1127 average was 76.4 and the 1204 average was 78.9.

Third: the blue box represents the average plus/minus the variance of the error; in this case it goes from 76.2 up to 77.5 and from 76.2 down to 74.9. It is represented in height and width in the same proportion.

Fourth: the red lines in the form of axes represent the area that we could understand as the variance between the factors.

For the group average of 1178, the effect it has with respect to zero (the general average) is calculated, and it says that the yield is going to be reduced by 2.95. Where did that 2.95 come from? Subtract the average of averages, 76.2, from the average of 1178, which is 73.2. The value gives −2.95.

This means that, taking the average of averages as a baseline, the yield is most likely reduced by about 3 percentage points, hence the −2.95.

Now subtract the average of averages from 76.4, the 1127 average: 76.4 − 76.2 gives us 0.25, which is the effect of the temperature at 1127. It is 0.25 percentage points higher than the average of averages.

Finally, subtracting 76.2 from 78.9 gives us 2.71. The effect is the difference between the average of each category and the average of averages.

What is the effect? It is what we could expect as a change when we modify the temperature across the three levels studied in the experiment.
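
These effects can be reproduced in R; a short sketch reusing data.set from the Class 3 teaching problem:

grand <- mean(data.set$yield)                        # 76.19, the average of averages
group <- tapply(data.set$yield, data.set$temp, mean) # 76.44, 73.24, 78.90
round(group - grand, 2)                              # effects: 0.25, -2.95, 2.71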

Note that on the vertical scale and the horizontal scale the measure of these effects exceeds the expected variation, except for the red triangle of 1127. This means that 1127 cannot be distinguished from the most probable (general average) value; if we set the temperature at 1178 the efficiency will decrease by 2.95%, and if we use a temperature of 1204 the efficiency will increase by 2.71% compared to the general average.

This means that these effects are significant at the extremes, either reducing or increasing the response. The interesting response is obviously 1204, which gives the best result, better than the low and the intermediate levels. Operating at the highest temperature would be the most interesting.

We will also be able to see this in the effects graph that we are going to review next.

It is interesting to me that the lowest temperature of these three cases, 1127, behaves better than 1178.

That observation is very important: in a practical rather than academic context, less energy is obviously spent operating the ovens at 1127 than at 1178, so there are energy savings.

Between 1127 and 1204 it would be necessary to evaluate not only statistically whether one is better than the other, but also the costs: 1204 could have a greater cost, even though its yield is almost 80%, due to the cost increase between operating at 1127 and at 1204.

Increasing the temperature to 1204 represented an extra 25% in cost, which may or may not be justified by the reduction in pieces that had to be reprocessed. Take into account that a melted ring that went wrong was thrown back into the furnace and melted again, thus needing additional energy to melt it again and wait for it to come out well. So I don't know if the increase was justified; Jonathan's very wise observation is that with less temperature it works better.

Looking at the individual experimental runs in the case of 1204, there are some overlaps, but there are 5 points. It can be seen that at 1204 the variance is smaller, because the points are more clustered than in the other two groups.

ANOVA Assumptions

ANOVA has 2 main assumptions:

1) Variances are homogeneous between groups: not identical, but statistically similar.
2) Data points are normal, meaning each one is a realization of a random variable drawn from a normal distribution.

There are several tests for these assumptions, but we will use 2 for this course: the Bartlett test and the Fligner test, both tests of homogeneity of variances.

Bartlett test

In statistics, Bartlett's test is an inferential statistic used to assess the equality of variance in different
samples. Some common statistical procedures assume that variances of the populations from which
different samples are drawn are equal. Bartlett's test assesses this assumption. It tests the null
hypothesis that the population variances are equal.

Fligner test

The Fligner-Killeen test is a non-parametric test for homogeneity of group variances based on ranks.
It is useful when the data are non-normally distributed or when problems related to outliers in the
dataset cannot be resolved. It is also one of the many tests for homogeneity of variances which is
most robust against departures from normality.

These tests should be performed before the ANOVA test.

R input:
bartlett.test(fn,data.set)

R output:
##
## Bartlett test of homogeneity of variances
##
## data: yield by temp
## Bartlett's K-squared = 3.6215, df = 2, p-value = 0.1635

R input:
fligner.test(fn,data.set)

R output:
##
## Fligner-Killeen test of homogeneity of variances
##
## data: yield by temp
## Fligner-Killeen:med chi-squared = 1.9389, df = 2, p-value = 0.3793

Both test results have a p-value above 0.05; therefore we cannot reject the null hypothesis that the variances are equal and homogeneous.

In case one of these tests fails, one must consider the assumptions or methods the test uses to verify why it failed.

The ANOVA guide

Prof. Huerta uploaded a guide to the Telegram group in order to do a manual ANOVA exercise.

N is the total number of data points.

ϕ is the correction factor: the square of the sum of all data points, divided by N (ϕ = (Σy)²/N).

SST is the sum of all the squared values minus ϕ (SST = Σy² − ϕ).

SSF is the sum, over groups, of each group total squared divided by that group's number of data points, minus ϕ (SSF = Σ Tᵢ²/nᵢ − ϕ).

SSE is the difference between SST and SSF.

νi is the degrees of freedom of group i (its number of data points − 1).

si is the square root of the group's SSE divided by the group's degrees of freedom.

Mi is the sum of the group's data points divided by its number of data points, i.e. the group mean.

Testing the value 11.34 for x, with d1 = 2 and d2 = 9, in the "Probability Distributions" mobile app, we get a p-value of 0.00347, which is below 0.05 and therefore statistically significant.
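
The same check can be done in R with one line, equivalent to the app:

pf(11.34, df1 = 2, df2 = 9, lower.tail = FALSE)  # ≈ 0.0035, significant at α = 0.05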

Homework Assignments

Solve Problems 11 to 21 from the Gutiérrez Pulido book. These problems must be included in the Final Report.

ANOVA Problems Procedure

1) Read the problem carefully.
2) State your confidence interval and α value.
3) State your null hypothesis and alternative hypothesis.
4) Count all the data points and group them; verify the quantities in each group are similar.
   a. In case of unbalanced data, annotate it in the report.
   b. Perform a power test.
5) Create the data set.
6) Perform the Fligner and Bartlett tests to verify homogeneity.
   a. In case both tests fail, stop and annotate it in the report.
   b. In case one of the tests fails, continue and annotate it in the report.
   c. In case both tests pass, continue as normal.
7) Perform the ANOVA test.
8) Calculate the critical F value considering the degrees of freedom of the groups and residuals and the α value.
9) Verify the placement of F in the confidence interval graph.
   a. If F is below, accept the null hypothesis, perform the power test and annotate it in the report.
   b. If F is above, reject the null hypothesis, perform the Tukey HSD test and annotate it in the report.
10) Annotate all results in the report.
11) Answer each question with the obtained results.

CLASS 5 – 20210910

Most of this class was dedicated to answering questions from the homework and the concepts treated
in those problems.

Box plot

A box plot is a method for graphically depicting groups of numerical data through their quartiles. Box plots may also have lines extending from the boxes (whiskers) indicating variability outside the upper and lower quartiles, hence the terms box-and-whisker plot and box-and-whisker diagram. Outliers may be plotted as individual points.

Box plots are non-parametric: they display variation in samples of a statistical population without making any assumptions about the underlying statistical distribution. The spacings between the different parts of the box indicate the degree of dispersion (spread) and skewness in the data, and show outliers. In addition to the points themselves, they allow one to visually estimate various L-estimators, notably the interquartile range, midhinge, range, mid-range, and trimean.

Box plots can be drawn either horizontally or vertically. They received their name from the box in the middle.
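
As a minimal sketch, a box plot of the Class 3 yield data (reusing data.set from the teaching problem) takes one line in base R:

boxplot(yield ~ temp, data = data.set, xlab = "Temperature", ylab = "Yield (%)")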

Statistical model

A statistical model is a proposed relationship in the data; it can be based on a linear function (linear cause and effect), a product, or a polynomial function of any order.

The model has to have 4 elements and a property:

1) The model must state all its variables.
2) The model must state the relationships between these variables.
3) The model must be based on previous data.
4) The model must be based on known theory or theories about the problem.
5) Property: the model must be written as one running sentence.

Most 1-way ANOVA problems can be stated as a linear model such as:

y_ij = μ + τ_i + ϵ_ij

Where:

μ is the global mean of the system,

τ_i is the effect of group i,

ϵ_ij is the error caused by measurement.

21
However, we don’t have the total amount of data to define such a model, so we use approximations
for each variable.

μ can be approximated by the global mean of the data set.

τ can be calculated by the difference between the global mean and the mean of each group.

ϵ can be referred to the standard error based on the data, using the weighted variance of the residuals.

Looking at the graphical analysis chart we can identify each element:

[Plot: the graphical ANOVA chart, annotated with μ at the grand mean, the group means, and τ as the difference between each group mean and μ.]
22
Uncertainty

Conceptually, uncertainty is a region of probability: the range between a measurement and the true value.
As our technologies and techniques get better, our uncertainty gets smaller, but it never shrinks to a
single point. Uncertainty is related to the equipment's engineering and to the scientific principle
implemented in the measurement.

Some metrologists argue over whether we can translate this uncertainty between measurement systems, or
interpolate between them. This is still an open question and will continue to be debated.

The uncertainty stated on measurement devices is the biggest uncertainty measured in that kind of
system, as it considers the worst-case scenario.

The difference in uncertainty between measurement devices comes down to cost: better materials, technology
and research result in smaller uncertainty and higher cost.

A N O V A and Student’s t-test

ANOVA can be considered a generalized Student's t-test, as it can compare more than 2 groups, which
is the limit of the t-test. For two groups, ANOVA and the pooled-variance t-test agree exactly
(F = t²); results can differ only when another variant, such as Welch's t-test, is used.
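
This can be checked empirically; a minimal sketch with made-up two-group data:

a <- c(9, 12, 11, 14, 13)
b <- c(10, 6, 9, 9, 10)
d <- stack(list(a = a, b = b))                               # columns: values, ind
t0 <- t.test(values ~ ind, data = d, var.equal = TRUE)$statistic
F0 <- summary(aov(values ~ ind, data = d))[[1]]$`F value`[1]
c(t0^2, F0)                                                  # the two numbers coincide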

ANOVA and unbalanced data

ANOVA is robust enough to accept unbalanced data, but the test might produce unsatisfactory
results if the imbalance is large enough.

If this happens in practice, we must try to obtain more data points to balance the treatments.

Imbalance becomes very relevant in the case where we accept the null hypothesis, because we have a
different power inside each treatment. If we have unbalanced data and we still reject the null
hypothesis, then the imbalance is irrelevant.

Normality

There is a graphical method to check normality in our data sets. R's plot() function, applied to a
fitted linear model, outputs several graphics that can be used to verify normality in our data set.

We will use an example problem for demonstration purposes. The data set contains the scores obtained
in a training program by 4 groups of 5 workers.

R input code:

reps <- c(9,12,11,14,13,
          10,6,9,9,19,
          12,14,11,13,11,
          9,8,11,7,8)              #response vector

prog <- as.factor(c(rep("1",5),
                    rep("2",5),
                    rep("3",5),
                    rep("4",5)))   #factor vector

data <- data.frame(reps,prog)      #data table

fit <- reps~prog                   #model formula

mod <- aov(fit,data)               #ANOVA
summary(mod)                       #ANOVA table
test <- TukeyHSD(mod)
test                               #Tukey test
plot(test)

#descriptive confidence intervals
library(gplots)
plotmeans(fit,data,ci.label=T,n.label=F)

#inferential confidence intervals
library(margins)
ml <- lm(reps~prog,data=data)
cplot(ml)                          #slope

bartlett.test(fit,data)
fligner.test(fit,data)

lineal <- lm(fit,data)

power.anova.test(groups = 4, n = 5, between.var = 18.32, within.var = 2.60,
                 sig.level = 0.05, power = NULL)

plot(lineal,1)
plot(lineal,2)
plot(lineal,3)
plot(lineal,4)
plot(lineal,5)
plot(lineal,6)

R output code:
> reps <- c(9,12,11,14,13,
+ 10,6,9,9,19,
+ 12,14,11,13,11,
+ 9,8,11,7,8) #response vector
>
> prog <- as.factor(c(rep("1",5),
+ rep("2",5),
+ rep("3",5),
+ rep("4",5))) #Vector del factor
>
> data <- data.frame(reps,prog) #data table
>
> fit <- reps~prog #model formula
> mod <- aov(fit,data) #ANOVA
> summary (mod) #ANOVA table
Df Sum Sq Mean Sq F value Pr(>F)
prog 3 39.2 13.07 1.633 0.221
Residuals 16 128.0 8.00
> test <- TukeyHSD(mod)
> test #Tukey test
Tukey multiple comparisons of means
95% family-wise confidence level

Fit: aov(formula = fit, data = data)

$prog
diff lwr upr p adj
2-1 -1.2 -6.317948 3.917948 0.9065594
3-1 0.4 -4.717948 5.517948 0.9958916
4-1 -3.2 -8.317948 1.917948 0.3141183
3-2 1.6 -3.517948 6.717948 0.8078269
4-2 -2.0 -7.117948 3.117948 0.6840623
4-3 -3.6 -8.717948 1.517948 0.2244134

> plot(test)
>
> #descriptive confidence intervals
>
> library(gplots)
> plotmeans(fit,data,ci.label=T,n.label=F)
>
> #inferential confidence intervals
>
> library(margins)
> ml<-lm(reps~prog,data=data)
> cplot(ml) #slope
Error in eval(model[["call"]][["data"]], env) :
  promise already under evaluation: recursive default argument reference or earlier problems?
>
> bartlett.test(fit,data)

Bartlett test of homogeneity of variances

data: reps by prog


Bartlett's K-squared = 8.8951, df = 3, p-value = 0.03072

> fligner.test(fit,data)

Fligner-Killeen test of homogeneity of variances

data: reps by prog


Fligner-Killeen:med chi-squared = 0.88011, df = 3, p-value = 0.8302

>
> lineal <- lm(fit,data)
>
> power.anova.test(groups = 4, n=5, between.var = 18.32, within.var = 2.60,
sig.level = 0.05, power = NULL)

Balanced one-way analysis of variance power calculation

groups = 4
n = 5
between.var = 18.32
within.var = 2.6
sig.level = 0.05
power = 1

NOTE: n is number in each group

>
> plot(lineal,1)
> plot(lineal,2)
> plot(lineal,3)
> plot(lineal,4)
> plot(lineal,5)
> plot(lineal,6)
> reps <- c(9,12,11,14,13,
+ 10,6,9,9,10,
+ 12,14,11,13,11,
+ 9,8,11,7,8) #response vector
>
> prog <- as.factor(c(rep("1",5),
+ rep("2",5),
+ rep("3",5),

+ rep("4",5))) #Vector del factor
>
> data <- data.frame(reps,prog) #data table
>
> fit <- reps~prog #model formula
> mod <- aov(fit,data) #ANOVA
> summary (mod) #ANOVA table
Df Sum Sq Mean Sq F value Pr(>F)
prog 3 54.95 18.32 7.045 0.00311 **
Residuals 16 41.60 2.60
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> test <- TukeyHSD(mod)
> test #Tukey test
Tukey multiple comparisons of means
95% family-wise confidence level

Fit: aov(formula = fit, data = data)

$prog
diff lwr upr p adj
2-1 -3.0 -5.9176792 -0.08232082 0.0427982
3-1 0.4 -2.5176792 3.31767918 0.9788127
4-1 -3.2 -6.1176792 -0.28232082 0.0291638
3-2 3.4 0.4823208 6.31767918 0.0197459
4-2 -0.2 -3.1176792 2.71767918 0.9972140
4-3 -3.6 -6.5176792 -0.68232082 0.0133087

> plot(test)
>
> #descriptive confidence intervals
>
> library(gplots)
> plotmeans(fit,data,ci.label=T,n.label=F)
>
> #inferential confidence intervals
>
> library(margins)
> ml<-lm(reps~prog,data=data)
> cplot(ml) #slope
Error in eval(model[["call"]][["data"]], env) :
promise already under evaluation: recursive default argument reference or
earlier problems?
>
> bartlett.test(fit,data)

Bartlett test of homogeneity of variances

data: reps by prog

Bartlett's K-squared = 0.56848, df = 3, p-value = 0.9036

> fligner.test(fit,data)

Fligner-Killeen test of homogeneity of variances

data: reps by prog


Fligner-Killeen:med chi-squared = 0.60773, df = 3, p-value = 0.8947

>
> lineal <- lm(fit,data)
>
> power.anova.test(groups = 4, n=5, between.var = 18.32, within.var = 2.60,
sig.level = 0.05, power = NULL)

Balanced one-way analysis of variance power calculation

groups = 4
n = 5
between.var = 18.32
within.var = 2.6
sig.level = 0.05
power = 1

NOTE: n is number in each group


>
> plot(lineal,1)
> plot(lineal,2)
> plot(lineal,3)
> plot(lineal,4)
> plot(lineal,5)
> plot(lineal,6)

We observe that the Bartlett and Fligner tests now have high p-values, indicating the variances are
homogeneous. (Note that this second run replaced the outlying value 19 in group 2 with 10.)

There is a statistically significant difference between groups, as indicated by the 0.00311 p-value in
the ANOVA table.

Tukey indicates that the main differences are between groups 2-1, 4-1, 3-2 and 4-3; therefore groups
1 and 3 differ statistically from groups 2 and 4, but not from each other.

Residual vs Fitted plot

This plot shows if residuals have non-linear patterns. There could be a non-linear relationship between
predictor variables and an outcome variable and the pattern could show up in this plot if the model
doesn’t capture the non-linear relationship. If you find equally spread residuals around a horizontal
line without distinct patterns, that is a good indication you don’t have non-linear relationships.

The fitted vs residuals plot is mainly useful for investigating:

• Whether linearity holds. This is indicated by the mean residual value for every fitted-value
  region being close to 0. In R this is indicated by the red line being close to the dashed line.
• Whether homoskedasticity holds (all its random variables have the same finite variance). The
  spread of residuals should be approximately the same across the x-axis.
• Whether there are outliers. This is indicated by some 'extreme' residuals that are far from
  the rest.

Normal Q-Q plot

The Q-Q plot, or quantile-quantile plot, is a graphical tool to help us assess if a set of data plausibly
came from some theoretical distribution such as a Normal or exponential. For example, if we run a
statistical analysis that assumes our dependent variable is normally distributed, we can use a Normal
Q-Q plot to check that assumption. It’s just a visual check, not an air-tight proof, so it is somewhat
subjective. But it allows us to see at-a-glance if our assumption is plausible, and if not, how the
assumption is violated and what data points contribute to the violation.

Do residuals follow a straight line well or do they deviate severely? It’s good if residuals are lined well
on the straight dashed line.
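
A one-line sketch, assuming the model lineal fitted in the code above:

qqnorm(residuals(lineal)); qqline(residuals(lineal))   # points should follow the line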

Scale-Location plot

It’s also called Spread-Location plot. This plot shows if residuals are spread equally along the ranges
of predictors. It’s good if you see a horizontal line with equally (randomly) spread points.

The scale-location plot is very similar to residuals vs fitted, but simplifies analysis of the
homoskedasticity assumption. It takes the square root of the absolute value of standardized residuals
instead of plotting the residuals themselves. Recall that homoskedasticity means constant variance in
linear regression.

We want to check two things:

• That the red line is approximately horizontal. Then the average magnitude of the
  standardized residuals isn't changing much as a function of the fitted values.
• That the spread around the red line doesn't vary with the fitted values. Then the variability
  of magnitudes doesn't vary much as a function of the fitted values.

Cook's distance plot

It is used to identify influential data points. It depends on both the residual and the leverage; it takes
into account both the x value and the y value of the observation.

A data point that has a large value for Cook’s Distance indicates that it strongly influences the fitted
values. A general rule of thumb is that any point with a Cook’s Distance over 4/n (where n is the
total number of data points) is considered to be an outlier.

It’s important to note that Cook’s Distance is often used as a way to identify influential data points.
Just because a data point is influential doesn’t mean it should necessarily be deleted – first you should
check to see if the data point has simply been incorrectly recorded or if there is something strange
about the data point that may point to an interesting finding.
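
A short sketch applying the 4/n rule of thumb, assuming the model lineal fitted earlier:

cd <- cooks.distance(lineal)
which(cd > 4 / length(cd))   # indices of potentially influential points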

Constant Leverage: Residual vs Factor Levels plot

This is a plot of the residuals versus any factor of your choosing. It checks whether the variance not
accounted for by the model is different for different levels of a factor. If all is okay, the plot should
exhibit a random scatter. Pronounced curvature may indicate a systematic contribution of the
independent factor that is not accounted for by the model.

Cook's distance vs Leverage plot

Cook's distance and leverage are used to detect highly influential data points, i.e. data points that
can have a large effect on the outcome and accuracy of the regression. For large sample sizes, a rough
guideline is to consider Cook's distance values above 1 to indicate highly influential points and
leverage values greater than 2 times the number of predictors divided by the sample size to indicate
high leverage observations. High leverage observations are ones which have predictor values very far
from their averages, which can greatly influence the fitted model.

The contours in the scatterplot are standardized residuals labelled with their magnitudes.

Parameters of Normality

Normality has three parameters:

1) One number that represents the central tendency of the data.
2) A second number that represents a measurement of the variability of the data.
3) The skewness of the data distribution with regards to the Gaussian bell curve.

A grouping of points can be a sign that there is already a significant change in the data, and that
those data points correspond to a special cause of variation. It is called special because the variation
represented by the normal distribution is called common variation (it is caused by the common causes of
variation), and whatever falls outside of it is a special cause of variation.

How can Cook's graph indicate normality? The lengths present the differences, but I observe different
lengths; if most of them are very close to 0, could we say the data is normal?

How could we detect non-normality here? There could be patterns where, as the experiment progresses and
the frequency of repetitions increases, the error also increases: the error would be small at first and
grow with the frequency. This reveals that the model is not absorbing or representing that trend, so the
residuals are not normal but rather present a trend. That would be one situation.

Another would be that the low differences are all grouped at one level, the intermediate ones at another
level, and the high ones at yet another. What this represents is a second order relationship, which is
not represented by the model we calculated.

We could observe both behaviors in both the first and the second output graph.

What is the purpose of a research graduate program?

The purpose is to form independent researchers with independent critical thinking, their own interests
and their own concepts, who will provide an original contribution to science, be it great or small.

EXAM 1 – 20210924

The exam consisted of 4 questions related to the concepts of the ANOVA test, variances and the null
hypothesis.

Question 1

A company hired you to evaluate new concrete blends. The blends have been tested, and the strength
(MPa) results after 28 days of curing are shown in the summary table.

x mean sdev n
--- ------ ------ ---
1 43.64 5.556 5
2 40.06 1.006 5
3 41.80 1.056 5
4 34.78 1.934 5

What is the experiment's total variance?


Consider that:  s² = (Σy² − (Σy)²/n) / v   (1)    and    F = sF² / sE²   (2)

where v is the degrees of freedom and sF², sE² are the factor and error variances.

Question 2

A company hired you to evaluate new concrete blends. The blends have been tested, and the strength
(MPa) results after 28 days of curing are shown in the summary table.

x   mean    sdev    n
--- ------  ------  ---
1   37.64   1.862   5
2   38.06   3.913   5
3   31.80   1.056   5
4   34.78   1.934   5

What would be the maximum average expected strength for the best performance blend at CI = 90%,
using the group variance?

Consider that:  s² = (Σy² − (Σy)²/n) / v   (1)    and    F = sF² / sE²   (2)

Question 3

A one-factor design of experiment comparing five models of the same Product has been run, and
partial results for the ANOVA are provided below.

Product:
Sum Square 2044.8
Degrees of freedom 4

Total
Sum Square 2922.02
Degrees of freedom 119

What is the factor's variance?

Consider that:  s² = (Σy² − (Σy)²/n) / v   (1)    and    F = sF² / sE²   (2)

Question 4

In an assembly operation with two manufacturing cells, the weekly efficiency baseline, calculated with
last year's data, indicates a maximum of 66.5% at 99% confidence (with a variance of 1.1).

In the last seven weeks, cell A obtained an average of 67.9% with a standard error of 0.09, and cell B
achieved 67.8% with a variance of 0.3.

1. Calculate the 95% C.I. limits for:

• The baseline
• Cell A
• Cell B

2. Calculate t0 and tc at 95% to:

• Compare the cells with each other
• Compare cell A with the baseline
• Compare cell B with the baseline

3. Answer the questions and justify your answers:

• Did cell A improve?
• Did cell B improve?
• Which is the most efficient cell?

I had to write a calculation report for the test; it is found in Appendix C.

The exam score was 7.20/10.00; I failed the first question.

CLASS 6 – 20211001

What is science? How do we distinguish science from non-science?

Patterson and Williams (1998) use insights from philosophy of science to present a model of science
which, at the time, advanced discussion about social science within natural resource management.

They make the case that science has, in part, a normative structure: doing scientific research of kind
'X' is therefore underwritten by a set 'Y' of normative philosophical commitments.

These philosophical commitments involve theories about:

• The nature of reality and of what really exists (ontology)
• The relationship between the knower and what is known (epistemology)
• What we value (axiology)
• The strategy and justifications in constructing a specific type of knowledge (methodology),
  as linked to individual techniques (method/s).

Taken as a whole, a set of philosophical commitments forms a meta-theoretical (theory of theory)
structure that can help with further understanding research as a phenomenon in its own right.

Paradigmatic commitments in the macrostructure of science (from Patterson and Williams 1998:
286, as adapted from Laudan 1984)

Ontological commitments underlying research (from Patterson and Williams 1998: 288)

Epistemological commitments underlying research (from Patterson and Williams 1998: 288)

Axiological commitments underlying research (from Patterson and Williams 1998: 288)

Falsifiability

A hypothesis must be falsifiable: it must be possible, in principle, to prove it false.

The method, and executing the method well, is very important for any experiment.

In conclusion, the experiment is the materialization of scientific reproducibility.

The Catapult Problem as a Factorial Problem

The problem is a factorial problem, as one can use five different inputs (also called factors or
independent variables) and there is one output (also called response or dependent variable).

[Figure: the catapult simulator, with its five inputs labeled 1 to 5.]

The inputs (and their ranges) are:

1. Release Angle (100 – 185)
2. Firing Angle (90 – 140)
3. Cup Elevation (200 – 300)
4. Pin Elevation (100 – 200)
5. Bungee Position (100 – 200)

We could investigate the independent effect of each input on the output, but we would miss the
information regarding the relationships between the inputs and their combined effects or synergy. To
consider such combinations we must modify all input levels simultaneously.

This requires a different model where we can use multiple input variables, which is the factorial
experiment model.
The simplest factorial model is a 2^k model, where "2" is the number of levels of each treatment and
"k" is the number of treatments. An experiment with 3 treatments in a 2^k model would need 8 runs to
cover all cases (2^3 = 2 ∙ 2 ∙ 2 = 8). This method works best with categorical treatment values.
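
As a sketch, the 8 coded runs of a 2^3 design can be listed with base R alone:

expand.grid(A = c(-1, 1), B = c(-1, 1), C = c(-1, 1))   # 2·2·2 = 8 runs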

Since we need to perform multiple runs, the catapult experiment can be simulated on a large scale
using a table version found here:

https://sigmazone.com/catapult-grid/

The page looks like this: we can input each of the 5 variables and, on clicking the blue button, all
input rows are run through the catapult and the distance output is displayed in the rightmost column.

The table can be copied and pasted into Excel, making it easier to input into R or any other statistics
software.
In the class experiment we will use 3 factors with 2 levels, needing 8 experiment runs (2^3 = 8).
The factors and ranges for this experiment are cup elevation (250 – 300), pin elevation (100 – 200)
and bungee position (150 – 200). The other 2 factors will be left constant (release angle = 185 and
firing angle = 125).

Here is a table with the results of using the Grid Interface:

Release Angle  Firing Angle  Cup Elevation  Pin Elevation  Bungee Position  Distance
185            125           250            100            150              204.62
185            125           250            200            150              392.74
185            125           300            100            150              264.92
185            125           300            200            150              508.55
185            125           250            100            200              257.18
185            125           300            200            200              625.80
185            125           250            100            200              255.29
185            125           300            200            200              620.03

Our Null Hypothesis is the following:

𝐻𝑜 : 𝜇𝑥1 = 𝜇𝑥2 = 𝜇𝑥3

Meaning that the average distance is independent of the levels of treatments x1, x2 and x3.

We can have many alternate hypotheses due to the interactions between the treatments:

𝐻1 : 𝜇𝑥1 ≠ 𝜇𝑥2

𝐻1 : 𝜇𝑥1 ≠ 𝜇𝑥3

𝐻1 : 𝜇𝑥2 ≠ 𝜇𝑥3

We now have all different possibilities, but we could also consider the levels of each treatment.

𝐻1 : 𝜇𝑥1=100 ≠ 𝜇𝑥1=200

𝐻1 : 𝜇𝑥2=250 ≠ 𝜇𝑥2=300

𝐻1 : 𝜇𝑥3=150 ≠ 𝜇𝑥3=200

We can propose a linear model that can describe the behavior of the catapult, using the following
structure:

𝑦 = 𝛽0 + 𝛽1 𝑥1 + 𝛽2 𝑥2 + 𝛽3 𝑥3 + 𝛽4 𝑥1 𝑥2 + ⋯ + 𝛽𝑛 𝑥1 𝑥2 𝑥3

Where β is a coefficient for each treatment x: β0 is the grand mean of the distance reached in the
experiment, β1 is the estimated coefficient based on the variables that define the extremes of the levels
of x1, and so on. The β values are estimated by linear regression and least squares.

Prof. Huerta shared an R code to view and analyze the results.

RStudio code:

> library (DoE.base)


Loading required package: grid
Loading required package: conf.design
Registered S3 method overwritten by 'DoE.base':
method from
factorize.factor conf.design

Attaching package: ‘DoE.base’

The following objects are masked from ‘package:stats’:

aov, lm

The following object is masked from ‘package:graphics’:

plot.design

The following object is masked from ‘package:base’:

lengths

> vars <- list(pe=c(100,200), #pin elevation
+              ce=c(250,300), #cup elevation
+              bp=c(150,200)) #bungee position
> tabla <- fac.design(factor.names = vars, randomize = F)
creating full factorial with 8 runs ...

>
> tabla
pe ce bp
1 100 250 150
2 200 250 150
3 100 300 150
4 200 300 150
5 100 250 200
6 200 250 200
7 100 300 200
8 200 300 200
class=design, type= full factorial
>
> Y <- c(203.13,397.81,266.03,513.49,257.76,483.19,325.5,621.11)
>
> tab.1 <- add.response(tabla,Y)
>
> tab.1
pe ce bp Y
1 100 250 150 203.13
2 200 250 150 397.81
3 100 300 150 266.03
4 200 300 150 513.49
5 100 250 200 257.76
6 200 250 200 483.19
7 100 300 200 325.50
8 200 300 200 621.11
class=design, type= full factorial

Take note that tab.1 has the properties of an experimental design table: it is not an ordinary table
nor an ordinary data.frame. This table holds the contrasts of the high and low value of each
treatment.

>
> plot (tab.1) #graphical summary: pe is the significant factor
>
> halfnormal(tab.1, alpha = .05) #pe is the significant factor

Significant effects (alpha=0.05, Length method):
[1] pe1 ce1

>
> fit <- lm (Y~pe+ce+bp, tab.1)
>
> anova (fit)
Analysis of Variance Table

Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
pe 1 115964 115964 165.808 0.0002097 ***
ce 1 18455 18455 26.387 0.0068060 **
bp 1 11789 11789 16.856 0.0147853 *
Residuals 4 2798 699
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> summary (fit)

Call:
lm.default(formula = Y ~ pe + ce + bp, data = tab.1)

Residuals:
1 2 3 4 5 6 7 8
26.4425 -19.6725 -6.7175 -0.0525 4.2975 -11.0675 -24.0225 30.7925

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 383.50 9.35 41.016 2.11e-06 ***
pe1 120.40 9.35 12.877 0.00021 ***
ce1 48.03 9.35 5.137 0.00681 **
bp1 38.39 9.35 4.106 0.01479 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 26.45 on 4 degrees of freedom


Multiple R-squared: 0.9812, Adjusted R-squared: 0.9671
F-statistic: 69.68 on 3 and 4 DF, p-value: 0.0006568

The following plot was obtained:

The horizontal axis shows the factors involved in the experiment; the vertical axis has the means
obtained in the experiments. pe yielded roughly 260 distance units at level 100 and roughly 500
distance units at level 200. The horizontal line is the β0 of the model, which is the 383.50 value in
the "(Intercept)" row (marked in yellow in the RStudio code). That is the baseline or grand mean.

In the table, 120.40 is the effect of the high level of pe (marked in green); by adding it to the grand
mean we get 383.50 + 120.40 = 503.90. pe's lowest value can be calculated by subtracting 120.40 from
383.50 (383.50 − 120.40 = 263.10).

In this case β0 = 383.50 and β1 = 120.40, and the codified values are +1 and −1. These values were
obtained by least squares; it is a linear regression.
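
A tiny sketch reproducing both fitted means from the coded model:

b0 <- 383.50          # intercept: the grand mean
b1 <- 120.40          # pe coefficient
b0 + b1 * c(-1, +1)   # means at pe = 100 and pe = 200: 263.1 and 503.9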

What could be the factor that is the most influential in the distance result?

It is the Pin Elevation, as it has the largest difference from the baseline (±120.40). So the biggest
coefficient value is indicative of being the most impactful factor.

What if one of the values in "Estimate Std." was negative?

It means that it could be affecting the distance result negatively (inversely proportional): as you use
bigger values of that factor, it could shorten the distance reached.

The levels of the factors are treated not as continuous variables but as categorical ones; therefore the
lowest value is represented by a negative sign (−1) and the highest value by a positive sign (+1). We
do not use the magnitudes of the range values in our calculations.

The Half-normal plot

This half-normal plot has on its horizontal axis the values of each of the coefficients used in the
linear model. Since we used 8 experimental data points, there should be 8 points on the graph; if not,
some values might be overlapping.

There is a data point near the right edge of the graph with the label "ra1". It means that at the 5%
significance level, only the "Release Angle" treatment effect is statistically significant. Other
treatments might have an effect on the output, but they are not above the normal variation of the
catapult simulator.

The black dot is an atypical value: while it is close to zero, it did not show normal behavior and was
therefore filled with black. This can be verified in the "plot(fit)" graphs of "Residuals vs Fitted",
"Normal Q-Q" and "Constant Leverage", which show that the 8th run in the experiment was atypical.

When having these results, it is best to simply repeat the run in order to verify the results, and to
use an average to normalize them.

ANOVA Results

> anova (fit)
Analysis of Variance Table

Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
pe 1 115964 115964 165.808 0.0002097 ***
ce 1 18455 18455 26.387 0.0068060 **
bp 1 11789 11789 16.856 0.0147853 *
Residuals 4 2798 699
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

In the ANOVA output table we can see more rows, each representing a treatment. The output marks
that pe is indeed the most significant treatment in the experiment.

An effect, however close to zero, is only identified as real when its significance value (p-value) is
less than 0.05.

Take note that the Degrees of Freedom of each treatment is 1, as there are only 2 levels (dof = n − 1).

The residuals take the degrees of freedom of the 8 data points (dof = 8 − 1 = 7) minus the 3
treatments (dof = 7 − 3 = 4), which can be used to calculate the error in the experiment.

Treating the error mean square as the variance, we get that the standard deviation is √699 ≈ 26.44.
Considering that the grand mean of the experiment is 383.50 and the standard deviation is about 26.4
distance units, the catapult simulator has a very large variation in its output.

The 2^5 Catapult experiment

Prof. Huerta showed us the Half-normal plot of a 2^5 experiment using the 5 input treatments of the
Catapult:

With the 5 treatments we see that "ra1" is the most statistically significant treatment, followed by
"ce1", "fa1" and some interactions between treatments, such as "ra1:ce1". Some double interactions are
significant. Some black and white points indicate grouping or clustering, showing the behavior of the
algorithms used in the simulator.

Prof. Huerta then told the class to run a similar 2^5 experiment and report the results for next week.

The Paper Helicopter experiment

For a real-life example of a 2^k experiment, Prof. Huerta provided the class with a template for a
paper helicopter, a classic experiment proposed by George E. P. Box, Søren Bisgaard and Conrad
Fung.

The output of this experiment is the flight time of the paper helicopter, and the input treatments are
the wing length, body length, body width, paper clip, tape on the body and tape on the wings.
The assignment is to perform a 2^k experiment with this template and report the results in a written
report, a slide presentation and a short-format video.
I worked with classmate Andrea Berenice on a 2^5 experiment using the input treatments of wing
length, body length, body width, paper clip and tape on the wings. The experiment's written report
is located in Appendix B: Assignment 3.

CLASS 7 – 20211008
3^k Factorial Experiments

A 3^k experiment is used when 1) the value ranges are very large or small and an intermediate level
is needed to justify changes, and/or 2) there is an indication that the output behavior in the input
ranges of the experiment is not linear but a second or higher order response, like a curve. We need at
least 3 points to establish that a second or higher order behavior exists.

In factorial experiments the mathematical treatment of the factors is always categorical: even if the
input variable is continuous, we select arbitrary levels and mark them as categories. In ANOVA the
"distances" between levels are only contrast points; in 2^k experiments there is 1 degree of freedom
between the two levels and the coded distance is just 1 unit (+1 and −1), while in 3^k the middle
point is named "0" (−1, 0, +1).

The spacing does not have to be equidistant: it can take different values and be asymmetrical
between −1 and 0 or between 0 and +1. Equidistance is important if the physics of the experiment
requires it; if not, it is irrelevant. For the statistical analysis, asymmetry is not important.
When we perform 3^k experiments, we can detect quadratic relationships, which can be used as a control
factor.

When viewing the half-normal plot for a 3^k experiment, the labels now mark the linear (.L), quadratic
(.Q) and interaction (:) behavior of the factors.

What is the meaning of a relationship between factors?

It is a combined effect called “synergy” as proposed in Systems Theory. The result of a system is
bigger than the sum of its component parts.

In this case, the synergy elements are the linear relationship "ra.L:ce.L" and the relationship of
"ra.Q" with itself, as a quadratic. These elements bring an additional effect when combined.

As "ra.Q" is near zero, but not zero, the 3^k experiment was justified in finding this relationship
and the synergy.

In industrial and manufacturing processes these effects are very rare and irrelevant.

For the class we saw a 3^k experiment using the catapult simulator with the following input
treatments:

Fixed input treatments:

• FA = 130
• PE = 200
• BP = 200

Variable input treatments and levels:

Treatment    -1     0    +1
RA          140   160   180
CE          200   250   300

Note that these values are symmetrical, but they don’t need to be.
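
A minimal sketch generating these 9 runs with fac.design() from DoE.base (the function used in the previous class); only the two variable factors are included:

library(DoE.base)
vars3 <- list(ra = c(140, 160, 180),   # levels -1, 0, +1
              ce = c(200, 250, 300))
fac.design(factor.names = vars3, randomize = FALSE)   # the 9 runs of the 3^2 design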

In this case, we will have 9 experimental runs, but with a reply of 2 blocks we will have 18 data
points.

What would happen if we treat each block separately as a different data set?

Some researchers average the results of each run, thereby strengthening the capacity of the data to
show the true central tendency of the dataset, at the cost of losing the variance information.

Prof. Huerta's recommendation is to keep one data table with all the block experiments as-is, even
when the data repeats itself between runs. This offers the opportunity to verify the variance of
each run, allowing at least 1 degree of freedom to see the process stability. Blocks could even be
treated as a treatment level if there is a significant change between them; however, one of our
hypotheses is that there shouldn't be any difference between blocks if the methodology was properly
followed.

In order to verify atypical values, one can review the residuals in the "Residuals vs Fitted" plot to
catch any run that might be atypical.

In the class example, runs 16, 7 and 8 have large residuals, therefore they are atypical. At this point
it is too late to fix them with other runs; one must have kept a very detailed log of the experiments in
order to notice anything unusual. By combining both sources of information, one can repeat only the
atypical run and not the whole experiment. We can also do a forensic analysis of the experiment in
order to understand the atypical value in case the logbook didn't report anything unusual. One can
either repeat the run or analyze it in order to understand it.

Reply and Repeat

There’s a difference between a reply and a repeat of an experiment.

A reply uses the same model “n” amounts of the reply, while a repeat uses two identical models for
a run each. It might seem the same, but they are different. In the repeat, both helicopter models
have the variability of their construction and their operation. In a reply we only get the variability
of its operation.

Scientific focus

Science is a way of thinking and acting: precisely the most recent, most universal and most useful one.

Experimentation is not the only scientific method, but it is useful as it makes us think of a plan and
act according to that plan. Action also makes us think about the results, and it becomes a feedback
loop, with the end result being knowledge.

Scientific tactics

The scientific method is the strategy of scientific research; it affects the entire process of research.
Tactics depend on our field of study and the amount of knowledge regarding the subject. We must
know the tactics of our field and our resources (time, lab equipment, research partners, etc.).

The Factor vs Mean plot

When we look at the "Factor vs Mean of Y" plot, what we see on each factor line is the average value
obtained by each treatment level, and it is not symmetrical. Note that the middle/zero level didn't
reach the grand mean axis, meaning that there is a second order relationship that affects the experiment.

In the case of the "ce" treatment, the second order relationship is nearly irrelevant or nonexistent;
that is why it is near the grand mean axis.

If the "ra" middle value is higher or lower than the grand mean axis, it means that there is a
significant second order effect on the distance output.
When viewing the linear regression model output, one can see all the relationship values of the
model, and one can see that "ra.Q" is indeed significant while "ce.Q" is not.

The R² is very high, meaning there is a strong correlation between the calculated values and the
experimental data.

We can also see the effects of the variables with this code:

library(effects)   # allEffects() comes from the effects package
Efectos <- allEffects(m10)
Efectos

This shows the linear model output from the changes in the variables.

One can explore a linear interpolation between those values to reach the target distance using a fixed
variable.

That plot can help us see the behavior of the model according to the fixed value of "ce"; with this
tool we can control the output behavior of the model and parametrize the system.

We can create the interpolation function as:

x = ( y·(x2 − x1) + y2·x1 − y1·x2 ) / (y2 − y1)

In R this function is written as:

Getx <- function(y, x1, x2, y1, y2) {
  (y*(x2 - x1) + y2*x1 - y1*x2) / (y2 - y1)
}

Getx (y = 250, x1 = 160, x2 = 180, y1 = 184.77, y2 = 294.15)

The 3^k Paper Helicopter Experiment

As homework, Prof. Huerta requested we run a 3^k factorial experiment with the JPL paper helicopter
model.

The results of this assignment can be seen in the section titled Assignment 4 in Appendix B of this
report.

Factorial Design

Factorial designs need the factors to be balanced. If the experiment levels do not allow for a full
range of options for testing, one option is to search for levels that are compatible with one another
and do not interfere among themselves.

It is preferable that the number of test prototypes equal the number of experimental runs, in order
to see the variation in the control parameters and not the variation in the prototype. If one does a
reply, one must create additional prototypes; in repetitions, one uses the same prototypes more than
once.

The factor ϵ is in the linear model in order to take into account all possible errors made in the
experiment, knowingly or unknowingly.

Phyphox

Phyphox is a phone application developed by RWTH Aachen University. It is a physics toolbox that
uses the phone's sensors for physical measurements such as:

• Accelerometer
• Magnetometer
• Gyroscope
• Light intensity
• Pressure
• Microphone
• Proximity
• GPS

The app can be downloaded at:

https://play.google.com/store/apps/details?id=de.rwth_aachen.phyphox

The experiment assignment

Prof. Huerta mentioned that the last assignment for this class is a real-life experiment. It is
usually a catapult; however, this year will be different, and we each must propose an experiment of
our own.

The proposal must be presented next class.

What should I see in my experiment if I made a mistake?

There are five elements, and all can be seen in the linear regression (a short R sketch follows the list):

1. Each factor's coefficients (the higher, the better)
2. Their p-values (Pr>|t|) (the lower, the better)
3. The Multiple and Adjusted R-squared (the closer to 1, the better)
4. The residual standard error and the ANOVA residual error (this is the total experiment's variance)
5. The F-statistic and the model's p-value.
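
A short sketch pulling these five elements from a fitted model, assuming the object fit from Class 6 behaves as a standard lm fit:

s <- summary(fit)
coef(s)           # 1) coefficients and 2) their Pr(>|t|)
s$r.squared       # 3) multiple R-squared
s$adj.r.squared   #    adjusted R-squared
s$sigma           # 4) residual standard error
s$fstatistic      # 5) F-statistic of the model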

CLASS 8 – 20211015

Response Surface

This is a response surface:

The surface is the response variable (throwing distance), drawn on the z axis, using the x-y axes for
the control variables release angle "ra" and cup elevation "ce". It has a curve along the "ra" axis and
is linear along the "ce" axis. It is drawn from the interpolation obtained from different results.

This surface does not need a big amount of results; it can be interpolated from a minimum number of
data points.

One of the hypotheses for this surface is that it is continuous and does not have any interruptions;
therefore the function based on the control variables must also be continuous.

A response surface can have more than two control variables, but then we would have to draw more
response surfaces for the additional variable pairings.

An important thing to note is that the relationships between variables must be based on confirmed
theory.

Central Composite Design

This method can produce the needed results for a response surface.

For the class we will be using the catapult simulator with the following input treatments:

Fixed input treatments:

• FA = 120
• PE = 200
• BP = 200

Variable input treatments and levels:

Treatment    -1     0    +1
RA          140   160   180
CE          220   250   280

All Central Composite Designs have a factorial base, such as 2^k, 3^k or n^k. The mathematical theory
behind the response surface is tensor algebra, which uses characteristic vectors for the geometric
development of the surface.

Since we are using a central point, we will codify the extremes as −1 or +1. We must not forget
that this is not a 3^k experiment, but a 2^k with a central point. Although at first glance they
seem similar, the fact is that the central point is not needed for the characteristic values of the
interpolation, just the extremes. These extremes will then be rotated to generate 4 additional points
to create the surface.

The following table shows the rotation of each point:

Run   Original X1   Original X2   Rotated X1    Rotated X2
1        -1            -1          1.41421356    0
2         1            -1          0            -1.41421356
3        -1             1         -1.41421356    0
4         1             1          0             1.41421356
5         0             0          0             0
6         0             0          0             0
7         0             0          0             0

The rotated points are beyond the original experiment space, and they are called rotational or "star"
points.

The justification for using a response surface instead of a 3^k factorial experiment is that
we have a reasonable doubt that there is a second or higher order relationship between
variables.

We now have to build the run table in order to create the star points:

Run   Codified X1   Codified X2   Original X1   Original X2
1        -1            -1          140           220
2         1            -1          180           220
3        -1             1          140           280
4         1             1          180           280
5         0             0          160           250
6         0             0          160           250
7         0             0          160           250
8        √2             0          188.284271    250
9         0           -√2          160           207.573593
10      -√2             0          131.715729    250
11        0            √2          160           292.426407
12        0             0          160           250
13        0             0          160           250
14        0             0          160           250

The way to obtain the star points is this equation:

Star Point = Center ± √2 · (Extreme₊ − Extreme₋) / 2
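
A quick sketch checking the star points against the run table (RA centered at 160, CE at 250):

160 + c(-1, 1) * sqrt(2) * (180 - 140) / 2   # 131.7157 and 188.2843
250 + c(-1, 1) * sqrt(2) * (280 - 220) / 2   # 207.5736 and 292.4264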

This type of design will always be equidistant, since the extremes define the space and the center will
always be at the center, unlike 3^k designs, which can be non-equidistant.

The data interpolation uses the codified values, not the original values as they can be asymmetric.

The R code is as follows:

library (rsm)
trsm <- ccd(basis = 2, randomize = F,n0 = c(3,3),
coding = list(x1~(RA-163)/15, x2~(CE-250)/35))

The “basis” parameter refers to the number of factorial design used.

The “n0” parameter refers to the levels of treatment and how many center points to generate.

The “coding” parameter is the codification and tells R what is the center point is in terms of real
values and the ±1 distance between maximum and bottom values. It also names the control variables.

We can also input the results of the experiment and create our first model and response surface plot:

Y7 <- c(199, 370, 292, 549, 356, 352, 357, 195, 486, 258, 450, 364, 369, 366)
trsm <- data.frame(trsm,Y7)
trsm <- coded.data(trsm, x1~(ra-163)/15,
x2~(ce-250)/35)
trsm

The following code produces our first proposal of the model:

first <- rsm(Y7~SO(x1,x2),data = trsm)

The “rsm” function allows us to have explicit higher order relationships of the model, instead of the
k k
linear models we used before in the 2 and 3 experiments. In this case, SO means to consider all
Second Order, First Order and interrelationships between variables.

We now analyze the impact of each variable:

summary(first)
contour (first, ~x1+x2, image = T)

rsm(formula = Y7 ~ SO(x1, x2), data = trsm)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 360.6667 2.5346 142.2957 6.654e-15 ***
x1 104.9420 2.1951 47.8084 4.052e-11 ***
x2 67.9411 2.1951 30.9519 1.290e-09 ***
x1:x2 21.5000 3.1043 6.9259 0.0001213 ***
x1^2 -8.7708 2.2847 -3.8390 0.0049544 **
x2^2 -2.0208 2.2847 -0.8845 0.4022195
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.9976, Adjusted R-squared: 0.9961
F-statistic: 661.3 on 5 and 8 DF, p-value: 3.053e-10

We notice that all factors are important except the x2^2 term. As we saw in the response surface, the
lines along x2 (the Cup Elevation) are straight, meaning it is a linear relationship.

Based on these results we can create a better model using only the important factors.

second <- rsm(Y7~FO(x1,x2)+TWI(x1,x2)+PQ(x1), trsm)
summary(second)

rsm(formula = Y7 ~ FO(x1, x2) + TWI(x1, x2) + PQ(x1), data = trsm)

In the code, FO() means a First Order relationship of the factors, TWI() means the TWo-way
Interactions between factors, and PQ() means the Pure Quadratic (squared) relationship of factors.

Estimate Std. Error t value Pr(>|t|)


(Intercept) 359.4231 2.0833 172.5272 < 2.2e-16 ***
x1 104.9420 2.1684 48.3972 3.441e-12 ***
x2 67.9411 2.1684 31.3331 1.685e-10 ***
x1:x2 21.5000 3.0665 7.0112 6.247e-05 ***
x1^2 -8.6154 2.2502 -3.8287 0.004036 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.9974, Adjusted R-squared: 0.9962


F-statistic: 847 on 4 and 9 DF, p-value: 1.392e-11

Our model now has a better R-squared value, indicating a strong correlation. When the Multiple and
Adjusted R-squared values are equal, it means that the model does not have any non-significant variable.

Response: Y7
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2) 2 125031 62515 1662.0248 2.763e-12
TWI(x1, x2) 1 1849 1849 49.1573 6.247e-05
PQ(x1) 1 551 551 14.6591 0.004036
Residuals 9 339 38
Lack of fit 4 119 30 0.6793 0.635323
Pure error 5 219 44

The “Lack of fit” line code means the difference between the calculated values by the model
and the experimental data. A high Pr(>F) value means that our model is good, as we are
rejecting the null hypothesis that the model has a lack of fit. If the p-value was lower than
0.05 we should discard the model as it has a lack of fit with the data.

Choosing the right variables

One needs practical experience of the process in order to see which variables have the most
impact, parametrize their ranges and use past experiments to refine them. With that we can
choose the center points, star points, etc.

The Paper Helicopter Central Composite Design Experiment

Prof. Huerta assigned us to perform a 2^k Central Composite Design. A needed variable was the wing
angle; he recommended that the central point be the marked angle, with one angle below and one
above it.

The other variable can be either the wing or body length.

I performed the experiment along with my classmate Andrea on October 21, and the results can be
seen in Appendix B, Assignment 4.

The experiment proposal

As a final assignment, Prof. Huerta requested that we form a team to perform an experiment. Andrea,
Adolfo and I formed the team, as a 3-member group was needed.

The experiment was to be presented next week; it was actually presented on October 29th.

We had to present a brief slideshow where we explained the concept of our experiment, the response
variable, the control variables, the treatment levels and the controls. Prof. Huerta suggested that
these treatments be quantitative so as to allow for a Response Surface.

The presentation can be seen on Appendix D.

CLASS 9 – 20211022
There was no class on this day. The class was moved to October 29th.

CLASS 10 – 20211029

Response Surfaces

The response surface methodology (RSM) is a type of experimental design that allows obtaining
information to interpolate the results and calculate a surface that approximates the response values
over a range. RSM is applied when there is evidence, or reasonable doubt, that there are significant
second order relationships.

We refer to second order relationships as quadratic; they are often seen in 3^k factorial experiments.

These experiments are often more expensive than a factorial one.

The R code output below is a linear regression, and it differs from the ANOVA in that it has the
estimated coefficients of the model, like β0 and the other βs for each factor candidate (x1, x2,
x1:x2, x1^2, x2^2). Note that x2^2 is not a significant factor, as its Pr(>|t|) value is above 0.05.

first <- rsm(Y7~SO(x1,x2),data = trsm)

rsm(formula = Y7 ~ SO(x1, x2), data = trsm)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 360.6667 2.5346 142.2957 6.654e-15 ***
x1 104.9420 2.1951 47.8084 4.052e-11 ***
x2 67.9411 2.1951 30.9519 1.290e-09 ***
x1:x2 21.5000 3.1043 6.9259 0.0001213 ***
x1^2 -8.7708 2.2847 -3.8390 0.0049544 **
x2^2 -2.0208 2.2847 -0.8845 0.4022195
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.9976, Adjusted R-squared: 0.9961


F-statistic: 661.3 on 5 and 8 DF, p-value: 3.053e-10

What happens when a 3^k factorial experiment marks a non-linear effect but the response surface
experiment does not?

It could happen because the geometry of the response surface didn't cover the curvature of the effect,
although that would be a rare mistake for a seasoned experimenter, who would choose the geometric
center in the area previously investigated.

It could also happen if the coefficients of the squared factors were less than one, as squaring reduces
the value further. This happens when one chooses the geometric center near a stationary point, where
the response is flatter around it.

The third explanation is that the experiment had a high amount of variation: the experiment control
of the response surface was deficient and the variation clouded the results.

The low p-values shown for the first factors clearly state that they had a major impact on the response.
P-values are therefore a first indicator of importance. High coefficient values with a high p-value
mean that they do not really have an important effect on the response.

Another important parameter for analysis is the “Pure error” value in the ANOVA table summary:

summary(first)

Analysis of Variance Table

Response: Y7
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2) 2 125031 62515 1621.8342 3.664e-11
TWI(x1, x2) 1 1849 1849 47.9686 0.0001213
PQ(x1, x2) 2 582 291 7.5435 0.0144177
Residuals 8 308 39
Lack of fit 3 89 30 0.6766 0.6027711
Pure error 5 219 44

The mean square of the "Pure error" is 44 squared units in the response magnitude; taking the root of
that value gives the weighted variation of the experiment, or the experiment error. In this case the
experiment error is 6.63 (√44 ≈ 6.63).

Referring back to the first code, on the "Estimate" column one can see that the coefficients marked
with *** have an absolute value higher than 6.63, because their effects are larger than the experiment
error/natural variation. Even the -8.7708 of x1^2 qualifies, because it counts as an absolute value.

What does lack of fit mean and how do you interpret that line?

Analysis of Variance Table

Response: Y7
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2) 2 125031 62515 1621.8342 3.664e-11
TWI(x1, x2) 1 1849 1849 47.9686 0.0001213
PQ(x1, x2) 2 582 291 7.5435 0.0144177
Residuals 8 308 39
Lack of fit 3 89 30 0.6766 0.6027711
Pure error 5 219 44

The null hypothesis of the lack-of-fit test in a response surface ANOVA is that the model has no lack
of fit. The value of 0.6027711 is higher than the 0.05 threshold, therefore we fail to reject the null
hypothesis, and the model has no significant lack of fit for the data.

If one had a low lack-of-fit p-value, it would mean the model has a lack of fit with the data and is
therefore a bad model to represent the observed phenomenon. One should repeat the experiment, because
the results would not be statistically reliable.

Stationary points

The R code output gives the stationary point of the response surface, both coded and in its original units:

Stationary point of response surface:
        x1        x2
 -4.816311 -8.810611

Stationary point in original units:
       ra        ce
 90.75534 -58.37138

A stationary point is where the function/response does not change: it neither increases nor decreases.
One must take caution that the stationary points of the surface might land at unrealistic quantities,
as they result from an interpolation. One can use them as reference points, but they might be
functionally useless.

Eigenvalues and eigenvectors

Finally, the R code outputs the eigenvalues and eigenvectors of the response surface:

Eigenanalysis:
eigen() decomposition
$values
[1] 5.871514 -16.663181

$vectors
[,1] [,2]
x1 -0.5918031 -0.8060825
x2 -0.8060825 0.5918031

The eigenvalues are used as the origin point to trace constant increment/decrement lines; the
eigenvectors are the directions of those lines.

They are useful to plot the course for optimization, reaching the highest possible response with the
factors available.

Model Improvement

We want to improve the model by getting rid of the factors with high p-values (the non-significant ones):

best <- rsm(Y7~FO(x1,x2)+TWI(x1,x2)+PQ(x1),trsm)

In the code, FO() means a First Order relationship of the factors, TWI() means the TWo-way
Interactions between factors, and PQ() means the Pure Quadratic (squared) relationship of factors.

The result of this model is shown below:

summary(best)
Call:
rsm(formula = Y7 ~ FO(x1, x2) + TWI(x1, x2) + PQ(x1), data = trsm)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 359.4231 2.0833 172.5272 < 2.2e-16 ***
x1 104.9420 2.1684 48.3972 3.441e-12 ***
x2 67.9411 2.1684 31.3331 1.685e-10 ***
x1:x2 21.5000 3.0665 7.0112 6.247e-05 ***
x1^2 -8.6154 2.2502 -3.8287 0.004036 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.9974, Adjusted R-squared: 0.9962


F-statistic: 847 on 4 and 9 DF, p-value: 1.392e-11

Analysis of Variance Table

Response: Y7
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2) 2 125031 62515 1662.0248 2.763e-12
TWI(x1, x2) 1 1849 1849 49.1573 6.247e-05
PQ(x1) 1 551 551 14.6591 0.004036
Residuals 9 339 38
Lack of fit 4 119 30 0.6793 0.635323
Pure error 5 219 44

The improvements are the following:

1. The β of each factor remained more or less the same, except that the intercept and x1^2 reduced
   their values. However, they are still above the variation error, so they are still significant.
2. The p-values also decreased, making the factors more statistically significant.
3. The R-squared values both modestly increased, meaning the model has a high correlation with the data.
4. The model's p-value also decreased, making it more statistically significant.
5. The residuals (modestly) decreased.

The contour plot

The R code for contour plots is:

contour (best, ~x1+x2, image = T)

Contours are useful for forecasting the response at factor settings that were not run.

The 3D contour plot

The contour plot can be enhanced by adding the 3D surface into the plot space:

persp (best, ~x1+x2, theta = 15, contours = "col", col = rainbow(40))

The “theta” parameter changes the rotation of the plot, “contours” is activated using “col” and the
surface can be painted with the rainbow(#) function, where the number is the number of colors
applied.

Central Composite Designs

In statistics, a central composite design is an experimental design, useful in response surface
methodology, for building a second order (quadratic) model for the response variable without
needing to use a complete three-level factorial experiment.

The following table is a summary of the different types of designs:

Central Composite Designs (NIST, 2013):

Circumscribed (CCC): CCC designs are the original form of the central composite design. The star
points are at some distance from the center, based on the properties desired for the design and the
number of factors in the design. The star points establish new extremes for the low and high settings
for all factors. These designs have circular, spherical, or hyperspherical symmetry and require 5
levels for each factor. Augmenting an existing factorial or resolution V fractional factorial design
with star points can produce this design.

Face Centered (CCF): In this design the star points are at the center of each face of the factorial
space, so α = ±1. This variety requires 3 levels of each factor. Augmenting an existing factorial or
resolution V design with appropriate star points can also produce this design.

Inscribed (CCI): For those situations in which the limits specified for the factor settings are truly
limits, the CCI design uses the factor settings as the star points and creates a factorial or
fractional factorial design within those limits (in other words, a CCI design is a scaled-down CCC
design with each factor level of the CCC design divided by α to generate the CCI design). This design
also requires 5 levels of each factor.

Graphically, these three composite designs look like this:

The Circumscribed circle extends beyond the borders of the original testing space.

The Face Centered just reaches the borders of the original testing space.

The Inscribed reduces the testing space to have a circle but inside the testing space.

In R the code for each type is:

Circumscribed

library (rsm)
reactor <- ccd(basis = 2, randomize = F, n0 = c(3,3),
coding = list(x1~(temp-75)/25,x2~(conc-37.5)/12.5))
reactor

Face Centered

library (rsm)
reactor.f <- ccd(basis = 2, randomize = F, n0 = c(3,3), alpha="faces",
coding = list(x1~(temp-75)/25,x2~(conc-37.5)/12.5))
reactor.f

Inscribed

library (rsm)
reactor.i <- ccd(basis = 2, randomize = F, n0 = c(3,3), inscribed = T,
coding = list(x1~(temp-75)/25,x2~(conc-37.5)/12.5))
reactor.i

The output can be compared in the following table:

                            Circumscribed      Face Centered     Inscribed
Row  run.order  std.order   temp     conc      temp     conc     temp     conc     Block
1    1          1           50       25        50       25       57.32    28.66    1
2    2          2           100      25        100      25       92.68    28.66    1
3    3          3           50       50        50       50       57.32    46.34    1
4    4          4           100      50        100      50       92.68    46.34    1
5    5          5           75       37.5      75       37.5     75       37.5     1
6    6          6           75       37.5      75       37.5     75       37.5     1
7    7          7           75       37.5      75       37.5     75       37.5     1
8    1          1           39.64    37.5      50       37.5     50       37.5     2
9    2          2           110.36   37.5      100      37.5     100      37.5     2
10   3          3           75       19.82     75       25       75       25       2
11   4          4           75       55.18     75       50       75       50       2
12   5          5           75       37.5      75       37.5     75       37.5     2
13   6          6           75       37.5      75       37.5     75       37.5     2
14   7          7           75       37.5      75       37.5     75       37.5     2

The Paper Helicopter Composite Design

As homework, Prof. Huerta requested we run a Response Surface experiment with the JPL paper
helicopter model, using a 2^k treatment of the wing length and its incidence angle.

The results of this assignment can be seen in the section titled Assignment 5 in Appendix B.

CLASS 11 – 20211105

Replies

A classmate had a problem with his CCD results, as he used the means of the replies he did in his
experiment. As mentioned previously, using the means of each reply hides the variance of the process,
and since our analysis is based on the variance, it makes interpretation impossible or meaningless.

The answer was to use the ccd() function from the rsm package with within-block replications:

Diseno_cc2 <- ccd(basis = 2, n0 = 3, randomize = FALSE, wbreps = c(3, 3),
                  coding = list(x1 ~ (LA - 7)/1, x2 ~ (AN - 11)/11))

Comma Separated Value Files in R

CSV is a common data exchange format that is widely supported by consumer, business, and scientific
applications. Among its most common uses is moving tabular data between programs that natively
operate on incompatible (often proprietary or undocumented) formats.

For example, a user may need to transfer information from a database program that stores data in a
proprietary format, to a spreadsheet that uses a completely different format. Most database programs
can export data as CSV and the exported CSV file can then be imported by the spreadsheet program

R has a CSV writing command, useful for exporting tables to Excel spreadsheets with information:

Write.csv (x = ccc, file = “ccc.csv”)

It also has an import to CVS file command, when we have finished writing our data in the Excel
spreadsheet.

Ccr <- read.csv(file = “ccc.csv”, header = T)
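
One detail worth noting (my own addition): the coded-data attributes are lost in the CSV round trip, so after reading the file back the coding can be restored with rsm's as.coded.data, assuming the same coding formulas as above:

library(rsm)
ccr <- read.csv(file = "ccc.csv", header = TRUE)
ccr <- as.coded.data(ccr, x1 ~ (temp - 75)/25, x2 ~ (conc - 37.5)/12.5)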

Lack of fit

One classmate showed his results: an adjusted R-squared value of 0.9331 and a p-value of 2.463×10⁻⁵, making the model statistically significant; however, its lack-of-fit probability value was 0.0249518. This means the model could be better: the lack of fit would not be significant if that value were 0.05 or higher.

Another classmate asked what the lack-of-fit parameter measures. Prof. Huerta answered that it measures the difference between the experiment's data points and the results calculated with the model.

As the first classmate showed his Central Composite Face Centered experiment results, he asked: had he performed more replicates in the experiment, would the model be more accurate? Prof. Huerta answered that yes, the model would be more accurate; the coefficients might be lower, but more realistic, and the lack-of-fit parameter would decrease.

A third classmate asked if lack of fit is a more powerful parameter than R-squared to determine whether the experiment was correct. Prof. Huerta answered that no parameter by itself tells the whole story; each tells a capacity of the model to understand a phenomenon.
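
As a sketch of where this statistic appears in practice (my own addition; the data frame d, the coded variables x1 and x2, and the response y are hypothetical), the summary of a second-order rsm fit reports a lack-of-fit row in its ANOVA table:

library(rsm)
model <- rsm(y ~ SO(x1, x2), data = d)  # d, x1, x2, y are placeholders
summary(model)  # the ANOVA table at the end includes a "lack of fit" row with
                # its F test; a probability above 0.05 suggests no significant lack of fit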

Statistical Models

The last question segued into a lecture about modelling. Prof. Huerta showed us slides with the following content:

We can identify 3 levels of understanding of a physical phenomenon in an engineering system we can modify:

 Characterization, where we understand its factors and how the critical factors affect a desired output.
 Parametrization, where we select and manipulate values of the control factors to obtain a desired output.
 Optimization, where we finely tune the values in order to reach the desired output.

Our class is focused on the “Factorial” and “Superficie Resp” (Response Surface) methods.

The Central Composite Design is applied at the optimization level of understanding.

There are the following types of models:

 Algorithmic models: A model based on a series of defined or iterative steps.
 Mathematical models: A model based on mathematical relationships, like equations, independent of the application. This kind of model describes all other models in a mathematical language.
 Stochastic models: A mathematical model based on probability distributions and random variables.
 Empirical models: A model based on ordered data obtained from testing and systematized experience. If an empirical model is repeated enough times, it can be the basis for a mechanistic model.
 Mechanistic models: A model that has had enough tests to be generally applied; it doesn't necessarily need random elements, but enough data to show repeating patterns.

A statistical model is an empirical model with a stochastic (random) element, expressed mathematically, that can be the basis for a mechanistic model.

Another important characteristic of a mechanistic model is its parsimony: any instance of a phenomenon should be interpreted at its simplest, most immediate level. The best model is a simple one, as it is more reliable in terms of durability, predictability and maintenance.

What can a model do?

 Understand: Answer a research question.


 Predict: Estimate an output.
 Control: Eliminate special causes of variation (conditions or anomalies).
 Optimize: Parametrize and improve the process.

CLASS 12 – 20211112

This class was dedicated to resolving some issues regarding our experiments.

Uncertainty, Accuracy and Precision

These concepts were seen in Class 4, but now we see them applied to a real example.

One team presented the results from their experiment, which consists of measuring the temperature of a water solution; the solute was NaCl salt.

They presented a digital scale that was part of the experiment's measurement devices. The scale had a maximum capacity of 2 kg, a minimum capacity of 20 g and an error of 1 g. Prof. Huerta remarked that every measurement process has an uncertainty, the range in which the true value lies. In the case of the digital scale, a measurement of 1000 g might be ±1 g off the true value.

The uncertainty is linked to the physical measurement method and to the device's manufacturing, materials and reliability.
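
As an illustration of my own (not from the class): under the GUM, a manufacturer's ±1 g specification is commonly treated as a rectangular (uniform) distribution, giving a Type B standard uncertainty of a/√3:

a <- 1             # half-width of the tolerance, in grams
u <- a / sqrt(3)   # standard uncertainty, about 0.577 g
u / 1000           # relative uncertainty for a 1000 g reading, about 0.06 %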

Prof. Huerta recommended that we read Kirkup and Frenkel's book, “An Introduction to Uncertainty in Measurement: Using the GUM (Guide to the Expression of Uncertainty in Measurement)”, for more information on how to determine the uncertainty.

The Amazon.com summary of that book is as follows:

Measurement shapes scientific theories, characterizes improvements in manufacturing


processes and promotes efficient commerce. In concert with measurement is uncertainty, and
students in science and engineering need to identify and quantify uncertainties in the
measurements they make. This book introduces measurement and uncertainty to second and
third year students of science and engineering. Its approach relies on the internationally
recognized and recommended guidelines for calculating and expressing uncertainty (known
by the acronym GUM). The statistics underpinning the methods are considered and worked
examples and exercises are spread throughout the text. Detailed case studies based on typical
undergraduate experiments are included to reinforce the principles described in the book. This

guide is also useful to professionals in industry who are expected to know the contemporary
methods in this increasingly important area. Additional online resources are available to
support the book at www.cambridge.org/9780521605793.

Prof. Huerta recommended that we look up these three parameters for each measurement device we use in the experiment, as they will be a source of error in our measurements and we may confuse them with the natural variation of the process.

Some devices are more susceptible to uncertainty than others (calipers compared with weighing scales, for example), mostly due to their quality.

Linearity

The team then showed a thermometer and its manual. The manual had a mistranslation: “Uncertainty” was rendered as “Precisión” (accuracy), even though a ± sign is shown with the unit, which denotes uncertainty.

Prof. Huerta pointed out that this device has a segmented linearity, as its uncertainty changes between temperature ranges. Linearity is important as it is related to accuracy.

Classification as a measurement

The team then presented the water used in the experiment, showing two brands of bottled water.

Here Prof. Huerta remarked that one way of measuring things is by classifying them; in this case the water brand is a categorical, nominal or qualitative variable.

By using only one brand of water for each treatment level and taking care to prevent any cross-contamination of the water, the team makes sure that each is a separate treatment level, thus eliminating error in their experiment and making a case for repeatability.

CENAM's GUM

CENAM is the government institution that serves as the national reference for metrology. It published a short guide called “GUÍA PARA ESTIMAR LA INCERTIDUMBRE DE LA MEDICIÓN” (“Guide for Estimating Measurement Uncertainty”), which is based on the book “Guide to the Expression of Uncertainty in Measurement” and presents CENAM's approach to uncertainty, based on its experience as a metrology laboratory.

Non-linear effects

The team showed us the results of their experiment in a Factor vs. Mean plot.

It clearly shows that the factors have a non-linear effect, since both the “solute” and “water” means are off the center line drawn in the plot, which represents the value of the β0 parameter of the linear model; this means that both factors exhibit curvature.

ANOVA Results

The team then presented their ANOVA results and their linear model in R. The ANOVA stated that they had a perfect fit, so the recommendation was to add more replicates to the experiment, as there was no data on the natural variation of the process.

Prof. Huerta suggested that they use a model based on the water factor; the results showed that while that factor has a high mean-square value, its F value and probability are not statistically significant, meaning that the variation is absorbing the effects of both treatments involved in the experiment.

Another option to get significant results is to increase the resolution, accuracy and precision of the measurement of the dependent variable, which is the temperature; however, this could mean a higher cost for the experiment, as better devices are needed.

One of the teammates also mentioned that each water sample heated differently and did not boil linearly. Prof. Huerta suggested that establishing a shorter or a different measurement range might show the effect of each treatment better.

Another suggestion was to increase the contrast between the treatments, thereby magnifying the difference in each result.

I mentioned that there might be residual heat left in the pot between test runs, but the team stated that after each run they rinsed it with cold water to return it to ambient temperature.

Another “noise” factor that might be involved in the experiment is the atmospheric pressure of the site, as boiling water behaves differently at higher and lower altitudes.

Theory

“Theory is simply the best explanation we have for a phenomenon at that moment” – Prof. Eric L. Huerta

I related to that sentiment, as in my field there is still an open question about how deep foundations work: the current theory, which uses the shear strength of the surrounding soil, has not matched full-scale test results. New tests are providing more data points in order to find a new analytical method to measure the bearing capacity of deep foundations.

Central Composite Design Question

I asked Prof. Huerta whether he agreed that our experiment should use the continuous variables (angle and mass) for the Circumscribed type, since the other variables cannot take fractional values (ball type, frame height, surface type).

Prof. Huerta said he hoped that the variables that turn out significant in the factorial experiment would be the ones used in the CCC experiment. I mentioned that both variables used in the CCC experiment are indeed significant, as they have a direct relationship with the potential energy of the hammer.

Remaining tasks

Prof. Huerta asked if we had registered for the Postgraduate Colloquium, as he was going to receive a list with our names for consideration. I signed up on November 4th.

He also asked whether we had finished our Teacher Evaluation for this semester (https://comunidad.uaq.mx:8011/EvaluacionDocente/login.jsp). I finished my evaluation on November 23rd.

Finally, he asked about our English test. I took my test on November 4th and got an 8+, the grade necessary to comply with the requirement. I asked if I needed to present the certification letter to UAQ; he confirmed that I did.

Important pending dates

The second exam will be on November 26th.

The presentation of our experiment results will be on December 3rd.

The individual and group experiment reports are due on December 4th.

CLASS 13 – 20211119

We didn't have class, as we had the 15th Postgraduate Colloquium of the Engineering Faculty.

As part of our scholarship we needed to be part of the Organization Staff. I, along with classmate and generation member Andrea Medina, was assigned the ROBOUAQ block of the colloquium. Since the COVID restrictions and schedule changes prevented us from being present on the UAQ Campus, we presented the two major conferences on the first day of the Colloquium (November 17th).

I hosted the lecture “Perspectiva en la Aplicación Tecnológica de Biosensores Enzimáticos” by Dr. Ricardo A. Escalona Villalpando from the Centro de Investigación y Desarrollo Tecnológico en Electroquímica in Querétaro.

On Thursday and Friday I worked with my class team on the 3^k factorial and Response Surface Experiment Assignment.

Our prototype was a “golf” device: a hammer placed on a frame that pushes a ball along a rail; we measured the distance traveled based on the hammer height, angle and mass.

The development and results of the experiment can be read in Appendix E of this report.

EXAM 2 – 20211126

The exam consisted of 2 problems related to the concepts of the ANOVA test, linear model construction, design of experiments, the RSM test, variances and the null hypothesis.

Problem 1

The plant manager intends to improve the productivity of a process. The company's Black Belt started a study, but he was promoted to another position and did not finish it, so we are asked to analyze the available information (Table 1) and draw conclusions from it. It is known that the possible interactions of temperature with pressure and with cycle time were being evaluated. Based on this information, indicate:

1. The hypotheses that can be tested;
2. The complete ANOVA tables;
3. The significant factors;
4. The analytical evaluation of the ANOVA assumptions, x ∈ N(µ, σ);
5. The linear models and their evaluation according to the corresponding criteria;
6. A report of the analysis, including a flow diagram, the R code, the results and the conclusions.

Table 1: Efficiency percentage of Hitachi-brand operation centers

Cycle time   T1 at P1            T2 at P1            T1 at P2            T2 at P2            T1 at P3            T2 at P3
Short        61,74,74,85,69,69   66,72,72,66,67,67   67,67,67,71,79,71   77,78,79,80,79,78   75,71,75,69,68,69   74,74,74,74,77,74
Long         56,55,54,53,54,55   51,52,52,52,53,53   44,43,43,48,48,49   59,59,60,66,67,67   50,51,51,53,52,52   56,57,57,59,60,69

Problem 2

(40 points) An automotive spark plug factory performs wear tests. Five production samples are taken at random every month and subjected to an accelerated wear test, whose results are reported in µm/1000 hr. It has been observed that there seems to be a relationship between the wear and the cathode diameter (E1). It is also known that the total thickness of the anode blades (E2) can affect the wear. The design tolerances of both electrodes are 1 % and the design has been shown to be robust up to 2 %. Design an experiment to obtain a response surface of the wear as a function of the electrode dimensions. Consider 2 center points for the original design and 3 center points to explore outside the design tolerance as far as possible. Refer to the Figure.

Indicate:

1. The table of factors and levels.

2. The R code to generate the experimental table.

3. The experiment table.

4. A complete report, including a flow diagram, the R code and the rationale for the selection of the factors and levels.
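
My full solution is in Appendix F; the following is only a minimal sketch of the kind of call involved (my own, assuming coded units where ±1 corresponds to the 1 % tolerance, so star points at α = 2 reach the 2 % robustness limit):

library(rsm)
# 2 center points in the factorial (cube) block, 3 in the axial (star) block,
# with star points at alpha = 2 to explore up to the 2 % limit
bujias <- ccd(basis = 2, n0 = c(2, 3), alpha = 2, randomize = FALSE)
bujias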

I had to write a calculation report for this exam; it can be found in Appendix F. I delivered the exam late, so the highest possible grade is 8.00/10.00.

REFERENCES

Alexander Moreno, “Linear Regression Plots: Fitted vs Residuals”


https://boostedml.com/2019/03/linear-regression-plots-fitted-vs-residuals.html (Accessed 21
September 2021).

Alexander Moreno, “The Scale Location Plot: Interpretation in R”


https://boostedml.com/2019/03/linear-regression-plots-scale-location-plot.html (Accessed 21
September 2021).

Arsham, Dr. Hossein & Lovric, Miodrag. (2011). Bartlett's Test. International Encyclopedia of
Statistical Science. 2. 20-23. 10.1007/978-3-642-04898-2_132.

Aravind Hebbali, “Measures of Influence” olsrr: Tools for Building OLS Regression Models.
Comprehensive R Archive Network https://cran.r-
project.org/web/packages/olsrr/vignettes/influence_measures.html

Clay Ford. “Understanding Diagnostic Plots for Linear Regression Analysis” University of Virginia
Library Research Data Services + Sciences https://data.library.virginia.edu/diagnostic-plots/
(Accessed 21 September 2021).

Clay Ford. “Understanding Q-Q Plots” University of Virginia Library Research Data Services +
Sciences https://data.library.virginia.edu/understanding-q-q-plots/ (Accessed 21 September 2021).

Data Science Association “Data Science Code of Professional Conduct”


https://www.datascienceassn.org/code-of-conduct.html (accessed 12 September 2021).

David Smith, “What does Microsoft do with R?” Revolutions (Revolution Analytics blog), 13 February 2018. https://blog.revolutionanalytics.com/2018/02/what-does-microsoft-do-with-r.html (accessed 12 September 2021).

Eric Marsden, “Black swans, or the limits of statistical modelling” Risk Engineering. https://risk-engineering.org/static/PDF/slides-black-swans.pdf (accessed September 9, 2021).

Glen S. "Scheffe Test: Definition, Examples, Calculating (Step by Step)" From StatisticsHowTo.com:
Elementary Statistics for the rest of us! https://www.statisticshowto.com/scheffe-test/ (Accessed 12
September 2021).

Gutiérrez P., H et al. (2008). Análisis y Diseño de Experimentos. Tercera edición. McGraw-
Hill Interamericana.

Ian Sample “Harvard University says it can't afford journal publishers' prices” The Guardian 24 April
2012. https://www.theguardian.com/science/2012/apr/24/harvard-university-journal-publishers-
prices (accessed 12 September 2021).

Juan Bosco Mendoza Vega, “R para principiantes” https://bookdown.org/jboscomendoza/r-


principiantes4/ (accessed 12 September 2021).

Lumen Learning, “Accuracy, Precision, and Significant Figures”
https://courses.lumenlearning.com/physics/chapter/1-3-accuracy-precision-and-significant-figures/
(accessed 21 September 2021).

MIT Libraries. “MIT, guided by open access principles, ends Elsevier negotiations” MIT News, 11
June, 2020, https://news.mit.edu/2020/guided-by-open-access-principles-mit-ends-elsevier-
negotiations-0611 (accessed 12 September 2021).

NIST/SEMATECH e-Handbook of Statistical Methods,


https://www.itl.nist.gov/div898/handbook/pri/section3/pri3361.htm (accessed November 24, 2021)

Patterson, M. and Williams, D. (1998). Paradigms and problems: The practice of social science in
natural resource management. Society and Natural Resources, 11, 3: 279-295. (DOI):
10.1080/08941929809381080

Q Wiki contributors, “Regression - Diagnostic - Plot - Cook's Distance vs Leverage” Q Software,


https://wiki.q-researchsoftware.com/index.php?title=Regression_-_Diagnostic_-_Plot_-
_Cook%27s_Distance_vs_Leverage&oldid=51097 (accessed September 21, 2021).

Routledge, Richard. "Law of large numbers". Encyclopedia Britannica, 12 Oct. 2016,


https://www.britannica.com/science/law-of-large-numbers (accessed 11 September 2021).

StatEase, “Diagnostics Plots”


https://www.statease.com/docs/v11/contents/analysis/diagnostics/diagnostics-plots/ (Accessed 21
September 2021).

Statology, “How to Identify Influential Data Points Using Cook’s Distance”


https://www.statology.org/how-to-identify-influential-data-points-using-cooks-distance/ (Accessed
21 September 2021).

UCLA: Statistical Consulting Group “FAQ: What are the Differences Between One-Tailed and Two-
Tailed Tests?” UCLA Institute for Digital Research and Education.
https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faq-what-are-the-differences-between-one-
tailed-and-two-tailed-tests/ (accessed September 9, 2021)

Wikipedia contributors, "R (programming language)," Wikipedia, The Free Encyclopedia,


https://en.wikipedia.org/w/index.php?title=R_(programming_language)&oldid=1043075835
(accessed September 9, 2021).

Wikipedia contributors, "RStudio," Wikipedia, The Free Encyclopedia,


https://en.wikipedia.org/w/index.php?title=RStudio&oldid=1025289669 (accessed September 9,
2021).

APPENDIX A: Assignments from VirtualUAQ

Unit 1: Introduction

Assignment 1: Academic Integrity

We needed to write, in our own words, about the importance of academic integrity and provide 3 examples. My short essay was this:

1. A lack of responsibility in our studies can result in misunderstanding the core concepts of our classes, leaving gaps in our knowledge of the subject, introducing errors into our procedures and producing questionable data. Combined with wrong assumptions caused by the lack of study, we might often read the data and draw incorrect conclusions. As the character of Ian Malcolm in Spielberg's "Jurassic Park" says: "I'll tell you the problem with the scientific power that you're using here. It didn't require any discipline to attain it."

2. We also have a responsibility to the scientific community and to the public to uphold the truth in our research, even when political and social stances might be in opposition to it. Without valid data, all further knowledge based on it will rest on a weak foundation and will stray further from the truth, often with unpredictable and terrible results. As the character of Valery Legasov in HBO's Chernobyl states: "Every lie we tell incurs a debt to the truth. Sooner or later, that debt is paid."

3. The previous two can result in a loss of personal or institutional reputation, where our words and works will no longer be referenced in the academic field due to distrust of our results. The lack of support will result in loss of funding (scholarships, government grants, etc.) and will end any further career in research. A real-life example is the case of Marc Hauser, a scientist of psychology, organismic and evolutionary biology, and biological anthropology. Harvard University found him guilty of 8 cases of misconduct in his research experiments (fabricating and falsifying data), removed him from his post as researcher and barred him from teaching, effectively ending his career. https://www.thenation.com/article/archive/disgrace-marc-hauser/

Assignment 2: Upload your signed honor code declaration.

I signed and uploaded my signed honor code declaration on September 9th, 2021. A copy of this
declaration is presented below.

Assignment 3: Install R and RStudio

We installed R and RStudio in Class 2, on August 13th, with the guidance of Prof. Huerta.

Assignment 4: Training Example

Example

A firm wishes to compare four programs for training workers to perform a certain manual
task. Twenty new employees are randomly assigned to the training programs, with 5 in each
program. At the end of the training period, a test is conducted to see how quickly trainees
can perform the task. The number of times the task is performed per minute is recorded for
each trainee, with the following results:

Obs.   Prog 1   Prog 2   Prog 3   Prog 4
1      9        10       12       9
2      12       6        14       8
3      11       9        11       11
4      14       9        13       7
5      13       10       11       8

> HIC <- c(9,12,11,14,13,10,6,9,9,10,12,14,11,13,11,9,8,11,7,8)


> Grupos <- as.factor(c(rep('Prog 1',5),rep('Prog 2',5),rep('Prog 3',5),rep('Prog
4',5)))
> data.set <- data.frame(HIC,Grupos)
> fit <- lm(HIC~Grupos, data.set)
> anova (fit)
Analysis of Variance Table

Response: HIC
Df Sum Sq Mean Sq F value Pr(>F)
Grupos 3 54.95 18.317 7.0449 0.003113 **
Residuals 16 41.60 2.600
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>
> TukeyHSD(aov(fit))
Tukey multiple comparisons of means
95% family-wise confidence level

Fit: aov(formula = fit)


$Grupos
diff lwr upr p adj
Prog 2-Prog 1 -3.0 -5.9176792 -0.08232082 0.0427982
Prog 3-Prog 1 0.4 -2.5176792 3.31767918 0.9788127
Prog 4-Prog 1 -3.2 -6.1176792 -0.28232082 0.0291638
Prog 3-Prog 2 3.4 0.4823208 6.31767918 0.0197459
Prog 4-Prog 2 -0.2 -3.1176792 2.71767918 0.9972140
Prog 4-Prog 3 -3.6 -6.5176792 -0.68232082 0.0133087

According to the ANOVA analysis, the null hypothesis (Ho) that the means of the 4 groups are equal must be rejected. The difference between the group means is statistically significant.

Applying Tukey's test, we can see that the significant differences are in the pairs Prog 1-Prog 2, Prog 1-Prog 4, Prog 2-Prog 3 and Prog 3-Prog 4, with Prog 3 having the highest mean of all.

Assignment 5: TI-2.1 FDCER

1. Explain the five elements of the hypothesis

A hypothesis is a statement that has:

1. Verifiability: It can be verified by means of experiments.


2. The problem variables: It can identify the control factors of an experiment.
3. The relationships between variables: Identifies possible relationships between the
control factors and the output response
4. Theoretical background: It is based on literature review and the field of study
background
5. Previous data: It must have data taken from previous experiments or real-life
phenomena.
2. Why is it important to review the literature before a research project?

Because our hypothesis might have already been answered or solved by previous research.

3. Should the problem be defined before or after reviewing the literature? Justify your answer.

After, as the knowledge gained from reading the literature might refine the question or problem by knowing more about the relationship between the control factors and the output response.

4. Is an expert question the same as a newbie question? Explain.

No, because the expert question has theory and knowledge behind it; it is a more refined question.

5. How are the potential control factors related to statistical significance?

Statistical significance is the value that defines whether a control factor is above or below the "noise" of the system (measurement errors, process variation and experimental errors). If one chooses, for the hypothesis, the wrong control factors, ones that are below the "noise" level of the system, one will infer the wrong conclusions from the results.

Assignment 6: TI-2.2 R and RStudio

Create a project in RStudio.

Check the project's folder structure and describe its content. What files are there?

There are the following files:

"Prueba.Rproj": the working project file.

".RData": stores all the objects used and the values kept in those objects.

".Rhistory": the history of commands entered in the project's console.

APPENDIX B: Assignments from Class

Assignment 1: Read Gutiérrez Chapter 2

Parameters and statistics / Population and sample

Finite population

One in which all the individuals can be measured, giving exact knowledge of its characteristics.

Parameters

Characteristics that, through their numerical value, describe a set of elements or individuals.

Infinite population

One in which the population is so large that it is impossible or cost-prohibitive to measure all the individuals.

Representative sample

A part of a population, adequately selected, that preserves the key aspects of the population.

Statistical inference

Valid statements about the population or process, based on the information contained in the sample.

Statistic

Any function of the sample data that contains no unknown parameters.

Probability distributions

Probability distribution of X

Relates the set of values of X to the probability associated with each of these values.

The probability distributions most used in confidence intervals and hypothesis tests are the normal, Student's t, chi-squared and F distributions.

Figure A.1. The probability distributions most used in inference.

The normal distribution is completely defined by its parameters, which are the mean, μ, and the standard deviation, σ. A normal distribution with μ = 0 and σ = 1 is denoted N(0, 1) and is known as the standard normal distribution.

Both the standard normal and Student's t distributions are symmetric and centered at zero, while the chi-squared and F distributions are skewed and only take positive values. The four distributions are related to one another, since Student's t, chi-squared and F are defined in terms of the standard normal distribution. The parameters that completely define the Student's t, chi-squared and F distributions are called degrees of freedom, and they depend on the sample sizes involved.

The normal and Student's t distributions are used to make inferences about means, while the chi-squared distribution is useful for inferences about variances and the F distribution is used to compare variances.

This is why the F distribution is the most relevant in design of experiments, since the analysis of the variability observed in an experiment is done by comparing variances.

Point and interval estimation

A point estimator of an unknown parameter is a statistic that produces a simple numerical value, used to estimate the value of the unknown parameter.

Three parameters about which inference is frequently desired are:

• The mean μ of the process (population).

• The variance σ² or the standard deviation σ of the process.

• The proportion p of items of interest.

The point estimators (statistics) most recommended to estimate these parameters are, respectively:

• The sample mean, 𝜇̂ = 𝑋̅.

• The sample variance, 𝜎̂² = 𝑆².

• The proportion of defectives in the sample, 𝑝̂ = 𝑥/𝑛, where x is the number of items of interest in a sample of size n.

Confidence interval

The range where the value of a population parameter is estimated to lie.

The length of the confidence interval is a measure of the precision of the estimate, so it is desirable for intervals to be short and have a high confidence level. Intervals become wider as the population variance and the required confidence level increase, and narrower as the sample size increases.
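
As a small illustration of my own, the half-width of a 95 % confidence interval for a mean, t·s/√n, shrinks as the sample size grows:

s <- 2  # hypothetical sample standard deviation
sapply(c(5, 20, 80), function(n) qt(0.975, n - 1) * s / sqrt(n))
# about 2.48, 0.94 and 0.44: larger samples give narrower intervals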

WHAT ARE THE DIFFERENCES BETWEEN ONE-TAILED AND TWO-TAILED TESTS?

When you conduct a test of statistical significance, whether it is from a correlation, an ANOVA, a
regression or some other kind of test, you are given a p-value somewhere in the output. If your test
statistic is symmetrically distributed, you can select one of three alternative hypotheses. Two of these
correspond to one-tailed tests and one corresponds to a two-tailed test. However, the p-value
presented is (almost always) for a two-tailed test. But how do you choose which test? Is the p-value
appropriate for your test? And, if it is not, how can you calculate the correct p-value for your test
given the p-value in your output?

What is a two-tailed test?

First let’s start with the meaning of a two-tailed test. If you are using a significance level of 0.05, a
two-tailed test allots half of your alpha to testing the statistical significance in one direction and half
of your alpha to testing statistical significance in the other direction. This means that .025 is in each
tail of the distribution of your test statistic. When using a two-tailed test, regardless of the direction
of the relationship you hypothesize, you are testing for the possibility of the relationship in both
directions. For example, we may wish to compare the mean of a sample to a given value x using a t-
test. Our null hypothesis is that the mean is equal to x. A two-tailed test will test both if the mean
is significantly greater than x and if the mean is significantly less than x. The mean is considered
significantly different from x if the test statistic is in the top 2.5% or bottom 2.5% of its probability
distribution, resulting in a p-value less than 0.05.

What is a one-tailed test?

Next, let’s discuss the meaning of a one-tailed test. If you are using a significance level of .05, a one-
tailed test allots all of your alpha to testing the statistical significance in the one direction of interest.
This means that .05 is in one tail of the distribution of your test statistic. When using a one-tailed
test, you are testing for the possibility of the relationship in one direction and completely disregarding
the possibility of a relationship in the other direction. Let’s return to our example comparing the
mean of a sample to a given value x using a t-test. Our null hypothesis is that the mean is equal to
x. A one-tailed test will test either if the mean is significantly greater than x or if the mean is
significantly less than x, but not both. Then, depending on the chosen tail, the mean is significantly
greater than or less than x if the test statistic is in the top 5% of its probability distribution or bottom
5% of its probability distribution, resulting in a p-value less than 0.05. The one-tailed test provides
more power to detect an effect in one direction by not testing the effect in the other direction. A
discussion of when this is an appropriate option follows.

When is a one-tailed test appropriate?

Because the one-tailed test provides more power to detect an effect, you may be tempted to use a
one-tailed test whenever you have a hypothesis about the direction of an effect. Before doing so,
consider the consequences of missing an effect in the other direction. Imagine you have developed a
new drug that you believe is an improvement over an existing drug. You wish to maximize your
ability to detect the improvement, so you opt for a one-tailed test. In doing so, you fail to test for the
possibility that the new drug is less effective than the existing drug. The consequences in this example
are extreme, but they illustrate a danger of inappropriate use of a one-tailed test.

So when is a one-tailed test appropriate? If you consider the consequences of missing an effect in the
untested direction and conclude that they are negligible and in no way irresponsible or unethical,
then you can proceed with a one-tailed test. For example, imagine again that you have developed a
new drug. It is cheaper than the existing drug and, you believe, no less effective. In testing this drug,
you are only interested in testing if it is less effective than the existing drug. You do not care if it is
significantly more effective. You only wish to show that it is not less effective. In this scenario, a
one-tailed test would be appropriate.

When is a one-tailed test NOT appropriate?

Choosing a one-tailed test for the sole purpose of attaining significance is not appropriate. Choosing
a one-tailed test after running a two-tailed test that failed to reject the null hypothesis is not
appropriate, no matter how "close" to significant the two-tailed test was. Using statistical tests
inappropriately can lead to invalid results that are not replicable and highly questionable–a steep
price to pay for a significance star in your results table!

Deriving a one-tailed test from two-tailed output

The default among statistical packages performing tests is to report two-tailed p-values. Because the
most commonly used test statistic distributions (standard normal, Student’s t) are symmetric about
zero, most one-tailed p-values can be derived from the two-tailed p-values.

https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faq-what-are-the-differences-between-one-
tailed-and-two-tailed-tests/
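
As a quick sketch of that derivation (my own addition), for a symmetric test statistic:

p_two <- 0.08
p_one <- p_two / 2        # 0.04, if the observed effect is in the hypothesized direction
# p_one <- 1 - p_two / 2  # if the effect is in the opposite direction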

Assignment 2: Do some ANOVA problems

Exercises - One Way Analysis of Variance (ANOVA)

1. Per-pupil costs (in thousands of dollars) for cyber charter school tuition for school districts in three
areas are shown. Test the claim that there is a difference in means for the three areas, using an
appropriate parametric test.

Area 1   6.2   9.3   6.8   6.1   6.7   7.5
Area 2   7.5   8.2   8.5   8.2   7.0   9.3
Area 3   5.8   6.4   5.6   7.1   3.0   3.5

a. What are the assumptions for this test? (Assume they are met. Do not check them.)
b. Test the claim at the 0.05 significance level.
c. Do pairwise Scheffe tests to determine where the difference in means lies.

Answer:

a. Independent samples, each distribution is approximately normal, and all variances are equal.
(ANOVA assumptions).
b. 𝐻𝑜: 𝜇1 = 𝜇2 = 𝜇3
𝐻𝑎: not all the means are equal

R input code:

Promedios <- c(6.2, 9.3, 6.8, 6.1, 6.7, 7.5, 7.5, 8.2, 8.5, 8.2, 7.0, 9.3,
5.8, 6.4, 5.6, 7.1, 3.0, 3.5)
Areas <- as.factor(c(rep('Area 1',6),rep('Area 2',6),rep('Area 3',6)))
data.set <- data.frame(Promedios,Areas)
fit <- lm(Promedios~Areas, data.set)
anova (fit)

R output code:

Analysis of Variance Table


Response: Promedios
Df Sum Sq Mean Sq F value Pr(>F)
Areas 2 25.663 12.8317 8.1759 0.003969 **
Residuals 15 23.542 1.5694
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Critical Value:

Number of treatments, t: 3
𝑑𝑓1 = 𝑡 − 1 = 3 − 1 = 2
Number of observations, N: 18
𝑑𝑓2 = 𝑁 − 𝑡 = 18 − 3 = 15

Looking up the table for df1 and df2 gives a Critical Value of 3.68.

F value > Critical Value ∴ we reject Ho.

There is a difference between the areas.

c. Scheffé tests:

R input code:

library(DescTools)
ScheffeTest(aov(fit))

R output code:

Posthoc multiple comparisons of means: Scheffe Test


95% family-wise confidence level
$Areas
diff lwr.ci upr.ci pval
Area 2-Area 1 1.016667 -0.9461879 2.97952126 0.3953
Area 3-Area 1 -1.866667 -3.8295213 0.09618792 0.0636 .
Area 3-Area 2 -2.883333 -4.8461879 -0.92047874 0.0044 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

There is significant difference between Area 2 and Area 3.
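
Instead of looking critical values up in a printed table, R can compute them directly; a quick check for this problem (my own addition):

qf(0.95, df1 = 2, df2 = 15)                        # critical value: 3.68
pf(8.1759, df1 = 2, df2 = 15, lower.tail = FALSE)  # p-value: 0.003969, as in the ANOVA table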

2. Random samples of employees were selected from three different types of stores at the mall, and
their ages recorded. Assume the assumptions for the parametric test are met. At the 0.05 significance
level, test the claim that there is a difference in mean ages for the three groups. If you find a difference,
perform pairwise Scheffe tests to determine where the difference lies.

Department Store   Music      Sporting Goods
𝑥̅ = 47.2           𝑥̅ = 26.8   𝑥̅ = 29.8
n = 6              n = 6      n = 6
SSB = 1446         SSW = 1377
Answer:

𝐻0 : 𝜇𝐷 = 𝜇𝑀 = 𝜇𝑆

            SS     Df   MS     F
Stores      1446   2    723    7.876
Residuals   1377   15   91.8
Critical Value:

Number of treatments, t: 3
𝑑𝑓1 = 𝑡 − 1 = 3 − 1 = 2
Number of observations, N: 18
𝑑𝑓2 = 𝑁 − 𝑡 = 18 − 3 = 15

Looking up the table for df1 and df2 gives a Critical Value of 3.68.

F value > Critical Value ∴ we reject Ho.

There is a difference between stores.

Scheffé tests:

Critical value: 7.36


𝐻𝑜 : 𝜇𝐷 = 𝜇𝑀 , 𝐹𝑆 = 13.60 > 7.36 Reject Ho.
𝐻𝑜 : 𝜇𝑀 = 𝜇𝑆 , 𝐹𝑆 = 0.29 < 7.36 Accept Ho.
𝐻𝑜 : 𝜇𝐷 = 𝜇𝑆 , 𝐹𝑆 = 9.89 > 7.36 Reject Ho.

The mean age of department store employees is significantly different from employees of music and
sporting goods stores.
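
The FS values above were computed by hand; a minimal R sketch of the same computation (my own addition), using this problem's summary statistics:

# Scheffé statistic for a pair of means: FS = (xi - xj)^2 / (MSW * (1/ni + 1/nj))
MSW <- 91.8; n <- 6
FS <- function(xi, xj) (xi - xj)^2 / (MSW * (1/n + 1/n))
FS(47.2, 26.8)       # Department vs Music:    13.60
FS(26.8, 29.8)       # Music vs Sporting:       0.29
FS(47.2, 29.8)       # Department vs Sporting:  9.89
2 * qf(0.95, 2, 15)  # Scheffé critical value:  7.36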

3. The number of grams of fat per serving for three different varieties of pizza from several
manufacturers is measured and a partial ANOVA table is provided below. Complete the ANOVA
table. At the 0.05 significance level, is there a significant difference in mean fat content for the three
varieties? Assume the assumptions for the test are met.

            SS      Df   MS   F
Between     23.0    2
Within      316.9   18
Total       339.9   20

Answer:
𝐻𝑜: 𝜇1 = 𝜇2 = 𝜇3

            SS      Df   MS     F
Between     23.0    2    11.5   0.653
Within      316.9   18   17.6
Total       339.9   20

𝑀𝑆𝐵 = 𝑆𝑆𝐵 /𝐷𝑓𝐵


𝑀𝑆𝑊 = 𝑆𝑆𝑊 /𝐷𝑓𝑊
𝐹 = 𝑀𝑆𝐵 /𝑀𝑆𝑊
Critical Value:

Number of treatments, t: 3
𝑑𝑓1 = 𝑡 − 1 = 3 − 1 = 2
Number of observations, N: 21
𝑑𝑓2 = 𝑁 − 𝑡 = 21 − 3 = 18

Looking up the table for df1 and df2 gives a Critical Value of 3.55.

F value < Critical Value ∴ we accept Ho.

There is not enough evidence to support the claim that there is a difference in mean fat content.

No Scheffé tests are needed because no significant difference was found.

4. Prices (rounded to the closest dollar amount) of men's, women's, and children's athletic shoes were compared by obtaining random samples of 18 shoes each. The variances of the samples are homogeneous and the distributions are assumed to be approximately normal.

a. Use the information provided to test at α=.05 whether there is a difference in mean shoe prices for the three types of shoes.

𝑆𝑆𝐵 = 2210.1 𝑆𝑆𝑊 = 5083.4

b. The means for the three groups are as follows: Women's shoes = $48.78; Men's shoes =
$60.33; Children's shoes = $45.39. Perform the necessary tests to find where the difference
lies. State your overall inference.

Answer:

𝐻𝑜: 𝜇𝑀 = 𝜇𝑊 = 𝜇𝐶

            SS       Df   MS        F
Between     2210.1   2    1105.05   11.09
Within      5083.4   51   99.67
Total       7293.5   53

𝑀𝑆𝐵 = 𝑆𝑆𝐵 /𝐷𝑓𝐵


𝑀𝑆𝑊 = 𝑆𝑆𝑊 /𝐷𝑓𝑊
𝐹 = 𝑀𝑆𝐵 /𝑀𝑆𝑊

Number of treatments, t: 3
𝑑𝑓1 = 𝑡 − 1 = 3 − 1 = 2
Number of observations, N: 54
𝑑𝑓2 = 𝑁 − 𝑡 = 54 − 3 = 51

Looking up the table for df1 and df2 gives a Critical Value of 3.23.

F value > Critical Value ∴ we reject Ho.

There is a significant difference in the prices of men's, women's, and children's athletic shoes.

Scheffé tests:

Critical value: 2 × 3.23 = 6.46

𝐻𝑜: 𝜇𝑀 = 𝜇𝑊, 𝐹𝑆 = 12.05 > 6.46. Reject Ho.
𝐻𝑜: 𝜇𝑊 = 𝜇𝐶, 𝐹𝑆 = 1.04 < 6.46. Accept Ho.
𝐻𝑜: 𝜇𝑀 = 𝜇𝐶, 𝐹𝑆 = 20.16 > 6.46. Reject Ho.

Men's athletic shoe prices are significantly different from both women's and children's.

5. A researcher wishes to try three different techniques to lower the blood pressure of individuals
diagnosed with high blood pressure. The subjects are randomly assigned to three groups; the first
group takes medication, the second group exercises, and the third group follows a special diet. After
four weeks, the reduction in each person's blood pressure is recorded. Assume that the data in each
group is approximately normal. Is there a significant difference between the techniques used to lower
blood pressure, at α=0.05? If you find a difference, perform pairwise tests to determine where the
difference lies.

Medication group Exercise group Diet group


9 0 4
10 2 5
12 3 8
13 6 9
15 8 12
𝑥̅1 = 11.8 𝑥̅2 = 3.8 𝑥̅3 = 7.6
𝑠12 = 5.7 𝑠22 = 10.2 𝑠32 = 10.3

𝑆𝑆𝐵 = 160.13 𝑆𝑆𝑊 = 104.80

Answer:

𝐻𝑜: 𝜇𝑀 = 𝜇𝐸 = 𝜇𝐷

            SS       Df   MS      F
Between     160.13   2    80.07   9.17
Within      104.80   12   8.73
Total       264.93   14

𝑀𝑆𝐵 = 𝑆𝑆𝐵 /𝐷𝑓𝐵


𝑀𝑆𝑊 = 𝑆𝑆𝑊 /𝐷𝑓𝑊
𝐹 = 𝑀𝑆𝐵 /𝑀𝑆𝑊
Number of treatments, t: 3
𝑑𝑓1 = 𝑡 − 1 = 3 − 1 = 2
Number of observations, N: 15
𝑑𝑓2 = 𝑁 − 𝑡 = 15 − 3 = 12

Looking up the table for df1 and df2 gives a Critical Value of 3.89.

F value > Critical Value ∴ we reject Ho.

There is a difference between the techniques used.

Scheffé tests:

Critical value: 7.78


𝐻𝑜 : 𝜇𝑀 = 𝜇𝐸 , 𝐹𝑆 = 18.33 > 7.78 Reject Ho.
𝐻𝑜 : 𝜇𝑀 = 𝜇𝐷 , 𝐹𝑆 = 5.05 < 7.78 Accept Ho.
𝐻𝑜 : 𝜇𝐸 = 𝜇𝐷 , 𝐹𝑆 = 4.14 < 7.78 Accept Ho.

The medication group is significantly different from the exercise group.

6. A turkey farmer tested three kinds of poultry feeds with the weights (in pounds) of the grown
turkeys in each sample given below. Test at α=.05 whether there is a difference in the mean weights
of turkeys consuming the different feeds. If you find a difference, perform pairwise tests to determine
where the difference lies. Assume the distributions are normal and variances are equal.

Feed A Feed B Feed C


12.3 12.1 11.5
11.4 13.4 12.4
13.4 14.0 10.8
12.5 13.6 12.6
12.0 12.8 11.8
13.1 14.2 11.9

Answer:

𝐻𝑜: 𝜇𝐴 = 𝜇𝐵 = 𝜇𝐶

R input code:
Weights <- c(12.3,11.4,13.4,12.5,12.0,13.1,
12.1,13.4,14.0,13.6,12.8,14.2,
11.5,12.4,10.8,12.6,11.8,11.9)
Feed <- as.factor(c(rep('Feed A',6),rep('Feed B',6),rep('Feed C',6)))
data.set <- data.frame(Weights,Feed)
fit <- lm(Weights~Feed, data.set)
anova (fit)
TukeyHSD(aov(fit))

R output code:
Analysis of Variance Table

Response: Weights
Df Sum Sq Mean Sq F value Pr(>F)
Feed 2 6.9811 3.4906 6.6926 0.008366 **
Residuals 15 7.8233 0.5216
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Critical Value:

Number of treatments, t: 3
𝑑𝑓1 = 𝑡 − 1 = 3 − 1 = 2

Number of observations, N: 18
𝑑𝑓2 = 𝑁 − 𝑡 = 18 − 3 = 15

Looking up the table for df1 and df2 gives a Critical Value of 3.68.

F value > Critical Value ∴ we reject Ho.

There is a significant difference in the mean weights of turkeys consuming the different feeds.

R input code:
library(DescTools)
ScheffeTest(aov(fit))

R output code:
Posthoc multiple comparisons of means: Scheffe Test
95% family-wise confidence level

$Feed
diff lwr.ci upr.ci pval
Feed B-Feed A 0.9000000 -0.2315284 2.0315284 0.1315
Feed C-Feed A -0.6166667 -1.7481950 0.5148617 0.3603
Feed C-Feed B -1.5166667 -2.6481950 -0.3851383 0.0087 **

---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

B is significantly different from C. Recommend using B, or possibly A, but not C.

7. A researcher wishes to see whether there is any difference in the weight gains of athletes following
one of three special diets. Athletes are randomly assigned to three groups and placed on the diet for
6 weeks. The weight gains (in pounds) are given. Assume weight gains are normally distributed and
the variances are equal. At a 0.05 significance level, can the researcher conclude that there is a
difference in the diets?

Diet A: 3, 6, 7, 4

Diet B: 10, 12, 11, 14, 8, 6

Diet C: 8, 3, 2, 5

Answer:

𝐻𝑜: 𝜇𝐴 = 𝜇𝐵 = 𝜇𝐶

R input code:
Weights <- c(3,6,7,4,10,12,11,14,8,6,8,3,2,5)
Diet <- as.factor(c(rep('Diet A',4),rep('Diet B',6),rep('Diet C',4)))
data.set <- data.frame(Weights,Diet)
fit <- lm(Weights~Diet, data.set)
anova (fit)

R output code:
Analysis of Variance Table
Response: Weights
Df Sum Sq Mean Sq F value Pr(>F)
Diet 2 101.095 50.548 7.7405 0.007971 **
Residuals 11 71.833 6.530
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Critical Value:

Number of treatments, t: 3

𝑑𝑓1 = 𝑡 − 1 = 3 − 1 = 2

Number of observations, N: 14

𝑑𝑓2 = 𝑁 − 𝑡 = 14 − 3 = 11

Looking up the table for df1 and df2 gives a Critical Value of 3.98.

F value > Critical Value ∴ we reject Ho.

There is a significant difference in the diets.

R input code:
library(DescTools)
ScheffeTest(aov(fit))

R output code:
Posthoc multiple comparisons of means: Scheffe Test
95% family-wise confidence level

$Diet
diff lwr.ci upr.ci pval
Diet B-Diet A 5.166667 0.5114176 9.821916 0.0300 *
Diet C-Diet A -0.500000 -5.5995698 4.599570 0.9626
Diet C-Diet B -5.666667 -10.3219157 -1.011418 0.0181 *

---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Diet B is the better diet for the athletes.

8. Three classes of ten students each were taught using the following methodologies: traditional,
inquiry-oriented and a mixture of both. At the end of the term, the students were tested, their scores
were recorded and this yielded the following partial ANOVA table. Assume distributions are normal
and variances are equal.

            SS    Df   MS   F
Between     136
Within      432
Total

a. Complete the above table and use it to test the claim that there is a difference in the mean
score of the students taught with the three different methodologies. Use α=0.05.

            SS    Df   MS   F
Between     136   2    68   4.25
Within      432   27   16
Total       568   29

𝑀𝑆𝐵 = 𝑆𝑆𝐵 /𝐷𝑓𝐵


𝑀𝑆𝑊 = 𝑆𝑆𝑊 /𝐷𝑓𝑊
𝐹 = 𝑀𝑆𝐵 /𝑀𝑆𝑊
Number of treatments, t: 3
𝑑𝑓1 = 𝑡 − 1 = 3 − 1 = 2
Number of observations, N: 30
𝑑𝑓2 = 𝑁 − 𝑡 = 30 − 3 = 27

Looking up the table for df1 and df2 gives a Critical Value of 3.35.

F value > Critical Value ∴ we reject Ho.

There is a difference between the methodologies used.

b. Suppose that the students taught by the traditional method had a mean score of 80 and the
ones taught by the inquiry-oriented method had a mean score of 89. Do a Scheffe test to
determine whether the difference in means is significant when we compare the traditional
method with the inquiry-based one.
c. The mean for the mixed approach is 85. Test statistics are FS=7.8125 for between
traditional and mixed methods and FS=5.000 for between inquiry oriented and mixed
methods. Give an overall inference for the teaching methods.

9. Below are HIC measures of the severity of head injuries collected from crash test dummies placed
in different types of vehicles. Use a 0.05 significance level to test the claim that the shown car
categories have the same mean.

Small Cars    290   406   371   544   374   501   376   499   479   475
Medium Cars   245   502   474   505   393   264   368   510   296   349
Large Cars    342   216   335   698   216   169   608   432   510   332

Answer:

R input code:
HIC <- c(290,406,371,544,374,501,376,499,479,475,
245,502,474,505,393,264,368,510,296,349,
342,216,335,698,216,169,608,432,510,332)
Cars <- as.factor(c(rep('Small',10),rep('Medium',10),rep('Large',10)))
data.set <- data.frame(HIC,Cars)
fit <- lm(HIC~Cars, data.set)
anova (fit)

R output code:
Analysis of Variance Table

Response: HIC
Df Sum Sq Mean Sq F value Pr(>F)
Cars 2 12614 6307.2 0.3974 0.6759.
Residuals 27 428505 15870.5
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Critical Value:

Number of treatments, t: 3
𝑑𝑓1 = 𝑡 − 1 = 3 − 1 = 2

Number of observations, N: 30

𝑑𝑓2 = 𝑁 − 𝑡 = 30 − 3 = 27

Looking up the table for df1 and df2 gives a Critical Value of 3.35.

F value < Critical Value ∴ we accept Ho.

There is no evidence of a difference in the mean HIC measure for each car type.

10. Below are times (in minutes and seconds) five bicyclists took to complete each mile of a 3 mile
race. Test the claim that it takes the same time to ride each mile at α=0.05.

Mile 1   3:15   3:24   3:23   3:22   3:21
Mile 2   3:19   3:22   3:21   3:17   3:19
Mile 3   3:34   3:31   3:29   3:31   3:29
Answer:
𝐻𝑜 : 𝜇1 = 𝜇2 = 𝜇3

R input code:
# times given as "m:ss" are entered as m.ss decimals
Time <- c(3.15,3.24,3.23,3.22,3.21,3.19,3.22,3.21,3.17,3.19,3.34,3.31,3.29,3.31,3.29)
Miles <- as.factor(c(rep('Mile 1',5),rep('Mile 2',5),rep('Mile 3',5)))
data.set <- data.frame(Time,Miles)
fit <- lm(Time~Miles, data.set)
anova(fit)

R output code:
Analysis of Variance Table
Response: Time
Df Sum Sq Mean Sq F value Pr(>F)
Miles 2 0.03724 0.0186200 27.249 3.453e-05 ***

Residuals 12 0.00820 0.0006833
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Critical Value:

Number of treatments, t: 3
𝑑𝑓1 = 𝑡 − 1 = 3 − 1 = 2

Number of observations, N: 15
𝑑𝑓2 = 𝑁 − 𝑡 = 15 − 3 = 12

Looking up the table for df1 and df2 gives a Critical Value of 3.88.

F value > Critical Value ∴ we reject Ho.

There is evidence that the mean mile times differ; from the data, the third mile takes the longest.

11. The below data shows the nicotine content of king size, menthol, and filtered 100mm non-
menthol cigarettes for 25 different brands. Test at α=0.05 the claim that the three categories of
cigarettes yield the same mean amount of nicotine.

King Size   Menthol   Filtered
1.1         1.1       0.4
1.7         0.8       1.0
1.7         1.0       1.2
1.1         0.9       0.8
1.1         0.8       0.8
1.4         0.8       1.0
1.1         0.8       1.1
1.4         0.8       1.1
1.0         0.9       1.1
1.2         0.8       0.8
1.1         0.8       0.8
1.1         1.2       0.8
1.1         0.8       0.8
1.1         0.8       1.0
1.1         1.3       0.2
1.8         0.7       1.1
1.6         1.4       1.0
1.1         0.2       0.8
1.2         0.8       1.0
1.5         1.0       0.9
1.3         0.8       1.1
1.1         0.8       1.1
1.3         1.2       0.6
1.1         0.6       1.3
1.1         0.7       1.1
Answer:

𝐻𝑜 : 𝜇𝐾 = 𝜇𝑀 = 𝜇𝐹

R input code:
Nicotine <-
c(1.1,1.7,1.7,1.1,1.1,1.4,1.1,1.4,1.0,1.2,1.1,1.1,1.1,1.1,1.1,1.8,1.6,1.1,1.2,1.
5,1.3,1.1,1.3,1.1,1.1,1.1,0.8,1.0,0.9,0.8,0.8,0.8,0.8,0.9,0.8,0.8,1.2,0.8,0.8,1.
3,0.7,1.4,0.2,0.8,1.0,0.8,0.8,1.2,0.6,0.7,0.4,1.0,1.2,0.8,0.8,1.0,1.1,1.1,1.1,0.
8,0.8,0.8,0.8,1.0,0.2,1.1,1.0,0.8,1.0,0.9,1.1,1.1,0.6,1.3,1.1)
Brands <- as.factor(c(rep('King',25),rep('Mentol',25),rep('Filtro',25)))
data.set <- data.frame(Nicotine,Brands)
fit <- lm(Nicotine~Brands, data.set)
anova (fit)

R output code:
Analysis of Variance Table

Response: Nicotine
Df Sum Sq Mean Sq F value Pr(>F)
Brands 2 2.2083 1.10413 18.993 2.376e-07 ***
Residuals 72 4.1856 0.05813
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Critical Value:

Number of treatments, t: 3

𝑑𝑓1 = 𝑡 − 1 = 3 − 1 = 2

Number of observations, N: 75

𝑑𝑓2 = 𝑁 − 𝑡 = 75 − 3 = 72

Looking up the table for df1 and df2 gives a Critical Value of 3.12.

F value > Critical Value ∴ we reject Ho.

There is evidence of a difference in nicotine content between these three categories of cigarettes.

REFERENCE

The Department of Mathematics and Computer Science. “Exercises - One Way Analysis of Variance
(ANOVA)” Oxford College. http://mathcenter.oxford.emory.edu/site/math117/probSetAnova/
(Accessed 27 August 2021).

Assignment 3: The 2^k and 3^k paper helicopter experiment

The 2^k Experiment

Background

The paper helicopter experiment is an exercise used by George E. P. Box, Søren Bisgaard and Conrad Fung in their experimentation classes for engineers, based on ideas from Kip Rogers of Digital Equipment. The purpose of the experiment is to teach principles of experimental design such as the conditions for valid experimentation, randomization, blocking, the use of factorial and fractional designs, and the management of experimentation (Box, 1991).

For this class, the purpose of the experiment is to teach us about factorial experiments such as the 2^k and 3^k analyses, where “k” is the number of factors and the base number (2/3) is the number of levels per factor.

A factorial design is an experimental design used to study the individual and interaction effects of several factors on one or more responses (Gutiérrez, 2008).

Full 2^k factorials are mainly useful when the number of factors to study is between two and five (2 ≤ k ≤ 5), a range in which the design size is between four and 32 treatments; this amount is manageable in many experimental situations.

In a 2^k analysis there are “k” factors with two levels each; these may be the maximum and the minimum, or the “normal” level and an improvement or a reduction, depending on what is sought.

Likewise, in the 3^k experiment there are “k” factors with three levels each, which may be the maximum, the “normal” and the minimum, or two levels of improvement, or two levels of reduction, whichever the case may be.

The experiment consists of dropping the paper helicopter from a height (specific and constant throughout the experiment) and measuring the flight time.

One of the questions to ask in this type of experiment is: which factors influence the flight time of the paper helicopter model?

Annis's article states that the purpose of the experiment is to teach engineers about statistics and multifactor experiments, to avoid the use of “cookbook statistics”. However, it points out that it is also important to understand the physical process and the mechanisms by which the experiment works.

The paper helicopter behaves as a falling body that is subject to gravity but influenced by the fluid dynamics of the air interacting with the wings of the body (Figure 1).

Figure 1. Free-body diagram of the paper helicopter showing the forces due to the weight and the air drag in equilibrium. (Annis, 2005)

The YouTube video “PHYSICS of PAPER HELICOPTERS – AUTOROTATION”, from the channel “2BrokeScientists”, explains it as follows:

“Air starts rushing toward [the helicopter] as it falls. For the rotation to begin, there must be a torque. The paper helicopter only interacts with the air that surrounds it. So, when the air meets the helicopter's wings, it is redirected away from the body, which causes a change in momentum.

From Newton's third law, a force acts in the opposite direction and generates the torque. This torque acts on both wings and forms a couple about the central axis.”

For the 2^k experiment we have the following variables:

1. Wing length.
2. Body length.
3. Body width.
4. Presence of the paper clip.
5. Tape on the wings.
6. Tape on the sides of the body.
7. Wing incidence angle.

For the experiment we chose only variables 1, 2, 3, 4 and 5, giving 32 runs, which is the recommended size for this type of experimental design.

In this type of experiment the factors are quantitative (dimensions of the helicopter's wings and body) and categorical (presence or absence of the tape and/or the clip).

For reference, Box's class used the following variables:

1. Paper type.
2. Wing length.
3. Body length.
4. Body width.
5. Clip.
6. Fold.
7. Taped body.
8. Taped wings.

Annis provides a full physical analysis of how the helicopter's variables affect the square of the terminal velocity (v²), which is proportional to the helicopter's surface area (S) and inversely proportional to the contact area on which the air drag acts (A). Both S and A result from the body width (B), the base length (H), the wing length (L) and the wing width (W):

t ∝ v² ∝ S/A = (BH + (2L + 1)W) / (2 sin θ · LW)

The more the S/A ratio is reduced, the longer the flight duration.
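
A small R sketch of this ratio (my own addition; it uses the S/A expression as reconstructed above, with hypothetical dimensions, so treat it as illustrative):

# S/A ratio for the paper helicopter per the expression above
sa_ratio <- function(B, H, L, W, theta) {
  (B * H + (2 * L + 1) * W) / (2 * sin(theta) * L * W)
}
sa_ratio(B = 4, H = 6.5, L = 9.5, W = 4, theta = pi / 4)  # hypothetical dimensions in cm, 45° wings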

According to what was taught in class, the hypothesis must have the four elements and one property: it must contain the variables and the relationship between the variables, it must be formulated based on previous data and on the theory known about the problem, and it must be written as a single running sentence.

Hypothesis:

The flight duration of the paper helicopter is a function of properties that allow it to stay in the air longer, so the wing length may be the decisive factor, since it increases the contact with the air mass, as occurs with aerodynamic objects.

119
Como punto importante, esta hipótesis fue hecha una vez que se leyeron el artículo
de B ox y A nnis, donde explican las bases educativas y físicas de este experimento.

My experiment was run only under the ANOVA hypotheses we saw in class, which consist of a null hypothesis and one (or several) alternative hypotheses.

The ANOVA null hypothesis is that the means of the 5 treatments are equal, so there is no difference between the treatments:

𝐻0 : 𝜇𝑙𝑎 = 𝜇𝑙𝑐 = 𝜇𝑎𝑐 = 𝜇𝑐𝑖𝑛𝑡𝑎 = 𝜇𝑐𝑙𝑖𝑝

The alternative hypotheses are that the means differ between treatments and between the levels of those treatments, and that they do affect the flight duration:

𝐻1 : 𝜇𝑙𝑎 ≠ 𝜇𝑙𝑐 ≠ 𝜇𝑎𝑐 ≠ 𝜇𝑐𝑖𝑛𝑡𝑎 ≠ 𝜇𝑐𝑙𝑖𝑝

𝐻2 : 𝜇𝑙𝑎 ≠ 𝜇𝑙𝑐
𝐻3 : 𝜇𝑙𝑎 ≠ 𝜇𝑎𝑐
𝐻4 : 𝜇𝑙𝑎 ≠ 𝜇𝑐𝑖𝑛𝑡𝑎
𝐻5 : 𝜇𝑙𝑎 ≠ 𝜇𝑐𝑙𝑖𝑝

𝐻6 : 𝜇𝑙𝑐 ≠ 𝜇𝑎𝑐
𝐻7 : 𝜇𝑙𝑐 ≠ 𝜇𝑐𝑖𝑛𝑡𝑎
𝐻8 : 𝜇𝑙𝑐 ≠ 𝜇𝑐𝑙𝑖𝑝

𝐻9 : 𝜇𝑎𝑐 ≠ 𝜇𝑐𝑖𝑛𝑡𝑎
𝐻10 : 𝜇𝑎𝑐 ≠ 𝜇𝑐𝑙𝑖𝑝

𝐻11 : 𝜇𝑐𝑖𝑛𝑡𝑎 ≠ 𝜇𝑐𝑙𝑖𝑝

𝐻12 : 𝜇𝑙𝑎:6.5 𝑐𝑚 ≠ 𝜇𝑙𝑎:9.5 𝑐𝑚


𝐻13 : 𝜇𝑙𝑐:6.5 𝑐𝑚 ≠ 𝜇𝑙𝑐:9.5 𝑐𝑚
𝐻14 : 𝜇𝑎𝑐:4 𝑐𝑚 ≠ 𝜇𝑎𝑐:6 𝑐𝑚
𝐻15 : 𝜇𝑐𝑖𝑛𝑡𝑎:𝑆í ≠ 𝜇𝑐𝑖𝑛𝑡𝑎:𝑁𝑜
𝐻16 : 𝜇𝑐𝑙𝑖𝑝:𝑆í ≠ 𝜇𝑐𝑙𝑖𝑝:𝑁𝑜

The objectives of this practice are:

• Identify the most significant parameters.
• Run the experiment and measure the interactions of these parameters.
• Gather enough data to measure the influence of these parameters.
• Determine which factor has the greatest influence on the paper helicopter.

Development

The method followed was:

1. Print the template provided by Professor Huerta.
2. Make copies of the template (40 copies).
3. Choose suitable treatment levels. The following parameters were chosen:
   a. Wing length (6.5 cm and 9.5 cm)
   b. Body length (6.5 cm and 9.5 cm)
   c. Body width (4 cm and 6 cm)
   d. Tape on the wings (Yes/No)
   e. Clip on the body (Yes/No)
4. Create a table with the 32 models and their treatments.
5. Number each helicopter with a pen and make the necessary cuts according to the model table.
6. Verify the dimensions of the models.
7. Choose a location with stable or controllable conditions and a height of 2.5 m.
8. Fly the 32 models sequentially and record the flight durations.
9. Replicate the flights 3 more times, recording each flight.
10. Upload the data to an Excel table and to RStudio.
11. Apply the ANOVA and generate the linear model.

To build the helicopters, the PDF document was printed once, checking with a metal ruler that the scale of the model was correct. This document was taken to a copy shop to make the 40 copies needed for the experiment (a margin of 8 copies was left in case of cutting errors). The paper used for the copies was white bond paper, letter size, 75 g/m².

Figure 2. Paper helicopter template provided, with dimensions 15.24 cm × 25.4 cm and recommended cut and fold marks.

The paper was cut with handheld scissors, trying to keep the cut at, or very close to, the center of the black line on the model. The cuts could have been made with a utility knife (cutter), but one was not at hand. We acknowledge that this may introduce some variation, perhaps minimal, into the experiment.

The tape was 3M "Scotch Magic", cut approximately to the area marked by the dotted line. Because the width of the tape did not match that area exactly, this may also influence the experiment.

Figure 3. Cutting the model with handheld scissors; the tape used on the wings is visible.

During the cutting I realized that I was not building the helicopters in the indicated way. Additional copies of the template had to be made at another print shop, so some models may have a slightly different paper quality, imperceptible to the naked eye. The affected helicopters were models 28 to 32.

The clips used were of a 2.54 mm size, and the only difference among them was their colored coating, since they came from the same box. It was not verified that the clips' dimensions were equal or that each clip weighed the same, for lack of a gram scale; this may also affect the experiment.

Figure 4. Placing the clip on a model.

The dimensions of the helicopters were checked against the reference table (Table 1) using a conventional ruler graduated in millimeters and centimeters.

Table 1. Model table with variable values.

Model  Wing Length (cm)  Body Length (cm)  Body Width (cm)  Clip  Tape
1      6.5               6.5               4                Yes   Yes
2      6.5               6.5               4                Yes   No
3      6.5               6.5               4                No    Yes
4      6.5               6.5               4                No    No
5      6.5               6.5               6                Yes   Yes
6      6.5               6.5               6                Yes   No
7      6.5               6.5               6                No    Yes
8      6.5               6.5               6                No    No
9      6.5               9.5               4                Yes   Yes
10     6.5               9.5               4                Yes   No
11     6.5               9.5               4                No    Yes
12     6.5               9.5               4                No    No
13     6.5               9.5               6                Yes   Yes
14     6.5               9.5               6                Yes   No
15     6.5               9.5               6                No    Yes
16     6.5               9.5               6                No    No
17     9.5               6.5               4                Yes   Yes
18     9.5               6.5               4                Yes   No
19     9.5               6.5               4                No    Yes
20     9.5               6.5               4                No    No
21     9.5               6.5               6                Yes   Yes
22     9.5               6.5               6                Yes   No
23     9.5               6.5               6                No    Yes
24     9.5               6.5               6                No    No
25     9.5               9.5               4                Yes   Yes
26     9.5               9.5               4                Yes   No
27     9.5               9.5               4                No    Yes
28     9.5               9.5               4                No    No
29     9.5               9.5               6                Yes   Yes
30     9.5               9.5               6                Yes   No
31     9.5               9.5               6                No    Yes
32     9.5               9.5               6                No    No

Figure 5. Checking the wing length of a model.

A second measurement with another ruler was not made, so the uncertainty of the original ruler remains open to question.

Figure 6. The 32 models of the experiment.

The test site chosen was the second-floor corridor of an apartment building, since it has only one air inlet/outlet, is roofed, and has the height needed for the experiment. No air movement was felt in the corridor during the experiment (human perception cannot detect micro-states of the air).

With a tape measure (whose uncertainty or margin of error is unknown), a height of 2.50 m was measured from the floor upward and a mark was left as a reference for launching the helicopter.

Figure 7. Measuring the drop height for the experiment.

A metal chair was used to reach the launch height. The operator launches the paper helicopter by hand from the marked height and prepares to continue with the launches of the remaining 31 models.

Figure 8. Launching a model from a height of 2.50 m.

One question I have about the experiment is whether the drop height should be measured from the helicopter's center of gravity; since measuring that center of gravity would be more complicated, only the top of the helicopter was used as the reference (Figure 9). We acknowledge that this method may carry a parallax measurement error. To improve the experiment, a string stretched perpendicular to the wall could be placed as a reference, or a remotely operated release device could be used.

Figure 9. Example diagram of the reference measurement for the helicopter's drop height.

To record the flight time, one operator stands on the chair and waits for the signal of the operator holding the stopwatch, who records the start and end of the flight time.

The flight duration is recorded with a manually operated stopwatch that is part of the phone's basic applications ("Clock" app version 12.0.17.3, Samsung Galaxy S9+, model SM-G950, Android 10). Timing starts at the stopwatch operator's signal and ends at the first contact of the helicopter's body with the floor. We acknowledge that this method may introduce an error due to the stopwatch operator's hand-eye coordination and the time difference between the stopwatch operator's signal and the launching operator's hearing.
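A rough order-of-magnitude check of this timing error, assuming a combined start/stop reaction-time spread of about 0.15 s (an assumed figure, not measured in the experiment):

# Assumed ~0.15 s of human start/stop error on a typical 1.8 s flight:
0.15 / 1.8  # ~0.08, i.e. roughly 8% of the measured time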

Figure 10. Screenshot of the stopwatch application used in the experiment.

Whenever a model had an atypical flight (hitting the wall, drifting to one side, etc.), the flight was repeated.

The 32 helicopters were measured sequentially from model 1 to 32; once a run was finished, the models were gathered again and 3 additional replicates were made.

Figure 11. Models used in one of the replicates of the experiment. The stopwatch operator is in the upper right.

The data were written in a notebook and entered into a prepared Excel table; the mean of the 4 replicates of each model is computed, the coefficient of variation is determined, and the values are plotted in a scatter chart.

Figure 12. Notebook page where the flight-time values were recorded.

Table 2. Model table with flight-time results.

Model  WL   BL   BW  Clip  Tape  T1    T2    T3    T4    Tmean
1      6.5  6.5  4   Yes   Yes   1.94  2.13  1.76  1.62  1.86
2      6.5  6.5  4   Yes   No    1.28  1.80  1.46  1.41  1.49
3      6.5  6.5  4   No    Yes   1.93  1.86  1.79  1.55  1.78
4      6.5  6.5  4   No    No    2.28  2.12  2.11  1.67  2.05
5      6.5  6.5  6   Yes   Yes   1.48  1.61  1.26  1.22  1.39
6      6.5  6.5  6   Yes   No    1.85  1.93  1.39  1.28  1.61
7      6.5  6.5  6   No    Yes   2.05  1.73  2.18  1.83  1.95
8      6.5  6.5  6   No    No    1.74  1.93  1.98  1.87  1.88
9      6.5  9.5  4   Yes   Yes   1.90  2.00  2.10  1.67  1.92
10     6.5  9.5  4   Yes   No    1.54  1.75  1.34  1.34  1.49
11     6.5  9.5  4   No    Yes   1.62  1.80  1.46  1.74  1.66
12     6.5  9.5  4   No    No    1.80  1.87  1.27  1.87  1.70
13     6.5  9.5  6   Yes   Yes   1.87  1.61  1.45  1.66  1.65
14     6.5  9.5  6   Yes   No    1.61  1.53  1.20  1.42  1.44
15     6.5  9.5  6   No    Yes   1.80  1.81  1.53  1.63  1.69
16     6.5  9.5  6   No    No    1.80  1.93  1.72  1.86  1.83
17     9.5  6.5  4   Yes   Yes   1.81  1.93  2.04  1.43  1.80
18     9.5  6.5  4   Yes   No    1.99  1.93  2.24  1.73  1.97
19     9.5  6.5  4   No    Yes   2.26  2.33  2.18  1.93  2.18
20     9.5  6.5  4   No    No    2.07  2.33  2.12  2.01  2.13
21     9.5  6.5  6   Yes   Yes   2.32  1.73  1.47  1.87  1.85
22     9.5  6.5  6   Yes   No    2.26  1.99  2.18  1.69  2.03
23     9.5  6.5  6   No    Yes   2.55  2.25  2.21  2.06  2.27
24     9.5  6.5  6   No    No    1.73  2.09  2.24  2.04  2.03
25     9.5  9.5  4   Yes   Yes   1.80  1.34  1.85  1.47  1.62
26     9.5  9.5  4   Yes   No    2.13  1.73  1.79  1.67  1.83
27     9.5  9.5  4   No    Yes   1.93  2.00  1.78  1.61  1.83
28     9.5  9.5  4   No    No    2.33  1.80  1.98  1.60  1.93
29     9.5  9.5  6   Yes   Yes   1.67  1.73  2.15  1.20  1.69
30     9.5  9.5  6   Yes   No    2.07  1.93  1.85  1.47  1.83
31     9.5  9.5  6   No    Yes   1.99  2.06  2.11  1.80  1.99
32     9.5  9.5  6   No    No    1.58  1.99  1.54  1.99  1.78
WL: wing length (cm)
BL: body length (cm)
BW: body width (cm)
T#: time of replicate 1, 2, 3, or 4 (s)
Tmean: mean time of the model (s)

[Scatter plot: flight time (s) vs. model number, replicates T1–T4 and the mean.]

Figure 13. Scatter plot of the flight-time data by model.

[Plot: coefficient of variation (%) vs. model number.]

Figure 14. Coefficient of variation of the times of each model.

The scatter plot shows that the coefficient of variation of the data ranges from 5% to 23%. Using the 10-30 rule that Johnson (2006) mentions, 11 of the models have an acceptable variation and 21 are "questionable" but usable. If the variation were greater than 30%, the data should not be used.
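A minimal R sketch of this 10-30 screening, applied to the replicate times of models 1 and 5 from Table 2:

# Coefficient of variation per model and Johnson's 10-30 classification.
times <- list(model1 = c(1.94, 2.13, 1.76, 1.62),
              model5 = c(1.48, 1.61, 1.26, 1.22))
cv <- sapply(times, function(t) 100 * sd(t) / mean(t))
round(cv, 1)  # ~11.9% and ~13.3%
cut(cv, breaks = c(0, 10, 30, Inf),
    labels = c("acceptable", "questionable but usable", "do not use"))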

The model with the longest flight time was model 23, which has a wing length of 9.5 cm, body length of 6.5 cm, body width of 6 cm, no clip, and tape on the wings. This lends some credence to the theory that wing length is an important factor.

The theory gains further support because 7 of the models with a 9.5 cm wing length are among those with the longest flight times (models 18, 32, 24, 22, 20, 19, and 23, in order of increasing flight time); one model with a 6.5 cm wing length ranks 4th for longest flight time.

The model with the shortest flight is model 5, which has a wing length of 6.5 cm, body length of 6.5 cm, body width of 6 cm, with a clip and with tape on the wings.

Likewise, the 7 models with the shortest flight times have 6.5 cm wings (models 5, 14, 2, 10, 6, 13, and 11, in order of increasing flight time).

These data are then entered into RStudio and the requested analyses are run: the graphical summary, the half-normal plot, the linear model, and the ANOVA.

R code:

library (DoE.base)
vars <- list(wl=c(6.5,9.5),  # wing length
             bl=c(6.5,9.5),  # body length
             bw=c(4,6),      # body width
             c=c("y","n"),   # clip
             tw=c("y","n"))  # tape on wings

tabla <- fac.design(factor.names = vars, randomize = F)

tabla

Y <- c(1.86,1.80,1.92,1.62,1.39,1.85,1.65,1.69,1.78,1.97,1.66,1.83,1.95,2.03,
       1.69,1.83,1.49,2.18,1.49,1.83,1.61,2.27,1.44,1.99,2.05,2.13,1.70,1.93,
       1.88,2.03,1.83,1.78)

tab.1 <- add.response(tabla,Y)

tab.1

plot (tab.1)  # graphical summary

halfnormal(tab.1, alpha = .05)  # identifies the significant factor(s)

fit <- lm (Y~wl+bl+c+wl:tw+wl:c+wl:bl, tab.1)

anova (fit)
summary (fit)

Figure 15. The R code applied in RStudio, with the factor plot and data means (RStudio, 2021).

Figure 16. Factor plot with data means.

The factor plot shows that wing length is indeed a very important factor, since it has the widest range of mean flight times, followed by body length and the presence of the clip.

According to the half-normal plot analysis, the most influential variable is the wing length, followed by the body length.

Figure 17. Half-normal plot of the experiment data (RStudio, 2021).

For reference, Box's article reports similar behavior, with wing length as the highest-impact factor, together with body length.

As a comment from Prof. Huerta, the plot should have a different shape: the present plot indicates that there is "noise" in the experiment. The shape should be similar to this:

Figure 18. Half-normal plot of the experiment data with behavior indications in red (RStudio, 2021).

The linear model and ANOVA in R give these results:
Analysis of Variance Table

Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
wl 1 0.35490 0.35490 17.3441 0.0003473 ***
bl 1 0.17850 0.17850 8.7234 0.0069281 **
c 1 0.12375 0.12375 6.0478 0.0215150 *
wl:tw 2 0.15491 0.07745 3.7851 0.0372562 *
wl:c 1 0.06038 0.06038 2.9507 0.0987198 .
wl:bl 1 0.03990 0.03990 1.9501 0.1753626
Residuals 24 0.49110 0.02046
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> summary (fit)

Call:
lm.default(formula = Y ~ wl + bl + c + tw:wl + bl + c:wl + bl:wl,
data = tab.1)

Residuals:
Min 1Q Median 3Q Max

136
-0.28125 -0.10156 -0.00937 0.07469 0.32750

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.81719 0.02529 71.861 < 2e-16 ***
wl1 0.10531 0.02529 4.165 0.000347 ***
bl1 -0.07469 0.02529 -2.954 0.006928 **
c1 0.06219 0.02529 2.459 0.021515 *
wl6.5:tw1 -0.02562 0.03576 -0.717 0.480569
wl9.5:tw1 0.09500 0.03576 2.656 0.013815 *
wl1:c1 -0.04344 0.02529 -1.718 0.098720 .
wl1:bl1 -0.03531 0.02529 -1.396 0.175363
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.143 on 24 degrees of freedom


Multiple R-squared: 0.6501, Adjusted R-squared: 0.548
F-statistic: 6.369 on 7 and 24 DF, p-value: 0.000268

The probability values for wing length in the ANOVA and the linear model carry a three-asterisk significance code, so it is indeed an important variable and supports the hypothesis.

As for the ANOVA null hypothesis that the means of the factors are equal: it is false, and we must therefore reject this hypothesis.

Box comments: "the experiment also demonstrates the value of sequential experimentation, since we learn as we go." In the way I ran the experiment, the sequence had no effect, since all the helicopters had already been cut and there were no changes to the helicopter designs as observations came in. A second phase of the experiment would be to use the designs with the longest flight times and see whether their parameters coincide, in order to refine them.

Annis comments that another detail the experimenter can take into account as a dependent variable is stability. I can confirm this: in some runs the helicopter did not fall in a straight line and drifted, lengthening the flight time, or it stopped spinning and fell faster, shortening the flight time. Box mentions: "the criteria that will be used to evaluate the results may need to be modified or changed entirely during the investigation as we learn more about the phenomena under study. Appropriate and feasible objectives cannot always be determined in advance."

Conclusions

A 2^k factorial experiment was carried out on a paper helicopter with 5 treatments (wing length, body length, body width, presence of a clip, and tape on the wings) at 2 levels each, resulting in 32 helicopter models, to find how these values affect the flight time.

According to the physical theory, flight duration is related to the surface area of the body and the contact area facing air resistance, so the working hypothesis is that wing length will be the factor with the greatest impact on flight duration.

The results of the experiment, with 4 replicates, a data variation between "acceptable" and "questionable but usable", the factor plot, and the ANOVA analysis confirm that yes, wing length is the highest-impact factor, followed by body length and the presence of a clip. The first two factors agree with the physical theory, since they are the two variables that affect the surface area of the body and the contact area facing air resistance.

3^k Experiment

Background

As mentioned earlier, the factorial design is an experimental design used to study the individual and interaction effects of several factors on one or more responses (Gutiérrez, 2008).

In a 3^k analysis there are k treatments with three levels each, which can be the maximum, the average, and the minimum, or a baseline and two levels of increase or decrease, depending on what is being sought. This type of treatment requires a larger number of runs in the experiment.

For the 3^k experiment we have the following variables:

1. Wing length.
2. Wing incidence angle.
3. Body length.

For the experiment we chose only variables 1 and 2, giving 9 runs.

Many of the teaching points of the 2^k factorial design and the paper helicopter experiment apply here as well, so I will not repeat them.

Annis's analysis does take the wing incidence angle into account among the helicopter variables, through the contact area, as follows:

A = π sin θ (L² + W²)

where A is the contact area on which air drag acts, θ is the incidence angle, L is the wing length, and W is the wing width. A is interpreted as the diagonal of the wing modified by the incidence angle.

The modified equation becomes:

v² ∝ S/A = (BH + (2L + 1)W) / (π sin θ (L² + W²))

The square of the terminal velocity (v²) is proportional to the surface area of the helicopter (S) and inversely proportional to the contact area on which air drag acts (A).

The idea still holds that reducing the ratio S/A yields a longer flight duration.
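The following R sketch evaluates the modified ratio at the two nonzero incidence angles used later; note that the transcribed expression degenerates at θ = 0° (sin 0 = 0). The wing width W = 3.5 cm is again an assumption:

# Transcribed ratio with the angle-dependent contact area
# A = pi * sin(theta) * (L^2 + W^2); dimensions in cm, angle in degrees.
sa_ratio_angle <- function(B, H, L, W, theta_deg) {
  (B*H + (2*L + 1)*W) / (pi * sin(theta_deg * pi/180) * (L^2 + W^2))
}
sa_ratio_angle(B = 4, H = 6.5, L = 8.4, W = 3.5, theta_deg = c(11, 22))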

Hypothesis:

The flight duration of the paper helicopter is a function of properties that allow it to stay in the air longer, so wing length may remain the decisive factor, since it increases contact with the air mass as with aerodynamic objects; and as long as the incidence angle is proportional to the wing length, the flight time will increase.

This experiment was also run only under the ANOVA hypotheses seen in class, which consist of a null hypothesis and several alternative hypotheses.

The ANOVA null hypothesis is that the means of the 2 treatments are equal, so there is no difference between the treatments:

𝐻0 : 𝜇𝑙𝑎 = 𝜇𝑎𝑖

The alternative hypotheses are that the means differ between treatments and between their levels, and that they do affect the flight duration:

𝐻1 : 𝜇𝑙𝑎 ≠ 𝜇𝑎𝑖
𝐻2 : 𝜇𝑙𝑎+ ≠ 𝜇𝑙𝑎𝑜

𝐻3 : 𝜇𝑙𝑎+ ≠ 𝜇𝑙𝑎−
𝐻4 : 𝜇𝑙𝑎𝑜 ≠ 𝜇𝑙𝑎−

𝐻5 : 𝜇𝑎𝑖+ ≠ 𝜇𝑎𝑖𝑜

𝐻6 : 𝜇𝑎𝑖+ ≠ 𝜇𝑎𝑖−
𝐻7 : 𝜇𝑎𝑖𝑜 ≠ 𝜇𝑎𝑖−

The objectives of this practice are identical to those of the 2^k experiment:

• Identify the most significant parameters.
• Run the experiment and measure the interactions of these parameters.
• Gather enough data to measure the influence of these parameters.
• Determine which factor has the greatest influence on the paper helicopter.

Development

The method followed was similar to that of the previous experiment:

1. Print the template provided by Professor Huerta.
2. Make copies of the template (20 copies).
3. Choose suitable treatment levels. The following parameters were chosen:
   a. Wing length (6.0 cm, 7.0 cm, and 8.4 cm)
   b. Incidence angle (0°, 11°, and 22°)
4. Create a table with the 9 models and their treatments.
5. Number each helicopter with a pen and make the necessary cuts according to the model table.
6. Verify the dimensions of the models.
7. Choose a location with stable or controllable conditions and a height of 2.5 m.
8. Fly the 9 models sequentially and record the flight durations.
9. Replicate the flights 4 more times, recording each flight.
10. Upload the data to an Excel table and to RStudio.
11. Apply the ANOVA and generate the linear model.

To build the helicopters, the PDF document was printed once, checking with a metal ruler that the scale of the model was correct. This document was taken to a copy shop to make the 40 copies needed for the experiment (a margin of 8 copies was left in case of cutting errors). The paper used for the copies was white bond paper, letter size, 75 g/m².

Figure 19. Paper helicopter template provided, with dimensions 21.59 cm × 27.94 cm and recommended cut and fold marks.

The paper was cut with handheld scissors, trying to keep the cut at, or very close to, the center of the black line on the model. The cuts could have been made with a utility knife (cutter), but one was not at hand. We acknowledge that this may introduce some variation, perhaps minimal, into the experiment.

Figure 20. Cutting the model with handheld scissors.

The dimensions of the helicopters were checked against the reference table (Table 3) using a conventional ruler graduated in millimeters and centimeters.

Table 3. Model table with variable values.

Model  Wing Length (cm)  Incidence Angle (°)
1      8.40              0
2      7.00              0
3      6.00              0
4      8.40              11
5      7.00              11
6      6.00              11
7      8.40              22
8      7.00              22
9      6.00              22

The incidence angle was computed by trigonometry, applying the following equation:

θ(°) = tan⁻¹(x / B)

where x is the distance from the horizontal base and B is the wing width.

Figure 21. Explanatory diagram of the incidence angle.

In this case, values of x = 0.0 cm, 0.7 cm, and 1.4 cm were chosen.
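As a check, these x values reproduce the target angles when the wing width B is about 3.5 cm; that value of B is an assumption, since the template's wing width is not stated in the report:

x <- c(0.0, 0.7, 1.4)          # fold height in cm
B <- 3.5                       # wing width in cm (assumption)
round(atan(x / B) * 180 / pi)  # 0, 11, 22 degrees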

Figure 22. Checking the wing length of a model with a ruler.

A second measurement with another ruler was not made, so the uncertainty of the original ruler remains open to question.

Figure 23. The 9 models of the experiment.

The test site chosen was the same as in the previous experiment: the second-floor corridor of an apartment building, since it has only one air inlet/outlet, is roofed, and has the height needed for the experiment. No air movement was felt in the corridor during the experiment (human perception cannot detect micro-states of the air).

The reference mark placed in the previous experiment was reused; it had been made with a tape measure (whose uncertainty or margin of error is unknown) at a height of 2.50 m from the floor and left as a reference for launching the helicopter.

Figure 24. Measuring the drop height for the experiment.

A metal chair was used to reach the launch height. The operator launches the paper helicopter by hand from the marked height and prepares to continue with the launches of the remaining 8 models.

Figure 25. Launching a model from a height of 2.50 m.

The question I have from the previous experiment remains the same: whether the 2.50 m launch height should be measured from the helicopter's center of gravity to the floor. Since in this case the helicopter body is held constant, it will not be as much of a problem as in the previous experiment. However, we acknowledge that a parallax measurement error may still exist at the moment of launching the helicopter (Figure 26). More care was taken when positioning the model at the height, making sure not to lower the arm when moving away from the reference mark. The recommendation stands to place a string stretched perpendicular to the wall as a reference, or to use a remotely operated release device, to improve the experiment.

Figure 26. Example diagram of the reference measurement for the helicopter's drop height.

To record the flight time, the same scheme was repeated: one operator stands on the chair and waits for the signal of the operator holding the stopwatch, who records the start and end of the flight time.

The flight duration is recorded with the same stopwatch as in the previous experiment, operated manually and part of the phone's basic applications ("Clock" app version 12.0.17.3, Samsung Galaxy S9+, model SM-G950, Android 10). Timing starts at the stopwatch operator's signal and ends at the first contact of the helicopter's body with the floor. We acknowledge that this method may introduce an error due to the stopwatch operator's hand-eye coordination and the time difference between the stopwatch operator's signal and the launching operator's hearing.

Figure 27. Screenshot of the stopwatch application used in the experiment.

Whenever a model had an atypical flight (hitting the wall, drifting to one side, etc.), the flight was repeated.

The 9 helicopters were measured sequentially from model 1 to 9; once a run was finished, the models were gathered again and 4 additional replicates were made.

Figure 28. Models used in one of the replicates of the experiment.

The data were written in a notebook and entered into a prepared Excel table; the mean of the 5 replicates of each model is computed, the coefficient of variation is determined, and the values are plotted in a scatter chart.

Figure 29. Notebook page where the flight-time values were recorded.

Table 4. Model table with flight-time results.

Model  WL   AI  T1    T2    T3    T4    T5    Tmean
1      8.4  0   2.38  1.85  1.79  2.05  2.18  2.05
2      7.0  0   2.05  2.03  1.72  1.72  1.66  1.84
3      6.0  0   1.52  1.46  1.55  1.47  1.53  1.51
4      8.4  11  2.24  2.11  1.73  1.72  1.60  1.88
5      7.0  11  1.65  1.79  1.61  1.79  1.86  1.74
6      6.0  11  1.98  1.46  1.40  1.72  1.46  1.60
7      8.4  22  1.46  1.67  1.59  1.49  1.55  1.55
8      7.0  22  1.46  1.66  1.85  1.46  1.53  1.59
9      6.0  22  1.42  1.26  1.13  1.14  1.51  1.29
WL: wing length (cm)
AI: incidence angle (°)
T#: time of replicate 1, 2, 3, 4, or 5 (s)
Tmean: mean time of the model (s)

[Scatter plot: flight time (s) vs. model number, replicates T1–T5 and the mean.]

Figure 30. Scatter plot of the flight-time data by model.

[Plot: coefficient of variation (%) vs. model number.]

Figure 31. Coefficient of variation of the times of each model.

The scatter plot shows that the coefficient of variation of the data ranges from 2% to 14%. Using the 10-30 rule that Johnson (2006) mentions, 5 of the models have an acceptable variation and 4 are "questionable but usable". If the variation were greater than 30%, the data should not be used.

The model with the longest flight time was model 1, which has a wing length of 8.4 cm and an incidence angle of 0°. Recalling that the physical theory says a longer wing increases the flight time, and that the incidence angle modifies the contact area with the air, these results lend some credence to the theory that wing length is an important factor.

The model with the shortest flight is model 9, which has a wing length of 6 cm and an incidence angle of 22°. This may be significant, since it would be the model with the smallest contact surface with the air mass.

These data are then entered into RStudio and the requested analyses are run: the graphical summary, the half-normal plot, the linear model, and the ANOVA.

R code:

library (DoE.base)
vars <- list(la=c(6,7,8.4),  # wing length
             an=c(0,11,22))  # incidence angle
tabla <- fac.design(factor.names = vars, randomize = F)
tabla

Y <- c(1.51,1.84,2.05,1.60,1.74,1.88,1.29,1.59,1.55)

tab.1 <- add.response(tabla,Y)

tab.1

plot (tab.1)  # graphical summary

halfnormal(tab.1, alpha = .05)  # identifies the significant factor(s)

fit <- lm (Y~la+an,tab.1)

anova (fit)
summary (fit)

Figure 32. The R code applied in RStudio, with the factor plot and data means (RStudio, 2021).

Figure 33. Factor plot with data means (RStudio, 2021).

The factor plot shows that wing length and incidence angle are both very important factors, since they have nearly the same range of means.

According to the half-normal plot analysis, no notable behavior is observed and there are no signals.

> halfnormal(tab.1, alpha = .05)  # identifies the significant factor(s)

no significant effects

R could not find significant effects; we may need a larger number of treatments or replicates to be able to demonstrate a significant effect.

Figure 34. Half-normal plot of the experiment data (RStudio, 2021).

The linear model and ANOVA in R give these results:

Analysis of Variance Table

Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
la 2 0.206156 0.103078 12.017 0.02036 *
an 2 0.177489 0.088744 10.346 0.02624 *
Residuals 4 0.034311 0.008578
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Call:
lm.default(formula = Y ~ la + an, data = tab.1)

Residuals:
       1        2        3        4        5        6        7        8        9
-0.08444 -0.01111  0.09556  0.06556 -0.05111 -0.01444  0.01889  0.06222 -0.08111

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 1.67222 0.03087 54.166 6.95e-07 ***
la.L 0.24739 0.05347 4.627 0.00983 **
la.Q -0.08669 0.05347 -1.621 0.18028
an.L -0.22863 0.05347 -4.276 0.01289 *
an.Q -0.08301 0.05347 -1.552 0.19552
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.09262 on 4 degrees of freedom


Multiple R-squared: 0.9179, Adjusted R-squared: 0.8358
F-statistic: 11.18 on 4 and 4 DF, p-value: 0.01911

The ANOVA for this test could only establish that both parameters are significant, but not in the way the previous experiment established with confidence that wing length is the most important variable.

The linear model does indicate that wing length is more significant than the incidence angle.
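For reference, the la.L/la.Q and an.L/an.Q rows in summary(fit) are the linear and quadratic orthogonal polynomial contrasts that R builds for a three-level factor; only the linear contrasts are significant here, suggesting an essentially linear trend across the levels. The contrast matrix can be inspected directly:

contr.poly(3)  # columns .L (linear) and .Q (quadratic) for 3 levels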

Conclusions

A 3^k factorial experiment was carried out on a paper helicopter with 2 treatments (wing length and incidence angle) at 3 levels each, resulting in 9 helicopter models, to find how these values affect the flight time.

According to the physical theory, the flight duration is related to the contact surface of the body and to the contact area facing air resistance, which is related to the incidence angle; the working hypothesis is therefore that wing length will be the highest-impact factor on flight duration, possibly modified by the incidence angle.

The results of the experiment, with 5 replicates, a data variation between "acceptable" and "questionable but usable", the factor plot, and the ANOVA analysis confirm that yes, wing length is the highest-impact factor, followed by the incidence angle. Both factors agree with the physical theory.

However, I believe that with this amount of data it is not possible to establish this with as much confidence as in the 2^k experiment. It may be necessary to run more replicates or to increase the number of treatments.

References

Annis, D. H. (2005). Teacher's Corner: Rethinking the Paper Helicopter: Combining Statistical and Engineering Knowledge. Taylor & Francis, vol. 59, no. 4, pp. 320–326.

Box, G. E. P. (1992). Teaching engineers experimental design with a paper helicopter. Quality Engineering, 4(3), 453–459.

Gutiérrez P., H., et al. (2008). Análisis y Diseño de Experimentos (3rd ed.). McGraw-Hill Interamericana.

Johnson, J. A., et al. (2006). A 'Six Sigma'© Black Belt Case Study: G. E. P. Box's Paper Helicopter Experiment, Part A. Taylor & Francis, vol. 18, no. 4, pp. 413–430.

Assignment 4: The Paper Helicopter Response Surface

Photographs of the experiment:

Test results:

     run.order  std.order  LA         AI         Block  Response
1    1          1          5          0          1      1.378
2    2          2          11         0          1      2.336
3    3          3          5          24         1      1.290
4    4          4          11         24         1      1.792
5    5          5          8          12         1      1.650
6    6          6          8          12         1      1.700
7    7          7          8          12         1      1.400
8    1          1          3.757359   12         2      1.234
9    2          2          12.242641  12         2      1.642
10   3          3          8          -4.970563  2      2.112
11   4          4          8          28.970563  2      1.418
12   5          5          8          12         2      1.680
13   6          6          8          12         2      1.780
14   7          7          8          12         2      1.650
LA: Length of wings
AI: Angle of incidence

R code input for creating the central composite design:


library (rsm)
trsm <- ccd(basis = 2, randomize = F, n0 = c(3,3),
            coding = list(x1~(LA-8.25)/2.75, x2~(AI-12)/12))

Y <- c(1.378,2.336,1.290,1.792,1.650,1.700,1.400,1.234,1.642,2.112,1.418,
       1.680,1.780,1.650)                                   # results
trsm <- data.frame(trsm,Y)                                  # add the response column
trsm <- coded.data(trsm, x1~(LA-8.25)/2.75, x2~(AI-12)/12)  # re-code
trsm

Proposal of the first model:


> first <- rsm(Y~SO(x1,x2),data = trsm)
> summary(first)

Call:
rsm(formula = Y ~ SO(x1, x2), data = trsm)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 1.643333 0.066813 24.5959 7.976e-09 ***
x1 0.254625 0.057862 4.4006 0.002285 **
x2 -0.201683 0.057862 -3.4856 0.008252 **
x1:x2 -0.114000 0.081829 -1.3931 0.201066
x1^2 -0.078292 0.060225 -1.3000 0.229806
x2^2 0.085208 0.060225 1.4148 0.194838
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.824, Adjusted R-squared: 0.714


F-statistic: 7.491 on 5 and 8 DF, p-value: 0.006899

Analysis of Variance Table

Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2) 2 0.84408 0.42204 15.7571 0.00168
TWI(x1, x2) 1 0.05198 0.05198 1.9409 0.20107
PQ(x1, x2) 2 0.10709 0.05355 1.9992 0.19764
Residuals 8 0.21427 0.02678
Lack of fit 3 0.13174 0.04391 2.6603 0.15946
Pure error 5 0.08253 0.01651

Stationary point of response surface:
x1 x2
0.5141192 1.5273894

Stationary point in original units:


LA AI
9.663828 30.328673

Eigenanalysis:
eigen() decomposition
$values
[1] 0.1031181 -0.0962014

$vectors
[,1] [,2]
x1 -0.2997573 -0.9540155
x2 0.9540155 -0.2997573

First Model Contour plot:

First Model 3D Surface plot:

Proposal of the second model:

I considered a first-order proposal since both x1 and x2 had statistical significance.


> second <- rsm(Y~FO(x1,x2), trsm)
> summary(second)

Call:
rsm(formula = Y ~ FO(x1, x2), data = trsm)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 1.647286 0.049238 33.4558 2.034e-12 ***
x1 0.254625 0.065135 3.9092 0.002438 **
x2 -0.201683 0.065135 -3.0964 0.010170 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.6933, Adjusted R-squared: 0.6376


F-statistic: 12.43 on 2 and 11 DF, p-value: 0.001502

159
Analysis of Variance Table

Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2) 2 0.84408 0.42204 12.4345 0.001502
Residuals 11 0.37335 0.03394
Lack of fit 6 0.29082 0.04847 2.9364 0.128711
Pure error 5 0.08253 0.01651

Direction of steepest ascent (at radius 1):


x1 x2
0.7838885 -0.6209016

Corresponding increment in original units:


LA AI
2.155693 -7.450819

This second model is simpler, but it is not a clear improvement: its R-squared value is lower (0.69 vs. 0.82) and its lack-of-fit p-value is smaller (0.13 vs. 0.16), i.e., there is slightly more evidence of lack of fit.

More replicates might have helped find a better model.
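One possible next step with the first-order model, sketched here, is to list candidate follow-up runs along the path of steepest ascent, in both coded and original units, using rsm's steepest() function:

# Candidate runs along the steepest-ascent direction of the FO model.
steepest(second, dist = seq(0, 2, by = 0.5))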

Second Model Contour plot:

Second Model 3D Surface plot:

Assignment 5: The Paper Helicopter Central Composite Design
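The three central composite variants used below differ only in where the factorial and axial (star) points are placed in coded units. A small sketch that prints the distinct coded levels of x1 for each variant:

library (rsm)
ccc <- ccd(basis = 2, randomize = FALSE, n0 = c(3, 3))
cci <- ccd(basis = 2, randomize = FALSE, n0 = c(3, 3), inscribed = TRUE)
ccf <- ccd(basis = 2, randomize = FALSE, n0 = c(3, 3), alpha = "faces")
lapply(list(ccc = ccc, cci = cci, ccf = ccf),
       function(d) sort(unique(round(d$x1, 3))))
# ccc: axial points beyond +/-1; cci: factorial points shrunk inside +/-1;
# ccf: axial points on the faces at +/-1.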

Central Composite Circumscribed

R Input Code:
library (rsm)
heli.ccc <- ccd(basis = 2, randomize = F, n0 = c(3,3),
coding = list(x1~(la-105)/2,x2~(aa-10)/2))
heli.ccc

Y.ccc <- c(2.83, 2.39, 2.25, 2.31, 2.32, 2.43, 2.55, 1.92, 2.05, 2.58, 2.44, 2.51,
2.51, 2.71)
heli.ccc <- data.frame(heli.ccc,Y.ccc)
heli.ccc <- coded.data(heli.ccc,x1~(la-105)/2,x2~(aa-10)/2)
heli.ccc

primera.ccc <- rsm(Y.ccc~SO(x1,x2),data = heli.ccc)


summary(primera.ccc)

mejor.ccc <- rsm(Y.ccc~FO(x1)+PQ(x1),heli.ccc)


summary(mejor.ccc)

persp (primera.ccc, ~x1+x2, theta = 50, contours = "col", col = rainbow(3))

R Output Code:
> library (rsm)
> heli.ccc <- ccd(basis = 2, randomize = F, n0 = c(3,3),
+ coding = list(x1~(la-105)/0.02,x2~(aa-10)/0.05))
> heli.ccc
run.order std.order la aa Block
1 1 1 104.9800 9.950000 1
2 2 2 105.0200 9.950000 1
3 3 3 104.9800 10.050000 1
4 4 4 105.0200 10.050000 1
5 5 5 105.0000 10.000000 1
6 6 6 105.0000 10.000000 1
7 7 7 105.0000 10.000000 1
8 1 1 104.9717 10.000000 2
9 2 2 105.0283 10.000000 2
10 3 3 105.0000 9.929289 2
11 4 4 105.0000 10.070711 2
12 5 5 105.0000 10.000000 2
13 6 6 105.0000 10.000000 2
14 7 7 105.0000 10.000000 2

Data are stored in coded form using these coding formulas ...
x1 ~ (la - 105)/0.02
x2 ~ (aa - 10)/0.05

> Y.ccc <- c(2.83, 2.39, 2.25, 2.31, 2.32, 2.43, 2.55, 1.92, 2.05, 2.58, 2.44,
2.51, 2.51, 2.71)
> heli.ccc <- data.frame(heli.ccc,Y.ccc)
> heli.ccc <- coded.data(heli.ccc,x1~(la-105)/2,x2~(aa-10)/2)
> heli.ccc
run.order std.order la aa Block Y.ccc
1 1 1 103.0000 8.000000 1 2.83
2 2 2 107.0000 8.000000 1 2.39
3 3 3 103.0000 12.000000 1 2.25
4 4 4 107.0000 12.000000 1 2.31
5 5 5 105.0000 10.000000 1 2.32
6 6 6 105.0000 10.000000 1 2.43
7 7 7 105.0000 10.000000 1 2.55
8 1 1 102.1716 10.000000 2 1.92
9 2 2 107.8284 10.000000 2 2.05
10 3 3 105.0000 7.171573 2 2.58
11 4 4 105.0000 12.828427 2 2.44
12 5 5 105.0000 10.000000 2 2.51
13 6 6 105.0000 10.000000 2 2.51
14 7 7 105.0000 10.000000 2 2.71

Data are stored in coded form using these coding formulas ...
x1 ~ (la - 105)/2
x2 ~ (aa - 10)/2
>
> primera.ccc <- rsm(Y.ccc~SO(x1,x2),data = heli.ccc)
> summary(primera.ccc)

Call:
rsm(formula = Y.ccc ~ SO(x1, x2), data = heli.ccc)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 2.505000 0.068978 36.3157 3.622e-10 ***
x1 -0.024519 0.059737 -0.4104 0.692249
x2 -0.107249 0.059737 -1.7953 0.110335
x1:x2 0.125000 0.084481 1.4796 0.177239
x1^2 -0.210625 0.062176 -3.3875 0.009535 **
x2^2 0.051875 0.062176 0.8343 0.428308
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.6954, Adjusted R-squared: 0.505


F-statistic: 3.652 on 5 and 8 DF, p-value: 0.05118

Analysis of Variance Table

Response: Y.ccc
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2)  2 0.09683 0.048414 1.6959 0.24322
TWI(x1, x2) 1 0.06250 0.062500 2.1893 0.17724
PQ(x1, x2) 2 0.36203 0.181015 6.3407 0.02239
Residuals 8 0.22838 0.028548
Lack of fit 3 0.14443 0.048145 2.8675 0.14311
Pure error 5 0.08395 0.016790

Stationary point of response surface:


x1 x2
0.1830828 0.8131411

Stationary point in original units:


la aa
105.36617 11.62628

Eigenanalysis:
eigen() decomposition
$values
[1] 0.06599629 -0.22474629

$vectors
[,1] [,2]
x1 -0.2203854 -0.9754129
x2 -0.9754129 0.2203854

> mejor.ccc <- rsm(Y.ccc~FO(x1)+PQ(x1),heli.ccc)


> summary(mejor.ccc)

Call:
rsm(formula = Y.ccc ~ FO(x1) + PQ(x1), data = heli.ccc)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 2.536923 0.064999 39.0300 3.783e-13 ***
x1 -0.024519 0.067653 -0.3624 0.72390
x1^2 -0.214615 0.070207 -3.0569 0.01091 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.4628, Adjusted R-squared: 0.3651


F-statistic: 4.738 on 2 and 11 DF, p-value: 0.0328

Analysis of Variance Table

Response: Y.ccc
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1) 1 0.00481 0.00481 0.1313 0.72390
PQ(x1) 1 0.34216 0.34216 9.3445 0.01091
Residuals 11 0.40278 0.03662
Lack of fit 2 0.13759 0.06879 2.3347 0.15248
Pure error 9 0.26519 0.02947

Stationary point of response surface:
x1
-0.05712319

Stationary point in original units:


la
104.8858

Eigenanalysis:
eigen() decomposition
$values
[1] -0.2146154

$vectors
[,1]
x1 1

> persp (primera.ccc, ~x1+x2, theta = 50, contours = "col", col = rainbow(3))

Central Composite Inscribed

R Input Code:
library (rsm)
heli.cci <- ccd(basis = 2, randomize = F, n0 = c(3,3), inscribed = T,
coding = list(x1~(la-105)/2,x2~(aa-10)/2))
heli.cci

Y.cci <-
c(2.39,2.62,2.57,1.80,2.50,2.63,2.18,2.33,2.08,2.13,2.42,2.63,2.45,2.37)
heli.cci <- data.frame(heli.cci,Y.cci)
heli.cci <- coded.data(heli.cci,x1~(la-105)/2,x2~(aa-10)/2)
heli.cci

primera.cci <- rsm(Y.cci~SO(x1,x2),data = heli.cci)


summary(primera.cci)

persp (primera.cci, ~x1+x2, theta = 50, contours = "col", col = rainbow(3))

R Output Code:
> library (rsm)
> heli.cci <- ccd(basis = 2, randomize = F, n0 = c(3,3), inscribed = T,
+ coding = list(x1~(la-105)/2,x2~(aa-10)/2))
> heli.cci
run.order std.order la aa Block
1 1 1 103.5858 8.585786 1
2 2 2 106.4142 8.585786 1
3 3 3 103.5858 11.414214 1
4 4 4 106.4142 11.414214 1
5 5 5 105.0000 10.000000 1
6 6 6 105.0000 10.000000 1
7 7 7 105.0000 10.000000 1
8 1 1 103.0000 10.000000 2
9 2 2 107.0000 10.000000 2
10 3 3 105.0000 8.000000 2
11 4 4 105.0000 12.000000 2
12 5 5 105.0000 10.000000 2
13 6 6 105.0000 10.000000 2
14 7 7 105.0000 10.000000 2

Data are stored in coded form using these coding formulas ...
x1 ~ (la - 105)/2
x2 ~ (aa - 10)/2
>
> Y.cci <-
c(2.39,2.62,2.57,1.80,2.50,2.63,2.18,2.33,2.08,2.13,2.42,2.63,2.45,2.37)
> heli.cci <- data.frame(heli.cci,Y.cci)
> heli.cci <- coded.data(heli.cci,x1~(la-105)/2,x2~(aa-10)/2)

> heli.cci
run.order std.order la aa Block Y.cci
1 1 1 103.5858 8.585786 1 2.39
2 2 2 106.4142 8.585786 1 2.62
3 3 3 103.5858 11.414214 1 2.57
4 4 4 106.4142 11.414214 1 1.80
5 5 5 105.0000 10.000000 1 2.50
6 6 6 105.0000 10.000000 1 2.63
7 7 7 105.0000 10.000000 1 2.18
8 1 1 103.0000 10.000000 2 2.33
9 2 2 107.0000 10.000000 2 2.08
10 3 3 105.0000 8.000000 2 2.13
11 4 4 105.0000 12.000000 2 2.42
12 5 5 105.0000 10.000000 2 2.63
13 6 6 105.0000 10.000000 2 2.45
14 7 7 105.0000 10.000000 2 2.37

Data are stored in coded form using these coding formulas ...
x1 ~ (la - 105)/2
x2 ~ (aa - 10)/2
>
> primera.cci <- rsm(Y.cci~SO(x1,x2),data = heli.cci)
> summary(primera.cci)

Call:
rsm(formula = Y.cci ~ SO(x1, x2), data = heli.cci)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 2.460000 0.080395 30.5990 1.413e-09 ***
x1 -0.157959 0.098463 -1.6042 0.14733
x2 -0.040637 0.098463 -0.4127 0.69066
x1:x2 -0.500000 0.196926 -2.5390 0.03476 *
x1^2 -0.202500 0.144934 -1.3972 0.19989
x2^2 -0.132500 0.144934 -0.9142 0.38733
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.5959, Adjusted R-squared: 0.3433


F-statistic: 2.359 on 5 and 8 DF, p-value: 0.1344

Analysis of Variance Table

Response: Y.cci
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2) 2 0.10641 0.053205 1.3720 0.30740
TWI(x1, x2) 1 0.25000 0.250000 6.4466 0.03476
PQ(x1, x2) 2 0.10109 0.050546 1.3034 0.32361
Residuals 8 0.31024 0.038780
Lack of fit  3 0.16424 0.054747 1.8749 0.25149
Pure error   5 0.14600 0.029200

Stationary point of response surface:


x1 x2
0.1509774 -0.4382105

Stationary point in original units:


la aa
105.301955 9.123579

Eigenanalysis:
eigen() decomposition
$values
[1] 0.08493811 -0.41993811

$vectors
[,1] [,2]
x1 0.6562592 -0.7545356
x2 -0.7545356 -0.6562592

> persp (primera.cci, ~x1+x2, theta = 50, contours = "col", col = rainbow(3))

Central Composite Face-Centered

R Input Code:
library (rsm)
heli.ccf <- ccd(basis = 2, randomize = F, n0 = c(3,3), alpha="faces",
coding = list(x1~(la-105)/2,x2~(aa-10)/2))
heli.ccf

Y.ccf <-
c(2.51,2.12,2.39,2.18,2.38,2.45,2.31,2.07,2.18,1.98,2.38,2.71,2.45,2.64)
heli.ccf <- data.frame(heli.ccf,Y.ccf)
heli.ccf <- coded.data(heli.ccf,x1~(la-105)/2,x2~(aa-10)/2)
heli.ccf

primera.ccf <- rsm(Y.ccf~SO(x1,x2),data = heli.ccf)


summary(primera.ccf)

persp (primera.ccf, ~x1+x2, theta = 50, contours = "col", col = rainbow(3))

R Output Code:
> library (rsm)
> heli.ccf <- ccd(basis = 2, randomize = F, n0 = c(3,3), alpha="faces",
+ coding = list(x1~(la-105)/2,x2~(aa-10)/2),)
> heli.ccf
run.order std.order la aa Block
1 1 1 103 8 1
2 2 2 107 8 1
3 3 3 103 12 1
4 4 4 107 12 1
5 5 5 105 10 1
6 6 6 105 10 1
7 7 7 105 10 1
8 1 1 103 10 2
9 2 2 107 10 2
10 3 3 105 8 2
11 4 4 105 12 2
12 5 5 105 10 2
13 6 6 105 10 2
14 7 7 105 10 2

Data are stored in coded form using these coding formulas ...
x1 ~ (la - 105)/2
x2 ~ (aa - 10)/2
>
> Y.ccf <-
c(2.51,2.12,2.39,2.18,2.38,2.45,2.31,2.07,2.18,1.98,2.38,2.71,2.45,2.64)
> heli.ccf <- data.frame(heli.ccf,Y.ccf)
> heli.ccf <- coded.data(heli.ccf,x1~(la-105)/2,x2~(aa-10)/2)

> heli.ccf
run.order std.order la aa Block Y.ccf
1 1 1 103 8 1 2.51
2 2 2 107 8 1 2.12
3 3 3 103 12 1 2.39
4 4 4 107 12 1 2.18
5 5 5 105 10 1 2.38
6 6 6 105 10 1 2.45
7 7 7 105 10 1 2.31
8 1 1 103 10 2 2.07
9 2 2 107 10 2 2.18
10 3 3 105 8 2 1.98
11 4 4 105 12 2 2.38
12 5 5 105 10 2 2.71
13 6 6 105 10 2 2.45
14 7 7 105 10 2 2.64

Data are stored in coded form using these coding formulas ...
x1 ~ (la - 105)/2
x2 ~ (aa - 10)/2
>
> primera.ccf <- rsm(Y.ccf~SO(x1,x2),data = heli.ccf)
> summary(primera.ccf)

Call:
rsm(formula = Y.ccf ~ SO(x1, x2), data = heli.ccf)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 2.432941 0.085996 28.2912 2.633e-09 ***
x1 -0.081667 0.091550 -0.8920 0.3984
x2 0.056667 0.091550 0.6190 0.5531
x1:x2 0.045000 0.112125 0.4013 0.6987
x1^2 -0.136765 0.133225 -1.0266 0.3347
x2^2 -0.081765 0.133225 -0.6137 0.5564
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.3166, Adjusted R-squared: -0.1105


F-statistic: 0.7413 on 5 and 8 DF, p-value: 0.6139

Analysis of Variance Table

Response: Y.ccf
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2) 2 0.05928 0.029642 0.5894 0.57704
TWI(x1, x2) 1 0.00810 0.008100 0.1611 0.69868
PQ(x1, x2) 2 0.11900 0.059501 1.1832 0.35469
Residuals 8 0.40231 0.050288
Lack of fit  3 0.28371 0.094569 3.9869 0.08534
Pure error   5 0.11860 0.023720

Stationary point of response surface:


x1 x2
-0.253012 0.276899

Stationary point in original units:


la aa
104.4940 10.5538

Eigenanalysis:
eigen() decomposition
$values
[1] -0.07373303 -0.14479638

$vectors
[,1] [,2]
x1 -0.3361865 -0.9417954
x2 -0.9417954 0.3361865

> persp (primera.ccf, ~x1+x2, theta = 50, contours = "col", col = rainbow(3))

Assignment 6: The Virtual Catapult

Factorial Design

2^3 Factorial Design

R Input Code:
library (DoE.base)
vars <- list(ra=c(140,180),
ce=c(200,300),
fa=c(120,135))
tabla <- fac.design(factor.names = vars, randomize = F)

tabla

Y <- c(128.21,339.51,221.89,404.26,25.01,253.44,60.58,461.15)

tab.1 <- add.response(tabla,Y)

tab.1

plot (tab.1)  # graphical summary: ra has the largest effect

halfnormal(tab.1, alpha = .05)  # ra is the significant factor

fit <- lm (Y~ra+ce+fa, tab.1)

anova (fit)
summary (fit)

R Output Code:
> library (DoE.base)
> vars <- list(ra=c(140,180),
+ ce=c(200,300),
+ fa=c(120,135))
> tabla <- fac.design(factor.names = vars, randomize = F)
creating full factorial with 8 runs ...

>
> tabla
ra ce fa
1 140 200 120
2 180 200 120
3 140 300 120
4 180 300 120
5 140 200 135
6 180 200 135
7 140 300 135
8 180 300 135
class=design, type= full factorial
>
> Y <- c(128.21,339.51,221.89,404.26,25.01,253.44,60.58,461.15)
>
> tab.1 <- add.response(tabla,Y)
>
> tab.1
ra ce fa Y
1 140 200 120 128.21
2 180 200 120 339.51
3 140 300 120 221.89
4 180 300 120 404.26
5 140 200 135 25.01
6 180 200 135 253.44
7 140 300 135 60.58
8 180 300 135 461.15
class=design, type= full factorial
>
> plot (tab.1)  # graphical summary: ra has the largest effect
>
> halfnormal(tab.1, alpha = .05)  # ra is the significant factor

Significant effects (alpha=0.05, Lenth method):


[1] ra1

>
> fit <- lm (Y~ra+ce+fa, tab.1)
>
> anova (fit)
Analysis of Variance Table

Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
ra 1 130732 130732 33.8689 0.004341 **
ce 1 20171 20171 5.2258 0.084227 .
fa 1 10782 10782 2.7932 0.169982
Residuals 4 15440 3860
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> summary (fit)

Call:
lm.default(formula = Y ~ ra + ce + fa, data = tab.1)

Residuals:
1 2 3 4 5 6 7 8
32.790 -11.577 26.043 -47.255 3.013 -24.225 -61.845 83.057

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 236.76 21.97 10.778 0.00042 ***
ra1 127.83 21.97 5.820 0.00434 **
ce1 50.21 21.97 2.286 0.08423 .
fa1 -36.71 21.97 -1.671 0.16998
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 62.13 on 4 degrees of freedom


Multiple R-squared: 0.9128, Adjusted R-squared: 0.8475
F-statistic: 13.96 on 3 and 4 DF, p-value: 0.01383
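As a cross-check of the fitted coefficients, the main effect of each factor can be computed directly from the 2^3 response vector (high-level mean minus low-level mean); each effect is twice the corresponding lm coefficient, e.g. ra: 2 × 127.83 ≈ 255.66:

Y  <- c(128.21, 339.51, 221.89, 404.26, 25.01, 253.44, 60.58, 461.15)
ra <- rep(c(-1, 1), times = 4)        # ra varies fastest in fac.design order
ce <- rep(c(-1, -1, 1, 1), times = 2)
fa <- rep(c(-1, 1), each = 4)
sapply(list(ra = ra, ce = ce, fa = fa),
       function(s) mean(Y[s == 1]) - mean(Y[s == -1]))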

2^5 Factorial Design

Input R Code:
library (DoE.base)
vars <- list(ra=c(140,180),
fa=c(120,135),
ce=c(200,300),
pe=c(150,200),
bp=c(150,250)
)
tabla <- fac.design(factor.names = vars, randomize = F)

Y <- c(76.68,214.35,0,155.03,142.48,378.26,15.68,290.77,
105.13,276.46,5.65,211.87,184.94,493.15,37.23,
378.04,102.74,257.81,7.01,190.75,184.82,462.52,
26.59,360.94,131.55,333.75,15.22,255.12,226.40,
602.26,50.29,465.19)

tab.1 <- add.response(tabla,Y)

plot (tab.1)  # graphical summary: ra has the largest effect

halfnormal(tab.1, alpha = .05)  # ra is the significant factor

fit <- lm (Y~ra+ce+fa+ra:ce+pe+bp+ra:pe+fa:ce+ra:bp+ra:fa+ce:pe+ce:bp, tab.1)

anova (fit)
summary (fit)

Output R Code:
> library (DoE.base)
Loading required package: grid
Loading required package: conf.design
Registered S3 method overwritten by 'DoE.base':
method from
factorize.factor conf.design

Attaching package: ‘DoE.base’

The following objects are masked from ‘package:stats’:

aov, lm

The following object is masked from ‘package:graphics’:

plot.design

The following object is masked from ‘package:base’:

lengths

> vars <- list(ra=c(140,180),


+ fa=c(120,135),
+ ce=c(200,300),
+ pe=c(150,200),
+ bp=c(150,250)
+ )
> tabla <- fac.design(factor.names = vars, randomize = F)
creating full factorial with 32 runs ...

>
> Y <- c(76.68,214.35,0,155.03,142.48,378.26,15.68,290.77,
+ 105.13,276.46,5.65,211.87,184.94,493.15,37.23,
+ 378.04,102.74,257.81,7.01,190.75,184.82,462.52,
+ 26.59,360.94,131.55,333.75,15.22,255.12,226.40,
+ 602.26,50.29,465.19)
>
> tab.1 <- add.response(tabla,Y)
>
> plot (tab.1)  # graphical summary: ra has the largest effect
>
> halfnormal(tab.1, alpha = .05)  # ra is the significant factor

Significant effects (alpha=0.05, Lenth method):


[1] ra1 ce1 fa1 ra1:ce1 pe1 bp1 ra1:pe1 fa1:ce1 ra1:bp1
ra1:fa1 ce1:pe1

[12] ce1:bp1

> fit <- lm (Y~ra+ce+fa+ra:ce+pe+bp+ra:pe+fa:ce+ra:bp+ra:fa+ce:pe+ce:bp, tab.1)


>
> anova (fit)

Analysis of Variance Table

Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
ra 1 503471 503471 2972.6352 < 2.2e-16 ***
ce 1 120104 120104 709.1274 < 2.2e-16 ***
fa 1 91156 91156 538.2106 2.112e-15 ***
pe 1 25641 25641 151.3913 1.696e-10 ***
bp 1 15631 15631 92.2892 1.000e-08 ***
ra:ce 1 38610 38610 227.9646 4.905e-12 ***
ra:pe 1 7970 7970 47.0544 1.519e-06 ***
ce:fa 1 4809 4809 28.3929 3.837e-05 ***
ra:bp 1 3907 3907 23.0671 0.0001237 ***
ra:fa        1   2560    2560  15.1153 0.0009899 ***
ce:pe        1   1877    1877  11.0806 0.0035291 **
ce:bp 1 1374 1374 8.1121 0.0102834 *
Residuals 19 3218 169
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> summary (fit)

Call:
lm.default(formula = Y ~ ra + ce + fa + ra:ce + pe + bp + ra:pe +
fa:ce + ra:bp + ra:fa + ce:pe + ce:bp, data = tab.1)

Residuals:
Min 1Q Median 3Q Max
-21.4763 -4.2238 -0.4631 4.0131 25.2325

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 207.459 2.301 90.176 < 2e-16 ***
ra1 125.433 2.301 54.522 < 2e-16 ***
ce1 61.264 2.301 26.629 < 2e-16 ***
fa1 -53.372 2.301 -23.199 2.11e-15 ***
pe1 28.307 2.301 12.304 1.70e-10 ***
bp1 22.101 2.301 9.607 1.00e-08 ***
ra1:ce1 34.736 2.301 15.098 4.91e-12 ***
ra1:pe1 15.781 2.301 6.860 1.52e-06 ***
ce1:fa1 -12.259 2.301 -5.328 3.84e-05 ***
ra1:bp1 11.049 2.301 4.803 0.000124 ***
ra1:fa1 8.944 2.301 3.888 0.000990 ***
ce1:pe1 7.658 2.301 3.329 0.003529 **
ce1:bp1 6.552 2.301 2.848 0.010283 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 13.01 on 19 degrees of freedom


Multiple R-squared: 0.9961, Adjusted R-squared: 0.9936
F-statistic: 402 on 12 and 19 DF, p-value: < 2.2e-16

Response Surface

Central Composite Circumscribed

Input R Code:
library (rsm)

#CCC

catapulta.ccc <- ccd(basis = 2, randomize = F, n0 = c(3,3),


coding = list(x1~(FA-115)/15,x2~(BP-148)/32))
catapulta.ccc

Y.ccc <- c(226.72,147.07,297.25,220.21,288.24,296.90,287.55,120.43,219.32,
           223.08,346.23,289.58,292.28,292.33)
catapulta.ccc <- data.frame(catapulta.ccc,Y.ccc)
catapulta.ccc <- coded.data(catapulta.ccc,x1~(FA-115)/15,x2~(BP-148)/32)
catapulta.ccc

primera.ccc <- rsm(Y.ccc~SO(x1,x2), data = catapulta.ccc)


summary(primera.ccc)

persp (primera.ccc, ~x1+x2, theta = 50, contours = "col", col = rainbow(10))

Output R Code:
> library (rsm)
>
> #CCC
>
> catapulta.ccc <- ccd(basis = 2, randomize = F, n0 = c(3,3),
+ coding = list(x1~(FA-115)/15,x2~(BP-148)/32))
> catapulta.ccc
run.order std.order FA BP Block
1 1 1 100.0000 116.0000 1
2 2 2 130.0000 116.0000 1
3 3 3 100.0000 180.0000 1
4 4 4 130.0000 180.0000 1
5 5 5 115.0000 148.0000 1
6 6 6 115.0000 148.0000 1
7 7 7 115.0000 148.0000 1
8 1 1 93.7868 148.0000 2
9 2 2 136.2132 148.0000 2
10 3 3 115.0000 102.7452 2
11 4 4 115.0000 193.2548 2
12 5 5 115.0000 148.0000 2
13 6 6 115.0000 148.0000 2
14 7 7 115.0000 148.0000 2

Data are stored in coded form using these coding formulas ...
x1 ~ (FA - 115)/15
x2 ~ (BP - 148)/32
>
> Y.ccc <- c(226.72,147.07,297.25,220.21,288.24,296.90,287.55,120.43,219.32,
+            223.08,346.23,289.58,292.28,292.33)
> catapulta.ccc <- data.frame(catapulta.ccc,Y.ccc)
> catapulta.ccc <- coded.data(catapulta.ccc,x1~(FA-115)/15,x2~(BP-148)/32)
> catapulta.ccc
run.order std.order FA BP Block Y.ccc
1 1 1 100.0000 116.0000 1 226.72
2 2 2 130.0000 116.0000 1 147.07
3 3 3 100.0000 180.0000 1 297.25
4 4 4 130.0000 180.0000 1 220.21
5 5 5 115.0000 148.0000 1 288.24
6 6 6 115.0000 148.0000 1 296.90
7 7 7 115.0000 148.0000 1 287.55
8 1 1 93.7868 148.0000 2 120.43
9 2 2 136.2132 148.0000 2 219.32
10 3 3 115.0000 102.7452 2 223.08
11 4 4 115.0000 193.2548 2 346.23
12 5 5 115.0000 148.0000 2 289.58
13 6 6 115.0000 148.0000 2 292.28
14 7 7 115.0000 148.0000 2 292.33

Data are stored in coded form using these coding formulas ...
x1 ~ (FA - 115)/15
x2 ~ (BP - 148)/32
>
> primera.ccc <- rsm(Y.ccc~SO(x1,x2), data = catapulta.ccc)
> summary(primera.ccc)

Call:
rsm(formula = Y.ccc ~ SO(x1, x2), data = catapulta.ccc)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 291.1467 15.2804 19.0536 5.961e-08 ***
x1 -2.1048 13.2332 -0.1591 0.877568
x2 39.7288 13.2332 3.0022 0.017015 *
x1:x2 0.6525 18.7146 0.0349 0.973041
x1^2 -61.7490 13.7736 -4.4831 0.002047 **
x2^2 -4.3590 13.7736 -0.3165 0.759744
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.7846, Adjusted R-squared: 0.65


F-statistic: 5.828 on 5 and 8 DF, p-value: 0.01464

Analysis of Variance Table

Response: Y.ccc
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2) 2 12662.5 6331.2 4.5192 0.04860
TWI(x1, x2) 1 1.7 1.7 0.0012 0.97304
PQ(x1, x2) 2 28158.2 14079.1 10.0497 0.00657
Residuals 8 11207.6 1400.9
Lack of fit 3 11148.0 3716.0 311.6148 4.197e-06
Pure error 5 59.6 11.9

Stationary point of response surface:


x1 x2
0.007037183 4.557670520

Stationary point in original units:


FA BP
115.1056 293.8455

Eigenanalysis:
eigen() decomposition
$values
[1] -4.357104 -61.750813

$vectors
[,1] [,2]
x1 -0.005684513 -0.999983843
x2 -0.999983843 0.005684513

> persp (primera.ccc, ~x1+x2, theta = 50, contours = "col", col = rainbow(10))
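
Both eigenvalues of the fitted surface are negative, so the stationary point is a maximum, although it falls outside the explored region (BP ≈ 294 versus a design range of roughly 103-193). In rsm the canonical analysis and the climb direction can also be requested directly; a short sketch assuming the fitted object above:

canonical(primera.ccc)                              # stationary point and eigenanalysis only
steepest(primera.ccc, dist = seq(0, 2, by = 0.5))   # path of steepest ascent, in coded units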

Central Composite Inscribed

R Code Input:
library (rsm)

# CCI: inscribed central composite design (factorial points pulled inside the region)
catapulta.cci <- ccd(basis = 2, randomize = F, n0 = c(3,3), inscribed = T,
                     coding = list(x1~(FA-115)/15, x2~(BP-148)/32))
catapulta.cci

# Measured responses, one per design run (defined before binding it to the table)
Y.cci <- c(254.22,188.01,315.55,244.34,292.39,296.1,297.13,266.22,183.6,
           244.19,332.72,291.05,292.72,292.94)

catapulta.cci <- data.frame(catapulta.cci, Y.cci)
catapulta.cci <- coded.data(catapulta.cci, x1~(FA-115)/15, x2~(BP-148)/32)
catapulta.cci

# Full second-order model
primera.cci <- rsm(Y.cci~SO(x1,x2), data = catapulta.cci)
summary(primera.cci)

# Reduced model without the non-significant interaction term
mejor.cci <- rsm(Y.cci~FO(x1,x2)+PQ(x1,x2), catapulta.cci)
summary(mejor.cci)

persp (mejor.cci, ~x1+x2, theta = 50, contours = "col", col = rainbow(3))

R Code Output:
> library (rsm)
>
> #CCI
>
> catapulta.cci <- ccd(basis = 2, randomize = F, n0 = c(3,3), inscribed = T,
+ coding = list(x1~(FA-115)/15,x2~(BP-148)/32))
> catapulta.cci
run.order std.order FA BP Block
1 1 1 104.3934 125.3726 1
2 2 2 125.6066 125.3726 1
3 3 3 104.3934 170.6274 1
4 4 4 125.6066 170.6274 1
5 5 5 115.0000 148.0000 1
6 6 6 115.0000 148.0000 1
7 7 7 115.0000 148.0000 1
8 1 1 100.0000 148.0000 2
9 2 2 130.0000 148.0000 2
10 3 3 115.0000 116.0000 2

11 4 4 115.0000 180.0000 2
12 5 5 115.0000 148.0000 2
13 6 6 115.0000 148.0000 2
14 7 7 115.0000 148.0000 2

Data are stored in coded form using these coding formulas ...
x1 ~ (FA - 115)/15
x2 ~ (BP - 148)/32
>
> catapulta.cci <- data.frame(catapulta.cci,Y.cci)
> catapulta.cci <- coded.data(catapulta.cci,x1~(FA-115)/15,x2~(BP-148)/32)
> catapulta.cci
run.order std.order FA BP Block Y.cci
1 1 1 104.3934 125.3726 1 254.22
2 2 2 125.6066 125.3726 1 188.01
3 3 3 104.3934 170.6274 1 315.55
4 4 4 125.6066 170.6274 1 244.34
5 5 5 115.0000 148.0000 1 292.39
6 6 6 115.0000 148.0000 1 296.10
7 7 7 115.0000 148.0000 1 297.13
8 1 1 100.0000 148.0000 2 266.22
9 2 2 130.0000 148.0000 2 183.60
10 3 3 115.0000 116.0000 2 244.19
11 4 4 115.0000 180.0000 2 332.72
12 5 5 115.0000 148.0000 2 291.05
13 6 6 115.0000 148.0000 2 292.72
14 7 7 115.0000 148.0000 2 292.94

Data are stored in coded form using these coding formulas ...
x1 ~ (FA - 115)/15
x2 ~ (BP - 148)/32
>
>
> Y.cci <- c(254.22,188.01,315.55,244.34,292.39,296.1,297.13,266.22,183.6,244.19,332.72,291.05,292.72,292.94)
>
> primera.cci <- rsm(Y.cci~SO(x1,x2),data = catapulta.cci)
> summary(primera.cci)

Call:
rsm(formula = Y.cci ~ SO(x1, x2), data = catapulta.cci)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 293.7217 1.8458 159.1261 2.721e-15 ***
x1 -44.9477 2.2607 -19.8823 4.267e-08 ***
x2 42.9320 2.2607 18.9907 6.118e-08 ***
x1:x2 -2.5000 4.5214 -0.5529 0.59542
x1^2 -71.8879 3.3276 -21.6033 2.221e-08 ***

x2^2 -8.3429 3.3276 -2.5072 0.03653 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.9935, Adjusted R-squared: 0.9894


F-statistic: 244.7 on 5 and 8 DF, p-value: 1.593e-08

Analysis of Variance Table

Response: Y.cci
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2) 2 15453.8 7726.9 377.9769 1.203e-08
TWI(x1, x2) 1 6.2 6.2 0.3057 0.59542
PQ(x1, x2) 2 9555.4 4777.7 233.7103 8.018e-08
Residuals 8 163.5 20.4
Lack of fit 3 135.7 45.2 8.1385 0.02274
Pure error 5 27.8 5.6

Stationary point of response surface:


x1 x2
-0.3582957 2.6266465

Stationary point in original units:


FA BP
109.6256 232.0527

Eigenanalysis:
eigen() decomposition
$values
[1] -8.318337 -71.912496

$vectors
[,1] [,2]
x1 0.01965969 -0.99980673
x2 -0.99980673 -0.01965969

> mejor.cci <- rsm(Y.cci~FO(x1,x2)+PQ(x1,x2),catapulta.cci)


> summary(mejor.cci)

Call:
rsm(formula = Y.cci ~ FO(x1, x2) + PQ(x1, x2), data = catapulta.cci)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 293.7217 1.7732 165.6433 < 2.2e-16 ***
x1 -44.9477 2.1717 -20.6966 6.712e-09 ***
x2 42.9320 2.1717 19.7685 1.006e-08 ***
x1^2 -71.8879 3.1967 -22.4881 3.221e-09 ***
x2^2 -8.3429 3.1967 -2.6098 0.02828 *
---

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.9933, Adjusted R-squared: 0.9903


F-statistic: 331.4 on 4 and 9 DF, p-value: 9.288e-10

Analysis of Variance Table

Response: Y.cci
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2) 2 15453.8 7726.9 409.5717 1.454e-09
PQ(x1, x2) 2 9555.4 4777.7 253.2460 1.228e-08
Residuals 9 169.8 18.9
Lack of fit 4 142.0 35.5 6.3849 0.03352
Pure error 5 27.8 5.6

Stationary point of response surface:


x1 x2
-0.3126231 2.5729638

Stationary point in original units:


FA BP
110.3107 230.3348

Eigenanalysis:
eigen() decomposition
$values
[1] -8.342917 -71.887917

$vectors
[,1] [,2]
x1 0 -1
x2 -1 0

> persp (mejor.cci, ~x1+x2, theta = 50, contours = "col", col = rainbow(3))
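
Besides persp(), rsm can draw 2-D contour maps of the same fit, which are often easier to read when locating the optimum; a one-line sketch using the reduced model above:

contour(mejor.cci, ~x1+x2, image = TRUE)   # 2-D contour map of the fitted surface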

Central Composite Faces

R Input Code:
library (rsm)

# CCF: face-centered central composite design (axial points on the cube faces)
catapulta.ccf <- ccd(basis = 2, randomize = F, n0 = c(3,3), alpha="faces",
                     coding = list(x1~(FA-115)/15, x2~(BP-148)/32))
catapulta.ccf

# Measured responses, one per design run
Y.ccf <- c(226.34,150.71,302.28,217.90,289.82,289.19,293.13,268.62,167.42,
           246.94,332.46,290.89,300.46,297.10)

# Full second-order model
primera.ccf <- rsm(Y.ccf~SO(x1,x2), data = catapulta.ccf)
summary(primera.ccf)

# Reduced model without the non-significant interaction term
mejor.ccf <- rsm(Y.ccf~FO(x1,x2)+PQ(x1,x2), catapulta.ccf)
summary(mejor.ccf)

persp (mejor.ccf, ~x1+x2, theta = 50, contours = "col", col = rainbow(3))

R Output Code:
> library (rsm)
>
> #CCF
>
> catapulta.ccf <- ccd(basis = 2, randomize = F, n0 = c(3,3), alpha="faces",
+ coding = list(x1~(FA-115)/15,x2~(BP-148)/32))
> catapulta.ccf
run.order std.order FA BP Block
1 1 1 100 116 1
2 2 2 130 116 1
3 3 3 100 180 1
4 4 4 130 180 1
5 5 5 115 148 1
6 6 6 115 148 1
7 7 7 115 148 1
8 1 1 100 148 2
9 2 2 130 148 2
10 3 3 115 116 2
11 4 4 115 180 2
12 5 5 115 148 2
13 6 6 115 148 2
14 7 7 115 148 2

Data are stored in coded form using these coding formulas ...
x1 ~ (FA - 115)/15
x2 ~ (BP - 148)/32
>
> Y.ccf <- c(226.34,150.71,302.28,217.90,289.82,289.19,293.13,268.62,167.42,246.94,332.46,290.89,300.46,297.10)
>
> primera.ccf <- rsm(Y.ccf~SO(x1,x2),data = catapulta.ccf)
> summary(primera.ccf)

Call:
rsm(formula = Y.ccf ~ SO(x1, x2), data = catapulta.ccf)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 292.25294 2.66361 109.7208 5.319e-14 ***
x1 -43.53500 2.83562 -15.3529 3.217e-07 ***
x2 38.10833 2.83562 13.4391 9.005e-07 ***
x1:x2 -2.18750 3.47292 -0.6299 0.5463
x1^2 -70.69676 4.12644 -17.1326 1.370e-07 ***
x2^2 0.98324 4.12644 0.2383 0.8177
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.9897, Adjusted R-squared: 0.9832


F-statistic: 153.6 on 5 and 8 DF, p-value: 1.007e-07

Analysis of Variance Table

Response: Y.ccf
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2) 2 20085.2 10042.6 208.1606 1.264e-07
TWI(x1, x2) 1 19.1 19.1 0.3967 0.54634
PQ(x1, x2) 2 16940.8 8470.4 175.5722 2.462e-07
Residuals 8 386.0 48.2
Lack of fit 3 285.5 95.2 4.7377 0.06347
Pure error 5 100.4 20.1

Stationary point of response surface:


x1 x2
-0.007949323 -19.387893572

Stationary point in original units:


FA BP
114.8808 -472.4126

Eigenanalysis:
eigen() decomposition
$values
[1] 0.9999207 -70.7134501

$vectors
[,1] [,2]
x1 0.01525346 -0.99988366
x2 -0.99988366 -0.01525346

> mejor.ccf <- rsm(Y.ccf~FO(x1,x2)+PQ(x1,x2),catapulta.ccf)


> summary(mejor.ccf)

Call:
rsm(formula = Y.ccf ~ FO(x1, x2) + PQ(x1, x2), data = catapulta.ccf)

Estimate Std. Error t value Pr(>|t|)


(Intercept) 292.25294 2.57279 113.5939 1.612e-15 ***
x1 -43.53500 2.73894 -15.8948 6.813e-08 ***
x2 38.10833 2.73894 13.9135 2.164e-07 ***
x1^2 -70.69676 3.98575 -17.7374 2.611e-08 ***
x2^2 0.98324 3.98575 0.2467 0.8107
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Multiple R-squared: 0.9892, Adjusted R-squared: 0.9844


F-statistic: 205.7 on 4 and 9 DF, p-value: 7.78e-09

Analysis of Variance Table

Response: Y.ccf
Df Sum Sq Mean Sq F value Pr(>F)
FO(x1, x2) 2 20085.2 10042.6 223.1157 2.148e-08
PQ(x1, x2) 2 16940.8 8470.4 188.1861 4.546e-08
Residuals 9 405.1 45.0
Lack of fit 4 304.7 76.2 3.7915 0.08816
Pure error 5 100.4 20.1

Stationary point of response surface:


x1 x2
-0.3078995 -19.3790508

Stationary point in original units:


FA BP
110.3815 -472.1296

Eigenanalysis:
eigen() decomposition
$values
[1] 0.9832353 -70.6967647

$vectors
[,1] [,2]
x1 0 -1
x2 -1 0

> persp (mejor.ccf, ~x1+x2, theta = 50, contours = "col", col = rainbow(3))
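
Note that here one eigenvalue is positive and the stationary point lies far outside the region (BP ≈ -472), so the fitted surface is a saddle only in an extrapolated sense; within the explored region the response still rises with x2. A quick check by predicting along x2 at the stationary x1 (a sketch; -0.31 is the coded value from the output above):

predict(mejor.ccf, newdata = data.frame(x1 = -0.31, x2 = c(-1, 0, 1)))   # response increases with x2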

APPENDIX C: Revised Exam 1

Question 1

A company hired you to evaluate new concrete blends. The blends have been tested, and
the strength (MPa) results after 28 days of curing are shown in the summary table.

x mean sdev n
--- ------ ------ ---
1 43.64 5.556 5
2 40.06 1.006 5
3 41.80 1.056 5
4 34.78 1.934 5

What is the experiment total variance?

Consider that:

s² = (Σy² − (Σy)²/n) / v   (1)   and   F = sF² / sE²   (2)

ANSWER:

We need to find Σy² for each treatment, using (1):

s² = (Σy² − (Σy)²/n) / v   →   Σy² = s²·v + (Σy)²/n

Remembering that s² = sdev² (3), v = n − 1 (4), and Σy = Mᵢ·n (5):

Treatment 1 Σy²:
Σy₁² = (5.556)²(5 − 1) + (43.64 · 5)²/5 = 123.476 + 9522.248 = 9645.725

Treatment 2 Σy²:
Σy₂² = (1.006)²(5 − 1) + (40.06 · 5)²/5 = 4.048 + 8024.018 = 8028.066

Treatment 3 Σy²:
Σy₃² = (1.056)²(5 − 1) + (41.80 · 5)²/5 = 4.461 + 8736.200 = 8740.661

Treatment 4 Σy²:
Σy₄² = (1.934)²(5 − 1) + (34.78 · 5)²/5 = 14.961 + 6048.242 = 6063.203

We now have all the Σy² values for the treatments:

ΣΣy² = 9645.725 + 8028.066 + 8740.661 + 6063.203 = 32477.655

We can now calculate SST:

SST = ΣΣyᵢ² − Φ,  where Φ = (ΣΣyᵢ)² / N

N = k·n = 4·5 = 20

(ΣΣy)² = (218.2 + 200.3 + 209.0 + 173.9)² = (801.4)² = 642241.960

Φ = 642241.960 / 20 = 32112.098

SST = 32477.655 − 32112.098 = 365.557

In conclusion, the experiment total variance (SST) is 365.557.
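
As a cross-check, a minimal R sketch using only the summary statistics from the table reproduces these figures:

means <- c(43.64, 40.06, 41.80, 34.78)            # treatment means from the table
sdevs <- c(5.556, 1.006, 1.056, 1.934)            # treatment standard deviations
n <- 5                                            # replicates per treatment
sum.y2 <- sdevs^2 * (n - 1) + (means * n)^2 / n   # per-treatment sum of y^2, from (1)
sum(sum.y2) - sum(means * n)^2 / (length(means) * n)   # SST, approximately 365.56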

Question 2

A company hired you to evaluate new concrete blends. The blends have been tested, and
the strength (MPa) results after 28 days of curing are shown in the summary table.

x mean sdev n
--- ------ ------ ---
1 37.64 1.862 5
2 38.06 3.913 5
3 31.80 1.056 5
4 34.78 1.934 5

What would be the maximum average expected strength for the best-performance blend at CI = 90% and group variance?

Consider that:

s² = (Σy² − (Σy)²/n) / v   (1)   and   F = sF² / sE²   (2)

ANSWER:

We must first find the t-Student distribution value for this case. I will use the t-Student table, which requires the degrees of freedom of each group (n − 1) and α = 0.1, so α/2 = 0.05:

t(df = 4, α/2 = 0.05) = 2.132

Remembering the formula for confidence intervals:

Mᵢ ± t(df, α/2) · s/√n

and that sdev = s.

Confidence interval for Group 1:
M₁ ± t · s₁/√n₁ = 37.64 ± (2.132)(1.862/√5) = [35.86, 39.42]

Confidence interval for Group 2:
M₂ ± t · s₂/√n₂ = 38.06 ± (2.132)(3.913/√5) = [34.33, 41.79]

Confidence interval for Group 3:
M₃ ± t · s₃/√n₃ = 31.80 ± (2.132)(1.056/√5) = [30.79, 32.81]

Confidence interval for Group 4:
M₄ ± t · s₄/√n₄ = 34.78 ± (2.132)(1.934/√5) = [32.94, 36.62]

We observe that the maximum value obtained is in Group 2, with 41.79 MPa.

In conclusion, the maximum average expected strength for the best-performance blend, considering CI = 90% and its group variance, is 41.79 MPa.
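
The same intervals can be verified in R with qt(); a minimal sketch using the summary statistics above:

means <- c(37.64, 38.06, 31.80, 34.78)       # group means from the table
sdevs <- c(1.862, 3.913, 1.056, 1.934)       # group standard deviations
n <- 5
t.crit <- qt(0.95, df = n - 1)               # two-sided 90% CI, 0.05 per tail (2.132)
round(means + t.crit * sdevs / sqrt(n), 2)   # upper limits; the largest is Group 2: 41.79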

Question 3

A one-factor design of experiment comparing five models of the same Product has been run,
and partial results for the ANOVA are provided below.

Product:

Sum Square 2044.8

Degrees of freedom 4

Total

Sum Square 2922.02

Degrees of freedom 119

What is the factor's variance?

Consider that:

s² = (Σy² − (Σy)²/n) / v   (1)   and   F = sF² / sE²   (2)

ANSWER:

The factor's variance can be interpreted as the Mean Square of the factor (S_F):

S_F = SS_F / df_F

Where:

SS_F is the sum of squares of the treatments (factor);
df_F is the degrees of freedom of the treatments (factor).

Since the information is already provided, we can use a simple division:

S_F = 2044.8 / 4 = 511.2

In conclusion, the factor's variance is 511.2.
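
Although only the factor's variance is asked for, the remaining ANOVA entries follow from the given totals; a short R sketch using the values from the problem statement:

SS.F <- 2044.8; df.F <- 4       # factor sum of squares and degrees of freedom (given)
SS.T <- 2922.02; df.T <- 119    # total sum of squares and degrees of freedom (given)
MS.F <- SS.F / df.F             # factor variance = 511.2
MS.E <- (SS.T - SS.F) / (df.T - df.F)   # error variance, 877.22/115, about 7.63
MS.F / MS.E                     # F statistic, approximately 67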

Question 4

In an assembly operation with two manufacturing cells, the weekly efficiency baseline, calculated from last year's data, indicates a maximum of 66.5% at 99% confidence (with a variance of 1.1).

Over the last seven weeks, cell A obtained an average of 67.9% with a standard error of 0.09, and cell B achieved 67.8% with a variance of 0.3.

1. Calculate the 95% CI limits for:

 The baseline

 Cell A

 Cell B

2. Calculate t0 and tc at 95% to:

 Compare the cells with each other

 Compare cell A with the baseline

 Compare cell B with the baseline

3. Answer the questions and justify your answers:

 Did cell A improve?

 Did cell B improve?

 Which is the most efficient cell?

ANSWER:

We must first interpret the data:

Baseline: 66.5% maximum efficiency at α = 0.01, with s² = 1.1 and n = 52 (52 weeks in a year).

Cell A: 67.9% average with standard error = 0.09 (standard error = s/√n), n = 7.

Cell B: 68.8% average with s = 0.30, n = 7.

Number of treatments, k = 2.

1. Limits at 95% CI

Baseline:

We must first find the mean of this baseline from the 99% CI in order to calculate its limits at 95% CI.

Since the problem states that 66.5% is the maximum efficiency, we can interpret it as:

66.5% = Mᵢ + t(df, α/2) · (1.1/√52)

The degrees of freedom for the baseline are (n − 1); since we have 52 data points, df = 52 − 1 = 51, and α/2 = 0.005.

Since df = 51 is not listed in the table, we use the Excel function "=INV.T.2C(p, df)" (T.INV.2T in English-locale Excel) to calculate the t-Student critical value, where p is the probability and df is the degrees of freedom:

=INV.T.2C(0.005, 51) = 2.934

t(51, 0.005) = 2.934

66.5% = Mᵢ + 2.934 · (1.1/√52)

Mᵢ = 66.5 − 2.934 · (1.1/√52) = 66.05%

We can now build the CI, considering t₅₁ = 2.934:

Confidence interval for the Baseline:

M_BL ± t · s/√n = 66.05 ± (2.934)(1.1/√52) = [65.70, 66.40]
95% CI for the Baseline: [65.70, 66.40]

Cell A:

67.9% average with standard error = 0.09 (standard error = s/√n), n = 7, df = n − 1 = 6.

We must find the t-Student critical value for df = 6 and α/2 = 0.025:

t(6, 0.025) = 2.447

Confidence interval for Cell A:
M_A ± t · s₁/√n₁ = 67.09 ± 2.447 (0.09) = [66.82, 67.36]

95% CI for Cell A: [66.82, 67.36]

Cell B:

68.8% average with s = 0.30, n = 7.

We already know the t-Student critical value for df = 6 and α/2 = 0.025:

t(6, 0.025) = 2.447

Confidence interval for Cell B:
M_B ± t · s/√n = 68.8 ± 2.447 (0.3/√7) = [68.46, 69.14]

95% CI for Cell B: [68.46, 69.14]

2. t₀ and t_c values at 95% CI

To calculate t₀ we need the following formulas:

t₀ = (ȳ₁ − ȳ₂) / (s_p · √(1/n₁ + 1/n₂));   s_p = √[ ((n₁ − 1)s₁² + (n₂ − 1)s₂²) / (n₁ + n₂ − 2) ]

To calculate t_c we need the critical t-Student value at α/2 = 0.025, with the degrees of freedom of each pair:

t_c = t(α/2, df = n₁ + n₂ − 2)

We construct a table for calculating t₀:

Case      ȳᵢ     nᵢ   s     s²
Baseline  66.05  52   1.10  1.21
A         67.09   7   0.24  0.06
B         68.80   7   0.30  0.09

Case   s_p    t₀       t_c
BL-A   1.089  -2.373   2.302
A-B    0.073  -43.631  2.560
B-BL   1.092  6.255    2.302
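
For raw data (rather than summary statistics), R's built-in t.test() performs the same pooled comparison directly; a sketch with hypothetical weekly efficiency vectors a and b (illustrative values, not the exam data):

a <- c(67.8, 67.9, 68.0, 67.9, 68.1, 67.8, 67.8)   # hypothetical weekly efficiencies, cell A
b <- c(68.7, 68.9, 68.8, 68.8, 68.9, 68.7, 68.8)   # hypothetical weekly efficiencies, cell B
t.test(a, b, var.equal = TRUE, conf.level = 0.95)  # pooled two-sample t test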

3. Interpretation:

 Did Cell A improve?

It did. Compared to the baseline, its average is higher and its variance is smaller; the t-Student comparison also indicates that if H₀: µ_BL = µ_A, we should reject it, since |t₀| > t_c.

 Did Cell B improve?

It did, as did Cell A. Compared to the baseline, its average is higher and its variance is smaller; the t-Student comparison also indicates that if H₀: µ_BL = µ_B, we should reject it.

 Which is the most efficient cell?

I would say B is the most efficient, as its CI values are higher than A's. The lower bound of B's interval is above the upper bound of A's, meaning that even in a worst-case scenario its efficiency would exceed A's.

Thank you for your time, it was a fun experience!

- Rafael Ortiz Hernández.

APPENDIX D: Experiment Proposal Slideshow

[The proposal slides appear as images in the original document.]
APPENDIX E: Exam 2

Problem 1

The plant manager wants to improve the productivity of a process. The company's Black Belt began a study, but he was promoted to another position and did not finish it, so you are asked to analyze the available information (Table 1) and draw conclusions from it. It is known that the possible interactions of temperature with pressure and with cycle time were being evaluated. Based on the information, indicate:

7. The hypotheses that can be verified;
8. The complete ANOVA tables;
9. The significant factors;
10. The analytical evaluation of the ANOVA assumptions, x ∈ N(µ, σ);
11. The linear models and their evaluation based on the corresponding criteria;
12. A report of the analysis, including a flow diagram, the R code, the results and the conclusions.

Table 1: Efficiency percentage of Hitachi-brand operation centers (C.T. = cycle time)

C.T.    T1 at P1            T2 at P1            T1 at P2            T2 at P2            T1 at P3            T2 at P3
Corto   61,74,74,85,69,69   66,72,72,66,67,67   67,67,67,71,79,71   77,78,79,80,79,78   75,71,75,69,68,69   74,74,74,74,77,74
Largo   56,55,54,53,54,55   51,52,52,52,53,53   44,43,43,48,48,49   59,59,60,66,67,67   50,51,51,53,52,52   56,57,57,59,60,69

The hypotheses that can be verified

The Null Hypothesis (H0) is:

There is no significant difference between treatments; neither Pressure, Temperature nor time affects the process. H0: µP = µT = µt

The alternate hypotheses are:

Temperature and Pressure do affect the process.

H1: µT ≠ µP

There is a difference between the +1 and −1 levels of the Temperature variable.

H2: µT+1 ≠ µT−1

There is a difference between the +1, 0 and −1 levels of the Pressure variable.

H3: µP−1 ≠ µP0

H4: µP0 ≠ µP+1

H5: µP−1 ≠ µP+1

And there could be a difference between each level of the Pressure and Temperature variables:

H6: µT+1 ≠ µP−1
H7: µT+1 ≠ µP0
H8: µT+1 ≠ µP+1
H9: µT−1 ≠ µP−1
H10: µT−1 ≠ µP0
H11: µT−1 ≠ µP+1

These 11 alternate hypotheses can each be stated for either the +1 or the −1 level of process time:

H1: µT ≠ µP ≠ µt+1
H2: µT+1 ≠ µT−1 ≠ µt+1
H3: µP−1 ≠ µP0 ≠ µt+1
H4: µP0 ≠ µP+1 ≠ µt+1
H5: µP−1 ≠ µP+1 ≠ µt+1
H6: µT+1 ≠ µP−1 ≠ µt+1
H7: µT+1 ≠ µP0 ≠ µt+1
H8: µT+1 ≠ µP+1 ≠ µt+1
H9: µT−1 ≠ µP−1 ≠ µt+1
H10: µT−1 ≠ µP0 ≠ µt+1
H11: µT−1 ≠ µP+1 ≠ µt+1
H12: µT ≠ µP ≠ µt−1
H13: µT+1 ≠ µT−1 ≠ µt−1
H14: µP−1 ≠ µP0 ≠ µt−1
H15: µP0 ≠ µP+1 ≠ µt−1
H16: µP−1 ≠ µP+1 ≠ µt−1
H17: µT+1 ≠ µP−1 ≠ µt−1
H18: µT+1 ≠ µP0 ≠ µt−1
H19: µT+1 ≠ µP+1 ≠ µt−1
H20: µT−1 ≠ µP−1 ≠ µt−1
H21: µT−1 ≠ µP0 ≠ µt−1
H22: µT−1 ≠ µP+1 ≠ µt−1

The complete ANOVA tables

R code input:
library (DoE.base)
vars <- list(temp=c("T1","T2"),
pres=c("P1","P2","P3"),
time=c("Corto","Largo"))
tabla <- fac.design(factor.names = vars, randomize = F, replications = 6)
tabla

Y <- c(61,66,67,77,75,74,56,51,44,59,50,56,74,72,67,78,71,74,55,52,43,59,
51,57,74,72,67,79,75,74,54,52,43,60,51,57,85,66,71,80,69,74,53,52,48,66,
53,59,69,67,79,70,68,74,54,53,48,67,52,60,69,67,71,78,69,74,55,53,49,67,52,69)

tab.1 <- add.response(tabla,Y)


first <- lm (Y~time+temp,tab.1)
anova (first)
summary (first)

R code output:
Analysis of Variance Table

Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
time 1 5635.7 5635.7 213.718 < 2.2e-16 ***
temp 1 415.7 415.7 15.764 0.0001737 ***
Residuals 69 1819.5 26.4
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Call:
lm.default(formula = Y ~ time + temp, data = tab.1)

Residuals:
Min 1Q Median 3Q Max
-9.0417 -2.8125 -0.5417 2.9583 15.2639

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 63.2917 0.6052 104.58 < 2e-16 ***
time1 -8.8472 0.6052 -14.62 < 2e-16 ***
temp1 2.4028 0.6052 3.97 0.000174 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 5.135 on 69 degrees of freedom


Multiple R-squared: 0.7688, Adjusted R-squared: 0.7621
F-statistic: 114.7 on 2 and 69 DF, p-value: < 2.2e-16

The significant factors

According to the ANOVA output, the significant factors are:

1. Time
2. Temperature

Pressure appears not to be statistically significant, as its p-value is higher than α = 0.05.

The factor plot and the Lenth method also support this conclusion:

R code input:
plot (tab.1)                   # Graphical summary

halfnormal(tab.1, alpha = .05) # Significant effects (Lenth method)

R code output:
Creation of temppres
Projected out: temp
Effects columns are the following linear combinations of residuals:
temppres1 temppres2
temp1:presP2 -0.7071 0.7071
temp1:presP3 0.7071 0.7071

Creation of prestime
Projected out: time
Effects columns are the following linear combinations of residuals:
prestime1 prestime2
presP2:time1 -0.7071 -0.7071
presP3:time1 0.7071 -0.7071

Significant effects (alpha=0.05, Lenth method):


[1] time1 temp:pres2 temp1 temp1:time1 e24 presP3 presP2

[8] temp:pres1 e34 e60 e11 e54 lof2 e45

Warning message:
In halfnormal.lm(lm(x, response = response, use.center = TRUE), :
halfnormal not recommended for models with more residual df than model df

We can see that Time has the biggest range of the three variables, Temperature has the second biggest impact, and Pressure the smallest. Pressure levels 0 and +1 appear to have the same effect, which indicates a non-linear relationship.

The Lenth method indicates that the −1 level of Time (Corto) has the largest impact on the response variable.

The analytical evaluation of the ANOVA assumptions, x ∈ N(µ, σ)

We will use the Fligner-Killeen test:

fligner.test(tab.1)

Fligner-Killeen test of homogeneity of variances

data: tab.1
Fligner-Killeen:med chi-squared = 216.42, df = 4, p-value < 2.2e-16

Since the Fligner test p-value is below α = 0.05, we reject the null hypothesis of homogeneous variances, so the equal-variance assumption is not fully supported by these data.

The Bartlett test does not work when given the fitted model object:

> bartlett.test(first,tab.1)
Error in is.finite(x) : default method not implemented for type 'list'
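
bartlett.test() expects a formula or a list of numeric groups rather than an lm object; a sketch of a valid call on the same data (grouping by the temp x time cells is my assumption) would be:

# Bartlett test of equal variances across the temp x time cells (sketch)
bartlett.test(Y ~ interaction(temp, time), data = tab.1)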

The linear models and their evaluation based on the corresponding criteria

My first model will be based on all factors: Time, Temperature and Pressure
first <- lm (Y~time+temp+pres,tab.1)
anova (first)
summary (first)
> first <- lm (Y~time+temp+pres,tab.1)
> anova (first)
Analysis of Variance Table

Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
time 1 5635.7 5635.7 217.7657 < 2.2e-16 ***
temp 1 415.7 415.7 16.0621 0.0001565 ***
pres 2 85.6 42.8 1.6535 0.1990934
Residuals 67 1733.9 25.9
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> summary (first)

Call:
lm.default(formula = Y ~ time + temp + pres, data = tab.1)

Residuals:
Min 1Q Median 3Q Max
-9.7917 -3.3056 -0.9167 2.7083 16.8056

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 61.7500 1.0384 59.465 < 2e-16 ***
time1 -8.8472 0.5995 -14.757 < 2e-16 ***
temp1 2.4028 0.5995 4.008 0.000156 ***
presP2 2.2917 1.4685 1.560 0.123353
presP3 2.3333 1.4685 1.589 0.116797
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 5.087 on 67 degrees of freedom


Multiple R-squared: 0.7797, Adjusted R-squared: 0.7666
F-statistic: 59.28 on 4 and 67 DF, p-value: < 2.2e-16

This is not a good model, as it contains an insignificant factor (Pressure); the R-squared values are lower than 95%, although its p-value is lower than α = 0.05.

We now try a second model, using only Time and the Temperature:Pressure interaction:
second <- lm (Y~time+temp:pres,tab.1)
anova (second)
summary (second)

> second <- lm (Y~time+temp:pres,tab.1)


> anova (second)
Analysis of Variance Table

Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
time 1 5635.7 5635.7 344.911 < 2.2e-16 ***
temp:pres 5 1173.1 234.6 14.359 1.834e-09 ***
Residuals 65 1062.1 16.3
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> summary (second)

Call:
lm.default(formula = Y ~ time + temp:pres, data = tab.1)

Residuals:
Min 1Q Median 3Q Max
-11.0972 -1.6806 -0.4861 1.2639 12.9028

Coefficients: (1 not defined because of singularities)


Estimate Std. Error t value Pr(>|t|)
(Intercept) 66.8333 1.1669 57.275 < 2e-16 ***
time1 -8.8472 0.4764 -18.572 < 2e-16 ***
tempT1:presP1 -3.5833 1.6502 -2.171 0.033556 *
tempT2:presP1 -6.5833 1.6502 -3.989 0.000171 ***
tempT1:presP2 -8.7500 1.6502 -5.302 1.47e-06 ***
tempT2:presP2 3.1667 1.6502 1.919 0.059387 .
tempT1:presP3 -5.5000 1.6502 -3.333 0.001422 **
tempT2:presP3 NA NA NA NA
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.042 on 65 degrees of freedom


Multiple R-squared: 0.8651, Adjusted R-squared: 0.8526
F-statistic: 69.45 on 6 and 65 DF, p-value: < 2.2e-16

This is a better model, as all factors are now statistically significant and the R-squared values are higher, though still below 95%. The p-value is still lower than α = 0.05.

We now try a third model, using Time, Temperature, and the Temperature:Pressure interaction:
third <- lm (Y~time+temp:pres+temp,tab.1)
anova (third)
summary (third)
> third <- lm (Y~time+temp:pres+temp,tab.1)
> anova (third)
Analysis of Variance Table
Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
time 1 5635.7 5635.7 344.911 < 2.2e-16 ***
temp 1 415.7 415.7 25.440 3.908e-06 ***
temp:pres 4 757.4 189.4 11.589 3.662e-07 ***
Residuals 65 1062.1 16.3
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
> summary (third)

Call:
lm.default(formula = Y ~ time + temp:pres + temp, data = tab.1)
Residuals:
Min 1Q Median 3Q Max
-11.0972 -1.6806 -0.4861 1.2639 12.9028
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 61.7500 0.8251 74.838 < 2e-16 ***
time1 -8.8472 0.4764 -18.572 < 2e-16 ***
temp1 -1.5000 0.8251 -1.818 0.073683 .
tempT1:presP2 -5.1667 1.6502 -3.131 0.002611 **
tempT2:presP2 9.7500 1.6502 5.908 1.39e-07 ***
tempT1:presP3 -1.9167 1.6502 -1.161 0.249704
tempT2:presP3 6.5833 1.6502 3.989 0.000171 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 4.042 on 65 degrees of freedom


Multiple R-squared: 0.8651, Adjusted R-squared: 0.8526
F-statistic: 69.45 on 6 and 65 DF, p-value: < 2.2e-16

This is a better model, as the factors are statistically significant and the R-squared values are higher, though still below 95%; the p-value is still lower than α = 0.05. Since adding more factors will not significantly increase this value, I end the model proposal here.
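
To compare the three candidate models side by side, an information criterion is a quick additional check; a sketch assuming the fitted objects above:

AIC(first, second, third)   # lower AIC indicates a better fit/complexity trade-off
anova(second, third)        # confirms the second and third models give the same fit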

A report of the analysis, including a flow diagram, the R code, the results and the conclusions

Conclusion:

 Time is the most important factor, followed by the temperature:pressure interaction and finally temperature.
 Pressure alone is not a statistically significant factor. The three pressure levels are not all necessary, as treatment levels P2 and P3 produce the same effect.
 A good linear model to describe this process uses Time, Temperature, and the temperature:pressure interaction.
 However, this model does not have the best correlation, as its R-squared values are lower than 95%.
 To increase this correlation, we suggest running more experimental replicates to reduce the variation.

Problem 2

(40 points) An automotive spark-plug factory runs wear tests. Five production samples are taken at random every month and subjected to an accelerated wear test, whose results are reported in µm/1000 hr. A relationship has been observed between wear and the cathode diameter (E1). It is also known that the total thickness of the anode flaps (E2) can affect wear. The design tolerances of both electrodes are 1%, and the design has been shown to be robust up to 2%. Design an experiment to obtain a response surface of wear as a function of the electrode dimensions. Consider 2 center points for the original design and 3 to explore outside the design tolerance as far as possible. Refer to the Figure.

Indicate:

1. Table of factors and levels.

2. The R code to generate the experimental table.

3. The experiment table.

4. A complete report, including a flow diagram, the R code and the rationale for the selection of the factors and levels.

Table of factors and levels

Since we have a 2% limit on the working parameters, I decided to use a CCC design.

Response variable: Wear.

Control factors: Cathode diameter and Anode flap thickness.

Factor and level table:

Treatment          -1     0     +1
Cathode diameter   2.45   2.5   2.55
Anode flap         1.47   1.5   1.53

The R code to generate the experimental table

library (rsm)
# CCC design for two factors; spacing of 0.04 in natural units around the nominal dimensions
anodo <- ccd(basis = 2, randomize = F, n0 = c(3,3),
             coding = list(x1~(diametro-2.5)/0.04, x2~(flap-1.5)/0.04))
anodo

The experiment table

run.order std.order diametro flap Block


1 1 1 2.460000 1.460000 1
2 2 2 2.540000 1.460000 1
3 3 3 2.460000 1.540000 1
4 4 4 2.540000 1.540000 1
5 5 5 2.500000 1.500000 1
6 6 6 2.500000 1.500000 1
7 7 7 2.500000 1.500000 1
8 1 1 2.443431 1.500000 2
9 2 2 2.556569 1.500000 2
10 3 3 2.500000 1.443431 2
11 4 4 2.500000 1.556569 2
12 5 5 2.500000 1.500000 2
13 6 6 2.500000 1.500000 2
14 7 7 2.500000 1.500000 2

Data are stored in coded form using these coding formulas ...
x1 ~ (diametro - 2.5)/0.04
x2 ~ (flap - 1.5)/0.04

A complete report, including a flow diagram, the R code and the rationale for the selection of the factors and levels

Response variable: Wear.

Control factors: Cathode diameter and Anode flaps.

I used a CCC design because it is the most suitable response-surface option here, as it can exploit the entire experimental space.

I used the ±2% tolerance as the limits of the test.
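
Once the wear responses are measured, the analysis would follow the same rsm workflow used for the catapult; a sketch with a hypothetical response vector Y.wear (illustrative values only, since no wear data was collected here):

# Hypothetical wear responses in µm/1000 hr, one per design run (illustration only)
Y.wear <- c(12.1, 14.3, 11.8, 13.9, 12.5, 12.6, 12.4, 13.2, 13.8, 12.0, 13.5, 12.5, 12.7, 12.6)
anodo <- data.frame(anodo, Y.wear)
anodo <- coded.data(anodo, x1~(diametro-2.5)/0.04, x2~(flap-1.5)/0.04)
modelo <- rsm(Y.wear ~ SO(x1, x2), data = anodo)   # second-order model, as before
summary(modelo)
persp(modelo, ~x1+x2, theta = 50, contours = "col", col = rainbow(10))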
