Experimental design is the gold standard of research design, and the best approach for
assessing cause and effect. The design, which relies on random assignment and repeated
measurements for its rigor, has its roots in agricultural experiments of the early twentieth
century and is now commonplace in a variety of scientific and industrial settings.
Elements of experimental design, as outlined by Fisher, include comparison of an
experimental group, which receives the treatment or intervention, and a baseline control
group. Other features include random assignment of subjects to treatment and control
groups to control for any differences between the groups that could bias the results.
Finally, experimental design requires replicated measurements so that the level of variation
in the measurements can be estimated.
The real start must surely be the arrival of R. A. Fisher at the Rothamsted Experimental
Station in the 1920s and his insistence on laying out field trials in special patterns such as
Latin squares. This early work concerned itself with estimability, the ability to extract the
effect of factors, such as fertilizer, from a heterogeneous and unpredictable background
environment. Randomization was an additional method of isolating these effects.
Experimental design was conceived as an active process; treatments were applied by
human action. This work laid the foundations of the subject and integrated it with
analytic methods such as the analysis of variance. It also led to a burst of activity in
combinatorial design and a whole new branch of mathematics concerned with block designs
and related structures. Experimental design expanded beyond its agricultural roots during
World War II, as the procedure became a method for assessing and improving the
performance of weapons systems, such as long-range artillery.
The second phase of activity was the growth of experimental design for continuous
variables and multiple regression, associated particularly with the work of G. E. P. Box
and co-workers on response surfaces. By this time factorial design had also been well
developed, particularly through the work of R. C. Bose, F. Yates and C. R. Rao. Coming
from a different tradition, namely the decision-theoretic one founded by A. Wald, J. Kiefer
and J. Wolfowitz brought the full machinery of decision theory to bear on the experimental
design problem. They produced at least one startling theorem linking different optimality
criteria together. Setting up the problem as one of optimization enabled fast solution
algorithms to be developed, and the subject of computer-aided experimental design was
born. This work, which started in the US, quickly spread to Western and Eastern Europe.
Both within the US decision-theoretic school and from the Bayesian tradition (in which
both the UK and Italy have played a major part), the subject of Bayesian experimental
design has grown and is now one of the most active areas. Lately the subject has come of age
industrially because of the increasing use of industrial experiments to test prototypes and
improve products and processes. It has been a shock for mainstream statisticians to see an
engineer, Genichi Taguchi, credited with this new popularity. Fisher published The Design
of Experiments, a book that articulated the features of experimental research design that
are still used today.
Experimental design also has roots in industrial applications and clinical trials. British
statistician George Box, who trained as a chemist, helped extend the use of experimental
design to the chemical industry. Experimental design as a method of quality control came
to Japanese industry in the 1950s. Japanese products were cheap and of poor quality in
the years immediately following World War II. Japanese management adopted
experiments and statistical quality control as methods for improving the quality of
products, which ushered in a new era for Japanese industry. In the 1960s,
randomized experiments became the standard for approval of new medications and
medical procedures. Prior to that time, approval of medical devices and drugs relied on
anecdotal data, in which a physician would examine a handful of patients and write a
paper. This approach introduced bias, for which randomized clinical trials controlled.
Experimental design methods have found broad application in many disciplines. In fact,
we may view experimentation as part of the scientific process and as one of the ways
we learn about how systems or processes work. Generally we learn through a series of
activities in which we make conjectures about a process, perform experiments to generate
data from the process, and then use the information from the experiment to establish new
conjectures, which lead to new experiments, and so on.
Experimental design is a critically important tool in the engineering world for improving
the performance of a manufacturing process. It also has extensive application in the
development of new processes. The application of experimental design techniques early
in process development can result in products that are easier to manufacture, products
that have enhanced field performance and reliability, lower product cost, and shorter
product design and development time.
A designed experiment differs from an observational study, which involves collecting and analyzing data
without changing existing conditions. Because the validity of an experiment is directly
affected by its construction and execution, attention to experimental design is extremely
important. The specific questions that the experiment is intended to answer must be
clearly identified before carrying out the experiment. We should also attempt to identify
known or expected sources of variability in the experimental units since one of the main
aims of a designed experiment is to reduce the effect of these sources of variability on the
answers to questions of interest. That is, we design the experiment in order to improve
the precision of our answers.
The importance of experimental design also stems from the quest for inference about
causes or relationships as opposed to simply description. Researchers are rarely satisfied
to simply describe the events they observe. They want to make inferences about what
produced, contributed to, or caused events. To gain such information without ambiguity,
some form of experimental design is ordinarily required. As a consequence, the need for
rather elaborate designs arises from the possibility of alternative relationships,
consequences or causes. The purpose of the design is to rule out these alternative causes,
leaving only the actual factor that is the real cause. The kinds of planned manipulation
and observation called experimental design often seem to become a bit complicated. This
is unfortunate but necessary if we wish to extract the potentially available information so
that the relationships investigated are clear and unambiguous.
The plan that we choose to call a design is an essential part of any research strategy.
One key element of such a plan is blocking.
Blocking is a design technique used to improve the precision with which comparisons
among the factors of interest are made. Often blocking is used to reduce or eliminate the
variability transmitted from nuisance factors, that is, factors that may influence the
experimental response but in which we are not directly interested. For example, an
experiment in a chemical process may require two batches of raw material to make all the
required runs. However, there could be differences between the batches due to supplier-to-
supplier variability, and if we are not specifically interested in this effect, we would treat
the batches of raw material as a nuisance factor. Generally, a block is a set of relatively
homogeneous experimental conditions.
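The batch example above can be sketched numerically. Below is a minimal simulation (all numbers, including the batch-to-batch standard deviation and the treatment effect, are invented for illustration) showing why comparing formulations within a batch removes the nuisance variability:

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical blocked experiment: two formulations, A and B, are each run
# once in every one of 6 batches of raw material (the blocks).  Batch-to-batch
# variability (sd = 5) is assumed much larger than run-to-run noise (sd = 1).
n_blocks = 6
treatment_effect = 2.0  # assumed: formulation B exceeds A by 2 units
batch_effects = [random.gauss(0, 5) for _ in range(n_blocks)]  # nuisance factor

a = [50 + e + random.gauss(0, 1) for e in batch_effects]  # runs of formulation A
b = [50 + treatment_effect + e + random.gauss(0, 1) for e in batch_effects]  # runs of B

# Blocking: compare within each batch, so the batch effect cancels out of
# each difference and only the treatment effect plus run noise remains.
diffs = [bi - ai for ai, bi in zip(a, b)]
print(f"mean within-block difference: {statistics.mean(diffs):.2f}")
print(f"sd of differences (blocked):  {statistics.stdev(diffs):.2f}")
print(f"sd of raw A runs (unblocked): {statistics.stdev(a):.2f}")
```

With the assumed numbers, the spread of the within-block differences reflects only run-to-run noise, while the raw measurements also carry the batch-to-batch variability.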
Now, to explain the whole procedure, we start with a simple comparative
experiment. In such experiments we consider two conditions to be compared. We begin
with an experiment performed to determine whether two different formulations of a
product give equivalent results. The discussion leads to a review of several basic
statistical concepts, such as random variables, probability distributions, random
samples, sampling distributions and tests of hypotheses.
Basic statistical concepts help describe the procedure more precisely. Each
observation in the experiment is called a run. Because the individual runs differ,
there is fluctuation, or noise. This noise is usually called experimental error or simply
error. It is a statistical error, meaning that it arises from variation that is uncontrolled and
generally unavoidable. The presence of error or noise implies that the response variable,
for example tension bond strength, is a random variable. A random variable may be either
discrete or continuous. If the set of all possible values of the random variable is either
finite or countably infinite, then the random variable is discrete, whereas if the set of all
possible values of the random variable is an interval, then the random variable is continuous.
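The distinction can be illustrated with a small sketch (the defect probability and the strength distribution below are invented for illustration):

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Discrete random variable: the number of defective items among 10, each
# defective independently with probability 0.2 (hypothetical numbers).
# Its possible values form the finite set {0, 1, ..., 10}.
defectives = sum(1 for _ in range(10) if random.random() < 0.2)
print("discrete outcome:", defectives)

# Continuous random variable: a measured response modeled as normal with
# mean 50 and sd 2 (assumed values).  Its possible values fill an interval,
# so probability is assigned to intervals rather than to individual points.
strength = random.gauss(50.0, 2.0)
print("continuous outcome:", round(strength, 3))
```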
We often use simple graphical methods to assist in analyzing the data from an
experiment. Dot diagrams, histograms, and box plots are useful for summarizing the
information in a sample of data. To describe the observations that might occur in a sample
more completely, we use the concept of a probability distribution. The probability
structure of a random variable, say y, is described by its probability distribution. If y is
discrete, we often call the probability distribution of y, say p(y), the probability function
of y. If y is continuous, then the probability distribution of y, say f(y), is called the
probability density function of y. The mean, variance and expected value give useful
information about the data. The mean, μ, of a probability distribution is a measure of its
central tendency or location. We may also express the mean as the expected
value E(y), the long-run average value of the random variable y. The variability or
dispersion of a probability distribution can be measured by the variance σ².
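A short sketch makes the long-run-average interpretation concrete (μ = 10 and σ = 3 below are chosen only for illustration): with many draws, the sample mean approaches μ and the sample variance approaches σ².

```python
import random
import statistics

random.seed(2)  # fixed seed so the sketch is reproducible

mu, sigma = 10.0, 3.0  # assumed population parameters, for illustration only
y = [random.gauss(mu, sigma) for _ in range(100_000)]

# The sample mean estimates E(y) = mu, and the sample variance estimates
# the dispersion sigma^2; both improve as the number of draws grows.
print(f"sample mean:     {statistics.fmean(y):.3f}   (mu      = {mu})")
print(f"sample variance: {statistics.pvariance(y):.3f}   (sigma^2 = {sigma ** 2})")
```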
The objective of statistical inference is to draw conclusions about a population using a
sample. Often we are able to determine the probability distribution of a particular statistic
if we know the probability distribution of the population from which the sample was
drawn. The probability distribution of a statistic is called a sampling distribution. Several
useful sampling distributions are the normal, chi-square, t, and F distributions. The simple
comparative experiment can be analyzed using hypothesis testing and confidence interval
procedures for comparing two treatment means. The technique of statistical inference
called hypothesis testing can be used to assist the experimenter in comparing the two
formulations. Hypothesis testing allows the comparison of the two formulations to be made
in objective terms, with knowledge of the risks associated with reaching the wrong
conclusion.
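As a minimal sketch of such a test, the pooled two-sample t statistic can be computed with the standard library alone. The measurements below are invented for illustration, and 2.101 is the tabulated critical value of the t distribution with 18 degrees of freedom at the 5% significance level:

```python
import statistics

# Hypothetical data for a simple comparative experiment: tension bond
# strength for two formulations, 10 runs each (numbers invented for
# illustration of the kind of data such an experiment produces).
y1 = [16.85, 16.40, 17.21, 16.35, 16.52, 17.04, 16.96, 17.15, 16.59, 16.57]
y2 = [16.62, 16.75, 17.37, 17.12, 16.98, 17.08, 17.42, 17.35, 17.08, 17.62]

n1, n2 = len(y1), len(y2)
m1, m2 = statistics.fmean(y1), statistics.fmean(y2)
s1sq, s2sq = statistics.variance(y1), statistics.variance(y2)

# Pooled estimate of the common variance, then the two-sample t statistic
# for H0: mu1 == mu2 versus H1: mu1 != mu2.
sp2 = ((n1 - 1) * s1sq + (n2 - 1) * s2sq) / (n1 + n2 - 2)
t0 = (m1 - m2) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

# Two-sided test at alpha = 0.05: reject H0 if |t0| exceeds t_{0.025, 18}.
print(f"t0 = {t0:.3f}")
print("reject H0" if abs(t0) > 2.101 else "fail to reject H0")
```

A full analysis would also report a p-value and a confidence interval for the difference in means, computed from the same t statistic and its sampling distribution.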
There are