
Experimental Design

Experimental design is the gold standard of research design, and the best approach for
assessing cause and effect. The design, which relies on random assignment and repeated
measurements for its rigor, has its roots in agricultural experiments of the early twentieth
century and is now commonplace in a variety of scientific and industrial settings.
Elements of experimental design, as outlined by Fisher, include comparison of an
experimental group, which receives the treatment or intervention, and a baseline control
group. Other features include random assignment of subjects to treatment and control
groups to control for any differences between the groups that could bias the results.
Finally, experimental design requires repeated measurements to estimate the level of
variation in the data.

The real start must surely be the arrival of R. A. Fisher at the Rothamsted Experimental
Station in the 1920s and his insistence on laying out field trials in special patterns such as
Latin squares. This early work concerned itself with estimability, the ability to extract the
effect of factors, such as fertilizer, from a heterogeneous and unpredictable background
environment. Randomization was an additional method of isolating these effects.
Experimental design was conceived as an active process; treatments were applied by
human action. This work laid the foundations of the subject and integrated it with
analytic methods such as the analysis of variance. It also led to a burst of activity in
combinatorial design and a whole new branch of mathematics concerned with block
designs and other structures. Experimental design expanded beyond its agricultural roots
during World War II, as the procedure became a method for assessing and improving the
performance of weapons systems, such as long-range artillery.

The second phase of activity was the growth of experimental design for continuous
variables and multiple regression, associated particularly with the work of G. E. P. Box
and co-workers on response surfaces. By this time factorial design had also been well
developed, particularly through the work of R. C. Bose, F. Yates and C. R. Rao. Coming
from a different tradition, namely that founded by A. Wald, the work of J. Kiefer and
J. Wolfowitz brought the full machinery of decision theory to bear on the experimental
design problem. They produced at least one startling theorem linking different optimality
criteria. Setting up the problem as one of optimization enabled fast solution algorithms to
be developed, and the subject of computer-aided experimental design was born. This
work, which started in the US, quickly spread to Western and Eastern Europe. Both
within the US decision-theoretic school and from the Bayesian tradition (in which
both the UK and Italy have played a major part), the subject of Bayesian experimental
design has grown and is now one of the most active areas. Lately the subject has come of
age industrially because of the increasing use of industrial experiments to test prototypes
and improve products and processes. It has been a shock for mainline statisticians to see
an engineer, Genichi Taguchi, credited with this new popularity. Fisher published "The
Design of Experiments" in 1935, a book that articulated the features of experimental
research design that are still used today.

Experimental design also has roots in industrial applications and clinical trials. British
statistician George Box, who trained as a chemist, helped extend the use of experimental
design to the chemical industry. Experimental design as a method of quality control came
to Japanese industry in the 1950s. Japanese products were cheap and of poor quality in
the years immediately following World War II. Japanese management adopted designed
experiments and statistical quality control as methods for improving the quality of
products, which ushered in a new era for Japanese industry. In the 1960s, randomized
experiments also became the standard for approval of new medications and medical
procedures. Prior to that time, approval of medical devices and drugs relied on anecdotal
data, in which a physician would examine a handful of patients and write a paper. This
approach introduced bias, for which randomized clinical trials controlled.

Statistical design of experiments refers to the "process of planning the experiment so
that appropriate data that can be analyzed by statistical methods will be collected,
resulting in valid and objective conclusions". The statistical approach to experimental
design is necessary if we wish to draw meaningful conclusions from the data. When the
problem involves data that are subject to experimental errors, statistical methods are the
only objective approach to analysis. We are concerned with the analysis of data generated
from an experiment. It is wise to take time and effort to organize the experiment properly
to ensure that the right type of data, and enough of it, is available to answer the questions
of interest as clearly and efficiently as possible. This process is called experimental
design. There are two aspects to any experimental problem: the design of the experiment
and the statistical analysis of the data. These two subjects are closely related because the
methods of analysis depend directly on the design employed. An experiment deliberately
imposes a treatment on a group of objects or subjects in the interest of observing the
response.

Experimental design methods have found broad application in many disciplines. In fact,
we may view experimentation as part of the scientific process and as one of the ways
we learn about how systems or processes work. Generally, we learn through a series of
activities in which we make conjectures about a process, perform experiments to generate
data from the process, and then use the information from the experiment to establish new
conjectures, which lead to new experiments, and so on.

Experimental design is a critically important tool in the engineering world for improving
the performance of a manufacturing process. It also has extensive application in the
development of new processes. The application of experimental design techniques early
in process development can result in

• Improved process yields
• Reduced variability and closer conformance to nominal or target requirements
• Reduced development time
• Reduced overall costs
Experimental design methods also play a major role in engineering design activities,
where new products are developed and existing ones improved. Some applications of
experimental design in engineering design include

• Evaluation and comparison of basic design configurations
• Evaluation of material alterations
• Selection of design parameters so that the product will work well under a wide
variety of field conditions, that is, so that the product is robust
• Determination of the key product design parameters that impact product performance

The use of experimental design in these areas can result in products that are easier to
manufacture, products that have enhanced field performance and reliability, lower
product cost, and shorter product design and development time.

An experiment differs from an observational study, which involves collecting and analyzing data
without changing existing conditions. Because the validity of an experiment is directly
affected by its construction and execution, attention to experimental design is extremely
important. The specific questions that the experiment is intended to answer must be
clearly identified before carrying out the experiment. We should also attempt to identify
known or expected sources of variability in the experimental units since one of the main
aims of a designed experiment is to reduce the effect of these sources of variability on the
answers to questions of interest. That is, we design the experiment in order to improve
the precision of our answers.

The importance of experimental design also stems from the quest for inference about
causes or relationships as opposed to simply description. Researchers are rarely satisfied
to simply describe the events they observe. They want to make inferences about what
produced, contributed to, or caused events. To gain such information without ambiguity,
some form of experimental design is ordinarily required. As a consequence, the need for
using rather elaborate designs ensues from the possibility of alternative relationships,
consequences or causes. The purpose of the design is to rule out these alternative causes,
leaving only the actual factor that is the real cause. The kinds of planned manipulation
and observation called experimental design often seem to become a bit complicated. This
is unfortunate but necessary if we wish to pursue the potentially available information so
that the relationships investigated are clear and unambiguous.

The plan that we choose to call a design is an essential part of research strategies. The
design itself entails:

• selecting or assigning subjects to experimental units
• selecting or assigning units to specific treatments or conditions of the experiment
(experimental manipulation)
• specifying the order or arrangement of the treatment or treatments
• specifying the sequence of observations or measurements to be taken
By convention, the problems of design do not ordinarily include details of sampling,
selection of measurement instruments, selection of the research problem, or any other
nuts and bolts of procedure required to actually do the study. There are three basic
principles of experimental design: randomization, replication and blocking.

Randomization is the basis underlying the use of statistical methods in experimental
design. Because it is generally extremely difficult for experimenters to eliminate bias
using only their expert judgment, the use of randomization in experiments is common
practice. By randomization we mean that both the allocation of the experimental material
and the order in which the individual runs or trials of the experiment are to be performed
are randomly determined. Statistical methods require that the observations (or errors) be
independently distributed random variables. Randomization usually makes this
assumption valid. By properly randomizing the experiment, we also assist in "averaging
out" the effects of extraneous factors that may be present. In a randomized experimental
design, objects or individuals are randomly assigned (by chance) to an experimental
group. Using randomization is the most reliable method of creating homogeneous
treatment groups, without involving any potential biases or judgments. Sometimes
experimenters encounter situations where randomization of some aspect of the
experiment is difficult. For example, in a chemical process, temperature may be a hard-to-
change variable, as we may want to change it less often than we change the levels of
other factors. In an experiment of this type, complete randomization is impractical, and
the design must deal with restrictions on randomization.
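To make the idea concrete, here is a minimal sketch in Python; the eight unit names, the
equal group sizes, and the fixed seed are illustrative assumptions, not part of the
discussion above.

```python
import random

random.seed(42)  # fixed seed so the allocation can be reproduced

# Hypothetical experimental units; the names are placeholders.
units = [f"unit{i}" for i in range(1, 9)]

# Randomly allocate the experimental material to the two groups.
random.shuffle(units)
treatment, control = units[:4], units[4:]

# Randomize the order in which the individual runs are performed.
run_order = treatment + control
random.shuffle(run_order)

print("Treatment group:", treatment)
print("Control group:  ", control)
print("Run order:      ", run_order)
```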

By replication we mean an independent repeat of each factor combination. To improve
the significance of an experimental result, replication, the repetition of an experiment on
a large group of subjects, is required. Although randomization helps to ensure that
treatment groups are as similar as possible, the results of a single experiment, applied to a
small number of objects or subjects, should not be accepted without question. Randomly
selecting two individuals from a group of four and applying a treatment with "great
success" generally will not impress the public or convince anyone of the effectiveness of
the treatment. If a treatment is truly effective, the long-term averaging effect of
replication will reflect its experimental worth. If it is not effective, then the few members
of the experimental population who may have reacted to the treatment will be negated by
the large number of subjects who were unaffected by it. Replication reduces variability
in experimental results, increasing their significance and the confidence level with which
a researcher can draw conclusions about an experimental factor. There are two important
properties of replication. First, it allows the experimenter to obtain an estimate of the
experimental error. This estimate of error becomes a basic unit of measurement for
determining whether observed differences in the data are really statistically different.
Second, if the sample mean ȳ is used to estimate the true mean response for one of the
factor levels in the experiment, replication permits the experimenter to obtain a more
precise estimate of this parameter.
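A short sketch of these two properties, using made-up measurements for a single factor
level (the numbers are illustrative placeholders only): the replicates provide an estimate
s² of the experimental error, and the standard error of the sample mean shrinks as the
number of replicates grows.

```python
import statistics

# Illustrative replicate measurements for one factor level.
replicates = [16.85, 16.40, 17.21, 16.35, 16.52, 17.04, 16.96, 17.15]

n = len(replicates)
ybar = statistics.mean(replicates)    # estimate of the true mean response
s2 = statistics.variance(replicates)  # estimate of the experimental error
se = (s2 / n) ** 0.5                  # standard error of the sample mean

print(f"n = {n}, ybar = {ybar:.3f}, s^2 = {s2:.4f}, se(ybar) = {se:.4f}")
# Doubling the number of replicates halves the variance of ybar,
# which is why replication gives a more precise estimate of the mean.
```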

Blocking is a design technique used to improve the precision with which comparisons
among the factors of interest are made. Often blocking is used to reduce or eliminate the
variability transmitted from nuisance factors, that is, factors that may influence the
experimental response but in which we are not directly interested. For example, an
experiment in a chemical process may require two batches of raw material to make all the
required runs; however, there could be differences between the batches due to supplier-to-
supplier variability, and if we are not specifically interested in this effect, we would think
of the batches of raw material as a nuisance factor. Generally, a block is a set of relatively
homogeneous experimental conditions.
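A minimal sketch of a randomized complete block design under these assumptions: four
hypothetical treatments, with the two batches of raw material treated as blocks. Each
treatment appears once in every block, and only the run order within a block is
randomized, so batch differences do not bias the treatment comparisons.

```python
import random

random.seed(7)  # fixed seed for a reproducible layout

treatments = ["A", "B", "C", "D"]  # hypothetical treatments
blocks = ["batch 1", "batch 2"]    # the nuisance factor, used as blocks

# Randomize the treatment run order separately within each block.
for block in blocks:
    order = random.sample(treatments, len(treatments))
    print(block, "run order:", order)
```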

The three basic principles of experimental design, randomization, replication and
blocking, are part of every experiment. To use the statistical approach in designing
and analyzing an experiment, it is necessary to have a clear idea in advance of exactly
what is to be studied, how the data are to be collected, and at least a qualitative
understanding of how these data are to be analyzed. The key points of the guidelines for
designing an experiment are as follows:

• Recognition of and statement of the problem
• Selection of the response variable
• Choice of factors, levels and range
• Choice of experimental design
• Performing the experiment
• Conclusions and recommendations

Now, to explain the whole procedure, we start with a simple comparative experiment. In
such experiments we consider two conditions to be compared. We begin with an
experiment performed to determine whether two different formulations of a product give
equivalent results. The discussion leads to a review of several basic statistical concepts,
such as random variables, probability distributions, random samples, sampling
distributions and tests of hypotheses.

Basic statistical concepts help describe the procedure more precisely. Each of the
observations in the experiment is called a run. Since the individual runs differ, there is
fluctuation, or noise. This noise is usually called experimental error or simply error. It is
a statistical error, meaning that it arises from variation that is uncontrolled and generally
unavoidable. The presence of error or noise implies that the response variable, for
example tension bond strength, is a random variable. A random variable may be either
discrete or continuous. If the set of all possible values of the random variable is either
finite or countably infinite, then the random variable is discrete, whereas if the set of all
possible values of the random variable is an interval, then the random variable is
continuous.
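The distinction can be illustrated with two simulated observations; the defect probability,
the mean and the standard deviation below are arbitrary illustrative values.

```python
import random

random.seed(1)  # reproducible draws

# Discrete random variable: the number of defective items among 10
# inspected, simulated as 10 Bernoulli trials (possible values 0..10).
defects = sum(1 for _ in range(10) if random.random() < 0.1)

# Continuous random variable: a strength measurement, simulated from
# a normal distribution (any value in an interval is possible).
strength = random.gauss(17.0, 0.5)

print("Discrete observation (defect count):", defects)
print(f"Continuous observation (strength):  {strength:.3f}")
```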

We often use simple graphical methods to assist in analyzing the data from an
experiment. Dot diagrams, histograms, and box plots are useful for summarizing the
information in a sample of data. To describe the observations that might occur in a sample
more completely, we use the concept of a probability distribution. The probability
structure of a random variable, say y, is described by its probability distribution. If y is
discrete, we often call the probability distribution of y, say p(y), the probability function
of y. If y is continuous, the probability distribution of y, say f(y), is called the
probability density function of y. The mean, variance and expected value give useful
information regarding the data. The mean, μ, of a probability distribution is a measure of
its central tendency or location. We may also express the mean in terms of the expected
value E(y), the long-run average value of the random variable y. The variability or
dispersion of a probability distribution can be measured by the variance σ².
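In symbols, consistent with the definitions above, the mean is the expected value of y and
the variance is the expected squared deviation from the mean:

```latex
\mu = E(y) =
\begin{cases}
\displaystyle \sum_{\text{all } y} y\, p(y), & y \text{ discrete}, \\[6pt]
\displaystyle \int_{-\infty}^{\infty} y\, f(y)\, dy, & y \text{ continuous},
\end{cases}
\qquad
\sigma^{2} = E\!\left[(y-\mu)^{2}\right].
```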

The objective of statistical inference is to draw conclusions about a population using a
sample. Often we are able to determine the probability distribution of a particular statistic
if we know the probability distribution of the population from which the sample was
drawn. The probability distribution of a statistic is called a sampling distribution. There
are several useful sampling distributions, such as the normal distribution, the chi-square
distribution, the t-distribution and the F-distribution. The simple comparative experiment
can be analyzed using hypothesis testing and confidence interval procedures for
comparing two treatment means. The technique of statistical inference called hypothesis
testing can be used to assist the experimenter in comparing these two formulations.
Hypothesis testing allows the comparison of the two formulations to be made on
objective terms, with knowledge of the risks associated with reaching the wrong
conclusion.
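As a sketch of how such a comparison might be carried out, the following applies a
two-sample t-test, one standard procedure for comparing two treatment means. The
measurements are illustrative placeholders and the significance level α = 0.05 is an
assumption.

```python
from scipy import stats

# Illustrative strength measurements for the two formulations.
formulation_1 = [16.85, 16.40, 17.21, 16.35, 16.52, 17.04, 16.96, 17.15]
formulation_2 = [17.50, 17.63, 18.25, 18.00, 17.86, 17.75, 18.22, 17.90]

# H0: the two formulations have equal mean response.
# H1: the mean responses differ (two-sided alternative).
t_stat, p_value = stats.ttest_ind(formulation_1, formulation_2)

alpha = 0.05  # accepted risk of rejecting H0 when it is true
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: the formulations appear to differ.")
else:
    print("Fail to reject H0: no evidence of a difference.")
```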

