
EXPERIMENTAL DESIGN

Reported by:
Alvarez, Emily
Quiling, Rina Ticia R.
DEFINITIONS:

Experimental Research
An attempt by the researcher to maintain control over all factors that may
affect the result of an experiment. In doing this, the researcher attempts to
determine or predict what may occur.
Experimental design

• A preconceived plan for conducting an experiment;

• A blueprint of the procedure that enables the researcher to test
hypotheses by reaching valid conclusions about independent and
dependent variables;

• Often diagrammed with symbols to indicate the arrangement of the
variables and conditions.
Hypothesis – a proposition or principle which is
supposed or taken for granted, in order to draw a
conclusion or inference for proof of the point in question;
it is something not proved but assumed for the purpose of
argument.

Variables – are things that we measure, control, or manipulate in
research. They differ in many respects, most notably in the role they
are given in our research and in the type of measures that can be
applied to them.
Correlational vs. Experimental Research
In correlational research, we do not (or at least try not to)
influence any variables but only measure them and look
for relations (correlations) between some set of variables,
such as blood pressure and cholesterol level.
In experimental research, we manipulate some variables
and then measure the effects of this manipulation on
other variables.
For example, a researcher might artificially increase blood
pressure and then record cholesterol level. Data analysis
in experimental research also comes down to calculating
"correlations" between variables, specifically, those
manipulated and those affected by the manipulation.
However, experimental data may potentially provide
qualitatively better information: only experimental data
can conclusively demonstrate causal relations between
variables.
For example, if we found that whenever we change
variable A, variable B also changes, then we can
conclude that "A influences B." Data from correlational
research can only be "interpreted" in causal terms
based on some theories that we have, but correlational
data cannot conclusively prove causality.
Why bother with experimental design?

• It enables the researcher to interpret and understand the data of
an experiment. The researcher can manipulate or control the
levels of experimental treatments.
CRITERIA FOR A WELL-DESIGNED
EXPERIMENT

• Adequate Experimental Control – this means that there are enough
restraints on the conditions of the experiment so that the researcher
can interpret the results. The experimental design is so structured
that if the experimental variable has an effect, it can be detected.

• Lack of Artificiality – this criterion is especially important in
educational research if the results of the experiment are to be
generalized to a non-experimental setting. It means that the
experiment is conducted in such a way that the results will apply to
the real educational world.

• Basis for Comparison – there must be some way to make a
comparison to determine whether or not there is an experimental
effect.

• Adequate Information from the Data – the data must be adequate
for testing the hypotheses of the experiment.

• Uncontaminated Data – the data should adequately reflect the
experimental effects. They should not be affected by poor
measurement or errors in the experimental procedure.

• No Confounding of Relevant Variables – this criterion is closely
related to adequate experimental control.

• Representativeness – the researcher commonly includes some
aspect of randomness, such as random selection of the subjects for
the experiment.

• Parsimony – with all other characteristics equal, a simpler design is
preferred to a more complex one. The simpler design is usually
easier to implement and possibly easier to interpret.
Steps in conducting an experimental study
• Identify and define the problem.
• Formulate hypotheses and deduce their consequences.
• Construct an experimental design that represents all the
elements, conditions, and relations of the consequences:
1. Select a sample of subjects.
2. Group or pair subjects.
3. Identify and control non-experimental factors.
4. Select or construct and validate instruments to
measure outcomes.
5. Conduct a pilot study.
6. Determine the place, time, and duration of the
experiment.
• Conduct the experiment.
• Compile raw data and reduce them to usable form.
• Apply an appropriate test of significance.
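The final step can be sketched as follows. This is a hedged illustration with hypothetical posttest scores; Welch's two-sample t-test is one common choice of significance test for comparing an experimental and a control group, implemented here with only the standard library.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    se2 = va / na + vb / nb  # squared standard error of the mean difference
    t = (ma - mb) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical posttest scores for an experimental and a control group.
experimental = [78, 85, 82, 90, 76, 88, 84, 81]
control = [70, 72, 75, 68, 74, 71, 73, 69]
t, df = welch_t(experimental, control)
print(f"t = {t:.2f}, df = {df:.1f}")  # |t| well above ~2 suggests a significant difference
```

In practice the t statistic would be compared against a t-distribution with the computed degrees of freedom to obtain a p-value.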
EXPERIMENTAL VALIDITY

• The criteria of a well-designed experiment can be summarized as
the characteristics that enhance experimental validity.

2 Types of Experimental Validity

• Internal Validity – the basic minimum of control, measurement,
analysis, and procedures necessary to make the results of the
experiment interpretable. It deals with being able to understand the
data and draw conclusions from them.

• External Validity – deals with the generalizability of the results of
the experiment.
THREATS TO EXPERIMENTAL
VALIDITY

• Internal validity
 History
 Maturation
 Testing
 Instrumentation
 Selection
 Statistical Regression
 Differential Selection of Subjects
 Experimental Mortality
THREATS TO EXPERIMENTAL
VALIDITY

• External validity
 Interaction effect of testing
 Interaction effects of selection biases
 Reactive effects of experimental arrangements
 Multiple treatment interference


Tools of Experimental Design Used to Control
factors Jeopardizing Validity
PRE-TEST – The pre-test, or measurement before the experiment
begins, can aid control of differential selection by determining the
presence or knowledge of the experimental variable before the
experiment begins. It can aid control of experimental mortality
because subjects who drop out can be removed from the entire
comparison by removing their pre-tests.

CONTROL GROUP – The use of a matched or similar group which is
not exposed to the experimental variable can help reduce the effect
of History, Maturation, Instrumentation, and Interaction of Factors.
The control group is exposed to all conditions of the experiment
except the experimental variable.
RANDOMIZATION – The use of random selection procedures for
subjects can aid control of Statistical Regression, Differential
Selection, and the Interaction of Factors. It greatly increases
generalizability by helping make the groups representative of the
population.
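Random assignment is simple to carry out in code. The helper below is a hypothetical sketch, not a standard library function: it shuffles a pool of subjects and splits it into equally sized groups.

```python
import random

def randomize(subjects, n_groups=2, seed=None):
    """Randomly assign subjects to equally sized groups.

    Assumes len(subjects) is divisible by n_groups; a fixed seed makes
    the assignment reproducible for record-keeping.
    """
    rng = random.Random(seed)
    pool = list(subjects)
    rng.shuffle(pool)  # every subject has an equal chance of landing in any group
    size = len(pool) // n_groups
    return [pool[i * size:(i + 1) * size] for i in range(n_groups)]

# Assign 20 numbered subjects to an experimental and a control group.
experimental_group, control_group = randomize(range(1, 21), n_groups=2, seed=7)
print(experimental_group)
print(control_group)
```

Because the split depends only on the shuffle, no characteristic of the subjects can systematically favor one group, which is what controls differential selection.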

ADDITIONAL GROUPS – The effects of the pre-test and experimental
procedures can be partially controlled through the use of groups
which were not pre-tested or exposed to the experimental
arrangements. They would have to be used in conjunction with other
pre-tested groups; otherwise, other factors jeopardizing validity
would be present.
TYPES OF EXPERIMENTAL DESIGNS

• POSTTEST-ONLY CONTROL GROUP DESIGN
It contains as many groups as there are experimental
treatments, plus control or comparison groups. Subjects are
measured only after the experimental treatments have been
applied.

• Pretest – refers to a measure or test given to the subjects prior to
the experimental treatment.

• Posttest – a measure taken after the experimental treatment has
been applied.
• PRETEST-POSTTEST CONTROL GROUP DESIGN
It contains as many groups as there are experimental
treatments, plus a control group. Subjects are measured before and
after receiving the experimental treatments.

• SOLOMON FOUR GROUP DESIGN
Combining the pretest-posttest control group design and the
posttest-only control group design in their simplest forms produces
a design described by Solomon (1949). In its four-group form, this
design includes two control and two experimental groups, but the
experimental groups receive the same experimental treatment.
• FACTORIAL DESIGNS
It involves two or more independent variables, called
factors, in a single design. The cells of the design are determined
by the levels of the independent variables taken in combination.
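For instance, the cells of a hypothetical 2 × 3 factorial design (the factor names and levels below are made up for illustration) can be enumerated as every combination of factor levels:

```python
from itertools import product

# Hypothetical 2 x 3 factorial design: two independent variables (factors),
# each with its own levels.
factors = {
    "teaching_method": ["lecture", "discussion"],
    "class_size": ["small", "medium", "large"],
}

# Each cell is one combination of levels, one level per factor.
cells = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for cell in cells:
    print(cell)
print(f"{len(cells)} cells in a 2 x 3 design")
```

The number of cells is the product of the numbers of levels, which is why factorial designs grow quickly as factors are added.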

• REPEATED MEASURE DESIGN


These are designs in which the same subject
is measured more than once on the dependent
variable.
• TIME SERIES DESIGN
It involves repeated measurement with an experimental
treatment inserted between two of the measurements.

• BETWEEN-SUBJECTS DESIGNS
Different groups of subjects are randomly assigned to the
levels of your independent variable.
• WITHIN-SUBJECTS DESIGNS
A single group of subjects is exposed to all levels of your
independent variable.

• SINGLE-SUBJECT DESIGN
Similar to the within-subjects design in that subjects are
exposed to all levels of the independent variable. The main
difference from the within-subjects design is that you do not average
data across subjects. Instead, you focus on changes in the behavior
of a single subject.
THE COST OF
EXPERIMENTATION

A simple experimental study may require six months of
research time, and complex studies sometimes have budgets that
many laymen would consider to amount to a fortune. Before
beginning an investigation, it is important to consider carefully
whether or not the investigator will have the resources necessary to
complete the study. An equally important consideration is whether
the findings will be important enough to justify the expenditure of
time and money.
THE PRODUCTIVITY OF EXPERIMENTATION

Although experimentation has been a valuable technique
for the natural sciences, the actual contributions of experimental
research to education have thus far been disappointing. First, many
educational phenomena are so complex that they cannot best be
studied in laboratory situations. Second, experimental research is an
endeavor that requires training and skills that the typical educator
does not possess or fully understand. However, experimentation will
play an increasingly important role in the development of learning
theory and educational thought.
CRITIQUE OF EXPERIMENTAL RESEARCH DESIGN

• Although true experimental methodology may be considered ideal
from a methodological point of view, the limitations placed on the
selection of variables and problems amenable to a true experiment
are serious.

• A true experiment eliminates any type of self-selection into
treatments.

• The manipulation of one or more treatment variables, while
controlling all others, is clear and straightforward from a
methodological point of view.

• Comparative studies employing randomization have been the
modus operandi of only some social scientists.

• Research is not an either/or endeavor.

