
SINGLE-FACTOR PRE-EXPERIMENTAL DESIGNS

The three designs summarized in this section are termed pre-experimental designs because they lack
two or more of the six characteristics of experimental research listed earlier. As a
consequence, few threats to internal validity are controlled. This does not mean that these designs
are always uninterpretable, nor does it mean that the designs should not be used. There are certain
cases in which the threats can be ruled out on the basis of accepted theory, common sense, or other
data. Because they fail to rule out most rival hypotheses, however, it is difficult to make reasonable
causal inferences from these designs alone. They are best used, perhaps, to generate ideas that can
be tested more systematically.

It should be noted that the designs in this section and the next two use a single independent
variable. Most studies use more than one independent variable; such designs are called factorial designs.

Notation
In presenting the designs in this chapter, we will use a notational system to provide information for
understanding the designs. The notational system is unique, although similar to that used by Campbell
and Stanley (1963), Cook and Campbell (1979), and Shadish, Cook, and Campbell (2002). Our
notational system is as follows:

R Random assignment

O Observation, a measure that records pretest or posttest scores

X Intervention conditions (subscripts 1 through n indicate different interventions)

A, B, C, D, E, F Groups of subjects or, for single-subject designs, baseline or treatment conditions

Single-Group Posttest-Only Design

In the single-group posttest-only design, the researcher gives a treatment and then measures
the dependent variable, as is represented in the following diagram, where A is the intervention group,
X is the intervention, and O is the posttest.

A     X     O

Although not all threats to internal validity are applicable to this design because there is no pretest
and no comparison with other treatments, valid causal conclusions are rare. Without a pretest, for
example, it is difficult to conclude that behavior has changed at all (e.g., when testing a method of
teaching math to students who know the answers to the final exam before receiving any instruction).
Without a comparison or control group, it is also difficult to know whether other factors occurring at
the same time as the intervention were causally related to the dependent variable. Even though only
five of the threats to internal validity are relevant to this design, the above weaknesses are so severe
that the results of research based on this design alone are usually uninterpretable (see Table 11.7).
The only situation in which this design is reasonable is when the researcher can be fairly certain of the
level of knowledge, attitude, or skill of the subjects before the intervention, and can be fairly sure that
history is not a threat. For example, let’s say that an instructor in an introductory research methods
class wants to conduct a study of how much students have learned about statistical regression. It
seems reasonable to conclude that they did not know much about regression before the course began
and that it is unlikely that they will learn about it in other ways—say, during party conversations!
Consequently, the single-group posttest-only design may provide valid results.

Single-Group Pretest-Posttest Design

A     O     X     O

This common design is distinguished from the single-group posttest-only design by one difference—
the addition of an observation that occurs before the treatment condition is experienced (pretest). In
the single-group pretest-posttest design, one group of subjects is given a pretest (O), then the
treatment (X), and then the posttest (O). The pretest and posttest are the same, just given at different
times. The result that is examined is the change from pretest to posttest. (This design is often
referred to simply as the pretest-posttest design.) Although the researcher can at least obtain a measure of change with
this design, there are still many plausible rival hypotheses that are applicable.
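
To make the idea of a change score concrete, the following sketch shows one common way such data might be analyzed, using hypothetical scores and the Python library SciPy; the specific numbers, sample size, and the choice of a paired t-test are illustrative assumptions and not part of the design itself.

# Hypothetical pretest and posttest scores for a single group (same instrument at both times).
from scipy import stats

pretest  = [62, 58, 71, 65, 60, 68, 55, 63]   # O given before the intervention
posttest = [70, 61, 74, 72, 66, 71, 60, 69]   # O given after the intervention (X)

# The quantity of interest is the within-subject change from pretest to posttest.
changes = [post - pre for pre, post in zip(pretest, posttest)]
print("Mean change:", sum(changes) / len(changes))

# A paired (dependent-samples) t-test asks whether the mean change differs from zero.
# A significant change does not rule out history, maturation, pretesting, or the other
# threats discussed below, because the design has no comparison group.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")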

Consider this example: A university professor has received a grant to conduct inservice workshops for
teachers on the topic of inclusion. One objective of the program is to improve the attitudes of the
teachers toward including children with disabilities in their “regular” classes. To assess this objective, the
professor selects a pretest-posttest design, administering an attitude pretest survey to the teachers
before the workshop and then giving the same survey again after the workshop (posttest). Suppose
the posttest scores are higher than the pretest scores. Can the researcher conclude that the cause of
the change in scores is the workshop? Perhaps, but several threats to internal validity are plausible,
and until they can be ruled out, the researcher cannot assume that attendance at the workshop was
the cause of the change.

The most serious threat is history. Because there is no control or comparison group, the researcher
cannot be sure that other events occurring between the pretest and posttest did not cause a change
in attitude. These events might occur within the context of the workshop (e.g., during the workshop, a
teacher gives a moving testimonial about exceptional children), or they might occur outside the context
of the workshop (e.g., an article about inclusion appears in the school paper). Events like these are
uncontrolled and may affect the results. It is
necessary for the researcher, then, to make a case either that such effects are implausible or that if
they are plausible, they did not occur. Data are sometimes used as evidence to rule out some threats,
but in many cases, it is simply common sense, theory, or experience that is used to make this
judgment.

Statistical regression could be a problem with this design if the subjects are selected on the basis of
extremely high or low scores. In our example with the workshop, for instance, suppose the principal
of the school wanted only those teachers with the least favorable attitudes to attend. The pretest
scores would then be very low and, because of regression, would be higher on the posttest regardless
of the effect of the workshop.
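
A small simulation can show why this happens even when the workshop has no effect at all. The sketch below is a hypothetical illustration in Python: the attitude distribution, the amount of measurement error, and the bottom-quartile selection rule are all assumptions made only for the example.

# Regression to the mean with no treatment effect: select on extreme pretest scores.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

true_attitude = rng.normal(50, 10, n)            # each subject's stable "true" attitude
pretest  = true_attitude + rng.normal(0, 8, n)   # pretest = true score + measurement error
posttest = true_attitude + rng.normal(0, 8, n)   # posttest = true score + new, independent error

# Keep only the lowest-scoring quartile at pretest (analogous to sending only the
# teachers with the least favorable attitudes to the workshop).
selected = pretest < np.quantile(pretest, 0.25)

print("Selected group, pretest mean: ", pretest[selected].mean())
print("Selected group, posttest mean:", posttest[selected].mean())
# The posttest mean is noticeably higher although nothing was done to the subjects,
# because part of each extreme pretest score is unlucky measurement error.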

Pretesting is often a threat to research carried out with this design, especially in research on attitudes,
because simply taking the pretest can alter the attitudes. The content of the questionnaire might
sensitize the subjects to specific problems or might raise the general awareness level of the subjects
and cause them to think more about the topic. Instrumentation can also be a threat. For example, if
the teachers take the pretest on Friday afternoon and the posttest next Wednesday morning, the
responses could be different simply because of the general attitudes that are likely to prevail at each
of these times of the day and week.

Attrition can be a problem if, between the pretest and posttest, subjects are lost for particular reasons.
If all the teachers in a school begin a workshop, for example, and those with the most negative attitude
toward inclusion drop out because they do not want to learn more about it, then the measured
attitudes of the remaining subjects will appear more favorable than they otherwise would. Consider another example. To assess the effect of a
schoolwide effort to promote favorable attitudes toward learning, students are pretested as
sophomores and posttested as seniors. A plausible argument—at least one that would need to be
ruled out—is that improvement in attitudes is demonstrated because the students who have the most
negative attitudes as sophomores never become seniors; they drop out. In this situation it is not
appropriate to use all students taking the pretest and all students taking the posttest and then
compare these groups. Only students who completed both the pretest and the posttest should be
included in the statistical analysis. Attrition is especially a problem with transient
populations, long-term experiments, or longitudinal research.
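
One practical consequence of this point is that the analysis should be limited to "completers," that is, subjects with both a pretest and a posttest score. The short sketch below illustrates one way this might be done; the data layout, column names, and scores are hypothetical, and pandas is assumed only for convenience.

# Keep only subjects measured at both occasions before comparing means.
import pandas as pd

pre  = pd.DataFrame({"student_id": [1, 2, 3, 4, 5], "pre_score":  [40, 55, 48, 62, 35]})
post = pd.DataFrame({"student_id": [1, 3, 4, 5],    "post_score": [58, 52, 66, 50]})  # student 2 dropped out

# An inner merge keeps only completers -- students with both a pretest and a posttest.
completers = pre.merge(post, on="student_id", how="inner")

# Comparing all pretests with all posttests would mix different groups of students;
# the comparison below uses the same students at both time points.
print("Completers only, mean change:", (completers["post_score"] - completers["pre_score"]).mean())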

Maturation is a threat to internal validity of this design when the dependent variable is unstable
because of maturational changes. This threat is more serious as the time between the pretest and
posttest increases. For instance, suppose a researcher is investigating the self-concept of middle
school students. If the time between the pretest and posttest is relatively short (two or three weeks),
then maturation is probably not a threat, but if a year elapses between the pretest and posttest,
changes in self-concept would probably occur regardless of the treatment because of maturation.
Maturation includes such threats as being more tired, bored, or hungry at the time of test taking, and
these factors might be problems in some pretest-posttest designs. In the example of the workshop on
inclusion, it is unlikely that maturation is a serious threat, and it would probably be reasonable
to rule out these threats as plausible rival hypotheses.

Intervention replications may be a threat, depending on the manner in which the treatment is
administered. Experimenter effects, subject effects, and statistical conclusion threats are possible in
any experiment, and these would need to be examined.

From this discussion, it should be obvious that there are many uncontrolled threats to the internal
validity of a single-group pretest-posttest design. Consequently, this design should be used only under
certain conditions that minimize the plausibility of the threats (e.g., use reliable instruments and short
pretest-posttest time intervals) and when it is impossible to use other designs that will control some
of these threats.

Several modifications can be made to the single-group pretest-posttest design that will improve
internal validity, including the following:
• Adding a second pretest
• Adding a second pretest and posttest of a construct similar to the one being tested (i.e., to show
change in the targeted variable and no change in the other variable)
• Following the posttest with a second pretest/posttest with the intervention either removed or
repeated and determining if the pattern of results is consistent with predictions

Excerpt 11.3 is an example of a study that used a single-group pretest-posttest design. Note how the
selection of the targeted students suggests that regression to the mean may be a plausible rival hypothesis.
