Name: Jani Patience
Reg no:
Course name: Quantitative Research Methods
Course Convener: Muparamoto
A quasi-experiment is a prospective or retrospective study in which participants or clusters of
participants self-select into (or their providers select on their behalf) one of several different
treatment groups for the purpose of comparing the real-world effectiveness and safety of
those non-randomized treatments (Pignotti and Thyer, 2009). Quasi-experiments are
observational studies that are similar to randomized controlled trials (RCTs) in many
respects, with the primary exception being that participants self-select into different
treatments instead of being randomized. Given the lack of randomization in a quasi-
experiment, there are many challenges in designing and conducting a quasi-experiment with
strong internal validity. In this write-up, the essayist discusses some of these challenges and
potential solutions.
Quasi-experimental studies encompass a broad range of nonrandomized intervention studies.
These designs are frequently used when it is not logistically feasible or not ethical to conduct a
randomized controlled trial, which is commonly accepted as the "gold standard" of causal
research design (Shadish, 2011). For example, if a hospital is introducing use of an alcohol-
based hand disinfectant, the hospital may want to study the impact of this intervention on the
acquisition of antibiotic-resistant bacteria, as measured by surveillance cultures.
The intervention is implemented, acquisition rates are measured before the intervention and
after the intervention, and the results are analysed. As another example, if a hospital has an
increasing rate of ventilator-associated pneumonia (VAP), the hospital personnel may design
an educational intervention aimed at decreasing the rate of VAP and compare rates before
and after the intervention. A third example would be a study of the effect of an antimicrobial
stewardship/educational program on pre-intervention and post-intervention antibiotic
prescribing practices. In short, a quasi-experimental design is one that resembles an
experimental design but lacks the key ingredient: random assignment.
True experimental research designs, in which a treatment or stimulus is administered
to only one of two groups whose members were randomly assigned, are considered the gold
standard in assessing causal hypotheses (Solomon et al., 2009). True experiments require
researchers to exert a great deal of control over all aspects of the design, which in turn allows
strong statements to be made about causal relationships. In many situations, especially those
involving human subjects, it is simply not possible for researchers to exert the level of control
necessary for a true experiment. For example, it may be unethical to expose subjects to a
stimulus which the researcher knows may cause harm. In addition, researchers are often
interested in processes that are too complex or lengthy to be administered in an experimental
setting. Quasi-experimental designs relax some of the key requirements of true experiments,
making them more practical to implement in many cases but also reducing the strength of the
causal claims that can be made. True experiments require that subjects be randomly assigned
to the treatment or control group (Thyer, 2010). Random assignment ensures that any
characteristics of the subjects which may be associated with the outcome of interest will be
distributed throughout the two groups according to the laws of probability. Often it is not
possible for researchers to randomly assign subjects to groups, for either practical or ethical
reasons. Quasi-experimental research designs therefore use alternative ways of assigning
subjects to the treatment and control groups.
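To make the logic of random assignment concrete, the short Python sketch below randomly splits entirely hypothetical subjects into two groups and shows that a background characteristic (here, age) tends to balance across them; the numbers are illustrative assumptions, not data from any study cited here:

    import random
    random.seed(1)  # fixed seed so the illustration is reproducible

    # Hypothetical ages for 200 subjects
    ages = [random.randint(18, 80) for _ in range(200)]

    # Random assignment: shuffle and split into two groups of 100
    shuffled = ages[:]
    random.shuffle(shuffled)
    treatment, control = shuffled[:100], shuffled[100:]

    # The group means are usually close, reflecting the balancing effect
    print(sum(treatment) / len(treatment), sum(control) / len(control))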
The most common subset of quasi-experimental research designs is the non-equivalent
control group design. In one implementation of this design, subjects in the control group are
intentionally matched by the researcher to subjects in the treatment group on characteristics
which might be associated with the outcome of interest (Shadish et al., 2002). This matching
can be done at the individual level, resulting in a one-to-one match of individuals in the two
groups. Another approach is aggregate matching, in which researchers select a control group
with the same general composition of relevant characteristics (for example, the same
proportion of females and the same age distribution) as the treatment group. These
approaches are considered quasi-experimental because the assignment of subjects to groups
is intentional and not random (Sanderson et al., 2007). Another common approach to
this type of quasi-experimental research design is the use of existing groups. For example, a
comparison could be made between students in two classrooms, with the stimulus
administered in only one classroom.
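As a rough illustration of individual-level matching, the sketch below pairs each hypothetical treated subject with the available control of the same sex whose age is closest; the data frames and column names (age, female) are invented for illustration and do not come from the studies discussed above:

    import pandas as pd

    # Hypothetical treated subjects and a pool of potential controls
    treated = pd.DataFrame({"id": [1, 2, 3],
                            "age": [34, 58, 45],
                            "female": [1, 0, 1]})
    controls = pd.DataFrame({"id": [101, 102, 103, 104, 105],
                             "age": [33, 60, 47, 29, 52],
                             "female": [1, 0, 1, 0, 1]})

    matches = []
    available = controls.copy()
    for _, t in treated.iterrows():
        # Restrict to controls of the same sex, then pick the closest age
        same_sex = available[available["female"] == t["female"]]
        if same_sex.empty:
            continue  # no suitable control left for this treated subject
        best = same_sex.iloc[(same_sex["age"] - t["age"]).abs().argsort().iloc[0]]
        matches.append({"treated_id": t["id"], "control_id": best["id"]})
        available = available.drop(best.name)  # match without replacement

    print(pd.DataFrame(matches))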
Some quasi-experimental research designs do not include a comparison with a control group
at all. Known as before-and-after, pre-test/post-test, or pre-experimental designs, these
quasi-experimental designs expose all subjects to the treatment or stimulus. The
comparison in these designs comes from examining subjects’ values on the outcome of
interest prior to and after the exposure. If post-treatment values differ significantly from pre-
treatment values, a case can be made that the treatment was the cause of the change.
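As a minimal sketch of how such a pre-test/post-test comparison might be analysed, the snippet below runs a paired t-test on hypothetical before-and-after scores for the same eight subjects; the numbers are assumptions made purely for illustration:

    from scipy import stats

    # Hypothetical outcome values for the same subjects before and after
    pre = [12, 15, 11, 14, 13, 16, 12, 15]
    post = [14, 18, 13, 15, 16, 19, 13, 17]

    # Paired t-test: do post-treatment values differ from pre-treatment values?
    t_stat, p_value = stats.ttest_rel(post, pre)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")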
Another quasi-experimental approach involves time-series data, in which researchers observe
one group of subjects repeatedly both before and after the administration of the treatment.
This can be done in a controlled experimental setting, but this design also lends itself well to
a more naturalistic setting in which data are commonly collected on a group of subjects and
researchers are interested in the effects of some treatment or intervention which they did not
experimentally apply. For example, researchers might examine the yearly test scores of
students at a given school for several years both before and after the implementation of an
extended school day; in this situation the yearly test scores represent the time-series data and
the change to an extended school day is the naturally occurring, quasi-experimental
treatment. This approach is an improvement over the single pre-test/post-test design, which
is unable to demonstrate long-term effects. The time-series data design can be further
improved by including a control group which is also examined over time but which does not
experience the treatment; such a design is termed a multiple time-series design.
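One common way to analyse such an interrupted time series is segmented regression, sketched below with statsmodels on hypothetical yearly test scores; the years, scores and intervention date are illustrative assumptions, not data from a real school:

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical yearly test scores, with an extended school day from 2015
    years = np.arange(2010, 2020)
    scores = np.array([61, 62, 63, 62, 64, 68, 69, 71, 72, 74], dtype=float)

    post = (years >= 2015).astype(float)               # level change indicator
    time = (years - years[0]).astype(float)            # underlying time trend
    time_since = np.where(post == 1, years - 2015, 0)  # slope change after 2015

    # Ordinary least squares: intercept, pre-trend, level change, slope change
    X = sm.add_constant(np.column_stack([time, post, time_since]))
    model = sm.OLS(scores, X).fit()
    print(model.params)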
Ethical considerations typically will not allow random withholding of an intervention with
known efficacy. Thus, if the efficacy of an intervention has not been established, a
randomized controlled trial is the design of choice to determine efficacy (Shadish, 2011). But
if the intervention under study incorporates an accepted, well-established intervention, or if
the intervention has either questionable efficacy or safety based on previously conducted
studies, then the ethical issues of randomizing patients are sometimes raised. In addition,
quasi-experimental designs are preferable when the researcher does not have enough time.
For instance, there is often pressure to implement the intervention quickly, which does not
allow researchers sufficient time to plan a randomized trial. Thus, although randomized
controlled trials are technically stronger for establishing causal relationships, they are often
underused.
In conclusion, while quasi-experimental designs are often more practical to implement than
true experiments, they are more susceptible to threats to internal validity. Special care must
be taken to address validity threats, and the use of additional data to rule out alternate
explanations is advised.
References
Pignotti, M. & Thyer, B. A. (2009). Why randomized clinical trials are important and
necessary to social work practice. In H-W. Otto, A. Polutta, & H. Ziegler (Eds.), Evidence-
based practice: Modernizing the knowledge base of social work (pp. 99 – 109). Farmington
Hills, MI/Opladen, Germany: Barbara Budrich Publishers.
Sanderson, S., Tatt, I. D., & Higgins, J. P. T. (2007). Tools for assessing quality and
susceptibility to bias in observational studies in epidemiology: a systematic review and
annotated bibliography. International Journal of Epidemiology, 36, 666 – 676.
Shadish, W. R. (2011). Randomized controlled studies and alternative designs in outcome
studies: Challenges and opportunities. Research on Social Work Practice, 21, xxx – xxx.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental
designs for generalized causal inference. New York: Houghton Mifflin.
Solomon, P., Cavanaugh, M. M., & Draine, J. (2009). Randomized controlled trials. New
York: Oxford University Press.
Thyer, B. A. (Ed.) (2010). Handbook of social work research methods. Thousand Oaks, CA:
Sage Publications, Inc.