

Part VI

QUANTITATIVE RESEARCH DESIGNS INVOLVING SINGLE PARTICIPANTS OR UNITS

Chapter 33

SINGLE-CASE EXPERIMENTAL DESIGN
John M. Ferron, Megan Kirby, and Lodi Lipien

Single-case experimental designs (SCEDs) are experimental designs used to study the effects of interventions on individual cases. This focus on individual effects is a defining feature of SCEDs and can be motivated by conceptual, as well as practical, considerations. Conceptually, the belief in the uniqueness of individuals and the potential for variability in the response to an intervention motivates a desire to study intervention effects one participant at a time (Morgan & Morgan, 2009). As studies accrue, the individual effect estimates can be used to build a distribution from which we can see consistencies or inconsistencies in the effect across cases, best- or worst-case scenarios, and typical or average effects. Practical reasons that may motivate the use of SCEDs include (a) studying the effect of an intervention for those from a sparse population or with a rare diagnosis, making it difficult to recruit more than a few participants for a study (Odom et al., 2005); (b) developing an intervention where it may be efficient to pilot and refine the intervention through a series of single-case studies (Gallo et al., 2013); and (c) adding to the research base on the effects of interventions in practice through the engagement of clinicians (Morgan & Morgan, 2009).

Single-case experimental design has been referred to by a variety of names, including single-subject design, single-case design, and N-of-1 trials. These designs assume a variety of forms, including, but not limited to, reversal, alternating treatments, changing criterion, repeated acquisition, multiple-baseline, and multiple-probe designs. One may ask why we need so many design options to study individual effects or why traditional pre–post intervention measurement at the individual level is not sufficient. Although looking at the pre-to-post change for an individual may be common in some areas of clinical practice, it can be difficult to argue that the change was due to the intervention. It is possible that the behavior, or outcome of interest, fluctuates over time for the individual, and the change that is observed is simply part of this routine fluctuation. It is also possible that there was indeed a systematic change, but that the change resulted from something other than the intervention (e.g., a change in the home, school, or work environment that happened to coincide with the start of the intervention). Sensitive to the detection of change at the case level across time, SCEDs have been developed to help us separate intervention effects from other confounds. However, because of the variation within individuals and outcomes of interest, there is no single best way to do this. Rather, design variations have emerged as research has evolved.

https://doi.org/10.1037/0000319-033
APA Handbook of Research Methods in Psychology, Second Edition: Vol. 2. Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological, H. Cooper (Editor-in-Chief)
Copyright © 2023 by the American Psychological Association. All rights reserved.


In this chapter, we first consider the experimental tactics developed to strengthen the internal validity of studies focusing on individual effects. These tactics help us attribute observed changes to the intervention and include the use of baseline logic, response-guided experimentation, replication, and randomization. Next, we show how various experimental tactics can be combined to produce a variety of SCEDs in which appropriate selection of a design is dependent upon the study purpose, participant characteristics, and the outcome of interest. We next consider additional procedures used within SCEDs to document reliability of the outcome measurement, treatment fidelity, generalization, and social validity, and close with a summary of analysis options appropriate for SCEDs.

EXPERIMENTAL TACTICS

Experimental tactics are procedures used to strengthen the internal validity of SCEDs. These procedures include baseline logic, response-guided experimentation, within-case replication, between-case replication, and randomization.

Baseline Logic

Many SCEDs involve phases in which multiple observations, typically five or more, are collected under the same treatment condition. For example, the study may begin with a baseline phase in which five to 10 observations are collected in a business-as-usual condition prior to the introduction of the intervention. The baseline, if stable, establishes a problem level of behavior (i.e., the need for intervention) and allows researchers to assess treatment effects by comparing what is observed in the intervention phase to what would be expected if the baseline were projected (Engel & Schutt, 2013; Sidman, 1960). To make projections about what would have been observed in the absence of intervention, the researcher must make an assumption about some kind of temporal stability. The strictest stability assumption is that the outcome is temporally stable, implying that there is no variation in the outcome from one session to the next (e.g., all baseline observation values are zero). In such cases, the projections are relatively straightforward because the researcher can assume that in the absence of intervention, future observations of the outcome would have values identical to the constant baseline value.

More commonly, stability assumptions are less strict, and the projections are less exact. Researchers may expect some variability in the outcome from one observation to the next due to various factors that are outside of the control of the researcher, but they may also assume that there are no systematic trends (i.e., the expected level of the behavior does not change over time). In such situations, the baseline projections assume that in the absence of intervention, the mean and variation of future observations would be similar to baseline observations. Consider, for example, a researcher who is studying the effect of an intervention on the number of minutes a child with attention deficit disorder spends reading. Initial baseline and intervention phases are shown in Figure 33.1. The baseline observations show no trend, with observations ranging from two to nine minutes of reading during 30-minute daily reading sessions. Given this baseline, it seems reasonable to project that without intervention, the child would continue to read less than 10 minutes per session. Because the intervention observations are not in line with the baseline projection, they support the contention that something has changed the reading behavior of the child.

If there is not only variability in the baseline observations, but also noticeable trends, baseline projections become more challenging. Here the researcher may assume that the trend is temporally stable (i.e., the same trend would continue in the absence of intervention). If this assumption is reasonable, the researcher would use an extension of the baseline trend line to make a projection of what would happen in the absence of intervention. Manolov and colleagues (2019) provided an excellent discussion of the challenges in projecting trends and provided some relatively flexible options for making projections. If it is unreasonable to assume a continued trend, any projection becomes so suspect that baseline logic fails.
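For a trendless baseline like the one described, the projection logic can be sketched numerically. The following is a minimal illustration with hypothetical data patterned after the Figure 33.1 example (the numbers are invented, not read from the figure):

```python
# Hypothetical AB data patterned after the Figure 33.1 example: minutes of
# reading per 30-minute session. The baseline is assumed trendless, so the
# projection is simply that future values would stay within the baseline range.
baseline = [4, 7, 2, 9, 5, 6, 3, 8, 4, 7]                 # sessions 1-10 (phase A)
intervention = [14, 18, 17, 21, 19, 23, 20, 25, 22, 24]   # sessions 11-20 (phase B)

lo, hi = min(baseline), max(baseline)   # projected envelope absent intervention
outside = [y for y in intervention if not lo <= y <= hi]

# Every intervention observation exceeds the projected envelope, supporting
# the contention that something changed the reading behavior.
print(f"baseline envelope: {lo} to {hi} minutes")
print(f"{len(outside)} of {len(intervention)} intervention observations fall outside it")
```

Visual analysis remains primary in SCEDs; a numeric envelope like this is only a crude stand-in for the projections described above.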


FIGURE 33.1.   Illustration of an AB design. [Line graph: Minutes (0–30) on the y-axis by Session (1–20) on the x-axis, with a baseline phase (A) and an intervention phase (B).]

Response-Guided Experimentation

When researchers adopt the tactic of baseline logic, they commit to establishing stable baselines. However, the number of baseline observations that will be needed may not be known prior to starting the study. Consider the baseline observations graphed in Figure 33.2 representing the on-task behavior of a child with an emotional behavioral disorder. The instability in the baseline, and particularly the high value for the fifth observation, leads to uncertainty about what would be observed if we continued in baseline, as shown by the three alternative projections in Figure 33.2. Projection A assumes that the uniqueness of the last observation is attributable to some unobserved event specific to the day of the fifth observation, and thus future baseline observations would return to the level of the first four observations. Projection B assumes that some unobserved factor has led to a more permanent shift in the level of the behavior and that the fifth observation is reflective of this new level of behavior. Finally, Projection C assumes that some unobserved factor is leading to a shift in the behavior, but that this transition to a new level is still in process, and thus future baseline observations would be higher than those collected to this point. When there is such a wide range of possible projections, it is not feasible to use baseline logic.

FIGURE 33.2.   Unstable baseline with three possible projections (A, B, and C). [Line graph: outcome (0–100) on the y-axis across baseline observations, with diverging projections labeled A, B, and C.]

To ensure baselines are stable at the time of transition to an intervention phase, researchers may choose response-guided experimentation, where the length of the baseline is not established a priori, but rather is dependent on an ongoing visual analysis of the data as they are collected. If this ongoing visual analysis reveals variation that can be accounted for (e.g., it is related to the target child's seatmate during the observation period), the researcher can alter the baseline condition to hold this factor constant and obtain a more stable baseline. If researchers are unable to identify and hold constant the source of the variability, they can extend the baseline phase until the instability has passed and stability has been established. For example, the researcher may extend the five-observation baseline in Figure 33.2 to see if the baseline level would be reestablished at the level of the first four observations (Projection A) or at a higher level (Projection B or C).
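Response-guided decisions rest on visual analysis, but the idea of "extend the baseline until it is stable" can be illustrated with a simple, hypothetical stability rule. The window size and range threshold below are invented for illustration, not a standard from the literature:

```python
# Hypothetical operationalization of "extend baseline until stable":
# keep collecting until the last `window` observations span a small range.
# The window size and threshold are invented; actual response-guided
# decisions rest on visual analysis, not a fixed numeric rule.

def baseline_is_stable(observations, window=5, max_range=15):
    """Return True if the last `window` observations vary by no more than
    `max_range` percentage points of on-task behavior."""
    if len(observations) < window:
        return False
    recent = observations[-window:]
    return max(recent) - min(recent) <= max_range

baseline = [20, 25, 18, 22, 70]      # unstable: the fifth value spikes
print(baseline_is_stable(baseline))  # False -> extend the baseline

baseline += [30, 24, 21, 26, 23]     # extended observations settle back down
print(baseline_is_stable(baseline))  # True -> ready to introduce intervention
```

Under this invented rule, the spike at observation five (as in Figure 33.2) forces an extension of the baseline, and stability is declared only after the level resettles.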


Often the tactic of response-guided experimentation is coupled with the tactic of baseline logic, but in some situations baseline logic can be used without response-guided experimentation. Specifically, in contexts where the participant and outcome of focus produce little to no baseline variability, there is no need to respond to the data. Consider a study of a phonological awareness intervention, where the participant inclusion criteria included a lack of first-sound identification skills, and thus the researcher anticipates that the participant will score zero on each of the baseline measures of first-sound identification. The researcher may fix the baseline length to three or four observations based on practical considerations, or they may randomly select whether it will be three or four observations. Either way, the baseline length could be established a priori, as opposed to in a response-guided fashion, and baseline logic could be used because the baseline would be stable.

Replication

Baseline logic and response-guided experimentation are useful approaches in the design of SCEDs, but they are not sufficient for making causal inferences. When the observations in a treatment phase differ from the baseline projection, multiple explanations for the change are possible. It may have been due to the intervention, but it could also have been due to something other than the intervention, which just happened to coincide with the intervention. Consider the results shown in Figure 33.1. We see a shift in the number of minutes the child reads, but it is difficult to know if the intervention caused the change in behavior. The child could have increased their reading because of some nonintervention change in the teacher's behavior, some change among their peers, or for a variety of other reasons. Replication is a tactic used to make it more difficult to attribute changes to factors other than the intervention.

FIGURE 33.3.   Illustration of within-case replication. [Line graph: Number of Desirable Responses (0–30) by Session (1–40), across alternating baseline and intervention phases (ABAB).]

One way to replicate is to do so within the case. If the effect of the intervention is believed to be limited to when the intervention is active (e.g., while a support dog is present, or when the child is taking their prescribed medication), the intervention could be removed with the expectation that the behavior would return to baseline levels, and then the intervention could be reintroduced. Consider again a researcher studying the effect of an intervention on the amount of time a child spends reading. Suppose that the study had started as shown in Figure 33.1, and then the researcher added a second baseline phase followed by a second treatment phase, as shown in Figure 33.3. Because of the replication of the effect within the case, it would be difficult to attribute the change in behavior to some other factor. Put simply, it does not seem plausible that the other factor would happen to occur, be removed, and then occur again, in a way that


coincided with the changes between baseline and intervention phases.

In some cases, a behavior is not reversible, such as when the intervention targets the learning of a particular skill, and thus removal of the intervention is not expected to lead to a return to baseline levels. In such studies, within-case replication is not possible. However, replication could be accomplished by attempting to duplicate the effect across different individuals, behaviors, or settings. When replicating across cases (i.e., participants, behaviors, or settings), the start of the intervention is typically introduced at different times, as illustrated in Figure 33.4. When changes in behavior are staggered over time and coincide with the introduction of intervention, there is stronger evidence of a causal relation. However, if changes in behavior occur simultaneously for all cases, the changes are more likely due to some nonintervention effect than the intervention itself. Thus, with replication at different times, researchers are able to disentangle intervention effects from history effects (i.e., external events that impact the outcome, such as a change in school personnel or policies).

FIGURE 33.4.   Illustration of across-case replication. [Three stacked line graphs, one per case: Number of Desirable Responses by Session (1–15), with the intervention introduced at a different time for each case.]

Randomization

Another experimental tactic that may be used with SCEDs is randomization. Consider a comparative study of the effect of two treatments on a reversible behavior. The researchers may design the study so there is rapid alternation between the treatments. In this case, the phase structure (e.g., five or more successive observations in the same intervention condition) that was shown in our previous examples is not present, and thus the tactic of baseline logic is unavailable. However, the researchers still need to argue that the difference between the observations is due to the difference in treatments and not some other factor. In this context, SCED researchers will often conceptualize their design as having successive pairs of observations, and then randomly assign one observation from each pair to each condition (i.e., one of the first two observations to Treatment A and the other to Treatment B, one of the second two observations to Treatment A and the other to Treatment B, and so forth). If the researcher finds that the behavior is consistently better under Treatment A than Treatment B, it is difficult to attribute this difference to some other factor. This type of randomization facilitates analyses (e.g., randomization tests; Edgington & Onghena, 2007) that control the probability of incorrectly inferring that one treatment was more effective than the other (i.e., control over Type I errors), as well as facilitating unbiased estimates of the treatment effect. A review of SCEDs suggests that randomization is commonly used in designs that rapidly alternate between conditions (Tanious & Onghena, 2020).
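The pairwise random assignment just described supports a randomization test (Edgington & Onghena, 2007). A minimal sketch with hypothetical data (the outcomes and pair structure are invented) enumerates every possible within-pair relabeling to obtain an exact p value:

```python
from itertools import product

# Hypothetical outcomes for eight successive pairs of sessions; within each
# pair, one session was randomly assigned to Treatment A and the other to B.
# Each tuple is (A outcome, B outcome) for that pair.
pairs = [(12, 7), (14, 9), (11, 8), (15, 10), (13, 9), (12, 6), (16, 11), (14, 8)]
observed = sum(a - b for a, b in pairs) / len(pairs)  # observed mean A-B difference

# Under the null hypothesis of no treatment difference, the A/B labels within
# each pair are exchangeable, so all 2**8 = 256 relabelings are equally likely.
as_extreme = 0
for flips in product((1, -1), repeat=len(pairs)):
    stat = sum(f * (a - b) for f, (a, b) in zip(flips, pairs)) / len(pairs)
    if stat >= observed:
        as_extreme += 1

p_value = as_extreme / 2 ** len(pairs)
print(f"observed mean difference = {observed:.3f}, one-sided p = {p_value:.4f}")
```

In these invented data only the observed relabeling produces a difference this large, so the one-sided p value is 1/256; with real data, smaller or negative within-pair differences would yield larger p values.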


Although somewhat less common, randomization is also used in designs with a phase structure (Tanious & Onghena, 2020). Here researchers use one of several methods to randomly determine when the transitions between phases will occur. For example, consider Figure 33.3, where one could randomly determine when to transition from A1 to B1, B1 to A2, and A2 to B2 (e.g., Onghena, 1992), or consider Figure 33.4, where one could determine randomly when to transition from baseline to treatment for each case (e.g., Koehler & Levin, 1998). Just as with randomization procedures for designs that rapidly alternate between treatments, randomization in designs with phases facilitates analyses that control Type I errors. However, randomly selecting transition times does not remove the relationship between time and treatment assignment (e.g., baseline observations still precede treatment observations), and thus treatment effect estimates may be biased by time-related unobserved factors (Ferron et al., 2014). Consequently, the randomization used in SCEDs with phases is more limited in value than the randomization used in designs with rapid alternation between conditions. In addition, selecting start points a priori conflicts with response-guided experimentation (e.g., Kazdin, 1980), and, thus, researchers using phase-based designs may choose either a priori randomization or response-guided experimentation. In contexts where baselines are assumed to have little to no variability, there is minimal need for response-guided experimentation, and random selection of transition points can be easily accommodated. In addition, for researchers who have practical constraints preventing them from extending phases or who prefer a design that is not responsive to the data, the option of randomizing transition points is appealing. Finally, for researchers who would like to respond to their data to ensure stable baselines as well as randomize, there are approaches to do so, where the random assignments are made during rather than before the experiment (Ferron & Jones, 2006; Moeyaert et al., 2021).

TYPES OF SCEDs

By using different combinations of the experimental tactics of baseline logic, response-guided experimentation, within-case replication, between-case replication, and randomization, a variety of SCEDs emerge. We discuss some of the options here, indicating which experimental tactics may be used, along with the contexts for which these designs are best aligned.

Withdrawal and Reversal Designs

The withdrawal (or reversal) design can be used to study individual intervention effects on reversible outcomes (i.e., outcomes that would return to the baseline level if the intervention were removed). These designs consist of a baseline phase (A), followed by an intervention phase (B), followed by a second baseline phase where the intervention is withdrawn and the behavior is expected to revert to baseline levels (Baer et al., 1968; Sidman, 1960). The second baseline phase is typically followed by a second intervention phase, creating an ABAB design, as shown in Figure 33.3. Visual inspection of Figure 33.3 shows that the number of desirable behaviors during the intervention phases is higher than could be reasonably projected from the baseline phases, and this effect is replicated within the case, providing evidence of a treatment effect. With an ABAB design, there are three opportunities to observe effects (i.e., at each of the three transitions between phases), which is generally considered to be the minimum acceptable number of replications. For those who prefer more replications, the ABAB design can be extended by adding phases within the case, such as in an ABABAB design, or by replicating the ABAB design across cases. As can be seen in this illustration, the internal validity of withdrawal designs relies heavily on baseline logic and within-case replication. In addition, withdrawal designs may include response-guided experimentation (Kazdin, 1980), randomly selected times to transition between phases (Onghena, 1992), or both (Ferron & Levin, 2014).
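The random determination of transition times cited above might be sketched as follows. This is a hypothetical illustration of drawing each case's intervention start from a window of admissible sessions, in the spirit of the Koehler and Levin (1998) procedure rather than its exact mechanics; the three cases and window boundaries are invented:

```python
import random

# Hypothetical three-case staggered design: each case's intervention start is
# drawn at random from its own window of admissible sessions, so the design
# is fixed a priori rather than response-guided.
windows = {"case 1": range(5, 9), "case 2": range(9, 13), "case 3": range(13, 17)}

rng = random.Random(33)  # seeded only to make the sketch reproducible
starts = {case: rng.choice(list(w)) for case, w in windows.items()}
print(starts)

# With four admissible starts per case there are 4**3 = 64 equally likely
# assignments, so the smallest p value attainable from a randomization test
# of this design is 1/64 (about .016).
n_assignments = 1
for w in windows.values():
    n_assignments *= len(w)
print(f"{n_assignments} equally likely assignments")
```

Because the windows are disjoint and ordered, the starts remain staggered across cases, preserving the between-case replication logic of designs like the multiple-baseline design discussed below.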


Multiple Baseline Designs

The multiple-baseline design is commonly used to investigate the effect of an intervention on a target behavior, particularly when it is not feasible or appropriate to use a reversal design, such as when the target behavior is not reversible. This design includes a minimum of three cases (i.e., behaviors, settings, or individuals) with baselines of varying lengths so that the intervention is introduced at different times for each case. More specifically, the baseline phase for the first case should be stable prior to introducing the intervention, and the baseline phase for the second case should continue until the intervention phase for the first case is stable. This staggered approach allows the researcher to determine if changes in behavior coincide with the intervention for each participant. As shown in Figure 33.4, the data are presented in stacked graphs, one for each case. An advantage of the multiple-baseline design is that its staggered intervention across cases allows for more experimental control than replicated AB designs, making it easier to conclude that the intervention was responsible for the change in behavior (Horner & Odom, 2014). The internal validity of multiple-baseline designs relies on baseline logic and temporally staggered replication in which ongoing visual analysis is used to confirm stability within the phases. In addition, researchers may utilize response-guided experimentation (Baer et al., 1968), random selection of intervention starts (Koehler & Levin, 1998), or both response-guided experimentation and randomization (Ferron & Jones, 2006).

Multiple-Probe Designs

The multiple-probe design is a variation of the multiple-baseline design, which requires fewer observations. It is often preferred when (a) long baseline phases present an ethical problem, (b) the target behavior is unlikely to change in the absence of the intervention, or (c) the target behavior is readily influenced by repeated testing (Horner & Baer, 1978; Morgan & Morgan, 2009). In this design, probes are used to determine the natural rate of the behavior at infrequent but scheduled time points during baseline rather than through continuous measurement (Murphy & Bryan, 1980). The use of probes reduces the need for resources by minimizing unnecessary data collection, but researchers must ensure that no changes in the behavior have occurred before introducing the intervention. A multiple-probe design is illustrated in Figure 33.5. The reduction in the number of baseline observations is observed for the second and third participants. In particular, the third participant has five baseline observations but would have had 10 in a traditional multiple-baseline design.

FIGURE 33.5.   Illustration of a multiple-probe design. [Three stacked line graphs, one per participant: Number of Desirable Responses by Session (1–15), with intermittent baseline probes and staggered intervention starts.]
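The savings in baseline observations can be sketched as a measurement schedule. The session counts, start points, and probe spacing below are invented for illustration:

```python
# Hypothetical multiple-probe schedule for three participants: rather than
# measuring every baseline session, baseline probes occur every third session,
# and full measurement begins once the intervention is introduced.
n_sessions = 15
starts = {"participant 1": 4, "participant 2": 8, "participant 3": 12}
probe_spacing = 3

schedule = {}
for participant, start in starts.items():
    baseline_probes = list(range(1, start, probe_spacing))
    intervention = list(range(start, n_sessions + 1))
    schedule[participant] = baseline_probes + intervention

# The later a participant's intervention starts, the more baseline sessions a
# traditional multiple-baseline design would require; probing trims that cost.
for participant, sessions in schedule.items():
    print(participant, sessions)
```

Only the probe timing is illustrated here; decisions about when to introduce the intervention would still follow the stability and staggering logic described above.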


Changing Criteria Designs

The changing criterion design (CCD) allows researchers to implement an intervention that shapes a given behavior in a gradual, stepwise fashion with the goal of achieving a predetermined outcome. This design is especially useful when the rate, duration, or accuracy of a behavior is best changed through small steps (e.g., shaping by approximation), such as with reducing smoking or increasing physical exercise. As shown in Figure 33.6, the design includes an initial baseline phase followed by multiple treatment phases of varied lengths that are introduced over time. Each treatment phase has its own preset criterion to be met, and once stability is achieved, a more stringent criterion for the next phase is established. Researchers have recommended at least three to four changes in the criterion level over the course of the study (Gast & Ledford, 2014; Klein et al., 2017). This process continues until the overall outcome or goal is attained. This design has several advantages: (a) at a minimum, it requires only one participant, behavior, and setting; (b) treatment does not have to be withdrawn; (c) after a brief baseline phase, treatment is introduced across cases concurrently (i.e., baselines are not staggered); and (d) treatment efficacy is evident when performance closely matches the criterion (Byiers et al., 2012; Poling & Grossett, 1986). The internal validity of CCDs can be enhanced by changing the distance between criterion levels, varying the lengths of phases, and including randomization procedures (Ferron et al., 2019; Onghena et al., 2019).

FIGURE 33.6.   Illustration of a changing criterion design. [Line graph: Correct Responses (0–100) by Session (1–43), with stepwise criterion increases across phases.]

Alternating Treatments Designs

The alternating treatments design (ATD) allows researchers to compare within-case effects of two or more distinctly different treatments with repeated measurement of reversible behavior(s) across sessions (Barlow & Hayes, 1979). The rapid alternation and relatively short duration of an ATD can control threats to internal validity such as changes within the participant that are not due to the intervention but that impact the outcome over time (i.e., maturation) and changes in the way the outcome is measured at different times (i.e., instrumentation; Wolery et al., 2018). However, with ATDs, the researcher needs to assume that there are no carry-over effects and no interaction of treatments with each other. To control for sequence effects, the researcher can randomize treatment order with restrictions that prevent more than two consecutive sessions of the same condition (i.e., restricted randomization; Onghena & Edgington, 1994). Study outcomes are then graphed by associated treatment condition using independent and different data paths (see Figure 33.7). Unlike other SCEDs, stable responding is not a prerequisite for changing conditions, and thus, researchers may consider the use of ATDs to study variable behavior that would otherwise necessitate longer phases, allowing for an efficient comparative analysis of


differential treatment effectiveness. For example, Peters et al. (2020) used an ATD to assess differential spelling performance in persons with visual impairment and amyotrophic lateral sclerosis (ALS). Although spelling performance was highly variable, the researchers were able to compare accurate responding within and across three different eye tracking and brain-computer interface systems. As such, the ATD can be a suitable option for researchers seeking to identify a more effective treatment or investigate the extent to which two or more treatments differ in effectiveness.

FIGURE 33.7.   Illustration of an alternating treatment design. [Line graph: Number of Desirable Responses (0–35) by Session (1–24), with separate data paths for Baseline, Treatment A, and Treatment B.]

Repeated Acquisition Designs

The repeated acquisition design (RAD) is a type of SCED used to repeatedly test brief intervention effects on discrete operant behaviors such as language or academic skills (e.g., vocabulary), making it a feasible alternative to the ATD when studying irreversible outcomes (Cohn & Paule, 1995). The RAD requires a priori assignment of discrete stimuli or behaviors of relative equivalency (i.e., difficulty) to sets targeted for instruction and the repeated measurement of an outcome conducted through pre- and post-intervention probes over time (Kennedy, 2005; Ledford & Gast, 2018). For example, Cohn and colleagues (1993) examined the effects of lead on animals' skill acquisition and retention by comparing pre- to postexposure performance. More recently, Dennis and Whalon (2020) used a RAD to compare preschoolers' rates of vocabulary acquisition following computer-assisted instruction or teacher-delivered instruction. Advantages of the RAD include the strength of the within-subjects design, the ability to evaluate treatment dosage and salience on operant behaviors, and the ability to examine the extent to which responding across different stimulus sets is a consequence of experimental manipulation. Further, evidence of the credibility of the design (Kratochwill et al., 2010) can be enhanced with additional considerations, such as procedures to determine stimulus equivalency, randomization techniques, multiple cases, and the addition of baseline and maintenance phases (Kirby et al., 2021). As shown in Figure 33.8, the researcher can strengthen the RAD by adding control or comparison stimuli (i.e., stimuli that are not taught).

STUDY PROCEDURES

In addition to selecting a design and experimental tactics that align with the study purpose and outcome of interest, single-case researchers implement procedures to document reliability of the outcome measurement, treatment fidelity, generalization, and social validity.


FIGURE 33.8.   Illustration of a repeated acquisition design. [Line graph: Number of words correct (out of 12) by Weeks (1–10), with pre-probe and post-probe data paths for targeted stimuli and for control stimuli that are not taught.]

Reliability of Outcome Measurement

The reliability of outcome measurement is a key variable in the interpretability of SCED study results. Analysis and interpretability of SCED outcomes hinge on the integrity of the data reported by trained independent data collectors and scorers. Poor reliability translates to high rates of disagreement between independent data collectors about the presence or absence of an effect on the target behavior. If rates of disagreement remain high and are not immediately resolved, researchers introduce confounds in the ability to make assumptions about intervention effects, which threatens replicability of results in future research (Ledford & Wolery, 2013). Thus, a priori, the researcher should operationalize the dependent variable and measurement procedures and train data collectors and scorers to fidelity. Researchers should plan to have a second independent data collector measure and/or score outcomes for at least 20% of all observations per phase or condition, with a goal of 80% agreement between the two data collectors (Lane & Gast, 2014). Reliability is then usually reported by researchers as a percentage of inter-observer agreement (IOA), inter-rater reliability, or inter-rater agreement (Lane & Gast, 2014). To examine measurement discrepancies, some researchers have also suggested graphical presentation of IOA results over time to examine the extent to which the levels of agreement are consistent within and across experimental conditions (Ledford & Wolery, 2013; Ledford et al., 2019). To guard against instrumentation effects, where perhaps the two raters drift in the same way over time (e.g., each gets more lenient), sessions could also be recorded and scored in random order for each rater.

Treatment Fidelity

Treatment fidelity in single-case design refers to the consistent implementation of an intervention within and across study sessions. The efficacy of an intervention is often highly dependent on whether it has been implemented as intended. Fidelity increases confidence that the intervention is directly linked to changes in the target behavior. A common approach to monitoring treatment fidelity in single-case studies involves completing a checklist to ensure that the intervention is delivered according to plan. For example, team members can document the amount of time spent with each participant, the quantity of feedback given to each participant, and any notable changes within the testing environment. The compiled information can help to assess the

12

1ST PAGES
Single-Case Experimental Design

need for implementation supports (Collier-Meek et al., 2018). Adherence to treatment fidelity enhances the internal validity of the study and reduces the potential for Type II errors (Krasny-Pacini & Evans, 2018).

Generalization

Planning for maintenance and generalization phases can provide information about the extent to which behavior change persists in other contexts and over time. In SCED, generalization refers to instances when trained skills or behaviors resulting from experimental manipulation of the independent variable transfer beyond experimental conditions to more natural contexts (Kendall, 1981; Stokes & Baer, 1977). In SCED research, researchers can plan to evaluate generalization effects across three categories: response generalization, stimulus generalization, and/or maintenance (Catania, 1992; Kendall, 1981). Response generalization occurs when participants encounter a discriminative stimulus that evokes an untrained behavior of similar topography or function to the trained response. For example, say a researcher was interested in reducing a child's toy-grabbing behavior by teaching the child to ask for a peer's permission to share. During the study, the researcher taught the child to say, "May I have . . ." However, presented with the same antecedent condition later in the day, the child made a spontaneous request using the untrained phrase, "Can I please have . . ." Conversely, researchers can document treatment effects on stimulus generalization when participants engage in a target behavior in response to an untrained stimulus or novel situation (Sidman, 1997). For example, Gunby et al. (2010) taught abduction-prevention skills to three children with autism using behavioral skills training at a day care facility. In a stimulus generalization probe, one participant demonstrated the learned skills in the community setting without explicit instruction in this setting. In addition, all participants demonstrated maintenance of such skills after 1 month. In other words, the children demonstrated generalization of treatment effects over time in absence of the intervention. There are numerous ways for researchers to contribute support for external validity, such as conducting follow-up observations poststudy, planning for probes in novel contexts, and documenting instances of response generalization. Regardless of the specific measures taken, researchers should consider ways to integrate these strategies into their SCED.

Social Validity

One of the seven dimensions of applied behavior analysis is the study of socially significant behavior: "changes in behavior that are clinically significant or actually make a difference in the client's life" (Kazdin, 1977, p. 427). The acceptability of intervention components, methods of measurement, and experimental outcomes can be equally as relevant to interventionists as a measure of effectiveness, answering "For whom does it work?" and "Will it continue in use when I'm gone?" Attention to social validity is important because interventions that are impractical or unacceptable are less likely to be adopted (Leko, 2014; Lloyd & Heubusch, 1996). Behavior scientists can use surveys and choice measures to gather information about the social significance of the research before and after a study (Fuqua & Schwade, 1986). Additionally, structured interviews with participants and stakeholders can supplement measures of treatment adherence and attrition (i.e., participant drop-out). Follow-up interviews with participants and primary stakeholders can provide information about whether the research methods and designs are aligned with the applied dimension of applied behavior analysis (Baer et al., 1968; Kazdin, 1977; Wolf, 1978).

DATA ANALYSIS

The principal analysis method for SCEDs is visual analysis of the graphed data (Barlow & Hersen, 1984; Gast & Spriggs, 2014; Kratochwill et al., 2010). During visual analysis, researchers engage in four steps: (a) documenting stable baseline patterns, (b) examining the data within each phase, (c) comparing adjacent and similar phases, and (d) determining whether there are at

least three demonstrations of the effect at different points in time (Kratochwill et al., 2010). In analyzing and comparing the data patterns within and across phases, researchers attend to six data features: (a) level, (b) trend, (c) variability, (d) immediacy of effect, (e) overlap, and (f) consistency of patterns across similar phases (Kratochwill et al., 2010). Visual analysis training methods have been developed (Wolfe & Slocum, 2015), and visual analyses continue to serve researchers using SCEDs well. However, they do have some limitations. Indexing the probability of falsely concluding an effect exists is difficult, which leads to questions about Type I error control (Fisch, 1998), and quantitative summaries of the size of the effect can be helpful for research synthesis and meta-analyses. Thus, a variety of statistical analyses have been developed to complement visual analyses.

For researchers who have incorporated randomization into their designs, it is possible to formally control the probability of falsely concluding there is an effect (i.e., Type I error) by using randomization tests if the randomization is done a priori (Edgington & Onghena, 2007) or masked visual analysis if the randomization is done during response-guided experimentation (Ferron & Levin, 2014). These randomization-based methods can be appealing for testing the effect because they do not require modeling assumptions, such as temporal stability, independence, or normality (Edgington, 1980). In addition, software to conduct randomization tests is readily available (Bulté & Onghena, 2009; Gafurov & Levin, 2020), as is an application to facilitate masked visual analysis (Moeyaert et al., 2021). However, these randomization-based methods are limited to testing the null hypothesis of no treatment effect.

Researchers who wish to provide a quantitative estimate of the size of the effect must turn to other options, including statistical modeling and standardized effect estimation. When the study contains a single case, extensions of regression that allow researchers to account for potential serial dependence (or autocorrelation in the repeated observations) are available (Maggin et al., 2011; Swaminathan et al., 2014). When the study contains multiple cases (e.g., multiple-baseline designs, multiple-probe designs), multilevel models are available that account for the nesting of the repeated observations within the cases (Rindskopf, 2014; Rindskopf & Ferron, 2014; Shadish et al., 2013; Van den Noortgate & Onghena, 2003). Parameter estimates from these regression or multilevel models can be used as raw score effect indices. For those who desire standardized effect estimates, options include effect sizes based on nonoverlap between adjacent phases (Parker & Vannest, 2009; Parker et al., 2011), mean differences standardized by within-case variability (Busk & Serlin, 1992), mean differences standardized by between-case variability (Pustejovsky et al., 2014; Shadish et al., 2014), response ratios (Pustejovsky, 2018), and progress toward a goal (Ferron et al., 2020).

SUMMARY

Single-case experimental designs (SCEDs) are a collection of research methods for the study of intervention effects on individuals. No single method is always optimal because of differences in study purposes (e.g., indexing the effectiveness of an intervention versus comparing the effectiveness of alternative interventions), target outcomes (e.g., reversible versus nonreversible behaviors), and practical constraints associated with the research context and individual under study. Rather than relying on a single method, SCED researchers combine different experimental tactics (e.g., baseline logic, response-guided experimentation, within-case replication, across-case replication, and randomization) to select a design type (e.g., reversal, multiple-baseline, multiple-probe, changing criterion, alternating treatments, or repeated acquisition) that aligns with their purpose, outcome, and research context. Regardless of which tactics and design type are employed, SCED researchers incorporate procedures to ensure the outcome is measured reliably, the intervention is implemented with fidelity, issues of generalizability and social validity are considered, and the primary outcome data are graphed as a function of time to facilitate visual analyses of the intervention effect.
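As an illustration of the inter-observer agreement (IOA) index discussed under Reliability of Outcome Measurement, here is a minimal point-by-point percent-agreement sketch. The interval records and the function name are ours, for illustration only; published studies use several IOA variants (e.g., total-count, interval-by-interval), of which this is the simplest.

```python
def percent_agreement(obs1, obs2):
    """Point-by-point interobserver agreement (IOA), as a percentage.

    obs1, obs2: equal-length sequences of interval records from two
    independent observers (e.g., 1 = behavior occurred, 0 = did not).
    """
    if len(obs1) != len(obs2):
        raise ValueError("Observers must score the same intervals")
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)

# Hypothetical records for ten observation intervals scored by two observers.
rater_a = [1, 0, 0, 1, 1, 1, 0, 1, 0, 1]
rater_b = [1, 0, 1, 1, 1, 1, 0, 1, 0, 1]

ioa = percent_agreement(rater_a, rater_b)  # agree on 9 of 10 intervals -> 90.0
print(f"IOA = {ioa:.1f}% ({'meets' if ioa >= 80 else 'below'} the 80% benchmark)")
```

Computing IOA per session, rather than once overall, also supports the graphical presentation of agreement over time that Ledford and colleagues recommend.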
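The a priori randomization-test logic described in the Data Analysis section can be sketched as follows, assuming a simple AB design in which the intervention start point was drawn at random from a set of candidate sessions. The data and the phase-mean-difference statistic are illustrative choices on our part, not prescriptions from the chapter.

```python
def ab_randomization_test(scores, actual_start, candidate_starts):
    """One-sided randomization test for an AB single-case design.

    The intervention start is assumed to have been chosen at random
    (a priori) from candidate_starts. The test statistic is the phase-mean
    difference; the p-value is the proportion of candidate start points
    whose statistic is at least as large as the observed one.
    """
    def mean_diff(start):
        baseline, treatment = scores[:start], scores[start:]
        return sum(treatment) / len(treatment) - sum(baseline) / len(baseline)

    observed = mean_diff(actual_start)
    as_extreme = sum(mean_diff(s) >= observed for s in candidate_starts)
    return observed, as_extreme / len(candidate_starts)

# Hypothetical outcome series (sessions 1-12); the intervention began at
# session 7 (index 6), drawn at random from sessions 5-10 (indices 4-9).
data = [3, 4, 3, 5, 4, 4, 8, 9, 8, 10, 9, 9]
effect, p = ab_randomization_test(data, actual_start=6, candidate_starts=range(4, 10))
print(f"mean difference = {effect:.2f}, p = {p:.3f}")
```

Note that with k candidate start points the smallest attainable p-value is 1/k, which is one reason randomization tests are typically paired with more candidate points, more phases, or replication across cases.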
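One of the nonoverlap effect sizes cited in the Data Analysis section, Nonoverlap of All Pairs (NAP; Parker & Vannest, 2009), is simple enough to sketch directly; the phase data here are hypothetical.

```python
def nonoverlap_of_all_pairs(baseline, treatment):
    """Nonoverlap of All Pairs (NAP; Parker & Vannest, 2009).

    Compares every baseline observation with every treatment observation;
    NAP is the proportion of pairs in which the treatment value exceeds the
    baseline value, with ties credited as half an overlap. Assumes the
    target behavior is expected to increase.
    """
    pairs = [(a, b) for a in baseline for b in treatment]
    improved = sum(b > a for a, b in pairs)
    ties = sum(b == a for a, b in pairs)
    return (improved + 0.5 * ties) / len(pairs)

# Hypothetical phase data for a behavior expected to increase.
baseline_phase = [2, 3, 3, 4]
treatment_phase = [4, 6, 5, 7, 6]

nap = nonoverlap_of_all_pairs(baseline_phase, treatment_phase)
print(f"NAP = {nap}")  # values near 1.0 indicate little overlap between phases
```

Like other nonoverlap indices, NAP summarizes phase separation without modeling trend, so it is usually interpreted alongside, not instead of, visual analysis.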


References

Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior analysis. Journal of Applied Behavior Analysis, 1(1), 91–97. https://doi.org/10.1901/jaba.1968.1-91

Barlow, D. H., & Hayes, S. C. (1979). Alternating treatments design: One strategy for comparing the effects of two treatments in a single subject. Journal of Applied Behavior Analysis, 12(2), 199–210. https://doi.org/10.1901/jaba.1979.12-199

Barlow, D. H., & Hersen, M. (1984). Single case experimental designs: Strategies for studying behavior change. Pergamon.

Bulté, I., & Onghena, P. (2009). Randomization tests for multiple-baseline designs: An extension of the SCRT-R package. Behavior Research Methods, 41(2), 477–485. https://doi.org/10.3758/BRM.41.2.477

Busk, P. L., & Serlin, R. C. (1992). Meta-analysis of single-case research. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case research design and analysis: New directions for psychology and education (pp. 187–212). Lawrence Erlbaum Associates.

Byiers, B. J., Reichle, J., & Symons, F. J. (2012). Single-subject experimental design for evidence-based practice. American Journal of Speech-Language Pathology, 21(4), 397–414. https://doi.org/10.1044/1058-0360(2012/11-0036)

Catania, A. C. (1992). Learning. Prentice Hall.

Cohn, J., Cox, C., & Cory-Slechta, D. A. (1993). The effects of lead exposure on learning in a multiple repeated acquisition and performance schedule. Neurotoxicology, 14(2-3), 329–346.

Cohn, J., & Paule, M. G. (1995). Repeated acquisition of response sequences: The analysis of behavior in transition. Neuroscience and Biobehavioral Reviews, 19(3), 397–406. https://doi.org/10.1016/0149-7634(94)00067-B

Collier-Meek, M. A., Fallon, L. M., & Gould, K. (2018). How are treatment integrity data assessed? Reviewing the performance feedback literature. School Psychology Quarterly, 33(4), 517–526. https://doi.org/10.1037/spq0000239

Dennis, L. R., & Whalon, K. J. (2020). Effects of teacher- versus application-delivered instruction on the expressive vocabulary of at-risk preschool children. Remedial and Special Education, 42(4), 1–12. https://doi.org/10.1177/0741932519900991

Edgington, E. S. (1980). Validity of randomization tests for one-subject experiments. Journal of Educational Statistics, 5(3), 235–251. https://doi.org/10.3102/10769986005003235

Edgington, E. S., & Onghena, P. (2007). Randomization tests (4th ed.). Chapman & Hall. https://doi.org/10.1201/9781420011814

Engel, R. J., & Schutt, R. K. (2013). The practice of research in social work (3rd ed.). SAGE.

Ferron, J., Goldstein, H., Olszewski, A., & Rohrer, L. (2020). Indexing effects in single-case experimental designs by estimating the percent of goal obtained. Evidence-Based Communication Assessment and Intervention, 14(1-2), 6–27. https://doi.org/10.1080/17489539.2020.1732024

Ferron, J., Rohrer, L. L., & Levin, J. R. (2019). Randomization procedures for changing criterion designs. Behavior Modification. Advance online publication. https://doi.org/10.1177/0145445519847627

Ferron, J. M., & Jones, P. (2006). Tests for the visual analysis of response-guided multiple-baseline data. Journal of Experimental Education, 75(1), 66–81. https://doi.org/10.3200/JEXE.75.1.66-81

Ferron, J. M., & Levin, J. R. (2014). Single-case permutation and randomization statistical tests: Present status, promising new developments. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case intervention research: Methodological and statistical advances (pp. 153–183). American Psychological Association. https://doi.org/10.1037/14376-006

Ferron, J. M., Moeyaert, M., Van den Noortgate, W., & Beretvas, S. N. (2014). Estimating causal effects from multiple-baseline studies: Implications for design and analysis. Psychological Methods, 19(4), 493–510. https://doi.org/10.1037/a0037038

Fisch, G. S. (1998). Visual inspection of data revisited: Do the eyes still have it? The Behavior Analyst, 21(1), 111–123. https://doi.org/10.1007/BF03392786

Fuqua, R. W., & Schwade, J. (1986). Social validation of applied behavioral research. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis (pp. 265–292). Springer. https://doi.org/10.1007/978-1-4684-8786-2_12

Gafurov, B. S., & Levin, J. R. (2020). ExPRT (Excel Package of Randomization Tests): Statistical analyses of single-case intervention data (Version 4.1) [Computer software]. http://ex-prt.weebly.com

Gallo, K. P., Comer, J. S., & Barlow, D. H. (2013). Single-case experimental designs and small pilot trial designs. In J. S. Comer & P. C. Kendall (Eds.), The Oxford handbook of research strategies for clinical psychology (pp. 24–39). Oxford University Press.

Gast, D. L., & Ledford, J. R. (Eds.). (2014). Single case research methodology: Applications in special education and behavioral sciences (2nd ed.). Routledge. https://doi.org/10.4324/9780203521892

Gast, D. L., & Spriggs, A. D. (2014). Visual analysis of graphic data. In D. L. Gast & J. R. Ledford (Eds.), Single case research methodology: Applications in special education and behavioral sciences (2nd ed.). Routledge. https://doi.org/10.4324/9780203521892-9

Gunby, K. V., Carr, J. E., & Leblanc, L. A. (2010). Teaching abduction-prevention skills to children with autism. Journal of Applied Behavior Analysis, 43(1), 107–112. https://doi.org/10.1901/jaba.2010.43-107

Horner, R. D., & Baer, D. M. (1978). Multiple-probe technique: A variation on the multiple baseline. Journal of Applied Behavior Analysis, 11(1), 189–196. https://doi.org/10.1901/jaba.1978.11-189

Horner, R. H., & Odom, S. L. (2014). Constructing single-case research designs: Logic and options. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case intervention research: Methodological and statistical advances (pp. 27–51). American Psychological Association. https://doi.org/10.1037/14376-002

Kazdin, A. E. (1977). Assessing the clinical or applied importance of behavior change through social validation. Behavior Modification, 1(4), 427–452. https://doi.org/10.1177/014544557714001

Kazdin, A. E. (1980). Obstacles in using randomization tests in single-case experimentation. Journal of Educational Statistics, 5(3), 253–260. https://doi.org/10.3102/10769986005003253

Kendall, P. C. (1981). Assessing generalization and the single-subject strategies. Behavior Modification, 5(3), 307–319. https://doi.org/10.1177/014544558153001

Kennedy, C. H. (2005). Single-case designs for educational research. Pearson.

Kirby, M. S., Spencer, T. D., & Ferron, J. M. (2021). How to be RAD: Repeated acquisition design features that enhance internal and external validity. Perspectives on Behavior Science, 44(2-3), 389–416. https://doi.org/10.1007/s40614-021-00301-2

Klein, L. A., Houlihan, D., Vincent, J. L., & Panahon, C. J. (2017). Best practices in utilizing the changing criterion design. Behavior Analysis in Practice, 10(1), 52–61. https://doi.org/10.1007/s40617-014-0036-x

Koehler, M. J., & Levin, J. R. (1998). Regulated randomization: A potentially sharper analytical tool for the multiple baseline design. Psychological Methods, 3(2), 206–217. https://doi.org/10.1037/1082-989X.3.2.206

Krasny-Pacini, A., & Evans, J. (2018). Single-case experimental designs to assess intervention effectiveness in rehabilitation: A practical guide. Annals of Physical and Rehabilitation Medicine, 61(3), 164–179. https://doi.org/10.1016/j.rehab.2017.12.002

Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. What Works Clearinghouse. https://ies.ed.gov/ncee/wwc/Document/229

Lane, J. D., & Gast, D. L. (2014). Visual analysis in single case experimental design studies: Brief review and guidelines. Neuropsychological Rehabilitation, 24(3-4), 445–463. https://doi.org/10.1080/09602011.2013.815636

Ledford, J., & Gast, D. L. (2018). Combination and other designs. In J. R. Ledford & D. L. Gast (Eds.), Single case research methodology: Applications in special education and behavioral sciences (3rd ed., pp. 335–364). Routledge. https://doi.org/10.4324/9781315150666-12

Ledford, J. R., Barton, E. E., Severini, K. E., & Zimmerman, K. N. (2019). A primer on single-case research designs: Contemporary use and analysis. American Journal on Intellectual and Developmental Disabilities, 124(1), 35–56. https://doi.org/10.1352/1944-7558-124.1.35

Ledford, J. R., & Wolery, M. (2013). Effects of plotting a second observer's data on A-B-A-B graphs when observer disagreement is present. Journal of Behavioral Education, 22, 312–324. https://doi.org/10.1007/s10864-013-9178-0

Leko, M. M. (2014). The value of qualitative methods in social validity research. Remedial and Special Education, 35(5), 275–286. https://doi.org/10.1177/0741932514524002

Lloyd, J. W., & Heubusch, J. D. (1996). Issues of social validation in research on serving individuals with emotional or behavioral disorders. Behavioral Disorders, 22(1), 8–14. https://doi.org/10.1177/019874299602200105

Maggin, D. M., Swaminathan, H., Rogers, H. J., O'Keeffe, B. V., Sugai, G., & Horner, R. H. (2011). A generalized least squares regression approach for computing effect sizes in single-case research: Application examples. Journal of School Psychology, 49(3), 301–321. https://doi.org/10.1016/j.jsp.2011.03.004

Manolov, R., Solanas, A., & Sierra, V. (2019). Extrapolating baseline trend in single-case data: Problems and tentative solutions. Behavior Research Methods, 51(6), 2847–2869. https://doi.org/10.3758/s13428-018-1165-x

Moeyaert, M., Bursali, S., & Ferron, J. M. (2021). SCD-MVA: A mobile application for conducting single-case experimental design research during the pandemic. Human Behavior and Emerging Technologies, 3(1), 75–96. https://doi.org/10.1002/hbe2.223

Morgan, D. L., & Morgan, R. K. (2009). Single-case research methods for the behavioral and health sciences. Sage Publications. https://doi.org/10.4135/9781483329697

Murphy, R. J., & Bryan, A. J. (1980). Multiple-baseline and multiple-probe designs: Practical alternatives for special education assessment and evaluation. The Journal of Special Education, 14(3), 325–335. https://doi.org/10.1177/002246698001400306

Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson, B., & Harris, K. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71(2), 137–148. https://doi.org/10.1177/001440290507100201

Onghena, P. (1992). Randomization tests for extensions and variations of ABAB single-case experimental designs: A rejoinder. Behavioral Assessment, 14, 153–171.

Onghena, P., & Edgington, E. S. (1994). Randomization tests for restricted alternating treatments designs. Behaviour Research and Therapy, 32(7), 783–786. https://doi.org/10.1016/0005-7967(94)90036-1

Onghena, P., Tanious, R., De, T. K., & Michiels, B. (2019). Randomization tests for changing criterion designs. Behaviour Research and Therapy, 117, 18–27. https://doi.org/10.1016/j.brat.2019.01.005

Parker, R. I., & Vannest, K. (2009). An improved effect size for single-case research: Nonoverlap of all pairs. Behavior Therapy, 40(4), 357–367. https://doi.org/10.1016/j.beth.2008.10.006

Parker, R. I., Vannest, K. J., & Davis, J. L. (2011). Effect size in single-case research: A review of nine nonoverlap techniques. Behavior Modification, 35(4), 303–322. https://doi.org/10.1177/0145445511399147

Peters, B., Bedrick, S., Dudy, S., Eddy, B., Higger, M., Kinsella, M., McLaughlin, D., Memmott, T., Oken, B., Quivira, F., Spaulding, S., Erdogmus, D., & Fried-Oken, M. (2020). SSVEP BCI and eye tracking use by individuals with late-stage ALS and visual impairments. Frontiers in Human Neuroscience, 14, 595890. https://doi.org/10.3389/fnhum.2020.595890

Poling, A., & Grossett, D. (1986). Basic research designs in applied behavior analysis. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis. Springer. https://doi.org/10.1007/978-1-4684-8786-2_2

Pustejovsky, J. E. (2018). Using response ratios for meta-analyzing single-case designs with behavioral outcomes. Journal of School Psychology, 68, 99–112. https://doi.org/10.1016/j.jsp.2018.02.003

Pustejovsky, J. E., Hedges, L. V., & Shadish, W. R. (2014). Design-comparable effect sizes in multiple baseline designs: A general modeling framework. Journal of Educational and Behavioral Statistics, 39(5), 368–393. https://doi.org/10.3102/1076998614547577

Rindskopf, D. (2014). Nonlinear Bayesian analysis for single case designs. Journal of School Psychology, 52(2), 179–189. https://doi.org/10.1016/j.jsp.2013.12.003

Rindskopf, D., & Ferron, J. (2014). Using multilevel models to analyze single-case design data. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case intervention research: Methodological and statistical advances (pp. 221–246). American Psychological Association. https://doi.org/10.1037/14376-008

Shadish, W. R., Hedges, L. V., & Pustejovsky, J. E. (2014). Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: A primer and applications. Journal of School Psychology, 52(2), 123–147. https://doi.org/10.1016/j.jsp.2013.11.005

Shadish, W. R., Kyse, E. N., & Rindskopf, D. M. (2013). Analyzing data from single-case designs using multilevel models: New applications and some agenda items for future research. Psychological Methods, 18(3), 385–405. https://doi.org/10.1037/a0032964

Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology. Authors Cooperative.

Sidman, M. (1997). Equivalence relations. Journal of the Experimental Analysis of Behavior, 68(2), 258–266. https://doi.org/10.1901/jeab.1997.68-258

Stokes, T. F., & Baer, D. M. (1977). An implicit technology of generalization. Journal of Applied Behavior Analysis, 10(2), 349–367. https://doi.org/10.1901/jaba.1977.10-349

Swaminathan, H., Rogers, H. J., & Horner, R. H. (2014). An effect size measure and Bayesian analysis of single-case designs. Journal of School Psychology, 52(2), 213–230. https://doi.org/10.1016/j.jsp.2013.12.002

Tanious, R., & Onghena, P. (2020). A systematic review of applied single-case research published between 2016 and 2018: Study designs, randomization, data aspects, and data analysis. Behavior Research Methods. Advance online publication. https://doi.org/10.3758/s13428-020-01502-4

Van den Noortgate, W., & Onghena, P. (2003). Combining single-case experimental data using hierarchical linear models. School Psychology Quarterly, 18(3), 325–346. https://doi.org/10.1521/scpq.18.3.325.22577

Wolery, M., Gast, D., & Ledford, J. R. (2018). Comparative designs. In J. R. Ledford & D. L. Gast (Eds.), Single case research methodology: Applications in special education and behavioral sciences (3rd ed., pp. 283–334). Routledge. https://doi.org/10.4324/9781315150666-11

Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11(2), 203–214. https://doi.org/10.1901/jaba.1978.11-203

Wolfe, K., & Slocum, T. A. (2015). A comparison of two approaches to training visual analysis of AB graphs. Journal of Applied Behavior Analysis, 48(2), 472–477. https://doi.org/10.1002/jaba.212