QUANTITATIVE RESEARCH DESIGNS INVOLVING SINGLE PARTICIPANTS OR UNITS

Chapter 33

SINGLE-CASE EXPERIMENTAL DESIGN
John M. Ferron, Megan Kirby, and Lodi Lipien
Single-case experimental designs (SCEDs) are experimental designs used to study the effects of interventions on individual cases. This focus on individual effects is a defining feature of SCEDs and can be motivated by conceptual, as well as practical, considerations. Conceptually, the belief in the uniqueness of individuals and the potential for variability in the response to an intervention motivates a desire to study intervention effects one participant at a time (Morgan & Morgan, 2009). As studies accrue, the individual effect estimates can be used to build a distribution from which we can see consistencies or inconsistencies in the effect across cases, best- or worst-case scenarios, and typical or average effects. Practical reasons that may motivate the use of SCEDs include (a) studying the effect of an intervention for those from a sparse population or with a rare diagnosis, making it difficult to recruit more than a few participants for a study (Odom et al., 2005); (b) developing an intervention where it may be efficient to pilot and refine the intervention through a series of single-case studies (Gallo et al., 2013); and (c) adding to the research base on the effects of interventions in practice through the engagement of clinicians (Morgan & Morgan, 2009).

Single-case experimental design has been referred to by a variety of names, including single-subject design, single-case design, and N-of-1 trials. These designs assume a variety of forms, including, but not limited to, reversal, alternating treatments, changing criterion, repeated acquisition, multiple-baseline, and multiple-probe designs. One may ask why we need so many design options to study individual effects or why traditional pre–post intervention measurement at the individual level is not sufficient. Although looking at the pre-to-post change for an individual may be common in some areas of clinical practice, it can be difficult to argue that the change was due to the intervention. It is possible that the behavior, or outcome of interest, fluctuates over time for the individual, and the change that is observed is simply part of this routine fluctuation. It is also possible that there was indeed a systematic change, but that the change resulted from something other than the intervention (e.g., a change in the home, school, or work environment that happened to coincide with the start of the intervention). Sensitive to the detection of change at the case level across time, SCEDs have been developed to help us separate intervention effects from other confounds. However, because of the variation within individuals and outcomes of interest, there is no single best way to do this. Rather, design variations have emerged as research has evolved.
https://doi.org/10.1037/0000319-033
APA Handbook of Research Methods in Psychology, Second Edition: Vol. 2. Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological, H. Cooper (Editor-in-Chief)
Copyright © 2023 by the American Psychological Association. All rights reserved.
In this chapter, we first consider the experimental tactics developed to strengthen the internal validity of studies focusing on individual effects. These tactics help us attribute observed changes to the intervention and include the use of baseline logic, response-guided experimentation, replication, and randomization. Next, we show how various experimental tactics can be combined to produce a variety of SCEDs in which appropriate selection of a design is dependent upon the study purpose, participant characteristics, and the outcome of interest. We next consider additional procedures used within SCEDs to document reliability of the outcome measurement, treatment fidelity, generalization, and social validity, and close with a summary of analysis options appropriate for SCEDs.

EXPERIMENTAL TACTICS

Experimental tactics are procedures used to strengthen the internal validity of SCEDs. These procedures include baseline logic, response-guided experimentation, within-case replication, between-case replication, and randomization.

Baseline Logic

Many SCEDs involve phases in which multiple observations, typically five or more, are collected under the same treatment condition. For example, the study may begin with a baseline phase in which five to 10 observations are collected in a business-as-usual condition prior to the introduction of the intervention. The baseline, if stable, establishes a problem level of behavior (i.e., the need for intervention) and allows researchers to assess treatment effects by comparing what is observed in the intervention phase to what would be expected if the baseline were projected (Engel & Schutt, 2013; Sidman, 1960). To make projections about what would have been observed in the absence of intervention, the researcher must make an assumption about some kind of temporal stability. The strictest stability assumption is that the outcome is temporally stable, implying that there is no variation in the outcome from one session to the next (e.g., all baseline observation values are zero). In such cases, the projections are relatively straightforward because the researcher can assume that in the absence of intervention, future observations of the outcome would have values identical to the constant baseline value.

More commonly, stability assumptions are less strict, and the projections are less exact. Researchers may expect some variability in the outcome from one observation to the next due to various factors that are outside of the control of the researcher, but they may also assume that there are no systematic trends (i.e., the expected level of the behavior does not change over time). In such situations, the baseline projections assume that in the absence of intervention, the mean and variation of future observations would be similar to baseline observations. Consider for example a researcher who is studying the effect of an intervention on the number of minutes a child with attention deficit disorder spends reading. Initial baseline and intervention phases are shown in Figure 33.1. The baseline observations show no trend, with observations ranging from two to nine minutes of reading during 30-minute daily reading sessions. Given this baseline, it seems reasonable to project that without intervention, the child would continue to read less than 10 minutes per session. Because the intervention observations are not in line with the baseline projection, they support the contention that something has changed the reading behavior of the child.

If there is not only variability in the baseline observations, but also noticeable trends, baseline projections become more challenging. Here the researcher may assume that the trend is temporally stable (i.e., the same trend would continue in the absence of intervention). If this assumption is reasonable, the researcher would use an extension of the baseline trend line to make a projection of what would happen in the absence of intervention. Manolov and colleagues (2019) provided an excellent discussion of the challenges in projecting trends and provided some relatively flexible options for making projections. If it is unreasonable to assume a continued trend, any projection becomes so suspect that baseline logic fails.
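The two kinds of baseline projection described above (a stable mean with routine fluctuation, and an extrapolated baseline trend) can be sketched in a few lines of Python. The data values here are invented for illustration and are not from the study discussed in the text:

```python
from statistics import mean, stdev

# Hypothetical data: minutes of reading across 10 baseline and 10 intervention sessions.
baseline = [4, 6, 3, 7, 5, 9, 2, 6, 5, 4]
intervention = [14, 17, 15, 19, 18, 21, 20, 22, 19, 23]

# With a stable, trendless baseline, the projection is the baseline mean,
# and +/- 2 SD gives a rough band for routine fluctuation.
m, s = mean(baseline), stdev(baseline)
upper = m + 2 * s

# If the baseline showed a trend, an ordinary least squares line fitted to the
# baseline could be extended into the intervention phase instead.
sessions = range(1, len(baseline) + 1)
x_bar = mean(sessions)
slope = sum((x - x_bar) * (y - m) for x, y in zip(sessions, baseline)) / \
        sum((x - x_bar) ** 2 for x in sessions)
projection = [m + slope * (t - x_bar) for t in range(11, 21)]

# Intervention observations well outside the band are inconsistent with the
# projection that nothing changed.
outside = sum(y > upper for y in intervention)
print(f"baseline mean = {m:.1f}, fluctuation band upper edge = {upper:.1f}")
print(f"{outside} of {len(intervention)} intervention observations exceed the band")
```

Here every intervention observation falls above the projected band, which is the pattern the chapter describes as supporting the contention that something changed.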
[FIGURE 33.1. Phases A (baseline) and B (intervention): minutes of reading (0–30) across Sessions 1–20.]
Often the tactic of response-guided experimentation is coupled with the tactic of baseline logic, but in some situations baseline logic can be used without response-guided experimentation. Specifically, in contexts where the participant and outcome of focus produce little to no baseline variability, there is no need to respond to the data. Consider a study of a phonological awareness intervention, where the participant inclusion criteria included a lack of first-sound identification skills, and thus the researcher anticipates that the participant will score zero on each of the baseline measures of first-sound identification. The researcher may fix the baseline length to three or four observations based on practical considerations, or they may randomly select whether it will be three or four observations. Either way, the baseline length could be established a priori, as opposed to in a response-guided fashion, and baseline logic could be used because the baseline would be stable.

Replication

Baseline logic and response-guided experimentation are useful approaches in the design of SCEDs, but they are not sufficient for making causal inferences. When the observations in a treatment phase differ from the baseline projection, multiple explanations for the change are possible. It may have been due to the intervention, but it could also have been due to something other than the intervention, which just happened to coincide with the intervention. Consider the results shown in Figure 33.1. We see a shift in the number of minutes the child reads, but it is difficult to know if the intervention caused the change in behavior. The child could have increased their reading because of some nonintervention change in the teacher behavior, some change among their peers, or for a variety of other reasons. Replication is a tactic used to make it more difficult to attribute changes to factors other than the intervention.

One way to replicate is to do so within the case. If the effect of the intervention is believed to be limited to when the intervention is active (e.g., while a support dog is present, or when the child is taking their prescribed medication), the intervention could be removed with the expectation that the behavior would return to baseline levels, and then the intervention could be reintroduced. Consider again a researcher studying the effect of an intervention on the amount of time a child spends reading. Suppose that the study had started as shown in Figure 33.1, and then the researcher added a second baseline phase followed by a second treatment phase, as shown in Figure 33.3. Because of the replication of the effect within the case, it would be difficult to attribute the change in behavior to some other factor. Put simply, it does not seem plausible that the other factor would happen to occur, be removed, and then occur again, in a way that
[FIGURE 33.3. A-B-A-B phases: number of desirable responses (0–30) across Sessions 1–39.]
coincided with the changes between baseline and intervention phases.

In some cases, a behavior is not reversible, such as when the intervention targets the learning of a particular skill, and thus removal of the intervention is not expected to lead to a return to baseline levels. In such studies, within-case replication is not possible. However, replication could be accomplished by attempting to duplicate the effect across different individuals, behaviors, or settings. When replicating across cases (i.e., participants, behaviors, or settings), the start of the intervention is typically introduced at different times, as illustrated in Figure 33.4. When changes in behavior are staggered over time and coincide with the introduction of intervention, there is stronger evidence of a causal relation. However, if changes in behavior occur simultaneously for all cases, the changes are more likely due to some nonintervention effect than the intervention itself. Thus, with replication at different times, researchers are able to disentangle intervention effects from history effects (i.e., external events that impact the outcome, such as a change in school personnel or policies).

[FIGURE 33.4. Illustration of across-case replication: three stacked panels plotting number of desirable responses across Sessions 1–15, with intervention introduced at a different time in each panel.]

Randomization

Another experimental tactic that may be used with SCEDs is randomization. Consider a comparative study of the effect of two treatments on a reversible behavior. The researchers may design the study so there is rapid alternation between the treatments. In this case, the phase structure (e.g., five or more successive observations in the same intervention condition) that was shown in our previous examples is not present, and thus the tactic of baseline logic is unavailable. However, the researchers still need to argue that the difference between the observations is due to the difference in treatments and not some other factor. In this context, SCED researchers will often conceptualize their design as having successive pairs of observations, and then randomly assign one observation from each pair to each condition (i.e., one of the first two observations to Treatment A and the other to Treatment B, one of the second two observations to Treatment A and the other to Treatment B, and so forth). If the researcher finds that the behavior is consistently better under Treatment A than Treatment B, it is difficult to attribute this difference to some other factor. This type of randomization facilitates analyses (e.g., randomization tests; Edgington & Onghena, 2007) that control the probability of incorrectly inferring that one treatment was more effective than the other (i.e., control over Type I errors), as well as facilitating unbiased estimates of the treatment effect. A review of SCEDs suggests that randomization is commonly used in designs that rapidly alternate between conditions (Tanious & Onghena, 2020).
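The pair-randomization scheme just described lends itself to a small randomization test. The following Python sketch uses invented data for eight session pairs; it illustrates the general logic of exhaustively re-randomizing within pairs, not any particular cited software:

```python
from itertools import product
from statistics import mean

# Hypothetical alternating-treatments data: eight pairs of sessions, with one
# session of each pair randomly assigned to Treatment A and the other to B.
a = [12, 14, 13, 15, 14, 16, 13, 15]  # outcomes observed under Treatment A
b = [8, 9, 7, 10, 9, 8, 10, 9]        # outcomes observed under Treatment B
observed = mean(a) - mean(b)

# Randomization test logic: under the null hypothesis of no treatment effect,
# each pair's two outcomes could have landed on A or B either way, so all
# 2**8 = 256 within-pair assignments are equally likely.
diffs = []
for flips in product([False, True], repeat=len(a)):
    xs = [y if f else x for x, y, f in zip(a, b, flips)]
    ys = [x if f else y for x, y, f in zip(a, b, flips)]
    diffs.append(mean(xs) - mean(ys))

# Two-sided p-value: share of assignments at least as extreme as what was seen.
p = sum(abs(d) >= abs(observed) - 1e-12 for d in diffs) / len(diffs)
print(f"observed A-B difference = {observed:.2f}, randomization p = {p:.4f}")
```

Because the a priori randomization itself generates the reference distribution, no modeling assumptions about the data (e.g., normality or independence of errors) are needed, which is the appeal noted later in the chapter.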
Multiple-Probe Designs

The multiple-probe design is a variation of the multiple-baseline design, which requires fewer observations. It is often preferred when (a) long baseline phases present an ethical problem, (b) the target behavior is unlikely to change in the absence of the intervention, or (c) the target behavior is readily influenced by repeated testing (Horner & Baer, 1978; Morgan & Morgan, 2009). In this design, probes are used to determine the

[FIGURE 33.5. Illustration of a multiple-probe design (Sessions 1–15).]
[Figure: correct responses (0–100) across Sessions 1–43.]
[Figure: baseline, Treatment A, and Treatment B phases across Sessions 1–24.]
[Figure: number of words correct (out of 12) for control versus targeted stimuli, at pre- and post-probes, across Weeks 1–10.]
need for implementation supports (Collier-Meek et al., 2018). Adherence to treatment fidelity enhances the internal validity of the study and reduces the potential for Type II errors (Krasny-Pacini & Evans, 2018).

Generalization

Planning for maintenance and generalization phases can provide information about the extent to which behavior change persists in other contexts and over time. In SCED, generalization refers to instances when trained skills or behaviors resulting from experimental manipulation of the independent variable transfer beyond experimental conditions to more natural contexts (Kendall, 1981; Stokes & Baer, 1977). In SCED research, researchers can plan to evaluate generalization effects across three categories: response generalization, stimulus generalization, and/or maintenance (Catania, 1992; Kendall, 1981). Response generalization occurs when participants encounter a discriminative stimulus that evokes an untrained behavior of similar topography or function to the trained response. For example, say a researcher was interested in reducing a child's toy-grabbing behavior by teaching the child to ask for a peer's permission to share. During the study, the researcher taught the child to say, "May I have . . .". However, presented with the same antecedent condition later in the day, the child made a spontaneous request using the untrained phrase, "Can I please have . . ." Conversely, researchers can document treatment effects on stimulus generalization when participants engage in a target behavior in response to an untrained stimulus or novel situation (Sidman, 1997). For example, Gunby et al. (2010) taught abduction prevention skills to three children with autism using behavioral skills training at a day care facility. In a stimulus generalization probe, one participant demonstrated the learned skills in the community setting without explicit instruction in this setting. In addition, all participants demonstrated maintenance of such skills after 1 month. In other words, the children demonstrated generalization of treatment effects over time in the absence of the intervention. There are numerous ways for researchers to contribute support for external validity, such as conducting follow-up observations poststudy, planning for probes in novel contexts, and documenting instances of response generalization. Regardless of the specific measures taken, researchers should consider ways to integrate these strategies into their SCED.

Social Validity

One of the seven dimensions of applied behavior analysis is the study of socially significant behavior: "changes in behavior that are clinically significant or actually make a difference in the client's life" (Kazdin, 1977, p. 427). The acceptability of intervention components, methods of measurement, and experimental outcomes can be as relevant to interventionists as a measure of effectiveness, answering "For whom does it work?" and "Will it continue in use when I'm gone?" Attention to social validity is important because interventions that are impractical or unacceptable are less likely to be adopted (Leko, 2014; Lloyd & Heubusch, 1996). Behavior scientists can use surveys and choice measures to gather information about the social significance of the research before and after a study (Fuqua & Schwade, 1986). Additionally, structured interviews with participants and stakeholders can supplement measures of treatment adherence and attrition (i.e., participant drop-out). Follow-up interviews with participants and primary stakeholders can provide information about whether the research methods and designs are aligned with the applied dimension of applied behavior analysis (Baer et al., 1968; Kazdin, 1977; Wolf, 1978).

DATA ANALYSIS

The principal analysis method for SCEDs is visual analysis of the graphed data (Barlow & Hersen, 1984; Gast & Spriggs, 2014; Kratochwill et al., 2010). During visual analyses, researchers engage in four steps: (a) documenting stable baseline patterns, (b) examining the data within each phase, (c) comparing adjacent and similar phases, and (d) determining whether there are at
least three demonstrations of the effect at different points in time (Kratochwill et al., 2010). In analyzing and comparing the data patterns within and across phases, researchers attend to six data features: (a) level, (b) trend, (c) variability, (d) immediacy of effect, (e) overlap, and (f) consistency of patterns across similar phases (Kratochwill et al., 2010). Visual analysis training methods have been developed (Wolfe & Slocum, 2015) and visual analyses continue to serve researchers using SCEDs well. However, they do have some limitations. Indexing the probability of falsely concluding an effect exists is difficult, which leads to questions about Type I error control (Fisch, 1998), and quantitative summaries of the size of the effect can be helpful for research synthesis and meta-analyses. Thus, a variety of statistical analyses have been developed to complement visual analyses.

For researchers who have incorporated randomization into their designs, it is possible to formally control the probability of falsely concluding there is an effect (i.e., Type I error) by using randomization tests if the randomization is done a priori (Edgington & Onghena, 2007) or masked visual analysis if the randomization is done during response-guided experimentation (Ferron & Levin, 2014). These randomization-based methods can be appealing for testing the effect because they do not require modeling assumptions, such as temporal stability, independence, or normality (Edgington, 1980). In addition, software to conduct randomization tests is readily available (Bulté & Onghena, 2009; Gafurov & Levin, 2020), as is an application to facilitate masked visual analysis (Moeyaert et al., 2021). However, these randomization-based methods are limited to testing the null hypothesis of no treatment effect.

Researchers who wish to provide a quantitative estimate of the size of the effect must turn to other options, including statistical modeling and standardized effect estimation. When the study contains a single case, extensions of regression that allow researchers to account for potential serial dependence (or autocorrelation in the repeated observations) are available (Maggin et al., 2011; Swaminathan et al., 2014). When the study contains multiple cases (e.g., multiple-baseline designs, multiple-probe designs), multilevel models are available that account for the nesting of the repeated observations within the cases (Rindskopf, 2014; Rindskopf & Ferron, 2014; Shadish et al., 2013; Van den Noortgate & Onghena, 2003). Parameter estimates from these regression or multilevel models can be used as raw score effect indices. For those who desire standardized effect estimates, options include effect sizes based on nonoverlap between adjacent phases (Parker & Vannest, 2009; Parker et al., 2011), mean differences standardized by within-case variability (Busk & Serlin, 1992), mean differences standardized by between-case variability (Pustejovsky et al., 2014; Shadish et al., 2014), response ratios (Pustejovsky, 2018), and progress toward a goal (Ferron et al., 2020).

SUMMARY

Single-case experimental designs (SCEDs) are a collection of research methods for the study of intervention effects on individuals. No single method is always optimal because of differences in study purposes (e.g., indexing the effectiveness of an intervention versus comparing the effectiveness of alternative interventions), target outcomes (e.g., reversible versus nonreversible behaviors), and practical constraints associated with the research context and individual under study. Rather than relying on a single method, SCED researchers combine different experimental tactics (e.g., baseline logic, response-guided experimentation, within-case replication, across-case replication, and randomization) to select a design type (e.g., reversal, multiple-baseline, multiple-probe, changing criterion, alternating treatments, or repeated acquisition) that aligns with their purpose, outcome, and research context. Regardless of which tactics and design type are employed, SCED researchers incorporate procedures to ensure the outcome is measured reliably, the intervention is implemented with fidelity, issues of generalizability and social validity are considered, and the primary outcome data are graphed as a function of time to facilitate visual analyses of the intervention effect.
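As a concrete, hypothetical illustration of three of the standardized effect options listed under Data Analysis (nonoverlap of all pairs, a mean difference standardized by within-case baseline variability, and a response ratio), the following Python sketch uses invented A (baseline) and B (intervention) phase data; the conventions here (e.g., standardizing by the baseline SD, counting ties as half) are illustrative assumptions, not the authors' prescribed procedure:

```python
from statistics import mean, stdev

# Hypothetical A (baseline) and B (intervention) phase data for one case.
phase_a = [5, 7, 4, 6, 8, 5]
phase_b = [12, 15, 13, 16, 14, 17]

# Nonoverlap of all pairs (in the spirit of Parker & Vannest, 2009): the share
# of all baseline-intervention pairs showing improvement, ties counted as half.
pairs = [(x, y) for x in phase_a for y in phase_b]
nap = sum(1.0 if y > x else 0.5 if y == x else 0.0 for x, y in pairs) / len(pairs)

# Mean difference standardized by within-case baseline variability
# (in the spirit of Busk & Serlin, 1992).
smd = (mean(phase_b) - mean(phase_a)) / stdev(phase_a)

# Response ratio (in the spirit of Pustejovsky, 2018): proportional change.
ratio = mean(phase_b) / mean(phase_a)

print(f"NAP = {nap:.2f}, within-case SMD = {smd:.2f}, response ratio = {ratio:.2f}")
```

With complete separation between phases, nonoverlap indices hit their ceiling, which is one reason researchers often report a standardized mean difference or response ratio alongside them.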
REFERENCES

education and behavioral sciences (2nd ed.). Routledge. https://doi.org/10.4324/9780203521892
Gast, D. L., & Spriggs, A. D. (2014). Visual analysis of graphic data. In D. L. Gast & J. R. Ledford (Eds.), Single case research methodology: Applications in special education and behavioral sciences (2nd ed.). Routledge. https://doi.org/10.4324/9780203521892-9
Gunby, K. V., Carr, J. E., & Leblanc, L. A. (2010). Teaching abduction-prevention skills to children with autism. Journal of Applied Behavior Analysis, 43(1), 107–112. https://doi.org/10.1901/jaba.2010.43-107
Horner, R. D., & Baer, D. M. (1978). Multiple-probe technique: A variation on the multiple baseline. Journal of Applied Behavior Analysis, 11(1), 189–196. https://doi.org/10.1901/jaba.1978.11-189
Horner, R. H., & Odom, S. L. (2014). Constructing single-case research designs: Logic and options. In T. R. Kratochwill & J. R. Levin (Eds.), School psychology series. Single-case intervention research: Methodological and statistical advances (pp. 27–51). American Psychological Association. https://doi.org/10.1037/14376-002
Kazdin, A. E. (1977). Assessing the clinical or applied importance of behavior change through social validation. Behavior Modification, 1(4), 427–452. https://doi.org/10.1177/014544557714001
Kazdin, A. E. (1980). Obstacles in using randomization tests in single-case experimentation. Journal of Educational Statistics, 5(3), 253–260. https://doi.org/10.3102/10769986005003253
Kendall, P. C. (1981). Assessing generalization and the single-subject strategies. Behavior Modification, 5(3), 307–319. https://doi.org/10.1177/014544558153001
Kennedy, C. H. (2005). Single-case designs for educational research. Pearson.
Kirby, M. S., Spencer, T. D., & Ferron, J. M. (2021). How to be RAD: Repeated acquisition design features that enhance internal and external validity. Perspectives on Behavior Science, 44(2-3), 389–416. https://doi.org/10.1007/s40614-021-00301-2
Klein, L. A., Houlihan, D., Vincent, J. L., & Panahon, C. J. (2017). Best practices in utilizing the changing criterion design. Behavior Analysis in Practice, 10(1), 52–61. https://doi.org/10.1007/s40617-014-0036-x
Koehler, M. J., & Levin, J. R. (1998). Regulated randomization: A potentially sharper analytical tool for the multiple baseline design. Psychological Methods, 3(2), 206–217. https://doi.org/10.1037/1082-989X.3.2.206
Krasny-Pacini, A., & Evans, J. (2018). Single-case experimental designs to assess intervention effectiveness in rehabilitation: A practical guide. Annals of Physical and Rehabilitation Medicine, 61(3), 164–179. https://doi.org/10.1016/j.rehab.2017.12.002
Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from What Works Clearinghouse website: https://ies.ed.gov/ncee/wwc/Document/229
Lane, J. D., & Gast, D. L. (2014). Visual analysis in single case experimental design studies: Brief review and guidelines. Neuropsychological Rehabilitation, 24(3-4), 445–463. https://doi.org/10.1080/09602011.2013.815636
Ledford, J., & Gast, D. L. (2018). Combination and other designs. In J. R. Ledford & D. L. Gast (Eds.), Single case research methodology: Applications in special education and behavioral sciences (3rd ed., pp. 335–364). Routledge. https://doi.org/10.4324/9781315150666-12
Ledford, J. R., Barton, E. E., Severini, K. E., & Zimmerman, K. N. (2019). A primer on single-case research designs: Contemporary use and analysis. American Journal on Intellectual and Developmental Disabilities, 124(1), 35–56. https://doi.org/10.1352/1944-7558-124.1.35
Ledford, J. R., & Wolery, M. (2013). Effects of plotting a second observer's data on A-B-A-B graphs when observer disagreement is present. Journal of Behavioral Education, 22, 312–324. https://doi.org/10.1007/s10864-013-9178-0
Leko, M. M. (2014). The value of qualitative methods in social validity research. Remedial and Special Education, 35(5), 275–286. https://doi.org/10.1177/0741932514524002
Lloyd, J. W., & Heubusch, J. D. (1996). Issues of social validation in research on serving individuals with emotional or behavioral disorders. Behavioral Disorders, 22(1), 8–14. https://doi.org/10.1177/019874299602200105
Maggin, D. M., Swaminathan, H., Rogers, H. J., O’Keeffe, B. V., Sugai, G., & Horner, R. H. (2011). A generalized least squares regression approach for computing effect sizes in single-case research: Application examples. Journal of School Psychology, 49(3), 301–321. https://doi.org/10.1016/j.jsp.2011.03.004
Manolov, R., Solanas, A., & Sierra, V. (2019). Extrapolating baseline trend in single-case data: Problems and tentative solutions. Behavior Research Methods, 51(6), 2847–2869. https://doi.org/10.3758/s13428-018-1165-x
Moeyaert, M., Bursali, S., & Ferron, J. M. (2021). SCD-MVA: A mobile application for conducting single-case experimental design research during the pandemic. Human Behavior and Emerging Technologies, 3(1), 75–96. https://doi.org/10.1002/hbe2.223
Morgan, D. L., & Morgan, R. K. (2009). Single-case research methods for the behavioral and health sciences. Sage Publications. https://doi.org/10.4135/9781483329697
Murphy, R. J., & Bryan, A. J. (1980). Multiple-baseline and multiple-probe designs: Practical alternatives for special education assessment and evaluation. The Journal of Special Education, 14(3), 325–335. https://doi.org/10.1177/002246698001400306
Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson, B., & Harris, K. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71(2), 137–148. https://doi.org/10.1177/001440290507100201
Onghena, P. (1992). Randomization tests for extensions and variations of ABAB single-case experimental designs: A rejoinder. Behavioral Assessment, 14, 153–171.
Onghena, P., & Edgington, E. S. (1994). Randomization tests for restricted alternating treatments designs. Behaviour Research and Therapy, 32(7), 783–786. https://doi.org/10.1016/0005-7967(94)90036-1
Onghena, P., Tanious, R., De, T. K., & Michiels, B. (2019). Randomization tests for changing criterion designs. Behaviour Research and Therapy, 117, 18–27. https://doi.org/10.1016/j.brat.2019.01.005
Parker, R. I., & Vannest, K. (2009). An improved effect size for single-case research: Nonoverlap of all pairs. Behavior Therapy, 40(4), 357–367. https://doi.org/10.1016/j.beth.2008.10.006
Parker, R. I., Vannest, K. J., & Davis, J. L. (2011). Effect size in single-case research: A review of nine nonoverlap techniques. Behavior Modification, 35(4), 303–322. https://doi.org/10.1177/0145445511399147
Peters, B., Bedrick, S., Dudy, S., Eddy, B., Higger, M., Kinsella, M., McLaughlin, D., Memmott, T., Oken, B., Quivira, F., Spaulding, S., Erdogmus, D., & Fried-Oken, M. (2020). SSVEP BCI and eye tracking use by individuals with late-stage ALS and visual impairments. Frontiers in Human Neuroscience, 14, 595890. https://doi.org/10.3389/fnhum.2020.595890
Poling, A., & Grossett, D. (1986). Basic research designs in applied behavior analysis. In A. Poling & R. W. Fuqua (Eds.), Research methods in applied behavior analysis. Springer. https://doi.org/10.1007/978-1-4684-8786-2_2
Pustejovsky, J. E. (2018). Using response ratios for meta-analyzing single-case designs with behavioral outcomes. Journal of School Psychology, 68, 99–112. https://doi.org/10.1016/j.jsp.2018.02.003
Pustejovsky, J. E., Hedges, L. V., & Shadish, W. R. (2014). Design-comparable effect sizes in multiple baseline designs: A general modeling framework. Journal of Educational and Behavioral Statistics, 39(5), 368–393. https://doi.org/10.3102/1076998614547577
Rindskopf, D. (2014). Nonlinear Bayesian analysis for single case designs. Journal of School Psychology, 52(2), 179–189. https://doi.org/10.1016/j.jsp.2013.12.003
Rindskopf, D., & Ferron, J. (2014). Using multilevel models to analyze single-case design data. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case intervention research: Statistical and methodological advances (pp. 221–246). American Psychological Association. https://doi.org/10.1037/14376-008
Shadish, W. R., Hedges, L. V., & Pustejovsky, J. E. (2014). Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: A primer and applications. Journal of School Psychology, 52(2), 123–147. https://doi.org/10.1016/j.jsp.2013.11.005
Shadish, W. R., Kyse, E. N., & Rindskopf, D. M. (2013). Analyzing data from single-case designs using multilevel models: New applications and some agenda items for future research. Psychological Methods, 18(3), 385–405. https://doi.org/10.1037/a0032964
Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology. Authors Cooperative, Inc.
Sidman, M. (1997). Equivalence relations. Journal of the Experimental Analysis of Behavior, 68(2), 258–266. https://doi.org/10.1901/jeab.1997.68-258
Stokes, T. F., & Baer, D. M. (1977). An implicit technology of generalization. Journal of Applied Behavior Analysis, 10(2), 349–367. https://doi.org/10.1901/jaba.1977.10-349
Swaminathan, H., Rogers, H. J., & Horner, R. H. (2014). An effect size measure and Bayesian analysis of single-case designs. Journal of School Psychology, 52(2), 213–230. https://doi.org/10.1016/j.jsp.2013.12.002
Tanious, R., & Onghena, P. (2020). A systematic review of applied single-case research published between 2016 and 2018: Study designs, randomization, data aspects, and data analysis. Advance
online publication, Behavior Research Methods. https://doi.org/10.3758/s13428-020-01502-4
Van den Noortgate, W., & Onghena, P. (2003). Combining single-case experimental data using hierarchical linear models. School Psychology Quarterly, 18(3), 325–346. https://doi.org/10.1521/scpq.18.3.325.22577
Wolery, M., Gast, D., & Ledford, J. R. (2018). Comparative designs. In J. R. Ledford & D. L. Gast (Eds.), Single case research methodology: Applications in special education and behavioral sciences (3rd ed., pp. 283–334). Routledge. https://doi.org/10.4324/9781315150666-11
Wolf, M. M. (1978). Social validity: The case for subjective measurement or how applied behavior analysis is finding its heart. Journal of Applied Behavior Analysis, 11(2), 203–214. https://doi.org/10.1901/jaba.1978.11-203
Wolfe, K., & Slocum, T. A. (2015). A comparison of two approaches to training visual analysis of AB graphs. Journal of Applied Behavior Analysis, 48(2), 472–477. https://doi.org/10.1002/jaba.212