Literature Reviews

Technology-Aided Instruction and Intervention for Students With ASD: A Meta-Analysis Using Novel Methods of Estimating Effect Sizes for Single-Case Research

Erin E. Barton, PhD¹, James E. Pustejovsky, PhD², Daniel M. Maggin, PhD³, and Brian Reichow, PhD⁴

Remedial and Special Education, 1–16
© Hammill Institute on Disabilities 2017
Reprints and permissions: sagepub.com/journalsPermissions.nav
DOI: 10.1177/0741932517729508
https://doi.org/10.1177/0741932517729508
rase.sagepub.com

Abstract
The adoption of methods and strategies validated through rigorous, experimentally oriented research is a core professional
value of special education. We conducted a systematic review and meta-analysis examining the experimental literature on
Technology-Aided Instruction and Intervention (TAII) using research identified as part of the National Autism Professional
Development Project. We applied novel between-case effect size methods to the TAII single-case research base. In addition,
we used meta-analytic methodologies to examine the methodological quality of the research, calculate average effect sizes
to quantify the level of evidence for TAII, and compare effect sizes across single-case and group-based experimental
research. Results identified one category of TAII—computer-assisted instruction—as an evidence-based practice across
both single-case and group studies. The remaining two categories of TAII—augmentative and alternative communication
and virtual reality—were not identified as evidence-based using What Works Clearinghouse summary ratings.

Keywords
meta-analysis, single-case design, effect size, technology-aided instruction and intervention

¹Vanderbilt University, Nashville, TN, USA
²The University of Texas at Austin, USA
³The University of Illinois at Chicago, USA
⁴University of Florida, Gainesville, USA

Corresponding Author:
Erin E. Barton, Vanderbilt University, 230 Appleton Place, Peabody 228, Nashville, TN 37203, USA.
Email: erin.e.barton@vanderbilt.edu

The development, identification, and dissemination of evidence-based interventions have been dominant themes in special education for more than a decade (Odom et al., 2005). Following the logic of evidence-based practice, experimental research methods are used to determine which strategies and practices are most likely to produce desired outcomes (Council for Exceptional Children [CEC], 2014; Horner et al., 2005; Odom et al., 2005). A foundational tenet of evidence-based practice is the use of transparent, replicable, and objective methods for reviewing studies to distinguish those practices with sufficient empirical support to warrant their use in schools and classrooms. These principles are applicable regardless of the particular type of research being reviewed and should be extended to all aspects of the review process (What Works Clearinghouse [WWC], 2014).

Despite the need for transparent and replicable procedures, there remain challenges with available methods for assessing, analyzing, and synthesizing results from single-case research as part of a systematic review and meta-analysis (Shadish, Hedges, Horner, & Odom, 2015). The aim of this article is to demonstrate the use of a combination of novel methods for conducting systematic reviews and meta-analyses in a synthesis of the research base on Technology-Aided Intervention and Instruction (TAII) for students with autism spectrum disorders (ASD).

Technology-Aided Intervention and Instruction

TAII refers to a range of interventions in which technology is used as the primary method to deliver instruction (Wong et al., 2015). Technology is defined as any electronic apparatus or virtual network that is used to target a particular academic, social, or behavioral skill of the student (Odom et al., 2015). TAII includes computer-assisted instructional (CAI) programs and speech generating devices that are more specifically focused on communication outcomes (Lofland, 2016; Odom et al., 2015). The communicative, social, and behavioral challenges that confront students with ASD require the development and implementation of empirically supported practices to target their idiosyncratic symptomatology and functional learning outcomes (Reichow, Doehring, Cicchetti, & Volkmar, 2011). TAII has been used to teach students with ASD facial and emotional recognition, safety skills, and reading and vocabulary outcomes (Jones, Wilcox, & Simons, 2016). A variety of interventions and outcomes can be addressed through TAII, with tools that range in sophistication and modality from complex virtual reality (VR) programs to basic touch screen and icon communication systems. Furthermore, TAII includes an assortment of devices and delivery formats such as laptop computers, smart phones and tablets, and virtual networks and applications (Wong et al., 2014).

In an effort to provide school personnel and related service providers with guidance on which practices were most effective for a range of academic, behavioral, and social outcomes for students with ASD, Wong and colleagues (2015) conducted a large-scale systematic review of the ASD intervention literature. TAII was among the intervention categories identified as evidence-based by Wong and colleagues (2015). The broad scope of the project did not allow for analysis of specific details regarding procedures, risk of bias, or outcomes, or the relative strengths of the interventions, which might have important practical implications. Other extant systematic reviews and meta-analyses examining TAII have supported the use of these interventions for a range of academic, communication, and social outcomes for students with ASD (Grynszpan, Weiss, Perez-Diaz, & Gal, 2014; Odom et al., 2015; Pennington, 2010; Ramdoss et al., 2011; Weng, Maeda, & Bouck, 2014). Collectively, these reviews indicated that TAII has the potential to be an effective class of interventions, though additional research is needed to refine technologies that have the greatest potential for specific outcomes. A common shortcoming of these reviews is that they treated single-case and group design research using separate, incomparable methodologies. For instance, Ramdoss and colleagues (2011) used standardized mean difference effect sizes to describe effects from group designs, but a nonoverlap index to describe effects from single-case designs. Grynszpan and colleagues (2014) excluded single-case studies from their review. Knight, McKissick, and Saunders (2013) compared contemporary quality indicators across group and single-case studies; however, they did not conduct meta-analyses or examine the relative strengths of the interventions. Thus, there remains a need to develop and test risk of bias assessments and effect size indices that can be compared across both types of research designs.

Risk of Bias Assessment

Part of conducting a systematic review involves the critical appraisal of primary research studies. The Risk of Bias (RoB) assessment tool of the Cochrane Collaboration (Higgins, Altman, & Sterne, 2008) is often considered one of the best tools for the appraisal of randomized controlled trials (Sterne, 2009). Group RoB tools assess selection bias (systematic differences between baseline characteristics of the groups), performance bias (systematic differences between groups in the care that is provided or in exposure to factors other than the interventions of interest), attrition bias (systematic differences between groups in withdrawals from a study), detection bias (systematic differences between groups in how outcomes are determined), and reporting bias (systematic differences between reported and unreported findings). The Cochrane RoB tool has been adapted for use in reviews of nonrandomized studies (B. C. Reeves et al., 2013), and recently for use with single-case designs (SCDRoB; Reichow, Barton, & Maggin, 2017). The SCDRoB was developed based on current conceptualizations of the types of bias that could affect empirical single-case research (Sterne, 2009). Although Tate and colleagues (2013) also developed a risk of bias tool for single-case research, their criteria and bias domains do not align with the Cochrane RoB tool, thus precluding comparisons across group and single-case research methodologies. Using RoB tools with consistent domains allows for a comprehensive comparison of the internal validity of a body of literature including both group and single-case research.

Between-Case Effect Sizes

Hedges, Pustejovsky, and Shadish (2012, 2013) recently introduced a distinct conceptualization of effect sizes for single-case research by constructing an effect size index that is directly comparable to the standardized mean difference effect size commonly used with between-groups research designs. This novel effect size index—the between-case standardized mean difference (BC-SMD)—differs from other effect sizes commonly used with single-case designs because it characterizes average treatment effects at the level of the study, rather than at the level of the individual case. It is defined based on a hierarchical model for the data from a single-case design, which captures both within-case and between-case variation in the dependent variable. Using such a hierarchical model provides a way to estimate the same standardized mean difference parameter that would be estimated by a hypothetical between-groups experimental design on the same population (Pustejovsky, Hedges, & Shadish, 2014). Effect size estimates from each type of design therefore share a common scale.
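To make this definition concrete, the hierarchical model can be sketched in simplified form. The following is an illustrative version of the basic multiple baseline formulation described by Pustejovsky, Hedges, and Shadish (2014), assuming a constant treatment effect and no time trends; it is a sketch of the general idea rather than a reproduction of the exact specification used in this review.

$$
Y_{ij} = \mu + \beta \, T_{ij} + \eta_i + \epsilon_{ij}, \qquad
\eta_i \sim N(0, \tau^2), \qquad
\epsilon_{ij} = \phi \, \epsilon_{i,j-1} + \nu_{ij},
$$

where $Y_{ij}$ is the outcome for case $i$ at measurement occasion $j$, $T_{ij}$ indicates whether the observation falls in the treatment phase, $\tau^2$ is the between-case variance, and the within-case errors $\epsilon_{ij}$ follow a first-order autoregressive process with total variance $\sigma^2$. The between-case standardized mean difference is then

$$
\delta = \frac{\beta}{\sqrt{\tau^2 + \sigma^2}},
$$

which scales the treatment effect by the same combination of between-case and within-case variation that would appear in a between-groups study drawn from the same population.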

Shadish et al. (2015) suggested that the advantages of BC-SMDs are that (a) they allow researchers to compare the results of single-case research with the results of group designs and (b) they are in a metric that is familiar to educational researchers who work primarily with between-subjects designs. Furthermore, because BC-SMDs are on the same scale as standardized mean differences from between-groups designs, results from both types of designs can be compared or even combined in a single meta-analysis, at least in principle. In practice, of course, single-case studies and between-group studies might differ in dimensions other than just the research design, such as participant inclusion criteria, class of outcome measures, or length of follow-up. Thus, there remains a need to consider the extent to which evidence from single-case studies aligns with, or diverges from, evidence from between-groups studies on a common topic.

In addition to questions of conceptual comparability, methods for estimating BC-SMDs are currently available only for certain types of single-case designs. Specifically, because the BC-SMD involves between-case variation, it can only be estimated for studies that include multiple individual cases, such as across-participant multiple baseline designs, across-participant multiple probe designs, or withdrawal (A-B-A-B) designs that are replicated across participants. Therefore, it is important to examine the extent to which this technical limitation has consequences for application of the BC-SMD in practice. The present review considers the implications of these limitations by calculating BC-SMD estimates for single-case studies—and comparing the results with standardized mean difference estimates from group design studies—on the effects of TAII for students with ASD.

Purpose and Research Questions

The purpose of this article is to demonstrate the use of a combination of novel methods for conducting systematic reviews and meta-analyses in a synthesis of the research base on TAII for students with ASD. In doing so, we expanded and updated the review conducted by Wong and colleagues (2015) regarding the use of TAII for students with ASD. Specifically, we evaluated studies for risk of bias using a framework that can be applied to both types of designs, we quantified the magnitude of effects using BC-SMDs, and we synthesized the effect sizes using formal meta-analytic models. In demonstrating the application of these recently developed methods to the research base on TAII, we hoped to stimulate further discussion about their strengths and limitations as tools for synthesizing single-case research and thereby informing evidence-based practice in special education. This included examining the quality of the research, calculating effect sizes to quantify the level of evidence for TAII, and comparing single-case and group-based experimental research effect sizes. Given that TAII can be implemented in a variety of ways and can focus on a range of outcomes, we disaggregated the studies included in the recommendations and closely evaluated the contributing studies to provide precise guidance and recommendations for research. Two research questions guided our work:

Research Question 1: To what extent is TAII more effective than business as usual or other interventions in improving targeted communication, academic, engagement, social, emotion recognition, and adaptive outcomes for students with ASD?
Research Question 2: Do the group and single-case experimental research bases provide sufficient empirical support for particular independent-dependent variable combinations (i.e., augmentative and alternative communication [AAC], CAI, VR) to be classified as evidence-based practices for students with ASD?

Method

The application and evaluation of novel methods for the calculation of single-case design data included an update of the review of TAII by Wong and colleagues (2015). We used methods consistent with Cochrane recommendations for updating systematic reviews and meta-analyses and adhered to the guidelines for systematic reviews and meta-analyses proposed by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA; Moher, Liberati, Tetzlaff, Altman, & PRISMA Group, 2009).

Eligibility Criteria and Study Identification

To be eligible for inclusion in this review, studies were required to meet several criteria. First, the study evaluated the effects of TAII as compared with a control condition or non-TAII-based instruction (as defined by Odom et al., 2015). Second, the majority of participants in the study were reported to have ASD. Third, the study used an eligible between-group design or single-case design. Group designs (i.e., those involving comparisons across participants) were eligible if they used a nonequivalent groups quasi-experimental design or a randomization process for allocating participants to conditions. Single-case studies were eligible if they were experimental (i.e., at least three opportunities to demonstrate behavior change at three different points in time). Studies had to meet further eligibility criteria to be included in the effect size calculations and meta-analysis, as explained in more detail in the next sections.

We started with the original pool of studies identified by Wong and colleagues (2015) and screened all for eligibility. Studies included in their review were identified through electronic searches of education and social science research databases and appraisal of relevant literature reviews, supplemented by ancestral searches of identified studies. We also screened for eligibility the studies identified in previous reviews by Ganz and colleagues (2012), Irish (2013), Kagohara and colleagues (2013), Knight et al. (2013), Ploog, Scharf, Nelson, and Brooks (2013), Odom and colleagues (2015), and Ramdoss and colleagues (2012). In addition, the first author conducted an electronic search of the following databases during February of 2015 and limited the date of publication from 2012 to 2015: Educational Resources Information Center (ERIC), ProQuest, PsycINFO, and PubMed. Key terms used in the electronic search included ([autism or Asperger or pervasive developmental disorder] AND [intervention or treatment or practice or strategy or therapy or program or procedure or instruction] AND [technology or computer or iPod or iPad or computer assisted instruction or computer aided instruction or device or personal digital assistant or app or virtual or electronic]). The lower limit was set at 2012 because Wong and colleagues (2015) searched through 2011, and the search was intended to supplement their findings. A PRISMA flow diagram (Moher et al., 2009) of study inclusion, incorporating recommendations for the display of updated reviews (Stovold, Beecher, Foxlee, & Noel-Storr, 2014), is provided in Online Appendix A.

Study Coding Procedures

Four concurrent coding procedures were used. First, two special education graduate students extracted descriptive information from all studies related to the following variables: participant and setting characteristics, target outcomes and measures, intervention procedures and dosage, technology characteristics and platform (using categories described by Odom et al., 2015), study design, and study-reported results. Second, the first, third, and fourth authors assessed the methodological rigor of all studies using two frameworks: (a) WWC study design standards as outlined in the WWC Procedures and Standards Handbook 3.0 (2014) and (b) an adaptation of Cochrane's risk of bias tool (Higgins et al., 2008) for group research design studies and an adaptation of the tool for single-case studies (Reichow et al., 2017). Third, a graduate student extracted all data from the single-case studies. Fourth, the second and third authors extracted summary statistics and effect size information from the group design studies. The first author contacted the authors of four studies for which effect sizes could not be computed for all variables based on published information. Two of the four authors responded, but none could provide the needed data.

Descriptive characteristics. Descriptive characteristics were coded for all identified group design and single-case studies using a systematic coding protocol. Given the heterogeneous nature of TAII, this descriptive information was important for understanding conditions under which TAII was effective. All entries were independently coded by the special education graduate students, and discrepancies were discussed with the first author until consensus was reached. There were discrepancies on fewer than 5% of the entries across coded variables.

Participants and setting characteristics. The graduate students extracted information regarding the age, ethnicities, disability status, confirmation of ASD diagnosis, descriptions of participants' functional repertoires, and the measurement of technology skill levels. This information was extracted and summarized to provide further understanding of the populations for whom TAII has been investigated. The country of origin of the study and the physical location of the intervention sessions were also extracted.

Target outcomes and measures. Three items related specifically to the dependent variables. These items were (a) primary skills targeted, (b) categorical domain of the primary target skills, and (c) the functional relevance of the target skills for the participants. The categories for the primary skills were mutually exclusive and exhaustive and included communication, academic, engagement/task completion, social, emotion recognition, and adaptive. The functional relevance of the skill was rated as yes if the skills were needed for participants in most or all situations to facilitate independence, used across most daily activities and routines, or occasioned learning more sophisticated or complex functional skills; if two of the previous three clauses were not true of the primary target skill, the relevance was rated as no. Several items related specifically to the measurement of the dependent variable. First, the measurement procedures for the primary dependent variable were coded as direct observation or standardized assessments (or both). Studies using direct observation were further coded as using event recording, percentage correct (i.e., accuracy), or an interval system (e.g., partial interval recording). Second, measurement of generalization was coded as occurring across people, settings, materials, activities, or measures. Third, the length of time between the posttreatment data collection and maintenance data collection for the primary dependent variable was coded.

Intervention procedures and dosage. Several items related specifically to the intervention (i.e., independent variable). The intervention agent was coded as researcher, classroom staff, external therapist, peer, or parent. Regarding intervention dosage, the total number of instructional sessions and the number of trials per session were coded. Additional dosage information was extracted when provided (e.g., duration of sessions, rate of sessions per week or month). The name of the instructional procedure was extracted along with the authors' brief description. The exact TAII materials used were listed. The technology characteristics and platforms were coded using categories identified from previous reviews (Odom et al., 2015).

These were speech generating device, personal computer, Internet/web, mobile device, shared active surface, VR, sensory/wearable technology, robotics, and natural user interface; multiple categories could be used in each study. The interventions were further coded into three mutually exclusive categories: (a) AAC, (b) CAI, and (c) VR.

Study design and methodological features. The exact designs used across studies were listed. The graduate students coded the single-case studies as a withdrawal (A-B-A-B, coding the exact notation); multiple baseline across participants, behaviors, or stimuli; multiple probe across participants, behaviors, or stimuli; or alternating treatment design. For group designs, the unit of randomization and the group comparison categories were listed.

Risk of Bias

We evaluated study-level RoB using an adaptation of Cochrane's RoB tool (Higgins et al., 2008) for group research design studies, which incorporated concerns for inclusion of nonrandomized studies (e.g., L. M. Reeves, Umbreit, Ferro, & Liaupsin, 2013), and a single-case adaptation of the tool: SCDRoB (Reichow et al., 2017). The RoB tool uses a domain-based evaluation system so that critical assessments are made separately for different domains relevant to the internal validity of the study. The third and fourth authors independently coded each study using the respective RoB tools and reviewed and discussed disagreements until consensus was reached.

Visual Analysis

The graduate student coders used a multistep process for coding single-case study outcomes. First, coders used a systematic visual analysis protocol (adapted from Gast & Spriggs, 2014; Kratochwill et al., 2013) to assess the presence or absence of a functional relation for each case. A case was conceptualized as at least three opportunities to demonstrate behavior change at three different points in time within or across participants, target behaviors, settings, or materials. Second, coders reviewed and entered the results as described by the study authors. Third, the first author reviewed the coders' assessment of functional relations and the authors' description of results and coded as agreement or disagreement. This multistep coding process was established to reduce the likelihood that author bias affected their assessment of functional relations.

Statistical Analysis

Group design and single-case studies were meta-analyzed both separately and jointly, using effect size indices designed to be comparable across both types of designs.

Group design effect sizes. Of the 12 group design studies, data for calculating effect sizes were available from 10. Several studies reported the results of more than one experiment or for more than one sample; in these cases, each sample was treated as a separate, independent study. Estimates of the SMD were calculated for each outcome in each sample based on the posttest difference in means and the pooled posttest standard deviation. Estimates were corrected for small-sample bias using Hedges' g (Hedges, 1981).
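For reference, the calculation just described takes the standard form below; this is the usual expression for a posttest standardized mean difference and its small-sample correction (Hedges, 1981), written out here for clarity rather than taken from the primary studies.

$$
d = \frac{\bar{Y}_T - \bar{Y}_C}{S_p}, \qquad
S_p = \sqrt{\frac{(n_T - 1) S_T^2 + (n_C - 1) S_C^2}{n_T + n_C - 2}}, \qquad
g = \left(1 - \frac{3}{4(n_T + n_C) - 9}\right) d,
$$

where $\bar{Y}_T$ and $\bar{Y}_C$ are the posttest means, $S_T^2$ and $S_C^2$ are the posttest variances, and $n_T$ and $n_C$ are the treatment and comparison group sample sizes.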
Single-case effect sizes. To calculate effect sizes for the included single-case research, raw data from each study were digitally extracted from published single-subject graphs using WebPlotDigitizer (Rohatgi, 2014), which has high intercoder reliability (Drevon, Fursa, & Malcolm, 2017). One study was excluded from the statistical analyses because the graphs did not provide discernible data and data points could not be accurately extracted (Ke & Im, 2013). The remaining 22 single-case studies had a total of 43 graphs. All 43 graphs were digitized using the WebPlotDigitizer program.

The BC-SMD (Hedges, Pustejovsky, & Shadish, 2012, 2013) was calculated for purposes of quantifying the magnitude of treatment effects in the single-case research. The BC-SMD effect size is premised on a statistical model with the following assumptions: (a) the baseline is stable (e.g., no trend), (b) the intervention leads to an immediate change in level (e.g., no intervention-phase trend), (c) the intervention effect is constant across cases, (d) the outcome is normally distributed about case- and phase-specific mean levels, and (e) deviations from mean levels follow a first-order autoregressive process. The final assumption accounts for potential autocorrelation arising from repeated measurement of cases.

A limitation of the BC-SMD is that it can only be calculated for studies that include at least three participants and that use an across-participant multiple baseline, across-participant multiple probe, or withdrawal (i.e., A-B-A-B) design. Analysis of the BC-SMD was therefore limited to 10 studies that used an eligible design and included at least three cases. Some studies with complex designs were also included because a subset of cases could be extracted to create an eligible design. For example, Cihak, Wright, and Ayres (2010) used a multiple probe design across settings with an embedded A-B-A-B design, all replicated across three cases; for this study, the BC-SMD estimate was calculated based on the embedded A-B-A-B series across the three participants. Two studies reported data on multiple outcomes for the same set of cases (Choi, O'Reilly, Sigafoos, & Lancioni, 2010; King et al., 2014); here, separate effect size estimates were calculated for each outcome. Calculations were carried out using the scdhlm package (Pustejovsky, 2016) for the R statistical computing environment.
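As an illustration of this step, the sketch below computes a between-case effect size for a small simulated across-participant multiple baseline. The data are invented, and the call assumes the effect_size_MB() interface of the scdhlm version cited above; argument names may differ across package versions, so this is a sketch rather than the exact code used for the review.

```r
# Minimal sketch: BC-SMD for a simulated across-participant multiple baseline.
# The data are invented for illustration; only the final call reflects the
# HPS estimator (Hedges, Pustejovsky, & Shadish, 2013) used in the review.
library(scdhlm)

set.seed(42)
cases <- c("A", "B", "C")       # three participants, as required for the BC-SMD
phase_starts <- c(6, 9, 12)     # staggered introduction of the intervention

sim_one <- function(case, start, n = 20) {
  treatment <- as.integer(seq_len(n) >= start)
  # case-specific intercept plus a constant treatment effect and within-case noise
  outcome <- 10 + 8 * treatment + rnorm(1, sd = 2) + rnorm(n, sd = 1.5)
  data.frame(case = case, time = seq_len(n), treatment = treatment, outcome = outcome)
}
dat <- do.call(rbind, Map(sim_one, cases, phase_starts))

# Between-case standardized mean difference for a multiple baseline design
es <- effect_size_MB(outcome = dat$outcome, treatment = dat$treatment,
                     id = dat$case, time = dat$time)
es
```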

Meta-analysis. Effect size estimates from the single-case studies and group design studies were synthesized using robust variance estimation methods (Hedges, Tipton, & Johnson, 2010; Pustejovsky & Ferron, 2017) to address the problem that most studies contributed multiple, nonindependent effect size estimates. Most of the group design studies included effect size estimates for multiple outcome measures based on a common set of participants. It is not generally reasonable to assume that effect size estimates based on a common sample are independent, which precludes use of standard random effects models for meta-analysis. Similarly, some of the single-case studies reported multiple outcomes, for which separate BC-SMD estimates were calculated. Thus, robust variance estimation with a "correlated effects" working model (Hedges et al., 2010; Tanner-Smith, Tipton, & Polanin, 2016) was used to synthesize the SMDs for group designs and the BC-SMDs for single-case research. Given the relatively small number of included samples, small-sample adjustments for hypothesis tests and confidence intervals (CIs; Tipton, 2015; Tipton & Pustejovsky, 2015) were used for all analyses. The meta-analysis was conducted using the robumeta package (Fisher & Tipton, 2015) and clubSandwich package (Pustejovsky, 2016) for the R statistical computing environment.
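To illustrate how such a synthesis can be specified, the sketch below fits an intercept-only correlated-effects model to a small invented data set. The data values and column names are hypothetical; the calls assume the robu() and coef_test() interfaces of the packages just named.

```r
# Minimal sketch of a correlated-effects robust variance estimation synthesis.
library(robumeta)
library(clubSandwich)

# Toy data: one row per effect size (g) with its variance (v_g), grouped by study.
# Values are invented for illustration only.
es_dat <- data.frame(
  study_id = c(1, 1, 2, 2, 3, 4, 5),
  g        = c(0.45, 0.62, 0.80, 0.75, 0.30, 1.10, 0.55),
  v_g      = c(0.05, 0.06, 0.09, 0.08, 0.04, 0.12, 0.07)
)

# Intercept-only model: the average effect across studies, allowing multiple,
# possibly correlated effect sizes per study (assumed within-study rho = .8).
fit <- robu(g ~ 1, data = es_dat, studynum = study_id, var.eff.size = v_g,
            modelweights = "CORR", rho = 0.8, small = TRUE)
print(fit)

# Small-sample (CR2) adjusted test of the average effect, via clubSandwich,
# in the spirit of Tipton (2015) and Tipton and Pustejovsky (2015).
coef_test(fit, vcov = "CR2")
```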
Outcome reporting bias analysis. Outcome reporting biases, including publication bias and other forms of outcome censoring, are an important threat to the validity of meta-analyses of group designs (Rothstein, Sutton, & Borenstein, 2005). With group designs, it is typically assumed that outcome reporting biases arise because statistically significant results are more likely to appear in the published literature. In the present review, outcome reporting bias for the group design studies was assessed using visual inspection of funnel plots, the trim-and-fill procedure (Duval & Tweedie, 2000), and a version of Egger's regression test (Egger, Smith, Schneider, & Minder, 1997), adapted for use with d statistics and robust variance estimation. The trim-and-fill procedure and Egger's regression test use a modified standard error of the effect size estimate to avoid induced correlation between the d estimate and its standard error.
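One way such an adapted test can be implemented is as an RVE meta-regression of the effect size estimates on a modified standard error term that does not depend on the estimates themselves. The sketch below illustrates that idea with invented data and column names; it is an interpretation of the brief description above, not the exact code used for the review.

```r
# Egger-type asymmetry test adapted for d-type effect sizes with RVE.
library(robumeta)
library(clubSandwich)

# Toy data: invented effect sizes with group sample sizes, for illustration only.
es_dat <- data.frame(
  study_id = c(1, 1, 2, 3, 3, 4, 5),
  g        = c(0.45, 0.62, 0.80, 0.30, 0.41, 1.10, 0.55),
  v_g      = c(0.05, 0.06, 0.09, 0.04, 0.05, 0.12, 0.07),
  n_t      = c(15, 15, 12, 20, 20, 10, 18),
  n_c      = c(15, 15, 14, 18, 18, 11, 17)
)

# Modified standard error: the component of SE(d) that does not depend on d,
# which avoids the mechanical correlation between d and its usual standard error.
es_dat$se_mod <- sqrt((es_dat$n_t + es_dat$n_c) / (es_dat$n_t * es_dat$n_c))

# A nonzero slope on se_mod suggests small-study asymmetry.
egger_fit <- robu(g ~ se_mod, data = es_dat, studynum = study_id,
                  var.eff.size = v_g, modelweights = "CORR", rho = 0.8, small = TRUE)
coef_test(egger_fit, vcov = "CR2")
```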
Publication and outcome reporting biases are also potential threats in syntheses of single-case research. Recent initial evidence indicates that published single-case studies might not be representative of the full research base (Shadish, Zelinsky, Vevea, & Kratochwill, 2016; Sham & Smith, 2014). However, the process that leads to publication bias in the single-case literature is likely to be driven by factors such as visual determinations of experimental control and functional relationships, rather than by the statistical significance of results, as in the group design literature (Shadish et al., 2015). Consequently, tools for analyzing publication bias in group designs are not ideally suited for application to single-case research, yet neither are alternative methods available. Lacking methods that are better suited for single-case research, the present review employed conventional publication bias analyses for group designs (i.e., funnel plots, trim and fill, and robust Egger's regression test), but interpreted the results cautiously. This approach is consistent with at least one previous meta-analysis using the BC-SMD (Shadish, Hedges, & Pustejovsky, 2014).

Summary Identification of Evidence-Based Practices

The final WWC design standard ratings for all single-case and group design studies were applied to the summative analyses here but are not reported individually. For single-case studies, we applied the WWC (2014) and Horner and colleagues (2005) criterion of five studies with 20 participants across three research groups. We included single-case studies that met the following criteria in our analysis: (a) visual analysts identified a functional relation, (b) raters indicated the study had three or fewer domains at high risk of bias, and (c) the study met WWC design standards with or without reservations. For group design studies, we applied the WWC criteria for rating the effectiveness of an intervention (WWC, 2014). These criteria specify that, to receive a positive effect rating, an intervention needed to have two studies showing statistically significant positive effects, with at least one meeting group design standards without reservations and no negative effects.

Results

Study Selection

Searches resulted in the identification of 81 records: 20 studies included in the previous review by Wong and colleagues (2015), 59 studies identified through the electronic searches, and two studies identified through ancestral search and appraisal of other literature reviews. These 81 records were screened based on title and abstract, and 30 were excluded, leaving 51 eligible studies. Full-text evaluation of the 51 identified studies resulted in exclusion of 16 additional studies, including one group design that did not use random assignment (Williams, Wright, Callaghan, & Coughlan, 2002) and 15 single-case studies that were not experimental—including one that was included in Wong and colleagues' review (Stromer, Mackay, Howell, McVay, & Flusser, 1996). The fourth author conducted a reliability check using the inclusion and exclusion criteria, and interrater reliability was 100% for identified studies. The 35 studies retained in the current analysis included 12 group designs and 23 single-case studies. Of these, 10 group designs and 10 single-case studies were eligible for meta-analysis.

Descriptive Characteristics

Participants and setting characteristics. A total of 540 participants were included across the 35 studies. This included 321 participants receiving a TAII and 219 in a control or comparison group. The mean age of participants was 11.5 years (range = 2–52 years). Thirty-three studies included only participants younger than 19 years old; two studies included participants older than 19 years (Golan & Baron-Cohen, 2006; Smith et al., 2014). Seventy-one female and 320 male participants were included; three studies did not report participant gender. A total of 482 participants were reported to have ASD; 21 had other disabilities and 37 were typically developing. Researchers in five of the 35 studies conducted an independent diagnostic evaluation to confirm ASD diagnoses. Race/ethnicity was reported in nine of the 35 studies for 169 participants. For participants for whom race/ethnicity was reported, 58% (n = 98) were White, 24% were Black (n = 40), 8% were Asian (n = 13), 8% were multiracial (n = 13), and 3% were Latino/a (n = 5). Researchers in 22 of the 35 studies provided descriptions of the functional repertoires of participants. Researchers in two of the 35 studies assessed participant technology skill level prior to commencing their study. Interventions were primarily conducted in classrooms (n = 16 studies) and separate school rooms (n = 11 studies); other intervention settings included homes (n = 5), clinics (n = 4), and community centers (n = 1). Over half of the studies occurred in the United States (n = 26). Three were conducted in the United Kingdom, one in Taiwan, and one in Australia. Researchers in four studies did not report the country of origin.

Dependent variable characteristics. Table 1 lists the skills targeted and measurement procedures used in the group design and single-case studies. The target skills in 32 of the 35 studies were assessed to be functionally relevant, and researchers in 30 studies provided a specific rationale for the identified targets. Researchers in eight of the 35 studies measured generalization of skills across settings (n = 3), materials (n = 2), skills (n = 1), activities (n = 1), or people (n = 1). Researchers in 13 of the 35 studies measured maintenance of target behaviors after the intervention ended. The length of time between the end of the intervention and maintenance assessments ranged from 1 week to 12 weeks.

Independent variable characteristics and dosage. Table 1 provides summaries of intervention agents, dosage, and technology devices. Across all 35 studies, the intervention was most likely to be implemented by researchers (n = 22); however, intervention fidelity was not reported for all studies. Intervention fidelity was reported for at least 20% of sessions in 19 studies and averaged at least 80% in all of these studies. The number of sessions was reported in 30 of the 35 studies and ranged from three to 105 sessions. However, the number of trials per session was reported in just 11 of the 35 studies; when reported, it ranged from one to 96 trials. Given the heterogeneity of technology devices used across this literature, we further classified the TAII interventions into three categories based on the primary technology device used in the study. The intervention types included CAI (n = 23), AAC (n = 10), and VR (n = 3).

Study design and methodological features. The 23 single-case studies used the following methodologies: A-B-A-B (n = 6), multiple probe (n = 9), multiple baseline (n = 4), alternating treatments (n = 2), and combination designs (n = 2; multiple baseline with embedded alternating treatments and multiple probe with embedded A-B-A-B). Of these, BC-SMD effect sizes could be computed for 10 studies, including two A-B-A-B designs, six multiple probe, and two multiple baseline designs. Twelve studies used group design methodologies. Nine of these 12 used a randomized experimental design. Across the group studies, the comparison group received either business as usual (BAU; n = 7 studies); an active, non-technology-based intervention (n = 4); or an active, different TAII intervention (n = 1; see Table 1).

Risk of Bias

Results from the risk of bias evaluation summaries are provided in Online Appendix B; they were created using the RevMan computer program (The Cochrane Collaboration, 2014). In general, the single-case studies were unlikely to use random assignment (n = 21 were rated as unclear or high in sequence generation), to blind participants (n = 23 were rated as unclear or high), or to blind outcome assessors (n = 22 were rated as unclear or high). However, most studies adequately measured procedural fidelity (n = 15 were rated as low risk of bias) and dependent variable reliability (n = 20 were rated as low). Furthermore, most studies used appropriate participant selection processes.

The group studies were generally mixed across risk of bias domains. Most studies did not adequately measure procedural fidelity (n = 11 were rated as unclear or high), use blind coding for outcome assessors (n = 10 were rated as unclear or high), or blind participants or treatment personnel (n = 12 were rated as unclear or high). Areas of methodological strength included selective reporting, with none of the 12 studies appearing to selectively report the results of outcomes; contamination bias, with most studies using procedures that would limit intervention spillover; and attrition bias, with a majority of studies (n = 9) having relatively low rates of participants leaving the study.

Table 1.  Summary of Dependent and Independent Variable Characteristics of Included Studies.

Study characteristics    Group design studies (n = 12)    Single-case design studies (n = 23)    Total (n = 35)
Dependent variable
•• Communication 4 8 12
•• Academic skills 1 6 7
•• Engagement/task completion 1 6 7
•• Social skills 2 2
•• Emotion recognition 6 6
•• Adaptive skills 1 1
Outcome measurement
•• Standardized measures 8 1 9
•• Direct observation 5 23 28
 Accuracy 3 12 15
  Event recording 1 8 9
  Interval recording 2 2
  Duration recording 1 2 3
Intervention agent, dosage, fidelity
•• Researcher 7 15 22
•• Classroom staff 3 8 11
•• Parent 2 1 3
•• Intervention sessions (range) 3–25 5–105 3–105
•• Intervention fidelity reported 2 18 20
Technology devices
•• Mobile devices 4 15 19
•• Personal computers 7 9 16
•• Speech generating devices 1 12 13
•• Virtual reality 3 1 4
•• Internet 1 1 2
•• Shared active surface 1 1
Comparison condition (group only)
•• Business as usual 7  
•• Active, nontechnology intervention 4  
•• Active, other technology intervention 1  

Visual Analyses

For the single-case studies, the coders (i.e., trained graduate students) identified functional relations using visual analysis (i.e., three demonstrations of consistent behavior change) for at least one participant or behavior in 16 of the 23 studies. These results were used for all summative analyses of evidence-based practices listed in Table 2.

Meta-Analysis of Group Designs

For the group design studies, the analytic sample included 52 effect size estimates from 12 unique studies, with between two and eight effect sizes per study. Online Appendix C provides a forest plot of these effect size estimates. Table 3 reports the overall effect size estimates for the group design studies, as well as the weighted average effect sizes by intervention type and outcome domain. The overall weighted average effect size across all 52 effect size estimates was d = 0.66 (95% CI = [0.41, 0.91], p < .001). Controlling for intervention type, the effects were still heterogeneous, with an estimated between-study SD of 0.51 (I² = 61%). Differences in effect size between types of intervention could not be statistically distinguished, F(2, 1.14) = 0.89, p = .59, due to the small number of studies evaluating AAC and VR interventions. Differences in effect sizes across the outcome domains of emotion recognition, communication, and social skills could not be statistically distinguished, F(2, 4.6) = 0.13, p = .88.

Of the three types of intervention, only CAI had more than two group design studies contributing to the average effect estimate; it is the only type of intervention for which average effects could be statistically distinguished from zero (95% CI = [0.39, 1.23], p = .003). For this category of interventions, effects compared with a business-as-usual control condition were only slightly smaller than those compared with an active control condition and were not statistically distinguishable (95% CI for difference = [–1.27, 0.90], p = .66).

Table 2.  Summary of Evidence-Based Practices for Select Single-Case Studies.

Reference | Category | No. of participants (a) | Primary outcome domain
Choi, O'Reilly, Sigafoos, and Lancioni (2010) | AAC | 4 | Language/communication
Cihak, Wright, and Ayres (2010) | CAI | 3 | Engagement/task completion
Kagohara et al. (2010) | AAC | 1 | Language/communication (social communication)
Kodak, Fisher, Clements, and Bouxsein (2011) | CAI | 1 | Academic
Lorah, Parnell, and Speight (2014) | AAC | 3 | Language/communication (tacts)
Mechling and Savidge (2011) | CAI | 3 | Engagement/task completion
Mechling, Gast, and Seid (2009) | CAI | 3 | Engagement/task completion
Mechling, Gast, and Cronin (2006) | CAI | 2 | Engagement/task completion
Neely, Rispoli, Camargo, Davis, and Boles (2013) | CAI | 2 | Engagement/task completion
Soares, Vannest, and Harrison (2009) | CAI | 1 | Academic
Taylor, Hughes, Richard, Hoch, and Coello (2004) | CAI | 3 | Social
van der Meer et al. (2013) | AAC | 2 | Language/communication
Yakubova and Taber-Doughty (2013) | CAI | 3 | Adaptive
Participant total: AAC = 10; CAI = 21

Note. Single-case studies included in this summative analysis met the following criteria: (a) visual analysts identified a functional relation, (b) raters indicated the study had three or fewer domains at high risk of bias, and (c) the study met WWC design standards with or without reservations. AAC = augmentative and alternative communication; CAI = computer-assisted instruction; WWC = What Works Clearinghouse.
(a) This refers to the numbers of participants in studies with functional relations identified.

Table 3.  Average Effect Size Estimates by Intervention Type, Outcome Domain, and Study Design.

Study category | Group designs: Studies (effects), Est. (SE), 95% CI | Single-case designs: Studies (effects), Est. (SE), 95% CI
Overall | 12 (52), 0.66 (0.10), [0.41, 0.91] | 10 (13), 1.97 (0.48), [0.73, 3.21]
Intervention type
  AAC | 2 (5), 0.67 (0.40), [−4.44, 5.79] | 5 (8), 1.61 (0.43), [0.41, 2.80]
  CAI | 8 (35), 0.81 (0.18), [0.39, 1.23] | 5 (5), 2.41 (0.97), [−0.32, 5.15]
  Virtual reality | 2 (12), 0.37 (0.16), [−1.71, 2.46] | 0 (0)
Outcome domain
  Academic skills | 2 (3), 1.20 (0.39), [−3.79, 6.18] | 2 (2), 1.08 (0.45), [−4.59, 6.75]
  Adaptive skills | 0 (0) | 1 (1)
  Communication | 4 (11), 0.66 (0.22), [−0.10, 1.42] | 5 (8), 1.58 (0.42), [0.40, 2.76]
  Emotion recognition | 7 (25), 0.67 (0.22), [0.09, 1.26] | 0 (0)
  Engagement | 2 (3), 0.50 (1.07), [−13.09, 14.08] | 1 (1)
  Social skills | 4 (10), 0.79 (0.17), [0.25, 1.33] | 1 (1)

Note. Est. = point estimate; CI = confidence interval; AAC = augmentative and alternative communication; CAI = computer-assisted instruction.

Outcome reporting bias. Online Appendix D provides a funnel plot of the effect size estimates from the group design studies. Asymmetry in the distribution of effect size estimates indicates the possibility of outcome reporting bias. Effect size estimates falling outside of the dashed lines of the funnel are indicative of heterogeneity, which makes interpretation of the funnel plot more ambiguous. The funnel plot appears to be symmetric. The robust variant of Egger's test did not indicate asymmetry, t(3.9) = 1.04, p = .355, although this test does not have strong power.

Results of the trim-and-fill analysis estimated that there were no censored effect sizes, and thus no outcome reporting bias in the average effect size estimates.

Meta-Analysis of Single-Case Research

Effect size estimates. In total, BC-SMDs could be calculated for 10 out of the 23 identified single-case studies. Online Appendix C displays a forest plot of the BC-SMD effect size estimates. Table 3 reports the overall average effect size estimates for the single-case studies, as well as the average effect sizes by intervention type and outcome domain. Unlike the group design studies, no single-case studies examined VR. Using robust random effects estimation, the average effect size across all single-case studies and outcomes was 1.97 (95% CI = [0.73, 3.21], p = .010). Controlling for intervention type, the effects were heterogeneous, with an estimated between-study SD of 1.26 (I² = 87%). Differences in effect size between the two intervention types could not be statistically distinguished, F(1, 7.77) = 0.58, p = .47. The distribution of effect size estimates was concentrated on communication-related outcomes, with few effect sizes available in other domains. For communication-related outcomes, the average effect was statistically distinguishable from zero (95% CI = [0.40, 2.76]). Average effect size estimates were omitted from Table 3 for domains that included only a single study.

Two of the effect size estimates were larger than 5.00 and imprecisely estimated. Due to the possibility that these effects are outliers, average effect sizes were recomputed after excluding them. Excluding these studies, the average effect size for the remaining 11 outcomes from nine studies was 1.38 (95% CI = [0.68, 2.08], p = .005), with an estimated between-study SD of 0.71 (I² = 72%). Thus, although average effects were still statistically distinguishable from zero, the estimates of average magnitude and variability of effects were both substantially influenced by the two outlying effect sizes.

Although the average effect size estimate from the single-case studies was larger than the corresponding estimate from the group studies, the difference was not statistically distinguishable from zero. Controlling for intervention type, the difference in effects between the single-case and group studies was 1.30 (95% CI = [–0.23, 2.83], p = .085); excluding two outlying effects from single-case studies, the difference was 0.70 (95% CI = [–0.27, 1.67], p = .130).

Outcome reporting bias. The funnel plot of the BC-SMD effect size estimates is in Online Appendix D. The distribution of effect size estimates appears to be somewhat asymmetrical, with smaller effects tending to be more precisely estimated and larger effects tending to be less precise. This visual assessment of asymmetry is consistent with a robust Egger's regression test, t(4.9) = 4.05, p = .010. In meta-analysis of between-groups studies, such asymmetry can be indicative of outcome reporting bias. Results of trim-and-fill analysis were ambiguous. When based on the "R0" variant, it estimated that there were six censored effect sizes, inclusion of which reduced the overall average effect size estimate to 0.83. However, when based on the "L0" variant, it estimated that there were no censored effect sizes, and thus no outcome reporting bias in the average effect size estimates.

Summary Identification of Evidence-Based Practices

WWC results are provided in Online Appendix E. No group design studies met WWC design standards without reservations. Table 2 reports results from the summative analyses identifying evidence-based practices based on single-case studies. Overall, the visual analysis results for the single-case studies indicated that CAI is an evidence-based practice for students with ASD, as there were nine studies with 21 participants across seven research groups; however, the target outcomes varied across the studies (i.e., engagement/task completion, academic, social, and adaptive skills). In addition, only three of these nine CAI studies were able to be included in the BC-SMD analyses (Cihak et al., 2010; Taylor et al., 2004; Yakubova & Taber-Doughty, 2013). Positive effects were noted for all three of these studies, although one was larger than five and imprecisely estimated (Cihak et al., 2010). These findings were aligned with visual analyses, as functional relations were identified for each of these three studies. Functional relations were also identified for the remaining six studies not included in the BC-SMD analyses. Three methodological areas were identified across the CAI studies as sources of possible bias based on the risk of bias evaluation; these included issues with selection bias and with blinding of participants and outcome assessors. For AAC and VR, visual analysis summaries indicated insufficient studies and participants with functional relations (fewer than five studies and 20 participants, respectively). Neither AAC nor VR would be considered an evidence-based practice.

Discussion

The primary purpose of this review was to apply novel meta-analytic techniques to synthesize the single-case and group design research on TAII for students with ASD and to identify the strengths, limitations, and external validity of this research. On the basis of the single-case studies, we concluded that CAI was an evidence-based practice. However, the primary outcome domains varied across these studies, which limits interpretations of the overall summary. The other categories of TAII—AAC and VR—did not include a sufficient research base to classify as an evidence-based practice.

support for AAC and VR interventions. However, only two have a sound rationale when used with single-case research.
AAC and two VR group studies met the eligibility criteria, Nonetheless, we believe that outcome reporting biases are
indicating a need for additional research with these TAII an important concern if single-case research is to be used as
modalities. Regarding CAI, there were 35 outcomes mea- a basis for establishing evidence-based practices (Shadish,
sures identified across the eight group studies. Results indi- Sham, & Smith, 2014; Shadish et al., 2016). There is an
cated that these CAI group studies collectively produced outstanding need for further research and development of
meaningful differences between the groups. As with the better methodology in this area.
single-case results, however, there are important qualifica-
tions to consider. For instance, the CAI group research con-
Systematic Review Findings
sisted of a range of dependent measures although the
majority of the studies reported measures of emotion recog- The results of the current review demonstrated that there is
nition (n = 6 of 8 studies). The research could be strength- a need for additional research on the use of AAC and VR
ened with greater consideration of methodological issues interventions for students with ASD. The body of available
that introduce bias. experimental research on these technologies is currently too
small to make any definitive statements regarding their
overall potential for addressing the array of academic, com-
Meta-Analytic Findings municative, and social outcomes that students with ASD
Several findings emerged from the meta-analytic synthesis confront. As such, we encourage the pursuit of additional,
of single-case research and group designs. First, for both rigorous research and innovation to continue to expand the
group designs and single-case research, there was marked portfolio of interventions available for addressing the
heterogeneity in effect sizes, with between-study SDs in diverse needs of students with ASD.
excess of 0.5. Such a high degree of heterogeneity indicates Our review provided support for the use of CAI to address
that many unexplained factors contribute to the efficacy of the academic, communicative, and social needs of students
TAII, even after controlling for the type of technology. This with ASD. CAI was the most frequently used TAII interven-
should serve as a caution in generalizing about the overall tion category in which skills were taught using mobile
effects of a given type of technology. That is, there are devices or personal computers. In fact, TAII was generally
likely to be effective and ineffective ways to use TAII, used to teach socially significant skills. Researchers across a
which should be explored in future research. majority of the studies used TAII to teach functionally rele-
Second, average effects from single-case research were vant skills and were likely to provide a rationale for their
estimated to be larger than those from group designs targets. Not surprisingly, the most frequently reported target
although the difference was not statistically distinguishable behaviors were communication, academic, and social skills.
from zero due to the high degree of unexplained heteroge- Although communication and social skills might be particu-
neity. Still, this trend is consistent with theoretical expecta- larly important for students with ASD, given that their symp-
tions that effect sizes from single-case research may tomatology specifies known deficits in these areas (American
generally be larger than those from group designs, even Psychiatric Association, 2013), additional research is needed
when assessed on comparable scales. For example, visual examining the use of TAII to teach other functional skills.
analysis of single-case research can reliably detect large For example, several researchers in the current review effec-
effects better than small or moderate effects. Therefore, tively used TAII to teach emotion recognition (Golan et al.,
within the single-case research community, there is strong 2006; Hopkins et al., 2011). Smith and colleagues used a VR
publication preference for studies with larger effects software program to teach job interviewing skills. These
(Shadish et al., 2015; Shadish et al., 2016). This inadver- studies provide examples of the innovative uses of TAII
tently limits the pool of studies for which to draw effect across the broad spectrum of functional skills for students
sizes and might artificially inflate average effects. with ASD. Future research should continue to examine inno-
Finally, outcome reporting bias analyses indicated the vations in the devices used and the target behaviors taught.
possibility that the set of studies included in this synthesis The current research on TAII could also be strengthened
may not be fully representative of the true distribution of through improved reporting on sample characteristics.
effects. In particular, trim-and-fill and Egger’s regression Although all studies in this review included students with
test results applied to the single-case research suggested ASD, comparisons across studies are limited because
that we may be missing small studies with ambiguous or researchers in only five of the 35 studies independently con-
null results; identification and inclusion of such studies firmed ASD diagnoses. Also, nearly 11% of the sample were
would tend to reduce the estimated effects. As noted in a not reported to have ASD. Race and ethnicity also were
previous section, use of these outcome reporting bias analy- rarely reported. Perhaps most surprisingly, researchers in
ses with single-case research is exploratory, given that the only two of the 35 studies reported participants’ prestudy
techniques were developed for group designs and may not skill levels with technology. For example, it might be critical

For example, it might be critical to identify the specific motor or attending skills required to use specific TAII devices (Kagohara et al., 2013).

Researchers also might advance the science of TAII by minimizing risk of bias in future studies. Overall, sequence generation and blinding of participants and outcome assessors were the domains with the most studies rated as having a high risk of bias. In single-case studies, sequence generation refers to the condition ordering, which, when controlled by the researcher, can introduce bias. Blinding of participants and key personnel occurs when the participants and the researchers making decisions about condition changes are blind to the study condition changes. Although the dynamic nature of single-case research allows for response-guided decisions using baseline logic (Barton et al., 2016), there are acceptable randomization procedures that can be introduced within specific single-case designs (i.e., alternating treatments designs, multielement designs) that might minimize bias due to the ordering and iteration of conditions (one simple approach is sketched below). Likewise, although there are generally accepted standards for the quality of data collection and the amount and degree of reliability data in single-case research (Ayres & Ledford, 2014), standardized procedures under which data should be collected are not ubiquitous. To minimize threats to internal validity, outcome assessors should be blind to study outcomes and conditions for both group and single-case research, and reliability data should be formatively analyzed to monitor the presence of systematic bias (Chazin, Ledford, Barton, & Osbourne, 2017).
The present review identified two challenges for the continued development of TAII for students with ASD. The first is that there remains considerable variation in the definitions of TAII and the specific categories used across the literature (cf. Grynszpan et al., 2014; Kagohara et al., 2013; Odom et al., 2015). Although some variation might be expected, major definitional differences might indicate a need for developing a consistent framework or categories of TAII to support the synthesis of results across reviews. For example, Grynszpan and colleagues (2014) identified 21 articles for their review on TAII with students with ASD, yet only five of those 21 were included in the current review. Part of the reason for the lack of overlap can be traced back to the definition of technology. Specifically, Grynszpan and colleagues focused their review primarily on interventions that were delivered through a computerized system, whereas the review by Odom and colleagues, like the present review, adopted a broader definition of technology. The second challenge confronting the TAII literature is the need for the use of more rigorous research designs and methods. This issue is exemplified again in the overlap of included studies across Grynszpan and colleagues and Odom and colleagues. Specifically, a number of studies were excluded from the present review because they did not use an eligible single-case or group-based experimental design. These experimental methods are essential for drawing valid conclusions regarding overall effectiveness.

Limitations

There are at least three limitations to note. First, we started with an existing review (i.e., Wong et al., 2015) and only searched for articles published after the end date of their review. Thus, we might have inadvertently missed studies. Furthermore, several studies have been published since the date of our search that would have met our inclusion criteria. Second, because our review examined a specific type of independent variable, the range of dependent variables included across studies limited interpretations of the results. Even similar types of TAII might have been applied using disparate instructional techniques. Furthermore, given that students with ASD are known to be heterogeneous, the user or participant characteristics might have varied across studies to a degree that makes syntheses difficult to interpret. Third, the BC-SMD effect size measure used to meta-analyze data from the single-case designs could be applied only to a subset of the studies (10 out of 23) that otherwise met inclusion criteria. To the extent that the studies meeting technical inclusion criteria systematically differ from studies that include fewer cases or use other designs, the meta-analytic findings from included studies might not generalize. Future syntheses should consider using multiple effect sizes, including both between-case and within-case measures, to characterize the magnitude of the effects of TAII.
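For readers less familiar with the BC-SMD, the sketch below shows how a between-case standardized mean difference might be estimated for a single hypothetical multiple baseline study using the scdhlm package (Pustejovsky, 2016). The data values, variable names, and the positional argument order of effect_size_MB() (outcome, treatment indicator, case, session) are assumptions based on the package documentation, not the code used for this review.

    # Sketch: BC-SMD for one hypothetical multiple baseline across three cases,
    # using the moment-based estimator of Hedges, Pustejovsky, and Shadish (2013).
    # Data values and the assumed argument order are illustrative only.
    library(scdhlm)
    mb_dat <- data.frame(
      case      = rep(c("Case 1", "Case 2", "Case 3"), each = 12),
      session   = rep(1:12, times = 3),
      treatment = c(rep(0:1, c(4, 8)), rep(0:1, c(6, 6)), rep(0:1, c(8, 4))),
      outcome   = c(2, 3, 2, 3, 6, 7, 8, 8, 9, 9, 10, 9,
                    1, 2, 2, 1, 2, 3, 7, 8, 8, 9, 9, 10,
                    3, 2, 3, 3, 2, 3, 2, 3, 8, 9, 9, 10)
    )
    # Between-case standardized mean difference for a multiple baseline design
    with(mb_dat, effect_size_MB(outcome, treatment, case, session))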
Conclusion

A primary purpose of this review was to apply new methods for calculating between-case effect sizes for single-case data and to compare the results with effect sizes from group design research. There is a history of disagreement in the field regarding appropriate methods for calculating effect sizes for single-case research (Shadish et al., 2015; Wolery, Busick, Reichow, & Barton, 2010). While we present a single method, we do not believe this method is uniformly superior to other methods or the most valid method of calculating effect sizes for all single-case data. Rather, the results of this systematic review highlighted many limitations that still must be addressed through careful study. Systematic comparisons of novel methods will be needed. However, our initial demonstration of the utility of between-case effect sizes for single-case research, and of their potential for use alongside more conventional and accepted methods for calculating between-group effect sizes, is noteworthy and warrants additional attention.
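To make that intended workflow concrete, the sketch below shows one way design-comparable effect sizes from single-case studies might be pooled with group-design effect sizes under robust variance estimation (Hedges, Tipton, & Johnson, 2010) with small-sample corrections (Tipton, 2015), using the robumeta package (Fisher & Tipton, 2015). The data frame and moderator coding are hypothetical; this illustrates the general approach rather than reproducing the analysis reported above.

    # Sketch: combining BC-SMD (single-case) and Hedges' g (group design) estimates
    # with robust variance estimation. Assumes a hypothetical data frame all_es with
    # columns study (ID), g (effect size), V_g (variance), and design (factor with
    # levels "group" and "single-case").
    library(robumeta)
    rve_fit <- robu(
      formula      = g ~ design,  # design type as a moderator
      data         = all_es,
      studynum     = study,       # effect sizes clustered within studies
      var.eff.size = V_g,
      rho          = 0.80,        # assumed correlation among within-study effects
      small        = TRUE         # small-sample corrections (Tipton, 2015)
    )
    print(rve_fit)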
A secondary purpose of the review was to summarize the TAII research. The results of our review support the use of TAII for students with ASD. This is consistent with the findings from other reviews, although those reviews varied in their scopes and foci (cf. Grynszpan et al., 2014; Knight et al., 2013). Overall, TAII was used to improve student outcomes using a variety of formats, devices, and software within authentic educational settings.
Furthermore, the research in this area is rapidly growing, as an increasing number of studies were identified in each subsequent year. Given the heterogeneity of the outcomes and the TAII tools, researchers should continue to examine disaggregated components or characteristics of TAII that are effective for specific outcomes.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The authors received no financial support for the research, authorship, and/or publication of this article.

Supplemental Material

The online appendices are available at http://journals.sagepub.com/doi/suppl/10.1177/0741932517729508.

References

References marked with an asterisk indicate studies included in the meta-analyses.

*Achamadi, D., Kagohara, D. M., van der Meer, L., O'Reilly, M., Lancioni, G., Sutherland, D., . . . Sigafoos, J. (2012). Teaching advanced operation of an iPod-based speech-generating device to two students with Autism Spectrum Disorders. Research in Autism Spectrum Disorders, 6, 1258–1264.
American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: American Psychiatric Publishing.
Ayres, K., & Ledford, J. R. (2014). Dependent measures and measurement systems. In D. L. Gast & J. R. Ledford (Eds.), Single case research design in behavioral sciences (2nd ed., pp. 124–153). New York, NY: Routledge.
Barton, E. E., Ledford, J. R., Lane, J. D., Decker, J., Germansky, S. E., Hemmeter, M. L., & Kaiser, A. (2016). The iterative use of single case research designs to advance the science of EI/ECSE. Topics in Early Childhood Special Education, 36, 4–14.
*Beaumont, R., & Sofronoff, K. (2008). A multi-component social skills intervention for children with Asperger syndrome: The junior detective training program. Journal of Child Psychology and Psychiatry, 49, 743–753.
*Boesch, M. C., Wendt, O., Subramanian, A., & Hsu, N. (2013). Comparative efficacy of the Picture Exchange Communication System (PECS) versus a speech-generating device: Effects on social-communicative skills and speech development. Augmentative and Alternative Communication, 29, 197–209.
Chazin, K. T., Ledford, J. R., Barton, E. E., & Osbourne, K. (2017). Antecedent exercise for young children: An examination of bias on experimental outcomes. Manuscript submitted for publication.
*Choi, H., O'Reilly, M., Sigafoos, J., & Lancioni, G. (2010). Teaching requesting and rejecting sequences to four children with developmental disabilities using augmentative and alternative communication. Research in Developmental Disabilities: A Multidisciplinary Journal, 31, 560–567.
*Cihak, D. F., Wright, R., & Ayres, K. M. (2010). Use of self-modeling static-picture prompts via a handheld computer to facilitate self-monitoring in the general education classroom. Education and Training in Developmental Disabilities, 45, 136–149.
The Cochrane Collaboration. (2014). Review Manager (RevMan) (Version 5.3) [Computer program]. Copenhagen, Denmark: The Nordic Cochrane Centre, The Cochrane Collaboration.
Council for Exceptional Children. (2014). Council for Exceptional Children standards for evidence-based practices in special education. Retrieved from https://www.cec.sped.org/~/media/Images/Standards/CEC%20EBP%20Standards%20cover/CECs%20Evidence%20Based%20Practice%20Standards.pdf
*DeThorne, L., Betancourt, M. A., Karahalios, K., Halle, J., & Bogue, E. (2015). Visualizing syllables: Real-time computerized feedback within a speech-language intervention. Journal of Autism and Developmental Disorders, 45, 3756–3763.
Drevon, D., Fursa, S. R., & Malcolm, A. L. (2017). Intercoder reliability and validity of WebPlotDigitizer in extracting graphed data. Behavior Modification, 41, 323–339.
Duval, S., & Tweedie, R. (2000). A nonparametric "trim and fill" method of accounting for publication bias in meta-analysis. Journal of the American Statistical Association, 95, 89–98.
Egger, M., Smith, G., Schneider, M., & Minder, C. (1997). Bias in meta-analysis detected by a simple, graphical test. BMJ: British Medical Journal, 315, 629–634.
*Faja, S., Aylward, E., Bernier, R., & Dawson, G. (2007). Becoming a face expert: A computerized face-training program for high-functioning individuals with Autism Spectrum Disorders. Developmental Neuropsychology, 33, 1–24.
Fisher, Z., & Tipton, E. (2015). robumeta: An R-package for robust variance estimation in meta-analysis (arXiv:1503.02220). Retrieved from https://www.rdocumentation.org/packages/robumeta
*Flores, M., Musgrove, K., Renner, S., Hinton, V., Strozier, S., Franklin, S., & Hill, D. (2012). A comparison of communication using the Apple iPad and a picture-based system. Augmentative and Alternative Communication, 28, 74–84.
Ganz, J. B., Earles-Vollrath, T. L., Heath, A. K., Parker, R. I., Rispoli, M. J., & Duran, J. B. (2012). A meta-analysis of single-case research studies on aided augmentative and alternative communication systems with individuals with Autism Spectrum Disorders. Journal of Autism and Developmental Disorders, 42, 60–74.
Gast, D. L., & Spriggs, A. (2014). Visual analysis of graphic data. In D. Gast & J. R. Ledford (Eds.), Single case research methodology: Applications in special education and behavioral sciences (pp. 234–375). New York, NY: Routledge.
*Golan, O., Ashwin, E., Granader, Y., McClintock, S., Day, K., Leggett, V., & Baron-Cohen, S. (2010). Enhancing emotion recognition in children with autism spectrum conditions: An intervention using animated vehicles with real emotional faces. Journal of Autism and Developmental Disorders, 40, 269–279.
*Golan, O., & Baron-Cohen, S. (2006). Systemizing empathy: Teaching adults with Asperger syndrome or high-functioning autism to recognize complex emotions using interactive multimedia. Development and Psychopathology, 18, 591–617.
Grynszpan, O., Weiss, P. L., Perez-Diaz, F., & Gal, E. (2014). Innovative technology-based interventions for autism spectrum disorders: A meta-analysis. Autism, 18, 346–361.
Hedges, L. V. (1981). Distribution theory for Glass's estimator of effect size and related estimators. Journal of Educational Statistics, 6, 107–128.
Hedges, L. V., Pustejovsky, J. E., & Shadish, W. R. (2012). A standardized mean difference effect size for single case designs. Research Synthesis Methods, 3, 224–239.
Hedges, L. V., Pustejovsky, J. E., & Shadish, W. R. (2013). A standardized mean difference effect size for multiple baseline designs across individuals. Research Synthesis Methods, 4, 324–341.
Hedges, L. V., Tipton, E., & Johnson, M. C. (2010). Robust variance estimation in meta-regression with dependent effect size estimates. Research Synthesis Methods, 1, 39–65.
Higgins, J. P. T., Altman, D. G., & Sterne, J. A. C. (2008). Assessing risk of bias in included studies. In J. P. T. Higgins & S. Green (Eds.), Cochrane handbook for systematic reviews of interventions (pp. 187–241). Chichester, UK: John Wiley.
*Hopkins, I. M., Gower, M. W., Perez, T. A., Smith, D. S., Amthor, F. R., Wimsatt, F. C., & Biasini, F. J. (2011). Avatar assistant: Improving social skills in students with an ASD through a computer-based intervention. Journal of Autism and Developmental Disorders, 41, 1543–1555.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single-subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165–179.
Irish, J. E. (2013). Can I sit here? A review of the literature supporting the use of single-user virtual environments to help adolescents with autism learn appropriate social communication skills. Computers in Human Behavior, 29, A17–A24.
Jones, P., Wilcox, C., & Simons, J. (2016). Evidence-based instruction for students with Autism Spectrum Disorder: TeachTown Basics. In T. A. Cardon (Ed.), Technology and the treatment of children with Autism Spectrum Disorder (pp. 113–130). New York, NY: Springer.
*Kagohara, D. M., van der Meer, L., Achmadi, D., Green, V. A., O'Reilly, M. F., Mulloy, A., . . . Sigafoos, J. (2010). Behavioral intervention promotes successful use of an iPod-based communication device by an adolescent with autism. Clinical Case Studies, 9, 328–338. doi:10.1177/1534650110379633
Kagohara, D. M., van der Meer, L., Ramdoss, S., O'Reilly, M. F., Lancioni, G. E., Davis, T. N., . . . Green, V. A. (2013). Using iPods® and iPads® in teaching programs for individuals with developmental disabilities: A systematic review. Research in Developmental Disabilities, 34, 147–156.
*Kasari, C., Kaiser, A., Goods, K., Nietfeld, J., Mathy, P., Landa, R., . . . Almirall, D. (2014). Communication interventions for minimally verbal children with autism: A sequential multiple assignment randomized trial. Journal of the American Academy of Child & Adolescent Psychiatry, 53, 635–646.
*Ke, F., & Im, T. (2013). Virtual-reality-based social interaction training for children with high-functioning autism. The Journal of Educational Research, 106, 441–461.
*King, M. L., Takeguchi, K., Barry, S. E., Rehfeldt, R. A., Boyer, V. E., & Mathews, T. L. (2014). Evaluation of the iPad in the acquisition of requesting skills for children with Autism Spectrum Disorder. Research in Autism Spectrum Disorders, 8, 1107–1120.
Knight, V., McKissick, B. R., & Saunders, A. (2013). A review of technology-based interventions to teach academic skills to students with autism spectrum disorder. Journal of Autism and Developmental Disorders, 43, 2628–2648.
*Kodak, T., Fisher, W. W., Clements, A., & Bouxsein, K. J. (2011). Effects of computer-assisted instruction on correct responding and procedural integrity during early intensive behavioral intervention. Research in Autism Spectrum Disorders, 5, 640–647.
Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2013). Single-case intervention research design standards. Remedial and Special Education, 34, 26–38.
Lofland, K. B. (2016). The use of technology in the treatment of autism. In T. A. Cardon (Ed.), Technology and the treatment of children with Autism Spectrum Disorder (pp. 27–35). New York, NY: Springer.
*Lorah, E. R., Parnell, A., & Speight, D. R. (2014). Acquisition of sentence frame discrimination using the iPad™ as a speech generating device in young children with developmental disabilities. Research in Autism Spectrum Disorders, 8, 1734–1740.
*McKissick, B. R., Spooner, F., Wood, C. L., & Diegelmann, K. M. (2013). Effects of computer-assisted explicit instruction on map-reading skills for students with autism. Research in Autism Spectrum Disorders, 7, 1653–1662.
*Mechling, L. C., Gast, D. L., & Cronin, B. A. (2006). The effects of presenting high-preference items, paired with choice, via computer-based video programming on task completion of students with autism. Focus on Autism and Other Developmental Disabilities, 21, 7–13.
*Mechling, L. C., Gast, D. L., & Seid, N. H. (2009). Using a personal digital assistant to increase independent task completion by students with Autism Spectrum Disorder. Journal of Autism and Developmental Disorders, 39, 1420–1434.
*Mechling, L. C., & Savidge, E. J. (2011). Using a personal digital assistant to increase completion of novel tasks and independent transitioning by students with Autism Spectrum Disorder. Journal of Autism and Developmental Disorders, 41, 687–704.
*Mineo, B. A., Ziegler, W., Gill, S., & Salkin, D. (2009). Engagement with electronic screen media among students with Autism Spectrum Disorders. Journal of Autism and Developmental Disorders, 39, 172–187.
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & PRISMA Group. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7), e1000097.
*Moore, M., & Calvert, S. (2000). Brief report: Vocabulary acquisition for children with autism: Teacher or computer instruction. Journal of Autism and Developmental Disorders, 30, 359–362.
*Myles, B. S., Ferguson, H., & Hagiwara, T. (2007). Using a personal digital assistant to improve the recording of homework assignments by an adolescent with Asperger syndrome. Focus on Autism and Other Developmental Disabilities, 22, 96–99.
*Neely, L., Rispoli, M., Camargo, S., Davis, H., & Boles, M. (2013). The effect of instructional use of an iPad® on challenging behavior and academic engagement for two students with autism. Research in Autism Spectrum Disorders, 7, 509–516.
Odom, S. L., Brantlinger, E., Gersten, R., Horner, R. H., Thompson, B., & Harris, K. R. (2005). Research in special education: Scientific methods and evidence-based practices. Exceptional Children, 71, 137–148.
Odom, S. L., Thompson, J. L., Hedges, S., Boyd, B. A., Dykstra, J. R., Duda, M. A., . . . Bord, A. (2015). Technology-aided interventions and instruction for adolescents with Autism Spectrum Disorder. Journal of Autism and Developmental Disorders, 45, 3805–3819.
Pennington, R. C. (2010). Computer-assisted instruction for teaching academic skills to students with Autism Spectrum Disorders: A review of literature. Focus on Autism and Other Developmental Disabilities, 25, 239–248.
*Pennington, R. C., Stenhoff, D. M., Gibson, J., & Ballou, K. (2012). Using simultaneous prompting to teach computer-based story writing to a student with autism. Education and Treatment of Children, 35, 389–406.
Ploog, B. O., Scharf, A., Nelson, D., & Brooks, P. J. (2013). Use of computer-assisted technologies (CAT) to enhance social, communicative, and language development in children with autism spectrum disorders. Journal of Autism and Developmental Disorders, 43, 301–322.
Pustejovsky, J. E. (2016). scdhlm: Estimating hierarchical linear models for single-case designs. Retrieved from https://cran.r-project.org/web/packages/scdhlm
Pustejovsky, J. E., & Ferron, J. M. (2017). Research synthesis and meta-analysis of single-case designs. In J. M. Kaufmann, D. P. Hallahan & P. C. Pullen (Eds.), Handbook of special education (2nd ed., pp. 168–186). New York, NY: Routledge.
Pustejovsky, J. E., Hedges, L. V., & Shadish, W. R. (2014). Design-comparable effect sizes in multiple baseline designs: A general modeling framework. Journal of Educational and Behavioral Statistics, 39, 368–393.
Ramdoss, S., Lang, R., Mulloy, A., Franco, J., O'Reilly, M., Didden, R., & Lancioni, G. (2011). Use of computer-based interventions to teach communication skills to children with Autism Spectrum Disorders: A systematic review. Journal of Behavioral Education, 20, 55–76.
Ramdoss, S., Machalicek, W., Rispoli, M., Mulloy, A., Lang, R., & O'Reilly, M. (2012). Computer-based interventions to improve social and emotional skills in individuals with autism spectrum disorders: A systematic review. Developmental Neurorehabilitation, 15, 119–135.
Reeves, B. C., Higgins, J., Ramsay, C., Shea, B., Tugwell, P., & Wells, G. A. (2013). An introduction to methodological issues when including non-randomised studies in systematic reviews on the effects of interventions. Research Synthesis Methods, 4, 1–11.
Reeves, L. M., Umbreit, J., Ferro, J. B., & Liaupsin, C. J. (2013). Function-based intervention to support the inclusion of students with autism. Education and Training in Autism and Developmental Disabilities, 48, 379–391.
Reichow, B., Barton, E. E., & Maggin, D. (2017). Risk of bias assessment for single case designs. Unpublished manuscript, Anita Zucker Center for Excellence in Early Childhood Studies, University of Florida, Gainesville.
Reichow, B., Doehring, P., Cicchetti, D. V., & Volkmar, F. R. (2011). Evidence-based practices in autism: Where we started. In B. Reichow, P. Doehring, D. V. Cicchetti & F. R. Volkmar (Eds.), Evidence-based practices and treatments for children with autism (pp. 3–24). New York, NY: Springer.
*Richter, S., & Test, D. (2011). Effects of multimedia social stories on knowledge of adult outcomes and opportunities among transition-aged youth with significant disabilities. Education and Training in Autism and Developmental Disabilities, 46, 410–424.
Rohatgi, A. (2014). WebPlotDigitizer user manual (Version 3.4). Retrieved from http://arohatgi.info/WebPlotDigitizer/userManual.pdf
Rothstein, H. R., Sutton, A. J., & Borenstein, M. (2005). Publication bias in meta-analysis. In H. R. Rothstein, A. J. Sutton & M. Borenstein (Eds.), Publication bias in meta-analysis: Prevention, assessment, and adjustments (pp. 1–7). Chichester, UK: John Wiley.
Shadish, W. R., Hedges, L. V., Horner, R. H., & Odom, S. L. (2015). The role of between-case effect size in conducting, interpreting, and summarizing single-case research (NCER 2015-002). Washington, DC: National Center for Education Research, Institute of Education Sciences, U.S. Department of Education. Retrieved from http://ies.ed.gov/ncser/pubs/2015002/
Shadish, W. R., Hedges, L. V., & Pustejovsky, J. E. (2014). Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: A primer and applications. Journal of School Psychology, 52, 123–147.
Shadish, W. R., Sham, E., & Smith, T. (2014). Publication bias in studies of an applied behavior-analytic intervention: An initial analysis. Journal of Applied Behavior Analysis, 47, 663–678.
Shadish, W. R., Zelinsky, N. A. M., Vevea, J. L., & Kratochwill, T. R. (2016). A survey of publication practices of single-case design researchers when treatments have small or large effects. Journal of Applied Behavior Analysis, 49, 1–18.
Sham, E., & Smith, T. (2014). Publication bias in studies of an applied behavior-analytic intervention: An initial analysis. Journal of Applied Behavior Analysis, 47, 663–678.
*Shih, C. H., Chiang, M. S., & Shih, C. T. (2015). Assisting students with autism to cooperate with their peers to perform computer mouse collaborative pointing operation on a single display simultaneously. Research in Autism Spectrum Disorders, 10, 15–21.
*Silver, M., & Oakes, P. (2001). Evaluation of a new computer intervention to teach people with autism or Asperger syndrome to recognize and predict emotions in others. Autism, 5, 299–316.
*Smith, M. J., Ginger, E. J., Wright, K., Wright, M. A., Taylor, J. L., Humm, L. B., . . . Fleming, M. F. (2014). Virtual reality job interview training in adults with Autism Spectrum Disorder. Journal of Autism and Developmental Disorders, 44, 2450–2463.
*Soares, D. A., Vannest, K. J., & Harrison, J. (2009). Computer aided self-monitoring to increase academic production and reduce self-injurious behavior in a child with autism. Behavioral Interventions, 24, 171–183.
Sterne, J. A. C. (Ed.). (2009). Meta-analysis in Stata: An updated collection from the Stata Journal. College Station, TX: Stata Press.
Stovold, E., Beecher, D., Foxlee, R., & Noel-Storr, A. (2014). Study flow diagrams in Cochrane systematic review updates: An adapted PRISMA flow diagram. Systematic Reviews, 3, 54–59.
Stromer, R., Mackay, H. A., Howell, S. R., McVay, A. A., & Flusser, D. (1996). Teaching computer-based spelling to individuals with developmental and hearing disabilities: Transfer of stimulus control to writing tasks. Journal of Applied Behavior Analysis, 29, 25–42.
Tanner-Smith, E. E., Tipton, E., & Polanin, J. R. (2016). Handling complex meta-analytic data structures using robust variance estimates: A tutorial in R. Journal of Developmental and Life-Course Criminology, 2, 85–112.
Tate, R. L., Perdices, M., Rosenkoetter, U., Wakim, D., Godbee, K., Togher, L., & McDonald, S. (2013). Revision of a method quality rating scale for single-case experimental designs and n-of-1 trials: The 15-item Risk of Bias in N-of-1 Trials (RoBiNT) Scale. Neuropsychological Rehabilitation, 23, 619–638.
*Taylor, B. A., Hughes, C. E., Richard, E., Hoch, H., & Coello, A. R. (2004). Teaching teenagers with autism to seek assistance when lost. Journal of Applied Behavior Analysis, 37, 79–82.
Tipton, E. (2015). Small sample adjustments for robust variance estimation with meta-regression. Psychological Methods, 20, 375–393.
Tipton, E., & Pustejovsky, J. E. (2015). Small-sample adjustments for tests of moderators and model fit using robust variance estimation in meta-regression. Journal of Educational and Behavioral Statistics, 40, 604–634.
*van der Meer, L., Kagohara, D., Roche, L., Sutherland, D., Balandin, S., Green, V. A., . . . Sigafoos, J. (2013). Teaching multi-step requesting and social communication to two children with Autism Spectrum Disorders with three AAC options. Augmentative and Alternative Communication, 29, 222–234.
Weng, P. L., Maeda, Y., & Bouck, E. C. (2014). Effectiveness of cognitive skills-based computer-assisted instruction for students with disabilities: A synthesis. Remedial and Special Education, 35, 167–180.
*Whalen, C., Moss, D., Ilan, A. B., Vaupel, M., Fielding, P., Macdonald, K., . . . Symon, J. (2010). Efficacy of TeachTown: Basics computer-assisted intervention for the intensive comprehensive autism program in Los Angeles Unified School District. Autism, 14, 179–197.
What Works Clearinghouse. (2014). What Works Clearinghouse procedures and standards handbook (Version 3.0). Washington, DC: Institute of Education Sciences.
Williams, C., Wright, B., Callaghan, G., & Coughlan, B. (2002). Do children with autism learn to read more readily by computer assisted instruction or traditional book methods? A pilot study. Autism, 6, 71–91.
Wolery, M., Busick, M., Reichow, B., & Barton, E. E. (2010). Comparison of overlap methods for quantitatively synthesizing single-subject data. The Journal of Special Education, 44(1), 18–28.
Wong, C., Odom, S. L., Hume, K. A., Cox, A. W., Fettig, A., Kucharczyk, S., . . . Schultz, T. R. (2014). Evidence-based practices for children, youth, and young adults with autism spectrum disorder. Chapel Hill: Autism Evidence-Based Practice Review Group, Frank Porter Graham Child Development Institute, The University of North Carolina.
Wong, C., Odom, S. L., Hume, K. A., Cox, A. W., Fettig, A., Kucharczyk, S., . . . Schultz, T. R. (2015). Evidence-based practices for children, youth, and young adults with autism spectrum disorder: A comprehensive review. Journal of Autism and Developmental Disorders, 45, 1951–1966.
*Yakubova, G., & Taber-Doughty, T. (2013). Brief report: Learning via the electronic interactive whiteboard for two students with autism and a student with moderate intellectual disability. Journal of Autism and Developmental Disorders, 43, 1465–1472.
