
Review Journal of Autism and Developmental Disorders

https://doi.org/10.1007/s40489-018-0152-6

REVIEW PAPER

Training Behavior Change Agents and Parents to Implement Discrete Trial Teaching: a Literature Review
Justin B. Leaf 1,2 & Wafa A. Aljohani 2 & Christine M. Milne 1,2 & Julia L. Ferguson 1 & Joseph H. Cihon 1,2 &
Misty L. Oppenheim-Leaf 3 & John McEachin 1 & Ronald Leaf 1

Received: 10 April 2018 / Accepted: 3 October 2018


© Springer Science+Business Media, LLC, part of Springer Nature 2018

Abstract
Discrete trial teaching (DTT) is a commonly implemented and evaluated teaching procedure for individuals diagnosed with
autism spectrum disorder (ASD). As such, DTT is often a procedure that behavior analytic practitioners are required to learn how
to implement. Additionally, parents are often encouraged to learn how to implement DTT to supplement intervention for their
child(ren) diagnosed with ASD. This review of the literature included 51 studies (57 experiments) that involved training behavior
change agents and/or parents on the implementation of DTT. Each of the studies was evaluated and quantified along several
dimensions including participant demographics, experimental design, outcome, DTT task analysis, training procedures, training
time, and the mastery conditions for the implementation of DTT. The results of the review indicated that there is a robust literature
on training individuals to implement DTT. However, results also revealed there are several areas that should be addressed by
future studies as well as implications for practitioners and certification standards.

Keywords Behavior analysts · Discrete trial teaching · Parent training · Staff training · Training

* Justin B. Leaf: Jblautpar@aol.com
Wafa A. Aljohani: waljo346@mail.endicott.edu
Christine M. Milne: cmautpar@aol.com
Julia L. Ferguson: jfergusonautpar@aol.com
Joseph H. Cihon: jcihon@autismpartnership.com
Misty L. Oppenheim-Leaf: mlo0411@gmail.com
John McEachin: jmautpar@aol.com
Ronald Leaf: Rlautpar@aol.com

1 Autism Partnership Foundation, 200 Marina Drive, Seal Beach, CA 90740, USA
2 Endicott College, 376 Hale Street, Beverly, MA 01915, USA
3 Behavior Therapy and Learning Center, 200 Marina Drive, Seal Beach, CA 90740, USA

One commonly implemented procedure based upon the principles of applied behavior analysis (ABA) is discrete trial teaching (DTT; Lovaas 1981, 1987). DTT is a systematic method of instruction consisting of three primary components including (1) a discriminative stimulus (e.g., an instruction) presented by the instructor, (2) a response from the learner, and (3) a consequence based on the learner's response (i.e., reinforcement or punishment) provided by the instructor. Additional variables commonly involved within DTT are the instructor providing a prompt to increase the likelihood of the learner responding correctly (MacDuff et al. 2001), modification of the intertrial interval (i.e., the time between trials; Lerman et al. 2016), data collection on learner responding (Gongola and Sweeney 2011), and developing establishing operations (Michael 1988).

DTT has been described in a variety of commentaries (e.g., Ghezzi 2007; Leaf et al. 2016; Smith 2001), book chapters (e.g., Leaf and McEachin 2016; Lerman et al. 2016), curriculum books (e.g., Leaf and McEachin 1999; Lovaas 1981; Maurice 1993), and research articles (e.g., Severtson and Carr 2012).
DTT remains a well-researched and described teaching procedure and a commonly implemented procedure within autism intervention (Ghezzi 2007; Smith 2001). DTT has been demonstrated to be an effective method to teach a variety of skills including, but not limited to, receptive labels (e.g., DiGennaro Reed et al. 2011), expressive labels (Conallen and Reed 2016), play (Weiss et al. 2017), and social behavior (e.g., Shillingsburg et al. 2014).

The findings of the research demonstrating DTT to be a successful procedure may have contributed to the common recommendation for the use of DTT for individuals diagnosed with autism spectrum disorder (ASD) in clinical settings (e.g., Leaf and McEachin 1999). However, behavior change agents (i.e., individuals implementing behavioral intervention) may use DTT in multiple settings including, but not limited to, clinic-based centers, university-based centers, in the individual's home and/or community, and in school settings (Leaf et al. 2018). Additionally, parents have been trained to implement DTT with their children outside of formal intervention sessions to supplement treatment (e.g., Lovaas et al. 1973). This may be more likely to occur when parents live in rural areas where access to services is difficult. Regardless of who is implementing DTT, it is important that it be done with high treatment fidelity so that optimal outcomes can be achieved (Fryling et al. 2012; St Peter Pipkin et al. 2010). As such, it is important for these individuals to be well trained in the implementation of DTT.

To help ensure that behavior change agents and parents implement DTT with a high degree of fidelity, researchers have explored the use of a variety of training procedures. In an early example, Koegel et al. (1977) trained 11 teachers to implement DTT and shaping procedures with individuals diagnosed with autism. In this study, Koegel et al. utilized a manual plus feedback to train teachers to implement DTT. Using a modified multi-response baseline design, the authors demonstrated that the teachers improved on the correct implementation of DTT. In a more recent example, Sarokoff and Sturmey (2004) provided a task analysis of DTT consisting of 10 different components and utilized an adult confederate to train teachers to implement the DTT components correctly. Prior to intervention, the three teachers displayed low levels of correct implementation of DTT. Following behavioral skills training (BST), consisting of instructions, modeling, role-play, and feedback, all three teachers implemented the 10 steps of the DTT task analysis correctly across multiple sessions with an adult confederate. There are several other training procedures which have been evaluated to train behavior change agents how to implement DTT including, but not limited to, video modeling (e.g., Catania et al. 2009), performance feedback (e.g., Gilligan et al. 2007), computer instruction (e.g., Higbee et al. 2016), and self-instructional manuals (e.g., Arnal et al. 2007).

Researchers have also evaluated a wide variety of training procedures on training parents how to implement DTT with their own children diagnosed with ASD. One of the first studies to evaluate parents' implementation of DTT was conducted by Koegel et al. (1978). In the first experiment within the study, training occurred with four parents of children diagnosed with autism. Training consisted of three 30 min lectures, parent trainer demonstrations (lasting between 10 and 15 min per demonstration), and two 37 min videos which modeled correct and incorrect implementation of DTT. Within this experiment, Koegel et al. created a task analysis of DTT consisting of 14 steps including how to present instructions, provide prompts, use shaping strategies, provide consequences, and implement each trial with a discrete beginning and ending. Total training time took between 2 h 54 min and 3 h 54 min. Using a multiple-response baseline design, Koegel and colleagues demonstrated that all parents implemented DTT correctly with their children. In the second experiment, Koegel et al. evaluated the effects of video illustrations in isolation on the implementation of DTT. Using a multiple-response baseline design, Koegel and colleagues demonstrated that the parents implemented the 14-step DTT procedure successfully with their children diagnosed with ASD. Since this study, there have been numerous investigations that have demonstrated other behavior analytic interventions such as video modeling (e.g., Young et al. 2012), instructional manuals (e.g., Summers and Hall 2008), and BST (e.g., Eid et al. 2017) to be effective procedures to train parents on the implementation of DTT with their children.

Given the breadth of the literature base on training behavior change agents and parents to implement DTT, a review of the current literature could provide a synthesis of the pertinent information. Furthermore, there has been a growth in the number of certified professionals (e.g., Registered Behavior Technician™, Board Certified Assistant Behavior Analyst®, Board Certified Behavior Analyst®, and Board Certified Autism Technicians) who provide behavioral intervention for individuals diagnosed with ASD, who may implement DTT with individuals diagnosed with ASD, and who have to demonstrate at least minimal competency in DTT as part of receiving a certification (Behavior Analyst Certification Board 2017; Behavior Intervention Certification Council 2015).

The purpose of the present paper was to provide this review by analyzing and synthesizing the research that has evaluated the methods to train behavior change agents and parents on the implementation of DTT. Specifically, we sought to identify (1) participant demographics (e.g., educational level), (2) location of training, (3) research designs used (e.g., multiple baseline designs), (4) training variables (e.g., type of training procedure), and (5) outcome measures (e.g., percentage of non-overlapping data). After identifying this information, we synthesized the information by (1) comparing differences in the research findings across participants (e.g., differences in results between parents and therapists) and (2) comparing differences in the research findings across training procedures (e.g., differences in results between studies that used BST and those that used video modeling).
From this synthesis, we evaluated the current status of training behavior change agents and parents to implement DTT, provided recommendations on training, identified limitations in the research, and provided recommendations for future research.

Methods

Search Procedure

First, the primary researcher conducted a search of peer-reviewed journals from January 1950 to November 2017 using the PsychINFO database. The primary researcher used the following keywords: "discrete trial," "discrete trials," "discrete trial instruction," "discrete trial training," "DTT," "training," "developmental disabilities," and "autism" in all possible combinations. Next, the primary researcher conducted a search of peer-reviewed journals from January 1950 to November 2017 in the Educational Resources Information Center (ERIC) database using the same keywords and combinations. All articles identified through the initial search of the PsychINFO and ERIC databases were reviewed to determine whether they met the inclusion or exclusion criteria. There were a total of 198 articles located based upon these searches. Finally, the primary researcher manually checked the reference sections of the 198 articles for any additional studies that might meet the inclusion criterion (stated below). No additional articles were identified using this method. Thus, a total of 198 articles were evaluated for inclusion.

Inclusion Criteria

There were four inclusion criteria. First, articles had to be published in a peer-reviewed journal. Dissertations and book chapters were excluded. Second, articles had to be experimental. Reviews, commentaries, or programmatic descriptions were excluded. Third, articles had to specifically state that the experimenters trained behavior change agents (e.g., students, teachers, therapists, staff, paraprofessionals, swim instructors, or adults) or parents on how to implement DTT. Finally, articles had to provide objective and empirical data on the individual implementing DTT. There were no inclusion/exclusion criteria for with whom the behavior change agent or parent was implementing DTT. That is, if the study met the above four criteria, it was included regardless of other variables (e.g., age, multiple diagnoses). Based upon these criteria, 51 out of the 198 initial articles were included for further review and analysis.

Data Extraction and Synthesis

Each study that met the criteria for inclusion was evaluated and quantified along several dimensions.

Participant Demographics Four participant variables were evaluated within each study. First, we evaluated the participants' roles and/or profession (e.g., teacher, parent, student). Second, we calculated the total number of behavior change agents and/or parents who received training on the implementation of DTT within each study. Third, we evaluated the number of participants who had a previous history of training with DTT prior to their inclusion within the current study. Fourth, we evaluated the education level of the participants in each study.

Setting We evaluated the environment in which training occurred. We categorized training environments as an agency (e.g., an organization that provides services outside of a university and the home), home, university, school (which includes classrooms), and, in one instance, a swim school.

Experimental Design To provide an assessment of the methodological rigor of each study, we recorded the experimental design of each study.

Training Variables We evaluated five variables that were related to training. First, we measured the number of behavioral steps or components the authors used to define DTT within each study. Second, we evaluated the types of teaching procedures (e.g., video modeling, BST, instructional manuals) that were used to train participants on the implementation of DTT. Third, we recorded the training time per session as directly stated by the researchers. Fourth, we evaluated the total training time per study. There were two ways that we could calculate this measure. First, the authors of the articles could provide the information on total training time. The authors could present this as a precise measure per participant, as the total amount of training time across all participants, as an average across participants, or as a range of training time across participants. The other way that total time could be determined, if the authors did not provide information on total training time but did provide information on the training time per session, was to calculate the approximate total training time. To obtain this measure, we multiplied the number of sessions by the stated amount of training time per session for each participant. In some articles reviewed, it was impossible to determine the total amount of training time; this was scored as could not determine (CND). Finally, we recorded with whom the participants were implementing DTT. This was divided into two categories: implementation with a confederate or with an individual with a developmental disability (DD). Confederate was defined as an adult role-playing as an individual with a DD or ASD.
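
When total training time had to be approximated in this way, the calculation is simple arithmetic; the sketch below illustrates it with hypothetical numbers (the function name and values are ours, not drawn from any of the reviewed studies).

```python
def approximate_total_training_time(minutes_per_session, number_of_sessions):
    """Approximate a participant's total training time (in minutes) when only
    the per-session training duration and the session count are reported."""
    return minutes_per_session * number_of_sessions

# Hypothetical participant: 20-min training sessions across 6 sessions
print(approximate_total_training_time(20, 6))  # 120 min of total training
```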

Outcome Measures We evaluated five different variables as they relate to outcomes within each study. First, we evaluated the percentage of non-overlapping data (PND; Scruggs and Mastropieri 2001); scores from the participants' performance in the baseline condition and the intervention condition were recorded. To obtain a PND score, the authors calculated the total number of data points during intervention that did not overlap with data during baseline and divided by the total number of sessions in the intervention condition. There were instances in which a PND score could not be determined, which included group designs, data not presented per session, or unreadable graphs.
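
The PND calculation just described is straightforward to compute; the sketch below is a minimal illustration, assuming higher scores reflect more accurate DTT implementation (the function name and data are hypothetical and are not taken from any reviewed study).

```python
def percent_nonoverlapping_data(baseline, intervention):
    """Percentage of intervention data points that exceed the highest baseline
    data point (PND; Scruggs & Mastropieri 2001), assuming higher scores
    indicate more accurate implementation of DTT."""
    if not baseline or not intervention:
        raise ValueError("Both conditions need at least one data point.")
    highest_baseline = max(baseline)
    nonoverlapping = sum(1 for score in intervention if score > highest_baseline)
    return 100.0 * nonoverlapping / len(intervention)

# Hypothetical percent-correct scores for one participant
baseline = [20, 35, 30]
intervention = [55, 80, 90, 95, 100]
print(percent_nonoverlapping_data(baseline, intervention))  # 100.0
```
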
Second, we evaluated the length of the probe session to determine participants' mastery on the implementation of DTT. This could have been reported as the number of trials the participant had to display DTT correctly or an amount of time the participant was evaluated. Third, we measured the inclusion or exclusion of maintenance data and when maintenance data was collected. Fourth, we measured if an assessment of generalization occurred, which could have been prior to intervention and/or following intervention. Finally, we evaluated how generalization was measured (e.g., in another environment, with another individual).

Interobserver Reliability

The first level of reliability was on the screening of articles (i.e., two independent observers looking through electronic databases to determine the number of studies that might fit the criterion to determine eligibility). To calculate inter-rater reliability at this level, we recorded the number of articles with agreement between the two reviewers (i.e., articles that were designated to be reviewed and articles designated as not appropriate) divided by the total number of articles evaluated. For this level, interrater reliability was 95.4%. Any article for which there was disagreement (i.e., one reviewer saying that it was appropriate and another saying it was not appropriate) was reviewed by a third rater to determine whether or not it should be included.

The second measure of reliability was in determining inclusion eligibility of the articles (e.g., two independent reviewers evaluating the 198 articles to determine if each article met the inclusion criterion). Each reviewer evaluated if each of the articles met all four of the aforementioned inclusion criteria. If disagreements occurred, the lead reviewer's score determined inclusion or exclusion of the article. IOA for this level was 95.7% (i.e., the reviewers only disagreed on three articles).

Results

Number of Studies

There were a total of 51 studies published in peer-reviewed journals from 1950 to 2017 which evaluated methods to train behavior change agents or parents how to implement DTT (contact the corresponding author for a full list of the studies included). In five studies, there were multiple experiments (i.e., Arnal et al. 2007; Higbee et al. 2016; Koegel et al. 1978; Randell et al. 2007; Young et al. 2012); we evaluated each experiment separately. In total, 57 experiments were evaluated.

Participant Demographics

Table 1 provides information about participant demographics across the 57 experiments. Across the 57 experiments, there were a total of 510 participants who were trained to implement DTT. Crockett et al. (2007) and Lerman et al. (2015) had the fewest number of participants (i.e., 2), while the third experiment conducted by Randell et al. (2007) had the most participants (i.e., 75). Only 8.8% (n = 45) of participants across all experiments had a previous history with DTT prior to the study in which s/he was a participant. The most common participants were college students (i.e., undergraduate or graduate students) who participated in 19 experiments. Other common participants were therapists (13 experiments), parents (10 experiments), paraprofessionals (5 experiments), teachers (4 experiments), multiple identifications (3 experiments), individuals diagnosed with ASD (2 experiments), and swim instructors (1 experiment). The most common educational level was some college, which was reported in 28 experiments, followed by a bachelor's degree (20 experiments), education level not reported (13 experiments), high school degree (11 experiments), masters degree (3 experiments), some graduate training (1 experiment), and doctoral degree (1 experiment).

Training Locations

The most common setting in which training occurred was a university (21 experiments; see Table 1) followed by school (15 experiments), home (11 experiments), agency (9 experiments), swim school (1 experiment), and not reported (1 experiment).

Research Designs

Seventy-nine percent of the experiments utilized a single subject design while 21% of the experiments used a group design or a statistical analysis of the data (see Table 1). The majority of the experiments (67%) utilized a multiple baseline design, while multiple probe/multiple response was used in 12% of the experiments. Overall, single-subject methodology was the most common method of evaluating the training of behavior change agents or parents on how to implement DTT.

Table 1 Demographic information, setting, and experimental design

First author and year Participant type Number Participants with previous history Participant education level Setting Experimental design

Arnal (2007; Exp 1) Student 4 NS SC University AB


Arnal (2007; Exp 2) Student 3 NS SC University Multiple baseline
Babel (2008) Student 7 0 SC Not stated Correlational
Belfiore (2008) Therapist 4 4 BA School Multiple baseline
Bolton (2008) Para 3 1 Not stated Agency Multiple Probe
Catania (2009) Therapist 3 2 BA School Multiple baseline
Crockett (2007) Parent 2 0 SC University Multiple baseline
de Oliveira (2016) Student 32 0 SC and none University Group
Dib (2007) Para 3 3 Not stated School Multiple baseline
Downs (2008) Student 6 0 SC University Multiple baseline
Downs (2012) Student 8 0 SC University Group
Eid (2017) Parent 3 0 HS and BA Agency Multiple probe
Eldevik (2013) Therapist 12 0 MA, BA, and HS Agency Group
Fazzio (2009) Student 5 0 SC University Multiple baseline
Fetherston (2014) Therapist 4 NS BA School Multiple baseline
Garland (2012) Student 4 0 Some graduate University Multiple baseline
Gilligan (2007) Para 3 0 SC, BA, and BA School Multiple baseline
Hay-Hansson (2013) Teacher 16 2 SC, HS, and none School Group design
Higbee (2016; Exp 1) Student 4 0 SC School Multiple baseline
Higbee (2016; Exp 2) Student 4 0 BA School Multiple baseline
Jeanson (2010) Therapist and parent 12 6 Not stated University Pre-post
Jull (2016) Swim instructor 6 0 BA and SC Swim school Multiple baseline
Koegel (1977) Teacher 11 0 Not stated School Multi response baseline
Koegel (1978; Exp 1) Parent 4 0 HS School Multi response baseline
Koegel (1978; Exp 2) Therapist, parent, and student 3 0 BA and SC School Multi response baseline
Lafasakis (2007) Parent 3 0 BA and HS School Multiple baseline
LeBlanc (2005) Therapist 3 0 NS Agency Multiple baseline
Lerman (2013) Individuals with ASD 4 1 BA, SC, and HS University Multiple baseline
Lerman (2015) Individuals with ASD 2 0 Not stated University Multiple baseline
Mason (2017) Para 16 16 MA, BA, and HS School Multiple baseline
May (2011) Therapist 3 3 Not stated Home Multiple baseline
McKinney (2014) Student 3 0 SC University Multiple baseline
McKenney (2015) Teacher, SLP, and para 8 0 Not stated University Multiple baseline
Neef (1995) Parent 26 0 Not stated Home Multiple baseline
Nosik (2011) Student 4 0 SC Agency Multiple baseline
Nosik (2013) Therapist 6 0 HS Agency Multiple baseline
Parnell (2017) Therapist 3 3 BA Home Multiple baseline with changing criterion
Pollard (2014) Student 4 0 SC University Multiple baseline and AB
Randell (2007; Exp 1) Student 50 0 BA and SC University Group
Randell (2007; Exp 2) Student 50 0 BA and SC University Group
Randell (2007; Exp 3) Student 75 0 BA and SC University Group
Ryan (2005) Teacher 3 0 Not stated Home Statistical analysis
Salem (2009) Student 4 0 SC University Multiple baseline
Sarokoff (2004) Teacher 3 0 MA Home Multiple baseline
Sarokoff (2008) Therapist 3 0 Not stated School Multiple baseline
Severtson (2012) Therapist 6 0 SC and HS Agency Multiple baseline

Table 1 (continued)

First author and year Participant type Number Participants with previous history Participant education level Setting Experimental design

Subramaniam (2017) Parent 4 0 Ph.D., BA, and SC Agency and home Multiple baseline
Summers (2008) Parent 4 0 SC Home Pre-post
Thiessen (2009) Student 4 0 SC University Multiple baseline
Thomas (2013) Para 3 0 BA School Multiple baseline
Thomson (2012) Therapist 8 0 SC University Multiple baseline
Vladescu (2012) Therapist 3 0 Not stated Agency Multiple baseline
Ward-Horner (2008) Parent 3 0 Not stated Home Multiple baseline
Wightman (2012) Therapist 13 1 SC University Multiple baseline
Young (2012; Exp 1) Parent 5 0 BA and HS Home ABC
Young (2012; Exp 2) Parent 5 0 BA and HS Home AB and multiple baseline
Zaragoza (2015) Student 5 0 SC Home Pre-post

HS high school degree, SC some college, BA bachelor degree, MA masters degree, Ph.D doctoral degree, NS not stated

Training Outcomes

Table 2 provides information about training outcomes across the 57 experiments.

Discrete Trial Teaching Task Analysis Each experiment task analyzed DTT into components or steps. Across the 57 experiments reviewed, the most steps in a task analysis of DTT were 45 steps (i.e., Severtson and Carr 2012) and the least number of steps was 2 (i.e., Fetherston and Sturmey 2014). The average number of steps across the 57 experiments was 16 with a median of 14 steps.

Training Procedures Across the 57 experiments, only one study did not specify the training procedures (i.e., Jeanson et al. 2010). The procedures used to train participants on how to implement DTT procedures included 9 broad categories: BST, BST with other procedures (e.g., in situ training), computer training, lecture and role-play, manual and feedback, performance feedback/bug in ear/coaching, self-instructional manuals, video modeling, and not stated. The most commonly implemented procedure used was BST with other procedures (n = 11 experiments) followed by BST (n = 9 experiments) and self-instructional manuals (n = 9 experiments), computer training (n = 7 experiments) and manual and feedback (n = 7 experiments), performance feedback/bug in ear/coaching (n = 6 experiments), video modeling (n = 4 experiments), lecture and role-play (n = 3 experiments), and not stated (n = 1 experiment).

Training Time An important measure regarding training is efficiency. Reporting the total amount of training time a participant receives is critical. Total training time could be reported as a precise number, an average, or a range. Unfortunately, in the majority of experiments (i.e., 51%), the authors did not report the total amount of training time. When training time was reported, it ranged anywhere from 30 min of total training (i.e., Fetherston and Sturmey 2014) up to 1500 min of total training (i.e., Koegel et al. 1977) across participants.

When the total training time was not directly reported but the training session time was reported, it was possible to determine the approximate total training time by calculating the duration of training per session and multiplying it by the total number of sessions. Of those studies where a total amount of training was not reported, we were able to calculate an approximate average total in 43% of the studies. The approximate total training time ranged from 15 min of total training (i.e., Belfiore et al. 2008) up to 4200 min of total training (i.e., Ryan and Hemmes 2005) across participants. The average total training and/or total training time was able to be calculated or was directly stated by the author in 70% (n = 40) of all studies. Across the 40 experiments in which training time was either reported or calculated, the average amount of training time per participant was 353 min (5 h 53 min). Additionally, across the 57 experiments, the training time per session was reported in 67% of the studies, with a range from 7 min (i.e., Catania et al. 2009) up to 480 min per session (i.e., Downs et al. 2008; Downs and Downs 2013).

DTT Probes We measured if the participant implemented DTT with a confederate or an individual diagnosed with ASD and/or a DD. In 28 experiments (i.e., 49%), the participants implemented DTT with a confederate; in 28 experiments (i.e., 49%), the participants implemented DTT with an individual diagnosed with ASD and/or a DD; and in one study (i.e., 2%), the subjects implemented DTT with a virtual student.

Table 2 Independent variable information

First author and year DTT steps Training procedure Training time per session Total training time DTT implemented with

Arnal (2007; Exp 1) 19 SI-manual NS Range, 110–177 min Confederate


Arnal (2007; Exp 2) 19 SI-manual and video NS Range, 20–240 min Confederate
Babel (2008) 21 SI-practice NS CND Confederate
Belfiore (2008) 5 Didactic, video self-monitoring 15 min Range, 15–45 min* Individual
Bolton (2008) 7 BST 180 min 180 min Confederate
Catania (2009) 10 Video modeling 7 min 36, 43, and 43 min* Confederate
Crockett (2007) 7 BST with video 120 min 540 and 780 min Individual
de Oliveira (2016) 21 Manual and peer review NS Avg, 447 and 350 min Confederate
Dib (2007) 14 BST NS CND Individual
Downs (2008) 30 BST and feedback 480 min 480 min Individual
Downs (2012) 35 BST 480 min 480 min Individual
Eid (2017) 8 BST NS CND Individual
Eldevik (2013) 22 Computer simulation 1–2 days Range, 300–540 min Individual
Fazzio (2009) 19 SI-manual, feedback, and demonstration 150 min 180 min* Confederate
Fetherston (2014) 2 BST 20 min 40, 30, and 30 min Individual
Garland (2012) 13 Coach 15 min 90 min* Virtual Student
Gilligan (2007) 10 Performance feedback NS CND Individual
Hay-Hansson (2013) 22 BST and video coach 15 min 45 min Individual
Higbee (2016; Exp 1) 7 Computer 15 trials 316, 200, 200, and 369 min Confederate
Higbee (2016; Exp 2) 7 Computer 15 trials 333, 689, 287, and 620 min Individual
Jeanson (2010) 21 NS NS CND Confederate
Jull (2016) 7 BST and feedback 180 min 690 min Individual
Koegel (1977) 13 Manual, video, and feedback 150–300 min 1500 min Individual
Koegel (1978; Exp 1) 14 Demonstration and lecture 10–37 min CND Individual
Koegel (1978; Exp 2) 14 Video 30 min CND Individual
Lafasakis (2007) 10 BST 10 min CND Individual
LeBlanc (2005) 10 Feedback 8–10 min 64–70, 30–40, 32–40 min* Individual
Lerman (2013) 11 BST and In situ 60 min 840, 600, and 1080 min* Confederate
Lerman (2015) 15 BST and video 120 min 270 and 510 min Individual
Mason (2017) 26 Manual and coach 90, 60, and 15–45 min 255–275, 285–555, 222–375 min Individual
May (2011) 7 BST and stimulus training 120 min CND Confederate
McKinney (2014) 20 Bug in ear NS CND Confederate
McKenney (2015) 6 Workshop, feedback, role-play NS CND Individual
Neef (1995) 6 BST and pyramidal 15–30 min CND Individual
Nosik (2011) 8 Video, feedback, and competency based 19, 10, and 10 min 197, 128, 187, and 187 min* Confederate
Nosik (2013) 20 Computer and model or BST 34–43, 68–92 min 374–462, 374–462, 374–462, 748–1012, 1088–1472, and 1156–1574 min* Confederate
Parnell (2017) 11 Job aide, performance feedback 10 min CND Individual
Pollard (2014) 15 Interactive computer and performance feedback NS Range, 109–122 min Confederate
Randell (2007; Exp 1) 30 Computer 25 min 50 min Confederate
Randell (2007; Exp 2) 30 Computer 25 min 40 min Confederate
Randell (2007; Exp 3) 36 Computer 25 min 50 min Confederate
Ryan (2005) 12 BST and peer feedback 60–120 min 1500–4200 min Individual
Salem (2009) 21 SI- manual NS Avg, 287 min Confederate
Sarokoff (2004) 10 BST 10 min CND Individual
Sarokoff (2008) 10 BST NS CND Individual

Table 2 (continued)

First author and year DTT steps Training procedure Training time per session Total training time DTT implemented with

Severtson (2012) 45 SI-manual, video, and performance feedback 140–150, 41, and 10–15 min 140–150, 140–150, 140–150, 201–211, 201–211, and 211–221 min* Confederate
Subramaniam (2017) 22 BST, manual, video, and performance feedback 90, 60, 20 min Range, 229–349 min* Confederate
Summers (2008) 14 Manual NS CND Individual
Thiessen (2009) 21 Manual NS Range, 160–341 min Individual
Thomas (2013) 9 Peer observation and performance feedback 5–10 min 30–60, 40–80, 20–40 min* Individual
Thomson (2012) 21 SI-manual and video NS Avg, 260 min Confederate
Vladescu (2012) Not stated Video modeling 9–16 min 57, 57, and 37 min* Confederate
Ward-Horner (2008) 10 BST NS CND Confederate
Wightman (2012) 20 SI and video modeling NS Range, 195–315 min Confederate
Young (2012; Exp 1) 21 Manual and video modeling 17 min Avg, 285 min Confederate
Young (2012; Exp 2) 21 Manual, video, and role-play 17 min Avg, 280 min Confederate
Zaragoza (2015) 21 Computer manual and SI manual NS Range, 563–1182 min Confederate

Avg average, NS not stated, SI self-instruction, CND could not determine


*Total training time was calculated by multiplying the number of sessions by the amount of training time per session across each participant

Outcome Measures

Table 3 provides information about outcome measures across the 57 experiments.

PND The PND of each single subject study was calculated in order to determine the effectiveness of the training procedures on participant implementation of DTT. PND was calculated with 73% (n = 42) of experiments; PND was not appropriate for 23% (n = 13) of experiments and could not be determined with 4% (n = 2) of experiments. Of the 42 experiments in which PND was calculated, 40 experiments demonstrated the training procedures to be highly effective (i.e., PND was 90% or above) and 2 to be effective (i.e., PND between 70 and 90%). The average PND score across all 42 experiments was 97.7%, indicating that overall the procedures used to train the implementation of DTT were highly effective (Scruggs and Mastropieri 2001). The inclusion or exclusion of maintenance was evaluated in each experiment as well as the time between intervention and maintenance.

Duration of Probes The probes used to measure the participants' implementation of DTT and whether the participant implemented DTT correctly were also evaluated. First, we recorded the reported length of sessions in which DTT was implemented, which could either be equated based upon time (e.g., 15 min) or number of trials (e.g., 10 trials). In 54% of the experiments, the researchers reported the length of the implementation sessions in terms of number of trials. The average number of trials per implementation session was 15 (ranging between 2 and 100 trials per session). In 30% of the experiments, the researchers reported the length of the implementation sessions in terms of session duration. The average amount of time per implementation session was 14 min (ranging between 2 and 60 min per session). In 16% of the experiments, the length of the implementation sessions was not reported.

Maintenance Across the 57 experiments, maintenance measures were only evaluated in 28% of the experiments; in those studies, maintenance was evaluated between 3 days after intervention (i.e., Severtson and Carr 2012) and up to 26 weeks after intervention (i.e., Subramaniam et al. 2017).

Generalization Generalization was not assessed in over half of the experiments (i.e., 57%). In the remaining experiments, 23% assessed generalization prior to and following intervention and 20% assessed generalization only following intervention. When generalization measures were included, assessment of generalization occurred with a different participant in 37.5% of experiments, a different skill in 37.5% of experiments, and a different participant and skill in 25% of experiments.

Outcomes Across Different Participants

Table 4 provides information on outcomes across different participant types. We divided participants into four broad categories: parents, students, professionals (e.g., therapists, paraprofessionals, teachers), and other (i.e., individuals with autism and swim teachers).

Table 3 Dependent variable information

First author and year PND Duration of probes Maintenance Generalization Generalization assessed with

Arnal (2007; Exp 1) 83.3% 12 trials None None N/A


Arnal (2007; Exp 2) 100% 12 trials None None N/A
Babel (2008) N/A 36 trials None None N/A
Belfiore (2008) 94.4% 12–15 trials 4 weeks None N/A
Bolton (2008) 100% 2–10 trials 19 weeks Post only Participant
Catania (2009) 100% 10 trials 1 week Pre and post Participant and skill
Crockett (2007) 51.2% 100 trials None None N/A
de Oliveira (2016) N/A 12 trials None None N/A
Dib (2007) 100% NS None None N/A
Downs (2008) 100% 10–15 min 10 weeks Post Participant
Downs (2012) N/A 60 min None None N/A
Eid (2017) 100% 10 trials 2 weeks Pre and post Skill
Eldevik (2013) N/A 15 min None None N/A
Fazzio (2009) 97.5% 15 min None Post only Participant and skill
Fetherston (2014) 100% 10 trials None Pre and post Skill
Garland (2012) 100% 15 min None None N/A
Gilligan (2007) 100% 11 trials 3 months None N/A
Hay-Hansson (2013) N/A 2.5 min None None N/A
Higbee (2016; Exp 1) 100% 15 trials None Pre and post Participant and skill
Higbee (2016; Exp 2) 100% 15 trials 1 month Pre and post Skill
Jeanson (2010) N/A 12 trials None None N/A
Jull (2016) CND NS None None N/A
Koegel (1977) CND NS None None N/A
Koegel (1978; Exp 1) 100% NS None Pre and post Skill
Koegel (1978; Exp 2) 100% NS None None N/A
Lafasakis (2007) 100% 10 trials None Pre and post Skill
Lenlanc (2005) 100% 10–15 min 11 weeks None N/A
Lerman (2013) 100% 5 trials None Pre and post Skill
Lerman (2015) 100% 6 trials None None N/A
Mason (2017) 100% 5 trials None None N/A
May (2011) CND 3 min None None N/A
McKinney (2014) 81.25% NS NS None N/A
McKenney (2015) 27% NS NS None N/A
Neef (1995) N/A 5 trials 6 weeks Pre and post Skill
Nosik (2011) 100% 10 trials 2 weeks Post only Skill
Nosik (2013) 100% NS 6 weeks None N/A
Parnell (2017) 97.5% 10–15 min 2 weeks None N/A
Pollard (2014) 96.2% 20 trials None Pre and post Participant and skill
Randell (2007; Exp 1) N/A 20 min None None N/A
Randell (2007; Exp 2) N/A 15 min None None N/A
Randell (2007; Exp 3) N/A 10 min None None N/A
Ryan (2005) N/A 2–15 min None None N/A
Salem (2009) 91.7% 12 trials None Post only Participant
Sarokoff (2004) 100% 5 min None None N/A
Sarokoff (2008) 100% 10 trials NS Pre and post Participant and skill
Severtson (2012) 100% 10 min 3–5 days Post only Skill
Subramaniam (2017) 100% 12 trials 26 weeks Post only Participant
Summers (2008) N/A 5–10 min None None N/A
Thiessen (2009) 100% 12 trials None Post only Participant

Table 3 (continued)

First author and year PND Duration of probes Maintenance Generalization Generalization assessed with

Thomas (2013) 100% 5–10 min None None N/A


Thomson (2012) 97.2% 36 trials None None N/A
Vladescu (2012) 100% 12 trials None Pre and post Participant and skill
Ward-Horner (2008) 97.9% 10 trials None Pre and post Participant
Wightman (2012) 92.3% 36 trials None Post (1 participant) Participant
Young (2012; Exp 1) 94.7% 12 trials 1 month Post only Participant
Young (2012; Exp 2) 93.9% 12 trials 1 month Post only Participant
Zaragoza (2015) N/A NS None None N/A

Avg average, NS not stated, SI self-instruction, CND could not determine, N/A not available

Although therapists were evaluated in almost half of the experiments (i.e., 43.8%), students represented the largest number of participants (n = 277). Overall, PND scores showed that training was highly effective for student participants (i.e., 97.2%) and for participants in the other category (i.e., 100%). PND scores were 66.7% for parent participants and 73.6% for therapist participants. However, it should be noted that only one study in each of these categories included poor PND scores; when those studies are excluded, PND scores are 97.3% and 98.4% for parent and therapist participants, respectively.

Due to the varied methods of reporting training time, it was not possible to obtain a precise number for the total duration of training time; thus, results are reported as a range. Across the four categories of participants, there was a wide and varying range of the total duration of training time required. For example, in some experiments, therapist participants required the shortest duration of training (i.e., 15 min), but in others required the longest duration of training time (i.e., 1574 min). Student participants also included a wide range of total training time (i.e., 20 to 1182 min), while parent participants included the smallest range of total training time (i.e., 229 to 780 min).

Outcomes Across Different Training Procedures

Table 5 provides information about the outcomes across the different training procedures used within the studies that met our inclusion criteria. BST with other procedures represented the most commonly implemented training procedure across the studies, but computer training was used with the largest number of participants. All training procedures had high PND scores with the exception of BST with other procedures (i.e., 71%) and lecture and role-play (i.e., 34%). However, when outlier studies (i.e., studies with PND scores that varied greatly from others within that category) in each category are removed, the PND scores were 100% and 96.4% for BST with other procedures and lecture and role-play, respectively.

Discussion

This review included an evaluation of 51 studies (57 experiments) in which researchers trained behavior change agents and parents to implement DTT. Our evaluation permitted the identification of (1) participant demographics (e.g., career), (2) training location, (3) research designs used (e.g., multiple baseline designs), (4) training variables (e.g., type of training procedure), and (5) outcome measures (e.g., percentage of non-overlapping data). The results revealed that over 500 individuals were trained on the implementation of DTT, including a wide variety of participants: parents, undergraduate students, paraprofessionals, teachers, therapists, and swim teachers. Across the studies, training took place in a wide variety of settings including universities, homes, agencies, and schools. Also, the majority of studies used single subject methodology, specifically the multiple baseline design.

Table 4 Results across different types of participants

Participant type Experiments Number of participants Experiments in which PND was calculable PND score Experiments in which total time was interpretable Range of total training time (min)

Parents 12 66 8 66.7% (range, 51–100%) 4 229–780


Students 20 277 13 97.2% (range, 81–100%) 16 20–1182
Therapists 25 147 19 73.6% (range, 27–100%) 16 15–1574
Other 3 12 2 100% 3 270–1080

Table 5 Results across different training procedures

Procedure Experiments Number of participants Experiments in which PND was calculable PND score Experiments in which total time was interpretable Range of total training time (min)

BST 9 33 8 99.4% (range, 98–100%) 3 30–480


BST with other procedures 11 71 6 71% (range, 51–100%) 9 45–4200
Computer 7 199 3 98.7% (range, 96–100%) 7 40–689
Lecture and role-play 3 16 3 34% (range, 27–100%) 1 15–45
Manual and feedback 7 77 5 96.5% (range, 94–100%) 5 160–555
Not stated 1 6 0 N/A 0 N/A
Performance feedback, bug in ear, and coaching 6 19 6 95% (range, 81–100%) 3 20–90
Self-instructional manuals 9 55 7 95.7% (range, 83–100%) 8 20–1182
Video modeling 4 13 3 100% 3 36–197

N/A not available

Our results yielded numerous procedures that have been used to train behavior change agents and/or parents, including BST, BST plus other procedures (e.g., in situ training), computer training, lecture and role-play, manual and feedback, performance feedback/bug in ear/coaching, self-instructional manuals, and video modeling. The results also showed that all of the procedures were successful regardless of the participant demographic (e.g., therapist versus parent). Furthermore, across these procedures, the total training time varied across and within studies from as little as 15 min (e.g., Belfiore et al. 2008) to 4200 min (e.g., Ryan and Hemmes 2005). There were no large differences with respect to maintenance or generalization across the different procedures, but it should be noted that both were assessed in under half of the studies included in the review. Additionally, no differences were found when mastery was assessed with an individual diagnosed with ASD or a confederate peer. These results have several clinical and research implications.

Clinical Implications

As apparent from the results of this review, several procedures can be used to effectively train individuals how to implement DTT, and there is no substantial difference across training procedures. This is an important finding as many professionals may have philosophical dispositions that one training procedure is more efficacious than another, but this review found that a variety of procedures are effective. What remains unclear are the advantages and disadvantages of each procedure used to train those to implement DTT. That is, the conditions under which a supervisor should select one approach over another have not been evaluated. Therefore, it may be advantageous for future researchers to compare different training methods on the implementation of DTT to evaluate the conditions under which one may be more efficient than another, the results of which could assist clinicians in determining the best approach to training new staff and parents.

The results of this review also showed that a wide variety of individuals have been trained how to effectively implement DTT. Although behavior analysts (e.g., RBTs®, BCaBAs®, BCBAs®, BCATs) are commonly the ones implementing DTT, other professionals such as teachers and paraprofessionals may also implement DTT. Additionally, the inclusion of parents in the intervention process for individuals diagnosed with ASD is also common as a means to supplement or enhance intervention (e.g., Lovaas 1987). As such, parents should continue to be trained in procedures such as DTT if they are going to be a part of the intervention. Furthermore, it would be beneficial for any person involved in the intervention for an individual diagnosed with ASD or a DD to be proficient in the implementation of DTT. The training of all individuals participating in intervention can help ensure that individuals receive quality teaching across all aspects of intervention.

A third finding was that about half of the experiments in this review (49%; n = 28) assessed mastery of DTT implementation with a confederate. While this is a common approach to assessing mastery (e.g., RBT® certification), clinicians should be cautioned about this method of assessment when the desired outcome is implementation with individuals diagnosed with ASD or a DD. Assessing mastery with a confederate may not provide an ecologically valid way to evaluate mastery, as the stimulus conditions may be much different from those of intervention. Potentially important variables such as the presence of problematic behavior (e.g., aggression, self-injury, stereotypic behavior) may not be present in the assessment with a confederate, which may limit generalization to the actual treatment environment (i.e., with an individual receiving intervention). With 49% of the experiments (n = 28) assessing mastery with an individual diagnosed with ASD or a DD and the potential problems with assessing with a confederate, clinicians should elect to assess the mastery of training with populations with whom individuals are actually going to provide intervention.
It should be noted that no study to date has compared mastery of skills with a confederate to an actual client, and therefore the concerns stated above are only potential concerns. Future researchers should evaluate the use or non-use of confederates as part of training studies.

A final finding of this review was that participants reached the mastery criterion after demonstrating correct implementation of DTT during brief probe periods (i.e., a small number of trials or duration of time). Many of the experiments within this review evaluated the implementation of DTT in relatively brief periods of time (i.e., an average of 14 min) or a brief number of trials (i.e., an average of 15 trials). Once again, this may not provide an ecologically valid assessment of mastery. It is common for intervention to occur for an average of 40 h a week (e.g., Howard et al. 2014; Lovaas 1987), which makes it reasonable to assume that DTT will be implemented for several hours and, potentially, across hundreds of trials. Therefore, clinicians may wish to evaluate an individual's implementation of DTT in a time frame that more closely resembles typical therapy.

Research Implications

Several of the clinical implications just discussed could be enhanced by addressing some of the gaps within the literature which this review shed light upon. This review divulged that maintenance following the training of the implementation of DTT is not commonly assessed within the literature. While the results of the experiments included within this review showed that several procedures are effective at training several demographics of people to implement DTT, it is critical that these skills maintain across time. Future researchers should include assessment of maintenance when evaluating the training of individuals in the implementation of DTT. This research could help determine changes to current procedures or the development of new procedures to train individuals on the implementation of DTT. Along these lines, generalization was not assessed in a majority of the studies, and in several experiments, generalization was only assessed as a post measure. As such, little is known about the generalization effects of training individuals to implement DTT. Future researchers should ensure measures of generalization within training studies, specifically across different individuals and skills.

The majority of studies identified within this review have utilized single-subject methodology. Although single-subject methodology has many strengths (e.g., subjects as their own controls) and is a hallmark of behavior analytic research, single-subject methodology has its limitations. For example, single-subject methodology can impose limits on the generality of the findings of a single study. Additionally, professionals have stated that single-subject methodology is only the first step within the evolution of research on interventions for individuals diagnosed with ASD (Smith 2012). Given the abundance of studies using single-subject designs that have documented the effectiveness of procedures to train individuals to implement DTT, we encourage future researchers to employ group design methodology (Smith 2012).

As previously stated, parents are commonly included as part of a comprehensive intervention for individuals diagnosed with ASD or a DD. However, the results of the review found that parents only accounted for 18% (n = 10) of the experiments in this review. Furthermore, the majority of participants within each experiment were college students. Given the common inclusion of parents within interventions, researchers should include parents as participants in future training studies, DTT and otherwise. These studies should evaluate effective training procedures in addition to the effects of this training on the outcomes of children within comprehensive behavioral interventions.

Finally, there were several methodological differences across the studies which create difficulties with respect to comparing results. We suggest several areas of consistency across research studies moving forward. First, researchers should state the history participants have with the implementation of DTT. Second, researchers should provide more demographic information on participants including gender, race/ethnicity, SES, and duration of time working in the agency. Third, researchers should note training time per session and provide the total training time per participant. Fourth, as previously stated, a variety of measures (e.g., generalization and maintenance) should also be included. Until maintenance and generalization measures are more commonly assessed, it may be beneficial for training to occur with the population with whom the participant will provide intervention, given that it remains unclear if training with a confederate will generalize to this population.

Global Implications

This review provided an analysis of the literature on training individuals to implement DTT. Through this review, we were able to provide several clinical and research recommendations. These recommendations may also be viewed in a larger context with respect to ABA-based intervention and certification. As DTT is commonly used within ABA-based interventions, it is commonly included on task list requirements for certification (Behavior Analyst Certification Board 2013, 2017; Behavior Intervention Certification Council 2015), which outline tasks that are likely to be performed by a professional to ensure minimum competencies. Various certifications involve a variety of requirements in terms of the number of hours of training required prior to becoming eligible for certification. This analysis indicated that, on average, it takes 6 h to train individuals to implement DTT, but for some individuals it can take up to 70 h (i.e., Ryan and Hemmes 2005).
behavior change agent to implement DTT may not be a problem Behavior Analyst Certification Board (2017). BCBA/BCaBA task list (5th
ed.). Retrieved from https://www.bacb.com/wp-content/uploads/
if the certification requires a large amount of training (e.g.,
2017/09/170113-BCBA-BCaBA-task-list-5th-ed-.pdf.
1500 h). However, if an individual is attainting a certification Behavior Intervention Certification Council (2015). BACT task list.
that involves a low number of training hours (e.g., 40 h), it Retrieved from https://www.behavioralcertification.org/Content/
would be unrealistic, based upon the research, to assume that Downloads/TASKLIST-BICC-2015.pdf.
proficiency would be obtained for DTT in addition to all other Belfiore, P. J., Fritts, K. M., & Herman, B. C. (2008). The role of proce-
dural integrity. Focus on Autism and Other Developmental
skills on a task list. This has implications for those planning Disabilities Developmental Disabilities, 23(2), 95–102.
supervision and training for those seeking certifications. The Bolton, J., & Mayer, M. D. (2008). Promoting the generalization of para-
number of hours listed to provide training may not be adequate professional discrete trial teaching skills. Focus on Autism and
to ensure competency in all the skills listed in the task list; Other Developmental Disabilities, 23(2), 103–111.
Catania, C. N., Almeida, D., Liu-Constant, B., & DiGennaro Reed, F. D.
therefore, those providing supervision and training should be
(2009). Video modeling to train staff to implement discrete-trial
prepared to provide more hours of training then required of instruction. Journal of Applied Behavior Analysis, 42(2), 387–392.
certifications. This also has implication for certifying organiza- Conallen, K., & Reed, P. (2016). A teaching procedure to help children
tions. Reviews like this one should be used to inform certifying with autistic spectrum disorder to label emotions. Research in
organizations and subject matter experts to determine and mod- Autism Spectrum Disorders, 23, 63–72.
Crockett, J. L., Fleming, R. K., Doepke, K. J., & Stevens, J. S. (2007).
ify the number of hours required to achieve minimum acceptable Parent training: acquisition and generalization of discrete trials
competence in all the skills on task lists. We hope that this teaching skills with parents of children with autism. Research in
review helps to spark continued research on the training of future Developmental Disabilities, 28(1), 23–36.
behavior analysts and helps to inform those in positions to make DiGennaro Reed, F. D., Reed, D. D., Baez, C. N., & Maguire, H. (2011).
A parametric analysis of errors of commission during discrete-trial
changes to training requirements of certification organizations. training. Journal of Applied Behavior Analysis, 44(3), 611–615.
Conclusion

This review showed that the experimental research on training behavior change agents and parents is robust. The results indicate that there are a variety of modalities that can be used to train behavior change agents and parents to implement DTT successfully. Although the literature is robust, there are still limitations that future researchers could address. Nevertheless, the research on training behavior change agents and parents has direct implications for clinicians who provide training on discrete trial teaching.
Behavior Analysis, 45(2), 449–453.
Compliance with Ethical Standards Ghezzi, P. M. (2007). Discrete trials teaching. Psychology in the Schools,
44(7), 667–679.
Gilligan, K. T., Luiselli, J. K., & Pace, G. M. (2007). Training parapro-
Conflict of Interest The authors declare that they have no conflict of
fessional staff to implement discrete trial instruction: evaluation of a
interest.
practical performance feedback intervention. The Behavior
Therapist, 30(3), 63–66.
Informed Consent As such, no informed consent was needed in this Gongola, L., & Sweeney, J. (2011). Discrete trial teaching: getting started.
study. Intervention in School and Clinic, 47(3), 183–190.
Higbee, T. S., Aporta, A. P., Resende, A., Nogueira, M., Goyos, C., &
Human and Animal Rights This article does not contain any studies Pollard, J. S. (2016). Interactive computer training to teach discrete-
with human or animal participants performed by any of the authors. trial instruction to undergraduates and special educators in Brazil: a
References

Arnal, L., Fazzio, D., Martin, G. L., Yu, C. T., Keilback, L., & Starke, M. (2007). Instructing university students to conduct discrete-trials teaching with confederates simulating children with autism. Developmental Disabilities Bulletin, 35, 131–147.
Behavior Analyst Certification Board (2013). Registered behavior technician™ (RBT®) task list. Retrieved from https://www.bacb.com/wp-content/uploads/2017/09/161019-RBT-task-list-english.pdf.
Behavior Analyst Certification Board (2017). BCBA/BCaBA task list (5th ed.). Retrieved from https://www.bacb.com/wp-content/uploads/2017/09/170113-BCBA-BCaBA-task-list-5th-ed-.pdf.
Behavior Intervention Certification Council (2015). BACT task list. Retrieved from https://www.behavioralcertification.org/Content/Downloads/TASKLIST-BICC-2015.pdf.
Belfiore, P. J., Fritts, K. M., & Herman, B. C. (2008). The role of procedural integrity. Focus on Autism and Other Developmental Disabilities, 23(2), 95–102.
Bolton, J., & Mayer, M. D. (2008). Promoting the generalization of paraprofessional discrete trial teaching skills. Focus on Autism and Other Developmental Disabilities, 23(2), 103–111.
Catania, C. N., Almeida, D., Liu-Constant, B., & DiGennaro Reed, F. D. (2009). Video modeling to train staff to implement discrete-trial instruction. Journal of Applied Behavior Analysis, 42(2), 387–392.
Conallen, K., & Reed, P. (2016). A teaching procedure to help children with autistic spectrum disorder to label emotions. Research in Autism Spectrum Disorders, 23, 63–72.
Crockett, J. L., Fleming, R. K., Doepke, K. J., & Stevens, J. S. (2007). Parent training: acquisition and generalization of discrete trials teaching skills with parents of children with autism. Research in Developmental Disabilities, 28(1), 23–36.
DiGennaro Reed, F. D., Reed, D. D., Baez, C. N., & Maguire, H. (2011). A parametric analysis of errors of commission during discrete-trial training. Journal of Applied Behavior Analysis, 44(3), 611–615.
Downs, A., & Downs, R. C. (2013). Training new instructors to implement discrete trial teaching strategies with children with autism in a community-based intervention program. Focus on Autism and Other Developmental Disabilities, 28(4), 212–221.
Downs, A., Downs, R. C., & Rau, K. (2008). Effects of training and feedback on discrete trial teaching skills and student performance. Research in Developmental Disabilities, 29(3), 235–246.
Eid, A. M., Aljaser, S. M., AlSaud, A. N., Asfahani, S. M., Alhaqbani, O. A., Mohtasib, R. S., et al. (2017). Training parents in Saudi Arabia to implement discrete trial teaching with their children with autism spectrum disorder. Behavior Analysis in Practice, 10(4), 402–406.
Fetherston, A. M., & Sturmey, P. (2014). The effects of behavioral skills training on instructor and learner behavior across responses and skill sets. Research in Developmental Disabilities, 35(2), 541–562.
Fryling, M. J., Wallace, M. D., & Yassine, J. N. (2012). Impact of treatment integrity on intervention effectiveness. Journal of Applied Behavior Analysis, 45(2), 449–453.
Ghezzi, P. M. (2007). Discrete trials teaching. Psychology in the Schools, 44(7), 667–679.
Gilligan, K. T., Luiselli, J. K., & Pace, G. M. (2007). Training paraprofessional staff to implement discrete trial instruction: evaluation of a practical performance feedback intervention. The Behavior Therapist, 30(3), 63–66.
Gongola, L., & Sweeney, J. (2011). Discrete trial teaching: getting started. Intervention in School and Clinic, 47(3), 183–190.
Higbee, T. S., Aporta, A. P., Resende, A., Nogueira, M., Goyos, C., & Pollard, J. S. (2016). Interactive computer training to teach discrete-trial instruction to undergraduates and special educators in Brazil: a replication and extension. Journal of Applied Behavior Analysis, 49(4), 780–793.
Howard, J. S., Stanislaw, H., Green, G., Sparkman, C. R., & Cohen, H. G. (2014). Comparison of behavior analytic and eclectic early interventions for young children with autism after three years. Research in Developmental Disabilities: a Multidisciplinary Journal, 35(12), 3326–3344.
Jeanson, B., Thiessen, C., Thomson, K., Vermeulen, R., Martin, G. L., & Yu, C. T. (2010). Field testing of the discrete-trials teaching evaluation form. Research in Autism Spectrum Disorders, 4(4), 718–723.
Jull, S., & Mirenda, P. (2016). Effects of a staff training program on community instructors’ ability to teach swimming skills to children with autism. Journal of Positive Behavior Interventions, 18(1), 29–40.
Koegel, R. L., Russo, D. C., & Rincover, A. (1977). Assessing and training teachers in the generalized use of behavior modification with autistic children. Journal of Applied Behavior Analysis, 10(2), 197–205.
Koegel, R. L., Glahn, T. J., & Nieminen, G. S. (1978). Generalization of parent-training results. Journal of Applied Behavior Analysis, 11(1), 95–109.
Leaf, R., & McEachin, J. (1999). A work in progress: behavior management strategies and a curriculum for intensive behavioral treatment of autism. New York: DRL Books.
Leaf, R., & McEachin, J. (2016). The Lovaas model: love it or hate it, but first understand it. In R. G. Romanczyk & J. McEachin (Eds.), Comprehensive models of autism spectrum disorder treatment (pp. 7–43). Berlin: Springer.
Leaf, J. B., Cihon, J. H., Leaf, R., McEachin, J., & Taubman, M. (2016a). A progressive approach to discrete trial teaching: some current guidelines. International Electronic Journal of Elementary Education, 9(2), 361–372.
Leaf, J. B., Leaf, R., McEachin, J., Cihon, J. H., & Ferguson, J. L. (2018). Advantages and challenges of a home- and clinic-based model of behavioral intervention for individuals diagnosed with autism spectrum disorder. Journal of Autism and Developmental Disorders, 48(6), 2258–2266.
Lerman, D. C., Hawkins, L., Hillman, C., Shireman, M., & Nissen, M. A. (2015). Adults with autism spectrum disorder as behavior technicians for young children with autism: outcomes of a behavioral skills training program. Journal of Applied Behavior Analysis, 48(2), 233–256.
Lerman, D. C., Valentino, A. L., & Leblanc, L. A. (2016). Discrete trial training. In R. Lang, T. B. Hancock, & N. N. Singh (Eds.), Early intervention for young children with autism spectrum disorder (pp. 47–83). Cham: Springer International Publishing.
Lovaas, O. I. (1981). Teaching developmentally disabled children: the me book. Baltimore: University Park Press.
Lovaas, O. I. (1987). Behavioral treatment and normal educational and intellectual functioning in young autistic children. Journal of Consulting and Clinical Psychology, 55(1), 3–9.
Lovaas, O. I., Koegel, R., Simmons, J. Q., & Long, J. S. (1973). Some generalization and follow-up measures on autistic children in behavior therapy. Journal of Applied Behavior Analysis, 6(1), 131–165.
MacDuff, G. S., Krantz, P. J., & McClannahan, L. E. (2001). Prompts and prompt-fading strategies for people with autism. In C. Maurice, G. Green, & R. M. Foxx (Eds.), Making a difference: behavioral intervention for autism (1st ed., pp. 37–50). Austin: Pro-Ed.
Maurice, C. (1993). Let me hear your voice: a family’s triumph over autism. New York: Knopf.
Michael, J. (1988). Establishing operations and the mand. The Analysis of Verbal Behavior, 6, 3–9.
Randell, T., Hall, M., Bizo, L., & Remington, B. (2007). DTkid: interactive simulation software for training tutors of children with autism. Journal of Autism and Developmental Disorders, 37(4), 637–647.
Ryan, C. S., & Hemmes, N. S. (2005). Post-training discrete-trial teaching performance by instructors of young children with autism in early intensive behavioral intervention. The Behavior Analyst Today, 6(1), 1–12.
Sarokoff, R. A., & Sturmey, P. (2004). The effects of behavioral skills training on staff implementation of discrete-trial teaching. Journal of Applied Behavior Analysis, 37(4), 535–538.
Scruggs, T. E., & Mastropieri, M. A. (2001). How to summarize single-participant research: ideas and applications. Exceptionality, 9(4), 227–244.
Severtson, J. M., & Carr, J. E. (2012). Training novice instructors to implement errorless discrete-trial teaching: a sequential analysis. Behavior Analysis in Practice, 5(2), 13–23.
Shillingsburg, M. A., Bowen, C. N., & Shapiro, S. K. (2014). Increasing social approach and decreasing social avoidance in children with autism spectrum disorder during discrete trial training. Research in Autism Spectrum Disorders, 8(11), 1443–1453.
Smith, T. (2001). Discrete trial training in the treatment of autism. Focus on Autism and Other Developmental Disabilities, 16(2), 86–92.
Smith, T. (2012). Evolution of research on interventions for individuals with autism spectrum disorder: implications for behavior analysts. The Behavior Analyst Today, 35(1), 101–113.
St Peter Pipkin, C., Vollmer, T. R., & Sloman, K. N. (2010). Effects of treatment integrity failures during differential reinforcement of alternative behavior: a translational model. Journal of Applied Behavior Analysis, 43(1), 47–70.
Subramaniam, S., Brunson, L. Y., Cook, J. E., Larson, N. A., Poe, S. G., & Peter, C. C. S. (2017). Maintenance of parent-implemented discrete-trial instruction during videoconferencing. Journal of Behavioral Education, 26(1), 1–26.
Summers, J., & Hall, E. (2008). Impact of an instructional manual on the implementation of ABA teaching procedures by parents of children with Angelman syndrome. Journal on Developmental Disabilities, 14(2), 26–34.
Weiss, M. J., Hilton, J., & Russo, S. (2017). Discrete trial teaching and social skill training: Don’t throw the baby out with the bath water. In J. B. Leaf (Ed.), Handbook of social skills and autism spectrum disorder (pp. 155–169). Berlin: Springer.
Young, K. L., Boris, A. L., Thomson, K. M., Martin, G. L., & Yu, C. T. (2012). Evaluation of a self-instructional package on discrete-trials teaching to parents of children with autism. Research in Autism Spectrum Disorders, 6(4), 1321–1330.