
Empowering Teachers with Low-Intensity Strategies to Support Academic Engagement: Implementation and Effects of Instructional Choice for Elementary Students in Inclusive Settings

Kathleen Lynne Lane, David J. Royer, Mallory L. Messenger, Eric Alan Common, Robin Parks Ennis, Emily D. Swogger

Education and Treatment of Children, Volume 38, Number 4, November 2015, pp. 473-504 (Article)

Published by West Virginia University Press

DOI: https://doi.org/10.1353/etc.2015.0013

For additional information about this article: https://muse.jhu.edu/article/597970

EDUCATION AND TREATMENT OF CHILDREN Vol. 38, No. 4, 2015

Empowering Teachers with Low-Intensity Strategies to Support Academic Engagement: Implementation and Effects of Instructional Choice for Elementary Students in Inclusive Settings

Kathleen Lynne Lane, David J. Royer
University of Kansas
Mallory L. Messenger
Miami University in Oxford
Eric Alan Common
University of Kansas
Robin Parks Ennis
Clemson University
Emily D. Swogger
University of Kansas

Abstract
Instructional choice is a low-intensity strategy that requires little preparation,
is easy to implement, and supports content instruction in the classroom. In
this study we explored the effectiveness of two types of instructional choice—
across-task and within-task choices—implemented classwide during writing
instruction by classroom teachers with limited university support in an in-
clusive first-grade classroom. Student participants were one boy (Neal) and
one girl (Tina) who were identified using academic and behavioral screening
procedures as needing more intensive supports in the classroom. Results es-
tablished a functional relation between choice conditions and increases in aca-
demic engaged time and decreases in disruptive behavior for Tina, but not for
Neal. Teachers functioned as both primary and reliability data collectors us-
ing momentary time sampling and implemented both choice conditions with
high levels of fidelity. Social validity was assessed from the perspectives of all
stakeholders. Limitations and future directions are discussed.

In 2014, Michael Yudin—Assistant Secretary for the Office of Spe-
cial Education and Rehabilitation of the United States Department
of Education—gave a compelling address at the National Positive
Behavior Intervention and Support (PBIS) Leadership Forum in

which he urged educators and educational systems to “pay as much
attention to students’ social and behavioral needs as we do academ-
ics,” noting that all too often students with the most pronounced
needs are missing the most instruction. Fortunately, state and
local education agencies have recognized the importance of address-
ing students’ academic, behavioral, and social needs in an integrated
fashion within tiered systems (Lane, Menzies, Ennis, & Oakes, 2015;
Sailor, 2014). These systems include progressively intensive supports,
with each tier providing increasingly concentrated, evidenced-based
strategies, practices, and programs (Cook & Tankersley, 2013). These
levels of prevention range from Tier 1 (primary) prevention efforts
for all students, through Tier 2 (secondary) prevention efforts for some stu-
dents with common acquisition and/or performance deficits, to Tier
3 (tertiary) supports reserved for students with the most intensive
needs (e.g., Sugai & Horner, 2006). At each step, educators embrace
data-informed decision making to determine which students require
these more focused intervention efforts, with PBIS being at the fore-
front of these evidenced-based practices (Lane & Walker, 2015).
For example, there are a number of PBIS strategies that have
been used effectively within and beyond the context of tiered sys-
tems of support to facilitate high levels of engagement in instruc-
tional experiences, foster positive teacher-student interactions, and
reduce the likelihood of students engaging in behavior that is dis-
ruptive to the learning environment (Jolivette, Wehby, Canale, &
Massey, 2001; Simonsen, Fairbanks, Briesch, Myers, & Sugai, 2008).
Some such strategies include using active supervision, providing
increased rates of behavior specific praise, increasing students’
opportunities to respond during instructional tasks, and providing
instructional choice (Jolivette, Alter, Scott, Josephs, & Swoszowski,
2013). Such low-intensity, teacher-delivered supports are ideally
incorporated as part of Tier 1 practices, but can also be used more
specifically to assist students whose screening data suggest Tier 1
efforts may be insufficient (Lane et al., 2015). Such strategies are
often less labor- and time-intensive for teachers to implement with
fidelity than more intensive student-led interventions (e.g., behavior
contracts, self-monitoring interventions), and often provide an effi-
cient, effective method for supporting multiple students in fully
engaging in the instructional activities at hand.
By making simple shifts in how they provide instruction,
teachers can reduce the likelihood of challenging behaviors (e.g., dis-
ruption) occurring and increase engagement (e.g., Shogren, Faggel-
la-Luby, Bae, & Wehmeyer, 2004). Incorporating instructional choices
into daily lessons is one such strategy that can be used by educators
in a range of contexts to achieve the shared goal of maximizing the
amount of time students spend engaged in high-quality instructional
activities and support students in engaging in self-determined behav-
iors such as choice-making, which can ultimately offer them a sense of
control that may improve the quality of their life (Jolivette et al., 2001).
This PBIS strategy is just one example of how a relatively simple shift
in environmental variables (in this case instructional choice) can facil-
itate desired behaviors (e.g., engagement and self-determined behav-
iors) and essentially make problem behaviors “irrelevant, inefficient,
and ineffective” for students (Horner, 2000, p. 182).
Current research on choice has addressed a range of topics, includ-
ing examining students’ preferences, increasing students’ opportuni-
ties to make choices in their life, the mechanisms that support choice
(e.g., why it works; Morgan, 2006; Romaniuk & Miltenberger, 2001),
and examining the effects of choice (as an intervention) on students’
engagement, task completion, and disruption. In this current study,
we focus on the latter objective: examining the implementation and
effects of instructional choice at the elementary level.
Jolivette, Stichter, and McCormick (2002) provided the follow-
ing, concise definition of instructional choice: “. . . the student is pro-
vided with two or more options, is allowed to independently select an
option, and is provided with the selected option” (p. 28). This strategy
requires little preparation, is easy to implement, and supports content
instruction (Ramsey, Jolivette, Peterson, & Kennedy, 2010). Rispoli et
al. (2013) indicated instructional choice can include across-task choices
and within-task choices, both of which can be effective in decreasing
problem behaviors (Dibley & Lim, 1999; Tullis et al., 2011). Across-task
choices can include choosing the order of tasks to be completed (e.g.,
“Which worksheet would you like to do first?”) or choosing which
task to complete from a menu of options. Within-task choices can
include asking the student to choose materials for task completion
(e.g., colored pencils or markers, pencil-and-paper or electronically)
or providing the choice of environmental variables (e.g., where to
complete tasks, with whom to work; Cole & Levinson, 2002).
To date there are many studies of instructional choice demon-
strating the ease and effectiveness of this strategy with a range of
students (e.g., those with developmental disabilities, behavioral
disorders, at risk) in a continuum of PK–12 grade settings (e.g.,
inclusive contexts, self-contained classrooms located in public
schools, and residential treatment facilities; DiCarlo, Baumgartner,
Stephens, & Pierce, 2013; Dunlap et al., 1994; Ramsey et al., 2010;
Skerbetz & Kostewicz, 2013). Such studies have consistently shown
that instructional choice—compared to no instructional choice
conditions—yielded higher levels of task engagement and lower
levels of disruption. Moreover, students were more engaged and
less disruptive when they were offered instructional choices than
when instructional choices were not offered. While evidence is
mixed as to the science behind why choice works, in some instances,
it was not just preference (getting what they want) associated with
changes, but instead it was choice—the act of choosing—associated
with improved behavior (e.g., Kern & State, 2009; Vaughn & Horner,
1997).
Rispoli and colleagues (2013) explicitly examined the differen-
tial effects of across-task and within-task choices as separate inde-
pendent variables. In this study, three boys (Dylan, Eddie, and Alex;
ages 5, 7, 11) and one girl (Kelley, age 11) with autism exhibited
one or more challenging behaviors in the form of off-task behavior,
screaming, aggression, elopement, verbal protesting, delayed echo-
lalia, and/or property destruction. In an effort to reduce disruptive
behavior, students were observed two to four days per week during
two to four 5 min sessions in their respective settings. In this A-B-
A-B alternating treatment design, across- and within-task choice
conditions were embedded in each B phase, where either an across-
task choice was presented visually (e.g., choose what to complete
first) or within-task choices (e.g., choose how to respond, where
to work, materials to use) were presented for a teacher assigned
activity. All four students displayed higher rates of disruptive
behavior during baseline and withdrawal phases, which decreased
during both choice conditions. During the first intervention phase,
three out of the four students exhibited lower rates of challenging
behavior during across-task choice compared to within-task choice.
When choice was reintroduced after withdrawal, Alex and Kelley
again exhibited lower rates of challenging behavior during across-task
choices, while Dylan and Eddie demonstrated zero levels of challenging
behavior during within-task choice and near-zero (Dylan) or zero (Eddie) levels for
across-task choice.
Rispoli and colleagues (2013) have moved the field forward
in better understanding instructional choice and introduced some
important next steps. For example, it will be important to exam-
ine outcomes associated not only with shifts in disruption, but also
shifts in academic engagement given the importance of supporting
students in accessing high-quality instruction. In addition, future
inquiry is needed in more inclusive settings with teachers as lead
interventionists given the goals of supporting feasible, effective
low-intensity supports that teachers can use in a whole-class format
within the context of tiered systems of support to meet students’
academic, behavioral, and social needs. Strategies such as instruc-
tional choice can ideally be used as a classroom practice, with an
emphasis on evaluating how well these low-intensity supports assist
students with moderate behavioral challenges. Finally, it is import-
ant to assess the social validity of these types of supports from both
teacher and student perspectives.
Purpose
In this study we built on the foundational work of Rispoli
et al. (2013) which was conducted in more restrictive settings with
doctoral students as intervention agents and assessors. Specifically,
we extended this line of inquiry by replicating the A-B-A-B alternat-
ing treatment design in an inclusive setting, monitoring disruptive
behavior and academic engaged time, involving classroom teachers
as intervention agents and assessors with minimal university sup-
port, and gathering social validity from all involved stakeholders.
Both types of instructional choices were offered to all students, with
data collected on those students whose screening scores indicated
they needed additional support to be fully engaged during writing
instruction.
In this study we addressed three questions. First, we sought to
determine if instructional choice could be implemented with integ-
rity by site-level general and special education teachers, with limited
university supports. Second, we examined the extent to which a func-
tional relation was established between the introduction of two types
of instructional choice—across-task and within-task—and changes in
academic engagement for students identified as having writing and
work completion concerns. Third, we explored teachers’ and stu-
dents’ views on the goals, procedures, and outcomes of the instruc-
tional choice interventions.

Method

Participants
Participants were two first-grade students (Neal and Tina
[pseudonyms]) attending a public elementary school in the Midwest
(see tables 1 and 2). Students were identified through systematic
screening procedures by examining their risk index (e.g., moderate to
high) according to the Student Risk Screening Scale (SRSS; Drum-
mond, 1994) and report card grades (e.g., progressing, limited prog-
ress) in writing and working independently. Neal qualified for special
education services under the category of autism as determined by a
multidisciplinary team according to the Individuals with Disabilities
Education Improvement Act (IDEA, 2004) and Tina was a typically
developing student who was not receiving supplemental supports in
writing at the onset of this study. Tina received small group, Tier 2
reading interventions in the classroom at the beginning of the study
and later participated in a Tier 3 reading intervention, Reading Recov-
ery (a 1:1, daily intervention), to better meet her instructional needs in
reading as the Tier 2 support was not sufficient according to curricu-
lum-based measures.

Table 1
School Characteristics

Characteristic                              %       n

Students^a (N = 604)
  Male                                      49.83   301
  Female                                    50.17   303
Ethnicity
  Asian / Pacific Islander                  21.69   131
  Black                                      0.66     4
  Hispanic                                   1.32     8
  Two or more races                          4.47    27
  American Indian / Alaska Native            0.17     1
  White                                     71.69   433
Grade level
  Kindergarten                              15.23    92
  First                                     14.57    88
  Second                                    15.40    93
  Third                                     18.71   113
  Fourth                                    18.38   111
  Fifth                                     17.38   105
Free or reduced-price lunch eligible         2.81    17
Students with disabilities^b                 6.60    36
Locale^a                                    Suburb: Large
Classroom teachers (FTE)^a                  30
Student / teacher ratio^a                   20.13
Title I school^a                            No

Note. FTE = full time equivalent.
^a National Center for Education Statistics, Common Core of Data 2011-2012.
^b Ohio State Department of Education, 2012-2013 report card; students with disabilities make up 9.4% of district.

Table 2
Characteristics of Student Participants
Student
Variable Neal Tina
Demographics
Age 6.11 7.04
Gender Male Female
Ethnicity White Asian
Screening
SRSS overall (Total Score) Moderate Risk (8) Moderate Risk (4)
Fall trimester report card
Writing Progressing Progressing
Works independently Progressing Progressing
SSiS rating scales (standard scores)
Social skills 81 100
Problem behaviors 111* 120*
Academic competence 93 88
Special education Yes (autism) No
Instructional sessions attended 28 28
% sessions observed: fidelity (n) 28.57 (8) 28.57 (8)

Note. SSiS = Social Skills Improvement System – Rating Scale (Gresham & Elliott, 2008b);
SRSS = Student Risk Screening Scale (Drummond, 1994; 0-3 = low risk; 4-8 = moderate
risk, 9-21 = high risk). *Scores reflect above average levels of hyperactivity/inatten-
tiveness. Neal also scored above average on the autism spectrum score as expected.

Educators were three adults: a first-grade general education
teacher, a special education teacher, and an instructional support teacher
(see table 3). The first-grade general education teacher was in her thir-
teenth year teaching, with a teaching credential and master’s degree
in school counseling. The special education teacher was in her sec-
ond year teaching, with a teaching credential and bachelor’s degree in
special education. The instructional support teacher assisted school-
site teachers in a variety of ways: conducting interventions with small
groups of students, running the school’s pre-referral team for special
education, and providing professional development to teachers on
research-based practices. This teacher was in her fifteenth year of teaching
(10 years as a general education elementary teacher, five years as the
instructional support teacher). She had a teaching credential and mas-
ter’s degree in leadership.
Setting
Neal and Tina attended a large, suburban, public elementary
school in the Midwestern United States (National Center for
Education Statistics, 2014; see table 1). Approximately 600 K–5
students were enrolled in the school. Kindergarten through second
grade classrooms were taught by one teacher in a self-contained for-
mat, as each teacher was responsible for teaching all subject areas.
Third through fifth grade used a team-teaching approach to instruc-
tion, as two teachers were responsible for teaching two subject areas
to two classes. Students who qualified for special education were
provided services on a continuum of placements with many students
receiving services through inclusive practices in the general education
classroom while other students received a portion of their special edu-
cation services in a resource room.
There were 25 students (14 girls) in the first-grade classroom that
was the setting for this study. The students sat in assigned seats at six
circular tables around the perimeter of the room. Students were sup-
ported behaviorally through a shared Peacemaker Promise that was
developed by the teacher and students at the beginning of the year.
The promise was recited daily to remind students of expected behav-
iors (e.g., help others, make safe choices), which were reinforced using
“drops in the bucket” tickets. Once a student took the ticket home for
a parent to sign and brought it back to school, that ticket would go
into the bucket. Every morning, the first-grade teacher would pull out
five tickets from the bucket. When a student’s name was called, he or
she could choose from a variety of non-tangible reinforcement (e.g.,
“use a pen for the day” pass or “bring a stuffed animal to school”
pass). Classroom lessons followed a predictable format: whole-group
mini-lesson to introduce and teach a concept and individual practice
on the concept.
Procedures
The special education teacher contacted a university to inquire
about participating in a study advertised as an opportunity to learn more about teach-
er-directed, low-intensity supports: Empowering Teachers with Low-Inten-
sity Strategies to Support Instruction. After obtaining district and site-level
approvals to participate, the research team worked with the special
education teacher to identify students who might benefit from partici-
pation. She indicated some students were struggling during writing
instruction, which took place in an inclusive first-grade classroom. The
special education teacher and research team members worked together
remotely to analyze deidentified school data to identify students who
were struggling to work independently during writing instruction and
met all inclusion criteria. Three students were identified.
A consenting meeting was held remotely using web-based tech-
nology to explain the purpose of the study to the special education
teacher, the first-grade general education teacher, and the instructional
support teacher. All three teachers consented. Following this meeting,
the general education teacher sent home parental consent letters,
provided by the research team, to the parents of the three students. Only the
teachers knew the students’ names. University researchers did not learn
the student names until after parental consent was obtained and prior
to obtaining student assent. Students provided assent through a similar
procedure using the web-based technology in the presence of a teacher.
Of the three parents offered this opportunity for their child, two elected
to allow their child to participate, and both students assented.
Student Inclusion Criteria
Data from behavior screenings and report cards were used
to identify first-grade students with overall behavioral concerns and
work completion issues during writing. Inclusion criteria were: (a)
scoring in the moderate or high risk category on the SRSS (Drum-
mond, 1994), (b) earning a Progressing (PR) or Limited Progress (LP)
grade in Writing, and (c) earning a Progressing (PR) grade in Works
independently on the district’s fall 2013 trimester report card.
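The three inclusion criteria can be expressed as a simple screening filter. The sketch below is illustrative only: the SRSS risk categories and report card codes come from the study, while the record structure and function name are hypothetical.

```python
# Hypothetical sketch of the three inclusion criteria described above.
# SRSS categories and grade codes (AC/PR/LP) come from the text;
# the record layout and function name are illustrative only.

def meets_inclusion_criteria(student):
    return (
        student["srss_category"] in ("moderate", "high")   # (a) SRSS risk
        and student["writing_grade"] in ("PR", "LP")       # (b) Writing
        and student["works_independently_grade"] == "PR"   # (c) Works independently
    )
```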
Student Risk Screening Scale. The SRSS is a free-access universal
behavior screener. As part of regular school practices, teachers rate
each student in their homeroom class on seven items: steal; lie, cheat,
sneak; behavior problem; peer rejection; low academic achievement;
negative attitude; and aggressive behavior according to a 4-point
Likert-type scale (never = 0, occasionally = 1, sometimes = 2, frequently =
3). Total scores range from 0 to 21, with higher scores indicating higher
risk. Total scores place students into one of three risk categories: low
(0 to 3), moderate (4 to 8), and high (9 to 21) risk. The SRSS is highly
accurate in predicting academic and behavioral outcomes at the ele-
mentary level (e.g., Menzies & Lane, 2012).
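The scoring logic just described can be sketched as follows; the item names and risk cutoffs are taken from the text, while the function names and data layout are hypothetical.

```python
# Illustrative sketch of SRSS scoring as described above; not an
# official implementation of the instrument.

SRSS_ITEMS = [
    "steal", "lie_cheat_sneak", "behavior_problem", "peer_rejection",
    "low_academic_achievement", "negative_attitude", "aggressive_behavior",
]

def srss_total(ratings):
    """Sum seven 0-3 Likert-type ratings; totals range from 0 to 21."""
    assert set(ratings) == set(SRSS_ITEMS)
    assert all(0 <= v <= 3 for v in ratings.values())
    return sum(ratings.values())

def srss_risk_category(total):
    """Map a total score to the low (0-3), moderate (4-8), or high (9-21) band."""
    if total <= 3:
        return "low"
    if total <= 8:
        return "moderate"
    return "high"
```

For example, Neal's reported total of 8 falls in the moderate band, consistent with table 2.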
Report card. At the end of each trimester, students are graded
using coded performance levels on the report card. The first-grade
teacher grades every student individually in multiple sub-catego-
ries within the broad learning areas of language arts, math, science,
social studies, health, and social development/work habits. For the
purpose of identifying students with academic risk, we used reported
performance for the sub-category of Writing within language arts and
the sub-category of Works independently within social development/
work habits. Coded performance levels for the sub-categories were
the highest performance of Achieving (AC), middle performance of
Progressing (PR), and lowest performance of Limited Progress (LP).
No students in the first-grade class received a LP grade in Works inde-
pendently in the fall trimester.

Intervention Procedures
In this study we explored the independent variable of instruc-
tional choice, defined as “. . . opportunities to make choices means that
the student is provided with two or more options, is allowed to inde-
pendently select an option, and is provided with the selected option”
(Jolivette et al., 2002, p. 28). Instructional choice was selected by first
surveying all three teachers to determine their knowledge, confidence,
and perceived utility of 10 low-intensity supports: behavior specific
praise, active supervision, opportunities to respond, precorrection,
instructional choice, instructive and corrective feedback, group con-
tingencies, proximity, self-monitoring, and behavior contracts (Lane,
Oakes, & Ennis, 2012; Low-Intensity Support Survey Self-Assessment:
Knowledge, Confidence, and Use). Teachers were provided with a 4-point
Likert-type scale ranging from 0 to 3, with higher scores indicating
higher levels of knowledge about the strategy, higher confidence in
their ability to implement the strategy, and more positive views that
the strategy would be useful in their teaching (see table 3). The special
education and instructional support teachers reported higher
knowledge and use scores than the general education teacher, both
across all strategies and for the instructional choice item specifically.
Scores were reviewed with the teachers during a meeting
with the primary investigator, and collectively the decision was made
to explore instructional choice.
We examined two types of instructional choice: (a) across-
task choices: the option to choose the order in which to complete
assigned tasks; or (b) within-task choices: options of how to com-
plete an assigned task (e.g., writing instrument). The two choice
options were randomly assigned to intervention days. During the
first introduction of the intervention, there were six sessions during
which within-task choices were planned (with one lost data point
due to a change in the school schedule) and another five sessions
during which across-task choices were planned, all of which were
conducted. During the confirmation phase (reintroduction of the
intervention conditions), there were four to five dates randomly
selected for each condition. Yet, due to changes such as the end-of-
year schedule of events and the primary observer being called out
of the classroom during data collection, one session was lost from
each task condition. Thus, during the B2 phase, there were four sessions for
across-task choices and three sessions for within-task choices.
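The random assignment of choice conditions to planned intervention days might be sketched as follows; the condition labels come from the study, but the scheduling helper itself is hypothetical, not the study's actual randomization procedure.

```python
# Illustrative sketch of randomly assigning the two choice conditions
# to planned intervention days; the helper and its inputs are
# hypothetical.
import random

def assign_conditions(days, n_within, n_across, seed=None):
    """Randomly label each intervention day as within- or across-task."""
    assert len(days) == n_within + n_across
    labels = ["within-task"] * n_within + ["across-task"] * n_across
    rng = random.Random(seed)
    rng.shuffle(labels)   # random order of condition labels across days
    return dict(zip(days, labels))
```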
Across-task choices. During intervention phases, students
selected the sequence in which they completed tasks during the daily
writing block. At the end of the teacher’s mini-lesson, she wrote the

Table 3
Characteristics of Teacher Participants and Knowledge, Confidence, and Use of
Low-Intensity Support Strategies

Teacher Primary Role
General Special Support
Variable Education Education Provider
Demographics
Age 34 23 37
Gender Female Female Female
Ethnicity White White White
Years teaching experience 13 2 15
Years teaching experience current school 12 2 15
Certified in the area currently teaching Yes Yes Yes
Highest degree earned Master’s Bachelor’s Master’s
Completed course in classroom management Yes Yes Yes
Professional development in academic screening No Yes Yes
Professional development in behavior screening No Yes Yes
M (SD); range = 0-3
Low-intensity support strategies survey (Lane,
Oakes, & Ennis, 2012)
Knowledge 1.80 (0.63) 2.00 (0.00) 2.40 (0.52)
Confidence 1.80 (0.63) 1.80 (0.42) 2.40 (0.52)
Use 2.00 (0.00) 2.30 (0.82) 2.70 (0.48)
Instructional choice item
Knowledge 1 2 2
Confidence 1 2 2
Use 2 3 3

tasks that needed to be done on the board with boxes next to them. The
teacher would explicitly say, “Your choice today is to choose the order
that you finish these tasks.” Tasks included at least two options, with
one day offering as many as four options. For example, the teacher
wrote “write 2 pages in nonfiction book” and “do 2 illustrations in
nonfiction book” on the board with boxes next to them. Then, the
teacher explained they could choose the order that they finished the
two tasks. Another example of across-task choices was when students
were writing how-to books. The tasks that needed to be completed
were “read a completed how-to book to a partner,” “write 2 new steps
to your how-to book,” and “draw 2 new illustrations in your how-to
book.” Students were allowed to choose the order in which they com-
pleted the three tasks. The teacher praised students for making their
self-selected choice.
Within-task choices. On days selected for within-task choice,
participants were offered a choice of materials to complete activi-
ties and/or a choice of environmental factors. The teacher still went
over the tasks the students needed to complete in the remaining
time in the writing block at the end of her mini-lesson. During with-
in-task days, the teacher would number the tasks on the board and
tell students this was the order in which they needed to complete
the tasks. Following the description of the tasks, the teacher would
say what the choice was for the day. On some within-task days, students
were able to choose which type of art supply they wanted to use
for their illustrations. On other within-task days, students were able to
choose the location around the room where they completed their tasks
or the partner with whom they worked on the tasks. As with the
across-task condition, the teacher praised students for making their
self-selected choice.
Treatment integrity. Treatment integrity was measured using a
behavior component checklist for baseline conditions (14 items) and
both intervention conditions (5 items each). We collected data on base-
line practices during each phase to make sure that initial baseline pro-
cedures were still in place during each intervention condition, and
that the only change was the introduction of the across-task or with-
in-task interventions, with the five items detailing the tactics for
the specific choice component (e.g., Teacher offered student the opportunity
to ________; Student made choice within 30 s; Teacher praised student for
making a choice selection; Teacher made _______ choices available; Teacher
praised student for completing assigned tasks). Each item was scored on
a 3-point Likert-type scale: 0 = not implemented, 1 = partially imple-
mented, 2 = fully implemented. The special education teacher collected
treatment integrity data daily, and the instructional support teacher
collected reliability of treatment integrity data. We computed integ-
rity of baseline practices, across-task choice, and within-task choice
conditions for each student by dividing the sum of items observed by
the total items possible for each session, multiplying the quantity by
100 to obtain a percentage (see Table 4).
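The percentage computation described above reduces to a few lines; the sketch below assumes each checklist item is scored 0, 1, or 2, as stated, while the function name and input format are illustrative.

```python
# Sketch of the treatment integrity computation described above:
# session percentage = (sum of item scores / points possible) x 100,
# where each item is scored 0 (not), 1 (partially), or 2 (fully
# implemented). Names are illustrative only.

def integrity_percentage(item_scores):
    assert item_scores and all(s in (0, 1, 2) for s in item_scores)
    possible = 2 * len(item_scores)   # each item worth up to 2 points
    return 100.0 * sum(item_scores) / possible
```

For instance, a five-item choice checklist scored 2, 2, 1, 2, 2 would yield 90.0%.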
Training. Prior to implementing the intervention, the three
teachers listened to a 10 min voiced-over PowerPoint describing each
intervention; reviewed the treatment integrity protocols; and com-
pleted a 10-item quiz which included examples of both types of
choice, intervention procedures, and instructions on how to complete
the forms. A meeting was held using web-based technology to answer
questions prior to and following the training. The special education
teacher served as the primary treatment integrity data collector and
the instructional support teacher assessed reliability in at least 25% of
days for each condition, including baseline.

Table 4
Social Validity and Treatment Fidelity by Student and Phase

                                 Treatment Integrity                       Social Validity
Student  Phase (No. Sessions)    Baseline        Strategy       IOA^a      IRP-15^b         CIRP
                                 Practices       % (SD)         % (n)      T1   T2   T3
                                 % (SD)
Neal     A1 Baseline (8)         81.56 (12.37)                  100.00 (2)  90   77   89     24
         B1 Intervention                                        100.00 (3)
           Within-task (5)       75.29 (9.83)   100.00 (0.00)
           Across-task (5)       82.66 (13.60)   88.00 (10.95)
         A2 Withdrawal (3)       82.05 (4.44)                   100.00 (1)
         B2 Confirmation                                        100.00 (2)  80   83   88     42
           Within-task (3)       87.18 (11.75)  100.00 (0.00)
           Across-task (4)       82.87 (8.08)   100.00 (0.00)
Tina     A1 Baseline (8)         85.00 (13.09)                  100.00 (2)  90   78   83     27
         B1 Intervention                                         93.33 (3)
           Within-task (5)       77.88 (11.27)  100.00 (0.00)
           Across-task (5)       85.00 (9.79)    92.00 (10.95)
         A2 Withdrawal (3)       80.56 (12.73)                  100.00 (1)
         B2 Confirmation                                        100.00 (2)  82   86   82     28
           Within-task (3)       88.89 (12.73)  100.00 (0.00)
           Across-task (4)       79.17 (9.28)   100.00 (0.00)

Note. IRP-15 = Intervention Rating Profile (Witt & Elliott, 1985); CIRP = Children’s Intervention Rating Profile (Witt & Elliott, 1985); IOA = interobserver agreement; T1 = general education teacher; T2 = special education teacher; T3 = support provider teacher.
^a IOA percentage for treatment integrity was calculated via item-by-item analysis, and the n reported represents the number of sessions within the phase observed by the support teacher. ^b IRP-15 scores can range from 0-90, with higher scores indicating higher social validity.
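The item-by-item IOA analysis referenced in the table note can be sketched as a comparison of the two observers' checklists; the function below is illustrative, not the authors' actual analysis code.

```python
# Sketch of item-by-item interobserver agreement (IOA) for treatment
# integrity: compare the primary and reliability observers' item
# scores and report the percentage of items on which they agree.
# Names are illustrative only.

def item_by_item_ioa(primary, reliability):
    assert len(primary) == len(reliability) > 0
    agreements = sum(p == r for p, r in zip(primary, reliability))
    return 100.0 * agreements / len(primary)
```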

Baseline
During the baseline condition, the teacher conducted a whole-
group mini-lesson to teach and introduce a concept. Typically, the
students sat on the carpet in the front of the room for the mini-lesson.
The students were expected to sit quietly and listen to the teacher.
Many times the mini-lesson included a read aloud with a mentor text
or teacher modeling of her own writing. Once the mini-lesson was
complete, the teacher told the students their tasks for the rest of the
time, and the students went back to their table to work on writing.
Throughout the work time, the teacher monitored the room—working
with individual students as needed. Baseline practices were broken down into 14 components (e.g., Teacher prompted students to enter class after related arts [students entered]; Teacher prompted students to meet on the carpet and wait quietly [students moved to carpet and sat quietly without additional prompting]).
Of the 19 components in the intervention phase, 14 were also in effect during the baseline phase; the instructional choice components were the only additions during the intervention phases. All baseline procedures thus remained in place during the two intervention conditions. We monitored procedural fidelity of baseline conditions across all phases to ensure there were no other changes between baseline and intervention conditions.
Descriptive Measure
The Social Skills Improvement System-Rating Scales (SSiS-RS;
Gresham & Elliott, 2008). The SSiS-RS is a diagnostic tool that pro-
vides information about students’ social behavior. This nationally
norm-referenced measure has subscales assessing social skills, prob-
lem behaviors, and academic competence. It is a reliable and valid tool
for use with students ranging in age from 3-18 years and has three
versions: parent (contains social skills and problem behavior domains
only), teacher, and student self-report (ages 8–18; Gresham, Elliott,
Vance, & Cook, 2011). This information provided descriptive data on students' skill sets (see Table 2).
Outcome Measures
Dependent variables were observations of behavior during
writing instruction (academic engaged time [AET] and disruptive
behavior [disruption]). AET was the main variable of interest in this
study as the goal was to determine the impact of instructional choices
on engagement.
Direct observations. Students’ individual academic engaged
time was assessed during baseline and intervention conditions using
a modified version of the direct observation procedures provided in
the SSBD. AET referred to the amount of time a student spent actively
engaged attending to and working with teacher-assigned tasks and
materials during writing instruction. Examples included (a) attending
to and following teacher instructions within 5 s of prompts, (b) engag-
ing in intended motoric responses (e.g., using a pencil or pen to write
words; taking paper out of a binder), and (c) asking peers or a teacher
for assistance according to requested procedures (e.g., raising their
hand or quietly talking with a peer) and regarding the appropriate
topic. Non-examples included (a) engaging in tasks other than teacher-assigned activities and tasks, (b) gazing away from paper or work group for more than 5 s, (c) being out of the assigned instructional area, or (d) engaging in a follow-up activity (e.g., drawing a picture) instead of the assigned task.
Disruption referred to any behavior that interrupted classroom
instruction, or prevented students from engaging in classroom activities.
Examples included (a) talking to peers about off-topic items, (b) being
out of seat without permission, (c) talking out without raising hand, (d)
engaging in activities other than those requested by the teacher, (e) speak-
ing in an elevated voice, (f) hitting desk loudly with hands or objects,
(g) arguing with adults or students, (h) refusing to work, or (i) touching
others’ property without permission. Non-examples included (a) looking
at teacher or materials during instruction, (b) working independently or
with designated group, (c) being in assigned seat or area, (d) raising hand
to ask questions, (e) speaking in an indoor voice, (f) using materials as
they are intended, or (g) following teacher directions.
Measurement. Data on AET and disruption were collected using momentary time sampling procedures in 2 min intervals for the duration of the independent writing segment of the writing block.
Momentary time sampling was selected as the measurement system
as it allows for teachers to be the data collectors without interfering
with instruction (Cooper, Heron, & Heward, 2007). Additionally, dis-
ruptive episodes were not brief or uniform in length. Instead, most
instances of disruption extended for several minutes.
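With momentary time sampling, a session score reduces to the percent of sampled moments at which the target behavior was observed. A minimal sketch in Python, using hypothetical interval records rather than the study's data:

```python
def percent_of_intervals(samples):
    """Percent of momentary time samples in which the target behavior was observed.

    Each entry is 1 if the student showed the behavior (e.g., was academically
    engaged) at the instant the interval ended, and 0 otherwise.
    """
    return 100.0 * sum(samples) / len(samples)

# Hypothetical 2 min samples across an 18 min independent writing segment
session = [1, 0, 1, 1, 1, 0, 1, 1, 1]
print(round(percent_of_intervals(session), 2))  # 77.78
```

Both AET and disruption percentages reported below would be produced this way, one score per observed session.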
The special education teacher served as the primary data collec-
tor, collecting one probe during writing each day. The instructional
support teacher collected reliability data for at least 25% of each phase
(e.g., about one day per week). Prior to collecting data, both the special
education teacher and the instructional support teacher completed a
10 min voiced-over PowerPoint training, reviewed the data collection forms, and completed a 10-item quiz related to measurement issues, including how to complete the forms. A meeting was held using web-based
technology to answer any questions following the training. Following
this step, the two teachers practiced collecting data before collecting
baseline data using the videos provided by the SSBD. They sat next to
each other for three 10 min sessions and used a data collection sheet
and MotivAider. Each rater collected data independently, and follow-
ing each session compared results and computed IOA using point-by-
point agreement. The number of intervals in agreement was divided
by the sum of the number of intervals in agreement and disagreement
(total intervals), multiplying the quantity by 100 to obtain a percent-
age. Each of the three 10 min sessions had an IOA of 100%, 90.00%, and 90.00%, respectively, with an overall mean IOA of 93.33%. Three consecutive agreements ≥ 90% was established as the minimum criterion.
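The point-by-point agreement computation described here is simple to express in code; a sketch with hypothetical interval records (the raters' actual MotivAider-cued records are not reproduced here):

```python
def point_by_point_ioa(primary, reliability):
    """Point-by-point IOA: intervals in agreement divided by total intervals, times 100."""
    if len(primary) != len(reliability):
        raise ValueError("Both raters must score the same number of intervals")
    agreements = sum(p == r for p, r in zip(primary, reliability))
    return 100.0 * agreements / len(primary)

# Hypothetical records from two observers over ten 2 min intervals
primary     = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1]
reliability = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
print(point_by_point_ioa(primary, reliability))  # 90.0
```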
Social Validity
We assessed social validity prior to beginning and after complet-
ing the testing of the interventions. The general education teacher, spe-
cial education teacher, and instructional support teacher completed the
Intervention Rating Profile (IRP-15; Witt & Elliott, 1985) to obtain their
opinions regarding the importance of intervention goals, the accept-
ability of the procedures, and importance of the intervention outcomes.
Teachers rated 15 statements regarding procedures and outcomes (e.g.,
“This would be an acceptable intervention for the child’s needs”) on
a six-point Likert-type scale ranging from 1 = strongly disagree to 6 =
strongly agree for each student. Item ratings were summed to a total score (range 15 to 90), with higher scores suggesting higher social validity.
Students completed a modified version of the Children’s Interven-
tion Rating Profile (CIRP; Witt & Elliott, 1985) to obtain their views, with
minor wording changes in the items to soften the language. Students rated
seven items on a six point Likert-type scale ranging from 1 = I do not agree
to 6 = I agree. Negatively worded items were reversed scored and summed
(range 7 to 42), with higher scores suggesting greater social validity.
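The reverse-scoring step flips each negatively worded item on the 1-6 scale (a rating r becomes 7 - r) before summing. A sketch with hypothetical ratings; which items are treated as negatively worded here is illustrative, not taken from the instrument:

```python
def score_cirp(ratings, reversed_items):
    """Sum seven 1-6 ratings, reverse scoring negatively worded items (r -> 7 - r)."""
    if len(ratings) != 7:
        raise ValueError("The modified CIRP has seven items")
    total = 0
    for i, r in enumerate(ratings):
        if not 1 <= r <= 6:
            raise ValueError("Ratings must fall on the 1-6 scale")
        total += (7 - r) if i in reversed_items else r
    return total  # possible range: 7 (lowest) to 42 (highest social validity)

# Hypothetical ratings; items 2 and 5 (0-indexed) assumed negatively worded
print(score_cirp([6, 5, 2, 6, 4, 1, 6], reversed_items={2, 5}))  # 38
```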
Experimental Design and Statistical Analysis
We used single-case design methodology, implementing an A-B-A-B withdrawal design with alternating treatments over eight weeks, beginning with a baseline phase for Neal and Tina. Data paths for all variables were analyzed
using visual inspection techniques focusing on stability, level, and
trend (Gast & Ledford, 2014). Nonparametric effect sizes for A-B contrasts, measuring the direct impact of instructional choice, were calculated as Tau-U omnibus effect sizes (Parker, Vannest, Davis, & Sauber, 2011). Tau-U was selected over other non-overlap methods because it offers greater statistical power, is distribution free, and controls for positive baseline trend. To calculate Tau-U, all data were entered
into the online Tau-U calculator (Vannest, Parker, & Gonen, 2011)
to compute phase change contrast and weighted average Tau-U for
each participant, across all independent variables. We controlled for
positive baseline trend across all contrasts using the Tau-U calcula-
tor. Phase change decisions were guided by the academic engage-
ment variable as the most proximal variable of interest. A table of
mean and slope changes across phases is also included (see Table
5). Social validity and treatment integrity data were analyzed using
descriptive statistics (see Table 4).
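The core of the Tau-U statistic is a pairwise comparison of every baseline point with every intervention point. The sketch below shows only that basic non-overlap component; the study's reported values came from the online calculator and additionally corrected for positive baseline trend, which this sketch omits. The data shown are hypothetical:

```python
def tau_ab(baseline, treatment):
    """Basic A-vs-B Tau: (improving pairs - deteriorating pairs) / total pairs.

    Non-overlap component only; baseline-trend correction, as applied in the
    study via the online Tau-U calculator, is not implemented here.
    """
    pos = neg = 0
    for a in baseline:
        for b in treatment:
            if b > a:
                pos += 1  # treatment point improves on the baseline point
            elif b < a:
                neg += 1  # treatment point falls below the baseline point
    return (pos - neg) / (len(baseline) * len(treatment))

# Hypothetical AET percentages for a single A-B contrast
print(round(tau_ab([20, 30, 45], [40, 50, 35]), 3))  # 0.556
```

A Tau of 1.0 indicates complete non-overlap in the therapeutic direction; values near 0 indicate no systematic change.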
Table 5
Academic Engaged Time and Disruptive Behavior: Mean and Slope by Phase

                                 Academic Engaged Time                        Disruptive Behavior
Student  Phase (No. Sessions)    M %    SD %   Slope   SEyx   IOA % (n)a      M %    SD %   Slope   SEyx   IOA % (n)a
Neal     A1 Baseline (8)         59.16  27.33  -4.75   26.71  92.86 (2)       6.60   15.57  4.20    12.62  100 (2)
         B1 Intervention         54.26  26.52  --      --     --              9.17   13.86  --      --     --
           Across-task (5)       65.74  23.00  -3.42   25.81  80.00 (1)       3.33   7.46   3.33    6.09   100 (1)
           Within-task (5)       42.78  26.90  -10.83  23.96  87.50 (2)       15.00  17.08  4.17    18.19  100 (2)
         A2 Withdrawal (3)       58.55  18.39  -2.41   25.78  100 (1)         3.70   6.41   5.56    4.54   100 (1)
         B2 Confirmation         79.17  12.03  --      --     --              12.10  19.82  --      --     --
           Across-task (4)       79.17  15.96  11.67   6.45   95.00 (2)       15.97  26.68  -15.84  21.00  95.00 (2)
           Within-task (3)       79.17  7.22   0.00    10.21  -- (0)          6.94   6.36   2.09    8.50   -- (0)
Tina     A1 Baseline (8)         49.18  18.27  -4.32   16.09  92.86 (2)       21.82  15.40  3.84    13.18  92.86 (2)
         B1 Intervention         52.69  15.60  --      --     --              14.12  9.72   --      --     --
           Across-task (5)       58.71  14.57  1.25    16.67  90.00 (1)       17.14  8.15   0.08    9.41   90.00 (1)
           Within-task (5)       46.67  15.64  -2.67   17.39  100 (2)         11.11  11.11  -4.44   9.94   77.09 (2)
         A2 Withdrawal (3)       25.93  21.03  -19.45  11.34  100 (1)         20.37  22.45  22.22   4.53   100 (1)
         B2 Confirmation         66.18  20.14  --      --     --              7.57   12.36  --      --     --
           Across-task (4)       60.62  20.81  11.07   18.53  87.86 (2)       10.12  15.84  -1.19   19.30  92.86 (2)
           Within-task (3)       73.61  20.55  2.09    28.92  -- (0)          4.17   7.22   0.00    10.21  -- (0)

Note. IOA = interobserver agreement; SEyx = standard error. aIOA is reported as mean value.
Results

Figure 1. Neal's academic engaged time and disruptive behavior across conditions.
Figure 2. Tina's academic engaged time and disruptive behavior across conditions (legend: baseline, no choice; across-activity choice; within-activity choice).
Treatment Integrity
Table 4 shows summary statistics for Neal and Tina’s observed
use of the intervention components across all 28 days of the interven-
tion study. In brief, for Neal, baseline practices remained in place at between 75.29% and 87.18% across all phases of the project. Treatment
integrity was 100% for the within-task condition in both introductions
of the intervention. Treatment integrity was slightly lower for across-
task conditions (88.00%) during the first introduction of the interven-
tion, but then increased to 100% during the second introduction.
This same pattern was observed for Tina. Again, baseline practices remained in place across all phases of the project, with mean integrity scores ranging from 77.88% to 88.89%. Treat-
ment integrity was 100% for Tina for the within-task condition in both
introductions of the intervention. Treatment integrity was slightly
lower for the across-task condition (92.00%) during the first introduc-
tion of the intervention, but then increased to 100% during the second
introduction of the intervention.
Student Performance: Neal
Academic Engaged Time. Figure 1 shows the results for Neal.
In the first baseline phase (A1), Neal’s AET was variable, ranging from
11.11% to 100%, with a mean of 59.16% (SD = 27.33), and a downward
slope of -4.75 (SE = 26.71). AET was as high as 100% on the third day
of baseline (when a paraprofessional was in the room spending most
of her time with Neal) and declined to 11.11% on the last day (day 8)
of A1. Due to the countertherapeutic trend for Neal, the decision was
made to begin the intervention phase (B1).
During the first introduction of the interventions (phase B1), we
examined the two data paths: one for across-task choice conditions
and one for within-task choice conditions. For within-task choice
sessions, Neal’s engagement increased from 11.11% on the last day
of baseline to 66.67% during the first within-task choice session. For
the next three within-task choice sessions, AET varied between 33.33
to 58.33%. However, on the last within-task choice session, Neal’s
AET score was 0% resulting in a more pronounced downward trend
(-10.83). For across-task choice sessions, Neal’s AET was initially
100%, decreasing to 37.50% AET on the second session, but with an
upward trend for the final data points. The final three data points sug-
gested a mean level of engagement of 63.74% AET.
When the intervention was withdrawn (A2), Neal’s AET returned
to 58.55%, a mean level commensurate with the average AET during
baseline performance. Daily AET percentages fluctuated between 37.50% and a high of 71.48%.
In the final phase (B2), we examined the data paths for within- and across-task choices. During the within-task sessions, AET increased in level, ranging from 75.00% to 87.50% (M = 79.17%; SD = 7.22%), suggesting high levels of engagement and very limited variability in performance. During the across-task choice sessions, AET increased to the same level, with daily AET ranging from 66.67% to 100% engagement (M = 79.17%; SD = 15.96%) and an accelerating trend (slope = 11.67; SE = 6.45). However, omnibus effect sizes for contrasts between A1-B1 and A2-B2 were not significant for either variable: within-task choice (Tau-U = 0.36, p = 0.25) and across-task choice (Tau-U = 0.42, p = 0.15).
Disruptive Behavior. In the first baseline phase (A1), Neal did not engage in any disruptive behavior during the first six days of baseline. During the last two days, disruptive behavior increased sharply to 44.44% on the last day, when his AET was 11.11%, yielding a slope of 4.20 (SE = 12.62) and a phase average of 6.60% (SD = 15.57).
In the first intervention phase (B1), disruptive behavior was lowest during the across-task choice sessions at 3.33% (SD = 7.46), with only one day of any disruptive behavior, 4/7/2014 (16.67%). During the within-task choice sessions, disruptive behavior was above baseline levels at 15.00% (SD = 17.08), with high variability.
When the choice conditions were withdrawn, disruption was
very low. There was no disruptive behavior occurring during the
first two days and nominal disruption (11.11%) on the final day of the
withdrawal phase.
When the choice conditions were reintroduced, Neal’s disrup-
tive behavior increased slightly during the within-task choice sessions
(M = 6.94, SD = 6.36) and across-task choice sessions (M = 15.97, SD =
26.68). During the first across-task choice session on 4/17/2014 in this
phase, disruption was very high at 55.56%. The disruption decreased
dramatically in subsequent across-task choice conditions, with two
days of no disruption and one day of 8.33% disruption. Omnibus effect sizes for contrasts between A1-B1 and A2-B2 were not significant for either variable: within-task choice (Tau-U = 0.04, p = 0.88) and across-task choice (Tau-U = -0.22, p = 0.46).
In sum, for Neal, results of visual inspection did not establish a functional relation between the introduction of across-task or within-task choice conditions and increases in AET. However, when both choice conditions were reintroduced, engagement increased to 79.17% and levels stabilized across conditions, with across-task showing a positive trend, suggesting improvements during the second intervention condition. Furthermore, a functional relation was not established between choice conditions and disruptive behavior; in fact, disruption actually increased for Neal.
Student Performance: Tina
Academic Engaged Time. Figure 2 shows the results for Tina. In the first baseline phase (A1), Tina's AET declined, with a slope of -4.32 (SE = 16.09), beginning at a high of 75.00%, dipping to 16.67% on day 4, and ending at 33.33% on 3/13/2014. Due to the countertherapeutic trend for Tina and Neal, the decision was made to begin the intervention phase (B1).
During the first introduction of the interventions (phase B1), we
examined the two data paths: one for across-task choice conditions
and one for within-task choice conditions. For within-task choice ses-
sions, Tina’s engagement increased from 33.33% on the last day of
baseline to 55.56% during the first within-task choice session. With the
exception of the third within-task choice session (3/18/2014, 22.22%),
AET ranged from 40.00 to 60.00% with a phase average of 46.67% (SD
= 15.64) and a more stable pattern of responding relative to baseline
conditions. For across-task choice sessions, Tina’s AET was initially
66.67%, with a phase average of 58.71% (SD = 14.57) and the last three
data points ranging from 50.00 to 72.73%.
When the intervention was withdrawn (A2), Tina's AET demonstrated a downward trend (slope = -19.45; SE = 11.34), declining steadily across the three days from 50.00% to 16.67% to 11.11%. Due to the countertherapeutic trend for Tina, choice sessions were reintroduced.
In the final phase (B2), we examined the data paths for within-
and across-task choices. During the within-task choice session, AET
increased immediately to 83.33%, with a phase average of 73.61% (SD
= 20.55) across these three sessions. During the across-task choice ses-
sions, AET increased from an average of 25.93% (SD = 21.03) during
withdrawal to 60.62% (SD = 20.81), with a positive slope (slope = 11.07; SE = 18.53). Omnibus effect sizes for contrasts between A1-B1 and A2-B2 were moderate to large and significant for both within-task choice (Tau-U = 0.66, p = 0.03) and across-task choice (Tau-U = 0.89, p < 0.01).
Disruptive Behavior. In the first baseline phase (A1), Tina did not engage in any disruptive behavior during the first two days of baseline. However, during the remaining six days disruption ranged from 14.29% to 40.00%, with a phase average of 21.82% (SD = 15.40).
In the first intervention phase (B1), disruptive behavior was lower than baseline levels, with the lowest average for the within-task sessions (M = 11.11%, SD = 11.11) compared to across-task choice sessions (M = 17.14%, SD = 8.15). There were also changes in slope relative to baseline conditions for both within-task and across-task sessions. Tina did not display any disruptive behavior during the final two within-task choice sessions.
When the choice conditions were withdrawn, disruption increased markedly during the three days constituting this phase. Scores increased from 0% on 4/8/2014 to a high of 44.44% on the final day of the withdrawal phase. Given this pronounced countertherapeutic trend for disruption (slope = 22.22), as well as for AET, the choice conditions were reintroduced for Tina.
When the choice conditions were reintroduced, Tina’s disrup-
tive behavior decreased during within-task choice sessions (M = 4.17,
SD = 7.22) and across-task choice sessions (M = 10.12, SD = 15.84).
During the second across-task choice condition, disruption spiked
at 33.33% on 4/22/2014. Disruption decreased dramatically in sub-
sequent across-task choice conditions, with one day of no disruption and one day of 7.14%. Omnibus effect sizes for contrasts between A1-B1 and A2-B2 were moderate and significant for within-task choice (Tau-U = -0.84, p < .01), and approached significance for across-task choice (Tau-U = -0.56, p = 0.0527).
In sum, for Tina, results of visual inspection suggested a functional relation between the introduction of across- and within-task conditions and changes in AET, which was supported by the omnibus
effect sizes. When choice conditions were reintroduced, AET increased
to 60.62% for across-task sessions and 73.61% for within-task sessions.
In addition, there was also a functional relation between the introduc-
tion of choice conditions and changes in disruptive behavior for Tina,
with the strongest evidence for within-task choices.
Social Validity
Social validity was assessed from the perspectives of all three
adults and two students. Prior to implementation of the intervention
with Neal, the adults’ IRP-15 ranged from 77 to 90 (M = 85.33, SD
= 7.23). In looking at post intervention social validity scores, social
validity ratings declined for the general education teacher, increased
for the special education teacher, and remained relatively stable for the
instructional support teacher, with average post intervention scores of
83.67 (SD = 4.04). This same pattern was evident for teachers’ ratings
of social validity for Tina. In terms of the students' views, prior to implementation Neal rated the intervention using the CIRP at 24 and after implementation at 42, suggesting the intervention exceeded his initial expectations. In contrast, Tina had moderate-to-low expectations for this intervention that remained relatively stable (see Table 4).

Discussion

We offer this study as a demonstration of one approach to supporting the use of low-intensity, teacher-delivered supports—in this case instructional choice—within a tiered system of support (Lane et al., 2015).
We sought to extend the work of Rispoli and colleagues (2013) by fur-
ther examining the extent to which across-task and within-task choices
increased academic engagement and decreased disruptive behavior
for two students during writing instruction. Moreover, we examined the extent to which practitioners were able to implement variations of instructional choice within their regular classroom practices, and the extent to which these strategies assisted students with moderate behavioral risk in being more engaged and less disruptive during writing instruction. Our questions were threefold: Could this strategy be implemented as planned (treatment integrity)? Was there a functional relation
between the introduction of the across-task and within-task choices and
changes in students’ performance? Was the intervention viewed as feasi-
ble from teacher and student perspectives (social validity)?
Treatment Integrity
In examining treatment integrity data for both Neal and Tina,
the across-task and within-task interventions were implemented
with a very high level of fidelity—particularly the within-task intervention condition, which was implemented with 100% fidelity during
both introductions of the intervention for Neal and Tina. Fidelity data
exceeded the 80% criterion during the first introduction, rising to 100%
implementation fidelity during the reintroduction of the strategy.
To make sure the only change in this antecedent-based interven-
tion was the introduction of instructional choices, we also collected
data on baseline practices during each phase. Although not a com-
mon practice in most treatment-outcome studies, we feel this is an
important source of information to accurately interpret intervention
outcomes. Moreover, it is critical to ensure that baseline practices
remained in effect through all phases of the intervention and that
the only variation in context was the introduction of the interven-
tion components – in this case across-task and within-task choices
(Lane, Wolery, Reichow, & Rogers, 2006). Given the complexities of all
that teachers need to manage and address within the regular school
day, it is particularly important to attend to this aspect of fidelity in naturalistic contexts. In this study, fidelity data suggest baseline practices and intervention conditions were in effect with a high level of fidelity.
Student Outcomes
Our primary interest was to increase both students’ academic
engagement during writing instruction. For Neal, findings from visual
inspection techniques suggested a functional relation was not established
between the introduction of across-task or within-task choice conditions
and increases in AET. This was consistent with the omnibus effect size
contrasts between A1-B1 and A2-B2 suggesting nonsignificant changes
in student performance. However, when both choice conditions were
reintroduced, engagement increased from initial baseline levels below
60% to close to 80% engagement, suggesting improvements for Neal.
Had this intervention been conducted earlier in the school year, it would
have been possible to introduce another replication by withdrawing
and then reintroducing choice conditions another time. An additional
demonstration (A3-B3) may have provided further evidence of a functional relation between the introduction of choice and changes in AET. Results
for Tina were clearer, with visual inspection suggesting a functional rela-
tion between the introduction of across- and within-task conditions and
changes in AET which was supported with the omnibus effect sizes.
In terms of disruptive behavior, a functional relation was not established between choice conditions and disruptive behavior for Neal, running contrary to the findings of Rispoli et al. (2013). For Tina, however, a functional relation between the introduction of choice conditions and changes in disruptive behavior was apparent across choice conditions, with the strongest evidence for within-task choices. Her findings more closely parallel the outcomes reported by Rispoli et al. (2013). Additional inquiry regarding the effectiveness of
instructional choice in decreasing disruptive behavior in inclusive set-
tings is needed. In short, results suggested improvements in academic
engagement, with partial evidence for improving disruptive behavior.
In examining how Neal and Tina responded, it may be that students could benefit from more explicit instruction and positive behavioral supports to engage in these choice strategies within inclusive settings. For example, more explicit instruction may be needed on how to make a choice and then follow through on the choice. Precorrection strategies could also be used prior to engaging in instructional choice tasks to remind students how to participate successfully, which could be particularly helpful for students like Neal and Tina with higher than average hyperactivity and inattentiveness as reflected in their SSiS-RS scores. Also, it might be that students' disruptive behavior or AET are maintained by different functions (e.g., positive and/or negative reinforcement of attention, tasks/activities, or sensory experiences), and not all choices are function related (Umbreit, Ferro, Liaupsin, & Lane, 2007). Similarly, it might be that not all choices were as interesting to Neal and Tina, calling for a need to consider students' input on the choices offered. Students might also benefit from high rates of behavior-specific praise when first introducing and testing these strategies. In essence, a combination of low-intensity supports may be needed to achieve desired levels of engagement and disruption.
In looking more closely at this issue of responsiveness, additional inquiry will likely be needed to better understand variables that mediate (e.g., contextual variables in the classroom) and moderate (e.g., initial behavioral risk for a given student) intervention effects. Contextual variables such as choice alternatives may have had an effect on students' performance. Skinner (2002) suggested students may prefer assignments that result in a higher rate of task completion and a greater density of reinforcement. Initial behavioral risk, in contrast, may have played a moderating role. For example, in looking at the initial data paths for
Neal for both academic engaged time and disruptive behavior, it is important to note he had an SRSS score of 8, which suggested moderate risk but was only one point shy of the high risk category (Drummond, 1994), and he also had fewer than average social skills as measured by the SSiS-RS, which may have impeded his success. Sufficient evidence exists to note that students with higher levels of risk and those exposed to multiple risk factors often require more intensive supports such as functional assessment-based interventions (Lane, Oakes, & Cox, 2011; Umbreit & Ferro, 2015). In contrast, a functional relation was established for Tina for AET and disruption; she fell in the moderate risk category as well (but at the low end, with an SRSS score of 4) and had average social skills. Yet despite the relatively weak initial changes
for Neal, over time his AET levels actually exceeded Tina’s performance
during the second introduction of the choice conditions, both in terms
of level and stability. He simply may have needed more exposure to the
strategies to build fluency (Gresham & Elliott, 2008), learning how to
make a choice and follow through with the option selected in the time
provided. Also, it may be that he began to access other benefits of being
engaged such as praise from teachers for being more involved in the
activities at hand and the good feelings that come with having some
choice in one’s daily life and feeling productive (Jolivette et al., 2001;
Kern & State, 2009; Shogren et al., 2004).
Social Validity
Despite the less than optimal results for Neal, his social validity data as assessed using the CIRP indicated the intervention exceeded his initial expectations, and he viewed the intervention as feasible, desirable, and effective. In short, he liked it, and he ended the intervention with more desirable levels of academic engagement. There
was also far less variability in his engagement during the second intro-
duction of the intervention conditions, which facilitates his learning
opportunities as well as the learning opportunities of others when he
is engaged and not disruptive (Lane et al., 2015). For Tina, her initial
expectations were met, with a moderate level of social validity.
For the teachers, the interventions fell short of the general educa-
tion teacher’s expectations but exceeded the special education teach-
er’s expectations. Though the general education teacher viewed the
intervention as feasible, desirable, and effective, she did not experience
the decrease in disruptive behavior and increase in academic engaged
time that she was expecting. The general education teacher noted that
instructional choice was easy to implement with the routines and
expectations established in her classroom, and she would recommend
the intervention to other teachers. It would have been interesting to
have her complete the Low-Intensity Support Survey Self-Assessment:
Knowledge, Confidence, and Use following this experience, to see how
her knowledge, confidence, and use scores shifted. For the special
education teacher, who worked almost exclusively with five students
in the first grade classroom during writing, the intervention exceeded
her expectations. The special education teacher had high levels of
interactions with Neal and Tina on a daily basis during writing and
reported that she noticed improvements in academic engaged time
and decreased disruptive behavior, including reduced variability in
performance. It may be that these more discrete observations noted
by the special education teacher were not recognized by the general
education teacher as the general education teacher had a larger group
of 25 students to attend to. And it might be that initial impressions of this strategy, as evidenced in teachers' knowledge, confidence, and use ratings, impacted their views—a point calling for future inquiry.
Separate from this study, the general and special education teach-
ers decided to ask the full class four questions: Did having a choice
during writing make school more interesting? Did you like having
a choice during writing? Do you think having choices in school is a good idea? Would you like other teachers to give you choices in school?
Responses were overwhelmingly in favor of instructional choice, with
only 1–3 students indicating no to each question. Thus, the strategy was viewed favorably by most stakeholders.
Limitations and Future Directions
As with all studies, we encourage readers to interpret the find-
ings in light of the following limitations. First, this study involved
only two students in one teacher’s classroom. Although an important
study because it demonstrates teachers are capable of conducting
these strategies and assessing student performance with very limited
university support, this is just one setting, limiting the generalizability
of the findings. Future inquiry is needed to replicate these findings
to determine whether the same results hold for students with similar
academic and behavioral needs in inclusive general education settings. Other
future directions include the use of instructional choice in inclusive
settings with different subject areas and with different student behav-
ioral challenges (e.g., internalizing issues) or evidenced deficits.
Second, and related to measurement, during the final phase
IOA was collected only for the across-task condition and not for the
within-task condition due to a lost day of intervention. Nonetheless,
IOA levels were high across all phases of the intervention, mostly
above 90%, with the exception of one data point at 77.09% during the
within-task phase.
Third, there are curricular considerations to note. The types of
choices offered may have impacted results. Students did not provide
input on their preferences for the various choice opportunities. We
would encourage future studies to explore the feasibility and utility of
preference assessments, particularly in the within-task sessions. Also,
it should be recognized that the study spanned across two different
writing units: a how-to writing unit and a non-fiction writing unit.
Some days the students had isolated writing projects such as a Lepre-
chaun writing prompt. Students may have found some of the topics
more enjoyable than others or some of the daily assignments more
challenging than others. We were not able to equate the daily assign-
ments as the study used the natural classroom setting and incorpo-
rated choice into the lesson plan the general education teacher had
already created for each day. Differences in the type and content of
assignments may have impacted students’ behavior as well.
Fourth, there were two lost days of intervention: one lost session
of within-task choices during the first introduction of the intervention
and one lost session within each choice condition during the reintro-
duction of the intervention conditions. It would have been preferable
to have at least five data points in each phase, particularly for the
second introduction of the intervention conditions. Yet, due to the end
of the school year approaching, this was not possible. We encourage
future studies to be conducted on a timeline that could address this concern.
A closely related final consideration pertains to the scope of
the work. Having two students in the same class, with the intervention
conditions conducted by the classroom teacher on a classwide basis,
required careful consideration in terms of phase-change decisions.
For example, while quality indicators
recommend five data points be collected during the withdrawal phase,
we elected to reintroduce the choice conditions following the third data
point due to the clear countertherapeutic trend revealed for Tina. In
analyzing her data, there was a dramatic decrease in AET and increase
in disruptive behavior. Due to ethical considerations, choice conditions
were reintroduced.
Summary
Despite these limitations, this study extended the line of inquiry
on instructional choice, building on the work of Rispoli et al. (2013)
by replicating the A-B-A-B alternating treatments (within- and
across-task) design in an inclusive setting. In this class, teachers implemented
all phases, providing within- and across-task choices to all students.
Teachers monitored the disruptive behavior and academic engage-
ment of two students in one class who were identified by screening
procedures as needing more than Tier 1 efforts. Teachers assessed
treatment fidelity, attending carefully to baseline and intervention
components, with minimal university support. Results indicated teachers
were able to implement both choice conditions with high levels of
fidelity while collecting data on AET and disruptive behavior using
momentary time sampling. Although a functional relation was only
established for one student (Tina), both types of choice resulted in
increases in AET for both students, with Neal demonstrating higher
levels of AET during across-task conditions and Tina demonstrating
higher levels of AET during within-task conditions. In addition, there
were decreases in disruptive behavior associated with choice
for Tina—the student with initially lower levels of behavioral risk.
Finally, overall the intervention experience was viewed as accept-
able to most stakeholders according to social validity data. Further
research is needed to determine how low-intensity PBIS strategies can
be implemented by classroom teachers within inclusive contexts and
with limited university support to increase the AET of all students.
References
Cole, C. L., & Levinson, T. R. (2002). Effects of within-task choices on
the challenging behavior of children with severe develop-
mental disabilities. Journal of Positive Behavior Interventions, 4,
29–37. doi:10.1177/109830070200400106
Cook, B., & Tankersley, M. (Eds.). (2013). Effective practices in special
education. Boston, MA: Pearson.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior
analysis (2nd ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Dibley, S., & Lim, L. (1999). Providing choice making opportunities
within and between daily school routines. Journal of Behavioral
Education, 9, 117–132. doi:10.1023/A:1022888917128
DiCarlo, C. F., Baumgartner, J., Stephens, & Pierce, S. H. (2013). Using
structured choice to increase child engagement in low-preference
centres. Early Child Development and Care, 183, 109–124.
doi:10.1080/03004430.2012.657632
Drummond, T. (1994). The Student Risk Screening Scale (SRSS). Grants
Pass, OR: Josephine County Mental Health Program.
Dunlap, G., DePerczel, M., Clarke, S., Wilson, D., Wright, S., White, R.,
& Gomez, A. (1994). Choice making to promote adaptive be-
havior for students with emotional and behavioral challeng-
es. Journal of Applied Behavior Analysis, 27, 505–518.
Gast, D. L., & Ledford, J. R. (Eds.). (2014). Single case research meth-
odology: Applications in special education and behavioral sciences
(2nd ed.). New York, NY: Routledge.
Gresham, F. M., & Elliott, S. N. (2008). Social Skills Improvement System-
Rating Scales. Bloomington, MN: Pearson Assessments.
Gresham, F. M., Elliott, S. N., Vance, M. J., & Cook, C. R. (2011). Com-
parability of the social skills rating system to the social skills
improvement system: Content and psychometric compari-
sons across elementary and secondary age levels. School Psy-
chology Quarterly, 26, 27–44.
Horner, R. H. (2000). Positive behavior supports. In M. L. Wehmeyer
& J. R. Patton (Eds.), Mental retardation in the 21st century (pp.
181–196). Austin, TX: Pro-Ed.
Jolivette, K., Alter, P., Scott, T. M., Josephs, N. L., & Swoszowski, N. C.
(2013). Strategies to prevent problem behavior. In K. L. Lane,
B. G. Cook, & M. Tankersley (Eds.), Research-based strategies
for improving outcomes in behavior (pp. 22–33). Boston: Pearson.
Jolivette, K., Stichter, J. P., & McCormick, K. M. (2002). Making
choices—Improving behavior—Engaging in learning. TEACH-
ING Exceptional Children, 34(3), 24–29.
Jolivette, K., Wehby, J. H., Canale, J., & Massey, N. G. (2001). Effects
of choice-making opportunities on the behavior of students
with emotional and behavioral disorders. Behavioral Disorders,
26, 131–145.
Kern, L., & State, T. M. (2009). Incorporating choice and preferred ac-
tivities into classwide instruction. Beyond Behavior, 18, 3–11.
Lane, K. L., Little, A. L., Menzies, H. M., Lambert, W., & Wehby, J. H.
(2010). A comparison of students with behavioral challenges
educated in suburban and rural settings: Academic, social
and behavioral outcomes. Journal of Emotional and Behavioral
Disorders, 18, 131–148.
Lane, K. L., Menzies, H. M., Ennis, R. P., & Oakes, W. P. (2015).
Supporting behavior for school success: A step-by-step guide to key
strategies. New York, NY: Guilford Press.
Lane, K. L., Oakes, W. P., & Cox, M. (2011). Functional assess-
ment-based interventions: A university-district partnership
to promote learning and success. Beyond Behavior, 20, 3–18.
Lane, K. L., Oakes, W. P., & Ennis, R. P. (2012). Low-intensity support
strategies survey self-assessment: Knowledge, confidence, and use.
Unpublished manuscript.
Lane, K. L., & Walker, H. M. (2015). The connection between assess-
ment and intervention: How does screening lead to better
interventions? In B. Bateman, M. Tankersley, & J. Lloyd (Eds.),
Issues in special education. New York, NY: Routledge.
Lane, K. L., Wolery, M., Reichow, B., & Rogers, L. (2006). Describing
baseline conditions: Suggestions for study reports. Journal of
Behavioral Education, 16, 224–234.
Menzies, H. M., & Lane, K. L. (2012). Validity of the student risk
screening scale: Evidence of predictive validity in a diverse,
suburban elementary setting. Journal of Emotional and Behavioral
Disorders, 20, 82–91. doi:10.1177/1063426610389613
Morgan, P. L. (2006). Increasing task engagement using preference or
choice-making: Some behavioral and methodological factors
affecting their efficacy as classroom interventions. Remedial and
Special Education, 27, 176–187. Available at
http://rse.sagepub.com/content/27/3/176.short
Parker, R. I., Vannest, K. J., Davis, J. L., & Sauber, S.B. (2011). Com-
bining non-overlap and trend for single case research: Tau-U.
Behavior Therapy, 42, 284–299.
Ramsey, M. L., Jolivette, K., Patterson, D. P., & Kennedy, C. (2010). Us-
ing choice to increase time on-task, task completion, and ac-
curacy for students with emotional/ behavioral disorders in a
residential facility. Education and Treatment of Children, 33, 1–21.
Rispoli, M., Lang, R., Neely, L., Camargo, S., Hutchins, N., Davenport,
K., & Goodwyn, F. (2013). A comparison of within- and
across-task choices for reducing challenging behavior in
children with autism spectrum disorders. Journal of Behavioral
Education, 22, 66–83.
Romaniuk, C., & Miltenberger, R. G. (2001). The influence of prefer-
ence and choice of activity on problem behavior. Journal of
Positive Behavior Interventions, 3, 152–159.
Sailor, W. (2014). Advances in schoolwide inclusive school reform. Remedial
and Special Education, Online First. doi:10.1177/0741932514555021
Shogren, K. A., Faggella-Luby, M. N., Bae, S. J., & Wehmeyer, M. L.
(2004). The effect of choice-making as an intervention for
problem behavior: A meta-analysis. Journal of Positive Behavior
Interventions, 6, 228–237.
Simonsen, B., Fairbanks, S., Briesch, A., Myers, D., & Sugai, G. (2008).
Evidence-based practices in classroom management: Consid-
erations for research to practice. Education and Treatment of
Children, 31, 351–380.
Skerbetz, M. D., & Kostewicz, D. E. (2013). Academic choice for included
students with emotional and behavioral disorders. Preventing
School Failure, 57, 212–222. doi:10.1080/1045988X.2012.701252
Skinner, C. H. (2002). An empirical analysis of interspersal research:
Evidence, implications, and applications of the discrete task
completion hypothesis. Journal of School Psychology, 40, 347–368.
Sugai, G., & Horner, R. R. (2006). A promising approach for expanding
and sustaining school-wide positive behavior support. School
Psychology Review, 35, 245–259.
Tullis, C. A., Cannella-Malone, H. I., Basbigill, A. R., Yeager, A., Fleming,
C. V., Payne, D., & Wu, P. (2011). Review of the choice and pref-
erence assessment literature for individuals with severe to pro-
found disabilities. Education and Training in Autism and Develop-
mental Disabilities, 46, 576–595. Retrieved from
http://daddcec.org/Publications/ETADDJournal/ETDDDetailsPage/
Umbreit, J., Ferro, J., Liaupsin, C., & Lane, K. (2007). Functional be-
havioral assessment and function-based intervention: An effective,
practical approach. Upper Saddle River, NJ: Prentice-Hall.
Vannest, K. J., Parker, R. I., & Gonen, O. (2011). Single case research:
Web based calculators for SCR analysis (Version 1.0) [Web-based
application]. College Station, TX: Texas A&M University.
Retrieved April 21, 2013, from singlecaseresearch.org
Vaughn, B. J., & Horner, R. H. (1997). Identifying instructional tasks
that occasion problem behavior and assessing the effects of
student versus teacher choice during these tasks.
Witt, J. C., & Elliott, S. N. (1985). Acceptability of classroom inter-
vention strategies. In T. R. Kratochwill (Ed.), Advances in
school psychology (Vol. 4, pp. 251–288). Mahwah, NJ: Erlbaum.
