Kathleen Lynne Lane, David J. Royer, Mallory L. Messenger, Eric Alan Common,
Robin Parks Ennis, Emily D. Swogger
EDUCATION AND TREATMENT OF CHILDREN Vol. 38, No. 4, 2015
Abstract
Instructional choice is a low-intensity strategy that requires little preparation,
is easy to implement, and supports content instruction in the classroom. In
this study we explored the effectiveness of two types of instructional choice—
across-task and within-task choices—implemented classwide during writing
instruction by classroom teachers with limited university support in an in-
clusive first-grade classroom. Student participants were one boy (Neal) and
one girl (Tina) who were identified using academic and behavioral screening
procedures as needing more intensive supports in the classroom. Results es-
tablished a functional relation between choice conditions and increases in aca-
demic engaged time and decreases in disruptive behavior for Tina, but not for
Neal. Teachers functioned as both primary and reliability data collectors us-
ing momentary time sampling and implemented both choice conditions with
high levels of fidelity. Social validity was assessed from the perspectives of all
stakeholders. Limitations and future directions are discussed.
Pages 473–504
Method
Participants
Participants were two first-grade students (Neal and Tina
[pseudonyms]) attending a public elementary school in the Midwest
(see Tables 1 and 2). Students were identified through systematic
screening procedures that examined their risk index (e.g., moderate to
high) on the Student Risk Screening Scale (SRSS; Drummond,
1994) and their report card grades (e.g., progressing, limited progress)
in writing and working independently. Neal qualified for special
education services under the autism category (see Table 2).
Table 1
School Characteristics
Characteristic % n
Students^a (N = 604)
Male 49.83 301
Female 50.17 303
Ethnicity
Asian / Pacific Islander 21.69 131
Black 0.66 4
Hispanic 1.32 8
Two or more races 4.47 27
American Indian / Alaska Native 0.17 1
White 71.69 433
Grade level
Kindergarten 15.23 92
First 14.57 88
Second 15.40 93
Third 18.71 113
Fourth 18.38 111
Fifth 17.38 105
Free or reduced-price lunch eligible 2.81 17
Students with disabilities^b 6.60 36
Table 2
Characteristics of Student Participants
Student
Variable Neal Tina
Demographics
Age 6.11 7.04
Gender Male Female
Ethnicity White Asian
Screening
SRSS overall (Total Score) Moderate Risk (8) Moderate Risk (4)
Fall trimester report card
Writing Progressing Progressing
Works independently Progressing Progressing
SSiS rating scales (standard scores)
Social skills 81 100
Problem behaviors 111* 120*
Academic competence 93 88
Special education Yes (autism) No
Instructional sessions attended 28 28
% sessions observed: fidelity (n) 28.57 (8) 28.57 (8)
Note. SSiS = Social Skills Improvement System – Rating Scales (Gresham & Elliott, 2008b);
SRSS = Student Risk Screening Scale (Drummond, 1994; 0-3 = low risk; 4-8 = moderate
risk; 9-21 = high risk). *Scores reflect above-average levels of hyperactivity/inattentiveness.
Neal also scored above average on the autism spectrum subscale, as expected.
Intervention Procedures
In this study we explored the independent variable of instruc-
tional choice, defined as “. . . opportunities to make choices means that
the student is provided with two or more options, is allowed to inde-
pendently select an option, and is provided with the selected option”
(Jolivette et al., 2002, p. 28). Instructional choice was selected by first
surveying all three teachers to determine their knowledge, confidence,
and perceived utility of 10 low-intensity supports: behavior-specific
praise, active supervision, opportunities to respond, precorrection,
instructional choice, instructive and corrective feedback, group con-
tingencies, proximity, self-monitoring, and behavior contracts (Lane,
Oakes, & Ennis, 2012; Low-Intensity Support Survey Self-Assessment:
Knowledge, Confidence, and Use). Teachers were provided with a 4-point
Likert-type scale ranging from 0 to 3, with higher scores indicating
higher levels of knowledge about the strategy, higher confidence in
their ability to implement the strategy, and more positive views that
the strategy would be useful in their teaching (see Table 3). The special
education and instructional support teachers reported higher knowledge
and use scores than the general education teacher, both across all
strategies and on the instructional choice item specifically.
Scores were reviewed with the teachers during a meeting
with the primary investigator, and collectively the decision was made
to explore instructional choice.
We examined two types of instructional choice: (a) across-
task choices: the option to choose the order in which to complete
assigned tasks; or (b) within-task choices: options of how to com-
plete an assigned task (e.g., writing instrument). The two choice
options were randomly assigned to intervention days. During the
first introduction of the intervention, there were six sessions during
which within-task choices were planned (with one lost data point
due to a change in the school schedule) and another five sessions
during which across-task choices were planned, all of which were
conducted. During the confirmation phase (reintroduction of the
intervention conditions), four to five dates were randomly selected
for each condition. However, due to end-of-year schedule changes
and the primary observer being called out of the classroom during
data collection, one session was lost from each task condition. Thus,
during B2, there were four sessions for across-task choices and three
sessions for within-task choices.
Across-task choices. During intervention phases, students
selected the sequence in which they completed tasks during the daily
writing block. At the end of the teacher’s mini-lesson, she wrote the
Table 3
Characteristics of Teacher Participants and Knowledge, Confidence, and Use of
Low-Intensity Support Strategies
Teacher Primary Role
General Special Support
Variable Education Education Provider
Demographics
Age 34 23 37
Gender Female Female Female
Ethnicity White White White
Years teaching experience 13 2 15
Years teaching experience current school 12 2 15
Certified in the area currently teaching Yes Yes Yes
Highest degree earned Master’s Bachelor’s Master’s
Completed course in classroom management Yes Yes Yes
Professional development in academic screening No Yes Yes
Professional development in behavior screening No Yes Yes
Low-intensity support strategies survey (Lane, Oakes, & Ennis, 2012), M (SD); range = 0-3
Knowledge 1.80 (0.63) 2.00 (0.00) 2.40 (0.52)
Confidence 1.80 (0.63) 1.80 (0.42) 2.40 (0.52)
Use 2.00 (0.00) 2.30 (0.82) 2.70 (0.48)
Instructional choice item
Knowledge 1 2 2
Confidence 1 2 2
Use 2 3 3
tasks that needed to be done on the board with boxes next to them. The
teacher would explicitly say, “Your choice today is to choose the order
that you finish these tasks.” Tasks included at least two options, with
one day offering as many as four options. For example, the teacher
wrote “write 2 pages in nonfiction book” and “do 2 illustrations in
nonfiction book” on the board with boxes next to them. Then, the
teacher explained they could choose the order that they finished the
two tasks. Another example of across-task choices was when students
were writing how-to books. The tasks that needed to be completed
were “read a completed how-to book to a partner,” “write 2 new steps
to your how-to book,” and “draw 2 new illustrations in your how-to
book.” Students were allowed to choose the order in which they com-
pleted the three tasks. The teacher praised students for making their
self-selected choice.
Within-task choices. On days selected for within-task choice,
participants were offered a choice of materials to complete activities
and/or a choice of environmental factors. At the end of her mini-lesson,
the teacher still reviewed the tasks the students needed to complete
in the remaining time in the writing block. On within-task days, the
teacher would number the tasks on the board and tell students this
was the order in which they needed to complete the tasks. Following
the description of the tasks, the teacher would state the choice for the
day. On some within-task days, students were able to choose which
type of art supply they wanted to use for their illustrations. On other
within-task days, students were able to choose where in the room
they completed their tasks or the partner with whom they worked on
the tasks. As with the across-task condition, the teacher praised
students for making their self-selected choice.
Treatment integrity. Treatment integrity was measured using a
behavior component checklist for baseline conditions (14 items) and
both intervention conditions (5 items each). We collected data on base-
line practices during each phase to make sure that initial baseline pro-
cedures were still in place during each intervention condition, and
that the only change was the introduction of the across-task or
within-task interventions, with the five items detailing the tactics for
the specific choice component (e.g., Teacher offered student the opportunity
to ________; Student made choice within 30 s; Teacher praised student for
making a choice selection; Teacher made _______ choices available; Teacher
praised student for completing assigned tasks). Each item was scored on
a 3-point Likert-type scale: 0 = not implemented, 1 = partially implemented,
2 = fully implemented. The special education teacher collected
treatment integrity data daily, and the instructional support teacher
collected reliability of treatment integrity data. We computed integ-
rity of baseline practices, across-task choice, and within-task choice
conditions for each student by dividing the sum of items observed by
the total items possible for each session, multiplying the quantity by
100 to obtain a percentage (see Table 4).
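The session-level integrity computation can be sketched as follows (an illustrative sketch only, not part of the study materials; the function name and example scores are ours, assuming each checklist item contributes its 0-2 rating toward the points possible):

```python
def integrity_percent(item_scores, max_per_item=2):
    """Session-level treatment integrity: points earned / points possible x 100.

    item_scores: checklist ratings (0 = not implemented, 1 = partially
    implemented, 2 = fully implemented).
    """
    possible = max_per_item * len(item_scores)
    return 100.0 * sum(item_scores) / possible

# e.g., a 5-item choice-condition checklist with one partially implemented item
print(integrity_percent([2, 2, 1, 2, 2]))  # 90.0
```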
Training. Prior to implementing the intervention, the three
teachers listened to a 10 min voiced-over PowerPoint describing each
intervention; reviewed the treatment integrity protocols; and com-
pleted a 10-item quiz which included examples of both types of
choice, intervention procedures, and instructions on how to complete
the forms. A meeting was held using web-based technology to answer
questions prior to and following the training. The special education
teacher served as the primary treatment integrity data collector and
the instructional support teacher assessed reliability in at least 25% of
days for each condition, including baseline.
Table 4
Social Validity and Treatment Fidelity by Student and Phase

                                ---- Treatment Integrity ----                                ---- Social Validity ----
Student  Phase (No. Sessions)   Baseline Practices % (SD)  Strategy % (SD)  IOA^a % (n)     IRP-15^b (T1/T2/T3)  CIRP
Neal     A1 Baseline (8)        81.56 (12.37)              --               100.00 (2)      90/77/89             24
         B1 Intervention        --                         --               100.00 (3)
           Within-task (5)      75.29 (9.83)               100.00 (0.00)
           Across-task (5)      82.66 (13.60)              88.00 (10.95)
         A2 Withdrawal (3)      82.05 (4.44)               --               100.00 (1)
         B2 Confirmation        --                         --               100.00 (2)      80/83/88             42
           Within-task (3)      87.18 (11.75)              100.00 (0.00)
           Across-task (4)      82.87 (8.08)               100.00 (0.00)
Tina     A1 Baseline (8)        85.00 (13.09)              --               100.00 (2)      90/78/83             27
         B1 Intervention        --                         --               93.33 (3)
           Within-task (5)      77.88 (11.27)              100.00 (0.00)
           Across-task (5)      85.00 (9.79)               92.00 (10.95)
         A2 Withdrawal (3)      80.56 (12.73)              --               100.00 (1)
         B2 Confirmation        --                         --               100.00 (2)      82/86/82             28
           Within-task (3)      88.89 (12.73)              100.00 (0.00)
           Across-task (4)      79.17 (9.28)               100.00 (0.00)

Note. IRP-15 = Intervention Rating Profile (Witt & Elliott, 1985); CIRP = Children's
Intervention Rating Profile (Witt & Elliott, 1985); IOA = interobserver agreement;
T1 = general education teacher; T2 = special education teacher; T3 = support provider
teacher.
^a IOA percentage for treatment integrity was calculated via item-by-item analysis;
the n reported represents the number of sessions within the phase observed by the
support teacher. ^b IRP-15 scores can range from 15-90, with higher scores indicating
higher social validity.
Baseline
During the baseline condition, the teacher conducted a whole-
group, mini-lesson to teach and introduce a concept. Typically, the
students sat on the carpet in the front of the room for the mini-lesson.
The students were expected to sit quietly and listen to the teacher.
Often the mini-lesson included a read-aloud with a mentor text
or teacher modeling of her own writing. Once the mini-lesson was
complete, the teacher told the students their tasks for the rest of the
period, and the students went back to their tables to work on writing.
Throughout the work time, the teacher monitored the room—working
with individual students as needed. Baseline practices were broken
group for more than 5 s, (c) out of the assigned instructional area,
follow-up activity (e.g. drawing a picture).
Disruption referred to any behavior that interrupted classroom
instruction, or prevented students from engaging in classroom activities.
Examples included (a) talking to peers about off-topic items, (b) being
out of seat without permission, (c) talking out without raising hand, (d)
engaging in activities other than those requested by the teacher, (e) speak-
ing in an elevated voice, (f) hitting desk loudly with hands or objects,
(g) arguing with adults or students, (h) refusing to work, or (i) touching
others’ property without permission. Non-examples included (a) looking
at teacher or materials during instruction, (b) working independently or
with designated group, (c) being in assigned seat or area, (d) raising hand
to ask questions, (e) speaking in an indoor voice, (f) using materials as
they are intended, or (g) following teacher directions.
Measurement. Data on AET and disruption were collected using
momentary time sampling in 2-min intervals for the duration of the
independent writing segment of the writing block. Momentary time
sampling was selected as the measurement system because it allows
teachers to serve as data collectors without interfering with instruction
(Cooper, Heron, & Heward, 2007). Additionally, disruptive episodes
were not brief or uniform in length; instead, most instances of
disruption extended for several minutes.
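Under momentary time sampling, the observer records whether the behavior is occurring at the instant each interval ends, and a session score is the percentage of intervals scored as occurrences. A minimal sketch (the function name and example data are ours):

```python
def mts_percent(occurrences):
    """Momentary time sampling: percent of interval endpoints at which the
    target behavior (e.g., academic engagement) was observed occurring.

    occurrences: one bool per 2-min interval in the observed segment.
    """
    return 100.0 * sum(occurrences) / len(occurrences)

# an 18-min segment -> nine 2-min intervals; engaged at 6 of 9 checks
print(round(mts_percent([True, True, False, True, False,
                         True, True, False, True]), 2))  # 66.67
```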
The special education teacher served as the primary data collec-
tor, collecting one probe during writing each day. The instructional
support teacher collected reliability data for at least 25% of each phase
(e.g., about one day per week). Prior to collecting data, both the special
education teacher and the instructional support teacher completed a
10 min voiced-over PowerPoint, reviewed the data collection forms,
and completed a 10-item quiz related to measurement issues, includ-
ing how to complete the forms. A meeting was held using web-based
technology to answer any questions following the training. Next,
the two teachers practiced collecting data, before collecting baseline
data, using the videos provided with the Systematic Screening for
Behavior Disorders (SSBD). They sat next to each other for three
10-min sessions and used a data collection sheet and a MotivAider.
Each rater collected data independently and, following each session,
compared results and computed IOA using point-by-point agreement:
the number of intervals in agreement was divided by the sum of the
intervals in agreement and disagreement (total intervals), and the
quotient was multiplied by 100 to obtain a percentage. The three
10-min sessions had IOAs of 100%, 90.00%, and 90.00%, respectively,
with an overall mean IOA of 93.33%. Three consecutive sessions with
agreement ≥ 90% was established as the minimum criterion.
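The point-by-point agreement calculation can be sketched as follows (an illustration with hypothetical interval records; the function name is ours):

```python
def point_by_point_ioa(obs1, obs2):
    """Interval-by-interval IOA: agreements / (agreements + disagreements) x 100."""
    if len(obs1) != len(obs2):
        raise ValueError("observers must score the same number of intervals")
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100.0 * agreements / len(obs1)

# a 10-interval practice session with one disagreement (1 = behavior observed)
primary     = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
reliability = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(point_by_point_ioa(primary, reliability))  # 90.0
```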
Social Validity
We assessed social validity prior to beginning and after complet-
ing the testing of the interventions. The general education teacher, spe-
cial education teacher, and instructional support teacher completed the
Intervention Rating Profile (IRP-15; Witt & Elliott, 1985) to obtain their
opinions regarding the importance of intervention goals, the accept-
ability of the procedures, and importance of the intervention outcomes.
Teachers rated 15 statements regarding procedures and outcomes (e.g.,
“This would be an acceptable intervention for the child’s needs”) on
a six-point Likert-type scale ranging from 1 = strongly disagree to 6 =
strongly agree for each student. Total scores were summed (range 15 to
90) with higher scores suggesting higher social validity.
Students completed a modified version of the Children's Intervention
Rating Profile (CIRP; Witt & Elliott, 1985) to obtain their views, with
minor wording changes in the items to soften the language. Students
rated seven items on a six-point Likert-type scale ranging from 1 = I do
not agree to 6 = I agree. Negatively worded items were reverse scored
and summed with the remaining items (range 7 to 42), with higher
scores suggesting greater social validity.
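The two scoring rules can be sketched as follows (an illustrative sketch; the function names are ours, and which CIRP items are negatively worded is instrument-specific, so the indices shown are hypothetical):

```python
def score_irp15(ratings):
    """IRP-15: sum of 15 items rated 1 (strongly disagree) to 6 (strongly
    agree); totals range from 15 to 90, higher = greater social validity."""
    assert len(ratings) == 15
    return sum(ratings)

def score_cirp(ratings, negatively_worded):
    """CIRP: 7 items rated 1 (I do not agree) to 6 (I agree); negatively
    worded items are reverse scored (1 <-> 6, 2 <-> 5, 3 <-> 4) before
    summing, so totals range from 7 to 42.

    negatively_worded: 0-based indices of the reverse-scored items
    (illustrative here, not taken from the CIRP itself).
    """
    assert len(ratings) == 7
    return sum(7 - r if i in negatively_worded else r
               for i, r in enumerate(ratings))

# a student endorsing "I agree" (6) on every item, with two hypothetical
# negatively worded items at indices 2 and 4
print(score_cirp([6, 6, 6, 6, 6, 6, 6], {2, 4}))  # 32
```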
Experimental Design and Statistical Analysis
We utilized single-case design methodology for this intervention.
Specifically, we implemented an A-B-A-B withdrawal design with
alternating treatments over eight weeks, beginning with a baseline
phase for Neal and Tina. Data paths for all variables were analyzed
using visual inspection techniques focusing on stability, level, and
trend (Gast & Ledford, 2014). Nonparametric effect sizes for com-
parison of A-B contrast to measure the direct impact of instructional
choice were calculated using Tau-U omnibus effect sizes (Parker,
Vannest, Davis, & Sauber, 2011). Tau-U was selected over other
non-overlap methods because it offers greater statistical power, is
distribution free, and controls for positive baseline trend. To calculate
Tau-U, all data were entered
into the online Tau-U calculator (Vannest, Parker, & Gonen, 2011)
to compute phase change contrast and weighted average Tau-U for
each participant, across all independent variables. We controlled for
positive baseline trend across all contrasts using the Tau-U calcula-
tor. Phase change decisions were guided by the academic engage-
ment variable as the most proximal variable of interest. A table of
mean and slope changes across phases is also included (see Table
5). Social validity and treatment integrity data were analyzed using
descriptive statistics (see Table 4).
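For intuition, the simple (uncorrected) Tau for a single A-B contrast can be sketched as below. Note that the study used the online Tau-U calculator, which additionally corrects for positive baseline trend, so this sketch is not a substitute for that procedure; the function name and data are hypothetical:

```python
def tau_ab(phase_a, phase_b):
    """Simple Tau for an A-B contrast: compare every baseline point with
    every intervention point; (improving pairs - deteriorating pairs)
    divided by total pairs. Ties count toward neither direction."""
    pos = sum(b > a for a in phase_a for b in phase_b)
    neg = sum(b < a for a in phase_a for b in phase_b)
    return (pos - neg) / (len(phase_a) * len(phase_b))

# hypothetical AET percentages for a baseline and a choice condition
baseline = [40, 55, 35, 50]
choice   = [60, 75, 55, 80]
print(tau_ab(baseline, choice))  # 0.9375
```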
Table 5
Academic Engaged Time and Disruptive Behavior: Mean and Slope by Phase

                               ------- Academic Engaged Time -------        ------- Disruptive Behavior -------
Student  Phase (No. Sessions)  M %    SD %   Slope   SEyx   IOA^a % (n)     M %    SD %   Slope   SEyx   IOA^a % (n)
Neal     A1 Baseline (8)       59.16  27.33  -4.75   26.71  92.86 (2)       6.60   15.57  4.20    12.62  100 (2)
         B1 Intervention       54.26  26.52  --      --     --              9.17   13.86  --      --     --
           Across-task (5)     65.74  23.00  -3.42   25.81  80.00 (1)       3.33   7.46   3.33    6.09   100 (1)
           Within-task (5)     42.78  26.90  -10.83  23.96  87.50 (2)       15.00  17.08  4.17    18.19  100 (2)
         A2 Withdrawal (3)     58.55  18.39  -2.41   25.78  100 (1)         3.70   6.41   5.56    4.54   100 (1)
         B2 Confirmation       79.17  12.03  --      --     --              12.10  19.82  --      --     --
           Across-task (4)     79.17  15.96  11.67   6.45   95.00 (2)       15.97  26.68  -15.84  21.00  95.00 (2)
           Within-task (3)     79.17  7.22   0.00    10.21  -- (0)          6.94   6.36   2.09    8.50   -- (0)
Tina     A1 Baseline (8)       49.18  18.27  -4.32   16.09  92.86 (2)       21.82  15.40  3.84    13.18  92.86 (2)
         B1 Intervention       52.69  15.60  --      --     --              14.12  9.72   --      --     --
           Across-task (5)     58.71  14.57  1.25    16.67  90.00 (1)       17.14  8.15   0.08    9.41   90.00 (1)
           Within-task (5)     46.67  15.64  -2.67   17.39  100 (2)         11.11  11.11  -4.44   9.94   77.09 (2)
         A2 Withdrawal (3)     25.93  21.03  -19.45  11.34  100 (1)         20.37  22.45  22.22   4.53   100 (1)
         B2 Confirmation       66.18  20.14  --      --     --              7.57   12.36  --      --     --
           Across-task (4)     60.62  20.81  11.07   18.53  87.86 (2)       10.12  15.84  -1.19   19.30  92.86 (2)
           Within-task (3)     73.61  20.55  2.09    28.92  -- (0)          4.17   7.22   0.00    10.21  -- (0)

Note. IOA = interobserver agreement; SEyx = standard error. ^a IOA is reported as mean value.
Results
[Figure residue: data paths plotted by date, with series for baseline (no choice), across-activity choice, and within-activity choice conditions.]
Treatment Integrity
Table 4 shows summary statistics for the intervention components
implemented with Neal and Tina across all 28 days of the intervention
study. In brief, for Neal, mean baseline-practice integrity ranged from
75.29% to 87.18% across all phases of the project. Treatment integrity
was 100% for the within-task condition in both introductions of the
intervention. Treatment integrity was slightly lower for the across-task
condition (88.00%) during the first introduction of the intervention,
but increased to 100% during the second introduction.
This same pattern was observed for Tina. Again, baseline prac-
tices remained in place across all phases of the project with close to
80% integrity, with mean scores ranging from 77.88 to 88.89%. Treat-
ment integrity was 100% for Tina for the within-task condition in both
introductions of the intervention. Treatment integrity was slightly
lower for the across-task condition (92.00%) during the first introduc-
tion of the intervention, but then increased to 100% during the second
introduction of the intervention.
Student Performance: Neal
Academic Engaged Time. Figure 1 shows the results for Neal.
In the first baseline phase (A1), Neal’s AET was variable, ranging from
11.11% to 100%, with a mean of 59.16% (SD = 27.33), and a downward
slope of -4.75 (SE = 26.71). AET was as high as 100% on the third day
of baseline (when a paraprofessional was in the room spending most
of her time with Neal) and declined to 11.11% on the last day (day 8)
of A1. Due to the countertherapeutic trend for Neal, the decision was
made to begin the intervention phase (B1).
During the first introduction of the interventions (phase B1), we
examined the two data paths: one for across-task choice conditions
and one for within-task choice conditions. For within-task choice
sessions, Neal’s engagement increased from 11.11% on the last day
of baseline to 66.67% during the first within-task choice session. For
the next three within-task choice sessions, AET varied between 33.33
to 58.33%. However, on the last within-task choice session, Neal’s
AET score was 0% resulting in a more pronounced downward trend
(-10.83). For across-task choice sessions, Neal’s AET was initially
100%, decreasing to 37.50% AET on the second session, but with an
upward trend for the final data points. The final three data points sug-
gested a mean level of engagement of 63.74% AET.
When the intervention was withdrawn (A2), Neal’s AET returned
to a mean level of 58.55%, commensurate with his average AET during
baseline. Daily AET percentages fluctuated between 37.50% and a
high of 71.48%.
In the final phase (B2), we examined the data paths for within- and
across-task choices. During the within-task choice sessions, AET
increased in level, ranging from 75.00-87.50% (M = 79.17%; SD =
7.22%), suggesting high levels of engagement and very limited
variability in performance. During the across-task choice sessions,
AET increased to the same mean level, with daily AET ranging from
66.67% to 100% engagement (M = 79.17%; SD = 15.96%) and an
accelerating trend (slope = 11.67; SE = 6.45). However, omnibus effect
sizes for contrasts between A1-B1 and A2-B2 were not significant for
either variable: within-task choice (Tau-U = 0.36, p = 0.25) and
across-task choice (Tau-U = 0.42, p = 0.15).
Disruptive Behavior. In the first baseline phase (A1), Neal did not
engage in any disruptive behavior during the first six days. During
the last two days of baseline, disruptive behavior increased sharply,
to 44.44% on the last day (when his AET was 11.11%), yielding a
slope of 4.20 (SE = 12.62) and a phase average of 6.60% (SD = 15.57).
In the first intervention phase (B1), disruptive behavior was
lowest during the across-task choice sessions at 3.33% (SD = 7.46),
with only one day showing any disruptive behavior (4/7/14; 16.67%).
During the within-task choice sessions, disruptive behavior was
above baseline levels at 15.00% (SD = 17.08), with high variability
across sessions.
When the choice conditions were withdrawn, disruption was
very low. There was no disruptive behavior occurring during the
first two days and nominal disruption (11.11%) on the final day of the
withdrawal phase.
When the choice conditions were reintroduced, Neal’s disrup-
tive behavior increased slightly during the within-task choice sessions
(M = 6.94, SD = 6.36) and across-task choice sessions (M = 15.97, SD =
26.68). During the first across-task choice session on 4/17/2014 in this
phase, disruption was very high at 55.56%. The disruption decreased
dramatically in subsequent across-task choice conditions, with two
days of no disruption and one day of 8.33% disruption. An omnibus
effect size for contrasts between A1-B1 and A2-B2 were not significant
for either variable: within-task choice (TAU-U = 0.04, p-value = 0.88)
and across choice (TAU-U = -0.22 p-value = 0.46).
In sum, for Neal, results of visual inspection did not establish a
functional relation between the introduction of across-task or
within-task choice conditions and increases in AET. However, when
both choice conditions were reintroduced, engagement increased to
79.17% and levels stabilized across conditions, with across-task
choice showing a positive trend, suggesting improvements during the
second intervention condition. Furthermore, a functional relation was not
Discussion