School Psychology Review, 2011, Volume 40, No. 1, pp. 158–167

RESEARCH BRIEF

Determining an Instructional Level for Early Writing Skills

David C. Parker, Kristen L. McMaster, and Matthew K. Burns


University of Minnesota

Abstract. The instructional level is helpful when identifying an intervention for math or reading, but researchers have yet to investigate whether the instructional-level concept can be applied to early writing. The purpose of this study was to replicate and extend previous research by examining technical features of potential instructional-level criteria for writing. Weekly writing performance was assessed with 85 first-graders over 12 weeks using Picture-Word and Sentence Copying prompts. Data from the students with the highest slopes were used to derive instructional-level criteria. Several scoring procedures across Picture-Word and Sentence Copying prompts produced reliable alternate-form correlations and statistically significant relationships with a standardized writing assessment. Determining an instructional level in writing appears feasible; however, further research is needed to examine the instructional utility of this approach.

Writing skills are essential for satisfactory academic progress during kindergarten through twelfth-grade education and for later vocational success (Graham & Perin, 2007), but writing problems often go undetected until late elementary or middle school, when they become increasingly difficult to remediate (Baker, Gersten, & Graham, 2003). Early identification and intervention are critical for preventing the long-term negative consequences of persistent writing problems (Berninger, Nielsen, Abbott, Wijsman, & Raskind, 2008). Given that previous research has established that early writing skills (e.g., transcription skills) are related to compositional fluency (Graham, Berninger, Abbott, Abbott, & Whitaker, 1997), the successful development of these skills might establish a foundation on which to develop later writing skills, contributing to the reverse of the national trend in poor writing outcomes (Salahu-Din, Daane, & Persky, 2008).

This research was supported in part by Grant H324H030003 awarded to the Institute on Community
Integration and the Department of Educational Psychology, College of Education and Human Develop-
ment, at the University of Minnesota, by the Office of Special Education Programs in the U.S. Department
of Education.
Correspondence regarding this article should be addressed to David Parker, 250 Education Sciences
Building, 56 East River Rd., Minneapolis, MN 55455; e-mail: parke384@umn.edu
Copyright 2011 by the National Association of School Psychologists, ISSN 0279-6015


The learning hierarchy (Haring & Eaton, 1978) is an intervention heuristic that could guide intervention development for writing problems. In using the learning hierarchy, interventions are identified by matching student skill with one of four phases of student learning (Haring & Eaton, 1978). First, interventions focus on skill accuracy (the acquisition phase) through high modeling and frequent cueing. Next, interventions are targeted to enhance the speed with which the skill is performed (fluency phase) through additional practice and contingent reinforcement. Once a student can accurately and fluently exhibit the skill, efforts can focus on the later phases of maintenance and generalization.

Although research supports the learning hierarchy as an intervention heuristic (Burns, Codding, Boice, & Lukito, 2010), it remains unknown when the intervention focus should change. The instructional level is a potential criterion that could be used to identify whether the intervention should focus on acquisition, fluency, or maintenance/generalization. Gickling and Armstrong (1978) operationally defined the instructional level for reading as material in which the student could read 93%–97% of the words. Reading less than 93% of the words represented a frustration level, and exceeding 97% was an independent level. Researchers have found that task completion, task comprehension, and time on task increased when instructional-level material was used (Gickling & Armstrong, 1978; Treptow, Burns, & McComas, 2007). Moreover, students provided with an acquisition intervention (i.e., high modeling and cueing) to facilitate an instructional level in reading experienced increased and sustained growth over a period of 15 weeks compared to a randomly assigned control group (Burns, 2007).

Math intervention research also supports the instructional level as a decision-making criterion. Burns, VanDerHeyden, and Jiban (2006) empirically derived instructional-level criteria for math by computing slopes of growth and finding the mean baseline score for students with the highest growth. Students who did not experience high growth rates may have (a) demonstrated proficient skills before instruction (i.e., an independent level) and had little room to grow, or (b) started too low (i.e., at a frustration level) such that the instruction did not adequately address their learning needs (Burns et al., 2006). Analysis of a subset of the data used for validation showed that students who had initial math scores within the instructional level made the greatest gains over time (Burns et al., 2006). Recent meta-analytic research that used the Burns et al. (2006) instructional-level criteria found stronger effects for acquisition interventions (modeling and cueing) provided for students with frustration-level skill than for those whose baseline performance represented an instructional level (Burns et al., 2010).

Identifying an instructional level for early writing skills could help interventionists determine the type of intervention struggling writers need (e.g., modeling vs. practice). To determine a student's instructional level in other academic domains, the interventionist assesses student skill within a specific domain by sampling the behavior for a short time period (e.g., 1–4 min) from a specific instructional stimulus (e.g., a passage that will be read as part of instruction, or a single-skill math probe such as single-digit multiplication), and then computes either the percentage of the words read correctly for reading or the rate at which the skill was completed for math (e.g., digits correct per minute). The resulting data are sufficiently reliable for instructional decisions in reading and math (Burns et al., 2000, 2006), but less is known about the adequacy of assessments for instructional decision making in writing.

One assessment approach that has shown promise for yielding technically sound data for instructional decision making in writing is curriculum-based measurement (CBM; Deno, Mirkin, & Marston, 1982), which might serve as an approach for identifying students' instructional level in writing. CBM for writing typically consists of brief prompts to which students respond for 3–5 min and that are scored for the number of words written (WW), words spelled correctly (WSC), or correct word sequences (CWS; Videen, Deno, & Marston, 1982). Several CBM options for assessing student progress provide information about early writing skills that are important for later writing development (Graham et al., 1997). For example, McMaster, Du, and Petursdottir (2009) found that CBM using sentence copying or pictures with words produced promising technical characteristics, and subsequent research showed these measures were sensitive to growth over time (McMaster et al., 2011).

Whereas previous research has established that early writing CBM is promising for measuring progress in early writing, less is known about which measures and assessment procedures can be used to directly inform intervention. Given that acquisition interventions at the instructional level correspond to positive learning outcomes in other academic domains, research is needed to determine whether an instructional level can be identified for writing. The purpose of the current study was to replicate Burns and colleagues' (2006) study of instructional level in math by applying their methods to writing. Specifically, we examined the reliability and criterion validity of potential instructional-level estimates for beginning writing based on different types of prompts and scoring procedures. The following research questions guided the study: (a) To what extent do different prompts and scoring procedures affect the reliability of writing assessment data? (b) To what extent do different prompts and scoring procedures affect estimates of the criterion validity of writing assessment data? (c) What mean initial writing scores are linked to the highest rates of growth during weekly progress monitoring of writing? (d) How reliable are instructional-level categories based on empirically derived criteria? (e) How well do instructional-level categories relate to a standardized measure of writing?

Method

Setting and Participants

Data for this study were drawn from a larger study of the technical features of slopes produced from weekly administered early writing CBM prompts (McMaster et al., 2011). The larger study was conducted in a Midwestern urban school district with five first-grade classrooms from two schools that were selected by convenience sampling. As described in the larger study, all consenting students participated, resulting in 85 students (51% male). Forty-one percent were White, 28% Black, 26% Hispanic, 2% Native American, and 2% Asian American. Fifty-seven percent were eligible for the federal free or reduced-price lunch program, 21% were English learners, and 17% received special education.

Measures

CBM tasks. Participants in the larger study completed several CBM tasks on a weekly basis for 12 weeks. In the present study, we examined data from two tasks completed each week that were informed by the current understanding of early writing development (i.e., that transcription skills play a critical role in writing development; Graham et al., 1997) and examined for use with first-grade students (McMaster et al., 2009). Sentence Copying consisted of packets of eight pages, with three sentences on each page. Participants were instructed to copy an example sentence at the top of the first page (e.g., "We have one cat."). Then, they were instructed to copy the remaining sentences and to stop after 3 min. Alternate-form reliability on Sentence Copying using the current scoring procedures ranged from r = .63 to .80, and criterion validity with a standardized norm-referenced writing measure ranged from r = .23 to .50 (McMaster et al., 2011).

Picture-Word prompts consisted of words with a picture above each word. Participants wrote a sentence using the word provided. Before the task, the examiner drew a picture (e.g., a tree) on the board and wrote the word underneath. Then, the examiner asked the students to generate sentences using the word. After allowing the students to practice, the examiner instructed them to write as many sentences as they could using the words and pictures on their probes. After 3 min, the examiner instructed participants to stop. Alternate-form reliability on Picture-Word prompts using the current scoring procedures ranged from

r = .70 to .77, and criterion validity ranged from r = .23 to .54 (McMaster et al., 2011).

Writing samples were scored using words written, words spelled correctly, and correct word sequences. A word was defined as at least two letters written in sequence, or a single-letter word such as "I" or "a" (Deno, Mirkin, & Marston, 1980). Words were judged as spelled correctly by scoring them as a computer would score them (i.e., syntax and semantics were not taken into account), and WSC was computed by subtracting the number of all words judged as spelled incorrectly from the total words written. A correct word sequence was defined as any two adjacent, correctly spelled words that are acceptable within the context of the sample to a native speaker of English (Videen et al., 1982).

Test of Written Language—3rd Edition (TOWL-3). The TOWL-3 (Hammill & Larsen, 1996) is a comprehensive test of written language designed for students from 7 years to 17 years 11 months of age. The Spontaneous Writing subtest (Form A) was group administered to all participants. Students were presented with a picture of astronauts, spaceships, and construction activity; asked to plan a story about the picture; and then asked to write as much as they could in 15 min. Writing samples were scored based on Contextual Conventions (capitalization, punctuation, and spelling), Contextual Language (quality of vocabulary, sentence construction, and grammar), and Story Construction (e.g., quality of plot, prose, character development, and interest). Alternate-form reliabilities for Spontaneous Writing for 7-year-olds ranged from r = .60 to .87, and the measure is reported to correlate well with other standardized writing measures (r = .50; Hammill & Larsen, 1996).

Procedures

All writing prompts were administered by a trained graduate student during Week 1 and by the classroom teachers thereafter. Fidelity observations indicated that measures were administered with high levels of accuracy. Scorers included the first and second authors, four additional graduate research assistants (all special education or school psychology students), and one special education teacher. All scorers had scoring experience on previous CBM projects and received training for this project. Mean interrater agreement was 95% for both CBM prompt types and 88% for the TOWL-3 Spontaneous Writing score. See McMaster et al. (2011) for a complete description of prompt administration, scoring, and interrater agreement.

Data Analyses

The first step was to compute reliability coefficients for both fluency and accuracy metrics of writing performance. Fluency metrics consisted of the total scores students received for the two prompt types using WW, WSC, and CWS (e.g., 25 CWS on the Picture-Word prompt). Accuracy metrics were computed for the two prompt types by dividing the total number of WSC by the total number of words written, and the total number of CWS by the total number of word sequences. For example, a student who produced 25 WSC out of a total of 30 words on Sentence Copying would have an accuracy score of 83%. Fluency and accuracy scores for Weeks 2 and 3 were then correlated using Pearson product-moment correlation coefficients (the first week of data was omitted because of potential task novelty). Next, data that produced acceptable reliability coefficients were evaluated for criterion-related validity by correlating Sentence Copying and Picture-Word scores at Week 2 with the age-based total of the TOWL-3 standard scores using a Pearson product-moment correlation.

Fluency data were subsequently converted to categories of frustration, instructional, and independent levels. To make these conversions, slopes of growth over Weeks 2–12 of progress monitoring were computed using ordinary least-squares regression, which represented average weekly growth in WW, WSC, and CWS for each student. Next, we identified students whose slopes of growth equaled or exceeded the 66th percentile of all the slopes within the sample (as in Burns et al.,

Table 1
Means, Standard Deviations, and Correlation Coefficients for Fluency and Accuracy Scores for Sentence Copy and Picture-Word Prompts and Accompanying Scoring Procedures

                                     Fluency                       Accuracy
                             Probe 2      Probe 3           Probe 2      Probe 3
Prompt / Procedure           M     SD     M     SD    r     M     SD     M     SD    r

Picture-Word
  Words written             17.0   8.4   18.4   8.6  .71*
  Words spelled correctly   13.4   7.7   15.0   8.6  .67*  76.1  23.4   77.8  24.6  .52*
  Correct word sequences    11.9   8.6   13.1   9.1  .67*  54.2  28.6   55.9  26.5  .46*
Sentence Copy
  Words written             16.7   7.1   16.8   7.7  .71*
  Words spelled correctly   12.8   6.6   13.3   7.1  .74*  74.6  25.1   78.8  19.7  .60*
  Correct word sequences    11.9   7.3   12.6   8.1  .70*  59.8  29.6   64.6  26.7  .56*

*p < .01.
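The mechanics behind the metrics in Table 1 can be reproduced directly. The following is an illustrative sketch only: the data are simulated rather than the study's, and `fluency_and_accuracy` is a hypothetical helper, not the authors' scoring protocol.

```python
import numpy as np

def fluency_and_accuracy(ww, wsc, cws, total_sequences):
    """Return fluency scores and the two accuracy percentages.

    ww  = words written, wsc = words spelled correctly,
    cws = correct word sequences (counts from one 3-min probe).
    """
    wsc_accuracy = 100.0 * wsc / ww                  # e.g., 25/30 -> 83%
    cws_accuracy = 100.0 * cws / total_sequences
    return {"WW": ww, "WSC": wsc, "CWS": cws,
            "acc_WSC": wsc_accuracy, "acc_CWS": cws_accuracy}

# Delayed alternate-form reliability: Pearson r between Week 2 and
# Week 3 scores across students (simulated data, n = 85).
rng = np.random.default_rng(1)
week2 = rng.normal(17, 8, size=85)
week3 = 0.8 * week2 + rng.normal(3, 5, size=85)   # correlated retest
r = np.corrcoef(week2, week3)[0, 1]
```

With real scores in place of the simulated arrays, `r` corresponds to the delayed alternate-form coefficients reported in the Fluency columns of Table 1.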

2006). Finally, the mean score on each probe type among the group of students with high rates of growth was considered an estimate of an instructional level for beginning writing. Categories were subsequently created by establishing a range for the instructional-level scores, which was determined by computing the standard error (SE) of the mean. Scores that exceeded the mean by two SEs were considered an independent level, and those that fell more than two SEs below the mean were at a frustration level. This process was repeated for the data from Week 3, and kappa coefficients were computed for agreement between the data from each week. The criterion-related validity of the categorical data was then evaluated by computing Spearman rho correlation coefficients between the total standard scores on the TOWL-3 and the categories produced by Week 2 data for each of the scoring procedures and prompts.

Results

To address the first research question, regarding the reliability of prompts and scoring procedures, delayed alternate-form reliability coefficients for each scoring procedure for each prompt were computed for both fluency and accuracy metrics; they are reported in Table 1. Correlation coefficients for scores on the fluency metric approached or exceeded .70 for each scoring procedure under both prompt types, whereas the coefficients for scores on the accuracy metrics were at or below .60, suggesting less reliable scores for the accuracy measures. Reliability coefficients at or above .70 are acceptable for programs of research in early stages (Nunnally & Bernstein, 1994), but coefficients below .70 are less interpretable. For that reason, subsequent analyses excluded the accuracy data and included only the fluency measures.

To address the research question regarding the criterion validity of prompts and scoring procedures, the fluency scores for each procedure and prompt were correlated with the TOWL-3 Spontaneous Writing standard score total. The results are presented in the second column of Table 2. With the exception of WW for Sentence Copying prompts, each of the scoring procedures was significantly correlated, using the more conservative alpha level of .01, with the TOWL-3 total standard scores. Significant correlations ranged from r = .42

Table 2
Criterion-Related Validity Coefficients Between Scoring Procedures for Each Prompt and the TOWL-3 Total Score

                              Correlation with TOWL-3 Total
Prompt / Procedure             Raw (r)    Category (ρ)

Picture-Word
  Words written                 .32*         .36*
  Words spelled correctly       .48*         .46*
  Correct word sequences        .52*         .50*
Sentence Copy
  Words written                 .26          .21
  Words spelled correctly       .42*         .46*
  Correct word sequences        .46*         .48*

Note. TOWL-3 = Test of Written Language—3rd Edition.
*p < .01.

Table 3
Derivation of and Estimates for Fluency Instructional-Level Criteria for Scoring Procedures Within Prompt Types

Prompt / Procedure             Mean     SD     SE    Criteria (3-min probe)

Picture-Word
  Words written               14.46    8.68   1.64   11–18
  Words spelled correctly     11.43    7.03   1.33   9–14
  Correct word sequences      10.93    8.56   1.62   8–14
Sentence Copy
  Words written               16.25    6.58   1.24   14–19
  Words spelled correctly     13.39    6.52   1.23   11–16
  Correct word sequences      13.32    7.86   1.49   10–16

for WSC on the Sentence Copying prompt to r = .52 for CWS on the Picture-Word prompt. Validity was further explored by examining concurrent (i.e., within Week 2) and predictive (i.e., between Weeks 2 and 3) correlations of the same scoring procedures across Sentence Copying and Picture-Word prompts. All coefficients for these analyses were significant and moderate (r = .42 and r = .69).

The third research question addressed whether an instructional level could be estimated for beginning writing. Given the acceptable technical adequacy of the fluency-based measures, instructional-level estimates were computed for each of the scoring procedures and prompts. These estimates are based on the initial performance of the students whose slopes were at or above the 66th percentile. The slopes that represented the 66th percentile for the Picture-Word prompts were .87 for WW, .96 for CWS, and .84 for WSC. For Sentence Copying, slopes that represented the 66th percentile were .97 for WW, .96 for CWS, and .90 for WSC. Instructional-level criteria were computed by finding the mean score from Weeks 2 and 3 and building a two-SE range around those data (see Table 3). Twenty-eight (34.6%) of the students were classified as high responders using each scoring procedure for both prompt types.

To answer the fourth research question, the fluency writing data were converted to the categories of frustration, instructional, and independent levels. Reliability of the categorical data was estimated using Cohen's (1960) kappa coefficient, which corrects agreement for chance. The number and percentage of students scoring in the frustration, instructional, and independent levels of difficulty are presented in Table 4, along with the kappa coefficients. Kappa coefficients for the categories using Sentence Copying data were all .46, and those using Picture-Word data ranged from .37 (WW) to .47 (CWS). This indicates that the categorical agreements across Weeks 2 and 3 were 46% above chance for the scoring procedures for Sentence Copying data and 37% to 47% above chance for the Picture-Word data.

Table 4
Number and Percentage of Fluency Scores Categorized as Frustration, Instructional, and Independent, and Kappa Coefficients

                                        Probe 2                                  Probe 3
                          Frustration  Instructional  Independent  Frustration  Instructional  Independent
Prompt / Procedure          N    %       N    %         N    %       N    %       N    %         N    %      κ

Picture-Word
  Words written            19  23.8     19  23.8       42  52.5     21  25.3     15  18.1       47  56.6    .46*
  Words spelled correctly  24  30.0     18  22.5       38  47.5     23  27.7     12  14.5       48  57.8    .46*
  Correct word sequences   30  37.5     20  25.0       30  37.5     29  34.9     17  20.5       37  44.6    .46*
Sentence Copy
  Words written            24  29.6     30  37.0       27  33.3     23  28.8     24  30.0       33  41.2    .37*
  Words spelled correctly  29  35.8     24  29.6       28  34.6     29  36.2     28  35.0       23  28.8    .46*
  Correct word sequences   32  39.5     25  30.9       24  29.6     31  38.8     20  25.0       29  36.2    .47*

*p < .01.
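The categorical conversion and chance-corrected agreement reported in Table 4 can be sketched as follows. The cut points use the Picture-Word words-written band (11–18) from Table 3; the student scores are simulated, and `categorize` and `cohens_kappa` are illustrative helpers rather than the study's software.

```python
import numpy as np

def categorize(score, low=11, high=18):
    """Map a fluency score to 0 = frustration, 1 = instructional,
    2 = independent, using a mean +/- 2 SE band (Table 3 style)."""
    if score < low:
        return 0
    if score > high:
        return 2
    return 1

def cohens_kappa(a, b, k=3):
    """Cohen's (1960) kappa for two parallel categorizations."""
    a, b = np.asarray(a), np.asarray(b)
    observed = np.mean(a == b)
    # Chance agreement from the marginal category proportions.
    pa = np.bincount(a, minlength=k) / a.size
    pb = np.bincount(b, minlength=k) / b.size
    chance = np.sum(pa * pb)
    return (observed - chance) / (1 - chance)

# Week 2 vs. Week 3 categories for simulated scores (n = 85).
rng = np.random.default_rng(3)
week2 = rng.normal(15, 6, 85)
week3 = week2 + rng.normal(1, 4, 85)
cats2 = np.array([categorize(s) for s in week2])
cats3 = np.array([categorize(s) for s in week3])
kappa = cohens_kappa(cats2, cats3)
```

A kappa of 0 means agreement no better than chance and 1 means perfect agreement, which is why the text interprets the .37–.47 values as percentage agreement above chance.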

Finally, to answer the fifth research question, the categorical data from Week 2 were correlated with the TOWL-3 standard score total using Spearman's rho, presented in the third column of Table 2. As with the continuous fluency data, the correlations were all significant (p < .01), with CWS resulting in correlation coefficients at or near .50 for the Picture-Word (ρ = .50) and Sentence Copying prompts (ρ = .48).

Discussion

Results of this study suggest that WSC and CWS for Sentence Copying, as well as all scoring procedures for the Picture-Word prompts, were sufficiently reliable and statistically significantly correlated with a standardized measure of writing. Similar to previous research in math (Burns et al., 2006), acceptable coefficients were found for fluency but not accuracy scores. Overall, correlations were similar to reliability and validity findings from previous research on CBM for beginning writers (Coker & Ritchey, 2010; McMaster et al., 2009). Our findings also suggest that these data can be converted to instructional-level categories that are reliable and significantly related to scores on a standardized measure of writing.

Although these findings provide a method to assess the instructional level for early writers, discussion is warranted with regard to how the instructional-level concept applies to writing. In keeping with the learning hierarchy (Haring & Eaton, 1978), frustration-level scores might indicate that a student requires additional modeling in the skill being taught, whereas scores within the instructional level might suggest that the student is ready for fluency-building activities in the skill. In reading and math, it is possible to manipulate the difficulty of the material, whereas the task for writing is to produce an entirely new product on blank paper. Thus, the instructional level for writing may depend more on the skills of the learner than on the difficulty of the required task. This study was the first step in a line of inquiry designed to understand the instructional level as related to writing, and additional instructional implications should be determined with future research. For example, as currently conceptualized, the instructional level might serve as a heuristic for determining at what stage of the instructional hierarchy (Haring & Eaton, 1978) a student's skills fall, which suggests that future research examine the effectiveness of early writing interventions in a way similar to that done with math (Burns et al., 2010). Moreover, other types of CBM prompts, such as spelling (a critical component of transcription; Berninger & Amtmann, 2003), might offer a curricular domain in which the difficulty of the material could drive application of the instructional level, in a way similar to how reading material is manipulated to correspond to students' instructional levels (Gickling & Armstrong, 1978).

The findings and implications of this study should be considered in light of the limitations of the data. First, the current sample of students came from two schools in a single district with a relatively high proportion (21%) of students who were classified as English language learners. Thus, the degree to which the results would apply to students in other locations with other characteristics and curricula is unknown, and the overall rates of growth from which instructional levels were computed may have been affected by growth characteristics that are specific to English language learner students. Moreover, this study included only first-graders, and previous research in math found different instructional levels across grades (Burns et al., 2006).

A second limitation is the somewhat arbitrary designation of high and low growth rates used to determine the instructional level. The current procedure used the top third of all slopes to identify students at the instructional level because that was the approach used in previous research (Burns et al., 2006). Other approaches could be selected, such as the median slope (cf. Vellutino et al., 1996), and might lead to different estimates of instructional-level ranges. Further research could be conducted to identify cutoffs that yield the most precise instructional-level estimates. Moreover, different scoring procedures yielded different numbers of students in each of the instructional-level categories. For instance, for the Picture-Word prompt types, WW, WSC, and CWS resulted in 19, 24, and 30 students, respectively, at the frustration level. These scoring procedures assess different skills, and thus future research would be necessary to identify which of the procedures resulted in the most useful instructional-level criteria. Related to this point, the criteria derived in this study suggest three categories in which students' skills can fall, but such a fine-grained evaluation of student skills may not be necessary for early writing, and perhaps a dichotomous interpretation of the instructional level (i.e., below and at or above) would sufficiently suggest the need for different interventions.

Third, whereas the kappa coefficients calculated to determine the reliability of the categorical data were statistically significant, the coefficients were modest. It is possible that students who were at the upper limits of the instructional-level criteria made sufficient gains between assessments that they moved to the independent level; this possibility could be explored in future studies by conducting direct evaluations of classification accuracy. Further research is also needed to determine whether other indices of student writing yield more reliable classifications. Related to this point, whereas criterion-related validity coefficients were statistically significant and similar to criterion-validity coefficients generally found in the writing literature (cf. McMaster & Espin, 2007), these coefficients are not strong and thus prevent firm conclusions regarding criterion validity. This problem is not new to writing assessment research, given the complex, multidimensional nature of writing (cf. Tindal & Hasbrouck, 1991). Because of these limitations in the technical adequacy of writing measures, future researchers may wish to continue to investigate various writing tasks, durations, and scoring procedures to identify more precise estimates of writing skill. One direction for future research might be to investigate the influence of the amount of time allowed for completion on fluency-based assessments of writing, and whether increasing the duration of writing

samples would yield more technically sound and instructionally useful data.

In light of the above limitations, the current study should primarily be used as a heuristic for empirically deriving instructional levels for writing; further work is needed to determine whether using CBM, or perhaps other skill-based writing assessments, to identify instructional levels for writing is an instructionally useful approach. To validate the utility of identifying an instructional level in writing, instruction or intervention would need to be manipulated to occur within and outside of the instructional-level ranges suggested by this study. Results indicating the strongest outcomes for students taught at their instructional level would support the applicability of the instructional level to writing. The current design did not permit this analysis, but the results suggest the value of future efforts to do so.

References

Baker, S., Gersten, R., & Graham, S. (2003). Teaching expressive writing to students with learning disabilities: Research-based applications and examples. Journal of Learning Disabilities, 36, 109–123.
Berninger, V., & Amtmann, D. (2003). Preventing written expression disabilities through early and continuing assessment and intervention for handwriting and/or spelling problems: Research into practice. In H. L. Swanson, K. Harris, & S. Graham (Eds.), Handbook of learning disabilities (pp. 345–363). New York: Guilford Press.
Berninger, V. W., Nielsen, K. H., Abbott, R. D., Wijsman, E., & Raskind, W. (2008). Writing problems in developmental dyslexia: Under-recognized and under-treated. Journal of School Psychology, 46, 1–21.
Burns, M. K. (2007). Reading at the instructional level with children identified as learning disabled: Potential implications for response-to-intervention. School Psy-
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
Coker, D. L., & Ritchey, K. D. (2010). Curriculum-based measurement of writing in kindergarten and first grade: An investigation of production and qualitative scores. Exceptional Children, 76, 175–193.
Deno, S. L., Mirkin, P., & Marston, D. (1980). Relationships among simple measures of written expression and performance on standardized achievement tests (Research Report No. IRLD-RR-22). Minneapolis, MN: University of Minnesota, Institute for Research on Learning Disabilities.
Deno, S. L., Mirkin, P., & Marston, D. (1982). Valid measurement procedures for continuous evaluation of written expression. Exceptional Children, 48, 368–371.
Gickling, E. E., & Armstrong, D. L. (1978). Levels of instructional difficulty as related to on-task behavior, task completion, and comprehension. Journal of Learning Disabilities, 11, 559–566.
Graham, S., Berninger, V. W., Abbott, R. D., Abbott, S. P., & Whitaker, D. (1997). Role of mechanics in composing of elementary school students: A new methodological approach. Journal of Educational Psychology, 89, 170–182.
Graham, S., & Perin, D. (2007). A meta-analysis of writing instruction for adolescent students. Journal of Educational Psychology, 99, 445–476.
Hammill, D. D., & Larsen, S. C. (1996). Test of Written Language—Third Edition. Austin, TX: PRO-ED.
Haring, N. G., & Eaton, M. D. (1978). Systematic instructional technology: An instructional hierarchy. In N. G. Haring, T. C. Lovitt, M. D. Eaton, & C. L. Hansen (Eds.), The fourth R: Research in the classroom (pp. 23–40). Columbus, OH: Merrill.
McMaster, K. L., Du, X., & Petursdottir, A. (2009). Technical features of curriculum-based measures for beginning writers. Journal of Learning Disabilities, 42, 41–60.
McMaster, K. L., Du, X., Yeo, S., Deno, S. L., Parker, D., & Ellis, T. (2011). Curriculum-based measures of beginning writing: Technical features of the slope. Exceptional Children, 77, 185–206.
McMaster, K. L., & Espin, C. A. (2007). Technical features of curriculum-based measurement in writing: A literature review. Journal of Special Education, 41, 68–84.
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.
chology Quarterly, 22, 297–313. Salahu-Din, D., Persky, H., & Miller, J. (2008). The
Burns, M. K., Codding, R. S., Boice, C. H., & Lukito, G. Nation’s Report Card: Writing 2007. Retrieved De-
(2010). Meta-analysis of acquisition and fluency math cember 1, 2009, from http://nces.ed.gov/nationsreport-
interventions with instructional and frustration level card/writing/
skills: Evidence for a skill by treatment interaction. Tindal, G., & Hasbrouck, J. (1991). Analyzing student
School Psychology Review, 39, 69 – 83. writing to develop instructional strategies. Learning
Burns, M. K., Tucker, J. A., Frame, J., Foley, S., & Disabilities Research and Practice, 6, 237–245.
Hauser, A. (2000). Interscorer, alternate-form, internal Treptow, M. A., Burns, M. K., & McComas, J. J. (2007).
consistency, and test-retest reliability of Gickling’s Reading at the frustration, instructional, and indepen-
model of curriculum-based assessment for reading. dent levels: Effects on student time on-task and com-
Journal of Psychoeducational Assessment, 18, 353– prehension School Psychology Review, 36, 159 –166.
360. Vellutino, F. R., Scanlon, D. M., Sipay, E. R., Small,
Burns, M. K., VanDerHeyden, A. M., & Jiban, C. L. S. G., Chen, R., Pratt, A., et al. (1996). Cognitive
(2006). Assessing the instructional level for mathemat- profiles of difficult-to-remediate and readily remedi-
ics: A comparison of methods. School Psychology Re- ated poor readers: Early intervention as a vehicle for
view, 35, 401– 418. distinguishing between cognitive and experiential def-

166
Determining an Instructional Level for Writing

icits as basic causes of specific reading disability. Jour- Date Received: November 12, 2009
nal of Educational Psychology, 88, 601– 638. Date Accepted: October 25, 2010
Videen, J., Deno, S. L., & Marston, D. (1982). Correct
word sequences: A valid indicator of proficiency in Action Editor: Tanya Eckert 䡲
written expression (Vol. IRLD-RR-84, pp. 61). Min-
neapolis: University of Minnesota, Institute for Re-
search on Learning Disabilities. Article accepted by previous Editor.

David C. Parker is a doctoral candidate in the school psychology program within the
Department of Educational Psychology at the University of Minnesota. His research
interests include direct assessment and intervention for reading and writing, particularly
in the primary elementary grades.

Kristen L. McMaster, Ph.D., is an Associate Professor of Special Education in the Department of Educational Psychology, University of Minnesota. Her research interests include (a) promoting teachers' use of data-based decision making and evidence-based instruction and (b) developing individualized interventions for students at risk for or identified with reading and writing-related disabilities.

Matthew K. Burns, Ph.D., is a Professor of Educational Psychology, Coordinator of the School Psychology Program, and Co-Director of the Minnesota Center for Reading Research at the University of Minnesota. His current research interests include curriculum-based assessment for instructional design, interventions for academic problems, problem-solving teams, and incorporating all of these into a response-to-intervention model.

