

Psychology in the Schools, Vol. 46(10), 2009
© 2009 Wiley Periodicals, Inc.
Published online in Wiley InterScience (www.interscience.wiley.com) DOI: 10.1002/pits.20442

THE VALIDITY OF READING COMPREHENSION RATE: READING SPEED,
COMPREHENSION, AND COMPREHENSION RATES
CHRISTOPHER H. SKINNER, JACQUELINE L. WILLIAMS, AND JENNIFER ANN MORROW
The University of Tennessee
ANDRE D. HALE
Eastern Kentucky University
CHRISTINE E. NEDDENRIEP
University of Wisconsin - Whitewater
RENEE O. HAWKINS
The University of Cincinnati

This article describes a secondary analysis of a brief reading comprehension rate measure, percent
comprehension questions correct per minute spent reading (%C/M). This measure includes reading
speed (seconds to read) in the denominator and percentage of comprehension questions answered
correctly in the numerator. Participants were 22 4th-, 29 5th-, and 37 10th-grade students. Results
showed that reading speed accounted for much of the variance in Broad Reading Cluster scores
and subtest scores of the Woodcock – Johnson III Tests of Achievement across all grade levels.
Converting reading speed to the rate measure %C/M increased Broad Reading Cluster variance
accounted for in the 4th- and 5th-grade sample, but decreased the Broad Reading Cluster variance
accounted for in the 10th-grade sample. Discussion focuses on the importance of reading speed
and the failure to enhance validity of a brief rate measure in more skilled readers by incorporating
a direct measure of comprehension. © 2009 Wiley Periodicals, Inc.

Educators have developed data-based decision-making models of service delivery (e.g., re-
sponse to intervention [RTI]; curriculum-based measurement) designed to identify, prevent, and
remedy reading skill deficits (Compton, Fuchs, & Fuchs, 2006; Deno & Mirkin, 1977; Fletcher,
Coulter, Reschly, & Vaughn, 2004; Fuchs, Fuchs, & Compton, 2004). Often, brief assessments
of reading are used to guide decisions. Perhaps the most frequently used brief reading measure is
words correct per minute (WC/M). When assessing WC/M, students read passages aloud, often for
1 minute, as evaluators score their reading accuracy for each word. Thus, WC/M is a rate measure
that incorporates accurate aloud word reading as well as reading speed. Other brief reading skill
measures include assessing rates of nonsense word reading (Good & Kaminski, 2002), rates of
selecting or providing deleted words (Jenkins & Jewell, 1993; Parker, Hasbrouck, & Tindal, 1992),
and reading comprehension rates (Skinner, 1998).
Reading skill development may be assessed using traditional, commercial, standardized norm-
referenced tests. Because RTI requires a significant amount of assessment, efficient brief rate mea-
sures with multiple forms are often used to guide across- and within-student decisions. These rate
measures are used for universal assessment (e.g., assess all students in a school or district), and
across-student comparisons allow educators to identify those in need, or most in need, of remedial
services. Multiple equivalent forms can be constructed or obtained allowing for frequent assessment
of the same student. These repeated applications allow for within-student comparisons of a student’s
skill development (learning or growth rate), which guide various formative decisions, including
whether to continue, alter, or strengthen specific remediation procedures; try a new procedure;

The research for this article was supported by the Korn Learning, Assessment, and Social Skills (KLASS) Center,
The University of Tennessee.
Correspondence to: Christopher H. Skinner, The University of Tennessee, BEC 525, Knoxville, TN 37996-3452.
E-mail: cskinne1@utk.edu


and/or consider a more intense level of services (Deno & Mirkin, 1977; Fuchs & Fuchs, 2006;
Saecker, Skinner, Brown, & Roberts, in press; Shapiro, 2004).
Many important educational decisions are being based on these brief rate measures. The quality
of these decisions is influenced by the quality of these measures. Relative to other brief rate measures,
the research base supporting WC/M is most robust (Marston, 1989). Numerous researchers have
found that WC/M is a reliable and valid measure of broad reading skill development (Fuchs, Fuchs,
Hosp, & Jenkins, 2001; Marston, 1989). Additionally, researchers evaluating interventions have
shown that WC/M is sensitive enough to detect small changes in reading skill development over
brief periods of time (e.g., Daly, Chafouleas, & Skinner, 2005; Daly & Martens, 1994; Skinner,
Logan, Robinson, & Robinson, 1997).
Although WC/M can be a valid, reliable, and sensitive indicator of overall reading ability,
educators and researchers have expressed concerns with WC/M (Potter & Wamre, 1990). One
concern is that the sensitivity and validity of WC/M begins to decline at the 5th- or 6th-grade
reading level (Hintze & Shapiro, 1997; Jenkins & Jewell, 1993). A second concern is related to the
indirect nature of the measure, especially for advanced readers. The primary function of reading is
comprehension, and although WC/M correlates with comprehension it does not incorporate a direct
measure of comprehension (Skinner, 1998; Skinner, Neddenriep, Bradley-Klug, & Ziemann, 2002).
Skinner (1998) described another brief rate measure that directly assesses comprehension,
reading comprehension rate. Like other rate measures, reading comprehension rate can be converted
to a common metric, percentage of comprehension questions answered correct per minute (%C/M).
The %C/M measure is similar to other rate measures in that it incorporates reading speed (the
denominator). The numerator is the percentage of comprehension questions answered correctly.
Thus, %C/M is the percentage of passage comprehension questions answered correctly for each
minute spent reading (Skinner et al., 2002).
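As a concrete sketch of this computation, the following example converts a hypothetical student's raw scores to %C/M using the formula described above (the numbers are invented for illustration, not data from this study):

```python
def percent_comprehension_per_minute(questions_correct, total_questions, seconds_to_read):
    """%C/M: percentage of comprehension questions answered correctly (numerator)
    per minute spent reading (reading speed forms the denominator)."""
    percent_correct = 100.0 * questions_correct / total_questions
    minutes_to_read = seconds_to_read / 60.0
    return percent_correct / minutes_to_read

# Hypothetical student: 8 of 10 questions correct after reading for 240 seconds.
print(percent_comprehension_per_minute(8, 10, 240))  # 80% correct / 4 min = 20.0
```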
Researchers conducting treatment studies have demonstrated that reading comprehension rate
measures are sensitive enough to detect changes caused by interventions. In these studies, reading
comprehension rate was measured using the Timed Readings series (Spargo, 1989), which includes
50 400-word passages for 10 consecutive grade levels beginning at the 4th grade. Following each
passage are 10 three-option multiple-choice comprehension questions (five fact questions and five
inference questions). Researchers measured reading comprehension rate by having students read the
400-word passage aloud and recording the number of seconds required to read the passage. After
students finished reading they answered the 10 comprehension questions, and researchers converted
these two measures to a reading comprehension rate measure by dividing the percentage of questions
answered correctly by the time to read. Results of these studies suggested that reading comprehension
rate could detect changes in reading skills occasioned by repeated reading and listening while reading
(Freeland, Skinner, Jackson, McDaniel, & Smith, 2000; Hale et al., 2005; McDaniel et al., 2001),
reinforcement (Freeland, Jackson, & Skinner, 1999), and a prereading comprehension intervention
(Williams & Skinner, 2004).
Using the same curricula (i.e., Timed Readings series) and measurement procedures, re-
searchers found support for the validity of reading comprehension rate measures. Neddenriep,
Skinner, Wallace, and McCallum (2009) repeatedly measured WC/M, percentage of comprehension
questions answered correctly, and %C/M, when evaluating a peer tutoring reading intervention. Ex-
ploratory analysis showed that %C/M correlated more strongly with WC/M than did percentage of
comprehension questions answered correctly. In another study, Neddenriep, Hale, Skinner, Hawkins,
and Winn (2007) investigated the validity of %C/M across 4th-, 5th-, and 10th-grade students. Results
revealed significant correlations between reading comprehension rate and Broad Reading Cluster
scores of the Woodcock–Johnson III Tests of Achievement (WJ-III Ach.; Woodcock, McGrew,
& Mather, 2001). Further analysis showed that %C/M was more strongly correlated with Broad


Reading Cluster scores than with reading comprehension level (i.e., percentage of comprehension
questions answered correctly). These results suggest that, when attempting to assess broad read-
ing skills, changing comprehension level measures to rate measures (e.g., %C/M) can enhance the
validity of the measure.
Initial evidence suggests that %C/M is a valid and sensitive measure of broad reading skill
development. Furthermore, researchers have argued that this measure is more valid, especially
with advanced readers, than other rate measures because it includes a direct measure of reading
comprehension in the numerator (Skinner, 1998; Skinner et al., 2002). Recent research on the
oral reading fluency measure WC/M suggests, however, that the numerator (words read correctly)
may not be as important as the measure of reading speed. Williams, Neddenriep, Hale, Skinner, and
Hawkins (2006) showed that reading speed (number of seconds required to read equivalent passages)
accounted for 62%, 65%, and 56% of the variance in Broad Reading Cluster scores across 4th-,
5th-, and 10th-grade students. Perhaps aloud reading comprehension rate measures (e.g., %C/M)
may be valid measures of global reading skills development because of the measure of reading
speed embedded within reading comprehension rate measures. Thus, altering reading rate measures
by using a more direct measure of comprehension in the numerator (e.g., replacing words read
aloud correctly with percentage of comprehension questions correct) may have little influence on
the validity of reading rate measures, even with more skilled readers.

Purpose
This study was designed to extend research on the reading comprehension rate measure %C/M
by analyzing the 4th-, 5th-, and 10th-grade student data collected by Neddenriep and colleagues
(2007). Pearson’s correlations and simultaneous regression analyses were used to determine how
much variance in the WJ-III Broad Reading Cluster scores and the individual subtest scores of
the Broad Reading Cluster (i.e., Letter-Word Identification, Reading Fluency, and Passage Com-
prehension) was accounted for by reading comprehension rate, and both components of reading
comprehension rate, number of seconds to read, and percentage of comprehension questions an-
swered correctly.

METHOD

Participants and Setting


The 4th- and 5th-grade participants attended a rural elementary school in the Southeastern
United States. Approximately 300 students attended this school, and 57% of these students qualified
for free or reduced price lunch. Four teachers from the two grade levels (4th and 5th grade) agreed to
participate. Secondary participants were recruited from a 10th-grade language arts classroom of an
urban high school, also located in the Southeastern United States. Almost 1,000 students attended
this school, of which 62% qualified for free or reduced price lunch. Only those students whose
parents signed consent forms were asked to participate. Participants included 7 male and 15 female
4th-grade students (n = 22); 17 male and 12 female 5th-grade students (n = 29); and 13 male and
24 female 10th-grade students (n = 37). Of the 51 elementary students who participated in this study, 46 were
White and 5 were Black. Of the 37 10th-grade students who participated, 16 were White, 15 were
Black, 4 were Hispanic, and 2 were Asian. None of the students were receiving special education
services designed to address reading problems at the time the study was conducted.
During this study, the three subtests of the WJ-III that make up the Broad Reading Cluster were
administered to the participants. Table 1 summarizes these data and shows that the mean standard
scores across the four WJ-III measures ranged from 96.11 to 104.20. All WJ-III scores have an
average score of 100 and a standard deviation (SD) of 15. These data suggest that the samples’


Table 1
Mean and SD WJ-III and Reading Comprehension Rate Scores for Elementary and Secondary Students

Grade          WJ-III BRC      WJ-III LWI       WJ-III RF       WJ-III PC       %C            Seconds to Read    RCR
               Mean (SD)       Mean (SD)        Mean (SD)       Mean (SD)       Mean (SD)     Mean (SD)          Mean (SD)
4th (n = 22)   100.70 (9.04)   104.20 (12.18)   98.00 (6.55)    100.30 (7.47)   8.23 (0.92)   295.23 (98.28)     1.85 (0.62)
5th (n = 29)   98.93 (11.19)   102.90 (12.45)   97.14 (10.84)   98.14 (9.95)    8.41 (1.30)   269.41 (145.87)    2.18 (0.78)
10th (n = 37)  96.11 (11.34)   96.57 (11.29)    96.49 (11.56)   96.35 (12.81)   7.05 (1.47)   188.89 (42.01)     2.38 (0.81)

Note. BRC: Broad Reading Cluster; LWI: Letter Word Identification; RF: Reading Fluency; PC: Passage Comprehension;
%C: Percent Comprehension Questions Correct; RCR: Reading Comprehension Rate.

average performance was fairly representative of national norms. However, the SD data ranged from
6.55 to 12.81 (see Table 1), suggesting that the sample had less variation in WJ-III scores than did the
normative sample. Data were collected between the months of October and February. Assessments
were administered individually at the participants’ schools in unoccupied rooms.

Materials and Measures


Passages from the Timed Readings series (Spargo, 1989) were used to collect %C/M data. The
Timed Readings series consists of 50 different passages for each grade level, beginning with 4th
grade. Passage reading level was based on the Fry (1968) readability formula, and the passages were
designed to steadily increase in difficulty. Each passage contains 400 words providing information
across a variety of subjects. Ten multiple-choice comprehension questions (five factual and five
inferential) follow each passage. Passages were selected from the 4th-, 5th-, and 10th-grade books,
and each student read passages that were from his or her grade level. The first 12 passages were
divided into three sets of four (1–4, 5–8, and 9–12). For each student, a passage from each set
was randomly assigned to the oral reading condition. Although the Timed Readings series (Spargo,
1989) is a reading curriculum, Neddenriep and colleagues (2007) used this series to measure reading
comprehension rates and correlated these measures with Broad Reading Cluster scores for a small
sample of 4th-, 5th-, and 10th-grade students. Results showed that reading comprehension rate
accounted for a significant amount of Broad Reading Cluster score variance (42% –81%) for each
grade. Additionally, the percentage of comprehension questions answered correctly accounted for a
significant amount of Broad Reading Cluster variance for the 4th- and 5th-grade sample, but not for
the 10th-grade sample.
The three subtests comprising the Broad Reading Cluster—Letter-Word Identification, Reading
Fluency, and Passage Comprehension of the WJ-III Ach. (Woodcock et al., 2001)—were admin-
istered to each student. The WJ-III Ach. is an individually administered, norm-referenced test of
achievement for individuals from 2 to 90+ years of age. The Broad Reading Cluster subtests measure
reading decoding, reading speed, and ability to comprehend while reading. Letter-Word Identifi-
cation requires individuals to pronounce words in isolation and has a median reliability of .91 for
ages 5–19. Reading Fluency assesses an individual’s ability to read simple sentences and decide
if statements are true or not within a 3-minute period. Reading Fluency has a median reliability of
.90 for ages 5–19. The Passage Comprehension subtest requires the individual to read passages and
identify missing key words. Passage comprehension has a median reliability of .83 for ages 5–19.


Battery-powered audio-recorders were used to tape each session. Replaying the tapes allowed
experimenters to independently score students’ performance, calculate inter-scorer agreement, and
assess procedural integrity. Stopwatches were used to measure the amount of time each student
required to read passages aloud.

Experimenters and Training


Four graduate students in a school psychology Ph.D. program and one undergraduate student
administered assessment procedures. All of the graduate students had prior training in the adminis-
tration of oral reading fluency probes. In addition, three of the four graduate students had extensive
training in administering and scoring the WJ-III Ach. Those with little or no training received
additional intensive training from the primary investigator. These students were given instruction, practice,
and feedback on administering and scoring oral reading fluency probes and the Broad Reading
Cluster subtests before data collection began.

Procedures
Each student participated in three sessions. During each session, students (a) completed the
three Broad Reading Cluster subtests, (b) read three 400-word passages aloud and answered com-
prehension questions, or (c) for purposes of the Neddenriep and colleagues (2007) study read three
400-word passages silently and then answered questions. Condition order was counterbalanced
across participants. Due to schedule conflicts, four high-school students were assessed on the same
day with each session separated by at least 30 minutes. All other participants were assessed across
three separate sessions, one session per day, held within the same school week.
Aloud Passage Reading. After establishing or re-establishing rapport with the student, the
experimenter started the tape recorder and instructed the student to read the passage aloud at his or
her normal pace. The student was also told that he or she would be asked to answer comprehension
questions when he or she finished reading the passage. When the student began reading the passage
aloud, the experimenter started the stopwatch and silently read a photocopy of the same passage.
The experimenter recorded mispronunciations, substitutions, omissions, additions, and skipped lines
during the first minute of reading. If the student skipped a line or began rereading a line while reading,
the experimenter redirected him or her and counted each redirection as one error. If a student paused
for 5 seconds, the experimenter read the word aloud and prompted the student to continue reading.
After the student finished reading the entire passage aloud, the experimenter recorded the number
of seconds required to read the passage (Deno & Mirkin, 1977; Shapiro, 2004). The student was
then given a sheet containing the 10 comprehension questions. After each participant answered the
comprehension questions, the examiner collected the answers and repeated these procedures using
the remaining two passages.
Broad Reading Cluster. The three Broad Reading Cluster subtests (Letter-Word Identification,
Reading Fluency, and Passage Comprehension) of the WJ-III Ach. were also administered to each
student. Standardized procedures for administration and scoring were followed during each session
(Woodcock et al., 2001).

Analysis Procedures
The Broad Reading Cluster score was the primary criterion measure (outcome variable). The
individual subtests that comprise the Broad Reading Cluster score were also analyzed. Standard
scores were computed for each (M = 100; SD = 15). Predictor variables (independent variables)
included %C/M (%C/M = percentage of comprehension questions answered correctly × 60/seconds to


read) and the two factors used to calculate %C/M, reading speed (number of seconds to read) and
reading level (percentage of comprehension questions answered correctly = number of questions
correct × 10). To reduce the impact of extreme scores, for each predictor variable each student’s
median score was analyzed (Shapiro, 2004). Thus, a student’s three predictor scores may have come
from three different passages. Pearson correlations and simultaneous (standard) regression analyses
were used to test for unique variance in Broad Reading Cluster and individual subtest scores
accounted for by number of seconds to read and percentage of comprehension questions answered
correctly. Relationships were considered significant at the p < .05 level.
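The steps described above (convert each passage to predictor scores, take each student's median to dampen extreme scores, then correlate predictors with the criterion) can be sketched as follows. The data are invented for illustration, and the Pearson computation is written out explicitly rather than drawn from a statistics package:

```python
from statistics import median

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    syy = sum((b - mean_y) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical predictor data: each student's three passage reading times (seconds).
passage_seconds = [[240, 250, 230], [300, 310, 290], [200, 210, 190]]
# Analyzing each student's median reduces the impact of extreme scores.
speed = [median(times) for times in passage_seconds]

# Hypothetical Broad Reading Cluster standard scores for the same three students.
brc = [100, 90, 110]

# More seconds to read (slower reading) should track lower BRC scores, so r is negative.
print(pearson_r(speed, brc))
```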

Inter-scorer Agreement and Procedural Integrity Data


To calculate inter-scorer agreement, a second independent observer listened to 20% of the
audiotaped sessions. This second observer recorded the time required to read the passage, inde-
pendently scored answers to comprehension questions, and recorded procedural integrity data. Data
indicated that every recorded time by the independent observer was within 2 seconds of the originally
collected and recorded time. Procedural integrity data showed that all experimenters read instruc-
tions verbatim, administered procedures using appropriate passages, and administered passages in
the appropriate sequence 100% of the time.

RESULTS

Correlation Matrices
The correlation matrices for grades 4, 5, and 10 are displayed in Tables 2, 3, and 4, respectively.
For each grade level, correlations between reading comprehension rate (%C/M) and each WJ-III
score were positive and statistically significant. Also, for each grade level, correlations between
reading speed (number of seconds to read) and each WJ-III score were negative and statistically
significant. Percentage of comprehension questions answered correctly was significantly positively
correlated with Broad Reading Cluster and Letter-Word Identification subtest scores for each grade
level. Percentage of comprehension questions answered correctly correlated significantly (positive)
with the Reading Fluency subtest, but not with the Passage Comprehension subtest for 4th-grade par-
ticipants (p = .08). For the 5th- and 10th-grade participants, percentage of comprehension questions

Table 2
Correlation Matrix for 4th-Grade Students, n = 22

                  BRC     LWI       RF        PC       %C       Seconds to Read   %C/M (RCR)
BRC               1.00    .937∗∗∗   .816∗∗∗   .691∗∗   .619∗∗   −.795∗∗∗          .905∗∗∗
LWI                       1.00      .634∗∗    .540∗    .547∗∗   −.704∗∗∗          .801∗∗∗
RF                                  1.00      .407     .528∗    −.776∗∗∗          .864∗∗∗
PC                                            1.00     .383     −.483∗            .554∗∗
%C                                                     1.00     −.402             .635∗∗
Seconds to read                                                 1.00              −.884∗∗∗
RCR (%C/M)                                                                        1.00

Note. BRC: Broad Reading Cluster; LWI: Letter Word Identification; RF: Reading Fluency; PC: Passage Comprehension;
%C: Percent Comprehension Questions Correct; RCR: Reading Comprehension Rate.
∗ Correlation is significant at p < .05 (two-tailed).
∗∗ Correlation is significant at p < .01 (two-tailed).
∗∗∗ Correlation is significant at p < .001 (two-tailed).


Table 3
Correlation Matrix for 5th-Grade Students, n = 29

                  BRC     LWI       RF        PC        %C       Seconds to Read   %C/M (RCR)
BRC               1.00    .901∗∗∗   .898∗∗∗   .762∗∗∗   .509∗∗   −.773∗∗∗          .834∗∗∗
LWI                       1.00      .668∗∗∗   .705∗∗∗   .547∗∗   −.670∗∗∗          .752∗∗∗
RF                                  1.00      .491∗∗    .347     −.724∗∗∗          .778∗∗∗
PC                                            1.00      .458∗∗   −.592∗∗           .603∗∗
%C                                                      1.00     −.488∗∗           .692∗∗∗
Seconds to read                                                  1.00              −.765∗∗∗
RCR (%C/M)                                                                         1.00

Note. BRC: Broad Reading Cluster; LWI: Letter Word Identification; RF: Reading Fluency; PC: Passage Comprehension;
%C: Percent Comprehension Questions Correct; RCR: Reading Comprehension Rate.
∗∗ Correlation is significant at p < .01 (two-tailed).
∗∗∗ Correlation is significant at p < .001 (two-tailed).

answered correctly correlated significantly and positively with Passage Comprehension, but not with
Reading Fluency. With respect to the specific subtests, the most unexpected finding was the failure
to find a significant correlation between percentage of comprehension questions answered correctly
and the Passage Comprehension subtest of the WJ-III among 4th-grade students.
For each grade level and across each WJ-III measure, the denominator, reading speed (number
of seconds to read), was more strongly correlated with the corresponding WJ-III measure than was
the numerator, percentage of comprehension questions answered correctly. For the 4th- and 5th-
grade students, the strongest correlations with each WJ-III measure were found when these two
variables were combined to form the comprehension rate measure, %C/M. However, for the 10th-
grade sample, reading speed correlated more strongly with Broad Reading Cluster, Letter-Word
Identification, and Reading Fluency scores than did %C/M. Thus, for the 10th-grade sample, the
only WJ-III measure for which %C/M resulted in a stronger correlation than reading speed (number
of seconds to read) was Passage Comprehension.

Table 4
Correlation Matrix for 10th-Grade Students, n = 37

                  BRC     LWI       RF        PC        %C       Seconds to Read   %C/M (RCR)
BRC               1.00    .721∗∗∗   .896∗∗∗   .671∗∗∗   .371∗    −.751∗∗∗          .712∗∗∗
LWI                       1.00      .383∗     .727∗∗∗   .374∗    −.692∗∗∗          .631∗∗∗
RF                                  1.00      .331∗     .190     −.593∗∗∗          .525∗∗
PC                                            1.00      .487∗∗   −.593∗∗∗          .656∗∗∗
%C                                                      1.00     −.314             .807∗∗∗
Seconds to read                                                  1.00              −.779∗∗∗
RCR (%C/M)                                                                         1.00

Note. BRC: Broad Reading Cluster; LWI: Letter Word Identification; RF: Reading Fluency; PC: Passage Comprehension;
%C: Percent Comprehension Questions Correct; RCR: Reading Comprehension Rate.
∗ Correlation is significant at p < .05 (two-tailed).
∗∗ Correlation is significant at p < .01 (two-tailed).
∗∗∗ Correlation is significant at p < .001 (two-tailed).


Simultaneous Standard Regression


A regression analysis was conducted for the total Broad Reading Cluster score and each of the
three subtests (Letter-Word Identification, Reading Fluency, and Passage Comprehension). Table 5
displays the results from the simultaneous regressions for 4th-, 5th-, and 10th-grade students. For
each analysis, the two predictor variables were the components of the reading comprehension rate
measure, percentage of comprehension questions answered correctly and reading speed (number
of seconds to read). For the 4th-grade students, reading speed accounted for 63.2% of the Broad
Reading Cluster score variance, and reading comprehension level accounted for a significant amount
of additional variance (10.7%) in Broad Reading Cluster scores. However, for 5th- and 10th-grade
students, the simultaneous regressions revealed only one significant predictor, reading speed, which
accounted for 59.7% and 56.4% of the variance in Broad Reading Cluster scores.
Table 5 also provides a summary of the simultaneous regressions for the three individual Broad
Reading Cluster subtests across 4th-, 5th-, and 10th-grade participants. These results showed that,
after the measure of reading speed was accounted for, the measure of reading comprehension only
added a significant amount of variance for the 10th-grade students’ Passage Comprehension scores.
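Because only one predictor reached significance in most of these models, each reported R² is simply the square of that predictor's zero-order correlation with the criterion (compare Tables 2-4 with Table 5). The following check of the Broad Reading Cluster values is a verification sketch, not part of the original analysis:

```python
# Zero-order correlation of seconds-to-read with BRC (Tables 2-4) and the
# single-predictor R^2 reported for each grade (Table 5).
reported = {
    "4th": (-0.795, 0.632),
    "5th": (-0.773, 0.597),
    "10th": (-0.751, 0.564),
}
for grade, (r, r_squared) in reported.items():
    # R^2 should equal r squared, up to rounding in the published tables.
    assert abs(r ** 2 - r_squared) < 0.001, grade
print("R^2 values match squared correlations for all grades")
```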

Table 5
Simultaneous (Nonhierarchical) Regression Model Summary and Excluded Variables for 4th-, 5th-, and 10th-Grade
Students with the WJ-III BRC and the Individual Subtests Comprising the BRC as the Dependent Variables

4th Grade (n = 22)           WJ-III BRC   Letter-Word Identification   Reading Fluency   Passage Comprehension
Predictor Variables
Seconds to Read     R        .795∗        .704∗                        .776∗             .483∗
                    R2       .632∗        .496∗                        .603∗             .233∗
                    Sig.     .000∗        .000∗                        .000∗             .023∗
% Comp. Correct     R        .859∗        −                            −                 −
                    R2       .739∗        −                            −                 −
                    Sig.     .012∗        .068∗∗                       .096∗∗            .305∗∗

5th Grade (n = 29)           WJ-III BRC   Letter-Word Identification   Reading Fluency   Passage Comprehension
Predictor Variables
Seconds to Read     R        .773∗        .670∗                        .724∗             .592∗
                    R2       .597∗        .448∗                        .524∗             .351∗
                    Sig.     .000∗        .000∗                        .000∗             .001∗
% Comp. Correct     R        −            −                            −                 −
                    R2       −            −                            −                 −
                    Sig.     .221∗∗       .083∗∗                       .958∗∗            .150∗∗

10th Grade (n = 37)          WJ-III BRC   Letter-Word Identification   Reading Fluency   Passage Comprehension
Predictor Variables
Seconds to Read     R        .751∗        .692∗                        .593∗             .593∗
                    R2       .564∗        .479∗                        .351∗             .351∗
                    Sig.     .000∗        .000∗                        .000∗             .000∗
% Comp. Correct     R        −            −                            −                 .672∗
                    R2       −            −                            −                 .452∗
                    Sig.     .205∗∗       .178∗∗                       .978∗∗            .018∗

∗ Significant at the p < .05 level.
∗∗ Excluded variable (nonsignificant; p > .05).


DISCUSSION
To address the indirect nature of many brief reading rate measures, Skinner (1998) advocated
directly assessing functional reading skills by measuring reading comprehension rate. The current
study extended this line of research by assessing the amount of Broad Reading Cluster variance
accounted for by the two factors used to calculate reading comprehension rate: the numerator
(percentage of comprehension questions answered correctly) and the denominator (reading speed).
Results showed that almost all of the variance in Broad Reading Cluster scores accounted for by the
reading comprehension rate measure could be accounted for by reading speed. Thus, reading speed
itself is a strong predictor of global reading skills.
One reason that researchers (see Skinner et al., 2002) began to investigate reading compre-
hension rate measures was that WC/M tends to lose some of its strength in predicting students’
global reading achievement at the 5th- or 6th-grade level (Hintze & Shapiro, 1997). When the two
factors that make up the reading comprehension rate measure, reading speed (number of seconds to
read) and reading comprehension level (percentage of comprehension questions answered correctly),
were analyzed, regression analyses showed that percentage of comprehension questions answered
correctly accounted for additional Broad Reading Cluster variance for 4th-grade students but not
for 5th- and 10th-grade students. With respect to subtests, percentage of comprehension questions
answered correctly accounted for additional variance on the Passage Comprehension subtest for
10th-grade students only.
Additionally, the current results showed that converting reading speed to %C/M strengthened the
correlation with Broad Reading Cluster scores for the 4th- and 5th-grade students (r changes of 0.11
and 0.06, respectively) but weakened it for the 10th-grade students (r was reduced by 0.039). A
similar pattern was observed with the Letter-Word Identification and
Reading Fluency subtests. As grade levels increased, the amount of additional variance accounted
for by including a measure of comprehension (i.e., converting reading speed to %C/M) decreased.
Thus, results from the current study do not support Skinner’s (1998) hypothesis that substituting
a direct measure of comprehension for accurate aloud word reading may address the problem of
WC/M losing some of its sensitivity and validity as students’ reading skills develop.
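The r changes cited in this section can be reproduced directly from the correlation tables by subtracting the magnitude of the speed-BRC correlation from the %C/M-BRC correlation for each grade (values copied from Tables 2-4):

```python
# |r| of reading speed (seconds to read) with BRC, and r of %C/M with BRC,
# taken from Tables 2-4.
speed_r = {"4th": 0.795, "5th": 0.773, "10th": 0.751}
rate_r = {"4th": 0.905, "5th": 0.834, "10th": 0.712}

# Positive change: converting speed to %C/M strengthened the correlation with BRC.
changes = {grade: round(rate_r[grade] - speed_r[grade], 3) for grade in speed_r}
print(changes)  # {'4th': 0.11, '5th': 0.061, '10th': -0.039}
```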
The additional Passage Comprehension subtest variance accounted for by converting reading
speed to a reading comprehension rate measure was significant only in the 10th-grade students. By
10th grade, students are reading to learn (e.g., comprehend) as opposed to learning to read (e.g.,
enhance phonetic or decoding skills). Thus, this finding may be important because, for 10th-grade
students, reading comprehension scores (Passage Comprehension) may be a more valid measure of
reading skill development than Broad Reading Cluster scores, which include prereading skills such
as Letter-Word Identification (Skinner et al., 2002).
The current results coupled with the findings by Williams and colleagues (2006) suggest that
researchers should extend this line of research by conducting similar studies across criterion measures
and brief assessments of reading skill development. For example, researchers may want to determine
how much of the concurrent validity of other brief reading rate measures, such as cloze and maze
(see Deno, Mirkin, & Chiang, 1982; Jenkins & Jewell, 1993; Parker et al., 1992) is accounted for
by the measure of reading speed embedded within these other rate measures.
In the current study, the samples were small and the WJ-III SD data suggest that variance in
reading skills was restricted relative to national norms. This restricted range of scores may have
decreased the probability of finding significant correlations. To obtain more generalizable results,
researchers should conduct similar studies with larger numbers of students (including students with
a more representative range of skills). As the sensitivity and validity of WC/M begins to decline
at approximately the 5th- or 6th-grade level (Hintze & Shapiro, 1997; Jenkins & Jewell, 1993),
researchers should conduct similar studies across additional grade levels to determine if a similar
pattern emerges with reading speed, %C/M, and other measures of reading that incorporate reading
speed. Perhaps these changes in validity and sensitivity are caused by reading speed development
fitting an asymptotic learning curve, where reading speed increases less as reading skills develop.
Thus, the decrease in sensitivity and validity of reading rate measures (e.g., WC/M, %C/M) may be
caused by reading speed, which accounts for much of the variance in broad reading skill development,
approaching ceiling levels (Paris, 2005).
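The asymptotic-curve account can be illustrated with a simple negatively accelerated growth function. The ceiling and rate parameters below are arbitrary values chosen for illustration, not estimates from the article:

```python
import math

def reading_speed(grade: float, ceiling: float = 180.0, rate: float = 0.35) -> float:
    """Illustrative asymptotic growth curve for reading speed (e.g., WC/M).

    Speed rises quickly in the early grades, then flattens as it
    approaches `ceiling`; both parameters are hypothetical.
    """
    return ceiling * (1.0 - math.exp(-rate * grade))

# Year-over-year gains shrink toward zero as the curve nears its ceiling,
# one account of why speed-based rate measures lose sensitivity to growth
# in later grades.
yearly_gains = [reading_speed(g + 1) - reading_speed(g) for g in range(1, 12)]
```

Under such a curve, the same amount of skill development produces progressively smaller changes in measured speed, so a speed-based measure discriminates less well among older readers.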
In the current study the correlation between percentage of comprehension questions answered
correctly and WJ-III Passage Comprehension scores was not significant for 4th-grade students.
This surprising finding suggests some limitation in the reading comprehension rate measure. The
measure of percentage of comprehension questions answered correctly may have been restricted and
much less sensitive than reading speed because it was based on only 10 comprehension questions,
and the multiple-choice items provided had only three options. To address these limitations, future
researchers should conduct similar studies with longer passages that allow for more comprehension
questions and/or different question types (e.g., multiple-choice questions with more response options,
or short-answer questions with a larger range of scores). The comprehension questions used in the
current study were based on facts and information that could be answered from prior knowledge.
Although we attempted to control for prior knowledge by using median scores across three different
passages, future researchers conducting similar studies with fiction passages may find different
results, as prior knowledge may have less influence on comprehension-question accuracy.
In the current study, when students had difficulty reading a word, it was supplied after 5 seconds,
which may have enhanced comprehension scores. An alternative would have been to allow students
to continue to attempt to read words for an indefinite period of time, which could have invalidated
the reading speed component of the reading comprehension rate measure as they spent inordinate
amounts of time on individual words (Neddenriep et al., 2007). A more acceptable alternative
would have been to tell students to skip the word and continue reading after a pause of 3-5 seconds.
Regardless, researchers should conduct similar studies where assessment procedures are manipulated
(e.g., unknown words are not supplied, unknown words are provided after 3 seconds, the typical
procedure used with most WC/M measures) to determine if such procedures influence the predictive
validity of seconds to read, percentage of comprehension questions answered correctly, and reading
comprehension rate measures.
SUMMARY
The current study extended research on reading comprehension rates by demonstrating that the
measure of reading speed embedded within the reading comprehension rate measure accounted for
more than 50% of the variance in Broad Reading Cluster scores for each grade level. Researchers
and educators have expressed concern over the validity of reading rate measures, based on what
is measured (e.g., Jenkins & Jewell, 1993; Potter & Wamre, 1990; Skinner et al., 2002). The
current findings and previous research (Williams et al., 2006) suggest that, when considering the
validity of brief rate measures that incorporate aloud passage reading speed, what is measured (e.g.,
comprehension levels, words correct per minute, selected or inserted missing words) may be less
important than the measure of reading speed embedded within the rate measure.

REFERENCES
Compton, D. L., Fuchs, D., & Fuchs, L. S. (2006). Selecting at-risk readers in first grade for early intervention: A two-year
longitudinal study of decision rules and procedures. Journal of Educational Psychology, 98, 394 – 409.
Daly, E. J., III, Chafouleas, S., & Skinner, C. H. (2005). Interventions for reading problems: Designing and evaluating
effective strategies. New York: Guilford Press.


Daly, E. J., III, & Martens, B. K. (1994). A comparison of three interventions for increasing oral reading performance:
Application of the instructional hierarchy. Journal of Applied Behavior Analysis, 27, 450 – 469.
Deno, S. L., & Mirkin, P. K. (1977). Data-based program modification: A manual. Reston, VA: Council for Exceptional
Children.
Deno, S. L., Mirkin, P. K., & Chiang, B. (1982). Identifying valid measures of reading. Exceptional Children, 49, 36 – 45.
Fletcher, J. M., Coulter, W. A., Reschly, D. J., & Vaughn, S. (2004). Alternative approaches to the definition and identification
of learning disabilities: Some questions and answers. Annals of Dyslexia, 54, 304 – 331.
Freeland, J. T., Jackson, B., & Skinner, C. H. (1999). The effects of reinforcement on reading rate of comprehension. Paper
presented at the twenty-sixth annual meeting of the Mid-South Educational Research Association. Point Clear, AL.
Freeland, J. T., Skinner, C. H., Jackson, B., McDaniel, C. E., & Smith, S. (2000). Measuring and increasing silent reading
comprehension rates: Empirically validating a repeated readings intervention. Psychology in the Schools, 37, 415 –
429.
Fry, E. (1968). A readability formula that saves time. Journal of Reading, 11, 513 – 516.
Fuchs, D., & Fuchs, L. S. (2006). Introduction to response to intervention: What, why, and how valid is it? Reading Research
Quarterly, 41, 93 – 99.
Fuchs, D., Fuchs, L. S., & Compton, D. L. (2004). Identifying reading disabilities by responsiveness-to-instruction: Specifying
measures and criteria. Learning Disability Quarterly, 27, 216 – 227.
Fuchs, L. S., Fuchs, D., Hosp, M., & Jenkins, J. R. (2001). Oral reading fluency as an indicator of reading competence: A
theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5, 239 – 256.
Good, R. H., & Kaminski, R. A. (Eds.). (2002). Dynamic Indicators of Basic Early Literacy Skills (6th Ed.). Eugene, OR:
Institute for the Development of Educational Achievement.
Hale, A. D., Skinner, C. H., Winn, B. D., Oliver, R., Allin, J. D., & Molloy, C. C. M. (2005). An investigation of listening
and listening-while-reading accommodations on reading comprehension levels and rates in students with emotional
disorders. Psychology in the Schools, 42, 39 – 52.
Hintze, J. M., & Shapiro, E. S. (1997). Curriculum-based measurement and literature-based reading: Is curriculum-based
measurement meeting the needs of changing reading curricula? Journal of School Psychology, 35, 351 – 357.
Jenkins, J. R., & Jewell, M. (1993). Examining the validity of two measures for formative teaching: Reading aloud and maze.
Exceptional Children, 59, 421 – 432.
Marston, D. B. (1989). A curriculum-based measurement approach to assessing academic performance: What it is and
why we do it. In M. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 18 – 78). New York:
Guilford.
McDaniel, C. E., Watson, T. S., Freeland, J. T., Smith, S. L., Jackson, B., & Skinner, C. H. (2001). Comparing silent repeated
reading and teacher previewing using silent reading comprehension rate. Paper presented at the Annual Convention of
the Association for Applied Behavior Analysis: New Orleans.
Neddenriep, C. E., Hale, A. D., Skinner, C. H., Hawkins, R. O., & Winn, B. (2007). A preliminary investigation of the
concurrent validity of reading comprehension rate: A direct, dynamic measure of reading comprehension. Psychology
in the Schools, 44, 373 – 388.
Neddenriep, C. E., Skinner, C. H., Wallace, M. A., & McCallum, E. (2009). ClassWide peer tutoring: Two experiments
investigating the generalized relationship between increased oral reading fluency and reading comprehension. Journal
of Applied School Psychology, 25, 244 – 269.
Paris, S. G. (2005). Reinterpreting the development of reading skills. Reading Research Quarterly, 40, 184 – 202.
Parker, R., Hasbrouck, J. E., & Tindal, G. (1992). The maze as a classroom-based reading measure: Construction methods,
reliability, and validity. Journal of Special Education, 26, 195 – 218.
Potter, M. L., & Wamre, H. M. (1990). Curriculum-based measurement and developmental reading models: Opportunities
for cross-validation. Exceptional Children, 57, 16 – 25.
Saecker, L. B., Skinner, C. H., Brown, K. S., & Roberts, A. S. (in press). Using responsiveness data to modify a number-writing
intervention. Journal of Evidence-Based Practices for Schools.
Shapiro, E. S. (2004). Academic skills problems: Direct assessment and intervention (3rd Ed.). New York: Guilford
Press.
Skinner, C. H. (1998). Preventing academic skills deficits. In T. S. Watson & F. M. Gresham (Eds.), Handbook of child
behavior therapy (pp. 61 – 82). New York: Plenum Press.
Skinner, C. H., Logan, P., Robinson, S. L., & Robinson, D. H. (1997). Myths and realities of modeling as a reading intervention:
Beyond acquisition. School Psychology Review, 26, 437 – 447.
Skinner, C. H., Neddenriep, C. E., Bradley-Klug, K. L., & Ziemann, J. M. (2002). Advances in curriculum-based measurement:
Alternative rate measures for assessing reading skills in pre- and advanced readers. Behavior Analyst Today, 3, 270 –
281.
Spargo, E. (1989). Timed readings (3rd Ed., Book 1). Providence, RI: Jamestown Publishers.


Williams, A., & Skinner, C. H. (2004). Using measures of reading comprehension rate to evaluate the effects of a previewing
strategy on reading comprehension. Paper presented at the twenty-eighth annual meeting of the Mid-South Educational
Research Association. Gatlinburg, TN.
Williams, J., Neddenriep, C. E., Hale, A. D., Skinner, C. H., & Hawkins, R. (2006). Words correct per minute: How important
is the denominator, time required to read? Paper presented at the annual meeting of the Association for Applied Behavior
Analysis, International: Atlanta.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson Tests of Achievement-Third Edition. Itasca, IL:
Riverside Publishing.
