Practitioner’s Guide to
Curriculum-Based
Evaluation in Reading
Jason E. Harlacher
Marzano Research Laboratory
9000 E. Nichols Ave. Ste. 112
Centennial, CO 80112

Nicole M. Kattelman
Washoe County School District
425 East Ninth Street
Reno, NV 89520
Tami L. Sakelaris
Washoe County School District
425 East Ninth Street
Reno, NV 89520
Almost every school in the country has a problem-solving team, and research has
shown that effective problem-solving teams can improve both student (e.g., improved
reading skills, reduced behavioral difficulties, etc.) and systemic (e.g., reduced num-
ber of students retained in a grade, fewer students referred for special education, etc.)
outcomes (Burns and Symington, 2002). Yet, there continues to be a well-document-
ed deficiency in student reading and math skills in this country. How is that possible?
If almost every school in the country is convening a group of skilled professionals on
a weekly basis to brainstorm ideas for students who are experiencing difficulties, then
why is it that children continue to demonstrate skill deficiencies at an alarming rate?
The answer to the above questions is a complex one that goes well beyond the
scope of this book. However, I suggest that three reasons why problem-solving
teams have not led to more global positive outcomes are because most school per-
sonnel (a) have unfortunate misconceptions about assessment, (b) do not understand
problem-analysis, and (c) do not contextualize the interventions within a broader
system. I will discuss these below.
Assessment
Many school-based professionals confuse the term ‘testing’ with ‘assessment’. There
are several types of data that can be used within any assessment process, one of
which may be standardized norm-referenced tests or data collected to judge student
proficiency. What matters most is not the type of test used, but that the data match
the purpose for which they are used. There are many high quality measures that
may provide excellent summative information, but do little to inform instruction.
There are also tools that provide excellent instructional information, but the data
lead to inaccurate screening decisions. As the authors point out, assessment should
be a dynamic process that is guided by the question being asked. It seems that few
school-based professionals truly understand how to select the appropriate data to
address the question and often rely on commercially prepared tests because those
are mandated by the district in which they work.
Problem Analysis
“Which intervention should I use?” That is by far the most common question that I
hear from school-based practitioners. Most problem-solving teams are quite good
at identifying a problem and may even collect data to determine if the problem
persists. However, very few fully understand the diagnostic assessment process
outlined in this book or are able to examine discrete sub-skills that contribute to a
problem. As was somewhat famously stated, most problem-solving teams do not
solve problems; they admire them (the actual source of that quote is unclear, but
most attribute it to Jim Ysseldyke at the University of Minnesota). In my experi-
ence, the essential attribute of an effective problem-solving team is that they use
data to analyze the problem and to determine the intervention. When the problem is
analyzed, which intervention to use becomes quite clear and the likelihood that the
intervention will be successful substantially increases.
Imagine an elementary school with 600 students. On average 20 % of the students
need something more than effective instruction and curriculum (Burns, Appleton,
& Stehouwer, 2005). If there were 600 students, then 120 of them would require
some level of support beyond quality core instruction. If the problem-solving team
met weekly, spent 1 hour discussing two students per meeting (30 minutes each),
and met 32 times throughout the year, it would have time to discuss only 64
students, leaving 56 students unaddressed and no time for any follow-up meetings
regarding the students it did discuss. Of course, one solution would be to meet
twice as often, but more than likely school personnel cannot conduct the level of
analysis that is needed for effective problem-solving to occur at the individual level
for 120 students. First, some lower level of analysis has to occur at the classroom
and group level. Stated in language commonly used within Multi-Tiered System of
Supports, you cannot have an effective Tier 3 without an effective Tier 2, and you
cannot have an effective Tier 2 without strong core instruction (Tier 1).
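The capacity arithmetic above can be put into a few lines of code. This is a minimal sketch: the figures (600 students, a 20 % base rate, 32 meetings, two students per meeting) come from the foreword, while the function name and structure are ours.

```python
# Sketch of the problem-solving-team capacity arithmetic above.
# The inputs (600 students, 20 % needing extra support, 32 weekly
# meetings, 2 students per 1-hour meeting) are the foreword's assumptions.

def team_capacity(enrollment, pct_needing_support, meetings_per_year, students_per_meeting):
    """Return (students needing support, students the team can discuss, students left over)."""
    needing = round(enrollment * pct_needing_support)
    discussed = min(needing, meetings_per_year * students_per_meeting)
    return needing, discussed, needing - discussed

needing, discussed, left_over = team_capacity(600, 0.20, 32, 2)
print(needing, discussed, left_over)  # 120 64 56
```

Even doubling the schedule to 64 meetings only covers the 120 students once, with no time for follow-up, which is the point: individual-level analysis cannot scale without classroom- and group-level analysis first.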
Most school personnel do not systematically conduct analyses at Tier 2. Instead,
all students receive the same intervention under the idea of standard protocol. How-
ever, the term standard protocol does not mean that every student receives the same
intervention, it simply means that there are a few highly standardized interventions
from which to select for specific problems. For example, a student who needs better
decoding skills would likely not benefit from an intervention designed to enhance
comprehension. A low-level analysis to determine the broad category of the prob-
lem can be used to identify the target of the intervention for small groups of
students, to which a standardized intervention could then be delivered.
Curriculum-Based Evaluation
possible and could also increase the likelihood of increased research. I am confident
that practitioners will find the procedures easy to implement, especially with the
forms and tools that Harlacher, Sakelaris, and Kattelman provide. This book was
needed by researchers and practitioners alike, and the authors have filled an impor-
tant gap with a well-written and useful tool.
Acknowledgments
This book is the result of a tremendous amount of work and passion for developing
a user-friendly tool for practitioners. We want to first thank Dr. Kelly Humphreys,
our friend and colleague from Washoe County School District. Her support, insight,
and collaboration made this book possible. We also want to thank Mr. and Mrs.
Lock for allowing us a retreat at their condominium in Reno without which we
could not have completed this book.
Dr. Harlacher I’d like to thank my classmates, professors, and colleagues from
the University of Oregon. Their support and tutelage paved the way for my inter-
est in school wide systems and Curriculum-Based Evaluation. I also want to thank
Dr. Kenneth W. Merrell, who unfortunately passed away in 2011. Dr. Merrell was
a generous and supportive mentor who always kept his students at the forefront.
Spending just a few moments with Ken left one feeling renewed, listened to, and
capable of taking on more work (which was key for stressed out graduate students!).
Without his advisement, my career and this project would not have come to fruition.
Finally, I want to thank my brothers, Chad and Todd, and my parents, Carl and Kaye
Harlacher, whose unconditional love and support led me to where I am today.
Dr. Sakelaris I would like to thank my husband, Greg, and my children, Peyton,
Jade, and Quinn, for supporting me throughout the process of writing this book. I
could not have managed it without their understanding and patience. I also want to
thank my parents, Butch and Peggy Renovich, for their continuous interest in and
encouragement of this project.
Ms. Kattelman I wish to thank my husband, Michael, and my two children, Jake
and Elise, for their support and patience during the writing of this book. Without
Michael taking over parental duties and Jake and Elise giving Mom some quiet
time, my writing would not have been accomplished. I would also like to thank my
many colleagues who truly believe that each student can learn when provided the
right academic supports. Without these colleagues, the pursuit of building school-
wide and district-wide systems for all students’ needs would not be possible.
Contents
1 Introduction .............................................................................................. 1
1.1 Outline of the Book ............................................................................... 2
Chapter 1
Introduction
This book is divided into three sections (a fourth section contains supplemental
material including appendices, a glossary, and topic index):
1. The background of education and conceptual basis for CBE
2. Using CBE to assess reading
3. Making educational decisions within the CBE Process
The first section of the book discusses the current state of education to establish
the need for an effective problem-solving process and describes how CBE fits
in a school system. We review the National Assessment of Educational Progress
(NAEP) results, which show that over half of students in the fourth and eighth
grade are scoring below proficient. Over 90 % of fourth- and eighth-grade students
who have disabilities or are English language learners score below proficient on
the NAEP (National Center for Educational Statistics 2011a, b). Three practices
that can lead to improved outcomes for schools are presented. An overview of the
problem-solving model, which facilitates and enables school improvements, is pre-
sented (see Greenwood et al. 2008). Schoolwide problem solving is the focus of
Chapter 3 and individual problem solving with the CBE Process is the focus of
Chapters 4 and 5.
Chapter 3 provides the foundation for CBE through a discussion of educational
reform and Multi-Tiered System of Supports (MTSS). MTSS is a schoolwide
model of service delivery that provides a continuum of evidence-based supports,
with frequent data-based monitoring for instructional decision making, to improve
academic and behavioral outcomes (Barnes and Harlacher 2008; Horner et al.
2005; Kansas MTSS, n. d.). The principles behind MTSS are outlined
and then details about critical features are provided. MTSS sets the stage for the use
of CBE, which is used most effectively and efficiently in a collaborative,
problem-solving school culture.
Chapter 4 defines CBE, and discusses the assumptions behind CBE. Learning
is viewed as an interaction between the learner, the curriculum, and the environ-
ment (which includes instruction). CBE includes consideration and assessment of
all those components using low-inference assessments (low-inference means that
the gap between the results and interpretation of the results is small; Howell and
Nolet 2000).
The second part of the book focuses on the actual implementation of CBE in
reading. In Chapter 5, the steps of the CBE Process are detailed. CBE’s alignment
with the problem-solving model is illustrated, and an assessment framework for
conducting CBE is provided: review, interview, observation, and testing (RIOT)
across the domains of instruction, curriculum, environment, and learner (ICEL)
(Christ 2008). Chapters 6 to 8
walk through use of reading CBE in daily practice and provide step-by-step direc-
tions for using CBE to assess decoding, early literacy, and reading comprehension.
Explicit directions, reproducible handouts, and instructional strategies based on the
results of the CBE Process are provided. Those chapters guide the CBE Process and
result in practical recommendations.
The third part of the book describes how to make educational decisions with
CBE. Chapter 9 provides guidelines for progress monitoring, goal-setting, and
instructional decision making. Finally, answers to frequently asked questions about
CBE are provided in Chapter 10.
This book provides educators with a practical tool that is an extension of the
belief that all students can learn, given the right instructional support. CBE facili-
tates identifying student needs and instructional supports to address them. The CBE
Process focuses school teams in a way that increases their efficiency and effective-
ness in improving outcomes for all students.
Part I
Background of Education and
Curriculum-Based Evaluation
Chapter 2
History of Education
2.1 Chapter Preview
To illustrate the need for a problem-solving approach in schools and for the use of
curriculum-based evaluation (CBE), the present state of education, including sta-
tistics about student performance and explanations for low school performance,
is discussed within this chapter. Strategies for improving outcomes in schools are
discussed and the problem-solving model (PSM) is introduced. The PSM can be
applied at both the systems and individual levels, with the system laying the founda-
tion of support for the individual.
2.2 The State of Education

It is no secret that schools are struggling in the USA. Any educator can attest to
the challenges that schools face and the unsatisfactory outcomes for students. For
instance, the 2011 National Assessment of Educational Progress (NAEP) results
indicated that only 34 % of fourth-grade students in the USA scored at or above
proficient (see Table 2.1). Massachusetts was the top-scoring state, with 51 % of its
fourth graders scoring at or above proficient. So even the best state had only about half of
its fourth graders at a proficient level in reading. The lowest state was Mississippi
with only 25 % of its fourth graders at or above proficient in reading. Eighth-grade
students’ scores on the NAEP were similar, as the percentage of students scoring at
or above proficient also was 34 %. The highest state again was Massachusetts with
46 % of eighth-grade students proficient in reading and the lowest was Mississippi
with 21 % of eighth-grade students at or above proficient [National Center for Edu-
cational Statistics (NCES) 2011a]. Mathematics scores on the NAEP are relatively
higher, with 40 % of fourth-grade and 35 % of eighth-grade students scoring at or
above proficient (NCES 2011b).
Given the overall low performance of students on the NAEP, it is not surprising
that the average graduation rate is 75.5 % (NCES 2011c; Viadero 2011). However,
there is considerable variation among state graduation rates, with Nevada scoring
J. E. Harlacher et al., Practitioner’s Guide to Curriculum-Based Evaluation in Reading,
DOI 10.1007/978-1-4614-9360-0_2, © Springer Science+Business Media New York 2014
Table 2.1 Percentage of high school dropout rates and percentage of fourth- and eighth-grade
students scoring at or above proficiency on the National Assessment of Educational Progress
among US students

                                                        NAEP
                                             Reading(a)        Mathematics(b)
                              Dropout rates  Fourth  Eighth    Fourth  Eighth
General population                 4.1(c)      34      34        40      35
Students with disabilities        26.2(d)      11       7         9       9
Students with second language     24.5(e)       7       3        14       5
Caucasians                         2.7(c)      34      43        52      44
African Americans                  6.6(c)      16      15        17      14
Hispanics                          6.0(c)      19      19        24      21
American Indian, Alaska Natives    6.3(c)      18      22        22      17
Asian Americans                    2.4(c)      49      47        62      55

(a) NCES 2011a; (b) NCES 2011b; (c) NCES 2011c; (d) OSEP 2011; (e) Kim 2011
the lowest at 56 % and Wisconsin the highest at 90 %. The encouraging news is that
the status dropout rate [1] of students has declined since 1990, going from 12.1 % in
1990 to 8.1 % in 2009 (Viadero 2011). However, there remains a considerable gap
among ethnic groups and event dropout rates [1], with nearly three times as many
Hispanic and African-American students dropping out of high school compared to
Caucasian students (see Table 2.1) (NCES 2011c). Viadero (2011) reports that there
is a 17.6 % status dropout rate among Hispanic students and 9.3 % among African-
American students compared to 5.2 % among Caucasian students and 3.4 % among
Asian-American and Pacific Islander students. Additionally, a study by the National
Center for Research on Evaluation, Standards, and Student Testing examined drop-
out rates across three cohorts and discovered that students who speak a second
language have a dropout rate of 24.5 %, compared to a dropout rate of 15 % among
non–second-language learners (Kim 2011). Jobs for the Future (n. d.) summarizes
the state of graduation eloquently: “For every 10 students who enter eighth grade,
only seven graduate high school on time, and only three complete a postsecondary
degree by age 26” (p. 2).
As if those numbers are not troublesome enough, a comparison between the
USA and other developed countries reveals more dismal findings. UNICEF (2002)
examined the performance of teenagers (14 and 15 years old) in reading, math-
ematics, and science and ranked the United States 18th out of 24 countries after
averaging the findings of five different international studies on education (including
scores on the NAEP). Additionally, the results of the Programme for International
[1] Status dropout rate refers to the percentage of students within a certain age range
who are not currently enrolled in high school and have not earned a high school
diploma or equivalency. This is different from the event dropout rate, which is the
percentage of high school students who left school in a given year and did not earn
a diploma or equivalency.
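The distinction between the two dropout statistics can be sketched as two small functions. The function names and all counts below are invented for illustration; only the definitions mirror the text.

```python
# Sketch of the two dropout statistics defined in the footnote above.
# All counts are invented for illustration, not taken from NCES data.

def status_dropout_rate(age_cohort_size, not_enrolled_no_credential):
    # Share of an age range neither enrolled in high school nor holding
    # a diploma or equivalency (a point-in-time "stock" measure).
    return not_enrolled_no_credential / age_cohort_size

def event_dropout_rate(enrolled_at_start, left_without_credential):
    # Share of enrolled students who left during a single school year
    # without earning a diploma or equivalency (a one-year "flow" measure).
    return left_without_credential / enrolled_at_start

print(status_dropout_rate(1000, 81))  # 0.081, i.e., 8.1 %
print(event_dropout_rate(500, 20))    # 0.04, i.e., 4 %
```

Because the status rate counts everyone in an age range while the event rate counts only one year's leavers, the two figures for the same population can differ considerably.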
Student Assessment (PISA) indicate that the United States is not heading in a posi-
tive direction. The PISA is an international assessment administered on a rotating
schedule that measures performance of 15-year-old students in the areas of reading,
mathematics, and science. Over 65 countries are included and between 4,500 and
10,000 students are sampled from each country. In reading, the USA ranked 15th
in 2000 and then ranked 17th in 2009 (Fleischman et al. 2010; OECD 2001). The
USA’s performance in mathematics also decreased, from a ranking of 24th in 2003
to 31st in 2009 (Fleischman et al. 2010; Lemke et al. 2004). Performance in sci-
ence decreased in the USA from 21st in 2006 to 23rd in 2009 (Baldi et al. 2007;
Fleischman et al. 2010).
Performance trends for students with disabilities are even worse. The 2011 NAEP
results show that an average of only 11 % of fourth-grade and 7 % of eighth-grade
students with disabilities scored at or above proficient in reading. In mathematics,
17 % of fourth-grade and 9 % of eighth-grade students with disabilities scored at
or above a proficient level (NCES 2011a). The graduation rates for students with
disabilities are somewhat encouraging, depending on your point of view. The per-
centage of students who exited special education by graduating with a high school
diploma increased from 43 % to 56.5 % from 1997 to 2006. The percentage of stu-
dents who exited special education by dropping out of high school decreased from
49.5 % to 26.2 % in that same time frame (OSEP 2011). The overall graduation rates
may be low, but they are trending in a positive direction.
NAEP results for students who speak a second language are equally low. Only
7 % of fourth-grade and 3 % of eighth-grade English language learners (ELLs)
scored at or above proficient on the NAEP in reading, and 14 % of fourth-grade and
5 % of eighth-grade ELLs scored at or above proficient in mathematics (NCES 2011b).
Achievement results only paint half the picture of the state of public educa-
tion, as there is a historical concern over the identification of students who require
special education services (Merrell et al. 2006; Reschly 2008; Tilly 2008). Spe-
cial education services are provided to students with disabilities to ensure a free
and appropriate public education. Since the inception of special education, there
have been fluctuations in the identification rates across eligibility categories. For
example, the category of
learning disability (LD) currently accounts for almost half (44.6 %) of all students
identified as eligible for services. This statistic may not seem alarming in and of
itself, but what is alarming is the 272 % increase in identification of LD since the
establishment of special education. This identification increase can be compared to
no change in identification of students under speech-language impairment (SLI), a
25 % increase in identification of students under emotional disability/disturbance
(ED) and a 60 % decrease in identification of students classified under intellectual
disability. Additionally, health impairment, which is a category for students with
chronic health issues, had a 460 % increase in identification rates (US Department
of Education 2012). The changes in rates of students served under various eligibil-
ity categories have raised questions about the accuracy and subjective nature of
referrals of students to special education (MacMillan et al. 1998; Johnston 2011;
Merrell et al. 2006; Ortiz et al. 2008).
Concerns about special education also extend to students with diverse back-
grounds, as there is an apparent bias for identification of minority populations
(Ortiz et al. 2008; Rhodes et al. 2005). Overrepresentation of minorities in special
education has been a concern for over three decades for several reasons, includ-
ing questions about unreliable and invalid assessments, weak or inappropriate psy-
choeducational practices, misunderstanding of the needs of ELLs, and a difficulty
among practitioners to distinguish typical language development from an LD (Sul-
livan 2011; Zhang and Katsiyannis 2002). In fact, American-Indian/Alaskan-Native
students are 1.56 times more likely and African-American students 1.46 times more
likely to be identified for special education compared to other ethnic groups (a risk
ratio of 1.0 indicates the risk is similar between two groups). African-American
students are 2.28 times more likely to be classified under the category of ED and
a staggering 2.75 times under intellectual disability, compared to 0.85 and 0.62,
respectively, for Caucasians (OSEP 2011). Sullivan (2011) examined the rates of
ELLs vs non-ELLs classified for special education under LD, Mild Mental Retar-
dation (MMR), SLI, and ED categories from 1999 to 2006. She found that ELLs
are 1.82 times more likely to be identified for LD compared to non-ELLs. ELLs
also were 1.63 and 1.30 times more likely to be classified under MMR and SLI,
respectively. ELLs were estimated to be less at risk for ED, as the risk ratio was
0.12 (Sullivan 2011). Samson and Lesaux (2009) discovered a grade-based change
in risk for ELLs and special education. They found that ELLs are underrepresented
in kindergarten, have similar rates in first grade, but are overrepresented by the
third grade compared to native English speakers. This change in representation is
believed to stem from the shift from “learning to read” in early elementary grades to
“reading to learn” beginning in the third grade (Carnine et al. 2009).
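The risk ratios cited above can be reproduced from group identification rates. In this sketch the raw counts are invented; they were chosen only so the result mirrors the 1.82 figure reported for ELLs under the LD category (Sullivan 2011).

```python
# Sketch of the risk-ratio statistic used above. A ratio of 1.0 means
# the two groups are identified at similar rates. The counts below are
# invented so the output mirrors the 1.82 ELL/LD figure in the text.

def risk_ratio(identified_a, total_a, identified_b, total_b):
    """Risk of identification in group A relative to group B."""
    return (identified_a / total_a) / (identified_b / total_b)

rr = risk_ratio(91, 1000, 50, 1000)  # 9.1 % vs 5.0 % identification rates
print(round(rr, 2))  # 1.82
```

A ratio above 1.0 (e.g., 2.28 for ED among African-American students) indicates overrepresentation, while one below 1.0 (e.g., 0.12 for ED among ELLs) indicates underrepresentation.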
In summary, the majority of students in the fourth and eighth grades are scoring
below a proficient range in reading and mathematics. Achievement data for students
with disabilities and ELLs are lower. Graduation rates are trending upward, but
significant variation in graduation rates exists between states and between ethnic
groups. Historical trends in eligibility categories show drastic changes in the rates
of identification under certain eligibility categories, and students with diverse back-
grounds and languages have an increased risk for identification. Given those statis-
tics, it is not surprising that education in the USA ranks in the lower half compared
to other developed countries.
2.3 Why are Schools Struggling?

Given the state of affairs in schools, it is logical to ask why schools are not doing
well. Several possible explanations are discussed next: (a) teacher attrition, (b) a
changing student population and pressure for schools to provide more than academic
services to students, (c) isolation among staff and fragmented school structure, (d) a
historical focus on labeling and entitlement vs problem solving and instruction, and
(e) inadequate educator training in regards to scientific practices and limited use of
effective practices.
2.3.1 Teacher Attrition
Poor student outcomes may be related to the turnover of newly employed teachers
and the retirement of veteran teachers without a substantial workforce to take their
place (Carroll and Foster 2008). A remarkable 33 % of new teachers leave the pro-
fession in their first 3 years of employment and 50 % leave within 5 years (Gonzalez
et al. 2008; Wolfe 2005). More modest estimates of teacher attrition among new
teachers are between 10 % (Kaiser 2011) and 20 % to 25 % (Grissmer and Nataraj
Kirby 1987). Other research has reported that approximately 8.0 % of teachers leave
the profession entirely and 7.6 % leave for a new school annually (Keigher 2010).
Whether teachers leave because of retirement or because they have taken a posi-
tion elsewhere, teacher attrition has doubled since 1990 (Carroll and Foster 2008).
This constant turnover puts a strain on schools, can prevent continuity from year to
year, and reduces the number of veteran and highly skilled teachers within a school
(Barnes et al. 2007).
Another reason that schools struggle to perform well may be related to their dif-
ficulty keeping up with the rapid change in demographics. No longer is the aver-
age student Caucasian and from a two-parent home. Consider the following facts.
In 1950, 89 % of the population was Caucasian (Gibson and Jung 2002); today,
that number is at 75 % (US Census Bureau 2011a). Additionally, the percentage of
students living in two-parent homes held steady around 90 % from 1880 to 1970,
but the 2009 census results indicated just under 70 % of children live in two-parent
homes (see Fig. 2.1; US Census Bureau 2011b). That is a decline of 20 percentage
points in roughly 40 years after remaining consistent for nearly 100 years. In 2010, the
student population comprised 60 % Caucasian, 20 % Hispanic, and 14 % African
American. By 2050, those rates are projected to be 46 % Caucasian, almost 30 %
Hispanic, and 15 % African American (Ortiz et al. 2008). The population is chang-
ing, bringing with it different background knowledge, different cultural values, and
various primary languages of students. This change in population requires a change
in instruction that educators may not be fully prepared to handle (Merrell et al.
2006). Teacher preparation programs have been described as “largely inadequate”
in preparing teachers to work with diverse students (Ortiz et al. 2008, p. 1729). Most
programs do not have a large percentage of students with diverse backgrounds, nor
do they offer more than one course or even one chapter in a book on multilingual
assessment and cross-cultural competency (Ortiz et al. 2008; Rhodes et al. 2005).
Fig. 2.1 Historical living arrangements of children from 1880 to 2009. (US Census Bureau 2011b)
When the directive to provide all students with an appropriate education arose out
of Public Law 94-142, there was a necessary push to identify students with disabili-
ties and provide appropriate services to them. That push may have come at the cost
of focusing too much on labels and not enough on effective instructional practices.
Special education has been criticized for waiting to provide services until the gap
between expected and actual performance is large enough to be called a disability
(Johnston 2011; Merrell et al. 2006; Reschly 2008; Tilly 2008). In many cases, a
child had to fail for more than 1 year before being referred for an evaluation to
consider eligibility for special education services (Nelson and Machek 2007). The
result? A problem that is larger than it was when the student was first identified as
struggling and a problem that is very difficult to remediate.
The practice of identifying students as eligible for special education services
was not only criticized as a “wait-to-fail” model, but also as a process of infor-
mation gathering that did not inform instruction (Johnston 2011; Reschly 2008).
Many of the assessments used to identify students as eligible for special education
services were summative and measured aptitude. The results rarely contributed to
a meaningful instructional plan (Braden and Shaw 2009; Johnston 2011; Merrell
et al. 2006). Instead, the assessment results contributed global statements about a
child’s learning capacity compared to a normative sample (e.g., your student scored
in the xth percentile) (see Inset 2.1) (Hosp 2008; Ysseldyke et al. 2010). Many
teachers were justifiably frustrated when little helpful information was produced
from such extensive evaluations and schools were criticized for identifying and ad-
miring students’ difficulties in education without offering real solutions (Johnston
2011; Tilly 2008).
Another factor contributing to poor school performance is that some teachers are
not trained adequately in research-based/effective instructional practices (Johnston
2011). Consider the area of reading. Despite the fact that teaching reading requires
knowledge of the “Big 5” areas of reading (i.e., phonemic awareness, phonics, flu-
ency with connected text, vocabulary, and reading comprehension) (Carnine et al.
2009; Hattie 2009), only 15 % of teacher-training programs exposed future teachers
to those “Big 5” sufficiently (Walsh et al. 2006). The National Council on Teacher
Quality examined syllabi from 72 education schools (a total of 227 courses) and
discovered that only 23 % used textbooks rated as “acceptable” and there was no
clear consensus on a seminal text in reading. Out of 227 courses, 154 of them used
a text unique to their course, indicating that universities are using a wide range of
texts and have not coalesced around the empirical research base. Texts used ranged
from those written based on personal opinion to those with outdated research, and
statements in some of the syllabi examined blatantly ignored the reading research.
Despite the certainty of the research on effective reading instruction, only one in
seven universities taught teachers the science of reading (Walsh et al. 2006).
If teachers are not trained well, then it is not surprising that ineffective or nonsup-
ported practices would continue to be used in schools. Ash et al. (2009) illustrate the
lack of use of research to guide practice in the classroom. They analyzed the read-
ing practices of 80 teachers and 27 literacy coaches from elementary and middle
schools and found that almost half of the teachers and literacy coaches sampled
used round robin reading (RRR), an ineffective reading strategy in which students
are called upon one-by-one to read portions of a text aloud. Of the sample that
used RRR, 21 % of them reported being unaware of the research related to RRR,
and 30 % of them knew that the research showed RRR is ineffective, yet they still
reported using it.
Use of unsupported practices is not limited to teachers, and teachers are not to
blame for the use of such practices. A somewhat intuitive theory that came out of
special education law was that each student has a certain capacity for learning that
can be unlocked with the right instructional program. By assessing intrachild and
cognitive abilities, educators believed they could provide each child with a specially
designed program that would maximize learning, particularly those identified as
having a disability. This theory, aptitude-by-treatment interaction (ATI), argued that certain
published, norm-referenced tests could be used to predict the success of certain in-
terventions over other ones for particular students. Individualized instruction could
then be planned based on the results. Griffiths et al. (2007) summarized the logic of
ATI eloquently: “ATI logic contends that certain measured aptitudes (measured in-
ternal characteristics of children) can be logically matched with certain instructional
approaches to produce differential benefit or learning with the student” (p. 15).
However, over 30 years of research have not supported ATI (Braden and Shaw
2009; Merrell et al. 2006; Reschly 2008; Ysseldyke et al. 2010; Stuebing et al.
2009). Griffiths and colleagues again provide an eloquent (and blunt) statement:
“It is a myth among special educators, school psychologists and the neuropsycho-
logical field, that modality matching is effective and can improve student learning”
(p. 15) (see also Gresham and Witt 1997). Still, the focus on assessing processing
and cognitive abilities remains in practice (Fiorello et al. 2006; Restori et al. 2008),
despite the lack of empirical support for ATI or learning styles (Braden and Shaw
2009; Pashler et al. 2008; Restori et al. 2008).
These findings illustrate two issues: first, some educators will continue to use
disproven practices, even if they know the research, and second, a portion of educa-
tors have finished their training program or continue to work without having been
exposed to relevant research. Just like other professions, educators require ongoing
training on current, evidence-based practices to ensure effective and contemporary
practices are used. Ensuring training and implementation of effective practices en-
ables students to have the best chance at success.
The purpose of this chapter is not to beat up on public education. The fact that
many people work in education because of their resolve, passion, and dedication to
helping students is not in question. Instead, the concerns in schools are described
to create the impetus for change. The urgency and need for more effective prac-
tices in schools must be understood. As educators passionate about our roles, it is
unsettling to know that so many students are failing. Even more unsettling is that
16 2 History of Education
use of ineffective practices, making decisions in the absence of good data, and the
inconsistency of teacher training all are very real problems today. Continuing to
do “business as usual” will result in nothing more than mediocrity. As Mark Twain
once said, “If you do what you’ve always done you’ll get what you always got.”
Having been reminded of the challenges educators face, we hope to have
sparked enthusiasm for participation in educational reform. This book supports
one part of that reform: using assessments that are focused on problem solving and
that have high instructional relevance. Assessment should provide information that
guides educators in identifying what to teach and how to teach. We have painted a
grim picture of education, but we also offer hope and direction. Although there are
many ways schools can improve performance, we outline three strategies before offering a conceptual framework for reform; we refer to these strategies as improvement practices.
Because schools have operated in silos and teachers have historically been iso-
lated from each other (Hattie 2009; Johnston 2011; Schmoker 2006; White et al.
2012), the first improvement practice is to break down those barriers and increase
the amount of collaboration among staff (Goddard et al. 2007). DuFour and Mar-
zano (2011) describe professional learning communities (PLCs) as one avenue to
increase collaboration and support among staff (see also DuFour 2004). PLCs are
defined as any combination of “individuals with an interest in education” (DuFour
2004, p. 6). Often, this is viewed as a grade-level or department-level team of teach-
ers and educators. Within PLCs, teams work together to answer three questions:
1. What do we want each student to learn?
2. How will we know when each student has learned it?
3. What will we do when a student experiences difficulty in learning it?
As PLCs answer these questions, they arrive at two insights. First, they quickly
find out that it takes all of them working together to effectively answer and respond
to those questions. No single teacher alone can answer all three of those questions as
effectively as the team. After discussing the curriculum and standards that they want
students to learn, teachers create common formative assessments (or use ones al-
ready created) to answer the second question. To answer the third question, teachers
are faced with what DuFour refers to as an "incongruity between their commitment to ensure learning and the lack of a coordinated strategy to respond when some students do not learn" (DuFour 2004, p. 8). The result is that teachers realize they
must work together to provide additional time to students who have not yet learned
the content. Teachers within a PLC become more collaborative and conversations
among them focus on data to determine if students have learned the content and on
sharing ideas, resources, and support for each other. Collaboration inevitably results
in answering the three questions posed previously.
Second, as PLC members embrace the notion of those questions, their focus
shifts from teaching to learning. Perhaps this shift is subtle, but it can create a belief
that all students can learn with the right support, and it can break down the distrust
or lack of collegiality in schools (Marzano 2003). Collegiality should not be confused with congeniality or with collaboration on nonacademic topics; the collegiality and collaboration essential to the success of PLCs deal with openly trusting each other as professionals. PLC members work together to analyze and improve their classroom practices. They engage in an ongoing cycle of questions about instruction and student learning, they believe and trust in each other, and, ultimately, the result is improved student achievement (DuFour 2004; Ainsworth and Viegut 2006).
Goddard and colleagues (2007) conducted a study that looked at the connection
between collaboration and student achievement. Using hierarchical-linear model-
ing, a statistical process that accounts for nesting issues among schools (e.g., one
school may have confounding factors relative to another that can influence results, such as one school with many students from wealthy families compared to a school with many students living in poverty), the authors found a connection
between fourth graders’ achievement in mathematics and reading and the amount
of teacher collaboration. Schools in which teachers collaborated more frequently on
issues related to curriculum, instruction, and professional development had students
who scored higher on the state assessment examination. Although this research is
preliminary, it lends credence to creating a collaborative school environment, given its association with higher student achievement (Stiggins and DuFour 2009; Yates and
Collins 2006).
The persistent use of ineffective practices was discussed earlier (Ash et al. 2009;
Pashler et al. 2009), and it should be very obvious that if schools are to get better
results, ineffective practices must be replaced with effective practices. Ash et al.
(2009) make recommendations to increase the use of effective practices by teach-
ers. They point out that simply sharing the research may not be enough to ensure
teachers adopt effective practices. They state that teachers should be encouraged to
explore the research, gather data to evaluate their own students’ progress, and have
ongoing professional development to align their previous knowledge with new
knowledge. Yoon et al. (2007) conducted a review of the research looking at the
link between professional development provided to teachers and student achieve-
ment. Their conclusion was that teachers who received an average of 49 hours of professional development raised their students' achievement by 21 percentile points. Although the studies included in their analyses were only at the elementary level, this
is evidence to suggest that training (which leads to improved practices) can impact
student achievement.
Finally, schools can examine the alignment between their assessment and their in-
structional practices. Historically, assessment in education was used to either docu-
ment the occurrence (or nonoccurrence) of learning after the fact, largely for ac-
countability purposes, or to qualify students for extra services, such as special edu-
cation, second-language support, or the talented and gifted program (Howell and
Nolet 2000; Merrell et al. 2006). The schedule and purpose of assessment created
schools in which timely feedback about student learning was limited and informa-
tion that was beneficial for instructional planning was weak at best (Merrell et al.
2006; Pashler et al. 2009; Reschly 2008). To improve student outcomes, teachers
need information collected readily and efficiently so that they can make adjustments
to their instruction while they provide it (Hosp 2008; Ysseldyke and Christenson
1988). This calls for a shift from “assessment of learning” (i.e., using the results of
assessments to document that learning occurred) to “assessment for learning” (i.e.,
using the results of assessment to adjust instruction while it is actively occurring to
ensure learning) (Stiggins and Chappuis 2006).
This third improvement practice simply states that what is taught should be
measured and what is measured should inform what is taught. This improvement
practice requires use of assessments intimately tied to instruction that generate data
with high-instructional relevancy. (We use the term "high-instructional relevancy"
to refer to data that provide teachers with information about what to teach and how
to teach.) Historically, problems have been defined as residing within the child and
assessments reflected that belief (Reschly 2008; Ysseldyke and Christenson 1988).
Educators looked within the child and tried to identify innate, biological, or cogni-
tive reasons to explain poor student performance. Terms such as “slow processor”
or “visual learner” were used to describe students, but offer little information about
what academic skills students need to learn or what is actually being taught in the
classroom (i.e., the curriculum). Information about instruction was limited (or non-
existent), so teachers were not offered helpful solutions about how to work with
students who were struggling academically (Reschly 2008; Tilly 2008).
To address these shortcomings, educators can use assessments to examine and
measure alterable factors that contribute to student learning. The effort moves
from focusing within the child to describing problems as the difference between
what is expected and what occurs using observable and measurable terms. Vague
descriptions such as “he struggles in reading” become, for example, “he is read-
ing 50 words correctly per minute and he should be reading 100 words correctly
per minute” or “she can’t focus on anything” becomes “In mathematics class, she
attends to task 45 % of the time and she should attend at least 80 % of the time.”
Defining problems in observable and measurable terms leaves little room for error in interpretation, provides a clear goal to work toward, and makes the skills students need to be taught concrete and clear (Howell and Nolet 2000; Ysseldyke and Christenson 1988).
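As a rough sketch, the expected-versus-observed framing above can be expressed in code; the function and parameter names here are illustrative, not drawn from the CBE literature:

```python
def define_problem(skill: str, observed: float, expected: float, unit: str) -> str:
    """Phrase a problem as the measurable gap between expected and
    observed performance (illustrative names only)."""
    gap = expected - observed
    return (f"{skill}: {observed} {unit} observed, "
            f"{expected} {unit} expected (gap of {gap} {unit})")

# "He struggles in reading" restated in measurable terms:
print(define_problem("Oral reading", 50, 100, "words correct per minute"))
```

Stated this way, the problem carries its own goal: close the gap between observed and expected performance.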
2.5 Use of Problem-Solving Model 19
Given the fragmented nature of some schools and the lack of clear use of data and
effective practices, it is not surprising that many schools need whole-school reform
to improve student achievement and outcomes (Newmann et al. 2001). Applying
the PSM to the whole school rests within a tiered model of prevention, which we
refer to as Multi-Tiered System of Supports or MTSS (Barnes and Harlacher 2008;
Brown-Chidsey and Steege 2010; Horner et al. 2005; Reschly 2008; Tilly 2008).
We will discuss MTSS in more detail in the next chapter, but MTSS is a schoolwide
service delivery model. With MTSS, schools reconfigure how they deliver services
into a leveled model in which students are matched to a corresponding level of in-
struction (called tiers). There is a focus on prevention, data-based decision making,
and use of the PSM to improve practices and outcomes for students.
The installation of this model into a school is not just about using the PSM, but
includes improving the use of research-based practices, increasing collaboration
among staff, and aligning assessments with instruction. This model provides an
overarching umbrella within which the three improvement practices can be concep-
tualized. Schools progress through the steps of the PSM to ask if their school as a
whole is achieving high standards. The four phases of the PSM are applied to the
entire school, but educators also can apply the phases to any particular student or
group of students. The application of the PSM to the individual student is the focus
of this book, but it is the culture and philosophy behind MTSS that provides the
context in which problem-solving assessments are used.
When using the PSM at the individual student level, educators follow the same pro-
cess as described for systems-level problem solving. Obviously, the unit of analysis
is much smaller at the individual level, but the steps of the PSM are the same. The
problem initially is identified and analysis of why the problem is occurring is un-
dertaken. A thorough analysis of all the relevant, alterable variables contributing to
the student’s learning is conducted, from which instructional plans are created or
adjusted. Finally, ongoing monitoring of student learning and instructional fidelity
is conducted to ensure the instructional plan results in the student progressing toward (and ultimately reaching) his or her goal.
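The cycle just described, identify the problem, analyze it, adjust instruction, and monitor, can be sketched as a loop. This is only an illustration of the control flow; the function names and the collapsing of analysis and plan adjustment into a single intervene step are our own simplifications, not part of any published PSM implementation:

```python
def problem_solving_model(observe, expected, intervene, max_cycles=10):
    """Repeat the PSM cycle until observed performance meets the goal.

    observe()   -> current performance (problem identification)
    expected    -> the measurable goal (what is expected)
    intervene() -> analyze the gap and adjust the instructional plan
    Returns the number of intervention cycles used.
    """
    cycles = 0
    while observe() < expected and cycles < max_cycles:
        intervene()      # analysis and plan adjustment
        cycles += 1      # ongoing monitoring occurs on the next observe()
    return cycles

# Hypothetical student gaining 10 words correct per minute per cycle:
score = {"wcpm": 50}
cycles = problem_solving_model(
    observe=lambda: score["wcpm"],
    expected=100,
    intervene=lambda: score.update(wcpm=score["wcpm"] + 10),
)
print(cycles)  # 5
```

The loop exits either when the goal is reached or when the cycle cap forces a broader re-analysis of the problem.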
If schools are to use the PSM at both the system and individual level, they require
assessments that align with that purpose. Summative assessments and assessments
that focus on unalterable or intracognitive variables have less relevance to problem
solving, and schools often lack assessment tools that allow for effective problem solving. It is in this gap between what schools need and what they have that the purpose of this book was born. We believe that CBE can fill that void and provide schools
with an assessment process that allows for effective problem solving.
Schools in the USA are troubled by a number of issues, including low academic
performance and high rates of violence, bullying, and problem behavior. Contribut-
ing factors include the isolation of teachers, changing student population, teacher
attrition, and a historical focus on entitlement and labels. To improve schools, we
outlined a schoolwide approach of implementing MTSS (systems-level problem
solving) and an individual approach of using CBE (individual problem solving).
Key Points
• The USA ranks 18th out of 24 developed countries in education.
• Fewer than half of US fourth- and eighth-grade students are performing at
proficient levels in reading and mathematics.
• Students with disabilities and ELLs perform lower than students without
disabilities and those who speak English as their primary language.
2.7 Summary and Key Points 21
• ELLs are almost twice as likely to be identified for special education and are three times more likely to drop out, compared to other ethnic groups.
• Schools are dealing with teacher attrition, inadequate teacher training, and teacher isolation in classrooms.
• MTSS is a schoolwide approach for service delivery.
• Schools could potentially improve outcomes through use of the PSM, in-
creased collaboration and use of data for decision making, and consistent
use of research-based practices.
Chapter 3
Multi-Tiered System of Supports
3.1 Chapter Preview
Chapter 2 discussed how schools can apply the problem-solving model (PSM) to
both the whole school system and individual students. Chapter 3 focuses on the
former: systems-level problem solving. This chapter first outlines tiered models
of service delivery that schools are using to improve academic and behavioral out-
comes. The foundational principles are discussed and the three main components of
such models are identified: (a) tiers of instruction, (b) a comprehensive assessment
system, and (c) use of the PSM. Each component is discussed in detail, and the
chapter ends with a focus on how curriculum-based evaluation (CBE) fits into the
tiered models of service delivery.
MTSS is a multitiered model of service delivery in which all students are provided
an appropriate level of academic and behavioral support based on their needs and
skill levels. Graden et al. (2007) articulately describe the premise behind MTSS
as matching "research-validated instruction…to the data-based needs of students" (p. 295). MTSS relies on the ongoing use of data to make decisions to ensure both
proper implementation of practices (i.e., fidelity or treatment integrity) and that
the practices actually are effective. MTSS is more than just a process of provid-
ing interventions to a small group of students. Rather, it is a school reform model
and with it comes a new way of thinking and doing business in education. Various
terms used to describe MTSS will be reviewed before discussing the principles
behind it.
Terms to Describe MTSS Various terms are used to describe a tiered model of
service delivery depending on whether the focus is on the academic or behavio-
ral outcomes in schools. Response to Intervention (RTI) is often used to describe
an academic tiered-model, whereas Positive Behavioral Interventions and Sup-
port (PBIS) is used to describe a behavioral tiered model (formerly called Positive
Behavior Support or PBS). Additional terms that have been used to describe aca-
demic tiered models include Instructional Decision Making (Tilly 2008), Response
to Instruction and Intervention (RTII) (www.pattan.net), and RTI2 (rti.lausd.net/
about_rti). PBIS is the most often used term for behavior tiered models, but anot-
her term used to describe such a model is RTI-Behavior or RTI-B (http://www.
dpi.state.nd.us/health/pbs/index.shtm). As schools began to combine both acade-
mic and behavior tiered models and as the overlap between these models became
evident (Algozzine et al. 2012), terms such as Multi-Tiered System of Supports
(http://www.kansasmtss.org) or MTSS appeared (http://www.florida-rti.org/flori-
daMTSS/index.htm).
These terms signify the similarities between academic and behavior tiered mod-
els and acknowledge the link between academic and behavioral performance in
students (Kansas MTSS, n. d.; McIntosh et al. 2010; McIntosh et al. 2008). Although there are variations in the form of "tier" (i.e., tiered vs. tier) and in the plural use of the words "system" and "supports" (i.e., system of support, systems of supports), we use the term MTSS to refer to a tiered
model used by schools to address both the academic and behavioral functioning of
students. Whether schools implement a tiered model to address academics, behav-
ior, or both, the principles behind these models are the same. Before discussing
the salient features of MTSS, the foundational principles of MTSS are presented
(see Table 3.1).
learn, except for that student.” The true spirit of MTSS is the belief that all stu-
dents can reach grade-level expectations, given the right support. This principle
embodies matching what students need with what they get. An arduous task for
sure, this can become a reality in light of the remaining principles and features
of MTSS.
Believing all students can learn with the right support raises questions about
how feasible and realistic it is for schools to achieve grade-level standards for all
students. First, it is important to point out that some schools are getting remark-
able results with tiered systems. For example, Marchand-Martella et al. (2007)
reported improvements with effect sizes (ESs) of 0.50–3.96, which means that
students in grades K–2 improved their academic scores by at least 19 percen-
tile points and upward of 40 percentile points (depending on the skill measured
and the student’s grade). (See Inset 3.1 for an explanation of an ES.) Algozzine
et al. (2008) reported an increase of students scoring at benchmark in kindergar-
ten from 64 % to 82 % after 1 year of implementation. Fisher et al. (2008) used
common formative assessments and increased collaboration among teachers in a
high school setting to increase the percentage of students scoring at or above basic
on the state biology assessment from 30 % to 71 %. Other settings and researchers
report similar positive gains in achievement, reductions in special education refer-
rals of 50 %, and an increase in the accuracy of special education referrals (Burns
et al. 2005; Greenwood et al. 2008; Jimerson et al. 2007; VanDerHeyden et al.
2007; Vaughn et al. 2010). In terms of behavior, Taylor-Greene and co-workers
(1997) reported a reduction of 42 % in office discipline referrals in just 1 year in a
middle school setting. Others have found similarly positive results in both elemen-
tary schools (Curtis et al. 2010; McCurdy et al. 2003) and high schools (Bohanon
et al. 2006).
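The conversion from an effect size to percentile points reported above follows from the standard normal distribution; this sketch assumes normally distributed scores and a student starting at the 50th percentile:

```python
from math import erf, sqrt

def percentile_point_gain(effect_size):
    """Percentile points gained by a student who starts at the 50th
    percentile, assuming normally distributed scores."""
    phi = 0.5 * (1 + erf(effect_size / sqrt(2)))  # standard normal CDF
    return (phi - 0.5) * 100

print(round(percentile_point_gain(0.50)))  # ES of 0.50 -> 19 percentile points
print(round(percentile_point_gain(3.96)))  # a very large ES approaches 50
```

This is why an ES of 0.50 corresponds to roughly a 19-percentile-point improvement, while very large effect sizes push gains toward the ceiling of 50 points above the starting percentile.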
26 3 Multi-Tiered System of Supports
The point here is not just about the research. It is important to distinguish between
the belief that all students can learn grade-level content and the reality of schools
accomplishing that goal. We whole-heartedly endorse the belief that all students can
learn given the right support, and we also believe that schools can achieve this goal.
Although educators may get frustrated when students do not reach grade-level stan-
dards, separating the belief from reality allows educators to retain their persistence and dedication to their work. Without adopting the stance that all children can learn, it
is possible that educators will not persist as long if they believe the goal is unreach-
able (Howell and Nolet 2000; Johnston 2011). This discussion is not suggesting that
those educators are not working hard or doing their best work. But the belief that
all students can learn versus the belief that some students can learn is the difference between persisting to find an effective instructional plan and not persisting.
Truly embracing Key Principle 1 means one does not place stock in unalterable variables and instead focuses on continually finding alterable variables to improve the educational performance of students (regardless of the student's past history
or struggles). The task of achieving grade-level standards for all students is not
easy, but believing it can happen and focusing on continuous improvement through systematic problem solving will make it possible regardless of the current reality.
The following situation highlights this point. A school psychologist was par-
ticipating in a reevaluation for a student in special education and the question of a
possible intellectual disability (ID) was raised during the pre-evaluation meeting.
3.3 Description of Multi-Tiered System of Supports 27
Although the child had some significant challenges and skill deficits (e.g., third-
grade reading level in the sixth grade), she did not meet eligibility criteria under that
particular category. The teacher who had spent a few years working with the student
was particularly interested in the cognitive score and asked the school psychologist
for the result. (For reference, a cognitive score below 70 is one of several criteria
required for an ID classification.) The conversation went as follows:
• School psychologist: Well, it was low. But her academic achievement and adap-
tive skills were measured in the average range.
• Teacher: How low?
• School psychologist: It was in the high 60s.
• Teacher: Oh…so it’s not my fault.
This example illustrates the danger of looking for reasons to explain a student’s fail-
ures that are not alterable or instructionally relevant (see Braden and Shaw 2009).
Acknowledging real challenges that require intensive support and the impact that
unalterable variables can have on learning is fine, but once it is determined that a
child's innate traits are the reason the student is struggling, the belief that the child cannot succeed will diminish instructional efforts (Brady and Woolfson 2008; Rolison
and Medway 1985; Woodcock and Vialle 2011). For example, Woodcock and Vialle
(2011) found that teachers in training had lower expectations of students labeled
as having a learning disability compared to those students who were not labeled.
Rolison and Medway (1985) found similar results in which teachers held lower
expectations for students classified under the category of mental retardation. They
also believed that a student’s history of difficulty was more due to innate ability
than external factors for those students labeled with a learning disability compared
to those students not labeled as having a disability. When student failure is not at-
tributed to innate characteristics, educators are more likely to look for ways instruc-
tion can be adjusted to improve outcomes, and they also are more likely to believe
they can change the student’s outcome trajectory (Brady and Woolfson 2008). It
is important to acknowledge the risk in believing that certain abilities or student
performances are innate (Howell 2010).
Students who start behind tend to stay behind, with the gap between their skills and
those of typical peers continuing to widen as they progress through school (Baker
et al. 1997; Hart and Risley 1995). For example, students who score in the lowest 10 % of readers (based on oral reading fluency rates) at the end of first grade
tend to remain in the lowest 10 % through the fifth grade (Good et al. 1998). These
difficulties persist into high school, as 74 % of poor readers in the third grade re-
main poor readers in the ninth grade (Fletcher and Lyon 1998). Additionally, Baker
and colleagues (1997) identified that students in grades 1–3 from economically
Related to Key Principle 1, that all children can learn, is the notion of instructional match. Critical to ensuring all students learn is providing the right support. (The right support means support that is of the right intensity, targets the right skill deficits, and results in improved student performance on track to meet goals.) MTSS uses
multiple layers of instruction, which increase in their intensity, in order to match a
student’s skills and skill deficits to the right level of support. If instruction does not
result in sufficient growth, then adjustments are made or additional supports are
provided until sufficient growth results (Barnes and Harlacher 2008). MTSS allows
for a continuum of services to meet the needs of all students.
The last Key Principle emphasizes the deconstruction of silos and isolation in
schools and the increase of collaboration and schoolwide adoption of MTSS prac-
tices. Although the principles of using data to identify students who need support
and to monitor the effectiveness of interventions can be used in small groups or with
any one individual student, MTSS is a schoolwide reform model (Barnes and Har-
lacher 2008). It is intended to create a new climate and culture within the school in
which all students are seen as learners and data are used not just for accountability,
but instead to ensure learning occurs (DuFour et al. 2004).
Reviewing the principles behind MTSS is helpful for understanding not only
what MTSS is but also why it is being implemented. Understanding the why behind
MTSS increases the likelihood of educators buying into the process (Ikeda et al.
2002). Having reviewed the principles, a summary of the MTSS process is pre-
sented and the three main components of the model are discussed in detail.
3.4 Description of MTSS
The goal of MTSS is to improve the academic and behavioral outcomes of all
students (behavioral refers to social, emotional, and behavioral outcomes of stu-
dents) [Barnes and Harlacher 2008; Horner et al. 2005; Jimerson et al. 2007; Kan-
sas MTSS, n. d.; National Association of State Directors of Special Education
(NASDSE) 2005]. All students are screened at least three times a year using reliable
and valid measures that are efficient, yet predictive of general outcomes (principle
4). The data are summarized and reviewed shortly after being gathered to (a) identify
which students are at risk for not meeting academic or behavioral expectations, and
(b) ensure the system is meeting the needs of most students with core instruction
(principle 2). Students then receive evidence-based support that matches their level
of need (principles 3 and 5). Students with more severe deficits are provided more
intensive supports; as the level of need increases, so does the intensity of the support
(principle 5). Students’ progress is monitored over time to ensure supports are ef-
fective. Students requiring more intensive supports receive more intensive monitor-
ing (principle 5). If progress monitoring data show the support is not effective, the
instructional plan is modified in an attempt to allow students to progress toward goals.
The cyclical process of using data to identify and ensure effectiveness of a given
intervention to solve an identified problem (called the PSM) ensures that each child
is given the right support (principle 1).
As illustrated in Fig. 3.1, the aim is to create healthy, functioning schools that use
data to make decisions and are able to provide support to students immediately when
needed and before problems are severe. Using evidence-based instruction and layers
of increasingly intense supports, schools should meet the needs of at least 80 % of
their students with their core or Tier 1 level of support (Torgesen 2000). That is to
say that at least 80 % of students will benefit from the instruction that all students
receive and will not require additional help to reach grade-level standards. Ten to
15 % of students will require Tier 2, or additional instruction to supplement Tier 1
to meet academic and behavioral expectations. About 3–5 % of students will require
intensive support to remediate missing skills. Simply put, MTSS is a model that uses
data in a formative manner to match students’ needs to an effective level of support,
with the goal of creating a healthy system that meets the needs of all students.
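As a rough sketch of the matching logic described above, tier assignment might be expressed as a simple decision rule. The percentile cutoffs below are illustrative placeholders; actual MTSS teams rely on validated benchmark criteria from their screening measures rather than fixed cutoffs:

```python
def assign_tier(screening_percentile):
    """Map a screening percentile to a support tier (illustrative cutoffs)."""
    if screening_percentile >= 20:
        return 1  # core instruction serves roughly 80 % of students
    if screening_percentile >= 5:
        return 2  # supplemental support for roughly the next 10-15 %
    return 3      # intensive support for roughly 3-5 %

for p in (50, 12, 3):
    print(f"{p}th percentile -> Tier {assign_tier(p)}")
```

The essential feature is not the cutoffs themselves but the formative use of data: each screening window re-applies the rule, so students move between tiers as their performance changes.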
The three-tiered model of prevention and service delivery was developed out of
the public health field, and it is helpful to describe MTSS using a medical analogy
(Reschly 2008). Most of us engage in common practices to maintain our health: eating healthy, getting enough sleep, exercising regularly, taking vitamins, and going to the dentist every 6 months for a cleaning. When we visit the doctor, quick screeners, such as blood pressure and body temperature, are used to assess our overall health. If we meet expectations on the screeners and continue to stay healthy, the only recommendation is to continue with the common practices mentioned previously.
When a screener indicates that there may be a cause for concern, additional in-
formation is needed to determine why the problem is occurring. If our body tem-
perature is high, the doctor may order a blood test to see why. If our knee hurts, the
doctor may order an x-ray to pinpoint the cause of the pain. Information gathered
3.5 Core Components of MTSS 31
informs the treatment plan, as the diagnostic assessment gets to the root of the prob-
lem. If the annual checkup indicates that we have high cholesterol, the treatment
plan would likely include an increase in exercise and an adjustment of diet. Weekly
weigh-ins and follow-up laboratory work would be recommended to determine if
the plan is effective in lowering cholesterol. A food and exercise journal would also be suggested to ensure the plan is followed. If cholesterol levels are not improved, more
assessment may be conducted to determine if there is an alternate explanation, and
the intensity of the treatment and monitoring plans increase accordingly. In this case
and in MTSS, data are used to determine treatment needs with a focus on a speci-
fied goal. Having discussed the principles and process of MTSS, the next section
provides an overview of the core components of MTSS.
Tier 2 Some students will require additional supports to supplement services pro-
vided at Tier 1. Tier 2 involves targeted instruction for groups of students with
common needs and is provided for an estimated 10–15 % of the student population
(Horner et al. 2005; Jimerson et al. 2007). Tier 2 is designed to intervene early and
prevent minor academic and behavioral problems from getting worse and allow
students to meet grade-level expectations.
For behavior, Tier 2 is a collection of efficient interventions for groups of stu-
dents with common needs, though these interventions may not necessarily be pro-
vided in a group format. Examples of these interventions include daily “check-in,
check-out” programs, in which the student receives regular feedback on his or her
performance of the expectations (Crone et al. 2010), social skills groups, and mentoring
by adults or peers. Tier 2 is designed to be efficient (i.e., quick access to support),
effective, and early (i.e., provided at the first signs of difficulty) (Hawken et al.
2009). Tier 2 interventions are similar between elementary and secondary schools,
but the interventions are tailored to meet the needs and developmental levels of their
respective students (see Crone et al. 2010 and Greenwood et al. 2008).
Tier 2 for academics provides students more time to practice the skills taught dur-
ing the core instruction and occurs outside of the time designated for Tier 1 (Vaughn
et al. 2007). At the elementary level, Tier 2 is usually provided in 30-minute time
blocks 3–5 days each week and in groups of 6–8 students (Abbott et al. 2008; Brown-
Chidsey and Steege 2010; Greenwood et al. 2008; Vaughn et al. 2007). At the sec-
ondary level, students are provided instruction in groups of 3–6 students (Griffin
and Hatterdorf 2010) or up to 10–15 students for 40–50 minutes each day (Pyle
and Vaughn 2010; Vaughn et al. 2010).¹ Generally speaking, Tier 2 lasts an average of
10–12 weeks and is capped at 20 weeks because most students do not achieve much
benefit after that time frame (Vaughn et al. 2007; Vaughn et al. 2012). If Tier 2 does
not result in success, more individualized, intensive supports at Tier 3 are provided.
Tier 3 Supports at Tier 3 are highly individualized, based on additional assessment,
and more intensive for students not successful with the combination of Tier 1 and
Tier 2 supports. An estimated 3–5 % of the student population requires this level of support
(Horner et al. 2005; Jimerson et al. 2007).
For behavior support at Tier 3, students receive function-based support that is
individually tailored to their needs and sufficiently comprehensive to teach replace-
ment behaviors (Scott et al. 2009). A functional behavior assessment (FBA) is con-
ducted to identify that support. An FBA is a process in which a behavior of concern
is defined in observable and measurable terms, the triggering and maintaining fac-
tors of the behavior of concern are described, and data are collected to verify the
hypothesis of why the behavior is occurring (Crone and Horner 2003). Following
the FBA, a behavior intervention plan (BIP) is designed to make the behavior of
concern less likely to occur and to teach the student more appropriate behaviors that
serve the same function (Scott et al. 2009). Tier 3 support for behavior is similar
between elementary and secondary schools, as both levels use the FBA and BIP
processes. Examples of Tier 3 are hard to summarize because they are individually
designed, but students may receive higher rates of reinforcement compared to Tier
2 as well as response-cost systems. Tier 3 plans can also include wraparound
services and intense counseling sessions (Crone and Horner 2003; Netzel
and Eber 2003).

¹ Although 10–15 students may all have an intervention block within the same classroom,
instruction is still differentiated and students can receive small-group instruction in groups of
6–8. The interventionist can split the block of time so that small-group instruction occurs for
one group while another group simultaneously works independently (Burns 2008; Vaughn et al. 2010).
For academics, Tier 3 is made more intensive primarily by adjusting four factors: the
size of the group (i.e., number of students), the frequency (i.e., number of days of
intervention per week), the duration (i.e., both the number of minutes of interven-
tion time and the number of weeks of the intervention), and the person facilitating
the group. Specifically, students are in groups of fewer than 4 (Denton et al. 2007),
receive Tier 3 daily (5 times/week) for 45 to 60 minutes at a time (Abbott et al.
2008; Vaughn et al. 2007), and receive instruction longer than the 8–12 weeks des-
ignated by Tier 2 (Griffiths et al. 2006; Vaughn et al. 2003). Additionally, the person
facilitating Tier 3 is typically a more experienced teacher; for example, a regular
educator teaches Tier 2 and a special educator or reading specialist teaches Tier 3
(Harn et al. 2007). Relative to what is already in place at Tier 2, school sites provide
Tier 3 with more intervention time, greater frequency and duration, and smaller
groups.
An important part of Tier 3 supports includes strategies to maximize impact of
core instructional time. It is critical that students who receive Tier 3 supports do not
miss essential content or grade-level skills, and that they benefit from the time spent
in core instruction (Harn et al. 2007). A combination of whole-group, small-group,
and individualized support may be required to provide a student meaningful access
to Tier 1 (Walpole and McKenna 2007). Such coordination of supports for students
with intensive needs prevents the creation of new skill gaps. Tier 3 may “fill in” a
student’s missing skills, but removing the student from Tier 1 for Tier 3 remediation
would only create new skill gaps (Harn et al. 2007). When providing Tier 3
services, a balance between core instruction (Tier 1) and remediation is necessary.
instruction, curriculum, educational environment, and the learner’s skills and attributes
(ICEL) using review of records, interview, observation, and tests (RIOT) to under-
stand the learning environment and contributing causes of the student’s level of
performance. This is referred to as the RIOT/ICEL framework, and we discuss this
in detail in Chapter 5.
Formative Assessment After schools answer “Who is at-risk?” and “Why are they
at-risk?” an instructional plan is developed and formative assessment is used to
answer the question “Is the instructional plan working?” (Hosp 2008). Formative
assessment is a process in which teachers gather and use data to guide their decision
making around teaching and learning. It is a cyclical process in which adjustments
are made based on the data gathered to ensure learning is taking place. Formative
assessment for academics can be various assessments and sources of data, including
(a) formal tests, such as CBM or end-of-unit examinations, or (b) minute-to-minute
instructional interactions and strategies, such as questioning, use of “exit tickets,”
or verifying that students understand content with simple verbal responses (Black
and Wiliam 1998; Marzano 2010). For behavior, formative assessment may include
behavioral observations or daily behavior point cards (Crone and Horner 2003;
Crone et al. 2010).
In MTSS, a type of formative assessment called progress monitoring is used to
verify whether students are making growth toward academic and behavioral stan-
dards or goals. Although teachers use a variety of assessment practices to measure
student progress and to guide instruction (Marzano 2010), MTSS requires that
decisions be made about the level of support a student is receiving within the entire
model (e.g., does the student need Tier 3 to make adequate progress?). Assessment must
help answer whether or not the current level of support is moving the student toward
mastery of global and/or specific outcomes in a subject or content area. Decisions
about instruction are not limited to whether or not a student needs a few additional
minutes of review (which is what an end-of-unit test may suggest) or if they are
able to infer meaning from a text (which is what a teacher question-and-answer
exchange may reveal). Instead, decisions also are made about whether or not a stu-
dent requires more intensive supports (i.e., does the current level of support meet
the student’s needs?). To make valid decisions, the progress monitoring tools must
meet the following criteria: (a) be brief and efficient, (b) measure general outcomes
or discrete skills, (c) have strong psychometric properties, and (d) be sensitive to
growth over time (Hosp et al. 2006; Hosp 2008).
CBM is most often used to measure basic academic skill growth in MTSS. For
behavior, schools often use office discipline referrals, direct observations of behav-
ior, and daily ratings of behavior to monitor progress (Horner et al. 2005). For high-
stakes decisions, the measures used should have strong technical properties. Schools
will have to balance the properties of the measures used with the importance of the
decision being made when making changes within the MTSS framework.
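To make the progress monitoring logic concrete, the following sketch compares a student's weekly CBM scores against an aimline drawn from a baseline score to a goal. This is our illustration, not a procedure from the cited sources: the function names, the numbers, and the "three consecutive points below the aimline" rule are hypothetical conventions that schools set locally.

```python
def aimline(baseline, goal, total_weeks):
    """Expected score each week if the student is on track to reach the goal."""
    weekly_growth = (goal - baseline) / total_weeks
    return [baseline + weekly_growth * week for week in range(total_weeks + 1)]

def on_track(scores, baseline, goal, total_weeks, consecutive=3):
    """Return False when the last `consecutive` scores all fall below the
    aimline, signaling that the current level of support may need to change."""
    expected = aimline(baseline, goal, total_weeks)
    recent = list(zip(scores, expected))[-consecutive:]
    return not all(score < target for score, target in recent)

# Hypothetical data: words correct per minute, weeks 0-6 of a 12-week goal period.
print(on_track([30, 32, 31, 33, 34, 34, 35], baseline=30, goal=70, total_weeks=12))
# → False: three consecutive scores below the aimline
```

The specific decision rule is only one convention; what matters, per the criteria above, is that the measure is brief, psychometrically sound, and sensitive to growth so that a rule like this yields valid decisions.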
Fidelity Measuring fidelity ensures that the intervention or instructional plan is
being implemented as intended. When measuring the effectiveness of an intervention
(i.e., progress monitoring), schools must also measure fidelity,
also known as treatment integrity (Wilkinson 2006). This evaluation is critical
because it affects the logic behind decisions about student growth.
To illustrate using an analogy, imagine you set a goal to lose weight. You plan to
lose 4 lbs in 2 weeks, and your plan is to exercise three times each week and avoid
eating dessert during dinner. After 2 weeks, you weigh yourself and discover you
have only lost 2 lbs. It would be easy to blame the diet for the insufficient weight
loss, but without information about fidelity (i.e., if you actually followed your diet
as intended), you cannot blame the diet yet. Imagine you measured your adherence
to your diet and realized that you exercised only twice each week and ate dessert
three times each week. Although that chocolate cake is delicious, you did not follow
your plan and you have to improve fidelity and redo your diet. You oblige with a
focus on sticking to your diet, and after two more weeks of following your plan, you
discover that you still lost only 2 lbs and have not reached your goal of 4 lbs in
2 weeks. At this point, you can safely conclude that the diet is not intense enough
to reach your goal. You followed the diet as planned but did not reach your goal, so
now it is time for a more intensive diet.
Measuring fidelity (or not measuring it) leads us down two different paths of
decision making (see Fig. 3.3). When a goal has not been met, we must
first decide whether fidelity was good, that is, whether we followed the plan as intended. If
yes (the left side of Fig. 3.3), then it makes sense that we conclude the diet did not
work. We did not meet our goal, yet we followed the diet perfectly; therefore, the
diet must not have been intense enough (we can blame the diet because we actually
followed it).
However, there will be times when our implementation of a diet is poor and
times when we have not tracked our behavior. As a result, we cannot truly make any
conclusions about the diet (right side of Fig. 3.3). It would be tempting to look at the
results (i.e., that the goal was not met) and to jump to the decision to try a new diet
(in Fig. 3.3, starting at the right side of the diagram and jumping to the left side pre-
maturely). However, without information on fidelity, it is not logical to make that
decision. If fidelity is poor, or if fidelity data are not available, then the only logical
decision pathway that can be followed is to improve fidelity and try again (the right
side of the diagram in Fig. 3.3). Measuring fidelity ensures that (a) we did what we
had planned and (b) we make logical decisions considering all important informa-
tion (Wolery 2011). Table 3.3 provides a summary of the differences between the
tiers for instruction and assessment.
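The two decision pathways reduce to a small piece of logic. The sketch below is our own summary of the pathway described for Fig. 3.3; the function name and the returned labels are illustrative, not terms from the source.

```python
def next_step(goal_met, fidelity_ok):
    """Decision pathway after an intervention period (a summary of Fig. 3.3).
    fidelity_ok: True if the plan was followed as intended, False if it was not,
    None if fidelity was never measured."""
    if goal_met:
        return "continue or fade the current plan"
    if fidelity_ok:  # Goal unmet, but the plan was followed: blame the plan.
        return "intensify or change the intervention"
    # Goal unmet with poor or unknown fidelity: the plan cannot be judged yet.
    return "improve fidelity and implement the same plan again"

print(next_step(goal_met=False, fidelity_ok=None))
# → improve fidelity and implement the same plan again
```

Note that poor fidelity and unmeasured fidelity land on the same branch: in both cases the only logical move is to improve fidelity and try the same plan again.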
3.6 The PSM
The third and final component of MTSS is the use of the PSM (Good et al. 2002;
Reschly 2008; Shinn 2008; Tilly 2008). The PSM is a four-phase heuristic used
to define and solve problems. The four phases are: (a) Problem Identification, (b)
Problem Analysis, (c) Plan Implementation, and (d) Plan Evaluation (see Fig. 3.4).
(Although some authors refer to the phases of the PSM as “steps,” the word “step”
is used when describing the specific actions within the CBE Process later in this
book.)
As educators, we have all heard concerns about students. The student can’t read!
He won’t write anything! She won’t sit still! To avoid vague and ill-defined prob-
lems, the first phase of the PSM, Problem Identification, asks “What is the prob-
lem?” Problems are described in observable and measurable terms and are defined
as the gap between the expected behavior and observed behavior. As a result, “He
can’t read” becomes “He is reading 30 words correctly per minute out of a third-
grade passage and he should be reading at least 70 words per minute in that pas-
sage.” “She won’t sit still” becomes “She stays in her seat 20 % of the class time
and she should be in her seat at least 80 % of the time.” Defining problems with
quantifiable terms allows two things: (a) a goal can be set and progress toward
that goal easily discerned, and (b) the magnitude of the problem is quantified,
allowing educators to determine if the problem is severe enough to warrant
further investigation.
Once a problem is defined and determined to be severe enough to warrant action,
Phase 2, Problem Analysis, is conducted. The question “Why is the problem occur-
ring?” is asked, and during this phase of the PSM, diagnostic assessments are used
to answer that question (Shinn 2008; Tilly 2008). As discussed in the assessment
section previously, schools assess the skills the student has and does not have, as
well as the instruction, curriculum, and environmental factors (i.e., alterable vari-
ables) that contribute to the problem. The goal is to identify alterable variables
that can be manipulated to improve student achievement/outcomes. Although we
do not want to discount the effect learner variables can have on learning, the goal of
problem analysis is to identify variables that can be adjusted. Unalterable, uncon-
trollable variables are assessed to the extent that they can influence the treatment
plan, such as knowing that a student has a vision disability that requires enlarged font
(Christ 2008).
The third phase of the PSM, Plan Implementation, involves designing and im-
plementing an instructional plan based on the findings from the problem analysis
phase. The question “What can be done about the problem?” is posed. During this
phase, a plan to correct the problem is developed with a focus on matching an ap-
propriate intervention to the needs of the student. Additionally, goals are set, and
strategies for measuring progress toward those goals and for measuring fidelity of
plan implementation are determined.
Fig. 3.5 Complete MTSS model illustrating systems-level problem solving. Note: Monitoring and
assessment can vary between school sites
Problem analysis is conducted to understand why the problem exists, and for
systems, these reasons can be extensive. Lack of alignment of the curriculum be-
tween grades, use of a non–research-based program, and an uncoordinated system
for additional support are just a few. During this phase, schools analyze the assess-
ments they use, the type and quality of instruction, the curriculum and standards,
and the environment within entire grade levels (Gibbons and Silberglitt 2008; New-
mann et al. 2001; White et al. 2012). The focus is on how things look as a whole
(Gibbons and Silberglitt 2008; Tilly 2008).
Plan implementation still involves identifying a plan of action, but with systems-
level problem solving, that plan can be comprehensive and complex. Professional
development, purchase of and training on a new program, alignment of the
curriculum and learning objectives, and use of new assessments may all be a part
of the school’s plan (e.g., Griffin and Hatterdorf 2010; Griffiths et al. 2006; White
et al. 2012). Ongoing evaluation of the outcomes and fidelity of the plan is still
a part of this phase of the PSM. Measuring outcomes can be done through
benchmarking (which occurs 3–4 times/year), examining scores on district interim
assessments, and considering the results of summative assessments. Fidelity is mea-
sured through either directly observing instruction or through self-report of those
implementing the instruction (Greenwood et al. 2008). The phases of the PSM
are followed, and just as with individual problem solving, if the goal is not met,
the team cycles through the phases again until the goal is met. The entire process
of systems-level problem solving is illustrated in Fig. 3.5.
3.7 Four Elements of MTSS
Related to systems-level problem solving and schoolwide reform are four key
elements. MTSS is about creating sustainable systems change, and a focus on four
key elements supports such change: (a) outcomes, (b) practices, (c) data, and
(d) systems. These four elements were originally described in relation to PBIS
(Sugai and Horner 2006).
According to Sugai and Horner (2006), schools first identify relevant social and
behavioral goals that they want to accomplish with the adoption of MTSS. These
outcomes must be relevant, data-based, measurable, and valued by staff, parents,
and students. Next, practices that are evidence-based, doable, and practical are iden-
tified to accomplish desired outcomes. Third, schools identify what data they will
use to evaluate the impact of their interventions and instruction. The data should
be efficient to collect, easily accessible, and examined at regular intervals. Finally,
the school adopts or creates systems that support the sustained use of MTSS. Such
systems include allocating funding, identifying personnel and positions for MTSS,
and gathering district or political support.
These elements are not separate entities, as all four interact (see Fig. 3.6). For
example, data are used to define the outcomes and to monitor the impact of the ad-
opted practices. Also, the adjustment of one system may impact another system, and
the designation of desired outcomes certainly influences what practices are used.
As schools adopt MTSS, a focus on these four elements is important for sustainability.
Next, we share the developmental nature of systems-change before discussing how
curriculum-based evaluation fits in MTSS.
The process of implementing MTSS is not a quick task for schools. It takes on aver-
age 3–5 years to implement, with larger systems taking anywhere from 6 to 10 years
(Fixsen et al. 2007; NASDSE 2005). MTSS is both a shift in practices that may
require new skills for some educators and a conceptual shift in how problems are
identified and addressed (Ikeda et al. 2002). Because of these shifts, it is important
for schools to view MTSS as a development process that takes time. The National
Association of State Directors of Special Education has provided blueprints to assist
schools in their implementation (NASDSE 2008a, 2008b). They define three phases
of implementation:
1. Consensus building: Schools build knowledge of what MTSS is, the concepts
behind it, and why the model is being used, taught, and discussed.
a. Key question: Why should we implement MTSS?
2. Infrastructure development: Schools examine the components that currently are
in place and those that need to be implemented. The model and necessary steps
for implementation are outlined.
a. Key question: What should our model look like?
3. Implementation: The main focus during this phase is full implementation of the
model. Schools refine their model, build sustainability, and create a new culture
of MTSS as “business as usual.”
a. Key question: Is the model working well and are we getting the results we
want?
MTSS is a contextual model that requires buy-in and leadership for success
(Harlacher and Siler 2011). MTSS is not as simple as just putting these three com-
ponents in place. It requires patience, coordination, and understanding for schools
to be successful (Ikeda et al. 2002; NASDSE 2008a).
MTSS and its accompanying principles and features provide the setting and frame-
work for use of CBE. In MTSS, assessment and instruction are intricately connected:
assessment data drive instruction and ensure that what is taught is effective.
Schools need different types of assessments generating a variety of data
for the range of decisions made in the PSM. CBE is one such method. Describing
MTSS in detail helps illustrate that CBE is part of a culture in schools in which
assessment and instruction are considered equally important to problem solving
(see Fig. 3.2). In MTSS, schools move away from an exclusive focus on intrachild
variables and labels for services and instead focus on problem solving, instructional
variables, and low-inference assessments (low-inference means that the results of
the assessment are very close to what is measured and rely on explicit behaviors,
as opposed to high-inference assessments, which rely on assumptions between the
data and the interpretation of the data and on theoretical constructs; Christ 2008). A
shift toward problem solving and a school system that embraces collaboration and
use of clearly defined effective practices (Reschly 2008) will result in the need for
problem-solving assessments such as CBE.
3.10 Summary and Key Points
MTSS is a schoolwide model of service delivery. It has distinct principles that focus
on prevention, use of effective practices and data, and a belief that all students
can learn to grade level with the right amount of support. The three components of
MTSS are: (a) multiple and increasingly intensive tiers of instruction, (b) a compre-
hensive assessment system, and (c) use of the PSM. The implementation of MTSS
is a developmental process that involves three distinct stages, and there are four key
elements that MTSS embodies in order to create sustainability. What makes MTSS
unique is its focus on systems-level problem solving and its fluid, contextual nature.
Key Points
• MTSS is a model of service delivery that matches an appropriate level of
support to the needs of students as determined with data.
• MTSS is based on the belief that all students can learn, given the right level
of support.
• MTSS is unique because of the focus on systems-level problem solving,
use of data to identify students for additional support, and its fluid, context-
ual nature.
• MTSS has three distinct components: instruction, assessment, and the pro-
blem-solving model.
• MTSS uses four types of assessments (screening, diagnostic, formative,
fidelity) to match student needs with instruction.
Chapter 4
What is Curriculum-Based Evaluation?
4.1 Chapter Preview
4.2 Definition of CBE
Before discussing the assumptions behind CBE, the differences between curriculum-
based assessment (CBA), CBM, and CBE are presented. CBA is an umbrella term that
describes the use of assessments to measure where the student is in relation to the
curriculum they are being taught. This can range from administering an end-of-unit
quiz or a mathematics fact quiz to analyzing the student’s homework and comparing
the results to objectives in the curriculum (Howell and Nolet 2000).
Underneath the CBA umbrella is CBM, which is a standardized method of
assessment designed to provide educators information on a student’s level of per-
formance. These general outcome measures are brief, efficient, and technically
adequate (i.e., reliable and valid) measures that assess basic skills (Deno 2003).
When multiple CBM probes are administered over time, the resulting data can be
used to monitor student growth and in turn, make decisions about the effectiveness
of instruction. CBM is currently used to measure basic skills in the areas of reading,
writing, mathematics, and spelling (Hosp et al. 2006).
The CBE Process uses CBM, CBA, and decision rules to pinpoint skills that
require additional instruction (Howell et al. 2008; Howell and Nolet 2000).
4.3 Assumptions Behind CBE
There are four assumptions about learning and the information required for instruc-
tional decisions underlying the CBE Process (Howell and Nolet 2000).
The first assumption behind the CBE Process is about how problems are concep-
tualized. Problems are defined as the gap between the expected behavior and the
observed behavior using observable and measurable terms (see Fig. 4.1). (We
use the formula P = E − O, where P = problem, E = expected behavior, and O = ob-
served behavior.) This achieves several things. First, it makes the skill or behav-
ior concrete and clear to everyone. For example, “trouble in reading” becomes
“accurate but slow reading” or “difficulty decoding multisyllabic words.” Sec-
ond, it allows us to quantify the severity of the problem. For example, we may
say a child has a problem in reading. From there, we define it in observable and
measurable terms, such as “The student is able to read 18 words correctly in one
minute out of a grade-level passage, and he should be able to read at least 34
correctly in one minute out of a grade-level passage.” Describing the problem in
these terms quantifies the severity of the problem and allows
educators to determine if the originally identified problem is severe enough to
actually be considered a problem (Shinn 2008). In this example, the gap of the
problem is 16 words correctly per minute and we can implement an intervention
to close that gap.
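Using the hypothetical numbers above, the P = E − O formula is simple enough to compute directly. This is a sketch of our own; the function name is illustrative.

```python
def problem_magnitude(expected, observed):
    """P = E - O: the gap between expected and observed performance."""
    return expected - observed

# Expected: at least 34 words correct per minute; observed: 18.
print(problem_magnitude(expected=34, observed=18))  # → 16
```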
Quantifying a problem in this manner makes goal setting an easy and straight-
forward task. If a student needs to meet an expectation, the goal is set to reach
that expectation (Shapiro 2008). Finally, if the problem is defined as the gap be-
tween the expected and observed performance for a specific skill, we know what
the intervention will target. Consider these two examples. If a teacher states that
a child struggles with reading comprehension, we have little information about
the magnitude or type of problem. Goal setting, intervention development, and
evaluation would be difficult. If, however, the problem is defined as “Difficulty
identifying the main character of a story and two details about the character,”
conceptualization of the problem is much more manageable. In addition, goal set-
ting (i.e., identify the character and two details) and intervention development are
straightforward.
There are many theories about how children learn. Some subscribe to a social/Vy-
gotskian theory, which suggests that students need social interaction and discussion to
develop skills, or to Kohlberg’s moral development theory or Piaget’s cognitive theory,
which suggest that direct interaction or conflict with problems allows for development.
Others believe in observational learning or the importance of a supportive, nurtur-
ing environment to support growth and development (i.e., family systems theory). It
is important to acknowledge that learning is viewed in different ways. When engag-
ing in problem solving in a school setting, however, learning is considered to be an
interaction between (a) the learner, (b) the academic and behavioral curriculum, and
(c) the instructional environment.
Learning is not the sole result of one variable in a student’s life; instead, it is
the interaction among these three components that influences the extent to which a
student becomes proficient with content (see Fig. 4.2). The learner variable refers
to the student, and includes the skills and background knowledge that the student
has acquired as well as preexisting conditions or experiences the student brings
with him or her. The academic and behavioral curriculum variable refers to the
50 4 What is Curriculum-Based Evaluation?
learning objectives for the current year and their vertical alignment over time. The
instructional environment variable incorporates the setting and instructional strate-
gies. Size of classroom, use of contingent reinforcement, and pacing of instruction
are just a few variables that fall under instructional environment. Table 4.1 lists
several examples that fall under each variable.
Students are not passive recipients of the environment (Christ 2008). They are
able to influence their surroundings, which in turn influences the reactions and in-
teractions others have with them (Braden and Shaw 2009; McDonald Connor et al.
2009; McIntosh et al. 2008). Consider an example of a student who comes from a
stable, two-parent home. Since she was little, this particular student was encour-
aged to explore her curiosity and was provided with numerous toys that supported
the development of mathematical and reasoning skills. She enters middle school
already knowing advanced algebra. As a result of her background knowledge, the
teacher provides more difficult material (a curriculum variable) and affords the stu-
4.3 Assumptions Behind CBE 51
dent more one-on-one attention. Work that is taken home is met with care and atten-
tion by her parents to answer questions that come up. The teacher enjoys seeing the
student progress, so more time is afforded to the student (an environment variable).
As the student is exposed to more curriculum and instruction, she gains more skills
relative to other students. The reciprocal interaction among the three parts results in
higher levels of learning and more positive outcomes for the student. This example
illustrates how many variables interact to influence student learning.
By conceptualizing learning as an interaction between these three variables, a
more comprehensive and accurate view of learning is achieved. Educators are em-
powered to have control over variables that are alterable and more likely to influ-
ence learning.
There are many reasons why students have difficulty in learning. An assumption
behind CBE is that when necessary background knowledge is missing, students
will struggle to perform a task (Braden and Shaw 2009; McDonald Connor et al.
2009). Background knowledge is all the knowledge a student has accumulated; in
other words, it is what the student already knows (Howell and Nolet 2000; Coyne
et al. 2010).
Read the following paragraph:
A newspaper is better than a magazine. A seashore is a better place than the street. At first
it is better to run than to walk. You may have to try several times. It takes some skill but is
easy to learn. Even young children can enjoy it. Once successful, complications are mini-
mal. Birds seldom get too close. Rain, however, soaks in very fast. Too many people doing
the same thing can also cause problems. One needs lots of room. If there are no complica-
tions, it can be very peaceful. A rock will serve as an anchor. If things break loose from it,
however, you will not get a second chance (Bolt 2011).
Without background knowledge, the paragraph does not make much sense. If you
were asked to recall information from that paragraph later, you likely would remem-
ber very little of it. If your background knowledge is activated before reading that
paragraph with the statement “the context is kite flying,” your connection with the
paragraph would be dramatically different. Reread the paragraph to see how
your preexisting knowledge about kites changes its meaning. By linking the para-
graph to background knowledge, your recall at a later time will be higher (Howell
et al. 2008).
If information can be linked to background knowledge, learning is more likely
to occur. If the information is not connected to background knowledge, new skills
become disjointed bits of information and learning decreases (Hattie 2009; Howell
and Nolet 2000). In fact, McDonald Connor et al. (2009) conducted a randomized,
controlled study and found that changing the focus and amount of instruction pro-
vided to students based on their background knowledge led to greater improvements
in first graders' reading comprehension compared with students who did not receive adjusted instruction. They also discovered that the more closely the teachers followed the
instructional recommendations provided to them, the more progress students made
in reading. Their results provide evidence for an interaction effect between a child’s
background knowledge and the type of instruction the student receives.
CBE is used to identify what background knowledge students are missing; if
the child cannot perform a task, it is because they do not have the knowledge to
do so. This assumption is low inference, meaning that the jump between the results
of the assessment and the conclusion is very small (see Inset 4.1). If a student's
performance on a reading assignment does not meet expectations, the conclusion is
simply that they have not mastered the content being measured. With CBE, low-
inference assessments are used and low-inference conclusions are made. This keeps
conclusions practical and avoids global, rigid conclusions about the child’s abilities
or “potential” (Christ 2008). Focusing on background knowledge ensures a focus
on what teachers can teach (cf. Howell and Nolet 2000). When assumptions about
learning shift to something outside of the school’s purview, instructional utility is
lost. After the missing background knowledge and skills are identified, the focus
stays on alterable variables, which is the next assumption.
After identifying the skills to target during instruction, CBE focuses on variables
that can be altered and manipulated. Alterable variables are ones that an educa-
tor can directly influence, such as the pacing of a lesson, the size of the group,
and the amount of instructional time for a particular subject. Alterable variables
are in contrast to unalterable variables, which are outside the educator's control (Braden and Shaw 2009;
McDonald Connor et al. 2009). A student’s family structure, the presence of a
disability, birth order, gender, and race are just a few examples of unalterable
variables (see Table 4.2).
By manipulating alterable variables, educators can take charge of student learning. A focus on unalterable variables results in wasting valuable educator time admiring the problem. CBE is a tool that allows educators to change learning trajectories.
Each assessment should have a purpose or a question attached to it, and that
question should generate an answer that directly informs teaching.
Summary of Assumptions Behind CBE To summarize, CBE is an assessment
method that focuses on making practical recommendations that are directly rele-
vant to instruction and contribute to problem solving. Problems are defined as
the gap between expected performance and observed performance. When this
gap is quantified, it provides a goal for instruction. Learning is considered an
interaction among three variables: learner, curriculum, and environment. Focu-
sing on alterable variables can lead to practical recommendations that improve
student learning. Understanding difficulty learning as the result of missing back-
ground knowledge empowers teachers and prevents placing a limit on student
learning.
When conducting CBE, the evaluator keeps in mind two conceptual frameworks:
(a) the RIOT and ICEL framework, which provides the backdrop of the CBE Pro-
cess and (b) the IH, which organizes recommendations within a skill-acquisition
hierarchy.
4.5.1 RIOT/ICEL
RIOT and ICEL are acronyms for types of assessments and instructional domains to
analyze in problem solving. The RIOT and ICEL matrix is an organization frame-
work that guides a thorough problem analysis (Christ 2008; Howell and Nolet 2000).
Instruction is the “how”; how new skills are taught and reinforced for the student.
Curriculum is the “what”; what is being taught. Environment is the “where”; where
the instruction takes place. Finally, learner is the “who”; who is interacting with the
instruction, curriculum, and environment. Review refers to examining existing data
such as permanent products, attendance records, and lesson plans. Interviews can be
structured, semistructured, or unstructured and are methods of assessment that involve
question–answer formats. Observations involve directly observing the interaction be-
tween the instruction, curriculum, environment, and learner. Finally, testing refers to
the administration of tests (both formal and informal) to obtain information about the
ICEL.
Using the RIOT and ICEL framework ensures consideration of all variables con-
tributing to a problem. Table 4.3 displays examples of areas assessed and assess-
ments used for problem analysis in the RIOT and ICEL framework. This framework
captures the multifaceted nature of learning, which is the interaction among the
learner, curriculum, and the instructional environment (see Fig. 4.2).
Table 4.3 RIOT and ICEL matrix with sources of information and examples of variables to assess. (Adapted from Christ 2008 and Howell and Nolet 2000)

Instruction
• Review: permanent products to determine prior strategies used and their effects; lesson plans to determine instructional demands (e.g., difficulty, differentiation, response type, etc.)
• Interview: teacher regarding use/intended use of instructional strategies, focus of instruction, and use of reinforcement; peers and student for perception of pace, activities, engagement, etc.
• Observe: direct observation, anecdotal observations, and checklists to determine effective teaching practices and factors (e.g., pacing, grouping, and explicitness); direct observation of antecedents and consequences of behaviors; observation of teacher expectations and demands of tasks/activities
• Test: administer scales or checklists that measure effective instructional practices; manipulate instructional variables to see the effect on the student's performance (e.g., repeated practice, increase in opportunities to respond, etc.)

Curriculum
• Review: lesson plans/learning objectives relative to the student's skills; permanent products to determine instructional demands in the curriculum (e.g., scope and sequence, prerequisite skills, massed vs. distributed practice, amount of review, etc.)
• Interview: teachers, LEA for expectations of pacing and coverage of the curriculum; teacher(s), LEA for philosophical orientation of the curriculum (e.g., direct instruction, whole language, phonics-based, etc.); teacher(s) for organization, clarity, content, and difficulty of the curriculum
• Observe: permanent products and plans to determine alignment of assignments with objectives; clarity of learning objectives; alignment of objectives/content between classrooms, settings, grades, etc.
• Test: determine the readability of texts used; manipulate the difficulty of material or the manner in which it is presented to see the effect on student performance

Environment
• Review: lesson plans to determine behavioral expectations taught; school rules and policies to assess climate, rules, and routines; seating charts
• Interview: teacher to describe behavioral expectations, rules, and routines (i.e., are rules situationally and developmentally appropriate and clear?); student, peers to describe climate, rules, routines, etc.
• Observe: interactions among peers and the climate of the classroom and school; physical environment (seating, lighting, noise, etc.)
• Test: administer classroom environment scales; compare the student's performance on assessments in different settings (e.g., task demands, use of reinforcement, different distractions or seating, etc.)

Learner
• Review: permanent products, gradebook to compare performance to peers; cumulative records to assess previous history, health history, attendance, and district testing results; permanent products for previous response to instruction and change in skills
• Interview: student to understand perception of skills and difficulties; teachers, personnel for perception of the problem (intensity, significance, nature, etc.) and for experience and observations from working with the student
• Observe: direct observation of engagement and target behaviors or skills; ability to complete tasks/activities; nature and dimensions of behavior (frequency, duration, intensity, latency)
• Test: administer a variety of assessments to determine the student's performance/skills; use functional behavior assessment or functional analysis; use self-report (checklists, rating scales, inventories, etc.)
(Table continued: the generalization and adaptation stages of the IH)

Generalization
• Definition: is proficient with the skill, but performance is limited to very few contexts (often only the instructional setting)
• Reading example: reads at a rate similar to peers, but the rate changes when presented with materials different from the instructional materials
• Exit goal: can perform the skill across multiple settings and discriminates accurately when to use the skill
• Student performance: accurate and fluent in responding; does not perform the skill well in new settings or ones different from the instructional setting
• Instructional focus: model, practice, and reinforce across different settings and contexts

Adaptation
• Definition: can perform the skill in new contexts, but does not modify the skill in certain situations
• Reading example: can read a variety of materials, but is unable to apply phonics to foreign words
• Exit goal: (stage does not discontinue, so there is no exit goal)
• Student performance: accurate and fluent with the skill; can perform in novel settings; does not modify the skill to adapt to situations
• Instructional focus: provide opportunities for adaptation; identify "big ideas" of the skill(s)
To illustrate the stages of learning, recall the first time you drove a car. In the ac-
quisition stage, you performed turns too wide or narrow and accelerated, stopped,
and shifted gears in a jerky manner. With corrective feedback from your instructor,
your turns became more accurate and driving was smoother. With practice, your
driving in the family car became more fluent. Generalization occurred when you
drove different cars, and adaptation was achieved when you switched to a car with
an automatic transmission or drove a big moving truck. This example illustrates the
progression in skills and accompanying change in the focus of instruction. Each
stage of the IH has very specific instructional recommendations (Daly and Martens
1994; Intervention Central, n.d.). With CBE, the specific skill(s) that need to be
taught are identified, and the location in the IH where the skill currently is per-
formed is pinpointed, leading to clearer instructional recommendations (Daly et al.
1996; Daly and Martens 1994).
Providing teachers with clear indications of how developed a skill is can lead to
more practical recommendations. For example, Daly and Martens (1994) examined
three types of instructional recommendations with four students identified with
learning disabilities (average age approximately 11 years). They found
that instruction focused on the first three stages of the IH yielded the greatest im-
provement in students’ oral reading. This was moderated, however, by the student’s
skill level. Those students who were reading aloud with at least 80 % accuracy, and were therefore considered to be in the fluency or generalization stage, benefitted the most
from instruction. Those reading with less than 80 % accuracy (i.e., in the acquisition
stage) did not experience as much growth as the aforementioned students. This pro-
vides evidence of a “skill level by treatment” interaction in which matching the type
of instruction to the student’s stage within the IH results in higher reading achieve-
ment. By providing information to teachers on the developmental nature of a stu-
dent’s skill and considering the instruction recommended by the stages of the IH,
teachers are provided more beneficial information for differentiating instruction.
The development of reading is not an easy or a natural process. While the develop-
ment of speech is largely achieved through exposure to others who are speaking,
reading does not develop simply from looking at books. It requires direct teaching,
practice, and feedback. Louisa Moats summarizes the complexity of reading by say-
ing “reading IS rocket science” (Moats 1999).
The intention of this section is not to describe in excessive detail how reading skills develop or to discuss the structures of the brain involved in reading development.

4.6 Big Five Areas of Reading
Fluency is the shortened term used to refer to fluency with connected text. Fluency is not simply reading fast, but reading connected text with accuracy, an appropriate rate, and prosody (Kuhn and Stahl 2003; Therrien et al. 2012). Fluency incorporates understanding the sounds in speech, matching sounds to symbols, and developing automaticity with reading.
Reading is a complex process. A student must visually track a string of letters,
break apart that word into individual phonemes, retrieve the sounds that match
those letters, recognize the rules of letter patterns, discriminate irregular word parts,
and put the word back together to read it. It is a process of decoding and then recod-
ing. That process is applied to reading several words in a row that make a sentence.
A student must first acquire the skill to decode and recode words and then build
fluency with that skill. It is at this point that comprehension is enhanced. Although
students build reading comprehension skills as they develop the ability to read,
reading comprehension relies heavily on the student’s ability to define words and
decode fluently (Carnine et al. 2009; Therrien et al. 2012) in combination with ac-
quiring background knowledge. A student’s working memory must not be burdened
with trying to decode words and retrieve meanings of words. They must be able to
decode quickly and efficiently in order to devote their mental resources to under-
standing the meaning of the text and the author’s intentions. If a student is too busy
decoding and arduously working through a word, comprehension will be limited
(Kuhn and Stahl 2003; Musti-Rao et al. 2009).
Comprehension is the ability to glean meaning from a text (NICHHD 2000).
We present comprehension as a skill that rests on four factors: (a) decoding skills
or knowledge of phonics, (b) vocabulary and language, (c) background knowledge
of content, and (d) metacognitive skills or the ability to monitor meaning while
reading (Klinger 2004; NICHHD 2000; Perfetti and Adlof 2012). Decoding skills
are not only one of the four factors contributing to comprehension, but also a pre-
cursory skill for comprehension. A student who lacks the skills to decode text has no opportunity to attempt comprehension. Students "learn to read" in
grades K–3 and then “read to learn” beginning in grade 4 (Carnine et al. 2009; Kuhn
and Stahl 2003; Therrien et al. 2012).
Finally, vocabulary refers to knowledge and awareness of the meanings of words
(NICHHD 2000). Vocabulary is developed in conjunction with the other four skills.
Vocabulary can be taught in conjunction with beginning reading skills. Figure 4.3
illustrates the developmental process of reading.
CBE is a systematic process that educators can use to determine what skills need
to be taught and how to teach them. The recommendations that result from using
CBE are relevant to instruction and directly inform teaching practices. The CBE
Process is based on several assumptions, including the idea that learning is an interaction among the learner, curriculum, and environment, and that it is this interaction that leads to positive student outcomes. It is also assumed that a student's difficulty with a task reflects missing background knowledge or skills.
Key Points
• CBE is a systematic problem-solving assessment process.
• Learning is an interaction among the curriculum, environment, and the
learner; it is the interaction among these variables that results in learning.
• Problems are defined as the gap between what is expected and what is
observed.
• When a problem develops, it is because students lack the background
knowledge or skills required to do the task.
• RIOT is an acronym for types of assessments including review, interview,
observe, and test. ICEL is an acronym for areas to assess and includes ins-
truction, curriculum, environment, and the learner.
• IH has four stages with corresponding instructional recommendations.
• Reading development consists of five skills: phonemic awareness, alpha-
betic principle, fluency with connected text, reading comprehension, and
vocabulary.
• The acquisition of reading requires direct instruction, as it is not a skill that
naturally develops.
Chapter 5
The Curriculum-Based Evaluation Process
5.1 Chapter Preview
You will notice that the four steps of the CBE Process are identical to those of the PSM. Each model involves four steps that cycle until the problem is solved. Both the CBE Process and the PSM can be applied to a single student, but the PSM can also be applied to groups of students or to entire school systems.
5.3 Problem Identification
In the first phase of the CBE Process, the problem is identified using a survey-level
assessment. Survey-level assessment is the process of measuring an array of skills
to identify areas of concern warranting further assessment. The questions to answer
in this step are “What is the problem?” and “Is it severe enough to warrant further
investigation?”
The problem is then defined as the gap between the expected performance and
the observed performance. The severity of the gap is analyzed by conducting a gap
analysis.
A gap analysis is conducted by comparing the student's performance to a given standard. For example, if a student is expected to read 50 words correctly per minute and he reads 30 words correctly per minute, simple division is used to calculate the extent of the gap: dividing the expected performance by the observed performance (50 ÷ 30) shows that the student's gap is roughly 1.67 times the expectation.
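The gap analysis just described can be sketched in a few lines of Python (a hypothetical helper, not from the book; it simply divides expected by observed performance):

```python
def gap_ratio(expected: float, observed: float) -> float:
    """Quantify a performance gap as expected performance divided by observed."""
    return expected / observed

# Example from the text: expected 50 wcpm, observed 30 wcpm
print(round(gap_ratio(50, 30), 2))  # 1.67, i.e., the gap is 1.67 times expectation
```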
5.4 Problem Analysis
The second phase of the CBE Process involves administering specific-level assess-
ments to pinpoint missing background knowledge and identify the alterable vari-
ables needing adjustment during instruction. While survey-level assessment mea-
sures a wide range of skills, specific-level assessment is more targeted. Specific-
level assessment involves a series of tasks, such as analyzing errors made while
reading or asking the student to explain their comprehension of text. During this
phase, a series of tasks are completed by the student to test hypotheses about what
skills are missing. The evaluator asks a question (e.g., is the student lacking certain
decoding skills?) and then administers a task to answer that question (e.g., admin-
isters several reading passages and codes the errors made to determine if there are
decoding deficits). Analysis of the problem is conducted to gain a thorough understanding of why the problem is occurring, increasing the likelihood of a match between the problem and the solution.
5.5 Plan Implementation
The third phase of the CBE Process involves using the data gathered in the first
two phases to design and implement instructional changes in an attempt to correct
the identified problem. In the previous steps, the evaluator has identified the prob-
lem and pinpointed the missing skills. In the Plan Implementation stage, a plan is
designed and implemented to correct the problem. In addition to developing a plan
matched to the student’s need, two other key components at this stage are (a) goal
setting and (b) determining how to measure both the effectiveness of the plan (i.e.,
progress monitoring) and the implementation of the plan (i.e., fidelity monitoring).
5.5.1 Instructional Match
Once a problem has been verified and the factors contributing to it are identified, a
plan is developed (or the student’s current instructional plan is adjusted) to better
match instruction to the student’s specific skill deficits. The Instructional Hierarchy
(IH) is considered at this point, since knowing where a student falls on the IH leads
to a better-matched instructional plan. To illustrate the importance of instructional
match, consider the differences in nutritional and fitness plans for a person who is training for a marathon compared with a person who is trying to lower his cholesterol.
Each would require a different plan to reach his goals.
The runner’s plan might include (a) running several days/week, (b) running a
longer distance 1 day/week, and (c) eating lots of pasta the night before long dis-
tance runs. The person with high cholesterol would have a very different plan that
might include (a) eating oatmeal for breakfast, (b) eliminating fatty foods, and (c)
exercising 30 minutes at least 3 days/week. A person training for a marathon and a
person with high cholesterol both require a plan focusing on exercise and diet, but
have very different needs and thus require very different plans. Similarly, two stu-
dents who require a plan to improve reading may have very different instructional
needs. CBE guides specific instructional recommendations.
Examples of Recommendations from the CBE Process There are a few guidelines
to consider in generating recommendations from CBE results. First, the recommen-
dations should be tied to instruction and include direct instruction from the teacher
on missing skills. Second, recommendations should be practical, meaning they are
tied to controllable variables. For example, a recommendation to provide more time
to practice reading connected text in which the student is at least 93 % accurate is a
practical and controllable recommendation. Comparatively, a recommendation that
the student needs more help with reading is not specific enough. Table 5.2 lists vari-
ous examples and nonexamples of recommendations that come from using CBE.
5.5.2 Goal Writing
Following Problem Analysis, the evaluator can summarize the student’s current
level of performance, quantify the severity of the problem, and write a goal for the
student. A well-written goal requires several components (Shinn 2002a; Yell and
Stecker 2003; see Table 5.3):
1. The name of the student who should perform the behavior;
2. The behavior or skill that the student performs;
3. The conditions under which the skill is to be performed;
4. The criterion at which the skill must be performed; and
5. The time frame by which the skill should be performed.
Together, all of the parts specify with great clarity the expected outcome. The fol-
lowing is an example of a well-written goal:
When given a 3rd grade-level reading probe (conditions), Anthony (name of student) will
read aloud (behavior or skill) 90 words correctly in one minute with at least 95 % accuracy
(criterion), by June 1, 2014 (time frame).
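The five goal components can be composed mechanically. The sketch below (a hypothetical illustration, not part of the CBE materials) assembles Anthony's goal from its parts:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    student: str      # 1. who performs the behavior
    behavior: str     # 2. the behavior or skill
    conditions: str   # 3. conditions under which the skill is performed
    criterion: str    # 4. level at which the skill must be performed
    time_frame: str   # 5. deadline for reaching the criterion

    def sentence(self) -> str:
        """Render the five components as a well-written goal statement."""
        return (f"When given {self.conditions}, {self.student} will "
                f"{self.behavior} {self.criterion}, by {self.time_frame}.")

goal = Goal(
    student="Anthony",
    behavior="read aloud",
    conditions="a 3rd grade-level reading probe",
    criterion="90 words correctly in one minute with at least 95% accuracy",
    time_frame="June 1, 2014",
)
print(goal.sentence())
```

Writing the goal this way makes it obvious when a component is missing, which is the practical point of the checklist above.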
5.5.3 Setting Goals
Goals impact student outcomes, so it is important that goals are both ambitious and reasonable. There are, however, different ways to write goals. Goal setting can be accomplished through consideration of several different
sources of information. Four sources are addressed: (a) benchmark standards, (b)
normative data, (c) rates of growth, and (d) the student’s current level of perfor-
mance.
Benchmark Standards Benchmark standards are predictive of current or
future success and are based on correlational data. For example, students who can
read at least 47 cwpm at the end of first grade with at least 90 % accuracy have
an 80–90 % chance of reaching subsequent literacy benchmarks (and continuing
to meet these benchmarks places the odds in their favor of achieving high-sta-
kes reading goals, such as meeting proficiency on a state assessment; Good and
Kaminski 2011).
Normative Data Normative data provide information about how large groups of
students perform and goals can be set for students using those data. For example,
students in first grade performing at the 50th percentile in the spring (based on
national norms) read approximately 67 cwpm (AIMSweb, n. d.).
Rates of Growth Normative rates of growth tell us how quickly typical students progress. Comparing the target student's rate of growth to that of typical students allows us to see whether the gap will be closed over time. Using rates of growth to
set goals provides perspective on how reasonable and ambitious the goals are.
Tables 5.5–5.7 display rates of growth for different curriculum-based measu-
res, including oral reading fluency (ORF), the MAZE, and early literacy mea-
sures. Within Tables 5.5 and 5.6, the rates listed in the normative column and
the rates of growth are calculated by examining students performing at the 50th
percentile.
A reasonable goal is a goal based on the normative rates listed in the tables to
close a student’s performance gap over time. An ambitious goal would contain a
relatively higher rate of growth that would close the student’s performance gap in a
shorter period of time (Pearson 2012b; Shinn 2002b).
Current Level of Performance Another source of information to consider
when setting goals is the student’s current level of performance. Students in the
initial stages of reading development progress at a faster rate than students in
the later stages of reading development (Deno et al. 2001). In setting goals, it
is reasonable to expect a third grader reading below grade level to progress fas-
ter than a third grader reading at grade level. The former student is at an ear-
lier stage of reading development and has more “room to grow” (see Table 5.5;
Deno et al. 2001).
The goal is to provide instruction that results in closing the achievement gap. Re-
member, the guiding philosophy is that all students can learn and the purpose of
adjusting instruction is to provide the specific skills students need to close the
achievement gap. Goals should reflect a sense of urgency about a student’s educa-
tion. Teachers who write and monitor goals get better student outcomes (Conte and
Hintze 2000; Stecker et al. 2005) and ambitious goals yield better outcomes than
unambitious goals (Bush et al. 2001; Christ 2010; Hattie 2009; cf. Conte and Hintze
2000; Stecker et al. 2005).
Inset 5.1 How Much Growth can be Expected from Students with
Disabilities?
Deno et al. (2001) examined the typical growth rates of all students, irrespective
of the quality of instruction or program they received. Those growth rates are in
Table 5.5 under the column labeled “Gen Ed”. Deno et al. (2001) examined the
progress that could be expected of students with learning disabilities who are
receiving special education. As displayed in Table 5.5 under the column “Spe-
cial Ed”, students receiving special education services grow at about half the
rate of students in general education. These rates also were calculated without
consideration of instruction. It is important to realize that it cannot be assumed
that students in special education receive individualized, high-quality instruc-
tion (Shinn 2002a; Tilly 2008). Students with disabilities who do receive targeted, evidence-based, high-quality instruction achieve growth rates similar to
students without disabilities receiving instruction in general education settings.
These rates are displayed in the column labeled “Learning Disability, Effective”
in Tables 5.5 and 5.6. Deno et al. (2001) established that, regardless of label, when students are given the right high-quality instruction, they grow.
Selecting Goal Criteria Selecting goal criteria and determining a goal timeline
can appear to be a complex process because several different sources of informa-
tion can be considered to do so. Experts have suggested using normative standards,
benchmark standards, and rates of growth to identify reasonable and ambitious
goals for students (Deno et al. 2001; Fuchs et al. 1993; Pearson 2012b). In this section, we consolidate this information to summarize one way of setting goals and timelines. Considerations for goal setting are provided later.
First, it is necessary to determine if goal criteria will be set based on normative
standards or benchmark standards. Normative standards can be used to identify an
average range compared to similar peers and the goal criteria can be set to reach a
commensurate level. A benchmark standard can be used so that the goal is predic-
tive of future success (as discussed earlier, benchmarks can predict success on a
later outcome, such as a state test). Ultimately, the goal is to have students perform
proficiently on grade level, so selecting a benchmark standard, or a high normative standard that predicts reading success along with other criteria, can help ensure students are successful with reading skills. However, if the student is below grade level or if the
benchmark standard is too rigorous given the time frame for the goal, then an in-
terim step using either normative or benchmark standards from lower levels may be
a reasonable approach (Pearson 2012b). (Selecting a time frame is discussed later.)
Once the goal criterion is determined, the evaluator then determines how much
weekly growth is needed to reach that goal. A date by which the goal should be met
is decided upon. Then, to determine the growth needed to reach the goal, conduct a
gap analysis and divide the gap by the number of weeks in which we want the student to
reach the goal. For example, if Marianne, a third-grade student, is reading 76 cwpm
and the end of year benchmark is 100 with 97 % accuracy, then the goal would read
“When given a third-grade reading passage, Marianne will read 100 cwpm with
97 % accuracy by June 1, 2014." The gap is 100 − 76 = 24 words. If there are 16
weeks left in the school year, that date could be used to reach the goal. Dividing 24
cwpm by 16 weeks results in 1.5 words per week. Therefore, Marianne must grow
at a rate of at least 1.5 words per week to reach her goal.
The final step is to determine whether the rate of growth needed is reasonable.
Is it unambitious or too ambitious? To evaluate reasonableness, compare the need-
ed growth to established rates of growth within the normative column under the
AIMSweb source (which is the 50th percentile of recent normative data on rates
of growth). If the needed growth is at or above the growth rate listed, then the goal will
likely close the student’s performance gap over time. If the needed growth is below
the 50th percentile, then the student’s growth will likely not close the performance
gap (Pearson 2012b).
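The two steps above, computing the needed weekly growth and checking it against a normative rate, can be sketched as follows (the normative rate shown is a made-up placeholder, not an actual AIMSweb value):

```python
def needed_weekly_growth(goal_score: float, current_score: float, weeks: int) -> float:
    """Weekly growth required to close the gap within the given number of weeks."""
    return (goal_score - current_score) / weeks

# Marianne's example: goal of 100 cwpm, currently 76 cwpm, 16 weeks remaining
needed = needed_weekly_growth(100, 76, 16)  # 1.5 words per week

# Compare against a normative (50th percentile) rate of growth.
# NOTE: 1.1 is a placeholder; use the actual value from the published tables.
normative_rate = 1.1
closes_gap = needed >= normative_rate
print(needed, closes_gap)
```

If the needed rate falls below the normative rate, growing at that pace would leave the student no better off relative to typical peers, which is why the comparison matters.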
Selecting a Time Frame Finally, evaluators will want to consider how much time
is needed for the goal to be reached and what is reasonable to write for a time frame.
Calculating a time frame depends on the intensity of the instruction the student is receiving, expected rates of growth, and how many weeks are available within the school year.
Here are a few considerations when planning a time frame.
First, if students are to acquire the skills to be successful with grade-level material,
then expecting 2 years’ growth within 1 year’s time is an ambitious, yet reasonable
expectation. Less growth than that would maintain the student’s below grade-level
performance even if he or she makes positive growth in skills.
Second, educators can identify a reasonable rate of growth to expect from the
student and then calculate how many weeks of intervention/instruction are needed
to reach that goal. For example, if a student’s gap is 24 cwpm and a reasonable rate
of growth to expect is 1.50 words per week, then the student will need 16 weeks
to reach his or her goal. The weeks needed is determined by dividing the gap by
the rate of growth: 24 words/1.5 rate of growth = 16 weeks. The evaluator can then
compare the number of weeks left in the school year to the number of weeks needed
to determine if the time frame of the goal is reasonable or not. If there are more
than 16 weeks left, then the time frame is reasonable and the goal can be written
for 16 weeks from the present date. If there are fewer than 16 weeks left in the school
year, then the time frame can be written to extend into the next school year or in-
structional changes can be made to expect higher rates of growth and to close the
performance gap before the school year ends.
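For readers who want to automate this arithmetic, the two calculations above — the growth rate implied by a time frame, and the weeks implied by a growth rate — can be sketched in Python. This is an illustration only; the function names and the 20-weeks-remaining figure are ours, not the authors’.

```python
def weekly_growth_needed(goal, current, weeks):
    """Growth rate (words per week) needed to close the gap in the given weeks."""
    return (goal - current) / weeks

def weeks_needed(goal, current, growth_rate):
    """Weeks of intervention needed to close the gap at a given weekly growth rate."""
    return (goal - current) / growth_rate

# Marianne's example: goal of 100 cwpm, current performance of 76 cwpm.
print(weekly_growth_needed(100, 76, 16))  # 1.5 words per week
print(weeks_needed(100, 76, 1.5))         # 16.0 weeks

# Compare the weeks needed to the weeks left in the school year.
weeks_left = 20  # hypothetical figure for this example
if weeks_left >= weeks_needed(100, 76, 1.5):
    print("Time frame is reasonable; write the goal 16 weeks out.")
else:
    print("Extend the goal into next year or intensify instruction.")
```

The same two functions cover both directions of the calculation: fix the time frame and solve for growth, or fix the growth rate and solve for time.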
One last point on selecting a time frame deals with the issue of an older student
who is below criterion on a skill that should have been mastered in a previous
grade. For example, what time frame should be used for a second grader who has
not met standards on the nonsense word fluency (NWF) measure? Generally speaking, if
a student is missing a previous skill, the goal can be written for the student to
reach that goal in half the time that it normally takes to achieve that goal (this
is equivalent to expecting 2 years’ growth in 1 year’s time). For example, a first
grader should read at least 8 whole words on an NWF in the winter and 13 whole
words by the end of the year (Good and Kaminski 2011). Now imagine a second
grader who begins the year reading 9 whole words on NWF. The student is behind
in skill acquisition and it is now past the deadline for the benchmark. The goal is
to have the student reach 13 whole words in half the time it normally takes to reach
that goal. A score of 9 whole words is at benchmark for the middle of first grade,
so if the expectation is that it normally takes 18 weeks to go from 8 to 13 whole
words (the winter to spring benchmark), then the goal would be written so that it
is achieved in 9 weeks for this particular student. Using this approach does take
some analysis and mathematics, but it can be a fairly straightforward method for
setting goals.
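The half-time rule for the NWF example can be computed directly. A minimal sketch, with a function name of our choosing:

```python
def half_time_weeks(typical_weeks):
    """Half the typical time to reach a benchmark: the catch-up expectation,
    equivalent to expecting 2 years' growth in 1 year's time."""
    return typical_weeks / 2

# It normally takes about 18 weeks (winter to spring benchmark period) to move
# from 8 to 13 whole words on NWF, so the catch-up goal is written for 9 weeks.
print(half_time_weeks(18))  # 9.0
```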
Summary of Selecting Goal Criteria and Time Frame To summarize, setting
goals and timelines includes:
1. Determine a normative or benchmark criterion.
2. Set the goal time frame and estimate the growth needed for the student to reach
that goal.
a. Calculate the gap between the goal and the current level of performance.
b. Divide the gap by the number of weeks in the goal time frame to get the needed rate of growth.
3. Determine if the goal is realistic by comparing it to research-based growth rates.
Compare the rate of growth needed to the 50th percentile of rates of growth to
determine if the gap will be closed within the goal time frame.
4. Analyze the time frame by examining the number of weeks available for
instruction.
5.5.7 Measuring Progress
5.5.8 Measuring Fidelity
A final decision made during Phase 3 of the CBE Process is how to measure fidelity of the instructional plan. Two ways to measure fidelity are to have an outside
observer observe the instructional plan being implemented (direct assessment) or
to have the person delivering the plan report on its delivery (indirect assessment).
Direct Assessment of Fidelity Direct assessment occurs when the steps of the
instructional plan are defined in observable and measurable terms and then an out-
side observer (e.g., administrator, instructional coach, peer, etc.), observes the steps
implemented. A checklist is created, and the total percentage of components imple-
mented is calculated.
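The checklist arithmetic can be sketched as follows. This is a minimal illustration; the checklist contents and the 10-component observation are hypothetical.

```python
def fidelity_percentage(components_implemented):
    """Percentage of instructional-plan components observed as implemented."""
    done = sum(1 for implemented in components_implemented if implemented)
    return 100 * done / len(components_implemented)

# Hypothetical observation: 9 of 10 defined plan components seen in place.
observation = [True, True, True, True, True, True, True, True, True, False]
print(fidelity_percentage(observation))  # 90.0
```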
Indirect Assessment of Fidelity Indirect assessment of fidelity involves using
a variety of tools to measure fidelity without directly observing the plan imple-
mentation. Examples include reviewing attendance records to see if the student
was present for instruction, examining permanent products to ensure the student
participated in the instruction, interviewing those who deliver the instruction,
or having those delivering the instruction complete a checklist. The same checklist that would be used for direct observation could be completed in indirect assessment.
The process of measuring fidelity is not intended to be a teacher evaluation
process. The content of what is measured in this process should be openly shared
and available to those who are part of the instructional plan. Fidelity tools are
used to ensure students are being provided the instruction/intervention as it was
intended.
In addition to direct and indirect assessment of fidelity, the definition of fidelity
also can include consideration of how well the plan matches the student’s skills and
deficits.
5.6 Plan Evaluation
In the final phase of the CBE Process, the question asked is, “Did the intervention
plan work?” During this phase, educators review monitoring data for both the
fidelity of implementation of the intervention plan (fidelity data) and the plan’s
overall effectiveness (student progress monitoring data). The ongoing monitor-
ing of the intervention plan is called formative evaluation. Formative evaluation
is an assessment process that occurs during instruction and allows for changes
to be made if student benefit is not evident. Scheduling systematic data review meetings at regular intervals, so that these important decisions about student benefit and the need for instructional changes are made, is critical to improved student outcomes.
Prior to making changes to the instructional plan when data indicate it is not
working, the fidelity of the plan must be examined. If the plan was not implemented or delivered as intended, it cannot be concluded that the plan itself failed to improve student outcomes. Instead, the plan should be implemented as intended before teams
conclude it did not work. Before making changes to the plan, make changes to the
implementation and continue to monitor progress. There is not a clear-cut standard
in the literature as to what is acceptable or unacceptable fidelity. However, it is a
reasonable assumption that fidelity scores below 90 % are undesirable. This number
comes from research on academic interventions that establish fidelity at 90–95 %
(see Greenwood et al. 2008 and Jimerson et al. 2007).
5.7 Summary and Key Points
The steps in the CBE Process are the same as those in the PSM, although the CBE
Process is focused on individuals and the PSM can be applied to an entire school
system. The CBE Process is a series of phases in which a problem is defined, ana-
lyzed, an intervention is implemented to correct the problem, and data are collected
to evaluate its effectiveness. Goals are set by considering various sources of infor-
mation. Measurement of both progress and fidelity are key components of the CBE
Process to ensure logical decisions are made about the effectiveness and continued
use of the intervention plan. If a performance goal is not met, the CBE Process continues, cycling back around until the goal is met.
Key Points
• The CBE Process follows the same phases as the PSM.
• Survey-level and specific-level assessment comprise the first two steps of
the CBE Process.
• Phase 3 involves designing an intervention plan, setting goals, and deter-
mining how fidelity and progress will be monitored.
• Ambitious goals are written through consideration of benchmark standards
and normative growth rates.
• Evaluation of the intervention plan requires ensuring fidelity is strong to
allow for logical decisions about the effectiveness of the instructional plan.
• Failure is not an option, since a lack of progress requires cycling back
through the CBE Process.
Part II
Using Curriculum-Based Evaluation
Chapter 6
CBE Decoding
6.1 Chapter Preview
This chapter describes the process for Curriculum-Based Evaluation (CBE) decod-
ing. The chapter is structured around the four phases of the CBE Process and will
walk the reader through the entire process for decoding. The chapter discusses spe-
cific assessment techniques and intervention recommendations based on the results.
6.2 CBE Decoding
The CBE Process moves through four phases, within which is a series of steps that
involve three types of tasks:
1. Ask: The questions that guide assessment.
2. Do: The direct assessment activities conducted with the student. Data are col-
lected and interpreted to answer the question.
3. Teach: The instructional recommendations based on the outcomes from ask and
do.
Evaluators start with a question (Ask), which then requires an action or activity (Do). Following a certain number of Asks and Dos, the evaluator arrives at an instructional focus (Teach), which indicates specific instructional strategies (see
Fig. 6.1). The entire CBE Process for Decoding is presented in Handout 6.1, which
is designed to be a quick summary of the Decoding CBE Process (Table 6.1 also
outlines the CBE Process for Decoding in a linear form). All of the handouts used
for the CBE Process for Decoding are included at the end of the chapter, and the
entire list is displayed in Table 6.2.
6.3 Problem Identification
The first step is to identify if a reading problem exists. This initial identification of
reading difficulty typically occurs during the universal screening process. Multiple
sources of information also can answer this question at any time (e.g., review of
records, interview with the student or teacher, and various assessments).
After identification of a reading concern, the next step is to determine if the prob-
lem is severe enough to warrant further investigation. This question is answered by
conducting a Survey-Level Assessment (SLA) using leveled reading passages. SLA
is a technically adequate measure of overall reading performance (Hintze and Conte
1997) that is conducted to determine a student’s instructional reading level. The instructional reading level is the highest level of material in which the student reads at or above the fall 25th percentile (based on national norms) with at least 95 % accuracy (Hosp et al. 2006). A gap analysis can be conducted with data collected
in the SLA.
SLA Directions:
1. Administer three 1-minute oral reading fluency (ORF) probes using curriculum-
based measurement (CBM) directions. Report the median words read correctly
(WRC) and the median errors as the score (directions for SLA are included in
Handout 6.2 and the formulas for calculating rate and accuracy are “A” and “B”
in Fig. 6.3, respectively).
Table 6.1 Phases of CBE Process for Decoding (columns: Phase, Ask, Do, Teach)
Fig. 6.2 Problem Analysis phase of CBE Process for Decoding. (Note: “+” indicates at or above
criterion, “–” indicates below criterion)
a. SLA requires both the unnumbered student passages and the numbered exam-
iner passages.
b. If the student’s rate and accuracy are below criteria for the expected grade
level, administer three passages a grade level lower and record the median
WRC/errors from successive grade levels until criteria are met at a grade
level.
2. Record the student’s scores on Handout 6.5. Complete the bottom portion of
Handout 6.5 to determine the severity of the problem.
3. Ask: Does the problem warrant further investigation?
− If the student is performing at criterion for accuracy and rate with grade-level material, then decoding CBE is complete and reading comprehension
can be examined (see Chapter 8). Students scoring at criterion with grade-
level material may require an instructional focus on comprehension, vocabu-
lary, and/or content knowledge.
− If the student is not performing at criterion for accuracy or rate with grade-
level material, then proceed to the Problem Analysis phase.
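As an illustration only, the SLA scoring arithmetic can be sketched in Python. The probe scores and the 72-WRC cut score below are invented for the example (the actual criteria come from the norms and the formulas in Fig. 6.3), and we assume the standard CBM accuracy formula of WRC divided by total words read.

```python
import statistics

def sla_medians(wrc_probes, error_probes):
    """Median words read correctly and median errors across three ORF probes."""
    return statistics.median(wrc_probes), statistics.median(error_probes)

def accuracy(wrc, errors):
    """Reading accuracy as a percentage of total words read."""
    return 100 * wrc / (wrc + errors)

wrc, errors = sla_medians([68, 74, 71], [4, 6, 5])  # medians: 71 WRC, 5 errors
acc = accuracy(wrc, errors)                         # about 93.4 %
cut_score = 72  # hypothetical fall 25th-percentile WRC for this grade level
meets_criteria = wrc >= cut_score and acc >= 95
print(wrc, round(acc, 1), meets_criteria)  # 71 93.4 False
```

Because this student misses both the rate and the accuracy criteria, the directions call for administering passages one grade level lower until criteria are met.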
Things to Consider
• In some cases it may be more efficient to administer the easier grade-
level material first before progressing to higher grade-level material.
This approach may be useful with students reading several levels below
expected grade level (cf. Hintze and Conte 1997).
• It is helpful and efficient to record all types of errors that students make
while reading. It may be necessary to collect errors as part of the CBE
Process later, so gathering them along the way can save time. However, it
requires proficiency in CBM administration before adding the collection
of error data. Table 6.3 shows how to record errors and Fig. 6.4 shows how
to code errors.
Fig. 6.3 Formulas for rate, accuracy, and percentage of errors corrected
6.4 Problem Analysis
In the Problem Analysis phase, the evaluator examines the student’s rate and ac-
curacy with grade-level material, and is Step 3 of the CBE Decoding Process. The
tasks conducted will depend on the student’s performance. If rate and accuracy are
at criterion, then the evaluator proceeds to Chapter 8 to evaluate the student’s read-
ing comprehension. If rate is below criterion, and accuracy is at or above criterion,
the next task is to teach fluency building. If rate is at or above criterion, and accu-
racy is below criterion, the evaluator conducts Step 4. Finally, if rate and accuracy
are below criterion, the evaluator conducts Step 5. Figure 6.2 illustrates the Problem
Analysis phase of the CBE Decoding Process.
Using the assessment results from grade-level material, compare the student’s rate
and accuracy to the criteria (i.e., 25th percentile fall, at least 95 % accuracy) and
proceed to one of the following tasks where “+” signifies at or above criterion and
“–” signifies below criterion:
1. Rate +, accuracy +: Assess reading comprehension (see Chapter 8)
2. Rate –, accuracy +: Teach: Fluency
3. Rate +, accuracy –: Conduct Step 4 of CBE Decoding Process
4. Rate –, accuracy –: Conduct Step 5 of CBE Decoding Process
Rate at Criterion; Accuracy at Criterion (Rate +, Accuracy +) For students who
are reading at or above criterion for rate and accuracy, assessment focus moves to
reading comprehension. Chapter 8 focuses on the CBE Process for reading com-
prehension. The instructional goal will be informed by the results of additional
assessment.
Rate Below Criterion; Accuracy at Criterion (Rate –, Accuracy +) For students
below the criterion for rate, and at or above criterion for accuracy, the teaching
recommendation is to focus on building fluency. No further assessment is needed at
this point. Fluency building becomes the instructional recommendation (see Teach:
Fluency later in this chapter).
Rate at Criterion; Accuracy Below Criterion (Rate +, Accuracy –) For students at or above criterion for rate, and below criterion for accuracy, it is necessary to determine if the students are monitoring their reading. Are they reading quickly and
carelessly, or do they lack the decoding skills required to read the words? A simple
procedure is conducted to determine if the student has the decoding skills and is not
using them, or if the student is not able to decode the words. This procedure can
determine if the student is within the acquisition stage for decoding and should be
further evaluated as a student with both low rate and low accuracy.
Rate Below Criterion; Accuracy Below Criterion (Rate –, Accuracy –) Stu-
dents below criterion for rate and accuracy likely are in the acquisition stage of the
instructional hierarchy. Also, a student who does not improve accuracy with the
self-monitoring strategy (Step 4) requires the same instructional focus as students
in this category. Students in this category are still learning decoding and phonics
skills, so the specific-level assessment focuses on pinpointing decoding needs and
determining the extent to which early literacy skills (phonemic awareness, letter-
sound relationships, and sight words) are mastered.
Step 4 is for students whose rate is at or above criterion, and accuracy is below criterion (i.e., rate +, accuracy –). Self-monitoring assessment involves having students read aloud while providing an auditory prompt when a decoding error is made.
The results indicate whether the student has the decoding skills necessary to read,
or requires instruction targeting reading decoding. Self-monitoring assessment is
described in the following six steps and summarized in Handout 6.3. Handout 6.6
can be used to record self-monitoring assessment information.
Self-Monitoring Assessment
1. Select grade-level material from which the student reads at or above criterion for
rate, but below criterion for accuracy. Use examiner copy and student copy of at
least two passages from that grade level.
2. In addition to the standardized directions say to the student, “I want you to take
your time and read this passage as accurately and carefully as you can.” (Pear-
son 2012a).
3. Score according to standardized scoring procedures and determine if the additional prompt in #2 improved the student’s reading accuracy and, if so, whether it reached grade-level criterion (use Handout 6.6 to calculate and record changes in
rate, accuracy, and percentage of errors corrected).
− If yes, the prompt helped the student, and it can be assumed the student would
benefit from such prompts and possibly from incentives for improvement.
− If no, proceed to the next step to determine if assisted self-monitoring affects
accuracy.
4. Use new copies of reading passages and instruct the student to read (untimed).
Tell the student that if he or she makes a mistake, you will provide a prompt.
a. A simple tap of a pencil can be used for the auditory prompt and will signal
to the student that he or she made a mistake. Clicking a pen, snapping one’s
fingers, or use of a clicker, which can be purchased inexpensively online, also
can provide the auditory prompt.
b. Say to student, “Please read this aloud. This may be difficult for you, but
please do your best reading. I am not timing you, but if you make a mistake,
I will (tap this pencil, click this clicker). That is your clue that you made a
mistake and I want you to find the mistake and fix it. Remember, find it and fix
it. What will you do?” (Student indicates understanding of procedure).
5. Have the student read aloud and provide a signal each time a word is misread.
a. As the student reads and makes a mistake, slash the word that is misread on
your passage, write down the error that is made above the misread word, and
provide the prompt.
b. Then mark a “slash” next to the error to indicate that the prompt was given,
and write down the word the student says following the prompt (see Fig. 6.5
for a visual display of recording misread words).
c. Provide only one chance for the student to reread the word correctly; do not prompt again if he or she misreads the word a second time after the auditory prompt (see Fig. 6.5 for a visual depiction of recording misread words).
6. Ask: Can the student self-correct errors? Generally speaking, if a student is able
to self-correct 90 % of the errors, the issue likely is self-monitoring (Burns et al.
2012; Howell and Nolet 2000).
a. Determine the number of errors made and the number of errors corrected.
b. Divide the total errors corrected by the total errors made and multiply by 100
to get the percentage of errors corrected (see formula “C” in Fig. 6.3). Use
Handout 6.6 to record your scores.
c. If the answer to the question “Can the student self-correct at least 90 % of
errors?” is yes, then the student likely has a self-monitoring issue. Suggest the
teaching strategy “Teach: Self-Monitoring.”
d. If the answer to the question is “no,” then proceed to step 5 in Handout 6.1.
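Formula “C” and the 90 % decision rule can be expressed as a short sketch (the function name and the 10-error example are ours):

```python
def percent_errors_corrected(errors_made, errors_corrected):
    """Formula 'C': percentage of reading errors the student self-corrected."""
    return 100 * errors_corrected / errors_made

pct = percent_errors_corrected(errors_made=10, errors_corrected=9)
print(pct)  # 90.0
if pct >= 90:
    print("Likely a self-monitoring issue; suggest Teach: Self-Monitoring.")
else:
    print("Proceed to Step 5 in Handout 6.1.")
```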
Things to Consider
• Gathering, recording, and saving error information will save you time in
the event an error analysis is required in step 6.
• Prompt each error only once. If the student misreads it again, record the second error. If the student ignores or does not respond to the prompt, record that with three dots and allow him or her to continue reading. If the student continually ignores the prompt, stop and clarify the purpose of the prompt.
• This assessment may reveal that the student has both a self-monitoring
and a decoding issue. Unless the student corrects at least 90 % of the errors
made, it is worthwhile to proceed to step 5 in Handout 6.1.
Step 5 is for students whose rate and accuracy are below criterion (i.e., rate –, ac-
curacy –), and for students who did not show an improvement in accuracy with Step
4. This step determines if the student has emerging phonics skills or if the reading
breakdown occurs with early literacy skills.
1. Ask: Does the student have acceptable rate and accuracy at some level greater
than grade 1?
2. Do: Examine the results of the SLA.
a. If yes, then analyze the errors the student makes while reading (see Step 6).
b. If no, then assess early literacy skills (see Chapter 7, which describes the CBE
Process for assessing early literacy skills).
If the student is able to decode and read at some level greater than grade 1, the next
step is to analyze the types of errors the student makes in an attempt to identify er-
ror patterns. Identifying patterns in errors allows for targeted reading instruction on
particular word types or skills.
In an error analysis: (a) determine if the errors violate the meaning of the passage
(and subsequently, if the student corrects those errors), (b) determine the general
reading errors made, and (c) identify the prevalence and types of decoding errors
made. The SLA (and self-monitoring assessment if applicable) may have supplied
all the information required for error analysis. Instructions for Step 6 are described
next and provided in Handout 6.4.
Collect an Error Sample Follow these steps to collect or add to the error sample.
1. Identify a grade level at which the student reads with between 80 % and 85 % accuracy and use those passages (250+ words). The goal is to generate errors for
analysis.
a. Use a student passage and an examiner passage.
b. Error samples of at least 25 errors for grade 1, at least 50 for grades 2 and above, and as many as 100 errors have been recommended (Howell and Nolet 2000).
The sample must be sufficient to identify existing error patterns.
c. When collecting an error sample, both the number and type of errors are
informative. Errors recorded can include omissions, insertions, skipped lines,
hesitations, meaning violation errors without self-correcting, and decoding
errors.
2. Have the student read aloud and record the errors using the codes listed in
Table 6.3, and write the errors substituted for the actual words.
3. Ask: Are there patterns to the student’s reading errors? There are three questions
in analyzing errors.
a. Do the errors violate the meaning of the passage and if so, are they
self-corrected?
b. What types of general reading errors are made?
c. What types of decoding errors are made?
4. Meaning Violation Errors. First determine if the student’s errors violate the
meaning of the passage. Consider all of the errors for this analysis. Tally and
code the meaning violation errors using Handouts 6.7 and 6.8.
a. Write each error and code it under the appropriate column with an “X” using
the coding sheet in Handout 6.7 (Table 6.7.1).
b. Determine if each error violates the meaning of the text, does not violate
the meaning, or if meaning violation cannot be determined. Also mark
whether or not the error was self-corrected. This self-correction also is known
as comprehension self-monitoring since it contributes information about
comprehension.
5. Tally the frequency of errors and calculate the percentages for each type on Hand-
out 6.7. Then write the totals on the Tally Sheet in Handout 6.8 (Table 6.8.1).
a. Ask: Is the student self-correcting errors, particularly those that violate the
meaning of the text?
b. The results will provide information about how well the student is monitoring
the meaning of the passage. For example, a student who makes errors but self-
corrects likely is monitoring the meaning more than a student who is unaware
errors do not make sense in the passage.
6. General Reading Errors. Next determine the frequency of each type of general
reading error that the student makes. General reading errors include whether or
not the substitution was a real word, self-corrects, hesitations, insertions, omis-
sions, and repetitions.
a. Write each error that the student makes on Handout 6.7 (Table 6.7.2). Write
the actual word under the “actual word” column and then the error under the
column “read word.”
b. Code the errors by marking an “X” under the appropriate column. Then write
the totals in Handout 6.8 (Table 6.8.2). Also make note of qualitative errors,
such as prosody and phrasing.
c. Review the results of the general reading errors. If decoding errors are a prev-
alent type of error, analyze the decoding errors.
7. Decoding Errors. Ask: “Do decoding errors make up a majority of the errors
made?”
a. If yes, code each decoding error using the coding sheet in Handout 6.7
(Table 6.7.3).
b. Write the actual word and the misread word under the respective columns in
Table 6.7.3. Then put an “X” under the appropriate column for that decoding
error. A decoding error may be more than one type of error within Table 6.7.3.
Table 6.7.3 illustrates some examples.
c. After coding each error, tally up the totals on the Tally Table in Handout 6.8
(Table 6.8.3).
8. Review Errors. Review the tallies for each error category in Handout 6.8. Ask:
Are there patterns evident in the student’s reading errors?
a. If the answer is “yes,” the recommendation is “Teach: Targeted Instruction,”
targeting the student’s errors (see Handouts 6.14 and 6.15).
b. If sight word errors emerge as a pattern, then go to Step 7.
c. If the answer is “no,” then the teaching recommendation is “Teach: General Reading Skills,” which recommends general reading instruction (see Handout 6.16).
Things to Consider
• It should be fairly easy to obtain a sufficient sample of errors. For example, if a student reads material with 80 % accuracy and is asked to read a 250-word passage, the evaluator will obtain a sample of about 50 errors in a matter of minutes with one passage.
• It is suggested that assessment of sight words be considered even if no pat-
tern emerges for the decoding errors.
• More than one error type. It is possible that a reading error meets more
than one category of word type. For example, a student may read “brother”
as “brothers.” This can be counted as both a decoding-suffix error (reading
the word as plural) and as an insertion error (inserting an “s”).
− Additionally, a decoding error may be classified under more than one
word type because it may be difficult to discern exactly the type of
decoding error. For example, reading the word “brotan” for the word
“brothers” could be both a consonant blend error (misreading the “th”
blend) and a suffix error (leaving off the plural “s”). There is a level of
interpretation here, but keep in mind the goal is to pinpoint consistent
errors. Overcategorizing errors would be better for finding error pat-
terns than undercategorizing.
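The back-of-the-envelope estimate in the first point above — checking that a planned passage will yield enough errors — can be sketched as follows (the function name is ours):

```python
def expected_errors(total_words, accuracy_pct):
    """Approximate number of errors a passage will yield at a given accuracy."""
    return total_words * (1 - accuracy_pct / 100)

# A 250-word passage read at 80 % accuracy yields about 50 errors,
# which meets the recommended sample size for grade 2 and above.
print(expected_errors(250, 80))
```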
After conducting Step 6, the evaluator may wish to examine the student’s knowledge of sight words, particularly if sight words were a consistent error type identified in the error analysis results.
1. Administer a sight word list. For example, the Dolch word list is provided in
Handout 6.9.
2. Tally the percentage of sight words that the student read correctly.
Things to Consider
• When suggesting sight words as a focus of instruction, practicing them in isolation and then incorporating them into connected text is a thorough approach.
6.5 Plan Implementation
The Problem Analysis phase results in a clear understanding of why the problem is
occurring. Next, in the Plan Implementation phase, three tasks are accomplished: (a)
the design and implementation of an intervention plan that matches student needs,
(b) goal setting, and (c) identification of ways to measure progress (i.e., intervention
effectiveness) and fidelity of intervention implementation.
In this section, four general instructional foci (labeled as “Teach”) are described
that will address student needs for four possible outcomes of the CBE Process. We
then list specific instructional strategies in Handouts 6.10–6.16. There are numer-
ous interventions and specific instructional strategies to support reading needs, and
listing all of them is beyond the scope of this book. The key to selecting a strategy is
to ensure that (a) it is evidence-based, and (b) formative assessment is used to determine whether it benefits students. This section describes the overall instructional focus for each need identified in the CBE Process and then provides specific strategies in the Handouts. Evaluators are encouraged to explore other resources to gain
additional instructional strategies. A list of the instructional strategies discussed
next is provided in Table 6.4, and resources for reading decoding are presented in
Table 6.5.
This strategy targets students whose rate is at or above criterion, and accuracy is below criterion (i.e., rate +, accuracy –). These readers have the decoding skills to read accurately, but do not employ them consistently while reading connected text. As a result, the reader does not gain meaning from the passage. The instructional focus is teaching
the reader to actively engage with and derive meaning from the text. The student
will need guided practice and instruction initially, and then the scaffolds gradually
can be faded. Three strategies are described.
Cued Self-Monitoring The first strategy described is to provide cueing when a
student makes an error, much like during the self-monitoring assessment. During
guided or paired reading, the student reads aloud and the teacher monitors the stu-
dent’s reading. Each time the student misreads a word, the teacher provides a cue
(e.g., pencil tap, clicker) and the student is to then stop, find the error, fix it by read-
ing the word correctly, and then continue reading. As the student develops the skill
to self-correct errors independently, the cueing can be faded. Instead of prompting
the student after each error, the prompt can be provided when the student finishes
the sentence and then the paragraph. Eventually, the cue will be completely eliminated, and the student will rely on self-monitoring by asking questions (“Did that
make sense to me? Do I think I misread any words?”). This strategy is presented in
Handout 6.10.
This cueing procedure can be combined with an overcorrection or positive prac-
tice (PP) procedure in which the student is provided numerous opportunities to
practice the correct word after each misread word (“Please read that word 3 times”)
and then instructed to start over at the beginning of the sentence (“Now go back and
reread the sentence”). This is an effective procedure that is superior to simply sup-
plying the correct word or having the student read the correct word one time (Singh
1990; Singh and Singh 1986). This cueing procedure can also be used by parents or
peers in place of the teacher as long as the person providing the cueing can be taught the procedure and can recognize the errors.
Considering the instructional hierarchy (IH), the student will require guided practice and immediate feed-
back until self-monitoring is automatic. As the student develops the skill, the feed-
back and the focus may shift to building fluency (Burns et al. 2012). It is a normal
progression in the IH for a student’s rate to decrease while the focus is on accuracy.
After accuracy is built to a specified percentage (e.g., 93 %), then the goal will
change to reflect both accuracy and fluency.
Goal Setting to Improve Accuracy and Paired Reading This strategy involves
setting a goal for accuracy and having a partner (e.g., peer, teacher, or parent) moni-
tor the student’s reading. While the student reads, the partner follows a scripted
error correction procedure and provides correction as needed. (An example of a
scripted error correction procedure is provided in Handout 6.10). The student reads
a text selection or for a specified time period. When the student is finished, the part-
ner calculates an accuracy rate, compares to the previously set goal, and provides a
reward if the goal is met.
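The partner’s post-reading calculation described above can be sketched in a few lines; the function names and example numbers are illustrative, not from the text.

```python
def accuracy_rate(words_read_correctly, errors):
    """Percentage of words read correctly out of total words attempted."""
    total = words_read_correctly + errors
    return 100.0 * words_read_correctly / total

def goal_met(words_read_correctly, errors, goal_pct):
    """Compare the obtained accuracy rate to the previously set goal."""
    return accuracy_rate(words_read_correctly, errors) >= goal_pct

# Example: 114 words correct and 6 errors against a 95 % accuracy goal.
rate = accuracy_rate(114, 6)   # 95.0
met = goal_met(114, 6, 95.0)   # True, so the agreed-upon reward is delivered
```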
Making Thinking Strategies Concrete and Explicit The last strategy for this
instructional focus is to model and make explicit metacognitive or self-regulation
skills. Students who struggle with reading are less likely to monitor their compre-
hension of the material they are reading. Direct instruction to monitor the level of
comprehension during reading can have a significant impact (Vaughn et al. 2012).
In fact, Hattie’s (2009) meta-analytical work identified self-verbalization and metacognitive strategies as having effect sizes of 0.67. This strategy may be helpful for
students who have improved error monitoring and can begin to shift their attention
to monitoring understanding of and deriving meaning from text.
Vaughn et al. (2012) offer some recommendations for teaching metacognitive
strategies. They describe making the teacher’s thinking “visible” to the student by
using think-alouds, or talking out the strategies that are used. They present one
example:
Before I read this text, I see it will be difficult to understand. First, I look for key words.
I see three words in bold that I don’t know, so I write them down to see if I can figure out
what they mean. Second, I look at the title, the heading and the questions at the end of the
text. I think about what this text is going to be about, and I try to make connections while
reading. Third, while I read, I stop to see whether I have learned any information to help me
answer the questions at the end of the text. (p. 14)
94 6 CBE Decoding
Additionally, Vaughn et al. (2012) describe monitoring students’ reading and help-
ing them think aloud if they misread a word. They also describe teaching students
to identify breakdowns in their reading and developing ways to fix them, such as
asking questions while reading (e.g., “What do you do when you don’t know how
to read a word? Are there any words or ideas that you did not understand?”) Ad-
ditionally, having students visualize the story, underline elements, take notes while
reading important elements, or actively think about the author’s point of view can
increase self-monitoring and comprehension. Rubrics or key questions can also guide reading and improve comprehension: (a) Were there any words that did not make sense? If so, reread and use strategies to read them accurately; (b) Were there sections of text that did not make sense? If so, reread and try to figure out the author’s meaning; and (c) What strategies can I use to make sense of the text? Once students actively monitor their accuracy, the focus can shift to actively comprehending the content.
6.5.2 Teach: Fluency
For students below the criterion for rate, and at or above criterion for accuracy, the
teaching recommendation is to focus on building fluency (rate –, accuracy +). It is
important to clarify that reading fluency is not speed reading or reading as fast as
possible. Fluency is multidimensional and consists of rate, accuracy, and prosody
(Musti-Rao et al. 2009). In fact, students who are able to read with appropriate
“prosodic markings” divide words into meaningful phrases and have higher comprehension of text than students who do not (Therrien 2004). Fluency is about effortless reading that is efficient, which in turn allows students to devote their working
memory to comprehending the text. Effective fluency instruction focuses as much
on the prosodic features of text as it does on the rate and accuracy of reading (Kuhn
and Stahl 2003).
Generally speaking, students in the fluency stage of the IH need practice and
repetition with corrective feedback, goal setting, and use of performance contingen-
cies (Burns et al. 2012). Instruction centers on building fluency at the letter, word,
sentence, paragraph, and passage level. Numerous interventions and instructional
approaches can be used to build fluency. Choral reading, partner reading, chunking,
setting goals for rate, previewing passages, and reader’s performance (where stu-
dents read roles from a play that they have rehearsed) are some of these strategies
(Rathvon 2008; Vaughn and Linan-Thompson 2004). We describe some of the more
researched interventions: repeated readings (RR), partner reading, and chunking.
Repeated Readings One effective intervention for building fluency is the repeated
reading strategy, which is defined as reading and rereading a passage or section of
text until a sufficient level of fluency is reached (Chard et al. 2002; Musti-Rao et al. 2009). The steps of RR are presented in Handout 6.11. RR consists of:
1. Identifying a target student and a tutor (can be a teacher or peer).
2. Selecting a grade-level passage in which students read with at least 93 %
accuracy.
3. Selecting a daily goal for rate (this can be based on benchmark goals, self-refer-
enced goals, or normative data).
4. Having the target student read for 1 minute while the tutor listens and marks errors.
5. After the 1-minute reading, the tutor provides error correction.
6. The target student rereads the passage, attempting to read further than before.
This is repeated for a total of four readings.
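The six steps above can be sketched as a simple scoring routine; the data structure and example numbers are assumptions for illustration, not part of the published procedure.

```python
def repeated_reading_session(wrc_per_reading, daily_goal):
    """Track words read correctly (WRC) across the four 1-minute readings
    and report whether the daily rate goal was reached."""
    assert len(wrc_per_reading) == 4, "RR repeats for a total of four readings"
    best = max(wrc_per_reading)
    return {
        "best_wrc": best,
        "goal_met": best >= daily_goal,
        "gain": wrc_per_reading[-1] - wrc_per_reading[0],
    }

# Example: a student reads 48, 55, 61, and 66 WRC against a daily goal of 60.
result = repeated_reading_session([48, 55, 61, 66], daily_goal=60)
```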
Variations on this procedure include partner reading, wherein the target student and tutor take turns reading sections of the text before the student reads for 1 minute, and providing reinforcement (e.g., praise, incentives, etc.) based on the student improving rate each time (Burns and Parker, n. d.; Musti-Rao et al. 2009). Difficult
words can be previewed and reading the passage can be modeled for the target stu-
dent prior to the 1-minute timing (Lo et al. 2011). Cueing can be provided to focus
students explicitly on reading rate (e.g., “Try to read this as quickly as you can.”),
on reading comprehension (e.g., “Do your best reading and try to really understand
the passage.”), or on both (Therrien 2004).
RR is effective at improving both the fluency rate and reading comprehension
scores of students, as Therrien (2004) found that RR is associated with an effect size
of 0.76 for improving fluency and an effect size of 0.48 for comprehension. The
most effective components of RR are:
• Adult delivered (vs peer delivered)
• Cueing for both fluency and comprehension (vs cueing for either fluency or
comprehension alone)
• Repeating the reading four times (vs repeating two to three times or more than
four times)
• Using corrective feedback on word errors (vs no corrective feedback)
• Setting a performance criterion (having students read until they reach a level of
correct words per minute or reading a passage within a certain time limit)
• Providing a teacher model (vs no model; tape- or computer-mediated models are
more effective than no model, but less effective than a teacher model)
• Providing progressively more difficult material once a performance criterion is
met (Chard et al. 2002; Therrien 2004).
Listening Preview with Partner Reading Partner reading combines elements of
RR and listening preview to improve fluency. This is a helpful intervention when try-
ing to accommodate groups of students, as it is a peer-delivered strategy. Although
RR is less effective when delivered by a peer compared to an adult, this interven-
tion represents a more feasible and less resource-intensive option (i.e., requires less
teacher time and direct 1:1 instruction). Rathvon (2008) offers a description of part-
ner reading, which is presented in Handout 6.12.
Chunking Chunking is a strategy that is helpful for building fluency at a phrase
or sentence level. The text or passage is divided into prosodic phrases marked with slashes. Students then read the text, using the intonation indicated by the slashes. Rasinski (1994) reports that this “phrase-cued” strategy has led to positive
increases in comprehension, word recognition, and rate. LeVasseur et al. (2008)
compared three RR approaches with a group of second graders: (a) RR with phrase-cued text, (b) RR with standard text, and (c) RR of difficult words (word list). They
found that the RR with either the standard or phrase-cued text resulted in greater
gains in rate compared to the RR of word lists. However, the RR with phrase-cued
text led to the most gains in prosody compared to the other two conditions. Hand-
out 6.13 describes the chunking strategy.
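Preparing phrase-cued text can be sketched as inserting slashes at teacher-chosen phrase boundaries; the function and the boundary positions below are hypothetical.

```python
def phrase_cue(text, boundaries):
    """Insert slashes after the word positions in `boundaries`,
    producing phrase-cued text for the chunking strategy."""
    words = text.split()
    out = []
    for i, word in enumerate(words, start=1):
        out.append(word)
        if i in boundaries:
            out.append("/")
    return " ".join(out)

# The teacher marks phrase breaks after words 3 and 6 (hypothetical choices).
cued = phrase_cue("The old dog slept in the warm sun", {3, 6})
# -> "The old dog / slept in the / warm sun"
```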
6.5.3 Teach: Targeted Instruction

The instructional focus here is to correct errors. Errors are learned, and once learned, they persist. Targeted, focused instruction and repetition are needed to teach the correct word (Reitsma 1983).
To correct clear patterns of errors, identify the errors and provide extensive modeling and practice of the correct word. Students making errors are in the acquisition stage and therefore need modeling of the skill, prompting to ensure its accurate use, and immediate performance feedback (Burns et al. 2012). It may be useful to first build accuracy with the correct word(s) in isolation and then build accuracy in connected text.
Error correction strategies that can be implemented with small groups (word drill and positive practice [PP]) and error correction strategies that can be infused into various instructional formats (the DISSECT strategy and word sort) are presented next.
Word Drill When an error is made, several error correction procedures can be used, including (a) word supply (“That word is ____. What word?”), (b) word analysis (“Look at the word. What sound does ____ make? Okay, sound it out. Say it with me”), or (c) overcorrection (“That word is ___. Say it three times.”). The student
can also be asked to repeat the sentence with any of the aforementioned techniques
(sentence repeat; “Okay, now go back to the beginning of the sentence and reread
it.”). Word drill, which is described in Handout 6.14, combines several of the error
correction procedures and is relatively more effective than word supply or sentence
repeat alone (Jenkins and Larson 1979; Jenkins et al. 1983).
Positive Practice Positive Practice (PP), also referred to as overcorrection, is a
strategy in which the student performs the skill repeatedly in an attempt to “over-
learn” the skill (Singh and Singh 1986). In PP, the student is asked to repeat the cor-
rect word three to five times following an error. Following PP, the student rereads
the sentence containing the misread word for another repetition. Reinforcement in
the form of praise or incentives can be offered contingent on the student performing
the correct skill and following the PP procedures. Singh and Singh (1986) found
that PP plus praise was superior to a drill procedure in correcting reading errors with
four students receiving special education services. This strategy may be useful for
students who can read words accurately in isolation, but struggle with them in con-
nected text. Variations include correcting the student at the end of a sentence instead
of stopping the student immediately to supply the correct word (Meyer 1982; Singh
and Singh 1986; Singh 1990).
DISSECT Strategy The DISSECT strategy is helpful for older students who strug-
gle with more complex word analysis units and multisyllabic words. It is effective at
improving reading accuracy and comprehension (Lenz and Hughes 1990; Rathvon
2008). This strategy also is useful for vocabulary deficits. The student is taught a
problem-solving approach in which the acronym DISSECT represents each step
in the process (Rathvon 2008). The specific steps are presented as part of Handout
6.15.
Word Building Word building is useful for students who can decode the initial
sound or phoneme of words, but struggle with the rime or remaining phonemes.
Word building involves using a set of letter cards to teach students how to build
words and analyze the different words created by adding or replacing certain letters.
The strategy is described in Chapter 7 (see Handout 7.24), as it overlaps with early
literacy skills.
6.5.4 Teach: General Reading Instruction

When there is no clear error pattern to target, the teaching recommendation is to intensify general reading instruction. Consider the following questions:
• Can tier 3 time be added and linked to instruction at tier 1 and tier 2?
• Are the different levels of instruction the student receives coordinated to ensure
meaningful continuity?
• Do teachers working with the student follow a more explicit, prescriptive format
including explicit modeling, guided practice, independent practice, and immedi-
ate corrective feedback?
• Can the size of any small group be reduced?
• Does the instructional focus match student needs?
Examine Rate of OTR Increasing the number of opportunities to respond (OTRs) is a way to increase the intensity of instruction. Increasing OTRs does two things: first, it increases the engagement and attention of the student; second, it provides feedback to the teacher about how well the student is mastering skills and allows the teacher to provide corrective feedback to fix student mistakes.
It is recommended that the evaluator gather baseline OTR data, compare to a
standard, and set a goal to increase the OTRs. Appendix A offers templates to guide
measurement and recording of OTRs. OTR standards have been established by
Haydon et al. (2010). In whole-group instruction, OTRs should be approximately
4–6 per minute, and for small-group, direct instruction, OTRs should be approxi-
mately 8–12 per minute (see Chapter 10 for more information about OTR standards).
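Comparing baseline OTR data against the Haydon et al. (2010) standards can be sketched as follows; the function names are illustrative.

```python
# OTR standards from Haydon et al. (2010), expressed per minute of instruction.
OTR_STANDARDS = {"whole_group": (4, 6), "small_group": (8, 12)}

def otr_rate(otr_count, minutes):
    """Opportunities to respond per minute of observed instruction."""
    return otr_count / minutes

def meets_standard(otr_count, minutes, setting):
    """Check the observed OTR rate against the band for the setting."""
    low, high = OTR_STANDARDS[setting]
    return low <= otr_rate(otr_count, minutes) <= high

# Example: 45 OTRs observed during a 10-minute whole-group lesson.
rate = otr_rate(45, 10)                      # 4.5 per minute
ok = meets_standard(45, 10, "whole_group")   # within the 4-6 band
```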
Examine Group Size and Instructional Minutes The easiest ways to intensify instruction are to (a) decrease the group size and (b) increase the instructional time,
both in terms of the actual minutes for each instructional session (e.g., increasing
from 30 minutes to 45 minutes) and in frequency of instructional sessions (e.g.,
increasing from 2 days per week to 4 days per week). Guidelines for determining instructional minutes are described in Chapter 3 (see Table 3.3). Adding more time is
not a guaranteed solution. The instructional plan must be matched to student needs.
The difference between Teach: General Reading Instruction and Teach: Tar-
geted Instruction is the absence or presence of a clear error pattern to target. In the
absence of a clear error pattern, the focus is more balanced and general.
6.6 Plan Evaluation
Having identified an instructional focus and strategy, the plan evaluation phase fo-
cuses on measuring the effectiveness of the strategy. Measurement of fidelity and
measurement of student progress are two critical components of plan evaluation.
Reading CBM Perhaps the most effective way to measure general reading prog-
ress is to use reading CBM. Reading CBM is an indicator of general reading that is
sensitive to reading improvements over short periods of time, particularly when the
instructional focus is on decoding, accuracy, fluency, and/or rate (Hosp et al. 2006).
Goals are set and progress is measured using CBM on a frequent basis. Reading rate
is measured by counting the number of WRC per minute. Accuracy is measured by
calculating the percentage of WRC.
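The two CBM metrics just described can be computed from a single 1-minute probe; the function below is a minimal sketch with assumed variable names.

```python
def cbm_scores(words_read_correctly, errors, minutes=1):
    """Rate = WRC per minute; accuracy = percentage of attempted
    words read correctly, from a reading CBM probe."""
    attempted = words_read_correctly + errors
    rate = words_read_correctly / minutes
    accuracy = 100.0 * words_read_correctly / attempted
    return rate, accuracy

# Example: 92 WRC and 8 errors on a 1-minute probe.
rate, accuracy = cbm_scores(92, 8)  # 92.0 WRC per minute, 92.0 % accuracy
```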
Word Lists Accuracy and decoding progress also can be measured using word
lists. If a student struggles with a particular word type or word part (e.g., suffixes,
silent-e rule), word lists can be created and administered at regular intervals to
determine if the student is improving with that particular skill.
Integrated Data Data can be collected during instructional lessons to measure
progress with the plan. For example, a teacher can track errors made during a lesson
and graph the percentage of errors each day to determine if the student is making
progress. The teacher also may record the number of times the student requires an
error correction during a word drill, or the number of errors or the student’s rate
during partner reading.
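Graphing the daily error percentage can be sketched as follows; the lesson data are invented for illustration.

```python
def daily_error_pct(errors, words_attempted):
    """Percentage of errors made during a lesson; graphed over days,
    a downward trend suggests progress with the plan."""
    return 100.0 * errors / words_attempted

# One week of lesson data as (errors, words attempted), hypothetical.
week = [(12, 100), (10, 100), (9, 120), (6, 110), (5, 125)]
trend = [round(daily_error_pct(e, n), 1) for e, n in week]
# trend -> [12.0, 10.0, 7.5, 5.5, 4.0], a steady decrease in errors
```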
6.7 Expanding Your Knowledge and Fine-Tuning

This section describes considerations and ways to expand the use of the CBE Process. As evaluators build proficiency with the CBE Process, they may wish to tailor it to address deeper content. We describe things to consider and ways to expand the use of the CBE Process for Decoding.
Flexible Process Although the CBE Process is presented in a sequential manner,
educators may wish to “jump around” and use various steps of the process at dif-
ferent points in time. CBE is about exploring “hunches” or answering questions
with assessment and data gathering. Thus, we encourage following the data where
it leads. This process involves gathering data, analyzing it, and then gathering more
data in an attempt to answer questions that will lead to a solution. It is a lot of “back
and forth” and is not meant to be completed in one sitting.
Normative vs Benchmark Standards for Rate In conducting an SLA, evaluators assess backward through successively lower levels of reading material until the student can read at a rate that is at or above the Fall 25th percentile (based on national norms) with at least 95 %
accuracy. Meeting those two criteria for a given grade level places a student within the
instructional range for that grade level. However, educators may wish to use a bench-
mark criterion instead of a normative criterion. Although the benchmark will generate
a higher rate standard, it is a predictive standard. Recall from Chapter 4 that students who reach benchmark scores have the odds in their favor of meeting standards on a high-stakes assessment (Good and Kaminski 2011; McGlinchey and Hixson 2004).
Pinpointing Instructional Level The criterion for an instructional level is based
on the Fall 25th percentile national norm. The highest grade-level material in which a student performs at rate (at least the Fall 25th percentile) and with at least 95 % accuracy is considered the instructional level.
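Pinpointing the instructional level from SLA results can be sketched as a simple filter; the norms and student data below are hypothetical.

```python
def instructional_level(sla_results, norms, accuracy_criterion=95.0):
    """Return the highest grade level at which the student reads at or
    above the Fall 25th percentile rate with at least 95 % accuracy.
    `sla_results` maps grade -> (rate, accuracy); `norms` maps
    grade -> Fall 25th percentile rate for that grade."""
    qualifying = [
        grade for grade, (rate, acc) in sla_results.items()
        if rate >= norms[grade] and acc >= accuracy_criterion
    ]
    return max(qualifying) if qualifying else None

# Hypothetical SLA results as (rate, accuracy) by grade, with made-up norms.
results = {4: (70, 92.0), 3: (85, 96.0), 2: (95, 98.0)}
norms = {4: 80, 3: 75, 2: 60}
level = instructional_level(results, norms)  # grade 3 is the highest that qualifies
```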
Reward-Based Assessment for Step 2 A reward-based assessment can help deter-
mine if the student’s low performance on the SLA is due to a lack of skill or due to
a lack of motivation. This process is referred to as a “can’t do/won’t do” assessment
(VanDerHeyden and Witt 2008) and is included in Appendix 6A.
Consideration of Opportunity for Word Types and Errors for Step 4 It may be
helpful to know not only the errors made by the student, but also the opportunity
to read certain types of words. This would take more planning and analysis of a
passage, but there is benefit in knowing that the student made a certain percentage
of errors relative to exposure to a certain word type (Howell and Nolet 2000). For
example, there is a difference in the certainty of the problem for a student whose
errors with suffixes are 20 % when they have been exposed to 5 words with suf-
fixes versus exposure to 30 words with suffixes. The coding table in Handout 6.8
includes columns for both opportunity for a particular word type and the percentage
of errors relative to opportunity.
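The opportunity-adjusted error percentage can be sketched as follows, mirroring the 5- versus 30-opportunity example; the helper function is illustrative.

```python
def error_pct_by_opportunity(errors, opportunities):
    """Percentage of errors relative to exposure to a given word type."""
    if opportunities == 0:
        return None  # no exposure, so no conclusion about the word type
    return 100.0 * errors / opportunities

# The same 20 % error rate carries very different certainty about the problem:
few = error_pct_by_opportunity(1, 5)    # 20.0 % from only 5 suffix words
many = error_pct_by_opportunity(6, 30)  # 20.0 % from 30 suffix words
```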
An Alternative to Step 4 for Assessing Ability to Decode Errors As an alterna-
tive to the self-monitoring assessment, there are two options.
1. Reading in isolation. You may wish to pull the errors from the SLA, list the errors on a separate sheet of paper, and ask the student to read the words in isolation. Alternatively, highlight the words on the student’s copy of the passage and ask the student to read the highlighted words.
2. Reading within context. You may underline the sentence containing the error and ask the student to reread the word within the story. Combined with “reading in isolation,” this strategy can provide information about whether the student’s skill changes from reading the word in isolation to reading the word within context.
Adjust Error Categories as Needed for Error Analysis You may wish to add or
ignore certain error categories based on the student’s grade level, curriculum, and
instructional focus.
Further Verify Error Types There are three ways in which the examiner can fur-
ther verify the types of errors made. Once an initial pattern of errors is identified, it
may be worthwhile to further examine them.
1. List of word types. The evaluator may want to verify the student’s difficulty with a particular word type by providing the student with a list of at least 10 words of that type to read in isolation. If the student fails to decode at least 90 % of the words correctly, instruction should likely target that particular word type.
2. Conduct a brief experiment. To further determine if a student struggles with a
particular word type and to verify the results of Step 6, the evaluator can conduct
a brief experiment. Point out the error the student makes and provide a mini-les-
son on correcting the error. Follow the “model-lead-test” format and determine if
the student “responds” to the instruction. If the student responds positively (i.e.,
performance improves), this approach should be incorporated into the instruc-
tional plan.
3. Use diagnostic surveys. You may wish to administer diagnostic reading surveys, such as the Diagnostic Decoding Surveys offered by Really Great Reading (www.reallygreatreading.com). The goal is to determine if there are patterns to
the student’s errors, so use available tools and analysis of oral reading to deter-
mine errors.
6.8 Chapter Summary
This chapter outlined the CBE Process for Decoding and is structured around the
four phases of the process within which there are a series of steps and tasks. The
CBE Process for Decoding begins with an SLA with reading CBM, followed by
working through a series of tasks and questions that examine the accuracy and rate
of the student’s reading. These data combined with additional assessment activities
inform instructional recommendations. The plan is evaluated with progress moni-
toring and fidelity of implementation data.
Handout 6.1 Curriculum-Based Evaluation Process in Decoding Flowchart

Curriculum-Based Evaluation: Decoding

PROBLEM IDENTIFICATION
1. Ask: Is there a problem? → Do: Initial identification of problem
2. Ask: Does the problem warrant further investigation? → Do: Conduct Survey-Level Assessment (Oral Reading Fluency)

PROBLEM ANALYSIS
3. Ask: What is the student’s accuracy and rate at grade level? → Do: Examine rate and accuracy with grade-level material
  Rate +, Accuracy +: Do: Assess comprehension (see Chapter 8)
  Rate −, Accuracy +: Teach: Fluency
  Rate +, Accuracy −: 4. Ask: Can the student self-correct errors? → Do: Assess self-monitoring skills
    Yes: Teach: Self-Monitoring
    No: Do: Go to “Rate −, Accuracy −”
  Rate −, Accuracy −: 5. Ask: Does the student have acceptable rate above grade 1? → Do: Examine results of Survey-Level Assessment
    Yes: 6. Ask: Are there patterns to the student’s reading errors? → Do: Conduct Error Analysis
      Yes: Teach: Targeted Instruction
      No: Teach: General Reading Skills → 7. Ask: Are sight words a concern? → Do: Assess sight words
        Yes: Teach: Targeted Instruction
    No: Do: Assess Early Literacy Skills (see Chapter 7)

PLAN IMPLEMENTATION
Teach: Self-Monitoring | Teach: Fluency | Teach: Targeted Instruction | Teach: General Reading Instruction

PLAN EVALUATION
Monitor Effectiveness | Monitor Fidelity

Note: + = at criterion, − = below criterion
Interpretation Guidelines:
3. Ask: Does the issue warrant further consideration?
− If the student is performing at criterion for accuracy (≥ 95 %) and rate (≥ Fall
25th percentile), at expected grade level, then you are finished with Decoding
CBE and can examine reading comprehension (see Chapter 8).
− If the student is not performing at criterion for either accuracy or rate at grade
level, proceed to Problem Analysis and examine the student’s rate and accu-
racy to determine further steps.
Note: The reading CBM directions are reprinted with permission. Copyright 2012 by Pearson Education Inc.
c. If the answer to the question “Can the student self-correct at least 90 % of errors?” is yes, then the student likely has a self-monitoring issue. Suggest the teaching strategy “Teach: Self-Monitoring.”
d. If the answer to the question is “no,” then proceed to Step 6 in Handout 6.1.
Expected Instructional Level (grade level): ___
Obtained Instructional Level (meets rate and accuracy): ___
Obtained rate with grade-level material: ___
Expected rate with grade-level material: ___
Subtract obtained rate from expected rate = rate discrepancy: ___
Obtained accuracy with grade-level material: ___
Expected accuracy with grade-level material: 95 %
Subtract obtained accuracy from expected accuracy = accuracy discrepancy: ___
Note: See Handout 6.2 for formulas to calculate rate and accuracy.
First Read
Conditions: Read passage without any specific prompting.
Grade level of reading material: ___
Number of words read correctly: ___
Accuracy: ___
Errors made: ___
Errors corrected: ___
Percentage of errors corrected: ___

Intervention: Prompt
Conditions: Provided prompt to read accurately.
Grade level of reading material: ___
Number of words read correctly: ___
Accuracy: ___
Errors made: ___
Errors corrected: ___
Percentage of errors corrected: ___

Intervention: Pencil Tap
Conditions: Told to read and prompted to correct errors using a clicker or pencil tap.
Grade level of reading material: ___
Number of words read correctly: ___
Accuracy: ___
Errors made: ___
Errors corrected: ___
Percentage of errors corrected: ___

Formulas
For rate: number of words read correctly (WRC) per minute.
For accuracy: WRC / (WRC + errors) × 100.

Recording Change in Rate for Prompt:
Prompt WRC − First Read WRC = Difference
Difference / First Read WRC × 100 = Percentage of Change in WRC

Recording Change in Accuracy for Prompt:
Prompt Accuracy − First Read Accuracy = Difference

Recording Change in Percentage of Errors Corrected for Prompt:
Prompt % of Corrected Errors − First Read % of Corrected Errors = Difference

Recording Change in Accuracy for Self-Monitoring:
Self-Monitoring Accuracy − First Read Accuracy = Difference

Recording Change in Percentage of Errors Corrected for Self-Monitoring:
Self-Monitoring % of Corrected Errors − First Read % of Corrected Errors = Difference
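A minimal sketch of the change calculations used when comparing the first read to a prompt or self-monitoring condition (difference in WRC and percentage of change); the function names are assumptions.

```python
def change_in_wrc(first_read_wrc, prompt_wrc):
    """Difference and percentage of change in words read correctly (WRC)
    between the first read and the prompt condition."""
    difference = prompt_wrc - first_read_wrc
    pct_change = 100.0 * difference / first_read_wrc
    return difference, pct_change

def change_in_accuracy(first_read_accuracy, condition_accuracy):
    """Simple difference in accuracy (prompt or self-monitoring
    condition minus the first read)."""
    return condition_accuracy - first_read_accuracy

# Example: 50 WRC on the first read, 60 WRC with prompting.
diff, pct = change_in_wrc(50, 60)  # a gain of 10 WRC, a 20.0 % change
```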
Handout 6.7 Error Analysis Coding Sheets

Table 6.7.1 Meaning violation coding sheet
Use this sheet to record and analyze meaning violation errors. Place a tally or check mark in the corresponding column. Mark if the error was self-corrected. Calculate totals and percentages and then transfer the results to the tally table in Handout 6.8.
(Recording grid ending with Totals and Percentages rows.)

Table 6.7.2 General reading errors coding sheet
Use this sheet to record and analyze general reading errors. Place a tally or check mark in the corresponding column. Calculate totals and percentages and then transfer the results to the tally table in Handout 6.8.
(Recording grid with columns for Actual Word, Read Word, Decoding, Insertion, Omission, Hesitation, Repetition, Punctuation, and Self-Corrected, plus Totals and Percentages rows. Example entries: “master” read as “Master,” “cat” read as “ca,” “home” read as “homes,” and “end. The” read as “endThe.”)
Table 6.7.3 Decoding errors coding sheet
Use this sheet to record and analyze decoding errors. Place a tally or check mark in the corresponding column. Calculate totals and percentages and then transfer the results to the tally table in Handout 6.8.
(Recording grid ending with Totals and Percentages rows.)

Tally table fields:
Total Number of Errors to be Analyzed: ___
Frequency of Error: ___
Frequency Self-Corrected: ___
Percentage of Errors Made: ___
Percentage of Errors Corrected: ___
Note: Consider all of the errors made by the student and determine whether or not they violate meaning.
Decoding Errors:
Errors are Real Words: “can’t” for “cat”; “the” for “a”
Errors are Not Real Words: “hant” for “have”
Errors are Self-Corrected: (self-corrects an error within 3 seconds)

Insertions:
Contextually Appropriate: “We still are going…”
Contextually Inappropriate: “We and went to have…”

Fluency Errors:
Omissions: “I (went) away…” (went omitted)
Hesitations (3 seconds): “We… (3 seconds) (exclaimed)…”
Repetitions (3 times): “I went to, I went to, I went to…”
Punctuation (not pausing at punctuation): “…the end. Then we…” (no pause at period)
Self-Corrects: (self-corrects an omission)

TOTAL ERRORS: ___

Qualitative: Occurred?
Pauses at end of lines of text: Yes / No
Poor prosody or intonation (lack of expression): Yes / No
Chunking phrases (vs. reading word by word): Yes / No
Other: ___
Other: ___
Words:
Short Vowel Sounds: ‘a’ in apple
Long Vowel Sounds: ‘ee’ in jeep
Silent ‘e’ sound/CVe: bite, mope, tape
High-Frequency/Sight: do, make, yes, it
Compound Words: into, football
Contractions: haven’t, can’t
Silent Letters: knit, know
Polysyllabic Words: cucumber, tomorrow
Double Consonant Words: butter, written

Units:
Misses Initial Sound, Prefixes: pre, be, post, sub
Misses Rime (initial sound only): “hit” for “help,” “was” for “were”
Misses Final Sound, Suffixes: able, ing, s; “works” as “work”
R-Controlled Vowels: er, ir, ar
Vowel/Consonant Blends: al, il, el
Vowel Teams/Combos: ai, ay, ee
Consonant Combinations: sh, kn, ph, th, wh

TOTAL: ___
Note: Opp = opportunity. Error Count is a synonym for frequency or occurrence of the error.
Table 6.9.3 Percentage of words read correctly by grade level
Grade Level: Percentage Read Correctly
Pre-Primer: ___
Primer: ___
First Grade: ___
Second Grade: ___
Third Grade: ___
• Use daily goals, which can be set based on normative or benchmark standards (see DIBELS Next benchmarks).
• The “How did I read?” rubric can be used to provide detailed feedback (see Table 6.11.1).
Evidence base: Lo et al. 2011; Musti-Rao et al. 2009
Table 6.11.1 How Did I Read? (Adapted from: Therrien et al. 2012)
Level 4 □ I read most of the story in long, meaningful phrases.
□ I repeated or missed only a few words.
□ I emphasized important phrases or words.
□ I read with expression.
Level 3 □ I read most of the story in three- or four-word phrases.
□ I repeated or missed only a few words.
□ I emphasized important words or phrases.
□ I read some of the story with expression.
Level 2 □ I read mostly in two-word phrases.
□ I repeated or missed a lot of words.
□ I did not emphasize important words or phrases.
□ I did not read with expression.
Level 1 □ I read most of the story word-by-word.
□ I repeated or missed a lot of words.
□ I did not emphasize important words or phrases.
□ I did not read with expression.
Implementation
1. During social studies or science lessons, review the strategy when introducing new vocabulary. Select students to demonstrate the strategy on several words.
2. Provide time for students to apply the strategy during class assignments. If
desired, divide class into pairs and have them work together to apply the strat-
egy to a section of the text or reading materials while you circulate to provide
assistance.
DISSECT
D—Discover the context. Skip the difficult word, read to the end of the sentence,
and use the meaning of the sentence to make your best guess about a word that fits
in place of the unfamiliar word. If the guessed word does not match the difficult
word, proceed to the next step.
I—Isolate the prefix. Using the list of prefixes, look at the beginning of the word to
see if the first several letters form a prefix that you can pronounce. If so, box it off
by drawing a line between the prefix and the rest of the word.
S—Separate the suffix. Using the list of suffixes, look at the end of the word to see
if the last several letters form a suffix that you can pronounce. If so, box it off by
drawing a line between the suffix and the rest of the word.
S—Say the stem. If you recognize the stem (the part of the word that is left after
the prefix and suffix have been boxed off), pronounce the prefix, stem, and suffix
together. If you cannot recognize them, proceed to next step.
E—Examine the stem. Using the Rules of Twos and Threes, dissect the stem into
easy-to-pronounce word parts.
C—Check with someone. If you still cannot pronounce the word, ask someone for help. If someone is not available, go to the next step.
T—Try the dictionary. Look up the word in the dictionary, use the pronunciation guide to pronounce the word, and read the definition if you do not know the meaning of the word.
Rules of Twos and Threes
Rule 1
• If a stem or any part of a stem begins with a vowel, separate the first two letters
from the rest of the stem and pronounce them.
• If the stem or any part of the stem begins with a consonant, separate the first
three letters from the rest of the stem and pronounce them.
• Once you have separated the first two or three letters from the stem, apply the
same rules until you reach the end of the stem (example: al/ter/na/tor)
• Pronounce the stem by saying the dissected parts. If you can read the stem, add
the prefix and suffix and reread the entire word. If you cannot use Rule 1, use
Rule 2.
Rule 2
• Isolate the first letter of the stem and try to apply Rule 1 again. Rule 2 is espe-
cially useful when the stem begins with two or three consonants.
Rule 3
• If two different vowels appear together in a word, pronounce both of the vowel
sounds. If that does not sound right, pronounce one vowel sound at a time until
it sounds right. Rule 3 can be used with either Rule 1 or Rule 2.
Evidence Base: Lenz and Hughes 1990; Rathvon 2008
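The Rules of Twos and Threes are essentially a chunking procedure, and Rule 1 can be expressed as a short routine. The sketch below (in Python, purely illustrative and not part of the DISSECT materials) applies the two-or-three-letter chunking mechanically. Note that students apply the rules flexibly, adjusting chunks until the word sounds right, so a strict mechanical split may differ from the printed example (al/ter/na/tor).

```python
# Mechanical sketch of Rule 1: take two letters when a chunk starts with
# a vowel, three letters when it starts with a consonant, and repeat to
# the end of the stem. Illustrative only; students adjust the chunks
# "until it sounds right" rather than applying the rule rigidly.

VOWELS = set("aeiou")

def dissect_stem(stem):
    """Split a stem into easy-to-pronounce parts using Rule 1."""
    parts = []
    i = 0
    while i < len(stem):
        size = 2 if stem[i] in VOWELS else 3  # vowel start: 2; consonant start: 3
        parts.append(stem[i:i + size])
        i += size
    return parts
```

Applied strictly to "alternator," this routine yields al/ter/nat/or, which a student would then adjust by ear toward al/ter/na/tor.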
Appendix 6A
                    Without Reward                        With Reward
           Passage 1  Passage 2  Passage 3  Median                Criterion  Met?
Level 8    rate                                                   123
           accuracy                                               95 %
Level 7    rate                                                   119
           accuracy                                               95 %
Level 6    rate                                                   116
           accuracy                                               95 %
Level 5    rate                                                   94
           accuracy                                               95 %
Level 4    rate                                                   84
           accuracy                                               95 %
Level 3    rate                                                   59
           accuracy                                               95 %
Level 2    rate                                                   35
           accuracy                                               95 %
Level 1    rate                                                   19
           accuracy                                               95 %
Note: Transfer the results of the Survey-Level Assessment to the “Without Reward” column
Chapter 7
CBE Early Literacy
This chapter describes the process for CBE early literacy skills. The chapter is struc-
tured around the four phases of the CBE Process and will walk the reader through
the entire process for early literacy skills. The chapter discusses specific assessment
techniques and intervention recommendations based on the results.
Fig. 7.1 Ask-Do-Teach cycle for steps within the CBE Process
The CBE Early Literacy Process moves through four phases, within which are a
series of steps that involve three types of tasks:
1. Ask: These are questions that signify a starting point for a step. Assessment
results are collected and interpreted in order to answer the question.
2. Do: These are assessment activities conducted with the student.
3. Teach: These are instructional recommendations based on the results of the CBE
Process.
Evaluators start with a question (Ask), which then requires an action or activity
(Do). Following a certain number of Asks and Dos, the evaluator arrives at an
instructional focus (Teach), which indicates specific instructional strategies (see
Fig. 7.1). The entire CBE Process for Early Literacy Skills is presented in Handout
7.1, which is designed to be a quick summary of the early literacy CBE Process.
Table 7.1 also outlines the CBE Process for Early Literacy Skills in a linear form.
All of the handouts used for the CBE Process for Early Literacy Skills are included
at the end of the chapter and the entire list is displayed in Table 7.2.
The first step of the CBE Process is to identify whether or not a problem in Ear-
ly Literacy Skills exists. A problem with Early Literacy Skills can be initially
identified through several means, including a review of records, an
interview with the student or teacher(s), or during the CBE Process for Decod-
ing (i.e., the student is not currently reading fluently and/or accurately with first
grade reading material).
Table 7.1 Steps of CBE Process for Early Literacy Skills

Phase                    Ask                    Do                       Teach
Problem identification   Is there a problem?    Initial identification
7.4 Problem Identification
Table 7.2 List of Handouts for CBE Process for Early Literacy Skills
Handout Title
Instructions and Process Sheets
7.1 Curriculum-Based Evaluation in Early Literacy Skills Flowchart
7.2 Letter Naming Fluency (LNF) Instructions
7.3 Letter Sound Fluency (LSF) Instructions
7.4 Phoneme Segmentation Fluency (PSF) Instructions
7.5 Nonsense Word Fluency (NWF) Instructions
7.6 Phonemic Awareness Skills Assessment Instructions
7.7 Print Concepts Assessment Instructions
7.8 Alphabetic Knowledge (Letter Naming) Assessment Instructions
7.9 Letter-Sound Correspondence Assessment Instructions
7.10 Letter Blends Assessment Instructions
7.11 Letter-Sound Correspondence: Introduction of Sounds
Tally and Assessment Sheets
7.12 Survey-Level Assessment Results for Early Literacy Skills
7.13 Phonemic Awareness Assessment Tally Sheet
7.14 Print Concepts Assessment Tally Sheet
7.15 Letter Naming and Letter Sound Assessment
7.16 Letter Blends Assessment Tally Sheet
7.17 Additional Phonics Assessment Tally Sheet
Strategy Sheets
7.18 Teach: Phonemic Awareness: Sound Box Activity
7.19 Teach: Guided Teaching of Print Concepts
7.20 Teach: Letter Identification with Letter-Sound Correspondence: Multisensory
Teaching of Letter Names
7.21 Teach: Letter-Sound Correspondence: Guided Instruction of Letter Sounds
7.22 Teach: Letter-Sound Correspondence: Visual Support
7.23 Teach: Letter Blends: Word Boxes
7.24 Teach: Letter Blends: Word Building
After initially identifying a concern with early literacy skills, the next step is to
verify the extent of the problem by conducting a Survey-Level Assessment (SLA)
using early literacy probes that will assess phonemic awareness, alphabetic knowl-
edge, and the alphabetic principle. Early literacy probes are available from various
sources, such as dibels.org and aimsweb.com.
Benchmark standards For the SLA in decoding, the fall 25th percentile and an
accuracy criterion of 95 % were used to identify the student’s instructional level. For
the SLA in early literacy, the goal is to identify those skills that are at a reasonable
level of mastery or that may require additional instruction/intervention beyond the
core for development. Consequently, a criterion that predicts the likelihood that a
student will require additional instruction is used. The DIBELS Next benchmark
standards are used to identify a reasonable level of proficiency. In situations where
the benchmark is not available, such as with the Letter Sound Fluency (LSF), the
40th percentile is used since it is comparable to the other benchmark standards
(Good & Kaminski, 2011).
SLA Directions
1. Begin by administering four 1-minute Curriculum-Based Measurements for
Early Literacy: Letter Naming Fluency (LNF), Letter Sound Fluency (LSF),
Phoneme Segmentation Fluency (PSF), and Nonsense Word Fluency (NWF).
Directions for administration are located in Handouts 7.2 to 7.5.
a. Materials needed:
i. You will need “student copies” for the student to read for LNF, LSF, and
NWF.
ii. You will need “educator copies” for LNF, LSF, PSF, and NWF on which
to record the student’s responses. Additionally, a practice item/sheet is
needed for NWF (see Handout 7.5).
b. Administer one probe each for LNF, LSF, PSF, and NWF. Then compare
the student’s score to the criterion listed for the expected grade level. If the
student does not meet criterion for their expected grade level, compare the
score to the next lower grade-level criterion until the criterion is met. This is
slightly different from the SLA for CBE Decoding, since it is not necessary to
administer a new probe for a lower grade level.
2. Record the student’s scores on Handout 7.12. Complete the bottom portion of
Handout 7.12 to determine the severity of the problem.
3. Ask: Does it warrant further investigation?
− If the student is at criterion for his or her grade level on each of the Early Lit-
eracy probes, then the CBE Process for early literacy is complete and Decod-
ing CBE can be examined or reexamined (see Chapter 6).
− If the student is below criterion on any of the early literacy probes, then pro-
ceed to Problem Analysis for that particular skill. Low scores on PSF indicate
a need for assessment of phonemic awareness skills (Step 3 of CBE Process),
low scores on LNF indicate a need for assessment of print concepts and
alphabetic knowledge (Step 4 of CBE Process), and low scores on NWF
and/or LSF indicate a need for assessment of the alphabetic principle (Step 5
of CBE Process). As an evaluator, you will evaluate each of
these areas in Problem Analysis if the SLA indicates a score below criterion.
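The SLA decision logic above — compare each score to the expected-grade criterion, step down one grade at a time until the criterion is met, and route any below-criterion measure to its Problem Analysis step — can be sketched in code. This is an illustrative sketch, not part of the CBE materials; the criterion values shown are placeholders, and actual benchmark standards should come from DIBELS Next (or the 40th percentile, as described above).

```python
# Illustrative sketch of the SLA decision logic described above.
# Criterion values are placeholders, NOT actual DIBELS Next benchmarks.

CRITERIA = {
    # grade -> {measure: criterion score}; grade 0 is kindergarten
    1: {"LNF": 37, "LSF": 27, "PSF": 40, "NWF": 27},
    0: {"LNF": 8,  "LSF": 5,  "PSF": 10, "NWF": 5},
}

def instructional_level(measure, score, expected_grade):
    """Compare the score to the expected-grade criterion and, if not met,
    step down one grade at a time until the criterion is met.
    Returns the grade at which the criterion was met, or None."""
    for grade in range(expected_grade, -1, -1):
        if score >= CRITERIA[grade][measure]:
            return grade
    return None

# Below-criterion measures route to these Problem Analysis steps.
ANALYSIS_STEP = {"PSF": 3, "LNF": 4, "LSF": 5, "NWF": 5}

def flag_for_analysis(scores, expected_grade):
    """Return the Problem Analysis steps warranted by below-criterion scores."""
    steps = set()
    for measure, score in scores.items():
        if score < CRITERIA[expected_grade][measure]:
            steps.add(ANALYSIS_STEP[measure])
    return sorted(steps)
```

For example, a first grader scoring below criterion on PSF and LSF but at criterion on LNF and NWF would be flagged for Steps 3 and 5 only.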
Things to Consider
• As with decoding, it is helpful and efficient to record all types of errors
while completing the early literacy probes. It may be necessary to examine
errors later in the CBE Process, so recording this information now will
make the process more efficient.
Fig. 7.2 Phase 2 of CBE Early Literacy Process depicting individual steps
• You may also wish to administer three Early Literacy probes and use the
median score. This step is not necessary, but may lead to a more reliable
score. Any time there is a concern about the administration of an Early
Literacy probe, use a new probe and readminister.
In the Problem Analysis phase of CBE Process in Early Literacy, the evaluator
examines each of the scores from the SLA to determine what skills warrant further
investigation. If the student scored low on PSF, assess phonemic awareness skills
(Step 3 of CBE Early Literacy Process). If the student scored low on LNF, assess
alphabetic knowledge and print concepts (Step 4 of CBE Early Literacy Process). If
the student scored low on LSF or NWF, assess alphabetic knowledge (this step be-
gins with Step 5 and may include Steps 6 and 7 of the CBE Early Literacy Process).
The evaluator will assess each area that the SLA results indicate warrants further
investigation. Figure 7.2 provides a visual depiction of phase 2 of the CBE Early
Literacy Process.
Step 3 of the CBE Early Literacy Process involves having the student complete at
least ten items for each skill to answer the previous questions. Evaluators may cre-
ate their own items to use for each question, or they may use the assessment activity
in Handout 7.6. Directions for Step 3 are provided in Handout 7.6.
1. Assess each phonemic awareness skill using the directions in Handouts 7.6 and
7.13 to record the scores. The skills to be assessed include: blend word parts,
segment word parts, rhyme words, blend syllables, segment syllables, delete
onset, delete rime, blend phonemes, and segment phonemes.
2. After administering the phonemic awareness assessments, determine if an error
pattern is evident or if the student has a general lack of phonemic awareness
skills. Ask, Is there a pattern to errors made?
a. To determine if there are gaps in skills or a general lack of skills, the evalu-
ator looks for patterns in the phonemic awareness skills. If the student has
mastered most of the skills that were assessed but struggles with one or two
skills, then those missing skills can be targeted for instruction. The evaluator
can recommend “Teach: Phonemic Awareness: Sound Boxes” (Handout 7.18)
and “Teach: Targeted Instruction” as described in Chapter 6.
b. If the evaluator finds that the student has an overall lack of phonemic aware-
ness skills, then the evaluator can recommend general instruction using some
of the strategies described in the “Teach: Phonemic Awareness” section of this
chapter and the general reading instruction strategies described in Chapter 6
(see the “Teach: General Reading Instruction”). Additionally, Fig. 7.3 illus-
trates the continuum of skills from simple to complex. The evaluator can sug-
gest where on the continuum of phonemic awareness skills instruction should
focus.
c. After completion of Step 3, proceed to Step 4 and/or Step 5, depending on the
results of the SLA.
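The decision rule in Step 3 — target one or two missing skills, or recommend general instruction for a broad deficit — can be sketched as follows. This is an illustrative sketch only; the 80 % mastery threshold and the two-skill cutoff are hypothetical values chosen for the example, not prescribed by the CBE materials.

```python
# Hypothetical sketch of the targeted-vs-general decision for Step 3.
# The 80 % mastery threshold and the "one or two missing skills" cutoff
# are illustrative assumptions, not prescribed values.

MASTERY = 0.80  # proportion correct on the (at least ten) items per skill

def phonemic_awareness_focus(results):
    """results: {skill_name: proportion_correct}. Return the recommended
    instructional focus based on how many skills fall below mastery."""
    missing = [skill for skill, p in results.items() if p < MASTERY]
    if not missing:
        return ("mastered", [])
    if len(missing) <= 2:
        # Clear pattern: target the specific missing skills (Handout 7.18,
        # plus Teach: Targeted Instruction from Chapter 6).
        return ("targeted", missing)
    # General lack of skill: general phonemic awareness instruction.
    return ("general", missing)
```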
For Steps 5 and 6, evaluators will examine the student’s letter-sound correspon-
dence for individual letters and letter blends. Letter sound fluency is one of the two
high-priority skills necessary to acquire the alphabetic principle (Hosp and Mac-
Connell 2008). First, the evaluator will assess the student’s knowledge of individual
sounds and then will assess letter blends. Letter Sound Fluency probes and Non-
sense Word Fluency probes sample letter sounds. The evaluator will further analyze
and verify the student’s knowledge of letter sound correspondence and letter blends.
1. Use the assessment described in Handout 7.9 with all lowercase letters. Ask the
student to produce the most common sound for each letter.
2. Record the responses on the summary/tally sheet on the second page of Handout
7.15 (note that the same sheet is used for the letter name assessment).
3. Ask: Has the student mastered individual letter sounds?
a. If the student has mastered all of the letter sounds, then move to assessing
letter blends (Step 6).
b. If the student has not mastered all of the letter sounds, identify the missing
letter sounds and target for instruction. Recommend the strategy outlined in
Handout 7.20 (with a focus on letter sound) or one of the strategies outlined
in Handout 7.21 and Handout 7.22.
c. Further examine the errors and see if an error pattern emerges. With early
literacy skills, a pattern may not be as evident as with Decoding CBE because
there are only a few categories of letters. The errors are examined for missing
vowel sounds, consonants, or confusion of visually or auditorily similar
letters.
i. If a pattern is clear, teach those specific letter sounds with targeted instruc-
tion (as described previously).
ii. If no pattern emerges, recommend a more general, balanced approach
to instruction as described in Chapter 6 (see “Teach: General Reading
Instruction”). The strategies described in Handouts 7.20 to 7.22 can still be
used, but the focus will be on more letters than if a pattern was identified.
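Sorting the errors into the categories named above — missing vowel sounds, missing consonants, or confusion of similar letters — can be sketched as a simple tally. The sketch below is illustrative; the confusion pairs listed are common examples only, and an evaluator would substitute the letters the student actually confuses.

```python
# Illustrative error-pattern check for individual letter sounds.
# The visually similar pairs are example assumptions, not a fixed list.

VOWELS = set("aeiou")
VISUALLY_SIMILAR = [{"b", "d"}, {"p", "q"}, {"m", "n"}, {"u", "v"}]

def categorize_letter_errors(missed):
    """missed: letters the student could not produce a sound for.
    Returns the errors grouped by category to help spot a pattern."""
    missed = set(missed)
    return {
        "vowels": sorted(missed & VOWELS),
        "consonants": sorted(missed - VOWELS),
        "similar_pairs": [sorted(pair) for pair in VISUALLY_SIMILAR
                          if pair <= missed],  # both letters of a pair missed
    }
```

A student missing b, d, and e, for instance, would show one vowel error, two consonant errors, and a b/d confusion pair worth targeting.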
Once the evaluator has determined that individual letter sounds have been mastered,
it is necessary to determine if the student is able to blend those sounds into words.
The starting point is to examine performance on the NWF task. If the student scores
below the criterion for NWF, it may be necessary to assess further the student’s abil-
ity to blend sounds connected to print.
1. Assess the student’s ability to blend vowel–consonant (VC) words and conso-
nant–vowel–consonant (CVC) words. Use Handout 7.10 for directions.
2. Assess the student’s ability to blend letter sounds at the beginning of the word
and blend letter sounds at the end of the word. Use a dry-erase board or a piece
of paper with a pencil.
3. Record the results on Handout 7.16.
4. Ask: Is there an error pattern evident with letter blends or the student’s ability to
blend letter sounds?
a. Determine whether or not an error pattern is evident. The student may make
errors with specific letter sounds or with blending certain types of words (i.e.,
VC words, CVC words, or letter blends at the beginning or end of the word).
b. If a pattern exists, teach the missing skills. Follow guidelines described in
Chapter 6 (see Teach: Targeted Instruction) and recommend one of the strategies
outlined in Handouts 7.23 (i.e., Word Boxes) or Handout 7.24 (i.e., Word
Building).
c. If no pattern exists, consider general instruction related to blending. Consult
the “Teach: General Reading Instruction” outlined in Chapter 6 and recom-
mend one of the strategies in Handouts 7.23 and 7.24.
Things to Consider
• Assessment of letter blends overlaps with the error analysis described in
Chapter 6. The evaluator may wish to follow the Error Analysis section
from Chapter 6 using primer or appropriate-level passages.
• It is also an option to examine previous work, the SLA assessment, and
other assessments to identify errors or missing letter sounds and letter
blends. Evaluators can create assessments or use a commercially available
diagnostic survey.
• If simple letter blends appear to be mastered, further assessment can be
conducted to determine if instruction is needed for diphthongs, digraphs,
and/or r-controlled syllables. Handout 7.17 provides an example of such
an assessment.
Table 7.4 List of resources for instructional strategies for early literacy skills
Resource Location/Publisher
Phonemic Awareness in Young Children Jager Adams et al. (1998). Brookes Publishing
Effective School Interventions Rathvon (2008). Guilford Press
Intervention Central Sight Word Generator http://www.interventioncentral.org/tools/
wordlist-fluency-generator
Words Their Way: Word Study for Phonics, Bear et al. (2007). Prentice Hall
Vocabulary, and Spelling Instruction (4th ed)
Reading Rockets Readingrockets.org
CORE Literacy Library www.corelearn.com
During Plan Implementation, (a) an intervention that is matched to the student's
skill deficits is selected, designed, and implemented, (b) a goal is set, and (c) ways
to measure fidelity and progress are determined.
Five general instructional foci (labeled as “Teach”) are described for use with
students depending on the results of the CBE Process. A list of all the strategies
described in this chapter is provided in Table 7.3, with each strategy detailed in
Handouts 7.18–7.24. There are numerous intervention strategies to support early
literacy needs, and listing all of them is beyond the scope of this book. The key to
selecting a strategy is to use problem analysis results to guide the selection, and
use formative assessment to ensure it results in student benefit. This section will
describe the overall instructional focus for a given result of the CBE Process and
share a few evidence-based strategies in the Handouts. Educators are encouraged
to explore other resources to locate additional strategies. A list of resources that
provide instructional strategies is presented in Table 7.4.
Targeted vs. General Reading Instruction Much of the CBE Process is an
attempt to determine whether specific error patterns are present or whether there is
a general skill problem. The most helpful recommendations may be lists of the types of errors
made as opposed to a specific instructional strategy. When an error pattern is identi-
fied, such as with phonemic awareness skills or letter-sound correspondence, then
the teaching strategy is to use targeted instruction to correct those identified errors.
The reader is referred to the “Teach: Targeted Instruction” section in Chapter 6. If
no clear pattern emerges with error analysis or if the student is struggling with the
skill overall, then it is recommended to use “Teach: General Reading Instruction”
as described in Chapter 6. When the need for general instruction is identified, the
student needs overall improvement of the skill and a general, balanced approach to
reading instruction is recommended.
Sound boxes Sound boxes, also known as Elkonin boxes (Elkonin 1973), are used
to teach students how to segment the sounds of spoken words in sequence and to
help students understand the positions of sounds in spoken and written words
(McCarthy 2008). Picture cards or word cards with boxes represent the separate sounds
within the words. The instructor models the different phonemes within the word
for the student by slowly articulating the word and sliding chips into the box for
each phoneme within the word. For example, the word “sheep” has three phonemes
(/sh/ /ee/ /p/). The teacher would say the word, elongating each phoneme, and slide
a chip into one of three boxes for each phoneme. The student is then given the
opportunity to do the same.
The technique may be used to teach students how to identify beginning, middle,
or ending sounds by asking students to slide the chip into the box where they hear the
specific sound. This technique can be used with: CV, CVC, consonant–vowel–con-
sonant–silent vowel (CVCV), consonant–vowel–vowel–consonant (CVVC), con-
sonant–consonant–vowel–consonant (CCVC), and multisyllabic words. Maslanka
and Joseph (2002) compared the use of sound boxes to the use of sound sorts with a
group of preschoolers. Although both methods led to increases in phonemic aware-
ness skills, students who received instruction with sound boxes scored better on
measures of segmenting phonemes and isolating medial sounds. The strategy is
described in detail in Handout 7.18.
Guided teaching of print concepts For teaching print concepts, evaluators iden-
tify the exact missing skills and target those for instruction. The teacher directly
teaches and models the areas the student has yet to master within whole-group ins-
truction, small-group instruction, and/or one-to-one instruction. Two strategies are
suggested. First, a direct instruction approach is described in Handout 6.16 (Teach:
General Reading Instruction). Second is the Shared Book Reading strategy, described
by Lovelace and Stewart (2007). Lovelace and Stewart describe an intervention
with preschool students who received a 10-minute print concepts lesson as part
of their 30-minute speech and language IEP session. The teacher sat next to the
individual student and read a story aloud. While reading, the teacher would point
out a total of 20 print concepts. Each participant at least doubled their mastery of
print concepts (as measured by a concept of print assessment; see Handouts 7.7 and
7.14). The intervention is described in more detail in Handout 7.19.
teaching sounds that are easy to articulate. Refer to Handout 7.11 for their recom-
mended introduction of letter-sound correspondences. Using the data that you have
obtained through the CBE Process and comparing that to Handout 7.11 will help
determine which letter sounds to target first.
Multisensory teaching of letter names Lafferty and colleagues (2005) describe a
direct instruction, multisensory approach used to teach four preschoolers (two with
typical language development and two with delayed language development) letters
that they had not mastered. Following a baseline assessment, a pool of five letters
was identified for instruction. Students were taught in small-group instruction for
30 minutes, three times per week. Students were directly taught a letter using a
model-production-feedback sequence. Then a multisensory strategy was used in
which the students used shaving cream or Play-Doh to “write” the letter, followed
by using paper and pencil to write the letter. Production of the letter involved both
letter name and letter sound. Each student showed large gains in accuracy of letter
name and letter sound identification, with results favoring recognition over produc-
tion. More detail on this method is provided in Handout 7.20.
Many of the strategies described for teaching letter-sound correspondence also can
be used to teach letter blends. One strategy for teaching letter-sound correspon-
dence is presented within this section, but evaluators also should read the Letter
Blend section to see other possible strategies to recommend.
Guided instruction of letter sounds This strategy involves the student learning
to identify letter names and letter sounds, both upper- and lowercase (Vaughn and
Linan-Thompson 2004). Explicit, guided instruction is used. The teacher begins by
introducing one vowel and three or four consonants. Letters may be added as stu-
dents master them. The instructor models the task by showing the student the first
letter and saying, “This is a,” then asks, “What letter is this?” The student repeats the
letter. Showing each letter, the teacher asks the student, “What letter is this?” Once
the letters are mastered, match the sounds to the letters, and repeat the same process
by saying, “This is a. a says /a/. What is the sound of a?” Repeat with each letter.
This strategy is described in Handout 7.21.
Visual support In addition to increasing engagement with letter sounds, using
the guided instruction of letter sounds strategy described previously (see Hand-
out 7.21), teachers may also consider using visuals to support the acquisition of
the letters. Animals paired to letters can provide the visuals for students to learn
the letters and letter sounds. For instance, an elephant would represent e, or a dog
would represent d (http://www.readingrockets.org/strategies/alphabet_matching).
Additionally, props or images that represent a functional depiction of the letter can
be created to augment instruction (Dilorenzo et al. 2011). For example, an image of
a snake in the shape of an “s” can be used for the letter s. Refer to Handout 7.22 for
further directions.
Fig. 7.4 Example word card with letter boxes for the word “clap” (CL–A–P)
Word boxes Word boxes are similar to sound boxes (described earlier) and assist
the student with one-to-one sequential correspondence between letters and sounds
in words (Clay 1993). The teacher uses word cards with boxes below each letter or
grouping of letters (blends) that represent the individual sounds within the word.
The student is presented the word card (see Fig. 7.4) and the instructor models for
the student by slowly articulating the word and sliding the letter (or combination of
letters) into the box when a sound in the word is pronounced.
The student then articulates the word slowly and slides the letters (or letter combinations)
into the box when pronouncing a new sound in a word. This technique also can be
used to work with the student in identifying beginning, middle, and ending sounds
in words. The instructor presents a word and says, “Where do you hear the /cl/ in
clap?” The student then slides the letters into the box or position where they hear
the sound in the word. Word boxes are effective in improving students' decoding
and word-reading ability (Joseph 2000). See Handout 7.23 for more detail on this
strategy.
Word building Word building involves using a set of letter cards to teach students
how to build words and analyze the different words created by adding or replacing
certain letters. Each student is given a set of letter cards that correspond to the letter-
sound units for that particular lesson. The teacher presents a word on the board, stu-
dents pronounce it, and then build it with their cards. For example, the teacher may
write “cat” on the board, pronounce it, and then have students build it with their
cards. Next, students are taught to insert or delete certain letters to create new words
(e.g., cat to cap). When changing letters, particular attention is given to the position-
ing of the letter being changed, and after each new word is formed, students respond
chorally. If students cannot pronounce the word correctly, they are encouraged to
sound it out by looking at the letter sounds instead of supplying the word for them.
Variations on this method include a peer-tutoring and sentence-reading component.
Rathvon (2008) offers more detail on implementing this strategy and indicates it may
be best for students who have completed grade 1 but struggle with basic letter sounds.
Also, this strategy is particularly helpful for students who get the initial sound or letter
in a word, but do not fully decode the word (McCandliss et al. 2003). McCandliss and
colleagues found that students between ages 7 and 10 improved their decoding and
reading comprehension skills following 20 intervention sessions using Word Building
compared to a control group. See Handout 7.24 for instructions on this strategy.
Having identified an instructional focus and strategy, the question in the Plan Evalu-
ation phase is about the effectiveness of the intervention plan on student’s early
literacy skills. This section describes ways to measure early literacy skills. The spe-
cific measurement tool will vary depending on which early literacy skill is being
targeted. As mentioned in Chapter 6, educators may wish to create measures to as-
sess the specific skill being targeted, or they may wish to analyze data collected as
part of daily instruction. A few options are summarized here.
Phonemic awareness Phoneme Segmentation Fluency (PSF) can be used to measure
progress related to phonemic awareness skills (Good and Kaminski 2011).
Print concepts Handouts 7.7 and 7.14 can be used to assess the components of
print concepts during instruction.
Alphabetic knowledge Curriculum-Based Measurement can be used to measure
progress of alphabetic knowledge. LNF probes are used to measure a student’s
progress in acquiring letter names.
Alphabetic principle Two Curriculum-Based Measures are used for measuring
a student’s progress with the alphabetic principle: LSF (single letter sounds) and
NWF (incorporates blending letter sounds).
Integrated data As previously mentioned in Chapter 6, data also can be collected as
part of the instruction. For example, the teacher could document the percentage of cor-
rect sounds a student produces during sound manipulation activities. The data obtained
during instruction can enhance formally administered individual assessments.
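A simple way to use such integrated data is to compute percent correct per session and check recent sessions against the goal. The sketch below is illustrative; the 90 % goal and the three-session window are hypothetical values an educator would replace with the goal and decision rule set during Plan Implementation.

```python
# Minimal sketch of tracking percent-correct data collected during
# instruction. The 90 % goal and 3-session window are hypothetical
# assumptions, not values prescribed by the CBE Process.
import statistics

def percent_correct(correct, attempted):
    """Percentage of correct sounds produced during an activity."""
    return round(100 * correct / attempted, 1)

def on_track(session_scores, goal=90.0, window=3):
    """True if the median of the most recent `window` sessions meets the goal."""
    recent = session_scores[-window:]
    return statistics.median(recent) >= goal

# e.g., four sessions of 20 sound-manipulation items each
sessions = [percent_correct(c, a) for c, a in [(14, 20), (17, 20), (18, 20), (19, 20)]]
```

Using the median of recent sessions, rather than a single score, guards against over-reacting to one unusually good or bad day.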
This section describes considerations and ways to expand the use of the CBE Pro-
cess. As evaluators build proficiency and use the CBE Process, they may wish to
tailor it to address deeper content. Things to consider and ways to expand the depth
and use of the CBE Process for Early Literacy are described.
Analysis of NWF When analyzing the results of the NWF, the evaluator can deter-
mine where the breakdown is with decoding. For example, the student may con-
sistently be unable to identify the first letter sound or the student may be able to
decode, but struggle with recoding or blending.
Overlap with error analysis Error analysis may be used to identify what types
of letter blends a student is missing. The evaluator may wish to consult Chapter 6
(Step 6).
7.9 Chapter Summary
This chapter outlined the CBE Process for Early Literacy Skills. The CBE Process
for Early Literacy Skills begins with an SLA with CBM early literacy probes fol-
lowed by working through a series of questions and tasks that examine the student’s
skills in phonemic awareness, alphabetic knowledge, and the alphabetic principle.
Instructional recommendations are determined based on the results.
Handout 7.1 Curriculum-Based Evaluation Process in Early Literacy Flowchart

Curriculum-Based Evaluation: Early Literacy

PROBLEM IDENTIFICATION
1. Ask: Is there a problem in Early Literacy Skills? Do: Initial identification of problem
2. Ask: Does the problem warrant further investigation? Do: Conduct Survey-Level Assessment (PSF, LNF, LSF, NWF)

PROBLEM ANALYSIS
Phonemic Awareness
3. Ask: If below criterion on PSF, is there an error pattern evident? Do: Assess phonemic awareness skills
   Yes: Teach: Phonemic Awareness with Targeted Instruction
   No: Teach: Phonemic Awareness with General Instruction
Print Concepts & Alphabetic Knowledge
4. Ask: If below criterion on LNF, does student have print concepts and letter names mastered? Do: Assess print concepts and letter names
   Yes: Go to Step 5
   No: Teach: Print Concepts and/or Teach: Letter Identification with Letter-Sound Correspondence
Alphabetic Principle
5. Ask: If below criterion on LSF and/or NWF, has the student mastered individual letter sounds? Do: Assess letter-sound correspondence
   Yes: Go to Step 6
   No: Teach: Letter-Sound Correspondence (Consider Step 6)
6. Ask: Is there a pattern to errors made with letter blends? Do: Assess letter blends
   Yes: Teach: Letter Blends with Targeted Instruction
   No: Teach: Letter Blends with General Instruction
7. Ask: Are sight words a concern? Do: Assess sight words (see Chapter 6)

PLAN IMPLEMENTATION
Teach: Phonemic Awareness; Teach: Print Concepts; Teach: Letter Identification with Letter-Sound Correspondence; Teach: Letter-Sound Correspondence; Teach: Letter Blends

PLAN EVALUATION
Monitor Effectiveness; Monitor Fidelity
Things to Consider
• If a student makes errors without self-corrections on ten consecutive letters, dis-
continue the probe. Give credit for any letters correct before the discontinue rule
was met.
Note: The LNF directions are reprinted by permission (Pearson, 2012b). Copyright
2012 by Pearson Education Inc.
7. Segmenting syllables. Next, ask the student to segment syllables. Say to the
student, “Say the first sound in ‘hit.’”
a. If the student says /h/, say “Yes, the first sound in hit is /h/.”
b. If the student is incorrect, say “The first sound in hit is /h/. Say the first
sound in ‘hit.’”
8. Next say, “Now listen to this word: invite. There are two parts to the word
invite. Say the two parts.”
a. If the student says /in/ /vite/, say “Yes, the two parts are /in/ /vite/.”
b. If the student is incorrect, say “No, the two parts are /in/ /vite/. Say the two
parts of the word invite.”
c. Proceed to the actual items. Record the student’s actual response on Hand-
out 7.13.
9. Next, assess the extent to which the student can delete onset sounds (initial con-
sonant sound of a word) and delete rimes (the vowel and the rest of the syllable
that follows).
10. Delete onset. Say to the student, “Say the word ‘seat.’ Now say ‘seat’ without
the /s/.”
a. If the student responds correctly, say “Yes, seat without the /s/ is eat.”
b. If the student responds incorrectly, say “No, seat without the /s/ is ‘eat.’ Say
‘seat’ without the /s/.”
c. Proceed to the actual items. Record the student’s actual response on Hand-
out 7.13.
11. Deleting rime. Say to the student, “Say ‘dust.’ Now say dust without the /
ust/.”
a. If the student responds correctly, say “Yes, dust without the /ust/ is /d/.”
b. If the student responds incorrectly, say “No, dust without the /ust/ is /d/.
Say dust without the /ust/.”
c. Proceed to the actual items. Record the student’s actual response on Hand-
out 7.13.
12. Next, assess the extent to which the student can blend and segment phonemes.
13. Blend phonemes. Say to the student, “I’m going to say some sounds. You put
the sounds together to make a word. /i/…/t/. Say the word.”
a. If the student responds correctly, say “Yes, /i/ and /t/ make it.”
b. If the student responds incorrectly, say, “/i/ and /t/ make it. Listen. /i/…/t/.
Say the word.”
c. Next, give a three-letter example. Say, "Listen to these sounds. /p/…/i/…/g/. Say the word."
d. If the student responds correctly, say “Yes, /p/ /i/ /g/ is pig.”
e. If the student responds incorrectly, say, “/p/ /i/ /g/ is pig. Listen. /p/…/i/…
/g/. Say the word.”
f. Proceed to the actual items. Record the student’s actual response on Handout
7.13.
14. Segment phonemes. Say to the student, "I'm going to say a word. I want you to tell me the sounds in the word. Tell me the sounds in 'top.'"
a. If the student responds correctly (/t/ /o/ /p/), say "Yes, the sounds in top are /t/ /o/ /p/."
b. If the student responds incorrectly, say "No, the sounds in top are /t/ /o/ /p/. Tell me the sounds in top."
c. Proceed to the actual items. Record the student's response on Handout 7.13.
Interpretation Guidelines:
15. After administering the phonemic awareness assessment, the next step is to
determine if there is a pattern to the student’s errors or if there is a general lack
of skill in phonemic awareness. Ask: Is an error pattern evident in phonemic
awareness skills?
a. Identify whether there are one or two skill deficits or an overall general deficit with phonemic awareness.
b. If a few skills emerge as difficult for the student, then those are targeted for
instruction. The evaluator can recommend some of the activities described
in the “Teach: Phonemic Awareness” section in this chapter. The evaluator
can also consider the “Teach: Targeted Instruction” activities described in
Chapter 6.
c. If no clear pattern emerges, then recommend some of the activities described
in the “Teach: Phonemic Awareness” section in this chapter along with the
“Teach: General Instruction” activities described in Chapter 6.
d. Following completion of Step 3, proceed to Step 4 and/or Step 5, depending
on the results of the SLA.
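The interpretation rule above can be sketched in a few lines: one or two weak skills point to targeted instruction, while a broader pattern points to general instruction. This is an illustrative sketch only; the skill names and the 0.80 mastery cutoff are assumptions for the example, not criteria taken from Handout 7.13.

```python
# Illustrative sketch of the Step 3 interpretation rule. The 0.80 mastery
# cutoff and the skill names are assumed for the example, not drawn from
# Handout 7.13.

MASTERY = 0.80  # assumed proportion-correct cutoff per skill

def pa_recommendation(skill_scores):
    """Map phonemic awareness scores to an instructional recommendation."""
    deficits = [skill for skill, score in skill_scores.items() if score < MASTERY]
    if not deficits:
        return ("no concern", deficits)
    if len(deficits) <= 2:
        # A few skills emerge as difficult: target those directly.
        return ("targeted instruction", deficits)
    # No clear pattern: recommend general phonemic awareness instruction.
    return ("general instruction", deficits)

focus, skills = pa_recommendation({
    "blend phonemes": 0.90,
    "segment phonemes": 0.50,
    "delete onset": 0.85,
    "delete rime": 0.90,
})
print(focus, skills)  # targeted instruction ['segment phonemes']
```

In practice the evaluator makes this judgment from the recorded responses on Handout 7.13 rather than from a fixed numeric cutoff.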
[Footnotes from the early literacy benchmark criteria table: d Based on the fall 40th percentile from 2012-2013 AIMSweb normative data. e Based on the winter 40th percentile from 2012-2013 AIMSweb normative data, as LSF is not administered until winter. f Score is correct letter sounds and does not indicate whole words read.]
PSF
Expected level:
Obtained level:
Subtract obtained level from expected level = gap:
LNF
Expected level:
Obtained level:
Subtract obtained level from expected level = gap:
LSF
Expected level:
Obtained level:
Subtract obtained level from expected level = gap:
NWF
Expected level:
Obtained level:
Subtract obtained level from expected level = gap:
Things to Consider:
• It may be helpful to identify the expected criterion for the time of year (e.g., fall,
winter, and spring) in order to get a more accurate skill level.
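The gap calculation on the form above is simple subtraction, repeated for each measure. The sketch below illustrates it; the benchmark numbers are placeholders, not actual AIMSweb or DIBELS criteria.

```python
# A minimal sketch of the gap analysis: for each measure (PSF, LNF, LSF,
# NWF), gap = expected level - obtained level. The numbers below are
# placeholders for illustration, not actual benchmark criteria.

def gap(expected, obtained):
    """Subtract obtained level from expected level."""
    return expected - obtained

# Hypothetical student: (expected, obtained) per measure
scores = {"PSF": (35, 20), "LNF": (40, 28), "LSF": (25, 25), "NWF": (28, 15)}

for measure, (expected, obtained) in scores.items():
    print(f"{measure} gap: {gap(expected, obtained)}")
```

A larger positive gap indicates a more severe problem on that measure; a gap of zero or less indicates the student meets the expected level.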
B D q E t i P N m I L h
H w z V j O p y d S l R
U a X M Y F k u K A b e
Z x C Q s n T o r J G f
W c g v
r c e d s u g p i n z w
m l o h a b t f v j k q
x y
Note: In the Sample Letter Names, the lowercase "l" and uppercase "I" are visually similar, so prompt the student to provide the other answer when the second of the two letters is assessed.
Blends at the Beginning of Words and Blends at the End of Words

Practice Item: "I'm going to write each letter that represents the sound I say. Watch me."
  "/s/." (Write s on the board.)
  "/t/." (Write t on the board.)
  "/e/." (Write e on the board.)
  "/p/." (Write p on the board.)
  "Now what word is /s/ /t/ /e/ /p/?" (Correct response: "The word is step.")

Prompt: "I will say more sounds and I want you to write the letter that represents each sound. Then tell me the word." (Say the sound for each letter and have the student write each letter. Then have the student blend the letters.)

Items (correct response in parentheses; mark each response as accurate or not):

Blends at the beginning of words:
1. /s/ /t/ /o/ /p/ (stop)
2. /f/ /l/ /a/ /p/ (flap)
3. /s/ /n/ /a/ /p/ (snap)
4. /t/ /r/ /i/ /p/ (trip)
5. /s/ /k/ /i/ /n/ (skin)
TOTAL ____/5

Blends at the end of words:
6. /m/ /a/ /s/ /t/ (mast)
7. /j/ /u/ /m/ /p/ (jump)
8. /h/ /a/ /n/ /d/ (hand)
9. /b/ /e/ /n/ /d/ (bend)
10. /b/ /u/ /n/ /k/ (bunk)
TOTAL ____/5
2. Provide students with a chip or token for each box. Have them place the chip
above the box.
3. Model the procedure for the students. Slowly articulate the word and slide a chip
or token into the box when a phoneme or sound in the word is pronounced.
4. Have the student articulate the word slowly and slide the chip or token into the
box when a new sound is pronounced.
Considerations and Modifications:
• This technique can be used to identify beginning, middle, and ending sounds. For example, say, "Where do you hear the /a/ sound in cat?" The student slides the chip or token into the box or position where he or she hears the sound.
• An image of the word can be provided above the sound box.
• This technique can be used with: CV, CVC, CVCV, CVVC, CCVC, and multi-
syllabic words.
• The activity can be extended by having the student write the word after it is
created.
Evidence Base: Maslanka and Joseph (2002)
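The sound-box procedure above can be pictured as one box per phoneme, with a token slid into a box as each sound is pronounced. The sketch below simply renders that layout as text; it is illustrative, and the phoneme list is supplied by the teacher rather than derived from the word's spelling.

```python
# Illustrative rendering of the sound-box (Elkonin box) activity: one box
# per phoneme, with a token (*) slid into a box as each sound is said.
# The teacher supplies the phoneme list; it is not derived from spelling.

def sound_boxes(phonemes, filled=0):
    """Draw one box per phoneme; the first `filled` boxes hold a token (*)."""
    cells = []
    for i, p in enumerate(phonemes):
        cells.append(f"*{p}*" if i < filled else f" {p} ")
    return "|" + "|".join(cells) + "|"

# "cat" has three phonemes; two sounds pronounced so far:
print(sound_boxes(["c", "a", "t"], filled=2))  # |*c*|*a*| t |
```

The number of boxes matches the number of phonemes (not letters), which is why "ship" would get three boxes, not four.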
4. Have the student articulate the word slowly and slide the letter (or letters) into the box while pronouncing a new sound in the word.
Considerations and Modifications:
• This technique can be used with: consonant-vowel (CV), CVC, CVCV, conso-
nant-vowel-vowel-consonant (CVVC), consonant-consonant-vowel-consonant
(CCVC), and multisyllabic words.
• Instead of using letter cards initially, the instructor may use a token for each
sound. The student slides a token into the box for each sound and then replaces
each token with a letter card. This can provide further repetition and identifica-
tion of specific phonemes and letter blends.
• The activity can be extended by having the student write the word after pro-
nouncing and/or writing a sentence using the word.
• This technique can also be used to work with the student in identifying begin-
ning, middle, and ending sounds in words.
− The teacher presents a word and says, “Where do you hear the /cl/ in clap?”
− The student then slides the letters into the box or position where they hear the
sound in the word.
Evidence Base: Joseph (2000)
8 CBE Reading Comprehension
This chapter describes the process for CBE Reading Comprehension. The chapter
is structured around the four phases of the CBE Process and will walk the reader
through the entire process for Reading Comprehension. The chapter discusses spe-
cific assessment techniques and intervention based on the results.
Reading comprehension is the ability to derive meaning from text and is the ultimate goal of reading (NICHHD 2000). It is a skill that rests on four factors: (a) decoding, (b) vocabulary, (c) meta-cognitive skills, or the ability to monitor one's own comprehension while reading, and (d) background knowledge of content (Klinger 2004; NICHHD 2000; Perfetti and Adlof 2012) (see Fig. 8.1). The CBE Process for Reading Comprehension involves assessing each of these domains in a systematic manner.
For the CBE Process for Reading Comprehension, the evaluator will work
through the aforementioned factors and determine which one is contributing to the
student’s lack of comprehension. As with Chapters 6 and 7, the CBE Reading Com-
prehension Process moves through 4 phases, within which are a series of steps that
involve three types of tasks (see Fig. 8.2):
1. Ask: These are questions that signify a starting point for a step. Assessments
results are collected and interpreted in order to answer the question.
2. Do: These are assessment activities conducted with the student.
3. Teach: These are instructional recommendations based on the results of the CBE
Process.
The entire process is outlined in Handout 8.1, and Table 8.1 presents the steps in a
linear form. All of the handouts used for the CBE Process for Reading Comprehen-
sion are included at the end of the chapter, and the entire list is displayed in Table 8.2.
Fig. 8.2 Ask-Do-Teach cycle for steps within the CBE Process

Table 8.1 Steps of CBE Process for Reading Comprehension

Problem Identification
  Ask: Is there a problem? Does it warrant further investigation?
  Do: Initial identification; Survey-Level Assessment

Problem Analysis
  Ask: Does the student have sufficient accuracy and rate at grade level with ORF and MAZE? Do: Examine rate and accuracy as described in Chapter 6
  Ask: Is the student missing critical vocabulary? Do: Examine vocabulary of content and passages
  Ask: Is the student monitoring comprehension? Do: Examine meta-cognitive skills
  Ask: Does the student have sufficient background knowledge for comprehension? Do: Examine the impact on retell after orienting the student to the content of the text before reading

Plan Implementation
  Ask: Is the instructional focus decoding? Teach: Follow recommendations in Chapter 6
  Ask: Is the instructional focus vocabulary? Teach: Vocabulary instruction
  Ask: Is the instructional focus meta-cognitive skills? Teach: Meta-cognitive skills across the before/during/after reading framework
  Ask: Is the instructional focus teaching content? Teach: Preview and discuss content before reading

Plan Evaluation
  Ask: Is the student progressing toward his or her goal? Do: Monitor fidelity and student progress

Table 8.2 List of Handouts for CBE Process for Reading Comprehension

Instructions and Process Sheets
8.1 Curriculum-based evaluation in reading comprehension flowchart
8.2 Survey-level assessment in MAZE instructions
8.3 MAZE practice instructions
8.4 Vocabulary list instructions
8.5 Comprehension interview instructions
8.6 Retell instructions
8.7 Background Knowledge Discussion instructions

Tally and Assessment Sheets
8.8 Survey-level assessment results for MAZE
8.9 Vocabulary list recording sheet
8.10 Comprehension interview recording sheet
8.11 Retell rubric and questions

Strategy Sheets
8.12 Teach: Peer tutoring in vocabulary
8.13 Teach: Before reading: Previewing and developing questions
8.14 Teach: During reading: Click or clunk
8.15 Teach: During reading: Paragraph shrinking
8.16 Teach: After reading: Summarizing and question-generating
8.17 Teach: After reading: Partner retell
8.18 Teach: Background knowledge: Connections to self, word, text

Additional Forms
8.19 Story map template
8.20 Directions for Vocabulary-matching
8.21 Vocabulary-matching list template and example

8.3 Problem Identification

After initially identifying a reading concern, the next action is to verify the problem and determine whether its severity warrants further investigation. The question is answered by conducting a Survey-Level Assessment (SLA) using both oral reading fluency (ORF) probes and reading MAZE probes. ORF and MAZE are both used to ensure a comprehensive picture of the student's reading development.

SLA is conducted to determine a student's instructional reading level. To be considered within the instructional range for a given grade level with ORF, students should read at or above the fall 25th percentile based on national norms with at least 95% accuracy (Hosp et al. 2006). For the reading MAZE, students should score at or above the fall 25th percentile based on national norms with at least 80% accuracy to be considered proficient. Scores between 60 and 80% accuracy are questionable, and scores below 60% are likely within the frustrational range (Howell and Nolet 2000). After completing an SLA, evaluators conduct a gap analysis and quantify the problem.
SLA directions:
1. Begin by administering three 1-minute CBM ORF probes at the expected grade level and use the median WRC/errors as the score. (Directions for the SLA are included in Handout 6.2 in Chapter 6, and the formulas for calculating rate and accuracy are "A" and "B" in Fig. 6.3, respectively.)
a. You will need student copies of the passages and “evaluator copies” for
recording responses.
b. Record the student’s scores on Handout 6.5. If the student score is below
criteria, then administer reading passages from previous grade-levels until
criteria are met.
c. Complete the bottom portion of Handout 6.5 to determine the severity of the
problem.
2. Now conduct an SLA with reading MAZE. Administer one 3-minute CBM reading MAZE probe at the expected grade level. Directions for the SLA are included in Handout 8.2, and the formulas for calculating scores are displayed in Fig. 8.3. A practice example and directions for MAZE are provided in Handout 8.3.
3. Record the student’s scores on Handout 8.8. If the student scores below grade-level,
then administer reading passages from previous grade-levels until criteria are met.
a. Complete the bottom portion of Handout 8.8 to determine the severity of the
problem.
4. Ask: Does it warrant further investigation?
a. If the student meets criteria for both rate and accuracy at grade-level on both
ORF and MAZE, normative data comparisons would not indicate reading
comprehension is a deficit for the student. However, the evaluator may wish
to further assess reading comprehension skills.
b. If the student is below grade-level criterion for either ORF or MAZE in
comparison to normative data or benchmark standards, proceed to Problem
Analysis.
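The arithmetic behind the SLA decisions above can be sketched as follows. The 95% ORF accuracy criterion and the 80%/60% MAZE bands come from this section; the probe scores in the example are made up for illustration.

```python
import statistics

# A sketch of the SLA arithmetic. The 95% ORF criterion and the 80%/60%
# MAZE bands are from the text; the probe scores below are made up.

def orf_accuracy(wrc, errors):
    """Accuracy = WRC / (WRC + errors); compare against the 95% criterion."""
    return wrc / (wrc + errors)

def maze_range(accuracy):
    """>= 80% proficient, 60-80% questionable, < 60% frustrational
    (Howell and Nolet 2000)."""
    if accuracy >= 0.80:
        return "proficient"
    if accuracy >= 0.60:
        return "questionable"
    return "frustrational"

# Step 1: median WRC of three 1-minute ORF probes.
wrc = statistics.median([42, 55, 48])        # 48
print(orf_accuracy(wrc, errors=4) >= 0.95)   # False -> below ORF criterion

# Step 4: proceed to Problem Analysis if either measure is below criterion.
orf_meets, maze_meets = False, maze_range(0.72) == "proficient"
print(not (orf_meets and maze_meets))        # True -> proceed
```

Note that rate (WRC per minute) is compared against the normative percentile separately; accuracy alone does not establish the instructional range.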
Things to consider
• As mentioned in Chapter 6, some evaluators prefer to administer the easier
grade-level material first before progressing to higher grade-level material
to manage frustration with the task.
• It may be helpful to administer the ORF and MAZE probes in different
sittings to avoid burn-out or response fatigue.
8.4 Problem Analysis

In the Problem Analysis phase of the CBE Reading Comprehension Process, the first step is to examine the student's decoding abilities (i.e., rate and accuracy). Because automatic and fluent reading sets the stage for reading comprehension (Therrien et al. 2012), decoding is examined first to determine whether it explains why the student struggles with reading comprehension. If the student's reading rate and accuracy meet criteria at grade level, then the evaluator proceeds to examine the student's vocabulary and meta-cognitive skills. The Problem Analysis phase of the CBE Reading Comprehension Process begins with Step 3.
The first step in determining why a student may have a reading comprehension
deficit is to identify the student’s level of decoding. Decoding is necessary for com-
prehension because students must be able to read the text before they can draw
meaning from it (Howell 2008). It is the ability to decode that allows access to text,
which in turn sets the stage for reading comprehension (Carnine et al. 2009; Howell
2008; NICHHD 2000).
1. Ask: “Does the student have sufficient rate and accuracy at grade-level with
ORF and MAZE?”
a. After administering the SLA, examine the student's rate and accuracy with grade-level material. If the student does not meet the rate and accuracy criteria, the evaluator is referred to Step 3 of the CBE Process in Decoding (see Handout 6.1). The steps for analyzing the student's accuracy and rate are described in Chapter 6.
b. If the rate and accuracy meet criteria, then proceed to Step 4.
The goal of this step is to determine if the student’s lack of vocabulary knowledge
is contributing to comprehension difficulties. There are two types of vocabulary to
consider: (a) academic vocabulary and (b) content-specific vocabulary. Definitions
are provided next.
Academic vocabulary These are words that are used across subjects to understand
and organize information. They are words critical to understanding the concepts
taught in any topic. Examples of academic vocabulary words include: analyze,
characteristic, distinguish, emphasis, hypothesize, sequence, transition, and utilize.
Resources for academic vocabulary are listed in Table 8.3.
Content-specific vocabulary These are words that are specific to a particular topic or content area. For example, the words "pregnancy" and "infectious disease" are important for understanding the subject of health, but not as important for mathematics (in which words like "integer" and "exponent" are important). Resources
for content-specific vocabulary are listed in Table 8.3.
To conduct Step 4:
1. Analyze errors from work samples, review of records, and errors made on the
Survey-Level Assessment (both ORF and MAZE) to determine if student is
missing academic or content-specific vocabulary words.
Table 8.3 Resources for vocabulary word banks and creating vocabulary lists
Resource Location/Publisher
Academic vocabulary resources
Academic vocabulary http://www2.elc.polyu.edu.hk/cill/eap/wordlists.htm
English companion http://www.englishcompanion.com/pdfDocs/
acvocabulary2.pdf
Academic vocabulary in use http://assets.cambridge.org/97805216/89397/
frontmatter/9780521689397_frontmatter.pdf
Academic word list. Coxhead http://www.uefap.com/vocab/select/awl.htm
http://www.victoria.ac.nz/lals/resources/
academicwordlist/
Content-specific vocabulary resources
Building academic vocabulary: teacher's manual    Marzano and Pickering 2005. ASCD
Marzano Research Laboratory    marzanoresearch.com
Things to consider
• Vocabulary concerns and decoding errors can be examined together to
improve focus of instruction. Instruction may center around teaching spe-
cific vocabulary words or include teaching specific word parts. If a student
shows deficits with word parts, the DISSECT strategy may be useful (see
Handout 6.15 in Chapter 6).
At this point, you have ruled out or addressed decoding and lack of vocabulary as
reasons for the student’s difficulty with comprehension. The focus of this step is
on the extent to which the student is aware of his or her own reading comprehen-
sion. Is the student “thinking about thinking” and aware of the level of compre-
hension while reading? Successful readers read text with a purpose, are actively
pursuing meaning from the text, adjust reading rate in response to text complex-
ity, seek clarification, and use various strategies to monitor and obtain meaning
(e.g., take notes, highlight, summarize, etc.). In addition, successful readers will
preview the material, set a goal for reading, and then monitor the success toward
that goal (Carnine et al. 2009; Howell 2008). Table 8.4 summarizes six meta-cognitive strategies that readers use to gain meaning from reading. Step 5 of the CBE Reading Comprehension Process is organized around the six meta-cognitive strategies listed in Table 8.4.
There are two methods used to conduct Step 5. They are (a) conducting an inter-
view with the student while he or she reads, and (b) examining the student’s ability
to retell what he or she has read. Both methods are described next.
Conduct an interview while the student reads a selected passage. For this interview,
the student will be asked to read a passage and to “think aloud” as he or she reads.
The evaluator observes the student for evidence of the reading comprehension strategies listed in Handouts 8.5 and 8.10, and after the student reads the passage, the
evaluator follows up by asking questions to clarify the student’s use of reading strat-
egies. The evaluator also prompts thinking aloud by asking questions as the student
reads. The Interview used for this step is provided in Handout 8.10. The interview
was developed by considering a “before/during/after” orientation to reading and
by considering the meta-cognitive strategies listed in Table 8.4 (cf. Howell 2008;
Howell and Nolet 2000). The instructions for Step 5 are listed in Handout 8.5, and
the Comprehension Interview is provided in Handout 8.10:
1. Identify a passage from which the student reads with sufficient accuracy and rate.
2. Explain to the student that you want him or her to “think aloud” as he or she
reads and that you will ask questions during the reading.
a. Say to the student, “I want you to read this text/passage aloud. I want you to
“think aloud” as you read because I want to understand how you read and
how you make sense of what you read. I will ask you questions to help me
understand how you read. First, let me ask how you prepare yourself to read.
What do you do before you read this passage/text?”
b. Proceed to ask the student questions or encourage the student to explain what
he or she does before reading a passage or text. Encourage the student to
speak and elaborate by saying “Tell me more.” or “Explain that more fully.”
c. Mark the skills observed in the Comprehension Interview under the section
“Before Reading” on Handout 8.10.
3. Next, have the student read and observe the student for the skills listed in the
“During Reading” section of the Comprehension Interview. Mark the student’s
skills on that section in Handout 8.10.
a. Say, “Okay, now begin reading and talk aloud while you read. Pretend I am
a student and you are the teacher. How can I make sure I understand what is
being read?” Having the student read aloud makes the skills that the student
uses more observable and measureable.
b. Ask questions as the student reads to clarify each skill. If the student requires
prompting or assistance to identify the skill, then consider the skill to be par-
tially observed.
4. When the student finishes, ask him or her to explain what he or she does after
finishing a passage or text.
a. Say, “Now that you are finished, what do you do after you read to make sure
you understand what was read?”
b. Ask questions to clarify the skills listed in the “After Reading” section of the
Comprehension Interview on Handout 8.10.
5. Look over the Comprehension Interview and ask any clarifying questions to
ensure you have assessed each skill listed. After the student finishes reading the
passage, ask any follow-up questions to determine if he or she is using the skills
within the interview.
a. Mark the appropriate column to indicate if the skill was observed, partially
observed, or not observed in the Comprehension Interview (Handout 8.10).
6. Proceed to conduct the Retell before interpreting results.
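Each skill on the Comprehension Interview ends up marked observed, partially observed, or not observed, and a quick tally of those marks can summarize the results. The sketch below is illustrative; the skill names are hypothetical stand-ins, not the actual items on Handout 8.10.

```python
# A hypothetical tally of Comprehension Interview ratings. The skill names
# are illustrative stand-ins; the actual items appear on Handout 8.10.
from collections import Counter

def tally_ratings(ratings):
    """Count how many skills fall under each rating."""
    return Counter(ratings.values())

ratings = {
    "previews the text": "observed",
    "sets a purpose for reading": "partially observed",
    "adjusts reading rate": "not observed",
    "summarizes while reading": "not observed",
}
print(tally_ratings(ratings))
```

A cluster of "not observed" marks within one section (before, during, or after reading) points to the area to target during Plan Implementation.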
After conducting the interview, ask the student to read a different passage and have
the student retell and summarize what was read. This activity helps determine if
the student is monitoring the meaning of the text, and if he or she understands the
structure of the text. The retell also can be used to answer some of the items on
the Comprehension Interview. Directions for conducting the Retell are provided in
Handout 8.6, and an example of scoring rubrics for both narrative and expository
texts are provided in Handout 8.11:
1. Gather approximately 250-word passages or reading texts that are both exposi-
tory and narrative, depending on the student’s grade level and the content areas
being assessed. Using the student’s textbook or curriculum to locate passages
may be the most straight forward option to identify text for this assessment.
a. Consider using a recording device so that you can replay the retell and accu-
rately interpret the student’s response.
2. Say to the student, “I want you to read this passage to yourself. I will then have
you tell me about what you read.”
3. Have the student read untimed. After the student finishes the passage, ask the
student to summarize what was read.
a. Say, “Please tell me about what you read.”
b. After the student provides a response, score the response using the rubric pro-
vided in Handout 8.11. You may also wish to create your own rubric, based
on your standards and curriculum.
After both the Interview and the Retell, determine if the student is monitoring his or
her comprehension while reading.
1. Ask: “Is the student monitoring his or her comprehension?”
a. If yes, then reconsider the problem (Problem Identification) and/or examine
background knowledge (see the “Expanding Knowledge” section).
b. If no, examine the results of the Interview and Retell to select a strategy to
use.
i. If the student scored low on the Comprehension Interview, use strategies
to re-teach the area of deficit.
1. For “Before Reading”, teach the student to develop questions and pre-
view the text (see Handout 8.13).
2. For “During Reading”, teach the student to use strategies to monitor
meaning (see Handouts 8.14 and 8.15).
3. For “After Reading”, teach the student to summarize and answer ques-
tions about the reading (see Handout 8.16).
ii. If the student scored low on Retell, teach partner retell (see Handout 8.17)
and/or story mapping (see the “Story Mapping” section later in the Chap-
ter). The instruction will be adjusted depending on whether the student
was able to provide accurate Retell with prompting or without.
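The decision rules above amount to a small mapping: low Interview sections point to the before/during/after strategies, and a low Retell points to partner retell and/or story mapping. The sketch below is a minimal illustration of that mapping; the handout numbers are those cited in the text.

```python
# A sketch of the Step 5 decision rules: low Interview sections map to the
# before/during/after strategies, and a low Retell maps to partner retell
# and/or story mapping. Handout numbers are those cited in the text.

STRATEGIES = {
    "before reading": "develop questions and preview the text (Handout 8.13)",
    "during reading": "use strategies to monitor meaning (Handouts 8.14, 8.15)",
    "after reading": "summarize and answer questions (Handout 8.16)",
}

def select_strategies(low_interview_sections, retell_low):
    """Return instructional recommendations from Interview/Retell results."""
    recs = [STRATEGIES[section] for section in low_interview_sections]
    if retell_low:
        recs.append("partner retell (Handout 8.17) and/or story mapping")
    return recs

print(select_strategies(["during reading"], retell_low=True))
```

As the text notes, whether the student succeeded with or without prompting further adjusts the recommendation toward fluency building or acquisition.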
Things to Consider
• Prompting during retell. You may want to determine if the student is able
to retell the text with assistance, as this can tailor instruction according
to the Instructional Hierarchy (e.g., more fluency-based than acquisition-
based). If the student needed assistance to perform a skill, then fluency
may be the focus of instruction. If the student could not perform the skill
at all, then acquisition of the skill is likely the focus.
1. After the Retell assessment, consider asking questions to prompt the
student’s retell and to determine if the student is identifying relevant
information from the passage. The specific questions asked are based
on the content and whether the text is expository or fictional. Examples
of questions to consider are provided in Handout 8.11.
2. Ask: With prompting/assistance, can student monitor and construct
meaning?
a. If yes, teach the student to monitor meaning and construct meaning
independently. The focus is on building fluency with this skill and
such strategies listed in Handouts 8.16 and 8.17 may be appropriate
if they are adjusted to target fluency instead of acquisition of the skill.
b. If no, then the student needs to be taught explicitly to monitor mean-
ing and to understand the structure of text.
• Teach the missing skill. There are many instructional strategies that may
be a good match based on the problem analysis. The guiding rule, how-
ever, is that if the student cannot perform the skill, then the student requires
specific, direct instruction to learn that skill. If the student can perform the
skill with assistance, then the student requires fluency building with that
skill.
• Expository and narrative texts. Consider assessing the student’s meta-
cognitive skills using both informational (expository) and fictional (narra-
tive) text. Have the student read each type of text and conduct an interview
and a retell.
• Interview educators familiar with the student. Younger students may
not be able to verbalize certain strategies or skills, so interviewing those
familiar with the student may provide more information about the stu-
dent’s meta-cognitive abilities. However, be sure that the interviewee pro-
vides not only observations but also evidence of the skill to avoid decisions
based solely on perception or subjective statements.
• CBM Retell. Have the student read an ORF passage and then provide
1-minute for a retell. This is a quick procedure to efficiently assess reading
comprehension. Follow the procedures and directions outlined by DIBELS
(see http://dibels.org and Good and Kaminski 2011). This procedure could
serve as an alternative assessment for the Retell procedure described ear-
lier, or it could be part of the Survey-Level Assessment.
In this strategy, students are asked a question intended to prompt a discussion re-
lated to the reading. The evaluator determines if the student understands the topic.
Directions are provided in Handout 8.7. This assessment should be repeated for
each topic area of concern:
1. Gather reading passages containing the topic to be assessed.
2. Preview the reading passages and identify main theme and 3–5 key points that
are essential to understanding the passage. Identify important vocabulary words.
3. Explain to the student that you want him or her to read a passage but that first, you will have a conversation to understand what he or she may already know about the topic.
4. Begin with an open-ended question, such as “Tell me what you know about
____.” Take notes while the student answers and look for identification of the
key points you identified.
5. Next ask specific questions about the key points and vocabulary that you identi-
fied. Record what the student says.
Interpretation Guidelines
6. Ask, “Does the student’s background knowledge support text content?” Review
your notes and determine if the student’s background knowledge supports the
content of the text. As a general guideline, the student should know the majority
of the key points that you identified and not present inaccuracies.
a. If yes, then you can reasonably conclude that the student's background knowledge is sufficient for that particular topic.
b. If no, then recommend teaching strategies that will activate or build back-
ground knowledge (see Teach: Background Knowledge section and Handouts
8.13 and 8.18).
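The guideline in Step 6, knowing the majority of the identified key points with no inaccuracies, can be sketched as a simple check. This is illustrative only; the counts are made up, and in practice the judgment is made from the evaluator's notes.

```python
# An illustrative check for the Step 6 guideline: the student should know
# the majority of the 3-5 key points identified in the passage and present
# no inaccuracies. The counts below are made up for the example.

def background_supports_text(known_points, total_points, inaccuracies):
    """True if the student knows most key points with no inaccuracies."""
    return known_points > total_points / 2 and inaccuracies == 0

print(background_supports_text(4, 5, 0))  # True: knowledge is sufficient
print(background_supports_text(2, 5, 0))  # False: build background knowledge
```

Any inaccuracy tips the decision toward teaching background knowledge, since misconceptions interfere with comprehension even when some key points are known.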
Things to Consider
• If the student struggles to define vocabulary words related to the topic,
consider Step 4. Vocabulary is related to background knowledge, so the
student may need vocabulary instruction as part of background knowledge
instruction.
• Conduct the Free Discussion across multiple topics to which the student
has been introduced and struggled reading the text, and include topics with
which the student is familiar. This step provides additional information to
determine if background knowledge is contributing to reading difficulties.
8.5 Plan Implementation

If the student needs vocabulary instruction, he or she needs direct teaching of the words that will enable him or her to understand and gain more meaning from the texts that he or she reads. Two approaches to instruction are described: a direct instruction approach and a peer tutoring strategy. Additionally, the student may benefit from being taught specific word parts or from instruction on using context clues to define words (see the DISSECT strategy in Handout 6.15 in Chapter 6).
During reading: Click or clunk In this strategy, students check their understanding as they read, noting whether each sentence "clicks" (makes sense) or "clunks" (does not make sense). Students are taught to use "fix-up strategies" if something they have read "clunks". Handout 8.14 describes the "click or clunk" strategy.
During/after reading: Paragraph shrinking This is a strategy that can be used
during and after reading, as it focuses on summarizing and making meaning from
what was read. Students are taught to summarize each paragraph in 10 or fewer
words. Students work in pairs and can be provided questions to guide their sum-
maries. Handout 8.15 describes this strategy. Using paragraph shrinking within
the context of cooperative peer tutoring, Sáenz et al. (2005) found that students
with learning disabilities and second language-learners scored better on measures
of word decoding and comprehension compared to students who did not use the
strategy.
After reading: Summarizing and question generating Students can be taught to
summarize the main ideas and to write questions based on who, what, when, where,
why, and how. Students can read the text, write questions, and then work in pairs to
answer each other’s questions. Handout 8.16 describes this strategy.
After reading: Partner retell In partner retell, students take turns reading in pairs
and then retell portions of the text to their partner. This strategy has been shown
to improve both reading decoding and reading comprehension scores, particularly
when combined with paragraph shrinking and prediction relay, which is a predictive
reading strategy (Rathvon 2008; Sáenz et al. 2005). Handout 8.17 includes informa-
tion on partner retell.
After reading and retell: Story mapping Story mapping is a general strategy with
many variations, but it involves teaching students to understand and recognize the
structure of texts that they read. It is helpful during and after reading as it enables
readers to make sense of what was read and to understand the structure of text. Story
mapping has been effective across various grades, backgrounds, and ability levels
(Rathvon 2008; Vaughn and Linan-Thompson 2004).
Story mapping can be as simple as teaching students to identify the "beginning,
middle, and end" of a story, but story maps can also be quite complex and include
characters, plot lines, or compare/contrast elements. After reading a passage or text,
students are taught to skim the story to re-familiarize themselves with key compo-
nents of the text. Students then complete a pre-made story map. Variations of this
approach include having students work in groups or pairs to complete the story
map, including visual imagery to support the story map, and including a discussion
of the story before or after completing the story map. An example of a basic story
map is provided in Handout 8.19.
Once teachers have assessed background knowledge and know more about what
their students know, lessons targeting specific content can be created to activate
that knowledge.
8.6 Plan Evaluation
Having identified an instructional focus and strategy, the Plan Evaluation phase
involves measuring the effectiveness of the strategy. Measurement of fidelity and
measurement of student progress are two critical components of Plan Evaluation.
Strategies for monitoring reading comprehension skills are discussed. General out-
come measures (i.e., CBM) can be used to monitor overall reading growth, and
other measures can be used to assess progress or mastery of the specific targeted
skill. Data collected as part of daily instruction also can be analyzed.
Oral reading fluency ORF, or reading CBM, is a general outcome measure of
reading, including comprehension. As students improve in accuracy and rate, so does
their comprehension.
MAZE Reading MAZE also is a general outcome measure of reading and can
be used to measure comprehension. MAZE is a good option for monitoring when
students have performed well on ORF but not on MAZE during the Survey-Level
Assessment.
Vocabulary lists Creating vocabulary lists is an option to assess mastery of vocab-
ulary. Vocabulary lists can be used to assess students’ ability to produce definitions
or match definitions (see Handouts 8.20 and 8.21 for directions and examples).
MAZE passages can be created with specified vocabulary words serving as the
foils (i.e., replacement options) in order to assess word understanding within con-
text. A MAZE passage generator can be found on Interventioncentral.org (http://
www.interventioncentral.org/tools/maze-passage-generator). This generator ran-
domly replaces every 7th word and evaluators can replace certain vocabulary words
in order for the passage to assess changes in vocabulary growth.
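For evaluators who prefer to script their own probes, the every-7th-word scheme is easy to sketch. The function below is a simplified illustration, not the Intervention Central tool: the name `make_maze` and the rule of drawing the two foils from elsewhere in the passage are assumptions for demonstration only.

```python
import random

def make_maze(passage, nth=7, seed=0):
    """Build a simplified MAZE probe: every nth word is replaced with a
    three-option choice set (the correct word plus two foils drawn from
    the passage). Returns the probe text and the answer key."""
    rng = random.Random(seed)  # fixed seed so a probe can be regenerated
    original = passage.split()
    out = list(original)
    answers = []
    for i in range(nth - 1, len(original), nth):
        correct = original[i]
        # Simplified foil rule: any two other words from the passage.
        pool = [w for w in original if w.lower() != correct.lower()]
        options = [correct] + rng.sample(pool, 2)
        rng.shuffle(options)
        out[i] = "(" + " / ".join(options) + ")"
        answers.append(correct)
    return " ".join(out), answers
```

To target vocabulary as described above, the foil pool could instead be restricted to the student's missed vocabulary words.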
208 8 CBE Reading Comprehension
This section describes considerations for expanding the use of the CBE Process.
Vocabulary-matching If the student cannot produce the definitions of words on
his or her own, a vocabulary-matching task will show whether the student can match words
to definitions. This is a less difficult task because the student only has to identify the
correct definition instead of both retrieving the information from long-term memory
and then determining if it is correct.
To examine this skill, create word lists using the missed vocabulary words in
Step 4 and provide the definitions. Ask the student to match each word with the
correct definition. Directions and templates for creating a matching vocabulary list are
provided in Handouts 8.20 and 8.21.
Assessing vocabulary in context To assess vocabulary in context, obtain passages
with the missed words and underline them. Ask the student to use context clues and
think aloud to define the words. Observe the strategies used. If the student does
well, fluency building with context and vocabulary instruction likely would be
beneficial (see Handout 8.12). If the student struggles with this task, providing direct
instruction on using clues to define a word (e.g., the DISSECT strategy) likely would
improve the student's use of this strategy (Handout 6.15 in Chapter 6).
Background Knowledge An alternative way to assess background knowledge is
to create MAZE probes related to the topic area in question. Identify the topic to
assess and create a MAZE probe on that topic. Administer the MAZE and determine
if the student meets criterion. If the criterion is met, then background knowledge
may be sufficient. If criterion is not met, then background knowledge may be an
area to target. To verify if background knowledge is an issue, compare the results of
multiple MAZE administrations across different topics, including topics with which
the student is confident and familiar.
Self-monitoring Consider conducting a self-monitoring assessment. The self-
monitoring assessment (see Chapter 6) can help the evaluator determine if the
student is actively monitoring reading. If the student scores well with Vocabulary,
Decoding, and Meta-Cognition, then Self-Monitoring can be used to determine if
the student makes meaning-violating decoding errors that diminish comprehension.
Follow the self-monitoring steps outlined in Chapter 6 focusing on whether or not
the errors violate meaning.
Referents Referents are the person, thing, or concept to which words or expressions
refer. For example, in the phrase “Tom picked up his children,” his refers to Tom.
Students may be confused by such referents, particularly if they are more complex,
like in the phrase “Tom shouted at Bill because he spilled the coffee.” Examine the
student’s errors and determine if a high number of them are related to referents (e.g.,
he, she, they, that). Interview the student about a text selection to determine if he
or she can identify to what or whom the words refer. If the student struggles with referents,
additional instruction may be warranted.
8.8 Chapter Summary
This chapter outlined the CBE Process for Reading Comprehension. The CBE Pro-
cess for Reading Comprehension begins with a survey-level assessment with CBM
oral reading fluency and MAZE probes followed by working through a series of
questions and tasks that examine reading accuracy, reading rate, vocabulary, meta-
cognition, and background knowledge. Instructional recommendations are deter-
mined based on the results.
[Flowchart: The CBE Process for Reading Comprehension]
Do: Examine rate and accuracy on ORF as described in Chapter 6.
• Yes: Proceed to Step 4.
• No: Teach decoding skills (based on results of the Decoding Process in Chapter 6).
4. Ask: Is the student missing critical vocabulary? Do: Examine vocabulary of content and passages.
• Yes: Teach vocabulary instruction (consider Step 5).
• No: Proceed to Step 5.
5. Ask: Is the student monitoring comprehension? Do: Examine meta-cognitive skills.
• Yes: Proceed to Step 6.
• No: Teach meta-cognitive skills (consider Step 6).
6. Ask: Does the student's background knowledge support text content? Do: Examine background knowledge.
• Yes: Reconsider problem identification.
• No: Teach background knowledge.
PLAN IMPLEMENTATION: Teach decoding skills (as outlined in Chapter 6), vocabulary instruction, meta-cognitive skills, and background knowledge (Before/During/After Reading Framework).
PLAN EVALUATION: Monitor effectiveness and monitor fidelity.
Handout 211
Note: Reprinted with permission (Shinn & Shinn, 2002a). Copyright 2002 by NCS
Pearson.
Practice Items
b. If no, examine the results of the Interview and Retell to select a strategy to
use.
i. If the student scored low on the Comprehension Interview, teach strategies
based on where the deficit occurred.
1. For “Before Reading”, teach the student to develop questions and pre-
view the text (see Handout 8.13).
2. For “During Reading”, teach the student to use strategies to monitor
meaning (see Handouts 8.14 and 8.15).
3. For “After Reading”, teach the student to summarize and answer ques-
tions about the reading (see Handout 8.16).
ii. If the student scored low on Retell, teach partner retell (see Handout 8.17)
and/or story mapping (see the "Story Mapping" section later in the Chapter).
The instruction will be adjusted depending on whether the student was able to
provide an accurate Retell with or without prompting.
c. If the student scored low on the Comprehension Interview, teach strategies
based on where the deficit occurred.
1. For “Before Reading”, teach the student to develop questions and preview
the text (see Handout 8.13).
2. For “During Reading”, teach the student to use strategies to monitor mean-
ing (see Handouts 8.14 and 8.15).
3. For “After Reading”, teach the student to summarize and answer questions
about the reading (see Handout 8.16).
d. If the student scored low on Retell, teach partner retell (see Handout 8.17)
and/or story mapping (see the “Story Mapping” section later in the Chapter).
The instruction will be adjusted depending on whether the student required
prompting to provide accurate Retell.
Passage   Score                     Criterion Met?   Benchmark*
Level 8   rate 18; accuracy > 80%   N/A              N/A
Level 7   rate 18; accuracy > 80%   N/A              N/A
Level 6   rate 16; accuracy > 80%   No               18
Level 5   rate 12; accuracy > 80%   No               18
Level 4   rate 10; accuracy > 80%   No               15
Level 3   rate 8; accuracy > 80%    Yes              8
Level 2   rate 2; accuracy > 80%    N/A              N/A
* Based on Fall DIBELS Next Benchmark Goals (http://dibels.org/papers/DIBELSNextBenchmarkGoals.pdf).
[Checklist: Meta-cognitive reading skills. Each skill is rated Observed, Partial, or Not Observed.]

Before Reading
• States purpose for reading
• Identifies questions to consider or ask
• Skims passages or paragraphs to find information for questions
• Forms a general impression of information emphasized within the text
• Sets a goal for reading
• Uses title to identify purpose
• Looks at illustrations or headers
• Makes predictions about what is in the text

During Reading
• Identifies main ideas and critical details
• Deciphers which information is relevant and which is not relevant
• Synthesizes key ideas
• Adjusts reading rate with changes in text complexity
• Self-corrects errors
• Remembers questions and predictions while reading
• Checks to see if what was read makes sense or aligns with previous information
• Stops and summarizes information while reading
• Clarifies information when it does not make sense

After Reading
• Identifies or reaches a conclusion
• Summarizes what was read
• Answers questions that were posed at the beginning of reading
• Elaborates on reading and/or connects with other sources of information; uses prior knowledge to make sense of what was read
• Makes decisions about reading to determine if reading goal was met or if sections require rereading
• Reviews sections of text
• Related strategies that can help with previewing reading passages and texts are
“Inquiry Charts”, which teach students to develop specific questions about the
topic and “Think Alouds”, which teach students to both develop questions and
monitor their meaning during reading. The reader is referred to http://www.readingrockets.org/strategies/#comprehension for more information.
Evidence-base: Liff Manz 2002; Vaughn and Kettman Klinger 1999; Vaughn et al.
2000
Handout 8.18a
Name: _____________________________________
Text to Self
Text to World
Text to Text
Main Characters:
Problem:
Major Events:
1
2
3
Outcome or Resolution:
Expository Text
Topic Sentence or Author’s Purpose:
Supporting Detail 1:
Supporting Detail 2:
Supporting Detail 3:
Main Idea:
Vocabulary-Matching Example
9.1 Chapter Preview
This chapter focuses on using data to make instructional decisions, also known as
Plan Evaluation, which is the fourth step of the CBE Process. In this chapter, two
topics are discussed: (a) analyzing progress monitoring data for instructional deci-
sion making and (b) instructional factors to consider when adjusting or designing
instructional plans.
During Phase 3 of the CBE Process, Plan Implementation, decisions are made about
how to measure a student's progress and the fidelity of treatment. Data are collected
for later review to ensure the instructional plan is working for the student. During
the last step of the CBE Process, Plan Evaluation, decisions about the effectiveness
of the instructional plan are made. To assist educators in making such decisions,
guidelines are provided on how to evaluate progress monitoring data. Instructional
factors to consider when adjusting instructional plans also are provided.
9.3 Progress Monitoring
There is a series of decisions and steps to take when examining a student’s growth.
The steps are presented in Table 9.1 and discussed in detail next.
9.3.2 Graphing Basics
Each graph has an x-axis (i.e., horizontal line or abscissa) and a y-axis (i.e., verti-
cal line or ordinate). The scale on the y-axis should represent the normal range of
possible scores. As an example, look at Fig. 9.1. The hypothetical data indicate that
this second grade student is making consistent and steady reading progress. The
data points are going up at a steep angle. However, the data are displayed on a scale
that does not represent the normal range of scores expected for typical second grad-
ers. When the same data are displayed on a graph with the normal range of scores
represented, as seen in Fig. 9.2, a more accurate view of the student’s growth rate
is presented. Using the normal range of scores is part of accurately representing the
student’s growth.
Fig. 9.1 Negative example of a y-axis. ( Note. Notice the scale of the y-axis is well below the ave-
rage range for a second grader, resulting in what appears to be very steep growth)
Fig. 9.2 Positive example of a y-axis. ( Note. The scale of the y-axis represents the normal range
of scores for all second graders. The same data in Fig. 9.1 are more accurately depicted within this
graph)
The other part is ensuring that the x-axis follows the actual dates the data were
collected. Examine Fig. 9.3. The data illustrate what appears to be steep growth for
the student. However, the x-axis is in numerical order, so information about the
student's growth over time is missing.
246 9 Progress Monitoring and Educational Decisions
Fig. 9.3 Negative example of an x-axis. ( Note. The scale of the x-axis does not indicate the chro-
nology of the data collection; instead it is listed numerically)
Fig. 9.4 Positive example of an x-axis. ( Note. The scale of the x-axis is chronological and illust-
rates the actual dates the data were collected)
However, when we examine Fig. 9.4, which is the same data displayed on an x-axis
that indicates the chronological dates that the data were collected, we see the stu-
dent is making less impressive growth than it appeared in the first graph.
The purpose of reviewing a progress graph is to examine the student’s slope.
Slope incorporates the amount of growth made and the time that has passed.
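For teams that keep scores in a spreadsheet or script, slope can be computed directly from dated scores with an ordinary least-squares fit. A sketch; the function name, dates, and scores are invented for illustration:

```python
from datetime import date

def weekly_slope(points):
    """Least-squares slope of score versus time, in score units per week.
    points: list of (date, score) pairs in chronological order."""
    t0 = points[0][0]
    xs = [(d - t0).days / 7.0 for d, _ in points]  # weeks elapsed, from real dates
    ys = [s for _, s in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

data = [(date(2024, 1, 8), 40), (date(2024, 1, 15), 42),
        (date(2024, 1, 22), 44), (date(2024, 1, 29), 46)]
# Weekly probes rising two words correct per week yield a slope of 2.0.
```

Because the x-values come from actual dates, unevenly spaced probes are handled correctly, which echoes the point made earlier about chronological x-axes.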
Fig. 9.5 Progress monitoring graph with goal, aim line, and trend line
9.3.4 Pattern of Performance?
Once a graph contains the necessary components, the first question to ask in exam-
ining growth is whether or not the data show a reliable pattern of performance. To
make this determination, ask “Can we predict what the next data point will be with
reasonable confidence?” If the answer is “yes,” then there is a pattern of perfor-
mance; if “no,” then a pattern likely does not exist.
It may take as many as 10 data points to establish a pattern (Christ 2010; Shinn
2002), with the most reliable slope having at least 14 data points (Christ et al.
2012). If data points are consistently increasing, decreasing, or flat, five or six
data points may be enough to establish a pattern. When data points are inconsis-
tent from date to date, it will take closer to the 10 to 14 suggested by researchers.
For example, look at Fig. 9.6. The graph “A” illustrates a pattern of performance,
even though there are only five data points. One can look at graph “A” and guess
with reasonable confidence where the next data point will be (it will likely fall
somewhere between 40 and 50). Graph “B” illustrates a nonestablished pattern of
performance. When one looks at the student’s performance, it is difficult to guess
with any confidence where the next data point will be. The student’s performance
thus far is erratic. With erratic performance, more data are required to establish a
pattern of performance. High variability in data points may indicate other issues
to examine, which will be discussed later in the chapter. For now, it is important
to understand that an inability to predict the next data point indicates the need for
more data (Christ 2010).
There is not a hard and fast rule for deciding whether or not a pattern of per-
formance is established. As mentioned, it is dependent on the student’s data. We
provide examples and nonexamples of patterns of performance in Appendix 10A.
Examine the graphs and decide if a pattern of performance is evident. The answers
are provided in Appendix 10A. The rule is that if you are not confident in predict-
ing the next data point, more data should be collected until you can predict with
confidence.
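One way to make the "can you predict the next point?" question concrete is to check how tightly the scores hug their own trend line. The rule below, every point within five words of a fitted straight line, is purely an illustrative assumption, not a published criterion:

```python
def pattern_established(scores, min_points=5, tolerance=5.0):
    """Heuristic check for a pattern of performance: enough data points,
    and every score within `tolerance` of a straight-line trend."""
    n = len(scores)
    if n < min_points:
        return False
    xs = range(n)
    mx, my = (n - 1) / 2.0, sum(scores) / n
    den = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, scores)) / den
    # Residual = distance of each score from the fitted trend line.
    return all(abs(y - (my + slope * (x - mx))) <= tolerance
               for x, y in zip(xs, scores))
```

With steady data such as [40, 42, 43, 45, 47] the check passes; with erratic data such as [40, 60, 35, 58, 30] it fails, signaling that more data are needed before judging growth.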
Fig. 9.7 Examples of the types of responses when comparing the trend line to the aim line
9.3.5 Judging Growth
An attempt to implement the intervention with fidelity should occur before changing the
intervention plan. As discussed in Chapter 3, it is not logical to judge instruction if the
plan was not implemented as intended (see Fig. 3.3 in Chapter 3).
A questionable response indicates the trend line is approximately parallel to the
aim line, and the rate of improvement is not changing. In Fig. 9.7, the good news
is that the student is growing. The bad news is that the student will not reach the
goal. The instructional plan may require intensification to get a positive response.
Intensifying the plan implies a small adjustment versus a substantial change in the
intervention. Before making an instructional change, however, it is important to ex-
amine fidelity of the plan. If fidelity is poor, it may be the case that the only change
required is implementing the plan as intended. Figure 9.7 illustrates the three dif-
ferent responses and Table 9.2 is a description of the responses and accompanying
recommendations.
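These response types can be framed as a comparison of the trend-line slope to the aim-line slope. The 25 % band separating "positive" from "questionable" below is an assumption for illustration; teams should adopt their own decision rules:

```python
def classify_response(trend_slope, aim_slope, band=0.25):
    """Positive: trend clearly steeper than the aim line.
    Questionable: trend roughly parallel to the aim line.
    Poor: trend clearly flatter than the aim line."""
    if trend_slope >= aim_slope * (1 + band):
        return "positive"
    if trend_slope >= aim_slope * (1 - band):
        return "questionable"
    return "poor"

# Aim line: from 40 to 70 words correct over 15 weeks = 2.0 words/week.
aim = (70 - 40) / 15
```

A student growing 2.1 words per week against this aim line is growing, but roughly parallel to the aim line, which is the questionable response described above.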
9.3.6 Additional Analyses
Analyses of level and variability may be useful when examining student progress,
particularly when comparing performance in response to different instructional
plans (Hixson et al. 2008; Kennedy 2005).
Level Level refers to the mean performance across multiple data points (Kennedy
2005). Examining level of performance from one instructional phase to the next is
another way to judge intervention effectiveness. The level is determined by taking
the average of all the data points within an instructional phase. The level of one
phase can be compared to the level of a second instructional phase. It is also possi-
ble to compare the beginning level of performance in an instructional phase to the
end level of performance of the same instructional phase. For example, the mean of
the first three to five data points could be compared to the mean of the final three to
five data points. Figure 9.8 illustrates comparing levels between phase changes, and
Fig. 9.9 illustrates comparing levels within the same instructional phase.
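Both comparisons reduce to simple means. A sketch, with invented scores for two instructional phases:

```python
def level(scores):
    """Level: the mean performance across the data points in a phase."""
    return sum(scores) / len(scores)

phase_a = [30, 32, 31, 33, 34, 33]  # first instructional phase
phase_b = [36, 38, 39, 41, 42, 44]  # after the plan was adjusted

between = level(phase_b) - level(phase_a)          # between-phase comparison
within = level(phase_b[-3:]) - level(phase_b[:3])  # start vs. end of one phase
```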
Fig. 9.8 Example of comparing level of performance between two phase changes
Fig. 9.9 Comparison of level of performance within the same instructional phase
Fig. 9.10 Examining variability of student’s data within and between phase changes
Variability Variability refers to the complete range of scores within the instruc-
tional phase (i.e., the lowest score to the highest score). Variability also is used to
describe how far scores deviate from the level (Hixson et al. 2008). Some variability
or “bounce” from one score to the next is common and normal (Christ 2008; Christ
2010; Christ et al. 2012; Silberglitt and Hintze 2007). Variability can be examined
within one instructional phase or between two different phases (see Fig. 9.10).
Variability is important because highly variable data make detecting patterns
of performance, level of performance, and intervention effects difficult. Too much
variability calls into question the reliability of data. Errors related to scoring, tim-
ing, or the quality of the administration can impact data consistency (Hixson et al.
2008). Student distractibility, understanding of the rules or purpose of the task, or
willingness to participate also can impact data consistency.
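Variability can be summarized as the range of scores and their average distance from the level. No "too much bounce" threshold is hard-coded in this sketch, since that judgment is left to the team:

```python
def variability(scores):
    """Return (range, mean absolute deviation from the level)."""
    lvl = sum(scores) / len(scores)
    score_range = max(scores) - min(scores)
    mad = sum(abs(s - lvl) for s in scores) / len(scores)
    return score_range, mad

stable = [40, 42, 41, 43, 42]   # modest bounce from probe to probe
erratic = [40, 60, 35, 58, 30]  # bounce that obscures any pattern
```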
Instructional Decision Making While examining data and making decisions about
the need for instructional changes, it is helpful to have the detailed instructional
plan and fidelity data. Having all the information readily available helps ensure the
conversation stays instructionally focused, is based on all relevant information, and
does not drift to unalterable variables. Consideration of only the graph can result
in conversations that focus exclusively on the student. When information about the
curriculum, instruction, and environment are readily available, the discussion can
focus on those variables that can be changed (see Fig. 9.11).
Fig. 9.11 The learning triad and ensuring proper analysis of monitoring graphs
Summary of Analyzing Progress Monitoring Graphs In summary, before making
data-based decisions, it is important to ensure the graph’s x- and y-axes have proper
scales and ranges. The graph also should contain a goal, aim line, and trend line
(and a phase change line if there have been any instructional changes in the stu-
dent’s plan). Once those critical components are present, the graph can be examined
to determine the presence of a performance (i.e., is it possible to predict the next
data point with reasonable confidence?). If fidelity is good, a judgment can be made
about the effectiveness of the intervention by comparing the trend line to the aim
line, the level of performance for one intervention phase compared to another, or
the level of performance at the beginning of the intervention to the level of perfor-
mance at the end of the intervention. Variability in scores is common, which is why
it takes several data points to determine a performance pattern (Christ et al. 2012).
Too much variability can indicate problems with the assessment administration or
the student’s behavior during assessment.
Once progress data, fidelity data, and the instructional plan have been examined,
decisions are made to either continue the current plan (in the case of a positive
response) or to change the plan (in the case of negative or questionable responses).
One of the first things to consider is that when the need for instructional change is
evident, drastic changes may not be necessary. Small (but powerful) instructional
factors could be adjusted to improve the student’s rate of growth. This section pres-
ents a few research-based factors that can be adjusted to enhance instruction. It’s
important to note that a change in instruction does not always require a change in
curricular programs.
In this section, eight instructional factors that can be altered when an instructional
change is warranted are discussed. The factors all have a research base that
demonstrates an association with improved student achievement. The list is by no
means exhaustive. A summary of the instructional factors is listed in Table 9.3.
Instruction also can be intensified by reducing the group size. Related to group
size is the homogeneity of the skills and needs in the group. When skills and needs
are similar among group members, the focus of instruction is more targeted and
students spend more of the intervention time receiving direct instruction for their
needs. As the skills change over time, groups should be modified to maintain ho-
mogeneity.
9.5.3 Pacing
Amount of review: the time devoted to reviewing previously learned material.
• Teacher provides three to five review questions at the end of each lesson
• Teacher provides 5–10 minutes of review of previously learned material at the beginning of each lesson
Repetitions: the number of times a student repeats a newly learned concept or fact.
• Teacher introduces a new vocabulary word; students pronounce the word several times, read the word repeatedly in the context of the story, and write a sentence using that key word
Activating background knowledge: providing an explicit connection to previously learned material.
• Previewing and discussing the content in large and small groups
• Teach: Background Knowledge: Connections to self, world, text (Handout 8.18, Chapter 8)
Corrective feedback: feedback provided by a teacher designed to correct an error produced by the student.
• A teacher states, “That word is (correct word). What word?”
Praise-to-redirect ratio: the ratio of praise statements provided to serve as reinforcement, compared to the number of statements that redirect a student’s behavior, express disapproval, or correct a behavior.
• A teacher places five pennies in her left pocket. When she provides a redirect, she makes five praise statements, moving a penny to her other pocket after each praise statement
• A teacher places a visual on the corner of the whiteboard and each time she looks at it, she gives out a behavior-praise statement
OTRs are defined as the number of times a teacher provides academic prompts
that require active student responses (Simonsen et al. 2010). Each OTR involves a
teacher prompt, a student response, and teacher feedback, or a three-step interaction
between the teacher and student. OTRs can be verbal (e.g., oral responding) or non-
verbal (e.g., write on a white board, thumbs up or down) and require an individual
or a group to respond (i.e., choral responding) (Haydon et al. 2010; Simonsen et al.
2010).
An increase in OTRs is associated with an increase in student achievement by
commanding student attention, providing the teacher information about students’
mastery of the material (i.e., are the student responses correct or incorrect?) and
prompting praise or corrective feedback. Praise is effective in building rapport with
students and reinforcing the mastery of skills (Gable et al. 2009; Simonsen et al.
2011), and corrective feedback provides additional instruction to students and pre-
vents the practicing of errors (Hattie 2009; Stitcher et al. 2009).
The optimal level of OTRs and accuracy of responses depends on the students’
level of acquisition with the skill (i.e., initial acquisition vs practice) and the instruc-
tional grouping (i.e., whole vs small-group). Whole-class direct instruction is most
effective with approximately 3 to 6 OTRs per minute and small-group instruction
is most effective with approximately 8 to 12 OTRs per minute (Gunter et al. 2004;
Harlacher et al. 2010). Other recommendations suggest that students learning new
skills, particularly students with disabilities, may require as many as 10 OTRs per
minute, (Gunter et al. 2004; Haydon et al. 2009). Gunter et al. (2004) indicate that
students in the acquisition stage of the instructional hierarchy require 4 to 6 OTRs
per minute with at least 80 % accuracy; and students at the fluency stage and beyond
should have 9 to 12 OTRs per minute with at least 90–93 % accuracy. Students who
respond with at least 98 % accuracy have reached a level of independent mastery
and are ready for more difficult material (Treptow et al. 2007).
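An observer's tallies can be turned into the rate-and-accuracy summary these guidelines call for. The stage labels below follow the accuracy bands quoted above (80 %, 90–93 %, 98 %); collapsing 90–93 % to a single 90 % cut is a simplification, and the function name is our own:

```python
def otr_summary(prompts, correct, minutes):
    """Summarize an observation: OTRs per minute, response accuracy,
    and a stage label based on the accuracy bands quoted in the text."""
    rate = prompts / minutes
    accuracy = correct / prompts
    if accuracy >= 0.98:
        stage = "independent mastery"
    elif accuracy >= 0.90:
        stage = "fluency"
    elif accuracy >= 0.80:
        stage = "acquisition"
    else:
        stage = "below acquisition target"
    return rate, accuracy, stage

# 45 prompts, 40 correct, in a 10-minute whole-class lesson: 4.5 OTRs/minute,
# which falls within the 3 to 6 range suggested for whole-class instruction.
rate, acc, stage = otr_summary(45, 40, 10)
```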
Increasing OTRs Increasing OTRs is a powerful way of adjusting instruction and
can be accomplished in multiple ways. One way to increase the rate of OTRs is to
set a goal, and obtain graphed feedback from peer observers (Simonsen et al. 2010).
Another strategy is to use visual aids to prompt the provision of an OTR (e.g., pie-
ces of paper on the corner of the projector, moving coins from one pocket to the
other after providing an OTR, or a visual on each student’s desk).
Increasing Accuracy When observing OTRs, the accuracy of those responses can
be recorded. If the accuracy is not sufficient, at least two strategies can improve
accuracy. First, ensure “think time” (also called wait time) is between 3 and
5 seconds, which is an optimal time for students to respond (Stitcher et al. 2009).
Second, use antecedent or precorrection strategies. Precorrections are prompts or
cues that remind students about the expected behavior or skill and are given prior
to students entering problematic areas (e.g., unstructured times during school or
before responding to certain academic content) (Simonsen et al. 2010). Examples
of precorrection in reading include reminding students of decoding rules prior to
reading, highlighting aspects of the reading material, or practicing a skill prior to
another activity involving that skill.
9.5 Evidence-Based Instructional Factors
A fourth factor that can be adjusted is the amount of review included with instruc-
tion. Review is important both for maintenance of skills and for activating previ-
ously learned knowledge and skills prior to instruction (Hall 2002; Kame’enui and
Simmons 1990). Review and practice is more beneficial when it is distributed (i.e.,
a few minutes of review each day) instead of massed (i.e., longer review periods on
fewer days) (Donovan and Radosevich 1999; Hall 2002).
Adjusting Review Review can be adjusted by considering massed versus distri-
buted review, review that is cumulative (both recent and newly learned material)
versus focused on one topic, and various schedules of review (daily, weekly, and
monthly) (Hall 2002). The amount of review can be adjusted by increasing chunks
of time set aside for review within a lesson, increasing repetitions of previously
learned material, or increasing the amount of review material that is interspersed
with new material.
9.5.5 Repetitions
Repetitions are opportunities for students to practice new skills and have been
shown to improve retention, increase proficiency and fluency, and free up work-
ing memory for more complex, higher-level tasks (Archer and Hughes 2010). The
amount of review and the number of repetitions are overlapping elements.
On average, it can take between three and eight repetitions for a student to learn
a new skill, with more proficient readers requiring fewer repetitions and less profi-
cient readers requiring more repetitions (Reistma 1983). Gunter et al. (2004) report
it may take up to 30 repetitions for a student to acquire a skill, and Wong and Wong
(2001), crediting research by Madeline Hunter, report that it can take, on average,
28 repetitions for a student to learn a new behavior that has to replace an old be-
havior.
Increasing Repetitions Repetition can be increased by specifically allocating time
for repetition or by ensuring a certain number of repetitions are built in when pre-
senting new material. For example, some lessons will introduce a key vocabulary
word prior to reading a story, have students pronounce the word several times, read
the word repeatedly in the context of the story, and write a sentence using that key
word (Kame’enui and Simmons 1990).
A sixth factor is connecting new content to previous knowledge to increase the
student’s ability to learn the new content. See Teach: Background Knowledge in
Chapter 8.
A seventh factor is corrective feedback, the feedback a teacher provides a student
after an incorrect response. Corrective feedback can be adjusted by increasing the directness
and clarity of the corrective feedback. Consider a misread word. Less direct corrective
feedback may be, “Look at that word again.” More direct corrective feedback
may be, “That word is _____. What word is it?” This directness may result in an
increase in the rate of OTRs per minute and provide more repetitions. Immediate
and direct corrective feedback is beneficial because less time is spent waiting for
information retrieval.
As was just discussed, clear feedback about behavior and performance contributes
to an effective learning environment. Providing clear feedback includes behavior-
specific praise. Praise is defined as feedback and acknowledgment for behavior or
academic responses that is specific and delivered immediately after the behavior is
performed (Flora 2000; Gable et al. 2009). Using behavior-specific praise as part of
clear feedback ensures students perform accurately and are aware of what behaviors
and skills are expected of them (Hattie 2009; Horner et al. 2005; Marzano 2010).
A redirect is defined as feedback that expresses disapproval and directs the student
to a different response.
The recommended praise to redirect ratio is 5 to 1 (Flora 2000; Sutherland et al.
2003). Establishing a 5 to 1 ratio in a classroom can increase students’ time on-
task, correct academic responding, work production, and compliance with requests
(Gable et al. 2009; Simonsen et al. 2011; Sutherland et al. 2000, 2003).
Increasing Ratio of Positive-to-Redirect Statements Increasing the ratio of positive
to redirect statements can be accomplished through peer observation by establishing
a baseline of performance, setting a goal, and determining if the goal is met (see
Simonsen et al. 2006, 2010). The ratio can also be improved through the use of
visual prompts (e.g., a poster in the classroom, sticky-note, etc.), a tactile reminder
(e.g., coins transferred from one pocket to another following a praise statement), or
other self-monitoring such as marking on a sticky-note each time praise is provided.
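The self-monitoring tally just described lends itself to a quick calculation. The sketch below is illustrative only (the function name and return value are not from the book); it checks a tally of praise and redirect statements against the recommended 5 to 1 goal:

```python
def praise_ratio_met(praise_count, redirect_count, target=5.0):
    """Check a praise/redirect tally against the 5:1 goal.

    Counts might come from a peer observation, coins moved between
    pockets, or marks on a sticky note. The target defaults to the
    5:1 ratio recommended in the text.
    """
    if redirect_count == 0:
        # No redirects observed: any praise at all satisfies the goal.
        return praise_count > 0
    return praise_count / redirect_count >= target
```

For example, 25 praise statements against 5 redirects meets the goal, while 10 against 5 does not.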
Progress and fidelity monitoring are critical to plan evaluation. This chapter re-
viewed interpretation of progress monitoring graphs and provided a description
of several research-based instructional factors to adjust when data indicate an in-
structional change is warranted. Before reviewing student progress, critical graph
components should be present, including a y-axis that represents the normal range
of scores for the skill being measured, an x-axis representing chronological dates
the data were collected, a goal, an aim line, and a trend line. Progress can be in-
terpreted by examining the number of consecutive data points below the aim line
or by comparing the trend line to the aim line. Interpreting the trend line requires
consideration of the number of data points and a pattern of performance. When a
student’s progress is poor or questionable and fidelity is good, instructional changes
are warranted. A summary of eight instructional factors that can be adjusted to im-
prove student outcomes was provided, illustrating that instructional changes do not
always require changing curricula.
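The consecutive-data-point rule summarized above can be sketched as a simple check. This is illustrative only: the four-point threshold is an assumption (conventions vary by progress monitoring system), and the aim-line values would come from the student's graph:

```python
def needs_instructional_change(scores, aim_values, consecutive=4):
    """Flag an instructional change when the most recent `consecutive`
    progress monitoring scores all fall below the aim line.

    scores: scores in chronological order
    aim_values: the aim-line value expected on each of those dates
    """
    if len(scores) < consecutive:
        return False  # not enough data to establish a pattern
    recent = zip(scores[-consecutive:], aim_values[-consecutive:])
    return all(score < aim for score, aim in recent)
```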
Key Points
• A progress monitoring graph has components that allow for interpretation
including a y-axis that represents the normal range of skills, an x-axis that
represents the chronological dates of administration, a goal, aim line, and
trend line.
• A pattern of performance must be established before examining growth.
• Research suggests that up to 14 data points may be needed to establish a
reliable trend line.
• Level and variability can be examined on progress monitoring graphs to
help interpret performance.
• Fidelity is necessary to consider before making decisions about instructional
changes.
• A summary of eight instructional factors that can intensify instruction is
provided.
Appendix 9A
10.1 Is Curriculum-Based Evaluation Just for Tier 3?
10.1.1 Group Diagnostics
To conduct group diagnostics, evaluators examine oral reading fluency data for rate
and accuracy. Each student’s rate and accuracy are compared to either a normative
criterion or benchmark criterion for rate and a 95 % criterion for accuracy.
J. E. Harlacher et al., Practitioner’s Guide to Curriculum-Based Evaluation in Reading, 261
DOI 10.1007/978-1-4614-9360-0_10, © Springer Science+Business Media New York 2014
10 Frequently Asked Questions about Curriculum-Based Evaluation
Table 10.1 Description of how and where CBE can be used between tiers of instruction
Tier 1: Survey-level assessment to verify instructional, frustrational, and independent status
• After benchmarking with CBM, a survey-level assessment is conducted to determine instructional reading level
• Survey-level assessment above a student’s grade level to determine how advanced the student’s skills are
Tier 2: Group diagnostics
• A team of third grade teachers use group diagnostics to plan tier 2 support for students
Tier 2: Survey-level assessment
• Survey-level assessment is conducted on a small group of eighth grade students to determine their appropriate placement for support
Tier 3: Entire CBE Process to design and provide support
• A school-level problem-solving team conducts CBE for students who receive tier 3 support
• As part of a student’s triennial review, a school psychologist conducts CBE to help develop a student’s individualized education program
Problem Identification
Step 1—Ask: Is there a problem? Do: Universal screening with all students in
a class or grade level.
Examine universal screening data from a class or grade level to identify students
who are at risk for not meeting reading standards. A cutoff point (e.g., 25th per-
centile) will be determined by the school team, and students performing below this
cutoff will move to Step 2.
Step 2—Ask: Does the problem warrant further investigation? Do: Verify stu-
dents’ at-risk status with multiple sources of information.
For each student identified in the at-risk group from the universal screening data,
multiple sources of information should be used to verify their at-risk status before
moving to the problem analysis phase. Examples of the sources of information to
consider include results from state tests, district tests, classroom-based assessments,
teacher observation of daily classroom performance, etc.
Problem Analysis
Step 3—Ask: Why are the students at risk in reading? Do: Examine rate and
accuracy on oral reading fluency probes.
The Problem Identification phase will generate a list of students verified as at risk
for not meeting standards in reading. With these students, administer three reading
CBM probes and report the median words read correctly and errors. Calculate the
accuracy percentage. Refer to Chapter 6 for scoring directions and the formulas for
calculating rate and accuracy.
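The scoring step can be sketched as follows; the function name and the probe tuple format are illustrative, and Chapter 6 remains the authority for the formulas:

```python
import statistics

def score_probes(probes):
    """Summarize three CBM probes: median words read correct (WRC),
    median errors, and accuracy percentage (WRC / words attempted).

    probes: list of (words_read_correct, errors) tuples, one per probe.
    """
    wrc = statistics.median(p[0] for p in probes)
    errors = statistics.median(p[1] for p in probes)
    total = wrc + errors
    accuracy = 100.0 * wrc / total if total else 0.0
    return wrc, errors, round(accuracy, 1)
```

For example, probes of (60, 4), (55, 5), and (58, 2) yield a median of 58 WRC with 4 errors, for 93.5 % accuracy.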
Plan Implementation Once fluency and accuracy rates have been obtained, students
are sorted into four groups (see Table 10.2 and Handout 10.1): Group 1: accurate
and fluent; Group 2: accurate and not fluent; Group 3: inaccurate and fluent;
and Group 4: inaccurate and not fluent. Each group is associated with teaching
recommendations that can be provided in Tier 2 instruction. Table 10.2 summarizes
the teaching recommendations for each group.
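The sorting logic follows directly from the two criteria in the text (rate against a norm or benchmark, accuracy against 95 %); the function name and signature below are illustrative:

```python
def reading_group(rate, accuracy, rate_criterion):
    """Assign a student to one of the four groups described above.

    rate: median words read correct per minute
    accuracy: percentage of words read correctly (0-100)
    rate_criterion: normative or benchmark cutoff for rate
    """
    fluent = rate >= rate_criterion
    accurate = accuracy >= 95.0  # the 95% accuracy criterion
    if accurate:
        return 1 if fluent else 2  # accurate/fluent vs. accurate/not fluent
    return 3 if fluent else 4      # inaccurate/fluent vs. inaccurate/not fluent
```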
Plan Evaluation Plan Evaluation involves measuring progress to determine the
effectiveness of instruction. Measurement of fidelity and measurement of progress
are two critical components of Plan Evaluation. Oral reading fluency probes are
recommended as the tool to monitor general outcomes in reading. Both rate and
accuracy should be monitored. Other measures can be used in addition to oral
reading fluency to monitor short-term goals and skill mastery. See Chapters 6 to 8 for
progress monitoring specifics.
In addition to group diagnostics, survey-level assessment can assist with Tier
2 instructional planning. The survey-level assessment provides information about
students’ instructional level and can increase homogeneity of targeted groups.
Tier 1—Core Instruction At Tier 1, there are a variety of ways to use CBE. Group
diagnostics can be used to determine the groups and focus for the small-group
instruction portion of Tier 1. Additionally, the survey-level assessment can be used to
determine a student’s frustrational, instructional, and independent reading levels,
which could contribute to verifying a student’s at-risk status after universal
screening. For struggling students, a teacher could use the entire CBE Process.
The administration time will vary depending on the student’s skills, grade level,
and what phase of the CBE Process is being conducted. For the Problem Identifica-
tion phase, the initial identification of a problem can be as quick as a few minutes,
since review of records may be all that is necessary to determine the existence of a
problem. If included in the Problem Identification phase, a survey-level assessment
can take 20–30 minutes.
The Problem Analysis phase involves a series of short tasks that average about
10–15 minutes each. For example, the self-monitoring assessment for decoding can
take 5–10 minutes, but an in-depth teacher interview focused on comprehension and
approach to reading could take 20–30 minutes. Once the evaluator is fluent with the
CBE Process, a full evaluation can range from as little as an hour up to 3 hours to
conduct, depending on the student’s skills.
The Plan Implementation phase usually involves a meeting of about 1 hour and
results in an intervention plan and identified strategies for measuring progress and
fidelity.
The Plan Evaluation phase can last several weeks or months and includes ongo-
ing progress and fidelity monitoring and regularly scheduled data review meetings
for data-based decision making.
To answer this question, we will refer to the literature on systems change and
Multi-Tier System of Supports. First, educators are more willing to try new
practices when they understand the purpose and benefits of their use (Barnes
and Harlacher 2008). An understanding of the rationale and the skills needed to
use the practice increases the likelihood of buy-in. Doing things because “the
district wants you to do it” may get compliance, but perhaps at the expense of
fidelity of implementation and understanding the practice (Greenwood et al.
2008; Ikeda et al. 2002). Having discussions and providing presentations about
solutions to current problems are ways to introduce CBE. Another approach to
convincing others is to use the CBE Process yourself and share your data with
others to illustrate the value of CBE.
making decisions about individuals; and above α = 0.90 for high-stakes decisions,
such as disability classifications (Foegen et al. 2007; Kaplan and Saccuzzo 2008;
Thorndike and Thorndike-Christ 2010). Tools are generally considered to have
good validity if they correlate moderately with tests that measure the same con-
struct (i.e., correlation coefficients above 0.40). However, this is a general guide-
line and in some cases, a correlation below 0.40 may be considered good validity
(Thorndike and Thorndike-Christ 2010).
CBE frequently uses CBM in the process, which has strong reliability (Wayman
et al. 2007). Reliability coefficients for test-retest reliability for word identification
and oral reading fluency (ORF) have been between r = 0.84 and 0.96, and alternate-
forms reliability coefficients have ranged from r = 0.84 to 0.96 (Deno 2003; Way-
man et al. 2007).
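Reliability coefficients of this kind are correlations between two sets of scores (e.g., two administrations, or two alternate forms given to the same students). A minimal Pearson correlation sketch, for illustration only; real reliability studies use large samples and standard statistical software:

```python
def pearson_r(xs, ys):
    """Pearson correlation between paired score lists, e.g., scores
    from two alternate-form probes given to the same students."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)
```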
In addition, CBM has demonstrated good validity because of its association with
other reading measures and state-level assessments (McGlinchey and Hixson 2004;
Miura Wayman et al. 2007; Silberglitt et al. 2006). Fuchs et al. (1988) compared
ORF to other common measures of comprehension, including a subtest from a reading
achievement test, cloze (a silent reading activity in which every seventh word is
blank and the student must select the word that fits the sentence), story retell, and
question–answer measures.
Inferences about CBE’s effectiveness can be drawn from two sources: (a) the re-
search on use of formative assessment as a means to improve academics and (b) the
use of CBM within the assessment process.
Formative Assessment Research Formative assessment refers to a broad set of
assessment practices in which the data obtained from those assessments are used to
improve teaching or learning (Black and Wiliam 1998; Kingston and Nash 2011). It
is termed “formative” because the data are used while learning is still forming; that
is, instruction is occurring during assessment so that the information obtained can
guide instruction. Formative assessment stands in contrast to summative assessment,
which is used at the end of a learning cycle to make global conclusions
about a student’s skills. The term summative implies the goal is to measure the
sum of learning after teaching has occurred (Stiggins and Chappuis 2006). CBE is
designed to be used in a formative manner; therefore, the literature base for
formative assessment is applicable.
Black and Wiliam’s (1998) oft-cited analysis of formative assessment conclud-
ed that formative assessment has a positive effect on student achievement. They
determined that the use of formative assessment results in an effect size (ES) of
0.40–0.70 (ESs of 0.20, 0.50, and 0.80 are rated as small, moderate, and large, re-
spectively; Cohen 1988). (See Inset 3.1 in Chapter 3 for an explanation of an ES.)
However, there were questions about which types of formative assessment practices
were included in the analysis, and the definition of formative assessment used by
Black and Wiliam may have been imprecise.
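As a reference point for the ES values cited here, Cohen's d is the difference between two group means divided by their pooled standard deviation. The meta-analyses discussed above used more elaborate procedures; this sketch is illustrative only:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d effect size: mean difference over pooled standard
    deviation. Values near 0.20, 0.50, and 0.80 are conventionally
    read as small, moderate, and large (Cohen 1988)."""
    n_a, n_b = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance
    var_b = statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd
```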
To address this lack of clarity in the definition, Kingston and Nash (2011)
conducted a meta-analysis to examine the effect of formative assessment on
student achievement. Overall, formative assessment was associated with a small ES
( d = 0.20), but was moderated by the type of formative assessment and by subject
area. Professional development (i.e., training teachers to use formative assessment)
netted an ES of 0.30 and use of computer-based formative assessment systems re-
sulted in an ES of 0.28. Formative assessments used in reading and mathemat-
ics were associated with an ES of 0.32 and 0.17, respectively. Given Black and
Wiliam’s (1998) findings and Kingston and Nash’s (2011) recent meta-analysis, it
is fair to conclude that the use of formative assessment, particularly in reading and
around professional development and computer-based systems, is associated with
improvement in students’ achievement.
CBM Research CBM used as a progress monitoring tool lends more support to the
effectiveness of the CBE Process. Progress monitoring is the frequent assessment
and visual graphing of a student’s performance to determine the effectiveness of
a given instructional plan and to inform decisions about the need for instructional
changes (Hosp 2008). Formative assessment can take the form of informal
assessment (e.g., anecdotal observations, mastery checklists, checks for understanding);
progress monitoring is a specific, structured type of formative assessment.
They do! When students are told to “read fast and carefully,” they do read faster,
but they make more errors. It is important to use the phrase “do your best reading”
from the standardized administration directions, as it elicits the student’s most
accurate and valid reading (Colon and Kranzler 2006).
Oral reading fluency probes are general outcome measures of reading. When scores
improve, they indicate that overall reading has improved. In fact, oral reading fluency
correlates quite well with measures of reading comprehension. Fuchs et al. (1988)
compared ORF to other common measures of comprehension, including cloze (a
silent reading activity in which every seventh word is blank and the student must
select the word that fits the sentence), story retell, and question–answer measures.
ORF correlated at 0.91 with the comprehension subtest of an achievement test,
compared to 0.76–0.82 for the typical comprehension measures (correlations of
0.20, 0.50, and 0.80 are considered small, moderate, and large, respectively; Cohen 1988).
Just as body temperature measured by a thermometer is an indicator of overall
physical health, ORF is an indicator of overall reading performance. A fever does
not tell us exactly what is wrong (e.g., infection, virus, etc.), but indicates a problem
exists that warrants further investigation. Similarly, a low score on an oral reading
fluency probe does not tell us exactly what is wrong with reading (e.g., decoding,
comprehension, etc.), but indicates a reading problem exists that warrants further
investigation. After further investigation, a health problem is identified and treated
and the thermometer is used to monitor body temperature, which indicates if the
treatment is working. Likewise, after further investigation, a reading problem is
identified and targeted with intervention and monitored with oral reading fluency
probes to determine if the intervention is working.
The English Language Arts Common Core State Standards (ELA CCSS) include
Reading Foundations standards, which are divided into three sections: (1) Print
Concepts for kindergarten and first grade, (2) Phonological Awareness for kinder-
garten and first grade, and (3) Phonics and Word Analysis for kindergarten through
fifth grade. The standards describe what the students should know at the end of each
grade level.
The ELA CCSS do not provide tools to pinpoint the skills struggling students are
missing, or guidelines for how to teach those skills at the prescribed grade levels
and for students above fifth grade who have yet to master the Reading Foundations
standards.
CBE supports the implementation of ELA CCSS by providing a process to iden-
tify missing foundational skills that are preventing students from achieving the stan-
dards. CBE offers educators the tools to pinpoint missing skills and teach those
skills so students can meet ELA CCSS.
Because ORF measures are sensitive to changes in reading skills, they are also
sensitive to error in administration, student background knowledge, and other ex-
traneous factors. Using the median can ensure a more robust and reliable measure of
reading skills (Good and Kaminski 2011; Miura Wayman et al. 2007).
The survey-level assessment (SLA) is the beginning point of the CBE Process and
determines the focus of the specific-level assessment. It provides a comprehensive
picture of the student’s reading level that supports verifying that a problem warrants
further investigation, allows a gap analysis to inform goal setting, informs material selection for
teacher-led instruction and independent reading, and indicates the need for scaffold-
ing when material is frustrational. The results of the SLA kick off the specific-level
assessment in the CBE Process because they inform the first assessment activities
to pinpoint missing skills.
Handout 10.1
Students are sorted by rate of words read correct per minute (High: above the norm or benchmark; Low: below the norm or benchmark) and by accuracy of text read (High: above 95 %; Low: below 95 %).

Group 1: Accurate and Fluent Reader (high accuracy, high rate)
Instructional Hierarchy: Generalization and Adaptation
Teach: Grade-level content, comprehension, and vocabulary (see Chapter 8)
Plan of Action: Instruction on meaning of content, specific words, and maintenance of skills

Group 2: Accurate and Not Fluent Reader (high accuracy, low rate)
Instructional Hierarchy: Fluency
Teach: Fluency and rate building
Plan of Action: Build fluency and automaticity at the word, sentence, and passage level. Instruction on grouping words to improve prosody, rate, and automaticity.

Group 3: Inaccurate and Fluent Reader (low accuracy, high rate)
Instructional Hierarchy: Acquisition (of phonics skills) or Fluency and Generalization (of skills to new words and text)
Teach: Determine if the student’s high error rate is due to a lack of self-correcting errors or if the student lacks decoding skills
Plan of Action: If self-correcting errors, focus on accuracy and self-monitoring. If lack of decoding, follow “Inaccurate and Not Fluent Reader” strategies. If both, combine strategies from Group 3 and Group 4.

Group 4: Inaccurate and Not Fluent Reader (low accuracy, low rate)
Instructional Hierarchy: Acquisition
Teach: Acquisition and fluency of basic reading skills (decoding and/or sight words)
Plan of Action: Instruction on missing decoding skills and sight words (based on results of error analyses). Work on applying skills to connected text and building fluency.

Note: Accuracy and rate are based on results of reading curriculum-based measurement. Adapted from Kansas State Department of Education (2011).
Appendices
3. Student strengths?
Additional notes:
Teacher: Date:
Student
Data Review:
Attendance
Vision
Hearing
Other?
Opening: Thank you for taking the time to meet about this student. The purpose of this interview is to
gain some information about the student from your perspective. This information will be used to help
identify a focus for a classroom observation and to add to data that the (name of school-level problem-
solving team) will use when designing instruction for (name of student).
1. What are the strengths of the student? What does s/he enjoy?
• Writing –
• Reading –
• Organizational Skills –
• Behavior –
• Social Interactions –
7. To what extent are students familiar with the schoolwide behavior expectations?
• How often is this student recognized for meeting academic and behavioral expectations?
8. Is there anything else you would like to share that the team might need to know?
Child’s Name:_______________________________________________________
Thank you for your time. I want to just ask a few questions to get to know your child and to
get your input about how he/she is doing in school. I’ll start by asking…
2. What does your child do well? In what areas have you seen growth?
3. What concerns do you have currently about your child’s growth and development?
4. How is your child’s physical health? Any medical issues, hearing, vision?
5. Any other comments or information you think would be important for our school team to
know in planning support for your child?
Form A6 Student Interview Form (adjust language according to age of student)
Student Interview
1) What do you like to do for fun? (To figure out what they find reinforcing)
3) Some things are tough in school for students. What is tough for you? (To get information about
academic functioning)
3a) Follow up with specific questions about a subject: "What is tough about reading for you?" or
"What is tough about math for you?"
4) Sometimes students get in trouble every now and then. Do you ever get in trouble? (To get
information about behavioral functioning)
4a) Ask follow-up questions to get at why, how often, and when he/she gets in trouble
5) What do you (play at recess/do during breaks) and with whom? (Ask about friends, weekend
activities) (To get information about their social functioning)
6) Tell me about your day when you go home. What do you do first, then second...etc. (To get a sense
of their daily routine at home)
Instruction
1. What about the instruction may contribute to lower than expected performance?
2. What changes to the instruction would likely improve the student’s performance?
Curriculum
3. What about the curriculum may contribute to lower than expected performance?
4. What changes to the curriculum would likely improve the student’s performance?
Environment
5. What about the environment may contribute to lower than expected performance?
6. What changes to the environment would likely improve the student’s performance?
Learner
7. What characteristics/experiences/circumstances of the student may contribute to lower
than expected performance?
Form A8 Instructional Observation Form, Example 1
Instructional Observation Form
Observer Name: ____________________________________ Date: ________________________ Time: ________________
[Grid of one-minute observation intervals, 1:00–1:59 through 19:00–19:59, in two columns; 20 intervals total]
Correct Responses: _____    + Interactions: ______
Total OTRs: _____    − Interactions: ______
Student Success %: _____    +:− Ratio: ____:____
Pacing (OTRs/min): _____    Intervals On Task: ____ / 20
Comments, general student behavior, etc.: _____________________________________
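The summary fields at the bottom of this form can be computed as in the sketch below. The formulas (success % as correct responses over total OTRs; pacing as OTRs per minute of observation) are this sketch's reading of the form, not spelled out in the book:

```python
def summarize_observation(correct, total_otrs, positives, redirects,
                          minutes, on_task, total_intervals=20):
    """Fill in the form's summary fields from the raw tallies."""
    return {
        "success_pct": round(100.0 * correct / total_otrs, 1) if total_otrs else 0.0,
        "pacing_otrs_per_min": round(total_otrs / minutes, 2) if minutes else 0.0,
        "pos_to_redirect": f"{positives}:{redirects}",
        "on_task_pct": round(100.0 * on_task / total_intervals, 1),
    }
```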
Observation Form
Describe:
_____ Number of Students in the class
Where is student seated in the classroom?
1. Describe the specific expectations (instructions given, behaviors/actions expected, work to be
completed, etc.).
Mark time every 5 minutes or less. Briefly describe what the teacher says or does
(or the activity, expectations, etc.) and what the student says or does. Examine data
for patterns or relationships between the environment and the student’s behavior or
performance.
Instruction:
• What effective teaching practices do you see that benefit the student?
• How is the student’s instruction differentiated?
• Any modifications or accommodations?
• Can student transition from one task to another? Require follow-up re-teaching or prompting
after whole-class directions are given?
Curriculum:
• What materials are being used (at grade level)?
• What task-related skills do you see that the student demonstrates or does not demonstrate?
(e.g., raising hand, knowing how to get help, cleaning area and preparing for next activity, etc.)
Environment:
• Where is the student seated (away from noise, busy areas, etc.)?
• How is the noise level?
• What positive behavior/motivational/discipline strategies do you see?
• How many positives to redirects are observed? (Track positives to redirects)
• Describe how the teacher interacts with the student
Learner:
• How is the student’s on-task behavior compared to other students?
• How successful does the student appear with the task?
Glossary
Alphabetic Principle The relationship between phonemes and printed letters and
letter patterns.
Assessment The process of gathering information to make decisions.
Curriculum The scope and sequence of knowledge and skills that students are
intended to learn. The “what” of teaching.
Curriculum-Based Assessment (CBA) Any tool used to assess student performance
within the curriculum to inform instruction. For example, examining a
student’s score on a math test compared to classmates, or identifying independent
reading levels. CBA is a broad category under which CBM and CBE fall.
Curriculum-Based Evaluation (CBE) A systematic and fluid problem-solving
process in which various assessment activities and tasks are used to identify
missing skills and to design and evaluate instruction.
Curriculum-Based Measurement (CBM) A reliable and valid standardized
assessment method used to monitor basic academic skills over time.
Decoding The process of using letter-sound relationships to read a word. Decoding
involves breaking apart the sounds of a printed word and re-assembling those
sounds to read the word.
Diagnostic Assessment The process of gathering data to determine student
strengths and weaknesses. Diagnostic assessments tease apart broad skills into
discrete skills to pinpoint specific strengths and weaknesses.
Evaluation The process of gathering and synthesizing data from multiple sources
of information to make decisions.
Evidence-Based Practices Practices and instructional strategies that have been
developed using research and have documented results demonstrating effectiveness
(see also “Research-Based Practices”).
Fidelity (Treatment Integrity) The extent to which a plan is implemented as it
was originally designed to be implemented.
Fluency The ability to read words, phrases, sentences, paragraphs, and passages
with automaticity, accuracy, and prosody (i.e., intonation).
Formative Assessment A range of formal and informal assessments used during
instruction and designed to modify instruction and improve student outcomes.
References
Abbott, M., Wills, H., Kamps, D., Greenwood, C. R., Dawson-Bannister, H., Kaufman, J., et al.
(2008). The Kansas reading and behavior center’s K-3 prevention model. In C. Greenwood,
T. Kratochwill, & M. Clements (Eds.), Schoolwide prevention models: Lessons learned in
elementary schools (pp. 215–265). New York: Guilford.
AIMSweb. (n. d.). AIMSweb national norms tables. Retrieved from www.aimsweb.com.
Ainsworth, L. B., & Viegut, D. J. (2006). Common formative assessments: How to connect
standards-based instruction and assessment. Thousand Oaks: Corwin Press.
Algozzine, B., Cooke, N., White, R., Helf, S., Algozzine, K., & McClanahan, T. (2008). The North
Carolina reading and behavior center’s K-3 prevention model: Eastside elementary school case
study. In C. Greenwood, T. Kratochwill, & M. Clements (Eds.), Schoolwide prevention models:
Lessons learned in elementary schools (pp. 173–214). New York: The Guilford Press.
Algozzine, B., Wang, C., White, R., Cooke, N., Marr, M. B., Algozzine, K., et al. (2012). Effects of
multi-tier academic and behavior instruction on difficult-to-teach students. Exceptional Chil-
dren, 79(1), 45–64.
Archer, A. L., & Hughes, C. A. (2010). Explicit instruction: Effective and efficient teaching.
New York: Guilford.
Armbruster, B. B., Lehr, F., & Osborn, J. (2001). Put reading first: The research building blocks
for teaching children to read. National Institute for Literacy, The Partnership for Reading.
Ash, G. E., Kuhn, M. R., & Walpole, S. (2009). Analyzing “inconsistencies” in practice: Teachers’
continued use of round robin reading. Reading & Writing Quarterly, 25, 87–103.
Baker, S. K., Simmons, D. C., & Kame’enui, E. J. (1997). Vocabulary acquisition: Research
bases. In D. C. Simmons & E. J. Kame’enui (Eds.), What reading research tells us about chil-
dren with diverse learning needs: Bases and basics (pp. 183–218). Mahwah: Erlbaum.
Baldi, S., Jin, Y., Skemer, M., Green, P. J., & Herget, D. (2007). Highlights from PISA 2006:
Performance of U.S. 15-year-old students in science and mathematics literacy in an inter-
national context. Washington, DC: National Center for Education Statistics, Institute of
Education Sciences, U.S. Department of Education. Retrieved from http://nces.ed.gov/
pubs2008/2008016.pdf.
Barnes, A. C., & Harlacher, J. E. (2008). Response-to-intervention as a set of principles: Clearing
the confusion. Education & Treatment of Children, 31(1), 417–431.
Barnes, G., Crowe, E., & Schaefer, B. (2007). The cost of teacher turnover in five school districts.
Washington, DC: National Commission on Teaching and America’s Future. Retrieved from
http://nctaf.org/wp-content/uploads/CTTExecutiveSummaryfinal.pdf.
Bear, D. R., Invernizzi, M., Templeton, S., & Johnston, F. (2007). Words their way: Word study for
phonics, vocabulary, and spelling instruction (4th ed.). New Jersey: Prentice Hall.
Begeny, J., & Silber, J. (2006). An examination of group-based treatment packages for increasing
elementary-aged students’ reading fluency. Psychology in the Schools, 43(2), 183.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education:
Principles, Policy & Practice, 5(1), 7–74.
Bohanon, H., Fenning, P., Carney, K. L., Minnis-Kim, M. J., Moroz, K. B., Hicks, K. J., et al.
(2006). Schoolwide application of positive behavior support in an urban high school: A case
study. Journal of Positive Behavior Interventions, 8(3), 131–145.
Braden, J. P., & Shaw, S. R. (2009). Intervention utility of cognitive assessments. Assessment for
Effective Intervention, 34(2), 106–115.
Brady, K., & Woolfson, L. (2008). What teacher factors influence their attributions for chil-
dren’s difficulties in learning? British Journal of Educational Psychology, 78(4), 527–544.
doi: 10.1348/000709907X268570.
Brookhart, S. M. (2003). Developing measurement theory for classroom assessment purposes and
uses. Educational Measurement Issues and Practice, 22(4), 5–12.
Brown-Chidsey, R., & Steege, M. W. (2010). Response to intervention: Principles and strategies
for effective practice. New York: Guilford.
Brown-Chidsey, R., Bronaugh, L., & McGraw. K. (2009). RTI in the classroom: Guidelines and
recipes for success. New York: Guilford Press.
Buehl, D. (2008). Classroom strategies for interactive learning. Newark: International Reading Association.
Burns, M. K. (2008). Response to instruction at the secondary level. Principal Leadership, 8(7),
12–15.
Burns, M. K., Appleton, J. L., & Stehouwer, J. D. (2005). Meta-analytic review of responsiveness-
to-intervention research: Examining field-based and research-implemented models. Journal of
Psychoeducational Assessment, 23, 381–394.
Burns, M. K., & Parker, D. C. (n. d.). Using instructional level as a criterion to target reading interven-
tions. Retrieved from http://www.cehd.umn.edu/reading/documents/reports/Burns-Parker-2010.
pdf.
Burns, M. K., Riley-Tillman, T. C., & VanDerHeyden, A. M. (2012). RTI applications: Academic and
behavioral interventions (Vol. 1). New York: Guilford Press.
Bush, T. W., Pederson, K., Espin, C. A., & Weissenburger, J. W. (2001). Teaching students with
learning disabilities: Perceptions of a first-year teacher. The Journal of Special Education,
35(2), 92–99.
Carroll, T. G., & Foster, E. (2008). Learning teams: Creating what’s next. Washington, DC:
National Commission on Teaching and America’s Future. Retrieved from http://nctaf.org/
wp-content/uploads/2012/01/NCTAFLearningTeams408REG2.pdf.
Carnine, D. W., Silbert, J., Kame’enui, E. J., & Tarver, S. G. (2009). Direct instruction reading.
New Jersey: Pearson.
Chard, D. J., Vaughn, S., & Tyler, B. (2002). A synthesis of research on effective interventions
for building reading fluency with elementary students with learning disabilities. Journal of
Learning Disabilities, 35(5), 386–406.
Chenoweth, K. (2009). It can be done, it’s being done, and here’s how. Kappan, 91(1), 38–43.
Christ, T. J. (2008). Best practices in problem analysis. In A. Thomas & J. Grimes (Eds.), Best
practices in school psychology V (pp. 159–176). Bethesda: National Association of School
Psychologists.
Christ, T. J. (2010). Curriculum-based measurement of oral reading (CBM-R): Summary and
discussion of recent research-based guidelines for progress monitoring. Workshop presented
at Minnesota Center for Reading Research, 2010 Workshop. Retrieved from http://www.cehd.
umn.edu/reading/events/AugWkshop2010/MCRR-8-11-10-TChrist.pdf.
Christ, T. J., Zopluoglu, C., Long, J. D., & Monaghen, B. D. (2012). Curriculum-based measure-
ment of oral reading: Quality of progress monitoring outcomes. Exceptional Children, 78(3),
356–373.
Clarke, B., & Shinn, M. R. (2002). Test of Early Numeracy (TEN). Administration and scoring
of AIMSweb early numeracy measures for use with AIMSweb. Bloomington: NCS
Pearson, Inc.
Clay, M. M. (1993). An observation study of early literacy achievement. Portsmouth: Heinemann.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale:
Lawrence Erlbaum Associates.
Fisher, D., Grant, M., Frey, N., & Johnson, C. (2008). Taking formative assessment schoolwide.
Educational Leadership, 65(4), 64–68.
Fixsen, D., Naoom, S., Blase, K., & Wallace, F. (2007, Winter/Spring). Implementation: The
missing link between research and practice. The APSAC Advisor, 4–10.
Fleischman, H. L., Hopstock, P. J., Pelczar, M. P., & Shelley, B. E. (2010). Highlights from PISA
2009: Performance of U.S. 15-year-old students in reading, mathematics, and science literacy
in an international context. Washington, DC: U.S. Government Printing Office. Retrieved
from http://nces.ed.gov/pubs2011/2011004.pdf.
Fletcher, J. M., & Lyon, G. R. (1998). Reading: A research-based approach. In W. M. Evers (Ed.),
What’s gone wrong in America’s classrooms (pp. 49–90). Stanford: Hoover Institution Press.
Foegen, A., Jiban, C., & Deno, S. (2007). Progress monitoring measures in mathematics: A review
of the literature. The Journal of Special Education, 41(2), 121–139.
Fuchs, L. S., Fuchs, D., & Maxwell, L. (1988). The validity of informal comprehension measures.
Remedial and Special Education, 9, 20–28.
Gable, R. A., Hester, P. H., Rock, M. L., & Hughes, K. G. (2009). Back to basics: Rules, praise,
ignoring, and reprimands revisited. Intervention in School and Clinic, 44(4), 195–205.
Gibbons, K., & Silberglitt, B. (2008). Best practices in evaluating psychoeducational services
based on student outcome data. In A. Thomas & J. Grimes (Eds.), Best practices in school
psychology V (pp. 2103–2116). Bethesda: NASP Publications.
Gibson, C., & Jung, K. (2002). Historical census statistics on population totals by race, 1790
to 1990, and by hispanic origin, 1970 to 1990, for the United States, regions, divisions, and
states. Washington, DC: US Census Bureau. Retrieved from http://www.census.gov/popula-
tion/www/documentation/twps0056/twps0056.html.
Goddard, Y. L., Goddard, R. D., & Tschannen-Moran, M. (2007). A theoretical and empirical
investigation of teacher collaboration for school improvement and student achievement in
public elementary schools. Teachers College Record, 109(4), 877–896.
Gonzalez, L., Stallone Brown, M., & Slate, J. R. (2008). Teachers who left the teaching profession:
A qualitative understanding. The Qualitative Report, 13(1), 1–11.
Good, R. H., Gruba, J., & Kaminski, R. (2002). Best practice in using Dynamic Indicators of
Basic Early Literacy Skills (DIBELS) in an outcomes-driven model. In A. Thomas & Grimes
(Eds.), Best practices in school psychology IV (pp. 679–699). Bethesda: National Association
of School Psychologists.
Good, R. H., & Kaminski, R. A. (2011). DIBELS next assessment manual. Eugene: Dynamic
Measurement Group.
Good, R. H., Simmons, D. C., & Smith, S. (1998). The importance and decision-making utility
of a continuum of fluency-based indicators of foundational reading skills for third-grade high-
stakes outcomes. Scientific Studies of Reading, 5(3), 257–288.
Graden, J. L., Stollar, S. A., & Poth, R. L. (2007). The Ohio integrated systems model: Overview
and lessons learned. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook
of response to intervention (pp. 288–299). New York: Springer.
Greenwood, C. R., Kratochwill, T. R., & Clements, M. (2008). Schoolwide prevention models:
Lessons learned in elementary schools. New York: Guilford Press.
Gresham, F. M., & Witt, J. C. (1997). Utility of intelligence tests for treatment planning, clas-
sification, and placement decisions: Recent empirical findings and future directions. School
Psychology Quarterly, 12(3), 249–267.
Griffin, J., & Hatterdorf, R. (2010). Successful RTI implementation in middle schools. Perspec-
tives on Language and Literacy, 36(2), 30–34.
Griffiths, A., VanDerHeyden, A. M., Parson, L. B., & Burns, M. K. (2006). Practical applications
of response-to-intervention research. Assessment for Effective Intervention, 32(1), 50–57.
Griffiths, A., Parson, L. B., Burns, M. K., VanDerHeyden, A., & Tilly, W. D. (2007). Response
to intervention: Research for practice. Alexandria: National Association of State Directors of
Special Education.
Grissmer, D. W., & Nataraj Kirby, S. (1987). Teacher attrition: The uphill climb to staff the nation’s
schools. Santa Monica: The RAND Corporation.
Gunter, P. L., Reffel, J., Barnett, C. A., Lee, J. L., & Patrick, J. (2004). Academic response rates in
elementary-school classrooms. Education & Treatment of Children, 27(2), 105–113.
Haager, D., Klingner, J., & Vaughn, S. (2007). Evidence-based reading practices for response to
intervention. Baltimore: Brookes Publishing.
Hall, T. (2002). Explicit instruction. Wakefield: National Center on Accessing the General Curric-
ulum. Retrieved from http://www.cast.org/publications/ncac/ncac_explicit.html.
Harlacher, J. E., Nelson Walker, N. J., & Sanford, A. K. (2010). The “I” in RTI: Research-based
factors for intensifying instruction. Teaching Exceptional Children, 42(6), 30–38.
Haring, N. G., Lovitt, T. C., Eaton, M. D., & Hansen, C. L. (1978). The fourth R: Research in the
classroom. Columbus: Charles E. Merrill Publishing Co.
Harn, B. A., Kame’enui, E. J., & Simmons, D. C. (2007). The nature and role of the third tier in
a prevention model for kindergarten students. In D. Haager, J. Klingner, & S. Vaughn (Eds.),
Evidence-based reading practices for response to intervention (pp. 161–184). Baltimore:
Brookes.
Hart, B., & Risley, R. T. (1995). Meaningful differences in the everyday experience of young
American children. Baltimore: Brookes.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement.
Florence: Routledge.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1),
81–112.
Hawken, L. S., Adolphson, S. L., Macleod, K. S., & Schumann, J. (2009). Secondary-tier inter-
ventions and supports. In W. Sailor, G. Dunlap, G. Sugai, & R. Horner (Eds.), Handbook of
positive behavior support (pp. 395–420). New York: Springer.
Haydon, T., Conroy, M. A., Scott, T. M., Sindelar, P. T., Barber, B. R., & Orlando, A. (2010). A
comparison of three types of opportunities to respond on student academic and social behav-
iors. Journal of Emotional and Behavioral Disorders, 18(1), 27–40.
Haydon, T., Mancil, G. R., & Van Loan, C. (2009). Using opportunities to respond in a general
education classroom: A case study. Education and Treatment of Children, 32(2), 267–278.
Hintze, J. M., & Conte, K. L. (1997). Oral reading fluency and authentic reading material: Crite-
rion validity of the technical features of CBM survey-level assessment. School Psychology
Review, 26(4), 535–553.
Hoagwood, K., & Johnson, J. (2003). School psychology: A public health framework I. From
evidence-based practices to evidence-based policies. Journal of School Psychology, 41, 3–21.
Hock, M., & Mellard, D. (2005). Reading comprehension strategies for adult literacy outcomes.
Journal of Adolescent & Adult Literacy, 49(3), 182–200.
Horner, R. H., Sugai, G., Todd, A. W., & Lewis-Palmer, T. (2005) School-wide positive behavior
support: An alternative approach to discipline in schools. In L. M. Bambara & L. Kern (Eds.),
Individualized supports for students with problem behaviors (pp. 359–390). New York:
Guilford Press.
Hosp, J. L. (2008). Best practices in aligning academic assessment with instruction. In A. Thomas
& J. Grimes (Eds.), Best practices in school psychology V (pp. 363–376). Bethesda: NASP
Publications.
Hosp, M. K., Hosp, J. L., & Howell, K. W. (2006). The ABCs of CBM. New York: The Guilford
Press.
Hosp, M. K., & MacConnell, K. L. (2008). Best practices in curriculum-based evaluation in early
reading. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology (pp. 377–396).
Bethesda: National Association of School Psychologists.
Howell, K. W. (2010). FAQs: Patterns of strengths and weaknesses instruction (Aptitude by treat-
ment interaction). [Personal writing]. Retrieved from http://www.wce.wwu.edu/Depts/SPED/
Forms/Resources%20and%20Readings/Learning%20Styles%20Instruction%204-2-10.pdf.
Howell, K. W., & Nolet, V. (2000). Curriculum-based evaluation: Teaching and decision making.
Belmont: Wadsworth.
Howell, K. W., Hosp, J. L., & Kurns, S. (2008). Best practices in curriculum-based evaluation. In
A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 349–362). Bethesda:
NASP Publications.
Ikeda, M. J., Grimes, J., Tilly, W. D., III, Allison, R., Kurns, S., & Stumme, J. (2002). Implementing
an intervention-based approach to service delivery: A case example. In M. Shinn, H. Walker,
& G. Stoner (Eds.), Interventions for academic and behavioral problems II: Preventative and
remedial approaches (pp. 53–69). Bethesda: National Association of School Psychologists.
Intervention Central. (n. d.). The instructional hierarchy: Linking stages of learning to effective
instructional techniques. Retrieved from http://www.interventioncentral.org/academic-inter-
ventions/general-academic/instructional-hierarchy-linking-stages-learning-effective-in.
Jenkins, J., & Larson, K. (1979). Evaluation of error-correction procedures for oral reading.
Journal of Special Education, 13, 145–156.
Jenkins, J. R., Larson, K., & Fleisher, L. (1983). Effects of error correction on word recognition
and reading comprehension. Learning Disability Quarterly, 6, 139–145.
Jimerson, S. R., Burns, M. K., & VanDerHeyden, A. (2007). Handbook of response to interven-
tion: The science and practice of assessment and intervention. New York: Springer.
Johnson, E. S., & Smith, L. (2008). Implementation of response to intervention at middle schools:
Challenges and potential benefits. Teaching Exceptional Children, 40(3), 46–52.
Johnston, P. H. (2011). Response to intervention in literacy. The Elementary School Journal,
111(4), 511–534.
Joseph, L. M. (2000). Using word boxes as a large group phonics approach in a first grade class-
room. Reading Horizons, 41(2), 117–127.
Kaiser, A. (2011). Beginning teacher attrition and mobility: Results from the first through third
waves of the 2007–08 beginning teacher longitudinal study. Washington, DC: US Department
of Education, National Center for Education Statistics. Retrieved from http://nces.ed.gov/
pubs2011/2011318.pdf.
Kame’enui, E. J., & Simmons, D. C. (1990). Designing instructional strategies: The prevention of
learning problems. Columbus: Merrill Publishing Company.
Kaminski, R., Cummings, K. D., Powell-Smith, K. A., & Good, R. H. (2008). Best practices in
using dynamic indicators or basic early literacy skills for formative assessment and evaluation.
In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 1181–1204).
Bethesda: National Association of School Psychologists.
Kansas State Department of Education. (2011). Kansas multi-tier system of supports: Collab-
orative team workbook reading. Topeka: Kansas MTSS Project, Kansas Technical Assistance
System Network.
Kansas Multi-Tier System of Supports. (n. d.). Overview. Retrieved from http://www.kansasmtss.
org.
Kaplan, R. M., & Saccuzzo, D. P. (2008). Psychological testing: Principles, applications, and
issues. Belmont: Wadsworth Publishing.
Keigher, A. (2010). Teacher attrition and mobility: Results from the 2008–09 teacher follow-up
survey. Washington, DC: US Department of Education, National Center for Education Statis-
tics. Retrieved from http://nces.ed.gov/pubs2010/2010353.pdf.
Kennedy, C. H. (2005). Single-case design for educational research. Boston: Allyn and Bacon.
Kim, J. (2011). Relationships among and between ELL status, demographic characteristics,
enrollment history, and school persistence. Los Angeles: University of California, National
Center for Research on Evaluation, Standards, and Student Testing (CRESST). Retrieved from
http://www.cse.ucla.edu/products/reports/R810.pdf.
Kingston, N., & Nash, B. (2011). Formative assessment: A meta-analysis and call for research.
Educational Measurement: Issues and Practice, 30(4), 28–37.
Klingner, J. K. (2004). Assessing reading comprehension. Assessment for Effective Intervention, 29,
59–70. doi: 10.1177/073724770402900408.
Kuhn, M. R., & Stahl, S. A. (2003). Fluency: A review of developmental and remedial practices.
Journal of Educational Psychology, 95(1), 3–21.
Lafferty, A. E., Gray, S., & Wilcox, M. J. (2005). Teaching alphabetic knowledge to pre-school chil-
dren with developmental language delay and typical language development. Child Language
Teaching and Therapy, 21(3), 263–277. doi: 10.1191/0265659005ct292oa.
Lague, K. M., & Wilson, K. (2010). Using peer tutors to improve reading comprehension. Kappa
Delta Pi, 46(4), 182–186.
Landers, E., Alter, P., & Servilio, K. (2008). Students’ challenging behavior and teachers’ job
satisfaction. Beyond Behavior, 18(1), 26–33.
Lemke, M., Sen, A., Pahlke, E., Partelow, L., Miller, D., Williams, T., et al. (2004). International
outcomes of learning in mathematics literacy and problem solving: PISA 2003 results from the
U.S. perspective. Washington, DC: US Department of Education, National Center for Educa-
tion Statistics.
Lenz, B. K., & Hughes, C. A. (1990). A word identification strategy for adolescents with learning
disabilities. Journal of Learning Disabilities, 23(3), 149–163. doi: 10.1177/002221949002300304.
LeVasseur, V. M., Macaruso, P., & Shankweiler, D. (2008). Promoting gains in reading fluency:
A comparison of three approaches. Reading and Writing: An Interdisciplinary Journal, 21(3),
205–230.
Liff Manz, S. (2002). A strategy for previewing textbooks: Teaching readers to become THIEVES.
The Reading Teacher, 55(5), 434–435.
Lo, Y., Cooke, N. L., & Starling, A. L. (2011). Using a repeated reading program to improve
generalization of oral reading fluency. Education and Treatment of Children, 34(1), 115–140.
Lovelace, S., & Stewart, S. R. (2007). Increasing print awareness in preschoolers with language
impairment using non-evocative print referencing. Language, Speech, and Hearing Services in
Schools, 38, 16–30. doi: 0161-1461/06/3801-0016.
Malone, R. A., & McLaughlin, T. F. (1997). The effects of reciprocal peer tutoring with a group
contingency on quiz performance in vocabulary with seventh- and eighth-grade students.
Behavioral Interventions, 12, 27–40.
Mandinach, E. B. (2012). A perfect time for data use: Using data-driven decision making to inform
practice. Educational Psychologist, 47(2), 71–85.
Marchand-Martella, N. E., Ruby, S. F., & Martella, R. C. (2007). Intensifying reading instruction for
students within a three-tier model: Standard-protocol and problem solving approaches within
a Response-to-Intervention (RTI) system. Teaching Exceptional Children Plus, 3(5). Retrieved
from http://journals.cec.sped.org/cgi/viewcontent.cgi?article=1313&context=tecplus.
Marzano, R. J. (2010). Formative assessment and standards-based grading. Bloomington:
Marzano Research Laboratory.
Marzano, R. J., & Pickering, D. J. (2005). Building academic vocabulary: Teacher’s manual.
Alexandria: Association for Supervision and Curriculum Development.
Maslanka, P., & Joseph, L. M. (2002). A comparison of two phonological awareness tech-
niques between samples of preschool children. Reading Psychology, 23(4), 271–288.
doi:10.1080/713775284.
McCandliss, B., Beck, I. L., Sandak, R., & Perfetti, C. (2003). Focusing attention on decoding for
children with poor reading skills: Design and preliminary tests of the word building intervention. Scientific Studies of Reading, 7(1), 75–104. doi: 10.1207/S1532799XSSR0701_05.
McCarthy, P. A. (2008). Using sound boxes systematically to develop phonemic awareness. The
Reading Teacher, 62(4), 346–349. doi: 10.1598/RT.62.4.7.
McCurdy, B. L., Mannella, M. C., & Norris, E. (2003). Positive behavior support in urban schools:
Can we prevent the escalation of antisocial behavior? Journal of Positive Behavior Interven-
tions, 5(3), 158–170.
McDonald Connor, C., Piasta, S. B., Fishman, B., Glasney, S., Schatschneider, C., Crowe, E., et al.,
(2009). Individualizing student instruction precisely: Effects of child x instruction interactions
on first graders’ literacy development. Child Development, 80(1), 77–100.
McIntosh, K., Goodman, S., & Bohanon, H. (2010). Toward true integration of academic and
behavior response to interventions systems: Part one: Tier 1 support. Communiqué, 39(2), 1,
14–16.
McIntosh, K., Horner, R. H., Chard, D. J., Boland, J. B., & Good, R. H. (2006). The use of reading
and behavior screening measures to predict nonresponse to school-wide positive behavior
support: A longitudinal analysis. School Psychology Review, 35(2), 275–291.
McGlinchey, M. T., & Hixson, M. D. (2004). Using curriculum-based measurement to predict
performance on state assessment in reading. School Psychology Review, 33(2), 193–203.
Merrell, K. W., Ervin, R. A., & Gimpel, G. A. (2006). School psychology for the 21st century. New
York: The Guildford Press.
Miura Wayman, M., Wallace, T., Ives Wiley, H., Tichá, R., & Espin, C. A. (2007). Literature
synthesis on curriculum-based measurement in reading. The Journal of Special Education,
41(2), 85–120.
Moats, L. (1999). Teaching reading IS rocket science: What expert teachers of reading should know and be able to do. American Federation of Teachers. Retrieved from http://www.louisamoats.com/Assets/Reading.is.Rocket.Science.pdf.
Musti-Rao, S., Hawkins, R. O., & Barkley, E. A. (2009). Effects of repeated readings on the oral
reading fluency of urban fourth-grade students: Implications for practice. Preventing School
Failure, 54(1), 12–23.
Meyer, L. A. (1982). The relative effects of word-analysis and word-supply correction procedures
with poor readers during word-attack training. Reading Research Quarterly, 4, 544–555.
National Association of State Directors of Special Education (NASDSE). (2005). Response to
intervention: Policy considerations and implementation. Alexandria: Author.
National Center for Educational Statistics. (2011a). Reading 2011: National assessment of educa-
tional progress at grades 4 and 8. Washington, DC: National Center for Education Statistics,
Institute of Education Sciences, U.S. Department of Education. Retrieved from http://nces.
ed.gov/nationsreportcard/pdf/main2011/2012457.pdf.
National Center for Educational Statistics. (2011b). Math 2011: National assessment of educa-
tional progress at grades 4 and 8. Washington, DC: National Center for Education Statistics,
Institute of Education Sciences, U.S. Department of Education. Retrieved from http://nces.
ed.gov/nationsreportcard/pdf/main2011/2012458.pdf.
National Center for Educational Statistics. (2011c). Digest of education statistics: 2011. Wash-
ington, DC: US Department of Education, Institute for Education Services, US Department of
Education. Retrieved from http://nces.ed.gov/programs/digest/d11.
National Institute of Child Health and Human Development (NICHHD). (2000). Report of the
national reading panel. Teaching children to read: an evidence-based assessment of the scien-
tific research literature on reading and its implications for reading instruction. Retrieved from
http://www.nichd.nih.gov/publications/nrp/smallbook.htm.
National Research Council. (2000). How people learn: Brain, mind, experience, and school.
Washington, DC: National Academy Press.
Nelson, J. M., & Machek, G. R. (2007). A survey of training, practice, and competence in reading
assessment and intervention. School Psychology Review, 36(2), 311–327.
Netzel, D. M., & Eber, L. (2003). Shifting from reactive to proactive discipline in an urban school
district: A change of focus through PBIS implementation. Journal of Positive Behavior Inter-
ventions, 5(2), 71–79.
Newmann, F. M., Smith, B., Allensworth, E., & Bryk, A. S. (2001). Instructional program coher-
ence: What it is and why it should guide school improvement. Educational Evaluation and
Policy Analysis, 23(4), 297–321. doi: 10.3102/01623737023004297.
Office of Special Education Programs (OSEP). (2011). 30th annual report to congress on the
implementation of the individuals with disabilities education act, 2008. Washington, DC: US
Department of Education, Office of Special Education and Rehabilitative Services, Office of
Special Education Programs.
Organisation for Economic Co-Operation and Development (OECD). (2001). Knowledge and
skills for life: First results from the OECD Programme for International Student Assess-
ment (PISA) 2000. OECD. Retrieved from http://www.oecd.org/edu/preschoolandschool/
programmeforinternationalstudentassessmentpisa/33691596.pdf.
Ortiz, S. O., Flanagan, D. P., & Dynda, A. M. (2008). Best practices in working with cultur-
ally diverse children and families. In A. Thomas & J. Grimes (Eds.), Best practices in school
psychology V (pp. 1721–1738). Bethesda: National Association of School Psychologists.
Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and evidence.
Psychological Science in the Public Interest, 9(3), 105–119.
Pearson, Inc. (2012a). AIMSweb: Test of early literacy administration and scoring guide. Bloom-
ington: NCS Pearson, Inc. Retrieved from http://www.aimsweb.com/wp-content/uploads/
TEL_Admin_Scoring-Guide_2.0.pdf.
Pearson, Inc. (2012b). AIMSweb: Progress monitoring guide. Bloomington: NCS Pearson, Inc.
Perfetti, C., & Adlof, S. M. (2012). Reading comprehension: A conceptual framework from
word meaning to text meaning. In J. Sabatini, E. Albro, & T. O’Reilly (Eds.), Measuring up:
Advances in how we assess reading ability. Lanham: R & L Education.
Peshak George, H., Kincaid, D., & Pollard-Sage, J. (2009). Primary-tier interventions and supports.
In W. Sailor, G. Dunlap, G. Sugai, & R. Horner (Eds.), Handbook of positive behavior support
(pp. 375–394). New York: Springer.
Pyle, N., & Vaughn, S. (2012). Remediating reading difficulties in a response to intervention
model with secondary students. Psychology in the Schools, 49(3), 273–284. doi: 10.1002/pits.
Partnership for Reading. (2001). Fluency: An introduction. Retrieved from http://www.readin-
grockets.org/article/3415.
Phillips, B. M., Clancy-Menchetti, J., & Lonigan, C. J. (2008). Successful phonological awareness
instruction with preschool children. Topics in Early Childhood Special Education, 28(1), 3–17.
doi: 10.1177/0271121407313813.
Rasinski, T. V. (1994). Developing syntactic sensitivity in reading through phrase-cued texts.
Intervention in School and Clinic, 29, 165–168. doi: 10.1177/105345129402900307.
Rathvon, N. (2008). Effective school interventions (2nd ed.). New York: The Guilford Press.
Reitsma, P. (1983). Printed word learning in beginning readers. Journal of Experimental Child
Psychology, 36, 321–339.
Reschly, D. J. (2008). School psychology paradigm shift and beyond. In A. Thomas & J. Grimes
(Eds.), Best practices in school psychology V (pp. 3–15). Bethesda: National Association of
School Psychologists.
Restori, A. F., Gresham, F. M., & Cook, C. R. (2008). Old habits die hard: Past and current issues
pertaining to Response-to-Intervention. The California School Psychologist, 13, 67–78.
Rhodes, R., Ochoa, S. H., & Ortiz, S. O. (2005). Comprehensive assessment of culturally and
linguistically diverse students: A practical approach. New York: Guilford.
Rolison, M. A., & Medway, F. J. (1985). Teachers' expectations and attributions for student
achievement: Effects of label, performance pattern, and special education intervention. Amer-
ican Educational Research Journal, 22(4), 561–573.
Sáenz, L. M., Fuchs, L. S., & Fuchs, D. (2005). Peer-assisted learning strategies for English
language learners with learning disabilities. Exceptional Children, 71(3), 231–247.
Samson, J. F., & Lesaux, N. K. (2009). Language-minority learners in special education: Rates
and predictors of identification for services. Journal of Learning Disabilities, 42(2), 148–162.
Schmoker, M. J. (2006). Results now: How we can achieve unprecedented improvement in teaching
and learning. Alexandria: Association for Supervision & Curriculum Development.
Scott, T. M., Anderson, C., Mancil, R., & Alter, P. (2009). Function-based supports for individual
students in school settings. In W. Sailor, G. Dunlap, G. Sugai, & R. Horner (Eds.), Handbook
of positive behavior support (pp. 421–442). New York: Springer.
Shapiro, E. S. (2008). Best practices in setting progress monitoring goals for academic skill
improvement. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp.
141–158). Bethesda: National Association of School Psychologists.
Shinn, M. R. (2002a). Best practices in using curriculum-based measurement in a problem solving
model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 671–697).
Bethesda: National Association of School Psychologists.
Shinn, M. R. (2002b). AIMSweb training workbook: Strategies for writing individualized goals in
general curriculum and more frequent formative evaluation. Eden Prairie: Edformation, Inc.
Shinn, M. R. (2008). Best practices in curriculum-based measurement and its use in a problem-
solving model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V
(pp. 243–262). Bethesda: National Association of School Psychologists.
Shinn, M. R., & Shinn, M. M. (2002). AIMSweb training workbook. Administration and scoring
of reading curriculum-based measurement (R-CBM) in general outcome measurement.
Eden Prairie: Edformation Inc. Retrieved from http://aimsweb.com/uploads/pdfs/Manuals/
RCBM%20Manual.pdf.
Silberglitt, B., Burns, M. K., Madyun, N. H., & Lail, K. E. (2006). Relationship of reading fluency
assessment data with state accountability test scores: A longitudinal comparison of grade
levels. Psychology in the Schools, 43(5), 527–535. doi: 10.1002/pits.20175.
Silberglitt, B., & Hintze, J. M. (2007). How much growth can we expect? A conditional analysis of
R-CBM growth rates by level of performance. Exceptional Children, 74, 71–84.
Simonsen, B., Fairbanks, S., Briesch, A., & Sugai, G. (2006). Positive behavior support classroom
management: Self-assessment revised. OSEP Positive Behavioral Interventions and Support.
US Office of Special Education Programs.
Simonsen, B., Myers, D., & DeLuca, C. (2010). Teaching teachers to use prompts, opportunities to
respond, and specific praise. Teacher Education and Special Education, 33(4), 300–318. doi:
10.1177/0888406409359905.
Singh, N. N. (1990). Effects of two error-correction procedures on oral reading errors. Behavior
Modification, 14(2), 188–199.
Singh, N. N., & Singh, J. (1986). Increasing oral reading proficiency. Behavior Modification,
10(1), 115–130.
Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to improve
student achievement: Review of research. Psychology in the Schools, 42(8), 795–819.
Stichter, J. P., Lewis, T. J., Whittaker, T., Richter, M., Johnson, N., & Trussell, R. (2009). Assessing
teacher use of opportunities to respond and effective classroom management strategies within
inclusive classrooms: Comparisons among high and low risk elementary schools. Journal of
Positive Behavior Interventions, 11, 68–81.
Stiggins, R., & Chappuis, J. (2006). What a difference a word makes: Assessment for learning
rather than assessment of learning helps students succeed. Journal of Staff Development, 27(1),
10–14.
Stiggins, R., & DuFour, R. (2009). Maximizing the power of formative assessments. Phi Delta
Kappan, 90(9), 640–644.
Stuebing, K. K., Barth, A. E., Molfese, P. J., Weiss, B., & Fletcher, J. M. (2009). IQ is not strongly
related to response to reading instruction: A meta-analytic interpretation. Exceptional Children,
76(1), 31–51.
Sugai, G., & Horner, R. (2006). A promising approach for expanding and sustaining school-wide
positive behavior support. School Psychology Review, 35(2), 245–259.
Sugai, G., & Horner, R. (2009). Defining and describing schoolwide positive behavior support. In
W. Sailor, G. Dunlap, G. Sugai, & R. Horner (Eds.), Handbook of positive behavior support
(pp. 307–326). New York: Springer.
Sullivan, A. L. (2011). Disproportionality in special education identification and placement of
English language learners. Exceptional Children, 77(3), 317–334.
Sutherland, K., Alder, N., & Gunter, P. L. (2003). The effect of varying rates of opportunities to
respond to academic requests on the classroom behavior of students with EBD. Journal of
Emotional and Behavioral Disorders, 11, 239–248.
Sutherland, K. S., Wehby, J. H., & Copeland, S. R. (2000). Effect of varying rates of behavior-
specific praise on the on-task behavior of students with EBD. Journal of Emotional and Behav-
ioral Disorders, 8(1), 2–8.
Taylor-Greene, S., Brown, D., Nelson, L., Longton, J., Gassman, T., Cohen, J., et al. (1997).
School-wide behavioral support: Starting the year off right. Journal of Behavioral Education,
7(1), 99–112.
Therrien, W. J. (2004). Fluency and comprehension gains as a result of repeated reading. Remedial
and Special Education, 25(4), 252–261.
Therrien, W. J., Kirk, J. F., & Woods-Groves, S. (2012). Comparison of a reading fluency
intervention with and without passage repetition on reading achievement. Remedial and Special
Education, 33(5), 309–319.
Thorndike, R. M., & Thorndike-Christ, T. (2010). Measurement and evaluation in psychology and
education (8th ed.). New York: Pearson.
Tilly, W. D., III. (2008). The evolution of school psychology to science-based practice: Problem-
solving and the three-tiered model. In A. Thomas & J. Grimes (Eds.), Best practices in school
psychology V (pp. 17–35). Bethesda: National Association of School Psychologists.
Tomlinson, C. A., & Britt, S. (2012). Common core standards: Where does differentiation fit?
Webinar available at www.ascd.org/professional-development/webinars/tomlinson-and-britt-
webinar.aspx.
Torgesen, J. K. (2000). Individual differences in response to early interventions in reading: The
lingering problem of treatment resisters. Learning Disabilities Research & Practice, 15(1),
55–64.
Treptow, M. A., Burns, M. K., & McComas, J. J. (2007). Reading at the frustration, instructional,
and independent levels: Effects on student time on task and comprehension. School Psychology
Review, 36, 159–166.
UNICEF. (2002). A league table of educational disadvantage in rich nations, Innocenti Report
Card No. 4. Florence: UNICEF Innocenti Research Centre.
US Census Bureau. (2011a). Overview of race and Hispanic origin: 2010. US Department of
Commerce, Economics and Statistics Administration. Retrieved from http://www.census.gov/
prod/cen2010/briefs/c2010br-02.pdf.
US Census Bureau. (2011b). Living arrangements of children: 2009. US Department of Commerce,
Economics and Statistics Administration. Retrieved from https://www.census.gov/prod/2011pubs/
p70-126.pdf.
US Department of Education. (2012). Digest of education statistics, 2011. National Center for
Education Statistics. Retrieved from http://nces.ed.gov/fastfacts/display.asp?id=64.
VanDerHeyden, A. M., & Witt, J. C. (2008). Best practices in can’t do/won’t do assessment. In A.
Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 131–140). Bethesda:
National Association of School Psychologists.
VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects
of a response to intervention (RTI) model on identification of children for special education.
Journal of School Psychology, 45, 225–256.
Vaughn, S., Cirino, P. T., Wanzek, J., Wexler, J., Fletcher, J. M., Denton, C. D., et al. (2010).
Response to intervention for middle school students with reading difficulties: Effects of a
primary and secondary intervention. School Psychology Review, 39(1), 2–21.
Vaughn, S., & Fletcher, J. (2010). Thoughts on rethinking response to intervention with secondary
students. School Psychology Review, 39(2), 296–299.
Vaughn, S., & Klingner, J. K. (1999). Teaching reading comprehension through collaborative
strategic reading. Intervention in School and Clinic, 34(5), 284–292.
Vaughn, S., & Linan-Thompson, S. (2004). Research-based methods of reading instruction.
Grades K-3. Alexandria: Association for Supervision and Curriculum Development.
Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to instruction as a means of
identifying students with reading/learning disabilities. Exceptional Children, 69(4), 391–409.
Vaughn, S., Wanzek, J. S., & Murray, G. (2012). Intensive interventions for students struggling in
reading and mathematics: A practice guide. Portsmouth: RMC Research Corporation, Center
on Instruction.
Vaughn, S., Wanzek, J., Woodruff, A. L., & Linan-Thompson, S. (2007). Prevention and early
identification of students with reading disabilities. In D. Haager, J. Klinger, & S. Vaughn
(Eds.), Evidence-based reading practices for response to intervention (pp. 11–27). Baltimore:
Brookes.
Viadero, D. (2011, October 19). Dropouts: Trends in high school dropout and completion rates in
the United States: 1972–2009. Education Week, 31(8), 4.
Walpole, S., & McKenna, M. C. (2007). Differentiated reading instruction: Strategies for the
primary grades. New York: The Guilford Press.
Walsh, K., Glaser, D., & Dunne Wilcox, D. (2006). What education schools aren’t teaching about
reading and what elementary teachers aren’t learning. National Council on Teacher Quality
(NCTQ).
Watkins, C. L., & Slocum, T. A. (2004). The components of direct instruction. Journal of Direct
Instruction, 3, 75–110.
White, R. B., Polly, D., & Audette, R. H. (2012). A case analysis of an elementary school’s imple-
mentation of response to intervention. Journal of Research in Childhood Education, 26, 73–90.
Wilkinson, L. A. (2006). Monitoring treatment integrity: An alternative to the ‘consult and hope’
strategy in school-based behavioural consultation. School Psychology International, 27(4),
426–438. doi: 10.1177/0143034306070428.
Wolfe, I. S. (2005). Fifty percent of new teachers leave in five years. The Total View. Retrieved
from http://www.super-solutions.com/teachershortages.asp#axzz1NE7Bf2gA.
Wolery, M. (2011). Intervention research: The importance of fidelity measurement. Topics in Early
Childhood Special Education, 31(3), 155–157.
Wong, H. K., & Wong, R. T. (2001). The first days of school: How to be an effective teacher.
Mountain View: Harry K. Wong Publications.
Woodcock, S., & Vialle, W. (2011). Are we exacerbating students’ learning disabilities? An
investigation of preservice teachers’ attributions of the educational outcomes of students with
learning disabilities. Annals of Dyslexia, 61, 223–241. doi: 10.1007/s11881-011-0058-9.
Yates, H. M., & Collins, V. K. (2006). How one school made the pieces fit. Journal of Staff
Development, 27(4), 30–35.
Yell, M. L., & Stecker, P. M. (2003). Developing legally correct and educationally meaningful
IEPs, using curriculum-based measurement. Assessment for Effective Intervention, 28, 73–88.
doi: 10.1177/073724770302800308.
Yoon, K. S., Duncan, T., Lee, S. W. Y., Scarloss, B., & Shapley, K. (2007). Reviewing the evidence
on how teacher professional development affects student achievement. Washington, DC: U.S.
Department of Education, Institute of Education Sciences, National Center for Education Eval-
uation and Regional Assistance, Regional Educational Laboratory Southwest. http://ies.ed.gov/
ncee/edlabs.
Ysseldyke, J., & Christenson, S. L. (1988). Linking assessment to intervention. In J. L. Graden,
J. E. Zins, & M. J. Curtis (Eds.), Alternative educational delivery systems: Enhancing instruc-
tional options for all students (pp. 91–110). Washington: National Association of School
Psychologists.
Ysseldyke, J., Burns, M. K., Scholin, S. E., & Parker, D. C. (2010). Instructionally valid assess-
ment within response to intervention. Teaching Exceptional Children, 42(4), 54–61.
Zhang, D., & Katsiyannis, A. (2002). Minority representation in special education: A persistent
challenge. Remedial and Special Education, 23(3), 180–187.
Index
A
Academic vocabulary 196, 214
  content specific and 214
Achievement 9, 23, 58
  academic 27
  and outcomes 19
  data for student 10
  gap 70
  outcomes 40
  positive effect of student 267
  positive gains in 25
  state level test 192
  student 17, 254, 256, 267
  teacher and student 17
  test 266, 268
Aim line 249, 253, 259
  essential components 247
Alphabetic knowledge 135, 138–140, 150
  assessment of 153, 155, 156, 163
Alphabetic principle 59, 97, 131, 135, 138, 139, 143, 150
Alterable variables 2, 20, 26, 40, 52, 65
  focusing on 52, 53
Assessment system
  comprehensive 34, 37, 38
Assumptions behind CBE 47, 48

B
Background knowledge 203
Big five areas of reading 58, 60

C
Content-specific vocabulary 197
  academic and 205
  resources for 196
Curriculum-based evaluation (CBE) 2, 20
  definition of 47
  framework for use 44
Curriculum-Based Measurement (CBM) 36, 47
  alphabetic knowledge 150
  characteristics 73
  use in 37, 267
  within assessment process 267

D
Decoding 3, 48, 79, 91, 98, 136, 196, 202, 206, 208, 256
  and arduously working 60
  and lack of vocabulary 198
  and phonics skills 85
  breakdown with 150
  CBE Process 79, 84, 99, 103, 143
  errors 86, 88–90, 113
  process 60
  reading comprehension skills 149
  self-monitoring assessment for 263
  skills 60, 85, 91, 131
  student 149, 195
Diagnostic assessment 31, 35, 36
  use 40

E
Effective practices 17, 19, 29, 45
  and data 45
  limited use of 11, 14, 15
Error analysis 100, 145
  coding sheets 111
  conduct 88, 90
  instructions 107
  overlap with 150
  tally sheet 114
Evaluation
  plan 98, 99, 150, 207, 243, 258, 263, 264

M
MAZE 196, 207
  probes 192, 208
  students 194
Multi-Tiered System of Supports (MTSS) 3, 19, 23
  description of 23, 24

N
Nonsense Word Fluency (NWF) 73, 139, 143, 150
  analysis of 150
  task 143

O
Oral Reading Fluency (ORF) 68, 80, 207, 265, 268, 269
  and MAZE 192, 209
  rates 27

P
Pattern of performance 248, 249
Percentile 26, 68, 71–73, 80, 84, 99, 138, 194
Phoneme 59, 60, 97, 135, 146, 150, 189
Phoneme Segmentation Fluency (PSF) 156
  use 150
Phonemic awareness 14, 59, 85, 135, 138, 150, 151
  skills 140, 141, 145, 147
  teach 146
Print concepts 135, 139, 142, 150, 153, 155, 156, 269
  teach 147
Problem-solving model
  use 19
Problem-solving model (PSM) 3, 7, 19, 20, 23, 44, 63
  plan evaluation 38, 41, 64
  plan implementation 38, 40
  problem analysis 38, 40
  problem identification 38, 64
  use 31, 45
Progress monitoring 3, 36, 65, 71, 101, 243
  data 30, 74
  graphs 258
  tools 37, 73, 267

R
Reliability
  and validity 264
  coefficients 265
  of data 252
Review, interview, observation, and testing (RIOT) 3, 54, 61
  and IH 53
  assessment framework and IH 47, 53
  framework 36

S
Screening assessment (screening) 35, 45, 80
Setting goals 68, 70
Specific-level assessment 53, 65, 85, 145, 203
  in CBE Process 270
Survey-level assessment (SLA) 64, 71, 80, 138, 139, 192, 201, 214, 263, 269
  MAZE 207
  with CBM 151, 209
  with reading CBM 101
Systems-level problem solving 41, 43

T
Tiers of instruction
  tier 1 32
  tier 2 33
  tier 3 33, 34
Trend line 247, 249, 253, 259

V
Validity 264, 265
Variability 252, 253
Vocabulary 14, 28, 59, 60, 90, 97, 195–197, 202–205
  list 207
  matching 208