
Practitioner’s Guide to

Curriculum-Based Evaluation in Reading


Jason E. Harlacher • Tami L. Sakelaris 
Nicole M. Kattelman

Jason E. Harlacher
Marzano Research Laboratory
9000 E. Nichols Ave. Ste. 112
Centennial, CO 80112

Tami L. Sakelaris
Washoe County School District
425 East Ninth Street
Reno, NV 89520

Nicole M. Kattelman
Washoe County School District
425 East Ninth Street
Reno, NV 89520

Additional material to this book can be downloaded from http://extra.springer.com

Printable Handouts can be downloaded on Springer.com by searching for “Practitioner’s
Guide to Curriculum-Based Evaluation in Reading” or by following this direct link: http://
www.springer.com/psychology/child+%26+school+psychology/book/978-1-4614-9359-4

ISBN 978-1-4614-9359-4     ISBN 978-1-4614-9360-0 (eBook)


DOI 10.1007/978-1-4614-9360-0
Springer New York Dordrecht Heidelberg London

Library of Congress Control Number: 2013955723

© Springer Science+Business Media New York 2014


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed. Exempted from this legal reservation are brief excerpts
in connection with reviews or scholarly analysis or material supplied specifically for the purpose of
being entered and executed on a computer system, for exclusive use by the purchaser of the work. Du-
plication of this publication or parts thereof is permitted only under the provisions of the Copyright Law
of the Publisher’s location, in its current version, and permission for use must always be obtained from
Springer. Permissions for use may be obtained through RightsLink at the Copyright Clearance Center.
Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publica-
tion does not imply, even in the absence of a specific statement, that such names are exempt from the
relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publica-
tion, neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors
or omissions that may be made. The publisher makes no warranty, express or implied, with respect to
the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


Foreword

Almost every school in the country has a problem-solving team, and research has
shown that effective problem-solving teams can improve both student (e.g., improved
reading skills, reduced behavioral difficulties, etc.) and systemic (e.g., reduced num-
ber of students retained in a grade, fewer students referred for special education, etc.)
outcomes (Burns and Symington, 2002). Yet, there continues to be a well-document-
ed deficiency in student reading and math skills in this country. How is that possible?
If almost every school in the country is convening a group of skilled professionals on
a weekly basis to brainstorm ideas for students who are experiencing difficulties, then
why is it that children continue to demonstrate skill deficiencies at an alarming rate?
The answers to the above questions are complex and go well beyond the scope of
this book. However, I suggest three reasons why problem-solving teams have not
led to more global positive outcomes: most school personnel (a) have unfortunate
misconceptions about assessment, (b) do not understand problem analysis, and
(c) do not contextualize interventions within a broader system. I will discuss each
of these below.

Assessment

Assessment is fundamental to effective instruction and intervention. Unfortunately,
most teachers and school personnel today hear the word assessment and immedi-
ately think of state accountability tests. When education first began in this country,
the goal of assessment was to identify the extremely high and low students in order
to rank them, and that remained a primary goal for decades (Reschly, 1996). The
recent accountability movement brought assessment back to the forefront of edu-
cational debates, but the focus was again on determining the haves and have-nots.
There certainly is a need to determine who has proficient skills, but that is a
summative decision with little utility for instruction, and it is not consistent with the
definition of assessment.
A subtle yet important contribution that Harlacher, Sakelaris, and Kattelman
make in this book is that they define assessment as “the process of gathering
information to make decisions” and use that definition as the framework for the entire book.

Many school-based professionals confuse the term ‘testing’ with assessment. There
are several types of data that can be used within any assessment process, one of
which may be standardized norm-referenced tests or data collected to judge student
proficiency. What matters most is not the type of test used, but that the data match
the purpose for which they are used. There are many high quality measures that
may provide excellent summative information, but do little to inform instruction.
There are also tools that provide excellent instructional information, but the data
lead to inaccurate screening decisions. As the authors point out, assessment should
be a dynamic process that is guided by the question being asked. It seems that few
school-based professionals truly understand how to select the appropriate data to
address the question and often rely on commercially prepared tests because those
are mandated by the district in which they work.

Problem Analysis

“Which intervention should I use?” That is by far the most common question that I
hear from school-based practitioners. Most problem-solving teams are quite good
at identifying a problem and may even collect data to determine if the problem
persists. However, very few fully understand the diagnostic assessment process
outlined in this book or are able to examine discrete sub-skills that contribute to a
problem. As was somewhat famously stated, most problem-solving teams do not
solve problems; they admire them (the actual source of that quote is unclear, but
most attribute it to Jim Ysseldyke at the University of Minnesota). In my experi-
ence, the essential attribute of an effective problem-solving team is that they use
data to analyze the problem and to determine the intervention. When the problem is
analyzed, which intervention to use becomes quite clear and the likelihood that the
intervention will be successful substantially increases.

Contextualized Within a Larger System

Imagine an elementary school with 600 students. On average, 20 % of students
need something more than effective instruction and curriculum (Burns, Appleton,
& Stehouwer, 2005). With 600 students, 120 of them would require some level of
support beyond quality core instruction. If the problem-solving team met each week,
spent 1 hour talking about 2 students (30 minutes each), and met 32 times throughout
the year, it would have time to discuss 64 students, leaving 56 students without a
meeting and no time for any follow-up meetings regarding the students the team
did discuss. Of course, one solution would be to meet
twice as often, but more than likely school personnel cannot conduct the level of
analysis that is needed for effective problem-solving to occur at the individual level
for 120 students. First, some lower level of analysis has to occur at the classroom
and group level. Stated in language commonly used within Multi-Tiered System of
Supports, you cannot have an effective Tier 3 without an effective Tier 2, and you
cannot have an effective Tier 2 without strong core instruction (Tier 1).
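A minimal sketch of the capacity arithmetic above, using only the illustrative figures from this example (600 students, roughly 20 % needing something beyond core instruction, 32 one-hour meetings covering 2 students each); the numbers are hypothetical, not data from any particular school:

# Sketch of the problem-solving team capacity arithmetic described above.
# All figures are the illustrative numbers from this example, not real school data.
enrollment = 600
share_needing_support = 0.20                 # roughly 20 % need more than core instruction
students_needing_support = int(enrollment * share_needing_support)   # 120

meetings_per_year = 32                       # weekly team meetings
students_per_meeting = 2                     # 30 minutes each within a 1-hour meeting
capacity = meetings_per_year * students_per_meeting                  # 64 students discussed once

never_discussed = students_needing_support - capacity                # 56 students never reach the table
print(students_needing_support, capacity, never_discussed)           # 120 64 56

Even under these generous assumptions, nearly half of the students who need more than core instruction are never discussed, and no time remains for follow-up on those who are.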
Most school personnel do not systematically conduct analyses at Tier 2. Instead,
all students receive the same intervention under the idea of standard protocol. However,
the term standard protocol does not mean that every student receives the same
intervention; it simply means that there are a few highly standardized interventions
from which to select for specific problems. For example, a student who needs better
decoding skills would likely not benefit from an intervention designed to enhance
comprehension. A low-level analysis to determine the broad category of the problem
can be used to identify the target of the intervention for small groups of students,
to which a standardized intervention could then be delivered.

Curriculum-Based Evaluation

The assessment-to-intervention process described by Harlacher, Sakelaris, and
Kattelman in this book directly addresses the three points described above. Curricu-
lum-based evaluation (CBE) is not new. In fact, it has its roots in precision teach-
ing, which was developed in the 1960s (Lindsley, 1991), and Deno and Mirkin’s
(1977) seminal Data-based program modification: A manual. Ken Howell’s book
Curriculum-based evaluation for special and remedial education: A handbook for
deciding what to teach (Howell & Morehead, 1987) was the first written reference
regarding CBE, which later became the 2000 book (Howell & Nolet, 2000) that is
commonly cited. CBE was used in practice in Iowa and other locations at which
Howell trained school staff before either book was published, but the implementa-
tion of CBE has not expanded much beyond those initial efforts in the years since.
CBE was not widely implemented in the schools partially because it was perceived
as a complex process in which few people were trained. The current book simplifies
the process and makes it applicable to decisions throughout a Multi-Tiered System
of Supports (MTSS) framework. It discusses types of data to be used for specific
decisions, lays out a framework with which those data can be analyzed, and details
decisions that should be made at Tiers 1 and 2 in addition to Tier 3.
There is more to an effective problem-solving team, or problem-solving process,
than problem analysis, but it is the aspect that is most often missing. Moreover,
grade-level teams should be engaged in problem-solving procedures for classrooms
and groups of students, and the CBE framework outlined here can support that work
as well.
Ellis (2005) suggested that educational innovations need to be (a) consistent with
theory, (b) supported with research demonstrating their effectiveness, and (c) able to
be consistently implemented on a wide-scale basis. CBE is consistent with several
theories and the components of CBE have been well researched. More research
is needed to examine the CBE Process as a whole, but research efforts have been
impeded by the lack of a clear conceptualization of CBE. Thus, Practitioner’s Guide
to Curriculum-Based Evaluation in Reading will make consistent implementation
possible and could also increase the likelihood of further research. I am confident
that practitioners will find the procedures easy to implement, especially with the
forms and tools that Harlacher, Sakelaris, and Kattelman provide. This book was
needed by researchers and practitioners alike, and the authors have filled an impor-
tant gap with a well-written and useful tool.

Matthew K. Burns
University of Minnesota

References

Burns, M. K., Appleton, J. J., & Stehouwer, J. D. (2005). Meta-analysis of response-to-intervention
research: Examining field-based and research-implemented models. Journal of Psychoeduca-
tional Assessment, 23, 381–394.
Burns, M. K., & Symington, T. (2002). A meta-analysis of prereferral intervention teams: Systemic
and student outcomes. Journal of School Psychology, 40, 437–447.
Deno, S. L., & Mirkin, P. K. (1977). Data-based program modification: A manual. Reston, VA:
Council for Exceptional Children.
Ellis, A. K. (2005). Research on educational innovations (4th ed.). Larchmont, NY: Eye on
Education.
Howell, K. W., & Morehead, M. K. (1987). Curriculum-based evaluation for special and remedial
education: A handbook for deciding what to teach. Columbus, OH: Merrill.
Howell, K. W., & Nolet, V. (2000). Curriculum-based evaluation: Teaching and decision making.
Belmont, CA: Wadsworth.
Lindsley, O. R. (1991). Precision teaching’s unique legacy from B. F. Skinner. Journal of
Behavioral Education, 1, 253–266.
Reschly, D. J. (1996). Functional assessments and special education decision making. In
W. Stainback & S. Stainback (Eds.) Controversial issues confronting special education:
Divergent perspectives (2nd ed., pp. 115–128). Boston: Allyn and Bacon.
Acknowledgements from the Authors

This book is the result of a tremendous amount of work and passion for developing
a user-friendly tool for practitioners. We want to first thank Dr. Kelly Humphreys,
our friend and colleague from Washoe County School District. Her support, insight,
and collaboration made this book possible. We also want to thank Mr. and Mrs.
Lock for allowing us a retreat at their condominium in Reno without which we
could not have completed this book.
Dr. Harlacher  I’d like to thank my classmates, professors, and colleagues from
the University of Oregon. Their support and tutelage paved the way for my inter-
est in schoolwide systems and Curriculum-Based Evaluation. I also want to thank
Dr. Kenneth W. Merrell, who unfortunately passed away in 2011. Dr. Merrell was
a generous and supportive mentor who always kept his students at the forefront.
Spending just a few moments with Ken left one feeling renewed, listened to, and
capable of taking on more work (which was key for stressed out graduate students!).
Without his advisement, my career and this project would not have come to fruition.
Finally, I want to thank my brothers, Chad and Todd, and my parents, Carl and Kaye
Harlacher, whose unconditional love and support led me to where I am today.
Dr. Sakelaris  I would like to thank my husband, Greg, and my children, Peyton,
Jade, and Quinn, for supporting me throughout the process of writing this book. I
could not have managed it without their understanding and patience. I also want to
thank my parents, Butch and Peggy Renovich, for their continuous interest in and
encouragement of this project.
Ms. Kattelman  I wish to thank my husband, Michael, and my two children, Jake
and Elise, for their support and patience during the writing of this book. Without
Michael taking over parental duties and Jake and Elise giving Mom some quiet
time, my writing would not have been accomplished. I would also like to thank my
many colleagues who truly believe that each student can learn when provided the
right academic supports. Without these colleagues, the pursuit of building school-
wide and district-wide systems that meet all students’ needs would not be possible.

Contents

1 Introduction �����������������������������������������������������������������������������������������������    1
1.1 Outline of the Book ����������������������������������������������������������������������������    2

Part I  Background of Education and Curriculum-Based Evaluation

2  History of Education ���������������������������������������������������������������������������������    7


2.1 Chapter Preview ���������������������������������������������������������������������������������    7
2.2 The State of Education �����������������������������������������������������������������������    7
2.2.1 Students with Disabilities and Second-Language Learners ���    9
2.3 Why are Schools Struggling? �������������������������������������������������������������   10
2.3.1 Teacher Attrition ���������������������������������������������������������������������   11
2.3.2 Changing Student Population �������������������������������������������������   11
2.3.3 Isolation Among Staff and Fragmented
School Structure ���������������������������������������������������������������������   13
2.3.4 Historical Focus on Labeling and Entitlement �����������������������   13
2.3.5 Inadequate Educator Training on Scientific
Practices and Limited Use of Effective Practices ������������������   14
2.4 What to do About it? ��������������������������������������������������������������������������   15
2.4.1 Improvement Practice 1: Increase Collaboration
Among Staff ���������������������������������������������������������������������������   16
2.4.2 Improvement Practice 2: Ensure Effective Practices �������������   17
2.4.3 Improvement Practice 3: Increase the Connection
Between Assessment and Instruction �������������������������������������   18
2.5 Use of Problem-Solving Model ���������������������������������������������������������   19
2.5.1 Systems-Level Problem Solving ��������������������������������������������   19
2.5.2 Individual Problem Solving ���������������������������������������������������   20
2.6 Purpose of the Book ���������������������������������������������������������������������������   20
2.7 Summary and Key Points �������������������������������������������������������������������   20

3  Multi-Tiered System of Supports ������������������������������������������������������������   23


3.1 Chapter Preview ���������������������������������������������������������������������������������   23
3.2 Systemic Approaches to School Improvement ����������������������������������   23


3.3 Description of Multi-Tiered System of Supports �������������������������������   23


3.3.1 Key Principle 1: All Students can Learn to
Grade-Level, Given the Right Level of Support ��������������������   24
3.3.2 Key Principle 2: A Proactive, Preventative
Approach to Education �����������������������������������������������������������   27
3.3.3 Key Principle 3: Use of Evidence-Based Practices ����������������   28
3.3.4 Key Principle 4: Use of Data-Based Decision Making ����������   29
3.3.5 Key Principle 5: Instructional Match �������������������������������������   29
3.3.6 Key Principle 6: Schoolwide Use and Collaboration �������������   29
3.4 Description of MTSS �������������������������������������������������������������������������   30
3.5 Core Components of MTSS ���������������������������������������������������������������   31
3.5.1 Multiple Tiers of Instruction ��������������������������������������������������   32
3.5.2 Comprehensive Assessment System ��������������������������������������   34
3.6 The PSM ���������������������������������������������������������������������������������������������   38
3.6.1 Systems-Level Problem Solving ��������������������������������������������   41
3.7 Four Elements of MTSS ��������������������������������������������������������������������   43
3.8 Developmental Process of MTSS ������������������������������������������������������   44
3.9 MTSS as the Backdrop for Curriculum-Based Evaluation ����������������   44
3.10 Summary and Key Points �����������������������������������������������������������������   45

4  What is Curriculum-Based Evaluation? �������������������������������������������������   47


4.1 Chapter Preview ���������������������������������������������������������������������������������   47
4.2 Definition of CBE ������������������������������������������������������������������������������   47
4.2.1 CBA vs CBM vs CBE ������������������������������������������������������������   47
4.3 Assumptions Behind CBE ������������������������������������������������������������������   48
4.3.1 1. Problems are Defined as the Gap Between
Expected Behavior and Observed Behavior ��������������������������   48
4.3.2 2. Learning is an Interaction ��������������������������������������������������   49
4.3.3 3. Background Knowledge is Critical ������������������������������������   51
4.3.4 4. Focusing on Alterable Variables Leads to Better
Instructional Recommendations ���������������������������������������������   52
4.4 The CBE Process ��������������������������������������������������������������������������������   53
4.5 RIOT/ICEL and Instructional Hierarchy ��������������������������������������������   53
4.5.1 RIOT/ICEL �����������������������������������������������������������������������������   54
4.5.2 Instructional Hierarchy ����������������������������������������������������������   56
4.6 Big Five Areas of Reading �����������������������������������������������������������������   58
4.7 Summary and Key Points �������������������������������������������������������������������   60

5  The Curriculum-Based Evaluation Process �������������������������������������������   63


5.1 Chapter Preview ���������������������������������������������������������������������������������   63
5.2 The CBE Process ��������������������������������������������������������������������������������   63
5.3 Problem Identification ������������������������������������������������������������������������   64
5.4 Problem Analysis �������������������������������������������������������������������������������   65
5.5 Plan Implementation ��������������������������������������������������������������������������   65
5.5.1 Instructional Match ����������������������������������������������������������������   66
5.5.2 Goal Writing ���������������������������������������������������������������������������   66
5.5.3 Setting Goals ������������������������������������������������������������������������    68


5.5.4 A Sense of Urgency ��������������������������������������������������������������    70
5.5.5 What Level Material Should be Used
for Progress Monitoring? �����������������������������������������������������    71
5.5.6 Selecting Goal Criteria and Time Frame ������������������������������    71
5.5.7 Measuring Progress ��������������������������������������������������������������    73
5.5.8 Measuring Fidelity ���������������������������������������������������������������    74
5.6 Plan Evaluation ��������������������������������������������������������������������������������    74
5.7 Summary and Key Points �����������������������������������������������������������������    75

Part II  Using Curriculum-Based Evaluation

6  CBE Decoding �����������������������������������������������������������������������������������������    79


6.1 Chapter Preview �������������������������������������������������������������������������������    79
6.2 CBE Decoding ���������������������������������������������������������������������������������    79
6.3 Problem Identification ����������������������������������������������������������������������    80
6.3.1 Step 1—Ask: Is There a Problem?
Do: Initial Problem Identification ����������������������������������������    80
6.3.2 Step 2—Ask: Does it Warrant Further
Investigation? Do: Survey-Level Assessment ����������������������    80
6.4 Problem Analysis �����������������������������������������������������������������������������    84
6.4.1 Step 3—Ask: What is the Student’s Rate and
Accuracy? Do: Examine Rate and Accuracy with
Grade-Level Material �����������������������������������������������������������    84
6.4.2 Step 4—Ask: Can the Student Self-Correct Errors?
Do: Self-Monitoring Assessment �����������������������������������������    85
6.4.3 Step 5—Ask: Does the Student Have Acceptable
Rate at Any Level Above Grade 1? Do: Examine
Results of SLA ���������������������������������������������������������������������    88
6.4.4 Step 6—Ask: Are there Patterns to the Student’s
Reading Errors? Do: Conduct Error Analysis ����������������������    88
6.4.5 Step 7—Ask: Are Sight Words a Concern? Do:
Assess Sight Words and/or Vocabulary ��������������������������������    90
6.5 Plan Implementation ������������������������������������������������������������������������    91
6.5.1 Teach: Accuracy and Self-Monitoring ���������������������������������    91
6.5.2 Teach: Fluency ���������������������������������������������������������������������    94
6.5.3 Teach: Targeted Instruction to Correct Errors ����������������������    96
6.5.4 Teach: General Reading Instruction �������������������������������������    97
6.6 Plan Evaluation ��������������������������������������������������������������������������������    98
6.7 Expanding Your Knowledge and Fine-Tuning ���������������������������������    99
6.8 Chapter Summary �����������������������������������������������������������������������������   101

7  CBE Early Literacy ��������������������������������������������������������������������������������   135


7.1  Chapter Preview �������������������������������������������������������������������������������   135
7.2  Early Literacy Skills �������������������������������������������������������������������������   135
7.3  CBE Early Literacy ��������������������������������������������������������������������������   136


7.4  Problem Identification ����������������������������������������������������������������������   136
7.4.1 Step 1: Ask: Is There a Problem with Early Literacy
Skills? Do: Initial Problem Identification ����������������������������   136
7.4.2 Step 2: Ask: Is the Student’s Performance Below
Criteria? Do: Survey-Level Assessment ������������������������������   138
7.5  Problem Analysis �����������������������������������������������������������������������������   140
7.5.1 Step 3: Ask: If Below Criterion on PSF, is an Error
Pattern Evident? Do: Assess Phonemic Awareness Skills ���   140
7.5.2 Step 4: Ask: If Below Criterion on LNF, does the
Student have Print Concepts and Letter Names
Mastered? Do: Assess Print Concepts and Letter Names ����   142
7.5.3 Step 5: Ask: If Below Criterion on LSF and/or
NWF, has the Student Mastered Individual Letter
Sounds? Do: Assess Letter-Sound Correspondence
and Letter Blends �����������������������������������������������������������������   143
7.5.4 Step 6: Ask: Is an Error Pattern Evident with Letter
Blends? Do: Assess Letter Blends ���������������������������������������   143
7.5.5 Step 7: Ask: Are Sight Words a Concern?
Do: Assess Sight Words �������������������������������������������������������   144
7.6  Plan Implementation ������������������������������������������������������������������������   144
7.6.1  Teach: Phonemic Awareness �����������������������������������������������   146
7.6.2  Teach: Print Concepts �����������������������������������������������������������   147
7.6.3 Teach: Letter Identification with Letter-Sound
Correspondence ��������������������������������������������������������������������   147
7.6.4  Teach: Letter-Sound Correspondence ����������������������������������   148
7.6.5  Teach: Letter Blends �������������������������������������������������������������   149
7.7  Plan Evaluation ��������������������������������������������������������������������������������   150
7.8  Expanding and Fine-Tuning �������������������������������������������������������������   150
7.9  Chapter Summary �����������������������������������������������������������������������������   151

8  CBE Reading Comprehension ���������������������������������������������������������������   191


8.1  Chapter Preview �������������������������������������������������������������������������������   191
8.2  CBE Reading Comprehension ���������������������������������������������������������   191
8.3  Problem Identification ����������������������������������������������������������������������   192
8.3.1 Step 1: Ask: Is There a Problem? Do: Initial Prob-
lem Identification �����������������������������������������������������������������   192
8.3.2 Step 2: Ask: Does it Warrant Further Investigation?
Do: Survey-Level Assessment ���������������������������������������������   192
8.4  Problem Analysis �����������������������������������������������������������������������������   195
8.4.1 Step 3: Ask: Does the Student have Sufficient
Rate and Accuracy at Grade-Level with ORF? Do:
Examine Rate and Accuracy as Described in Chapter 6 ������   196
8.4.2 Step 4: Ask: Is the Student Missing Critical
Vocabulary? Do: Examine Vocabulary of Content
and Passages �������������������������������������������������������������������������   196
8.4.3 Step 5: Ask: Is Student Monitoring Comprehen-
sion? Do: Examine Meta-Cognitive Skills ������������������������   198
8.4.4  Comprehension Interview ��������������������������������������������������   198
8.4.5  Retell: Constructing Meaning from Text ���������������������������   200
8.4.6 Step 6: Ask: Does the Student’s Background
Knowledge Support Text Content? Do: Examine
Background Knowledge ����������������������������������������������������   202
8.4.7  Background Knowledge Discussion ����������������������������������   202
  8.5  Plan Implementation ����������������������������������������������������������������������   203
8.5.1  Teach: Vocabulary ��������������������������������������������������������������   204
8.5.2  Teach: Meta-Cognitive Strategies ��������������������������������������   205
8.5.3  Teach: Background Knowledge �����������������������������������������   206
  8.6  Plan Evaluation ������������������������������������������������������������������������������   207
  8.7  Expanding Your Knowledge and Fine-Tuning �������������������������������   208
  8.8  Chapter Summary ���������������������������������������������������������������������������   209

Part III  Making Educational Decisions with CBE

9   Progress Monitoring and Educational Decisions �������������������������������   243


  9.1 Chapter Preview �����������������������������������������������������������������������������   243
  9.2 Educational Decisions During Plan Evaluation �����������������������������   243
  9.3 Progress Monitoring �����������������������������������������������������������������������   243
9.3.1 Guidelines for Judging Growth �����������������������������������������   244
9.3.2 Graphing Basics �����������������������������������������������������������������   244
9.3.3 Essential Components: Goal, Aim Line, Trend Line ���������   247
9.3.4 Pattern of Performance? ����������������������������������������������������   248
9.3.5 Judging Growth �����������������������������������������������������������������   249
9.3.6 Additional Analyses �����������������������������������������������������������   250
  9.4 What to do After a Poor or Questionable Response �����������������������   253
  9.5 Evidence-Based Instructional Factors ��������������������������������������������   254
9.5.1  1. Time Allotted for Instruction �����������������������������������������   254
9.5.2  2. Grouping and Homogeneity of the Group’s Skills ��������   254
9.5.3  3. Pacing ����������������������������������������������������������������������������   254
9.5.4  4. Amount of Review ���������������������������������������������������������   257
9.5.5  5. Repetitions ���������������������������������������������������������������������   257
9.5.6  6. Activating Background Knowledge �������������������������������   257
9.5.7  7. Corrective Feedback ������������������������������������������������������   258
9.5.8  8. Praise-to-Redirect Statements ����������������������������������������   258
  9.6 Chapter Summary and Key Points �������������������������������������������������   258

10  Frequently Asked Questions about Curriculum-Based Evaluation ���   261


  10.1 Is Curriculum-Based Evaluation Just for Tier 3? ���������������������������   261
10.1.1 Group Diagnostics ������������������������������������������������������������   261
  10.2 How can I Convince My School to Use CBE? ������������������������������   264
  10.3 Is CBE Reliable and Valid? ������������������������������������������������������������   264
10.4 Is CBE Evidence-Based? ���������������������������������������������������������������   267


10.5 Do Directions Influence a Student’s Reading Rate
on Reading CBM Passages? �����������������������������������������������������������   268
10.6 Does Oral Reading Fluency Measure Comprehension? ����������������   268
10.7 What about the Common Core State Standards? ���������������������������   269
10.8 Do I Have to use the Median When Administering ORF
Measures? ���������������������������������������������������������������������������������������   269
10.9 Why do I Have to do a Survey-Level Assessment
if I Know the Student’s Reading Skills are Low? �������������������������   269

Appendices �����������������������������������������������������������������������������������������������������   273

Glossary ���������������������������������������������������������������������������������������������������������   287

References ������������������������������������������������������������������������������������������������������   289

Index ���������������������������������������������������������������������������������������������������������������   301


About the Authors

Jason E. Harlacher, Ph.D.  is a nationally certified school psychologist with more
than 10 years of experience in education. Dr. Harlacher has worked as a school
psychologist, an MTSS consultant, and the state director for PBS-Nevada.
Dr. Harlacher currently works as a researcher and adjunct professor in Denver, Col-
orado. Dr. Harlacher presents nationally on school-wide prevention models and has
published articles on RTI, social-emotional learning, and class-wide interventions
for ADHD. Dr. Harlacher earned his master’s degree in School Psychology from
Utah State University in 2006 and his doctorate in School Psychology from the
University of Oregon in 2009.
Tami L. Sakelaris, Ph.D.  is a nationally certified school psychologist currently
working as a school psychologist and as a consultant, coach, and trainer for Multi-
Tiered System of Supports in Washoe County School District in Reno, Nevada.
Dr. Sakelaris has worked in education for more than 16 years, which includes over
7 years as an MTSS External Coach. Dr. Sakelaris earned her doctorate in School
Psychology from the University of Oregon in 1998.
Nicole M. Kattelman, M.S.  is a licensed school psychologist with over 15 years
of experience in education. She received her master’s in 1996 in school psychology
from California State University, Hayward. She has worked as a school psychologist
at both the elementary and secondary levels, and has supported her current school
district with the implementation of Multi-Tiered System of Supports. She has pre-
sented both nationally and locally on the implementation of Multi-Tiered System
of Supports at the secondary level. Nicole currently works as a school psychologist
and as a district consultant for Multi-Tiered System of Supports in Washoe County
School District in Reno, Nevada.

Chapter 1
Introduction

A group of educators are sitting around a table, discussing a struggling student.


They review work samples and pass around test scores. Exasperated faces fill the
room. There is a sense of urgency because the student continues to experience
academic failure and also because there are only 30 minutes left before the school
day begins. It took weeks to get everyone in the same room, and they are all feeling
frustrated and lacking direction.
They talk about the support the student is receiving and who has been working
with him (or her). Explanations for poor performance begin to surface:
• “He isn’t motivated.”
• “She’s always late and never listens to directions.”
• “She seems to know it one minute and then the next minute, it’s gone.”
• “There’s just something going on with his processing….he takes forever to
get it.”
• “What about his home life? Such a sad story.”
• “I had her brother last year. Same. Exact. Issues.”
• “She’s such a doll, but she just can’t seem to get it.”
• “It’s the parents. Her homework never comes back.”
• “How can I meet his needs when I have 30 other students? I’m supposed to focus
on one and sacrifice the others?”
• “She’s been struggling since kindergarten!”
• “He’s had one-to-one support and he’s still struggling!”
It did not take long for the conversation to move from vague descriptions of the
problem to discussing all the reasons the student, the family, or the life circum-
stances are preventing academic success. Then a menu of ideas for how to fix the
problem is thrown on the table:
• “Maybe she needs medication?”
• “The parents need to do something at home!”
• “I think it’s a disability. We should refer the student.”
• “I think he needs to be retained. He just isn’t getting it and he is emotionally
immature.”
• “What about one-to-one?”
• “Has the teacher tried a token economy?”
• “It’s almost the end of the year. He’ll be sunk in middle school. He needs an
Individualized Education Program (IEP).”
• “What about putting him in Mrs. Soandso’s classroom? She does so well with
these kinds of students.”
Medication, home life, a new behavior plan, switching classrooms, a disability—
the suggestions are drastic (disability, retention, and IEP), time-intensive (token
economy, one-to-one support, and changing classrooms), and focused on changes outside of
the school’s control (medication, parents’ help). The stress of accountability enters
the room and ideas for test preparation are discussed. Before they know it, the bell
rings, and students are entering the building. They just spent an hour at a table dis-
cussing the student and nothing was decided. There is an attempt to gain consensus
on a decision as everyone is getting up and heading out the door to their classrooms
and offices.
Sound familiar?
How is it that after knowing, working with, and assessing a student for several
years, a school team could still be so uncertain about why a student is struggling
and could be so unproductive in an hour-long meeting? Educators put forth lots of
effort and dedication to support students, but often do not get the results they want.
They spend time in meetings discussing inalterable variables and reviewing data
that do not inform instruction. All educators can agree that there is no time to spare
in education.
The results of sitting through meeting after meeting without generating practi-
cal or effective solutions are frustration, burnout, and no change in student perfor-
mance. This book is about stopping that unproductive process. It provides educators
with a problem-solving process called curriculum-based evaluation (CBE), which is
a practical, ongoing process that uses assessment to identify missing skills and to
inform and evaluate an intervention plan.

1.1 Outline of the Book

This book is divided into three sections (a fourth section contains supplemental
material, including appendices, a glossary, and a topic index):
1. The background of education and conceptual basis for CBE
2. Using CBE to assess reading
3. Making educational decisions within the CBE Process
The first section of the book discusses the current state of education to establish
the need for an effective problem-solving process and describes how CBE fits
in a school system. We review the National Assessment of Educational Progress
(NAEP) results, which show that over half of students in the fourth and eighth
grade are scoring below proficient. Over 90 % of fourth- and eighth-grade students
who have disabilities or are English language learners score below proficient on
the NAEP (National Center for Educational Statistics 2011a, b). Three practices
that can lead to improved outcomes for schools are presented. An overview of the
problem-solving model, which facilitates and enables school improvements, is pre-
sented (see Greenwood et al. 2008). Schoolwide problem solving is the focus of
Chapter  3 and individual problem solving with the CBE Process is the focus of
Chapters 4 and 5.
Chapter 3 provides the foundation for CBE, through a discussion of education-
al reform and Multi-Tiered System of Supports (MTSS). MTSS is a multi-tiered,
schoolwide model of service delivery providing a continuum of evidence-based
supports with frequent data-based monitoring for instructional decision making,
with the aim of improving academic and behavioral outcomes (Barnes and Harlacher 2008;
Horner et al. 2005; Kansas MTSS, n. d.). The principles behind MTSS are outlined
and then details about critical features are provided. MTSS sets the stage for the use
of CBE. CBE is used most effectively and efficiently in a collaborative, problem-
solving school culture.
Chapter 4 defines CBE and discusses the assumptions behind it. Learning
is viewed as an interaction between the learner, the curriculum, and the environ-
ment (which includes instruction). CBE includes consideration and assessment of
all those components using low-inference assessments (low-inference means that
the gap between the results and interpretation of the results is small; Howell and
Nolet 2000).
The second part of the book focuses on the actual implementation of CBE in
reading. In Chapter 5, the steps of the CBE Process are detailed. CBE’s alignment
with the problem-solving model is illustrated and an assessment framework for con-
ducting CBE [review, interview, observation, and testing (RIOT)/instruction, cur-
riculum, environment, learner (ICEL)] is provided (Christ 2008). Chapters 6 to 8
walk through use of reading CBE in daily practice and provide step-by-step direc-
tions for using CBE to assess decoding, early literacy, and reading comprehension.
Explicit directions, reproducible handouts, and instructional strategies based on the
results of the CBE Process are provided. Those chapters guide the CBE Process and
result in practical recommendations.
The third part of the book describes how to make educational decisions with
CBE. Chapter 9 provides guidelines for progress monitoring, goal-setting, and
instructional decision making. Finally, answers to frequently asked questions about
CBE are provided in Chapter 10.
This book provides educators with a practical tool that is an extension of the
belief that all students can learn, given the right instructional support. CBE facili-
tates identifying student needs and instructional supports to address them. The CBE
Process focuses school teams in a way that increases their efficiency and effective-
ness in improving outcomes for all students.
Part I
Background of Education and
Curriculum-Based Evaluation
Chapter 2
History of Education

2.1 Chapter Preview

To illustrate the need for a problem-solving approach in schools and for the use of
curriculum-based evaluation (CBE), the present state of education, including sta-
tistics about student performance and explanations for low school performance,
is discussed within this chapter. Strategies for improving outcomes in schools are
discussed and the problem-solving model (PSM) is introduced. The PSM can be
applied at both the systems and individual levels, with the system laying the founda-
tion of support for the individual.

2.2 The State of Education

It is no secret that schools are struggling in the USA. Any educator can attest to
the challenges that schools face and the unsatisfactory outcomes for students. For
instance, the 2011 National Assessment of Educational Progress (NAEP) results
indicated that only 34 % of fourth-grade students in the USA scored at or above
proficient (see Table 2.1). Massachusetts was the top-scoring state with 51 % of its
fourth graders that scored at or above proficient. So the best state had only half of
its fourth graders at a proficient level in reading. The lowest state was Mississippi
with only 25 % of its fourth graders at or above proficient in reading. Eighth-grade
students’ scores on the NAEP were similar, as the percentage of students scoring at
or above proficient also was 34 %. The highest state again was Massachusetts with
46 % of eighth-grade students proficient in reading and the lowest was Mississippi
with 21 % of eighth-grade students at or above proficient [National Center for Edu-
cational Statistics (NCES) 2011a]. Mathematics scores on the NAEP are relatively
higher, with 40 % of fourth-grade and 35 % of eighth-grade students scoring at or
above proficient (NCES 2011b).
Given the overall low performance of students on the NAEP, it is not surprising
that the average graduation rate is 75.5 % (NCES 2011c; Viadero 2011). However,
there is considerable variation among state graduation rates, with Nevada scoring
the lowest at 56 % and Wisconsin the highest at 90 %.

Table 2.1   High school dropout rates and percentages of fourth- and eighth-grade students scoring
at or above proficient on the National Assessment of Educational Progress (NAEP) among US
students

                                     Dropout     NAEP Reading^a      NAEP Mathematics^b
                                     rate (%)    Fourth   Eighth     Fourth   Eighth
General population                   4.1^c       34       34         40       35
Students with disabilities           26.2^d      11       7          9        9
Second-language learners             24.5^e      7        3          14       5
Caucasians                           2.7^c       34       43         52       44
African Americans                    6.6^c       16       15         17       14
Hispanics                            6.0^c       19       19         24       21
American Indians, Alaska Natives     6.3^c       18       22         22       17
Asian Americans                      2.4^c       49       47         62       55

^a NCES 2011a; ^b NCES 2011b; ^c NCES 2011c; ^d OSEP 2011; ^e Kim 2011

The encouraging news is that
the status dropout rate1 of students has declined since 1990, going from 12.1 % in
1990 to 8.1 % in 2009 (Viadero 2011). However, there remains a considerable gap
in event dropout rates1 among ethnic groups, with nearly three times as many
Hispanic and African-American students dropping out of high school compared to
Caucasian students (see Table 2.1) (NCES 2011c). Viadero (2011) reports that there
is a 17.6 % status dropout rate among Hispanic students and 9.3 % among African-
American students compared to 5.2 % among Caucasian students and 3.4 % among
Asian-American and Pacific Islander students. Additionally, a study by the National
Center for Research on Evaluation, Standards, and Student Testing examined drop-
out rates across three cohorts and discovered that students who speak a second
language have a dropout rate of 24.5 %, compared to a dropout rate of 15 % among
non–second-language learners (Kim 2011). Jobs for the Future (n. d.) summarizes
the state of graduation eloquently: “For every 10 students who enter eighth grade,
only seven graduate high school on time, and only three complete a postsecondary
degree by age 26” (p. 2).

1 Status dropout rate refers to the percentage of students within a certain age range who are not
currently enrolled in high school and have not earned a high school diploma or equivalency. This
is different from the event dropout rate, which is the percentage of high school students who left
school in a given year and did not earn a diploma or equivalency.

As if those numbers are not troublesome enough, a comparison between the
USA and other developed countries reveals more dismal findings. UNICEF (2002)
examined the performance of teenagers (14 and 15 years old) in reading, math-
ematics, and science and ranked the United States 18th out of 24 countries after
averaging the findings of five different international studies on education (including
scores on the NAEP). Additionally, the results of the Programme for International
Student Assessment (PISA) indicate that the United States is not heading in a posi-
tive direction. The PISA is an international assessment administered on a rotating
schedule that measures performance of 15-year-old students in the areas of reading,
mathematics, and science. Over 65 countries are included and between 4,500 and
10,000 students are sampled from each country. In reading, the USA ranked 15th
in 2000 and then ranked 17th in 2009 (Fleischman et al. 2010; OECD 2001). The
USA’s performance in mathematics also decreased from a ranking of 24th in 2003
to 31st in 2009 (Fleischman et al. 2010; Lemke et al. 2004). Performance in sci-
ence decreased in the USA from 21st in 2006 to 23rd in 2009 (Baldi et al. 2007;
Fleischman et al. 2010).

2.2.1 Students with Disabilities and Second-Language Learners

Performance trends for students with disabilities are even worse. The 2011 NAEP
results show that an average of only 11 % of fourth-grade and 7 % of eighth-grade
students with disabilities scored at or above proficient in reading. In mathematics,
17 % of fourth-grade and 9 % of eighth-grade students with disabilities scored at
or above a proficient level (NCES 2011a). The graduation rates for students with
disabilities are somewhat encouraging, depending on your point of view. The per-
centage of students who exited special education by graduating with a high school
diploma increased from 43 % to 56.5 % from 1997 to 2006. The percentage of stu-
dents who exited special education by dropping out of high school decreased from
49.5 % to 26.2 % in that same time frame (OSEP 2011). The overall graduation rates
may be low, but they are trending in a positive direction.
NAEP results for students who speak a second language are equally low. Only
7 % of fourth-grade and 3 % of eighth-grade English language learners (ELLs)
scored at or above proficient on the NAEP in reading, and 14 % of fourth-grade and
5 % of eighth-grade ELLs scored at or above proficient in mathematics (NCES 2011b).
Achievement results only paint half the picture of the state of public educa-
tion, as there is a historical concern over the identification of students who require
special education services (Merrell et  al. 2006; Reschly 2008; Tilly 2008). Spe-
cial education services are provided to students with disabilities to ensure a free
and appropriate public education. Since its inception, there have been fluctuations
in the identification rates of eligibility categories. For example, the category of
learning disability (LD) currently accounts for almost half (44.6 %) of all students
identified as eligible for services. This statistic may not seem alarming in and of
itself, but what is alarming is the 272 % increase in identification of LD since the
inception of special education. This identification increase can be compared to
no change in identification of students under speech-language impairment (SLI), a
25 % increase in identification of students under emotional disability (ED)/Distur-
bance and a 60 % decrease in identification of students classified under intellectual
disability. Additionally, health impairment, which is a category for students with
chronic health issues, had a 460 % increase in identification rates (US Department
of Education 2012). The changes in rates of students served under various eligibil-
ity categories have raised questions about the accuracy and subjective nature of
referrals of students to special education (MacMillan et al. 1998; Johnston 2011;
Merrell et al. 2006; Ortiz et al. 2008).
Concerns about special education also extend to students with diverse back-
grounds, as there is an apparent bias for identification of minority populations
(Ortiz et al. 2008; Rhodes et al. 2005). Overrepresentation of minorities in special
education has been a concern for over three decades for several reasons, includ-
ing questions about unreliable and invalid assessments, weak or inappropriate psy-
choeducational practices, misunderstanding of the needs of ELLs, and a difficulty
among practitioners to distinguish typical language development from an LD (Sul-
livan 2011; Zhang and Katsiyannis 2002). In fact, American-Indian/Alaskan-Native
students are 1.56 times more likely and African-American students 1.46 times more
likely to be identified for special education compared to other ethnic groups (a risk
ratio of 1.0 indicates the risk is similar between two groups). African-American
students are 2.28 times more likely to be classified under the category of ED and
a staggering 2.75 times under intellectual disability, compared to 0.85 and 0.62,
respectively, for Caucasians (OSEP 2011). Sullivan (2011) examined the rates of
ELLs vs non-ELLs classified for special education under LD, Mild Mental Retar-
dation (MMR), SLI, and ED categories from 1999 to 2006. She found that ELLs
are 1.82 times more likely to be identified for LD compared to non-ELLs. ELLs
also were 1.63 and 1.30 times more likely to be classified under MMR and SLI,
respectively. ELLs were estimated to be less at risk for ED, as the risk ratio was
0.12 (Sullivan 2011). Samson and Lesaux (2009) discovered a grade-based change
in risk for ELLs and special education. They found that ELLs are underrepresented
in kindergarten, have similar rates in first grade, but are overrepresented by the
third grade compared to native English speakers. This change in representation is
believed to stem from the shift from “learning to read” in early elementary grades to
“reading to learn” beginning in the third grade (Carnine et al. 2009).
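For readers less familiar with the statistic, the risk ratios cited above are simply one group’s identification rate divided by a comparison group’s identification rate. The formula below is a generic sketch of that definition, not a calculation taken from the studies cited:

\[
\text{risk ratio} \;=\; \frac{\text{students identified in group A} \,/\, \text{total students in group A}}
{\text{students identified in group B} \,/\, \text{total students in group B}}
\]

A ratio of 1.0 indicates equal risk, values above 1.0 indicate that group A is identified proportionally more often than group B, and values below 1.0 indicate the reverse.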
In summary, the majority of students in the fourth and eighth grades are scoring
below a proficient range in reading and mathematics. Achievement data for students
with disabilities and ELLs are lower. Graduation rates are trending upward, but
significant variation in graduation rates exists between states and between ethnic
groups. Historical trends in eligibility categories show drastic changes in the rates
of identification under certain eligibility categories, and students with diverse back-
grounds and languages have an increased risk for identification. Given those statis-
tics, it is not surprising that education in the USA ranks in the lower half compared
to other developed countries.

2.3 Why are Schools Struggling?

Given the state of affairs in schools, it is logical to ask why schools are not doing
well. Several possible explanations are discussed next: (a) teacher attrition, (b) a
changing student population and pressure for schools to provide more than academic
services to students, (c) isolation among staff and fragmented school structure, (d) a
historical focus on labeling and entitlement vs problem solving and instruction, and
(e) inadequate educator training regarding scientific practices and limited use of
effective practices.

2.3.1 Teacher Attrition

Poor student outcomes may be related to the turnover of newly employed teachers
and the retirement of veteran teachers without a substantial workforce to take their
place (Carroll and Foster 2008). A remarkable 33 % of new teachers leave the pro-
fession in their first 3 years of employment and 50 % leave within 5 years (Gonzalez
et al. 2008; Wolfe 2005). More modest estimates of teacher attrition among new
teachers are between 10 % (Kaiser 2011) and 20 % to 25 % (Grissmer and Nataraj
Kirby 1987). Other research has reported that approximately 8.0 % of teachers leave
the profession entirely and 7.6 % leave for a new school annually (Keigher 2010).
Whether teachers leave because of retirement or because they have taken a posi-
tion elsewhere, teacher attrition has doubled since 1990 (Carroll and Foster 2008).
This constant turnover puts a strain on schools, can prevent continuity from year to
year, and reduces the number of veteran and highly skilled teachers within a school
(Barnes et al. 2007).

2.3.2 Changing Student Population

Another reason that schools struggle to perform well may be related to their dif-
ficulty keeping up with the rapid change in demographics. No longer is the aver-
age student Caucasian and from a two-parent home. Consider the following facts.
In 1950, 89 % of the population was Caucasian (Gibson and Jung 2002); today,
that number is at 75 % (US Census Bureau 2011a). Additionally, the percentage of
students living in two-parent homes held steady around 90 % from 1880 to 1970,
but the 2009 census results indicated just under 70 % of children live in two-parent
homes (see Fig. 2.1; US Census Bureau 2011b). That change is a decline of 20 percentage
points in just over 40 years after remaining consistent for nearly 100 years. In 2010, the
student population comprised 60 % Caucasian, 20 % Hispanic, and 14 % African
American. By 2050, those rates are projected to be 46 % Caucasian, almost 30 %
Hispanic, and 15 % African American (Ortiz et al. 2008). The population is chang-
ing, bringing with it different background knowledge, different cultural values, and
various primary languages of students. This change in population requires a change
in instruction that educators may not be fully prepared to handle (Merrell et  al.
2006). Teacher preparation programs have been described as “largely inadequate”
in preparing teachers to work with diverse students (Ortiz et al. 2008, p. 1729). Most
programs do not have a large percentage of students with diverse backgrounds, nor
do they offer more than one course or even one chapter in a book on multilingual
assessment and cross-cultural competency (Ortiz et al. 2008; Rhodes et al. 2005).
Consequently, teachers may not be fully equipped to support appropriately the
range of diversity in their classroom (Rhodes et al. 2005).

Fig. 2.1   Historical living arrangements of children from 1880 to 2009. (US Census Bureau 2011b)

In addition to the shift in demographics is a change in the challenges students
face. Aside from academic difficulties, students today come to school with substan-
tial social, emotional, and behavioral needs. These needs are forcing the traditional
focus on academics to change. In fact, schools have become the “de facto” mental
health delivery system. Of the 20 % of children who receive mental health services,
70 –80 % of them receive that treatment within the schools (Hoagwood and Johnson
2003). Bullying, aggression, and violence are tremendous concerns in schools, and
student problem behavior is the number one concern among teachers (Horner et al.
2005; Landers et  al. 2008; Sugai and Horner 2006). As a consequence, schools
do not have the luxury of just teaching academics and instead, also must focus on
the social, emotional, and behavioral health of their students (Horner et al. 2005).
Perhaps not too surprising is the idea that disrespect toward teachers from students
is a large reason teachers report burnout from their job (Landers et al. 2008). Ad-
ditionally, not all schools have the same resources to support the issues that students
bring. Rhodes and colleagues (2005) point out this disparity between schools, as
many children from diverse backgrounds are in schools that have neither the re-
sources nor the low student–teacher ratios of higher income schools. These complex
issues from diverse student populations create instructional challenges that influ-
ence student outcomes.

2.3.3 Isolation Among Staff and Fragmented School Structure

Another factor contributing to struggling schools is the historical nature of how
schools were (and perhaps still are) structured. Elmore (2000) describes a “buffer”
in public education that led to an avoidance of scrutinizing instruction and practices
in schools. Decisions about what students should learn and how they should be
taught were left entirely up to individual teachers. Describing a challenge of educa-
tion, Hattie (2009) states, “…teaching is a private matter; it occurs behind a closed
classroom door, and it is rarely questioned or challenged” (p. 1). This exercise in
freedom led to some unintended consequences (Schmoker 2006). Teachers essen-
tially worked in silos, as they taught in isolation with limited feedback or input from
administration. Collaboration was low among staff and the result was a loosely
connected set of classrooms that varied widely in their practices and effectiveness
(Chenoweth 2009; Schmoker 2006; Newmann et al. 2001). As schools felt pres-
sure to improve student outcomes and demands for accountability increased, they
would adopt various programs and initiatives that were difficult to integrate. This
movement created a fragmented system with teachers pulled in numerous directions
(Newmann et al. 2001). Teachers were burdened with various initiatives, resources
and time were spread too thin, and as frustration with one initiative mounted, efforts
were abandoned to take up another initiative that would hopefully solve the school’s
ills (Newmann et al. 2001). Intentions were good, but the disorganized nature of
the school system limited the success schools could achieve (Johnston 2011; White
et al. 2012).

2.3.4 Historical Focus on Labeling and Entitlement

When the directive to provide all students with an appropriate education arose out
of Public Law 94-142, there was a necessary push to identify students with disabili-
ties and provide appropriate services to them. That push may have come at the cost
of focusing too much on labels and not enough on effective instructional practices.
Special education has been criticized for waiting to provide services until the gap
between expected and actual performance is large enough to be called a disability
(Johnston 2011; Merrell et al. 2006; Reschly 2008; Tilly 2008). In many cases, a
child had to fail for more than 1 year before being referred for an evaluation to
consider eligibility for special education services (Nelson and Machek 2007). The
result? A problem that is larger than it was when the student was first identified as
struggling and a problem that is very difficult to remediate.
The practice of identifying students as eligible for special education services
was not only criticized as a “wait-to-fail” model, but also as a process of infor-
mation gathering that did not inform instruction (Johnston 2001; Reschly 2008).
Many of the assessments used to identify students as eligible for special education
services were summative and measured aptitude. The results rarely contributed to
a meaningful instructional plan (Braden and Shaw 2009; Johnston 2011; Merrell
et al. 2006). Instead, the assessment results contributed global statements about a
child’s learning capacity compared to a normative sample (e.g., your student scored
in the xth percentile) (see Inset  2.1) (Hosp 2008; Ysseldyke et  al. 2010). Many
teachers were justifiably frustrated when little helpful information was produced
from such extensive evaluations and schools were criticized for identifying and ad-
miring students’ difficulties in education without offering real solutions (Johnston
2011; Tilly 2008).

2.3.5 Inadequate Educator Training on Scientific Practices and Limited Use of Effective Practices

Another factor contributing to poor school performance is that some teachers are
not trained adequately in research-based/effective instructional practices (Johnston
2011). Consider the area of reading. Despite the fact that teaching reading requires
knowledge of the “Big 5” areas of reading (i.e., phonemic awareness, phonics, flu-
ency with connected text, vocabulary, and reading comprehension) (Carnine et al.
2009; Hattie 2009), only 15 % of teacher-training programs exposed future teachers
to those “Big 5” sufficiently (Walsh et al. 2006). The National Council on Teacher
Quality examined syllabi from 72 education schools (a total of 227 courses) and
discovered that only 23 % used textbooks rated as “acceptable” and there was no
clear consensus on a seminal text in reading. Out of 227 courses, 154 of them used
a text unique to their course, indicating that universities are using a wide range of
texts and have not accepted the certainty of empirical research. Texts used ranged
from those written based on personal opinion to those with outdated research, and
statements in some of the syllabi examined blatantly ignored the reading research.
Despite the certainty of the research on effective reading instruction, only one in
seven universities taught teachers the science of reading (Walsh et al. 2006).

Inset 2.1 What Is a Percentile?


A percentile is a number used to describe how an individual’s score on a test
compares to others who took the same test. The percentile indicates the per-
cent of scores or data values in a set that are less than or equal to that value.
For example, a score at the 60th percentile is the same or better than 60 % of
the individuals who also took the test.
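As an illustration only, the brief sketch below (written in Python) shows the arithmetic behind a percentile rank; the scores and the helper function are hypothetical and are not drawn from any measure discussed in this book.

# A minimal sketch of computing a percentile rank for one score within a set
# of test scores. All score values are hypothetical.

def percentile_rank(score, all_scores):
    """Percent of scores that are less than or equal to the given score."""
    at_or_below = sum(1 for s in all_scores if s <= score)
    return 100 * at_or_below / len(all_scores)

scores = [48, 52, 55, 60, 61, 63, 67, 70, 74, 80]   # hypothetical class scores
print(percentile_rank(63, scores))   # 60.0, i.e., this score is at the 60th percentile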

If teachers are not trained well, then it is not surprising that ineffective or nonsup-
ported practices would continue to be used in schools. Ash et al. (2009) illustrate the
lack of use of research to guide practice in the classroom. They analyzed the read-
ing practices of 80 teachers and 27 literacy coaches from elementary and middle
schools and found that almost half of the teachers and literacy coaches sampled
used round robin reading (RRR), an ineffective reading strategy in which students
are called upon one-by-one to read portions of a text aloud. Of the sample that
used RRR, 21 % of them reported being unaware of the research related to RRR,
and 30 % of them knew that the research showed RRR is ineffective, yet they still
reported using it.
Use of unsupported practices is not limited to teachers, and teachers are not to
blame for the use of such practices. A somewhat intuitive theory that came out of
special education law was that each student has a certain capacity for learning that
can be unlocked with the right instructional program. By assessing intrachild and
cognitive abilities, educators believed they could provide each child with a specially
designed program that would maximize learning, particularly those identified as
having a disability. This theory, the aptitude-by-treatment interaction (ATI), argued that certain
published, norm-referenced tests could be used to predict the success of certain in-
terventions over other ones for particular students. Individualized instruction could
then be planned based on the results. Griffiths et al. (2007) summarized the logic of
ATI eloquently: “ATI logic contends that certain measured aptitudes (measured in-
ternal characteristics of children) can be logically matched with certain instructional
approaches to produce differential benefit or learning with the student” (p. 15).
However, over 30 years of research have not supported ATI (Braden and Shaw
2009; Merrell et  al. 2006; Reschly 2008; Ysseldyke et  al. 2010; Stuebing et  al.
2009). Griffiths and colleagues again provide an eloquent (and blunt) statement:
“It is a myth among special educators, school psychologists and the neuropsycho-
logical field, that modality matching is effective and can improve student learning”
(p. 15) (see also Gresham and Witt 1997). Still, the focus on assessing processing
and cognitive abilities remains in practice (Fiorello et al. 2006; Restori et al. 2008),
despite the lack of empirical support for ATI or learning styles (Braden and Shaw
2009; Pashler et al. 2008; Restori et al. 2008).
These findings illustrate two issues: first, some educators will continue to use
disproven practices, even if they know the research, and second, a portion of educa-
tors have finished their training program or continue to work without having been
exposed to relevant research. Just like other professions, educators require ongoing
training on current, evidence-based practices to ensure effective and contemporary
practices are used. Ensuring training and implementation of effective practices en-
ables students to have the best chance at success.

2.4 What to do About it?

The purpose of this chapter is not to beat up on public education. The fact that
many people work in education because of their resolve, passion, and dedication to
helping students is not in question. Instead, the concerns in schools are described
to create the impetus for change. The urgency and need for more effective prac-
tices in schools must be understood. As educators passionate about our roles, it is
unsettling to know that so many students are failing. Even more unsettling is that
use of ineffective practices, making decisions in the absence of good data, and the
inconsistency of teacher training all are very real problems today. Continuing to
do “business as usual” will result in nothing more than mediocrity. As Mark Twain
once said, “If you do what you’ve always done you’ll get what you always got.”
Having been reminded of the challenges educators face, we hope to have
sparked enthusiasm for participation in educational reform. This book supports
one part of that reform: using assessments that are focused on problem solving and
that have high instructional relevance. Assessment should provide information that
guides educators in identifying what to teach and how to teach. We have painted a
grim picture of education, but we also offer hope and direction. Although there are
many ways schools can improve performance, we outline three strategies before offer-
ing a conceptual framework for reform and refer to these strategies as improvement
practices.

2.4.1 Improvement Practice 1: Increase Collaboration Among Staff

Because schools have operated in silos and teachers have historically been iso-
lated from each other (Hattie 2009; Johnston 2011; Schmoker 2006; White et al.
2012), the first improvement practice is to break down those barriers and increase
the amount of collaboration among staff (Goddard et al. 2007). DuFour and Mar-
zano (2011) describe professional learning communities (PLCs) as one avenue to
increase collaboration and support among staff (see also DuFour 2004). PLCs are
defined as any combination of “individuals with an interest in education” (DuFour
2004, p. 6). Often, this is viewed as a grade-level or department-level team of teach-
ers and educators. Within PLCs, teams work together to answer three questions:
1. What do we want each student to learn?
2. How will we know when each student has learned it?
3. What will we do when a student experiences difficulty in learning it?
As PLCs answer these questions, they encounter two insights. First, they quickly
find out that it takes all of them working together to effectively answer and respond
to those questions. No single teacher alone can answer all three of those questions as
effectively as the team. After discussing the curriculum and standards that they want
students to learn, teachers create common formative assessments (or use ones al-
ready created) to answer the second question. To answer the third question, teachers
are faced with what DuFour refers to as an “incongruity between their commitment
to ensure learning and the lack of a coordinated strategy to respond when some
students do not learn” (DuFour 2004, p. 8). The result is that teachers realize they
must work together to provide additional time to students who have not yet learned
the content. Teachers within a PLC become more collaborative and conversations
among them focus on data to determine if students have learned the content and on
sharing ideas, resources, and support for each other. Collaboration inevitably results
in answering the three questions posed previously.
Second, as PLC members embrace the notion of those questions, their focus
shifts from teaching to learning. Perhaps this shift is subtle, but it can create a belief
that all students can learn with the right support, and it can break down the distrust
or lack of collegiality in schools (Marzano 2003). Instead of confusing collegial-
ity with congeniality, or with collaborating on nonacademic topics, the collegiality
and collaboration that are essential to the success of PLCs deal with openly trust-
ing each other as professionals. The PLC members work together to analyze and
improve their classroom practices. They engage in an ongoing cycle of questions
about instruction and student learning, they believe and trust in each other and ul-
timately, the result is improved student achievement (DuFour 2004; Ainsworth and
Viegut 2006).
Goddard and colleagues (2007) conducted a study that looked at the connection
between collaboration and student achievement. Using hierarchical-linear model-
ing, a statistical process that accounts for nesting issues among schools (e.g., one
school may have confounding factors relative to another one that can influence
results, such as one school with a lot of students from wealthy families compared
to a school with a lot of students from poverty), the authors found a connection
between fourth graders’ achievement in mathematics and reading and the amount
of teacher collaboration. Schools in which teachers collaborated more frequently on
issues related to curriculum, instruction, and professional development had students
who scored higher on the state assessment examination. Although this research is
preliminary, it lends credit to creating a school environment of collaboration and its
association with higher student achievement (Stiggins and DuFour 2009; Yates and
Collins 2006).
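For readers who want a concrete picture of what a two-level (students-nested-in-schools) analysis looks like, the sketch below fits a simple mixed-effects model in Python; the data are synthetic and the variable names (achievement, collaboration, school) are invented for illustration, so this is not a reproduction of Goddard and colleagues' analysis.

# A minimal sketch of a two-level model (students nested within schools),
# in the spirit of hierarchical-linear modeling. All data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for school in range(30):                       # 30 synthetic schools
    collaboration = rng.uniform(1, 5)          # invented school-level collaboration rating
    school_effect = rng.normal(0, 5)           # unmeasured school-to-school variation
    for _ in range(25):                        # 25 synthetic students per school
        achievement = 200 + 4 * collaboration + school_effect + rng.normal(0, 10)
        rows.append({"school": school, "collaboration": collaboration,
                     "achievement": achievement})
df = pd.DataFrame(rows)

# A random intercept for each school accounts for the nesting of students in schools.
model = smf.mixedlm("achievement ~ collaboration", df, groups=df["school"])
print(model.fit().summary())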

2.4.2 Improvement Practice 2: Ensure Effective Practices

The persistent use of ineffective practices was discussed earlier (Ash et al. 2009;
Pashler et al. 2009), and it should be very obvious that if schools are to get better
results, ineffective practices must be replaced with effective practices. Ash et al.
(2009) make recommendations to increase the use of effective practices by teach-
ers. They point out that simply sharing the research may not be enough to ensure
teachers adopt effective practices. They state that teachers should be encouraged to
explore the research, gather data to evaluate their own students’ progress, and have
ongoing professional development to align their previous knowledge with new
knowledge. Yoon et al. (2007) conducted a review of the research looking at the
link between professional development provided to teachers and student achieve-
ment. Their conclusion was that teachers who received an average of 49  hours
of professional development raised their students’ achievement by 21 percentile points. Al-
though the studies included in their analyses were only at the elementary level, this
is evidence to suggest that training (which leads to improved practices) can impact
student achievement.

2.4.3 Improvement Practice 3: Increase the Connection Between Assessment and Instruction

Finally, schools can examine the alignment between their assessment and their in-
structional practices. Historically, assessment in education was used to either docu-
ment the occurrence (or nonoccurrence) of learning after the fact, largely for ac-
countability purposes, or to qualify students for extra services, such as special edu-
cation, second-language support, or the talented and gifted program (Howell and
Nolet 2000; Merrell et al. 2006). The schedule and purpose of assessment created
schools in which timely feedback about student learning was limited and informa-
tion that was beneficial for instructional planning was weak at best (Merrell et al.
2006; Pashler et al. 2009; Reschly 2008). To improve student outcomes, teachers
need information collected readily and efficiently so that they can make adjustments
to their instruction while they provide it (Hosp 2008; Ysseldyke and Christenson
1988). This calls for a shift from “assessment of learning” (i.e., using the results of
assessments to document that learning occurred) to “assessment for learning” (i.e.,
using the results of assessment to adjust instruction while it is actively occurring to
ensure learning) (Stiggins and Chappuis 2006).
This third improvement practice simply states that what is taught should be
measured and what is measured should inform what is taught. This improvement
practice requires use of assessments intimately tied to instruction that generate data
with high-instructional relevancy. (We use the term “high-instructional relevancy”
to refer to data that provide teachers with information about what to teach and how
to teach.) Historically, problems have been defined as residing within the child and
assessments reflected that belief (Reschly 2008; Ysseldyke and Christenson 1988).
Educators looked within the child and tried to identify innate, biological, or cogni-
tive reasons to explain poor student performance. Terms such as “slow processor”
or “visual learner” were used to describe students, but offer little information about
what academic skills students need to learn or what is actually being taught in the
classroom (i.e., the curriculum). Information about instruction was limited (or non-
existent), so teachers were not offered helpful solutions about how to work with
students who were struggling academically (Reschly 2008; Tilly 2008).
To address these shortcomings, educators can use assessments to examine and
measure alterable factors that contribute to student learning. The effort moves
from focusing within the child to describing problems as the difference between
what is expected and what occurs using observable and measurable terms. Vague
descriptions such as “he struggles in reading” become, for example, “he is read-
ing 50 words correctly per minute and he should be reading 100 words correctly
per minute” or “she can’t focus on anything” becomes “In mathematics class, she
attends to task 45 % of the time and she should attend at least 80 % of the time.”
Defining problems in observable and measurable terms leaves little room for er-
ror in interpretation, provides a clear goal to work toward, and makes the skills
students need to be taught concrete and clear (Howell and Nolet 2000; Ysseldyke and
Christenson 1988).
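To make this shift concrete, the short sketch below (in Python) quantifies the reading-fluency example above as a gap between expected and observed performance; the discrepancy-ratio calculation is one common convention, offered here as an illustration rather than a procedure prescribed by this chapter.

# A minimal sketch of stating a problem in observable, measurable terms:
# the gap between expected and observed performance. The numbers come from
# the reading-fluency example in the text.

expected_wcpm = 100   # expected words read correctly per minute
observed_wcpm = 50    # observed words read correctly per minute

gap = expected_wcpm - observed_wcpm                 # absolute difference
discrepancy_ratio = expected_wcpm / observed_wcpm   # how far below expectation

print(f"Gap: {gap} words correct per minute")           # Gap: 50 words correct per minute
print(f"Discrepancy ratio: {discrepancy_ratio:.1f}x")   # Discrepancy ratio: 2.0x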

2.5 Use of Problem-Solving Model

To enable use of the three aforementioned improvement practices, schools can
adopt the PSM. The PSM provides a continuous improvement framework that could
correct many of the issues faced by schools today. The PSM can be applied at two
levels: at the systems level, systems-level problem solving, and at the individual
level, individual problem solving. (Please note that we refer to the classroom and
group levels as part of the systems level).
The PSM is described in detail in Chapter 3. It is a four-phase heuristic model
that outlines steps to take to identify, quantify, intervene with, and evaluate a prob-
lem. Problems are defined as the gap between the expected performance and the ob-
served performance, and data are gathered to verify (or disprove) hypotheses about
the presumed causes of the student’s learning. Interventions are designed based on
the results of data gathering and analysis and the student’s progress is tracked to
ensure learning is taking place and that the original hypotheses generated about why
the problem exists are true. The PSM ensures that problems are clear, objective, and
defined with instructionally relevant terms (Shinn 2008).

2.5.1 Systems-Level Problem Solving

Given the fragmented nature of some schools and the lack of clear use of data and
effective practices, it is not surprising that many schools need whole-school reform
to improve student achievement and outcomes (Newmann et al. 2001). Applying
the PSM to the whole school rests within a tiered model of prevention, which we
refer to as Multi-Tiered System of Supports or MTSS (Barnes and Harlacher 2008;
Brown-Chidsey and Steege 2010; Horner et al. 2005; Reschly 2008; Tilly 2008).
We will discuss MTSS in more detail in the next chapter, but MTSS is a schoolwide
service delivery model. With MTSS, schools reconfigure how they deliver services
into a leveled model in which students are matched to a corresponding level of in-
struction (called tiers). There is a focus on prevention, data-based decision making,
and use of the PSM to improve practices and outcomes for students.
The installation of this model into a school is not just about using the PSM, but
includes improving the use of research-based practices, increasing collaboration
among staff, and aligning assessments with instruction. This model provides an
overarching umbrella within which the three improvement practices can be concep-
tualized. Schools progress through the steps of the PSM to ask if their school as a
whole is achieving high standards. The four phases of the PSM are applied to the
entire school, but educators also can apply the phases to any particular student or
group of students. The application of the PSM to the individual student is the focus
of this book, but it is the culture and philosophy behind MTSS that provides the
context in which problem-solving assessments are used.

2.5.2 Individual Problem Solving

When using the PSM at the individual student level, educators follow the same pro-
cess as described for systems-level problem solving. Obviously, the unit of analysis
is much smaller at the individual level, but the steps of the PSM are the same. The
problem initially is identified and analysis of why the problem is occurring is un-
dertaken. A thorough analysis of all the relevant, alterable variables contributing to
the student’s learning is conducted, from which instructional plans are created or
adjusted. Finally, ongoing monitoring of student learning and instructional fidelity
is conducted to ensure the instructional plan results in the student progressing toward
(and ultimately reaching) his or her goal.

2.6 Purpose of the Book

If schools are to use the PSM at both the system and individual level, they require
assessments that align with that purpose. Summative assessments and assessments
that focus on unalterable or intracognitive variables have less relevance to problem
solving, and schools often lack assessment tools that allow for effective problem
solving. It is in this gap between what schools need and what they have that the pur-
pose of this book was born. We believe that CBE can fill a void and provide schools
with an assessment process that allows for effective problem solving.

2.7 Summary and Key Points

Schools in the USA are troubled with a number of issues, including low academic
performance and high rates of violence, bullying, and problem behavior. Contribut-
ing factors include the isolation of teachers, changing student population, teacher
attrition, and a historical focus on entitlement and labels. To improve schools, we
outlined a schoolwide approach of implementing MTSS (systems-level problem
solving) and an individual approach of using CBE (individual problem solving).

Key Points 
• The USA is ranked 18th out of 24 developed countries in education.
• Fewer than half of US fourth- and eighth-grade students are performing at
proficient levels in reading and mathematics.
• Students with disabilities and ELLs perform lower than students without
disabilities and those who speak English as their primary language.
• ELLs are almost twice as likely to be identified for special education and
are three times more likely to drop out, compared to other
ethnic groups.
• Schools are dealing with teacher attrition, inadequate teacher training, and
isolation in their classrooms.
• MTSS is a schoolwide approach for service delivery.
• Schools could potentially improve outcomes through use of the PSM, in-
creased collaboration and use of data for decision making, and consistent
use of research-based practices.
Chapter 3
Multi-Tiered System of Supports

3.1 Chapter Preview

Chapter 2 discussed how schools can apply the problem-solving model (PSM) to
both the whole school system and individual students. Chapter  3 focuses on the
former: systems-level problem solving. This chapter first outlines tiered models
of service delivery that schools are using to improve academic and behavioral out-
comes. The foundational principles are discussed and the three main components of
such models are identified: (a) tiers of instruction, (b) a comprehensive assessment
system, and (c) use of the PSM. Each component is discussed in detail, and the
chapter ends with a focus on how curriculum-based evaluation (CBE) fits into the
tiered models of service delivery.

3.2 Systemic Approaches to School Improvement

As discussed in Chapter  2, a multitude of factors contribute to schools’ struggle


with student achievement. One way to improve support within a school is to create
a culture of collaboration and mutual respect and to use a model in which support
for students is systematic and coordinated, data are used to make decisions, and the
“silos” in which teachers have historically worked are broken down (Chenoweth
2009; Schmoker 2006). Multi-Tiered System of Supports (MTSS) is such a model
and is the focus of this chapter.

3.3 Description of Multi-Tiered System of Supports

MTSS is a multitiered model of service delivery in which all students are provided
an appropriate level of academic and behavioral support based on their needs and
skill levels. Graden et  al. (2007) articulately describe the premise behind MTSS
as matching “research-validated instruction…to the data-based needs of students”
(p. 295). MTSS relies on the ongoing use of data to make decisions to ensure both
proper implementation of practices (i.e., fidelity or treatment integrity) and that
the practices actually are effective. MTSS is more than just a process of provid-
ing interventions to a small group of students. Rather, it is a school reform model
and with it comes a new way of thinking and doing business in education. Various
terms used to describe MTSS will be reviewed before discussing the principles
behind it.
Terms to Describe MTSS  Various terms are used to describe a tiered model of
service delivery depending on whether the focus is on the academic or behavio-
ral outcomes in schools. Response to Intervention (RTI) is often used to describe
an academic tiered-model, whereas Positive Behavioral Interventions and Sup-
port (PBIS) is used to describe a behavioral tiered model (formerly called Positive
Behavior Support or PBS). Additional terms that have been used to describe aca-
demic tiered models include Instructional Decision Making (Tilly 2008), Response
to Instruction and Intervention (RTII) (www.pattan.net), and RTI2 (rti.lausd.net/
about_rti). PBIS is the most often used term for behavior tiered models, but anot-
her term used to describe such a model is RTI-Behavior or RTI-B (http://www.
dpi.state.nd.us/health/pbs/index.shtm). As schools began to combine both acade-
mic and behavior tiered models and as the overlap between these models became
evident (Algozzine et  al. 2012), terms such as Multi-Tiered System of Supports
(http://www.kansasmtss.org) or MTSS appeared (http://www.florida-rti.org/
floridaMTSS/index.htm).
These terms signify the similarities between academic and behavior tiered mod-
els and acknowledge the link between academic and behavioral performance in
students (Kansas MTSS, n.  d.; McIntosh et  al. 2010; McIntosh et  al. 2008). Al-
though there are variations in the form of “tier” that is used (i.e., tiered vs tier)
or in the plural use of the words “system” and “supports” (i.e.,
system of support, systems of supports), we use the term MTSS to refer to a tiered
model used by schools to address both the academic and behavioral functioning of
students. Whether schools implement a tiered model to address academics, behav-
ior, or both, the principles between these models are the same. Before discussing
the salient features of MTSS, the foundational principles of MTSS are presented
(see Table 3.1).

3.3.1 Key Principle 1: All Students can Learn to Grade-Level, Given the Right Level of Support

Whether a school’s concern is improving the academic or behavioral outcomes
of their school, the first key principle is the belief that all students can learn.
This does not mean “all students can learn…a little bit” or that “all students can

Table 3.1   Key principles of MTSS
1. All students can learn to grade-level, given the right level of support: Belief that a student’s failure to learn results from incorrect instruction, not due to the innate learning capacity of the student
2. A proactive, preventative approach to education: Problems are more easily fixed when they are prevented altogether or identified early on
3. Use of evidence-based practices: To give students the best chance at success, we use what has been proven to work
4. Use of data-based decision making: To avoid biases and subjectivity, clear data are used to guide decisions
5. Instructional match between support and need: Students need the right match between their skill level and the amount of support provided in order to progress and learn
6. Schoolwide use and collaboration: Creating a culture of unity and support can lead to more effective and sustainable results and practices

learn, except for that student.” The true spirit of MTSS is the belief that all stu-
dents can reach grade-level expectations, given the right support. This principle
embodies matching what students need with what they get. An arduous task for
sure, this can become a reality in light of the remaining principles and features
of MTSS.
Believing all students can learn with the right support raises questions about
how feasible and realistic it is for schools to achieve grade-level standards for all
students. First, it is important to point out that some schools are getting remark-
able results with tiered systems. For example, Marchand-Martella et  al. (2007)
reported improvements with effect sizes (ESs) of 0.50–3.96, which means that
students in grades K–2 improved their academic scores by at least 19 percen-
tile points and upward of 40 percentile points (depending on the skill measured
and the student’s grade). (See Inset  3.1 for an explanation of an ES.) Algozzine
et al. (2008) reported an increase of students scoring at benchmark in kindergar-
ten from 64 % to 82 % after 1 year of implementation. Fisher et  al. (2008) used
common formative assessments and increased collaboration among teachers in a
high school setting to increase the percentage of students scoring at or above basic
on the state biology assessment from 30 % to 71 %. Other settings and researchers
report similar positive gains in achievement, reductions in special education refer-
rals of 50 %, and an increase in the accuracy of special education referrals (Burns
et  al. 2005; Greenwood et  al. 2008; Jimerson et  al. 2007; VanDerHeyden et  al.
2007; Vaughn et  al. 2010). In terms of behavior, Taylor-Greene and co-workers
(1997) reported a reduction of 42 % in office discipline referrals in just 1 year in a
middle school setting. Others have found similarly positive results in both elemen-
tary schools (Curtis et al. 2010; McCurdy et al. 2003) and high schools (Bohanon
et al. 2006).

Inset 3.1 What Is an Effect Size (ES)?

An ES is a statistic used to describe the strength of a relationship between two
variables. For example, what is the ES between a particular teaching strategy
and reading abilities of students? ESs tell us just how much of a difference
there is between a group that received the particular strategy compared to a
group that did not. For example, let us imagine that students who receive Tea-
ching Strategy A have an ES of 0.50. This translates into 19 percentile points,
meaning that the students who had received Teaching Strategy A scored 19
percentile points better, on average, than students who did not receive Tea-
ching Strategy A.
Often times, researchers will ask what is the overall effect of a particular
strategy or topic. If there are numerous studies conducted on the topic, they
can gather all the studies together and examine the average ES of all those stu-
dies. This is called a meta-analysis and an ES can be calculated from looking
at all of the studies in the meta-analysis. Meta-analyses are used to summarize
and quantify the results of several studies to make an overall judgment about
the usefulness of a particular strategy.
For comparison, a small ES is 0.20 and this is equivalent to a percentile
gain of 8 points. A moderate ES is 0.50 and is equal to 19 percentile
points. A large ES is 0.80 and is equivalent to a percentile gain of 29 points.
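The conversion from an ES to a percentile gain assumes normally distributed scores. As an illustration only, the sketch below (Python standard library) reproduces the values above using the standard normal distribution; the function name is invented for this example.

# A minimal sketch of converting an effect size into a percentile gain,
# assuming normally distributed scores.
from statistics import NormalDist

def percentile_gain(effect_size):
    """Percentile points by which the average treated student exceeds the 50th percentile."""
    return round(100 * (NormalDist().cdf(effect_size) - 0.5))

for es in (0.20, 0.50, 0.80):
    print(es, "->", percentile_gain(es), "percentile points")
# Prints 8, 19, and 29 percentile points, matching the values above.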

The point here is not just about the research. It is important to distinguish between
the belief that all students can learn grade-level content and the reality of schools
accomplishing that goal. We whole-heartedly endorse the belief that all students can
learn given the right support, and we also believe that schools can achieve this goal.
Although educators may get frustrated when students do not reach grade-level stan-
dards, separating the belief from reality allows educators to retain their persistence and
dedication to their work. Without adopting the stance that all children can learn, it
is possible that educators will not persist as long if they believe the goal is unreach-
able (Howell and Nolet 2000; Johnston 2011). This discussion is not suggesting that
those educators are not working hard or doing their best work. But the belief that
all students can learn versus the belief that some students can learn is the difference
between persisting to find an effective instructional plan versus not persisting.
Truly embracing Key Principle 1 means one does not place stock in unalter-
able variables and instead focuses on continually finding alterable variables to im-
prove the educational performance of students (regardless of the student’s past history
or struggles). The task of achieving grade-level standards for all students is not
easy, but believing it can happen and a focus on continuous improvement through
systematic problem solving will make it possible regardless of the current reality.
The following situation highlights this point. A school psychologist was par-
ticipating in a reevaluation for a student in special education and the question of a
possible intellectual disability (ID) was raised during the pre-evaluation meeting.
Although the child had some significant challenges and skill deficits (e.g., third-
grade reading level in the sixth grade), she did not meet eligibility criteria under that
particular category. The teacher who had spent a few years working with the student
was particularly interested in the cognitive score and asked the school psychologist
for the result. (For reference, a cognitive score below 70 is one of several criteria
required for an ID classification.) The conversation followed as:
• School psychologist: Well, it was low. But her academic achievement and adap-
tive skills were measured in the average range.
• Teacher: How low?
• School psychologist: It was in the high 60s.
• Teacher: Oh…so it’s not my fault.
This example illustrates the danger of looking for reasons to explain a student’s fail-
ures that are not alterable or instructionally relevant (see Braden and Shaw 2009).
Acknowledging real challenges that require intensive support and the impact that
unalterable variables can have on learning is fine, but once it is determined that a
child’s innate traits are the reason the student is struggling, beliefs that the child can-
not succeed will diminish instructional efforts (Brady and Woolfson 2008; Rolison
and Medway 1985; Woodcock and Vialle 2011). For example, Woodcock and Vialle
(2011) found that teachers in training had lower expectations of students labeled
as having a learning disability compared to those students who were not labeled.
Rolison and Medway (1985) found similar results in which teachers held lower
expectations for students classified under the category of mental retardation. They
also believed that a student’s history of difficulty was more due to innate ability
than external factors for those students labeled with a learning disability compared
to those students not labeled as having a disability. When student failure is not at-
tributed to innate characteristics, educators are more likely to look for ways instruc-
tion can be adjusted to improve outcomes, and they also are more likely to believe
they can change the student’s outcome trajectory (Brady and Woolfson 2008). It
is important to acknowledge the risk in believing that certain abilities or student
performances are innate (Howell 2010).

3.3.2 Key Principle 2: A Proactive, Preventative Approach to Education

Students who start behind tend to stay behind, with the gap between their skills and
those of typical peers continuing to widen as they progress through school (Baker
et al. 1997; Hart and Risley 1995). For example, students who score in the low-
est 10 % of readers (based on oral reading fluency rates) in the end of first grade
tend to remain in the lowest 10 % through the fifth grade (Good et al. 1998). These
difficulties persist into high school, as 74 % of poor readers in the third grade re-
main poor readers in the ninth grade (Fletcher and Lyon 1998). Additionally, Baker
and colleagues (1997) identified that students in grades 1–3 from economically
disadvantaged backgrounds added 3,000 words per year to their vocabulary,
compared to 5,000 words per year for students from middle class homes. Hart and
Risley (1995) report a 30-million word gap in vocabulary between families on wel-
fare and those families with professional backgrounds. It is an unfortunate reality
that some students who end up behind (or start off behind) in education tend to stay
behind.
MTSS embodies a proactive and preventative approach to assist students before
problems become severe. MTSS identifies those students who need additional sup-
port early on, both in terms of the school year (i.e., assessment in the fall of all
students) and in terms of each student’s academic careers (i.e., assessment in early
elementary). There are early assessment tools available to predict which students
are at risk for difficulties in school for both academics and behavior. Measures
of early literacy can predict students at risk for reading failure through the sixth
grade (Good and Kaminski 2011; McGlinchey and Hixson 2004), and measures of
early numeracy are available to identify students in need of support in mathematics
(Clarke and Shinn 2002). Office discipline referrals are used to identify students
with behavioral difficulties (Horner et al. 2005), and a 1-minute phoneme segmen-
tation measure in kindergarten can reliably predict students who have behavioral
problems in the fifth grade (McIntosh et al. 2008).
Academic and behavior problems can begin early in education, are persistent,
and seem to have cumulative effects on student learning. With MTSS, the focus is
on preventing a small problem from becoming a larger problem through early iden-
tification and intervention. It is proactive in that students are actively screened
for early signs of struggles and support is provided immediately. It is preventa-
tive in that the goal is to reduce the number of struggling students in
education.

3.3.3 Key Principle 3: Use of Evidence-Based Practices

A historical concern in education has been the difficulty in consistently using
research-based practices (Ash et  al. 2009; Reschly 2008). There may be several
reasons why there is a gap between research and practice among schools, but MTSS
places a focus on ensuring that the practices used are evidence-based and proven
to be effective. The rationale is simple. Use what works to give students the best
chance at success. Imagine going to a doctor with an illness. The doctor describes
two treatment options: Option A, which is proven effective and has helped other
people, or Option B, a disproven method that does not have evidence to show it
works but is suggested because the doctor thinks it might work. It is not likely
patients would choose Option B and take chances with their health. Why, then, are
those chances taken with the education of our children?

3.3.4 Key Principle 4: Use of Data-Based Decision Making

Related to using effective practices is using data to make educational decisions.
There is a notion that educators either make decisions with limited data, ignore the
data they have, or simply do not use data to make decisions (Merrell et al. 2006;
Mandinach 2012; Reschly 2008). Too often subjectivity is used to make high-stakes
decisions. Data should be used to align curriculum and instruction to assessment, to
allocate resources, to drive professional development decisions, and to determine
the effectiveness of practices in schools. Clinical judgment and teacher input are
invaluable when making high-stakes decisions, but schools have to balance their
decisions with objective parameters. MTSS aims to provide this balance by using
data to make decisions.

3.3.5 Key Principle 5: Instructional Match

Related to Key Principle 1, that all children can learn, is the notion of instruc-
tional match. Critical to ensuring all students learn is providing the right support.
(The right support means it is the right intensity, targeting the right skill deficits,
and results in improved student performance on target to meet goals.) MTSS uses
multiple layers of instruction, which increase in their intensity, in order to match a
student’s skills and skill deficits to the right level of support. If instruction does not
result in sufficient growth, then adjustments are made or additional supports are
provided until sufficient growth results (Barnes and Harlacher 2008). MTSS allows
for a continuum of services to meet the needs of all students.

3.3.6 Key Principle 6: Schoolwide Use and Collaboration

The last Key Principle emphasizes the deconstruction of silos and isolation in
schools and the increase of collaboration and schoolwide adoption of MTSS prac-
tices. Although the principles of using data to identify students who need support
and to monitor the effectiveness of interventions can be used in small groups or with
any one individual student, MTSS is a schoolwide reform model (Barnes and Har-
lacher 2008). It is intended to create a new climate and culture within the school in
which all students are seen as learners and data are used not just for accountability,
but instead to ensure learning occurs (DuFour et al. 2004).
Reviewing the principles behind MTSS is helpful for understanding not only
what MTSS is but also why it is being implemented. Understanding the why behind
MTSS increases the likelihood of educators buying into the process (Ikeda et al.
2002). Having reviewed the principles, a summary of the MTSS process is pre-
sented and the three main components of the model are discussed in detail.

3.4 Description of MTSS

The goal of MTSS is to improve the academic and behavioral outcomes of all
students (behavioral refers to social, emotional, and behavioral outcomes of stu-
dents) [Barnes and Harlacher 2008; Horner et al. 2005; Jimerson et al. 2007; Kan-
sas MTSS, n.  d.; National Association of State Directors of Special Education
(NASDSE) 2005]. All students are screened at least three times a year using reliable
and valid measures that are efficient, yet predictive of general outcomes (principle
4). The data are summarized and reviewed shortly after they are gathered to (a) identify
which students are at risk for not meeting academic or behavioral expectations, and
(b) ensure the system is meeting the needs of most students with core instruction
(principle 2). Students then receive evidence-based support that matches their level
of need (principles 3 and 5). Students with more severe deficits are provided more
intensive supports; as the level of need increases, so does the intensity of the support
(principle 5). Students’ progress is monitored over time to ensure supports are ef-
fective. Students requiring more intensive supports receive more intensive monitor-
ing (principle 5). If progress monitoring data show the support is not effective, the
instructional plan is modified in an attempt to allow students to progress toward goals.
The cyclical process of using data to identify and ensure effectiveness of a given
intervention to solve an identified problem (called the PSM) ensures that each child
is given the right support (principle 1).
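As one illustration of how progress-monitoring data might be checked against a goal, the sketch below (in Python) compares a student's observed weekly growth to the growth needed to reach a goal; the scores, the goal, and the decision rule are hypothetical and are not a rule prescribed by MTSS.

# A minimal sketch of checking progress-monitoring data against a goal.
# All scores, the goal, and the timeline below are hypothetical.
from statistics import linear_regression   # Python 3.10+

weeks = [1, 2, 3, 4, 5, 6]
wcpm = [38, 40, 41, 44, 45, 47]             # weekly words-correct-per-minute scores
goal_week, goal_wcpm = 18, 72               # e.g., reach 72 wcpm by week 18

slope, _ = linear_regression(weeks, wcpm)   # observed growth per week
needed = (goal_wcpm - wcpm[0]) / (goal_week - weeks[0])   # growth needed to reach the goal

print(f"Observed growth: {slope:.2f} wcpm/week; needed: {needed:.2f} wcpm/week")
if slope < needed:
    print("Growth is below what is needed; consider adjusting the instructional plan.")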
As illustrated in Fig. 3.1, the aim is to create healthy, functioning schools that use
data to make decisions and are able to provide support to students immediately when
needed and before problems are severe. Using evidence-based instruction and layers
of increasingly intense supports, schools should meet the needs of at least 80 % of
their students with their core or Tier 1 level of support (Torgesen 2000). That is to
say that at least 80 % of students will benefit from the instruction that all students
receive and will not require additional help to reach grade-level standards. Ten to
15 % of students will require Tier 2, or additional instruction to supplement Tier 1
to meet academic and behavioral expectations. About 3 –5 % of students will require
intensive support to remediate missing skills. Simply put, MTSS is a model that uses
data in a formative manner to match students’ needs to an effective level of support,
with the goal of creating a healthy system that meets the needs of all students.
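To illustrate how screening data might be summarized against the 80/15/5 expectation, the sketch below (in Python) sorts hypothetical screening scores into tiers using invented cut points; actual benchmark and cut scores come from the specific screening instrument a school uses, not from this example.

# A minimal sketch of sorting screening scores into tiers of support.
# Cut points, student names, and scores are hypothetical.

BENCHMARK = 40   # at or above: core (Tier 1) support is considered sufficient
STRATEGIC = 25   # at or above, but below benchmark: supplemental (Tier 2) support

def assign_tier(score):
    if score >= BENCHMARK:
        return "Tier 1"
    if score >= STRATEGIC:
        return "Tier 2"
    return "Tier 3"

scores = {"Ana": 52, "Ben": 31, "Cal": 18, "Dee": 47, "Eli": 44}
tiers = {name: assign_tier(score) for name, score in scores.items()}
print(tiers)   # {'Ana': 'Tier 1', 'Ben': 'Tier 2', 'Cal': 'Tier 3', 'Dee': 'Tier 1', 'Eli': 'Tier 1'}

# Percent of students at or above benchmark with core instruction alone
# (ideally at least 80 % under MTSS).
pct_tier1 = 100 * sum(t == "Tier 1" for t in tiers.values()) / len(tiers)
print(f"{pct_tier1:.0f}% at or above benchmark")   # 60% in this tiny example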
The three-tiered model of prevention and service delivery was developed out of
the public health field, and it is helpful to describe MTSS using a medical analogy
(Reschly 2008). Most of us engage in common practices to maintain our health:
eating healthy, getting enough sleep, exercising regularly, taking vitamins, and go-
ing to the dentist every 6 months for a cleaning. When we visit the doctor, quick
screeners, such as blood pressure and body temperature, are used to assess our over-
all health. If we meet expectations on screeners and continue to stay healthy, the
only recommendation is to continue with the common practices mentioned
previously.
When a screener indicates that there may be a cause for concern, additional in-
formation is needed to determine why the problem is occurring. If our body tem-
perature is high, the doctor may order a blood test to see why. If our knee hurts, the
doctor may order an x-ray to pinpoint the cause of the pain. Information gathered
Fig. 3.1   Multi-Tiered System of Supports (MTSS)

informs the treatment plan, as the diagnostic assessment gets to the root of the prob-
lem. If the annual checkup indicates that we have high cholesterol, the treatment
plan would likely include an increase in exercise and an adjustment of diet. Weekly
weigh-ins and follow-up laboratory work would be recommended to determine if
the plan is effective in lowering cholesterol. A food and exercise journal is also sug-
gested to ensure the plan is followed. If cholesterol levels are not improved, more
assessment may be conducted to determine if there is an alternate explanation, and
the intensity of the treatment and monitoring plans increase accordingly. In this case
and in MTSS, data are used to determine treatment needs with a focus on a speci-
fied goal. Having discussed the principles and process of MTSS, the next section
provides an overview of the core components of MTSS.

3.5 Core Components of MTSS

The three core components of MTSS are:
1. Multiple and increasingly intensive tiers of instruction
2. A comprehensive assessment system
3. Use of the PSM
Fig. 3.2   The intricate relationship of the three components of MTSS

There is a reciprocal, intricate connection among all three of these components.
Data from assessments inform instruction, and ongoing use of data determines
whether instruction is effective. This assessment-instruction decision-making process is
organized using the PSM (see Fig. 3.2).

3.5.1 Multiple Tiers of Instruction

To support the principles of evidence-based instruction and matching instruction
to student needs, MTSS embodies multiple layers of instruction.
Tier 1  Tier 1 is core instruction provided to all students and is designed to inde-
pendently meet the needs of at least 80 % of the student population. Supports at
Tier 1 are evidence-based, scientifically-proven practices and programs that are
culturally and linguistically appropriate, and implemented with fidelity (NASDSE
2005).
For behavior support, Tier 1 consists of teaching 3–5 positively stated expecta-
tions (e.g., be safe, respectful, responsible). Students who display these expected behaviors are
reinforced daily with social/verbal praise and with tangible rewards, such as tickets
or signatures (Horner et al. 2005; Sugai and Horner 2009). These high-frequency
rewards are often exchanged for larger items (e.g., a store to buy school supplies
or other items, drawings, raffles, tickets for school events; Morrisey et  al. 2010;
Peshak George et al. 2009). Tier 1 behavioral support is similar between elementary
and secondary schools, as both levels identify, teach, and reinforce universal behav-
ior expectations (see Flanery and Sugai 2009 and Greenwood et al. 2008).
At the elementary level, Tier 1 academic support is at least 90 minutes a day
in reading and 60 minutes a day in mathematics (Haager et  al. 2007; Jimerson,
et al. 2007), and it consists of both whole and small-group differentiated instruction
(Walpole and McKenna 2007). At the secondary level, Tier 1 is the common content
area that all students enroll within, such as English/language arts and mathematics.
The time blocks can vary between schools, but are usually 50- to 60-minute daily
blocks or 90-minute blocks every other day (Burns 2008).
Tier 2  Some students will require additional supports to supplement services pro-
vided at Tier 1. Tier 2 involves targeted instruction for groups of students with
common needs and is provided for an estimated 10 –15 % of the student population
(Horner et al. 2005; Jimerson et al. 2007). Tier 2 is designed to intervene early and
prevent minor academic and behavioral problems from getting worse and allow
students to meet grade-level expectations.
For behavior, Tier 2 is a collection of efficient interventions for groups of stu-
dents with common needs, though these interventions may not necessarily be pro-
vided in a group format. Examples of the interventions include daily “check in,
check out” programs in which the student receives steady feedback on the perfor-
mance of the expectations (Crone et al. 2010), social skills groups, and mentoring
by adults or peers. Tier 2 is designed to be efficient (i.e., quick access to support),
effective, and early (i.e., provided at the first signs of difficulty) (Hawken et  al.
2009). Tier 2 interventions are similar between elementary and secondary schools,
but the interventions are tailored to meet the needs and developmental levels of their
respective students (see Crone et al. 2010 and Greenwood et al. 2008).
Tier 2 for academics provides students more time to practice the skills taught dur-
ing the core instruction and occurs outside of the time designated for Tier 1 (Vaughn
et al. 2007). At the elementary level, Tier 2 is usually provided in 30-minute time
blocks 3–5 days each week and in groups of 6–8 students (Abbott et al. 2008; Brown-
Chidsey and Steege 2010; Greenwood et al. 2008; Vaughn et al. 2007). At the sec-
ondary level, students are provided instruction in groups of 3–6 students (Griffin
and Hatterdorf 2010) or up to 10–15 students for 40–50 minutes each day (Pyle
and Vaughn 2010; Vaughn et al. 2010).1 Generally speaking, Tier 2 is an average of
10–12 weeks and is capped at 20 weeks because most students do not achieve much
benefit after that time frame (Vaughn et al. 2007; Vaughn et al. 2012). If Tier 2 does
not result in success, more individualized, intensive supports at Tier 3 are provided.
Tier 3  Supports at Tier 3 are highly individualized, based on additional assessment,
and more intensive for students not successful with the combination of Tier 1 and
Tier 2 supports. Three to 5 % of the student population requires this level of support
(Horner et al. 2005; Jimerson et al. 2007).
For behavior support at Tier 3, students receive function-based support that is
individually tailored to their needs and sufficiently comprehensive to teach replace-
ment behaviors (Scott et al. 2009). A functional behavior assessment (FBA) is con-
ducted to identify that support. An FBA is a process in which a behavior of concern
is defined in observable and measurable terms, the triggering and maintaining fac-
tors of the behavior of concern are described, and data are collected to verify the
hypothesis of why the behavior is occurring (Crone and Horner 2003). Following
the FBA, a behavior intervention plan (BIP) is designed to make the behavior of
concern less likely to occur and to teach the student more appropriate behaviors that

1 Although 10–15 students may all have an intervention block within the same classroom, inst-
ruction is still differentiated and students can receive small-group instruction in groups of 6–8.
The interventionist can split the block of time so that small-group instructions occur for one group
while another group simultaneously works independently (Burns 2008; Vaughn et al. 2010).
serve the same function (Scott et al. 2009). Tier 3 support for behavior is similar
between elementary and secondary schools, as both levels use the FBA and BIP
processes. Examples of Tier 3 are hard to summarize because they are individually
designed, but students may receive higher rates of reinforcement compared to Tier
2 as well as the use of response-cost systems. Tier 3 plans can also include wrap-
around services and intense counseling sessions (Crone and Horner 2003; Netzel
and Eber 2003).
For academics, Tier 3 is made more intensive primarily by adjusting four factors: the
size of the group (i.e., number of students), the frequency (i.e., number of days of
intervention per week), the duration (i.e., both the number of minutes of interven-
tion time and the number of weeks of the intervention), and the person facilitating
the group. Specifically, students are in groups of less than 4 (Denton et al. 2007),
receive Tier 3 daily (5 times/week) for 45 to 60 minutes at a time (Abbott et al.
2008; Vaughn et al. 2007), and receive instruction longer than the 8–12 weeks des-
ignated by Tier 2 (Griffiths et al. 2006; Vaughn et al. 2003). Additionally, the person
facilitating Tier 3 is typically a more experienced teacher; for example, a regular
educator teaches Tier 2 and a special educator or reading specialist teaches Tier 3
(Harn et al. 2007). At Tier 3, school sites provide more intervention time, at greater frequency and for a longer duration, and in smaller groups, relative to what is already in place at Tier 2.
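For readers who find it helpful to see these four factors side by side, the sketch below summarizes them in a small Python structure. It is purely illustrative: the field names are ours, and the values simply restate the approximate ranges cited above rather than prescribing fixed rules.

    # Illustrative only: approximate intensification parameters for academic
    # Tiers 2 and 3, restating the ranges cited in this section. Field names
    # and values are a hypothetical summary, not a prescribed standard.
    from dataclasses import dataclass

    @dataclass
    class TierParameters:
        group_size: str           # number of students per group
        days_per_week: str        # frequency of sessions
        minutes_per_session: str  # duration of each session
        weeks: str                # typical length of the intervention
        facilitator: str          # who delivers the instruction

    TIER_2 = TierParameters("6-8 (elementary)", "3-5", "30",
                            "10-12, up to 20", "general educator")
    TIER_3 = TierParameters("fewer than 4", "5 (daily)", "45-60",
                            "longer than 8-12",
                            "special educator or reading specialist")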
An important part of Tier 3 support is the use of strategies to maximize the impact of
core instructional time. It is critical that students who receive Tier 3 supports do not
miss essential content or grade-level skills, and that they benefit from the time spent
in core instruction (Harn et al. 2007). A combination of whole-group, small-group,
and individualized support may be required to provide a student meaningful access
to Tier 1 (Walpole and McKenna 2007). Such coordination of supports for students
with intensive needs prevents the creation of new skill gaps. Tier 3 may "fill in" a student's missing skills, but removing the student from Tier 1 for Tier 3 remediation would only create new skill gaps (Harn et al. 2007). When providing Tier 3
services, a balance between core instruction (Tier 1) and remediation is necessary.

3.5.2 Comprehensive Assessment System

To address the principles of data-based decision making and creating an instructional match, four different types of assessments are required: (a) screening, (b) diagnostic, (c) formative, and (d) fidelity. There is a fifth type of assessment, summative, which measures high-stakes outcomes or whether students reach certain levels of mastery at a particular point in time (e.g., state assessments, end-of-unit examinations). Because MTSS relies primarily on the first four, they are discussed in detail here.
Screening  A screener is defined as a brief, efficient measure that identifies students
who are at risk for academic failure or behavioral challenges (Good et  al. 2002;
Hosp 2008). Screening tools are quick to administer and provide a general indica-
tion of academic or behavioral functioning of all students. Screeners do not provide minute detail about why a student may need help; they only serve to measure broad
skills or areas. They simply ask “Is the student at-risk?” (Hosp 2008).
In MTSS, attendance and number of office discipline referrals may be used for
screening decisions for behavior (Horner et al. 2005; Netzel and Eber 2003), where-
as curriculum-based measurement (CBM) (Good et al. 2002) and common assess-
ments (DuFour and Marzano 2011) may be used for academic screening decisions.
Some schools will use previous academic grades, state testing scores, and summa-
tive assessment scores to identify students who are at risk (Jimerson et al. 2007;
Johnson and Smith 2008; Vaughn and Fletcher 2010). Although those assessments are not screeners per se, the way the data are used allows them to serve as screeners in those situations. Note the difference between a screener as defined by its properties
versus data used to make screening decisions. It is common that elementary schools
administer screeners to all students, but by the time students reach secondary levels,
an abundance of data typically exist with which screening decisions can be made
(Vaughn and Fletcher 2010).
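For readers who think in code, the screening decision itself can be sketched in a few lines of Python. Everything in the sketch is hypothetical: the students, scores, and cut score are invented, and any student flagged this way would still need to be verified with additional data.

    # Hypothetical sketch of a screening decision: who scored below the cut score?
    # Names, scores, and the cut score are invented for illustration only.
    def screen(scores, cut_score):
        """Return the students flagged as potentially at risk."""
        return [name for name, score in scores.items() if score < cut_score]

    winter_orf = {"Student A": 72, "Student B": 41, "Student C": 68}  # words correct/min
    print(screen(winter_orf, cut_score=52))  # ['Student B']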
Diagnostic Assessment  Once a student is identified as requiring extra support (and
that need for support is verified with additional data sources), diagnostic assess-
ments are used to determine why the student is at-risk (Good et al. 2002). While
screeners are brief and measure broad skills, diagnostic assessments are used to
pinpoint specific skill gaps to target with intervention. Diagnostic assessments are
more in-depth and comprehensive and take longer to administer (Hosp 2008). The word diagnostic does not refer to diagnosing as commonly used in the medical field or to labeling a student with a disability. In MTSS, diagnostic means to analyze
the instructional situation and learner’s skills to plan for intervention (Hosp 2008).
Thus, assessment of students’ strengths and weaknesses and examination of the
instructional environment contribute to intervention planning.
The in-depth diagnostic assessment of each student's skill deficits and the instructional environment is not used for every student who requires additional tiered support. Whether and when it is used depends on the type of protocol used by the MTSS model (see Barnes and Harlacher 2008). Table 3.2 offers a description of protocols used
in MTSS.
With diagnostic assessments, schools can either assess the collective needs of
a group of students (group diagnostic) or they can individually assess each stu-
dent (individual diagnostic). Group diagnostic assessment focuses on the collective needs of the group and targets skill deficits that a small group of students have in common. More detail on group diagnostic assessment is provided in Chapter 10.
For individual diagnostic assessment, educators administer diagnostic assessments and assess the instructional environment in order to pinpoint a personalized intervention plan for the student. Thus, the intervention is aimed at the individual skills and exact setting that the student needs to be successful (Christ 2008). For behavior,
an FBA is conducted to understand the behavioral pathway of the undesired behav-
ior (i.e., the setting events, antecedents, reinforcement of the undesired behavior,
and potential reinforcement for desired behavior) (Crone and Horner 2003; Scott
et al. 2009). For academics, the assessment process includes assessing the instruction, curriculum, educational environment, and the learner's skills and attributes (ICEL) using review of records, interview, observation, and tests (RIOT) to understand the learning environment and contributing causes of the student's level of performance. This is referred to as the RIOT/ICEL framework, and we discuss this in detail in Chapter 5.

Table 3.2   Types of protocols in an MTSS model

Problem-solving protocol
  Description: Individual assessment is conducted at each tier to identify an instructional plan
  Pros and cons:
  • Individually tailored interventions
  • A corresponding increase in time and resources to accomplish

Standard protocol
  Description: Students with specific areas of concern receive a consistent or standard intervention designed to meet the targeted needs of multiple students; students with similar needs receive similar interventions
  Pros and cons:
  • More efficient, as interventions are designed on the collective needs of the group (does not require individual problem solving)
  • Approximately 85 % of students who are served in a standard treatment protocol respond favorably and do not require additional support
  • May provide instructional time on a skill that a student does not need

Combined protocol
  Description: Combines the previous protocols; students at Tier 2 receive a standard intervention, whereas those at Tier 3 receive an individually designed intervention
  Pros and cons:
  • Has positive benefits of each approach
  • Requires more coordination and collaboration
Formative Assessment  After schools answer “Who is at-risk?” and “Why are they
at-risk?” an instructional plan is developed and formative assessment is used to
answer the question “Is the instructional plan working?” (Hosp 2008). Formative
assessment is a process in which teachers gather and use data to guide their decision
making around teaching and learning. It is a cyclical process in which adjustments
are made based on the data gathered to ensure learning is taking place. Formative
assessment for academics can draw on various assessments and sources of data, including
(a) formal tests, such as CBM or end-of-unit examinations, or (b) minute-to-minute
instructional interactions and strategies, such as questioning, use of “exit tickets,”
or verifying that students understand content with simple verbal responses (Black
and Wiliam 1998; Marzano 2010). For behavior, formative assessment may include
behavioral observations or daily behavior points cards (Crone and Horner 2003;
Crone et al. 2010).
In MTSS, a type of formative assessment called progress monitoring is used to
verify whether students are making growth toward academic and behavioral stan-
dards or goals. Although teachers use a variety of assessment practices to measure
student progress and to guide instruction (Marzano 2010), MTSS requires that deci-
sions be made about the level of support a student is receiving within the entire model (e.g., does the student need Tier 3 to make adequate progress?). Assessment must
help answer whether or not the current level of support is moving the student toward
mastery of global and/or specific outcomes in a subject or content area. Decisions
about instruction are not limited to whether or not a student needs a few additional
minutes of review (which is what an end-of-unit test may suggest) or if they are
able to infer meaning from a text (which is what a teacher question-and-answer
exchange may reveal). Instead, decisions also are made about whether or not a stu-
dent requires more intensive supports (i.e., does the current level of support meet
the students’ needs?). To make valid decisions, the progress monitoring tools must
meet the following criteria: (a) brief and efficient, (b) measure general outcomes
or discrete skills, (c) have strong psychometric properties, and (d) be sensitive to
growth over time (Hosp et al. 2006; Hosp 2008).
CBM is most often used to measure basic academic skill growth in MTSS. For
behavior, schools often use office discipline referrals, direct observations of behav-
ior, and daily ratings of behavior to monitor progress (Horner et al. 2005). For high-
stakes decisions, the measures used should have strong technical properties. Schools
will have to balance the properties of the measures used with the importance of the
decision being made when making changes within the MTSS framework.
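One way to picture the progress monitoring decision is as a comparison between a student's observed rate of growth and the rate needed to reach a goal by a target date. The Python sketch below illustrates that comparison with invented scores, an invented goal, and a simplified decision rule; schools use their own goals and decision rules (e.g., consecutive points below an aimline).

    # Hypothetical sketch: is weekly CBM progress on track toward a goal?
    # Scores, goal, and decision rule are invented for illustration.
    from statistics import mean

    weeks = [1, 2, 3, 4, 5, 6]
    wcpm = [28, 30, 29, 33, 34, 37]           # words correct per minute each week

    # Ordinary least-squares slope: observed growth per week
    xbar, ybar = mean(weeks), mean(wcpm)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(weeks, wcpm))
             / sum((x - xbar) ** 2 for x in weeks))

    goal, goal_week, baseline = 52, 12, 28
    needed = (goal - baseline) / goal_week    # growth per week required to reach the goal

    if slope >= needed:
        print(f"On track ({slope:.1f} vs. {needed:.1f} wcpm/week): continue the plan.")
    else:
        print(f"Below the aimline ({slope:.1f} vs. {needed:.1f} wcpm/week): "
              "check fidelity, then consider adjusting the plan.")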
Fidelity  Measuring fidelity ensures that the intervention or instructional plan is
being implemented as intended. When considering measurement of the effective-
ness of an intervention (i.e., progress monitoring), schools also must measure fide-
lity, also known as treatment integrity (Wilkinson 2006). This evaluation is critical
because it affects the logic behind decisions about student growth.
To illustrate using an analogy, imagine you set a goal to lose weight. You plan to
lose 4 lbs in 2 weeks, and your plan is to exercise three times each week and avoid
eating dessert during dinner. After 2 weeks, you weigh yourself and discover you
have only lost 2 lbs. It would be easy to blame the diet for the insufficient weight
loss, but without information about fidelity (i.e., if you actually followed your diet
as intended), you cannot blame the diet yet. Imagine you measured your adherence to your diet and realized you only exercised twice each week and ate dessert three times each week. Although that chocolate cake is delicious, you did not follow your plan, so you have to improve fidelity and redo your diet. You oblige with a focus on sticking to your diet, and after two more weeks of following your plan, you discover that you still only lost 2 lbs and have not reached your goal of 4 lbs in 2 weeks. At this point, you can safely conclude that the diet is not intense enough
to reach your goal. You followed the diet as planned but did not reach your goal, so
now it is time for a more intensive diet.
Whether or not we measure fidelity leads us down two different paths of decision making. Look at Fig. 3.3. When faced with having not met a goal, we must first decide whether fidelity was good, that is, whether we followed the plan as intended. If
yes (the left side of Fig. 3.3), then it makes sense that we conclude the diet did not
work. We did not meet our goal, yet we followed the diet perfectly; therefore, the
diet must not have been intense enough (we can blame the diet because we actually
followed it).
Fig. 3.3   Decision making based on fidelity results

However, there will be times when our implementation of a diet is poor and
times when we have not tracked our behavior. As a result, we cannot truly make any
conclusions about the diet (right side of Fig. 3.3). It would be tempting to look at the
results (i.e., that the goal was not met) and to jump to the decision to try a new diet
(in Fig. 3.3, starting at the right side of the diagram and jumping to the left side pre-
maturely). However, without information on fidelity, it is not logical to make that
decision. If fidelity is poor, or if fidelity data are not available, then the only logical
decision pathway that can be followed is to improve fidelity and try again (the right
side of the diagram in Fig. 3.3). Measuring fidelity ensures that (a) we did what we
had planned and (b) we make logical decisions considering all important informa-
tion (Wolery 2011). Table 3.3 provides a summary of the differences between the
tiers for instruction and assessment.

Table 3.3   Defining the tiers in MTSS

Tier 1
Behavior
  Instruction: 3–5 positively stated expectations; teaching and reinforcement of those expectations
  Assessment: Office discipline referrals; attendance
  Group size: Whole school; taught in various formats (e.g., assembly, in classroom, etc.)
  Frequency and duration: Initial teaching; reteaching and recognition provided throughout the school year; instructional boosters provided as needed, based on data
Academics
  Instruction: Differentiated evidence-based core curriculum; effective instructional practices
  Assessment: CBM, classroom-based assessment; local and state assessments
  Group size: Whole and small group
  Frequency and duration: Daily; entire school year

Tier 2
Behavior
  Instruction: Targeted group interventions to supplement Tier 1 with the goal of meeting schoolwide expectations
  Assessment: Daily behavior tracking
  Group size: Small group or 1:1; designed to reach groups of students, but not always delivered in a group setting
  Frequency and duration: Occurs daily or weekly, depending on intervention; duration determined by student's progress
Academics
  Instruction: Targeted small-group instruction to supplement Tier 1 with the goal of meeting grade-level expectations
  Assessment: Biweekly progress monitoring with CBM
  Group size: 6–8 students; 5–15 at secondary level
  Frequency and duration: At least twice/week at elementary level; daily at secondary level; average of 10–12 weeks, but varies depending on student's progress

Tier 3
Behavior
  Instruction: Intensive, individually designed instruction that includes small-group, 1:1, and wraparound services
  Assessment: Daily behavior tracking
  Group size: 1:1 or small group, depending on intervention
  Frequency and duration: Varies by student, but usually occurs daily; longer duration relative to Tier 2 (e.g., > 20 weeks)
Academics
  Instruction: Individualized, intensive, small-group instruction
  Assessment: Weekly progress monitoring with CBM
  Group size: Up to 3 in a group
  Frequency and duration: Daily; depends on student's progress, but longer relative to Tier 2

The Assessment entries list some of the more common assessments used. It is not intended to be an exhaustive or exclusive list.

3.6 The PSM

The third and final component of MTSS is the use of the PSM (Good et al. 2002; Reschly 2008; Shinn 2008; Tilly 2008). The PSM is a four-phase heuristic used to define and solve problems. The four phases are: (a) Problem Identification, (b) Problem Analysis, (c) Plan Implementation, and (d) Plan Evaluation (see Fig. 3.4). (Although some authors refer to the phases of the PSM as "steps," the word "step" is used when describing the specific actions within the CBE Process later in this book.)

Fig. 3.4   Problem-solving model

As educators, we have all heard concerns about students. The student can't read! He won't write anything! She won't sit still! To avoid vague and ill-defined problems, the first phase of the PSM, Problem Identification, asks "What is the problem?" Problems are described in observable and measurable terms and are defined as the gap between the expected behavior and observed behavior. As a result, "He
can’t read” becomes “He is reading 30 words correctly per minute out of a third-
grade passage and he should be reading at least 70 words per minute in that pas-
sage.” “She won’t sit still” becomes “She stays in her seat 20 % of the class time
and she should be in her seat at least 80 % of the time.” Defining problems with
quantifiable terms allows two things: (a) a goal can be set and clarity about progress
toward that goal is easily discerned and (b) the magnitude of the problem is quanti-
fied, allowing educators to determine if the problem is severe enough to warrant
further investigation.
Once a problem is defined and determined to be severe enough to warrant action, Phase 2, Problem Analysis, is conducted. The question "Why is the problem occurring?" is asked, and during this phase of the PSM, diagnostic assessments are used to answer that question (Shinn 2008; Tilly 2008). As discussed in the assessment
section previously, schools assess the skills the student has and does not have, as
well as the instruction, curriculum, and environmental factors (i.e., alterable vari-
ables) that contribute to the problem. The goal is to identify alterable variables
that can be manipulated to improve student achievement/outcomes. Although we
do not want to discount the effect learner variables can have on learning, the goal of
problem analysis is to identify variables that can be adjusted. Unalterable, uncon-
trollable variables are assessed to the extent that they can influence the treatment plan, such as knowing that a student has a vision disability that requires enlarged font
(Christ 2008).
The third phase of the PSM, Plan Implementation, involves designing and im-
plementing an instructional plan based on the findings from the problem analysis
phase. The question “What can be done about the problem?” is posed. During this
phase, a plan to correct the problem is developed with a focus on matching an ap-
propriate intervention to the needs of the student. Additionally, goals are set, and strategies for measuring progress toward those goals and for measuring fidelity of plan implementation are determined.
Finally, phase 4, Plan Evaluation, involves using data identified in phase 3 to (a) measure progress and (b) measure fidelity to monitor and evaluate the effects of
and adherence to the plan. The question asked in this phase is, "Did the plan work?"
When the student makes progress and fidelity is good, discussions can be centered
on the longevity of the plan and determining when supports should be gradually
faded. When progress is poor, it is necessary to determine the strength of fidelity.
If fidelity is weak, then next steps focus on improving fidelity or “redoing” the
plan (as illustrated by the diet analogy). If fidelity is good, then next steps focus on
improving the instructional plan, which may involve cycling back through the phases of the PSM to ensure the problem is accurately defined, the instructional environ-
ment is understood, and/or that the plan targets the appropriate skills with enough
intensity.
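The decision logic of Plan Evaluation, which mirrors the fidelity logic in Fig. 3.3, can be restated as a short conditional. The Python sketch below is only an illustration of that reasoning; the function name, inputs, and wording of the recommendations are ours, not a published decision protocol.

    # Sketch of the Plan Evaluation logic described above. 'None' for
    # fidelity_adequate means fidelity data were not collected.
    def next_step(goal_met, fidelity_adequate):
        if goal_met and fidelity_adequate:
            return "Discuss longevity of the plan and when to fade supports."
        if not fidelity_adequate:   # poor fidelity, or no fidelity data (None)
            return "Improve fidelity and redo the plan before judging it."
        return "Plan was followed but goal not met: intensify or revise the plan."

    print(next_step(goal_met=False, fidelity_adequate=None))   # no fidelity data
    print(next_step(goal_met=False, fidelity_adequate=True))   # followed, but not working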

3.6.1 Systems-Level Problem Solving

In discussing the PSM, we end on the difference between systems-level problem solving and problem solving for individual students. What separates MTSS from
other educational programs or movements is its focus on the entire school system
and its proactive use of data to identify students in need of additional support (vs
only relying on teacher-referral; Barnes and Harlacher 2008). It is a schoolwide
reform model that seeks to correct many of the concerns with public education
described in Chapter 2. In MTSS, educators follow the phases of the PSM for the
entire school system. In a healthy system, at least 80 % of students have their needs
met through core instruction at Tier 1 and no more than 20 % of students require
additional support. If more than 20 % of students require additional support (or less
than 80 % are meeting standards), then the focus is on improving Tier 1 to meet
the needs of more students, rather than adding supplemental interventions for large
numbers of students (Gibbons and Silberglitt 2008). Examining the health of the
system is a critical component of MTSS (Barnes and Harlacher 2008; Gibbons and
Silberglitt 2008; Harn et al. 2007; Jimerson et al. 2007).
The purpose of this chapter is not to outline every detail of systems-level problem solving, but instead to provide an overview. The phases of this process are the same as described previously, but instead of applying the PSM to an individual student, it
is applied to the student population in each grade level. In Problem Identification,
all students are screened and results are compared to a standard. For example, if all students in the school are screened using a general outcome reading measure, the question can be asked, "Did at least 80 % of students meet benchmark?" (Good et al. 2002). If the
answer is yes, then the school can conclude that their system is healthy in reading. If
fewer than 80 % of students met benchmark, the focus would be on improving Tier
1 reading (Gibbons and Silberglitt 2008). If only 55 % of students met benchmark,
then it could be concluded that the current instructional program does not match
the needs of the current population of students. MTSS is fluid and contextual and
ongoing assessment is critical. As students change, so must the school’s instruction
and level of support.
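The Problem Identification question at the systems level is easy to state concretely. The Python sketch below checks whether at least 80 % of screened students met benchmark; the scores and benchmark are invented for illustration, and schools would use the benchmarks published for their screener.

    # Hypothetical sketch of the systems-level screening question:
    # "Did at least 80 % of students meet benchmark?" Scores are invented.
    def tier1_is_healthy(scores, benchmark, criterion=0.80):
        met = sum(score >= benchmark for score in scores)
        return met / len(scores) >= criterion

    grade3_orf = [88, 72, 95, 40, 66, 105, 58, 77, 90, 35]   # fall screening scores
    print(tier1_is_healthy(grade3_orf, benchmark=70))
    # False: only 6 of 10 met benchmark, so the focus shifts to improving Tier 1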
Fig. 3.5   Complete MTSS model illustrating systems-level problem solving. Note: Monitoring and
assessment can vary between school sites

Problem analysis is conducted to understand why the problem exists, and for
systems, these reasons can be extensive. Lack of alignment of the curriculum be-
tween grades, use of a non–research-based program, and an uncoordinated system
for additional support are just a few. During this stage, schools analyze the assess-
ments they use, the type and quality of instruction, the curriculum and standards,
and the environment within entire grade levels (Gibbons and Silberglitt 2008; New-
mann et al. 2001; White et al. 2012). The focus is on how things look as a whole
(Gibbons and Silberglitt 2008; Tilly 2008).
Plan implementation still involves identifying a plan of action, but with systems-
level problem solving, obviously that plan can be comprehensive and complex. Pro-
fessional development, purchase and training on a new program, alignment of the
curriculum and learning objectives, and use of new assessments may all be a part
of the school’s plan (e.g., Griffin and Hatterdorf 2010; Griffiths et al. 2006; White
et  al. 2012). Ongoing evaluation of the outcomes and fidelity of the plan is still
a part of this phase of the PSM. Measuring outcomes can be done through periodic benchmarking (which occurs 3–4 times per year), examining scores on district interim
assessments, and considering the results of summative assessments. Fidelity is mea-
sured through either directly observing instruction or through self-report of those
implementing the instruction (Greenwood et al. 2008). The phases of the PSM are followed, and just as with individual problem solving, if the goal is not met, the phases are cycled through again until the goal is met. The entire process
of systems-level problem solving is illustrated in Fig. 3.5.
Fig. 3.6   Four key elements of MTSS

3.7 Four Elements of MTSS

Related to systems-level problem solving and schoolwide reform are four key elements. MTSS is about creating sustainable systems change, and a focus on these four elements is useful for creating such change. The elements are: (a) outcomes, (b) practices, (c) data, and (d) systems. These four elements were originally described in relation to PBIS (Sugai and Horner 2006).
According to Sugai and Horner (2006), schools first identify relevant social and
behavioral goals that they want to accomplish with the adoption of MTSS. These
outcomes must be relevant, data-based, measurable, and valued by staff, parents,
and students. Next, practices that are evidence-based, doable, and practical are iden-
tified to accomplish desired outcomes. Third, schools identify what data they will
use to evaluate the impact of their interventions and instruction. The data should
be efficient to collect, easily accessible, and examined at regular intervals. Finally,
the school adopts or creates systems that support the sustained use of MTSS. Such
systems include allocating funding, identifying personnel and positions for MTSS,
and gathering district or political support.
These elements are not separate entities, as all four interact (see Fig. 3.6). For
example, data are used to define the outcomes and to monitor the impact of the ad-
opted practices. Also, the adjustment of one system may impact another system, and
the designation of desired outcomes certainly influences what practices are used.
As schools adopt MTSS, a focus on these four elements is important for sustainability.
Next, we share the developmental nature of systems-change before discussing how
curriculum-based evaluation fits in MTSS.
3.8 Developmental Process of MTSS

The process of implementing MTSS is not a quick task for schools. It takes on aver-
age 3–5 years to implement, with larger systems taking anywhere from 6–10 years
(Fixsen et al. 2007; NASDSE 2005). MTSS is both a shift in practices that may
require new skills for some educators and a conceptual shift in how problems are
identified and addressed (Ikeda et al. 2002). Because of these shifts, it is important for schools to view MTSS as a developmental process that takes time. The National
Association of State Directors of Special Education has provided blueprints to assist
schools in their implementation (NASDSE 2008a, 2008b). They define three phases
of implementation:
1. Consensus building: Schools build knowledge of what MTSS is, the concepts
behind it, and why the model is being used, taught, and discussed.
a. Key question: Why should we implement MTSS?
2. Infrastructure development: Schools examine the components that currently are
in place and those that need to be implemented. The model and necessary steps
for implementation are outlined.
a. Key question: What should our model look like?
3. Implementation: The main focus during this phase is full implementation of the
model. Schools refine their model, build sustainability, and create a new culture
of MTSS as “business as usual.”
a. Key question: Is the model working well and are we getting the results we
want?
MTSS is a contextual model that requires buy-in and leadership for success
(Harlacher and Siler 2011). MTSS is not as simple as just putting these three com-
ponents in place. It requires patience, coordination, and understanding for schools
to be successful (Ikeda et al. 2002; NASDSE 2008a).

3.9 MTSS as the Backdrop for Curriculum-Based Evaluation

MTSS and its accompanying principles and features provide the setting and frame-
work for use of CBE. MTSS has an intricate connection between assessment and
instruction. Assessment data drive instruction and ensure that what is taught is ef-
fective. Schools need different types of assessments generating a variety of data
for the range of decisions made in the PSM. CBE is one such method. Describing
MTSS in detail helps illustrate that CBE is part of a culture in schools in which
assessment and instruction are considered equally important to problem solving
(see Fig. 3.2). In MTSS, schools move away from an exclusive focus on intrachild
variables and labels for services and instead focus on problem solving, instructional
variables, and low-inference assessments (low-inference means that the results of
the assessment are very close to what is measured and rely on explicit behaviors,
as opposed to high-inference assessments which rely on assumptions between the
data and interpretation of the data and on theoretical constructs; Christ 2008). A
shift toward problem solving and a school system that embraces collaboration and
use of clearly defined effective practices (Reschly 2008) will result in the need for
problem-solving assessments such as CBE.

3.10 Summary and Key Points

MTSS is a schoolwide model of service delivery. It has distinct principles that fo-
cus on prevention, use of effective practices and data, and a belief that all students
can learn to grade-level with the right amount of support. The three components of
MTSS are: (a) multiple and increasingly intensive tiers of instruction, (b) a compre-
hensive assessment system, and (c) use of the PSM. The implementation of MTSS
is a developmental process that involves three distinct stages, and there are four key
elements that MTSS embodies in order to create sustainability. What makes MTSS
unique is its focus on systems-level problem solving and its fluid, contextual nature.

Key Points
• MTSS is a model of service delivery that matches an appropriate level of
support to the needs of students as determined with data.
• MTSS is based on the belief that all students can learn, given the right level
of support.
• MTSS is unique because of the focus on systems-level problem solving,
use of data to identify students for additional support, and its fluid, context-
ual nature.
• MTSS has three distinct components: instruction, assessment, and the pro-
blem-solving model.
• MTSS uses four types of assessments (screening, diagnostic, formative,
fidelity) to match student needs with instruction.
Chapter 4
What is Curriculum-Based Evaluation?

4.1 Chapter Preview

This chapter will describe curriculum-based evaluation (CBE), its philosophical background, and its assumptions. The chapter also provides an overview of the
development of reading and two conceptual frameworks: the review, interview,
observation, and testing (RIOT)/instruction, curriculum, environment, and learner
(ICEL) assessment framework and the Instructional Hierarchy (IH).

4.2 Definition of CBE

CBE is a systematic assessment method that supports problem solving (Howell et al. 2008; Howell and Nolet 2000). It is a process in which the evaluator pres-
ents a series of tasks and activities to a student to identify what skills the student
is missing, what needs to be taught, and how it should be taught. The big idea of CBE is to identify what skills to teach and how to teach them; the goal is to gather data that directly inform teaching practices. The evalu-
ator progresses through a series of questions that are answered using curriculum-
based measurement (CBM) and other assessments. CBE is not one single test,
nor is it designed to answer questions about eligibility or labels. Instead, it is
a process that generates instructionally relevant information that is immediately
interpretable and beneficial to those who work directly with the student (Howell
and Nolet 2000).

4.2.1 CBA vs CBM vs CBE

Before discussing the assumptions behind CBE, the differences between curriculum-based assessment (CBA), CBM, and CBE are presented. CBA is an umbrella term that
describes the use of assessments to measure where the student is in relation to the
curriculum they are being taught. This can range from administering an end-of-unit quiz or a mathematics fact quiz to analyzing the student's homework and comparing the results to objectives in the curriculum (Howell and Nolet 2000).
Underneath the CBA umbrella is CBM, which is a standardized method of
assessment designed to provide educators information on a student’s level of per-
formance. These general outcome measures are brief, efficient, and technically
adequate (i.e., reliable and valid) measures that assess basic skills (Deno 2003).
When multiple CBM probes are administered over time, the resulting data can be
used to monitor student growth and in turn, make decisions about the effectiveness
of instruction. CBM is currently used to measure basic skills in the areas of reading,
writing, mathematics, and spelling (Hosp et al. 2006).
The CBE Process uses CBM, CBA, and decision rules to pinpoint skills that
require additional instruction (Howell et al. 2008; Howell and Nolet 2000).

4.3 Assumptions Behind CBE

Four assumptions about learning and about the information required for instructional decisions underlie the CBE Process (Howell and Nolet 2000).

4.3.1 1. Problems are Defined as the Gap Between Expected Behavior and Observed Behavior

The first assumption behind the CBE Process is about how problems are concep-
tualized. Problems are defined as the gap between the expected behavior and the
observed behavior using observable and measureable terms (see Fig.  4.1). (We
use the formula: P = E − O, where P = problem, E = expected behavior, and O = ob-
served behavior.) This achieves several things. First, it makes the skill or behav-
ior concrete and clear to everyone. For example, “trouble in reading” becomes
“accurate but slow reading” or “difficulty decoding multisyllabic words.” Sec-
ond, it allows us to quantify the severity of the problem. For example, we may
say a child has a problem in reading. From there, we define it in observable and
measurable terms, such as “The student is able to read 18 words correctly in one
minute out of a grade-level passage, and he should be able to read at least 34
correctly in one minute out of a grade-level passage.” Describing the problem in
these terms quantifies the severity of the problem and allows
educators to determine if the originally identified problem is severe enough to
actually be considered a problem (Shinn 2008). In this example, the gap of the
problem is 16 words correctly per minute and we can implement an intervention
to close that gap.
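Because the definition is arithmetic, it can be written out directly. The short Python sketch below applies the P = E − O formula to the example above; the numbers are simply those from the example.

    # The P = E - O formula applied to the example above: a problem is the
    # gap between expected and observed performance on a specific skill.
    expected = 34   # words correct per minute expected on the grade-level passage
    observed = 18   # words correct per minute the student actually read

    problem = expected - observed
    print(f"Gap: {problem} words correct per minute")   # Gap: 16 words correct per minute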
Fig. 4.1   Illustration of how problems are defined

Quantifying a problem in this manner makes goal setting an easy and straight-
forward task. If a student needs to meet an expectation, the goal is set to reach
that expectation (Shapiro 2008). Finally, if the problem is defined as the gap be-
tween the expected and observed performance for a specific skill, we know what
the intervention will target. Consider these two examples. If a teacher states that
a child struggles with reading comprehension, we have little information about
the magnitude or type of problem. Goal setting, intervention development, and
evaluation would be difficult. If, however, the problem is defined as “Difficulty
identifying the main character of a story and two details about the character,”
conceptualization of the problem is much more manageable. In addition, goal set-
ting (i.e., identify the character and two details) and intervention development are
straightforward.

4.3.2 2. Learning is an Interaction

There are many theories about how children learn. Some ascribe to a social/Vygotskian theory, which suggests that students need social interaction and discussion to develop skills, or to Kohlberg's theory of moral development or Piaget's cognitive theory, which suggest that direct interaction with or conflict over problems allows for development.
Others believe in observational learning or the importance of a supportive, nurtur-
ing environment to support growth and development (i.e., family systems theory). It
is important to acknowledge that learning is viewed in different ways. When engag-
ing in problem solving in a school setting, however, learning is considered to be an
interaction between (a) the learner, (b) the academic and behavioral curriculum, and
(c) the instructional environment.
Learning is not the sole result of one variable in a student’s life; instead, it is
the interaction among these three components that influences the extent to which a
student becomes proficient with content (see Fig. 4.2). The learner variable refers
to the student, and includes the skills and background knowledge that the student
has acquired as well as preexisting conditions or experiences the student brings
with him or her. The academic and behavioral curriculum variable refers to the learning objectives for the current year and their vertical alignment over time. The instructional environment variable incorporates the setting and instructional strategies. Size of classroom, use of contingent reinforcement, and pacing of instruction are just a few variables that fall under instructional environment. Table 4.1 lists several examples that fall under each variable.

Fig. 4.2   Learning interaction

Table 4.1   Examples within each variable of the learning triad

Learner
  • Background knowledge
  • Hearing and vision
  • Level of alertness
  • Ability to be motivated by peer attention

Academic and behavioral curriculum
  • Number of behavioral expectations
  • Learning objectives
  • Time specified for each learning objective
  • Focus of curriculum between classrooms

Instructional environment
  • Size of group
  • Pacing of instruction
  • Amount of corrective feedback provided
  • Level of explicitness
Students are not passive recipients of the environment (Christ 2008). They are
able to influence their surroundings, which in turn influences the reactions and in-
teractions others have with them (Braden and Shaw 2009; McDonald Connor et al.
2009; McIntosh et al. 2008). Consider an example of a student who comes from a
stable, two-parent home. Since she was little, this particular student was encour-
aged to explore her curiosity and was provided with numerous toys that supported
the development of mathematical and reasoning skills. She enters middle school
already knowing advanced algebra. As a result of her background knowledge, the
teacher provides more difficult material (a curriculum variable) and affords the stu-
dent more one-on-one attention. Work that is taken home is met with care and atten-
tion by her parents to answer questions that come up. The teacher enjoys seeing the
student progress, so more time is afforded to the student (an environment variable).
As the student is exposed to more curriculum and instruction, she gains more skills
relative to other students. The reciprocal nature of the interaction among the three parts results in
higher levels of learning and more positive outcomes for the student. This example
illustrates the interaction between many variables to influence student learning.
By conceptualizing learning as an interaction between these three variables, a
more comprehensive and accurate view of learning is achieved. Educators are em-
powered to have control over variables that are alterable and more likely to influ-
ence learning.

4.3.3 3. Background Knowledge is Critical

There are many reasons why students have difficulty in learning. An assumption
behind CBE is that when necessary background knowledge is missing, students
will struggle to perform a task (Braden and Shaw 2009; McDonald Connor et al.
2009). Background knowledge is all the knowledge a student has accumulated; in
other words, it is what the student already knows (Howell and Nolet 2000; Coyne
et al. 2010).
Read the following paragraph:
A newspaper is better than a magazine. A seashore is a better place than the street. At first
it is better to run than to walk. You may have to try several times. It takes some skill but is
easy to learn. Even young children can enjoy it. Once successful, complications are mini-
mal. Birds seldom get too close. Rain, however, soaks in very fast. Too many people doing
the same thing can also cause problems. One needs lots of room. If there are no complica-
tions, it can be very peaceful. A rock will serve as an anchor. If things break loose from it,
however, you will not get a second chance (Bolt 2011).

Without background knowledge, the paragraph does not make much sense. If you
were asked to recall information from that paragraph later, you likely would remem-
ber very little of it. If your background knowledge is activated before reading that
paragraph with the statement “the context is kite flying,” your connection with the
paragraph would be dramatically different. Reread the paragraph again to see how
your preexisting knowledge about kites changes its meaning. By linking the para-
graph to background knowledge, your recall at a later time will be higher (Howell
et al. 2008).
If information can be linked to background knowledge, learning is more likely
to occur. If the information is not connected to background knowledge, new skills
become disjointed bits of information and learning decreases (Hattie 2009; Howell
and Nolet 2000). In fact, McDonald Connor et al. (2009) conducted a randomized,
controlled study and found that changing the focus and amount of instruction pro-
vided to students based on their background knowledge led to greater improvements
in first graders’ reading comprehension compared to students who did not receive
adjusted instruction. They also discovered that the closer the teachers followed the instructional recommendations provided to them, the more progress students made in reading. Their results provide evidence for an interaction effect between a child's background knowledge and the type of instruction the student receives.

Table 4.2   Examples of alterable and unalterable variables

Alterable variables
  • Group size
  • Precorrection strategies
  • Opportunities to respond
  • Reinforcement strategies
  • Error correction procedures
  • Reading program
  • Learning objectives

Unalterable variables
  • Family structure
  • Presence of a disability
  • Gender
  • Sibling relationships
  • Availability of parent
  • Previous history
  • Race/Ethnicity
CBE is used to identify what background knowledge students are missing; if
the child cannot perform a task, it is because they do not have the knowledge to
do so. This assumption is low inference, meaning that the jump between the results
of the assessment and the conclusion are very small (see Inset 4.1). If a student’s
performance on a reading assignment does not meet expectation, the conclusion is
simply that they have not mastered the content being measured. With CBE, low-
inference assessments are used and low-inference conclusions are made. This keeps
conclusions practical and avoids global, rigid conclusions about the child’s abilities
or “potential” (Christ 2008). Focusing on background knowledge ensures a focus
on what teachers can teach (cf. Howell and Nolet 2000). When assumptions about
learning shift to something outside of the school’s purview, instructional utility is
lost. After the missing background knowledge and skills are identified, the focus
stays on alterable variables, which is the next assumption.

4.3.4 4. Focusing on Alterable Variables Leads to Better Instructional Recommendations

After identifying the skills to target during instruction, CBE focuses on variables
that can be altered and manipulated. Alterable variables are ones that an educa-
tor can directly influence, such as the pacing of a lesson, the size of the group,
and the amount of instructional time for a particular subject. Alterable variables
are in contrast to unalterable variables, which are outside the educator’s control
and cannot necessarily be influenced by the educator (Braden and Shaw 2009;
McDonald Connor et  al. 2009). A student’s family structure, the presence of a
disability, birth order, gender, and race are just a few examples of unalterable
variables (see Table 4.2).
By examining alterable variables, educators can manipulate those variables to influence and take charge of student learning. A focus on unalterable vari-
ables results in wasting valuable educator time admiring the problem. CBE is a tool
that allows educators to change learning trajectories.
Each assessment should have a purpose or a question attached to it, and that
question should generate an answer that directly informs teaching.
Summary of Assumptions Behind CBE  To summarize, CBE is an assessment
method that focuses on making practical recommendations that are directly rele-
vant to instruction and contribute to problem solving. Problems are defined as
the gap between expected performance and observed performance. When this
gap is quantified, it provides a goal for instruction. Learning is considered an
interaction among three variables: learner, curriculum, and environment. Focu-
sing on alterable variables can lead to practical recommendations that improve
student learning. Understanding difficulty in learning as the result of missing back-
ground knowledge empowers teachers and prevents placing a limit on student
learning.

4.4 The CBE Process

The CBE Process involves the administration of general-level or global measures to identify areas in need of further assessment. This step is referred to as survey-
level assessment. Once an area is identified as not meeting expectations, a more
in-depth approach called specific-level assessment is conducted (Howell et  al.
2008). These two steps are interrelated: survey level is used to pinpoint broad areas of concern and specific level is used to explore those areas in depth. The overall
concept is to cast a wide net and then focus on specific skills. Through survey-
level and specific-level assessments, educators develop hypotheses about why stu-
dents are not meeting expectations. The CBE Process walks educators through a
series of questions to determine if the hypotheses about the student’s skills are true
or false (Howell et  al. 2008; Howell and Nolet 2000). The CBE Process is dis-
cussed in detail in the next chapter. First, two conceptual frameworks to keep in
mind when conducting CBE will be reviewed: the RIOT/ICEL assessment frame-
work and the Instructional Hierarchy (IH). The chapter ends with a brief overview
of the development of reading.

4.5 RIOT/ICEL and Instructional Hierarchy

When conducting CBE, the evaluator keeps in mind two conceptual frameworks:
(a) the RIOT and ICEL framework, which provides the backdrop of the CBE Pro-
cess and (b) the IH, which organizes recommendations within a skill-acquisition
hierarchy.
Inset 4.1 What is the Difference Between "High-Inference Assessments" and "Low-Inference Assessments"?
Low-inference assessments are those that have a low “jump” between the
results of the test and the interpretation of those results. In other words, a
high level of interpretation is not required and the reasons for the results are
straightforward and clear. Conclusions are drawn based on explicit behaviors
and the level of interpretation is low.
In contrast to low-inference assessments are high-inference assessments.
High-inference assessments typically are not direct measures. Measures that
require selection-type responses (e.g., read silently and answer multiple-
choice questions) require more inference about how a student arrives at a re-
sponse than those requiring production-type responses (e.g., read aloud out of
a grade-level passage). On other assessments requiring high levels of infer-
ence, observed responses are considered symptoms or behaviors of innate or
theoretical constructs, which are more abstract and nebulous (Hosp 2008).
Consequently, the jump between the results and the interpretation can be quite large, and the probability of an inaccurate inference increases. With low-infer-
ence assessments, more concrete skills are measured and therefore the risk of
making a faulty inference is lower.

4.5.1 RIOT/ICEL

RIOT and ICEL are acronyms for types of assessments and instructional domains to
analyze in problem solving. The RIOT and ICEL matrix is an organizational frame-
work that guides a thorough problem analysis (Christ 2008; Howell and Nolet 2000).
Instruction is the “how”; how new skills are taught and reinforced for the student.
Curriculum is the “what”; what is being taught. Environment is the “where”; where
the instruction takes place. Finally, learner is the “who”; who is interacting with the
instruction, curriculum, and environment. Review refers to examining existing data such as permanent products, attendance records, and lesson plans. Interviews can be
structured, semistructured, or unstructured and are methods of assessment that involve
question–answer formats. Observations involve directly observing the interaction be-
tween the instruction, curriculum, environment, and learner. Finally, testing refers to
the administration of tests (both formal and informal) to obtain information about the
ICEL.
Using the RIOT and ICEL framework ensures consideration of all variables con-
tributing to a problem. Table 4.3 displays examples of areas assessed and assess-
ments used for problem analysis in the RIOT and ICEL framework. This framework
captures the multifaceted nature of learning, which is the interaction among the
learner, curriculum, and the instructional environment (see Fig. 4.2).
Table 4.3   RIOT and ICEL matrix with sources of information and examples of variables to assess. (Adapted from Christ 2008 and Howell and Nolet 2000)

Instruction
  Review:
  • Permanent products to determine prior strategies used and their effects
  • Lesson plans to determine instructional demands (e.g., difficulty, differentiation, response type, etc.)
  Interview:
  • Teacher's use/intended use of instructional strategies and focus of instruction
  • Teacher for use of reinforcement
  • Peers and student for perception of pace, activities, engagement, etc.
  Observe:
  • Direct observation, anecdotal observations, checklists to determine effective teaching practices and factors (e.g., pacing, grouping, and explicitness)
  • Direct observation of antecedents and consequences of behaviors
  • Observation of expectations of teacher, demands of task/activities
  Test:
  • Administer scales or checklists that measure effective instructional practices
  • Manipulate instructional variables to see effect on student's performance (e.g., repeated practice, increase in opportunities to respond, etc.)

Curriculum
  Review:
  • Lesson plans/learning objectives relative to student's skills
  • Permanent products to determine instructional demands in curriculum (e.g., scope and sequence, prerequisite skills, massed vs distributed practice, amount of review, etc.)
  Interview:
  • Teachers, LEA for expectations of pacing and coverage of the curriculum
  • Teacher(s), LEA for philosophical orientation of the curriculum (e.g., direct instruction, whole language, phonics-based, etc.)
  • Teacher(s) for organization, clarity, content, difficulty of curriculum
  Observe:
  • Permanent products and plans to determine alignment of assignments with objectives
  • Clarity of learning objectives
  • Observe alignment of objectives/content between classroom, settings, grades, etc.
  Test:
  • Determine the readability of texts used
  • Manipulate difficulty of material or manner in which it is presented to see effect on student performance

Environment
  Review:
  • Lesson plans to determine behavioral expectations taught
  • School rules and policies to assess climate, rules, and routine
  • Seating charts
  Interview:
  • Teacher to describe behavioral expectations, rules, and routines (i.e., are rules situationally and developmentally appropriate and clear?)
  • Student, peers to describe climate, rules, routines, etc.
  Observe:
  • Observe interactions among peers and climate of classroom, school
  • Observe physical environment (seating, lighting, noise, etc.)
  Test:
  • Administer classroom environment scales
  • Compare student's performance on assessments in different settings (e.g., task demands, use of reinforcement, different distractions or seating, etc.)

Learner
  Review:
  • Permanent products, gradebook to compare performance to peers
  • Cumulative records to assess previous history, health history, attendance, district testing results
  • Permanent products for previous response to instruction and change in skills
  Interview:
  • Student to understand perception of skills and difficulties
  • Teachers, personnel for perception of problem (intensity, significance, nature, etc.); also for experience and observations from working with student
  Observe:
  • Direct observation of engagement and target behaviors or skills
  • Observe ability to complete tasks/activities
  • Record nature and dimensions of behavior (frequency, duration, intensity, latency)
  Test:
  • Administer a variety of assessments to determine student's performance/skills
  • Use functional behavior assessment or functional analysis
  • Use self-report (checklists, rating scales, inventories, etc.)
4.5.2  Instructional Hierarchy

The IH is a framework that outlines the development of skills and corresponding instructional recommendations (Daly et al. 1996; Haring et al. 1978). As evaluators use CBE, they can determine where the student's skills are on a learning hierarchy,
which describes the development of a skill from initial acquisition to proficiency
(Daly et al. 1996; Haring et al. 1978). Once the level of a skill is identified, a par-
ticular instructional recommendation can be made. This consideration can increase
the likelihood of identifying an appropriate match between the instructional rec-
ommendation and student needs. The consideration of the development of a skill
and its accompanying instructional recommendations creates a continuum called the Instructional Hierarchy. This continuum includes four stages: (a) acquisition, (b)
fluency, (c) generalization, and (d) adaptation (Haring et al. 1978).
During the acquisition stage, the student is just beginning to learn a skill. The
student is not fluent or accurate with the skill, and requires considerable support
from the teacher at this point. At this stage, the instructional focus is on modeling
the skill for the student and prompting the student to use the skill. For example, a
teacher may model how to decode a multisyllabic word and also may prompt the
student by providing the first sound of the word (Daly et  al. 1996). During this
stage, a sequence of scaffolded support and corrective feedback is provided (Haring
et al. 1978) with an emphasis on building accuracy in performing the skill in isola-
tion (Daly et al. 1996). Once the student can perform the skill accurately without
support, they move to the fluency stage.
In the fluency stage, the student is accurate with the skill, but performs it in a slow
and laborious manner. The instruction shifts from modeling the skill to providing op-
portunities for practice and reinforcement of the skill (Daly et al. 1996; Intervention
Central, n. d.). Students are reinforced for performing the skill accurately and fluent-
ly, and goals can be set for increasing fluency or reaching a certain standard (Haring
et al. 1978). During this stage, students move from accurately performing the skill in
isolation to gaining proficiency with performing the skill in context. Once students
perform the skill proficiently in context, they move to the generalization stage.
In the generalization stage, the student has mastered the skill and is able to per-
form it with fluency. The instructional focus shifts to using the skill in new settings
and with new material. Teachers provide modeling, opportunities to practice, and
reinforcement for using the skill in new contexts. Generalization may occur sponta-
neously, but if it is not observed, it should be programmed and planned for as part
of instruction (Daly et al. 1996). Once the student is able to perform the skill in new
contexts, and can do so at a sufficient rate, then the student moves to the adaptation
stage of the IH.
In the final stage, adaptation, the focus is on the student’s ability to modify the
skill for use in novel situations. Instruction focuses on providing opportunities to
modify and use the skill in new situations with relatively little support (Haring et al.
1978). The goal is for the student to apply the skill in novel settings considering
environmental demands (see Table 4.4).
Table 4.4   Description of the Instructional Hierarchy

Acquisition
  Definition: Initial learning of the skill; performance of the skill is laborious.
  Example: Decodes a list of r-controlled vowels with 40 % accuracy.
  Exit goal: Student performs the skill accurately with little support (at least in isolation).
  Student behaviors: Initial learning of the skill; inconsistent responding and performance.
  Instructional recommendations: Model and prompt (I do, We do, and You do).

Fluency
  Definition: Can perform the skill accurately, but not fluently.
  Example: Decodes a list of short vowel sounds with ≥ 95 % accuracy, but is slow while reading those words in a passage.
  Exit goal: Student performs the skill accurately and fluently; comparable to peers.
  Student behaviors: Performs the skill accurately, but is slow and laborious.
  Instructional recommendations: Reinforcement for reaching standards or improving score; practice and repetition.

Generalization
  Definition: Is proficient with the skill, but performance is limited to very few contexts (often only the instructional setting).
  Example: Reads at a rate similar to peers, but rate changes when presented with materials different from instructional materials.
  Exit goal: Can perform the skill across multiple settings and discriminates accurately when to use the skill.
  Student behaviors: Accurate and fluent in responding; does not perform the skill well in new settings or ones different from the instructional setting.
  Instructional recommendations: Model, practice, and reinforce across different settings and contexts.

Adaptation
  Definition: Can perform the skill in new contexts, but does not modify the skill in certain situations.
  Example: Can read a variety of materials, but is unable to apply phonics to foreign words.
  Exit goal: (Stage does not discontinue, so there is no exit goal.)
  Student behaviors: Accurate and fluent with the skill; can perform in novel settings; does not modify the skill to adapt to situations.
  Instructional recommendations: Provide opportunities for adaptation; identify "big ideas" of the skill(s).


Daly et al. (1996) summarize the IH:


As the learner is gaining a new skill, he or she will first acquire it. The learner then becomes
fluent in skill use. Next, he or she learns to generalize its use to novel contexts. Finally,
he or she adapts its use to modify the response as necessary according to novel demands
(p. 370).

To illustrate the stages of learning, recall the first time you drove a car. In the ac-
quisition stage, you performed turns too wide or narrow and accelerated, stopped,
and shifted gears in a jerky manner. With corrective feedback from your instructor,
your turns became more accurate and driving was smoother. With practice, your
driving in the family car became more fluent. Generalization occurred when you
drove different cars, and adaptation was achieved when you switched to a car with
an automatic transmission or drove a big moving truck. This example illustrates the
progression in skills and accompanying change in the focus of instruction. Each
stage of the IH has very specific instructional recommendations (Daly and Martens
1994; Intervention Central, n.d.). With CBE, the specific skill(s) that need to be
taught are identified, and the location in the IH where the skill currently is per-
formed is pinpointed, leading to clearer instructional recommendations (Daly et al.
1996; Daly and Martens 1994).
Providing teachers with clear indications of how developed a skill is can lead to
more practical recommendations. For example, Daly and Martens (1994) examined
three types of instructional recommendations with four students identified with
learning disabilities (average age of approximately 11 years). They found
that instruction focused on the first three stages of the IH yielded the greatest im-
provement in students' oral reading. This effect was moderated, however, by the students'
skill level. Students who read aloud with at least 80 % accuracy, and were
therefore considered to be in the fluency or generalization stage, benefitted the most
from the instruction. Students reading with less than 80 % accuracy (i.e., in the acquisition
stage) did not experience as much growth. This finding provides evidence of a "skill level
by treatment" interaction, in which matching the type of instruction to the student's stage
within the IH results in higher reading achievement. When teachers know how developed
a student's skill is and what instruction the corresponding stage of the IH recommends,
they have more useful information for differentiating instruction.

4.6 Big Five Areas of Reading

The development of reading is not an easy or a natural process. While the develop-
ment of speech is largely achieved through exposure to others who are speaking,
reading does not develop simply from looking at books. It requires direct teaching,
practice, and feedback. Louisa Moats summarizes the complexity of reading by say-
ing “reading IS rocket science” (Moats 1999).
The intention of this section is not to describe in excessive detail how reading
skills develop, or to discuss the structures of the brain involved in reading development.
Instead, a simplification of reading development is discussed in order to
provide context and background for the nature and structure of the CBE Process.

Table 4.5   The big five areas of reading

Phonemic awareness: Understanding that spoken words are made up of individual sounds, called phonemes. Example: Hearing the word "cat" and breaking it into "c/a/t".
Alphabetic principle: Knowledge and understanding of the relationship between phonemes and printed letters. Example: Knowing that the letter "r" makes a "rrrrr" sound; being able to recognize suffixes, such as "-ing" and "-ent".
Fluency with connected text: Being able to decode with rate, accuracy, and prosody; quick, efficient, and accurate reading at the sentence and paragraph level. Example: Automatically decoding words while reading; using intonation and inflection while reading passages.
Reading comprehension: The ability to derive meaning from text. Example: Rereading sections of text to improve understanding.
Vocabulary: Knowledge of words and definitions. Example: Being able to define a given word in a passage.

Successful reading requires the development and teaching of five skills: (a) phone-
mic awareness, (b) decoding or the alphabetic principle, (c) fluency with connected
text, (d) vocabulary, and (e) reading comprehension [National Institute of Child
Health and Human Development (NICHHD) 2000]. These five skills are referred to
as the “Big 5 Areas of Reading” because teaching any one of these skills is associ-
ated with improved reading outcomes (Carnine et al. 2009; NICHHD 2000). Each
skill is described in Table 4.5.
Reading development begins well before a child actually touches a book. Before
beginning to map sounds to text (i.e., orthographic symbols), a student must under-
stand that spoken words consist of single sounds or phonemes (Carnine et al. 2009).
Phonemic awareness is knowing, for example, that the word "cat" consists of three
phonemes: c/a/t. Phonemic awareness is a skill that does not require text, and an
easy way to remember the difference between phonemic awareness and phonics is
that you can do phonemic awareness with your eyes closed.
After students develop the knowledge that spoken words consist of individual
phonemes, they begin to learn the mapping of the sounds to symbols (i.e., text). In
other words, they develop an understanding of the alphabetic principle: the
understanding that letters correspond to certain sounds and that this system can
be used to decode words. Phonics is the teaching of the rules that indi-
cate which sounds are attached to which written letters and words (NICHHD 2000).
Students develop the ability to decode words by first learning the individual sounds
and how those sounds match to individual letters. From there, it builds up to learn-
ing letter combinations and whole words. There are several irregular words that do
not follow the rules of phonics. These are called sight words and students develop
knowledge of these words as they develop phonics and fluency.

Fluency is the shortened term used to refer to fluency with connected text. Flu-
ency is not simply reading fast, but reading connected text with prosody, rate, and
accuracy (Kuhn and Stahl 2003; Therrien et al. 2012). Fluency incorporates
understanding the sounds in speech, matching sounds to symbols, and developing
automaticity with reading.
Reading is a complex process. A student must visually track a string of letters,
break apart that word into individual phonemes, retrieve the sounds that match
those letters, recognize the rules of letter patterns, discriminate irregular word parts,
and put the word back together to read it. It is a process of decoding and then recod-
ing. That process is applied to reading several words in a row that make a sentence.
A student must first acquire the skill to decode and recode words and then build
fluency with that skill. It is at this point that comprehension is enhanced. Although
students build reading comprehension skills as they develop the ability to read,
reading comprehension relies heavily on the student’s ability to define words and
decode fluently (Carnine et al. 2009; Therrien et al. 2012) in combination with ac-
quiring background knowledge. A student's working memory must not be burdened
with trying to decode words and retrieve word meanings. Students must be able to
decode quickly and efficiently in order to devote their mental resources to under-
standing the meaning of the text and the author's intentions. If a student is too busy
decoding and arduously working through a word, comprehension will be limited
(Kuhn and Stahl 2003; Musti-Rao et al. 2009).
Comprehension is the ability to glean meaning from a text (NICHHD 2000).
We present comprehension as a skill that rests on four factors: (a) decoding skills
or knowledge of phonics, (b) vocabulary and language, (c) background knowledge
of content, and (d) metacognitive skills or the ability to monitor meaning while
reading (Klinger 2004; NICHHD 2000; Perfetti and Adlof 2012). Decoding skills
are not only one of the four factors contributing to comprehension, but also a pre-
cursory skill for comprehension. A reader who does not have the skills to decode text
does not have the opportunity to attempt comprehension. Students "learn to read" in
grades K–3 and then “read to learn” beginning in grade 4 (Carnine et al. 2009; Kuhn
and Stahl 2003; Therrien et al. 2012).
Finally, vocabulary refers to knowledge and awareness of the meanings of words
(NICHHD 2000). Vocabulary is developed in conjunction with the other four skills.
Vocabulary can be taught in conjunction with beginning reading skills. Figure 4.3
illustrates the developmental process of reading.

4.7 Summary and Key Points

CBE is a systematic process that educators can use to determine what skills need
to be taught and how to teach them. The recommendations that result from using
CBE are relevant to instruction and directly inform teaching practices. The CBE
Process is based on several assumptions, including the idea that learning is an
interaction among the learner, curriculum, and environment, and that it is this
interaction that leads to positive student outcomes. It is also assumed that a
problem occurs because of a lack of background knowledge or missing skills. CBE
is part of a larger assessment framework, referred to as RIOT/ICEL, that contributes
to problem solving and problem analysis. Additionally, skills are assessed and
recommendations are made with consideration of the IH, which outlines skill
acquisition and corresponding instructional recommendations along a continuum.

Fig. 4.3   Developmental process of reading

Key Points
• CBE is a systematic problem-solving assessment process.
• Learning is an interaction between the curriculum, environment, and the
learner. It is the interaction between these variables that results in learning.
• Problems are defined as the gap between what is expected and what is
observed.
• When a problem develops, it is because students lack the background
knowledge or are missing skills required to do the task.
• RIOT is an acronym for types of assessments including review, interview,
observe, and test. ICEL is an acronym for areas to assess and includes ins-
truction, curriculum, environment, and the learner.
• IH has four stages with corresponding instructional recommendations.
• Reading development consists of five skills: phonemic awareness, alpha-
betic principle, fluency with connected text, reading comprehension, and
vocabulary.
• The acquisition of reading requires direct instruction, as it is not a skill that
naturally develops.
Chapter 5
The Curriculum-Based Evaluation Process

5.1 Chapter Preview

The complete Curriculum-Based Evaluation (CBE) Process is described in this


chapter. Each phase and its link to the problem-solving model (PSM) are outlined
and goal setting is discussed.

5.2 The CBE Process

The CBE Process involves answering a series of questions by moving through


various tasks. The process begins with the initial identification of the problem and
moves to analysis and verification of the problem. The cycle of inquiry is completed
following the design and implementation of an effective solution. The entire cycle
is called the CBE Process.
The CBE Process is not a single test or one-time administration. Instead, it is
a flexible and fluid process that involves repeated data collection. Brief tasks are
administered, data are examined to verify or disprove a hypothesis, more data may
be gathered, and eventually information contributes to planning or adjusting in-
struction. The CBE Process identifies and targets missing background knowledge to
arrive at a solution. The focus is on what students need to learn and what “teachers
can teach” (Howell et al. 2008, p. 351). The ultimate goal of the CBE Process is to
enable the design of more focused and effective instruction that improves learning
(Howell et al. 2008).
The CBE Process and PSM  The CBE Process involves four phases which are
illustrated in Fig. 5.1:
1. Problem Identification: Survey-Level Assessment
2. Problem Analysis: Specific-Level Assessment
3. Plan Implementation: Goal-Writing, Intervention Design & Implementation, and
Monitoring Plan
4. Plan Evaluation: Monitoring Fidelity and Outcomes, Data-Based Decisions.

Fig. 5.1   CBE Process

You will notice that the four phases of the CBE Process parallel the four steps of the
PSM. Each model cycles through its steps until the problem is solved. Both
the CBE Process and the PSM can be applied to a single student, but the PSM can
also be applied to groups of students or to entire school systems.

5.3 Problem Identification

In the first phase of the CBE Process, the problem is identified using a survey-level
assessment. Survey-level assessment is the process of measuring an array of skills
to identify areas of concern warranting further assessment. The questions to answer
in this step are “What is the problem?” and “Is it severe enough to warrant further
investigation?”
The problem is then defined as the gap between the expected performance and
the observed performance. The severity of the gap is analyzed by conducting a gap
analysis.
A gap analysis is conducted by comparing the student’s performance to a certain
standard. For example, if a student is expected to read 50 words correctly per min-
ute and he reads 30 words correctly per minute, simple division and multiplication
are used to calculate the extent of the gap. The student’s observed performance is
divided by the expected performance and multiplied by 100 to get a percentage. In
this example, 30 words correct per minute divided by 50 words correct per
minute × 100 = 60 %, which is interpreted as the student performing at 60 % of
the expected criterion. Table 5.1 lists further examples of gap analysis. After the
problem is quantified, further analysis is conducted to pinpoint the specific nature
of the problem.

Table 5.1   Examples of gap analysis

Observed performance                 Expected performance       Gap            Equation
7 correct phonemes (PSF)             40 correct phonemes        33 phonemes    7/40 × 100 = 18 %
38 cwpm (ORF)                        115 correct words/minute   77 words       38/115 × 100 = 33 %
83 % accuracy on a reading passage   97 % accuracy              14 % points    83/97 × 100 = 86 %
22 circled words correct (MAZE)      24 circled words           2 words        22/24 × 100 = 92 %
PSF phoneme segmentation fluency, cwpm correct words per minute, ORF oral reading fluency
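The arithmetic behind a gap analysis is simple enough to express directly in code. The short sketch below is illustrative only; the function name is ours, and the example values come from the 30 versus 50 cwpm example above.

def gap_analysis(observed, expected):
    """Return the absolute gap and the percent of the expected criterion attained."""
    gap = expected - observed                         # e.g., 50 - 30 = 20 cwpm
    percent_of_criterion = observed / expected * 100  # e.g., 30 / 50 x 100 = 60 %
    return gap, percent_of_criterion

# Example from the text: expected 50 cwpm, observed 30 cwpm
gap, pct = gap_analysis(observed=30, expected=50)
print(f"Gap: {gap} cwpm; student is performing at {pct:.0f} % of the expected criterion")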

5.4 Problem Analysis

The second phase of the CBE Process involves administering specific-level assess-
ments to pinpoint missing background knowledge and identify the alterable vari-
ables needing adjustment during instruction. While survey-level assessment mea-
sures a wide range of skills, specific-level assessment is more targeted. Specific-
level assessment involves a series of tasks, such as analyzing errors made while
reading or asking the student to explain their comprehension of text. During this
phase, a series of tasks are completed by the student to test hypotheses about what
skills are missing. The evaluator asks a question (e.g., is the student lacking certain
decoding skills?) and then administers a task to answer that question (e.g., admin-
isters several reading passages and codes the errors made to determine if there are
decoding deficits). Analysis of the problem is conducted to gain a thorough un-
derstanding of why the problem is occurring to increase the likelihood of a match
between the problem and the solution.

5.5 Plan Implementation

The third phase of the CBE Process involves using the data gathered in the first
two phases to design and implement instructional changes in an attempt to correct
the identified problem. In the previous steps, the evaluator has identified the prob-
lem and pinpointed the missing skills. In the Plan Implementation stage, a plan is
designed and implemented to correct the problem. In addition to developing a plan
matched to the student’s need, two other key components at this stage are (a) goal
setting and (b) determining how to measure both the effectiveness of the plan (i.e.,
progress monitoring) and the implementation of the plan (i.e., fidelity monitoring).

5.5.1 Instructional Match

Once a problem has been verified and the factors contributing to it are identified, a
plan is developed (or the student’s current instructional plan is adjusted) to better
match instruction to the student’s specific skill deficits. The Instructional Hierarchy
(IH) is considered at this point, since knowing where a student falls on the IH leads
to a better-matched instructional plan. To illustrate the importance of instructional
match, consider the differences in the nutritional and fitness plans for a person who is
training for a marathon compared to a person who is trying to lower his cholesterol.
Each would require a different plan to reach his goals.
The runner’s plan might include (a) running several days/week, (b) running a
longer distance 1 day/week, and (c) eating lots of pasta the night before long dis-
tance runs. The person with high cholesterol would have a very different plan that
might include (a) eating oatmeal for breakfast, (b) eliminating fatty foods, and (c)
exercising 30 minutes at least 3 days/week. A person training for a marathon and a
person with high cholesterol both require a plan focusing on exercise and diet, but
have very different needs and thus, require very different plans. Similarly, two stu-
dents who require a plan to improve reading may have very different instructional
needs. CBE guides specific instructional recommendations.
Examples of Recommendations from the CBE Process  There are a few guidelines
to consider in generating recommendations from CBE results. First, the recommen-
dations should be tied to instruction and include direct instruction from the teacher
on missing skills. Second, recommendations should be practical, meaning they are
tied to controllable variables. For example, a recommendation to provide more time
to practice reading connected text in which the student is at least 93 % accurate is a
practical and controllable recommendation. Comparatively, a recommendation that
the student needs more help with reading is not specific enough. Table 5.2 lists vari-
ous examples and nonexamples of recommendations that come from using CBE.

5.5.2 Goal Writing

Following Problem Analysis, the evaluator can summarize the student’s current
level of performance, quantify the severity of the problem, and write a goal for the
student. A well-written goal requires several components (Shinn 2002a; Yell and
Stecker 2003; see Table 5.3):
1. The name of the student who will perform the behavior;
2. The behavior or skill that the student performs;
3. Under what conditions the skill is to be performed;
4. The criterion at which the skill must be performed; and
5. The time frame by which the skill should be performed.
Together, all of the parts specify with great clarity the expected outcome. The fol-
lowing is an example of a well-written goal:

When given a 3rd grade-level reading probe (conditions), Anthony (name of student) will
read aloud (behavior or skill) 90 words correctly in one minute with at least 95 % accuracy
(criteria), by June 1, 2014 (time frame).

Table 5.4 lists additional examples and nonexamples of goals.
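As an illustration only (nothing like this appears in the CBE materials), the sketch below assembles a goal statement from the five components described above and flags any component that is missing, mirroring the nonexamples in Table 5.4; the function name and structure are ours.

def write_goal(name, behavior, conditions, criterion, time_frame):
    """Assemble a goal statement and flag any missing components (illustrative only)."""
    components = {"name": name, "behavior": behavior, "conditions": conditions,
                  "criterion": criterion, "time frame": time_frame}
    missing = [label for label, value in components.items() if not value]
    if missing:
        raise ValueError("Goal is missing: " + ", ".join(missing))
    return (f"When given {conditions}, {name} will {behavior} {criterion} "
            f"by {time_frame}.")

# Reproduces the Anthony example from the text
print(write_goal(name="Anthony",
                 behavior="read aloud",
                 conditions="a 3rd grade-level reading probe",
                 criterion="90 words correctly in one minute with at least 95 % accuracy",
                 time_frame="June 1, 2014"))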

Table 5.2   Examples and nonexamples of recommendations from CBE

Example: The student makes frequent errors with words that end in "ing" in connected text. Provide direct instruction (modeling and practice) of words that end in "ing" in isolation until the student reads a list of such words with 100 % accuracy.
Nonexample: Ensure the student has a workspace when doing homework at home (not controllable).

Example: The student's reading is accurate but slow in grade-level text. Provide repeated readings and reinforcement for "beating the last score" until the grade-level standard is reached.
Nonexample: The student has a disability (not alterable or tied to instruction).

Example: The student has difficulty summarizing the main ideas of a story. Provide prompts during the story for the student to summarize the main ideas (i.e., identify the main character and two events that occurred in the story). Allow the student to use a template for notes and fade out this support once the student demonstrates the skill accurately.
Nonexample: The student's homework should be reduced (this would limit the opportunities to practice a skill and prevent moving higher up on the instructional hierarchy).

Example: The student does not pause at periods while reading. Have the student read aloud and provide a cue (e.g., pencil tap or clicker noise) if they read through a period.
Nonexample: The student should be given extra time to complete assignments (does not specify what skill needs to be taught to the student).

Table 5.3   Components of goals

Student's name: Name of the student who performs the target skill. Example: Helen
Skill: The target skill that the student performs, defined in observable and measurable terms. Examples: Will read; Will identify; Will summarize
Conditions: Specifies the setting and conditions under which the behavior is to be performed. Examples: When given a third-grade reading passage…; After reading a grade-level passage from the core curriculum…
Criteria: Clarifies how well and for how long the skill must be performed. Examples: At least 100 cwpm with 95 % accuracy; Summarize two main details from the story
Time frame: Identifies by when the goal should be achieved. Example: By February 15, 2014
cwpm correct words per minute

Table 5.4   Examples and nonexamples of goals

Example: When given a fifth-grade reading MAZE passage, Tony will circle 25 words correctly in 3 minutes by May 15, 2014.
Nonexample: Tony will increase his reading level by 1 year. (Missing conditions, criteria, and time frame)

Example: When given a third-grade reading CBM passage, Megan will read 100 words correctly per minute with 95 % accuracy by June 1, 2014.
Nonexample: When given a third-grade passage, Megan will read 100 words per minute. (Missing time frame)

Example: When given a PSF probe, Mark will identify 20 correct sounds per minute by March 1, 2014.
Nonexample: Mark will identify 20 sounds per minute by March 1, 2014. (Missing conditions)

Example: When given ninth-grade reading text, Jill will identify the main idea and 3 details from the reading selection by March 31, 2014.
Nonexample: When given reading material, Jill will read at a proficient level, 4 out of 5 opportunities. (Missing specific criteria, specific conditions, and time frame)

CBM curriculum-based measurement

5.5.3 Setting Goals

Student outcomes are affected by goals, so it is important that goals are both
ambitious and reasonable. There are, however, different ways to write
goals. Goal setting can be accomplished through consideration of several different
sources of information. Four sources are addressed: (a) benchmark standards, (b)
normative data, (c) rates of growth, and (d) the student’s current level of perfor-
mance.
Benchmark Standards  Benchmark standards are predictive of current or
future success and are based on correlational data. For example, students who can
read at least 47 cwpm at the end of first grade with at least 90 % accuracy have
an 80–90 % chance of reaching subsequent literacy benchmarks (and continuing
to meet these benchmarks places the odds in their favor of achieving high-sta-
kes reading goals, such as meeting proficiency on a state assessment; Good and
Kaminski 2011).
Normative Data  Normative data provide information about how large groups of
students perform and goals can be set for students using those data. For example,
students in first grade performing at the 50th percentile in the spring (based on
national norms) read approximately 67 cwpm (AIMSweb, n. d.).
Rates of Growth  Normative Rates of Growth tell us what typical students do.
Comparing the target student’s rate of growth to that of typical students all-
ows us to see if the gap will be closed over time. Using rates of growth to
set goals provides perspective on how reasonable and ambitious the goals are.
Tables  5.5–5.7 display rates of growth for different curriculum-based measu-
res, including oral reading fluency (ORF), the MAZE, and early literacy mea-
sures. Within Tables  5.5 and 5.6, the rates listed in the normative column and
the rates of growth are calculated by examining students performing at the 50th
percentile.

Table 5.5   Growth rates for oral reading fluency
(Normative column: AIMSweb, n.d.; Gen Ed, Special Ed, and Learning Disability, Effective columns: Deno et al. 2001)

Grade   Normative   Gen Ed   Special Ed   Learning Disability, Effective^a
1       1.50        1.80     0.83
2       1.22        1.66     0.57
3       1.11        1.18     0.58         1.15^b
4       0.89        1.01     0.58         1.14^b
5       0.81        0.58     0.58         2.07^b
6       0.69        0.66     0.62         1.59^b
7       0.64
8^c     0.47
Rates
≤ 30 cwpm   2.0
≥ 31 cwpm   1.25–1.50
^a Indicates rates for students with learning disabilities who are receiving effective, evidence-based instruction
^b Rates were calculated using studies reported in Deno et al. (2001) by using the mean grade level reported. If more than one study was noted, the mean was taken to determine the growth rate listed
^c Growth rates are the same for grades 8–12 (AIMSweb, n.d.)

Table 5.6   Growth rates for reading MAZE
(Normative column: AIMSweb, n.d.; Learning Disability, Effective column: Deno et al. 2001)

Grade   Normative   Learning Disability, Effective^a
1       0.19
2       0.31        0.56^b
3       0.08        0.67^b
4       0.17
5       0.28
6       0.17
7       0.19
8^c     0.08
^a Indicates rates for students with learning disabilities who are receiving effective, evidence-based instruction
^b Rates were calculated using studies reported in Deno et al. (2001) by using the mean grade level reported. If more than one study was noted, the mean was taken to determine the growth rate listed
^c Growth rates are the same for grades 8–12 (AIMSweb, n.d.)

Table 5.7   Growth rates for early literacy measures

Grade   PSF    LNF    LSF    NWF
1       0.42   0.44   0.64   0.94
2       0.17   0.58
All rates are from AIMSweb (n.d.) and are the rates at the 50th percentile
PSF phoneme segmentation fluency, LNF letter naming fluency, LSF letter sound fluency, NWF nonsense word fluency

A reasonable goal is based on the normative rates listed in the tables and closes a
student's performance gap over time. An ambitious goal contains a relatively
higher rate of growth that would close the student's performance gap in a
shorter period of time (Pearson 2012b; Shinn 2002b).
Current Level of Performance  Another source of information to consider
when setting goals is the student’s current level of performance. Students in the
initial stages of reading development progress at a faster rate than students in
the later stages of reading development (Deno et  al. 2001). In setting goals, it
is reasonable to expect a third grader reading below grade level to progress fas-
ter than a third grader reading at grade level. The former student is at an ear-
lier stage of reading development and has more “room to grow” (see Table 5.5;
Deno et al. 2001).

5.5.4 A Sense of Urgency

The goal is to provide instruction that results in closing the achievement gap. Re-
member, the guiding philosophy is that all students can learn and the purpose of
adjusting instruction is to provide the specific skills students need to close the
achievement gap. Goals should reflect a sense of urgency about a student’s educa-
tion. Teachers who write and monitor goals get better student outcomes (Conte and
Hintze 2000; Stecker et al. 2005) and ambitious goals yield better outcomes than
unambitious goals (Bush et al. 2001; Christ 2010; Hattie 2009; cf. Conte and Hintze
2000; Stecker et al. 2005).

Inset 5.1 How Much Growth can be Expected from Students with
Disabilities?
Deno et al. (2001) examined the typical growth rates of all students, irrespective
of the quality of instruction or program they received. Those growth rates are in
Table 5.5 under the column labeled “Gen Ed”. Deno et al. (2001) examined the
progress that could be expected of students with learning disabilities who are
receiving special education. As displayed in Table 5.5 under the column “Spe-
cial Ed”, students receiving special education services grow at about half the
rate of students in general education. These rates also were calculated without
consideration of instruction. It is important to realize that it cannot be assumed
that students in special education receive individualized, high-quality instruc-
tion (Shinn 2002a; Tilly 2008). Students with disabilities that do receive target-
ed, evidence-based, high-quality instruction do achieve growth rates similar to
students without disabilities receiving instruction in general education settings.
These rates are displayed in the column labeled “Learning Disability, Effective”
in Tables 5.5 and 5.6. Deno et al. (2001) have established that, regardless of a
label, when students are given the right, high-quality instruction, they grow.

5.5.5 What Level Material Should be Used for Progress Monitoring?

Information obtained during the survey-level assessment helps determine current


levels of performance. If the material level is too high, it will not be sensitive to a
student’s individual improvements. On the other hand, if the material level is too
low, it may appear the student is making growth but the attainment of critical skills
may be insignificant. Most students who are reading below grade level can be
monitored with grade-level material. For students reading well below grade level,
however, a combined assessment schedule can be used. This schedule would include
3 consecutive weeks of the student being monitored with instructional-level material,
and 1 week of the student being monitored with expected grade-level material. In
the course of 4 weeks, the student
would be monitored three times at an instructional level and one time at grade level,
with two separate graphs tracking progress at different levels. The goal with any
additional support is to provide the skills needed in order for students to adequately
perform on grade-level material. Systematically checking progress in grade-level
material will improve decisions about the effectiveness of the intervention.
Regarding material at the student's instructional level, Pearson (2012b) recommends
using the student's highest instructional level, defined as the highest level of material
in which the student reads above the 10th percentile (based on normative data).

5.5.6 Selecting Goal Criteria and Time Frame

Selecting Goal Criteria  Selecting goal criteria and determining a goal timeline
can appear to be a complex process because several different sources of informa-
tion can be considered to do so. Experts have suggested using normative standards,
benchmark standards, and rates of growth to identify reasonable and ambitious
goals for students (Deno et al. 2001; Fuchs et al. 1993; Pearson 2012b). In this
section, we consolidate that information and summarize one way of setting goal
criteria and time frames. Considerations for goal setting are provided below.
First, it is necessary to determine if goal criteria will be set based on normative
standards or benchmark standards. Normative standards can be used to identify an
average range compared to similar peers and the goal criteria can be set to reach a
commensurate level. A benchmark standard can be used so that the goal is predic-
tive of future success (as discussed earlier, benchmarks can predict success on a
later outcome, such as a state test). Ultimately, the goal is to have students perform
proficiently on grade-level material, so selecting a benchmark standard, or a high
normative standard that predicts later reading success, helps ensure students become
successful readers. However, if the student is below grade level or if the
benchmark standard is too rigorous given the time frame for the goal, then an in-
terim step using either normative or benchmark standards from lower levels may be
a reasonable approach (Pearson 2012b). (Selecting a time frame is discussed later.)

Once the goal criterion is determined, the evaluator then determines how much
weekly growth is needed to reach that goal. A date by which the goal should be met
is decided upon. Then, to determine the growth needed to reach the goal, conduct a
gap analysis and divide the gap by the number of weeks in which we want the student to
reach the goal. For example, if Marianne, a third-grade student, is reading 76 cwpm
and the end-of-year benchmark is 100 cwpm with 97 % accuracy, then the goal would read
"When given a third-grade reading passage, Marianne will read 100 cwpm with
97 % accuracy by June 1, 2014." The gap is 100 − 76 = 24 words. If there are 16
weeks left in the school year, that date could be used as the goal date. Dividing 24
cwpm by 16 weeks results in 1.5 words per week. Therefore, Marianne must grow
at a rate of at least 1.5 words per week to reach her goal.
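The same calculation can be sketched in a few lines of code; this is illustrative only, the helper name is ours, and the numbers mirror the Marianne example above.

def weekly_growth_needed(current_score, goal_criterion, weeks_available):
    """Weekly growth (e.g., cwpm per week) needed to reach the goal criterion in time."""
    gap = goal_criterion - current_score      # 100 - 76 = 24 cwpm
    return gap / weeks_available              # 24 / 16 = 1.5 cwpm per week

rate = weekly_growth_needed(current_score=76, goal_criterion=100, weeks_available=16)
print(f"Needed growth: {rate:.1f} cwpm per week")  # Marianne needs about 1.5 cwpm per week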
The final step is to determine whether the rate of growth needed is reasonable.
Is it unambitious or too ambitious? To evaluate reasonableness, compare the need-
ed growth to established rates of growth within the normative column under the
AIMSweb source (which is the 50th percentile of recent normative data on rates
of growth). If the needed rate of growth is at or above the rate listed, then the goal will
likely close the student's performance gap over time. If the needed growth is below
the 50th percentile, then the student’s growth will likely not close the performance
gap (Pearson 2012b).
Selecting a Time Frame  Finally, evaluators will want to consider how much time
is needed for the goal to be reached and what is reasonable to write for a time frame.
Calculating a time frame depends on the intensity of instruction the student is receiving,
the expected rate of growth, and how many weeks are available within the school year.
Here are a few considerations when planning a time frame.
First, if students are to acquire the skills needed to be successful with grade-level material,
then expecting 2 years' growth within 1 year's time is an ambitious, yet reasonable
expectation. Less growth than that would maintain the student’s below grade-level
performance even if he or she makes positive growth in skills.
Second, educators can identify a reasonable rate of growth to expect from the
student and then calculate how many weeks of intervention/instruction are needed
to reach that goal. For example, if a student’s gap is 24 cwpm and a reasonable rate
of growth to expect is 1.50 words per week, then the student will need 16 weeks
to reach his or her goal. The weeks needed is determined by dividing the gap by
the rate of growth: 24 words/1.5 rate of growth = 16 weeks. The evaluator can then
compare the number of weeks left in the school year to the number of weeks needed
to determine if the time frame of the goal is reasonable or not. If there are more
than 16 weeks left, then the time frame is reasonable and the goal can be written
for 16 weeks from the present date. If there are fewer than 16 weeks left in the school
year, then the time frame can be written to extend into the next school year or in-
structional changes can be made to expect higher rates of growth and to close the
performance gap before the school year ends.
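The reverse calculation, estimating the number of weeks needed from a reasonable rate of growth, can be sketched the same way; the helper name and the weeks-remaining value are ours, chosen only to illustrate the comparison described above.

def weeks_needed(gap, reasonable_weekly_growth):
    """Weeks of intervention needed to close the gap at a given weekly growth rate."""
    return gap / reasonable_weekly_growth     # 24 cwpm / 1.5 cwpm per week = 16 weeks

needed = weeks_needed(gap=24, reasonable_weekly_growth=1.5)
weeks_left_in_year = 20                       # hypothetical number of weeks remaining
if weeks_left_in_year >= needed:
    print(f"{needed:.0f} weeks needed; the time frame is reasonable")
else:
    print(f"{needed:.0f} weeks needed; extend into next year or intensify instruction")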
One last point on selecting a time frame deals with the issue of an older student
who is below criterion on a skill that should have been mastered in a previous
grade. For example, what time frame should be used for a second grader who has
not met standards on the nonsense word fluency (NWF)? Generally speaking, if
a student is missing a previously expected skill, the goal can be written for the student to
reach the criterion in half the time that it normally takes (this
is equivalent to expecting 2 years’ growth in 1 year’s time). For example, a first
grader should read at least 8 whole words on an NWF in the winter and 13 whole
words by the end of the year (Good and Kaminski 2011). Now imagine a second
grader who begins the year reading 9 whole words on NWF. The student is behind
in skill acquisition and it is now past the deadline for the benchmark. The goal is
to have the student reach 13 whole words in half the time it normally takes to reach
that goal. A score of 9 whole words is at benchmark for the middle of first grade,
so if the expectation is that it normally takes 18 weeks to go from 8 to 13 whole
words (the winter to spring benchmark), then the goal would be written so that it
is achieved in 9 weeks for this particular student. Using this approach does take
some analysis and mathematics, but it can be a fairly straightforward method for
setting goals.
Summary of Selecting Goal Criteria and Time Frame  To summarize, setting
goals and timelines includes:
1. Determine a normative or benchmark criterion.
2. Set the goal time frame and estimate the growth needed for the student to reach
that goal.
a. Calculate the gap between the goal and the current level of performance.
b. Divide the gap by the number of weeks until the goal date.
3. Determine if the goal is realistic by comparing it to research-based growth rates.
Compare the rate of growth needed to the 50th percentile of rates of growth to
determine if the gap will be closed within the goal time frame.
4. Analyze the time frame by examining the number of weeks available for
instruction.

5.5.7 Measuring Progress

Once a goal is written and a desired rate of progress is determined, it is necessary to


determine how progress will be measured. Since the goal is written in observable,
measurable terms, the evaluator now needs to select a progress monitoring tool
that measures the skill reflected in the goal, such as reading fluency measured in
correct words per minute.
The progress monitoring tool must be efficient, technically adequate, have alternate
forms, and be sensitive to changes in skills over small increments of time (Shinn
2008). Curriculum-Based Measurement has all those characteristics and is appro-
priate when monitoring progress of basic academic skills. It is important to note
that other measures can contribute to instructional decision making. Though high-
stakes decisions about a student's education, such as entitlement decisions, require
technically adequate measures, day-to-day decisions about instruction can involve
an array of formal and informal measures.

5.5.8 Measuring Fidelity

A final decision made during the Plan Implementation phase of the CBE Process is
how to measure fidelity of the instructional plan. Two ways to measure fidelity are to have an outside
observer observe the instructional plan being implemented (direct assessment) or
to have the person delivering the plan report on its delivery (indirect assessment).
Direct Assessment of Fidelity  Direct assessment occurs when the steps of the
instructional plan are defined in observable and measurable terms and an outside
observer (e.g., administrator, instructional coach, peer) observes whether the steps
are implemented. A checklist is created, and the total percentage of components
implemented is calculated.
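The fidelity calculation from a direct-observation checklist is a simple percentage; the sketch below assumes the checklist is recorded as yes/no values per component, a data structure of our own choosing rather than anything prescribed by the CBE materials.

def fidelity_percentage(checklist):
    """Percentage of instructional-plan components observed as implemented."""
    implemented = sum(1 for observed in checklist.values() if observed)
    return implemented / len(checklist) * 100

# Hypothetical checklist for a repeated-reading plan
observation = {
    "modeled the passage": True,
    "timed the first read": True,
    "gave corrective feedback on errors": False,
    "graphed the score with the student": True,
}
print(f"Fidelity: {fidelity_percentage(observation):.0f} %")  # 3 of 4 components = 75 %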
Indirect Assessment of Fidelity  Indirect assessment of fidelity involves using
a variety of tools to measure fidelity without directly observing the plan imple-
mentation. Examples include reviewing attendance records to see if the student
was present for instruction, examining permanent products to ensure the student
participated in the instruction, interviewing those who deliver the instruction,
or having those delivering the instruction complete a checklist. The same checklist
used for direct observation could be completed by the implementer as an indirect
assessment.
The process of measuring fidelity is not intended to be a teacher evaluation
process. The content of what is measured in this process should be openly shared
and available to those who are part of the instructional plan. Fidelity tools are
used to ensure students are being provided the instruction/intervention as it was
intended.
In addition to direct and indirect assessment of fidelity, the definition of fidelity
also can include consideration of how well the plan matches the student’s skills and
deficits.

5.6 Plan Evaluation

In the final phase of the CBE Process, the question asked is, “Did the intervention
plan work?” During this phase, educators review monitoring data for both the
fidelity of implementation of the intervention plan (fidelity data) and the plan’s
overall effectiveness (student progress monitoring data). The ongoing monitor-
ing of the intervention plan is called formative evaluation. Formative evaluation
is an assessment process that occurs during instruction and allows for changes
to be made if student benefit is not evident. Scheduling systematic data review
meetings at regular intervals, so that these important decisions about student
benefit and the need for instructional changes are made in a timely way, is critical
to improved student outcomes.
Prior to making changes to the instructional plan when data indicate it is not
working, the fidelity of the plan must be examined. If the plan was not implemented
or received as intended, it cannot be concluded that the plan itself failed to improve
student outcomes. Instead, the plan should be implemented as intended before teams
conclude it did not work. Before making changes to the plan, make changes to the
implementation and continue to monitor progress. There is not a clear-cut standard
in the literature as to what is acceptable or unacceptable fidelity. However, it is a
reasonable assumption that fidelity scores below 90 % are undesirable. This number
comes from research on academic interventions that establish fidelity at 90–95 %
(see Greenwood et al. 2008 and Jimerson et al. 2007).

5.7 Summary and Key Points

The steps in the CBE Process are the same as those in the PSM, although the CBE
Process is focused on individuals and the PSM can be applied to an entire school
system. The CBE Process is a series of phases in which a problem is defined, ana-
lyzed, an intervention is implemented to correct the problem, and data are collected
to evaluate its effectiveness. Goals are set by considering various sources of infor-
mation. Measurement of both progress and fidelity is a key component of the CBE
Process, ensuring logical decisions are made about the effectiveness and continued
use of the intervention plan. If a performance goal is not met, the CBE Process
cycles back around until the goal is met.

Key Points
• The CBE Process follows the same phases as the PSM.
• Survey-level and specific-level assessment comprise the first two steps of
the CBE Process.
• Phase 3 involves designing an intervention plan, setting goals, and deter-
mining how fidelity and progress will be monitored.
• Ambitious goals are written through consideration of benchmark standards
and normative growth rates.
• Evaluation of the intervention plan requires ensuring fidelity is strong to
allow for logical decisions about the effectiveness of the instructional plan.
• Failure is not an option, since a lack of progress requires cycling back
through the CBE Process.
Part II
Using Curriculum-Based Evaluation
Chapter 6
CBE Decoding

6.1 Chapter Preview

This chapter describes the process for Curriculum-Based Evaluation (CBE) decod-
ing. The chapter is structured around the four phases of the CBE Process and will
walk the reader through the entire process for decoding. The chapter discusses spe-
cific assessment techniques and intervention recommendations based on the results.

6.2 CBE Decoding

The CBE Process moves through four phases, within which is a series of steps that
involve three types of tasks:
1. Ask: The questions that guide assessment.
2. Do: The direct assessment activities conducted with the student. Data are col-
lected and interpreted to answer the question.
3. Teach: The instructional recommendations based on the outcomes from ask and
do.
Evaluators start with a question (Ask), which then requires an action or activity
(Do). Following a certain number of Asks and Dos, the evaluator arrives at an in-
structional focus (Teach), which indicates specific instructional strategies (see
Fig. 6.1). The entire CBE Process for Decoding is presented in Handout 6.1, which
is designed to be a quick summary of the Decoding CBE Process (Table 6.1 also
outlines the CBE Process for Decoding in a linear form). All of the handouts used
for the CBE Process for Decoding are included at the end of the chapter, and the
entire list is displayed in Table 6.2.


Fig. 6.1   Tasks within the CBE Process

6.3 Problem Identification

6.3.1 Step 1—Ask: Is There a Problem?


Do: Initial Problem Identification

The first step is to identify if a reading problem exists. This initial identification of
reading difficulty typically occurs during the universal screening process. Multiple
sources of information also can answer this question at any time (e.g., review of
records, interview with the student or teacher, and various assessments).

6.3.2 Step 2—Ask: Does it Warrant Further Investigation?


Do: Survey-Level Assessment

After identification of a reading concern, the next step is to determine if the prob-
lem is severe enough to warrant further investigation. This question is answered by
conducting a Survey-Level Assessment (SLA) using leveled reading passages. SLA
is a technically adequate measure of overall reading performance (Hintze and Conte
1997) that is conducted to determine a student's instructional reading level. The
instructional reading level is the highest level of material in which the student
reads at or above the fall 25th percentile (based on national norms) with at least
95 % accuracy (Hosp et al. 2006). A gap analysis can be conducted with data collected
in the SLA.
SLA Directions: 
1. Administer three 1-minute oral reading fluency (ORF) probes using curriculum-
based measurement (CBM) directions. Report the median words read correctly
(WRC) and the median errors as the score (directions for SLA are included in
Handout 6.2 and the formulas for calculating rate and accuracy are “A” and “B”
in Fig. 6.3, respectively).
Table 6.1   Phases of CBE Process for Decoding

Problem Identification
  Ask: Is there a problem? Do: Initial identification
  Ask: Does it warrant further investigation? Do: Survey-Level Assessment

Problem Analysis
  Ask: What is the student's accuracy and rate at grade level? Do: Examine rate and accuracy with grade-level material
  Ask: Can the student self-correct errors? Do: Assess self-monitoring skills
  Ask: Does the student have acceptable rate and accuracy at some level above grade 1? Do: Examine Survey-Level Assessment results
  Ask: Are there patterns to the student's reading errors? Do: Conduct error analysis
  Ask: Are sight words and vocabulary a concern? Do: (see Chapters 6 and 7)

Plan Implementation
  Ask: Is the instructional focus self-monitoring and accuracy? Teach: Self-monitoring
  Ask: Is the instructional focus fluency? Teach: Fluency building
  Ask: Is the instructional focus on specific errors? Teach: Teach specific skills
  Ask: Is the instructional focus on general reading difficulties? Teach: Teach using a balanced approach to reading skills

Plan Evaluation
  Ask: Is the student progressing toward his or her goal? Do: Monitoring fidelity and student progress

Table 6.2   List of handouts for CBE Process for Decoding


Handout Title
Instructions and Process Sheets
6.1 Curriculum-Based Evaluation in decoding flowchart
6.2 Survey-Level Assessment in ORF instructions
6.3 Self-monitoring assessment instructions
6.4 Error analysis instructions (reading and decoding errors)
Tally and Assessment Sheets
6.5 Survey-Level Assessment results for ORF
6.6 Self-monitoring assessment
6.7 Error analysis coding sheets
6.8 Error analysis tally sheets
6.9 Dolch 200 word sight list
Strategy Sheets
6.10 Teach: Self-monitoring
6.11 Teach: Repeated reading
6.12 Teach: Listening preview and partner reading
6.13 Teach: Chunking
6.14 Teach: Word drill
6.15 Teach: DISSECT strategy
6.16 Teach: General reading instruction
ORF oral reading fluency

Fig. 6.2   Problem Analysis phase of CBE Process for Decoding. (Note: “+” indicates at or above
criterion, “–” indicates below criterion)

a. SLA requires both the unnumbered student passages and the numbered exam-
iner passages.
b. If the student’s rate and accuracy are below criteria for the expected grade
level, administer three passages a grade level lower and record the median
WRC/errors from successive grade levels until criteria are met at a grade
level.

Table 6.3   Examples of codes for reading errors

/ (Misread word): A slash through the word indicates that the student misread the word. The error made is written above the misread word.
SC (Self-correct): The student misread the word, but self-corrected it within 3 seconds.
O (Omission): The student skipped or omitted the word while reading.
3 (Hesitation): The student does not correctly read the word within 3 seconds. The examiner provides the word after 3 seconds and the word is scored as misread.
__ (Repetition): Underlining the word indicates that the student repeated the word before continuing with reading. The student reads the word correctly each time. (If the student were to misread it initially and then read it correctly, that would count as a self-correct.)
__ (Punctuation): Underlining a punctuation mark and adjacent words indicates that the student did not acknowledge the period or comma. The student did not pause and instead kept reading.
^ (Insertion): The student inserted a word that was not within the sentence. The inserted word is written above the caret. (Note: These are not misread words, but rather words added into the reading.)
The codes are often written above the word that is misread while recording.

2. Record the student’s scores on Handout  6.5. Complete the bottom portion of
Handout 6.5 to determine the severity of the problem.
3. Ask: Does it warrant further investigation?
− If the student is performing at criterion with accuracy and rate with grade-
level material, then decoding CBE is complete and reading comprehension
can be examined (see Chapter 8). Students scoring at criterion with grade-
level material may require an instructional focus on comprehension, vocabu-
lary, and/or content knowledge.
− If the student is not performing at criterion for accuracy or rate with grade-
level material, then proceed to the Problem Analysis phase.

Things to Consider
• In some cases it may be more efficient to administer the easier grade-
level material first before progressing to higher grade-level material.
This approach may be useful with students reading several levels below
expected grade level (cf. Hintze and Conte 1997).
• It is helpful and efficient to record all types of errors that students make
while reading. It may be necessary to collect errors as part of the CBE
Process later, so gathering them along the way can save time. However,
evaluators should be proficient in CBM administration before adding the collection
of error data. Table 6.3 shows how to record errors and Fig. 6.4 shows how
to code errors.

Fig. 6.3   Formulas for rate, accuracy, and percentage of errors corrected
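Figure 6.3 itself is not reproduced here, but the calculations it refers to can be sketched as follows. The formulas used, rate as the median words read correctly on a 1-minute probe and accuracy as words read correctly divided by total words attempted, are conventional CBM definitions and are assumed rather than copied from the figure; the function name and example scores are ours.

from statistics import median

def sla_scores(wrc_per_probe, errors_per_probe):
    """Median WRC and errors across probes, with rate (cwpm) and accuracy (%)."""
    wrc = median(wrc_per_probe)              # median words read correctly
    errors = median(errors_per_probe)        # median errors
    rate = wrc                               # 1-minute probes, so WRC equals cwpm
    accuracy = wrc / (wrc + errors) * 100    # percent of attempted words read correctly
    return rate, accuracy

# Hypothetical scores from three 1-minute ORF probes
rate, accuracy = sla_scores(wrc_per_probe=[48, 52, 50], errors_per_probe=[6, 4, 5])
print(f"Rate: {rate} cwpm; Accuracy: {accuracy:.0f} %")  # 50 cwpm at about 91 % accuracy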

6.4 Problem Analysis

In the Problem Analysis phase, the evaluator examines the student's rate and accuracy
with grade-level material; this examination is Step 3 of the CBE Decoding Process. The
tasks conducted will depend on the student’s performance. If rate and accuracy are
at criterion, then the evaluator proceeds to Chapter 8 to evaluate the student’s read-
ing comprehension. If rate is below criterion, and accuracy is at or above criterion,
the next task is to teach fluency building. If rate is at or above criterion, and accu-
racy is below criterion, the evaluator conducts Step 4. Finally, if rate and accuracy
are below criterion, the evaluator conducts Step 5. Figure 6.2 illustrates the Problem
Analysis phase of the CBE Decoding Process.

6.4.1 Step 3—Ask: What is the Student’s Rate and Accuracy? Do:


Examine Rate and Accuracy with Grade-Level Material

Using the assessment results from grade-level material, compare the student’s rate
and accuracy to the criteria (i.e., 25th percentile fall, at least 95 % accuracy) and
proceed to one of the following tasks where “+” signifies at or above criterion and
“–” signifies below criterion:
1. Rate +, accuracy +: Assess reading comprehension (see Chapter 8)
2. Rate –, accuracy +: Teach: Fluency
3. Rate +, accuracy –: Conduct Step 4 of CBE Decoding Process
4. Rate –, accuracy –: Conduct Step 5 of CBE Decoding Process
Rate at Criterion; Accuracy at Criterion (Rate +, Accuracy +)  For students who
are reading at or above criterion for rate and accuracy, assessment focus moves to
reading comprehension. Chapter 8 focuses on the CBE Process for reading com-
prehension. The instructional goal will be informed by the results of additional
assessment.
Rate Below Criterion; Accuracy at Criterion (Rate –, accuracy +)  For students
below the criterion for rate, and at or above criterion for accuracy, the teaching
recommendation is to focus on building fluency. No further assessment is needed at

Fig. 6.4   Examples of ways to code reading errors

this point. Fluency building becomes the instructional recommendation (see Teach:
Fluency later in this chapter).
Rate at Criterion; Accuracy Below Criterion (Rate +, Accuracy –)  For students
at or above criterion for rate, and below criterion for accuracy, it is necessary to
determine if the student is monitoring his or her reading. Is the student reading quickly and
carelessly, or does he or she lack the decoding skills required to read the words? A simple
procedure is conducted to determine if the student has the decoding skills and is not
using them, or if the student is not able to decode the words. This procedure can
determine if the student is within the acquisition stage for decoding and should be
further evaluated as a student with both low rate and low accuracy.
Rate Below Criterion; Accuracy Below Criterion (Rate –, Accuracy –)  Stu-
dents below criterion for rate and accuracy likely are in the acquisition stage of the
instructional hierarchy. Also, a student who does not improve accuracy with the
self-monitoring strategy (Step 4) requires the same instructional focus as students
in this category. Students in this category are still learning decoding and phonics
skills, so the specific-level assessment focuses on pinpointing decoding needs and
determining the extent to which early literacy skills (phonemic awareness, letter-
sound relationships, and sight words) are mastered.

6.4.2 Step 4—Ask: Can the Student Self-Correct Errors? Do: Self-Monitoring Assessment

Step 4 is for students whose rate is at or above criterion, and accuracy is below crite-
rion (i.e., rate +, accuracy –). Self-monitoring assessment involves having students

read aloud while providing an auditory prompt when a decoding error is made.
The results indicate whether the student has the decoding skills necessary to read,
or requires instruction targeting reading decoding. Self-monitoring assessment is
described in the following six steps and summarized in Handout 6.3. Handout 6.6
can be used to record self-monitoring assessment information.
Self-Monitoring Assessment 
1. Select grade-level material from which the student reads at or above criterion for
rate, but below criterion for accuracy. Use examiner copy and student copy of at
least two passages from that grade level.
2. In addition to the standardized directions, say to the student, "I want you to take
your time and read this passage as accurately and carefully as you can.” (Pear-
son 2012a).
3. Score according to standardized scoring procedures and determine if the addi-
tional prompt in #2 improved the student's reading accuracy and, if so, whether it
reached the grade-level criterion (use Handout 6.6 to calculate and record changes in
rate, accuracy, and percentage of errors corrected).
− If yes, the prompt helped the student, and it can be assumed the student would
benefit from such prompts and possibly from incentives for improvement.
− If no, proceed to the next step to determine if assisted self-monitoring affects
accuracy.
4. Use new copies of reading passages and instruct the student to read (untimed).
Tell the student that if he or she makes a mistake, you will provide a prompt.
a. A simple tap of a pencil can be used for the auditory prompt and will signal
to the student that he or she made a mistake. Clicking a pen, snapping one’s
fingers, or use of a clicker, which can be purchased inexpensively online, also
can provide the auditory prompt.
b. Say to student, “Please read this aloud. This may be difficult for you, but
please do your best reading. I am not timing you, but if you make a mistake,
I will (tap this pencil, click this clicker). That is your clue that you made a
mistake and I want you to find the mistake and fix it. Remember, find it and fix
it. What will you do?” (Student indicates understanding of procedure).
5. Have the student read aloud and provide a signal each time a word is misread.
a. As the student reads and makes a mistake, slash the word that is misread on
your passage, write down the error that is made above the misread word, and
provide the prompt.
b. Then mark a “slash” next to the error to indicate that the prompt was given,
and write down the word the student says following the prompt (see Fig. 6.5
for a visual display of recording misread words).
c. Provide only one chance for the student to reread the word correctly; do not
prompt the student again if he or she misreads the word a second time after the audi-
tory prompt. See Fig. 6.5 for a visual depiction of recording misread words.
6. Ask: Can the student self-correct errors? Generally speaking, if a student is able
to self-correct 90 % of the errors, the issue likely is self-monitoring (Burns et al.
2012; Howell and Nolet 2000).

Fig. 6.5   Example of error recording for self-monitoring assessment

a. Determine the number of errors made and the number of errors corrected.
b. Divide the total errors corrected by the total errors made and multiply by 100
to get the percentage of errors corrected (see formula "C" in Fig. 6.3; a brief
worked example follows this list). Use Handout 6.6 to record your scores.
c. If the answer to the question “Can the student self-correct at least 90 % of
errors?” is yes, then the student likely has a self-monitoring issue. Suggest the
teaching strategy “Teach: Self-Monitoring.”
d. If the answer to the question is “no,” then proceed to step 5 in Handout 6.1.
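As a brief worked example of item 6 above (error counts invented; the helper name is ours, not part of the CBE materials), the percentage of errors corrected and the 90 % guideline can be checked as follows.

```python
# Illustrative calculation for Step 4, item 6: percentage of errors corrected
# (formula "C" in Fig. 6.3) and the 90 % self-correction guideline.

def percent_errors_corrected(errors_corrected: int, errors_made: int) -> float:
    """Total errors corrected divided by total errors made, times 100."""
    return errors_corrected / errors_made * 100

corrected, made = 9, 12                      # invented error counts
pct = percent_errors_corrected(corrected, made)
print(f"{pct:.0f}% of errors corrected")     # 75%
if pct >= 90:
    print("Likely a self-monitoring issue: Teach: Self-Monitoring")
else:
    print("Proceed to Step 5 in Handout 6.1")
```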

Things to Consider 
• Gathering, recording, and saving error information will save you time in
the event an error analysis is required in step 6.
• Prompt each error only once. If the student misreads it again, record the
second error. If the student ignores or does not respond to the prompt, record
that with three dots and allow him or her to continue reading. If the student con-
tinually ignores the prompt, stop and clarify the purpose of the prompt.
• This assessment may reveal that the student has both a self-monitoring
and a decoding issue. Unless the student corrects at least 90 % of the errors
made, it is worthwhile to proceed to step 5 in Handout 6.1.

6.4.3 Step 5—Ask: Does the Student Have Acceptable Rate at Any Level Above Grade 1? Do: Examine Results of SLA

Step 5 is for students whose rate and accuracy are below criterion (i.e., rate –, ac-
curacy –), and for students who did not show an improvement in accuracy with Step
4. This step determines if the student has emerging phonics skills or if the reading
breakdown occurs with early literacy skills.
1. Ask: Does the student have acceptable rate and accuracy at some level greater
than grade 1?
2. Do: Examine the results of the SLA
a. If yes, then analyze the errors the student makes while reading (see Step 6).
b. If no, then assess early literacy skills (see Chapter 7, which describes the CBE
Process for assessing early literacy skills).

6.4.4 Step 6—Ask: Are there Patterns to the Student's Reading Errors? Do: Conduct Error Analysis

If the student is able to decode and read at some level greater than grade 1, the next
step is to analyze the types of errors the student makes in an attempt to identify er-
ror patterns. Identifying patterns in errors allows for targeted reading instruction on
particular word types or skills.
In an error analysis: (a) determine if the errors violate the meaning of the passage
(and subsequently, if the student corrects those errors), (b) determine the general
reading errors made, and (c) identify the prevalence and types of decoding errors
made. The SLA (and self-monitoring assessment if applicable) may have supplied
all the information required for error analysis. Instructions for Step 6 are described
next and provided in Handout 6.4.
Collect an Error Sample  Follow these steps to collect or add to the error sample.
1. Identify a grade level in which the student reads between 80 % and 85 % accu-
racy and use those passages (250+ words). The goal is to generate errors for
analysis.
a. Use a student passage and an examiner passage.
b. Error samples of at least 25 for grade 1 and at least 50 for grades 2 and above,
or as many as 100 errors have been recommended (Howell and Nolet 2000).
The sample must be sufficient to identify existing error patterns.
c. When collecting an error sample, both the number and type of errors are
informative. Errors recorded can include omissions, insertions, skipped lines,
hesitations, meaning violation errors without self-correcting, and decoding
errors.
2. Have the student read aloud and record the errors using the codes listed in
Table 6.3, and write the errors substituted for the actual words.

3. Ask: Are there patterns to the student’s reading errors? There are three questions
in analyzing errors.
a. Do the errors violate the meaning of the passage and if so, are they
self-corrected?
b. What types of general reading errors are made?
c. What types of decoding errors are made?
4. Meaning Violation Errors. First determine if the student’s errors violate the
meaning of the passage. Consider all of the errors for this analysis. Tally and
code the meaning violation errors using Handouts 6.7 and 6.8.
a. Write each error and code it under the appropriate column with an “X” using
the coding sheet in Handout 6.7 (Table 6.7.1).
b. Determine if each error violates the meaning of the text, does not violate
the meaning, or if meaning violation cannot be determined. Also mark
whether or not the error was self-corrected. This self-correction also is known
as comprehension self-monitoring since it contributes information about
comprehension.
5. Tally the frequency of errors and calculate the percentages for each type on Hand-
out 6.7. Then write the totals on the Tally Sheet in Handout 6.8 (Table 6.8.1); a brief
tally sketch follows these steps.
a. Ask: Is the student self-correcting errors, particularly those that violate the
meaning of the text?
b. The results will provide information about how well the student is monitoring
the meaning of the passage. For example, a student who makes errors but self-
corrects likely is monitoring the meaning more than a student who is unaware
errors do not make sense in the passage.
6. General Reading Errors. Next determine the frequency of each type of general
reading error that the student makes. General reading errors include whether or
not the substitution was a real word, self-corrects, hesitations, insertions, omis-
sions, and repetitions.
a. Write each error that the student makes on Handout 6.7 (Table 6.7.2). Write
the actual word under the “actual word” column and then the error under the
column “read word.”
b. Code the errors by marking an “X” under the appropriate column. Then write
the totals in Handout 6.8 (Table 6.8.2). Also make note of qualitative errors,
such as prosody and phrasing.
c. Review the results of the general reading errors. If decoding errors are a prev-
alent type of error, analyze the decoding errors.
7. Decoding Errors, Ask: “Do decoding errors make up a majority of the errors
made?”
a. If yes, code each decoding error using the coding sheet in Handout  6.7
(Table 6.7.3).
b. Write the actual word and the misread word under the respective columns in
Table 6.7.3. Then put an “X” under the appropriate column for that decoding
error. A decoding error may be more than one type of error within Table 6.7.3.
Table 6.7.3 illustrates some examples.

c. After coding each error, tally up the totals on the Tally Table in Handout 6.8
(Table 6.8.3).
8. Review Errors. Review the tallies for each error category in Handout 6.8. Ask:
Are there patterns evident in the student’s reading errors?
a. If the answer is “yes,” the recommendation is “Teach: Targeted Instruction,”
targeting the student’s errors (see Handouts 6.14 and 6.15).
b. If sight word errors emerge as a pattern, then go to Step 7.
c. If the answer is "no," then follow the teaching recommendation "Teach: General
Reading Skills" for general reading instruction (see Handout 6.16).
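The tallying and percentage calculations in steps 5 through 8 can be sketched in a few lines. The error sample below is invented and the category labels simply echo Handout 6.8; the snippet only shows how a prevalent error type stands out once percentages are computed.

```python
# Illustrative tally for Step 6: count coded error types from an error sample
# and report each type's share of the total errors.

from collections import Counter

coded_errors = [                    # one coded category per recorded error (invented)
    "suffix", "suffix", "short vowel", "sight word", "suffix",
    "consonant blend", "suffix", "sight word", "short vowel", "suffix",
]

counts = Counter(coded_errors)
total = sum(counts.values())
for error_type, n in counts.most_common():
    print(f"{error_type:<16} {n:>2}  ({n / total:.0%} of errors)")
# A category with a clearly larger share than the rest (here, suffixes at 50 %)
# suggests a pattern to target with "Teach: Targeted Instruction".
```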

Things to Consider
• It should be fairly easy to obtain a sufficient sample of errors. For example,
if a student reads a 250-word passage with 80 % accuracy, the evaluator will
obtain a sample of about 50 errors in a matter of minutes with one passage.
• It is suggested that assessment of sight words be considered even if no pat-
tern emerges for the decoding errors.
• More than one error type. It is possible that a reading error meets more
than one category of word type. For example, a student may read “brother”
as “brothers.” This can be counted as both a decoding-suffix error (reading
the word as plural) and as an insertion error (inserting an “s”).
− Additionally, a decoding error may be classified under more than one
word type because it may be difficult to discern exactly the type of
decoding error. For example, reading the word “brotan” for the word
“brothers” could be both a consonant blend error (misreading the “th”
blend) and a suffix error (leaving off the plural “s”). There is a level of
interpretation here, but keep in mind the goal is to pinpoint consistent
errors. Overcategorizing errors would be better for finding error pat-
terns than undercategorizing.

6.4.5 Step 7—Ask: Are Sight Words a Concern? Do: Assess Sight Words and/or Vocabulary

After conducting Step 6, the evaluator may wish to examine the student’s knowl-
edge of sight words, particularly if sight words were a consistent error type identi-
fied in the error analysis results (Step 6).
1. Administer a sight word list. For example, the Dolch word list is provided in
Handout 6.9.
2. Tally the percentage of sight words that the student read correctly.

3. Ask, “Are sight words a concern?”


a. Determine if the student has a deficit in sight word decoding. The results
should be used to guide the focus of instruction. If sight words emerge as a
concern (either from the error analysis or administering a sight word list),
instruction should focus on teaching those missing sight words. Follow the
guidelines of “Teach: Targeted Instruction.”
b. You may also wish to use your school’s curriculum to determine an appropri-
ate list of sight words to use for this assessment. Lists of sight words also are
available at: www.dolchword.com.

Things to Consider
• When suggesting sight words as a focus of instruction, practicing in
isolation and then incorporating them into connected text is a thorough
approach.

6.5 Plan Implementation

The Problem Analysis phase results in a clear understanding of why the problem is
occurring. Next, in the Plan Implementation phase, three tasks are accomplished: (a)
the design and implementation of an intervention plan that matches student needs,
(b) goal setting, and (c) identification of ways to measure progress (i.e., intervention
effectiveness) and fidelity of intervention implementation.
In this section, four general instructional foci (labeled as “Teach”) are described
that will address student needs for four possible outcomes of the CBE Process. We
then list specific instructional strategies in Handouts 6.10–6.16. There are numer-
ous interventions and specific instructional strategies to support reading needs, and
listing all of them is beyond the scope of this book. The key to selecting a strategy is
to ensure that (a) it is evidence-based, and (b) formative assessment is used to deter-
mine whether it benefits students. This section will describe the overall instructional
focus for a need identified in the CBE Process and then provide specific strate-
gies in the Handouts. Evaluators are encouraged to explore other resources to gain
additional instructional strategies. A list of the instructional strategies discussed
next is provided in Table 6.4, and resources for reading decoding are presented in
Table 6.5.

6.5.1 Teach: Accuracy and Self-Monitoring

This strategy targets students whose rate is at or above criterion, and accuracy is be-
low criterion (i.e., rate +, accuracy –). These readers have the decoding skills to read
accurately, but do not employ them consistently while reading connected text.

Table 6.4   Instructional strategies


Strategy Focus of strategy
Cued self-monitoring Monitoring and correction of errors
Goal-setting for accuracy Monitoring and correction of errors
Making metacognition explicit Comprehension and monitoring
Repeated readings Fluency at passage level
Partner reading Fluency at passage level
Chunking Fluency at the phrase and sentence level
Word drill Error correction
Overcorrection Error correction
DISSECT strategy Word analysis and structures of words
Word building Decoding rime and ending segments of words (can decode
initial phoneme, but struggles with rest)
Direct phonics instruction General reading issues
Rate of opportunities to respond General reading issues: low rates of responding, lower
intensity of instruction
Group size and instructional minutes General reading issues: lower intensity of instruction

Table 6.5   List of resources for instructional strategies


Resource Location/Publisher
Intervention Central www.interventioncentral.org
Effective School Interventions Rathvon (2008). Guilford Press
Research-based Methods of Reading Vaughn and Linan-Thompson (2004). Association
Instruction, Grades K-3 for Supervision & Curriculum Development
Intensive Interventions for Students Strug- Vaughn et al. (2012). Center on Instruction. www.
gling in Reading and Mathematics: A centeroninstruction.org
Practice Guide
RTI in the Classroom: Guidelines and Brown-Chidsey et al. (2009). Guilford Press
Recipes for Success
The ABCs of CBM Hosp et al. (2006). Guilford Press
Intervention Central Sight Word Generator http://www.interventioncentral.org/tools/
wordlist-fluency-generator

The reader does not gain meaning from the passage. The instructional focus is teaching
the reader to actively engage with and derive meaning from the text. The student
will need guided practice and instruction initially, and then the scaffolds gradually
can be faded. Three strategies are described.
Cued Self-Monitoring  The first strategy described is to provide cueing when a
student makes an error, much like during the self-monitoring assessment. During
guided or paired reading, the student reads aloud and the teacher monitors the stu-
dent’s reading. Each time the student misreads a word, the teacher provides a cue
(e.g., pencil tap, clicker) and the student is to then stop, find the error, fix it by read-
ing the word correctly, and then continue reading. As the student develops the skill
to self-correct errors independently, the cueing can be faded. Instead of prompting
the student after each error, the prompt can be provided when the student finishes
the sentence and then the paragraph. Eventually, the cue will be completely elimi-

nated, and the student will rely on self-monitoring by asking questions (“Did that
make sense to me? Do I think I misread any words?”). This strategy is presented in
Handout 6.10.
This cueing procedure can be combined with an overcorrection or positive prac-
tice (PP) procedure in which the student is provided numerous opportunities to
practice the correct word after each misread word (“Please read that word 3 times”)
and then instructed to start over at the beginning of the sentence (“Now go back and
reread the sentence”). This is an effective procedure that is superior to simply sup-
plying the correct word or having the student read the correct word one time (Singh
1990; Singh and Singh 1986). This cueing procedure can also be used by parents or
peers in place of the teacher, as long as the person providing the cueing can be
taught the procedure and can recognize the errors.
Considering the IH, the student will require guided practice and immediate feed-
back until self-monitoring is automatic. As the student develops the skill, the feed-
back and the focus may shift to building fluency (Burns et al. 2012). It is a normal
progression in the IH for a student’s rate to decrease while the focus is on accuracy.
After accuracy is built to a specified percentage (e.g., 93 %), then the goal will
change to reflect both accuracy and fluency.
Goal Setting to Improve Accuracy and Paired Reading  This strategy involves
setting a goal for accuracy and having a partner (e.g., peer, teacher, or parent) moni-
tor the student’s reading. While the student reads, the partner follows a scripted
error correction procedure and provides correction as needed. (An example of a
scripted error correction procedure is provided in Handout 6.10). The student reads
a text selection or for a specified time period. When the student is finished, the part-
ner calculates an accuracy rate, compares to the previously set goal, and provides a
reward if the goal is met.
Making Thinking Strategies Concrete and Explicit  The last strategy for this
instructional focus is to model and make explicit metacognitive or self-regulation
skills. Students who struggle with reading are less likely to monitor their compre-
hension of the material they are reading. Direct instruction to monitor the level of
comprehension during reading can have a significant impact (Vaughn et al. 2012).
In fact, Hattie’s (2009) metaanalytical work identified self-vocalization and meta-
cognitive strategies as having effect sizes of 0.67. This strategy may be helpful for
students who have improved error monitoring and can begin to shift their attention
to monitoring understanding of and deriving meaning from text.
Vaughn et  al. (2012) offer some recommendations for teaching metacognitive
strategies. They describe making the teacher’s thinking “visible” to the student by
using think-alouds, or talking out the strategies that are used. They present one
example:
Before I read this text, I see it will be difficult to understand. First, I look for key words.
I see three words in bold that I don’t know, so I write them down to see if I can figure out
what they mean. Second, I look at the title, the heading and the questions at the end of the
text. I think about what this text is going to be about, and I try to make connections while
reading. Third, while I read, I stop to see whether I have learned any information to help me
answer the questions at the end of the text. (p. 14.)

Additionally, Vaughn et al. (2012) describe monitoring students’ reading and help-
ing them think aloud if they misread a word. They also describe teaching students
to identify breakdowns in their reading and developing ways to fix them, such as
asking questions while reading (e.g., “What do you do when you don’t know how
to read a word? Are there any words or ideas that you did not understand?”) Ad-
ditionally, having students visualize the story, underline elements, take notes while
reading important elements, or actively think about the author’s point of view can
increase self-monitoring and comprehension. Rubrics or key questions such as: (a)
Were there any words that did not make sense? If so, reread and use strategies to
read them accurately; (b) Were there sections of text that did not make sense? If so,
reread and try to figure out the author’s meaning; (c) What strategies can I use to
make sense of the text? can guide reading and improve comprehension. Once stu-
dents actively monitor their accuracy, the focus can shift to actively comprehending
the content.

6.5.2 Teach: Fluency

For students below the criterion for rate, and at or above criterion for accuracy, the
teaching recommendation is to focus on building fluency (rate –, accuracy +). It is
important to clarify that reading fluency is not speed reading or reading as fast as
possible. Fluency is multidimensional and consists of rate, accuracy, and prosody
(Musti-Roo et  al. 2009). In fact, students who are able to read with appropriate
“prosodic markings” divide words into meaningful phrases and have higher com-
prehension of text than students that do not (Therrien 2004). Fluency is about effort-
less reading that is efficient, which in turn, allows students to devote their working
memory to comprehending the text. Effective fluency instruction focuses as much
on the prosodic features of text as it does on the rate and accuracy of reading (Kuhn
and Stahl 2003).
Generally speaking, students in the fluency stage of the IH need practice and
repetition with corrective feedback, goal setting, and use of performance contingen-
cies (Burns et al. 2012). Instruction centers on building fluency at the letter, word,
sentence, paragraph, and passage level. Numerous interventions and instructional
approaches can be used to build fluency. Choral reading, partner reading, chunking,
setting goals for rate, previewing passages, and reader’s performance (where stu-
dents read roles from a play that they have rehearsed) are some of these strategies
(Rathvon 2008; Vaughn and Linan-Thompson 2004). We describe some of the more
researched interventions: repeated readings (RR), partner reading, and chunking.
Repeated Readings  One effective intervention for building fluency is the repeated
reading strategy, which is defined as reading and rereading a passage or section of
text until a sufficient level of fluency is reached (Chard et al. 2002; Musti-Roo et al.
2009). The steps of RR are presented in Handout 6.11 (a brief tracking sketch follows the steps below). RR consists of:
1. Identifying a target student and a tutor (can be a teacher or peer).
2. Selecting a grade-level passage in which students read with at least 93 %
accuracy.

3. Selecting a daily goal for rate (this can be based on benchmark goals, self-refer-
enced goals, or normative data).
4. Having the target student read for 1 minute while the tutor listens and marks errors.
5. After the 1-minute reading, the tutor provides error correction.
6. The target student rereads the passage, attempting to read further than before.
This is repeated for a total of four readings.
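A hypothetical session log (the goal and scores below are invented) shows how a tutor might track the four readings against the daily rate goal.

```python
# Hypothetical repeated-readings log: check each 1-minute reading against the
# daily goal for rate (words read correct, WRC).

daily_goal_wrc = 70                  # invented daily goal
readings_wrc = [58, 63, 68, 74]      # WRC for the four 1-minute readings (invented)

for i, wrc in enumerate(readings_wrc, start=1):
    status = "met goal" if wrc >= daily_goal_wrc else "below goal"
    print(f"Reading {i}: {wrc} WRC ({status})")

if readings_wrc[-1] >= daily_goal_wrc:
    print("Goal reached; consider progressively more difficult material.")
```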
Variations on this procedure include partner reading, wherein the target student and
tutor take turns reading sections of the text before the student's 1-minute reading,
and providing reinforcement (e.g., praise, incentives) based on the student im-
proving his or her rate each time (Burns and Parker, n. d.; Musti-Roo et al. 2009). Difficult
words can be previewed and reading the passage can be modeled for the target stu-
dent prior to the 1-minute timing (Lo et al. 2011). Cueing can be provided to focus
students explicitly on reading rate (e.g., “Try to read this as quickly as you can.”),
on reading comprehension (e.g., “Do your best reading and try to really understand
the passage.”), or on both (Therrien 2004).
RR is effective at improving both the fluency rate and reading comprehension
scores of students, as Therrien (2004) found that RR is associated with an effect size
of 0.76 for improving fluency and an effect size of 0.48 for comprehension. The
most effective components of RR are:
• Adult delivered (vs peer delivered)
• Cueing for both fluency and comprehension (vs cueing for either fluency or
comprehension alone)
• Repeating the reading four times (vs repeating two to three times or more than
four times)
• Using corrective feedback on word errors (vs no corrective feedback)
• Setting a performance criterion (having students read until they reach a level of
correct words per minute or reading a passage within a certain time limit)
• Providing a teacher model (vs no model; tape- or computer-mediated models are
more effective than no model, but less effective than a teacher model)
• Providing progressively more difficult material once a performance criterion is
met (Chard et al. 2002; Therrien 2004).
Listening Preview with Partner Reading  Partner reading combines elements of
RR and listening preview to improve fluency. This is a helpful intervention when try-
ing to accommodate groups of students, as it is a peer-delivered strategy. Although
RR is less effective when delivered by a peer compared to an adult, this interven-
tion represents a more feasible and less resource-intensive option (i.e., requires less
teacher time and direct 1:1 instruction). Rathvon (2008) offers a description of part-
ner reading, which is presented in Handout 6.12.
Chunking  Chunking is a strategy that is helpful for building fluency at a phrase
or sentence level. The text or passage is divided into prosodic phrases with
slashes. Students then read the text, following the intonation indicated by the
slashes. Rasinski (1994) reports that this “phrase-cued” strategy has led to positive
increases in comprehension, word recognition, and rate. LeVasseur et  al. (2008)
compared three RR approaches with a group of second graders: (a) RR with phrase-

cued text, (b) RR with standard text, and (c) RR of difficult words (word lists). They
found that the RR with either the standard or phrase-cued text resulted in greater
gains in rate compared to the RR of word lists. However, the RR with phrase-cued
text led to the most gains in prosody compared to the other two conditions. Hand-
out 6.13 describes the chunking strategy.

6.5.3 Teach: Targeted Instruction to Correct Errors

The instructional focus here is to correct errors. Errors are learned and once learned,
they persist. Targeted, focused instruction and repetition is needed to teach the cor-
rect word (Reitsma 1983).
To correct clear patterns of errors, identify the errors and provide extensive model-
ing and practice of the correct word. Students making errors are in the acquisition
stage and therefore need modeling of the skill, prompting to ensure its accurate
use, and immediate performance feedback (Burns et al. 2012). It may be useful to
first build accuracy with the correct word(s) in isolation and then build accuracy
in connected text.
Error correction strategies that can be implemented with small groups (word
drill and positive practice (PP)) and error correction strategies that can be infused into
various instructional formats (the DISSECT strategy and word building) are presented next.
Word Drill  When an error is made, several error correction procedures can be used
including (a) word supply, (“That word is ____. What word?”), (b) word-analysis
(“Look at the word. What sound does ____ make? Okay, sound it out. Say it with
me”), or (c) overcorrection (“That word is ___. Say it three times.”). The student
can also be asked to repeat the sentence with any of the aforementioned techniques
(sentence repeat; “Okay, now go back to the beginning of the sentence and reread
it.”). Word drill, which is described in Handout 6.14, combines several of the error
correction procedures and is relatively more effective than word supply or sentence
repeat alone (Jenkins and Larson 1979; Jenkins et al. 1983).
Positive Practice  Positive Practice (PP), also referred to as overcorrection, is a
strategy in which the student performs the skill repeatedly in an attempt to “over-
learn” the skill (Singh and Singh 1986). In PP, the student is asked to repeat the cor-
rect word three to five times following an error. Following PP, the student rereads
the sentence containing the misread word for another repetition. Reinforcement in
the form of praise or incentives can be offered contingent on the student performing
the correct skill and following the PP procedures. Singh and Singh (1986) found
that PP plus praise was superior to a drill procedure in correcting reading errors with
four students receiving special education services. This strategy may be useful for
students who can read words accurately in isolation, but struggle with them in con-
nected text. Variations include correcting the student at the end of a sentence instead
of stopping the student immediately to supply the correct word (Meyer 1982; Singh
and Singh 1986; Singh 1990).

DISSECT Strategy  The DISSECT strategy is helpful for older students who strug-
gle with more complex word analysis units and multisyllabic words. It is effective at
improving reading accuracy and comprehension (Lenz and Hughes 1990; Rathvon
2008). This strategy also is useful for vocabulary deficits. The student is taught a
problem-solving approach in which the acronym DISSECT represents each step
in the process (Rathvon 2008). The specific steps are presented as part of Handout
6.15.
Word Building  Word building is useful for students who can decode the initial
sound or phoneme of words, but struggle with the rime or remaining phonemes.
Word building involves using a set of letter cards to teach students how to build
words and analyze the different words created by adding or replacing certain letters.
The strategy is described in Chapter 7 (see Handout 7.24), as it overlaps with early
literacy skills.

6.5.4 Teach: General Reading Instruction

General reading instruction attempts to improve the intensity of instruction by
matching instruction to the student's specific needs. This type of instruction is use-
ful with students for whom no specific error patterns emerged. If the student is
behind in alphabetic principle, the approach is to intensify direct instruction in pho-
nics. If the student has not succeeded with less direct and less explicit instruction,
the shift is to increase direct, teacher-led instruction and increase opportunities to
practice reading skills. For this instructional focus, the goal is to make instruction
more explicit and more intense, and to include more opportunities to practice the
skills. Ask: “When, where and how can the student get more intensive instruction
with more opportunities to practice reading skills?”
Direct Phonics Instruction  Direct instruction increases the explicitness of the
instruction, increases opportunities to respond (OTR), and decreases the group
size. It is a powerful, effective approach to literacy instruction (Carnine et al. 2009;
NICHHD 2000; Watkins and Slocum 2004). There are several programs available
for purchase that teach phonics with a direct instruction approach, but the purchase
of a program is not necessary to implement this strategy. Direct instruction involves
a series of steps that are outlined in Handout 6.16.
Educators can examine a student’s instructional plan and look for opportunities
to make the instruction more explicit and intensive and to provide more time for practice.
With the goal of increasing the quality and amount of phonics instruction the stu-
dent is receiving, several questions can guide planning:
• Is there time devoted to phonics instruction at tier 1?
• In examining the exact minutes of small-group direct instruction at tier 1, can the
student receive more direct small-group instruction in place of independent or
center-based work?
• Is the instructional focus of tier 2 an extension of support with content taught at
tier 1?

• Can tier 3 time be added and linked to instruction at tier 1 and tier 2?
• Are the different levels of instruction the student receives coordinated to ensure
meaningful continuity?
• Do teachers working with the student follow a more explicit, prescriptive format
including explicit modeling, guided practice, independent practice, and immedi-
ate corrective feedback?
• Can the size of any small group be reduced?
• Does the instructional focus match student needs?
Examine Rate of OTR  Increasing the number of OTRs is a way to increase the
intensity of instruction. Increasing OTRs does two things: first, it increases the
student's engagement and attention; second, it provides feedback to the teacher
about how well the student is mastering skills and allows teachers to provide cor-
rective feedback to fix student mistakes.
It is recommended that the evaluator gather baseline OTR data, compare to a
standard, and set a goal to increase the OTRs. Appendix A offers templates to guide
measurement and recording of OTRs. OTR standards have been established by
Haydon et  al. (2010). In whole-group instruction, OTRs should be approximately
4–6 per minute, and for small-group, direct instruction, OTRs should be approxi-
mately 8–12 per minute (see Chapter 10 for more information about OTR standards).
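A quick way to compare baseline data with those standards is sketched below; the observation values are invented, and the thresholds simply restate the Haydon et al. (2010) figures cited above.

```python
# Illustrative OTR check: compute opportunities to respond per minute and
# compare with the whole-group and small-group standards cited above.

def otr_per_minute(opportunities: int, minutes_observed: float) -> float:
    return opportunities / minutes_observed

observed = otr_per_minute(opportunities=45, minutes_observed=15)   # invented data
whole_group_range = (4, 6)    # approx. OTRs per minute, whole-group instruction
small_group_range = (8, 12)   # approx. OTRs per minute, small-group direct instruction

print(f"Observed rate: {observed:.1f} OTRs per minute")
if observed < whole_group_range[0]:
    print("Below the whole-group standard; set a goal to increase OTRs.")
```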
Examine Group Size and Instructional Minutes  The easiest ways to intensify
instruction are to (a) decrease the group size and (b) increase the instructional time,
both in terms of the actual minutes for each instructional session (e.g., increasing
from 30  minutes to 45  minutes) and in frequency of instructional sessions (e.g.,
increasing from 2 days per week to 4 days per week). Guidelines in determining
instructional minutes are described in Chapter 3 (see Table 3.3). Adding more time is
not a guaranteed solution. The instructional plan must be matched to student needs.
The difference between Teach: General Reading Instruction and Teach: Tar-
geted Instruction is the absence or presence of a clear error pattern to target. In the
absence of a clear error pattern, the focus is more balanced and general.

6.6 Plan Evaluation

Having identified an instructional focus and strategy, the plan evaluation phase fo-
cuses on measuring the effectiveness of the strategy. Measurement of fidelity and
measurement of student progress are two critical components of plan evaluation.
Reading CBM  Perhaps the most effective way to measure general reading prog-
ress is to use reading CBM. Reading CBM is an indicator of general reading that is
sensitive to reading improvements over short periods of time, particularly when the
instructional focus is on decoding, accuracy, fluency, and/or rate (Hosp et al. 2006).
Goals are set and progress is measured using CBM on a frequent basis. Reading rate
is measured by counting the number of WRC per minute. Accuracy is measured by
calculating the percentage of WRC.
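One common way to judge weekly CBM scores against a goal is an aim line drawn from baseline to the goal. The sketch below uses that convention purely as an illustration, with invented scores; it is not the only defensible decision rule for judging progress.

```python
# Minimal progress-monitoring sketch: compare weekly reading CBM scores (WRC)
# with an aim line drawn from baseline to the goal (all values invented).

baseline_wrc, goal_wrc, weeks_to_goal = 42, 72, 15
weekly_scores = [44, 47, 45, 51, 53]

expected_gain_per_week = (goal_wrc - baseline_wrc) / weeks_to_goal
for week, score in enumerate(weekly_scores, start=1):
    aim = baseline_wrc + expected_gain_per_week * week
    status = "at/above aim line" if score >= aim else "below aim line"
    print(f"Week {week}: {score} WRC (aim {aim:.0f}) - {status}")
```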

Word Lists  Accuracy and decoding progress also can be measured using word
lists. If a student struggles with a particular word type or word part (e.g., suffixes,
silent-e rule), word lists can be created and administered at regular intervals to
determine if the student is improving with that particular skill.
Integrated Data Data can be collected during instructional lessons to measure
progress with the plan. For example, a teacher can track errors made during a lesson
and graph the percentage of errors each day to determine if the student is making
progress. The teacher also may record the number of times the student requires an
error correction during a word drill, or the number of errors or the student’s rate
during partner reading.

6.7 Expanding Your Knowledge and Fine-Tuning

This section describes considerations and ways to expand the use of the CBE Pro-
cess. As evaluators build proficiency and use the CBE Process, they may wish to
tailor it to address deeper content. We describe things to consider and ways to ex-
pand the use of the CBE Process for Decoding.
Flexible Process  Although the CBE Process is presented in a sequential manner,
educators may wish to “jump around” and use various steps of the process at dif-
ferent points in time. CBE is about exploring “hunches” or answering questions
with assessment and data gathering. Thus, we encourage following the data where
it leads. This process involves gathering data, analyzing it, and then gathering more
data in an attempt to answer questions that will lead to a solution. It is a lot of “back
and forth” and is not meant to be completed in one sitting.
Normative vs Benchmark Standards for Rate  In conducting a SLA, evaluators
assess back in subsequent levels of reading material until students can read at a rate
that is at or above the Fall 25th percentile (based on national norms) with at least 95 %
accuracy. Meeting those two criteria for a given grade level places a student within the
instructional range for that grade level. However, educators may wish to use a bench-
mark criterion instead of a normative criterion. Although the benchmark will generate
a higher rate standard, it is a predictive standard. Recall from Chapter 4 that students
who reach benchmark scores have the odds in their favor of meeting the standard on a high-
stakes assessment (Good and Kaminski 2011; McGlinchey and Hixson 2004).
Pinpointing Instructional Level  The criterion for an instructional level is based
on the Fall 25th percentile national norm. The highest grade-level material in which
a student performs at rate (at least the Fall 25th percentile) and with at least 95 %
accuracy is considered the instructional level.
Reward-Based Assessment for Step 2  A reward-based assessment can help deter-
mine if the student’s low performance on the SLA is due to a lack of skill or due to
a lack of motivation. This process is referred to as a “can’t do/won’t do” assessment
(VanDerHeyden and Witt 2008) and is included in Appendix 6A.

Consideration of Opportunity for Word Types and Errors for Step 6  It may be
helpful to know not only the errors made by the student, but also the opportunity
to read certain types of words. This would take more planning and analysis of a
passage, but there is benefit in knowing that the student made a certain percentage
of errors relative to exposure to a certain word type (Howell and Nolet 2000). For
example, there is a difference in the certainty of the problem for a student whose
errors with suffixes are 20 % when they have been exposed to 5 words with suf-
fixes versus exposure to 30 words with suffixes. The coding table in Handout 6.8
includes columns for both opportunity for a particular word type and the percentage
of errors relative to opportunity.
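A worked version of that example, with the implied error counts filled in, makes the point concrete.

```python
# Worked version of the example above: the same 20 % error rate is more
# convincing evidence of a suffix problem with 30 exposures than with 5.

def errors_pct_by_opportunity(errors: int, opportunities: int) -> float:
    return errors / opportunities * 100

print(errors_pct_by_opportunity(errors=1, opportunities=5))    # 20.0
print(errors_pct_by_opportunity(errors=6, opportunities=30))   # 20.0
# Both equal 20 %, but six errors across 30 suffix words give the evaluator far
# more certainty that a suffix pattern exists than one error across 5 words.
```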
An Alternative to Step 4 for Assessing Ability to Decode Errors  As an alterna-
tive to the self-monitoring assessment, there are two options.
1. Reading in isolation. You may wish to pull the errors from the SLA, list the
errors on a separate sheet of paper, and ask the student to read the words in isola-
tion. It also works to highlight the words on the student’s copy of the passage
instead of listing the words on a separate sheet of paper and ask the student to
read those highlighted words.
2. Reading within context. You may ask the student to reread the word within the
story by underlining the sentence. Combined with “reading in isolation,” this
strategy can provide information about whether the student’s skill changes from
reading the word in isolation, to reading the word within context.
Adjust Error Categories as Needed for Error Analysis  You may wish to add or
ignore certain error categories based on the student’s grade level, curriculum, and
instructional focus.
Further Verify Error Types  There are three ways in which the examiner can fur-
ther verify the types of errors made. Once an initial pattern of errors is identified, it
may be worthwhile to further examine them.
1. List of word types. The evaluator may want to verify the student's difficulty
with a particular word type by providing the student with a list of at least 10
words of that type to read in isolation. If the student struggles to decode at least
90 % of the words correctly, it is likely instruction should target that particular
word type.
2. Conduct a brief experiment. To further determine if a student struggles with a
particular word type and to verify the results of Step 6, the evaluator can conduct
a brief experiment. Point out the error the student makes and provide a mini-les-
son on correcting the error. Follow the “model-lead-test” format and determine if
the student “responds” to the instruction. If the student responds positively (i.e.,
performance improves), this approach should be incorporated into the instruc-
tional plan.
3. Use diagnostic surveys. You may wish to administer diagnostic reading surveys,
such as the Diagnostic Decoding Surveys offered from Really Great Reading
(www.reallygreatreading.com). The goal is to determine if there are patterns to
the student’s errors, so use available tools and analysis of oral reading to deter-
mine errors.

6.8 Chapter Summary

This chapter outlined the CBE Process for Decoding and is structured around the
four phases of the process within which there are a series of steps and tasks. The
CBE Process for Decoding begins with an SLA with reading CBM, followed by
working through a series of tasks and questions that examine the accuracy and rate
of the student’s reading. These data combined with additional assessment activities
inform instructional recommendations. The plan is evaluated with progress moni-
toring and fidelity of implementation data.
Handout 6.1  Curriculum-Based Evaluation Process in Decoding Flowchart
Curriculum-Based Evaluation: Decoding

PROBLEM IDENTIFICATION
1. Ask: Is there a problem? → Do: Initial identification of problem
2. Ask: Does the problem warrant further investigation? → Do: Conduct Survey-Level Assessment (Oral Reading Fluency)

PROBLEM ANALYSIS
3. Ask: What is the student's accuracy and rate at grade level? → Do: Examine rate and accuracy with grade-level material
• Rate +, Accuracy +: Do: Assess Comprehension (see Chapter 8)
• Rate –, Accuracy +: Teach: Fluency
• Rate +, Accuracy –: 4. Ask: Can the student self-correct errors? → Do: Assess self-monitoring skills
  − Yes: Teach: Self-Monitoring
  − No: Do: Go to "Rate –, Accuracy –"
• Rate –, Accuracy –: 5. Ask: Does the student have acceptable rate above grade 1? → Do: Examine results of Survey-Level Assessment
  − No: Do: Assess Early Literacy Skills (see Chapter 7)
  − Yes: 6. Ask: Are there patterns to the student's reading errors? → Do: Conduct Error Analysis
    − Yes: Teach: Targeted Instruction
    − No: Teach: General Reading Skills
    7. Ask: Are sight words a concern? → Do: Assess sight words
    − Yes: Teach: Targeted Instruction

PLAN IMPLEMENTATION
Teach: Self-Monitoring | Teach: Fluency | Teach: Targeted Instruction | Teach: General Reading Instruction

PLAN EVALUATION
Monitor Effectiveness | Monitor Fidelity

Note: + = at criterion, – = below criterion

Handout 6.2  Survey-Level Assessment Instructions 


Purpose: To determine existence and severity of a reading problem.
Materials Needed:
• Writing tool, Timer
• Handout 6.5 to record scores
• Three passages for each grade level that will be assessed
− Student copies and Evaluator copies
Directions:
1. Administer three 1-minute oral reading fluency probes at the student’s grade
level using standardized procedures.
a. Place the student copy of the passage in front of the student. Say “When I
say ‘Begin,’ start reading aloud at the top of the page. Read across the page
(demonstrate by pointing across the page). Try to read each word. If you
come to a word you don’t know, I’ll tell it to you. Be sure to do your best
reading. Are there any questions? Begin.”
b. Start timing when the student says the first word. If the student does not say
the first word within 3 seconds, supply the first word and mark it as incorrect
(3-seconds rule).
c. Follow along and record errors using a slash (/). Supply the word if the student
does not say the word within 3 seconds (see Good and Kaminski 2011; Hosp
et al. 2006; and Shinn and Shinn 2002 for more information about CBM).
d. After 1 minute, say “Stop,” and mark a “]” to indicate where the student stopped.
e. When administering the other passages, use abbreviated directions. Say
“When I say ‘Begin,’ start reading aloud at the top of the page. Begin.”
2. Score the passages using procedures outlined by CBM procedures and by con-
sulting Table 6.3. Use the formulas below to calculate accuracy and rate. Identify
the median (the middle) score for both rate and accuracy and record the results
on Handout 6.5.

Interpretation Guidelines: 
3. Ask: Does the issue warrant further consideration?
− If the student is performing at criterion for accuracy (≥ 95 %) and rate (≥ Fall
25th percentile), at expected grade level, then you are finished with Decoding
CBE and can examine reading comprehension (see Chapter 8).
− If the student is not performing at criterion for either accuracy or rate at grade
level, proceed to Problem Analysis and examine the student’s rate and accu-
racy to determine further steps.

Formula for Calculating Rate:

Total Words Read − Errors = Rate (WRC)

Formula for Calculating Accuracy:

(Words Read Correct [WRC] ÷ Total Words Read) × 100 = Percentage Accuracy

Note: The reading CBM directions are reprinted with permission. 2012 Copyright
by Pearson Education Inc.
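To show how the formulas above come together when scoring the three survey-level passages, here is an illustrative calculation with invented scores.

```python
# Illustrative scoring of three survey-level passages using the rate and
# accuracy formulas above, taking the median of each (scores invented).

from statistics import median

passages = [(92, 8), (88, 5), (97, 10)]   # (total words read, errors) per probe

rates = [total - errors for total, errors in passages]                      # WRC
accuracies = [(total - errors) / total * 100 for total, errors in passages]

print("Median rate (WRC):", median(rates))
print(f"Median accuracy: {median(accuracies):.1f}%")
# Record the two medians on Handout 6.5 and compare them with the criteria.
```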

Handout 6.3  Self-Monitoring Assessment Instructions 


Purpose: To determine if the student has the decoding skills needed to self-correct
errors.
Materials Needed:
• Writing tool
• Clicker or additional pen or pencil for tapping
• Handout 6.6
• Copies of reading passages in which student reads at criterion for rate, but below
criterion for accuracy (i.e., less than 95 %)
− Student copies and Evaluator copies
Directions:
1. Gather passages from a grade level in which the student reads at criterion for
rate, but is below criterion for accuracy (i.e., less than 95 %).
2. Provide a prompt to the student. Say, “I want you to read this passage aloud.
Take your time and read as accurately and carefully as you can. Focus on
reading your best, not the fastest. Are you ready? (pause) Begin.”
3. Determine if the prompt improved the student’s reading accuracy, rate, and percent-
age of errors corrected. Use Handout 6.6 to calculate and record scores.
− If yes, consider the use of incentives and prompts for the reading. Also con-
sider continuing with Step 4.
− If no, proceed to the next step.
e. Using new copies of reading passages, say to student, “Please read this
aloud. This may be difficult for you, but please do your best reading. I
am not timing you, but if you make a mistake, I will (tap this pencil, click
this clicker). That is your clue that you made a mistake. I want you to
find the mistake and fix it. Remember, find it and fix it. What will you
do?" (Student indicates understanding of procedure).
4. Have the student read aloud and provide a cue each time a word is misread.
− As the student reads and makes a mistake, mark each error with a “slash”
and write down the error that is made above the misread word. Provide the
prompt, and then mark a “slash” next to the error to indicate that the prompt
was given. Write down the word the student says following the prompt (see
Fig. 6.5).
Interpretation Guidelines:
5. Ask: Can the student self-correct errors? Use 90 % as a general guideline.
a. Determine the number of errors made and the percentage of errors corrected.
Use Handout 6.6 to record your results.
b. Take the total errors corrected divided by the total errors made and multiply
by 100 to get the percentage of errors corrected. (See following formula.)

(Total Errors Corrected ÷ Total Errors Made) × 100 = Percentage of Errors Corrected

c. If the answer to the question “Can the student self-correct at least 90 % of
errors?” is yes, then the student likely has a self-monitoring issue. Suggest the
teaching strategy "Teach: Self-Monitoring."
d. If the answer to the question is "no," then proceed to Step 5 in Handout 6.1.

Handout 6.4  Error Analysis Instructions (Decoding Errors) 


Materials Needed:
• Writing tool
• Handouts 6.7 and 6.8
• Reading material in which the student reads with between 80 % and 85 % accuracy
− Evaluator copies and student copies
Directions:
1. Identify a grade level from which the student reads with between 80 % and 85 %
accuracy and gather passages from that grade level. Select passages that are
lengthy (250 + words) in order to have an adequate amount of material.
2. Instruct the student to read (untimed) and follow along, recording errors. Record
general reading and decoding errors using the codes listed in Table 6.3.
3. Obtain a sample size of at least 25 words for grade 1 and 50 for grades 2 and up.
Interpretation Guidelines
4. Ask: Are there patterns to the student’s reading errors? There are three questions
to ask in analyzing the errors.
a. Do the errors violate the meaning of the passage? If so, are they self-corrected?
b. What types of general reading errors are made?
c. What types of decoding errors are made?
5. Meaning Violation Errors.
a. Write each error and code it under the appropriate column with an “X” using
the coding sheet in Handout 6.7 (Table 6.7.1).
b. Tally the frequency of errors and then calculate the percentages for each type.
Write the totals on the tally sheet in Handout 6.8 (Table 6.8.1).
c. Answer the question, Is the student self-correcting errors, particularly ones
that violate the meaning of the text?
6. General Reading Errors.
a. Code each error that the student makes using Table  6.7.2 in Handout  6.7.
Write the totals on Table 6.8.2 in Handout 6.8. Table 6.3 and Figs. 6.4 and 6.5
provide examples of ways to record errors.
7. Decoding Errors. Answer the question, “Do decoding errors make up a signifi-
cant majority of the errors made?”
a. If yes, code each decoding error using the coding sheet in Handout  6.7
(Table 6.7.3). Tally the totals on Table 6.8.3 in Handout 6.8.
8. Ask: Are patterns evident in the student’s reading errors?
a. Review the results in Handout 6.8. If a pattern emerges, follow the teaching
recommendation of "Teach: Targeted Instruction" (see Handouts 6.14 and 6.15).
b. If no pattern emerges, follow the teaching recommendation "Teach: General
Reading Skills" (see Handout 6.16).

Handout 6.5  Survey-Level Assessment Recording Sheet


Student Name: ______________________________  Date: __________  Grade: ________

For each level, record scores for Passage 1, Passage 2, and Passage 3, identify the Median, and mark whether the criterion was met. The rate criteria and benchmarks for each level are listed below; the accuracy criterion at every level is 95 %.

Level 8: rate criterion 123, benchmark N/A
Level 7: rate criterion 119, benchmark N/A
Level 6: rate criterion 116, benchmark 107
Level 5: rate criterion 94, benchmark 111
Level 4: rate criterion 84, benchmark 90
Level 3: rate criterion 59, benchmark 70
Level 2: rate criterion 35, benchmark 52
Level 1: rate criterion 19 (Winter 25th benchmark), benchmark 23

Note: Benchmarks are based on Fall DIBELS Next Benchmark Goals (http://dibels.org/papers/DIBELSNextBenchmarkGoals.pdf). First grade is based on a Winter benchmark because ORF is not administered in first grade until the winter.

Expected Instructional Level (grade level): __________
Obtained Instructional Level (meets rate and accuracy): __________
Obtained rate with grade-level material: __________
Expected rate with grade-level material: __________
Subtract obtained rate from expected rate = rate discrepancy: __________
Obtained accuracy with grade-level material: __________
Expected accuracy with grade-level material: 95 %
Subtract obtained accuracy from expected accuracy = accuracy discrepancy: __________
Note: See Handout 6.2 for formulas to calculate rate and accuracy.
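The discrepancy entries at the bottom of the sheet are simple subtractions; the sketch below uses invented scores only to show the direction of the subtraction.

```python
# Illustrative discrepancy calculation for the bottom of Handout 6.5
# (all values invented).

expected_rate = 107          # criterion rate for the expected grade level (invented)
obtained_rate = 62
expected_accuracy = 95.0
obtained_accuracy = 88.0

print("Rate discrepancy:", expected_rate - obtained_rate)               # 45 WRC
print("Accuracy discrepancy:", expected_accuracy - obtained_accuracy)   # 7.0 points
```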

Handout 6.6  Self-Monitoring Assessment Recording Sheet

First Read
Conditions: Read the passage without any specific prompting.
Grade level of reading material: __________
Number of words read correctly: __________
Accuracy: __________
Errors made: __________
Errors corrected: __________
Percentage of errors corrected: __________

Intervention: Prompt
Conditions: Provided a prompt to read accurately.
Grade level of reading material: __________
Number of words read correctly: __________
Accuracy: __________
Errors made: __________
Errors corrected: __________
Percentage of errors corrected: __________

Intervention: Pencil Tap
Conditions: Told to read and prompted to correct errors using a clicker or pencil tap.
Grade level of reading material: __________
Number of words read correctly: __________
Accuracy: __________
Errors made: __________
Errors corrected: __________
Percentage of errors corrected: __________

Meet 90 % criterion? Circle:  Yes   No



Formulas
For rate:
Total Words Read − Errors = Rate (Words Read Correct)

For accuracy:
(Words Read Correct [WRC] ÷ Total Words Read) × 100 = Percentage Accuracy

For percentage of errors corrected:
(Total Errors Corrected ÷ Total Errors Made) × 100 = Percentage of Errors Corrected

Recording Change in Rate for Prompt:
Prompt WRC − First Read WRC = Difference
(Difference ÷ First Read WRC) × 100 = Percentage of Change in WRC

Recording Change in Accuracy for Prompt:
Prompt Accuracy − First Read Accuracy = Difference

Recording Change in Percentage of Errors Corrected for Prompt:
Prompt % of Corrected Errors − First Read % of Corrected Errors = Difference

Recording Change in Accuracy for Self-Monitoring:
Self-Monitoring Accuracy − First Read Accuracy = Difference

Recording Change in Percentage of Errors Corrected for Self-Monitoring:
Self-Monitoring % of Corrected Errors − First Read % of Corrected Errors = Difference
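An illustrative run of the change formulas above, with invented first-read and prompted-read scores:

```python
# Illustrative calculation of the change formulas above, comparing the first
# read with the prompted read (all scores invented).

first_read_wrc, prompt_wrc = 78, 74
first_read_acc, prompt_acc = 89.0, 96.0
first_read_pct_corr, prompt_pct_corr = 20.0, 65.0

wrc_difference = prompt_wrc - first_read_wrc
print("Change in WRC:", wrc_difference)
print(f"Percentage of change in WRC: {wrc_difference / first_read_wrc * 100:.1f}%")
print("Change in accuracy:", prompt_acc - first_read_acc)
print("Change in % of errors corrected:", prompt_pct_corr - first_read_pct_corr)
```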
Handout 6.7  Error Analysis Coding Sheets
Table 6.7.1   Meaning violation coding sheet
Use this sheet to record and analyze meaning violation errors. Place a tally or check mark in the corresponding column. Mark if the error was
Handout

self-corrected. Calculate totals and percentages and then transfer the results to tally table in Handout 6.8

ƌƌŽƌ sŝŽůĂƚĞƐDĞĂŶŝŶŐ ŽĞƐEŽƚsŝŽůĂƚĞŵĞĂŶŝŶŐ ĂŶŶŽƚůĂƐƐŝĨLJ ^ĞůĨͲŽƌƌĞĐƚĞĚ͍


ZĞĂĚ͞ŚŽŵĞ͟ĂƐ͞ŚŽƵƐĞ͟ y EŽ
KŵŝƩĞĚƚŚĞǁŽƌĚ͞ĐŽůůĞĐƟŽŶ͟ y zĞƐ
111

dŽƚĂůƐ
WĞƌĐĞŶƚĂŐĞƐ
Table 6.7.2   General reading errors coding sheet
Use this sheet to record and analyze general reading errors. Place a tally or check mark in the corresponding column. Calculate totals and percentages and
then transfer the results to the tally table in Handout 6.8
Columns: Actual Word | Read Word | Decoding (Real Word) | Decoding (Not Real) | Decoding (Self-Corrected) | Insert (Appropriate) | Insert (Inappropriate) | Omission | Hesitation | Repetition | Punctuation | Self-Corrects

Example entries:
  "master" read as "matter" (Decoding, Real Word)
  "cat" read as "ca" (Decoding, Not Real)
  "home" read as "homes" (Decoding, Real Word; Self-Corrects)
  "end. The" read as "end The" (Punctuation)

Rows for Totals and Percentages appear at the bottom of the sheet.
Table 6.7.3  Decoding errors coding sheet
Use this sheet to record and analyze decoding errors. Place a tally or check mark in the corresponding column. Calculate totals and percentages and then transfer the results to the tally table in Handout 6.8.

Columns: Actual Word | Read Word | Sight Word | Misses Beginning Sound, Prefixes | Misses Ending Sound, Suffixes | Short Vowels | Long Vowels | Silent 'e' Rule | Vowel Teams | Consonant Blends | Vowel-Consonant Blends | R-Controlled Vowels | Contractions | Silent Letters

Examples for each column: do, the, it (sight words); pre, be, post, or any initial sound (beginning sounds and prefixes); ing, s, able, or any final sound (ending sounds and suffixes); 'a' in apple (short vowels); 'ee' in jeep (long vowels); make, ride (silent 'e' rule); oa, ai, oy, ea (vowel teams); sh, wh, ph, th, wn (consonant blends); el, il, al (vowel-consonant blends); er, ar, or (r-controlled vowels); can't, don't (contractions); know (silent letters).

Example entries:
  "master" read as "matter"
  "cat" read as "ca" (misses ending sound)

Rows for Totals and Percentages appear at the bottom of the sheet.

Handout 6.8  Error Analysis Tally Sheet: General Reading Errors

Table 6.8.1   Meaning violation and self-correction of errors tally table

Total Number of Errors to be Analyzed: ________

                               | Violates Meaning | Does Not Violate | Cannot Determine
Frequency of Error             |                  |                  |
Frequency Self-Corrected       |                  |                  |
Percentage of Errors Made      |                  |                  |
Percentage of Errors Corrected |                  |                  |

Note: Consider all of the errors made by the student and determine whether or not they violate meaning.

Table 6.8.2   General reading errors tally table

Type of Error                             | Frequency | Percentage | Example
Decoding Errors:                          |           |            |
  Errors are Real Words                   |           |            | "can't" for "cat"; "the" for "a"
  Errors are Not Real Words               |           |            | "hant" for "have"
  Errors are Self-Corrected               |           |            | (self-corrects an error within 3 seconds)
Insertions:                               |           |            |
  Contextually Appropriate                |           |            | "We still are going…"
  Contextually Inappropriate              |           |            | "We and went to have…"
Fluency Errors:                           |           |            |
  Omissions                               |           |            | "I (went) away…" ("went" omitted)
  Hesitations (3 seconds)                 |           |            | "We… (3 seconds) (exclaimed)…"
  Repetitions (3 times)                   |           |            | "I went to, I went to, I went to…"
  Punctuation: not pausing at punctuation |           |            | "…the end. Then we…" (no pause at period)
  Self-Corrects                           |           |            | (self-corrects an omission)
TOTAL ERRORS                              |           | -          |

Qualitative:                    | Occurred? |
Pauses at end of lines of text  | Yes  No   | Pauses at end of line of text
Poor prosody or intonation      | Yes  No   | Lack of expression
Chunking phrases                | Yes  No   | Reads word by word
Other:                          |           |
Other:                          |           |

Table 6.8.3   Decoding errors tally table

Types of Errors                    | Error Count | % of Total Errors | Opp. | % of Errors by Opp. | Example
Words:                             |             |                   |      |                     |
  Short Vowel Sounds               |             |                   |      |                     | 'a' in apple
  Long Vowel Sounds                |             |                   |      |                     | 'ee' in jeep
  Silent 'e' Sound/CVCe            |             |                   |      |                     | bite, mope, tape
  High-Frequency/Sight             |             |                   |      |                     | do, make, yes, it
  Compound Words                   |             |                   |      |                     | into, football
  Contractions                     |             |                   |      |                     | haven't, can't
  Silent Letters                   |             |                   |      |                     | knit, know
  Polysyllabic Words               |             |                   |      |                     | cucumber, tomorrow
  Double-Consonant Words           |             |                   |      |                     | butter, written
Units:                             |             |                   |      |                     |
  Misses Initial Sound, Prefixes   |             |                   |      |                     | pre, be, post, sub
  Misses Rime (initial sound only) |             |                   |      |                     | "hit" for "help", "was" for "were"
  Misses Final Sound, Suffixes     |             |                   |      |                     | able, ing, s; "works" as "work"
  R-Controlled Vowels              |             |                   |      |                     | er, ir, ar
  Vowel/Consonant Blends           |             |                   |      |                     | al, il, el
  Vowel Teams/Combos               |             |                   |      |                     | ai, ay, ee
  Consonant Combinations           |             |                   |      |                     | sh, kn, ph, th, wh
TOTAL                              |             |                   |      |                     |

Note: Opp = opportunity. Error Count is a synonym for frequency or occurrence of the error.
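The two percentage columns in Table 6.8.3 are simple ratios; the sketch below is an illustrative way to compute them (the function and variable names are mine, not the handout's): each error type's share of the total errors, and each error count divided by the number of opportunities the passage offered for that type.

```python
# Illustrative sketch of the percentages in Table 6.8.3.

def error_percentages(error_counts, opportunities):
    """error_counts and opportunities map error type -> count."""
    total_errors = sum(error_counts.values())
    summary = {}
    for error_type, count in error_counts.items():
        pct_of_total = count / total_errors * 100 if total_errors else 0.0
        opp = opportunities.get(error_type, 0)
        pct_by_opp = count / opp * 100 if opp else 0.0
        summary[error_type] = (pct_of_total, pct_by_opp)
    return summary

counts = {"short vowel sounds": 6, "r-controlled vowels": 3, "silent letters": 1}
opps = {"short vowel sounds": 40, "r-controlled vowels": 12, "silent letters": 5}
for error_type, (of_total, by_opp) in error_percentages(counts, opps).items():
    print(f"{error_type}: {of_total:.0f}% of errors, {by_opp:.0f}% of opportunities")
```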

Handout 6.9  Sight Word Lists


Table 6.9.1   Sight word list by grade level
Preprimer Primer First grade Second grade Third grade
a all under after always why about
and am want again around wish better
away are was an because work bring
big at well any been would carry
blue ate went ask before write clean
can be what as best your cut
come black white by both done
down brown who could buy draw
find but will every call drink
for came with fly cold eight
funny did yes from does fall
go do give don’t far
help eat going fast full
here four had first got
I get has five grow
in good her found hold
is have him gave hot
it he his goes hurt
jump into how green if
little like just its keep
look must know made kind
make new let many laugh
me no live off light
my now may or long
not on of pull much
one our old read myself
play out once right never
red please open sing only
run pretty over sit own
said ran put sleep pick
see ride round tell seven
the saw some their shall
three say stop these show
to she take those six
two so thank upon small
up soon them us start
we that then use ten
where there think very today
yellow they walk wash together
you this were which try
too when warm

Table 6.9.2   Dolch 220 basic words listed by frequency


Words
1–25 26–50 51–75 76–100 101–125 126–150 151–175 176–200 201–220
the look get ride away again black warm wash
to is them into old play white ate show
and her like just by who ten full hot
he there one blue their been does those because
a some this red here may bring done far
I out my from saw stop goes use live
you as would good call off write fast draw
it be me any after never always say clean
of have will about well seven drink light grow
in go yes around think eight once pick best
was we big want ran cold soon hurt upon
said am went don’t let today made pull these
his then are how help fly run cut sing
that little come know make myself gave kind together
she down if right going round open both please
for do now put sleep tell has sit thank
on came long too brown much find which wish
they could no got yellow keep only fall many
but when came take five give us carry shall
had did ask where six work three small laugh
at what very every walk first our under
him so an pretty two try better read
with see over jump or new hold why
up not your green before must buy own
all were its four eat start funny found

Sight Word Tally Sheets 


Directions:
Calculate the percentage of words read correctly and record the totals in Table 6.9.3.
Then write down the specific words to target for instruction in Table 6.9.4.

Table 6.9.3   Percentage of words read correctly by grade level

Grade Level   | Percentage Read Correctly
Pre-Primer    |
Primer        |
First Grade   |
Second Grade  |
Third Grade   |

Table 6.9.4   Words for instructional target


Words to Target for Instruction: ____________________________________________

Handout 6.10  Teach: Self-Monitoring


Targeted Skill:  Accuracy and self-monitoring
Purpose and description:  The purpose is to prompt the student to correct errors that he or she makes while reading and to increase active monitoring of comprehension. Initially, the student is prompted after each error. As the student's accuracy improves, the prompt can be gradually faded and provided on a delay.
Materials:
• Reading passages at the student’s instructional level
− Teacher copies (optional)
• Clicker
• Data-tracking sheet (sticky note, index card, etc.)
Setting:
Directions:
1. Say to the student, “Please read this aloud. If you make a mistake, I will (tap
this pencil, click this clicker). That is your clue that you made a mistake and
I want you to find the mistake, fix it, and then go back to the beginning of the
sentence.”
2. Establish a goal for accuracy for the student. This can be based on the previous
day’s performance or another criterion determined by the teacher.
3. As the student reads aloud, follow along and provide the cue each time the stu-
dent makes an error. If the student does not say the word within 3 seconds, pro-
vide error correction ( “That word is _____. What word?”) and have the student
reread the sentence.
4. Record if the student corrected the error accurately following the cue using a
data-tracking sheet, such as a sticky note, an index card, or on a copy of the
passage.
5. Following the reading, calculate the student’s accuracy and determine if he or
she met the goal. Provide feedback on performance.
6. As the student meets his or her goal (i.e., 3 days in a row), begin to delay the time
in which the prompt is provided. For example, the prompt can be provided at the
end of a sentence in which an error was made instead of after each error. From
there, the prompt can be provided at the end of the paragraph.
Considerations and Modifications:
• Eventually, the prompt can be replaced with self-monitoring questions, such as "Did that make sense?" The teacher can tape a rubric to the student's desk that prompts him or her through a series of questions (e.g., Did that make sense? Did I make any mistakes?).
• This strategy can be delivered in groups by pairing students and having them monitor each other's reading.
• Graphing the student's daily accuracy is a good way to represent progress (a simple tracking sketch follows this handout).
Evidence-Base: Hattie and Timperley 2007; Howell and Nolet 2000
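The goal-and-fade logic in Steps 2, 5, and 6 above can also be tracked electronically. The sketch below is one hypothetical way to log daily accuracy and flag when the prompt can begin to be delayed; the three-days-in-a-row rule comes from Step 6, while the example goal is arbitrary.

```python
# Hypothetical tracker for the self-monitoring strategy: logs daily accuracy,
# checks it against the goal, and flags when the prompt can be faded
# (goal met three days in a row, as suggested in Step 6).

def accuracy_pct(words_read_correct, total_words_read):
    return words_read_correct / total_words_read * 100

def ready_to_fade(daily_accuracies, goal, consecutive_days=3):
    """True if the last `consecutive_days` accuracies all meet the goal."""
    recent = daily_accuracies[-consecutive_days:]
    return len(recent) == consecutive_days and all(a >= goal for a in recent)

log = []
goal = 93  # e.g., based on the previous day's performance (Step 2)
for wrc, total in [(68, 75), (71, 76), (74, 78), (77, 80)]:
    log.append(accuracy_pct(wrc, total))
print([round(a, 1) for a in log])    # [90.7, 93.4, 94.9, 96.2]
print(ready_to_fade(log, goal))      # True -> begin delaying the prompt
```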

Handout 6.11  Teach: Repeated Reading


Targeted Skill:  Fluency with connected text.
Purpose and Description:  To provide extended time and opportunity for the stu-
dent to reread passages or portions of text in order to build fluency. The purpose is
to build the student’s rate, accuracy, and prosody.
Materials
• Passages of text at student’s instructional level (150–250 words)
− Tutor copies of passages (optional)
• Timer
• Reinforcement card (optional)
Setting: One-on-one or with pairs of students
Directions
1. Provide a preview of difficult passage words. Use a format such as, “This word
is ____. What word?”
2. Say, "You are going to read the story. If you get stuck, I will tell you the word so you can keep reading. Do your best reading. Read until I say 'stop.' Ready? Begin." Have the student read the passage for 1 minute. The teacher or tutor follows along and records errors.
3. After the student reads, provide feedback on his or her performance. The tutor shares how many words the student read correctly and provides error correction for each misread word using the format in Step 1. The student is also directed to read the few words before and after the misread word (to capture the phrase) or the entire sentence that contained the error.
4. Say to the student, “I want you to read the story along with me. Use your finger
to follow along and try your best reading with me. Ready? Begin.” The student
reads along with the teacher/tutor, as the teacher/tutor models expressive reading
at a slightly faster pace than what the student read.
5. Say to the student, “I want you to read just like what we’ve practiced. Last time,
you read ____ words. Try to beat your score!” The student rereads the passage
independently again and the teacher or tutor follows along and records errors.
6. Step 5 is repeated twice, resulting in a rereading of the passage a total of four
times by the student.
Considerations and Modifications
• Students can be assigned in pairs for this strategy. Students can take turns read-
ing paragraphs within a passage for a total of 10 minutes. Students provide error
correction with a standard prompt (see Step 1).
− Praise and rewards can be provided based on reaching daily goals. Students
can also have “reward cards” in which the teacher provides tangible reinforce-
ment (sticker, stamp) for following the steps of the strategy.

• Use daily goals, which can be set based on normative or benchmark standards (see DIBELS Next benchmarks; a goal-setting sketch follows Table 6.11.1).
• The “How did I read?” rubric can be used to provide detailed feedback (see
Table 6.11.1).
Evidence-base:  Lo et al. 2011; Musti-Rao et al. 2009

Table 6.11.1   How Did I Read? (Adapted from: Therrien et al. 2012)
Level 4 □ I read most of the story in long, meaningful phrases.
□ I repeated or missed only a few words.
□ I emphasized important phrases or words.
□ I read with expression.
Level 3 □ I read most of the story in three- or four-word phrases.
□ I repeated or missed only a few words.
□ I emphasized important words or phrases.
□ I read some of the story with expression.
Level 2 □ I read mostly in two-word phrases.
□ I repeated or missed a lot of words.
□ I did not emphasize important words or phrases.
□ I did not read with expression.
Level 1 □ I read most of the story word-by-word.
□ I repeated or missed a lot of words.
□ I did not emphasize important words or phrases.
□ I did not read with expression.
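The daily-goal consideration above (set a goal from a benchmark or from the previous best and try to beat it on each reread) can be sketched as follows. This is illustrative only; the five-word increment and the function names are my assumptions, not recommendations from the handout.

```python
# Sketch of one way to set and check a daily repeated-reading goal.

def daily_goal(previous_best_wrc, benchmark_wrc=None, increment=5):
    """Use a benchmark if one applies; otherwise aim a little above yesterday.
    The increment of 5 WRC is an arbitrary illustration."""
    if benchmark_wrc is not None:
        return benchmark_wrc
    return previous_best_wrc + increment

def met_goal(rereads_wrc, goal):
    """rereads_wrc: WRC for each timed reading of the same passage."""
    return max(rereads_wrc) >= goal

goal = daily_goal(previous_best_wrc=62)        # 67
print(goal, met_goal([60, 64, 66, 69], goal))  # 67 True
```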

Handout 6.12  Teach: Listening Previewing with Partner Reading


Targeted Skill:  Fluency with connected text
Purpose: To enhance reading fluency and comprehension by discussing key vocab-
ulary words and providing an opportunity for students to hear what they will read
prior to independent reading.
Materials:
• Stopwatch, one per student pair.
• Reading materials, one set per student.
Setting: Partner groups, whole-class activity
Directions
Preparation
1. Explain to the students they will work in pairs to practice reading. Demonstrate
the Paired Listening Previewing procedures as follows:
a. The more proficient reader reads the first paragraph (or sentence for younger and less skilled students) while the less proficient reader follows along. Then the less proficient reader reads the same paragraph or sentence.
b. As one student reads, the other corrects errors (substitutions, omissions, additions, and hesitations of 3+ seconds). If neither student in the pair knows the word, they raise a hand to ask for help.
2. After listening previewing have the students move to their partner stations. (Use
reading data to create pairs by matching higher performing readers with lower
performing readers.)
3. To ensure higher performing students read first, designate them as the “red team”
and lower performing students as the “blue team” or some other team names.
4. Conduct a class wide practice session while you move around the room provid-
ing encouragement and assistance.
Implementation
5. At the beginning of the instructional period, write 10–12 key words from the reading selection on the board or word wall. Key words are those words that are critical to understanding the assignment and may be difficult to understand or pronounce.
6. Read the first word to the students and have them repeat it chorally. Discuss its meaning and ask questions to determine if they understand it. Repeat for each of the key words.
7. Then read the selection aloud, while students follow along silently. To promote
active attention, instruct students to follow along with a finger under each word
as you read.
8. After you read the selection, have students go to their partner stations and take turns reading the same selection, one paragraph at a time, with the higher performing student reading first. Instruct students to read for 2 minutes each and then switch who is reading.

Considerations and Modifications


• Comprehension questions can be prepared for the passages, and student pairs can collaborate to prepare answers on comprehension worksheets. Review the questions as a whole-class activity and have students correct their partner's papers.
• A different version of previewing involves the teacher reading the story aloud
while students follow along. The teacher can pause and ask students to read the
next word in the passage. Those words can be underlined in the student’s passage
in order to cue them. The teacher models fluency while reading. When students
go into pairs, the teacher monitors fluency during partner reading.
• Students can score accuracy for each other and set daily or weekly goals.
• Points and rewards can be utilized based on students reading with fluency, at-
tending to the task, or meeting accuracy goals.
• Students can be cued to take turns reading by the teacher instead of relying on
individual timers.
Evidence-base: Begeny and Silber 2006; Rathvon 2008

Handout 6.13  Teach: Chunking


Targeted Skill: Fluency at the phrase level
Purpose and description: To build fluency with phrases and develop prosody.
Teachers pair up a more proficient student with a less proficient student and use a
reading passage at the less proficient student’s instructional level.
Materials:
• Passages with slashes between phrases
−  Passages are at the less proficient student’s reading level
Setting: Partners, small-group, or whole-group instruction
Directions
1. The teacher pairs a more proficient student with a less proficient student.
2. Each passage is prepared by placing slashes between two- to five-word phrases
to indicate fluency at the phrase level (e.g., “The little cat/was running/down the
street”). (Double slashes can be used between sentences.)
3. The teacher models fluent reading of the passage to students.
4. Students take turns practicing by reading the phrases in chunks to each other.
Considerations and Modifications
• A variation on this strategy is to write phrases on pieces of paper and have students reorder them into a sentence. They then practice reading the phrases.
Evidence-base: Vaughn and Linan-Thompson 2012

Handout 6.14  Word Drill 


Targeted Skill: Error correction
Purpose and description: To provide immediate corrective feedback to the student
and provide opportunities to practice the accurate word. During reading, a standard
format is used to correct errors made by the student.
Materials:
• Writing tool
• Index cards
• Reading materials
Setting: Small-group or one-on-one when active reading is taking place
Directions
1. If a student misreads a word while reading a passage or list of words, the teacher
provides the correct word using the word supply correction technique (“That
word is _____. What word?”)
2. The student says the corrected word and continues to read. The teacher then writes the misread word on an index card.
3. Following completion of the reading passage or list, the teacher then presents the
student with the deck of cards that contains the misread words.
Drill procedure:
4. Each card is presented to the student and he or she is asked to read the word. If the word is read correctly, the card is placed face down on the table. If the word is misread, the word supply procedure is followed (as described in Step 1). Then that card is placed at the back of the deck.
5. The drill procedure is followed until each word is read correctly. Once all the
words are read correctly, the entire deck is shuffled and the drill procedure is
repeated.
6. Once the student can read all the words/cards correctly without teacher assistance for two consecutive trials, the drill procedure ends.
Considerations and Modifications
• Word drill can be combined with other correction procedures. For example, the
student can be prompted to reread the sentence when he or she makes an error
(sentence repeat), or the student can repeat the correct word three times (overcor-
rection).
• If used during small-group instruction, any error made by the group is written on
a card. The same procedures are followed but the drill procedure ends once the
group can read accurately for two consecutive trials.
• The word supply procedure can be modified during the drill procedure. The teacher can supply the correct word, and the student can repeat it. The teacher then asks, "What word?" and the student repeats the word again.
Evidence Base: Jenkins and Larson 1979; Jenkins et al. 1983; Singh 1990; Singh
and Singh 1986

Handout 6.15  DISSECT Strategy


Targeted Skill: Decoding multisyllabic words and complex word units.
Purpose: To improve word identification skills in content area textbooks with a
strategy for pronouncing and recognizing complex multisyllabic words.
Materials:
• Social studies or science textbooks or other reading material in content areas.
• Posterboard and a sheet of paper for each student (optional) listing common prefixes and suffixes.
• Posterboard and a sheet of paper for each student (optional) listing the DISSECT steps.
• Posterboard and a sheet of paper for each student (optional) listing the Rules of Twos and Threes.
• Classroom dictionaries, one per student.
Setting: Small-group and whole-group
Directions:
Introduction and Training
1. Display the list of DISSECT steps, list of prefixes and suffixes, and chart with
the Rules of Twos and Threes. If desired, give students individual copies of
materials to refer to during class work/homework assignments.
2. Have a discussion about the importance of good reading skills to success in the subject area.
3. Tell students they'll be learning a seven-step strategy to help them read and remember difficult words. Explain that dissect ("to separate into parts") will help them remember the seven steps.
4. Using the DISSECT chart and board, describe and demonstrate the seven strat-
egy steps to use when encountering a difficult word.
5. Discuss situations in which students can apply the strategy (e.g., homework, lei-
sure reading) and the benefits of learning and using the strategy (e.g., improved
grades, more enjoyable reading experiences, greater knowledge of world and
community events, etc.).
6. Include these suggestions in the discussion: (a) the strategy is most effective on reading assignments that follow discussion of the content in class; (b) the first six steps usually will not work on vocabulary words not yet introduced; (c) students should learn the strategy so well that they can complete the first five steps in no more than 10 seconds.
7. Write a multisyllabic word from the current reading assignment on the board and use it to demonstrate the entire strategy, using a think-aloud procedure so students can observe the process.
8. Write other multisyllabic words on the board and select students to demonstrate the strategy. Prompt them to think aloud as they go through the steps and provide support and corrective feedback as needed.

Implementation
1. During social studies or science lessons, review the strategy when introducing new vocabulary. Select students to demonstrate the strategy on several words.
2. Provide time for students to apply the strategy during class assignments. If
desired, divide class into pairs and have them work together to apply the strat-
egy to a section of the text or reading materials while you circulate to provide
assistance.

DISSECT
D—Discover the context. Skip the difficult word, read to the end of the sentence,
and use the meaning of the sentence to make your best guess about a word that fits
in place of the unfamiliar word. If the guessed word does not match the difficult
word, proceed to the next step.
I—Isolate the prefix. Using the list of prefixes, look at the beginning of the word to
see if the first several letters form a prefix that you can pronounce. If so, box it off
by drawing a line between the prefix and the rest of the word.
S—Separate the suffix. Using the list of suffixes, look at the end of the word to see
if the last several letters form a suffix that you can pronounce. If so, box it off by
drawing a line between the suffix and the rest of the word.
S—Say the stem. If you recognize the stem (the part of the word that is left after the prefix and suffix have been boxed off), pronounce the prefix, stem, and suffix together. If you cannot recognize the stem, proceed to the next step.
E—Examine the stem. Using the Rules of Twos and Threes, dissect the stem into
easy-to-pronounce word parts.
C—Check with someone. If you still cannot pronounce the word, ask someone for help. If no one is available, go to the next step.
T—Try the dictionary. Look up the word in the dictionary, use the pronunciation
guide to pronounce the word and read the definition if you do not know the mean-
ing of the word.
Rules of Twos and Threes 
Rule 1
• If a stem or any part of a stem begins with a vowel, separate the first two letters
from the rest of the stem and pronounce them.
• If the stem or any part of the stem begins with a consonant, separate the first
three letters from the rest of the stem and pronounce them.
• Once you have separated the first two or three letters from the stem, apply the
same rules until you reach the end of the stem (example: al/ter/na/tor)
• Pronounce the stem by saying the dissected parts. If you can read the stem, add
the prefix and suffix and reread the entire word. If you cannot use Rule 1, use
Rule 2.

Rule 2
• Isolate the first letter of the stem and try to apply Rule 1 again. Rule 2 is espe-
cially useful when the stem begins with two or three consonants.
Rule 3
• If two different vowels appear together in a word, pronounce both of the vowel
sounds. If that does not sound right, pronounce one vowel sound at a time until
it sounds right. Rule 3 can be used with either Rule 1 or Rule 2.
Evidence Base: Lenz and Hughes 1990; Rathvon 2008

Handout 6.16  General Reading Instruction 


Targeted Skill: Phonics and knowledge of the alphabetic principle
Purpose: The purpose is to improve the student’s decoding skills. By providing
a structured, engaging format, teachers can exercise a high degree of control and
ensure students learn skills and demonstrate them accurately.
Materials
• The materials will vary by the instructional format, but the teacher should be
fluent with the instructional format.
Setting: Small-group or one-on-one
Directions
1. Introduction: Students are provided an overview of the learning objectives for
the day.
2. Review: Previously learned material is reviewed in order to build retention, accu-
racy, and fluency with the material.
3. Modeling: The teacher presents the new material in small, successive steps.
4. Guided practice: Students are provided an opportunity to practice the new skills.
Teachers use immediate error correction formats, praise for accurate demonstra-
tion of the skill, and signaling to prompt students to respond.
5. Independent practice: Students are provided independent practice with the new
skill. The teacher monitors performance, provides praise, and ensures accurate
demonstration of the skill.
6. Closure: A summary of what was learned is provided.
Considerations
• What is the pacing of instruction, as measured by OTRs? Students should have
4–6 OTRs in whole-group and 8–12 in small-group.
• What is the student’s accuracy with responses? With new material (guided prac-
tice), students should be responding with at least 80 % and with practice material,
responses should be at least 93 %.
• Can the group size of instructional plans be reduced and more instructional min-
utes added?
• Does the instructional focus of the student’s instruction match the skill deficits
identified in the student?
Evidence base: Carnine et al. 2009
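The pacing and accuracy considerations above lend themselves to a quick check during an observation. The sketch below is illustrative only: treating the OTR ranges as opportunities to respond per minute is my assumption, while the 80% and 93% accuracy thresholds come from the handout.

```python
# Sketch for checking the Considerations in Handout 6.16. OTR ranges are
# treated as opportunities to respond per minute (an assumption); accuracy
# thresholds (80% guided practice, 93% practice material) are from the text.

OTR_RANGES = {"whole-group": (4, 6), "small-group": (8, 12)}
ACCURACY_THRESHOLDS = {"guided practice": 80, "practice material": 93}

def check_pacing(otr_count, minutes_observed, setting):
    low, high = OTR_RANGES[setting]
    per_minute = otr_count / minutes_observed
    return low <= per_minute <= high, per_minute

def check_accuracy(correct_responses, total_responses, material):
    pct = correct_responses / total_responses * 100
    return pct >= ACCURACY_THRESHOLDS[material], pct

print(check_pacing(otr_count=90, minutes_observed=10, setting="small-group"))  # (True, 9.0)
print(check_accuracy(21, 28, "guided practice"))                               # (False, 75.0)
```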

Appendix 6A

Reward-Based Assessment Instructions


Purpose:  To determine if the student has a performance-deficit or a skill-deficit.
Materials Needed:
• Writing tool
• Timer
• Handout 6.6 for directions; Handout 6A.1 to record scores
• Rewards and prizes
• Three passages for each grade level that will be assessed
  − Student copies and evaluator copies
Directions:
1. Transfer the median scores from the Survey-Level Assessment (Handout 6.5) to Handout 6A.1 under the column "Without Reward."
2. Determine a small reward for the student by interviewing him or her about preferences.
3. Tell the student he or she may earn the reward if his or her score improves by one correct word.
− Explain that you will administer three grade-level probes and that the median
score will be used to determine the student’s success.
4. Readminister three new grade-level reading CBM passages and use the median
score. Provide the reward if student’s score improves.
− Consult Handout 6.6 for CBM directions.
Interpretation Guidelines: 
1. Record the student’s scores on Handout 6A.1.
2. Ask: Did the student’s score improve to criterion under the reward condition?
− If yes, consult with those who work with the student about the need for reinforcement in the regular setting.
− If no, conclude that the student has a skill-deficit.
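The interpretation rule above can be expressed as a short decision sketch. It is illustrative only; the function and variable names are mine, and the criterion in the example is simply the third-grade rate criterion from Handout 6A.1.

```python
# Sketch of the interpretation guideline: if the median rate reaches criterion
# under the reward condition, suspect a performance deficit; otherwise treat
# the problem as a skill deficit.

from statistics import median

def classify_deficit(without_reward_scores, with_reward_scores, criterion):
    baseline = median(without_reward_scores)
    rewarded = median(with_reward_scores)
    if rewarded >= criterion and rewarded > baseline:
        return "performance deficit: consider reinforcement in the regular setting"
    return "skill deficit: proceed with skill-focused instruction"

# Example: third-grade rate criterion of 59 WRC.
print(classify_deficit([44, 47, 45], [60, 58, 63], criterion=59))
```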

Handout 6A.1  Reward-Based Assessment Recording Sheet


Student Name: _______________________________   Date: _________   Grade: ________

                    | Without Reward | With Reward: Passage 1 | Passage 2 | Passage 3 | Median | Criterion | Met?
Level 8 | rate      |                |                        |           |           |        | 123       |
        | accuracy  |                |                        |           |           |        | 95%       |
Level 7 | rate      |                |                        |           |           |        | 119       |
        | accuracy  |                |                        |           |           |        | 95%       |
Level 6 | rate      |                |                        |           |           |        | 116       |
        | accuracy  |                |                        |           |           |        | 95%       |
Level 5 | rate      |                |                        |           |           |        | 94        |
        | accuracy  |                |                        |           |           |        | 95%       |
Level 4 | rate      |                |                        |           |           |        | 84        |
        | accuracy  |                |                        |           |           |        | 95%       |
Level 3 | rate      |                |                        |           |           |        | 59        |
        | accuracy  |                |                        |           |           |        | 95%       |
Level 2 | rate      |                |                        |           |           |        | 35        |
        | accuracy  |                |                        |           |           |        | 95%       |
Level 1 | rate      |                |                        |           |           |        | 19***     |
        | accuracy  |                |                        |           |           |        | 95%       |

Note: Transfer the results of the Survey-Level Assessment to the "Without Reward" column.
Chapter 7
CBE Early Literacy

7.1  Chapter Preview

This chapter describes the process for CBE early literacy skills. The chapter is struc-
tured around the four phases of the CBE Process and will walk the reader through
the entire process for early literacy skills. The chapter discusses specific assessment
techniques and intervention recommendations based on the results.

7.2  Early Literacy Skills

As described in Chapter  4, reading development begins with the acquisition of


phonemic awareness and the alphabetic principle. Vaughn and Linan-Thompson
(2004) define phonemic awareness as “the ability to identify phonemes (smallest
identifiable units of sound) of spoken language, and how they can be separated
(pulled apart or segmented), blended (put back together), and manipulated (added, deleted, and substituted)" (p. 8). They define the alphabetic principle as the "systematic relationship between sounds and letters," which in turn allows students to
“translate the letters and patterns of written words into speech sounds automati-
cally” (p. 33). The combination of teaching phonemic awareness and the alphabetic
principle is commonly known as phonics instruction.
Two additional skills are required to ensure that the acquisition of phonemic
awareness and the alphabetic principle occurs: print awareness (also referred to
as print concepts) and alphabetic knowledge. Print awareness is the understanding
that print is a form of communication of the spoken word. Students who have print
awareness know the functions of print, conventions of print, and book conventions
(Archer and Hughes 2010; Vaughn and Linan-Thompson 2004). Alphabetic knowl-
edge is the knowledge of letter names. As evaluators work through the CBE Process
in early literacy, they will evaluate a student’s understanding of three categories of
skills: (1) phonemic awareness, (2) print concepts and alphabetic knowledge, and
(3) alphabetic principle.


Fig. 7.1   Ask-Do-Teach cycle for steps within the CBE Process

7.3  CBE Early Literacy

The CBE Early Literacy Process moves through four phases, within which are a
series of steps that involve three types of tasks:

1. Ask: These are questions that signify a starting point for a step. Assessment results are collected and interpreted in order to answer the question.
2. Do: These are assessment activities conducted with the student.
3. Teach: These are instructional recommendations based on the results of the CBE
Process.
Evaluators start with a question (Ask), which then requires an action or activity (Do). Following a certain number of Asks and Dos, the evaluator arrives at an instructional focus (Teach), which indicates specific instructional strategies (see Fig. 7.1). The entire CBE Process for Early Literacy Skills is presented in Handout 7.1, which is designed to be a quick summary of the early literacy CBE Process.
Table 7.1 also outlines the CBE Process for Early Literacy Skills in a linear form.
All of the handouts used for the CBE Process for Early Literacy Skills are included
at the end of the chapter and the entire list is displayed in Table 7.2.

7.4  Problem Identification

7.4.1 Step 1: Ask: Is There a Problem with Early Literacy


Skills? Do: Initial Problem Identification

The first step of the CBE Process is to identify whether or not a problem in Ear-
ly Literacy Skills exists. A problem with Early Literacy Skills can initially be identified through several means, including a review of records, an interview with the student or teacher(s), or during the CBE Process for Decoding (i.e., the student is not currently reading fluently and/or accurately with first-grade reading material).
Table 7.1   Steps of CBE Process for Early Literacy Skills

Problem identification
  Ask: Is there a problem?  ->  Do: Initial identification
  Ask: Does it warrant further investigation?  ->  Do: Survey-level assessment
Problem analysis
  Ask: If below criterion on PSF, is an error pattern evident?  ->  Do: Assess phonemic awareness skills
  Ask: If below criterion on LNF, has the student mastered print concepts and all letter names?  ->  Do: Assess print concepts and letter names
  Ask: If below criterion on LSF and/or NWF, has the student mastered individual letter sounds?  ->  Do: Assess letter-sound correspondence
  Ask: Is an error pattern evident with letter blends?  ->  Do: Assess letter blends
  Ask: Are sight words a concern?  ->  Do: Assess sight words (see Chapter 6)
Plan implementation
  Ask: Is the instructional focus phonemic awareness?  ->  Teach: Phonemic awareness
  Ask: Is the instructional focus print concepts?  ->  Teach: Print concepts
  Ask: Is the instructional focus letter knowledge?  ->  Teach: Letter knowledge and letter-sound correspondence
  Ask: Is the instructional focus letter-sound correspondence?  ->  Teach: Letter-sound correspondence
  Ask: Is the instructional focus letter blending?  ->  Teach: Letter blends
Plan evaluation
  Ask: Is the student progressing toward his or her goal?  ->  Do: Monitor fidelity and student progress

Table 7.2   List of Handouts for CBE Process for Early Literacy Skills
Handout Title
Instructions and Process Sheets
7.1 Curriculum-Based Evaluation in Early Literacy Skills Flowchart
7.2 Letter Naming Fluency (LNF) Instructions
7.3 Letter Sound Fluency (LSF) Instructions
7.4 Phoneme Segmentation Fluency (PSF) Instructions
7.5 Nonsense Word Fluency (NWF) Instructions
7.6 Phonemic Awareness Skills Assessment Instructions
7.7 Print Concepts Assessment Instructions
7.8 Alphabetic Knowledge (Letter Naming) Assessment Instructions
7.9 Letter-Sound Correspondence Assessment Instructions
7.10 Letter Blends Assessment Instructions
7.11 Letter-Sound Correspondence: Introduction of Sounds
Tally and Assessment Sheets
7.12 Survey-Level Assessment Results for Early Literacy Skills
7.13 Phonemic Awareness Assessment Tally Sheet
7.14 Print Concepts Assessment Tally Sheet
7.15 Letter Naming and Letter Sound Assessment
7.16 Letter Blends Assessment Tally Sheet
7.17 Additional Phonics Assessment Tally Sheet
Strategy Sheets
7.18 Teach: Phonemic Awareness: Sound Box Activity
7.19 Teach: Guided Teaching of Print Concepts
7.20 Teach: Letter Identification with Letter-Sound Correspondence: Multisensory
Teaching of Letter Names
7.21 Teach: Letter-Sound Correspondence: Guided Instruction of Letter Sounds
7.22 Teach: Letter-Sound Correspondence: Visual Support
7.23 Teach: Letter Blends: Word Boxes
7.24 Teach: Letter Blends: Word Building

7.4.2 Step 2: Ask: Is the Student’s Performance Below Criteria?


Do: Survey-Level Assessment

After initially identifying a concern with early literacy skills, the next step is to
verify the extent of the problem by conducting a Survey-Level Assessment (SLA)
using early literacy probes that will assess phonemic awareness, alphabetic knowl-
edge, and the alphabetic principle. Early literacy probes are available from various
sources, such as dibels.org and aimsweb.com.
Benchmark standards  For the SLA in decoding, the fall 25th percentile and an
accuracy criterion of 95 % were used to identify the student’s instructional level. For
the SLA in early literacy, the goal is to identify those skills that are at a reasonable
level of mastery or that may require additional instruction/intervention beyond the

core for development. Consequently, a criterion that predicts the likelihood that a
student will require additional instruction is used. The DIBELS Next benchmark
standards are used to identify a reasonable level of proficiency. In situations where
the benchmark is not available, such as with the Letter Sound Fluency (LSF), the
40th percentile is used since it is comparable to the other benchmark standards
(Good & Kaminski, 2011).
SLA Directions 
1. Begin by administering four 1-minute Curriculum-Based Measurements for Early Literacy: Letter Naming Fluency (LNF), Letter Sound Fluency (LSF), Phoneme Segmentation Fluency (PSF), and Nonsense Word Fluency (NWF). Directions for administration are located in Handouts 7.2 to 7.5.
a. Materials needed:
i. You will need “student copies” for the student to read for LNF, LSF, and
NWF.
ii. You will need “educator copies” for LNF, LSF, PSF, and NWF on which
to record the student’s responses. Additionally, a practice item/sheet is
needed for NWF (see Handout 7.5).
b. Administer one probe each for LNF, LSF, PSF, and NWF. Then compare the student's score to the criterion listed for the expected grade level. If the student does not meet the criterion for his or her expected grade level, compare the score to the next lower grade-level criterion until the criterion is met (a short sketch of this comparison follows these directions). This differs slightly from the SLA for CBE Decoding, since it is not necessary to administer a new probe for a lower grade level.
2. Record the student’s scores on Handout 7.12. Complete the bottom portion of
Handout 7.12 to determine the severity of the problem.
3. Ask: Does it warrant further investigation?
− If the student is at criterion for his or her grade level on each of the Early Lit-
eracy probes, then the CBE Process for early literacy is complete and Decod-
ing CBE can be examined or reexamined (see Chapter 6).
− If the student is below criterion on any of the early literacy probes, then pro-
ceed to Problem Analysis for that particular skill. Low scores on PSF indicate
a need for assessment of phonemic awareness skills (Step 3 of CBE Process),
low scores on LNF indicate print concepts and alphabetic knowledge (Step
4 of CBE Process), and low scores on NWF and/or LSF indicate alphabetic
principle (Step 5 of CBE Process). As an evaluator, you will evaluate each of
these areas in Problem Analysis if the SLA indicates a score below criterion.
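The comparison described in Step 1b can be sketched as a simple loop. This is illustrative only: the criteria dictionary below is a placeholder, and in practice the evaluator would substitute the DIBELS Next benchmark goals (or the 40th percentile for LSF) for each measure.

```python
# Sketch of Step 1b: check the student's score against the criterion for the
# expected grade, then step down one grade at a time until a criterion is met.

def highest_grade_meeting_criterion(score, expected_grade, criteria):
    """criteria maps grade -> benchmark score for one measure (e.g., NWF)."""
    for grade in range(expected_grade, -1, -1):   # 0 = kindergarten
        if grade in criteria and score >= criteria[grade]:
            return grade
    return None  # below criterion at every level assessed

nwf_criteria = {2: 54, 1: 27, 0: 17}   # placeholder values for illustration only
print(highest_grade_meeting_criterion(score=33, expected_grade=2, criteria=nwf_criteria))  # 1
```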

Things to Consider 
• As with decoding, it is helpful and efficient to record all types of errors
while completing the early literacy probes. It may be necessary to examine
errors later in the CBE Process, so recording this information now will
make the process more efficient.

Fig. 7.2   Phase 2 of CBE Early Literacy Process depicting individual steps

• You may also wish to administer three Early Literacy probes and use the
median score. This step is not necessary, but may lead to a more reliable
score. Any time there is a concern about the administration of an Early
Literacy probe, use a new probe and readminister.

7.5  Problem Analysis

In the Problem Analysis phase of CBE Process in Early Literacy, the evaluator
examines each of the scores from the SLA to determine what skills warrant further
investigation. If the student scored low on PSF, assess phonemic awareness skills
(Step 3 of CBE Early Literacy Process). If the student scored low on LNF, assess
alphabetic knowledge and print concepts (Step 4 of CBE Early Literacy Process). If
the student scored low on LSF or NWF, assess alphabetic knowledge (this step be-
gins with Step 5 and may include Steps 6 and 7 of the CBE Early Literacy Process).
The evaluator will assess each area that the SLA results indicate warrants further
investigation. Figure 7.2 provides a visual depiction of phase 2 of the CBE Early
Literacy Process.

7.5.1 Step 3: Ask: If Below Criterion on PSF, is an Error Pattern


Evident? Do: Assess Phonemic Awareness Skills

Assessment of phonemic awareness skills involves two parts: (1) administration of


phonemic awareness tasks and (2) examination of errors to see if there is a pattern.
Phonemic awareness is comprised of many different skills that are vital in translat-
ing sounds within oral language to written language. Assessing phonemic aware-
ness is conducted by administering tasks designed to answer several questions:
a. Can the student blend word parts, segment word parts, and/or rhyme words?
b. Can the student blend syllables and segment syllables?
c. Can the student delete onset (initial consonant sound of a word) or delete rime
(the vowel and the rest of the syllable that follows)?
d. Can the student blend phonemes and segment phonemes?

Fig. 7.3   Continuum of phonemic awareness skills

Step 3 of the CBE Early Literacy Process involves having the student complete at
least ten items for each skill to answer the previous questions. Evaluators may cre-
ate their own items to use for each question, or they may use the assessment activity
in Handout 7.6. Directions for Step 3 are provided in Handout 7.6.
1. Assess each phonemic awareness skill using the directions in Handouts 7.6 and
7.13 to record the scores. The skills to be assessed include: blend word parts,
segment word parts, rhyme words, blend syllables, segment syllables, delete
onset, delete rime, blend phonemes, and segment phonemes.
2. After administering the phonemic awareness assessments, determine if an error
pattern is evident or if the student has a general lack of phonemic awareness
skills. Ask, Is there a pattern to errors made?
a. To determine if there are gaps in skills or a general lack of skills, the evalu-
ator looks for patterns in the phonemic awareness skills. If the student has
mastered most of the skills that were assessed but struggles with one or two
skills, then those missing skills can be targeted for instruction. The evaluator
can recommend “Teach: Phonemic Awareness: Sound Boxes” (Handout 7.18)
and “Teach: Targeted Instruction” as described in Chapter 6.
b. If the evaluator finds that the student has an overall lack of phonemic aware-
ness skills, then the evaluator can recommend general instruction using some
of the strategies described in the “Teach: Phonemic Awareness” section of this
chapter and the general reading instruction strategies described in Chapter 6
(see the “Teach: General Reading Instruction”). Additionally, Fig. 7.3 illus-
trates the continuum of skills from simple to complex. The evaluator can sug-
gest where on the continuum of phonemic awareness skills instruction should
focus.
c. After completion of Step 3, proceed to Step 4 and/or Step 5, depending on the
results of the SLA.
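One hypothetical way to summarize the Step 3 results is sketched below. The 80% mastery cutoff and the "one or two missing skills" rule used here are illustrative assumptions for the example; the text itself directs evaluators to judge whether the pattern reflects isolated gaps or a general lack of phonemic awareness skills.

```python
# Illustrative sketch for interpreting Step 3. Scores are percent correct out
# of at least ten items per skill; cutoffs are assumptions, not book criteria.

MASTERY_CUTOFF = 80  # percent correct

def interpret_pa_results(skill_scores):
    """skill_scores maps phonemic awareness skill -> percent correct."""
    below = [skill for skill, pct in skill_scores.items() if pct < MASTERY_CUTOFF]
    if not below:
        return "skills at mastery", []
    if len(below) <= 2:
        return "targeted instruction on missing skills", below
    return "general phonemic awareness instruction along the continuum", below

scores = {"blend word parts": 100, "segment word parts": 90, "rhyme": 100,
          "blend syllables": 90, "segment syllables": 80, "delete onset": 40,
          "delete rime": 50, "blend phonemes": 90, "segment phonemes": 80}
print(interpret_pa_results(scores))
# ('targeted instruction on missing skills', ['delete onset', 'delete rime'])
```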

7.5.2 Step 4: Ask: If Below Criterion on LNF, does the Student


have Print Concepts and Letter Names Mastered? Do:
Assess Print Concepts and Letter Names

Knowledge of print concepts is a foundational skill when learning to read. Students must understand that words in print carry meaning and that obtaining that meaning relies on certain conventions, which need to be taught.
Assessment of print concepts can be done with any book. The educator sits with
a student and provides a book. Questions are asked to determine whether or not the
student understands book concepts, directionality concepts, word concepts, letter
concepts, and punctuation concepts. All concepts should be mastered by students in
kindergarten, except for punctuation concepts which are mastered later.
Assessment of Print Concepts 
1. To assess print concepts, select a book at the student’s grade level. Sit with the
student and work through the assessment as outlined in Handout 7.7.
2. Ask each item of the student listed in Handout 7.14 and mark whether or not the
student is able to complete the task.
3. Tally up the responses and identify any missing skills. These skills will be tar-
geted for instruction.

Assessment of Letter Names 


4. Knowing the letter names is a precursor to matching letter names with sounds.
LNF probes sample both upper and lower case letters. Evaluators will further
analyze errors to determine the letters that need to be taught.
5. Follow the directions in Handout 7.8 and have the student identify the names of
all upper- and lowercase letters. Record the results in Handout 7.15.
6. Next, analyze the data to determine which letters the student is able to identify
including whether they are uppercase letters, lowercase letters, or a combination
of both. The missing letters are then targeted for instruction.
7. Ask: Has the student mastered print concepts and letter names? Review the
results of the student’s print concepts and letter names.
a. If the student has mastered all of the print concepts and letter names (i.e., can
accurately identify all tasks in print concepts and identify all letter names),
then proceed to Step 6 (assuming the student scored low on LSF and/or
NWF).
b. If the student has not mastered all of the print concept skills or letter names,
identify which skills are missing. These are targeted using “Teach: Guided
Teaching of Print Concepts” (see Handout 7.19) and/or “Teach: Letter Identi-
fication with Letter-Sound Correspondence: Multisensory Teaching of Letter
Names” (see Handout 7.20).

7.5.3 Step 5: Ask: If Below Criterion on LSF and/or NWF, has


the Student Mastered Individual Letter Sounds? Do: Assess
Letter-Sound Correspondence and Letter Blends

For Steps 5 and 6, evaluators will examine the student’s letter-sound correspon-
dence for individual letters and letter blends. Letter sound fluency is one of the two
high-priority skills necessary to acquire the alphabetic principle (Hosp and Mac-
Connell 2008). First, the evaluator will assess the student’s knowledge of individual
sounds and then will assess letter blends. Letter Sound Fluency probes and Non-
sense Word Fluency probes sample letter sounds. The evaluator will further analyze
and verify the student’s knowledge of letter sound correspondence and letter blends.
1. Use the assessment described in Handout 7.9 with all lowercase letters. Ask the
student to produce each of the most common sounds for each letter.
2. Record the responses on the summary/tally sheet on the second page of Handout
7.15 (note that the same sheet is used for the letter name assessment).
3. Ask: Has the student mastered individual letter sounds?
a. If the student has mastered all of the letter sounds, then move to assessing
letter blends (Step 6).
b. If the student has not mastered all of the letter sounds, identify the missing
letter sounds and target for instruction. Recommend the strategy outlined in
Handout 7.20 (with a focus on letter sound) or one of the strategies outlined
in Handout 7.21 and Handout 7.22.
c. Further examine the errors and see if an error pattern emerges. With early literacy skills, a pattern may not be as evident as with Decoding CBE because there are only a few categories of letters. The errors are examined for missing vowel sounds, consonants, or confusion of visually or auditorily similar letters.
i. If a pattern is clear, teach those specific letter sounds with targeted instruc-
tion (as described previously).
ii. If no pattern emerges, recommend a more general, balanced approach
to instruction as described in Chapter  6 (see “Teach: General Reading
Instruction”). The strategies described in Handouts 7.20 to 7.22 can still be
used, but the focus will be on more letters than if a pattern was identified.
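Sorting letter-sound errors into the categories mentioned in Step 5c can also be done mechanically. The sketch below is illustrative: the confusable-pair lists are common examples of my own choosing, not a list from the book.

```python
# Illustrative sketch for Step 5c: group missed letter sounds into vowels,
# consonants, and visually similar pairs. Pair lists are assumptions.

VOWELS = set("aeiou")
VISUALLY_SIMILAR = [{"b", "d"}, {"p", "q"}, {"m", "w"}, {"n", "u"}]

def categorize_letter_sound_errors(missed_letters):
    missed = set(missed_letters)
    return {
        "vowels": sorted(missed & VOWELS),
        "consonants": sorted(missed - VOWELS),
        "visually similar pairs": [sorted(pair) for pair in VISUALLY_SIMILAR
                                   if pair <= missed],
    }

print(categorize_letter_sound_errors(["b", "d", "e", "i", "q"]))
# {'vowels': ['e', 'i'], 'consonants': ['b', 'd', 'q'],
#  'visually similar pairs': [['b', 'd']]}
```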

7.5.4 Step 6: Ask: Is an Error Pattern Evident with Letter


Blends? Do: Assess Letter Blends

Once the evaluator has determined that individual letter sounds have been mastered,
it is necessary to determine if the student is able to blend those sounds into words.
The starting point is to examine performance on the NWF task. If the student scores
below the criterion for NWF, it may be necessary to assess further the student’s abil-
ity to blend sounds connected to print.

1. Assess the student’s ability to blend vowel–consonant (VC) words and conso-
nant–vowel–consonant (CVC) words. Use Handout 7.10 for directions.
2. Assess the student’s ability to blend letter sounds at the beginning of the word
and blend letter sounds at the end of the word. Use a dry-erase board or a piece
of paper with a pencil.
3. Record the results on Handout 7.16.
4. Ask: Is there an error pattern evident with letter blends or the student’s ability to
blend letter sounds?
a. Determine whether or not an error pattern is evident. The student may make
errors with specific letter sounds or with blending certain types of words (i.e.,
VC words, CVC words, or letter blends at the beginning or end of the word).
b. If a pattern exists, teach the missing skills. Follow guidelines described in
Chapter 6 (see Teach: Targeted Instruction) and recommend one of the strategies
outlined in Handouts 7.23 (i.e., Word Boxes) or Handout 7.24 (i.e., Word
Building).
c. If no pattern exists, consider general instruction related to blending. Consult
the “Teach: General Reading Instruction” outlined in Chapter 6 and recom-
mend one of the strategies in Handouts 7.23 and 7.24.

Things to Consider 
• Assessment of letter blends overlaps with the error analysis described in
Chapter 6. The evaluator may wish to follow the Error Analysis section
from Chapter 6 using primer or appropriate-level passages.
• It is also an option to examine previous work, the SLA assessment, and
other assessments to identify errors or missing letter sounds and letter
blends. Evaluators can create assessments or use a commercially available
diagnostic survey.
• If simple letter blends appear to be mastered, further assessment can be
conducted to determine if instruction is needed for diphthongs, digraphs,
and/or r-controlled syllables. Handout 7.17 provides an example of such
an assessment.

7.5.5 Step 7: Ask: Are Sight Words a Concern?


Do: Assess Sight Words

Assessment of sight words is explained in Chapter 6. Follow Step 7 in Chapter 6 to


assess sight words.

7.6  Plan Implementation

After conducting specific-level assessment in the problem analysis phase of the


CBE Early Literacy Process, the specific problem with early literacy skills should
be clear. With that information, three steps follow: (a) a strategy or intervention that

Table 7.3   Instructional strategies for early literacy skills


Strategy Focus of strategy
Sound manipulation activities Various phonemic awareness skills
Sound boxes Phonemic awareness
Guided teaching of print concepts Various print knowledge skills and concepts
Multisensory teaching of letter names Alphabetic knowledge and principle
Guided instruction of letter sounds Alphabetic knowledge and principle
Visual support Alphabetic knowledge and principle
Word boxes Letter blends and blending
Word building Letter blends and blending

Table 7.4   List of resources for instructional strategies for early literacy skills
Resource Location/Publisher
Phonemic Awareness in Young Children Jager Adams et al. (1998). Brookes Publishing
Effective School Interventions Rathvon (2008). Guilford Press
Intervention Central Sight Word Generator http://www.interventioncentral.org/tools/
wordlist-fluency-generator
Words Their Way: Word Study for Phonics, Bear et al. (2007). Prentice Hall
Vocabulary, and Spelling Instruction (4th ed)
Reading Rockets Readingrockets.org
CORE Literacy Library www.corelearn.com

is matched to the student’s skill deficits is selected, designed, and implemented, (b)
a goal is set, and (c) ways to measure fidelity and progress are determined.
Five general instructional foci (labeled as “Teach”) are described for use with
students depending on the results of the CBE Process. A list of all the strategies
described in this chapter is provided in Table  7.3, with each strategy detailed in
Handouts 7.18–7.24. There are numerous intervention strategies to support early
literacy needs, and listing all of them is beyond the scope of this book. The key to
selecting a strategy is to use problem analysis results to guide the selection, and
use formative assessment to ensure it results in student benefit. This section will
describe the overall instructional focus for a given result of the CBE Process and
share a few evidence-based strategies in the Handouts. Educators are encouraged
to explore other resources to locate additional strategies. A list of resources that
provide instructional strategies is presented in Table 7.4.
Targeted vs. General Reading Instruction  Much of the CBE Process is an
attempt to determine the presence of specific error patterns or if there is a general
skill problem. The most helpful recommendations may be lists of the types of errors
made as opposed to a specific instructional strategy. When an error pattern is identi-
fied, such as with phonemic awareness skills or letter-sound correspondence, then
the teaching strategy is to use targeted instruction to correct those identified errors.
The reader is referred to the “Teach: Targeted Instruction” section in Chapter 6. If
no clear pattern emerges with error analysis or if the student is struggling with the
skill overall, then it is recommended to use “Teach: General Reading Instruction”
as described in Chapter 6. When the need for general instruction is identified, the

student needs overall improvement of the skill and a general, balanced approach to
reading instruction is recommended.

7.6.1  Teach: Phonemic Awareness

If CBE results indicate phonemic awareness instruction is necessary, several sound


manipulation activities can be suggested. If specific phonemic awareness skills
are targeted, the activities described next will use the correct form of the errors as
the content of instruction; if the student has a general lack of phonemic awareness
skills, then the content will be based on where the student falls on the continuum of
phonemic awareness skills (see Fig. 7.3).
Sound manipulation activities  The activity chosen should be matched to the stu-
dent’s skill deficit. Analysis of the results from the CBE Process and an interview
with the teacher to determine which activities have been used previously will help
determine which activities to use. The use of such sound manipulation activities is associated with improved student outcomes (Phillips et al. 2008; Vaughn and Linan-Thompson 2004).
Sound matching activities teach the student to match sounds in two or more
words that are the same. Pictures of objects may be used. The teacher provides the
name of the object on the picture, and has the student match with a picture of an
object that has the same beginning sound. This also may be done with middle and
ending sounds.
Isolating the sounds activities have the student identify the sounds at the beginning, middle, and end of words. The teacher provides two words that have the same beginning, middle, or ending sound and asks the student to identify whether the beginning, middle, or ending sounds are the same. Another technique is to say a word and have the student identify the beginning, middle, or ending sound (example: the instructor says, "What do you hear at the beginning of the word dog?" The student should respond with /d/).
Substituting sounds activities involve teaching the student how to replace one
sound with another to form a new word. The teacher says a word and has the stu-
dent create a different word by substituting a sound in the word (can be beginning,
middle, or end). For example, the instructor says, "What happens if we change the /d/ in dog to /l/? What word do we get?"
Sound deletion activities involve teaching the student to remove a sound and
say the remaining word without the sound in it. For example, the instructor says,
“What word do I get if I take the /s/ off of /seat/? (eat)” Sounds may be deleted at
the beginning, middle, or end of the words. Ensure that once a sound is deleted a
real word is created.
Sound blending activities involve teaching the student to blend sounds, syllables,
or isolated phonemes into a word. For example, the instructor says, “It starts with /f/
and ends with /eet/, the word is…( feet).”
Sound segmentation activities involve teaching the student how to separate out
the individual sounds in words. The instructor says, “What sounds do you hear in
can?” The student says, “/c/-/a/-/n/.”

Sound boxes  Sound boxes, also known as Elkonin boxes (Elkonin 1973), are used
to teach students how to segment the sounds of spoken words in sequence and to help students understand the positions of sounds in spoken and written words (McCarthy 2008).
within the words. The instructor models the different phonemes within the word
for the student by slowly articulating the word and sliding chips into the box for
each phoneme within the word. For example, the word “sheep” has three phonemes
(/sh/ /ee/ /p/). The teacher would say the word, elongating each phoneme, and slide
a chip into one of three boxes for each phoneme. The student is then given the
opportunity to do the same.
The technique may be used to teach students how to identify beginning, middle,
or ending sounds by asking students to slide the chip into the box where they hear the
specific sound. This technique can be used with: CV, CVC, consonant–vowel–con-
sonant–silent vowel (CVCV), consonant–vowel–vowel–consonant (CVVC), con-
sonant–consonant–vowel–consonant (CCVC), and multisyllabic words. Maslanka
and Joseph (2002) compared the use of sound boxes to the use of sound sorts with a
group of preschoolers. Although both methods led to increases in phonemic aware-
ness skills, students who received instruction with sound boxes scored better on
measures of segmenting phonemes and isolating medial sounds. The strategy is
described in detail in Handout 7.18.

7.6.2  Teach: Print Concepts

Guided teaching of print concepts  For teaching print concepts, evaluators iden-
tify the exact missing skills and target those for instruction. The teacher directly
teaches and models the areas the student has yet to master within whole-group ins-
truction, small-group instruction, and/or one-to-one instruction. Two strategies are
suggested. First, a direct instruction approach is described in Handout 6.16 (Teach:
General Reading Instruction). Second is the Shared Book Reading strategy, descri-
bed by Lovelace and Stewart (2007). Lovelace and Stewart describe an intervention
with preschool students who received a 10-minute print concepts lesson as part
of their 30-minute speech and language IEP session. The teacher sat next to the
individual student and read a story aloud. While reading, the teacher would point
out a total of 20 print concepts. Each participant at least doubled their mastery of
print concepts (as measured by a concept of print assessment; see Handouts 7.7 and
7.14). The intervention is described in more detail in Handout 7.19.

7.6.3  Teach: Letter Identification with Letter-Sound Correspondence

Vaughn and Linan-Thompson (2004) recommend not separating letter knowledge and letter sounds during instruction. They suggest that letter-sound associations be taught so that students learn to combine them to make words that they read and understand. Vaughn and Linan-Thompson further recommend that teachers begin
teaching sounds that are easy to articulate. Refer to Handout 7.11 for their recom-
mended introduction of letter-sound correspondences. Comparing the data obtained through the CBE Process with Handout 7.11 will help determine which letter sounds to target first.
Multisensory teaching of letter names  Lafferty and colleagues (2005) describe a
direct instruction, multisensory approach used to teach four preschoolers (two with
typical language development and two with delayed language development) letters
that they had not mastered. Following a baseline assessment, a pool of five letters
was identified for instruction. Students were taught in small-group instruction for
30  minutes, three times per week. Students were directly taught a letter using a
model-production-feedback sequence. Then a multisensory strategy was used in
which the students used shaving cream or Play-Doh to “write” the letter, followed
by using paper and pencil to write the letter. Production of the letter involved both
letter name and letter sound. Each student showed large gains in accuracy of letter
name and letter sound identification, with results favoring recognition over produc-
tion. More detail on this method is provided in Handout 7.20.

7.6.4  Teach: Letter-Sound Correspondence

Many of the strategies described for teaching letter-sound correspondence also can
be used to teach letter blends. One strategy for teaching letter-sound correspon-
dence is presented within this section, but evaluators also should read the Letter
Blend section to see other possible strategies to recommend.
Guided instruction of letter sounds  This strategy involves the student learning
to identify letter names and letter sounds, both upper- and lowercase (Vaughn and
Linan-Thompson 2004). Explicit, guided instruction is used. The teacher begins by
introducing one vowel and three or four consonants. Letters may be added as stu-
dents master them. The instructor models the task by showing the student the first
letter and saying, "This is a," then asks, "What letter is this?" The student repeats the
letter. Showing each letter, the teacher asks the student, “What letter is this?” Once
the letters are mastered, match the sounds to the letters, and repeat the same process
by saying, “This is a. a says /a/. What is the sound of a?” Repeat with each letter.
This strategy is described in Handout 7.21.
Visual support  In addition to increasing engagement with letter sounds through the guided instruction of letter sounds strategy described previously (see Handout 7.21), teachers may also consider using visuals to support the acquisition of the letters. Animals paired with letters can provide the visuals for students to learn the letters and letter sounds. For instance, an elephant would represent e, or a dog
would represent d (http://www.readingrockets.org/strategies/alphabet_matching).
Additionally, props or images that represent a functional depiction of the letter can
be created to augment instruction (Dilorenzo et al. 2011). For example, an image of
a snake in the shape of an "s" can be used for the letter s. Refer to Handout 7.22 for
further directions.
Fig. 7.4  Example of a word box for the word "clap": [ CL | A | P ]

7.6.5  Teach: Letter Blends

Word boxes  Word boxes are similar to sound boxes (described earlier) and assist
the student with one-to-one sequential correspondence between letters and sounds
in words (Clay 1993). The teacher uses word cards with boxes below each letter or
grouping of letters (blends) that represent the individual sounds within the word.
The student is presented the word card (see Fig. 7.4) and the instructor models for
the student by slowly articulating the word and sliding the letter (or combination of
letters) into the box when a sound in the word is pronounced.
The student then articulates the word slowly and slides the letter (or letters)
into the box when pronouncing a new sound in a word. This technique also can be
used to work with the student in identifying beginning, middle, and ending sounds
in words. The instructor presents a word and says, “Where do you hear the /cl/ in
clap?” The student then slides the letters into the box or position where they hear
the sound in the word. Word boxes are effective in improving students' decoding
and word-reading ability (Joseph 2000). See Handout 7.23 for more detail on this
strategy.
Word building  Word building involves using a set of letter cards to teach students
how to build words and analyze the different words created by adding or replacing
certain letters. Each student is given a set of letter cards that correspond to the letter-
sound units for that particular lesson. The teacher presents a word on the board, stu-
dents pronounce it, and then build it with their cards. For example, the teacher may
write “cat” on the board, pronounce it, and then have students build it with their
cards. Next, students are taught to insert or delete certain letters to create new words
(e.g., cat to cap). When changing letters, particular attention is given to the position-
ing of the letter being changed, and after each new word is formed, students respond
chorally. If students cannot pronounce the word correctly, they are encouraged to sound it out using the letter sounds rather than being supplied the word.
Variations on this method include a peer-tutoring and sentence-reading component.
Rathvon (2008) offers more detail on implementing this strategy and indicates it may
be best for students who have completed grade 1 but struggle with basic letter sounds.
Also, this strategy is particularly helpful for students who get the initial sound or letter
in a word, but do not fully decode the word (McCandliss et al. 2003). McCandliss and
colleagues found that students between ages 7 and 10 improved their decoding and
reading comprehension skills following 20 intervention sessions using Word Building
compared to a control group. See Handout 7.24 for instructions on this strategy.
7.7  Plan Evaluation

Having identified an instructional focus and strategy, the question in the Plan Evalu-
ation phase is about the effectiveness of the intervention plan on the student's early
literacy skills. This section describes ways to measure early literacy skills. The spe-
cific measurement tool will vary depending on which early literacy skill is being
targeted. As mentioned in Chapter 6, educators may wish to create measures to as-
sess the specific skill being targeted, or they may wish to analyze data collected as
part of daily instruction. A few options are summarized here.
Phonemic awareness  Phoneme Segmentation Fluency (PSF) can be used to measure progress related to phonemic awareness skills (Good and Kaminski 2011).
Print concepts  Handouts 7.7 and 7.14 can be used to assess the components of
print concepts during instruction.
Alphabetic knowledge  Curriculum-Based Measurement can be used to measure
progress of alphabetic knowledge. LNF probes are used to measure a student’s
progress in acquiring letter names.
Alphabetic principle  Two Curriculum-Based Measures are used for measuring
a student’s progress with the alphabetic principle: LSF (single letter sounds) and
NWF (incorporates blending letter sounds).
Integrated data  As previously mentioned in Chapter 6, data also can be collected as
part of the instruction. For example, the teacher could document the percentage of cor-
rect sounds a student produces during sound manipulation activities. The data obtained
during instruction can enhance formally administered individual assessments.
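For example (hypothetical numbers), if a student correctly segments 16 of 20 words during a sound manipulation activity, the teacher would record 16/20, or 80% accuracy, for that session; tracked across sessions, these percentages supplement the formally administered probes.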

7.8  Expanding and Fine-Tuning

This section describes considerations and ways to expand the use of the CBE Process for Early Literacy. As evaluators build proficiency with the CBE Process, they may wish to tailor it to address deeper content.
Analysis of NWF  When analyzing the results of the NWF, the evaluator can deter-
mine where the breakdown is with decoding. For example, the student may con-
sistently be unable to identify the first letter sound or the student may be able to
decode, but struggle with recoding or blending.
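For example (hypothetical responses), a student who reads the nonsense word "sim" as /s/ /i/ /m/ but cannot recode those sounds into the whole word is producing individual letter sounds accurately; blending and recoding would then become the instructional target.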
Overlap with error analysis  Error analysis may be used to identify what types
of letter blends a student is missing. The evaluator may wish to consult Chapter 6
(Step 6).
7.9  Chapter Summary

This chapter outlined the CBE Process for Early Literacy Skills. The process begins with an SLA using CBM early literacy probes, followed by a series of questions and tasks that examine the student's skills in phonemic awareness, alphabetic knowledge, and the alphabetic principle.
Instructional recommendations are determined based on the results.
Handout 7.1  Curriculum-Based Evaluation Process in Early Literacy Flowchart

Curriculum-Based Evaluation: Early Literacy

PROBLEM IDENTIFICATION
1. Ask: Is there a problem in Early Literacy Skills? Do: Initial identification of the problem.
2. Ask: Does the problem warrant further investigation? Do: Conduct a Survey-Level Assessment (PSF, LNF, LSF, NWF).

PROBLEM ANALYSIS
Phonemic Awareness
3. Ask: If below criterion on PSF, is there an error pattern evident? Do: Assess phonemic awareness skills.
   − Yes: Teach Phonemic Awareness with Targeted Instruction.
   − No: Teach Phonemic Awareness with General Instruction.
Print Concepts and Alphabetic Knowledge
4. Ask: If below criterion on LNF, does the student have print concepts and letter names mastered? Do: Assess print concepts and letter names.
   − Yes: Go to Step 5.
   − No: Teach Print Concepts and/or Teach Letter Identification with Letter-Sound Correspondence.
Alphabetic Principle
5. Ask: If below criterion on LSF and/or NWF, has the student mastered individual letter sounds? Do: Assess letter-sound correspondence.
   − Yes: Go to Step 6.
   − No: Teach Letter-Sound Correspondence (consider Step 6).
6. Ask: Is there a pattern to errors made with letter blends? Do: Assess letter blends.
   − Yes: Teach Letter Blends with Targeted Instruction.
   − No: Teach Letter Blends with General Instruction.
7. Ask: Are sight words a concern? Do: Assess sight words (see Chapter 6).

PLAN IMPLEMENTATION
Teach: Phonemic Awareness; Print Concepts; Letter Identification with Letter-Sound Correspondence; Letter-Sound Correspondence; Letter Blends.

PLAN EVALUATION
Monitor Effectiveness. Monitor Fidelity.
Handout 7.2  Letter Naming Fluency Directions


Purpose: To assess student’s alphabetic knowledge
Materials Needed:
• Writing tool
• Timer
• Handout 7.12
• Letter Naming Fluency probes
− Student copies
− Examiner copies
Directions:
1. Place the student test form (probe) in front of the student.
2. Place the examiner’s copy in front of you, shielded from the student’s view.
3. Say to the student, “Here are some letters ( point to the student copy). Begin
here (point to the first letter) and tell me the names of as many letters as you
can. If you come to a letter you don’t know, I’ll tell it to you. Do you have
any questions? (Answer any questions the student may have and reread direc-
tions if necessary.) Put your finger under the first letter. Ready, begin.” Begin
timing.
4. As the student reads, follow along on the examiner’s copy and mark any errors
(letters named incorrectly or not named within 3 seconds).
5. Draw a slash (/) through the incorrect letter. If the student self-corrects within
3 seconds, mark the self-correction with an “SC.”
6. Mark errors as incorrect and let the student continue reading (do not correct the
errors). If a student stops or struggles with a letter for 3 seconds, tell the student
the letter name, mark it as incorrect, and then point to the next letter and say,
“What letter?” Do this as often as needed during the administration.
7. The first time the student says the letter sound rather than the letter name, say:
“Remember to tell me the letter name, not the sound it makes. This letter is
(say the letter name).” Mark the letter as incorrect. You may provide this prompt
only once during the administration. If the student provides the letter sound on a
later item, mark it as incorrect and make a note at the top of the examiner’s copy,
but do not give feedback.
8. At the end of 1 minute, place a bracket (]) after the last letter named and say, “Stop.”
9. Tally up the total number of correct letters named and record the score on Hand-
out 7.12.
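For example (hypothetical scores): if the student attempted 42 letters in 1 minute and 6 were marked as errors, the score recorded on Handout 7.12 would be 42 - 6 = 36 correct letters.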
Interpretation Guidelines:
10. Ask: Does the issue warrant further consideration?
− If the student is performing at criterion, then further assessment of alpha-
betic knowledge or print concepts is not necessary.
− If the student is performing below criterion, then proceed to problem analy-
sis (Step 4) to determine further steps.
Things to Consider 
• If a student makes errors without self-corrections on ten consecutive letters, dis-
continue the probe. Give credit for any letters correct before the discontinue rule
was met.
Note: The LNF directions are reprinted by permission (Pearson, 2012b). 2012
Copyright by Pearson Education Inc.
Handout 7.3  Letter Sound Fluency Directions


Purpose: To assess student’s letter-sound correspondence for individual letters
Materials Needed:
• Writing tool
• Timer
• Handout 7.12
• Letter sound fluency probes
− Student copies
− Examiner copies
Directions:
1. Place the student test form (probe) in front of the student.
2. Place the examiner’s copy in front of you, shielded from the student’s view.
3. Say to the student, “Here are some letters (point to the student copy). Begin here
(point to first letter) and tell me the sounds of as many letters as you can. If you
come to a letter you don’t know, I’ll tell it to you. Do you have any questions?
(Answer any questions the student may have and reread directions if necessary).
Put your finger under the first letter. Ready, begin.” Begin timing.
4. Follow along on the examiner’s copy and mark any letter sounds made incor-
rectly or not made within 3 seconds.
5. Draw a slash (/) through the incorrect letter. If the student self-corrects within
3 seconds, mark the self-correction with an “SC.”
6. If a student stops or struggles with a letter for 3 seconds, tell the student the let-
ter sound and then mark it as incorrect. Point to the next letter and say, “What
sound?” Do this as often as needed during the administration.
7. The first time the student says the letter name rather than the letter sound, say,
“Remember to tell me the sound the letter makes, not its name.” Mark the letter
as incorrect. You may provide this prompt only once during administration. If the
student provides the letter name on a later item, mark it as incorrect.
8. At the end of 1 minute, place a bracket (]) after the last letter sound made and
say, "Stop."
9. Tally up the total number of correct letter sounds and record the score on Hand-
out 7.12.
Interpretation Guidelines:
10. Ask: Does the issue warrant further consideration?
− If the student is performing at criterion, then further assessment of letter-sound correspondence is not necessary.
− If the student is performing below criterion, then proceed to problem analysis (Step 5) to determine further steps.
Things to Consider 
• If a student makes errors without self-corrections on ten consecutive letters, dis-
continue the probe. Give credit for any correct letter sounds.
Handout 7.4  Phoneme Segmentation Fluency Directions


Purpose: To assess student’s phonemic awareness (segmenting spoken words into phonemes)
Materials Needed:
• Writing tool
• Timer
• Handout 7.12
• Copies of Phoneme Segmentation Fluency probes for examiner
Directions:
1. Place the examiner’s copy in front of you, shielded from the student’s view.
2. Say to the student, “I am going to say a word. After I say it, I want you to tell
me all the sounds in the word. So, if I say, Sam, you would say /s/ /a/ /m/.
Let’s try one (1-second pause). Tell me the sounds in mop.”
a. If the student responds correctly, say, “Very good. The sounds in mop are
/m/ /o/ /p/.”
b. If the student responds incorrectly, say, “The sounds in mop are /m/ /o/ /p/.
Your turn. Tell me the sounds in mop.” Let the student respond.
3. Proceed to the test. Say, “Here is your first word.” Say the first word and start
your timer.
4. Follow along on the examiner’s copy and record the student’s performance.
Mark any incorrect segments or segments the student does not say within 3 sec-
onds as errors.
5. Underline each sound segment produced correctly. Draw a slash (/) through a
segment produced incorrectly. If the student self-corrects within 3 seconds, mark
the self-correction with an “SC.”
6. Allow the student up to 3 seconds for each sound segment. If the student does
not say the first (or next) sound segment of a word after 3 seconds, give the next
word and mark the segments not produced as errors.
a. As soon as the student is finished saying the sounds of a word, present the
next word.
7. At the end of 1 minute, place a bracket ( ] ) after the last segment attempted.
Interpretation Guidelines:
8. Ask: Does the issue warrant further consideration?
– If the student is performing at criterion, then further assessment of phonemic awareness is not necessary.
– If the student is performing below criterion, then proceed to Problem Analysis (Step 3) to determine further steps.
Things to Consider 
• If a student does not give any correct sound segments in the first 5 words, discon-
tinue the probe and record a score of zero.
Note: The PSF directions are reprinted by permission (Pearson, 2012b). 2012 Copy-
right by Pearson Education Inc.
Handout 7.5  Nonsense Word Fluency Directions


Purpose: To assess student’s alphabetic principle (letter-sound correspondence and blending)
Materials Needed:
• Writing tool
• Timer
• Handout 7.12
• Nonsense Word Fluency probes
– Student copies
– Examiner copies
– Practice Sheet
Directions:
1. Place the Practice Sheet in front of the student, and say, “Look at this word
(point to bim). It’s a made-up word, not a real word. All the letters have sounds:
(point to each letter in turn as you say it) /b/, /i/, /m/. Altogether, the sounds are
(point to each letter) /b/ /i/ /m/, or bim. (Run your finger fast through the whole
word.) Remember, it is a made-up word. You can say the sounds of the letters,
/b/ /i/ /m/ (point to each letter), or you can say the whole word bim. (Run your
finger fast through the whole word.) Be sure to say any sounds you know.”
2. “Ready? Let’s try one. Read this word the best you can (point to lat). Point to
each letter and tell me the sound or tell me the whole word.”
a. If the student responds correctly or says some or all of the sounds, say, “That’s
right. The sounds are /l/ /a/ /t/ or lat.”
b. If the student does not respond within 3 seconds or responds incorrectly, say,
“Watch me: (point to each letter in turn) /l/, /a/, /t/. Altogether, the sounds
are /l/
/a/ /t/ (point to each letter) or lat (run your finger fast through the whole
word). Remember, you can say the sounds of the letters or you can say the
whole word. Let’s try again. Read this word the best you can (point to lat).”
3. Place the student probe in front of the student.
4. Place the examiner’s copy in front of you, shielded from the student’s view. Say,
“Here are some more made-up words (point to the student probe). When I say
begin, start here (point to the first word), go across the page (run your finger
across the page), and read the words the best you can. Remember, you can tell
me the sound of the letter or you can say the whole word. Put your finger under
the first word. Ready, begin.” Begin timing.
5. When you are sure the student clearly understands the task, use these shortened
directions: “When I say begin, start here (point to the first word), go across the
page (run your finger across the page), and read the words the best you can.
Remember, you can tell me the sound of the letter or read the whole word. Put
your finger under the first word. Ready, begin.”
Note: The NWF directions are reprinted by permission (Pearson, 2012b). 2012
Copyright by Pearson Education Inc.
Nonsense Word Fluency Practice Sheet

bim      lat
Handout 7.6  Phonemic Awareness Skills Assessment Instructions


Materials Needed:
• Writing utensil
• Handout 7.13
Directions:
1. Begin by asking the student to blend word parts, segment word parts, and then
rhyme words.
2. Blending word parts. Present at least ten items first for blending word parts.
Begin by presenting a practice item for the student. Say, “Listen and tell me
what word you hear: /pan/ /cake/. What word do you hear?”
a. If the student responds correctly, say, “Yes, the word is pancake.”
b. If the student responds incorrectly, say “No, the word is pancake. /Pan/
(pause) /cake/ together is pancake. Listen again and tell me the word you
hear: /pan/ /cake/.”
c. Proceed to the actual items. Record the student’s actual response on Handout
7.13.
3. Segmenting word parts. Present at least ten items for segmenting word parts.
Begin with a practice item. Say, “I am going to clap for each part of the word
cowboy: /cow/ (clap) /boy/ (clap). Now you clap the parts of this word: cowgirl.”
a. If the student responds correctly, say, “Yes, the word parts are /cow/ /girl/.”
b. If the student responds incorrectly, say “No, the word parts are /cow/ (clap) /
girl/ (clap). Listen. Clap the parts of this word: cowgirl.”
c. Proceed to the actual items. Record the student’s actual response on Handout
7.13.
4. Rhyming words. Present at least ten items for rhyming words. Say, “Listen,
these words rhyme: cat-bat; car-far. These words do not rhyme: kite-far; fall-
there. Tell me if these words rhyme: eat-heat.”
a. If the student responds correctly, say, “Yes, eat and heat rhyme.”
b. If the student responds incorrectly, say, “Eat and heat do rhyme because they
sound the same. Tell me if these words rhyme: eat-heat.”
c. Proceed to the actual items. Record the student’s actual response on Handout
7.13.
5. Blending syllables. Next ask the student to blend and segment syllables. Say to
the student, “Say the word eat. Now put /m/ in front of eat. Say the word.”
a. If the student says the word correctly, say, “Yes. When you put /m/ in front of
eat, you get the word meat.”
b. If the student responds incorrectly, say, “When you put /m/ in front of eat,
you get the word meat. Say the word /eat/ with /m/ in front of it.”
6. Say to the student, “Say the word fast. Now put /er/ at the end of it.”
a. If the student says the word correctly (faster), say, “Yes. When you put /er/ at
the end of fast, you get the word faster.”
b. If the student responds incorrectly, say, “When you put /er/ at the end of fast,
you get the word faster. Say the word /fast/ with /er/ at the end of it.”
c. Proceed to the actual items. Record the student’s actual response on Handout
7.13.
7. Segmenting syllables. Next, ask the student to segment syllables. Say to the
student, “Say the first sound in ‘hit.’”
a. If the student says /h/, say “Yes, the first sound in hit is /h/.”
b. If the student is incorrect, say “The first sound in hit is /h/. Say the first
sound in ‘hit.’”
8. Next say, “Now listen to this word: invite. There are two parts to the word
invite. Say the two parts.”
a. If the student says /in/ /vite/, say “Yes, the two parts are /in/ /vite/.”
b. If the student is incorrect, say “No, the two parts are /in/ /vite/. Say the two
parts of the word invite.”
c. Proceed to the actual items. Record the student’s actual response on Hand-
out 7.13.
9. Next, assess the extent to which the student can delete onset sounds (initial con-
sonant sound of a word) and delete rimes (the vowel and the rest of the syllable
that follows).
10. Delete onset. Say to the student, “Say the word ‘seat.’ Now say ‘seat’ without
the /s/.”
a. If the student responds correctly, say “Yes, seat without the /s/ is eat.”
b. If the student responds incorrectly, say “No, seat without the /s/ is ‘eat.’ Say
‘seat’ without the /s/.”
c. Proceed to the actual items. Record the student’s actual response on Hand-
out 7.13.
11. Deleting rime. Say to the student, “Say ‘dust.’ Now say dust without the /
ust/.”
a. If the student responds correctly, say “Yes, dust without the /ust/ is /d/.”
b. If the student responds incorrectly, say “No, dust without the /ust/ is /d/.
Say dust without the /ust/.”
c. Proceed to the actual items. Record the student’s actual response on Hand-
out 7.13.
12. Next, assess the extent to which the student can blend and segment phonemes.
13. Blend phonemes. Say to the student, “I’m going to say some sounds. You put
the sounds together to make a word. /i/…/t/. Say the word.”
a. If the student responds correctly, say “Yes, /i/ and /t/ make it.”
b. If the student responds incorrectly, say, “/i/ and /t/ make it. Listen. /i/…/t/.
Say the word.”
c. Next, give a three-letter example. Say, “Listen to these sounds. /p/…/i/…
/g/. Say the word?”
d. If the student responds correctly, say “Yes, /p/ /i/ /g/ is pig.”
e. If the student responds incorrectly, say, “/p/ /i/ /g/ is pig. Listen. /p/…/i/…
/g/. Say the word.”
f. Proceed to the actual items. Record the student’s actual response on Handout
7.13.
14. Segment phonemes. Say to the student, “I’m going to say a word. I want you
to tell me the sounds in the word. Tell me the sounds in ‘top.’”
a. If the student responds correctly (/t/ /o/ /p/), say “Yes, the sounds in top are /t/ /o/
/p/.”
b. If the student responds incorrectly, say “No, the sounds in top are /t/ /o/ /p/.
Tell me the sounds in top.”
c.  Proceed to the actual items. Record the student’s response on Handout 7.13.
Interpretation Guidelines:
15. After administering the phonemic awareness assessment, the next step is to
determine if there is a pattern to the student’s errors or if there is a general lack
of skill in phonemic awareness. Ask: Is an error pattern evident in phonemic
awareness skills?
a. Identify whether there are one or two skill deficits or an overall general deficit
with phonemic awareness.
b. If a few skills emerge as difficult for the student, then those are targeted for
instruction. The evaluator can recommend some of the activities described
in the “Teach: Phonemic Awareness” section in this chapter. The evaluator
can also consider the “Teach: Targeted Instruction” activities described in
Chapter 6.
c. If no clear pattern emerges, then recommend some of the activities described
in the “Teach: Phonemic Awareness” section in this chapter along with the
“Teach: General Instruction” activities described in Chapter 6.
d. Following completion of Step 3, proceed to Step 4 and/or Step 5, depending
on the results of the SLA.
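For example (hypothetical results), a student who blends and segments word parts at 9/10 and 10/10 but deletes onsets at 3/10 and rimes at 2/10 shows a targeted deficit in deletion tasks, which would become the focus of instruction; a student who scores low across most tasks would instead receive general phonemic awareness instruction.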
Handout 7.7  Print Concepts Assessment Instructions


Materials Needed:
• Writing utensil
• A book appropriate for student’s age/grade
• Handout 7.14
Directions:
1. To conduct the assessment of print concepts, select a book at the student’s age/
grade level.
2. Say to the student, “I’m going to ask you some questions about how to read a
book. Please answer the best you can.”
3. Sit next to the student and ask each item presented in Handout 7.14. Use the items
in Table 7.14.1 for students at any grade, but only use the items in Table 7.14.2
for students in the first grade and above.
4. Mark in the correct column whether or not the student is able to complete the
task.
Interpretation Guidelines:
5. Tally up the responses and identify any missing skills.
6. Target missing skills with instruction.
Handout 7.8  Alphabetic Knowledge Assessment Instructions


Materials Needed:
• Writing utensil
• Handout 7.15
Directions:
1. Place the student copy of Handout 7.15 in front of the student. Say, “I’m going
to point to a letter. Please tell me the name of the letter.”
2. Do not time the student. Point to the first letter and say “What letter is this?”
3. Record the student’s response in Handout 7.15.
Interpretation Guidelines:
4. Summarize the results and determine if the student is missing knowledge of any
letter names. Missing letter names are targeted with instruction and are com-
bined with letter sound identification.
Handout 7.9  Letter-Sound Correspondence Assessment Instructions


Materials Needed:
• Writing utensil
• Handout 7.15
Directions:
1. Place the student copy of Handout 7.15 in front of the student. Say, “I’m going
to point to a letter. Please tell me the sound that the letter makes.”
2. Do not time the student. Point to the first letter and say “What sound does this
letter make?”
3. Record the student’s response on Handout 7.15.
Interpretation Guidelines:
4. Ask: Has the student mastered all of the individual letter sounds? Tally the
results and determine if the student is missing knowledge of any letter sounds.
a. If yes, then proceed to assess letter blends in Step 6.
b. If no, identify the missing letter sounds and target with instruction.
5. Next, examine the missing skills and determine if there is a pattern to the missing
sounds.
a. Does the student mix up visually similar letter sounds, such as “b” and “d” or
“v” and “w”?
b. Does the student mix up auditorily similar sounds, such as “i” and “e” or “g”
and “j”?
c. Is the student missing only vowel sounds?
d. Is the student missing only consonant sounds?
6. If a pattern emerges, target the error pattern with instruction using “Teach:
Letter-Sound Correspondence” activities (see Handout 7.20) combined with
“Teach: Targeted Instruction.” If no pattern emerges, use “Teach: Letter-Sound
Correspondence” with “Teach: General Instruction” (see Chapter 6).
Handout 7.10  Letter Blend Assessment Instructions


Materials Needed:
• Dry erase board and pen or a piece of paper with a pencil
• Handout 7.16
Directions:
1. Provide the student with the white board and marker or paper and pencil.
2. Say, “I’m going to write each letter that represents the sound I say. Watch me.”
Say, “/s/.” Write s on the board. Say, “/a/.” Write a on the board. Say, “/t/.” Write
t on the board.
3. Say, “Now what word is /s/ /a/ /t/?”
a. If the student says “sat,” say “Yes, the word is sat.”
b. If the student responds incorrectly, sound out the word again. Say, “/s/ /a/ /t/
is sat. What word is /s/ /a/ /t/?”
c. Say, “I will say more sounds and I want you to write the letter that repre-
sents each sound. Then tell me the word.”
4. Using Handout 7.16, administer the first ten items that measure both VC and
CVC words.
5. Record whether or not the student responds correctly for each item. When fin-
ished with the VC and CVC words, proceed to the next section that assesses
words that begin or end with letter blends.
Interpretation Guidelines:
6. Ask: Is there an error pattern evident?
a. Tally the results for the blending activity. Determine if there is a pattern for
certain letter blends and with the ability to blend.
b. If a pattern emerges, recommend some of the strategies in the “Teach: Letter-
Sound Correspondence” section (see Handouts 7.21 and 7.22) and the “Teach:
Targeted Instruction” described in Chapter 6.
c. If no pattern emerges, recommend some of the strategies in the “Teach: Let-
ter-Sound Correspondence” section (see Handouts 7.21 and 7.22) and the
“Teach: General Reading Instruction” in Chapter 6.
Additional Phonics Assessment Instructions (optional)


Materials Needed:
• Writing utensil
• Handout 7.17
• Set of flashcards with each word from Handout 7.17 on a separate card
Directions:
1. Say to the student, “I’m going to show some words on flashcards. Please read
each word the best you can.”
2. Present each flashcard to the student and say, “Read this word.”
3. Indicate words read correctly with a (+) or a “yes” or “no” on Handout 7.17.
4. If the word is read incorrectly, write exactly what the student says in the box.
Interpretation Guidelines
5. Ask: Is an error pattern evident?
a. Tally the results for the assessment. Determine if there is a pattern for certain
letter blends.
b. If a pattern emerges, recommend some of the strategies in the “Teach: Letter-
Sound Correspondence” section (see Handouts 7.23 and 7.24) and the Teach:
Targeted Instruction described in Chapter 6.
c. If no pattern emerges, recommend some of the strategies in the “Teach: Let-
ter-Sound Correspondence” section (see Handouts 7.23 and 7.24) and the
“Teach: General Reading Instruction” in Chapter 6.
Things to Consider 
• You may wish to create a version of this assessment using words to which the
student has been exposed in the curriculum. The words provided within Handout
7.17 are merely examples and the content of the assessment should be matched
to student level and needs.

Words in Handout 7.17


Consonant digraphs: chat, thin, mush, when, with
R-controlled syllables: burn, car, fir, sir, horn
Vowel digraphs/diphthongs: plain, boil, coat, glow, noon
Handout 7.11  Letter-Sound Correspondence: Introduction of Sounds


Compare the following sequence with the data obtained during the CBE Process to
determine where instruction should begin:
1. Initial Consonants ( m, n, t, s, p)
2. Short vowel and consonant combinations ( -at, -in, -ot)
3. Blends ( bl, dr, st)
4. Digraphs ( th, sh, ph)
5. Long vowels ( eat, oat)
6. Final e ( -ake, -ute, -ime)
7. Variant vowels and diphthongs ( -oi, -ou)
8. Silent letters and inflectional endings ( kn-, wr, gn, -es, -s).
Note: Adapted from Vaughn and Linan-Thompson (2004).
Handout 7.12  SLA Recording Sheet for Early Literacy Skills


Student Name: _________________________________ Date:_________ Grade:________

Phoneme Segmentation Fluency (PSF): record the score, the expected criterion (a), and whether it was met.
   Grade 1+ criterion: 40     Kindergarten criterion: 20 (b)
Letter Naming Fluency (LNF): record the score, the expected criterion (c), and whether it was met.
   Grade 1+ criterion: 37     Kindergarten criterion: 8
Letter Sound Fluency (LSF): record the score, the expected criterion, and whether it was met.
   Grade 1+ criterion: 27 (d)     Kindergarten criterion: 22 (e)
Nonsense Word Fluency (NWF): record the score, the expected criterion (a, f), and whether it was met.
   Grade 2+ criterion: 54     Grade 1 criterion: 27     Kindergarten criterion: 17 (b)

(a) Based on DIBELS Next Benchmark criteria. (b) Based on the winter benchmark, as PSF and NWF are not given until then. (c) Based on DIBELS 6th Edition Benchmark criteria. (d) Based on the fall 40th percentile from 2012-2013 AIMSweb normative data. (e) Based on the winter 40th percentile from 2012-2013 AIMSweb normative data, as LSF isn't administered until winter. (f) Score is correct letter sounds and does not indicate whole words read.

PSF
Expected level:
Obtained level:
Subtract obtained level from expected level = gap:
LNF
Expected level:
Obtained level:
Subtract obtained level from expected level = gap:
LSF
Expected level:
Obtained level:
Subtract obtained level from expected level = gap:
NWF
Expected level:
Obtained level:
Subtract obtained level from expected level = gap:
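For example (hypothetical scores): if a first-grade student's expected PSF level is 40 and the obtained score is 22, the gap is 40 - 22 = 18. Repeat the calculation for each measure administered.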

Things to Consider:
• It may be helpful to identify the expected criterion for the time of year (e.g., fall,
winter, and spring) in order to get a more accurate skill level.
Handout 7.13  Phonemic Awareness Assessment Tally Sheet


Use the following template to record student responses to each phonemic awareness
task.
A. Can the student blend word parts, segment word parts, and rhyme words?
Blending Word Parts Correct Response Accurate?
Practice Item  Listen and tell me what word you hear: /pan/ /cake/. What word do you hear?  Pancake
Prompt /…/ /…/. What word do you hear?
Items 1. /rain/ /bow/ rainbow
2. /bath/ /tub/ bathtub
3. /air/ /plane/ airplane
4. /tooth/ /brush/ toothbrush
5. /foot/ /ball/ football
6. /cow/ /girl/ cowgirl
7. /out/ /side/ outside
8. /lady/ /bug/ ladybug
9. /space/ /ship/ spaceship
10. /door/ bell/ doorbell
TOTAL ____/10

Segmenting Word Parts Correct Response Accurate?


Practice Item  I am going to clap for each part of the word cowboy: /cow/ (clap) /boy/ (clap). Now you clap the parts of this word: cowgirl.  /cow/ (clap) /girl/ (clap)
Prompt Now you clap the parts of this word:
Items 1. hallway /hall/ (clap) /way/ (clap)
2. bobcat /bob/ (clap) /cat/ (clap)
3. pathway /path/ (clap) /way/ (clap)
4. inside /in/ (clap) /side/ (clap)
5. birthday /birth/ (clap) /day/ (clap)
6. sunshine /sun/ (clap) /shine/ (clap)
7. doorbell /door/ (clap) /bell/ (clap)
8. snowball /snow/ (clap) /ball/ (clap)
9. cupcake /cup/ (clap) /cake/ (clap)
10. popcorn /pop/ (clap) /corn/ (clap)
TOTAL ____/10
Rhyming Words Correct Response Accurate?


Practice Item  Listen, these words rhyme: cat-bat; car-far. These words do not rhyme: kite-far, see-fall. Tell me if these words rhyme: eat-meat.  Yes
Prompt Tell me if these words rhyme: …
Items 1. bet-tie No
2. ray-say Yes
3. leg-head No
4. be-see Yes
5. sit-bed No
6. dip-sip Yes
7. bone-stone Yes
8. big-hide No
9. fat-sat Yes
10. date-fight No
TOTAL ____/10
B. Can the student blend and segment syllables?


Blend Syllables Correct Response Accurate?
Practice Item  Say the word: eat. Now put /m/ in front of eat. What is the word?  Meat
Prompt Say the word: …. Now put /…/ in front of…. What
word?
Items 1. /it/…/s/ sit
2. /ash/…/s/ sash
3. /an/…/m/ man
4. /end/…/m/ mend
5. /ill/…/s/ sill
TOTAL ____/5
Practice Item  Put these sounds together to make a word. Listen: /bak/…/er/. Say the word.  Baker
Prompt Listen to these sounds and put them together:
Items 1. /ba/…/by/ baby
2. /can/…/dy/ candy
3. /kit/…/ten/ kitten
4. /spi/…/der/ spider
5. /win/…/dow/ window
TOTAL ____/5
Practice Item  Put the /er/ sound at the end of fast. Say the word.  Faster
Prompt Put the /…/ sound at the end of:
Items 1. /er/…quick quicker
2. /s/…bite bites
3. /ful/…bash bashful
4. /ing/…end ending
5. /ly/…sad sadly
TOTAL ____/5
Segment Syllables Correct Response Accurate?


Practice Item  Listen to this word: ‘hit.’ Say the first sound in hit.  /h/
Prompt Listen to this word. ‘….’ Say the first sound in
….
Items 1. man /m/
2. dim /d/
3. can /c/
4. ten /t/
5. bit /b/
TOTAL ____/5
Practice Item  Listen to this word: invite. There are two parts to the word invite. Say the two parts.  /in/ /vite/
Prompt Listen to this word:…. Say the two parts.
Items 1. maybe /may/ /be/
2. happen /hap/ /pen/
3. lazy /la/ /zy/
4. butter /but/ /ter/
5. funny /fun/ /ny/
TOTAL ____/5
C. Can the student delete onset and rime?


Delete Onset Correct Response Accurate?
Practice Item  Say the word ‘seat.’ Now say seat without the /s/.  eat
Prompt Say: …. Now say … without the /…/.
Items 1. kite /k/ ite
2. dust /d/ ust
3. went /w/ ent
4. dash /d/ ash
5. date /d/ ate
6. wit /w/ it
7. fat /f/ at
8. fed /f/ ed
9. sun /s/ un
10. sand /s/ and
TOTAL ____/10

Delete Rime Correct Response Accurate?


Practice Item  Say the word ‘dust.’ Now say dust without the /ust/.  d
Prompt Say: …. Now say … without the /…/.
Items 1. gift /ift/ g
2. bump /ump/ b
3. past /ast/ p
4. task /ask/ t
5. gold /old/ g
6. went /ent/ w
7. bump /ump/ b
8. sold /old/ s
9. mask /ask/ m
10. just /ust/ j
TOTAL ____/10
D. Can the student blend and segment phonemes?

Blend Phonemes Correct Response Accurate?


Practice Item  I’m going to say some sounds. You put the sounds together to make a word. /b/…/i/…/t/. What word?  bit
Prompt Put these sounds together to make a word…
Items 1. /i/ /t/ it
2. /u/ /p/ up
3. /a/ /t/ at
4. /u/ /s/ us
5. /a/ /d/ ad
6. /g/ /e/ /t/ get
7. /b/ /u/ /g/ bug
8. /l/ /o/ /g/ log
9. /p/ /a/ /n/ pan
10. /m/ /o/ /p/ mop
TOTAL ____/10

Blend Phonemes Correct Response Accurate?


Practice Item  Put the /d/ sound at the end of sol. What word?  sold
Prompt Put the /…/ sound at the end (front) of …. What word?
Items 1. /g/ ift gift
2. /m/ old mold
3. /s/ end send
4. /p/ oke poke
5. /c/ age cage
6. cin /ch/ cinch
7. san /d/ sand
8. lun /ch/ lunch
9. plea /z/ please
10. teach /ez/ teaches
TOTAL ____/10
Segment Phonemes Correct Response Accurate?


Practice Item  I am going to say a word. I want you to tell me the sounds in the word. Tell me the sounds in top.  /t/ /o/ /p/
Prompt Tell me the sounds in …
Items 1. men /m/ /e/ /n/
2. end /e/ /n/ /d/
3. fun /f/ /u/ /n/
4. and /a/ /n/ /d/
5. fish /f/ /i/ /sh/
6. wit /w/ /i/ /t/
7. cup /c/ /u/ /p/
8. bed /b/ /e/ /d/
9. shop /sh/ /o/ /p/
10. mop /m/ /o/ /p/
TOTAL ____/10
Handout 7.14  Print Concepts Assessment Tally Sheet


Present each assessment item/prompt to the student. Mark the corresponding box if
the student is able or not able to answer the prompt correctly.
Table 7.14.1  Kindergarten and first grade
Prompt Yes No
Book Concepts
Front Cover “Show me the front of this book.”
Back Cover “Show me the back of this book.”
Title “Show me the title of the book.”
Author “Show me the author’s name.”
Directionality Concepts
Beginning of the text “Show me with your finger where I start reading.”
Left-to-Right “Show me with your finger which way I go as I read on the page.”
Top-to-Bottom “When I reach the end of the line, where do I go?”
Page-by-Page “Show me where I go when I reach the bottom of the page.” (Point to the bottom of the page.)
Word Concepts
Identify the first word on the page “Show me with your finger the first word on the page.”
Identify the last word on the page “Show me with your finger the last word on the page.”
Letter Concepts
Count letters in a word (Point to a word.) “Count the number of letters in this word.”
First Letter in a Word “Show me with your finger the first letter in a word.”
Last Letter in a Word “Show me with your finger the last letter in a word.”
Capital Letter “Point to a capital or big letter on the page.”
Lower Case Letter “Point to a lowercase or small letter on the page.”
TOTALS /15 /15

Table 7.14.2   First grade only


Prompt Yes No
Punctuation Concepts
Period (.) “What is this called (point to a ‘.’)?”
“What is it used for?”
Question mark (?) “What is this called (point to a ‘?’)?”
“What is it used for?”
Comma (,) “What is this called (point to a ‘,’)?”
“What is it used for?”
Exclamation Point (!) “What is this called (point to a ‘!’)?”
“What is it used for?”
TOTALS /8 /8
Handout 7.15  Letter Naming and Letter Sound Assessment


Letter Naming Assessment

B D q E t i P N m I L h

H w z V j O p y d S l R

U a X M Y F k u K A b e

Z x C Q s n T o r J G f

W c g v

Letter Sound Assessment

r c e d s u g p i n z w

m l o h a b t f v j k q

x y

Note: In the Letter Naming Assessment, the lowercase “l” and uppercase “I” are visually identical, so prompt the student to provide the other answer when the second of these letters is assessed.
Table 7.15.1  Tally sheet for letter names and sounds


Record the student’s accuracy with either letter names or letter sounds in the spaces below. Mark a
check mark to indicate if the student is accurate and an “x” to indicate the student is not accurate.

Letter Name                                        Letter Sound
Uppercase Letter   Accurate?/Error   Lowercase Letter   Accurate?/Error   Letter   Accurate?/Error
A a a
B b b
C c c
D d d
E e e
F f f
G g g
H h h
I i i
J j j
K k k
L l l
M m m
N n n
O o o
P p p
Q q q
R r r
S s s
T t t
U u u
V v v
W w w
X x x
Y y y
Z z z
Handout 7.16  Blending Sounds Assessment Tally Sheet


Use the following template to record the student’s responses to each letter blend
task. Have a dry erase board and pen or a piece of paper with a pencil available.

Vowel-Consonant words and Consonant-Vowel-Consonant words Correct Response Accurate?


Practice Item  “I’m going to write each letter that represents the sound I say. Watch me. /s/.” (Write s on the board.)
“/a/.” (Write a on the board.)
“/t/.” (Write t on the board.)
“Now what word is /s/ /a/ /t/?” “The word is sat.”
“I will say more sounds and I want you to write the letter that represents each sound. Then tell me the word.”  (Correct response: sat)
Prompt (Say the sound for each letter and have the student write each letter. Then have the student blend the letters.)
Items 1. /i/ /n/ in
2. /a/ /t/ at
3. /i/ /t/ it
4. /a/ /m/ am
5. /u/ /p/ up
TOTAL ____/5
6. /m/ /a/ /t/ mat
7. /h/ /o/ /p/ hop
8. /c/ /u/ /t/ cut
9. /t/ /a/ /p/ tap
10. /n/ /o/ /t/ not
TOTAL ____/5
Blends at the Beginning of Words and Blends at the End of Words Correct Response Accurate?
Practice Item  “I’m going to write each letter that represents the sound I say. Watch me.”
“/s/.” (Write s on the board.)
“/t/.” (Write t on the board.)
“/e/.” (Write e on the board.)
“/p/.” (Write p on the board.)
“Now what word is /s/ /t/ /e/ /p/?” “The word is step.”
“I will say more sounds and I want you to write the letter that represents each sound. Then tell me the word.”  (Correct response: step)
Prompt (Say the sound for each letter and have the student write each letter. Then have the student blend the letters.)
Items 1. /s/ /t/ /o/ /p/ stop
2. /f/ /l/ /a/ /p/ flap
3. /s/ /n/ /a/ /p/ snap
4. /t/ /r/ /i/ /p/ trip
5. /s/ /k/ /i/ /n/ skin
TOTAL ____/5
6. /m/ /a/ /s/ /t/ mast
7. /j/ /u/ /m/ /p/ jump
8. /h/ /a/ /n/ /d/ hand
9. /b/ /e/ /n/ /d/ bend
10. /b/ /u/ /n/ /k/ bunk
TOTAL ____/5
Handout 7.17  Additional Phonics Assessment Tally Sheet


Use the following template to record the student’s responses to each decoding task.
A. Can the student decode consonant digraphs?
Decoding Consonant Digraphs Student Response Accurate?
Prompt Show the first consonant digraph card. Say,
“What word is this?”
Items 1. chat
2. thin
3. show
4. when
5. with
TOTAL ____/5

B. Can the student decode R-controlled vowel syllables?


Decoding R-Controlled Vowel Syllables Student Response Accurate?
Prompt Show the first r-controlled vowel syllable card.
Say, “What word is this?”
Items 1. burn
2. car
3. fir
4. sir
5. horn
TOTAL ____/5
C. Can the student decode vowel digraphs/diphthongs?


Decoding Vowel Digraphs/Diphthongs Student Response Accurate?
Prompt Show the first vowel digraph/diphthong card. Say,
“What word is this?”
Items 1. plain
2. boil
3. coat
4. glow
5. noon
TOTAL ____/5
Handout 7.18  Teach: Phonemic Awareness: Sound Boxes Activity


Targeted Skill: Phonemic awareness
Purpose: To teach students how to segment the sounds of spoken words in sequence and to increase understanding of the positions of sounds in spoken and written words.
Setting: One-to-one, small-group, whole-group
Materials Needed:
• List of words for instruction
• Chips or tokens
• Paper and pencil to draw the sound boxes or premade sound boxes
• Picture cards or word cards with boxes below that represent the sounds of the
words (optional)
Directions:
1. Instruct students to draw a box with dividers to represent each phoneme in a
word. For example, the word cat has three phonemes, so students would draw a
box with three sections (see as follows).

2. Provide students with a chip or token for each box. Have them place the chip
above the box.
3. Model the procedure for the students. Slowly articulate the word and slide a chip
or token into the box when a phoneme or sound in the word is pronounced.
4. Have the student articulate the word slowly and slide the chip or token into the
box when a new sound is pronounced.
Considerations and Modifications:
• This technique can be used to identify beginning, middle, and ending sounds. For
example, say, “Where do you hear the /a/ sound in cat?” Student slides the chip
or token into the box or position where they hear the sound.
• An image of the word can be provided above the sound box.
• This technique can be used with: CV, CVC, CVCV, CVVC, CCVC, and multi-
syllabic words.
• The activity can be extended by having the student write the word after it is
created.
Evidence Base: Maslanka and Joseph (2002)
Handout 7.19   Teach: Guided Teaching of Print Concepts


Targeted Skill: Concepts of print
Purpose: The purpose is to increase the student’s knowledge of print concepts. Any
number of print concepts can be targeted.
Materials Needed:
• A book appropriate for the child’s interest level and age
• A list of prompts or skills to target
Setting: One-to-one (possibly small-group)
Directions:
1. Allow 10 minutes for the intervention.
2. Sit next to the student so that both of you can see the book.
3. Begin reading the story and point out the concepts of print that are being tar-
geted. The prompts are developed by the teacher but should be simple, descriptive statements that define the concept of print being taught.
a. For example, to teach the “title,” use the following prompt: Point to the title
of the book and say “The title of the book is (read title).”
b. A list of example prompts is provided in Table 7.19.1.
Considerations and Modifications:
• Have the student state the concept of print after the verbal prompt by the teacher.
For example, after the teacher says “The title of the book is (read title),” he or
she can then say “What is the title?” or “Can you point to the title?”
• The assessment described in Handouts 7.7 and 7.14 can be used to monitor the
student’s knowledge gains with print concepts.
Evidence Base: Lovelace and Stewart (2007)
Table 7.19.1   Example prompts


Concept Prompt
Book Concepts
Front Cover Hold the book up and say “Let’s look at the front of this book.”
Back Cover After reading the story, close the book and say “This is the back of the book.”
Title Point to the title of the book and say “The title of the book is (read title).”
Author Point to the author’s name and say “This is the author. He (or she) wrote the book.”
Directionality Concepts
Beginning of the text “This is the beginning of the story (point), so I’ll start here.”
Left-to-Right “As I read, I go left to right.” (Slide finger under words as you read.)
Top-to-Bottom “When I reach the end of the line, I go back and down to the next line.”
Page-by-Page “When I reach the end of the page, I go to the next page.” (Point)
Word Concepts
Identify the first word on the page “Here is the first word on the page.”
Identify the last word on the page “Here is the last word on the page.”
Letter Concepts
Count letters in a word (Point to a word.) “Here are some letters. Let me count them. 1, 2, 3…”
First Letter in a Word “This is the first letter in the word.”
Last Letter in a Word “This is the last letter in the word.”
Capital Letter “This is a capital letter. It is also called an uppercase letter.”
Lower Case Letter “This is a lowercase letter.”

Table 7.19.2   First grade only


Concept Prompt
Punctuation Concepts
Period (.) “This is a period. It tells us to stop at the end of a sentence.”
Question mark (?) “This is a question mark. It means someone is asking a question.”
Comma (,) “This is a comma. It tells us to pause.”
Exclamation Point (!) “This is an exclamation point. It tells us someone is speaking in an excited way or is shouting.”
Handout 7.20  Teach: Letter Identification with Letter-Sound Correspondence: Multisensory Teaching of Letter Names

Targeted Skill: Letter identification and letter-sound correspondence
Purpose and Description: To increase student’s knowledge of both letter identifi-
cation and letter sounds.
Materials Needed:
• Identified letters to teach students
• Shaving cream or Play-Doh
• Writing paper and pencil for students
Setting: Small-group or one-to-one
Directions:
1. Allow 30 minutes for the teaching session.
2. During the introduction session, model the letter and its sound to the student
using a teaching format of:
a. Model: Identify the letter and model its name and sound. [“This is the letter
“B” (trace with finger). It says /b/.”] Use various activities to model the letter
and its sound, such as magnetic letters, books, posters, etc.
b. Verbal prompt: Ask the student(s) to produce the name and its sound. Model the
letter and then ask the student to produce the name and sound.
c. Feedback: Provide corrective feedback if the student misidentifies the letter.
(“This is the letter ‘B.’ It says /b/.”) Provide praise if the student gets the let-
ter correct. (“That’s right, that’s the letter ‘B.’ It says /b/.”)
3. Following the introduction sessions, proceed to use the multisensory format to
model and elicit responses from the student (i.e., shaving cream and Play-Doh).
Use the same format of model-prompt-feedback.
4. After the multisensory session, proceed to the writing activity. Using the same
format as the previous two sessions, have the students produce the letter name
and sound using paper and a writing utensil.
Evidence Base: Lafferty et al. (2005)
Handout 7.21  Teach: Letter-Sound Correspondence: Guided Instruction of Letter Sounds

Targeted Skill: Letter-sound correspondence
Purpose and Description: To increase student’s knowledge of letter names and
letter sounds.
Materials Needed:
• Cards with vowels and consonants
Setting: Small-group or one-to-one
Directions:
1. Show the student the first letter on the card and say, “This is a.”
2. Ask, “What letter is this?”
3. Have the student respond. If he or she is accurate, say, “That’s right, the letter is a.” If the student is incorrect, provide an error correction (“This letter is a. What
letter is this?”) and then provide the next card.
4. Once the student has mastered letter names, match the sounds to the letters and
repeat the same process.
a. Show the student the letter a.
b. Say, “This is a. a says /a/. What is the sound of a?”
c. Repeat with each letter that the student has not mastered.
Considerations and Modifications:
• Begin by introducing one vowel and three or four consonants.
• The lesson can be altered by presenting a letter sound and having the student
write the letter name on a whiteboard.
• Images of the letters in the shape of related objects (e.g., a snake in the shape
of the letter s) can be used to further enhance the instruction. Also, asking the student to write the letter in the air (a kinesthetic movement) can further enhance instruction.
Evidence Base: Carnine et al. (2009); Dilorenzo et al. (2011)
Handout 7.22  Teach: Letter-Sound Correspondence: Visual Support


Targeted Skill: Letter-sound correspondence
Purpose and Description: To increase student’s knowledge of letter names and
letter sounds.
Materials Needed:
• Picture cards with animals that represent letters and letter sounds (cards are avail-
able for reproduction at www.readingrockets.org/strategies/alphabet_matching/)
• Images or materials that depict each letter in the shape of the letter (e.g., snake in
the shape of an s, a baseball and bat organized to look like a b) (optional)
Setting: Small-group or one-to-one
Directions:
1. Present each animal letter card with the name of the letter, name of the animal,
and sound of the letter.
2. Say, “Here is an e. Elephant starts with e. E says /e/.”
3. After presenting each card, have the student match the uppercase letter card
(Mama card) with the lowercase letter card (Baby card). When the student
matches each pair of cards, have them say the letter name and sound.
Considerations and Modifications:
• Images of letters can be used instead of the picture cards. Or the images can be
combined with the letter cards.
Evidence Base: Dilorenzo et  al. (2011); www.readingrockets.org/strategies/
alphabet_matching
Handout 7.23  Teach: Letter Blends: Word Boxes


Targeted Skill: Knowledge of letter blends
Purpose: To build acquisition or fluency of letter blends and the ability to identify
certain letter blends.
Setting: One-to-one, small-group, whole-group
Materials Needed:
• List of words for instruction
• Chips or tokens
• Word cards with boxes below that represent the sounds of the words
• Individual letter cards that represent the individual sounds or blends within the
word.
Directions:
1. Pass out the word cards to the students.
2. Provide students with letter cards for each sound or phoneme in the word.
3. Model the procedure for the student. Slowly articulate the word (e.g., “The word
is clap.”) and slide the corresponding letter card into the box when the sound in
the word is pronounced (cl, a, p).

[Illustration: a word card for “clap” with a box beneath it for each sound: cl, a, p]

4. Have the student articulate the word slowly and slide the letter (or letters) into
the box while pronouncing each sound in the word.
Considerations and Modifications:
• This technique can be used with: consonant-vowel (CV), CVC, CVCV, conso-
nant-vowel-vowel-consonant (CVVC), consonant-consonant-vowel-consonant
(CCVC), and multisyllabic words.
• Instead of using letter cards initially, the instructor may use a token for each
sound. The student slides a token into the box for each sound and then replaces
each token with a letter card. This can provide further repetition and identifica-
tion of specific phonemes and letter blends.
• The activity can be extended by having the student write the word after pro-
nouncing and/or writing a sentence using the word.
• This technique can also be used to work with the student in identifying begin-
ning, middle, and ending sounds in words.
− The teacher presents a word and says, “Where do you hear the /cl/ in clap?”
− The student then slides the letters into the box or position where they hear the
sound in the word.
Evidence Base: Joseph (2000)

Handout 7.24  Teach: Letter Blends: Word Building


Targeted Skill: Letter-sound correspondence and letter blends
Purpose and Description: To develop letter-sound correspondence and to build
ability to decode whole words. Students are given cards and practice creating new
words by changing a grapheme at the beginning, middle, or end of a word.
Materials Needed:
• Set of letter cards
Setting: Whole-group, small-group, or one-to-one
Directions:
1. Give each student a set of letter cards and review those letter sounds.
2. Pronounce a word and ask students to build the word with their cards (e.g., cat).
Write the word on the board, have students read chorally, and fix their cards if
they made a mistake.
3. Tell students to add, remove, or exchange one card to transform the word to the
next word (e.g., Now I want you to change the word cat to cap). Letter changes
should draw attention to each letter position within a word. Students then read
the word chorally.
4. Error-correction procedure:
a. If students have difficulty pronouncing a word after forming it, encourage an
attempt based on the letter sounds. If necessary, guide them through progres-
sively blending sounds together.
b. If students mistake the word for a similarly spelled word, write out both target
word and error word and help them analyze differences in letter-sound units.
Consideration and Modifications:
• Instead of the teacher saying a word and asking students to build the word, the
level of explicitness can be increased. Teachers can write the word on the board,
have students pronounce the word, and then have them build the word.
• This activity can be extended to a peer tutoring activity. Students are assigned to
pairs and take turns reading the words that they just built in the group instruction
described previously. Students can be given premade flashcards or create their
own flashcards as part of the group instruction.
• The activity can also be extended with a sentence reading activity. Students read
sentences that contain the words they just built.
• Word building can be used with individual letter-sound correspondence or with
letter blends.
Evidence Base: Rathvon (2008)
Chapter 8
CBE Reading Comprehension

8.1  Chapter Preview

This chapter describes the process for CBE Reading Comprehension. The chapter
is structured around the four phases of the CBE Process and will walk the reader
through the entire process for Reading Comprehension. The chapter discusses spe-
cific assessment techniques and interventions based on the results.

8.2  CBE Reading Comprehension

Reading comprehension is the ability to derive meaning from text and is the ultimate
goal of reading (NICHHD 2000). It is a skill that rests on four factors: (a) decod-
ing, (b) vocabulary, (c) meta-cognitive skills or the ability to monitor one’s meaning
while reading, and (d) background knowledge of content (Klinger 2004; NICHHD
2000; Perfetti and Adlof 2012) (see Fig. 8.1). The CBE Process for Reading Com-
prehension involves assessing each one of these domains in a systematic manner.
For the CBE Process for Reading Comprehension, the evaluator will work
through the aforementioned factors and determine which one is contributing to the
student’s lack of comprehension. As with Chapters 6 and 7, the CBE Reading Com-
prehension Process moves through four phases, within which are a series of steps that
involve three types of tasks (see Fig. 8.2):
1. Ask: These are questions that signify a starting point for a step. Assessment
results are collected and interpreted in order to answer the question.
2. Do: These are assessment activities conducted with the student.
3. Teach: These are instructional recommendations based on the results of the CBE
Process.
The entire process is outlined in Handout 8.1, and Table 8.1 presents the steps in a
linear form. All of the handouts used for the CBE Process for Reading Comprehen-
sion are included at the end of the chapter, and the entire list is displayed in Table 8.2.


Fig. 8.1   Reading comprehension and its supporting skills

8.3  Problem Identification

8.3.1 Step 1: Ask: Is There a Problem? Do: Initial Problem Identification

The first step is identifying if a problem in reading comprehension exists. A reading
difficulty can be initially identified in several ways, such as a review of records, an
interview with the student or teacher(s), and/or various assessments such as CBM,
permanent products, or other tests. This initial identification can be as simple as a
teacher reporting a concern during an interview, or it can be the result of formal
test results.

8.3.2 Step 2: Ask: Does it Warrant Further Investigation? Do: Survey-Level Assessment

After initially identifying a reading concern, the next action is to verify the prob-
lem and determine if its severity warrants further investigation. The question is
answered by conducting a Survey-Level Assessment (SLA) using both oral reading
fluency (ORF) probes and reading MAZE probes. ORF and MAZE are both used to
ensure a comprehensive picture of the student’s reading development.
SLA is conducted to determine a student’s instructional reading level. To be con-
sidered within the instructional range for a given grade-level with ORF, students

Fig. 8.2   Ask-Do-Teach cycle for steps within the CBE Process
Table 8.1   Steps of CBE Process for Reading Comprehension

Problem Identification
  Ask: Is there a problem?  Do: Initial identification
  Ask: Does it warrant further investigation?  Do: Survey-Level Assessment

Problem Analysis
  Ask: Does the student have sufficient accuracy and rate at grade-level with ORF and MAZE?  Do: Examine rate and accuracy as described in Chapter 6
  Ask: Is the student missing critical vocabulary?  Do: Examine vocabulary of content and passages
  Ask: Is the student monitoring comprehension?  Do: Examine meta-cognitive skills
  Ask: Does the student have sufficient background knowledge for comprehension?  Do: Examine the impact on retell after orienting the student to the content of the text before reading

Plan Implementation
  Ask: Is the instructional focus decoding?  Teach: Follow recommendations in Chapter 6
  Ask: Is the instructional focus vocabulary?  Teach: Vocabulary instruction
  Ask: Is the instructional focus meta-cognitive skills?  Teach: Meta-cognitive skills across before/during/after reading framework
  Ask: Is the instructional focus teaching content?  Teach: Preview and discuss content before reading

Plan Evaluation
  Ask: Is student progressing toward his/her goal?  Do: Monitoring fidelity and student progress

Table 8.2   List of Handouts for CBE Process for Reading Comprehension
Handout Title
Instructions and Process Sheets
8.1 Curriculum-based evaluation in reading comprehension flowchart
8.2 Survey-level assessment in MAZE instructions
8.3 MAZE practice instructions
8.4 Vocabulary list instructions
8.5 Comprehension interview instructions
8.6 Retell instructions
8.7 Background Knowledge Discussion instructions
Tally and Assessment Sheets
8.8 Survey-level assessment results for MAZE
8.9 Vocabulary list recording sheet
8.10 Comprehension interview recording sheet
8.11 Retell rubric and questions
Strategy Sheets
8.12 Teach: Peer tutoring in vocabulary
8.13 Teach: Before reading: Previewing and developing questions
8.14 Teach: During reading: Click or clunk
8.15 Teach: During reading: Paragraph shrinking
8.16 Teach: After reading: Summarizing and question-generating
8.17 Teach: After reading: Partner retell
8.18 Teach: Background knowledge: Connections to self, world, text
Additional forms
8.19 Story map template
8.20 Directions for Vocabulary-matching
8.21 Vocabulary-matching list template and example

should read at or above the fall 25th percentile based on national norms with at least
95 % accuracy (Hosp et al. 2006). For the reading MAZE, students should score at
or above the fall 25th percentile based on national norms with at least 80 % accuracy
to be considered proficient. Scores between 60 and 80 % accuracy are questionable,
and below 60 % likely indicates a frustrational range (Howell and Nolet 2000). After com-
pleting an SLA, evaluators conduct a gap analysis and quantify the problem.
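For evaluators who tally SLA results in a spreadsheet or a short script, the sketch below shows one way the accuracy calculation and the MAZE guidelines above (at least 80 % accuracy; 60–80 % questionable; below 60 % likely frustrational) could be applied. All student scores in the sketch are hypothetical placeholders, the 25th percentile rate criterion is not included, and the ORF accuracy formula (WRC divided by WRC plus errors) is assumed from Chapter 6; the gap analysis itself follows Handout 6.5 rather than this sketch.

    # A minimal sketch (not from the book): applying the accuracy calculation and
    # the MAZE accuracy guidelines described above. All scores below are
    # hypothetical, and the ORF accuracy formula is assumed from Chapter 6.

    def accuracy(correct, attempted):
        """Percent accuracy: correct responses divided by attempted responses, times 100."""
        return 100.0 * correct / attempted

    def maze_range(acc):
        """Interpret MAZE accuracy using the 80 % and 60 % guidelines cited above."""
        if acc >= 80:
            return "meets the 80 % accuracy guideline"
        if acc >= 60:
            return "questionable (60-80 % accuracy)"
        return "likely frustrational (below 60 % accuracy)"

    # Hypothetical grade-level scores
    orf_wrc, orf_errors = 62, 8          # median words read correct and errors across 3 ORF probes
    maze_cwc, maze_attempted = 18, 26    # circled words correct and total attempted on MAZE

    orf_accuracy = accuracy(orf_wrc, orf_wrc + orf_errors)
    maze_accuracy = accuracy(maze_cwc, maze_attempted)

    print(f"ORF: {orf_wrc} WRC, {orf_accuracy:.0f} % accuracy (compare to 25th percentile rate, 95 % accuracy)")
    print(f"MAZE: {maze_cwc} CWC, {maze_accuracy:.0f} % accuracy, {maze_range(maze_accuracy)}")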
SLA directions: 
1. Begin by administering three 1-minute CBM ORF probes at expected grade
level and use the median WRC/Errors as the score (Directions for SLA are
included in Chapter 6 in Handout 6.2 and the formulas for calculating rate and
accuracy are “A” and “B” in Fig. 6.3, respectively).
a. You will need student copies of the passages and “evaluator copies” for
recording responses.
b. Record the student’s scores on Handout 6.5. If the student’s score is below
criteria, then administer reading passages from previous grade-levels until
criteria are met.
c. Complete the bottom portion of Handout 6.5 to determine the severity of the
problem.

Fig. 8.3   Formulas for rate and accuracy on Maze

2. Now conduct an SLA with reading MAZE. Administer one 3-minute CBM read-
ing MAZE probe at expected grade level. Directions for SLA are included in
Handout 8.2 and the formulas for calculating rate and accuracy are displayed in Fig. 8.3. A practice
example and directions for MAZE are provided in Handout 8.3.
3. Record the student’s scores on Handout 8.8. If the student scores below grade-level,
then administer reading passages from previous grade-levels until criteria are met.
a. Complete the bottom portion of Handout 8.8 to determine the severity of the
problem.
4. Ask: Does it warrant further investigation?
a. If the student meets criteria for both rate and accuracy at grade-level on both
ORF and MAZE, normative data comparisons would not indicate reading
comprehension is a deficit for the student. However, the evaluator may wish
to further assess reading comprehension skills.
b. If the student is below grade-level criterion for either ORF or MAZE in
comparison to normative data or benchmark standards, proceed to Problem
Analysis.
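The survey-down logic in the directions above (start at the expected grade level, then administer earlier grade-level passages until the criteria are met) can be summarized in a few lines. In this sketch the MAZE scores and the single rate and accuracy criterion are hypothetical; in practice each grade level is compared against its own normative criterion.

    # Minimal sketch of the survey-down step in the SLA directions above.
    # All scores and criteria are hypothetical placeholders; actual criteria
    # come from the evaluator's norm tables and differ by grade level.

    def meets_criteria(rate, acc, rate_criterion=15, acc_criterion=80.0):
        return rate >= rate_criterion and acc >= acc_criterion

    # Hypothetical MAZE results: grade level -> (circled words correct, accuracy %)
    maze_results = {5: (9, 55.0), 4: (12, 71.0), 3: (17, 85.0)}

    instructional_level = None
    for grade in sorted(maze_results, reverse=True):   # start at the expected grade, work down
        rate, acc = maze_results[grade]
        if meets_criteria(rate, acc):
            instructional_level = grade
            break

    print(f"Criteria first met at grade {instructional_level}")   # grade 3 in this hypothetical case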

Things to consider
• As mentioned in Chapter 6, some evaluators prefer to administer the easier
grade-level material first before progressing to higher grade-level material
to manage frustration with the task.
• It may be helpful to administer the ORF and MAZE probes in different
sittings to avoid burn-out or response fatigue.

8.4  Problem Analysis

In the Problem Analysis phase of the CBE Reading Comprehension Process, the
first step is to examine the student’s decoding abilities (i.e., the rate and accuracy).
Because automatic and fluent reading sets the stage for reading comprehension
(Therrien et al. 2012), it is examined first in an attempt to determine if it could ex-
plain why a student struggles with reading comprehension. If the student’s reading
rate and accuracy meet criteria at grade level, then the evaluator proceeds to ex-
amining the student’s vocabulary and meta-cognitive skills. The Problem Analysis
phase of CBE Reading Comprehension Process begins with Step 3.

8.4.1 Step 3: Ask: Does the Student Have Sufficient Rate and Accuracy at Grade-Level with ORF? Do: Examine Rate and Accuracy as Described in Chapter 6

The first step in determining why a student may have a reading comprehension
deficit is to identify the student’s level of decoding. Decoding is necessary for com-
prehension because students must be able to read the text before they can draw
meaning from it (Howell 2008). It is the ability to decode that allows access to text,
which in turn sets the stage for reading comprehension (Carnine et al. 2009; Howell
2008; NICHHD 2000).
1. Ask: “Does the student have sufficient rate and accuracy at grade-level with
ORF and MAZE?”
a. After administering the SLA, examine the student’s rate and accuracy with
grade-level material. If the student does not meet criteria for rate and accuracy, the
evaluator is referred to Step 3 of the CBE Process in Decoding (see Handout
6.1). The steps for analyzing the student’s accuracy and rate are described in
Chapter 6.
b. If the rate and accuracy meet criteria, then proceed to Step 4.

8.4.2 Step 4: Ask: Is the Student Missing Critical Vocabulary? Do: Examine Vocabulary of Content and Passages

The goal of this step is to determine if the student’s lack of vocabulary knowledge
is contributing to comprehension difficulties. There are two types of vocabulary to
consider: (a) academic vocabulary and (b) content-specific vocabulary. Definitions
are provided next.
Academic vocabulary  These are words that are used across subjects to understand
and organize information. They are words critical to understanding the concepts
taught in any topic. Examples of academic vocabulary words include: analyze,
characteristic, distinguish, emphasis, hypothesize, sequence, transition, and utilize.
Resources for academic vocabulary are listed in Table 8.3.
Content-specific vocabulary  These are words that are specific to a particular topic
or content area. For example, the terms “pregnancy” and “infectious disease” are
important for understanding the subject of health, but not as important for math-
ematics (in which words like “integer” and “exponent” are important). Resources
for content-specific vocabulary are listed in Table 8.3.
To conduct Step 4:
1. Analyze errors from work samples, review of records, and errors made on the
Survey-Level Assessment (both ORF and MAZE) to determine if student is
missing academic or content-specific vocabulary words.

Table 8.3   List of resources for vocabulary word banks and creating vocabulary lists

Academic vocabulary resources
  Academic vocabulary: http://www2.elc.polyu.edu.hk/cill/eap/wordlists.htm
  English companion: http://www.englishcompanion.com/pdfDocs/acvocabulary2.pdf
  Academic vocabulary in use: http://assets.cambridge.org/97805216/89397/frontmatter/9780521689397_frontmatter.pdf
  Academic word list (Coxhead): http://www.uefap.com/vocab/select/awl.htm and http://www.victoria.ac.nz/lals/resources/academicwordlist/

Content-specific vocabulary resources
  Building academic vocabulary: teacher’s manual (Marzano and Pickering 2005, ASCD)
  Marzano research laboratory: Marzanoresearch.com

a. If an academic or content-specific vocabulary problem is suspected, verify
by conducting the next steps. These steps will be conducted separately for
academic and content-specific vocabulary if the student demonstrates deficits
in both.
b. If no issue with vocabulary is evident, then proceed to Step 5 of the CBE
Reading Comprehension Process.
2. Create word lists to assess each type of vocabulary in isolation.
a. To create a list of content-specific vocabulary, evaluators can generate lists
from the key content words in the text or use common lists found on various
websites. The glossary or bolded words from the textbook in the subject area of
difficulty, teacher records, or work products can inform vocabulary lists.
b. To create a list of academic vocabulary, use common word lists for the stu-
dent’s grade level and age.
3. Administer the vocabulary list and verify if the student is struggling with either
type of vocabulary word. Directions are provided in Handout 8.4 and a tally
sheet is provided in Handout 8.9.
4. Ask: “Is the student missing critical vocabulary?”
a. If Yes, then provide instruction in vocabulary (see the “Teach: Vocabulary”
section later in the Chapter).
b. If No, move on to Step 5.

Things to consider
• Vocabulary concerns and decoding errors can be examined together to
improve focus of instruction. Instruction may center around teaching spe-
cific vocabulary words or include teaching specific word parts. If a student
shows deficits with word parts, the DISSECT strategy may be useful (see
Handout 6.15 in Chapter 6).

Table 8.4   Meta-cognitive strategies. (Howell 2008)

Orienting: Involves activating background knowledge, setting goals for reading, and determining the demands of the task. The reader forms a general impression of what he or she is going to read.
Planning: Developing a plan and purpose for reading, which can include previewing headers and key terms, looking at questions at the end of the reading, examining figures/illustrations, and identifying key sections of the reading.
Executing: Reading the text and using strategies to ensure comprehension occurs. Examples include taking notes, highlighting words, and paraphrasing what was read.
Monitoring: Monitoring comprehension while reading and seeking clarification when comprehension does not occur.
Checking and evaluation: Reviewing and interpreting what is read to ensure there is an understanding of what was read. Includes tying information to background knowledge and seeking congruence.
Elaboration: The process of applying and extending information gained from text. Includes defining vocabulary, concepts, and symbols; drawing conclusions; and paraphrasing.

8.4.3 Step 5: Ask: Is Student Monitoring Comprehension? Do: Examine Meta-Cognitive Skills

At this point, you have ruled out or addressed decoding and lack of vocabulary as
reasons for the student’s difficulty with comprehension. The focus of this step is
on the extent to which the student is aware of his or her own reading comprehen-
sion. Is the student “thinking about thinking” and aware of the level of compre-
hension while reading? Successful readers read text with a purpose, are actively
pursuing meaning from the text, adjust reading rate in response to text complex-
ity, seek clarification, and use various strategies to monitor and obtain meaning
(e.g., take notes, highlight, summarize, etc.). In addition, successful readers will
preview the material, set a goal for reading, and then monitor the success toward
that goal (Carnine et al. 2009; Howell 2008). Table 8.4 summarizes six meta-cognitive
strategies that readers use to gain meaning from reading. Step 5 of the CBE Reading
Comprehension Process is organized around the six meta-cognitive strategies listed
in Table 8.4.
There are two methods used to conduct Step 5. They are (a) conducting an inter-
view with the student while he or she reads, and (b) examining the student’s ability
to retell what he or she has read. Both methods are described next.

8.4.4  Comprehension Interview

Conduct an interview while the student reads a selected passage. For this interview,
the student will be asked to read a passage and to “think aloud” as he or she reads.
The evaluator observes the student for evidence of the reading comprehension strat-

egies listed in Handouts 8.5 and 8.10, and after the student reads the passage, the
evaluator follows up by asking questions to clarify the student’s use of reading strat-
egies. The evaluator also prompts thinking aloud by asking questions as the student
reads. The Interview used for this step is provided in Handout 8.10. The interview
was developed by considering a “before/during/after” orientation to reading and
by considering the meta-cognitive strategies listed in Table 8.4 (cf. Howell 2008;
Howell and Nolet 2000). The instructions for Step 5 are listed in Handout 8.5, and
the Comprehension Interview is provided in Handout 8.10:
1. Identify a passage from which the student reads with sufficient accuracy and rate.
2. Explain to the student that you want him or her to “think aloud” as he or she
reads and that you will ask questions during the reading.
a. Say to the student, “I want you to read this text/passage aloud. I want you to
“think aloud” as you read because I want to understand how you read and
how you make sense of what you read. I will ask you questions to help me
understand how you read. First, let me ask how you prepare yourself to read.
What do you do before you read this passage/text?”
b. Proceed to ask the student questions or encourage the student to explain what
he or she does before reading a passage or text. Encourage the student to
speak and elaborate by saying “Tell me more.” or “Explain that more fully.”
c. Mark the skills observed in the Comprehension Interview under the section
“Before Reading” on Handout 8.10.
3. Next, have the student read and observe the student for the skills listed in the
“During Reading” section of the Comprehension Interview. Mark the student’s
skills on that section in Handout 8.10.
a. Say, “Okay, now begin reading and talk aloud while you read. Pretend I am
a student and you are the teacher. How can I make sure I understand what is
being read?” Having the student read aloud makes the skills that the student
uses more observable and measurable.
b. Ask questions as the student reads to clarify each skill. If the student requires
prompting or assistance to identify the skill, then consider the skill to be par-
tially observed.
4. When the student finishes, ask him or her to explain what he or she does after
finishing a passage or text.
a. Say, “Now that you are finished, what do you do after you read to make sure
you understand what was read?”
b. Ask questions to clarify the skills listed in the “After Reading” section of the
Comprehension Interview on Handout 8.10.
5. Look over the Comprehension Interview and ask any clarifying questions to
ensure you have assessed each skill listed. After the student finishes reading the
passage, ask any follow-up questions to determine if he or she is using the skills
within the interview.
a. Mark the appropriate column to indicate if the skill was observed, partially
observed, or not observed in the Comprehension Interview (Handout 8.10).
6. Proceed to conduct the Retell before interpreting results.

8.4.5  Retell: Constructing Meaning from Text

After conducting the interview, ask the student to read a different passage and have
the student retell and summarize what was read. This activity helps determine if
the student is monitoring the meaning of the text, and if he or she understands the
structure of the text. The retell also can be used to answer some of the items on
the Comprehension Interview. Directions for conducting the Retell are provided in
Handout 8.6, and an example of scoring rubrics for both narrative and expository
texts are provided in Handout 8.11:
1. Gather approximately 250-word passages or reading texts that are both exposi-
tory and narrative, depending on the student’s grade level and the content areas
being assessed. Using the student’s textbook or curriculum to locate passages
may be the most straightforward option to identify text for this assessment.
a. Consider using a recording device so that you can replay the retell and accu-
rately interpret the student’s response.
2. Say to the student, “I want you to read this passage to yourself. I will then have
you tell me about what you read.”
3. Have the student read untimed. After the student finishes the passage, ask the
student to summarize what was read.
a. Say, “Please tell me about what you read.”
b. After the student provides a response, score the response using the rubric pro-
vided in Handout 8.11. You may also wish to create your own rubric, based
on your standards and curriculum.
After both the Interview and the Retell, determine if the student is monitoring his or
her comprehension while reading.
1. Ask: “Is the student monitoring his or her comprehension?”
a. If yes, then reconsider the problem (Problem Identification) and/or examine
background knowledge (see the “Expanding Knowledge” section).
b. If no, examine the results of the Interview and Retell to select a strategy to
use.
i. If the student scored low on the Comprehension Interview, use strategies
to re-teach the area of deficit.
1. For “Before Reading”, teach the student to develop questions and pre-
view the text (see Handout 8.13).
2. For “During Reading”, teach the student to use strategies to monitor
meaning (see Handouts 8.14 and 8.15).
3. For “After Reading”, teach the student to summarize and answer ques-
tions about the reading (see Handout 8.16).
ii. If the student scored low on Retell, teach partner retell (see Handout 8.17)
and/or story mapping (see the “Story Mapping” section later in the Chap-
ter). The instruction will be adjusted depending on whether the student
was able to provide accurate Retell with prompting or without.

Things to Consider
• Prompting during retell. You may want to determine if the student is able
to retell the text with assistance, as this can tailor instruction according
to the Instructional Hierarchy (e.g., more fluency-based than acquisition-
based). If the student needed assistance to perform a skill, then fluency
may be the focus of instruction. If the student could not perform the skill
at all, then acquisition of the skill is likely the focus.
1. After the Retell assessment, consider asking questions to prompt the
student’s retell and to determine if the student is identifying relevant
information from the passage. The specific questions asked are based
on the content and whether the text is expository or fictional. Examples
of questions to consider are provided in Handout 8.11.
2. Ask: With prompting/assistance, can student monitor and construct
meaning?
a. If yes, teach the student to monitor meaning and construct meaning
independently. The focus is on building fluency with this skill and
the strategies listed in Handouts 8.16 and 8.17 may be appropriate
if they are adjusted to target fluency instead of acquisition of the skill.
b. If no, then the student needs to be taught explicitly to monitor mean-
ing and to understand the structure of text.
• Teach the missing skill. There are many instructional strategies that may
be a good match based on the problem analysis. The guiding rule, how-
ever, is that if the student cannot perform the skill, then the student requires
specific, direct instruction to learn that skill. If the student can perform the
skill with assistance, then the student requires fluency building with that
skill.
• Expository and narrative texts. Consider assessing the student’s meta-
cognitive skills using both informational (expository) and fictional (narra-
tive) text. Have the student read each type of text and conduct an interview
and a retell.
• Interview educators familiar with the student. Younger students may
not be able to verbalize certain strategies or skills, so interviewing those
familiar with the student may provide more information about the stu-
dent’s meta-cognitive abilities. However, be sure that the interviewee pro-
vides not only observations but also evidence of the skill to avoid decisions
based solely on perception or subjective statements.
• CBM Retell. Have the student read an ORF passage and then provide
1-minute for a retell. This is a quick procedure to efficiently assess reading
comprehension. Follow procedures and directions outlined by DIBELS
(see http://dibels.org and Good and Kaminski 2011). This procedure could
serve as an alternative assessment for the Retell procedure described ear-
lier, or it could be part of the Survey-Level Assessment.

8.4.6 Step 6: Ask: Does the Student’s Background Knowledge Support Text Content? Do: Examine Background Knowledge

Background knowledge refers to the student’s knowledge accumulated through formal
and informal instruction and life experiences. Background knowledge helps us
make sense of new information and therefore is critical for learning. The National
Research Council states that “all learning involves transfer from previous experi-
ences. Even initial learning involves transfer that is based on previous experiences
and prior knowledge” (2000, p.  236). When students encounter text that links to
their prior knowledge or experiences, they are more likely to comprehend the infor-
mation. Consider your ability to comprehend a section of informational text about
the tax code, baseball, or orchestra. You will engage better with the text, understand
more, and retain it longer if your background knowledge supports the new
information.
If you have ruled out decoding, vocabulary, and meta-cognitive skills as con-
tributing factors to a student’s difficulty with comprehension, examine background
knowledge. Background knowledge should be assessed for all students before be-
ginning any new unit about a topic, as it is essential to determine what students
know to ensure instruction is appropriate for every student. There are a number of
ways to assess background knowledge including use of prediction or anticipation
guides (Buehl 2008), carousel or gallery walks, and free discussion (Cossett Lent
2012). This section describes a discussion method for assessing background
knowledge.

8.4.7  Background Knowledge Discussion

In this strategy, students are asked a question intended to prompt a discussion re-
lated to the reading. The evaluator determines if the student understands the topic.
Directions are provided in Handout 8.7. This assessment should be repeated for
each topic area of concern:
1. Gather reading passages containing the topic to be assessed.
2. Preview the reading passages and identify the main theme and 3–5 key points that
are essential to understanding the passage. Identify important vocabulary words.
3. Explain to the students that you want them to read a passage but that first, you will
have a conversation to understand what they may already know about the topic.
4. Begin with an open-ended question, such as “Tell me what you know about
____.” Take notes while the student answers and look for identification of the
key points you identified.
5. Next ask specific questions about the key points and vocabulary that you identi-
fied. Record what the student says.

Interpretation Guidelines 
6. Ask, “Does the student’s background knowledge support text content?” Review
your notes and determine if the student’s background knowledge supports the
content of the text. As a general guideline, the student should know the majority
of the key points that you identified and not present inaccuracies.
a. If yes, then you can reasonably conclude that the student’s background knowl-
edge is sufficient for that particular topic.
b. If no, then recommend teaching strategies that will activate or build back-
ground knowledge (see Teach: Background Knowledge section and Handouts
8.13 and 8.18).

Things to Consider
• If the student struggles to define vocabulary words related to the topic,
consider Step 4. Vocabulary is related to background knowledge, so the
student may need vocabulary instruction as part of background knowledge
instruction.
• Conduct the Background Knowledge Discussion across multiple topics to which the student
has been introduced but has struggled to read about, and include topics with
which the student is familiar. This step provides additional information to
determine if background knowledge is contributing to reading difficulties.

8.5  Plan Implementation

After conducting specific-level assessment in the problem analysis phase of the
CBE Reading Comprehension Process, the specific problem with reading compre-
hension should be clear. With that information, three steps follow: (a) a strategy or
intervention that is matched to the student’s skill deficits is selected, designed and
implemented, (b) a goal is set, and (c) ways to measure fidelity and progress are
determined.
We describe four general instructional foci (labeled as “Teach”) to use depending
on the results of the CBE Reading Comprehension Process: “Teach: Decoding”
(described in Chapter 6), “Teach: Vocabulary,” “Teach: Meta-Cognitive Skills,” and
“Teach: Background Knowledge”. The specific strategies are described in Hand-
outs 8.12 to 8.18 and are listed in Table 8.5. Listing all available intervention strate-
gies for reading comprehension needs is beyond the scope of this book. The key
to selecting a strategy is to use problem analysis results to guide the selection, and
use formative assessment to ensure it results in student benefit. This section will
describe the overall instructional focus for a given result of the CBE Process and
share a few evidence-based strategies in the Handouts. Educators are encouraged to
explore other resources to locate additional strategies. A list of resources that share
instructional strategies is presented in Table 8.6.

Table 8.5   Instructional strategies

Decoding instruction: Accuracy and rate, depending on results of CBE Decoding Process
Direct instruction in vocabulary: Explicit teaching of missing vocabulary words
Peer tutoring of vocabulary: Direct teaching of vocabulary words
Previewing and developing questions: “Before Reading” comprehension skills, planning and orienting to reading
Click or clunk: Self-monitoring and active engagement with text, “During Reading” comprehension skills
Paragraph shrinking: Self-monitoring, “During and After Reading” comprehension skills
Summarizing and generating questions: “After Reading” comprehension skills, paraphrasing and elaborating with text
Partner retell: “After Reading” comprehension skills, identifying key concepts, understanding structure of text
Story mapping: “After Reading” comprehension skills, identifying key concepts, understanding structure of text
Connections to self, world, text: Activating background knowledge in order to facilitate reading comprehension

Table 8.6   List of resources for reading comprehension instructional strategies

Best practices in curriculum-based evaluation and advanced reading. Howell (2008). Best practices in school psychology V. NASP
RTI in the classroom: guidelines and recipes for success. Brown-Chidsey, Bronaugh, & McGraw (2009). Guilford Press
The ABCs of CBM. Hosp, Hosp, & Howell (2006). Guilford Press
Intensive interventions for students struggling in reading and mathematics. Vaughn, Wanzek, Murray, & Roberts (2012). Center on Instruction. http://www.centeroninstruction.org/files/Intensive%20Interventions%20for%20Students%20Struggling%20in%20Reading%20%26%20Math.pdf
Effective school interventions. Rathvon (2008). Guilford Press
Reading rockets. www.readingrockets.org
Effective instruction for adolescent struggling readers. Boardman et al. (2008). Center on Instruction. http://www.centeroninstruction.org/files/Practice%20Brief-Struggling%20Readers.pdf
Intervention central. www.interventioncentral.org

8.5.1  Teach: Vocabulary

If the student needs vocabulary instruction, he or she needs direct teaching of the
words that will enable him or her to understand and gain more meaning from the texts
that he or she reads. Two approaches to instruction are described: a direct instruction
approach and a peer tutoring strategy. Additionally, the student may benefit from
being taught specific word parts or instruction on using context clues to define
words (see the DISSECT strategy in Handout 6.15 in Chapter 6).

Direct instruction  Academic and content-specific vocabulary unfamiliar to the
student indicates the need for direct instruction. Following problem analysis, miss-
ing vocabulary and error patterns would be evident, generating specific instruc-
tional needs.
Vocabulary is taught by defining and illustrating the specific word using examples
and non-examples, providing opportunities to engage with the vocabulary word, and
then providing repeated exposure to the vocabulary word (Armbruster et al. 2001;
Marzano and Pickering 2005). Although vocabulary words can be learned directly
or indirectly, it is important that students receive explicit vocabulary instruction
because of its beneficial effect on reading development (Archer and Hughes 2011).
Peer tutoring  This strategy can be implemented with a whole class or in small
groups. Students are paired up, create vocabulary flash cards, and then take turns
quizzing each other on the words. This strategy has been effective with middle
school students (Rathvon 2008). Handout 8.12 describes the steps for this strategy.

8.5.2  Teach: Meta-Cognitive Strategies

Meta-cognitive skills are categorized according to the “before/during/after” reading
approaches. Before reading, good readers orient themselves to the text, ask them-
selves questions to answer about the reading, and identify a goal for reading. During
reading, good readers actively monitor their understanding of the text and use sev-
eral strategies to ensure comprehension of the material. Examples of these strategies
include: summarizing and paraphrasing sections of the text, taking notes, highlight-
ing words or phrases, checking to see if what they just read makes sense (e.g., self-
questioning), and adjusting reading rate in response to changes in text complexity.
After reading, good readers consider whether or not they met their goal, ask if they
understood the text, and attempt to link information in the text to their background
knowledge. They actively work to correct any incongruence by re-reading portions
of the text or by elaborating on what was read. In addition, after reading, good read-
ers are able to summarize and draw information from the text, whether it’s the mor-
al, key points, or the author’s point of view. Six strategies are described.
Before reading: Previewing and developing questions  Students who struggle to
orient and plan for reading prior to reading a passage or text may benefit from
previewing the text and developing questions about the reading. Previewing has
been shown to improve both reading rate and reading comprehension of students
with average and below-average reading abilities (Vaughn et al. 2000). Students are
taught to scan the material, think about what they know about the topic, and to pre-
dict what they will learn about the material. Handout 8.13 describes a previewing
strategy for both narrative and expository texts.
During reading: Click or clunk  Click or clunk is a strategy that teaches students
to self-monitor comprehension by asking themselves if what they are reading is
“clicking” (i.e., making sense immediately) or if it is “clunking” (i.e., not making

sense). Students are taught to use “fix-up strategies” if something they have read
“clunks”. Handout 8.14 describes the “click or clunk” strategy.
During/after reading: Paragraph shrinking  This is a strategy that can be used
during and after reading, as it focuses on summarizing and making meaning from
what was read. Students are taught to summarize each paragraph in 10 or fewer
words. Students work in pairs and can be provided questions to guide their sum-
maries. Handout 8.15 describes this strategy. Using paragraph shrinking within
the context of cooperative peer tutoring, Sáenz et  al. (2005) found that students
with learning disabilities and second language-learners scored better on measures
of word decoding and comprehension compared to students who did not use the
strategy.
After reading: Summarizing and question generating  Students can be taught to
summarize the main ideas and to write questions based on who, what, when, where,
why, and how. Students can read the text, write questions, and then work in pairs to
answer each other’s questions. Handout 8.16 describes this strategy.
After reading: Partner retell  In partner retell, students take turns reading in pairs
and then retell portions of the text to their partner. This strategy has been shown
to improve both reading decoding and reading comprehension scores, particularly
when combined with paragraph shrinking and prediction relay, which is a predictive
reading strategy (Rathvon 2008; Sáenz et al. 2005). Handout 8.17 includes informa-
tion on partner retell.
After reading and retell: Story mapping  Story mapping is a general strategy with
many variations, but it involves teaching students to understand and recognize the
structure of texts that they read. It is helpful during and after reading as it enables
readers to make sense of what was read and to understand the structure of text. Story
mapping has been effective across various grades, backgrounds, and ability levels
(Rathvon 2008; Vaughn and Linan-Thompson 2004).
Story mapping can be as simple as teaching students to identify “beginning,
middle, and the end” of a story, but maps can also be quite complex and include
characters, plot lines, or compare/contrast elements. After reading a passage or text,
students are taught to skim the story to re-familiarize themselves with key compo-
nents of the text. Students then complete a pre-made story map. Variations of this
approach include having students work in groups or pairs to complete the story
map, including visual imagery to support the story map, and including a discussion
of the story before or after completing the story map. An example of a basic story
map is provided in Handout 8.19.

8.5.3  Teach: Background Knowledge

Once teachers have assessed background knowledge and know more about what
their students know, lessons targeting specific content can be created to activate
background knowledge. Background knowledge can be activated by (a) previewing
content and (b) making connections to self, to world, and to text.
Previewing content  Previewing and discussing the content in large and small
groups can be done before reading to activate background knowledge and increase
comprehension success with the first exposure to text. Discussing the text with
students after reading the text can further support linking information to their
background knowledge (Tomlinson and Britt 2012). Handout 8.13 describes a pre-
viewing strategy.
Making connections to self, to world, and to text  When students are able to make
connections about what they have read to themselves, to their experiences within
the world, and from one section of the reading to another section, their reading
comprehension increases (Archer and Hughes 2010; Carnine et  al. 2009; Lague
and Wilson 2010; NICHHD 2000). Handout 8.18 lists the steps for teaching this
strategy.

8.6  Plan Evaluation

Once an instructional focus and strategy have been identified, the Plan Evaluation phase
involves measuring the effectiveness of the strategy. Measurement of fidelity and
measurement of student progress are two critical components of Plan Evaluation.
Strategies for monitoring reading comprehension skills are discussed. General out-
come measures (i.e., CBM) can be used to monitor overall reading growth, and
other measures can be used to assess progress or mastery of the specific targeted
skill. Data collected as part of daily instruction also can be analyzed.
Oral reading fluency  ORF or reading CBM is a general outcome measure of read-
ing, including comprehension. As students improve with accuracy and rate, so does
their comprehension.
MAZE  Reading MAZE also is a general outcome measure of reading and can
be used to measure comprehension. MAZE is a good option for monitoring when
students have performed well on ORF but not on MAZE during the Survey-Level
Assessment.
Vocabulary lists  Creating vocabulary lists is an option to assess mastery of vocab-
ulary. Vocabulary lists can be used to assess students’ ability to produce definitions
or match definitions (see Handouts 8.20 and 8.21 for directions and examples).
MAZE passages can be created with specified vocabulary words serving as the
foils (i.e., replacement options) in order to assess word understanding within con-
text. A MAZE passage generator can be found on Interventioncentral.org (http://
www.interventioncentral.org/tools/maze-passage-generator). This generator randomly
replaces every 7th word, and evaluators can substitute specific vocabulary words so
that the passage assesses changes in vocabulary growth.
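As a rough illustration of that idea, and not the Intervention Central tool itself, the sketch below replaces roughly every 7th word with a choice set made of the correct word plus two foils drawn from a list of target vocabulary words. The sample sentence, the foil list, and the foil-selection rule are all hypothetical.

    # Minimal sketch of assembling a MAZE-style passage with vocabulary words as foils.
    # Illustration only; the sample text and foil list are hypothetical.
    import random

    def make_maze(text, vocabulary_foils, every=7, seed=1):
        rng = random.Random(seed)
        words = text.split()
        out = []
        for i, word in enumerate(words, start=1):
            if i % every == 0:
                choices = [word] + rng.sample(vocabulary_foils, 2)   # correct word plus two foils
                rng.shuffle(choices)
                out.append("(" + ", ".join(choices) + ")")
            else:
                out.append(word)
        return " ".join(out)

    passage = ("The water cycle moves water from the oceans into the air and back "
               "to the land again as rain or snow")
    foils = ["evaporation", "condensation", "precipitation", "integer"]
    print(make_maze(passage, foils))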

8.7  Expanding Your Knowledge and Fine-Tuning

This section describes considerations for expanding the use of the CBE Process.
Vocabulary-matching  If the student cannot produce the definitions of words on
his or her own, a vocabulary-matching task will show whether the student can match
words to definitions. This is a less difficult task because the student only has to
identify the correct definition instead of both retrieving the information from
long-term memory and then determining if it is correct.
To examine this skill, create word lists using the missed vocabulary words in
Step 4 and provide the definitions. Ask the student to match each word with the cor-
rect definition. Directions and Templates for creating a matching vocabulary list are
provided in Handouts 8.20 and 8.21.
Assessing vocabulary in context  To assess vocabulary in context, obtain passages
with the missed words and underline them. Ask the student to use context clues and
think aloud to define the words. Observe the strategies used. If the student does
well, fluency building with context and vocabulary instruction likely would be ben-
eficial (see Handout 8.12). If the student struggles with this task, providing direct
instruction on using context clues to define a word (e.g., the DISSECT strategy) likely would
improve the student’s use of this strategy (Handout 6.15 in Chapter 6).
Background Knowledge  An alternative way to assess background knowledge is
to create MAZE probes related to the topic area in question. Identify the topic to
assess and create a MAZE probe on that topic. Administer the MAZE and determine
if the student meets criterion. If the criterion is met, then background knowledge
may be sufficient. If criterion is not met, then background knowledge may be an
area to target. To verify if background knowledge is an issue, compare the results of
multiple MAZE administrations across different topics, including topics with which
the student is confident and familiar.
Self-monitoring  Consider conducting a self-monitoring assessment. The self-
monitoring assessment (see Chapter  6) can help the evaluator determine if the
student is actively monitoring reading. If the student scores well with Vocabulary,
Decoding, and Meta-Cognition, then Self-Monitoring can be used to determine if
the student makes meaning-violating decoding errors that diminish comprehension.
Follow the self-monitoring steps outlined in Chapter 6 focusing on whether or not
the errors violate meaning.
Referents  Referents are the person, thing, or concept to which words or expressions
refer. For example, in the phrase “Tom picked up his children,” his refers to Tom.
Students may be confused by such referents, particularly if they are more complex,
like in the phrase “Tom shouted at Bill because he spilled the coffee.” Examine the
student’s errors and determine if a high number of them are related to referents (e.g.,
he, she, they, that). Interview the student about a text selection to determine if he or she
can identify to what or whom the words refer. If the student struggles with referents,
additional instruction may be warranted.

8.8  Chapter Summary

This chapter outlined the CBE Process for Reading Comprehension. The CBE Pro-
cess for Reading Comprehension begins with a survey-level assessment with CBM
oral reading fluency and MAZE probes followed by working through a series of
questions and tasks that examine reading accuracy, reading rate, vocabulary, meta-
cognition, and background knowledge. Instructional recommendations are deter-
mined based on the results.

Handout 8.1  Curriculum-Based Evaluation Process in Reading Comprehension Flowchart


Curriculum-Based Evaluation: Reading Comprehension

PROBLEM IDENTIFICATION
1. Ask: Is there a problem? -> Do: Initial identification of problem
2. Ask: Does the problem warrant further investigation? -> Do: Conduct Survey-Level Assessment (ORF and MAZE)

PROBLEM ANALYSIS
Decoding
3. Ask: Does the student have sufficient accuracy and rate at grade-level with ORF? -> Do: Examine rate and accuracy on ORF as described in Chapter 6
   Yes: Proceed to Step 4
   No: Teach: Decoding Skills (based on results of Decoding Process in Chapter 6)

Vocabulary
4. Ask: Is student missing critical vocabulary? -> Do: Examine vocabulary of content and passages
   Yes: Teach: Vocabulary Instruction (Consider Step 5)
   No: Proceed to Step 5

Meta-Cognitive
5. Ask: Is student monitoring comprehension? -> Do: Examine meta-cognitive skills
   Yes: Proceed to Step 6
   No: Teach: Meta-Cognitive Skills (Consider Step 6)

Background Knowledge
6. Ask: Does the student’s background knowledge support text content? -> Do: Examine background knowledge
   Yes: Reconsider problem identification
   No: Teach: Background Knowledge

PLAN IMPLEMENTATION
Teach: Decoding Skills (as outlined in Chapter 6) | Teach: Vocabulary Instruction | Teach: Meta-Cognitive Skills (Before/During/After Reading Framework) | Teach: Background Knowledge

PLAN EVALUATION
Monitor Effectiveness | Monitor Fidelity

Handout 8.2  Survey-level Assessment Instructions


Purpose: To determine existence and severity of a reading problem.
Materials Needed:
• Writing tool, Timer
• Handout 8.8 to record scores
• A passage for each grade-level that will be assessed
− Student copies & Evaluator copies
Directions:
1. Determine if practice tests are needed. If so, use Handout 8.3, which outlines
directions and provides a practice example.
2. Administer a 3-minute reading MAZE probe at the student’s grade-level using
standardized procedures.
a. Place the student copy of the passage in front of the student. Say
“When I say ‘Begin’ start reading silently. When you come to a group of
words, circle the 1 word that makes the most sense. Work as quickly as you
can without making mistakes. If you finish the first page, turn the page
and keep working until I say ‘Stop’. Do you have any questions? (pause)
Begin.”
b. Start timing and monitor the student to ensure he or she circles only 1 word
and is staying on-task.
c. After three minutes, say “Stop.”
Interpretation Guidelines:
3. Ask: Does the issue warrant further consideration?
a. If the student is performing at criterion with accuracy and rate, then the stu-
dent may have acceptable reading comprehension skills. The evaluator may
wish to examine other areas listed in the CBE Process.
b. If the student is not performing at criterion for either accuracy or rate, proceed
to the Problem Analysis and examine the student’s rate and accuracy to deter-
mine further steps.
Formula for Calculating Circled Words Correct (Rate)

Correct Responses (Circled Words Correct) = Rate

Formula for Calculating Accuracy

(Circled Words Correct (CWC) ÷ Total Attempted Responses) × 100 = Accuracy
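Worked example (hypothetical scores): if a student circles 24 words correctly out of 30 attempted responses, the rate is 24 circled words correct and the accuracy is 24 ÷ 30 × 100 = 80 %, which just reaches the 80 % accuracy guideline described in Chapter 8.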

Note: Reprinted with permission (Shinn & Shinn, 2002a). Copyright 2002 by NCS
Pearson.

Handout 8.3  Practice Directions for MAZE


1. Create a practice item or use pre-created practice items.
2. Say to the student, “When I say ‘Begin’, I want you to silently read a story.
You will have 3 minutes to read and complete the task. Listen carefully to the
directions. Some of the words in the story are replaced with a group of three
words. Your job is to circle the 1 word that makes the most sense in the story.
Only 1 word is correct.”
3. Now say, “Let’s practice one together. Look at the first page. Read the first sen-
tence silently while I read aloud: The dog apple, broke, ran after the cat. The
three choices are apple, broke, ran. The dog apple after the cat. That sentence
does not make sense. The dog broke after the cat. That sentence does not make
sense. The dog ran after the cat. That sentence does make sense, so circle the
word ran.”
4. Next, say, “Let’s go to the next sentence. Read it silently while I read aloud.
The cat ran quickly green for up the hill. Which word is the correct word for
the sentence?” (Student answers quickly) “Yes, ‘The cat ran quickly up the
hill.’ is correct, so circle the correct word quickly.” (Make sure students circle
quickly)
5. Next say, “Silently read the next sentence and raise your hand when you think
you know the answer.” (Make sure the student knows the correct word. Read the
sentence with the correct answer) “That’s right, ‘The dog barked at the cat.’
is correct. Now what do you do when you choose the correct word?” (Student
answers “Circle it.” Make sure the students understand the task) “That’s cor-
rect, you circle it. I think you’re ready to work on a story on your own.”
6. If the student answers correctly, proceed to administration. If the student does
not answer correctly, provide the correct response and proceed to testing.

Note: Reprinted with permission (Shinn & Shinn, 2002a). Copyright 2002 by NCS
Pearson.

Practice Items

The dog (apple, broke, ran) after the cat.

The cat ran (quickly, green, for) up the hill.

The dog barked (in, at, is) the cat.



Handout 8.4  Directions for Vocabulary Lists


Purpose: To measure the student’s ability to define words and to determine if a
vocabulary deficit exists.
Materials:
• Writing utensil
• List of words for student to define
• Scoring sheet (see Handout 8.9 for a scoring sheet/examiner copy).
Directions:
1. Analyze errors on the Survey-Level Assessments to determine if the student may
have a deficit with academic vocabulary, content-specific vocabulary, or both.
2. Create a vocabulary list by first gathering a pool of terms the student should
know. Create a separate list for content-specific and academic vocabulary (do
not blend the two domains).
a. To create a list of content-specific vocabulary, evaluators can generate lists
from the key words in the content area textbook or use common lists found
on various websites. Evaluators also can identify a subject area within which
the student is struggling and use the glossary or bolded words from that
subject’s textbook chapter.
b. To create a list of academic vocabulary, use common word lists for the stu-
dent’s grade level and age found on websites.
3. Present the words to the student. They may be presented one at a time on note
cards, or on a list with the words covered up with a second piece of paper.
4. Ask the student to define the word. Say, “What does this word mean?” and point
to the word. Mark if the student defines the word fully, partially, or does not
define. Use Handout 8.9 to mark the student’s responses.
Interpretation Guidelines:
5. Ask: “Is the student missing critical vocabulary?”
a. If the student scores well, move on to Step 5. If the student can define words
in isolation, teach to fluency with recognizing words in text.
b. If the student scores low, then provide instruction in vocabulary identification
(see “Teach: Vocabulary” in Chapter 8). Next, test to see if the student can
produce the definition of the word when given context.

Handout 8.5  Comprehension Interview Instructions


Purpose: To measure the student’s awareness of reading and to determine the stu-
dent’s use of meta-cognitive skills while reading.
Materials:
• Writing utensil
• Comprehension Interview (see Handout 8.10)
• Passages in which the student reads with sufficient rate and accuracy
Directions:
1. Provide the student copies of text or passages at an instructional level for rate and
accuracy.
2. Explain to the student that he or she will read the text/passage and that you will
ask questions to identify how the student approaches reading and understanding
text. Say to the student, “I want you to read this text/passage aloud. I want
you to “think aloud” as you read because I want to understand how you read
and how you make sense of what you read. I will ask you questions to help me
understand how you read. First, let me ask how you prepare yourself to read.
What do you do before you read this passage/text?”
3. Proceed to ask the student questions or encourage the student to explain what he
or she does before reading a passage or text. Encourage the student to speak and
elaborate by saying “Tell me more” or “Explain that more fully.” Observe and
ask questions to assess the skills listed in the Comprehension Interview under the
section “Before Reading”.
4. Next, have the student read and examine the skills listed in the “During Read-
ing” section of the Comprehension Interview. Ask questions as the student reads
to clarify each skill. Say, “Okay, now begin reading and talk aloud while you
read. Pretend I am a student and you are the teacher. How can I make sure I
understand what is being read?”
5. When the student finishes, ask him or her to explain what he or she does after
finishing a passage or text. Ask questions to clarify the skills listed in the “After
Reading” section of the Comprehension Interview. Say, “Now that you are fin-
ished, what do you do after you read to make sure you understand what was
read?”
6. Look over the Comprehension Interview and ask any clarifying questions to
ensure you have assessed each skill listed.
Interpretation Guidelines:
7. Consider the results of the Retell and Ask, “Is the student monitoring his or her
comprehension?”
a. If yes, then reconsider the problem (Problem Identification) and/or examine
background knowledge (see the “Expanding Knowledge” section).

b. If no, examine the results of the Interview and Retell to select a strategy to
use.
i. If the student scored low on the Comprehension Interview, teach strategies
based on where the deficit occurred.
1. For “Before Reading”, teach the student to develop questions and pre-
view the text (see Handout 8.13).
2. For “During Reading”, teach the student to use strategies to monitor
meaning (see Handouts 8.14 and 8.15).
3. For “After Reading”, teach the student to summarize and answer ques-
tions about the reading (see Handout 8.16).
ii. If the student scored low on Retell, teach partner retell (see Handout 8.17)
and/or story mapping (see the “Story Mapping” section later in the Chapter).
The instruction will be adjusted depending on whether the student was able to
provide an accurate Retell with or without prompting.

Handout 8.6  Retell Instructions


Purpose: To assess the student’s ability to monitor meaning and understand the
structure of reading texts.
Materials:
• Passages for the student to read, both expository and narrative
• Handout 8.11
• Audio recorder (optional)
Directions:
1. Gather approximately 250-word passages or reading texts that are both exposi-
tory and narrative, depending on the student’s grade level and the content areas
being assessed. Consider using a recording device so that you can replay the
retell and accurately interpret the student’s response.
2. Say to the student, “I want you to read this passage to yourself. I will then have
you tell me about what you read.”
3. Have the student read untimed. After the student finishes the passage, ask the
student to summarize what was read. Say, “Please tell me about what you read.”
4. After the student provides a response, score the response using the rubric pro-
vided in Handout 8.11 or create your own.
Interpretation Guidelines:
5. Ask: Can the student actively construct meaning while reading?
a. If yes, the student’s monitoring of text and understanding of the structure of
text is secure.
b. If no, conduct the next step.
6. Ask questions to prompt the student’s retell and to determine if the student is
identifying relevant information from the passages. Examples of questions to
consider are provided in Handout 8.11.
7. Ask: With prompting/assistance, can the student monitor and construct meaning?
a. If yes, then teach the student to monitor and construct meaning independently.
The focus is on building fluency with this skill.
b. If no, then teach the student to monitor meaning and to understand the struc-
ture of the text.
8. Consider the results of the Comprehension Interview and Ask: “Is the student
monitoring his or her comprehension?”
a. If yes, then examine background knowledge.
b. If no, examine the results of the Interview and Retell to select a strategy to
use.

c. If the student scored low on the Comprehension Interview, teach strategies
based on where the deficit occurred.
1. For “Before Reading”, teach the student to develop questions and preview
the text (see Handout 8.13).
2. For “During Reading”, teach the student to use strategies to monitor mean-
ing (see Handouts 8.14 and 8.15).
3. For “After Reading”, teach the student to summarize and answer questions
about the reading (see Handout 8.16).
d. If the student scored low on Retell, teach partner retell (see Handout 8.17)
and/or story mapping (see the “Story Mapping” section later in the Chapter).
The instruction will be adjusted depending on whether the student required
prompting to provide accurate Retell.

Handout 8.7  Background Knowledge Discussion Instructions


Purpose: To determine if students have sufficient background knowledge to gain
meaning from text.
Materials:
• Reading passages containing topics about which the student may not have back-
ground knowledge
• Writing implement
• Paper to record student responses
Directions:
1. Gather reading passages containing topics to be assessed.
2. Preview the reading passages and identify main theme and 3–5 key points that
are essential to understanding the passage. Identify important vocabulary words.
3. Explain to the students that you want them to read a passage but that first, you
will have a conversation to understand what they may already know about the
topic.
4. Begin with an open-ended question, such as “Tell me what you know about
____.” Take notes while the student answers and look for identification of the
key points you identified.
5. Next ask specific questions about the key points and vocabulary that you identi-
fied. Record what the student says.
Interpretation Guidelines:
6. Ask, “Does the student’s background knowledge support text content?” Review
your notes and determine if the student’s background knowledge supports the
content of the text. As a general guideline, the student should know the majority
of the key points that you identified and not present inaccuracies.
a. If yes, then you can reasonably conclude that the student’s background
knowledge is sufficient for that particular topic.
b. If no, then recommend teaching strategies that will activate or build back-
ground knowledge (see the “Teach: Background Knowledge” section and Hand-
outs 8.13 and 8.18).

Handout 8.8  Survey-Level Assessment Recording Sheet for MAZE


Student Name: _________________________________ Date:_________ Grade:_______

Passage    Score    Criterion                   Met?    Benchmark^a

Level 8    _____    rate 18; accuracy >80%      ____    N/A
Level 7    _____    rate 18; accuracy >80%      ____    N/A
Level 6    _____    rate 16; accuracy >80%      ____    18
Level 5    _____    rate 12; accuracy >80%      ____    18
Level 4    _____    rate 10; accuracy >80%      ____    15
Level 3    _____    rate 8; accuracy >80%       ____    8
Level 2    _____    rate 2; accuracy >80%       ____    N/A

^a based on Fall DIBELS Next Benchmark Goals (http://dibels.org/papers/DIBELSNextBenchmarkGoals.pdf).

Expected Instructional Level (Grade-level):

Obtained Instructional Level (meets rate and accuracy):
Obtained rate with grade-level material:
Expected rate with grade-level material:
Expected rate minus obtained rate = rate discrepancy:
Obtained accuracy with grade-level material:
Expected accuracy with grade-level material: >80%
Expected accuracy minus obtained accuracy = accuracy discrepancy:
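
For evaluators who prefer to compute these values electronically, the arithmetic at the bottom of this recording sheet can be expressed in a few lines of code. The following Python sketch is illustrative only and is not part of the original handout; every score and benchmark value in it is a hypothetical placeholder to be replaced with the student's actual MAZE data and your criteria.

```python
# Minimal sketch of the arithmetic on the Survey-Level Assessment recording sheet.
# All values below are hypothetical examples, not student data.

obtained_rate = 9          # correct MAZE responses with grade-level material
expected_rate = 18         # expected rate with grade-level material (example value)
obtained_accuracy = 0.72   # proportion of attempted items answered correctly
expected_accuracy = 0.80   # accuracy criterion from the recording sheet (>80%)

rate_discrepancy = expected_rate - obtained_rate
accuracy_discrepancy = expected_accuracy - obtained_accuracy

print(f"Rate discrepancy: {rate_discrepancy}")
print(f"Accuracy discrepancy: {accuracy_discrepancy:.0%}")
print("Meets criterion:",
      obtained_rate >= expected_rate and obtained_accuracy > expected_accuracy)
```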

Handout 8.9  Vocabulary List Template. Examiner Copy


This template is used to record a student’s responses when asked to define vocabu-
lary words in isolation. Write the words to assess in the “Word” column. Write the
definition in the “Definition” column. Ask the student to define each word and mark
the correct box to indicate whether they defined it, partially defined it, or did not
define the word in the “Correct” column.

Word                   Definition                                  Correct?

1.  _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
2.  _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
3.  _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
4.  _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
5.  _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
6.  _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
7.  _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
8.  _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
9.  _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
10. _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
11. _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
12. _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
13. _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
14. _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
15. _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
16. _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
17. _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
18. _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
19. _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐
20. _______________    ________________________________    Defined ☐   Partial ☐   Not defined ☐

Total Defined: _____ %
Total Partial: _____ %

Handout 8.10  Comprehension Interview


Use this Comprehension Interview to determine if the student monitors his or her
meaning while reading a passage or text. Use with Handout 8.5. Mark the box to
indicate whether the skill is observed, partially observed, or not observed while the
student is reading.

Skill                                                           Observed?   Partial?   Not Observed?

Before Reading
  States purpose for reading                                        ☐           ☐           ☐
  Identifies questions to consider or ask                           ☐           ☐           ☐
  Skims passages or paragraphs to find information for questions    ☐           ☐           ☐
  Forms a general impression of information emphasized within
    the text                                                        ☐           ☐           ☐
  Sets a goal for reading                                           ☐           ☐           ☐
  Uses title to identify purpose                                    ☐           ☐           ☐
  Looks at illustrations or headers                                 ☐           ☐           ☐
  Makes predictions about what is in the text                       ☐           ☐           ☐

During Reading
  Identifies main ideas and critical details                        ☐           ☐           ☐
  Deciphers which information is relevant and which is not
    relevant                                                        ☐           ☐           ☐
  Synthesizes key ideas                                             ☐           ☐           ☐
  Adjusts reading rate with changes in text complexity              ☐           ☐           ☐
  Self-corrects errors                                              ☐           ☐           ☐
  Remembers questions and predictions while reading                 ☐           ☐           ☐
  Checks to see if what was read makes sense or aligns with
    previous information                                            ☐           ☐           ☐
  Stops and summarizes information while reading                    ☐           ☐           ☐
  Clarifies information when it does not make sense                 ☐           ☐           ☐

After Reading
  Identifies or reaches a conclusion                                ☐           ☐           ☐
  Summarizes what was read                                          ☐           ☐           ☐
  Answers questions that were posed at the beginning of reading     ☐           ☐           ☐
  Elaborates on reading and/or connects with other sources of
    information; uses prior knowledge to make sense of what
    was read                                                        ☐           ☐           ☐
  Makes decisions about reading to determine if reading goal
    was met or if sections require rereading                        ☐           ☐           ☐
  Reviews sections of text                                          ☐           ☐           ☐

Questions to consider asking:


1. How do you prepare yourself before you read?
2. Do you ever go back and reread? Why would you do that?
3. What do you do if you don’t understand a word in a sentence? A whole sentence?
A whole paragraph?
4. What do you do when you see bolded words in a textbook or reading?
5. Do you read some text faster than others? Explain.
6. If you’re in a hurry and could not read an entire selection, which sentences would
you read? What part of the text could you skip?
7. Do you do anything to prepare before you read a story?
8. What do you do when you finish reading?
Notes: (Write down observed behaviors the student uses to monitor and improve comprehension).

Handout 8.11  Retell Rubric and Questions


Table 8.11.1   Basic rubric for retell

Examples of labels or headers for each score:
  0 = Inaccurate or not included (little comprehension)
  1 = Fragmented (some comprehension)
  2 = Partial (adequate comprehension)
  3 = Complete (good/strong comprehension)

Narrative/fictional elements
  Characters: 0 = Does not identify characters; 1 = Vaguely refers to characters;
    2 = Identifies characters; 3 = Identifies characters in relation to each other and events
  Setting: 0 = Does not include setting; 1 = Vaguely includes setting; 2 = Identifies
    story setting; 3 = Identifies story setting with detail
  Events and sequence: 0 = Identifies 1 or 2 events; 1 = Identifies events in random
    order; 2 = Identifies events in order; 3 = Details events in order
  Author's purpose: 0 = Does not identify lesson or viewpoint; 1 = Vaguely identifies
    lesson; 2 = Identifies lesson or viewpoint in general terms; 3 = Identifies lesson of
    story or author's viewpoint with detail

Expository elements
  Topic: 0 = Does not identify topic; 3 = Identifies topic
  Main idea: 0 = Does not identify main idea; 3 = Identifies main idea
  Details: 0 = Does not provide details; 1 = Provides few details; 2 = Provides several
    details; 3 = Provides most details
  Organization: 0 = Does not mention organization of text; 1 = Vaguely refers to text
    organization; 2 = Identifies text organization; 3 = Identifies text organization and
    explains why
  Vocabulary: 0 = Does not use vocabulary from passage; 1 = Uses vocabulary without
    clear understanding; 2 = Uses some vocabulary with understanding; 3 = Uses much
    vocabulary with understanding

Table 8.11.2   Guidelines for Retell


Basic Retell: Identify and retell beginning, middle, end; Describe setting; Identify
problem and resolution
Complete: Identify and retell events and facts in sequence; Draw inferences; Retell
causes of actions and effects
Comprehensive: Identify and retell a sequence of actions and events; Draw inferences
to account for actions; Offer an evaluation of story
Note: Adapted from Vaughn and Linan-Thompson (2004)

Possible Questions to Ask for Prompting During Retell


Fictional
• What happened at the beginning of the story?
• What happened before (or after) (a particular event) in the story?
• How did the story end?
• What is the lesson of the story?
Expository
• What is this text about?
• What are the main ideas?
• Share some details supporting the main idea.
• How is the information organized and presented?
• What are some key terms?

Handout 8.12  Teach: Peer Tutoring in Vocabulary


Targeted Skill: Vocabulary
Purpose and description: The purpose is to improve students' knowledge of
vocabulary words by pairing them with peers. Students take turns quizzing each
other on the words.
Materials:
• Note cards for students, writing utensils
• Dictionaries for each pair of students
• A list of vocabulary words for each pair of students
Setting: Whole-class or small-group
Directions:
1. Pair up students and explain they will work together to learn vocabulary words.
2. Distribute the vocabulary list to students on Monday. Designate one student to
serve as the first tutor.
3. Have students prepare individual cards by looking up vocabulary words and writ-
ing the word on one side of the card and the definition on the other side.
4. On Tuesday, conduct a whole-class review of the words and provide corrective
feedback to students.
5. For the tutoring session, have the tutor show the tutee the vocabulary word. The
tutee is to provide the definition. If correct, the tutee is praised, and the card is
placed face down. If incorrect, the tutor reads the definition and has the tutee
repeat it. The missed card is placed back in the deck near the beginning to ensure
it is reviewed after a short time.
a. The teacher monitors and supervises during the tutoring session.
6. After finishing the deck, the tutor reviews the missed cards as many times as
possible in 10 minutes.
7. Following the 10  minutes, the students switch roles and repeat Steps 5 and 6
above.
Considerations and Modifications:
• Change pairs each week.
• At the end of the tutoring session, a quiz can be administered and scores tallied.
This quiz can be a tool to measure student progress.
• Students can earn points for completing the tutoring session, for displaying
expected behaviors, and for identifying correct word definitions. Rewards can
be provided for pairs, groups of students, or the whole-class.
Evidence-base: Rathvon 2008; Malone and McLaughlin 1998

Handout 8.13  Teach: Before Reading: Previewing and Developing Questions


Targeted Skill: Comprehension
Purpose and description: Students are taught to preview the text to activate back-
ground knowledge and to build motivation for reading. They answer questions
about what they already know about the topic and what they think they will learn
from the reading selection.
Materials:
• Previewing Question template
• Reading text or passages
Setting: One-to-one, whole-group, small-group
Directions:
1. Explain to students they will preview the text to predict what they will learn.
Drawing comparisons to movie previews is helpful in understanding previewing.
2. Explain that previewing is brief (e.g., 2–3  minutes) and that students should
figure out: (a) what the reading is about, (b) what they already know about the
reading, and (c) what they will learn from the reading.
3. Provide students 2–3 minutes to preview the reading.
a. Teach students to systematically preview the text. For example, read the title,
then look at pictures and figures, read each heading and think about what it
means, look for key words (bolded, underlined), and then read the summary
and/or the first and last paragraph.
4. Then provide about 6 minutes for students to discuss with each other what they
learned, develop predictions about the reading, and share connections between
the text and what they already know.
5. Have students complete a “Previewing Template”. An example is provided
below.
Previewing Template:
Topic          What I Already Know About The Topic          What I Will Learn

Considerations and Modifications:


• For expository text, teach the acronym THIEVES as a method for previewing.
Students review each portion of a textbook using THIEVES to remind them
what to read and in what order: T- title, H- heading, I- introduction, E- every
first sentence in a paragraph, V- vocabulary terms, E- end-of-chapter questions,
S- summary at end of chapter.

• Related strategies that can help with previewing reading passages and texts are
“Inquiry Charts”, which teach students to develop specific questions about the
topic and “Think Alouds”, which teach students to both develop questions and
monitor their meaning during reading. The reader is referred to
http://www.readingrockets.org/strategies/#comprehension for more information.
Evidence-base: Liff Manz 2002; Vaughn and Kettman Klinger 1999; Vaughn et al.
2000

Handout 8.14  Teach: During Reading: Click or Clunk


Targeted Skill: Comprehension
Purpose and description: The purpose is to teach students to monitor their reading
and actively engage with the text. Students are taught to stop and check whether what
they are reading is making sense (clicks) or not making sense (clunks). If the section
was a clunk, they use “fix-up” strategies to gain meaning.
Materials:
• Text
• Pre-assigned partners
Setting: One-to-one, partners, small-groups
Directions:
1. Discuss with students the difference between “click” and “clunk”. Click can be
described as understanding something immediately (like the snap of your fin-
gers) and clunk can be described as hitting a brick wall (Vaughn and Kettman
Klinger 1999).
2. Provide guided practice with “click or clunk” by reading passages to students
and then asking if the selection “clicks or clunks”. Begin by checking under-
standing after each sentence and then move to checking after each paragraph,
whole pages, sections, etc.
3. If students identify “clunks”, teach “clunk” fix-up strategies. Model and provide
practice with fix-up strategies. Examples are provided in Table 8.14.1.
4. Once students demonstrate accuracy of the skills, provide independent practice
with the strategies.

Table 8.14.1   Examples of fix-up strategies


Fix-up strategy
• If a sentence does not make sense, reread the sentences before and after it to look for clues
• Reread the sentence without the misunderstood word and think about what would make sense
• Look for a prefix or suffix that might help understand misunderstood words
• Look up the misunderstood word in the glossary or dictionary
• Ask a classmate
• Reread the paragraph and ask, “What did the paragraph say?”
• Identify the main idea of the paragraph
• Look ahead for clues to help understand a paragraph or passage
• Restate what was read into your own words; compare with a peer
• Adjust your reading rate by slowing down, rereading when understanding is not accomplished
• Look for visuals or pictures to facilitate understanding
• Summarize the author’s main points at various times during reading
• Reread sections aloud

Considerations and Modifications:


• Teaching students to identify the main idea of a selection can facilitate reading
comprehension. “Get the gist” can consist of two steps: (a) decide who or what
the paragraph is about and (b) name the most important idea about the topic.
• Teach students to be “click detectors”. They can work in pairs and provide
feedback to each other as they identify what clicks and how to fix clunks while
reading.
Evidence-base: Vaughn and Kettman Klinger 1999

Handout 8.15  Teach: During Reading: Paragraph Shrinking


Skill: Comprehension by summarizing information
Purpose and description: Students are assigned to pairs and taught to summarize
paragraphs in 10 words or less.
Materials:
• Pre-assigned groups or partners
• Text
• Prompt cards or list of questions
Setting: Individually, small-group, or whole-group
Directions:
1. Assign students to pairs and provide them with text.
2. Student 1 begins reading aloud for 5 minutes, and Student 2 follows along ensur-
ing accuracy.
3. At the end of each paragraph, Student 2 asks the student: (a) “Who or what is
the paragraph about?” and (b) “Tell me the most important thing about (who or
what).” Student 1 summarizes this information in 10 words or less.
4. If Student 2 decides Student 1 made an error, he or she says “That’s not quite
right. Skim the paragraph and try again.”
5. After the 5 minutes, students switch roles and repeat the procedure.
Considerations and Modifications:
• Students can earn points for correct responses and for stating the summary in 10
words or less. These points can be part of a larger reward system.
• Prompt cards can be provided that state: (1) Name the who or what, (2) Tell the
most important thing about the who or what, and (3) Say the main idea in 10
words or less.
Evidence-base: Rathvon 2008; Readingrockets.org; Sáenz et al. 2005

Handout 8.16  Teach: After Reading: Summarizing and Question-Generating


Targeted Skill: Comprehension
Purpose and Description: After reading, students identify the main idea(s) and
write questions about the reading.
Materials:
• List of questions
• Text
Directions:
1. After students read the text, have them underline or identify key words and main
ideas.
2. Write down the key words and main ideas on a piece of paper.
3. Have students summarize the text in 20 words or less. (The number of words can
be adjusted depending on the length of text and grade-level of the student.)
4. Next, teach students to write “teacher-like” questions about the reading using
who, what, when, where, why, and how.
Considerations and Modifications:
• Provide question stems, such as:
− What do you think would happen if ____________?
− How would you compare and contrast __________?
− How do you think _______________ could have been prevented?
− How would you interpret ______________?
• Students can preview the reading, write questions, and then answer those ques-
tions while they read.
• For narrative questions, have students write questions about the setting, charac-
ters, plots, and themes. For expository texts, have students write questions that
compare and contrast, sequence, are descriptive, and address organization of the
text, concepts or terms, examples and definitions, problems and solutions, and
cause and effect.
• Students can be taught to write Question-Answer Relationship questions. There
are four types of questions:
− Right There: Literal questions where the answer is found in the text using the
same words in the text. The answer is directly in the text.
− Think and Search: Questions that require gathering several sources of
information and putting them together to form the answer.
− Author and You: Questions based on information in the text, but require the
student to relate to his or her own experiences to answer. The answer is not
directly in the text.
− On My Own: Questions that do not require the student to have read, but
require background knowledge to answer.
Evidence-base: Hock and Mellard 2005; Vaughn and Kettman Klinger 1999

Handout 8.17  Teach: After Reading: Partner Retell


Skill: Comprehension and Retell
Purpose and description: Students work in pairs to read portions of a text and then
retell what was read.
Materials:
• Pre-assigned pairs
• Text
Setting: One-to-one and pairs
Directions:
1. Assign students to pairs or groups.
2. Student 1 reads for 5  minutes, and Student 2 follows along and corrects any
errors made (decoding, omissions, added words, and hesitations).
3. Students switch roles.
4. After each student reads the passage, students take turns retelling the main ideas
in the story or passage as they occur. Student 2 retells in sequence what was read
for 2  minutes. Student 1 provides prompts as needed, such as “What did you
learn first?” and “What did you learn second?” After the 2-minute retell, students
switch roles.
Considerations and modifications
• Use a 1-minute retell for younger students.
• Provide points for reading the passage correctly and for staying on-task. These
points can be part of a larger reward system.
Evidence-base: Rathvon 2008; Sáenz et al. 2005

Handout 8.18  Teach: Background Knowledge: Connections to Self, World, Text


Skill: Comprehension and Activation of Background Knowledge
Purpose and description: To enhance the student's comprehension of a text and to
expand upon topics read by tying the information to the student’s prior knowledge.
Materials:
• Reading Passage
• Handout 8.18a
Setting: One-to-one, small-group, or whole-class
Directions:
1. After students read a passage, encourage a discussion about the main topics
of the reading. Guide the discussion to draw out factual information from the
reading.
2. Ask students to think of how the reading is related to their lives. Have them
complete the “Text to Self” row on their worksheets. Sample prompts include:
a. What I read reminds me of when I…
b. I agree with what I read because one time, I…
c. I disagree with or don’t understand what I read because one time, I…
3. Now discuss how the text relates to events in the world. Ask students to complete
the “Text to World” row on their worksheets. Sample prompts include:
a. This reading reminds me of…
b. What I read is similar to this event in history…
c. What I read is similar to what is happening now…
4. Now discuss how the text reminds students of other books or readings they have
previously read. Ask students to complete the “Text to Text” row on their work-
sheets. Sample prompts include:
a. What I read reminds me of another reading I read because…
Evidence-base: Lague and Wilson 2010

Handout 8.18a
Name: _____________________________________

Main Ideas in Reading

Text to Self

Text to World

Text to Text

Handout 8.19  Story Map Template

Narrative or Fictional Text


Story Title:

Main Characters:

Time and Setting:

Problem:

Major Events:
1
2
3
Outcome or Resolution:

Expository Text
Topic Sentence or Author's Purpose:

Supporting Detail 1:

Supporting Detail 2:

Supporting Detail 3:

Main Idea:

Handout 8.20  Directions for Vocabulary-Matching Probes


Purpose: To determine if the student can identify vocabulary words when provided the
definition.
Materials:
• Timer, writing utensil
• List of missed vocabulary words from Step 4
• Created vocabulary lists
Directions:
1. Create probes based on the student’s actual curriculum words for which the stu-
dent was unable to produce a definition in Step 4 (see Handout 8.9).
2. The probe should have 20 terms and 22 definitions. If the student did not miss
20 terms, create a pool of items, selecting terms from the classroom textbook,
teacher lectures, or from academic lists.
3. Develop short definitions for each term. Use the glossary of the textbook or a
definition based on the curriculum.
4. Administer the probe by giving the student 5  minutes to match terms with
definitions.
5. Say, “When I say begin, match the words on the left-hand side of the page with
their definitions. Write the letter of the correct definition in the blank next to
each word. Do as many as you can. Do not worry about not knowing all of the
words. Just do your best work. Ready? Begin.”
Interpretation Guidelines:
6. Record the number of items matched correctly by the student. Calculate a per-
centage of total correct.
7. Compare the “matching score” to the score the student received on the definition
production measure.
a. If the student does better with matching the definition, the student likely will
benefit from fluency building with vocabulary definitions.
b. The student requires vocabulary instruction for those words he or she cannot
match with the correct definition.
Considerations:
• A practical way to develop the measures is to write each vocabulary term on the front
of an index card with its definition on the back. For each measure, shuffle all of the
cards, and randomly select terms and definitions. Place the terms on the left-hand
side of the page and the definitions in random order on the right-hand side. Number
the terms, leaving a blank space by each term; put letters by each definition. The
students write the letter for the correct definition in the blank next to each term.
• You can create a pool of vocabulary items and then create probes with which to
monitor progress. The measure should be long enough that the student is unable
to finish the task within the 5-minute time limit. (If you think students will finish
within the 5 minutes, create longer measures.) See http://teachingld.org/questions/12
for more information on monitoring.
Evidence-base: Espin and Foegen 1996; Espin et al. 2001; http://teachingld.org/
questions/12

Handout 8.21  Vocabulary-Matching List Template and Example


Match the definition on the right to the word on the left by writing the letter of the
definition next to the word. Write the letter in the answer column. An example is
provided in the first row below.

Answer    Word    Definitions

  Z       Book    Z. A handwritten or printed work of fiction or nonfiction

                  V. _________________________________________________

Vocabulary-Matching Example

Answer    Word        Definitions

  C       Analysis    A. A definitive course of action that outlines rules and regulations
  A       Policy      B. To indicate or suggest
  D       Design      C. The process of studying the nature of something
  E       Emphasis    D. To create or plan
  B       Imply       E. To stress or place importance upon

Part III
Making Educational Decisions with CBE
Chapter 9
Progress Monitoring and Educational Decisions

9.1 Chapter Preview

This chapter focuses on using data to make instructional decisions, also known as
Plan Evaluation, which is the fourth step of the CBE Process. In this chapter, two
topics are discussed: (a) analyzing progress monitoring data for instructional deci-
sion making and (b) instructional factors to consider when adjusting or designing
instructional plans.

9.2 Educational Decisions During Plan Evaluation

During Phase 3 of the CBE Process, Plan Implementation, decisions are made about
how to measure student’s progress and the fidelity of treatment. Data are collected
for later review to ensure the instructional plan is working for the student. During
the last step of the CBE Process, Plan Evaluation, decisions about the effectiveness
of the instructional plan are made. To assist educators in making such decisions,
guidelines are provided on how to evaluate progress monitoring data. Instructional
factors to consider when adjusting instructional plans also are provided.

9.3 Progress Monitoring

Progress monitoring is the process of assessing student performance on a frequent
basis to determine the effectiveness of instruction. Assessment results are graphed
to allow visual analysis of student growth using slope [also known as rate of growth
(ROG) or rate of improvement (ROI)]. This section describes how to graph results
and visually analyze a progress monitoring graph.


Table 9.1   Guidelines for examining progress monitoring graphs

1. Basic components
   • Does the y-axis represent the average range of scores?
   • Does the x-axis indicate the chronological dates the data were collected?

2. Essential components
   • Is there a goal?
   • Is there an aim line?
   • Is there a trend line? (Note: a minimum number of data points are required to
     establish a valid trend line)

1. Is there a pattern of performance?
   – If no, gather more data
   – If yes, continue to next question
2. Is the response positive, poor, or questionable?
   – If positive, continue or consider ending intervention if goal is met^a
   – If poor or questionable, then check fidelity
     • If fidelity is poor, improve fidelity and continue plan
     • If fidelity is good, change or intensify the instructional plan. (A questionable
       response may not need as significant a change as a poor response^b)

^a Even with a positive response, it is helpful to check fidelity to ensure the connection between
implementation and effect
^b A questionable response indicates that instruction is working, but the student's gap is not closing.
Such a response may indicate that minor changes are needed to intensify the plan. A poor response
requires a substantial change in the instructional plan.

9.3.1 Guidelines for Judging Growth

There are a series of decisions and steps to take when examining a student’s growth.
The steps are presented in Table 9.1 and discussed in detail next.

9.3.2 Graphing Basics

Each graph has an x-axis (i.e., horizontal line or abscissa) and a y-axis (i.e., verti-
cal line or ordinate). The scale on the y-axis should represent the normal range of
possible scores. As an example, look at Fig. 9.1. The hypothetical data indicate that
this second grade student is making consistent and steady reading progress. The
data points are going up at a steep angle. However, the data are displayed on a scale
that does not represent the normal range of scores expected for typical second grad-
ers. When the same data are displayed on a graph with the normal range of scores
represented, as seen in Fig. 9.2, a more accurate view of the student’s growth rate
is presented. Using the normal range of scores is part of accurately representing the
student’s growth.

Fig. 9.1   Negative example of a y-axis. ( Note. Notice the scale of the y-axis is well below the ave-
rage range for a second grader, resulting in what appears to be very steep growth)

Fig. 9.2   Positive example of a y-axis. ( Note. The scale of the y-axis represents the normal range
of scores for all second graders. The same data in Fig. 9.1 are more accurately depicted within this
graph)

The other part is ensuring that the x-axis follows the actual dates the data were
collected. Examine Fig. 9.3. The data illustrate what appears to be steep growth for
the student. However, the x-axis is in numerical order, so information about the stu-
dent’s growth over time is missing. It appears the student is making a lot of growth.

Fig. 9.3   Negative example of an x-axis. ( Note. The scale of the x-axis does not indicate the chro-
nology of the data collection; instead it is listed numerically)

Fig. 9.4   Positive example of an x-axis. ( Note. The scale of the x-axis is chronological and illust-
rates the actual dates the data were collected)

However, when we examine Fig. 9.4, which is the same data displayed on an x-axis
that indicates the chronological dates that the data were collected, we see the stu-
dent is making less impressive growth than it appeared in the first graph.
Fig. 9.5   Progress monitoring graph with goal, aim line, and trend line

The purpose of reviewing a progress graph is to examine the student's slope.
Slope incorporates the amount of growth made and the time that has passed. Slope
is not possible to examine if the x-axis is numerical instead of chronological. The
first two things to examine when reviewing graphs are: (a) whether or not the scale
of the y-axis illustrates the normal range of scores and (b) whether or not the x-axis
is chronological and the data is displayed on the date it was actually collected (data
should be recorded on the date of collection, not the date entered).
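
For teams that create their own graphs, these two checks can be built directly into the graphing routine. The sketch below is a minimal Python/matplotlib example with entirely hypothetical dates and scores; the y-axis limit of 0–120 is only an illustrative stand-in for the normal range of scores for the measure and grade being monitored.

```python
from datetime import date
import matplotlib.pyplot as plt

# Hypothetical weekly progress monitoring scores (words read correctly per minute).
dates = [date(2013, 9, 9), date(2013, 9, 16), date(2013, 9, 23),
         date(2013, 9, 30), date(2013, 10, 7), date(2013, 10, 21)]  # note the skipped week
scores = [28, 31, 30, 34, 33, 37]

plt.plot(dates, scores, marker="o")   # x-axis is chronological, not 1, 2, 3, ...
plt.ylim(0, 120)                      # example normal range of scores for the grade
plt.xlabel("Date of data collection")
plt.ylabel("Words read correctly per minute")
plt.title("Progress monitoring graph (hypothetical data)")
plt.gcf().autofmt_xdate()
plt.show()
```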

9.3.3 Essential Components: Goal, Aim Line, Trend Line

In addition to an appropriate x- and y-axis, a graph needs three components that
will allow for it to be interpreted: (a) a goal, (b) an aim line, and (c) a trend line.
The goal indicates the time frame and desired outcome of the instructional plan for
the student. The aim line, also referred to as the goal line, indicates the minimum
level of student performance needed to achieve the goal in the time frame indicated.
Scores that fall below the aim line indicate the student is not making sufficient
growth to reach the goal by the date indicated. Conversely, scores above the aim
line indicate the student is making adequate progress to reach the goal. Finally, the
trend line represents the student’s trend or rate of progress for his or her data series
(see Fig. 9.5).
Another component a graph may contain is a phase change line. This line is
drawn vertically on the graph at the date on which the instructional plan was modi-
fied. The phase change line stops the previous trend line and a new trend line begins
once enough data points are collected (discussion on how many data points is pro-
vided later in this chapter). A phase change line is illustrated in Figs. 9.8 and 9.10.
Having defined the components of a graph necessary for interpretation, examining
intervention effectiveness will be discussed next.
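
To make these components concrete, the aim-line and trend-line slopes can be computed directly from the monitoring data. The sketch below uses hypothetical values and ordinary least squares for the trend line; it is offered as an illustration, not as the specific calculation method endorsed in this book, and other slope estimators are also used in practice.

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs (e.g., score change per week)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical example: baseline of 28 WRC, goal of 60 WRC in 16 weeks.
baseline, goal, weeks_to_goal = 28, 60, 16
aim_line_slope = (goal - baseline) / weeks_to_goal   # minimum growth needed per week

# Weekly scores collected so far (week number, score).
weeks = [0, 1, 2, 3, 4, 5]
scores = [28, 31, 30, 34, 33, 37]
trend_line_slope = slope(weeks, scores)

print(f"Aim line slope:   {aim_line_slope:.2f} per week")
print(f"Trend line slope: {trend_line_slope:.2f} per week")
```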

Fig. 9.6   Positive and negative example of pattern of performance

9.3.4 Pattern of Performance?

Once a graph contains the necessary components, the first question to ask in exam-
ining growth is whether or not the data show a reliable pattern of performance. To
make this determination, ask “Can we predict what the next data point will be with
reasonable confidence?” If the answer is “yes,” then there is a pattern of perfor-
mance; if “no,” then a pattern likely does not exist.
It may take as many as 10 data points to establish a pattern (Christ 2010; Shinn
2002), with the most reliable slope having at least 14 data points (Christ et  al.
2012). If data points are consistently increasing, decreasing, or flat, five or six
data points may be enough to establish a pattern. When data points are inconsis-
tent from date to date, it will take closer to the 10 to 14 suggested by researchers.
For example, look at Fig. 9.6. The graph “A” illustrates a pattern of performance,
even though there are only five data points. One can look at graph “A” and guess
with reasonable confidence where the next data point will be (it will likely fall
somewhere between 40 and 50). Graph “B” illustrates a nonestablished pattern of
performance. When one looks at the student’s performance, it is difficult to guess
with any confidence where the next data point will be. The student’s performance
thus far is erratic. With erratic performance, more data are required to establish a
pattern of performance. High variability in data points may indicate other issues
to examine, which will be discussed later in the chapter. For now, it is important
to understand that an inability to predict the next data point indicates the need for
more data (Christ 2010).
There is not a hard and fast rule for deciding whether or not a pattern of per-
formance is established. As mentioned, it is dependent on the student’s data. We
provide examples and nonexamples of patterns of performance in Appendix 10A.
Examine the graphs and decide if a pattern of performance is evident. The answers
are provided in Appendix 10A. The rule is that if you are not confident in predict-
ing the next data point, more data should be collected until you can predict with
confidence.

Fig. 9.7   Examples of the types of responses when comparing the trend line to the aim line

9.3.5 Judging Growth

Once a pattern of performance is established, then a judgment about growth can
be made. There are a couple ways to determine how well a student is responding
to instruction. One of the first and perhaps most basic ways to judge growth is to
examine the three most recent data points and compare them to the aim line. Some
suggest that once a student has three consecutive data points below the aim
line, then a change in instruction is needed (Kaminski et al. 2008). Others recom-
mend that five data points are necessary to establish a pattern and make an instruc-
tional decision (Christ 2008).
Another method is to compare the student’s trend line to the aim line. Three con-
clusions can be drawn using this method: the student’s response is positive, nega-
tive, or questionable (see Fig. 9.7). In a positive response, the slope of the trend line
matches or exceeds that of the aim line, indicating that the goal will be met. A posi-
tive response could indicate recommendations to continue the current support, or
consider fading the support once the goal has been met or exceeded (see Fig. 9.7).
It is important to consider whether the student will maintain growth without the
supplemental support.
A negative response indicates that the trend line is flatter than the aim line; the
student is not on target to meet the goal, and the gap between the student's
performance and expected performance is widening. A negative response
warrants a change in instruction. Before changing instruction, it is essential to check
the fidelity. If the fidelity of the intervention is good, it is possible to conclude that
the current instructional plan is not working. If the fidelity is poor, then an attempt

Table 9.2   Responses of growth and recommendations

Positive
  Gap and goal: Gap is closing and student is on-track to meet the goal
  Recommendations:
  • Continue with current support
  • Fade support once goal is reached, when performance is above the aim line,
    and/or when the skills are maintained

Negative
  Gap and goal: Gap is widening and student is not on track to meet the goal
  Recommendations:
  • If fidelity is poor or not known, improve fidelity and continue with plan
  • If fidelity is good, change the plan

Questionable
  Gap and goal: Gap is not changing. Student is not on track to reach goal
  Recommendations:
  • If fidelity is poor or not known, improve fidelity and continue with plan
  • If fidelity is good, consider minor adjustments to intensify growth

to implement the intervention with fidelity should occur before changing the inter-
vention plan. As discussed in Chapter 3, it is not logical to judge instruction if the
plan was not implemented as intended (see Fig. 3.3 in Chapter 3).
A questionable response indicates the trend line is approximately parallel to the
aim line, and the gap between current and expected performance is not closing. In Fig. 9.7, the good news
is that the student is growing. The bad news is that the student will not reach the
goal. The instructional plan may require intensification to get a positive response.
Intensifying the plan implies a small adjustment versus a substantial change in the
intervention. Before making an instructional change, however, it is important to ex-
amine fidelity of the plan. If fidelity is poor, it may be the case that the only change
required is implementing the plan as intended. Figure 9.7 illustrates the three dif-
ferent responses and Table 9.2 is a description of the responses and accompanying
recommendations.
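
A rough decision aid that mirrors this comparison is sketched below. It is not a formula from this book: the slopes are assumed to come from a calculation like the one in the previous sketch, the three-consecutive-points check follows the guideline attributed to Kaminski et al. (2008) above, and the tolerance used to treat two slopes as "approximately parallel" is an arbitrary placeholder that a team would set for itself.

```python
def judge_response(trend_slope, aim_slope, recent_scores, recent_aim_values,
                   parallel_tolerance=0.15):
    """Classify growth as 'positive', 'questionable', or 'negative'.

    recent_scores / recent_aim_values: the last few data points and the aim-line
    values for the same dates. parallel_tolerance is a placeholder proportion.
    """
    three_below = (len(recent_scores) >= 3 and
                   all(s < a for s, a in zip(recent_scores[-3:], recent_aim_values[-3:])))
    if trend_slope >= aim_slope and not three_below:
        return "positive"        # on track to meet the goal; continue or fade support
    if abs(trend_slope - aim_slope) <= parallel_tolerance * aim_slope:
        return "questionable"    # growing, but the gap is not closing; intensify the plan
    return "negative"            # gap is widening; check fidelity, then change the plan

# Hypothetical use with values like those from the previous sketch.
print(judge_response(trend_slope=1.57, aim_slope=2.0,
                     recent_scores=[34, 33, 37],
                     recent_aim_values=[34, 36, 38]))   # prints "negative" here
```

Fidelity data would still be reviewed before acting on any classification, as described above.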

9.3.6 Additional Analyses

Analyses of level and variability may be useful when examining student progress,
particularly when comparing performance in response to different instructional
plans (Hixson et al. 2008; Kennedy 2005).
Level  Level refers to the mean performance across multiple data points (Kennedy
2005). Examining level of performance from one instructional phase to the next is
another way to judge intervention effectiveness. The level is determined by taking
the average of all the data points within an instructional phase. The level of one
phase can be compared to the level of a second instructional phase. It is also possi-
ble to compare the beginning level of performance in an instructional phase to the
end level of performance of the same instructional phase. For example, the mean of
the first three to five data points could be compared to the mean of the final three to
five data points. Figure 9.8 illustrates comparing levels between phase changes, and
Fig. 9.9 illustrates comparing levels within the same instructional phase.

Fig. 9.8   Example of comparing level of performance between two phase changes

Fig. 9.9   Comparison of level of performance within the same instructional phase

Fig. 9.10   Examining variability of student’s data within and between phase changes

Variability  Variability refers to the complete range of scores within the instruc-
tional phase (i.e., the lowest score to the highest score). Variability also is used to
describe how far scores deviate from the level (Hixson et al. 2008). Some variability
or “bounce” from one score to the next is common and normal (Christ 2008; Christ
2010; Christ et al. 2012; Silberglitt and Hintze 2007). Variability can be examined
within one instructional phase or between two different phases (see Fig. 9.10).
Variability is important because highly variable data make detecting patterns
of performance, level of performance, and intervention effects difficult. Too much
variability calls into question the reliability of data. Errors related to scoring, tim-
ing, or the quality of the administration can impact data consistency (Hixson et al.
2008). Student distractibility, understanding of the rules or purpose of the task, or
willingness to participate also can impact data consistency.
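
Both analyses reduce to simple summaries of the scores within each phase. The sketch below uses hypothetical data; comparing the mean of the first three and last three data points follows the example described above.

```python
from statistics import mean

# Hypothetical scores for two instructional phases (before and after a plan change).
phase_1 = [22, 25, 23, 26, 24, 27]
phase_2 = [29, 31, 34, 33, 36, 38]

for name, scores in (("Phase 1", phase_1), ("Phase 2", phase_2)):
    level = mean(scores)                     # level = mean of the phase's data points
    variability = max(scores) - min(scores)  # range from lowest to highest score
    start_level = mean(scores[:3])           # level at the beginning of the phase
    end_level = mean(scores[-3:])            # level at the end of the phase
    print(f"{name}: level={level:.1f}, range={variability}, "
          f"start={start_level:.1f}, end={end_level:.1f}")
```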
Instructional Decision Making  While examining data and making decisions about
the need for instructional changes, it is helpful to have the detailed instructional
plan and fidelity data. Having all the information readily available helps ensure the
conversation stays instructionally focused, is based on all relevant information, and
does not drift to unalterable variables. Consideration of only the graph can result
in conversations that focus exclusively on the student. When information about the
curriculum, instruction, and environment are readily available, the discussion can
focus on those variables that can be changed (see Fig. 9.11).
Fig. 9.11   The learning triad and ensuring proper analysis of monitoring graphs

Summary of Analyzing Progress Monitoring Graphs  In summary, before making
data-based decisions, it is important to ensure the graph's x- and y-axes have proper
scales and ranges. The graph also should contain a goal, aim line, and trend line
(and a phase change line if there have been any instructional changes in the stu-
dent’s plan). Once those critical components are present, the graph can be examined
to determine the presence of a performance (i.e., is it possible to predict the next
data point with reasonable confidence?). If fidelity is good, a judgment can be made
about the effectiveness of the intervention by comparing the trend line to the aim
line, the level of performance for one intervention phase compared to another, or
the level of performance at the beginning of the intervention to the level of perfor-
mance at the end of the intervention. Variability in scores is common, which is why
it takes several data points to determine a performance pattern (Christ et al. 2012).
Too much variability can indicate problems with the assessment administration or
the student’s behavior during assessment.

9.4 What to do After a Poor or Questionable Response

Once progress data, fidelity data, and the instructional plan have been examined,
decisions are made to either continue the current plan (in the case of a positive
response) or to change the plan (in the case of negative or questionable responses).
One of the first things to consider is that when the need for instructional change is
evident, drastic changes may not be necessary. Small (but powerful) instructional
factors could be adjusted to improve the student’s rate of growth. This section pres-
ents a few research-based factors that can be adjusted to enhance instruction. It’s
important to note that a change in instruction does not always require a change in
curricular programs.

9.5 Evidence-Based Instructional Factors

In this section, eight instructional factors that can be altered when an instructional
change is warranted are discussed. The factors all have a research base that demon-
strates an association with improved student achievement. This list is not exhaustive
by any means. A summary of the instructional factors is listed in Table 9.3.

9.5.1  1. Time Allotted for Instruction

A simplistic but meaningful way to adjust instruction is to increase the instructional
time (Greenwood et al. 2008; Jimerson et al. 2007). Even more important than the
number of minutes allotted for instruction is the number of minutes the student
is experiencing academic learning time, which is the amount of time students are
engaged and experiencing success during instruction (i.e., responses are correct).
Adding time may not be an effective instructional adjustment if the student is not
successful with the content during that time.

9.5.2  2. Grouping and Homogeneity of the Group’s Skills

Instruction also can be intensified by reducing the group size. Related to group
size is the homogeneity of the skills and needs in the group. When skills and needs
are similar among group members, the focus of instruction is more targeted and
students spend more of the intervention time receiving direct instruction for their
needs. As the skills change over time, groups should be modified to maintain ho-
mogeneity.

9.5.3  3. Pacing

Instructional pacing is the rate at which instructional activities or learning
opportunities occur for students. Well-understood and practiced instructional routines foster
brisk instructional pacing. The pace of instruction ideally should be as quick as
students can tolerate. When new information is presented, pacing may need to be
slowed to allow students to benefit. Instructional pacing can be measured using
two separate variables: the number of opportunities to respond (OTRs) and the ac-
curacy of those responses. Brisk pacing of instruction, as measured by OTRs and
accuracy of student responding, is associated with increases in reading and math-
ematics achievement, increases in engagement, and reductions in problem behavior
(Haydon et al. 2009; Stichter et al. 2009).
Table 9.3   Instructional factors to consider when examining instruction (factor,
description, and examples of modifying the factor)

Time
  The instructional minutes and frequency of sessions devoted to instruction
  • Increasing the intervention time from 30 to 45 minutes
  • Providing Tier 3 in addition to Tier 2

Group size
  The size of the instructional group
  • Reducing the group size from 6 to 3

Homogeneity of skills
  The similarity among students' skills in the same instructional group
  • Using reading group diagnostics to develop instructional groups

Pacing: opportunities to respond
  The number of times that students have to respond to academically oriented
  questions. Usually measured by responses per minute
  • Teacher sets a minimum goal of number of OTRs during instruction
  • Teacher marks a tally for each OTR he provides to ensure goal is met

Pacing: accuracy of responses
  Whether or not a student response is correct. Usually measured by total percentage
  of responses correct
  • A peer observation is conducted to determine the accuracy of student responses

Amount of review
  The time devoted to reviewing previously learned material
  • Teacher provides three to five review questions at the end of each lesson
  • Teacher provides 5–10 minutes of review of previously learned material at the
    beginning of each lesson

Repetitions
  The number of times a student repeats a newly learned concept or fact
  • Teacher introduces new vocabulary word, students pronounce the word several
    times, read the word repeatedly in the context of the story, and write a sentence
    using that key word

Activating background knowledge
  Providing explicit connection to previously learned material
  • Previewing and discussing the content in large and small groups
  • Teach: Background knowledge: Connections to self, world, text (Handout 8.18,
    Chapter 8)

Corrective feedback
  Feedback provided by a teacher designed to correct an error produced by the student
  • A teacher states, "That word is (correct word). What word?"

Praise-to-redirect ratio
  The ratio of praise statements provided to serve as reinforcement, compared to the
  number of statements that redirect a student's behavior, express disapproval, or
  correct a behavior
  • A teacher places five pennies in her left pocket. When she provides a redirect, she
    makes five praise statements, moving a penny to her other pocket after each praise
    statement
  • A teacher places a visual on the corner of the whiteboard and each time she looks
    at it, she gives out a behavior-praise statement

OTRs are defined as the number of times a teacher provides academic prompts
that require active student responses (Simonsen et al. 2010). Each OTR involves a
teacher prompt, a student response, and teacher feedback: a three-step interaction
between the teacher and student. OTRs can be verbal (e.g., oral responding) or non-
verbal (e.g., write on a white board, thumbs up or down) and require an individual
or a group to respond (i.e., choral responding) (Haydon et al. 2010; Simonsen et al.
2010).
An increase in OTRs is associated with an increase in student achievement by
commanding student attention, providing the teacher information about students’
mastery of the material (i.e., are the student responses correct or incorrect?) and
prompting praise or corrective feedback. Praise is effective in building rapport with
students and reinforcing the mastery of skills (Gable et al. 2009; Simonsen et al.
2011), and corrective feedback provides additional instruction to students and pre-
vents the practicing of errors (Hattie 2009; Stichter et al. 2009).
The optimal level of OTRs and accuracy of responses depends on the students’
level of acquisition with the skill (i.e., initial acquisition vs practice) and the instruc-
tional grouping (i.e., whole vs small-group). Whole-class direct instruction is most
effective with approximately 3 to 6 OTRs per minute and small-group instruction
is most effective with approximately 8 to 12 OTRs per minute (Gunter et al. 2004;
Harlacher et al. 2010). Other recommendations suggest that students learning new
skills, particularly students with disabilities, may require as many as 10 OTRs per
minute (Gunter et al. 2004; Haydon et al. 2009). Gunter et al. (2004) indicate that
students in the acquisition stage of the instructional hierarchy require 4 to 6 OTRs
per minute with at least 80 % accuracy; and students at the fluency stage and beyond
should have 9 to 12 OTRs per minute with at least 90–93 % accuracy. Students who
respond with at least 98 % accuracy have reached a level of independent mastery
and are ready for more difficult material (Treptow et al. 2007).
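To make these ranges easier to apply during an observation, the following minimal Python sketch (not from the text; the thresholds simply restate the guidelines above, and the function and variable names are our own illustration) flags whether pacing or accuracy may warrant an adjustment.

# Minimal sketch: compares observed OTR pacing and accuracy against the
# approximate ranges cited above (Gunter et al. 2004). Thresholds and names
# are illustrative assumptions, not a prescribed tool.

def check_otr_pacing(otrs_per_minute: float, accuracy: float, stage: str) -> str:
    """Return a rough suggestion; accuracy is a proportion between 0.0 and 1.0."""
    if stage == "acquisition":
        rate_floor, accuracy_floor = 4, 0.80
    else:  # fluency stage and beyond
        rate_floor, accuracy_floor = 9, 0.90

    notes = []
    if otrs_per_minute < rate_floor:
        notes.append("consider increasing opportunities to respond")
    if accuracy < accuracy_floor:
        notes.append("consider more modeling, precorrection, or easier material")
    if accuracy >= 0.98:
        notes.append("student may be ready for more difficult material")
    return "; ".join(notes) or "pacing and accuracy fall within the cited ranges"


# Example: small-group fluency practice observed at 7 OTRs per minute, 85 % accuracy
print(check_otr_pacing(7, 0.85, stage="fluency"))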
Increasing OTRs  Increasing OTRs is a powerful way of adjusting instruction and
can be accomplished in multiple ways. One way to increase the rate of OTRs is to set a goal and obtain graphed feedback from peer observers (Simonsen et al. 2010).
Another strategy is to use visual aids to prompt the provision of an OTR (e.g., pie-
ces of paper on the corner of the projector, moving coins from one pocket to the
other after providing an OTR, or a visual on each student’s desk).
Increasing Accuracy  When observing OTRs, the accuracy of those responses can
be recorded. If the accuracy is not sufficient, at least two strategies can improve
accuracy. First, ensure “think time” (also called wait time) is between 3 and
5 seconds, which is an optimal time for students to respond (Stitcher et al. 2009).
Second, use antecedent or precorrection strategies. Precorrections are prompts or
cues that remind students about the expected behavior or skill and are given prior
to students entering problematic areas (e.g., unstructured times during school or
before responding to certain academic content) (Simonsen et al. 2010). Examples
of precorrection in reading include reminding students of decoding rules prior to
reading, highlighting aspects of the reading material, or practicing a skill prior to
another activity involving that skill.

9.5.4  4. Amount of Review

A fourth factor that can be adjusted is the amount of review included with instruc-
tion. Review is important both for maintenance of skills and for activating previ-
ously learned knowledge and skills prior to instruction (Hall 2002; Kame’enui and
Simmons 1990). Review and practice are more beneficial when they are distributed (i.e., a few minutes of review each day) rather than massed (i.e., longer review periods on fewer days) (Donovan and Radosevich 1999; Hall 2002).
Adjusting Review  Review can be adjusted by considering massed versus distributed review, review that is cumulative (covering both previously and newly learned material) versus focused on one topic, and various schedules of review (daily, weekly, and monthly) (Hall 2002). The amount of review can be adjusted by increasing chunks
of time set aside for review within a lesson, increasing repetitions of previously
learned material, or increasing the amount of review material that is interspersed
with new material.

9.5.5  5. Repetitions

Repetitions are opportunities for students to practice new skills and have been
shown to improve retention, increase proficiency and fluency, and free up work-
ing memory for more complex, higher-level tasks (Archer and Hughes 2010). The
amount of review and the number of repetitions are overlapping elements.
On average, it can take between three and eight repetitions for a student to learn
a new skill, with more proficient readers requiring fewer repetitions and less profi-
cient readers requiring more repetitions (Reitsma 1983). Gunter et al. (2004) report
it may take up to 30 repetitions for a student to acquire a skill, and Wong and Wong
(2001), crediting research by Madeline Hunter, report that it can take, on average,
28 repetitions for a student to learn a new behavior that has to replace an old be-
havior.
Increasing Repetitions  Repetition can be increased by specifically allocating time
for repetition or by ensuring a certain number of repetitions are built in when pre-
senting new material. For example, some lessons will introduce a key vocabulary
word prior to reading a story, have students pronounce the word several times, read
the word repeatedly in the context of the story, and write a sentence using that key
word (Kame’enui and Simmons 1990).

9.5.6  6. Activating Background Knowledge

This factor is the process of connecting new content to previous knowledge to in-
crease the student’s ability to learn the new content. See Teach: Background Knowl-
edge in Chapter 8.

9.5.7  7. Corrective Feedback

Corrective feedback refers to the feedback a teacher provides a student after an in-
correct response. Corrective feedback can be adjusted by increasing the directness
and clarity of the corrective feedback. Consider a misread word. Less direct corrective feedback may be, "Look at that word again." More direct corrective feedback may be, "That word is _____. What word is it?" This directness may result in an
increase in the rate of OTRs per minute and provide more repetitions. Immediate
and direct corrective feedback is beneficial because less time is spent waiting for
information retrieval.

9.5.8  8. Praise-to-Redirect Statements

As was just discussed, clear feedback about behavior and performance contributes
to an effective learning environment. Providing clear feedback includes behavior-
specific praise. Praise is defined as feedback and acknowledgment for behavior or
academic responses that is specific and delivered immediately after the behavior is
performed (Flora 2000; Gable et al. 2009). Using behavior-specific praise as part of
clear feedback ensures students perform accurately and are aware of what behaviors
and skills are expected of them (Hattie 2009; Horner et al. 2005; Marzano 2010).
A redirect is defined as feedback that expresses disapproval and directs the student to a different response.
The recommended praise-to-redirect ratio is 5 to 1 (Flora 2000; Sutherland et al. 2003). Establishing a 5-to-1 ratio in a classroom can increase students' time on-
task, correct academic responding, work production, and compliance with requests
(Gable et al. 2009; Simonsen et al. 2011; Sutherland et al. 2000, 2003).
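As a simple illustration of how an observer might summarize a tally against this guideline, the short Python sketch below uses hypothetical counts; nothing here is prescribed by the text.

# Minimal sketch: computes an observed praise-to-redirect ratio from tallies
# and compares it to the 5:1 guideline cited above. Data are hypothetical.

def praise_to_redirect_ratio(praise_count: int, redirect_count: int) -> float:
    """Return praise statements per redirect; infinity if no redirects occurred."""
    if redirect_count == 0:
        return float("inf")
    return praise_count / redirect_count


praise, redirects = 12, 4  # tallies from a hypothetical 20-minute observation
ratio = praise_to_redirect_ratio(praise, redirects)
print(f"{ratio:.1f}:1 -", "meets the 5:1 guideline" if ratio >= 5 else "below the 5:1 guideline")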
Increasing Ratio of Positive-to-Redirect Statements  Increasing the ratio of positive
to redirect statements can be accomplished through peer observation by establis-
hing a baseline of performance, setting a goal, and determining if the goal is met (see
Simonsen et  al. 2006, 2010). The ratio can also be improved through the use of
visual prompts (e.g., a poster in the classroom, sticky-note, etc.), a tactile reminder
(e.g., coins transferred from one pocket to another following a praise statement), or
other self-monitoring such as marking on a sticky-note each time praise is provided.

9.6 Chapter Summary and Key Points

Progress and fidelity monitoring are critical to plan evaluation. This chapter re-
viewed interpretation of progress monitoring graphs and provided a description
of several research-based instructional factors to adjust when data indicate an in-
structional change is warranted. Before reviewing student progress, critical graph
components should be present including a y-axis that represents the normal range
of scores for the skill being measured, an x-axis representing chronological dates
the data were collected, a goal, an aim line, and a trend line. Progress can be in-
terpreted by examining the number of consecutive data points below the aim line
or by comparing the trend line to the aim line. Interpreting the trend line requires
consideration of the number of data points and a pattern of performance. When a
student’s progress is poor or questionable and fidelity is good, instructional changes
are warranted. A summary of eight instructional factors that can be adjusted to im-
prove student outcomes was provided, illustrating that instructional changes do not always require changing curricula.
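To make these graph-reading rules concrete, the following minimal Python sketch uses illustrative data; the least-squares trend line and the four-consecutive-points rule are common conventions offered only as an example, not a prescription from this book, and recall the caution that up to 14 data points may be needed for a reliable trend.

# Minimal sketch (illustrative data): fits a least-squares trend line to
# progress monitoring scores, compares its slope to the aim line, and counts
# consecutive data points below the aim line.
from statistics import linear_regression  # Python 3.10+

weeks = list(range(1, 15))                                           # 14 weekly data points
scores = [42, 43, 41, 44, 45, 44, 46, 47, 46, 48, 49, 48, 50, 51]    # words correct per minute
baseline, goal, goal_week = 42, 70, 30                               # aim line from baseline to goal

aim_slope = (goal - baseline) / goal_week
aim_line = [baseline + aim_slope * (week - 1) for week in weeks]

trend_slope, _ = linear_regression(weeks, scores)

consecutive_below = 0
for score, aim in zip(reversed(scores), reversed(aim_line)):
    if score < aim:
        consecutive_below += 1
    else:
        break

print(f"Trend slope {trend_slope:.2f} vs. aim-line slope {aim_slope:.2f}; "
      f"{consecutive_below} consecutive point(s) below the aim line")
if trend_slope < aim_slope or consecutive_below >= 4:
    print("Progress appears questionable; if fidelity is adequate, consider an instructional change.")
else:
    print("Progress appears on track; continue the current plan.")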

Key Points
• A progress monitoring graph has components that allow for interpretation
including a y-axis that represents the normal range of skills, an x-axis that
represents the chronological dates of administration, a goal, aim line, and
trend line.
• A pattern of performance must be established before examining growth.
• Research suggests that up to 14 data points may be needed to establish a
reliable trend line.
• Level and variability can be examined on progress monitoring graphs to
help interpret performance.
• Fidelity is necessary to consider before making decisions about instructio-
nal changes.
• A summary of eight instructional factors that can be used to intensify instruction is provided.

Appendix 9A

Examples and Nonexamples of Pattern of Performance


Chapter 10
Frequently Asked Questions about Curriculum-
Based Evaluation

10.1 Is Curriculum-Based Evaluation Just for Tier 3?

Curriculum-based evaluation (CBE) is very time-intensive and is often reserved
for students requiring the most intensive level of reading support. However, ele-
ments of CBE can be used for students requiring any level of support. This section
describes how CBE can be used for students across the continuum. Table 10.1 sum-
marizes ways CBE can be used across the three tiers.
Tier 3—Individualized, Intensive Support  The use of CBE at a tier 3 level is
logical. Because tier 3 supports are intended to be highly individualized, a problem-
solving process including individualized assessment and planning is needed. CBE
is a process that continues until the student reaches the goal. Consequently, it is not
a “one-time” assessment but a process to assist schools in reaching high standards
for all students.
Tier 2—Strategic Support  Logistically, it would be difficult if not impossible to
conduct individualized assessment with every student needing supplemental sup-
port at tier 2. Therefore, group diagnostics is presented as a strategy of gathering
reading information for groups of students that informs targeted, tier 2 instruction.
Group diagnostics is the process of using data to sort students with common reading
needs into instructional groups. During the Problem Identification phase, students
will be identified who need supplemental instruction at tier 2. Group diagnostics
can then be conducted to inform instruction. Group diagnostics offers a compromise
between time-intensive individualized assessment and a one-size-fits-all standard
treatment protocol for reading. The process of group diagnostics is described next
using the four phases of the CBE Process.

10.1.1 Group Diagnostics

To conduct group diagnostics, evaluators examine oral reading fluency data for rate
and accuracy. Each student’s rate and accuracy are compared to either a normative
criterion or benchmark criterion for rate and a 95 % criterion for accuracy.

Table 10.1   Description of how and where CBE can be used between tiers of instruction

Tier 1. Use of CBE: Survey-level assessment to verify instructional, frustrational, and independent status. Examples:
• After benchmarking with CBM, a survey-level assessment is conducted to determine instructional reading level
• Survey-level assessment above a student's grade level to determine how advanced the student's skills are

Tier 2. Use of CBE: Group diagnostics; survey-level assessment. Examples:
• A team of third-grade teachers uses group diagnostics to plan tier 2 support for students
• Survey-level assessment is conducted on a small group of eighth-grade students to determine their appropriate placement for support

Tier 3. Use of CBE: Entire CBE Process to design and provide support. Examples:
• A school-level problem-solving team conducts CBE for students who receive tier 3 support
• As part of a student's triennial review, a school psychologist conducts CBE to help develop a student's individualized education program

Problem Identification 
Step 1—Ask: Is there a problem? Do: Universal screening with all students in
a class or grade level.
Examine universal screening data from a class or grade level to identify students
who are at risk for not meeting reading standards. A cutoff point (e.g., 25th per-
centile) will be determined by the school team, and students performing below this
cutoff will move to Step 2.
Step 2—Ask: Does the problem warrant further investigation? Do: Verify stu-
dents’ at-risk status with multiple sources of information.
For each student identified in the at-risk group from the universal screening data,
multiple sources of information should be used to verify their at-risk status before
moving to the problem analysis phase. Examples of the sources of information to
consider include results from state tests, district tests, classroom-based assessments,
teacher observation of daily classroom performance, etc.
Problem Analysis
Step 3—Ask: Why are the students at risk in reading? Do: Examine rate and
accuracy on oral reading fluency probes.
The Problem Identification phase will generate a list of students verified as at risk
for not meeting standards in reading. With these students, administer three reading
CBM probes and report the median words read correctly and errors. Calculate the
accuracy percentage. Refer to Chapter 6 for scoring directions and the formulas for
calculating rate and accuracy.
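A minimal Python sketch of this step follows (hypothetical probe scores; the accuracy formula shown, words read correct divided by total words attempted, is a common convention and should be checked against the formulas in Chapter 6).

# Minimal sketch: median words read correct (WRC) and accuracy from three
# reading CBM probes. Probe scores are hypothetical.
from statistics import median

probes = [(92, 6), (88, 5), (95, 7)]  # (words read correct, errors) per probe

median_wrc = median(wrc for wrc, _ in probes)
median_errors = median(err for _, err in probes)
accuracy = median_wrc / (median_wrc + median_errors) * 100

print(f"Median WRC: {median_wrc}, median errors: {median_errors}, accuracy: {accuracy:.1f}%")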
Plan Implementation  Once fluency and accuracy rates have been obtained, students are sorted into four groups (see Table 10.2 and Handout 10.1): Group 1: accurate and fluent; Group 2: accurate and not fluent; Group 3: inaccurate and fluent; and Group 4: inaccurate and not fluent. Each group is associated with teaching recommendations that can be provided in Tier 2 instruction. Table 10.2 summarizes the teaching recommendations for each group.

Table 10.2   Four-group instructional sort and teaching recommendations
Group 1: Accurate and fluent. Teach: Comprehension and vocabulary. Refer to Chapter 8 for instructional strategies
Group 2: Accurate and not fluent. Teach: Fluency building. Refer to Chapter 6 for instructional strategies
Group 3: Inaccurate and fluent. Teach: Self-monitoring strategies (depending on student's decoding abilities). Refer to Chapter 6 for instructional strategies
Group 4: Inaccurate and not fluent. Teach: Decoding skills and/or sight words. Refer to Chapters 6 and 7 for instructional strategies
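Because the sort is mechanical once the rate criterion and the 95 % accuracy criterion are chosen, it can be scripted; the Python sketch below uses hypothetical students and a hypothetical rate benchmark.

# Minimal sketch: assigns students to the four groups in Table 10.2 using a
# rate criterion (norm or benchmark) and the 95 % accuracy criterion.

def sort_into_group(wrc: float, accuracy: float, rate_benchmark: float) -> int:
    """Return the group number (1-4) from the four-group instructional sort."""
    fluent = wrc >= rate_benchmark
    accurate = accuracy >= 95.0
    if accurate and fluent:
        return 1   # teach comprehension and vocabulary
    if accurate and not fluent:
        return 2   # teach fluency building
    if not accurate and fluent:
        return 3   # teach self-monitoring (check decoding)
    return 4       # teach decoding skills and/or sight words


students = {"A": (102, 97.0), "B": (61, 96.5), "C": (105, 91.0), "D": (48, 88.0)}
for name, (wrc, acc) in students.items():
    print(name, "-> Group", sort_into_group(wrc, acc, rate_benchmark=90))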
Plan Evaluation  Plan Evaluation involves measuring progress to determine the
effectiveness of instruction. Measurement of fidelity and measurement of progress
are two critical components of Plan Evaluation. Oral reading fluency probes are
recommended as the tool to monitor general outcomes in reading. Both rate and
accuracy should be monitored. Other measures can be used in addition to oral rea-
ding fluency to monitor short-term goals and skill mastery. See Chapters 6 to 8 for
progress monitoring specifics.
In addition to group diagnostics, survey-level assessment can assist with Tier
2 instructional planning. The survey-level assessment provides information about students' instructional level and can increase the homogeneity of targeted groups.
Tier 1—Core Instruction  At Tier 1, there are a variety of ways to use CBE. Group
diagnostics can be used to determine the groups and focus for the small-group ins-
truction portion of Tier 1. Additionally, the survey-level assessment can be used to
determine a student’s frustrational, instructional, and independent reading levels,
which could contribute to verifying a student’s at-risk status after universal scree-
ning. For struggling students, a teacher could use the entire CBE Process.
The administration time will vary depending on the student’s skills, grade level,
and what phase of the CBE Process is being conducted. For the Problem Identifica-
tion phase, the initial identification of a problem can be as quick as a few minutes,
since review of records may be all that is necessary to determine the existence of a
problem. If included in the Problem Identification phase, a survey-level assessment
can take 20–30 minutes.
The Problem Analysis phase involves a series of short tasks that average about
10–15 minutes each. For example, the self-monitoring assessment for decoding can
take 5–10 minutes, but an in-depth teacher interview focused on comprehension and
approach to reading could take 20–30 minutes. Once the evaluator is fluent with the
CBE Process, a full evaluation can range from as little as an hour up to 3 hours to
conduct, depending on the student’s skills.
The Plan Implementation phase usually involves a meeting of about 1 hour and
results in an intervention plan and identified strategies for measuring progress and
fidelity.

The Plan Evaluation phase can last several weeks or months and includes ongo-
ing progress and fidelity monitoring and regularly scheduled data review meetings
for data-based decision making.

10.2 How Can I Convince My School to Use CBE?

To answer this question, we will refer to the literature on systems change and Multi-Tiered System of Supports (MTSS). First, educators are more willing to try new
practices when they understand the purpose and benefits of their use (Barnes
and Harlacher 2008). An understanding of the rationale and the skills needed to
use the practice increases the likelihood of buy-in. Doing things because “the
district wants you to do it” may get compliance, but perhaps at the expense of
fidelity of implementation and understanding the practice (Greenwood et  al.
2008; Ikeda et al. 2002). Having discussions and providing presentations about
solutions to current problems are ways to introduce CBE. Another approach to
convincing others is to use the CBE Process yourself and share your data with
others to illustrate the value of CBE.

10.3 Is CBE Reliable and Valid?

CBE is a decision-making framework, so its reliability and validity are only as
strong as the tools used within the process. A discussion on the general concept
of reliability and validity is presented first before sharing information about CBE
psychometrics.
Reliability is defined as consistency or accuracy in measurement (e.g., a test’s
ability to “sort” students the same way each time it is administered). Reliability af-
fects the confidence one has in the assessment’s ability to yield the same score on
multiple administrations or to consistently “hit the same mark.” Imagine throwing
darts at a dartboard. The ability to consistently hit the same spot on the board is the
concept of reliability. If you throw three darts, are they clustered near each other
or did they land in three very different spots? (See Fig. 10.1) Test makers and con-
sumers want assessments that consistently produce the same result (Thorndike and
Thorndike-Christ 2010).
Validity is the extent to which a test actually measures what it purports to mea-
sure. If a test claims to measure a construct, such as reading comprehension or
sociability, then validity is the extent to which it actually measures reading compre-
hension or sociability (Thorndike and Thorndike-Christ 2010). Using the dartboard
again, validity is the extent to which you hit the mark at which you are aiming. A
dart may repeatedly hit the same spot, which is reliability, but the ability to hit the
intended mark, the bull’s eye, is validity (see Fig. 10.1).
Fig. 10.1   Illustration of reliability and validity

From a psychometric point of view, reliability coefficients (as measured with Cronbach's alpha) above α = 0.70 are considered strong; a standard of α = 0.80 is recommended for making decisions about individuals, and α = 0.90 or above for high-stakes decisions, such as disability classifications (Foegen et al. 2007; Kaplan and Saccuzzo 2008; Thorndike and Thorndike-Christ 2010). Tools are generally considered to have
good validity if they correlate moderately with tests that measure the same con-
struct (i.e., correlation coefficients above 0.40). However, this is a general guide-
line and in some cases, a correlation below 0.40 may be considered good validity
(Thorndike and Thorndike-Christ 2010).
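As a trivial aid when reviewing a tool's technical manual, the Python sketch below simply restates the cut points cited above; the labels are ours, not the cited authors'.

# Minimal sketch: labels a reliability coefficient using the rough standards
# cited above. Cut points mirror the text; labels are illustrative.

def reliability_adequacy(alpha: float) -> str:
    if alpha >= 0.90:
        return "adequate for high-stakes decisions"
    if alpha >= 0.80:
        return "adequate for decisions about individuals"
    if alpha >= 0.70:
        return "strong for general use"
    return "interpret with caution"


print(reliability_adequacy(0.84))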
CBE frequently uses CBM in the process, which has strong reliability (Wayman
et al. 2007). Reliability coefficients for test-retest reliability for word identification
and oral reading fluency (ORF) have been between r = 0.84 and 0.96, and alternate-
forms reliability coefficients have ranged from r = 0.84 to 0.96 (Deno 2003; Way-
man et al. 2007).
In addition, CBM has demonstrated good validity because of its association with
other reading measures and state-level assessments (McGlinchey and Hixson 2004;
Miura Wayman et al. 2007; Silberglitt et al. 2006). Fuchs et al. (1988) compared
ORF to other common measures of comprehension, including a subtest from a read-
ing achievement test, cloze (a silent reading activity in which every seventh word is
blank and the student must select the word that fits the sentence), story retell (i.e.,
ability to summarize or paraphrase a passage that was read), and question–answer
measures. ORF correlated at 0.91 with the comprehension subtest of an achieve-
ment test, compared to a range of 0.76–0.82 for the other comprehension measures
described previously. Additionally, criterion-related validity for ORF ranges from
0.63 to 0.90, with most studies reporting coefficient values above 0.80 (Wayman
et al. 2007).
Classroometric Theory  A classroom-based theory of psychometrics, dubbed
“classroometric” by Brookhart (2003), provides another view of reliability and
validity (McMillian 2003). The traditional view of psychometrics treats assessment as separate from instruction and is concerned with isolated measurement of skills. The
goal of large-scale standardized testing is to document whether or not learning has
taken place (i.e., to obtain a score from which to draw conclusions). Any modifi-
cation of instruction during test-taking or assistance while taking the test would
spoil standardization and would be perceived as “cheating.” Within a classroom,
however, the line between instruction and assessment is blurred because the goal of
assessment is to use the results to provide feedback to students and guide instruction
(Marzano 2010; Stiggins and Chappuis 2006). Any instructional modifications or
teacher support for the students are seen as valuable outcomes of assessment rather than as something that spoils the results. CBE allows educators to get an
accurate view of student skills, which leads to improved instruction. Thus, although
it may not be conducted the same every time, it is arguably a reliable and valid
assessment process from a classroom-based theory of psychometrics (Brookhart
2003; McMillian 2003).
From a classroom-based perspective, reliability is the extent to which a teacher
has accurate information to know enough about student skills to meaningfully ad-
just instruction. Consider a student who typically earns high marks in class, but
on one particular day earns an “F.” The teacher, knowing the student’s history and
previous performance, knows that the “F” does not accurately represent the stu-
dent’s skills, regardless of the test’s reliability coefficient. This conceptualization
of reliability is related to the traditional view of psychometrics in that it is about
consistency of a score. With classroom-based psychometrics, consistency refers to
a teacher’s knowledge of student skills. With traditional psychometrics, consistency
refers to repeated measurement of some kind (i.e., across time, between raters, or
between items on a test) (Brookhart 2003).
According to the classroometric theory, validity is considered the extent to which
the information from the results is beneficial to the teacher in planning instruction
for the student (Brookhart 2003; McMillian 2003). Do the results of an assessment
lead to improved outcomes for the learner? For example, do the test items or scores
on an end-of-unit reading test help the teacher plan instruction for a particular stu-
dent? If yes, it can be considered valid; if no, then the test is not considered valid.
The assessments used in the CBE Process are reliable and valid from a technical standpoint (Howell and Nolet 2000), and they are also arguably reliable and valid from a classroom-based theory of psychometrics.

10.4 Is CBE Evidence-Based?

Inferences about CBE’s effectiveness can be drawn from two sources: (a) the re-
search on use of formative assessment as a means to improve academics and (b) the
use of CBM within the assessment process.
Formative Assessment Research  Formative assessment refers to a broad set of assessment practices in which the data obtained from those assessments is used to
improve teaching or learning (Black and Wiliam 1998; Kingston and Nash 2011). It
is termed “formative” because the data is used while learning is still forming; that
is, instruction is occurring during assessment so that the information obtained can
guide instruction. Formative assessment stands in contrast to summative assessment, which is used at the end of a learning cycle to draw global conclusions about a student's skills. The term summative implies the goal is to measure the
sum of learning after teaching has occurred (Stiggins and Chappuis 2006). CBE is
designed to be used in a formative manner; therefore, the literature base for formative assessment is applicable.
Black and Wiliam’s (1998) oft-cited analysis of formative assessment conclud-
ed that formative assessment has a positive effect on student achievement. They
determined that the use of formative assessment results in an effect size (ES) of
0.40–0.70 (ESs of 0.20, 0.50, and 0.80 are rated as small, moderate, and large, re-
spectively; Cohen 1988). (See Inset 3.1 in Chapter 3 for an explanation of an ES.)
However, questions were later raised about which types of formative assessment practices were included in the analysis and about how formative assessment was defined in Black and Wiliam's review.
To address this lack of definitional clarity, Kingston and Nash (2011) conducted a meta-analysis of the effect of formative assessment on stu-
dent achievement. Overall, formative assessment was associated with a small ES
( d = 0.20), but was moderated by the type of formative assessment and by subject
area. Professional development (i.e., training teachers to use formative assessment)
netted an ES of 0.30 and use of computer-based formative assessment systems re-
sulted in an ES of 0.28. Formative assessments used in reading and mathemat-
ics were associated with an ES of 0.32 and 0.17, respectively. Given Black and
Wiliam’s (1998) findings and Kingston and Nash’s (2011) recent meta-analysis, it
is fair to conclude that the use of formative assessment, particularly in reading and
around professional development and computer-based systems, is associated with
improvement in student’s achievement.
CBM Research  CBM used as a progress monitoring tool lends more support to the
effectiveness of the CBE Process. Progress monitoring is the frequent assessment
and visual graphing of a student’s performance to determine the effectiveness of
a given instructional plan and to inform decisions about the need for instructional
changes (Hosp 2008). Formative assessment can take the form of informal assess-
ment (e.g., anecdotal observations; use of mastery-checklists; checks for understan-
ding, etc.); progress monitoring is a specific type of formative assessment that is a
more formal process of standardized procedures and technically adequate assess-
ments (Hosp et al. 2006). Progress monitoring tools should be brief and efficient to
administer and score, have alternate forms, be able to be administered frequently,
and measure basic skills. CBM meets all of those criteria (Deno 2003; Hosp et al.
2006).
The use of CBM by teachers is associated with increases in student achievement compared to the achievement of students whose teachers do not use CBM. Teachers who set goals for students, monitor those goals with CBM, and
adjust instruction accordingly have students with higher achievement scores
compared to teachers who do not use CBM or progress monitoring tools (Conte
and Hintze 2000; Stecker et al. 2005). However, such increases in achievement
are related to a few circumstances. The use of CBM for progress monitoring
combined with data-decision rules, which specify when to make instructional
changes, appears to be critical for improved student outcomes. Additionally, the
use of skills’ analysis, computer entry and software, and consultation around
what instructional changes to make when the data indicates a need for a change
were additional factors associated with higher student outcomes (Stecker et al.
2005). These caveats support the use of CBE because CBE provides analysis of
skills and relies on data-decision rules (both during the assessment process and
during the monitoring process). These findings also point to the importance of educators collaborating with evaluators around CBE and its results.

10.5 Do Directions Influence a Student's Reading Rate on Reading CBM Passages?

They do! When students are told to “read fast and carefully,” they do read faster,
but they make more errors. It is important to use the phrase "do your best reading" from the standardized administration directions because it yields the most accurate and valid reading from the student (Colón and Kranzler 2006).

10.6 Does Oral Reading Fluency Measure Comprehension?

Oral reading fluency probes are general outcome measures of reading. When scores im-
prove, they indicate that overall reading has improved. In fact, oral reading fluency
correlates quite well with measures of reading comprehension. Fuchs et al. (1988)
compared ORF to other common measures of comprehension, including cloze (a
silent reading activity in which every seventh word is blank and the student must
select the word that fits the sentence), story retell, and question–answer measures.
ORF correlated at 0.91 with the comprehension subtest of an achievement test,
compared to 0.76–0.82 for the typical comprehension measures (ESs of 0.20, 0.50, and 0.80 for correlations are rated as small, moderate, and large, respectively; Cohen 1988).
Just as body temperature measured by a thermometer is an indicator of overall physical health, ORF is an indicator of overall reading performance. A fever does
not tell us exactly what is wrong (e.g., infection, virus, etc.), but indicates a problem
exists that warrants further investigation. Similarly, a low score on an oral reading
fluency probe does not tell us exactly what is wrong with reading (e.g., decoding,
comprehension, etc.), but indicates a reading problem exists that warrants further
investigation. After further investigation, a health problem is identified and treated
and the thermometer is used to monitor body temperature, which indicates if the
treatment is working. Likewise, after further investigation, a reading problem is
identified and targeted with intervention and monitored with oral reading fluency
probes to determine if the intervention is working.

10.7 What about the Common Core State Standards?

The English Language Arts Common Core State Standards (ELA CCSS) include
Reading Foundations standards, which are divided into three sections: (1) Print
Concepts for kindergarten and first grade, (2) Phonological Awareness for kinder-
garten and first grade, and (3) Phonics and Word Analysis for kindergarten through
fifth grade. The standards describe what the students should know at the end of each
grade level.
The ELA CCSS do not provide tools to pinpoint the skills that struggling students are missing, nor guidelines for how to teach those skills, either at the prescribed grade levels or for students above fifth grade who have yet to master the Reading Foundations standards.
CBE supports the implementation of ELA CCSS by providing a process to iden-
tify missing foundational skills that are preventing students from achieving the stan-
dards. CBE offers educators the tools to pinpoint missing skills and teach those
skills so students can meet ELA CCSS.

10.8 Do I Have to Use the Median When Administering ORF Measures?

Because ORF measures are sensitive to changes in reading skills, they are also
sensitive to error in administration, student background knowledge, and other ex-
traneous factors. Using the median can ensure a more robust and reliable measure of
reading skills (Good and Kaminski 2011; Miura Wayman et al. 2007).

10.9 Why Do I Have to Do a Survey-Level Assessment if I Know the Student's Reading Skills Are Low?

The survey-level assessment (SLA) is the beginning point of the CBE Process and determines the focus of the specific-level assessment. It provides a comprehensive picture of the student's reading level that supports verifying that a problem warrants further investigation, allows a gap analysis to inform goal setting, informs material selection for teacher-led instruction and independent reading, and indicates the need for scaffolding when material is frustrational. The results of the SLA kick off the specific-level assessment in the CBE Process because they inform the first assessment activities to pinpoint missing skills.

Handout 10.1 Four-Group Instructional Sort 

Rate of Words Read Correct Per Minute: High (> norm or benchmark) vs. Low (< norm or benchmark)
Accuracy of Text: High (> 95 %) vs. Low (< 95 %)

Group 1: Accurate and Fluent Reader (accuracy high, rate high)
Instructional Hierarchy: Generalization and Adaptation
Teach: Grade-Level Content, Comprehension and Vocabulary (see Chapter 8)
Plan of Action: Instruction on meaning of content, specific words, and maintenance of skills

Group 2: Accurate and Not Fluent Reader (accuracy high, rate low)
Instructional Hierarchy: Fluency
Teach: Fluency and rate building
Plan of Action: Build fluency and automaticity at the word, sentence, and passage level. Instruction on grouping words to improve prosody, rate, and automaticity.

Group 3: Inaccurate and Fluent Reader (accuracy low, rate high)
Instructional Hierarchy: Acquisition (of phonics skills) or Fluency and Generalization (of skills to new words and text)
Teach: Determine if student's high error rate is due to lack of self-correcting errors or if student lacks decoding skills.
Plan of Action: If self-correcting errors, focus on accuracy and self-monitoring. If lack of decoding, follow "Inaccurate and Not Fluent Reader" strategies. If both, combine strategies from Group 3 and Group 4.

Group 4: Inaccurate and Not Fluent Reader (accuracy low, rate low)
Instructional Hierarchy: Acquisition
Teach: Acquisition and fluency of basic reading skills (decoding and/or sight words)
Plan of Action: Instruction on missing decoding skills and sight words (based on results of error analyses). Work on applying skills to connected text and building fluency.

Note: Accuracy and rate are based on result of Reading Curriculum-Based Mea-
surement. Adapted from Kansas State Department of Education (2011).
Appendices

Appendix A: RIOT and ICEL Forms. Examples of Interview and Observation Forms

Form A1  Teacher Interview Form, Example 1

Student: ____________________ Teacher: ___________________ Date:_________

1. Primary area of concern?

2. Any other areas/behaviors of concern?

3. Student strengths?

4. Describe student’s parcipaon in class:

5. Describe instructional strategies used with this student:

6. Student’s abilies compared to an average peer?


Reading: Above Same Below Writing: Above Same Below
Math: Above Same Below Organization: Above Same Below
7. Level of performance (CBM scores, state tests, district tests, etc.):

8. Percentage of work completed and accuracy:


a. In-class- ______ ______
b. Homework- ______ ______

9. Current instructional program and interventions for area of concern:


Tier I Tier II Tier III
Program
Time
Grouping(s)
Skills targeted:
Academic progress (positive, questionable, negative):
Dates:

10. Current goal and level:

11. Previous interventions/accommodations and result:

Additional notes:


Form A2  Teacher Interview Form, Example 2

Teacher Interview for Students Moving from Tier 2 to Tier 3

Teacher: Date:

Student

Data Review:
Attendance

Vision

Hearing

Other?

Opening: Thank you for taking the time to meet about this student. The purpose of this interview is to gain some information about the student from your perspective. This information will be used to help identify a focus for a classroom observation and to add to data that the (name of school-level problem-solving team) will use when designing instruction for (name of student).
1. What are the strengths of the student? What does s/he enjoy?

2. What are your concerns?

3. How is the student doing with …


• Math –

• Writing –

• Reading –

• Organizational Skills –

• Behavior –

• Social Interactions –

4. What percentage of in-class work does the student complete?

• How accurate is the completed work?

5. Describe instructional strategies used with this student.


• What materials are you using?

• How successful is the student with the materials?

6. What positive behavior/motivational/discipline strategies are used in your classroom?

7. To what extent are students familiar with the schoolwide behavior expectations?

• How often is this student recognized for meeting academic and behavioral expectations?

8. Is there anything else you would like to share that the team might need to know?

Form A3  Parent Interview Form, Example 1

Parent Interview Form

Child’s Name:_______________________________________________________

Thank you for your time. I want to just ask a few questions to get to know your child and to get your input about how he/she is doing in school. I'll start by asking…

1. What activities/hobbies does your child enjoy?

2. What does your child do well? In what areas have you seen growth?

3. What concerns do you have currently about your child’s growth and development?

4. How is your child’s physical health? Any medical issues, hearing, vision?

5. Any other comments or information you think would be important for our school team to know in planning support for your child?

Form A6  Student Interview Form (adjust language according to age of student)

Student Interview

1) What do you like to do for fun? (To figure out what they find reinforcing)

2) What do you like about school?

3) Some things are tough in school for students. What is tough for you? (To get information about academic functioning)
3a) Follow up with specific questions about a subject, "What is tough about reading for you?" or "What is tough about math for you?"

4) Sometimes students get in trouble every now and then. Do you ever get in trouble? (To get information about behavioral functioning)
4a) Ask follow-up questions to get at why, how often, and when he/she gets in trouble

5) What do you (play at recess/do during breaks) and with whom? (Ask about friends, weekend activities) (To get information about their social functioning)

6) Tell me about your day when you go home. What do you do first, then second... etc. (To get a sense of their daily routine at home)

Form A7  ICEL Interview Form

ICEL INTERVIEW GUIDE:

• Review, Interview, Observe, and Test


• Instruction, Curriculum, Environment, and Learner

Instruction
1. What about the instruction may contribute to lower than expected performance?

2. What changes to the instruction would likely improve the student's performance?

Curriculum
3. What about the curriculum may contribute to lower than expected performance?

4. What changes to the curriculum would likely improve the student’s performance?

Environment
5. What about the environment may contribute to lower than expected performance?

6. What changes to the environment would likely improve the student’s performance?

Learner
7. What characteristics/experiences/circumstances of the student may contribute to lower than expected performance?
Form A8  Instructional Observation Form, Example 1
Instructional Observation Form
Observer Name : ____________________________________ Date: ________________________ Time: ________________

Instructional Tier (Circle one) Tier 1 Tier 2 Tier 3


Instructor: (Circle one) Gen Ed Teacher IA Interventionist SPED Teacher Parent Volunteer Other: _______________
Program Used/Materials: _____________________________________ Group Size: ____________ Grade: ____________
Directions: During each 1-minute interval, mark "I" for each OTR and circle the mark if the student is correct. Mark a C if the child is provided with corrective feedback. Pick two peers to compare and alternate the peer each 1-minute interval. If the whole group is given an OTR, mark both the target and peer box.
Instructional Activity   Target Student   Peer 1      Instructional Activity   Target Student   Peer 2
:00-:59 10:00-10:59

1:00-1:59 11:00-11:59

2:00-2:59 12:00-12:59

3:00-3:59 13:00-13:59

4:00-4:59 14:00-14:59

5:00-5:59 15:00-15:59

6:00-6:59 16:00-16:59

7:00-7:59 17:00-17:59

8:00-8:59 18:00-18:59

9:00-9:59 19:00-19:59

Total Target OTRs: _____ OTR/min: ____ Accuracy:_____


Total Peer OTRs: _____ OTR/min: ____ Accuracy:_____

I = Opportunity to Respond (Circle if response is correct)  Student is given a chance to answer a question or provide a comment about the topic (e.g., identifying a word after being asked to do so). An OTR is not calling out answers without permission or talking about off-topic subjects. A student reading aloud for an extended time period will be scored as 1 OTR for each 30-second block of time; if the student reads into a new interval, the new interval signifies a new OTR.
C = Corrective feedback from teacher When a student gives an incorrect
response, the teacher provides the correct response.
Directions  During each 1-minute interval, mark “I” for each OTR and circle the
mark if the student is correct. Mark a C if the child is provided with corrective
feedback. Pick two peers to compare and alternate the peer each 1-minute interval.
Form A9  Instructional Observation Form, Example 2
Instructional Observation Form
Observer Name: ____________________________________ Date: ________________________

Instructional Tier (Circle one) Tier 1 Tier 2 Tier 3


Instructor: (Circle one) Gen Ed Teacher IA Interventionist SPED Teacher Parent Volunteer Other: __________________________
Reading Program Used: _____________________________________ Group Size: ____________ Grade: ____________

Instructional Activity   On Task (10 sec)   Opportunities to Respond (OTR)   Teacher/Student Interactions (+ −)      Instructional Activity   On Task (10 sec)   Opportunities to Respond (OTR)   Teacher/Student Interactions (+ −)
Student 1 Student 2
(:00-:59) (10:00-10:59)
Student 2 Student 3
(1:00-1:59) (11:00-11:59)
Student 3 Student 1
(2:00-2:59) (12:00-12:59)
Student 1 Student 2
(3:00-3:59) (13:00-13:59)
Student 2 Student 3
(4:00-4:59) (14:00-14:59)
Student 3 Student 1
(5:00-5:59) (15:00-15:59)
Student 1 Student 2
(6:00-6:59) (16:00-16:59)
Student 2 Student 3
(7:00-7:59) (17:00-17:59)
Student 3 Student 1
(8:00-8:59) (18:00-18:59)
Student 1 Student 2
(9:00-9:59) (19:00-19:59)

Correct Responses: _____ + Interactions: ______ Comments, general student behavior, etc.: _____________________________________
Total OTRs: _____ − Interactions: ______ ______________________________________________________________________________
Student Success %: _____ +:− Ratio: ____:____ ______________________________________________________________________________
Pacing (OTRs/min) _____ Intervals On Task ____ / 20 ______________________________________________________________________________

Instructional Observation Form Directions:


General  Select 3 students at random from the group to observe. You will observe
each of these 3 students in successive 1-minute intervals.
Instructional Activity  Record the specific instructional activity that is occurring
during the observation. Example activities include word attack, story reading, com-
prehension or vocabulary activities, independent work, peer tutoring, etc. You only
need to record the activity once when it begins.
On Task  For the first 10 seconds of each interval, observe whether the student is on
task. On-task is defined as following directions within 5 seconds of a teacher direc-
tion, having their eyes oriented toward the teacher or appropriate class materials, and
using work materials appropriately. If the student is on task for the entire 10 seconds, mark a +. If the student is not on task for the entire 10 seconds, mark an O.
Opportunities to Respond  Each time the teacher asks the target student or the entire group an academic question, record a + if the target student provides the correct answer or an O if the student provides an incorrect answer or does not answer within 5 seconds.
Teacher/Student Interactions  Each time the teacher provides a praise statement or positive gesture to the target student or the entire group, put a tally mark in the + column. Each time the teacher provides a redirect to the target student or the entire class, put a tally mark in the − column.
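For evaluators who tally electronically, the following minimal Python sketch (hypothetical tallies; the variable names are ours) computes the summary fields at the bottom of Form A9.

# Minimal sketch: summary metrics for a 20-minute observation using Form A9.
# All tallies below are hypothetical.
correct_responses = 34
total_otrs = 40
praise_interactions = 15
redirect_interactions = 4
intervals_on_task = 17
observation_minutes = 20

student_success_pct = correct_responses / total_otrs * 100
pacing_otrs_per_min = total_otrs / observation_minutes
ratio = praise_interactions / max(redirect_interactions, 1)

print(f"Student Success %: {student_success_pct:.0f}")
print(f"Pacing (OTRs/min): {pacing_otrs_per_min:.1f}")
print(f"Praise-to-redirect ratio: {ratio:.1f}:1")
print(f"Intervals On Task: {intervals_on_task}/20")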

Form A10  Instructional Observation Form, Example 3

Observation Form

Student: Grade: Teacher: Start time:

Date of observation: Subject: End time:

I. Check type of activities observed:

_____ Teacher-directed _____ Self-directed _____ Independent seat work


_____ Whole class _____ Small group _____ Partner work
_____ Other:

Describe:

II. Physicality: III. Questions to consider:

_____ Number of Students in the class
Where is student seated in the classroom?
1. Describe the specific expectations (instructions given, behaviors/actions expected, work to be completed, etc.).

Is the classroom noise or activity level a problem for this student?
If applicable, describe physical arrangement of the class:
2. What specific adaptations were made for others and for this particular student?

3. To what extent was the student able to meet expectations?

4. To what extent were other students, groups, or the entire class able to meet expectations?

5. What is the approximate ratio of positive statements to redirects?

Mark time every 5 minutes or less. Briefly describe what the teacher says or does
(or the activity, expectations, etc.) and what the student says or does. Examine data
for patterns or relationships between the environment and the student’s behavior or
performance.

Time   Teacher/Activity or Task   Student



Form A11  Observation Questions to Consider

ICEL: Observation Questions to Consider

Instruction:
• What effective teaching practices do you see that benefit the student?
• How is the student's instruction differentiated?
• Any modifications or accommodations?
• Can the student transition from one task to another? Does he/she require follow-up re-teaching or prompting after whole-class directions are given?

Curriculum:
• What materials are being used (at grade level)?
• What task-related skills do you see that the student demonstrates or does not demonstrate? (e.g., raising hand, knowing how to get help, cleaning area and preparing for next activity, etc.)

Environment
• Where is the student seated (away from noise, busy areas, etc.)?
• How is the noise level?
• What positive behavior/motivational/discipline strategies do you see?
• How many positives to redirects are observed? (Track positives to redirects)
• Describe how the teacher interacts with the student

Learner:
• How is the student’s on-task behavior compared to other students?
• How successful does the student appear with the task?
Glossary

Alphabetic Principle  The relationship between phonemes and printed letters and
letter patterns.
Assessment  The process of gathering information to make decisions.
Curriculum  The scope and sequence of knowledge and skills that students are
intended to learn. The “what” of teaching.
Curriculum-Based Assessment (CBA)  Any tool used to assess student performance with the curriculum, compared to classmates, to inform instruction. For example, examining a student's score on a math test compared to classmates, or identifying independent reading levels. CBA is a broad category under which CBM and CBE fall.
Curriculum-Based Evaluation (CBE) A systematic and fluid problem-solving
process in which various assessment activities and tasks are used to identify
missing skills and to design and evaluate instruction.
Curriculum-Based Measurement (CBM) A reliable and valid standardized
assessment method used to monitor basic academic skills over time.
Decoding  The process of using letter-sound relationships to read a word. Decoding
involves breaking apart the sounds of a printed word and re-assembling those
sounds to read the word.
Diagnostic Assessment The process of gathering data to determine student
strengths and weaknesses. Diagnostic assessments tease apart broad skills into
discrete skills to pinpoint specific strengths and weaknesses.
Evaluation  The process of gathering and synthesizing data from multiple sources
of information to make decisions.
Evidence-Based Practices  Practices and instructional strategies that have been developed using research and have documented results demonstrating effectiveness (see also "Research-Based Practices").
Fidelity (Treatment Integrity)  The extent to which a plan is implemented as it
was originally designed to be implemented.
Fluency  The ability to read words, phrases, sentences, paragraphs, and passages
with automaticity, accuracy, and prosody (i.e., intonation).
Formative Assessment  A range of formal and informal assessments used during instruction and designed to modify instruction and improve student outcomes.


Instruction  The process of teaching. The “how” of teaching.


Instructional Match  The alignment between a student’s specific skill deficit and
the focus of the instruction.
Instructional Plan A complete description of instruction a student receives.
Includes how, where, when, what subjects and objectives are taught, and with
whom the instruction occurs.
Intervention  Instructional time that a student receives above and beyond Tier 1.
Multi-Tiered System of Supports (MTSS)  A coherent continuum of evidence-
based, system-wide practices to support a rapid response to academic and
behavioral needs, with frequent data-based monitoring for instructional deci-
sion-making to improve student outcomes.
Phoneme  A speech sound that is the smallest unit of language.
Phonemic Awareness  The ability to hear the individual sounds in words and the
understanding that spoken words are made up of individual phonemes.
Phonics  The method of teaching readers to read and pronounce words based on
the alphabetic principle (i.e., on the phonetic sounds associated with letters and
letter patterns).
Problem-Solving Model A 4-step model that includes Problem Identification,
Problem Analysis, Plan Implementation, and Plan Evaluation. The problem-
solving model prevents failure because it focuses on continuous improvement.
Positive Behavioral Interventions and Supports (PBIS) A decision-making
framework that guides selection, integration, and implementation of the best evi-
dence-based behavioral practices for improving important academic and behav-
ior outcomes for all students.
Reading Comprehension  The ability to understand and gain meaning from text.
Research-Based Practices Practices and instructional strategies that are devel-
oped using research and methodology. (See also “Evidence-Based Practices”.)
Response to Intervention (RTI)  A multi-tiered, schoolwide service delivery model that emphasizes problem solving, data-based decision making, and evidence-based interventions.
RIOT/ICEL  Acronyms used to depict an evaluation framework. RIOT depicts
assessment methods: Review, Interview, Observe, Test. ICEL depicts assessment
areas: Instruction, Curriculum, Environment, Learner.
Screening Assessment (Screener)  An assessment or data source used to identify
students who may be at-risk for academic or behavioral difficulties.
Summative Assessment  Assessments that measure the totality of learning after learning has occurred. They often determine a student's status or skills across several grades and/or levels.
Tier  One of three levels of instruction, either academic or behavioral. The tiers com-
bine to provide a continuum of support designed to meet the needs of all students
in a school setting. Tier 1 is designed to meet the needs of at least 80 % of the
student population, and is also referred to as core instruction. Tier 2 is targeted
group instruction designed to meet the needs of 10–15 % of the student population
requiring supplemental instruction. Tier 3 is individualized support designed to
meet the needs of 3–5 % of the student population requiring intensive instruction.
References

Abbott, M., Wills, H., Kamps, D., Greenwood, C. R., Dawson-Bannister, H., Kaufman, J., et al.
(2008). The Kansas reading and behavior center's K-3 prevention model. In C. Greenwood,
T. Kratochwill, & M. Clements (Eds.), Schoolwide prevention models: Lessons learned in
elementary schools (pp. 215–265). New York: Guilford.
AIMSweb. (n. d.). AIMSweb national norms tables. Retrieved from www.aimsweb.com.
Ainsworth, L. B., & Viegut, D. J. (2006). Common formative assessments: How to connect
standards-based instruction and assessment. Thousand Oaks: Corwin Press.
Algozzine, B., Cooke, N., White, R., Helf, S., Algozzine, K., & McClanahan, T. (2008). The North
Carolina reading and behavior center’s K-3 prevention model: Eastside elementary school case
study. In C. Greenwood, T. Kratochwill, & M. Clements (Eds.), Schoolwide prevention models:
Lessons learned in elementary schools (pp. 173–214). New York: The Guilford Press.
Algozzine, B., Wang, C., White, R., Cooke, N., Marr, M. B., Algozzine, K., et al. (2012). Effects of
multi-tier academic and behavior instruction on difficult-to-teach students. Exceptional Children, 79(1), 45–64.
Archer, A. L., & Hughes, C. A. (2010). Explicit instruction: Effective and efficient teaching.
New York: Guilford.
Armbruster, B. B., Lehr, F., & Osborn, J. (2001). Put reading first: The research building blocks
for teaching children to read. National Institute for Literacy, The Partnership for Reading.
Ash, G. E., Kuhn, M. R., & Walpole, S. (2009). Analyzing “inconsistencies” in practice: Teachers’
continued use of round robin reading. Reading & Writing Quarterly, 25, 87–103.
Baker, S. K., Simmons, D. C., & Kame'enui, E. J. (1997). Vocabulary acquisition: Research
bases. In D. C. Simmons & E. J. Kame’enui (Eds.), What reading research tells us about chil-
dren with diverse learning needs: Bases and basics (pp. 183–218). Mahwah: Erlbaum.
Baldi, S., Jin, Y., Skemer, M., Green, P. J., & Herget, D. (2007). Highlights from PISA 2006:
Performance of U.S. 15-year-old students in science and mathematics literacy in an inter-
national context. Washington, DC: National Center for Education Statistics, Institute of
Education Sciences, U.S. Department of Education. Retrieved from http://nces.ed.gov/
pubs2008/2008016.pdf.
Barnes, A. C., & Harlacher, J. E. (2008). Response-to-intervention as a set of principles: Clearing
the confusion. Education & Treatment of Children, 31(1), 417–431.
Barnes, G., Crowe, E., & Schaefer, B. (2007). The cost of teacher turnover in five school districts.
Washington, DC: National Commission on Teaching and America’s Future. Retrieved from
http://nctaf.org/wp-content/uploads/CTTExecutiveSummaryfinal.pdf.
Bear, D. R., Invernizzi, M., Templeton, S., & Johnston, F. (2007). Words their way: Word study for
phonics, vocabulary, and spelling instruction (4th ed.). New Jersey: Prentice Hall.
Begeny, J., & Silber, J. (2006). An examination of group-based treatment packages for increasing
elementary-aged students’ reading fluency. Psychology in the Schools, 43(2), 183.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education:
Principles, Policy & Practice, 5(1), 7–74.


Bohanon, H., Fenning, P., Carney, K. L., Minnis-Kim, M. J., Moroz, K. B., Hicks, K. J., et al.
(2006). Schoolwide application of positive behavior support in an urban high school: A case
study. Journal of Positive Behavior Interventions, 8(3), 131–145.
Braden, J. P., & Shaw, S. R. (2009). Intervention utility of cognitive assessments. Assessment for
Effective Intervention, 34(2), 106–115.
Brady, K., & Woolfson, L. (2008). What teacher factors influence their attributions for chil-
dren’s difficulties in learning? British Journal of Educational Psychology, 78(4), 527–544.
doi: 10.1348/000709907X268570.
Brookhart, S. M. (2003). Developing measurement theory for classroom assessment purposes and
uses. Educational Measurement: Issues and Practice, 22(4), 5–12.
Brown-Chidsey, R., & Steege, M. W. (2010). Response to intervention: Principles and strategies
for effective practice. New York: Guilford.
Brown-Chidsey, R., Bronaugh, L., & McGraw. K. (2009). RTI in the classroom: Guidelines and
recipes for success. New York: Guilford Press.
Buehl, D. (2008). Classroom strategies for interactive learning. International Reading Associa-
tion.
Burns, M. K. (2008). Response to instruction at the secondary level. Principal Leadership, 8(7),
12–15.
Burns, M. K., Appleton, J. L., & Stehouwer, J. D. (2005). Meta-analytic review of responsiveness-
to-intervention research: Examining field-based and research-implemented models. Journal of
Psychoeducational Assessment, 23, 381–394.
Burns, M. K., & Parker, D. C. (n. d.). Using instructional level as a criterion to target reading interven-
tions. Retrieved from http://www.cehd.umn.edu/reading/documents/reports/Burns-Parker-2010.
pdf.
Burns, M. K., Riley-Tillman, T. C., VanDerHeyden, A. K. (2012). RTI applications: Academic and
behavioral interventions (Vol. 1). New York: Guilford Press.
Busch, T. W., Pederson, K., Espin, C. A., & Weissenburger, J. W. (2001). Teaching students with
learning disabilities: Perceptions of a first-year teacher. The Journal of Special Education,
35(2), 92–99.
Carroll, T. G., & Foster, E. (2008). Learning teams: Creating what’s next. Washington, DC:
National Commission on Teaching and America’s Future. Retrieved from http://nctaf.org/
wp-content/uploads/2012/01/NCTAFLearningTeams408REG2.pdf.
Carnine, D. W., Silbert, J., Kame’enui, E. J., & Tarver, S. G. (2009). Direct instruction reading.
New Jersey: Pearson.
Chard, D. J., Vaughn, S., & Tyler, B. (2002). A synthesis of research on effective interventions
for building reading fluency with elementary students with learning disabilities. Journal of
Learning Disabilities, 35(5), 386–406.
Chenoweth, K. (2009). It can be done, it’s being done, and here’s how. Kappan, 91(1), 38–43.
Christ, T. J. (2008). Best practices in problem analysis. In A. Thomas & J. Grimes (Eds.), Best
practices in school psychology V (pp. 159–176). Bethesda: National Association of School
Psychologists.
Christ, T. J. (2010). Curriculum-based measurement of oral reading (CBM-R): Summary and
discussion of recent research-based guidelines for progress monitoring. Workshop presented
at Minnesota Center for Reading Research, 2010 Workshop. Retrieved from http://www.cehd.
umn.edu/reading/events/AugWkshop2010/MCRR-8-11-10-TChrist.pdf.
Christ, T. J., Zopluoglu, C., Long, J. D., & Monaghen, B. D. (2012). Curriculum-based measure-
ment of oral reading: Quality of progress monitoring outcomes. Exceptional Children, 78(3),
356–373.
Clarke, B., & Shinn, M. R. (2002). Test of Early Numeracy (TEN). Administration and scoring
of AIMSweb early numeracy measures for use with AIMSweb. Bloomington: NCS
Pearson, Inc.
Clay, M. M. (1993). An observation study of early literacy achievement. Portsmouth: Heinemann.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale:
Lawrence Erlbaum Associates.
Colón, E. P., & Kranzler, J. H. (2006). Effect of instructions on curriculum-based measurement of
reading. Journal of Psychoeducational Assessment, 24, 318–328.
Conte, K. L., & Hintze, J. M. (2000). The effects of performance feedback and goal setting on oral
reading fluency within curriculum-based measurement. Diagnostique, 25(2), 85–98.
Cossett Lent, R. (2012). Overcoming textbook fatigue: 21st century tools to revitalize teaching and
learning. Alexandria: Association for Supervision & Curriculum Development.
Coyne, M. D., Kame’enui, E. J., & Carnine, D. W. (2010). Effective teaching strategies that
accommodate diverse learners (4th ed.). New Jersey: Pearson.
Crone, D. A., & Horner, R. H. (2003). Building positive behavior support plans in schools: Func-
tional behavioral assessment. New York: The Guilford Press.
Crone, D. A., Hawken, L. S., & Horner, R. H. (2010). Responding to problem behavior in schools:
The behavior education program (2nd ed.). New York: The Guilford Press.
Curtis, R., Van Horne, J. W., Robertson, P., & Karvonen, M. (2010). Outcomes of a school-wide
positive behavioral support program. Professional School Counseling, 13(3), 159–164.
Daly, E. J., & Martens, B. K. (1994). A comparison of three interventions for increasing oral
reading performance: Application of the instructional hierarchy. Journal of Applied Behavior
Analysis, 27(3), 459–469.
Daly, E. J. III, Lentz, F. E., Jr., & Boyer, J. (1996). The instructional hierarchy: A conceptual model
for understanding the effective components of reading interventions. School Psychology Quar-
terly, 11(4), 369–386.
Deno, S. (2003). Developments in curriculum-based measurement. The Journal of Special
Education, 37(3), 184–192.
Deno, S. L., Fuchs, L. S., Marston, D., & Shin, J. (2001). Using curriculum-based measurement to
establish growth standards for students with learning disabilities. School Psychology Review,
30(4), 507–524.
Denton, C. A., Fletcher, J. M., Simos, P. C., Papanicolaou, A. C., & Anthony, J. L. (2007). An
implementation of a tiered intervention model: Reading outcomes and neural correlates. In
D. Haager, J. Klingner, & S. Vaughn (Eds.), Evidence-based reading practices for response to
intervention (pp. 107–137). Baltimore: Brookes.
Denton, C., Vaughn, S., & Fletcher, J. (2003). Bringing research based practice in reading inter-
vention to scale. Learning Disabilities Research and Practice, 18(3), 201–211.
DiLorenzo, K. E., Rody, C. A., Bucholz, J. L., & Brady, M. P. (2011). Teaching letter sound
connections with picture mnemonics: Itchy’s alphabet and early decoding. Preventing School
Failure: Alternative Education for Children and Youth, 55(1), 28–43.
Donovan, J. J., & Radosevich, D. J. (1999). A meta-analytic review of the distribution of practice
effect: Now you see it, now you don’t. Journal of Applied Psychology, 84(5), 795–805.
DuFour, R. (2004). What is a “professional learning community”? Educational Leadership, 63(8),
6–11.
DuFour, R., & Marzano, R. J. (2011). Leaders of learning: How district, school, and classroom
leaders improve student achievement. Bloomington: Solution Tree.
Elkonin, D. B. (1973). U. S. S. R. In J. Downing (Ed.), Comparative reading (pp. 551–579).
New York: Macmillan.
Elmore, R. F. (2000). Building a new structure for school leadership. American Educator, 1–9.
http://www.aft.org/pdfs/americaneducator/winter9900/NewStructureWint99_00.pdf.
Espin, C. A., & Foegen, A. (1996). Validity of general outcome measures for predicting secondary
students’ performance on content-area tasks. Exceptional Children, 62(6), 497–514.
Espin, C. A., Busch, T. W., Shin, J., & Kruschwitz, R. (2001). Curriculum-based measurement in
the content areas: Validity of vocabulary-matching as an indicator of performance in social
studies. Learning Disabilities Research & Practice, 16(3), 142–151.
Fiorello, C. A., Hale, J. & Snyder, L. E. (2006). Cognitive hypothesis testing and response to
intervention with children with reading problems. Psychology in the Schools, 43(8), 835–853.
Flora, S. R. (2000). Praise’s magic reinforcement ratio: Five to one gets the job done. The Behavior
Analyst Today, 1, 64–69.
Fisher, D., Grant, M., Frey, N., & Johnson, C. (2008). Taking formative assessment schoolwide.
Educational Leadership, 65(4), 64–68.
Fixsen, D., Naoom, S., Blase, K., & Wallace, F. (2007, Winter/Spring). Implementation: The
missing link between research and practice. The APSAC Advisor, 4–10.
Fleischman, H. L., Hopstock, P. J., Pelczar, M. P., & Shelley, B. E. (2010). Highlights from PISA
2009: Performance of U.S. 15-year-old students in reading, mathematics, and science literacy
in an international context. Washington, DC: U.S. Government Printing Office. Retrieved
from http://nces.ed.gov/pubs2011/2011004.pdf.
Fletcher, J. M., & Lyon, G. R. (1998). Reading: A research-based approach. In W. M. Evers (Ed.),
What’s gone wrong in America’s classrooms (pp. 49–90). Stanford: Hoover Institution Press.
Foegen, A., Jiban, C., & Deno, S. (2007). Progress monitoring measures in mathematics: A review
of the literature. The Journal of Special Education, 41(2), 121–139.
Fuchs, L. S., Fuchs, D., & Maxwell, L. (1988). The validity of informal comprehension measures.
Remedial and Special Education, 9, 20–28.
Gable, R. A., Hester, P. H., Rock, M. L., & Hughes, K. G. (2009). Back to basics: Rules, praise,
ignoring, and reprimands revisited. Intervention in School and Clinic, 44(4), 195–205.
Gibbons, K., & Silberglitt, B. (2008). Best practices in evaluating psychoeducational services
based on student outcome data. In A. Thomas & J. Grimes (Eds.), Best practices in school
psychology V (pp. 2103–2116). Bethesda: NASP Publications.
Gibson, C., & Jung, K. (2002). Historical census statistics on population totals by race, 1790
to 1990, and by Hispanic origin, 1970 to 1990, for the United States, regions, divisions, and
states. Washington, DC: US Census Bureau. Retrieved from http://www.census.gov/population/www/documentation/twps0056/twps0056.html.
Goddard, Y. L., Goddard, R. D., & Tschannen-Moran, M. (2007). A theoretical and empirical
investigation of teacher collaboration for school improvement and student achievement in
public elementary schools. Teachers College Record, 109(4), 877–896.
Gonzalez, L., Stallone Brown, M., & Slate, J. R. (2008). Teachers who left the teaching profession:
A qualitative understanding. The Qualitative Report, 13(1), 1–11.
Good, R. H., Gruba, J., & Kaminski, R. (2002). Best practice in using Dynamic Indicators of
Basic Early Literacy Skills (DIBELS) in an outcomes-driven model. In A. Thomas & J. Grimes
(Eds.), Best practices in school psychology IV (pp. 679–699). Bethesda: National Association
of School Psychologists.
Good, R. G., & Kaminski, R. A. (2011). DIBELS next assessment manual. Eugene: Dynamic
Measurement Group.
Good, R. G., Simmons, D. C., & Smith, S. (1998). The importance and decision-making utility
of a continuum of fluency-based indicators of foundational reading skills for third-grade high-
stakes outcomes. Scientific Studies of Reading, 5(3), 257–288.
Graden, J. L., Stollar, S. A., & Poth, R. L. (2007). The Ohio integrated systems model: Overview
and lessons learned. In S. R. Jimerson, M. K. Burns, & A. M. VanDerHeyden (Eds.), Handbook
of response to intervention (pp. 288–299). New York: Springer.
Greenwood, C. R., Kratochwill, T. R., & Clements, M. (2008). Schoolwide prevention models:
Lessons learned in elementary schools. New York: Guilford Press.
Gresham, F. M., & Witt, J. C. (1997). Utility of intelligence tests for treatment planning, clas-
sification, and placement decisions: Recent empirical findings and future directions. School
Psychology Quarterly, 12(3), 249–267.
Griffin, J., & Hatterdorf, R. (2010). Successful RTI implementation in middle schools. Perspec-
tives on Language and Literacy, 36(2), 30–34.
Griffiths, A., VanDerHeyden, A. M., Parson, L. B., & Burns, M. K. (2006). Practical applications
of response-to-intervention research. Assessment for Effective Intervention, 32(1), 50–57.
Griffiths, A., Parson, L. B., Burns, M. K., VanDerHeyden, A., & Tilly, W. D. (2007). Response
to intervention: Research for practice. Alexandria: National Association of State Directors of
Special Education.
Grissmer, D. W., & Nataraj Kirby, S. (1987). Teacher attrition: The uphill climb to staff the nation’s
schools. Santa Monica: The RAND Corporation.
Gunter, P. L., Reffel, J., Barnett, C. A., Lee, J. L., & Patrick, J. (2004). Academic response rates in
elementary-school classrooms. Education & Treatment of Children, 27(2), 105–113.
Haager, D., Klingner, J., & Vaughn, S. (2007). Evidence-based reading practices for response to
intervention. Baltimore: Brookes Publishing.
Hall, T. (2002). Explicit instruction. Wakefield: National Center on Accessing the General Curric-
ulum. Retrieved from http://www.cast.org/publications/ncac/ncac_explicit.html.
Harlacher, J. E., Nelson Walker, N. J., & Sanford, A. K. (2010). The “I” in RTI: Research-based
factors for intensifying instruction. Teaching Exceptional Children, 42(6), 30–38.
Haring, N. G., Lovitt, T. C., Eaton, M. D., & Hansen, C. L. (1978). The fourth R: Research in the
classroom. Columbus: Charles E. Merrill Publishing Co.
Harn, B. A., Kame’enui, E. J., & Simmons, D. C. (2007). The nature and role of the third tier in
a prevention model for kindergarten students. In D. Haager, J. Klingner, & S. Vaughn (Eds.),
Evidence-based reading practices for response to intervention (pp. 161–184). Baltimore:
Brookes.
Hart, B., & Risley, R. T. (1995). Meaningful differences in the everyday experience of young
American children. Baltimore: Brookes.
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement.
Florence: Routledge.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Education Research, 77(1),
81–112.
Hawken, L. S., Adolphson, S. L., Macleod, K. S., & Schumann, J. (2009). Secondary-tier inter-
ventions and supports. In W. Sailor, G. Dunlap, G. Sugai, & R. Horner (Eds.), Handbook of
positive behavior support (pp. 395–420). New York: Springer.
Haydon, T., Conroy, M. A., Scott, T. M., Sindelar, P. T., Barber, B. R., & Orlando, A. (2010). A
comparison of three types of opportunities to respond on student academic and social behav-
iors. Journal of Emotional and Behavioral Disorders, 18(1), 27–40.
Haydon, T., Mancil, G. R., & Van Loan, C. (2009). Using opportunities to respond in a general
education classroom: A case study. Education and Treatment of Children, 32(2), 267–278.
Hintze, J. M., & Conte, K. L. (1997). Oral reading fluency and authentic reading material: Crite-
rion validity of the technical features of CBM survey-level assessment. School Psychology
Review, 26(4), 535–553.
Hoagwood, K., & Johnson, J. (2003). School psychology: A public health framework I. From
evidence-based practices to evidence-based policies. Journal of School Psychology, 41, 3–21.
Hock, M., & Mellard, D. (2005). Reading comprehension strategies for adult literacy outcomes.
Journal of Adolescent & Adult Literacy, 49(3), 182–200.
Horner, R. H., Sugai, G., Todd, A. W., & Lewis-Palmer, T. (2005) School-wide positive behavior
support: An alternative approach to discipline in schools. In L. M. Bambara & L. Kern (Eds.),
Individualized supports for students with problem behaviors (pp. 359–390). New York:
Guilford Press.
Hosp, J. L. (2008). Best practices in aligning academic assessment with instruction. In A. Thomas
& J. Grimes (Eds.), Best practices in school psychology V (pp. 363–376). Bethesda: NASP
Publications.
Hosp, M. K., Hosp, J. L., & Howell, K. W. (2006). The ABCs of CBM. New York: The Guilford
Press.
Hosp, M. K., & MacConnell, K. L. (2008). Best practices in curriculum-based evaluation in early
reading. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 377–396).
Bethesda: National Association of School Psychologists.
Howell, K. W. (2010). FAQs: Patterns of strengths and weaknesses instruction (Aptitude by treat-
ment interaction). [Personal writing]. Retrieved from http://www.wce.wwu.edu/Depts/SPED/
Forms/Resources%20and%20Readings/Learning%20Styles%20Instruction%204-2-10.pdf.
Howell, K. W., & Nolet, V. (2000). Curriculum-based evaluation: Teaching and decision making.
Belmont: Wadsworth.
Howell, K. W., Hosp, J. L., & Kurns, S. (2008). Best practices in curriculum-based evaluation. In
A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 349–362). Bethesda:
NASP Publications.
Ikeda, M. J., Grimes, J., Tilly, W. D., III, Allison, R., Kurns, S., & Stumme, J. (2002). Implementing
an intervention-based approach to service delivery: A case example. In M. Shinn, H. Walker,
& G. Stoner (Eds.), Interventions for academic and behavioral problems II: Preventative and
remedial approaches (pp. 53–69). Bethesda: National Association of School Psychologists.
Intervention Central. (n. d.). The instructional hierarchy: Linking stages of learning to effective
instructional techniques. Retrieved from http://www.interventioncentral.org/academic-interventions/general-academic/instructional-hierarchy-linking-stages-learning-effective-in.
Jenkins, J., & Larson, K. (1979). Evaluation of error-correction procedures for oral reading.
Journal of Special Education, 13, 145–156.
Jenkins, J. R., Larson, K., & Fleisher, L. (1983). Effects of error correction on word recognition
and reading comprehension. Learning Disability Quarterly, 6, 139–145.
Jimerson, S. R., Burns, M. K., & VanDerHeyden, A. (2007). Handbook of response to interven-
tion: The science and practice of assessment and intervention. New York: Springer.
Johnson, E. S., & Smith, L. (2008). Implementation of response to intervention at middle schools:
Challenges and potential benefits. Teaching Exceptional Children, 40(3), 46–52.
Johnston, P. H. (2011). Response to intervention in literacy. The Elementary School Journal,
111(4), 511–534.
Joseph, L. M. (2000). Using word boxes as a large group phonics approach in a first grade class-
room. Reading Horizons, 41(2), 117–127.
Kaiser, A. (2011). Beginning teacher attrition and mobility: Results from the first through third
waves of the 2007–08 beginning teacher longitudinal study. Washington, DC: US Department
of Education, National Center for Education Statistics. Retrieved from http://nces.ed.gov/
pubs2011/2011318.pdf.
Kame’enui, E. J., & Simmons, D. C. (1990). Designing instructional strategies: The prevention of
learning problems. Columbus: Merrill Publishing Company.
Kaminski, R., Cummings, K. D., Powell-Smith, K. A., & Good, R. H. (2008). Best practices in
using dynamic indicators of basic early literacy skills for formative assessment and evaluation.
In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 1181–1204).
Bethesda: National Association of School Psychologists.
Kansas State Department of Education. (2011). Kansas multi-tier system of supports: Collab-
orative team workbook reading. Topeka: Kansas MTSS Project, Kansas Technical Assistance
System Network.
Kansas Multi-Tier System of Supports. (n. d.). Overview. Retrieved from http://www.kansasmtss.
org.
Kaplan, R. M., & Saccuzzo, D. P. (2008). Psychological testing: Principles, applications, and
issues. Belmont: Wadsworth Publishing.
Keigher, A. (2010). Teacher attrition and mobility: Results from the 2008–09 teacher follow-up
survey. Washington, DC: US Department of Education, National Center for Education Statis-
tics. Retrieved from http://nces.ed.gov/pubs2010/2010353.pdf.
Kennedy, C. H. (2005). Single-case design for educational research. Boston: Allyn and Bacon.
Kim, J. (2011). Relationships among and between ELL status, demographic characteristics,
enrollment history, and school persistence. Los Angeles: University of California, National
Center for Research on Evaluation, Standards, and Student Testing (CRESST). Retrieved from
http://www.cse.ucla.edu/products/reports/R810.pdf.
Kingston, N., & Nash, B. (2011). Formative assessment: A meta-analysis and call for research.
Educational Measurement: Issues and Practice, 30(4), 28–37.
Klingner, J. K. (2004). Assessing reading comprehension. Assessment for Effective Intervention, 29,
59–70. doi: 10.1177/073724770402900408.
Kuhn, M. R., & Stahl, S. A. (2003). Fluency: A review of developmental and remedial practices.
Journal of Educational Psychology, 95(1), 3–21.
Lafferty, A. E., Gray, S., & Wilcox, M. J. (2005). Teaching alphabetic knowledge to pre-school chil-
dren with developmental language delay and typical language development. Child Language
Teaching and Therapy, 21(3), 263–277. doi: 10.1191/0265659005ct292oa.
Lague, K. M., & Wilson, K. (2010). Using peer tutors to improve reading comprehension. Kappa
Delta Pi, 46(4), 182–186.
Landers, E., Alter, P., & Servilio, K. (2008). Students’ challenging behavior and teachers’ job
satisfaction. Beyond Behavior, 18(1), 26–33.
Lemke, M., Sen, A., Pahlke, E., Partelow, L., Miller, D., Williams, T., et al. (2004). International
outcomes of learning in mathematics literacy and problem solving: PISA 2003 results from the
U.S. perspective. Washington, DC: US Department of Education, National Center for Educa-
tion Statistics.
Lenz, B. K., & Hughes, C. A. (1990). A word identification strategy for adolescents with learning
disabilities. Journal of Learning Disabilities, 23(3), 149–163. doi: 10.1177/002221949002300304.
LeVasseur, V. M., Macaruso, P., & Shankweiler, D. (2008). Promoting gains in reading fluency:
A comparison of three approaches. Reading and Writing: An Interdisciplinary Journal, 21(3),
205–230.
Liff Manz, S. (2002). A strategy for previewing textbooks: Teaching readers to become THIEVES.
The Reading Teacher, 55(5), 434–435.
Lo, Y., Cooke, N. L., & Starling, A. L. (2011). Using a repeated reading program to improve
generalization of oral reading fluency. Education and Treatment of Children, 34(1), 115–140.
Lovelace, S., & Stewart, S. R. (2007). Increasing print awareness in preschoolers with language
impairment using non-evocative print referencing. Language, Speech, and Hearing Services in
Schools, 38, 16–30. doi: 0161-1461/06/3801-0016.
Malone, R. A., & McLaughlin, T. F. (1997). The effects of reciprocal peer tutoring with a group
contingency on quiz performance in vocabulary with seventh- and eighth-grade students.
Behavioral Interventions, 12, 27–40.
Mandinach, E. B. (2012). A perfect time for data use: Using data-driven decision making to inform
practice. Educational Psychologist, 47(2), 71–85.
Marchand-Martella, N. E., Ruby, S. F., & Martella, R. C. (2007). Intensifying reading instruction for
students within a three-tier model: Standard-protocol and problem solving approaches within
a Response-to-Intervention (RTI) system. Teaching Exceptional Children Plus, 3(5). Retrieved
from http://journals.cec.sped.org/cgi/viewcontent.cgi?article=1313&context=tecplus.
Marzano, R. J. (2010). Formative assessment and standards-based grading. Bloomington:
Marzano Research Laboratory.
Marzano, R. J., & Pickering, D. J. (2005). Building academic vocabulary: Teacher’s manual.
Alexandria: Association for Supervision and Curriculum Development.
Maslanka, P., & Joseph, L. M. (2002). A comparison of two phonological awareness tech-
niques between samples of preschool children. Reading Psychology, 23(4), 271–288.
doi:10.1080/713775284.
McCandliss, B., Beck, I. L., Sandak, R., & Perfetti, C. (2003). Focusing attention on decoding for
children with poor reading skills: Design and preliminary tests of the word building interven-
tion. Scientific Studies of Reading, 7(1), 75–104. doi: 10.1207/S1532799XSSR0701_05.
McCarthy, P. A. (2008). Using sound boxes systematically to develop phonemic awareness. The
Reading Teacher, 62(4), 346–349. doi: 10.1598/RT.62.4.7.
McCurdy, B. L., Mannella, M. C., & Norris, E. (2003). Positive behavior support in urban schools:
Can we prevent the escalation of antisocial behavior? Journal of Positive Behavior Interven-
tions, 5(3), 158–170.
McDonald Connor, S., Piasta, S. B., Fishman, B., Glasney, S., Schatschneider, C., Crowe, E., et al.,
(2009). Individualizing student instruction precisely: Effects of child x instruction interactions
on first graders’ literacy development. Child Development, 80(1), 77–100.
McIntosh, K., Goodman, S., & Bohanon, H. (2010). Toward true integration of academic and
behavior response to interventions systems: Part one: Tier 1 support. Communiqué, 39(2), 1,
14–16.
McIntosh, K., Horner, R. H., Chard, D. J., Boland, J. B., & Good, R. G. (2006). The use of reading
and behavior screening measures to predict nonresponse to school-wide positive behavior
support: A longitudinal analysis. School Psychology Review, 35(2), 275–291.
McGlinchey, M. T., & Hixson, M. D. (2004). Using curriculum-based measurement to predict
performance on state assessment in reading. School Psychology Review, 33(2), 193–203.
Merrell, K. W., Ervin, R. A., & Gimpel, G. A. (2006). School psychology for the 21st century. New
York: The Guildford Press.
Miura Wayman, M., Wallace, T., Ives Wiley, H., Tichá, R., & Espin, C. A. (2007). Literature
synthesis on curriculum-based measurement in reading. The Journal of Special Education,
41(2), 85–120.
Moats, L. (1999). Teaching reading IS rocket science: What expert teachers of reading should
know and be able to do. American Federation of Teachers. Retrieved from http://www.louisamoats.com/Assets/Reading.is.Rocket.Science.pdf.
Musti-Rao, S., Hawkins, R. O., & Barkley, E. A. (2009). Effects of repeated readings on the oral
reading fluency of urban fourth-grade students: Implications for practice. Preventing School
Failure, 54(1), 12–23.
Meyer, L. A. (1982). The relative effects of word-analysis and word-supply correction procedures
with poor readers during word-attack training. Reading Research Quarterly, 4, 544–555.
National Association of State Directors of Special Education (NASDSE). (2005). Response to
intervention: Policy considerations and implementation. Alexandria: Author.
National Center for Educational Statistics. (2011a). Reading 2011: National assessment of educa-
tional progress at grades 4 and 8. Washington, DC: National Center for Education Statistics,
Institute of Education Sciences, U.S. Department of Education. Retrieved from http://nces.
ed.gov/nationsreportcard/pdf/main2011/2012457.pdf.
National Center for Educational Statistics. (2011b). Math 2011: National assessment of educa-
tional progress at grades 4 and 8. Washington, DC: National Center for Education Statistics,
Institute of Education Sciences, U.S. Department of Education. Retrieved from http://nces.
ed.gov/nationsreportcard/pdf/main2011/2012458.pdf.
National Center for Educational Statistics. (2011c). Digest of education statistics: 2011. Washington, DC: US Department of Education, Institute of Education Sciences. Retrieved from http://nces.ed.gov/programs/digest/d11.
National Institute of Child Health and Human Development (NICHHD). (2000). Report of the
national reading panel. Teaching children to read: an evidence-based assessment of the scien-
tific research literature on reading and its implications for reading instruction. Retrieved from
http://www.nichd.nih.gov/publications/nrp/smallbook.htm.
National Research Council. (2000). How people learn: Brain, mind, experience, and school.
Washington, DC: National Academy Press.
Nelson, J. M., & Machek, G. R. (2007). A survey of training, practice, and competence in reading
assessment and intervention. School Psychology Review, 36(2), 311–327.
Netzel, D. M., & Eber, L. (2003). Shifting from reactive to proactive discipline in an urban school
district: A change of focus through PBIS implementation. Journal of Positive Behavior Inter-
ventions, 5(2), 71–79.
Newmann, F. M., Smith, B., Allensworth, E., & Bryk, A. S. (2001). Instructional program coher-
ence: What it is and why it should guide school improvement. Educational Evaluation and
Policy Analysis, 23(4), 297–321. doi: 10.3102/01623737023004297.
Office of Special Education Programs (OSEP). (2011). 30th annual report to Congress on the
implementation of the Individuals with Disabilities Education Act, 2008. Washington, DC: US
Department of Education, Office of Special Education and Rehabilitative Services, Office of
Special Education Programs.
Organisation for Economic Co-Operation and Development (OECD). (2001). Knowledge and
skills for life: First results from the OECD Programme for International Student Assess-
ment (PISA) 2000. OECD. Retrieved from http://www.oecd.org/edu/preschoolandschool/programmeforinternationalstudentassessmentpisa/33691596.pdf.
Ortiz, S. O., Flanagan, D. P., & Dynda, A. M. (2008). Best practices in working with cultur-
ally diverse children and families. In A. Thomas & J. Grimes (Eds.), Best practices in school
psychology V (pp. 1721–1738). Bethesda: National Association of School Psychologists.
Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2005). Learning styles: Concepts and evidence.
Psychological Science in the Public Interest, 9(3), 105–119.
Pearson, Inc. (2012a). AIMSweb: Test of early literacy administration and scoring guide. Bloom-
ington: NCS Pearson, Inc. Retrieved from http://www.aimsweb.com/wp-content/uploads/
TEL_Admin_Scoring-Guide_2.0.pdf.
Pearson, Inc. (2012b). AIMSweb: Progress monitoring guide. Bloomington: NCS Pearson, Inc.
Perfetti, C., & Adlof, S. M. (2012). Reading comprehension: A conceptual framework from
word meaning to text meaning. In J. Sabatini, E. Albro, & T. O’Reilly (Eds.), Measuring up:
Advances in how we assess reading ability. Lanham: R & L Education.
Peshak George, H., Kincaid, D., & Pollard-Sage, J. (2009). Primary-tier interventions and supports.
In W. Sailor, G. Dunlap, G. Sugai, & R. Horner (Eds.), Handbook of positive behavior support
(pp. 375–394). New York: Springer.
Pyle, N., & Vaughn, S. (2012). Remediating reading difficulties in a response to intervention
model with secondary students. Psychology in the Schools, 49(3), 273–284. doi: 10.1002/pits.
Partnership for Reading. (2001). Fluency: An introduction. Retrieved from http://www.readingrockets.org/article/3415.
Phillips, B. M., Clancy-Menchetti, J., & Lonigan, C. J. (2008). Successful phonological awareness
instruction with preschool children. Topics in Early Childhood Special Education, 28(1), 3–17.
doi: 10.1177/0271121407313813.
Rasinski, T. V. (1994). Developing syntactic sensitivity in reading through phrase-cued texts.
Intervention in School and Clinic, 29, 165–168. doi: 10.1177/105345129402900307.
Rathvon, N. (2008). Effective school interventions (2nd ed.). New York: The Guilford Press.
Reitsma, P. (1983). Printed word learning in beginning readers. Journal of Experimental Child
Psychology, 36, 321–339.
Reschly, D. J. (2008). School psychology paradigm shift and beyond. In A. Thomas & J. Grimes
(Eds.), Best practices in school psychology V (pp. 3–15). Bethesda: National Association of
School Psychologists.
Restori, A. F., Gresham, F. M., & Cook, C. R. (2008). Old habits die hard: Past and current issues
pertaining to Response-to-Intervention. The California School Psychologist, 13, 67–78.
Rhodes, R., Ochoa, S. H., & Ortiz, S. O. (2005). Comprehensive assessment of culturally and
linguistically diverse students: A practical approach. New York: Guilford.
Rolison, M. A., & Medway, F. J. (1985). Teachers' expectations and attributions for student
achievement: Effects of label, performance pattern, and special education intervention. Amer-
ican Educational Research Journal, 22(4), 561–573.
Sáenz, L. M., Fuchs, L. S., & Fuchs, D. (2005). Peer-assisted learning strategies for English
language learners with learning disabilities. Exceptional Children, 71(3), 231–247.
Samson, J. F., & Lesaux, N. K. (2009). Language-minority learners in special education: Rates
and predictors of identification for services. Journal of Learning Disabilities, 42(2), 148–162.
Schmoker, M. J. (2006). Results now: How we can achieve unprecedented improvement in teaching
and learning. Alexandria: Association for Supervision & Curriculum Development.
Scott, T. M., Anderson, C., Mancil, R., & Alter, P. (2009). Function-based supports for individual
students in school settings. In W. Sailor, G. Dunlap, G. Sugai, & R. Horner (Eds.), Handbook
of positive behavior support (pp. 421–442). New York: Springer.
Shapiro, E. S. (2008). Best practices in setting progress monitoring goals for academic skill
improvement. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp.
141–158). Bethesda: National Association of School Psychologists.
Shinn, M. R. (2002a). Best practices in using curriculum-based measurement in a problem solving
model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology IV (pp. 671–697).
Bethesda: National Association of School Psychologists.
Shinn, M. R. (2002b). AIMSweb training workbook: Strategies for writing individualized goals in
general curriculum and more frequent formative evaluation. Eden Prairie: Edformation, Inc.
Shinn, M. R. (2008). Best practices in curriculum-based measurement and its use in a problem-
solving model. In A. Thomas & J. Grimes (Eds.), Best practices in school psychology V
(pp. 243–262). Bethesda: National Association of School Psychologists.
Shinn, M. R., & Shinn, M. M. (2002). AIMSweb training workbook. Administration and scoring
of reading curriculum-based measurement (R-CBM) in general outcome measurement.
Eden Prairie: Edformation Inc. Retrieved from http://aimsweb.com/uploads/pdfs/Manuals/
RCBM%20Manual.pdf.
Silberglitt, B., Burns, M. K., Madyun, N. H., & Lail, K. E. (2006). Relationship of reading fluency
assessment data with state accountability test scores: A longitudinal comparison of grade
levels. Psychology in the Schools, 43(5), 527–535. doi: 10.1002/pits.20175.
Silberglitt, B., & Hintze, J. M. (2007). How much growth can we expect? A conditional analysis of
R-CBM growth rates by level of performance. Exceptional Children, 74, 71–84.
Simonsen, B., Fairbanks, S., Briesch, A., & Sugai, G. (2006). Positive behavior support classroom
management: Self-assessment revised. OSEP Positive Behavioral Interventions and Support.
US Office of Special Education Programs.
Simonsen, B., Myers, D., & DeLuca, C. (2010). Teaching teachers to use prompts, opportunities to
respond, and specific praise. Teacher Education and Special Education, 33(4), 300–318. doi:
10.1177/0888406409359905.
Singh, N. N. (1990). Effects of two error-correction procedures on oral reading errors. Behavior
Modification, 14(2), 188–199.
Singh, N. N., & Singh, J. (1986). Increasing oral reading proficiency. Behavior Modification,
10(1), 115–130.
Stecker, P. M., Fuchs, L. S., & Fuchs, D. (2005). Using curriculum-based measurement to improve
student achievement: Review of research. Psychology in the Schools, 42(8), 795–819.
Stichter, J. P., Lewis, T. J., Whittaker, T., Richter, M., Johnson, N. & Trussel, R. (2009). Assessing
teacher use of opportunities to respond and effective classroom management strategies within
inclusive classrooms: Comparisons among high and low risk elementary schools. Journal of
Positive Behavior Interventions, 11, 68–81.
Stiggins, R., & Chappuis, J. (2006). What a difference a word makes. Assessment for learning
rather than assessment of learning helps students succeed. Journal of Staff Development, 27(1),
10–14.
Stiggins, R., & DuFour, R. (2009). Maximizing the power of formative assessments. Phi Delta
Kappan, 90(9), 640–644.
Stuebing, K. K., Barth, A. E., Molfese, P. J., Weiss, B. & Fletcher, J. M. (2009). IQ is not strongly
related to response to reading instruction: A meta-analytic interpretation. Exceptional Children,
76(1), 31–51.
Sugai, G., & Horner, R. (2006). A promising approach for expanding and sustaining school-wide
positive behavior support. School Psychology Review, 35(2), 245–259.
Sugai, G., & Horner, R. (2009). Defining and describing schoolwide positive behavior support. In
W. Sailor, G. Dunlap, G. Sugai, & R. Horner (Eds.), Handbook of positive behavior support
(pp. 307–326). New York: Springer.
Sullivan, A. L. (2011). Disproportionality in special education identification and placement of
English language learners. Exceptional Children, 77(3), 317–334.
Sutherland, K., Alder, N., & Gunter, P. L. (2003). The effect of varying rates of opportunities to
respond to academic requests on the classroom behavior of students with EBD. Journal of
Emotional and Behavioral Disorders, 11, 239–248.
Sutherland, K. S., Wehby, J. H., & Copeland, S. R. (2000). Effect of varying rates of behavior-
specific praise on the on-task behavior of students with EBD. Journal of Emotional and Behav-
ioral Disorders, 8(1), 2–8.
Taylor-Greene, S., Brown, D., Nelson, L., Longton, J., Gassman, Cohen, J., et al. (1997). School-
wide behavioral support: Starting the year off right. Journal of Behavioral Education, 7(1),
99–112.
Therrien, W. J. (2004). Fluency and comprehension gains as a result of repeated reading. Remedial
and Special Education, 25(4), 252–261.
Therrien, W. J., Kirk, J. F., & Woods-Groves, S. (2012). Comparison of a reading fluency interven-
tion with and without passage repetition on reading achievement. Remedial and Special Educa-
tion, 33(5), 309–319.
Thorndike, R. M., & Thorndike-Christ, T. (2010). Measurement and evaluation in psychology and
education (8th ed.). New York: Pearson.
Tilly, W. D., III. (2008). The evolution of school psychology to science-based practice: Problem-
solving and the three-tiered model. In A. Thomas & J. Grimes (Eds.), Best practices in school
psychology V (pp. 17–35). Bethesda: National Association of School Psychologists.
Tomlinson, C. A., & Britt, S. (2012). Common core standards: Where does differentiation fit?
Webinar available at www.ascd.org/professional-development/webinars/tomlinson-and-britt-
webinar.aspx.
Torgesen, J. K. (2000). Individual differences in response to early interventions in reading: The
lingering problem of treatment resisters. Learning Disabilities Research & Practice, 15(1),
55–64.
Treptow, M. A., Burns, M. K., & McComas, J. J. (2007). Reading at the frustration, instructional,
and independent levels: Effects on student time on task and comprehension. School Psychology
Review, 36, 159–166.
UNICEF. (2002). A league table of educational disadvantage in rich nations, Innocenti Report
Card No. 4. Florence: UNICEF Innocenti Research Centre.
US Census Bureau. (2011a). Overview of race and Hispanic origin: 2010. US Department of Commerce, Economics and Statistics Administration. Retrieved from http://www.census.gov/prod/cen2010/briefs/c2010br-02.pdf.
US Census Bureau. (2011b). Living arrangements of children: 2009. US Department of Commerce, Economics and Statistics Administration. Retrieved from https://www.census.gov/prod/2011pubs/p70-126.pdf.
US Department of Education. (2012). Digest of education statistics, 2011. National Center for Education Statistics. Retrieved from http://nces.ed.gov/fastfacts/display.asp?id=64.
VanDerHeyden, A. M., & Witt, J. C. (2008). Best practices in can’t do/won’t do assessment. In A.
Thomas & J. Grimes (Eds.), Best practices in school psychology V (pp. 131–140). Bethesda:
National Association of School Psychologists.
VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multi-year evaluation of the effects
of a response to intervention (RTI) model on identification of children for special education.
Journal of School Psychology, 45, 225–256.
Vaughn, S., Cirino, P. T., Wanzek, J., Wexler, J., Fletcher, J. M., Denton, C. D., et  al. (2010).
Response to intervention for middle school students with reading difficulties: Effects of a
primary and secondary intervention. School Psychology Review, 39(1), 2–21.
Vaughn, S., & Fletcher, J. (2010). Thoughts on rethinking response to intervention with secondary
students. School Psychology Review, 39(2), 296–299.
Vaughn, S., & Kettman Klinger, S. (1999). Teaching reading comprehension through collaborative
strategic reading. Intervention in School and Clinic, 34(5), 284–292.
Vaughn, S., & Linan-Thompson, A. (2004). Research-based methods of reading instruction.
Grades K-3. Alexandria: Association for Supervision and Curriculum Development.
Vaughn, S., Linan-Thompson, S., & Hickman, P. (2003). Response to instruction as a means of
identifying students with reading/learning disabilities. Exceptional Children, 69(4), 391–409.
Vaughn, S., Wanzek, J. S., & Murray, G. (2012). Intensive interventions for students struggling in
reading and mathematics: A practice guide. Portsmouth: RMC Research Corporation, Center
on Instruction.
Vaughn, S., Wanzek, J., Woodruff, A. L., & Linan-Thompson, S. (2007). Prevention and early
identification of students with reading disabilities. In D. Haager, J. Klingner, & S. Vaughn
(Eds.), Evidence-based reading practices for response to intervention (pp. 11–27). Baltimore:
Brookes.
Viadero, D. (2011, October 19). Dropouts: Trends in high school dropout and completion rates in
the United States: 1972–2009. Education Week, 31(8), 4.
Walpole, S., & McKenna, M. C. (2007). Differentiated reading instruction: Strategies for the
primary grades. New York: The Guilford Press.
Walsh, K., Glaser, D., & Dunne Wilcox, D. (2006). What education schools aren’t teaching about
reading and what elementary teachers aren't learning. National Council on Teacher Quality
(NCTQ).
Watkins, C. L., & Slocum, T. A. (2004). The components of direct instruction. Journal of Direct
Instruction, 3, 75–110.
White, R. B., Polly, D., & Audette, R. H. (2012). A case analysis of an elementary school’s imple-
mentation of response to intervention. Journal of Research in Childhood Education, 26, 73–90.
Wilkinson, L. A. (2006). Monitoring treatment integrity: An alternative to the ‘consult and hope’
strategy in school-based behavioural consultation. School Psychology International, 27(4),
426–438. doi: 10.1177/0143034306070428.
Wolfe, I. S. (2005). Fifty percent of new teachers leave in five years. The Total View. Retrieved
from http://www.super-solutions.com/teachershortages.asp#axzz1NE7Bf2gA.
Wolery, M. (2011). Intervention research: The importance of fidelity measurement. Topics in Early
Childhood Special Education, 31(3), 155–157.
Wong, H. K., & Wong, R. T. (2001). The first days of school: How to be an effective teacher.
Mountain View: Harry K. Wong Publications.
Woodcock, S., & Vialle, W. (2011). Are we exacerbating students’ learning disabilities? An investiga-
tion of preservice teachers’ attributions of the educational outcomes of students with learning
disabilities. Annals of Dyslexia, 61, 223–241. doi: 10.1007/s11811-011-0058-9.
Yates, H. M., & Collins, V. K. (2006). How one school made the pieces fit. Journal of Staff Develop-
ment, 27(4), 30–35.
Yell, M. L., & Stecker, P. M. (2003). Developing legally correct and educationally meaningful
IEPs, using curriculum-based measurement. Assessment for Effective Intervention, 28, 73–88.
doi: 10.1177/073724770302800308.
Yoon, K. S., Duncan, T., Lee, S. W. Y., Scarloss, B., & Shapley, K. (2007). Reviewing the evidence
on how teacher professional development affects student achievement. Washington, DC: U.S.
Department of Education, Institute of Education Sciences, National Center for Education Eval-
uation and Regional Assistance, Regional Educational Laboratory Southwest. http://ies.ed.gov/
ncee/edlabs.
Ysseldyke, J., & Christenson, S. L. (1988). Linking assessment to intervention. In J. L. Graden,
J. E. Zins, & M. J. Curtis (Eds.), Alternative educational delivery systems: Enhancing instruc-
tional options for all students (pp. 91–110). Washington: National Association of School
Psychologists.
Ysseldyke, J., Burns, M. K., Scholin, S. E., & Parker, D. C. (2010). Instructionally valid assess-
ment within response to intervention. Teaching Exceptional Children, 42(4), 54–61.
Zhang, D., & Katsiyannis, A. (2002). Minority representation in special education: A persistent
challenge. Remedial and Special Education, 23(3), 180–187.
Index

A
Academic vocabulary  196, 214
  content specific and  214
Achievement  9, 23, 58
  academic  27
  and outcomes  19
  gap  70
  data for student  10
  outcomes  40
  positive effect of student  267
  positive gains in  25
  state level test  192
  student  17, 254, 256, 267
  teacher and student  17
  test  266, 268
Aim line  249, 253, 259
  essential components  247
Alphabetic knowledge  135, 138–140, 150
  assessment of  153, 155, 156, 163
Alphabetic principle  59, 97, 131, 135, 138, 139, 143, 150
Alterable variables  2, 20, 26, 40, 52, 65
  focusing on  52, 53
Assessment system
  comprehensive  34, 37, 38
Assumptions behind CBE  47, 48

B
Background knowledge  203
Big five areas of reading  58, 60

C
Content-specific vocabulary  197
  academic and  205
  resources for  196
Curriculum-based evaluation (CBE)  2, 20
  definition of  47
  framework for use  44
Curriculum-Based Measurement (CBM)  36, 47
  alphabetic knowledge  150
  characteristics  73
  use in  37, 267
  within assessment process  267

D
Decoding  3, 48, 79, 91, 98, 136, 196, 202, 206, 208, 256
  and arduously working  60
  and lack of vocabulary  198
  and phonics skills  85
  breakdown with  150
  CBE Process  79, 84, 99, 103, 143
  errors  86, 88–90, 113
  process  60
  reading comprehension skills  149
  self-monitoring assessment for  263
  skills  60, 85, 91, 131
  student  149, 195
Diagnostic assessment  31, 35, 36

E
Effective practices  17, 19, 29, 45
  and data  45
  limited use of  11, 14, 15
Error analysis  100, 145
  coding sheets  111
  conduct  88, 90
  instructions  107
  overlap with  150
  tally sheet  114
Evaluation
  plan  98, 99, 150, 207, 243, 258, 263, 264
F
Fluency  14, 27, 58, 98, 201, 256, 257
  and accuracy rates  93, 263
  at phrase level  126
  building  85, 93, 201, 208
  develop phonics  59
  during partner reading  125
  letter sound  143
  of letter-blends  189
  oral reading  68, 103
  reading  73
  stage  56
  teach  94, 96
  with connected text  121, 124
  with material  131
Formative assessment  16, 25, 36, 37, 145, 203
  research  267
  use  91
Formative evaluation  74

G
Gap-analysis  64
Goal line  247
Goals
  goal-setting  3, 49, 63
  goal-writing  63
Group diagnostics  261, 263, 264
Growth
  growth rates  75
  judging growth  244, 249, 250

H
High-inference  45

I
Individual problem-solving  3, 19, 20
Instruction  11, 27, 30, 32, 34, 37, 44, 53, 54, 56, 58, 60, 63, 66, 70, 72–74, 93, 100, 144, 147, 201, 204, 207, 216, 244, 257, 266–268
  adjustments to  18
  align curriculum and  29
  and assessment  38
  and independent reading  270
  and level of support  41
  and practices in schools  13
  assessment and  44
  change in  249, 253
  context and vocabulary  208
  core  41
  corresponding level of  19
  direct  97, 147
  effective fluency  94
  focus and amount of  52
  focus of  254
  formal and informal  202
  general  146
  general reading  90, 141
  guided  148
  guided practice and  92
  pacing of  50
  small-group  148, 263
  supplemental  261
  targeted for  33
  targeted to correct errors  96, 97
  targeting  86, 88
  teacher plan  266
  to students and  256
  whole-class direct  256
Instruction, curriculum, environment, learner (ICEL)  3, 54, 61
  and IH  53
  assessment framework and IH  47, 53
  framework  36
Instructional factors  243
  evidence based  254
  research based  258
Instructional hierarchy (IH)  47, 56, 61, 66
  acquisition  56, 201, 256
  adaptation  56, 58
  fluency  56, 58, 201, 256
  generalization  56, 58
  RIOT/ICEL  53
Instructional match  66
Instructions  13
  and student learning  17
  individualized  15

K
Key principles  24, 27
  data-based decision making, use  29
  evidence-based practices, use  28
  instructional match  29
  preventative approach to education  27
  proactive approach to education  27
  school wide use and collaboration  29

L
Letter Naming Fluency (LNF)  139, 142, 150
Letter Sound Fluency (LSF)  139, 143, 150
Letter-sound correspondence
  letter blends  143, 145
  letter identification with  147
  teach  148
Low-inference  3
  assessments  45, 52
M
MAZE  196, 207
  probes  192, 208
  students  194
Multi-Tiered System of Supports (MTSS)  3, 19, 23
  description of  23, 24

N
Nonsense Word Fluency (NWF)  73, 139, 143, 150
  analysis of  150
  task  143

O
Oral Reading Fluency (ORF)  68, 80, 207, 265, 268, 269
  and MAZE  192, 209
  rates  27

P
Pattern of performance  248, 249
Percentile  26, 68, 71–73, 80, 84, 99, 138, 194
Phoneme  59, 60, 97, 135, 146, 150, 189
Phoneme Segmentation Fluency (PSF)  156
  use  150
Phonemic awareness  14, 59, 85, 135, 138, 150, 151
  skills  140, 141, 145, 147
  teach  146
Print concepts  135, 139, 142, 150, 153, 155, 156, 269
  teach  147
Problem-solving model
  use  19
Problem-solving model (PSM)  3, 7, 19, 20, 23, 44, 63
  plan evaluation  38, 41, 64
  plan implementation  38, 40
  problem analysis  38, 40
  problem identification  38, 64
  use  31, 45
Progress monitoring  3, 36, 65, 71, 101, 243
  data  30, 74
  graphs  258
  tools  37, 73, 267

R
Reliability
  and validity  264
  coefficients  265
  of data  252
Review, interview, observation, and testing (RIOT)  3, 54, 61
  and IH  53
  assessment framework and IH  47, 53
  framework  36

S
Screening assessment (screening)  35, 45, 80
Setting goals  68, 70
Specific-level assessment  53, 65, 85, 145, 203
  in CBE Process  270
Survey-level assessment (SLA)  64, 71, 80, 138, 139, 192, 201, 214, 263, 269
  MAZE  207
  with CBM  151, 209
  with reading CBM  101
Systems-level problem solving  41, 43

T
Tiers of instruction
  tier 1  32
  tier 2  33
  tier 3  33, 34
Trend line  247, 249, 253, 259

V
Validity  264, 265
Variability  252, 253
Vocabulary  14, 28, 59, 60, 90, 97, 195–197, 202–205
  list  207
  matching  208