Woodcock-Johnson® IV Tests of Achievement
Examiner's Manual
Standard & Extended Batteries
Nancy Mather ◆ Barbara J. Wendling
Reference Citations
■■ To cite the entire WJ IV battery, use:
Schrank, F. A., McGrew, K. S., & Mather, N. (2014). Woodcock-Johnson IV. Rolling
Meadows, IL: Riverside Publishing.
■■ To cite the WJ IV Tests of Achievement, use:
Schrank, F. A., Mather, N., & McGrew, K. S. (2014). Woodcock-Johnson IV Tests of
Achievement. Rolling Meadows, IL: Riverside Publishing.
■■ To cite this manual, use:
Mather, N., & Wendling, B. J. (2014). Examiner’s Manual. Woodcock-Johnson IV Tests of
Achievement. Rolling Meadows, IL: Riverside Publishing.
■■ To cite the online scoring and reporting program, use:
Schrank, F. A., & Dailey, D. (2014). Woodcock-Johnson Online Scoring and Reporting
[Online format]. Rolling Meadows, IL: Riverside Publishing.
Copyright © 2014 by Riverside Assessments, LLC. No part of this work may be reproduced
or transmitted in any form or by any means, electronic or mechanical, including
photocopying and recording or by any information storage or retrieval system, without the
prior written permission of Riverside Assessments, LLC, unless such copying is expressly
permitted by federal copyright law. Requests for permission to make copies of any part of
the work should be addressed to Riverside Insights, Attention: Permissions, One Pierce
Place, Suite 900W, Itasca, Illinois 60143.
Batería III Woodcock-Muñoz, WJ III, WJ-R, Woodcock-Johnson, the Woodcock-Johnson
IV logo, and Woodcock-Muñoz Language Survey are registered trademarks of Riverside
Assessments, LLC.
WIIIP, WJ IV Interpretation and Instructional Interventions Program, Woodcock
Interpretation and Instructional Interventions Program, and WJ IV are trademarks of
Riverside Assessments, LLC.
The MindHub is a registered trademark of the Institute for Applied Psychometrics (IAP)
and Interactive Metronome.
All other trademarks are the property of their respective owners.
The WJ IV tests are not to be used in any program operating under statutes or regulations
that require disclosure of specific item content and/or correct responses to the public,
including subjects or their parents. Any unauthorized distribution of the specific item
content and/or correct responses is prohibited by copyright law.
For technical information, please visit www.riversideinsights.com or call Riverside Insights
Customer Service at 800.323.9540.
About the Authors of the WJ IV
Fredrick A. Schrank
Fredrick A. (Fred) Schrank guided the development of the Woodcock-Johnson® IV (WJ IV™)
as the author team leader. He managed the test development company Measurement Learning
Consultants (MLC) and provided stewardship to the Woodcock-Muñoz Foundation.
Dr. Schrank is a licensed psychologist (Washington) and a board certified specialist in
school psychology from the American Board of Professional Psychology (ABPP). He worked
in the Dodgeville, North Fond du Lac, and De Forest (Wisconsin) school districts before
earning a PhD from the University of Wisconsin–Madison. Dr. Schrank then taught at
Truman State University (Missouri) and the University of Puget Sound (Washington) prior
to a 25-year career devoted almost exclusively to the development and publication of the
Woodcock-Johnson family of tests. In service to professional psychology, he has been an oral
examiner for the American Board of School Psychology (ABSP) and president of the American
Academy of School Psychology (AASP). Fred was instrumental in the development of the
organizational and interpretive plan for the WJ IV, including the Woodcock-Johnson online
scoring and reporting program.
Nancy Mather
Nancy Mather is a Professor at the University of Arizona in the Department of Disability
and Psychoeducational Studies. She holds an MA in Behavior Disorders and a PhD from
the University of Arizona in Special Education and Learning Disabilities. She completed
a postdoctoral fellowship under the mentorship of Dr. Samuel Kirk at the University of
Arizona.
Dr. Mather assisted Dr. Richard Woodcock with several aspects of test development for the
Woodcock-Johnson Psycho-Educational Battery–Revised (WJ-R®), including coauthoring the
Examiner’s Manuals for the WJ-R Tests of Cognitive Ability and Achievement. She has been
a coauthor of both the Woodcock-Johnson III (WJ III®) and the WJ IV and has coauthored
two books on the interpretation and application of the WJ III—Essentials of WJ III Tests of
Achievement Assessment and Woodcock-Johnson III: Reports, Recommendations, and Strategies.
She has served as a learning disabilities teacher, a diagnostician, a university professor, and
an educational consultant. Dr. Mather conducts research in the areas of reading and writing
development. She has published numerous articles, conducts workshops on assessment and
instruction both nationally and internationally, and has coauthored several books linking
assessment and intervention, including Learning Disabilities and Challenging Behaviors: A
Guide to Intervention and Classroom Management, Evidence-Based Interventions for Students
with Learning and Behavioral Challenges, Essentials of Assessment Report Writing, Essentials
of Evidence-Based Academic Interventions, Writing Assessment and Instruction for Students with
Learning Disabilities, and most recently, Essentials of Dyslexia: Assessment and Intervention.
Kevin S. McGrew
Kevin S. McGrew is Director of the Institute for Applied Psychometrics (IAP), LLC, a private
research and consulting organization he established in 1998. He was an Associate Director
of Measurement Learning Consultants and Research Director of the Woodcock-Muñoz
Foundation. He also is a Visiting Lecturer in Educational Psychology (School Psychology
Program) at the University of Minnesota and Director of Research for Interactive Metronome,
a neurotechnology and rehabilitation company. He holds a PhD in Educational Psychology
(Special Education) from the University of Minnesota and an MS in School Psychology and a
BA in Psychology from Minnesota State University–Moorhead.
Dr. McGrew was a practicing school psychologist for 12 years in Iowa and Minnesota.
From 1989 to 2000, he was a Professor in the Department of Applied Psychology at St. Cloud
State University, St. Cloud, Minnesota. He has served as a measurement consultant to a
number of psychological test publishers, national research studies, and organizations.
He has authored numerous publications and made state, national, and international
presentations in his primary areas of research interest in human intelligence, intellectual
assessment, human competence, applied psychometrics, and the Cattell-Horn-Carroll (CHC)
theory of cognitive abilities. He is an active distributor of theoretical and research information
via three professional blogs and The MindHub® web portal.
Dr. McGrew was the primary measurement consultant for the WJ-R and served in the
same capacity as coauthor of the Mini-Battery of Achievement (MBA), Sharpe-McNear-McGrew
Braille Assessment Inventory (BAI), WJ III, Woodcock-Johnson Diagnostic Supplement to the
Tests of Cognitive Abilities, Batería III Woodcock-Muñoz® (Batería III), Woodcock-Johnson
III Normative Update, Woodcock-Johnson III–Australian Adaptation, and WJ IV. He was the
psychometric and statistical consultant for the development of the Children’s Psychological
Processes Scale.
Contributing Author
Barbara J. Wendling coauthored the WJ IV Examiner’s Manuals with Nancy Mather.
Barbara is an educational consultant with expertise in assessment, test interpretation, and
academic interventions. She holds an MA in Learning Disabilities, and she has over 17 years
of experience as an educator and diagnostician in Illinois public schools and 11 years of
experience in educational and assessment publishing. Currently she is the Education Director
of the Woodcock-Muñoz Foundation.
Barbara has coauthored several books on assessment and intervention, including Essentials
of Evidence-Based Academic Interventions, Writing Assessment and Instruction for Students with
Learning Disabilities, and Essentials of Dyslexia: Assessment and Intervention. In addition, she
has coauthored the following books on the use and interpretation of the Woodcock-Johnson:
Essentials of the WJ III Tests of Achievement Assessment; Essentials of the WJ III Tests of
Cognitive Abilities Assessment, Second Edition; and Essentials of the WJ IV Tests of Achievement
Assessment. She is also a coauthor of the Woodcock Interpretation and Instructional Interventions Program™ for the WJ III and the WJ IV Interpretation and Instructional Interventions Program™ (WIIIP™).
Acknowledgments
The Woodcock-Johnson IV was developed from the contributions of thousands of individuals,
spanning time and distance, each motivated by a desire or a call to make a valuable
contribution to the future of contemporary assessment practice. Although it is impossible
to acknowledge everyone individually, a few key people have made such significant
contributions that even special mention seems inadequate as an expression of their impact.
When author team meetings were scheduled, Barbara Wendling was deemed to be so
invaluable that she was always invited to participate. Her experience as an educator and
diagnostician, her work in educational and test publishing, and the insights she has gleaned
from developing and delivering trainings on learning disabilities and assessment over many
years are reflected in the examiner’s manuals and all of the WJ IV materials.
From the Measurement Learning Consultants project center offices on the beautiful
Oregon coast, Mary Ruef fostered and supervised a staff of highly qualified employees who
prepared standardization materials and scored the test results from the standardization and
validity studies. In addition, she helped prepare the final data for analysis, including the
preparation of preliminary data reports from which publication items were selected.
Extensive expertise in test publishing dedicated to the Woodcock-Johnson family of tests
made Melanie Bartels Graw an indispensable asset to the quality of the published materials.
Her painstaking attention to detail is evidenced throughout the battery, from the item keys
to the user-friendliness of the examiner instructions. She single-handedly managed the
monumental coordination effort of submitting and reviewing multiple iterations of all of the
tests, test records, response booklets, and manuals to Riverside.
The critical task of converting standardization data to norms was accomplished through
the superior craftsmanship of David Dailey, who not only trained and managed a staff of
norms construction technicians, but also was instrumental in managing all of the nuances of
the WJ IV blueprint so that each successive iteration of the battery plan could be reviewed
and improved by the authors. A professional statistician, he played a key consulting role for a
variety of statistical analyses reported in the Woodcock-Johnson IV Technical Manual.
Based on his years of experience creating the software programs for the Woodcock-
Johnson family of tests, both in the United States and internationally, Todd Simmons expertly
programmed the Woodcock-Johnson online scoring and reporting program, offering the
perspective of ease of use in software design. He was ably assisted in his efforts by Melanie
Pammer Maerz, who ensured that the software program worked as intended.
Joining the team in the latter years of the project, Erica LaForte brought a wealth of Rasch
measurement expertise to the development effort. She completed a number of statistical
analyses and helped write the Woodcock-Johnson IV Technical Manual. Throughout the half-
decade-long developmental effort, the technical quality of the data analyses has been ensured
by the contributions of Dr. Jack McArdle and Dr. Mark Davison.
Under the thoughtful guidance of Dr. Ana Muñoz-Sandoval, three Spanish oral language
tests were adapted from parallel English oral language tests for use with Spanish-speaking
bilingual individuals. Dr. Lynne Jaffe and Dr. Criselda Alvarado assisted with sections of
the examiner’s manuals, providing expertise for accommodations for students with specific
disabilities, Spanish oral language assessment, and English language learners. Dr. Kathleen
Donalson provided expertise in item content analysis for several of the reading and
spelling tests.
Finally, sincere appreciation is expressed to the more than 8,000 standardization and
validity study participants who contributed their time and invaluable test-taking efforts to
this project.
FAS
NM
KSM
Table of Contents
About the Authors of the WJ IV iii
Acknowledgments v
Chapter 1: Overview 1
Comparison to the WJ III Tests of Achievement 2
Organization of the WJ IV Tests of Achievement 3
Components of the WJ IV Tests of Achievement 3
Test Books 5
Examiner’s Manual 5
Technical Manual 5
Woodcock-Johnson Online Scoring and Reporting 6
Test Record 6
Response Booklet 6
Audio Recording 6
Relationship of the WJ IV to the CHC Theory of Cognitive Abilities 6
Uses of the WJ IV Tests of Achievement 7
Use With the WJ IV COG 7
Use With the WJ IV OL 7
Diagnosis 7
Determination of Variations and Comparisons 8
Educational Programming 8
Planning Individual Programs 8
Guidance 9
Assessing Growth 9
Program Evaluation 9
Research 9
Psychometric Training 10
Examiner Qualifications 10
Confidentiality of Test Materials and Content 11
Test 8: Oral Reading 16
Test 9: Sentence Reading Fluency 16
Test 10: Math Facts Fluency 16
Test 11: Sentence Writing Fluency 16
Test 12: Reading Recall 16
Test 13: Number Matrices 16
Test 14: Editing 16
Test 15: Word Reading Fluency 17
Test 16: Spelling of Sounds 17
Test 17: Reading Vocabulary 17
Test 18: Science 17
Test 19: Social Studies 17
Test 20: Humanities 17
WJ IV ACH Clusters 18
Reading Clusters 18
Math Clusters 19
Written Language Clusters 20
Cross-Domain Clusters 20
Additional Notations for Recording Responses 36
Scoring Multiple Responses 36
Computing Raw Scores 37
Obtaining Age- and Grade-Equivalent Scores 37
Using the Woodcock-Johnson Online Scoring and Reporting Program 37
Accommodations 38
Recommended Accommodations 38
Young Children 39
English Language Learners 41
Individuals With Learning and/or Reading Difficulties 42
Individuals With Attentional and Behavioral Difficulties 42
Individuals With Hearing Impairments 44
Individuals With Visual Impairments 48
Individuals With Physical Impairments 51
Interpretive Cautions 52
Use of Derived Scores 52
W Score 79
Grade Equivalent 79
Age Equivalent 80
W Difference Score 80
Relative Proficiency Index 80
Instructional Zone 81
CALP Levels 81
Percentile Rank 83
Standard Score 83
Standard Error of Measurement 84
Interpreting Tests 84
Interpreting the Reading Tests 85
Interpreting the Math Tests 91
Interpreting the Written Language Tests 94
Interpreting the Academic Knowledge Tests 99
Interpreting Variations and Comparisons 99
Intra-Ability Variations 100
Intra-Achievement Variations 100
Academic Skills/Academic Fluency/Academic Applications Variations 102
Intra-Cognitive Variations 103
Intra-Oral Language Variations 104
Ability/Achievement Comparisons 104
Academic Knowledge/Achievement Comparisons 104
Three Cognitive Ability/Achievement Comparisons 106
Oral Language/Achievement Comparisons 107
Discrepancy Scores 107
Implications Derived From Test Results 107
References 109
List of Tables
Table 1-1 Organization of the WJ IV ACH Tests 4
Table 1-2 Organization of the WJ IV ACH Clusters 4
Table 1-3 Examiner Qualification Standards From the Standards for Educational and Psychological Testing 10
Table 1-4 Test Security Standards From the Standards for Educational and Psychological Testing 11
Table 2-1 WJ IV ACH Selective Testing Table 14
Table 3-1 Standards Regarding Examinee Accommodations From the Standards for Educational and Psychological Testing 39
Table 3-2 WJ IV ACH Tests Useful for Individuals With Hearing Impairments 46
Table 3-3 WJ IV ACH Tests Useful for Individuals With Visual Impairments 51
Table 5-1 Hierarchy of WJ IV ACH Test Information 76
Table 5-2 ACH Clusters That Yield a CALP Level 81
Table 5-3 CALP Levels and Corresponding Implications 82
Table 5-4 Classification of Standard Score and Percentile Rank Ranges 84
Table 5-5 Percentage by Age of Occurrence of Qualitative Observations for Test 1: Letter-Word Identification 87
Table 5-6 Percentage by Age of Occurrence of Qualitative Observations for Test 4: Passage Comprehension 89
Table 5-7 Percentage by Age of Occurrence of Qualitative Observations for Test 9: Sentence Reading Fluency 90
Table 5-8 Percentage by Age of Occurrence of Qualitative Observations for Test 2: Applied Problems 92
Table 5-9 Percentage by Age of Occurrence of Qualitative Observations for Test 5: Calculation 93
Table 5-10 Percentage by Age of Occurrence of Qualitative Observations for Test 10: Math Facts Fluency 94
Table 5-11 Percentage by Age of Occurrence of Qualitative Observations for Test 3: Spelling 96
Table 5-12 Percentage by Age of Occurrence of Qualitative Observations for Test 6: Writing Samples 97
Table 5-13 Percentage by Age of Occurrence of Qualitative Observations for Test 11: Sentence Writing Fluency 98
Table 5-14 WJ IV Intra-Ability Variation and Ability/Achievement Comparison Procedures 99
Table 5-15 WJ IV Intra-Achievement Variations 102
Table 5-16 WJ IV Academic Skills/Academic Fluency/Academic Applications Variations 103
Table 5-17 WJ IV Academic Knowledge/Achievement Comparisons 106
Table 5-18 Responsible Test Interpretation Standards From the Standards for Educational and Psychological Testing 108
List of Figures
Figure 1-1 Components of the WJ IV ACH. 5
Figure 3-1 Recommended arrangement for administering the test. 25
Figure 3-2 Suggested Starting Points table for Test 2: Applied Problems from the WJ IV ACH Form A Test Book. 27
Figure 3-3 Example of Item 1 used as the basal on Test 1: Letter-Word Identification. 29
Figure 3-4 Determination of basal and ceiling with two apparent basals and two apparent ceilings. 31
Figure 3-5 The “Test Session Observations Checklist” from the Test Record. 33
Figure 3-6 “Qualitative Observation” checklist for Test 1: Letter-Word Identification. 35
Figure 4-1 Reading error types in Test 8: Oral Reading. 63
Figure 4-2 Example of completed Test Record and “Qualitative Observation Tally” for Test 8: Oral Reading. 63
Figure 5-1 Comparison of the traditional and extended percentile rank scales with the standard score scale (M = 100, SD = 15). 83
Figure 5-2 Various skills measured by the WJ IV ACH reading tests. 85
Figure 5-3 Various skills measured by the WJ IV ACH math tests. 91
Figure 5-4 Various skills measured by the WJ IV ACH writing tests. 95
Figure 5-5 Four types of intra-ability variation models in the WJ IV. 101
Figure 5-6 Five types of ability/achievement comparison models in the WJ IV. 105
Chapter 1
Overview
The Woodcock-Johnson® IV (WJ IV™) (Schrank, McGrew, & Mather, 2014a) is composed of
three assessment instruments: the Woodcock-Johnson IV Tests of Cognitive Abilities (WJ IV
COG) (Schrank, McGrew, & Mather, 2014b), the Woodcock-Johnson IV Tests of Oral Language
(WJ IV OL) (Schrank, Mather, & McGrew, 2014b), and the Woodcock-Johnson IV Tests of
Achievement (WJ IV ACH) (Schrank, Mather, & McGrew, 2014a). Together these instruments
provide a comprehensive set of individually administered, norm-referenced tests for
measuring intellectual abilities, academic achievement, and oral language abilities.
This revision represents a significant advance in the measurement of cognitive, linguistic,
and achievement abilities. The WJ IV revision blueprint was guided by multiple goals. First,
this comprehensive assessment system is designed to be on the cutting edge of practice.
It facilitates exploring individual strengths and weaknesses across cognitive, linguistic,
and academic abilities; complements response to intervention (RTI) models; and reframes
variations and ability/achievement comparisons. Second, the blueprint pushes the tests
beyond CHC theory as it was conceived in the Woodcock-Johnson III (WJ III®) (Woodcock,
McGrew, & Mather, 2001). Whereas the WJ III focused primarily on broad CHC abilities,
the WJ IV focuses on the most important broad and narrow CHC abilities for contemporary
assessment needs—describing cognitive performance and understanding the nature of
learning problems (McGrew, 2012; McGrew & Wendling, 2010; Schneider & McGrew,
2012). Some WJ IV tests and clusters emphasize narrow CHC abilities, and others were
designed to reflect the importance of cognitive complexity through the influence of two or
more narrow abilities on task requirements. Additional goals address ease and flexibility of
use. New features allow novice examiners to use the tests with confidence while providing
experienced examiners with a rich array of interpretive options to customize and enhance
their evaluations. The structure of the WJ IV system also facilitates examiner use by creating
comprehensive cognitive, achievement, and oral language batteries that can be used in
conjunction with one another or as standalone batteries.
WJ IV normative data are based on a single sample that was administered the cognitive,
oral language, and achievement tests. The national standardization included over 7,000
individuals ranging in age from 2 to over 90 years, including college and university
undergraduate and graduate students. The demographic and community characteristics
closely match those of the general U.S. population. Further information about the norming
sample is provided in the Woodcock-Johnson IV Technical Manual (McGrew, LaForte, &
Schrank, 2014).
The WJ IV interpretation plan includes a full array of derived scores for reporting results.
The accompanying Woodcock-Johnson online scoring and reporting program (Schrank &
Dailey, 2014) quickly calculates and reports all derived scores.
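The general relationship between two of these derived scores, the standard score scale (M = 100, SD = 15) and the percentile rank scale, can be sketched under a normal-distribution assumption. The sketch below is a generic psychometric illustration only; the scoring program derives scores from the WJ IV norm tables, not from this formula, and the function name `percentile_rank` is hypothetical.

```python
import math

def percentile_rank(standard_score: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Approximate percentile rank for a standard score, assuming scores are
    normally distributed with the given mean and standard deviation."""
    z = (standard_score - mean) / sd
    # Normal cumulative distribution function via the error function,
    # expressed as a percentage (0-100).
    return 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A standard score of 100 falls at the 50th percentile by definition;
# a score one standard deviation above the mean (115) falls near the 84th.
print(round(percentile_rank(100), 1))  # → 50.0
print(round(percentile_rank(115), 1))  # → 84.1
```

Because empirical norms depart from a perfect normal curve, actual WJ IV percentile ranks may differ slightly from this idealized mapping.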
This manual describes the WJ IV ACH, which can be used independently or in
conjunction with the WJ IV OL or WJ IV COG batteries.
Comparison to the WJ III Tests of Achievement
The WJ IV ACH is a revised and expanded version of the Woodcock-Johnson III Tests of
Achievement (WJ III ACH) (Woodcock, McGrew, & Mather, 2001). Extensive renorming
and the addition of several new tests, clusters, and interpretive procedures increase the
diagnostic power of this instrument while retaining many of the features of the WJ III ACH.
Following is a summary of the major differences between the WJ III ACH and the
WJ IV ACH.
■■ The WJ IV has three parallel forms of the Standard Battery (Forms A, B, and C) and
one form of the Extended Battery, which is designed to be used with any form of the
Standard Battery.
■■ The WJ IV ACH includes a core set of tests (Tests 1 through 6) that are used for the
intra-achievement variations procedure.
■■ There are 7 new tests in the WJ IV ACH: Test 8: Oral Reading, Test 12: Reading Recall,
and Test 15: Word Reading Fluency are entirely new; Test 13: Number Matrices is new
to the WJ IV ACH battery; and Test 18: Science, Test 19: Social Studies, and Test 20:
Humanities are now full-length tests rather than subtests. This increased coverage makes
more interpretive options available.
■■ There are 22 clusters, including 8 new ones: Reading Comprehension–Extended,
Reading Fluency, and Reading Rate are entirely new; Reading, Written Language,
Mathematics, Brief Achievement, and Broad Achievement were previously available only
in WJ III ACH Form C (Woodcock, Schrank, Mather, & McGrew, 2007) and are now
included in all forms of the WJ IV ACH battery.
■■ Fifteen clusters are available from the Standard Battery tests; 7 additional clusters are
available when the Extended Battery tests are administered.
■■ Several oral language tests from the WJ III ACH (including Understanding
Directions and Sound Awareness) are now in the WJ IV OL. Story Recall is now in the
WJ IV COG.
■■ Three test names were changed to more accurately reflect the task: Writing Fluency
is now Test 11: Sentence Writing Fluency; Math Fluency is now Test 10: Math Facts
Fluency; and Reading Fluency is now Test 9: Sentence Reading Fluency.
■■ The procedures for evaluating ability/achievement comparisons and intra-ability
variations have been simplified and offer increased flexibility for the examiner.
∘∘ Four types of intra-ability variations are available: intra-cognitive, intra-
achievement, intra-oral language, and academic skills/academic fluency/academic
applications.
∘∘ Five types of ability/achievement comparisons are available: general intellectual
ability (GIA), Gf-Gc composite, scholastic aptitude, oral language ability, and
academic knowledge.
■■ The WJ III predicted achievement/achievement discrepancy procedure has been
replaced by the scholastic aptitude/achievement comparison procedure.
Organization of the WJ IV Tests of Achievement
The WJ IV ACH is available in three forms (Forms A, B, and C) that are parallel in content.
An examiner can alternate use of these three forms to reduce an examinee’s familiarity with
specific item content. Some school districts may designate one or more of the forms for
a specific purpose or for use by specific professionals. Each form contains 11 tests in the
Standard Battery (Tests 1 through 11). There is one form of the Extended Battery (Tests 12
through 20) that can be used with any of the Standard Battery forms (Forms A, B, or C).
Depending on the purpose and extent of the assessment, an examiner can use the Standard
Battery alone or in conjunction with the Extended Battery. Using the Standard Battery
provides a broad set of scores, while the Extended Battery allows more in-depth diagnostic
assessment of specific academic strengths and/or weaknesses. This feature allows examiners
to be more focused and selective in testing by only administering the specific tests relevant to
the referral question(s).
One goal of the revision was to increase ease of use and flexibility of the WJ IV ACH,
and the organization of the tests within the Standard and Extended Batteries reflects this
goal. For example, Tests 1 through 6 represent a core set of tests that yields clusters in
Reading, Written Language, Mathematics, Academic Skills, Academic Applications, and Brief
Achievement and serves as the basis for the intra-achievement variations procedure. Additional
tests can be selected to address the individual’s specific referral questions.
An examiner seldom needs to administer all of the tests or complete all of the interpretive
options for a single person. The importance of selective testing becomes apparent as the
examiner gains familiarity with the WJ IV ACH. An analogy to craftsmanship is appropriate:
The WJ IV ACH provides an extensive tool chest that can be used selectively by a variety of
skilled assessment professionals. Different assessments require different combinations of tools.
Table 1-1 lists the tests included in the WJ IV ACH. Icons following several tests indicate
tests that are administered using an audio recording, tests that are administered using the
Response Booklet, and tests that are timed. The table groups the tests by content area rather
than by order of appearance in the Test Book.
Table 1-2 illustrates the 22 clusters, or groupings of tests, that are available from the WJ
IV ACH. These clusters are the primary source of interpretive information to help identify
performance levels, determine educational progress, and identify an individual’s strengths and
weaknesses.
Table 1-1. Organization of the WJ IV ACH Tests

ACADEMIC AREA       STANDARD BATTERY (FORMS A, B, AND C)     EXTENDED BATTERY
Reading             Test 1: Letter-Word Identification       Test 12: Reading Recall
                    Test 4: Passage Comprehension            Test 15: Word Reading Fluency
                    Test 7: Word Attack                      Test 17: Reading Vocabulary
                    Test 8: Oral Reading
                    Test 9: Sentence Reading Fluency
Mathematics         Test 2: Applied Problems                 Test 13: Number Matrices
                    Test 5: Calculation
                    Test 10: Math Facts Fluency
Writing             Test 3: Spelling                         Test 14: Editing
                    Test 6: Writing Samples                  Test 16: Spelling of Sounds
                    Test 11: Sentence Writing Fluency
Academic Knowledge                                           Test 18: Science
                                                             Test 19: Social Studies
                                                             Test 20: Humanities

[icon] = test in the Response Booklet
[icon] = timed test
[icon] = audio-recorded test
Figure 1-1. Components of the WJ IV ACH.
Test Books
The Standard Battery and the Extended Battery Test Books are in an easel format positioned
so the stimulus pictures or words face the examinee and the directions face the examiner.
The ring-binder format allows the examiner to rearrange the order of the tests to facilitate
selective testing. Specific administration directions are provided page by page for all tests.
Examiner’s Manual
The Examiner’s Manual includes detailed information for using the WJ IV ACH. Chapter 1
is an overview. Chapter 2 provides descriptions of the 20 tests and the 22 clusters. General
administration and scoring procedures and accommodations for special populations are
discussed in Chapter 3. Chapter 4 provides specific administration and scoring instructions
for each test. Chapter 5 provides a discussion of the scores and levels of interpretive
information that are available.
This manual also includes several appendices. Appendix A contains a list of norming sites.
Appendix B contains the Scoring Guide with scoring criteria for Test 6: Writing Samples
(Forms A, B, and C). Appendices C and D contain reproducible checklists to assist examiners
in building competency with the WJ IV ACH. Appendix C is the “WJ IV Tests of Achievement
Examiner Training Checklist,” a test-by-test form that may be used as an observation or self-
study tool. Appendix D is the “WJ IV General Test Observations Checklist,” which covers
general testing procedures and may be used by an experienced examiner when observing a
new examiner.
Technical Manual
The Technical Manual is an e-book on CD and provides a summary of the development,
standardization, and technical characteristics of the WJ IV, including summary statistics.
Woodcock-Johnson Online Scoring and Reporting
The Woodcock-Johnson online scoring and reporting program (Schrank & Dailey, 2014)
eliminates the time-consuming norm table searches required when scoring a test by hand
and reduces the possibility of clerical errors. The automated online scoring quickly and
accurately provides all derived scores for the tests and clusters and computes variations and
comparisons.
Test Record
The Test Record includes guidelines for examiner scoring and is used to record identifying
information, observations of behavior, examinee responses, raw scores, and other information
that may be helpful in interpreting test results. Built-in scoring tables for each test enable the
examiner to immediately obtain estimated age- and grade-equivalent scores.
Response Booklet
The Response Booklet provides space for the examinee to respond to items requiring written
responses or mathematical calculations. Test 3: Spelling, Test 5: Calculation, Test 6: Writing
Samples, Test 9: Sentence Reading Fluency, Test 10: Math Facts Fluency, Test 11: Sentence
Writing Fluency, Test 15: Word Reading Fluency, and Test 16: Spelling of Sounds all require
the Response Booklet. In addition, a worksheet is provided in the Response Booklet for
Test 2: Applied Problems and Test 13: Number Matrices.
Audio Recording
The audio recording is provided for standardized administration of Test 16: Spelling of
Sounds.
Gq is represented by Test 2: Applied Problems, Test 5: Calculation, Test 10: Math Facts
Fluency, and Test 13: Number Matrices.
Grw is represented by Test 1: Letter-Word Identification, Test 3: Spelling, Test 4: Passage
Comprehension, Test 6: Writing Samples, Test 8: Oral Reading, Test 9: Sentence Reading
Fluency, Test 11: Sentence Writing Fluency, Test 12: Reading Recall, Test 14: Editing, Test 15:
Word Reading Fluency, and Test 17: Reading Vocabulary.
Gc is measured by the Academic Knowledge cluster composed of Test 18: Science, Test 19:
Social Studies, and Test 20: Humanities.
Glr, especially the narrow ability of meaningful memory, is required in Test 12: Reading
Recall, Test 18: Science, Test 19: Social Studies, and Test 20: Humanities. Associative memory,
another narrow Glr ability, is required in many of the tests that measure decoding, encoding,
or recall of math facts.
Ga, in particular the narrow ability of phonetic coding, is required in Test 7: Word Attack
and Test 16: Spelling of Sounds.
Diagnosis
An examiner can use the WJ IV ACH to determine and describe a profile of an individual’s
academic strengths and weaknesses. Additionally, test results help determine how certain
factors affect related aspects of development. For example, a weakness in phoneme/grapheme
knowledge may interfere with overall development in reading and spelling. Similarly, a
weakness in spelling may help explain an individual’s difficulties on school assignments
requiring writing.
An examiner also can use the WJ IV ACH for a more in-depth evaluation after an
individual has failed a screening procedure (e.g., a kindergarten screening) or to substantiate
the results of other tests or prior evaluations.
Determination of Variations and Comparisons
The information provided by the WJ IV ACH, WJ IV OL, and the WJ IV COG is particularly
appropriate for documenting the nature of, and differentiating between, intra-ability (intra-
achievement, academic skills/academic fluency/academic applications, intra-cognitive, intra-
oral language) variations and ability/achievement discrepancies or comparisons (academic
knowledge/achievement, general intellectual ability/achievement, Gf-Gc/other ability,
scholastic aptitude/achievement, oral language ability/achievement).
The WJ IV intra-ability variations are useful for understanding an individual’s strengths
and weaknesses, diagnosing and documenting the existence of specific abilities and
disabilities, and acquiring the most relevant information for educational and vocational
planning. Analysis of this in-depth assessment data, which goes well beyond the historical
and traditional singular focus on ability/achievement discrepancy data, can be linked
more directly to recommendations for service delivery and the design of an appropriate
educational program.
Although many unresolved issues characterize the appropriate determination and
application of discrepancy information in the field of learning disabilities, an ability/
achievement discrepancy may be used as part of the selection criteria for learning disability
(LD) programs. Even though a discrepancy may be statistically significant, this type of
comparison is rarely appropriate as the sole criterion for determining the existence or
nonexistence of a learning disability or for determining eligibility for special services.
Analyses of other abilities and an understanding of the relationships and interactions
among various abilities and skills are needed to determine whether a person does or does
not have a learning disability. Given the problems inherent in employing and interpreting
ability/achievement discrepancies, multiple sources of information, including background
information (e.g., educational history, classroom performance), as well as clinical experience,
are needed to make an accurate diagnosis.
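The logic of comparing two derived scores can be illustrated with a generic classical-test-theory sketch. This is not the WJ IV's actual variation/comparison procedure, which is norm-based, and all scores, reliabilities, and thresholds below are hypothetical:

```python
import math

# Generic classical-test-theory illustration of an ability/achievement
# comparison. NOT the WJ IV procedure; all values are hypothetical.

def sem(sd, reliability):
    """Standard error of measurement: SD * sqrt(1 - rxx)."""
    return sd * math.sqrt(1 - reliability)

def difference_significant(score_a, score_b, rel_a, rel_b, sd=15.0, z_crit=1.96):
    """Check whether two standard scores differ by more than chance alone,
    using the standard error of a difference: sqrt(SEM_a^2 + SEM_b^2)."""
    se_diff = math.sqrt(sem(sd, rel_a) ** 2 + sem(sd, rel_b) ** 2)
    return abs(score_a - score_b) > z_crit * se_diff

# Hypothetical examinee: ability SS = 102, reading achievement SS = 82.
print(difference_significant(102, 82, rel_a=0.95, rel_b=0.94))  # True
```

Even when such a difference exceeds chance, the surrounding text applies: a statistically significant discrepancy alone is never sufficient grounds for a diagnosis.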
Educational Programming
When combined with behavioral observations, work samples, and other pertinent
information, WJ IV ACH results will help the skilled clinician make decisions regarding
educational programming. The test results help identify a student's most appropriate
instructional level and the types of services that may be needed. The WJ IV ACH also can
assist in vocational planning, particularly when successful job performance depends on
specific types of skills, such as reading, writing, or mathematics performance.
Guidance
The WJ IV ACH can provide guidance in educational and clinical settings. The results of the
evaluation can help teachers, counselors, social workers, and other personnel understand
the nature of an individual’s academic strengths and weaknesses and determine the necessary
levels of assistance. The WJ IV ACH also can provide valuable information to help parents
understand their child’s particular academic problems or needs.
Assessing Growth
The WJ IV ACH can provide a record of functioning and growth throughout an individual’s
lifetime. The availability of three forms—Forms A, B, and C—reduces an examinee’s
familiarity with specific item content and makes it possible to administer the achievement
tests more frequently, if needed. The WJ IV ACH also can be used to assess changes in a
person’s performance following a specific time interval, such as after a year of receiving
special educational services.
Program Evaluation
The WJ IV ACH can provide information about program effectiveness at all levels of
education, from preschool through adult. For example, WJ IV ACH tests can be administered
to evaluate the effects of specific school programs or the relative performance levels (in a
certain skill) of students in a class or school.
The continuous-year feature of the WJ IV school-age norms meets the reporting
requirements for educational programs. This feature is especially useful because it provides
norms based on data gathered continuously throughout the school year as opposed to norms
based on data gathered at, perhaps, two points in the school year and then presented as fall
and spring norms.
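The contrast between continuous and two-point norms can be sketched with hypothetical numbers: with only fall and spring norming data, a mid-year norm must be linearly interpolated, whereas continuous norming estimates it from data gathered at the actual time of testing.

```python
# Illustrative sketch with hypothetical raw-score means; the function names
# and values are invented for this example, not taken from the WJ IV norms.

def interpolated_norm(fall_mean, spring_mean, months_into_year, year_length=9.0):
    """Linear interpolation between a fall and a spring norming point."""
    fraction = months_into_year / year_length
    return fall_mean + fraction * (spring_mean - fall_mean)

# Hypothetical means for a test: 30 in fall, 42 in spring.
# Three months into the school year, interpolation yields 34.0.
print(interpolated_norm(30.0, 42.0, months_into_year=3.0))  # 34.0
```

Continuous norming avoids the assumption, built into the interpolation above, that growth between the two norming points is strictly linear.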
Research
The possibilities for using the WJ IV ACH in research are unlimited. The wide age range and
breadth of coverage are important advantages underlying its use for research at all age levels,
from preschool through geriatric. Computer scoring allows easy storage of clinical data.
Because the WJ IV ACH tests are individually administered, the researcher has more control
over the quality of the data obtained.
The WJ IV ACH provides predictor or criterion measures that can be used in studies
investigating a variety of experimental effects. Additionally, the wide age range allows
longitudinal or cohort research data to be gathered using the same set of tests and test
content. In educational research, the WJ IV ACH provides a comprehensive set of related
measures for evaluating the comparative efficacy of several programs or services or for
evaluating the effectiveness of curricular interventions. The WJ IV ACH also is useful for
describing the characteristics of examinees included in a sample or experimental condition
and for pairing students in certain experimental designs.
The range of interpretive information available for each test and cluster includes error
analysis, description of developmental status (age and grade equivalents), description of
quality of performance (RPIs and instructional zones), and comparison with grade or age
mates to determine group standing (percentile ranks and standard scores). The W score and
standard score scales (discussed in Chapter 5) are both equal-interval scales that can be used
in statistical analyses based on the assumption of equal-interval metrics. As described in the
Technical Manual, the W score is the preferred metric for most statistical analyses.
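The equal-interval property can be illustrated with a minimal sketch, assuming the conventional standard-score metric (M = 100, SD = 15): a standard score is a linear transform of z, so a given difference in SS units means the same thing anywhere on the scale.

```python
# Minimal sketch of the equal-interval property of standard scores,
# assuming the conventional M = 100, SD = 15 metric.

def standard_score(z, mean=100.0, sd=15.0):
    """Linear transform of a z score onto the standard-score scale."""
    return mean + sd * z

# A 1-SD gap is 15 SS points whether it lies low or high in the distribution.
print(standard_score(-2.0) - standard_score(-3.0))  # 15.0
print(standard_score(2.0) - standard_score(1.0))    # 15.0
```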
Psychometric Training
This manual contains the basic principles of individual clinical assessment and specific
administration, scoring, and interpretive information for the WJ IV ACH, which makes the
WJ IV ACH an ideal instrument for introducing individualized assessment in college and
university courses. The WJ IV ACH provides new examiners with a broad foundation in
the administration, scoring, and interpretation of individualized assessments. Experience
in clinical assessment with the WJ IV ACH provides a solid foundation for learning to
administer and interpret other test instruments.
Examiner Qualifications
The examiner qualifications for the WJ IV ACH have been informed by the joint Standards for
Educational and Psychological Testing (American Educational Research Association [AERA],
American Psychological Association [APA], & National Council on Measurement in Education
[NCME], 2014). Table 1-3 includes three applicable standards from this publication. This
section includes a discussion of these standards as they apply to the WJ IV ACH.
12.15 Those responsible for educational testing programs should take appropriate steps to verify that the
individuals who interpret the test results to make decisions within the school context are qualified to do
so or are assisted by and consult with persons who are so qualified. (p. 199)
12.16 Those responsible for educational testing programs should provide appropriate training, documentation,
and oversight so that the individuals who administer and score the test(s) are proficient in the appropriate
test administration and scoring procedures and understand the importance of adhering to the directions
provided by the test developer. (p. 200)
Any person administering the WJ IV ACH needs thorough knowledge of the exact
administration and scoring procedures and an understanding of the importance of adhering
to standardized procedures. To become proficient in administering the WJ IV ACH, examiners
need to study the administration and scoring procedures carefully and follow the procedures
precisely. This Examiner’s Manual provides guidelines for examiner training and includes
specific instructions for administering and scoring each test.
Competent interpretation of the WJ IV ACH requires a higher degree of knowledge and
experience than is required for administering and scoring the tests. Graduate-level training in
educational assessment and a background in diagnostic decision-making are recommended
for individuals who will interpret WJ IV ACH results. Only trained and knowledgeable
professionals who are sensitive to the conditions that may compromise, or even invalidate,
standardized test results should make interpretations and decisions. The level of formal
education recommended to interpret the WJ IV ACH is typically documented by successful
completion of an applicable graduate-level program of study that includes, at a minimum,
a practicum-type course covering administration and interpretation of standardized tests
of academic achievement. In addition, many qualified examiners possess state, provincial,
or professional certification, registration, or licensure in a field or profession that includes
as part of its formal training and code of ethics the responsibility for rendering educational
assessment and interpretation services.
Because professional titles, roles, and responsibilities vary among states (or provinces),
or even from one school district to another, it is impossible to equate competency to
professional titles. Consequently, the Standards for Educational and Psychological Testing
(AERA, APA, & NCME, 2014) suggest that it is the responsibility of each school district to
be informed by this statement of examiner qualifications and subsequently determine who,
under its aegis, is qualified to administer and interpret the WJ IV ACH.
If the WJ IV test materials are stored in an area accessible to people with a nonprofessional
interest in the tests, the materials should be kept in locked cabinets. Also, the test materials
should not be left unattended in a classroom where students can see the materials and look at
the test items.
The issue of test confidentiality is important. Test content should not be shared with
curious nonprofessionals or made available for public inspection. Disclosing specific test
content invalidates future administrations. As noted on the copyright page of this manual and
the Test Books, the WJ IV is not to be used in programs that require disclosure of test items
or answers.
An examiner should not inform examinees of the correct answers to any of the questions
during or after testing. When discussing test results, examiners may describe the nature of
the items included in a test, but they should not review specific test content. Examiners
should use examples similar to the test items without revealing actual items.
Questions often arise about the federal requirement that families be given access to
certain educational records. To comply with this requirement, a school or school district
may be required to permit “access” to test protocols; however, “access” does not include the
right to make copies of the materials provided. The Family Educational Rights and Privacy
Act (FERPA) provides that parents are to be given the right to “inspect and review” the
educational records of their children (Family Educational Rights and Privacy Act of
1974, 20 U.S.C. § 1232g; 34 CFR § 99.10). The right to inspect
and review is defined as including the right to a response from the participating agency “to
reasonable requests for explanations and interpretations of the records” (34 CFR §99.10(c))
and, if circumstances prevent inspection or review, the agency may either (a) provide a copy
or (b) make other arrangements that allow for inspection and review (34 CFR §99.10(d)).
So long as the test protocols are made available to the parent, or the parent’s representative,
for review, all requirements of the law are met without violating the publisher’s rights or the
obligations of the educational institution to keep the test materials confidential. There is,
therefore, no obligation to provide copies or to permit the parent, or the legal representative
of the parent, to make copies.
Similar concerns arise when a party seeks to introduce testing materials in a trial or
other legal proceeding. In such cases, it is important that the court take steps to protect the
confidentiality of the test and to prevent further copying or dissemination of any of the test
materials. Such steps include: (a) issuing a protective order prohibiting parties from copying
the materials, (b) requiring the return of the materials to the qualified professional upon
the conclusion of the proceedings, and (c) ensuring that the materials and all references to
the content of the materials will not become part of the public record of the proceedings. To
ensure that these protections are obtained, Riverside Insights™ should be contacted whenever
it appears likely that testing materials will be introduced as evidence in a legal proceeding.
Examiners or school districts with questions about copyright ownership or confidentiality
obligations should contact Riverside Insights at the toll-free telephone number listed on the
copyright page of this manual.
Chapter 2
Descriptions of the WJ IV ACH Tests and Clusters
The Woodcock-Johnson IV Tests of Achievement (WJ IV ACH) (Schrank, Mather, & McGrew,
2014a) contains 20 tests measuring four curricular areas—reading, mathematics, written
language, and academic knowledge. Specific combinations, or groupings, of these 20 tests
form clusters for interpretive purposes. (For administration and scoring procedures, see
Chapters 3 and 4 of this manual.) There are three alternate and parallel forms of the 11
tests in the WJ IV ACH Standard Battery—Forms A, B, and C. There is only one version of
the 9 tests in the WJ IV ACH Extended Battery. The Extended Battery tests are designed to
supplement all three forms of the Standard Battery.
The tests in the Standard Battery (Form A, B, or C) combine to form 15 cluster scores,
including a Brief Achievement score and a Broad Achievement score. When the Standard
Battery tests are used in conjunction with the Extended Battery tests, 7 additional cluster
scores may be derived. Although tests are the basic administration components of the WJ IV
ACH, clusters of tests provide the primary basis for test interpretation. Cluster interpretation
minimizes the danger of generalizing from the score for a single narrow ability to a broad,
multifaceted ability or skill. Cluster interpretation results in higher validity because more
than one component of a broad ability contributes to the score that serves as the basis for
interpretation. In some situations, however, the narrow abilities and skills that
are measured by the individual tests should be considered. This is particularly important
when significant differences exist between or among the tests in a cluster. In these cases,
more information is obtained by analyzing performance on each test, which may indicate the
need for further testing. Occasions exist when it is more meaningful to describe a narrow
ability than it is to report performance on a broad ability. To increase the validity of narrow
ability interpretation, the WJ IV provides clusters for a number of important narrow abilities.
These narrow abilities often have more relevance for informing instruction and intervention
(McGrew & Wendling, 2010).
WJ IV ACH Tests
The selective testing table, presented in Table 2-1, illustrates the scope of the WJ IV ACH
interpretive information via the combinations of tests that form various clusters. Note that
Tests 1 through 6, the core set of tests, provide a number of important interpretive options,
including Reading, Written Language, Mathematics, Academic Skills, Academic Applications,
and Brief Achievement clusters and are required for calculating the intra-achievement
variations procedure (see Chapter 5 for a description of the variation procedures).
Table 2-1. Selective Testing Table
[The rotated cluster column headings of this table do not reproduce legibly here. The columns correspond to the clusters described in this chapter, including Reading, Broad Reading, Reading Comprehension, Reading Comprehension–Extended, Reading Rate, Mathematics, Broad Mathematics, Written Language, Written Expression, Academic Skills, Academic Fluency, Academic Applications, Academic Knowledge, Phoneme-Grapheme Knowledge, Brief Achievement, and Broad Achievement.]
Standard Battery: ACH 1 Letter-Word Identification, ACH 2 Applied Problems, ACH 3 Spelling, ACH 4 Passage Comprehension, ACH 5 Calculation, ACH 6 Writing Samples, ACH 7 Word Attack, ACH 8 Oral Reading, ACH 9 Sentence Reading Fluency, ACH 10 Math Facts Fluency, ACH 11 Sentence Writing Fluency
Extended Battery: ACH 12 Reading Recall, ACH 13 Number Matrices, ACH 14 Editing, …
Test 5: Calculation
Calculation is a test of math achievement measuring the ability to perform mathematical
computations, a quantitative knowledge (Gq) ability. The initial items in Calculation require
the individual to write single numbers. The remaining items require the person to perform
addition, subtraction, multiplication, division, and combinations of these basic operations, as
well as some geometric, trigonometric, logarithmic, and calculus operations. The calculations
involve negative numbers, percentages, decimals, fractions, and whole numbers. Because the
calculations are presented in a traditional problem format in the Response Booklet, the person
is not required to make any decisions about what operations to use or what data to include.
Calculation has a median reliability of .93 in the 5 to 19 age range and .93 in the adult age
range.
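Reported reliabilities such as these can be translated into a standard error of measurement (SEM) with the standard classical-test-theory formula. The sketch below assumes the usual standard-score SD of 15; it is an illustration of that formula, not a WJ IV scoring step.

```python
import math

# Classical-test-theory conversion of a reliability coefficient into a
# standard error of measurement: SEM = SD * sqrt(1 - rxx).
# The SD of 15 assumes the conventional standard-score metric.

def sem(sd, reliability):
    return sd * math.sqrt(1 - reliability)

# For Calculation's median reliability of .93:
print(round(sem(15.0, 0.93), 2))  # 3.97
```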
Reading Clusters
Seven reading clusters are available, four that use tests from the Standard Battery and three
that require additional tests from the Extended Battery.
Reading
The Reading cluster is a measure of reading achievement (a reading-writing [Grw] ability),
including reading decoding and the ability to comprehend connected text while reading.
This cluster is a combination of Test 1: Letter-Word Identification and Test 4: Passage
Comprehension. It has a median reliability of .94 in the 5 to 19 age range and .96 in the adult
age range.
Broad Reading
The Broad Reading cluster provides a comprehensive measure of reading achievement (a
reading-writing [Grw] ability) including reading decoding, reading speed, and the ability to
comprehend connected text while reading. This cluster is a combination of Test 1: Letter-
Word Identification, Test 4: Passage Comprehension, and Test 9: Sentence Reading Fluency. It
has a median reliability of .96 in the 5 to 19 age range and .97 in the adult age range.
Reading Comprehension
The Reading Comprehension cluster is an aggregate measure of comprehension and
reasoning (reading-writing [Grw] and, to a lesser extent, long-term retrieval [Glr] abilities).
It is a combination of Test 4: Passage Comprehension and Test 12: Reading Recall from the
Extended Battery. This cluster has a median reliability of .93 in the 5 to 19 age range and .93
in the adult age range.
Reading Comprehension–Extended
The Reading Comprehension–Extended cluster provides a broad measure of reading
comprehension skills and is an aggregate measure of comprehension, vocabulary, and
reasoning (reading-writing [Grw] and, to a lesser extent, long-term retrieval [Glr] abilities).
This cluster is a combination of Test 4: Passage Comprehension and Test 12: Reading Recall
and Test 17: Reading Vocabulary from the Extended Battery. It has a median reliability of .96
in the 5 to 19 age range and .94 in the adult age range.
Reading Rate
The Reading Rate cluster provides a measure of automaticity with reading at the single word
and sentence levels (reading-writing [Grw] and cognitive processing speed [Gs] abilities).
It is a combination of Test 9: Sentence Reading Fluency and Test 15: Word Reading Fluency
from the Extended Battery. This cluster has a median reliability of .96 in the 5 to 19 age range
and .96 in the adult age range.
Math Clusters
Four math clusters are available. Three clusters are formed from tests in the Standard Battery,
and the remaining cluster requires a test from the Extended Battery.
Mathematics
The Mathematics cluster provides a measure of math achievement (quantitative knowledge
[Gq] ability) including problem solving and computational skills. This cluster includes Test 2:
Applied Problems and Test 5: Calculation. It has a median reliability of .96 in the 5 to 19 age
range and .96 in the adult age range.
Broad Mathematics
The Broad Mathematics cluster provides a comprehensive measure of math achievement,
including problem solving, number facility, automaticity, and reasoning (quantitative
knowledge [Gq] and cognitive processing speed [Gs] abilities). This cluster includes
Test 2: Applied Problems, Test 5: Calculation, and Test 10: Math Facts Fluency. It has a
median reliability of .97 in the 5 to 19 age range and .97 in the adult age range.
Written Language
The Written Language cluster provides a comprehensive measure of written language
achievement, including spelling of single-word responses and quality of expression (reading-
writing [Grw] ability). This cluster includes Test 3: Spelling and Test 6: Writing Samples. It
has a median reliability of .94 in the 5 to 19 age range and .95 in the adult age range.
Written Expression
The Written Expression cluster is an aggregate measure of meaningful written expression and
fluency (reading-writing [Grw] and cognitive processing speed [Gs] abilities). This cluster
is a combination of Test 6: Writing Samples and Test 11: Sentence Writing Fluency. It has a
median reliability of .91 in the 5 to 19 age range and .92 in the adult age range.
Cross-Domain Clusters
Seven cross-domain clusters are available. Two general academic proficiency cluster scores,
Brief Achievement and Broad Achievement, are based on tests in the Standard Battery. Various
combinations of tests in the Standard and Extended Batteries are used to form five additional
cluster scores: Academic Skills, Academic Fluency, Academic Applications, Academic
Knowledge, and Phoneme-Grapheme Knowledge. The three academic clusters (skills, fluency,
and applications) contain tests of reading, math, and written language and can be used
to determine whether the person exhibits significant strengths and/or weaknesses among
these three types of tasks across academic areas. The Academic Knowledge cluster provides
specific information about an individual’s content knowledge of science, social studies,
and humanities. The Phoneme-Grapheme Knowledge cluster provides in-depth diagnostic
information about the person’s basic understanding of sound/symbol relationships.
Broad Achievement
The Broad Achievement cluster is a combination of the nine tests (Tests 1 through 6 and
Tests 9 through 11) included in the Broad Reading, Broad Mathematics, and Broad Written
Language clusters. The Broad Achievement cluster represents a person’s overall performance
across the various achievement domains. It has a median reliability of .99 in the 5 to 19 age
range and .99 in the adult age range.
Academic Skills
The Academic Skills cluster is an aggregate measure of reading decoding, math calculation,
and spelling of single-word responses, providing an overall score of basic achievement
skills. It is a combination of Test 1: Letter-Word Identification, Test 3: Spelling, and Test 5:
Calculation. This cluster has a median reliability of .97 in the 5 to 19 age range and .97 in the
adult age range.
Academic Fluency
The Academic Fluency cluster provides an overall index of academic fluency. It is a
combination of Test 9: Sentence Reading Fluency, Test 10: Math Facts Fluency, and Test 11:
Sentence Writing Fluency. This cluster has a median reliability of .97 in the 5 to 19 age range
and .97 in the adult age range.
Academic Applications
The Academic Applications cluster is a combination of Test 2: Applied Problems, Test 4:
Passage Comprehension, and Test 6: Writing Samples. These three tests require the individual
to apply academic skills to academic problems. This cluster has a median reliability of .95 in
the 5 to 19 age range and .96 in the adult age range.
Academic Knowledge
The Academic Knowledge cluster comprises three tests from the Extended Battery:
Test 18: Science, Test 19: Social Studies, and Test 20: Humanities. These tests provide a
broad sample of the individual’s range of scientific knowledge, social studies knowledge, and
cultural knowledge (comprehension-knowledge [Gc]). The Academic Knowledge cluster has
a median reliability of .92 in the 5 to 19 age range and .96 in the adult age range.
Phoneme-Grapheme Knowledge
The Phoneme-Grapheme Knowledge cluster is a combination of Test 7: Word Attack and
Test 16: Spelling of Sounds from the Extended Battery, requiring both reading-writing (Grw)
and auditory processing (Ga) abilities. It may be used to evaluate an individual’s proficiency
with phonic (sound) generalizations and his or her knowledge of common orthographic
patterns (frequently occurring letter clusters) in both decoding (word reading) and encoding
(spelling). It has a median reliability of .93 in the 5 to 19 age range and .94 in the adult age
range.
Chapter 3
General Administration and Scoring Procedures
To become proficient in administering and scoring the Woodcock-Johnson IV Tests of
Achievement (Schrank, Mather, & McGrew, 2014a) (WJ IV ACH), examiners should carefully
study the general administration and scoring procedures in this chapter and the specific
procedures for each test in Chapter 4 and in the Test Books. Additionally, two appendices
of this manual provide reproducible checklists to help examiners build competency
administering and scoring the tests. Appendix C, the “WJ IV Tests of Achievement Examiner
Training Checklist,” is a test-by-test form that may be used as a self-study or observation tool.
Appendix D is the “WJ IV General Test Observations Checklist,” which may be used by an
experienced examiner when observing a new examiner.
Practice Administration
After thoroughly studying this Examiner’s Manual, the Test Books, the Test Record, and
the Response Booklet, both experienced and novice examiners should administer several
practice tests. When administering practice tests, try to replicate an actual testing situation,
pretending that the practice session is an actual administration. Do not discuss the test or the
answers to specific items. After completing each practice administration, record any questions
that arose during the practice session. Before administering another practice test, answer
the questions by reviewing the Examiner’s Manual or consulting an experienced examiner.
While administering practice tests, strive for these two goals: exact administration and brisk
administration.
Exact Administration
The goal of standardized testing is to see how well a person can respond when given
instructions identical to those presented to individuals in the norming sample. When learning
to administer the WJ IV ACH tests, study the contents of the Test Book, paying particular
attention to the information on the introductory page of each test, the specific instructions on
the test pages, and the boxes with special instructions.
The first page after the tab in each test provides general information and instructions
specific to that test. Review this information frequently. This page usually includes
administration information, scoring information, suggested starting points, basal and ceiling
requirements, and information about materials required to administer the test.
The directions for administering each item are located on the examiner’s side of the
pages in the Test Book. The directions include the script to be read to the examinee (printed
in bold blue type) and, if applicable, specific pointing instructions. Always use the exact
wording presented in the Test Book.
Brisk Administration
After the initial practice sessions, strive for a brisk testing pace. Inefficient testing procedures
bore the examinee, invite distraction, and increase testing time. It is not appropriate to stop
testing and visit with the examinee during the testing session. When the person has finished
responding to an item, immediately begin the next item.
In most instances, an examinee does not need a break before beginning the next test.
Each test begins with easy questions presented in a different format, thus providing a built-
in change of pace from one test to the next. Using a brisk testing pace enhances rapport and
helps an examinee maintain attention.
Continue to practice administering the tests until the two goals of exact and brisk
administration have been met.
The room should have a table (or other flat working space of adequate size) and two
chairs, one being an appropriate size for the examinee. A suitable seating arrangement
allows the examiner to view both sides of the easel Test Book, point to all parts of the
examinee’s page and the Response Booklet, regulate the audio equipment, and record
responses on the Test Record out of the examinee’s view. The examinee should be able to
view only the examinee’s test pages. When the Test Book easel is set up for administration,
it becomes a screen allowing the examiner to record responses on the Test Record out of the
examinee’s view.
The best seating arrangement is one in which the examiner and the examinee sit
diagonally across from each other at the corner of a table. This arrangement is illustrated in
Figure 3-1 for a right-handed examiner. The arrangement (seating and setup of materials)
should be reversed for a left-handed examiner.
Another possible seating arrangement is for the examiner and the examinee to sit directly
across the table from each other. With this arrangement, the table must be narrow and low
enough so that the examiner can see over the upright Test Book easel and accurately point to
the examinee’s page when necessary.
Establishing Rapport
In most instances, the examiner will have little difficulty establishing a good relationship
with the examinee. Do not begin testing unless the person seems relatively at ease. If he
or she does not feel well or will not respond appropriately, do not attempt testing. Often
examiners begin the testing session with a short period of conversation while completing
the “Identifying Information” portion of the Test Record. A brief explanation of the test is
provided in the “Introduction” section in the front of each Test Book.
To help put the individual at ease, smile frequently throughout the testing session and
call the person by name. Between tests, let the examinee know that he or she is doing a
good job, using such comments as “fine” and “good.” Encourage a response even when
items are difficult. It is fine to say, “Would you like to take a guess on that one?” but the
comments should not reveal whether answers are correct or incorrect. Do not say, “Good”
only after correct responses or pause longer after incorrect responses before proceeding to the
next item.
Test Selection
It is important to select tests that are appropriate for the individuals being evaluated.
Consider the individual’s age, developmental level, and achievement levels as part of this test
selection process. For example, it would be inappropriate to give a test that requires reading
ability to a young child with limited reading experience. Whereas some tests, like Test 1:
Letter-Word Identification or Test 4: Passage Comprehension have a number of prereading
items, other tests like Test 9: Sentence Reading Fluency or Test 15: Word Reading Fluency
do not. Do not administer these reading fluency measures to an individual who has not
developed basic reading skills because the results would not reflect reading fluency, but
rather the person’s limited reading skill. For example, on Test 9: Sentence Reading Fluency,
the individual is asked to read each sentence, decide whether it is true or false, and circle yes
or no. If this test is administered to a person who cannot read, the individual may randomly
mark yes or no without reading the sentences at all and obtain a score that would not be a
valid indicator of his or her reading skill.
Examiners are encouraged to use selective testing principles for choosing the most
appropriate set of tests for each individual. To help examiners determine whether or not a
test is appropriate for an individual, many of the WJ IV ACH tests provide sample items and
practice exercises. Examiners are directed to discontinue a test without administering the test
items if the examinee does not get a specified number of sample items correct. Other tests
provide early cut-offs if an individual’s performance is limited.
Order of Administration
In most cases, administer the first six tests in the order that they appear in the Standard
Battery. These are the core tests (Tests 1 through 6) and they have been organized to alternate
between different tasks and achievement areas (e.g., reading versus math) to facilitate optimal
attention and interest. However, testing may begin with the Extended Battery, and the tests may be administered in any order. For example, testing may begin with Test 5: Calculation.
Time Requirements
Always schedule adequate time for testing. Generally, experienced examiners will require
approximately 40 minutes to administer the core set of tests (Tests 1 through 6) in the
Standard Battery. Administration of Test 6: Writing Samples requires about 15 to 20 minutes,
whereas the other tests require about 5 to 10 minutes each. Allow a reasonable amount of
time for a person to respond and then suggest moving on to the next item. Also allow more
time for a specific item if the person requests it or if more time is allowed under the specific
test directions.
Very young individuals or those who have unique characteristics that may impact
test administration may require additional testing time. These individuals may produce
a scattering of correct responses requiring administration of a greater number of items.
Some people may respond more slowly, change their answers more frequently, or require
more prompting and querying. In addition, an examiner may inadvertently begin at an
inappropriate starting point, which extends the testing time.
[Figure: Example of a completed Test Record for Test 1: Letter-Word Identification (Form A), illustrating the basal and ceiling rules. Number Correct (0–78): 34.]
STEP 1: Testing began with Item 30. After completing the page, no basal was established because the 6 lowest-numbered consecutive items administered (Items 30–35) were not all correct. The examinee missed Item 33 (as well as Items 36 and 37).
STEP 2: Tested backward one page and administered Items 22–29. No basal was established because the examinee missed Item 26. (The 6 lowest-numbered items administered [Items 22–27] were not all correct.)
STEP 3: Tested backward one more page and administered Items 14–21. The basal is established because the examinee answered all items correctly. The 6 lowest-numbered consecutive items administered (Items 14–19) were correct and form the basal.
STEP 4: Resumed testing with Item 38 and administered the complete page (Items 38–45). No ceiling was established because the examinee answered Item 45 correctly.
STEP 5: Continued testing and administered Items 46–53. The ceiling is established because the examinee missed the 6 highest-numbered items administered (Items 48–53) and completed a page.
STEP 6: Discontinued testing and calculated the Number Correct (34).
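The basal, ceiling, and Number Correct logic in the steps above can be sketched in code. This is an illustrative model only — the function names and the dictionary layout are hypothetical, not part of any WJ IV scoring software — and it assumes the administered items form one consecutive run, as they do when testing proceeds by complete pages.

```python
# Sketch of the 6-item basal/ceiling rules from the example above.
# `scores` maps each administered item number to 1 (correct) or 0 (incorrect).

def has_basal(scores, run=6):
    """Basal: the `run` lowest-numbered administered items are all correct,
    or Item 1 has been administered."""
    items = sorted(scores)
    return items[0] == 1 or all(scores[i] == 1 for i in items[:run])

def has_ceiling(scores, run=6):
    """Ceiling: the `run` highest-numbered administered items are all incorrect."""
    items = sorted(scores)
    return all(scores[i] == 0 for i in items[-run:])

def number_correct(scores):
    """Number Correct = items answered correctly + all items below the basal."""
    items_below_basal = min(scores) - 1   # Items 1..(lowest administered - 1)
    return items_below_basal + sum(scores.values())

# Worked example matching the Test Record above: Items 14-53 administered;
# Items 26, 33, 36-44, and 46-53 were missed.
scores = {i: 1 for i in range(14, 54)}
for missed in [26, 33] + list(range(36, 45)) + list(range(46, 54)):
    scores[missed] = 0

print(has_basal(scores), has_ceiling(scores), number_correct(scores))
```

Run on this record, the sketch confirms the basal and ceiling and reproduces the Number Correct of 34 (13 items below the basal plus 21 items answered correctly).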
Timed Tests
Test 9: Sentence Reading Fluency, Test 10: Math Facts Fluency, Test 11: Sentence Writing
Fluency, and Test 15: Word Reading Fluency are timed tests. The following tests each have
a 3-minute time limit: Test 9: Sentence Reading Fluency, Test 10: Math Facts Fluency, and
Test 15: Word Reading Fluency. Test 11: Sentence Writing Fluency has a 5-minute time limit.
Although Tests 9 through 11 are in a numeric sequence, it is recommended that these three
timed tests not be administered consecutively.
The time limits are noted in both the Test Book and the Test Record. Administer these
tests using a stopwatch. If not using a stopwatch, write the exact starting and finishing times
in minutes and seconds in the space provided on the Test Record. For example, 17:23 would
indicate that the test started at 17 minutes and 23 seconds after the hour. The test then
would end exactly 3 minutes later at 20 minutes and 23 seconds (20:23) after the hour. A
watch or clock with a second hand is also useful for administering tests with the instruction
to proceed to the next item if an examinee has not responded to an item within a specified
period of time.
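The start/finish arithmetic above is easy to get wrong near the top of the hour, so it can help to convert to total seconds. The helper below is purely illustrative (its name and interface are assumptions, not part of the WJ IV materials):

```python
# Given a start time written as minutes:seconds after the hour (e.g., "17:23")
# and a time limit in minutes, compute the finishing time in the same notation.

def finish_time(start, limit_minutes):
    minutes, seconds = map(int, start.split(":"))
    total = minutes * 60 + seconds + limit_minutes * 60
    total %= 3600  # wrap around past the top of the hour
    return f"{total // 60:02d}:{total % 60:02d}"

print(finish_time("17:23", 3))  # the 3-minute example above: finishes at 20:23
```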
Audio-Recorded Tests
Use the standardized audio recording to present Test 16: Spelling of Sounds. Use a good
quality CD player and headphones or earbuds to administer the audio test. Make sure that
the audio equipment has a good speaker; is in good working order; and produces a faithful,
clear reproduction of the test items. During the standardization, all audio-recorded tests were
administered using good-quality equipment. Using a computer to administer the audio tests is
not recommended because the sound quality varies greatly and may distort the audio stimuli.
If a computer is used, it must have good quality external speakers or a good quality headset.
It is the examiner’s responsibility to ensure that the audio equipment used for testing presents
the audio stimuli accurately so that an examinee’s performance is not affected by poor sound
quality.
Although Test 16: Spelling of Sounds may be presented orally, use of the audio recording
and headphones is recommended unless the person resists wearing headphones or has
difficulty attending to an audio-recorded presentation. If a test must be presented orally,
attempt to say each item in the same manner that it is presented on the audio recording.
Because the audio test is presented on a CD, the tracks are identified for each starting
point. Consult the Test Book or the CD’s table of contents to locate the appropriate track
number and starting point for this test.
Adjust the volume on the audio equipment before the examinee puts on the headphones.
The examiner can use a monaural earphone or place one side of the headphones over one ear.
Examiner Queries
For certain responses, the Query keys in the Test Book provide prompts designed to elicit
another answer from the examinee. For example, a few items on Test 5: Calculation require
the examinee to reduce fractions to obtain credit. A query on these items is a reminder to
ask the examinee to simplify his or her answer. Use professional judgment when querying
responses that are not listed in the Query key. For example, if an individual provides a
response that seems to be partially correct, it is permissible to query with a comment such as,
“Tell me more about that.”
Scoring
Because the examinee’s pattern of correct and incorrect responses is needed to determine
basal and ceiling levels, complete the item scoring during test administration (except for
the timed tests and Test 6: Writing Samples). Some raw scores (number correct or number
of points) can be calculated between tests, while others are calculated after all testing is
completed. After the raw scores are totaled, estimated age- and grade-equivalent scores
are readily available from the “Scoring Tables” on the Test Record. Use the online scoring
program to complete all other scoring.
Item Scoring
With the exception of three tests (Test 6: Writing Samples, Test 8: Oral Reading, and Test 12:
Reading Recall), score each item administered by placing a 1 or a 0 in the appropriate space
on the Test Record: 1 = correct response, 0 = incorrect or no response. (Detailed scoring
procedures for Test 6: Writing Samples, Test 8: Oral Reading, and Test 12: Reading Recall are
included in Chapter 4.) For items not administered, leave the corresponding spaces on the
Test Record blank. After a test has been administered and completely scored, the only blank
spaces should be items below the basal and above the ceiling levels or items not included in
the assigned block of items.
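The blank-space rule above implies a simple consistency check: after a test is fully scored, the non-blank entries should form a single contiguous run (tests administered by assigned blocks of items are an exception). The sketch below is hypothetical — it is not part of the online scoring program — and represents one Test Record column as a list of 1, 0, or None for a blank.

```python
# Check that the only blanks in a scored test lie below the basal or above
# the ceiling, i.e., the administered (non-blank) items are contiguous.

def blanks_valid(record):
    administered = [i for i, s in enumerate(record) if s is not None]
    if not administered:
        return False
    first, last = administered[0], administered[-1]
    return all(record[i] is not None for i in range(first, last + 1))

print(blanks_valid([None, None, 1, 1, 0, 1, 0, 0, None]))  # contiguous run
print(blanks_valid([None, 1, None, 1, 0, None]))           # gap inside the run
```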
Accommodations
The WJ IV is ideally suited to increasing the participation of students with disabilities in
assessment and accountability systems. This section identifies several administration features
of the WJ IV that allow individuals with disabilities to participate more fully in the evaluation
process.
Setting
The individual administration format of the WJ IV ACH provides the opportunity for
standardized assessment on a one-to-one basis. Use of a separate location for testing
minimizes the distractions inherent in a classroom group-testing environment. If needed,
use noise buffers such as earplugs or headphones to mask external sounds. Also, incorporate
special lighting, special acoustics, or adaptive or special furniture if needed.
Timing
Use of basal and ceiling rules focuses the assessment on the examinee’s level of ability and
minimizes testing time. In addition, frequent breaks can be taken between tests, if needed.
With the exception of the timed tests, individuals can have extended time to complete tasks,
if required.
Presentation
All instructions are presented orally to the examinee, and the language of the instructions is
at a sufficiently simple level of linguistic complexity to minimize language comprehension
barriers. The instructions may be repeated or signed, if necessary. Special sample items on
many of the tests help clarify the person’s understanding. Use of large print, fewer items per
page, and increased space between items allows examinees to focus better on individual items
without being overwhelmed by simultaneous presentation of multiple items as would occur
during a group-administered assessment. Audio presentation of certain tests helps ensure
standardized item presentation and allows adjustment of the volume to a comfortable level
for each individual. Visual magnification devices and templates that reduce glare also may be
incorporated into the assessment without affecting validity.
Scheduling
Administration of the WJ IV ACH tests can be scheduled at a specific time of day to
accommodate individual examinee needs. The tests may be presented in any order to
maximize interest and performance. When an individual cannot sustain peak performance for
long periods of time, the test may be administered over several days.
Recommended Accommodations
As a general rule, the examiner should adhere to standard administration and scoring
procedures. However, at times, an examinee’s special attributes need to be accommodated.
“An appropriate accommodation is one that responds to specific individual characteristics but
does so in a way that does not change the construct the test is measuring or the meaning of
the scores” (AERA, APA, NCME, 2014, p. 67). In providing accommodations and interpreting
test results for individuals with disabilities, be sensitive to the limitations different
impairments may impose on a person’s abilities and behavior.
Generally, the examiner should select and administer tests that do not require
modifications. The broad classes of examinees often requiring some level of accommodation
in the assessment process are: young children; English language learners; individuals
with attentional or learning difficulties; and individuals with hearing, visual, and physical
impairments. Prior to making accommodations, the examiner should be trained in the
specific area or should consult with a professional who has such expertise. Selected portions
of the WJ IV ACH may be used for individuals with sensory impairments if their physical or
sensory limitations interfere with performance, or make performance impossible, on certain
other tests.
Young Children
Assessing young children in their preschool and early school years requires an examiner who
is trained and knowledgeable in this type of assessment. Examiners must select tests that
are appropriate for the age and functional level of the examinee. Some tests may not have
an adequate floor for young or low-functioning individuals, and other tests are designed for
use with school-age children or older individuals. For example, few individuals below age 6
would be expected to perform adequately on tests such as Test 9: Sentence Reading Fluency,
Test 13: Number Matrices, or Test 14: Editing. On the other hand, examinees as young as age
2 generally can perform beginning tasks on Test 18: Science, Test 19: Social Studies, and Test
20: Humanities.
Preparation for Testing
Some young children may be uncomfortable with unfamiliar adults and may have difficulty
separating from their caregiver or teacher. It may be necessary to spend additional time with
such a child with a familiar adult nearby prior to accompanying the child to the testing
situation. Let the young child know that the caregiver is nearby and will be around when
testing is completed. In extreme circumstances, it may be necessary to have the familiar adult
stay with the child during testing. However, under these circumstances, the caregiver must
understand the standardized conditions under which the testing must occur. Every effort should be made to preserve those standardized conditions.
■■ American Sign Language (ASL): A complete language with its own semantics, syntax, and pragmatics, using the hands, body, and facial expressions.
■■ Manually Coded English (MCE): The use of signs, mainly in English word order, and
sometimes including English parts of speech that do not exist in ASL. MCE includes
Signed Exact English and Pidgin Signed English.
■■ Sign-Supported Speech: The use of spoken English with sign used simultaneously all or
most of the time. People using this form of communication are not able to adequately
comprehend spoken English without sign accompaniment.
■■ Aural/Oral English: The use of spoken English without sign, usually aided by some form of amplification.
1 Test 1: Letter-Word Identification—This is a test of word identification for hearing examinees, but it is a reading vocabulary test for sign
communicators because the sign for a word represents its meaning rather than its sound. Additionally, for some of the stimulus words, one sign
can refer to multiple items (e.g., cup, glass, can), some are routinely fingerspelled, and some have no meaning out of context. Examinees using
sign-supported speech must be able to read the words orally.
2 Test 1: Letter-Word Identification, Test 7: Word Attack—An examinee’s pronunciation will indicate how well he or she is able to apply phonics
skills and knowledge of English orthography; however, the examinee’s internal pronunciation may be more accurate than his or her voiced
pronunciation. Additionally, pronunciation errors may be secondary to the hearing impairment (articulation) rather than indications of limited word
attack skill.
3 Test 2: Applied Problems—In some of the earlier items, the question incorporates a sign that gives the answer (e.g., “two fingers” is signed with
two fingers). In some later items, signing the problem depicts the method of solution (e.g., which operation is needed). Fewer of these problems
occur after Item 25. At this point, the items are more complex, the examiner cannot assume that the examinee will be able to read them, and the
interpreter’s accuracy is critical. Consequently, prior to the test session, it is essential that the interpreter has ample time to read all of the items
the examinee is likely to take so that he or she can develop a well-reasoned approach to signing them. When deciding whether or not to use the
scores, take into account the level of the items administered, the extent to which the signing provided clues to the answer, and, for later items,
whether or not the examinee appeared to understand the signed interpretation.
4 Test 3: Spelling—The examinee who uses sign-supported speech or aural/oral English may misunderstand a stimulus word due to sound
distortion. If this happens, provide additional sentences to clarify the word. Test 3: Spelling should not be administered in sign. Many of the
stimulus words do not have a specific sign or are fingerspelled, and a few do not exist in ASL (e.g., is, am). Additionally, some of the stimulus
words are represented by signs that have multiple meanings (e.g., the same sign can mean already, finished, complete, and done).
5 Test 4: Passage Comprehension, Test 9: Sentence Reading Fluency—The examinee may miss some specific items that are biased toward hearing
(e.g., completing a rhyme) or English syntax (e.g., “Bird flying” is a complete ASL sentence; is does not exist in ASL).
6 Test 4: Passage Comprehension, Test 9: Sentence Reading Fluency—If an examinee’s comprehension is weak or his or her reading speed is slow,
consider that English is a second (foreign) language for most people who are deaf and who use ASL as their primary mode of communication. The
norms, however, represent the performance of people who use English as their primary language and who, for the most part, have a wider reading
vocabulary and an innate sense of English syntax.
7 Test 4: Passage Comprehension, Test 9: Sentence Reading Fluency, Test 12: Reading Recall—People who are hard of hearing often have a
more limited oral vocabulary than their hearing peers because they do not have the same access to spoken language. Rather than demonstrating
difficulty with reading speed or recall, the examinee may not know the meaning of some of the words.
8 Test 6: Writing Samples—Explain the directions carefully and possibly change the wording if the examinee does not appear to understand.
9 Test 6: Writing Samples, Test 11: Sentence Writing Fluency—Spelling errors made by individuals whose primary communication mode is manual
often have little phonetic relationship to the intended word. Allow time to review the responses and, if the response word is not understandable due
to a nonphonetic misspelling, ask the examinee to sign it. Even if no credit is awarded, knowing what word the examinee intended will help with
interpretation.
10 Test 8: Oral Reading—Because a person must know the meaning of a word to sign it, for sign communicators, this test assesses reading
vocabulary and comprehension instead of oral reading. Consequently, responses cannot be compared with the performance of hearing/speaking
peers in the norm sample. For examinees who use speech, consider that errors in pronunciation may be secondary to the hearing impairment
(articulation) rather than indications of weak decoding skills.
11 Test 12: Reading Recall—For examinees who use ASL and MCE, this test might indicate their comprehension and recall of written English;
however, they will have to fingerspell names and other words that do not have signs. The interpreter must be alerted to the importance of the
bolded words so that he or she will voice those particular words if the examinee’s signed response appropriately represents them.
12 Test 13: Number Matrices—Because of the complexity, signed instructions may have to deviate significantly from the standardized instructions to convey the task.
14 Test 17: Reading Vocabulary—Most signs do not have synonyms, which rules out subtest 17A: Synonyms for examinees who use ASL or MCE.
Because scoring of the test requires both subtests (synonyms and antonyms), a score cannot be obtained for Reading Vocabulary for these
examinees. An examinee who uses sign-supported speech will have to respond in oral English.
15 Test 18: Science, Test 19: Social Studies, Test 20: Humanities—When signed, many of the items in these tests are so modified as to disallow use of the norms. The modifications include: (a) items that require fingerspelling in either the question or the response and thus introduce an unintended reading/spelling component (e.g., V-E-T is the sign for veterinarian); (b) signs in the question that give the answer; (c) names of pictured objects that are signed by gesturing their use (e.g., guitar) or image (e.g., Red Cross); and (d) signs that represent two words, one of which is the correct response and the other an incorrect response (e.g., ocean, river). To avoid these problems for examinees who use sign-supported speech, some of the items will have to be administered without sign. For these examinees and those who use aural/oral English, consider the impact of the examinee's hearing loss on incidental learning.
[Table (partial): WJ IV ACH tests with suggested accommodations for examinees with visual impairments; symbols mark the applicable accommodation type and superscript numbers refer to the notes below. Entries include Test 3: Spelling ◆³, Test 4: Passage Comprehension □⁴, Test 5: Calculation ◆, Test 6: Writing Samples □², Test 7: Word Attack ◆, Test 8: Oral Reading □⁴, Test 14: Editing □⁴, Test 18: Science □⁵, and Test 20: Humanities □⁵.]
1 Test 1: Letter-Word Identification—Extend or dispense with the 5-second response guideline.
2 Test 2: Applied Problems, Test 6: Writing Samples—Point to the picture prompt(s) and text on the examinee’s page, regardless of the test
instructions.
3 Test 3: Spelling, Test 16: Spelling of Sounds—Provide whatever type of writing utensil and paper (e.g., black lined) the student normally uses in
the classroom.
4 Test 4: Passage Comprehension, Test 8: Oral Reading, Test 14: Editing—If the examinee has a visual impairment that interferes with his or
her ability to scan smoothly across a line of print, errors and repetitions may be due to the visual impairment rather than to a deficiency in the
examinee’s academic skill.
5 Test 1: Letter-Word Identification, Test 12: Reading Recall, Test 17: Reading Vocabulary, Test 18: Science, Test 19: Social Studies, Test 20:
Humanities—Poor performance may be due to limited vocabulary and concepts secondary to the examinee’s limited visually based incidental
learning and experiences.
6 Test 9: Sentence Reading Fluency, Test 10: Math Facts Fluency, Test 11: Sentence Writing Fluency, Test 15: Word Reading Fluency—If the
examinee’s responses are correct but the score is low compared to similar tests without time limits, consider that the visual impairment may be
interfering with rapid symbol and/or picture recognition. Thus, the results may indicate a need for extra time for visual work but may not indicate a
weakness in the underlying language or academic skills.
7 Test 13: Number Matrices—If the examinee is trying to mask parts of the matrix with a hand, provide a blank, unlined index card.
Interpretive Cautions
Many test modifications, such as altering administration procedures by providing additional
cues, are appropriate in specific circumstances. Modifying test procedures requires
understanding the examinee’s condition or English-speaking limitations, as well as the nature
and purpose of each test. Keep in mind that, in many instances, the purpose of an evaluation
is to determine an individual’s unique pattern of strengths and weaknesses and then to use
this assessment data to suggest appropriate classroom accommodations and to recommend
possible teaching strategies and interventions. Although a modification may improve
test performance, the resulting score may not be an accurate reflection of an examinee’s
capabilities. Note any deviation from the standardized administration on the Test Record and
always include a statement of the modified testing conditions in the written report.
Administering and Scoring the WJ IV ACH Tests
This chapter contains detailed administration procedures for each of the tests in the WJ IV
Tests of Achievement (WJ IV ACH) (Schrank, Mather, & McGrew, 2014a). Comparing the
information in this chapter with the actual instructions in the Test Book will help examiners
learn both administration and scoring procedures. In addition, the test-by-test “WJ IV Tests of
Achievement Examiner Training Checklist” in Appendix C of this manual can be a helpful tool
for examiners learning to administer the WJ IV ACH. It is recommended that examiners first
learn and practice administering the tests of the Standard Battery and then the tests of the
Extended Battery. The one form of the Extended Battery is designed for use with any of the
three Standard Battery forms (A, B, or C).
Test 1: Letter-Word Identification
Starting Point
Select a starting point based on an estimate of the examinee’s present level of reading
achievement. Consult the Suggested Starting Points table in the Test Book, on the page
after the Letter-Word Identification tab, to determine an appropriate starting point for the
examinee.
Basal
Test by complete pages until the 6 lowest-numbered items administered are correct, or until
the page with Item 1 has been administered.
Scoring
Score each correct response 1 and each incorrect response 0. Score words that are not read
fluently (smoothly) on the last attempt 0. Do not penalize an examinee for mispronunciations
resulting from articulation errors, dialect variations, or regional speech patterns. Record the
total number of all items answered correctly and all items below the basal in the Number
Correct box after the last Letter-Word Identification item on the Test Record.
Administration Procedures
Know the exact pronunciation of each item before administering the test. The correct
pronunciation is in parentheses following more difficult items. For additional help with
pronunciation, refer to a standard dictionary. Do not tell or help the examinee with any
letters or words during this test.
If the examinee’s response to a specific item is unclear, do not ask him or her to repeat the
specific item. Instead, allow the examinee to complete the entire page and then ask him or
her to repeat all of the items on that page. Score only the item in question; do not rescore the
other items.
If the examinee pronounces words letter by letter or syllable by syllable instead of reading
them fluently, tell the examinee, “First read the word silently and then say the whole word
smoothly.” Give this instruction only once during administration of this test. If the examinee
gives more than one response, score the last response. Examiners may wish to record
incorrect responses for later error analysis. In addition, examiners may wish to complete
the “Qualitative Observation” checklist on the Test Record to document how the person
performed the task.
Test 2: Applied Problems
Starting Point
Select a starting point based on an estimate of the examinee’s present level of math
achievement. Consult the Suggested Starting Points table in the Test Book, on the page after
the Applied Problems tab, to determine an appropriate starting point for the individual.
Basal
Test by complete pages until the 5 lowest-numbered items administered are correct, or until
the page with Item 1 has been administered.
Ceiling
Test by complete pages until the 5 highest-numbered items administered are incorrect, or
until the page with Item 56 has been administered.
Administration Procedures
If the examinee requests or appears to need it, provide the worksheet in the Response Booklet and a pencil with an eraser before the test directions prompt you to do so. In all cases, provide the Response Booklet and a pencil as directed at Item 25. Any question may be repeated during the test
whenever the examinee requests. Because the focal construct of this test is not the person’s
reading ability, read all items to the examinee. Completing the “Qualitative Observation”
checklist on the Test Record can help characterize the examinee’s performance on this task.
Test 3: Spelling
When prompted, give the examinee the Response Booklet and a pencil with an eraser.
Starting Point
Select a starting point based on an estimate of the examinee’s present level of spelling skill.
Consult the Suggested Starting Points table in the Test Book, on the page after the Spelling
tab, to determine an appropriate starting point for the person.
Basal
Test until the 6 lowest-numbered items administered are correct, or until Item 1 has been
administered.
Ceiling
Test until the 6 highest-numbered items administered are incorrect, or until Item 60 has been
administered.
Scoring
Score each correct response 1 and each incorrect response 0. Do not penalize for poor
handwriting or reversed letters as long as the letter does not form a different letter. For
example, a reversed lowercase c would not be penalized, but a reversed lowercase b would
be penalized because it becomes the letter d. Accept upper- or lowercase responses as correct
unless a case is specified. Record the total number of all items answered correctly and all
items below the basal in the Number Correct box after the last Spelling item on the Test
Record.
Administration Procedures
Know the exact pronunciation of each test item before administering the test. The correct
pronunciation is in parentheses following more difficult items. For additional help with
pronunciation, refer to a standard dictionary. Request printed responses; however, accept
cursive responses. Completing the “Qualitative Observation” checklist on the Test Record can
help describe the examinee’s automaticity on this task.
Test 4: Passage Comprehension
Starting Point
Begin with the Introduction for examinees functioning at the preschool to kindergarten level.
Begin with Item 5 for all examinees functioning at the grade 1 level. For all other examinees,
administer Sample Item B and then select a starting point based on an estimate of the
examinee’s present level of reading achievement. Consult the Suggested Starting Points table
following Sample Item B in the Test Book to determine an appropriate starting point for the
individual.
Basal
Test by complete pages until the 6 lowest-numbered items administered are correct, or until
the page with Item 1 has been administered.
Ceiling
Test by complete pages until the 6 highest-numbered items administered are incorrect, or
until the page with Item 52 has been administered.
Scoring
Score each correct response 1 and each incorrect response 0. Unless noted, accept only one-
word responses as correct. If an examinee gives a two-word or longer response, ask for a one-
word answer. Score a response correct if it differs from the correct response(s) listed only in
verb tense or number (singular/plural), unless otherwise indicated by the scoring key. Score
a response incorrect if the person substitutes a different part of speech, such as a noun for a
verb, unless otherwise indicated by the scoring key. Do not penalize for mispronunciations
resulting from articulation errors, dialect variations, or regional speech patterns.
Record the total number of all items answered correctly and all items below the basal in
the Number Correct box after the last Passage Comprehension item on the Test Record. Do
not include points for the introduction or sample items.
Administration Procedures
Examinees should read the passages silently; however, some individuals, especially younger
children, may read aloud. If this happens, ask the person to read silently. If the individual
continues to read aloud, do not insist on silent reading. Do not tell the examinee any words
on this test.
The examinee needs to identify the specific word that goes in the blank. If he or she
reads the sentence aloud with a correct answer, say, “Tell me one word that goes in the blank
space.” If the examinee cannot provide the word, score the item incorrect.
For Items 12 and higher, if the examinee does not respond to an item in about 30 seconds,
encourage a response. If the person still does not respond, score the item 0, point to the
next item, and say, “Try this one.” The 30 seconds is a guideline and not a time limit. If an
examinee requires more time to complete an item, more time may be given. For example, if a
response is encouraged after 30 seconds and the examinee indicates he or she is still reading
or needs more time, it is permissible to give more time.
Mark the one description on the “Qualitative Observation” checklist on the Test Record
that best describes the person’s performance on this task.
Starting Point
Select a starting point based on an estimate of the examinee’s present level of computational
skill. Consult the Suggested Starting Points table in the Test Book, on the page after the
Calculation tab, to determine an appropriate starting point for the person.
Basal
Test until the 6 lowest-numbered items administered are correct, or until Item 1 has been
administered.
Ceiling
Test until the 6 highest-numbered items administered are incorrect, or until Item 57 has been
administered.
Scoring
Score every item on this test before moving to another test to verify the basal and ceiling and
to complete any queries. Score each correct response 1 and each incorrect response 0. If the
examinee skips an item before the last completed item, score the item 0. Score poorly formed
or reversed numbers correct on this test. Score transposed numbers (e.g., 12 for 21) incorrect.
Record the total number of all items answered correctly and all items below the basal in the
Number Correct box after the last Calculation item on the Test Record. Do not include points
for sample items.
Administration Procedures
If testing begins with Sample Item A and the examinee responds incorrectly to one or both
of the sample items, discontinue testing and record a score of 0 for this test. Make sure to
complete any queries listed in the Test Book, such as the items involving reducing fractions.
Do not point to the signs or remind the examinee to pay attention to the signs during this
test. Use the “Qualitative Observation” checklist on the Test Record to help describe the
person’s rate and automaticity on this task.
Starting Point
Select a starting point based on an estimate of the examinee’s present level of writing ability.
Administer the appropriate block of items as indicated in the Suggested Starting Points table
in the Test Book, on the page after the Writing Samples tab.
Scoring
Score Writing Samples after testing is completed. Unlike other WJ IV ACH tests, Writing
Samples uses a modified holistic procedure that requires the use of examiner judgment when
scoring responses. Because scoring Writing Samples is more involved and subjective than
scoring other WJ IV ACH tests, special rating and scoring procedures are provided. If the
examinee’s score on the block of items administered falls within one of the shaded areas on
Starting Point
Select a starting point based on an estimate of the examinee’s present level of reading skill.
The table in the Test Book, on the page after the Word Attack tab, presents suggested starting
points.
Basal
Test by complete pages until the 6 lowest-numbered items administered are correct, or until
the page with Item 1 has been administered.
Ceiling
Test by complete pages until the 6 highest-numbered items administered are incorrect, or
until the page with Item 32 has been administered.
Scoring
Score each correct response 1 and each incorrect response 0. Score 0 any word that is not
read fluently (smoothly) on the examinee’s last attempt. Do not penalize an examinee for mispronunciations
resulting from articulation errors, dialect variations, or regional speech patterns. Record the
total number of all items answered correctly and all items below the basal in the Number
Correct box after the last Word Attack item on the Test Record. Do not include points for
sample items.
Administration Procedures
It is essential to know the exact pronunciation of each test item before administering the test.
The correct pronunciation is in parentheses following more difficult items. For additional
help with pronunciation, refer to a standard dictionary. Say the phoneme (the most common
sound of the letter), not the letter name, when letters are printed within slashes, such as /p/.
If the examinee has any special speech characteristics resulting from articulation errors or
dialect variations, become familiar with the examinee’s speech pattern before administering
this test.
If the examinee’s response to a specific item is unclear, do not ask him or her to repeat the
specific item. Instead, allow the person to complete the entire page and then ask him or her
to repeat all of the items on that page. Score only the item in question; do not rescore the
other items.
Starting Point
Select a starting point based on an estimate of the examinee’s present level of reading skill.
Consult the Suggested Starting Points table in the Test Book, on the page after the Oral
Reading tab, to determine an appropriate starting point for the individual.
Continuation Instructions
This test uses continuation instructions instead of basal and ceiling rules. Follow the
continuation instructions to determine which additional sentences should be administered
and when to discontinue testing. The continuation instructions are located at the bottom of
the examiner pages in the Test Book and on the Test Record.
Scoring
When the examinee reads a sentence with no errors, score the item 2. If the examinee makes
one error on the sentence, score the item 1. When the examinee makes two or more errors,
score the item 0. Types of reading errors include mispronunciations, omissions, insertions,
substitutions, hesitations of more than 3 seconds, repetitions, transpositions, and ignored
punctuation. If the examinee self-corrects within 3 seconds, do not count the word as an
error. Do not penalize the examinee for mispronunciations resulting from articulation errors,
dialect variations, or regional speech patterns. Record the number of points earned in the
Number of Points box after the last Oral Reading item on the Test Record.
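The sentence-level scoring rule above (2, 1, or 0 points by error count) can be expressed compactly. This is an illustrative sketch only; the function name is hypothetical and the rule is applied by the examiner, not by software.

```python
def oral_reading_points(error_count):
    """Convert the number of reading errors in one sentence to item points:
    0 errors -> 2 points, 1 error -> 1 point, 2 or more errors -> 0 points."""
    if error_count == 0:
        return 2
    if error_count == 1:
        return 1
    return 0

# Sentences with 0, 1, 2, and 3 errors, respectively:
print([oral_reading_points(n) for n in (0, 1, 2, 3)])  # [2, 1, 0, 0]
```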
Administration Procedures
It is essential to know the exact pronunciation of each test item. The correct pronunciation is
in parentheses following more difficult words. For additional help with pronunciation, refer
to a standard dictionary.
Become familiar with the types of reading mistakes that count as errors on this test. Figure
4-1 lists the types of reading errors that are shown in the Test Book. Sentences are reproduced
on the Test Record to facilitate scoring. During the test, follow along on the Test Record as
the examinee reads each sentence and mark each error with a slash (/) at the point in the
sentence where the error occurs. In most cases, the slash will be placed on the printed word
that was the error (i.e., mispronunciation, omission, substitution, transposition, hesitation,
or repetition). For an inserted word, place the slash between the two printed words where
the insertion occurred. If the examinee ignores punctuation (e.g., does not pause at a comma
or raise his or her voice for a question mark), place the slash on the punctuation mark that
was ignored. The examiner can also record and total each type of error in the “Qualitative
Observation Tally” on the Test Record. Figure 4-2 illustrates a portion of a completed Test
Record and tally.
Figure 4-2.
Example of completed Test Record and “Qualitative Observation Tally” for Test 8: Oral Reading.
Note: Basal and ceiling rules do not apply to this test. Scoring is based on the administration of specific groups of items.
Time Limit
Discontinue testing after exactly 3 minutes and collect the examinee’s pencil and Response
Booklet. Record the exact finishing time in minutes and seconds on the Test Record. It is
important to record the exact finishing time because examinees who do well and finish in less
than 3 minutes will receive a higher score than individuals who continue to work for the full
3 minutes.
Scoring
Score each correct response 1 and each incorrect response 0. Ignore skipped items. Use the
scoring guide overlay to score this test. Record both the total number of items answered
correctly and the total number of items answered incorrectly within the 3-minute time limit
in the Sentence Reading Fluency Number Correct and Number Incorrect boxes on the Test
Record. To obtain the estimated age and grade equivalents on the Test Record, subtract
the Number Incorrect from the Number Correct. Enter both the Number Correct and the
Number Incorrect into the online scoring program. Do not include points for sample items or
practice exercises.
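The arithmetic for the Test Record estimate described above is a simple difference. The tallies below are hypothetical values used only to illustrate the calculation; the real numbers come from the examinee’s Response Booklet.

```python
# Hypothetical tallies from one 3-minute administration (illustration only).
number_correct = 48
number_incorrect = 5

# The estimated age and grade equivalents on the Test Record are located
# using this difference; both raw tallies (not the difference) are entered
# into the online scoring program.
estimate_lookup_score = number_correct - number_incorrect
print(estimate_lookup_score)  # 43
```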
Administration Procedures
If the examinee has 2 or fewer correct on Practice Exercises C through F, discontinue testing
and record a score of 0 in the Sentence Reading Fluency Number Correct box on the Test
Record.
The sentences are intended to be read silently. Remind the examinee to read silently if he
or she begins reading aloud. If the person appears to be answering items without reading the
sentences, remind him or her to read each sentence. If the individual stops at the bottom of
a page, remind him or her to continue to the top of the next column or to the next page. If
the examinee starts to erase a response, provide a reminder to cross out the answer he or she
does not want.
This test may be administered simultaneously to a small group of two or three individuals
if, in the examiner’s judgment, this procedure will not affect any person’s performance.
However, do not administer this test to individuals who cannot read.
Starting Point
All examinees begin with Item 1.
Time Limit
Discontinue testing after exactly 3 minutes and collect the examinee’s pencil and Response
Booklet. Record the exact finishing time in minutes and seconds on the Test Record. It is
important to record the exact finishing time because examinees who do well and finish in less
than 3 minutes will receive a higher score than individuals who continue to work for the full
3 minutes.
Scoring
Score each correct response 1 and each incorrect response 0. Use the scoring guide overlay
to score this test. Do not penalize for poorly formed or reversed numbers. However, score
transposed numbers (e.g., 12 for 21) incorrect. Record the total number of calculations
answered correctly within the 3-minute time limit in the Math Facts Fluency Number Correct
box on the Test Record.
Administration Procedures
Do not point to the signs or remind the examinee to pay attention to the signs during testing.
Watch to make sure the examinee is going from left to right, row by row, down the page.
Some examinees may choose to work left to right on the first row, right to left on the second
row, and so on, which is acceptable. However, if the examinee starts skipping around, remind
him or her to proceed across the page, one row at a time. If the examinee stops at the bottom
of the page, remind him or her to continue to the top of the next page. If the examinee starts
to erase a response, remind the examinee to cross out the answer he or she does not want.
This test may be administered simultaneously to a small group of two or three individuals
if, in the examiner’s judgment, this procedure will not affect any person’s performance.
Starting Point
All examinees complete the sample items and then begin with Item 1.
Time Limit
Discontinue testing after exactly 5 minutes and collect the examinee’s pencil and Response
Booklet. Record the exact finishing time in minutes and seconds on the Test Record. It is
important to record the exact finishing time because examinees who do well and finish in less
than 5 minutes will receive a higher score than individuals who continue to work for the full
5 minutes.
If an examinee has 3 or fewer correct responses within the first 2 minutes, discontinue
testing. Record a time of 2 minutes and the Number Correct (0 to 3) on the Test Record.
Scoring
Score each correct response 1 and each incorrect response 0. Score any skipped items
incorrect. Do not penalize an examinee for errors in punctuation, capitalization, or spelling or
for poor handwriting unless the response is illegible. Score illegible items incorrect.
Sometimes it may not be immediately apparent whether to score an item correct or
incorrect. A few general guidelines will assist in scoring the Sentence Writing Fluency test.
To receive credit for an item, the examinee must use all three stimulus words in a complete
sentence. As noted in the Test Book instructions, the examinee may not change the stimulus
word in any way. If, for example, the examinee alters the tense of a verb or changes a noun
from singular to plural, score the item incorrect. A minor change in a word may make it
Administration Procedures
If the examinee receives a 0 on Sample Items B through D after the error correction
procedure, discontinue testing and record a score of 0 in the Sentence Writing Fluency
Number Correct box on the Test Record. If the examinee stops at the bottom of a page,
remind him or her to continue to the top of the next page.
In this test, the examiner may read any of the stimulus words to the examinee if the
examinee requests. This test may be administered simultaneously to a small group of two or
three individuals if, in the examiner’s judgment, this procedure will not affect any person’s
performance.
Starting Point
Select a starting point based on an estimate of the examinee’s present level of reading ability.
Consult the Suggested Starting Points table in the Test Book, on the page after the Reading
Recall tab, to determine an appropriate starting point for the examinee.
Scoring
On the Test Record, the elements to be scored are separated by slash marks (/). Place a check
mark above each element that the examinee recalls correctly during the retelling. Score each
correctly recalled element 1 and each incorrectly recalled element 0. Score elements not
recalled at all (correctly or incorrectly) 0. Scoring is based on a key word (shown in bold
type) in each element. The examinee must recall the specific element, a synonym, or a word
that preserves the meaning to receive credit. For example, if the element to be recalled is
“dad” and, when retelling the story, the examinee says “father,” score the element correct.
However, if the element is “three months” and the examinee says, “four months,” score the
response incorrect. The examinee may recall the elements in any order.
Record the number of elements the examinee recalls correctly for each set of two stories
and enter the total in the Number of Points box for each set on the Test Record. Enter
these numbers in the online scoring program and enter an X if a set of stories was not
administered. Use the Number of Points for each set of stories administered to obtain an
estimated age and grade equivalent from the “Scoring Table” on the Test Record. If more
than two sets of stories are administered, use the column corresponding to the last two sets
administered to obtain the estimated age and grade equivalents.
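The Number of Points for a set is simply the count of correctly recalled elements across its two stories. The element-level scores below are hypothetical and serve only to illustrate the tally; in practice the examiner checks elements on the Test Record during the retelling.

```python
# Hypothetical element scores for one set of two stories:
# 1 = element recalled correctly, 0 = not recalled or recalled incorrectly.
story_1_elements = [1, 1, 0, 1, 1, 0, 1]
story_2_elements = [1, 0, 1, 1, 1]

# Number of Points for the set = total correctly recalled elements.
set_points = sum(story_1_elements) + sum(story_2_elements)
print(set_points)  # 9
```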
Administration Procedures
Direct the examinee to read the story once silently. If necessary, remind the examinee of
this rule. Turn the page after the examinee has finished reading the story once. Prompt the
examinee as directed to retell the story. Do not tell the examinee any words on this test. It is
important to be familiar with the stories and required elements before administering this test.
This will facilitate scoring elements, particularly if the examinee retells them out of sequence.
Starting Point
Select the appropriate sample item based on an estimate of the person’s present achievement
level. Begin with Sample Item A for examinees functioning at the Kindergarten to grade 8
level. For all other examinees, administer Sample Item B and then select a starting point
based on an estimate of the examinee’s present level of ability. Consult the Suggested Starting
Points table following Sample Item B in the Test Book to determine an appropriate starting
point for the individual.
Basal
Test by complete pages until the 6 lowest-numbered items administered are correct, or until
the page with Item 1 has been administered.
Scoring
Score each correct response 1 and each incorrect response 0. To be correct, an answer must
solve the problem both horizontally and vertically. Record the total number of all items
answered correctly and all items below the basal in the Number Correct box after the last
Number Matrices item on the Test Record. Do not include points for sample items.
Administration Procedures
Follow all verbal and pointing directions carefully when administering the sample items,
including the error- or no-response corrections. For each item, follow the time guideline. If
the examinee is actively engaged in trying to solve the problem, the examiner may allow
more time. However, if the examinee does not appear to be trying to solve the problem,
encourage a response. If the examinee does not give a response, score the item 0 and ask him
or her to move on to the next item. If the examinee provides a response that is not a whole
number, ask him or her to solve the problem using whole numbers only.
Very young or low-functioning examinees may be confused by more than one matrix per
page. In these cases, it is permissible to use a piece of paper to present one matrix at a time.
Starting Point
Administer Sample Items A through D to all examinees and then select a starting point
based on an estimate of the examinee’s present level of writing ability. Consult the Suggested
Starting Points table following Sample Item D in the Test Book to determine an appropriate
starting point for the individual.
Basal
Test by complete pages until the 6 lowest-numbered items administered are correct, or until
the page with Item 1 has been administered.
Ceiling
Test by complete pages until the 6 highest-numbered items administered are incorrect, or
until the page with Item 36 has been administered.
Scoring
Score each correct response 1 and each incorrect response 0. For a response to be correct,
the examinee must clearly indicate where the error is located and how the error should
be corrected. Record the total number of all items answered correctly and all items below
the basal in the Number Correct box after the last Editing item on the Test Record. Do not
include points for sample items.
Starting Point
All examinees complete the sample items and practice exercise and then begin with Item 1.
Time Limit
Discontinue testing after exactly 3 minutes and collect the examinee’s pencil and Response
Booklet. Record the exact finishing time in minutes and seconds on the Test Record. It is
important to record the exact finishing time because examinees who do well and finish in less
than 3 minutes will receive a higher score than individuals who continue to work for the full
3 minutes.
Scoring
Score each correct response 1 and each incorrect response 0. Use the scoring guide overlay to
score this test. Record the total number of items answered correctly within the 3-minute time
limit in the Word Reading Fluency Number Correct box on the Test Record. Do not include
points for sample items or the practice exercise.
Administration Procedures
Follow all directions for error correction or no response during the administration of the
sample items and practice test to ensure the examinee understands the task. If the examinee
has 1 or 0 correct on the practice exercise, discontinue testing and record a score of 0 without
administering any test items. In addition, do not administer this test to examinees who
cannot read.
If the examinee stops at the bottom of a page, remind him or her to continue to the top
of the next column or to the next page. Do not tell the examinee any words during this test.
If the examinee has trouble reading the words or finding the two words that go together, tell
him or her to skip that item and move on to the next one.
Basal
Test until the 6 lowest-numbered items administered are correct, or until Item 1 has been
administered.
Ceiling
Test until the 6 highest-numbered items administered are incorrect, or until Item 30 has been
administered.
Scoring
Score each correct written response 1 and each incorrect written response 0. The responses
listed in the Test Book are the only acceptable correct answers. Although a response may
seem like a reasonable spelling, the intent of this test is to measure both phonological coding
skills and sensitivity to the most commonly occurring orthographic patterns (visual
sequences of letters) in the English language.
The directions tell the examinee to spell the nonsense words as they would most likely
be spelled if they were real English words. Consequently, only the most frequently occurring
English spelling patterns are scored as correct. For example, the nonsense word cridge needs
to be spelled as cridge rather than kridge. Although kridge may be considered a correct
sound spelling, the /kr/ sound is most commonly spelled using the letters cr in the English
language. Similarly, the dge pattern is most commonly used with a short vowel sound, so ge
would not be considered correct after a short vowel sound. For the same reason, klow would
not be considered a correct spelling of the nonsense word clow, because the /kl/ sound is
almost always spelled using the letters cl. However, clough would be scored as correct because
the ough pattern also represents a common English spelling, such as in the word plough.
For the nonsense word ket, cet would not be considered correct because typically the letter
c before the letter e makes a soft /s/ sound, rather than a hard /k/ sound. Generally, correct
sound spellings that do not represent the most common and frequent orthographic patterns
are scored as incorrect. For example, the nonsense word hoak could be spelled as hoak or
hoke, but a correctly sequenced sound spelling, such as hoack or hoake would be scored as
incorrect. Analysis of errors can help determine if the examinee is able to sequence sounds
correctly but has difficulty assimilating or recalling common orthographic patterns.
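The scoring principle above (accept only spellings that follow the most common English orthographic patterns) can be sketched as a lookup against the acceptable forms for each item. The dictionary below is hypothetical and includes only the example items discussed in the text; it is not the published scoring key.

```python
# Illustrative only: acceptable spellings follow the most common English
# orthographic patterns; phonologically plausible but uncommon spellings
# (e.g., kridge, hoack) are scored 0. These entries mirror the examples
# in the text and are NOT the actual scoring key.
acceptable = {
    "cridge": {"cridge"},
    "clow": {"clow", "clough"},
    "hoak": {"hoak", "hoke"},
}

def score_spelling(item, response):
    """Return 1 if the written response is an acceptable spelling, else 0."""
    return 1 if response in acceptable[item] else 0

print(score_spelling("cridge", "kridge"))  # 0: /kr/ is most commonly cr
print(score_spelling("clow", "clough"))    # 1: ough is a common pattern
```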
Do not penalize an examinee for poor handwriting or reversed letters as long as the letter
does not form a different letter. For example, a reversed lowercase c would not be penalized,
but a reversed lowercase b would be penalized because it becomes the letter d.
Record the total number of all items answered correctly and all items below the basal in
the Number Correct box after the last Spelling of Sounds item on the Test Record. Do not
include points for sample items.
Administration Procedures
Before testing, locate the track for Item 6 on the audio recording and adjust the volume to a
comfortably loud level on the examinee’s headphones or the speaker. Present Sample Items A
through D and Items 1 through 5 orally. When a letter is printed within slashes, such as /p/,
Starting Point
Administer sample items to all examinees and then select a starting point based on an
estimate of the examinee’s present level of reading ability. Consult the Suggested Starting
Points table following Sample Item B in the Test Book to determine appropriate starting
points for each subtest.
Basal
Test by complete pages until the 5 lowest-numbered items administered are correct, or until
Item 1 has been administered for each subtest.
Ceiling
Test by complete pages until the 5 highest-numbered items administered are incorrect, or
until the last item has been administered for each subtest.
Scoring
Score each correct response 1 and each incorrect response 0. Unless noted, accept only one-
word responses as correct. If an examinee gives a two-word or longer response, ask for a
one-word answer. Score a response correct if it differs from the correct response(s) listed only
in verb tense or number (singular/plural), unless otherwise indicated by the scoring key. For
example, on Item 4 of the Synonyms subtest (stone), the responses rock or rocks would be
correct. Score a response incorrect if the examinee substitutes a different part of speech, such
as a noun for a verb, unless otherwise indicated by the scoring key. For example, on Item 7
of the Antonyms subtest (ugly, an adjective), the response beauty (a noun) is incorrect. If an
examinee responds to an Antonyms item by giving the stimulus word preceded by non or
un, ask for another answer, unless otherwise indicated by the scoring key. Do not penalize
an examinee for mispronunciations resulting from articulation errors, dialect variations, or
regional speech patterns.
Record the total number of all items answered correctly and all items below the basal in
the Number Correct box after the last Reading Vocabulary item on the Test Record for each
subtest. Do not include points for sample items.
Starting Point
Select a starting point based on an estimate of the examinee’s present achievement level.
Consult the Suggested Starting Points table in the Test Book, on the page after the Science
tab, to determine an appropriate starting point for the individual.
Basal
Test by complete pages until the 6 lowest-numbered items administered are correct, or until
the page with Item 1 has been administered.
Ceiling
Test by complete pages until the 6 highest-numbered items administered are incorrect, or
until the page with Item 40 has been administered.
Scoring
Score each correct response 1 and each incorrect response 0. Do not penalize an examinee
for mispronunciations resulting from articulation errors, dialect variations, or regional speech
patterns. Record the total number of all items answered correctly and all items below the
basal in the Number Correct box after the last Science item on the Test Record.
Administration Procedures
Know the exact pronunciation of each test item before administering this test. The correct
pronunciation is in parentheses following more difficult items. For additional help with
pronunciation, refer to a standard dictionary. Repeat items during the test whenever the
examinee requests.
Starting Point
Select a starting point based on an estimate of the examinee’s present achievement level.
Consult the Suggested Starting Points table in the Test Book, on the page after the Social
Studies tab, to determine an appropriate starting point for the examinee.
Basal
Test by complete pages until the 6 lowest-numbered items administered are correct, or until
the page with Item 1 has been administered.
Scoring
Score each correct response 1 and each incorrect response 0. Do not penalize an examinee
for mispronunciations resulting from articulation errors, dialect variations, or regional speech
patterns. Record the total number of all items answered correctly and all items below the
basal in the Number Correct box after the last Social Studies item on the Test Record.
Administration Procedures
Know the exact pronunciation of each test item before administering this test. The correct
pronunciation is in parentheses following more difficult items. For additional help with
pronunciation, refer to a standard dictionary. Repeat items during the test whenever the
examinee requests.
Starting Point
Select a starting point based on an estimate of the examinee’s present achievement level.
Consult the Suggested Starting Points table in the Test Book, on the page after the Humanities
tab, to determine an appropriate starting point for the individual.
Basal
Test by complete pages until the 6 lowest-numbered items administered are correct, or until
the page with Item 1 has been administered.
Ceiling
Test by complete pages until the 6 highest-numbered items administered are incorrect, or
until the page with Item 40 has been administered.
Scoring
Score each correct response 1 and each incorrect response 0. Do not penalize an examinee
for mispronunciations resulting from articulation errors, dialect variations, or regional speech
patterns. Record the total number of all items answered correctly and all items below the
basal in the Number Correct box after the last Humanities item on the Test Record.
Administration Procedures
Know the exact pronunciation of each item before administering this test. The correct
pronunciation is in parentheses following more difficult items. For additional help with
pronunciation, refer to a standard dictionary. Repeat items during the test whenever the
examinee requests.
The four levels of test information are cumulative; that is, each successive level builds on
information from the previous level. Information from all four levels is necessary to describe
a person’s performance completely. Level 1 provides qualitative data that are often used to
support a clinical hypothesis. Levels 2, 3, and 4 include a variety of score options from which
to select.
Level 1 information is obtained through behavioral observations during testing and
through analysis of erroneous responses to individual items. Observation of an examinee’s
Types of Scores
This section discusses the variety of scores available for test interpretation. Included among
these scores are grade equivalents (GE), age equivalents (AE), relative proficiency indexes
(RPI), cognitive-academic language proficiency (CALP) levels, percentile ranks (PR), and
standard scores (SS). Most of these scores will be familiar to examiners who have used the
Woodcock-Johnson III Tests of Achievement (WJ III ACH) (Woodcock, McGrew, & Mather,
2001) or the Woodcock Reading Mastery Tests–Third Edition (Woodcock, 2011). Several
optional standard score scales, including the normal curve equivalents (NCE) scale, also are
discussed.
Raw Score
For most tests, the raw score is the number of correct responses, each receiving 1 raw score
point. The three exceptions in the WJ IV ACH are Test 6: Writing Samples, in which responses
to Items 7 and higher can receive 2, 1, or 0 points; Test 8: Oral Reading, in which responses
can receive 2, 1, or 0 points; and Test 12: Reading Recall, in which the raw score is based on
the number of elements recalled correctly on the stories administered. Number Correct or
Number of Points is listed in the left column of the “Scoring Table” that appears for each test
on the Test Record. Procedures for calculating the raw score are presented in Chapter 3 of
this manual.
When an examinee receives a score of 0 on any test, the examiner needs to judge whether
that score is a true assessment of the examinee’s ability or whether it reflects the individual’s
inability to perform the task. If it is the latter, it may be more appropriate to assume that the
examinee has no score for the test rather than using the score of 0 in further calculation and
interpretation. For example, if a third-grade student had a score of 0 on Test 14: Editing,
the score may be an accurate representation of the child’s ability. However, if a kindergarten
student obtained a 0 on Test 14: Editing, the score may indicate that the child has not yet
learned to read.
Grade Equivalent
A grade equivalent (GE), or grade score, reflects the examinee’s performance in terms of the
grade level in the norming sample at which the median score is the same as the examinee’s
score. In other words, if the median W score on a test for students in the sixth month of
the second grade is 488, then an examinee who scored 488 would receive 2.6 as a grade
equivalent score.
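The median-matching logic can be sketched with a hypothetical norm table; the grade placements and median W scores below are illustrative only, and the real conversion uses the WJ IV norming data:

```python
# Hypothetical median W scores by grade placement (illustrative values
# only; real conversions come from the WJ IV norming data).
GRADE_MEDIANS = [(1.6, 455), (2.6, 488), (3.6, 502), (4.6, 512)]

def grade_equivalent(w):
    """Return the grade placement whose median W score matches the
    examinee's W score, interpolating linearly between tabled points."""
    if w <= GRADE_MEDIANS[0][1]:
        return GRADE_MEDIANS[0][0]   # reported as < the lowest tabled grade
    for (g0, w0), (g1, w1) in zip(GRADE_MEDIANS, GRADE_MEDIANS[1:]):
        if w <= w1:
            return round(g0 + (g1 - g0) * (w - w0) / (w1 - w0), 1)
    return GRADE_MEDIANS[-1][0]      # reported as > the highest tabled grade

print(grade_equivalent(488))  # 2.6: matches the median for grade 2.6
```

As the surrounding text notes, scores beyond the range of the norm table are reported with less than (<) or greater than (>) signs rather than an exact value.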
At the ends of the grade scale, when using the online scoring program, less than (<) signs
are used for grade scores falling below the median score obtained by children beginning
kindergarten (K.0) and greater than (>) signs are used for grade scores higher than the
median score obtained by graduate students finishing the first year of graduate school (17.9),
or, if scored by 2-year college norms, at the end of the final year of a 2-year program (14.9).
For example, a student who scored above the median for students finishing the first year of
graduate school would receive a grade equivalent of >17.9, whereas a student who scored
below the median of students entering kindergarten would receive a score of <K.0.
When hand scoring, grade equivalents can only be closely approximated. Thus, the grade
equivalents located in the third column of the “Scoring Table” for each test on the Test
Record are estimates (Est). Precise grade equivalents for tests and grade-equivalent scores for
clusters are only available when using the online scoring program.
One frequently alleged disadvantage of grade scores is that they are not useful for
instructional planning because they do not reflect the student’s ability. This is sometimes
followed by the recommendation that some other metric, such as standard scores, be
used in place of grade equivalents. (Recall from the discussion about levels of interpretive
information that standard scores provide information regarding peer comparison but do not
provide information regarding level of development.)
Grade equivalent scores may cause interpretive problems in tests that are composed
mostly of items with a limited range of difficulty (such as the multilevel tests of many group
achievement batteries). For example, if a third-grade student earns a grade equivalent of 6.5
on a test that is intended to be administered to grade 3, it does not mean that the student
will be successful on tasks associated with the mid-sixth-grade level. Rather, it means that
the student answered correctly a high percentage of the items on a third-grade test—the same
percentage of items that an average sixth-grade student answered correctly on the third-grade
test. The student’s score in this case is more a reflection of the student’s accuracy level than
the grade level of task difficulty that this student can perform.
This problem with grade scores is eliminated when test items are distributed uniformly
in a test over a wide range of difficulty, when students are administered the subset of items
Age Equivalent
An age equivalent (AE), or age score, is similar to a grade equivalent, except that it reflects
performance in terms of the age level in the norming sample at which the median score is the
same as the examinee’s score. Age equivalents may be more useful in some applications than
grade equivalents, especially as they relate to the abilities of young children or adults not
attending school.
At the ends of the age scale, less than (<) signs are used for levels of performance that fall
below the median of the specified age. Greater than (>) signs are used for levels above the
median of the specified age.
When hand scoring, age equivalents can only be closely approximated. Thus, the age
equivalents located in the second column of the “Scoring Table” for each test on the Test
Record are estimates (Est). The online scoring program reports the precise age equivalents for
tests and age-equivalent scores for clusters.
W Difference Score
Level 3 scores (RPIs) and level 4 scores (standard scores, percentile ranks) are based on
test or cluster W difference scores. The W difference scores are the difference between an
examinee’s test or cluster W score and the median test or cluster W score for the reference
group in the norming sample (same age or same grade) with which the comparison is being
made.
Instructional Zone
The instructional zone (called developmental zone in the WJ IV COG and the WJ IV OL) is
a special application of the RPI. An examinee will perceive tasks that fall at an RPI of 96/90
as easy, whereas he or she will perceive tasks that fall at an RPI of 75/90 as difficult. Thus,
the instructional zone identifies a range along a developmental scale that encompasses an
examinee’s present level of functioning from easy (the independent level) to difficult (the
frustration level). The lower and higher points of this zone are labeled EASY and DIFF in the
“Table of Scores” generated when using the online scoring program.
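The relationship among the W difference score, the RPI, and the zone endpoints can be sketched under the common Rasch interpretation of the W scale (roughly 9.1 W units per logit; Woodcock & Dahl, 1971). The scaling constant below is an assumption for illustration, not the published scoring algorithm:

```python
import math

W_PER_LOGIT = 9.1024  # assumed W-scale scaling (Woodcock & Dahl, 1971)

def rpi(w_diff):
    """Relative proficiency index: the examinee's predicted percent success
    on tasks that a typical age or grade peer performs with 90% success."""
    # A task performed with 90% success by the median peer lies ln(9)
    # logits (about 20 W units) below the peer median on the Rasch scale.
    logit = (w_diff + W_PER_LOGIT * math.log(9)) / W_PER_LOGIT
    return 100 / (1 + math.exp(-logit))

print(round(rpi(10)))   # 96: examinee 10 W above the peer median (EASY end)
print(round(rpi(0)))    # 90: examinee at the peer median
print(round(rpi(-10)))  # 75: examinee 10 W below the peer median (DIFF end)
```

Note that this sketch reproduces the anchor points the manual cites: an RPI of 96/90 at the easy (independent) end of the zone and 75/90 at the difficult (frustration) end.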
CALP Levels
Cummins (1984) formalized a distinction between two types of language proficiency: basic
interpersonal communication skill (BICS) and cognitive-academic language proficiency
(CALP). BICS is defined as language proficiency in everyday communicative contexts, or
those aspects of language proficiency that seem to be acquired naturally and without formal
schooling. CALP is defined as language proficiency in academic situations, or those aspects of
language proficiency that emerge and become distinctive with formal schooling. Classroom-
appropriate academic proficiency is further defined by literacy skills involving conceptual-
linguistic knowledge that occur in a context of semantics, abstractions, and context-reduced
linguistic forms.
The online scoring program includes the option to report CALP levels to help describe
the examinee’s language proficiency in English. If the option is selected, CALP levels can
be reported for several clusters in the WJ IV ACH (see Table 5-2). Clusters in the WJ IV
COG and WJ IV OL that measure comprehension-knowledge (Gc), oral language, or
acquired knowledge also yield CALP levels, if selected. See the Woodcock-Johnson IV Tests
of Cognitive Abilities Examiner’s Manual (Mather & Wendling, 2014a) and the Woodcock-
Johnson IV Tests of Oral Language Examiner’s Manual (Mather & Wendling, 2014b) for more
information. Table 5-3 illustrates the six CALP levels as well as two regions of uncertainty
and corresponding implications. The CALP levels are based on W difference scores, and the
RPIs corresponding to these W difference scores provide meaningful interpretations regarding
the individual’s language proficiency.
Standard Score
The standard score scale used in the WJ IV ACH is based on a mean (M) of 100 and a
standard deviation (SD) of 15. This scale is the same as most deviation-IQ scales and may
be used to relate standard scores from the WJ IV to other test scores based on the same
mean and standard deviation. The WJ IV also includes extended standard scores, providing
a greater range of standard scores (0 to over 200) than do other tests. Standard scores
sometimes present a disadvantage to inexperienced users and others, such as parents or the
examinee, because the scores lack objective meaning. Consequently, the interpretation of a
standard score is often explained using its equivalent percentile rank. Figure 5-1 illustrates
the relationship between selected standard scores and the extended percentile rank scale.
The online scoring program provides the option to report an additional standard score
from a selection of four other types of standard scores: z scores, T scores, stanines, and
normal curve equivalents (NCEs). The basic standard score is the z score with a mean of
0 and a standard deviation of 1. The T score has a mean of 50 and a standard deviation
of 10. Although T scores have been frequently used in education and industry, they have
been replaced by the deviation-IQ scale (M = 100, SD = 15) for most clinical applications.
Another standard score scale is the traditional stanine scale. Stanines have a mean of 5 and a
standard deviation of 2 and are most useful in applications in which a single-digit gross scale
of measurement is desired. The normal curve equivalent scale (Tallmadge & Wood, 1976) has
a mean of 50 and a standard deviation of 21.06 and has been used most often for evaluating
student performance in certain federally funded programs.
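A sketch of the conversions from the WJ IV standard score scale (M = 100, SD = 15) to the optional scales described above; the stanine rounding rule is a conventional approximation, not the published scoring algorithm:

```python
def convert(ss):
    """Convert a WJ IV standard score (M = 100, SD = 15) to the optional
    scales: z score, T score, normal curve equivalent, and stanine."""
    z = (ss - 100) / 15
    t = 50 + 10 * z                              # T score: M = 50, SD = 10
    nce = 50 + 21.06 * z                         # NCE: M = 50, SD = 21.06
    stanine = min(9, max(1, round(z * 2 + 5)))   # single-digit gross scale
    return z, t, nce, stanine

z, t, nce, stanine = convert(115)                # one SD above the mean
print(z, t, round(nce, 2), stanine)  # 1.0 60.0 71.06 7
```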
Interpreting Tests
This section contains details on interpretation of the tests in each of the curricular areas.
Chapter 2 contains functional definitions of the abilities measured by each test. In evaluating
the practical significance of differences among test performance, consider any extenuating
circumstances that may explain these differences, as well as any unusual behaviors or
responses obtained on those tests. This information may have useful diagnostic implications.
Both the “Test Session Observations Checklist” and the “Qualitative Observation” checklists
[Table: reading tests, ordered from more to less complex (Category | Test | Stimuli | Task)]
| Test 12: Reading Recall | Printed passages | Reading and recalling elements of a passage
Isolated Words (Lexical Level) | Test 15: Word Reading Fluency | Printed words | Quickly reading and matching the two words that go together
| Test 17: Reading Vocabulary | Printed words | Reading and producing synonyms and antonyms
| Test 1: Letter-Word Identification (word items) | Printed words | Pronouncing real words
Phono/Orthographic Coding | Test 7: Word Attack | Printed words (nonsense) | Applying phonic and structural analysis skills to pronouncing nonsense words

[Table: math tests, excerpt, ordered from more to less complex (Category | Test | Stimuli | Task)]
Automaticity | Test 10: Math Facts Fluency | Printed math facts | Quickly calculating single-digit math facts (addition, subtraction, and multiplication)
In terms of complexity, the skills measured in the four WJ IV math tests range from the
lower-level ability of recognizing math symbols and vocabulary to the higher-level ability
of mathematical reasoning and problem solving. Based on CHC theory, the math tests are
primarily measures of quantitative knowledge (Gq), although some math tests measure other
aspects of processing, particularly fluid reasoning (Gf ) or processing speed (Gs).
Test 5: Calculation
This test of math achievement measures the ability to perform mathematical computations
(Gq). The task requires the examinee to perform a variety of calculations ranging from simple
addition to calculus. Low performance may be a function of limited basic math skills, limited
instruction, or lack of attention. The “Qualitative Observation” checklist for this test helps
document how the examinee approached the task. Table 5-9 provides information about the
percentage of age mates who were assigned each rating in the norming sample. For example,
of the 12-year-olds whose performance was rated, 8% worked very slowly and relied on
inefficient strategies, 8% solved the problems quickly and with no observed difficulties, and
2% appeared to work too quickly. Using this information can help determine how typical or
atypical the examinee’s performance is compared to age mates.
[Table: writing tests, excerpt, ordered from more to less complex (Category | Test | Stimuli | Task)]
Rate/Automaticity | Test 11: Sentence Writing Fluency | A picture and three words to form into a sentence | Quickly writing short sentences (requires correct syntax and automaticity)
Isolated Letters (Sublexical Level) | Test 3: Spelling | Orally presented letters | Writing letter names
| Test 16: Spelling of Sounds | Orally presented phonemes | Writing letters corresponding to phonemes
Test 3: Spelling
This test measures knowledge of prewriting skills and spelling (Grw). The task requires the
production of single letters or words in response to oral prompts.
Performance on Test 3: Spelling may be related to several factors, including handwriting. If
an examinee is unable to complete Items 1 through 3, he or she may not have developed the
muscular control or visual-motor skill needed in beginning handwriting.
A closer analysis of Test 3: Spelling items will help examiners differentiate between
phonetically accurate and phonetically inaccurate spelling errors. In analyzing an examinee’s
responses, an examiner may determine whether a difference exists in the individual’s ability
to spell words that have regular phoneme-grapheme correspondence and those that require
the memorization of visual features. In addition, the following specific error patterns may
be present in an examinee’s misspellings: (a) addition of unnecessary letters, (b) omissions
of needed letters, (c) spellings that reflect mispronunciations or dialectal speech patterns, (d) reversals of letters,
(e) transpositions of whole words (e.g., was for saw) or of consonants and/or vowels (e.g.,
brithday), (f) phonetic spellings of nonphonetic words (e.g., they as thay), and (g) incorrect
associations of sounds with letters (e.g., efry for every).
The “Qualitative Observation” checklist for this test helps document how the examinee
approached the task. Table 5-11 provides information about the percentage of age mates who
were assigned each rating in the norming sample. For example, of the 13-year-olds whose
performance was rated, 2% spelled words easily and accurately and 27% spelled words in a
laborious, nonautomatic manner. Using this information can help determine how typical or
atypical the examinee’s performance is compared to age mates.
Intra-Achievement Variations
This variation procedure allows comparison of one area of academic achievement to
the examinee’s expected or predicted performance as determined by his or her average
performance on other achievement areas. An intra-achievement variation is present within
individuals who have specific achievement strengths or weaknesses, such as superior
math skills relative to their expected achievement based on their average performance
in other areas of achievement. Individuals with a significant intra-achievement variation
exhibit specific strengths or weaknesses in one or more areas of achievement. This type of
information is an invaluable aid in instructional planning and can be used, for example,
to support the hypothesis of a specific difficulty as compared to generally low academic
performance across achievement domains. For example, a student may perform poorly in
mathematics but may have average abilities on tasks involving reading.
As indicated in Table 5-15, intra-achievement variations can be calculated if WJ IV ACH
Tests 1 through 6 are administered. Each test is compared to the examinee’s predicted or
expected test score based on his or her average performance on the other five tests. For
example, when considering Test 1: Letter-Word Identification, the individual’s average
performance on the remaining five tests (Tests 2 through 6) is used as the predictor to
determine his or her expected score on Test 1: Letter-Word Identification. This expected
score is then compared to the person’s obtained Test 1: Letter-Word Identification score. If
the individual’s expected score is higher than his or her actual score, a relative weakness is
identified. If the expected score is lower than the actual score, a relative strength is identified.
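The logic of the comparison can be sketched as follows. This is a simplification: the WJ IV derives the expected score from norm-based prediction rather than the raw average used here, and the standard scores are hypothetical:

```python
def variation(scores, target):
    """Compare one test's standard score with the score expected from the
    average of the remaining tests. A simplification: the WJ IV derives
    the expected score from norm-based prediction, not a raw average."""
    others = [s for name, s in scores.items() if name != target]
    expected = sum(others) / len(others)
    return expected, scores[target] - expected

# Hypothetical standard scores for WJ IV ACH Tests 1-6.
scores = {
    "Letter-Word Identification": 82, "Applied Problems": 104,
    "Spelling": 98, "Passage Comprehension": 86,
    "Calculation": 108, "Writing Samples": 100,
}
expected, diff = variation(scores, "Letter-Word Identification")
# Actual (82) falls well below expected (99.2): a relative weakness.
print(expected, round(diff, 1))  # 99.2 -17.2
```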
[Figure: diagram labels showing the Intra-Oral Language Variations and Intra-Achievement Variations models in relation to the Cognitive Abilities, Oral Language, and Achievement batteries]
As an option, other tests can be included in the variation procedure. When including
any additional tests, the corresponding cluster or clusters that are created also are included
in the variation procedure. For example, if Test 7: Word Attack is administered, the Basic
Reading Skills cluster is available when combined with Test 1: Letter-Word Identification.
Therefore, both Test 7: Word Attack and the Basic Reading Skills cluster are compared to
the same expected score based on the same predictor as Test 1: Letter-Word Identification in
the variation procedure. No matter how many tests are administered, the predictor score is
always based on five tests from WJ IV ACH Tests 1 through 6. An intra-achievement variation
is present within individuals who have specific academic strengths or weaknesses, such as
superior Basic Reading Skills (Grw) relative to their expected performance based on their
average performance on the remaining five tests. If any of the optional additional tests are
included in the variation procedure, the variation is labeled Intra-Achievement (Extended).
Intra-Cognitive Variations
This variation is present within individuals who have specific cognitive strengths or
weaknesses, such as high fluid reasoning (Gf ) or poor short-term working memory (Gwm).
Equal interest exists in either a strength or a weakness in one ability relative to an individual’s
average performance in other cognitive abilities. This profile of variations can document areas
of relative strength and weakness, provide insights for program planning, and contribute to
a deeper understanding of the types of tasks that will be especially easy or difficult for an
individual compared to his or her other abilities.
Based on WJ IV COG Tests 1 through 7, this variation procedure allows comparison
of one area of cognitive ability to the examinee’s expected or predicted score based on his
or her average performance on six of the first seven cognitive tests, each measuring some
aspect of a different CHC cognitive ability (Gc, Gf, Gwm, Gs, Ga, Glr, Gv). For example,
when considering Test 1: Oral Vocabulary, the individual’s average performance on the
remaining six tests (Test 2: Number Series, Test 3: Verbal Attention, Test 4: Letter-Pattern
Matching, Test 5: Phonological Processing, Test 6: Story Recall, and Test 7: Visualization) is
used as the predictor to determine the person’s expected Test 1: Oral Vocabulary score. This
expected score is then compared to the person’s obtained Test 1: Oral Vocabulary score. An
intra-cognitive variation is present within individuals who have specific cognitive strengths
Ability/Achievement Comparisons
Ability/achievement comparison models are unidirectional comparisons (as represented by
the single-headed arrows in Figure 5-6) that use certain intellectual or linguistic abilities to
predict academic performance.
The ability/achievement comparison models are procedures for comparing an individual’s
current academic performance to the performance of others of the same age or grade with
the same ability score (based upon general intellectual ability, scholastic aptitude, Gf-Gc
composite, oral language, or academic knowledge). These five models are not intended
to gauge an individual’s potential for future success. They are, however, valid methods for
evaluating the presence and significance of discrepancies between current levels of ability
and achievement. All WJ IV ability/achievement comparisons account for regression to
the mean and provide actual or real discrepancy norms (for more information, see the
Technical Manual).
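Accounting for regression to the mean can be illustrated with the textbook prediction formula; the correlation value below is hypothetical, whereas the WJ IV obtains the actual predictor-criterion relationships from its norming data:

```python
def predicted_achievement(ability_ss, r):
    """Expected achievement standard score, regressed toward the mean.
    r is the ability-achievement correlation for the age or grade group
    (a hypothetical value here, tabled from norming data in the WJ IV)."""
    return 100 + r * (ability_ss - 100)

# With r = 0.6, an ability score of 130 predicts achievement of 118,
# not 130: the regression-to-the-mean adjustment built into the
# comparison procedures.
print(predicted_achievement(130, 0.6))  # 118.0
```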
Five alternatives may be used for the ability measure. The Academic Knowledge cluster
from the WJ IV ACH may be used as the predictor for clusters in reading, writing, math,
and cross-academic areas. The General Intellectual Ability, Scholastic Aptitude, or Gf-Gc
composites may be used from the WJ IV COG as predictors or measures of ability. The Broad
Oral Language (or Amplio lenguaje oral) cluster from the WJ IV OL may be used to predict
level of achievement based upon the individual’s level of oral language development. Each of
these procedures fulfills a different purpose.
The one alternative available in the WJ IV ACH is discussed in detail. A summary of the
other four alternatives—the GIA, Scholastic Aptitude, and Gf-Gc composites from the WJ IV
COG and the Broad Oral Language cluster from the WJ IV OL—is presented. The WJ IV COG
Examiner’s Manual and the WJ IV OL Examiner’s Manual provide further information about
these comparison procedures.
Academic Knowledge/Achievement Comparisons
The Academic Knowledge cluster is composed of Test 18: Science, Test 19: Social Studies,
and Test 20: Humanities. The standard score for this cluster is used as the predictor of
expected achievement. The individual’s expected achievement is then compared to his or her
actual achievement. Examinees with expected scores significantly higher than their actual
achievement scores exhibit a relative strength in academic knowledge with weaknesses in
one or more areas of achievement. If expected scores are significantly lower than actual
achievement, the individual exhibits a relative weakness in academic knowledge with
strengths in one or more areas of achievement. Additionally, three clusters from the WJ IV OL
can be included in this comparison procedure. Table 5-17 lists the various clusters that can
be included in this ability/achievement comparison procedure.
Reading (Tests 1, 4)
Broad Reading (Tests 1, 4, 9)
Basic Reading Skills (Tests 1, 7)
Reading Comprehension (Tests 4, 12)
Reading Comprehension–Extended
(Tests 4, 12, 17)
Reading Fluency (Tests 8, 9)
Reading Rate (Tests 9, 15)
Mathematics (Tests 2, 5)
Broad Mathematics (Tests 2, 5, 10)
Math Calculation Skills (Tests 5, 10)
Math Problem Solving (Tests 2, 13)
Discrepancy Scores
The online scoring program includes two scores for use in interpreting the presence and
severity of any variation, comparison, or discrepancy. These are called the discrepancy
percentile rank (discrepancy PR) and the discrepancy standard deviation (discrepancy SD).
These scores are based on actual difference scores computed for each individual in the
norming sample. (See the Technical Manual for more information.)
The discrepancy percentile rank indicates the percentage of the examinee’s peer group
(same age or grade and same predicted score) with a difference score that is the same as or
larger than the examinee’s difference score. For example, a discrepancy percentile rank of 1
on Basic Reading Skills indicates that only 1% of the examinee’s peer group had the same or
larger negative difference score on this cluster. On the other hand, a discrepancy percentile
rank of 97 on Math Problem Solving indicates that only 3% of the examinee’s peer group had
the same or larger positive difference score on this cluster. The WJ IV discrepancy PR values
provide the same information that is typically referred to as the “base rate” in the population.
The discrepancy SD score is a standardized z score that reports (in standard deviation
units) the difference between an individual’s difference score and the average difference
score for individuals at the same age or grade level in the norming sample who had the
same predictor score. A negative value indicates the examinee’s actual ability is lower than
predicted. A positive value indicates the examinee’s actual ability is higher than predicted.
The discrepancy SD can be used, instead of the percentile rank, in programs with selection
criteria such as “a difference equal to or greater than one and one-half times the standard
deviation.”
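Under a normality assumption, the two discrepancy scores can be sketched as follows; the standard deviation of difference scores is a hypothetical value standing in for the tabled norms:

```python
from math import erf, sqrt

def discrepancy(actual, predicted, sd_of_differences):
    """Discrepancy SD (a z score) and discrepancy percentile rank for a
    comparison. sd_of_differences is the standard deviation of difference
    scores among peers with the same predicted score (tabled in the WJ IV
    norms; a hypothetical value is used below)."""
    z = (actual - predicted) / sd_of_differences
    pr = 100 * 0.5 * (1 + erf(z / sqrt(2)))  # percent at or below this difference
    return z, pr

z, pr = discrepancy(actual=78, predicted=96, sd_of_differences=12)
print(round(z, 2), round(pr, 1))  # -1.5 6.7
```

Here the negative discrepancy SD of -1.5 would meet the “one and one-half times the standard deviation” criterion quoted above, and only about 6.7% of peers with the same predicted score show the same or a larger negative difference.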
9.13 In educational, clinical, and counseling settings, a test taker’s score should not be interpreted in
isolation; other relevant information that may lead to alternative explanations for the examinee’s
test performance should be considered. (p. 145)
References
August, D., & Shanahan, T. (2006). Developing literacy in second-language learners: Report of
the national literacy panel on language minority children and youth. Mahwah, NJ: Lawrence
Erlbaum.
Burney, V. H., & Beilke, J. R. (2008). The constraints of poverty on high achievement. Journal
for the Education of the Gifted, 31, 171–197.
Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. New York,
NY: Cambridge University Press.
Carroll, J. B., & Maxwell, S. E. (1979). Individual differences in cognitive abilities. Annual
Review of Psychology, 30, 603–640.
Cattell, R. B. (1950). Personality: A systematic theoretical and factoral study. New York, NY:
McGraw-Hill.
Corn, A. L., & Lusk, K. E. (2010). Perspectives on low vision. In A. L. Corn & J. N. Erin
(Eds.), Foundations of low vision: Clinical and functional perspectives (2nd ed., pp. 3–34).
New York, NY: AFB Press.
Cummins, J. (1984). Bilingualism and special education: Issues in assessment and pedagogy.
Austin, TX: Pro-Ed.
Cummins, J., & Hornberger, N. H. (2008). Bilingual education. New York, NY: Springer.
de Leeuw, E. (2008). When your native language sounds foreign: A phonetic investigation into
first language attrition (Unpublished doctoral dissertation). Queen Margaret University,
Edinburgh, Scotland. Retrieved from http://etheses.qmu.ac.uk/119/
Flege, J. E., Schirru, C., & MacKay, I. R. (2003). Interaction between the native and second
language phonetic subsystems. Speech Communication, 40, 467–491.
Grosjean, F. (2001). The bilingual’s language modes. In J. Nicol (Ed.), One mind, two
languages: Bilingual language processing (2nd ed., pp. 1–22). Oxford, England: Blackwell.
Herschell, A. D., Greco, L. A., Filcheck, H. A., & McNeil, C. B. (2002). Who is testing
whom: Ten suggestions for managing disruptive behavior of young children during testing.
Intervention in School and Clinic, 37, 140–148.
Horn, J. L. (1989). Models for intelligence. In R. Linn (Ed.), Intelligence: Measurement, theory
and public policy (pp. 29–73). Urbana, IL: University of Illinois Press.
Horn, J. L., & Cattell, R. B. (1966). Refinement and test of the theory of fluid and crystallized
general intelligences. Journal of Educational Psychology, 57, 253–270.
Horn, J. L., & Stankov, L. (1982). Auditory and visual factors of intelligence. Intelligence, 6,
165–185.
Jaffe, L. E., & Henderson, B. W. (with Evans, C. A., McClurg, L., & Etter, N.). (2009).
Woodcock-Johnson III Tests of Achievement Normative Update–Braille Adaptation. Louisville,
KY: American Printing House for the Blind.
Mather, N., & Wendling, B. J. (2014a). Examiner’s Manual. Woodcock-Johnson IV Tests of
Cognitive Abilities. Rolling Meadows, IL: Riverside Publishing.
Mather, N., & Wendling, B. J. (2014b). Examiner’s Manual. Woodcock-Johnson IV Tests of Oral
Language. Rolling Meadows, IL: Riverside Publishing.
McArdle, J. J., & Woodcock, R. W. (1998). Human cognitive abilities in theory and practice.
Mahwah, NJ: Lawrence Erlbaum.
McGrew, K. S., LaForte, E. M., & Schrank, F. A. (2014). Technical Manual. Woodcock-Johnson
IV. Rolling Meadows, IL: Riverside Publishing.
McGrew, K. S., & Wendling, B. J. (2010). Cattell-Horn-Carroll cognitive-achievement
relations: What we have learned from the past 20 years of research. Psychology in the
Schools, 47, 651–675.
Muñoz-Sandoval, A. F., Woodcock, R. W., McGrew, K. S., & Mather, N. (2007). Batería III
Woodcock-Muñoz Normative Update: Pruebas de aprovechamiento. Rolling Meadows, IL:
Riverside Publishing.
Prifitera, A., Saklofske, D. H., & Weiss, L. G. (Eds.). (2008). WISC-IV clinical assessment and
intervention (2nd ed.). San Diego, CA: Academic Press.
Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Copenhagen,
Denmark: Danish Institute for Educational Research.
Sattler, J. M., & Hoge, R. D. (2005). Assessment of children: Behavioral, social, and clinical
foundations (5th ed.). San Diego, CA: Author.
Schrank, F. A., & Dailey, D. (2014). Woodcock-Johnson Online Scoring and Reporting [Online
format]. Rolling Meadows, IL: Riverside Publishing.
Schrank, F. A., Mather, N., & McGrew, K. S. (2014a). Woodcock-Johnson IV Tests of
Achievement. Rolling Meadows, IL: Riverside Publishing.
Schrank, F. A., Mather, N., & McGrew, K. S. (2014b). Woodcock-Johnson IV Tests of Oral
Language. Rolling Meadows, IL: Riverside Publishing.
Schrank, F. A., McGrew, K. S., & Mather, N. (2014a). Woodcock-Johnson IV. Rolling Meadows,
IL: Riverside Publishing.
Schrank, F. A., McGrew, K. S., & Mather, N. (2014b). Woodcock-Johnson IV Tests of Cognitive
Abilities. Rolling Meadows, IL: Riverside Publishing.
Smith, J. K. (1999). The effects of practice on the reading speed, accuracy, duration, and visual
fatigue of students with low vision when accessing standard size print with optical devices
(Unpublished doctoral dissertation). University of Arizona, Tucson, AZ.
Stevens, S. S. (1951). Handbook of experimental psychology. New York, NY: John Wiley.
Tallmadge, G. K., & Wood, C. T. (1976, October). User’s guide, ESEA Title I evaluation and
reporting system. Mountain View, CA: RMC Research.
Thomas, W. P., & Collier, V. P. (2002). A national study of school effectiveness for language
minority students’ long-term academic achievement. Santa Cruz, CA: Center for Research on
Education, Diversity, and Excellence, University of California-Santa Cruz. Retrieved from
www.crede.ucsc.edu/research/llaa/1.1%5ffinal.html
U.S. Department of Education, Family Educational Rights and Privacy Act of 1974, 20 U.S.C.
§ 1232g; 34 CFR Part 99.10(c) and (d).
Woodcock, R. W. (1973). Woodcock Reading Mastery Tests. Circle Pines, MN: American
Guidance Service.
Woodcock, R. W. (1982, March). Interpretation of the Rasch ability and difficulty scales
for educational purposes. Paper presented at the meeting of the National Council on
Measurement in Education, New York, NY.
Woodcock, R. W. (1987, 1998). Woodcock Reading Mastery Tests–Revised. Circle Pines, MN:
American Guidance Service.
Woodcock, R. W. (1988, August). Factor structure of the tests of cognitive ability from the 1977
and 1989 Woodcock-Johnson. Paper presented at the Australian Council on Educational
Research Seminar on Intelligence, Melbourne, Australia.
Woodcock, R. W. (1999). What can Rasch-based scores convey about a person’s test
performance? In S. E. Embretson & S. L. Hershberger (Eds.), The new rules of
measurement: What every psychologist and educator should know (pp. 105–128). Mahwah,
NJ: Lawrence Erlbaum.
Woodcock, R. W. (2011). Woodcock Reading Mastery Tests (3rd ed.). San Antonio, TX:
Pearson Assessments.
Woodcock, R. W., & Dahl, M. N. (1971). A common scale for the measurement of person ability
and test item difficulty (AGS Paper No. 10). Circle Pines, MN: American Guidance Service.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III. Itasca, IL:
Riverside Publishing.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III Tests of
Achievement. Itasca, IL: Riverside Publishing.
Woodcock, R. W., Schrank, F. A., Mather, N., & McGrew, K. S. (2007). Woodcock-Johnson III
Tests of Achievement Form C/Brief Battery. Rolling Meadows, IL: Riverside Publishing.
Wright, B. D., & Stone, M. H. (1979). Best test design. Chicago, IL: MESA Press.
Appendix A
Norming Site States and Cities
The authors wish to thank the more than 8,000 individuals who participated in the
Woodcock-Johnson IV (Schrank, McGrew, & Mather, 2014a) national standardization and
related studies as well as the professionals and schools who assisted in obtaining the data.
The following is a list of states and cities where data were collected.
Alabama (continued)
Forestdale, Gardendale, Hamilton, Helena, Homewood, Hoover, Huntsville, Moody, Mountain Brook, Pelham, Riverchase, Roebuck Plaza, Scottsboro, Selma, Tarrant, Trussville, Vestavia Hills

Arizona (continued)
Bowie, Buckeye, Chandler, Douglas, El Mirage, Gilbert, Glendale, Goodyear, Hereford, Laveen, Mesa, Oro Valley, Parker, Peoria, Phoenix, Pima, Pirtleville, Portal

Arkansas
Arkadelphia, Blytheville, Lowell, Pine Bluff, Redfield, Springdale

California
Acton, Alameda, Alhambra, Aliso Viejo, Alpine, Alta Loma, Altadena, Anaheim, Anderson
California (continued)
Baldwin Park, Beverly Hills, Biggs, Bonita, Brea, Buena Park, Burbank, Camarillo, Canyon Lake, Carlsbad, Ceres, Cerritos, Chico, Chula Vista, Claremont, Colton, Compton, Corning, Cotati, Cottonwood, Covina, Davis, Del Mar, Durham, El Cajon, Encino, Escondido, Fontana, Forest Ranch, Foster City, Fountain Valley, Garden Grove, Gardena, Goleta, Gridley, Hacienda Heights, Half Moon Bay, Hawthorne, Hayward, Holiday, Huntington Beach, Igo, Imperial Beach, Indio, Inglewood, Irvine, Isleton, Jamul, La Canada, La Habra, La Mesa, La Mirada, La Verne, Lake Elsinore, Lakeside, Lathrop, Lemon Grove, Linden, Long Beach, Los Angeles, Los Molinos, Magalia, Malibu, Manteca, Marysville, Menifee, Millville, Modesto, Monrovia, Montebello, Moreno Valley, Mount Shasta, Murrieta, National City, North Hollywood, Northridge, Oak Run, Oak View, Oakdale, Oceanside, Orland, Oroville, Pacoima, Palo Alto, Palo Cedro, Paradise, Pasadena, Perris, Pico Rivera, Playa Del Ray, Pomona, Rancho Cucamonga, Red Bluff, Redding, Reseda, Riverside, Rosemead, San Diego, San Dimas, San Francisco, San Gabriel, San Jacinto, San Jose, San Marcos, San Mateo, San Rafael, San Ramon, Santa Clarita, Santa Cruz, Santa Maria, Santa Monica, Santa Rosa, Santee, Scotts Valley, Shasta, Shasta Lake, Sherman Oaks, South San Francisco, Spring Valley, Stockton, Studio City, Sun City, Sylmar
Tarzana Connecticut Clearwater Beach
Temecula Chester Clermont
Temple City Clinton Coconut Creek
Thousand Oaks Durham Cooper City
Tiburon East Haven Coral Gables
Turlock Essex Coral Springs
Tustin Groton Crystal River
Upland Ivoryton Davie
Valencia Litchfield Deerfield Beach
Van Nuys Middlefield Dunedin
Venice New Britain Fruitland Park
Ventura New Haven Fort Lauderdale
Vista New London Fort Myers
West Hollywood Oakdale Glen Saint Mary
Willows Oakville Green Cove Springs
Winchester Southington Greenacres
Windsor Stratford Hallandale
Woodland Torrington Hallandale Beach
Woodland Hills Waterford Hernando
Yorba Linda Watertown Hialeah
Yreka West Granby Holiday
Nokomis Trinity Helena
North Lauderdale Valrico Kennesaw
North Miami Beach Wellington LaGrange
Oakland Park Wesley Chapel Lawrenceville
Ocala West Palm Beach Lilburn
Ocklawaha Weston Lincolnton
Ocoee Lithonia
Odessa Georgia Loganville
Oldsmar Alpharetta Lula
Orange Park Athens Marietta
Palm Beach Atlanta McDonough
Palm Beach Gardens Blakely Meansville
Palm City Bogart Milton
Palm Coast Bonaire Monroe
Palm Harbor Brooks Morganton
Palmetto Buford Morris
Parkland Calhoun Norcross
Pembroke Pines Canon Oxford
Plant City Carlton Ringgold
Plantation Chamblee Riverdale
Pompano Beach College Park Rock Springs
Ponte Vedra Beach Columbus Rossville
Port Orange Comer Rydal
Port Richey Conyers Sandy Springs
Port Saint Lucie Cordele Smyrna
Redington Shores Cumming Snellville
Riverview Cuthbert Social Circle
Safety Harbor Dacula Stockbridge
Saint Augustine Decatur Stone Mountain
Saint Petersburg Doraville Summerville
Sarasota Douglasville Suwanee
Seffner Duluth Sylvester
Southwest Ranches Dunwoody Smyrna
Stuart Ellenwood Trenton
Sunrise Fayetteville Tucker
Tallahassee Flintstone Union City
Tamarac Flowery Branch Warner Robins
Tampa Fort Gaines Watkinsville
Tarpon Springs Fort Oglethorpe Winder
Temple Terrace Gainesville Woodstock
The Villages Grayson
Hawaii Buffalo Grove Matteson
Indiana Maine Pasadena
Gaithersburg Falmouth
Kentucky Glenwood Framingham
London Hampstead Hanover
La Plata Holliston
Laurel Hull
Hyannis West Harwich Hamtramck
Kingston West Yarmouth Harrison
Lancaster Weymouth Harrison Township
Lawrence Winchester Hastings
Leominster Worcester Hazel Park
Lexington Yarmouth Port Inkster
Malden Kalkaska
Manchester Michigan Kentwood
Manomet Ann Arbor Kewadin
Marion Auburn Hills Kingsford
Marlborough Battle Creek Kingsley
Marstons Mills Berkley Lake Orion
Medway Birmingham Lambertville
Mendon Bloomfield Hills Lathrup Village
Methuen Brighton Lawrence
Nantucket Brown City Lincoln Park
Natick Canton Livonia
Needham Cedar Luna Pier
Newton Center Line Macomb
North Andover Central Lake Madison Heights
North Chatham Chesterfield Mancelona
North Falmouth Clawson Manton
Northbridge Clinton Township Marysville
Orleans Coldwater Melvindale
Osterville Colon Mesick
Plymouth Commerce Milford
Quincy Dearborn New Baltimore
Roxbury Dearborn Heights Northville
Rutland Delton Norway
Sandwich Detroit Oak Park
Shrewsbury Eastpointe Oakland
Somerville Ecorse Ottawa Lake
South Attleboro Elk Rapids Petoskey
South Chatham Farmington Plainwell
South Dennis Farmington Hills Plymouth
Sterling Fenton Pontiac
Wakefield Ferndale Redford
Wareham Fife Lake River Rouge
Wayland Freeport Rochester
Wellfleet Garden City Rochester Hills
West Barnstable Grand Rapids Romulus
Roseville Duluth Woodbury
Royal Oak Forest Lake Wyoming
Saint Clair Shores Fridley
Sault Sainte Marie Golden Valley Mississippi
Shelby Ham Lake Bay Springs
South Boardman Hopkins Brandon
Southfield Hugo Decatur
Spring Lake Hutchinson Forest
Springfield Lexington Hickory
Sterling Heights Lindstrom Lawrence
Taylor Lino Lakes Little Rock
Temperance Mahtomedi Meridian
Traverse City Maple Grove Newton
Troy Maple Lake Pearl
Twin Lake Mapleton Richland
Walled Lake Minneapolis Union
Warren Minnetonka
Waterford Mounds View
Missouri
Wayland New Brighton Arnold
Wayne New London Ballwin
West Bloomfield North Mankato Barnhart
Westland Norwood Young America Battlefield
White Lake Oakdale Belton
Williamsburg Owatonna Bonne Terre
Wixom Plymouth Buffalo
Ypsilanti Rochester Chesterfield
Rockford Clarksville
Minnesota Roseville Fair Grove
Andover Saint Clair Ferguson
Big Lake Saint Cloud Florissant
Blaine Saint Francis Foley
Brooklyn Center Saint Paul Garden City
Buffalo Shakopee Hazelwood
Cambridge Shoreview High Ridge
Centerville Spring Lake Park Hollister
Champlin Stacy Imperial
Chaska Stillwater Independence
Chisago City Vadnais Heights Joplin
Circle Pines Wayzata Kansas City
Coon Rapids Wells Labadie
Delano White Bear Lake Lake Saint Louis
Lebanon Montana Portsmouth
Lee’s Summit Billings Rye
Lowry City Livingston Swanzey
Manchester Missoula
Maplewood New Jersey
Marshfield Nebraska Allendale
Nixa Firth Bloomfield
O’Fallon Lincoln Carlstadt
Olivette Omaha Cliffside Park
Oronogo Roca Closter
Osceola Seward Dumont
Ozark Valparaiso East Orange
Pacific East Rutherford
Park Hills Nevada Elmwood Park
Peculiar Reno Englewood
Pleasant Hope Fair Lawn
Raytown New Hampshire Fairview
Republic Allenstown Fort Lee
Richmond Ashland Franklin
Riverview Atkinson Garfield
Rogersville Bedford Hackensack
Saint Ann Chichester Haledon
Saint Charles Claremont Hamilton
Saint Clair Concord Jersey City
Saint John Derry Landing
Saint Louis Dover Lincroft
Saint Peters Epsom Linden
Springfield Goffstown Lodi
Strafford Hampstead Mahwah
Troy Henniker Midland Park
Villa Ridge Hooksett Newark
Walnut Shade Hudson North Arlington
Warsaw Laconia North Bergen
Webster Groves Londonderry Northvale
Wentzville Manchester Oradell
Willard Merrimack Palisades Park
Wright City Milford Paramus
Nashua Paterson
New Boston Pequannock
Pembroke Ridgefield Park
Plaistow Ridgewood
River Vale Boiceville Irvington
Riverdale Brewster Jamaica
Rochelle Park Brownville Jewett
Rutherford Canandaigua Kenmore
Saddle Brook Castleton Lake Peekskill
Secaucus Castleton on Hudson Lakeville
Sewell Churchville Lancaster
South Orange Clifton Park Lexington
Springfield Clifton Springs Lima
Succasunna Clyde Liverpool
Teaneck Cohoes Lowville
Titusville Cortland Lyons
Union Beach Croghan Macedon
Wallington Croton on Hudson Malverne
Wanaque Dansville Manchester
Wayne Dexter Maplecrest
Wenonah Dix Hills Marathon
West New York East Jewett Marion
Westville East Northport Maspeth
Westwood East Rochester Massapequa
Woodland Park Eastchester Merrick
Wyckoff Elmsford Middlesex
Fairport Miller Place
New Mexico Farmington Mineola
Espanola Freedom Monroe
Las Cruces Gansevoort Mount Vernon
Los Alamos Garden City Naples
Santa Fe Garden City South New Hyde Park
Geneseo New Rochelle
New York Geneva New York
Acra Glen Park Newark
Albany Glenmont Niagara Falls
Ardsley Glenville Niskayuna
Arkville Harrison North Rose
Ashland Hemlock North Syracuse
Astoria Hempstead Nunda
Auburn Hensonville Ontario
Ballston Lake Homer Orchard Park
Beaver Falls Honeoye Falls Ossining
Bethpage Huntington Station Palmyra
Bloomfield Hurley Pelham
Penfield Westbury Southern Shores
Phelps White Plains Stanley
Pittsford Whitestone Statesville
Plainfield Williamson Sylva
Port Gibson Windham Vale
Prattsville Wolcott Wake Forest
Rensselaer Woodhaven Waxhaw
Rexford Woodstock Wilmington
Ridgewood Yonkers Youngsville
Rochester Yorktown Heights
Rockville Centre Ohio
Romulus North Carolina Akron
Rotterdam Apex Baltimore
Rye Brook Asheville Bay Village
Saugerties Brevard Bedford
Savannah Cary Brookfield
Scarsdale Catawba Canal Winchester
Schenectady Chapel Hill Canfield
Scotia Charlotte Chagrin Falls
Seneca Falls Cornelius Cleveland
Shandaken Cullowhee Columbus
Shortsville Davidson Elyria
Sleepy Hollow Denver Fredericksburg
Smithtown Durham Fremont
Sodus Franklin Guysville
Somers Garner Highland Heights
Sparkill Gastonia Hudson
Spencerport Grover Ironton
Stillwater Hendersonville Jefferson
Tappan Huntersville Kent
Truxton Indian Trail Kitts Hill
Uniondale Kitty Hawk Lakewood
Verplanck Louisburg Lithopolis
Victor Matthews Logan
Walworth Mint Hill Mentor
Waterloo Mount Holly Miamisburg
Watertown Otto Painesville
Watkins Glen Raleigh Pedro
Webster Roanoke Rapids Solon
West Hempstead Salisbury Strongsville
West Henrietta Shelby Tiffin
Toledo Brockway Souderton
Twinsburg Brookville Trumbauersville
Westerville Claysburg Upper Darby
Wickliffe Conshohocken Wallingford
Wooster Dallas Warminster
Everson Warrington
Oklahoma Fort Washington West Chester
Drumright Forty Fort West Pittston
Sallisaw Freeport Wexford
Stillwater Furlong Wilkes-Barre
Gettysburg Williamsport
Oregon Homer City Williamstown
Aloha Hummelstown Wynnewood
Astoria Indiana Yardley
Beaverton Irwin
Bend Johnstown Rhode Island
Cannon Beach Kingston Pawtucket
Clackamas Langhorne Providence
Cornelius Lititz
Eugene Malvern
South Carolina
Grants Pass Media Aiken
Gresham Milton Anderson
Happy Valley Montgomery Boiling Springs
Lake Oswego Montoursville Camden
Manzanita Mount Pleasant Cassatt
Nehalem Nesquehoning Charleston
Oregon City Newton Clover
Portland North Wales Columbia
Sandy Oakmont Easley
Seaside Oreland Elgin
Tillamook Palmerton Florence
Tolovana Park Perkasie Fort Mill
Troutdale Philadelphia Fountain Inn
Warrenton Pittsburgh Goose Creek
Pittston Greenville
Pennsylvania Greer
Pottstown
Allison Park Quakertown Hanahan
Bechtelsville Richeyville Hartsville
Bensalem Scranton Kershaw
Blairsville Shavertown Landrum
Bradford Shelocta Leesville
Lexington McDonald Crosby
Lugoff Murfreesboro Dallas
Mauldin New Tazewell Dayton
McBee Oliver Springs Denton
McCormick Oneida DeSoto
Moore Ooltewah Elgin
Mount Pleasant Pigeon Forge Euless
Myrtle Beach Pioneer Farmers Branch
Rock Hill Powell Farmersville
Saluda Sevierville Fort Worth
Simpsonville Seymour Fredericksburg
Summerville Signal Mountain Frisco
Wellford Soddy Daisy Garland
West Columbia Sunbright Georgetown
Tellico Plains Gonzales
Tennessee Ten Mile Grand Prairie
Alcoa Walland Helotes
Apison Woodbury Hereford
Blaine Highlands
Caryville Texas Holliday
Chattanooga Addison Houston
Cleveland Allen Huffman
Corryton Austin Humble
Dandridge Balch Springs Iowa Park
Dayton Baytown Irving
Dunlap Bellaire Katy
East Ridge Belton Keller
Georgetown Blue Ridge Killeen
Graysville Boerne Kingwood
Harrison Bryan Lake Dallas
Helenwood Bulverde Lancaster
Hixson Burkburnett La Porte
Huntland Burke Leander
Huntsville Burleson Lewisville
Jacksboro Carrollton Liberty Hill
Knoxville Cedar Hill Live Oak
Kodak Cedar Park Llano
La Follette Channelview Longview
Louisville Cleveland Louisville
Luttrell Coppell Lucas
Maryville Corinth McKinney
Mesquite Utah Daleville
Missouri City Clarkston Falls Church
Murphy Clearfield Ferrum
New Braunfels Ephraim Forest
New Ulm Layton Fredericksburg
Odessa Lehi Glade Hill
Olney Logan Hampton
Pasadena Midvale Hardy
Pearland Murray Lynchburg
Pflugerville Orem New Castle
Pinehurst Pleasant Grove Newport
Plano Provo Newport News
Porter Salt Lake City North Tazewell
Princeton Sandy Oak Hill
Richardson Santaquin Reston
Rosharon Saratoga Springs Roanoke
Round Rock Spanish Fork Rocky Mount
Rowlett Springville Salem
Sachse Taylorsville Springfield
San Antonio West Jordan Sterling
San Saba West Valley City Troutville
Schertz Warrenton
Seguin Vermont Winchester
Selma Essex Junction Woodbridge
Spring Highgate
Spring Branch Washington
Lyndonville
Stafford Passumpsic Auburn
Stephenville Rochester Bellevue
Sterling City South Burlington Bellingham
Sugar Land Swanton Blaine
Sunnyvale White River Junction Brush Prairie
Taylor Burien
Weatherford Virginia Camano Island
Wichita Falls Alexandria Centralia
Wolfe City Barboursville Clinton
Wylie Boones Mill Coupeville
Bristow Des Moines
Callaway Dupont
Charlottesville Duvall
Christiansburg Eatonville
Cumberland Edmonds
Everett West Virginia Hartford
Federal Way Anmoore Lake Geneva
Ferndale Bridgeport Lakewood
Freeland Buckhannon Manawa
Gig Harbor Charleston Marathon
Kenmore Clarksburg Milwaukee
Kennewick Fairmont Muskego
Kent Fort Ashby New Berlin
Lacey Franklin Oak Creek
Lake Tapps Grafton Oconomowoc
Lakewood Harpers Ferry Racine
Langley Hedgesville Saint Francis
Liberty Lake Huntington Salem
Lynnwood Keyser South Milwaukee
Milton Lost Creek Superior
Moses Lake Mineral Wells Townsend
Mountlake Terrace Monongah Union Grove
Newcastle Morgantown Wabeno
Oak Harbor Nutter Fort Waterford
Okanogan Ona Wauwatosa
Olympia Parkersburg West Allis
Packwood Ridgeley Winneconne
Port Angeles Shinnston
Port Hadlock Springfield
Port Townsend West Milford
Puyallup
Redmond Wisconsin
Renton Bonduel
Seattle Burlington
Sequim Cecil
Shelton Cedarburg
Shoreline Cudahy
Snoqualmie Delavan
Spokane Eagle River
Steilacoom Elkhorn
Tacoma Fontana
Tukwila Franklin
University Place Germantown
Glendale
Greenfield
Hales Corners
Appendix B
Test 6: Writing Samples Scoring Guide
Use this Scoring Guide to score Test 6: Writing Samples. The guide provides sample
responses to each item, arranged by the number of points that should be awarded (1 or 0 points
for Items 1 through 6; 2, 1, or 0 points for Items 7 through 28). Although examples are
not provided, items may also be scored 0.5 or 1.5 points if the quality of the response
falls between the examples given for the other scores. A description of the criteria to be used
in scoring the examinee’s responses is also included. Turn to the correct guide for Form A,
Form B, or Form C.
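The point ranges described above can be tallied mechanically once each item has been scored against the guide. The sketch below is purely illustrative and is not part of the WJ IV materials; the function names `allowed_scores` and `raw_score` are hypothetical, and in practice item scores are entered into the online scoring and reporting program. It assumes the half-point rule permits 0.5 on Items 1 through 6 and 0.5 or 1.5 on Items 7 through 28.

```python
# Illustrative sketch only -- not part of the WJ IV materials.
# Assumption: Items 1-6 may receive 0, 0.5, or 1 point; Items 7-28 may
# receive 0, 0.5, 1, 1.5, or 2 points, per the ranges described above.

def allowed_scores(item_number):
    """Return the set of point values permitted for a Writing Samples item."""
    if 1 <= item_number <= 6:
        return {0, 0.5, 1}
    if 7 <= item_number <= 28:
        return {0, 0.5, 1, 1.5, 2}
    raise ValueError("Writing Samples items are numbered 1 through 28")

def raw_score(item_scores):
    """Sum item scores after checking each against its permitted range.

    item_scores maps item number -> points awarded for that item.
    """
    total = 0.0
    for item, points in item_scores.items():
        if points not in allowed_scores(item):
            raise ValueError(f"Item {item}: {points} is not a permitted score")
        total += points
    return total
```

For example, under these assumptions `raw_score({2: 1, 7: 1.5, 8: 2})` totals 4.5, while passing an out-of-range value such as 2 points on Item 4 raises an error.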
2. This is a _________. (car)
cra, kra ■■ has all three letters but out of sequence; must begin
with c or k
0 points
krk ■■ includes a letter representing an incorrect sound
crg
0 points
t ■■ two or fewer correct letters
tr
4. This girl is very sad. This girl is very _________. (happy)
0 points
hpy ■■ missing one or more sounds
hpe
hpee
gad
gla
0 points
hat ■■ one word
black
6. Write a good sentence that tells what is happening in the picture. (boy skating)
0 points
The boy is funny. ■■ does not describe the action
Note: Items 7 through 28 are scored 2, 1, or 0 points. Credit on most items requires a complete sentence. Do not penalize
the examinee for punctuation, capitalization, spelling, or usage errors unless otherwise indicated in the item scoring
criteria.
7. Write a good sentence that tells what is happening in the picture. (bird feeding babies)
1 point
A bird is feeding its babies. ■■ provides a simple description with no additional detail
The bird has a worm.
The mom is feeding the babies.
She is feeding her babies.
0 points
This is a bird. ■■ limited content
The bird is feeding.
It is eating.
8. Write a good sentence that tells what the girl is doing. (girl swinging)
2 points
The girl is playing on her swing set. ■■ describes the picture in some detail
The girl is swinging on the swing in the park.
She is swinging at the playground.
1 point
She is swinging (playing). ■■ uses a pronoun in place of the word girl
The (This, That) girl is swing. ■■ refers to the girl but incorrectly uses the verb phrase is
swinging
0 points
The girl is nice. ■■ does not describe the action
9. A woman cannot find her car keys. Write one good sentence that tells about this and uses the words by the car.
1 point
The woman could look by the car. ■■ does not mention the keys
She is looking by the car.
Maybe I should look by the car.
She lost (finds, left) them by the car. ■■ uses pronouns in place of the words woman and keys
The (My) keys are by the car. ■■ does not mention the woman
The woman cannot find her car keys. They are by the car. ■■ uses two sentences
Note: Punctuation is not required between the two
sentences.
0 points
Her keys are in her pocket. ■■ does not use the phrase by the car
10. Write one good sentence that tells what is happening in this picture and what could happen next. (cat and fishbowl)
1 point
He is trying to catch and eat the fish. ■■ simply describes both the action and a possible outcome
The cat is trying to get the fish and eat it.
The cat will (might, could, is going to, is trying to) eat (get) the fish. ■■ describes only the action or a possible outcome
The cat is reaching in the fishbowl for a fish.
A cat is trying to eat the fish. He will get the fish with his paw. ■■ uses two sentences to describe the action and a possible outcome
Note: Punctuation is not required between the two sentences.
0 points
Cats like fish. ■■ limited content
The cat ate the fish.
11. Write a good sentence that tells what a flashlight does.
1 point
It shines so you can see. ■■ uses a pronoun in place of the word flashlight
It lights up for you.
It shines bright at night.
A flashlight shines (lights up, makes light, gives light). ■■ refers to a flashlight but has limited content
A flashlight helps you see.
0 points
A flashlight is good to have. ■■ does not describe what a flashlight does
This flashlight is on.
This is a flashlight. It is glowing.
12. Write a good sentence that tells what is happening in the picture. (dog burying bone)
2 points
The dog is digging up bones for his dinner. ■■ includes a description of the dog’s action in some detail
1 point
The dog is going to bury his bone. ■■ describes the dog’s action and mentions the hole, the ground, or the bone
The dog is digging a hole.
The dog is burying a bone.
The dog is digging in the ground.
He (She, It) is digging a hole for his (her, its) bone. ■■ uses a pronoun in place of the word dog or bone
The dog is burying it in the ground.
0 points
The dog is burying. ■■ limited content
The dog is digging.
the dog digging
13. Write a good sentence that tells about the picture and uses the word because. (boy on crutches)
2 points
Javier broke his leg because he jumped off the barn last Saturday. ■■ uses the word because and refers to either the boy on crutches or the children running by with some detail
The boy and girl are running laps because they need to get ready for the district track meet.
The boy is wearing a cast because he broke his ankle in the high jump.
1 point
A boy and girl are looking at someone because he has a broken leg. ■■ uses the word because, and refers to the boy and the children in a simple sentence
The boy broke his leg because he fell down. ■■ uses the word because but only refers to the boy or the other children
The boy is using the crutch because he hurt his foot.
The boy is sad because he broke his leg.
Because he broke his foot, he needs crutches.
He broke his leg because he fell out of a tree.
The other children can run because they did not get hurt.
0 points
He is not having fun. ■■ does not use the word because
14. Write one sentence that tells three things you like to do on weekends. Remember, three things.
1 point
On the weekends, I like to eat, sleep, and play. ■■ includes three simple weekend activities that are correctly punctuated with commas
I like to play, run, and listen to my radio.
I like to sleep, eat, and read.
I like to fish and swim and dance. ■■ includes three activities connected by and or with no commas
I like to swim dance and sleep.
I like to ride my bike. I like to watch cartoons. I like to go swimming. ■■ uses two or three complete sentences that include three activities and end with periods, are connected with commas, or are not punctuated
I like to go places, I like to swim, and I like to go to my friends.
I like to play with my friends then I swing then I get mail.
Watch TV, ride bikes, and go to the movies. ■■ three verb phrases that are correctly punctuated
0 points
play, eat, and run ■■ three simple verbs
15. Write a good sentence that tells about the picture and uses the word and. (playing catch)
1 point
The boy and girl are playing with the ball. ■■ uses the word and, refers to the boy and girl, and tells that they are playing with the ball
The boy and girl are playing ball (tossing the ball, throwing the ball, catching the ball).
The boy and girl play (are playing) catch.
Ann and Bob are playing catch.
The boy and girl are throwing it to each other.
They’re having fun and playing ball.
We threw the ball to each other and caught it.
He and she are catching the ball.
0 points
The boy and girl play. ■■ mentions the boy and girl but not the ball
He and she play.
They are playing catch and with a big ball. ■■ uses the word and incorrectly
The people are playing with and ball.
The boy is playing catch. ■■ does not use the word and
16. Write a good sentence that tells why it is dangerous to wear headphones when you ride a bicycle down a busy street.
2 points
Headphones block out street and traffic noises that may be important for your safety and the safety of others. ■■ explains in detail a reason why it may be dangerous
Headphones are hazards for bicyclists because they drown out the sounds of approaching vehicles.
It is dangerous because you can’t hear a car or bus if an emergency were to occur.
You might not hear a car honking at you and you could get hit.
1 point
You might not hear a car honking at you. ■■ states that you might not hear traffic
You cannot hear what is happening around you.
A car may honk its horn and you may not hear it.
You might not hear the traffic coming.
It is dangerous because you couldn’t hear the cars. ■■ simply states that it is dangerous and provides a reason why
It is dangerous because you can’t hear traffic.
It is dangerous because you could get hurt.
It is dangerous because you might get hit by a car.
Wearing headphones can cause a crash while riding a bike.
It is dangerous to wear headphones because you may fall down and poke out your eardrum. ■■ states that it is dangerous but expresses unlikely concerns (poke out eardrum, hurt bicycle, break headphones)
0 points
It can cause an accident. ■■ does not clearly specify or suggest an obvious danger
A man might say “watch out.”
The cord may get tangled in the tire (spokes). ■■ expresses concern over the headphones, bicycle, or the person’s ear
If you fall it may poke in your ear.
You cannot hear cars. ■■ limited content
You may crash.
You might get hurt.
Because you can’t hear a car.
Because you can crash into a car.
17. Write one sentence that tells what a rainbow looks like.
2 points
A rainbow looks like a gigantic arc painted with many colors. ■■ compares a rainbow in detail with something that is similar in appearance
A rainbow is a prism of colors floating free for only a short time.
A rainbow looks like a bouquet of wildflowers thrown into the sky.
1 point
A rainbow has many colors like red, blue, green, and yellow. ■■ provides a simple description
A rainbow has pretty colors and it’s beautiful.
A rainbow has all different colors.
0 points
I saw a rainbow. ■■ does not describe a rainbow
18. This information will be included in a paragraph about sports. Write a good main or topic sentence for the
paragraph.
1 point
This is a report on sports. ■■ provides only a simple fact or statement
Some sports are tennis, swimming, baseball, and soccer.
I like tennis, swimming, baseball, and soccer.
My favorite sports are tennis, swimming, baseball, and
soccer.
0 points
Most of these sports are fun. ■■ refers to the list of sports in a general way
All of these sports take energy.
I play soccer and tennis and then swimming and baseball. ■■ uses all of the listed words but does not write a
topic sentence
19. The second sentence is missing from this paragraph. Write one good sentence that goes with the other two sentences.
(1) Whenever you buy a present, you should consider the interests of the receiver. (2) _________. (3) If, on the
other hand, you are selecting a gift for your little cousin, you might choose a caboose for his new electric train set.
1 point
If your dad wants a new sweater, buy it. ■■ refers to a receiver and describes the gift but does not maintain the writer’s style
Your mom would like a tennis racket.
Your best friend may like a T-shirt.
You should consider whom it’s for and what he or she likes. ■■ provides a general statement about buying presents for people
Find something in the person’s age range.
Think of something practical for the person.
0 points
You can’t always tell. ■■ limited content
Go ahead and ask them.
Buy a book.
20. Write one good sentence that uses the words despite his effort.
1 point
Despite his effort, he could not climb over the fence. ■■ demonstrates an understanding of the phrase and may suggest the person’s effort but does not describe a specific situation or context
Despite his effort, the team still lost.
Despite his effort, he didn’t make the team.
Despite his effort, he still failed the test.
0 points
His writing was good despite his effort. ■■ uses despite his effort but contradicts the meaning of the phrase or uses it incorrectly
Despite his effort, he got an “A” on the test.
He will despite his effort go to work.
21. The second sentence is missing from this paragraph. Write a good sentence that will fit.
(1) That year we moved from the quiet, peaceful countryside to the fast-paced city. (2) _________. (3) That was the
one activity I missed most.
1 point
We sat on the porch swing. ■■ simply describes an activity
We could listen to the birds.
I used to play with our animals.
0 points
I really missed the tranquility and quietness. ■■ refers to the country but does not mention an activity
22. Write one good sentence that tells how a library and a forest are alike.
1 point
Both libraries and forests are alike because they are quiet. ■■ explains how a library and a forest are alike, stating a specific similarity in a simple sentence
Paper is made of wood and wood comes from trees in a forest.
A forest has lots of trees and a library has a lot of books.
Peaceful moments can be found in a library and a forest.
A library and a forest are alike because they both have wood in them.
A library and forest are alike because a library has books about the forest.
A library and a forest are alike because a library is full of books that are made of wood, and a forest contains trees made of wood.
They are both quiet places where you can spend a peaceful afternoon. ■■ does not mention a library and a forest but gives a detailed reason of how they are alike
0 points
They both are in cities. ■■ does not express an obvious or correct similarity
23. The second sentence is missing from this paragraph. Write one good sentence that the writer may have used.
(1) In one of the black vinyl booths off to the left, a middle-aged woman and man were hunched over an immaculate
table. (2) _________. (3) The man was round from every aspect; a small round potbelly, meaty forearms with black
hair, a chubby face with a mouth too small, and a puggish nose that ended in a little round ball.
1 point
The woman was tall and thin with curly hair and a pointed nose. ■■ describes the woman or the couple with simple vocabulary (tends to use common nouns, verbs, and adjectives); does not maintain the writer’s style
The woman had beautiful sleek, black hair and oval eyes.
The thin woman did not seem to belong with her date.
The woman was staring at the unusual-looking man.
The woman was very thin and frail-looking.
They were a very odd couple and sat in the booth very quietly.
They both had an extremely unique appearance.
0 points
The woman wore a red hat. ■■ limited content
They looked old.
They were eating the food.
The tables were very close.
24. Finish the sentence.
Few people understood the extent of his disappointment, the loss of his desire, or _________.
1 point
…how he would recover from this bitter experience. ■■ maintains sentence meaning but does not maintain the parallel structure of the sentence
…his inspiration to carry on the fight.
…his helpless feelings.
…the name of his company.
…lack of initiative.
…the loss of his ambitions. ■■ uses parallel structure but repeats the word loss
0 points
…his mother ■■ inconsistent with sentence meaning
25. The second sentence is missing from this paragraph. Write one good sentence that the writer may have used.
(1) Two mountain chains traverse the country roughly from east to west forming between them a number of verdant
valleys and plateaus. (2) _________. (3) The walls of the town, which is built on a hill, are high, the streets and lanes
tortuous and broken, the roads winding and narrow.
1 point
My hometown lies on the northernmost chain of the mountains. ■■ refers to a town and describes the setting or town with simple vocabulary (tends to use common nouns, verbs, and adjectives); does not maintain the writer’s style
Hidden among the two mountain chains is a small town of natives.
A small group of travelers stopped here and built a small town.
There lies the sleepy little town of Borna.
In the valleys on one plateau is a town.
One town is found between two mountains.
0 points
So many towns were built in these valleys. ■■ does not refer to one town
Located in these valleys are quaint little towns.
Many hot air balloonists survey the land. ■■ does not mention a town
The area can be dangerous because of all the steep ranges.
26. Write a sentence about being hired for a job. Include the word thus in your sentence.
1 point
I was the only one who applied thus I was hired. ■■ uses the word thus correctly in a simple sentence. Note: Punctuation is not required.
I had a lot of experience thus, I got the job.
I was the only one who had training. Thus, I got the job. ■■ provides two sentences that use thus correctly. Note: Punctuation is not required.
0 points
Thus I got the job. ■■ begins with thus
Thus, I was hired for the job.
I wanted to get a job; thus, I was too young. ■■ uses thus incorrectly
27. The topic sentence is missing from this paragraph. Using the expressive style that the writer used, compose a good
topic sentence that communicates the paragraph’s main idea.
1 point
We can gather a lot of information by touching things because there just aren’t that many substances things can be made from. ■■ an adequate topic sentence that communicates the paragraph’s main idea but does not include additional descriptive words
With just the touch of an object you can usually tell if it is a natural or synthetic substance.
All objects are made of only a few substances.
The sense of touch can be relied on to give truthful information about the composition of our surroundings.
Through our sense of touch, we can judge what things are made of.
0 points
What is everything made out of? ■■ a topic sentence that doesn’t capture the scope of the paragraph
Our sense of touch is an amazing utility. ■■ a sentence that does not introduce the main idea of the paragraph
We know what it looks like, but what is it really made of?
28. The concluding sentence is missing from this paragraph. Using the expressive style that the writer used, compose a
good final sentence that summarizes the content of this paragraph.
Artistic expressions of the world we inhabit go back much farther in time than the well-known art of the ancient
Egyptians and Greeks. Carefully realized, well-rendered images of animals and people, created thirty thousand
years ago, have been found in ancient caverns in France and elsewhere. Cave paintings, with a variety of colored
pigments, depict subjects running or standing still and cleverly placed shading creates the believable illusion of three
dimensions on flat or mildly contoured sections of cave walls. These paintings were made more than twenty-seven
millennia before the sculptures decorating the Parthenon in Athens. ________________________________________.
1 point
People might think Greek art is the oldest there is, but some well-done cave paintings are far older. ■■ an adequate concluding sentence that summarizes the paragraph’s main idea but does not include additional descriptive words
This ancient art work proves to us that artistic expression has always been valued throughout human history.
Art has been an important part of human society since its inception.
Cave paintings show us that art has been important to humans for a long period of time.
0 points
Cave paintings are very old. ■■ a concluding sentence that doesn’t capture the scope of the paragraph
3-D imagery was in existence even then.
Ancient three-dimensional art has evolved to 3-D movies and television shows. ■■ a sentence that introduces new content
General Scoring Guidelines
■■ Do not penalize the examinee for punctuation, capitalization, spelling, or usage errors
unless otherwise indicated in the item scoring criteria.
■■ Do not penalize the examinee for poor handwriting unless the response is illegible.
Items 1–6
■■ A 1-point response is a standard response (meets task requirements).
hta ■■ has all three letters but out of sequence; must begin with h, c, or k
0 points
hr ■■ includes a letter representing an incorrect sound
at ■■ missing h, c, or k
0 points
t ■■ one or no correct letters or does not include a vowel
tl
0 points
big ■■ one word
box
0 points
old ■■ one word
shoe
6. This animal is a cow. Write a sentence that tells what this animal is. (fish)
0 points
the fish ■■ limited content
fish
Note: Items 7 through 28 are scored 2, 1, or 0 points. Credit on most items requires a complete sentence. Do not penalize
the examinee for punctuation, capitalization, spelling, or usage errors unless otherwise indicated in the item scoring
criteria.
7. This woman is a queen. Write a sentence that tells what this man is. (king)
2 points
The man is a king and wears a crown. ■■ identifies the king or prince and adds a detail
He is the king of a faraway land.
This is a king who rules a country.
1 point
This (That, He) is a king (prince, ruler). ■■ omits the word man
A king rules a country. ■■ does not directly state that the man is a king or a prince
A king is married to a queen.
A prince will turn into a king.
0 points
the king ■■ limited content
king
8. The man is showing the woman some shoes. Write a question that tells what the man may have asked.
1 point
What size? ■■ an abbreviated question with a question mark
How about these?
Like these shoes?
Will these be OK?
How much do these cost? ■■ a question that the woman might have asked
Do you have these shoes in size 11?
He is asking her if this one is the right color. ■■ a statement explaining what the salesperson may have asked
The man asked the woman what size she wears.
Are these the shoes you want to try. ■■ a question without a question mark
Do you like these new red shoes.
What size do you wear.
0 points
What size. ■■ an abbreviated question without a question mark
How about these.
9. Write a good sentence that tells about this toy. (toy truck)
1 point
This (The) toy is a truck. ■■ identifies the picture as a truck
This (It) is a truck.
The toy is fun to play with. ■■ simply describes one characteristic of a toy truck
This is a big truck.
It is the boy’s truck.
It is my favorite toy.
It is fun to play with.
0 points
It is pretty. ■■ does not describe the truck
It is small.
10. Write one good sentence that tells what is happening in the picture. (present)
2 points
The joyous little boy received a toy airplane on his birthday. ■■ describes the action (opening a present) and the outcome (got a plane, toy)
As he opened the package, he found a toy plane.
He is excited about the new toy airplane that he got on his birthday. ■■ describes the boy’s reaction (happy, excited) and the outcome (got a plane)
The boy is very happy about the new model plane he received.
1 point
The boy was happy to get his own toy. ■■ provides a simple description of the picture
The boy is opening a toy airplane.
The boy got an airplane.
The boy got a new toy plane.
He is opening a birthday present.
The boy is opening a package.
0 points
He is opening something. ■■ limited content
He got a plane.
He is very excited.
It’s an airplane.
He got something.
He is playing.
11. Write one good sentence that tells what is happening in the picture. (boy and girl under an umbrella in the rain)
1 point
The children (A boy and girl) are sharing an umbrella. ■■ mentions only two of the three elements—the children, the umbrella, the rain
Sara and Chris are sharing an umbrella.
Two people are under an umbrella.
A boy and girl (two kids) are walking in the rain.
It is raining and they are sharing an umbrella. ■■ uses a pronoun in place of the word children
They are walking and holding an umbrella.
0 points
It is raining. ■■ mentions only one element
They are walking.
They have an umbrella.
12. The second sentence is missing from the paragraph. Write one good sentence that goes with the other two sentences.
(1) Before I go out into a winter storm, I do several things. (2) _________. (3) Next, I put a woolen hat on my head
and pull it down over my ears.
1 point
I put on a warm coat (jacket, scarf, boots). ■■ provides an appropriate first step
I get my soft coat on.
0 points
Put on a jacket. ■■ limited content
And I get a jacket
Like put on a coat.
a jacket and scarf
13. Write a good sentence that tells what is happening in this picture and what could happen next. (blindfolded boy
about to trip over a chair)
1 point
He can’t see and may trip. ■■ simply describes both the action and an outcome
He has a blindfold and will fall.
The boy is going to trip over (is walking into) the chair. ■■ describes only the action or an outcome
The boy could fall and get hurt.
The boy is walking with a blindfold on.
He is walking blindfolded. He is going to trip over the chair. ■■ uses two sentences to describe the action and an outcome
A boy is walking into a chair. He might fall.
Note: Punctuation is not required between the two sentences.
0 points
He’s going to trip. ■■ simply describes only the action or the outcome
He will fall down.
He is blindfolded.
He can’t see where he is going.
The boy is going to sit down.
14. Write one sentence that tells three things you would like to do on a sunny day.
1 point
I would like to swim, play baseball, and go hiking. ■■ includes three simple activities that are correctly punctuated with commas
I would like to swim and run and sleep. ■■ includes three simple activities connected by and or with no commas
I like to hike swim and go on a picnic.
First I’d play basketball. Then I’d take a shower. Last I’d walk to my friend’s house. ■■ uses two or three complete sentences that include three activities and end with periods, are connected with commas, or are not punctuated
I would like to go swimming. I’d like to go to the mall. I’d like to sail in the ocean.
Play video games, play with friends, and play with my pet snake. ■■ three verb phrases that are correctly punctuated
0 points
swim, fish, run ■■ three simple verbs
15. Write a good sentence that tells what a balloon looks like.
2 points
A balloon is like a big floating colored circle. ■■ compares a balloon in detail with something that is similar in appearance
A balloon is a delicate ball that floats in the air.
A balloon looks like a semi-transparent sphere.
1 point
A balloon is round and pretty. ■■ provides a simple description
A balloon is large, round and very light.
A balloon looks like a round ball with a string.
Balloons are round and come in a variety of colors.
A balloon is filled with air and may be any color.
A balloon looks like a round ball. ■■ simply compares a balloon to a round object like a ball, circle, bubble, or the letter o
A balloon looks like a circle.
A balloon looks like a bubble.
It looks like a big ball. ■■ uses a pronoun in place of the word balloon
It is round and pretty.
0 points
I see a balloon. ■■ does not describe a balloon
I have a big balloon.
16. The second sentence is missing from this paragraph. Write a good sentence that will fit.
(1) When my father agrees to build a house, he follows several steps. (2) _________. (3) Next, he determines the
exact plan his customer has in mind.
1 point
He sets up a meeting with the people. ■■ provides an appropriate first step
He makes sure the house will fit the building code.
He checks out the location.
He needs to draw a plan.
17. Write one sentence that tells two ways that a bus and a train are alike.
1 point
Both get you somewhere and have wheels. ■■ explains two ways a bus and a train are alike using simple vocabulary
They both are long and carry many people.
They are both methods of transportation and require fuel.
A bus and a train both take people places and they are both means of transportation. ■■ explains two ways a bus and a train are alike but the ways are similar in meaning
A bus and a train both have wheels. They both go fast. ■■ uses two sentences
Note: Punctuation is not required between the two sentences.
A bus and train are able to move many people at one time. ■■ explains one way a bus and train are alike in detail
A bus and a train are alike because they both carry people around town.
A bus and a train are both popular means of transportation.
0 points
They drive and make noise. ■■ does not express an obvious or correct similarity
They both have engineers and tires.
They both have motors. ■■ tells one way a bus and train are alike but has limited content
They both carry people.
People ride in them.
18. The ski lodge is popular because of its easy accessibility, because of its large swimming pool, and _________.
1 point
…view of the ski resort. ■■ maintains sentence meaning but does not maintain the parallel structure of the sentence
…nice, warm, comfortable rooms.
…challenging ski slopes.
…the white fluffy snow.
…warm hot tub.
0 points
…long lines ■■ inconsistent with sentence meaning
19. You have been asked to write an adventure story about a stormy trip on a boat. Write an exciting first sentence.
1 point
It was a dark, stormy night at sea. ■■ provides a simple opening sentence (tends to use common verbs, nouns, and adjectives)
The thunder roared as the waves crashed against the boat.
The boat was rocking furiously and water came into the boat.
The huge waves crashed into the side of our boat.
We were out at sea and the weather started to turn.
It was a stormy day and I was in a boat with my friend.
0 points
The Bad Trip ■■ provides a title instead of first sentence
20. Write a good sentence using the words despite her anger.
1 point
Despite her anger, she didn’t yell at him. ■■ demonstrates understanding of the phrase and may suggest the person’s restraint but does not describe a specific situation or context
Despite her anger, she bit her tongue.
Despite her anger, she participated reluctantly.
Despite her anger, Lucy didn’t lose her temper.
She finished the job despite her anger.
Despite her anger, he still worked with her.
0 points
She was very mad, despite her anger. ■■ uses despite her anger but contradicts the meaning of the phrase or uses it incorrectly
Despite her anger, she slammed the door.
Despite her anger, she is nice.
She will despite her anger today.
She likes to despite her anger on me.
21. The second sentence is missing from this paragraph. Write one good sentence that goes with the other two sentences.
(1) The drama is set in an industrial factory in a small community. (2) _________. (3) Her offer to help him evolves
into a lasting friendship.
1 point
A woman tries to help a man. ■■ introduces two people in a simple sentence
A man is having trouble and a woman comes to help.
The main characters are Bart and Mary.
Two people met while working there.
0 points
All the important characters were in the factory. ■■ does not unite the first and third sentences
Where people get to know each other.
22. You have been asked to write an essay about the importance of world peace. Write a good first sentence.
1 point
World peace is an important factor in our everyday lives. ■■ provides a simple sentence or question
There are many important things in this world, but to
me and many other people, world peace is the most
important.
World peace is very important for future generations.
Peace in our world is the only thing that really matters.
Wouldn’t it be nice if all the nations could come together?
What can we do to achieve world peace?
0 points
The Importance of World Peace ■■ provides a title instead of a first sentence
23. The second sentence is missing from this paragraph. Write a good sentence that the writer may have used.
(1) The slope on the left was densely wooded, and the somber shadow that fell from the hillside lay like an amber
rope on the morning mist. (2) _________. (3) Between these diverse ridges, a long, ruffled trail wound sinuously up
the precipitous incline, carving a path like a charmed snake.
1 point
The other slope was not densely wooded, but had more rocks. ■■ refers to the slope on the right or the diverse ridges with simple vocabulary (tends to use common verbs, nouns, and adjectives); does not maintain the writer’s style
The slope on the right was clear and very open.
The slope on the right was even denser than the other slope.
The slope on the right had a very steep incline.
On the other side of this rocky hillside were tall, steep ridges.
At the bottom of the slope there were several diverse ridges.
The warm wind rustled through the pine trees and seemed to be whispering a song to us. ■■ does not refer to the slope on the right or the diverse ridges, but maintains the writer’s style
0 points
It was a beautiful day. ■■ limited content
Also, the snow was thick.
We didn’t go down that slope.
Both sides looked bad.
The ridges looked very challenging
24. The second sentence is missing from this paragraph. Write one good sentence that the writer may have used.
(1) The first creatures, gigantic earthworms with trifurcated tongues ending in suction cups, are capable of rapid
subterranean transit. (2) _________. (3) These great special effects, modeled after the mutant-monster tradition of
1950s’ horror movies, satirize that tradition in a delicate way.
1 point
They also created great dinosaur-like creatures with huge fangs and claws. ■■ describes the other creatures, special effects, or earthworms with simple vocabulary (tends to use common nouns, verbs, and adjectives); does not maintain the writer’s style
These creatures also had saliva dripping from their mouths.
They attack entire towns, crushing cars and houses on their way.
These ugly creatures seemed real but were only make-believe models.
The second creatures were not as big but they were more monstrous.
0 points
Hollywood made them. ■■ limited content
They were made by filmmakers.
The next creatures were from a movie.
The earthworms were ugly.
25. The president and the press disliked the demonstrators because of their youthful arrogance, because of their seeming disdain for traditional values, and _________.
1 point
…their destruction of property. ■■ maintains sentence meaning but does not maintain the parallel structure of the sentence
…their obvious lack of respect.
…the damage they were doing to the park.
…because they disrupted the meeting.
…because they were causing too much publicity.
…because of their hair. ■■ maintains parallel structure but has imprecise content
…because of their dogs.
…because of their stupid ideas.
0 points
…their simple respect for their fellow Americans. ■■ inconsistent with sentence meaning
…because of their nice manners.
26. Write a sentence that tells about the picture and uses the word nevertheless.
1 point
A lot of words are the same, nevertheless they can differ. ■■ uses nevertheless correctly in a simple sentence
These words sound the same, nevertheless they are different. Note: Punctuation is not required.
Even though the words sound alike they are different. Nevertheless, the boy will get them correct. ■■ two sentences that use nevertheless correctly
Note: Punctuation is not required.
The man was looking in the dictionary, but could not find the word nevertheless. ■■ uses the word nevertheless in a sentence, but doesn’t illustrate the meaning of the word
At the spelling bee one of the words was “nevertheless.”
The boy wrote the words there, their, hear, here, so, sow, sew, and nevertheless.
0 points
Nevertheless there are words that sound alike but are spelled differently. ■■ begins with nevertheless
Nevertheless, these words are all spelled the same way.
Nevertheless, all of these words sound the same.
The words all sound the same, nevertheless they are. ■■ uses nevertheless incorrectly
27. The topic sentence is missing from this paragraph. Using the expressive style that the writer used, compose a good
topic sentence that communicates the paragraph’s main idea.
1 point
Novelty catches our attention, but quickly ceases to be important to our awareness. ■■ an adequate topic sentence that communicates the paragraph’s main idea but does not include additional descriptive words
Our senses are strongest when we are first introduced to new stimuli.
After entering an environment, the senses that seemed strong begin to die down.
0 points
You notice smells more to begin with. ■■ a topic sentence that doesn’t capture the scope of the paragraph
We can control the sensitivity of our senses.
Our mind is busy processing data all the time. ■■ a sentence that does not introduce the main idea of the paragraph
28. The concluding sentence is missing from this paragraph. Using the expressive style that the writer used, compose a
good final sentence that summarizes the content of this paragraph.
When applied properly, glue can hold wood together with great strength, even without mechanical attachments such
as screws, bolts, or nails. The surface of wood may look and feel very smooth; but at microscopic scales, the wood is
shown to be quite rough, even porous. That rough surface gives glue tiny cavities to flow into and grip. In an hour or
a day, when the glue hardens, that grip is locked in place and may actually form a bond stronger than the wood itself.
________________________________________________________________.
1 point
Wood glue holds boards together because it dries in the porous surfaces. ■■ an adequate concluding sentence that summarizes the paragraph’s main idea but does not include additional descriptive words
Glue is a powerful tool when used correctly.
Due to the nature of wood, gluing wood can make a surprisingly strong bond.
0 points
The wood now is strongly glued with cavities that flow into a grip. ■■ a concluding sentence that doesn’t capture the scope of the paragraph
Glue can be much more useful than just making paper crafts. ■■ a sentence that is not a concluding sentence
This is how large wooden models are constructed.
This wood comes from many different types of trees. ■■ a sentence that introduces new content
General Scoring Guidelines
■■ Do not penalize the examinee for punctuation, capitalization, spelling, or usage errors
unless otherwise indicated in the item scoring criteria.
■■ Do not penalize the examinee for poor handwriting unless the response is illegible.
Items 1–6
■■ A 1-point response is a standard response (meets task requirements).
cta ■■ has all three letters but out of sequence; must begin with c or k
0 points
cr ■■ includes an incorrect sound
at ■■ missing c or k
0 points
ap ■■ missing letter representing one or more sounds
al
0 points
big ■■ one word
ball
5. This boy is standing. Write a second sentence on the line that tells what the other boy is doing. (sitting)
0 points
This boy is standing. ■■ copies first sentence
6. This animal is a horse. Write a sentence that tells what this animal is. (cow)
0 points
the cow ■■ limited content
cow
Note: Items 7 through 28 are scored 2, 1, or 0. Credit on most items requires a complete sentence. Do not penalize the
examinee for punctuation, capitalization, spelling, or usage errors unless otherwise indicated in the item scoring criteria.
7. Write one good sentence that tells what is happening in the picture. (boy/man talking on phone)
1 point
A boy is on the phone. ■■ mentions the person and the phone or talking
A boy (He) is calling his friend.
The boy is talking.
0 points
Talking on the phone. ■■ limited content
boy talking
8. Write a good sentence that tells what is happening in the picture. (chick hatching)
2 points
An egg is cracking and a baby will emerge. ■■ describes what happens to the egg and adds a detail
1 point
The chick is hatching. ■■ tells what the chick is doing
A baby bird is hatching.
The chick popped out.
The (This, An) egg is hatching. ■■ describes what happens to the egg
The egg is breaking (cracking).
That is a cracked egg.
It is hatching from the egg. ■■ uses a pronoun in place of the word chick
The egg is hatching. ■■ uses a simple sentence to describe the picture
She is hatching.
0 points
The chick is a baby. ■■ does not describe the action
It came out. ■■ does not mention either the chick or the egg
9. Write a good sentence that tells what the seal is doing. (balancing a ball)
2 points
The seal is performing at the circus for children from the homeless shelter. ■■ tells what the seal is doing and adds a good detail
1 point
The seal is doing a trick. ■■ tells what the seal is doing but does not mention the location of the ball
The seal is balancing a ball.
The seal is playing with the ball.
The seal is playing ball.
The seal is holding the ball.
The seal hit the ball into the air. ■■ describes what the seal is doing
He (She, It) is playing with a ball. ■■ uses a simple sentence to describe the picture
He is playing with a balloon. ■■ uses pronouns in place of seal or ball
The seal has it on his nose.
0 points
The seal has a ball. ■■ limited content
The seal (He, She, It) is playing.
Balancing a ball.
Playing ball.
10. A sentence is missing from this paragraph. Write one good sentence that goes with the other three sentences.
(1) When Jean packs her lunch, she takes three things. (2) First, she packs an apple. (3) _________. (4) Finally, she
puts in a carton of milk.
1 point
She packs a sandwich. ■■ provides an appropriate item
She puts in a sandwich and cookies.
0 points
And a drink. ■■ limited content
A sandwich.
Some candy.
11. Write a good sentence that tells something about this toy. (drum)
1 point
This (The) toy is a drum. ■■ identifies the picture as a toy drum
This is a toy drum.
It (This) is a drum.
There are 2 sticks and they are making an X. ■■ describes the picture with detail but does not mention the drum
0 points
It is big. ■■ limited content
It is round.
a drum
12. The girl is going to a movie. The man is asking her a question. Write a question that the man may have asked.
1 point
Which movie? ■■ an abbreviated question with a question mark
For how many people?
How many tickets?
Where can I buy candy? ■■ a question the woman might have asked
How much are the tickets?
Which way is the movie?
He probably asked what movie would she like to see. ■■ a statement explaining what the man may have asked
Would you like a ticket to the movie. ■■ a question without a question mark
What movie do you want this afternoon.
0 points
What movie do you want. ■■ an abbreviated question without a question mark
How many tickets.
13. Write one good sentence that tells how a car and a bus are alike.
1 point
Both a car and a bus have wheels (tires, motors). ■■ explains how a car and a bus are alike using simple
A bus carries people and a car carries people. vocabulary
People ride in both.
They both move and people ride in them.
Both have motors, tires, and windows.
They both are transportation (vehicles).
0 points
They both are yellow. ■■ does not express an obvious or correct similarity
14. Write a good sentence that tells why it is dangerous to dive into a pool when you do not know how deep it is.
2 points
If you dive in a pool without knowing how deep it is, you could hit your head and drown. ■■ explains in detail a reason why it is dangerous
If you do not know how deep a pool is, and it is shallow when you dive in, you could hit your head on the bottom of the pool.
If you don’t know how deep a pool is and you don’t swim well, you might drown in deep water.
When you dive into a pool and you don’t know the depth, you are taking a chance with your life.
1 point
You may hit your head and drown. ■■ simply states a reason why it may be dangerous
You might hit the bottom hard.
You could hit your head on the bottom of the pool.
You may dive too shallow and break your neck.
It’s dangerous because you might hit your head on the bottom.
It is very dangerous because you can drown. ■■ simply states that it is dangerous and tells a reason why
It is dangerous because you might get hurt.
It is dangerous because you could end up paralyzed.
It is dangerous to dive into a pool when you don’t know the depth, because you could crack the pool. ■■ states that it is dangerous but expresses unlikely concerns (it would hurt the pool)
0 points
It can cause an accident. ■■ does not clearly specify or suggest an obvious danger
It is fun to swim in a pool.
You may hurt the pool. ■■ expresses an unlikely concern; does not state that it is dangerous
15. Once upon a time in a land far away, there lived a dinosaur. Write one good sentence that tells what this dinosaur
looked like.
1 point
The dinosaur was blue with purple spots. ■■ provides a simple description
He was tall and gray with blue eyes.
It was big and black with big teeth.
He was green with spikes down his back.
He looked like an overgrown lizard.
The dinosaur was tall and green.
0 points
He was big. ■■ limited content
It looked ugly.
green with red dots
16. Write one sentence that tells three things you would like to do on a vacation. Remember, three things.
1 point
I like to fish, hunt, and visit relatives in Ohio. ■■ includes three simple vacation activities that are correctly punctuated with commas
On vacation, I would like to swim, eat and fish.
I’d go to Europe, Canada, or Mexico.
I like skiing, snorkeling and diving.
I would like to see the President. I would like to go to Disney World. I would like to go to the ocean. ■■ uses two or three complete sentences that include three activities and end with periods, are connected with commas, or are not punctuated
I would like to go camping and hiking and skiing. ■■ includes three activities connected by and
I would like to go to Disneyland and Mexico and New York.
Go to the beach, camp in the mountains, and relax at home. ■■ three verb phrases that are correctly punctuated
0 points
relax, swim, and read ■■ three simple verbs
ski, hike, relax
17. The second sentence is missing from this paragraph. Write one good sentence that goes with the other two sentences.
(1) I had always wanted to go on a camping trip. (2) _________. (3) I can’t wait to go back to the mountains again.
1 point
Then I finally went. ■■ refers to the trip with a simple sentence
Finally I got to go.
I liked hearing the coyotes howl.
I loved that camping trip.
We went to the mountains.
0 points
We were planning a trip. ■■ doesn’t indicate that the trip has already occurred
I would like to go.
People say it’s fun.
18. Write one sentence about a boy finding a lost dog. Include the words who found the in the middle of your sentence.
2 points
Mother said, “That is the boy who found the lost dog last Saturday at the circus.” ■■ uses who found the in the middle of a detailed sentence in a dialogue or question format
The boy asked, Who found the little, stray dog in the woods? Note: Punctuation is not required.
The boy asked, “Who found the lost puppy?”
1 point
He (That, There, This) is the boy who found the dog. ■■ uses who found the in the middle of a simple sentence
The boy who found the dog was very happy. or question
The boy who found the large dog got a reward.
The little boy who found the lost dog was nice.
Can I ask who found the little lost dog?
I am the one who found the lost dog.
My mom asked, Who found the dog?
0 points
Who found the dog? ■■ begins with who found the
Who found the lost dog?
Once a boy found a dog. ■■ does not use who found the
The boy asked the girl who found the ball. ■■ does not refer to the lost dog
19. The second sentence is missing from this paragraph. Write a good sentence that will fit.
(1) When doing the laundry, always separate the light clothes from the dark clothes. (2) _________. (3) My white
tennis shorts were covered with blue spots!
2 points
My husband forgot this simple rule when it was his turn to do the laundry. ■■ describes a past situation when someone else forgot to separate the clothes
One time my mother forgot to separate the clothes.
1 point
If you don’t, the colors will mix. ■■ explains a consequence of mixing light and dark clothes but does not refer to a past situation
The light clothes will become dark.
If you don’t, the light clothes may get stained by the dark clothes.
My jeans were covered with white spots. ■■ provides another example of colors running in clothes
My blue pants were covered with white streaks.
I put tennis shorts in with my blue jeans. ■■ refers to a present experience with this laundry rule
I put a blue sweater in.
0 points
…because the colors will run together. ■■ adds information to the first or last sentence
…and do not put bleach.
I didn’t and when I removed them from the washer…
I forgot to do that and…
Don’t put too much soap in the machine. ■■ provides another laundry tip
Put them in at a different temperature.
You should put in bleach with light not dark clothes.
20. This information will be included in a report about insects. Write a main or topic sentence for the report.
1 point
There are many kinds of insects. ■■ provides only a simple fact or statement
Mosquitoes, bees, black flies, butterflies, lady bugs, and
beetles are all insects.
I hate mosquitoes, bees, and black flies but I like butterflies,
lady bugs, and beetles.
Most insects are harmless.
A lot of insects have wings.
This is a report on bugs.
My report is about insects.
0 points
Most of the following insects can fly. ■■ refers to the list of insects in a general way
The mosquitoes bit the bees, black flies, butterflies, lady bugs, and beetles. ■■ uses all of the listed words but does not write a topic sentence
21. The second sentence is missing from this paragraph. Write one good sentence that goes with the other two sentences.
(1) Enough food was packed to feed the group for several months. (2) _________. (3) Because of these careful
preparations, the expedition would be able to cope with any emergency.
1 point
The food was packed in waterproof containers. ■■ provides a detail about the food
The medical supplies were ready. ■■ provides a general example of the preparations
They had prepared all the supplies they needed.
There was also plenty of medicine around.
They had a lot of first aid kits.
0 points
We packed enough food. ■■ does not refer to an additional preparation
We were stopped by weather.
The group was climbing Mt. Everest.
We planned to be gone for several months.
22. Write a good sentence using the words in spite of his success.
1 point
In spite of his success, he still did not make the team. ■■ demonstrates an understanding of the phrase and may suggest the person’s success but does not describe a specific situation or context
In spite of his success, he was not happy.
In spite of his success, he still had some problems.
In spite of his success, Mark was not pleased.
0 points
Everyone was jealous in spite of his success. ■■ uses in spite of his success but contradicts the meaning of the phrase or uses it incorrectly
Joe did that just in spite of his success.
In spite of his success, he’s okay. Note: Uses in spite of to mean because of.
He will play in spite of his success.
He goes in spite of his success.
23. Write a good sentence about several activities you like. Include the words for example in your sentence.
1 point
My favorite activities for example are reading and cooking. ■■ uses for example correctly in a simple sentence. Note: Punctuation is not required.
I like several activities, for example, I like swimming, and running.
There are many activities I like, for example: tennis and
hiking.
I enjoy many outdoor activities. For example, boating, skiing, and golf are three of my favorites. ■■ includes two sentences that use for example correctly. Note: Punctuation is not required.
0 points
For example, I like cooking and writing. ■■ begins with for example
I like to do yard work, golf, and running for example, all the races I have run. ■■ uses for example incorrectly
I like lots of things for example horseback riding. ■■ only names one specific activity
24. The second sentence is missing from this paragraph. Write a good sentence that the writer might have used.
(1) The feathered pair assembled by the picturesque fountain formed as unlikely a duet as a fiddle and a bassoon.
(2) _________. (3) The tall, long-legged bird, whose close-fitting wings were draped across his body like the tails of a
dress coat, was preposterously uncomely.
1 point
The short, ugly bird had bright green feathers. ■■ describes the other bird or pair of birds with simple vocabulary (tends to use common nouns, verbs, and adjectives) but does not maintain the writer’s style
One was a small bird with loose-fitting feathers and wings.
The short, fat bird had a black beak and was very colorful.
The short-legged, fat bird had drooping wings.
They both looked strange, as if they were trying to outdo
one another.
The two birds were very unusual looking, one short and the
other tall.
The birds seemed as if they were made for each other.
When one of the pair flew off, the other was left standing by
himself.
0 points
It was a sight to behold. ■■ limited content
They were beautiful birds.
They were very strange birds.
The feathered pair were ugly.
The short bird was drinking water.
25. The second sentence is missing from this paragraph. Write a good sentence that the writer might have used.
(1) Although plainly in view, the car, a black, battered, hearselike automobile, continued to approach slowly, as if
the two occupants were reticent to greet the inquisitive group waiting on the embankment. (2) _________. (3) The
passenger was wearing a blue sweatshirt with golden stars embossed on the front; the front brim of his baseball cap
stood up at a jaunty angle, revealing a tousle of blonde hair.
2 points
After the sudden arrival, a man dressed like a chauffeur got out and moved to the rear door from which the passenger emerged. ■■ refers to the stopping of the car and the emerging or examination of the passenger(s), and maintains the writer’s style (similar to sentences 1 and 3)
The people on the embankment peered curiously into the car, trying to discern the identity of the passenger in the back seat. ■■ describes the group on the embankment, introduces the passenger(s), and maintains the writer’s style (similar to sentences 1 and 3)
1 point
The driver was wearing blue jeans, a black sweater, and blue boots. ■■ describes the driver or the passenger with simple vocabulary (tends to use common nouns, verbs, and adjectives); does not maintain the writer’s style
The driver had red hair and was wearing a cowboy hat and an orange shirt.
The driver was fat and had a brown mustache and
sunglasses.
We then realized it was a baseball star doing a guest
appearance at the beach.
When the passenger got out of the car, he was wearing an
odd outfit.
The car stopped and two men opened the doors. ■■ refers to the stopping of the car or the emerging or examination of the passenger(s) with simple vocabulary (tends to use common nouns, verbs, and adjectives); does not maintain the writer’s style
The car came closer and then the driver pushed out the passenger.
All eyes were on the old automobile to see who was arriving.
The crowd on the embankment was very anxious to meet the occupants in the car. ■■ describes the group on the embankment with simple vocabulary (tends to use common nouns, verbs, and adjectives) but does not maintain the writer’s style
The group was a bunch of native women wearing hula skirts.
0 points
So we hid. ■■ does not directly refer to the driver, the occupants of the car, or the group on the embankment
It was warm and the sun was hot.
Have you ever seen anything like it?
The car rolled across the hill.
They were waiting for a baseball player so they could see him play. ■■ does not unite the first and third sentences in the paragraph
One passenger protested about the car.
26. Write a sentence that tells about the picture and uses the word consequently.
1 point
The window is broken and consequently the boy will buy a new one. ■■ uses consequently correctly but the sentence lacks detail. Note: Punctuation is not required.
He broke the window and consequently he has to get a new one.
He broke the window and will consequently get in trouble.
The boy broke the window with his ball. Consequently, he must repair it. ■■ two sentences that use consequently correctly. Note: Punctuation is not required.
0 points
Consequently, you can’t play anymore. ■■ begins the sentence with consequently
Tom broke the window but consequently, his Dad was a carpenter and could fix it. ■■ uses consequently awkwardly or incorrectly
27. The topic sentence is missing from this paragraph. Using the expressive style that the writer used, compose a good
topic sentence that communicates the paragraph’s main idea.
1 point
Trees grown in fields and forests will develop different shapes. ■■ an adequate topic sentence that communicates the paragraph’s main idea but does not include additional descriptive words
The environment that surrounds an oak tree has the most influence on how well and how high it will grow.
There are many elements that can affect the manner in
which trees grow.
Differing environments can produce considerable variation
between trees of the same genus.
Trees can grow in a variety of ways depending on their
environment.
0 points
Trees grown in open fields look better. ■■ a topic sentence that doesn’t capture the scope of the paragraph
The environment in which a tree grows can have an effect on how weak or strong its seeds may be. ■■ a sentence that does not introduce the main idea of the paragraph
An oak tree has many great characteristics.
28. The concluding sentence is missing from this paragraph. Using the expressive style that the writer used, compose a
good final sentence that summarizes the content of this paragraph.
When a frozen, five-thousand-year-old human body (later nicknamed “the Iceman”) was found high in the Alps, we
gathered a wealth of clues about living long ago in the landscape that is now Italy. The Iceman’s clothing, tools, and
weapons were frozen with him when he died and remained undisturbed through all that time. The many implements
he carried were made from a wide variety of different species of wood and several metals. A series of tattoos on his
back may indicate that he had received some sort of medical treatment, possibly a form of acupuncture. ___________
_____________________________________________________.
1 point
A five thousand year old frozen body found high in the mountains of Italy taught scientists a lot about that time. ■■ an adequate concluding sentence that summarizes the paragraph’s main idea but does not include additional descriptive words
From these clues we can construct a fuller picture of the Iceman’s civilization.
The preservation of “Iceman” provides a wealth of
information about his existence.
By looking at the Iceman’s clothes and tools, we can
conclude a lot about his life.
0 points
Iceman are totally different than we are. ■■ a concluding sentence that doesn’t capture the scope of the paragraph
We may not know much about the iceman, but we are one step closer.
Freezing preserves people a long time. ■■ a sentence that is not a concluding sentence
He also had scars on his feet which may have been caused by sharp rocks. ■■ a sentence that introduces new content
Appendix C
WJ IV Tests of Achievement Examiner Training Checklist
Name of Examiner: _________________________________ Date: _________________________________
Test 1: Letter-Word Identification
Y N N/O 3. Asks examinee to reread all items on page if response is unclear and then scores only item
in question.
Y N N/O 4. Does not tell examinee any letters or words during test.
Y N N/O 5. Gives reminder to pronounce words smoothly only once during test.
Y N N/O 7. Encourages examinee to try next word after 5 seconds unless examinee is still actively
engaged in trying to pronounce word.
Test 2: Applied Problems
Y N N/O 4. Provides Response Booklet and pencil at any time if examinee requests it or appears to need
it (e.g., uses finger to write on table or in air).
Y N N/O 7. Does not require examinee responses to contain unit labels unless specified in Test Book
correct keys.
Y N N/O 8. Scores item incorrect if numeric response is wrong or if examinee provides incorrect label
(required or not).
Appendix C 193
Y N N/O 9. Tests by complete pages.
Test 3: Spelling
Y N N/O 1. Uses Response Booklet and pencil.
Y N N/O 4. Does not penalize for poor handwriting or reversed letters as long as letter does not form
different letter (e.g., reversed b becomes d and would be an error).
Test 4: Passage Comprehension
Y N N/O 3. Begins with Sample Item B for all other examinees and then selects appropriate
starting point.
Y N N/O 4. Does not insist on silent reading if examinee persists in reading aloud.
Y N N/O 6. Accepts only one-word responses as correct unless indicated otherwise by scoring key.
Y N N/O 7. Asks examinee to provide one word that goes in blank when he or she reads item aloud and
provides answer in context.
Y N N/O 8. Scores responses correct if they differ in verb tense or number, unless otherwise indicated.
Y N N/O 9. Scores responses incorrect if examinee substitutes different part of speech, unless otherwise
indicated.
Test 5: Calculation
Y N N/O 1. Uses Response Booklet and pencil.
Y N N/O 3. Discontinues testing and records score of 0 if examinee responds incorrectly to both
sample items.
Y N N/O 6. Scores items skipped by examinee as incorrect.
Test 6: Writing Samples
Y N N/O 5. Uses “Writing Samples Scoring Guide” in Appendix B of Examiner’s Manual to score items
after testing.
Y N N/O 6. Does not penalize for spelling, punctuation, capitalization, or usage errors unless otherwise
indicated in “Writing Samples Scoring Guide.”
Y N N/O 7. Asks examinee to write as neatly as possible if responses are illegible or difficult to read.
Y N N/O 8. Consults Writing Samples “Scoring Table” on Test Record to determine when additional
items need to be administered (score falls in shaded area).
Y N N/O 9. Scores Items 1–6 as 1 or 0 points as indicated in “Writing Samples Scoring Guide.”
Y N N/O 10. Scores Items 7–28 as 2, 1, or 0 points as indicated in “Writing Samples Scoring Guide.”
Y N N/O 11. Knows that .5 and 1.5 can be used to score responses that fall between 0-, 1-, and
2-point examples.
Y N N/O 12. Does not penalize for spelling or handwriting errors unless words are illegible.
Y N N/O 14. Does not ask examinee to read his or her response to score item.
Y N N/O 15. Alternates between assigning higher and lower scores when unsure of how to score
certain items.
Y N N/O 17. If examinee writes more than one sentence for item, selects and scores one sentence that
best satisfies task demands.
Y N N/O 18. Reduces score by 1 point for severe grammatical or usage errors or if a significant word is
illegible.
Y N N/O 21. Enters score for only one block of items (even if more than one block was administered)
into scoring program.
Test 7: Word Attack
Y N N/O 1. Uses suggested starting points.
Y N N/O 3. Says most common sound (phoneme) for letters printed within slashes (e.g., /p/), not
letter name.
Y N N/O 4. Reminds examinee to say words smoothly only once during test if examinee pronounces
nonword phoneme by phoneme or syllable by syllable.
Y N N/O 5. Asks examinee to reread all items on page if response is unclear and then scores only item
in question.
Y N N/O 6. Does not tell examinee any letters or words during test.
Test 8: Oral Reading
Y N N/O 6. Marks slash (/) at each point on Test Record where error occurs.
Y N N/O 7. After hesitation of 3 seconds, marks word as incorrect and tells examinee to go on to
next word.
Y N N/O 8. Knows that self-corrections within 3 seconds are not counted as errors.
Y N N/O 9. Scores each sentence as 2 (no errors), 1 (one error), or 0 (two or more errors).
Test 9: Sentence Reading Fluency
Y N N/O 3. Begins with sample items and practice exercise for all examinees.
Y N N/O 4. Discontinues testing if examinee has 2 or fewer items correct on Practice Exercises C–F and
records score of 0 on Test Record.
Y N N/O 6. Records exact starting and stopping times if stopwatch is unavailable.
Y N N/O 7. Records exact finishing time in minutes and seconds on Test Record.
Y N N/O 8. Reminds examinee to read each sentence if he or she appears to be answering items
without reading.
Y N N/O 10. Reminds examinee to continue if he or she stops at bottom of page or column.
Y N N/O 13. Enters both Number Correct and Number Incorrect into scoring program.
Y N N/O 14. Subtracts Number Incorrect from Number Correct when obtaining estimated AE/GE from
Test Record.
Test 10: Math Facts Fluency
Y N N/O 4. Discontinues testing if examinee has 3 or fewer items correct after 1 minute and records
time of 1 minute and Number Correct (0 to 3) on Test Record.
Y N N/O 7. Records exact finishing time in minutes and seconds on Test Record.
Y N N/O 8. Does not draw attention to mathematical signs or remind examinee to pay attention to signs
during test.
Y N N/O 11. Reminds examinee to continue if he or she stops at bottom of first page.
Test 11: Sentence Writing Fluency
Y N N/O 4. Discontinues testing if examinee has score of 0 on Sample Items B–D after error correction
and records score of 0 on Test Record.
Y N N/O 5. Discontinues testing if examinee has 3 or fewer correct after 2 minutes and records time of
2 minutes and Number Correct (0 to 3) on Test Record.
Y N N/O 8. Records exact finishing time in minutes and seconds on Test Record.
Y N N/O 11. Scores as correct all responses that are complete, reasonable sentences using all target words.
Y N N/O 12. Knows target words may not be changed in any way (e.g., verb tense or nouns changed
from singular to plural).
Y N N/O 13. Does not penalize for spelling, punctuation, or capitalization errors.
Y N N/O 14. Does not penalize for poor handwriting or spelling unless response is illegible.
Y N N/O 17. Scores responses that omit less meaningful words (e.g., the or a) as correct if all other
criteria are met.
Y N N/O 18. Accepts abbreviations (e.g., w/ for with) or symbols (e.g., & for and) if all other
criteria are met.
Test 12: Reading Recall
Y N N/O 2. Follows Continuation Instructions to determine when to continue testing or when to stop.
Y N N/O 7. Does not penalize for mispronunciations resulting from articulation errors, dialect
variations, or regional speech patterns.
Y N N/O 8. Scores response correct if it differs from correct response listed only in possessive case, verb
tense, or number (singular/plural), unless otherwise indicated in scoring key.
Y N N/O 9. Knows that any number that is a key word (in bold) must be recalled exactly.
Y N N/O 10. Scores derivations of names as correct (e.g., Annie for Ann).
Test 13: Number Matrices
Y N N/O 1. Gives examinee worksheet in Response Booklet and pencil when directed.
Y N N/O 5. Allows 30 seconds for Items 1–6 and 1 minute for Items 12–30 before moving to next item.
Test 14: Editing
Y N N/O 2. Discontinues testing if examinee has score of 0 on Sample Items A–D or on Items 1–4 and
records score of 0 on Test Record.
Y N N/O 3. Requires examinee to clearly indicate both where error is and how to correct it to
receive credit.
Y N N/O 5. Asks examinee how to correct error if he or she reads item aloud and inadvertently corrects
error in context.
Y N N/O 6. Asks examinee how to correct mistake if he or she indicates error without explaining how
to correct it.
Test 15: Word Reading Fluency
Y N N/O 4. Discontinues testing if examinee has 1 or 0 correct on practice exercise and records score of
0 on Test Record.
Y N N/O 7. Records exact finishing time in minutes and seconds on Test Record.
Test 16: Spelling of Sounds
Y N N/O 1. Follows standardized procedures for audio recorded tests.
Y N N/O 4. Presents Sample Items A–D and Items 1–5 orally and presents remaining items from audio
recording.
Y N N/O 5. Says most common sound (phoneme) for letters printed within slashes (e.g., /m/), not
letter name.
Y N N/O 6. Knows that responses listed in Test Book are the only correct answers.
Y N N/O 7. Does not penalize for reversed letters as long as letter does not form different letter (e.g.,
reversed b becomes d and would be an error).
Y N N/O 8. Scores items as 1 if they are spelled correctly or 0 if they are spelled incorrectly.
Y N N/O 9. Does not penalize if examinee does not repeat stimulus word or pronounces it incorrectly.
Scores only written response.
Y N N/O 10. Pauses or stops audio recording if examinee requires additional response time.
Y N N/O 12. Presents items orally if examinee is not responsive to audio recording.
Test 17: Reading Vocabulary
Y N N/O 2. Begins with sample items for all examinees on each subtest.
Y N N/O 8. Asks for one-word answer if examinee provides two-word or longer response.
Y N N/O 9. Scores responses correct if they differ in verb tense or number, unless otherwise indicated.
Y N N/O 10. Scores responses incorrect if they substitute different part of speech, unless otherwise
indicated.
Y N N/O 11. Does not penalize if examinee reads stimulus word incorrectly. Scores only synonym or
antonym produced.
Y N N/O 13. Counts all items below basal on each subtest as correct.
Y N N/O 14. Records errors for further analysis.
Y N N/O 16. Enters Number Correct from each subtest into scoring program.
Y N N/O 17. Sums scores from two subtests when obtaining estimated AE/GE from Test Record.
Appendix D
WJ IV General Test Observations Checklist
Name of Examiner: _________________________________ Date: _________________________________
Y N N/O 2. Develops seating arrangement in which examiner can see both sides of Test Book but
examinee can see only examinee pages.
Administration
Y N N/O 3. Keeps Test Record behind Test Book and out of examinee’s view.
Y N N/O 5. Points with left hand while recording responses with right hand (reversed for left-handed
examiner).
Y N N/O 12. Moves to next item after allowing examinee appropriate, but not excessive, amount of time
to respond.
Y N N/O 13. Is familiar with contents of all examiner page boxes containing supplementary instructions.
Y N N/O 15. When testing backward to obtain basal, starts with first item on preceding page and presents
all items on page if stimuli are visible to examinee.
Y N N/O 16. Administers all items on page when stimuli are visible to examinee rather than stopping in
middle of page when ceiling is reached.
Appendix D 203
Y N N/O 17. Smoothly locates correct starting track on CD for audio tests.
Y N N/O 18. Looks away from examinee when audio test item is presented and then looks back at
examinee when prompt is heard.
Y N N/O 19. Presses pause button on audio equipment if examinee needs additional time.
Y N N/O 20. Encourages effort and praises examinee for putting forth his or her best effort.
Y N N/O 21. Queries whenever needed and allowed to clarify examinee’s response.
Scoring
Y N N/O 24. Does not penalize examinee for mispronunciations resulting from articulation, speech, or
dialectal differences.
Y N N/O 25. Uses item-scoring procedures specified in manual (e.g., 1 = correct response, 0 = incorrect
response, and blanks for items not administered).
Y N N/O 29. Uses optional “Qualitative Observation” checklists for Tests 1–11, as appropriate.
Y N N/O 30. Enters all identifying information and scores correctly into scoring program.
Comments:
800.323.9540
www.wj-iv.com