
School Psychology Review

ISSN: 0279-6015 (Print) 2372-966X (Online) Journal homepage: https://www.tandfonline.com/loi/uspr20

Teacher Ratings of Academic Skills: The Development of the Academic Performance Rating Scale

George J. DuPaul, Mark D. Rapport & Lucy M. Perriello

To cite this article: George J. DuPaul, Mark D. Rapport & Lucy M. Perriello (1991) Teacher
Ratings of Academic Skills: The Development of the Academic Performance Rating Scale, School
Psychology Review, 20:2, 284-300, DOI: 10.1080/02796015.1991.12085552

To link to this article: https://doi.org/10.1080/02796015.1991.12085552

School Psychology Review
Volume 20, No. 2, 1991, pp. 284-300

TEACHER RATINGS OF ACADEMIC SKILLS:
THE DEVELOPMENT OF THE ACADEMIC PERFORMANCE RATING SCALE

George J. DuPaul
University of Massachusetts Medical Center

Mark D. Rapport
University of Hawaii at Manoa

Lucy M. Perriello
University of Massachusetts Medical Center

Abstract: This study investigated the normative and psychometric properties of a recently developed teacher checklist, the Academic Performance Rating Scale (APRS), in a large sample of urban elementary school children. This instrument was developed to assess teacher judgments of academic performance to identify the presence of academic skills deficits in students with disruptive behavior disorders and to continuously monitor changes in these skills associated with treatment. A principal components analysis was conducted wherein a three-factor solution was found for the APRS. All subscales were found to be internally consistent, to possess adequate test-retest reliability, and to share variance with criterion measures of children's academic achievement, weekly classroom academic performance, and behavior. The total APRS score and all three subscales also were found to discriminate between children with and without classroom behavior problems according to teacher ratings.

This project was supported in part by BRSG Grant S07 RR05712 awarded to the first author by the Biomedical Research Support Grant Program, Division of Research Resources, National Institutes of Health. A portion of these results was presented at the annual convention of the National Association of School Psychologists, April 1990, in San Francisco, CA.

The authors extend their appreciation to Craig Edelbrock and three anonymous reviewers for their helpful comments on an earlier draft of this article and to Russ Barkley, Terri Shelton, Kenneth Fletcher, Gary Stoner, and the teachers and principals of the Worcester, MA, Public Schools for their invaluable contributions to this study.

Address all correspondence to George J. DuPaul, Department of Psychiatry, University of Massachusetts Medical Center, 55 Lake Avenue North, Worcester, MA 01655.

The academic performance and adjustment of school-aged children has come under scrutiny over the past decade due to concerns about increasing rates of failure and poor standardized test scores (Children's Defense Fund, 1988; National Commission on Excellence in Education, 1983). Reports indicate that relatively large percentages of children (i.e., 20-30%) experience academic difficulties during their elementary school years (Glidewell & Swallow, 1969; Rubin & Balow, 1978), and these rates are even higher among students with disruptive behavior disorders (Cantwell & Satterfield, 1978; Kazdin, 1986). Further, the results of available longitudinal studies suggest that youngsters with disruptive behavior disorders and concurrent academic performance difficulties are at higher risk for poor long-term outcome (e.g., Weiss & Hechtman, 1986).

These findings have direct implications for the assessment of the classroom functioning of students with behavior disorders. Specifically, it has become increasingly important to screen for possible academic skills deficits in this population and monitor changes in academic performance associated with therapeutic interventions. Frequently, traditional measures of academic achievement (e.g., standardized psychoeducational batteries) are used as integral parts of the diagnostic process and for long-term assessment of academic success. Several factors limit the usefulness of norm-referenced achievement tests for these purposes, such as (a) a failure to sample the curriculum in use adequately, (b) the use of a limited number of items to sample various skills, (c) the use of response formats that do not require the student to perform the behavior (e.g., writing) of interest, (d) an insensitivity to small changes in student performance, and (e) limited contribution to decisions about programmatic interventions (Marston, 1989; Shapiro, 1989).

Given the limitations of traditional achievement tests, more direct measurement methods have been utilized to screen for academic skills deficits and monitor intervention effects (Shapiro, 1989; Shapiro & Kratochwill, 1988). Several methods are available to achieve these purposes, including curriculum-based measurement (Shinn, 1989), direct observations of classroom behavior (Shapiro & Kratochwill, 1988), and calculation of product completion and accuracy rates (Rapport, DuPaul, Stoner, & Jones, 1986). These behavioral assessment techniques involve direct sampling of academic behavior and have demonstrated sensitivity to the presence of skills deficits and to treatment-induced change in such performance (Shapiro, 1989).

In addition to these direct assessment methods, teacher judgments of students' achievement have been found to be quite accurate in identifying children in need of academic support services (Gresham, Reschly, & Carey, 1987; Hoge, 1983). For example, Gresham and colleagues (1987) collected brief ratings from teachers regarding the academic status of a large sample of schoolchildren. These ratings were highly accurate in classifying students as learning disabled or non-handicapped and were significantly correlated with student performance on two norm-referenced aptitude and achievement tests. In fact, teacher judgments were as accurate in discriminating between these two groups as the combination of the standardized tests.

Although teacher judgments may be subject to inherent biases (e.g., confirming previous classification decisions), they possess several advantages for both screening and identification purposes. Teachers are able to observe student performance on a more comprehensive sample of academic content than could be included on a standardized achievement test. Thus their judgments provide a more representative sample of the domain of interest in academic assessment (Gresham et al., 1987). Such judgments also provide unique data regarding the "teachability" (e.g., ability to succeed in a regular education classroom) of students (Gerber & Semmel, 1984). Finally, obtaining teacher input about a student's academic performance can provide social validity data in support of classification and treatment-monitoring decisions. At the present time, however, teachers typically are not asked for this information in a systematic fashion, and when available, such input is considered to be highly suspect data (Gresham et al., 1987).

Teacher rating scales are important components of a multimodal assessment battery used in the evaluation of the diagnostic status and effects of treatment on children with disruptive behavior disorders (Barkley, 1988; Rapport, 1987). Given that functioning in a variety of behavioral domains (e.g., following rules, academic achievement) across divergent settings is often affected in children with such disorders, it is important to include information from multiple sources across home and school environments. Unfortunately, most of the available teacher rating scales specifically target the frequency of problem behaviors, with few, if any, items related directly to academic performance. Thus, the dearth of items targeting teacher judgments of academic performance is a major disadvantage of these measures when screening for skills deficits or monitoring of academic progress is a focus of the assessment.

To address the exclusivity of the focus on problem behaviors by most teacher questionnaires, a small number of rating scales have been developed in recent years that include items related to academic acquisition and classroom performance variables. Among these are the Children's
Behavior Rating Scale (Neeper & Lahey, 1986), Classroom Adjustment Rating Scale (Lorion, Cowen, & Caldwell, 1975), Health Resources Inventory (Gesten, 1976), the Social Skills Rating System (Gresham & Elliott, 1990), the Teacher-Child Rating Scale (Hightower et al., 1986), and the Walker-McConnell Scale of Social Competence and School Adjustment (Walker & McConnell, 1988). These scales have been developed primarily as screening and problem identification instruments, and all have demonstrated reliability and validity for these purposes. Although all of these questionnaires are psychometrically sound, each scale possesses one or more of the following characteristics that limit its utility for both screening and progress monitoring of academic skills deficits. These factors include (a) items worded at too general a level (e.g., "Produces work of acceptable quality given her/his skills level") to allow targeting of academic completion and accuracy rates across subject areas, (b) a failure to establish validity with respect to criterion-based measures of academic success, and (c) requirements for completion (e.g., large number of items) that detract from their appeal as instruments that may be used repeatedly or on a weekly basis for brief periods.

The need for a brief rating scale that could be used to identify the presence of academic skills deficits in students with disruptive behavior disorders and to monitor continuously changes in those skills associated with treatment was instrumental in the development of the Academic Performance Rating Scale (APRS). The APRS was designed to obtain teacher perceptions of specific aspects (e.g., completion and accuracy of work in various subject areas) of a student's academic achievement in the context of a multimodal evaluation paradigm which would include more direct assessment techniques (e.g., curriculum-based measurement, behavioral observations). Before investigating the usefulness of this measure for the above purposes, its psychometric properties and technical adequacy must be established. Thus, this study describes the initial development of the APRS and reports on its basic psychometric properties with respect to factor structure, internal consistency, test-retest reliability, and criterion-related validity. In addition, normative data by gender across elementary school grade levels were collected.

METHOD

Subjects

Subjects were children enrolled in the first through sixth grades from 45 public schools in Worcester, Massachusetts. This system is an urban, lower middle-class school district with a 28.5% minority (African-American, Asian-American, and Hispanic) population. Complete teacher ratings were obtained for 493 children (251 boys and 242 girls), which were included in factor analytic and normative data analyses. Children ranged in age from 6 to 12 years (M = 8.9; SD = 1.8). A two-factor index of socioeconomic status (Hollingshead, 1975) was obtained, with the relative percentages of subjects in each class as follows: I (upper), 12.3%; II (upper middle), 7.1%; III (middle), 45.5%; IV (lower middle), 26.3%; and V (lower), 8.8%.

A subsample of 50 children, 22 girls and 28 boys, was randomly selected from the above sample to participate in a study of the validity of the APRS. Children at all grade levels participated, with the relative distribution of subjects across grades as follows: first, 19%; second, 16%; third, 17%; fourth, 17%; fifth, 13.5%; and sixth, 17.5%. The relative distribution of subjects across socioeconomic strata was equivalent to that obtained in the original sample.

Measures

The primary classroom teacher of each participant completed two brief measures: the APRS and the Attention-Deficit Hyperactivity Disorder (ADHD) Rating Scale (DuPaul, in press). In addition, teachers of the children participating in the validity study completed the Abbreviated Conners Teacher Rating Scale
(ACTRS; Goyette, Conners, & Ulrich, 1978).

APRS. The APRS is a 19-item scale that was developed to reflect teachers' perceptions of children's academic performance and abilities in classroom settings (see Appendix A). Thirty items were initially generated based on suggestions provided by several classroom teachers, school psychologists, and clinical child psychologists. Of the original 30 items, 19 were retained based on feedback from a separate group of classroom teachers, principals, and school and child psychologists regarding item content validity, clarity, and importance. The final version included items directed towards work performance in various subject areas (e.g., "Estimate the percentage of written math work completed relative to classmates"), academic success (e.g., "What is the quality of this child's reading skills?"), behavioral control in academic situations (e.g., "How often does the child begin written work prior to understanding the directions?"), and attention to assignments (e.g., "How often is the child able to pay attention without you prompting him/her?"). Two additional items were included to assess the frequency of staring episodes and social withdrawal. Although the latter are only tangentially related to the aforementioned constructs, they were included because "overfocused" attention (Kinsbourne & Swanson, 1979) and reduced social responding (Whalen, Henker, & Granger, 1989) are emergent symptoms associated with psychostimulant treatment. Teachers answered each item using a 1 (never or poor) to 5 (very often or excellent) Likert scale format. Seven APRS items (i.e., nos. 12, 13, 15-19) were reverse-keyed in scoring so that a higher total score corresponded with a positive academic status.

ADHD Rating Scale. The ADHD Rating Scale consists of 14 items directly adapted from the ADHD symptom list in the most recent edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R; American Psychiatric Association, 1987). Teachers indicated the frequency of each symptom on a 1 (not at all) to 4 (very much) Likert scale, with higher scores indicative of greater ADHD-related behavior. This scale has been found to have adequate internal consistency and test-retest reliability, and to correlate with criterion measures of classroom performance (DuPaul, in press).

ACTRS. The ACTRS (or Hyperactivity Index) is a 10-item rating scale designed to assess teacher perceptions of psychopathology (e.g., hyperactivity, poor conduct, inattention) and is a widely used index for identifying children at risk for ADHD and other disruptive behavior disorders. It has adequate psychometric properties and is highly sensitive to the effects of psychopharmacological interventions (Barkley, 1988; Rapport, in press).

Observational measures. Children participating in the validity study were observed unobtrusively in their regular classrooms by a research assistant who was blind to obtained teacher rating scale scores. Observations were conducted during a time when each child was completing independent seatwork (e.g., math worksheet, phonics workbook). Observations were conducted for 20 min, with on-task behavior recorded for 60 consecutive intervals. Each interval was divided into 15 s of observation followed by 5 s for recording. A child's behavior was recorded as on- or off-task in the same manner as employed by Rapport and colleagues (1982). A child was considered off-task if (s)he exhibited visual nonattention to written work or the teacher for more than 2 consecutive seconds within each 15-s observation interval, unless the child was engaged in another task-appropriate behavior (e.g., sharpening a pencil). The observer was situated in a part of the classroom that avoided direct eye contact with the target child, but at a distance that allowed easy determination of on-task behavior. This measure was included as a partial index of academic engaged time, which has been shown to be significantly related to academic achievement (Rosenshine, 1981).
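A minimal Python sketch of the APRS scoring rule described above; only the 1-5 scale and the set of reverse-keyed item numbers come from the text, while the function name and the example responses are illustrative:

REVERSE_KEYED = {12, 13, 15, 16, 17, 18, 19}  # items reverse-keyed per the text

def aprs_total(responses):
    """responses: dict mapping item number (1-19) to the teacher's 1-5 rating."""
    assert set(responses) == set(range(1, 20)), "all 19 items are required"
    # Reverse-keyed items are rescored as 6 - raw so that a higher total
    # always reflects a more positive academic status.
    return sum((6 - r) if item in REVERSE_KEYED else r
               for item, r in responses.items())

example = {item: 4 for item in range(1, 20)}   # hypothetical ratings of 4 on every item
print(aprs_total(example))                     # 12 items scored 4 plus 7 reversed items scored 2 = 62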
Academic efficiency score. Academic seatwork was assigned by each child's classroom teacher at a level consistent with the teacher's perceptions of the child's ability level, with the stipulation that the assignment be gradeable in terms of percentage completed and percentage accurate. Assignments were graded after the observation period by the research assistant and teacher, the latter of whom served as the reliability observer for academic measures. An academic efficiency score (AES) was calculated in a manner identical to that employed by Rapport and colleagues (1986), whereby the number of items completed correctly by the child was divided by the number of items assigned to the class and multiplied by 100. This statistic represents the mean weekly percentage of academic assignments completed correctly relative to classmates and was used as the classroom-based criterion measure of academic performance.

Published norm-referenced achievement test scores. The results of school-based norm-referenced achievement tests (i.e., Comprehensive Test of Basic Skills; CTB/McGraw-Hill, 1982) were obtained from the school records of each student in the validity sample. These tests are administered routinely on a group basis in the fall or spring of each school year. National percentile scores from the most recent administration (i.e., within the past year) of this test were recorded for Mathematics, Reading, and Language Arts.

Procedure

Regular education teachers from 300 classrooms for grades 1 through 6 were asked to complete the APRS and ADHD rating scales with regard to the performance of two children in their class. Teachers from elementary schools in all parts of the city of Worcester participated (i.e., a return rate of 93.5%), resulting in a sample that included children from all socioeconomic strata. Teachers were instructed by one of the authors on which students to assess (i.e., one boy and girl randomly selected from the class roster), to complete APRS ratings according to each child's academic performance during the previous week, and that responses on the ADHD scale were to reflect the child's usual behavior over the year. Teacher ratings for the large sample (N = 487) were obtained within a 1-month period in the early spring, to ensure familiarity with the student's behavior.

A subsample of 50 children was selected randomly from the larger sample, and parent consent for participation in the validity study was procured. Teacher ratings for this subsample were obtained within a 3-month period in the late winter and early spring. Teacher ratings on the APRS were randomly obtained for half of the sample participating in the validity study (n = 25) on a second occasion, 2 weeks after the original administration of this scale, to assess test-retest reliability. Ratings reflected children's academic performance over the previous week. The research assistant completed the behavioral observations and collected AES data on 3 separate days (i.e., a total of 60 min of observation) during the same week that APRS, ADHD, and ACTRS ratings were completed. Means (across the 3 observation days) for percentage on-task and AES scores were used in the data analyses.

Interobserver reliability. The research assistant was trained by the first author to an interobserver reliability of 90% or greater prior to conducting live observations, using videotapes of children completing independent work. Reliability coefficients for on-task percentage were calculated by dividing agreements by agreements plus disagreements and multiplying by 100%. Interobserver reliability also was assessed weekly throughout the data collection phase of the study using videotapes of 10 individual children (who were participants in the validity study) completing academic work during one of the observation sessions. Interobserver reliability was consistently above 80% with a mean of 90% for all children. A mean kappa coefficient (Cohen, 1960) of .74 was obtained for all observations to indicate reliability beyond chance levels.
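To make the three computations described above concrete, here is a small Python sketch with invented counts; the AES formula and the agreement and kappa definitions follow the text and Cohen (1960), while the function names and data are ours:

def academic_efficiency_score(items_correct, items_assigned_to_class):
    # AES: items completed correctly by the child divided by the number of
    # items assigned to the class, multiplied by 100 (after Rapport et al., 1986).
    return 100.0 * items_correct / items_assigned_to_class

def percent_agreement(obs_a, obs_b):
    # Interval-by-interval agreement: agreements / (agreements + disagreements) x 100.
    agreements = sum(a == b for a, b in zip(obs_a, obs_b))
    return 100.0 * agreements / len(obs_a)

def cohens_kappa(obs_a, obs_b):
    # Cohen's (1960) kappa: chance-corrected agreement between two observers.
    n = len(obs_a)
    p_o = sum(a == b for a, b in zip(obs_a, obs_b)) / n
    p_e = sum((obs_a.count(c) / n) * (obs_b.count(c) / n)
              for c in set(obs_a) | set(obs_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 60-interval observation (20 min of 15-s intervals) coded by two observers.
observer_a = ["on"] * 45 + ["off"] * 15
observer_b = ["on"] * 42 + ["off"] * 18
print(academic_efficiency_score(16, 20))                 # 80.0
print(percent_agreement(observer_a, observer_b))         # 95.0
print(round(cohens_kappa(observer_a, observer_b), 2))    # 0.88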
Following each observation period, the teacher and assistant independently calculated the amount of work completed by the student relative to classmates and the percentage of items completed correctly. Interrater reliability for these measures was consistently above 96%, with a mean reliability of 99%.

RESULTS

Several analyses will be presented to explicate the psychometric properties of the APRS. First, the factor structure of this instrument was determined to aid in the construction of subscales. Second, the internal consistency and stability of APRS scores were examined. Next, gender and grade comparisons were conducted to identify the effects these variables may have on APRS ratings as well as to provide normative data. Finally, the concurrent validity of the APRS was evaluated by calculating correlation coefficients between rating scale scores and the criterion measures.

Factor Structure of the APRS

The APRS was factor analyzed using a principal components analysis followed by a normalized varimax rotation with iterations (Bernstein, 1988). As shown in Table 1, three components with eigenvalues greater than unity were extracted, accounting for approximately 68% of the variance: Academic Success (7 items), Impulse Control (3 items), and Academic Productivity (12 items). The factor structure replicated across halved random subsamples (i.e., n = 242 and 246, respectively). Congruence coefficients (Harman, 1976) between similar components ranged from .84 to .98 with a mean of .92, indicating a high degree of similarity in factor structure across subsamples. Items with loadings of .60 or greater on a specific component were retained to keep the number of complex items (i.e., those with significant loadings on more than one factor) to a minimum. In subsequent analyses, factor (subscale) scores were calculated in an unweighted fashion, with complex items included on more than one subscale (e.g., items 3-6 included on both the Academic Success and Academic Productivity subscales).

Given that the APRS was designed to evaluate the unitary construct of academic performance, it was expected that the derived factors would be highly correlated. This hypothesis was confirmed, as the intercorrelations among Academic Success and Impulse Control, Academic Success and Academic Productivity, and Impulse Control and Academic Productivity were .69, .88, and .63, respectively. Despite the high degree of overlap between the Academic Success and Productivity components (i.e., items reflecting accuracy and consistency of work correlated with both), examination of the factor loadings revealed some important differences (see Table 1). Specifically, the Academic Success factor appears related to classroom performance outcomes, such as the quality of a child's academic achievement, ability to learn material quickly, and recall skills. Alternatively, the Academic Productivity factor is associated with behaviors that are important in the process of achieving classroom success, including completion of work, following instructions accurately, and ability to work independently in a timely fashion.

Internal Consistency and Reliability of the APRS

Coefficient alphas were calculated to determine the internal consistency of the APRS and its subscales. The results of these analyses demonstrated adequate internal consistencies for the Total APRS (.96), as well as for the Academic Success (.94) and Academic Productivity (.94) subscales. The internal consistency of the Impulse Control subscale was weaker (.72). Subsequently, the total sample was randomly subdivided (i.e., n = 242 and 246, respectively) into two independent subsamples. Coefficient alphas were calculated for all APRS scores within each subsample, with results nearly identical to those obtained above.
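The analytic steps reported above can be sketched as follows, with synthetic ratings standing in for the study data (numpy only); the eigenvalue-greater-than-unity rule, the varimax rotation, and coefficient alpha follow standard definitions, and nothing here reproduces the study's actual numbers:

import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(500, 19)).astype(float)   # synthetic 1-5 ratings for 500 "children"

R = np.corrcoef(X, rowvar=False)                        # 19 x 19 item correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
k = int(np.sum(eigvals > 1.0))                          # components with eigenvalues above unity
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])        # unrotated principal-component loadings

def varimax(L, max_iter=100, tol=1e-6):
    # Plain varimax rotation of a loading matrix.
    n, m = L.shape
    rot = np.eye(m)
    d_old = 0.0
    for _ in range(max_iter):
        Lr = L @ rot
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / n))
        rot = u @ vt
        d = s.sum()
        if d_old and d / d_old < 1 + tol:
            break
        d_old = d
    return L @ rot

rotated = varimax(loadings)

def cronbach_alpha(items):
    # Coefficient alpha for an (n_subjects x n_items) score matrix.
    m = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return m / (m - 1) * (1 - item_var / total_var)

print(k, rotated.shape, round(cronbach_alpha(X[:, :7]), 2))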
TABLE 1
Factor Structure of the Academic Performance Rating Scale

Scale Item                          Academic Success   Impulse Control   Academic Productivity
1. Math work completed                    .30                .02                 .84
2. Language Arts completed                .32                .06                 .82
3. Math work accuracy                     .60                .11                 F3
4. Language Arts accuracy                 G                  .17                 xi
5. Consistency of work                    so                 .21                 z
6. Follows group instructions             rl                 .35                 .69
7. Follows small-group instructions       .39                .37                 .64
8. Learns material quickly                .81                .17                 .36
9. Neatness of handwriting                z                  .50                 .31
10. Quality of reading                    .87                .15                 .23
11. Quality of speaking                   .80                .20                 .21
12. Careless work completion              Iii                .72                 .36
13. Time to complete work                 .36                Ti                  .61
14. Attention without prompts             .24                .35                 s3
15. Requires assistance                   .44                .39                 .53
16. Begins work carelessly                .16                .82                -.02
17. Recall difficulties                   .66                z                   .38
18. Stares excessively                    5                  .39                 .67
19. Social withdrawal                     .16                .28                 .57
Estimate of % variance                    55.5               6.6                 6.7

Note: Underlined values in the original indicate items included in the factor named in the column head. Several loadings are illegible in the source scan and are reproduced here as printed.

Test-retest reliability data were obtained for a subsample of 26 children (with both genders and all grades represented) across a 2-week interval, as described previously. The reliability coefficients were uniformly high for the Total APRS Score (.95), and Academic Success (.91), Impulse Control (.88), and Academic Productivity (.93) subscales. Since rating scale scores can sometimes "improve" simply as a function of repeated administrations (Barkley, 1988), the two mean scores for each scale were compared using separate t-tests for correlated measures. Scores for each APRS scale were found to be equivalent across administrations, with t-test results as follows: Total APRS Score (t(24) = 1.24, N.S.), Academic Success (t(24) = 1.31, N.S.), Academic Productivity (t(24) = 1.32, N.S.), and Impulse Control (t(24) = .15, N.S.).

Gender and Grade Comparisons

Teacher ratings on the APRS were broken down by gender and grade level to (a) assess the effects of these variables on APRS ratings and (b) provide normative comparison data. The means and standard deviations across grade levels for APRS total and subscale scores are presented for girls and boys in Table 2. A 2 (Gender) x 6 (Grade) multivariate analysis of variance (MANOVA) was conducted employing APRS scores as the dependent variables. Significant multivariate effects were obtained for the main effect of Gender (Wilks's Lambda = .95; F(4, 472) = 6.20, p < .001) and the interaction between Gender and Grade (Wilks's Lambda = .93; F(20, 1566) = 1.61, p < .05).
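A small sketch of the two stability checks described above (a test-retest correlation and a correlated-measures t-test), using scipy on invented week-1 and week-3 totals rather than the study data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
week1 = rng.normal(70, 15, size=25)          # hypothetical Total scores at the first administration
week3 = week1 + rng.normal(0, 5, size=25)    # hypothetical rescoring two weeks later

r, _ = stats.pearsonr(week1, week3)          # test-retest reliability coefficient
t, p = stats.ttest_rel(week1, week3)         # paired t-test comparing the two administration means
print(f"r = {r:.2f}, t({len(week1) - 1}) = {t:.2f}, p = {p:.3f}")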
TABLE 2
Means and Standard Deviations for the APRS by Grade and Gender

Grade                 Total Score      Academic Success   Impulse Control   Academic Productivity

Grade 1 (n = 82)
  Girls (n = 40)      67.02 (16.27)    23.92 (7.37)        9.76 (2.49)      44.68 (10.91)
  Boys (n = 42)       71.95 (16.09)    26.86 (6.18)       10.67 (2.82)      46.48 (11.24)

Grade 2 (n = 91)
  Girls (n = 46)      72.56 (12.33)    26.61 (5.55)       10.15 (2.70)      47.85 ( 7.82)
  Boys (n = 45)       67.84 (14.86)    25.24 (6.15)        9.56 (2.72)      44.30 (10.76)

Grade 3 (n = 92)
  Girls (n = 43)      72.10 (14.43)    25.07 (6.07)       10.86 (2.65)      47.88 ( 9.35)
  Boys (n = 49)       68.49 (16.96)    25.26 (6.53)        9.27 (2.67)      45.61 (11.89)

Grade 4 (n = 79)
  Girls (n = 38)      67.79 (18.69)    24.08 (7.56)       10.36 (2.91)      44.26
  Boys (n = 41)       69.77 (15.83)    25.35 (6.50)        9.83 (2.77)      45.71

Grade 5 (n = 79)
  Girls (n = 44)      73.02 (14.10)    26.11 (6.01)       10.76 (2.34)      48.36
  Boys (n = 35)       63.68 (18.04)    23.14 (7.31)        8.69 (2.82)      42.40 (12.47)

Grade 6 (n = 70)
  Girls (n = 31)      74.10 (14.45)    26.59 (6.26)       10.79 (2.25)      48.77 ( 9.13)
  Boys (n = 39)       65.24 (12.39)    23.75 (5.90)        9.05 (2.35)      43.59 ( 8.19)

Note: Standard deviations are in parentheses. Three standard deviations are illegible in the source scan and are omitted.
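If Table 2 is used as a normative reference, a raw Total score can be expressed as a z-score against the appropriate grade-by-gender cell. The sketch below copies two cells from the table; the lookup function itself is ours and is not part of the original scale materials:

# Two grade-by-gender cells copied from Table 2 (Total score M and SD);
# a full lookup would include every cell of the table.
NORMS = {
    ("girls", 1): (67.02, 16.27),
    ("boys", 1): (71.95, 16.09),
}

def aprs_total_z(score, gender, grade):
    mean, sd = NORMS[(gender, grade)]
    return (score - mean) / sd

print(round(aprs_total_z(50, "girls", 1), 2))   # about -1.05, i.e., roughly 1 SD below the mean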

Separate univariate analyses of variance (ANOVAs) were conducted subsequently for each of the APRS scores to determine the source of the obtained multivariate effects. A main effect for Gender was obtained for the APRS Total score (F(1, 476) = 6.37, p < .05), Impulse Control (F(1, 475) = 16.79, p < .001), and Academic Productivity (F(1, 475) = 6.95, p < .05) subscale scores. For each of these scores, girls obtained higher ratings than boys, indicating greater teacher-rated academic productivity and behavioral functioning among girls. No main effect for Gender was obtained on Academic Success subscale scores. Finally, a significant interaction between Gender and Grade was obtained for the APRS Total score (F(5, 476) = 2.68, p < .05), Academic Success (F(5, 475) = 2.63, p < .05), and Impulse Control (F(5, 475) = 3.59, p < .01) subscale scores. All other main and interaction effects were nonsignificant.

Simple effects tests were conducted to elucidate Gender effects within each Grade level for those variables where a significant interaction was obtained. Relatively similar results were obtained across APRS scores. Gender effects were found only within grades 5 (F(1, 475) = 7.02, p < .01) and 6 (F(1, 475) = 6.61, p < .05) for the APRS total score. Alternatively, gender differences on the Academic Success subscale were obtained solely within grades 1 (F(1, 475) = 4.24, p < .05) and 5 (F(1, 475) = 4.14, p < .05). These results indicate that girls in the first and fifth grades were rated as more academically competent than boys. Significant differences between boys and girls in Impulse Control scores were also found within grades 3 (F(1, 475) = 8.73, p < .01), 5 (F(1, 475) = 12.24, p < .001), and 6 (F(1, 475) = 8.06, p < .01), with girls judged to exhibit greater behavioral control in these three grades. All other simple effects tests were nonsignificant.
TABLE 3
Correlations Between APRS Scores and Criterion Measures

Measures               Total Score   Academic Success   Impulse Control   Academic Productivity

ACTRS(a)               -.60***(b)    -.43**             -.49***           -.64***
ADHD Ratings           -.72***       -.59***            -.61***           -.72***
On-Task Percentage      .29*          .22                .24               .31*
AES(c)                  .53***        .26                .41**             .57***
CTBS Math               .48***        .62***             .28               .39**
CTBS Reading            .53***        .62***             .34*              .44**
CTBS Language           .53***        .61***             .41**             .45**

(a) Abbreviated Conners Teacher Rating Scale.
(b) Correlations are based on N = 50 with degrees of freedom = 48.
(c) Academic Efficiency Score.
*p < .05. **p < .01. ***p < .001.
Note: National percentile scores were used for all Comprehensive Test of Basic Skills (CTBS) subscales.
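As a reading aid for Table 3, the variance-shared figures quoted in the surrounding text are simply squared correlations; a short check in Python (the -.60 value is our reading of a partly illegible cell):

for label, r in [("Total score vs. ACTRS", -0.60),
                 ("Academic Productivity vs. AES", 0.57),
                 ("Academic Success vs. CTBS Math", 0.62)]:
    print(f"{label}: r = {r:+.2f}, shared variance = {100 * r * r:.1f}%")
# -> 36.0%, 32.5%, and 38.4%, matching the figures quoted in the text.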

Relationships Among APRS Scores and Criterion Measures

The relationships among all APRS scores and several criterion measures were examined to determine the concurrent validity of the APRS. Criterion measures included two teacher rating scales (ACTRS, ADHD Rating Scale), direct observations of on-task behavior, percentage of academic assignments completed correctly (AES), and norm-referenced achievement test scores (CTBS reading, math, and language). Pearson product-moment correlations among these measures are presented in Table 3. Overall, the absolute values of obtained correlation coefficients ranged from .22 to .72, with 24 out of 28 coefficients achieving statistical significance. Further, the APRS Total Score and Academic Productivity subscale were found to share greater than 36% of the variance with the AES, ACTRS, and ADHD Rating Scale. The Academic Success subscale shared an average of 38% of the variance of CTBS scores. Weaker correlations were obtained between APRS scores and direct observations of on-task behavior, with only an average of 7.2% of the latter's variance accounted for.

Divergent Validity of the APRS

Correlation coefficients between APRS scores and criterion measures were calculated with ACTRS ratings partialled out to statistically control for variance attributable to teacher ratings of problem behavior (see Table 4). Significant relationships remained between APRS academic dimensions (i.e., Total Score, Academic Success, and Academic Productivity subscales) and performance measures such as AES and achievement test scores. As expected, partialling out ACTRS scores reduced the correlations between the Impulse Control subscale and the criterion measures to nonsignificant levels. None of the partial correlations with ADHD ratings and on-task percentage were statistically significant, indicating that these criterion measures were more related to teacher perceptions of a child's behavioral control than to his or her academic performance. The Academic Success subscale continued to share 26% or greater of the variance of CTBS scores when ACTRS scores were partialled out. In addition, the Total APRS score and the Academic Productivity subscale shared 9% of the variance with AES beyond that accounted for by teacher ratings of problem behavior.
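Partial correlations of the kind reported in Table 4 can in principle be reproduced with the standard first-order partial correlation formula; the sketch below uses illustrative inputs (the AES-ACTRS correlation is not reported in the article, so it is invented here):

import math

def partial_r(r_xy, r_xz, r_yz):
    # First-order partial correlation of x and y with z (here, ACTRS ratings) removed.
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# Illustrative inputs: Total APRS with AES (.53) and with ACTRS (-.60) are taken
# from Table 3; the AES-ACTRS correlation (-.45) is a made-up placeholder.
print(round(partial_r(0.53, -0.60, -0.45), 2))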
TABLE 4
Correlations Between APRS Scores and Criterion Measures with ACTRS(a) Scores Partialled Out

Measures               Total Score   Academic Success   Impulse Control   Academic Productivity

ADHD Ratings           -.12(b)       -.24               -.24              -.07
On-Task Percentage     -.04          -.01               -.03              -.04
AES(c)                  .32*          .06                .22               .37**
CTBS Math               .38**         .56***             .14               .25
CTBS Reading            .46***        .58***             .24               .34*
CTBS Language           .43**         .54***             .28               .30*

(a) Abbreviated Conners Teacher Rating Scale.
(b) Correlations are based on N = 50 with degrees of freedom = 48.
(c) Academic Efficiency Score.
*p < .05. **p < .01. ***p < .001.
Note: National percentile scores were used for all Comprehensive Test of Basic Skills (CTBS) subscales.

The divergent validities of the APRS subscales were examined to assess the possible unique associations between subscale scores and criterion measures. This was evaluated using separate t-tests for differences between correlation coefficients that are from the same sample (Guilford & Fruchter, 1973, p. 167). The Academic Success subscale was more strongly associated with CTBS percentile rankings than the other subscales or ACTRS ratings. This finding was expected given that the Academic Success subscale is comprised of items related to the outcome of academic performance. Specifically, the relationship between CTBS Math scores and Academic Success ratings was significantly greater than that obtained between CTBS Math scores and Impulse Control (t(47) = 3.03, p < .01), Academic Productivity (t(47) = 3.11, p < .01), and ACTRS (t(47) = 2.35, p < .05) ratings. Similar results were obtained for CTBS Reading scores. The correlation of the latter with Academic Success ratings was significantly greater than its relationship with Impulse Control (t(47) = 2.50, p < .05), Academic Productivity (t(47) = 2.38, p < .05), and ACTRS (t(47) = 2.76, p < .01) ratings. Finally, the relationship between Academic Success ratings and CTBS Language scores was significantly greater than that obtained between the latter and Academic Productivity ratings (t(47) = 2.12, p < .05).

The Academic Productivity subscale was found to have the strongest relationships with teacher ratings of problem behavior and accurate completion of academic assignments. The correlation between Academic Productivity and ACTRS ratings was significantly greater than that obtained between ACTRS and Academic Success ratings (t(47) = 2.84, p < .01). In a similar fashion, Academic Productivity ratings were associated to a greater degree with AES scores than were Academic Success ratings (t(47) = 4.29, p < .01). Thus, the Academic Productivity subscale was significantly related to criterion variables that represent factors associated with achieving classroom success (i.e., absence of problem behaviors and accurate work completion). It should be noted that validity coefficients associated with the Impulse Control subscale were not found to be significantly greater than either of the other subscales.
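One textbook procedure for comparing two dependent correlations that share a variable is Hotelling's formula, as presented in older texts such as Guilford & Fruchter; whether this is the exact variant used in the study is an assumption on our part, and the inputs below are illustrative rather than the study's figures:

import math

def t_dependent_r(r12, r13, r23, n):
    # Compare r12 and r13, which share variable 1, within a single sample of size n.
    det = 1 - r12 ** 2 - r13 ** 2 - r23 ** 2 + 2 * r12 * r13 * r23
    t = (r12 - r13) * math.sqrt(((n - 3) * (1 + r23)) / (2 * det))
    return t, n - 3   # df = n - 3, which yields the t(47) form reported for N = 50

t, df = t_dependent_r(0.60, 0.40, 0.70, n=50)   # hypothetical correlations
print(f"t({df}) = {t:.2f}")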
APRS Ratings: Sensitivity to Group Differences

A final analysis was conducted to investigate the sensitivity of APRS ratings to differences between groups of children with and without attention and impulse control problems (i.e., the latter group representing students who are potentially exhibiting academic performance difficulties). Children from the total sample with scores 2 standard deviations above the mean on the ADHD rating scale (n = 35) were compared with students who received teacher ratings of ADHD symptomatology within 1 standard deviation of the mean (n = 390). Separate t-tests were conducted employing each of the APRS scores as dependent measures. Statistically significant differences were obtained between groups for the APRS Total score (t(1, 423) = 12.32, p < .001), and Academic Success (t(1, 423) = 7.23, p < .001), Impulse Control (t(1, 423) = 8.95, p < .001), and Academic Productivity (t(1, 423) = 10.20, p < .001) subscales, with the children exhibiting ADHD symptoms rated as significantly inferior on all APRS dimensions relative to control children.

DISCUSSION

The APRS is a brief teacher questionnaire that provides reliable and valid information about the quality of a student's academic performance and behavioral conduct in educational situations. Separate principal components analyses resulted in the extraction of three components or subscales (i.e., Academic Success, Impulse Control, and Academic Productivity) that were congruent across random subsamples. The Academic Success subscale accounted for over half of the variance, which supports the construct validity of the APRS, as it was intended to assess teacher perceptions of the quality of students' academic skills. An additional 13% of rating variance was accounted for by the Academic Productivity and Impulse Control subscales. Although the latter are highly correlated with the Academic Success subscale, both appear to provide unique information regarding factors associated with the process of achieving classroom success (e.g., work completion, following instructions, behavioral conduct).

Psychometric Properties of the APRS

The APRS total and subscale scores were found to possess acceptable internal consistency, to be stable across a 2-week interval, and to evidence significant levels of criterion-related validity. Although the Impulse Control subscale was found to have adequate test-retest reliability, its internal consistency was lower than the other subscales. This latter finding is likely due to the smaller number of items in this subscale. The relationships among APRS scores and criterion measures, such as academic efficiency, behavior ratings, and standardized academic achievement test scores, were statistically significant. The APRS Total Score and two subscales were found to have moderate validity coefficients and to share appreciable variance with several subtests of a norm-referenced achievement test and a measure of classwork accuracy. Further, when validity coefficients were calculated with ACTRS ratings partialled out, most continued to be statistically significant, indicating that APRS scores provide unique information regarding a child's classroom performance relative to brief ratings of problem behavior.

Two of the three APRS subscales were found to exhibit divergent validity. Although all APRS subscales were positively correlated with achievement test scores, the strongest relationships were found between the Academic Success subscale and CTBS percentile rankings, accounting for an average of 38% of the variance. Alternatively, although negative correlations were obtained between teacher report of problem behaviors (i.e., ACTRS and ADHD ratings) and all APRS scores, the strongest relationships were found between the former rating scales and Academic Productivity scores. Further, a classroom-based measure of work completion accuracy (AES) had a significantly greater correlation with the Academic Productivity subscale, with 32.5% variance
accounted for. This latter finding may appear counterintuitive (i.e., that Academic Success did not have the strongest relationship with AES), but is most likely due to the fact that AES represents a combination of the child's academic ability, attention to task, behavioral control, and motivation to perform. Given the varied item content of the Academic Productivity subscale, it is not surprising that it shares more variance with a complex variable like AES. This pattern of results indicates that the Academic Success subscale is most representative of the teacher's judgment of a student's global achievement status, whereas the Academic Productivity subscale has a greater relationship with factors associated with the process of day-to-day academic performance. Finally, although the Impulse Control subscale was significantly associated with most of the criterion measures, it was not found to demonstrate divergent validity. This result, combined with its brevity, lower internal consistency, and redundancy with teacher ratings of problem behavior, limits its practical utility as a separate subscale.

Although statistically significant positive correlations with on-task percentage were obtained for the APRS Total and Academic Productivity scores, the Academic Success and Impulse Control subscales were not related to this observational measure. One explanation for this result is that the Academic Productivity subscale is more closely related to factors associated with independent work productivity (e.g., attention to task) than are the other subscales. A second possible explanation for the weaker correlations between this criterion variable and all APRS scores is that children's classroom performance is a function of multiple variables and is unlikely to be represented by a single, specific construct. As such, teacher ratings of academic functioning should be more strongly related to global measures, such as AES or standardized achievement test scores, that represent a composite of ability, attention to task, task completion and accuracy, than with a more specific index such as on-task frequency.

Teacher ratings on the APRS differentiated a group of children displaying behavior and attention problems from their normal classmates. Youngsters who had received scores 2 or more standard deviations above the mean on a teacher rating of ADHD symptomatology received significantly lower scores on all APRS scales relative to a group of classmates who were within 1 standard deviation of the mean on ADHD ratings. This result provides preliminary evidence of the APRS's discriminant validity and value for screening/problem identification purposes. Further studies are necessary to establish its utility in differentiating youngsters with disruptive behavior disorders who are exhibiting concomitant academic problems versus those who are not.

APRS: Grade and Gender Differences

Girls were rated to be more competent than boys on the Academic Productivity subscale, regardless of grade level. This result was expected, as gender differences favoring girls have been found for most similar teacher questionnaires (e.g., Weissberg et al., 1987). Alternatively, for the total and remaining subscale scores, girls were rated as outperforming boys only within specific grade levels. In general, these were obtained at the fifth and sixth grade levels, wherein gender differences with respect to achievement status and behavioral control are most evident at the upper grades. The latter result could indicate that gender differences in daily academic performance do not impact on teachers' overall assessment of educational status until the later grades, when demands for independent work greatly increase. Interestingly, no significant grade differences were obtained for any of the APRS scores. As Hightower and colleagues (1986) have suggested, a lack of differences across grade levels implies that teachers complete ratings of academic performance in relative (i.e., in comparison with similar-aged peers) rather than absolute terms.
Limitations of the Present Study

Several factors limit definitive conclusions about the utility of the APRS based on the present results. First, the sample of children studied was limited to an urban location in one geographic region; it is unknown how representative these normative data would be for children from rural or suburban settings as well as other regions. Previous research with similar teacher questionnaires would suggest significant differences in scores across urban, suburban, and rural settings (e.g., Hightower et al., 1986). Secondly, for the norms to be generally applicable, APRS ratings would need to be collected for a sample representative of the general population with respect to ethnicity and socioeconomic status. A further limitation of the present study was the limited range of criterion measures employed. In particular, the relationship of APRS scores with more direct measures of academic performance (e.g., curriculum-based measurement) should be explored, as the weaknesses of norm-referenced achievement tests for this purpose are well documented (Marston, 1989; Shapiro, 1989). Finally, additional psychometric properties of this scale, such as predictive validity and inter-rater reliability, need to be documented. Empirical investigations are necessary to determine the usefulness of the APRS as a treatment-sensitive instrument. Evidence for the latter is especially important, as a primary purpose for creating the APRS was to allow assessment of intervention effects on academic performance.

Summary

The results of this preliminary investigation indicate that the APRS is a highly reliable rating scale that has demonstrated initial validity for assessing teacher perceptions of the quality of student academic performance. Given its unique focus on academic competencies rather than behavioral deficits, it appears to have potential utility within the context of a multimethod assessment battery. In particular, it should serve as a valuable supplement to behavioral assessment techniques (e.g., direct observations of behavior, curriculum-based measurement) given its brevity, focus on both global and specific achievement parameters, and relationship with classroom-based criteria of academic success. The present results provide initial support for the utility of the APRS as a screening/problem identification measure. Further, when used in the context of an assessment battery that includes more direct measures of academic performance, the APRS may provide important data regarding the social validity (i.e., teacher perceptions of changes in academic status) of obtained intervention effects, although its incremental validity would need to be established. The APRS's sensitivity to the effects of behavioral and psychopharmacological interventions awaits further empirical study.

REFERENCES

American Psychiatric Association. (1987). Diagnostic and statistical manual of mental disorders (3rd ed., rev.). Washington, DC: Author.

Barkley, R. A. (1988). Child behavior rating scales and checklists. In M. Rutter, A. H. Tuma, & I. S. Lann (Eds.), Assessment and diagnosis in child psychopathology (pp. 113-155). New York: Guilford.

Bernstein, I. H. (1988). Applied multivariate analysis. New York: Springer-Verlag.

Cantwell, D. P., & Satterfield, J. H. (1978). The prevalence of academic under-achievement in hyperactive children. Journal of Pediatric Psychology, 3, 168-171.

Children's Defense Fund. (1988). A call for action to make our nation safe for children: A briefing book on the status of American children in 1988. Washington, DC: Author.

Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37-46.

CTB/McGraw-Hill. (1982). The Comprehensive Test of Basic Skills. Monterey, CA: Author.

DuPaul, G. J. (in press). Parent and teacher ratings of ADHD symptoms: Psychometric properties in a community-based sample. Journal of Clinical Child Psychology.
Gerber, M. M., & Semmel, M. I. (1984). Teacher as imperfect test: Reconceptualizing the referral process. Educational Psychologist, 19, 137-148.

Gesten, E. L. (1976). A Health Resources Inventory: The development of a measure of the personal and social competence of primary-grade children. Journal of Consulting and Clinical Psychology, 44, 775-786.

Glidewell, J. C., & Swallow, C. S. (1969). The prevalence of maladjustment in elementary schools. Report prepared for the Joint Commission on Mental Illness and Health of Children. Chicago: University of Chicago Press.

Goyette, C. H., Conners, C. K., & Ulrich, R. F. (1978). Normative data on Revised Conners Parent and Teacher Rating Scales. Journal of Abnormal Child Psychology, 6, 221-236.

Gresham, F. M., & Elliott, S. N. (1990). Social skills rating system. Circle Pines, MN: American Guidance Service.

Gresham, F. M., Reschly, D. J., & Carey, M. P. (1987). Teachers as "tests": Classification accuracy and concurrent validation in the identification of learning disabled children. School Psychology Review, 16, 543-553.

Guilford, J. P., & Fruchter, B. (1973). Fundamental statistics in psychology and education (5th ed.). New York: McGraw-Hill.

Harman, H. H. (1976). Modern factor analysis (3rd ed., rev.). Chicago: The University of Chicago Press.

Hightower, A. D., Work, W. C., Cowen, E. L., Lotyczewski, B. S., Spinell, A. T., Guare, J. C., & Rohrbeck, C. A. (1986). The Child Rating Scale: The development of a socioemotional self-rating scale for elementary school children. School Psychology Review, 16, 239-255.

Hoge, R. D. (1983). Psychometric properties of teacher-judgment measures of pupil aptitudes, classroom behaviors, and achievement levels. Journal of Special Education, 17, 401-429.

Hollingshead, A. B. (1975). Four factor index of social status. New Haven, CT: Yale University, Department of Sociology.

Kazdin, A. E. (1985). Treatment of antisocial behavior in children and adolescents. Homewood, IL: Dorsey Press.

Kinsbourne, M., & Swanson, J. M. (1979). Models of hyperactivity: Implications for diagnosis and treatment. In R. L. Trites (Ed.), Hyperactivity in children: Etiology, measurement, and treatment implications (pp. 1-20). Baltimore: University Park Press.

Lorion, R. P., Cowen, E. L., & Caldwell, R. A. (1975). Normative and parametric analyses of school maladjustment. American Journal of Community Psychology, 3, 291-301.

Marston, D. B. (1989). A curriculum-based measurement approach to assessing academic performance: What it is and why do it. In M. R. Shinn (Ed.), Curriculum-based measurement: Assessing special children (pp. 18-78). New York: Guilford Press.

National Commission on Excellence in Education. (1983). A nation at risk: The imperative for educational reform. Washington, DC: Author.

Neeper, R., & Lahey, B. B. (1986). The Children's Behavior Rating Scale: A factor analytic developmental study. School Psychology Review, 15, 277-288.

Rapport, M. D. (1987). Attention Deficit Disorder with Hyperactivity. In M. Hersen & V. B. Van Hasselt (Eds.), Behavior therapy with children and adolescents (pp. 325-361). New York: Wiley.

Rapport, M. D. (in press). Psychostimulant effects on learning and cognitive function in children with Attention Deficit Hyperactivity Disorder: Findings and implications. In J. L. Matson (Ed.), Hyperactivity in children: A handbook. New York: Pergamon Press.

Rapport, M. D., DuPaul, G. J., Stoner, G., & Jones, J. T. (1986). Comparing classroom and clinic measures of attention deficit disorder: Differential, idiosyncratic, and dose-response effects of methylphenidate. Journal of Consulting and Clinical Psychology, 54, 334-341.

Rapport, M. D., Murphy, A., & Bailey, J. S. (1982). Ritalin vs. response cost in the control of hyperactive children: A within-subject comparison. Journal of Applied Behavior Analysis, 15, 205-216.

Rosenshine, B. V. (1981). Academic engaged time, content covered, and direct instruction. Journal of Education, 3, 38-66.

Rubin, R. A., & Balow, B. (1978). Prevalence of teacher-identified behavior problems. Exceptional Children, 45, 102-111.

Shapiro, E. S. (1989). Academic skills problems: Direct assessment and intervention. New York: Guilford Press.

Shapiro, E. S., & Kratochwill, T. R. (Eds.). (1988). Behavioral assessment in schools: Conceptual foundations and practical applications. New York: Guilford Press.

Shinn, M. R. (Ed.). (1989). Curriculum-based measurement: Assessing special children. New York: Guilford Press.

Walker, H. M., & McConnell, S. R. (1988). Walker-McConnell Scale of Social Competence and School Adjustment. Austin, TX: Pro-Ed.

Weiss, G., & Hechtman, L. (1986). Hyperactive children grown up. New York: Guilford.

Weissberg, R. P., Cowen, E. L., Lotyczewski, B. S., Boike, M. F., Orara, N., Ahvay, Stalonas, P., Sterling, S., & Gesten, E. L. (1987). Teacher ratings of children's problem and competence behaviors: Normative and parametric characteristics. American Journal of Community Psychology, 15, 387-401.

Whalen, C. K., Henker, B., & Granger, D. A. (1989). Ratings of medication effects in hyperactive children: Viable or vulnerable? Behavioral Assessment, 11, 179-199.

George J. DuPaul, PhD, received his doctorate from the University of Rhode Island in 1985. He is currently Assistant Professor of Psychiatry at the University of Massachusetts Medical Center. His research interests include the assessment and treatment of Attention Deficit Hyperactivity Disorder and related behavior disorders.

Mark D. Rapport, PhD, is currently Associate Professor of Psychology at the University of Hawaii at Manoa. His research interests include assessment of the cognitive effects of psychotropic medications and the treatment of Attention Deficit Hyperactivity Disorder and related behavior disorders.

Lucy M. Perriello, MA, received a Master's degree in Counseling Psychology from Assumption College in 1988. She is currently a Research Associate in Behavioral Medicine at the University of Massachusetts Medical Center.

APPENDIX A

Student ______________________  Date ____________
Grade ______  Teacher ______________________

For each of the below items, please estimate the above student's performance over the PAST WEEK. For each item, please circle one choice only.

1. Estimate the percentage of written math work completed (regardless of accuracy) relative to classmates.
   0-49% (1)   50-69% (2)   70-79% (3)   80-89% (4)   90-100% (5)

2. Estimate the percentage of written language arts work completed (regardless of accuracy) relative to classmates.
   0-49% (1)   50-69% (2)   70-79% (3)   80-89% (4)   90-100% (5)

3. Estimate the accuracy of completed written math work (i.e., percent correct of work done).
   0-64% (1)   65-69% (2)   70-79% (3)   80-89% (4)   90-100% (5)

4. Estimate the accuracy of completed written language arts work (i.e., percent correct of work done).
   0-64% (1)   65-69% (2)   70-79% (3)   80-89% (4)   90-100% (5)

5. How consistent has the quality of this child's academic work been over the past week?
   Consistently Poor (1)   More Poor than Successful (2)   Variable (3)   More Successful than Poor (4)   Consistently Successful (5)

6. How frequently does the student accurately follow teacher instructions and/or class discussion during large-group (e.g., whole class) instruction?
   Never (1)   Rarely (2)   Sometimes (3)   Often (4)   Very Often (5)

7. How frequently does the student accurately follow teacher instructions and/or class discussion during small-group (e.g., reading group) instruction?
   Never (1)   Rarely (2)   Sometimes (3)   Often (4)   Very Often (5)

8. How quickly does this child learn new material (i.e., pick up novel concepts)?
   Very Slow (1)   Slow (2)   Average (3)   Quickly (4)   Very Quickly (5)

9. What is the quality or neatness of this child's handwriting?
   Poor (1)   Fair (2)   Average (3)   Above Average (4)   Excellent (5)

10. What is the quality of this child's reading skills?
    Poor (1)   Fair (2)   Average (3)   Above Average (4)   Excellent (5)

11. What is the quality of this child's speaking skills?
    Poor (1)   Fair (2)   Average (3)   Above Average (4)   Excellent (5)

12. How often does the child complete written work in a careless, hasty fashion?
    Never (1)   Rarely (2)   Sometimes (3)   Often (4)   Very Often (5)

13. How frequently does the child take more time to complete work than his/her classmates?
    Never (1)   Rarely (2)   Sometimes (3)   Often (4)   Very Often (5)

14. How often is the child able to pay attention without you prompting him/her?
    Never (1)   Rarely (2)   Sometimes (3)   Often (4)   Very Often (5)

15. How frequently does this child require your assistance to accurately complete his/her academic work?
    Never (1)   Rarely (2)   Sometimes (3)   Often (4)   Very Often (5)

16. How often does the child begin written work prior to understanding the directions?
    Never (1)   Rarely (2)   Sometimes (3)   Often (4)   Very Often (5)

17. How frequently does this child have difficulty recalling material from a previous day's lessons?
    Never (1)   Rarely (2)   Sometimes (3)   Often (4)   Very Often (5)

18. How often does the child appear to be staring excessively or "spaced out"?
    Never (1)   Rarely (2)   Sometimes (3)   Often (4)   Very Often (5)

19. How often does the child appear withdrawn or tend to lack an emotional response in a social situation?
    Never (1)   Rarely (2)   Sometimes (3)   Often (4)   Very Often (5)
