
Contemporary Educational Psychology 27, 51–79 (2002)

doi:10.1006/ceps.2001.1091, available online at http://www.idealibrary.com

Measures of Children’s Knowledge and Regulation of Cognition
Rayne A. Sperling
The Pennsylvania State University

Bruce C. Howard
NASA Classroom of the Future, Wheeling Jesuit University

Lee Ann Miller
West Virginia University

and
Cheryl Murphy
University of Arkansas

Published online October 2, 2001

Two studies were conducted to investigate measures of children’s metacognition. Experiment 1 presented two versions of a self-report inventory, the Jr. MAI, appropriate for assessing metacognition in children in grades 3–9. Factor analyses are interpreted that illustrate how the items measure components of metacognition. Experiment 2 further addressed properties of the two versions and compared the instrument to other inventories, teacher ratings of children’s metacognition, and student achievement scores. Findings indicated high correlations between the Jr. MAI and an existing metacognitive problem-solving inventory (Fortunato, Hecht, Tittle, & Alvarez, 1991). Moderate correlations between the Jr. MAI and other self-report instruments of metacognition while reading (Jacobs & Paris, 1987; Schmitt, 1990) and low correlations between the Jr. MAI and teacher ratings of metacognition and overall achievement scores were also found. Gender and grade level differences in metacognition are presented and discussed. The instruments are appended. © 2001 Elsevier Science (USA)

Data presented in this article were collected while the first author was a faculty member at West Virginia University. Portions of data from Experiment 1 were presented at the annual meeting of the American Educational Research Association, 1996. Portions of data from Experiment 2 were presented at the annual meeting of the American Educational Research Association, 1998.
Address correspondence and reprint requests to Rayne Sperling, 201 CEDAR Building, The Pennsylvania State University, University Park, PA 16801. E-mail: rsd7@psu.edu.



One goal of education is to promote and develop self-regulated learners.
Both practitioners and researchers are interested in the extent and develop-
ment of self-regulatory abilities, such as metacognitive knowledge and regu-
lation, in school-age learners. There is a current need for measures of meta-
cognition as well as measures of other self-regulatory constructs (Winne &
Perry, 2000). There are two main reasons for this need. First, many teachers,
administrators, and researchers are interested in facilitating self-regulated
learning and therefore need to assess the effects of learning strategy interven-
tions on learners’ metacognitive processing and strategy use. For example,
when planning and testing various interventions, clear information about the
status of metacognitive skills in learners could facilitate effective targeting
of metacognitive and other self-regulatory weaknesses. If such weaknesses
were related to reading comprehension, for instance, instruction could be
focused to develop more effective planning and monitoring skills (Cross &
Paris, 1988; Manning, Glasner, & Smith, 1996; Paris, Cross, & Lipson, 1984;
Schraw & Dennison, 1994). Effectively identifying metacognitive and self-
regulatory skills is crucial to the appropriate design of subsequent interven-
tions.
Second, there is a need to further understand the components of self-regu-
lated learning. Currently relatively little is known about the relationships
among constructs comprising self-regulation, such as strategy use, metacog-
nition, and motivation (Brownlee, Leventhal, & Leventhal, 2000; DeCorte,
Verschaffel, & Op ’t Eynde, 2000; Weinstein, Husman, & Dierking, 2000).
Understanding of the interrelationships among self-regulatory constructs,
through effective measurement, can facilitate needed further theory devel-
opment and testing (Winne & Perry, 2000). The current work addresses
these two needs by examining measures of metacognition in learners in
grades 3–9.
Several methods of measuring metacognition and self-regulation have
been implemented both for research and practice with children. Some have
relied on standardized achievement scores. Achievement scores are some-
times used as dependent measures in intervention studies with special and
general education children (see as one example, Feitler & Hellekson, 1993).
It is important in school-based settings to determine if interventions may
impact standardized test scores. Standardized achievement scores, however,
can be problematic when used as measures of self-regulatory constructs such
as strategy use, motivation, and metacognition, since research indicates that
the relationship between standardized achievement scores and metacognition
is not direct. Metacognitive training programs, for example, have been shown
to be effective for teaching reading and problem-solving strategies regardless
of learning aptitude or achievement (Delclos & Harrington, 1991; Jacobs &
Paris, 1987; Palincsar & Brown, 1984). Also, Swanson (1990) indicated that
metacognitive knowledge and intellectual aptitude were unrelated and that
metacognitive skills helped children of lower aptitude compensate on
problem-solving tasks. Similarly, Pintrich, Smith, Garcia, and McKeachie
(1991) indicated that metacognition and strategy use were not highly corre-
lated with academic achievement. In addition, Pressley and Ghatala (1989)
also found metacognition to be unrelated to verbal ability and further indi-
cated that achievement and ability measures are not a good proxy for meta-
cognitive skills. Additional research also illustrates a lack of significant cor-
relation between aptitude measures and metacognition. For example, Allon
and others (Allon, Gutkin, & Bruning, 1999) found no relationship between
metacognition and IQ in a ninth-grade sample. Therefore, based on several
studies, the relationship between achievement or aptitude and metacognitive
constructs is unclear.
One would hypothesize that metacognitive knowledge and regulation
would facilitate learning and have an effect on academic achievement; how-
ever, as illustrated, research findings do not always support that hypothesis.
As such, the use of achievement measures as indications of metacognitive
knowledge or other self-regulatory constructs is unwarranted.
Measures that directly assess metacognition and broader self-regulatory
constructs have also been used. H. Lee Swanson (1990), for example, used
an interview technique with learners in the intermediate grades to assess
metacognition. Similarly, Zimmerman (Zimmerman & Martinez-Pons, 1986,
1988) employed a structured interview technique to measure self-regulation.
Others have employed monitoring checklists (e.g., Manning, 1996) to pro-
mote and measure metacognition. Newman (1984a, 1984b; Newman &
Wick, 1987), Pressley and colleagues (Pressley & Ghatala, 1989; Pressley,
Levin, Ghatala, & Ahmad, 1987), and more recently Tobias and colleagues
(e.g., Tobias, Everson, & Laitusis, 1999) have used calibration techniques
to assess learners’ metacognitive regulation. Teacher ratings have also been
employed to measure metacognitive and self-regulatory processes. Zimmer-
man and Martinez-Pons (1988) found teacher ratings of self-regulation to
be slightly correlated with student ratings of their own self-regulation and
moderately related with achievement measures. Although Zimmerman and
Martinez-Pons used teacher ratings as measures of self-regulation, others
have employed teacher ratings as measures of metacognition alone (Tobias,
Everson, & Laitusis, 1999). Additionally, inventories have also been used
to measure metacognitive processes in school-age learners (e.g., Fortunato
et al., 1991; Jacobs & Paris, 1987; Pereira-Laird & Deane, 1997; Schmitt,
1990).
Each of the methods of measurement has advantages and disadvantages.
The main reason for many disadvantages is that it is difficult to capture meta-
cognitive processing via direct observation. Achievement scores are particu-
larly limited as a measure, as noted, because gains in content knowledge and
abilities may or may not reflect changes in metacognition, strategic knowl-
edge, or other regulatory constructs. Interviews and other rich data sources,
such as journals and open-ended responses, are problematic because of the
relatively lengthy time to administer and time-consuming process of data
analysis. Sometimes these data are reduced to a relatively small numerical
scale, which may decrease the benefits of such rich measures (e.g., Craig &
Yore, 1995). Think-aloud protocol measures are often similarly cumbersome
in administration and scoring. The strengths and weaknesses of such proto-
cols were addressed in reviews both by Nisbett and Wilson (1977) and Erics-
son and Simon (1980, 1993). Pressley and Afflerbach’s (1995) critique of
Ericsson and Simon’s work and their own contributions provide suggestions
for effective use of protocols and their subsequent analyses. Difficulties re-
main, however, and at times learners’ level of strategies and skills may pre-
clude them from also thinking aloud while engaging in learning and monitor-
ing tasks (Cavanaugh & Perlmutter, 1982; Pressley & Afflerbach, 1995).
Monitoring checklists, where learners check off target activities during or
after a learning task, provide a systematic means to assess both metacognition in classroom academic learning and self-regulation of behavior in interventions
(Manning, 1984, 1991; Reid & Harris, 1993). Calibration techniques, where
learners make a judgment of their learning or of their performance that is
then compared to their actual performance, also have served as a measure
of metacognitive processing across age and ability levels (Pressley & Gha-
tala, 1989; Pressley, Levin, Ghatala, & Ahmad, 1987; Schraw & Roedel,
1994). In calibration measures, the difference between the rating and actual
performance is calculated as either a bias or accuracy measure. The magni-
tude of this difference is taken as a measure of a learner’s ability to monitor
and evaluate her learning. The benefits of the calibration technique are that
calibrations may be less subjective and are also easily administered and
scored. Although often used, one concern with calibration techniques is their
close similarity to self-efficacy ratings. Pajares (1996) provided discussion
of measures of self-efficacy that are very similar to calibration assessments. It
may be that calibration techniques capture self-efficacy or other motivational
constructs rather than metacognition. Winne and Perry (2000) further address
the limitations of various approaches to measurement of metacognition and
other self-regulatory constructs.
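In computational terms, the bias and accuracy scores described above are simple difference scores. The following Python sketch illustrates the logic; the judgment and performance values are hypothetical, and the scoring is a generic rendering of the difference-score approach rather than any one published formula.

```python
import numpy as np

# Hypothetical per-item confidence judgments (0-1) and actual
# performance (1 = correct, 0 = incorrect) for one learner.
judgments = np.array([0.9, 0.6, 0.8, 0.3, 0.7])
performance = np.array([1, 0, 1, 0, 0])

# Bias: signed mean difference between judgment and performance;
# positive values suggest overconfidence, negative underconfidence.
bias = np.mean(judgments - performance)

# Accuracy: mean absolute difference; smaller values indicate
# judgments that track actual performance more closely.
accuracy = np.mean(np.abs(judgments - performance))

print(f"bias = {bias:+.2f}, absolute accuracy = {accuracy:.2f}")
```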
Self-report inventories as measures of metacognitive processing are per-
haps, in some ways, the least problematic technique. In terms of their bene-
fits, these inventories are easily administered and scored, which makes them
useful large-scale assessment tools for determining which learners may need
interventions in metacognition, strategy use, or superordinate self-regulation.
Self-report inventories may also be helpful for use in theoretical research.
For instance, research has demonstrated that both the knowledge and regula-
tion components of metacognition can be measured via self-report inven-
tories (Pereira-Laird & Deane, 1997; Schraw & Dennison, 1994). As self-
regulatory constructs are further delineated, researchers will need measures
of metacognitive processes to facilitate theoretical and predictive models
of self-regulation.
In contrast to inventories used with older learners that have undergone psychometric assessment, such as the Metacognitive Awareness Inventory (MAI; Schraw & Dennison, 1994), the Motivated Strategies for Learning Questionnaire (MSLQ; Pintrich, Smith, Garcia, & McKeachie, 1991), and the Learning and Study Strategies Inventory (LASSI; Weinstein, Schulte, & Palmer, 1987), less is known regarding self-report inventories of metacognition for use with younger learners. Such inventories are often de-
veloped for a specific one-time use and are rarely compared to other mea-
sures of metacognitive processing or to achievement.
Research on children’s metacognition generally employs one of two
frameworks although other conceptions of metacognition are present in the
literature (e.g., Nelson & Narens, 1996). One framework, initiated by Flavell
(Flavell, 1979; Flavell, Miller, & Miller, 1993), presents metacognition as
including metacognitive knowledge and metacognitive experiences. Meta-
cognitive knowledge includes task, person, and strategy components. Meta-
cognitive experiences include feelings of understanding and may be the im-
petus for strategy implementation (Flavell, 1979). In later writings Flavell
and colleagues referred to these components as metacognitive monitoring
and self-regulation (1993).
The second framework initiated by Brown (1978), and further delineated
and discussed in later work (Baker & Brown, 1984; Cross & Paris, 1988;
Jacobs & Paris, 1987; Paris, Cross, & Lipson, 1984; Pereira-Laird & Deane,
1997) also suggests two components: Knowledge of cognition and regulation
of cognition. The knowledge component includes declarative, procedural,
and conditional knowledge of cognition. The regulation of cognition compo-
nent includes constructs such as planning, monitoring, and evaluation. The
current study employs the Brown framework of metacognition as the theoret-
ical foundation.
The Brown framework was chosen for several reasons. First, the instru-
ments developed and tested in the current work are based on an adult measure
of metacognition that was developed conceptually from the Brown frame-
work. Second, in Experiment 2, the new instrument, the Jr. MAI, is examined
in an initial construct validity study. The other instruments used in Experi-
ment 2 were either developed based on the Brown framework of metacogni-
tion or the items on the instruments can be readily classified into the Brown
framework. In this way, the Brown framework provides a consistent con-
struct definition for the initial validity examination. Third, the current study
addresses metacognition within the context of an academic setting. The
Brown framework provides direct application to academic learning settings
(e.g., Baker & Brown, 1984).
The current work addresses three main goals. First, both experiments ad-
dress the development of a self-report inventory measure of metacognitive
processes. Based on the Brown (1978) framework of metacognition, two versions of the Jr. MAI were created to assess metacognitive skills in learners in grades 3 through 9. Second, Experiment 2 investigates relationships among measures of metacognitive processing in school-age learners and examines four self-report inventories of metacognitive processing as well as teacher ratings of metacognitive processing. Third, Experiment 2 also considers the relationship between measures of meta-
cognition and standardized achievement measures, gender differences, and
grade-related developmental changes in metacognition.
EXPERIMENT 1
The first study was conducted to assess the psychometric properties of
two versions of a metacognitive instrument, titled the Junior Metacognitive
Awareness Inventory, or Jr. MAI, developed for use with younger popula-
tions. The Jr. MAI was developed from a previous instrument, the Metacog-
nitive Awareness Inventory (MAI), used with adult populations (Schraw &
Dennison, 1994).
Methods
Participants
Participants included all 344 children in third through ninth grades from a rural K–9 school. Fewer than 1% of the district’s residents are members of minority groups, and therefore few minorities were included in the study. Fifty-one percent of the children in the school qualified for free or reduced-price lunch programs.

Materials
Two self-report inventories were developed. The first inventory (Jr. MAI, Version A) in-
cluded 12 items with a three-choice response (never, sometimes, or always) for use with learn-
ers in grades 3 through 5 (Jr. MAI, Version A is presented in Appendix A). The second
inventory (Jr. MAI, Version B) included the same 12 items but also included 6 additional
items and used a 5-point Likert scale for use with learners in grades 6 through 9. The additional
6 items were added to reflect higher levels of regulation that would likely be evidenced in
older, more experienced learners (Jr. MAI Version B is presented in Appendix B).
The Schraw and Dennison (1994) measure for adults (the MAI) was used as a reference
measure. First, items that loaded strongly on the knowledge of cognition and regulation of
cognition factors of the MAI were examined. After selecting those items that most strongly
represented the two factors, each item was checked for relevance to a younger population.
For example, an item from the MAI, which was less appropriate for the current population,
was I understand my intellectual strengths and weaknesses. Second, most items were reworded
to represent language understandable to younger populations. In addition, some items were
given more of a context to assist young learners’ understanding. For example, on the MAI
an item that loaded heavily on the regulation factor stated, I ask myself how well I accomplished
my goals once I’m finished. The item was reworded for the current population as, I ask myself
if I learned as much as I could have when I finish a task. Third, items were further considered
based on their loadings on the original eight components of metacognition represented in the
Schraw and Dennison inventory. Consistent with Brown’s theory, the Schraw and Dennison
inventory, and related work, three components of knowledge of cognition (declarative knowl-
edge, conditional knowledge, and procedural knowledge), and five components of regulation
of cognition (planning, monitoring, information management, evaluation, and debugging) were
considered. Fourth, two classroom teachers from the school from which the sample was drawn,
a third-grade teacher and a fifth-/sixth-grade teacher, were consulted prior to administration.
Both teachers felt the items would be answerable by their learners. The third-grade teacher
confirmed some of the Version B items would be too complex for her learners.
Table 1 provides each Jr. MAI item number, the original MAI item number, and the item text. The load-
ings from the original scale as reported by Schraw and Dennison (1994, Experiment 1) are
presented and the item affiliation is included. Appendix A and Appendix B present the two
versions of the Jr. MAI and can be referenced to illustrate changes in the items from their
original form as found in the MAI.

Procedures
All learners in grades 3 through 9 were included in the study. In all classes the instrument
was administered as part of normal class procedures. Although not part of the protocol, teachers
administered the inventory in grade 3 by reading each item aloud in a group setting. All other
learners completed the inventory at their own pace. The classroom teacher or paraprofessional
gave additional assistance to those in need of it. The Jr. MAI, Version A, was administered
to grades 3 through 5; the Jr. MAI, Version B, was administered to grades 6 through 9.

Results
The primary goals when developing the Jr. MAI were to create a short, easily administered instrument to screen learners for potential metacognitive and cognitive strategy interventions and to serve as an assessment tool for determining the effectiveness of ongoing interventions. First, all items were screened for nonnormality using skewness and kurtosis values. All items met the criteria for normality (Fabrigar, MacCallum, Wegener, & Strahan, 1999; Tabachnick & Fidell, 1996). Communality
estimates were examined for the items in each of the inventories. Two items
for Version A (Item 4 and Item 5) indicated communality estimates near 1,
perhaps indicating some redundancy in those items. No items warranted con-
cern based on communality estimates in Version B. Next, separate explor-
atory factor analyses were conducted using an orthogonal varimax rotation
with a principal components extraction method for each of the instruments.
Oblique analyses were also conducted but are not reported as they yielded
very similar results. This is consistent with findings from Schraw and Den-
nison (1994) in factor analyses conducted with the MAI.
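As an illustration of this screening and extraction sequence, the sketch below applies the same steps to simulated responses using the open-source factor_analyzer package; the package choice, variable names, and simulated data are assumptions for illustration, not the software used in the original analyses.

```python
import numpy as np
import pandas as pd
from scipy.stats import skew, kurtosis
from factor_analyzer import FactorAnalyzer

# Simulated stand-in for the 142 x 12 matrix of Version A responses
# (items scored 1 = never, 2 = sometimes, 3 = always).
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 4, size=(142, 12)),
                     columns=[f"item_{i}" for i in range(1, 13)])

# Screen each item for nonnormality via skewness and kurtosis.
print(pd.DataFrame({"skew": items.apply(skew),
                    "kurtosis": items.apply(kurtosis)}))

# Principal components extraction with an orthogonal varimax rotation.
fa = FactorAnalyzer(n_factors=5, rotation="varimax", method="principal")
fa.fit(items)
print(fa.get_communalities())  # estimates near 1 can signal redundancy
print(fa.loadings_)            # item-by-factor loading matrix
```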
TABLE 1
Jr. MAI Items and MAI Original Items, Factor Loadings, and Theoretical Correspondence

Each entry lists the Jr. MAI item number, the original MAI item number and wording (Schraw & Dennison, 1994), the factor loading from their Experiment 1, and the item’s conceptual affiliation.

1 (MAI 32): I am a good judge of how well I understand something. Loading .70. Knowledge of cognition (K), declarative knowledge.
2 (MAI 26): I can motivate myself to learn when I need to. Loading .66. Knowledge of cognition (K), conditional knowledge.
3 (MAI 3): I try to use strategies that have worked in the past. Loading .55. Knowledge of cognition (K), procedural knowledge.
4 (MAI 16): I know what the teacher expects me to learn. Loading .53. Knowledge of cognition (K), declarative knowledge.
5 (MAI 15): I learn best when I already know something about the topic. Loading .53. Knowledge of cognition (K), conditional knowledge.
6 (MAI 37): I draw pictures or diagrams to help me understand while learning. Loading .38. Regulation of cognition (R), information management skill.
7 (MAI 50): I ask myself if I learned as much as I could have once I finish a task. Loading .65. Regulation of cognition (R), evaluation.
8 (MAI 11): I ask myself if I have considered all options when solving a problem. Loading .46. Regulation of cognition (R), monitoring.
9 (MAI 6): I think about what I really need to learn before I begin a task. Loading .59. Regulation of cognition (R), planning.
10 (MAI 49): I ask myself questions about how well I am learning while I am learning something new. Loading .55. Regulation of cognition (R), monitoring.
11 (MAI 30): I focus on the meaning and significance of new information. Loading .59. Regulation of cognition (R), information management skill.
12 (MAI 46): I learn more when I am interested in the topic. Loading .57. Knowledge of cognition (K), declarative knowledge.
13 (MAI 29): I use my intellectual strengths to compensate for my weaknesses. Loading .54. Knowledge of cognition (K), conditional knowledge.
14 (MAI 18): I use different learning strategies depending on the situation. Loading .43. Knowledge of cognition (K), conditional knowledge.
15 (MAI 1): I ask myself periodically if I am meeting my goals. Loading .62. Regulation of cognition (R), monitoring.
16 (MAI 33): I find myself using helpful learning strategies automatically. Loading .48. Knowledge of cognition (K), procedural knowledge.
17 (MAI 19): I ask myself if there was an easier way to do things after I finish a task. Loading .44. Regulation of cognition (R), evaluation.
18 (MAI 8): I set specific goals before I begin a task. Loading .68. Regulation of cognition (R), planning.

Version A
One hundred forty-four participants completed the inventory. Two of these cases were discarded due to incomplete data. Although the inventory employed a 3-point scale, variance in the scores was evident. The exploratory factor analysis indicated a five-factor solution using an eigenvalue-greater-than-1 criterion. Item affiliation with a factor was determined by a loading greater than .35 (Table 2).
The five-factor solution accounted for 60.4% of the sample variance. Item
1 was the only complex item in the sample and loaded both on Factor 2 and
Factor 5. Factor 1 included four regulation items. Factor 2 was represented
by two knowledge items and one regulation item. Factor 3 split with one
regulation and one knowledge item. Factor 4 and Factor 5 included only
knowledge items. In summary, in the five-factor solution two factors repre-
sented only knowledge items, one represented only regulation items, and
two factors split and contained items from both the regulation and knowledge
factors; therefore, the items generally affiliated as hypothesized. An explor-
atory factor analysis that forced two factors was also conducted. The two-
factor solution accounted for 31% of the sample variance.
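The .35 affiliation rule can be expressed as a small helper; the loading values below are illustrative toy numbers, not the published loadings.

```python
import pandas as pd

# Toy loading matrix for four items on two rotated factors.
loadings = pd.DataFrame(
    {"F1": [0.70, 0.39, 0.15, 0.41],
     "F2": [0.02, 0.15, 0.61, 0.45]},
    index=["item_7", "item_8", "item_3", "item_1"])

# An item affiliates with any factor on which |loading| >= .35;
# items crossing the threshold on two or more factors are complex.
affiliated = loadings.abs().ge(0.35)
complex_items = loadings.index[affiliated.sum(axis=1) > 1]

print(affiliated)
print("complex items:", list(complex_items))  # item_1 in this toy case
```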
Table 3 presents means and standard deviations for each of the items as
well as the loadings from a two-factor solution. The last column presents
the hypothesized affiliation of the item as either knowledge of cognition (K)
or regulation of cognition (R).
Neither Item 5, I learn best when I already know something about the
topic, nor Item 4, I know what the teacher expects me to learn, loaded in
the two-factor solution. These are also the two items flagged earlier on the basis of their communality estimates. In the same sample, for the two-factor solution,
Factor 1 as represented in Table 3 is composed of one knowledge of cogni-
tion item and five regulation of cognition items. Factor 2 is composed of
three knowledge items and one regulation item. Both Item 4 and Item 5 were
hypothesized to have been on the knowledge factor. Overall then, while not

TABLE 2
Item Loadings by Factor in Exploratory
Analysis for Jr. MAI, Version A
Factor Item numbers

Factor 1 7, 8, 9, 10
Factor 2 1, 3, 6
Factor 3 2, 11
Factor 4 4, 12
Factor 5 1, 5

Note. Bold indicates primary affiliation of complex items.

TABLE 3
Means, Standard Deviations, Factor Loadings for Two-Factor Solution,
and Construct Affiliation for Jr. MAI, Version A

Mean Standard deviation Factor 1 Factor 2 Affiliation

Item 1 2.36 .53 -.01 .41 K
Item 2 2.54 .63 .43 -.40 K
Item 3 2.22 .55 .15 .61 K
Item 4 2.71 .51 .26 .19 K
Item 5 2.41 .57 .29 .05 K
Item 6 1.80 .48 .18 .54 R
Item 7 1.96 .70 .70 .02 R
Item 8 2.33 .64 .39 .15 R
Item 9 2.21 .67 .59 .27 R
Item 10 2.06 .70 .57 .25 R
Item 11 2.73 .48 .51 -.21 R
Item 12 2.67 .55 .14 .70 K

all items loaded in the two-factor solution, the findings are still fairly consis-
tent with expectations. That is, Factor 1 represents primarily regulation of
cognition and Factor 2 better represents knowledge of cognition.
It was expected that the factors would be highly correlated, as was found
in the factor solution of the MAI and as is hypothesized in the literature. High correlations between the factors would suggest that the factor structure might shift somewhat in subsequent samples. To assess the
relationships among factors, the five-factor solution was further examined.
Scales were created from each of the factors. Items were only placed on the
scale with their primary affiliation to reduce inflation of the relationships.
That is, complex items were only included on one scale. As expected, there
were significant correlations between some of the factors. Significant correla-
tions were found among Factors 1, 2, and 4. A significant correlation was
also indicated between Factor 3 and Factor 5. While significant, these corre-
lations were much smaller than those found in a college sample with the
MAI and ranged between r = .19 and r = .24.
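A sketch of this scale-building step, using the Version A factor assignments from Table 2 and placing the complex Item 1 on Factor 2 (an assumption, since primary affiliations are marked only by bold print in the original table):

```python
import pandas as pd

# Items contributing to each factor scale (from Table 2); the complex
# Item 1 is included only once, on Factor 2, by assumption.
factor_items = {"F1": ["item_7", "item_8", "item_9", "item_10"],
                "F2": ["item_1", "item_3", "item_6"],
                "F3": ["item_2", "item_11"],
                "F4": ["item_4", "item_12"],
                "F5": ["item_5"]}

def factor_scale_correlations(responses: pd.DataFrame) -> pd.DataFrame:
    """Unit-weighted scale score per factor, then scale intercorrelations."""
    scales = pd.DataFrame({name: responses[cols].sum(axis=1)
                           for name, cols in factor_items.items()})
    return scales.corr()
```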
Grade level means and standard deviations are presented in Table 4. A

TABLE 4
Means and Standard Deviations by Grade Level
for Jr. MAI, Version A

M SD n
Grade 3 29.18 2.25 48
Grade 4 27.04 3.32 46
Grade 5 27.89 3.00 48
one-way ANOVA indicated significant differences between grade-level per-
formance on the overall inventory, F(2, 141) = 6.67, p < .05. Tukey HSD
post hoc analyses revealed that grade 3 outperformed grade 4. This is likely
due to administration differences. Teachers in grade 3, but not in grades 4
or 5, read the inventory aloud.
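The grade-level comparison follows the usual omnibus-then-post-hoc pattern; the sketch below reproduces it with scipy and statsmodels on scores simulated around the reported grade means, purely for illustration.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Scores simulated around the reported grade-level means and SDs.
rng = np.random.default_rng(1)
groups = {"grade 3": rng.normal(29.18, 2.25, 48),
          "grade 4": rng.normal(27.04, 3.32, 46),
          "grade 5": rng.normal(27.89, 3.00, 48)}

# Omnibus one-way ANOVA across the three grades.
f_stat, p_val = f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey HSD post hoc comparisons between all grade pairs.
scores = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(scores, labels))
```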
Version B
Two hundred participants in grades 6 through 9 completed the Jr. MAI
Version B. Four of these cases were discarded due to incomplete data. An exploratory factor analysis again revealed five factors, which together accounted for 55% of the sample variance.
Table 5 presents the item affiliation by factor. Several items in this sample
loaded on more than one factor and represent complex items. No items, how-
ever, loaded on more than two factors. Table 5 presents all of the items that
loaded on each factor as indicated by a .35 or greater correlation. Complex
items are represented by bold print for their primary affiliation. All items
loaded on at least one of the five factors.
Factor 1 was the least clear of the five factors and contained items representing both knowledge of cognition and regulation of cognition almost
equally. For Factor 2, two of the eight items represent knowledge of cogni-
tion items while the remaining six items are regulation of cognition items.
The two items that primarily load on Factor 3 include one knowledge of
cognition item and one regulation item. With Factor 4, both Items 5 and 12
represented knowledge of cognition while for Factor 5, two of the three items
represented knowledge of cognition, but Item 15 represented a regulation
of cognition item. Therefore, items generally loaded as expected with the
exception of Factor 1, which contained items developed to measure both
knowledge and regulation.
As with the younger sample, an exploratory analysis that forced two fac-
tors was also conducted. It was expected in a two-factor solution that Items
1, 2, 3, 4, 5, 12, 13, 14, and 16 would represent knowledge of cognition.

TABLE 5
Item Loadings by Factor Exploratory
Analyses for Jr. MAI, Version B
Factor Item numbers

Factor 1 3, 6, 7, 8, 9, 13, 14, 16, 17
Factor 2 4, 7, 8, 9, 10, 11, 14, 18
Factor 3 1, 11, 12
Factor 4 5, 12
Factor 5 2, 3, 15

Note. Bold indicates primary affiliation of complex items.

Six of these nine items loaded together in a two-factor solution. It was similarly
expected that Items 6–11, 15, 17, and 18 would represent a second factor.
These items loaded together in one factor in a two-factor solution. Therefore,
all items written to represent regulation of cognition did load on a single
factor. Interestingly, six of nine knowledge of cognition items loaded on this
factor as well. For the older learners both the five-factor and the two-factor
solution are quite interpretable. Table 6 presents the item level means and
standard deviations as well as factor loading from an exploratory two-factor
solution and hypothesized item affiliations in the final column.
Relationships among factors were again examined in this sample. The
same procedure was employed as with the younger sample. Findings indi-
cated higher correlations between factors in the older sample. All factors
were correlated with each other at the p < .01 level but none were more
strongly related. The highest relationship was found between Factor 1 and
Factor 2 (r = .61). All other correlations ranged from r = .23 and r = .42.
This finding was consistent with conceptions that knowledge and regulation
of cognition are related, while more similar to findings reported with the
MAI than findings from the younger sample, the relationship was less robust
than reported for college learners (Schraw & Dennison, 1994).
A one-way ANOVA was conducted to assess differences between over-
all responses by grade level and significant differences were indicated,

TABLE 6
Means, Standard Deviations, Factor Loadings for Two-Factor Solution,
and Construct Affiliation for Jr. MAI, Version B

M SD Factor 1 Factor 2 Affiliation

Item 1 4.13 .78 .29 .45 K
Item 2 3.90 .94 .37 .05 K
Item 3 3.62 1.04 .57 .01 K
Item 4 3.94 .88 .38 -.18 K
Item 5 4.32 .83 .32 .41 K
Item 6 2.65 .92 .51 -.03 R
Item 7 2.27 1.11 .60 -.48 R
Item 8 3.10 1.05 .63 -.08 R
Item 9 3.00 1.16 .66 -.40 R
Item 10 2.92 1.18 .58 -.31 R
Item 11 3.88 .93 .59 .31 R
Item 12 4.65 .68 .33 .71 K
Item 13 3.40 1.05 .68 .06 K
Item 14 3.38 .93 .75 .03 K
Item 15 3.82 1.09 .53 .25 R
Item 16 3.22 .98 .43 .09 K
Item 17 3.12 1.19 .58 -.04 R
Item 18 3.63 1.21 .58 -.08 R

TABLE 7
Means and Standard Deviations by Grade Level
on Jr. MAI, Version B

M SD n
Grade 6 62.33 8.77 76
Grade 7 59.90 11.51 48
Grade 8 65.51 8.64 45
Grade 9 67.00 10.48 27

F(3, 195) = 4.27, p < .05. Tukey HSD post hoc analyses indicated grades 8
and 9 outperformed grade 7. No other differences were indicated. Table 7
presents means and standard deviations by grade level.
The first experiment represented the initial phases of instrument development for two metacognitive inventories appropriate for use with grade school and middle school children as assessment and screening devices. Generally
items loaded as expected. Some factors did split and contain items from
both the knowledge of cognition and regulation of cognition constructs. This
finding is consistent with similar work with the MAI (Schraw & Dennison,
1994) and is likely due to the high correlation between knowledge and regu-
lation aspects of metacognition. Overall, all items did load on a factor in
the exploratory analyses. Neither instrument, therefore, was altered from the
original form found in Appendix A and Appendix B. The second experiment
further tested the factor structure of the instruments and examined the instru-
ments through an initial validity investigation.

EXPERIMENT 2
Method
Participants
Students enrolled in one of two schools in a rural school district participated in the study.
The first school’s total enrollment was 255 in grades 5 through 8. All of these children were
administered the materials by either their science (grades 5–6) or math (grades 7–8) teachers
during class time as part of normal class procedures. The second school had a total enrollment
of 405 students in grades pre-K through 8. All children in grades 3 through 8 participated in
the study as part of their class activities (n = 157). To facilitate comparisons across samples,
as in the first experiment, learners in third grade were read the instruments aloud, and any
learners in need of additional assistance to complete the inventories were assisted by the class-
room teacher or paraprofessional.
The instruments were administered in the math and science classes in grades 7 and 8 of
the second school for consistency with the subject pool of the first school. Since both these
schools are within the same district, the data for these two schools were combined for analyses
(total n = 416). Students in this sample were from a rural county. The number of children
receiving free or reduced lunch in the countywide district was just over 60%. As with the
Experiment 1 sample, a small minority population (approximately 1%) resided in the county.
Materials
Materials included a teacher information sheet, a teacher rating scale, an administration
record sheet, and the two versions of the Jr. MAI. Additional measures of metacognition
included the Strategic Problem Solving Inventory (Fortunato, Hecht, Tittle, & Alvarez, 1991),
the Metacomprehension Strategies Index (Schmitt, 1990), and the Index of Reading Awareness
(Jacobs & Paris, 1987). These instruments were chosen because each was published in conjunc-
tion with articles on measuring or promoting metacognition and were previously administered
to similar-age learners. In addition to the teacher ratings and the inventories, data were also
collected from subscales from a fall administration of the Stanford 9 (Stanford Achievement
Test Series, Ninth Edition, 1996). Demographic data collected included gender, grade level,
school, and teacher. Prior to administration, the inventory materials were piloted on eight
similar-age children to assure that the items were clearly presented and answerable. No changes
were made to the instruments based upon the pilot administration.
Teacher information sheet. Prior to rating students, teachers were provided a summary de-
scription of metacognition and characteristics of metacognitive learners to consistently define
metacognition to all teachers.
Teacher rating. The teacher rating instrument, developed for this study and included in
Appendix C, required teachers to rate each of their students on a scale of 1–6 for metacogni-
tion. Examples of student behaviors were provided for each point on the scale to assist teachers
in rating. For the third grade, two teachers made ratings; for the fourth and fifth grades,
one teacher made ratings; for the sixth and eighth grades, three teachers made ratings; and
for the seventh grade, four teachers made ratings. A total of eight teachers, therefore, made
ratings for the entire sample of students. The average ratings by teacher ranged from 3.5 to
4.0 on the 6-point scale. No significant differences by teacher were indicated in the ratings,
F(8, 356) = 1.47, p = .17.
Administration record. Teachers administered the inventories as part of classroom proce-
dures. To ensure consistency across classrooms and to document any anomalies, a report form was developed to record the length of time and date of each administration, any anomalies that occurred, and any questions asked by learners. Nothing worthy
of concern was indicated from the administration records and no data were removed from the
sample.
Each of the inventories was administered in the same sequence to all children. Descriptions
of the inventories follow in the order in which they were administered.
Jr. MAI. Two separate versions of the Jr. MAI were administered: Version A for grades 3–
5 and Version B for grades 6–8. In this study the internal consistency reliability of the Jr.
MAI was .76 for the younger and .82 for the older learners.
Strategic Problem Solving Inventory (Fortunato et al., 1991). The full 21-item inventory
was administered, but was slightly modified for this study. In the original inventory a 3-point
(yes, no, and maybe) response was used. In the current study the 3-point and the 5-point Likert
scales used for the two versions of the Jr. MAI were employed. The internal consistency
reliability of this instrument in this study was .82 for younger and .87 for older learners.
Metacomprehension Strategies Index (Schmitt, 1990). This 25-item multiple-choice inven-
tory required learners to identify behaviors they engage in while planning to read, monitoring
reading, and evaluating reading. Items are scored as correct or incorrect. From a metacognitive
perspective the inventory focused on regulation of cognition. The internal consistency reliabil-
ity for this instrument in this study was .60 for the younger and .57 for the older learners.
Index of Reading Awareness (Jacobs & Paris, 1987). The inventory was used with a slight
modification made to the directions for greater clarity. This 20-item multiple-choice inventory
required learners to rate activities they engage in while reading and again focused on regulation
of cognition. The internal consistency reliability for this instrument was .53 for the younger
and .38 for the older learners.
Stanford 9 (Harcourt Educational Measurement, 1996). The Stanford 9 combines multiple-
choice and open-ended assessment to measure students’ performance in several content areas.
The scores accessed for the current study included overall achievement, reading comprehen-
sion, and mathematical problem solving.
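The internal consistency values reported for these instruments are presumably Cronbach’s alpha, the conventional index; a self-contained implementation of the standard formula is sketched below.

```python
import numpy as np

def cronbach_alpha(responses) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    x = np.asarray(responses, dtype=float)
    k = x.shape[1]                         # number of items
    item_vars = x.var(axis=0, ddof=1)      # per-item variances
    total_var = x.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Tiny hypothetical 4-respondent, 3-item matrix for demonstration.
print(round(cronbach_alpha([[3, 2, 3], [2, 2, 2], [1, 1, 2], [3, 3, 3]]), 2))
```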

Results
The initial analyses were conducted to examine characteristics of the in-
ventories and the consistency of the factor structure of the Jr. MAI invento-
ries in a second sample. As in Experiment 1, exploratory factor analysis was employed to understand the underlying structure of the items; five-factor solutions emerged from both data sets, and two-factor solutions were also forced on the data. A varimax orthogonal rotation with a principal components extraction method was again implemented. All items were checked for normality and communality deviations. No items on either version in this sample warranted concern.
In the five-factor solution for Version A, approximately 61.8% of the sam-
ple variance was accounted for by the items. Table 8 presents the item affili-
ation by factor. Several items in this sample loaded on more than one factor
but no items loaded on more than two factors. Table 8 presents all of the
items that loaded at .35 or greater. Bold print illustrates primary affiliation
for complex items. Item 5 did not load on any of the five factors. For Factor
1, all items represented knowledge of cognition items. For Factor 2, two
items were knowledge of cognition items while two were regulation of cogni-
tion items. For Factor 3 and for Factor 4 all items were regulation of cogni-
tion items. Finally, Factor 5, represented by only Item 4, was classified as
a knowledge of cognition item. Therefore, except for Factor 2, all items loaded consistently with their hypothesized affiliation, though spread across more than two factors.
A two-factor exploratory analysis was also conducted. The two-factor so-
lution was also quite interpretable. Factor 1 was represented by five regula-
tion items and one knowledge item. This item was Item 4, which did not
load in the two-factor solution in Experiment 1 and represented a single
factor in the five-factor solution in this sample. Factor 2 included five knowl-

TABLE 8
Item Loadings by Factor in Exploratory
Analysis for Jr. MAI, Version A

Factor Item numbers


Factor 1 1, 2, 12
Factor 2 2, 3, 7, 11
Factor 3 6, 7, 8, 9
Factor 4 6, 8, 10, 11
Factor 5 4

Note. Bold indicates primary affiliation of complex items.

edge items and one regulation item. The regulation item was the only com-
plex item in the factor and actually also loaded on Factor 1 with the re-
maining regulation items. The two-factor solution accounted for 46% of the
sample variance. Both the five-factor and the two-factor solutions appear
to support distinctions between knowledge of cognition and regulation of
cognition. As expected, however, relationships among items were strong,
providing support for measurement of metacognition generally.
Table 9 presents means, standard deviations, and factor loadings from a
two-factor solution for the items in the second administration of the Jr. MAI,
Version A. The item means and standard deviations are consistent with the
first administration.
For Version B, an exploratory factor analysis again yielded a five-factor
solution. In this solution, 52% of the sample variance was accounted for by
the five factors. In contrast, 36% was accounted for by a subsequent explor-
atory two-factor solution. For the five-factor solution, Factor 1 represented
6 items, all of which are regulation of cognition items. Factor 5 represented
only knowledge of cognition. Factor 4 contained 3 knowledge of cognition
items and 1 regulation item. The regulation item on Factor 4, Item 11, was
a complex item and also loaded more strongly on Factor 2. Factor 3 split
and had 2 knowledge items and 1 regulation item. Factor 2 had both knowl-
edge (n = 3) and regulation (n = 5) items. Table 10 presents the item load-
ings by factor for the five-factor solution.
Again, a two-factor exploratory analysis was conducted. Factor 1 in the
two-factor solution contained 10 items. These were 7 of the 9 regulation
items on the scale and 3 additional knowledge items. These 3 knowledge
items all were complex items and also loaded on Factor 2. Factor 2 in this

TABLE 9
Means, Standard Deviations, and Factor Loadings for Items
of the Jr. MAI, Version A

M SD Factor 1 Factor 2

Item 1 2.38 .58 .09 .61
Item 2 2.54 .63 .13 .62
Item 3 2.33 .67 .17 .55
Item 4 2.59 .59 .44 .28
Item 5 2.42 .56 .07 .50
Item 6 1.74 .61 .57 -.02
Item 7 1.74 .61 .67 -.14
Item 8 2.21 .70 .64 .17
Item 9 2.04 .79 .51 .31
Item 10 2.06 .69 .42 .14
Item 11 2.57 .58 .38 .47
Item 12 2.55 .64 -.06 .68

TABLE 10
Item Loadings by Factor in Exploratory
Analysis for Jr. MAI, Version B

Factor Item numbers

Factor 1 7, 8, 9, 10, 17, 18
Factor 2 3, 6, 7, 8, 11, 13, 14, 15
Factor 3 6, 14, 16
Factor 4 1, 2, 4, 11
Factor 5 5, 12

Note. Bold indicates primary affiliation of complex items.

solution was represented by all of the knowledge items and 2 additional regulation items. Therefore, the items generally grouped as was expected in
both the five- and the two-factor solutions. Table 11 presents the means,
standard deviations, and factor loadings for the two-factor solution for the
Jr. MAI, Version B.
The second part of the analyses was conducted as an initial construct valid-
ity investigation of the Jr. MAI. Table 12 presents the means and standard
deviations by grade for all instruments. Table 13 presents correlations among

TABLE 11
Means, Standard Deviations, and Factor Loadings for Items
of the Junior MAI, Version B

M SD Factor 1 Factor 2

Item 1 4.28 .78 -.16 .52
Item 2 4.03 .91 .06 .47
Item 3 3.81 .99 .38 .43
Item 4 4.04 .89 .24 .45
Item 5 4.31 .88 -.02 .51
Item 6 2.19 1.03 .52 -.02
Item 7 2.38 1.12 .68 .08
Item 8 3.32 1.21 .63 .03
Item 9 3.11 1.18 .73 .06
Item 10 3.15 1.20 .62 .13
Item 11 4.16 .89 .32 .47
Item 12 4.73 .59 -.06 .44
Item 13 3.47 .99 .42 .44
Item 14 3.54 .99 .49 .44
Item 15 3.87 1.07 .32 .42
Item 16 3.28 1.08 .27 .35
Item 17 3.22 1.23 .51 .03
Item 18 3.11 1.18 .59 .24

TABLE 12
Means and Standard Deviations for Additional Dependent Measures

Values are M (SD, n).

Grade 3: Jr. MAI 27.81 (2.94, n = 36); Metacomprehension Strategies Index 9.58 (3.26, n = 33); Index of Reading Awareness 24.16 (4.39, n = 32); Strategic Problem Solving Inventory 45.64 (4.62, n = 33); Stanford 9 50.71 (14.60, n = 36); Teacher ratings 3.84 (1.64, n = 19)
Grade 4: Jr. MAI 25.15 (4.60, n = 33); Metacomprehension Strategies Index 8.28 (4.16, n = 29); Index of Reading Awareness 27.97 (4.22, n = 29); Strategic Problem Solving Inventory 40.15 (6.55, n = 33); Stanford 9 44.36 (18.53, n = 30); Teacher ratings 3.88 (1.90, n = 34)
Grade 5: Jr. MAI 27.85 (3.40, n = 66); Metacomprehension Strategies Index 9.45 (3.45, n = 60); Index of Reading Awareness 24.12 (5.43, n = 60); Strategic Problem Solving Inventory 44.61 (6.29, n = 64); Stanford 9 44.73 (14.85, n = 62); Teacher ratings 3.21 (1.29, n = 68)
Grade 6: Jr. MAI 64.33 (9.66, n = 99); Metacomprehension Strategies Index 10.01 (3.65, n = 93); Index of Reading Awareness 26.32 (4.11, n = 100); Strategic Problem Solving Inventory 67.81 (15.70, n = 97); Stanford 9 49.24 (15.84, n = 92); Teacher ratings 3.54 (1.43, n = 104)
Grade 7: Jr. MAI 64.56 (8.24, n = 83); Metacomprehension Strategies Index 10.93 (3.79, n = 69); Index of Reading Awareness 27.79 (3.87, n = 78); Strategic Problem Solving Inventory 70.71 (10.56, n = 79); Stanford 9 48.53 (14.13, n = 78); Teacher ratings 3.85 (1.22, n = 86)
Grade 8: Jr. MAI 64.63 (8.81, n = 82); Metacomprehension Strategies Index 11.07 (3.48, n = 67); Index of Reading Awareness 27.01 (3.82, n = 73); Strategic Problem Solving Inventory 69.10 (12.97, n = 79); Stanford 9 46.02 (14.05, n = 76); Teacher ratings 3.94 (1.20, n = 76)

TABLE 13
Correlations among Measures of Metacognition

                        Metacomprehension   Index of reading   Strategic problem    Teacher
                        index               awareness          solving inventory    ratings

Younger learners
Jr. MAI, Version A      .30*                .22                .72***               .21*
Older learners
Jr. MAI, Version B      .23**               .28**              .68***               .09

* p < .05.
** p < .01.
*** p < .001.

measures of metacognition employed in this study. For Version A, significant correlations were indicated between the Jr. MAI and the Strategic Problem Solving Inventory, between the Jr. MAI and the Metacomprehension Strategies Index, and between the Jr. MAI and teacher ratings of metacognition. The correlation between the Index of Reading Awareness and the Jr. MAI was not significant, likely due to the smaller sample size available for that comparison.
For Version B, the Jr. MAI was significantly correlated with all other
inventory measures of metacognition but was not correlated with Teacher
Ratings of Metacognition. Although many findings are significant, the large
number of participants should be considered when interpreting these find-
ings. Many of the correlations are not very strong. The strongest correlations
were found between the Jr. MAI and the Strategic Problem Solving Inven-
tory. This is likely due to the slightly more general nature of those two instru-
ments. The Metacomprehension Strategies Index and the Index of Reading
Awareness rely heavily on the reading context. In contrast, the Strategic
Problem Solving Inventory addresses general problem solving and the Jr.
MAI addresses academic learning more generally.
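Each cell in Table 13 is a bivariate correlation with an accompanying significance test (presumably Pearson’s r; the article does not name the coefficient). In Python this is one call per pair of measures; the paired totals below are hypothetical.

```python
from scipy.stats import pearsonr

# Hypothetical paired inventory totals for eight children.
jr_mai = [27, 30, 25, 33, 29, 26, 31, 28]
spsi = [44, 50, 41, 55, 47, 43, 52, 45]

r, p = pearsonr(jr_mai, spsi)
print(f"r = {r:.2f}, p = {p:.3f}")
```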
No gender differences were indicated in either version of the instrument, t(129) = 1.17, p = .24, for the younger learners and t(246) = .62, p = .43, for the older learners. Although no differences were found among the older grades, F(2, 136) = .03, p = .97, differences were found in the younger grades, F(2, 261) = 6.83, p < .001. Tukey post hoc comparisons indicated that grades 3 and 5 both differed from grade 4, but there were no differences between grades 3 and 5. Table 14 presents the means and standard deviations for these analyses.
The third part of the analyses addressed the overall grade-level differences
and the relationship of the Jr. MAI to achievement. To assess grade-level
changes overall, the first 12 items, common to both inventories, were com-
pared across grade levels. To do this, responses from the older learners were
transformed from a 5-point scale to a 3-point scale. Table 15 presents the

TABLE 14
Means and Standard Deviations by Grade for the
Jr. MAI, Versions A and B

M SD n
Version A
Grade 3 27.81 2.94 36
Grade 4 25.15 4.60 33
Grade 5 27.85 3.40 66
Version B
Grade 6 64.33 9.66 99
Grade 7 64.56 8.24 83
Grade 8 64.63 8.81 82

means and standard deviations for the first 12 items common to both versions
of the Jr. MAI. ANOVA indicated significant differences between groups,
F(5, 393) = 4.09, p < .001. Post hoc comparisons indicated differences between grades 3 and 6 and between grades 3 and 8, with third grade outperforming both sixth and eighth grades. Additional significant differences were indicated between grade 4 and grades 6 and 7. Grade 5 also differed significantly from all of the older grades, outperforming each of them. It was expected that there would be a consistent grade-level progression in scores on the Jr. MAI, and this expectation was the primary justification for a second version of the instrument for older learners. The expected differences were not indicated. Contrary to a priori beliefs, it may be that Version B is also appropriate for a younger sample.
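The article does not specify how the older learners’ 5-point responses were mapped onto the 3-point metric; a simple linear rescaling, sketched here, is one plausible choice and is offered only as an assumption.

```python
import numpy as np

def rescale_5_to_3(responses):
    """Linearly map responses from a 1-5 scale onto a 1-3 scale."""
    x = np.asarray(responses, dtype=float)
    return (x - 1) * 0.5 + 1  # 1 -> 1, 3 -> 2, 5 -> 3

print(rescale_5_to_3([1, 2, 3, 4, 5]))  # [1.  1.5 2.  2.5 3. ]
```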
The final analyses illustrated the relationships between the Jr. MAI and
standardized achievement measures. As discussed, previous work has not
consistently indicated relationships between these variables. It was expected
that in the older grades, metacognitive knowledge and regulation might be
related to better achievement. Table 16 presents the correlations between the
Jr. MAI and achievement scores.

TABLE 15
Means and Standard Deviations for Jr. MAI,
Version A, for all Grades

Mean Standard deviation


Grade 3 (n = 36) 27.81 2.94
Grade 4 (n = 33) 25.15 4.60
Grade 5 (n = 66) 27.85 3.40
Grade 6 (n = 99) 26.53 3.33
Grade 7 (n = 83) 26.69 2.98
Grade 8 (n = 82) 26.30 3.09

TABLE 16
Correlations between the Jr. MAI and Achievement Measures

                        Overall   Problem solving   Reading comprehension

Jr. MAI (Version A)     .18*      .17*              .20*
Jr. MAI (Version B)     -.03      -.08              -.00

* p < .05.

These correlations are rather low and indicate no meaningful relationship between achievement and metacognition in this sample.
Discussion
The results of Experiment 2 provide further information about the Jr. MAI.
The Jr. MAI inventories developed and presented here are based upon the
often-referenced Brown theoretical framework of metacognition (Brown,
1978). Both of the theoretical constructs of knowledge of cognition and regu-
lation of cognition were represented through exploratory factor analysis. In
this study, an initial construct examination of the Jr. MAI indicated statisti-
cally significant correlations with all other inventories of metacognition in
older learners and significant correlations with two other inventory measures
and teacher ratings of metacognition in the younger learners. Gender differ-
ences were not indicated and grade-level differences were reported.
Although Version A was significantly correlated with achievement, the correlations were rather modest and likely not meaningful. Correlations between Version B and achievement were very low and some were negative.
GENERAL DISCUSSION
The current research sought to add to the knowledge base regarding the
measurement of metacognitive skills for classroom assessment and to facili-
tate further development of theoretical models of self-regulation. Additional
goals of this research were to develop a new measure of general metacogni-
tion for use in grades 3–9 learning settings, to provide information as to the
strength of the instrument to represent metacognitive constructs, and to ad-
dress the relationship between achievement and metacognition.
Experiment 1 addressed the development of two versions of the Jr. MAI.
The development process included constructing items based on an existing
measure composed of two factors: Knowledge of cognition and regulation
of cognition. Experiment 2 further examined the inventories and also pre-
sented an initial validity investigation of the instruments. Exploratory factor
analyses were conducted in both experiments for both samples of learners.
Across both samples and both versions of the instrument these analyses dem-
onstrated that the preponderance of items loaded strongly on five factors that accounted for a large proportion of the sample variance. Further, there were many consistencies
in the factor structure across samples. As knowledge and regulation of cogni-
tion are related it was expected that some items might shift in their factor
affiliation. This is consistent with findings from the MAI as reported by
Schraw and Dennison (1994). It is promising that across both samples the
factors were quite interpretable and appear to measure both knowledge and
regulation of cognition. Given some shift, however, those who use the Jr.
MAI might best employ the inventories as a measure of knowledge and regu-
lation of cognition overall and not rely heavily on individual factor scores.
The Jr. MAI inventories are an important addition to research and instrumen-
tation regarding metacognition since the factor structure across samples indi-
cates that the Jr. MAI measures metacognition more broadly than existing
measures, which often focus solely on regulation components.
Experiment 2 examined the correlations between the Jr. MAI and other
metacognitive instruments, teacher ratings, and achievement scores. Find-
ings indicated high correlations between the Jr. MAI Versions A and B and the Strategic Problem Solving Inventory (Fortunato et al., 1991) (A: r = .72; B: r = .68); moderate correlations between both versions of the Jr. MAI and the Metacomprehension Strategies Index (Schmitt, 1990) (A: r = .30; B: r = .23); moderate correlations, significant only for Version B, between the Jr. MAI and the Index of Reading Awareness (Jacobs & Paris, 1987) (A: r = .22; B: r = .28); and low correlations, statistically significant only for Version A, between the Jr. MAI and teacher ratings of metacognition (A: r = .21; B: r = .09). The correlations between achievement scores and the Jr.
MAI were statistically significant, but low, for Version A and nonsignifi-
cant, and sometimes negative, for Version B. Overall, these results provide
valuable support for the construct validity of the Jr. MAI.
The relationships between the Jr. MAI, Version A, and other inventories
showed fewer significant correlations than did those for Version B. The dis-
parity may be due to instruction that emphasizes learning discrete knowledge
and regulation of strategies within specific contexts in the lower grades. In
a related sense, the two metacognitive reading measures were generally less
correlated with the Jr. MAI than was the Strategic Problem Solving Inventory
(Fortunato et al., 1991). It is likely that the reading context provided a more
specific reference point for the learners. The children may have deemed the
Jr. MAI and the Strategic Problem Solving Inventory as less specific.
Interestingly, the teacher rating measure was significantly correlated with
the Jr. MAI, Version A, but was the only measure not correlated with the
Jr. MAI, Version B. This finding may be due to the teachers of the younger
learners making their ratings of student metacognition based on the skills
they had observed within multiple content areas. In the older learners, the
teachers were restricted to observations of students in a specific subject area.
As such, the teacher rating measure provides validity to the notion that the
Jr. MAI measures metacognition broadly and across subject areas.
The relationship between metacognitive skills and achievement is compli-
cated and not well addressed in previous research. Findings are inconsistent.
Researchers strive to be able to measure metacognitive processes separate
from achievement. The appeal of this approach is that if we can measure
the constructs separately we can more clearly target metacognitive skills and
other self-regulation constructs. The contrasting view is that when promoting
general self-regulation and specifically metacognitive skills, such as monitor-
ing, researchers assume that an increase in these skills should lead to an
increase in academic achievement. That is, within more advanced learners,
metacognitive skills should drive academic achievement. If this hypothesized
relationship exists between metacognitive skills and processes and achieve-
ment, correlations between measures of metacognitive processes and
achievement should be less strong in the younger academic years than in
more advanced learners. The current work does not support this hypothesis.
The findings from the current study are, however, consistent with previous
research indicating small or nonsignificant relationships between metacognition
and aptitude and achievement (e.g., Allon, Gutkin, & Bruning, 1994).
The correlations between the Jr. MAI versions and achievement are gener-
ally low. They actually decrease in the older population. A favorable view of
this finding is that the Jr. MAI measures something other than achievement.
Swanson’s (1990) research would support this notion. It is more likely that
as learners age and gain more content-specific knowledge, strategic pro-
cesses also become more domain-specific. Hence, a more domain-general
measure of metacognitive processes loses its predictive power. Two studies of
the development and initial validation of a reading strategy and metacognition
measure for slightly older learners found higher correlations between that
measure and reading achievement than between the measure and other achievement
areas such as mathematics or study skills (Pereira-Laird & Deane, 1997). A
competing perspective on domain-specific versus domain-general metacognitive
processes is presented in Schraw's work (e.g., Schraw, 1997; Schraw &
Nietfeld, 1998).
Based on the findings from these two studies, the Jr. MAI appears to be
a reliable measure of metacognition and initial construct validation is promis-
ing. Further work examining the instruments is necessary. Such work should
include a stability analysis. In addition, the inventories should be tested for
generalizability in a more diverse sample. These are both weaknesses in the
current study. Additional work should also include longitudinal studies with
school-age learners. Studies that address the validity of the instruments with
additional measures of metacognitive processing, such as calibration tech-
niques, self-graphing, or perhaps interviews, are also needed.
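Calibration techniques compare learners' confidence judgments with their actual performance; the sketch below illustrates one common index (absolute accuracy) under hypothetical data, and is not a measure used in the present studies.

```python
# Minimal sketch of one common calibration index (absolute accuracy): the mean
# absolute difference between per-item confidence (0-1) and scored performance
# (1 = correct, 0 = incorrect). Data and scoring rule are illustrative only.
def calibration_error(confidence, correct):
    assert len(confidence) == len(correct)
    return sum(abs(c - k) for c, k in zip(confidence, correct)) / len(confidence)

confidence = [0.9, 0.6, 0.8, 0.3, 0.7]  # a child's per-item confidence judgments
correct = [1, 0, 1, 0, 1]               # the corresponding test outcomes

print(f"calibration error = {calibration_error(confidence, correct):.2f}")  # 0 = perfect
```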
The Jr. MAI can be used as a tool for classroom diagnosis and intervention
and for future construct development for those studying self-regulatory con-
structs. It is easily administered and scored. The data presented here provide
evidence for validity, and the general nature of the instrument makes it appropriate
for use in assessing ongoing interventions. The authors have already received
numerous requests from researchers and practitioners to use the instrument
in their work. In addition, researchers at the NASA Classroom of the Future
have used the instrument in four studies and have found it to be a valid
predictor of learning in on-line environments (Schwartz, Andersen, Howard,
Hong, & McGee, 1998), success in problem solving (Hong & Jonassen,
1999), effectiveness in cooperative learning (Howard, 1998), and higher
grade point averages and science-related attitudes (Howard, 1998).
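Because ease of scoring matters for classroom use, the following sketch illustrates one plausible scoring routine, assuming simple summation of item responses as the appended response scales suggest; the function and example protocol are hypothetical, not an official scoring key.

```python
# Minimal sketch: total a completed Jr. MAI protocol by summing item responses.
# Summation is an assumption suggested by the appended scales. Version A: 12
# items on a 1-3 scale (totals 12-36); Version B: 18 items on a 1-5 scale
# (totals 18-90). The example responses below are hypothetical.
def jr_mai_total(responses, n_items, scale_max):
    assert len(responses) == n_items
    assert all(1 <= r <= scale_max for r in responses)
    return sum(responses)

version_b = [4, 3, 5, 4, 5, 2, 3, 4, 4, 3, 5, 5, 3, 4, 4, 2, 3, 4]
print(jr_mai_total(version_b, n_items=18, scale_max=5))  # -> 67
```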
The two current studies further both theoretical and pragmatic work in
the area of self-regulatory constructs. Future research should examine means
of measuring metacognitive processing, as well as other self-regulatory
abilities, in young learners for both theory development and effective
instructional practice. It should also continue to focus on more clearly
understanding the relationship between metacognitive processing and
achievement.

APPENDIX A
Grade level: 3 4 5
We are interested in what learners do when they study. Please read the following sen-
tences and circle the answer that relates to you and the way you are when you are doing
school work or home work. Please answer as honestly as possible.
1 = Never 2 = Sometimes 3 = Always
1. I know when I understand something. Never Sometimes Always
2. I can make myself learn when I need to. Never Sometimes Always
3. I try to use ways of studying that have worked for me before. Never Sometimes Always
4. I know what the teacher expects me to learn. Never Sometimes Always
5. I learn best when I already know something about the topic. Never Sometimes Always
6. I draw pictures or diagrams to help me understand while learning. Never Sometimes Always
7. When I am done with my schoolwork, I ask myself if I learned what I wanted to learn. Never Sometimes Always
8. I think of several ways to solve a problem and then choose the best one. Never Sometimes Always
9. I think about what I need to learn before I start working. Never Sometimes Always
10. I ask myself how well I am doing while I am learning something new. Never Sometimes Always
11. I really pay attention to important information. Never Sometimes Always
12. I learn more when I am interested in the topic. Never Sometimes Always
APPENDIX B
Grade level: 6 7 8 9
We are interested in what learners do when they study. Please read the following sen-
tences and circle the answer that relates to you and the way you are when you are doing
school work or home work. Please answer as honestly as possible.
1 = Never 2 = Seldom 3 = Sometimes 4 = Often 5 = Always
1. I know when I understand something. 1 2 3 4 5
2. I can make myself learn when I need to. 1 2 3 4 5
3. I try to use ways of studying that have worked for me before. 1 2 3 4 5
4. I know what the teacher expects me to learn. 1 2 3 4 5
5. I learn best when I already know something about the topic. 1 2 3 4 5
6. I draw pictures or diagrams to help me understand while learning. 1 2 3 4 5
7. When I am done with my schoolwork, I ask myself if I learned what I wanted to learn. 1 2 3 4 5
8. I think of several ways to solve a problem and then choose the best one. 1 2 3 4 5
9. I think about what I need to learn before I start working. 1 2 3 4 5
10. I ask myself how well I am doing while I am learning something new. 1 2 3 4 5
11. I really pay attention to important information. 1 2 3 4 5
12. I learn more when I am interested in the topic. 1 2 3 4 5
13. I use my learning strengths to make up for my weaknesses. 1 2 3 4 5
14. I use different learning strategies depending on the task. 1 2 3 4 5
15. I occasionally check to make sure I’ll get my work done on time. 1 2 3 4 5
16. I sometimes use learning strategies without thinking. 1 2 3 4 5
17. I ask myself if there was an easier way to do things after I finish a task. 1 2 3 4 5
18. I decide what I need to get done before I start a task. 1 2 3 4 5

APPENDIX C
Teacher Rating of Student Metacognition
Metacognition refers to one’s thinking about thinking or one’s knowing about knowing. Stu-
dents who are HIGHLY metacognitive tend to exhibit cognitive behaviors that are different
from LOW metacognitive students. Listed below are several behavioral descriptors that would
distinguish students who are HIGH and LOW in metacognition.
HIGH Metacognition LOW Metacognition
1. Focuses attention 1. Attends randomly
2. Studies purposefully 2. Studies haphazardly
3. Makes study plans 3. Doesn’t plan much
4. Judges own performance accurately 4. Inaccurate about own performance
5. Asks questions to ensure understanding 5. Continues work without understanding
Using the following scale, rate each student in your class regarding your best judgment of
his or her level of metacognition.
6 = Very High Metacognition
5 = High Metacognition
4 = Above Average Metacognition
3 = Below Average Metacognition
2 = Low Metacognition
1 = Very Low Metacognition
Class:
Student Name Rating (1–6) Student Name Rating (1–6)
REFERENCES
Allon, M., Gutkin, T. B., & Bruning, R. (1994). The relationship between metacognition and
intelligence in normal adolescents: Some tentative but surprising findings. Psychology
in the Schools, 31, 93–96.
Baker, L., & Brown, A. (1984). Metacognitive skills and reading. In P. D. Pearson, M. Kamil,
R. Barr, & P. Mosenthal (Eds.), Handbook of reading research (pp. 353–394). New
York: Longman.
Brown, A. L. (1978). Knowing when, where, and how to remember: A problem of metacogni-
tion. Advances in Instructional Psychology, 1, 77–165.
Brownlee, S., Leventhal, H., & Leventhal, E. A. (2000). Regulation, self-regulation, and con-
struction of the self in the maintenance of physical health. In M. Boekaerts, P. R. Pin-
trich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 369–409). San Diego, CA:
Academic Press.
Cavanaugh, J. C., & Perlmutter, M. (1982). Metamemory: A critical examination. Child Devel-
opment, 53(1), 11–28.
Craig, M. T., & Yore, L. D. (1995). Middle school students’ metacognitive knowledge about
science reading and science text: An interview study. Reading Psychology: An Interna-
tional Quarterly, 16, 169–213.
Cross, D. R., & Paris, S. G. (1988). Developmental and instructional analyses of children’s
metacognition and reading comprehension. Journal of Educational Psychology, 80(2),
131–142.
De Corte, E., Verschaffel, L., & Op ’t Eynde, P. (2000). Self-regulation: A characteristic and
a goal of mathematics education. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.),
Handbook of self-regulation (pp. 687–722). San Diego, CA: Academic Press.
Delclos, V. R., & Harrington, C. (1991). Effects of strategy monitoring and proactive instruc-
tion on children’s problem-solving performance. Journal of Educational Psychology,
83(1), 35–42.
Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data. Psychological Review, 87(3),
215–251.
Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data. Cambridge,
MA: MIT Press.
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the
use of exploratory factor analysis in psychological research. Psychological Methods, 4(3),
272–299.
Feitler, F. C., & Hellekson, L. E. (1993). Active verbalization plus metacognitive awareness
yields positive achievement gains in at-risk first graders. Reading Research and Instruc-
tion, 33(1), 1–11.
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive devel-
opmental inquiry. American Psychologist, 34(10), 906–911.
Flavell, J. H., Miller, P. H., & Miller, S. A. (1993). Cognitive development (3rd ed.). Engle-
wood Cliffs, NJ: Prentice Hall.
Fortunato, I., Hecht, D., Tittle, C. K., & Alvarez, L. (1991). Metacognition and problem solv-
ing. Arithmetic Teacher, 39(4), 38–40.
Harcourt Brace Educational Measurement (1996). Stanford Achievement Test (9th ed.). San
Antonio, TX: Psychological Corporation.
Hong, N., & Jonassen, D. H. (April, 1999). Well-structured and ill-structured problem solving
in a multimedia simulation. Paper presented at the annual meeting of the American Educa-
tional Research Association, Montreal, Canada.
Howard, B. C. (1998). Metacognitive Awareness Inventories: NASA COTF Research Results.
Technical Report, NASA Classroom of the Future.
Howard, B. C., McGee, S., Hong, N., & Shia, R. (April, 2000). Student self-regulated learning
and scientific problem solving in computer-based learning environments. Paper
presented at the annual meeting of the American Educational Research Association, New
Orleans, LA.
Jacobs, J. E., & Paris, S. G. (1987). Children’s metacognition about reading: Issues in defini-
tion, measurement, and instruction. Educational Psychologist, 22(3&4), 235–278.
Manning, B. H. (1984). A self-communication structure for learning mathematics. School Sci-
ence and Mathematics, 84(1), 43–51.
Manning, B. H. (1991). Cognitive self-instruction for classroom processes. Albany, NY: State
University of New York Press.
Manning, B. H., Glasner, S. E., & Smith, E. R. (1996). The self-regulated learning aspect of
metacognition: A component of gifted education. Roeper Review, 18(3), 217–223.
Nelson, T. O., & Narens, L. (1996). In J. Metcalfe & A. P. Shimamura (Eds.), Metacognition:
Knowing about knowing. Cambridge, MA: MIT Press.
Newman, R. S. (1984a). Children’s achievement and self-regulation evaluations in mathemat-
ics: A longitudinal study. Journal of Educational Psychology, 76(5), 857–873.
Newman, R. S. (1984b). Children’s numerical skill and judgments of confidence in estimation.
Journal of Experimental Child Psychology, 37(1), 107–123.
Newman, R. S., & Wick, P. L. (1987). Effect of age, skill, and performance feedback on
children’s judgments of confidence. Journal of Educational Psychology, 79(2), 115–119.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on
mental processes. Psychological Review, 84(3), 231–259.
Pajares, F. (1996). Self-efficacy beliefs in academic settings. Review of Educational Research,
66(4), 543–578.
Palincsar, A. S., & Brown, A. L. (1984). Reciprocal teaching of comprehension-fostering and
comprehension-monitoring activities. Cognition and Instruction, 2, 117–175.
Paris, S., Cross, D. R., & Lipson, M. Y. (1984). Informed strategies for learning: A program
to improve children’s reading awareness and comprehension. Journal of Educational Psy-
chology, 76(6), 1239–1252.
Pereira-Laird, J. A., & Deane, F. P. (1997). Development and validation of a self-report mea-
sure of reading strategy use. Reading Psychology: An International Quarterly, 18, 185–
235.
Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). A manual for the use
of the Motivated Strategies for Learning Questionnaire (MSLQ). Ann Arbor, MI: University
of Michigan, National Center for Research to Improve Postsecondary Teaching and
Learning.
Pressley, M., & Afflerbach, P. (1995). Verbal protocols of reading: The nature of construc-
tively responsive reading. Hillsdale, NJ: Erlbaum.
Pressley, M., & Ghatala, E. S. (1989). Metacognitive benefits of taking a test for children and
young adolescents. Journal of Experimental Child Psychology, 47, 430–450.
Pressley, M., Levin, J. R., Ghatala, E. S., & Ahmad, M. (1987). Test monitoring in young
grade school children. Journal of Experimental Child Psychology, 43(1), 96–111.
Reid, R., & Harris, K. R. (1993). Self-monitoring of attention versus self-monitoring of perfor-
mance: Effects of attention and academic performance. Exceptional Children, 60(1), 29–
38.
Schmitt, M. C. (1990). A questionnaire to measure children’s awareness of strategic reading
processes. The Reading Teacher, 43(7), 454–459.
Schraw, G. (1997). The effect of generalized metacognitive knowledge on test performance
and confidence judgments. Journal of Experimental Education, 65(2), 135–146.
Schraw, G., & Dennison, R. S. (1994). Assessing metacognitive awareness. Contemporary Educa-
tional Psychology, 19, 460–475.
Schraw, G., & Nietfeld, J. (1998). A further test of the general monitoring skill hypothesis.
Journal of Educational Psychology, 90(2), 236–248.
Schraw, G., & Roedel, T. D. (1994). Test difficulty and judgment bias. Memory & Cognition,
22(1), 63–69.
Schwartz, N., Andersen, C., Howard, B. C., Hong, N., & McGee, S. M. (April, 1998). The
influence of configurational knowledge on children’s problem-solving performance in a
hypermedia environment. Paper presented at the annual meeting of the American Educa-
tional Research Association, San Diego, CA.
Swanson, H. L. (1990). Influence of metacognitive knowledge and aptitude on problem solv-
ing. Journal of Educational Psychology, 82(2), 306–314.
Tabachnick, B. G., & Fidell, L. S. (1996). Using multivariate statistics (3rd ed.). New York:
HarperCollins College.
Tobias, S., Everson, H., & Laitusis, V. (1999). Toward a performance-based measure of meta-
cognitive knowledge monitoring: Relationships with self-reports and behavior ratings.
Paper presented at the annual meeting of the American Educational Research Association,
Montreal, Canada.
Weinstein, C. E., Husman, J., & Dierking, D. R. (2000). Self-regulated interventions with a
focus on learning strategies. In M. Boekaerts, P. R. Pintrich, & M. Zeidner (Eds.), Hand-
book of self-regulation (pp. 728–744). San Diego, CA: Academic Press.
Weinstein, C. E., Schulte, A., & Palmer, D. (1987). Learning and study strategies inventory.
Clearwater, FL: H & H.
Winne, P. H., & Perry, N. E. (2000). Measuring self-regulated learning. In M. Boekaerts, P. R.
Pintrich, & M. Zeidner (Eds.), Handbook of self-regulation (pp. 532–564). San
Diego, CA: Academic Press.
Zimmerman, B. J., & Martinez-Pons, M. (1986). Development of a structured interview for
assessing student use of self-regulated learning strategies. American Educational Re-
search Journal, 23, 614–628.
Zimmerman, B. J., & Martinez-Pons, M. (1988). Construct validation of a strategy model of
self-regulated learning. Journal of Educational Psychology, 80, 284–290.
