
Student Strategies for Learning Programming from a Computational Environment

Margaret M. Recker and Peter Pirolli
Graduate School of Education
University of California
Berkeley, CA 94720, U.S.A.
E-mail: mimi@soe.berkeley.edu

Abstract
This paper discusses the design and evaluation of a hypertext-based environment that presents instructional material on programming in Lisp. The design of the environment was motivated by results from studies investigating students' strategies for knowledge acquisition. The effectiveness of the design was evaluated in a study that investigated how subjects used and learned from the instructional environment compared to subjects using more standard, structured, linear instruction. The results showed an ability by environment interaction: the higher ability subjects using the hypertext environment improved and made significantly fewer errors when programming new concepts, while the lower ability subjects did not improve and made more errors. Meanwhile, subjects using the control environment did not show this ability-based difference. These results have implications for the design of intelligent tutoring systems. They affect decisions involving the amount of learner control that is provided to students and the way student models are constructed.

1 Introduction
A long-standing debate in education that has implications for the design of intelligent tutoring systems (ITSs) is the amount of learner control provided to the student. Proponents of learning by exploration have argued that activities involving discovery or the personal construction of understanding are more effective than a didactic pedagogy. Underlying this argument is the assumption that students possess the appropriate motivation, strategies, and self-regulatory skills to effectively control their own learning. These assumptions form guiding principles in the design of exploratory learning environments and microworlds [6].
In: 1992 Proceedings of the International Conference on Intelligent Tutoring Systems, pp. 382-394. Berlin: Springer-Verlag.

However, there are reasons to question some of these assumptions. First, studies have found significant individual differences in the kinds of strategies that students use to explain instructional text and examples to themselves. Furthermore, these differences in self-explanations seem to affect subsequent problem solving performance [2, 9]. These results suggest that students are not equally able to effectively study instruction and may differ in their metacognitive abilities.
Second, several researchers have reported aptitude-treatment interactions in studies of different learning environments. For example, a review of many studies of CAI systems showed that a high degree of learner control proved to be more advantageous to higher ability students, while more structured environments seemed to best benefit lower ability students [15]. Others have suggested that while high ability subjects seemed to adapt to more complex, unstructured instructional environments, lower ability subjects may benefit most from highly structured and guided curricula [12, 14]. These results have implications for those designing ITSs. They affect design decisions involving the amount of learner control that is provided to students and the way student models are constructed.
In this paper, we report results from a study where students learned to program in Lisp. The study involved five lessons on programming, including recursion. Each lesson had two phases: (1) studying instructional material (knowledge acquisition), followed by (2) programming using the CMU Lisp Tutor (problem solving). For the target lesson, the lesson on recursion, two sets of computer-based instruction were developed. Subjects were randomly assigned to one of the two environments to learn about the concepts of recursion prior to programming recursion with the CMU Lisp Tutor [1]. The design of the first environment, the Explanation Environment, was motivated by studies investigating how students explain instructional materials to themselves [9]. The second set of instruction served primarily as a control condition to the Explanation Environment. While also computer-based, its structure and content mirrored more standard, linear instruction.
In light of the research discussed above, we expected to find differences in how subjects learned from the Explanation Environment, and expected that these differences would, in large part, reflect their ability. In addition, we expected to find interactions between subjects' abilities and the instructional environment they learned from. That is, we expected that higher ability students would be better able to manage the complexity of the hypertext-based Explanation Environment, whereas lower ability subjects would be more successful in the more structured control environment.
Overview of the paper. The next section briefly describes the learning environments used in the study. In the following section, the method used in the empirical study is described. We then report learning outcomes for the subjects in the two instructional conditions. These are examined in terms of subjects' verbal protocols and their interactions with the environments. We conclude with an examination of the learning strategies exhibited by subjects using the Explanation Environment.
2 Learning Environments
As previously mentioned, the study involved three learning environments. The first is the CMU Lisp Tutor, which has been described elsewhere [1]. The other two environments contained instructional materials on programming recursion in Lisp, and subjects used these to learn about the concepts of recursion prior to programming.
2.1 The Explanation Environment
The first instructional environment was called the Explanation Environment (EE). The environment contained instruction and examples explaining the topic of recursion. The environment was implemented within a hypertext environment to provide students the ability to make explicit links between text and examples, to allow a hierarchical structure in the presentation of the instructional material that is not available in linear media, and to provide students with the option of viewing as much instructional material as they felt was necessary.
The Explanation Environment was designed with the following five features: (1) a hierarchical structure in the presentation of instructional text, (2) the presence of explanatory elaborations embedded within examples, (3) the presentation of programming abstractions, (4) the ability to highlight new or unknown terms in the text, and (5) an on-line ability to save self-generated explanations. Each of these features is reviewed in more detail below.
Hierarchical Structure. Each instructional topic was viewed as a node in a hierarchical tree. As mentioned above, this was realized by implementing the system within a hypertext environment. In this tree, top-level nodes contained the most important information. As one descended the tree (via button selections), the instruction became progressively more detailed and specific. The top-level screens presented instruction on the structure of recursive functions, their evaluation, the design of functions, and heuristics for deriving the recursive relation.
Explanatory Elaborations. The topic of recursion was exemplified through a set of example Lisp functions. These examples were annotated with explanatory elaborations (accessed via mouse clicks), provided for subjects who may not have been able to generate them on their own. The elaborations explained how programming principles were implemented within a concrete model.
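To give a flavor of such annotated examples, here is a minimal sketch in the style of the instruction; the function and the elaboration comments are our illustration and are not reproduced from the actual environment:

    ;; Compute the factorial of n.
    (defun fact (n)
      (if (= n 0)
          1                       ; base case: 0! is defined to be 1
          (* n (fact (- n 1)))))  ; recursive relation: n! = n * (n-1)!

In the environment itself, elaborations such as the comment on the recursive line were displayed on demand, via a mouse click, rather than inline.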
Programming Abstractions. The Explanation Environment also contained a special instructional window that could display, at different levels of abstraction, the step-by-step design of a simple, tail-recursive program. This set of programming abstractions was essentially a set of declarative isomorphs of the new abstract production rules contained in the Lisp Tutor's lesson on recursion (not including specific code generation productions). They thus represented an abstract model of the skill involved in programming recursion [7]. Through interactions with the "abstraction" box, subjects could move down through the abstraction hierarchy until it bottomed out in an actual Lisp function. Each level of abstraction was accompanied by a short textual description of the goals, conditions, and situations which apply to the particular abstraction.
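As an illustration of where such a hierarchy might bottom out, the following is a hypothetical sketch of a simple tail-recursive Lisp function of the kind described; it is not taken from the environment:

    ;; Abstract level (schematically): "process the list one element at a
    ;; time, carrying the partial result forward in an accumulator."
    ;; Concrete level: an actual tail-recursive Lisp function.
    (defun sum-list (lst acc)
      (if (null lst)
          acc                                          ; base case: return the accumulated sum
          (sum-list (rest lst) (+ acc (first lst)))))  ; recursive call carries the updated sum

For example, (sum-list '(1 2 3) 0) evaluates to 6.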
Metacognitive Support. The environment offered support for the kinds of metacognitive reasoning that were shown to be important in prior self-explanation studies. Learners in this environment could highlight terms in the instruction that they did not understand. Highlighting new terms provided implicit monitoring of a learner's state of comprehension and marked potential explanation goals. These highlighted words were stored in a "New Words" window which was constantly displayed throughout the instruction. At any time, learners could select a word from the "New Words" window and type in their own definitions. This feature was motivated by findings that show the general superiority of self-generated elaborations over text-supplied elaborations [11].
Navigation. Substantial effort was made to address what is called the navigation problem. Many hypermedia environments suffer from the fact that their structure (e.g., the links and nodes) is so complex that users quickly get lost within the system [5]. Keeping track of one's location can add significantly to the cognitive overload of a learner [4], which may, in turn, affect the user's learning performance [16].
Users were provided with two navigational methods. The first, global navigation,
provided learners with a navigational map at the lower right-hand corner of the
screen. This map contained a series of vertically arranged buttons which represented
the layout of the top-level nodes in the instructional system. Each button represented
an instructional topic and users could access a topic by clicking on the appropriate
button with the mouse. The map was also used to show the user his or her current
location within the instruction and which topics had already been visited.
The second navigational method, local navigation, was implemented by providing
two buttons on each instructional screen that allowed the user to move to the next
and previous top-level instructional topics, respectively.
Figure 1 shows a sample interaction from the Explanation Environment. In this display, the learner has chosen to view instruction on the "Structure of Recursive Functions." The main point of the instruction is displayed in the box near the top of the screen. The learner then selected the "See Example" button, which caused an example to be displayed in the lower portion of the screen. Additionally, the learner requested an explanatory elaboration for the second line of the example, and it is displayed to the right of the Lisp code. Note the "New Words" window at the upper right of the screen. It currently contains one word, "Recursive." Also note the navigational map at the lower right of the screen.
2.2 The Control Environment
The second instructional environment served as a control condition in our experiment. Its structure mirrored more standard, linear instruction.1 The layout of the instruction in the CE was sequential, with text and examples located on separate screens. As in the EE, subjects moved between pages of instruction by clicking on buttons. The environment did not include the monitoring and metacognitive components and did not contain explicit instruction on abstractions. While the informational content of the two environments was intended to be the same, their structure was quite different.

1 The Control Environment was also computer-based in order to factor out differences due to speed and fatigue among users reading from a CRT.
Figure 1: An example screen from the Explanation Environment.

3 Method
Subjects. Sixteen college-aged subjects (nine women and seven men) participated in the study. They were recruited through an advertisement placed in the University of California, Berkeley, student newspaper. To be accepted into the study, a subject must have completed at least one semester of calculus and have had no or only minimal programming experience. For the last requirement, one semester of BASIC instruction was the maximum amount of programming experience permitted. The subjects were paid $5 per hour for participation in the study.
Introductory Phase. In the introductory phase, subjects proceeded through
four programming lessons. Each lesson had two parts: (1) reading new material from
an instructional booklet (knowledge acquisition), followed by (2) a programming
phase using the Tutor (problem solving). Subjects worked at their own pace through
the materials. Subjects' programming performance in the last introductory lesson
was used to determine an ability measure for each subject.
Target Phase. In the target phase of the study, the topic of recursion was
introduced. This lesson had the same structure as previous lessons except that,
in this lesson, the instructional material was computer-based and subjects were
randomly assigned to one of two environments (EE or the control). Prior to working
with these environments, subjects proceeded through an introductory phase that
introduced the instructional environments and the use of a mouse. In addition,
subjects in the EE condition received training on navigating a hypertext system. In
the programming part of the lesson, subjects solved twelve recursive programming
problems using the CMU Lisp Tutor.
Data Sources. Subjects were requested to provide think-aloud protocols as
they studied the instruction, and all activities were video-taped. In addition, the
EE and control environments collected detailed logs of subjects' interactions in terms
of the amount and kinds of mouse clicking activity and the number, order, and time
spent on each instructional screen. The Tutor also collected detailed logs of subjects'
solution traces.

4 Results
We begin by reviewing learning outcomes for subjects in the two instructional condi-
tions. We then turn to analyses of the verbal protocols that subjects generated while
using the environments, and we analyze them in conjunction with their interactions
with the environments.
4.1 Ability by Environment Interaction
We first examined subjects' overall programming performance in terms of the number of errors they made while programming. In contrasting the performance of subjects using the EE versus those using the control, we did not find any significant difference in outcome.

However, we were more interested in the impact that the two environments had on different ability groups. To address this issue, we divided subjects into two ability groups, based on a median split of subjects' errors on the programming lesson prior to recursion. Further, as the Lisp Tutor represents each programming opportunity in terms of a production rule, performance measures were collected at this level. Of particular interest was subjects' first opportunity for coding a new concept, since these opportunities are greatly influenced by the declarative knowledge extracted from instruction. Thus, on the very first trial of each new production instance in the recursion lesson, the mean number of errors was recorded for each subject.
Cast in this light, the results showed an interesting ability by environment interaction (an aptitude-treatment interaction, or ATI). When we examined subjects' performance in terms of the number of errors made when programming a new concept, we found that the higher ability subjects using the Explanation Environment made significantly fewer errors when programming new concepts, while the lower ability subjects made more errors on new concepts. Meanwhile, subjects using the control environment showed the opposite effect: the lower ability subjects made fewer errors while coding new concepts, while the higher ability subjects made more.

More specifically, we conducted an ANOVA with Ability (High ability or Low ability, based on a median split of subjects' errors on the programming lesson prior to recursion) by Instructional Environment (EE or Control) as the independent factors. The dependent measure was the mean number of errors on subjects' first opportunity for coding a new concept. The ANOVA had four subjects per cell (see Table 1).
                        Environment
    Ability         EE      Control    Mean
    High Ability    .23     .45        .37
    Low Ability     .75     .24        .49
    Mean            .52     .35        .43

Table 1: Mean errors on first opportunity for programming a new concept.
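One way to read the crossover concretely is as an interaction contrast computed from the cell means in Table 1:

    (EE High - EE Low) - (Control High - Control Low)
        = (.23 - .75) - (.45 - .24)
        = -.52 - .21
        = -.73

The nonzero contrast reflects the two environments pulling the ability groups in opposite directions.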
                        EE                   Control
    Ability         Prior   Recursion    Prior   Recursion    Mean
    High Ability    2.30    2.74         2.37    3.12         2.65
    Low Ability     3.80    5.87         4.27    3.62         4.39
    Mean            3.04    4.30         3.32    3.40         3.52

Table 2: Mean number of errors in the lesson prior to recursion and the recursion lesson.

Although there was neither a main effect of Ability, F(1, 12) = .638; p = .45, nor a main effect of Instructional Environment, F(1, 12) = 1.37; p = .27, there was a significant interaction of Ability by Instructional Environment, F(1, 12) = 4.96; p < .05. Note also that a linear contrast using the ANOVA showed that the difference between Low and High ability subjects learning from the EE was significant (t(12) = 2.48; p < .05). However, the difference was not significant for subjects in the control condition. These results suggest that the EE had a significant impact on subjects' learning when measured in terms of their prior ability. However, the control environment showed less of an ability-dependent effect.
A similar aptitude-treatment interaction was found when considering the relative improvement that subjects made between the lesson prior to recursion and the recursion lesson. A 3-factor repeated measures ANOVA was conducted where the factors were, as above, Ability (High Ability or Low Ability) by Instructional Environment (EE or Control). The repeated measures were subjects' average number of errors on the lesson prior to recursion (Prior) and the recursion lesson (Recursion). The only significant main effect was that of Ability, F(1, 12) = 11.36; p < .01. More interestingly, there was a significant 3-way interaction of Ability by Instructional Environment by repeated measure, F(1, 12) = 5.16; p < .05 (see Table 2).
These results suggest that the Explanation Environment was beneficial to those subjects who were already performing well in the early lessons. It seems that these subjects were able to take advantage of the structure of the environment and to be self-driven in extracting the important points of the instruction. As a result, they improved more and made fewer errors when programming new concepts. On the other hand, the lower performing subjects did not seem to be able to take advantage of the environment and, in fact, may have been overwhelmed by the amount of learner control provided.
In the following sections, we describe a scheme for coding verbal protocols and report results from analyses of subjects' protocols and their interactions with the environments in an attempt to explain and understand the observed aptitude-treatment interactions.

                                  Text                        Example
    Elaboration Type         Good    Poor   (p-value)   Good    Poor   (p-value)
    Domain                   3.37    1.12   (.05)      13.12    9.25   (.17)
    Monitor                  9.12    3.60   (.09)      14.25   12.50   (.34)
    Strategy                 0.87    0.25   (.11)       1.12    0.75   (.35)
    Navigation               1.50    3.50   (.06)        .62    1.37   (.05)
    Activity (total)         4.87    2.37   (.19)
    Reread                  10.10    3.80   (.15)       6.12    2.80   (.17)
    Ties (total)             2.00    0.75   (.08)
    Recursion Related (%)    0.89    0.73   (.09)
    Total                   27.12   12.50   (.09)      38.75   28.87   (.18)

Table 3: Mean number of elaborations per performance category (Good and Poor) in the top-level coding categories.
4.2 Verbal protocol coding scheme
The verbal protocols of the sixteen subjects were transcribed and segmented into pause-bounded utterances. An utterance was treated as an elaboration of the instruction if it was not a first reading of the text. These elaborations were then classified into a hierarchical typology of elaborations. This classification was designed to capture the important categories of self-explanation. The particular coding scheme used in the present study was based on one used in a previous study of self-explanation [10]. However, certain additions were required to account for subjects' utterances that concerned learning from a computational interface.
In brief, we identified seven top-level coding categories. Subjects could make elaborations about: (1) the domain of Lisp and recursion, (2) the activity of studying instruction, (3) the act of monitoring one's understanding, (4) the act of rereading a piece of instruction, (5) an explicit learning strategy, (6) navigation through the instruction, and (7) other (the residual category). These categories were further subdivided to capture important finer-grain distinctions. For example, we noted if a domain elaboration pertained to the topic of recursion (recursion related) and if it made a connection to previously read instruction (tie).
4.3 Summary of Elaboration types
We begin with a comparison of the number of elaboration types with results from
previous studies of self-explanation [2, 9]. In order to replicate previous studies
of self-explanation, subjects were divided into two performance groups, Good and
Poor, based on a post-hoc median split of the mean number of errors they made while
programming recursion. The split was made independent of instructional condition.
Table 3 shows the mean number of elaborations made by Good and Poor subjects for the various protocol categories. The categories were divided depending on whether subjects were processing textual information or examples. As can be seen, Good subjects made more elaborations in most categories. The differences were significant for the domain, monitor, ties, and recursion related categories. This suggests that Good subjects, regardless of instructional condition, focused on the domain of recursion, attempted to connect and integrate parts of the instruction, and exhibited effective metacognition. In general, these differences replicate previous findings on the difference between the self-explanations of Good and Poor subjects [2, 9].
However, the striking exception occurs in the navigation category. An example of a navigation statement is: "I guess I'll click on the example button." In this category, subjects who made more navigation elaborations also made more errors on the recursion lesson (F(1, 14) = 5.59, p = .03). This difference is also evident when examining the mean errors on the first opportunity for coding a new production. Subjects who made more navigation related statements also made significantly more errors on their first opportunity for coding a new concept (F(1, 14) = 10.78, p = .005).
These large differences in the navigation protocol category suggest that Poor subjects were more prone to be driven by features of the interface during the studying phase. That is, their self-explanation processes and decisions were determined by the buttons available on the screen. Good subjects, on the other hand, may have been more active and self-driven in setting their own learning goals. Similar results were reported in a study of student learning strategies when exploring a basic computer environment [18]. That study found that a class of students exhibited what was called cognitive dilettantism: these students moved around the environment at a rapid pace, without much reflection or systematicity. Likewise, the study found that such a strategy did not lead to productive learning.
4.4 Explanation Environment
In this section, we focus on how different subjects used the hypertext-based Explanation Environment, both in terms of their interactions and their verbal protocols. First, we note that most of the differences based on the Good/Poor split reported above still hold. Good subjects made more domain, monitor, and tie elaborations. This difference also holds for the navigation category. Poor performing subjects made significantly more navigation elaborations (F(1, 6) = 9.15, p < .05), showing evidence that they were very system-driven while processing the instruction. However, in the case of the Explanation Environment, a system-driven strategy is not very effective for managing a complex, distributed instructional environment.
Use of Explanation Environment. We found that subjects who exhibited better performance during the programming phase were also more active in their use of the environment. That is, their activity (measured in terms of the number of mouse clicks) was inversely correlated with the number of errors made while programming (t(6) = 1.65; p < .05). In addition, the better performing subjects tended to visit more hypertext screens, although the difference was not significant (t(6) = 1.25; p = .12).
The most productive time was spent on learning how to code functions and looking at example recursive functions: the better performing subjects spent significantly more time viewing the "design" screens and the screen containing the example code for a recursive function.
In general, all subjects showed a preference for viewing examples. At points in the instruction where subjects could either choose to view an example or more textual information, they chose an example 77% of the time. In fact, the importance of examples within instruction is a robust finding in the literature [8, 13, 17].

The metacognitive features in the EE were generally ignored by subjects. It is possible that subjects viewed these features as imposing a substantial additional cognitive load.
Finally, subjects preferred to browse the instruction in a serial fashion. The most frequently selected navigational method was the "Next" button, which accounted for 63% of all navigation. This result could be interpreted as a general preference by subjects for serial progression through instruction. If true, this preference could be interpreted as an argument against hypermedia or exploratory style instruction. A preference for serial progression when browsing through hypermedia and a general underuse of available navigational methods have been reported elsewhere. For example, [4] describe a system with eight navigation methods; in this system, the "Next Card" button was the most frequently used method.

5 Discussion
In this paper, we described an empirical evaluation of a hypertext-based learning environment, the Explanation Environment, which contained instruction on programming. The design of the environment was motivated by prior results that investigated students' strategies for knowledge acquisition. Note that in this study, the environment was used in a browsing mode, and thus these results may not generalize to hypertext learning environments that are used by students in an authoring mode.
The study involved analyzing verbal protocols of subjects as they explained to
themselves instructional materials contained in the environment prior to program-
ming. In addition, the environment collected a detailed log of subjects' trajectories
through the system. Learning success was then measured in terms of subsequent
programming performance. The study also involved contrasting subjects' learning
to a control group of subjects assigned to a more standard, linear environment.
When we contrasted the overall programming performance of subjects using the Explanation Environment to those in the control, we did not find any significant differences in outcome. This lack of difference in outcome has been reported in other studies which evaluated hypermedia as learning systems [4, 5]. However, we did find an interesting ability by environment interaction. In our study, higher ability subjects (as measured by their performance on earlier programming lessons) using the hypertext-based Explanation Environment improved and made significantly fewer errors when programming new concepts, while the lower ability subjects did not improve and made more errors on new concepts. Meanwhile, subjects in the control environment showed the opposite trend, though not significantly.
In examining the verbal protocols of subjects using the Explanation Environment with respect to their interactions with the environment, we have identified at least two classes of learning styles. The more successful learners, those who also exhibited programming success, were much more active and strategy-driven in their use of the environment. As evidenced by the kinds of verbal protocols generated, they focused on the more important instructional concepts and were more self-driven in attempting to understand the instruction. In sum, they seemed better able to take advantage of the greater degree of learner control provided in the Explanation Environment. As a consequence, they improved more and made fewer errors.
The less successful class of learners seemed very data-driven. Their actions appeared to be mostly driven by features and buttons present on the interface. Perhaps due to the added complexity of the environment and the resultant cognitive overload, the lower ability subjects were not able to take advantage of the non-standard environment; they were not able to construct coherent explanations of the new material and consequently made many more errors while programming.
The results show that even within the context of a simple computational interface, substantial individual differences are evident. In addition, the results highlight the delicate balance between offering students the chance to direct and manage their own learning and overwhelming and confusing other students who may lack the appropriate background knowledge or learning strategies. For students who may be struggling with basic concepts or who lack metacognitive awareness, the drawbacks of a higher degree of learner control and flexibility may outweigh the potential benefits.

6 Future Work
We are currently working on a computational model to simulate subjects' interactions with the Explanation Environment, implemented within the Soar architecture [3]. The general modelling strategy is to construct a set of productions for each subject that represents their background knowledge and their learning strategies. This set of productions for each subject is called the student profile. These profiles can then be run in conjunction with systems that simulate the different instructional environments, and the resulting learning (chunking) can then be analyzed.
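To suggest the flavor of such a student profile, the following is a purely hypothetical sketch of one strategy rule, written as a Lisp s-expression for readability rather than in actual Soar production syntax; the rule name and structure are our illustration, not the implemented model:

    ;; Hypothetical profile rule for an active, self-driven learner:
    ;; while studying an example, if a line has not yet been explained,
    ;; adopt the goal of elaborating that line before moving on.
    ;; (Illustrative only; not actual Soar code.)
    (defparameter *elaborate-unexplained-line*
      '(rule elaborate-unexplained-line
         (if   ((goal (study ?example))
                (line ?line ?example unexplained)))
         (then ((push-goal (elaborate ?line))))))

A profile for a more system-driven learner might instead contain a rule that simply selects whatever button is currently most salient on the screen.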
Such a model may lead us to a better understanding of the subtle interactions between students' learning strategies, their varying background knowledge, and their use of differing styles of instruction. As such, it may contribute to the design of instructional environments sensitive to the individual learning strategies and styles of students.

Acknowledgements
Portions of this research were funded by the National Science Foundation under
contract IRI-9001233 to Peter Pirolli, and by a University of California Regents'
Dissertation Fellowship to M. Recker. We would like to thank Steve Adams, Daniel
Berger, Kate Bielaczyc, Patti Schank, and members of the University of California,
Berkeley, CSM research group for useful comments on an earlier draft of this paper.
References
[1] J.R. Anderson and B.J. Reiser. The LISP Tutor. Byte, 10:159-175, 1985.
[2] M.T.H. Chi, M. Bassok, M.W. Lewis, P. Reimann, and R. Glaser. Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13:145-182, 1989.
[3] J. Laird, P. Rosenbloom, and A. Newell. Soar: An architecture for general intelligence. Artificial Intelligence, 33:1-64, 1987.
[4] T. Mayes, M. Kibby, and T. Anderson. Learning about learning from hypertext. In D. Jonassen and H. Mandl, editors, Designing Hypermedia for Learning, pages 227-250. Springer-Verlag, Berlin, 1990.
[5] J. Nielsen. Hypertext & Hypermedia. Academic Press, San Diego, CA, 1990.
[6] S. Papert. Mindstorms: Children, Computers, and Powerful Ideas. Basic Books, New York, 1980.
[7] P. Pirolli. A cognitive model and computer tutor for programming recursion. Human-Computer Interaction, 2:319-355, 1986.
[8] P. Pirolli and J.R. Anderson. The role of learning from examples in the acquisition of recursive programming skills. Canadian Journal of Psychology, 39(2):240-272, 1985.
[9] P. Pirolli and M. Recker. Knowledge construction and transfer using an intelligent tutoring system: The role of examples, self-explanation, practice, and reflection. Technical Report CSM-1, University of California, Berkeley, 1991.
[10] M. Recker and P. Pirolli. Self-explanation verbal protocols: A protocol coding scheme and representative protocols. Technical Report CSM-5, University of California, Berkeley, 1991.
[11] L. Reder, D. Charney, and K. Morgan. The role of elaborations in learning a skill from instructional text. Memory and Cognition, 14:64-78, 1986.
[12] B. Reiser, W. Copen, M. Ranney, A. Hamid, and D. Kimberg. Cognitive and motivational consequences of tutoring and discovery learning. Technical report, Cognitive Science Laboratory, Princeton University, 1991.
[13] B.H. Ross. Remindings and their effects in learning a cognitive skill. Cognitive Psychology, 16:371-416, 1984.
[14] R.E. Snow and D.F. Lohman. Toward a theory of cognitive aptitude for learning from instruction. Journal of Educational Psychology, 76:347-376, 1984.
[15] E. Steinberg. Cognition and learner control: A literature review, 1977-1988. Journal of Computer-Based Education, 16(4):117-121, 1989.
[16] J. Sweller. Cognitive load during problem solving: Effects on learning. Cognitive Science, 12:257-285, 1988.
[17] J. Sweller and G.A. Cooper. The use of worked examples as a substitute for problem solving in learning algebra. Cognition and Instruction, 7:1-39, 1985.
[18] M. Twidale. Cognitive agoraphobia and dilettantism: Issues for reactive learning environments. In L. Birnbaum, editor, Proceedings of the International Conference of the Learning Sciences. Association for the Advancement of Computing in Education, Charlottesville, VA, 1991.
