Abstract
The aim of this research is to develop an intelligent tutoring system (ITS) which
teaches students the causal relationships between the components of the circulatory
physiology system and the complex behavior of the negative feedback system that sta-
bilizes blood pressure. This system will accept natural language input from students
and generate limited natural language explanations. It contains rules that identify the
student's errors and build a "bug-based" student model. It uses tutoring rules to plan
each response based on its model of the student and the dialog history so that it can tai-
lor the dialog to fit the student's learning needs. The tutoring rule interpreter manages
the dialog and determines strategy and tactics to achieve its educational goals.
Since we assume that our students have already been taught the relevant domain
knowledge, our system is designed to help the students integrate their piece-by-piece
knowledge and correct their misconceptions by working through a set of predefined prob-
lems, which were selected to deal with physiological phenomena of particular importance
or difficulty.
1. Introduction
The subject area of the system that we are implementing is cardiovascular (CV)
physiology. CIRCSIM-TUTOR deals with the complex behavior of the cardiovascular
system, which incorporates a negative feedback system to stabilize blood pressure. This
feature is particularly difficult for the novice to understand.
The CV system consists of many mutually interacting components, and the student
must understand the input-output relationships for each individual component of the
system. A qualitative change in one component of the system can cause many other
components to change. A perturbation in the system will have a direct and immediate
effect on some components. Other components are only affected by neural reflexes called
into play by the negative feedback mechanism to stabilize blood pressure. The complex
behavior of the negative feedback system cannot be captured by simply understanding
the causal relationships between individual components. Hence, the students must learn
to integrate the piecemeal facts about the relationships between the components into
mental models of the operation of the overall system.
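The kind of integration required can be made concrete with a small qualitative-propagation sketch (in Python, for illustration only; the link signs and variable names below are simplified assumptions, not the system's actual knowledge base):

```python
# Each causal link maps (cause, effect) to a sign: +1 if the effect moves
# with the cause, -1 if it moves against it. These links are illustrative.
LINKS = {
    ("arterial_resistance", "mean_arterial_pressure"): +1,
    ("mean_arterial_pressure", "baroreceptor_firing"): +1,
    ("baroreceptor_firing", "heart_rate"): -1,  # the reflex opposes the change
}

def propagate(change, chain):
    """Push a qualitative change (+1, -1, or 0) along a causal chain."""
    for cause, effect in zip(chain, chain[1:]):
        change *= LINKS[(cause, effect)]
    return change

# A drop in arterial resistance lowers pressure, which lowers baroreceptor
# firing, which (via the negative link) raises heart rate: the feedback
# partially compensates for the original perturbation.
result = propagate(-1, ["arterial_resistance", "mean_arterial_pressure",
                        "baroreceptor_firing", "heart_rate"])
```

The point of the sketch is that no single link predicts the reflex behavior; only the product of signs along the whole chain does, which is exactly the integration the student must learn.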
For effective learning, the students must be able to use this knowledge reliably,
that is, without errors or ambiguities, and flexibly, so that they can apply it consis-
tently in diverse and unfamiliar situations (Ohlsson, 1987; Reif, 1985). This means that
the students must know how to use their knowledge to make correct predictions when a
perturbation is introduced. The blood pressure system, like any negative feedback sys-
tem, is very complex. Ordinarily, students have only limited opportunities to integrate
and use the knowledge they learn from lectures and textbooks. CIRCSIM-TUTOR is
designed to give them a chance to integrate and apply this knowledge in a number of
different situations that require explicit qualitative answers.
4. Implementation
4.1. Scenario. When the tutor poses a problem, the student enters a prediction in each
square in the Predictions Table displayed on the screen (see Figure 1), using a "+" sign
to represent an increase, a "-" to represent a decrease, and a "0" to indicate that the
variable will not change. The first column is used to predict the Direct Response of
each variable to the perturbation; the second is used to predict the changes produced
by the Reflex Response after the negative feedback system has been activated; and the
third to predict the Steady State.
    Parameters                 DR   RR   SS
    ----------------------     --   --   --
    Cardiac Contractility
    Right Atrial Pressure
    Stroke Volume
    Heart Rate
    Cardiac Output
    Arterial Resistance
    Mean Arterial Pressure

Figure 1. The Predictions Table
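One plausible way to represent a student's entries in this table is sketched below (Python for illustration; the variable abbreviations are our own shorthand, not necessarily the system's internal names):

```python
# The seven CV variables from Figure 1 (abbreviations are illustrative)
# and the three prediction stages.
VARIABLES = ["cc", "rap", "sv", "hr", "co", "ar", "map"]
STAGES = ["DR", "RR", "SS"]  # Direct Response, Reflex Response, Steady State

# '+' increase, '-' decrease, '0' no change, None not yet entered
table = {var: {stage: None for stage in STAGES} for var in VARIABLES}

# e.g. the student predicts arterial resistance falls in the Direct Response
table["ar"]["DR"] = "-"
```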
For example, the tutor may ask the student to predict the changes that occur when
the arterial resistance is decreased to 50% of normal. The student may predict all
the changes in the seven variables correctly, or may show multiple errors in the table.
Based on the student's errors, the tutor updates the student model and then builds a
lesson plan using the lesson planning rules. These rules set up tutorial goals starting
with the most serious misconception first. The tutor may ask a Socratic question to
achieve the first tutorial goal and evaluate the student's answer. Looking up the dialog
history record and the student model, it responds to that answer. It may provide an
informative explanation or merely give a hint. It may replan tutorial goals to include a
sub-goal or ask a question to better understand the student's misconceptions (Stevens
et al., 1982). When it asks a question, it can solve the problem in order to evaluate
the student's answer. If the tutor judges that the student does not understand the
experimental procedure or does not know more important and basic concepts, then it
must replan the lesson.
The dialog is organized (from the tutor's side at least) by the discourse rules which
determine tutorial strategy and tactics. Of course, the student may interrupt the tutor-
initiated dialog to ask some question of the tutor. The natural language understander
first analyzes the sentence and passes its semantic form to the tutor. The tutoring rules
then look at the student model and the dialog history and decide on the best instruc-
tional action in response to that question. Both the system's goal and the student's
response are recorded in the dialog history record at every step. If one tutorial goal is
achieved, the tutor selects the next goal and executes the same tutorial cycle. However,
if the student seems to achieve a goal which has not been selected from the goal list,
the tutor replans the lesson goals. When all the goals are satisfied, the tutor chooses
the next problem from the curriculum list.
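The tutorial cycle just described might be sketched schematically as follows (Python for illustration; the actual system is implemented in Prolog, and every function and parameter name here is a placeholder, not the system's interface):

```python
def run_lesson(goals, ask, replan):
    """Pursue tutorial goals in order of seriousness, recording each
    exchange. `ask(goal)` returns (achieved, model_changed); `replan`
    produces a fresh goal list when the student model has been updated."""
    history = []
    while goals:
        goal = goals.pop(0)          # most serious misconception first
        achieved, model_changed = ask(goal)
        history.append((goal, achieved))
        if model_changed:
            goals = replan(goals)    # goals may be added or dropped
    return history

# Toy run: every goal is achieved at once, so no replanning occurs.
log = run_lesson(["neural_control", "direct_effect"],
                 ask=lambda g: (True, False),
                 replan=lambda gs: gs)
```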
The system uses domain knowledge for several purposes: (1) to solve the problems
that it asks the student to solve; (2) to provide explanations to the student; (3) to
understand questions and answers generated by the student, and (4) to aid the tutor in
planning the tutoring discourse (Clancey, 1983). Hence, the knowledge base must be a
glass box rather than a black box and must be organized to serve in natural language
understanding and text generation as well as problem solving.
The domain knowledge of CIRCSIM-TUTOR is represented in two different forms,
frames for declarative knowledge and rules for procedural knowledge. Figure 2 shows an
abstract representation of the components of the cardiovascular system compiled by our
experts (Michael and Rovick), called a Concept Map. The frame knowledge base is built
as a two-level hierarchy: the lower level contains the concept map and the higher level
contains the entities to which the components in the concept map belong, such as the
nervous system and the heart. Our first step was to convert this graphic representation
into frames. Figure 3 shows a sample frame and Figure 4 shows some causal rules.
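To suggest the flavor of these two forms, a hedged sketch of a frame and a causal rule follows (Python for illustration; the slot names and rule contents are our assumptions, not the actual contents of Figures 3 and 4):

```python
# Declarative knowledge: a frame for one CV component. The slots shown
# (part_of, controlled_by, affects) are illustrative guesses.
heart_rate_frame = {
    "name": "heart_rate",
    "part_of": "heart",               # higher-level entity in the hierarchy
    "controlled_by": "nervous_system",
    "affects": ["cardiac_output"],
}

def causal_rule(var, change):
    """Procedural knowledge: if heart rate changes while stroke volume is
    held constant, cardiac output changes in the same direction."""
    if var == "heart_rate":
        return ("cardiac_output", change)
    return None
```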
Ordinary classroom teaching and tutoring have similar goals - helping students
learn domain knowledge and learn how to apply that knowledge in problem-solving
situations. However, the former puts more emphasis on organizing and delivering the
domain knowledge effectively while the latter focuses on providing adaptive instruction,
finding out what the student knows and does not know, and helping fill in the gaps and
correct the misconceptions (Wilkinson, 1984).
An intelligent tutor must have a student model that represents an estimate of the
student's current understanding of the domain knowledge in question. By using the
student model, the tutor can give adaptive explanations and hints and generate ques-
tions dynamically rather than presenting predefined questions during instruction. Also
the tutor can consult the student model when it determines the next topic (Van Lehn,
1988). It is evident that the more the tutor knows about the student, the more adaptive
instruction the tutor can provide. However, it is very difficult to access the mental states
that the student traverses in the problem solving process since only the keyboard inputs
are available. But the tutor can track the student's approximate reasoning process by
asking enough questions, as the LISP TUTOR does (Reiser et al., 1985).
Figure 2. A Concept Map of Domain Knowledge
The goal of our research on student modeling is to obtain more information about
the student's cognitive state from the keyboard input in order to provide more indi-
vidualized instruction. We are also experimenting with different tutoring rules that
determine how and when the controller uses and updates the student model and what
should be represented in that model.
4.3.1. Data for the Student Modeler. The student's keyboard entries in the Predic-
tions Table indicating qualitative changes of seven variables are our major data about
the student's level of understanding. As shown in Figure 1, the student is asked to
predict what qualitative changes would happen to seven CV variables in three separate
stages. Thus the student's intermediate causal knowledge at each step is available to
the tutor. Not only the student's answers about the causal changes of variables at each
step but also the order in which the answers are entered in the Predictions Table are
important sources of knowledge to the student modeler. Unfortunately, the sequence of
entries does not always correspond to the sequence of thought; some of the very best
students work the whole procedure out in their heads and fill in the predictions table
systematically from top to bottom.
4.3.2. Representation of Student Model. We assume that the student has already
been taught the subject matter. But even students who know individual relationships
may not have integrated knowledge about the domain. To diagnose the student's mis-
conceptions, CIRCSIM-TUTOR uses a set of carefully selected predefined procedures.
The model of a particular student is based on the student's entries in the predictions
table and answers to questions. The student modeler in the initial version of CIRCSIM-
TUTOR uses the student's errors to fire the student-modeling rules. These errors may
involve actual prediction values or relationships between predictions of variables that
affect each other. The rules also use the order in which entries are made in the Predic-
tions Table to try to trace the course of the student's reasoning process. The student
model consists of a set of terms in Prolog which have two different functors, namely,
donot_know and wrong_order. The general forms are:
(1) donot_know(Type, Variable, Count, Corrected).
    For example, donot_know(causality, sv, 1, yes).
(2) wrong_order(Variable, Count, Corrected).
    For example, wrong_order(ra, 1, no).
The student's errors are classified into three different types called "causality,"
"equation," and "concept." A count is associated with each error; if the student re-
peats an error, the count for that error is incremented by 1.
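This bookkeeping can be sketched as follows (Python for illustration; the actual model is stored as Prolog terms, and the field layout here is an assumption):

```python
def record_error(model, err_type, variable):
    """Increment the count for a repeated error, or add a new entry.
    Keys loosely mirror the donot_know functor described above."""
    key = ("donot_know", err_type, variable)
    if key in model:
        model[key]["count"] += 1
    else:
        model[key] = {"count": 1, "corrected": "no"}

model = {}
record_error(model, "causality", "sv")
record_error(model, "causality", "sv")   # repeated error: count becomes 2
```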
4.3.3. Student Modeler. The student modeler uses the modeling rules to build a
student model, based on the student's answers, the right answers, and a list of expected
misconceptions. Two examples of the modeling rules follow:
rule(1) :-
      student_ans(hr, increased),
      student_ans(sv, increased),
      student_ans(co, nochange),
      then, store(donot_know(equation, co, 1, no)).
rule(2) :-
      student_ans(co, X),
      correct_ans(co, Y),
      X \= Y,
      then, store(donot_know(causality, co, 1, no)).
After the error is remediated, the slot value of "corrected" is changed to "yes."
This module consists of a collection of tutoring rules that control the overall tutoring
discourse and determine what instructional action to perform next. Our main design
goal was to achieve context-dependent tutoring. Thus it is essential for the tutor to
maintain a coherent dialog while tailoring the discourse to fit the student's dynamically
changing needs.
Since the control mechanisms of traditional CAI systems are tightly coupled with
the domain knowledge, such systems are hard to adapt to new domains. One new approach
to avoiding this problem is the discourse management network that explicitly describes
the tutorial states with a fixed control mechanism to handle transitions between states
as in an augmented transition network (Woolf, 1984). As Murray points out (1988),
the major flaw in this organization is the difficulty in supporting lesson planning and
mixed-initiative interaction. We have tried to combine a discourse management network
with a planner modeled on Russell's (1988) IDE system.
In our system, it is very important to determine the order of remediation goals
in cases when the student shows evidence of multiple misconceptions. Hence, we have
domain-dependent rules to determine the sequence of teaching goals based on the stu-
dent's misconceptions, called lesson planning rules. We have another set of more generic
rules to control tutoring discourses and to handle situations where the student takes
the initiative.
Once the remediation discourse is complete, then the system comes back to the
main discourse rules and continues. If the student asks a question, the discourse rules
for the student's initiative are applied. Similarly, when the student's initiative is completed,
the main cycle resumes. The framework for managing tutoring discourse (main tutoring
process and remediation process only) appears in Figure 5 and some examples of explicit
rules appear in Figure 6. These rules are executed by a rule interpreter.
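A rule interpreter of this general kind can be sketched minimally (Python for illustration; the conditions and actions shown are invented examples, not the system's discourse rules):

```python
def interpret(rules, state):
    """Fire the first rule whose condition matches the current tutoring
    state; each rule pairs a condition with an instructional action."""
    for condition, action in rules:
        if condition(state):
            return action(state)
    return None

rules = [
    (lambda s: s.get("student_question"),   # student takes the initiative
     lambda s: "answer_student_question"),
    (lambda s: s.get("goal_achieved"),      # resume the main tutoring cycle
     lambda s: "select_next_goal"),
]

step = interpret(rules, {"student_question": True})
```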
Typical ITSs need three levels of instructional planning: curriculum planning, les-
son planning, and discourse planning (Murray, 1988). CIRCSIM-TUTOR's curriculum
is preplanned and stored as a set of experimental procedures, which deal with different
instructional objectives. However, it needs to build a lesson plan based on the student's
answers and it must be able to dynamically replan the lesson as the student model
is updated. New tutorial goals may be added and some of the lesson plans may be
omitted. The lesson planner fires all the lesson planning rules that match the current
tutoring environment. At the moment it is crucial that the rules are fired in the order
of the most important tutorial goals. Hence, the order of rules in the rule base must be
determined by our experts since it is completely a domain dependent matter. Here are
some examples of lesson planning rules.
rule 1: if the procedure variable is X
        and X affects Y directly
        and the student predicts the causal change in Y incorrectly
        then teach(object(direct_effect), variable(Y)).

rule 2: if the student does not know hr is under neural control
        and does not know cc is under neural control
        then teach(object(neural_control), variable(hr, cc)).
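The planner's fire-in-stored-order behavior can be sketched as follows (Python for illustration; the rule contents loosely echo the two examples above, and the student-model encoding is an assumption):

```python
def plan_lesson(planning_rules, student_model):
    """Fire every planning rule whose condition holds, preserving the
    rule-base order set by the domain experts."""
    goals = []
    for condition, goal in planning_rules:
        if condition(student_model):
            goals.append(goal)
    return goals

# The student model here is simply a set of diagnosed gaps.
planning_rules = [
    (lambda m: "hr_neural" in m and "cc_neural" in m,
     ("neural_control", ("hr", "cc"))),
    (lambda m: "direct_effect_y" in m,
     ("direct_effect", "y")),
]

goals = plan_lesson(planning_rules, {"hr_neural", "cc_neural"})
```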
Summary
This research represents a first step at moving from a useful but not really intelligent
CAI program, CIRCSIM, to an ITS, CIRCSIM-TUTOR. Its educational objective is to
tutor the complex causal relationships of the cardiovascular negative feedback system.
It has two sets of tutoring rules, discourse rules and lesson planning rules, which are
interpreted by the system planner. Using these rules the tutor can plan and replan in
response to the changes in the student model and organize the discourse to serve the
goals of the tutor and respond to the student's errors and questions dynamically. All
these rules are explicitly stated so that they can easily be changed by the expert and so
that tutoring actions can be explained to the student. The tutor also has some simple
natural language capabilities, which we are working to expand.