You are on page 1of 16

Articles

Serious Games Get Smart:


Intelligent Game-Based
Learning Environments

James C. Lester, Eun Y. Ha, Seung Y. Lee, Bradford W. Mott,


Jonathan P. Rowe, Jennifer L. Sabourin

R
I Intelligent game-based learning envi- ecent years have seen a growing recognition of the
ronments integrate commercial game transformative potential of game-based learning envi-
technologies with AI methods from ronments. The burgeoning field of game-based learn-
intelligent tutoring systems and intelli- ing has made significant advances, including theoretical
gent narrative technologies. This article developments (Gee 2007), as well as the creation of game-
introduces the Crystal Island intelligent
based learning environments for a broad range of K–12 sub-
game-based learning environment,
which has been under development in
jects (Habgood and Ainsworth 2011; Ketelhut et al. 2010;
the authors’ laboratory for the past sev- Warren, Dondlinger, and Barab 2008) and training objectives
en years. After presenting Crystal (Johnson 2010; Kim et al. 2009). Of particular note are the
Island, the principal technical problems results of recent empirical studies demonstrating that in addi-
of intelligent game-based learning envi- tion to game-based learning environments’ potential for
ronments are discussed: narrative-cen- motivation, they can enable students to achieve learning
tered tutorial planning, student affect gains in controlled laboratory settings (Habgood and
recognition, student knowledge model- Ainsworth 2011) as well as classroom settings (Ketelhut et al.
ing, and student goal recognition. Solu-
2010).
tions to these problems are illustrated
Motivated by the goal of creating game-based learning
with research conducted with the Crys-
tal Island learning environment experiences that are personalized to individual students, we
have been investigating intelligent game-based learning
environments that leverage commercial game technologies
and AI techniques from intelligent tutoring systems (D’Mel-
lo and Graesser 2010; Feng, Heffernan, and Koedinger 2009;
VanLehn 2006) and intelligent narrative technologies (Lim
et al. 2012; McCoy et al. 2011; Si, Marsella, and Pynadath
2009; Yu and Riedl 2012). This work ranges from research on
real-time narrative planning and goal recognition to affective
computing models for recognizing student emotional states.
Intelligent game-based learning environments serve as an
excellent laboratory for investigating AI techniques because
they make significant inferential demands and play out in
complex interactive scenarios.
In this article we introduce intelligent game-based learning

Copyright © 2013, Association for the Advancement of Artificial Intelligence. All rights reserved. ISSN 0738-4602 WINTER 2013 31
Articles

Figure 1. Crystal Island Narrative-Centered Learning Environment.

environments by presenting Crystal Island, an intel- tigating a range of artificial intelligence technologies
ligent game-based learning environment for middle for dynamically supporting students’ learning expe-
grade science education (Rowe et al. 2011). Crystal riences. This includes work on narrative-centered
Island has been under continual development tutorial planning (Lee, Mott, and Lester 2012; Mott
through a series of learning technology investiga- and Lester 2006), student knowledge modeling
tions and laboratory and classroom studies over the (Rowe and Lester 2010), student goal recognition (Ha
past seven years. We discuss technical problems that et al. 2011), and affect recognition models (Sabourin,
we have investigated in the context of Crystal Island, Mott, and Lester 2011). The environment has also
including narrative-centered tutorial planning, stu- been the subject of extensive empirical investigations
dent affect recognition, student modeling, and stu- of student learning and presence (Rowe et al. 2011),
dent goal recognition. We conclude with a discussion with results informing the design and revision of suc-
of educational impacts of intelligent game-based cessive iterations of the system.
learning environments, and future directions for the Crystal Island features a science mystery where stu-
field. dents attempt to discover the identity and source of
an infectious disease that is plaguing a research team
The Crystal Island stationed on a remote island. Students adopt the role
of a medical field agent who has been assigned to
Learning Environment investigate the illness and save the research team
Crystal Island (figure 1) is a narrative-centered learn- from the outbreak. Students explore the research
ing environment that was originally built on Valve camp from a first-person viewpoint and manipulate
Software’s Source engine, the three-dimensional (3- virtual objects, converse with characters, and use lab
D) game platform for Half-Life 2, and now runs on equipment and other resources to solve the mystery.
the Unity cross-platform game engine from Unity The mystery is solved when students complete a
Technologies. The curriculum underlying Crystal series of partially ordered goals that involve uncover-
Island’s mystery narrative is derived from the North ing details about the spreading infection, testing
Carolina state standard course of study for eighth- potential transmission sources of the disease, record-
grade microbiology. The environment is designed as ing a diagnosis and treatment plan, and presenting
a supplement to classroom instruction, and it blends the findings to the camp nurse.
elements of both inquiry learning and direct instruc- The following scenario illustrates a typical interac-
tion. Crystal Island has served as a platform for inves- tion with Crystal Island. The scenario begins with the

32 AI MAGAZINE
Articles

Figure 2. Camp Nurse and Sick Patient in Virtual Infirmary.

player-character disembarking from her boat shortly addition to learning about the sick team members, the
after arriving on the island. After completing a brief student can walk to the camp’s living quarters to con-
tutorial that introduces the game’s controls, the play- verse with a pair of virtual scientists who answer ques-
er leaves the dock to approach the research camp. tions about viruses and bacteria. The student can also
Near the camp entrance, the student encounters an learn more about pathogens by viewing posters hang-
infirmary with several sick patients and a camp ing inside of the camp’s buildings or reading books
nurse. Upon entering the infirmary, the student located in a small library.
approaches the nurse and initiates a conversation Beyond gathering information about the disease
(figure 2). The nurse explains that an unidentified ill- from virtual scientists and other instructional
ness is spreading through the camp and asks for the resources, the student can test potential transmission
player’s help in determining a diagnosis. She advises sources using the laboratory’s testing equipment. For
the student to use an in-game diagnosis worksheet in example, the student encounters several food items
order to record her findings, hypotheses, and final that have been lying out in the dining hall, and she
diagnosis (figure 3). This worksheet is designed to can test the items for infectious agents at any point
scaffold the student’s problem-solving process, as during the learning interaction.
well as provide a space for the student to offload any After running several tests, the student discovers
findings gathered about the illness. The conversation that the sick team members have consumed milk that
with the nurse takes place through a combination of is contaminated with a bacterial agent. The student
multimodal character dialogue — spoken language, can use the camp’s books and posters in order to inves-
gesture, facial expression, and text — and player dia- tigate bacterial diseases that are associated with symp-
logue menu selections. toms matching those reported by the sick team mem-
After speaking with the nurse, the student has sev- bers. Once she has narrowed down a diagnosis and
eral options for investigating the illness. The student recommended treatment, the student returns to the
can talk to sick patients lying on medical cots in order infirmary in order to inform the camp nurse. If the stu-
to gather information about the team members’ symp- dent’s diagnosis is incorrect, the nurse identifies the
toms and recent eating habits. Alternatively, the stu- error and recommends that the player keep working.
dent can move to the camp’s dining hall to speak with The student can use this feedback to roughly deter-
the camp cook. The cook describes the types of food mine how close she is to solving the mystery. When
that the team has recently eaten and provides clues the student correctly diagnoses the illness and speci-
about which items warrant closer investigation. In fies an appropriate treatment, the mystery is solved.

WINTER 2013 33
Articles

Figure 3. Diagnosis Worksheet Where Students Record their Findings.

Decision-Theoretic (DDN) (Dean and Kanazawa 1989) to model narrative


objectives, storyworld state, and user state in order to
Narrative Planning inform run-time adaptations of Crystal Island’s sci-
In order to perform real-time adaptive personaliza- ence mystery. A DDN-based approach to narrative
tion of interactive narratives in game-based learning planning provides a principled decision-making
environments like Crystal Island, director agents (or framework for reasoning about the multiple sources
drama managers) plan unfolding narratives by oper- of evidence that inform narrative-centered tutorial
ating on at least two distinct but interacting levels: planning. At each game clock cycle, U-Director sys-
they craft the global story arc, typically by traversing tematically evaluates available evidence, updates its
a plot graph that encodes a partial order of significant beliefs, and selects the storyworld action that maxi-
events in the narrative, and they plan virtual charac- mizes expected narrative-tutorial utility.
ter behaviors and physical events in the storyworld. Because narrative is fundamentally a time-based
This task, known as narrative-centered tutorial plan- phenomenon, in each decision cycle U-Director con-
ning, blends aspects of interactive narrative planning siders candidate narrative actions to project forward
and tutorial planning in a single problem. Interactive in time the effects of the actions being taken and
narrative planning encompasses a number of chal- their consequent effects on the user (Mott and Lester
lenges, such as balancing character believability and 2006). To do so, it evaluates its narrative objectives in
plot coherence in the presence of unpredictable user light of the current storyworld state and user state.
actions (Riedl and Young 2004), or providing rich Each decision cycle considers three distinct time
character behaviors that are consistent with authori- slices (narrative statet, narrative statet+1, and narrative
al objectives (Mateas and Stern 2005). The task’s tuto- statet+2), each of which consists of interconnected
rial planning component requires that the director subnetworks containing chance nodes in the DDN
agent guide students’ cognitive, affective, and (figure 4). The three slices represent (1) the current
metacognitive processes to promote effective learn- narrative state, (2) the narrative state after the direc-
ing outcomes. tor agent’s decision, and (3) the narrative state after
To address these requirements for Crystal Island, the user’s next action. The DDN’s director action is a
we designed and implemented U-Director (Mott and decision node, the DDN’s user action is a chance
Lester 2006), a decision-theoretic narrative planning node, and utilityt+2 is a utility node in the DDN. Each
architecture that uses a dynamic decision network time slice encodes a probabilistic representation of

34 AI MAGAZINE
Articles

Narrative Statet Narrative Statet+1 Narrative Statet+2


Narrative Narrative Narrative
Objectivest Objectivest+1 Objectivest+2
Plot Progresst Plot Progresst+1 Plot Progresst+2

Narrative Flowt Narrative Flowt+1 Narrative Flowt+2

Storyworld Storyworld Storyworld


Statet Statet+1 Statet+2
Plot Focust Plot Focust+1 Plot Focust+2
Utilityt+2
Physical Statet Physical Statet+1 Physical Statet+2

User Statet User Statet+1 User Statet+2


Goalst Goalst+1 Goalst+2

Beliefst Beliefst+1 Beliefst+2


Chance Node
Experiential Statet Experiential Statet+1 Experiential Statet+2
Decision Node

Director User Sub-network


Action Action Utility Node

t t +1 t +2

Figure 4. High-Level Illustration of Dynamic Decision Network for Interactive Narrative Director Agent.

the director’s beliefs about the overall state of the nar- lated users (three cooperative users who typically fol-
rative, represented with narrative-centered knowl- lowed the agent’s guidance and three uncooperative
edge sources. users who typically did not follow the agent’s guid-
U-Director begins its computation by considering ance) interacted with the Crystal Island director agent
narrative statet, which represents the model’s current to create six different narrative experiences. Traces of
beliefs about the unfolding story’s narrative objec- the director agent’s decision making were analyzed to
tives, storyworld state, and user state (Mott and see if the agent took appropriate action to guide users
Lester 2006). Links from the director action node to through the narrative. As was desired, the analysis
the narrative statet+1 node model candidate director revealed that the director agent adopted a hint-cen-
actions and how they affect the story. Example direc- tered approach for cooperative users and a more
tor actions include instructing nonplayer characters heavy-handed approach for uncooperative users. The
to perform actions in the environment or guiding simulated user approach offers a promising means for
students through the story by providing appropriate- establishing baseline performance prior to conducting
ly leveled hints. U-Director constrains the number of extensive focus group studies with human users.
candidate director actions to evaluate at each time In addition to formalisms based on dynamic deci-
step using abstract director actions, thereby limiting sion networks, we have also experimented with
the number of concrete actions considered. Next, it empirically based models of narrative-centered tuto-
models how the possible worlds encoded in narrative rial planning using a Wizard-of-Oz framework,
statet+1 influence the user’s action and how the user’s where human wizards serve as expert narrative-cen-
action in turn affects the story in narrative statet+2 tered tutorial planners (Lee, Mott, and Lester 2012).
Finally, using links from narrative statet+2 to the utili- In this approach, trained wizards interact with stu-
ty node utilityt+2, U-Director models preferences over dents who are attempting to solve Crystal Island’s
potential narrative states. Preferences provide a rep- science mystery. The wizards perform tutorial and
resentation in which authors specify the relative narrative planning functionalities by controlling a
importance of salient features of the narrative state. virtual character and guiding students through the
To gauge the overall effectiveness of U-Director’s game-based learning environment. Detailed trace
narrative planning capabilities a simulated user data from these wizard-student interactions is col-
approach was taken (Mott and Lester 2006). Six simu- lected by the virtual environment — including all

WINTER 2013 35
Articles

wizard decision-making, navigation, and manipula- the next tutorial action. The model has full control of
tion activities — in order to generate a training cor- the tutorial intervention decisions (that is, determin-
pus for inducing narrative-centered tutorial planning ing when to intervene) and tutorial action decisions
models using supervised machine-learning tech- (that is, determining what the intervention should
niques. be). The full guidance tutorial planning model
To explore the run-time effectiveness of the employs the entire set of narrative-centered tutorial
machine-learned narrative-centered tutorial plan- planning decisions available to it. This list is
ning models, induced intervention and action mod- described in Lee, Mott, and Lester (2011).
els were integrated into Crystal Island, where the In an experiment comparing the three narrative-
models determine when is the most appropriate time centered tutorial planning models, a total of 150
to intervene and what is the most appropriate next eighth-grade students used Crystal Island and com-
narrative-tutorial action. This extended version of pleted the pre- and posttest measures (Lee, Mott, and
Crystal Island was identical to the Wizard-of-Oz- Lester 2012). The students were drawn from a subur-
based version, except a narrative-centered tutorial ban middle school. While the study was held during
planner rather than a human wizard drove nonplay- the school day, groups of students were pulled out of
er character interactions. The planning models class to play the game in a laboratory setting. The pre-
actively observe students’ activities and dynamically and posttests consisted of the same multiple-choice
guide students by directing characters in the virtual questions about Crystal Island’s microbiology cur-
storyworld. riculum. Students completed the pretest several days
Three experimental conditions were created to prior to using Crystal Island, and they completed the
evaluate the effectiveness of the induced narrative- posttest immediately after solving the science mys-
centered tutorial planner: minimal guidance, inter- tery. In all three conditions, the tutorial planning
mediate guidance, and full guidance. Outcomes of models’ actions focused on introducing narrative
the three conditions were compared to determine the events, or providing problem-solving advice to stu-
effectiveness of utilizing our machine-learned mod- dents. The tutorial planner did not perform actions
els. involving direct instruction of curricular content.
Minimal Guidance: Students experience the story- An investigation of overall learning found that stu-
world under the guidance of a minimal narrative- dents who interacted with Crystal Island achieved
centered tutorial planning model. This model con- positive learning gains on the curriculum test. A two-
trols events that are required to be performed by the tailed matched pairs t-test between posttest and
system (that is, students cannot complete the events pretest scores indicates that the learning gains were
without the system taking action). The model in this significant, t(149) = 2.03, p < .05. It was also found
condition is not machine learned. It performs an that students’ Crystal Island interactions in the full
action once all preconditions are met for the action guidance condition yielded significant learning
to be taken. This condition served as a baseline for gains, as measured by the difference of posttest and
the investigation. pretest scores. As shown in table 1, a two-tailed
Intermediate Guidance: Students experience the sto- matched pairs t-test showed that students in the full
ryworld under the guidance of an intermediate nar- guidance condition showed statistically significant
rative-centered tutorial planning model. This is an learning gains. Students in the intermediate and min-
ablated model inspired by the notions of interactive imal guidance conditions did not achieve significant
narrative islands (Riedl et al. 2008). Islands are inter- learning gains.
mediate plan steps through which all valid solution In addition, we analyzed learning gain differences
paths must pass. They have preconditions describing between the conditions and the results showed that
the intermediate world state, and if the plan does not there were significant differences. Performing an
satisfy each island’s preconditions, the plan will nev- ANCOVA to control for pretest scores, the learning
er achieve its goal. In our version of Crystal Island, gains were significantly different for the full and min-
transitions between narrative arc phases represent imal guidance conditions, F(2, 99) = 38.64, p < .001,
“islands” in the narrative. Each arc phase consists of as were the learning gains for the full and intermedi-
a number of potential narrative-centered tutorial ate guidance conditions, F(2, 100) = 40.22, p < .001.
planning decisions. However, the phases are bound- Thus, students who received full guidance from our
ed by specific narrative-centered tutorial planning machine-learned models achieved significantly high-
decisions that define when each phase starts and er learning gains than the students who were in the
ends. We employ these specific tutorial action deci- other two conditions. These results are consistent
sions as our islands. with findings from the education literature, which
Full Guidance: Students experience the storyworld suggest that students who receive problem-solving
under the guidance of the full narrative-centered guidance (coaching, hints) during inquiry-based
tutorial planning model. The model actively moni- learning achieve greater learning outcomes than stu-
tors students interacting with the storyworld in order dents who receive minimal or no guidance (Kirschn-
to determine when it is appropriate to intervene with er, Sweller, and Clark 2006; Mayer 2004).

36 AI MAGAZINE
Articles

Modeling Student Affect


Affect has begun to play an increasingly important Conditions Students Gain Avg. SD t p
role in game-based learning environments. The intel- Full 55 1.28 2.66 2.03 < 0.05
ligent tutoring systems community has seen the Intermediate 48 0.13 2.69 0.19 0.84
emergence of work on affective student modeling Minimal 47 0.89 3.12 1.23 0.22
(Conati and Maclaren 2009), including models for
detecting frustration and stress (McQuiggan, Lee, and
Lester 2007), modeling agents’ emotional states
(Marsella and Gratch 2009), and detecting student Table 1. Learning Gain Statistics for Comparison of
motivation (de Vicente and Pain 2002). All of this Induced Narrative-Centered Tutorial Planning Models.
work seeks to increase the fidelity with which affec-
The lack of a widely accepted and validated mod-
tive and motivational processes are understood and
el of learner emotions poses a challenge for the
utilized in intelligent tutoring systems in an effort to
development of affect-detection systems using theo-
increase the effectiveness of tutorial interactions and,
retical grounding in place of physical sensors. How-
ultimately, learning.
ever, Elliot and Pekrun’s model of learner emotions
This focus on examining affect is largely due to the
describes how students’ affective states relate to their
effects it has been shown to have on learning out-
comes. Emotional experiences during learning have general goals during learning tasks, such as whether
been shown to affect problem-solving strategies, the they are focused on learning or performance, and
level of engagement exhibited by students, and the how well these goals are being met (Elliot and
degree to which the student is motivated to contin- Pekrun 2007). The generality of this model allows it
ue with the learning process (Picard et al. 2004). to be adapted to specific learning tasks and makes it
These factors have the power to dictate how students well suited for predictive modeling in open-ended
learn immediately and their learning behaviors in learning environments.
the future. Consequently, the ability to understand Elliot and Pekrun’s model of learner emotions was
and model affective behaviors in learning environ- used as a theoretical foundation for structuring a
ments has been a focus of recent work (Arroyo et al. sensor-free affect detection model (Sabourin, Mott,
2009; Conati and Maclaren 2009; D’Mello and and Lester 2011). This model was empirically learned
Graesser 2010). from a corpus of student interaction data. During
Correct prediction of students’ affective states is an their interactions with Crystal Island, students were
important first step in designing affect-sensitive asked to self-report on their affective state through
game-based learning environments. The delivery of an in-game smartphone device. They were prompted
appropriate affective feedback requires first that the every seven minutes to select one emotion from a set
student’s state be accurately identified. However, the of seven options, which included anxious, bored,
detection and modeling of affective behaviors in confused, curious, excited, focused, and frustrated.
learning environments poses significant challenges. This set of cognitive-affective states is based on prior
For example, many successful approaches to affect research identifying states that are relevant to learn-
detection utilize a variety of physical sensors to ing (Craig et al. 2004, Elliot and Pekrun 2007). Each
inform model predictions. However, deploying these emotional state was also accompanied by a validated
types of sensors in schools or home settings often emoticon to provide clarity to the emotion label.
proves to be difficult, or impractical, due to concerns A static Bayesian network (figure 5) was designed
about cost, privacy, and invasiveness. Consequently, with the structure hand-crafted to include the rela-
reliance on physical sensors when building affect- tionships described within Elliot and Pekrun’s mod-
sensitive learning environments can limit the poten- el of learner emotions (Sabourin, Mott, and Lester
tial scalability of these systems. To combat this issue, 2011). Specifically the structure focused on repre-
many systems attempt to model emotion without senting the appraisal of learning and performance
the use of physiological sensors. Some researchers goals and how these goals were being met based on
taking this approach have focused on incorporating students’ progress and activities in the game. For
theoretical models of emotion, such as appraisal the- example, some activities such as book reading or
ory, which is particularly well suited for computa- note taking in Crystal Island are related to learning
tional environments (Marsella and Gratch 2009). objectives, while achievement of certain milestones
These models propose that individuals appraise and external validation are more likely to contribute
events and actions in their environment according to to measures of performance goals. Several other com-
specific criteria (for example, desirability or cause) to ponents of Elliot and Pekrun’s model are also con-
arrive at emotional experiences. While there are a sidered. For example, students with approach orien-
variety of appraisal-based theories of emotions, few tations are expected to have generally more positive
appraisal models focus specifically on the emotions temperaments and emotional experiences than stu-
that typically occur during learning (Picard et al. dents with avoidance orientations.
2004). Using a training corpus of student interaction

WINTER 2013 37
Articles

Poster Views

Mastery
Notes Taken Avoidance
Learning
Valence Learning Focus Mastery
Book Views Approach
Emotion
Worksheet
Checks Performance
Avoidance
Worksheet Performance
Correct Focus Performance
Approach
Worksheet
Incorrect
Openness
Performance Valence
Tests Run Valence
Agreeableness
Successful Test
Conscientiousness
Appraisal Personal Environment
Total Goals Variables Attributes Variables
Completed

Figure 5. Structure of Static Bayesian Network Model of Learner Affect.

data to learn and validate the parameters of static Student Modeling with
and dynamic Bayesian network models (Sabourin,
Mott, and Lester 2011), it was found that the Elliot
Dynamic Bayesian Networks
and Pekrun (2007) model of learner emotions can Devising effective models of student knowledge in
successfully serve as the basis for computational game-based learning environments poses significant
models of learner emotions. The model grounded computational challenges. First, models of student
by a theoretical understanding of learner emotions knowledge must cope with multiple sources of uncer-
outperformed several baseline measures including tainty inherent in the modeling task. Second, knowl-
most-frequent class as well as a Bayesian network edge models must dynamically model knowledge
model operating under naïve variable independence states that change over the course of a narrative inter-
assumptions (table 2). It was also found that model- action. Third, the models must concisely represent
ing the dynamic quality of emotional states across complex interdependencies among different types of
time through a dynamic Bayesian network offered knowledge, and naturally incorporate multiple
sources of evidence about user knowledge. Fourth, the
significant improvement in performance over the
models must operate under the real-time performance
static Bayesian network. The model performed par-
constraints of the interactive narratives that play out
ticularly well at identifying positive states, includ-
in intelligent game-based learning environments.
ing the state of focused, which was the most fre-
To address these challenges, we developed a
quent emotion label (tables 3–4). The model had dynamic Bayesian network (DBN)) approach to mod-
difficulty distinguishing between the states of con- eling user knowledge during interactive narrative
fusion and frustration, which is intriguing because experiences (Rowe and Lester 2010). DBNs offer a
the causes and experiences of these two states are unified formalism for representing temporal stochas-
often very similar. These findings indicate the need tic processes such as those associated with knowledge
for future work to improve predictions of negative modeling in interactive narrative environments. The
emotional states so that models for automatic detec- framework provides a mechanism for dynamically
tion of students’ learning emotions can be used for updating a set of probabilistic beliefs about a stu-
affective support in game-based learning environ- dent’s understanding of narrative, mystery solution,
ments. strategic, and curricular knowledge components that

38 AI MAGAZINE
Articles

are accumulated and demonstrated during interac-


tions with a narrative environment. An initial ver- Emotion Accuracy Valence Accuracy
sion of the model has been implemented in Crystal Baseline 22.4% 54.5%
Island. Naïve Bayes 18.1% 51.2%
The DBN for knowledge tracing was implemented Bayes Net 25.5% 66.8%
with the SMILE Bayesian modeling and inference Dynamic BN 32.6% 72.6%
library developed by the University of Pittsburgh’s
Decision Systems Laboratory (Druzdzel 1999). The
model maintains approximately 135 binary nodes,
Table 2. Prediction Accuracies.
100 directed links, and more than 750 conditional
probabilities. As the knowledge-tracing model
observes student actions in the environment, the
associated evidence is incorporated into the network,
and a Bayesian update procedure is performed. The Actual Emotion Correct Emotion Prediction Correct Valence
Prediction
update procedure, in combination with the net-
work’s singly connected structure, yields updates that anxious 2% 60%
complete in less than one second. Initial probability bored 18% 75%
values were fixed across all students; probabilities confused 32% 59%
were chosen to represent the assumption that stu- curious 38% 85%
dents at the onset had no understanding of scenario- excited 19% 79%
specific knowledge components and were unlikely to focused 52% 81%
have mastery of curriculum knowledge components. frustrated 28% 56%
A human participant study was conducted with
116 eighth-grade students from a middle school
interacting with the Crystal Island environment Table 3. Prediction Accuracy by Emotion.
(Rowe and Lester 2010). During the study, students
interacted with Crystal Island for approximately 50
minutes. Logs of students’ in-game actions were
recorded, and were subsequently used to conduct an
evaluation of the DBN knowledge-tracing model. The Predicted Valence
evaluation aimed to assess the model’s ability to Positive Negative
accurately predict students’ performance on a con- Actual Positive 823 184
tent knowledge postexperiment test. While the eval- Valence Negative 326 512
uation does not inspect the student knowledge mod-
el’s intermediate states during the narrative
interaction, nor students’ narrative-specific knowl-
edge, an evaluation of the model’s final knowledge Table 4. Valence Confusion Matrix.
assessment accuracy provided a useful starting point
for refining the model and evaluating its accuracy.
The DBN knowledge-tracing model used the stu-
nents bore on multiple questions. This yielded a
dents’ recorded trace data as evidence to approximate
many-to-many mapping between knowledge com-
students’ knowledge at the end of the learning inter-
action. This yielded a set of probability values for ponents and posttest questions.
each student, corresponding to each of the knowl- The evaluation procedure required the definition
edge-tracing model’s knowledge components. The of a threshold value to discriminate between mas-
resultant data was used in the analysis of the model’s tered and unmastered knowledge components:
ability to accurately predict student responses on knowledge components whose model values exceed-
postexperiment content test questions. The mapping ed the threshold were considered mastered, and
between the model’s knowledge components and knowledge components whose model values fell
individual posttest questions was generated by a below the threshold were considered unmastered.
researcher, and used the following heuristic: if a The model predicted a correct response on a posttest
posttest question or correct response shared impor- question if all of the question’s associated knowledge
tant content terms with the description of a particu- components were considered mastered. The model
lar knowledge component, that knowledge compo- predicted an incorrect response on a posttest ques-
nent was designated as necessary for providing an tion if one or more associated knowledge compo-
informed, correct response to the question. Accord- nents were considered unmastered. The use of a
ing to this heuristic, several questions required the threshold to discriminate between mastered and
simultaneous application of multiple knowledge unmastered knowledge components mirrors how
components, and a number of knowledge compo- the knowledge model might be used in a run-time

WINTER 2013 39
Articles

environment to inform interactive narrative decision Individual players may also pursue ill-defined goals,
making. such as “explore” or “try to break the game.” In these
Rather than choose a single threshold, a series of cases goals and actions may be cyclically related;
values ranging between 0.0 and 1.0 were selected. For players take actions in pursuit of goals, but they may
each threshold, the DBN knowledge model was com- also choose goals after they are revealed by particular
pared to a random model, which assigned uniformly actions. For these reasons, game-based learning envi-
distributed, random probabilities for each knowledge ronments offer promising test beds for investigating
component. New random probabilities were generat- different formulations of goal-recognition tasks.
ed for each knowledge component, student, and To investigate these issues, we have been devising
threshold. Both the DBN model and random model goal-recognition models for Crystal Island (Ha et al.
were used to predict student posttest responses, and 2011). Given its relationship to abduction, goal
accuracies for each threshold were determined across recognition appears well suited for logical represen-
the entire test. Accuracy was measured as the sum of tation and inference. However, goal recognition in
successfully predicted correct responses plus the digital games also involves inherent uncertainty. For
number of successfully predicted incorrect respons- example, a single sequence of actions is often
es, divided by the total number of questions. explainable by multiple possible goals. Markov logic
The DBN model outperformed a random baseline networks (MLNs) provide a formalism that unifies
model across a range of threshold values. The DBN logical and probabilistic representations into a single
model most accurately predicted students’ posttest framework (Richardson and Domingos 2006). To
responses at a threshold level of 0.32 (M = .594, SD = address the problem of goal recognition with
.152). A Wilcoxon-Mann-Whitney U test verified exploratory goals in game environments, a Markov
that the DBN knowledge-tracing model was signifi- logic goal-recognition framework was devised.
cantly more accurate than the random model at the The MLN goal-recognition model was trained on a
0.32 threshold level, z = 4.79, p < .0001. Additional corpus collected from player interactions with Crys-
Mann-Whitney tests revealed that the DBN model’s tal Island (Ha et al. 2011). In this setting, goal recog-
predictive accuracy was significantly greater than nition involves predicting the next narrative subgoal
that of the random model, at the  = .05 level, for the that the student will complete as part of investigating
entire range of thresholds between .08 and .56. the mystery.
An MLN consists of a set of weighted first-order
logic formulae. A weight reflects the importance of
Student Goal Recognition the constraint represented by its associated logic for-
Goal recognition, as well as its sibling tasks, plan mula in a given model. Figure 6 shows 13 MLN for-
recognition and activity recognition, are long-stand- mulae that are included in the goal-recognition mod-
ing AI problems (Charniak and Goldman 1993; Kautz el. Formula 1 represents a hard constraint that needs
and Allen 1986; Singla and Mooney 2011) that are to be satisfied at all times. This formula requires that,
central to game-based learning. The problems are cas- for each action a at each time step t, there exists a sin-
es of abduction: given domain knowledge and a gle goal g. The formulae 2–13 are soft constraints that
sequence of actions performed by an agent, the task are allowed to be violated. Formula 2 reflects the pri-
is to infer which plan or goal the agent is pursuing. or distribution of goals in the corpus. Formulae 3–13
Recent work has yielded notable advances in recog- predict the player’s goal g at time t based on the val-
nizing agents’ goals and plans, including methods for ues of the three action properties, action type a, loca-
recognizing multiple concurrent and interleaved tion l, and narrative state s, as well as the previous
goals (Hu and Yang 2008), methods for recognizing goal. The weights for the soft formulae were learned
activities in multiagent settings (Sadilek and Kautz with theBeast, an off-the-shelf tool for MLNs that
2010), and methods for augmenting statistical rela- uses cutting plane inference (Riedel 2008).
tional learning techniques to better support abduc- Similar to the U-Director framework described
tion (Singla and Mooney 2011). above, the MLN framework encodes player actions in
Digital games pose significant computational chal- the game environment using three properties: action
lenges for goal-recognition models. For example, in type, location, and narrative state.
many games players’ abilities change over time, both Action Type: Type of action taken by the player,
by unlocking new powers and improving motor skills such as moving to a particular location, opening a
over the course of game play. In effect, a player’s door, and testing an object using the laboratory’s test-
action model may change, in turn modifying the ing equipment. Our data includes 19 distinct types of
relationships between actions and goals. Action fail- player actions.
ure is also a critical and deliberate design choice in Location: Place in the virtual environment where a
games; in platform games, a poorly timed jump often player action is taken. This includes 39 fine-grained
leads to a player’s demise. In multiplayer games, mul- and nonoverlapping sublocations that decompose
tiagent goals arise that may involve players compet- the seven major camp locations in Crystal Island.
ing or collaborating to accomplish game objectives. Narrative State: Representation of the player’s

40 AI MAGAZINE
Articles

Figure 6. Formulae for MLN Goal-Recognition Model.

progress in solving the narrative scenario. Narrative data. Each subset of the data was used for testing
state is encoded as a vector of four binary variables, exactly once. The models’ performance was meas-
each of which represents a milestone event within ured using F1, which is the harmonic mean of preci-
the narrative. sion and recall. Table 5 shows the average perform-
The performance of the proposed MLN goal-recog- ance of each model over ten-fold cross validation.
nition model was compared to one trivial and two The MLN model scored 0.484 in F1, achieving an 82
nontrivial baseline models (Ha et al. 2011). The triv- percent improvement over the baseline. The uni-
ial baseline was the majority baseline, which always gram model performed better than the bigram mod-
predicted the goal that appears most frequently in el. A one-way repeated-measures ANOVA confirmed
the training data. The nontrivial baselines were two that the differences among the three compared mod-
n-gram models, unigram and bigram. The unigram els were statistically significant (F(2, 18) = 71.87, p <
model predicted goals based on the current player 0.0001). A post hoc Tukey test revealed the differ-
action only, while the bigram model considered the ences between all pairs of the three models were sta-
previous action as well. In the n-gram models, player tistically significant (p < .01).
actions were represented by a single aggregate feature
that combined all three action properties: action
type, location, and narrative state. In addition to
Educational Impact
game environments, n-gram models have been used In addition to the version of Crystal Island for mid-
in previous goal-recognition work for spoken dia- dle school science education described above, two
logue systems (Blaylock and Allen 2003). additional versions of Crystal Island have been
The two n-gram models and the proposed MLN developed: one for elementary science education,
model were evaluated with tenfold cross validation. and one for integrated science and literacy education
The entire data set was partitioned into ten nonover- for middle school students. More than 4000 students
lapping subsets, ensuring data from the same player have been involved with studies with the Crystal
did not appear in both the training and the testing Island learning environments. While the learning

WINTER 2013 41
Articles

formance, R2 = .35, F(4, 127) = 16.9, p < .01. Similar


results have been found with the elementary school
Baseline Unigram Bigram Factored MLN version of Crystal Island. With 800 students over
F1 0.266 0.396 0.330 0.484 twelve 50-minute class periods, students’ science con-
Improvement N/A 49% 24% 82% tent test scores increased significantly, the equivalent
over Baseline
of a letter grade.

Related Work
Table 5. F1 Scores for MLN and Baseline Goal-Recognition Models.
Intelligent game-based learning environments lever-
age advances from two communities: intelligent
tutoring systems and intelligent narrative technolo-
Predictor B SE
gies. Intelligent tutoring systems model key aspects
Pretest .46** .09 .33**
of one-on-one human tutoring in order to create
learning experiences that are individually tailored to
Presence .03* .01 .15*
students based on their cognitive, affective, and
Final Game Score .01** .00 .31**
metacognitive states (Woolf 2008). By maintaining
R`1 .33
explicit representations of learners’ knowledge and
problem-solving skills, intelligent tutoring systems
can dynamically customize problems, feedback, and
Table 6. Regression Results Predicting hints to individual learners (Koedinger et al. 1997,
Students’ Microbiology Posttest Performance. VanLehn 2006). Recent advances in intelligent tutor-
Note: ** - p < .01; * - p < .05 ing systems include improvements to fine-grained,
temporal models of student knowledge acquisition
(Baker, Goldstein, and Heffernan 2011); models of
environments are in continual development, they tutorial dialogue strategies that enhance students’
regularly achieve learning gains in students. For cognitive and affective learning outcomes (Forbes-
example, an observational study involving 150 stu- Riley and Litman 2011); models of students’ affective
dents investigated the relationship between learning states and transitions during learning (Conati and
and engagement in Crystal Island (Rowe et al. 2011). Maclaren 2009, D’Mello and Graesser 2010);
All students in the study played Crystal Island. The machine-learning-based techniques for embedded
investigation explored questions in the science edu- assessment (Feng, Heffernan, and Koedinger 2009);
cation community about whether learning effective- and tutors that model and directly enhance students’
ness and engagement are synergistic or conflicting in self-regulated learning skills (Azevedo et al. 2010,
game-based learning. Students were given pre- and Biswas et al. 2010). Critically, the field has converged
posttests on the science subject matter. An investiga- on a set of scaffolding functionalities that yield
tion of learning gains found that students answered improved student learning outcomes compared to
more questions correctly on the posttest (M = 8.61, nonadaptive techniques (VanLehn 2006).
SD = 2.98) than the pretest (M = 6.34, SD = 2.02), and Intelligent narrative technologies model human
this finding was statistically significant, t(149) = story telling and comprehension processes, including
10.49, p < .001. methods for generating interactive narrative experi-
The relationship between learning and engage- ences that develop and adapt in real time. Given the
ment was investigated by analyzing students’ learn- central role of narratives in human cognition and
ing gains, problem-solving performance, and several communication, there has been growing interest in
engagement-related factors. The engagement-related leveraging computational models of narrative for a
factors included presence, situational interest, and broad range of applications in training (Johnson
in-game score. Rather than finding an oppositional 2010; Kim et al. 2009; Si, Marsella, and Pynadath
relationship between learning and engagement, the 2009) and entertainment (Lin and Walker 2011;
study found a strong positive relationship between McCoy et al. 2011; Porteous et al. 2011; Swanson and
learning outcomes, in-game problem solving, and Gordon 2008; Yu and Riedl 2012). In educational set-
increased engagement (table 6). Specifically, a linear tings, computational models of interactive narrative
regression analysis found that microbiology back- can serve as the basis for story-centric problem-solv-
ground knowledge, presence, and in-game score were ing scenarios that adapt story-centered instruction to
significant predictors of microbiology posttest score, individual learners (Johnson 2010; Lim et al. 2012;
and the model as a whole was significant, R2 = .33, Rowe et al. 2011).
F(3, 143) = 23.46, p < .001. A related linear regression Educational research on story-centered game-
analysis found that microbiology pretest score, num- based learning environments is still in its nascent
ber of in-game goals completed, and presence were stages. While there have been several examples of
all significant predictors of microbiology posttest per- successful story-centric game-based learning envi-

42 AI MAGAZINE
Articles

ronments for K–12 education (Ketelhut et al. 2010; tive-centered tutorial planning, student affect recog-
Lim et al. 2012; Warren, Dondlinger, and Barab 2008) nition, student modeling, and student goal recogni-
and training applications (Johnson 2010), consider- tion will be critical for actualizing wide-scale use of
able work remains to establish an empirical account intelligent game-based learning environments in
of which educational settings and game features are learners’ everyday lives.
best suited for promoting effective learning and high
engagement. For example, recent experiments con- Acknowledgements
ducted with college psychology students have indi- The authors wish to thank members of the Intelli-
cated that narrative-centered learning environments Media Group of North Carolina State University for
may not always enhance short-term content reten- their assistance. This research was supported by the
tion more effectively than direct instruction (Adams National Science Foundation under Grants REC-
et al. 2012). However, the instructional effectiveness 0632450, DRL-0822200, IIS-0812291, and CNS-
of story-centered game-based learning environments 0540523. This work was also supported by the
is likely affected by their instructional designs, game National Science Foundation through a Graduate
designs, and capacity to tailor scaffolding to individ- Research Fellowship. Any opinions, findings, and
ual learners using narrative-centered tutorial plan- conclusions or recommendations expressed in this
ning, student affect recognition, student modeling, material are those of the authors and do not neces-
and student goal recognition. Additional empirical sarily reflect the views of the National Science Foun-
research is needed to determine which features, and dation. This work was also supported by the Bill and
artificial intelligence techniques, are most effective in Melinda Gates Foundation, the William and Flora
various settings. Furthermore, intelligent game-based Hewlett Foundation, and Educause.
learning environments have a broad range of appli-
cations for guided inquiry-based learning, prepara- References
tion for future learning, assessment, and learning in Adams, D. M.; Mayer, R. E.; Macnamara, A.; Koenig, A.; and
informal settings such as museums and homes. Iden- Wainess, R. 2012. Narrative Games for Learning: Testing the
Discovery and Narrative Hypotheses. Journal of Educational
tifying how integrated AI systems that combine tech-
Psychology 104(1): 235–249.
niques from intelligent tutoring systems and intelli-
Arroyo, I.; Cooper, D.; Burleson, W.; Woolf, B.; Muldner, K.;
gent narrative technologies can best be utilized to
and Christopherson, R. 2009. Emotion Sensors Go to
enhance robust, lifelong learning is a critical chal-
School. In Proceedings of the Fourteenth International Confer-
lenge for the field. ence on Artificial Intelligence in Education, 17–24. Amsterdam:
IOS Press.
Conclusion Azevedo, R.; Moos, D. C.; Johnson, A. M.; and Chauncey, A.
D. 2010. Measuring Cognitive and Metacognitive Regula-
Intelligent game-based learning environments artful- tory Processes During Hypermedia Learning: Issues and
ly integrate commercial game technologies and AI Challenges. Educational Psychologist 45(4): 210–223.
frameworks from intelligent tutoring systems and Baker, R. S. J. D.; Goldstein, A. B.; and Heffernan, N. T. 2011.
intelligent narrative technologies to create personal- Detecting Learning Moment-By-Moment. International Jour-
ized learning experiences that are both effective and nal of Artificial Intelligence in Education 21(1–2): 5–25.
engaging. Because of their ability to dynamically tai- Biswas, G.; Jeong, H.; Kinnebrew, J. S.; Sulcer, B.; and
lor narrative-centered problem-solving scenarios to Roscoe, R. 2010. Measuring Self-Regulated Learning Skills
customize advice to students, and to provide real- Through Social Interactions in a Teachable Agent Environ-
ment. Research and Practice in Technology Enhanced Learning
time assessment, intelligent game-based learning
5(2): 123–152.
environments offer significant potential for learning
Blaylock, N., and Allen, J. 2003. Corpus-Based, Statistical
both in and out of the classroom.
Goal Recognition. in Proceedings of the Eighteenth Interna-
Over the next few years as student modeling and
tional Joint Conference on Artificial Intelligence, 1303–1308.
affect recognition capabilities become even more San Francisco: Morgan Kaufmann Publishers.
powerful, intelligent game-based learning environ-
Charniak, E., and Goldman, R. P. 1993. A Bayesian Model
ments will continue to make their way into an of Plan Recognition. Artificial Intelligence 64(1): 53–79.
increasingly broad range of educational settings
Conati, C., and MacLaren, H. 2009. Empirically Building
spanning classrooms, homes, science centers, and and Evaluating a Probabilistic Model of User Affect. User
museums. With their growing ubiquity, they will be Modeling and User-Adapted Interaction 19(3): 267–303.
scaled to new curricula and serve students of all ages. Craig, S.; Graesser, A.; Sullins, J.; and Gholson, B. 2004.
It will thus become increasingly important to under- Affect and Learning: An Exploratory Look into the Role of
stand how students can most effectively interact with Affect in Learning with Autotutor. Learning, Media and Tech-
them, and what role AI-based narrative and game nology 29(3): 241–250.
mechanics can play in scaffolding learning and real- Dean, T., and Kanazawa, K. 1989. A Model for Reasoning
izing sustained engagement. Further, continued About Persistence and Causation. Computational Intelligence
improvements in the accuracy, scalability, robust- 5(3): 142–150.
ness, and ease of authorship of AI models for narra- de Vicente, A., and Pain, H. 2002. Informing the Detection

WINTER 2013 43
Articles

of the Students’ Motivational State: An Empirical Study. In City. International Journal of Artificial Intelligence in Education
Proceedings of the Sixth International Conference on Intelligent 8(1): 30–43.
Tutoring Systems, 933–943. Berlin: Springer. Lee, S. Y.; Mott, B. W.; and Lester, J. C. 2012. Real-Time Nar-
D’Mello, S., and Graesser, A. 2010. Multimodal Semi-Auto- rative-Centered Tutorial Planning for Story-Based Learning.
mated Affect Detection from Conversational Cues, Gross In Proceedings of the Eleventh International Conference on Intel-
Body Language, and Facial Features. User Modeling and User- ligent Tutoring Systems, 476–481. Berlin: Springer.
Adapted Interaction 20(2): 147–187. Lee, S. Y.; Mott, B. W.; and Lester, J. C. 2011. Modeling Nar-
Druzdzel, M. J. 1999. Smile: Structural Modeling, Inference, rative-Centered Tutorial Decision Making in Guided Dis-
and Learning Engine and Genie: A Development Environ- covery Learning. In Proceedings of the Fifteenth International
ment for Graphical Decision-Theoretic Models. In Proceed- Conference on Artificial Intelligence and Education, 163–170.
ings of the Eleventh Conference on Innovative Applications of Berlin: Springer.
Artificial Intelligence, 902–903. Menlo Park, CA: AAAI Press. Lim, M. Y.; Dias, J.; Aylett, R.; and Paiva, A. 2012. Creating
Elliot, A. J., and Pekrun, R. 2007. Emotion in the Hierarchi- Adaptive Affective Autonomous NPCS. Autonomous Agents
cal Model of Approach-Avoidance Achievement Motiva- and Multi-Agent Systems 24(2): 287–311.
tion. In Emotion in Education, ed. P. A. Schutz, and R. Pekrun, Lin, G. I., and Walker, M. A. 2011. All the World’s a Stage:
57–74. Amsterdam: Elsevier. Learning Character Models from Film. In Proceedings of the
Feng, M.; Heffernan, N.; and Koedinger, K. 2009. Addressing Seventh Artificial Intelligence and Interactive Digital Entertain-
the Assessment Challenge with an Online System that ment Conference, 46–52. Palo Alto, CA: AAAI Press.
Tutors as It Assesses. User Modeling and User-Adapted Interac- Marsella, S. C., and Gratch, J. 2009. EMA: A Process Model
tion 19(3): 243–266. of Appraisal Dynamics. Cognitive Systems Research 10(1): 70–
Forbes-Riley, K., and Litman, D. 2011. Designing and Eval- 90.
uating a Wizarded Uncertainty-Adaptive Spoken Dialogue Mateas, M., and Stern, A. 2005. Structuring Content in the
Tutoring System. Computer Speech and Language 25(1): 105– Facade Interactive Drama Architecture. In Proceedings of Arti-
126. ficial Intelligence and Interactive Digital Entertainment Confer-
Gee, J. P. 2007. What Video Games Have to Teach Us About ence, 93–98. Palo Alto CA: AAAI Press.
Learning and Literacy, 2nd ed. New York: Palgrave Macmil- Mayer, R. 2004. Should There Be a Three-Strike Rule Against
lan. Pure Discovery Learning? American Psychologist 59(1): 4–19.
Ha, E. Y.; Rowe, J. P.; Mott, B. W.; and Lester, J. C. 2011. Goal McCoy, J.; Treanor, M.; Samuel, B.; Wardrip-Fruin, N.; and
Recognition with Markov Logic Networks for Player-Adap- Mateas, M. 2011. Comme Il Faut: A System for Authoring
tive Games. In Proceedings of the Seventh International Con- Playable Social Models. In Proceedings of the Seventh Interna-
ference on Artificial Intelligence and Interactive Digital Enter- tional Conference on Artificial Intelligence and Interactive Digi-
tainment, 32–39. Menlo Park, CA: AAAI Press. tal Entertainment, 158–163. Palo Alto, CA: AAAI Press.
Habgood, M. P. J., and Ainsworth, S. E. 2011. Motivating Mcquiggan, S.; Lee, S.; and Lester, J. 2007. Early Prediction
Children to Learn Effectively: Exploring the Value of Intrin- of Student Frustration. In Proceedings of the Second Interna-
sic Integration in Educational Games. Journal of the Learning tional Conference on Affective Computing and Intelligent Inter-
Sciences 20(2): 169–206. action, 698–709. Berlin: Springer.
Hu, D. H., and Yang, Q. 2008. Cigar: Concurrent and Inter- Meluso, A.; Zheng, M.; Spires, H. A.; and Lester, J. 2012.
leaving Goal and Activity Recognition. In Proceedings of the Enhancing Fifth Graders’ Science Content Knowledge and
Twenty-Third National Conference on Artificial Intelligence, Self-Efficacy Through Game-Based Learning. Computers and
1363–1368. Menlo Park, CA: AAAI Press. Education 59(2): 497–504.
Johnson, W. L. 2010. Serious Use of a Serious Game for Lan- Mott, B. W., and Lester, J. C. 2006. U-Director: A Decision-
guage Learning. International Journal of Artificial Intelligence Theoretic Narrative Planning Architecture for Storytelling
in Education 20(2): 175–195. Environments. In Proceedings of the 5th International Joint
Kautz, H., and Allen, J. F. 1986. Generalized Plan Recogni- Conference on Autonomous Agents and Multiagent Systems,
tion. In Proceedings of the Fifth National Conference on Artifi- 997–984. New York: Association for Computing Machinery.
cial Intelligence, 32–37. Menlo Park, CA: AAAI Press. Picard, R. W.; Papert, S.; Bender, W.; Blumberg, B.; Breazeal,
Ketelhut, D. J.; Dede, C.; Clarke, J.; and Nelson, B. C. 2010. C.; Cavallo, D.; Machover, T. 2004. Affective Learning — A
A Multi-User Virtual Environment for Building Higher Manifesto. BT Technology Journal 22(4): 253–269.
Order Inquiry Skills in Science. British Journal of Educational Porteous, J.; Teutenberg, J.; Charles, F.; and Cavazza, M.
Technology 41(1): 56–68. 2011. Controlling Narrative Time in Interactive Storytelling.
Kim, J.; Hill, R. W.; Durlach, P. J.; Lane, H. C.; Forbell, E.; In Proceedings of the Tenth International Conference on
Core, M.; Marsella, S. C., Et Al. 2009. Bilat: A Game-Based Autonomous Agents and Multiagent Systems, 449–456. Rich-
Environment for Practicing Negotiation in a Cultural Con- land, SC: International Foundation for Autonomous Agents
text. International Journal of Artificial Intelligence in Education and Multiagent Systems.
19(3): 289–308. Richardson, M., and Domingos, P. 2006. Markov Logic Net-
Kirschner, P.; Sweller, J.; and Clark, R. 2006. Why Minimal works. Journal of Machine Learning 62(1–2): 107–136.
Guidance During Instruction Does Not Work: An Analysis Riedel, S. 2008. Improving the Accuracy and Efficiency of
of the Failure of Constructivist, Discovery, Problem-Based, Map Inference for Markov Logic. In Proceedings of the Twen-
Experiential, and Inquiry-Based Teaching. Educational Psy- ty-Fourth Annual Conference on Uncertainty in Artificial Intelli-
chologist 41(2): 75–86. gence, 468–475. Seattle, WA: AUAI Press.
Koedinger, K. R.; Anderson, J. R.; Hadley, W. H.; and Mark, Riedl, M. O.; Stern, A.; Dini, D. M.; and Alderman, J. M.
M. A. 1997. Intelligent Tutoring Goes to School in the Big 2008. Dynamic Experience Management in Virtual Worlds

44 AI MAGAZINE
Articles

for Entertainment, Education, and Training. tive Tutors: Student-Centered Strategies for Rev-
International Transactions on Systems Science olutionizing E-Learning. San Francisco: Mor-
and Applications, Special Issue on Agent Based gan Kaufmann Publishers.
Systems for Human Learning 4(2): 23–42. Yu, H., and Riedl, M. O. 2012. A Sequential
AAAI-14 Timetable
Riedl, M. O., and Young, R. M. 2004. An Recommendation Approach for Interactive for Authors
Intent-Driven Planner for Multi-Agent Sto- Personalized Story Generation. In Proceed-
ry Generation. In Proceedings of the Third ings of the Eleventh International Conference December 6, 2013 –
International Conference on Autonomous on Autonomous Agents and Multiagent Sys- January 31, 2014:
Agents and Multi Agent Systems, 186–193. Pis- tems, 71–78. Richland, SC: International Authors register on the
cataway, NJ: Institute of Electrical and Elec- Foundation for Autonomous Agents and
AAAI web site
tronics Engineers. Multiagent Systems.
Rowe, J. P., and Lester, J. C. 2010. Modeling January 31, 2014:
James Lester is a distinguished professor of
User Knowledge with Dynamic Bayesian Electronic abstracts due
computer science at North Carolina State
Networks in Interactive Narrative Environ-
University. His research focuses on trans-
ments. In Proceedings of the Sixth Annual
Artificial Intelligence and Interactive Digital
forming education with technology-rich February 4, 2014:
learning environments. Utilizing AI, game Electronic papers due
Entertainment Conference, 57–62. Palo Alto,
technologies, and computational linguis-
CA: AAAI Press.
tics, he designs, develops, fields, and evalu- March 18 – March 22, 2014:
Rowe, J. P.; Shores, L. R.; Mott, B. W.; and ates next-generation learning technologies
Lester, J. C. 2011. Integrating Learning, Author feedback about
for K–12 science, literacy, and computer sci-
Problem Solving, and Engagement in Nar- ence education. His work on personalized
initial reviews
rative-Centered Learning Environments. learning ranges from game-based learning
International Journal of Artificial Intelligence environments and intelligent tutoring sys- April 7, 2014:
in Education 21(2): 115–133. tems to affective computing, computation- Notification of acceptance
Sabourin, J. L.; Mott, B. W.; and Lester, J. C. al models of narrative, and natural lan- or rejection
2011. Modeling Learner Affect with Theo- guage tutorial dialogue. He has served as
retically Grounded Dynamic Bayesian Net- program chair for the International Con- April 22, 2014:
works. In Proceedings of the Fourth Interna- ference on Intelligent Tutoring Systems, Camera-ready copy due
tional Conference on Affective Computing and the International Conference on Intelligent at AAAI office
Intelligent Interaction, 286–295. Berlin: User Interfaces, and the International Con-
Springer. ference on Foundations of Digital Games,
Sadilek, A., and Kautz, H. 2010. Recognizing and as editor-in-chief of the International nology. In particular, his research focuses
Multi-Agent Activities from Gps Data. In Journal of Artificial Intelligence in Education. on game-based learning environments,
Proceedings of the Twenty-Fourth AAAI Con- He has been recognized with a National Sci- intelligent tutoring systems, computer
ference on Artificial Intelligence, 1134–1139. ence Foundation CAREER Award and sever- games, and computational models of inter-
Menlo Park, CA: AAAI Press. al Best Paper Awards. active narrative. Prior to joining North Car-
olina State University, he worked in the
Si, M.; Marsella, S.; and Pynadath, D. 2009. Eun Ha is a postdoctoral research scholar
games industry developing cross-platform
Directorial Control in a Decision-Theoretic in the Department of Computer Science at
middleware solutions for the PlayStation 3,
Framework for Interactive Narrative. In Pro- North Carolina State University. Her
Wii, and Xbox 360. He received his Ph.D.
ceedings of the Second International Conference research interests lie in bringing robust
from North Carolina State University in
on Interactive Digital Storytelling, 221–233. solutions to complex problems at the inter-
2006.
Berlin: Springer. face of humans and computers by leverag-
Singla, P., and Mooney, R. 2011. Abductive ing and advancing AI to create intelligent Jonathan Rowe is a senior research associ-
Markov Logic for Plan Recognition. In Pro- interactive systems. Her research in natural ate in the Department of Computer Science
ceedings of the Twenty-Fifth Conference on language has focused on discourse process- at North Carolina State University. His
Artificial Intelligence, 1069-1075. San Fran- ing and natural language dialogue systems, research investigates applications of artifi-
cisco, CA: AAAI Press. and her research on user modeling has cial intelligence and data mining for
focused on goal recognition. advanced learning technologies, with an
Swanson, R., and Gordon, A. 2008. Say Any-
thing: A Massively Collaborative Open emphasis on game-based learning environ-
Seung Lee received the Ph.D. degree in
Domain Story Writing Companion. In Pro- ments. He is particularly interested in mod-
computer science from North Carolina
ceedings of the 1st Joint International Confer- eling how learners solve problems and
State University in 2012. His research inter-
ence on Interactive Digital Storytelling, 32–40. experience interactive narratives in virtual
ests are in multimodal interaction, interac-
Berlin: Springer. worlds.
tive narrative, statistical natural language
VanLehn, K. 2006. The Behavior of Tutoring processing, and affective computing. He is Jennifer Sabourin is a doctoral student in
Systems. International Journal of Artificial currently with the SAS Institute where his the Department of Computer Science at
Intelligence in Education 16(3): 227–265. work focuses on sentiment analysis in nat- North Carolina State University. Her
Warren, S. J.; Dondlinger, M. J.; and Barab, ural language processing. research focus is on developing effective
S. A. 2008. A MUVE Towards Pbl Writing: educational technology with the applica-
Bradford Mott is a senior research scientist
Effects of a Digital Learning Environment tion of AI and data mining techniques. She
at the Center for Educational Informatics
Designed to Improve Elementary Student is particularly interested in the affective and
in the College of Engineering at North Car-
Writing. Journal of Research on Technology in metacognitive processes associated with
olina State University. His research interests
Education 41(1): 113–140. learning with technology.
include AI and human-computer interac-
Woolf, B. P. 2008. Building Intelligent Interac- tion, with applications in educational tech-

WINTER 2013 45
Reproduced with permission of the copyright owner. Further reproduction prohibited without
permission.

You might also like