North-Holland
This paper tries to show that some cross-fertilization between the fields of Artificial Intelligence
(AI) and literary studies could be beneficial to both fields. Consideration of what is required to
understand literary narrative can provide both a new set of challenging AI problems and greater
insight into the mechanisms that contribute to the meaning of narrative texts in general. Literary
studies can also benefit by re-casting their assumptions and theories in terms formal enough to
lead to experiments on computers. We show that current AI models of story understanding
require some fundamental changes in order to handle literary narrative, in areas such as the
representation of sentence meaning, the understanding of a text’s purpose and themes, the
derivation of meaning from non-narrative textual features, and the need for more explicit and
flexible models of the reader’s activity, goals, and abilities. We propose a general model of
understanding that requires an interpreter capable of simulating different readers and operating at
a level outside and surrounding those of current AI models.
1. Introduction
Over the past decade and a half, a body of research called ‘story understand-
ing’ has evolved within the field of Artificial Intelligence (AI). The goal of this
research is typically to enable computers to create an internal representation
of the ‘meaning’ of a story, which can then be used in tasks such as question
answering, paraphrase, or synopsis. Such attempts to formalize the processes
of comprehension and represent the meaning of a text in computationally
implementable ways have provided a perspective on problems of narrative
comprehension that can contribute to our understanding of human compre-
hension processes, even though most AI work is not intended to model them
explicitly. AI work has shed much light on what is required to understand
narrative and the relationships between different kinds of information found
in a text, as well as on the amount and kind of information used or acquired
during the process of comprehension. For this reason research in story
* Authors’ addresses: Nancy M. Ide, Dept. of Computer Science, Vassar College, Poughkeepsie,
New York, 12601, USA, ide@vassar.bitnet; Jean Véronis, Groupe Représentation et Traitement
des Connaissances, Centre National de la Recherche Scientifique, 31 Chemin Joseph Aiguier,
13402 Marseille cedex 9, France, veronis@frmopll.bitnet
accompanies the last rule above, and indicates that the Event is a cause of the
Reaction.
This approach, which can be seen as a systemization of Propp’s (1968)
proposal for a morphology of folk tales, has been followed by a number of
researchers (for instance, Mandler and Johnson (1977), Thorndyke (1977)). It
made a substantial contribution to the field of story understanding by point-
ing out that not all coherent texts are stories, and thus demonstrated the need
for some theory to account for the notion of ‘storyness’ of texts. This focussed
attention on the generalized structural knowledge that is required to organize
and understand stories and, more generally, narrative.
Because of their concern with structure rather than content, story grammars
have been criticized as having very little to offer to machine understanding of
texts (and even more generally as an inadequate theory of stories for any
purpose; see Black and Wilensky (1979)). It has become more and more
evident that any approach to story understanding by machine must involve
creating an internal representation of the ‘meaning’ of a story, which can
later be accessed, often in order to generate paraphrases, answer questions, or
provide a story synopsis. Early work was concerned mainly with the represen-
tation of the meaning of isolated sentences, and we will outline below one
scheme for representing sentence meaning, namely, the Conceptual Depend-
ency (CD) theory developed at Yale University by Schank (1972), since much
of the important work in story understanding has been done by former and
current members of the Yale group and has been based on this theory.
However, in order to understand a text, it is not enough to understand
individual sentences. Much information is left out of the text since it can be
inferred by the reader. We will thus discuss in the subsequent parts of this
40 N.M. Ide, J. Véronis / AI and literary narrative
To understand this short narrative, the understanding system must have access
to a causal syntax (containing, for example, rules such as ‘actions can result in
state changes’) and a store of more specific causal knowledge (for example, a
certain action results in a certain state change or set of state changes, under
certain conditions). This will enable it to infer that Mary’s knocking over the
bricks must have caused the bricks to be propelled in such a way as to hit
John’s leg, an event which is not stated and which, though omitted from the
literal statement of the sentence, was the proximate cause of John’s injury. In
Schank’s (1977) system, the causal syntax and causal knowledge are expressed
in terms of CD primitive acts and relations. Using these tools, the system
infers a complete causal chain for a narrative, making explicit the events and
causal sequences that are implicit in the literal text.
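To make the mechanism concrete, the inference of the unstated event might be sketched as follows. This is a toy illustration in the spirit of Schank’s causal syntax, not his actual representation: the rule table, the event names, and the chain-completion procedure are all our own inventions.

```python
# Toy causal-chain completion: each rule maps an event to the state
# change or event it causes under the relevant conditions.
CAUSAL_RULES = {
    "Mary knocks over bricks": "bricks propelled at John's leg",
    "bricks propelled at John's leg": "bricks hit John's leg",
    "bricks hit John's leg": "John's leg injured",
}

def complete_chain(stated_events):
    """Insert the implicit intermediate events between the stated
    ones, making the causal chain explicit."""
    chain = []
    for event in stated_events:
        chain.append(event)
        nxt = CAUSAL_RULES.get(event)
        while nxt and nxt not in stated_events:
            chain.append(nxt)          # implicit, inferred event
            nxt = CAUSAL_RULES.get(nxt)
    return chain

# Only the first and last events are stated in the text.
chain = complete_chain(["Mary knocks over bricks", "John's leg injured"])
```

In a real system the rules would be expressed over CD primitive acts and relations rather than English strings, and inference would be conditional rather than a simple lookup.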
2.2.3. Scripts
Because we utilize much idiosyncratic information about stereotypic situations
to understand texts, the notion of a script 1 was developed (Schank and
Abelson (1977)) and first implemented in the Script Applier Mechanism
1 Minsky’s (1981) notion of frames is very similar to that of a script, and has been used in a
similar way in other language understanding systems (see, for instance, Charniak (1978)).
(SAM) (Cullingford (1978)). A script contains the causal chain of events for a
specific situation; thus, when the situation is encountered in a narrative, the
script is activated and the pre-arranged event sequence is provided to the
processor. For example, we understand the event sequence associated with a
stereotypic event such as a birthday party: one goes to the party and brings a
gift which has been wrapped, eats cake, sings ‘Happy Birthday’, etc. The
availability of this information in a ‘pre-packaged’ form eliminates the need
for what may be the impossible task of inferring a complex chain of events for
specialized situations. Because humans are expected to have knowledge of the
events that will take place in specialized situations such as restaurants and
birthday parties, access to this knowledge is often assumed in narrative. We
have no difficulty understanding the referent for ‘the cake’ or even the reason
why I should be eating a cake in ‘I went to Anne’s birthday party. The cake
was delicious’ because of our knowledge of birthday parties.
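A script of this kind can be pictured as a simple data structure. The sketch below is purely illustrative; the field names and the reference-resolution test are our own devices, not the representation actually used in SAM.

```python
# Illustrative sketch of a script: a pre-packaged causal chain of
# events plus the roles and props the situation makes available.
BIRTHDAY_PARTY = {
    "name": "birthday-party",
    "roles": {"guest", "host"},
    "props": {"gift", "cake", "candles"},
    "events": [
        "guest wraps gift",
        "guest goes to party",
        "guest gives gift to host",
        "guests sing 'Happy Birthday'",
        "guests eat cake",
    ],
}

def resolve_definite_reference(noun, active_script):
    """A definite NP like 'the cake' is resolvable without prior
    mention if the active script supplies the referent."""
    return noun in active_script["props"]

# 'I went to Anne's birthday party. The cake was delicious.'
cake_ok = resolve_definite_reference("cake", BIRTHDAY_PARTY)
```

When the first sentence activates the script, the second sentence’s ‘the cake’ finds its referent among the script’s props, which is one way to picture why the reference causes readers no difficulty.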
On the basis of the first sentence of this ‘text’, we assume that Willa has a goal
of satisfying her hunger. The means by which we understand the second
sentence is more complex, since even though Willa’s goal is known, it is not
readily apparent why she picks up the Michelin guide. Wilensky (1983: 47)
explains the process by which the second sentence is understood:
‘The knowledge base contains the fact that picking up something is a plan for possessing that
thing. So the reader hypothesizes that Willa must have had the goal of possessing the Michelin
guide. This goal must then be explained. Again, there is no ready explanation for this in the
story text. However, the knowledge base contains the fact that having possession of something
that has a function is instrumental to performing that function. Thus the reader infers that
Willa is going to read the guide. Reading is often a plan for finding out some information, and
since the Michelin guide is a source of information about restaurants, the reader infers that
Willa must have had the goal of knowing the location of a restaurant. Having this goal can be
explained by the fact that knowing the location of a place is often instrumental to going there.
Being at a restaurant is in turn instrumental to eating at the restaurant. Eating at a restaurant
is a plan to satisfy hunger, which was previously predicted to be one of Willa’s goals. An
explanation for Willa’s action has been found, and the inference process ceases.’
This example demonstrates the kind of inferencing about plans that is often
required to understand a text, and provides a sense of the kinds of knowledge
about planning and goals that the processor must have available to it in order
to make such inferences. In general, when encountering the actions of a
character in a story, the processor will, if the goal is given, try to determine
which plan among those known to satisfy that goal is being executed, by
matching subsequent actions with actions known to constitute such a plan. If
the goal is not given, the processor will attempt to infer the goal (as well as the
state of the world that having that goal implies) from the actions given, again
by determining the plan the actions serve to implement. Thus the processor
must have knowledge of the plans that can be used to satisfy goals and the
goals that may be satisfied by specific plans. Plans must be specified in the
knowledge base in a sufficiently general way to provide for broad applicability
across situations. Note that when the goals and plans of a character are
understood along the lines of the example given above, an enormous amount
of implied information is generated which connects sentences and can be used
to understand later sentences. The primary requirement for text understanding
is still considered to be the recovery of implied information.
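The backward chaining that Wilensky describes for the Willa example might be caricatured as follows. The rule table and procedure are our own illustrative simplification, not PAM’s actual machinery: a real system searches a space of candidate explanations rather than following a single lookup chain.

```python
# Each entry reads 'X is a plan for / instrumental to Y'.
EXPLAINS = {
    "pick up guide": "possess guide",
    "possess guide": "read guide",
    "read guide": "know restaurant location",
    "know restaurant location": "be at restaurant",
    "be at restaurant": "eat at restaurant",
    "eat at restaurant": "satisfy hunger",
}

def explain(action, predicted_goals):
    """Chain upward from an observed action until a goal already
    predicted for the character is reached; return the chain of
    implied information generated along the way."""
    chain = [action]
    step = action
    while step not in predicted_goals:
        step = EXPLAINS[step]   # in a real system: search, not lookup
        chain.append(step)
    return chain

# Willa was hungry, so 'satisfy hunger' is already predicted.
chain = explain("pick up guide", {"satisfy hunger"})
```

The chain makes visible the point made above: explaining one action generates a considerable amount of implied information connecting it to the character’s known goals.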
Actions are explained by plans, plans are explained by goals, and goals are
explained by themes. Themes are given in the knowledge base as normal
irreducible goals for which no further explanation is needed and which can be
postulated for any agent, unless explicitly contradicted, such as ‘being accepted
by the opposite sex’ or ‘honesty’. Themes can be viewed as the background
knowledge required to predict that individuals will have certain goals. Identify-
ing appropriate themes is yet another task of story understanding; therefore,
the text understander must also have a thorough knowledge of which goals are
motivated by particular themes. Because themes themselves are taken as
Wilensky (1982) points out that goal interactions are essential to capture the
notion of ‘storyness’, since not all coherent sets of sentences are stories. He
identifies a story as a set of sentences that ‘has a point to it’, that is, ‘some
element that invokes the interest of the reader’. Most of these points are in
turn expressed as goal relationships such as goal conflict, goal subsumption, or
goal competition. The concern with goal interactions in AI models of text
understanding is, for the literary scholar, an interesting step in story under-
standing: situations such as goal conflict are the building blocks of plot and
literary theme, and so for the first time we see concern for the kinds of
information about a text that would be of interest in literary analysis.
To understand a story it is necessary to do more than to discover a single
plan of a single character: it is necessary to be able to track the potentially
multiple plans of multiple characters and to handle such things as the revision
of plans and the abandonment of goals. PAM, the Plan Applier Mechanism
developed by Wilensky (1978) accomplishes these aspects of plan manage-
ment. Beyond such tracking, it is necessary to engage in what Wilensky (1983)
calls ‘meta-planning’ in order to understand interactions between goals. The
processor must first be capable of recognizing and dealing with the interac-
tions between a character’s plans and goals and between the plans and goals of
different characters. That is, situations such as goal conflict, goal overlap, and
goal concord must be recognized; and meta-planning procedures must be
invoked to attempt to explain what is and what may be done by the characters
involved in these situations. For instance, if it is established that a goal
conflict exists between two of a character’s goals, then the meta-planner must,
on the basis of the meta-theme ‘achieve as many goals as possible’, assume
that the character has the meta-goal to ‘resolve goal conflict’; subsequent
actions can then be explained by fitting them into a meta-plan (for example,
‘try alternate plan’) that would satisfy the meta-goal.
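The meta-planning step just described might be sketched as follows. The vocabulary (‘resolve-goal-conflict’, ‘try-alternate-plan’) follows Wilensky’s examples, but the code structure and the conflict table are our own illustration.

```python
# Candidate meta-plans known to serve each meta-goal.
META_PLANS = {
    "resolve-goal-conflict": ["try-alternate-plan", "abandon-lesser-goal"],
}

def conflicts(goal_a, goal_b, conflict_table):
    return (goal_a, goal_b) in conflict_table or \
           (goal_b, goal_a) in conflict_table

def explain_with_meta_planning(goals, conflict_table):
    """On detecting a conflict between a character's goals, posit
    (via the meta-theme 'achieve as many goals as possible') the
    meta-goal 'resolve-goal-conflict', and return the meta-plans
    that could explain the character's subsequent actions."""
    for i, a in enumerate(goals):
        for b in goals[i + 1:]:
            if conflicts(a, b, conflict_table):
                return ("resolve-goal-conflict",
                        META_PLANS["resolve-goal-conflict"])
    return None, []

meta_goal, plans = explain_with_meta_planning(
    ["get rich", "stay honest"], {("get rich", "stay honest")})
```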
Wilensky’s notion of meta-planning and the recognition of goal interaction
as an important facet of narrative understanding represent the concerns of the
next important step in story understanding by machine: the use of thematic
knowledge. Thematic knowledge, in the AI sense of the term, derives from
recognition of patterns in goals and events, such as the recognition of goal
2 To stop the process of explanation at the level of theme, as it is defined by Schank and Abelson
(1977) and Wilensky (1982), has intuitive appeal, since humans seem, in routine communication,
to accept such tenets without additional motivation. To stop at this point in any case prevents an
infinite regress of explanation.
It should be noted that while several story understanding systems based on the
theories outlined above have been implemented, these systems are severely
restricted in the actual texts they can handle. For instance, BORIS, which is one
of the most sophisticated understanding programs yet developed, can handle
only a few very simple stories within a highly specialized domain of topics. It
is clear, then, that nothing like an understanding program capable of handling
literary texts is conceivable in the near future.
We can, however, consider modifications that would be required in current
AI models in order to handle meaning in literary narrative. Certain of these
modifications are quantitative and consist of extending the information and
processing strategies already implemented in story understanding systems. For
3 The problems of representing and accessing vast amounts of knowledge are the focus of a
substantial body of research within AI, and the prospects for being able to represent anything like
the knowledge a human possesses are not near at hand.
seek a single ‘best’ meaning for any sentence and try to solve ambiguities by
any available means - for instance, on the basis of probability. If left
unresolved, ambiguity presents substantial problems for most AI understand-
ing systems, and may even halt the understanding process. However, in human
communication it is not always necessary to resolve ambiguities; in literary
texts, ambiguity is often intentional and contributes significantly to the text’s
meaning. Thus for literary understanding ambiguity must be not only
acknowledged but also retained in the representation of meaning. Ambiguous
inferences must be retained as well. Ultimately all possible interpretations
must be linked up, where appropriate, to other structures that have been built
so far in the course of processing. 4 This will in turn require a means to mark
the ambiguous elements, in case one or the other is discarded on the basis of
later information. The nature of the ambiguity - for instance, whether or not
alternate meanings are incompatible - must also be determined.
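One possible bookkeeping scheme for retained ambiguity is sketched below; it is entirely our own device, not a mechanism from any implemented system. Each ambiguous element keeps its alternative readings, the structures each reading has been linked to, and the ability to discard a reading when later information rules it out.

```python
from dataclasses import dataclass, field

@dataclass
class Ambiguity:
    element: str                                   # ambiguous element
    readings: dict = field(default_factory=dict)   # reading -> linked structures
    compatible: bool = False                       # can readings coexist?

    def link(self, reading, structure):
        """Link a possible interpretation to a structure built so
        far in the course of processing."""
        self.readings.setdefault(reading, []).append(structure)

    def discard(self, reading):
        """Later information rules a reading out; its links go too."""
        self.readings.pop(reading, None)

amb = Ambiguity("bank")
amb.link("riverside", "walk-event")
amb.link("institution", "loan-event")
amb.discard("institution")    # e.g. later context rules it out
```

For an intentional literary ambiguity, of course, `discard` would never be called and both readings, with their links, would survive into the final representation.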
Changes will also be required in the amount and kind of information that is
included in a representation of meaning, in particular for sentences and
smaller narrative units. Meaning representation schemes such as CD structures
will often represent, indiscriminately, too much of the meaning of a sentence,
given its context. For example, the details of John’s pulling the trigger are
likely not part of what a reader will conceptualize in most situations when he
sees the sentence ‘John shot Mary’, although this information is part of the
CD representation of the sentence’s meaning. This information may be rele-
vant in certain situations - for example, if a subsequent part of the narrative
describes the trembling of John’s finger pulling the trigger - in which case a
link must be established between this new information, when it is encountered,
and the information related to the pulling of the trigger in the representation
of ‘John shot Mary’. However, in most other cases the information concerning
the trigger will be irrelevant. In understanding systems that utilize CD and
similar representations of sentence meaning there is no means to differentiate
between relevant and irrelevant information in the representation, based on
context. Such differentiation is essential for representing meaning in literary
texts, because of the tendency in literature to use details extensively in order to
enforce certain perspectives on characters and events. Of course, it is impossi-
ble to predict before the entire text is processed which details in a representa-
tion may be relevant, and therefore it is necessary during processing to retain
the full range of potentially relevant information available while clearly
distinguishing those features of meaning that have been shown to be relevant
so far.
On the other hand, CD and similar representation schemes often do not
include information implicit in the meaning of a sentence that may be
4 When and how an ambiguity is resolved may be important for meaning as well; see section 3.3.1.
Meaning - that is, for our purposes that which the text conveys to the reader -
in a literary work is derived from many sources other than narrative events
and intentions. Meaning in literary works is typically multiple and occasion-
ally ambiguous. Event sequences and the goals and plans they imply are in
some cases only secondary contributors to the overall effect of a literary work;
literature is characterized by conveying its meaning through means other than
explicit statement. It is in fact literature’s tendency to take a reader through an
experience, to show rather than to tell, that differentiates it from other kinds
of narrative; it is precisely when narrative that is typically non-literary (e.g.,
journalism, history) moves from telling to showing that it is most apt to be felt
‘literary’ - and vice versa. Making a slightly weaker claim, we could alternatively
say that literature is not wholly reducible to its explicit paraphrasable
content. Either claim suffices to motivate a discussion of the ways in which the
non-paraphrasable content of a text is to be understood and made explicit.
Thus when the reader encounters the fact that Willa intends to go to a
restaurant, as in Schank’s example above, a great number of inferences are
possible; but if earlier in the text it has been established that Willa is
struggling against her own reclusive tendencies (information, we assume, that
would have been foregrounded earlier in the text), the number of possible
inferences is narrowed considerably. On the other hand, if the reader knows
that Willa is very poor, then different inferences may be made. In addition,
foregrounding helps to determine which inferences are reasonable, in much the
same ways as it helps determine the relevant information about a sentence. In
fact, foregrounding will make certain inferences possible in the first place. For
instance, when encountering the fact that Willa picked up the Michelin Guide,
the inference engine would not routinely start thinking about what this may
imply about Willa’s extravagant life style, unless, perhaps, wealth or money
had been foregrounded earlier.
To take the effects of foregrounding into account in processing literary
narrative, substantial data management will be required to keep track of
foregrounded items. Probably, the potential impact of a foregrounded item is
very strong just after it is encountered; after that, unless it is reinforced by
linkage to a newly encountered item, its strength may ‘decay’ gradually as the
reader moves farther and farther away from it. Of course, if an item is indeed
linked to a new item, then a third entity results, which will exert a still stronger
influence on the processing of subsequent information, both because it has
been encountered more recently than the original foregrounded item and
because its strength has been augmented by having been reinforced. The
conception of this process as one involving activation, decay, and reactivation
of foregrounded items suggests a connectionist model; however, it is clear that
consideration of the behavior and interactions of foregrounded items is an
area which demands much more study in order to propose any scheme for
modelling them.
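A minimal version of the activation-and-decay bookkeeping suggested above might look like the following. The particular strengths and decay rate are arbitrary, and the whole scheme is offered only as a sketch of the behavior described, not as a worked-out connectionist model.

```python
DECAY = 0.8   # arbitrary per-step decay rate

class Foreground:
    def __init__(self):
        self.strength = {}            # item -> current activation

    def introduce(self, item):
        self.strength[item] = 1.0     # strong just after encounter

    def tick(self):
        """Each new stretch of text weakens unreinforced items."""
        for item in self.strength:
            self.strength[item] *= DECAY

    def reinforce(self, item, linked_item):
        """Linkage to a newly encountered item reactivates the old
        one and introduces the new one at full strength."""
        self.introduce(linked_item)
        self.strength[item] = min(1.0, self.strength[item] + 0.5)

fg = Foreground()
fg.introduce("wealth")
fg.tick(); fg.tick()                  # reader moves on; 'wealth' decays
fg.reinforce("wealth", "Michelin guide")
```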
Literary narrative also depends heavily on establishing relations among
textual elements - relations such as opposition, causality, or straightforward
association - to convey meaning. The establishing of relations among elements
is very intimately connected with the notion of foregrounding, since fore-
grounded elements often serve as the elements involved in such relations. The
relation between foregrounding and linkage of textual elements is complex,
and full understanding and cataloguing of their interactions will require
substantial study. Speaking very generally, we can observe that the relation-
ships that are established among foregrounded elements within a text are knit
together in a complex network which will typically transcend the inherent
linearity of narrative. It is in this network that much of the meaning of a
literary narrative will reside, and from which its structure and its thematic
statements can be gleaned. The structure that reflects the networking of ideas
after a text is processed should reflect the structure of ideas in that text. It
should be clear within this network what the major ideas are, how ideas relate
to one another within the text, at what points in the text significant ‘fusions’ -
possibly involving the sudden connection of several items, or two or more
important items - occur, where significant parallels or contrasts are estab-
lished and resolved, etc. This structure in turn will affect the meaning of the
text for the reader: for example, if a significant fusion of several ideas occurs
at a particular place in a text, the reader will recognize this point in the text as
significant. Other information that appears there will be foregrounded. The
remainder of the text will, possibly, be processed differently, if the reader
senses that the climactic moment has been reached and assumes the remainder
of the text constitutes the denouement.
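Very crudely, the network and the notion of ‘fusion’ might be sketched as follows; both the representation of typed relations and the fusion test are our own rough illustrations of the idea, not a proposal for an implemented mechanism.

```python
from collections import defaultdict

class TextNetwork:
    def __init__(self):
        # element -> set of (relation, other-element) pairs
        self.edges = defaultdict(set)

    def relate(self, a, relation, b):
        """Record a typed relation (opposition, causality,
        association, ...) between two textual elements."""
        self.edges[a].add((relation, b))
        self.edges[b].add((relation, a))

    def is_fusion(self, elements):
        """Crudely: do these elements now hang together, each one
        linked to at least one other member of the cluster?"""
        return all(
            any(other in elements for _, other in self.edges[e])
            for e in elements)

net = TextNetwork()
net.relate("ice", "opposition", "fire")
net.relate("fire", "association", "passion")
```

Such a network is non-linear by construction: relations connect elements regardless of where they occur in the text’s sequence.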
This outline of the role of foregrounding and interlinkage of textual
elements is extremely sketchy, and leaves out a consideration of many im-
portant elements, such as the varying effects of different kinds of fore-
grounded information and links. It is intended to suggest avenues for further
consideration, and not to be a definitive outline of what would be required in
an implemented narrative understanding program. However, the role of fore-
grounding and the development of the network of relationships seems to be a
most promising and important area for further investigation, because of the
significance of these processes in literary understanding.
5 Wellek and Warren (1970) define story as process: ‘To tell a story, one has to be concerned
about the happening, not merely the outcome [the reader] who reads only the ‘concluding
chapter’ of a nineteenth century novel would be somebody incapable of interest in a story, which
is process - even though process towards an end’ (our emphasis). This concern with the processes
of comprehension is also consistent with Wolfgang Iser’s reader-oriented theories of narrative
understanding. Galloway (1983) makes this point and suggests that AI might consider an analysis
of processing itself to get at meaning, but does not develop this idea further.
The meaning of the text for the reader - the reader’s experience of the text -
can be defined very largely by the ways in which these false assumptions are
created, and how and when they are shattered. To understand a murder
mystery, for instance, it is not enough to recite the sequence of events by
which the crime was committed and solved; one must also grasp the means by
which the knowledge of the perpetrator is kept from the reader and how it is
slowly or obliquely revealed to him. Similarly, in a text which presents a
certain view of a character and later reveals that this view is flawed, one of the
major points or themes of the text may be that partial or incorrect information
can lead to false assumptions about a person. Many narration mechanisms
also achieve their purpose through their effect on processing - for instance,
limited-point-of-view narration (which may encourage certain inferences and
associations and inhibit others, based on the limits of the point of view), and
out-of-sequence narration and multi-stranded narrative (which may limit the
reader’s knowledge and purposefully add to it in order to affect the drawing of
inferences).
All of this suggests that the processing of narrative itself must be taken into
account to yield that text’s ‘final’ meaning. Thus the steps involved in
processing should be retained for examination after processing is completed.
Retaining a trace of the procedures used would enable access to information
which could contribute, at the least, to a still better understanding of the ways
in which meaning is created (if not to the meaning itself) - for example, it
would be revealing to determine which information in a text required calling
the same sequences of procedures, or required accessing the same knowledge
structures. In addition to (or as an alternative to) examining a trace of
processing after it has been completed, information about processing could be
gathered and considered during the processing itself. Certainly, it is necessary
to consider the effects of, say, learning information through flashback at the
time the flashback is encountered; this will modify both the way the informa-
tion itself is perceived and the way subsequent information is processed. It
may be that certain phenomena of processing must be considered as they are
encountered, while others must wait to be considered after processing is
complete. In any event, it is clear that the dynamic evolution of the meaning
during the processing of text is itself an important contributor to the overall
meaning, and one which AI models have not yet included.
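Retaining a trace of processing might amount to something as simple as the following; the scaffolding is our own, and the query shown corresponds to the example above of determining which information in a text required calling the same procedures.

```python
from collections import defaultdict

class Trace:
    def __init__(self):
        self.log = []                  # (text-unit, procedure), in order

    def record(self, unit, procedure):
        """Called by the understander each time a procedure is
        applied to a unit of text."""
        self.log.append((unit, procedure))

    def units_sharing_procedure(self):
        """Post-hoc query: which pieces of text required calling
        the same procedure?"""
        by_proc = defaultdict(list)
        for unit, proc in self.log:
            by_proc[proc].append(unit)
        return {p: us for p, us in by_proc.items() if len(us) > 1}

t = Trace()
t.record("s1", "apply-script")
t.record("s7", "apply-script")
t.record("s3", "infer-goal")
```

The same log, consulted during processing rather than after it, would support the on-line cases discussed above, such as registering the effect of a flashback at the moment it is encountered.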
6 See Ide (1988) for a discussion of some of the special kinds of semantic knowledge needed for
literary understanding.
7 This also suggests that one could modify the understanding program in order to determine the
effects of enhancing or reducing ability, or, for that matter, the effects of modifying the knowledge
that a reader has.
most or all of the goals generated during processing will have been fulfilled or
abandoned before the entire text is processed.
The goals within the goal hierarchy at any given moment during processing
and the plans that are implemented to serve these goals may interact in very
complex ways - they may, for instance, complement one another (for example,
understanding the psychology of a character can lead to the discovery of the
key to solving the story’s mystery) or conflict (for example, lingering over
details in order to experience a scene vs. reading rapidly in order to fill in
one’s knowledge of the event sequence). When goals or plans conflict, it must
be clear which among the conflicting goals will be served. For instance, will
the goal to discover the overall purpose of the text or the goal of gaining
emotional experience always override the ‘lesser’, more immediate goal of
learning what a character will do next? Putting this more generally, will goals
higher up in the hierarchy be served first when a conflict arises? Are current
AI strategies for modelling the resolution of goal conflict for characters in
narrative (i.e., Wilensky’s ‘metaplanning’) adequate to model these processes
in readers as well? The answers to these questions are not clear, and it may in
fact be the case that priorities among reader’s goals vary from reader to reader,
and possibly from situation to situation. This speaks again to the need to
include a model of the reader’s goals and plans in an understanding system.
In addition to being arranged hierarchically, goals within the hierarchy may
also be organized linearly, as a sequence of goals. For instance, readers bring
to the reading of a text their general knowledge about stories and literature
(which will vary from reader to reader), which will include, for instance,
knowledge of typical story schemas similar to those embodied in story
grammars. Thus a reader will come to a text expecting an overall sequence of
‘events’ to occur, such as the setting up of a conflict, a moment when the
conflict reaches its height, and a resolution. The reader will generate goals to
fulfil these expectations. When the first goal (e.g., identify the conflict) is
achieved, the reader moves on to the second, etc., in a sequential fashion.
Obviously, each of these goals may generate several sub-goals, which for story
schema may consist of a sequence of goals to recognize the components of a
situation or scene and which may themselves be linearly arranged. The
reader’s goals to fulfil these expectations based on his knowledge of the
structure of story, episode, scene, etc., will guide the way in which new
information is processed as it is encountered, by favoring certain interpreta-
tions and emphases on the basis of its hypothetical role in the structure of the
story.
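The linear sequence of reader goals might be sketched as a simple agenda; the schema components and the class below are illustrative assumptions, not a claim about how readers’ expectations are actually organized.

```python
class ReaderAgenda:
    """A linearly ordered sequence of expectation goals derived
    from story-schema knowledge (illustrative names only)."""

    def __init__(self, schema):
        self.pending = list(schema)

    @property
    def current_goal(self):
        return self.pending[0] if self.pending else None

    def achieved(self):
        """When the current goal (e.g. 'identify the conflict') is
        achieved, the reader moves on to the next in sequence."""
        self.pending.pop(0)

agenda = ReaderAgenda(["identify conflict", "recognize climax",
                       "find resolution"])
agenda.achieved()                      # conflict identified
```

Each of these goals could of course spawn sub-goals, themselves linearly arranged, for recognizing the components of a scene or episode.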
The reader in a model of understanding is himself responsible for monitor-
ing, managing, and interrelating points of view, including that of the reader.
This involves constructing a partial world view and a set of goals and plans for
characters, narrator, and possibly even an implied author. The management of
this information involves not only passing control to the appropriate entity at
the right moment as the text is processed, but also involves a potentially
multi-tiered structure in which the narrator’s view of the characters, and then
the reader’s view of the narrator, may be included. Beyond this the reader
must track relationships among the world views, plans, and goals of characters
and (depending on the sophistication of the reader and the text itself 8)
narrator and the implied author. Tracking relations among characters’ goals
and plans enables recognition of Wilensky’s ‘themes’; tracking relations
among the reader’s goals (especially specific goals generated as the text is
processed) and those of characters or narrator may serve to identify the
function of a character - protagonist, antagonist - in the text.
Thus, two levels of goal and plan management need to be added to current
AI models of understanding: first, a level which models the goals and plans of
the reader himself in the act of reading, and second, a level which models the
reader’s own management and coordination of plans, goals, and point of view
for himself, narrator, and characters. Much AI work on goals and plan
generation and interaction can be applied to these additional elements as well,
but other mechanisms will likely be required. In particular, means to account
for and handle interactions among the reader’s top-level goals in reading will
need to be developed, since considerations very different from those in
Wilensky’s meta-planning schemes seem to be involved in the resolution of
conflicts among goals of this kind.
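The first of the two added levels, the reader's own top-level goals in reading, might be represented declaratively, with conflicts among them resolved by a mechanism separate from character-level planning. The priority scheme below is purely a hypothetical placeholder for whatever mechanism proves necessary; the goal names are ours.

```python
# Declarative representation of a reader's top-level goals in reading.
reader_goals = [
    {"goal": "finish-quickly", "priority": 1},
    {"goal": "resolve-ambiguity", "priority": 3},
    {"goal": "track-theme", "priority": 2},
]

def resolve_conflict(goals):
    # Unlike Wilensky's meta-planning for conflicts among characters' goals,
    # conflicts among a reader's top-level goals are resolved here by a bare
    # priority ordering -- a stand-in, not a proposal for the real mechanism.
    return max(goals, key=lambda g: g["priority"])["goal"]

print(resolve_conflict(reader_goals))  # resolve-ambiguity
```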
¹ Compare, for example, what would be required to deal with Sterne’s Tristram Shandy as
opposed to an ‘objective’ novel of Dickens, where the narrator is effectively non-existent and
narrative point of view is controlled throughout.
result most often from the deliberate construction of the text by the author,
and leave identifiable marks of their presence (for example, changes in point
of view, the holding back of certain pieces of information, etc.). On the other
hand, different readers are not totally independent and intellectually disjoint
entities. Any two readers, as different as they may be, will always share a large
body of cultural knowledge and personal experience. When readers share
common knowledge and abilities, they will tend to process a text in the same
way, and literary texts, especially, are consciously constructed to guide the
attentive reader in this processing. In this way common meanings can be
extracted from a text. But individual readers, who in addition to the common
core of knowledge and abilities may possess additional, possibly idiosyncratic
knowledge and abilities, will process the text accordingly. The approach to
meaning that we suggest for AI models of understanding can therefore be seen
as falling somewhere between the extremes of the purely structuralist and the
purely reader-oriented views.
In existing story understanding systems, the ‘reader’ is, effectively, the
understanding program itself. The restricted goals of this reader (to gain
information, to find a character’s plan which explains each event, to find a
character’s goal for each plan, to identify themes, etc.), as well as the
knowledge, abilities, and strategies necessary to realize these goals, are not
articulated and implemented separately but are instead enmeshed in the
procedures of the understanding program. If the goals, strategies, knowledge,
and abilities of the reader outlined in sections 3.4.1 and 3.4.2 were to be made
explicit and implemented in an easily modifiable, declarative form, it would be
possible to model differences among readers as well as to observe the results,
in terms perhaps of differences in meaning, for different readers of the same
text. In such a model, the program is not a ‘reader’ but is instead an
‘interpreter’ capable of simulating the behavior of different readers. Such an
approach adds, at least conceptually, a level of processing beyond that in
current AI models.
The approach outlined above is consistent with the criticism of meaning
representation developed in section 3.1. The analysis of a text will not, as in
CD-like theories, yield a unique and exhaustive representation of meaning, but
will instead yield a minimal schema, resulting from the extraction of the
objectively identifiable textual elements that contribute to meaning, which is
then interpreted by different ‘readers’ in order to obtain their different (but
partially overlapping) meanings.
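The relation between the minimal schema and its interpretation by different ‘readers’ can be sketched as follows. The schema contents, the reader profiles, and the `interpret` function are all illustrative assumptions of ours, intended only to show how shared and idiosyncratic knowledge yield partially overlapping meanings from one text.

```python
# A minimal schema: objectively identifiable textual elements only.
minimal_schema = {"events": ["storm", "shipwreck", "rescue"],
                  "point_of_view": "narrator"}

# Declarative reader profiles: a shared core ('storm') plus idiosyncratic
# knowledge particular to each reader.
readers = {
    "reader_a": {"knowledge": {"storm": "danger", "rescue": "relief"}},
    "reader_b": {"knowledge": {"storm": "danger", "shipwreck": "loss"}},
}

def interpret(schema, profile):
    """The 'interpreter' applies one declarative reader profile to the
    shared minimal schema, producing that reader's meaning."""
    return {e: profile["knowledge"][e]
            for e in schema["events"] if e in profile["knowledge"]}

meanings = {name: interpret(minimal_schema, p) for name, p in readers.items()}
shared = meanings["reader_a"].keys() & meanings["reader_b"].keys()
print(sorted(shared))  # the overlapping portion of the two readings
```

The same schema thus supports distinct but partially overlapping meanings, with the interpreter, not any single reader, as the program.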
4. Conclusion
We believe that at the least, the discussion in this paper makes it clear that
researchers in AI can benefit by considering the range of stimulating and
challenging problems raised by literary texts. Similarly, literary studies can
gain clarity by attempting to re-cast their assumptions and theories in terms
formal enough to lead to experiments on computers. The lack of cross-fertilization
between the fields of Artificial Intelligence and literary studies is
striking; we hope this will soon change, to the benefit of both.
References
Black, J.B. and R. Wilensky, 1979. An evaluation of story grammars. Cognitive Science 3,
213-230.
Charniak, E., 1972. Towards a model of children’s story comprehension (AI TR-266). Massachu-
setts Institute of Technology.
Charniak, E., 1978. On the use of framed knowledge in language comprehension. Artificial
Intelligence 11, 225-265.
Cullingford, R.E., 1978. Script application: Computer understanding of newspaper stories. Yale
University Computer Science Research Report No. 116.
DeJong, G.F., II, 1979. Skimming stories in real time: An experiment in integrated understanding.
Yale University Research Report No. 158.
Dyer, M.G., 1981. The role of TAUs in narratives. Proceedings of the Third Conference of the
Cognitive Science Society. Berkeley, CA.
Dyer, M.G., 1983. In-depth understanding: A computer model of integrated processing for
narrative comprehension, Cambridge, MA: MIT Press.
Fillmore, C., 1968. The case for case. In: E. Bach and R.T. Harms (eds.), Universals in linguistic
theory, 1-88. New York: Holt, Rinehart and Winston.
Galloway, P., 1983. Narrative theories as computational models: Reader-oriented theory and
artificial intelligence. Computers and the Humanities 17, 169-174.
Ide, N., 1988. The lexical database in semantic studies. Linguistica Computazionale: Computa-
tional Lexicology and Lexicography, forthcoming.
Iser, W., 1978. The act of reading. Baltimore, MD: Johns Hopkins.
Kintsch, W. and T.A. van Dijk, 1975. Recalling and summarizing stories. Language 40, 98-116.
Lehnert, W.G., 1981. Plot units and narrative summarization. Cognitive Science 5, 293-331.
Lehnert, W.G., M.G. Dyer, P.N. Johnson, C.J. Yang and S. Harley, 1983. BORIS - An
experiment in in-depth understanding of narratives. Artificial Intelligence 20, 15-62.
Mandler, J.M. and N.S. Johnson, 1977. Remembrance of things parsed: Story structure and recall.
Cognitive Psychology 9, 111-151.
Minsky, M., 1981. A framework for representing knowledge. In: J. Haugeland (ed.), Mind design,
95-128. Cambridge, MA: MIT Press.
Propp, V., 1968. Morphology of the folktale. 2nd ed. Austin, TX: University of Texas Press.
Rieger, C., 1975. Conceptual memory and inference. In: R.C. Schank (ed.), Conceptual informa-
tion processing, 157-288. Amsterdam: North-Holland.
Reiser, B.R., W.G. Lehnert and J.B. Black, 1981. Recognizing thematic units in narrative. Third
Annual Conference of the Cognitive Science Society. Berkeley, CA.
Rumelhart, D.E., 1975. Notes on a schema for stories. In: D.G. Bobrow and A. Collins (eds.),
Representation and understanding: Studies in cognitive science, 211-236. New York: Academic
Press.
Schank, R.C., 1972. Conceptual dependency: A theory of natural language understanding.
Cognitive Psychology 3, 552-631.
Schank, R.C., 1973. Identification of conceptualizations underlying natural language. In: R.C.
Schank and K.M. Colby (eds.), Computer models of thought and language, 187-247. San
Francisco, CA: W.H. Freeman.
Schank, R.C., 1975. Conceptual information processing. Amsterdam: North-Holland.
Schank, R.C., 1982. Dynamic memory: A theory of reminding and learning in computers and
people. New York: Cambridge University Press.
Schank, R.C. and R. Abelson, 1977. Scripts, plans, goals, and understanding. Hillsdale, NJ:
Lawrence Erlbaum.
Thorndyke, P., 1977. Cognitive structures in comprehension and memory of narrative discourse.
Cognitive Psychology 9, 88-110.
Wellek, R. and A. Warren, 1970. Theory of literature. 3rd ed. New York: Harcourt, Brace, and
World.
Wilensky, R., 1978. Understanding goal-based stories. Yale University Research Report No. 140.
Wilensky, R., 1982. Points: A theory of the structure of stories in memory. In: W.G. Lehnert and
M.H. Ringle (eds.), Strategies for natural language processing, 345-374. Hillsdale, NJ: Lawrence
Erlbaum.
Wilensky, R., 1983. Planning and understanding. Reading, MA: Addison-Wesley.