EPHRAIM NISSAN
School of Computing and Mathematical Sciences,
University of Greenwich, Greenwich, London,
England, United Kingdom
Inside the Juror (Hastie 1994) was, in a sense, a point of arrival for research developing
formalisms that describe judicial decision making. Meter-based models of various kinds were
mature, and even ready to give way to models that would concern themselves with the
narrative content of the cases at hand, which a court is called upon to decide. Moreover,
excessive emphasis was placed on lay factfinders, i.e., on jurors. It is noticeable that, as
‘‘AI & Law’’ has become increasingly concerned with evidence in recent years (with efforts
coordinated by Nissan and Martino, Zeleznikow, and others), the baggage of the meter-based
models from jury research does not appear to be exploited. In this article, we try to combine
their tradition with a technique of belief revision from artificial intelligence, in an attempt
to provide an architectural component that would be complementary to models that apply
representations or reasoning to legal narrative content.
Address correspondence to Dr. Ephraim Nissan, 282 Gipsy Road, Welling, Kent DA16 1JJ, England.
E-mail: ephraimnissan@hotmail.com
278 A. F. Dragoni and E. Nissan
Hastie (1994, Figure 1.1 on p. 7) describes trial events ‘‘in terms of the
types of information presented to the juror.’’ These include: indictment,
defendant’s plea, prosecution opening statement, defense opening statement,
witnesses (comprising the sequence: statements of witness and judge, obser-
vations of witnesses, observations of judge), defense closing arguments, pros-
ecution closing arguments, judge’s instructions on procedures (the
procedures being: presumption of innocence, determination of facts, admissi-
bility, credibility, reasonable inference, and standard of proof), and judge’s
Meter-Models Tradition 279
and evaluating their plausibility and fitness as explanations, then learning the
verdict categories, and, next, matching the accepted story to verdict cate-
gories. This fourth approach is associated in jury research with the names
of Pennington and Hastie (1983) (see also Hastie et al. 1983).
Twining (1997) has warned that it is unfortunate that contested jury
trials are treated, in ‘‘much American evidentiary discourse’’ and ‘‘satellite
fields, such as the agenda for psychological research into evidentiary
problems,’’ ‘‘as the only or the main or even the paradigm arena in which
important decisions about questions of fact are taken’’ (ibid., p. 444), and
while acknowledging the importance of the cognitive story model (ibid., n.
16), he has signaled in this connection the excessive emphasis on the jury.
Twining (1999, Sec. 2.3; 1994, Ch. 7) has warned as well about the danger
that a ‘‘good story’’ poses in court of pushing out the true story. Jackson
(1996) provides a semiotic perspective on narrative in the context of the
criminal trial. See also Jackson (1990), and his other works (Jackson 1985;
1988a; 1988b; 1994).
FIGURE 2. Taken from Nissan (2001a), Figure 2 is a redrawn, coalesced flowchart of the ones Hastie
gives for the Bayesian probability updating model, and for the algebraic sequential averaging model
(Hastie 1994, Figure 1.4 on p. 13 and Figure 1.5 on p. 18).
FIGURE 3. A comparison of the Bayesian probability updating model and the stochastic Poisson process
model.
Nissan has argued elsewhere for the crucial role of an AI model of mak-
ing sense of a narrative and of its plausibility for research applying AI to legal
evidence. For example, on narrative stereotypes and narrative improbability,
see the papers on the Jama story (Geiger et al. 2001; Nissan 2001b; Nissan
and Dragoni 2000). Refer as well to the papers on the ALIBI project (Kuflik
et al. 1991; Fakher-Eldeen et al. 1993; Nissan and Rousseau 1997), as well as
to Nissan’s papers in the companion special issue in Cybernetics & Systems
(2003), Nissan’s paper on an amnesia case (2001c), and his paper on the maze
of identities in Pirandello’s Henry IV (Nissan 2002a). Nissan has also deve-
loped a goal-driven formal analysis (the COLUMBUS model) for a passage
in a complex literary work whose narratives are not in the realistic tradition
(Nissan 2002b).
not merely be the old theory extended with the new formula (as this would
exhibit inconsistency).
It is necessary to find how to change the given theory (i.e., the current set
of beliefs) in order to incorporate the incoming information, i.e., the new for-
mula. To get an idea of how complex this is, consider that a (logic) theory is
an infinite set of formulae: namely, all those formulae obtainable from the
basic formulae by taking the deductive closure. AGM set forth three ration-
ality principles that must govern the change:
AGM1. Consistency: K*p must be consistent (i.e., with no conflicting propo-
sitions, as possibly introduced by p).
AGM2. Minimal change: K*p should alter K as little as possible (while trying
to satisfy the consistency principle).
AGM3. Priority to the incoming information: p must belong to K*p (thus not
being relegated to the status of a rarely consulted appendix to the old
theory).
We will focus our attention on the second and the third principles. The for-
mer says that the new theory must be as similar as possible to the old one. It is
a rehashed Occam’s razor: ‘‘Simpler explanations should be preferred until
new evidence proves otherwise.’’ Put otherwise: ‘‘If, to explain a phenom-
enon, it is unnecessary to make given hypotheses, then don’t make them.’’
As to the third principle, it says that the new theory must incorporate the
new formula. This requires that priority be given to the incoming infor-
mation, because if you consult the older formulas first, then you ‘‘neglect’’
the new formula’s impact (it ‘‘doesn’t bother’’ the old theory).
From these rationality principles, AGM derived eight postulates affecting
theory revision:
1. AGM1: For any p ∈ L and K ∈ T_L, K*p ∈ T_L.
The new theory must be a theory.
2. AGM2: p ∈ K*p.
The new formula must belong to the new theory. Known as the ‘‘postulate of
success,’’ this is the most controversial AGM postulate.
3. AGM3: K*p ⊆ K+p.
The new theory must be a subset of the expanded theory we would get if
it had been allowable to merely augment the old theory with the new
formula: Such an expanded theory would be inconsistent if p contradicts
the old theory. The expanded, inconsistent theory includes all those
formulae of language L that necessarily satisfy the axiom.
4. AGM4: If ¬p ∉ K, then K+p ⊆ K*p.
If the negation of the new information is not derivable from the old
theory, then the new theory must contain all those formulae that can be
derived by merely adding the new formula to the old theory.
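The postulates above can be illustrated with a small executable sketch. The fragment below is a minimal illustration, not the AGM construction itself: it works over finite sets of propositional literals rather than deductively closed theories, and the names (`neg`, `consistent`, `revise`) are our own illustrative choices.

```python
# A minimal sketch of revision over sets of propositional literals
# (not deductively closed theories, as in the full AGM setting).
# A literal is a string; its negation is prefixed with "~".
from itertools import combinations

def neg(p):
    return p[1:] if p.startswith("~") else "~" + p

def consistent(beliefs):
    # A set of literals is consistent iff it contains no literal
    # together with its negation.
    return all(neg(p) not in beliefs for p in beliefs)

def revise(K, p):
    """Return K * p: keep a maximal subset of K consistent with p, add p."""
    for size in range(len(K), -1, -1):                 # largest subsets first
        for subset in combinations(sorted(K), size):   # minimal change (AGM-style)
            candidate = set(subset) | {p}
            if consistent(candidate):                  # consistency is enforced
                return candidate                       # success: p is in K * p
    return {p}

K = {"a", "b"}
print(sorted(revise(K, "~b")))  # ['a', '~b']: b is retracted, a is kept
```

Note how the result is consistent, contains the incoming literal (priority to the incoming information), and retains as much of the old base as possible (minimal change).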
(Doyle 1979). See, for instance, the modeling in Martins and Shapiro (1988).
De Kleer’s Assumption-Based Truth Maintenance System (ATMS) paradigm
(De Kleer 1986) overcame some limitations of Doyle’s TMS, and was
immediately regarded as a powerful reasoning tool to revise beliefs (and to
perform the dual cognitive operation of diagnosis). Crucial to the ATMS
architecture is the notion of assumption, which designates a decision to
believe something without any commitment as to what is assumed. An
assumed datum is a problem-solver datum that is taken to hold because an
assumption was utilized to derive it. Assumptions are connected to
assumed data via justifications, and form the foundation to which every
datum’s support can be ultimately traced back. The same assumption may
justify multiple data, or one datum may be justified by multiple assumptions.
In our view, to be fruitfully applied in modeling the cognitive state of
an inquirer or a juror receiving information from many sources about the
same static situation, a belief revision (BR) framework should possess some
special requisites. These are:
FIGURE 4. If q ‘‘rejects’’ p and subsequently ¬q ‘‘rejects’’ q, then p should be restored, even if it is not the
case that ¬q implies p.
SOLUTION PROPOSED
From this discussion it should be evident that we cannot rely on the
AGM framework to model the belief revision process of a juror or an inves-
tigator. We had better rely on the ATMS abstract conception. An implemented
and tested computational architecture that does so is shown in Figure 5.
Let us now zoom in on the initial part of the flowchart. For the purposes of
exemplification of the flow of information for a given set of beliefs and a
given new information, refer to the detail of the architecture as shown in
Figure 6.
The overall schema of the multi-agent belief revision system we propose
(see Figure 5) incorporates the basic ideas of:
In the notation of set theory, the Venn diagram on the right side of
Figure 7 is intended to capture the following concept. Three GOODSs have
been generated; the one labeled 1 includes a, b, and v; the one labeled 2
includes b, v, and the rule ‘‘If a, then not b’’; whereas yet another GOODS,
labeled 3, includes a, v, and the same rule ‘‘If a, then not b’’.
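The generation of the three GOODSs in this worked example can be reproduced by brute force. The sketch below is an illustrative assumption on our part (a naive satisfiability check over the three atoms a, b, v), not the algorithm actually implemented in the architecture.

```python
# Reproducing the GOODSs of the worked example: the maximally
# consistent subsets of KB = {a, b, v, "if a then not b"}.
from itertools import combinations, product

atoms = ["a", "b", "v"]
formulas = {
    "a":     lambda m: m["a"],
    "b":     lambda m: m["b"],
    "v":     lambda m: m["v"],
    "a->~b": lambda m: (not m["a"]) or (not m["b"]),
}

def satisfiable(names):
    # Brute-force over all truth assignments of the three atoms.
    return any(
        all(formulas[n](dict(zip(atoms, bits))) for n in names)
        for bits in product([False, True], repeat=len(atoms))
    )

def goods(kb):
    """All maximally consistent subsets (GOODSs) of kb."""
    out = []
    for size in range(len(kb), 0, -1):       # larger subsets first
        for subset in combinations(kb, size):
            # Keep a consistent subset only if no strictly larger
            # GOODS already found contains it (maximality).
            if satisfiable(subset) and not any(set(subset) < g for g in out):
                out.append(set(subset))
    return out

for g in goods(list(formulas)):
    print(sorted(g))
# Three GOODSs are printed: {a, b, v}, {b, v, a->~b}, {a, v, a->~b}.
```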
Each one of these three GOODSs is a candidate for being the pre-
ferred new cognitive state (rather than the only new cognitive state). The de-
cision as to which cognitive state to select is taken based on Dempster-Shafer
theory (see Figure 9). Refer to Figure 8.
FIGURE 7. The second step: The generation of all the maximally consistent subsets of KB (i.e., the
knowledge base), plus the incoming information.
evaluating the GOOD in its entirety. (In fact, as the GOOD is a formula,
Dempster-Shafer could conceivably assign a weight to it directly.)
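As a hedged illustration of the kind of combination involved, here is a minimal implementation of Dempster’s rule over mass functions; the two sources and their mass assignments are invented for the example and are not taken from the paper’s figures.

```python
# Dempster's rule of combination over mass functions whose focal
# elements are frozensets of hypotheses.
def combine(m1, m2):
    conflict = 0.0
    out = {}
    for B, w1 in m1.items():
        for C, w2 in m2.items():
            inter = B & C
            if inter:
                out[inter] = out.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # mass assigned to the empty set
    # Normalize by the non-conflicting mass (assumes conflict < 1).
    return {A: w / (1 - conflict) for A, w in out.items()}

# Hypothetical sources: a witness backing b, and "common sense"
# backing not-b; the remaining mass goes to the whole frame theta.
theta = frozenset({"b", "not_b"})
m_witness = {frozenset({"b"}): 0.8, theta: 0.2}
m_common_sense = {frozenset({"not_b"}): 0.6, theta: 0.4}
print(combine(m_witness, m_common_sense))
```

The normalization step redistributes the conflicting mass (here 0.48) over the surviving focal elements, so the combined masses again sum to one.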
Next, from the ranking of credibility on the individual formulae, we can
obtain (by means of algorithms not discussed here) a ranking of preferences
on the GOODSs themselves. In the example, the highest ranking is for the
GOOD with: a, b, and v. (Thus, provisionally discarding the contribution
of source T, which here was said to be ‘‘common sense.’’) Nevertheless,
our system generates a different output. The output actually generated by
the system obtains downstream of a recalculation of source reliability,
achieved by trivially applying Bayes’ theorem. In our example, it can be
seen that it is source T (‘‘common sense’’) that is most penalized by the
contradiction that occurred. Thus, in output B′, the rule which precludes b
was replaced with b itself. Note that the selection of B′ is merely a suggestion:
the user of the system could make different choices, by suitably activating
search functions, or by modifying the reliability values of the sources.
Once the next item of information arrives, everything is triggered anew from
the start, but with a new knowledge base, which will be the old knowledge
base revised with that information. It is important to realize that the new
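The recalculation of source reliability by Bayes’ theorem can be sketched as follows; the likelihood values are illustrative assumptions of ours, not those used by the implemented system.

```python
# Hypothetical sketch: penalizing a source's reliability after one of
# its items is contradicted, by a straightforward application of
# Bayes' theorem. The numeric likelihoods are illustrative only.
def update_reliability(prior, p_contra_if_reliable=0.1, p_contra_if_unreliable=0.7):
    """P(reliable | contradiction observed), via Bayes' theorem."""
    num = p_contra_if_reliable * prior
    den = num + p_contra_if_unreliable * (1 - prior)
    return num / den

r_prior = 0.9  # assumed prior reliability of a source such as T ("common sense")
r_post = update_reliability(r_prior)
print(round(r_post, 4))  # 0.5625: the contradicted source is penalized
```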
CONCLUDING REMARKS
This paper introduced a relatively powerful meter-based
formalism for capturing the process of juror decision making. It compares
favorably with other approaches described in the literature of jury research,
and it is especially in this regard (namely, salvaging from the tradition of the
meter-based models an ongoing contribution to the field) that the present
work is intended. This does not amount, however, to saying that the present
approach is without problems. (Much less, that it is more than a building
block: ambition for more completeness would call for an inference engine
operating on a narrative representation.)
A few remarks of evaluation follow, first about consistency. Of course,
we want to enforce or restore consistency: judiciary acts cannot stem from
an inconsistent set of hypotheses. Yet, we want to avoid unduly dismissing
any possibility altogether. Therefore we contrast all such GOODSs that
obtain from the set of information items (which are globally inconsistent)
provided by the various sources involved. Sometimes the same source
may be found in contradiction, or provide inconsistent information (self-
inconsistency). In 1981, Marvin Minsky stated: ‘‘I do not believe that consist-
ency is necessary or even desirable in developing an intelligent system.’’
‘‘What is important is how one handles paradoxes or conflicts.’’ Enforcing
consistency produces limitations: ‘‘I doubt the feasibility of representing or-
dinary knowledge in the form of many small independently true propositions
(context-free truths).’’ In our own approach, we have a single, global, never-
forgetting, inconsistent knowledge background, upon which many specific,
competitive, ever-changing, consistent cognitive contexts are acting.
FIGURE 10.
NOTES
1. There is a more general consideration to be made about attitudes toward
Bayesianism. In the literature of epistemology, objections and counter-
objections have been expressed concerning the adequacy of Bayesianism.
One well-known critic is Alvin Plantinga (1993a, Chap. 7; 1993b, Chap.
8). In a textbook, philosopher Adam Morton (2003, Chap. 10) gave these
headings to the main objections generally made by some epistemologists:
‘‘Beliefs cannot be measured in numbers,’’ ‘‘Conditionalization gives the
wrong answers,’’ ‘‘Bayesianism does not define the strength of evidence,’’
and, most seriously, ‘‘Bayesianism needs a fixed body of propositions’’
(ibid., pp. 158–159). One of the Bayesian responses to the latter objection
about ‘‘the difficulty of knowing what probabilities to give novel proposi-
tions’’ (ibid., p. 160), ‘‘is to argue that we can rationally give a completely
novel proposition any probability we like. Some probabilities may be more
convenient or more normal, but if the proposition is really novel, then no
probability is forbidden. Then we can consider evidence and use it, via
Bayes’ theorem, to change these probabilities. Given enough evidence,
many differences in the probabilities that are first assigned will disappear,
as the evidence forces them to a common value’’ (ibid.). For specific objec-
tions to Bayesian models of judicial decision making, the reader is urged to
see the ones made in Ron Allen’s lead article in Allen and Redmayne
(1997).
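The convergence claim quoted above (‘‘Given enough evidence, many differences in the probabilities that are first assigned will disappear’’) can be checked numerically; the likelihood values below are arbitrary illustrative choices of ours.

```python
# Two agents assign a novel proposition very different priors, yet
# repeated Bayesian conditionalization on the same stream of evidence
# drives their probabilities toward a common value.
def bayes_update(prior, lik_if_true=0.9, lik_if_false=0.3):
    """P(H | e) by Bayes' theorem, for one piece of confirming evidence e."""
    num = lik_if_true * prior
    return num / (num + lik_if_false * (1 - prior))

p1, p2 = 0.05, 0.95          # wildly different initial priors
for _ in range(10):          # ten pieces of evidence favoring the proposition
    p1, p2 = bayes_update(p1), bayes_update(p2)
print(round(p1, 4), round(p2, 4))  # both are now close to 1
```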
2. Consider parties A and B at a bench trial. A’s witnesses have finished giving
evidence, and now a witness for B is being examined or cross-examined,
and a witness for A (e.g., the plaintiff himself ) clings to the hope that
the judge will remember to refer to the notes he had been scribbling when
A was giving evidence, as an item said by A contradicts what B is saying
now. That there is such a contradiction will affect credibility: what A said
before will be less likely to be accepted than it would be if there were no
adverse evidence from B. Yet, our dropping the principle of ‘‘priority to
the incoming information’’ in a multi-source environment should corre-
spond to the practical rule that just because B is giving evidence after A,
this by itself is not supposed to make B more credible than A.
3. The Anglo-American adversarial judicial system, in which the two parties
in a trial are rather symmetrical, provides a convenient example for our
purposes, as the witnesses for the two parties constitute two sets of ‘‘sources’’,
from which items of information flow. Yet, sources in a trial need not be
so ‘‘similar’’. Consider ancient canon law. Rumors (a source considered
public) may have tainted a cleric, who wished to clear himself. He then
brought witnesses in his defense. Each such witness was called a compur-
gator. These compurgatores and the rumors are sources of a different
nature. The rumors themselves are items of information, and the source
was not embodied in an individual (or in, say, a newspaper), but generi-
cally in a subset of the public.
4. Here is a commonsensical example of the interplay of source and context
in making sense (in good faith or as a posture) of an item of information.
Bona fide misinterpretation in context is exemplified by a situation in which I
(Nissan), from a bus, was watching a poster behind a glass window; it
showed the photograph of the head of a man, his face very contracted,
and his mouth as wide open as he apparently could. Something round
and white, which I eventually realized was the head of a pin keeping the pos-
ter in place, happened to be positioned on the man’s tongue. I may be for-
given for thinking on the spur of the moment that some pill was being
advertised, which was not the case. Besides, interpretation activity is not
always done in a truth-seeking mood. On the flank of the bus by which
I was commuting this morning, there was the following ad. (Because of
pragmatic knowledge about posters on the flank of a bus being ads, it must
have been an ad.) The poster was showing the face of a celebrity and read:
‘‘Jennifer Lopez is in. . .’’ It included no other inscription, apparently to
build up expectation for a sequel poster. Moreover, the palms of the hands
of the woman in the picture were raised in the forefront—for example, as
though she was leaning on glass (which was not the case). I only noticed
that poster once the bus had stopped in front of a funeral parlor, whose
display window was exhibiting an array of tombstones and similar niceties
for people to order. The image (and inscription) from the ad on the bus
flank was mirrored on the display window. My expectations, as a prag-
matically competent viewer, of what the advertisers would or wouldn’t ex-
pect, prevent good faith (and afford this rather being a posture) if I were
to interpret the juxtaposed visual information (the facial image of the cel-
ebrity along with the textual assertion from the poster, and the sample
tombstones) as though this meant: ‘‘Celebrity So and So is inside the
shop,’’ or even ‘‘inside one of these lovely displays.’’ Surely the marketing
people didn’t foresee that this would be the situation of viewing the poster,
and I am supposed to know this was unexpected. If I am to adopt a her-
meneutic option that (mis)construes the advertiser’s given document (the
exemplar of the ad) as in the given, singular, quite unflattering circum-
stances, in such a way that the advertisement’s message backfires (i.e.,
grossly thwarts the intended value of the image), this is an appropriation
of the ‘‘text’’ (as broadly meant) for communicative purposes that, according
to a situational context, may in turn be or not be legitimate. Appropriation-
cum-‘‘bending,’’ however, is common in human communication. Some
textual genres practice appropriation quite overtly, prominently, and
organically; see, for example, Nissan et al. (1997). More generally, on
quotations (ironic and otherwise), see Kotthoff (1998). On intertextuality,
see Genette (1979). Nissan (2002b) applies it in the COLUMBUS model.
be engaged in questions, may hope that the information new to him may
be useful while cross-examining the next witness(es) of the employer (if any
is=are left), in order to induce contradictions in the defense.
Legal debate can cope with different philosophical approaches to
knowledge. William Twining, the London-based legal theorist, in a paper
(Twining 1999) which explores issues in the Anglo-American evidence
theory, has remarked: ‘‘The tradition is both tough and flexible in that
it has accommodated a variety of perspectives and values and has usually
avoided making extravagant claims: in the legal context one is concerned
with probabilities not certainties; with ‘soft’ rationality or informal logic
rather than closed system logic; with rational support rather than demon-
stration; and with reasonably warranted judgments rather than perfect
knowledge. It is generally recognized that the pursuit of truth in adjudi-
cation is an important, but not an absolute social value, which may be
overridden by competing values such as ‘preponderant vexation, expense
or delay’. . . . Some premises of the Rationalist Tradition have been subject
to sceptical attack from the outside. But it has been a sufficiently broad
church to assimilate or co-opt most apparent external sceptics. Similarly
while most Anglo-American evidence scholars have espoused or assumed
what looks like a correspondence theory of truth, there is no reason why a
coherence theory of truth cannot be accommodated, if indeed there is any
distinction of substance [p. 71] between the theories’’ (ibid., pp. 70–71).
An example of a coherentist among legal evidence theorists is Bernard
Jackson, also in England (Twining actually points out that much, ibid.,
p. 71, note 9). See Jackson (1988a).
REFERENCES
Alchourron, C. E., P. Gärdenfors, and D. Makinson. 1985. On the logic of theory change: Partial meet
contraction and revision functions. The Journal of Symbolic Logic 50:510–530.
Alcoff, L. M. 1998. Introduction to part five: What is truth? In Epistemology: The Big Questions, ed.
L. M. Alcoff, 309–310. Oxford: Blackwell.
Benferhat, S., D. Dubois, and H. Prade. 2001. A computational model for belief change. In Frontiers in
Belief Revision, Applied Logic Series (22), eds. M. A. Williams and H. Rott, pp. 109–134. Dordrecht:
Kluwer.
BonJour, L. 1998. The elements of coherentism. In Epistemology: The Big Questions, ed. L. M. Alcoff, pp.
210–231. Oxford: Blackwell. (Page numbers are referred to as in Alcoff.) Originally in The Structure of
Empirical Knowledge, 87–110. Cambridge: Harvard University Press.
Cabras, C. 1996. Un mostro di carta. In Psicologia della prova, ed. C. Cabras, pp. 233–258. Milano: Giuffrè.
De Kleer, J. 1986. An assumption-based truth maintenance system. Artificial Intelligence 28:127–162.
Doyle, J. 1979. A truth maintenance system. Artificial Intelligence 12(3):231–272.
Dragoni, A. F. 1992. A model for belief revision in a multi-agent environment. In Decentralized A.I. 3, eds.
E. Werner and Y. Demazeau, pp. 103–112. Amsterdam: North-Holland Elsevier Science.
Dragoni, A. F., P. Mascaretti, and P. Puliti. 1995. A generalized approach to consistency-based belief
revision. In Topics in Artificial Intelligence: Proceedings of the 4th Conference of the Italian Association
for Artificial Intelligence, eds. M. Gori and G. Soda, LNAI 992, pp. 231–236. Springer-Verlag.
Dragoni, A. F. 1997. Belief revision: From theory to practice. The Knowledge Engineering Review
12(2):147–179.
Dragoni, A. F., and P. Giorgini. 1997a. Distributed knowledge revision-integration. In Proceedings of the
Sixth ACM International Conference on Information Technology and Management, pp. 121–127.
New York: ACM Press.
Dragoni, A. F., and P. Giorgini. 1997b. Belief revision through the belief function formalism in a multi-
agent environment. In Intelligent Agents III, eds. M. Wooldridge, N. R. Jennings, and J. Muller,
Lecture Notes in Computer Science, no. 1193. Heidelberg: Springer-Verlag.
Dragoni, A. F., P. Giorgini, and E. Nissan. 2000. Distributed belief revision as applied within a descriptive
model of jury deliberations. In Preproceedings of the AISB 2000 Symposium on AI and Legal
Reasoning, April 17, 2000, Birmingham, pp. 55–63. Reprinted in Information and Communications
Technology Law 10(1):53–65, 2001.
Fakher-Eldeen, F., T. Kuflik, E. Nissan, G. Puni, R. Salfati, Y. Shaul, and A. Spanioli. 1993. Inter-
pretation of imputed behavior in ALIBI (1 to 3) and SKILL. Informatica e Diritto, 2nd series
2(1/2):213–242.
Fermé, E. 1998. On the logic of theory change: Contraction without recovery. Journal of Logic, Language
and Information 7:127–137.
Gaines, D. M. 1994. Juror Simulation. BSc Project Report, No. CS-DCB-9320, Computer Science Dept.,
Worcester Polytechnic Institute.
Gaines, D. M., D. C. Brown, and J. K. Doyle. 1996. A computer simulation model of juror decision
making. Expert Systems With Applications 11:13–28.
Geiger, A., E. Nissan, and A. Stollman. 2001. The Jama legal narrative. Part I: The JAMA model and
narrative interpretation patterns. Information and Communications Technology Law 10(1):21–37.
Genette, G. 1979. Introduction à l’architexte. Paris: Seuil; The Architext: An Introduction (trans.
J. E. Lewin), Berkeley: University of California Press, 1992.
Hansson, S. O. 1999. Recovery and epistemic residue. Journal of Logic, Language and Information 8(4):
421–428.
Harper, W. L. 1976. Ramsey test conditionals and iterated belief change. In Foundations of Probability
Theory, Statistical Inference, and Statistical Theories of Sciences, vol. 1, eds. W. L. Harper and
C. A. Hooker, 117–135. Norwell, MA: D. Reidel.
Harper, W. L. 1977. Rational conceptual change. In PSA 1976, Vol. 2, East Lansing, Michigan.
Hastie, R., ed. 1993. Inside the Juror: The Psychology of Juror Decision Making. (Cambridge Series on
Judgment and Decision Making.) Cambridge, UK: Cambridge University Press.
Hastie, R. 1994. Introduction.
Hastie, R., S. D. Penrod, and N. Pennington. 1983. Inside the Jury. Cambridge, MA: Harvard University
Press.
Horwich, P. 1990. The minimal theory. In Epistemology: The Big Questions, ed. L. M. Alcoff, 311–321.
Oxford: Blackwell. (Page numbers are referred to as in Alcoff.) Originally in Truth, by P. Horwich,
1–14. Oxford: Blackwell, 1990.
Jackson, B. S. 1985. Semiotics and Legal Theory. London: Routledge & Kegan Paul.
Jackson, B. S. 1988a. Law, Fact and Narrative Coherence. Merseyside: Deborah Charles Publications.
Jackson, B. S. 1988b. Narrative models in legal proof. International Journal for the Semiotics of Law 1:
225–246.
Jackson, B. S. 1990. Narrative theories and legal discourse. In Narrative in Culture: The Uses of Story-
telling in the Sciences, Philosophy and Literature, ed. C. Nash, 23–50. London: Routledge.
Jackson, B. S. 1994. Towards a semiotic model of professional practice, with some narrative reflections on
the criminal process. International Journal of the Legal Profession 1:55–79.
Jackson, B. S. 1995. Making Sense in Law. Liverpool: Deborah Charles Publications.
Jackson, B. S. 1996. ‘‘Anchored narratives’’ and the interface of law, psychology and semiotics. Legal and
Criminological Psychology 1(1):17–45.
Jackson, B. S. 1998. Bentham, truth and the semiotics of law. In Legal Theory at the End of the
Millennium, ed. M. D. A. Freeman, pp. 493–531. (Current Legal Problems 1998, Vol. 51.) Oxford:
Oxford University Press.
Katsuno, H., and A. O. Mendelzon. 1991. On the difference between updating a knowledge base and revis-
ing it. In Proceedings of the 2nd International Conference on Principles of Knowledge Representation
and Reasoning, eds. J. Allen, R. Fikes, and E. Sandewall, pp. 389–394. Morgan Kaufmann.
Kotthoff, H. 1998. Irony, quotation, and other forms of staged intertextuality: Double or contrastive
perspectivation in conversation. In Perspectivity in Discourse, eds. C. F. Graumann and W. Kall-
meyer. Amsterdam: Benjamins. Also: http://ling.uni-konstanz.de/pp/home/kotthoff/Seiten/
ironyframe.html
Kuflik, T., E. Nissan, and G. Puni. 1991. Finding excuses with ALIBI: Alternative plans that are deonti-
cally more defensible. Computers and Artificial Intelligence 10(4):297–325.
Levi, I. 1977. Subjunctives, dispositions and chances. Synthese 34:423–455.
Levi, I. 1980. The Enterprise of Knowledge. Cambridge, MA: The MIT Press.
Levi, I. 1991. The Fixation of Beliefs and its Undoing. Cambridge, UK: Cambridge University Press.
Lindström, S., and W. Rabinowicz. 1991. Epistemic entrenchment with incomparabilities and relational
belief revision. In The Logic of Theory Change, eds. Fuhrmann and Morreau, 93–126. Springer-
Verlag.
Makinson, D. 1997. On the force of some apparent counterexamples to recovery. In Normative Systems in
Legal and Moral Theory: Festschrift for Carlos Alchourron and Eugenio Bulygin, eds. E. Garzon
Valdéz et al., 475–481. Berlin: Duncker & Humblot.
Martins, J. P., and S. C. Shapiro. 1988. A model for belief revision. Artificial Intelligence 35:25–97.
Morton, A. 2003. A Guide through the Theory of Knowledge, 3rd ed. Oxford: Blackwell.
Nayak, A. 1994. Foundational belief change. Journal of Philosophical Logic 23:495–533.
Nissan, E. 2001a. Can you measure circumstantial evidence? The background of probative formalisms for
law. Information and Communications Technology Law 10(2):231–245.
Nissan, E. 2001b. The Jama legal narrative. Part II: A foray into concepts of improbability. Information
and Communications Technology Law 10(1):39–52.
Nissan, E. 2001c. An AI formalism for competing claims of identity: Capturing the ‘‘Smemorato di
Collegno’’ amnesia case. Computing and Informatics 20(6):625–656.
Nissan, E. 2002a. A formalism for misapprehended identities: Taking a leaf out of Pirandello. In Proceed-
ings of the Twentieth Twente Workshop on Language Technology, eds. O. Stock, C. Strapparava, and
A. Nijholt, pp. 113–123, Trento, Italy, April 15–16, 2002. Enschede, The Netherlands: University
of Twente.
Nissan, E. 2002b. The COLUMBUS model (2 parts). International Journal of Computing Anticipatory
Systems 12:105–120 & 121–136.
Nissan, E. 2003. Identification and doing without it, Parts I to IV. Cybernetics & Systems 34(4–5) and
34(6–7):317–380, 467–530.
Nissan, E., and A. F. Dragoni. 2000. Exoneration, and reasoning about it: A quick overview of three
perspectives. In Proceedings of the International ICSC Congress ‘‘Intelligent Systems Applications’’
(ISA’2000), pp. 94–100, Wollongong, Australia, December 2000.
Nissan, E., and D. Rousseau. 1997. Towards AI formalisms for legal evidence. In Foundations of Intel-
ligent Systems: Proceedings of the 10th International Symposium, ISMIS’97, eds. Z. W. Raś and
A. Skowron, pp. 328–337. Springer-Verlag.
Nissan, E., I. Rossler, and H. Weiss. 1997. Hermeneutics, accreting receptions, hypermedia. Journal of
Educational Computing Research 17:297–318.
Pennington, N., and R. Hastie. 1983. Juror decision making models: The generalization gap. Psychological
Bulletin 89:246–287.
Plantinga, A. 1993a. Warrant: The Current Debate. Oxford: Oxford University Press.
Plantinga, A. 1993b. Warrant and Proper Function. Oxford: Oxford University Press.
Shimony, S. E., and E. Nissan. 2001. Kappa calculus and evidentiary strength: A note on Åqvist’s logical
theory of legal evidence. Artificial Intelligence and Law 9(2–3):153–163.
Tillers, P., and E. Green, eds. 1998. Probability and Inference in the Law of Evidence: The Uses and Limits
of Bayesianism. Boston and Dordrecht: Kluwer.
Twining, W. 1997. Freedom of proof and the reform of criminal evidence. Israel Law Review 31(1–3):
439–463.
Twining, W. 1999. Necessary but dangerous? Generalizations and narrative in argumentation about
‘‘facts’’ in criminal process. In Complex Cases: Perspectives on the Netherlands Criminal Justice
System, eds. M. Malsch and J. F. Nijboer, 69–98. Amsterdam: Thela Thesis.
Williams, M. A., and H. Rott, eds. 2001. Frontiers in Belief Revision. (Applied Logic Series, 22.)
Dordrecht, The Netherlands: Kluwer Academic Publishers.