
Applied Artificial Intelligence, 18:277–303, 2004
Copyright © Taylor & Francis Inc.
ISSN: 0883-9514 print/1087-6545 online
DOI: 10.1080/08839510490279889

SALVAGING THE SPIRIT
OF THE METER-MODELS
TRADITION: A MODEL
OF BELIEF REVISION BY WAY
OF AN ABSTRACT
IDEALIZATION OF RESPONSE
TO INCOMING EVIDENCE
DELIVERY DURING THE
CONSTRUCTION OF PROOF
IN COURT

ALDO FRANCO DRAGONI


Istituto di Informatica, University of Ancona,
Ancona, Italy

EPHRAIM NISSAN
School of Computing and Mathematical Sciences,
University of Greenwich, Greenwich, London,
England, United Kingdom

Inside the Juror (Hastie 1994) was, in a sense, a point of arrival for research developing
formalisms that describe judicial decision making. Meter-based models of various kinds were
mature, and even ready to give way to models concerned with the narrative content of the
cases that a court is called upon to decide. Moreover, excessive emphasis was placed on lay
factfinders, i.e., on jurors. It is noticeable that as ‘‘AI & Law’’ has become increasingly
concerned with evidence in recent years (with efforts coordinated by Nissan & Martino,
Zeleznikow, and others), the baggage of the meter-based models from jury research does not
appear to be exploited. In this article, we try to combine their tradition with a technique
of belief revision from artificial intelligence, in an attempt to provide an architectural
component that would be complementary to models that apply representations or reasoning to
legal narrative content.

Address correspondence to Dr. Ephraim Nissan, 282 Gipsy Road, Welling, Kent DA16 1JJ, England.
E-mail: ephraimnissan@hotmail.com


BACKGROUND IN JURY RESEARCH


What is the proper role for artificial intelligence tools, architectures, or
formal models in the service of treating legal evidence during investigations
or in the courtroom? It seems unquestionable that such a role would
be welcomed if it is to support the legal professionals involved, in any capacity,
provided that such support takes place in any way other than affecting what
the verdict on the defendant's guilt is going to be.
If, instead, the role of the AI tool is one of suggesting to the factfinders a
truth value for a proposition concerning the guilty or not-guilty status of the
suspect or the defendant, unless independently supported by other
means, then that is a terrain rife with controversy; see, for example, views
from the opposing camps in Allen and Redmayne (1997) and in Tillers
and Green (1998).1 This is less so if such a model is relegated to the status
of an object with which scholarship for its own sake could safely be left to
play, also to the satisfaction of the skeptics. This paper does not get into
the controversy, because the contribution of the formalism it is going to
propose is to jury research; namely, it augments the taxonomy of approaches
that found their place in Reid Hastie’s Inside the Juror (Hastie 1994). As
far as we know, the earliest implementation of this kind of model in artificial
intelligence (with either symbolic or connectionist computation) is the one
described in Gaines et al. (1996). That model, which adopts a (by then) current
approach from jury research and represents it by means of artificial neural
networks, had earlier been the subject of Gaines (1994).
As presented in Hastie (1994), there are four current main approaches to
the formal modeling of the process of juror decision making, by which,
through exposure to the evidence being presented in court, a juror’s attitude
to the accused being or not being guilty is shaped:

. Probability theory approaches (Bayesian posterior probability of guilt).
. Algebraic formulation (sequential averaging model): perceived strength of
evidence for guilt.
. Stochastic process models (state transitions over time are probabilistic).
. Cognitive information processing and story model.

Hastie (1994, Figure 1.1 on p. 7) describes trial events ‘‘in terms of the
types of information presented to the juror.’’ These include: indictment,
defendant’s plea, prosecution opening statement, defense opening statement,
witnesses (comprising the sequence: statements of witness and judge, observations
of witnesses, observations of judge), defense closing arguments, prosecution
closing arguments, judge's instructions on procedures (the
procedures being: presumption of innocence, determination of facts, admissibility,
credibility, reasonable inference, and standard of proof), and judge's
instructions on verdicts (where verdict categories have these features:
identity, mental state, actions, and circumstances).
For the juror’s task, Hastie proposes a flowchart of its tentative structure
(Hastie 1994, p. 8, Figure 1.2), notwithstanding the differences of opinions
that admittedly exist in the literature about how this takes place in the juror’s
cognition. Given inputs from the trial (witnesses, exhibits, and so forth), the
juror has to encode meaning, the next step being (A) ‘‘Select admissible
evidence.’’ Later on in the trial events, given the judge's procedural instructions,
the juror has to encode the meaning of the procedures (presumption of innocence,
and so forth, as listed earlier), and this in turn has three outgoing arcs,
to: (B) ‘‘Evaluate for credibility’’ (into which an arc comes from A as well),
(C) ‘‘Evaluate for implications,’’ and (Z), for which see in the following.
There is a loop by which (B) ‘‘Evaluate for credibility’’ leads into (C) ‘‘Evaluate
for implications,’’ and then into (D) ‘‘Construct sequence of events,’’
which in turn provides feedback which affects B. Besides, D leads to a test:
(T) ‘‘More evidence?’’ If there is, one goes back to A; otherwise, one goes
to Z. Given the judge's instructions on verdicts, the juror has to learn verdict
categories, and this in turn leads to (Z) ‘‘Predeliberation judgment.’’ The
flowchart from Hastie is redrawn here, with some simplification, in Figure 1.
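The control structure of this flowchart can be sketched in code. The following fragment is our own illustrative encoding (node labels A, B, C, D, T, Z follow the description above); it is not part of Hastie's model, and the feedback arc from D back into B is omitted for simplicity.

```python
# A sketch of the control structure of the juror-task flowchart above.
# The node labels follow the text; the encoding is ours, for illustration.

def juror_task(evidence_items):
    """Traverse A -> B -> C -> D once per evidence item; the test T
    ("More evidence?") loops back to A while items remain, then Z
    (predeliberation judgment) is reached."""
    trace = []
    queue = list(evidence_items)
    while queue:                       # T: "More evidence?"
        item = queue.pop(0)
        trace.append(("A", item))      # select admissible evidence
        trace.append(("B", item))      # evaluate for credibility
        trace.append(("C", item))      # evaluate for implications
        trace.append(("D", item))      # construct sequence of events
    trace.append(("Z", None))          # predeliberation judgment
    return trace

trace = juror_task(["witness 1", "exhibit 1"])
```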
The Bayesian and the algebraic approaches are meter-models, in the
sense that a hypothetical meter measures the propensity of the juror
to consider the defendant guilty as charged, starting from the presumption of
innocence, when gradually faced with new evidence. Unlike in the Bayesian
model, in the algebraic model, the fundamental dimension of judgment is
not assumed to be a subjective probability. Accordingly, the rules of
coherence of judgment are required in the Bayesian model, but are relaxed in
the algebraic model. The Bayesian model requires successive judgment to
be coherent: extreme judgments (0 or 1) are final. In contrast, in the
algebraic model, extreme judgments are not necessarily final, and can be
adjusted when subsequent evidence is received.
The belief updating is ‘‘additive’’ in the algebraic model: weigh each
evidence item and add the weighted value to the current degree of belief.
In contrast, in the Bayesian model, the belief updating is by multiplication:
there is a multiplicative adjustment calculation.
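The two updating rules just contrasted can be sketched as follows; the evidence values, weights, and likelihood ratios below are invented purely for the example.

```python
# Illustrative contrast of the two meter updating rules (our own
# minimal sketch; numbers are invented for the example).

def algebraic_update(belief, evidence_value, weight):
    """Sequential averaging: nudge the meter toward each item's
    perceived strength by a weighted, additive adjustment."""
    return belief + weight * (evidence_value - belief)

def bayesian_update(prior, likelihood_ratio):
    """Multiplicative adjustment of the odds of guilt."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Algebraic meter: remains adjustable, even from an extreme value.
belief = 0.0
for value, weight in [(0.9, 0.3), (0.2, 0.2)]:
    belief = algebraic_update(belief, value, weight)

# Bayesian meter: an extreme judgment (probability 0) is final.
assert bayesian_update(0.0, 10.0) == 0.0
```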
The difference between the Bayesian probability updating model and the
stochastic Poisson process model is that, in the latter, what is probabilistic is
the state-transitions over time. Figure 3 (taken from Nissan [2001a])
compares these two approaches, by modifying and coalescing two flowcharts
from Hastie's overview (Hastie 1994, p. 21, Figure 1.6, and p. 13, Figure 1.4).
Next, the fourth approach in that overview is the cognitive story model
within the cognitive information processing model, which is within the
conceptual universe of cognitive science (at its interface with computational
models of cognition). The cognitive story model is about constructing stories

FIGURE 1. A flowchart for the juror’s task.

and evaluating their plausibility and fitness as explanations, then learning the
verdict categories, and, next, matching the accepted story to verdict
categories. This fourth approach is associated in jury research with the names
of Pennington and Hastie (1983) (see also Hastie et al. 1983).
Twining (1997) has warned about it being unfortunate that contested jury
trials be treated, in ‘‘much American evidentiary discourse’’ and ‘‘satellite
fields, such as the agenda for psychological research into evidentiary
problems,’’ ‘‘as the only or the main or even the paradigm arena in which
important decisions about questions of fact are taken’’ (ibid., p. 444), and
while acknowledging the importance of the cognitive story model (ibid., n.
16), he has signaled in this connection the excessive emphasis on the jury.
Twining (1999, Sec. 2.3; 1994, Ch. 7) has warned as well about the danger,
in court, that a ‘‘good story’’ will push out the true story. Jackson
(1996) provides a semiotic perspective on narrative in the context of the
criminal trial. See also Jackson (1990), and his other works (Jackson 1985;
1988a; 1988b; 1994).

FIGURE 2. Taken from Nissan (2001a), Figure 2 is a redrawn, coalesced flowchart of the ones Hastie
gives for the Bayesian probability updating model, and for the algebraic sequential averaging model
(Hastie 1994, Figure 1.4 on p. 13 and Figure 1.5 on p. 18).

FIGURE 3. A comparison of the Bayesian probability updating model and the stochastic Poisson process
model.

Nissan has argued elsewhere for the crucial role of an AI model of making
sense of a narrative and of its plausibility for research applying AI to legal
evidence. For example, on narrative stereotypes and narrative improbability,
see the papers on the Jama story (Geiger et al. 2001; Nissan 2001b; Nissan
and Dragoni 2000). Refer as well to the papers on the ALIBI project (Kuflik
et al. 1991; Fakher-Eldeen et al. 1993; Nissan and Rousseau 1997), as well as
to Nissan’s papers in the companion special issue in Cybernetics & Systems
(2003), Nissan’s paper on an amnesia case (2001c), and his paper on the maze
of identities in Pirandello’s Henry IV (Nissan 2002a). Nissan has also
developed a goal-driven formal analysis (the COLUMBUS model) for a passage
in a complex literary work whose narratives are not in the realistic tradition
(Nissan 2002b).

‘‘BELIEF REVISION’’: AN EMERGENT DISCIPLINE FROM
ARTIFICIAL INTELLIGENCE

Belief Revision (BR) is an emergent discipline from artificial intelligence.
It studies the impact of acquiring new information. The ability to revise
opinions and beliefs is imperative for intelligent systems exchanging information
in a dynamic world. BR first came into focus in the work of the philosophers
of cognition William Harper (1976; 1977) and Isaac Levi (1977; 1980; 1991),
but almost immediately broke into computer science and artificial
intelligence through a seminal paper to which we are going to refer as the AGM
approach (Alchourrón et al. 1985). Interestingly, that article was actually inspired
by an application to law: the promulgation of a new law, added to an extant
body of law. After about two decades of intensive research, BR has
established itself as a mature field of study (see Williams and Rott [2001] for a
recent state-of-the-art collection). In the following, we try to sketch the
evolutionary line of this complex field, focusing on the reason why we needed to
depart considerably from the initial start-up concepts in order to model some
of the inquirer's cognitive processes.
The AGM approach formalized the set of beliefs as a logic theory K
described in a formal language L. The problem of revision arises when we
get a new formula p (belonging to L): the result has to be a new, revised
theory K ∗ p based on p. More generally, if T_L is the set of theories in
language L, a ‘‘revision’’ is a function ∗ : T_L × L → T_L. So ‘‘revision’’
is an operator (K, p) → K ∗ p (i.e., it takes a theory K and a formula p,
and returns a revised theory K ∗ p). All the research aims at finding how to
change the theory K in order to take into account the new formula p.
A problem arises especially when the new formula p is in contrast with
the extant theory K. How to handle contradiction, in case the new
information contradicts the current set of beliefs? Then the new theory could
not merely be the old theory extended with the new formula (as this would
exhibit inconsistency).
It is necessary to find how to change the given theory (i.e., the current set
of beliefs) in order to incorporate the incoming information, i.e., the new
formula. To get an idea of how complex this is, consider that a (logic) theory is
an infinite set of formulae: namely, all those formulae obtainable from the
basic formulae by making the deductive closure. AGM set forth three
rationality principles that must govern the change:

AGM1. Consistency: K ∗ p must be consistent (i.e., with no conflicting
propositions, as possibly introduced by p).
AGM2. Minimal change: K ∗ p should alter K as little as possible (while trying
to satisfy the consistency principle).
AGM3. Priority to the incoming information: p must belong to K ∗ p (thus not
being relegated to the status of a rarely consulted appendix to the old
theory).

We will focus our attention on the second and the third principles. The
former says that the new theory must be as similar as possible to the old one. It
is a rehashed Occam's razor: ‘‘Simpler explanations should be preferred until
new evidence proves otherwise.’’ Said otherwise: ‘‘If, to explain a phenomenon,
it is unnecessary to make given hypotheses, then don't make them.’’
As to the third principle, it says that the new theory must incorporate the
new formula. This requires that priority be given to the incoming
information, because if you give precedence to the older formulae, then you
‘‘neglect’’ the new formula's impact (it ‘‘doesn't bother’’ the old theory).

From these rationality principles, AGM derived eight postulates governing
theory revision:
1. AGM1: For any p ∈ L and K ∈ T_L, K ∗ p ∈ T_L.
The new theory must be a theory.
2. AGM2: p ∈ K ∗ p.
The new formula must belong to the new theory. Known as the ‘‘postulate of
success,’’ this is the most controversial AGM postulate.
3. AGM3: K ∗ p ⊆ K + p.
The new theory must be a subset of the expanded theory we would get if
it had been allowable to merely augment the old theory with the new
formula. Such an expanded theory would be inconsistent if p contradicts
the old theory; the expanded, inconsistent theory includes all the
formulae of the language L.
4. AGM4: If ¬p ∉ K, then K + p ⊆ K ∗ p.
If the negation of the new information is not derivable from the old
theory, then the new theory must contain all those formulae that can be
derived by merely adding the new formula to the old theory.

5. AGM5: K ∗ p is inconsistent if and only if p is inconsistent.
The new theory is only inconsistent if the new formula is, and vice versa.
6. AGM6: If p ↔ q, then K ∗ p = K ∗ q.
Two logically equivalent formulae produce the same effects of change.
(We'll come back to that, questioning this postulate.)
7. AGM7: K ∗ (p ∧ q) ⊆ (K ∗ p) + q.
If the new formula is a logical conjunction of p and q, then the new theory must
be a subset of the final result of this sequence of steps:
a) revise the old theory with p; then,
b) expand the intermediate theory by merely adding q.
8. AGM8: If ¬q ∉ K ∗ p, then (K ∗ p) + q ⊆ K ∗ (p ∧ q).
If the new formula is a logical conjunction of p and q, and, moreover, the
negation of q is not derivable from the theory as revised with p, then the new
theory as revised with the new formula must contain all those formulae
obtainable by first revising the old theory with p, and then expanding
the intermediate theory by merely adding q.
These axioms describe the rational properties that revision should
obey, but they do not suggest how to perform it. A first operational
definition came from Levi's identity:

K ∗ p = (K − ¬p) + p

which defines revision in terms of contraction. Contracting a theory K by a
formula p (written K − p) means making it impossible to derive p from K. So
Levi's identity simply means that to revise K by p, we must first make it
impossible to derive ¬p (possibly deleting some axioms from K), and then we
have to add p.
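Levi's identity can be illustrated on a toy finite base. The sketch below represents formulas as Python predicates over truth assignments; it is purely our own illustration, since AGM operates on deductively closed (infinite) theories, and a real contraction operator must choose among the candidate subsets rather than take the first one found.

```python
# A toy, finite-base sketch of revision via Levi's identity,
# K * p = (K - ¬p) + p. Formulas are Python predicates over truth
# assignments; this is our illustration, not the AGM construction.

from itertools import combinations, product

ATOMS = ("a", "b")

def models(formulas):
    """All truth assignments over ATOMS satisfying every formula."""
    return [dict(zip(ATOMS, bits))
            for bits in product([True, False], repeat=len(ATOMS))
            if all(f(dict(zip(ATOMS, bits))) for f in formulas)]

def consistent(formulas):
    return bool(models(formulas))

def contract(base, p):
    """K - p: a maximal subset of the base that does not entail p
    (a subset fails to entail p iff it is consistent with ¬p)."""
    for k in range(len(base), -1, -1):
        for subset in combinations(base, k):
            if consistent(list(subset) + [lambda v: not p(v)]):
                return list(subset)
    return []

def revise(base, p):
    """Levi identity: contract by ¬p, then expand with p."""
    return contract(base, lambda v: not p(v)) + [p]

# K = {a, a -> b}; the new formula ¬b contradicts K.
K = [lambda v: v["a"], lambda v: (not v["a"]) or v["b"]]
new = lambda v: not v["b"]
K_star = revise(K, new)
assert consistent(K_star)   # the consistency principle is respected
```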
Being so closely related, it is intuitive that there exist eight postulates for
contraction too. One of them deserves to be mentioned since, probably, it has
been the most debated issue in the belief change literature (Fermé 1998;
Hansson 1999; Lindström and Rabinowicz 1991; Makinson 1997; Nayak
1994): the so-called postulate of recovery:

K ⊆ (K − p) + p

Behind these problematic issues, the main problem is the fact that the
eight postulates for revision do not univocally define revision. The open
question is: which theories, out of the infinitely numerous new ones satisfying
the eight postulates, are we to choose for our present purposes? Levi's
identity leaves the problem open, since there are different ways to perform
contraction. The key ideas here should be those of minimality and similarity
between epistemic states. Unfortunately, as the editors of Williams and Rott
(2001) pointed out: ‘‘formalising this intuition has proved illusive and a
definition of minimal change, or indeed similarity, has not been developed.’’

Computer scientists recently pointed out the similarities between belief
revision and database merging: ‘‘Fusion and revision are closely related
because revision can be viewed as a fusion process where the input information
has priority over a priori beliefs, while fusion is basically considered as a
symmetric operation. Both fusion and revision involve inconsistency handling’’
(Benferhat et al. 2001).
An important follow-up of this line of research has been the sharp
distinction made between ‘‘revision’’ and ‘‘updating.’’ If the new information
reports some modification in the current state of a dynamic world, then
the consequent change in the representation of the world is called
‘‘updating.’’ If the new information reports new evidence regarding a static world
whose representation was approximate, incomplete, or erroneous, then the
corresponding change is called ‘‘revision.’’ With revision, the items of
information which gradually arrive all refer to the same situation, which is fixed in
time: such is the case of a criminal event whose narrative circumstances have
to be reconstructed by an investigative team or by fact finders (a judge or a
jury). In contrast, with updating, the items of information which gradually
arrive refer to situations which keep changing dynamically: the use for such
items of information is to make the current representation correspond as
closely as possible to the current state of the situation represented.
This applies, for example, to a flow of information on a serial killer still
on the loose. For example, Cabras (1996) considers the impact of the
construal in the Italian mass media of a criminal case on the investigation itself,
and on the ‘‘giudici popolari’’ to whom the case could have gone (this is the
‘‘domesticated’’ version of a jury at criminal trials at a ‘‘Corte d'Assise’’ in
Italy, where trained judges are in control anyway). The case she selected is
that of the so-called ‘‘Monster of Foligno,’’ a serial killer who used to leave
messages between crimes. A man eventually implicated himself by claiming
that he was the killer. He was released in due course, and later on, the real
culprit was found. We didn't try our formalism on this case, yet arguably
investigations on serial killers are a good example of updating instead of
revision, vis-à-vis recoverability in the sense explained below.
As Katsuno and Mendelzon (1991) pointed out, AGM3 and AGM4, in
defining expansion as a particular case of revision, do not apply to updating.
Information coming from the real world normally implies both kinds of
cognitive operations. In the remainder of this paper, we focus on the case in which
incoming information brings some refinement or correction to a previous
incomplete and/or erroneous description of a static situation, and we need to
perform pure belief revision.
Even if the AGM approach cannot be set aside when speaking of belief
revision, much effort in bridging the gap between theory and practice came
from a parallel conception of belief revision, which originated (almost
at the same time) in research into the so-called Truth Maintenance Systems
(Doyle 1979). See, for instance, the modeling in Martins and Shapiro (1988).
De Kleer's Assumption-Based Truth Maintenance System (ATMS) paradigm
(De Kleer 1986) overcame some limitations of Doyle's TMS, and was
immediately regarded as a powerful reasoning tool to revise beliefs (and to
perform the dual cognitive operation of diagnosis). Crucial to the ATMS
architecture is the notion of assumption, which designates a decision to
believe something without any commitment as to the truth of what is assumed. An
assumed datum is a problem-solver output that is taken to hold because
an assumption was utilized to derive it. Assumptions are connected to
assumed data via justifications, and form the foundation to which every
datum's support can ultimately be traced back. The same assumption may
justify multiple data, or one datum may be justified by multiple assumptions.
In our view, to be fruitfully applied in modeling the cognitive state of
an inquirer or a juror receiving information from many sources about the
same static situation, a BR framework should possess some special requisites.
These are:

. The ability to reject an incoming item of information.
A belief revision system for a multi-source environment should drop the
rationality principle of ‘‘priority to the incoming information,’’ since there
is no direct correlation between the chronology of the informative acts and
the credibility of their contents; it seems more reasonable to treat all the
available pieces of information as if they had been collected at the same time.2
. The ability to recover previously discarded beliefs.
Cognitive agents should be able to recover previously discarded pieces of
knowledge after new evidence redeems them (see Figure 4). The point
is that this should be done not only when the new information directly
‘‘supports’’ a previously rejected belief, but also when the incoming information
indirectly supports it, by disclaiming those beliefs that contradicted it and
caused its ostracism. Elsewhere we called this rule the ‘‘Principle of
Recoverability’’: any previously held piece of knowledge should belong to the
current knowledge space if consistent with it (Dragoni et al. 1995; Dragoni

FIGURE 4. If q ‘‘rejects’’ p and subsequently :q ‘‘rejects’’ q, then p should be restored, even if it is not the
case that :q implies p.
1997). But in this paper, we rename it the ‘‘Principle of Persistence.’’ The
rationale for this principle is that if someone gave us a piece of information
(sometime in the past) and currently there is no reason to reject it, then we
should accept it. Of course, this principle does not hold for updating,
where the represented world does change. Here is an example of
information which updates a situation. If I see an object in position B, I can
no longer believe it is in position A. (The miracle of ubiquity is considered
a miracle for the very reason that it does not belong in our common-sense
experience of the world.) If somebody tells me that the object is not in B,
this does not amount to having me believe that the object is now back in A
or never moved from A, as it may be that the object moved from B to C.
In general, if observation b has determined the removal of information a,
this does not imply that some further notification of change, c, which
provokes the removal of observation b, should necessarily restore observation a.
. The ability to combine contradictory and concomitant evidence.
The notion of fusion should blend with that of revision (Dragoni and Giorgini
1997). Every incoming item of information changes the cognitive state. Rejecting
the incoming information does not mean leaving beliefs unchanged since,
in general, incoming information alters the distribution of the credibility
weights. Surely the latest incoming information decreases the credibility of the
beliefs with which it is in contradiction, even if it has itself been
rejected. The same is true when receiving a piece of information of which
we were already aware; it is not the case that nothing has happened (as AGM4
states), since we are now, in general, more sure about that belief.
Furthermore, if it is true that incoming information affects the old information,
it is likewise true that the old affects the incoming. In fact, an
autonomous agent (where ‘‘autonomous’’ means that its cognitive state
is not determined by other agents) judges the credibility of new
information on the basis of its previous cognitive state. In conclusion, ‘‘revising
beliefs’’ should simply mean ‘‘dealing with a new, broader set of pieces of
information.’’
. The ability to deal with pairs <source, information>.
The way the credibility ordering is generated and revised must reflect the
fact that beliefs come from different sources of information, since the
reliability and the number of independent informants3 affect the credibility
of the information and vice versa (Dragoni 1992). A juror cannot disregard
where his beliefs come from, because the same information, if coming from
different sources, may deserve different weights in terms of credibility, or
even different interpretations of what it means.4
. The ability to maintain and compare multiple candidate cognitive states.
This is the part of human intelligence that does not limit itself
to comparing single pieces of information, but goes on to reconstruct
alternative cognitive scenarios as far as possible.

. Sensitivity to the syntax.
Although the AGM approach axiomatizes belief revision at the semantic
level, we recognize that syntax plays an important role in everyday life.
The way we syntactically pack (and unpack) pieces of information reflects
the way we organize thinking and judge credibility, importance, relevance,
and even truthfulness. A testimony of the form p ∧ b ∧ c ∧ … ∧ t ∧ ¬p from a
defendant A in a trial has the same semantic truth value as the testimony
q ∧ ¬q from a defendant B, but normally B will be condemned, while A
could be absolved by having her testimony regarded as ‘‘partially true,’’
whereas B's testimony will be regarded as ‘‘totally contradictory.’’ Yet this
is unwarranted: e.g., local inconsistencies should not necessarily be fatal to
the credibility of a witness statement. (The semiologist of law Bernard Jackson
showed5 that the pragmatics of delivery in court is paramount, not mere
legal narrative semantics.) A set of sentences seems not to be cognitively
equivalent to its logical conjunction, and we could change a cognitive
state simply by clustering the same beliefs in a different way.
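The recoverability behavior of Figure 4 can be illustrated with a brute-force sketch (our own toy encoding, not the system's actual algorithms): every item ever received stays in the history, and the current state is the maximally consistent subset with the highest total credibility, so a belief ousted earlier returns once the beliefs that contradicted it are themselves discredited. All names and credibility weights below are invented for the example.

```python
# Toy run of the Figure 4 scenario: q (with the background rule q -> ¬p)
# ousts p; the later arrival of ¬q redeems p, even though ¬q does not
# itself imply p. Brute-force propositional reasoning over two atoms.

from itertools import combinations, product

ATOMS = ("p", "q")

def consistent(formulas):
    return any(all(f(dict(zip(ATOMS, bits))) for f in formulas)
               for bits in product([True, False], repeat=len(ATOMS)))

def best_state(history):
    """history: list of (name, formula, credibility weight); return the
    names in the consistent subset of highest total credibility."""
    best, best_w = (), -1.0
    for k in range(len(history), 0, -1):
        for sub in combinations(history, k):
            if consistent([f for _, f, _ in sub]):
                w = sum(wt for _, _, wt in sub)
                if w > best_w:
                    best, best_w = sub, w
    return {name for name, _, _ in best}

p_f   = ("p",    lambda v: v["p"],                       0.5)
rule  = ("q→¬p", lambda v: (not v["q"]) or (not v["p"]), 1.0)
q_f   = ("q",    lambda v: v["q"],                       0.6)
not_q = ("¬q",   lambda v: not v["q"],                   0.7)

assert "p" not in best_state([p_f, rule, q_f])     # q ousts p
assert "p" in best_state([p_f, rule, q_f, not_q])  # ¬q redeems p
```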

SOLUTION PROPOSED

From this discussion it should be evident that we cannot rely on the
AGM framework to model the belief revision process of a juror or an
investigator. We had better rely on the ATMS abstract conception. An implemented
and tested computational architecture that does so is shown in Figure 5.
Let us now zoom in on the initial part of the flowchart. For the purposes of
exemplifying the flow of information for a given set of beliefs and a
given item of new information, refer to the detail of the architecture as shown in
Figure 6.
The overall schema of the multi-agent belief revision system we propose
(see Figure 5) incorporates the basic ideas of:

. Assumption-based Truth Maintenance System (ATMS) to keep different
scenarios.
. Bayesian probability to recalculate the a-posteriori reliability of the
sources of information.
. The Dempster-Shafer Theory of Evidence to calculate the credibility of the
various pieces of information.

In Figure 6, on the left, one can see an incoming item of information, b (whose
source, U, is identified), in addition to the set of beliefs already found in the
knowledge base, namely, items a and v, which both come from source
W, and moreover an item which is a rule (‘‘If a, then not b’’), which
comes from source T. The latter could, for example, be an expert witness

FIGURE 5. Our way to belief revision.

or else a fictitious character such as common sense. In the parlance of


Anglo-American legal evidence theory, common sense is called ‘‘background
generalizations,’’ ‘‘common-sense generalizations,’’ or ‘‘general experience’’
(see Twining 1999).

FIGURE 6. The first step: The arrival of a new information item.

Once past the knowledge base in the flowchart of Figure 6, in order to
revise the set of beliefs with the new information b coming from source U,
two steps are undertaken. Refer to Figure 7.
The ATMS-like mechanism is triggered; it executes steps S1 and S2.
These are dual operations, respectively, as follows.

. Find all minimally inconsistent subsets (NOGOODSs).
. Find all maximally consistent subsets (GOODSs).

In the notation of set theory, the Venn diagram on the right side of
Figure 7 is intended to capture the following concept. Three GOODSs have
been generated: the one labeled 1 includes a, b, and v; the one labeled 2
includes b, v, and the rule ‘‘If a, then not b’’; whereas yet another GOODS,
labeled 3, includes a, v, and the same rule ‘‘If a, then not b.’’
Each one of these three GOODSs is a candidate for being the preferred
new cognitive state (rather than the only new cognitive state). The decision
as to which cognitive state to select is taken based on Dempster-Shafer
(see Figure 9). Refer to Figure 8.
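Steps S1 and S2 can be sketched on the running example. The brute-force consistency check below is our own minimal illustration of what the ATMS-like mechanism computes, not its actual algorithm.

```python
# S1/S2 on the running example: knowledge base {a, v, "if a then not b"}
# plus the incoming b, with consistency checked by brute force over the
# three atoms (our own minimal illustration).

from itertools import combinations, product

ATOMS = ("a", "b", "v")
FORMULAS = {
    "a":    lambda val: val["a"],
    "v":    lambda val: val["v"],
    "a→¬b": lambda val: (not val["a"]) or (not val["b"]),
    "b":    lambda val: val["b"],
}

def consistent(names):
    return any(all(FORMULAS[n](dict(zip(ATOMS, bits))) for n in names)
               for bits in product([True, False], repeat=len(ATOMS)))

def goods_and_nogoods(names):
    subsets = [frozenset(s) for k in range(1, len(names) + 1)
               for s in combinations(sorted(names), k)]
    # S2: maximally consistent subsets (no consistent proper superset).
    goods = [s for s in subsets if consistent(s)
             and not any(s < t and consistent(t) for t in subsets)]
    # S1: minimally inconsistent subsets (every proper subset consistent).
    nogoods = [s for s in subsets if not consistent(s)
               and all(consistent(s - {x}) for x in s)]
    return goods, nogoods

goods, nogoods = goods_and_nogoods(FORMULAS)
# goods now holds the three GOODSs labeled 1, 2, and 3 in the text.
```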

FIGURE 7. The second step: The generation of all the maximally consistent subsets of KB (i.e., the
knowledge base), plus the incoming information.

FIGURE 8. The complete process of belief revision.

Dempster-Shafer is resorted to in order to select the new preferred
cognitive state; this consists of an assignment of degrees of credibility to
the three competing GOODSs. Dempster-Shafer takes as input values of a
priori source reliability (we'll come back to that: this degree being set a priori
is possibly a limitation) and translates them into a ranking in terms of
credibility of the items of information given by those sources. Dempster-Shafer
could instead directly weigh the three GOODSs, whereas (as said)
we make it weigh the formulae instead. This choice stems from Dragoni's
view that the behavior of Dempster-Shafer is unsatisfactory when

FIGURE 9. The role of the Dempster-Shafer Theory of Evidence.

evaluating the GOOD in its entirety. (In fact, since a GOOD corresponds to a
formula, namely the conjunction of its members, Dempster-Shafer could
conceivably assign a weight to it directly.)
Next, from the ranking of credibility on the individual formulae, we can
obtain (by means of algorithms not discussed here) a ranking of preferences
on the GOODSs themselves. In the example, the highest ranking is for the
GOOD with a, b, and v. (Thus, provisionally discarding the contribution
of source T, which here was said to be ‘‘common sense.’’) Nevertheless,
our system generates a different output. The output actually generated by
the system obtains downstream of a recalculation of source reliability,
achieved by trivially applying the Bayes theorem. In our example, it can be
seen that it is source T (‘‘common sense’’) which is most penalized by the
contradiction that occurred. Thus, in the output B′, the rule which precludes b
was replaced with b itself. Note that the selection of B′ is merely a suggestion:
the user of the system could make different choices, by suitably activating
search functions, or by modifying the reliability values of the sources.
Once the next item of information arrives, everything is triggered anew from
the start, but with a new knowledge base, namely the old knowledge
base revised with that information. It is important to realize that the new

knowledge base is not to be confused with B′. Therefore, any information
provisionally discarded is recoverable later on, and if it is indeed recovered,
this is owing to the maximal consistency of the GOODSs.
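The recalculation of source reliability mentioned above can be sketched with a toy binary model; this is not the authors' exact formula, and both likelihoods below are invented for the illustration:

```python
# A minimal sketch of recalculating a source's reliability after a
# contradiction, via Bayes' theorem. Assumed (hypothetical) model: a
# reliable source is rarely caught in a contradiction with the other sources.
def updated_reliability(prior, p_contra_if_reliable=0.1, p_contra_if_unreliable=0.6):
    """Return P(reliable | a contradiction involving this source was observed)."""
    num = p_contra_if_reliable * prior
    den = num + p_contra_if_unreliable * (1.0 - prior)
    return num / den

# Source T ("common sense") is implicated in the NOGOOD, so its reliability
# drops below its prior; sources uninvolved in the contradiction keep theirs.
print(updated_reliability(0.8))
```

With these illustrative likelihoods, a prior of 0.8 falls to 0.4 after one observed contradiction, which mirrors the penalization of source T in the example.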
The Dempster-Shafer Theory of Evidence is a simple and intuitive way to
transfer the sources’ reliability to the information’s credibility, and to
combine the evidence from multiple sources.
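This reliability-to-credibility translation can be sketched with simple support functions: each source puts mass equal to its a priori reliability on what it asserts, and the remainder on the whole frame (ignorance). The two-element frame and the reliability values below are illustrative, not taken from the paper:

```python
# Dempster's rule of combination on the two-element frame {b, not_b}.
FRAME = frozenset({"b", "not_b"})

def simple_support(asserted, reliability):
    """Mass `reliability` on the asserted proposition, the rest on the frame."""
    return {frozenset({asserted}): reliability, FRAME: 1.0 - reliability}

def combine(m1, m2):
    """Dempster's rule: intersect focal elements, renormalize away conflict."""
    out, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                out[inter] = out.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {s: v / (1.0 - conflict) for s, v in out.items()}

m_U = simple_support("b", 0.9)       # source U asserts b
m_T = simple_support("not_b", 0.6)   # "common sense" rule supports not b
m = combine(m_U, m_T)
print(m[frozenset({"b"})], m[frozenset({"not_b"})])
```

The normalization step, which silently discards the conflict mass (here 0.54), is one reason the behavior of Dempster-Shafer on heavily conflicting testimony can be debated.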
Notwithstanding these advantages, there are shortcomings, including
the requirement that the degrees of reliability of the sources be established
a priori, the computational complexity, and disadvantages stemming
from epistemological considerations in legal theory. At any rate, the
adoption of Dempster-Shafer in the present framework is a choice that could
perhaps be called into question. A refinement is called for because, as it
stands, the system requires (as said) associating with the sources an a priori
degree of reliability and, moreover, exact (as opposed to approximate)
application of Dempster-Shafer is computationally very expensive.

CONCLUDING REMARKS
This paper introduced what is (relatively) a fairly powerful meter-based
formalism for capturing the process of juror decision making. It compares
favorably with other approaches described in the literature of jury research,
and it is especially in this regard, namely in salvaging from the tradition of
meter-based models an ongoing contribution to the field, that the present
work is intended. This does not amount, however, to saying that the present
approach is free of problems. (Much less, that it is more than a building
block: the ambition of greater completeness would call for an inference engine
operating on a narrative representation.)
A few remarks of evaluation follow, first about consistency. Of course,
we want to enforce or restore consistency: judiciary acts cannot stem from
an inconsistent set of hypotheses. Yet, we want to avoid unduly dismissing
any possibility altogether. Therefore we weigh against one another all the
GOODSs that obtain from the (globally inconsistent) set of information items
provided by the various sources involved. Sometimes the same source
may be found in contradiction, or provide inconsistent information (self-
inconsistency). In 1981, Marvin Minsky stated: ‘‘I do not believe that consist-
ency is necessary or even desirable in developing an intelligent system.’’
‘‘What is important is how one handles paradoxes or conflicts.’’ Enforcing
consistency produces limitations: ‘‘I doubt the feasibility of representing or-
dinary knowledge in the form of many small independently true propositions
(context-free truths).’’ In our own approach, we have a single, global,
never-forgetting, inconsistent knowledge background, upon which many
specific, competitive, ever-changing, consistent cognitive contexts act.

Epistemologist Laurence BonJour (1998, originally 1985), while
introducing the first of his five conditions for coherence,6 namely ‘‘A system
of beliefs is coherent only if it is logically consistent’’
( p. 217), remarked in his note 7: ‘‘It may be questioned whether it is not an over-
simplification to make logical consistency in this way an absolutely necessary
condition for coherence. In particular, some proponents of relevance logics
may want to argue that in some cases a system of beliefs which was suffi-
ciently rich and complex but which contained some trivial inconsistency
might be preferable to a much less rich system which was totally consistent
. . .’’ (BonJour, ibid., p. 230). Relevance logics in this context are not
necessarily very relevant to the concept of ‘‘relevance’’ in the parlance of legal
evidence scholarship (in which sense, the term has to do with the admissibility
of evidence as a matter of policy).
Among the requirements or desiderata for a distributed belief revision
framework, we listed the desirability of the mechanism also displaying sensi-
tivity to the syntax. Consider Figure 10: Is it really necessary to consider
the set of propositions in the circle on the left side equivalent to what is found
in the circle on the right? Are redundancies of no value at all? Moreover, is
even a local, peripheral inconsistency enough to invariably ditch a witness
statement?
The discovery of a pair of items of information in contradiction inside a
rich-textured, articulate witness statement should not necessarily invalidate
the informational content of the entire deposition. That is to say, a set of
propositions is not equivalent to its logical conjunction. This is a critique of the so-called
‘‘Dalal’s Principle’’ (satisfied by AGM, as per Williams), by which two
logically equivalent bodies of information should produce exactly the same revision.
Arguably, Dalal’s Principle is unworkable in practice because it takes cogni-
tive arbitrariness to delimit the set. How fine-grained are the details to be?
What about cross-examination tactics?
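The point can be made concrete with a small, entirely hypothetical propositional sketch: revising a set of three items against contradicting information can preserve two of them, whereas revising their single, logically equivalent conjunction loses everything:

```python
from itertools import combinations, product

# Atoms p, q, r; a formula is a function from a truth assignment to a bool.
def sat(fs):
    """True iff some assignment over p, q, r satisfies every formula in fs."""
    return any(all(f(dict(zip("pqr", vs))) for f in fs)
               for vs in product([False, True], repeat=3))

p = lambda m: m["p"]
q = lambda m: m["q"]
r = lambda m: m["r"]
not_p = lambda m: not m["p"]
conj = lambda m: m["p"] and m["q"] and m["r"]   # the single conjunction

def maximal_survivors(old, incoming):
    """Largest subsets (by index) of `old` that are consistent with `incoming`."""
    for k in range(len(old), -1, -1):
        hits = [set(c) for c in combinations(range(len(old)), k)
                if sat([old[i] for i in c] + [incoming])]
        if hits:
            return hits
    return []

# Set representation: only p is discarded; q and r (indices 1, 2) survive.
print(maximal_survivors([p, q, r], not_p))
# Conjunctive representation: the whole "deposition" is lost.
print(maximal_survivors([conj], not_p))
```

Both representations are logically equivalent, yet a syntax-sensitive revision treats them differently, which is exactly the behavior the critique of Dalal's Principle argues a model of testimony needs.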
Desiderata include criteria for deciding about inconsistency
containedness. How local is it to be? When can we just cut off the
NOGOODSs and retain the rest? Within the architecture described earlier
in this paper, this would belong in the phases of recalculation of source

FIGURE 10.

reliability and information credibility. One more problem is confabulation in
depositions. Whereas our present framework is too abstract to take narrative
aspects into account, arguably our system could be a building block in an
architecture with a complementary component to deal with narratives. In
particular, a witness who reconstructs by inference instead of just describing
what was witnessed is confabulating; this is precisely what traditional AI sys-
tems from the 1980s, whose function was to answer questions about an input
narrative text, do when information does not explicitly appear in the text
they analyze. Within the compass of this paper, we cannot address these
issues. Nevertheless, we claim that the formal framework described is as
good as other meter-based formal approaches to modeling juror decision
making, to the extent that such models do not explicitly handle narrative
structure.
Yet a major problem stemming from the adoption of Dempster-Shafer
is that it is apparently tilted towards verificationism instead of falsification-
ism. Take the case of a terrorist or organized crime ‘‘supergrass’’ informing
about accomplices and testifying in court. In Italy, such ‘‘pentiti’’ or
‘‘superpentiti’’ are not considered to be reliable until further proof is
obtained; a supergrass’s reliability is taken to be the greater, the more
the deposition matches further evidence. A shortcoming of this is that part
of the deposition may be false and yet unaffected by such further proof.
Dempster-Shafer, as described in the framework of the architecture
introduced in this paper, can be tricked into unduly increasing the reliability
of such an untruthful witness. Dempster-Shafer also tends to
believe information from a source until contrary
evidence is obtained. Such epistemological considerations affect not only
formal representations; they also affect the way, for example, the mass
media may convey a criminal case or the proceedings in court. They may also
affect what justice itself makes of witness statements made by children
(i.e., child testimony). None of these issues is addressed in the formalism
presented.
The multi-agent approach described is appropriate when a flow of new
items of information arrives from several sources, and each information/
source pair has an unknown credibility degree. This befits the gradual
delivery of the evidence in court, when a juror’s opinion (or the opinion of
the judges in a bench trial) is shaped about evidentiary strength. A formalism
to deal with evidentiary strength has been presented in Shimony and Nissan
(2001).
The results of a previous stage in this research were presented by us in
Dragoni et al. (2001). The general approach, not specifically concerned with
legal matters, was developed as a formalism for belief revision. Previous
stages are represented by Dragoni (1992; 1997) and Dragoni and Giorgini
(1997a; 1997b).

NOTES
1. There is a more general consideration to be made about attitudes toward
Bayesianism. In the literature of epistemology, objections and counter-
objections have been expressed concerning the adequacy of Bayesianism.
One well-known critic is Alvin Plantinga (1993a, Chap. 7; 1993b, Chap.
8). In a textbook, philosopher Adam Morton (2003, Chap. 10) gave these
headings to the main objections generally made by some epistemologists:
‘‘Beliefs cannot be measured in numbers,’’ ‘‘Conditionalization gives the
wrong answers,’’ ‘‘Bayesianism does not define the strength of evidence,’’
and, most seriously, ‘‘Bayesianism needs a fixed body of propositions’’
(ibid., pp. 158–159). One of the Bayesian responses to the latter objection
about ‘‘the difficulty of knowing what probabilities to give novel proposi-
tions’’ (ibid., p. 160), ‘‘is to argue that we can rationally give a completely
novel proposition any probability we like. Some probabilities may be more
convenient or more normal, but if the proposition is really novel, then no
probability is forbidden. Then we can consider evidence and use it, via
Bayes’ theorem, to change these probabilities. Given enough evidence,
many differences in the probabilities that are first assigned will disappear,
as the evidence forces them to a common value’’ (ibid.). For specific objec-
tions to Bayesian models of judicial decision making, the reader is urged to
see the ones made in Ron Allen’s lead article in Allen and Redmayne
(1997).
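The quoted convergence claim is easy to check numerically; the priors and the likelihoods below are invented purely for the illustration:

```python
# Two agents start from sharply different priors on a hypothesis H, observe
# the same stream of H-favouring evidence, and conditionalize after each item.
def posterior(prior, p_e_if_h=0.8, p_e_if_not_h=0.3):
    """One step of Bayesian conditionalization on an observed piece of evidence."""
    num = p_e_if_h * prior
    return num / (num + p_e_if_not_h * (1.0 - prior))

a, b = 0.05, 0.95          # very different initial probabilities for H
for _ in range(20):        # twenty pieces of evidence favouring H
    a, b = posterior(a), posterior(b)
print(round(a, 6), round(b, 6))
```

After twenty observations the two posteriors agree to well within a thousandth, which is the sense in which "the evidence forces them to a common value."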
2. Consider parties A and B at a bench trial. A’s witnesses have finished giving
evidence, and now a witness for B is being examined or cross-examined,
and a witness for A (e.g., the plaintiff himself ) clings to the hope that
the judge will remember to refer to the notes he had been scribbling when
A was giving evidence, as an item said by A contradicts what B is saying
now. That there is such a contradiction will affect credibility: what A said
before will be less likely to be accepted than if there were no
adverse evidence from B. Yet, our dropping the principle of ‘‘priority to
the incoming information’’ in a multi-source environment should corre-
spond to the practical rule that just because B is giving evidence after A,
this by itself is not supposed to make B more credible than A.
3. The Anglo-American adversarial judicial system, in which the two parties
in a trial are rather symmetrical, provides a convenient example for our
purposes, as the witnesses for the two parties constitute two sets of ‘‘sources’’,
from which items of information flow. Yet, sources in a trial need not be
so ‘‘similar’’. Consider ancient canon law. Rumors, as a source considered
public, may have tainted a cleric, who wished to clear himself. He then
brought witnesses in his defense. Each such witness was called a
compurgator. These compurgatores and the rumors are sources of a different
nature. The rumors themselves are items of information, and the source

was not embodied in an individual (or in, say, a newspaper), but generi-
cally in a subset of the public.
4. Here is a commonsensical example of the interplay of source and context
in making sense (in good faith or as a posture) of an item of information.
Bona fide misinterpretation in context is exemplified by a situation in which I
(Nissan), from a bus, was watching a poster behind a glass window; it
showed the photograph of the head of a man, his face very contracted,
and his mouth as wide open as he apparently could. Something round
and white, which I eventually realized was the head of a pin keeping the
poster in place, happened to be positioned on the man’s tongue. I may be for-
given for thinking on the spur of the moment that some pill was being
advertised, which was not the case. Besides, interpretation activity is not
always done in a truth-seeking mood. On the flank of the bus by which
I was commuting this morning, there was the following ad. (Because of
pragmatic knowledge about posters on the flank of a bus being ads, it must
have been an ad.) The poster was showing the face of a celebrity and read:
‘‘Jennifer Lopez is in. . .’’ It included no other inscription, apparently to
build up expectation for a sequel poster. Moreover, the palms of the hands
of the woman in the picture were raised in the forefront—for example, as
though she was leaning on glass (which was not the case). I only noticed
that poster once the bus had stopped in front of a funeral parlor, whose
display window was exhibiting an array of tombstones and similar niceties
for people to order. The image (and inscription) from the ad on the bus
flank was mirrored on the display window. My expectations, as a
pragmatically competent viewer, of what the advertisers would or wouldn’t
expect, prevent good faith (and make this rather a posture) if I were
to interpret the juxtaposed visual information (the facial image of the cel-
ebrity along with the textual assertion from the poster, and the sample
tombstones) as though this meant: ‘‘Celebrity So and So is inside the
shop,’’ or even ‘‘inside one of these lovely displays.’’ Surely the marketing
people didn’t foresee that this would be the situation of viewing the poster,
and I am supposed to know this was unexpected. If I am to adopt a
hermeneutic option that (mis)construes the advertiser’s given document (the
exemplar of the ad) as in the given, singular, quite unflattering circum-
stances, in such a way that the advertisement’s message backfires (i.e.,
grossly thwarts the intended value of the image), this is an appropriation
of the ‘‘text’’ (as broadly meant) for communicative purposes that, according
to a situational context, may or may not in turn be legitimate. Appropriation-
cum-‘‘bending,’’ however, is common in human communication. Some
textual genres practice appropriation quite overtly, prominently, and
organically; see, for example, Nissan et al. (1997). More generally, on
quotations (ironic and otherwise), see Kotthoff (1998). On intertextuality,
see Genette (1979). Nissan (2002b) applies it in the COLUMBUS model.

5. Bernard Jackson (1988a, p. 88) distinguishes the ‘‘story of the trial’’ as
against the ‘‘story in the trial.’’ As he worded it in Jackson (1998,
p. 263), he argues ‘‘that the ‘story in the trial’ (e.g., the murder of which
the defendant is accused) is mediated through the ‘story of the trial’ (that
collection of narrative encounters manifest in the courtroom process
itself)’’. See on this Jackson (1988a, p. 8ff; 1995, p. 160 and Chap. 10–12
passim). Jackson refers to the narrative about which the court is called
to decide, by the term semantics of the legal narrative. By pragmatics
he refers to the delivery about it in court. Moreover, the semantics and
pragmatics must be distinguished as well, e.g., in the process of collective
deliberation among jurors. ‘‘Each juror must make sense not only of what
has been perceived in court, but also what is perceived in the behaviour of
fellow jurors: not only what they say (The story in jury speech: the semantic
level) but also how they behave in saying it (the story of jury speech:
the pragmatic level)’’ (Jackson 1996, p.43). By that same author, also
see Jackson (1985, 1988b, 1990, 1994).
6. Distinguish between factual truth and legal truth, which is the one relevant
in judicial contexts—this is because of the conventions of how proof is
constructed and which kinds of evidence are admissible or inadmissible.
As to philosophy, of course coherence theory is just one out of various
views of truth current in epistemology (for all of Alcoff’s remark [1998,
p. 309]: ‘‘It may seem odd, but for most epistemologists today truth is a
non-issue’’). Paul Horwich, who advocates a deflationary ‘‘minimal
theory’’ of truth (itself evolved from the correspondence theory of truth),
having briefly introduced coherence theory, writes: ‘‘What has seemed
wrong with this point of view is its refusal to endorse an apparently central
feature of our conception of truth, namely the possibility of there being
some discrepancy between what really is true and what we will (or should,
given all possible evidence) believe to be true’’ (Horwich 1998, pp.
316–317). Moreover, as pointed out above, factual truth and believable
truth are themselves distinct from the legal truth, which is what the court
will find once the evidence is presented, which itself is subjected to the rules
of evidence, as well as to the contingencies of how (if it is a criminal case)
the police inquiry was carried out and what it found.
Or, if it is, e.g., a case brought before an employment tribunal, it depends
on how the solicitors for the two parties selected the documentary evidence,
how the barristers directly examine and cross-examine, how the witnesses
cope with this, and whether the applicant (i.e., the plaintiff) and his witnesses
have said enough to counteract beforehand whatever the witnesses for
the respondent (the employer) may say orally at a time when the applicant
can no longer speak; so that his barrister (possibly unaware of some facts,
but made aware of what the appropriate response would have been, only after
the given witness for the respondent has already finished and can no longer
be engaged in questions) may hope that the information new to him will
be useful while cross-examining the next witness(es) of the employer (if any
is/are left), in order to induce contradictions in the defense.
Legal debate can cope with different philosophical approaches to
knowledge. William Twining, the London-based legal theorist, in a paper
(Twining 1999) which explores issues in the Anglo-American evidence
theory, has remarked: ‘‘The tradition is both tough and flexible in that
it has accommodated a variety of perspectives and values and has usually
avoided making extravagant claims: in the legal context one is concerned
with probabilities not certainties; with ‘soft’ rationality or informal logic
rather than closed system logic; with rational support rather than demon-
stration; and with reasonably warranted judgments rather than perfect
knowledge. It is generally recognized that the pursuit of truth in adjudi-
cation is an important, but not an absolute social value, which may be
overridden by competing values such as ‘preponderant vexation, expense
or delay’. . . . Some premises of the Rationalist Tradition have been subject
to sceptical attack from the outside. But it has been a sufficiently broad
church to assimilate or co-opt most apparent external sceptics. Similarly
while most Anglo-American evidence scholars have espoused or assumed
what looks like a correspondence theory of truth, there is no reason why a
coherence theory of truth cannot be accommodated, if indeed there is any
distinction of substance [p. 71] between the theories’’ (ibid., pp. 70–71).
An example of a coherentist among legal evidence theorists is Bernard
Jackson, also in England (Twining actually points out that much, ibid.,
p. 71, note 9). See Jackson (1988a).

REFERENCES
Alchourrón, C. E., P. Gärdenfors, and D. Makinson. 1985. On the logic of theory change: Partial meet
contraction and revision functions. The Journal of Symbolic Logic 50:510–530.
Alcoff, L. M. 1998. Introduction to part five: What is truth? In Epistemology: The Big Questions, ed.
L. M. Alcoff, 309–310. Oxford: Blackwell.
Benferhat, S., D. Dubois, and H. Prade. 2001. A computational model for belief change. In Frontiers in
Belief Revision (Applied Logic Series, 22), eds. M. A. Williams and H. Rott, pp. 109–134. Dordrecht:
Kluwer.
BonJour, L. 1998. The elements of coherentism. In Epistemology: The Big Questions, ed. L. M. Alcoff, pp.
210–231. Oxford: Blackwell. (Page numbers are referred to as in Alcoff.) Originally in The Structure of
Empirical Knowledge, 87–110. Cambridge: Harvard University Press.
Cabras, C. 1996. Un mostro di carta. In Psicologia della prova, ed. C. Cabras, pp. 233–258. Milano: Giuffrè.
De Kleer, J. 1986. An assumption-based truth maintenance system. Artificial Intelligence 28:127–162.
Doyle, J. 1979. A truth maintenance system. Artificial Intelligence 12(3):231–272.
Dragoni, A. F. 1992. A model for belief revision in a multi-agent environment. In Decentralized A.I. 3, eds.
E. Werner and Y. Demazeau, pp. 103–112. Amsterdam: North-Holland Elsevier Science.
Dragoni, A. F., P. Mascaretti, and P. Puliti. 1995. A generalized approach to consistency-based belief
revision. In Topics in Artificial Intelligence: Proceedings of the 4th Conference of the Italian Association
for Artificial Intelligence (LNAI 992), eds. M. Gori and G. Soda, pp. 231–236. Springer-Verlag.
Dragoni, A. F. 1997. Belief revision: From theory to practice. The Knowledge Engineering Review
12(2):147–179.
Dragoni, A. F., and P. Giorgini. 1997a. Distributed knowledge revision-integration. In Proceedings of the
Sixth ACM International Conference on Information Technology and Management, pp. 121–127.
New York: ACM Press.
Dragoni, A. F., and P. Giorgini. 1997b. Belief revision through the belief function formalism in a multi-
agent environment. In Intelligent Agents III (Lecture Notes in Computer Science, 1193), eds.
M. Wooldridge, N. R. Jennings, and J. Muller. Heidelberg: Springer-Verlag.
Dragoni, A. F., P. Giorgini, and E. Nissan. 2000. Distributed belief revision as applied within a descriptive
model of jury deliberations. In Preproceedings of the AISB 2000 Symposium on AI and Legal
Reasoning, April 17, 2000, Birmingham, pp. 55–63. Reprinted in Information and Communications
Technology Law 10(1):53–65, 2001.
Fakher-Eldeen, F., T. Kuflik, E. Nissan, G. Puni, R. Salfati, Y. Shaul, and A. Spanioli. 1993. Inter-
pretation of imputed behavior in ALIBI (1 to 3) and SKILL. Informatica e Diritto, 2nd series,
2(1/2):213–242.
Fermé, E. 1998. On the logic of theory change: Contraction without recovery. Journal of Logic, Language
and Information 7:127–137.
Gaines, D. M. 1994. Juror Simulation. BSc Project Report No. CS-DCB-9320, Computer Science Dept.,
Worcester Polytechnic Institute.
Gaines, D. M., D. C. Brown, and J. K. Doyle. 1996. A computer simulation model of juror decision
making. Expert Systems With Applications 11:13–28.
Geiger, A., E. Nissan, and A. Stollman. 2001. The Jama legal narrative. Part I: The JAMA model and
narrative interpretation patterns. Information and Communications Technology Law 10(1):21–37.
Genette, G. 1979. Introduction à l’architexte. Paris: Seuil. English: The Architext: An Introduction (trans.
J. E. Lewin). Berkeley: University of California Press, 1992.
Hansson, S. O. 1999. Recovery and epistemic residue. Journal of Logic, Language and Information 8(4):
421–428.
Harper, W. L. 1976. Ramsey test conditionals and iterated belief change. In Foundations of Probability
Theory, Statistical Inference, and Statistical Theories of Sciences, vol. 1, eds. W. L. Harper and
C. A. Hooker, 117–135. Norwell, MA: D. Reidel.
Harper, W. L. 1977. Rational conceptual change. In PSA 1976, Vol. 2. East Lansing, Michigan.
Hastie, R., ed. 1993. Inside the Juror: The Psychology of Juror Decision Making (Cambridge Series on
Judgment and Decision Making). Cambridge, UK: Cambridge University Press.
Hastie, R. 1994. Introduction.
Hastie, R., S. D. Penrod, and N. Pennington. 1983. Inside the Jury. Cambridge, MA: Harvard University
Press.
Horwich, P. 1990. The minimal theory. In Epistemology: The Big Questions, ed. L. M. Alcoff, 311–321.
Oxford: Blackwell. (Page numbers are referred to as in Alcoff.) Originally in Truth, by P. Horwich,
1–14. Oxford: Blackwell, 1990.
Jackson, B. S. 1985. Semiotics and Legal Theory. London: Routledge & Kegan Paul.
Jackson, B. S. 1988a. Law, Fact and Narrative Coherence. Merseyside: Deborah Charles Publications.
Jackson, B. S. 1988b. Narrative models in legal proof. International Journal for the Semiotics of Law 1:
225–246.
Jackson, B. S. 1990. Narrative theories and legal discourse. In Narrative in Culture: The Uses of Story-
telling in the Sciences, Philosophy and Literature, ed. C. Nash, 23–50. London: Routledge.
Jackson, B. S. 1994. Towards a semiotic model of professional practice, with some narrative reflections on
the criminal process. International Journal of the Legal Profession 1:55–79.
Jackson, B. S. 1995. Making Sense in Law. Liverpool: Deborah Charles Publications.
Jackson, B. S. 1996. ‘‘Anchored narratives’’ and the interface of law, psychology and semiotics. Legal and
Criminological Psychology 1(1):17–45.
Jackson, B. S. 1998. Bentham, truth and the semiotics of law. In Legal Theory at the End of the
Millennium (Current Legal Problems 1998, Vol. 51), ed. M. D. A. Freeman, pp. 493–531. Oxford:
Oxford University Press.
Katsuno, H., and A. O. Mendelzon. 1991. On the difference between updating a knowledge base and revis-
ing it. In Proceedings of the 2nd International Conference on Principles of Knowledge Representation
and Reasoning, eds. J. Allen, R. Fikes, and E. Sandewall, pp. 389–394. Morgan Kaufmann.
Kotthoff, H. 1998. Irony, quotation, and other forms of staged intertextuality: Double or contrastive
perspectivation in conversation. In Perspectivity in Discourse, eds. C. F. Graumann and W. Kall-
meyer. Amsterdam: Benjamins. Also: http://ling.uni-konstanz.de/pp/home/kotthoff/Seiten/
ironyframe.html
Kuflik, T., E. Nissan, and G. Puni. 1991. Finding excuses with ALIBI: Alternative plans that are deonti-
cally more defensible. Computers and Artificial Intelligence 10(4):297–325.
Levi, I. 1977. Subjunctives, dispositions and chances. Synthese 34:423–455.
Levi, I. 1980. The Enterprise of Knowledge. Cambridge, MA: The MIT Press.
Levi, I. 1991. The Fixation of Beliefs and its Undoing. Cambridge, UK: Cambridge University Press.
Lindström, S., and W. Rabinowicz. 1991. Epistemic entrenchment with incomparabilities and relational
belief revision. In The Logic of Theory Change (Journal of Philosophical Logic, 16), eds. Fuhrmann
and Morreau, 93–126. Springer-Verlag.
Makinson, D. 1997. On the force of some apparent counterexamples to recovery. In Normative Systems in
Legal and Moral Theory: Festschrift for Carlos Alchourrón and Eugenio Bulygin, eds. E. Garzón
Valdés et al., 475–481. Berlin: Duncker & Humblot.
Martins, J. P., and S. C. Shapiro. 1988. A model for belief revision. Artificial Intelligence 35:25–97.
Morton, A. 2003. A Guide through the Theory of Knowledge, 3rd ed. Oxford: Blackwell.
Nayak, A. 1994. Foundational belief change. Journal of Philosophical Logic 23:495–533.
Nissan, E. 2001a. Can you measure circumstantial evidence? The background of probative formalisms for
law. Information and Communications Technology Law 10(2):231–245.
Nissan, E. 2001b. The Jama legal narrative. Part II: A foray into concepts of improbability. Information
and Communications Technology Law 10(1):39–52.
Nissan, E. 2001c. An AI formalism for competing claims of identity: Capturing the ‘‘Smemorato di
Collegno’’ amnesia case. Computing and Informatics 20(6):625–656.
Nissan, E. 2002a. A formalism for misapprehended identities: Taking a leaf out of Pirandello. In Proceed-
ings of the Twentieth Twente Workshop on Language Technology, eds. O. Stock, C. Strapparava, and
A. Nijholt, pp. 113–123, Trento, Italy, April 15–16, 2002. Enschede, The Netherlands: University
of Twente.
Nissan, E. 2002b. The COLUMBUS model (2 parts). International Journal of Computing Anticipatory
Systems 12:105–120 and 121–136.
Nissan, E. 2003. Identification and doing without it, Parts I to IV. Cybernetics & Systems 34(4–5) and
34(6–7):317–380, 467–530.
Nissan, E., and A. F. Dragoni. 2000. Exoneration, and reasoning about it: A quick overview of three
perspectives. In Proceedings of the International ICSC Congress ‘‘Intelligent Systems Applications’’
(ISA’2000), pp. 94–100. Wollongong, Australia, December 2000.
Nissan, E., and D. Rousseau. 1997. Towards AI formalisms for legal evidence. In Foundations of Inte-
lligent Systems: Proceedings of the 10th International Symposium, ISMIS’97, eds. Z. W. Raś and
A. Skowron, pp. 328–337. Springer-Verlag.
Nissan, E., I. Rossler, and H. Weiss. 1997. Hermeneutics, accreting receptions, hypermedia. Journal of
Educational Computing Research 17:297–318.
Pennington, N., and R. Hastie. 1983. Juror decision making models: The generalization gap. Psychological
Bulletin 89:246–287.
Plantinga, A. 1993a. Warrant: The Current Debate. Oxford: Oxford University Press.
Plantinga, A. 1993b. Warrant and Proper Function. Oxford: Oxford University Press.
Shimony, S. E., and E. Nissan. 2001. Kappa calculus and evidentiary strength: A note on Åqvist’s logical
theory of legal evidence. Artificial Intelligence and Law 9(2–3):153–163.
Tillers, P., and E. Green, eds. 1998. Probability and Inference in the Law of Evidence: The Uses and Limits
of Bayesianism. Boston and Dordrecht: Kluwer.
Twining, W. 1997. Freedom of proof and the reform of criminal evidence. Israel Law Review 31(1–3):
439–463.
Twining, W. 1999. Necessary but dangerous? Generalizations and narrative in argumentation about
‘‘facts’’ in criminal process. In Complex Cases: Perspectives on the Netherlands Criminal Justice
System, eds. M. Malsch and J. F. Nijboer, 69–98. Amsterdam: Thela Thesis.
Williams, M. A., and H. Rott, eds. 2001. Frontiers in Belief Revision (Applied Logic Series, 22).
Dordrecht, The Netherlands: Kluwer Academic Publishers.
