

Department of Humanities
Institute of Cognitive Science

Bachelor Thesis

Applications of Bayesian Networks In Legal Reasoning

by
Inga Catharina Ibs
(iibs@uni-osnabrueck.de)

October 10, 2016

First Supervisor: Prof. Dr. Frank Jäkel
Second Supervisor: Dr. Axel Kohler
Acknowledgements

I would like to thank my supervisor, Prof. Dr. Frank Jäkel, for the constructive and
motivating discussions.

I would also like to thank Rasmus Diederichsen in particular for proofreading and
discussing this thesis, for support with the graphic design, and for his general support.

Additionally, I would like to thank Thea Behrens, Tabea Strunk and Patrick Faion who
proofread parts of my thesis.
Abstract

Reasoning in criminal trials is mainly concerned with reasoning over evidence. Legal
experts, like lawyers and judges, use legal reasoning methods such as arguments and
narratives, since these methods are accessible for juries and other entities in court, and
suitable for reasoning over a whole case. If statistical evidence is involved, forensic
experts use probabilistic reasoning to analyse the evidence and to testify about the
results in court. Bayesian networks are a common tool for this kind of analysis because
multiple factors influencing the evidential value of the evidence can be related in the
network. Bayesian networks can accommodate the complexity of evidence, arising for
example from low-quality DNA samples. Translating the results of the probabilistic
analysis of evidence into legal reasoning often poses a problem for legal experts, who are
not trained in statistical analysis. As a consequence, misinterpretation of this kind of
evidence occurs. In order to prevent erroneous reasoning under uncertainty, the integration
of legal reasoning methods into probabilistic reasoning is being researched. Bayesian networks
enable a visualization of the dependencies between hypotheses and evidence and systematic
updating of beliefs. They are therefore an appropriate method for integrating legal
reasoning into probabilistic reasoning. One area of related research is concerned with the modelling of
full criminal cases using Bayesian networks. The design of these networks is based on
legal reasoning concepts. Inference from these networks aims to be accessible for legal
experts and juries. This thesis presents a review of different approaches to modelling
full criminal cases using Bayesian networks. The approaches are elaborated and tested
on a fictional example. The results of the review are then discussed with regard to the
methods’ applicability in court.
Contents

1 Introduction
2 Bayesian Networks
2.1 Flow of Influence
2.2 Probability Elicitation
2.3 Inference
3 Legal Reasoning with Bayesian Networks
3.1 Arguments and Narratives
3.1.1 Arguments
3.1.2 Narratives
3.2 The Fictional Case of Lisa and Kai
3.3 Arguments and Bayesian Networks
3.3.1 Idioms-based Approach
3.3.2 Inference
3.3.3 Support Graphs as Explanation of Bayesian Networks
3.4 Narratives and Bayesian Networks
3.4.1 Structuring Bayesian Networks using Scenarios
3.4.2 Quality of Scenarios
4 Discussion
5 Appendix
5.1 Probabilities for the Idiom-based BN from Figure 7
5.2 Probabilities for the BN Based on Scenarios from Figure 11
References
List of Figures
List of Tables

1 Introduction
Lawyers, judges and juries in criminal trials are tasked to decide on the conviction of
the defendant. The fundamental issue in that task is that usually only circumstantial
evidence for a crime is known if no direct confession from the defendant is available. It
is necessary for the lawyers and judges to use legal reasoning over the existing evidence
in order to arrive at a decision, which has lasting effects on the life of a defendant
and society. Since human reasoning is prone to fallacies and manipulation, lawyers
and judges undergo intensive training in argumentative reasoning and reasoning over
narratives. In the last three decades, forensic scientists have developed methods to
analyse crime scene elements – e.g. in the form of fingerprint matching and DNA analysis.
For this reason, lawyers and judges are facing an increasing amount of statistical
evidence. Most of the methods for obtaining forensic evidence are afflicted with a
certain amount of uncertainty. Forensic experts are usually trained in probabilistic
reasoning addressing these uncertainties. Most legal experts, on the other
hand, are not schooled in this kind of reasoning and are prone to probabilistic reasoning errors
as elaborated in related scientific literature (see Fenton and Neil, 2011; Gigerenzer,
2002; Kahneman, 2011).
The Sally Clark trial from 1999 is an example of how probabilistic reasoning errors
can lead to miscarriage of justice (Nobles and Schiff, 2005). Sally Clark was a solicitor
who was convicted for the murder of two of her sons. The children died within two years
of each other and no certain cause of death could be determined. Usually the condition
of “sudden infant death syndrome” is registered in such cases as cause of death. The
prosecution hypothesis however relied heavily on the expert judgement given by the
paediatrician Professor Roy Meadow. Meadow testified that the chances of two deaths of
children in the same family resulting from sudden infant death syndrome were 1 in 73
million.
Two kinds of errors related to this expert statement occurred during the trial.
Ironically, one of them is a probabilistic reasoning error made by the expert, who should
have been trained in this matter. Meadow referred to a statistic for sudden infant death
syndrome which stated the chances of occurrence as 1 in 8543. He then argued that the
probability of two sudden infant death syndrome cases in one family has to be 1 in 8543
times 1 in 8543, resulting in a probability of 1 in 73 million (Nobles and Schiff, 2005).
As a letter from the president of the Royal Statistical Society pointed out in 2001, a
fundamental error was made (The Royal Statistical Society, 2001): the calculation assumes
the two deaths to be independent, in a case where there is a high chance that they are
related. There may have been family diseases which were not recognized; hence, disregarding
a potential predisposition from family conditions is a gross oversimplification.
This error was also pointed out to the jury by a defence expert days later. Still, the
impact of Meadow’s conclusion was high.
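The flawed independence assumption can be made concrete with a short calculation. The tenfold dependence factor below is purely illustrative, not an empirical estimate from the case:

```python
# Meadow's figure treats the two deaths as independent events:
p_single = 1 / 8543              # quoted rate of a single SIDS death
p_independent = p_single ** 2    # (1/8543)^2
print(round(1 / p_independent))  # 72982849, i.e. roughly 1 in 73 million

# If a shared, unrecognized family condition makes a second death more
# likely given a first one, the joint probability is
# P(first) * P(second | first). With a purely hypothetical tenfold
# increase of the conditional rate, the odds shrink dramatically:
p_dependent = p_single * (10 / 8543)
print(round(1 / p_dependent))    # roughly 1 in 7.3 million
```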
The second reasoning error made in the trial was a classical instance of the prosecu-
tor’s fallacy. The prosecutor’s fallacy exaggerates the case for the prosecution. In this
case it was the wrongful suggestion to the jury that the chances of 1 in 73 million are
actually the chances that Sally Clark is innocent. So the error is the confusion of the
probability of the two deaths occurring given that she was innocent with the probability
of Sally Clark being innocent given the evidence. By equating these probabilities, the
prior odds of Sally Clark committing a murder are neglected, which overestimates the
probability of innocence.
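The distinction can be illustrated with a back-of-the-envelope Bayesian calculation. The prior used below is a purely hypothetical assumption chosen for illustration, not a figure from the trial:

```python
# Prosecutor's fallacy: P(evidence | innocent) is NOT P(innocent | evidence).
p_e_given_innocent = 1 / 73_000_000  # Meadow's (already flawed) figure
p_e_given_guilty = 1.0               # the deaths certainly occurred if murdered
p_guilty = 1 / 1_000_000             # hypothetical prior for a double murder
p_innocent = 1 - p_guilty

# Bayes' theorem for P(innocent | evidence):
posterior_innocent = (p_e_given_innocent * p_innocent) / (
    p_e_given_innocent * p_innocent + p_e_given_guilty * p_guilty
)
print(round(posterior_innocent, 4))  # 0.0135 - about 1 in 74, not 1 in 73 million
```

Even granting Meadow's figure, the probability of innocence given the evidence differs from the probability of the evidence given innocence by several orders of magnitude once the prior odds are taken into account.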
Both probabilistic reasoning errors influenced the jury’s decision which was therefore
made under wrong assumptions. The conviction of Sally Clark was overturned in the
second appeal in 2003, with the explicit reference that the prosecutor’s fallacy might
actually have affected the jury's reasoning, regardless of the judge's efforts in the original
trial to point out that the evidence’s impact should not be overstated. This trial is
an illustration of the severe consequences erroneous probabilistic reasoning can have
in court. Since this case is not an isolated incident (see Meester et al., 2007) the use
of statistical evidence is a controversial topic in legal discussions. In the letter of the
Royal Statistical Society regarding the Sally Clark case it is argued that these cases
of miscarriage of justice can be avoided by appropriate presentation of the evidence
(The Royal Statistical Society, 2001). Thus, a sufficiently qualified expert could prevent
these fallacies. Nobles and Schiff (2005) argue for removing cases that involve statistical
evidence susceptible to misinterpretation from jury judgement, since the judge in the
Sally Clark trial made the impending fallacy explicit and the jury was still prone to the
reasoning error.
In the case of R v T 2009, the English Court of Appeal ruled that the use of formulas
for the calculation of probabilities be prohibited in court in areas in which the data for
the calculations is often insufficient – such as footwear mark evidence. One
of these formulas is Bayes’ theorem which is central to probabilistic reasoning (see
Verheij et al., 2016; Fenton and Neil, 2011). Since Bayesian analysis is often regarded
as a key approach for preventing probabilistic fallacies like the prosecutor’s fallacy (see
Fenton and Neil, 2011; Hahn and Oaksford, 2007) and the jury’s fallacy (see Fenton and
Neil, 2000), the ban of methods dealing with uncertainty actually dismisses a way to
prevent probabilistic reasoning errors. Bayesian networks in particular provide a valuable
method for the analysis of numerous factors in the investigation of evidence and their
dependencies. They have been used in the analysis of a variety of forensic evidence like
low quality DNA evidence and gunshot residue (Fenton et al., 2014). With Bayesian
networks one can show the relationships of different factors influencing the evidence with
nodes connected in a graph. The conditional relationships are implemented numerically
in conditional probability tables for each node. The prior beliefs for the variables in
the network can be updated by calculating the posterior beliefs in the variables using
Bayes’ theorem if new evidence is introduced.
Evidence analysis using Bayesian techniques like Bayesian networks has two major
issues often addressed by statisticians as well as legal experts. For one thing, the use
of Bayesian methods requires setting a prior belief in the variables. Since this is often
difficult due to insufficient databases, it poses a problem for inference from the
network. This issue was the basis for the court decision in R v T, and this point of
criticism also fuels the distrust of legal experts and juries in Bayesian techniques.
The other issue is the mathematical nature of the analysis. Mathematical assump-
tions and calculations are often too complex for laymen to understand, especially
regarding Bayesian networks, where numerous variables are related using formulas. This
decreases the trust of the jury in the results of the analysis and inhibits a transparent
presentation of evidential reasoning.
A lot of scientific work has been done to tackle these issues. Inference methods
regarding the measure of evidential strength of the evidence like the likelihood ratio
have been developed. The use of likelihood ratios instead of just the respective posterior
probabilities has the property that it excludes the prior beliefs of the variables. This is
an advantage, since if the priors are difficult to elicit due to insufficient databases it
still gives a measure of the evidential value an observation has for a respective variable
(Fenton et al., 2014). The likelihood ratio measures the impact an observation has on
a considered belief, rather than providing a new belief in the variable. The likelihood ratio approach
could help to solve the issue of insufficient information, but as Fenton et al. (2014)
explained, this method should be used only under consideration of certain limitations
which will be discussed later in this thesis. An approach to make Bayesian reasoning
understandable for lawyers, judges and juries was developed by Fenton and Neil (2011).
They propose to first explain the basic Bayesian principle and to outsource the rest
of the calculations to a Bayesian calculation program. Thus, the computations are
ensured to be correct and laymen will understand the basic principle of reasoning with
uncertainty. As Gigerenzer (2002) discussed, Bayesian reasoning can also be explained to
laypeople using event trees.
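In the spirit of this natural-frequency format, conditional probabilities can be restated as counts in a hypothetical pool of cases. All rates below (a 1% prior of guilt, a 95% match rate among the guilty, a 5% false-match rate) are made-up illustration values:

```python
# Natural frequencies: restate conditional probabilities as plain counts.
guilty = 100          # 1% of 10,000 hypothetical comparable cases
innocent = 9_900      # the remaining cases
match_guilty = 95     # 95% of the guilty defendants show a DNA match
match_innocent = 495  # 5% of the innocent defendants show a match

# "Of the 590 cases with a match, 95 involve a guilty defendant."
p_guilty_given_match = match_guilty / (match_guilty + match_innocent)
print(round(p_guilty_given_match, 3))  # 0.161
```

The count formulation carries the same information as Bayes' theorem, but juries tend to find "95 out of 590" easier to follow than a ratio of conditional probabilities.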
These basic approaches to closing the communication gap between forensic and legal
experts or juries still leave the issue of correctly translating probabilistic reasoning
into legal reasoning. An approach to solving this problem is to model complete cases
with Bayesian networks using legal idioms. This establishes a common ground for
reasoning over evidence for both legal and probabilistic reasoning approaches (Fenton
et al., 2013). Since criminal trials usually involve a huge set of complex relationships
between evidence, it is an ambitious idea to model full cases. The network still has
to model these relationships correctly and inference has to be accessible for lawyers
and juries. This thesis explains and reviews the research on modelling entire criminal
cases with Bayesian networks, in order to answer the question if these support legal
reasoning processes. A framework for the design of complete cases and the inference
from the networks must satisfy some requirements in order to be applicable in legal
reasoning processes. For the review, the methods explained in this thesis were tested
on a fictional example of a criminal case and reviewed with regard to the following
questions:
1. Does the structure of the network incorporate the basic principles underlying legal
reasoning with arguments and narratives?

2. How high are the costs of designing the network?

3. Is the inference from the network robust against wrong prior assumptions?

4. Is the inference from the network accessible for lawyers and juries?
This thesis is structured as follows: Basic properties of Bayesian networks are
explained in section 2. Following that, basic characteristics of two approaches to legal
reasoning – arguments and narratives – are explained in subsection 3.1. In subsection 3.2
a fictional example of a criminal trial is outlined, which is used later for the review of the
different methods. The idioms-based approach for the design of a network with regard
to argumentative legal reasoning is explained in subsubsection 3.3.1. Subsubsection 3.3.2
is concerned with the sequential presentation of evidence and the resulting change of
the belief in the main hypothesis.
Inference from the network with the use of an intermediate structure called support
graphs is explained and tested on the example in subsubsection 3.3.3.
In subsection 3.4 an approach to modelling narratives in a Bayesian network is
outlined, together with the proposed framework for the translation of this network into
a report in subsubsection 3.4.2. Lastly, the different methods are discussed with regard
to the aforementioned questions, and future research is outlined in section 4.
2 Bayesian Networks
Reasoning over causal relationships between several events is often subject to uncertainty.
This uncertainty arises from insufficient information, since an infinity of possible factors
might influence the events in question. Consequently, it is often impossible to make
statements of certainty about them (Russell and Norvig, 2010, pp.481). Probability
theory offers multiple tools to model reasoning under uncertainty, while still considering
causal dependencies between the different variables via conditional probabilities and
Bayes’ theorem.
In order to model a causal reasoning process with probability theory, one needs
to find the probability distributions for each variable and combine them in a joint
distribution. Handling these distributions in their explicit form can become intractable with
an increasing number of variables, which constitutes a problem in complex reasoning
processes (Koller and Friedman, 2009, pp.45). Exploiting conditional dependence
properties of the variables can help to reduce the computational costs of the calculations.
Additionally, marginalization – over the states of one variable in order to take it out
of consideration – introduces a way to make the inference more accessible (Koller and
Friedman, 2009, pp.45-48). The factorized joint distribution can be used for inference
from the network by asking queries about variables. These inferences can then be used
in the reasoning process and are less costly than inference from a joint distribution.
Bayesian networks are a useful representation of the joint distributions in their factorized
form.

Graph structure: H → E ← A, i.e. H: Defendant guilty and A: Accuracy of evidence
are the parents of E: Evidence of blood match DNA.

P(H):  t = 0.01,  f = 0.99
P(A):  t = 0.9,   f = 0.1

P(E | H, A):
            H = t             H = f
            A = t    A = f    A = t        A = f
  E = t     1.0      0.5      10⁻⁶         0.5
  E = f     0        0.5      1 − 10⁻⁶     0.5

Figure 1: Bayesian network example (Fenton et al., 2013). Each variable is represented by a
node associated with a probability distribution conditioned on the parents of that node.

A Bayesian network (BN) consists of an acyclic directed graph G comprised of
variables V and directed edges E that denote the assumed conditional dependence
relationships between the variables. It models the underlying probability distribution
by representing it in the factorized full joint distribution (Koller and Friedman, 2009,
pp.51). Observations regarding the variables can be included in the model and the
changes of the posterior probabilities, given those observations¹, can be calculated.

¹ Observed nodes in this thesis are filled gray.
The conditional dependencies of the variables are collected in a conditional probability
table (CPT) for each node. BNs therefore consist of qualitative parts, like the graphical
structure and direction of edges, and quantitative parts, like the CPTs specifying the
strength of influence the parent nodes have on their children (Vlek et al., 2014). It is
important to note that the dependencies in the graph do not usually imply a causal
relationship between the variables. However, since the BNs in this thesis are constructed
such that they reflect causal relationships, they will allow a causal interpretation.
Consider a case in which a judge or a jury wants to infer, from the evidence of a
DNA match of blood found at the crime scene, the probability of the hypothesis H
that the defendant is guilty. The probability of the DNA evidence being observed is
causally influenced by its two parent nodes: the named hypothesis H and accuracy of
the evidence A. These variables can be modelled by the example BN shown in Figure 1.
In this case each node can have only two possible states – true (t) and false (f). The values
in the CPTs next to each node define the probabilities for each state, conditioned where
necessary on the different states of the node's parents. Using the network structure
the change of belief in variables can be calculated. If evidence is added by fixing the
state of the corresponding variables, the posterior probability for the different states of
a variable is calculated using Bayes’ rule.

P (Evidence | Variable)P (Variable)


P (Variable | Evidence) =
P (Evidence)

For example, the belief in the hypothesis of the above example being true – which
is P (H = t | E) – changes if we know that the blood matches the DNA. The new belief,
i.e. the posterior, can now be calculated using Bayes’ formula. Because the evidence is
not only conditioned on the hypothesis “Defendant guilty”, but also on “Accuracy of
evidence” the calculation has to take into account the accuracy node A with its different
states:

$$P(H=t \mid E=t) = \frac{\sum_A P(E=t \mid H=t, A)\, P(A)\, P(H=t)}{\sum_H \sum_A P(E=t \mid H, A)\, P(A)\, P(H)}$$

$$= \frac{\bigl(P(E=t \mid H=t, A=t)\, P(A=t) + P(E=t \mid H=t, A=f)\, P(A=f)\bigr)\, P(H=t)}{\sum_H \bigl(P(E=t \mid H, A=t)\, P(A=t) + P(E=t \mid H, A=f)\, P(A=f)\bigr)\, P(H)}$$

$$= \frac{(1 \cdot 0.9 + 0.5 \cdot 0.1) \cdot 0.01}{(1 \cdot 0.9 + 0.5 \cdot 0.1) \cdot 0.01 + (10^{-6} \cdot 0.9 + 0.5 \cdot 0.1) \cdot 0.99} = 0.161$$

0.161 is the updated belief in the hypothesis – the posterior of H.
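The same calculation can be traced in a few lines of Python, with the CPT values taken from Figure 1 (a minimal hand-rolled sketch, not a BN library):

```python
# CPTs from Figure 1.
p_h = {True: 0.01, False: 0.99}  # H: defendant guilty
p_a = {True: 0.9, False: 0.1}    # A: accuracy of evidence
p_e_true = {                     # P(E = t | H, A)
    (True, True): 1.0, (True, False): 0.5,
    (False, True): 1e-6, (False, False): 0.5,
}

def likelihood(h):
    """P(E = t | H = h), with A marginalized out."""
    return sum(p_e_true[(h, a)] * p_a[a] for a in (True, False))

# Bayes' rule: posterior of H given the observation E = t.
numerator = likelihood(True) * p_h[True]
evidence = sum(likelihood(h) * p_h[h] for h in (True, False))
print(round(numerator / evidence, 3))  # 0.161
```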

2.1 Flow of Influence


When a set of variables is observed the probabilities for another set of variables might
change. The posterior probabilities for these variables need to be recalculated given the
Figure 2: Different trails occurring in a BN (Koller and Friedman, 2009, pp.50). (a) is a
causal trail X → M → Y, (b) a common effect structure X → M ← Y and (c) a common
cause structure X ← M → Y.

new evidence. BNs contain the needed information for this process called probabilistic
inference (Russell and Norvig, 2010, pp.522). Hence, it is useful to determine if variables
are independent from each other given the set of observed variables in the network.
Distinct kinds of paths in a BN show different dependency properties. Dependency
of variables can be interpreted as influence flowing between them. If the two variables
are dependent on each other, there exists an active path between them through which
influence can flow. Paths are either active or blocked depending on whether they
fulfil certain conditions or rather contain certain structures. These structures can be
separated into the trail types shown in Figure 2. If the path contains a causal trail (Figure 2
(a)) or a common cause structure (Figure 2 (c)) with the middle node of the trail being
in the set of the observed nodes, the path is blocked by the observed node. Another
condition specifies that a path containing a common effect structure (Figure 2 (b))
blocks the influence flow between the two nodes X and Y, given neither the middle node
nor any descendant of it is in the set of observed nodes² (Koller and Friedman, 2009,
pp.70). A reasoning pattern frequently used in BNs is that of intercausal reasoning
or explaining away (Pearl, 2000). It occurs if one parent node and the child node in
a common effect structure are observed which is shown in Figure 3. If the node has
a negative influence on the other parent node, but a positive influence on the child
node, its observation explains away the influence of the other parent node on the child
(Koller and Friedman, 2009, pp.55). If, for example, we observed the states Accuracy of
evidence = false and Evidence of blood match DNA = true, the lack of accuracy
would explain away the guilt of the defendant.
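Tracing this in numbers with the CPTs from Figure 1: once A = false is observed, E carries no information about H and the posterior falls back to the prior (a hand calculation, not a library call):

```python
# With A = false observed, P(E = t | H, A = f) = 0.5 for both states of H,
# so the DNA match becomes uninformative about guilt.
p_h_true = 0.01  # prior P(H = t) from Figure 1
posterior = (0.5 * p_h_true) / (0.5 * p_h_true + 0.5 * (1 - p_h_true))
print(round(posterior, 3))  # 0.01 - compare 0.161 when A is unobserved
```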

2.2 Probability Elicitation


The assessment of probabilities for a BN constitutes a major issue for the creator of BNs.
For the determination of prior probabilities of the variables the databases are mostly not
large enough, incomplete, not accessible or even systematically biased. For this reason,
domain experts are often the source of probabilistic information. In order to have a
² A more formal definition of the dependency properties, using the d-separation criterion (d for
directional), can be found in Koller and Friedman (2009, pp.70).
Figure 3: Intercausal reasoning structure.

successful elicitation process, selected experts need to be motivated and trained and
questions need to be structured for the elicitation (Renooij, 2001, pp.256). Additionally,
the documentation and verification of the results are important steps to complete a
useful assessment of priors. Unfortunately, capabilities of experts regarding probability
judgements face issues that need to be considered, as Renooij (2001) discusses.
Human judgement about probabilities is subject to different kinds of biases. A
motivational bias influences the judgement according to the expert's interest in the
outcome and the circumstances in which the expert makes the judgement. Additionally,
a cognitive bias or the use of erroneous heuristics might influence the judgement
of the expert. Examples include the apparent availability of the event,
anchoring on a starting value, how representative one event might be for another, base
rate neglect and overconfidence (Renooij, 2001, pp.257). In order to compensate for
these biases, the right experts need to be selected and trained accordingly. The selection
of more than one expert might be worth considering because it might result in more
accurate probabilities, although depending on the form of integration of multiple expert
opinions group interaction problems might arise (Renooij, 2001, pp.258).
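One simple form such an integration of multiple expert opinions could take is a linear opinion pool, i.e. a weighted average of the individual assessments. This particular scheme is an illustration on my part, not a method taken from Renooij (2001):

```python
# Linear opinion pool: combine several experts' probability judgements
# for the same event by a weighted average (illustrative sketch).
def linear_pool(judgements, weights=None):
    """Return the pooled probability for one event."""
    if weights is None:  # default: all experts weighted equally
        weights = [1 / len(judgements)] * len(judgements)
    return sum(w * p for w, p in zip(weights, judgements))

# Three hypothetical experts assess P(A = true):
print(round(linear_pool([0.85, 0.90, 0.95]), 2))  # 0.9
```

Weighting experts unequally (e.g. by calibration scores) is possible, but choosing the weights reintroduces exactly the elicitation problems discussed above.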

2.3 Inference
The information implemented in BNs can be used for the analysis of beliefs in variables
and of the impact observations have on these beliefs. The calculation of the posteriors
of variables is informative, but has limited explanatory value when priors cannot be
reliably elicited. In consideration of these limitations of posterior belief analysis, the
analysis of the network's values might still be revealing. Measures of inferential strength
can be examined, in order to assess the impact of evidence on variables of interest.
For example, one can assess the evidential value the observation E: Evidence of
blood match DNA = true has for the hypothesis H: Defendant guilty = true. The
representation of Bayes' theorem in terms of odds is useful for this purpose.

$$\underbrace{\frac{P(H \mid e)}{P(\neg H \mid e)}}_{\text{posterior odds}} = \underbrace{\frac{P(e \mid H)}{P(e \mid \neg H)}}_{\text{likelihood ratio}} \cdot \underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}$$

This form of Bayes’ theorem consists of three terms, the posterior odds, the prior
odds and the likelihood ratio each of which measures a specific kind of support for the
hypothesis (Pearl, 2000).
The likelihood ratio is a measure of the strength of evidential support for the
hypothesis. It does not take the priors into account and makes the analysis much
less susceptible to calculation of wrong posterior beliefs due to inappropriate priors.


However, if the two hypotheses regarded are not exclusive and exhaustive, the likelihood
ratio for an evidence item might show it to be neutral when it in fact supports an unknown
hypothesis. This problem is elaborated in depth in Fenton et al. (2014).
The prior odds offer a measure of how much support the hypothesis receives through
the prior assumptions. The posterior odds represent a degree of belief in a hypothesis
given evidence, compared to the degree of belief in the alternative hypothesis given
evidence. This is called the diagnostic support (Pearl, 2000, pp.7).
In order to test the assumptions made for the variable, the priors can be examined
separately. A sensitivity analysis helps to asses the different effects, estimations of priors
have on posterior probabilities (Koller and Friedman, 2009, pp.95). This is especially
useful when relationships of a hypothesis and observed evidence are modelled in the
network, as the influence of the evidence on the hypothesis can be tested.
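A minimal sensitivity analysis for the example network simply sweeps the prior P(H = t) and recomputes the posterior with the likelihoods from Figure 1 held fixed:

```python
# Sweep the prior of H and observe the posterior P(H = t | E = t).
p_e_given_h = 1.0 * 0.9 + 0.5 * 0.1       # 0.95, from Figure 1
p_e_given_not_h = 1e-6 * 0.9 + 0.5 * 0.1  # roughly 0.05

for prior in (0.001, 0.01, 0.1, 0.5):
    posterior = (p_e_given_h * prior) / (
        p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    )
    print(prior, round(posterior, 3))
# The posterior ranges from about 0.019 (prior 0.001) to 0.95 (prior 0.5),
# so conclusions from this network depend heavily on the elicited prior.
```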
3 Legal Reasoning with Bayesian Networks


3.1 Arguments and Narratives
Three different approaches are used for evidential reasoning: Argumentative reasoning,
reasoning about narratives and the probabilistic approach. The former two are used by
legal experts and concern mostly reasoning over facts. The latter is used by forensic
experts to analyse evidence, with BNs being one among several means of analysis. For
modelling full criminal cases with BNs, legal reasoning approaches have to be integrated
into the probabilistic reasoning process. For the review of methods integrating arguments
and narratives into probabilistic approaches to evidential reasoning an understanding
of arguments and narratives in the legal context is necessary. The second part of the
paper by Verheij et al. (2016) focuses on the connection between these approaches and
will serve as a basis for the outline of these reasoning methods in the next sections.

3.1.1 Arguments
A useful definition for arguments in this context can be derived from (Verheij et al.,
2016). Arguments are regarded as a set of deduction rules applied to premises in order
to arrive at a specific conclusion. Premises in legal arguments are based on the evidence
given in a trial. The set of premises is then further complemented by conclusions of
previous arguments. Therefore chains of arguments can be built to argue in a more
complex manner (Verheij et al., 2016, pp.5-6). Verheij et al. focus on formally defined
arguments, in which a set of deduction rules is derived from an argumentation scheme
that defines the necessary elements of an argument. This makes it possible to detect
incomplete arguments. Defeasibility might arise if the argument has sound premises,
but the deduction rule applied is inadequate. This is modelled in a formal argument
via exceptions from the deduction rules, occurring under certain circumstances (Verheij
et al., 2016, pp.7). It is therefore possible to define sources of doubt in the argument,
focusing either on the premises, the conclusion or the deduction rule. These sources of doubt
build the basis for testing the arguments in several ways. Attack of an argument via new
evidence that supports an exception of the deduction rule is called undercutting. This
is the case if another argument can be built that attacks the deduction rule and the
conclusion of the given argument (Verheij et al., 2016, pp.7). For example, we can derive
an argument using the variables of Figure 1, with E: Evidence of blood match DNA as
a premise. From this we can deduce the conclusion H: Defendant guilty, using the
evidential value of a DNA match, which here serves as the deduction rule. A: Accuracy of
evidence represents an exception to the rule, since in the case that the evidence
is not accurate the deduction rule does not hold – in other words, it represents an
undercutter to the inference. This may constitute the premise for another argument
attacking the original argument for the conclusion. Figure 4 shows a visualisation of
this argument. With regard to BNs, explaining away is the most obvious instance of
an exception to a deduction rule. As Verheij et al. (2016) argue, a strength of
the argumentative approach is that it is adversarial. Counterarguments have to be
considered in order to build a sound argumentation (Verheij et al., 2016, pp.3).
Figure 4: Argument for the example shown in Figure 1 (Timmer et al., 2015c). An argument
leads from the premise E: Evidence of blood match DNA to the conclusion H: Defendant
guilty; the node A: Accuracy of evidence represents an undercutter for this argument.

3.1.2 Narratives
Legal narratives as described by Verheij et al. (2016) attempt to provide a possible
scenario, i.e. a sequence of events interpreted as causally linked that explains the
evidence given in the trial. It is up to the jury or the judge to assess the coherence
of the scenario given the observations in the trial and their own general knowledge
about the world, and to evaluate whether the scenario is sufficiently supported by evidence. As Verheij et al. (2016) point out, so-called story schemes – scripts consisting of patterns expected in a crime – serve as helpful templates for constructing scenarios and as an indication of whether all necessary elements are included. If a scenario is complete according to a story scheme, it exhibits global coherence (Verheij et al., 2016, pp.8). When evaluating a scenario, evidential gaps might occur, which point in the direction in which the case has to be investigated further (Verheij et al., 2016, pp.9). This might lead to new evidence being detected. For example, it might be a crucial point in a scenario that the murderer had a murder weapon. If this is unsupported by evidence, it represents an evidential gap and should be investigated further.
The quality of a scenario is defined by different factors: how much of the actual evidence of the case it covers, and how plausible it is, that is, how much of the story, apart from the evidence, is consistent with our general knowledge about the world (Verheij et al., 2016, pp.10). Different scenarios can be compared on plausibility, consistency and completeness (Verheij et al., 2016, pp.7-10). One connection between arguments and scenarios is to view each element of a scenario as an object for an argumentative analysis which can be supported or attacked by arguments (Verheij et al., 2016, pp.3). As Verheij et al. (2016) emphasise, it is important to analyse and compare different scenarios in order to prevent tunnel vision, that is, to avoid believing more in a good story than in a true one.
Narratives and arguments are different approaches to evidential reasoning. Arguments, on the one hand, focus on the support for and attack on a hypothesis using the evidence. Narrative methods, on the other hand, aim to explain the evidence in a coherent story (Verheij et al., 2016, pp.3). Both methods are used by lawyers and judges in the legal context and are therefore targets for methods aiming to integrate legal reasoning into probabilistic reasoning.

3.2 The Fictional Case of Lisa and Kai


The following fictional case will be used to illustrate the network structure methods and inference approaches in the following sections; it is inspired by the stabbing example outlined in Vlek et al. (2016)³.
Lisa is charged with the murder of Kai. The cause of death of the victim was
determined by the coroner as stab wounds inflicted by a left-handed individual. Tom,
a neighbour of Kai, testifies that he saw Lisa with Kai shortly before the murder in
front of Kai’s apartment, which was the crime scene. Additionally, Lisa is identified as
a left-hander. Conflicting with Tom’s testimony, Lisa provides a cinema ticket for the
time of the murder as an alibi.
For the calculations of the prior and posterior probabilities of the variables of interest in the BN, the tool GeNIe 2.1 (BayesFusion, 2016) was used. The graphical structure and the CPTs were entered into the model, and the propagation of instantiated evidence through the BN was then performed automatically by the program. The priors assessed for each variable are subjective and were elicited only to test the methods on the example. The CPTs for the nodes in the networks can be found in section 5.

3.3 Arguments and Bayesian Networks


The structure of BNs in the legal context is susceptible to false assumptions and to errors in the creation process. The choice of evidence nodes might be biased by the kind of argument presented. For example, arguments for the defence or the prosecution might emphasise opposing conclusions and therefore include only a subset of the evidence. If no consistent framework is used for the construction, BNs designed by the different parties for one case might show different outcomes. With this in mind, a main emphasis in research on the legal use of BNs lies in proposing consistent guidelines for a robust construction of the graphs, in which the influence of evidence is neither overestimated nor underestimated. It is also critical for the creation of a BN for legal reasoning to design a network that is comprehensible for the jury and the judge.
The representation of the BN has to match the intuitive ascription of causal relationships between an ultimate hypothesis like “The defendant is guilty.”, sub-hypotheses like “The defendant was at the crime scene.”, and the evidence of the case (Fenton et al., 2013).
In addition to the issues occurring during the structuring process, inference from the network poses a problem if done under the wrong assumptions. The probabilities, even if based on expert judgement, might be biased due to interference factors in the elicitation process, as pointed out in subsection 2.2. Methods for inference therefore have to make sure that the probabilities of the networks are not misinterpreted as facts and that the factor of uncertainty is pointed out. They have to compare the probabilities for opposing hypotheses and have to provide a framework for lawyers to infer arguments from the network.
network. In subsubsection 3.3.1 a framework for BN construction regarding arguments
is outlined and illustrated with the aforementioned example. The resulting BN is then
used for the illustration and application of inference methods in subsubsection 3.3.2
and subsubsection 3.3.3.
³ A fully modelled scenario BN for the example of Vlek et al. (2016) can be found at (Vlek, 2016).

3.3.1 Idioms-based Approach


One approach to design a consistent framework for BNs modelling arguments is based
on the use of idioms which are “recurrent patterns of evidence” expected in a criminal
case, like motive and alibi (Fenton et al., 2013, pp.61). This approach is elaborated
and tested by Fenton et al. (2013), to show the possible gain from the mathematical
model for human comprehension of evidence relations. Fenton et al. propose a variety
of idioms, which provide a basis for legal reasoning with a Bayesian Network. Referring
to limits of the human working memory, they argue that these idioms reflect human
comprehension in complex arguments because repeatable structures aid in reducing the load on information storage, similar to the process of chunking (Fenton et al., 2013, pp.63). Their guideline for BN structuring aims to model the relationships of a collection of hypotheses and evidence. The use of the structure is limited to one ultimate hypothesis,
i.e. the guilt hypothesis (Fenton et al., 2013, pp.64). This constraint is based on the assumption that the usual scenario in court involves the prosecution trying to convince the jury (or judge) of the guilt of the defendant and the defence trying to convince them of the defendant's innocence. Sub-hypotheses, however, can still be considered.
Fenton et al. (2013) point out that the BN should be designed in the direction of the causal flow. This is implemented as a causal trail from hypothesis to evidence, which is also consistent with human intuition, since evidence does not cause the guilt of the defendant but rather supports it (Fenton et al., 2013, pp.66). The network structure is implemented such that direct evidence – which renders the hypothesis true or false – and circumstantial evidence – which involves more inferential steps and has less influence on the hypothesis – can be clearly distinguished (Fenton et al., 2013, pp.90). For simplicity, the values of the variables are restricted in this thesis to boolean outcomes, except for the constraint nodes. Fenton et al. (2013) state, however, that nodes which are not hypothesis or evidence nodes may in general take non-boolean outcomes.
Fenton et al. (2013) introduce seven different kinds of patterns – the evidence idiom, the evidence-accuracy idiom, patterns for modelling opportunity and motive, the dependency idiom, the alibi idiom and the explaining-away idiom – which are elaborated in section 3 of Fenton et al. (2013, pp.70-91). A lot of uncertainty in evidence evaluation arises from lab errors and other threats to the validity of evidence. The evidence idiom itself – which is modelled by a causal relationship from hypothesis to evidence – can account for this uncertainty through the CPTs of the corresponding evidence nodes (Fenton et al., 2013, pp.73). The intention of introducing a separate evidence-accuracy idiom is to explicitly state the problem of evidence accuracy and its effect on the impact of evidence on the hypothesis. For the case that different sources of uncertainty influence the accuracy of the evidence, the guidelines propose to use the idiom repeatedly with different instances of the evidence. For modelling eyewitness testimony accuracy, it is proposed to use three different instances of the idiom: competence, objectivity and veracity. The accuracy idiom should be modelled as a causal trail from the accuracy node to the evidence node (Fenton et al., 2013, pp.72-77). An example of this pattern, as well as the evidence-accuracy idiom regarding eyewitness testimonies, is illustrated in Figure 5. In fact, it is an intercausal reasoning structure, because a lack of accuracy of the evidence explains away the hypothesized cause of the evidence (that the defendant is the source).
Contrary to the causal relationship between the ultimate hypothesis and the evidence,

[Figure 5 graphics: (a) nodes Lisa attacked Kai with the left hand and Accuracy of wound classification pointing to Stab wounds; (b) Veracity, Objectivity and Competence feeding Accuracy of testimony, which points to Eyewitness testimony evidence.]

Figure 5: (a) Evidence-accuracy idiom used for modelling a part of the example. (b) Eyewitness-accuracy idiom (Fenton et al., 2013, p.77).

opportunity and motive are intuitively regarded as having a causal influence on the ultimate hypothesis. The opportunity and motive nodes should be formulated as sub-hypotheses, such that multiple items of evidence can be conditioned on them. One advantage of using these kinds of nodes is that the unconditional prior requirement for the ultimate hypothesis is shifted to the motive and opportunity hypotheses, whose probabilities might be easier to elicit. In the example of Lisa and Kai, the unconditioned probability for Lisa stabbed Kai is difficult to obtain, but the conditional relationship between it and Lisa was present at the scene is easier to express in probabilities, given our knowledge about the world. The ultimate hypothesis should not be conditioned on several motives directly; rather, the motive nodes should influence one combined motive node that in turn has a direct influence on the hypothesis node (Fenton et al., 2013, pp.78-81). The structure for the opportunity and the motive idioms is shown in Figure 6 (a).
Another issue that has to be considered is that several pieces of evidence involved in a case might be dependent on each other. If the dependency of evidence is ignored, redundant information might lead to overstating the impact of the evidence on the hypothesis as if it were independent. For example, if the information of two cameras pointing at the same spot is used in a trial as evidence for the presence of a defendant, the evidence from the cameras shows a dependency. Modelling these items independently may lead to an increase in belief in the hypothesis if both cameras capture the defendant, even though the second camera fails to provide new evidence. Modelling the dependency explicitly has the advantage that the evidential values are not overstated, while other information can still be used if, for example, only one camera captures the defendant at the scene (Fenton et al., 2013, pp.82-84). The use of a dependency idiom with a separate dependency node supports a more realistic evaluation of the impact of dependent evidence items (Fenton et al., 2013, pp.83). A generic structure for this pattern is illustrated in Figure 6 (b). A special case of dependency of evidence is

[Figure 6 graphics: (a) Motive 1 and Motive 2 feed a combined Motive node which, together with Opportunity, points to Defendant guilty. (b) A Dependency node points to Evidence 1 and Evidence 2. (c) Cause 1 and Cause 2 each point to Evidence and to a shared Constraint node with the following CPT.]

P(Constraint | Cause 1, Cause 2)
                 Cause 1 = t        Cause 1 = f
                 C2 = t   C2 = f    C2 = t   C2 = f
   cause 1         0        1         0        0
   cause 2         0        0         1        0
   impossible      1        0         0        1

Figure 6: (a) Idioms modelling opportunity and multiple motives. (b) Generic idiom for modelling dependency. (c) Explaining-away idiom for modelling two exclusive causes with the CPT for the constraint node (Fenton et al., 2013, p.88).

the alibi idiom. The alibi of the defendant often represents conflicting evidence and generally contradicts the hypothesis of the prosecution. Fenton et al. (2013) consider eyewitness testimony for the alibi as a classical case of alibi evidence. An issue arising from the use of the alibi pattern is that not only does the alibi influence the probability of the guilt hypothesis, but the guilt hypothesis might also influence the alibi. If, for example, the defendant is guilty, the accuracy of the testimony of a close friend who assures an alibi for the defendant is explained away (Fenton et al., 2013, pp.85-87).
Lastly, Fenton et al. (2013) introduce the explaining-away idiom for the case that two
mutually exclusive and exhaustive causes influence one evidence node on separate paths.
Since the usual intercausal reasoning structure, already used in the evidence-accuracy
idiom, fails to model this case correctly for reasons explained in (Fenton et al., 2011),
they propose to add a constraint node with three possible states, one for each cause and
one for the impossible cases, as a child of both exclusive causes. The constraint node
assigns a probability of 1 to the state impossible in cases where more than one cause is true – in which case they would not be exclusive – or all are false – not exhaustive. In this way the exclusivity of the nodes is assured and made explicit (Fenton et al., 2013, pp.87-90). The structure for this idiom is shown in Figure 6 (c) together with the CPT for the constraint node⁴.
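The effect of the constraint node can be checked mechanically. The sketch below hard-codes the deterministic CPT of Figure 6 (c) and conditions on the constraint node not being in its impossible state; the independent priors of 0.5 for the two causes are an assumption made here purely for illustration:

```python
# Constraint-node CPT from Figure 6 (c): two mutually exclusive and
# exhaustive causes. The priors for the causes are illustrative assumptions.

P_C1 = P_C2 = 0.5

def constraint_state(c1, c2):
    """Deterministic CPT: the constraint state that receives probability 1."""
    if c1 and not c2:
        return "cause 1"
    if c2 and not c1:
        return "cause 2"
    return "impossible"  # both true (not exclusive) or both false (not exhaustive)

# Posterior over (C1, C2) given the constraint is observed to be possible:
posterior, norm = {}, 0.0
for c1 in (True, False):
    for c2 in (True, False):
        p = (P_C1 if c1 else 1 - P_C1) * (P_C2 if c2 else 1 - P_C2)
        if constraint_state(c1, c2) != "impossible":
            posterior[(c1, c2)] = p
            norm += p
posterior = {k: v / norm for k, v in posterior.items()}
print(posterior)   # only the two exclusive-exhaustive combinations remain
```

Conditioning on the constraint removes all probability mass from the combinations in which both causes hold or neither does, which is exactly the exclusivity and exhaustiveness the idiom is meant to enforce.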
In the given example of Lisa and Kai, the evidence idiom, the evidence-accuracy
⁴ In addition to the CPTs, Fenton et al. (2013) recommend adding soft evidence to the model, which is evidence weighted on the prior probabilities of the causes. This is further explained in Fenton et al. (2011).

[Figure 7 graphic: a BN with the hypothesis nodes Opportunity, H1: Lisa was present, H0: Lisa stabbed Kai and H2: Lisa attacked Kai with knife in left hand, and the evidence nodes Tom saw Lisa and Kai at scene (with Testimony Accuracy), Cinema Ticket (with Authenticity, as alibi), and the left-hander evidence Lisa is left-handed and Stab wounds inflicted by left-hander (with Classification Accuracy).]

Figure 7: Bayesian network model for the example implementing the evidence idiom, evidence-accuracy idiom, opportunity idiom and alibi idiom.

idiom, the opportunity idiom as well as the alibi idiom are applicable. For reasons of clarity, the eyewitness testimony of Tom is modelled with only a simple accuracy node. The full network is shown in Figure 7. The CPTs for the nodes are listed in subsection 5.1.

The idioms-based approach provides a framework that might find good use in a legal context, because it translates probabilistic evidence into legal reasoning patterns. The idioms not only help with the construction of the network, but also determine inference within the network: the connections of the BN that are analysed are those determined by the idioms. The network can be translated into arguments by first taking the evidence nodes as premises and then using the connection properties of the BN as deduction rules in order to arrive at a conclusion.

3.3.2 Inference
A variety of assumptions influencing the choice of prior and conditional probabilities for the nodes underlies BNs in the legal context. Naturally, there is room for disagreement with regard to these assumptions and the resulting probabilities, which raises concerns about the validity of the posterior probabilities calculated for the variables of interest. For example, in the example of Lisa and Kai the prior probability for H1: Lisa was present requires assumptions about the frequency with which Lisa was at Kai's apartment. It is apparent that the assumed unconditioned prior is, in fact, conditioned on our knowledge about the world, as Fenton et al. (2014) elaborate. If we knew that Lisa met Kai every other day at his apartment, we would
assign a high prior probability for the state H1 :Lisa was present = true. If on the
contrary we have no reason to believe that Lisa and Kai knew each other at all, we would assign a low prior to the state. This kind of issue arises particularly for variables for which no statistical database exists, on which no expert is specialised, or which cannot be examined in an experimental setting. It is important that the probabilities are consistent with our knowledge about the world, so the elicitation process should be made clear for the jury and the judge. Only then is a correct understanding of the network's results possible. This problem is of even higher significance regarding the
ultimate hypothesis. Even if only the conditional probability of Lisa’s guilt given her
opportunity needs to be elicited, a fundamental legal axiom has to be considered: The
presumption of innocence. The prosecution has the burden of proof and has to prove
the defendant’s guilt beyond reasonable doubt (Verheij et al., 2016, pp.27). To set
an unconditioned prior for the guilt of the defendant is a violation of this principle.
Although the probabilities might not be accurate and the elicitation of a prior for the guilt hypothesis is inconsistent with this fundamental principle, inference that can aid the interpretation of evidence is possible. In fact, the BN should be used to examine the change of belief in the guilt hypothesis given the different items of evidence (Fenton et al., 2013, pp.94). The evidential value of the evidence items for the guilt hypothesis constitutes the main objective of inference in BNs. This evidential value can be examined with the likelihood ratio, which has the advantage that the prior of the guilt hypothesis does not influence the result. One compares, for individual items of evidence as well as for sets of evidence, the likelihood of its occurrence given the prosecution hypothesis – “the defendant is guilty” – with the likelihood given the defence hypothesis – “the defendant is innocent”. Another way to evaluate the change of belief in the hypothesis given the evidence is to consider posterior odds. Here, the change of belief in the prosecution hypothesis given the evidence is compared to the change of belief in the defence hypothesis given the evidence.
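Both measures can be written down in a few lines. The probabilities below are illustrative assumptions, not values elicited for the Lisa and Kai network:

```python
# Sketch of the likelihood ratio and the odds form of Bayes' rule.
# All numbers are illustrative assumptions.

def likelihood_ratio(p_e_given_hp, p_e_given_hd):
    """LR = P(E | Hp) / P(E | Hd): a prior-free measure of evidential value."""
    return p_e_given_hp / p_e_given_hd

def posterior_odds(prior_odds, lr):
    """Posterior odds of Hp against Hd: prior odds times the likelihood ratio."""
    return prior_odds * lr

def odds_to_probability(odds):
    return odds / (1.0 + odds)

# Evidence assumed 8 times more likely under the prosecution hypothesis:
lr = likelihood_ratio(0.8, 0.1)
# Combined with a sceptical prior P(Hp) = 0.1, i.e. prior odds of 1:9:
post = posterior_odds(1.0 / 9.0, lr)
print(lr, post, odds_to_probability(post))
```

Note that the likelihood ratio itself is computed without any reference to the prior; only the final step of turning it into a posterior requires one.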
Fenton et al. (2013) use the sequential presentation of evidence for the analysis of
the effect the evidence has on the hypothesis and for the assessment of interactions
between evidence items. This can be revealing for a first analysis of the evidential
values, but has some disadvantages for legal applications which are mentioned later in
this section.
Table 1 shows the sequential presentation order of the evidence together with the resulting belief in the guilt hypothesis Lisa stabbed Kai for the network shown in Figure 7. It should be noted that altering the order of presentation changes the
impact of the evidence items presented. With this in mind, some characteristic points
attract attention in Table 1. For one thing, the cinema ticket reduces the belief in the prosecution hypothesis by 3.87 percentage points. This is expected, as it is introduced alone. However, if introduced as the last evidence item, the cinema ticket increases the belief in the ultimate hypothesis by 0.13 percentage points, from 86.03% to 86.16%. The impact of the alibi on the guilt hypothesis is minimized because it is not independent of it: the more we believe in the guilt hypothesis, the less we believe that the cinema ticket is authentic. In fact, it is implemented in the CPTs that the ticket cannot be authentic if Lisa stabbed Kai. In addition, the sequence of evidence leads to two big jumps in the belief in the guilt hypothesis, from 33.86% to 46.046% and then to 86.16%. The probability of guilt seems overwhelming given the total increase of 76.16 percentage points.
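The order effect on intermediate beliefs, and its disappearance in the final posterior, is easy to reproduce when the evidence items are conditionally independent given the hypothesis. The likelihood ratios below are made up; note that the Lisa and Kai network violates this independence assumption (the ticket's authenticity depends on the guilt hypothesis), which is why the alibi's incremental impact there can even change sign:

```python
# Toy sequential updating in odds form with two evidence items assumed
# conditionally independent given the hypothesis. All numbers are illustrative.

def update(prob, lr):
    """One Bayesian update in odds form, returned as a probability."""
    odds = prob / (1.0 - prob) * lr
    return odds / (1.0 + odds)

prior = 0.10
lr_alibi, lr_testimony = 0.5, 8.0   # assumed likelihood ratios

# Order 1: alibi first, then testimony
a1 = update(prior, lr_alibi)
a2 = update(a1, lr_testimony)

# Order 2: testimony first, then alibi
b1 = update(prior, lr_testimony)
b2 = update(b1, lr_alibi)

print(a1, a2)   # intermediate beliefs differ between the two orders ...
print(b1, b2)   # ... but the final posterior is the same
```

Because the final odds are the prior odds multiplied by all likelihood ratios, and multiplication is commutative, the final posterior is order-invariant under this independence assumption, even though each item's apparent incremental impact depends on when it is introduced.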
The sequential presentation of evidence gives some indication of the evidential support for the prosecution hypothesis. Additionally, the network shows that
Sequence of evidence                          P(H0: Lisa stabbed Kai = t | E) in %

1. Prior with no observations                 10
2. Cinema Ticket                              6.13
3. Tom saw Lisa and Kai at scene              33.86
4. Lisa is left-handed                        46.046
5. Stab wounds inflicted by left-hander       86.16

Table 1: Effect of sequentially presented evidence on the guilt hypothesis in the example of Lisa and Kai. All evidence items were assigned the state true.

every evidence item is actually circumstantial evidence, since none of the items directly renders the guilt hypothesis true or false. Furthermore, the Classification Accuracy variable as well as the Testimony Accuracy node are worth analysing. The Testimony Accuracy node reflects our belief in Tom's testimony. Because the strength of this belief is hard to determine, it makes sense to test the network's sensitivity to this variable by altering its states given all the evidence. When it is set to true, the posterior belief in the guilt hypothesis increases to 99.18%; when set to false, the belief in Lisa's guilt drops to 44.29%. Compared to the prior, the posterior is still high, but the wide range of impact the testimony accuracy has clearly shows the value of Tom's testimony. The impact of the Classification Accuracy on the posterior of the guilt hypothesis is even higher: if it is set to true, the belief in the guilt hypothesis increases to 86.9%, but if set to false the posterior drops to 30.95%.
The analysis of the posterior is interesting with regard to the relationships of the evidence and the hypothesis. Nevertheless, the prior belief in the ultimate hypothesis given motive and opportunity is difficult to elicit in some cases. In this example, the prior belief that Lisa is guilty given that she was at the crime scene is already 50%. This was determined (here, clearly very subjectively) under the supposition that only either Lisa or Tom could be the murderer. The assumption that only two people were at the crime scene is a strong one given the evidence, and hidden possibilities can heavily influence the variables. Therefore, an analysis of evidential support using measures like the likelihood ratio can be advantageous in cases where the priors are hard to determine.
Although the analysis of the posterior alone is informative, it has low explanatory value for legal experts and juries. Considering the belief in one hypothesis alone does not take into account the evidential value that the evidence items have for other hypotheses. Also, the choice of the prior has a high impact, which is problematic for the reasons mentioned above. In order to make BNs applicable for legal experts, structured guidelines for inference from the networks are needed to prevent misinterpretation and confusion.

3.3.3 Support Graphs as Explanation of Bayesian Networks


One method for more systematic inference from BNs was elaborated by Timmer et al. (2015c). The approach is based on similarities between probabilistic and argumentative reasoning and introduces support graphs as an intermediate graphical structure. The authors argue that BNs are efficient, but unintuitive when modelling argumentative support. The aim of using support graphs is to make inference from a BN accessible for legal experts and to explain the information modelled in the network. A support graph focuses on the structural elements of the BN, abstracting from the outcomes of the variables. When identifying premises of an argument in a BN, only certain variables that directly influence the conclusion variable are of interest (Timmer et al., 2015c, pp.5). Focusing only on these variables of interest addresses the problem that arguments can be built from the inference rules of a BN in a multitude of ways.
Influence between variables only flows through active trails. Using this observation, Timmer et al. argue that two particular interactions of the variable of interest with other variables have to be distinguished: the first being a correlation of a direct neighbour with the node in question, the second being the node in question connected to another node via a common effect structure. In order to cover these connections, the authors use the Markov blanket, which is formed by the set of parents, children and parents of children of the node of interest. The Markov blanket comprises a minimal set of variables that directly influences the respective conclusion, because a node is independent of all other nodes in the network given its Markov blanket. All nodes in the Markov blanket constitute possible premises for the argument for a given conclusion variable and are called support factors (Timmer et al., 2015b, pp.112-113).
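The Markov blanket is straightforward to compute from the edge list of a DAG. The sketch below uses shorthand node names loosely modelled on Figure 7; the names and the reduced edge list are chosen here purely for illustration:

```python
# Markov blanket of a node (parents, children, parents of children) computed
# from a DAG given as a list of (parent, child) edges.

from collections import defaultdict

def markov_blanket(edges, node):
    parents, children = defaultdict(set), defaultdict(set)
    for u, v in edges:
        parents[v].add(u)
        children[u].add(v)
    blanket = set(parents[node]) | set(children[node])
    for child in children[node]:
        blanket |= parents[child]   # co-parents: common effect partners
    blanket.discard(node)
    return blanket

# Illustrative fragment of a Lisa/Kai-style network:
edges = [
    ("H1_present", "H0_stabbed"),
    ("H0_stabbed", "Ticket_auth"),
    ("Ticket_auth", "Cinema_ticket"),
    ("Alibi_accuracy", "Cinema_ticket"),
]
# Everything that can serve as a support factor for "Ticket_auth":
print(markov_blanket(edges, "Ticket_auth"))
```

For "Ticket_auth" the blanket contains its parent, its child, and the co-parent "Alibi_accuracy", which reaches it only through the common effect "Cinema_ticket"; exactly these nodes qualify as support factors.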
A support graph is in essence a directed graph with nodes N corresponding to the variables V of the BN; each element of V might be included multiple times in case the network contains a loop⁵. The edges represent support, where positive and negative support are not distinguished. At first, the variable of interest is included, and on the next level the support factors of this variable are added as parents. For each support factor, the associated support factors are added recursively (Timmer et al., 2015b, pp.114-115).

In order to prevent circular reasoning and support via common effect structures, each support node is assigned a forbidden set of variables that are excluded from use as further support factors for the respective node. The forbidden set always consists of the forbidden set of the predecessor Vi and the currently considered support variable Vj itself. Additionally, either the common children of Vj and Vi are included, if Vj is connected to Vi via a common effect structure, or all the parents of Vj, if Vj is a child of Vi. A formal description as in (Timmer et al., 2015b, pp.115) is as follows. The forbidden set F(Nj) of a new support node Nj with V(Nj) = Vj is

• F(Nj) = {Vj} for the root of the support graph.

• F(Nj) = F(Ni) ∪ {Vj} if Vj is a parent of Vi in the BN.

• F(Nj) = F(Ni) ∪ {Vj} ∪ Parents(Vj) if Vj is a child of Vi in the BN.

• F(Nj) = F(Ni) ∪ {Vj} ∪ Children(Vj) if Vj is connected to Vi via a common effect structure in the BN.

⁵ A loop exists in the network if variables are connected on multiple trails (Koller and Friedman, 2009).
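A simplified sketch of this recursive construction, directly following the four forbidden-set rules above (using the formal version, i.e. all children of a common-effect partner), might look as follows. The example DAG and node names are made up for illustration; the published method additionally handles the duplication of variables in looped networks:

```python
# Simplified sketch of the support-graph construction of Timmer et al. (2015b).
# Candidate supporters of a node are the Markov blanket members of its variable,
# minus the node's forbidden set; the forbidden set grows along each branch.

from collections import defaultdict

def build_support_graph(edges, variable):
    parents, children = defaultdict(set), defaultdict(set)
    for u, v in edges:
        parents[v].add(u)
        children[u].add(v)

    def markov_blanket(v):
        mb = set(parents[v]) | set(children[v])
        for c in children[v]:
            mb |= parents[c]          # co-parents via common effect structures
        mb.discard(v)
        return mb

    def expand(v, forbidden):
        node = {"variable": v, "forbidden": forbidden, "supports": []}
        for s in sorted(markov_blanket(v) - forbidden):
            f = forbidden | {s}
            if s in children[v]:      # F(Nj) = F(Ni) ∪ {Vj} ∪ Parents(Vj)
                f |= parents[s]
            elif s not in parents[v]: # common effect partner:
                f |= children[s]      # F(Nj) = F(Ni) ∪ {Vj} ∪ Children(Vj)
            node["supports"].append(expand(s, f))
        return node

    # Root rule: F(Nj) = {Vj}. Recursion terminates because along every branch
    # the forbidden set strictly grows within a finite set of variables.
    return expand(variable, frozenset({variable}))

# Tiny made-up DAG: H1 -> H0 and H1 -> T (testimony).
graph = build_support_graph([("H1", "H0"), ("H1", "T")], "H0")
print(graph)
```

On this toy DAG the construction yields the chain H0 supported by H1, which is in turn supported by T, while the forbidden sets prevent H0 from reappearing as a supporter of its own supporter.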

The support graph based on the BN of Figure 7 for the example of Lisa and Kai is shown in Figure 8. The graph includes the forbidden set for each support factor. The resulting skeleton of the support graph is already a helpful depiction of how inference propagates in the support graph (Timmer et al., 2015c, pp.6). In order to make the support graph an argumentative template, it has to be labelled according to the observations in the BN. If a variable is observed, it is depicted in the graph as observationally supported, in this example displayed as a double-framed node. All other nodes, which are not observationally supported, are greyed out and retained as future support or attack of the argument; this highlights the important branches of the support graph.
Finally, Timmer et al. (2015c, pp.6) propose to include instantiations of the variables in the support graph so that it can be interpreted for inference in the argument. Observed variables are assigned the state given in the BN, here depicted as a blue label in the bottom left corner. For the other nodes, Timmer et al. use probabilistic measures of inferential strength, arguing that they are representative of argumentative inference. The likelihood ratio is proposed as such a measure. Each variable is assigned a state according to its likelihood ratio: if the likelihood ratio is greater than 1, the node is assigned the state true; if it is below 1, it is assigned the state false. The set of observed states of the support factors for the node is used for the calculations, which start at the lowest level and propagate through the branches to the root node. If the likelihood ratio is equal to 1, no state is assigned (Timmer et al., 2015c, pp.6). The authors note at this point that different measures of inferential strength might emphasise distinct argumentative interpretations: the likelihood ratio reflects a change in belief, whereas posterior odds reflect a degree of belief in the conclusion (Timmer et al., 2015c, pp.6). In Figure 8 the likelihood ratio for each node, if observationally supported, is included in the bottom right corner.
Using this information, the different branches can be analysed. In argument terms: each set of parents on each level represents premises for the respective child node, which is then the conclusion of the argument. The strength of the deduction rule is given by the likelihood ratio of the conclusion. This conclusion variable can then be used again in the next argument, for its own child node.

In the example of Kai and Lisa, the first branch shows the argument from Tom's Testimony, via Lisa was present together with Cinema Ticket, to the conclusion variable Ticket Authenticity. In the absence of concrete values this argument would be regarded as an undercutter for the argument for Lisa stabbed Kai, the guilt hypothesis at the root level. This results from the relationship of the ticket's authenticity to the guilt hypothesis: if the ticket is authentic, Lisa is most probably not guilty. Starting at the parents highest in the hierarchy of the graph – in this example at the bottom of the graph – we see that Tom's Testimony supports the hypothesis Lisa was present with a likelihood ratio of 8.2. Assuming Lisa was present and the evidence of the Cinema Ticket, one can see the effect of intercausal reasoning in the network: the assumption that Lisa was present explains away the ticket's authenticity, shown by the likelihood ratio of 0. The one branch of the support graph that could have undercut the other arguments is rendered false and actually provides positive support for the
[Figure 8 graphic: a support graph rooted at H0: Lisa stabbed Kai (assigned true, likelihood ratio 150) with three branches of support factors. The first branch leads from T: Tom's Testimony and Acc: Testimony Accuracy via H1: Lisa was present (true, 8.2), together with Cin: Cinema Ticket, to Aut: Ticket Authenticity (false, 0). The second branch supports H1: Lisa was present (true, 2.37) with Tom's Testimony, Testimony Accuracy, Cinema Ticket and Ticket Authenticity. The third branch supports H2: Lisa attacked with left hand (true, 394.71) with Left: Lisa is left-handed, Stab: Stab Wounds and Class: Classification Accuracy. Each node carries its forbidden set F.]

Figure 8: Full support graph for the example network from Figure 7 with the forbidden set, assigned truth value and, if applicable, likelihood ratio for each node. Observed nodes are marked with a double frame.

root node. It is notable here that there is a discrepancy between the likelihood ratios of the node Lisa was present in the first branch on the second level and in the second branch on the first level. The former has a much higher support, shown by the likelihood ratio of 8.2, compared to the latter's 2.37. This difference results from the exclusion of the Cinema Ticket in the first branch, meaning that not all available information is included in the argument. This piece of evidence actually undercuts the conclusion Lisa
was present. Timmer et al. mention this issue, pointing out that the method only infers from the collective evidence at each level of the graph. Information from evidence with a negative influence on the conclusion is not surfaced if it is paired with enough positive influence from other variables. In the example, the negative influence of the premise Cinema Ticket in the second branch is not represented by the likelihood ratio of Lisa was present. This information can be deduced by comparing the first and second branch in the example; in bigger support graphs, however, it is much more difficult to compare the branches. Since this kind of information could be crucial, Timmer et al. (2015c) argue for further research on this matter, such that the discrepancy between the branches or the likelihood ratio strength can be documented explicitly.
The strongest argument in the support graph is the one listed in the third branch. Here, the two evidence items Lisa is left-handed and the Stab Wounds strongly support the conclusion that Lisa attacked Kai with the left hand. The argument's strength, with a likelihood ratio of 394.7, is much higher than that of the others.
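The likelihood ratio that a support graph node reports for a piece of evidence can be sketched in a few lines. The CPT numbers below are invented for illustration and are not the thesis's actual values:

```python
# Hedged sketch: the likelihood ratio of evidence E for hypothesis H.
# Values > 1 support H, values < 1 attack it.

def likelihood_ratio(p_e_given_h, p_e_given_not_h):
    """LR = P(E | H) / P(E | not H)."""
    return p_e_given_h / p_e_given_not_h

# For example, a testimony observed 82% of the time when H is true and
# 10% of the time when H is false (invented numbers):
print(round(likelihood_ratio(0.82, 0.10), 1))  # → 8.2
```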
Compared to the inference using sequential presentation of evidence, the support graph method provides almost the same information. The impact of the circumstantial evidence involving the stab wound classification as well as Lisa's handedness becomes clear in both approaches. Unfortunately, the information regarding the alibi evidence is only implicitly extractable from the support graph. As only the collective evidential value of the arguments and sub-arguments is illustrated, information about the influence of individual premises on arguments is lost, which is a disadvantage compared to an individual analysis of evidence. Beyond the explanation of legal BNs and the information modelled in them, Timmer et al. (2015a) propose a method that uses the constructed support graphs in the process of extracting arguments from BNs; this is outside the scope of this thesis, but shows the possible application in legal contexts.

[Diagram: (a) the scenario idiom with a scenario node, events, subevents and a CPT assigning P(Event 1 = t | Scenario = t) = 1; (b) the variation idiom with a disjunction node and variations 1 to 3; (c) two scenario nodes merged via a constraint node whose CPT P(Constraint | S1, S2) marks the case of both scenarios being true as impossible, selects the true scenario when exactly one holds, and assigns 0.5 to each scenario when both are false.]

Figure 9: (a): The scenario idiom together with the subscenario idiom. (b): The variation
idiom. (c): Two merged scenarios with the constraint node's CPT.

3.4 Narratives and Bayesian Networks


The methods elaborated in subsection 3.3 aim to integrate characteristics of arguments into the design of BNs for legal reasoning and to provide guidelines for translating the inference results into arguments. Another approach to legal reasoning is narratives. The use of narratives aims to provide a coherent story for the evidence in a criminal case and to make evidential gaps explicit. Characteristics of narratives in legal reasoning are outlined in subsection 3.4. Vlek et al. developed a method for integrating narratives into probabilistic reasoning with BNs. The construction method is described in subsubsection 3.4.1 and a proposed inference method is explained in subsubsection 3.4.2.

3.4.1 Structuring Bayesian Networks using Scenarios


Vlek et al. (2014) explained a method for using BNs on the basis of narratives. The approach aims to capture coherence properties of a scenario in the BN as well as to make retrieval of the narrative from the graph possible (Vlek et al., 2014, pp.7). Additionally, it is thought to provide a representation of the quality of each scenario for comparing scenarios. The method uses three scenario scheme idioms: the scenario idiom, the variation idiom and the merged scenario idiom. All nodes are required to be Boolean and to be formulated as propositions. To assist the translation of the BN into text, the arrows between the nodes are annotated, according to their relationship, with either a c for causal or a t for temporal. The text retrieved from the BN is necessary for a report on the quality of the scenario. To show a way of constructing a BN for a whole case, the method proposed in (Vlek et al., 2014) and extended in Vlek et al. (2016) is explained in the following using the example of Lisa and Kai.

The scenario idiom captures the coherence of the scenario by connecting a parent
scenario node with all the elements of the scenario as children. The scenario node
has only outgoing connections and is therefore the cause in a common cause structure.
Because it is never instantiated, the influence between the nodes can flow through it.
This assures a transfer of evidential support (Vlek et al., 2014, pp.7), i.e. elements in
a coherent scenario might receive higher (or lower) belief if other elements supported
by evidence gain more credibility. The subjective prior of the belief in the scenario is
assigned to the scenario node and represents the plausibility of the scenario, i.e. how
well the scenario agrees with our knowledge about the world without any evidence (Vlek
et al., 2014, pp.8-10). Complementing the scenario node, subscenario nodes account for
elements in the narrative that need to be assessed in detail (Vlek et al., 2014, pp.10).
Figure 9 (a) shows a generic structure with a scenario and a subscenario idiom. The scenario idiom has the property that if the scenario node is true, all the elements conditioned on it have to be true. Therefore, the probability P(Event = t | Scenario = t, ...) has to be assigned the value 1, as shown in the CPT of Figure 9 (a). This also holds for the elements conditioned on a subscenario node. If the scenario is not true, the elements in it might still be true (Vlek et al., 2014, pp.9). This special relationship between the nodes is signified by a doubled edge.
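The deterministic part of these CPTs and the resulting transfer of evidential support can be written out as a minimal sketch. The function names and all probabilities (the prior and the "leak" probabilities of the elements being true without the scenario) are invented for illustration:

```python
# Scenario idiom sketch: scenario node S with elements E1, E2, where
# P(Ei = true | S = true) = 1 as required by the idiom. Observing E1
# raises the belief in S and, through S, in E2: the transfer of
# evidential support.

def posterior_scenario(prior_s, leak):
    """P(S = true | E1 = true) with P(E1=t|S=t) = 1, P(E1=t|S=f) = leak."""
    return prior_s / (prior_s + leak * (1.0 - prior_s))

def belief_e2(prior_s, leak1, leak2, observe_e1):
    """P(E2 = true), optionally after observing E1 = true."""
    p_s = posterior_scenario(prior_s, leak1) if observe_e1 else prior_s
    return 1.0 * p_s + leak2 * (1.0 - p_s)

before = belief_e2(0.01, 0.2, 0.1, observe_e1=False)
after = belief_e2(0.01, 0.2, 0.1, observe_e1=True)
print(before < after)  # → True: observing E1 also raises the belief in E2
```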
The variation idiom is an element in the scenario that enables the constructor of
the model to include variations of elements, if the alteration does not influence the
overall conclusion of the scenario. The use of variation structures helps to reduce the complexity of the graph and promotes a clear structure for the relevant evidence. For
example, it would not influence the scenario’s conclusion Lisa stabbed Kai if one
wanted to include different ways in which Lisa got a knife for the murder in our example.
In one variation she might have taken the knife from the table, in another variation
she might have had it in her bag. The idiom consists of a disjunction node – a node in the scenario whose CPT represents the usual relationship of an element in the scenario – and subelements conditioned on the disjunction node. The disjunction node is modelled analogously to the explaining-away idiom used in the approach by Fenton et al. (2013), see subsubsection 3.3.1. The subelements represent the variations V = {V1, V2, ..., Vn}. In order to make sure that exactly one variation is true, the elements are interconnected such that there is an edge from Vi to Vj only if i < j (Vlek et al., 2014, pp.12). The CPTs for the elements reflect the relationship between the variations: exactly one element has to be true if the disjunction node is true, and all variations have to be false if it is false. This is done by assigning to each variation element the probability elicited for it, given that the disjunction node is true and all preceding variations are false. The last variation Vn is assigned probability 1 to make sure that exactly one variation is true (Vlek et al., 2014, pp.14-15). Figure 9 (b) shows a generic version of
the variation idiom.
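The CPT construction just described can be sketched as follows. The elicited conditional probabilities in the example call are invented:

```python
# Variation idiom sketch: given the disjunction node is true, variation
# V_i is true only if V_1..V_{i-1} were false; the last variation is
# fixed to conditional probability 1 so exactly one is selected.

def variation_distribution(conditionals):
    """Marginal P(V_i is the selected variation | disjunction = true)
    from the chain of elicited conditionals; the last entry must be 1.0."""
    assert conditionals[-1] == 1.0
    probs, none_so_far = [], 1.0
    for p in conditionals:
        probs.append(p * none_so_far)   # V_i true, all earlier ones false
        none_so_far *= 1.0 - p
    return probs

dist = variation_distribution([0.6, 0.5, 1.0])
print(dist)       # → [0.6, 0.2, 0.2]
print(sum(dist))  # sums to 1 (up to floating point): exactly one variation is true
```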
In order to build a BN with all the scenarios of a case, the merged scenario idiom
is given as a way to assemble the scenarios. The merged scenario idiom proposed by
Vlek et al. (2014) and referenced in (Vlek et al., 2016) only accounts for the case that
the collection of scenarios to be merged is exclusive and exhaustive (Vlek et al., 2014,
pp.16-17). The scenarios are intended to be exclusive, and if they are not, they should
be modelled as variations in one scenario, not separately. Exhaustiveness, however, is
not guaranteed with the choice of scenarios. For example, two scenarios are designed for

[Diagram: Scenario 1 with the elements Lisa and Kai met at crime scene and Lisa had a knife leading (t) to Lisa murdered Kai, which is unfolded into the subscenario Lisa attacked Kai with left hand followed (t) by Lisa stabbed Kai and leads (c) to Kai died and Lisa provided a false alibi.]
Figure 10: Scenario 1 for the example of Lisa and Kai without evidence.

the example of Lisa and Kai and shown in Figure 11. In one, Tom is the murderer; in the other, Lisa. These two scenarios are exclusive, but not exhaustive, since there exists
the possibility that another unknown person is actually the killer. It might well be that
evidence against one scenario seems to strongly favour the other, but in fact supports
the unknown scenario. This problem arising from unknown causes is mentioned in
Fenton et al. (2011, pp.9), but no sufficient solution is proposed. If two exclusive and exhaustive scenarios were to be merged, Vlek et al. (2014) propose to introduce a constraint node, analogously to the constraint node in the explaining-away idiom proposed by Fenton et al. (2013), see subsubsection 3.3.1. For modelling the example in this thesis, the constraint node from the Anjum example in (Vlek, 2016) was used, since its CPT allows both scenarios to be false and in that case assigns equal probability to either of them. The idiom together with the CPT for the constraint node is shown in Figure 9 (c). It should be noted here that the problem of modelling exclusive, but not exhaustive causes remains unsolved and may well affect the inference performed on the example.
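Written out as a sketch, the constraint node's CPT from Figure 9 (c) looks as follows; the dictionary layout is only meant to make the columns explicit:

```python
# Constraint node CPT sketch: each column P(Constraint | S1, S2) sums
# to 1. Instantiating the constraint to anything but "impossible" rules
# out both scenarios being true; two false scenarios get an even split.

CPT = {
    # (S1, S2): {constraint value: probability}
    (True,  True):  {"scenario1": 0.0, "scenario2": 0.0, "impossible": 1.0},
    (True,  False): {"scenario1": 1.0, "scenario2": 0.0, "impossible": 0.0},
    (False, True):  {"scenario1": 0.0, "scenario2": 1.0, "impossible": 0.0},
    (False, False): {"scenario1": 0.5, "scenario2": 0.5, "impossible": 0.0},
}

for parents, column in CPT.items():
    assert abs(sum(column.values()) - 1.0) < 1e-12  # valid distribution
print(CPT[(True, True)]["impossible"])  # → 1.0
```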
Vlek et al. (2014) outline a procedure for composing the BN step by step. First, all
necessary elements are gathered and added to the graph using the scenario idiom. To
construct complete scenarios, scenario schemes should be used (Vlek et al., 2016, pp.8).
A scenario is complete if for every element of the scenario scheme there is a proposition
in the corresponding BN (Vlek et al., 2016, pp.9). To date, no database of scenario schemes exists. For the first scenario in our example the elements Lisa and Kai
met at crime scene and Lisa had a knife influence the element Lisa murdered
Kai. Because the relationship is temporal – Lisa and Kai met at crime scene, then
Lisa murdered Kai – the edges are annotated with a t for temporal. The element Lisa
murdered Kai in turn influences the elements Kai died and Lisa provided a false
alibi. These elements are causally related – Lisa murdered Kai, therefore Kai died – hence the edges are annotated with a c for causal.
Subsequently, each element of the scenario is unfolded to the required extent: an element is unfolded into a subscenario node if more detail is required (Vlek et al., 2014, pp.18). If the elements cannot be

[Diagram: the merged network. Scenario 1 (Lisa and Kai met at crime scene, Lisa had a knife, Lisa murdered Kai with the subscenario Lisa attacked Kai with left hand and Lisa stabbed Kai, Kai died, Lisa provided a false alibi) and Scenario 2 (Tom was at scene, Tom had a knife, Tom murdered Kai with the subscenario Tom attacked Kai with left hand and Tom stabbed Kai, Kai died, Tom accuses Lisa) are joined by a constraint node; the evidence nodes Kai's dead body, Cinema Ticket, Tom's Testimony, Lisa is left-handed, Stab wound classification and Tom is right-handed are attached to the elements they support.]
Figure 11: The BN with the merged scenarios for the example of Lisa and Kai.

related directly to the given evidence, a subscenario explaining this evidence has to be
introduced. In our example the element Lisa murdered Kai needs unfolding, due to
our knowledge about the circumstances of the murder. The evidence we have regarding
this – Lisa is left-handed and the Stab wound classification evidence – cannot be directly connected to Lisa murdered Kai. Therefore, using the subscenario idiom, two subelements, Lisa attacked Kai with left hand and Lisa stabbed Kai, are introduced and connected in a temporal manner. The fully unfolded scenario BN for the example is shown in Figure 10.
In the next step, Vlek et al. (2014) suggest merging all scenarios into one BN containing the possible scenarios. For the example of Lisa and Kai we assume that during the trial the defence sketched a scenario against the prosecution scenario. This scenario is based on the story that Tom murdered Kai and falsely accused Lisa. For simplicity, it is assumed that these two scenarios are the only determinable stories behind the crime. Evidence is then added to the network as children of the nodes it supports. The last step in the design of a BN for scenarios is to include evidence idioms as proposed in (Fenton et al., 2013) as local structures of evidence, for example the evidence-accuracy idiom (Vlek et al., 2014, pp.19). Since for
the example most of the accuracy issues could be implemented within the CPTs, this
step was left out for clarity. The full extended BN with the two merged scenarios and
the evidence is shown in Figure 11. Note here that a new item of evidence was added to the network: in the example case, Tom is right-handed was discovered during the trial. This shows a feature of the scenario approach, namely that the evidential gap left by the previously unsupported node Tom attacked Kai with left hand was closed by investigating Tom's handedness.

3.4.2 Quality of Scenarios


Adding to the framework for the design, Vlek et al. (2016) published guidelines for
the retrieval of a report for the BN modelling narratives. This report is thought to
assist the inference from the network and enables an evaluation of the quality of each
scenario. This also helps to present the results in a less mathematical way in court,
which is crucial to make the network accessible for laymen. As explained already
in subsection 3.4, the quality of a scenario can be assessed under objectives like consistency, completeness and plausibility. Furthermore, since a probabilistic approach to modelling the scenario is used, an evaluation of evidential support is another part of the analysis. Consistency and completeness are
binary properties of a scenario, in contrast to plausibility (Vlek et al., 2016, pp.11-12). Vlek et al. argue that only consistent and complete scenarios should be considered in the network, as they are not suited for explaining the crime otherwise. Since consistency and completeness of scenarios are already considered in the choice of scenarios for the BN, these factors do not have to be reported in the assessment of the scenario quality. A lack of plausibility in the scenario, however, is not a reason to dismiss the scenario right away, since evidential support for implausible elements can still increase the belief in the scenario, as Vlek et al. (2016) explain. In the example of Lisa and Kai, both scenarios – the one with Lisa, the other with Tom as the perpetrator – have implausible elements in them. One of them is that Lisa, or in turn Tom, had to have a knife. With sufficient evidence, for example a knife found in Lisa's backpack, this implausible element gains sufficient support, which makes the whole scenario more probable. Therefore, it makes sense to report the implausible elements of the network and to point out whether they are supported by evidence. If not, they remain an evidential gap.
Vlek et al. (2016) propose that the report be composed of a translation of the
scenario’s elements into a text, an evaluation of evidential support of the evidence as
well as a list of implausible elements and relations in the scenario. The translator starts at the nodes that are conditioned only on the scenario node and writes down their names. The direction of the edges should be respected. If several nodes are connected to a common child, these nodes are joined with an "and" in the report. The next step is to evaluate the kind of relationship these nodes have to their child. Here the annotations on the edges, c and t, are used. Whenever a c is annotated to an edge, the next sentence containing the child node starts with a "therefore", and whenever all edges involved are annotated with t it should start with a "then" (Vlek et al., 2016, pp.15). For the example's first scenario the first sentence would be "Lisa and Kai met at crime scene and Lisa had a knife. Then Lisa murdered Kai." Whenever the edges have different annotations, "therefore" is used as the connective, since this relationship between the elements is of greater importance (Vlek et al., 2016, pp.16). Another case that has
x < 0.001: Very strong evidence to attack
0.001 ≤ x < 0.01: Strong evidence to attack
0.01 ≤ x < 0.1: Moderate evidence to attack
0.1 ≤ x < 1: Weak evidence to attack
1 < x ≤ 10: Weak evidence to support
10 < x ≤ 100: Moderate evidence to support
100 < x ≤ 1000: Strong evidence to support
1000 < x: Very strong evidence to support

Table 2: Qualitative scale for evidential support from (Vlek et al., 2016, pp.22).

to be considered is whether explaining away occurs between two parents. In that case, they
Subscenario nodes are included in square brackets. To make the translation complete,
the prior and posterior beliefs in the scenario node should be included in the text. A
full translation of the two scenarios of our example is given in the following paragraph:

• Scenario 1 (prior probability: 0.001, posterior probability: 0.314)


Lisa and Kai met at crime scene and Lisa had a knife. Then Lisa murdered
Kai [Lisa attacked Kai with the left hand. Then Lisa stabbed Kai.]. Therefore
Kai died and Lisa provided a false alibi.

• Scenario 2 (prior probability: 0.001, posterior probability: 0.0014)


Tom was at crime scene and Tom had a knife. Then Tom murdered Kai [Tom attacked Kai with the left hand. Then Tom stabbed Kai]. Therefore Kai died and Tom accuses Lisa.
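The translation procedure can also be sketched programmatically. The function below implements only the "and"/"then"/"therefore" rules described above; node names are from the example, everything else is simplified:

```python
# Report translator sketch: parents of a common child are joined with
# "and"; the child's sentence is introduced by "Then" if every incoming
# edge is temporal (t) and by "Therefore" as soon as any edge is causal (c).

def translate(parents, child, edge_labels):
    """One translation step for a child node and its annotated in-edges."""
    connector = "Then" if all(l == "t" for l in edge_labels) else "Therefore"
    return "%s. %s %s." % (" and ".join(parents), connector, child)

step1 = translate(
    ["Lisa and Kai met at crime scene", "Lisa had a knife"],
    "Lisa murdered Kai", ["t", "t"])
print(step1)
# → Lisa and Kai met at crime scene and Lisa had a knife. Then Lisa murdered Kai.

step2 = translate(["Lisa murdered Kai"], "Kai died", ["c"])
print(step2)  # → Lisa murdered Kai. Therefore Kai died.
```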

For the second part of the report, Vlek et al. propose that evidential support values
should be assessed. Using the likelihood ratio is not an option in this case because the
scenarios given are not always exclusive and exhaustive (Vlek et al., 2016, pp.20-21).
Although the focus is on a set of scenarios determined to be possible, there might still
be a possibility that another, not considered scenario is actually true. For reasons given
in subsection 2.3, the likelihood ratio is not a reliable measure of strength in this case.
Vlek et al. (2016) propose to compare the belief in the scenario given the evidence to
the prior probability of the scenario. The measure of support in this case is:
P(Scenario = true | Evidence) / P(Scenario = true)
If this measure of evidential support is < 1 it signals attacking, if it is > 1 it signals
supporting and if it is equal to 1 it signals neutral evidence. Vlek et al. argue for
Evidence | P(Scenario 1|Evidence)/P(Scenario 1) | P(Scenario 2|Evidence)/P(Scenario 2)
Kai's dead body | 70.22 | 70.22
Cinema Ticket | 0.33 | 1
Tom's Testimony | 4.86 | 3.48
Lisa is left-handed | 5.69 | 1
Tom is right-handed | 1 | 0.01
Stab wound classification | 30.65 | 30.65
Combined evidence | 314.13 | 1.40

Table 3: Evidential support for individual items and set of evidence for scenario 1 and
scenario 2.

considering the measure of support of each evidence item individually as well as of the
whole set of evidence for the scenario, in order to cover the effect of the transfer of
evidential support in the BN. For the translation of the support values to the report
format Vlek et al. (2016) provide a qualitative scale shown in Table 2. The evidential
support values are listed in Table 3.
Translated with the scale from Table 2, the evidential support values constitute the
second part of the report for the example of Lisa and Kai:

• Evidence to support/attack scenario 1


Kai’s dead body = true: moderate evidence to support
Stab wound classification = true: moderate evidence to support
Lisa is left-handed = true: weak evidence to support
Cinema Ticket = true: weak evidence to attack
Tom’s Testimony = true: weak evidence to support
Combined strength of evidence: strong evidence to support

• Evidence to support/attack scenario 2


Kai’s dead body = true: moderate evidence to support
Stab wound classification = true: moderate evidence to support
Tom is right-handed = true: moderate evidence to attack
Tom’s Testimony = true: weak evidence to support
Combined strength of evidence: weak evidence to support

From this it is possible to evaluate which evidence is distinguishing evidence and which is neutral. Distinguishing evidence shows different support values for the different scenarios, whereas neutral evidence shows identical values. In the example all evidence is distinguishing evidence, except Kai's dead body = true and Stab wound classification = true.
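The support measure and the verbal scale of Table 2 can be sketched together in a few lines. The function names are invented; the values fed in at the bottom are the example's reported values:

```python
# Evidential support sketch:
# support = P(Scenario = true | Evidence) / P(Scenario = true);
# values below 1 attack the scenario, values above 1 support it.

def support(posterior, prior):
    return posterior / prior

def verbal(x):
    """Map a support value to the qualitative scale of Table 2."""
    if x < 0.001:  return "very strong evidence to attack"
    if x < 0.01:   return "strong evidence to attack"
    if x < 0.1:    return "moderate evidence to attack"
    if x < 1:      return "weak evidence to attack"
    if x == 1:     return "neutral evidence"
    if x <= 10:    return "weak evidence to support"
    if x <= 100:   return "moderate evidence to support"
    if x <= 1000:  return "strong evidence to support"
    return "very strong evidence to support"

# Scenario 1 with the combined evidence (prior 0.001, posterior 0.314):
print(verbal(support(0.314, 0.001)))  # → strong evidence to support
# The cinema ticket's support value for scenario 1:
print(verbal(0.33))                   # → weak evidence to attack
```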
At last, the plausibility of the scenarios has to be examined by means of a list of implausible elements and connections in the scenarios, evidential gaps, and elements supported by evidence. In order to determine which elements are implausible, Vlek et al. (2016) introduce a threshold of 0.01. Considering a set of elements E in the network, an element Ei is rendered implausible if P(Ei = true) ≤ 0.01. If P(Ej = true | E1 = true, ..., Ei = true) ≤ 0.01 we render Ej implausible given its set of parents E1, ..., Ei (Vlek et al., 2016, pp.23). For each implausible element it is stated whether it is supported by evidence or whether it remains an evidential gap.
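The implausibility test itself is a one-line check against the proposed threshold; the example probability below is invented:

```python
# Implausibility sketch: an element is implausible when its (conditional)
# probability of being true does not exceed the 0.01 threshold.

THRESHOLD = 0.01

def implausible(p_true, threshold=THRESHOLD):
    return p_true <= threshold

# e.g. an invented prior of 0.002 for "Lisa had a knife":
print(implausible(0.002))  # → True
print(implausible(0.2))    # → False
```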
The complete list of the implausible elements for both scenarios of the example is:

• Implausible elements/connections in each scenario:


Scenario 1 contains the implausible element Lisa had a knife. Given the
evidence this remains implausible, it is thus an evidential gap.
Scenario 1 contains the implausible element Lisa attacked Kai with left
hand, which is supported by evidence.
Scenario 2 contains the implausible element Tom had a knife. Given the
evidence this remains implausible, it is thus an evidential gap.
Scenario 2 contains the implausible element Tom attacked Kai with left
hand, which is attacked by evidence.

As expected from the previous analyses in subsubsection 3.3.2 and subsubsection 3.3.3, the report strongly supports the first scenario, in which Lisa is the perpetrator. The posterior probability of 31% is much higher than the 1.4% of the second scenario. Additionally, the combined strength of the evidence supports scenario 1 strongly, whereas it supports scenario 2 only weakly. This is the result of the evidence Tom is right-handed, since it directly contradicts Tom attacking Kai with the left hand. It is a moderate attack on the scenario. The Cinema Ticket evidence on the other hand attacks scenario 1 only weakly, such that the combined strength of the evidence is still strongly in support of scenario 1. The list of implausible events also supports scenario 1: it started with two implausible elements, of which only one remains an evidential gap after the inclusion of evidence, whereas scenario 2 is still left with two implausible events, of which one is even attacked by evidence. As a result of this report, the second scenario can be ruled out. The posterior of the first scenario is still relatively low at only 0.31, so a third, unknown scenario cannot be ruled out and should be further explored. At this point it should be noted that the scenarios were merged with the merged scenario idiom, which does not correctly reflect the exclusive, but not exhaustive nature of the two scenarios. Also, these priors, as well as all the priors for the previous BNs, are subjectively biased and cannot be backed by data.

4 Discussion
In order to examine the applicability of BNs to full criminal cases and the usefulness of this approach in court, different methods were outlined in section 3. The two structuring approaches explained in subsubsection 3.3.1 and subsubsection 3.4.1 and the three methods for inference outlined in subsubsection 3.3.2, subsubsection 3.3.3 and subsubsection 3.4.2 of this thesis show different strengths and weaknesses in the attempt to integrate legal reasoning into probabilistic reasoning and to be accessible for legal experts. These strengths and weaknesses are discussed in this section on the basis of the questions stated in the introduction, to test whether the methods meet the following objectives.
• The structure of the network must incorporate the basic principles underlying
legal reasoning with arguments and narratives respectively.

• The costs for designing the network need to be reasonable, such that the use in court is worth consideration.

• The inference from the network needs to be robust against wrong prior assump-
tions.

• The inference must be accessible for lawyers and juries.


The integration of the basic concepts of legal reasoning into the BN design provides
the basis for the BN to be applicable in court. As shown in the explanation of the
structuring approaches, both the idiom-based approach by Fenton et al. (2013) and
the scenario approach by Vlek et al. (2014) display the characteristics of arguments
and narratives. In the BN created using the method from (Fenton et al., 2013), the
evidence nodes function as premises for arguments concerning the conclusion variable,
the guilt hypothesis. This seems naturally to be the case for any BN. The different
take from (Fenton et al., 2013) is that they provide a set of idioms for the design
and simultaneously assist in finding deduction rules used in arguments. Implicitly, by
defining the idioms they provide a guideline for the inference from them. For example,
if the opportunity node is false, it has to be deduced that the guilt hypothesis cannot
be true. This is a deduction rule leading from the premise “the defendant had no
opportunity” via the deduction rule “the opportunity is necessary for the guilt of the
defendant” to the conclusion “the defendant is not guilty”. Since the opportunity is
modelled in the network as a parent of the guilt hypothesis, the deduction rule is
incorporated in the CPT for the respective nodes. Of course, inference from the network
usually requires more steps, but it becomes clear that the idioms aid the extraction of
arguments.
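This deterministic deduction rule, encoded in the CPT, can be sketched in a few lines; the nonzero probability is invented:

```python
# Sketch: the deduction rule "opportunity is necessary for guilt" lives
# in the CPT as P(Guilty = true | Opportunity = false) = 0, so observing
# no opportunity forces the guilt hypothesis to false.

P_GUILTY_GIVEN_OPPORTUNITY = {True: 0.3, False: 0.0}  # 0.3 is invented

def posterior_guilty(p_opportunity, observed_opportunity=None):
    if observed_opportunity is not None:
        return P_GUILTY_GIVEN_OPPORTUNITY[observed_opportunity]
    # no observation: marginalize over the opportunity node
    return (P_GUILTY_GIVEN_OPPORTUNITY[True] * p_opportunity
            + P_GUILTY_GIVEN_OPPORTUNITY[False] * (1 - p_opportunity))

print(posterior_guilty(0.5, observed_opportunity=False))  # → 0.0
print(posterior_guilty(0.5))                              # → 0.15
```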
The modelling approach to narratives provided by Vlek et al. (2014) captures all
the characteristics that narratives possess. As Vlek et al. (2016) propose, the scenarios
should be designed using scenario schemes, analogous to the non-probabilistic approach
to building narratives using story schemes. To date, no database of scenario schemes exists, so further research is needed in this area. Using the scenario schemes automatically leads to the notion of completeness, since only complete scenarios fulfil
the scheme and should be regarded. Consistency is realised via the variation idiom. If
inconsistencies arise, the probability of the scenario is set to zero. The coherence of
the scenario is implemented with the scenario idiom through which evidential transfer
can occur to update the beliefs in the variables. The concept of plausibility is realised by setting the priors such that they correspond to our world knowledge. Evidential
gaps are nodes in the network that are not supported by direct evidence. Hence all
properties of the scenarios are implemented. However, one issue occurs in the case that
exclusive, but possibly not exhaustive scenarios are compared in the same network. The
merged scenario idiom by Vlek et al. (2014) does not account for this case so further
research is needed here.
If the respective legal reasoning approach can be implemented in the network, it should be implementable at justifiable costs. If the design of the network, the probability elicitation process and the inference from it are too expensive, the method does not provide an asset
for legal experts. However, the costs of network design are relatively high, considering
the prior elicitation for each node and the relationships. The scenario approach by
Vlek et al. (2014) is worse in comparison, since with every scenario more priors are
needed. In our example the number of priors needed doubles (neglecting the evidence).
Another problem arises from the practically infinite number of possible scenarios for one set of evidence. Of course, usually a small set of scenarios is already much more plausible than the others. Still, there are possibly many small variations in criminal cases. The general
problem with modelling criminal cases is that there is not a closed set of circumstances
for the crimes. Every case is different. This problem is tackled to some degree by
fitting the scenarios to scenario schemes and therefore defining a set of prioritized
events. This helps to abstract from small variations and focus on the important parts.
However, since no database exists for the schemes, this is an unsolved problem which
may result in unjustifiably high costs for the design. Nevertheless, the advantages of
the scenario approach might justify the costs. Using and comparing scenarios might
lead to the discovery of new evidence, when investigated in the direction of evidential
gaps. Additionally, the approach is highly adversarial, and prevents a consideration too
focused on the scenario relating to the defendant’s guilt by explicitly exploring other
possibilities. Both structuring approaches enable the introduction of new nodes without redesigning the whole network.
Inference methods for networks designed for argumentative and narrative reasoning
have to be robust against erroneous prior assumptions, since the prior elicitation
process constitutes a major issue when working with probabilistic reasoning. The
methods presented – sequential presentation of evidence, support graphs and the quality
evaluation of the scenarios – show different approaches to evidential value analysis and
how this is connected to the belief in the guilt hypothesis.
Sequential presentation of evidence is very prone to misinterpretation of the analysis,
since only the change of the posterior belief in the guilt hypothesis is considered. It is
not compared to the counter-hypothesis and the values are dependent on the priors.
This is a problem, since the jury may misunderstand the posterior as if the belief were conclusively determined. In fact, only the evidential value of the evidence in the network should
be regarded, since this is less prone to wrong assumptions. A sensitivity analysis on
the accuracy values might influence the perception of the results, since it shows how
sensitively the network reacts to the respective prior beliefs.
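Such a sensitivity analysis can be sketched in a few lines: sweep the accuracy parameter of a single testimony node in a two-node toy model and watch the posterior of the hypothesis move. The model and all numbers are invented:

```python
# Sensitivity analysis sketch: how the posterior P(H | testimony = true)
# reacts to the assumed accuracy of the testimony, with
# P(T|H) = accuracy and P(T|not H) = 1 - accuracy.

def posterior_h(prior_h, accuracy):
    num = accuracy * prior_h
    return num / (num + (1 - accuracy) * (1 - prior_h))

# Sweep the accuracy prior with a fixed hypothesis prior of 0.1:
for acc in (0.6, 0.8, 0.95):
    print(acc, round(posterior_h(0.1, acc), 3))
```

A steep change across the sweep signals that the conclusion hinges on the elicited accuracy value and should be reported with care.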
Compared to the analysis of posteriors, support graphs provide a much more robust method of analysis with regard to erroneous priors. The likelihood ratio provides a
measure of evidential strength, as well as a measure of argumentative strength for
the different branches in the graph. However, interactions between different branches
influencing the argument strength are not shown explicitly, which leads to a loss of
information. For example, the negative influence of the cinema ticket evidence can only
be read out implicitly from the support graph. More research on detailing the support
values of the individual branches in the graph is needed.
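The likelihood ratio's independence from the prior can be made explicit via the odds form of Bayes' rule, posterior odds = prior odds × LR. A small sketch with illustrative numbers, not values taken from the example case:

```python
def likelihood_ratio(p_e_given_h, p_e_given_not_h):
    """Strength of evidence E for H against not-H; does not involve P(H)."""
    return p_e_given_h / p_e_given_not_h

def posterior_odds(prior, lr):
    """Odds form of Bayes' rule: posterior odds = prior odds * LR."""
    return (prior / (1.0 - prior)) * lr

lr = likelihood_ratio(0.9, 0.1)  # LR = 9: E is nine times more likely under H
# The same LR applies whatever prior the fact-finder starts from:
print(round(posterior_odds(0.1, lr), 3))  # prior odds 1:9 -> posterior odds 1:1
print(round(posterior_odds(0.5, lr), 3))  # prior odds 1:1 -> posterior odds 9:1
```

This is why reporting the likelihood ratio, rather than a posterior, shields the analysis from erroneous prior assumptions: the fact-finder can combine the reported LR with any prior.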
With regard to scenarios, the use of the strength measure P(Scenario | Evidence) / P(Scenario) is a
good way to compensate for possibly wrong priors. The use of the likelihood ratio is not
possible, since it cannot be guaranteed that the scenarios are exclusive and exhaustive.
The measure incorporating the change of beliefs is less accurate, since the priors are
still included, but the verbal scale in the report shown in Table 2 was adapted to
account for some degree of freedom in the choice of priors.
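By Bayes' rule this measure equals P(Evidence | Scenario) / P(Evidence), so it can be read as how strongly the evidence raises the probability of the scenario, without requiring the rival scenarios to be exclusive and exhaustive. A sketch with invented probabilities, not the values of the example case:

```python
def scenario_support(p_sc, p_sc_given_e):
    """The strength measure P(Sc | E) / P(Sc): values > 1 mean the evidence
    supports the scenario, values < 1 mean it speaks against it."""
    return p_sc_given_e / p_sc

# A scenario with a small prior whose posterior rises once the evidence is in:
print(round(scenario_support(0.001, 0.05), 3))    # 50.0 -> strong support
# ... and one whose posterior drops instead:
print(round(scenario_support(0.001, 0.0005), 3))  # 0.5 -> evidence against
```

Because the prior appears in both numerator and denominator of the posterior-to-prior ratio, a moderately wrong choice of prior shifts the measure far less than it shifts the raw posterior.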
Finally, it should be discussed whether the respective inference methods are accessible to
legal experts and juries, since this is a necessary condition and motivation for the use
of BNs in court. The sequential presentation and the analysis of the posterior do not
provide a guideline for interpretation of results. The posterior can be misinterpreted.
Additionally, the order in which the evidence is presented matters and must be taken
into account. Since this method is not a structured approach to evidence analysis and
is prone to misinterpretation, it does not close the communication gap between
scientists and legal experts. It
rather shows that a framework for the inference from the BNs is needed which includes
guidelines for the interpretation of the results.
In comparison, the support graph method of Timmer et al. (2015c) has several
advantages. It provides a structured guideline for the inference and a useful illustration
of the inference process. In the support graph, arguments can be extracted by following
the branches, which is useful for legal experts who are used to argument illustrations
of the same kind. Undercutters of arguments can be easily spotted, and the strength
of the arguments is represented in the graph. Focusing on the structural characteristics
of the BN via the Markov blanket makes argument extraction from the BN more efficient
than a step-by-step enumeration of arguments, which would include many redundant
arguments. Another advantage of this method is that the support graphs can be
computed automatically.
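The Markov blanket of a node — its parents, its children, and the children's other parents — can be computed from the DAG structure alone. A minimal sketch (the dictionary encoding of the graph is my own, not the representation used by Timmer et al.):

```python
def markov_blanket(node, parents):
    """`parents` maps each node to the set of its parents in the DAG.
    Returns the Markov blanket of `node`: its parents, its children,
    and the other parents of its children."""
    children = {n for n, ps in parents.items() if node in ps}
    co_parents = {p for child in children for p in parents[child]}
    return (parents[node] | children | co_parents) - {node}

# Toy DAG in the spirit of the idiom-based example network:
dag = {
    "H1": set(),                       # Lisa was present
    "H0": {"H1"},                      # Lisa stabbed Kai
    "Authenticity": {"H0"},            # ticket authenticity
    "Ticket": {"H1", "Authenticity"},  # cinema ticket evidence
}
print(sorted(markov_blanket("H1", dag)))  # ['Authenticity', 'H0', 'Ticket']
```

Given its Markov blanket, a node is conditionally independent of the rest of the network, which is what licenses restricting argument extraction to this local neighbourhood.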
The evaluation of the scenario quality has the advantage that the framework already
provides a systematic approach for deriving a report in text form from the network.
The step-by-step instructions are very accessible for lawyers, since they involve simple
rules and only basic calculations. Also, it succeeds in painting a complete picture of the
comparison between several scenarios by focusing on the different aspects of change of
belief in the scenario, distinguishing evidence and implausible elements in the scenarios.
In summary, the two structuring approaches accurately capture the properties of
the respective legal reasoning methods. However, there are still central elements in
the narratives approach for which further research is needed, namely a compilation
of a scenario scheme database, the restriction of the variety of possible scenarios and
a solution for the merged scenario idiom. Inference using sequential presentation of
evidence and sensitivity analysis fails to provide a tool for legal experts, since the
results invite reasoning fallacies and the analysis is pitched at too mathematical a
level. It might still be revealing on a basic level, but was used in this
thesis to show the need for a more structured framework for inference. The support
graph method by Timmer et al. (2015c) and the quality evaluation by Vlek et al. (2016)
provide reasonable aid for inference for legal experts. The results these methods provide
are accessible for statistical laymen. Nevertheless, the support graph method demands
further research on how to include the individual impact of evidence on hypotheses in
the support graph. In general, the inference results of all three methods were consistent
when tested on the example. In conclusion, the methods present a promising approach
to integrating legal reasoning with probabilistic reasoning for court application.
Although they are accessible to legal experts, further research is needed on some
fundamentals to make their application useful.

Future research
The objectives defined above are reasonable on a theoretical level, but the approaches
have to be examined in experimental setups to test their applicability in court. Especially
the ability to convince juries should be tested. Since jurors argue on a more informal
level, the translatability of inference results of the methods reviewed in this thesis
into informal argumentation should be explored. Research in this area has already been
done by Hahn and Oaksford (2007), and the methods reviewed here should be examined
from this perspective. Also, connections between the approaches explained in this thesis and
current belief updating research (see Cook and Lewandowsky, 2016; Gunn et al., 2016)
should be analysed, in order to research the impact of inference processes in BNs on
laymen. Another area of research could be to expand the scope of kinds of evidence
relations that can be modelled in the BNs. Harris and Hahn (2009) investigated a
Bayesian approach to measuring the coherence of multiple witness testimonies.
They arrived at promising results, which will be researched further and could be
an asset for the integration of multiple witness testimonies into the methods
reviewed in this thesis. Since most of the methods explained in this thesis are promising
approaches for modelling full criminal cases, these methods might be applicable in court
in the future and may help to make the court system more receptive to statistical
evidence.
5 Appendix
5.1 Probabilities for the Idiom-based BN from Figure 7

P(H0: Lisa stabbed Kai | H1)

H1: Lisa was present  t    f
t                     0.5  0
f                     0.5  1

Table 4: CPT for H0: Lisa stabbed Kai

P(H1: Lisa was present)

t  0.2
f  0.8

Table 5: CPT for H1: Lisa was present

P(H2: Lisa attacked Kai with knife in left hand | H1)

H1: Lisa was present  t     f
t                     0.15  0.01
f                     0.85  0.99

Table 6: CPT for H2: Lisa attacked Kai with knife in left hand

P(Cinema Ticket | H1, Aut.)

H1: Lisa was present  t         f
Authenticity          t    f    t  f
t                     0    0.5  1  0.5
f                     1    0.5  0  0.5

Table 7: CPT for Cinema Ticket

P(Authenticity | H0)

H0: Lisa stabbed Kai  t  f
t                     0  0.9
f                     1  0.1

Table 8: CPT for Authenticity

P(Tom saw Lisa and Kai at the scene | H1, Acc)

H1: Lisa was present  t         f
Testimony Accuracy    t    f    t  f
t                     0.9  0.5  0  0.5
f                     0.1  0.5  1  0.5

Table 9: CPT for Tom saw Lisa and Kai at the scene

P(Testimony Accuracy)

t  0.8
f  0.2

Table 10: CPT for Testimony Accuracy
P(Stab wounds inflicted by left hander | H2 , Acc)


H2 :Lisa attacked Kai with knife in left hand t f
Classification Accuracy t f t f
t 0.99 0.01 0.01 0.5
f 0.01 0.99 0.99 0.5

Table 11: CPT for Stab wounds inflicted by left hander

P(Classification Accuracy)
t 0.99
f 0.01

Table 12: CPT for Classification Accuracy

P(Lisa is left-handed | H2 )
H2 :Lisa attacked Kai with knife in left hand t f
t 0.9 0.15
f 0.1 0.85

Table 13: CPT for Lisa is left-handed
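The CPTs above can be sanity-checked by brute-force enumeration over the joint distribution. The following sketch (my own encoding, restricted to the H1–H0–Authenticity–Cinema Ticket fragment from Tables 4, 5, 7 and 8) computes the posterior belief in the guilt hypothesis after observing the cinema ticket:

```python
from itertools import product

# P(node = t | parents), taken from Tables 4, 5, 7 and 8 above.
p_h1 = 0.2                                   # P(H1 = t): Lisa was present
p_h0 = {True: 0.5, False: 0.0}               # P(H0 = t | H1): Lisa stabbed Kai
p_aut = {True: 0.0, False: 0.9}              # P(Authenticity = t | H0)
p_ticket = {(True, True): 0.0, (True, False): 0.5,   # P(Ticket = t | H1, Aut)
            (False, True): 1.0, (False, False): 0.5}

def joint(h1, h0, aut, ticket):
    """Joint probability of one full assignment (True stands for 't')."""
    def bern(p_true, value):
        return p_true if value else 1.0 - p_true
    return (bern(p_h1, h1) * bern(p_h0[h1], h0) *
            bern(p_aut[h0], aut) * bern(p_ticket[(h1, aut)], ticket))

# P(H0 = t | Ticket = t) by enumerating all hidden assignments:
num = sum(joint(h1, True, aut, True)
          for h1, aut in product([True, False], repeat=2))
den = sum(joint(h1, h0, aut, True)
          for h1, h0, aut in product([True, False], repeat=3))
print(round(num / den, 4))  # 0.0613
```

With these numbers the prior P(H0 = t) is 0.2 · 0.5 = 0.1, and observing the ticket lowers it to about 0.061, consistent with the negative influence of the cinema ticket evidence discussed in the text.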

5.2 Probabilities for the BN Based on Scenarios from Figure 11

P(Scenario 1)

t  0.001
f  0.999

Table 14: CPT for Scenario 1

P(Lisa had a knife | Sc1)

Scenario 1  t  f
t           1  0.009
f           0  0.991

Table 15: CPT for Lisa had a knife

P(Lisa and Kai met at crime scene | Sc1)


Scenario 1 t f
t 1 0.199
f 0 0.801

Table 16: CPT for Lisa and Kai met at crime scene
P(Lisa provided a false alibi | Sc1, Lisa murder.)


Scenario 1 t f
Lisa murdered Kai t f t f
t 1 1 0.299 0
f 0 0 0.701 1

Table 17: CPT for Lisa provided a false alibi

P(Lisa murdered Kai | Sc1, Lisa met.)


Scenario 1 t f
Lisa and Kai met at crime scene t f t f
Lisa had a knife t f t f t f t f
t 1 1 1 1 0.499 0 0.004 0
f 0 0 0 0 0.501 1 0.996 1

Table 18: CPT for Lisa murdered Kai

P(Lisa attacked Kai with left hand| Lisa murder.)


Lisa murdered Kai t f
t 1 0.009
f 0 0.991

Table 19: CPT for Lisa attacked Kai with left hand

P(Lisa stabbed Kai | Lisa murder., Lisa attack.)


Lisa murdered Kai t f
Lisa attacked Kai with left hand t f t f
t 1 1 0.044 0.009
f 0 0 0.956 0.991

Table 20: CPT for Lisa stabbed Kai


P(Kai died | Sc1, Sc2, Lisa murder.,Kai murder.)


Scenario 1 t f
Scenario 2 t f t f
Lisa murdered Kai t f t f t f t f
Tom murdered Kai t f t f t f t f t f t f t f t f
t 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0.01
f 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0.99

Table 21: CPT for Kai died

P(Cinema Ticket | Alibi)

Lisa provided a false alibi  t    f
t                            0.1  0.3
f                            0.9  0.7

Table 22: CPT for Cinema Ticket

P(Kai’s dead body | Died)

Kai died  t  f
t         1  0
f         0  1

Table 23: CPT for Kai’s dead body

P(Lisa is left-handed | Lisa attack.)

Lisa attacked Kai with left hand  t    f
t                                 0.9  0.15
f                                 0.1  0.85

Table 24: CPT for Lisa is left-handed

P(Stab wound classification| Lisa stabbed., Tom stabbed.)


Lisa stabbed Kai t f
Tom stabbed Kai t f t f
t 0.99 0.99 0.99 0.01
f 0.01 0.01 0.01 0.99

Table 25: CPT for Stab wound classification


P(Constraint | Sc1, Sc2)

Scenario 1  t       f
Scenario 2  t  f    t  f
t           0  1    0  0.5
f           0  0    1  0.5
impossible  1  0    0  0

Table 26: CPT for Constraint

P(Scenario 2)

t  0.001
f  0.999

Table 27: CPT for Scenario 2

P(Tom was at scene | Sc2)

Scenario 2  t  f
t           1  0.799
f           0  0.701

Table 28: CPT for Tom was at scene

P(Tom had a knife | Sc2)

Scenario 2  t  f
t           1  0.009
f           0  0.991

Table 29: CPT for Tom had a knife

P(Tom murdered Kai| Sc2, Tom met.)


Scenario 2 t f
Tom and Kai met at crime scene t f t f
Tom had a knife t f t f t f t f
t 1 1 1 1 0.499 0 0.004 0
f 0 0 0 0 0.501 1 0.996 1

Table 30: CPT for Tom murdered Kai

P(Tom attacked Kai with left hand | Tom murder.)


Tom murdered Kai t f
t 1 0.009
f 0 0.991

Table 31: CPT for Tom attacked Kai with left hand

P(Tom accuses Lisa | Sc2, Tom murder)


Scenario 2 t f
Tom murdered Kai t f t f
t 1 1 0.199 0.009
f 0 0 0.801 0.991

Table 32: CPT for Tom accuses Lisa


P(Tom is right-handed | Tom attack)


Tom attacked Kai with left hand t f
t 0.01 0.85
f 0.99 0.15

Table 33: CPT for Tom is right-handed

P(Tom stabbed Kai | Tom murder., Tom attack.)


Tom murdered Kai t f
Tom attacked Kai with left hand t f t f
t 1 1 0.044 0.009
f 0 0 0.956 0.991

Table 34: CPT for Tom stabbed Kai

P(Tom’s Testimony| Lisa met., Tom accuses)


Lisa and Kai met at crime scene t f
Tom accuses Lisa t f t f
t 0.5 0.7 0.5 0
f 0.5 0.3 0.5 1

Table 35: CPT for Tom’s Testimony


References
R v T. EWCA Crim 2439, 2010. URL www.bailii.org/ew/cases/EWCA/Crim/2010/2439.pdf.

LLC BayesFusion. GeNIe 2.1, Academic Version, 2016. http://download.bayesfusion.com/files.html?category=Academia [Accessed: 15.3.2016].

John Cook and Stephan Lewandowsky. Rational irrationality: Modeling climate change
belief polarization using Bayesian networks. Topics in Cognitive Science, 8:160–179,
2016.

Norman Fenton and Martin Neil. The “jury observation fallacy” and the use of Bayesian
networks to present probabilistic legal arguments. Mathematics Today, 36(6):180–187,
2000.

Norman Fenton and Martin Neil. Avoiding probabilistic reasoning fallacies in legal
practice using Bayesian networks. Australian Journal of Legal Philosophy, 36:114–151, 2011.

Norman Fenton, Martin Neil, and David Lagnado. Modelling mutually exclusive causes
in Bayesian networks. Submitted to IEEE Transactions on Knowledge and Data
Engineering, April 2011.

Norman Fenton, Martin Neil, and David Lagnado. A general structure for legal
arguments about evidence using Bayesian networks. Cognitive Science, 37:61–102,
2013.

Norman Fenton, Daniel Berger, David Lagnado, Martin Neil, and Anne Hsu. When
‘neutral’ evidence still has probative value (with implications from the Barry George
case). Science and Justice, 54(4):274–287, 2014.

Gerd Gigerenzer. Reckoning with Risk: Learning to Live with Uncertainty. Penguin Books,
2002.

Lachlan J. Gunn, François Chapeau-Blondeau, Mark D. McDonnell, Bruce R. Davis,
Andrew Allison, and Derek Abbott. Too good to be true: when overwhelming evidence
fails to convince. Proceedings of the Royal Society of London A: Mathematical, Physical
and Engineering Sciences, 472(2187), 2016.

Ulrike Hahn and Mike Oaksford. The rationality of informal argumentation: A Bayesian
approach to reasoning fallacies. Psychological Review, 114(3):704–732, 2007.

Adam J.L. Harris and Ulrike Hahn. Bayesian rationality in evaluating multiple testi-
monies: Incorporating the role of coherence. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 35(5):1366–1373, 2009.

Daniel Kahneman. Thinking, Fast and Slow. Farrar Straus and Giroux, 2011.

Daphne Koller and Nir Friedman. Probabilistic Graphical Models. The MIT Press, 2009.
Ronald Meester, Marieke Collins, Richard Gill, and Michiel van Lambalgen. On the
(ab)use of statistics in the legal case against the nurse Lucia de B. Law, Probability
and Risk, 5:233–250, 2007.

Richard Nobles and David Schiff. Misleading statistics within criminal trials. Significance,
2(1):17–19, 2005.

Judea Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press,
2000.

Silja Renooij. Probability elicitation for belief networks: issues to consider. The
Knowledge Engineering Review, 16(3):255–269, 2001.

Stuart J. Russell and Peter Norvig. Artificial Intelligence: A Modern Approach,
chapter IV. Pearson Education, Inc., 2010.

The Royal Statistical Society (RSS). Royal Statistical Society concerned by issues raised
in Sally Clark case. News release, 2001.

Sjoerd Timmer, John-Jules Ch. Meyer, Henry Prakken, Silja Renooij, and Bart Verheij.
Explaining Bayesian networks using argumentation. In Proceedings of the 13th
European Conference on Symbolic and Quantitative Approaches to Reasoning with
Uncertainty, pages 83–92. Springer, 2015a.

Sjoerd Timmer, John-Jules Ch. Meyer, Henry Prakken, Silja Renooij, and Bart Verheij.
A structure-guided approach to capturing Bayesian reasoning about legal evidence.
In ICAIL ’15: Proceedings of the 15th International Conference on Artificial Intelligence
and Law, pages 109–118, 2015b.

Sjoerd Timmer, John-Jules Ch. Meyer, Henry Prakken, Silja Renooij, and Bart Verheij.
Explaining legal Bayesian networks using support graphs. In Legal Knowledge and
Information Systems. JURIX 2015: The Twenty-Eighth Annual Conference, pages
121–130, 2015c.

Bart Verheij, Floris Bex, Sjoerd Timmer, Charlotte Vlek, John-Jules Meyer, Silja
Renooij, and Henry Prakken. Arguments, scenarios and probabilities: connections
between three normative frameworks for evidential reasoning. Law, Probability and
Risk, 15(1), 2016.

Charlotte Vlek. Supplementary Bayesian network models, 2016. http://www.charlottevlek.nl/networks/ [Accessed: 24.7.2016].

Charlotte Vlek, Henry Prakken, Silja Renooij, and Bart Verheij. Building Bayesian
networks for legal evidence with narratives. Artificial Intelligence and Law, 22(4):375–421,
2014.

Charlotte Vlek, Henry Prakken, Silja Renooij, and Bart Verheij. A method for explaining
Bayesian networks for legal evidence with scenarios. Artificial Intelligence and Law,
pages 1–40, 2016.
List of Figures
1 Bayesian network example . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Trail structures in a BN . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3 Intercausal reasoning structure . . . . . . . . . . . . . . . . . . . . . . . 9
4 Argument diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
5 Evidence-accuracy idiom . . . . . . . . . . . . . . . . . . . . . . . . . . 15
6 Opportunity, motive, dependency and explaining-away idiom . . . . . . 16
7 Idiom-based network for example . . . . . . . . . . . . . . . . . . . . . 17
8 Support Graph for example . . . . . . . . . . . . . . . . . . . . . . . . 22
9 Scenario, subscenario, variation and merged scenario idiom . . . . . . . 24
10 Scenario 1 for example . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
11 Full BN based on scenarios for example . . . . . . . . . . . . . . . . . . 27
List of Tables
1 Sequential presentation of evidence . . . . . . . . . . . . . . . . . . . . 19
2 Qualitative scale for evidential support . . . . . . . . . . . . . . . . . . 29
3 Evidential support for the two scenarios in the example . . . . . . . . . 30
4 CPT for H0 :Lisa stabbed Kai . . . . . . . . . . . . . . . . . . . . . . 36
5 CPT for H1 :Lisa was present . . . . . . . . . . . . . . . . . . . . . 36
6 CPT for H2 :Lisa attacked Kai with knife in left hand . . . . . 36
7 CPT for Cinema Ticket . . . . . . . . . . . . . . . . . . . . . . . . . . 36
8 CPT for Authenticity . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
9 CPT for Tom saw Lisa and Kai at the scene . . . . . . . . . . . . 36
10 CPT for Testimony Accuracy . . . . . . . . . . . . . . . . . . . . . . 36
11 CPT for Stab wounds inflicted by left hander . . . . . . . . . . . 37
12 CPT for Classification Accuracy . . . . . . . . . . . . . . . . . . . 37
13 CPT for Lisa is left-handed . . . . . . . . . . . . . . . . . . . . . . 37
14 CPT for Scenario 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
15 CPT for Lisa had a knife . . . . . . . . . . . . . . . . . . . . . . . . 37
16 CPT for Lisa and Kai met at crime scene . . . . . . . . . . . . . . 37
17 CPT for Lisa provided a false alibi . . . . . . . . . . . . . . . . 38
18 CPT for Lisa murdered Kai . . . . . . . . . . . . . . . . . . . . . . . 38
19 CPT for Lisa attacked Kai with left hand . . . . . . . . . . . . . 38
20 CPT for Lisa stabbed Kai . . . . . . . . . . . . . . . . . . . . . . . . 38
21 CPT for Kai died . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
22 CPT for Cinema Ticket . . . . . . . . . . . . . . . . . . . . . . . . . . 39
23 CPT for Kai’s dead body . . . . . . . . . . . . . . . . . . . . . . . . . 39
24 CPT for Lisa is left-handed . . . . . . . . . . . . . . . . . . . . . . 39
25 CPT for Stab wound classification . . . . . . . . . . . . . . . . . . 39
26 CPT for Constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
27 CPT for Scenario 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
28 CPT for Tom was at scene . . . . . . . . . . . . . . . . . . . . . . . . 40
29 CPT for Tom had a knife . . . . . . . . . . . . . . . . . . . . . . . . . 40
30 CPT for Tom murdered Kai . . . . . . . . . . . . . . . . . . . . . . . . 40
31 CPT for Tom attacked Kai with left hand . . . . . . . . . . . . . . 40
32 CPT for Tom accuses Lisa . . . . . . . . . . . . . . . . . . . . . . . . 40
33 CPT for Tom is right-handed . . . . . . . . . . . . . . . . . . . . . . 41
34 CPT for Tom stabbed Kai . . . . . . . . . . . . . . . . . . . . . . . . 41
35 CPT for Tom’s Testimony . . . . . . . . . . . . . . . . . . . . . . . . 41
Declaration of Authorship
I, Inga Catharina Ibs, hereby certify that the work presented here is, to the best of
my knowledge and belief, original and the result of my own investigations, except as
acknowledged, and has not been submitted, either in part or whole, for a degree at this
or any other university.

signature

city, date
