
Australian Journal of Forensic Sciences, 2017
http://dx.doi.org/10.1080/00450618.2017.1281348

Human expert performance in forensic decision making: Seven different sources of bias†

Itiel E. Dror a,b

a University College London (UCL), London, UK; b Cognitive Consultants International (CCI-HQ)

ABSTRACT
Forensic science has existed for many decades without due attention being given to the important role of human cognition in forensic work. Without such attention, forensic examiners were believed to be objective and immune to bias. This past decade we have seen an impressive shift in forensic science, now taking human factors into account. One important element in cognitive forensics is to minimize potential bias in forensic work. To accomplish this we must first understand the different sources of bias and then develop and deploy countermeasures whenever possible. In this paper, I go through seven sources of bias: some arising from the mere fact that we are humans, others originating from training, motivation and organizational factors (and other general sources of bias), and others arising from the specific case at hand. Bias is then placed within the wider context of human performance, showing the hierarchy of expert performance (HEP) that distinguishes between observations and conclusions in decision-making, between effects that are due to bias and those that do not arise from bias, and when performance varies among examiners and when it varies within the same examiner. A cognitively informed approach can substantially improve and contribute to forensic science.

ARTICLE HISTORY
Received 3 January 2017
Accepted 5 January 2017

KEYWORDS
Bias; cognitive forensics; contextual information; decision making; expertise; human factors

Over a decade ago, when I started to examine the role of human decision-making in forensic
science, the notion of cognitive bias did not exist in this area. Forensic examiners were con-
sidered (and considered themselves) to be objective, immune to influences by irrelevant
contextual information, and even infallible. The very idea that human examiners play an
important role in forensic science, and sometimes are even the instrument of analysis,
was basically non-existent.
Things have drastically and dramatically changed in the past few years. The new area of
Cognitive Forensics1,2 and its scope (e.g. human factors in forensic science, cognitive science
and forensic decision-making, the human element in forensic science3–5) has been well published
and accepted in the literature. The forensic community at large (notwithstanding some
exceptions among those who are still resistant and defensive) has accepted and adopted these issues.

CONTACT  Itiel E. Dror  i.dror@ucl.ac.uk, Itiel@cci-hq.com



Note: I dedicate this article to Bryan Found, who was not only a personal friend but also a forensic scientist who was at the
very forefront of understanding the critical role of human decision-making in forensic science and in establishing the area
of cognitive forensics. He is greatly missed, personally and professionally.
© 2017 Australian Academy of Forensic Sciences

For example, the United Kingdom Forensic Science Regulator has published guidance
on ‘Cognitive Bias Effects Relevant to Forensic Science Examinations’6, the United States
National Commission on Forensic Science has adopted a document on ‘Ensuring That
Forensic Analysis is Based Upon Task-Relevant Information’7, and President Obama’s Council
of Advisors on Science and Technology8 and the National Academy of Sciences9, as well as
enquiries into forensic science10,11, have all acknowledged the central role that human
examiners and bias can play in forensic decision-making.
As we progress our understanding of cognitive bias and of other cognitive issues in foren-
sic science, we can further enhance forensic work by suggesting best cognitive practices12
and operating procedures, such as LSU (Linear Sequential Unmasking)13. These are aimed
at maximizing the quality of forensic decision-making by lessening potential cognitive
contamination such as bias. To further our understanding of bias in forensic work, we need to
carefully consider the different possible sources of such bias. With such understanding we
can develop and deploy different measures to combat biasing effects.
The taxonomy (Figure 1) explicates different sources of bias that may interfere with
observations and forensic conclusions. At the very bottom of the taxonomy, the foundation
is basic human nature: the mere fact that we are human, the brain and cognitive
architecture – what Francis Bacon calls idola tribus (idols of the tribe)14. As we go
up the taxonomy, the sources of bias become more dependent on specific environmental
circumstances and the individuals involved (such as their training, motivation, organizational
culture, experience, etc.). The top part of the taxonomy presents sources of bias that are
specific to the case at hand (e.g. contextual information, as well as the actual evidence from
the crime scene and the suspect). I will go through the taxonomy, starting at the very top
and making my way down, explaining each source of bias and giving examples.
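For readers who find it helpful, the seven sources can also be written out as a simple ordered structure. The sketch below is not part of the original paper: the level names follow the discussion of Figure 1, but the grouping labels are my own shorthand for the three broad bands described above (case-specific sources at the top, environment and experience in the middle, human nature at the bottom).

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class BiasSource:
    level: int      # 1 = top of the taxonomy (most case-specific)
    name: str
    grouping: str   # informal grouping label, not terminology from the paper


# Seven sources of bias, ordered from the top (case-specific) to the bottom (fundamental).
TAXONOMY = [
    BiasSource(1, "Case evidence", "specific case"),
    BiasSource(2, "Reference materials", "specific case"),
    BiasSource(3, "Irrelevant case information", "specific case"),
    BiasSource(4, "Base rate expectations", "environment, culture and experience"),
    BiasSource(5, "Organizational factors", "environment, culture and experience"),
    BiasSource(6, "Training and motivation", "environment, culture and experience"),
    BiasSource(7, "Cognitive architecture and the brain", "human nature"),
]

if __name__ == "__main__":
    for source in TAXONOMY:
        print(f"{source.level}. {source.name} ({source.grouping})")
```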
At the very top of the taxonomy is the Case Evidence. This level includes the actual evidence
that was collected from the crime scene. It ranges from latent fingerprints, shoe and
tire marks, handwriting, DNA, bullet casings, voice recordings and facial images, to bite marks. In
some cases, the actual evidence, per se, is not a source of bias, as with latent fingerprints.
The latent fingerprint mark itself does not impinge on the forensic evaluation, as it only
conveys relevant information about the evidence itself, such as the presence of minutiae.
However, other evidence may be a source of bias because the actual evidence conveys
irrelevant information that may bias the forensic evaluation. For example, a bite mark
on a young child may reveal the nature of the crime, and a voice recording
may convey a whole range of irrelevant information that goes beyond what is relevant to
the forensic examiner analysing the voice.
There are instances where the biasing information is so inherent and integrated into the
actual evidence that it is practically impossible to separate them. However, in other cases,
when possible, it is advisable to remove the potential source of bias within the actual evi-
dence. For example, in signature examination, there is no need to provide the forensic sig-
nature examiner with the entire typed letter that includes information not relevant to the
actual signature examination.
Many forensic domains require comparing the case evidence (see above) with Reference
Materials (see the second level in the taxonomy, Figure 1). For example, the latent fingerprint
from the crime scene is compared with the fingerprint of a suspect; or the DNA profile from
the biological material at the crime scene is compared with the DNA profile of a suspect; or
the bullet casing found at the crime scene is compared with a bullet casing fired from a gun
found at the home of a suspect; etc. Such comparisons are essential to forensic work.
However, the forensic work needs to be guided and driven by the actual evidence, not the
suspect. The forensic examiner needs to work from the evidence to the suspect, not backwards,
from the suspect to the evidence. Otherwise the suspect is driving the forensic examination,
not the actual evidence, and the examiner is looking for the suspect in the evidence,
causing a biased examination.

Figure 1. A taxonomy of different sources of bias that may cognitively contaminate forensic observations and conclusions.
The Reference Materials source of bias can be easily minimized by requiring the forensic
examiner to work from the evidence to the suspect. The LSU (Linear Sequential Unmasking)
procedure13 is aimed at achieving this very point. The case evidence must be examined first,
by itself, free from the context (and influence) of the ‘target’ suspect. This ensures that the
evidence is examined with minimal bias and that the evidence itself drives the forensic
examination. Only after the evidence has been examined and characterized is the forensic
examiner exposed to the suspect’s material and able to compare. For example, in DNA examination, the
forensic examiner must first develop and analyse the profile from the biological material at
the crime scene before that of the suspect and the subsequent comparison. Similarly, latent
fingerprints from the crime scene need to be analysed and characterized prior to exposure
to the fingerprint of the suspect. Such approaches have now begun to be adopted and used
across the board15.
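As an illustration of the sequencing constraint that LSU-style procedures impose, the following minimal sketch enforces that the evidence is analysed and documented before any exposure to the suspect's reference material. The class and method names are my own, and the toy comparison step is only a placeholder; this is not the published LSU protocol itself.

```python
class SequencingError(RuntimeError):
    """Raised when reference material is introduced before the evidence is documented."""


class LinearSequentialWorkflow:
    """Toy sketch of evidence-first sequencing: characterize the crime-scene evidence
    by itself before the examiner sees the suspect's reference material."""

    def __init__(self):
        self.evidence_features = None  # features documented from the crime-scene evidence

    def analyse_evidence(self, evidence_features):
        # Step 1: document the evidence on its own, free of any 'target' context.
        self.evidence_features = list(evidence_features)
        return self.evidence_features

    def compare(self, reference_features):
        # Step 2: only after the evidence is documented may the reference be examined.
        if self.evidence_features is None:
            raise SequencingError("Evidence must be analysed and documented before "
                                  "exposure to the suspect's reference material.")
        return set(self.evidence_features) & set(reference_features)


# Example: latent-print features documented first, then compared with the suspect's print.
workflow = LinearSequentialWorkflow()
workflow.analyse_evidence(["A", "B", "C"])   # crime-scene latent, analysed first
shared = workflow.compare(["B", "C", "D"])   # suspect exemplar, examined second
print(shared)
```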
The last level in the taxonomy that relates to the specific case relates to Irrelevant Case
Information. This source of bias refers to a whole range of contextual information that is not
relevant to the forensic examiner and their work, but nevertheless may influence and bias
their examination. For example, a forensic anthropologist determining sex from skeletal
remains needs to make their assessment based on the actual evidence – the skeletal
remains – not other information16. This is even more the case when analysing bone fragments,
where conclusions may be affected by irrelevant contextual information provided by the police (or others).
Although examiners believe their conclusions are objective and based solely on the bone fragment
evidence, in fact their exposure to irrelevant case information biases their work.
Therefore, it is prudent and critically important that forensic examiners only be exposed to
the relevant information they need to do their forensic work7. Irrelevant case information
may also include the past convictions of the suspect, what the detective thinks, whether the
suspect confessed to the crime, what other lines of forensic evidence show, witness
statements, and many other pieces of case information that are not relevant to the forensic
examination at hand.
As we move down the taxonomy, we move to the more fundamental sources of bias, less
related to the specific case at hand (see Figure 1). The first of these levels relates to Base Rate
Expectations. This source of bias does not originate from anything related to the specific
case at hand (e.g. the evidence, reference materials, or irrelevant case information – the three
top levels of the taxonomy). Base rate expectations are based on past experience with other
cases, which creates an expectation about the specific case at hand.
It is biasing because this expectation has nothing to do with the specific case: before even
examining it, the examiner already has an expectation, based on past regularities.
Take, for example, security X-ray examiners at the airport. Although they
are trained and try to look at every X-ray, their experience tells them that there are no bombs
in the suitcase. They learn it after examining thousands of X-rays and not finding a single
bomb. Although they are still motivated to perform this important task well, their experience
causes the brain to have a base rate expectation that no bombs are present, and that results
in a bias in how they look at the X-ray. Similarly, in Intensive Care Units (ICUs) at hospitals,
alarms go off very often and are almost always false. Therefore, based on this
base rate expectation, alarms are often ignored17.
Combating base rate expectation is relatively easy: one just needs to give counter-examples
that break the expectation12. In the security X-ray domain, a TIP (Threat Image
Projection) method18 is used, which entails projecting images of bombs onto the X-ray from
time to time to make sure the human is paying attention. This breaks the expectation that
no bombs are ever present in the X-ray.
Base rate expectation is also applicable to forensic science. There are many regularities
that are learned from experience. These experiences are brought to bear on new cases, and
therefore bias them. Take, for example, the use of AFIS (Automated Fingerprint Identification
System). AFIS provides a list of candidates; however, experience tells examiners that if there
is a match, it is almost always at the top of the list. This regularity causes a base rate
expectation that biases the examiners19. Research on 55,200 AFIS decisions shows that base
rate bias manifests in three ways20: (1) as examiners go down the list of candidates they
spend less time examining the prints (that is, when the same print is presented lower down
the list they spend less time on it than they do when it appears on the top of the list); (2)
false positives are most likely to occur at the top of the list (that is, given that if there is a
match, they expect it to be at the top of the list, they are more likely to make a false match
of prints that appear at the top of the list); (3) false negatives are more likely to occur further
down the list of candidates (that is, given that they are not expecting to find a match further
down the list, they are more likely to fail to make a correct identification when the matching
print is further down the list). These base rate biases apply to any situation where there are
regular results, and the examiner thus approaches a new case based on their experience
with those situations.
The solutions to such base rate bias in forensic science are similar to those used in other
expert domains. For instance, providing counter-examples reduces, if not eliminates, the
base rate expectation. In AFIS, this means that, from time to time, one moves the top
matching print further down the list, or simply randomizes the entire order of the candidate
list12.
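A minimal sketch of this countermeasure, assuming a hypothetical candidate list ordered by matching score, is shown below. The function and parameter names are illustrative only and do not correspond to any real AFIS interface.

```python
import random


def present_candidates(candidates, demote_top_probability=0.2, rng=None):
    """Return AFIS candidates in a presentation order that breaks the base rate
    expectation that the true match always sits at the top of the list.

    candidates: list ordered by matching score (best first) -- illustrative only.
    """
    rng = rng or random.Random()
    presented = list(candidates)
    if len(presented) > 1 and rng.random() < demote_top_probability:
        # Occasionally move the top-ranked candidate further down the list.
        top = presented.pop(0)
        presented.insert(rng.randrange(1, len(presented) + 1), top)
    else:
        # Otherwise randomize the entire order of the candidate list.
        rng.shuffle(presented)
    return presented


print(present_candidates(["candidate_1", "candidate_2", "candidate_3", "candidate_4"]))
```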

The next level in the taxonomy relates to Organizational Factors. This source of bias
includes a variety of biases that originate from the organization and culture with which the
forensic examiner is affiliated and identifies. Although examiners try to conduct their
work objectively, their work environment affects it: allegiances, loyalties, ideologies,
and many other factors work at this level. A clear and powerful biasing factor at this level is
the adversarial nature of many legal systems. Forensic examiners who examined identical
evidence reached different conclusions, depending on whether they believed they were
working for the defence or for the prosecution21. This has been termed adversarial
allegiance22.
As we move to the bottom of the taxonomy, we focus on the more fundamental and basic
things that make up the human forensic examiner. The first biasing source at this level is the
training examiners have received. Training plays an important part in shaping the work of the
forensic examiner. A variety of biases may be introduced during training. Another source at
this level relates to motivation. Different motivations are at play, and these may bias the
forensic work being carried out.
At the very bottom of the taxonomy is Cognitive Architecture and the Brain. At this level,
the very making of us, as humans, and how our brains work, introduces a whole set of biases.
These have been investigated and documented in a variety of domains, and lately have
started to be applied to forensic science4,12.
Whichever source of bias is at work, it may bias the observations being made or it can
also bias the conclusions reached. It is important to distinguish between instances in which
the bias impacts the observations (e.g. in observing minutiae in fingerprints) and instances
in which bias impacts the forensic conclusions (e.g. in determining if two fingerprints match,
that is, concluding that they both come from the same source).
The Hierarchy of Expert Performance (HEP)23, see Figure 2, distinguishes between the
‘observations’ and the ‘conclusions’ elements in expert decision-making, and illustrates this
in different expert domains (it details and provides a framework for research into forensic
decision-making23). In the medical domain, SBAR (Situation, Background, Assessment, and
Recommendation)24,25 is used to clearly demarcate elements that relate to observations
(situation and background) and those that relate to conclusions (assessment and
recommendation).

Figure 2. The Hierarchy of Expert Performance (HEP).
The HEP hierarchy also designates when bias causes different experts to perform differ-
ently on the same data (see levels 4 and 8 in Figure 2, where bias is between experts), and
when bias causes the same expert to perform differently on the same data (see levels 3 and
7 in Figure 2, where bias is within the same expert).
Furthermore, the HEP hierarchy also makes an important point: expert performance is
not only dependent on bias. Bias is only one element that impacts expert decision-making.
Experts may vary among themselves in their performance (levels 2 and 6 in Figure 2), as well
as vary within themselves (levels 1 and 5 in Figure 2) even without any biasing influences.
For details, see HEP23.
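To make the within-examiner versus between-examiner distinction concrete, one could tally agreement across repeated presentations of the same evidence. The toy sketch below, with invented data and field names, only illustrates that distinction; it is not the HEP framework itself, nor a measure used in the paper.

```python
from collections import defaultdict
from itertools import combinations


def agreement_rate(pairs):
    """Fraction of pairs of conclusions that agree."""
    pairs = list(pairs)
    return sum(a == b for a, b in pairs) / len(pairs) if pairs else float("nan")


def within_and_between(decisions):
    """decisions: list of (examiner, case, conclusion) tuples; the same case may be
    presented more than once to the same examiner and to different examiners."""
    by_case_examiner = defaultdict(list)
    for examiner, case, conclusion in decisions:
        by_case_examiner[(case, examiner)].append(conclusion)

    # Within-examiner: repeated conclusions by the same examiner on the same case.
    within_pairs = []
    for conclusions in by_case_examiner.values():
        within_pairs.extend(combinations(conclusions, 2))

    # Between-examiner: conclusions by different examiners on the same case.
    by_case = defaultdict(list)
    for (case, _examiner), conclusions in by_case_examiner.items():
        by_case[case].append(conclusions[0])  # one conclusion per examiner, for simplicity
    between_pairs = []
    for conclusions in by_case.values():
        between_pairs.extend(combinations(conclusions, 2))

    return agreement_rate(within_pairs), agreement_rate(between_pairs)


# Toy data: examiner E1 sees case C1 twice (within), and E1 and E2 both see C1 (between).
toy = [("E1", "C1", "identification"), ("E1", "C1", "exclusion"), ("E2", "C1", "identification")]
print(within_and_between(toy))
```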
The realization that human cognition plays a critically important role in forensic work is
the first step on the long journey of researching and gaining a better understanding of what
underpins forensic decision-making. This understanding will create the foundation that will
enable us to find ways to enhance forensic work and its contribution to society.

Note 
The development of this taxonomy began in a paper in 2009 (Dror, How can Francis Bacon
help forensic science? The four idols of human biases), and was then further developed in a
2014 paper (Stoel, Berger, Kerkhoff, Mattijssen, & Dror, Minimising contextual bias in forensic
casework), with additional sources of bias added subsequently (in Dror, Cognitive neurosci-
ence in forensic science: Understanding and utilising the human element, and Zapf & Dror,
Mitigating the impact of bias in forensic evaluation: lessons from forensic science).

Disclosure statement
No potential conflict of interest was reported by the author.

References
 1. Dror IE, Stoel RD. Cognitive forensics: human cognition, contextual information, and bias.
Encyclopaedia of criminology and criminal justice. New York (NY): Springer Publishing New York;
2014.
 2. Found B. Deciphering the human condition: the rise of cognitive forensics. Aust J For Sci.
2015;47(4):386–401.
  3. Edmond G, Towler A, Growns B, Ribeiro G, Found B, White D, Ballantyne K, Searston RA, Thompson
MB, Tangen JM, et al. Thinking forensics: cognitive science for forensic practitioners. Sci Justice.
Forthcoming.
  4. Dror IE. Cognitive neuroscience in forensic science: understanding and utilising the human element.
Philos Trans R Soc. 2015;370(1674):20140255. doi:10.1098/rstb.2014.0255.
  5. Expert Working Group on Human Factors in Latent Print Analysis. Latent print examination and
human factors: improving the practice through a systems approach. National Institute of Standards
and Technology Interagency/Internal Report (NISTIR); 2012;7842.
 6.  Forensic Science Regulator. Guidance: cognitive bias effects relevant to forensic science
examinations. 2015;FSR-G-217.
  7. National Commission on Forensic Science. Ensuring that forensic analysis is based upon task-
relevant information. Washington (DC). 2015. Available from: https://www.justice.gov/ncfs/
file/818196/download
  8. President’s Council of Advisors on Science and Technology. Forensic science in criminal courts:
ensuring scientific validity of feature-comparison methods. Washington (DC). 2016. Available from:
https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/PCAST/pcast_forensic_
science_report_final.pdf
  9. National Academy of Sciences. Strengthening forensic science in the United States: a path forward.
Washington (DC): National Academies Press; 2009.
10. Campbell A. The fingerprint inquiry. Scotland; 2011. Available from: http://www.aridgetoofar.com/
documents/TheFingerprintInquiryReport_Low_res.pdf
11. Office of the Inspector General. A review of the FBI’s handling of the Brandon Mayfield case.
Washington (DC): U.S. Department of Justice; 2006.
12. Dror IE. Practical solutions to cognitive and human factor challenges in forensic science. Forensic
Sci Pol Manag. 2014;4:105–113.
13. Dror IE, Thompson WC, Meissner CA, Kornfield I, Krane D, Saks M, Risinger M. Context management
toolbox: a Linear Sequential Unmasking (LSU) approach for minimizing cognitive bias in forensic
decision making. J Forensic Sci. 2015;60(4):1111–1112.
14. Dror IE. How can Francis Bacon help forensic science? the four idols of human biases. Jurimetrics.
2009;50:93–110.
15. Office of the Inspector General. A review of the FBI’s progress in responding to the recommendations
in the office of the inspector general report on the fingerprint misidentification in the Brandon
Mayfield case. Washington (DC); 2011. Available from: https://www.oig.justice.gov/special/s1105.
pdf
16. Passalacqua NV, De La Paz J, Zejdlik K. Identification of missing Norwegian world war II soldiers.
J Forensic Sci. 2016;61(5):1405–1407.
17. Lawless ST. Crying wolf: false alarms in a paediatric intensive care unit. Crit Care Med. 1994;22:
981–985.
18.  Schwaninger A. Threat image projection: enhancing performance? Aviation Security Inter.
2006;December:36–41.
19. Dror IE, Mnookin J. The use of technology in human expert domains: challenges and risks arising
from the use of automated fingerprint identification systems in forensics. Law, Probability and
Risk. 2010;9(1):47–67.
20. Dror IE, Wertheim K, Fraser-Mackenzie P, Walajtys J. The impact of human-technology cooperation
and distributed cognition in forensic science: biasing effects of AFIS contextual information on
human experts. J Forensic Sci. 2012;57(2):343–352.
21. Murrie D, Boccaccini M, Guarnera L, Rufino K. Are forensic experts biased by the side that retained
them? Psychol Sci. 2013;24:1889–1897.
22. Murrie D, Boccaccini M, Turner D, Meeks M, Woods C, Tussey C. Rater (dis)agreement on risk
assessment measures in sexually violent predator proceedings: evidence of adversarial allegiance
in forensic evaluation? Psychol Public Policy Law. 2009;15:19–53.
23. Dror IE. A hierarchy of expert performance. J Appl Res Mem Cogn. 2016;5(2):121–127.
24. Thomas CM, Bertram E, Johnson D. The SBAR communication technique. Nurse Educator.
2009;34(4):176–180.
25. Wacogne I, Diwakar V. Handover and note-keeping: the SBAR approach. Clin Risk. 2010;16(5):
173–175.
