
Chapter 17

Artificial Intelligence and the Future of Dispute Resolution
The Age of AI-DR

Orna Rabinovich-Einy and Ethan Katsh

1 Introduction
This chapter addresses the rise of artificial intelligence (AI) and its implications for the
field of dispute resolution. In recent decades, digital technology has re-shaped alternative
dispute resolution into online dispute resolution (ODR). In the last few years, ODR has
spread to courts, further blurring boundaries between formal and informal dispute
resolution, as well as between online and offline avenues of redress.
More recently, AI has begun to infiltrate dispute resolution, mostly in the realm of
AI-based prediction. With the maturing of ODR and the spread of AI, we can expect the
overlap between the two to further evolve in both private and public dispute resolution,
giving rise to a new form of dispute resolution, AI-DR.
AI is increasingly being employed in a variety of legal and ADR settings and its
role can be expected to grow further in the near future. The questions that remain open are
the design choices that will be made with respect to AI and the means that will be employed
to maximize its contribution and curb the challenges associated with AI-based decisions,
recommendations, and predictions. Dispute resolution processes need not only to be
efficient but to be fair, trustworthy, and accountable and must be perceived as such if they
are to sustain their legitimacy and fulfill their societal role.
In the sections that follow, we describe the relevance and current use of AI in
dispute resolution, while drawing on particular examples to illustrate the promise and
limitations of such use, which can be expected to expand dramatically in the next decade.



2 Why AI?
2.1 What Is AI?
Artificial intelligence has made huge leaps in recent decades. Decades ago, Moore's
law posited that computer processing power would grow exponentially, doubling roughly
every 18 months. We have already experienced computers outperforming humans in games such as
chess, Go, and Jeopardy!,1 and increasingly algorithms decide our most basic needs –
whether we will be approved for a loan,2 hired for a job,3 or receive a refund.4 Algorithms
are influencing whether freedom will be taken away, as tools such as COMPAS advise
judges on a defendant's risk level and chances of reoffending when deciding whether to
release or detain them.5 AI is also curating some of the news we read,6 recommending books,
movies, and music for us,7 and will soon be driving our cars.8
It is already evident that AI is having a sweeping impact on every aspect of our lives.
Yet most of us know very little about it and how it operates. The concepts are complex and
not always distinguishable – AI, algorithms, learning algorithms, deep learning, machine
learning, and more. These are all overlapping but not identical in meaning. For our
purposes, however, we will refer to most of these terms under the umbrella term of ‘AI’.
AI is a set of algorithms derived from an array of calculations. Computer algorithms
are automated instructions, ranging from the simple to the highly complex. The first
generation of algorithms worked much like recipes, with a set of instructions that resulted

1. Elizabeth Gibney, Google AI Algorithm Masters Ancient Game of Go, 529 Nature News 445 (2016); John Markoff, Computer Wins on 'Jeopardy!': Trivial, It's Not, N.Y. Times (16 February 2011), www.nytimes.com/2011/02/17/science/17jeopardy-watson.html.
2. Joshua A. Kroll et al., Accountable Algorithms, 165 U. Pa. L. Rev. 633, 633 (2016).
3. Pauline T. Kim, Data-Driven Discrimination at Work, 58 William & Mary L. Rev. 857, 860 (2017).
4. Chavie Lieber, Amazon Might Ban You If You Return Too Much, Racked (23 May 2018), www.racked.com/2018/5/23/17384044/amazon-bans-shoppers-returns (last visited on 17 May 2020).
5. Megan Stevenson, Assessing Risk Assessment in Action, 103 Minn. L. Rev. 303 (2018).
6. Jihii Jolly, How Algorithms Decide the News You See, Columbia Journalism Rev. (20 May 2014), https://archives.cjr.org/news_literacy/algorithms_filter_bubble.php (last visited on 17 May 2020).
7. Ben Ratliff, Slaves to the Algorithm: How Music Fans Can Reclaim their Playlists from Spotify, The Guardian (19 February 2016), www.theguardian.com/books/2016/feb/19/slave-to-the-algorithm-how-music-fans-can-reclaim-their-playlists-from-spotify (last visited on 17 May 2020).
8. Although at present, semi-autonomous cars present more of a challenge than fully autonomous ones. See Tracy Hresko Pearl, Hands on the Wheel: A Call for Greater Regulation of Semi-Autonomous Cars, 93 Ind. L. J. 713, 717-721 (2018).



in a foreseeable result.9 Over the years, algorithms evolved and came to rely not just on
inputs that were designed to operate as triggers, but also on unstructured data that allowed
them to change, adapt, and grow. This is what is sometimes referred to as a ‘second
generation’ of machines.10 In this new, advanced generation we define AI as the ability of
machines to perform actions, reach outcomes, anticipate problems, learn to adapt to new
circumstances and to solve complex problems without human intervention or
supervision.11
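
To make the distinction concrete, the following is a minimal illustrative sketch (ours, not drawn from any deployed system): a first-generation 'recipe' rule whose output is fully foreseeable, next to a toy model whose decision rule shifts with experience. The refund scenario, names, and thresholds are hypothetical.

```python
# Hypothetical illustration: 'recipe' algorithms versus adaptive ones.

def first_generation_refund_rule(days_since_purchase: int) -> bool:
    """A fixed recipe: the same input always yields the same output."""
    return days_since_purchase <= 30

class AdaptiveRefundModel:
    """A toy 'second generation' model: its threshold shifts with
    experience instead of following a predetermined instruction."""

    def __init__(self, initial_threshold: float = 30.0):
        self.threshold = initial_threshold

    def decide(self, days_since_purchase: float) -> bool:
        return days_since_purchase <= self.threshold

    def learn(self, days_since_purchase: float, was_legitimate: bool) -> None:
        # Nudge the decision boundary toward observed outcomes.
        if was_legitimate and days_since_purchase > self.threshold:
            self.threshold += 1.0
        elif not was_legitimate and days_since_purchase <= self.threshold:
            self.threshold -= 1.0

model = AdaptiveRefundModel()
for _ in range(5):
    model.learn(34, was_legitimate=True)   # experience reshapes the rule
print(first_generation_refund_rule(34))    # False, forever
print(model.decide(34))                    # True, after learning
```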
The combination of the huge increase in computational power in recent years and the
availability of Big Data to serve as training data is what has buoyed these
developments. In some cases, the algorithms use experience to look for patterns learned
from the data.12 In others, they draw on unstructured data and the goal is for the machine
to cope with unforeseen circumstances by simulating natural intelligence to solve complex
problems.13 The ability to change and adapt to new circumstances is termed
‘intelligence’.14
AI enjoys some important advantages over human activity through these new
capabilities, but it also suffers from severe drawbacks and challenges. In the following
sections we explore the potential contribution and appeal of AI for the dispute resolution
world, as well as the sources of concern and disadvantages associated with the
implementation of AI in dispute resolution processes and systems.

2.2 Why Would We Want AI in Dispute Resolution?


The dispute resolution field has traditionally relied on human resolution in a physical space.
Resolving disputes in modern times has therefore depended on the pace of human decision-
making and the capacity of physical courts. These characteristics have made dispute

9. Richard Susskind, Online Courts and the Future of Justice 266-268 (2019).
10. Id., at 271-272.
11. Michael Kearns & Aaron Roth, The Ethical Algorithm: The Science of Socially Aware Algorithm Design 6-7 (2020).
12. Id.
13. The distinction between ML and AI is somewhat blurry as it is not always possible to distinguish between structured and unstructured data.
14. See definition of 'artificial intelligence' in the Encyclopedia Britannica, www.britannica.com/technology/artificial-intelligence (last visited on 19 May 2020).



resolution costly, lengthy, and out of reach for many.15 While only few could afford the high
costs associated with adversarial litigation, the prospect of pro se litigation was an
unattractive one given the complex nature of court proceedings.16 Such a setting clearly
benefitted those with power, resources, and the relevant knowledge and expertise. 17
Against that backdrop, alternative dispute resolution (ADR) emerged in the 1970s,
ultimately resulting in the widespread institutionalization of the mediation process in courts
in the U.S., and, later, in many other jurisdictions worldwide. 18 Mediation was supposed to
provide a cure to the ills of the court system – both by freeing the courts from their heavy
backlog and by allowing easier access to justice. It was also hoped to provide an alternative
that would not only be less costly and quicker than litigation but would also allow parties
direct participation and remedies that addressed their needs and interests, preserved
relationships and expanded options beyond legal rights and remedies. 19
Over time, the expectations surrounding ADR were tempered. Critics bemoaned
the privatization of the judicial system, allowing efficiency-related considerations and the
private interests of parties to outweigh the public interest in the development of the law,
declaration of societal values, and the protection of rights of disempowered groups in
society.20 The fear was that access was being preferred over justice and efficiency over
fairness.21 A clear distinction was understood to exist between public proceedings based
on procedural due process and substantive legal norms on the one hand, and quicker and
simpler private processes that enjoy procedural and substantive flexibility on the other
hand. This distinction was blurred with the emergence of a more legalized version of

15. Jerold S. Auerbach, Justice Without Law? 95 (1983) (describing the legal system of the time as 'a horse-and-buggy [system] near collapse in an urban industrial society').
16. James E. Cabral et al., Using Technology to Enhance Access to Justice, 26 Harv. J.L. & Tech. 241, 256 (2012).
17. Marc Galanter, Why the "Haves" Come out Ahead: Speculations on the Limits of Legal Change, 9 L. & Soc'y Rev. 95 (1974).
18. Carrie Menkel-Meadow, Regulation of Dispute Resolution in the United States of America: From the Formal to the Informal to the 'Semi-formal', in Regulating Dispute Resolution: ADR and Access to Justice at the Crossroads 419 (Felix Steffek & Hannes Unberath, eds. 2013).
19. Carrie Menkel-Meadow, The Trouble with the Adversary System in a Post Modern, Multicultural World, 38 Wm. & Mary L. Rev. 5, 17-18 (1996); Nancy A. Welsh, Making Deals in Court-Connected Mediation: What's Justice Got to Do with It?, 79 Wash. U. L.Q. 787 (2001).
20. Owen M. Fiss, Against Settlement, 93 Yale L.J. 1073 (1984).
21. Orna Rabinovich-Einy & Ethan Katsh, The New New Courts, 67 Amer. U. L. Rev. 165, 181 (2017).



mediation in courts as well as the permeation of ADR-like tools and practices into the work
of judges.22 Managerial judging and judicial dispute resolution became the norm,
accelerating the growth of the vanishing trial phenomenon.23 Despite the growth of ADR, there
was no surge in non-adversarial mediation processes, as most ADR processes were being
practiced in court, in the presence of lawyers, and conducted by lawyers or former judges,
resulting in a thin, efficient, and rights-based version of mediation.24
The spread of ADR, together with the emergence of the internet and the growth of digital
technology, created a new arena in which dispute resolution processes were needed but
inaccessible. For e-commerce-related disputes, the prospect of convening physically was
often out of reach, and new tools and processes were soon developed to fill this vacuum. 25
Online equivalents of negotiation, mediation, and arbitration evolved and became
increasingly sophisticated as technological capabilities developed and social perceptions
and preferences related to use of online tools also changed. 26 These new dispute resolution
processes that were being offered online came to be known as online dispute resolution or
ODR.
Initially, the understanding was that ODR processes were the online equivalents of
familiar ADR processes that were designed to address online disputes. Over the course of
the last two decades, this definition broadened substantially in three respects. First, ODR
was no longer seen as merely mimicking existing processes, but as an opportunity to
refashion dispute resolution processes by drawing on the unique qualities of digital
technology.27 Second, ODR was no longer seen as consigned to the realm of online disputes
and began to gradually expand to offline disputes, such as consumer complaints, tax
appeals, and insurance claims. In addition, in the last few years, a major growth area for
ODR has been the courts.28 ODR is now no longer seen as a transformation in medium for

22. Orna Rabinovich-Einy, The Legitimacy Crisis and the Future of Courts, 17 Cardozo J. of Conf. Res. 23, 33-40 (2015).
23. Id., at 34-35; Rabinovich-Einy & Katsh, supra note 21, 181-184.
24. Rabinovich-Einy, supra note 22, at 36-37.
25. Ethan Katsh & Orna Rabinovich-Einy, Digital Justice: Technology and the Internet of Disputes 7, 33 (2017).
26. Id., at 37-38.
27. Id., at 33.
28. Rabinovich-Einy & Katsh, supra note 21, at 188-203.



ADR processes, but as an umbrella term covering both court proceedings and alternatives,
public and private.
ODR in the courts offered an opportunity to re-design the litigation process,
including the ways in which alternatives, such as negotiation and mediation, were being
administered. Through novel designs, ODR could improve access to justice by reducing
complexity, improving efficiency, and lowering costs.29 The structure associated with
software and pre-fixed language and options offered to parties was also viewed as a means
for leveling the playing field between parties and enhancing consistency by third parties –
whether operating in a facilitative role or in an adjudicative one.30 Another advantage
exemplified by some ODR processes is that, through the use of structured options and
language, they could steer parties away from adversarial conduct and language more
effectively than oral interventions in face-to-face settings, focusing parties on interests
and options rather than rights and positions.31 Alongside the expansion of court ODR,
online processes have continued to evolve in e-commerce, developing from product
marketplaces to the ‘gig economy’ and smart contracts. 32
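
The point about structured language can be illustrated with a deliberately crude sketch of the kind of filter an ODR platform might run over party messages, flagging adversarial phrasing and prompting interest-based reframing. The word list and suggestions below are our hypothetical constructions, not drawn from any platform discussed in this chapter.

```python
# Hypothetical illustration: nudging parties from positions toward interests.

ADVERSARIAL_TERMS = {
    "liar": "say which facts you see differently",
    "fault": "describe the impact the events had on you",
    "demand": "state the outcome that would meet your needs",
}

def review_message(message: str) -> list[str]:
    """Return reframing prompts for any adversarial terms found."""
    lowered = message.lower()
    return [
        f"Instead of '{term}', consider: {suggestion}."
        for term, suggestion in ADVERSARIAL_TERMS.items()
        if term in lowered
    ]

print(review_message("I demand a refund; the seller is a liar."))
```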
To date, use of AI in alternatives to litigation has been limited. In formal court
proceedings, the most well-known example is the reliance on COMPAS and other similar
AI-based predictive algorithms. Such tools assist judges in assessing pre-trial danger and
post-trial recidivism risk, supporting judicial decision-making on pre-trial release or on
sentencing.33 In informal ODR, algorithms are used mainly in supporting parties’ own
decision-making either in the initial diagnosis phase at the outset of the process, or later on
in considering common outcomes in similar cases and circumstances on such matters as

29. Id., at 207-208.
30. Id., at 208-209.
31. Id., at 194.
32. Orna Rabinovich-Einy & Ethan Katsh, Blockchain and the Inevitability of Disputes: The Role for Online Dispute Resolution, 2019(2) J. of Disp. Resol. 47, 49 (2019); Ethan Katsh & Orna Rabinovich-Einy, Dispute Resolution in the Sharing Economy, Internet Monitor (30 January 2015), https://medium.com/internet-monitor-2014-platforms-and-policy/dispute-resolution-in-the-sharing-economy-573f6369e3e8 (last visited on 17 May 2020).
33. Cary Coglianese & Lavi M. Ben-Dor, AI in Adjudication and Administration, Brook. L. Rev. 1, 7-14 (forthcoming, 2020).



child support.34 Also, algorithms are sometimes used in a negotiation or mediation to assist
parties in drafting court documents and in devising joint agreements based on their input
in online exchanges.35 A primary growth area for AI has been the realm of legal tech. It is
evident to all those involved in ODR that the role of AI in ODR, including court ODR and
court proceedings more generally, will grow dramatically because of its potential to further
enhance the strengths of ODR and remedy some of the problems associated with the current
dispute resolution landscape – the need for enhanced efficiency and, in certain instances,
improved accuracy.
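
The drafting assistance mentioned above can be pictured, in highly simplified form, as template assembly over structured party input. The following sketch is purely illustrative; the template and field names are hypothetical.

```python
# Hypothetical illustration: assembling a draft settlement from party input.
from string import Template

AGREEMENT_TEMPLATE = Template(
    "Settlement Agreement\n"
    "$claimant and $respondent agree that $respondent will pay "
    "$$${amount} by $deadline, in full and final settlement of the dispute."
)

def draft_agreement(fields: dict) -> str:
    # substitute() raises KeyError if a required field is missing,
    # a simple guard against incomplete drafts.
    return AGREEMENT_TEMPLATE.substitute(fields)

print(draft_agreement({
    "claimant": "Party A",
    "respondent": "Party B",
    "amount": "450.00",
    "deadline": "30 June 2020",
}))
```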
Clearly, use of AI is highly efficient since it relies on automation and inevitably
fulfills tasks much more quickly than any human can, and at a much larger scale. Indeed,
that is where the appeal of algorithms lies – the ability to handle mass decisions at high
precision and low cost.36 Not only can AI handle and process vast amounts of data
efficiently; one of its primary advantages is its ability to detect correlations in
large datasets that no human can.37 These correlations can be used for AI-based decision-
making, recommendations, predictions, and other tools.
Indeed, AI can facilitate human decision-making by incorporating tools that assist
parties’ understanding of their own situation and options, improve parties’ presentation of
the conflict, enhance decision makers’ familiarity with decisions rendered under similar
circumstances in previous instances, as well as provide decision makers with options for
language and reasoning.38 One should bear in mind, however, that to perform these
functions through AI, one would need sufficient training data, e.g., examples of previous
non-AI-based decisions, options, language choices, and the like, all of which would need
to be granularly analysed per case type.39 This is a challenge because, at least in the ADR

34. See discussion of the Civil Resolution Tribunal's 'Solutions Explorer' and of Rechtwijzer's child support calculator below.
35. See id.
36. Maxi Scherer, International Arbitration 3.0 – How Artificial Intelligence Will Change Dispute Resolution?, in Austrian Yearbook on International Arbitration 504 (Klausegger et al., eds., 2019).
37. Ari E. Waldman, Power, Process, and Automated Decision-Making, 88 Fordham L. Rev. 613, 619 (2019).
38. Dave Orr & Colin Rule, Artificial Intelligence and the Future of Online Dispute Resolution, available at www.newhandshake.org/SCU/ai.pdf.
39. See id.



field, sessions are private and outcomes often remain unpublished and are not easy to come
by.40
AI has the potential to improve the effectiveness of dispute resolution: it can study
third-party interventions and discern successful ones, identify features of common
solutions to specific disputes, and detect sources of repeat conflicts. Where the
efficiencies of AI-based decision-making or predictions are combined with ODR, allowing
dispute resolution to take place asynchronously and from afar – free of the physical
limitations of time and space associated with courtrooms and even mediation or arbitration
sessions – the time and cost savings, as well as the gains in convenience and access, are
all the more pronounced. Also, because ODR involves automatic documentation of exchanges,
it can make training data for algorithms more readily available, reducing the difficulty of
obtaining dispute resolution data for that purpose.41
AI can enhance both efficiency and accuracy. By allowing for prediction of
outcomes of certain disputes or claims, use of AI could make negotiations in the shadow
of the law swifter and more accurate and bring parties closer together. While parties have
always assessed the probable outcome in their case and bargained in its shadow,
algorithmic capabilities for studying vast amounts of data present new possibilities in this
realm.42 The prospects of analysing large amounts of data could provide important
information that parties may miss and serve as a reality test curbing unfounded
expectations. At the same time, where parties have differing access to such technology and
to relevant databases, the potential for efficient bargaining may deteriorate into one-sided,
imbalanced outcomes.
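
A minimal sketch of such outcome prediction, under the assumption of a small, invented set of past cases and a crude similarity rule, might look as follows; real tools rely on far larger datasets and richer statistical models.

```python
# Hypothetical illustration: a reality test drawn from 'similar' past cases.

past_cases = [  # (claim_amount, had_documentation, awarded)
    (1000, True, 900), (1000, False, 400),
    (5000, True, 4200), (5000, False, 1800),
]

def predict_award(claim_amount: float, had_documentation: bool) -> float:
    """Average the awards in past cases that resemble this one."""
    similar = [
        awarded for amount, documented, awarded in past_cases
        if documented == had_documentation
        and abs(amount - claim_amount) / claim_amount < 0.5
    ]
    if not similar:
        raise ValueError("no sufficiently similar past cases")
    return sum(similar) / len(similar)

# A party weighing a documented $4,800 claim gets an anchor for bargaining:
print(predict_award(4800, had_documentation=True))  # -> 4200.0
```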
AI could also fortify efforts to address other types of problems in current dispute
resolution processes. One such problem has been human bias, as exemplified in third party
facilitation and decision-making. All humans, even those committed to egalitarian values,
have unconscious biases and these biases inevitably impact their decision-making. 43 Over

40. Scherer, supra note 36, at 509.
41. While the digital trail created in ODR was initially viewed as a limitation, the advantages of automatic documentation soon became clear. See for example Orna Rabinovich-Einy, Technology's Impact: The Quest for a New Paradigm for Accountability in Mediation, 11 Harv. Neg. L. Rev. 253 (2006).
42. See infra Part III.B.
43. Anthony G. Greenwald et al., Measuring Individual Differences in Implicit Cognition: The Implicit Association Test, 74 J. Personality & Soc. Psychol. 1464, 1465-1466 (1998).



the years, much research has been devoted to the phenomenon and to uncovering its
potential and actual impact in court proceedings and their alternatives. 44
Research has uncovered the impact of parties’ social identity on the outcome of
their claims, documenting persistent judicial outcome disparities in cases involving
minorities.45 In ADR, research has been inconclusive, to a large extent due to
methodological weaknesses, but there has been ongoing speculation about the disparate
effect informal processes have on the types of outcomes reached through such
processes by members of disempowered groups.46 Despite attempts to eliminate bias and
level the playing field, these efforts have had limited success in the face of ongoing
pressures to streamline decision making and in light of the spread of informal decision
making, conditions under which stereotype-based decision making tends to thrive. 47 Both
in court and in ADR, outcome disparities can be viewed as ‘inaccuracies’, perhaps of the
sort AI could effectively pre-empt.
The introduction of technology into dispute resolution, it was thought, could make
a difference in more effectively curbing implicit bias.48 Such an expectation was in line with
the somewhat naïve view of technology as neutral and value-free that prevailed in the early
days of digital technology and internet communication. Indeed, the hope was that the introduction
of technology in various realms would not only make processes more efficient but would
also make them less biased.
Reality has proven somewhat nuanced. On the one hand, with current uses of ODR,
there is in fact some indication that outcome disparities along group identity lines may be
reduced, at least where identity is less salient and judicial decision-making is more
structured than in less formal, face-to-face court encounters.49 On the other hand, it has
also become clear that technology is no panacea and that behind the code are humans, and

44. Jerry Kang et al., Implicit Bias in the Courtroom, 59 UCLA L. Rev. 1124, 1169-1186 (2012).
45. See, for example, David C. Baldus et al., Racial Discrimination and the Death Penalty in the Post-Furman Era: An Empirical and Legal Overview, with Recent Findings from Philadelphia, 83 Cornell L. Rev. 1638, 1675-1722 (1998).
46. Gilat J. Bachar & Deborah R. Hensler, Does Alternative Dispute Resolution Facilitate Prejudice and Bias? We Still Don't Know, 70 SMU L. Rev. 817, 829-830 (2017).
47. Avital Mentovich, J.J. Prescott & Orna Rabinovich-Einy, Are Litigation Outcome Disparities Inevitable? Technology and the Future of Impartiality, 71(4) Ala. L. Rev. 893 (forthcoming, 2020).
48. Scherer, supra note 36, at 510.
49. Mentovich, Prescott & Rabinovich-Einy, supra note 47.



that technological processes inevitably reflect human values and choices, and,
consequently, biases as well.50
The question arises: what happens to dispute resolution when decision-making or
facilitation is no longer performed by humans from afar through a platform, but is
displaced altogether by a machine? With AI, as with technology more generally, early
voices believed that the shift to algorithmic decision-making would eliminate the problem
of human bias and enhance the accuracy of legal outcomes and informal resolutions.51
These hopes, however, quickly dissipated, at least under current AI design, as we
discuss further below.

2.3 Why Would We Worry About AI in Dispute Resolution?


Reservations about use of AI in dispute resolution typically fall into one of two categories:
those concerned with losing important qualities of human dispute resolution processes with
the implementation of AI, and fears about unique characteristics of AI that could detract
from the effectiveness or fairness of dispute resolution, as well as exacerbate existing
drawbacks of various dispute resolution processes.
In terms of the first concern, the question arises to what degree AI can replicate the
roles humans perform in dispute resolution. Some writing has been
devoted to the role of lawyers – to what extent can the expertise offered by lawyers be
performed through AI.52 Some have predicted that a significant portion of what lawyers do
today, ranging from document review to legal advice and court outcome prediction, could
in the future be performed by machines.53 Others have been more skeptical, but there is
already some indication that some tasks can and will become the domain of algorithms. 54
For those who view lawyers as problem solvers, the range of activities that may not
be covered by AI may seem more expansive, and include the exploration of interests,

50. Kroll et al., supra note 2, at 680; Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy 25-27, 59-60 (2016); Scherer, supra note 36, at 511.
51. See for example Jerry Kang, Cyber-Race, 113 Harv. L. Rev. 1131 (2000).
52. Dana Remus & Frank S. Levy, Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law (27 November 2016), available at SSRN: https://ssrn.com/abstract=2701092 or http://dx.doi.org/10.2139/ssrn.2701092.
53. See id.; see also Richard Susskind & Daniel Susskind, The Future of the Professions (2015).
54. See infra Part III.B.



brainstorming, ensuring balanced participation, and generating creative options for
resolution.55 At the same time, even our relatively brief experience with ODR has
demonstrated that software can, where appropriately designed, successfully fulfill at least
some of these functions.56
With third parties, the concerns are somewhat different than with lawyers.
Adjudicators (judges and arbitrators) are expected to render decisions that are derived from
the application of rules to the parties’ factual circumstances. The decisions should be fair,
consistent, nonarbitrary, timely, and well reasoned. This task is far from simple and is often
open to a myriad of options that fall under the third party's discretion. How do we translate
into code the complexities of human judgment, which involve elusive determinations of what
comprises ratio decidendi as opposed to dicta, and of how such rules should be applied (or
distinguished) in each case (sometimes due to extra-legal considerations)?
Similarly, where AI supports direct negotiation between parties or guides human
third-party facilitation, doubts surround the ability of algorithms to emulate the richness of
human communication skills, sensitivity, and creativity.57 At the same time, AI proponents
do not purport to displace human intervention by mimicking human capabilities and
actions, but by doing ‘something else’.58 What precisely is entailed in the workings of AI
seems to be the main conundrum.
A principal challenge has to do with the accuracy and transparency of algorithm-
based outcomes and here we approach the second (and related) type of concerns voiced
with respect to the employment of AI in dispute resolution (as in other contexts). Since
algorithms no longer rely on predetermined 'recipes' for their operation, it is often extremely difficult,

55. For some discussion of the limitations of current AI in some of these realms, see Scherer, supra note 36, at 513. There are, however, technologies aiming to succeed in some of these realms. See for example Samiha Samrose et al., CoCo: Collaboration Coach for Understanding Team Dynamics during Video Conferencing, 1(4) ACM Hum. Comput. Interact. Art. 39 (2017), https://hoques.com/Publications/2018/2018-UbiComp-coco-collaboration-coach.pdf (last visited on 21 May 2020) (describing a technology that provides feedback on group dynamics, on the level and quality of participation of each participant, in an effort to ensure more balanced participation).
56. See for example Smartsettle's algorithm as discussed in infra Part III.B.
57. See for example, Scherer, supra note 36, at 513 (demonstrating the limitations of algorithms in terms of creative writing).
58. Jamie Condliffe, AI Is Learning to See the World—But Not the Way Humans Do, MIT Technology Review (30 June 2016), www.technologyreview.com/2016/06/30/159029/ai-is-learning-to-see-the-world-but-not-the-way-humans-do/ (last visited on 17 May 2020).



at times impossible, to decipher how an algorithm reached a decision, what values, norms,
and principles guided such decision, and whether it was erroneous or biased. 59
Algorithms run on models and models inevitably simplify reality, and therefore are
bound to make mistakes. Sometimes, these are not mistakes, but biases. Models reflect
human values and as such often reflect our own biases and stereotypes. 60 Technology may
seem objective, but in fact obscures the ‘values in the design’, and the related, potentially
skewed, outcome.61 Even where we can discern that an outcome might be flawed, it is quite
difficult to uncover whether such problems stem from the data from which the algorithm
‘learned’ or from the meaning assigned to such data by the machine. 62 In some instances,
even the programmers find it difficult to explain ‘why and how the algorithm arrives at the
predictions it makes’.63 There have been efforts to address biased algorithmic outcomes by
working on their definition of ‘similar’ individuals, by treating individuals belonging to
disempowered groups in the same way as the overall population, and by examining whether
principles of fairness were followed in the algorithmic decision-making process. 64
Nevertheless, concerns over lack of transparency and accountability of AI systems
continue.
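
One of the simpler checks alluded to above – comparing favorable-outcome rates across groups – can be sketched as follows. The data are invented, and the four-fifths tolerance is merely a common screening threshold borrowed from disparate-impact analysis; real audits apply considerably richer criteria.

```python
# Hypothetical illustration: a crude group-parity screen on past decisions.

decisions = [  # (group, favorable_outcome)
    ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False),
]

def favorable_rate(group: str) -> float:
    outcomes = [fav for g, fav in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a, rate_b = favorable_rate("A"), favorable_rate("B")
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"group A: {rate_a:.2f}, group B: {rate_b:.2f}, ratio: {ratio:.2f}")
if ratio < 0.8:  # the 'four-fifths' screening rule
    print("possible disparate impact - review the model and its training data")
```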
The opaque nature of algorithms has given rise to concerns over their ‘black box’
character and lack of accountability.65 One way to approach such concern would be to limit
use of AI to those instances where the cost of error is low. But when is the cost of errors low?

59. While critics tend to attribute algorithms' opacity to the lack of transparency by large corporations and government (see Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information (2015)), others believe that even those who have access to the code cannot always explain the operation of an algorithm (see David Auerbach, The Code We Can't Control, Slate (14 January 2015), https://slate.com/technology/2015/01/black-box-society-by-frank-pasquale-a-chilling-vision-of-how-big-data-has-invaded-our-lives.html (last visited on 17 May 2020)).
60. O'Neil, supra note 50, at 25-27, 59-60.
61. Helen Nissenbaum, Values in Technical Design, in Encyclopedia of Science, Technology, and Ethics lxvi, lxvi (Carl Mitcham ed., 2005).
62. Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 UC Davis L. Rev. 399, 411-412 (2017). For an analysis of the different levels on which the harms of automated decision-making may occur, see David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. Davis L. Rev. 653 (2017).
63. Scherer, supra note 36, at 511.
64. Min K. Lee et al., Procedural Justice in Algorithmic Fairness: Leveraging Transparency and Outcome Control for Fair Algorithmic Mediation, 3 ACM Hum-Comput. Interact. 182:3 (2019), http://minlee.net/materials/Publication/2019-CSCW-Al_ProceduralFairness.pdf.
65. Pasquale, supra note 59.



And who bears such costs? It is true that human decision-making can be similarly flawed
and biased, and therefore it could be expected that machines, engineered by humans, would
replicate some of the biases and problems associated with human decision-making as
explained above.
In Weapons of Math Destruction, Cathy O'Neil describes the dangers associated with algorithms that
operate at mass scale, across domains, and make consistent errors that impact individuals
who belong to disempowered groups – minorities, immigrants, the poor, and the
uneducated.66 Members of such groups have limited ability to challenge algorithmic
decisions, few resources to devote to a long and difficult battle, and are often inclined to accept
such outcomes as consistent with a history of maltreatment. Clearly, human decision-
making can also be erroneous and biased, but it can also evolve, whereas machines
depend on the introduction of change by the humans who develop them.67 Not
only does AI operate on what are opaque and often discriminatory correlations, but the data
that feeds into such AI-based decisions or recommendations may constitute a breach of our
expectations of privacy,68 and, ultimately, the reliance on an unreasoned decision defies
what many of us view as a fundamental element of decision-making.69
While use of AI continues to spread to more and more domains, questions over the
fairness of these systems and their effect on different segments of society continue. These
concerns have stimulated various entities, public and private, to generate principles, best
practices and ethical guidelines for the use of AI in the justice arena. 70 In all of these
guidelines fairness, transparency, and accountability of AI systems feature prominently as

66. O'Neil, supra note 50, at 8, 97, 111; see also Solon Barocas & Andrew D. Selbst, Big Data's Disparate Impact, 104 Calif. L. Rev. 671 (2016).
67. O'Neil, supra note 50, at 204.
68. Waldman, supra note 37, at 619-621.
69. Scherer, supra note 36, at 512.
70. The European Commission for the Efficiency of Justice, for example, published an ethical charter on the use of AI in justice systems, see: European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment (European Commission for the Efficiency of Justice (CEPEJ), 2018), https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c (last visited on 15 May 2020); see also a Berkman Klein Center study mapping AI principles: Jessica Fjeld et al., Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI, Berkman Klein Center for Internet & Society (2020), https://cyber.harvard.edu/publication/2020/principled-ai (last visited on 15 May 2020).



central elements of ethical algorithmic design.71 Transparency and fairness are the
cornerstones of legitimacy, which is the goal of all dispute resolution processes.
Legitimacy is what makes individuals bring their disputes before a dispute resolution
forum, participate in the process, and abide by the final resolution. Where the workings of
an algorithm remain opaque and the standards by which it reached its outcomes are not
disclosed, it is difficult to see how a proceeding can meet the widely accepted
underpinnings of legitimacy. 72
In the dispute resolution literature, there is often reference to perceived legitimacy,
which is derived from parties’ perceptions of the process they relied on to settle their
dispute – the quality of the process (was it impartial, fair, and balanced), the quality of
interpersonal treatment from the third party (did the third party treat the disputants with
respect, did s/he listen to the parties) and the opportunity for voice for the parties (did the
parties feel they were heard, were given an opportunity to tell their story, and were
understood).73 Clearly, though, perceptions of legitimacy grounded in procedural
requirements cannot displace the need for substantive measures of fairness and equality. 74
However, the role of procedure in shaping people’s perception of fairness has proven
central and therefore must be examined in the AI context as well.
The major studies on procedural justice were, for the most part, performed in the
face-to-face setting. There is reason to think that while parties' perceptions of the quality
of the process will rise, the perceived quality of interpersonal treatment may decrease.75
With respect to opportunity for voice, the introduction of AI could have an impact in both
directions. On the one hand, parties may feel there is no person at the other end and
therefore their story is not truly being considered. On the other hand, the change in the
nature of the process and being able to asynchronously consider what information to
present, at one’s leisure, without the pressure of real time synchronous communication,
could make individuals feel that they had a better opportunity to convey their story. 76

71. See also Calo, supra note 62, at 411; Kearns & Roth, supra note 11.
72. Rabinovich-Einy, supra note 22, at 25.
73. Rabinovich-Einy & Katsh, supra note 21, at 174.
74. Waldman, supra note 37, at 628-629.
75. Mentovich, Prescott & Rabinovich-Einy, supra note 47.
76. Id.



The few studies conducted in the online setting shed some light on the impact that
AI in dispute resolution could have on perceptions of legitimacy, but more research is
needed. What the studies conducted on the legitimacy of AI-based decision-making in the
dispute resolution context (or a closely related one) show is that we cannot refer to
decision-making as a whole.77 We need to adopt a contextual approach, referring to the
nature of the task for which AI is employed, the subject matter it relates to, and the weight
such intervention carries – a recommendation versus an authoritative decision. All of these
studies demonstrate that under certain conditions and in particular contexts, AI-based
decision making may surpass human decision-making in terms of party perceptions.
Above all, and beyond party perceptions, hovers the deep moral question regarding
the delegation of human decision-making to machines over dispute resolution, certainly in
the public court context, but also in the private arena where questions of power asymmetry
and opacity can play an even more pronounced role. As we discuss below, it seems that AI
is slowly making its way into the dispute resolution arena without deep reflection taking
place on the normative boundaries of such a move. Use of AI in the dispute resolution
context thus far has been quite limited, certainly where decision-making is involved, but
change is clearly underway, and AI can be implicated in the disputation world in much
richer ways than mere dispute resolution.

3 How AI?
As with other technological advancements, AI applications are not only generating
progress, but also often causing disruption and conflict. Automated decisions on loans, job
applications and medical coverage may seem arbitrary and capricious. Other times, such
decisions may in fact seem deliberate but biased. In both cases, making such claims is a
difficult task considering the opaque nature of the workings of AI, often undecipherable
even to those engineers who developed the algorithm.

77. Lee et al., supra note 64, at 182:1; Ayelet Sela, Can Computers Be Fair?, 33 Ohio St. J. on Disp. Resol. 91 (2018); Min K. Lee, Understanding Perception of Algorithmic Decisions: Fairness, Trust, and Emotion in Response to Algorithmic Management, Big Data & Soc. (2018), https://journals.sagepub.com/doi/full/10.1177/2053951718756684; Theo B. Araujo et al., In AI We Trust? Perceptions About Automated Decision-Making by Artificial Intelligence, AI & Soc. (2020), https://pure.uva.nl/ws/files/50211045/Araujo2020_Article_InAIWeTrustPerceptionsAboutAut.pdf.



Other problems also abound. One algorithm mistakenly labeled African Americans
as gorillas.78 In another case, a Facebook yearly summary of photos ‘insensitively’ brought
up memories of a recently deceased child.79 In yet a third case, an algorithm mistakenly
flagged a crime victim as a fraudster because his spouse had switched their health insurance
right before the incident occurred.80 These problems, disputes, or mishaps are typically
unintentional. Indeed, unforeseen consequences of large-scale, automated interventions
should be expected. The pace and volume of disputes and legal claims can be expected to
rise given the sheer scale of AI-based interactions.81 This will make AI-based dispute
resolution, legal advice, and prevention efforts all the more necessary in order to deal with
such problems.
Many contemporary AI-based applications are providing legal assistance which in
the past was obtained from human lawyers, ranging from will82 and contract drafting,83
through the provision of legal analysis and advice,84 to assistance in filing lawsuits and in
pursuing other redress avenues.85 In courts, some ODR processes offer AI-based assistance
for parties, at least in the initial phase of diagnosis of their claim and the avenues of redress
that are available to them. In the Civil Resolution Tribunal in British Columbia, parties use
the ‘solutions explorer’ to locate important information, resources, and template letters in
certain types of disputes.86
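
Such guided diagnosis can be pictured as a small decision tree that routes a party, question by question, to information or a template letter. The questions, routes, and resources below are invented for illustration; the actual Solution Explorer is far richer.

```python
# Hypothetical illustration: a toy guided-diagnosis tree.

DIAGNOSIS_TREE = {
    "start": ("Is your dispute about a strata (condo) matter?",
              {"yes": "strata", "no": "small_claims"}),
    "strata": ("Have you raised the issue with your strata council?",
               {"yes": "strata_claim", "no": "strata_letter"}),
}

OUTCOMES = {
    "small_claims": "See small-claims resources and eligibility information.",
    "strata_claim": "You may be ready to open a strata claim.",
    "strata_letter": "Start with a template letter to your strata council.",
}

def diagnose(answers: dict) -> str:
    node = "start"
    while node in DIAGNOSIS_TREE:          # walk until we reach a leaf
        question, routes = DIAGNOSIS_TREE[node]
        node = routes[answers[question]]
    return OUTCOMES[node]

print(diagnose({
    "Is your dispute about a strata (condo) matter?": "yes",
    "Have you raised the issue with your strata council?": "no",
}))  # -> 'Start with a template letter to your strata council.'
```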

78. Katsh & Rabinovich-Einy, supra note 25.
79. Id.
80. Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor 1-3 (2017).
81. Nicole Lewis, AI-Related Lawsuits Are Coming, SHRM (1 November 2019), www.shrm.org/resourcesandtools/hr-topics/technology/pages/ai-lawsuits-are-coming.aspx.
82. "Ailira," Facebook page, www.facebook.com/ailira/ (last visited on 11 May 2020) (a low cost will-drafting app).
83. "Robot Lawyer LISA", Facebook page, www.facebook.com/pg/RobotLawyerLISA/services/?ref=page_internal (last visited on 11 May 2020); HirePeter is an artificial intelligence business lawyer that uses blockchain technology to notarise, store contracts and generate legal templates, CH & co. (8 July 2016), https://fintank.chappuishalder.com/case-studies/hirepeter-ai/ (last visited on 11 May 2020).
84. "Visabot Helps you Cut Green-Card Red Tape", Venturebeat, https://venturebeat.com/2017/07/11/visabot-helps-you-cut-green-card-red-tape/ (last visited on 11 May 2020).
85. "The World's First Robot Lawyer", Donotpay, https://donotpay.com/ (last visited on 11 May 2020).
86. "Solution Explorer", Government of B.C., https://civilresolutionbc.ca/how-the-crt-works/getting-started/strata-solution-explorer/ (last visited on 19 March 2021). For a similar diagnosis feature, see the Ohio State tax appeals system created by Modria, "Do I have a strong case for my appeal?", https://ohio-bta.modria.com/resources/ohio-bta-diagnosis/strongcase.html (last visited on 21 May 2020).



Other AI technologies target lawyers, improving their performance through
algorithmic assistance, and allowing them to focus on their human strengths. The now
celebrated Ross Intelligence assists lawyers in conducting legal research, including case
law, while avoiding overturned or criticized cases. 87 Others offer document review and due
diligence tools.88 Some platforms also offer automated contract analysis and negotiation
tools, as well as automated document review boasting enhanced accuracy and efficiency at
reduced costs.89
Lawyers and non-lawyers are also harnessing AI for predicting case outcomes,
deciding on a winning strategy, or refining arguments, all based on statistical analyses of
big data.90 Clearly, analysis of cases across judges, case types, and the like is not new and
large law firms with significant resources could always employ some form of data analysis.
Nevertheless, the employment of AI in this context has changed the landscape dramatically
by allowing for large-scale studies of judicial91 and arbitral decisions,92 as well as of
mediated cases93 and lawyer success rates before judges.94 Similar tools are also

87. "The Intelligent Legal Research Choice", Ross Intelligence, www.rossintelligence.com/ (last visited on 11 May 2020). Other applications such as Casetext are now claiming superiority over Ross, see "Casetext v. Ross Intelligence", Casetext, https://casetext.com/ross-vs-casetext/ (last visited on 11 May 2020).
88. "The Most Powerful and Accurate Contract Analysis Software", Kira Systems, https://kirasystems.com/benefits/ (last visited on 11 May 2020).
89. https://leverton.ai/; https://ebrevia.com/#homepage; www.thoughtriver.com/; www.lawgeex.com/; www.luminance.com/ (last visited on 11 May 2020).
90. AI is being used for prediction purposes outside the court setting as well, most notably in the administrative realm. The Internal Revenue Service, for example, is using AI to predict cases of possible tax fraud by drawing on previous tax records and other data. See Cary Coglianese & Lavi M. Ben-Dor, AI in Adjudication and Administration, Faculty Scholarship at Penn Law 20 (forthcoming in the Brooklyn Law Review, 2020), https://scholarship.law.upenn.edu/cgi/viewcontent.cgi?article=3120&context=faculty_scholarship.
91. Daniel M. Katz, A General Approach for Predicting the Behavior of the Supreme Court of the United States, 12(4) PLoS ONE (2017); Scherer, supra note 36, at 508-509 (describing one such research project conducted on Supreme Court decisions in 2017 with a 70% accuracy rate, as well as a 2016 study concerning decisions of the European Court of Human Rights with a 79% success rate).
92. "Boundless Legal Intelligence", ArbiLex, www.arbilex.co/welcome (last visited on 20 May 2020).
93. "Analytics", Decisionset, www.decisionset.com/analytics.html (last visited on 12 May 2020).
94. "The World's Largest Litigation Database", Premonition, www.losingisexpensive.com/ (last visited on 12 May 2020).



being used to study potential jurors' demographics, histories, and online personas in
an attempt to predict who should be selected for the jury.95
In addition, there are decision-support tools and processes that are already in use in
the ODR arena. Most notably, automated negotiation, one of the early developments in the
ODR field, uses automation to support party-to-party direct negotiation on a designated
platform. In this type of process, parties are asked to provide input on their 'side of the
story', and the software plays an active role in assisting them to frame their account of
the circumstances that gave rise to the conflict, as well as the outcome they desire.
The algorithm can highlight to parties what outcomes were reached in
similar cases, incentivize parties to bargain collaboratively,96 and offer parties an
optimization of their outcome based on preferences they disclosed privately.97
Beyond decision-support, we are starting to see examples of automated decision-
making in dispute resolution, but these are rare. One of the earliest examples was the
'automated blind-bidding' process for the resolution of disputes over money. Two
pioneers in the field were Cybersettle and Smartsettle. Each side would disclose to the
software its true reservation price, and if the amounts were within a given range
of one another, the platform would split the difference and announce that a settlement
had been reached. Otherwise, the platform would inform the parties that the reservation prices
were too far apart, and neither would find out the other side's true reservation price.
With both platforms, the algorithm could 'decide' the dispute and deliver an

95. Voltaire Uses AI and Big Data to Help Pick Your Jury, Artificial Lawyer (26 April 2017), www.artificiallawyer.com/2017/04/26/voltaire-uses-ai-and-big-data-to-help-pick-your-jury/ (last visited on 13 May 2020).
96. Smartsettle is an automated negotiation tool that employs blind bidding. Parties can state to the other side what their reservation price is but also disclose a secret reservation price that leans more towards their adversary (which the algorithm can draw on if the openly stated reservation prices do not overlap). The algorithm is programmed to reward parties for bargaining more collaboratively by revealing a number that is closer to their true reservation price, declaring an agreed-upon amount that is closer to the collaborative party's stated price than to the less collaborative party's offer. See Katsh & Rabinovich-Einy, supra note 25, at 35-36.
97. Once parties reach a settlement on Smartsettle's Infinity product, which is a multi-interest negotiation tool, they can choose to have the algorithm optimize the resolution by creating an alternative outcome that improves at least one party's overall satisfaction with the agreement without detracting from that of the other side. This is possible because the parties secretly disclose to the algorithm their preferences regarding each of the issues that are being negotiated. See id., at 48-49.



outcome, according to the agreed upon parameters for its performance and the input
provided by the parties.
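
The core of such a blind-bidding round can be sketched in a few lines. The 30% proximity range below is a hypothetical parameter standing in for whatever rule the parties agreed upon; actual platforms used their own formulas and fee structures.

```python
# Hypothetical illustration: one round of automated blind bidding.

def blind_bidding_round(claimant_demand: float, respondent_offer: float,
                        range_fraction: float = 0.30):
    """Split the difference if the confidential figures are close enough;
    otherwise reveal nothing except the failure to match."""
    if respondent_offer >= claimant_demand:
        return claimant_demand  # the offer already meets the demand
    if claimant_demand - respondent_offer <= range_fraction * claimant_demand:
        return (claimant_demand + respondent_offer) / 2
    return None  # figures stay secret; parties may try another round

print(blind_bidding_round(10_000, 8_000))  # within range -> 9000.0
print(blind_bidding_round(10_000, 5_000))  # too far apart -> None
```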
In terms of AI in U.S. courts, there is currently no ‘automated justice’ in the sense
that a machine is solely responsible for the outcome of a case with no human involvement.98
That said, perhaps surprisingly, the criminal arena has been in the lead in terms of AI use
in the courts, in the form of risk assessment tools used in pretrial detention decisions as
well as for punishment and parole purposes. These tools weigh various factors, such as
the defendant's age, the nature of the current charge, substance abuse, personality, and the
criminal history of the defendant's social circle (depending on the stage at which the tool is being used).99
While these risk assessment tools have been challenged on due process grounds 100 and
critiqued for being structurally biased,101 their use has become widespread.
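
Purely for illustration, a toy weighted checklist conveys the general form of such tools. The factors, weights, and cutoff below are invented; they bear no relation to COMPAS or any deployed instrument, whose models are proprietary, considerably more complex, and contested.

```python
# Hypothetical illustration: the general shape of a weighted risk score.

RISK_WEIGHTS = {
    "age_under_25": 2,
    "prior_failures_to_appear": 3,
    "substance_abuse_history": 1,
}

def risk_category(defendant: dict) -> str:
    score = sum(w for factor, w in RISK_WEIGHTS.items() if defendant.get(factor))
    return "high" if score >= 4 else "low"

# The output is one input to a judge's decision, not a decision itself.
print(risk_category({"age_under_25": True, "prior_failures_to_appear": True}))
# -> 'high'
```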
Finally, by studying patterns in big data and predicting the circumstances under
which undesirable outcomes may occur, AI can be used to prevent such outcomes from
occurring. We see such uses in various arenas; some, such as policing and fraud detection,
were mentioned above. Other examples include the use of AI in healthcare to improve health
services and the accuracy of disease detection. The preservation of dispute
resolution data, which used to be an anomaly but occurs automatically in ODR, makes the
analysis of dispute data and the detection of patterns a prime candidate for algorithmic
intervention.
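
As a minimal sketch of such pattern detection, consider counting recurring source-and-cause pairs in accumulated ODR records; the records and the flagging threshold below are hypothetical.

```python
# Hypothetical illustration: mining ODR records for repeat conflict sources.
from collections import Counter

dispute_records = [
    {"seller": "shop_1", "cause": "late delivery"},
    {"seller": "shop_1", "cause": "late delivery"},
    {"seller": "shop_2", "cause": "item not as described"},
    {"seller": "shop_1", "cause": "late delivery"},
]

def repeat_conflict_sources(records, threshold: int = 3):
    """Flag (seller, cause) pairs recurring often enough to warrant
    upstream prevention rather than case-by-case resolution."""
    counts = Counter((r["seller"], r["cause"]) for r in records)
    return [pair for pair, n in counts.items() if n >= threshold]

print(repeat_conflict_sources(dispute_records))
# -> [('shop_1', 'late delivery')]: fix the shipping process upstream
```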

4 Toward a Future Dispute Resolution Landscape


AI is here, but is still in its infancy. It is our contention that a growing part of what we now
know as judging, resolving disputes informally, legal work, and the administration of

98. Coglianese & Ben-Dor, supra note 33, at 3.
99. Id., at 9.
100. State v. Loomis, 881 N.W.2d 749 (Wis. 2016). In another case, the Indiana Supreme Court condoned the use of another algorithmic risk assessment tool since the decision was grounded in other, separate factors and since the tool did not displace judicial discretion. See Coglianese & Ben-Dor, supra note 33, at 13.
101. Carolyn McKay, Predicting Risk in Criminal Procedure: Actuarial Tools, Algorithms, AI and Judicial Decision-Making, The University of Sydney Law School Research Paper No. 19/67 (2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3494076&dgcid=ejournal_htmlemail_university:of:sydney:law:school:legal:studies:research:paper:series_abstractlink; O'Neil, supra note 50, at 97-103.



justice, whether delivered online or in the physical setting, will involve AI and will become
part of AI-DR.
While AI holds a promise to resolve many of the long-standing problems associated
with the dispute resolution landscape – its backlog and inefficiencies, its high costs and
complexity, and even some of its imprecision and biases – it also introduces new problems and
challenges. The history of technology is a history of unintended consequences. New and
unforeseen biases, reduced levels of privacy, and a lack of transparency feature
prominently in the criticisms of AI. These challenges have been met with some responses
in the form of ethical design of AI, which may mitigate some of these concerns and
engender increased accountability and legitimacy.
Legitimacy is the organizing principle of the dispute resolution arena. If it is to be
sustained in the emerging AI-DR setting, we need to understand its sources and design
dispute resolution processes that are both perceived as legitimate and meet objective
criteria for sustaining legitimacy. While the guidelines discussed above emphasize such
objective measures, they leave much to be desired in terms of disputants' perceptions – the
types of activities they trust machines to perform, in what types of cases, and under what
conditions. It might be surprising to find that algorithms feature most prominently at the
moment in the criminal setting, in decisions on defendants' liberty, but have yet to be
employed in small-scale, repetitive, and simple civil disputes. Similarly, one would expect more AI development in
the informal dispute resolution arena.
A final point that the rise of AI-DR seems to crystallize is the further erosion of
boundaries in the dispute resolution landscape. With the introduction of ADR into courts,
the boundaries between formal and informal dispute resolution began to fade. Later, as
ODR began to be adopted in courts, the boundaries between online and offline arenas
diminished. And now, as AI is being introduced into all these settings, boundaries are
further eroded: all of these processes can involve technology and automation, at various
stages, displacing some of the roles and functions assumed to be inherently the domain of
humans.
