AI and the Future of ODR (English)
1 Introduction
This chapter addresses the rise of artificial intelligence (AI) and its implications for the
field of dispute resolution. In recent decades, digital technology has reshaped alternative
dispute resolution into online dispute resolution (ODR). In the last few years, ODR has
spread to courts, further blurring boundaries between formal and informal dispute
resolution, as well as between online and offline avenues of redress.
More recently, AI has begun to infiltrate dispute resolution, mostly in the realm of
AI-based prediction. With the maturing of ODR and the spread of AI, we can expect the
overlap between the two to further evolve in both private and public dispute resolution,
giving rise to a new form of dispute resolution, AI-DR.
AI is increasingly being employed in a variety of legal and ADR settings, and its
role can be expected to grow further in the near future. The open questions concern
the design choices that will be made with respect to AI and the means that will be employed
to maximize its contribution and curb the challenges associated with AI-based decisions,
recommendations, and predictions. Dispute resolution processes need to be not only
efficient but also fair, trustworthy, and accountable, and must be perceived as such if they
are to sustain their legitimacy and fulfill their societal role.
In the sections that follow, we describe the relevance and current use of AI in
dispute resolution, while drawing on particular examples to illustrate the promise and
limitations of such use, which can be expected to expand dramatically in the next decade.
1. Elizabeth Gibney, Google AI Algorithm Masters Ancient Game of Go, 529 Nature News 445 (2016); John Markoff, Computer Wins on ‘Jeopardy!’: Trivial, It’s Not, N.Y. Times (16 February 2011), www.nytimes.com/2011/02/17/science/17jeopardy-watson.html.
2. Joshua A. Kroll et al., Accountable Algorithms, 165 U. Pa. L. Rev. 633, 633 (2017).
3. Pauline T. Kim, Data-Driven Discrimination at Work, 58 William & Mary L. Rev. 857, 860 (2017).
4. Chavie Lieber, Amazon Might Ban You If You Return Too Much, Racked (23 May 2018), www.racked.com/2018/5/23/17384044/amazon-bans-shoppers-returns (last visited on 17 May 2020).
5. Megan Stevenson, Assessing Risk Assessment in Action, 103 Minn. L. Rev. 303 (2018).
6. Jihii Jolly, How Algorithms Decide the News You See, Columbia Journalism Rev. (20 May 2014), https://archives.cjr.org/news_literacy/algorithms_filter_bubble.php (last visited on 17 May 2020).
7. Ben Ratliff, Slave to the Algorithm: How Music Fans Can Reclaim Their Playlists from Spotify, The Guardian (19 February 2016), www.theguardian.com/books/2016/feb/19/slave-to-the-algorithm-how-music-fans-can-reclaim-their-playlists-from-spotify (last visited on 17 May 2020).
8. Although at present, semi-autonomous cars present more of a challenge than fully autonomous ones. See Tracy Hresko Pearl, Hands on the Wheel: A Call for Greater Regulation of Semi-Autonomous Cars, 93 Ind. L.J. 713, 717-721 (2018).
9. Richard Susskind, Online Courts and the Future of Justice 266-268 (2019).
10. Id., at 271-272.
11. Michael Kearns & Aaron Roth, The Ethical Algorithm: The Science of Socially Aware Algorithm Design 6-7 (2020).
12. Id.
13. The distinction between ML and AI is somewhat blurry, as it is not always possible to distinguish between structured and unstructured data.
14. See definition of ‘artificial intelligence’ in the Encyclopedia Britannica, www.britannica.com/technology/artificial-intelligence (last visited on 19 May 2020).
15. Jerold S. Auerbach, Justice Without Law? 95 (1983) (describing the legal system of the time as ‘a horse-and-buggy [system] near collapse in an urban industrial society’).
16. James E. Cabral et al., Using Technology to Enhance Access to Justice, 26 Harv. J.L. & Tech. 241, 256 (2012).
17. Marc Galanter, Why the “Haves” Come out Ahead: Speculations on the Limits of Legal Change, 9 L. & Soc’y Rev. 95 (1974).
18. Carrie Menkel-Meadow, Regulation of Dispute Resolution in the United States of America: From the Formal to the Informal to the ‘Semi-formal’, in Regulating Dispute Resolution: ADR and Access to Justice at the Crossroads 419 (Felix Steffek & Hannes Unberath, eds., 2013).
19. Carrie Menkel-Meadow, The Trouble with the Adversary System in a Postmodern, Multicultural World, 38 Wm. & Mary L. Rev. 5, 17-18 (1996); Nancy A. Welsh, Making Deals in Court-Connected Mediation: What’s Justice Got to Do with It?, 79 Wash. U. L.Q. 787 (2001).
20. Owen M. Fiss, Against Settlement, 93 Yale L.J. 1073 (1984).
21. Orna Rabinovich-Einy & Ethan Katsh, The New New Courts, 67 Am. U. L. Rev. 165, 181 (2017).
22. Orna Rabinovich-Einy, The Legitimacy Crisis and the Future of Courts, 17 Cardozo J. of Conf. Res. 23, 33-40 (2015).
23. Id., at 34-35; Rabinovich-Einy & Katsh, supra note 21, at 181-184.
24. Rabinovich-Einy, supra note 22, at 36-37.
25. Ethan Katsh & Orna Rabinovich-Einy, Digital Justice: Technology and the Internet of Disputes 7, 33 (2017).
26. Id., at 37-38.
27. Id., at 33.
28. Rabinovich-Einy & Katsh, supra note 21, at 188-203.
29. Id., at 207-208.
30. Id., at 208-209.
31. Id., at 194.
32. Orna Rabinovich-Einy & Ethan Katsh, Blockchain and the Inevitability of Disputes: The Role for Online Dispute Resolution, 2019(2) J. of Disp. Resol. 47, 49 (2019); Ethan Katsh & Orna Rabinovich-Einy, Dispute Resolution in the Sharing Economy, Internet Monitor (30 January 2015), https://medium.com/internet-monitor-2014-platforms-and-policy/dispute-resolution-in-the-sharing-economy-573f6369e3e8 (last visited on 17 May 2020).
33. Cary Coglianese & Lavi M. Ben-Dor, AI in Adjudication and Administration, Brook. L. Rev. 1, 7-14 (forthcoming, 2020).
34. See discussion of the Civil Resolution Tribunal’s ‘Solution Explorer’ and of Rechtwijzer’s child support calculator below.
35. See id.
36. Maxi Scherer, International Arbitration 3.0 – How Artificial Intelligence Will Change Dispute Resolution, in Austrian Yearbook on International Arbitration 504 (Klausegger et al., eds., 2019).
37. Ari E. Waldman, Power, Process, and Automated Decision-Making, 88 Fordham L. Rev. 613, 619 (2019).
38. Dave Orr & Colin Rule, Artificial Intelligence and the Future of Online Dispute Resolution, available at www.newhandshake.org/SCU/ai.pdf.
39. See id.
40. Scherer, supra note 36, at 509.
41. While the digital trail created in ODR was initially viewed as a limitation, the advantages of automatic documentation soon became clear. See, for example, Orna Rabinovich-Einy, Technology’s Impact: The Quest for a New Paradigm for Accountability in Mediation, 11 Harv. Negot. L. Rev. 253 (2006).
42. See infra Part III.B.
43. Anthony G. Greenwald et al., Measuring Individual Differences in Implicit Cognition: The Implicit Association Test, 74 J. Personality & Soc. Psychol. 1464, 1465-1466 (1998).
44. Jerry Kang et al., Implicit Bias in the Courtroom, 59 UCLA L. Rev. 1124, 1169-1186 (2012).
45. See, for example, David C. Baldus et al., Racial Discrimination and the Death Penalty in the Post-Furman Era: An Empirical and Legal Overview, with Recent Findings from Philadelphia, 83 Cornell L. Rev. 1638, 1675-1722 (1998).
46. Gilat J. Bachar & Deborah R. Hensler, Does Alternative Dispute Resolution Facilitate Prejudice and Bias? We Still Don’t Know, 70 SMU L. Rev. 817, 829-830 (2017).
47. Avital Mentovich, J.J. Prescott & Orna Rabinovich-Einy, Are Litigation Outcome Disparities Inevitable? Technology and the Future of Impartiality, 71(4) Ala. L. Rev. 893 (forthcoming, 2020).
48. Scherer, supra note 36, at 510.
49. Mentovich, Prescott & Rabinovich-Einy, supra note 47.
50. Kroll et al., supra note 2, at 680; Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy 25-27, 59-60 (2016); Scherer, supra note 36, at 511.
51. See, for example, Jerry Kang, Cyber-Race, 113 Harv. L. Rev. 1131 (2000).
52. Dana Remus & Frank S. Levy, Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law (27 November 2016), available at SSRN: https://ssrn.com/abstract=2701092 or http://dx.doi.org/10.2139/ssrn.2701092.
53. See id.; see also Richard Susskind & Daniel Susskind, The Future of the Professions (2015).
54. See infra Part III.B.
55. For some discussion of the limitations of current AI in some of these realms, see Scherer, supra note 36, at 513. There are, however, technologies aiming to succeed in some of these realms. See, for example, Samiha Samrose et al., CoCo: Collaboration Coach for Understanding Team Dynamics during Video Conferencing, 1(4) ACM Hum.-Comput. Interact. Art. 39 (2017), https://hoques.com/Publications/2018/2018-UbiComp-coco-collaboration-coach.pdf (last visited on 21 May 2020) (describing a technology that provides feedback on group dynamics, namely on the level and quality of each participant’s contribution, in an effort to ensure more balanced participation).
56. See, for example, Smartsettle’s algorithm as discussed in infra Part III.B.
57. See, for example, Scherer, supra note 36, at 513 (demonstrating the limitations of algorithms in terms of creative writing).
58. Jamie Condliffe, AI Is Learning to See the World—But Not the Way Humans Do, MIT Technology Review (30 June 2016), www.technologyreview.com/2016/06/30/159029/ai-is-learning-to-see-the-world-but-not-the-way-humans-do/ (last visited on 17 May 2020).
59. While critics tend to attribute algorithms’ opacity to the lack of transparency by large corporations and government (see Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information (2015)), others believe that even those who have access to the code cannot always explain the operation of an algorithm (see David Auerbach, The Code We Can’t Control, Slate (14 January 2015), https://slate.com/technology/2015/01/black-box-society-by-frank-pasquale-a-chilling-vision-of-how-big-data-has-invaded-our-lives.html (last visited on 17 May 2020)).
60. O’Neil, supra note 50, at 25-27, 59-60.
61. Helen Nissenbaum, Values in Technical Design, in Encyclopedia of Science, Technology, and Ethics lxvi, lxvi (Carl Mitcham ed., 2005).
62. Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 51 UC Davis L. Rev. 399, 411-412 (2017). For an analysis of the different levels on which the harms of automated decision-making may occur, see David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 UC Davis L. Rev. 653 (2017).
63. Scherer, supra note 36, at 511.
64. Min K. Lee et al., Procedural Justice in Algorithmic Fairness: Leveraging Transparency and Outcome Control for Fair Algorithmic Mediation, 3 ACM Hum.-Comput. Interact. 182:3 (2019), http://minlee.net/materials/Publication/2019-CSCW-Al_ProceduralFairness.pdf.
65. Pasquale, supra note 59.
66. O’Neil, supra note 50, at 8, 97, 111; see also Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671 (2016).
67. O’Neil, supra note 50, at 204.
68. Waldman, supra note 37, at 619-621.
69. Scherer, supra note 36, at 512.
70. The European Commission for the Efficiency of Justice, for example, published an ethical charter on the use of AI in justice systems; see European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment (European Commission for the Efficiency of Justice (CEPEJ), 2018), https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c (last visited on 15 May 2020); see also a Berkman Klein Center study mapping AI principles: Jessica Fjeld et al., Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI, Berkman Klein Center for Internet & Society (2020), https://cyber.harvard.edu/publication/2020/principled-ai (last visited on 15 May 2020).
71. See also Calo, supra note 62, at 411; Kearns & Roth, supra note 11.
72. Rabinovich-Einy, supra note 22, at 25.
73. Rabinovich-Einy & Katsh, supra note 21, at 174.
74. Waldman, supra note 37, at 628-629.
75. Mentovich, Prescott & Rabinovich-Einy, supra note 47.
76. Id.
3 How AI?
As with other technological advancements, AI applications not only generate
advances but often cause disruption and conflict. Automated decisions on loans, job
applications, and medical coverage may seem arbitrary and capricious. At other times, such
decisions may seem deliberate but biased. In both cases, making such claims is a
difficult task given the opaque nature of AI's workings, which are often undecipherable
even to the engineers who developed the algorithm.
77. Lee et al., supra note 64, at 182:1; Ayelet Sela, Can Computers Be Fair?, 33 Ohio St. J. on Disp. Resol. 91 (2018); Min K. Lee, Understanding Perception of Algorithmic Decisions: Fairness, Trust, and Emotion in Response to Algorithmic Management, Big Data & Soc. (2018), https://journals.sagepub.com/doi/full/10.1177/2053951718756684; Theo B. Araujo et al., In AI We Trust? Perceptions About Automated Decision-Making by Artificial Intelligence, AI & Soc. (2020), https://pure.uva.nl/ws/files/50211045/Araujo2020_Article_InAIWeTrustPerceptionsAboutAut.pdf.
78. Katsh & Rabinovich-Einy, supra note 25.
79. Id.
80. Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor 1-3 (2017).
81. Nicole Lewis, AI-Related Lawsuits Are Coming, SHRM (1 November 2019), www.shrm.org/resourcesandtools/hr-topics/technology/pages/ai-lawsuits-are-coming.aspx.
82. “Ailira,” Facebook page, www.facebook.com/ailira/ (last visited on 11 May 2020) (a low-cost will-drafting app).
83. “Robot Lawyer LISA,” Facebook page, www.facebook.com/pg/RobotLawyerLISA/services/?ref=page_internal (last visited on 11 May 2020); HirePeter is an artificial intelligence business lawyer that uses blockchain technology to notarise and store contracts and generate legal templates, CH & Co. (8 July 2016), https://fintank.chappuishalder.com/case-studies/hirepeter-ai/ (last visited on 11 May 2020).
84. “Visabot Helps You Cut Green-Card Red Tape,” VentureBeat, https://venturebeat.com/2017/07/11/visabot-helps-you-cut-green-card-red-tape/ (last visited on 11 May 2020).
85. “The World’s First Robot Lawyer,” DoNotPay, https://donotpay.com/ (last visited on 11 May 2020).
86. “Solution Explorer,” Government of B.C., https://civilresolutionbc.ca/how-the-crt-works/getting-started/strata-solution-explorer/ (last visited on 19 March 2021). For a similar diagnosis feature, see the Ohio State tax appeals system created by Modria, “Do I have a strong case for my appeal?”, https://ohio-bta.modria.com/resources/ohio-bta-diagnosis/strongcase.html (last visited on 21 May 2020).
87. “The Intelligent Legal Research Choice,” Ross Intelligence, www.rossintelligence.com/ (last visited on 11 May 2020). Other applications such as Casetext are now claiming superiority over Ross; see “Casetext v. Ross Intelligence,” Casetext, https://casetext.com/ross-vs-casetext/ (last visited on 11 May 2020).
88. “The Most Powerful and Accurate Contract Analysis Software,” Kira Systems, https://kirasystems.com/benefits/ (last visited on 11 May 2020).
89. https://leverton.ai/; https://ebrevia.com/#homepage; www.thoughtriver.com/; www.lawgeex.com/; www.luminance.com/ (last visited on 11 May 2020).
90. AI is being used for prediction purposes outside the court setting as well, most notably in the administrative realm. The Internal Revenue Service, for example, is using AI to predict cases of possible tax fraud by drawing on previous tax records and other data. See Cary Coglianese & Lavi M. Ben-Dor, AI in Adjudication and Administration, Faculty Scholarship at Penn Law (forthcoming in the Brooklyn Law Review, 2020), https://scholarship.law.upenn.edu/cgi/viewcontent.cgi?article=3120&context=faculty_scholarship, at 20.
91. Daniel M. Katz, A General Approach for Predicting the Behavior of the Supreme Court of the United States, 12(4) PLoS ONE (2017); Scherer, supra note 36, at 508-509 (describing one such study conducted on Supreme Court decisions in 2017 with a 70% accuracy rate, as well as a 2016 study concerning decisions of the European Court of Human Rights with a 79% success rate).
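An ‘accuracy rate’ of the kind reported in these studies is simply the share of held-out decisions a model predicts correctly after being fit on past decisions. The sketch below shows only that bookkeeping, using a deliberately trivial majority-class predictor; the cited studies, of course, train learned models on features of each case. All names and data here are invented for illustration.

```python
# Minimal sketch: measure predictive accuracy on held-out decisions.
# The "model" is a trivial baseline that always predicts whichever
# outcome was most common in the training data.

def majority_baseline(train_outcomes):
    """Return a predictor that ignores the case and guesses the
    most frequent past outcome."""
    majority = max(set(train_outcomes), key=train_outcomes.count)
    return lambda _case: majority

def accuracy(predict, cases, outcomes):
    """Fraction of held-out cases whose outcome is predicted correctly."""
    hits = sum(predict(c) == o for c, o in zip(cases, outcomes))
    return hits / len(outcomes)

# Past decisions: 1 = petitioner wins, 0 = respondent wins.
train = [1, 1, 0, 1, 0, 1, 1]
model = majority_baseline(train)
test_cases = ["case A", "case B", "case C", "case D", "case E"]
test_outcomes = [1, 0, 1, 1, 0]
print(accuracy(model, test_cases, test_outcomes))  # 0.6
```

Even this baseline can post a deceptively high rate when one outcome dominates, which is why reported figures such as 70% are best read against the base rate of the court in question.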
92. “Boundless Legal Intelligence,” ArbiLex, www.arbilex.co/welcome (last visited on 20 May 2020).
93. “Analytics,” Decisionset, www.decisionset.com/analytics.html (last visited on 12 May 2020).
94. “The World’s Largest Litigation Database,” Premonition, www.losingisexpensive.com/ (last visited on 12 May 2020).
95. Voltaire Uses AI and Big Data to Help Pick Your Jury, Artificial Lawyer (26 April 2017), www.artificiallawyer.com/2017/04/26/voltaire-uses-ai-and-big-data-to-help-pick-your-jury/ (last visited on 13 May 2020).
96. Smartsettle is an automated negotiation tool that employs blind bidding. Each party states an open reservation price to the other side but also discloses to the system a secret reservation price that concedes more toward its adversary; the algorithm can draw on the secret prices if the openly stated ones do not overlap. The algorithm rewards the party that bargains more collaboratively, that is, the one whose secret number moves closer to its true reservation price, by declaring an agreed-upon amount that is closer to that party’s stated price than to the less collaborative party’s offer. See Katsh & Rabinovich-Einy, supra note 25, at 35-36.
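The blind-bidding logic described in note 96 can be made concrete in code. The function below is a hypothetical reconstruction: the midpoint rule and the concession-weighted settlement formula are our own illustrative assumptions, not Smartsettle’s actual (proprietary) algorithm. Prices are for a buyer who pays and a seller who is paid, and each secret price is assumed to concede at least as much as the corresponding open price.

```python
# Hypothetical sketch of blind bidding with open and secret
# reservation prices. Not Smartsettle's actual algorithm.

def blind_bidding_settlement(buyer_open, buyer_secret,
                             seller_open, seller_secret):
    """Return a settlement amount, or None if there is no overlap."""
    # If the openly stated reservation prices already overlap,
    # simply split the difference between them.
    if buyer_open >= seller_open:
        return (buyer_open + seller_open) / 2
    # Otherwise fall back on the secretly disclosed prices.
    if buyer_secret >= seller_secret:
        # How far each side secretly moved toward its adversary.
        buyer_concession = buyer_secret - buyer_open
        seller_concession = seller_open - seller_secret
        w = buyer_concession / (buyer_concession + seller_concession)
        # Reward the bigger conceder: the settlement lands closer to
        # the end of the secret zone that favors that party.
        return w * seller_secret + (1 - w) * buyer_secret
    return None  # no zone of agreement even with the secret prices
```

For example, with open prices of 100 (buyer) and 180 (seller) and secret prices of 160 and 140, the buyer has conceded more, so the settlement (about 148) lands below the midpoint of the secret zone, in the buyer’s favor.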
97. Once parties reach a settlement on Smartsettle’s Infinity product, a multi-interest negotiation tool, they can choose to have the algorithm optimize the resolution by creating an alternative outcome that improves at least one party’s overall satisfaction with the agreement without detracting from that of the other side. This is possible because the parties secretly disclose to the algorithm their preferences regarding each of the issues being negotiated. See id., at 48-49.
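The post-settlement step described in note 97 amounts to searching for a Pareto improvement over the parties’ secretly disclosed preferences. The sketch below assumes an additive scoring model and an exhaustive search over issue options; both, along with all names and data, are simplifications made for the example, not the Infinity product’s actual method.

```python
# Illustrative Pareto-improvement search over multi-issue packages.
from itertools import product

def satisfaction(package, weights):
    """Score a package (one chosen option per issue) for one party."""
    return sum(weights[issue][option] for issue, option in package.items())

def pareto_improve(settlement, options, weights_a, weights_b):
    """Find a package that raises at least one party's score without
    lowering the other's; return the original settlement if none exists."""
    base_a = satisfaction(settlement, weights_a)
    base_b = satisfaction(settlement, weights_b)
    issues = list(options)
    best, best_gain = settlement, 0
    for combo in product(*(options[i] for i in issues)):
        package = dict(zip(issues, combo))
        a = satisfaction(package, weights_a)
        b = satisfaction(package, weights_b)
        gain = (a - base_a) + (b - base_b)
        # Keep only packages that leave neither party worse off.
        if a >= base_a and b >= base_b and gain > best_gain:
            best, best_gain = package, gain
    return best

# Example: party A prefers a lump sum and an apology; party B is
# indifferent on both issues. Starting from (installments, no apology),
# the search finds (lump sum, apology): better for A, no worse for B.
options = {"payment": ["lump sum", "installments"], "apology": ["yes", "no"]}
weights_a = {"payment": {"lump sum": 3, "installments": 1},
             "apology": {"yes": 2, "no": 0}}
weights_b = {"payment": {"lump sum": 2, "installments": 2},
             "apology": {"yes": 1, "no": 1}}
print(pareto_improve({"payment": "installments", "apology": "no"},
                     options, weights_a, weights_b))
```

The exhaustive search is fine for a handful of issues; a real system negotiating many issues would need a smarter search, but the acceptance criterion, no party made worse off, would be the same.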
98. Coglianese & Ben-Dor, supra note 33, at 3.
99. Id., at 9.
100. State v. Loomis, 881 N.W.2d 749 (Wis. 2016) (Supreme Court of Wisconsin). In another case, the Indiana Supreme Court condoned the use of another algorithmic risk assessment tool, since the decision was grounded in other, separate factors and since the tool did not displace judicial discretion. See Coglianese & Ben-Dor, supra note 33, at 13.
101. Carolyn McKay, Predicting Risk in Criminal Procedure: Actuarial Tools, Algorithms, AI and Judicial Decision-Making, 19/67 The University of Sydney Law School (2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3494076; O’Neil, supra note 50, at 97-103.