
W340 EMA PI: E4044844

Edward Freddie Cook

Action Plan:

Having chosen to discuss the challenges surrounding legally regulating decision-making robots, I need to devise and follow a plan to answer the question as comprehensively as possible.

My first goal is to assess the current legal thinking around the subject and to investigate the current and future uses of decision-making robotics. Having done this, I have identified areas of interest including self-driving cars, surgical robots, and autonomous military weapons. I will not discuss the latter in this essay, because military secrecy leaves too little reliable information. I believe the widespread commercial use and rapid growth of the autonomous car market, particularly with brands such as Tesla, will provide a wealth of relevant information, statistics, and insight.

Secondly, I want to examine some of the philosophy surrounding the issue. Can that level of autonomy be granted to a robot without considering it, on some level, to be near-human? And if so, what impact does that have on our denying the robot rights and responsibilities such as liability and intellectual property? I have expanded my thinking to consider torts such as negligence and the need for the robot itself to be capable of being treated as an actor under the law; but what would be the ramifications of holding a robot accountable? There is a definite lack of academic resource in this field, I suspect because it is so new and fast-growing that there has not yet been a significant turning point to make it a national conversation anywhere.

Thirdly, having considered the progression in AI and robotics, particularly in regard to vehicles and medicine, I will establish my position on whether, and which, challenges are posed to legal regulation. The Automated and Electric Vehicles Act is certainly interesting, but I feel the joint report into automated vehicles (the AVR) was far more comprehensive and should have been used to bring a whole new model into legislation. The AVR is a great example of how I believe such legislation should be considered and enacted.

Fourthly, I will find examples of effective, ineffective, and non-existent regulation of decision-making robots and, from each category, learn what makes those examples work or fail. The Trivago case in Australia is fascinating: it provides real insight into the judicial efficacy of regulating industries running on artificial intelligence. My gut instinct was that it would not work, but the court's ability to bring in expert witnesses on both sides, dissect the AI's reasoning, and keep the company's protected trade secrets confidential is impressive.

Finally, I will tie my varying threads into one conclusion. The topic is too broad for a single simple conclusion, so I will separate it into legislative and ethical challenges. There is a huge amount to cover, so I will keep to those two tracks. (468 words)

EMA:

This essay examines whether decision-making robots pose a challenge to legal regulation and, if they do, what those challenges might be. It will do so by discussing the differences between decision-making in humans and robots, critically discussing the challenges of legally regulating those differences, and discussing the need for legal safeguards to govern decision-making by robots. The essay will examine robots operating in various spheres of everyday life, with a focus on vehicles and surgical equipment, and how the underpinning operational technology of Artificial Intelligence (AI) raises questions about conferring both rights and responsibilities onto the robots themselves, their users, and their manufacturers. As there is no statutory definition of a robot in England and Wales, for the purposes of this essay the term robot will follow the definition given in 'Guidelines on Regulating Robots' [1]: an "autonomous machine able to perform human actions".

In English law, especially in areas such as the tort of negligence, people are held accountable under the reasonable person test. The essence of this test is to ask whether a reasonable person would have acted as the defendant did, in order to establish whether the defendant's behaviour was reasonable or potentially negligent. This is judged on a case-by-case basis, aided by a long history of case law and an understanding of how people act in certain circumstances and under certain day-to-day stresses. Standards such as the "man on the Clapham omnibus" have been used since the beginning of the twentieth century. Whilst these principles are not outlined in statute, it is one test for all people, with the bar raised or lowered according to the circumstances. Children, for example, are held to a lower standard of care, one suitable to a 'typical' child of their own age, whereas in Dunnage v Randall [2] the court declined to lower the objective standard for a defendant suffering from mental illness. Furthermore, in Bolam v Friern Hospital Management Committee [3] it was established that adults professing and exercising a special skill, such as a medical one, are held not to the standard of the man on the Clapham omnibus but to the ordinary skill of an ordinarily competent person exercising that particular art. This raises another challenge for the legal regulation of decision-making robots: should a fully autonomous vehicle be expected to drive as safely as, or more safely than, the average reasonable road user?

Certain self-driving systems, such as Tesla's Autopilot, rely upon deep neural networks, which means that not even the developers of the software can be entirely certain of how or why particular decisions are made. These differ from explainable AI systems, in which decisions can be challenged and the AI itself explains how and why a decision was made [4].
Goodall [5] notes that automated driving vehicles require the ability to make ethical decisions, whether by explicitly pre-programmed instructions, machine learning, or a combination of the two.
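To make this distinction concrete, the following is a minimal sketch in Python. Everything in it is invented for illustration (the weights, the two-second rule, the sensor values) and describes no real vehicle's software. The point is purely structural: the opaque model returns a decision with no account of its reasoning, while the explainable one returns the rule it applied, something a court could actually examine.

```python
# Illustrative sketch only: the weights, rules, and numbers below are
# invented for this essay and describe no real vehicle's software.

def opaque_model(sensor_inputs):
    """Stand-in for a deep neural network: the decision emerges from
    learned weights (millions of them in a real network), so no developer
    can point to a human-readable reason for any individual output."""
    weights = [0.42, -1.37, 0.88]
    score = sum(w * x for w, x in zip(weights, sensor_inputs))
    return "brake" if score > 0 else "continue"

def explainable_model(obstacle_distance_m, speed_mps):
    """Stand-in for an explainable AI system: every decision is returned
    together with the explicit rule that produced it, which a court or
    regulator could challenge."""
    time_to_collision = obstacle_distance_m / max(speed_mps, 0.1)
    if time_to_collision < 2.0:
        return "brake", f"rule fired: time to collision {time_to_collision:.1f}s < 2.0s"
    return "continue", "rule fired: no obstacle within the braking threshold"

print(opaque_model([0.9, 0.1, 0.5]))     # a decision, but no reasons
print(explainable_model(15.0, 10.0))     # a decision plus its stated rule
```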

The ethics of such decisions is debated through hypothetical scenarios such as the trolley problem, which asks whether one should take action that causes harm to a smaller group or take no action in the knowledge that a larger group will be harmed. Leaving such moral decisions to a decision-making robot, if it were to result in damage or even loss of life, raises serious questions about legal regulation. If the robot successfully followed its programming, could the manufacturer be deemed liable? Additionally, if the car was fully autonomous, could the human in the driving seat hold any liability? If the answer to both questions is no, how would the legal system seek to redress the harm done? Goodall notes that "the fields of moral modelling and machine ethics has made some progress, but much work remains".
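Goodall's first option, explicitly pre-programmed instructions, can be sketched as follows. The policy below (hard constraints first, then minimise expected casualties) is entirely hypothetical and describes no manufacturer's actual programming, but it shows how a robot can cause harm while following its programming to the letter, which is exactly where the liability question bites.

```python
# Hypothetical pre-programmed ethical policy, invented for illustration:
# apply hard constraints first, then minimise expected casualties.

def choose_path(paths):
    """Each path is (name, expected_casualties, violates_hard_constraint).
    The policy filters out forbidden manoeuvres, then picks the least
    harmful of what remains."""
    permitted = [p for p in paths if not p[2]]
    return min(permitted, key=lambda p: p[1])

paths = [
    ("stay in lane", 3, False),
    ("swerve into empty lane", 1, False),
    ("mount the pavement", 0, True),  # fewest casualties, but forbidden outright
]

# The policy picks "swerve into empty lane": one casualty is still expected,
# yet the robot has followed its programming exactly.
print(choose_path(paths))
```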

Eversmann [6] notes that robots are becoming increasingly prevalent in human decision-making. That study showed similar levels of trust in human and robot decision-making across several scenarios, including one involving the crash of a self-driving car. Part of the reason is likely that the study gave the robot a chance to explain its decision. This is significant: if a company uses explainable AI rather than deep neural networks, the ability to see whether a decision was reasonable and ethical could increase the trust people have in the rationale of a self-driving car in the event of a crash.

In a paper investigating the impact of robotic surgery on decision-making, particularly with regard to the Da Vinci surgical robot, Randell et al [7] note, following a multi-site interview study across nine hospitals, that "the findings reveal both potential benefits and challenges of robotic decision making". This is attributed in part to the difference in how a surgeon makes decisions with or without a robot in the suite; factors noted include "tactile perception, visual perception, motor skill, and instrument complexity, all of which are affected by robotic surgery". It is also noted that the surgeon's proximity to human team members or to a robot can affect decision-making. However, much as in "A systematic review on artificial intelligence in robot-assisted surgery" [16], it is acknowledged that there is not yet sufficient data specifically on differential decision-making, as the limited information currently available is primarily focused on patient outcomes.

When it comes to explaining the difference between human and robot decision-making in a legal setting, one significant factor is that human decision-making is largely more understandable because it is relatable: the defendant can explain their thought process, and the court can relate it to their own similar lived experience through empathy. When assessing the decision-making of a robot, however, it becomes a technical issue of examining the programming and the learning built upon it that together form the artificial intelligence; the outcome depends on the robot's initial programming and its ability to learn [17]. The studies above suggest that, on examination of decision-making capability, explainable AI systems will appear more trustworthy than deep neural network AI.

To legally regulate decision-making robots, it must first be decided to whom their decisions are attributable, be it the manufacturer, the user, or the robot itself. That decision rests on an understanding of what level of decision-making the robot actually performs for itself, and whether it is capable of truly learning and understanding the cause and effect of its decisions, or whether its behaviour is more a reflection of its original programming or of the system within which it was used.

Before one can look at regulating decision-making robots, one must first examine whether and how they can be considered actors with a level of personhood [19].

It is worth noting that robots with artificial intelligence are not the first non-human actors to be considered under the law. Whilst there is precedent for non-human personhood, it is not entirely consistent and varies between countries.

Perhaps the most common is corporate personhood: companies act under the law as non-human persons, able to own property, enter contracts, and commit tortious wrongs in their own names, and can therefore be sued and fined as entities without naming human defendants. Similarly, the New Zealand Parliament [8] passed a bill bestowing non-human personhood on the Whanganui River to aid its protection and restoration. In these circumstances the bodies enjoy certain rights and protections, but these are limited and plainly do not extend to human rights such as voting.

There are contrasting examples showing that certain rights cannot be conferred under non-human personhood in some legal systems. Two notable examples come from the United States. As reported by Lynam (2022) [9], a habeas corpus claim on behalf of a chimpanzee was denied because "while a chimpanzee is aware and intelligent, they cannot bear duties or responsibilities" [18]. Lynam notes this as a flawed argument, since human children and disabled people are given rights despite lacking certain duties and responsibilities.

The second notable case is Naruto v Slater [10], in which it was decided that non-human actors lack statutory standing to hold copyright and intellectual property rights.

These cases pose a problem for legal regulation because they are disparate decisions on non-human decision-making and on the rights and responsibilities that should be granted accordingly; and whilst they predominantly concern animals, the core of the argument remains. Even if artificial intelligence is far better placed to emulate and understand human decision-making, it is inherently inhuman and should therefore be scrutinised accordingly.

On top of this, artificial intelligence, unlike animals, is created by humans and can therefore itself be considered property. This raises the question of how something can be both a person and property without this being akin to slavery. It poses an ethical challenge for the legal regulation of advanced decision-making robots: it requires accepting that such robots can understand the human world and human behaviour well enough to navigate, and even contribute to, them autonomously, whilst granting them none of the positive or negative credit for their success or failure in doing so.

These cases show that whilst non-human actors can be granted non-human personhood, the rights and responsibilities conferred are considered individually in each case and are narrower than those of human persons.

The question of the extent, if any, to which a decision-making robot can be considered a legal person, property, or both simultaneously raises further questions about the potential to legally regulate how far artificial intelligence can be held liable for its own actions. The examples above show that non-human entities can be afforded appropriate rights whilst still not being legally considered persons [11].

One of the fastest-growing areas in which decision-making robots are taking some or all decision-making duty away from people is driving. In England and Wales this is currently regulated by the Automated and Electric Vehicles Act 2018 (AEVA 2018) [12]. In its joint report on Automated Vehicles (AVR) [13], the Law Commission recommended that the Consumer Protection Act 1987 (CPA 1987) [14] be reviewed as to how product liability applies to new technology in general, and to automated vehicles in particular. The report finds that automated vehicles actually run on a hybrid control model: sometimes the human driver is in full control of the vehicle, sometimes in partial control, and sometimes the vehicle is entirely automated. Under this hybrid model, the human driver should be liable when they are in control of the car, during the transition period between automated and manual driving, and when they fail to heed warnings from the car that there is an error with the automated driving function.
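The allocation described above can be restated schematically, as in the sketch below. This is my own simplification of the hybrid model for illustration; the mode names and the single warning flag are not the report's or the Act's terminology.

```python
# Schematic restatement of the AVR's hybrid control model (my own
# simplification for illustration, not the report's or the Act's wording).

from enum import Enum, auto

class Mode(Enum):
    MANUAL = auto()        # human driver in full control
    TRANSITION = auto()    # handover between automated and manual driving
    AUTOMATED = auto()     # vehicle driving itself entirely

def liable_party(mode: Mode, ignored_error_warning: bool) -> str:
    """Map a driving mode to the party the hybrid model points at."""
    if mode in (Mode.MANUAL, Mode.TRANSITION):
        return "human driver"
    if ignored_error_warning:
        return "human driver"   # failed to heed the car's warning of an error
    return "manufacturer/insurer"

print(liable_party(Mode.AUTOMATED, ignored_error_warning=False))   # manufacturer/insurer
print(liable_party(Mode.TRANSITION, ignored_error_warning=False))  # human driver
```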

The relationship between the AVR and the CPA is emblematic of the challenges around legal regulation of decision-making robots. Whilst some legislation already exists, whether written specifically for new technologies, such as AEVA 2018, or extended from legislation on now-outdated technologies, such as CPA 1987, the AVR shows that new considerations must be brought into our understanding of the law at a fundamental level to cope with the emergence of ever more impactful technologies.

Under CPA 1987 the company is liable to provide a product fit for purpose, which in this scenario means an automated car whose decision-making capability is provably reliable by design, so that it can drive at least as safely as a reasonable and qualified driver. However, because the vehicle itself is not an entity like a corporation or a person, it cannot itself be charged with any infraction. Whilst it might seem unreasonably difficult to prove fault in the underlying technology that forms the robot's decisions, one should look at ACCC v Trivago [15]. Trivago was accused of misleading customers through its claims about its ability to find, rank, and promote the best hotel room prices using an algorithm relying on artificial intelligence. Australia's Federal Court set up specific rules for evidential discovery that allowed both parties to examine how Trivago's artificial intelligence was running, and allowed expert witnesses to be called by both sides, all whilst protecting Trivago's intellectual property. Trivago lost the case and was fined 44.7 million AUD.

The AVR also proposes three new legal actors. The first is the user-in-charge (UIC), the human in the driving seat. The second is the Authorised Self-Driving Entity (ASDE), the manufacturer or developer that puts a vehicle forward for authorisation and takes responsibility for its actions. The third is the no-user-in-charge (NUIC) operator, a contracted, licensed operator responsible for supervision and maintenance in matters such as software installation and cybersecurity. The report goes on to clearly outline the duties of the three parties, as sketched below. This is an excellent example of how decision-making robots can be legally regulated in balance with their human users and manufacturers.
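Restated as data, with the duties paraphrased from the report rather than quoted, the division of responsibility looks like this:

```python
# The AVR's three proposed actors, with duties paraphrased (not quoted)
# from the report, laid out side by side for comparison.

ACTORS = {
    "UIC": ("user-in-charge",
            "sit in the driving seat and resume control on handover"),
    "ASDE": ("Authorised Self-Driving Entity",
             "put the vehicle forward for authorisation; answer for its driving"),
    "NUIC operator": ("no-user-in-charge operator",
                      "licensed supervision and maintenance, e.g. software and cybersecurity"),
}

for code, (name, duties) in ACTORS.items():
    print(f"{code}: {name} -- {duties}")
```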

As robots become more independent of human co-actors, there are considerations to be made. Firstly, having examined the implications of granting legal non-human personhood and national citizenship to robots, there is a strong argument that there must always be a person, or a company run by persons, who can be held accountable for the actions of the robot, just as parents and pet owners can in some instances be held accountable for negligent care of their children and pets that leads to harmful situations. Equally, as when courts have considered granting certain rights through personhood to non-human bodies, rights such as citizenship and the right to vote must be considered inapplicable to robots.

Secondly, the Trivago case shows how an effective court setup can be used to investigate the decision-making capabilities of artificially intelligent robots and to hold the companies that create them to account. However, such infrastructure needs time to be set up, and it will need to be tested as cases become more common. Getting ahead of the curve will benefit everyone who is negatively affected by poor decision-making from a robot.

There is a need for legal safeguards because the industries around decision-making robots already have a high impact on society, and as the technology develops into new areas it will only have more impact on the lives of citizens. Those citizens must therefore have recourse to hold the manufacturers and users of such technologies accountable, for example where someone is injured by a self-driving car that was technologically deficient when it left the factory. However, there must also be legal regulation for those affected by a robot that successfully followed its programming, independent of any driver or operator influencing it, and still caused harm.

Legal safeguards need to be considered and implemented to take into account both the skills and the failures of decision-making robots. As they take on more professional skillsets, such as fully autonomous driving and surgical assistance, it must be clear to what standard they are being held as products, and to what extent manufacturers and users will be held accountable both when the robot is working as intended and when it is not.

In conclusion, on the question of whether decision-making by robots poses a challenge for legal regulation, the answer is undoubtedly yes. It raises legislative, practical, and ethical questions, and addressing them will be no mean undertaking. However, it is essential.

In regard to legislative problems, it is clear that decision-making robots and other progressive technologies need to be legislated upon more clearly. There are, however, already examples where this has been done on some level, such as AEVA 2018. A further issue with legally regulating decision-making robots is the current lack of case law: as more cases are brought before the courts, the decisions made in formative cases will affect the extent to which regulation can be enforced as the technology progresses further. Legislation could be used to exact a provable level of quality and transparency from manufacturers around both the hardware and the underlying artificial intelligence itself; the Trivago case showed that this is already achievable.

There could also be requirements placed on suppliers of decision-making robotic technology to hold a certain amount of capital, specifically reserved for damages or fines, relative to the number of their robots in use in the UK. It would also be advisable to require end-users to hold additional insurance protecting them from costs incurred through the use or misuse of decision-making robots, so that even if a person in an autonomous car had no control at the time of a crash, their insurance policy would still allow affected parties to make a claim. These measures would place clearer responsibility on users and manufacturers to create and use the robots responsibly. A similar principle could be applied to hospitals utilising decision-making medical robots. Reports and consultations such as the AVR will prove essential when considering legal regulation: insightful reports fuelling comprehensive legislation and judicial precedent will remove many of the challenges legal regulation faces.

Examining the ethical problems, the key reconciliation is between acknowledging decision-making robots as intelligent enough to act independently in scenarios where people's lives are at stake, acting as a new type of non-human person the likes of which the legal system has not truly encountered before, whilst also holding that they are not to be considered human persons or citizens. The precise nature of this non-human personhood should be set out in any legislation or formative case law arising around the technology, and should leave room to be updated alongside the technology itself. This aspect could prove the most challenging.

Whilst decision-making robots do pose challenges to legal regulation, these are challenges that can be, and in some cases already are being, overcome.

(2986 words)

Reference List:

[1] Robolaw (2012) "Regulating Emerging Robotic Technologies in Europe: Robotics facing Law and Ethics". [Online] Available at http://www.robolaw.eu/RoboLaw_files/documents/robolaw_d6.2_guidelinesregulatingrobotics_20140922.pdf (accessed 08/03/2023)

[2] Dunnage v Randall & UK Insurance Ltd [2015] EWCA Civ 673

[3] Bolam v Friern Hospital Management Committee [1957] 1 WLR 583

[4] Snoswell, A. et al. (2022) "When self-driving cars crash, who's responsible? Courts and insurers need to know what's inside the 'black box'". The Conversation. [Online] Available at https://theconversation.com/when-self-driving-cars-crash-whos-responsible-courts-and-insurers-need-to-know-whats-inside-the-black-box-180334 (accessed 08/03/2023)

[5] Goodall, N.J. (2014) "Machine Ethics and Automated Vehicles", Road Vehicle Automation, pp. 93–102. [Online] Available at https://www.researchgate.net/publication/300567119_Machine_Ethics_and_Automated_Vehicles (accessed 08/03/2023)

[6] Eversmann, M.C. (2022) "Teaming With Robots: Do Humans Judge Decisions Made by Robots Differently Than Decisions Made by Humans?" University of Twente. [Online] Available at http://essay.utwente.nl/89652/1/Eversmann_MA_BMS.pdf (accessed 08/03/2023)

[7] Randell, R. et al. (2015) "Impact of Robotic Surgery on Decision Making: Perspectives of Surgical Teams". [Online] Available at https://www.researchgate.net/publication/317088424_Impact_of_Robotic_Surgery_on_Decision_Making_Perspectives_of_Surgical_Teams (accessed 08/03/2023)

[8] New Zealand Parliament (2017) "Innovative bill protects Whanganui River with legal personhood". [Online] Available at https://www.parliament.nz/en/get-involved/features/innovative-bill-protects-whanganui-river-with-legal-personhood/ (accessed 08/03/2023)

[9] Lynam, D. (2022) "What are the principle arguments advanced by the non-human rights project (NHRP) for recognition of animal personhood?" Queen's University Belfast Student Law Journal, Issue 7. [Online] Available at https://blogs.qub.ac.uk/studentlawjournal/2022/03/30/1107/#:~:text=In%20recent%20years%20animal%20personhood,have%20followed%20a%20similar%20direction. (accessed 08/03/2023)

[10] Naruto v Slater, No. 16-15469 (9th Cir. 2018)

[11] Guerra, A., Parisi, F. and Pi, D. (2022) "Liability for robots I: legal challenges", Journal of Institutional Economics, 18(3), pp. 331–343. doi: 10.1017/S1744137421000825 (accessed 08/03/2023)

[12] Automated and Electric Vehicles Act 2018. legislation.gov.uk. [Online] Available at https://www.legislation.gov.uk/ukpga/2018/18/contents/enacted (accessed 08/03/2023)

[13] The Law Commission (2022) "Automated Vehicles: joint report". [Online] Available at https://s3-eu-west-2.amazonaws.com/lawcom-prod-storage-11jsxou24uy7q/uploads/2022/01/Automated-vehicles-joint-report-cvr-03-02-22.pdf (accessed 08/03/2023)

[14] Consumer Protection Act 1987. legislation.gov.uk. [Online] Available at https://www.legislation.gov.uk/ukpga/1987/43 (accessed 08/03/2023)

[15] ACCC v Trivago [2020] FCA 16; 142 ACSR 338

[16] Moglia, A. et al. (2021) "A systematic review on artificial intelligence in robot-assisted surgery". International Journal of Surgery. [Online] Available at https://www.researchgate.net/profile/Richard-Satava/publication/355581801_A_systematic_review_on_artificial_intelligence_in_robot-assisted_surgery/links/61ad243fca2d401f27cafa07/A-systematic-review-on-artificial-intelligence-in-robot-assisted-surgery.pdf (accessed 08/03/2023)

[17] Morse, S.C. (2019) "When Robots Make Legal Mistakes". Oklahoma Law Review, Volume 72, Number 1. [Online] Available at https://digitalcommons.law.ou.edu/cgi/viewcontent.cgi?article=1382&context=olr (accessed 08/03/2023)

[18] Pallotta, N. (2018) "Though Denied by New York Court of Appeals, Habeas Corpus Claim for Chimpanzees Prompts Reflection". Animal Legal Defense Fund. [Online] Available at https://aldf.org/article/though-denied-by-new-york-court-of-appeals-habeas-corpus-claim-for-chimpanzees-prompts-reflection/ (accessed 08/03/2023)

[19] Negri, S. (2021) "Robot as Legal Person: Electronic Personhood in Robotics and Artificial Intelligence". Frontiers in Robotics and AI, Vol. 8. [Online] Available at https://www.frontiersin.org/articles/10.3389/frobt.2021.789327/full#:~:text=The%20ascription%20of%20legal%20personhood,refers%20to%20a%20human%20being. (accessed 08/03/2023)
