
Robots’ Legal Personality

08 Mar 2017 / Oxford Business Law Blog

Horst Eidenmüller

Freshfields Professor of Commercial Law

St Hugh's College

Smart robots appear capable of purposive actions, and they exhibit ‘moral agency’: they
seem to understand the consequences of their behaviour, and they have a choice of
actions. Further, because of machine learning, manufacturers of smart robots often
cannot fully foresee how those robots will behave. Should we, therefore, accord smart robots legal
personality?

This would have significant practical consequences. For example, think of a fully
autonomous self-driving car that causes an accident. Should we hold the car liable
instead of its owner or driver? If so, then we might conclude that the car should also
have the power to acquire property, conclude contracts, etc.

Societies will answer these fundamental and difficult questions differently, depending
on their respective ‘deep normative structure’. By ‘deep normative structure’, I refer to
the shared value judgments and conceptions that shape the social fabric of a particular
society. If this structure is utilitarian, according smart robots legal personality does not
seem to be utopian.

To begin with, smart robots act much as humans would in similar situations. Second,
we treat anthropomorphic robots like humans. Should we not then give them rights, to
send a signal against the mistreatment of humans generally? Third, we accord
corporations legal personality. There is a long-running debate amongst legal scholars
as to whether corporate personality is based on a fiction (as von Savigny argued) or
whether there is something ‘real’ about corporations, something that could be called a
‘real group-person’ (as von Gierke argued). Smart robots seem no less real than
corporate persons.

Most of us probably feel uneasy when considering whether smart robots should be
accorded legal personality. The case against treating robots like humans rests on
two arguments, one epistemological and one ontological. The epistemological argument
holds that legal personhood is tied to humans because only humans understand the
meaning of rights and obligations. Thinking is more than formal symbol manipulation
(syntax); it involves sensitivity to the meaning (semantics) of those symbols, and robots
lack it. Robots can be programmed to conform to rules, but they cannot follow rules:
rule-following presupposes an understanding of what those rules mean, and robots are
not capable of such understanding. Robots are not active in the discipline of
hermeneutics, and they never will be.

The second argument against treating smart robots like humans is an ontological one.
The laws of a particular society in general, and the rights and obligations accorded to
members of that society in particular, are an expression of the ‘human condition’. Laws
reflect what we believe is a precondition for an orderly interaction between humans. But
laws also reflect what we believe lies at the heart of humanity, at the heart of what it
means to be human. Just think of fundamental human rights in general and of freedom
of expression and speech in particular. But also think of such controversial issues as
abortion or same-sex marriage. It would, simply and literally, dehumanise the world
if we were to accord machines legal personality and the power to acquire property and
conclude contracts, even though such machines may be smart, possibly even smarter
than we humans. So treating robots like humans would dehumanise humans.

Is corporate personhood a good counter-argument, weakening the case against treating
robots like humans? I don’t believe so. Corporations always act ‘through’ humans:
humans sit on the boards whose actions are attributed to the corporation. It is true that
artificial intelligence is becoming ever more important in firms as well. At the same
time, final decisions are still taken by humans, at least for the time being.

Be that as it may, it seems clear that the question of robots’ legal personality raises
deep philosophical problems, and robot law will be shaped by what I have called the
‘deep normative structure’ of a society. It very much matters whether a society is based
on a utilitarian conception of ‘the good’ or rather on a humanitarian/Kantian vision
according to which not everything that is utility-maximizing is necessarily the better
policy. In any event, a utilitarian conception of ‘the good’ will tend to move a society
in a direction in which robots eventually take a fairly prominent role, by virtue of the
law.

Horst Eidenmüller is the Freshfields Professor of Commercial Law at the University of
Oxford.
