
Students are required to write an article review in their own words and thereby identify the problems that prompt deeper reflection on the conferment of legal personality in the case of AI.

Articles on Personhood of Artificial Intelligence and Robotic beings.

In particular, students are required to make an appraisal of these two articles and correctly identify the arguments that the authors of these research papers endeavour to convey in favour of giving personhood to AI, as well as the arguments against it.

1) Introduction: wherein students are required to identify the necessity or problems that make it necessary to think in terms of conferring legal personality on AI and robotic beings, as discussed in these papers.
2) Arguments used by the authors of the articles that support the idea of giving legal recognition to the personhood of non-biological intelligence, i.e. AI and robotic entities.
3) Arguments used by the authors of the articles that point to the limitations of giving such legal recognition to the personhood of non-biological intelligence, i.e. AI and robotic entities.
4) Concluding remarks: students may point out here, in their own view, which set of arguments seems more persuasive. They may also highlight any new dimension they wish to raise that is not addressed in the research papers.

INTRODUCTION

The word 'person' is derived from the Latin term 'persona', which refers to those who are recognized by law. As such, persons are the subjects or bearers not only of rights but of duties too. Salmond explained this by stating that a person is anyone whom the law regards as capable of rights and duties; accordingly, a natural person is a human being considered by the law to be capable of rights and duties. A body recognized by the law as entitled to rights and duties in the same way as a natural or human person is, therefore, what these two articles are concerned with, i.e. granting legal recognition to non-natural beings. Legal personality is in essence granted and recognized by law to all human beings, and, being an artificial creation of the law, legal personality may also be conferred on entities other than individual human beings.

Salmond observed that while all legal personality involves personification, the converse is not true. For Salmond, legal persons are beings other than human beings to whom the law attributes legal personality. A juristic person is a legal entity, often a collection of persons, which is able to perform legal acts as a distinct identity for different purposes. A juristic person is entitled to legal protection of its rights and duties, with the exception of some that may only be enjoyed or incurred by a natural person. Legal persons are also called juristic or artificial persons; examples include corporations, companies, idols, the RBI, the UPSC, registered societies and trade unions, and the law likewise deals with the legal status of entities such as idiots, the dead and unborn persons.

In essence, these articles deal with legal fictions built around artificial objects (robots, clones, and bioengineered humanoids) and with creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, with electronic personality possibly applying to cases where robots make autonomous decisions or otherwise interact with third parties independently. Creating such a complex criterion requires considerable advancement and a certain degree of responsiveness to the criterion in the case of robots and similar future entities.

However, the purpose of recognising an entity as a juristic personality flows from the fact that the entity in question has the potential to become an inseparable part of human life in the future. This is the case with non-natural, man-made robots and their artificial intelligence. Such an involvement always comes with side-effects, and it is in those side-effects that the various necessities for granting juristic recognition lie. These calls exist not to romanticise futures that are nigh unattainable, but to emphasise that the ethical treatment of other beings is essential to the human experience. The fact that ethics exists as an academic discipline shows how integral moral behaviour is to us. As such, it is equally crucial that humanity considers how non-biological entities are treated on both a legal and a moral basis, given their interaction with human society.

Though such AI remains science fiction for the present, it invites consideration of whether legal status could shape or constrain behaviour if or when humanity is surpassed. The first reason for conferring such status is that there is then someone to blame when things go wrong. This is presented as the answer to the potential accountability gaps created by the speed, autonomy, and opacity of these systems. The gist lies in the fact that, in the long run, at least the most sophisticated autonomous robots could be established as having the status of responsible electronic persons, with electronic personality possibly applying to cases where robots make autonomous decisions or otherwise interact with third parties independently.

The authors proceed with a word of caution: this is an anticipatory approach rather than a response to a present, immediate need. Such contextual fiction dwells on artists' conceptions of the human condition, and the contexts in which that condition might or might not be altered. Yet human-like artefacts are no longer fiction, and humanity is now confronted by the very real legal challenge of a supranational entity considering whether to attribute legal personality to purely synthetic intelligent artefacts.

ARGUMENTS IN SUPPORT

The question whether an entity should be considered a legal person is reducible to other questions about whether or not the entity can and should be made the subject of a set of legal rights and duties. The particular bundle of rights and duties that accompanies legal personhood varies with the nature of the entity. The question is nonetheless of some interest. Cognitive science begins with the assumption that the nature of human intelligence is computational, and therefore that the human mind can, in principle, be modelled as a program that runs on a computer.

Artificial intelligence entities should be treated as legal personalities, thereby making them responsible in law in a manner similar to that of companies. One may have serious reservations about comparing corporations with artificial intelligence machines, but the common thread lies in the distinct recognition of their juristic personality. Drawing an analogy from the same, the recognition of artificial intelligence as a juristic personality serves the same end as it does for corporations, namely to keep liability from falling entirely on an individual's shoulders, thereby motivating people to engage in further commercial activity without the fear of being penalised in the short or long term.

In the same vein, the idea of individuality ought to be extended to artificial entities, as it is to corporate bodies, because this has the potential to adapt the present system as well as to tackle the approaching challenges presented by artificial general intelligence and non-biological intelligence. Such a move is substantially beneficial for the substantive laws of countries, since no major change is involved that would require difficult legislative amendments and insertions. One concern that pertinently worries developers of artificial general intelligence and non-biological intelligence is the liability arising from its actions. Once computing develops to a stage where it begins to truly think, it will be engaged in many tasks. Such engagement in tasks will inevitably give rise to criminal and civil liability; for illustrative purposes, what if a human-like bot enters into a contract that is not fulfilled owing to its limitations, then who shares or owns the liability?

The intensity of the above question deepens when the liability is enlarged, say multiplied a thousandfold. The legitimate question arises of who will be held liable for these actions of the computer: can its owner or its developers be made to bear those liabilities entirely even though they never intended such an act? The authors of the articles explain one stance, elucidated with an example, of who could be held responsible for a car with autopilot when the driver stops on a highway to check a flat tyre and the autopilot overrides the instructions, crashing the car and killing the driver. Accountability cannot in this instance be fixed upon the tangible machine, but it can be fixed on the developer of the autopilot even though he had no intention of killing the driver in the first place. Such a system seems repressive at first, but when it is considered from the third party's point of view, that of the driver and his family, the repression seems justified. However, we can rise above this concept, and it is from this angle that granting juristic personality to artificial general intelligence and non-biological intelligence seems justified.

However, if such intelligence is recognised as a legal entity, it may be held chargeable for its own actions. In this case, the autopilot itself may be held responsible, which will save the developers from liability. Granting juristic personality also brings with it the possibility of correction, i.e. via corrections to the algorithms. This will save the innocent developers of the AI, as well as its owners, from liability arising from an act they never intended, and will promote the development of the AI field, since it will prevent the discouragement of AI developers and users and encourage further innovation in the field of artificial intelligence.

Moreover, having developed weak AI, scientists are also trying to develop strong AI, which will be sentient; such entities will be unique like humans and therefore must have their own identity. These machines will have emotional intelligence, which will blur the line demarcating humans from machines. They will be unique both in their capacity to perform any work and in the pattern in which they perform a task. They may even demand basic rights to facilitate their well-being. Granting legal personhood to artificial intelligence will not only ensure that our current legal system is prepared for technological change, but will also ensure that our interactions with these artificially intelligent beings are harmonious and benefit human beings.

However, this might give offenders a shield from the system in the form of AI, and they might treat the legal personality of AI as a statutory privilege to commit an offence.

In such a state of affairs we can once again draw an analogy from the legal individuality of companies. As with companies, if someone is found to be taking unfair advantage of the legal personality of the corporation, the courts pierce the corporate shield and hold that person responsible. This doctrine of piercing the corporate veil may likewise be adopted in case somebody uses AI as a means to satisfy his own ulterior motives or to save himself from criminal liability.

Over the years many precedents have been established; a pertinent example is the case of the "computer raped by telephone", which was widely covered by the media. In this case, a computer programmer broke into a computer to steal private data by using a telephone link. During the investigation a search warrant was issued against the computer for examination of its data and components. This was the first case in which the world witnessed a machine being treated like a legal person. At this juncture it becomes very important to take into consideration what Dr Martin Luther King said: "The arc of the moral universe is long, but it bends towards justice."

ARGUMENTS AGAINST

When a technological advancement is debated, its severe consequences are brought under critique, because if artificial intelligence systems do eventually match human intelligence it seems unlikely that they would stop there. As for the prospect of AGI surpassing human capabilities, the kind of speciesism used to engage in such rationalisations of the status quo echoes older legal forms that kept property relations in their rightful place.

In the face of many unknown unknowns, two broad strategies have been proposed to
mitigate the risk. The first is to ensure any such entity can be controlled, either by
limiting its capacities to interact with the world or ensuring our ability to contain it,
including a means of stopping it functioning: a kill switch. Assuming that the system
has some kind of purpose, however, that purpose would most likely be best served by
continuing to function. In a now classic thought experiment, a superintelligence tasked
with making paperclips could take its instructions literally and prioritise that above all
else.

Though most serious researchers do not presently see a pathway even to general AI in the near future, there is a rich history of science fiction presaging real-world scientific innovation. Taking Nick Bostrom's definition of superintelligence as an intellect that greatly exceeds human cognitive performance in virtually all relevant domains, it is at least conceivable that such an entity could be created within the next century.

The risks associated with that development are hard to quantify. Though a malevolent
superintelligence bent on extermination or enslavement of the human race is the most
dramatic scenario, more plausible ones include a misalignment of values, such that the
ends desired by the superintelligence conflict with those of humanity, or a desire for
self-preservation, which could lead such an entity to prevent humans from being able
to switch it off or otherwise impair its ability to function.

A true superintelligence would, moreover, have the ability to predict and avoid human interventions, or to deceive us into not making them. It is entirely possible that efforts focused on controlling such an entity may bring about the very catastrophe they are intended to prevent. This problem is addressed by the second strategy, which is to ensure that any superintelligence is aligned with our own values, emphasising not what it could do, but what it might want to do.

The arguments here are complex, and many are variations on the android fallacy, based on unstated assumptions about the future development of AI systems for which personality would be not only useful but deserved. At least for the foreseeable future, the better solution is to rely on existing categories, with responsibility for wrongdoing tied to users, owners, or manufacturers rather than to the AI systems themselves.

CONCLUSION

