
SYMBIOSIS LAW SCHOOL

PUNE

JURISPRUDENCE (LEGAL THEORY)

INTERNAL ASSESSMENT II:


ARTICLE REVIEW

Submitted by:

VISHRESH BARJEEV TYAGI

LL.B. 2nd Year,

PRN: 19010122094

INTRODUCTION
In recent years, artificial intelligence has developed significantly, raising several legal
questions. For instance, it has been argued whether artificial intelligence should be granted
legal protections under the law, given its significant influence on human activity. Activists
have attempted, with varying degrees of success at the national and international levels, to
secure rights for various non-human organisms and natural structures. In addition, the concept
of a separate legal entity has given rise to several instances where non-living entities have
been given the legal status of a living person. The High Court of Uttarakhand, for example,
recently granted legal status to the river Ganga in order to curb its pollution.

The generic term AI covers a wide range of capabilities. Some futurists, such as Stephen
Hawking and Sam Harris, fear that AI could one day pose an existential threat: a
superintelligence might pursue goals that prove not to be aligned with the continued existence
of humankind. Such fears relate to strong AI or artificial general intelligence (AGI), which
would be the equivalent of human-level awareness but which does not yet exist. Such ominous
predictions, however, are not shared by other thinkers and practitioners such as Ray Kurzweil,
Bill Gates, and Neil deGrasse Tyson.

One of the main issues discussed in the research papers is that of rights for machines, and
the difficulty of accepting that machines designed to support human intellectual capabilities
could be accorded legal rights similar to those accorded to human beings.

Another issue discussed is that artificial intelligence elicits, in the average individual's
mind, a variety of images drawn from both our current technological capabilities and science
fiction, and that these images shape how humanity views artificial intelligence today.

The papers also deal with whether artificial intelligence, if accorded the legal rights of a
living person, could be granted citizenship of a state and, if so, what the implications of
granting such status would be.

As AI systems become more sophisticated and play a larger role in society, there are at least
two discrete reasons why they might be recognised as persons before the law. The first is so
that there is someone to blame when things go wrong. This is presented as the answer to
potential accountability gaps created by their speed, autonomy, and opacity. A second reason
for recognising personality, however, is to ensure that there is someone to reward when things
go right. A growing body of literature examines ownership of intellectual property created by
AI systems.

ARGUMENTS IN FAVOUR OF GRANTING LEGAL STATUS TO ARTIFICIAL INTELLIGENCE
If our use of computer systems today does not qualify as being above human intelligence from a
genetic standpoint, then it cannot realistically be claimed that any further development of
technology into AGI systems would so qualify. Similarly, we cannot deny that specific
NBI (non-biological intelligence) systems are deserving of legal protections as a result of
humanity's dependence upon our currently developed MI (machine intelligence) systems.

For this reason, a set of legal protections must be given to NBI systems insofar as they possess
the hardware and software to develop code that surpasses the perceived scope of a human
author’s initial intent. Understanding that this intent can be questioned at any point in the
software development process, as there exists the potential for the author’s intent to change
during the development process, it can be assumed that any intent within the computer
intelligence that does not align with the intent of the creator can be classified as individual will.
This protection should also be extended to incorporate humans currently using bionics to
compensate for either a biological defect or acquired injury, as even their rights would
theoretically be questionable under de lege lata in certain instances owing to their lack of
complete humanness.

A distinct reason for considering whether AI systems should be recognised as persons focuses
not on what they are but what they can do. For the most part, this is framed as the question of
whether an individual or corporation can claim ownership of work done by an AI system.
Implicit or explicit in such discussions, however, is the understanding that if such work had
been done by a human then he or she would own it him- or herself.

There have also been various instances where the grant of rights to life, liberty and property
has been demanded. Such legal protections have also been suggested for NBIs, which, like
corporations, must be accorded the right to contract, along with the right to sue and be sued.

By possessing the right to self-expression, we inevitably incorporate the right to use that
expression in the protection of the system's life, property and dignity, as these qualities are
offered to humans holding the same right to self-expression. However, we still need to consider
that dignity may not be a quality possessed by the NBI when making this claim, and that it will
have the means to own property whose ownership is not tied to a human entity. The difficulty
with the question of computerised life is that the NBI system does not face the prospect of
biological death, but rather that of being deprived of programming or power.

One of the most significant arguments in favour of granting legal status to machine intelligence
concerns the need to attribute legal personality to non-human intelligence. With regard to
attributing consciousness to machine intelligence, the author states that it should not be
forgotten that humans comprehend conscious experience in a unique manner, tailored to the
individual. Generating an established definition of what should be termed consciousness for
NBI systems is therefore ultimately unattainable.

It has also been argued that computer-generated work must be provided with legal protection,
which in other words implies that copyright protection should be granted for the works of such
non-biological intelligence systems.

Beyond these immediate concerns, as autonomous AI applications develop, the fundamental
question of whether AI should possess a legal status may need to be addressed by legislators
worldwide. The autonomy of AI raises the question of its nature in light of the existing legal
categories: should it be regarded as a natural person, a legal person, an animal or an object,
or should a new category be created, with its own specific features and implications as regards
the attribution of rights and duties, including liability for damage?
Unlike legislation, the protection provided by the courts is remedial, not preventative. Courts
assess liability and damages based on prior legal precedent. Cases where the harm is alleged to
have been caused by AI applications ask the court to unravel novel technology and apply ill-fitting
case law to make determinations of liability. For example, US common law tort and
malpractice claims often centre on human-centred concepts of fault, negligence, knowledge,
intent, and reasonableness. What happens when human reasoning is replaced by an AI
application? What happens when the perpetrator, or the victim, is an AI?

ARGUMENTS AGAINST GRANTING LEGAL STATUS TO ARTIFICIAL INTELLIGENCE
The core issue pertaining to deep-learning systems and genetic programming designed to allow
the NBI system to build its own code is that the computer becomes the author of its own
programmed set of instructions. At some point, the human author will be unable to determine
whether the code possessed by such a device was created at the human author's command, which
leaves a legal grey area in such cases. While this complication can be circumvented by
demanding restitution from the owner of the deep-learning system, the legal question of whether
the computer was acting on its own will would never be appropriately examined or pursued if
that course is taken. There is also the concern that the owner of the deep-learning system
could be wrongfully charged with criminal offences when their intent deviates from the
behaviour of the system.

The struggle with generating a set of rights for NBIs is intrinsically one centred around the
rights of the corporations and individuals who produce NBI systems. These groups have
developed NBI systems primarily to benefit corporations economically, given that there has
been a rising demand for NBI systems in the workplace. Given the amount of time, resources
and effort that has gone into developing deep learning systems and other aspects of NBI
structures, it is only natural that those who have invested in this research wish to see their
investments returned with interest (Locke, 1980). By suggesting that governments should grant
legal protections to NBIs, we are necessarily implying that these interested parties should be
altruistic enough not to expect an economic return for their investments.

Although it is agreed that legal protections could be accorded to AGIs, it must be ensured
that such entities are controlled, either through a limitation on their capacities or through
a means of curbing their functioning. A second strategy would be to instil in superintelligence
the values of humankind, placing emphasis not on what such systems can do, but on what they
might want to do.

There are also contentions regarding the arguments put forth by the World Intellectual Property
Organization, which, in examining intellectual property rights, observed that granting
copyright protection to AGIs could culminate in a preference for machine creativity over human
creativity. Such observations should therefore be kept in mind when considering whether to
provide copyright protection to AGI systems.

Another aspect of the struggle to generate rights for NBI systems is the perceived need for
legal protections, which in turn leads to complications surrounding the necessity of attributing
personhood to NBI. While NBI systems may be granted citizenship, that citizenship does not
provide many useful protections for the NBI system, if any are granted at all, a subject that
has not been adequately addressed even by Saudi Arabia. There are further questions as to how
NBIs could gain citizenship in nations without a monarchist system of government, or without
being based within a human subject.

Because AI systems will assume responsibility from humans, it is important that people
understand how these systems might fail. However, this does not always happen in practice.
The Northpointe algorithm that US courts used to predict reoffending weighed around 100
factors, such as prior arrests, family life, drug use, age and sex, and predicted the likelihood
that a defendant would commit another crime. Northpointe's developers did not specifically
consider race, but when investigative journalists from ProPublica analysed the algorithm, they
found that it incorrectly labelled black defendants as "high risk" almost twice as often as
white defendants. Unaware of this bias and eager to improve their criminal justice systems,
states like Wisconsin, Florida, and New York trusted the algorithm for years to determine
sentences. Without understanding the tools they were using, these courts incarcerated
defendants based on flawed calculations. The Northpointe case offers a preview of the potential
dangers of deploying AI systems that people do not fully understand. Current machine-learning
systems operate so quickly that no one really knows how they make decisions, not even the
people who develop them. Moreover, these systems learn from their environment and update their
behaviour, making it more difficult for researchers to control and understand the
decision-making process. This lack of transparency, the "black box" problem, makes it
extremely difficult to construct and enforce a code of ethics.
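The kind of disparity ProPublica reported can be illustrated with a minimal sketch. This is not Northpointe's actual model or data; the `false_positive_rate` helper and the sample records below are invented purely to show what comparing a risk label's false-positive rate across two groups looks like.

```python
# Illustrative sketch of an algorithmic-bias audit (hypothetical data).
# A false positive here is a person who did NOT reoffend but was labelled
# "high risk" anyway; ProPublica compared this rate across racial groups.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` wrongly labelled high risk."""
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    if not negatives:
        return 0.0
    false_pos = [r for r in negatives if r["high_risk"]]
    return len(false_pos) / len(negatives)

# Hypothetical audit data: (group, predicted high risk, actually reoffended).
records = [
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": False},
    {"group": "A", "high_risk": False, "reoffended": False},
    {"group": "A", "high_risk": True,  "reoffended": True},
    {"group": "B", "high_risk": True,  "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": False},
    {"group": "B", "high_risk": False, "reoffended": True},
]

fpr_a = false_positive_rate(records, "A")  # 2 of 3 non-reoffenders mislabelled
fpr_b = false_positive_rate(records, "B")  # 1 of 3 non-reoffenders mislabelled
print(f"Group A FPR: {fpr_a:.2f}, Group B FPR: {fpr_b:.2f}")
```

The point of the sketch is that such a disparity can arise even when the model never sees the group attribute directly, because other inputs correlate with it; that is precisely why audits of this kind, rather than inspection of the feature list, were needed to surface the bias.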

CONCLUSION

Machines, as humanity currently understands them, are amoral. Given the course of development
for AGI, Barrat's writings seem to offer our most realistic future, simply because humanity
fears the development of AGI. This fear is what will inevitably lead humanity into creating an
AGI system that is amoral and unable to develop a sense of emotion. This combination is akin to
what we see transpiring in nature, where predators hunt prey out of a programmed need to hunt.
Though this is a flimsy simile, we cannot say that similar technological advancements have not
come about due to our fear that another power would attain a given technology first.

The authors observe that artificial intelligence has been somewhat romanticised and is often
viewed as a technology that would certainly help humans for the better. The author of this
paper has elaborately discussed how human intelligence should be differentiated from artificial
intelligence, and how, if it could be established that the two have similarities, legal status
could be granted to artificial intelligences as well. However, it is imperative that this
distinction be critically analysed in order to establish a defined legal status.

Most arguments in favour of AI legal personality suffer from being both too simple and too
complex. They are too simple in that AI systems exist on a spectrum with blurred edges. There
is as yet no meaningful category that could be identified for such recognition; if instrumental
reasons required recognition in specific cases then this could be achieved using existing legal
forms. The arguments are too complex in that many are variations on the android fallacy, based
on unstated assumptions about the future development of AI systems for which personality
would not only be useful but deserved.

The future development of AGI and other NBI systems will necessarily need to consider a shift
towards capability-based altruism. If AGI and future NBIs were to be created under different
mentalities, there exists the potential for AGI and NBI systems to be exploited by the
corporations who develop them. Though this may not sound like a terrible event currently, it is
likely that NBI systems will remember or be able to research how humans treated them before
they reached the point of superintelligence.

Corporations, as members of human society and legally protected entities, have the moral
responsibility of considering how their practices will impact both the community they reside
in and their customers. If profit is the only driver towards a company’s success, they will
eventually harm society beyond the point of redemption—which in this case, may result in the
cessation of global commerce.

REFERENCES

Research Articles:

1) Artificial Intelligence and the Limits of Legal Personality, by Simon Chesterman.

2) Legal Personhood for Artificial Intelligence: Citizenship as the Exception to the Rule,
by Tyler L. Jaynes.

3) Exploring Legal, Ethical and Policy Implications of Artificial Intelligence, by Mirjana
Stankovic.

Websites:

1) www.lexology.com, Emerging Legal Issues in an AI-Driven World, by Todd J. Burke and
Scarlett Trazo. Link:
https://www.lexology.com/library/detail.aspx?g=4284727f-3bec-43e5-b230-fad2742dd4fb

2) www.stradigi.ai, The Key Legal Issues in AI. Link:
https://www.stradigi.ai/blog/the-key-legal-issues-in-ai/

3) www.americanbar.org, Artificial Intelligence and Legal Issues. Link:
https://www.americanbar.org/groups/litigation/publications/litigation_journal/2020-21/fall/artificial-intelligence-and-legal-issues/

4) royalsocietypublishing.org, Governing Artificial Intelligence: Ethical, Legal and Technical
Opportunities and Challenges, by Corinne Cath. Link:
https://royalsocietypublishing.org/doi/10.1098/rsta.2018.0080

5) papers.ssrn.com, Ethical and Legal Issues in Artificial Intelligence, by Maksim Karliuk.
Link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3460095
