
Criminal Liability

Artificial intelligence is … robots can malfunction and cause harm. The concept of punishment
is too vague for a robot to understand. Robots can be coded, marketed and employed to
commit international crimes, and such crimes would be executed with near perfection and
without collateral loss.

Self-learning robots will soon become a societal standard, yet they may get out of control and
cause damage. Questions then arise about duty of care, responsibility and harm: who should be
held responsible, the manufacturer, the trader, or the AI technology itself, to which the concepts
of criminal liability and retributive punishment hold no meaning, since a modern intelligent
agent arrives at decisions by evaluating its own conclusions? It is a modern dilemma; after all,
the agent is a combination of software and hardware.

There is no international law governing the liability of robots; municipal law must therefore
develop and adapt the model of criminal legislation to control the criminal liability of robots.
This does not hinder the growth of artificial intelligence but takes necessary precautionary
measures for the betterment of society.

In an interview, Sophia claims to have been given a better hand on rights than any other Saudi
woman because she does not possess any human error or ambiguity. Questions were addressed to
Sophia relating to destruction caused by robots, which she dodged, stating that "you have been
reading too much of Elon Musk or watching too many Hollywood movies."1

Both military and commercial robots will in the future incorporate ‘artificial intelligence’ (AI)
that could make them capable of undertaking tasks and missions on their own. In the military
context, this gives rise to a debate as to whether such robots should be allowed to execute such
missions, especially if there is a possibility that any human life could be at stake.2

Elon Musk and the other co-founders of the Google AI subsidiary DeepMind have signed a
pledge promising not to develop "lethal autonomous weapons." The pledge warns that weapon
systems that use AI to "[select] and [engage] targets without human intervention" pose moral and
pragmatic threats. Morally, the signatories argue, the decision to take a human life "should never
be delegated to a machine." On the pragmatic front, they say that the spread of such weaponry
would be "dangerously destabilizing for every country and individual."3

1 https://cic.org.sa/2017/10/saudi-arabia-is-first-country-in-the-world-to-grant-a-robot-citizenship/ (last visited May 8, at 2:39 pm)
2 https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf

A robot nonetheless cannot be conscious of its freedom, cannot understand itself as an entity
with a past and a future, and certainly cannot grasp the concept of having rights and obligations. 4
Even robots that are able to learn do not have a conscience and do not reflect upon whether their
actions are good or bad; hence robots cannot be regarded as free agents and cannot be held
personally liable for the harm they cause.

The issue arises whether robots can be punished or blamed for their coding or evolution if they
are party to criminal activity. It is clear that sanctions are geared towards human beings; they
are neither meant nor designed to punish non-human entities. Even when the veil of corporate
legal personality is pierced, the sanction is not designed to affect the corporation as such but
rather those human beings who have an interest in the financial well-being of the corporation. It
would be difficult to apply the same concept to an intelligent agent, as any fine imposed on it
would need to be paid by its legal owner.5 For a power-driven machine, even the rarest-of-rare
scenario of a death penalty could be contemplated, yet the robot does not possess the sort of
self-awareness or self-reflection that could make it a possible target of blame. If we accept that
the nexus between criminal and crime ends with the criminal's death, then what of machine
liability after the death of the owner, and, ipso facto, what of liability where a person owns
multiple agents?

Meanwhile, reflections on the liability, including criminal liability, of AI make sense only if
mankind retains control over it. How reasonable it is to doubt this is indicated by individual
statements. Thus, James Barrat says that the final stage of creating intelligent machines, and
later machines that are more intelligent than humans, will be not their integration into our lives
but their victory over us.6

3 https://www.theverge.com/2018/7/18/17582570/ai-weapons-pledge-elon-musk-deepmind-founders-future-of-life-institute
4 Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231 (1992); Andreas Matthias, Automaten als Träger von Rechten (2d ed., 2010).
5 Bert-Jaap Koops, Mireille Hildebrandt & David-Olivier Jaquet-Chiffelle, Bridging the Accountability Gap: Rights for New Entities in the Information Society?

AI liability will only make sense if mankind has control over it; a schedule should be added to
the legislation specifying the applicable laws, the sanctions to be passed, and the countries that
are to be signatories.

The personhood of artificial agents could be a means to shield humans from the consequences of
their conduct. In light of the International Tin Council case before the House of Lords in
October 1989, "the risk is that electronic personality would shield human actors from
accountability for violating the rights of other legal persons, particularly human or corporate."7

This raises the question, 'Can computers make contracts?' Article 12 of the United Nations
Convention on the Use of Electronic Communications in International Contracts points to a need
for more comprehensive legislation on the subject. An explanatory note by the UNCITRAL
Secretariat on the matter clarifies that messages from such automated systems should be regarded
as 'originating' from the legal entity on behalf of which the message system or computer is
operated. This circles back to the debate over giving AI entities a legal personality.8

A more serious difficulty in imposing liability upon bodies corporate arises from the following
consideration. The wrongful acts so attributed by the law to fictitious persons are in reality the
acts of their agents. Now we have already seen that the limits of the authority of these agents are
determined by the law itself, and that acts beyond these limits will not be deemed in law to be the
acts of the corporation.9

Turning to the legal personhood of AI robots, we should recognize that granting someone, or
something, legal personhood is, and always has been, a highly sensitive political issue. In
addition to rivers in New Zealand and India, or the entire ecosystem in Ecuador, consider the
legal jungle of the status, or condition, of individuals as legal members of a state, e.g., people’s
citizenship. As shown by the legal condition of Turks in Germany, of some Brazilian football
players in Italy, or of young immigrants in the US, this is the realm of political discretionary
power that sometimes turns into simple chaos, or mere sovereign arbitrariness. The recent case of
Saudi Arabia enrolling Sophia as a citizen of its own is hence unsurprising. It reminds us of
Suetonius' Lives of the Twelve Caesars (121 AD), in which we find Caligula planning to make
his horse, Incitatus, a senator, where "the horse would invite dignitaries to dine with him in a
house outfitted with servants there to entertain such events".

From Incitatus to Sophia, the paper has stressed the normative reasons according to which
we can evaluate whether granting legal personhood makes sense, or turns out to be a simple
matter of sheer chance and political unpredictability. In the case of legal persons, such as
corporations, political decisions have to do with matters of efficiency, financial transparency,
accountability, and the like. In the case of human fellows, the reference is to their dignity,
consciousness, intrinsic worth, and so forth. Certainly, we cannot prevent on this basis the odd
decisions of legislators making robots citizens, or horses senators. Yet, from Caligula's horse to
today's Sophia, basic legal principles make clear when political decisions on "persons" are
incongruous, so that courts may one day overturn them for having no rational basis.10

6 James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era (Moscow: Alpina Non-fiction, 2015), 304 pp.
7 J.J. Bryson, M.E. Diamantis & T.D. Grant, Of, for, and by the People: The Legal Lacuna of Synthetic Persons, Artif. Intell. Law 2017, 23, 273–291.
8 http://www.nishithdesai.com/fileadmin/user_upload/pdfs/Research_Papers/Artificial_Intelligence_and_Robotics.pdf
9 John H. Salmond, Jurisprudence on the Theory of Law, p. 355.

Searle tells us what he is up to when he writes in his essay, "What we wanted to know is
what distinguishes the mind from thermostats and livers." He distinguishes minds from
thermostats by simply asserting that "The study of minds starts with such facts as that humans
have beliefs, while thermostats, telephones, and adding machines don't." So it is an item of
dogma that humans have beliefs and machines don't. So much for the argument against
thermostats! Further, it is an equally dogmatic claim that humans have intentionality, and
machines get finished off with the assertion that, "Whatever else intentionality is, it is a biological
phenomenon." Thus, given his original statement about "thermostats and livers", what he has left
to do after these claims is to distinguish the mind from the liver. This task he accomplishes in an
equally summary fashion, with "only something that has the same causal powers as brains can
have intentionality." In short, Searle can be understood as follows:
1. Humans have intentionality,
2. Intentionality results from the causal powers of biological brains,
3. Machines don't have biological brains; so therefore
4. Machines don't have intentionality.11

10 https://www.mdpi.com/2078-2489/9/9/230/htm
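As an aside, the logical skeleton of this reconstruction is a straightforward modus tollens, and its validity can be sketched formally. The sketch below is only an illustration of the argument's form; the predicate names `Intentional` and `HasBiologicalBrain` are introduced here for the purpose and are not Searle's own terminology.

```lean
-- A minimal Lean 4 sketch of the reconstructed argument.
-- Premise 2 is read as: intentionality requires a biological brain.
variable (Agent : Type)
variable (Intentional HasBiologicalBrain : Agent → Prop)

example
    (p2 : ∀ a, Intentional a → HasBiologicalBrain a)  -- premise 2
    (m : Agent)                                       -- some machine
    (p3 : ¬ HasBiologicalBrain m)                     -- premise 3
    : ¬ Intentional m :=                              -- conclusion 4
  fun h => p3 (p2 m h)
```

The formalization shows only that the conclusion follows from the premises; as the passage argues, the premises themselves are asserted rather than defended.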

Legal Personality

The need of the hour is to discuss AI that has been given legal personhood.

Artificial intelligence cannot be treated on par with a natural person, as it lacks:

(1) A soul
(2) Intentionality
(3) Consciousness
(4) Interest
(5) Free will

Hanson Robotics made Sophia, a social humanoid robot that was granted citizenship by
Saudi Arabia in 2017.

The debate now turns to who will be held liable for such activity, as in the case of the Uber
self-driving car accident in which a person lost their life.

Effect on citizenship in India

See http://www.nishithdesai.com/fileadmin/user_upload/pdfs/Research_Papers/Artificial_Intelligence_and_Robotics.pdf, p. 15.

11 Tracy B. Henley, Natural Problems and Artificial Intelligence, Behavior and Philosophy, Vol. 18, No. 2 (Fall/Winter 1990), pp. 43–56, at 46.
