Gabriel Hallevy
to my dear family
And God said, let us make man in our image, after our
likeness: and let them have dominion over the fish of the sea,
and over the fowl of the air, and over the cattle,
and over all the earth, and over every creeping thing that
creepeth upon the earth. So God created man in
his own image, in the image of God created he him;
male and female created he them.
—Genesis 1:26–27
Contents

1 The Emergence of Machina Sapiens Criminalis . 1
1.1. The Endless Quest for Machina Sapiens . 1
1.1.1. History and Prehistory of Artificial Intelligence . 1
1.1.2. Defining Artificial Intelligence, and the Endlessness of the Quest for Machina Sapiens . 4
1.1.3. Turning Disadvantages into Advantages: Industrial and Private Use of Artificial Intelligence . 12
1.2. On Evolution and Devolution: Machina Sapiens Criminalis . 14
1.2.1. Serve and Protect: Human Fears of Human-Robot Coexistence . 14
1.2.2. Evolution and Devolution: Machina Sapiens Criminalis as a By-product . 17
1.2.3. Inapplicability of the Zoological Legal Model to ai Technology . 22
1.3. The Modern Offender . 25
1.3.1. Basic Requirements for a Given Offense . 26
1.3.2. Requirements of a Given Offense: Modern Criminal Liability as a Matrix of Minimalism . 29
1.3.3. The Case of Nonhuman Corporations (Round 1) . 34

2 ai Criminal Liability for Intentional Offenses . 38
2.1. The Factual Element Requirement . 38
2.1.1. Conduct . 39
2.1.2. Circumstances . 42
2.1.3. Results and Causal Connection . 43
2.2. The Mental Element Requirement . 44
2.2.1. Structure of the Mens Rea Requirement . 45
2.2.2. Fulfillment of the Cognitive Aspect . 49
2.2.3. Fulfillment of the Volitive Aspect . 56
2.3. Criminally Liable Entities for Intentional ai Offenses . 64
2.3.1. ai Entity Liability . 64
2.3.2. Human Liability: Perpetration-through-Another . 69
2.3.3. Joint Human and ai Entity Liability: Probable Consequence Liability . 75
Closing the Opening Example: Intentional Killing Robot . 82

3 ai Criminal Liability for Negligence Offenses . 84

4 ai Criminal Liability for Strict Liability Offenses . 104

5 Applicability of General Defenses to ai Criminal Liability . 120

6 Sentencing ai . 156

Conclusions . 177
Notes . 179
Selected Bibliography . 213
Index . 241
Tables . xiii
Preface . xv
Preface
The scientific quest for machina sapiens began in the 1940s and early 1950s. Since then, ai entities have become an
integral part of modern human life, functioning in a much more sophisti-
cated way than other common tools. Could these entities become danger-
ous? Indeed, they already are, as the incident I recounted earlier attests.
Artificial intelligence and robotics are making their way into everyday
modern life. And some of the questions I have listed are currently before
the courts or regulators. But there is little comprehensive analysis in the
literature about assessing liability for robots, machines, or software that
exercise varying degrees of autonomy.
The objective of this book is to develop a comprehensive, general, and
legally sophisticated theory of criminal liability for artificial intelli-
gence and robotics. In addition to the ai entity itself, the theory covers the
manufacturer, the programmer, the user, and all other entities involved.
Identifying and selecting analogies from existing principles of criminal
law, the theory proposes specific ways of thinking through criminal li-
ability for a diverse array of autonomous technologies in a diverse set of
reasonable circumstances.
Chapter 1 contains an investigation of some basic concepts on which
the development of subsequent chapters is built. I will address two main
questions.
Because criminal law does not depend on moral accountability, the debate
about the moral accountability of machines is irrelevant to these ques-
tions. Although at times there is an overlap between criminal liability,
certain types of moral accountability, and certain types of ethics, such co-
incidence is not necessary in order to impose criminal liability.
The question of the applicability of criminal law to machines involves
two interrelated, secondary questions: whether ai entities can fulfill the
requirements of criminal liability, and whether human society can punish
them. If the affirmative answers offered in this book are accepted, then a
new social entity has indeed been
created, machina sapiens criminalis. The emergence of machina sapiens
criminalis is examined in chapter 1. Finally, this study answers the ques-
tion of whether artificial intelligence entities could ever be considered
legally, if not morally, liable for the commission of criminal offenses, and
deserving of punishment.
This book focuses on the criminal liability of ai entities and does not
formally venture into the related areas of ethics (including roboethics) and
morality. Nevertheless, the discussion of criminal liability presented here
sets the stage for an in-depth consideration of the ethical issues involved,
which will be necessary given the proliferation of ai entities in everyday
human life. This book constructs a suitable framework for concrete and
functional discussion of these issues.
I wish to thank Dr. Phyllis D. Deutsch of University Press of New En-
gland for her faith in this project from its very beginning, for guiding the
publication of the book from inception to conclusion, and for her useful
comments. I also wish to thank Gabriel Lanyi for his comments, insights,
and disputations. I thank the anonymous readers for their comments. I also
thank Ono Academic College and Northeastern University for their gener-
ous support of this project. Finally, I wish to thank my wife and daughters
for their staunch support all along the way.
—G. H.
When Robots Kill
1
The Emergence of Machina Sapiens Criminalis
A distinction is drawn between machines that act rationally and machines
that think rationally. The latter can reach the right conclusions from correct in-
formation as outsiders, but the former can participate in events, taking the
right action based on correct information. For example, an ai goalkeeper
in a soccer game that sees the ball approaching the goal rapidly can not
only calculate the speed and angle at which the ball is approaching and
determine the action required to block it, but also take the required action
and stop the ball.
The quest for machina sapiens, however, reaches much deeper than
these classifications and definitions; if the quest is successful, it will
answer the question of what makes humans intelligent, and will result in
the design of intelligent machines. Since the late 1980s, the accepted ap-
proach has been that intelligent thinking is identified by certain attributes,
specifically the following five: (1) communication, (2) internal knowledge,
(3) external knowledge, (4) goal-driven conduct, and (5) creativity.35 Let us
discuss each of these attributes.
Communication is considered the most important attribute that defines
an intelligent entity, because we can communicate with intelligent enti-
ties. Note that we communicate not only with humans, but with certain
animals as well, although the range of such communication is narrower
than our range with humans, and not all ideas can be expressed. For exam-
ple, you can let your dog know how angry you are, but you cannot explain
to him the Chinese Room thought experiment. This situation is not differ-
ent from communicating with a two-year-old human. The more intelligent
the other entity, the more complex the communication conducted with it.
Communication assumes proper understanding of the information
contained in it, and can be used to test the ability to understand compli-
cated ideas. But communication does not always attest to the quality of
understanding. Some highly intelligent people, at genius level, are not
good communicators, and some autistic geniuses cannot communicate at
all. At the same time, most normative people have advanced communica-
tion skills, although only a few can communicate about complex matters
such as Einstein’s general theory of relativity, quantum mechanics, parallel
universes, or exponential equations.
Communication can be carried out not only through speech, but by
other means as well, including writing. Therefore, machines can be quite
intelligent even if they lack the ability to speak, similar to mute people. So
although the communication attribute of intelligence is important, there
are many exceptions to it. The Turing test is based on this attribute, but if
human communication is so inaccurate, how can we trust it to identify ai?
Internal knowledge refers to the knowledge of entities about themselves.
When a dog sees a bone behind an obstacle, she plans to bypass the obstacle to get the bone.
Executing the plan is goal-driven conduct on the part of the dog.
Different creatures may have different goals, of differing levels of com-
plexity. The more intelligent the entity, the more complex are its goals.
The goals of dogs are at the level of calling for help when their masters are
in distress, whereas the goals of humans may include landing on Mars.
Computers have the ability to plan the landing on Mars, and robots and
unmanned vehicles are already exploring Mars, searching for data. The
reductionist approach to goal-driven conduct deconstructs a complex goal
into many simple ones; achieving these is considered goal-driven conduct.
Goals and plans for achieving them may be incorporated into computer
programs. But not all humans pursue complex goals at all times and under
all circumstances. The question is, what level of complexity of goals is
needed for a machine to be considered intelligent?
Creativity involves finding new ways of understanding or doing things.
All intelligent entities are assumed to have some degree of creativity. When
a fly tries to get out of the room through a closed window, it crashes against
the windowpane over and over again, trying invariably the same conduct
without success. This is the opposite of creativity. At some point, however,
the fly does seek another way out, which is considered to be more creative.
A dog may find a way out in far fewer attempts; a human in fewer still.
Computers may be programmed not to repeat the same conduct more than
once, and instead to seek other ways of solving the problem. This type of
behavior is essential for general problem-solving software.
Creativity is not homogeneous, and has degrees and levels. Not all humans
are considered to be thinking outside the box, and many perform their daily
tasks exactly the same way day after day. Most lottery players play the same
numbers time after time for years without winning. What makes their
creativity different from that of the fly? The fly’s chance of breaking through the
window is not smaller than that of the lottery player winning the main prize.
Many factory workers perform the same operations for hours every day, and
they are considered intelligent entities. The question is, what is the exact
level of creativity required to be considered intelligent?
Not all humans who are considered intelligent share all five attributes.
Why, then, should we use different standards for humans and machines to
measure intelligence? But each time some new software has succeeded in
conquering a given attribute, the achievement was rejected for not being
real communication, internal knowledge, external knowledge, goal-driven
conduct, or creativity.38 Consequently, new tests were proposed to make
sure that the relevant ai technology was really intelligent.39
Isaac Asimov formulated his three laws of robotics in a collection of science fiction stories assembled under the title I, Robot in 1950: “(1) A
robot may not injure a human being or, through inaction, allow a human
being to come to harm; (2) A robot must obey the orders given it by human
beings, except where such orders would conflict with the First Law; (3) A
robot must protect its own existence, as long as such protection does not
conflict with the First or Second Laws.”62 In the 1950s, these laws were
considered innovative, and restored some confidence to a terrified public.
After all, harming humans was not allowed.
Asimov’s first two laws represent a human-centered approach to safety
in relation to ai robots. They also reflect the general approach that as robots
gradually take on more intensive and repetitive jobs outside industrial
factories, it is increasingly important that safety rules support the concept
of human superiority over robots.63 The third law straddles the border be-
tween human-centered and machine-centered approaches to safety. The
functional purpose of the robots is to satisfy human needs, and therefore
robots should protect themselves in order to better perform these tasks,
functioning as human property.
But these ethical rules, or laws, were insufficient, ambiguous, and not
sufficiently broad, as Asimov himself admitted.64 For example, a robot in
police service trying to protect hostages taken by a criminal who intends
to shoot one of them understands that the only way to stop the murder of
an innocent person is to shoot the criminal. But the first law prohibits the
robot from killing or injuring the criminal (by acting), and at the same time
prohibits it from letting the criminal kill the hostage (through inaction).
What can we expect the robot to do based on the first law, with no other
alternatives available? Indeed, what would we expect a human police of-
ficer to do? Any solution breaches the first law.
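The contradiction can be made explicit with a short sketch. The following toy model is my own illustration (the option names and predicates are invented for this example, not taken from the book): it enumerates the robot’s two options in the hostage scenario and checks each against the First Law, and both options breach it.

```python
# A toy model of the police-robot dilemma under Asimov's First Law.
# The options and predicate values below are illustrative assumptions.

def injures_human(option: str) -> bool:
    # Shooting the criminal injures a human being.
    return option == "shoot the criminal"

def allows_harm_by_inaction(option: str) -> bool:
    # Doing nothing lets the criminal kill the hostage.
    return option == "do nothing"

def breaches_first_law(option: str) -> bool:
    return injures_human(option) or allows_harm_by_inaction(option)

for option in ("shoot the criminal", "do nothing"):
    verdict = "breaches the First Law" if breaches_first_law(option) else "permitted"
    print(f"{option}: {verdict}")
# Both lines print "breaches the First Law": the rule leaves the robot
# no lawful course of action, which is exactly the dilemma described above.
```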
Examining the other two laws does not change the consequences. If
a human commander ordered the robot to shoot the criminal, the robot
would still be breaching the first law. Even if the commander were in im-
mediate danger, it would be impossible for the robot to act. And even if
the criminal threatened to blow up fifty hostages, the robot would not be
allowed to protect their lives by injuring the criminal. Matters would be
even more complicated if the commander were a robot too, but the con-
sequences would be no different. And the third law would not alter the
results even if the robot itself were in danger.
The dilemma of the police robot is not uncommon in a society in
which humans and robots coexist. Any activity by robots and ai technol-
ogy in such a society is riddled with such dilemmas. Consider a robot in
medical service that is required to perform an intrusive emergency surgi-
cal procedure intended to save a patient’s life, and to which the patient
objects. Any action or inaction by the robot breaches Asimov’s first law,
and any order by a superior is unable to solve the dilemma because an
order to act causes injury to the patient and an order to refrain from action
results in the patient’s death.
Some dilemmas are easier to solve when one of the options involves no
injury to humans, but this raises other questions. In the previous example
of robots used as prison guards,65 what should such a robot do when a pris-
oner attempts to escape and the only way to stop the prisoner is by causing
injury? What should a sex robot do when ordered to commit sadistic acts?
If the answers are not to act, then the question is, why use such robots in
the first place if their intended actions breach the first law?
If humanity determines that the destiny of ai robots and other artifi-
cial entities is to serve and protect humanity in various situations, this
involves difficult decisions that will have to be made by these entities.
The terms injury and harm may be broader than specific bodily harm, and
these entities may harm people in ways other than bodily. Moreover, in
various situations, causing one sort of harm may be necessary in order
to prevent greater harm. In most cases, such a decision involves complicated
judgment that exceeds the simple dogmatic rules of ethics.66
The debate about Asimov’s laws gave rise to debate about the moral ac-
countability of machines.67 The moral accountability of robots, machines,
and ai entities is part of the endless quest for machina sapiens. But robots
exist, they participate in daily human life in industrial and private envi-
ronments, and they do cause harm from time to time, regardless of whether
they can be subjected to moral accountability. Therefore, the ethical sphere
is unsuitable for settling the issue. Ethics involves moral accountability,
complicated inner judgment, and rules that are difficult to apply. The question,
then, is what can govern robot functioning, human-robot relations, and human-
robot coexistence? The answer may lie in criminal law.
In the common failure situation, the worst outcome is that the task undertaken by the robot has not been accom-
plished successfully. But some failure situations can involve harm and
danger to individuals and society.
For example, the task of prison-guard robots has been defined as pre-
venting escape by using minimal force against the prisoners. A prisoner
attempting to escape may be restrained by the robot guard, which holds
the prisoner firmly but causes injury; the prisoner may then argue that the
robot has used excessive force. Analyzing the robot’s actions may
reveal that it could have chosen a more moderate action, but the robot had
evaluated the risk as being graver than it actually was. In this case, who is
responsible for the injury?
This type of example raises important questions and many arguments
about the responsibility of the ai entity. If analyzed through the lens of
ethics, the failure in this situation is that of the programmer or the de-
signer, as most scientists would argue, not of the robot itself. The robot
cannot consolidate the necessary moral accountability to be responsible
for any harm caused by its actions. According to this point of view, only
humans can consolidate such moral accountability. The robot is nothing
but a tool in the hands of its programmer, regardless of the quality of its
software or cognitive abilities. This argument is related to the debate about
machina sapiens.
Moral accountability is indeed a highly complex issue, not only for
machines, but for humans as well. Morality, in general, has no common
definition that is accepted in all societies by all individuals. Deonto-
logical morality (focused on the will and the conduct) and teleological
morality (focused on the result) are the most widely accepted types, and in
many situations they recommend opposite actions.68 The Nazis considered
themselves deontologically moral, although most societies and individu-
als disagreed. If morality is so difficult to assess, then moral accountability
may not be the most appropriate and efficient way of evaluating responsi-
bility in the type of case we have just examined.
In this context, the issue of the responsibility of ai entities will always
return to the debate about the conceptual ability of machines to become
human-like, so that the endless quest for machina sapiens would become
an endless quest for ai accountability. The relevant question here exceeds
the technological one, and it is mostly a social question. How do we, as a
human society, choose to evaluate responsibility in situations of harm and
danger to individuals and society?
The main social tool available for handling such situations in daily
life is criminal law, which defines the criminal liability of individuals
who harm society or endanger it. Criminal law also has educational social
value because it educates individuals on how to behave within their so-
ciety. For example, criminal law prohibits murder; in other words, the
law defines what is considered to be murder, and prohibits it. This has
the value of punishing individuals for murder ex post, and prospectively
educating individuals not to murder ex ante, as part of the rules of living
together in society. Thus, criminal law plays a dominant role in social
control.69
Criminal law is considered the most efficient social measure for di-
recting individual behavior in any society. It is far from perfect, but under
modern circumstances it is the most efficient measure. And because it is
efficient regarding human individuals, it is reasonable to examine
whether it is efficient regarding nonhuman entities as well, specifically
ai entities. Naturally, the first step in evaluating the efficiency of criminal
law with regard to machines is to examine the applicability of criminal
law to them. So the question in this context is whether machines may be
subject to criminal law.
Because criminal law does not depend on moral accountability, the
debate about the moral accountability of machines is irrelevant to the
question at hand. Although at times there is an overlap between criminal
liability, certain types of moral accountability, and certain types of ethics,
such coincidence is not necessary in order to impose criminal liability.
The question of the applicability of criminal law to machines involves two
interrelated, secondary questions: Can ai entities fulfill the requirements of
criminal liability? And if so, how can human society punish them?
The first question is a core issue of criminal law, and imposition of crimi-
nal liability depends on meeting its requirements. Only if these require-
ments are met does the question of punishability arise, that is, how human
society can punish machines. The answers proposed in this book are affir-
mative for both questions. Chapters 2 through 5 examine the first question,
and chapter 6 examines the second. If the affirmative answers offered in
this book are accepted, then a new social entity has indeed been created,
machina sapiens criminalis.
Machina sapiens criminalis is the inevitable by-product of human ef-
forts to create machina sapiens. Technological response to the endlessness
of the quest for machina sapiens has resulted in advanced developments
in ai technology, which have made it possible to imitate human minds
and their associated skills better than ever before. If current ai technology
were to advance significantly toward the creation of machina sapiens, this
would make current criminal law even more relevant to addressing ai technology, because
such technology imitates the human mind, and the human mind is already
subject to current criminal law. Thus, the closer ai technology approaches
to a complete imitation of the human mind, the more relevant the current
criminal law becomes.
Subjecting ai robots to the criminal law may relax our fears of human-
robot coexistence. Criminal law plays an important role in giving people
a sense of personal confidence. Each individual knows that all other indi-
viduals in society are bound to obey the law, especially the criminal law.
If the law is breached by any individual, society enforces it by means of
its relevant coercive powers (police, courts, and so on). If any individual
or group is not subject to the criminal law, the personal confidence of the
other individuals is severely harmed because those who are not subject to
the criminal law have no incentive to obey the law.
The same logic works for humans, corporations, and ai entities.74 If any
of these is not subject to criminal law, the personal confidence of the other
entities is harmed, and more broadly, the sense of confidence of the entire
society is harmed. Consequently, society must make all efforts to subject
all active entities to its criminal law. At times this requires conceptual
changes in the general insights of society. For example, when the fear of
corporations that are not subject to criminal law became real, in the
seventeenth century, criminal law was applied to them. It is perhaps time to do
the same for ai entities in order to alleviate human fears.75
But one more issue must be clarified. There are other creatures living
among us, in our society, in coexistence with us: animals. Why not apply
our legal rules concerning animals to ai entities as well?
An examination of the laws concerning animals reveals two important
aspects that the law addresses. The first is the law’s
relation to animals as property of humans, and the second is our duty
to show mercy toward animals. The first aspect involves the ownership,
possession, and other property rights of humans toward animals. For ex-
ample, if damage is caused by an animal, the human who has the property
rights over the animal is legally responsible for the damage. If a person is
attacked by a dog, its owner is legally responsible for any damage.
In most countries these are issues of tort law, although in some coun-
tries they are related to criminal law as well, but in either case the legal
responsibility is the human’s, not the animal’s. Since ancient times, an
animal was incapacitated only if it was considered too dangerous for so-
ciety. Incapacitation usually meant killing the animal. This was the case
if an ox gored a person in ancient times,77 and it still is the case if a dog
bites a person under certain circumstances. No legal system considers an
animal to be directly subject to the law, especially not to criminal law,
regardless of the animal’s intelligence.
The second aspect of law — the duty to show mercy toward animals —
is aimed at humans. As humans are considered superior to animals
because of their human intelligence, animals are viewed as helpless. Con-
sequently, the law prohibits the abuse of power by humans against animals
because of cruelty. The subjects of these legal provisions are humans, not
animals. Animals abused by humans have no standing in court, even if
damaged. The legal “victim” in these cases is society, not the animals, so
in most cases, these legal provisions are part of criminal law.
Society indicts humans who abuse animals because such cruelty
harms society, not the animals. This type of protection of animals differs
little from the protection of property. Most criminal codes prohibit dam-
aging another’s property, in order to protect the property rights of the pos-
sessor or owner. But in the case of cruelty to animals, this type of protec-
tion has nothing to do with property rights. The legal owner of a dog may
be indicted for abusing the dog regardless of property rights with respect
to the dog. These legal provisions, which have existed since time imme-
morial, form the zoological legal model. The question is why this model
cannot be applied to ai entities.
We are considering three types of entities: humans, animals, and artifi-
cially intelligent entities. If we wish to subordinate ai entities to the crimi-
nal law of humans, we must be able to justify the resemblance between
humans and ai entities in this context. Indeed, we should explain why
ai entities carry more resemblance to humans than to animals. Otherwise,
the zoological legal model would be satisfactory and adequate for settling
matters of ai activity. Thus, the question is, whom does an ai robot resemble more:
a human or a dog?
The zoological legal model has been examined previously with re-
spect to ai entities when discussing the control of unmanned aircraft78
and other machines,79 including “new generation robots.”80 For some of
the legal issues raised in these contexts, the zoological legal model was
able to provide answers, but the core problems could not be solved by this
model.81 When the ai entity was able to figure out by itself, using its soft-
ware, how it needed to act, something in the legal responsibility puzzle
was still missing. Communication of complicated ideas is much easier
with ai entities than with animals. This is the case regarding external
knowledge and the quality of reasonable conclusions in various situations.
An ai entity is programmed by humans to conform to formal human
logical reasoning. This forms the core reasoning on which the activity
of the ai entity is based, so that its calculations can be traced through
formal human logical reasoning. Most animals, in most situations, lack
this type of reasoning. It is not that animals are not reasonable, but their
reasonableness is not necessarily based on formal human logic. Emotion-
ality plays an important role in the activity of most living creatures, both
animals and humans, and it may supply the drive and motivation to some
human and animal activity. This is not the case with machines.
If emotionality is used as a measure, humans and animals are much
closer to each other than either is to machines. But if pure rationality is
used as a measure, machines may be closer to humans than to animals.
Although emotionality and rationality affect each other, the law distin-
guishes between them regarding their applicability. For the law, especially
for criminal law, the main factor being considered is rationality, whereas
emotionality is only rarely taken into account. For example, a murderer’s
feelings of hatred, envy, or passion are immaterial for his or her convic-
tion; and only in rare cases, when the provocation defense is used, may
the court consider the emotional state of the offender.82
Further, the zoological legal model educates humans to be merciful
toward animals, as already noted, which is a key consideration under this
model. In relation to ai entities, however, this consideration is immaterial.
Because ai entities lack basic attributes of emotionality, they cannot be sad-
dened, made to suffer, disappointed, or tortured in any emotional manner,
and this aspect of the zoological legal model has no significance regarding
ai entities. For example, cutting off a dog’s leg for no medical reason is
considered abuse, and in some countries it is considered an offense. But
no country considers cutting off a robot’s arm an offense or abuse, even if
it is done for no reason at all.
Morality of any kind and criminal liability are vastly
different. At times they coincide, but such coincidence is not necessary
for the imposition of criminal liability. An offender is a person on whom
criminal liability has been imposed. When the legal requirements of a
given criminal offense are met by individual behavior, criminal liability
is imposed, and the individual is considered an offender. Evil may or may
not be involved. Thus, imposition of criminal liability requires the explo-
ration of its requirements.
Criminal law is considered the most efficient means of social control.
Society, as an abstract body or entity, controls its individual members
using a variety of measures (for example, moral, economic, or cultural),
but one of the most efficient is the law. And given that only criminal law
includes significant sanctions (imprisonment, fines, and even death), it is
considered the most efficient of the means used to control individuals; this
is commonly called legal social control. Imposition of criminal liability
is the application and implementation of legal social control.
Modern criminal liability is independent of morality of any kind, and
independent of evil. It is imposed in an organized, almost mathematical
way. Two cumulative types of requirements must be met for the impo-
sition of criminal liability: the first has to do with the law, the second
with the offender. Both types of requirements must be met in order to
impose criminal liability, and if they are met, no additional conditions
are required.
The offense must have a legitimate legal source that creates and defines
it.87 In most countries, the ultimate legitimate legal source is legislation,
and case law is not considered a legitimate source because legislation is
enacted by elected public representatives of society as a whole. Given that
criminal law exercises legal social control, it should reflect the will of
society by means of its representatives. The requirement that the offense
be applicable in time means that retroactive offenses are illegal.88 In plan-
ning their moves, individuals must know the prohibitions in advance, not
retroactively. Only in rare cases are retroactive offenses considered legal,
for example when a new offense is to the benefit of the defendant (it is
more lenient than its predecessor or allows for a new defense) or when the
offense covers cogent international custom, or jus cogens (for example,
genocide, crimes against humanity, or war crimes).89
The offense must also be applicable in place, so that extraterritorial of-
fenses are illegal.90 Criminal law is based on the authority of the sovereign,
which is domestic, meaning that criminal law must be domestic as well.
For example, the criminal law of France is applicable in France but not
in the United States. Thus, an extraterritorial offense is illegal (for example,
a French offense that is applicable in the United States). In rare cases, how-
ever, the sovereign is authorized to protect itself or its inhabitants abroad
through extraterritorial offenses, as, for example, in the case of the foreign
terrorists who attacked the US Embassy in Kenya and who may be indicted
in the United States under US criminal law, although they have never been
to the United States.91 Extraterritorial offenses may also apply in cases of
international cooperation between states.
Offenses must be well formulated and phrased. They must be general
and address an unspecified public (for example, “Samuel Jackson is not
allowed to . . .” is not a legitimate offense).92 Offenses must be feasible,
so that legal social control is realistic (for example, “Whoever breathes
for more than five minutes uninterruptedly shall be guilty of . . .” is not a
legitimate offense).93 They must also be clear and precise, so that individu-
als will know exactly what they are allowed to do and what is prohibited
conduct.94 When all these conditions are met, the requirement of legality
is satisfied.
Conduct is a requirement that every offense must meet in order to
be considered legal (nullum crimen sine actu). Modern society prefers
freedom of thought and has no interest in punishing mere thoughts (cogi-
tationis poenam nemo patitur). Effective legal social control cannot be
achieved through mind policing, which is not enforceable. The conduct
is the objective and external expression of the commission of the offense.
Offenses that do not meet this requirement are not legitimate. Throughout human legal
history, only tyrants and totalitarian regimes have used offenses that lack
conduct.
The conduct requirement of some offenses can be satisfied by inac-
tion. Offenses that criminalize the status of the individual rather than his
or her conduct are considered status offenses. For example, offenses that
punish the relatives of traitors simply because they are relatives, regard-
less of their conduct, are considered status offenses.95 So are offenses that
punish individuals of certain ethnic origin.96 Most developed countries
have abolished these offenses, and defendants indicted for such offenses
are acquitted in court because status offenses contradict the principle of
conduct in criminal law.97 Only when conduct is required may offenses be
considered legal and legitimate.
The culpability requirement (nullum crimen sine culpa) must be met
for an offense to be considered legal. Modern society has no interest in
punishing accidental, thoughtless, or random events, only events that are
the result of an individual’s culpability. Someone’s death does not nec-
essarily indicate the presence of an offender. A person may pass next to
another exactly when the other person is struck dead by lightning, but
the passerby is not necessarily culpable. The offense must require some
level of culpability for the imposition of criminal liability; otherwise, this
would amount to cruel maltreatment of individuals by society.
Culpability relates to the mental state of the offender and reflects the
subjective-internal expression of the commission of the offense. The re-
quired mental state of the offender, which forms the requirement of culpa-
bility, may be reflected both in the particular requirements of the specific
offense and in the general defenses. For example, the offense of man-
slaughter requires recklessness as its minimal level of culpability.98 But if
the offender is insane (general defense of insanity),99 is an infant (general
defense of infancy),100 or has acted in self-defense (general defense of self-
defense),101 no criminal liability is imposed. No culpability can be ascribed
to such an offender. An offense is considered legal and legitimate only if
it requires culpability.
Personal liability is required for an offense to be considered legal.102
Modern society has no interest in punishing one person for the behavior
of another, regardless of their relationship. Effective legal social control
cannot be achieved unless all individuals are liable for their own behavior.
If someone knows that no legal liability is imposed on him for his behav-
ior, he has no incentive to avoid committing offenses or other types of
antisocial behavior. Legal social control can be effective only when a person
knows that for his own behavior, no other person is liable. Punishment
deters individuals only if they may be punished personally.
Personal liability guarantees that each offender would be criminally
liable and punished only for his or her own behavior. When individuals
collaborate in committing an offense, each of the accomplices is criminally
liable only for his or her own part. The accessory is criminally liable for
accessoryship, and the joint perpetrator for joint perpetration. The various
types of criminal liability in conjunction with the principle of personal
liability have established the general forms of complicity in criminal law
(for example, joint perpetration, perpetration-through-another, conspiracy,
incitement, and accessoryship). Only when personal liability is required
can the offense be considered legal and legitimate.
When all four basic requirements of legality, conduct, culpability, and
personal liability are met, the offense that embodies them is considered to
be legitimate and legal. Only then can society impose criminal liability on
individuals for the commission of these offenses. The legitimacy and legal-
ity of specific offenses is necessary for the imposition of criminal liability,
but not sufficient. For the imposition of criminal liability, the particular
requirements of the offense must be met by the offender as well. These
requirements are part of the definition of the offense.
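Because the imposition of criminal liability is described here as cumulative and almost mathematical, its structure can be sketched in code. The following is a minimal illustration of my own, not a formalization offered by the book: an offense is legitimate only if all four basic requirements hold, and liability follows only when the particular requirements of a legitimate offense are also met.

```python
from dataclasses import dataclass

@dataclass
class Offense:
    # The four basic requirements every legitimate offense must satisfy.
    legality: bool            # legitimate source; applicable in time and place; general, feasible, clear
    conduct: bool             # nullum crimen sine actu
    culpability: bool         # nullum crimen sine culpa
    personal_liability: bool  # each person liable only for his or her own behavior

    def is_legitimate(self) -> bool:
        return all((self.legality, self.conduct,
                    self.culpability, self.personal_liability))

def criminal_liability(offense: Offense, particular_requirements_met: bool) -> bool:
    # Both cumulative types of requirements must be met; if they are,
    # no additional conditions are required.
    return offense.is_legitimate() and particular_requirements_met

# A status offense lacks the conduct requirement and is therefore illegitimate:
status_offense = Offense(legality=True, conduct=False,
                         culpability=True, personal_liability=True)
print(criminal_liability(status_offense, particular_requirements_met=True))  # False
```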
The first requirement is the factual element requirement (nullum crimen
sine actu),103 designed to answer four main questions about
the factual aspects of the delinquent event: “What?” “Who?” “When?” and
“Where?” “What” refers to the substantive facts of the event (what hap-
pened). “Who” relates to the identity of the offender. “When” addresses
the time of the event. “Where” specifies the location of the event.
In some offenses, these questions are answered directly within the
definition of the offense. In others, some of the questions are answered
through the applicability of the principle of legality in criminal law.104
For example, the offense “Whoever assaults another person . . .” does not
relate directly to the questions of “who,” “when,” and “where,” but the
questions are answered through the applicability of the principle of legal-
ity. Because the offense is required to be general,105 the answer to the “who”
question is any person who is legally competent. As this type of offense
may not be applicable retroactively,106 the answer to the “when” ques-
tion is from the time the offense was validated onward. And because this
type of offense may not be applicable extraterritorially (with some excep-
tions),107 the answer to the “where” question is within the territorial juris-
diction of the sovereign (with some exceptions).
Nevertheless, the answer to the “what” question must be incorporated
directly into the definition of the offense. This question addresses the core
of the offense and cannot be answered through the principle of legality.
This approach is at the foundation of the modern structure of the factual
element requirement, which consists of three main components: conduct,
circumstances, and results. Conduct is a mandatory component, whereas
circumstances and results are not. Thus, if the offense is defined as having
no conduct requirement, it is not legal, and the courts cannot convict indi-
viduals based on such a charge. Table 1.1 lists the components that answer
the four questions.
The conduct component is at the heart of the answer to the “what”
question. Status offenses, in which the conduct component is absent, are
considered illegal, and in general they are abolished when discovered.108
But the absence of circumstances or of results in the definition of an of-
fense does not invalidate the offense.109 These components are aimed at
meeting the factual element requirement with greater accuracy than by
conduct alone. Therefore, there are four formulas that can satisfy the fac-
tual element requirement:
1. conduct
2. conduct + circumstances
3. conduct + results
4. conduct + circumstances + results
Table 1.1. Factual element components and the questions they answer

Component       Question(s) answered
conduct         “what”
circumstances   “what,” “who,” “when,” “where”
results         “what”
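Read as a data structure, the factual element has one mandatory field and two optional ones. The sketch below is my own illustrative encoding (the offense names are examples, not statutory definitions) of the four formulas and of the rule that conduct is indispensable.

```python
from dataclasses import dataclass, field

@dataclass
class FactualElement:
    conduct: str                                        # mandatory: the core answer to "what"
    circumstances: list = field(default_factory=list)   # optional descriptors of the conduct
    results: list = field(default_factory=list)         # optional components deriving from the conduct

    def is_valid(self) -> bool:
        # An offense defined without conduct is a status offense and not legal.
        return bool(self.conduct)

# The four legitimate formulas:
f1 = FactualElement("assaulting")                                     # conduct
f2 = FactualElement("assaulting", circumstances=["another person"])   # conduct + circumstances
f3 = FactualElement("shooting", results=["death"])                    # conduct + results
f4 = FactualElement("shooting", circumstances=["of a human being"],
                    results=["death"])                                # conduct + circumstances + results

status = FactualElement("")   # no conduct component at all
print(status.is_valid())      # False: a status offense cannot ground liability
```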
Table 1.3 compares schematically
the requirements for satisfying the three forms of mental element (mens
rea, negligence, and strict liability).
This structure of criminal liability forms a matrix of minimalist re-
quirements. Each offense embodies the minimum requirement for the im-
position of criminal liability, and meeting these requirements is sufficient
to impose criminal liability. No additional psychological elements are re-
quired, so that the modern offender is an individual who meets the mini-
mal requirement of an offense. The offender is not required to be “evil,”
“mean,” or “wicked” — only to meet all the requirements of the offense.
The imposition of criminal liability is therefore dry and rational (as op-
posed to emotional), and resembles mathematics.
Thus, the matrix of minimalist requirements for criminal liability has
two aspects: structural and substantive. For example, if the mental ele-
ment of the offense requires only awareness, then no other component of
the mental element is required (structural aspect), and the required aware-
ness is defined by criminal law irrespective of its meaning in psychol-
ogy, philosophy, theology, and so on (substantive aspect). The substantive
aspect is discussed in detail in chapters 2, 3, and 4.
The corporate form emerged as a means of limiting the risks of the owners regarding the financial hazards of the business,
and as a result, corporations have become common.117
The use of corporations flourished during the first industrial revolu-
tion, to the point where they have been identified both with the fruits of
the revolution and with the misery of the lower classes and the workers
(a class the revolution itself created). Corporations were regarded as being
responsible for the poverty of the workers, who did not share in the profits,
and for the continuing abuse of children employed by the corporations.
As the industrial revolution progressed, public and social pressure on the
corporations increased, until legislators were forced to restrict the activi-
ties of corporations.
At the beginning of the eighteenth century, the British Parliament en-
acted statutes against the abuse of power by corporations, the very power
granted to them by the state for the benefit of social welfare.118 To ensure
the effectiveness of these statutes, they included criminal offenses. The
offense most often used for this purpose was that of public nuisance.119
This legislative trend deepened as the revolution progressed, and in the
nineteenth century, most developed countries had advanced legislation
regarding the activities of corporations in a variety of contexts. To be
effective, this legislation included criminal offenses as well, prompt-
ing the conceptual question, how can criminal liability be imposed on
corporations?
Criminal liability requires a factual element, whereas corporations pos-
sess no physical body. Criminal liability also requires a mental element,
whereas corporations have no mind, brain, spirit, or soul.120 Some Euro-
pean countries refused to impose criminal liability on nonhuman crea-
tures, and revived the Roman rule whereby corporations were not subject
to criminal liability (societas delinquere non potest). But this approach
was highly problematic because it created legal shelters for offenders. For
example, an individual who did not pay taxes was criminally liable, but
when the individual was a corporation, it was exempt. This provided an
incentive for individuals to work through corporations and evade paying
taxes. Eventually, all countries subjected corporations to criminal law, but
not until the twentieth century.
The Anglo-American legal tradition accepted the idea of the criminal
liability of corporations early because of its many social advantages and
benefits. In 1635, criminal liability for corporations was enacted and im-
posed for the first time.121 This was a relatively primitive imposition of
criminal liability because it relied on vicarious liability, but it enabled
the criminal law to reach nonhuman entities. If corporations could be subjected
to criminal liability in the
seventeenth century, the road may be open to crossing the next barrier that
stands before the imposition of criminal liability on ai entities.
In the following chapters, the criminal liability of ai entities is ana-
lyzed in detail for all types of offenses. Because the major obstacle on the
way to criminal liability for ai entities is that of the mental element, the of-
fenses are divided according to their mental element requirements (mens
rea, negligence, and strict liability). Next, the applicability of general de-
fenses for ai entities is analyzed. Finally, in the last chapter, the applicabil-
ity of human punishments to ai entities is discussed.
2
ai Criminal Liability for Intentional Offenses

2.1. The Factual Element Requirement
2.1.1. Conduct
Conduct, as already noted, is the only mandatory component of the factual
element requirement; in other words, an offense that does not require con-
duct is illegal and illegitimate, and no criminal liability can be legitimately
imposed based on it. In independent offenses, the conduct component of
the factual element requirement may be expressed both by an act and by
omission. In derivative criminal liability, the conduct component may be
expressed also by inaction, with some restrictions. We will examine these
forms of conduct in relation to the capabilities of ai machines.
In criminal law, act is defined as material performance with a factual-
external presentation. According to this definition, the materiality of the
act is manifest in its factual-external presentation, which differentiates
the act from subjective-internal matters related to the mental element.
Because thoughts have no factual-external presentation, they are related
not to the factual, but to the mental element. Will may initiate acts, but
in itself has no factual-external presentation, and is therefore considered
part of the mental element. Consequently, involuntary and unwilled ac-
tions, as well as reflexes, are still considered acts as far as the factual ele-
ment requirement is concerned.3
But although unwilled acts or reflexes are considered acts, criminal
liability is not necessarily imposed in such cases, for reasons
involving the mental element requirement or general defenses. For ex-
ample, B physically pushes A in the direction of C. Although the physical
contact between A and C is involuntary for both, it is still considered an
act. But it is unlikely that criminal liability for assault will be imposed on
A, because the mental element requirement for assault has not been met.
Acts committed as a result of loss of self-control are still considered acts,
but loss of self-control is a general defense, which if proven and accepted,
exempts the offender from criminal liability.
This definition of an act concentrates on its factual aspects to the ex-
clusion of mental aspects.4 The definition is also sufficiently broad to in-
clude actions originating in telekinesis, psychokinesis, and so on, if these
are possible,5 as long as they have a factual-external presentation.6 If act
were restricted to “willed muscular contraction” or “willed bodily move-
ment,”7 this would prevent imposition of criminal liability in cases of
perpetration-through-another (for example, B in the example just cited)
because no act has been performed, and it would be possible to assault
someone and be exempt from criminal liability, by pushing an innocent
person upon the victim.
2.1.2. Circumstances
Circumstances are not a mandatory component of the factual element re-
quirement. Some offenses require circumstances in addition to conduct,
and some do not. Circumstances are defined as factual components that
describe the conduct but do not derive from it. According to this defini-
tion, circumstances specify the criminal conduct in more accurate terms.
When defining specific offenses, circumstances are required especially
when the conduct component is too broad or vague, and greater specific-
ity is needed in order to avoid over-criminalization of situations that are
considered legal by society.
In the case of most offenses, the circumstances represent the factual
data that make the conduct criminal. For example, in most legal systems,
the conduct in the offense of rape is having sexual intercourse (although
the specific verbs may vary). But in itself, having sexual intercourse is not
an offense, and it becomes one only if it is committed without consent.
The factual element of “without consent” is what makes the conduct of
“having sexual intercourse” criminal. In this offense, the factual compo-
nent “without consent” functions as a circumstance.15 Additionally, the
factual component “with a woman” also functions as a circumstance be-
cause it describes the conduct (raping a chair is not an offense).
In this example, the factual element of the offense of rape is “having
sexual intercourse with a woman without consent.”16 Whereas “having
sexual intercourse” is the mandatory conduct component, “with a woman”
and “without consent” function as circumstance components that de-
scribe the conduct more accurately, to make sure that the factual element
requirement of rape is specified adequately, in a way that avoids over-
criminalization.
According to the definition, circumstances are not derived from the
conduct in order to allow distinguishing them from the result compo-
nent.17 For example, in homicide offenses, the conduct is required to cause
the “death” of a “human being.” Death describes the conduct and also de-
rives from it because it is the conduct that caused death. Therefore, in ho-
micide offenses, “death” does not function as a circumstance. The factual
data that functions as a circumstance is “human being.” The victim was a
human being long before the conduct took place, and therefore it does not
derive from the conduct. But these words also describe the conduct (caus-
ing the death of a human being, not of an insect), and therefore function
as a circumstance.
On this basis, ai technology is capable of satisfying the circumstance
component requirement of the factual element. When the circumstances
are external to the offender, the identity of the offender is immaterial, and
therefore the offender may be human or a machine, without affecting the
circumstances. In the earlier example of rape, the circumstance “with a
woman” is external to the offender. The victim is required to be a woman
regardless of the identity of the rapist. The raped woman is still considered
“woman” whether she was attacked by a human or a machine, or has not
been attacked at all.
When the circumstances are not external to the offender but are re-
lated somehow to the offender’s conduct, the circumstances assimilate
within the conduct component. In the example of rape, the circumstance
“without consent” describes the conduct as if it were part of the conduct
(how exactly did the offender have sexual intercourse with the victim).
To satisfy the requirement of this type of circumstance, the offender must
commit the conduct in a particular way. Consequently, meeting the re-
quirement of this type of circumstance is the same as committing the con-
duct. As already discussed, it is within the capabilities of machines to
meet the conduct component requirement.
2.1.3. Results and Causal Connection

Suppose that A hit B with her car, and B died. Because B was terminally ill, A may argue
that B would have died anyway in the near future, so B’s death is not the
ultimate result of A’s conduct, and the result would have occurred anyway,
even without A’s conduct. But because the factual causal connection has
to do with the way in which the results occurred, the requirement is met
in this example: B would not have died the way he did had A not run
him over.
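The factual causal connection can be sketched as a comparison between the result as it actually occurred and the result as it would have occurred without the conduct. This is a minimal illustration of my own, with invented descriptions, of the terminally ill pedestrian example:

```python
def factual_causal_connection(result_with_conduct: str,
                              result_without_conduct: str) -> bool:
    """But-for test keyed to the *way* the result occurred: the connection
    holds if the result, as and when it happened, would not have happened
    that way absent the conduct."""
    return result_with_conduct != result_without_conduct

# A runs over B, who is terminally ill.
with_conduct = "B dies now, from the collision"
without_conduct = "B dies later, from the illness"
print(factual_causal_connection(with_conduct, without_conduct))  # True
# B would have died eventually, but not in the way he did; the test is met.
```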
On this basis, ai technology is capable of satisfying the result com-
ponent requirement of the factual element. To achieve the results, the of-
fender must initiate the conduct. Commission of the conduct forms the
results, and the results are examined objectively to determine whether
they are derived from the commission of the conduct.21 Thus, when the
offender commits the conduct, and the conduct has been carried out, it is
the conduct, not the offender, that is the cause of the results, if there was
a result. The offender is not required to cause, separately, any results, but
only to commit the conduct. Although the offender begins the factual pro-
cess that forms the results, this process is initiated only through the com-
mission of the conduct component.22
Because ai technology is capable of committing conduct of all types,
in the context of criminal law it is capable of causing results deriving from
this conduct. For example, when a robot operates a firearm and causes it
to shoot a bullet toward a human individual, this fulfills the conduct com-
ponent requirement of homicide offenses. At that point, a causal connec-
tion test is used to examine the conduct and determine whether it caused
that individual’s death. If it did, both the result and conduct component
requirements are met, although physically the robot “did” nothing but
commit the conduct. Because meeting the conduct component require-
ment is within the capabilities of machines, as already discussed, so is
meeting the results component requirements.
2.2. The Mental Element Requirement

First, we explore the structure of the mens rea requirement, then we turn
to the fulfillment of the requirement by ai technology.
An offender would be convicted of rape whether he was cruel or not, but a
crueler rapist’s punishment would likely be much harsher than that of a less cruel one.
Identifying the factual element components as the object of mens rea
components is the basis for the structure of mens rea. Mens rea has two
layers of requirement: cognition and volition. The layer of cognition con-
sists of awareness. The term knowledge is also used to describe the cogni-
tion layer, but awareness seems more accurate. Both awareness and knowl-
edge function the same way and have the same meaning in this context.
We can be aware only of facts that occurred in the past or are occurring at
present, but we cannot be aware of future facts.
For example, we can be aware of the fact that A ate an ice-cream cone
two minutes ago, and we can be aware of the fact that B is eating one right
now. If C said that she intends to eat an ice-cream cone, we can predict it,
foresee it, or estimate the probability that it will happen, but we cannot
be aware of it, simply because it has not occurred yet. If the criminal law
had required offenders to be aware of future facts, it would have required
prophetic skills. As far as we know, most criminals do not possess such
skills and are far from being prophets.
In this context, the offender’s temporal point of view is anchored at the
point at which the conduct is actually performed. Therefore, from the of-
fender’s point of view, the conduct component always occurs in the pres-
ent. Consequently, awareness is a relevant component of mens rea in re-
lation to conduct. Circumstance is defined as factual data that describes
the conduct but does not derive from it.27 To describe current conduct,
circumstances must exist in the present as well. For example, the circum-
stance “with a woman” in the offense of rape, described earlier, must exist
simultaneously with the conduct “having sexual intercourse.”
The raped person must be a woman during the commission of the of-
fense for the circumstance requirement to be fulfilled. Therefore, aware-
ness is a relevant component of mens rea in relation to circumstances as
well. The situation is different, however, in relation to results, which are
defined as a factual component that derives from the conduct; this means
that results must occur after the conduct.28 For example, B dies at 10:00:00,
and afterward, at 10:00:10, A shoots her. In this case, the conduct (the shoot-
ing) is not the cause of the other factual event (B’s death), which does not
function as a result.
From the offender’s point of view, given that the results occur after the
conduct, and because the offender’s temporal point of view determines
the point at which the conduct is performed, the results occur in the
future. Consequently, because results do not occur in the present, aware-
ness is not relevant for them. The offender is not expected to be aware of
the results, which have still not occurred from his point of view. But al-
though the offender cannot be aware of the future results, he is capable of
predicting them and assessing the probability of their occurrence. These
capabilities are present simultaneously with the actual performance of
the conduct.
For example, A shoots B. At the point in time when the shot is fired,
B’s death has not yet occurred, but in the act of shooting, A is aware of the
possibility of B’s death as a result of the shot. Therefore, awareness is a rel-
evant component of mens rea in relation to the possibility that the results
will occur, but not in relation to the results themselves. Awareness of this
possibility need not relate to how reasonable or probable the
occurrence of the result is. If the offender is aware of the mere existence of
the possibility, whether its probability is high or low, that the result will
follow from the conduct, this component of mens rea is present.29
The additional layer of mens rea is that of volition. This layer is ad-
ditional to cognition and based on it. Volition is never alone, but always
accompanies awareness. Volition relates to the offender’s will regarding
the results of the factual event. In rare offenses, volition may relate to mo-
tives and purposes beyond the specific factual event, and is expressed as
specific intent.30 The main question regarding volition (apart from the of-
fender’s awareness of the possibility that the results will follow from the
conduct) is whether the offender wanted the results to occur. Because from
the offender’s point of view the results occur in the future, they are the
only reasonable object of volition.
From the offender’s temporal point of view, neither circumstances nor
conduct have anything to do with will. The raped person was a woman
before, during, and after the rape, regardless of the rapist’s will. The sexual
intercourse is what it is at that point in time, regardless of the rapist’s will.
If the offender argues that the conduct occurred against his will — that is,
the offender did not control it — this argument is related to the general de-
fense of loss of self-control, which we will discuss in a moment.31 Thus, as
far as conduct and circumstances are concerned, no volition component is
required, only awareness.
In factual reality there are many levels of will, but criminal law
accepts only three: (1) intent (and specific intent), (2) indifference, and (3)
rashness. The first represents positive will (the offender wants the results
to occur), the second represents nullity (the offender is indifferent about
the results), and the third represents negative will (the offender does not
want the results to occur, but has taken unreasonable risks that caused
them to occur).
It has long been understood that the human mind is selective, meaning that humans are ca-
pable of focusing their minds on certain stimuli while ignoring others, so
that the ignored stimuli do not enter the human mind at all. If the human
mind were to include all internal and external stimuli and had to pay at-
tention to each one of them, it would be incapable of functioning.
The function of the human and animal sensory systems is to absorb
the stimuli (light, sound, heat, pressure, and so on) and transfer them to
the brain for the processing of factual information. Processing is executed
in the brain internally until a relevant general image of the factual data is
created. This process is called perception, which is considered one of the
basic skills of the human mind.
Many stimuli are active at any one time. To enable the creation of an
organized image of factual data, the human brain must focus on some of
the stimuli and ignore others. This is accomplished through the process of
attention, which permits the brain to concentrate on some stimuli while
ignoring others. In practice, the other stimuli are not entirely ignored,
but they exist in the background of the perception process. The nervous
system still retains adequate vigilance that allows it to absorb other stimuli
when the process of attention is at work.
For example, Archie is watching television and is focused on it. Edith
calls him because she wants to speak with him. When she first calls, he
does not react. When she calls him for the second time, much louder, he
reacts, asks her what she wants, and says that he must go to the bathroom.
When Archie is focused on the television, many of the stimuli in his envi-
ronment are ignored (for example, the sound of his heartbeats, the smells
from the kitchen, and the pressure in his bladder), allowing him to focus
on the television. But his nervous system is still sufficiently alert to absorb
some of these stimuli. Edith’s first call is just another sound to be ignored
by the process of attention. But when she calls the second time, the at-
tention process used to focus on the television is stopped, and the other
stimuli are absorbed and receive some attention. This is why Archie sud-
denly remembers that he must go to the bathroom.
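To make the attention mechanism concrete, here is a toy sketch in Python; the threshold and the intensity values are invented for illustration and do not model an actual nervous system.

    # Toy model of attention: stimuli below the current attention threshold
    # remain in the background; a sufficiently intense stimulus (Edith's
    # second, much louder call) breaks through. All values are invented.

    ATTENTION_THRESHOLD = 0.7  # raised while Archie is focused on the television

    stimuli = {
        "heartbeat": 0.1,
        "kitchen_smells": 0.2,
        "bladder_pressure": 0.4,
        "ediths_first_call": 0.5,   # ignored: below the threshold
        "ediths_second_call": 0.9,  # much louder: absorbed and attended to
    }

    for name, intensity in stimuli.items():
        status = "attended to" if intensity >= ATTENTION_THRESHOLD else "background"
        print(f"{name}: {status}")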
Perception includes not only absorbing stimuli, but also processing
them into a relevant general image, which is what usually gives meaning
to the accumulation of stimuli. Processing the factual data into a rele-
vant general image is accomplished through unconscious inference, so
that there is absolutely no need for awareness to carry this out.36 But the
result of the processing — that is, the relevant general image — is a con-
scious result. Thus, although the human mind is not conscious of most of
the process, it is conscious of its results when the relevant general image
is created. The second stage of awareness is the ability to use this infor-
mation, transfer it, integrate it with other information, and act based on
it — in other words, to understand it.42
Consider the example of security robots based on ai technology.43 The
task of these robots is to identify intruders and either alert human troops
(police, army) or stop the intruders on their own. Their sensors (cameras,
microphones) absorb the factual information and transfer it to the pro-
cessors, which are then supposed to identify the intruder. To do so, the
processors analyze the factual data, making sure not to mistake police and
other security personnel in the area for intruders. The analysis is based on
identifying and discriminating between changing sights and sounds. The
robots may compare the shape and color of clothes and use whatever other
attributes they need to make the identification.
Following this short process, the robot assesses the probabilities. If
the probabilities do not produce an accurate identification, the robot ini-
tiates a process of vocal identification and asks the suspicious figure to
identify itself, provide a password, or perform any other action relevant to
the situation. If the intruder answers, the sound is compared with sound
patterns stored in memory. After all the necessary factual data has been
collected, the robot has sufficient information to make a decision, on the
basis of which it can act. Indeed, this robot has created a relevant general
image out of the factual data absorbed by its sensors, which it was able to
use, transfer, and integrate with other information, and then act based on
it — in other words, to understand it.
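The robot's reasoning can be sketched as a short pipeline in Python; the similarity measure, the confidence threshold, and the sample data are hypothetical stand-ins for the trained models a real security system would use.

    # Hypothetical sketch of the security robot's identification pipeline:
    # compare observed attributes with stored personnel patterns, and fall
    # back on vocal identification when the visual match is inconclusive.

    CONFIDENCE_THRESHOLD = 0.9

    def similarity(observed: dict, stored: dict) -> float:
        # Stub: fraction of stored attributes (clothing shape, color, badge)
        # that match the observed figure.
        matches = set(observed.items()) & set(stored.items())
        return len(matches) / max(len(stored), 1)

    def identify(observed, personnel_db, answer, expected_password):
        # Step 1: visual comparison with patterns stored in memory.
        best = max((similarity(observed, p) for p in personnel_db), default=0.0)
        if best >= CONFIDENCE_THRESHOLD:
            return "authorized"
        # Step 2: vocal identification when the probabilities are inconclusive.
        if answer is not None and answer == expected_password:
            return "authorized"
        return "intruder"  # alert human troops or stop the intruder

    personnel_db = [{"uniform": "blue", "badge": "police"}]
    print(identify({"uniform": "blue", "badge": "police"}, personnel_db, None, "pw"))
    print(identify({"coat": "dark"}, personnel_db, "wrong", "pw"))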
Consider how a human guard would act in this situation: very likely,
in the same way. The human guard would identify a figure or a sound
as suspicious, whereas the robot would examine a change in the current
image or sound. We prefer robot guards because they work more thor-
oughly than their human counterparts and do not fall asleep on the job.
Both the human guard and the robot use their memories to try to identify
the figure or the sound by comparing it with patterns stored in memory.
But whereas the robot cannot forget figures, humans can. The human
guard is not always sure, but the robot assesses all the probabilities. The
human guard calls out to ask the figure to identify itself, and so does the
robot. Both the human and the robot compare the answer with information
stored in memory — yet another function that the robot carries out more
accurately than its human counterpart. Following all the data collection,
both human and robot guards make a decision. They both understand the
situation.
We say that the human guard was aware of the relevant factual data.
Can we not say the same about the robot guard? There is no reason why
not. Its internal processes are more or less the same as those of the human
guard, except that the robot is more accurate, faster, and works more thor-
oughly than the human guard. When they become aware of a suspicious
figure or sound, they both absorb the data and act accordingly.
It is possible to argue that the human guard may have absorbed much
more factual information than the robot — for example, smells — and
that a human is capable of filtering out irrelevant information, such as
cricket sounds. This argument does not invalidate the analysis just pre-
sented. Robots are also capable of absorbing additional factual data, such
as smells. But as already noted, humans use the attention process to focus
on a portion of the factual data, as Archie did in the example cited earlier.
Although humans are capable of absorbing a wide range of factual data,
this may prove to be a distraction for attending to the task at hand. Robots
may be programmed to ignore such data.
If smells are considered irrelevant to the task of guarding, then robots
will not consider this data. If they were human, we would say that the
robots will not pay attention to this data. Humans filter out irrelevant data
using the process of attention that runs in the background of the human
mind. Robots, however, examine all factual data and eliminate the irrele-
vant options only after analyzing the data thoroughly. We may now ask
ourselves which type of guard we humans would prefer: one who uncon-
sciously does not pay attention to factual data, or one who examines all
factual data thoroughly?
In sum, ai technology is also capable of fulfilling the requirements for
the second stage of awareness. And given that the two stages already de-
scribed are the only stages of the awareness process recognized by crimi-
nal law, we may conclude that ai technology is capable of meeting the
awareness requirement in criminal law.
Nevertheless, some of us may have the feeling that something is still
missing before we conclude that machines are capable of awareness. This
may be correct if we consider awareness in its broader sense, as it is used
in psychology, philosophy, or the cognitive sciences. But in this book
we are examining the criminal liability of ai entities, and not the wide
meaning of cognition in psychology, philosophy, the cognitive sciences,
and so on. Therefore, the only standards of awareness that are relevant
here are the standards of criminal law. The other standards are irrele-
vant for the assessment of criminal liability imposed on either humans or
ai entities.
The criminal law definition of awareness is indeed much narrower
than corresponding definitions in the other spheres of knowledge. This is
true for the imposition of criminal liability not only on ai entities, but also
on humans. The definitions of awareness in psychology, philosophy, or
the cognitive sciences may be relevant for the endless quest for machina
sapiens, but not for machina sapiens criminalis, which is based on the
definitions of criminal law.44 This distinction is one of the best illustra-
tions of the difference between machina sapiens and machina sapiens
criminalis.
As noted earlier, awareness itself is very difficult to prove in court, es-
pecially in criminal cases where it must be proved beyond any reasonable
doubt.45 Therefore, criminal law developed two evidential substitutes for
awareness: the presumptions of willful blindness and awareness. Accord-
ing to the willful blindness presumption, the offender is presumed to be
aware of the conduct and circumstances if he or she suspected that they
existed but did not verify that suspicion.46 The rationale of this presump-
tion is that because the “blindness” of the offender to the facts is willful,
he or she is considered to be aware of these facts even if not actually aware
of them.
If the offender wished to avoid the commission of the offense, he or
she would have checked the facts. For example, the rapist who suspects
that the woman does not consent to having sexual intercourse with him
predicts that if he were to ask her, she would refuse, in which case there
would be no doubt about her lack of consent. Therefore, he ignores his
suspicion, and at his interrogation says that he thought she consented be-
cause he heard no objection. The willful blindness presumption equates
unchecked suspicion with full awareness of the relevant conduct or cir-
cumstances. The question is whether this presumption is relevant to ai
technology.
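Before turning to that question, note that the presumption itself reduces to a simple decision rule; the sketch below merely restates the legal test, assuming the two underlying facts (suspicion and verification) can be established in evidence.

    # The willful blindness presumption as a decision rule: unchecked
    # suspicion of the conduct or circumstances is equated with awareness.

    def presumed_aware(actually_aware: bool, suspected: bool, verified: bool) -> bool:
        return actually_aware or (suspected and not verified)

    # The rapist who suspected the lack of consent but deliberately did not ask:
    print(presumed_aware(actually_aware=False, suspected=True, verified=False))  # True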
According to the awareness presumption, a person is presumed to be
aware of the possibility of the occurrence of the natural results of his or her
conduct.47 The rationale of this presumption is that humans have the basic
skills necessary to assess the natural consequences of their conduct. For
example, when firing a bullet at someone's head, the shooter is presumed
to be able to assess the possibility of death as a natural consequence of the
shooting. Humans who lack such skills, permanently or temporarily, have
an opportunity to refute the presumption. The question is whether this
presumption is relevant to ai technology.
Awareness is indeed difficult to prove for humans. But because within
a criminal law context, the processes that comprise awareness can be mon-
itored accurately by ai technology, there is no need for any substitute. In
human terms, this would be the same as if the brain of every individual
could be monitored accurately.
Specific intent may be defined as follows: aware will that accompanies
the commission of conduct, the motive
for which is satisfied or the purpose of which is achieved.
Intent is an expression of positive will, that is, the will to make a fac-
tual event occur.51 Although there are higher levels of will than intent (for
example, lust, longing, or desire), in criminal law intent has been accepted
as the highest level of will required for the imposition of criminal liabil-
ity.52 Consequently, no offense requires a higher level of will than intent,
and if intent is proven, it satisfies all other levels of will, including reck-
lessness (indifference and rashness).
We must distinguish between aware will and unaware will. Unaware
will is an internal urge, impulse, or instinct of which the person is not
aware and which is naturally uncontrollable. Individuals cannot control
their will unless they are aware of it. Being aware of one's will does not
guarantee the ability to control it; but controlling the will requires
awareness of it and the activation of conscious processes in the mind to
cause the relevant activity to start, cease, or continue without interference.
Imposing criminal liability based on intent requires an individual who is
aware of his will so that he may control it.
Intent does not require some abstract aware will. The aware will must
be focused on certain targets: results, motives, or purposes. Intent is an
aware will focused on these targets. For example, the murderer’s intent is
an aware will focused on causing the victim’s death. For the intent to be
relevant to the imposition of criminal liability, this will must exist simul-
taneously with the commission of the conduct and must accompany it. If
a person kills another by mistake, and then, after the victim’s death, devel-
ops the will to kill the victim, this is not intent. Only when the murder is
committed simultaneously with the existence of the relevant will can the
will be considered intent.
Proving intent is much more difficult than proving awareness. Al-
though both are internal processes of the human mind, intent relates to
future factual situations, and awareness relates to current facts. Aware-
ness is rational and realistic, but intent is not necessarily so. For example,
a person may have the intention of becoming a fish, but she cannot have
an awareness of being a fish. Because of the serious difficulties in proving
intent, criminal law developed evidentiary substitutes for it. The most
common substitute is the foreseeability rule (dolus indirectus), a legal pre-
sumption designed to prove the existence of intent.
According to the foreseeability rule presumption, the offender is pre-
sumed to intend the occurrence of the results if during commission of the
conduct, with full awareness, he has foreseen the occurrence of the results
as a highly probable consequence of that conduct.
Consider the example of chess-playing computers and human chess
players. If we can say that human chess players have the intention of win-
ning chess games, it seems that we can say the same thing about ai chess
players.
Analysis of their course of conduct in the relevant situations is entirely
consistent with the foreseeability rule presumption.
Any entity, human or ai, that examines several options for conduct
and then makes an aware decision to commit one of them, while assessing
that a given factual event would result from this conduct with a high
degree of probability, is considered to foresee the occurrence of that
factual event. In the case of a chess game, this seems
quite normal to us. But there is no substantial difference, in this context,
between playing chess for the purpose of winning and committing any
other conduct for the purpose of bringing about some result. If the result
and the conduct together form a criminal offense, we enter the territory
of criminal law.
When a computer assesses the probability that a factual event (win-
ning a chess game, for example, or killing or injuring a person) will result
from its conduct and chooses accordingly to commit the relevant conduct
(moving chess pieces on the board, pulling a trigger, or moving its hydrau-
lic arm in the direction of a human body), the computer meets the required
conditions for the foreseeability rule presumption. The computer is there-
fore presumed to have the intention that the results should occur. This is
exactly how the court examines the offender’s intent in most cases, when
the offender does not confess.
One may ask, what is considered to be a very high probability in this
context? The answer is the same as for humans, the only difference being
that the computer can assess the probability more accurately than a human
can. For example, one points a loaded gun at someone’s head. A human
shooter evaluates that the probability of the victim’s death is high as a
result of pulling the trigger, but most humans are incapable of assessing ex-
actly how high the probability is. If the entity is a computer, it can assess
the exact probability of occurrence based on the factual data to which it
is exposed (the distance of the victim from the gun, the direction and
velocity of the wind, the mechanical condition of the gun, and so on).
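A minimal sketch may illustrate this; the probability model and all of its numbers are invented, and what counts as a "very high" probability remains a legal question.

    # Invented sketch of an ai shooter's probability assessment feeding the
    # foreseeability rule presumption.

    def death_probability(distance_m, wind_speed_ms, gun_reliability):
        # Toy estimate: closer range, calmer wind, and a reliable gun raise
        # the probability that pulling the trigger kills the victim.
        p = gun_reliability * max(0.0, 1.0 - distance_m / 100.0 - wind_speed_ms / 50.0)
        return min(1.0, p)

    HIGH_PROBABILITY = 0.8

    p = death_probability(distance_m=2.0, wind_speed_ms=1.0, gun_reliability=0.99)
    aware_of_conduct = True  # the system's choice to pull the trigger is logged

    # Awareness of the conduct plus an assessed high probability of the
    # result: the conditions of the presumption are met.
    presumption_applies = aware_of_conduct and p >= HIGH_PROBABILITY
    print(f"p(death) = {p:.2f}; presumption applies: {presumption_applies}")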
If the probability is assessed to be high, and the computer acts ac-
cordingly, committing with full awareness a conduct that advances the
relevant factual event, the conditions of the presumption are met. As
already noted, ai technology has the capability to consolidate awareness
of factual data.58 Commission of the conduct is itself factual data; there-
fore ai technology can meet both conditions needed to prove the foresee-
ability rule presumption. This presumption is considered an absolute
legal presumption (praesumptio juris et de jure).
Whenever the decision maker is an ai entity, its awareness of the pos-
sibility of the occurrence of the event is monitored, together with the
factors that were taken into consideration
in the decision-making process. This data provides direct proof of the in-
difference of the ai entity. Indifference can also be proven through the
foreseeability rule presumption, as noted earlier.
Rashness is the lower level of recklessness. It consists of aware volition
for the relevant factual event not to occur, combined with the commis-
sion of aware and unreasonable conduct that causes it to occur. Al-
though the occurrence of the factual event is undesired by the rash person,
she conducts herself with unreasonable risk so that the event eventually
occurs. Rashness is considered a degree of will, because if the rash of-
fender had wanted, at any price, for the event not to occur, she would not
have taken unreasonable risks in her conduct. This is the major reason that
rashness is part of the volitive aspect of mens rea. Otherwise, negative will
alone does not justify criminal liability.
For example, someone drives on a narrow road behind a slow truck.
It is a two-way road divided by a solid white line. The driver is in a hurry
and eventually decides to pass the truck, crossing the solid line. She does
not intend to kill anyone; all she wants is to pass the truck. But a motor-
cyclist happens to come from the opposite direction. The driver is unable
to avoid the motorcyclist, and kills him. If the driver had wanted the mo-
torcyclist to be dead, she would have been criminally liable for murder,
but this was not the case. Nor would it be true to say that she was indiffer-
ent to the death of the motorcyclist. She was rash, however. She did not
want to hit the motorcycle, but she took unreasonable risks that resulted
in death. She is therefore criminally liable for manslaughter.
In the context of criminal law, rashness requires awareness. The of-
fender is required to be aware of the possible options, to prefer that the
resulting event not occur, and nevertheless to commit conduct that takes
an unreasonable risk of causing the event.64 The decision-
making process of strong ai technology is based on assessing probabilities,
as we have discussed. When the computer makes a decision to act in a
certain way, but it does not weigh one of the relevant factors as sufficiently
significant, it is considered to be rash regarding the occurrence of a given
factual event. In our example, if the car driver were replaced by a driving
computer, and if the computer calculated the probability of colliding with
the motorcycle by crossing the solid line to be low, the decision to pass
would be considered rash.
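The same example can be sketched in code; the hazard factors, weights, and risk tolerance are invented, and the miscalibrated weight is precisely what makes the decision rash.

    # Invented sketch of the driving computer's rash decision: it does not
    # want a collision, but the oncoming-traffic factor carries too little
    # weight, so the computed risk comes out low and the risk is taken.

    hazards = {"oncoming_traffic": 0.9, "limited_visibility": 0.3}
    weights = {"oncoming_traffic": 0.1,   # miscalibrated: far too low
               "limited_visibility": 0.2}

    collision_risk = sum(hazards[h] * weights[h] for h in hazards)  # 0.15

    RISK_TOLERANCE = 0.2
    wants_collision = False  # volition: the event is undesired

    if not wants_collision and collision_risk <= RISK_TOLERANCE:
        # An unreasonable risk taken despite negative will: rashness.
        print(f"cross the solid line and pass (computed risk {collision_risk:.2f})")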
As already noted, complicated decision-making processes in ai are
characterized by many factors to be considered. In situations like this,
humans sometimes miscalculate the weight of some factors. So do com-
puters. Humans reinforce their decisions by hopes and beliefs, but com-
puters do not use such methods. Some computers are programmed to
weigh certain factors in a certain way, but strong ai technology can learn
to weigh factors correctly and accurately. Again, this learning by strong
ai is called machine learning.
Whenever the decision maker is an ai entity, the decision-making pro-
cess is monitored; therefore, there is no shortage of evidence in proving
rashness on the part of an ai entity regarding the occurrence of a given fac-
tual event. The ai entity’s awareness of the possibility of the occurrence of
the event is monitored, together with the factors that were taken into con-
sideration in the decision-making process, and also the actual weight they
were assigned in the given decision. This data provides direct proof of
the rashness of the ai entity. As noted earlier, rashness can also be proven
through the foreseeability rule presumption.
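Such a monitored record can be imagined as a structured log entry; the field names below are hypothetical, and the point is only that awareness, factors, and their actual weights are all recorded and available as evidence.

    import json
    from datetime import datetime, timezone

    # Hypothetical decision-log entry of the kind a court could use to
    # reconstruct awareness, the factors considered, and their weights.
    log_entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": "pass_truck",
        "aware_of_possible_event": True,  # cognition: possibility foreseen
        "event_desired": False,           # volition: negative will
        "factors": {"oncoming_traffic": 0.9, "limited_visibility": 0.3},
        "weights": {"oncoming_traffic": 0.1, "limited_visibility": 0.2},
        "computed_risk": 0.15,
    }
    print(json.dumps(log_entry, indent=2))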
Therefore, all components of the volitive aspect of mens rea apply to
ai technology and can be proven in court. So the question is, who is to be
held criminally liable for the commission of this type of offense?
The factual and mental elements are neutral in this context, and do
not necessarily have evil or good content.67 Meeting their requirements is
much more “technical” than the detection of evil would be. For example,
we prohibit murder. Murder is causing the death of a human with the
awareness and intent of causing death. If an individual factually causes
another person’s death, the factual element requirement is fulfilled. If the
conduct was committed with full awareness and intent, the mental ele-
ment is also fulfilled, and the individual is criminally liable for murder,
unless one of the general defenses is applicable (for example, self-defense or
insanity). The reason for the murder is immaterial for the imposition of
criminal liability, and it makes no difference whether the murder was com-
mitted out of mercy (euthanasia) or out of villainy.
This is the way that criminal liability is imposed on human offenders.
If we embrace this standard in relation to ai entities, we can impose crimi-
nal liability on them as well. This is the basic idea behind the criminal
liability of ai entities, and it differs from their moral accountability, social
responsibility, or even civil legal personhood.68 The narrow definition of
criminal liability makes it possible for the ai entity to become subject to
criminal law. Nevertheless, the reader may feel that something is still miss-
ing in this analysis, that perhaps it falls short. We try to refute these feel-
ings using rational arguments.
One such feeling may be that the capacity of an ai entity to follow a
program is not sufficient to enable the system to make moral judgments
and exercise discretion, although the program may contain a highly elabo-
rate and complex system of rules.69 This feeling may relate eventually to
the moral choice of the offender. The deeper argument is that no formal
system can adequately make the moral choices that may confront an of-
fender. Two answers apply to this argument.
First, it is not certain that formal systems are morally blind. There are
many types of morality and moral values.70 Teleological morality, such as
utilitarianism, deals with the utility values of conduct. These values may
be measured, compared, and decided upon based on quantitative compari-
sons. For example, an ai robot controls a heavy wagon that malfunctions,
leaving two possible paths for the wagon to follow. The robot calculates
the probabilities, and determines that following one path will result in the
death of one person, and following the other path will result in the death
of fifty people. Teleological morality would direct any human to choose
the first path, and the person who followed that path would be considered
a moral person. The robot can be directed to act in accordance with this
morality. The robot’s morality is dictated by its program, which makes it
evaluate the consequences of each possible path. Do humans not act the
same way?
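Under these assumptions, the teleological calculation is a one-line minimization; the casualty figures are those of the example above.

    # The runaway-wagon example as a utilitarian calculation: choose the
    # path with the smaller expected number of deaths.
    paths = {"path_1": 1, "path_2": 50}  # expected deaths on each path

    chosen = min(paths, key=paths.get)
    print(f"teleological choice: {chosen} ({paths[chosen]} expected death(s))")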
Second, even if the first answer is not convincing, and formal sys-
tems are considered incapable of morality of any kind, criminal liability
is still neither dependent on any morality nor fed by it. Morality is not
even a condition for the imposition of criminal liability. Criminal courts
do not assess the morality of human offenders when imposing criminal li-
ability. The offender may be highly moral in the court’s view, and still be
convicted, as in the case of euthanasia, for example. Alternatively, the of-
fender may be very immoral in the eyes of the court, and still be acquitted,
as in the case of adultery. Given that no morality of any type is required
for the imposition of criminal liability on human offenders, it should not
be a consideration when ai entities are involved, either.
Some may feel that ai entities are not humans, and that criminal li-
ability is intended for humans only because it involves constitutional
human rights that only humans may have.71 In this context, it is immate-
rial whether the constitutional rights refer to substantial rights or to pro-
cedural rights. The answer to this argument is that perhaps criminal law
was originally designed for humans, but since the seventeenth century, it
has not been applied to humans exclusively, as we have discussed.72 Cor-
porations, which are nonhuman entities, are also subject to criminal law,
and not only to criminal law. For the past four centuries, criminal liability
and punishments have been imposed on corporations, albeit with some
occasional adjustments.
Others may argue that although corporations have been recognized as
subject to criminal law, the personhood of ai entities should not be recog-
nized because humans have no interest in recognizing it.73 This argument
cannot be considered applicable in an analytical legal discussion on the
criminal liability of ai entities. There are many cases in everyday life in
which the imposition of criminal liability does not benefit human soci-
ety, and still criminal liability is imposed. A famous example comes from
Immanuel Kant, who is interpreted as believing that if the last person on
Earth is an offender, he should be punished even if it leads to the extinc-
tion of the human race.74 Human benefit has not been recognized as a valid
component of criminal liability.
Finally, some may feel that the concept of awareness presented here
is too shallow for ai entities to be called into account, blamed, or faulted
for the factual harm they may cause.75 This feeling is based on confusion:
the concept of awareness and consciousness in psychology, philosophy,
theology, and cognitive sciences is confused with the concept of awareness
in criminal law.
The argument of the worker having exceeded his authority (ultra vires) was
rejected. Thus, even if the worker acted in contradiction to the specific
order of a superior, the superior was still liable for the worker’s actions if
they were carried out in the general course of business. This approach was
developed in tort law, but the English courts did not restrict it to tort law,
and applied it to criminal law as well.86
Thus, vicarious liability was developed under specific social condi-
tions, under which only individuals of the upper classes had the required
competence to be considered legal entities. In Roman law, only the father
of the family could become a prosecutor, plaintiff, or defendant. When the
concept of social classes began to fade, in the nineteenth century, vicarious
liability faded away with it.
In criminal law at the beginning of the nineteenth century, the cases
of vicarious liability fell into three main types. The first was classic com-
plicity. If the relations between the parties were based on real cooperation,
they were classified as joint perpetration even if they had an employer-
employee or some other hierarchical relation. But if within the hierarchi-
cal relations, information gaps between the parties or the use of power
made one party lose its ability to commit an aware and willed offense, the
act was no longer considered joint perpetration. The party that had lost the
ability to commit an aware and willed offense was considered an innocent
agent, functioning as a mere instrument in the hands of the other party.
The innocent agent was not criminally liable, and the offense was con-
sidered perpetration-through-another, where another party had full crimi-
nal liability for the actions of the innocent agent.87 This was the basis for
the emergence of perpetration-through-another from vicarious liability,
and it became the second type of criminal liability derived from vicarious
liability. The third type accounted for the core of the original vicarious
liability. In most modern legal systems, this type is embodied in specific
offenses and not in the general formation of criminal liability. Since the
emergence of the modern law of complicity, the original vicarious liability
is no longer considered a legitimate form of criminal liability.
Since the end of the nineteenth century and the beginning of the twen-
tieth, the concept of the innocent agent has been widened to include par-
ties with no hierarchical relations between them. Whenever a party acts
without awareness of its actions or without will, it is considered an inno-
cent agent. The acts of an innocent agent could be the results of another
party’s initiative (for example, using the innocent agent through threats,
coercion, misleading, or lies); or the results of another party’s abuse of an
existing factual situation that eliminates the awareness or will of the in-
nocent agent.
Consider, for example, joint perpetrators who plan a robbery, in the
course of which one of them kills a guard. The other perpetrator may
not satisfy either the factual or the mental element requirements of homi-
cide, because he did not physically commit it, nor was he aware of it. The
homicide was not part of the perpetrators’ joint criminal plan. The ques-
tion may also be expanded to the inciters and accessories to the robbery, if any.
In general, the question of the probable consequence liability refers to the
criminal liability of a person for unplanned offenses that were committed
by another. Before applying the probable consequence liability to human-ai
offenses, we must explore its features.106
There are two extreme opposite approaches to this general ques-
tion. The first calls for imposition of full criminal liability on all par-
ties; the second calls for broad exemption from criminal liability for any
party that does not meet the factual and mental element requirements
of the unplanned offense. The first is considered problematic for over-
criminalization, while the second is considered problematic for under-
criminalization. Consequently, moderate approaches were developed and
embraced by various legal systems.
The first extreme approach does not consider at all the factual and
mental elements of the unplanned offense. This approach originates in
Roman civil law, which has been adapted to criminal cases by several legal
systems. According to this approach, any involvement in the delinquent
event is considered to include criminal liability for any further delinquent
event derived from it (versanti in re illicita imputantur omnia quae se-
quuntur ex delicto).107 This extreme approach requires neither factual nor
mental elements for the unplanned offense from any party other than the
one that actually committed the offense and thereby accounts for both
factual and mental elements.
According to this extreme approach, the criminal liability for the un-
planned offense is an automatic derivative. The basic rationale of this ap-
proach is deterrence of potential offenders from participating in future
criminal enterprises by widening the criminal liability to include not only
the planned offenses, but the unplanned ones as well. The potential party
must realize that his personal criminal liability may not be restricted to
specific types of offenses, and that he may be criminally liable for all ex-
pected and unexpected developments that are derived directly or indi-
rectly from his conduct. Potential parties are expected to be deterred and
avoid involvement in delinquent acts.
This approach does not distinguish between various forms of involve-
ment in the delinquent event. The criminal liability for the unplanned of-
fense is imposed regardless of the role of the offender in the commission
of the planned offense as perpetrator, inciter, or accessory. The criminal
liability imposed for the unplanned offense does not depend on meeting
factual and mental element requirements by the parties. If the criminal lia-
bility for the unplanned offense is imposed on all parties of the original en-
terprise, including those who could have no control over the commission
of the unplanned offense, the deterrent value of this approach is extreme.
Prospectively, this approach educates individuals to keep away from
involvement in delinquent events, regardless of the specific role they may
potentially play in the commission of the offense. Any deviation from the
criminal plan, even if not under the direct control of the party, is a basis for
criminal liability for all individuals involved, as if it had been fully per-
petrated by all parties. The effect of this extreme approach can be broad
and encompassing. Parties to another (third) offense, different from the
unplanned offense, who were not direct parties to the unplanned offense,
may be criminally liable for the unplanned offense as well, if there is the
slightest connection between the offenses. The criminal liability for the
unplanned offense is uniform for all parties and requires no factual and
mental elements. Most Western legal systems consider such a deterrent
approach too extreme, and have rejected it.108
The second extreme approach is the exact opposite of the first, and
focuses on the factual and mental elements of the unplanned offense. Ac-
cording to this approach, to impose criminal liability for the unplanned
offense, both factual and mental element requirements must be met by
each party. Only if both requirements of the unplanned offense are met by
a given party is it legitimate to impose criminal liability on it. Naturally,
because the unplanned offense was not planned, it is most unlikely that
any of the parties would be criminally liable for that offense, except for
the party that actually committed it. This extreme approach ignores the
social endangerment inherent in the criminal enterprise, which includes
not only planned offenses, but unplanned ones as well.
In light of this extreme approach, offenders have no incentive to re-
strict their involvement in the delinquent event. Prospectively, any party
that wishes to escape criminal liability for the probable consequences of
the criminal plan needs only to avoid participation in the factual aspect of
any further offense. Such offenders would tend to involve more parties in
the commission of the offense in order to increase the chance for commit-
ting further offenses, and therefore most modern legal systems prefer not
to adopt this extreme approach.
Several moderate approaches have been developed to meet the dif-
ficulties raised by these two extreme approaches. The core of the mod-
erate approaches lies in the creation of probable consequence liability,
that is, criminal liability for the unplanned offense, whose commission is
the probable consequence of the planned original offense. Probable con-
sequence refers both to mental probability from the point of view of the
party participating in the offense, and to the factual consequence resulting
from the planned offense. Thus, probable consequence liability generally
requires two principal conditions to impose criminal liability for the un-
planned offense: a factual condition ("consequence") and a mental con-
dition ("probable"). The factual condition ("consequence") requires that
the unplanned offense derive factually from the planned offense, which
serves as the causal background for the unplanned offense, as it inci-
dentally derives from it.109
The mental condition (“probable”) requires that the occurrence of the
unplanned offense be probable in the eyes of the relevant party, meaning
that it could have been foreseen and reasonably predicted. Some legal sys-
tems prefer to examine the actual and subjective foreseeability (the party
has actually and subjectively foreseen the occurrence of the unplanned
offense), whereas others prefer to evaluate the ability to foresee through
an objective standard of reasonability (the party has not actually foreseen
the occurrence of the unplanned offense, but any reasonable person in his
state could have). Actual foreseeability parallels the subjective mens rea,
whereas objective foreseeability parallels objective negligence.
For example, A and B conspire to rob a bank. A is to break into the
safe, and B is to watch the guard. They execute the plan, and B shoots
and kills the guard while A breaks into the safe. In some legal systems,
A is criminally liable for the killing only if A had actually foreseen the
homicide, and in others, if a reasonable person could have foreseen the
forthcoming homicide in these circumstances. Consequently, if the rel-
evant accomplice did not actually foresee the unplanned offense, or any
reasonable person in the same condition could not have foreseen it, he or
she is not criminally liable for the unplanned offense.
This type of approach is considered moderate because it provides an-
swers to the social endangerment problem, and at the same time it has a
positive relation with the factual and mental elements of criminal liabil-
ity. The factual and mental conditions are the starting terms and minimal
requirements for the imposition of criminal liability for the unplanned
offense.
Legal systems differ on the legal consequences of probable conse-
quence liability. The main factor in these differences is the mental condi-
tion. Some legal systems require negligence, but others require mens rea,
and the consequences may be both legally and socially different. Moder-
ate approaches that are close to the extreme approach, which holds that
all accomplices are criminally liable for the unplanned offense (versanti
in re illicita imputantur omnia quae sequuntur ex delicto), impose full
criminal liability for the unplanned offense if both factual and mental
conditions are met. According to these approaches, the party is criminally
liable for unplanned mens rea offenses even if he or she may have been
merely negligent.
More lenient moderate approaches do not impose full criminal liabil-
ity on all the parties for the unplanned offense. These approaches can
show more leniency with respect to the mental element, and match the
actual mental element of the party to the type of offense. Thus, the negli-
gent party in the unplanned offense is criminally liable for a negligence
offense, whereas the party who is aware is criminally liable for a mens rea
offense.110 For example, A, B, and C plan to commit robbery. The robbery
is executed, and C shoots and kills the guard. A foresaw this, but B did not,
although a reasonable person would have foreseen this outcome under the
circumstances.
All three are criminally liable for robbery as joint perpetrators. C is
criminally liable for murder, which is a mens rea offense. A, who acted
under mens rea, is criminally liable for manslaughter or murder, both
mens rea offenses. But B was negligent regarding the homicide, and is
therefore criminally liable for negligent homicide. Negligent offenders are
criminally liable for no more than negligence offenses, whereas other of-
fenders, who meet mens rea requirements, are criminally liable for mens
rea offenses.
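This lenient moderate approach can be restated as a mapping from each party's mental state to the offense charged; a sketch follows, in which the predicate names are assumptions standing in for judicial findings.

    # The lenient moderate approach applied to the A, B, and C example:
    # each party's liability matches that party's own mental element.

    def liability_for_unplanned_homicide(party: dict) -> str:
        if party["committed_homicide"]:
            return "murder (mens rea offense)"
        if party["actually_foresaw"]:
            return "manslaughter or murder (mens rea offense)"
        if party["reasonably_foreseeable"]:
            return "negligent homicide (negligence offense)"
        return "no liability for the homicide"

    parties = {
        "A": {"committed_homicide": False, "actually_foresaw": True,
              "reasonably_foreseeable": True},
        "B": {"committed_homicide": False, "actually_foresaw": False,
              "reasonably_foreseeable": True},
        "C": {"committed_homicide": True, "actually_foresaw": True,
              "reasonably_foreseeable": True},
    }
    for name, party in parties.items():
        print(name, "->", liability_for_unplanned_homicide(party))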
American criminal law imposes full criminal liability for the unplanned
offense equally on all parties of the planned offense111 as long as the un-
planned offense is the probable consequence of the planned one.112 Ap-
propriate legislation has been enacted to accept the probable consequence
liability, and has been considered constitutionally valid.113 Moreover, in
the context of homicide, American law incriminates incidental unplanned
homicide committed in the course of the commission of another planned
offense as murder, even if the mental element of the parties was not ad-
equate for murder.114
By comparison, European-Continental legal systems and English
common law impose criminal liability for the unplanned offense equally
on all parties of the planned offense.115 The English116 and European-
Continental moderate approaches are closer to the first extreme approach.117
For the applicability of the probable consequence liability to human-ai
offenses, we must distinguish between two types of cases, as shown in
our opening examples of computer programmers at the beginning of this
section. The first type refers to cases in which the programmer designed
the ai system to commit a certain offense, but the system exceeded the pro-
grammer's plan quantitatively (committing more offenses of the same
type), qualitatively (committing offenses of different types), or in both
respects. The second type refers to cases in which the programmer did not
design the ai system to commit any offense, but the system committed an
offense nevertheless.
In the first type of cases, criminal liability is divided into the liabil-
ity for the planned offense and for the unplanned one. If the program-
mer designed the system to commit a certain offense, this is perpetration-
through-another of that offense, at most. The programmer instructed the
system what it should do, therefore he used the system instrumentally for
the commission of the offense. In this case, the programmer is alone re-
sponsible for the offense, as already discussed.118 For this particular crimi-
nal liability, there is no difference between an ai system, some other com-
puterized system, a screwdriver, and any innocent human agent.
In the case of the exceeded offenses, a different approach is required.
If the ai system is strong, and is capable of computing the commission of
an additional offense, the ai system is considered criminally liable for that
offense according to the standard rules of criminal liability, as described
earlier.119 This completes the criminal liability of the ai system, and the
criminal liability of the programmer is determined according to the prob-
able consequence liability already described. Thus, if from the program-
mer’s point of view the additional offense was a probable consequence of
the planned offense, then criminal liability is imposed on the program-
mer for the unplanned offense in addition to the criminal liability for the
planned offense.
If the ai system is not considered strong, and is not capable of com-
puting the commission of the additional offense, the ai system cannot be
considered criminally liable for the additional offense. Under these condi-
tions, the ai system is considered an innocent agent. The criminal liability
for the additional offense remains the programmer’s alone, on the same
basis of probable consequence liability already described. Thus, if the ad-
ditional offense was a probable consequence of the planned offense from
the programmer’s point of view, the programmer is criminally liable for
the unplanned offense in addition to his criminal liability for the planned
offense.
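The division of liability in this first type of cases can be sketched as a simple decision function; the boolean inputs stand in for the legal determinations just described.

    # Sketch: who is liable for the exceeded (unplanned) offense when the
    # programmer designed the ai system to commit a planned offense.

    def liability_for_exceeded_offense(system_is_strong: bool,
                                       probable_consequence: bool) -> list:
        liable = []
        if system_is_strong:
            # A strong ai system capable of meeting the offense requirements
            # is criminally liable under the standard rules.
            liable.append("ai system (standard criminal liability)")
        # Strong system or innocent agent, the programmer's liability for
        # the unplanned offense rests on probable consequence liability.
        if probable_consequence:
            liable.append("programmer (probable consequence liability, in "
                          "addition to liability for the planned offense)")
        return liable

    print(liability_for_exceeded_offense(system_is_strong=True,
                                         probable_consequence=True))
    print(liability_for_exceeded_offense(system_is_strong=False,
                                         probable_consequence=True))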
In the second type of cases, the programmer has no intention to commit
any offense. From the programmer’s point of view, the occurrence of the
offense is no more than an unwilled accident. Because the programmer's
initiative was not criminal, the probable consequence liability is inappro-
priate. The presence of a planned offense is crucial for the imposition of
probable consequence liability, as discussed, because the probable con-
sequence liability is intended to address unplanned developments of a
planned delinquent event and serve as a deterrent against participation in
delinquent activities.
When the programmer’s starting point is not delinquent, and from his
point of view the occurrence of the offense is accidental, then deterrence
is irrelevant.
If the robot is incapable of meeting the requirements of homicide of-
fenses, it does not form a machina sapiens criminalis in this context, and
it is not criminally liable for the homicide. The only criminal liability for
the homicide would be the programmer’s and that of other related parties
(users, the factory, and so on). Here, programmer refers not only to the
specific person or persons who actually programmed the system, but to
all related persons, including the company that manufactured the robot.
The programmer’s criminal liability is determined based on his role
in bringing about the homicide. If the programmer instrumentally used
the robot to kill the worker by designing it to operate in this way, he is
a perpetrator-through-another of the homicide. This is also the case if
the robot has no advanced ai capabilities for meeting the requirements
of the offense. In these cases, the programmer and the related parties are
alone criminally liable for the homicide, and their exact criminal liabil-
ity for the homicide, whether murder or manslaughter, is determined by
the manner in which they met the mental element requirements of these
offenses.
If the programmer designed the robot to commit an offense (any of-
fense) other than the homicide of the worker, and the ai system of the
robot continued the delinquent activity through an unplanned offense (the
homicide of the worker), then the programmer’s criminal liability for the
homicide is determined based on probable consequence liability. The ro-
bot’s criminal liability in this case is examined according to its capability
to meet the requirements of the given offense. If the programmer did not
design the robot to commit any offense, but the unplanned homicide oc-
curred, then the robot’s criminal liability is examined based on its capabil-
ity to meet the requirements for homicide, and the programmer’s criminal
liability is examined according to the standards of negligence, as we will
discuss in the next chapter.
3
ai Criminal Liability for Negligence Offenses
regarding the results of the offense. This development has been embraced
by American courts.20
How, then, can negligence function as a mental element in criminal
law if the offender is not even required to be aware of his or her conduct?
Some scholars have proposed excluding negligence from criminal law and
leaving it to tort law or other civil proceedings.21 But the justification of neg-
ligence as a mental element focuses on its function as an omission of aware-
ness. In exactly the same way that act and omission are considered equivalent
for the imposition of criminal liability, as discussed earlier in relation to
the factual element,22 both awareness and the omission of awareness
can be considered a basis for meeting the mental element requirement.
Following the analogy with the factual element, negligence does not
parallel inaction, but parallels omission. If a person was simply not aware
of the factual element components, and nothing more, she is not consid-
ered negligent, but innocent. Omitting to be aware means that the person
was not aware although a reasonable person could and should have been;
that is, the individual is considered to not be using her existing capabili-
ties of forming awareness. If the individual was not aware but was capable
of being aware, she also had the duty to be aware (non scire quod scire
debemus et possumus culpa est).
Negligence does not incriminate individuals who are incapable of
forming awareness, but only those who failed to use their existing capa-
bilities to do so. Negligence does not incriminate the blind person for not
seeing, but only people who have the ability to see but failed to use this
ability. Wrong decisions are a common part of daily human life, so neg-
ligence does not apply to situations of this type. Taking risks is also part
of human life, and therefore negligence is not intended to inhibit it, and
indeed society encourages taking risks in many situations. Negligence ap-
plies to taking unreasonable risks.
If people were to take no risks at all, then human development would
stop in its tracks. If our ancestors had taken no risks, we would still be
staring at the burning branch struck by lightning, afraid to reach out, grab
it, and use it for our needs. Society constantly pushes people to take risks,
but reasonable ones. The question is, how can we identify the unreasonable
risks and distinguish them from the reasonable ones, which are legitimate?
For example, scientists propose to develop an advanced device that
would significantly ease our daily lives. It is comfortable, fast, elegant,
and accessible, but using it may cause the deaths of about 30,000 people
each year in the United States alone. Would developing such a device be
considered a reasonable risk?
The core of the negligence template in relation to the factual element com-
ponents is expressed by lack of awareness of the factual component de-
spite the capability to form such awareness, when a reasonable person
could and should have been aware of that component. Lack of awareness
is naturally the opposite of awareness, which is required by mens rea, as
discussed earlier.27 Consequently, for a person to be considered aware of
certain factual data, two cumulative conditions are required: (1) absorb-
ing the factual data through the senses; and (2) creating a relevant general
image with regard to this data in the brain. If either of these conditions is
missing, the person is not considered to be aware.
Lack of awareness may be achieved by the absence of at least one of
these two conditions. If the factual data has not been absorbed by the
senses, or if it was absorbed but no relevant general image has been cre-
ated, then the situation is considered one of lack of awareness of that fac-
tual data. Awareness is a binary situation in the context of criminal law;
therefore, no partial awareness is recognized. If portions of the awareness
process are present but the process has not been accomplished in full, the
person is considered to be unaware of the relevant factual data. This is true
for both human and ai offenders.
For lack of awareness to be considered omission of awareness, it must
occur despite the capability of the offender to form awareness, when a rea-
sonable person could and should have been aware. These are two separate
conditions: (1) possessing the cognitive capabilities needed to consolidate
awareness; and (2) a reasonable person could and should have been aware
of the factual data. The first condition has to do with the offender’s physi-
cal capabilities. If the offender lacks these capabilities, regardless of the
reason, she cannot be considered negligent.
Machine learning enables the ai system to change the equation from time
to time.
Indeed, effective machine learning should cause changes in the equa-
tion almost every time a given case is analyzed. This is what happens to
our image of the world as our life experience becomes richer and broadens.
Without machine learning, the equation remains constant and the system
becomes ineffective. Expert systems that are not capable of machine learn-
ing are no different from human experts who insist on not updating their
own knowledge. Machine learning is essential for the ai system to con-
tinue developing.
When the ai system is activated for the first time, the equation and its
terms are programmed by human experts who determine what is a reason-
able course of conduct in individual cases. After analyzing a few cases,
the system begins to identify exceptions, wider and narrower definitions,
new factors and new connections between existing ones, and so on. The
ai system generalizes the knowledge absorbed from individual cases by
reformulating the relevant equations. We use the term equation to describe
the relevant algorithm, but it is not necessarily an equation in the mathe-
matical sense.
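A minimal sketch of such a reformulation, in the spirit of a perceptron-style online update, may help; the factors, labels, and learning rate are invented.

    # Machine learning reformulating the "equation": after each case, the
    # factor weights are nudged toward the observed outcome.

    weights = {"fever": 0.5, "cough": 0.5}  # initial, expert-set equation
    LEARNING_RATE = 0.1

    def predict(case: dict) -> int:
        score = sum(weights[f] * case[f] for f in weights)
        return 1 if score >= 0.5 else 0     # 1 = influenza, 0 = common cold

    def learn(case: dict, label: int) -> None:
        error = label - predict(case)
        for f in weights:                   # reformulate the equation
            weights[f] += LEARNING_RATE * error * case[f]

    cases = [({"fever": 1.0, "cough": 0.2}, 1),
             ({"fever": 0.1, "cough": 0.9}, 0)]
    for case, label in cases:
        learn(case, label)
    print(weights)  # the equation has changed with accumulated experience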
Reformulating equations raises the possibility that the ai entity will
make different decisions in the future than it made in the past. This pro-
cess of induction is at the core of machine learning, and changes in the
equation produce a different sphere of correct decisions each time. For ex-
ample, if on its first activation, a medical expert system is given a list of
symptoms of both common cold and influenza, it produces a diagnosis
based on those symptoms alone. But after being exposed to more cases, the
system learns to take into account other symptoms that may be essential
for distinguishing between the two conditions.
If the system is required to recommend medical treatments, these will
differ for different diagnoses. At times, the expert system will not be “sure”
because the symptoms can match more than one disease, and the system
will assess the probabilities based on measurement and analysis of the
various factors. For example, the expert system may determine that there
is a probability of 47 percent that the patient has a common cold, and of
53 percent that she has influenza. Processing these probabilities may be
the source of negligence on the part of ai systems and of mistaken conclu-
sions, whether the system is "sure" or "unsure": the system may be "sure"
of a mistaken conclusion, and it may also assess the probabilities them-
selves mistakenly.
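The "sure"/"unsure" distinction can be illustrated with the probabilities from the example; the confidence threshold is invented.

    # Diagnosis probabilities from the example: 47 percent common cold
    # versus 53 percent influenza.
    probabilities = {"common cold": 0.47, "influenza": 0.53}
    SURE_THRESHOLD = 0.9

    best = max(probabilities, key=probabilities.get)
    if probabilities[best] >= SURE_THRESHOLD:
        print(f"sure diagnosis: {best}")
    else:
        print(f"unsure: leaning toward {best} ({probabilities[best]:.0%}); "
              "both diagnoses remain plausible")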
Mistakes may be caused by inauspicious changes of the equation,
wrong factors being considered, ignorance of certain factors, or wrong
weighting of factors.
Some legal systems prefer to examine actual and subjective foresee-
ability, whereas others prefer to evaluate the ability to foresee
through an objective standard of reasonability (the party has not actu-
ally foreseen the occurrence of the unplanned offense, but any reasonable
person in this situation could have).
Actual foreseeability parallels subjective mens rea, whereas objec-
tive foreseeability parallels objective negligence. Consequently, in legal
systems that require objective foreseeability, the programmer must be at least negligent with respect to the commission of the negligence offense by the ai system for criminal liability for that offense to be imposed on him. But in legal systems that require subjective foreseeability, the programmer must be at least aware of the possibility of the commission of the negligence offense by the ai system for criminal liability for that offense to be imposed.
However, if neither subjective nor objective foreseeability of the com-
mission of the offense apply to the programmer, the probable consequence
liability is irrelevant. In this case, no criminal liability can be imposed on
the programmer, and the criminal liability of the ai system for the negli-
gence offense does not affect the programmer’s liability.
For the ai system to be negligent, it must be proven that the system was capable of forming awareness of the factual data relevant to the diagnosis, and that a reasonable person could have formed such aware-
ness. The first condition can be proven based on the characteristics of the
system and according to the definition of awareness in criminal law (per-
ception by the senses of factual data and its understanding), as discussed
earlier.41 If the system has such capability, and if that capability has been
activated, then the system formed awareness.
If the system had been aware of the bacterial infection and nevertheless
produced an inappropriate diagnosis and recommendations, this would
not have been negligence but mens rea, and the homicide offense would
have been manslaughter or murder. Otherwise, the system meets the
second condition for negligence. For the ai system to be criminally liable
for negligent homicide, its mental state must be compared with that of a
reasonable person. The subjective incarnation of the reasonable person in
this case is a reasonable medical expert system. The question is whether
the system took into consideration all the relevant factors required for
proper diagnosis, and whether they were appropriately weighed, given the
relevant circumstances of the case.
The diagnosis produced by the system did not necessarily have to be
correct for the system to be exempt from liability for negligence. Not every
incorrect diagnosis is considered negligent. At issue here is the system’s
judgment. If it is proven that the system’s judgment was correct (the cal-
culation took all factors into consideration and weighed them properly),
then the system acted reasonably. Although the diagnosis did not cor-
rectly identify the cause of the symptoms, this is still not in the realm of
negligence if the system’s judgment was reasonable. Not every mistake is
considered negligent. In this case, the court would have to examine cir-
cumstances carefully, with or without the help of medical and software
experts.
If the court reaches the conclusion that a reasonable person would
have been aware of the possibility that the symptoms were caused by a
bacterial infection, or would have considered the probability to be low that
influenza had caused the symptoms, then the system is considered negli-
gent. Otherwise, the system is exempt from criminal liability for the death
of the baby. Three other actors may be criminally liable in this case, in
addition to (but not instead of) the ai system. The criminal liability of the
ai system may affect the liability of these actors, but not necessarily.
The programmer’s criminal liability is determined by his role in the oc-
currence of the death. If the programmer instrumentally used the ai system
to kill the baby by designing it in such a way as to produce erroneous
medical diagnoses, the programmer is a perpetrator-through-another of
the homicide. This would also be the case when the ai system had no ad-
vanced ai capabilities to meet the requirements of the offense, so that the
programmer and the other related parties, but not the ai system, are the
only ones criminally liable for the homicide. Their exact criminal liabil-
ity for the homicide, whether murder or manslaughter, is determined by
whether they meet the mental element of these offenses.
If the programmer has foreseen the consequence (the patient’s death),
or if any reasonable programmer could have foreseen it, and if the ai system
is criminally liable for the homicide, the programmer may be criminally
liable for negligent homicide through probable consequence liability, if the
offense was unplanned. Generally, if the programmer did not intention-
ally design the system to commit any offense, and an unplanned homi-
cide (murder, manslaughter, or negligent homicide) occurred, the system’s
criminal liability is examined based on its capabilities and on meeting the
requirements of the relevant homicide, and the programmer’s criminal li-
ability is examined based on the standards of negligence. Here, program-
mer refers not only to the person or persons who actually programmed the
system, but to all related persons, including the company that produced
the system.
Two other actors who may be criminally liable are the users of the
system (the medical staff) and the hospital. The hospital may be criminally
liable for making the decision to use an ai system in real-life medical cases.
To determine the liability of the hospital, it is necessary to examine the
factual data about the system that was available and known to the hospital,
including the probability that the system would make mistakes. If the deci-
sion was reasonable, the hospital is not negligent. Similarly, the medical
staff must be examined to determine its criminal liability for homicide.
The factual data about the system that was available and known to the staff
is extremely important in evaluating the decision of the medical staff to
follow the system’s diagnosis and recommendations.
If the ai system was correct in the previous 1,000 cases, then the medi-
cal staff’s decision to follow its recommendations may be reasonable. But
it is up to the court to determine the sphere of reasonability in each case.
Perhaps in emergency rooms a 10 percent rate of mistakes is reasonable,
whereas in cosmetic surgery such a rate is far beyond reasonability. Thus,
criminal liability of ai systems for negligence offenses is possible, and it
does not necessarily reduce the criminal liability of other relevant entities.
4
ai Criminal Liability for Strict Liability Offenses
Under early English law, some offenses were defined as absolute liability offenses, and imposition of criminal liability for them required proof of the factual element alone.6
Absolute liability offenses were considered exceptional because they
require no mental element. In some cases Parliament intervened and re-
quired a mental element,7 and in other cases mental element requirements
were added by court rulings.8 By the mid-nineteenth century, English
courts began to consider efficiency criteria as part of criminal law in vari-
ous contexts, which gave rise to convictions on the basis of public incon-
venience. Offenders were indicted and convicted despite the fact that no
mental element was proven because of the public inconvenience caused
by the commission of the offense.9
These convictions created, in practice, an upper threshold for neg-
ligence, a type of augmented negligence, which required individuals to
follow strict guidelines and make sure they committed no offenses. This
standard of behavior is higher than the one imposed by negligence, which
requires only reasonable conduct. Strict liability offenses require more than reasonability: the offender must make sure that no offense is committed at all. These offenses show a clear preference for the public welfare over strict justice for the potential offender. Because these offenses were not considered
severe, they were expanded “for the good of all.”10
This development was considered necessary because of the legal and
social changes at the time of the first industrial revolution. For example,
the increasing number of workers in the cities caused employers to de-
grade the workers’ social conditions. Parliament intervened through social
welfare legislation, and the efficient enforcement of this legislation was
through absolute liability offenses.11 It was immaterial whether the em-
ployer knew what the proper social conditions for workers were; employ-
ers had to make sure that no violation of these conditions occurred.12 In the
twentieth century, this type of criminal liability spread to other spheres of
law as well, including traffic law.13
American criminal law accepted absolute liability as a basis for crimi-
nal liability in the mid-nineteenth century,14 ignoring previous rulings that
rejected it.15 Acceptance was restricted to petty offenses, and violations
were punished by fines, and not very severe ones at that. Similar accep-
tance occurred at about the same time in the European-Continental legal
systems, so that absolute liability in criminal law became a global phe-
nomenon.16 In the meantime, the fault element in criminal law became
much more important because of internal developments in criminal law,
and mens rea became the major and dominant requirement for mental ele-
ment in criminal law.
To refute the negligence presumption embodied in strict liability offenses, the offender must prove that he or she was unaware of the relevant facts, and that no other reasonable person could have been aware of
them under the same circumstances. This proof resembles refuting mens
rea in mens rea offenses and negligence in negligence offenses.
However, strict liability offenses are not mens rea or negligence of-
fenses, because refuting mens rea and negligence is not sufficient to pre-
vent imposition of criminal liability. The social and behavioral purpose
of these offenses is to ensure that individuals conduct themselves strictly
according to rules, and commit no offenses. That should be proven as well.
The offender must therefore prove that he or she took all reasonable mea-
sures to prevent the offense.24 Note the difference between strict liability
and negligence: to refute negligence, it is sufficient to prove that the of-
fender acted reasonably, but to refute strict liability, it is necessary to prove
that all reasonable measures were taken.25
To refute the negligence presumption of strict liability, the defendant
must positively meet both conditions by a preponderance of the evidence,
as in civil law cases. The defendant is not required to prove that these
conditions have been met beyond any reasonable doubt, but in general, it
is not sufficient to merely raise a reasonable doubt. The burden of proof
is higher than the general burden of the defendant. The possibility for the
offender to refute the presumption becomes part of the strict liability re-
quirement because it relates to the offender’s mental state. The structure
of strict liability is described schematically in table 4.1.
The modern structure of strict liability is consistent with the concept
of criminal law as a matrix of minimal requirements, described earlier,26
and it has both internal and external aspects. Internally, strict liability is
the minimal mental element requirement for each of the factual element
components. Consequently, if strict liability is proven in relation to cir-
cumstances and results but negligence is proven in relation to conduct,
that satisfies the requirement of strict liability. This means that for each
of the factual element components, at least strict liability is required, but
not exclusively strict liability. Externally, the mental element require-
ment of strict liability offenses is satisfied by at least strict liability, but
not exclusively. This means that criminal liability for strict liability of-
fenses may be imposed by proving mens rea and negligence as well as
strict liability.
Because strict liability is still considered an exception to the general
requirement of mens rea, it is an adequate mental element for relatively
light offenses. In some legal systems, strict liability has been restricted ex
ante or ex post to light offenses.27 This general structure of strict liability
is a template that contains terms derived from the mental terminology. To
explore whether ai entities are capable of meeting the strict liability re-
quirement of given offenses, we must examine these terms.
This strengthens the argument made earlier that all inner capabilities of machina
sapiens criminalis are uniform, regardless of the type of offense.
Although the capability is the same in both negligence and strict li-
ability offenses, it operates differently in the cases of negligence and strict
liability. For negligence offenses, it is required that the ai system identify
the reasonable options and make a choice between them. The practical re-
quirement is that its final choice be a reasonable option. For strict liability
offenses, it is also required that the ai system identify the reasonable op-
tions, but in addition, it must choose to exercise all of them. The choice
here is between reasonable and unreasonable, whereas in the case of neg-
ligence, the choice is among the reasonable options.
It is much easier for an ai system, under these conditions, to act reason-
ably in a strict liability context than in the context of negligence because
in strict liability situations, fewer choices are required. Accordingly, in
relation to the criminal liability of an ai system in strict liability offenses,
the court must answer three threshold questions.
If the answer to all three questions is affirmative, and this is proven beyond
any reasonable doubt, then the ai system has fulfilled the requirements of
the strict liability offense, and the ai system is presumed to be at least
negligent.
At this point, the defense has the opportunity to refute the negligence
presumption through positive evidence. After the evidence is presented,
the court must decide two further questions.
If the answer to even one of these two questions is affirmative, the neg-
ligence presumption has not been refuted, and criminal liability for the
strict liability offense is imposed. Only if the answers to both questions are
negative is the negligence presumption refuted, and no criminal liability
for the strict liability offense can be imposed on the ai system.
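The two-stage procedure just described can be summarized schematically. Because the wording of the court's three threshold questions and two refutation questions does not survive in this copy of the text, the sketch below treats them as opaque boolean answers supplied by the court.

```python
# Schematic sketch of the two-stage decision described above. The content of
# the court's three threshold questions and two refutation questions is not
# reproduced here, so they are passed in as opaque booleans.

def strict_liability(threshold_answers, refutation_answers):
    """Stage 1: all three threshold questions must be answered affirmatively,
    beyond reasonable doubt, for the negligence presumption to arise.
    Stage 2: the presumption stands unless BOTH refutation questions are
    answered in the negative."""
    if not all(threshold_answers):          # stage 1 fails: no presumption
        return "no criminal liability"
    if any(refutation_answers):             # presumption not refuted
        return "criminal liability imposed"
    return "presumption refuted: no criminal liability"

print(strict_liability([True, True, True], [False, False]))
# presumption refuted: no criminal liability
```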
In general, ai systems that are capable of forming awareness and negligence, as discussed earlier, are also capable of meeting the requirements of strict liability offenses.

Individuals often make some, but not all, possible efforts to prevent the commission of an
offense. Most of the time, this does not contradict the norms of criminal
law. For example, we may take the risk of investing our money in doubtful
stocks, and we may not take all possible precautions to prevent damage to
our investments, but that does not breach any criminal law.
In some cases, however, wrong judgment contradicts a norm of crimi-
nal law that was designed to educate individuals to make sure that no
offense is committed. As long as our wrong judgment and lack of effort
to prevent offenses do not contradict the criminal law, society expects us
to learn our lesson on our own. Next time we invest our money, we will
examine the details of our investment more carefully. This is how we gain
life experience. In this case, society takes a risk that we will not learn our
lesson, but still does not intervene through criminal law.
But when criminal offenses are committed, society does not assume
the risk of allowing individuals to learn their lessons on their own. In
these cases the social harm is too grave to allow such risk, and society
intervenes using both strict liability and negligence offenses. In the case
of strict liability offenses, the purpose is to educate individuals to make
every possible effort to prevent the occurrence of the offense. For example,
drivers are expected (and educated) to drive carefully to prevent any pos-
sible moving violation.
The objective of the offense is to make sure that individuals learn how
to behave extra carefully. Prospectively, it is assumed that after some-
one has been convicted of a strict liability offense, the probability for re-
commission of the offense is much lower. For example, society educates
its drivers to drive extra carefully, and its employers to adhere to state reg-
ulations regarding the payment of wages, and so on. Human and corporate
offenders are expected to learn how to behave carefully by means of the
criminal law. Is this method of education relevant for ai entities as well?
The answer is affirmative. For the educational purpose of strict liabil-
ity, there is not much use in imposing criminal liability unless the offender
has the ability to learn and change his or her behavior accordingly. If we
want to make offenders learn to behave very carefully, we must assume
that they are capable of learning and implementing that knowledge. If such
capabilities are present and exercised, it is necessary to impose criminal
liability for strict liability offenses. But if no such capabilities are exer-
cised, then imposing criminal liability is entirely unnecessary because no
prospective value is expected, and the result of using or not using criminal
liability for strict liability offenses would be the same.
For ai systems that are equipped with the relevant capabilities for machine learning, imposing criminal liability for strict liability offenses is therefore both relevant and potentially effective.

In perpetration-through-another, an instrumentally used person who is a semi-innocent agent is criminally liable for a negligence offense, even though the perpetrator is at the same time criminally liable for a mens rea
offense. Negligence is the lowest level of mental element required for the
person instrumentally used to be considered a semi-innocent agent. In this
context, strict liability is too low a level of mental element for consider-
ation as a semi-innocent agent. If the person who is instrumentally used by
another is in a mental state of strict liability, that person is to be considered
an innocent agent, as if lacking any criminal mental state at all.
Thus, in perpetration of an offense-through-another, the other (instru-
mentally used) person may be in four possible mental states that reflect
matching legal consequences, as described schematically in table 4.2.
When the other person is aware of the delinquent enterprise and still
continues to participate although he is under no pressure, he becomes an
accomplice to the commission of the offense. Negligence reduces the per-
son’s legal state to that of a semi-innocent agent.36 But whether the mental
state of this person is that of strict liability or an absence of a criminal
mental state, he must be considered an innocent agent, so that no criminal
liability is imposed on him, and the full criminal liability for the relevant offense is imposed on the perpetrator who instrumentally used that
person. This is true for both humans and ai entities.
For this legal construct of perpetration-through-another, it is immate-
rial whether the ai system was instrumentally used, and whether it used
its strong ai capabilities to form a strict liability mental state. Thus, the
instrumental use of a weak ai system, of a strong ai system that formed
strict liability, or of a screwdriver has the same legal consequences. The
entity making the instrumental use (human, corporation, or ai system) is
criminally liable for the commission of the offense in full, and the instru-
mentally used entity (human, corporation, or ai system) is considered an
innocent agent.
For example, suppose that the human user of an unmanned vehicle based on an advanced ai system instrumentally uses it in a bank robbery, and that during the robbery the vehicle commits an additional, unplanned strict liability offense. Some legal systems require actual and subjective foreseeability (the party has actually and subjectively fore-
seen the occurrence of the unplanned offense), whereas others prefer to
evaluate the ability to foresee based on an objective standard of reasonabil-
ity (the party has not actually foreseen the occurrence of the unplanned
offense, but any reasonable person under the same condition would have).
Actual foreseeability parallels subjective mens rea, whereas objective
foreseeability parallels objective negligence. A lower level of foreseeabil-
ity is not adequate for probable consequence liability. Thus, the question
is about the level of foreseeability required of the robbers. If they actually
foresaw the commission of that offense by the vehicle, then criminal li-
ability for the offense is imposed on them in addition to criminal liability
for robbery. If the robbers formed objective foreseeability, then criminal
liability for the additional offense is imposed only in legal systems in
which probable consequence liability can be satisfied through objective
foreseeability.
But if the mental state of the bank robbers in relation to the additional
offense is strict liability, this is not adequate for the imposition of criminal
liability through probable consequence liability. Although the additional
offense requires strict liability for imposition of criminal liability, this is
true only for the perpetrator of that offense, and not for imposing criminal
liability through probable consequence liability. Thus, if neither subjective
nor objective foreseeability regarding the commission of the offense can be
attributed to the robbers, then probable consequence liability is irrelevant.
In this type of case, no criminal liability is imposed on the offenders, and
the criminal liability of the ai system for the strict liability offense does
not affect the offenders’ liability.
If the manufacturer, the programmer, or the user instrumentally used the ai system to kill the patient in the ambulance by designing the system in such a way as to collide with the ambulance because it misunderstood the factual situation, they are perpetrators-
through-another of homicide. This is also the case if the ai system has
no advanced ai capabilities to meet the requirements of the offense. In
this case, they and the related parties, but not the ai system, are the only
ones criminally liable for the homicide. Their exact criminal liability for
the homicide, whether as murder or manslaughter, is determined by their
meeting the mental element requirements of these offenses.
If the manufacturer, the programmer, and the user foresaw the conse-
quences (the patient’s death) or any reasonable person in their condition
could have foreseen these consequences, and if the ai system is criminally
liable for the homicide, they may be criminally liable for the homicide
through probable consequence liability, as long as the offense was un-
planned. Generally, if they did not design the system to commit any of-
fense, but the unplanned homicide occurred, then the criminal liability of
the system is examined based on its capabilities and on meeting the rel-
evant homicide requirements, and the criminal liability of the human par-
ties is examined based on the standards of probable consequence liability.
Therefore, ai systems can be criminally liable for strict liability of-
fenses, and that does not necessarily reduce the criminal liability of other
relevant entities.
5
Applicability of General Defenses to ai Criminal Liability
Suppose that certain robots have killed human beings, and the case must be analyzed in the context of criminal law. Analysis of the records of these robots reveals
that the mens rea requirements of homicide are met in full. But this is not
the regular course of conduct for these robots, which are programmed not
to be violent unless it is absolutely necessary. Accurate analysis of this
case requires the application of the general defenses of criminal law to
ai technology. This is the question that this chapter seeks to answer: are
general defenses applicable to ai criminal liability?
General defenses can be divided into two main types: exemptions and
justifications.5 Exemptions are general defenses related to the personal
characteristics of the offender (in personam), and justifications are related
to the characteristics of the factual event (in rem). The personal charac-
teristics of the offender may negate the fault for the commission of the
offense, regardless of the factual characteristics of the event or the exact
identity of the offense. In exemptions, the personal characteristics of the
offender are sufficient to prevent imposition of criminal liability for any
offense.
For example, a child under the age of legal maturity is not criminally
liable for any offense factually committed by him. The same is true for in-
dividuals who committed the offense at a time when they were considered
to be insane. The exact nature of the offense committed by the individual
is immaterial for the imposition of criminal liability. It may be relevant for
subsequent steps toward the treatment or rehabilitation of the offender, but
not for the imposition of criminal liability.
Justifications are impersonal general defenses, and as such they do not
depend on the identity of the offender, but only on the factual event that
took place. The personal characteristics of the individual are immaterial
for justifications. For example, one person is attacked by another person,
and her life is in danger. She pushes the attacker away, which is consid-
ered assault unless it is done with consent. In this case, the person doing
the pushing claims self-defense regardless of her identity, the identity of
the attacker, or any other of their personal attributes, because self-defense
has to do with only the factual event itself.
Because justifications are impersonal, they also have a prospective
value. Not only is the individual not criminally liable if she acted with
justification, but this is the way in which she should have been acting. Jus-
tifications define not only types of general defense, but also proper behav-
ior.6 Therefore, individuals should defend themselves in situations that
require self-defense, even if this may appear as an offense. This is not true
for exemptions. A child below the maturity age is not supposed to commit
offenses even if no criminal liability is imposed on him, and neither are
insane individuals.
The prospective behavioral value of justifications expresses the social
values of society. If we accept self-defense, for example, as legitimate jus-
tification, this means that we prefer people to protect themselves when
the authorities are unable to protect them. We prefer to reduce the monop-
oly of the state on power by legitimizing self-assistance rather than leave
people vulnerable and helpless. We do not coerce individuals to act in
their self-defense, but if they do, then we do not consider them criminally
liable for the offense committed through self-defense.
Both exemptions and justifications are general defenses. The term gen-
eral defenses relates to defenses that may be applied to any offense, and
not to a certain group of offenses. For example, infancy can be applied to
any offense as long as it was committed by an infant. By contrast, some
specific defenses can be applied only to specific offenses or types of of-
fenses. For example, in some countries in the case of statutory rape, a
defense is applicable if the age gap between the defendant and the victim
is less than three years. This defense is unique to statutory rape, and it is
irrelevant for any other offense. Exemptions and justifications are classi-
fied as general defenses.
As defense arguments, general defenses are argued positively by the
defense. If the defense chooses not to raise these arguments, they are not
discussed in court, even if participants at the trial understand that such an
argument may be relevant. It is not sufficient to state the general defense:
its elements must be proven by the defendant. In some legal systems, the
general defense is proven by raising a reasonable doubt about the presence
of the elements of the defense. In other legal systems, the general defense
must be proven by a preponderance of the evidence. Thus, the prosecution
has the opportunity to refute the general defense.
General defenses of the exemption type include infancy, loss of self-
control, insanity, intoxication, factual mistakes, legal mistakes, and sub-
stantive immunity. General defenses of the justification type include self-
defense (including defense of dwelling), necessity, duress, superior orders,
and de minimis defense. All these general defenses may invalidate the
offender’s fault. The question is whether these general defenses are appli-
cable to ai technology in the context of criminal law. We answer this ques-
tion now, discussing exemptions and justifications separately.
5.2. Exemptions
Exemptions are general defenses related to the personal characteristics
of the offender (in personam), as noted previously.7 The applicability of
exemptions to the criminal liability of ai entities raises the question of the
capability of these entities to form the personal characteristics required
for general defenses. For example, the question of whether an ai system
could be insane is rephrased to ask whether it has the mental capability of
forming the elements of insanity in criminal law. This question, mutatis
mutandis, is relevant to all exemptions, as we will discuss.
5.2.1. Infancy
Could an ai system be considered an infant for the purposes of criminal
liability? Since ancient times, children under a certain biological age were
not considered criminally liable (doli incapax). The difference between
the various legal systems was in the exact age of maturity. For example,
Roman law set the age of maturity at seven years.8 This defense is deter-
mined by legislation9 and case law.10 It is clear that the cutoff age is biologi-
cal and not mental, primarily for evidentiary reasons, because biological
age is much easier to prove.11 It was presumed, however, that the biological
age matches the mental age.
If the child’s biological age exceeds the lower threshold but is under the
age of full maturity (for example, fourteen in the United States), the mental
age of the child is examined based on evidence (for example, expert testi-
mony).12 The conclusive examination is whether the child understands his
own behavior and its wrong character,13 and whether he understands that
he may be criminally liable for the offense, as if he were mature. But there
may still be some procedural changes in the criminal process compared to
the standard process (for example, juvenile court, the presence of parents,
or more lenient punishments).
The rationale behind this general defense is that children under a cer-
tain age, whether biological or mental, are presumed to be incapable of
forming the fault required for criminal liability.14 The child is incapable of mentally containing the fault and understanding the full social
and individual meanings of criminal liability, so that imposition of crimi-
nal liability would be irrelevant, unnecessary, and vicious. Consequently,
children are not criminally liable but rather are educated, rehabilitated,
and treated.15 The question, in our case, is whether this rationale is relevant only to humans or to other legal entities as well.
The general defense of infancy is not considered applicable to corpo-
rations. There are no “child corporations,” so the moment a corporation
is registered it exists, and criminal liability may be imposed on it legally;
therefore, the rationale of this general defense is irrelevant for corpora-
tions. A child lacks the mental capability to form the required fault be-
cause human consciousness is underdeveloped at this age. As the child be-
comes older, his or her mental capacity develops gradually until it reaches
full understanding of right and wrong. At this point, criminal liability
becomes relevant.
The mental capacity of corporations does not depend on their chrono-
logical “age,” that is, the date of registration, and it is considered to be
constant. Moreover, the mental capacity of a corporation derives from its
human officers, who are mature entities. Consequently, there is no legitimate rationale for applying the general defense of infancy to corporations.

5.2.2. Loss of Self-Control

Loss of self-control requires two cumulative conditions: (1) inability to control the self-behavior, and (2) inability to control the conditions under which that behavior occurred. Consider, for example, a patient who taps his own knee and thereby produces a reflex kick.
In our example, the patient did not control his self-behavior (the
reflex), but he controlled the conditions for its occurrence (tapping the
knee); therefore, the general defense of loss of self-control is not applicable
to him.
Many types of situations have been recognized as loss of self-control,
including automatism (acting without aware central control over the
body),21 convulsions, post-epileptic states,22 post-stroke states,23 organic
brain diseases, diseases of the central nervous system, hypoglycemia, hy-
perglycemia,24 somnambulism (sleepwalking),25 extreme sleep depriva-
tion,26 side effects of bodily27 or mental traumas,28 blackout situations,29
side effects of amnesia30 and brainwashing,31 and many more.32 The cause
for the loss of self-control is immaterial for meeting the first condition. As
long as the offender is indeed incapable of controlling his or her behavior,
the first condition is fulfilled.
The second condition refers to the cause for being in the first condi-
tion. If the cause was controlled by the offender, he is not considered to
have lost his self-control. Controlling the conditions for losing the self-
control is controlling the behavior. Therefore, when the offender controls
the conditions for moving in and out of control, he cannot be considered
to have lost his self-control. In Europe, the second condition is based on
the doctrine of actio libera in causa, which dictates that if one controls one's entry into a situation that is out of control, the general defense of loss
of self-control is rejected.33
Accordingly, ai systems can experience loss of self-control in the con-
text of criminal law due to external and internal causes. For example, a
human pushes an ai robot onto another human. The robot being pushed
has no control over that movement. This is an example of an external
cause for loss of self-control. If the pushed robot makes nonconsensual
physical contact with the other person, this may be considered an assault.
The mental element required for assault is awareness, so if the robot is
aware of that physical contact, both factual and mental elements require-
ments of the assault are met.
If the ai robot were human, it would probably have claimed the gen-
eral defense of loss of self-control. Thus, although both mental and fac-
tual elements of the assault are fulfilled, no criminal liability is imposed
because commission of the offense was involuntary, or due to loss of self-
control. This general defense would prevent the imposition of criminal
liability on human offenders, and it should also prevent the imposition of criminal liability on ai offenders in the same circumstances.
5.2.3. Insanity
Can an ai system be considered insane for the purposes of criminal liabil-
ity? Insanity has been known to humanity since the fourth millennium
bc ,34 but at that time it was considered to be a punishment for religious
sins,35 and therefore there was no need to look for cures.36 Only since the
middle of the eighteenth century has insanity been explored as a mental
disease, and its legal aspects considered.37 In the nineteenth century, the
terms insanity and moral insanity described situations in which the indi-
vidual lost his or her moral orientation, or had a defective moral compass,
but was aware of common moral values.38
Insanity was diagnosed as such only in case of significant deviations
from common behavior, especially sexual behavior.39 Since the end of
the nineteenth century, it has been understood that insanity is mental
malfunctioning, and that at times it is not expressed in deviations from
common behavior. This approach lies at the foundation of the understand-
ings of insanity in criminal law and criminology.40 Mental diseases and
defects were categorized according to their symptoms and respective med-
ical treatments, and their effect on criminal liability was explored and recorded. But the different needs of psychiatry and criminal law produced
different definitions for insanity.
For example, the early English legal definition of an insane person (“idiot”) was the inability to count to twenty. In psychiatry, such a
person is not considered insane.41 Criminal law needed a clear and con-
clusive definition of insanity, whereas psychiatry had no such needs. In
most modern legal systems, today’s legal definition of insanity is inspired
by two nineteenth-century English tests. One is the M’Naughten rules of
1843,42 and the second is the irresistible impulse test of 1840.43 The com-
bination of these two tests ensures that the general defense of insanity
complies with the structure of mens rea, as discussed previously.44
The legal definition of insanity has both cognitive and volitive aspects.
The cognitive aspect of insanity addresses the ability to understand the
criminality of the conduct, whereas the volitive aspect addresses the abil-
ity to control the will. Thus, if a mental disease or defect causes cognitive
malfunction (difficulty in understanding the factual reality and the criminal-
ity of the conduct) or volitive malfunction (irresistible impulse), the condi-
tion is considered insanity from the legal point of view.45 This is the con-
clusive common test for insanity.46 It fits the structure of mens rea, which
also contains both cognitive and volitive aspects, and it is complementary
to the mens rea requirement.47
This definition of insanity is functional and not categorical. For an
individual to be considered insane, it is not necessary to have a mental ill-
ness that appears on a certain list of mental diseases. Any mental defect,
of any type, can be the basis for insanity as long as it causes cognitive or
volitive malfunctions. The malfunctions are examined functionally and
not necessarily medically, and they need not appear on a list of mental
diseases.48 Therefore, a person may be considered insane by criminal law
and perfectly sane from the point of view of psychiatry, as in the case of
a cognitive malfunction that is not categorized as a mental illness. The
opposite situation, whereby a person is sane for the purposes of criminal
law but insane psychiatrically, is also possible, as in the case of a mental
disease that does not cause any cognitive or volitive malfunction.
The insane person is presumed to be incapable of forming the fault re-
quired for criminal liability. The question is whether the general defense
of insanity is applicable to ai systems. The general defense of insanity re-
quires a mental, or inner, defect that causes cognitive or volitive malfunc-
tion. There is no need to demonstrate any specific type of mental disease,
and any mental defect is satisfactory. The question is, how can we know
about the existence of that “mental defect”? Because the mental defect is an inner condition, its existence must be inferred from external evidence of cognitive or volitive malfunction.

5.2.4. Intoxication
Can an ai system be considered intoxicated for the purposes of criminal li-
ability? The effects of intoxicating substances have been known to humanity since prehistory. In the early law of ancient times, the term intoxication
referred to drunkenness as a result of ingesting alcohol. Later, when the
intoxicating effects of other materials became known, the term was ex-
panded.51 Until the beginning of the nineteenth century, intoxication was
not accepted as a general defense. The Archbishop of Canterbury wrote
in the seventh century that imposing criminal liability on a drunk person
who committed homicide was justified for two reasons: first, the drunken-
ness itself, and second, the homicide of a Christian person.52
Drunkenness was conceptualized as a religious and moral sin, and
therefore it was considered immoral to let offenders be exempt from crim-
inal liability for being drunk.53 Only in the nineteenth century did the
courts undertake a serious legal discussion of intoxication, made possible
by legal and scientific developments that have produced the understand-
ing that an intoxicated person is not necessarily mentally competent for
purposes of criminal liability (non compos mentis).
From the beginning of the legal evaluation of intoxication in the nine-
teenth century, the courts distinguished between cases of voluntary and
involuntary intoxication.54 Voluntary intoxication was considered to be
the offender’s fault, so it could not be the basis for exemption from crimi-
nal liability. Nevertheless, voluntary intoxication could be considered as a
relevant circumstance for the imposition of a more lenient punishment.55
Moreover, voluntary intoxication could be used to refute premeditation in
first-degree murder cases.56 Courts have distinguished between cases based
on the reasons for becoming intoxicated. Voluntary intoxication grounded
in the will to commit an offense was considered different from voluntary
intoxication undertaken for no criminal reason.57
By contrast, involuntary intoxication was recognized and accepted as
an exemption from criminal liability.58 Involuntary intoxication is a situa-
tion imposed on the individual, so it is not just and fair to impose criminal
liability in such situations. Thus, the general defense of intoxication has
two main functions. When intoxication is involuntary, the general defense
prevents imposition of criminal liability. When it is voluntary but not in-
tended for the commission of an offense, intoxication is a consideration
for the imposition of a more lenient punishment.
The modern understanding of intoxication includes any mental effect
caused by an external substance (for example, chemicals). The required
mental effect matches the structure of mens rea, discussed earlier. Conse-
quently, the effect may be cognitive or volitive.59 The intoxicating effect
may relate to the offender’s perception, understanding of factual reality,
or awareness (cognitive effect); or it may affect the offender’s will, includ-
When R obo ts Kill . 132
For example, a man charged with rape admits that he and the com-
plainant had sexual intercourse, but proves that he had really believed
that it was carried out consensually; and the prosecution proves that the
complainant did not consent. The court believes both, and both are telling
the truth. Should the court acquit or convict the defendant? Since the sev-
enteenth century, modern criminal law has preferred the subjective per-
spective of the individual (the defendant) about the factual reality to the
factual reality itself.65 The general concept is that the individual cannot be
criminally liable except for “facts” that he or she believed to be true, regardless of whether they actually occurred in factual reality.66
In most legal systems, the limitations imposed on this concept are evi-
dentiary, so that the defendant’s argument would be considered true and
authentic, if proven by the defendant. When the argument is considered
true and authentic, this becomes the basis for the imposition of crimi-
nal liability. In our example, the defendant is acquitted because he truly
believed the intercourse was consensual. If the defendant’s perspective
negates the mental element requirement, the factual mistake works as a
general defense that leads to the defendant’s acquittal.67
The general defense of factual mistake is applicable not only in mens
rea offenses, but also in negligence and strict liability offenses. The differ-
ence between them is in the type of mistake required. In mens rea offenses,
any authentic mistake negates awareness of the factual reality and is con-
sidered adequate for that general defense. In negligence offenses, the mis-
take is also required to be reasonable for the defendant to be considered
to have acted reasonably.68 In strict liability offenses, the mistake must be
inevitable despite the defendant having taken all reasonable measures to
prevent it.69 The question is whether the general defense of factual mistake
is applicable to ai systems.
Both humans and ai systems can experience difficulties, errors, and
malfunctions in the process of awareness of the factual reality. These dif-
ficulties may occur both in the process of absorbing the factual data by the
senses and in the process of creating the relevant general image about the
data. In most cases, such malfunctions result in the formation of an inner
factual image that differs from the factual reality as the court understands
it. This is a factual mistake concerning the factual reality. Factual mistakes
are part of our everyday life, and they are a common basis for our behavior.
In some cases, factual mistakes by both humans and ai systems can
lead to the commission of an offense. This means that according to factual
reality the action is considered an offense, but not so according to the sub-
jective inner factual image of the individual, which happens to involve a factual mistake.

5.2.6. Legal Mistake

Can an ai system be considered to be acting under a legal mistake for the purposes of criminal liability? As a rule, offenders are not required to know that their conduct is prohibited. This is the legal situation for most offenses owing to prospective
considerations of everyday life in society.71
If offenders were required to know about the prohibition as a condi-
tion for the imposition of criminal liability, they would be encouraged not
to learn the law, and as long as they were ignorant of the law, they would
enjoy immunity from criminal liability. If no such condition is required,
the public is encouraged to inquire about the law, to know it, and to obey
it. These prospective considerations do not fully match the fault require-
ment in criminal law. As a result, criminal law seeks a balance between
justice, the fault requirement, and prospective considerations concerned
with everyday life in society.
Initially, considerations were entirely prospective. Roman law stated
that ignorance of the law does not excuse the commission of offenses (ig-
norantia juris non excusat),72 and until the nineteenth century, this was
the general approach followed by most legal systems.73 In the nineteenth
century, when the culpability requirement in criminal law underwent a
dramatic development, it became necessary to establish a balance, and the
legal mistake was required to be made in good faith (bona fide)74 and to
reflect the highest state of mental element, that is, strict liability. Accord-
ing to this standard, the legal mistake is an inevitable one, despite the fact
that all reasonable measures have been taken to prevent it.75
This high standard for legal mistakes is required for all types of of-
fenses, regardless of their mental element requirement. Therefore, the gen-
eral standard for legal mistakes is higher than that for factual mistakes.
In this context, the main question in courts is whether the offender has
indeed taken all reasonable measures to prevent the legal mistake, includ-
ing reasonable reliance on statutes, judicial decisions,76 official interpreta-
tions of the law (including pre-rulings),77 and the advice of private coun-
sel.78 The question for us is whether the general defense of legal mistake is
applicable to ai systems.
Technically, if the relevant entity, whether a human or an ai system,
is capable of fulfilling the mental element requirement of strict liability
offenses, the entity also is capable of claiming legal mistake as a general
defense. Because strong ai systems have the ability to meet the mental el-
ement requirement of strict liability offenses, as discussed earlier,79 they
also have the capabilities needed to make the general defense of legal mis-
take applicable to them. The absence of the legal knowledge of a given
issue can be proven by examining the records attesting to the knowledge
of the ai entity, thereby meeting the good faith requirement as well.
The fundamental meaning of the applicability of the legal mistake
defense to ai systems is that the system was not restricted by any formal
legal restriction, and it acted accordingly. If the ai system contains a soft-
ware mechanism that searches for such restrictions, and although the
mechanism has been activated no such legal restriction has been found,
then the general defense applies. Note, however, that the system’s ex-
emption from criminal liability does not function as an exemption from
criminal liability for the programmers or users of the system. If these
persons could have restricted the activities of the system to strictly legal
ones but failed to do so, they may be criminally liable for the offense
based on perpetration-through-another or on probable consequence, as
we have discussed.80
For example, an ai system absorbs factual data about certain people,
and it is required to analyze their personalities based on the data and pub-
lish the results within certain guidelines. In one instance, the publication
is considered criminal libel. If the records of the system show that it has
not been constrained by any restriction regarding libelous publications
and it either did not have the mechanism to search for such restrictions
or it did have the mechanism but found no such restrictions, then the
system is not criminally liable for libel. The manufacturer, programmers,
and users may be criminally liable for the libel, however, as perpetrators-
through-another or through probable consequence liability.
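The restriction-searching mechanism described above might be sketched as follows. The database, function names, and logging scheme are hypothetical, intended only to show how activation of the search could be recorded for later proof of good faith.

```python
# A hypothetical sketch of the "software mechanism that searches for legal
# restrictions" described above, applied to the libel example. The restriction
# database and the check itself are invented for illustration.

KNOWN_RESTRICTIONS = {
    # topic -> restriction text; an empty database models a system whose
    # search mechanism, although activated, finds no restriction on
    # libelous publications.
}

def may_publish(topic, log):
    """Search for a restriction before publishing and record the search,
    so the system's records can later attest to a good-faith legal mistake."""
    restriction = KNOWN_RESTRICTIONS.get(topic)
    log.append((topic, restriction))
    return restriction is None

audit_log = []
if may_publish("personality analysis", audit_log):
    pass  # publish within guidelines; the log shows the search was activated
print(audit_log)  # [('personality analysis', None)]
```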
An ai system may have broad knowledge about many types of issues,
but it may not contain legal knowledge on every legal matter. The system
may be searching for legal restrictions if it is designed to do that, but may
not necessarily find any; this then becomes a case for the general defense
of legal mistake. If legal mistakes have the same substantive and functional
effects on both humans and ai systems, there is no legitimate reason to
make the general defense of legal mistake applicable to one type of of-
fender but not to the other. Consequently, it appears that the general de-
fense of legal mistake is applicable to ai systems.
5.3. Justifications

5.3.1. Self-Defense
Self-defense is one of the most ancient defenses in human culture. Its
basic function is to mitigate society's absolute monopoly on power, according to which only society (the state) has the authority to use force
against individuals.83 When one individual has a dispute with another, he
or she is not allowed to use force, but must apply to the state (through the
courts, police, and so on) for the state to solve the problem and use force
if necessary. According to this concept, no power is left in the hands of
individuals.
For this concept to be effective, however, state representatives must be
present at all times in all places. If one individual is attacked by another
in a dark alley, she cannot retaliate, but must wait for the state represen-
tatives, who may be unavailable at that time. To enable people to protect
themselves from attackers in situations like this, society must partially re-
treat from this concept. One such retreat is the acceptance of self-defense
as a general defense in criminal law. Self-defense allows individuals to
protect some values using force, skirting the monopoly on power enjoyed
by the state.
Being in a situation that warrants self-defense is considered to nul-
lify the individual’s fault required for the imposition of criminal liability.
This concept has been accepted by legal systems worldwide since ancient
times, and eventually the defense has become wider and more accurate.84
As noted, however, this is not part of the legal question, and as far as the applicability of self-defense to robots is concerned, only the legal question
is relevant.
The legal question is much simpler, and it asks whether an ai system
has the ability to protect its own “life,” freedom, body, and property. To
protect these legitimate interests, the ai system must first possess them.
Only if an ai system possesses property can it protect it; otherwise it can
protect only the property of others. The same is true concerning life, free-
dom, and body. Currently, criminal law protects human life, freedom,
and body. The question of whether a robot has analogous life, freedom,
and body is one of legal interpretation, in addition to the moral questions
involved.
Criminal law recognized decades ago that the corporation, which is not
a human entity, possesses life, freedom, body, and property.96 We return
to this analogy in the next chapter when we discuss the punishments of
ai systems, including capital punishment and imprisonment. But if the
legal question concerning corporations, which are abstract creatures, has
been decided affirmatively, it would be unreasonable to decide otherwise
in the case of ai systems, which physically simulate these human attri-
butes much better than do abstract corporations.
For example, in a prisoner escape, a prison-guard robot is attacked by
the escaping prisoners, who intend to tie it up in order to incapacitate it
and prevent it from interfering with their escape. Is it legitimate for the
robot to defend its mission and protect its freedom? If the prisoners intend
to break its arms, is it legitimate for the robot to defend its mission and
protect its “body”? If the answers are analogous to those supplied for cor-
porations, they are affirmative. In any case, even if the idea of an ai robot’s
life, body, and freedom is not acceptable, despite the analogy with corpo-
rations and despite the positive moral attitude, ai systems still meet the
first condition of self-defense by protecting the life, body, freedom, and
property of humans.
The other two legal conditions of self-defense do not appear to differ
for humans and ai systems. The condition requiring an illegitimate attack
depends on the attacker, not on the identity of defender, whether human or
not. The condition of clear and imminent danger to the protected interest
does not depend on the defender either. The nature of the danger is deter-
mined by the attacker, not by the defender, and therefore this condition is
met not through the behavior of the defender, but through the analysis of
the attack, independent of the defender’s identity.
The condition of repelling the attack by a proportional and immediate act likewise does not depend on the identity of the defender, whether human or an ai system.

5.3.2. Necessity
Can an ai system be considered to be acting out of necessity in the criminal
law context? Necessity is a justification of the same type as self-defense.
Both are designed to mitigate society's absolute monopoly on power,
discussed earlier.101 The principal difference between self-defense and ne-
cessity is in the identity of the object of the response. In the case of self-
defense, the defender’s reaction is against the attacker, whereas in the case
of necessity it is against an innocent object (innocent person, property,
and so on). The innocent object is not necessarily related to the cause of
the reaction.
For example, two people are sailing on the high seas. The boat hits an
iceberg and sinks. Both people survive on an improvised raft, but have
no water or food. After a few days, one of them eats the other in order to
survive.102 The victim did not attack the eater and was not to blame for the
crash; she was completely innocent. The attacker was also innocent, but
he knew that if he did not eat the other person, he would definitely die.
If the eater is indicted for murder, he can claim necessity (self-defense is
not relevant in this case because the victim did not threaten the attacker).
The traditional approach toward necessity is that under the right cir-
cumstances, it can justify the commission of offenses (quod necessitas non
habet legem).103 The traditional justification of the defense is the under-
standing in criminal law of the weakness of human nature. The individual
who acts under necessity is considered to be choosing the lesser of two
evils from his or her own point of view.104 In our example, if the attacker
chooses not to eat the other person, they both perish; if he eats the other
person, only one of them dies. Both situations are evil, but the lesser evil
of the two is the one in which one person survives. The victim of neces-
sity is not blamed for anything but is innocent, and the act is considered
to be justified.105
Given that the act performed out of necessity is the individual’s, car-
ried out when the authorities are not available, the general defense of ne-
cessity partially reduces the applicability of the general concept whereby
only society (the state) has a monopoly on power, as we have discussed.
Being in a situation that requires an act of necessity is considered to nul-
lify the individual’s fault required for the imposition of criminal liabil-
ity. This concept has been accepted by legal systems worldwide since
ancient times, and eventually the defense became broader and more ac-
curate, so that its modern form allows individuals to protect legitimate
interests.106
There are several conditions for this general defense to be applicable, paralleling those of self-defense. The main difference between self-defense and necessity lies in one
of the conditions. Whereas in self-defense the act is directed toward the
attacker, in the case of necessity the act is directed toward an external or
innocent interest. In the case of necessity, the defender must choose be-
tween the lesser of two evils, one of which harms an external interest, who
may be an innocent person having nothing to do with the danger.
The question regarding ai systems in this context is whether they are
capable of choosing the lesser of two evils. For example, an ai-based lo-
comotive transporting twenty passengers arrives at a junction of two rail-
roads. A child is playing on one of the rails, but the second rail leads
to a cliff. If the system chooses the first rail, the child will certainly die
but the twenty passengers will survive. If it chooses the second rail, the
child will survive, but given the vehicle’s speed and the distance from
the cliff, it will certainly fall off the cliff and none of the passengers will
survive.
If the vehicle had been driven by a human driver who chose the first
rail and killed the child, no criminal liability would be imposed on him
owing to the general defense of necessity. An ai system can calculate the probabilities for each option and choose the one with the lowest expected number of casualties. Strong ai systems are already used to predict complicated events (for example, weather patterns), and calculating the probabilities in the locomotive example is far simpler. The ai system analyzing the options faces the same two choices that the human driver has to consider.
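The lesser-evil calculation described here can be stated in a few lines of code. The following minimal sketch (in Python; the class, figures, and option names are hypothetical illustrations, not any real locomotive system) reduces each option to an expected number of casualties and picks the minimum:

# Hypothetical sketch: choosing the "lesser of two evils" by expected casualties.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    p_harm: float      # probability that the harmful outcome occurs
    casualties: int    # number of casualties if it does

    def expected_casualties(self) -> float:
        return self.p_harm * self.casualties

def lesser_evil(options: list[Option]) -> Option:
    """Return the option with the lowest expected number of casualties."""
    return min(options, key=lambda o: o.expected_casualties())

# In the locomotive example both outcomes are certain, so the comparison is 1 vs. 20.
first_rail = Option("first rail (child)", p_harm=1.0, casualties=1)
second_rail = Option("second rail (cliff)", p_harm=1.0, casualties=20)
print(lesser_evil([first_rail, second_rail]).name)  # -> first rail (child)

On this calculation the system, like the human driver, selects the first rail; the legal question is only whether the defense that would excuse the human covers the machine as well.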
If the ai system takes into account the number of probable casualties,
based on its programming or as a result of the relevant machine learning,
it will probably choose to sacrifice the child. This choice by the ai system
meets all the conditions of necessity, so that if it were human, no criminal
liability would be imposed on it owing to the general defense of neces-
sity. Should the ai system be treated differently, then? Moreover, if the ai
system chooses the alternative option, and causes not the lesser but the
greater of the two evils to occur, we would probably want to impose crimi-
nal liability (on the programmer, the user, or the ai system) in exactly the
same way as if the driver were human.
Naturally, some moral dilemmas may be involved in these choices
and decisions: for example, whether it is legitimate for the ai system to
make decisions in matters of human life, and whether it is legitimate for
ai systems to cause human death or severe injury. But these dilemmas are
no different from the moral dilemmas of self-defense, discussed earlier.112
Moreover, moral questions are not to be taken into consideration in relation to the question of criminal liability.
5.3.3. Duress
Can an ai system be considered to be acting under duress in the criminal law context? Duress is a justification of the same type as self-defense and necessity. All are designed to mitigate society's absolute monopoly on power, as already discussed.113 The principal difference between self-
defense, necessity, and duress is in the course of conduct. In self-defense,
the defender’s reaction is intended to repel the attacker; in necessity, it is
a reaction against an external innocent object; and in the case of duress, it
is surrendering to a threat by committing an offense.
For example, a retired criminal with expertise in breaking into safes is no longer active. He is invited by former associates to participate in another robbery, in which his expertise is required. He refuses. They try to persuade him, but he still refuses. So they kidnap his daughter and threaten to kill her if he does not participate in the robbery. He knows them well and knows that they are capable of doing it. He also knows that if he involves the police, they will kill his daughter. As a result, he surrenders to the threat, participates in the robbery, and uses his expertise. If captured, he can claim duress. Self-defense and necessity are irrelevant in this case because he surrendered to the threat rather than facing it.
The traditional approach toward duress is that under the right circum-
stances, it can justify the commission of offenses.114 The traditional justifi-
cation of the defense is the understanding in criminal law of the weakness
of human nature. At times, people would rather commit an offense under
threat than face the threat and pay the price by causing harm to precious
interests. Until the eighteenth century, the general defense of duress was
applicable to all offenses.115 Later, the Anglo-American legal systems nar-
rowed its applicability, and today it no longer includes severe homicide
offenses that require mens rea.116
In the safe-breaking example, if the offense were not robbery but murder, the general defense of duress would not bar the imposition of criminal liability; it would serve only as a consideration in punishment.
The reason for the narrow applicability is the sanctity of human life.117 But
there are many exceptions to this approach.118 In general, with the narrow
exception for homicide, duress enjoys broad application as a general de-
fense in criminal law worldwide. The individual who acts under duress is
considered to be choosing the lesser of two evils, from his or her point of view.

5.3.4. Superior Orders
and then it executes the order. When the records of the drone are exam-
ined, it emerges that the drone understood the legality of the orders to be
in a grey area regarding the terrorist’s family, and that it could have flown
lower and destroyed the lab with fewer casualties.
If the drone were human, it would probably claim the general defense
of superior orders, because international law accepts such orders as a defense in certain situations, as long as the order is not manifestly illegal. Therefore,
a human pilot would probably have been acquitted in such a case, and
the general defense would have been applicable. Should an ai system be
treated differently? If both the human pilot and the ai system have the
same functional discretion, and both meet the conditions of this general
defense, there is no legitimate reason for using a double standard in such
cases.
Naturally, the criminal liability of the ai system, if any, does not affect
the criminal liability of its superiors, if any (in case of an illegal order).
Some moral dilemmas may also be involved in these choices and deci-
sions: for example, whether it is legitimate for the ai system to make deci-
sions in matters of human life, and whether it is legitimate for ai systems
to cause human death or severe injury. But these dilemmas are no different
from the moral dilemmas involving the applicability of other general de-
fenses, discussed earlier.130 Moreover, moral questions are not to be taken
into consideration in relation to the question of criminal liability. Conse-
quently, it appears that the general defense of superior orders is applicable
to ai systems.
5.3.5. De Minimis
Can the general defense of de minimis be applicable for ai systems? In
most legal systems, offenses are defined and formulated broadly, inevita-
bly resulting in over-inclusion or over-criminalization, so that cases that
are not supposed to be considered criminal end up being included within
the scope of the offense. At times, the criminal proceedings in these cases
would cause more social harm than benefit. For example, the case of a
fourteen-year-old girl who steals her brother’s basketball falls within the
scope of the offense of theft, and the question is whether it would be ap-
propriate to institute criminal proceedings for theft in such a case, consid-
ering its social consequences.
Most legal systems solve such problems by granting broad discretion to
the prosecution and the courts. The prosecution may decide not to initiate
criminal proceedings in cases of low public interest, and if such proceed-
ings are started, the court may decide to acquit the defendant owing to low
public interest. When the prosecution exercises this option, it is within its
administrative discretion. When the court exercises this discretion, it is
within its judicial power through the application of the general defense of
de minimis, which enables the court to acquit the defendant based on low
public interest in the case.
This type of judicial discretion has been widely accepted since ancient
times. Roman law, for example, determined that criminal law does not
extend to trifling and petty matters (de minimis non curat lex), and that
the judge should not be troubled with such matters (de minimis non curat
praetor).131 In modern criminal law, the general defense of de minimis is
seldom exercised by the court given the wide administrative discretion
of the prosecution. But in some extreme cases, the court may exercise
this judicial discretion, in addition to the administrative discretion of the
prosecution.132
The basic test for the de minimis defense is the degree of social endangerment involved in the commission of the offense. For the general defense of de minimis to be applicable, the offense should reflect extremely low social
endangerment.133 Naturally, different societies at various times may attri-
bute different levels of social endangerment to the same offenses, because
social endangerment is dynamically conceptualized through morality, cul-
ture, religion, and so on. The relevant social endangerment is determined
by the court. The question is whether this general defense is relevant for
offenses committed by ai systems.
Applicability of the de minimis defense depends on the particulars of the
case at hand and is based on its relevant aspects, and not necessarily on
the identity of the offender. The personality of the offender may also be
taken into consideration, but only as part of assessing the case. For this
reason, there is no difference between humans, corporations, or ai entities
regarding the applicability of this defense. The required low social endan-
germent is reflected in the factual event (in rem). For example, a human
driver swerves on the road and hits the guardrail. No damage is caused
to the guardrail or to other property, and there are no casualties. This is
a case for the de minimis defense, although it may be within the scope of
several traffic violations.
Would this case be legally different if the driver were not human but
an ai entity? And would it be different if the car were owned by a corpo-
ration? There is no substantive difference between humans, corporations,
and ai systems regarding the applicability of the de minimis defense, es-
pecially in light of the fact that this general defense is based on the char-
acteristics of the factual event and not necessarily on those of the offender.
of grabbing the human guard and smashing him against the wall be justi-
fied and defended?
As understood by the robot, this reaction is at most one of repelling a
threat. If all the conditions of self-defense are met, this may serve as the
basis for the general defense of self-defense. The robot’s mission, however,
was to prevent the escape of other prisoners but not to protect their lives,
freedom, body, or property. Therefore, self-defense is inapplicable in this
case, as are necessity and duress. Even if the system had orders to kill re-
sisting prisoners, the general defense of superior orders would not have
been applicable. Killing prisoners in response to an attempted escape, one that in fact resulted in another prisoner's escape (the disguised prisoner got away as a result of the robot's actions), would likely be considered the execution of a manifestly illegal order, obedience to which is not protected.
But if it can be proven, based on the records, that the robot’s decision
to kill the guard was a direct consequence of the effect of the virus on the
system, then the legal situation may be different. If the virus caused the
robot to act automatically when killing the guard, or caused a malfunction
in its cognitive or volitive processes as a result of which the robot killed
the guard, then the general defenses of loss of self-control, insanity, or in-
toxication may be applicable according to the concrete effect of the virus.
Naturally, this does not reduce the criminal liability of others involved
in the event, including humans (superiors, prisoners, guards, and hackers),
corporations (the prison service and the manufacturer of the ai systems),
and other ai systems. It appears, however, that general defenses in criminal
law are applicable to ai systems as well as to humans and corporations.
6

Sentencing AI
most people tend to park their cars in these areas, regardless of the level
of maximum punishment. Thus, the deterrent value of punishment alone
is very limited.14
Second, the deterrent value of punishment, even if maximal, tends to erode over time with use; at some point, the punishment may end up functioning as an incentive to repeat the offense.15 For instance, when
an offender is sentenced to prison for the first time, the deterrent effect
of the punishment is at its maximum level for that offender. The second
time, the effect is reduced, and the tenth time it is reduced dramatically.
After spending forty years in prison, the prisoner does not really have a
life outside of prison. Prison provides meals, social status, social security,
and friends. If released, the prisoner may commit offenses only to return
to this familiar social environment.
Thousands of homeless people in the Western world commit offenses on cold days to find shelter, a warm meal, and a place to sleep. Not only does punishment not serve as a deter-
rent, but it serves as an incentive to repeat offending. In other words, the
punishment turns into the benefit of the offense. Nevertheless, for most
offenders and potential offenders, a combination of proper enforcement
with appropriate punishments can supply some deterrence, even if this is
not entirely effective. Our question, then, is whether deterrence is relevant
to ai systems.
Deterrence is intended to prevent the commission of the next offense by means of intimidation. Under current technology, intimidation is a feeling
that machines cannot experience. Intimidation is based on a fear of future
suffering imposed if an offense is committed. Because machines do not
experience suffering, as we have already noted, both intimidation and the
reason for it are nullified when considering appropriate punishment for
robots. At the same time, both retribution and deterrence may be relevant
purposes for the punishment of human participants in the commission of
offenses by ai entities (for example, users and programmers).
Rehabilitation is a relatively new purpose of modern sentencing, based on the idea that punishment and the process of sentencing may serve as opportunities for correcting offenders' social problems. Rehabilitation was inspired by the religious concept that every person may be corrected given the right conditions; it has been embraced as a major sentencing consideration in Britain since 1895, spreading from there to the rest of the world.16 The golden age of rehabilitation was between the 1920s and the 1970s,17 and its popularity rested on its forward-looking quality, which offered hope for a better world.
The methods of execution used in the United States were not considered
unconstitutional either.41
Retribution justifies the capital penalty only for severe offenses in which the penalty parallels the offense in suffering or result (for example, homicide offenses).42 Deterrence can justify capital punishment only as a deterrent to the public, not to the offender, because a dead person cannot be deterred.43 Rehabilitation is completely irrelevant to this punishment, as dead people cannot be rehabilitated. And given that retribution and de-
terrence are irrelevant for ai sentencing, as noted previously, and because
rehabilitation is irrelevant to capital punishment, the only general consid-
eration that supports capital punishment in the sentencing of ai entities
is incapacitation.
There is no doubt that a dead person is incapacitated from commit-
ting further offenses and that the most dominant consideration for the
death penalty is incapacitation.44 The question regarding the applicability
of capital punishment to ai systems is how to impose the death penalty
on them. To answer this question, we must follow the three-stage model
presented earlier.
First, we should analyze the meaning of capital punishment; second,
seek out its roots as far as ai systems are concerned; and finally, adjust the
punishment to these roots. Functionally, capital punishment is depriva-
tion of life. Although this deprivation may affect not only the offender but
other people as well (relatives, employees, and so on), the essence of the
death penalty is the death of the offender. When the offender is human, life
means the person’s existence as a functioning creature. When the offender
is a corporation or an ai system, its life may be defined through its activity.
A living ai system is a functioning one; therefore, the "life" of an ai system is its capability to function as such. Stopping the ai system's activity does not necessarily mean the "death" of the system. Death means
permanent incapacitation of the system’s life. Therefore, capital punish-
ment for an ai system means its permanent shutdown so that no further
offenses or any other activity can be expected on the part of the system.
When an ai system is shut down by court order, this means that the society
prohibits the operation of that entity because it is considered too danger-
ous for society.
This application of capital punishment to ai systems serves both the purposes of capital punishment and those of incapacitation as a general purpose of sentencing. When the offender is too dangerous for society and society decides to impose the death penalty (where that punishment is accepted in the given legal system), the penalty is intended to accomplish the total and final incapacitation of the offender. This is true of human offenders, corporations, and ai systems. For ai systems, permanent incapacitation means absolute shutdown under court order, with no option of reactivation.
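In engineering terms, the permanent shutdown described here might be sketched as follows. This hypothetical illustration assumes a registry that records the court order and refuses every reactivation request; all names are assumptions made for the sketch, not an existing API:

# Hypothetical sketch: "capital punishment" for an ai system as an irreversible,
# court-ordered decommissioning record.
class PermanentlyDecommissioned(Exception):
    """Raised on any attempt to reactivate a system shut down by court order."""

class DecommissionRegistry:
    def __init__(self) -> None:
        # system id -> reference to the court order; entries are never removed,
        # mirroring the "no option of reactivation" requirement in the text
        self._orders: dict[str, str] = {}

    def shut_down(self, system_id: str, court_order: str) -> None:
        self._orders[system_id] = court_order

    def authorize_activation(self, system_id: str) -> None:
        if system_id in self._orders:
            raise PermanentlyDecommissioned(
                f"{system_id} was permanently shut down under {self._orders[system_id]}")

registry = DecommissionRegistry()
registry.shut_down("guard-robot-7", court_order="case 42/2015")
try:
    registry.authorize_activation("guard-robot-7")
except PermanentlyDecommissioned as error:
    print(error)  # guard-robot-7 was permanently shut down under case 42/2015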
A permanently incapacitated system will not be involved in further
delinquent events. It may be argued that such shutdown may affect other
innocent persons (for example, the manufacturer of the system, or its pro-
grammers and users). Yet this is true not only for ai systems, but for human
offenders and corporations as well. The execution of an offender also af-
fects his innocent family (in the case of a human offender) or affects em-
ployees, directors, managers, shareholders, and so on (in the case of a cor-
poration). When the offender is an ai system, shutdown also affects other
innocent persons, but this is not unique to ai systems. Therefore, capital
punishment may be applicable to ai systems.
offenders from offending. Both retribution and deterrence, however, are ir-
relevant to ai systems because they experience neither suffering nor fear.50
Imprisonment for ai systems should be evaluated based on rehabilitation
and incapacitation, which are relevant to ai entities.
When the offender is deprived of his or her liberty, society can use this
prison term for the offender’s rehabilitation, by initiating inner change.
This change can be the result of activity carried out in prison. If the of-
fender accepts the inner change and does not return to delinquency, then
imprisonment is considered successful and the offender rehabilitated. At
the same time, when the offender is under strict supervision inside the
prison, his or her capability for committing further offenses is dramatically
reduced, which may be considered incapacitation as long as supervision
prevents recidivism.
Here is the question regarding the applicability of imprisonment: how
can imprisonment be imposed on ai systems? To answer this question,
we again follow the three-stage model. First, we analyze the roots of the
meaning of imprisonment; second, we seek the meaning of these roots for
ai systems; and finally, we adjust the punishment to the roots we have re-
vealed for ai systems.
Functionally, imprisonment is deprivation of liberty. Although this deprivation may affect not only the imprisoned offender but other people as well (for example, relatives and employees), the essence of imprisonment consists of restricting the offender's activity. When the offender is human, liberty means the person's freedom to act in any desired way. When the offender is a corporation or an ai system, its liberty may also be defined through its activity. The freedom of an ai system lies in the unrestricted exercise of its capabilities, covering both the fact of their exercise and their content.
Thus, the imposition of imprisonment on ai systems consists of de-
priving them of their liberty to act by restricting their activities for a spec-
ified period of time and under strict supervision. During this time the
ai system may be repaired to prevent the commission of further offenses.
Repair of the ai system may be more efficient if the system is incapacitated,
especially if it is done under court order. This situation can serve both
purposes of rehabilitation and incapacitation, which are the relevant sen-
tencing purposes for ai systems. When the ai system is in custody, under
restriction and supervision, its capability to offend is incapacitated.
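Functionally, such a sentence could be sketched as a term-limited whitelist on the system's activities, with every request logged for supervision. The following hypothetical illustration rests on assumed names and a deliberately simplified capability model, not on any real supervision mechanism:

# Hypothetical sketch: "imprisonment" as a court-imposed term during which only
# whitelisted activities are permitted and every request is logged for supervision.
from datetime import date, timedelta

class RestrictedSystem:
    def __init__(self, allowed: set[str], term: timedelta, start: date) -> None:
        self.allowed = allowed            # activities permitted during the term
        self.release_date = start + term  # end of the court-imposed term
        self.log: list[str] = []          # supervision record

    def act(self, activity: str, today: date) -> bool:
        permitted = today >= self.release_date or activity in self.allowed
        verdict = "performed" if permitted else "blocked"
        self.log.append(f"{today}: {verdict} {activity!r}")
        return permitted

system = RestrictedSystem({"diagnostics"}, term=timedelta(days=365), start=date(2015, 1, 1))
print(system.act("drive locomotive", date(2015, 6, 1)))  # False: blocked during the term
print(system.act("drive locomotive", date(2016, 6, 1)))  # True: the term has ended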
An ai system being repaired through inner changes, initiated by ex-
ternal factors (for example, programmers working under court order), and
gaining experience during the period of restriction, can be substantively
6.2.3. Probation
Prison overcrowding has forced states to develop substitutes for incar-
ceration. One of the popular substitutes is probation, developed in the
mid-nineteenth century by private charity and religious organizations that
undertook to care for convicted offenders by giving them the social tools
they needed to abandon delinquency and reintegrate into society.52 Most
countries embraced probation as part of their public sentencing systems.
The first US state to do so was Massachusetts, in 1878. Probation thus came to be operated and supervised by the state.53
The social tools provided to the offender vary from case to case. When
the offender is addicted to drugs, the social tools include a drug rehabilitation program.

6.2.4. Public Service
not severe, lenient measures are taken to bring about the required inner
change in the offender.
Public service has yet another dimension, which relates to the com-
munity. Public service is carried out in the offender’s community to signify
that he or she is part of that community, and that causing harm to the com-
munity boomerangs back onto the offender.57 In many cases, public service
is added to probation in order to improve the chances of the offender’s full
rehabilitation within the community.58 Public service has more than mere compensatory value, as it aims to make the offender understand and become sensitive to the needs of the community. Public service is part of the learning and reintegration processes that the offender undergoes.
Retribution is irrelevant for public service, because public service is
not intended to make the offender suffer. Neither is deterrence relevant
for public service because it is perceived as a lenient penalty with neg-
ligible deterrent value. Both retribution and deterrence are irrelevant to
ai sentencing. Incapacitation, which is relevant for ai sentencing, is not
reflected in public service, unless the framework for the specific public
service is extremely strict and the offender’s delinquent capabilities are
incapacitated.
The dominant purpose of public service, however, is rehabilitation
through reintegration of the offender into society. Here is the question re-
garding the applicability of public service: how can public service be im-
posed on ai systems? To answer this question, we again follow the three-
stage model. First, we analyze the roots of the meaning of public service;
second, we seek the meaning of these roots for ai systems; and finally, we
adjust the punishment to the roots we have revealed for ai systems.
Functionally, public service consists of supervised compensation to society, provided through the experience of integration with society. The offender expands his or her experience with society, making integration easier. Broadening the offender's social experience also benefits society, because the service includes a compensatory dimension. Social experience is
not exclusive to human offenders. Both corporations and ai systems have
strong interactions with the community. Public service may empower and
strengthen these interactions, and make them the basis for the required
inner change.
For example, a medical expert system, equipped with machine-
learning capabilities, is used in a private clinic to diagnose patients. The
system was found negligent, and the court has imposed public ser-
vice on it. To carry out this penalty, the system may be used by public
medical services or public hospitals. This can serve two main goals. First,
the system is exposed to more cases, and through machine learning it can
refine its functioning. Second, its work can be considered compensation
to society for the harm caused by the offense.
At the end of the public service term, the ai system is more experi-
enced, and if the machine-learning process was effective, system perfor-
mance is improved. Because public service is supervised, regardless of
whether it is accompanied by probation, the machine-learning process
and other inner processes are directed toward the prevention of further
offenses during the public service. By the end of the public service period,
the ai system will have contributed time and resources to the benefit of so-
ciety, and this may be considered compensation for the social harm caused
by the commission of the offense. Thus, the public service of ai systems
resembles human public service in its substance.
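Reduced to a sketch, the court order in this example amounts to a supervised quota of public cases, during which the system both compensates society and accumulates the experience treated here as rehabilitative. All names and the case model below are hypothetical assumptions:

# Hypothetical sketch: "public service" as a supervised, court-ordered quota of
# cases handled for a public hospital.
class ExpertSystemOnPublicService:
    def __init__(self, cases_ordered: int) -> None:
        self.cases_remaining = cases_ordered  # court-ordered service quota
        self.experience: list[str] = []       # cases seen; feeds machine learning

    def diagnose(self, case: str) -> str:
        self.experience.append(case)          # the system refines itself as it serves
        if self.cases_remaining > 0:
            self.cases_remaining -= 1         # unpaid public work counts toward the term
        return f"diagnosis for {case}"

    @property
    def term_completed(self) -> bool:
        return self.cases_remaining == 0

system = ExpertSystemOnPublicService(cases_ordered=2)
system.diagnose("public case A")
system.diagnose("public case B")
print(system.term_completed)  # True: the quota has been served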
It may be argued that the compensation is in practice contributed by
the manufacturers or users of the system, because they are the ones who
suffer from the absence of the system’s activity. This is true not only for
ai systems, but also for human offenders and corporations. When a human
offender performs public service, his or her absence is felt by family and
friends. When a corporation carries out public service, its resources are
not available to its employees, directors, clients, and so on. This absence
is part of carrying out the public service, regardless of the identity of the
offender, so that ai systems are not unique in this regard.
6.2.5. Fine
A fine is a payment made by the offender to the state treasury. It is not con-
sidered compensation because it is not given directly to the victims of the
offense, but to society as a whole. If we regard society as the victim of any
offense, we may view a fine as a type of general compensation. The fine
has evolved from the general remedy of compensation, when the criminal
process was still between two individuals (private plaintiff versus defen-
dant).59 When criminal law became public, the original compensation was
converted into a fine. Today, criminal courts may impose both fines and
compensation as part of the criminal process.
In the eighteenth century, the fine was not considered a preferred
penalty because it was not deemed to be as strong a deterrent as impris-
onment.60 But during the twentieth century, when prisons became over-
crowded and the cost of maintaining prisoners in state prisons increased,
the penal system was urged to increase the use of fines.61 To facilitate the
efficient collection of fines, in most legal systems the court can impose
imprisonment, public service, or confiscation of property in case of nonpayment.

Conclusions
Criminal liability for artificial intelligence entities may sound radical. For
centuries, criminal liability was considered to be part of an exclusively
human universe. The first crack in the concept occurred in the seventeenth
century, when corporations were admitted into this exclusive club. Cor-
porate offenders joined human offenders, with criminal liability and punishment imposed equally on both. The twentieth century intro-
duced ai technology to humankind. This technology developed rapidly in
an attempt to imitate human capabilities. Today, ai systems imitate some
human capabilities perfectly; indeed, they outperform humans in many
areas. Other human capabilities, however, still cannot be imitated.
Criminal liability does not require that offenders possess all human
capabilities, only some. If an ai entity possesses these capabilities, then
logically and rationally, criminal liability can be imposed whenever an
offense is committed. The idea of criminal liability of ai entities should
not be confused with the idea of moral accountability, which is a hugely complex issue not only for machines but also for humans. Morality, in general, has no single definition that is acceptable to all societies and individuals. Deontological and teleological morality are the most widely accepted types, and in many situations they lead in opposite directions, each of them "moral." The
Nazis considered themselves to be deontologically moral, although most
other people, societies, and individuals thought otherwise. Because moral-
ity is so difficult to assess, moral accountability is not the most appropriate
and efficient way of evaluating responsibility in criminal cases. Therefore,
society has chosen criminal liability as the most appropriate means for
dealing with delinquency. Modern criminal liability is independent of morality of any kind, as well as of the concept of evil. It is imposed in
an organized, almost mathematical way.
Criminal liability has definite requirements, no more and no less. If
an ai entity meets all these requirements, then there is no reason not to
impose criminal liability on it. We have seen that ai entities are capable
of meeting these requirements in the same way that human and corporate
offenders do (but not animals). We have also seen that all types of crimi-
Notes

Notes to the Preface

1. This is the opening example of chapter 2, analyzed there in detail.
Notes to Chapter 1
13. See, for example, Edwina L. Rissland, Artificial Intelligence and Law: Stepping Stones to a Model of Legal Reasoning, 99 Yale L.J. 1957, 1961–1964 (1990); Alan Tyree, Expert Systems in Law 7–11 (1989).
14. Robert M. Glorioso & Fernando C. Colon Osorio, Engineering Intelligent Systems: Concepts and Applications (1980).
15. Padhy, supra note 6, at p. 13.
16. Nick Carbone, South Korea Rolls Out Robotic Prison Guards (Nov. 27, 2011), http://newsfeed.time.com (last visited Jan. 4, 2012); Alex Knapp, South Korean Prison To Feature Robot Guards (Nov. 27, 2011), http://www.forbes.com (last visited Feb. 29, 2012).
17. W. J. Hennigan, New Drone Has No Pilot Anywhere, So Who's Accountable?, Los Angeles Times, Jan. 26, 2012. See also http://www.latimes.com/business (last visited Feb. 29, 2012).
18. Eugene Charniak & Drew McDermott, Introduction to Artificial Intelligence (1985).
19. Genesis 3:1–24.
20. Richard E. Bellman, An Introduction to Artificial Intelligence: Can Computers Think? (1978).
21. John Haugeland, Artificial Intelligence: The Very Idea (1985).
22. Charniak & McDermott, supra note 18.
23. Robert J. Schalkoff, Artificial Intelligence: An Engineering Approach (1990).
24. Raymond Kurzweil, The Age of Intelligent Machines (1990).
25. Patrick Henry Winston, Artificial Intelligence (3d ed. 1992).
26. George F. Luger & William A. Stubblefield, Artificial Intelligence: Structures and Strategies for Complex Problem Solving (6th ed. 2008).
27. Elaine Rich & Kevin Knight, Artificial Intelligence (2d ed. 1991).
28. Padhy, supra note 6, at p. 7.
29. Alan Turing, Computing Machinery and Intelligence, 59 Mind 433, 433–460 (1950).
30. Donald Davidson, Turing's Test, in Modelling the Mind 1 (K. A. Mohyeldin Said, W. H. Newton-Smith, R. Viale, & K. V. Wilkes eds., 1990).
31. Robert M. French, Subcognition and the Limits of the Turing Test, 99 Mind 53, 53–54 (1990).
32. Masoud Yazdani & Ajit Narayanan, Artificial Intelligence: Human Effects (1985).
33. Steven L. Tanimoto, Elements of Artificial Intelligence: An Introduction Using LISP (1987).
34. John R. Searle, Minds, Brains and Science 28–41 (1984); John R. Searle, Minds, Brains & Programs, 3 Behavioral & Brain Sci. 417 (1980).
35. Roger C. Schank, What is AI, Anyway?, in The Foundations of Artificial Intelligence 3, 4–6 (Derek Partridge & Yorick Wilks eds., 1990, 2006).
36. Douglas R. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid 539–604 (1979, 1999).
L.Ed. 874 (1900); State v. Bowen, 118 Kan. 31, 234 P. 46 (1925); Hughes v. Commonwealth, 19 Ky.L.R. 497, 41 S.W. 294 (1897); People v. Cherry, 307 N.Y. 308, 121 N.E.2d 238 (1954); State v. Hooker, 17 Vt. 658 (1845); Commonwealth v. French, 531 Pa. 42, 611 A.2d 175 (1992).
102. Gabriel Hallevy, The Matrix of Derivative Criminal Liability 18–24 (2012).
103. Dugdale, (1853) 1 El. & Bl. 435, 118 E.R. 499, 500: "the mere intent cannot constitute a misdemeanour when unaccompanied with any act"; Ex parte Smith, 135 Mo. 223, 36 S.W. 628 (1896); Proctor v. State, 15 Okl.Cr. 338, 176 P. 771 (1918); State v. Labato, 7 N.J. 137, 80 A.2d 617 (1951); Lambert v. State, 374 P.2d 783 (Okla.Crim.App.1962); In re Leroy, 285 Md. 508, 403 A.2d 1226 (1979).
104. For the principle of legality in criminal law, see Hallevy, supra note 86.
105. Id. at pp. 135–137.
106. Id. at pp. 49–80.
107. Id. at pp. 81–132.
108. See, for example, in the United States, Robinson v. California, 370 U.S. 660, 82 S.Ct. 1417, 8 L.Ed.2d 758 (1962).
109. Glanville Williams, Criminal Law: The General Part sec. 11 (2nd ed. 1961).
110. Andrew Ashworth, Principles of Criminal Law 157–158, 202 (5th ed. 2006).
111. G. R. Sullivan, Knowledge, Belief, and Culpability, in Criminal Law Theory: Doctrines of the General Part 207, 214 (Stephen Shute & A. P. Simester eds., 2005).
112. See, for example, article 2.02(5) of The American Law Institute, Model Penal Code: Official Draft and Explanatory Notes 22 (1962, 1985), which provides, "When the law provides that negligence suffices to establish an element of an offense, such element also is established if a person acts purposely, knowingly or recklessly. When recklessness suffices to establish an element, such element also is established if a person acts purposely or knowingly. When acting knowingly suffices to establish an element, such element also is established if a person acts purposely."
113. William S. Laufer, Corporate Bodies and Guilty Minds, 43 Emory L.J. 647 (1994); Kathleen F. Brickey, Corporate Criminal Accountability: A Brief History and an Observation, 60 Wash. U. L.Q. 393 (1983).
114. William Searle Holdsworth, A History of English Law 475–476 (1923).
115. William Searle Holdsworth, English Corporation Law in the 16th and 17th Centuries, 31 Yale L.J. 382 (1922).
116. William Robert Scott, The Constitution and Finance of English, Scottish and Irish Joint-Stock Companies to 1720 462 (1912).
117. Bishop Carleton Hunt, The Development of the Business Corporation in England 1800–1867 6 (1963).
Notes to Chapter 2
16. See, for example, State v. Dubina, 164 Conn. 95, 318 A.2d 95 (1972); State v. Bono, 128 N.J.Super. 254, 319 A.2d 762 (1974); State v. Fletcher, 322 N.C. 415, 368 S.E.2d 633 (1988).
17. S. Z. Feller, Les Délits de Mise en Danger, 40 Rev. Int. de Droit Pénal 179 (1969).
18. This is the results component of all homicide offenses. See Sir Edward Coke, Institutes of the Laws of England: Third Part 47 (6th ed. 1681, 1817, 2001): "Murder is when a man of sound memory, and of the age of discretion, unlawfully killeth within any county of the realm any reasonable creature in rerum natura under the king's peace, with malice aforethought, either expressed by the party or implied by law, [so as the party wounded, or hurt, etc die of the wound or hurt, etc within a year and a day after the same]."
19. Henderson v. Kibbe, 431 U.S. 145, 97 S.Ct. 1730, 52 L.Ed.2d 203 (1977); Commonwealth v. Green, 477 Pa. 170, 383 A.2d 877 (1978); State v. Crocker, 431 A.2d 1323 (Me.1981); State v. Martin, 119 N.J. 2, 573 A.2d 1359 (1990).
20. See, for example, Wilson v. State, 24 S.W. 409 (Tex.Crim.App.1893); Henderson v. State, 11 Ala.App. 37, 65 So. 721 (1914); Cox v. State, 305 Ark. 244, 808 S.W.2d 306 (1991); People v. Bailey, 451 Mich. 657, 549 N.W.2d 325 (1996).
21. Morton J. Horwitz, The Rise and Early Progressive Critique of Objective Causation, in The Politics of Law: A Progressive Critique 471 (David Kairys ed., 3d ed. 1998); Benge, (1865) 4 F. & F. 504, 176 E.R. 665; Longbottom, (1849) 3 Cox C. C. 439.
22. Jane Stapleton, Law, Causation and Common Sense, 8 Oxford J. Legal Stud. 111 (1988).
23. Jerome Hall, General Principles of Criminal Law 70–77 (2d ed. 1960, 2005); David Ormerod, Smith & Hogan Criminal Law 91–92 (11th ed. 2005); G., [2003] U.K.H.L. 50, [2004] 1 A.C. 1034, [2003] 3 W.L.R. 1060, [2003] 4 All E.R. 765, [2004] 1 Cr. App. Rep. 21, (2003) 167 J.P. 621, [2004] Crim. L. R. 369.
24. Sweet v. Parsley, [1970] A.C. 132, [1969] 1 All E.R. 347, [1969] 2 W.L.R. 470, 133 J.P. 188, 53 Cr. App. Rep. 221, 209 E.G. 703, [1969] E.G.D. 123.
25. See sections 1.3.2 and 2.1.
26. See, for example, supra note 16.
27. For the definition of circumstances, see section 2.1.2.
28. For the definition of results, see section 2.1.3.
29. This component of awareness functions also as the legal causal connection in mens rea offenses, but this function has no additional significance in this context.
30. Specific intent is sometimes mistakenly referred to as intent in order to differentiate it from general intent, which is normally used to express mens rea.
31. See section 5.2.2.
32. Sir Gerald Gordon, The Criminal Law of Scotland 61 (1st ed. 1967); Treacy v. Director of Public Prosecutions, [1971] A.C. 537, 559, [1971] 1 All E.R. 110, [1971] 2 W.L.R. 112, 55 Cr. App. Rep. 113, 135 J.P. 112.
33. William G. Lycan, Introduction, in Mind and Cognition 3, 3–13 (William G. Lycan ed., 1990).
34. See, for example, William James, The Principles of Psychology (1890).
35. See, for example, Bernard Baars, In the Theatre of Consciousness (1997).
36. Hermann von Helmholtz, The Facts of Perception (1878).
37. United States v. Youts, 229 F.3d 1312 (10th Cir.2000); State v. Sargent, 156 Vt. 463, 594 A.2d 401 (1991); United States v. Spinney, 65 F.3d 231 (1st Cir.1995); State v. Wyatt, 198 W.Va. 530, 482 S.E.2d 147 (1996); United States v. Wert-Ruiz, 228 F.3d 250 (3rd Cir.2000); Rollin M. Perkins, "Knowledge" as a Mens Rea Requirement, 29 Hastings L.J. 953, 964 (1978).
38. United States v. Jewell, 532 F.2d 697 (9th Cir.1976); United States v. Ladish Malting Co., 135 F.3d 484 (7th Cir.1998).
39. Paul Weiss, On the Impossibility of Artificial Intelligence, 44 Rev. Metaphysics 335, 340 (1990).
40. See, for example, Tim Morris, Computer Vision and Image Processing (2004); Milan Sonka, Vaclav Hlavac, & Roger Boyle, Image Processing, Analysis, and Machine Vision (2008).
41. Walter W. Soroka, Analog Methods in Computation and Simulation (1954).
42. See, for example, David Manners & Tsugio Makimoto, Living with the Chip (1995).
43. For robot guards, see section 1.1.1.
44. For the endlessness of the quest for machina sapiens, see section 1.1.2.
45. In re Winship, 397 U.S. 358, 90 S.Ct. 1068, 25 L.Ed.2d 368 (1970).
46. United States v. Heredia, 483 F.3d 913 (2006); United States v. Ramon-Rodriguez, 492 F.3d 930 (2007); Saik, [2006] U.K.H.L. 18, [2007] 1 A.C. 18; Da Silva, [2006] E.W.C.A. Crim. 1654, [2006] 4 All E.R. 900, [2006] 2 Cr. App. Rep. 517; Evans v. Bartlam, [1937] A.C. 473, 479, [1937] 2 All E.R. 646; G. R. Sullivan, Knowledge, Belief, and Culpability, in Criminal Law Theory: Doctrines of the General Part 207, 213–214 (Stephen Shute & A. P. Simester eds., 2005).
47. State v. Pereira, 72 Conn. App. 545, 805 A.2d 787 (2002); Thompson v. United States, 348 F.Supp.2d 398 (2005); Virgin Islands v. Joyce, 210 F. App. 208 (2006).
48. See, for example, United States v. Doe, 136 F.3d 631 (9th Cir.1998); State v. Audette, 149 Vt. 218, 543 A.2d 1315 (1988); Ricketts v. State, 291 Md. 701, 436 A.2d 906 (1981); State v. Rocker, 52 Haw. 336, 475 P.2d 684 (1970); State v. Hobbs, 252 Iowa 432, 107 N.W.2d 238 (1961); State v. Daniels, 236 La. 998, 109 So.2d 896 (1958).
49. People v. Disimone, 251 Mich.App. 605, 650 N.W.2d 436 (2002); Carter v. United States, 530 U.S. 255, 120 S.Ct. 2159, 147 L.Ed.2d 203 (2000); State v. Neuzil, 589 N.W.2d 708 (Iowa 1999); People v. Henry, 239 Mich.App. 140, 607 N.W.2d 767 (1999); Frey v. United States, 708 So.2d 918 (Fla.1998); United States v. Randolph, 93 F.3d 656 (9th Cir.1996); United States v. Torres, 977 F.2d 321 (7th Cir.1992).
50. Schmidt v. United States, 133 F. 257 (9th Cir.1904); State v. Ehlers, 98 N.J.L. 263, 119 A. 15 (1922); United States v. Pomponio, 429 U.S. 10, 97 S.Ct. 22, 50 L.Ed.2d 12 (1976); State v. Gray, 221 Conn. 713, 607 A.2d 391 (1992); State v. Mendoza, 709 A.2d 1030 (R.I.1998).
51. Ludwig Wittgenstein, Philosophische Untersuchungen §§ 629–660 (1953).
52. State v. Ayer, 136 N.H. 191, 612 A.2d 923 (1992); State v. Smith, 170 Wis.2d 701, 490 N.W.2d 40 (App.1992).
53. Studstill v. State, 7 Ga. 2 (1849); Glanville Williams, Oblique Intention, 46 Camb. L.J. 417 (1987).
54. People v. Smith, 57 Cal. App. 4th 1470, 67 Cal. Rptr. 2d 604 (1997); Wieland v. State, 101 Md. App. 1, 643 A.2d 446 (1994).
55. Stephen Shute, Knowledge and Belief in the Criminal Law, in Criminal Law Theory: Doctrines of the General Part 182–187 (Stephen Shute & A. P. Simester eds., 2005); Antony Kenny, Will, Freedom and Power 42–43 (1975); John R. Searle, The Rediscovery of Mind 62 (1992); Antony Kenny, What is Faith? 30–31 (1992); State v. VanTreese, 198 Iowa 984, 200 N.W. 570 (1924), but see Montgomery v. Commonwealth, 189 Ky. 306, 224 S.W. 878 (1920); State v. Murphy, 674 P.2d 1220 (Utah.1983); State v. Blakely, 399 N.W.2d 317 (S.D.1987).
56. See, for example, Ned Block, What Intuitions About Homunculi Don't Show, 3 Behavioral & Brain Sci. 425 (1980); Bruce Bridgeman, Brains + Programs = Minds, 3 Behavioral & Brain Sci. 427 (1980).
57. See, for example, Feng-Hsiung Hsu, Behind Deep Blue: Building the Computer that Defeated the World Chess Champion (2002); David Levy & Monty Newborn, How Computers Play Chess (1991).
58. See section 2.2.2.
59. For the endlessness of the quest for machina sapiens, see section 1.1.2.
60. See section 1.3.2.
61. Russian roulette is a game of chance in which participants place a single round in a gun, spin the cylinder, place the muzzle against their own head, and pull the trigger.
62. G., [2003] U.K.H.L. 50, [2003] 4 All E.R. 765, [2004] 1 Cr. App. Rep. 237, 167 J.P. 621, [2004] Crim. L.R. 369, [2004] 1 A.C. 1034; Victor Tadros, Recklessness and the Duty to Take Care, in Criminal Law Theory: Doctrines of the General Part 227 (Stephen Shute & A. P. Simester eds., 2005); Gardiner, [1994] Crim. L.R. 455.
102. People v. Monks, 133 Cal. App. 440, 446 (Cal. Dist. Ct. App. 1933).
103. See Solum, supra note 68, at pp. 1276–1279.
104. See section 2.3.1.
105. See section 2.3.2.
106. Hallevy, supra note 14, at pp. 241–248.
107. Digesta, 48.19.38.5; Codex Justinianus, 9.12.6; Reinhard Zimmermann, The Law of Obligations: Roman Foundations of the Civilian Tradition 197 (1996).
108. See, for example, United States v. Greer, 467 F.2d 1064 (7th Cir.1972); People v. Cooper, 194 Ill.2d 419, 252 Ill.Dec. 458, 743 N.E.2d 32 (2000).
109. State v. Lucas, 55 Iowa 321, 7 N.W. 583 (1880); Roy v. United States, 652 A.2d 1098 (D.C.App.1995); People v. Weiss, 256 App.Div. 162, 9 N.Y.S.2d 1 (1939); People v. Little, 41 Cal.App.2d 797, 107 P.2d 634 (1941).
110. State v. Linscott, 520 A.2d 1067 (Me.1987): "a rule allowing for a murder conviction under a theory of accomplice liability based upon an objective standard, despite the absence of evidence that the defendant possessed the culpable subjective mental state that constitutes an element of the crime of murder, does not represent a departure from prior Maine law" (original emphasis).
111. People v. Prettyman, 14 Cal.4th 248, 58 Cal.Rptr.2d 827, 926 P.2d 1013 (1996); Chance v. State, 685 A.2d 351 (Del.1996); Ingram v. United States, 592 A.2d 992 (D.C.App.1991); Richardson v. State, 697 N.E.2d 462 (Ind.1998); Mitchell v. State, 114 Nev. 1417, 971 P.2d 813 (1998); State v. Carrasco, 122 N.M. 554, 928 P.2d 939 (1996); State v. Jackson, 137 Wash.2d 712, 976 P.2d 1229 (1999).
112. United States v. Powell, 929 F.2d 724 (D.C.Cir.1991).
113. State v. Kaiser, 260 Kan. 235, 918 P.2d 629 (1996); United States v. Andrews, 75 F.3d 552 (9th Cir.1996); State v. Goodall, 407 A.2d 268 (Me.1979). Compare: People v. Kessler, 57 Ill.2d 493, 315 N.E.2d 29 (1974).
114. People v. Cabaltero, 31 Cal.App.2d 52, 87 P.2d 364 (1939); People v. Michalow, 229 N.Y. 325, 128 N.E. 228 (1920).
115. Anderson, [1966] 2 Q.B. 110, [1966] 2 All E.R. 644, [1966] 2 W.L.R. 1195, 50 Cr. App. Rep. 216, 130 J.P. 318: "put the principle of law to be invoked in this form: that where two persons embark on a joint enterprise, each is liable for the acts done in pursuance of that joint enterprise, that that includes liability for unusual consequences if they arise from the execution of the agreed joint enterprise."
116. English, [1999] A.C. 1, [1997] 4 All E.R. 545, [1997] 3 W.L.R. 959, [1998] 1 Cr. App. Rep. 261, [1998] Crim. L.R. 48, 162 J.P. 1; Webb, [2006] E.W.C.A. Crim. 2496, [2007] All E.R. (D) 406; O'Flaherty, [2004] E.W.C.A. Crim. 526, [2004] 2 Cr. App. Rep. 315.
117. BGH 24, 213; BGH 26, 176; BGH 26, 244.
118. See section 2.3.2.
119. See section 2.3.1.
Notes to Chapter 3
21. See, for example, Jerome Hall, Negligent Behaviour Should Be Excluded from Penal Liability, 63 Colum. L. Rev. 632 (1963); Robert P. Fine & Gary M. Cohen, Is Criminal Negligence a Defensible Basis for Criminal Liability?, 16 Buff. L. Rev. 749 (1966).
22. See section 2.1.1.
23. For car accident statistics in the United States, see, for example, http://www.cdc.gov (last visited Feb. 29, 2012).
24. See section 3.2.2.
25. See section 1.3.2.
26. For example, in France (article 121-3 of the French penal code).
27. See section 2.2.2.
28. In Hall v. Brooklands Auto Racing Club, [1932] All E.R. 208, [1933] 1 K.B. 205, 101 L.J.K.B. 679, 147 L.T. 404, 48 T.L.R. 546, the "reasonable person" has been described this way: "The person concerned is sometimes described as 'the man in the street', or 'the man in the Clapham omnibus', or, as I recently read in an American author, 'the man who takes the magazines at home, and in the evening pushes the lawn mower in his shirt sleeves'."
29. State v. Bunkley, 202 Conn. 629, 522 A.2d 795 (1987); State v. Evans, 134 N.H. 378, 594 A.2d 154 (1991).
30. People v. Decina, 2 N.Y.2d 133, 157 N.Y.S.2d 558, 138 N.E.2d 799 (1956); Government of the Virgin Islands v. Smith, 278 F.2d 169 (3rd Cir.1960); People v. Howk, 56 Cal.2d 687, 16 Cal.Rptr. 370, 365 P.2d 426 (1961); State v. Torres, 495 N.W.2d 678 (1993).
31. See section 1.1.2.
32. See section 2.2.2.
33. See section 1.3.
34. See, for example, Rollins v. State, 2009 Ark. 484, 347 S.W.3d 20 (2009); People v. Larkins, 2010 Mich. App. Lexis 1891 (2010); Driver v. State, 2011 Tex. Crim. App. Lexis 4413 (2011).
35. See section 2.3.2.
36. See, for example, Peter Alldridge, The Doctrine of Innocent Agency, 2 Crim. L.F. 45 (1990).
37. See section 2.3.1.
38. See section 2.3.3.
39. See sections 3.3.1 and 3.3.2.
40. See, for example, United States v. Robertson, 33 M.J. 832 (1991); United States v. Buber, 62 M.J. 476 (2006).
41. See section 2.2.2.
41. See, for example, Peter Lynch, The Origins of Computer Weather Prediction and Climate Modeling, 227 Journal of Computational Physics 3431 (2008).
Notes to Chapter 5

15. Andrew Walkover, The Infancy Defense in the New Juvenile Court, 31 U.C.L.A. L. Rev. 503 (1984); Keith Foren, Casenote: In Re Tyvonne M. Revisited: The Criminal Infancy Defense in Connecticut, 18 Q. L. Rev. 733 (1999).
16. Frederick J. Ludwig, Rationale of Responsibility for Young Offenders, 29 Neb. L. Rev. 521 (1950); In re Tyvonne, 211 Conn. 151, 558 A.2d 661 (1989).
17. See section 5.2.3.
18. See section 5.2.4.
19. In Bratty v. Attorney-General for Northern Ireland, [1963] A.C. 386, 409, [1961] 3 All E.R. 523, [1961] 3 W.L.R. 965, 46 Cr. App. Rep 1, Lord Denning noted, "The requirement that it should be a voluntary act is essential, not only in a murder case, but also in every criminal case. No act is punishable if it is done involuntarily." State v. Mishne, 427 A.2d 450 (Me.1981); State v. Case, 672 A.2d 586 (Me.1996).
20. See, for example, People v. Newton, 8 Cal.App.3d 359, 87 Cal.Rptr. 394 (1970).
21. Kenneth L. Campbell, Psychological Blow Automatism: A Narrow Defence, 23 Crim. L.Q. 342 (1981); Winifred H. Holland, Automatism and Criminal Responsibility, 25 Crim. L.Q. 95 (1982).
22. People v. Higgins, 5 N.Y.2d 607, 186 N.Y.S.2d 623, 159 N.E.2d 179 (1959); State v. Welsh, 8 Wash.App. 719, 508 P.2d 1041 (1973).
23. Reed v. State, 693 N.E.2d 988 (Ind.App.1998).
24. Quick, [1973] Q.B. 910, [1973] 3 All E.R. 347, [1973] 3 W.L.R. 26, 57 Cr. App. Rep. 722, 137 J.P. 763; C, [2007] E.W.C.A. Crim. 1862, [2007] All E.R. (D) 91.
25. Fain v. Commonwealth, 78 Ky. 183 (1879); Bradley v. State, 102 Tex. Crim.R. 41, 277 S.W. 147 (1926); Norval Morris, Somnambulistic Homicide: Ghosts, Spiders, and North Koreans, 5 Res Judicatae 29 (1951).
26. McClain v. State, 678 N.E.2d 104 (Ind.1997).
27. People v. Newton, 8 Cal.App.3d 359, 87 Cal.Rptr. 394 (1970); Read v. People, 119 Colo. 506, 205 P.2d 233 (1949); Carter v. State, 376 P.2d 351 (Okl.Crim.App.1962).
28. People v. Wilson, 66 Cal.2d 749, 59 Cal.Rptr. 156, 427 P.2d 820 (1967); People v. Lisnow, 88 Cal.App.3d Supp. 21, 151 Cal.Rptr. 621 (1978); Lawrence Taylor & Katharina Dalton, Premenstrual Syndrome: A New Criminal Defense?, 19 Cal. W. L. Rev. 269 (1983); Michael J. Davidson, Feminine Hormonal Defenses: Premenstrual Syndrome and Postpartum Psychosis, 5 Army Lawyer (2000).
29. Government of the Virgin Islands v. Smith, 278 F.2d 169 (3rd Cir.1960); People v. Freeman, 61 Cal.App.2d 110, 142 P.2d 435 (1943); State v. Hinkle, 200 W.Va. 280, 489 S.E.2d 257 (1996).
30. State v. Gish, 17 Idaho 341, 393 P.2d 342 (1964); Evans v. State, 322 Md. 24, 585 A.2d 204 (1991); State v. Jenner, 451 N.W.2d 710 (S.D.1990); Lester v. State, 212 Tenn. 338, 370 S.W.2d 405 (1963); Polston v. State, 685 P.2d 1 (Wyo.1984).
v. Currens, 290 F.2d 751 (3rd Cir.1961); United States v. Chandler, 393 F.2d 920 (4th Cir.1968); Blake v. United States, 407 F.2d 908 (5th Cir.1969); United States v. Smith, 404 F.2d 720 (6th Cir.1968); United States v. Shapiro, 383 F.2d 680 (7th Cir.1967); Pope v. United States, 372 F.2d 710 (8th Cir.1970).
46. Commonwealth v. Herd, 413 Mass. 834, 604 N.E.2d 1294 (1992); State v. Curry, 45 Ohio St.3d 109, 543 N.E.2d 1228 (1989); State v. Barrett, 768 A.2d 929 (R.I.2001); State v. Lockhart, 208 W.Va. 622, 542 S.E.2d 443 (2000). See also 18 U.S.C.A. §17.
47. The American Law Institute, Model Penal Code: Official Draft and Explanatory Notes 61–62 (1962, 1985): "(1) A person is not responsible for criminal conduct if at the time of such conduct as a result of mental disease or defect he lacks substantial capacity either to appreciate the criminality [wrongfulness] of his conduct or to conform his conduct to the requirements of law; (2) As used in this Article, the terms 'mental disease or defect' do not include an abnormality manifested only by repeated criminal or otherwise antisocial conduct."
48. State v. Elsea, 251 S.W.2d 650 (Mo.1952); State v. Johnson, 233 Wis. 668, 290 N.W. 159 (1940); State v. Hadley, 65 Utah 109, 234 P. 940 (1925); Henry Weihofen, Mental Disorder as a Criminal Defense 119 (1954); K. W. M. Fulford, Value, Action, Mental Illness, and the Law, in Action and Value in Criminal Law 279 (Stephen Shute, John Gardner, & Jeremy Horder eds., 2003).
49. See sections 2.2.2 and 2.2.3.
50. People v. Sommers, 200 P.3d 1089 (2008); McNeil v. United States, 933 A.2d 354 (2007); Rangel v. State, 2009 Tex.App. 1555 (2009); Commonwealth v. Shumway, 72 Va.Cir. 481 (2007).
51. R. U. Singh, History of the Defence of Drunkenness in English Criminal Law, 49 Law Q. Rev. 528 (1933).
52. Theodori Liber Poenitentialis, III, 13 (668–690).
53. Francis Bowes Sayre, Mens Rea, 45 Harv. L. Rev. 974, 1014–1015 (1932).
54. William Oldnall Russell, A Treatise on Crimes and Misdemeanors 8 (1843, 1964).
55. Marshall, (1830) 1 Lewin 76, 168 E.R. 965.
56. Pearson, (1835) 2 Lewin 144, 168 E.R. 1108; Thomas, (1837) 7 Car. & P. 817, 173 E.R. 356: "Drunkenness may be taken into consideration in cases where what the law deems sufficient provocation has been given, because the question is, in such cases, whether the fatal act is to be attributed to the passion of anger excited by the previous provocation, and that passion is more easily excitable in a person when in a state of intoxication than when he is sober."
57. Meakin, (1836) 7 Car. & P. 297, 173 E.R. 131; Meade, [1909] 1 K.B. 895; Pigman v. State, 14 Ohio 555 (1846); People v. Harris, 29 Cal. 678 (1866); People v. Townsend, 214 Mich. 267, 183 N.W. 177 (1921).
72. Digesta, 22.6.9: "juris quidam ignorantiam cuique nocere, facti vero ignorantiam non nocere."
73. See, for example, Brett v. Rigden, (1568) 1 Plowd. 340, 75 E.R. 516; Mildmay, (1584) 1 Co. Rep. 175a, 76 E.R. 379; Manser, (1584) 2 Co. Rep. 3, 76 E.R. 392; Vaux, (1613) 1 Blustrode 197, 80 E.R. 885; Bailey, (1818) Russ. & Ry. 341, 168 E.R. 835; Esop, (1836) 7 Car. & P. 456, 173 E.R. 203; Crawshaw, (1860) Bell. 303, 169 E.R. 1271; Schuster v. State, 48 Ala. 199 (1872).
74. Forbes, (1835) 7 Car. & P. 224, 173 E.R. 99; Parish, (1837) 8 Car. & P. 94, 173 E.R. 413; Allday, (1837) 8 Car. & P. 136, 173 E.R. 431; Dotson v. State, 6 Cold. 545 (1869); Cutter v. State, 36 N.J.L. 125 (1873); Squire v. State, 46 Ind. 459 (1874).
75. State v. Goodenow, 65 Me. 30 (1876); State v. Whitoomb, 52 Iowa 85, 2 N.W. 970 (1879).
76. Lutwin v. State, 97 N.J.L. 67, 117 A. 164 (1922); State v. Whitman, 116 Fla. 196, 156 So. 705 (1934); United States v. Mancuso, 139 F.2d 90 (3rd Cir.1943); State v. Chicago, M. & St.P.R. Co., 130 Minn. 144, 153 N.W. 320 (1915); Coal & C.R. v. Conley, 67 W.Va. 129, 67 S.E. 613 (1910); State v. Striggles, 202 Iowa 1318, 210 N.W. 137 (1926); United States v. Albertini, 830 F.2d 985 (9th Cir.1987).
77. State v. Sheedy, 125 N.H. 108, 480 A.2d 887 (1984); People v. Ferguson, 134 Cal.App. 41, 24 P.2d 965 (1933); Andrew Ashworth, Testing Fidelity to Legal Values: Official Involvement and Criminal Justice, 63 Mod. L. Rev. 663 (2000); Glanville Williams, The Draft Code and Reliance upon Official Statements, 9 Legal Stud. 177 (1989).
78. Rollin M. Perkins, Ignorance and Mistake in Criminal Law, 88 U. Pa. L. Rev. 35 (1940).
79. See section 4.2.2.
80. See sections 4.3.2 and 4.3.3.
81. See section 1.1.1.
82. See section 5.1.
83. Chas E. George, Limitation of Police Powers, 12 Law. & Banker & S. Bench & B. Rev. 740 (1919); Kam C. Wong, Police Powers and Control in the People's Republic of China: The History of Shoushen, 10 Colum. J. Asian L. 367 (1996); John S. Baker, Jr., State Police Powers and the Federalization of Local Crime, 72 Temp. L. Rev. 673 (1999).
84. Dolores A. Donovan & Stephanie M. Wildman, Is the Reasonable Man Obsolete? A Critical Perspective on Self-Defense and Provocation, 14 Loy. L.A. L. Rev. 435, 441 (1981); Joshua Dressler, Rethinking Heat of Passion: A Defense in Search of a Rationale, 73 J. Crim. L. & Criminology 421, 444–450 (1982); Kent Greenawalt, The Perplexing Borders of Justification and Excuse, 84 Colum. L. Rev. 1897, 1898, 1915–1919 (1984).
85. State v. Brosnan, 221 Conn. 788, 608 A.2d 49 (1992); State v. Gallagher, 191 Conn. 433, 465 A.2d 323 (1983); State v. Nelson, 329 N.W.2d 643 (Iowa 1983); State v. Farley, 225 Kan. 127, 587 P.2d 337 (1978).
86. Commonwealth v. Monico, 373 Mass. 298, 366 N.E.2d 1241 (1977);
Commonwealth v. Johnson, 412 Mass. 368, 589 N.E.2d 311 (1992);
Duckett v. State, 966 P.2d 941 (Wyo.1998); People v. Young, 11 N.Y.2d 274, 229
N.Y.S.2d 1, 183 N.E.2d 319 (1962); Batson v. State, 113 Nev. 669, 941 P.2d
478 (1997); State v. Wenger, 58 Ohio St.2d 336, 390 N.E.2d 801 (1979); Moore
v. State, 25 Okl.Crim. 118, 218 P. 1102 (1923).
87. Williams v. State, 70 Ga.App. 10, 27 S.E.2d 109 (1943); State v.
Totman, 80 Mo.App. 125 (1899).
88. Lawson, [1986] V.R. 515; Daniel v. State, 187 Ga. 411, 1 S.E.2d 6
(1939).
89. John Barker Waite, The Law of Arrest, 24 tex . l . rev . 279 (1946).
90. People v. Williams, 56 Ill.App.2d 159, 205 N.E.2d 749 (1965); People
v. Minifie, 13 Cal.4th 1055, 56 Cal.Rptr.2d 133, 920 P.2d 1337 (1996); State v.
Coffin, 128 N.M. 192, 991 P.2d 477 (1999).
91. State v. Philbrick, 402 A.2d 59 (Me.1979); State v. Havican, 213 Conn.
593, 569 A.2d 1089 (1990); State v. Harris, 222 N.W.2d 462 (Iowa 1974);
Judith Fabricant, Homicide in Response to a Threat of Rape: A Theoreti-
cal Examination of the Rule of Justification, 11 golden gate u . l . rev . 945
(1981).
92. Celia Wells, Battered Woman Syndrome and Defences to Homicide:
Where Now?, 14 legal stud . 266 (1994); Aileen McColgan, In Defence of Bat-
tered Women who Kill, 13 oxford j . legal stud . 508 (1993); Joshua Dressler,
Battered Women Who Kill Their Sleeping Tormenters: Reflections on Main-
taining Respect for Human Life while Killing Moral Monsters, in criminal
law theory — doctrines of the general part 259 (Stephen Shute & A. P.
Simester eds., 2005).
93. State v. Moore, 158 N.J. 292, 729 A.2d 1021 (1999); State v. Robinson,
132 Ohio App.3d 830, 726 N.E.2d 581 (1999).
94. Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70
n . c . l . rev . 1231, 1255 – 1258 (1992).
95. See section 1.2.1; isaac asimov , i , robot 40 (1950).
96. See, for example, United States v. Allegheny Bottling Company, 695
F.Supp. 856 (1988); John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”:
An Unscandalised Inquiry into the Problem of Corporate Punishment, 79
mich . l . rev . 386 (1981).
97. judith jarvis thomson , rights , restitution and risk : essays in moral
theory 33 – 48 (1986); Sanford Kadish, Respect for Life and Regard for Rights
in the Criminal Law, 64 cal . l . rev . 871 (1976); Phillip Montague, Self-
Defense and Choosing Between Lives, 40 phil . stud . 207 (1981); Cheyney C.
Ryan, Self-Defense, Pacifism, and the Possibility of Killing, 93 ethics 508
(1983).
98. See section 1.2.1.
99. See section 1.1.1.
100. See section 3.2.2.
396 Mass. 840, 489 N.E.2d 666 (1986); People v. Craig, 78 N.Y.2d 616, 578
N.Y.S.2d 471, 585 N.E.2d 783 (1991); State v. Warshow, 138 Vt. 22, 410 A.2d
1000 (1979).
112. See section 5.3.1.
113. Id.
114. John Lawrence Hill, A Utilitarian Theory of Duress, 84 iowa l . rev .
275 (1999); Rollin M. Perkins, Impelled Perpetration Restated, 33 hastings
l . j . 403 (1981); United States v. Johnson, 956 F.2d 894 (9th Cir.1992); Sanders
v. State, 466 N.E.2d 424 (Ind.1984); State v. Daoud, 141 N.H. 142, 679 A.2d
577 (1996); Alford v. State, 866 S.W.2d 619 (Tex.Crim.App.1993).
115. McGrowther, (1746) 18 How. St. Tr. 394.
116. United States v. LaFleur, 971 F.2d 200 (9th Cir.1991); Hunt v. State,
753 So.2d 609 (Fla.App.2000); Taylor v. State, 158 Miss. 505, 130 So. 502
(1930); State v. Finnell, 101 N.M. 732, 688 P.2d 769 (1984); State v.
Nargashian, 26 R.I. 299, 58 A. 953 (1904); State v. Rocheville, 310 S.C. 20, 425
S.E.2d 32 (1993); Arp v. State, 97 Ala. 5, 12 So. 301 (1893).
117. State v. Nargashian, 26 R.I. 299, 58 A. 953 (1904).
118. People v. Merhige, 212 Mich. 601, 180 N.W. 418 (1920); People v.
Pantano, 239 N.Y. 416, 146 N.E. 646 (1925); Tully v. State, 730 P.2d 1206
(Okl.Crim.App.1986); Pugliese v. Commonwealth, 16 Va.App. 82, 428 S.E.2d
16 (1993).
119. United States v. Bakhtiari, 913 F.2d 1053 (2nd Cir.1990); R.I.
Recreation Center v. Aetna Cas. & Surety Co., 177 F.2d 603 (1st Cir.1949); Sam v.
Commonwealth, 13 Va.App. 312, 411 S.E.2d 832 (1991).
120. Commonwealth v. Perl, 50 Mass.App.Ct. 445, 737 N.E.2d 937 (2000);
United States v. Contento-Pachon, 723 F.2d 691 (9th Cir.1984); State v. Ellis,
232 Or. 70, 374 P.2d 461 (1962); State v. Torphy, 78 Mo.App. 206 (1899).
121. People v. Richards, 269 Cal.App.2d 768, 75 Cal.Rptr. 597 (1969);
United States v. Bailey, 444 U.S. 394, 100 S.Ct. 624, 62 L.Ed.2d 575 (1980);
United States v. Gomez, 81 F.3d 846 (9th Cir.1996); United States v. Arthurs,
73 F.3d 444 (1st Cir.1996); United States v. Lee, 694 F.2d 649 (11th Cir.1983);
United States v. Campbell, 675 F.2d 815 (6th Cir.1982); State v. Daoud, 141
N.H. 142, 679 A.2d 577 (1996).
122. United States v. Bailey, 444 U.S. 394, 100 S.Ct. 624, 62 L.Ed.2d 575
(1980); People v. Handy, 198 Colo. 556, 603 P.2d 941 (1979); State v. Reese,
272 N.W.2d 863 (Iowa 1978); State v. Reed, 205 Neb. 45, 286 N.W.2d 111
(1979).
123. Fitzpatrick, [1977] N.I. 20; Hasan, [2005] U.K.H.L. 22, [2005] 4 All
E.R. 685, [2005] 2 Cr. App. Rep. 314, [2006] Crim. L.R. 142, [2005] All E.R.
(D) 299.
124. See sections 5.3.1 and 5.3.2.
125. Michael A. Musmanno, Are Subordinate Officials Penally Responsi-
ble for Obeying Superior Orders which Direct Commission of Crime?, 67 dick .
l . rev . 221 (1963).
126. Axtell, (1660) 84 E.R. 1060; Calley v. Callaway, 519 F.2d 184 (5th
Cir.1975); United States v. Calley, 48 C.M.R. 19, 22 U.S.C.M.A. 534 (1973).
127. a . p . rogers , law on the battlefield 143 – 147 (1996).
128. Jurco v. State, 825 P.2d 909 (Alaska App.1992); State v. Stoehr, 134
Wis.2d 66, 396 N.W.2d 177 (1986).
129. For machine learning, see section 1.1.2.
130. See, for example, sections 5.3.1, 5.3.2, and 5.3.3.
131. Vashon R. Rogers, Jr., De Minimis Non Curat Lex, 21 albany l . j . 186
(1880); Max L. Veech & Charles R. Moon, De Minimis non Curat Lex, 45 mich .
l . rev . 537 (1947).
132. The American Law Institute, Model Penal Code — Official Draft and
Explanatory Notes 40 (1962, 1985).
133. Stanislaw Pomorski, On Multiculturalism, Concepts of Crime, and
the “De Minimis” Defense, 1997 b.y.u. l. rev. 51 (1997).
6. Sentencing ai
1. bronislaw malinowski , crime and custom in savage society (1959,
1982).
2. exodus 21:24.
3. gertrude ezorsky , philosophical perspectives on punishment 102 – 134
(1972).
4. Sheldon Glueck, Principles of a Rational Penal Code, 41 harv . l . rev .
453 (1928).
5. Jackson Toby, Is Punishment Necessary?, 55 j . crim . l . criminology &
police sci . 332 (1964); C. G. Schoenfeld, In Defence of Retribution in the Law,
35 psychoanalytic q . 108 (1966).
6. francis allen , the decline of the rehabilitative ideal 66 (1981).
7. barbara hudson , understanding justice : an introduction to ideas ,
perspectives and controversies in modern penal theory 39 (1996, 2003);
jessica mitford , kind and usual punishment : the prison business (1974).
8. Russell L. Christopher, Deterring Retributivism: The Injustice of “Just”
Punishment, 96 nw . u . l . rev . 843 (2002); Douglas Husak, Holistic Retribu-
tion, 88 cal . l . rev . 991 (2000); Douglas Husak, Retribution in Criminal
Theory, 37 san diego l . rev . 959 (2000); Dan Markel, Are Shaming Punish-
ments Beautifully Retributive? Retributivism and the Implications for the
Alternative Sanctions Debate, 54 vand . l . rev . 2157 (2001).
9. andrew von hirsch , doing justice : the choice of punishment 74 – 75
(1976); Andrew von Hirsch, Proportionate Sentences: A Desert Perspective,
in principled sentencing : readings on theory and policy 115 (Andrew von
Hirsch, Andrew Ashworth, & Julian Roberts eds., 3d ed. 2009).
10. Paul H. Robinson & John M. Darley, The Utility of Desert, 91 nw . u . l .
rev . 453 (1997); Samuel Scheffler, Justice and Desert in Liberal Theory, 88
cal . l . rev . 965 (2000); Edward M. Wise, The Concept of Desert, 33 wayne l .
rev . 1343 (1987).
welsh , evidence-based crime prevention (2006); rosemary sheehan , gill
mcivor , & chris trotter , what works with women offenders (2007);
Laaman v. Helgemoe, 437 F.Supp. 269 (1977).
24. Richard P. Seiter & Karen R. Kadela, Prisoner Reentry: What
Works, What Does Not, and What Is Promising, 49 crime & delinquency 360
(2003); Clive R. Hollin, Treatment Programs for Offenders, 22 int ’ l j . of law
& psychiatry 361 (1999).
25. See section 3.2.2.
26. david garland , the culture of control : crime and social order in
contemporary society 102 (2002).
27. Ledger Wood, Responsibility and Punishment, 28 am . inst . crim . l . &
criminology 630, 639 (1938).
28. Martin P. Kafka, Sex Offending and Sexual Appetite: The Clinical and
Theoretical Relevance of Hypersexual Desire, 47 int ’ l j . of offender ther -
apy & comp . criminology 439 (2003); Matthew Jones, Overcoming the Myth
of Free Will in Criminal Law: The True Impact of the Genetic Revolution, 52
duke l . j . 1031 (2003); Sanford H. Kadish, Excusing Crime, 75 cal . l . rev . 257
(1987).
29. See section 1.3.3.
30. Stewart Field & Nico Jorg, Corporate Liability and Manslaughter:
Should We Be Going Dutch?, [1991] crim . l . r . 156 (1991).
31. Gerard E. Lynch, The Role of Criminal Law in Policing Corporate Mis-
conduct, 60 law & contemp . probs . 23 (1997); Richard Gruner, To Let the
Punishment Fit the Organization: Sanctioning Corporate Offenders Through
Corporate Probation, 16 am . j . crim . l . 1 (1988); Steven Walt & William S.
Laufer, Why Personhood Doesn’t Matter: Corporate Criminal Liability and
Sanctions, 18 am . j . crim . l . 263 (1991).
32. United States v. Allegheny Bottling Company, 695 F.Supp. 856
(1988).
33. Id. at p. 858.
34. Id.
35. John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscan-
dalised Inquiry Into the Problem of Corporate Punishment, 79 mich . l . rev .
386 (1981); steven box , power , crime and mystification 16 – 79 (1983); Brent
Fisse & John Braithwaite, The Allocation of Responsibility for Corporate
Crime: Individualism, Collectivism and Accountability, 11 sydney l . rev .
468 (1988).
36. Allegheny Bottling Company case, supra note 32, at p. 861.
37. Id. at p. 861.
38. russ versteeg , early mesopotamian law 126 (2000); g . r . driver &
john c . miles , the babylonian laws , vol . i : legal commentary 206, 495 – 496
(1952): "The capital penalty is most often expressed by saying that the
offender 'shall be killed' . . . ; this occurs seventeen times in the first
thirty-four sections. A second form of expression, which occurs five times,
is that 'they shall kill' . . . the offender."
39. Frank E. Hartung, Trends in the Use of Capital Punishment, 284(1)
annals am . acad . pol . & soc . sci . 8 (1952).
40. Gregg v. Georgia, 428 U.S. 153, 96 S.Ct. 2909, 49 L.Ed.2d 859 (1976).
41. Provenzano v. Moore, 744 So.2d 413 (Fla. 1999); Dutton v. State, 123
Md. 373, 91 A. 417 (1914); Campbell v. Wood, 18 F.3d 662 (9th Cir. 1994);
Wilkerson v. Utah, 99 U.S. (9 Otto) 130, 25 L.Ed. 345 (1878); People v.
Daugherty, 40 Cal.2d 876, 256 P.2d 911 (1953); Gray v. Lucas, 710 F.2d 1048 (5th
Cir. 1983); Hunt v. Nuth, 57 F.3d 1327 (4th Cir. 1995).
42. robert m . bohm , deathquest : an introduction to the theory and
practice of capital punishment in the united states 74 (1999).
43. Peter Fitzpatrick, “Always More to Do”: Capital Punishment and the
(De)Composition of Law, in the killing state — capital punishment in law ,
politics , and culture 117 (Austin Sarat ed., 1999); Franklin E. Zimring, The
Executioner’s Dissonant Song: On Capital Punishment and American Legal
Values, in the killing state — capital punishment in law , politics , and cul -
ture 137 (Austin Sarat ed., 1999).
44. Anne Norton, After the Terror: Mortality, Equality, Fraternity, in
the killing state — capital punishment in law , politics , and culture 27
(Austin Sarat ed., 1999); Hugo Adam Bedau, Abolishing the Death Penalty
Even for the Worst Murderers, in the killing state — capital punishment in
law , politics , and culture 40 (Austin Sarat ed., 1999).
45. Sean McConville, The Victorian Prison: England 1865 – 1965, in the
oxford history of the prison 131 (Norval Morris & David J. Rothman eds.,
1995); thorsten j . sellin , slavery and the penal system (1976); horsfall j .
turner , the annals of the wakefield house of corrections for three hun -
dred years 154 – 172 (1904).
46. john howard , the state of prisons in england and wales (1777,
1996).
47. David J. Rothman, For the Good of All: The Progressive Tradition in
Prison Reform, in history and crime : implications for criminal justice
policy 271 (James A. Inciardi & Charles E. Faupel eds., 1980).
48. Roy D. King, The Rise and Rise of Supermax: An American Solution
in Search of a Problem?, 1 punishment & society 163 (1999); chase riveland ,
supermax prisons : overview and general considerations (1999); jamie fell -
ner & joanne mariner , cold storage : super-maximum security confinement
in indiana (1997).
49. doris layton mackenzie & eugene e . hebert , correctional boot
camps : a tough intermediate sanction (1996); Sue Frank, Oklahoma Camp
Stresses Structure and Discipline, 53 corrections today 102 (1991); roberta
c . cronin , boot camps for adult and juvenile offenders : overview and
update (1994).
(1988); nigel walker & nicola padfield , sentencing : theory , law and prac -
tice (1996).
63. michael h . tonry & kathleen hatlestad , sentencing reform in over -
crowded times : a comparative perspective (1997).
64. See section 6.2.4.
65. United States v. Allegheny Bottling Company, 695 F.Supp. 856 (1988).
Selected Bibliography
franz alexander & hugo staub , the criminal , the judge , and the public
(1931)
franz alexander , our age of unreason : a study of the irrational forces
in social life (rev. ed. 1971)
harry e . allen , eric w . carlson , & evalyn c . parks , critical issues in adult
probation (1979)
Susan M. Allan, No Code Orders v. Resuscitation: The Decision to Withhold
Life-Prolonging Treatment from the Terminally Ill, 26 wayne l . rev . 139
(1980)
Peter Alldridge, The Doctrine of Innocent Agency, 2 crim . l . f . 45 (1990)
francis allen , the decline of the rehabilitative ideal (1981)
Marc Ancel, The System of Conditional Sentence or Sursis, 80 l . q . rev . 334
(1964)
marc ancel , suspended sentence (1971)
Johannes Andenaes, The General Preventive Effects of Punishment, 114
u . pa . l . rev . 949 (1966)
Johannes Andenaes, The Morality of Deterrence, 37 u . chi . l . rev . 649
(1970)
Susan Leigh Anderson, Asimov’s “Three Laws of Robotics” and Machine Me-
taethics, 22 ai soc . 477 (2008)
Edward B. Arnolds & Norman F. Garland, The Defense of Necessity in Crimi-
nal Law: The Right to Choose the Lesser Evil, 65 j . crim . l . & criminology
289 (1974)
Andrew Ashworth, The Scope of Criminal Liability for Omissions, 105 l . q .
rev . 424 (1989)
Andrew Ashworth, Testing Fidelity to Legal Values: Official Involvement and
Criminal Justice , 63 mod . l . rev . 663 (2000)
andrew ashworth , principles of criminal law (5th ed. 2006)
Andrew Ashworth, Rehabilitation, in principled sentencing : readings on
theory and policy 1 (Andrew von Hirsch, Andrew Ashworth, & Julian
Roberts eds., 3d ed. 2009)
isaac asimov , i , robot (1950)
isaac asimov , the rest of the robots (1964)
Tom Athanasiou, High-Tech Politics: The Case of Artificial Intelligence, 92
socialist rev . 7 (1987)
James Austin & Barry Krisberg, The Unmet Promise of Alternatives, 28 j . res .
crime & delinq . (1982)
john austin , the province of jurisprudence determined (1832, 2000)
bernard baars , in the theatre of consciousness (1997)
John S. Baker, Jr., State Police Powers and the Federalization of Local Crime,
72 temp . l . rev . 673 (1999)
stephen baker , final jeopardy : man vs . machine and the quest to know
everything (2011)
cesare beccaria , traité des délits et des peines (1764)
Hugo Adam Bedau, Abolishing the Death Penalty Even for the Worst Murder-
ers, in the killing state — capital punishment in law , politics , and cul -
ture 40 (Austin Sarat ed., 1999)
richard e . bellman , an introduction to artificial intelligence : can com -
puters think ? (1978)
jeremy bentham , an introduction to the principles of morals and legisla -
tion (1789, 1996)
Jeremy Bentham, Punishment and Deterrence, in principled sentencing :
readings on theory and policy (Andrew von Hirsch, Andrew Ashworth,
& Julian Roberts eds., 3d ed. 2009)
john biggs , the guilty mind (1955)
Ned Block, What Intuitions About Homunculi Don’t Show, 3 behavioral &
brain sci . 425 (1980)
robert m . bohm , deathquest : an introduction to the theory and practice
of capital punishment in the united states (1999)
Addison M. Bowman, Narcotic Addiction and Criminal Responsibility under
Durham, 53 geo . l . j . 1017 (1965)
steven box , power , crime and mystification (1983)
richard b . brandt , ethical theory (1959)
Kathleen F. Brickey, Corporate Criminal Accountability: A Brief History and
an Observation, 60 wash . u . l . q . 393 (1983)
Bruce Bridgeman, Brains + Programs = Minds, 3 behavioral & brain sci . 427
(1980)
walter bromberg , from shaman to psychotherapist : a history of the
treatment of mental illness (1975)
Andrew G. Brooks & Ronald C. Arkin, Behavioral Overlays for Non-Verbal
Communication Expression on a Humanoid Robot, 22 auton . robots 55
(2007)
Timothy L. Butler, Can a Computer Be an Author — Copyright Aspects of Ar-
tificial Intelligence, 4 comm . ent . l . s . 707 (1982)
Kenneth L. Campbell, Psychological Blow Automatism: A Narrow Defence,
23 crim . l . q . 342 (1981)
W. G. Carson, Some Sociological Aspects of Strict Liability and the Enforce-
ment of Factory Legislation, 33 mod . l . rev . 396 (1970)
W. G. Carson, The Conventionalisation of Early Factory Crime, 7 int ’ l j . of
sociology of law 37 (1979)
sir gerald gordon , the criminal law of scotland (1st ed. 1967)
gerhardt grebing , the fine in comparative law : a survey of 21 countries
(1982)
Kent Greenawalt, The Perplexing Borders of Justification and Excuse, 84
colum . l . rev . 1897 (1984)
Kent Greenawalt, Distinguishing Justifications from Excuses, 49 law & con -
temp . probs . 89 (1986)
David F. Greenberg, The Corrective Effects of Corrections: A Survey of Evalu-
ation, in corrections and punishment 111 (David F. Greenberg ed., 1977)
Judith A. Greene, Structuring Criminal Fines: Making an ‘Intermediate Pen-
alty’ More Useful and Equitable, 13 justice system j . 37 (1988)
Richard Gruner, To Let the Punishment Fit the Organization: Sanctioning
Corporate Offenders Through Corporate Probation, 16 am . j . crim . l . 1
(1988)
jerome hall , general principles of criminal law (2d ed. 1960, 2005)
Jerome Hall, Intoxication and Criminal Responsibility, 57 harv . l . rev . 1045
(1944)
Jerome Hall, Negligent Behaviour Should Be Excluded from Penal Liability,
63 colum . l . rev . 632 (1963)
Seymour L. Halleck, The Historical and Ethical Antecedents of Psychiatric
Criminology, in psychiatric aspects of criminology 8 (Halleck & Brom
berg eds., 1968)
Gabriel Hallevy, The Recidivist Wants to Be Punished — Punishment as an
Incentive to Re-offend, 5 int ’ l j . punishment & sentencing 124 (2009)
gabriel hallevy , a modern treatise on the principle of legality in crimi -
nal law (2010)
Gabriel Hallevy, The Criminal Liability of Artificial Intelligence Entities —
from Science Fiction to Legal Social Control, 4 akron intell . prop . j . 171
(2010)
Gabriel Hallevy, Unmanned Vehicles — Subordination to Criminal Law under
the Modern Concept of Criminal Liability, 21 j . l . inf . & sci . 200 (2011)
Gabriel Hallevy, Therapeutic Victim-Offender Mediation within the Crimi-
nal Justice Process — Sharpening the Evaluation of Personal Potential
for Rehabilitation while Righting Wrongs under the Alternative-Dispute-
Resolution (ADR) Philosophy, 16 harv . negot . l . rev . 65 (2011)
gabriel hallevy , the matrix of derivative criminal liability (2012)
John Harding, The Development of the Community Service, in alternative
strategies for coping with crime 164 (Norman Tutt ed., 1978)
herbert l . a . hart , punishment and responsibility : essays in the philoso -
phy of law (1968)
Frank E. Hartung, Trends in the Use of Capital Punishment, 284(1) annals
am . acad . pol . & soc . sci . 8 (1952)
John Haugeland, Semantic Engines: An Introduction to Mind Design, in mind
design 1 (John Haugeland ed., 1981)
Douglas Husak & Andrew von Hirsch, Culpability and Mistake of Law, in
action and value in criminal law 157 (Stephen Shute, John Gardner, &
Jeremy Horder eds., 2003)
Peter Barton Hutt & Richard A. Merrill, Criminal Responsibility and the Right
to Treatment for Intoxication and Alcoholism, 57 geo . l . j . 835 (1969)
ray jackendoff , consciousness and the computational mind (1987)
william james , the principles of psychology (1890)
phillip n . johnson-laird , mental models (1983)
Matthew Jones, Overcoming the Myth of Free Will in Criminal Law: The True
Impact of the Genetic Revolution, 52 duke l . j . 1031 (2003)
Sanford Kadish, Respect for Life and Regard for Rights in the Criminal Law,
64 cal . l . rev . 871 (1976)
Sanford H. Kadish, Excusing Crime, 75 cal . l . rev . 257 (1987)
Martin P. Kafka, Sex Offending and Sexual Appetite: The Clinical and Theo-
retical Relevance of Hypersexual Desire, 47 int ’ l j . of offender therapy
& comp . criminology 439 (2003)
immanuel kant , our duties to animals (1780)
A. W. G. Kean, The History of the Criminal Liability of Children, 53 l . q . rev .
364 (1937)
vojislav kecman , learning and soft computing , support vector machines ,
neural networks and fuzzy logic models (2001)
Edwin R. Keedy, Ignorance and Mistake in the Criminal Law, 22 harv . l .
rev . 75 (1909)
Paul W. Keve, The Professional Character of the Presentence Report, 26 fed .
probation 51 (1962)
antony kenny , will , freedom and power (1975)
antony kenny , what is faith ? (1992)
Roy D. King, The Rise and Rise of Supermax: An American Solution in
Search of a Problem?, 1 punishment & society 163 (1999)
raymond kurzweil , the age of intelligent machines (1990)
nicola lacey & celia wells , reconstructing criminal law — critical per -
spectives on crime and the criminal process (2d ed. 1998)
nicola lacey , celia wells , & oliver quick , reconstructing criminal law
(3d ed. 2003, 2006)
j . g . landels , engineering in the ancient world (rev. ed. 2000)
William S. Laufer, Corporate Bodies and Guilty Minds, 43 emory l . j . 647
(1994)
gottfried wilhelm leibniz , characteristica universalis (1676)
Julie Leibrich, Burt Galaway, & Yvonne Underhill, Community Sentencing in
New Zealand: A Survey of Users, 50 fed . probation 55 (1986)
lawrence lessig , code and other laws of cyberspace (1999)
david levy , robots unlimited : life in a virtual age (2006)
david levy , love and sex with robots : the evolution of human-robot rela -
tionships (2007)
david levy & monty newborn , how computers play chess (1991)
K. W. Lidstone, Social Control and the Criminal Law, 27 brit . j . criminology
31 (1987)
douglas s . lipton , robert martinson , & judith wilks , the effectiveness
of correctional treatment : a survey of treatment evaluation studies
(1975)
Frederick J. Ludwig, Rationale of Responsibility for Young Offenders, 29 neb .
l . rev . 521 (1950)
george f . luger , artificial intelligence : structures and strategies for
complex problem solving (2001)
george f . luger & william a . stubblefield , artificial intelligence : struc -
tures and strategies for complex problem solving (6th ed. 2008)
William G. Lycan, Introduction, in mind and cognition 3 (William G. Lycan
ed., 1990)
Gerard E. Lynch, The Role of Criminal Law in Policing Corporate Miscon-
duct, 60 law & contemp . probs . 23 (1997)
Peter Lynch, The Origins of Computer Weather Prediction and Climate Mod-
eling, 227 journal of computational physics 3431 (2008)
david lyons , forms and limits of utilitarianism (1965)
David Lyons, Open Texture and the Possibility of Legal Interpretation, 18 law
& phil . 297 (1999)
doris layton mackenzie & eugene e . hebert , correctional boot camps :
a tough intermediate sanction (1996)
bronislaw malinowski , crime and custom in savage society (1959, 1982)
david manners & tsugio makimoto , living with the chip (1995)
Dan Markel, Are Shaming Punishments Beautifully Retributive? Retributiv-
ism and the Implications for the Alternative Sanctions Debate, 54 vand .
l . rev . 2157 (2001)
Robert Martinson, What Works? Questions and Answers about Prison
Reform, 35 public interest 22 (1974)
Thomas Mathiesen, The Viewer Society: Michel Foucault’s ‘Panopticon’ Re-
visited, 1 theoretical criminology 215 (1997)
Peter McCandless, Liberty and Lunacy: The Victorians and Wrongful Con-
finement, in madhouses , mad-doctors , and madmen : the social history
of psychiatry in the victorian era (Scull ed., 1981)
Aileen McColgan, In Defence of Battered Women who Kill, 13 oxford j .
legal stud . 508 (1993)
Sean McConville, The Victorian Prison: England 1865 – 1965, in the oxford
history of the prison 131 (Norval Morris & David J. Rothman eds., 1995)
j . r . mcdonald , g . m . burt , j . s . zielinski , & s . d . j . mcarthur , intelligent
knowledge based system in electrical power engineering (1997)
colin mcginn , the problem of consciousness : essays towards a resolution
(1991)
karl menninger , martin mayman , & paul pruyser , the vital balance (1963)
Cases
Abrams v. United States, 250 U.S. 616, 63 L.Ed. 1173, 40 S.Ct. 17 (1919)
Adams v. State, 8 Md.App. 684, 262 A.2d 69 (1970)
Alford v. State, 866 S.W.2d 619 (Tex.Crim.App.1993)
Allday, (1837) 8 Car. & P. 136, 173 E.R. 431
Almon, (1770) 5 Burr. 2686, 98 E.R. 411
Anderson v. State, 66 Okl.Cr. 291, 91 P.2d 794 (1939)
Anderson, [1966] 2 Q.B. 110, [1966] 2 All E.R. 644, [1966] 2 W.L.R. 1195, 50
Cr. App. Rep. 216, 130 J.P. 318
Evans v. Bartlam, [1937] A.C. 473, 479, [1937] 2 All E.R. 646
Evans v. State, 322 Md. 24, 585 A.2d 204 (1991)
Fain v. Commonwealth, 78 Ky. 183, 39 Am.Rep. 213 (1879)
Farmer v. People, 77 Ill. 322 (1875)
Finney, (1874) 12 Cox C.C. 625
Firth, (1990) 91 Cr. App. Rep. 217, 154 J.P. 576, [1990] Crim. L.R. 326
Fitzpatrick v. Kelly, [1873] 8 Q.B. 337
Fitzpatrick, [1977] N.I. 20
Forbes, (1835) 7 Car. & P. 224, 173 E.R. 99
Frey v. United States, 708 So.2d 918 (Fla.1998)
G., [2003] U.K.H.L. 50, [2004] 1 A.C. 1034, [2003] 3 W.L.R. 1060, [2003] 4 All
E.R. 765, [2004] 1 Cr. App. Rep. 21, (2003) 167 J.P. 621, [2004] Crim. L. R.
369
G., [2008] U.K.H.L. 37, [2009] A.C. 92
Gammon (Hong Kong) Ltd. v. Attorney-General of Hong Kong, [1985] 1 A.C.
1, [1984] 2 All E.R. 503, [1984] 3 W.L.R. 437, 80 Cr. App. Rep. 194, 26
Build L.R. 159
Gardiner, [1994] Crim. L.R. 455
Godfrey v. State, 31 Ala. 323 (1858)
Government of the Virgin Islands v. Smith, 278 F.2d 169 (3rd Cir.1960)
Granite Construction Co. v. Superior Court, 149 Cal.App.3d 465, 197 Cal.
Rptr. 3 (1983)
Gray v. Lucas, 710 F.2d 1048 (5th Cir. 1983)
Great Broughton (Inhabitants), (1771) 5 Burr. 2700, 98 E.R. 418
Gregg v. Georgia, 428 U.S. 153, 96 S.Ct. 2909, 49 L.Ed.2d 859 (1976)
Grout, (1834) 6 Car. & P. 629, 172 E.R. 1394
Hall v. Brooklands Auto Racing Club, [1932] All E.R. 208, [1933] 1 K.B. 205,
101 L.J.K.B. 679, 147 L.T. 404, 48 T.L.R. 546
Hardcastle v. Bielby, [1892] 1 Q.B. 709
Hartson v. People, 125 Colo. 1, 240 P.2d 907 (1951)
Hasan, [2005] U.K.H.L. 22, [2005] 4 All E.R. 685, [2005] 2 Cr. App. Rep. 314,
[2006] Crim. L.R. 142, [2005] All E.R. (D) 299
Heilman v. Commonwealth, 84 Ky. 457, 1 S.W. 731 (1886)
Henderson v. Kibbe, 431 U.S. 145, 97 S.Ct. 1730, 52 L.Ed.2d 203 (1977)
Henderson v. State, 11 Ala.App. 37, 65 So. 721 (1914)
Hentzner v. State, 613 P.2d 821 (Alaska 1980)
Hern v. Nichols, (1708) 1 Salkeld 289, 91 E.R. 256
Hobbs v. Winchester Corporation, [1910] 2 K.B. 471
Holbrook, (1878) 4 Q.B.D. 42
Howard v. State, 73 Ga.App. 265, 36 S.E.2d 161 (1945)
Huggins, (1730) 2 Strange 882, 93 E.R. 915
Hughes v. Commonwealth, 19 Ky.L.R. 497, 41 S.W. 294 (1897)
Hull, (1664) Kel. 40, 84 E.R. 1072
Humphrey v. Commonwealth, 37 Va.App. 36, 553 S.E.2d 546 (2001)
Ratzlaf v. United States, 510 U.S. 135, 114 S.Ct. 655, 126 L.Ed.2d 615 (1994)
Read v. People, 119 Colo. 506, 205 P.2d 233 (1949)
Redmond v. State, 36 Ark. 58 (1880)
Reed v. State, 693 N.E.2d 988 (Ind.App.1998)
Reniger v. Fogossa, (1551) 1 Plowd. 1, 75 E.R. 1
Rice v. State, 8 Mo. 403 (1844)
Richards, [2004] E.W.C.A. Crim. 192
Richardson v. State, 697 N.E.2d 462 (Ind.1998)
Ricketts v. State, 291 Md. 701, 436 A.2d 906 (1981)
Roberts v. People, 19 Mich. 401 (1870)
Robinson v. California, 370 U.S. 660, 82 S.Ct. 1417, 8 L.Ed.2d 758 (1962)
Rollins v. State, 2009 Ark. 484, 347 S.W.3d 20 (2009)
Roy v. United States, 652 A.2d 1098 (D.C.App.1995)
Saik, [2006] U.K.H.L. 18, [2007] 1 A.C. 18
Saintiff, (1705) 6 Mod. 255, 87 E.R. 1002
Sam v. Commonwealth, 13 Va.App. 312, 411 S.E.2d 832 (1991)
Sanders v. State, 466 N.E.2d 424 (Ind.1984)
Scales v. United States, 367 U.S. 203, 81 S.Ct. 1469, 6 L.Ed.2d 782 (1961)
Schmidt v. United States, 133 F. 257 (9th Cir.1904)
Schuster v. State, 48 Ala. 199 (1872)
Scott v. State, 71 Tex.Crim.R. 41, 158 S.W. 814 (1913)
Seaboard Offshore Ltd. v. Secretary of State for Transport, [1994] 2 All E.R.
99, [1994] 1 W.L.R. 541, [1994] 1 Lloyd’s Rep. 593
Seaman v. Browning, (1589) 4 Leonard 123, 74 E.R. 771
Severn and Wye Railway Co., (1819) 2 B. & Ald. 646, 106 E.R. 501
Ex parte Smith, 135 Mo. 223, 36 S.W. 628 (1896)
Smith v. California, 361 U.S. 147, 80 S.Ct. 215, 4 L.Ed.2d 205 (1959)
Smith v. State, 83 Ala. 26, 3 So. 551 (1888)
Spiers & Pond v. Bennett, [1896] 2 Q.B. 65
Squire v. State, 46 Ind. 459 (1874)
State v. Philbrick, 402 A.2d 59 (Me.1979)
State v. Aaron, 4 N.J.L. 269 (1818)
State v. Anderson, 141 Wash.2d 357, 5 P.3d 1247 (2000)
State v. Anthuber, 201 Wis.2d 512, 549 N.W.2d 477 (App.1996)
State v. Asher, 50 Ark. 427, 8 S.W. 177 (1888)
State v. Audette, 149 Vt. 218, 543 A.2d 1315 (1988)
State v. Ayer, 136 N.H. 191, 612 A.2d 923 (1992)
State v. Barker, 128 W.Va. 744, 38 S.E.2d 346 (1946)
State v. Barrett, 768 A.2d 929 (R.I.2001)
State v. Blakely, 399 N.W.2d 317 (S.D.1987)
State v. Bono, 128 N.J.Super. 254, 319 A.2d 762 (1974)
State v. Bowen, 118 Kan. 31, 234 P. 46 (1925)
State v. Brosnan, 221 Conn. 788, 608 A.2d 49 (1992)
State v. Brown, 389 So.2d 48 (La.1980)
United States v. Pomponio, 429 U.S. 10, 97 S.Ct. 22, 50 L.Ed.2d 12 (1976)
United States v. Powell, 929 F.2d 724 (D.C.Cir.1991)
United States v. Quaintance, 471 F. Supp.2d 1153 (2006)
United States v. Ramon-Rodriguez, 492 F.3d 930 (2007)
United States v. Randall, 104 Wash.D.C.Rep. 2249 (D.C.Super.1976)
United States v. Randolph, 93 F.3d 656 (9th Cir.1996)
United States v. Robertson, 33 M.J. 832 (1991)
United States v. Ruffin, 613 F.2d 408 (2d Cir. 1979)
United States v. Shapiro, 383 F.2d 680 (7th Cir.1967)
United States v. Smith, 404 F.2d 720 (6th Cir.1968)
United States v. Spinney, 65 F.3d 231 (1st Cir.1995)
United States v. Sued-Jimenez, 275 F.3d 1 (1st Cir.2001)
United States v. Thompson-Powell Drilling Co., 196 F.Supp. 571
(N.D.Tex.1961)
United States v. Tobon-Builes, 706 F.2d 1092 (11th Cir. 1983)
United States v. Torres, 977 F.2d 321 (7th Cir.1992)
United States v. Warner, 28 Fed. Cas. 404, 6 W.L.J. 255, 4 McLean 463 (1848)
United States v. Wert-Ruiz, 228 F.3d 250 (3rd Cir.2000)
United States v. Youts, 229 F.3d 1312 (10th Cir.2000)
Vantandillo, (1815) 4 M. & S. 73, 105 E.R. 762
Vaux, (1613) 1 Bulstrode 197, 80 E.R. 885
Virgin Islands v. Joyce, 210 F. App. 208 (2006)
Walter, (1799) 3 Esp. 21, 170 E.R. 524
Webb, [2006] E.W.C.A. Crim. 2496, [2007] All E.R. (D) 406
In re Welfare of C.R.M., 611 N.W.2d 802 (Minn.2000)
Wheatley v. Commonwealth, 26 Ky.L.Rep. 436, 81 S.W. 687 (1904)
Wieland v. State, 101 Md. App. 1, 643 A.2d 446 (1994)
Wilkerson v. Utah, 99 U.S. (9 Otto) 130, 25 L.Ed. 345 (1878)
Willet v. Commonwealth, 76 Ky. 230 (1877)
Williams v. State, 70 Ga.App. 10, 27 S.E.2d 109 (1943)
Williamson, (1807) 3 Car. & P. 635, 172 E.R. 579
Wilson v. State, 24 S.W. 409 (Tex.Crim.App.1893)
Wilson v. State, 777 S.W.2d 823 (Tex.App.1989)
In re Winship, 397 U.S. 358, 90 S.Ct. 1068, 25 L.Ed.2d 368 (1970)
Woodrow, (1846) 15 M. & W. 404, 153 E.R. 907
Index
corporations: and general criminal liability, 21–22, 34–36; and general defenses, 124–126, 142–144, 153, 155; and intentional offenses, 66–69; and negligence, 96–97, 99; and sentencing, 162–165, 167, 170–175; and strict liability, 112, 114
creativity, 7, 9, 20

de minimis, 123, 152–154
death penalty. See capital punishment
Deep Blue, 11
deontological morality, 18
Descartes, 1
deterrence, 76, 81, 157–159, 161, 166–168, 170, 172, 174
discipline, 150, 167
dreaming, 134
drone, 4, 104, 143, 151–152
duress, 120, 123, 140, 147–149, 155
dynamic modification, 10

evil, 25–26, 33, 44, 56, 64–65, 68; and necessity, 145–147; and negligence, 96
exemptions, 122–123
ex officio, 138
expert systems, 3–4, 8, 11, 13, 84, 93. See also general problem solver
external knowledge, 7–9, 24
eye for an eye, 157

factory robots, 13
factual element requirement (of criminal liability), 8, 29–32, 35; and general defenses, 118, 127, 136; and intentional offenses, 38–39, 41–46, 48, 57, 65, 72–73, 82; and negligence offenses, 85–90, 101; and strict liability offenses, 105–111
factual mistake, 72, 121, 123, 133, 135–137, 154
factual reality, 8, 10, 31–32, 45, 47; in general defenses, 120, 129, 131, 133–136, 154; and sentencing, 160
fine, 26, 106, 158, 162–163, 165, 173–175
foreseeability, 58–64, 79, 100–101, 117
formal logic, 6, 8, 29

general intent, 32, 44, 57
generalization, 10–11, 84, 92, 100
general problem solver, 2, 6
good faith. See bona fide

heuristics, 3
Hobbes, 1
Hollywood, 15
homicide, 25, 32; and general defenses, 121, 131, 147, 154; intentional, 38, 42–44, 48, 75–76, 78–80, 82–83; negligent, 86, 94, 96, 98, 100–103; and sentencing, 166; strict liability, 117–119
humanoid, 14, 25
human-robot coexistence, 14–15, 17, 22

imminent danger, 141–142, 145, 148
imprisonment, 26, 142, 156, 162–165, 167–169, 171, 173, 175–176
inaction, 16–17, 28, 39–41, 87
incapacitation, 23, 157, 161–162, 166–172, 176
incarceration, 162, 164, 167, 169. See also imprisonment
indexing, 10
indifference, 47–49, 56, 58, 61–63
infancy, 28, 123–126
in personam, 122–123
in rem, 122, 140, 153
insanity, 28, 31, 65, 72, 121, 123, 126; as general defense, 128–130, 154–155
internal knowledge, 7–9
intoxication, 72, 123, 126, 130–133, 154–155
irresistible impulse, 129, 132

just deserts, 157–158, 160. See also retribution
justifications, 96, 112, 122–123, 140

Kant, 66

legal mistake, 123, 136–138
legal social control, 26–28
Leibniz, 1
lesser of two evils, 146–147, 149
loss of self-control, 39, 47, 123, 126–128, 154–155

machina sapiens, 1, 4–5, 7, 10–11, 13, 17–21, 55, 61, 68
machina sapiens criminalis, 1, 12–14, 17, 19, 21; and intentional offenses, 55, 61, 68, 82–83; and negligence, 101; and strict liability offenses, 110–111, 117–118
machine learning, 4, 11; in general defenses, 125, 146, 149, 151; in intentional offenses, 62, 64, 75; in negligence offenses, 84, 92–95, 97, 100; and sentencing, 160–161, 171–173; in strict liability offenses, 114, 125, 146, 149, 151
mechanical computer, 1
mens rea, 32–33, 33, 36–37; and general defenses, 120–121, 129–131, 135–136, 147; and intentional offenses, 44–49, 51–52, 56–57, 63–64, 67–69, 74, 79–80, 82; and negligence, 85–86, 89–91, 95–96, 98–102; and strict liability, 105–112, 114–117, 115
mental element requirement (of criminal liability), 29, 31–41; and general defenses, 118–119, 121, 127–128, 135–137, 139; and intentional offenses, 44–45, 51, 62, 64–65, 67, 69, 72–77, 79–80, 82–83; and negligence, 84–87, 89, 95–101, 103–104; and sentencing, 156–157; and strict liability, 109–112, 114–115
monitoring, 4
moral accountability, 17–19, 65
morality, 18, 26, 65–66, 153

necessity, 120, 123, 140, 144–149, 155
negligence, 32–33, 33, 36–38; and general defenses, 125, 135, 144; and intentional offenses, 45, 68, 72, 74, 79–80, 82; as offenses' type, 84–103; and strict liability, 115, 115–118
nullity, 47

obedience, 151
objectivity, 90, 94
omission, 32–33, 36, 39–41, 86–87, 90

Pascal, 1
perception, 3, 49–52, 102, 131, 133, 136
police robots, 13
prediction, 10
prison guard robots, 13, 18
probable consequence liability: and general defenses, 138; in intentional offenses, 69, 75–81, 83; in negligence, 100–101, 103; in strict liability, 116–117, 119
probation, 162–163, 165, 169–173
proportionality, 144
prostitution, 13
public service, 165, 167, 171–175
punishability, 19

rashness, 47–48, 49, 56, 58, 61, 63–64
reasonableness, 24, 56, 144
recidivism, 167–168
recklessness, 28, 31, 48, 49, 56, 58, 61–63, 82; and negligence, 85–86
rehabilitation, 122, 157, 159–162, 166, 168–172, 176