Distinctions between humans and non-humans might erode. Ideas about personhood
might alter once it becomes possible to upload and store a digitalized brain on a
computer, much as nowadays we can store human embryos.
Any rights to security and privacy are potentially undermined not only through drones
or robot soldiers, but also through increasing legibility and traceability of individuals
in a world of electronically recorded human activities and presences. The amount of
data available about people will likely increase enormously, especially once biometric
sensors can monitor human health.
There will be challenges to civil and political rights arising from the sheer existence of
these data and from the fact that these data might well be privately owned, but not by
those whose data they are. Leading companies in the AI sector are more powerful than
oil companies ever were, and this is presumably just the beginning of their ascension.
Next, consider AI and inequality, and the connection between that topic and human rights. To begin
with, we should heed Thomas Piketty’s warning that capitalism left to its own devices
in times of peace generates ever increasing economic inequality. Those who own the
economy benefit from it more than those who just work there. Over time life chances
will ever more depend on social status at birth.
We also see more and more how those who either produce technology or know how to
use technology to magnify impact can command higher and higher wages. AI will only
reinforce these tendencies, making it ever easier for leaders across all segments to
magnify their impact. That in turn makes producers of AI ever more highly prized
providers of technology. More recently, we have learned from Walter Scheidel that,
historically, substantial decreases in inequality have only occurred in response to
calamities such as epidemics, social breakdowns, natural disasters or war. Otherwise it
is hard to muster effective political will for change.
2. Should AGI/AMA be allowed?: Is it OK to throw the switch that saves five lives
by directing a runaway trolley onto a side track, where it will kill one person who would
have been safe? Well . . . Deontology says it is wrong to kill the one as a means of
saving the five; Utilitarianism says fewer deaths is better; Virtue ethics says the
virtuous person can make hard choices.
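The conflict among these traditions can be made concrete in a toy sketch. The functions below are crude caricatures invented for illustration, not implementations of any ethical theory:

```python
# Toy encoding of the trolley dilemma: pulling the lever kills 1, refraining
# lets 5 die. Each "theory" is a deliberate caricature, used only to show
# that the traditions can return conflicting verdicts on the same case.

def utilitarian(deaths_if_pull, deaths_if_refrain):
    # Fewer deaths is better, so pulling is permitted iff it saves lives.
    return deaths_if_pull < deaths_if_refrain

def deontological(actively_kills_someone):
    # A crude constraint: never actively kill, regardless of outcomes.
    return not actively_kills_someone

verdicts = {
    "utilitarianism": utilitarian(deaths_if_pull=1, deaths_if_refrain=5),
    "deontology": deontological(actively_kills_someone=True),
}
print(verdicts)  # the two caricatured theories disagree on the same case
```

Virtue ethics is omitted from the sketch: it evaluates an agent's character rather than isolated acts, and so resists this kind of rule encoding.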
Since none of the ethical traditions will singly satisfy the whole world, some scientists
are proposing a secular “AI Safety Engineering” field. A common theme in AI safety
research is the possibility of keeping a superintelligent agent in sealed hardware so as
to prevent it from doing any harm to humankind. Such ideas originate with scientific
visionaries such as Eric Drexler, who suggested confining transhuman machines so
that their outputs could be studied and used safely.
Similarly, Nick Bostrom, a futurologist, has proposed [9] an idea for an Oracle AI
(OAI), which would be capable only of answering questions. Finally, in 2010 David
Chalmers proposed the idea of a “leakproof” singularity [12]. He suggested that for
safety reasons, AI systems first be restricted to simulated virtual worlds until their
behavioral tendencies could be fully understood under controlled conditions.
Similarly, we argue that certain types of artificial intelligence research fall under
the category of dangerous technologies and should be restricted.
Classical AI research in which a computer is taught to automate human behavior in a
particular domain such as mail sorting or spellchecking documents is certainly ethical
and does not present an existential risk problem to humanity. On the other hand, we
argue that Artificial General Intelligence (AGI) research should be considered
unethical. This follows logically from a number of observations. First, true AGIs will
be capable of universal problem solving and recursive self-improvement.
Consequently, they have the potential to outcompete humans in any domain, essentially
making humankind unnecessary and thus subject to extinction. Additionally, a true AGI
system may possess a type of consciousness comparable to the human type, making
robot suffering a real possibility and any experiments with AGI unethical for that
reason as well.
A similar argument was presented by Ted Kaczynski in his famous manifesto [26]: “It
might be argued that the human race would never be foolish enough to hand over all
the power to the machines. But we are suggesting neither that the human race would
voluntarily turn power over to the machines nor that the machines would wilfully seize
power. What we do suggest is that the human race might easily permit itself to drift into
a position of such dependence on the machines that it would have no practical choice
but to accept all of the machines' decisions. As society and the problems that face it
become more and more complex and machines become more and more intelligent,
people will let machines make more of their decisions for them, simply because
machine-made decisions will bring better results than man-made ones. Eventually a
stage may be reached at which the decisions necessary to keep the system running will
be so complex that human beings will be incapable of making them intelligently. At
that stage the machines will be in effective control. People won't be able to just turn the
machines off, because they will be so dependent on them that turning them off would
amount to suicide.” (Kaczynski, T.: Industrial Society and Its Future. The New York
Times, September 19, 1995)
Algorithms can do anything that can be coded, as long as they have access to the data
they need, at the required speed, and are put into a design frame that allows for
execution of the tasks thus determined. In all these domains, progress has been
enormous. The availability of an enormous amount of data on all human activity and
other processes in the world allows algorithms to make predictions about what happens
next by detecting patterns. Algorithms do better than humans wherever tested, even
though human biases are perpetuated in them: any system designed by humans reflects
human bias, and algorithms rely on data capturing the past, thus automating the status
quo if we do not intervene. But algorithms are noise-free: unlike human subjects, they
arrive at the same decision on the same problem every time.
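Both properties, bias inherited from historical data and freedom from noise, can be illustrated with a deliberately naive sketch; the historical records and the decision rule below are invented for illustration:

```python
# Invented historical hiring records: past human decisions favored group "A".
history = [
    {"group": "A", "hired": True}, {"group": "A", "hired": True},
    {"group": "B", "hired": False}, {"group": "B", "hired": False},
    {"group": "A", "hired": True}, {"group": "B", "hired": False},
]

def hire_rate(group):
    rows = [r for r in history if r["group"] == group]
    return sum(r["hired"] for r in rows) / len(rows)

def decide(candidate_group):
    # A rule fitted to past decisions simply automates the status quo ...
    return hire_rate(candidate_group) >= 0.5

# ... but it is noise-free: identical inputs always yield identical outputs,
# unlike human judges, whose verdicts vary from case to case.
assert all(decide("B") == decide("B") for _ in range(1000))
print(decide("A"), decide("B"))  # the historical bias persists deterministically
```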
An important question arises: how should machines be constrained, such that they act in a
morally acceptable way towards humans? This question concerns Machine Ethics – the search
for formal, unambiguous, algorithmizable and implementable behavioral constraints for
systems, so as to enable them to exhibit morally acceptable behavior. After pointing out
why this is important, we will argue that there is one feasible supplement for Machine
Ethics: Machine Explainability – the ability of an autonomous system to explain its actions
and to argue for them in a way comprehensible for humans. Responsibility, transparency,
auditability, incorruptibility, predictability, and a tendency to not make innocent victims
scream with helpless frustration: all criteria that apply to humans performing social
functions; all criteria that must be considered in an algorithm intended to replace human
judgment of social functions; all criteria that may not appear in a journal of machine
learning considering how an algorithm scales up to more computers. This list of criteria is
by no means exhaustive, but it serves as a small sample of what an increasingly
computerized society should be thinking about.

A rock has no moral status: we may crush
it, pulverize it, or subject it to any treatment we like without any concern for the rock itself.
A human person, on the other hand, must be treated not only as a means but also as an end,
that is, a human person has moral status.
While it is widely agreed that present-day AI systems lack moral status, it is unclear
exactly what attributes ground moral status. Two criteria are commonly proposed as being
importantly linked to moral status, either separately or in combination: sentience and
sapience (or personhood). These may be characterized roughly as follows:
Sentience: the capacity for phenomenal experience or qualia, such as the capacity to feel
pain and suffer.
Sapience: a set of capacities associated with higher intelligence, such as self-awareness
and being a reason-responsive agent.
Superintelligence
Good (1965) set forth the classic hypothesis concerning superintelligence: that an AI
sufficiently intelligent to understand its own design could redesign itself or create a
successor system, more intelligent, which could then redesign itself yet again to become
even more intelligent, and so on in a positive feedback cycle. Good called this the
“intelligence explosion.”
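Good's positive feedback cycle can be caricatured numerically. The update rule and constants below are arbitrary choices for illustration, not a model from the literature:

```python
# Toy positive-feedback loop: each generation's capability determines how
# much it can improve its successor. With improvement proportional to
# current capability, growth is geometric rather than linear.
def intelligence_explosion(initial=1.0, gain=0.5, generations=10):
    level = initial
    trajectory = [level]
    for _ in range(generations):
        level += gain * level  # smarter systems make bigger improvements
        trajectory.append(level)
    return trajectory

traj = intelligence_explosion()
# Successive ratios are constant (1.5 here): the feedback cycle compounds,
# whereas fixed-size improvements would grow the level only linearly.
print(traj)
```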
Kurzweil (2005) holds that “intelligence is inherently impossible to control,” and that
despite any human attempts at taking precautions, “by definition . . . intelligent entities
have the cleverness to easily overcome such barriers.” Yet it does not follow that the AI must
want to rewrite itself to a hostile form. This presents us with perhaps the ultimate challenge
of machine ethics: How do
you build an AI which, when it executes, becomes more ethical than you? If we are serious
about developing advanced AI, this is a challenge that we must meet.