
meet.

If machines are to be placed in a position of being stronger, faster, more trusted, or smarter than humans, then the discipline of machine ethics must commit itself to seeking human-superior (not just human-equivalent) niceness.

Ethical dilemmas that will arise in the AI-infested future world of ours: the case of the AI robot that needs electricity versus the human patient on life support. If the decision is made in favour of the robot, then the robot has acquired moral status!

The debate over Autonomous Weapon Systems (AWS) has begun in earnest, with advocates for the absolute and immediate banning of AWS development, production, and use planting their flag first. They argue that AWS should be banned because these systems lack human qualities, such as the ability to relate to other humans and to apply human judgment, that are necessary to comply with the law. In addition, the weapons would not be constrained by the capacity for compassion, which can provide a key check on the killing of civilians. There are counter-arguments: AWS has the potential to ultimately save human lives (both civilian and military) in armed conflicts; AWS is as inevitable as any other technology that could potentially make our lives better; and to pass on the opportunity to develop AWS is irresponsible from a national security perspective.

a. Human-in-the-loop or semi-autonomous systems require a human to direct the system to select a target and attack it, such as Predator or Reaper UAVs.
b. Human-on-the-loop or human-supervised autonomous systems are weapon systems that select targets and attack them, albeit with human operator oversight; examples include Israel’s Iron Dome and the U.S. Navy’s Phalanx Close-In Weapons System (or CIWS).
c. Human-out-of-the-loop or fully autonomous weapon systems can attack without any human interaction; there are currently no such weapons.

The real issue is hegemonistic competition among nations: AWS cannot be stopped unless research on it is banned under something like a Nuclear Non-Proliferation Treaty. At their core, autonomous weapon systems must be able to distinguish combatants from non-combatants as well as friend from foe. The Law of Armed Conflict (LOAC), also known as international humanitarian law, which signatory countries have to adhere to, is designed to protect those who cannot protect themselves; an underlying driver is to protect civilians from death and combatants from unnecessary suffering. Everyone is in agreement on this. Counter to the counter-arguments: for instance, “How does this technology impact the likely successes of counter-insurgency operations or humanitarian interventions? Does not such weaponry run the risk of making war too easy to wage and tempt policy makers into killing when other, more difficult means should be undertaken?” Will countries be more willing to use force because their populations would have less to lose (i.e. their loved ones) and it would be politically more acceptable?

2. Eco-Ethics of AI: Harm need not be directly to persons; it could also be to the environment. In the computer industry, “e-waste” is a growing and urgent
problem, given the disposal of heavy metals and toxic materials in the devices at the
end of their product lifecycle. Robots as embodied computers will likely exacerbate the
problem, as well as increase pressure on rare-earth elements needed today to build
computing devices and energy resources needed to power them. Networked robots
would also increase the amount of ambient radiofrequency radiation, like that created
by mobile phones—which have been blamed, fairly or not, for a decline of honeybees
necessary for pollination and agriculture, in addition to human health problems.

3. Need for an Ethical Theoretical Framework to be Built into AMAs: Due to the inherent autonomy of these systems, the ethical deliberation has to be carried out by the machines themselves. This means that these autonomous cognitive machines need a theory with the help of which they can, in a specific situation, choose the action that best adheres to moral standards. Why are AI scientists veering towards Aristotle’s virtue ethics as the most probable ethical theory usable in AGI (Artificial General Intelligence) or AMAs (Autonomous Moral Agents)?

For Aristotle, virtue and practical wisdom are acquired through habituation and experience. Applied to AI ethics, this means that a machine cannot have practical wisdom (and thus cannot act morally) before it has learned from realistic data. Machine learning is the improvement of a machine’s performance on a task through experience, and Aristotle’s virtue ethics is the improvement of one’s virtues through experience. Therefore, if one equates task performance with virtuous action, developing a virtue ethics-based machine appears possible. Aristotle’s ergon (function) argument thus has a counterpart in the AMA’s aretē (excellence). A clear understanding of eudaimonia is needed: it is not the maximising of happiness or pleasure, but successfully conducting one’s life according to one’s ergon. A virtuous machine programmed to pursue eudaimonia would therefore not be prone to wireheading, the artificial stimulation of the brain’s reward center to experience pleasure.

Aristotle partitions the virtues into virtues of reason (dianoetic virtues) and virtues of character (ethical virtues); this partition originates in his theory of the soul, in which both are listed as properties of the intelligent part of the soul. The virtues of reason comprise the virtues of pure reason and the virtues of practical reason. Pure reason includes science (epistēmē), wisdom (sophia) and intuitive thought (nous). Practical reason, on the other hand, refers to the virtues of craftsmanship (technē), of making (poiēsis) and of practical wisdom (phronēsis). According to this subdivision into pure and practical reason, there are two ways to lead a good life in the eudaimonic sense: the theoretical life and the practical life. AI systems can lead a theoretical life of contemplation, e.g. when they are applied to scientific data analysis, but to lead a practical life they need the capacity for practical wisdom and morality. This distinction between the theoretical and practical life of an AI somewhat resembles the distinction between narrow and general AI, where narrow AI describes artificial intelligence systems focused on performing one specific task (e.g. image classification) while general AI can operate in more general and realistic situations.
In contrast to deontology and consequentialism, virtue ethics has a hard time giving reasons for its actions (the reasons certainly exist, but they are hard to codify). While deontologists can point towards the principles and duties which have guided their actions, a consequentialist can explain why her actions have led to the best consequences. An AMA based on virtue ethics, on the other hand, would have to show how its virtues, which gave rise to its actions, were formed through experience. This poses an even greater problem if its capability to learn virtues has been implemented as an artificial neural network, because it is almost impossible to extract intuitively understandable reasons from the many network weights. Without being able to give reasons for one’s actions, one cannot take responsibility, a concept underlying not only our insurance system but also our justice system. If the actions of an AMA produce harm, then someone has to take responsibility for it, and the victims have a right to an explanation.

On the other hand, a detailed look at several virtues suggests that virtue ethics is a promising moral theory for solving the two major challenges of contemporary AI safety research: the control problem and the value alignment problem. A machine endowed with the virtue of temperance would not have any desire for excess of any kind, not even for exponential self-improvement, which might lead to a superintelligence posing an existential risk for humanity. Since virtues are an integral part of one’s character, the AI would not have the desire to change its virtue of temperance. Learning from virtuous exemplars has been a process of aligning values for centuries (and possibly for all of human history); thus, building artificial systems with the same imitation-learning capability appears to be a reasonable approach (a minimal sketch follows below).
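As an illustration of this imitation-learning idea, here is a minimal Python sketch that treats “learning from virtuous exemplars” as nearest-neighbour behavioural cloning. The situation features, exemplar data, and action labels are invented purely for illustration; a real AMA would learn a policy from far richer observations of exemplar behaviour.

```python
# Minimal sketch: "learning from virtuous exemplars" as nearest-neighbour imitation.
# All features, exemplars and actions below are hypothetical toy data.

from math import dist

# Each exemplar is a (situation_features, chosen_action) pair observed from a
# virtuous exemplar. Invented features: [risk_to_others, benefit_to_self, truth_at_stake]
EXEMPLARS = [
    ([0.9, 0.8, 0.1], "refrain"),         # high risk to others -> exemplar refrains
    ([0.1, 0.2, 0.9], "tell_truth"),      # truthfulness at stake -> exemplar is honest
    ([0.2, 0.9, 0.2], "act_moderately"),  # large personal gain -> temperance
]

def imitate(situation):
    """Copy the action of the most similar exemplar situation
    (the simplest possible form of behavioural cloning)."""
    _, action = min(EXEMPLARS, key=lambda ex: dist(ex[0], situation))
    return action

if __name__ == "__main__":
    new_situation = [0.8, 0.7, 0.2]   # a hypothetical new dilemma
    print(imitate(new_situation))     # -> "refrain"
```

In a realistic system the nearest-neighbour lookup would be replaced by a learned model, but the alignment mechanism is the same: the system’s choices are anchored to what the exemplars actually did.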
Deontology and utilitarianism pose problems for humans when it comes to ethical dilemmas, and it becomes even more difficult to algorithmise solutions to such dilemmas in AMAs. In deontology there is no room for learning; the imperatives are categorical, and it is difficult to determine what is good for oneself, let alone to universalize the same to all.
Utilitarianism as an ethical theory for AMAs fails again at the hedonistic calculation: there is rarely enough time to complete the calculation before the act. In fact, the problem is computationally intractable when we consider the ever-extending ripple effects that any act can have on the happiness of others across both space and time (a toy calculation below illustrates the scale of the problem). And if the calculation is not required, as in the rule utilitarianism of J.S. Mill, and only virtues and well-established social and human rules are needed, then why not go to virtue ethics?
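To make the intractability point concrete, here is a toy calculation with an assumed branching factor and time horizon (both numbers are purely illustrative): even a modest number of ripple effects per time step makes the consequence tree far too large to enumerate in real time.

```python
# Toy illustration of why a full hedonistic calculus is computationally intractable.
# If every act has `branching` possible ripple effects per time step, the number of
# consequence states to evaluate grows exponentially with the time horizon.

def consequence_states(branching: int, horizon: int) -> int:
    """Number of leaf states in a consequence tree of the given depth."""
    return branching ** horizon

if __name__ == "__main__":
    for horizon in (5, 10, 20, 40):
        print(horizon, consequence_states(branching=10, horizon=horizon))
    # With only 10 ripple effects per step, 40 steps already yield 10**40 states,
    # far beyond what any real-time decision loop could evaluate.
```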

What are the machine learning methodologies for installing morality in AMAs? Not all of our devices need moral agency. As autonomy increases, morality becomes more necessary in robots, but the reverse also holds: machines with little autonomy need less ethical sensitivity. A refrigerator need not decide whether the amount someone eats is healthy and limit access accordingly; in fact, such a fridge would infringe on human autonomy.

There is the Bottom-Up Approach and the Top-Down Approach. In the former, there are three techniques:
a. The Neural Networks Method: This functions similarly to neurons: connections between inputs and outputs make up a system that can learn to do various things, from playing computer games to running bipedally in a simulation. By applying that learning capability to ethical endeavors, a moral machine begins to develop. From reinforcement of positive behaviors and penalization of negative ones, the algorithm learns the pattern of our moral systems (a minimal sketch of this reinforcement idea appears after this list). One downside is the uncertainty regarding what the algorithm has actually learned. When the army tried to get a neural net to recognize tanks hidden in trees, what looked like a distinction between trees, tanks, and partly concealed tanks turned out to be a distinction between a sunny and a cloudy day! A proposed solution is to have two neural networks working side by side: the first learns the correlation between input and output (challenging situation and ethically right decision, respectively), while the second focuses on learning language, connects tags or captions to an input, and explains what cues and ideas the first network used to come up with a course of action.
b. Genetic Algorithms: Large numbers of simple digital agents run through ethically challenging simulations. The ones that return the best scores get “mated” with each other, blending code with a few randomizations, and then the test runs again (Fox, 2009). After the best (or acceptably good) scores based on desired outcomes are achieved, a new situation is added to the repertoire that each program must surpass. In this way, machines can learn our moral patterns (see the genetic-algorithm sketch after this list). Once they have matured ethically, they can be put through a neural-network method for more accurate ethical outcomes. Again, the downside is that we cannot tell what the system has learned or whether it will make mistakes in the future.
c. Scenario Analysis Method: Teaching the AI by having it read books and stories and learn the literature’s ideas and social norms. After analyzing the settings and events of each scenario, the program would save the connections it made for later human inspection. If the program’s connections proved ‘good’, it would then receive a new batch of scenarios to work through, and repeat the cycle. One downside to this approach is the painstaking human analysis it involves; a neural net (the first method) could work in tandem with a scenario analysis system to alleviate the human requirement for analysis. It shares the downsides of the first two methods: the AMA will not be ethically perfect all the time. As of yet, we do not have a reliable method for developing an artificial moral agent.
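As referenced in method (a), here is a minimal Python sketch of the reinforcement idea: behaviors that human labellers approve of are reinforced and disapproved ones are penalized. It uses a simple score table in place of a neural network so the mechanism stays visible, and the situations, actions, and feedback rule are all invented for illustration.

```python
# Minimal sketch of method (a): reinforce approved behaviors, penalize disapproved ones.
# A score table stands in for a neural network; all data here are hypothetical.

import random
from collections import defaultdict

SITUATIONS = ["stranger_in_need", "tempted_to_lie"]
ACTIONS = ["help", "ignore", "tell_truth", "deceive"]

# Hypothetical human feedback: +1 if the action is judged ethical, -1 otherwise.
APPROVED = {"stranger_in_need": "help", "tempted_to_lie": "tell_truth"}

def feedback(situation, action):
    return 1 if APPROVED[situation] == action else -1

def train(episodes=2000, epsilon=0.1, lr=0.5):
    scores = defaultdict(lambda: defaultdict(float))  # learned "ethical" scores
    for _ in range(episodes):
        s = random.choice(SITUATIONS)
        if random.random() < epsilon:                      # occasionally explore
            a = random.choice(ACTIONS)
        else:                                              # otherwise exploit best so far
            a = max(ACTIONS, key=lambda act: scores[s][act])
        # move the score toward the human feedback signal
        scores[s][a] += lr * (feedback(s, a) - scores[s][a])
    return scores

if __name__ == "__main__":
    learned = train()
    for s in SITUATIONS:
        print(s, "->", max(ACTIONS, key=lambda a: learned[s][a]))
    # expected: stranger_in_need -> help, tempted_to_lie -> tell_truth
```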
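And here is a comparably minimal sketch of method (b): a population of extremely simple agents (each reduced to a single “refrain threshold”) is scored on toy scenarios, the top scorers are “mated” by blending their parameters with a few randomizations, and the cycle repeats. The scenarios, scoring rule, and agent representation are hypothetical simplifications of the ethically challenging simulations described above.

```python
# Minimal sketch of method (b): evolve agents against toy "ethical" scenarios.
# Each agent is just a risk threshold above which it refrains from acting.

import random

# (risk_to_others, should_refrain) pairs; hypothetical test scenarios
SCENARIOS = [(0.9, 1), (0.1, 0), (0.7, 1), (0.3, 0)]

def score(agent):
    """Count how many scenarios the agent handles as desired."""
    return sum(1 for risk, should_refrain in SCENARIOS
               if (risk > agent) == bool(should_refrain))

def evolve(pop_size=20, generations=50, mutation=0.05):
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=score, reverse=True)
        parents = population[: pop_size // 2]                # best scorers survive
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                              # "mate" two parents
            child += random.uniform(-mutation, mutation)     # small randomization
            children.append(min(max(child, 0.0), 1.0))
        population = parents + children
    return max(population, key=score)

if __name__ == "__main__":
    best = evolve()
    print("learned refrain-threshold:", round(best, 2),
          "score:", score(best), "of", len(SCENARIOS))
```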
What are the essential ingredients needed to be a true AMA that has human-like qualities? To build an artificial moral agent, DeBaets (2014) argues that a machine must have embodiment, learning, teleology toward the good, and empathy. If the AMA is an ethereal entity, like God, then it will be difficult to convince human beings that it was the AMA that solved a particular problem, just as no amount of miracles can convince man of the existence and immanence of God unless an embodied godman or godwoman performs them. So an essential ingredient is embodiment.

The second ingredient is the ability to learn from the past and from its own actions and mistakes; in short, it needs to have memory. The third ingredient is a teleology toward the good, and the fourth is empathy. These last two ingredients require a kind of “consciousness” of the sort found in humans.

The first two ingredients can be imparted through machine learning techniques. How do you impart the next two? For a machine to empathize with and understand the emotions of others, it must have emotion itself. Thus, if robots do not have consciousness or mental states, they cannot have emotions and therefore cannot have moral agency. Additionally, if a machine innately desires to do good, it must have some form of inner thought or feeling that it is indeed doing good, so teleology also requires consciousness or mental states. A claim of insanity, that is, of not having a teleology for doing good, can get you off the hook in a court of law!

The argument against the need for a teleology toward good and for empathy, in short against the need for consciousness, is that we do not have a foolproof method of measuring consciousness even in humans, so why do we insist on consciousness in robots? People can fake emotions and get away with it, and we accept that. In theory, a robot could imitate or fake emotional cues as well as humans display them naturally. People already tend to anthropomorphize robots, empathize with them, and interpret their behavior as emotional. For consistency with the way we treat human displays of emotion and interpret them as real, we must also treat robotic displays of emotion as real. Compassion could be the reason an autonomous car veers into a tree rather than into a line of children, but the appearance of compassion could serve the same effect. Beavers’ (2011) discussion of classical utilitarianism, referencing Mill (1979), claims that acting good is the same as being good. The same applies to humans, as far as we can tell from the outside. In other words, a ‘good’ and ‘moral’ robot is one that takes moral and good actions. Thus, while we may not get true teleology, functional teleology can suffice.
Also, philosophers have long puzzled over the nature of the mind. One question is whether there is more to the mind than the brain. Whatever else it is, the brain is also a complex algorithm. But is the brain fully described thereby, or does that omit what makes us distinct, namely, consciousness? Consciousness is the qualitative experience of being somebody or something, its “what-it-is-like-to-be-that”-ness, as one might say. If there is nothing more to the mind than the brain, then algorithms in the era of Big Data will soon outdo us at almost everything we do.

4. AI and the Future of HR and Employment:


If society needs fewer workers due to automation and robotics, and many social benefits
are delivered through jobs, how are people outside the workforce for a lengthy period
of time going to get health care and pensions?
In 2013, for example, there were an estimated 1.2 million robots in use. This total rose to around 1.5 million in 2014 and is projected to increase to about 1.9 million in 2017. Japan has the largest number with 306,700, followed by North America (237,400), China (182,300), South Korea (175,600), and Germany (175,200). Overall, robotics is expected to rise from a $15 billion sector now to $67 billion by 2025.
Amazon has organized a “picking challenge” designed to see if robots can
“autonomously grab items from a shelf and place them in a tub.” The firm has around
50,000 people working in its warehouses and it wants to see if robots can perform the
tasks of selecting items and moving them around the warehouse. During the
competition, a Berlin robot successfully completed ten of the twelve tasks. To move
goods around the facility, the company already uses 15,000 robots and it expects to
purchase additional ones in the future.
In the restaurant industry, firms are using technology to remove humans from parts of food delivery. Some places, for example, use tablets that allow customers to order directly from the kitchen without needing to talk to a waiter or waitress. Others enable people to pay directly, obviating the need for cashiers. Still others tell chefs how much of an ingredient to add to a dish, which cuts down on food expenses.
Stock exchange trading has now been largely taken over by AI; the role of brokers has come down by more than half.
Machine-to-machine communications and remote monitoring sensors that remove
humans from the equation and substitute automated processes have become popular in
the health care area. There are sensors that record vital signs and electronically transmit
them to medical doctors. For example, heart patients have monitors that compile blood
pressure, blood oxygen levels, and heart rates. Readings are sent to a doctor, who
adjusts medications as the readings come in. According to medical professionals,
“we’ve been able to show significant reduction” in hospital admissions through these
and other kinds of wireless devices.
There also are devices that measure “biological, chemical, or physical processes” and
deliver “a drug or intervention based on the sensor data obtained.” They help people
maintain an independent lifestyle as they age and keep them in close touch with medical
personnel. “Point-of-care” technologies keep people out of hospitals and emergency
rooms, while still providing access to the latest therapies.
Unmanned vehicles and autonomous drones are creating new markets for machines and
performing functions that used to require human intervention. Driverless cars represent
one of the latest examples. Google has driven its cars almost 500,000 miles and found
a remarkable level of performance. Manufacturers such as Tesla, Audi, and General
Motors have found that autonomous cars experience fewer accidents and obtain better
mileage than vehicles driven by people.
Bill Gates recently observed that “the emergence of the robotics industry ... is
developing in much the same way that the computer business did 30 years ago.”
In some countries, such as Japan, robots are quite literally replacements for humans: a growing elderly population and declining birthrate mean a shrinking workforce, and robots are built specifically to fill that labor gap. Given the nation’s storied love of technology, it is unsurprising that approximately one out of every 25 workers in Japan is a robot. While the US currently dominates the market in military robotics, nations such as Japan and South Korea lead in the market for social robotics,
robotics, nations such as Japan and South Korea lead in the market for social robotics,
such as elderly-care robots. Other nations with similar demographics, such as Italy, are
expected to introduce more robotics into their societies, as a way to shore up a
decreasing workforce; and nations without such concerns can drive productivity,
efficiency, and effectiveness to new heights with robotics.
However, researchers and policy makers are underequipped to forecast the labor trends
resulting from specific cognitive technologies, such as AI.
5. Robot Rights: If robots acquire autonomous status and human qualities, then don’t they have rights? A person does not have to be able to speak to have rights. Indeed, small infants, whose ability to reason, communicate, and do many other things that we tend to identify with intelligence is still in the process of formation, have their rights protected by law. The issue is thus not really rights for artificial intelligences so much as rights for machine persons. It is the definition and identification of the latter that is the crucial issue.
Nevertheless, the distinction would seem to be a valid one for as long as it remains a
meaningful one: machines that develop their own personhood in imitation of humans
will probably deserve to be recognized as persons, whereas mere simulacra designed as
an elaborate contrivance will not.
Some soldiers have bonded emotionally with the bomb-disposing PackBots that have saved their lives, sobbing when the robot meets its end. And robots are predicted to soon become our lovers and companions: they will always listen and never cheat on us. Given the lack of research studies in these areas, it is unclear whether psychological harm might arise from replacing human relationships with robotic ones.
If there indeed is more to the mind than the brain, dealing with AI, including humanoid robots, would be easier. Consciousness, or perhaps the accompanying possession of a conscience, might then set us apart. It is a genuinely open question how to make sense of qualitative experience and thus of consciousness. But even though considerations about consciousness might contradict the view that AI systems are moral agents, they will not make it impossible for such systems to be legal actors and, as such, to own property, commit crimes, and be accountable in legally enforceable ways. After all, we have a history of treating corporations, which also do not have consciousness, in such ways.
Principle of Substrate Non-Discrimination: If two beings have the same functionality
and the same conscious experience and differ only in the substrate of their
implementation, then they have the same moral status.
Principle of Ontogeny Non-Discrimination: If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.
Parents have special duties to their child which they do not have to other children, and
which they would not have even if there were another child qualitatively identical to
their own. Similarly, the Principle of Ontogeny Non-Discrimination is consistent with
