Ethical dilemmas that will arise in the AI-infested future world of ours: consider the case of the AI robot that needs electricity versus the human patient on life support. If the decision goes in favour of the robot, then the robot has acquired moral status!!
1. Ethics of Autonomous Weapon Systems (AWS): Opponents of AWS planted their flag first, calling for a ban on AWS development, production, and use. They argue that these systems lack human compassion, which can provide a key check on the killing of civilians. There are counter-arguments: AWS has the potential to ultimately save human lives (both civilian and military) in armed conflicts, and AWS is as inevitable as any other militarily useful technology. Three categories of such systems are commonly distinguished:
a. Human-in-the-loop weapon systems, which require a human to direct the system to select a target and attack it, such as armed drones.
b. Human-on-the-loop weapon systems, which are weapon systems that select targets and attack them, albeit under the oversight of a human operator who can override them, such as the Phalanx Close-In Weapon System (or CIWS).
c. Human-out-of-the-loop or fully autonomous weapon systems, which would select targets and attack them without any human interaction; at present there are arguably no such weapons.
The real issue is hegemonistic competition among nations: AWS cannot be stopped unless research on it is banned by a treaty akin to the Nuclear Non-Proliferation Treaty. At their core, autonomous weapon systems must be able to distinguish combatants from non-combatants, as well as friend from foe. This requirement is codified in the Law of Armed Conflict (LOAC), also known as International Humanitarian Law, which signatory countries have to adhere to. LOAC is designed to protect those who cannot protect themselves; an underlying driver is to protect civilians from death and combatants from unnecessary suffering. Everyone is in agreement on this. Counters to the counter-arguments: For instance, “How does this technology impact the likely success of counter-insurgency operations or humanitarian interventions? Does not such weaponry run the risk of making war too easy to wage and tempt policy makers into killing when other, more difficult means should be undertaken?” Will countries be more willing to use force because their populations would have less to lose (i.e. their loved ones) and it would be politically more acceptable?
2. Eco-Ethics of AI: Harm need not be directly to persons; it could also be to the environment. In the computer industry, “e-waste” is a growing and urgent
problem, given the disposal of heavy metals and toxic materials in the devices at the
end of their product lifecycle. Robots as embodied computers will likely exacerbate the
problem, as well as increase pressure on rare-earth elements needed today to build
computing devices and energy resources needed to power them. Networked robots
would also increase the amount of ambient radiofrequency radiation, like that created
by mobile phones—which have been blamed, fairly or not, for a decline of honeybees
necessary for pollination and agriculture, in addition to human health problems.
What are the machine learning methodologies for instilling morality into artificial moral agents (AMAs)? Not all of our devices need moral agency. As autonomy increases, morality becomes more necessary in robots, but the reverse also holds: machines with little autonomy need less ethical sensitivity. A refrigerator need not decide whether the amount someone eats is healthy and limit access accordingly; in fact, such a fridge would infringe on human autonomy.
There is the Bottom-Up Approach and the Top-Down Approach. In the former, there are three
techniques:
a. The Neural Networks Method: This functions similarly to neurons: connections
between inputs and outputs make up a system that can learn to do various things,
from playing computer games to running bipedally in a simulation. When that learning
capability is applied to ethical problems, a moral machine begins to develop. By
reinforcing positive behaviors and penalizing negative ones, the algorithm learns the
patterns of our moral systems. One downside is uncertainty about what the algorithm has
actually learned: when the army tried to get a neural net to recognize tanks hidden in
trees, what looked like a distinction between trees, tanks, and partly concealed tanks
turned out to be a distinction between a sunny and a cloudy day!! A proposed solution is
to have two neural networks working side by side. The first learns the correlation
between input and output, that is, between an ethically challenging situation and the
ethically right decision. The second learns language, connects tags or captions to the
input, and explains what cues and ideas the first network used to arrive at a course of
action (see the first sketch following this list).
b. Genetic Algorithms: Large numbers of simple digital agents run through ethically
challenging simulations. The ones that return the best scores get “mated” with each
other, blending code with a few randomizations, and then the test runs again (Fox,
2009). After the best (or acceptably best) scores on the desired outcomes are achieved,
a new situation is added to the repertoire that each program must pass. In this way,
machines can learn our moral patterns. Once they have matured ethically, they can be put
through a neural network method for more accurate ethical outcomes. Again, the downside
is that we cannot tell what the system has learned or whether it will make mistakes in
the future (see the second sketch below).
c. Scenario Analysis Method: Teaching AI by having it read books and stories and
learn the literature’s ideas and social norms. After analyzing the settings and events
of each scenario, the program would save the connections it made for later human
inspection. If the program’s connections proved ‘good,’ it would then receive a new
batch of scenarios to work through, and the cycle repeats. One downside to this approach
is the painstaking human analysis it requires; a neural net (the first method) could work
in tandem with a scenario-analysis system to lighten that burden. It shares the downsides
of the first two methods: the AMA will not be ethically perfect all the time. As of yet,
we do not have a reliable method for developing an artificial moral agent (see the third
sketch below).
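To make these bottom-up techniques concrete, here is a minimal, purely illustrative sketch of the reinforcement idea behind the neural networks method. The scenario features, labels, and single-layer learning rule are invented stand-ins, not a real moral dataset or a full neural network:

```python
# Toy reinforcement of "moral" behavior: a single-layer learner whose weights
# are nudged by praise (+1) or penalty (-1). All features are hypothetical.
import random

# Each scenario: [harm_to_humans, saves_lives, consent_given] -> signal
scenarios = [
    ([1.0, 0.0, 0.0], -1),  # causes harm, saves no one      -> penalized
    ([0.0, 1.0, 1.0], +1),  # saves lives with consent       -> reinforced
    ([1.0, 1.0, 0.0], -1),  # saves lives but harms, no consent
    ([0.0, 0.0, 1.0], +1),  # harmless, consensual action
]

weights = [random.uniform(-0.1, 0.1) for _ in range(3)]
lr = 0.1  # learning rate

def score(features):
    """Weighted sum; positive means 'act', negative means 'refrain'."""
    return sum(w * x for w, x in zip(weights, features))

for epoch in range(100):
    for features, signal in scenarios:
        decision = 1 if score(features) > 0 else -1
        if decision != signal:           # behavior corrected by reward/penalty
            for i, x in enumerate(features):
                weights[i] += lr * signal * x

# The learned weights encode the pattern of approvals, but, as the tank
# anecdote warns, we can only probe, not directly read, what was learned.
print([round(w, 2) for w in weights])
```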
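Similarly, a minimal sketch of the genetic-algorithm technique. A stand-in fitness function plays the role of an ethically challenging simulation, and the TARGET genome is invented purely for illustration:

```python
# Toy genetic algorithm: agents are bit strings scored by a stand-in
# "ethics simulation"; the best scorers are mated with a few randomizations.
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical "ethically ideal" behavior

def fitness(genome):
    """Placeholder for the score an agent earns in an ethical simulation."""
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def mate(a, b, mutation_rate=0.05):
    """Blend two parents' code, flipping a few bits at random (mutation)."""
    child = [random.choice(pair) for pair in zip(a, b)]
    return [1 - g if random.random() < mutation_rate else g for g in child]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # acceptably best score reached; here a new scenario
               # would be added to the repertoire and the runs restarted
    parents = population[:10]  # the best scorers get "mated"
    population = [mate(random.choice(parents), random.choice(parents))
                  for _ in range(20)]

print(generation, population[0], fitness(population[0]))
```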
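Finally, a toy sketch of scenario analysis: invented mini-stories are mined for action-outcome connections, which are saved to a file for later human inspection before the next batch is processed. The stories, cue words, and file name are all hypothetical:

```python
# Toy scenario analysis: connect actions in stories to the social valence
# of the outcomes they precede, and save the connections for human review.
import json

batch_1 = [
    "the robot shared the medicine and the patient recovered",
    "the robot lied to the doctor and the patient suffered",
    "the robot shared the food and the child smiled",
]

NORM_CUES = {"recovered": "good", "smiled": "good", "suffered": "bad"}
ACTIONS = {"shared", "lied"}

def analyze(stories):
    """Record which social valence each action is associated with."""
    connections = {}
    for story in stories:
        words = story.split()
        action = next((w for w in words if w in ACTIONS), None)
        outcome = next((NORM_CUES[w] for w in words if w in NORM_CUES), None)
        if action and outcome:
            connections.setdefault(action, []).append(outcome)
    return connections

learned = analyze(batch_1)
with open("connections_for_review.json", "w") as f:
    json.dump(learned, f, indent=2)  # saved for later human inspection

print(learned)  # e.g. {'shared': ['good', 'good'], 'lied': ['bad']}
```

If the reviewed connections prove ‘good’, a new batch of scenarios would be fed in and the cycle repeated.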
What are the essential ingredients needed to be a true AMA that
has human-like qualities? To build an artificial moral agent, DeBaets (2014) argues
that a machine must have embodiment, learning, teleology toward the good, and empathy. If
the AMA is an ethereal entity, like God, then it will be difficult to convince human beings that
it was the AMA that solved a particular problem, just as no amount of miracles can convince
man of the existence and immanence of God unless an embodied godman or godwoman performs them.
So an essential ingredient is embodiment.
The second ingredient is the ability to learn from the past and from its own actions/mistakes.
In short, it needs to have memory. The third ingredient is a teleology for the good and the fourth
ingredient is empathy. These last two ingredients need a kind of “consciousness” that is there
in humans.
The first two ingredients can be built in through machine learning techniques. How do you
build in the other two? For a machine to empathize with and understand the emotions of others, it must
have emotion itself. Thus, if robots do not have consciousness or mental states, they cannot
have emotions and therefore cannot have moral agency. Additionally, if a machine innately
desires to do good, it must have some form of inner thoughts or feeling that it is indeed doing
good, so teleology also requires consciousness or mental states. A claim to insanity, that is, not
having a teleology for doing good, can get you off the hook in a court of law!!
The argument against the need for a teleology towards good and for empathy, in short against
the need for consciousness, is this: we have no foolproof method of measuring consciousness
even in humans, so why insist on consciousness in robots? People can fake
emotions and get away with it, and we accept that. In theory, a robot could imitate or fake
emotional cues as well as humans display them naturally. People already tend to
anthropomorphize robots, empathize with them, and interpret their behavior as emotional. For
consistency with the way we treat human displays of emotion as real, we must also treat
robotic displays of emotion as real. Compassion could be the reason an autonomous car
veers into a tree rather than into a line of children, but the mere appearance of compassion
would have the same effect. Beavers’ (2011) discussion of classical utilitarianism, referencing Mill
(1979), claims that acting good is the same as being good. The same applies to humans, as far
as we can tell from the outside. In other words, a ‘good’ and ‘moral’ robot is one that takes
moral and good actions. Thus, while we may not get true teleology, functional teleology can
suffice.
Also, philosophers have long puzzled over the nature of the mind. One question is whether
there is more to the mind than the brain. Whatever else it is, the brain is also a complex
algorithm. But is the brain fully described thereby, or does that description omit what makes us
human? If there is nothing more to the mind than the brain, then algorithms in the era of Big Data will