
Viewpoints

DOI:10.1145/3379920
Peter J. Denning and Dorothy E. Denning

The Profession of IT
Dilemmas of Artificial Intelligence
Artificial intelligence has confronted us with a raft of dilemmas that challenge us to decide what values are important in our designs.

Many speakers have pointed to various challenging ethical and design dilemmas raised by AI technology—we will describe 10 of the most prominent ones in this column. The first few are mostly technical; they arise from the seemingly impenetrable complexity of the new technology. The final few include strong social dimensions; they arise from the difficulty of resolving emotional value conflicts to everyone's satisfaction.

[Image credit: screen capture of the stop-sign fragility problem courtesy of Polo Club of Data Science, Georgia Tech/YouTube, based on the paper arxiv.org/abs/1804.05810]

Explainability
The most common AI technology is the artificial neural network (ANN). An ANN consists of many layers of artificial neurons interconnected via weighted links. ANNs are not programmed in the conventional way by specifying the steps of an algorithm. Instead they are trained by showing them large numbers of examples of input-output pairs and adjusting their internal connection weights so that every input gives a correct output.

[Figure caption: An example of the stop-sign fragility problem: Will a driverless car's road-sign recognizer correctly see a stop sign?]
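This training loop can be made concrete with a deliberately tiny sketch: a single artificial neuron (just two connection weights and a bias) is shown input-output pairs for the logical OR function, and its weights are nudged after each example to reduce the error. This is an illustrative toy in plain Python, not any production training code; real ANNs differ mainly in having many layers and vastly more weights.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training examples: input-output pairs for the logical OR function.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # connection weights
b = 0.0                                             # bias weight
rate = 1.0                                          # learning rate

# Repeatedly show the examples and nudge the weights to shrink the error.
for _ in range(5000):
    for (x1, x2), target in examples:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = out - target            # error signal (constant factor absorbed in rate)
        grad = err * out * (1 - out)  # chain rule through the sigmoid
        w[0] -= rate * grad * x1
        w[1] -= rate * grad * x2
        b -= rate * grad

for (x1, x2), target in examples:
    out = sigmoid(w[0] * x1 + w[1] * x2 + b)
    print((x1, x2), round(out))  # each output rounds to its trained target
```

After training, every input produces an output that rounds to its target. With gigabytes of such weights in place of three numbers, it becomes clear why no human can read an explanation out of the matrix.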
The matrix of connection weights can amount to several gigabytes of storage. In effect, an ANN encodes the training examples of a function in its connection matrix and extrapolates them to estimate the outputs for data not in the training examples.

What happens if the human operator wants to know why the network generated an unexpected or erroneous output? In a conventional program, the operator would locate the code segment responsible for the output and, if necessary, repair it. In a neural network, the operator sees no algorithmic steps, just an unintelligible gigabyte-size matrix of weights. How the weights relate to the unexpected output is totally opaque. It is a hot research area to find ways to augment neural networks so that their outputs can be explained.

Fragility
Neural networks can be quite sensitive to small changes in their inputs. For example, changing a few pixels of a trained input image can cause the output to change significantly even though the human operator cannot see a difference in the image. This leads to uncertainty in whether to trust a neural network when it is presented with new data on which it was not trained.

Communications of the ACM, March 2020, Vol. 63, No. 3



For example, when shown a new photo of a person's face, will it identify it as that person or someone else? Will a road-sign recognizer in a driverless car correctly see a stop sign, and stop?

The sensitivity to small input changes is a vulnerability. A new subfield, "adversarial AI," has sprung up to find defenses against an adversary seeking to cause a neural network to malfunction. In one famous experiment, a road-sign recognizer was confused by an image of a stop sign on which small squares of masking tape had been applied at strategic locations; instead of saying "stop sign," the network said "speed limit sign." In the current state of the art, it appears that small changes in the sensor outputs that feed a neural network can produce significantly wrong outputs. What looks to a human like a small continuous change to the input looks to the network like a discontinuous jump to a new state.

Fragility can also be seen when comparing neural networks. Suppose two neural networks are each trained from a different training set taken as a sample from a larger population. By all standard measures the two training sets are fair representatives of the population. Yet in practice, the two networks can respond with different outputs when shown the same input. Statistically minor changes in the training data can result in major changes in the output.

Researchers are looking for improved methods to measure the sensitivity of neural networks to small changes in their inputs, and for ways to ensure a small input change results in only a small output change.

Bias
This issue arises with the training data of neural networks: a bias in the training data can skew outputs. Many people are concerned about police use of neural networks that, trained on faces of predominantly white people, give wrong identifications of faces of people of color. The bias of the training data may be invisible to the people running the training algorithms and only becomes visible in the results when the network is presented with untrained inputs.

The bias issue is further complicated by the fact that human beings are inherently biased. Each person has an individual way of interpreting the world that does not always agree with others'. What appears as bias to one person may appear as fairness to another. What one person sees as the solution to a bias problem may appear as a new bias to another. This aspect of bias cannot be resolved within the technology by new statistical methods. It demands that humans respect each other's differences and negotiate solutions for conflicts.

Fakes
Tools for editing images, videos, and soundtracks are being combined with AI tools to produce convincing fakes.¹ They cannot be distinguished from real images, videos, or soundtracks without advanced equipment and forensic skills. These digital objects often contain biometric data of specific individuals, used for identification. How can we trust digital identifications when digitized forms of traditional identifications cannot be distinguished from fakes?

High Cost of Reliable Training Data
Neural networks require large training sets, and getting properly labeled data is time consuming and expensive. Consider the labor costs of a training scenario. Trained physicians must review colon images to identify suspicious polyps and label the images with their diagnoses. Suppose training a suspicious-polyp recognizer needs a million labeled images and a physician can diagnose and label an image in six minutes. Then 100,000 physician hours are needed to complete the labeling. If physicians were paid $50 an hour for this job, the training set would cost $5 million.

Training is also energy-intensive: a training run that takes several days is as computationally intensive as bitcoin mining. This means good-quality training sets are hard to come by.

To keep costs down there is much interest in open source training sets. Users of these training sets are right to be concerned about the quality of the data because the persons contributing might be low-wage amateurs rather than well-paid professionals. There are reports of exactly this happening in open datasets that are then used to train medical-diagnosis networks.

So even if developers are determined to avoid bias by collecting large datasets, the datasets will be expensive, and right now it is difficult to determine their quality. The big tech companies have a lot of reliable raw data about their users, but they are not sharing it.

Military Uses of AI
Project Maven is a U.S. Pentagon project to use AI to give drones the power to distinguish between people and objects. Google was a partner and outsourced image differentiation to a company that used captchas to distinguish people from other objects. The gig workers looking at the captchas did not know they were teaching an AI system for a military purpose. When 3,000 Google employees formally protested, saying Google should not be developing technologies of war, Google executives decided not to renew the Maven contract.

Aversion to research for the military has been a difficult issue in universities since the days of the Vietnam War, when most universities divested themselves of laboratories that researched such technologies. Most DOD contracts are with private companies that are not involved with universities. With the large influx of new graduates into the big tech companies, the same aversion is now showing up among employees of private companies. The dilemma is how to balance the need for national defense with the desire of many employees to avoid contributing to war.
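The fragility described earlier can be demonstrated even on a trivial model. The sketch below is purely illustrative (hypothetical weights and a four-"pixel" image, not the road-sign recognizer from the actual experiment). It uses the idea behind gradient-sign adversarial attacks: move every input value a small step in the direction that most reduces the classification score, and a confident output flips.

```python
# Toy demonstration of adversarial fragility. The "classifier" is a hypothetical
# linear scorer: score > 0 means "stop sign", otherwise "speed limit sign".
weights = [0.9, -0.4, 0.7, -0.8]

def classify(pixels):
    score = sum(w * p for w, p in zip(weights, pixels))
    return "stop sign" if score > 0 else "speed limit sign"

image = [0.60, 0.52, 0.48, 0.55]   # scores slightly positive: a "stop sign"

# Gradient-sign-style perturbation: nudge each pixel a small step against
# the direction in which its weight pushes the score upward.
eps = 0.1
sign = lambda w: 1 if w > 0 else -1
adversarial = [p - eps * sign(w) for p, w in zip(image, weights)]

print(classify(image))        # stop sign
print(classify(adversarial))  # speed limit sign
print(max(abs(a - p) for a, p in zip(adversarial, image)))  # each pixel moved by only ~0.1
```

Here a uniform change of 0.1 per pixel, imperceptible in a real image, flips the label. Real attacks compute the same per-feature directions from the network's gradients across millions of pixels and weights.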

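The labor-cost estimate under "High Cost of Reliable Training Data" is straightforward arithmetic worth checking: a million images at six physician-minutes each is 100,000 hours, which at $50 per hour comes to $5 million.

```python
# Back-of-the-envelope check of the physician-labeling cost estimate.
images = 1_000_000        # labeled images needed for the recognizer
minutes_per_image = 6     # physician time to diagnose and label one image
dollars_per_hour = 50     # assumed physician pay rate

physician_hours = images * minutes_per_image / 60
labeling_cost = physician_hours * dollars_per_hour

print(physician_hours)    # 100000.0 hours
print(labeling_cost)      # 5000000.0 dollars, i.e., $5 million
```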

Weapons and Control
The military's interest in AI to distinguish potential targets for drone attacks introduces another dilemma: Should a drone be allowed to deploy its weapon without an explicit command from a human operator? If AI is used in any weapons system, should a human have the final say in whether a weapon is launched?

Looking to the future, AI may also facilitate the creation of inexpensive weapons of mass destruction. Stuart Russell, a computer science professor at UC Berkeley and an AI pioneer, issued a dire warning about AI-controlled drones being used as WMD.² He produced a video, "Slaughterbots," which presented a near-future scenario in which swarms of cheap drones with on-board facial recognition and a deadly payload assassinate political opponents and perform other atrocities. A swarm of 25,000 drones could be as destructive as a small nuclear bomb at a tiny fraction of the price.

Russell worries not only about the destructive potential of current AI technology, but also about the even more destructive potential of advanced AI. He says the creation of a super-intelligent computer would be the most significant event in human history—and might well be its last.

Isaac Asimov postulated the famous Three Laws of Robotics in 1950, but no one has found a way to enforce them in the design of robots. The dilemma is: Should we continue to work on developing general AI when we do not know if we can control it?

Employment and Jobs
There is widespread fear that AI-powered machines will automate many familiar office tasks and displace many jobs. This fear is not unique to AI technology. For hundreds of years, new technologies have stirred social unrest when workers felt threatened by the loss of their jobs and livelihoods. The fear is heightened in the modern age by the accelerated pace of AI automation. A century ago, a technology change was a slow process that took a generation to be fully adopted. Today, a technology change can appear as an avalanche, sweeping away jobs, identities, and professions in just a few years. Although the historical record says a new technology is likely to produce more jobs in the long run than it displaces, the new jobs require new skill sets the displaced workers do not have. The appearance of new jobs does not help the displaced.

One solution is regional training centers that help displaced workers move into the new professions. Unfortunately, the investment in such centers is currently limited. Another proposed solution is the Universal Basic Income (UBI), which would give every adult a monthly stipend to make up for income lost to automation. This proposal is very controversial.

Surveillance Capitalism
Surveillance capitalism is a term coined by Shoshana Zuboff to describe a new phenomenon arising in the commercial space of the Internet.³ The issue is that most online services capture voluminous data about user actions, which the service provider then sells to advertisers. The advertisers then use AI to target ads and tempt individuals into purchases they find difficult to resist. They also use AI to selectively customize information to individuals to manipulate their behavior, such as their thinking about political candidates or causes.

The phenomenon is spreading to app developers as well. Their apps are Internet connected and provide data from mobile device sensors. A growing number are opting for "X as a service," meaning function X is no longer provided as installable software but is instead a subscription service. In addition to a steady stream of monetizable personal data, this strategy provides a steady stream of income from subscribers.

Many of these services and apps are so attractive and convenient that the tide to adopt them will not soon reverse. The dilemma for app developers is to find a way to provide the service without compromising individual users' control over their data. The dilemma for citizens is how to effectively resist the trend to monetize their personal data and manipulate their behavior.

Decision Making
Dilemmas arise around machines that make decisions in lieu of humans. Consider a self-driving car whose sensors indicate "pedestrian ahead." How does the car decide between applying the brakes abruptly and potentially harming the occupant, or applying the brakes moderately and potentially hitting the pedestrian? Should the car swerve into the car alongside, or drive off a cliff? Or do we hand control to the human and let that person choose an alternative? More generally, do we want machines that only make recommendations, or machines that make and act on decisions autonomously? Is it even possible for machines to "act ethically"? Or is that something only humans can do?

Conclusion
None of these dilemmas is easily resolved. Many can be couched as ethical dilemmas that no professional code of ethics has been able to answer. Some of these dilemmas make obeying Asimov's First Law impossible: no matter what action is taken (or not taken), a human will get hurt. Software developers face major challenges in finding designs that resolve them.

References
1. Farid, H. Fake Photos. MIT Press Essential Knowledge Series, 2019.
2. Russell, S.R. Human Compatible: AI and the Problem of Control. Allen Lane of Penguin Books, Random House, UK, 2019.
3. Zuboff, S. The Age of Surveillance Capitalism. Public Affairs Books, 2019.

For more on autonomous vehicles, see Awad et al. on p. 48.

Peter J. Denning (pjd@nps.edu) is Distinguished Professor of Computer Science and Director of the Cebrowski Institute for information innovation at the Naval Postgraduate School in Monterey, CA, USA, is Editor of ACM Ubiquity, and is a past president of ACM. The author's views expressed here are not necessarily those of his employer or the U.S. federal government.

Dorothy E. Denning (dedennin@nps.edu) is Distinguished Professor Emeritus of Defense Analysis at the Naval Postgraduate School in Monterey, CA, USA.

Copyright held by authors.


