
THE AGE OF INTELLIGENT MACHINES | The Social Impact of Artificial Intelligence
April 25, 2001

author | Margaret A. Boden

Is artificial intelligence in human society a utopian dream or a Faustian nightmare? Will our descendants honor us
for making machines do things that human minds do or berate us for irresponsibility and hubris? Either of these
judgments might be made of us, for like most human projects this infant technology is ambivalent. Just which
aspects of its potential are realized will depend largely on social and political factors. Although these are not
wholly subject to deliberate control, they can be influenced by human choice and public opinion. If future
generations are to have reason to thank us rather than to curse us, it's important that the public (and politicians)
of today should know as much as possible about the potential effects, for good or ill, of artificial intelligence (AI).

What are some of the potential advantages of AI? Clearly, AI can make knowledge more widely available. We
shall certainly see a wide variety of expert systems: for aiding medical diagnosis and prescription, for helping
scientists, lawyers, welfare advisers, and other professionals, and for providing people with information and
suggestions for solving problems in the privacy of their homes. Educational expert systems include interactive
programs that can help students (schoolchildren or adults, such as medical students) to familiarize themselves
with some established domain. This would give us much more than a set of useful tools and educational cribs. In
virtue of its applications in the communication and exploration of knowledge, AI could revolutionize our capacity
for creativity and problem solving, much as the invention of printing did.

One advantage of having computers in the schoolroom and elsewhere is that they are not human. Precisely
because they are not, they will not be bored by their human users' questions, nor scorn their users' mistakes, as
another person might. The user may be ignorant, stupid, or naive, but the computer will not think so. Moreover,
what looks like ignorance, stupidity, or naivete is often a sort of exploratory playing around with ideas that is the
essence of learning and of creativity. Many children have their self-confidence undermined by their teachers'
explicit or implicit rejection of their attempts at self-directed thinking. Similarly, many people (for instance, those
who are female, working class, Jewish, disabled, or black) encounter unspoken, and often unconscious,
prejudice in their dealings with official or professional bodies. An AI welfare adviser, for example, would not be
prejudiced against such clients unless its data and inferential rules were biased in the relevant ways. A program
could, of course, be written so as to embody its programmer's prejudices, but the program can be printed out and
examined, whereas social attitudes cannot.
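Boden's contrast between inspectable programs and opaque social attitudes can be made concrete with a toy rule-based adviser. This is a hypothetical sketch, not any real welfare system; the rules, field names, and thresholds are invented for illustration:

```python
# A toy rule-based "welfare adviser" (hypothetical illustration).
# Its reasoning lives entirely in an explicit rule list, so any bias
# built into the rules is visible to anyone who reads them.

RULES = [
    # (condition on the applicant record, advice given when it holds)
    (lambda a: a["income"] < 10000, "eligible for income support"),
    (lambda a: a["dependents"] > 0, "eligible for child benefit"),
    (lambda a: a["age"] >= 65, "eligible for pension credit"),
]

def advise(applicant):
    """Return every piece of advice whose rule condition holds."""
    return [advice for cond, advice in RULES if cond(applicant)]

applicant = {"income": 8000, "dependents": 2, "age": 34}
print(advise(applicant))
# -> ['eligible for income support', 'eligible for child benefit']
```

A human adviser's attitudes cannot be printed out; this program's can: the RULES list above is the complete basis for its advice, and a prejudiced rule would be there in black and white.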

Artificial intelligence might even lead to a society in which people have greater freedom and greater incentive to
concentrate on what is most fully human. Too few of us today (especially men) have time to commit ourselves to
developing our interpersonal relations with family and friends. Increased leisure time throughout society (on the
assumption that appropriate political and economic structures had been developed to allow for this) would make
room for such conviviality. Partly as a result of this and perhaps partly as a reaction against the unemotional
nature of most AI programs, the emotional dimension of personality might come to be more highly valued (again,
especially by men) than it is in the West today. In my view, this would be all to the good. Similarly, the new
technology might make it possible for many more people (yet again, especially men) to engage in activities,
whether paid or unpaid, in the service sector: education, health, recreation, and welfare. The need for such
activities is pressing, but the current distribution of income makes these intrinsically satisfying jobs financially
unattractive. One of the most important benefits of all is that AI can rehumanize (yes, rehumanize) our image of
ourselves. How can this be? Most people assume that AI either has nothing to teach us about the nature of being
human or that it depicts us as nothing but machines: poor deluded folk, we believe ourselves to be purposive,
responsible creatures whereas in reality we are nothing of the kind.

The crucial point is that AI is concerned with representations, and how they can be constructed, stored,
accessed, compared, and transformed. A computer program is itself a set of representations, a symbol system
that models the world more or less adequately. This is why it is possible for an AI program to reflect the sexist or
racist prejudices of its programmer. But representation is central to psychology as well, for the mind too is a
system that represents the world and possible worlds in various ways. Our hopes, fears, beliefs, memories,
perceptions, intentions, and desires all involve our ideas about (our mental models of) the world and other worlds.
This is what humanist philosophers and psychologists have always said, of course, but until recently they had no
support from science. Because sciences like physics and chemistry have no place for the concept of
representation, their philosophical influence over the past four centuries has been insidiously dehumanizing. The
mechanization of our world picture, including our image of man, was inevitable, for what a science cannot describe
it cannot recognize. Not only can artificial intelligence recognize the mind (as distinct from the body); it can also
help to explain it. It gives us back to ourselves, by helping us to understand how it is possible for a
representational system to be embodied in a physical mechanism (brain or computer).

So much for the rose-colored spectacles. What of the darker implications? Many people fear that in developing
AI, we may be sowing the seeds of our own physical, political, economic, and moral
destruction. Physical destruction could conceivably result from the current plans to use AI within the U.S.
Strategic Defense Initiative (Star Wars). One highly respected computer scientist, David Parnas, publicly
resigned from the U.S. government's top advisory committee on SDI computing on the grounds that computer
technology (and AI in particular) cannot in principle achieve the reliability required for a use where even one
failure could be disastrous. Having worked on military applications throughout his professional life, Parnas had no
political ax to grind. His resignation, like his testimony before the U.S. Senate in December 1985, was based on
purely technical judgment.

Political destruction could result from the exploitation of AI (and highly centralized telecommunications) by a
totalitarian state. If AI research had developed programs with a capacity for understanding text, understanding
speech, interpreting images, and updating memory, the amount of information about individuals that was
potentially available to government would be enormous. Good news for Big Brother, perhaps, but not for you and
me.

Economic destruction might happen too if changes in the patterns and/or rates of employment are not
accompanied by radical structural changes in industrial society and in the way people think about work.
Economists differ about whether the convivial society described above is even possible: some argue that no
stable economic system could exist in which only a small fraction of the people do productive (nonservice) work.
Certainly, if anything like this is to be achieved, and achieved without horrendous social costs, new ways of
defining and distributing society's goods will have to be found. At the same time, our notion of work will have to
change: the Protestant ethic is not appropriate for a high-technology postindustrial society.

Last, what of moral destruction: could we become less human (indeed, less than human) as a result of advances
in AI? This might happen if people were to come to believe that purpose, choice, hope, and responsibility are all
sentimental illusions. Those who believe that they have no choice, no autonomy, are unlikely to try to exercise it.
But this need not happen, for our goals and beliefs-in a word, our subjectivity-are not threatened by AI. As we
have seen, the philosophical implications of AI are the reverse of what they are commonly assumed to be:
properly understood, AI is not dehumanizing.

A practical corollary of this apparently abstract point is that we must not abandon our responsibility for evaluating,
and if necessary rejecting, the advice or conclusions of computer programs. Precisely because a program is
a symbolic representation of the world, rather than a part of the world objectively considered, it is in principle
open to question. A program functions in virtue of its data, its inferential rules, and its values (decision criteria),
each and every one of which may be inadequate in various ways. (Think of the example of the racist expert
system.) We take it for granted that human beings, including experts (perhaps especially experts), can be
mistaken or ill advised about any of these three aspects of thinking. We must equally take it for granted that
computer programs, which in any event are far less subtle and commonsensical than their programmers and
even their users, can be questioned too. If we ever forget that "It's true because the computer says so" is never
adequate justification, the social impact of AI will be horrendous indeed.
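The three questionable aspects Boden names (data, inferential rules, and values) can be seen separately in even the smallest program. The sketch below is hypothetical; the symptoms, weights, and threshold are invented for illustration, not taken from any real diagnostic system:

```python
# Hypothetical sketch of the text's three-way division: a program's
# conclusion rests on its data, its inferential rules, and its values
# (decision criteria), each of which may be inadequate and questioned.

data = {"fever": True, "cough": True, "rash": False}  # the data: may be wrong or incomplete

def score_flu(symptoms):
    """Inferential rule: a crude weighted-evidence score for flu."""
    weights = {"fever": 0.5, "cough": 0.3, "rash": -0.2}  # the rules: may be inadequate
    # Only symptoms actually reported as present contribute their weight.
    return sum(w for s, w in weights.items() if symptoms.get(s))

THRESHOLD = 0.6  # the value / decision criterion: a choice, not a fact

conclusion = "suggest flu" if score_flu(data) >= THRESHOLD else "inconclusive"
print(conclusion)
```

"It's true because the computer says so" would hide that this verdict rests on contestable data, contestable weights, and a contestable threshold; changing any one of the three can flip the conclusion.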

http://www.kurzweilai.net/the-age-of-intelligent-machines-the-social-impact-of-artificial-intelligence

Margaret A. Boden is Research Professor of Cognitive Science at the University of Sussex. She is a member of the
Academia Europaea, and a Fellow of the British Academy and of the American Association for Artificial Intelligence.
She holds degrees in medical sciences, philosophy, and psychology (including a Cambridge ScD and a Harvard
PhD), and three honorary Doctorates (from Sussex, Bristol, and the Open University). In the New Year Honours list
of 2002 she was awarded an OBE for services to cognitive science. Her writing has been translated into 20 foreign
languages, and she has given lectures and media interviews across North and South America, Europe, India, the
USSR, and the Pacific. Her latest books are The Creative Mind: Myths and Mechanisms (2nd edn., expanded,
Routledge: 2004), Mind As Machine: A History of Cognitive Science (Oxford University Press: 2006), and Creativity
and Art: Three Roads to Surprise (Oxford University Press: 2010). She has two children and four grandchildren,
and lives in Brighton.
It's time to take off your tinfoil hats: AI is safe for human consumption

The Golden Globe-nominated film The Imitation Game has helped to reignite the discussion around artificial
intelligence and its value to humanity. Pundits are warning of a disastrous future, but these opinions are
discounting the many ways in which AI could enhance our lives.

By Charles Ortiz
Posted January 15, 2015

One of the most captivating scenes from the recent Alan Turing biopic, The Imitation
Game, sees Benedict Cumberbatch as Turing defend his perspective on the idea of
machines being capable of thinking.

The vision.
Turing was one of the pioneers of computer science; his research presaged the
development of the field of artificial intelligence (AI). Since Turing's time, we have
seen great advances in AI. You may not realize it, but there are elements of AI
incorporated into many of our favorite devices. In fact, the device you're reading this
on likely has AI in some form built in, perhaps as a virtual personal assistant. The
virtual assistant is an exciting development in the field, allowing people to receive
help with day-to-day tasks or find useful information in specialized domains like
medicine.

The doomsday bunch.


Unfortunately, AI has drawn some rather extreme headlines recently, as the
conversation has shifted from the progress that we are seeing to a feared end result:
AI-based technology surpassing the levels of intelligence of humans and posing a
threat to our existence. Apart from the popularity of such doomsday scenarios in
science fiction, this outlook appears unfounded: there is currently no evidence to
suggest that anything like this would necessarily happen. Perhaps even more
importantly, we're getting rather ahead of ourselves with these sorts of predictions.
AI has indeed seen some encouraging and impressive progress over the last few
years, but we still have a long way to go before we achieve anything capable of the
scenarios that have been discussed.

AI as transformative technology.
In fact, why focus on such extreme scenarios when there are many alternatives that
would see a peaceful co-existence and productive collaboration between humans
and machines? These systems could, for example, become partners or teachers, or
perhaps even feel indebted to us, their creators.
Consider instead some of the promising futures that AI could enable. AI has the
potential to radically transform, in a positive way, the degree to which we can utilize
and process data and information in ways that people simply cannot. In addition,
simple everyday actions, such as interacting with the Internet of Things, that have
become overly complex because of arcane interfaces (e.g., setting a thermostat or
controlling a TV) can be radically simplified through natural language. As AI systems
mature, they will drive important advancements for society, in areas like healthcare,
education, the economy and many more. An AI system could help a doctor with a
diagnosis, serve as a virtual teacher with the wealth of Internet knowledge at its
fingertips, or be woven into the fabric of our daily lives, helping us with everything
from basic decision-making to driving our cars.

Where we are.
Although we are in the early stages of achieving behaviors and intelligence in line
with those of humans, recent advances have enabled the virtual assistants mentioned
earlier to understand us and interact with us through spoken language. As AI
technology improves, these assistants are also demonstrating proactive capabilities,
acquired through the identification of patterns in our behavior. And researchers,
both at Nuance and at other AI-dedicated labs around the world, are in the process
of driving not just new advancements, but new ways of assessing progress.

Making strides.
The Turing Test was long held as the benchmark for measuring AI. Since Turing's
time, however, researchers have uncovered a number of shortcomings with that test,
stemming from the underlying requirement that the program try to trick a person into
thinking that it is human. A good example of this was recently seen in a program that
was claimed to have passed the test by mimicking a 13-year-old boy. The validity of
that claim has been debated by many researchers, but in the meantime, Nuance is
exploring more suitable alternatives for assessing progress in AI through its
sponsorship of the Winograd Schema Challenge, an exciting new proposal for
gauging progress in the field through tests that involve answering multiple-choice
questions that require commonsense reasoning. (A typical Winograd schema asks
what a pronoun refers to in a sentence such as "The trophy doesn't fit in the
suitcase because it is too big," where the answer flips if "big" is changed to "small.")
Other methods are also being examined. The Association for the Advancement of
Artificial Intelligence (AAAI) has called for a summit in January to discuss AI
challenges and competitions aside from the Turing Test. Dubbed "Beyond the Turing
Test," this event will see leading researchers in the field of AI present new ideas that
could serve as more useful tests of progress in AI.
The recent focus on AI suggests an outcome that we will likely continue to debate.
More importantly, these conversations speak to the potential of this technology, a
potential that we remain committed to developing, understanding, and applying to
our daily lives.
http://whatsnext.nuance.com/in-the-labs/effects-of-artificial-intelligence-on-humanity/

Charles Ortiz
Charles Ortiz is Senior Principal Manager of the Artificial Intelligence and Reasoning
Group at the Nuance Natural Language and AI Laboratory in Sunnyvale, CA. Prior to
joining Nuance, he was the director of research in collaborative multi-agent systems
at the AI Center at SRI International. His research interests and contributions are in
multiagent systems (collaborative dialogue-structured assistants, collaborative work
environments, negotiation protocols, and logic-based BDI theories), knowledge
representation and reasoning (causation, counterfactuals, and commonsense
reasoning), and robotics (cognitive robotics, team-based robotics, and dialogue-based
human-robot interaction). He has approximately 20 years of
technical leadership and management experience in leading major
projects and setting strategic directions. He has collaborated
extensively with faculty and students at many academic institutions including
Harvard University, Bar-Ilan University, UC Berkeley, Columbia University, University
of Southern California, Vassar College, and Carnegie Mellon University. He holds a
S.B. in Physics from MIT, an M.S. in Computer Science from Columbia University,
and a Ph.D. in Computer and Information Science from the University of
Pennsylvania. Following his PhD research, he was a Postdoctoral Research Fellow
at Harvard University. He has taught courses at Harvard and at UC Berkeley (as an
Adjunct Professor) and has also presented tutorials at technical conferences
(IJCAI 1999 and 2005, AAAI 2002 and 2004, AAMAS 2002-2004).