
Opinion

Andrzej Nowak, Paul Lukowicz, and Paweł Horodecki

Assessing Artificial Intelligence for Humanity
Will AI be Our Biggest Ever Advance — or the Biggest Threat?

Digital Object Identifier 10.1109/MTS.2018.2876105
Date of publication: 30 November 2018

Recent rapid advancements in Artificial Intelligence (AI) are arguably the most important dimension of humanity's progress to date. As members of the human race, that is, homo sapiens, we are defined by our capacity for cognition. Until now, humans were the only species capable of higher cognitive functions. But today AI has advanced to a stage where on many cognition-related tasks it can match and even surpass the performance of humans. Examples include not only AI's spectacular successes in winning Go [1], chess [2], and other board games against humans, and in surpassing humans on fully defined world puzzles; AI is also now achieving extremely high efficiency in practical applications such as speech and object recognition, self-driving cars, intelligent tutoring systems, efficient decision support systems, and the capacity to detect patterns in Big Data and to construct accurate models of social behavior.
Thus, for the first time in history, we must ask ourselves: "has our monopoly on intelligence, however defined [3], been challenged?"

The development of AI is vigorously supported by industry. AI can radically cut costs for industry and corporations by reducing paid employment of humans. AI can also enhance the quality of products and services, allowing the scaling up of activities, and assisting in developing new products and services. Using AI, humans can potentially, on the one hand, solve some of our most challenging practical problems, and at the same time provide labor to replace humans at boring jobs. On the other hand, AI can reach levels of intelligence never before attainable for humanity.

Concurrent to AI's development and support from industry, there also recently has been much discussion on whether Artificial Intelligence could perhaps become the seed of the destruction of humanity [4], [5]. The potential danger of AI became one of the key topics of public discourse with the publication in January 2015 of a letter initiated by Stephen Hawking and Elon Musk, and signed by many prominent AI researchers [6]. In the discourse Musk called AI research "summoning the demon," and Hawking warned that the development of AI could "spell the end of the human race."
The apparent question is then, which of these visions is correct – if any? Should the development of AI be supported by all means as the biggest promise to humanity? Or should we try to stop it because it represents the ultimate danger? Our answer is that, as often in life, it's more about shades of grey than about black and white. The question is not whether we should develop AI or not, but rather what sort of AI we should develop.

What all the "dark" scenarios have in common is a view that AI is being developed independently of human cognition and as an alternative to it. In these scenarios, AI develops its own highly efficient systems of knowledge and reasoning that are largely incompatible with the human way of thinking and acting. Thus AI cannot really support humans and collaborate with them. Instead, because of their efficiency, AI systems gradually replace humans in more and more tasks. As a consequence, we see increased loss of human control over the world and loss of human autonomy. In the "dark" scenarios AI actively turns against humans. Although these scenarios are quite diverse, their end point is similar: humans lose their leading role as intelligent beings, lose control of the world, lose their freedom and, in some of these scenarios, the physical existence of humanity is threatened.

In the various "grey" scenarios there is no direct action by AI to harm humans. Instead there is a gradual loss of autonomy and control resulting from more and more tasks and decisions being delegated to AI systems. A particularly dangerous aspect of such scenarios is "chaos like" emergent phenomena resulting from unforeseeable interactions between AI systems devoted to different tasks. Thus, while each AI may actually be able to perform its own tasks better than a human (at least from a short-term perspective), without proper coordination and without the ability to consider the big picture, the sum of the actions of all AI may lead to negative if not catastrophic long-term consequences.

The general concept is something that anyone who has used car navigation to bypass a traffic jam knows. While each individual navigation system makes a recommendation that seems to make sense from an individual point of view, the sum of decisions made by many navigation systems may lead to the creation of new traffic jams on the alternative routes.

Another hotly debated point is the question of whether AI can have the ability to replicate human creativity, intuition, and inventiveness, which are crucial in dealing with fundamentally new situations.

Positive AI Development Scenario
We propose that the development of AI can take a radically different route, which can be termed Human-Centered AI. Human-Centered AI focuses on collaborating with humans, enhancing human capabilities, and empowering humans to better achieve their goals. In other words, the well-being of humans is the superordinate goal of the development of AI.

In a positive AI development scenario, AI can be the biggest accomplishment in the cognitive development of the human race. AI sensing can dramatically extend the information acquisition capacity of humans. AI can store and integrate the knowledge of the human race. Machine-learning techniques can generate knowledge previously inaccessible to humans. AI can interconnect humans in new ways and optimize the functioning of techno-human systems. AI can collaborate with humans and support them in the service of human goals. Because in this scenario AI supports humans in making more informed decisions rather than making decisions for humans, the results will combine the strengths of AI with human strengths such as creativity, innovation, and intuition (qualities that today we do not know how to replicate in AI systems).

Thus, in the positive scenario, the results will go beyond the direct extrapolation of previous experience and allow fundamental changes. It will therefore be possible to develop solutions to radically new situations. This scenario will happen if AI is used to enhance human intelligence, rather than to replace it. In other words, the positive scenario will happen if AI is used to acquire, generate, integrate, access, and process the knowledge of the human race, rather than to develop an alternative system of knowledge and its representation that is inaccessible to humans.

In this article we reflect on possible scenarios of AI development, and how they are related to the type of AI that is being developed. That is, is the AI being built a "Function-Oriented AI" or a "Human-Centered AI"? We consider the differences between these two approaches to AI development and outcomes. We also argue that although, in the long run, Human-Centered AI is likely to be superior to Function-Oriented AI, Human-Centered AI needs more support in its early stages of development in order to have a fair chance of prevailing in the competition with Function-Oriented AI.
Creation of a Vicious Super Mind
The most catastrophic "dark" scenario envisioning AI apocalypse is the creation of a super-intelligence surpassing that of humans that will rapidly advance by accessing Big Data, super strong learning algorithms, and positive feedback loops created by self-improving AI architecture. As soon as AI significantly surpasses human intelligence it becomes a natural competitor to humanity. The natural features of advanced AI systems, according to this scenario, likely lead to the realization of this dark outcome. The principle of self-preservation would lead to the development of defensive strategies on the part of the AI. These AI defensive strategies, such as hiding itself, replication, and resource maximization, would result in competitive behaviors and rapid physical expansion [4].

If AI can take control of military applications, for example automated weapon systems, it would be in a position to wipe out humanity in its bid for self-preservation and expansion. In a scenario depicted by the movie "Her" and discussed in popular books [7], the creation of a super intelligence would result in the emergence of a super powerful mind equipped with awareness and its own goals that are likely contradictory to the goals of humanity. While this spectacular scenario has captivated public attention, from a technological point of view there is no plausible path from today's state of the art to the vicious super-mind. Today's AIs are all Turing machines; learning is mostly about complex statistical modeling and optimization on huge data sets, while reasoning is about information representation, efficient search, and clever application of complex rule systems. Clearly, we cannot prove beyond doubt that sufficiently complex statistical analysis of sufficiently large data sets will not lead to the emergence of consciousness. But there is no solid scientific evidence indicating that such an emergence of consciousness could happen, or how it could do so.

Emergence of a Global Socio-Technological Quasi-Mind
While not completely ruling out the likelihood of this futuristic scenario of the creation of a vicious super mind, we argue that the danger of AI may come in a somewhat different, more "grey" form. We also argue that this scenario is likely to be already occurring. In this "grey" scenario the danger is not an apocalypse of physical elimination of the human race by an alternative, superior, artificial, self-aware mind. Rather the danger lies in the gradual disappearance of what makes us human, and of what makes our existence meaningful. We are talking here about our human ability to make choices for the realization of our needs and values, and our capacity to be subjects of our existence in pursuit of self-realization. Instead, humans may gradually become passive elements of an emerging global socio-technical system — a system composed of machines, algorithms, sensors and actuators, AI programs, and humans interacting in the globally present Internet, an Internet that is ever-present due to mobile technologies and ambient intelligence.

In this grey perspective, the critical question concerns the degree to which individuals, who are elements of this global system, have freedom of choice and action, and to what degree they are "enslaved" by the system. Are the internal processes of a human, in effect, dictated by the arising global socio-technical system, its algorithms, and its emergent processes and goals (or, more accurately, quasi-goals, understood as standards of regulation)?

In this negative scenario, humans are losing their freedom and becoming elements that process information in the service of the global techno-social system. The essence of this question is: what are the real chances for humans to break out of the choices dictated by the system of which they are an element? What are their chances to retain the capacity for independent, critical thinking? Can they retain emotions and feelings dictated by their internal processes, rather than those dictated by information tailored to manipulate their emotions, or by autonomous decision-making?

To what degree do these processes serve humans' true needs, values, and goals versus those of the techno-social global super-computer? Another dimension of this question is "to what degree do interactions and contacts between individuals retain a human character, characterized by the intrinsic value of the inner experiences of other humans?" The practical question is "how can humans retain their autonomy and free will amid the emergence of the global techno-social system?"
Some elements limiting human choices as a result of interaction with the global techno-social system are already visible. For example, bias introduced in search algorithms to match information collected from past searches on interests and views reinforces existing views and patterns of individual decisions. This limits the capacity for innovative choice, leading to increasingly polarized opinions and behavior [8]. Moreover, recommendation algorithms based on deep learning that follow viewers' interests likely lead to a distorted view of reality, where content that is divisive, sensational, and conspiratorial may promote fake news over objective journalism [9].

These choices and behaviors of individuals are increasingly controlled by sophisticated social influence mechanisms like micro-targeting [10]. The limitation of autonomy may be the intended result of marketing efforts or political campaigns. It may also, however, be an emergent property of various algorithms interacting with each other and with other humans. Regardless of the source, the techno-social system may go in directions that do not serve an individual's goals, or those of the wider society.

Scenarios Leading to the Emergence of an Uncontrolled Techno-Social Quasi Mind
The question is, what are the scenarios leading to the emergence of a new AI-like meta-level system created by the interaction of nature, society, and technology? What is the likelihood of the occurrence of these scenarios, and will we recognize the rise of such a meta-system?

In the most likely scenario, the super system will be created by the interaction of human cognition and computer information processing, which combines sophisticated information processing and extremely efficient AI learning algorithms on one side, with the randomness inherent in human information processing on the other. The emerging information processing structure will likely be difficult, or even impossible, to design or even to be understood by humans. The inconsistency and contradiction inherent in human decisions and actions are likely to amplify the complexity of the emerging system.

Such a socio-technological system would not need to follow strict rules of rationality or to strictly obey well-known economic rules of self-improving systems [4]. In particular, it might exhibit deviations from the optimization of the consumption of resources, which could potentially be catastrophically dangerous to nature and the natural environment. On the other hand, having as its elements AI components intentionally designed to be capable of formulating distant goals, such a system is likely to have emergent functions or distant goals that may not even be known to society, at least not until severe consequences are visible.

With the rise and self-organization of such a quasi-mind system, the AI-like system is likely to increase its share of the control of information processing and decision making at the expense of humans. One reason is that humans have limited resources for processing capacity, attention, memory, etc. Moreover, humans get tired of making decisions. This has been described as decision fatigue [11] in economics and psychology [12], but also as ego depletion [13]. Individuals are thus expected to have a drastically decreased capacity to be the source of independent influence on a socio-technical network. With growth of the size of the network, the number of interactions required to maintain any significant level of control of the network can easily exceed human capacity for interactions [14] and choice.

We believe that this can be formally proven or demonstrated using Bayesian (causal) networks (cf. [15]), or complex systems and chaos theory, or quantum analogs. In such networks, beyond some threshold number of connections, collective phenomena in the network would emerge, and the network behavior would become, in a sense, uncontrollable to the nodes. What is even more important is that it is possible that, without being aware of it, humans may just function as procedures or subroutines of the larger system, being part of the higher-level computational process of the socio-computational system. In this role humans may generate new goals without even being aware of it, analogous to "games with a purpose" [16], where a byproduct of playing the computer game is supposed to be the solution of some problem. The major difference is that the user would have only a little (if any) knowledge of what the game is that he or she is taking part in.
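The threshold claim can be made concrete with a toy simulation. The sketch below is our own illustration rather than anything from the article: the network model (an Erdos-Renyi-style random graph) and every parameter value are assumptions chosen for the example. It shows that once the average number of connections per node passes a critical value, most nodes are absorbed into one giant cluster, a collective structure that no single node controls.

```python
# Illustrative sketch only (not from the article): a random-graph simulation of
# the threshold effect invoked above. Beyond a critical average degree, a single
# giant connected cluster emerges and absorbs most nodes.
import random

def largest_cluster_fraction(n: int, avg_degree: float) -> float:
    """Build an Erdos-Renyi-style random graph with the given mean degree and
    return the fraction of nodes in its largest connected component."""
    p = avg_degree / (n - 1)          # edge probability yielding the target mean degree
    parent = list(range(n))           # union-find forest over the n nodes

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i in range(n):                # sample each possible edge once
        for j in range(i + 1, n):
            if random.random() < p:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj   # merge the two clusters

    sizes = {}
    for node in range(n):
        root = find(node)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n

if __name__ == "__main__":
    random.seed(0)
    for k in (0.5, 0.9, 1.1, 2.0, 4.0):   # average connections per node
        share = largest_cluster_fraction(n=2000, avg_degree=k)
        print(f"avg. degree {k}: largest cluster spans {share:.0%} of nodes")
```

Running it shows the largest cluster jumping from a few percent of the nodes to nearly all of them as the average degree passes one, which is the kind of qualitative transition, invisible from any single node's vantage point, that the authors have in mind.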
It is also not clear how human intelligence might evolve in this scenario. The impact of such a techno-social system might be that people would lose their independence, and also experience a deterioration of intelligence. Or it might be that at least some aspects of human intelligence would increase, for example as an element of subroutines that would suit the AI-system's distant goals. It is more likely that, in general, humans' long-term memory function might decrease (e.g., an analysis of Google books suggests that, as a society and culture, we are forgetting the past faster [17]).

Let us stress that, even if a standard design AI were apparently relatively well controlled by humans (in the sense that it would not attack humanity directly, via spectacular science-fiction-type extinction, etc.), the emergence of some AI-type human-network system might still occur as a by-product (e.g., via the Internet of Things) that would benefit from human intelligence and partial randomness and/or free will as a resource. This might formulate the distant quasi-goals that would influence the Nature-Society-Technology triangle.
The final quasi-goals or output states of the process may be completely unpredictable due to 1) nonlinearity of the process, 2) computer power becoming different in some way from what we currently understand computing power to be, or 3) the human component being involved in a nonstandard way. For example, what if humans that were originally the designers of the system's algorithms then have their behaviors "designed" in a sophisticated back-reaction? The possible loss of human independence might occur gradually, or through a specific transition, beyond which there is a point of no return. This might involve an energetic breakdown, irreversible genotype changes, evolutionary changes in individual or social behavior that would lead to paradoxical losses, some forms of addiction, or something else.

The fundamental question is how to recognize that the emergence of such a system is happening. By its very definition, if the capacity of the system goes beyond that of human beings, then the distant goals should be unpredictable, and only visible a posteriori to humans. However, there might be some signal-type behaviors. An era of computational activity of an AI system might, in addition to convergence to distant goals, lead to some new by-product regularities or repeatable phenomena in its functioning.

What would differentiate activities of the artificial systems from other natural regularities coming from direct human activities (e.g., new highways)? First, AI-enhanced mobile communications, social networks, and digital media increase the density and strength of connectivity between humans. Such densely connected systems tend to be less stable and more prone to cascading effects [18]. Second, the speed with which things happen when connected AI systems are involved makes it difficult for humans to react to problems. Finally, black-box-like machine learning systems (e.g., the self-taught AlphaGo Zero program) produce solutions whose bases are mysterious — i.e., where it is very difficult to discern how it was possible to find the solution.

Another signature of the development of the far distant goal or meta-structure might be visible at the level of resources — especially at higher energy consumption (since it is always necessary to keep complex systems far from equilibrium, and a possible distant quasi-goal might have extra complexity). This, however, applies to the complex emergent system, but does not need to apply to the computation process (see the concept of reversible computation [19]). However, there is a chance then that the reversible character may be tracked without referring to the energy resource (with time as a natural resource). It may be that the activity of the AI-type socio-technological system would gradually lead to the loss of free will, by narrowing the perspective so that society would see some processes (behaviors) as unavoidable, before they actually were so.

Decline of Cognitive Capacities
The rapid development of AI that replaces rather than augments human intelligence can also dramatically diminish the capacity for cognition of the human race. In this scenario, human information processing is delegated to AI, and humans just get answers. They don't gain understanding of the knowledge and processing rules that led to the solutions. Deep learning algorithms [20] provide an example of AI systems that, if provided with enough learning examples, processing power, and time, can learn almost any pattern, classify objects, predict next events, and make decisions that are on average better than those made by humans. When the tasks are routine, statistics show the superiority of artificial neural networks over human performance. Lower costs and statistically better performance of neural networks over human experts raise the temptation to replace human judgment and decision-making with neural networks not only for simple tasks, but also for complex decision making and judgment tasks such as employment decisions and political and business strategic choices.

Although replacing human cognition by AI may, in some instances, have spectacular short-term advantages, it can be disastrous in the long term. Today's machine learning systems create abstract representations that are alien and mostly inaccessible to humans. This type of abstract knowledge cannot be blended with the existing knowledge of humans. As a consequence, while artificial neural networks can replace humans in routine tasks, they do not produce knowledge that can be used by humans, and they do not add to the knowledge possessed by the human race. On the contrary, AI that replaces humans presents a grave danger to the cognitive skills of the human race. It is a most threatening factor that could cause rapid decay and decline of human cognitive skills.
Any skill that is not used decays. As an increasing range of tasks is delegated to AI, humans will lose the knowledge and skills to perform these tasks and will become helpless without AI. This, in an increasing positive feedback loop, will cause an increasing tendency to delegate all the difficult tasks to AI. Moreover, since humans cannot understand the bases for the decisions made by AI, they will increasingly lose control of the processing of information. Trusting AI will become the only choice — without the possibility of checking if the AI decisions are beneficial for humans. This can become disastrous in several ways. Most importantly, it can lead to the decline of human competencies and cognitive skills. Skills that are not used can diminish to a catastrophic extent. By delegating information processing to systems that use rules that humans do not understand, humanity will lose a significant degree of the cognitive competencies that have given us the adjective "sapiens." Delegating information processing also implies that when novel creative solutions are required due to changed conditions or new opportunities or threats, humanity may be helpless, because algorithms trained on existing data cannot cope with radically novel situations. So, while most of the time AI may outcompete humans, in the most critical situations it will fail, with possibly disastrous consequences.

In summary, our species, homo sapiens, is defined by our capacity for cognition. Rapid development of AI can change this capacity in a most profound way, for the better or worse. In one scenario, by enhancing human cognitive capacity AI can elevate humanity to an unprecedented, or even unforeseen, level of perception, knowledge, understanding, and reasoning. In another, it can take away what makes us human by effectively diminishing our cognitive capacity to acquire useful information, and by diminishing our knowledge and our capacity to reason. It can wipe out our understanding of the world around us.

The direction in which the development of AI will take us depends on what kind of AI we develop. The rapid development of AI can either make us more human, or, alternatively, can become the biggest existential threat to humanity [21]. This threat is not only in the sense of physically eliminating us from the face of the Earth, as some scenarios predict, but in a much less spectacular way: by reducing our cognitive capacity and, in effect, taking away our humanity, the adjective sapiens that defines our species. While delegating human reasoning and decision making to AI may trigger different negative scenarios, Human-Centered AI is likely to result in the transition to a higher level in the development of the intelligence of the human race. The critical question then may be how to develop AI, and the global super-net, so as to leave space for free will, free choice, and the self-realization of individual humans.

Human-Centered AI
The common element in all the negative scenarios is that AI develops its own knowledge system that is inaccessible to humans. AI develops decision rules that are opaque to human understanding, and AI is focused on replacing rather than supporting humans.

In contrast, Human-Centered AI aims to interface with and extend human capabilities [21], enhance human decision making, and serve human goals on both the individual and societal level. Human-Centered AI is designed along ethical and value-oriented principles that are not an optional "add-on" but a "by design" feature. The concept of Human-Centered AI envisions future AI technology that will synergistically work with humans for the benefit of humans and human society:

■ Instead of replacing humans, we need to focus on enhancing human capabilities, allowing people to improve their own performance and successfully handle more complex tasks.
■ Instead of prescriptive systems telling people what to do, we need to focus on systems that empower humans to make more informed decisions and help them harness and channel their creativity.
■ Instead of creating unpredictable "black box" systems, we need to focus on explainable, transparent, validated, and thus trustworthy systems optimally supporting both individuals and society as a whole in dealing with the increasing complexity of a networked, globalized world.
■ We need to include values, ethics, and privacy as core design considerations in all of our AI systems and applications.

For the above vision to become reality, a large-scale, long-term research effort is needed that goes from the underlying fundamental unsolved problems of AI, through specific novel technologies in different applied AI domains, to making broad impact in relevant socio-economic areas. Such an effort must bring together three main communities: research, industry, and societal stakeholders. We are currently pursuing this vision in the HumanE AI initiative (www.humane-ai.eu).
What does this mean in concrete terms? Consider a judge, doctor, policy maker, or manager facing a complex decision that has to be made on the basis of a large, noisy body of data and that involves a variety of aspects that may not all be within the core competence of the decision-maker. Since such decisions often have grave personal and/or social consequences and include complex ethical and emotional aspects, a complete replacement of human decision makers by AI is clearly undesirable, even if it were feasible. Existing decision support systems are mostly about guiding a person through a predefined decision tree, which means that while the decision may formally be taken by the human, it is often largely pre-determined by the system. Data mining and analytics systems leave much more freedom to the user, at the price of a potential information overload.

Human-Centered AI should be able to truly debate problems with the human user. A Human-Centered AI needs to understand human lines of reasoning, and relate to human motivations and emotions and to the moral assumptions and implications in that reasoning. The AI needs to help the human partners, challenge their assumptions, and provide and explain alternative ways of seeing a problem (given the AI's particular analytical abilities and data access). Only an AI system that is capable of such a rich and reflective discussion with a human can optimally support informed decision-making while leaving sufficient space for human intuition, inventiveness, and creativity.

Implementing such systems is related to two well-known fundamental "Grand Challenges" of AI. The first is the ability to build and maintain comprehensive, differentiated world models. A key aspect of human intelligence is a world model based on a huge amount of experience. The human world model is based on complex, often ambiguous semantics, and on a dense web of associations that allow a variety of levels of implicit communication (including irony and figurative speech). These factors are the basis for human creativity and inventiveness. Although achievements such as IBM Watson's success at Jeopardy! or the Project Debater system are great advances towards more advanced AI world models, we are still very far from the comprehensiveness, richness, and subtlety of human world understanding.

The second, related challenge is the explainability of machine learning models. Many recent AI success stories are based on the application of complex statistical analysis to massive amounts of training data. As powerful as such methods have proven to be, they have the disadvantage of being very hard (often impossible) for humans to understand and interpret, making AI-based decisions difficult for humans to accept [22]. This goes against the vision of AI systems that can debate with humans and synergistically work together with people, including learning from experts.

As an example of the differences between the two approaches, in the Function-Oriented AI approach, a company may use a deep learning algorithm to make personnel decisions. A deep learning network, based on the past history of productivity of workers with different characteristics, would develop its own algorithm that would be encoded in the connection strengths between nodes of the network. This algorithm would not be accessible to humans. If the company adopts this algorithm, it would make personnel decisions in a way that no one in the company understands. Moreover, because this algorithm would reflect only past experiences, it would be likely to fail if the business environment changes.

In contrast, Human-Centered AI would analyze a huge amount of data about worker productivity, and would reveal the complex patterns underlying that productivity to managers. This knowledge could be used to formulate rules that would underlie hiring and firing decisions in the company. These rules could be revealed to workers, and if needed implemented into software for automated decision making. The decision-making software could be changed by managers proactively in anticipation of planned changes in the company's strategy.

As another example, an AI-based recommendation system, based on a deep-learning algorithm that has been trained to maximize the time users spend on a social media site, will recommend content to users using rules that may not be understood by anyone. As evidenced by prior experience (e.g., [9]), such an algorithm is likely to develop rules promoting highly distorted content. The distorted content is then disruptive to constructive social processes, which violates the values of society. A high number of human moderators will be needed to neutralize the bias introduced by the algorithm.

In contrast, Human-Centered AI would use sophisticated algorithms to discover which content is most attractive. These findings would add to already existing knowledge. While constructing algorithms for the recommendation systems, Human-Centered AI would also take into account societally accepted values such as trustworthiness of the information and avoidance of hate.
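The contrast between the two recommendation approaches can be sketched in a few lines of code. This is our own hedged illustration, not a system described in the article; the items, the attribute scores, and the weighting are all invented for the example.

```python
# Illustrative sketch only: a Function-Oriented ranker that maximizes predicted
# engagement versus a Human-Centered ranker whose objective also contains
# explicit, human-readable value terms. All data and weights are invented.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # e.g., expected watch time (assumed given)
    trustworthiness: float       # 0..1, assumed to come from fact-checking signals
    divisiveness: float          # 0..1, assumed to come from content analysis

CATALOG = [
    Item("Calm explainer on local policy",  0.40, 0.95, 0.10),
    Item("Outrage-bait conspiracy clip",    0.90, 0.05, 0.95),
    Item("Balanced investigative report",   0.55, 0.90, 0.30),
    Item("Sensational celebrity rumor",     0.80, 0.20, 0.70),
]

def engagement_only_rank(items):
    """Function-Oriented objective: maximize time on site and nothing else."""
    return sorted(items, key=lambda it: it.predicted_engagement, reverse=True)

def value_aware_rank(items, value_weight: float = 0.6):
    """Human-Centered objective: engagement traded off against explicit value
    terms (trustworthiness rewarded, divisiveness penalized)."""
    def score(it: Item) -> float:
        return ((1 - value_weight) * it.predicted_engagement
                + value_weight * (it.trustworthiness - it.divisiveness))
    return sorted(items, key=score, reverse=True)

if __name__ == "__main__":
    print("Engagement-only top pick:", engagement_only_rank(CATALOG)[0].title)
    print("Value-aware top pick:    ", value_aware_rank(CATALOG)[0].title)
```

The particular weights are beside the point; what matters is that the value terms sit explicitly in the objective, where people can inspect and change them, instead of being buried in learned connection strengths.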
Dissatisfaction with AI that operates on the basis of knowledge that cannot be understood by humans has resulted in a strong new movement towards the concept of "explainable AI" — or XAI. The XAI concept has gained popularity in science [23], business [24], [25], and military circles [26], [27] working with AI. The goal of this trend is to decipher the knowledge and decision algorithms used by machine learning applications and translate them into rules and knowledge accessible to humans. Understanding the rules of AI increases trust in, and accountability for, the control and safety of AI applications based on machine learning [24]. Because most machine learning algorithms are inherently complex, however, full explanation of the algorithm may be impossible. The goal of the approach may then be to offer a very simplified explanation to humans (for example, to name the most important factor in the decision), rather than to extract the maximal knowledge of the algorithm used by the AI, because people prefer simpler explanations [28]. Although the approach of XAI gives humans some control, it usually assumes that AI systems can make better decisions than humans; thus it tends to delegate decisions to AI.
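As a hedged illustration of that "name the most important factor" style of simplified explanation (our sketch, not a tool cited in the article; the data and feature names are synthetic), permutation importance on a toy classifier can play exactly this role:

```python
# Illustrative sketch only: a "simplified explanation" that names the single
# factor whose shuffling most degrades a trained model's predictions.
# The data set and feature names are synthetic and chosen for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["tenure_years", "training_hours", "team_size"]  # hypothetical

# Synthetic "worker productivity" data: the label depends mostly on training_hours.
X = rng.normal(size=(500, 3))
y = (X[:, 1] + 0.2 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much accuracy drops when one feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
top = int(np.argmax(result.importances_mean))
print(f"Most important factor in the decision: {feature_names[top]}")
```

As the article stresses, the single named factor is a simplification offered to the human; it is not the decision rule the model actually applies.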
Another key aspect of Human-Centered AI is the ability to act and interact in complex social contexts [29]. As an example, consider a well-known, seemingly trivial problem: automatically deciding whether, given a certain setting and a specific caller, it is appropriate for a mobile phone to ring or not. At the core of the problem is the need for an in-depth understanding of the fine points of the social context in which the user is currently situated and the ability to anticipate the potential significance of taking or delaying the call in the framework of the user's life. Furthermore, the user's current activity, mood, and priorities must be taken into account. In the same meeting it may or may not be appropriate to ring depending on who is currently present, who is talking, and how the meeting has evolved. Thus, if the user is about to successfully convince his bosses of something he deeply cares about, then taking a call is not helpful. On the other hand, if he is going to lose the argument anyway, then the call may be more important. A call from the same person may have a very different priority depending on recent interactions and current expectations or intentions towards the caller. While research areas such as context-aware computing [29], affective computing [30], or social computing [31] have considered various aspects of the problem, acting and interacting within complex social settings and taking into account the full complexity of human feelings and decision making processes is another unsolved AI Grand Challenge.

Human-Centered AI could also enhance the functioning of human groups and socio-technical systems. This could be achieved by facilitating interactions between humans and technology, facilitating more productive social interactions, helping to find more trustworthy sources of information, helping to delegate information processing and decision making to the most qualified individuals, and providing on-the-go knowledge that facilitates group productivity [33]. In contrast to Function-Oriented AI, the rules governing these processes would be accessible to and modifiable by humans [32].

A final consideration in the design of Human-Centered AI systems is the integration of ethical values and social norms [34]. As AI systems influence more and more areas of our lives, their actions must be aligned well with the values and expectations of both users and society in general, to be acceptable and accepted. This is a problem that goes beyond a mere technical integration and representation of ethical concerns and social norms within an AI system. It involves enabling the system to perform often inherently ambiguous ethical reasoning (which by itself is an open research problem). In addition, a well-informed discussion is needed among all stakeholders — researchers, industry, and the wider society — about the relevant ethical values and norms that AI systems should follow and under what conditions. Whatever values end up embedded into AI systems, it is essential that the design decisions about what is included be explicit and visible. People should be able to inquire into and understand the underlying values that an AI system is optimized for.

Humane AI Project
In summary, the concept of Human-Centered AI envisions future AI technology that will synergistically work with humans for the benefit of humans and human society, focusing on enhancing and empowering humans rather than replacing and controlling them. Core concerns are accountability, explainability, appropriate interaction concepts, and the inclusion of values, ethics, and privacy as core design considerations.

For the above vision to become reality, a large-scale, long-term research effort is needed that goes from the underlying fundamental unsolved problems of AI, through specific novel technologies in different applied AI domains, to making broad impact in relevant socio-economic areas. Such an effort must bring together three main communities: research, industry, and societal stakeholders. We are currently pursuing this vision in the HumanE AI initiative (www.humane-ai.eu) and the European CLAIRE (https://claire-ai.org/) network of AI laboratories.

How the development of AI affects the cognitive capacities of humanity will depend on which route humanity adopts for the development of AI. If AI develops in a way in which its functioning in higher cognitive tasks is based on knowledge inaccessible to humans, and the main goal of AI is to replace humans in tasks requiring complex cognition, then the consequences may be disastrous. If, however, AI takes the Human-Centered approach, if AI champions the concept that its main goal is to extend human cognitive capacities and to generate knowledge accessible to humans (serving the goals as defined by Human-Centered AI) — then the development of AI may be the most significant achievement in the evolution of humanity.

Acknowledgment
This work was supported by funds from the Polish National Science Center (project no. DEC-2011/02/A/HS6/00231).

Author Information
Andrzej Nowak is the Director of the Center for Complex Systems in The Robert B. Zajonc Institute for Social Studies, and Professor at the Department of Psychology, University of Warsaw, Warsaw, Poland; email: andrzejn232@gmail.com.

Paul Lukowicz is Full Professor of AI at the Technical University of Kaiserslautern in Germany, where he heads the Embedded Intelligence group at the Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI); email: Paul.Lukowicz@dfki.de.

Paweł Horodecki is Professor at the University of Gdańsk, International Centre for Theory of Quantum Information Technologies, and at Gdańsk University of Technology, Faculty of Applied Physics and Mathematics; he is also associated with the National Quantum Information Center of Gdańsk, Poland; email: pawel.horodecki@pg.edu.pl.

References
[1] C. Koch, "How the computer beat the Go master," Scientific American, vol. 19, 2016.
[2] F.-H. Hsu, Behind Deep Blue: Building the Computer that Defeated the World Chess Champion. Princeton, NJ: Princeton Univ. Press, 2004.
[3] A.M. Turing, "Computing machinery and intelligence," Mind, vol. 59, no. 236, pp. 433–460, 1950.
[4] S.M. Omohundro, "The nature of self-improving artificial intelligence," presented at the Singularity Summit, 2007.
[5] S. Hawking, S. Russell, M. Tegmark, and F. Wilczek, "Transcendence looks at the implications of artificial intelligence – But are we taking AI seriously enough?," The Independent, May 1, 2014; https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html.
[6] Future of Life Institute, "Research priorities for robust and beneficial artificial intelligence"; https://futureoflife.org/ai-open-letter, accessed Oct. 8, 2018.
[7] J. Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era. Macmillan, 2013.
[8] R. Epstein and R.E. Robertson, "The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections," Proc. National Academy of Sciences, vol. 112, no. 33, pp. E4512–E4521, 2015.
[9] P. Lewis, "How YouTube's algorithm distorts truth," The Guardian, 2018; https://www.theguardian.com/technology/2018/feb/02/how-youtubes-algorithm-distorts-truth.
[10] G. Farnadi, G. Sitaraman, S. Sushmita, F. Celli, M. Kosinski, et al., "Computational personality recognition in social media," User Modeling and User-Adapted Interaction, vol. 26, no. 2–3, pp. 109–142, 2016.
[11] M.A. Boksem and M. Tops, "Mental fatigue: Costs and benefits," Brain Research Reviews, vol. 59, no. 1, pp. 125–139, 2008.
[12] E. Polman and K.D. Vohs, "Decision fatigue, choosing for others, and self-construal," Social Psychological and Personality Science, vol. 7, no. 5, pp. 471–478, 2016.
[13] R.F. Baumeister, E. Bratslavsky, M. Muraven, and D.M. Tice, "Ego depletion: Is the active self a limited resource?," J. Personality and Social Psychology, vol. 74, no. 5, p. 1252, 1998.
[14] R.I.M. Dunbar, "Neocortex size as a constraint on group size in primates," J. Human Evolution, vol. 22, p. 469, 1992.
[15] R.M. Shiffrin, "Drawing causal inference from Big Data," PNAS, vol. 113, no. 27, pp. 7308–7309, 2016.
[16] L. von Ahn and L. Dabbish, "Designing games with a purpose," Commun. ACM, 2008; http://www.cs.cmu.edu/~biglou/GWAP_CACM.pdf.
[17] J.B. Michel et al., "Quantitative analysis of culture using millions of digitized books," Science, 2011.
[18] D. Helbing, "Globally networked risks and how to respond," Nature, vol. 497, no. 7447, p. 51, 2013.
[19] C.H. Bennett, "The thermodynamics of computation — A review," Int. J. Theoretical Physics, vol. 21, no. 12, pp. 905–940, 1982.
[20] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (Adaptive Computation and Machine Learning series). Cambridge, MA: MIT Press, 2016.
[21] "Elon Musk: Artificial intelligence is our biggest existential threat," The Guardian, Oct. 27, 2014; https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat.
[22] P. Lukowicz, S. Pentland, and A. Ferscha, "From context awareness to socially aware computing," IEEE Pervasive Computing, vol. 11, no. 1, pp. 32–41, 2012.
[23] B. Lepri, N. Oliver, E. Letouzé, et al., "Fair, transparent, and accountable algorithmic decision-making processes," Philosophy & Technology, 2017; https://doi.org/10.1007/s13347-017-0279-x.
[24] T. Miller, P. Howe, and L. Sonenberg, "Explainable AI: Beware of inmates running the asylum," in Proc. IJCAI-17 Workshop on Explainable AI (XAI), Aug. 20, 2017; http://www.intelligentrobots.org/files/IJCAI2017/IJCAI-17_XAI_WS_Proceedings.pdf#page=36.
[25] "Explainable AI," PwC UK, 2018; https://www.pwc.co.uk/services/audit-assurance/risk-assurance/services/technology-risk/technology-risk-insights/explainable-ai.html.
[26] "The challenges and opportunities of explainable AI," Intel AI, Jan. 12, 2018; https://ai.intel.com/the-challenges-and-opportunities-of-explainable-ai/.
[27] D. Gunning, "Explainable Artificial Intelligence (XAI)," DARPA; https://www.darpa.mil/program/explainable-artificial-intelligence, accessed Nov. 2018.
[28] T. Lombrozo, "Simplicity and probability in causal explanation," Cognitive Psychology, vol. 55, no. 3, pp. 232–257, 2007.
[29] B. Schilit, N. Adams, and R. Want, "Context-aware computing applications," in Proc. 1994 Workshop on Mobile Computing Systems and Applications, pp. 85–90. IEEE, 1994.
[30] C.L. Lisetti, "Affective computing," Pattern Analysis and Applications, vol. 1, no. 1, Mar. 1998.
[31] F.Y. Wang, K.M. Carley, D. Zeng, and W. Mao, "Social computing: From social informatics to social intelligence," IEEE Intelligent Systems, vol. 22, no. 2, 2007.
[32] A. Schmidt, "Augmenting human intellect and amplifying perception and cognition," IEEE Pervasive Computing, vol. 16, no. 1, pp. 6–10, 2017.
[33] A. Nowak, R. Vallacher, A. Rychwalska, and M. Kacprzyk, "The target in control," submitted for publication, 2018.
[34] J. Van Den Hoven and J. Weckert, Eds., Information Technology and Moral Philosophy. Cambridge, U.K.: Cambridge Univ. Press, 2008.