THINKING & REASONING, 2018

BOOK REVIEW

Human versus machine thinking: the story of chess


Deep thinking: where machine intelligence ends and human creativity
begins, by Garry Kasparov, London, John Murray, 2017, 287 pp., £15.42
(hardback)

The holy grail of artificial intelligence from the 1950s and 60s onwards was to build a
chess program that could beat the world champion. Even as recently as the
1980s, it seemed an unattainable dream, the stuff of science fiction. But of course,
it happened. The unfortunate loser to IBM's Deep Blue in 1997 was Garry Kasparov, but it is ironic that his name should be forever associated with this loss in the mind
of the general public. He is arguably the greatest chess player who has ever lived
with a remarkably low percentage of losses to his fellow professionals. In 1985, he
became the youngest ever world champion at the age of 22 and held the title
until 2000. When he retired from professional chess in 2005, just in his early for-
ties, he was still the world’s highest ranked player. Since then he has engaged in
many chess-related enterprises, writing a number of specialist chess books as well
as discovering and developing young players. However, he is also a regular lec-
turer on decision-making skills for business audiences and the author of several
books aimed at wider audiences. Kasparov has a long-established interest in chess
computers and pioneered the use of both extensive databases of master games
and chess engines as tools of preparation for professional matches, now a stan-
dard practice.
Deep Thinking contains no chess diagrams or notation and is in principle acces-
sible to a non-chess playing audience. In it, he comments on the development of
artificial intelligence generally, makes comparison of human and machine think-
ing and gives an autobiographical account of his own experience in playing
against chess engines. This includes the famous Deep Blue "rematch", as he insists on calling it, since everyone seems to have forgotten that he won a previous match
against the program. The book will be of interest to cognitive psychologists and
cognitive scientists given the prominent role that computer chess has had in the
field. However, a word of warning. As the book progresses, it becomes more spe-
cifically about chess. As a regular and competent chess player myself, I found it
highly accessible and engaging. However, if you don’t know the game at all, this
might make parts of it a hard read, even with the lack of formal notation. You
don’t need to be a strong player, but you do need to understand some of the
basics of the game and it would be helpful to have played competitive chess at
some level.
Early in the book, Kasparov declares two “fallacies” (false beliefs) which are of
psychological interest. The first is the belief that master chess players must have
very high general intelligence and cognitive powers. Not so, he says, we are just
normal people with a special talent. Take the example of blindfold chess which is
often played as an exhibition by grandmasters. Many people think that this must
entail extraordinary powers of visualisation and memory but not so according to
Kasparov. All grandmasters can do it, apparently. It is the product of thousands of hours of study of chess positions and the highly specialised cognitive representations that this produces. However, he insists that while the talent required is a special rather than a general one, the ability to play chess at the top level is nevertheless very rare. He does not accept the oft-repeated mantra that anyone could do it (and other similarly skilled activities) with
ten thousand hours of practice (Gladwell, 2007). He also points out that the ability
to put so much effort and practice in is itself a rare talent, something that would
apply equally to concert pianists and Olympic gymnasts.
The second fallacy, which is of direct relevance to the cognitive science issues,
is the belief that if we could create a computer program to beat a world cham-
pion, then it would tell us a lot about human intelligence. As he demonstrates,
this turned out to be completely false. Before I get to that, let me describe the
strengths and weaknesses of human and computer chess, as Kasparov describes
them. There are two areas in which people are naturally strong and computers tra-
ditionally weak: strategic planning and pattern recognition. Strategic planning in
chess is thinking such things as: how could I open lines to give my rooks more
power, how can I find more space for my pieces and cramp those of my opponent,
or how can I engineer an endgame where I have two bishops against a knight and
bishop. Such goals can be formed and pursued without calculating any precise
move sequences to achieve them. Experienced human players do this effortlessly,
computers hardly at all. Pattern recognition – a key factor in all expert human
problem-solving (Robertson, 2001) – results from large amounts of study of chess
positions and allows the human expert to recognise with little effort the possibili-
ties in a position including a small number of candidate moves. Until very recently,
this was a great weakness in chess playing software. However, the book is almost
out of date on this issue, a point to which I shall return later.

How chess machines work


Computers do two things easily which humans find difficult. The first is calcula-
tion – that is envisaging sequences of moves and replies leading to a new and dif-
ferent board position. Unless there are a relatively small number of forced moves,
this is difficult for human players, especially in standard over the board chess
where all calculation must be done mentally within a fixed time limit. Computers
can, of course, examine very large numbers of board positions. For example, by
the time Kasparov played Deep Blue for the first time, the match which he won
(while losing one game), it could examine 100 million positions per second – and
that was in 1996. The improved software used in the 1997 rematch enjoyed dou-
ble that processing speed using a vastly expensive supercomputer. This IBM
machine was only ever built to defeat one opponent – Kasparov – and the project
was closed down immediately after it succeeded, with the team of developers
and grandmasters they had employed sworn to silence. However, just a few years
later, strong chess engines were developed that could run on normal PCs.
THINKING & REASONING 3

Nowadays, any owner of a tablet or smartphone can cheaply purchase any of sev-
eral chess apps that will play to grandmaster standard.
The other talent that computers have is memory. Professional chess players are
required to memorise thousands of lines of possible opening play – known as
opening theory. This is hugely time-consuming and subject to memory lapses
and mistakes. By contrast, computers can be given a huge “book” of opening lines
or even, in principle, a record of every tournament chess game ever played and
will effortlessly recall them without error. No human player can compete with com-
puters in such flawless memory nor in the ability to compute billions of chess posi-
tions prior to making a move. As Kasparov realised many years before his famous
defeat, it was inevitable that computers would eventually be stronger players
than the best humans. It was just a matter of speed: once computers could be
made fast enough, they would win.
Kasparov describes two approaches to building a chess machine that were envisaged in the early days of AI. Type A programs, also known as brute force, would try to compute as many board positions as possible by examining all legal moves and replies to the greatest depth they could achieve in the time available. Type B programs used intelligent rules and heuristics to reduce the search space and play in a more human-like way. Many psychologists will be familiar with the work of
Newell and Simon in the 1960s in developing heuristic strategies in chess and
other domains to simulate human problem-solving (Newell & Simon, 1972).
Because computers had so little power at that time, Type B programs enjoyed
some early modest success. However, they never reached the strength of a good
club player, let alone master level. Type A programs soon dominated and would
eventually beat the grandmasters. They have improved greatly over the years,
mostly due to the increasing speed of processing, but also due to refinements in
programming techniques such as alpha–beta pruning, in which analysis of less
promising lines is discarded early in favour of those providing better prospects.
Less breadth allows more depth.1
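
To make the Type A idea concrete, here is a minimal sketch of depth-limited minimax search with alpha–beta pruning. It is illustrative only: the position interface (is_terminal, legal_moves, make) and the evaluate function are hypothetical placeholders, not the workings of Deep Blue or any real engine.

def alphabeta(position, depth, alpha, beta, maximising):
    # Return the best achievable evaluation from this position, searching
    # `depth` plies ahead and pruning lines that cannot affect the result.
    if depth == 0 or position.is_terminal():
        return evaluate(position)  # static evaluation at the search horizon
    if maximising:
        best = float("-inf")
        for move in position.legal_moves():
            best = max(best, alphabeta(position.make(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:      # the opponent already has a better option elsewhere,
                break              # so the remaining moves here can be discarded
        return best
    else:
        best = float("inf")
        for move in position.legal_moves():
            best = min(best, alphabeta(position.make(move), depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

The pruning step is what "less breadth allows more depth" refers to: once a line can no longer affect the choice at the root, the rest of its subtree is abandoned, and the time saved can be spent searching the remaining lines more deeply.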
Kasparov describes how he and other grandmasters tried to keep the ever-
improving brute force programs at bay by understanding how they played and
trying to exploit their weaknesses. They knew they could never out-calculate the
programs in tactical play and so tried to play solid defensive positions where stra-
tegic thinking was required, waiting for the computer to make a mistake. In his
first, successful match against Deep Blue, for example, Kasparov radically changed
his normal style of play. At one stage, he withheld an attacking move that he
would have played against any human player, knowing how accurately the
machine would defend. Even strong human players will struggle in positions
where there is just one move to avoid immediate disaster, particularly if faced
with a succession of such choices. Computers will relentlessly find the “one move”
each time. But, of course, it was only a matter of time before the computers got
too fast. They are still weak today in strategic planning, Kasparov says, but if you
have a 250 mph service in tennis, does it matter if you have a weak backhand?
Kasparov makes an interesting psychological observation of the human need
to construct and perceive narrative. He says this is a major source of bias in both
the play of humans and in the analyses and commentaries that are published by
chess experts on other masters’ games. The latter write, with hindsight, as
though the game told a story from start to finish rather than being a sequence of
move-by-move decisions. Human players fall prey to this themselves, thinking "I have done A, so I should do B", and persisting with a strategic plan after the oppo-
nent has already nullified it. Computer programs do not think in this way. Each
new move is a separate problem with the same objective: find the strongest
move possible on the current state of the board. The error of thinking here is simi-
lar to that of a roulette player using "systems" in which later bets are believed to
compensate for earlier losses, while each individual bet has an expected loss
(Wagenaar, 1988).
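
A tiny worked example (my own, not from the book or from Wagenaar) makes the point about expectation: in European roulette every bet loses the same fraction of the stake on average, and no sequencing of bets can change that.

# Expected value of an even-money bet in European roulette (18 winning
# pockets, 18 losing pockets, plus the zero). Illustrative figures only.
p_win = 18 / 37
stake = 1.0
expected_value = p_win * stake - (1 - p_win) * stake
print(f"Expected value per unit staked: {expected_value:+.4f}")  # about -0.027

# A "system" such as doubling the stake after each loss only changes how the
# stakes are sequenced; the expected loss per unit staked stays the same.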
Kasparov also describes in the book a new approach to computer chess which
is under development exploiting neural networks and other advances in machine
learning techniques. I will call this Type C (although he does not). He points out
that traditional AI chess programs use evaluation functions that, for example,
assign points to different pieces (a queen is worth 9 pawns, a rook 5 pawns, a
bishop or knight 3). Although all beginners are taught these values, experienced
chess players know that they can be misleading. A bishop or knight may be better
than three pawns in the middle game, for example, but worse in the endgame.
Two bishops are more powerful than a knight and bishop, provided that there are
open diagonals on which they can operate and so on. One of the things that
makes chess so difficult is understanding how advantages in space and time can
compensate for material, which may be sacrificed to achieve them. Kasparov
points out that with machine learning one does not need to program in such eval-
uations – the program can learn the value of different pieces in different situations
for itself, developing much more refined evaluations of positions than can be pro-
vided by a set of explicit rules. This is exactly what appears to be happening cur-
rently. Kasparov does mention AlphaGo, which taught itself to beat the World Go
champion, but only as a Go player. Go has a larger board than chess and is less
accessible to brute force methods. However, in just the past few weeks, after pub-
lication of this book, reports have emerged in the press (e.g., Klein, 2017) that a modified version of AlphaGo, known as AlphaZero, has taught itself to play chess in just a few hours of practice in which it played against itself. It then heavily beat the current
computer world chess champion program Stockfish in a 100-game match. Yet,
AlphaZero has no opening book and no specialist chess rules programmed into it. It
apparently became the world’s best chess player by very rapid experiential learn-
ing. The work is yet to be peer reviewed at the time of writing this, but seems
potentially to be a breakthrough in artificial intelligence of staggering proportions.
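
For illustration (my own sketch, not code from the book or any real engine), a traditional hand-coded evaluation function of the kind Kasparov describes might look as follows; the piece values are the beginner's textbook numbers, and the position interface is hypothetical.

# Illustrative hand-coded (Type A style) evaluation: fixed material values
# plus a crude bonus term. Real engines use far more elaborate, hand-tuned
# functions; the interface here (position.pieces, has_bishop_pair) is invented.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # king is never traded, so not scored

def evaluate(position):
    # Score from White's point of view: positive means White is better.
    score = 0.0
    for piece in position.pieces():
        value = PIECE_VALUES.get(piece.kind, 0)
        score += value if piece.is_white else -value
    # An example of the kind of context experts know matters: the bishop pair
    # is usually worth a little extra, but only when diagonals are open.
    if position.has_bishop_pair(white=True):
        score += 0.5
    if position.has_bishop_pair(white=False):
        score -= 0.5
    return score

The machine learning approach described above dispenses with such hand-written rules: the values emerge from self-play, so a knight can be worth more than three pawns in one kind of position and less in another.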

The rematch against Deep Blue


Kasparov’s account of his infamous rematch against Deep Blue in 1997 makes riv-
eting reading, although it has little to do with cognitive science. He vividly con-
veys the confusion, anxiety and despair that he experienced during this match. I
will leave out the details that would only interest chess players but summarise
events briefly. IBM wanted to win at all costs and Kasparov rashly agreed to them
setting the conditions for the match and controlling the whole event. He had no
THINKING & REASONING 5

idea how the program had been developed since Deep Blue played no matches of
public record and IBM refused to provide him with examples of the training
games it had played against their own grandmasters. IBM’s experts were allowed
to access the program during the match, for example to reprogram algorithms or
add to its opening database. They even programmed it with psychological tricks,
such as pausing unnecessarily in play to give a false impression that it had run out
of opening book. In addition, an unfriendly and intimidating atmosphere was cre-
ated for a player who had a history of willing participation in what he regarded as
science experiments.
The match started well for the human race. Kasparov won the first of six games
quite easily, using an anti-computer strategy similar to that in his original match with
Deep Blue. He played passively, avoiding his normally sharp opening lines for fear
of book preparation against him and of being out-calculated in tactical play. But
in game two, the strategy backfired with the computer taking advantage of the
space and time he gave it to build up an overwhelming positional advantage.
Completely disconcerted by this, Kasparov resigned in despair. His misery was
considerably increased when he was informed prior to the next game that he had
missed a chance to draw game two by perpetual check. This happens when the
king cannot escape a series of checks, typically administered by the defender’s
queen. This forces a draw regardless of other advantages in the position.
Of course, Kasparov knew that the enemy king was exposed and it beggars
belief that he did not attempt a perpetual check defence. He explains in the
book that he just assumed that with its formidable calculating powers, Deep
Blue would have found a method to avoid such a draw. He would never make
this mistake against a human player and this is probably the only time in his
career that he resigned a drawn game, a distressing event for any competitive
chess player. In the context, it was devastating. Nevertheless, he managed to
play out some draws and reached the sixth and final game level in the match.
In this game, he allowed Deep Blue to play an attacking move, sacrificing a
piece in the opening. Deep Blue rapidly won the game and with it the match.
Kasparov was well aware of the move and its dangers but certain that the
computer would not play it: no chess engine at this time would make this kind
of speculative sacrifice. He would have been right except that it emerged
many years later that Deep Blue had been given a specific instruction to over-
ride its engine in this position should the game reach it, which was added dur-
ing the match. It also later emerged that IBM had planted a Russian speaking
security guard in Kasparov’s personal rest area.
There was much suspicion among chess experts at the time that human play-
ers might be intervening to play some of Deep Blue’s moves, which would have
given it a serious advantage. It was later demonstrated, again at Kasparov’s insti-
gation, that a human expert with a chess engine could outperform either type of
player on their own. However, Kasparov dismisses the notion of human interven-
tion in the book, since the suspiciously “human-like” moves that it played were
replicated by commercially available chess engines within a few years. Nevertheless,
the fact that IBM never allowed Deep Blue to play again or for its traces to be stud-
ied and bound their team of experts to silence meant that suspicions that some-
thing was badly wrong about this match persisted for years afterwards. But of
course, it is all academic now. The best engines, developed from this Type A brute
force tradition, can now beat the best human players. The holy grail was attained:
computers can beat the world chess champion.

Humans and machines: the differences


The success of brute force chess machines appears to tell us little or nothing
about human intelligence – a rather depressing conclusion. However, Kasparov
points out that the feared consequence that people would consequently give up
playing the game has not come to pass – quite the opposite in fact. Computers
have actually enabled a much larger number of people to acquire expert mastery
of the game. First, computer databases provide rapid and easy access to all master
chess games for study. Second, chess engines provide strong opponents for
developing players who may not have Kasparov’s own advantage of growing up
in a chess-rich culture in which many expert players are available. However, as he
points out in the book and I can attest from personal experience (I play a good deal of
online chess), playing against human opponents is much more motivating and
satisfying than playing machines. One reason is that humans naturally span a
wide range of playing abilities and styles which computers find hard to imitate.
Paradoxically, programmers have a new challenge: making their software weak
enough to give human players an even game, while doing so in a convincing way.
Mixing strong moves with the occasional blunder, for example, is transparent and
irritating.
This brings me to another difference between human and machine chess
which Kasparov mentions a number of times in the book: emotion. Computers do
not get stressed, hungry or tired as a game progresses. They do not feel elation if
they win or disappointment if they lose. They do not feel regret if they make a
mistake or relief if they get away with it. They do not feel anything at all, which
itself puts pressure on their human opponents. A human player under time pres-
sure will become highly anxious and agitated but they know that if the computer
is short of time, it will simply reduce the depth of its search before announcing its
move. One is reminded of the android officer Data in the Star Trek series, who was
paradoxically portrayed as wanting to have emotions. Of course, these chess
machines do not want anything at all. As Kasparov points out, artificial intelligence
is a human achievement and should be celebrated as such.
So the dream of early AI researchers was realised – computers can now beat
the world chess champion. Unfortunately, it does not seem to have taught us very
much about the nature of intelligence, except that the human brain has found
methods of solving complex problems that do not rely on brute force. Kasparov
remarks, however, that something which in prospect seems to require intelligence
seems commonplace once achieved. This is similar to our perception of conjuring
tricks – a stage magician’s tricks are extremely impressive until we find out how
they were actually performed. We have known for a long time that computers
have large and accurate memories and that they can process information very
fast. We perhaps did not realise how big those memories would become and how
fast the processors would run after just half a century or so of research and devel-
opment in computer technology. And chess is after all a well-defined problem,
THINKING & REASONING 7

with a finite, if vast, search space. While many important scientific and real-world
problems can be solved by brute force computation, many also cannot.
I do, however, take two lessons from the chess story as possible indicators of the
future of artificial intelligence. One is that computers and humans working together,
combining the different cognitive strengths of each, can outperform a human or
machine on its own. This should keep humans in the game of expertise a while lon-
ger. However, as Kasparov remarks early in the book, automation is no longer to be
feared only by blue collar workers. The computers are coming for doctors and law-
yers as well. My second thought is the development mentioned only briefly in the
book but illustrated by the remarkable achievement of AlphaZero in apparently
becoming the best chess as well as Go player in the world, using machine learning
methods. Successful as Type A chess programs have been, it seems that Type C is
better. Not only that, but they can mimic and improve upon the pattern recognition
abilities of human experts, if not yet the strategic thinking. Surely, this is the future
for intelligent machines and will change the world beyond recognition.

Note
1. Although pruning reduces search space, it is not a “heuristic” in the Type B sense.
Type A programs have never just used brute force because the problem space of
chess is far too large to search all possible moves and replies. For example, they
need functions to evaluate the chess quality of positions reached at the depth of
their search in order to choose the move with the best prospects. The strength of
such evaluation functions has been progressively improved with the help of
grandmasters. Pruning unpromising lines early improves efficiency as a greater
depth can be reached with the more promising ones. Hence, one could argue
that Type A programs have some “intelligence” in addition to brute force,
although the main improvement in their performance is down to sheer comput-
ing speed. And of course, their ability to examine billions of chess positions can-
not be remotely matched by any human player. On the other hand, humans only
ever examine a few candidate moves – they do not need any search at all to
ignore those that are obviously unimportant.

References
Gladwell, M. (2007). Outliers. London: Penguin.
Klein, M. (2017). Google's AlphaZero destroys Stockfish in 100-game match. Retrieved from https://
www.chess.com/news/view/google-s-alphazero-destroys-stockfish-in-100-game-match
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs, NJ: Prentice-Hall.
Robertson, S. I. (2001). Problem solving. Hove: Psychology Press.
Wagenaar, W. A. (1988). Paradoxes of gambling behaviour. Hove: Erlbaum.

Jonathan St B. T. Evans
School of Psychology, University of Plymouth, Plymouth, UK
J.Evans@plymouth.ac.uk
© 2018 Jonathan St B. T. Evans
https://doi.org/10.1080/13546783.2018.1430616
