
Tom Chatfield on Critical Thinking and Bias
Transcript

Key
DE: DAVID EDMONDS

TC: TOM CHATFIELD

DE: This is Social Science Bites with me, David Edmonds. Social Science Bites
is a series of interviews with leading social scientists and is made in association
with SAGE Publishing. It’s become fashionable in recent years to regard human
reasoning as contaminated by numerous irrational errors and prejudices. Tom
Chatfield, best known for his writings on technology, doesn’t quite see it this way.
Still, to reason critically, he says, means being alert to various forms of bias. Tom
Chatfield, welcome to Social Science Bites.

TC: Thank you so much for having me.

DE: The topic we’re talking about today is critical thinking and bias. Two separate
terms there, let’s get them clear one at a time. Critical thinking, what do you mean
by that?

TC: What I mean by critical thinking is our attempts to be more reasonable about
the world. And so this tends to involve coming up with reasoned arguments that
support conclusions. Reasoned explanations that seek to explain why things are
the way they are. And, perhaps most importantly, doing all this as part of a
reasonable, critically engaged discourse, where you're listening to other people
and you're prepared to change your mind.

DE: Is this the same as the rules of logic that we’ve been working on ever since
the Greeks?

TC: So logic is certainly part of this. Being able to correctly deduce conclusions,
saying, "I've got some information in front of me, so what must be true if these
things are true?" That's deductive reasoning. And obviously induction
is also very important with this idea of making a leap from certain knowledge and
saying, “If these things are true, if this pattern is true, what else might be true?
What else is likely to be true?”

But I think more and more, we also need to roll into this the scientific and empirical
method of seeking explanations, forming hypotheses, testing theories. And, this is
the additional bit for me, building into all this our growing knowledge about human
bias, the predictable biases in the way we think.

DE: So that’s a wonderful segue to bias. Bias is what? A distortion of our thinking?
Our thinking becoming infected by error in some way?

TC: So I have a bit of a problem with a lot of the idea around kind of infection and
bias being this bad thing we would be better off without. When we talk about bias,
we are certainly talking about an inaccurate account of the way things actually
are. So it presupposes the idea that there is an objective reality out there, that the
world is a certain way. And then that our accounts of it are falling short of this. But
I think the problem is that there is no unbiased account out there.

So when I talk about bias I’m very specifically interested in talking about the
predictable ways in which our distortions and misrepresentations occur. And also,
you know, it's a really big deal in the 21st century, the distortions that can exist within
information systems, within the digital systems through which we construct and
share knowledge. And how these, too, often have certain biases and assumptions
baked into them.

DE: We’ll get on to some of those points in a moment. But I want to pick up first
on something you just said, which is that there’s no non-biased perspective. That
sounds very postmodernist. If I say that Mt. Everest is the highest mountain in the
world, that doesn’t sound like it’s open to dispute. It’s a fact. There aren’t different
perspectives on that, or not different true perspectives on it.

TC: Absolutely. And I deeply dislike the kind of postmodernism that lets alternative
facts in through the back door. What I’m talking about is the fact that we do not
ever simply know things, full stop. We don’t possess objective facts. Knowledge
about the world has to exist in some kind of context, it has to have a framework
and a framing.

When we’re talking about something like Mt. Everest we know, or think we know,
that it is the tallest mountain in the world, which I think by most definitions it almost
certainly must be assumed to be, because we've measured it. There are a whole
range of different figures out there for the height of this particular mountain, just
as there are a whole range of different names for it. And there are a whole range of
micro disputes over whether you measure a mountain from mean sea level, snow
caps, earthquakes, whether you should be counting undersea mountains, whether
you should be looking at the bulges in the earth.

Now, it’s very important in all of this not to let the perfect be the enemy of the
good. By which I mean, not to let the fact that these things are qualified be the
enemy of saying that some things are more or less true or valid. But I think there is
always a context which is a human and information gathering, and measurement
and knowledge context within which this stuff exists. And that becoming more
aware of that context is what allows us to refine and improve it and remain open
to surprises. And be really, really rigorous. So in a way, if we want to respect the
nature of the objective reality that’s out there we need to have this very careful
relationship with honest doubt.

DE: Let’s get back to bias. Is it fair to say that, actually, you think that what we
humans suffer from most acutely is not bias in the sense of outright error, but
rather that we have heuristics, general rules that on occasion go wrong?

TC: So I think the word heuristic is a really useful one. It means a rule of thumb,
a kind of mental shortcut. And yes, the basic problem is you've got the universe out
there, and then you've got the little squishy brain inside our lovely skulls. And it is
very clear that there is too much information out there, too many things happening
too fast, that the idea of making sense of things must involve shortcuts. And in
evolutionary and historical terms, if all humans did was sit around scratching their
heads for 25 minutes every time they had to decide whether or not to take a step
forwards or run away from an angry lion, we would just be very smart corpses.
And so we have a lot of extraordinarily useful and powerful shortcuts for resolving
the kind of overwhelming information and options into frameworks, meaningful
decisions, and preferences.

DE: Give me an example of these shortcuts that do so well for the vast majority of
our life and then occasionally go wrong with bad consequences.

TC: So one of the most famous examples of this is what’s known as the affect
heuristic. We use the emotional intensity of our reaction to something as a
guideline to decision making. What would you like to eat? You don’t look at the
menu and conduct a detailed calorific analysis of it, or say, “Well just give me a few
days, I’m off to do some research.” You probably ask, “Well, what do I feel like?”
And this is a really good idea because it enables you to make these decisions. And
also because your emotions are quite a complex biochemical decision making
algorithm. They’re not some kind of regrettable extra you’d be better off without.
They are a central part of your being and your survival.

And the affect heuristic throughout history has been trained up to guide us pretty
well in the kind of settings that our ancestors, over hundreds of thousands of
years probably faced. What foods to go for, what to run away from, how for
example, to raise these incredibly vulnerable offspring that humans produce in
contrast to other animals. How to collaborate to an unprecedented degree as
social organisms. It’s very clear that most of the time when people achieve great
things together and form lasting bonds and cooperate, much of what’s going on
is taking place primarily at a level of emotional processing. And of course today
in the blink of an eye, historically speaking, we are suddenly connected not only
to screens, media representations, but to millions and millions of strangers. And
many of these strangers are very interested in using heuristics to manipulate us, to
get us to buy things, to take certain decisions. And so suddenly the stuff that was
a very good guide to forming relationships like behaving charitably, perhaps, or
empathetically towards people in trouble, becomes an opportunity for spam email
to come whizzing into our inbox and beseeching us for help.

DE: To pick up on that, we may get a spam email telling us that our distant cousin
in Nigeria has been robbed and we should send money immediately into the
following bank account.

TC: Absolutely right. Yeah. And you’ll notice with things like spam email, and
indeed much more sophisticated approaches, that they aim to create a sense of
emotional urgency. What you want to do, if you’re trying to manipulate someone, is
put them in a situation in which they are reliant on emotion or the decision making
is dominated by emotion. Advertising, conning, manipulation. Also, interestingly,
what you want to do is allow the vulnerable to self-select. And so a lot of scams are
really rubbish if you are a sophisticated, experienced user of technology. And this
is great so far as scammers are concerned because by making something that will
only fool the most vulnerable or inexperienced, those who are least in possession
of expertise or critical aptitude in this area, you make it much less likely that you’ll
waste your time trying to fool someone who in fact is pretty savvy.

DE: That’s the affect heuristic. There’s also the availability heuristic or bias, and that’s
linked to the recency bias. You'd better explain what those are.

TC: Yeah, we’re getting a bit of a tongue twister with these things. And I think one
nice clarifying point which Daniel Kahneman makes very eloquently, is that in all
of these situations, what we’re doing is we are taking a question that is difficult
to answer and we are substituting, often without noticing, an easier question.
So a very difficult question might be who will make the best next president or
prime minister?” That’s a very complicated question. But a very easy question is,
“Whose face do you like more?” “Who gives you good vibrations?”

And I’m not saying that every political decision is made on this basis but it is
certainly true that a lot of the time, we don’t even notice this substitution is going
on. We talked here about recency and availability. And really all of these words
are moving around the same point, which is that we are prone to treating how
easily something comes to mind as an indicator of its truth or validity. And this is
not always true.

One very simple example is to do with advertising and celebrity endorsements.
If something comes to mind very easily when I say “crisps,” you may think of a
famous brand of crisps. You may think of a famous face associated with crisps
or chips depending upon your country. Now is it likely that that which came most
easily to mind is also the best? Is also the finest purveyor of fried potato products
in the world? Probably not. But so far as you’re concerned in your everyday
dealings, it;s not a bad substitution. It’s probably reasonable. It’s probably pretty
good or it couldn’t get to that level. And beyond that, you’re quite happy for this
happy heuristic to take over and spare you the burden of potentially endless
research into the finest crisps or trying to make one.

Sometimes it can be more dangerous, however. Let’s say, for example, that you
are being asked a question about what your tax dollars are spent on or what your
money is spent on, or what your health care plan is spent on. And someone says,
“Well, what do you think is a greater threat, heart disease or cancer?” Now a lot
of people will probably say cancer. Because cancer, in its many and varied forms,
is often quite a public and prolonged disease. It is of course a massive killer. A lot
of well-known and famous people and cases have emerged from this. Whereas
heart disease, by and large, is I wouldn’t say less glamorous but it has a different
path. It’s managed differently. And yet, heart disease kills more than twice as many
people each year globally as cancer.

We are often very willing to let our emotional reaction double as truth and be
substituted for what we think of as truth. And that is even more important when
it comes to more controversial or important decisions in our lives. “What do you
want to do?” “What are you more afraid of?” “What are you more interested in?”
“What would be the right thing for you?”

DE: A much less important example, but an example I like, is the frequency of the
letter “K” and how often that appears in different words and where it appears.

TC: Absolutely. And this is a rather easy example, and you can try it for yourself. I
would invite anybody listening to this who doesn’t know this particular experiment
to try it right now.

Here’s a question. Are there more words in the English language that, A,
begin with the letter K, or, B, have the letter K as their third letter? Have a quick
think about that. Now, if you’re like most people, your gut will have answered
for you: "Well, there are more words that begin with the letter K," because when I ask
that question people start thinking, "King, key, kiss, kangaroo." However,
as Kahneman and Tversky, who first conducted this experiment, found, there are
many more words that have the letter K as their third letter, but they are harder
to bring to mind because it is simply more difficult to think of words on the basis of
their third letter than their first letter. That’s how our minds work.

This is a very neat example of the fact that we are extraordinarily willing to treat
the ease or the coherence of something as synonymous with its likelihood or
truthfulness. When in fact, we should be very cautious about this.

DE: Tell me about a phrase I’ve heard hundreds of times, the confirmation bias. Is
that an outright bias or is that a heuristic?

TC: The confirmation bias is the universal human tendency to seek information
that confirms things we already believe or think, while ignoring or being less
willing to accept information or evidence that contradicts or challenges, or can’t be
integrated into beliefs and ideas we already have.

And when I put it like that, this is obviously a bad thing. When you look back
through history you find yourself mostly sort of laughing at the terrible people who
forced Galileo to recant because they could not bear to believe that there were
satellites orbiting Jupiter. That the Earth was not the center of the universe. It
sounds very clear that we should all, as far as possible, be terribly open minded.
And yet, at a sort of basic level, almost by definition, you cannot be open to stuff
that you have no way of comprehending or systematizing or grasping.

On some level, confirmation bias is an extreme example of just the way that
humans have to think. Understanding and grasping and explaining stuff is based
on the idea that you have preexisting ideas, some way of grasping
it. So I think we need to be, as ever, a little cautious around just this universal
“Bias is bad, bias is bad. This is a bias so it’s bad.” And perhaps a subtler way
of talking about it is that we can train ourselves to invite refutation. And we can
train ourselves to frame our beliefs about the world in a way that acknowledges
they are beliefs, that they are most of the time working theories. And some of them
are working theories that will probably just go on working, which we don’t need to
worry about too much. But some of them, you know, this idea for example that
economies will keep on growing, that computers will keep on getting faster, these
are decent working beliefs, but they are more interesting and useful if we leave them
open to refutation and challenge than if we treat them as things that we just want
to find confirmation for.

DE: So is this the answer to different forms of heuristics? Is the answer to the
problems that they throw up a permanent kind of skepticism?

TC: Permanent skepticism is really hard to pull off. But in general, skepticism
is a shared project. When we talk about things like the scientific method, what
we’re talking about is a shared methodology. We’re talking about diminishing
our reliance on our own individual, personal view of things. And instead
acknowledging that we are part of a shared project of trying to understand and to
test.

So I think coming up with frameworks and structures and modes of practice and
attitudes that allow for collaboration. And this is very simple. You can do it in
the way you write. All I mean by this is that rather than sort of saying, “I have
observed that computers are getting faster and faster and faster and smarter
and smarter and smarter so the singularity is coming.” You might say, “It is
interesting to observe that for the last 30 or 40 years we’ve had these huge gains
in computational power. So one outcome of this could potentially be vast increases
in computer intelligence. I would be interested to see what other people think,
what evidence others might come up with that might contradict this, or how this
picture might look more complicated." In other words, reasoning is a shared project.

DE: So open-mindedness and dialogue.

TC: Dialogue. And a plurality of views. But permitting a plurality of views. And
there’s all sorts of tensions here. But there’s a lovely tension that the great
philosopher of science Karl Popper I think was very right to emphasize, which
is this idea that if we want to have an open kind of competition between ideas,
if we want to have multiple perspectives, each bringing different evidence and
potential explanations to the table and testing them, far from this being a sort of
post-modern mishmash in which everyone has their own facts and then is prepared
to defend their own facts to the death, whether their own death or somebody else’s
death, in a way what we need is a radical intolerance of intolerance. We need to
be prepared to fight for this kind of rigorous plurality. And not just metaphorically
fight, literally fight with guns and stuff. Because otherwise this tolerance may be
wiped out.

DE: Are experts more likely to suffer from these kinds of biases or to experience
problems following on from sound heuristics than other people? Or is expertise a
cure?

TC: At this point we have to look at this word “expertise” and ask what it means.
Because on the one hand, there is what you might call “true expertise.” And true
expertise is when a person has spent a sufficient amount of time exposed to
phenomena or ideas in a field of sufficient regularity and information richness that
they can indeed be trusted to know quite a bit about what may happen next,
or what is going on. And then on the other hand, there is the word “expertise”
thrown around to indicate someone who is thought to be clever or well-informed
but who does not exist in these conditions. If the field is not one in which they have
sufficient experience, or the field doesn’t possess sufficient regularity, then what
they have to say is probably worthless. And they are more likely to believe in the
truth of it than other people, so they’re doubly dangerous.

So, to take a specific example, work by people like Daniel Kahneman
has shown that in a lot of financial areas there are people who know loads about
finance. But there is so much volatility inherent to things like stock prices, and it
is in the nature of things like stock prices that a lot of information is factored into
these prices anyway, that people might as well have monkeys throwing darts at
a wall as expert stockpickers in a lot of fields. By contrast, we have examples
of things like athletes who are engaged in athletics endeavors like playing golf,
who put in thousands and thousands and thousands of hours in environments
that have a lot of regularity and a lot of meaningful feedback. And if an athlete
comes up to you and says “There’s something slightly-- I don’t know what it is,
something niggling with my back. I think I shouldn’t play today.” You should listen
to that because they’ve had a sufficient level of exposure to a sufficiently regular
field with meaningful feedback, that they’ve developed meaningful intuitions. And
one of the great problems, of course, is that if you have someone who’s got some
genuine expertise and knowledge in an area, unless they’re really rigorous, they will
often take that feeling of knowledge and that confident mode of self-expression,
and then they will step outside this perhaps quite narrow area in which they really
know their stuff.

DE: But if you want to know about the stock market you would think that the
person you need to ask is the stockbroker. How do you know when somebody is
an expert in a particular domain and worth listening to?

TC: The simplest thing is, “Have they made predictions that can in any way be
tested or have been validated?” It doesn’t apply to every field. It doesn’t often
apply to things like social science, which deals with complicated systems. I think
there you might say, “Well, what are they saying? Are they making predictions and
assumptions that simply aren’t backed up? Or are they talking in a very expert and
informative and suggestive way about the nature of the doubts, the uncertainties,
the landscape, and the patterns here?”

And also there are different kinds of expertise. So prediction is not the only
measure of success, despite what I think some passionate physicists may argue.
Giving people useful, powerful, suggestive ways of thinking about the world,
of arguing, of debating, of understanding, of systematizing things, is very, very
powerful. And also giving voice to different views, challenging orthodoxies. So an
orthodoxy can be wrong and dangerous. And can be challenged by viewpoints
and subtleties and insights from other areas, without those having to be
absolutely, predictively true.

We can say, for example, that a certain way of talking about the education system
may totally ignore the voices of students, or may systematically exclude the voices
of certain minorities. And that calling upon people who have experience from
these areas, who pay attention to this, who give voice to these concerns is very
valuable and important. Even though we’re not coming up with a single big shiny
answer. That perhaps most often getting rid of yesterday’s big shiny answer can
be great.

I want to invoke Karl Popper again, who noted that there’s this profound asymmetry
between confirmation and refutation. No amount of evidence can ever definitively
confirm a theory or an inductive idea. But just one piece of suitable
evidence can disprove a theory. The most famous example is finding a black
swan. In 1697, Dutch explorers were the first Europeans ever to see a black
swan when they were down in Australia. And at a stroke, this disproved thousands
of years of European belief about what a swan was, about how you defined a
swan.

DE: We’re very lucky, aren’t we, because we live in a world of artificial intelligence.
We live in a world of robotics. And so soon we’ll have a world in which bias will
disappear.

TC: I think you’re seeking to provoke me here. People don’t realize the degree to
which human biases, conscious and unconscious, are embedded in our creations.
There is no such thing as a neutral tool. It doesn’t mean our tools are bad or evil
or wicked. It just means that if I want to kill you, a gun is better than a toothbrush.
And if I want to come up with an algorithmic understanding of the world, or society,
or crime, every and any data set I bring to bear upon this, will bring with it biases
and features based upon its manufacture.

When I say manufacture I mean this in the literal sense, that data is made, not
found. And its making embeds certain assumptions, certain ways of thinking
about the world. If, for example, I’m training an artificial intelligence system to help
me find new employees for my fictional large company, and I feed in all the data
I possess about my employees for the last 50 years, and I instruct my algorithm
to sift through the CVs of potential employees and come up with best fits, I’ve
probably, among other things, created an algorithm which is a white middle-aged
man generator.

DE: But machines that have deep learning built into them, in other words,
machines that teach themselves, will soon overcome those kinds of errors, won’t
they?

TC: Well, we can do astonishing things with deep learning. But, and I think it’s
a really big but, good learning for machines, just like for us, tends to involve
meaningful feedback. And it tends to involve the understanding of what kind of
questions we’re setting out to explore.

So I gave the example of a machine learning system that, if we just presented it
with a whole bunch of raw data about employment history, would start to spew
out recommendations that were all white men, because that’s what the past looked
like. This is a starting point, and we might very quickly realize that actually we
would enormously improve the great potential of a system like this by making
it name- and gender- and age- and ethnicity-blind. In fact, good application
processes already tend to do this. We would sort of clean and improve the data.
Less is often more when it comes to data.

But I think most crucially, what we would also need to be able to do is meaningfully
scrutinize its outputs. Have meaningful criteria for success or failure. And keep
feeding back and iterating, just like we do with people. And when we do this
with machine learning systems, when for example we have a whole host of
rival algorithms, almost like a sort of gene pool working upon data, and we're
meaningfully scrutinizing these outputs and iterating, then yes we have absolutely
astonishing tools. But they are solving problems by means entirely alien to human
minds. And the future, when I’m feeling hopeful, looks to me like a place where we
really refine the rules of human/machine collaboration. And where we understand
better the very different conditions under which machine learning algorithms and
humans thrive. And use them in complementary ways rather than the delusion of
replacement and rivalry, which I think is very dangerous because of course once
you take people out of an automated system it is very hard to put them back in
again.

DE: Is it possible to give the pessimistic view that the future may be worse than
the past in terms of critical thinking and bias, because we may have algorithms that
we don’t fully understand, whose biases we can’t identify?

TC: Absolutely. A lot of this is already playing out around us. “Algorithmic
solutions,” in inverted commas, that really don’t obey any of the basic rules
of quality. Automated assessments of teaching quality in America, say. And
effectively you have judge, jury, and executioner, from a career perspective, in the
form of an algorithm with some very broad, arbitrary, and dangerous assumptions
baked into it. Against which there is no meaningful appeal, and of which there is very
little meaningful scrutiny.

One of the great phrases of our time is “computer says no.” This idea that you
want something and instead all you have is a computer making a person behave
like an idiot. Kind of artificial idiocy. I think again and again, we need to have
people waking up to the idea that if you cannot explain how a decision has been
arrived at, if you cannot interrogate that decision making process and seek to
modify it, then you have something very, very undemocratic, very unaccountable,
and very dangerous that embeds a whole host of unexamined assumptions that
you may never inveigle out.

People know about famous examples. Microsoft unleashed a chat bot on social
media with a mission to learn, and lo and behold it turned into a potty-mouthed
12-year-old. Big surprise. But we have the algorithmic equivalent of potty-mouthed
12-year-olds running systems across corporations. And in fact the philosopher
Nick Bostrom and others who’ve written very importantly about AI argue that
there is an ethical imperative for systems as far as possible to be transparent to
inspection, to be predictable, to be immune to tampering as far as possible, and
to be open to modification, as basic criteria for algorithmic systems that remain
amenable to ideas of justice and accountability. And then we can do really great
things.

DE: Tom Chatfield. Thank you very much indeed.

TC: Thank you so much for having me.

DE: Social Science Bites is made in association with SAGE Publishing. For more
interviews, go to socialsciencebites.com.