
Ideas Roadshow conversations present a wealth of candid insights from some of the world’s leading experts, generated through a focused yet informal setting. They are explicitly designed to give non-specialists a uniquely accessible window into frontline research and scholarship that wouldn’t otherwise be encountered through standard lectures and textbooks.

Over 100 Ideas Roadshow conversations have been held since our debut in 2012, covering a
wide array of topics across the arts and sciences.

See www.ideas-on-film.com/ideasroadshow for a full listing.

Copyright ©2020 Open Agenda Publishing. All rights reserved.


ISBN: 978-1-77170-121-1
Edited with an introduction by Howard Burton.
All Ideas Roadshow Conversations use Canadian spelling.
Contents
A Note on the Text

Introduction

The Conversation
I. Becoming a Psychologist
II. Probing Agency
III. The Active Brain
IV. Ideal Bayesian Operators
V. In Search of a Mechanism
VI. Humanistic Hubris
VII. Free Will
VIII. The Very Big Picture
IX. Final Thoughts

Continuing the Conversation


A Note on the Text
The contents of this book are based upon a filmed conversation
between Howard Burton and Chris Frith in London, England, on
November 14, 2016.
Chris Frith is Emeritus Professor of Neuropsychology at UCL and
Honorary Research Fellow at the Institute of Philosophy, School of
Advanced Study, University of London.
Howard Burton is the creator and host of Ideas Roadshow and was
Founding Executive Director of Perimeter Institute for Theoretical
Physics.
Introduction
Eyes on the Prize

Chris Frith has long been fascinated by schizophrenia. As a young graduate student working with Hans Eysenck, he recalls how his interest was piqued even further by a serendipitous assignment as part of Eysenck’s internal review process for the second edition of his handbook of abnormal psychology.
“Students were handed out different chapters and I got the one on
perception, which was mostly about schizophrenia. What particularly
fascinated me then, and still does, is the problem of hallucinations and
delusions.
“It’s easy enough to understand in principle if you’ve got affected regions
of your brain why you become blind or deaf or can’t understand a
concept or something, but it’s very difficult to understand why you start
seeing things that aren’t there or believing things that are obviously not
true.
“So, I was always interested in questions like, Can we think about a
mechanism? and, How do we relate this to normal functioning? What is it
that could go wrong in normal functioning that can make you start seeing
things that aren’t there or hearing people talking about you, when they’re
not?”
A key theme driving Chris’ entire research career is readily apparent
from these early inquiries: using specific aspects of abnormal
psychology—“what goes wrong”, if you will—as a natural window to
help shed light on how, exactly, underlying “low-level” biological
mechanisms are converted to “higher level” subjective experiences.
Meanwhile, a more detailed consideration of the mechanics of
hallucinations initially drove him back in time, to the 19th century and
his “big hero” Hermann von Helmholtz, who developed a deep insight
on how we might objectively distinguish between external happenings
in the world and internal happenings in our brains.
“Helmholtz pointed this out in relation to eye movements. When I move
my eyes, obviously things jump about on my retina so there’s movement
on the retina, but it’s due to me. And I have to be able to distinguish
between movement on the retina due to me and movement on the retina
due to something actually moving in the world.
“And he basically said that because there’s a message involved—you’re
sending a message to your eye muscles to move the eye—you can use that
signal as a way of determining what the corresponding movement is due
to, whether it’s you or the world.
“So I took that up and thought, Maybe what goes wrong in schizophrenia
is that this signal—this normal signal that tells you that it’s your
movement or your action—doesn’t arrive for some reason.”
Well, what kind of a signal might that be? Combining his own
experiments and analysis with a wealth of results from the broader
cognitive science community, such as Wolfram Schultz’s pioneering
work with monkeys, he eventually came to believe that the key signal in
question was that of a prediction and reinforcement mechanism
involving dopamine.
The brain, it turns out, is hardly the passive recorder of external
happenings that scientists once believed, but is instead, vitally, a highly
active participant, constantly predicting what will happen and regularly
comparing its predictions with the incoming sensory input. Chris,
together with colleagues such as Daniel Wolpert, began to focus intently
on the importance of human agency, forthrightly calling himself “a
motor chauvinist”.
“There used to be this view that everything was perception. They would
draw a picture of the brain, and most of it would be the visual system. But
we would say in contrast, ‘No, action is what the brain is all about. If you
don’t have action, you’re going to die.’”
Nowadays, most neuroscientists are not only convinced that this active
prediction mechanism is an essential characteristic of the brain—quite
possibly the essential characteristic—but after years of careful study,
they have come to appreciate how it successfully harnesses all the
nuances of advanced probability theory in order to predict and learn
appropriately. The brain, goes the common description, is an “ideal
Bayesian operator”, meaning that it is constantly invoking a deep
understanding of Bayesian statistics as it engages the world around us.
Well, that’s hardly surprising. Since Bayesian statistics actually work,
calling the brain an “ideal Bayesian operator” is really another way of
saying that it uses statistics appropriately in its constant prediction
mechanism, which any evolutionary theorist would have very much
expected in the first place. After all, it’s hard to see how a constantly
predicting life form would last very long if the way it was going about
making its predictions was all wrong.
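The “ideal Bayesian operator” idea can be made concrete with a toy calculation. The following sketch is an editorial illustration, not anything from the conversation itself: it shows the textbook Gaussian case, where a prior expectation and noisy sensory evidence are combined by weighting each according to its reliability (precision). All the numbers are hypothetical.

```python
# Illustrative sketch: a minimal Bayesian update, combining a prior
# expectation with noisy sensory evidence. With Gaussians, the posterior
# mean is a precision-weighted average of prior and observation.

def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior with a Gaussian observation."""
    # Precision (1/variance) weights: more reliable signals count for more.
    w_prior = 1.0 / prior_var
    w_obs = 1.0 / obs_var
    post_mean = (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)
    post_var = 1.0 / (w_prior + w_obs)
    return post_mean, post_var

# The brain "expects" an object at position 0.0, but the (noisy) senses
# report 2.0. Because the evidence here is twice as precise as the prior,
# the estimate lands most of the way toward the evidence, and the
# posterior is more certain (smaller variance) than either input alone.
mean, var = bayes_update(prior_mean=0.0, prior_var=1.0, obs=2.0, obs_var=0.5)
```

Note that the estimate is never simply the raw sensory input: the prior always pulls it somewhat, which is one way of stating the book’s point that perception is prediction corrected by evidence, not passive recording.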
But what’s deeply curious about this picture is that, while our brains
are busily going about acting like ideal Bayesian operators, if you were
to ask people to perform a basic statistical calculation, more often than
not they would get it wrong. So somehow, there’s a big difference
between what our brains are doing and what we, consciously, do.
And suddenly, we’re right back to Chris’ initial conundrum: how, exactly,
are “lower-level” biological phenomena—such as those involved in our
ideal Bayesian brains—related to “higher-level” subjective phenomena
—such as becoming convinced that we should really buy that lottery
ticket because “somebody has to win”?
“I think this is the key point. I’m not based in a philosophy department,
you see, so I have had to learn some terms; and the terms I have learned
now for this dualistic aspect is “the personal” and “the subpersonal”,
where “the subpersonal” is related to doing what the brain does as an
ideal Bayesian operator and “the personal” is what I do, as it were.”
Well, adopting a new vocabulary is often pleasant, but the question
remains: how is the subpersonal related to the personal? What’s the
mechanism for that?
While it’s safe to say that nobody knows for certain, Chris is
increasingly convinced that the answer to that involves a proper
appreciation of “culture” and its impact on the brain through
neuroplasticity.
“I’ve become more and more interested in culture. In some ways, the
brain is not enough on its own—it’s almost like a tool.
“And one thing I think about is that, genetically speaking, our brains at
birth are no different from the brains of people from something like
200,000 years ago, when people were making these crude stone tools and
so on. But adult brains today, I would suspect, are very different from the
brains of people 200,000 years ago, because the brain is very plastic and
culture affects the brain.
“A lot of what we mean by “culture” is at the personal level: it depends
on communication, the interactions between people, which create
traditions and so on, that are then fed back into the system.”
A very thought-provoking idea. But how, exactly, might it be
implemented? To what extent is it possible, even in principle, to develop
a concrete framework that tells us something rather more detailed and
structured than simply saying something like, “Large-scale cultural
traditions shape our brains”?
Chris doesn’t pretend to know the answer to that, of course, but ever
the rigorous scientist, he is quite keen to point out some contemporary
experiments that might be signposts to a deeper understanding, such as
new interpretations of so-called “ego-depletion” and “common goods
games” that might be highlighting instances of how our subjective
“high-level” convictions influence our “low-level” brain processes.
“I think these are hints of the kinds of mechanisms that are involved, and
my current research project is precisely about that: trying to discover
something about these mechanisms.”
Chris Frith has had a highly impactful academic career in one of the
most dynamic scientific fields imaginable, a discipline which has been
transformed almost beyond recognition from what it was a mere fifty
years ago. But through it all, his fundamental approach to addressing
the key questions hasn’t changed one jot.
And why should it? If it ain’t broke, don’t fix it.
Our ideal Bayesian brain would happily tell us as much, if we were only
to ask it.
The Conversation
I. Becoming a Psychologist
From “min and crys” to schizophrenia

HB: I’d like to talk a little bit about your intellectual origins. And I’d like
to ask you specifically about your influences in psychology—I know
that you’ve written a little bit about that—but I want to go further back
than that and begin with your interest in science writ large. My
understanding is that you began reading natural sciences at Cambridge,
and I was curious to know if science was an all-consuming passion of
yours from an early age.
CF: Well, certainly from an early age, I was very interested in nature and
doing birdwatching, but also going up into the attic and mixing things
together and seeing what happened. And I’m told that at the age of 12
when asked what I wanted to be I said, “A research scientist”.
HB: Really? From the age of 12?
CF: But I don’t think I knew what that meant, because certainly no one
in my family was like that. My father was a classicist, so I did lessons in
Greek. And in fact I was the last generation who, in order to read
science at Cambridge, had to do a Latin exam—going back to the good
old days when everything was in Latin. At school, I basically did maths
and physics, which I was good at. But in my gap year, I learned about
this mysterious subject called cybernetics, particularly as applied to
people—which is sort of control theory.
So I arrived at university, and of course at Cambridge all you can do is
natural sciences—there was no such thing as cybernetics, the nearest
thing was psychology. But you couldn’t do psychology in your first year.
So my background is in applied maths, physics and what we used to call
then “min and crys”, which is mineralogy and crystallography. Then I
did psychology as what they call a “half-subject” in the second year, and
then I specialized in it in the third. But that means that my
undergraduate-level psychology is based on one and a third years of
work. And in fact, the “min and crys” turned out to be very useful
because you had to learn about three-dimensional maps and things like
that, which suddenly became relevant 20 years later with brain
imaging.
My plan at the end was to go on and do work with someone called
Donald MacKay, who was one of the very early people doing
information theory as applied to the brain—what we’d now call
“computational neuroscience”—and in fact I had spoken to him and he
had taken me on to do a PhD, but luckily I failed to get a good enough
degree, which is a constant storyline for me.
So I went and did clinical psychology as a way back into doing
research—which is also fascinating because today you would do
research as a way into clinical psychology, but then it was the other way
around. There was only one course at the time, which was in the
Maudsley Hospital in South London, a 13-month course in abnormal
psychology. So I’m now technically an abnormal psychologist. And they
rapidly said, “Yes, yes, very good. We think you should not see patients.
Why don’t you do a PhD?”
And I agreed with that because in order to see patients you have to
believe that you can really help them, and I wasn’t sure that I could. In a
way there’s a conflict between doing research—where you have to
doubt everything—and being a clinician where you have to be
confident. There’s always that conflict I think.
So I did my PhD with Hans Eysenck, which was quite interesting, and
it sort of set off from there.
And one of the very lucky things at that time for me was that—as I
claim—we had the very first computer in a psychology department in
the country, which was in 1965. So I learned to program in machine
code as part of my PhD.
HB: I imagine that, aside from whatever you learned in particular at the
time, doing that played a more general role in exposing you to
computationally-oriented thinking and made you more receptive than
others, perhaps, to the idea of moving even further in that direction
later on.
CF: Yes.
HB: Getting back to what you were saying before, what was your father
the classicist’s reaction to your announcement at the age of 12 that you
were going to be a research scientist? Was he pleased? Bemused?
Intrigued?
CF: Oh, I think he was pleased. And in fact, the chap who told me about
cybernetics was a friend of my father who was another teacher—he
was in biology or something, I believe—and we used to see him quite a
lot. He was a very entertaining chap who would regularly tell us all
about the current developments in science.
My parents were very open-minded because while I said that I
wanted to be a research scientist, my younger brother—now known as
Fred—announced that he was going to be a rock guitarist.
HB: That was okay too?
CF: That was okay too—and he still is.
HB: And when you were at school and interested in the natural sciences
and mathematics before you went off to Cambridge, did you have any
particularly influential teachers who stimulated you?
CF: Well, I guess the most influential was the sixth form maths teacher. I
just loved the idea that you could solve things with these equations.
And you knew for certain when you had done it correctly, as opposed to
writing an English literature essay, say, where different people naturally
had different views and opinions on things. In fact, as it happens the
English teacher and I seemed to disagree on all possible topics, so that,
in a sense, switched me over.
HB: Was he a model of the English professor in Making Up the Mind:
How the Brain Creates Our Mental World?
CF: Quite possibly.
It’s interesting because at university I did particularly well in “min
and crys”, because the lecturers there were really strict and told you
what to do and made sure you did it. But the psychology was
marvellous fun: I was particularly lucky because I had the very good
fortune to be lectured by Richard Gregory and Donald Broadbent—who
were the main people who started off cognitive psychology—Larry
Weiskrantz—who was very big in monkey physiology—and my direct
tutor was someone called John Steiner, who at that time was, well, I
guess what you call a “born-again Skinnerian”, so he was completely
behaviourist. So I got this extraordinary mixture of behaviourism plus
the new thing: cognitive psychology.
HB: I imagine that there were many behaviourists at the time.
CF: Well, in Cambridge, it never really caught on to quite the same
extent as in the States, for example.
And interestingly John Steiner subsequently became a psychoanalyst,
which always fascinates me because I think there’s an interesting
relationship between behaviourism and psychoanalysis, because both
are about how everything is determined by your early experiences in
some sense.
HB: You gave this self-effacing anecdote about how you hadn’t been
successful enough at Cambridge to enable you to move directly into
what later became computational neuroscience—and I’m not sure I’m going
to take that a hundred percent on your word—but I’m guessing that
before you began that clinical program in abnormal psychology at
Maudsley Hospital, you were quite interested in abnormal psychology
or aspects of different psychological conditions?
CF: I don’t quite remember how it started, but I was particularly
interested in schizophrenia and I read lots of books about it. Later,
when Hans Eysenck was producing the second edition of his enormous
handbook of abnormal psychology his students were handed out
different chapters and I got the one on perception, which was mostly
about schizophrenia. What particularly fascinated me then, and still
does, is the problem of hallucinations and delusions.
It’s easy enough to understand in principle if you’ve got affected
regions of your brain why you become blind or deaf or can’t understand
a concept or something, but it’s very difficult to understand why you
start seeing things that aren’t there or believing things that are
obviously not true—although scientists are quite good at that too, as it
happens, but that’s another matter.
And so, I was always interested in questions like, Can we think about
a mechanism? and, How do we relate this to normal functioning? What is
it that could go wrong in normal functioning that can make you start
seeing things that are not there or hearing people talking about you,
when they’re not?
HB: And was this perspective—focusing on brain functioning and
specific mechanisms in the brain—was this something that was
somewhat iconoclastic at the time?
CF: Well, thinking about the brain, for me, came somewhat later. But
certainly the early cognitive stuff was very much about thinking about
the mechanisms or cognitive processes or information processing that
underlies all our different abilities.
So that was iconoclastic in relation to behaviourism, because you’ve
started thinking about what’s inside “the black box”, but at that stage
we were just talking about cognitive processes. And of course the
neuropsychologists, whom I came into contact with a bit later—people
like Elizabeth Warrington and Tim Shallice and John Morton—were
interested in asking, If somebody has a lesion in the brain, what does that
tell us about cognitive processes? They weren’t really interested in what
it tells you about the brain, at that stage.
HB: So tell me how your work on schizophrenia evolved and what the
prevailing views on it were at the time.
CF: Well, after my PhD I did several years as a postdoc—which was
marvellous because I more or less did whatever I liked: in those days
money was not a problem somehow, which I never quite understood,
but that’s the way it was. But then I joined this Medical Research
Council unit run by Tim Crow, where specifically the main question we
had to answer was, What’s the biological basis of schizophrenia?
And this was somewhat iconoclastic because there was this
extraordinary distinction between “functional psychosis” and “organic
psychosis” in old-fashioned psychiatry. According to this view, an
“organic psychosis” is where there was clearly something wrong with
the brain, while a “functional psychosis”—which was how
schizophrenia was regarded—either meant, There must be something
wrong with the brain but we don’t know what it is, or, There’s nothing
wrong with the brain.
So it was almost as if this was not a brain disorder—and you had
people like Ronnie Laing, who was saying that it was caused by society
or something like that—it was a response to an abnormal society—or
you had other people saying it’s caused by peculiar interactions in the
family. And both of these ideas, in a sense, faded away because there
wasn’t much empirical evidence to support them. And one of the first
things we did in this unit with Eve Johnstone was one of the first ever
structural brain imaging of schizophrenic patients using modern
technology, which in those days was what they called CAT-scans.
HB: When was this exactly?
CF: It was published in 1976. And these scans showed that chronic
patients with schizophrenia had enlarged ventricles, which was not due
to any treatment or things like that. And that, I think, had a big impact
on switching the belief towards the view that, This is really a brain
disorder which we need to explore.
But just to give you an idea of the problems, there’s the famous DSM
(Diagnostic and Statistical Manual) for deciding how you diagnose
people. And in DSM-III, which is what we had at that time,
you have counter-indications. It said, To get a diagnosis of schizophrenia
you have to have these hallucinations and delusions and various other
things, but there must be no known brain disorder. So as soon as you find
any brain disorder it ceases to be regarded as schizophrenia.
HB: Quite the question-begging that.
CF: Yes. This has changed.
And the other thing we did—which I’m still very proud of—involved
research on anti-psychotic medication.
They first discovered the anti-psychotic medication in ‘55, I think, by
accident. They all turned out to be dopamine-blocking drugs. So we did
an experiment with one of the standard treatments that was something
called flupenthixol. Flupenthixol is interesting because it has two
isomeric forms: one of these forms blocks dopamine, while the other
one—which has lots of other effects—doesn’t; so you could do a very
tight comparison. And it turned out that indeed, yes: the one that
blocked the dopamine receptors reduced severity of symptoms over the
course of four weeks, whereas the other one was no different from
placebo.
So there are various interesting things in that result. First of all, it
seems to be very specific to this dopamine blocking—which again,
relates it to the brain. And that still seems to be true: I don’t think
they’ve progressed that much on that score.
Secondly, there were positive results for everyone, including the ones
on placebo.
And thirdly, that this effect was specifically on hallucinations and
delusions and not on the so-called negative symptoms: the retardation
and poverty in speech and things like that. So in a sense, you were
finding an effect of dopamine blocking, which is clearly a very low-level
brain intervention, on an extremely high level of subjective
experience. So the key question then became, How do we bridge the
gap?
Questions for Discussion:

1. To what extent do you think Chris’ training in mathematics and computer science gave him a different perspective than some of his other colleagues?

2. Why do you think it took scientists so long to appreciate the importance of looking at biological mechanisms to explain psychological conditions? What does this tell us about the sociology of science? Are there current dogmas that we will look back on 50 years from now as disapprovingly as we currently regard behaviourism?
II. Probing Agency
Predictions, tickling and dopamine

HB: That brings up the important idea of how to bridge low-level and
high-level gaps in all sorts of ways, including, but not limited to
schizophrenia. But I’d like to stay with schizophrenia for a little while
and continue to examine the evolution of our understanding of that
condition.
But first, I’d like to go back and talk about the societal understanding
and appreciation of this disease, because my sense is that has also
changed enormously since the time when you first started doing your
research. Not to imply that everybody’s got a full understanding of the
situation today, but my sense is that in the popular consciousness the
word means something quite different today than it did then. Is that a
fair statement, you think?
CF: Well, I don’t really know what that word meant to people in the 50s
and 60s, if they knew the word at all. I mean, they knew that there were
“mad people” who lived in these big asylums, and you occasionally saw
them on the street talking to themselves and they probably thought
they were a bit dangerous. And I’m not sure that, on the whole, that’s
changed all that much.
I mean, I’m not quite sure how true it is nowadays, but certainly in
the recent past if you were to say the word “schizophrenia”, people
would think, Split-mind and multiple personality—which of course is
completely wrong.
It’s a rather funny term, because the word “schizophrenia” does
actually mean “split-mind”, but it meant that there was a split between
your different faculties, like emotion and reason and motor and
perception. A classic example would be what was sometimes called “a
peculiar affect”: for example, you’d say to the patient, “Your mother is
very ill”, and he’d laugh. It would be that sort of splitting, not the
multiple personality that people commonly imagine.
And then, of course what happened was that the big asylums were
closed down so that these people were no longer “over there”
somewhere—they were in hostels or on the streets or whatever. The
other thing that has happened in my lifetime, of course, is that in the
olden days if you were to see someone walking on the street talking to
themselves, they were thought to be schizophrenic. Now they’re most
likely to be on the phone.
HB: And they may or may not be schizophrenic in addition.
CF: That’s right.
HB: Let’s get back to this question of our scientific understanding of
things. Perhaps you could just give me a sense of how our
understanding of schizophrenia has evolved in the past 30 or 40 years
and why?
CF: Certainly—but of course this will naturally be my perspective on
things.
As I said earlier, I was interested in this problem of hallucinations
and delusions, and there’s one particular delusion that I became very
interested in, which is the delusion of control, which occurs in about
16% of cases. This is where the patient says, “I’m not in control of my
actions. Some alien force is causing me to do things”. It could be even
simple things like lifting up the glass and drinking.
It’s very difficult to find examples, interestingly—you’d think people
would collect these symptoms, but they all come from more or less
three papers, one of which is mine.
So you have a patient saying something like, “The force is causing me
to move”—this is pre-Star Wars, incidentally—and there’s also
something called “thought insertions”, which is even more peculiar and
yet to be solved, where the patient says, “There are thoughts coming into
my mind, which are not mine”. It’s very odd, because how can a thought
be in your mind and not be yours? After all, is there a little label that
comes with each thought saying “mine”?
But when you think about action, that’s much easier, because, for
example, if I hear a voice, it could be me talking, or it could be you
talking to me, and in a sense we need a label so that I know what I’m
hearing is my voice and not yours.
And this takes us right back to my big hero, Hermann von Helmholtz,
who pointed this out in relation to eye movements. When I move my
eyes, obviously things jump about on my retina so there’s movement on
the retina, but it’s due to me. And I have to be able to distinguish
between movement on the retina due to me and movement on the
retina due to something actually moving in the world.
And he basically said—I can’t remember his exact terminology
—“Because there’s a message involved—you’re sending a message to
your eye muscles to move the eye—you can use that signal as a way of
determining what the corresponding movement is due to, whether it’s
you or the world”. And he has this simple experiment: if you poke your
eyeball with your finger, carefully, to make it move, the world appears to
jump about, but you can use that signal to your eye muscles to
determine that the phenomenon in question is internal to you and not
in the world.
So I took that up and thought, Maybe what goes wrong in
schizophrenia is that this signal—this normal signal that tells you that
it’s your movement or your action—doesn’t arrive for some reason.
And we did various experiments, but the one I particularly like was
done by Sarah-Jayne Blakemore when she was doing a PhD with me.
She recognized that this obviously relates to tickling, as you can
immediately see, because we know that you can’t tickle yourself—Larry
Weiskrantz published something about this in Nature in the early
1970s. The question is why, and the answer is gained using this
Helmholtzian argument that because you can predict exactly what
you’re going to feel when you tickle yourself, it’s suppressed.
So Sarah-Jayne and I took this into the scanner and we had various
clever bits of equipment so you could tickle yourself directly or
indirectly. And she showed that if you introduce a delay of a hundred
milliseconds or so—if you’re holding a rod and tickling yourself with it,
you can introduce a delay—then it feels more ticklish.
But the nice thing was that she then tried this out on people with
schizophrenia. And indeed for the ones with delusions of control, if you
ask them to rate how ticklish it feels, there was no difference between
them tickling themselves and Sarah-Jayne tickling them. And we did
more sophisticated things after that. So that seemed to fit: we were
beginning to come up with a more mechanistic story of what might be
going wrong.
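The comparator logic behind these tickling experiments can be put in schematic form. The sketch below is an editorial illustration with entirely hypothetical numbers, not a model from Chris’ papers: what matters is only the qualitative pattern, that felt intensity is what remains after the forward model’s prediction is subtracted from the incoming sensation.

```python
# Illustrative sketch (hypothetical numbers): a Helmholtz-style comparator.
# Sensation is attenuated to the extent that the prediction generated from
# the motor command matches what actually arrives.

def felt_intensity(actual, predicted):
    """Perceived intensity = actual sensation minus whatever was predicted."""
    return max(0.0, actual - predicted)

# Self-produced touch: the motor command lets the brain predict the
# sensation, so the comparator cancels most of it (barely ticklish).
self_tickle = felt_intensity(actual=1.0, predicted=0.9)

# Add a delay, as in Blakemore's rod experiment: the prediction no longer
# lines up with the incoming signal, so less is cancelled (more ticklish).
delayed = felt_intensity(actual=1.0, predicted=0.4)

# External touch: no motor command, no prediction, nothing cancelled.
external = felt_intensity(actual=1.0, predicted=0.0)
```

On this cartoon, a patient with delusions of control behaves as if the prediction never arrives: self-produced and external touch then feel the same, which is just what the tickling ratings showed.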
HB: So as I understand it there’s a natural focus on this question of
agency. When you and I are lifting up glasses of water or deciding to
look out the window or whatever, we have no doubt whatsoever that
it’s we, broadly defined—and, hopefully we’ll get to what that means in
a moment—who are actually doing that, but that the idea is that people
at least with some particular form of schizophrenia have a difficult time
with this whole concept.
CF: Yes. And there have been similar studies demonstrating this. Judith
Ford did some very nice work on hallucinations where she showed that,
while we normally suppress the sound of our own voice—which you
can measure with EEG and so forth—this was not happening to the
same extent in people who are prone to delusion. So you’re getting a
similar story.
And the obvious question now is, So what about dopamine?
In parallel with this, there were very exciting developments in the
dopamine story—mostly, I think, due to Wolfram Schultz at Cambridge,
who is looking at monkeys. He showed that there are neurons in the
middle of the brain—in the ventral tegmental area—which release
dopamine. He was measuring activity in these neurons and he could
show that they were actually predicting reward. In other words, if the
monkey gets an unexpected reward, these neurons fire.
You can then do some conditioning, so that the monkey learns that a
certain signal—a light flash—tells it there’s a reward coming.
Now this light flash, of course, is a signal of unexpected reward, so
what happens is that the neurons would fire immediately after the
monkey sees the flash rather than receiving the reward itself, since the
reward is now entirely predicted. On the other hand if the reward
doesn’t come as expected, then the firing rate of those neurons actually
decreases.
So you have a very nice mechanism here which is telling you whether
you’re being rewarded or not. This led to the development of early
forms of computational neuroscience, where you can have a very nice
story of how learning occurs on the basis of whether your reward goes
up or down, and you can then learn to attach rewards to signals and so
on and so on.
It used to be viewed as simply a reward-mechanism—dopamine was
released when you were rewarded—but it’s now much more
sophisticated. It’s actually a signal of reward prediction error, as they
call it, which is used in learning.
And to this you add a so-called Bayesian perspective: you have prior
expectations and you have evidence, and then you update your model of
the world on the basis of it.
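The prediction-error learning Chris describes can be sketched in a few lines of Python. This is an illustrative Rescorla-Wagner style update, not code from any actual study; the learning rate and trial count are arbitrary assumptions.

```python
# Minimal sketch of reward-prediction-error learning.
# The value V of a cue (e.g. a light flash) is updated by the prediction
# error delta = reward - V, the signal dopamine neurons are thought to carry.

def update_value(v, reward, learning_rate=0.3):
    """Return the updated cue value and the prediction error."""
    delta = reward - v          # positive if reward is better than expected
    return v + learning_rate * delta, delta

v = 0.0
for trial in range(20):         # the cue is always followed by reward = 1
    v, delta = update_value(v, reward=1.0)

# After learning, the reward is fully predicted and the error at reward
# time shrinks towards zero, mirroring the firing-rate observations.
print(round(v, 2), round(delta, 2))   # → 1.0 0.0
```

Once the cue fully predicts the reward, omitting the reward would produce a negative delta, matching the decreased firing rate described above.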
HB: I’d like to get into those Bayesian aspects you were just mentioning
in more detail shortly, but first I’m going to back up and ask a few more
general sorts of questions.
Correct me if I’m wrong, but my sense is that through your work in
schizophrenia—or at least your work in schizophrenia combined with
other work—you’ve been led to appreciate the important way that the
brain acts and conditions information in such a way that we can learn
from it, namely this idea of prediction and reinforcement.
So at some level, it seems to me that you have used schizophrenia as
a window to better understand what’s occurring more generally, insofar
as you’re saying, “Oh, in these circumstances it doesn’t seem to be
working quite as well as it should. Let’s try to understand what’s
happening there.” Is that a fair way of looking at it?
CF: Yes. I think certainly one of my basic beliefs would be that we
should study abnormal systems in order to learn how things work
in the normal case. In a sense the abnormal system is somewhat
of a “simplification”, and I’ve been very influenced by
neuropsychologists studying patients with known lesions.
For example, alongside what I’ve just been talking about was the
discovery of things like “blindsight” and the famous patient, DF, that
David Milner and Melvyn Goodale studied. This is a patient who, due to
carbon-monoxide poisoning, has damaged her temporal lobe. She is
technically assessed, I believe, as effectively blind: she can see, but she
can’t recognize objects on the basis of their shape.
The fascinating thing that Milner and Goodale recognized is that she
wouldn’t be able to tell you that this thing in front of me is a mug, and
she wouldn’t be able to tell you where the handle is, but she can reach it
correctly. That gives you the idea that there are these two roughly
independent streams, one of which is for recognizing what the things
are and one of which is for reaching and grasping; and in the normal
case they’re all tied up together and it’s very difficult to separate them
out, but in the abnormal case, you can start to see these fractionations.
Questions for Discussion:

1. What are some of the key assumptions behind the idea that a
close examination of abnormal cases can shed light on generic
brain processing mechanisms?

2. To what extent does Chris' invocation of Hermann von
Helmholtz argue for the importance of scientists being aware
of the history of science? Readers may be interested to learn
that Helmholtz is not just a “big hero” of Chris Frith’s—his
name spontaneously arises as a reference point in Chapter 4 of
The Physics of Banjos with Caltech Physics Nobel Laureate
David Politzer (who also calls Helmholtz “one of my heroes”) as
well as in Chapter 2 of Knowing One’s Place: Space and
the Brain with Duke University neuroscientist Jennifer Groh.
III. The Active Brain
The principal actor in the theatre of experience

HB: I’d like to talk more now about our general picture of how the brain
is operating.
Let me start off with what I’ll call the naive view—which we
understand now is not the best picture of what’s going on—but for the
longest time, I think, people did look at things this way. And the naive
view seems to be that we have these receptors that correspond to our
senses, and sense data impinges upon us through these various
receptors, which is how we get information about the world around us.
There’s this rather awkward little step which is elided in all of that,
which is how this is actually processed in our mind’s eye, but if we just
forget about that for a moment, the idea is that we’re going around the
world with our eyes open, say, and so photons hit our retina and there
is corresponding electrical stimulation and so forth in our brain. In
other words the brain is somehow this big receiver of information.
CF: Yes.
HB: And my sense is that things are considerably more complicated
than that.
CF: That’s right.
HB: Hence my calling it “the naive view”. So perhaps I could get you to
just give us a clear sense of what we now believe and why, very much in
keeping with what you were just discussing.
CF: Yes. So I would characterize the earlier version as a “feed-forward”
version. The evidence comes from the senses. Then it goes to a higher-level
area that determines the shape, then to a higher-level area
that sorts out what object in particular it is, and so on.
Take reading. You can say that there are marks on the page, which
can then be interpreted as letters, then converted to words, then
recognized as sentences; and then in the old-fashioned box diagrams
there is a box that denotes “the place that sentences go when
they’re understood”.
Again, Helmholtz, I think, was the first to see that this is clearly
wrong.
It’s partly because he realized that it’s simply too long, in
physiological terms: even though nerve conduction is rather slow, the
time it takes to recognize what an object is, is ten times slower. And he
realized that there was something he called “unconscious inferences”
that were going on—we get this experience that this is the object, but
we’re not aware of how much work the brain has done to arrive at this
point. So the interesting question is, What is this work, exactly?
And this is where the idea of “predictive coding” or “the Bayesian
approach” comes in. And there are two aspects to this.
The first is that you have to have a prior expectation.
From my past experience, for example, I have a very good idea of
what’s likely to be on this table beside me. And what I use the evidence
from my senses to do is to evaluate to what extent that prior
expectation was right or not. If it’s right, that’s fine. If it’s wrong, I have
to slightly change what I think is out there—which leads to the idea that
most of the time, since our prior expectations are right, we’re not
actually taking any account of what’s out there. I think I say somewhere
in my book, Making Up The Mind—which I probably stole from
somewhere else—that basically our perception is “a hallucination
mildly constrained by reality”.
A nice example of this, if you’ll allow me to jump about a bit, is the
phantom limb. How on earth can someone have a phantom limb when
there’s no limb actually there? You can say, “Well, what motor control
theory tells us is that when I perform an action, I have a prediction of
where my limb is going to be and what it’s going to feel like”. Most of the
time, my experience of the world is not where my limb actually is, it’s my
prediction about where it will be and what it will feel like. So the person
with the phantom limb still has all these predictions and things intact,
and that’s what I think results in the phantom limb phenomenon.
But the second point is that not only do we have expectations prior to
the evidence from the senses, but we spend a lot of time doing things to
the world. And this is where I’m very much influenced by my friend,
Daniel Wolpert, who proudly says when he goes to meetings, “I am an
engineer”—he’s now in the engineering department in Cambridge.
And he’s also, as I am, a motor chauvinist because prior to us, there
was a sense that everything was perception. If we go back to Hubel and
Wiesel, they would say, “We know a great deal about visual perception”;
and if one of them would draw a picture of the brain, most of it would
be the visual system.
But we would say in contrast, “No, action is what the brain is all
about. If you don’t have action, you’re going to die.”
Daniel has this nice anecdote he likes to tell—I’m not sure if it’s
actually entirely true, but it doesn’t really matter, it’s illustrative.
There’s some sort of creature like a sea squirt that, in its larval form,
swims about and finds food, but when it matures into an adult, it
immediately attaches itself to a rock and never moves again. And the
first thing it does is it dissolves its brain, because it doesn’t need it
anymore—just like someone who’s being given tenure, incidentally.
It’s worth mentioning that Helmholtz also pointed this out when he
described how we perceive how far away things are. Well, you can use
something called parallax, which basically involves moving from side to
side, and you find that things further away move less than nearer things.
So you’re using your action, you’re making predictions about what will
happen when you act, to find out more about the world.
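Helmholtz's parallax observation can be made concrete with a little geometry; the sideways head shift and object distances below are arbitrary illustrative values.

```python
import math

# Motion parallax sketch: when the observer steps sideways by `shift_m`
# metres, an object at distance d appears to shift through an angle of
# roughly atan(shift_m / d), so nearer objects shift more than farther ones.

def parallax_deg(distance_m, shift_m=0.1):
    return math.degrees(math.atan(shift_m / distance_m))

near = parallax_deg(1.0)    # an object 1 m away
far = parallax_deg(10.0)    # an object 10 m away
print(near > far)           # the near object sweeps a larger angle
```

The difference between these two angular shifts is exactly the cue the brain can use, given a prediction of what its own movement should do to the image.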
HB: Yes. And getting back to this overall approach that I was
mentioning earlier of using abnormal circumstances or systems as a
window into normal brain function, in Making Up The Mind you
specifically highlight the role that visual illusions can play in helping us
get a deeper understanding of how our normal act of perception works
by examining precisely how it is somehow being interfered with
through these illusions.
CF: Yes, that’s right. As we were saying in this new formulation
everything depends on prior expectations; and there are, of course,
circumstances where your prior expectations are completely wrong,
and I think that most illusions are indicating where this point is. So
many of them depend on seeing something as being 3D when it’s really
2D, but a particularly nice one is something called the “Hollow-Mask
illusion” that Richard Gregory used to great effect.
This is where you have a hollow mask, but if you look at it from
behind when the nose is actually pointing away from you, you cannot
help but see it sticking out towards you. And if you mount it on a rod
and rotate it, you see that when it’s facing you it rotates normally, but
when it’s facing away from you and you’re looking at it from behind it
seems to start rotating in the other direction once the illusion kicks in,
because you’re now misperceiving the nose as sticking out.
And I would say that this is due to our having an incredibly strong prior
expectation that faces stick out, which completely overrides the
evidence being presented to our senses. Now what is quite
interesting is that in schizophrenia, this illusion is much less strong. So
it seems that there’s something peculiar about the way people with
schizophrenia integrate their prior expectations with their evidence.
Questions for Discussion:

1. Why do you think that many of those who still cling to what
Howard calls the “naive view” of the brain work on the
neuroscience of vision? How might the field of vision be
regarded as “a victim of its own success”? Readers interested in
this issue might want to compare Chapter 7 of Minds and
Machines with Duke neuroscientist Miguel Nicolelis with
Chapter 3 of Vision and Perception with Stanford
University vision scientist Kalanit Grill-Spector.

2. In what ways might the particularly strong impact of the
Hollow-Mask illusion be linked to the evolutionary importance
of facial recognition? If so, how might we deliberately create
other sensory illusions that are equally impactful?

3. To what extent does Chris’ use of the word “override” imply a
sense of “competition” between our prior expectations and the
information we are receiving from our senses?

4. Why do you think that the phenomenon of “phantom limb
pain” has not been more often highlighted by neuroscientists
and psychologists as a key example of how the brain acts?
Readers interested in more discussions of phantom limb pain
are referred to Chapter 9 of Knowing One’s Place: Space
and the Brain with Jennifer Groh and Chapter 6 of Minds
and Machines with Miguel Nicolelis.
IV. Ideal Bayesian Operators
How our brains trump our minds

HB: You mentioned the word “Bayesian” a couple of times already, so I’d
like to ask you to be a little bit more specific there, because as you were
speaking just now about how people with schizophrenia might be
integrating their experiences in different ways from others, it struck me
that this might be rephrased, at least in some hand-wavy way, as saying
that their Bayesian inferences are not quite what they should be.
CF: That’s exactly right, yes.
Thomas Bayes was a fascinating chap who lived in the 18th century.
He was a very good mathematician, although of course he wasn’t
allowed to go to university in England as he was a nonconformist
minister, so he had to go to Scotland. I’m fascinated by him because he
became a Fellow of the Royal Society, which is a very difficult thing to
do, despite not having published anything of real note. The famous
paper on probability theory and statistics, which everybody now
quotes, was actually published after his death. So it’s quite interesting
to know how he became a Fellow of the Royal Society, but one
speculation is that at that time many of the people in the Royal Society
were actually aristocrats rather than scientists, and they were
particularly interested in gambling.
HB: So it was useful.
CF: Yes, that’s right.
But the way people tend to interpret his theorem these days is, How
much should you update your prior expectation, given this new evidence?
It’s a mathematical formula that tells you precisely how much you
should change it. And that’s the basis for all these ideas of predictive
coding and so on.
Then it becomes more complicated because you can say, “Well, in
certain environments maybe I should put more weight on the evidence
rather than on my prior expectations—or the other way around.” So it
becomes quite complicated, but it’s a very good framework for
explaining a lot about perception, about action, and about how the
brain works in general.
And while I do very little work on schizophrenia these days myself,
my understanding is that there is one strand of those who are working
on it who believe that a central factor is precisely this balance between
expectations and evidence, believing that dopamine plays a more or
less direct role in this.
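The updating rule Chris alludes to is just Bayes' theorem; here is a minimal two-hypothesis sketch, with made-up numbers for the prior and the likelihoods.

```python
# Bayes' theorem for a single binary hypothesis H given evidence E:
#   P(H|E) = P(E|H) * P(H) / (P(E|H) * P(H) + P(E|~H) * P(~H))

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """How much should you update your prior expectation, given new evidence?"""
    joint_true = likelihood_if_true * prior
    joint_false = likelihood_if_false * (1 - prior)
    return joint_true / (joint_true + joint_false)

# A modest prior plus evidence three times more likely under H than not
# yields a substantially revised belief (illustrative numbers only).
posterior = bayes_update(prior=0.2, likelihood_if_true=0.9, likelihood_if_false=0.3)
print(round(posterior, 2))   # → 0.43
```

Weighting evidence more or less heavily, as the conversation goes on to discuss, amounts to adjusting how sharply these likelihoods differ relative to the prior.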
HB: That’s interesting. I’m curious to know what the mechanism might
be for that. And I’m guessing you are too.
CF: Yes, so am I.
HB: One thing that I thought I would just interject: my sense of an
essential aspect of this Bayesian framework is that there’s a strong
correlation with how unusual or usual the thing is that you’re
predicting, or at least the characteristic that’s associated with the thing
that you’re going to be predicting.
CF: That’s right. There are some very interesting studies on people in
airports who are scanning for guns, where you can easily show that it is
so unlikely that a gun is going to be in the luggage, that they’re not
going to see it.
HB: Another area where you mentioned explicitly that Bayesian
understanding has had great impact is in the health sciences, and in
terms of evaluating risks. You give an example of mammography and
whether or not it’s worth our while to be doing a mammogram for the
entire population.
CF: Yes, this is partly to do with base rates, so that even if you have a
reasonably sensitive test, if the odds of someone in the general
population having the condition are low to start with, then the test is going to
produce large numbers of false positives compared to detecting those
few who actually have the condition, and it may simply be
counterproductive.
HB: Right. An interesting aspect of this to me is that as you pointed out
—and as many others have also pointed out—humans are generally not
very good at calculating Bayesian probabilities. The mammogram
example is a good case in point. If people tell us that a test is 80%
effective at detecting something harmful and important with a
relatively low false-positive rate of, say, 10%, our intuitive reaction is,
“Clearly everybody should have this done.”
And then you do the calculation and find out that, no, the number of
false positives is so large that it’s actually disadvantageous to do this—
incorporating the results of this new test at face value makes you think
that a far greater proportion of the population actually has this condition
than you first—and quite rightly—thought.
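The base-rate point can be worked through with the figures Howard quotes (80% detection, 10% false positives); the 1% prevalence used below is an assumed, illustrative base rate, not a number from the conversation.

```python
# Base-rate sketch: sensitivity 80%, false-positive rate 10%, and an
# assumed prevalence of 1% (illustrative only).

prevalence = 0.01
sensitivity = 0.80
false_positive_rate = 0.10

true_positives = sensitivity * prevalence
false_positives = false_positive_rate * (1 - prevalence)

# Probability of actually having the condition, given a positive test:
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 3))   # → 0.075
```

With these numbers, over nine out of ten positive results are false alarms, which is exactly the intuition-defying outcome being described.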
But here’s the point I’m trying to make. Somewhere in your book,
Making Up The Mind, you describe how our brain is this so-called “ideal
Bayesian operator”.
CF: Yes, that’s right.
HB: And it seems the argument for why our brain is this “ideal Bayesian
operator” is because in order to learn effectively and quickly, it needs to
have this feedback loop done in the appropriate way—in other words, if
its sense of calculating probabilities was disastrous, we wouldn’t be
able to learn as efficiently as we do. So there’s a clear evolutionary
argument for why our brains should be wonderful at performing
Bayesian statistics about basically everything, all the time.
But at the same time, if you ask me a question that involves basic
Bayesian statistics, like the one about the mammogram we just spoke
about, I’ll likely get it wrong: I’m lousy at it. This seems to be pointing to
some sense of a weird dualism between what “I” think I understand
about the world and what my brain actually does.
CF: Yes, that’s absolutely right. I’m now based in a philosophy
department, you see, so I have had to learn some terms; and the terms I
have learned for this dualistic aspect are “the personal” and “the
subpersonal”—“the subpersonal” meaning what the brain does,
and “the personal” what I do, as it were.
And I think this is the key point. The brain is an ideal Bayesian
operator at the subpersonal level—and there are some beautiful
experiments, which I will probably insist on describing now—
demonstrating that.
HB: Go right ahead. I insist as well.
CF: Well, one experiment is about combining the senses. I should say
before I continue that another mistake people used to make is making a
big deal about distinguishing between all these different senses, but as
far as the brain is concerned it doesn’t care about any of that: it just
wants to know what’s out there and use all the information possible it
can get its hands on, as it were.
At any rate, in this experiment you see a bar, and you can also feel it;
and you have to evaluate its width.
And then of course, because there’s all this fancy equipment, you can
make it feel different than what it looks.
So you can measure how good you are at telling how wide it is from
vision and how good you are from touch, and then you can have them
competing. In the usual situation, vision wins and touch is ignored,
because vision is a much more precise sense.
But you can make the vision less precise by adding noise; and what
you then see—which is what the Bayesian operator predicts—is that
you weight the two senses on the basis of the precision of the two
signals. So if the vision is very bad, then you’re now entirely dependent
on touch and vice-versa. And there’s a sweet spot in the middle where
they’re both equally informative; and then you do better if you have
touch and vision then you do with either one on their own.
So that demonstrates the exquisite Bayesian approach that the brain
has at the subpersonal level.
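The precision-weighting Chris describes is the standard inverse-variance rule for optimal cue combination; the estimates and variances below are illustrative values, not data from the bar-width experiment.

```python
# Optimal (Bayesian) cue combination: weight each sense by the inverse
# of its variance. All numbers are illustrative.

def combine(est_vision, var_vision, est_touch, var_touch):
    w_vision = (1 / var_vision) / (1 / var_vision + 1 / var_touch)
    combined = w_vision * est_vision + (1 - w_vision) * est_touch
    # The combined estimate is more precise than either cue alone:
    var_combined = 1 / (1 / var_vision + 1 / var_touch)
    return combined, var_combined

# Clear vision (low variance) dominates noisy touch...
est, var = combine(est_vision=5.0, var_vision=0.1, est_touch=6.0, var_touch=1.0)
print(round(est, 2))   # → 5.09, close to the visual estimate
print(var < 0.1)       # → True: below both single-cue variances
```

Adding noise to vision raises `var_vision`, and the same formula then shifts the weight towards touch, which is the behavioural result described above.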
But you’re absolutely right. At the personal level, when we’re asked
to justify things or to do probabilities, we can often get it wrong. But
this is partly because of the way the problems are presented. I think it’s
precisely analogous to visual illusions: you’re presenting problems in
such a way that they don’t fit the way we’ve learned to expect things.
There are various people now who say, “Actually, we’re completely
rational. It’s just that we have different prior expectations than the
problems are set up to explore”.
The only example I can think of at the moment is the framing effect,
which is where, if you start with 1,000 people and you say, “If we
introduce this new treatment, it will save 400 lives”, you will likely get a
different response than if you say, “If we introduce this new treatment,
there will still be 600 people who will die.”
So the probabilities are identical, but people’s decisions are modified
by this frame; and you could say, “That’s not rational, is it?”
But I would say that what we’ve brought up to do is to recognize that
when people present a problem, the way they present it is very
important. This is what pragmatics is all about: it is the glass half full or
half empty. If you say, “My glass is half empty”, that means please give me
some more. If you say, “My glass is half full”, it probably means I’ve got
enough.
HB: So we have to look closer at the hermeneutics of these things.
CF: Exactly. In fact, I’ve written a paper with Karl Friston about
hermeneutics (“Active inference, communication and hermeneutics”).
HB: OK, but do you actually believe that? I mean you were very cagey
just now, saying something like, “Some people believe that it’s all a
question of how these things are being phrased”. Are you, personally, of
that view as well?
CF: I’m inclined to agree with that. The trouble is that I don’t know
enough about it, but there’s this chap, Chris Summerfield in Oxford,
who’s got a Bayesian account of why these answers are not actually
irrational if you consider the problem from a wider perspective.
Certainly the base rate thing is a problem—we’re just not used to
thinking about base rates, it’s not something we know about.
I mean, it’s like, Why do people buy lottery tickets? It’s completely
irrational. Perhaps the way to resolve this is to ensure that every time
you see a person on television winning the lottery, you should also see
all the other people who didn’t win the lottery, and then you might get a
proper experience of the relevant statistics.
HB: OK, but it seems to me that we’re sliding towards questions like,
Why do people behave irrationally?
CF: Yes, sorry about that.
HB: Well, there’s no reason to apologize: after all that’s deeply
mysterious in its own right. And we could discuss all other questions
like, Why do people do what they do all day?, which I also don’t
understand as a general rule.
CF: Right.
Questions for Discussion:

1. Do you agree that our apparent human weakness at
correctly evaluating probabilities mostly boils down to a
matter of how the question is phrased? What might be some
counterarguments to that view?

2. What does the claim that humans are poor at consciously
evaluating probabilities imply about the role of probabilistic
thinking in evolution?
V. In Search of a Mechanism
How to connect the subpersonal with the personal

HB: For the moment, however, I’d like to concentrate on the apparent
distinction between our brains and the processing that they’re doing,
and our conscious minds—a distinction that strikes me as a major
theme throughout Making Up The Mind.
I’m certainly willing to accept the fact that there are irrational people
—after all, the evidence seems to be overwhelming. Whether or not
that means they have irrational brains, as well as irrational minds, I
don’t know, but the point is that this distinction certainly seems to exist
for even reasonably rational people.
So if we focus on that, we can now forget about focusing on Bayesian
probability per se—that was just an example of a way of looking at the
distinction.
When we talk about whether or not we’re convinced of something—
our beliefs, our desires, all the rest of that—we’re talking about
“ourselves”. We all have a fairly clear understanding of what that means
—even if we can’t specify it logically or physiologically—and that’s very
different, of course, than looking at brain activity in an fMRI machine.
Now, what I detect from you is some ambiguity—I’m not accusing
you of anything other than what every reasonable human being has
grappled with since the dawn of history, so this is not particularly
directed at you—but there is this obvious ambiguity that we’re all
battling with, it seems to me.
If we are materialists, we are naturally inclined to say something like,
“We don’t believe in a soul or ‘soul-stuff’; we believe that at some level
there’s nothing other than physical stuff out there, and therefore the
brain must cause the mind—and we also have all sorts of other evidence
for those conclusions, ranging from brain lesions to how people
behave under narcotics and so forth.”
So there are all sorts of reasons to believe that the brain and the mind
are causally connected, but the question is, Well, how does it work
exactly? Or even approximately, for that matter.
At one point in Making Up The Mind, you say admirably humble
words to the effect of, “This leads us to the question of consciousness, and
I’m not going to look so much at consciousness because that’s too difficult.
I’m going to look at what it’s for and go through an evolutionary pathway
and so on.”
But I want to put you back on the hook for a moment now and simply
ask you, “OK, look, we’ve got these two things: the brain and the mind.
How are they linked up?”
CF: Well, that’s what I’ve been thinking about, mostly, since writing that
book. And it’s very much to do with what I was saying before about the
personal and subpersonal and how they relate, but also I’ve become
more and more interested in culture. In some ways, the brain is not
enough on its own—it’s almost like a tool.
And one thing I think about is that, genetically speaking, our brains at
birth are no different from the brains of people from something like
200,000 years ago, when people were making these crude stone tools
and so on. But adult brains today I would suspect, are very different
from the brains of people 200,000 years ago, because the brain is very
plastic and culture affects the brain.
I was involved in the famous Taxi Drivers Study, showing that the
hippocampus, or portions of the hippocampus, of a London taxi driver
increases in volume as a result of learning The Knowledge—the
rigorous mental map of London that is required for all successful
drivers. I’ve been involved in a study which shows that Italian brains
are different from English brains because the spelling of Italian and
English is so different.
So there are innumerable things in modern culture that will make
our brains very different from what they were 200,000 years ago. A lot
of what we mean by “culture” is at the personal level: it depends on
communication, the interactions between people, which create
traditions and so on, that are then fed back into the system.
But to talk to people and describe our experiences and how the mind
works is quite difficult.
So this is becoming a bit speculative—there’s some nice work,
slightly controversial, from Dijksterhuis and colleagues in the
Netherlands, where they show that when making a complicated decision
which involves taking into account 12 different variables, you can
actually do better if you don’t think about it.
And there’s another study that says that if you have to recognize a
face and you’re asked to describe it, that actually makes you worse at
recognizing it. So the idea is that our subpersonal brain is extremely
good at handling a very complicated multi-dimensional structure, but
as soon as things get up to the personal level—which from my point of
view means we have to talk to other people about it—we have to
simplify it, we have to reduce the number of dimensions.
So we can do tricks, like making them richer. But in essence, we have
to reduce the number of dimensions. And if you choose the wrong
dimensions, you’re going to make the wrong decision. And I think that
might be the sort of thing that’s happening in these problem-solving
cases we spoke about earlier. But it’s this talking to each other—
including things like how the mind works—which creates culture and
feeds back into us.
HB: Let me just interject for a moment, because while I have no
problem whatsoever in being speculative, I don’t want to lose the
thread. So let me try to be a little bit more concrete.
I’m immersed in a culture. And as a result of this immersion, I need to
be communicating with you and others, and that communication
necessitates that I bounce ideas off you, predict how you might respond
and so forth, all of which requires me to use my wonderful, ideal
Bayesian operator brain.
CF: Yes.
HB: And by interacting with you and utilizing this prediction-
confirmation process—which presumably also includes some higher
level aspects of empathy and so forth—my brain is also evolving and it’s
changing its structure, resulting in things like, as you said earlier, how
Italians have a slightly different brain than anglophones, say.
So all of that, at least on a hand-wavy level, I’m okay with. But I don’t
know if that brings me any closer to this sense of how I’m linking my
“me”—my personal level—with “my brain”—the subpersonal level. Do
you see my problem?
CF: Yes, yes.
Well, the way I look at it is that there are signals coming up from the
subpersonal level, into the personal level, which we can actually talk to
people about. And likewise, people can tell us things that somehow
influence how the subpersonal level works.
So an example of a signal would be, I have a sense of effort: I feel how
hard I’m having to work to do a particular task. And in many tasks, the
longer I do it, the harder it seems to be. And I’m aware of this sense of
effort. And I can tell someone, “This feels very effortful”. And indeed, I
will say, eventually, something like, “I’m too tired. I can’t do this
anymore”.
Now there was a very nice experiment—not by me; it first came from Carol
Dweck’s group. They were investigating this phenomenon called “ego-depletion”
—which is very famous and well-established, but now under
attack—that if you do a difficult mental task or if you have to inhibit
yourself from eating nice food, it exhausts your mental resources and
you will have difficulty with another executive task immediately
afterwards.
Now what these people did is they added an additional group. So they
had similar experiments, but now there were two groups. One group
was told that when you do a difficult mental task, it’s like a muscle: you will
feel tired and it will be difficult to do something further. The other
group was told that when you do a difficult mental task, you will feel energized
and ready for more work. And lo and behold, the people who were told
that they would feel tired, made more Stroop errors after the executive
task and the people who were told they would be energized, made
fewer Stroop errors after the task.
So that seems to me to be a direct example of how being told how
your mind works influences your behaviour at this low level.
HB: So that’s a sign of concrete interaction between these different
levels. But then, what I’d really like to know is what’s...
CF: The mechanism?
HB: Yes.
CF: Well, now this is becoming extremely hand-wavy.
There’s another experiment that I think gets us a bit closer to the
mechanism. You know that there are all these “common goods
games”—you interact with someone and you have to learn whether
they’re trustworthy or not. If you invest money and they give you some
money back then they’re more trustworthy, and if they didn’t give you
the money back they are less trustworthy, and you can have a
completely standard learning algorithm using prediction errors: if they
give you more money back, their trust goes up, and if they give you less
money back, the trust goes down.
You can see these prediction errors happening in the brain where the
dopamine is. That’s all fine. The interesting thing is that, if you tell
them, “This is a very trustworthy person”, they stop noticing the
prediction errors both behaviourally and in terms of brain function. So
they’re less influenced by the actual behaviour of the person, you see
less prediction-error activity in the middle of the brain and in the
Bayesian terminology if you say the mechanism here is that you’ve been
given the prior information that this is a trustworthy person, so you
down-weight how much attention you pay to their actual behaviour.
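The down-weighting mechanism Chris sketches can be illustrated with a toy learning rule in which a strong verbal prior shrinks the learning rate applied to prediction errors; the functional form and all numbers here are assumptions for illustration, not the model from the actual study.

```python
# Toy model: trust learning driven by prediction errors, where a strong
# verbal prior ("this is a very trustworthy person") shrinks the learning
# rate, so actual behaviour is down-weighted. All values are illustrative.

def learn_trust(outcomes, prior_trust, prior_strength):
    """Update trust from observed returns; a stronger prior means less updating."""
    trust = prior_trust
    learning_rate = 0.3 * (1 - prior_strength)   # strong prior -> small updates
    for outcome in outcomes:
        delta = outcome - trust                  # prediction error
        trust += learning_rate * delta
    return trust

betrayals = [0.0] * 10   # the partner never returns any money

no_prior = learn_trust(betrayals, prior_trust=0.8, prior_strength=0.0)
strong_prior = learn_trust(betrayals, prior_trust=0.8, prior_strength=0.9)
print(round(no_prior, 2), round(strong_prior, 2))   # → 0.02 0.59
```

With no instruction, trust collapses towards the evidence; with the strong verbal prior, the same ten betrayals barely move it, mirroring the reduced prediction-error activity described above.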
HB: This is almost analogous to the tickling thing.
CF: Yes—that’s a good point, I hadn’t thought of that. With respect to
the question of mental effort and related issues, it’s a matter of
interpretation, but I think you can explain it all in terms of these very
high-level priors (prior information). So the issue of ego-depletion is fascinating
because now almost everybody in the world who’s read the books and
knows about it has this high-level prior, but you can change this.
And there are other interesting, analogous experiments on free will,
where you can say to people, “No sensible people these days believe in
free will. Francis Crick has shown in his book that there’s no such thing:
everything is predetermined”. And if you tell people that, they believe it
—which you can assess through a follow-up questionnaire—and they will
then be more likely to cheat on tests and become less helpful in
social situations.
HB: Because they are now convinced that they’re not responsible.
CF: Yes. Another thing that is fascinating to me involves this
phenomenon called “post-error slowing”. If you do a reaction time task,
for example, after you’ve made an error you will slow down, because
you’re monitoring yourself and you’re saying “I’m doing it too fast”. And
it turns out that people who don’t believe in free will show less post-
error slowing, and the amplitude of their readiness potential in the
brain becomes smaller. I think the prior here is about how much top-
down control an individual believes she has over her behaviour. So if
you now believe you have less top-down control—
HB: Or none.
CF: Yes—or none—that will have a considerable impact. So I think
these are hints of the kinds of mechanisms that are involved, and my
current research project is precisely about that: trying to discover
something about these mechanisms.
Questions for Discussion:

1. How might detailed studies that show differences between
the brains of Italian-speakers and English-speakers due to
neuroplasticity impact the so-called Sapir-Whorf debate on the
extent to which the language we speak influences our thoughts?

2. To what extent could it be validly argued that psychology
experiments should avoid using psychology undergraduate
students as subjects, due to the effect that our prior beliefs
might have on the results?

3. Is there a significant link between the concept of “energy
expended” and “desire”? Might it be the case that those who
“relish a challenge” and enjoy spending their “mental energy
reserves” act differently from those who deliberately opt to
conserve such “reserves”? Those interested in the link between
energy and decision-making are referred to Chapter 3 of
Being Social with Roy Baumeister, while those curious about
how embracing a challenge is reflected in the notion of a
“growth mindset” are referred to Chapters 1–3 of
Mindsets: Growing Your Brain with Carol Dweck.
VI. Humanistic Hubris
Dancing bees, stripping pine cones and The Royal Society

HB: I would—as you had predicted, ironically enough—like to talk
about free will, but before I do, I’m going to ask a somewhat different
question. So we’ve talked about humans and the human brain and how
somehow interactions between humans—“culture”, broadly defined—
can affect the development of that brain. But of course, humans aren’t
the only living things around. So, might there be some sense of a meta-
theory which extends in principle to animals?
CF: Yes. I mean, one of the things that I’m very interested in, and which
impacts particularly on the sorts of things we’ve been talking about, is, What’s
the difference between humans and other animals? Because obviously
the “Bayesian brain story” and the “basic reinforcement learning story”
is found at least in all mammals, and everybody these days is interested
in basic models of decision-making. And I’ve been interested in group
decision-making—and by “group” I mean two people making a decision
together—and you can show that two people working together in the
right circumstances will make a better decision than either one on their
own. And I think that we’ve shown that this depends on
communication.
And then you immediately think, But what about bees? Well, at least I
immediately think, What about bees? Because bees also make joint
decisions. They make joint decisions about where they’re going to go.
When they swarm, scouts go out and they come back from different
sites and they have a sort of argument to decide—
HB: They do a little dance, don’t they?
CF: Yes, they do a little dance. Scouts come back from different places—
this scout says, Go here and another scout says, Go there—and then
somehow they come up with an agreement and they all go off to one
place. And it’s usually the best place. So how is that different from
humans? And I guess all you can really say is, Well, bees can only do it
for new nest sites.
HB: Oh, well, whatever. That doesn’t seem a particularly important
distinction to me.
CF: Yes. So what we’re saying is just, We’re more flexible? That doesn’t
seem to be a real difference in kind. We can come up with completely
novel problems and novel ways of communicating: bees can only
dance, while we can do it with words or semaphore or something else.
HB: But from a mechanistic perspective—from a large-scale perspective
—one could argue that it’s the same thing.
CF: Absolutely. And it has even been suggested that the way bees make
decisions is actually very similar to the way that neurons in the
mammalian brain interact to make their decisions.
HB: Well, that wouldn’t be terribly surprising, given everything that
you’ve said today.
CF: That’s right. So it’s very difficult to say what the difference is,
exactly. In fact, bees seem to be rather better at communicating than
non-human mammals, I would guess—and that becomes more and
more interesting. There’s a lot of argument about culture, and there’s
some suggestion that chimpanzees and rats and so on do have culture,
but it’s trivial compared to ours, and to some extent it’s not cumulative:
so the rats in Jerusalem have different ways of stripping pine cones or
something from those in London, but it’s very fragile and can be lost.
HB: How much of this might be a reflection of our own cultural biases? I
mean, I can’t help but think that if we were having this very same
discussion right here, in London, 200 years ago, there would be a rather
different sentiment expressed—not by myself, of course, since I’m a
mere colonial—but by a British imperialist who would be declaring
without any hesitation whatsoever that these other “lower races” in
“lower places” just don’t have any sense of culture or civilization
whatsoever—there are differences, but they are not terribly meaningful
in the broader scheme of things. Perhaps we’re just doing the same
thing now.
CF: Yes, one should never forget that possibility. I mean I’m not
convinced that the rats have—
HB: Sure. It’s not as though I’m a staunch advocate for “rat equality”—
CF: And the other thing that we have which they don’t have is
institutions, like libraries or The Royal Society or governmental
agencies.
HB: Which is good and bad, I suppose.
CF: Yes. But then, of course, a couple of hundred thousand years ago we
didn’t either.
HB: Which isn’t really all that long ago, as these things go.
CF: That’s right, yes.
Questions for Discussion:

1. How might the analogy between the action of individual bees
and mammalian neurons help us build models of a mechanism
to link the subpersonal with the personal that was highlighted
in the previous chapter?

2. How do you think our basic views about “animal
consciousness” will change over the next 50 years, and what
impact do you think that will have on human society and
culture?
VII. Free Will
And what it means

HB: OK, let’s move to free will now. Before we talk about your views,
maybe it makes sense to start with these very influential experiments
conducted by Benjamin Libet that I’d like you to describe and comment
on.
CF: Well, Libet had this very clever idea that depended on the discovery
of this so-called Bereitschaftspotential, or “readiness potential”, which
is easy to measure. It’s a negative-going potential that you can measure
with an electrode placed over the motor cortex. And whenever you think
about lifting your finger spontaneously, this negative potential builds up
gradually for about one second, and then once your finger is
lifted it goes down again. So you can measure that. So his idea was that
you would ask people to lift their finger whenever they had the urge to
do so—those were his words. You can measure the readiness potential
to see when it started going up, and you could also ask them, When did
you have the urge to lift your finger?
And that’s slightly controversial, but it’s actually been done many
other ways now, so the basic technique is fine. So there was a little clock
that went round and round. And you had to say what the time on the
clock was when you decided to lift your finger. And what he found, and
what has been replicated many times, is that in the brain, this readiness
potential starts to go up about 500 milliseconds before you lift your
finger. And the time people typically report deciding to lift their finger
is about 200 milliseconds before they lift it, so the brain activity
precedes their report of when they decided to lift their finger.
In other words, in principle—although no one’s quite done it yet—
you could predict when they were going to lift their finger before they
could. And I copied this and did it in the scanner. In the Libet case,
they have to decide when to lift their finger; I did a similar thing
where they had to decide which finger to lift—and that’s been
done too: John-Dylan Haynes in Berlin has shown using fMRI that you
can actually predict, something like six seconds in advance, which
finger they’re going to lift.
HB: Six seconds?! That’s a long time. I don’t know what I’m doing six
seconds in advance.
CF: Well, in this case all you’re doing is lying in a scanner contemplating
moving fingers, so it’s not a problem. But that’s slightly complicated—
fMRI is all about blood flow and so forth. At any rate, there are various
critiques of this experiment, and I had several problems with it myself.
One is, if you like, “the cultural problem”. When you say
to somebody, “I want you to stay in this room for half an hour and lift
your finger whenever you have the urge to do so”, this is a very funny
instruction. One thing that’s implied is that if Dr Libet comes back after
half an hour, he will not be very pleased if you say, “I never had the urge
to lift my finger”.
HB: So this is a frame problem, as you described earlier.
CF: That’s right: it’s a sort of frame problem. So you say to yourself, “I
clearly have to lift my finger from time to time. And because it’s to do with
internal urges, I shouldn’t do it every six seconds or something; I should
lift my finger at effectively random intervals.”
And I think that’s what the instruction really means. And you can
show that if, instead of asking people to lift their finger whenever they
like, you ask them to lift it at random, exactly the same sort of brain
activity is seen.
Now there’s nothing necessarily wrong with this because if we relate
this to free will, there are two alternatives: either we have free will or
everything is predetermined. If everything is predetermined, then I can
predict what you’re going to do and when you’re going to do it. So to
demonstrate that I have free will I have to behave in an unpredictable
manner, which doesn’t sound quite right.
I should say first of all, that people are very bad at being
unpredictable and at generating random numbers.
I have a nice anecdote for you. I once had a chat with Roger Penrose,
who was complaining to me that, when he was a boy, he didn’t mind
his brother beating him at chess—his brother was a chess champion—but
he objected to his brother beating him at stone-paper-scissors. So he’d
memorized some random numbers. So in fact you can beat people at
stone-paper-scissors if you take into account their lack of randomness.
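How a lack of randomness can be exploited can be sketched as follows. This is my own minimal illustration, not a model of what Penrose did: it simply tracks the opponent's past moves and plays the counter to their most frequent one.

```python
# Beating a non-random opponent at stone-paper-scissors by tracking the
# frequency of their past moves. Illustrative sketch only.
from collections import Counter

# Which move beats which: paper beats stone, scissors beat paper, etc.
BEATS = {"stone": "paper", "paper": "scissors", "scissors": "stone"}

def best_response(opponent_history):
    """Play the counter to the opponent's historically most frequent move."""
    if not opponent_history:
        return "stone"  # arbitrary opening move
    most_common_move, _ = Counter(opponent_history).most_common(1)[0]
    return BEATS[most_common_move]

# A biased (non-random) opponent who over-plays stone:
history = ["stone", "paper", "stone", "stone", "scissors", "stone"]
print(best_response(history))  # paper, which beats their favourite move
```

A truly random opponent would make every such strategy break even, which is exactly why memorizing random numbers is a winning defence.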
Anyway, there’s a very nice experiment showing that cockroaches—
unlike people—can behave randomly. If you blow at them, they change
direction completely at random, so in that sense cockroaches have free
will but we don’t.
So I am using this as an example to say that perhaps this
predetermination idea is not the way to think about free will. I mean,
there are other problems with free will. Some Enlightenment
philosophers say things like, “A free man is someone who always makes
the rational decision”. In which case, if you know what the rational
decision is, you can predict what they’re going to do. And then some
people take that seriously—at least to some extent—by saying things
like, “The only way to demonstrate that I have free will is to do something
completely stupid and against my interests”. And that doesn’t make any
sense either.
HB: No, it doesn’t. It sounds like something that a really, really dogmatic
economist would say.
CF: Yes. So I’ve been thinking about the relationship between free will
and responsibility. So we have a sense of agency and it seems to be very
closely tied to our sense of responsibility, by which I mean we can be
held responsible for things.
And according to the new science of experimental philosophy, if we
go and ask the folk, the folk will tell you that people are not responsible
for actions of which they are not conscious at the time, or something
like that. So there’s a very tight relationship between experience of
agency and being in control and consciousness and responsibility.
I’m sure you know about Patrick Haggard and his intentional binding
measure. And previous studies show that you’ve got more intentional
binding—in other words, a greater sense of agency—when you’re
making a moral decision as opposed to an economic decision. And
they’ve just published a very nice paper about a sort of Milgram-like
effect: that you feel less responsibility, and there’s correspondingly less
intentional binding, if you’re doing something because somebody told
you.
HB: So this is framing yet again?
CF: Yes, it’s all about framing. And the suggestion is that perhaps we
learn culturally to be responsible for our actions by getting praise and
blame—after all, that’s what Epicurus said long ago. And I certainly
noticed with my children that they very rapidly get onto the idea of
saying, “I did it, it was an accident” as a way of justifying an action.
HB: Right. As a way of limiting their responsibility.
CF: Yes. I think the fact that what we can tell people about is rather
inaccurate and malleable is actually helpful. There’s this nice
experiment by Petter Johansson which you probably know, where he
shows you pairs of cards with women’s faces, and you have to
say which one you prefer. And then they put the cards face down, push
one forward and say, “Explain why”. And they’re very clever conjurers, so
on 25% of trials the one they put forward is not actually the one that
was chosen. And most of the time, people don’t notice this and explain
why they chose it regardless.
So we’re not attending that much to our decisions, but we’re very
keen on justifying them even if we didn’t actually make them. And I
think this notion of justification is very central. This is what we spend a
lot of time interacting with each other about: culture is very keen on
saying what’s justified and what’s not justified. We learn to feel
responsible for our actions. We learn to feel that we have free will.
And this is actually very important for social cohesion. So in these
economic types of experiments you get people called “free riders” who
appear and destroy this social cohesion. And you can recreate the
cohesion if you’re allowed to punish the free riders, but you only punish
free riders if they are responsible for their actions.
Tania Singer did an experiment where if you’re told that someone is
not actually playing the economic game—he’s not deciding, he’s just
reading off a sheet, say—then you don’t punish him. So I think that the
feeling that we are all responsible for our actions, and therefore we can
be punished for those bad actions which are done deliberately, is very
important.
So I tend to believe that we do have free will, but I would say that
even if we didn’t it’s very important that we believe that we do.
Questions for Discussion:

1. What does it mean to you to say that “free will exists”?

2. How might someone argue with Chris’ statement “If
everything is predetermined, then I can predict what
you’re going to do and when you’re going to do it”?

3. In what ways might it be meaningful to distinguish between
“free will in principle” and “free will in practice”?
VIII. The Very Big Picture
Towards a grand unified theory of psychology?

HB: You’ve been very gracious with both your time and your
willingness to speculate, for which I’m most grateful. I’m going to try to
push you a little bit more now in terms of speculation. Here’s what I
was thinking as you were talking. So I understand that you’ve looked at
abnormal psychology—schizophrenia in particular—both as a research
subject in its own right and as a way of better understanding a wide
range of brain mechanisms that we all have. And my sense is that Uta
(Uta Frith, featured in the Ideas Roadshow conversation, Exploring
Autism) is doing something similar in her work on autism: investigating
the particularities of that condition as well as aspects of general social
behaviour through the prism of autism.
CF: Yes.
HB: Now, when it comes to autism, these days we often talk about “a
spectrum”. So first of all, one could imagine schizophrenia being on a
spectrum, but perhaps even more interestingly, one could also imagine
everything being on a spectrum—which is to say that rather than say,
“This person is autistic and that person has schizophrenia” as a result of
correspondingly strong overlaps with various external objective criteria
that we’ve established, we instead frame the entire human condition in
terms of our understanding of—let’s say—our “cognitive sociology”
and our ability to be able to interact in some cultural way, which might
result in some overarching framework through which we could
interpret everyone.
So this is hugely speculative, I appreciate, and much more of a
theoretical point than even a vague attempt at any sort of prescription.
Moreover, it’s not in any way to claim that everyone is equivalent or that
everyone is suffering from the same thing or the same way or that
people who are severely autistic or severely schizophrenic are not in a
very different situation than the rest of us. I’m not in any way trying to
insinuate any of that. I’m just thinking about a larger framework, and a
correspondingly different way of looking at things in “the bigger
picture” if you will.
CF: I’m sorry. I cannot resist my response to this, which is, there are
only two kinds of people in the world: those who believe in spectra and
those who believe in categories.
HB: And I think I know what kind of person you are.
CF: Yes. And Uta is not very happy about “the autistic spectrum”. The
concept is certainly less developed in the case of schizophrenia, but
there are certainly people who believe that there’s a spectrum of, say,
hallucinations; and they’re very interested in so-called normal people
in the population who hear voices.
There are obviously people in other categories who have delusions
and hallucinations, most notably neurological patients. So there’s the
Capgras Syndrome where people believe that their spouse has been
replaced by a robot. That sometimes happens in schizophrenia, but it’s
more typically associated with dementia or damage to the brain. And I
suppose what I’m guessing at is that there might be a spectrum of
symptomatology, but I’m not sure whether that necessarily means that
there’s a spectrum of diagnostics or disease or causes or something like
that.
I suspect that schizophrenia will turn out to have many different
causes with the same end result—it might even be like fever, as local as
that. In the olden days, people used to study mental deficiency—which
basically means people with lower IQ—and the IQ is clearly a spectrum.
And what’s happening now in genetics is that every month or so they
announce a new single-gene disorder, which accounts for 1% of this
category. And I think something similar is beginning to happen with
autism. So they’re talking about de novo mutations and various things
that are now viewed as accounting for—well, I think they’ve now got up
to 5% or something like that. So it all depends on what it’s the spectrum
of.
HB: Right. In retrospect, I don’t think I quite enunciated my thoughts
very well—perhaps that’s because I don’t have a very clear
representation in my own mind. So let me try again, since we’re almost
at the end and this is the time for massive amounts of speculation.
Clearly there are circumstances whereby someone will start
exhibiting a radically different form of behaviour than someone else.
And one might say, theoretically, there may be a spectrum, or there may
not be a spectrum, but clearly this person is doing something very, very
different. If I start believing tomorrow that my wife is a robot, I’m not
acting the same way that I’m acting today or that other people are
acting. So that’s obvious. And again, I’m not in any way trying to
minimize the pain and suffering that people who are afflicted with
those illnesses have, or their loved ones have around them, which is of
course very severe.
But I’m thinking in terms of these concepts that you were talking
about before—which is to say, firstly, the brain is going around making
these Bayesian predictions—what you called the subpersonal level—
and secondly, there is this sense of self that we have—what you called
the personal level—and a key question is how these things are
connected—what’s the mechanism, if you will—and you hypothesized
about the role of culture, properly understood and interpreted within
the context of neuroplasticity and all of that.
So I’m thinking, just in terms of the mechanics and the principles. So
for the moment, let’s forget about words like “schizophrenia” or
“autism”; and while recognizing the individual causal importance of
genetic factors and the environment—broadly defined—let’s forget
about that too for the moment.
Instead let me say, the only thing I am concerning myself with is the
final end states as it were: to what extent our brains are being good
“ideal Bayesian observers” and the way our brains can be connected to
our minds.
And once we plunge into the important specific details—which
naturally include all that stuff I just said I was going to ignore for the
time being—we recognize that sometimes there are breakdowns in this
particular way which somehow manifest themselves as schizophrenia, and
sometimes there are breakdowns in that way which manifest themselves as
something else, and so on. It’s all tremendously complicated and we
need to spend zillions of hours looking closely at the individual
mechanisms to make some sense of each category.
But the thought is that on some macro-level the “right framework”—
a “universal framework”, if you will—to be looking at things is through
those two particular degrees of freedom: to what extent is our brain an
ideal Bayesian observer, and how faithfully—or reasonably, or whatever—
is the subpersonal level “scaling up” into the personal level? That’s what
I’m saying—I think. Does that make any sense somehow or not?
CF: Yes, I think so. So if we take the Bayesian story and say that there
has to be a balance between prior expectations and evidence, then you
presumably have a complete continuum of where this balance sits, and
one extreme might lead you to schizophrenia—I’m not sure what the
other extreme would lead to. Is that the sort of thing you have in mind?
HB: Yes.
CF: That would be fine. And there might be many different reasons,
both genetic and environmental, why this balance might be upset.
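The idea of a continuum of balance points between priors and evidence can be made concrete with a toy precision-weighted average. This is my own illustration, not a model from the conversation; the weight w and the numbers are arbitrary assumptions.

```python
# Toy illustration of the Bayesian "balance" between prior expectation
# and sensory evidence: the estimate is a weighted average, and sliding
# the weight w along a continuum changes how much the prior dominates.

def posterior(prior, evidence, w):
    """w = weight on the prior, in [0, 1]; 1 - w = weight on evidence."""
    return w * prior + (1 - w) * evidence

prior, evidence = 10.0, 2.0
balanced = posterior(prior, evidence, w=0.5)        # mid-continuum
prior_heavy = posterior(prior, evidence, w=0.95)    # one extreme
evidence_heavy = posterior(prior, evidence, w=0.05)  # the other extreme
print(balanced, prior_heavy, evidence_heavy)
```

At one extreme the estimate barely registers the evidence; at the other it barely registers expectations. Different genetic or environmental causes could, on this picture, all act by shifting where an individual sits on the continuum.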
HB: Exactly. But that’s what you’re looking at—that’s the framework, as
it were.
CF: Yes. That would be fair. I would agree with that. Although one has to
make it clear that this single notion of balance is a bit too simplified.
HB: Sure. That’s probably the physicist in me wanting to pare
everything else away—you know, no air resistance, nothing messy—
just try to get some basic underlying principles.
Questions for Discussion:

1. Why do you think that some researchers would be happy
with the notion of a “spectrum”? What are the downsides to
such a view?

2. Why do you think Howard mentions “no air resistance”?
What does he have in mind when he talks about “the physicist
in him”? Are there certain approaches to scientific questions
that can be labelled as more aligned with one type of science
over another?
IX. Final Thoughts
Schizophrenia treatment and open questions

HB: Once again I should say that you’ve been very generous with your
time and I really appreciate your willingness to indulge me with all
these fascinating speculations. It seems to me that there has been an
interesting pattern here throughout our conversation: I keep trying to
get you to take flying, hand-waving speculative leaps into the unknown
and you keep citing all of these established experiments that keep our
feet firmly on the ground.
So to wrap up let me first return to schizophrenia, which is a
condition that you’ve had a lot of experience with. Notwithstanding the
fact that, as you said at the beginning of this conversation, you were
“steered out of” a clinical stream, what are your views about the future
treatments for schizophrenia based upon your sense of where we are
today?
CF: Well, I think the situation is currently rather bleak in the sense that
we have these anti-psychotics: they reduce the symptoms in most cases,
but they have nasty side effects, and the drug companies have failed to
discover better versions. There were the atypical anti-psychotics, and it
now seems clear that the only reason they were better is because
people were giving them in smaller doses. And many of the drug
companies have actually pulled out because they can’t see what’s going
to happen.
On the other side, we have things like cognitive behaviour therapy,
which some people have been very keen on, but again, it’s not clear
from various meta-analyses how good that actually is. I think there’s
nothing obviously very positive on the immediate horizon. Of course
other people might well give you a completely different story, but I have
to say that that’s my feeling.
I think that the developments in cognitive neuroscience and so on are
promising, and they might come up with a completely different
approach to drug treatment involving NMDA receptors—NMDA seems
perhaps more important than dopamine—but that is yet to be seen.
And I guess that the sort of cognitive approaches to saying, What are
the mechanisms which produce these symptoms and what is it actually
like to have them? might be harnessed to come up with better cognitive-
type therapies. But I don’t see them immediately at the moment, and
that’s not something I’m involved with.
So I would say that I think there are exciting developments in
cognitive neuroscience, but it’s not clear to me that they will have an
immediate effect on treatment—but hopefully they will eventually.
There’s also a huge amount of effort going into trying to discover a
genetic basis for schizophrenia, but my sense is that at the moment it’s
coming up with small risk factors and not much else.
HB: Let me conclude by asking you a standard sort of question I ask—
at least when I can remember to do so—which is: if I were an
omniscient being and could answer any three questions that you would
have about your research, what would you ask me?
CF: I guess there’d be two questions, really. The first would be, What is
the biological basis for schizophrenia? And even this is somewhat
controversial in some circles because there are still people around who
think it’s all cultural.
HB: Goodness me. I had no idea that such people were around. Shows
you how little I know.
CF: There are still people who say things like, “We don’t like the medical
model” and “It’s not a disease”.
And I guess the other question would be the same sort of thing about
consciousness. It seems to me that a hundred years or so ago the big
mystery was, What is Life? And in fact, life and consciousness are more
or less the same thing. So Frankenstein’s monster had both; and my
impression now is that the question of life is regarded as sort of having
been solved—or at least people don’t talk about it anymore—in light of
DNA replication and so forth. And I presume that something similar will
happen to consciousness and I’d love to know what it looks like,
because many people say that the problem with trying to solve the
problem with consciousness is that we don’t even know what the
answer will look like. So I would ask, What will the answer look like?
HB: Very good. Is there anything that we haven’t had a chance to talk
about or that you’d like to embellish upon?
CF: No, nothing else.
HB: Well, thank you very much, Chris. I’ve had a wonderful time.
CF: Thank you. I enjoyed that.
Questions for Discussion:

1. When do you think we will have a biological understanding
of schizophrenia?

2. What impact do you think those who claim that
schizophrenia is solely a “cultural phenomenon” have on
schizophrenia sufferers and their families? What impact do you
think such views have on the understanding and awareness of
schizophrenia among the general public? Those interested in
these ideas are referred to the Ideas Roadshow conversation
Mental Health: Policies, Laws and Attitudes with USC
Law Professor and bestselling author Elyn Saks.
Continuing the Conversation
Readers are encouraged to read Chris’ books, Making Up The Mind and
Schizophrenia: A Very Short Introduction, both of which go into
considerable additional detail about many of the issues discussed
here.
More generally, Ideas Roadshow offers an extensive collection of
additional individual conversations in cognitive science and biology,
including:

Being Social – A Conversation with Roy Baumeister
The Psychology of Bilingualism – A Conversation with Ellen
Bialystok
Philosophy of Brain – A Conversation with Patricia Churchland
Believing Your Ears: Examining Auditory Illusions – A
Conversation with Diana Deutsch
On Atheists and Bonobos – A Conversation with Frans de Waal
Investigating Intelligence – A Conversation with John Duncan
Mindsets: Growing Your Brain – A Conversation with Carol Dweck
Constructing Our World: The Brain’s-Eye View – A Conversation
with Lisa Feldman Barrett
Speaking and Thinking – A Conversation with Victor Ferreira
The Science of Emotions – A Conversation with Barbara
Fredrickson
Exploring Autism – A Conversation with Uta Frith
Autism: A Genetic Perspective – A Conversation with Jay Gargus
Vision and Perception – A Conversation with Kalanit Grill-Spector
Knowing One’s Place: Space and the Brain – A Conversation with
Jennifer Groh
Beyond Mirror Neurons – A Conversation with Greg Hickok
Applied Psychology: Thinking Critically – A Conversation with
Stephen Kosslyn
A Matter of Energy: Biology From First Principles – A
Conversation with Nick Lane
The Limits of Consciousness – A Conversation with Martin Monti
Minds and Machines – A Conversation with Miguel Nicolelis
Mental Health: Policies, Laws and Attitudes – A Conversation
with Elyn Saks
Our Human Variability – A Conversation with Stephen Scherer
Mind-Wandering and Meta-Awareness – A Conversation with
Jonathan Schooler
Learning and Memory – A Conversation with Alcino Silva
Sleep Insights – A Conversation with Matthew Walker
Critical Situations – A Conversation with Philip Zimbardo

We also offer a wide range of collections of five conversations each in
eBook and paperback format, including Conversations About
Psychology, Volumes I-II, Conversations About Neuroscience and
Conversations About Biology.
A full listing of all titles can be found at:
www.ideas-on-film.com/ideasroadshow.
