
Can we start with you sharing your name and your position?

My name is Murray Shanahan, and I’m a Senior Research Scientist at DeepMind and also
Professor of Cognitive Robotics at Imperial College London.

Excellent. So, can you tell me a little bit about how your work in AI began?

Well I, I guess it began with a kind of childhood obsession with science fiction, and particularly with the stories of Isaac Asimov, really. And I was very impressed with Susan Calvin, who's one of the main characters in the I, Robot stories and who was a robopsychologist. And I think when I was a kid, I always wanted to grow up to be a robopsychologist. So, from there I got interested in computer science. That seemed to be the way into that kind of thing. So, I got very interested in coding, and for my A-levels, or high school, I did computer science and learned BASIC. Ancient programming language: BASIC. And then I went on to do computer science as a degree, and then went to Cambridge where I did my PhD, and my PhD was in what we now think of as classical symbolic artificial intelligence, really. That's the category my research topic fell under. So, it's been a kind of lifelong thing really, I guess. The whole artificial intelligence thing, a nearly lifelong thing.

Can you tell me a little bit more about this motif of the Robopsychologist? Or the ways in which
that was influential?

Yeah, yeah.

I'm curious about the ways in which that, to me, sounds like a very humane entry into what most people perceive as a highly technical...

Yes.

Field.

I think I was fascinated by the idea that we could build things that, you know, exhibited intelligence and were human-like in so many ways, but nevertheless were sort of manufactured, and we could understand them as well, take them apart and understand how they worked. So, I loved the images of robots in science fiction films. I loved the idea that if, as a kid, you saw the lights flashing around inside the transparent head of the robot, like Robby in Forbidden Planet, then something very deep inside me liked the idea that you could understand what was going on inside this thing and yet it was like us, it was intelligent like us. So, I was very much drawn to that idea. And then Susan Calvin was the quintessential person who did understand how the robots worked inside, and worked with them and engineered them and all that sort of thing. So, that image of a person who could do that kind of thing was something that really impressed me, I think, when I was a teenager, you know.

I’m wondering if you can also share a little bit about what you see as some of the most
influential examples of your past work? In terms of the contributions you’ve made before now.

Contributions I’ve made?

Yeah...
You know the, the honest truth is I don’t think my contributions are that big, really.

Well for your thinking.

But.

I'm thinking about the ways in which it's shaped your interests and how your career has unfolded.

In a way, a hallmark of my career, I think, one which keeps your contributions to a minimum, is that I've flitted around all over the place: I've become disillusioned with one way of thinking about AI and one kind of approach to understanding the mind and cognition, moved to a different one, abandoned all the reputation I'd built up in the first area, worked in the new area for a little bit, and then abandoned that. So, in particular, I started off working in symbolic artificial intelligence, which is really all about the idea that we want to build systems that construct language-like representations of the world and then carry out logic-like inferences on those representations. And that seemed like a great idea at the time, and indeed there were many, many, you know, positive signs for that way of thinking about intelligence and cognition. But a fundamental problem with it was the fact that the elements of the representations really came out of the heads of the designers or the engineers. They weren't learned; they were just kind of made up. When we humans or animals acquire intelligence, well, part of it is bequeathed to us by evolution and part of it we learn through our interaction with the physical world and with other humans and animals. And, you know, you don't have that element of learning in symbolic artificial intelligence. So, there's an issue called the so-called symbol grounding problem. For us, the symbols that we use, or the words that we have in language, are grounded in our interaction with the world, whereas the symbols in classical symbolic artificial intelligence aren't grounded in that kind of way. And so people like Rod Brooks, for example, a very influential roboticist, articulated a kind of critique of symbolic AI in the mid to late 80s, and it very much resonated with me at that time. You know, I was working in that area partly because that was the kind of research that went on at Imperial and the universities I was hanging out in, like Stanford and places like that. But I was more and more thinking that there was something not quite right with all of this, and Rod Brooks articulated these sorts of feelings very well, saying that we need to take embodiment more seriously and, you know, robotics is really the way to go because robots interact with the real world. So I thought there was a lot to that, and I started getting more and more interested in robotics, and the first thing I wanted to do was see whether I could retain some of the best things about symbolic AI, but in a more robotics, embodied setting. So, I was working in what we still kind of call cognitive robotics. And that's where we knew each other, right? When we were both working broadly in that kind of thing, in cognitive robotics. I hope you don't mind these occasional off-camera references.

I think it's fine. A perfect community now.

So, so, and I’ll ramble like this, is this okay?

Of course.

So, I got very interested in, you know, how you can do your logic and use logic-like representations, symbolic representations, but in a robotics setting, where maybe you're constructing a lot more of the representations from interacting with the environment, from visual perception, say. And that went so far, but it didn't really seem to be going terribly far at the time, and eventually I realized I wasn't getting very far with that either. And I thought, hang on, wait, let me just take a big step back and really understand better how brains work, how biological brains work. And of course, I had no background in neuroscience at all, apart from a bit of amateur reading. So this was, from a career point of view, a very foolish kind of thing to do: to start thinking, okay, right, I've got this nice publications list in symbolic AI, so let's just forget about all that and move into a totally different and immensely crowded and difficult area. So, I just got very interested in neuroscience and started thinking about how you could build little, more neural-network-type models. But I got interested in more biologically realistic neural network models than the kinds that are around today, in 2018, or rather the kinds that are so prevalent today, because they were around then too, of course. I got interested in things like spiking neurons, so much more biologically accurate neurons, and how you model them in computers, how you build big networks of them, and what the properties are of big networks of spiking neurons. So that was the kind of thing that interested me for a long time.

Sure.

So then, from there I got very interested in things like, you know, the modularity of the networks in the brain and how it gives rise to certain kinds of dynamics, and how the dynamics might give rise to complex cognition. All kinds of things like that. At the same time, by the way, I had a long-standing interest in philosophical questions and questions surrounding consciousness and things like that, so that was a sort of ongoing theme. And then I got so far with that, and round about the early 2010s there was this huge resurgence of interest in machine learning and neural-network-based machine learning. These terrific successes were coming out of various labs using so-called deep learning: large neural networks, large corpuses of data. And that piqued my interest, because it relates back to this whole symbol grounding problem again; maybe this shows how we can ground symbols after all. And then my friends and colleagues here at DeepMind produced this terrific piece of work, DQN, applying what we now call deep reinforcement learning, a combination of deep learning and reinforcement learning, to these video games, Atari video games. And there I thought they really had kind of cracked a big problem, because that seemed to me to be the very first example of a properly general piece of AI. It knew nothing about the games that it was asked to learn to play; all it saw was the pixels. It didn't know that there were objects there, it didn't know, you know, what the objective of the game was. It had to learn all that from scratch, given just the pixels on the screen and the score. And yet it was able to get to sort of super-human levels of capability in many of these different games, Space Invaders and Breakout and so on. There were some that tripped it up, but generally it was very good at learning these kinds of things from scratch.
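To give a flavour of that learning setup in code, here is a minimal, illustrative sketch of the DQN idea, not DeepMind's released implementation: a small convolutional network maps a stack of recent pixel frames to one estimated value per joystick action, actions are mostly chosen greedily with occasional exploration, and learning nudges each estimate towards the observed score change plus the discounted best value of the next frames. The layer sizes, hyperparameters, and helper names (QNetwork, td_loss, epsilon_greedy) are assumptions made purely for illustration.

```python
import random
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a stack of 4 grayscale 84x84 frames to one Q-value per action."""
    def __init__(self, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, n_actions),   # one estimated value per possible action
        )

    def forward(self, frames):           # frames: (batch, 4, 84, 84) raw pixels
        return self.net(frames)

def td_loss(q_net, target_net, batch, gamma=0.99):
    """Nudge Q(s, a) towards the Bellman target: r + gamma * max_a' Q(s', a')."""
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        best_next = target_net(s_next).max(dim=1).values
        target = r + gamma * best_next * (1.0 - done)
    return nn.functional.mse_loss(q_sa, target)

def epsilon_greedy(q_net, frames, n_actions, eps):
    """Mostly pick the action the network rates highest; sometimes explore."""
    if random.random() < eps:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(q_net(frames.unsqueeze(0)).argmax(dim=1).item())
```

The full system also relied on an experience replay buffer and a periodically updated target network to stabilise learning; those engineering details are omitted here.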

No, it’s fine.

Is it okay? You can cut this as you like.

Put it through. We will.

So after DeepMind released... so I'm at Imperial at this time, not at DeepMind, although I knew the DeepMind people pretty well. And then DeepMind released some source code for this DQN system, so we got it all up and running in the lab on a nice GPU, and when you watch it actually learning you start to realize some of the limitations of this deep reinforcement learning paradigm, because it learns very slowly; it requires an enormous number of frames to get anywhere.

And is it all within the positivist notion that it is trying to earn points?

Absolutely, yes. So...

So, it’s a positivist trajectory.

So, well, I'm not quite sure what you mean by positivist exactly, but, you know, the aim of this system is to maximize its reward over time, so it maximizes its expected reward over time. And the limitations that you saw, just in the context of actually maximizing reward over time, were that it was very, very slow at learning. It had to see lots and lots and lots and lots of frames. Unlike a human, who would pretty quickly get the idea of the game, those systems never really get the idea of the game; they just gradually build up a statistical picture of what to do when, and it's just a kind of stimulus-response thing. And that made me think, well, actually, some of those ideas from symbolic AI that I kind of grew up with are the sorts of things that we maybe need to reimport into this framework, in order to build something that can represent things on a more abstract level and can try to learn, you know, to understand the principles of what's going on in the little microworld of these games, and say, well, there are these kinds of objects, and these are the rules for how they interact with those kinds of objects, and this is the objective of the game. So now I can work out in a much more principled way what I need to do under different circumstances. So, I got interested in how you join these two things back together again, symbolic artificial intelligence and neural network learning.
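For concreteness, "maximizing expected reward over time" usually means maximizing a discounted sum of future rewards, where a discount factor weights near-term reward more heavily than distant reward. A toy illustration, with made-up reward values and a conventional discount of 0.99:

```python
# Discounted return: the quantity a reinforcement-learning agent like DQN
# tries to maximize in expectation. Rewards further in the future count
# for less, controlled by the discount factor gamma.
def discounted_return(rewards, gamma=0.99):
    total = 0.0
    for t, r in enumerate(rewards):
        total += (gamma ** t) * r
    return total

# Example: score changes observed over a few frames of play (made-up numbers).
print(discounted_return([0, 0, 1, 0, 5]))  # later rewards are slightly discounted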

It's actually exactly the kind of foundation that allows [missed @ 13:19, 1st video] continues, so it's useful. So, in regard to your long lecture, one of the things that we've been paying a lot of attention to in working with our students is the accuracy or the utility of communications about these very sophisticated and complex systems to general audiences. So, I'm wondering if you can think of an example of communication that you think is either particularly clear or accurate in terms of communicating a system to general audiences, either that you've worked on or that colleagues have...

So, so you mean...

Clearly.

So, you mean sort of, sort of the public engagement challenge...

Yes.

Yeah, well, I mean I think we have quite a big challenge at the moment within artificial intelligence. I'm not quite sure whether this is what you were getting at, but quite a big public engagement challenge, which has to do with the fact that the public tends to, and not just the public but a lot of people, even quite expert people, tend to muddle up the kind of artificial intelligence technology we have around today, which has a lot of useful applications and so on, with the kind of stuff you see in science fiction films, which indeed was the kind of thing that inspired me and inspires many AI researchers; you know, in the field you want to build these really smart things that have human-level intelligence. We're nowhere near being able to do that yet, and yet you've got people talking about AI in the media all the time, and companies all want to have an AI strategy and they've got their AI divisions doing, you know, this, that and the other, and that's all relatively simplistic technology. I mean, it's very sophisticated technology by the standards of what was around five years ago, but it's very simplistic technology by the standards of what you see in science fiction films and what you might easily imagine is there. So, for example, when we're interacting with voice assistants, it's very easy to be misled into thinking that there's more intelligence there than there really is, because it sounds human-like and it gives smart answers to lots of things. Normally, if you're interacting with a human being and they give answers that are that good and that sound, you expect that there's kind of someone at home, that there's a common-sense foundation for those answers, when in fact there's no such thing in today's voice assistants at all; there's no real understanding of the things they're talking about. And so, I think it's really difficult to convey to the public the gap between the technology we have today and the sort of expectations and images that we inherit from science fiction.

Yeah, that's the underlying theme of the question. Do you think there are any particular examples that have been particularly accurate, or that are achieving, bridging, that chasm between public imagination, or even developing technologists' imagined possibilities for the work they would be doing, especially if they're influenced by science fiction, versus what...

Oh gosh that’s a...

These specific systems can actually do...

Yeah, that’s a bit of a curveball maybe. Yeah, can I think of an example of that. Not off the top of
my head but probably we’ll go away and five minutes later I’ll think “Oh, you know, one of my
colleagues did a great presentation.”

Yeah, and we can come back to it if it occurs to you.

But nothing immediately occurs to me.

Yeah, I mean, I think on some level, the fact that nothing occurs to you oftentimes speaks to the exact tension...

Yes.

You’re describing.

[...] I mean there are some great communicators out there who are from the field and are very good at communicating, but maybe not for the general public, more for technically minded people who are maybe on the fringes, like undergraduates and so on. So, for example, there's a guy called Chris [name @ 17:28 1st video]. I don't know if you've come across his blogs, but he writes these amazing blogs explaining various bits of machine learning and showing how machine learning kind of works underneath. So yeah, I mean there are a lot of people who are good communicators, but for a technical audience; it's very difficult to bridge the gap to a nontechnical audience. So, what I like, so sometimes I go into schools, and in fact I also did a so-called master class for some journalists here, actually, organized by DeepMind, and there I think you can convey the idea of what a neural network is actually doing. The actual idea of a neural network that learns, for example, to label an image. I think you can convey to anybody, if they're willing to sit down for ten or fifteen minutes, actually at a moderately technical level, exactly what these things are doing. You can explain what an artificial neuron does to almost anybody, you can explain that it learns by modifying the weights on the connections between the neurons, you can explain that you feed an image in at one end, you get a sort of answer out of the other end, you work out how wrong the answer is, and then you adjust all of the weights to make it a little bit better for that example. And when you do that for thousands and thousands and thousands of examples, gradually the network gets better at classifying images, and it will be quite good even at ones it's never seen before. So, you know, even there, that was just a few minutes with a few diagrams, and especially if you do some animations, just conveying that, and then explaining that that's at the heart of a lot of what is around today, dispels a little bit of the magic, I think. But also you don't want to dispel it too much; there is a kind of magic, it's amazing what it does do, and you don't want to give the impression that there's nothing interesting there.
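That verbal picture maps almost directly onto code. The following toy sketch, with made-up data and sizes, does just what is described above: a single artificial neuron takes an "image" as a vector of pixel values, produces an answer between 0 and 1, the error against the label is measured, and every weight is nudged a little, repeated over thousands of examples until the answers improve.

```python
import numpy as np

# A toy version of the learning loop described above. The "images" and labels
# here are random, illustrative stand-ins, not a real dataset.
rng = np.random.default_rng(0)
n_pixels, n_examples = 64, 1000
images = rng.normal(size=(n_examples, n_pixels))          # stand-in "images"
labels = (images[:, 0] + images[:, 1] > 0).astype(float)  # made-up ground truth

weights = np.zeros(n_pixels)
bias = 0.0
learning_rate = 0.1

for epoch in range(50):
    for x, y in zip(images, labels):
        answer = 1.0 / (1.0 + np.exp(-(weights @ x + bias)))  # the neuron's answer
        error = answer - y                                     # how wrong it was
        weights -= learning_rate * error * x                   # nudge each weight a little
        bias -= learning_rate * error

predictions = (1.0 / (1.0 + np.exp(-(images @ weights + bias)))) > 0.5
print("training accuracy:", (predictions == labels.astype(bool)).mean())
```

Real image classifiers stack many such neurons into layers, but the weight-nudging loop is the same idea.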

And that it's routine, yeah. One of the things that I was going to follow up on is the responsibility that you see for yourself, and it sounds like you are engaging with this question and, on some level, gently asking for a bit of patience from the audience.

Yeah, yeah.

But also conceding that you have to take a little bit of time as an educator as well.

Oh absolutely.

System by system rather than AI at large.

Yeah, yeah, I mean I think all of us who are working in the field have a kind of duty of public engagement, actually, so I do take the whole business of public engagement very seriously, and I've done a lot of it and I quite enjoy it. And I think that, if you find the right language, the right imagery, the right metaphors, pictures, you can convey a lot of the essence of what quite complicated systems are doing, you know.

Yeah, I mean especially as these technologies are more interwoven into our everyday lives, it’s
not just the everyday person, but our leadership and governmental leadership...

Yeah, absolutely.

Is quite ill-equipped to make any level of decision...

Yeah.

...on these systems. Of course, playing catch-up in regard to things like regulation, but also just
finding strategic ways to ensure the public has access to these increasingly powerful systems
that are being introduced.

Yeah, sure, so, of course. Of course, it’s important that the public know about these systems.
Even more important that policymakers and politicians understand, you know, how they work.
And I think it’s important to have, you know, I think there’s a role for people, you know, who kind
of bridge the humanities and the sciences in kind of becoming, you know, sort of semi-experts
on the technical side of things and being in a position to communicate to policymakers and the
public and whoever, you know, that kind of thing. I guess that’s kind of what this project is all
about.

Yeah, that’s right. So, the next set of questions depending on your own background or your own
interests feel free to take a pass on some of them if you choose, but they really are based on
elements of conjecture, or your impressions, so not necessarily domain expertise, but by virtue
of your work at Imperial and elsewhere as well. So, one of the things we’ve been quite
concerned with is thinking about the ways in which AI systems have changed the way that
people have worked. I’m wondering if you can share a little bit on, on your observations and the
ways in which these early AI systems have shifted the way people work up until now.

Well, well actually can I shift the question a teeny little bit?

Because in answering that kind of question, I think sometimes we overestimate how important AI is as opposed to other kinds of automation.

Yeah.

I think we see examples of, you know, changes in the way people work around us all the time,
just for example, you know, ten years ago [...]

Self-checkout.

So self-checkouts didn't exist ten years ago, and now every big supermarket, you know, has a whole row of self-checkouts, and there's only a handful of people, you know, actual humans, who will do this. And, you know, it's interesting, because you might have thought all of these jobs were going to be automated by robots. But it's not like that; you don't have a robot sitting at the checkout typing in the total and getting out money. It's a totally different way of thinking, because you didn't need AI to automate that job, you just needed to rethink the flow of people and how they were interacting with the whole payment system and all that kind of thing.

Reconfigure the notion of customer.

Well, if you like. That’s right.

Self-serving customer.

So then it's a sort of process engineering kind of thing that you're doing, you're doing automation. There's no AI involved, or you can probably start to bring AI into these things these days in all kinds of ways, by figuring out what people are buying and then marketing things to them. But just the whole checkout business you can do with some hardcore electrical engineering and then rethinking the way the shop is organized, and so on. And I think a great deal of that kind of thing is going on, and maybe that has more of an impact, almost. That's the umbrella thing to think about. So, I think one important lesson is not to assume that things are going to change by having all the same jobs done in the same sort of way, just done by humanoid robots instead. Maybe we just can't imagine quite how things are going to change, how automation will be brought into things in ways that we can't really think of yet.

I think that's completely fair. So, to put on your prediction hat, how can you imagine things changing in the coming twenty years?

Well, I’m very reluctant to put on a prediction hat, I have to say.

Most people are.

I mean I don’t really do that,

Especially on screen.

Let me tell you the question I'm asked all the time, and I'll give you my answer to that. It's, "Oh, we're here at DeepMind, we're interested in producing artificial general intelligence, that is the sort of science fiction vision [missed @ 25:32 1st video], so when is artificial general intelligence going to happen?" Okay, and my answer to that is that I think there is some number of scientific breakthroughs that we need to make before that's going to happen. I don't know what those scientific breakthroughs are, and I don't know, you know, how many there are even; maybe there's only one, maybe there are ten. I don't know when they're going to happen. They could be ten years apart, you could get three in a row, the first one might be fifty years away. So, in other words, I have absolutely no idea, and I don't believe anybody else does either, no matter how much confidence they express their answer with. But of course, with hindsight, it will be obvious; at the moment, you know, I don't think we really know.

That's fair. So what tensions would you expect, do you think, in regard to human dignity when developing AI systems are introduced into particular job markets?

Well, I guess, I imagine that if automation does lead to, you know, significantly increased unemployment, or, some people actually challenge whether that's the right way to put it, but, you know, significantly less need for human labor, let's say, if that happens, then I think the issues are the same as we have today for people who become unemployed, who go out of work involuntarily. And I think, you know, the question is how people find meaning in their lives. If there genuinely aren't enough jobs to go around, if there isn't really enough human labor that needs to be done at some point in the future, then maybe we need to rethink how we live our lives and what constitutes a meaningful life, and maybe that's the sort of question that will come up in the future. I think the challenge to human dignity for somebody who loses their job is that they often lose a sense of what they're living for: you know, how are they serving society, what are their projects on a day-to-day basis. And I think those things are very, very important, but maybe, you know, we can rethink our lives in a way that allows us to find meaning in different ways, in ways that are not as connected with the economy [...] Maybe more to do with self-fulfilling projects; that could be, you know, artistic projects, gardening, literature, you know. Preferably not just, I don't know, sinking into drug addiction and overeating, or, you know, spending too much time on the sofa, all those sorts of things.

Yeah, can you imagine any leadership areas where that, that might be a guided process?

Can I... what was that, sorry?

Any guiding leadership in, in that arena. I mean do you think governments would take some of that responsibility, would it be community [missed @ 0:49 2nd video]

Who knows. I just, I really.

It’s a real shift, right?

Yeah, it is. And I think it's difficult to really say, you know. We might look back in, say, ten years' time, and there's no doubt, regardless of whether we achieve AGI, that we're going through a period of great change, I think, with accelerating technology. And I think, you know, that is going to have a big effect on society over ten years, but we may be surprised in ten years' time by how much things look the same, right? You know, who knows. But they may be very different. I just find it very difficult to predict, to be honest.

Yeah, I mean the predictive qualities are difficult, but I think they're useful as exercises to prepare as well.

Yes, I mean I do too. I mean, I wrote this whole book, The Technological Singularity, right? And there, my take on that concept is certainly not, I'm certainly not saying, "Oh, I think there's going to be a technological singularity." Rather, it's that I think this is a very interesting concept. I think we need to think through the different scenarios for what the world might be like if, you know, human-level AI really does come about. Because I do think that, if that were achieved in fifteen to twenty years' time, it would have a very radical effect on humanity. And it's very difficult to imagine quite how that would be. You know, some people think it could precipitate utopia, some people think it could precipitate dystopia, and some people think it would be kind of business as usual, I suppose. You know, I think it's very difficult to predict.

So...

But, sorry, but useful to think through different scenarios. That was the...

Sure. Yeah, so...

But don’t ask me to think through the different scenarios right here and now.

But we can come back to something simpler which is the, the relationship between humor, a
human individual and the user of that system.

I think we should go with the humor version of that question.

You can take it in that direction if you choose. I’m wondering if you can describe a tool that
you’ve worked with in, in particular observation...

Yeah, I think you’re going to have to repeat that cause I was distracted by the whole humor
thing, so, so, so, so...

Yes, right. It would probably be a lot more fun than this question.

So, so what was the actual question?

So, basically, can you describe a tool, an AI tool where you believe power's been transferred from the human user over to that system?

Well, I think it's probably inappropriate to think of the power being transferred to the system, but the power may have nevertheless been drained away from the individual.

Okay, can you describe that a little bit further?

Well, because saying that the power is transferred to the system, I think, attributes far too much agency and purpose to the system, right? So, I mean, maybe power goes to the builders of the system, maybe power goes to them, but it's not going to go to the AI system, because the AI systems of today don't in any meaningful sense have agency in the way that we do. But nevertheless, you might feel disempowered as a user, so...

Especially if you’re not familiar with its limitations.

Yes, well, even if you are familiar with its limitations; in fact, you may feel more disempowered because you know what's going on. So, I mean, examples are things like, actually, this happened to me long before this was a thing: I had a mortgage application turned down because there was some confusion about postcodes, right. There was some postcode or zip code which was incorrect on the house that we'd lived in. And we had to get this fixed, and, you know, it's difficult to get a postcode changed, right?

Yes.

And this leads to some anomalous thing, right, and then the neural network, you know, takes a huge amount of the data that you provide for it to make the decision. But then some little thing like that can trigger a negative decision, and you're thinking, I have a perfect credit rating, you know, I'm borrowing nothing near the amount that would... you know, what's going wrong here? And you just constantly get "the computer says no", and when you talk to humans on the other end of the phone, they're saying, "Well, I don't know. I just report what the computer says. We can't do anything." And then you really do start to feel disempowered, right. But it's not because power has, you know, accrued to the system, I don't think; the power has just been drained away from you, disappeared into the ether almost. The reason I'm resistant to putting it that way is because it sounds like there's something insidious going on in that system, that it's got the power.

[missed @ 6:02 2nd video]

That is not what I mean is happening. But the power has certainly, to some extent, gone from you, and that's partly because it becomes very difficult then to kind of get out; you're a sort of outlier case, and it's very difficult to then engage with any part of the system that can deal with those outlier cases, and you have to get it bounced further and further up some kind of hierarchy until eventually some human with the authority and responsibility will actually...

The power will break. The imbalance, correct?

Exactly, that's right. So there you've got some genuine power. When you get to the human who's able to reevaluate the application by hand and say, "Yes, it's perfectly fine," then, you know, power is restored in a sense to humans, right?

Yeah, and I mean, I think it's a fantastic example, because this has actually come up recently in the United States in regard to postal codes: sometimes postal codes or zip codes that are associated with...

Yeah.

Lower income or racial, certain racial majority...

Absolutely, yeah.

And for individuals who may not actually be able to access the person

Yeah, yeah indeed.

Who can redistribute the power structure in this system, it actually can have a scaled effect.

Yeah, yeah absolutely. Absolutely.

On individuals. I think it's a salient example.

Yeah, yeah, indeed. It’s very kind of Kafka-esque. It really is like.

Very much so.

Somebody ends up in this.

Like a trial.

Yeah, in this kind of system. I mean, there, of course, it's a political, you know, a bureaucratic system that's trapped them. But it's the same kind of effect, that there's no person who can ultimately break this kind of trap.

Yeah, and it is a bureaucratic system, but the tool is just, on some level, emulating features of that system in a scaled and faster...

Yes.

Delivery, right?

You’re saying when you’ve got technology to spare. Yes, indeed, of course.

Yeah.

You amplify all those things.

So, I'm wondering if, working within either the mortgage example or perhaps another example, we might be able to talk about how an AI system might undermine human decision-making? Where perhaps someone defers to...

Well, I believe there have been some studies, but this is where I'm hesitant to come out and say, "I believe there have been some studies," where normally I would go and find what the study was and read it before I cited it, right.

Fair.

But, so, I feel a little bit uncomfortable about doing this, but I believe there have been some studies that show that people are more willing to take the word of an AI system in certain circumstances than they are of a human being, because they somehow think it comes with some kind of authority because it's a machine, which is interesting. But you would have to chase up the relevant work.

In regards to...

Actually, we are running out of time.

Okay, can I ask this last question?

Yeah.

What do you see as, or what do you perceive as valuable in the prospect of machine
autonomy?

As valuable? Oh, well, I mean, there's no doubt at all that machines have the potential to take better decisions in many cases. So, you know, self-driving cars are an example. We may not be anywhere near there yet, but humans are prone to making the most appalling decisions when it comes to, you know, driving big metal boxes around at high speed amongst a lot of other metal boxes. And maybe that is the kind of decision which is better handed over to a machine which can accumulate very large amounts of data and react very quickly to what's going on. So, I think those are the kinds of cases where indeed maybe it's a good idea to hand over those kinds of decisions to machines. But, of course, you've got to get it all right. I mean, self-driving cars are an interesting example, because they're a case where, if you're driving in normal road conditions on a big highway or something, then, you know, it's probably not going to take that much engineering to build self-driving cars that are going to save way more lives than they cost on the occasional time when they make the wrong decision. But if you think about driving in general, you know, in rural areas where all kinds of strange things might happen, or in countries that have different traffic laws, some countries I've visited have negligible traffic laws, and it's not to say, you know, that that causes more accidents, but a lot more depends there on human interaction between road users, little kind of signals and things. And it becomes just a lot more difficult to build a self-driving car under those circumstances. But, going back to an earlier point, you could also think of all kinds of ways in which things change as more self-driving cars get on the road; I was thinking of the earlier point about how you might have to rethink the way a system works rather than imagining a very human-like intelligence at work. So, in the case of self-driving cars, the more self-driving cars you have on the road, the more potential there is for them to communicate with each other directly. So, if all the cars immediately around you are also self-driving cars that can communicate in milliseconds with each other about what's going on, that's potentially a way safer position.

And it almost comes back to the video game motif on some level, in that you're dealing with limited variables rather than the myriad of variables at play in the social construct of human drivers.

Yes, that's right.

Human drivers...

So, there's a limited, you know... For sure, AI works better when you constrain its domain of application nicely. That's one of the differences between the kind of specialist AI we have today and the human-like artificial general intelligence that we'd like to get to: you really do need to circumscribe the situation in which the system works, and then potentially you've got something that can work very well.

And what kind of guidance would you provide if the system were to cause harm in your mind?

What, what kind?

Kind of guidance.

What kind of guidance would I provide? I think, I think you’d have to be much more specific for
particular cases that’s, that’s a really general question. I have no idea how to answer such a
general question.

Well within the context of the, the driverless cars. If they were to cause harm.

If they were to cause harm?

If they were to cause harm to human life, not to another driverless car.

Again, I’m not quite clear exactly what you’re asking. So if you were to show that for example,
they caused more accidents than, or caused more deaths than they prevented. Is that the kind
of thing?

Well I think that that’s an unlikely scenario.

Yeah, indeed.

But if we’re starting to work within the scenarios of considering the acceptable level of harm to
humans...

Yeah.

To justify.

I'm not the person to answer these kinds of questions. I think that's a question for society to have a conversation about, and for policymakers to think about. And, you know, I mean, people like me can provide... I think we have a responsibility to think about those kinds of things, of course, and maybe more of a responsibility than the public, as people working in those kinds of sectors, but ultimately I can provide some technical input; I can't make the ethical judgments. I'm not a moral philosopher, you know, or a policymaker or a politician, or somebody working in the humanities in academia, who might be in a better position to address some aspects of those sorts of questions.

Any last thoughts you'd like to share with us before we close?

I think my current thoughts are that I’m five minutes, six minutes late for a meeting that I should
be in.

Alright, well thank you very much.

Sure.
