Uncorrected oral evidence: Artificial Intelligence
Tuesday 24 October 2017
3.30 pm
Witnesses
I: Dr Marko Balabanovic, Chief Technology Officer, Digital Catapult; Dr David
Barber, Turing Fellow, The Alan Turing Institute, Reader in Computational
Statistics and Machine Learning, UCL; Dr Timothy Lanfear, Director, EMEA
Solution Architecture & Engineering team, NVIDIA.
2. Any public use of, or reference to, the contents should make clear that
neither Members nor witnesses have had the opportunity to correct the
record. If in doubt as to the propriety of using the transcript, please
contact the Clerk of the Committee.
3. Members and witnesses are asked to send corrections to the Clerk of the
Committee within 7 days of receipt.
Examination of witnesses
Dr Marko Balabanovic, Dr David Barber, Dr Timothy Lanfear.
Q38 The Chairman: Good afternoon and a very warm welcome to you. We
have Dr Marko Balabanovic, Digital Catapult, Dr David Barber from the
Alan Turing Institute and Dr Timothy Lanfear from NVIDIA. I have a little
rubric that I need to read out on every occasion, which I am sure will
bore everybody to death by the end of our evidence taking. The session
is open to the public. A webcast of the session goes out live and is
subsequently accessible via the parliamentary website. A verbatim
transcript will be taken of your evidence. This will be put on the
parliamentary website. A few days after this evidence session you will be
sent a copy of the transcript to check for accuracy. We would be very
grateful if you could advise us of any corrections as quickly as possible.
If, after the session, you want to clarify or amplify any points made
during your evidence, or have any additional points to make, you are
welcome to submit supplementary written evidence to us. Would you like
to introduce yourselves for the record and then we will begin with the
questions?
Slightly unusually, we have Divisions taking place at just about the time
that we are sitting, so if there is a Division in the Chamber while we are
sitting, the Committee will adjourn as soon as the Division Bells have
rung and will resume after 10 minutes. It will be pretty obvious when the
bells go off; all hell breaks loose and we come back after 10 minutes and
resume proceedings. I am sorry if that is going to cause a little bit of
disruption but, hopefully, as little as possible.
I am going to open the questions. My first question is what, in your
opinion, are the biggest opportunities and risks in the UK over the coming
decade in relation to the development and use of AI? That is a fairly
general question. We would like to follow up on that in view of Dame
Wendy Hall’s review. Do you think the Government’s review of AI will
help with these? Does it go far enough? Perhaps we could start with you,
Dr Balabanovic.
Dr Marko Balabanovic: In terms of the opportunities from our point of
view—and I think this is very similar to what is mentioned in the review—
there are going to be very few industries or walks of life that are not
affected by artificial intelligence, so we would see the opportunities in the
UK across every part of our society and every part of our industry. That
will include the formation of entirely new markets. For example, the use
of machine-learning techniques to improve speech recognition and
speech synthesis is now leading to a new market for assistants in the
home which operate with voice interactions, such as the Amazon Echo
Alexa system. That was not there before and it is artificial intelligence
that has made that possible. It is now in more than 11 per cent of US
households that have broadband. It is a very rapid rise.
Dr Marko Balabanovic: It was 11 per cent but that was some time ago.
I suspect it has gone up but I am not sure what it is now. They are not
always keen to release up-to-date figures, as you might imagine. For the
UK particularly, though, it is worth thinking about where we have
differentiated advantages. There are a few industries, such as digital
manufacturing (the higher-value, more data-centric side of
manufacturing), where the UK has a strength it could build on and use
to compete globally and successfully. Artificial intelligence tools for the
creative industries are a very interesting area that has not been picked
up as much by some of the other countries investing globally, and the UK
has
great potential because of its strength in the creative industries. An
obvious one is the area of digital health, where there are great
advantages to be had in improving health outcomes and producing a
more efficient health system. Shall I go on to talk about the risks side?
The Chairman: Hold it there for the moment. Do you agree with that, Dr
Barber?
The Chairman: An interesting aspect is the role that Dame Wendy sees
for the Turing. I am not sure whether you can answer the institutional
issues about the future of the Turing, but I am assuming that is under
consideration in terms of how the Turing is digesting Dame Wendy’s
report.
Dr David Barber: Yes, I believe so. I think the Turing would be delighted
to play the role of the UK national artificial intelligence institute. In my
personal view, it would require some significant restructuring of the
current framework of the institute and, obviously, a huge increase in
funding.
The Chairman: You might want a model a bit more like the Crick, where
you have this matched funding?
The Chairman: I am with you. Dr Lanfear, you are at the sharp end of
investment in NVIDIA, so you really have to put your money where your
mouth is, do you not?
Dr Timothy Lanfear: The two areas where we are going to see the
biggest societal impact are transportation and healthcare, in the short
term.
The Chairman: That in itself might be somewhat telling. Thank you very
much indeed. Now we go to Baroness Bakewell.
Q39 Baroness Bakewell: Gentlemen, it is quite clear that you are at the
brainy end of this whole enterprise. You have invested a lot of training
and knowledge in what you do and your careers, but where is the training
of the future going to focus? Where are those skills going to come from?
How far down the food chain are we all going to have to learn about AI? I
would quote you Andrew Orlowski who was here the other day. He talked
about his children going to an outstanding primary school in north London
where they are taught algorithms every week but only taught history
once or twice a term. He represents someone who knows the whole of
your business, but he also is anxious about the emphasis shifting so
heavily from the humanities. I am here to speak up for the humanities
and would like to know how you think it balances out. Who would like to
respond first? Dr Barber.
Dr David Barber: Gosh, humanities is an interesting one.
Dr David Barber: I can address the first part of the question a little
more comfortably. I would say that there is a requirement for a breadth
of training across the whole spectrum. For example, if you are looking to
plant flags in the IP landscape of the future in terms of AI, clearly you are
going to need much more training at PhD level in this country. Currently,
many of our competitors are investing very heavily in that area, and we
are very far behind in terms of investment in training.
Dr Timothy Lanfear: First, I think they should learn from the beginning.
The sooner you can start teaching people these things, the better. I have
a lot of sympathy for your position on wanting to defend the humanities.
Personally, I am a technologist during the daytime but in the evening I
am a musician. That is an important part of my life and I see that
humanities are being pushed out of our educational process. They should
have a much stronger place than they do today.
place to look at in the short term. Online training is now quite a large
field, and, effectively, most of the large companies in that space were
started by people who originally came from an AI background. Perhaps
that is just a coincidence but those were the popular courses at the
beginning. Most of those courses are co-created and co-sponsored by
industry and academics. They are often free or very cheap. It is a
somewhat disruptive model compared to in-person training and going to
university, but if anybody wanted to learn about these subjects now,
there is a wealth of material available. It is a very interesting time for
that.
Q40 Viscount Ridley: Can I bring you on to the question of ownership: the
ownership of data and of the outcomes of algorithms? You will know that
DeepMind’s deal with the Royal Free NHS Trust led to a lot of criticism
and Sir John Bell in his recent review of the UK life sciences industry said,
“What you don’t want is somebody rocking up and using NHS data as a
learning set for the generation of algorithms and then moving the
algorithm to San Francisco and selling it so all the profits come back to
another jurisdiction”. When UK taxpayers contribute to training and
research in AI within the UK, how can we ensure that they see a
meaningful return on this investment, and that UK state investments are
not simply subsidising AI enterprises in other countries?
Dr David Barber: That is a complicated question. I believe there is a
concept called the “golden share” which enables companies to give
money back to the contributors of the data. I believe the Department for
Transport is heavily involved with that. That is one potential mechanism
to see a return on the investment. There are sometimes difficult
relationships between companies and, for example, universities where
the companies are looking for privileged relationships with the
universities and it is difficult to protect some of the intellectual property.
There are difficult questions around how to retain the independence of
the universities in the light of companies wishing to have privileged
relationships with them.
Viscount Ridley: If we are too squeamish about this and we protect the
rights of individuals to be rewarded too fervently, the industry will not
come here and do anything.
Dr David Barber: In a sense, yes, I think we could be, but we are also
talking on an international scale. For example, if you think about
healthcare, the paucity of datasets in radiology is very difficult to
overcome at a national level. We are talking about international
The Chairman: There is a lot of work from the Royal Society and others
trying to address this data issue. Given the importance of the data—
good data and datasets—to which AI is applied, is that not one of the
issues? Has Digital Catapult addressed this issue at all?
The other thing to bear in mind is that the market dynamics in the world
of AI mean that for the organisations that have the large customer bases
and a large amount of data already, there is a network effect that leads
them to get bigger and bigger in that sense. It is really important to look
at addressing the market barriers for UK growth as opposed to the
incumbents, the so-called data monopolies or AI giants. These are six or
seven of the biggest companies in the world—you mentioned DeepMind—
which have been so successful they will have customers in the billions
and therefore they will have data about people on that sort of scale, and
they will attract skilled and talented people—and all this builds upon
itself. The risk of things going abroad can be countered by helping UK
companies grow bigger and not having them be acquired when they are
very young by some of these giant companies.
The Chairman: That is exactly what Viscount Ridley and I were trying to
stimulate you to express.
Viscount Ridley: Can I play devil’s advocate for a second? The data
from a scan of my pancreas, say, is of awfully little use to me because I
cannot interpret it or do anything with it, but if it goes off to California
and someone turns it into a tiny part of an enormous project that ends up
in a cure for pancreatic cancer, which I then benefit from in 10 years’
time, what is the problem? Why should I be fussy about something that is
not very valuable to me at this point? It is only valuable if it is processed
and turned into something else.
Dr Marko Balabanovic: The problem would only arise if the cost to the
UK health system of accessing the system you have described were very
high and if it were built using taxpayer-funded data from the UK.
Dr Marko Balabanovic: You are right. The model where you have full
control of your own data would be a very powerful way for things to
move forward. It does not tend to be how things are today where,
obviously, the scan would be held inside some kind of hospital trust, but,
according to the way these regulations are going, you would have every
right to access that data, hold it yourself and give it to a company in
California if you so wished, and there would be nothing to stop you doing
that.
The Chairman: If that is too tough, I think we will park that question
and interrogate the lawyers on a future occasion—or you could come
back to us.
Lord Hollick: Do the academic sector and the academic institutions have
the skills to fight their corner in this area?
Lord Hollick: So you have the commercial skills to make sure that you
get fair value on your side of the table.
Lord Hollick: Yes, correct. You are negotiating against some fairly
successful global companies.
Dr David Barber: I am not sure about that. That may be something that
we need help with. The Turing Institute, for example, could potentially
play that role for us.
Q41 Lord St John of Bletso: Should state and privately funded research
Lord St John of Bletso: I was reading in the robotic space that there is
a company called Robiquity which has been training students and trying
to prepare them for the practical aspects of robotics as well.
The Chairman: The Turing has quite a lot of relationships with business
now, does it not?
The Chairman: That is very interesting. How do you patrol the border in
the Digital Catapult?
Dr Marko Balabanovic: It is not our role exactly to patrol it, but we are
measured according to whether we can see UK growth and UK
productivity, and we would seek to do that in collaboration. If there can
be structures where some of the larger technology companies are able to
apply their investment into this country, we would see that as a positive
thing. If they are employing people in this country, we would see that as
a positive thing. If there are opportunities for UK academics to take their
ideas out of the universities and into industry, we would definitely be
helping that along the road and working with partners such as those
around the table to help make that happen. On the whole there is a
strong, positive story there.
The Chairman: These relationships are very fluid between the different
institutions, are they not, private and public and academic and so on?
That is the impression I get in AI in particular.
Q42 Lord Hollick: The UK has a rather long record of coming up with some
brilliant inventions but failing to exploit them and translate them into
developed products. Graphene, with most of its patents now owned
offshore, is a good recent example. What steps do we need to
take to ensure that we benefit from the excellent and leading research
that is being done here in the UK?
Dr David Barber: I think we are a little risk averse in terms of investing
in the UK, particularly in this AI space. There are a lot of tech
start-ups which came out of the UK but which in the end became
successful only when they went to Silicon Valley for investment. It is
unfortunate that this has needed to happen over the last few years.
there was a stronger emphasis here on nurturing early start-ups
particularly coming out of universities. Some form of national incubator or
accelerator programme might be very useful within the UK.
Dr Timothy Lanfear: First, I would like to say that the UK is doing very
well in this area. A McKinsey Global Institute study has shown that, while
the number of papers coming out of the UK may not match a very large
country such as China, the influence of those papers is disproportionate
to the quantity. We have a programme of good innovation in this country.
How can we keep it here? I think my colleagues here have asked for
investment, and that seems to be the right thing to do. There is a
burgeoning start-up industry in Europe in general. I know NVIDIA is
involved with something like 1,000 or more start-ups around Europe. I do
not have the exact number for the UK offhand, but start-ups here are
generating new ideas that can be brought to fruition and turned into
real products.
The Chairman: You are able to give advice to spin-outs from academic
institutions and help them commercialise?
there. They are looking to hire 100 PhD students and create nine new
research groups and 10 professorships. The scale is huge, but that is just
in Baden-Württemberg alone: one particular region of Germany. Each of
those investments is huge. They dwarf the scale of what is going on in
the UK. To give you some numbers, 1.48 per cent of the budget of the
EPSRC, the main funding council for AI in the UK, is allocated to AI, so
you are talking about roughly £13 million per year. This is a drop in the
ocean
compared to most of our competitors in this space.
Lord Hollick: Does the Alan Turing Institute see it as part of its role
to become an advocate for this?
Dr David Barber: I am not sure about the bias against the recruitment
of women. Certainly, it is difficult to hire women in this space—this is my
understanding—simply because there are not enough women graduates
in this area. I do not know if there is a particular bias against them per
se.
Dr David Barber: They are doing very well. They seem to have a very
specific strategy in AI. They are hoping to invest very heavily in that.
They really know what they are doing. They have excellent researchers
and very good research institutes.
Dr David Barber: I do not think they have done so yet. If you look at
many of the leading conferences and journals in this space, certainly they
are very competitive with us, but we are still doing very well.
Lord St John of Bletso: They have been making a few acquisitions. Did
the Chinese buy ARM and Skyscanner quite recently?
Q43 The Lord Bishop of Oxford: You will not be surprised to know that
ethics runs as a thread through all our questions. How do you ensure that
research and development in AI is conducted in an open and ethical
manner? What would you commend to the group and the wider industry
so that the benefits of the research are shared as widely as possible?
Could we start with NVIDIA? A number of us are curious as to why the
section on ethical implications in NVIDIA’s written submission was not
answered for reasons of space. It would be very good to hear your views.
The Chairman: You could expand that to include ethics in the translation
of research into practical products. We are interested in the gamut of
ethical behaviour.
take steps to educate your own workforce in its values and ensure
compliance?
Lord Swinfen: Do you see any limitations to this approach, and, if so,
how are you going to overcome them?
Viscount Ridley: I want to take off from Dr Barber’s point about the
difference between perceptual and reasoning AI and use it to explore the
issues of transparency and interpretability. We have heard lots of
different views on this. We have heard people saying it is absolutely vital
that all AIs must be interpretable and other people saying it is an
impossible and unrealistic way to go. In NVIDIA’s written evidence, Dr
Lanfear, you made the remarkable statement that the “assertion that
neural networks lack transparency is false”, and you go on to say that
the fact that ordinary programs are written in a programming language
“just provides an illusion of transparency. It is not possible for humans
to read and the only validation can be done through tests. From this
perspective, neural networks offer a significant advantage over the
handwritten codes”. In other words, things are getting more transparent
as we move to AI; is that what you are saying?
Dr Timothy Lanfear: Which are, in some sense, a black box, but that is
overstating the case. The people who write these pieces of software take
great care in designing them so that they can split them down into
smaller pieces that can be tested to be sure they are doing the correct
function. They can then assemble those pieces step by step to make
more and more complex things. We are using systems every day that are
of a level of complexity that we cannot absorb. Artificial intelligence is no
different from that. It is also at a level of complexity that cannot be
grasped as a whole. Nevertheless, what you can do is to break this down
into pieces, find ways of testing it to check that it is doing the things you
expect it to do and, if it is not, take some action.
Dr Timothy Lanfear: I was not aware of that analogy about the pencil,
but it is a very interesting thought. The role of the technologist is to
control what we are building: to test it, to be sure it really does do
what it is designed to do and not something different.
Dr Timothy Lanfear: NVIDIA wants to operate within the laws that are
around it and we want to behave in an ethical way. It is part of the core
values of the company to act ethically. If there are legal constraints, we
will certainly follow them. I do not see any reason why we should try to
work around them or avoid them. Those constraints are there for a
reason and we absolutely respect the reason that that regulation is in
place.
The Chairman: Dr Balabanovic, you are in the hot seat since you have
not said anything on this question so far.
The Chairman: Are you writing the Turing’s job description, I ask
myself?
The Chairman: Thank you very much. You have provoked our thoughts
and questions. I hope we have done likewise in a number of respects. If
you have additional things you want to write to us about, by all means
do. In the meantime, thank you very much; we really appreciate your
presence here today.