Uncorrected oral evidence: Artificial Intelligence
Tuesday 17 October 2017
4.35 pm
Witnesses
I: Professor Christopher Reed, Professor of Electronic Commerce Law, Queen
Mary University of London; Jeremy Barnett, Barrister, St Pauls Chambers, Leeds
& Gough Square Chambers; Professor Karen Yeung, Professor of Law and
Director of the Centre for Technology, Ethics, Law and Society at the Dickson
Poon School of Law, King’s College London.
1. This is an uncorrected transcript of evidence taken in public and
webcast on www.parliamentlive.tv.
2. Any public use of, or reference to, the contents should make clear that
neither Members nor witnesses have had the opportunity to correct the
record. If in doubt as to the propriety of using the transcript, please
contact the Clerk of the Committee.
3. Members and witnesses are asked to send corrections to the Clerk of the
Committee within 7 days of receipt.
Examination of witnesses
Professor Christopher Reed, Jeremy Barnett, Professor Karen Yeung.
Q30 The Chairman: Thank you very much indeed. You are all supremely
qualified to answer our questions today. First, let me ask a very broad
general question. In your opinions, what are the biggest opportunities
and risks for the law in the UK over the coming decade in relation to the
development and use of artificial intelligence?
Professor Christopher Reed: I can give a very brief answer.
Essentially, there are no opportunities for the law, because the law has to
follow the developments of society rather than try to lead them. Every
time I have seen the law try to lead society in a particular direction, it
has failed.
On the risks, the biggest risk is the risk of rushing into regulation too
early. We have a long, long history of prospective regulation for
technology. I can think offhand of only one example that really worked.
All the rest were failures. We cannot reliably predict the future.
name in any sector and in the delivery of services. We can all do that in
an automated or partially automated way. It is an extremely powerful
technology, and you can see very quickly why it is generating so much
excitement because of its capacity to be seriously disruptive across the
board.
The Chairman: Let me stop you there, otherwise we are not going to be
able to make any progress on the questions.
the English legal system is the finest in the world and is recognised as
such. We think there is a great opportunity here to develop that skill. You
may be aware of the recent announcement about a new IT court, down
the road from the Royal Courts of Justice. The hope is that London will
become the centre for this type of innovation, but the governance and
regulation go hand in hand with the innovation. I see a tremendous
opportunity if we get this right.
The problem with saying prospectively, “We have this thing called an
algorithm and it is dangerous”, and regulating it, is that you end up
regulating activities whose regulation we are already happy with, such
as driving and using cars on the road and the financial services sector,
and you take it right down to regulating the contents of my smart fridge.
If I have a smart fridge that is busy designing my diet and I do not like
so much tofu and quinoa, I can simply turn the fridge off. I do not believe
that needs to be regulated. At the financial services end, we are not
regulating the algorithm, we are regulating the financial services industry
and, therefore, indirectly, its use of algorithms. The danger is that we
jump in and say, “We have this thing called an algorithm and we are
worried about it”, and we end up regulating the whole of life.
The Chairman: We will explore all that further. First, Viscount Ridley and
Lady Grender have supplementaries.
Viscount Ridley: Professor Reed, you said there was one example of prospective regulation
that was a good thing. You did not mention what that was. Were you
talking about the Warnock committee or something like that?
The Chairman: You could say that the jurisdiction aspects in another
directive are more important. Those created more certainty than the e-
commerce directive. That is a debate we could have, I suspect.
Viscount Ridley: You do not see bolting horses? There was a lot of talk
of bolting horses in the last session.
The Chairman: You do not see them trotting away at this point?
Professor Christopher Reed: No. I can see areas where we need to get
worried. I look at my Facebook news feed and I do not necessarily see
what I want to see. I know that my Google searches are not the same as
everybody else’s, and I do not know how they are filtering things out.
That is starting to concern me, because I would like to know what is
going on and have more control over it.
The Chairman: We are all madly agreeing with our questions, in that
case.
Q31 Lord Swinfen: You have probably answered a lot of what I am going to
ask already. Does the ethical development and use of artificial
intelligence require regulation? What should be the purpose of that
regulation?
The Chairman: You might want to go on to add the supplementary
questions, as they give a bit more granularity.
Jeremy Barnett: I think the purpose should be safety, in the same way
as we expect people to put safe products on the market, echoing what
Professor Winfield said earlier. The Health and Safety Executive has
brought about safety on construction sites and has changed the way in
which the world works. People used to be dismissive, but now people can
Professor Karen Yeung: It may take the form of legal rules that are
enforceable or it may take the form of something else. If that is the
understanding, it is fair to say that we already have some legal regulation
of these systems when they are applied in a particular context. It is not
true to say that there is nothing there. There is already some legal
regulation of the systems that we have in place. The question is whether
that is adequate, given the capacity, potential and risks associated with
these decision-making systems. My concern is that they are already
being used in ways that make really consequential decisions about
individuals and have an effect on society and the community more
generally. Again, the fake news debate is a great example of how there
are real, tangible consequences from using these systems, yet we have no
really effective way of governing them.
In some ways the horse has bolted, and that is exacerbated, I think, by a
Silicon Valley mentality. Facebook’s motto is “Move fast and break
things”. They do not have a general approach of, “Let’s get out there and
talk to the regulators and find out what the problems are and come up
with a consensual solution”. As it is such a competitive field among the
giants, they say, “Put it out in beta before it’s ready and let’s try to
get the dominant role in the industry”. There is a problem. They are
powerful, they are having profound effects, and yet there is a vacuum in
the governance of the values built into these systems and of their social
consequences. The Government have a responsibility to do something.
The Chairman: You are not resiling from your view that we are not quite
ready for additional regulation in most areas.
Jeremy Barnett: The problem with using the corporate route is that
the company may have disappeared and the algorithm may then do its
own thing.
Lord Swinfen: How do you punish the algorithm, if you need to punish
it? The algorithm does not have any money of its own to be fined, as far
as I am aware. Neither does it have a personality that you can actually
put in jail.
Jeremy Barnett: If I might say, we have now got to the nub of the
issue. We talked earlier about ethics and encouraging people to be
trained to do things properly. My view is that the question has to be:
what is the sanction for people who do not do it properly? There has to be
a whole debate on the sanction. In regulation, when I go to a tribunal I
try to persuade the tribunal that my clients have done nothing wrong, but
if they have done something wrong and we move on to the sanction,
what are we going to do?
Lord Swinfen: Yes, that may be easy, but the algorithm changes, as I
understand it, of its own accord.
Lord Swinfen: How could the designer be responsible for the changed
algorithm?
The Chairman: I can see the other witnesses twitching here. I must
bring in Professor Reed and Professor Yeung.
Lord Swinfen: The problem there is if the algorithm changes of its own
accord without human intervention.
Professor Karen Yeung: The question whether you should ascribe legal
personality to an algorithm depends on why you are asking the question.
These questions are part of a broader set of quite difficult questions
about the distribution of authority, responsibility and liability and, in
particular, about who is going to be held legally responsible if there is
harm, and how to ensure that an innocent party is not left holding the
baby. One possibility is that you might want to think about giving legal
personality to the algorithm, but the answer to that question must be
driven by how you envisage the distribution of loss, liability and
responsibility more generally. You have to focus on those
questions first and think about whether you want a no-fault
compensation system or a negligence-based compensation system, in
which case the breaking of the chain of causation because of the lack of
reasonable foresight is a problem.
The Chairman: Thank you. You have given us quite a range of options
there.
Professor Christopher Reed: This is one I have done a lot of work on.
Of course I think I have a very strong understanding, but then I would,
wouldn’t I? Do existing legal mechanisms work? The answer is yes, they
are always going to work. The law will find a solution. If we have a
liability claim, the law will find somebody liable or not liable. It will
answer the question. However, there are two problems with that. One is
in producing the evidence that is necessary to answer the question. Let
us take the self-driving car as an example. Suppose there is an accident
involving a self-driving car and it is alleged that somebody was
responsible for the accident: the designer of the technology, the designer
of the car, the operator of the vehicle who failed to intervene. It is
alleged that some person should have behaved better.
To answer that question, we will need to dig into very difficult questions
about how the AI is working, which may be answerable only by obtaining
information from the designers, who may be in a different country. It will
make litigation horribly expensive, slow and very difficult. Where this is
a problem there are some comparatively easy answers. Indeed, in the
Vehicle Technology and Aviation Bill introduced last year, the answer was
quite simple. It said in effect, “Okay, the insurers are liable to pay out in
the event of an injury. There is no need to prove fault, and the insurers
can have pretty much any debate they like with anybody else they think
is responsible”. As I understood the Bill, it was basically saying the
insurer is liable if an accident is caused by a self-driving vehicle.
The Chairman: You are touching a raw nerve here, you know, because
Lord Levene was chairman of Lloyd’s before.
Lord Levene of Portsoken: I will not answer that. Professor, you were
talking before in a slightly different context about the fridge that had a
menu for you that you did not like. It has been suggested that the cause
of the Grenfell Tower disaster was a fridge malfunctioning. I think it is
unlikely that it was a very sophisticated fridge, but let us suppose that it
was and that is what caused it. The implications and ramifications of that
fire are enormous. How would one deal with that? You could say, “Go to
the insurers”, but where are they going to go?
tested properly. That can be dealt with at the moment. We are just
holding the line.
Our concern, put very simply, is that when these systems start trading
with each other in clusters, it will be very difficult to work out what may
happen. We will not be able to predict what will happen. At the moment,
the only real way to test an algorithm is to fire dummy trades at the
algorithm. People who write their algorithms will not give you the secrets.
It is their secret sauce, as people in this area say, so they will not allow
people to investigate exactly why the algorithm has failed. The only way
you can work out whether the algorithm may fail is through the testing regime at the
outset. Work is being done on testing these systems, but our concern is
that the gap will exist when they trade with each other.
The Chairman: Thank you. If you are not careful, we could spend hours
firing propositions or hypothetical situations at you, but it is very, very
interesting.
Jeremy Barnett: They are all fighting over the cryptocurrencies and how
to deal with those issues. There are papers about whether the SEC should
Lord Hollick: Talking more generally, not just algorithms, are there any
regulatory initiatives that have been taken in other countries that we
should be looking at?
Professor Karen Yeung: I would say simply that the European Union
has definitely been the world leader on the regulation of automated
decision-making, without question. There are some mechanisms in the
GDPR. I do not think anybody has taken any steps in relation to
meaningful ethical regulation of AI systems. There are voluntary
initiatives abroad, such as the one the IEEE has talked about, but when
it comes to grappling with these questions it is the Wild West; nobody
really knows what data ethics is. I think there is a real challenge there.
The answer to the question whether the UK could lead the way is yes, if
it adopted an approach whose thinking was global rather than parochial.
We already have a number of examples. The law on the most complex
form of electronic signatures is based on the model established by the
state of Utah going off on its own in the late 1990s, saying, “Wouldn’t
this be a good idea?”, and everybody else in the world picked up on it
and said, “That’s the idea. At least someone knows how to regulate this”.
Regulatory ideas spread very effectively, but they have to be ideas that
are adoptable elsewhere.
Q34 Viscount Ridley: I was going to ask about autonomy and legal
personhood. We have covered that ground quite well, so if you do not
mind and with the Chairman’s permission I will ask you a completely
unscripted question. The word blockchain has come up a couple of times
in this session, which it has not in our previous sessions. I am curious
about the degree to which the whole blockchain idea—smart contracts,
distributed ledgers and that sort of thing—is part of the AI conversation
or a separate one. Perhaps cheekily, are you as lawyers worried that
blockchain is going to do you out of a job by doing away with the
middleman?
Jeremy Barnett: I became interested in smart contracts, because I was
told that it was the end of lawyers, but I think it is the beginning of
lawyers; the lawyers have something else to argue about. Blockchain is
very important. We at the CBC at UCL believe that the real power of
blockchain lies in its use in conjunction with other technologies: with
artificial intelligence; with the internet of things, so that we can track
people around a building, which relates to the Grenfell Tower issue that
was raised; and with building information modelling and the computerised
design of buildings. A paper has just been written, and published
yesterday, about
the ownership of data, which, again, you discussed in the earlier session.
The paper has looked at blockchain-enabled artificial intelligence.
Blockchain provides trust, which, again, is one of the points that was
made earlier. It builds in trust and it assists automation.
data and control their data. They can control their consent. One of the
problems with the GDPR is the right to be forgotten. How can that
happen on a blockchain, which is an indelible record?
The Chairman: You do not need to go much further. The next question
from Lady Rock will be exactly in that area.
Jeremy Barnett: It is a big subject. Who owns the data? The individual.
For public data, systems have to be developed in order to allow the public
to advance learning in, say, the health service without compromising the
individual’s rights. The concern I alluded to earlier is the idea of these
decentralised, autonomous organisations controlling data, being
programmed to send the data to certain places in advance and deciding
whether or not that organisation can benefit. That is the concern. We
cannot say where the money goes until we have systems in place that
allow the control of the benefit. The building block is to design systems
that allow people to control the data, and then the decisions can be made
in each case as to where the benefit should lie.
Professor Karen Yeung: I wish I knew the answer. Your question about
ownership of data in relation to both personal data and non-personal data
is one of the biggest challenges of our generation. I do not think we are
anywhere near solving that. The problem is the unique nature of
data. It is a public good. If we aggregate data we can generate incredibly
powerful insight. On the other hand, especially with medical data,
sometimes it is deeply intimate and personal, and people do not want to
share that, but if we did share it we could generate amazing insight.
People are also very mistrustful that that data may be used to
prejudice their interests in ways that they do not understand or
recognise. These are real problems, and I wish I had the solutions. I am
sorry that I do not.
For personal data, it is really much more difficult. I have come out of an
hour’s discussion with PhD students about personal data being used as
part of a big data set for improving healthcare, how that links to the right
to proper healthcare under the EU charter and how we try to balance that
with the right to data protection. It is a very difficult question, but it
seems to me that a society is entitled to take the view that, for example,
the wider public good in discovering a cure for ailment A is more
important than my individual desire to stop my data being put into a data
set for this kind of analysis. Society has to take that view collectively.
What I object to is it happening quietly and behind my back and there
being no discussion about the proper way of doing it and protecting me.
Q36 Lord St John of Bletso: In your introductory remarks you all spoke
about the many benefits that AI will provide for the legal profession
through natural language processing. Clearly, AI will make a lot of
services more efficient. How do you anticipate this developing over the
next decade?
Professor Christopher Reed: It is a difficult question. I gather that
Professor Richard Susskind has given written evidence and may come to
speak to you. He probably knows more than anybody else about this. I
am more of a sceptic than Richard is, and I have teased him over the
years that his predicted benefits to wider society, through the law
becoming understandable to everybody and capable of being applied by
everybody, are always “just around the corner”. I think the risk for the
legal profession is at the small-disputes end, in disputes that, to be
honest, are too low in value for legal help to be available.
The Chairman: I do not think you are very far away, if you are talking
about the next five years, from Richard Susskind.
Professor Karen Yeung: I do not feel that I have the expertise to talk
about what I think will happen in the legal services industry, although
what Chris says resonates with me. I imagine that all those really boring
tasks that I really hated as an articled clerk will no longer have to be
done by a human. That is a cause for celebration, personally. I am a little
more concerned about the implications of democratising automated
legal advice systems. Perhaps I am old-fashioned, but I believe that
lawyers are custodians of the rule of law and the values associated with
respect for the law. The law is not merely instrumental, about doing the
absolute minimum to comply. The law, for me at least and I think
generally, is a moral enterprise. I am worried that we may lose touch
with the normative underpinnings of the law if we rely excessively on
machines to generate decisions based on historical analysis of cases and
legislation.
Lord St John of Bletso: Has the Law Society come out with a position
paper on this at all?
Professor Karen Yeung: They are already doing that. Those services
are already available in the US.
The Chairman: Thank you. I am going to bring in Lord Hollick, who has
used an awful lot of lawyers in his time.
Jeremy Barnett: No, we have not yet. It is too complex. We are looking
for use cases. At the moment we have designed the systems in theory
and we are in the university—
Lord Hollick: Would it be binding? Could you enter into it only if it was
binding?
Q37 Baroness Grender: What is the one recommendation each of you think
this Committee should make?
Professor Karen Yeung: Can I cheat and have two? Is that all right?
incentivising. There are lots of other ways of doing it. I can think of ways
of working with the liability system such that people would say, “If you
cannot, in your algorithm, explain, after the event, why the car crashed, I
am not going to buy it because my insurance will be too expensive”. That
is one example. Another is to recognise the difference between asking for
ex ante transparency—”you must explain this technology before you use
it”—and being able to explain after something has happened how it
happened. The former is very expensive to do; the latter is much
cheaper. You must not mix them up, otherwise you end up insisting on
something unnecessary.
Professor Christopher Reed: You could do, or you might say that ex
ante is more important for something like public trust than it is for
deciding liability.
The Chairman: Thank you all very much. You have provided the second
half of a very stimulating afternoon. I think it is wonderful that the legal
profession is still going to have a future. I am not sure that sentiment is
necessarily going to be shared among my colleagues. Thank you very
much indeed.