
AI and the Legal Profession

By William Webster
Associate

I’m sure everyone reading this right now is aware of AI, having seen many recent articles
in the mainstream media. There is no doubt that the advancement in this technology has
potentially huge ramifications across many different industries, including the legal
sector.
I’m hoping to address some key areas in this article, including AI’s current influence on
legal work, what it could achieve in the future, and the inherent risks and opinions
within the legal sector.

AI’s Influence

In short, there is a widely held view that AI will impact almost every profession, and the
law is no different. With the help of computer science methods such as natural language
processing algorithms and machine learning models, AI-powered legal research tools can
search through vast amounts of legal data, enabling lawyers to find relevant case law and
legal precedents more quickly and accurately than ever before. AI can help lawyers
rapidly assess documents and case law, and recognise key legal concepts and arguments.

By utilising predictive analytics and analysing data from past cases, AI algorithms could
also predict the outcome of legal disputes, allowing lawyers to make more informed
decisions. This could lead to faster and more efficient resolution of disputes and reduce
the burden on the court system. These same predictive methods could be utilised by in-
house counsel to identify possible risks or exposure within their own company's legal
policies and frameworks far more quickly than any human could, which in turn could
mitigate or even avoid the cost of litigation and of engaging the services of an external
law firm.

However, despite AI's undeniable potential, particularly in handling menial day-to-day
tasks and freeing lawyers to tackle more involved work, is there any merit in the claims
that it may soon replace lawyers and even judges? And if so, what are the ethical
considerations?

Ethical and Legal Considerations

There are a few areas of contention when it comes to AI, particularly given the generally
held opinion that law is a people-driven profession, whose decisions and outcomes can
have huge ramifications for individuals, businesses and more.

A key question revolves around the accuracy of AI-generated outcomes. Recently,
Steven Schwartz from New York firm Levidow, Levidow & Oberman had to apologise
for using an AI chatbot which provided him with fake cases for his client's submissions.
He had used ChatGPT in drafting submissions on behalf of his client, and several of the
cases that the AI programme provided turned out to be false. The court confirmed that
the cases "appear to be bogus judicial decisions with bogus quotes and bogus internal
citations". Screenshots provided in an affidavit revealed that Schwartz had questioned
the AI programme about the authenticity of the cases when 'chatting' with it.
ChatGPT said that a bogus matter, "Varghese v China Southern Airlines Co" was a "real
case" which "does indeed exist and can be found on legal research databases such as
Westlaw and LexisNexis." The chatbot also said that other fake cases provided were "real"
and could be found on "reputable legal databases."

Another case involves a radio host from Georgia, USA, who is suing OpenAI (the maker
of ChatGPT, one of the most popular chatbots) for libel. He claims that the AI has been
falsely accusing him of embezzling money from a non-profit company which advocates
for gun rights, with which he has no affiliation, and falsely describing him as the
company's treasurer and CFO (Chief Financial Officer). This clearly throws up a few
issues: most notably that anything a chatbot claims to be true should be met with
scepticism, but also the question of who is ultimately responsible if the chatbot uses the
identities of real-life people in its claims. Eugene Volokh, a law professor at UCLA
School of Law, has suggested that “if false information generated by ChatGPT
or similar AI models leads to harm and meets the legal criteria for libel, it could potentially
be subject to legal consequences”, which exposes companies like OpenAI to a massive
risk of future lawsuits, as well as causing harm to real people.

So perhaps AI has some teething problems to overcome before it can be trusted with
serious matters. Recently, the Master of the Rolls, Sir Geoffrey Vos, spoke about AI
replacing senior individuals in the justice system, saying, as reported by the Law Gazette:
“I believe that [AI] may also, at some stage, be used to take some (at first, very minor)
decisions.” The article goes on to note: "However, Sir Geoffrey concedes that there are
still limiting factors in the involvement of machines in judicial decisions. Key amongst
them is the ability of a justice system to inspire the confidence of its citizens and
businesses, without which it cannot function." This is an important point about the use of
AI in the legal profession: as a piece of technology, it will likely attract greater scrutiny
and analysis of its results and decisions, and will therefore need to be free of errors,
perhaps even more so than its human counterparts.

Conclusion

Legal publication The Lawyer recently held a roundtable discussion with 18
managing partners of notable UK and US firms, focusing on AI in law firms. Opinions
included "the prospect of a reduction of secretaries and paralegals in the office", but
"many argued that the present and primary risk was regulatory". They also suggested
that firms "risk losing money through a significant insurance payout if the technology
ends up making mistakes." Again, evidence seems to point to an acknowledgement that
AI will affect the profession but seems to still have much to prove.

For a more general overview of the legal sector's take on the technology, beyond just
the top 20 firms, the Law Gazette reported on a recent study conducted by the University
of Manchester, UCL and The Law Society that was slightly more damning:
"A study of attitudes to lawtech carried out by The University of Manchester, University
College London and The Law Society finds that a lack of understanding by, and
encouragement from, senior managers is proving a barrier to the uptake of technologies
such as artificial intelligence." It went on to say: "The survey of a representative
sample of 656 solicitors from across the sector in England and Wales found that less than
a third (32%) use even basic lawtech, such as legal databases and contract review
software, daily. More than one-third of the sample (35%) said they do not use lawtech at
all or do so highly infrequently."

In conclusion, the evidence above clearly details the ways in which AI can streamline and
greatly assist the legal process; however, there seem to be a few hurdles, and some
convincing to be done, before this can happen at a scale significant enough to truly impact
the whole profession. The question then may become: will the legal world’s reluctance to
adopt this technology ultimately impact its ability to effectively advise its more tech-savvy
clients?

One thing is certain: other industries will be using AI, for better or worse, and the legal
world can’t afford to be left behind.

For a confidential discussion, please contact Will Webster at Chadwick Nott.

(t) 0117 945 1634


(m) 0773 370 0509
(e) willwebster@chadwicknott.co.uk
