
The Future of Artificial Intelligence 2019
How to filter through applications of AI for banking business transformation

Expert Contributors:
Contents
The Future of Artificial Intelligence Report 2019
2 | The Future of Artificial Intelligence Report
Introduction

Chapter 1:
Artificial intelligence hype

Expert view
VocaLink

Chapter 2:
Avoiding artificial stupidity

Expert view
Pelican

Chapter 3:
Filtering the ethics of AI

Expert view
IBM

Chapter 4:
Explainable and auditable AI

Expert view
Xceptor

Conclusion

Bibliography

About
00 |
Introduction

While artificial intelligence has been established as a disruptive technology for decades, AI is arguably now at the peak of the hype cycle, and banks have started to implement the technology to transform traditional business models.
However, being a multi-faceted technology, financial institutions must decipher whether it is machine learning, robotics, deep learning, business intelligence or natural language processing that is the most beneficial for corporate banking.

Some banks have launched chatbot applications and virtual assistants, but others do not have the talent within the business to deploy innovative products that are personalised for their customers, and a smaller number still do not understand the value of AI.

This report on The Future of Artificial Intelligence analyses how banks are now focused on developing AI with business-critical considerations in mind, despite a lack of experience with the technology that has resulted in problems when attempting to identify unusual trends, prevent fraud and avoid bias on an ethical level.
01 |
Artificial intelligence hype

While artificial intelligence is transforming several industries, the financial sector has a lot to learn from specific case studies in non-banking areas such as health, travel and retail. To start at the beginning, despite the term AI having been bandied about with a number of different definitions, the actual definition is simple: technology that appears intelligent.
Having been in existence since the 1950s, early forms of the technology were created and designed to mimic human nature, and this is where the controversy and misconceptions have emerged from. After a period of quiet development, artificial intelligence has undergone a modern revolution, with subsequent excitement as a result of techniques such as machine learning coming to the fore.

Abhijit Akerkar, head of AI business integration at Lloyds Banking Group, reveals that a bank’s journey into artificial intelligence starts with experimentation. For Lloyds, it was around 24 months ago. Experimentation provides a low-cost way to test what works and what doesn’t. This experience shapes the portfolio of use cases and hence, the trajectory of value creation.

“The last three to four years have seen the explosion of data, easy and low-cost access to powerful compute power, and availability of sophisticated machine learning algorithms. The stars are aligned for the breakthrough. No wonder, companies have stepped up their investments towards embracing AI,” Akerkar says.

Machine learning is another term that has been bandied about, and it must be remembered that this is a subset of AI; all machine learning is AI, but not all AI is machine learning. Machine learning enables programmes to learn through training, instead of being programmed with rules, and as a result, they can improve with experience.

This is why there has been excitement about machine learning in financial services, as MMC Ventures explored in their report ‘The State of AI: Divergence’,1 in partnership with Barclays. “Machine learning can be applied to a wide variety of prediction and optimisation challenges, from determining the probability of a credit card transaction being fraudulent to predicting when an industrial asset is likely to fail.”

Deep learning, which is a subset of machine learning, has also become mainstream knowledge because of how it emulates the way animal brains learn tasks, but deep learning has not been used to its full potential in financial services, yet.

The tipping point

MMC Ventures also state that after “seven false dawns since its inception in 1956, AI technology has come of age. The capabilities of AI systems have reached a tipping point due to the confluence of seven factors: new algorithms; the availability of training data; specialised hardware; cloud AI services; open source software resources; greater investment; and increased interest.”

In what is being described as the fastest paradigm shift in the history of technology, banks can now adopt AI technology because of the shift to cloud computing and offerings from vendors and software suppliers. According to Gartner, only 4% of enterprises had adopted AI in 2018. Today, this figure has jumped to 14% and a further 23% intend to deploy AI within the next 12 months.

“By the end of 2019, over a third of enterprises will have deployed AI. Adoption of AI has progressed extremely rapidly from innovators and early adopters to the early majority. By the end of 2019 AI will have ‘crossed the chasm’, from visionaries to pragmatists, at exceptional pace – with profound implications for companies, consumers and society,” the MMC Ventures report posed.

But is this interest all conflated hype? MMC Ventures also revealed in March 2019 that 40% of Europe’s AI startups do not use any AI programmes in their products, as was reported in the Financial Times.2
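MMC’s example of scoring the probability that a card transaction is fraudulent illustrates the rules-versus-learning distinction well. The sketch below is a hypothetical, minimal logistic-regression scorer trained on invented synthetic data; the features, rates and figures are assumptions for illustration, not any bank’s or MMC’s actual model.

```python
import math
import random

# Hypothetical sketch of "learning from data instead of rules": a tiny
# logistic-regression fraud scorer. All features and figures are invented.
random.seed(0)

def make_transaction(fraud):
    # Fraudulent examples skew towards larger amounts and foreign usage.
    amount = random.uniform(500, 5000) if fraud else random.uniform(5, 500)
    foreign = 1.0 if random.random() < (0.7 if fraud else 0.1) else 0.0
    return [amount / 1000.0, foreign], 1.0 if fraud else 0.0

data = [make_transaction(i % 10 == 0) for i in range(1000)]  # ~10% fraud

def predict(weights, bias, features):
    """Probability the transaction is fraudulent (numerically stable sigmoid)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

# Plain stochastic gradient descent on log loss: no hand-written
# "if amount > threshold" rule anywhere; the boundary is learned.
weights, bias, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(50):
    for features, label in data:
        error = predict(weights, bias, features) - label
        bias -= lr * error
        weights = [w - lr * error * x for w, x in zip(weights, features)]

low_risk = predict(weights, bias, [0.05, 0.0])  # small domestic payment
high_risk = predict(weights, bias, [3.0, 1.0])  # large foreign payment
print(f"low: {low_risk:.3f}  high: {high_risk:.3f}")
```

Because the model improves as more labelled examples arrive, retraining on fresh data is what gives the “improve with experience” property the text describes.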

1. MMC Ventures, ‘The State of AI: Divergence’ (2019).
2. Financial Times, ‘Europe’s AI start-ups often do not use AI, study finds’ (2019).
Based on interviews and investigation into 2,830 AI startups in Europe, David Kelnar, MMC’s head of research, said that while many of these firms had plans to develop machine learning programmes, none actually were at present. “A lot of venture capital groups in Europe are responsive to companies that are interested in raising money [for AI],” Kelnar said.

The FT went on to report that companies that are branded as “AI businesses” have historically raised larger funding rounds and secured higher valuations in comparison to other software businesses. In addition to this, politicians have also contributed to this hype by discussing so-called AI success stories.

AI FOMO

At Finextra’s annual NextGen Banking conference, keynote speaker and head of AI at TSB Bank, Janet Adams, framed the debate and stated that “AI is the new electricity” and has the potential to power everything we do in the future, helping banking customers through the wealth creation stage of their lives.

However, despite hype around uncovering the mysteries that surround the technology, Adams pointed out that business models cannot succeed without proper education of staff in financial services; only then can strategic advantage be gained: “Data equals training equals insight.” Roshan Rohatgi, AI lead at RBS, agreed and added that “everyone is keen to use this stuff, but the system, the fabric, is not mature yet. It’s all well and good to go from POC to pilot, but it never really reaches the real world.”

The hype discussion continued in the keynote from Karan Jain, head of technology Europe and Americas at Westpac, in which he explained that a lot of discussion about AI is around FOMO - fear of missing out. And this “FOMO generation” have different expectations and “want their banking services to be available in a couple of clicks.”

It was also argued in a later panel discussion that this FOMO also exists within the corporate banking infrastructure, where the board may ask executives if they are working with artificial intelligence - after having heard about the technology in the news - only for the execs to reply that their bank has been using machine learning for a few years.

In conversation with Finextra, Prag Sharma, head of Emerging Technology, TTS Global Innovation Lab, Citibank, highlights that there has been a recent resurgence in artificial intelligence, and this is because of the development in the overall capability of the technology, driven by “data, processing power, cost and algorithms, products and services developed by the open source community.”

Annerie Vreugdenhil, chief innovation officer at ING Wholesale Bank, suggests that AI is already part of our everyday lives and is more prevalent than first thought. “The world is changing rapidly through technological developments and as a result, our expectations are changing. As we adapt, and these technologies become more intertwined into our lives, our expectations around what could be achieved also grow. We believe in stepping out of our comfort zone, even beyond banking, to explore the opportunities, and as we do this, our expectations extend further than we have ever imagined before.”

Paul Hollands, chief operating officer for data and analytics at NatWest, has a different view. After saying that he was “a terrible person to ask whether AI is a buzzword or not,” he said he has always thought that AI was “a massively overhyped term. It is a collection of capabilities, so you know, if you think about it in its simplest form, it’s machine learning, it’s robotics and it is, to some extent, chatbots as well, and I think a lot of what we’re trying to do is around how do we use advanced techniques to help get to smarter outcomes for customers.

“We’ve been using machine learning for a long time in terms of how we identify opportunities for customers to save money and do things differently. I sit there and think machine learning isn’t that new, but the technology that is available to put the data through, and the speed at which it is palatable enough to get an answer – that is new.”
Machine learning lead at Monzo Neal Lathia adds that “machine learning is well beyond the peak of inflated expectations, but the broader usage of the phrase ‘artificial intelligence’ is hyped to a cringeworthy degree.” OakNorth’s chief operating officer Amir Nooralia had a similar view and said that while there is hype around AI, he believes that it is justified and not just part of a cycle.

“The hype is here to stay and if anything, will only continue to grow over time as more use cases develop and more propositions are proven. Personally, I think the tipping point will be commercialised AI: moving away from AI chatbots which make us more efficient to AI making commercial insights that lead to more profitable businesses. Once that is proven in an industry, it will permeate quickly and then be replicated across other sectors. We saw this with investment banking and algo-trading and how quickly it took off, once money was being made.”

Stephen Browning, challenge director, next generation services at Innovate UK, provides a concise outlook on artificial intelligence and explains that “it is not necessarily the technology that’s important, it’s the projects and programs in which AI is getting used. What we’re seeing right now is a surge in interest around a particular type of AI, that is machine learning and variants of that such as deep learning, and that’s driven substantially by two main things that have developed and come along.

“One is the computing power available at a reasonable price and the other is the availability of large quantities of data. When you bring those two together you have the ability to use machine learning models to do some things that are quite remarkable in terms of the ability to spot patterns, but it’s not intelligent in the normal sense of the word.

“These techniques come under the broad title of artificial intelligence, and that’s really why there is a surge in interest at the moment. The opportunity to apply these techniques to new areas where there is access to data gives the ability to spot things that maybe you couldn’t spot before, so when you’re talking about financial services, identifying fraudulent transactions far more readily or using machine intelligence to assess communication and potentially see where people aren’t being so honest and spot fraud.”
Expert view:

David Divitt
Vice President, Financial Crime

James Hogan
Product Manager, Financial Crime Solutions

In a Q&A interview with Finextra, David Divitt, Vice President, Financial Crime, and James Hogan, Product Manager, Financial Crime Solutions, from VocaLink explore how financial institutions can approach innovation and combat financial crime at the same time.

After it emerged that people had started to see Facebook ads that were related to their Internet search history, consumers felt as if the line was now being crossed, especially when it comes to the unpermitted sharing of data.

50% believe that a GDPR-style regulation could be implemented by regulators in the US, with barely one-in-five believing it would never come to pass.
Why should innovation be embraced without scrutiny?

A level of scrutiny should be encouraged and is warranted as long as it doesn’t suffocate the
process, since criminals tend to exploit weaknesses as soon as they emerge, and generally
before the industry has time to fully investigate. That being said, applying unnecessary
governance can be a barrier to true innovation, and diminish the opportunity of discovery and
achievement by slowing the process down unnecessarily.

How can banks take a measured approach to keep pace with innovation and help
combat financial crime?

The well-established banks have historically been less agile when reacting quickly to change or taking the lead when it comes to launching innovative products and solutions. It is true, of course, that they have many more customers and greater legacy technology challenges than, say, a challenger bank; however, the arrival of the so-called challenger bank and new initiatives such as Open Banking has shaken the industry into life. In order to keep pace, the industry as a whole needs to embrace agility and adopt a “start-up” mentality which encourages experimentation. Financial crime is already proving the ideal incubation environment for new ideas and technology to be tested: because bad actors move at an extremely rapid pace, it is essential that the industry is similarly agile and innovative to combat it. Innovations such as network-level money laundering detection, device fingerprinting, AI and machine learning have all had significant impacts in reducing fraud and money laundering, but more can always be done. We encourage financial institutions to dedicate teams focused on innovation, who can work in a different way but have the backing and the resources of the parent. However, ensuring that their mission is well communicated across the organisation and has the support of the various stakeholders is critical to success.

Data and algorithms are improving our supervisory approach, but what should financial institutions focus efforts on?

For financial crime, wider collaboration is key and exactly where financial institutions should devote time, as data and algorithms in a silo can only go so far. Partnering and sharing intelligence will deliver new learning to ensure financial institutions keep pace. Regular pilots and investigative explorations should form a conveyor belt of that innovation, wherever possible focusing on collaboration. Of course, financial institutions have a long, competing list of priorities, but investment here, specifically in data science, aiming for tangible outcomes will reap rewards.
When it comes to combatting financial crime, why is there a spotlight on machine
learning when hacks cannot be statistically analysed?

Machine learning is based on the principle that when given enough data, the machine can better detect and react to the subtleties of a problem. Where humans alone can generally interpret only the more obvious trends and patterns, a machine can comb through orders of magnitude more data to uncover deeply hidden patterns and subtle differences in the data. For this reason, tackling financial crime lends itself very well to the technique. Financial crime involves identifying a relatively rare situation occurring amongst a huge pool of legitimate transactions – a true needle in a haystack. Fraudsters intentionally try to blend into the crowd and avoid obvious clues to their activity. Also, criminals experiment with new attacks and evolve existing ones rapidly, so reacting to them must also be done at speed. For these reasons, machine learning is a great tool in the arsenal of weapons to combat financial crime.
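The needle-in-a-haystack point has a concrete consequence for how detection is measured. The figures below are invented for illustration: with very rare fraud, raw accuracy says almost nothing, which is why rare-event detection is judged on recall and precision instead.

```python
# Invented figures to illustrate the "needle in a haystack" problem.
total = 1_000_000
fraud = total // 1000            # assume 0.1% of transactions are fraudulent

# A model that labels everything "legitimate" is right 99.9% of the time...
accuracy_do_nothing = (total - fraud) / total

# ...but catches no fraud at all, so detection is judged instead on recall
# (share of fraud caught) and precision (share of alerts that are real fraud).
caught, alerts = 850, 5_000      # assumed output of a hypothetical detector
recall = caught / fraud
precision = caught / alerts

print(accuracy_do_nothing, recall, precision)
```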

Can risk models be built using algorithms to monitor crimes such as money laundering?

Absolutely. Risk models are in operation today and are at the forefront of combating money laundering activity. The application of a rule-based strategy to detect money laundering is antiquated and is proving to be an efficiency overhead that is no longer useful. The key to truly exploiting algorithms to detect this type of criminal activity, however, is embracing a collaborative approach where the silos across entities are broken down. The act of laundering money is harder to detect at the single-transaction, single-bank level; analysing a wider network of activity, relationships and neighbourhoods is the best approach to tackle this global problem.
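The network point above can be sketched with a toy transaction graph. The transfers, account names and banks below are entirely invented: each hop looks routine to the bank that processes it, and only the combined, cross-institution view reveals the chain.

```python
from collections import defaultdict

# Hypothetical layering chain: (sender, receiver, bank of the sender).
transfers = [
    ("acct_a", "acct_b", "Bank1"),
    ("acct_b", "acct_c", "Bank2"),
    ("acct_c", "acct_d", "Bank3"),
    ("acct_x", "acct_y", "Bank1"),  # unrelated transfer
]

graph = defaultdict(list)
for sender, receiver, bank in transfers:
    graph[sender].append((receiver, bank))

def follow_funds(account):
    """Walk the money account-to-account, recording which banks are crossed."""
    path, banks = [account], set()
    while graph[account]:
        account, bank = graph[account][0]
        path.append(account)
        banks.add(bank)
    return path, banks

path, banks = follow_funds("acct_a")
print(path, sorted(banks))
```

No single institution in this sketch sees more than one hop; the three-bank chain only appears once the entities share their edges, which is the collaboration argument in miniature.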
02 |
Avoiding artificial stupidity

While most financial applications of artificial intelligence have been in the customer service space, there are other areas that banks are working to improve through the implementation of innovative technology.
Prag Sharma, head of emerging technology at Citibank’s TTS Global Innovation Lab, highlights in conversation with Finextra that there are a few main areas that financial institutions are working to improve with artificial intelligence technology, other than chatbots.

Sharma explains that at Citibank, they are also working to improve operational efficiency with artificial intelligence because there are several predictions that the bank needs to make, for example predicting customer behaviour from a transactions or liquidity perspective, or detecting outliers in payments data. He adds that natural language processing is beginning to be used to handle the millions of documents that are usually processed manually.

“We’re a bank so compliance is a key area of focus. We’re looking at regtech and how that is going to affect us in the future, where we can make it easier for ourselves to have processes in place that continuously monitor various activities with natural language processing as the key technology enabler.”

Gulru Atak, global head of innovation at Citibank’s TTS Global Innovation Lab, says that the bank also has a platform called Citi Payments Outlier Detection that leverages machine learning to detect outliers in corporate customer payments.

On this point, Sharma says: “It’s a good example of Applied AI if you’re looking at what banks are interested in today, which is using AI to look at payments transactions and find anomalies. We could have bought something off the shelf and applied it, but we as an organisation looked at it and tried to figure out whether this would add serious value and truly understand the underlying algorithms, without having to rely on third parties because we understand our data better than others.”

Financial crime

As International Banker explained in an article last year, while chatbots appear to be the most visible use case of artificial intelligence and developments are being made in algorithmic trading, AI is also making considerable inroads in the compliance and security space. Money laundering continues to be a problem in global financial services and banks such as HSBC are exploring their options to combat this issue.

The article goes on to point out that in April 2018, HSBC had partnered with big data startup Quantexa and had piloted AI software to combat money laundering, which follows the bank’s partnership with Ayasdi to automate anti-money laundering investigations that were being processed by thousands of human employees.

“The aim of the initiative is to improve efficiency in this area, especially given that the overwhelming majority of money-laundering investigations at banks do not find suspicious activity, which means that engaging in such tasks can be incredibly wasteful. In the pilot with Ayasdi, however, HSBC reportedly managed to reduce the number of investigations by 20 percent without reducing the number of cases referred for more scrutiny.”

But how then do companies make the most of the tools and techniques that AI offers now but also prepare for what the future might look like? PwC highlighted that despite hundreds of millions being invested into technology that fights financial crime, many financial institutions are still struggling, but continue to rely on what would be considered legacy infrastructure to keep up with new and evolving threats.3
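Outlier detection on payments data, of the kind described above, can be sketched very simply. The code below is illustrative only, with invented figures and no connection to Citi’s actual platform: it flags payments that sit far outside an account’s history using a robust median-based score.

```python
import statistics

# Invented payment history for one corporate account (amounts in GBP).
history = [10_200, 9_800, 10_050, 9_950, 10_400, 9_700, 10_150, 9_900]

median = statistics.median(history)
# Median absolute deviation is robust: one huge payment in the history would
# not distort the baseline the way a mean/standard deviation would.
mad = statistics.median([abs(x - median) for x in history])

def is_outlier(amount, threshold=5.0):
    # 1.4826 rescales MAD to be comparable with a standard deviation.
    score = abs(amount - median) / (1.4826 * mad)
    return score > threshold

print(is_outlier(10_100), is_outlier(250_000))
```

A production system would model many more features (counterparty, currency, timing) and learn per-account baselines, but the shape of the problem — score a payment against learned history, flag the extremes for review — is the same.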

3. PwC, ‘Getting real about AI and financial crime’ (2019).
“These models tend to be based on black and white rules and parameters; for example, if a transaction is over $10,000 or a person uses a credit card overseas, then it gets flagged. The problem with simplistic approaches like this is that they tend to throw up an enormous number of false positives. And in an environment of increased regulation, increased competition and increased cost pressures, it doesn’t make sense to have your team trawling through thousands of alerts that don’t represent real financial crime tasks.”

PwC explained that financial services companies are aware that AI is a faster, cheaper and smarter way of tackling financial crime, but there is a lot of confusion around how organisations should harness this technology – “just because a certain technique is feasible doesn’t mean that a company is in a position to apply it immediately.”

To remedy issues with financial crime, PwC suggested using AI to scan enormous amounts of data and identify patterns, behaviours and anomalies, because the technology can do so faster than humans can.

“It can analyse voice records and detect changes in emotion and motivation that can give clues about fraudulent activities. It can investigate linkages between customers and employees and alert organisations to suspect dealings.”

KPMG delved deeper into this problem in its 2018 report, ‘The role of Artificial Intelligence in combating financial crime’,4 which explored how robotic process automation (RPA), machine learning, and cognitive AI can be adopted or combined to solve issues with financial crime today.

However, KPMG advised that to make “a reasoned decision as to what type, or mix of types, of intelligent automation a company should implement, financial crime stakeholders first need to design an intelligent automation strategy.

“This strategy depends on what investment the institution is willing to make and the benefits sought, including a weighting of the risk potentially involved, and the level of efficiency and agility desired. Therefore, the intelligent automation strategy should be aligned with the size and scope of the institution and its risk tolerance.”

KPMG also pointed to specific areas in financial crime compliance where intelligent automation could be used to reduce costs and increase efficiencies and effectiveness. For transaction monitoring, the first was the need for institutions to build on alerts and cases that have previously occurred, building on any existing machine learning models to establish a domain knowledge base that the cognitive platform can rely on.

“It is the key to monitoring the risks the institution already knows and has identified. Instead, it looks at patterns that exist in the data to identify if those patterns have been seen previously.” The second suggestion from KPMG was to use machines to “automate aspects of the review process and [be] deployed to build statistical models that incorporate gathered data and calculate a likelihood of occurrence (closure or escalation).”
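The false-positive problem with black-and-white rules is easy to demonstrate. The sketch below runs the two example rules quoted above over invented transactions; the amounts, rates and threshold are assumptions for illustration only.

```python
import random

# Synthetic transactions with an assumed ~1% genuine-crime rate.
random.seed(1)

def rule_flag(txn):
    # The two example rules: large value, or card used overseas.
    return txn["amount"] > 10_000 or txn["overseas"]

transactions = [
    {
        "amount": random.uniform(50, 20_000),
        "overseas": random.random() < 0.15,
        "fraud": random.random() < 0.01,
    }
    for _ in range(10_000)
]

alerts = [t for t in transactions if rule_flag(t)]
false_positives = sum(not t["fraud"] for t in alerts)

print(f"{len(alerts)} alerts, {false_positives} false positives")
```

Because the rules fire on ordinary behaviour far more often than on crime, nearly every alert in this toy run is a false positive — exactly the alert-trawling burden the quote describes.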

4. KPMG, ‘The role of Artificial Intelligence in combating financial crime’ (2019).
The third point was to employ bots to scan the internet and public due diligence sites “to collect relevant data from internal and other acceptable sources,” which would save analysts valuable time. For Know Your Customer (KYC), the report identified areas such as applying judgement to these domain areas using RPA and machine learning. This allows financial crime officers to make KYC a priority because the information they obtain better reflects actual risks.

In addition to this, machine learning can also automate the extraction of data from unstructured documents, while RPA can enable institutions to be provided with a more reliable and more efficient customer-risk rating process, and in turn, more of a real-time risk assessment. RPA also has the potential of reducing, or even eliminating, the need to contact customers repeatedly.

Alongside this, in a speech given at Chatham House5 in November 2018, Rob Grupetta, head of the financial crime department at the Financial Conduct Authority, pointed to how “the spotlight is squarely on machine learning,” which has been “largely driven by the availability of ever larger datasets and benchmarks, cheaper and faster hardware, and advances in algorithms and their user-friendly interfaces being made available online.”

Grupetta continued: “But financial crime doesn’t lend itself easily to statistical analysis – the rules of the game aren’t fixed, the goal posts keep moving, perpetrators change, so do their motives and the methods they use to wreak havoc. Simply turning an algorithm loose without thinking isn’t a suitable approach to tackling highly complex, dynamic and uncertain problems in financial crime.

“That’s not to say we can’t use algorithms and models alongside our existing approach to help us be more consistent and effective in targeting financial crime risks. Consider building a risk model using algorithms: using a set of risk factors and outcomes, we could come up with a kind of mathematical caricature of how the outcomes might have been generated, so we can make future predictions about them in a systematic way.

“For example, in a money laundering context, the risk factors could be a firm’s products, types of customers and countries it deals with, and the outcomes could be detected instances of money laundering. Unfortunately, it’s quite difficult to acquire robust figures on money laundering as industry-wide data is hard to come by, and criminals aren’t exactly in the habit of publicising their successes. Crimes like money laundering – a secret activity that is designed to convert illicit funds into seemingly legitimate gains – are particularly hard to measure.”

To resolve this issue, he explained that the FCA had introduced a financial crime data return back in 2016, which would provide an industry-wide view on key risks that banks face and would then target supervisory resources towards firms exposed to inherent risk.

“We are moving away from a rule-based, prescriptive world to a more data-driven, predictive place where we are using data to help us objectively assess the inherent financial crime risk posed by firms. And we have already started experimenting with supervised learning models to supervise the way we supervise firms – ‘supervised supervision’, as we call it.”

Substituting humans

However, while on one hand AI technology can reduce the number of times that a customer needs to be contacted, as in the example highlighted above, fears are also mounting around the substitutability of bank employees, as the International Banker article also discussed.6
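Grupetta’s “mathematical caricature” of risk factors and outcomes can be illustrated with a toy scorecard. The factor names, weights and exposures below are invented purely for illustration; they are not the FCA’s model or data.

```python
# Hypothetical weights for three firm-level risk factors, each on a 0-1 scale.
RISK_WEIGHTS = {
    "high_risk_products": 0.40,   # e.g. correspondent or private banking
    "high_risk_customers": 0.35,  # e.g. PEPs, cash-intensive businesses
    "high_risk_countries": 0.25,  # share of business in riskier jurisdictions
}

def inherent_risk(factors):
    """Weighted sum of factor exposures: a crude generative 'caricature'."""
    return sum(RISK_WEIGHTS[name] * exposure for name, exposure in factors.items())

retail_firm = {"high_risk_products": 0.1, "high_risk_customers": 0.2,
               "high_risk_countries": 0.05}
private_bank = {"high_risk_products": 0.8, "high_risk_customers": 0.7,
                "high_risk_countries": 0.5}

print(f"retail: {inherent_risk(retail_firm):.4f}  "
      f"private: {inherent_risk(private_bank):.4f}")
```

In the supervised-learning version Grupetta alludes to, the weights would be fitted to observed outcomes (detected laundering instances) rather than set by hand — which is precisely where the scarcity of robust outcome data bites.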

5. FCA, ‘AI and financial crime: silver bullet or red herring?’ (2018).
6. International Banker, ‘How AI is disrupting the banking industry’ (2018).
While there have been many statistics and news articles bandied about, Lex Sokolin, global director for fintech research firm Autonomous Next, revealed that AI adoption across financial services could save US companies up to $1 trillion in productivity gains and lower overall employment costs by 2030.

The article also pointed to ex-Citigroup head Vikram Pandit’s expectation that AI could render 30% of banking jobs obsolete in the next five years, asserting that AI and robotics “reduce the need for staff in roles such as back office functions”. Japan’s Mizuho Group plans to replace 19,000 employees with AI-related functionality by 2027, and recently departed Deutsche Bank CEO John Cryan once considered replacing almost 100,000 of the bank’s personnel with robots.

However, conflicting data suggested that AI may also result in a rise in banking jobs, as revealed by a recent study from Accenture that found that by 2022, a 14% net gain is likely to occur in jobs that effectively use AI, in addition to a 34% increase in revenues. Accenture also finds that the most mundane human jobs will be replaced by robots, leaving banking employees to focus on more interesting and complex jobs, improving work-life balance and career prospects.

In conversation with Finextra Research, Paul Hollands, chief operating officer for data and analytics at NatWest, highlights that this could be a problem, because there is a skills gap and “there is a change in the skills required in all organisations as the ability to use machine learning, to use robotics and artificial intelligence increases.”

Hollands goes on to discuss how employers have a right to ensure that the people within the organisation also have the core skills to help them grow. OakNorth’s Amir Nooralia also had a similar attitude and says that it is not about “machine replacing man (or woman), but rather machine enhancing a human. Think Iron Man suit boosting a human rather than an all-knowing robot.”

Like the healthcare sector, which will continue to require a human’s emotional response, “when it comes to finance, it is very personal and there are situations that will require empathy and emotional intelligence – e.g. a customer who might be experiencing anxiety or mental stress as a result of debt. It’s not like travel where the process involves getting from A to B, or retail which is purely transactional, so the human element is less important.”

Nooralia then goes on to reference a recent Darktrace whitepaper, ‘The Next Paradigm Shift: AI-Driven Cyber-Attacks’,7 in which the organisation asserts that in the future, “malware bolstered through AI will be able to self-propagate and use every vulnerability on offer to compromise a network.”

In the whitepaper, Darktrace state that “instead of guessing during which times normal business operations are conducted, [AI-driven malware] will learn it. Rather than guessing if an environment is using mostly Windows machines or Linux machines, or if Twitter or Instagram would be a better channel, it will be able to gain an understanding of what communication is dominant in the target’s network and blend in with it.”

Nooralia adds: “A human will always be guessing and will never be able to learn as quickly as a machine can, so it is inevitable that the machine will be better in comparison.”

7
Darktrace, ‘The Next Paradigm Shift: AI-Driven Cyber-Attacks’ (2018).
Expert view:

Rajiv Desai
SVP – US Operations

In a Q&A interview with Finextra, Rajiv Desai, SVP – US Operations at Pelican, discusses AI and the potential for transformation, in addition to the challenges that the world of real-time payments presents and how compliance plays a part in this process.
How do you see AI transforming banking in the future?

Artificial Intelligence is already a ubiquitous part of our everyday lives, and banks have been
deploying AI for several decades in task-specific ways. AI in transaction banking has been used
to address key bottlenecks in payments and financial crime compliance. These are the areas
where thousands of people are used in the back offices worldwide to do repetitive tasks which
require basic human intelligence. Application of AI to these areas will continue to grow as
these are some of the main causes of inefficiencies and last-mile problems that banks have to
solve. However, we are now also at an inflection point in banking transformation, one that will transform AI from a "nice to have" enhancement provider into a "must have" facilitator of an open banking and real-time digital banking environment.

Can you explain how you see the challenges of today’s real-time payments world being
addressed by AI?

In today’s real-time environment complex processing and compliance decisions are made
within a few seconds. It is simply not possible in this increasingly digital and 24/7 instant
payment world to throw more human resources at the problem. The human body and mind
simply lack the abilities to consistently and systematically assess, investigate and decide on
matters 24x7 within seconds. AI is the only solution available to address this need, completing existing processing tasks and meeting new challenges such as high-value payment fraud. Real-time fraud detection in high-value payments will gain increasing importance and AI will play a prominent role in addressing it.

Are there other compliance areas where you see AI playing a major role?

In addition to tackling the growing problem of payments fraud, sanctions screening obligations
in a real-time environment can be incredibly challenging for banks, often resulting in very high
false positives, or wrong hits, in financial crime compliance. We have noticed that, with dozens of watchlists containing thousands of patterns of names, companies, ships and cities, many words trigger false alerts. However, most of the time, humans can quickly and easily decide that the hit is not real using context and common sense. For instant payments it is clearly not practical to have humans take these decisions, so Natural Language Processing technology is used to figure out whether "Laura" is a ship or the first name of a person, or whether "Iran" is a street name in Denver or a blacklisted country. In addition, auditability and examinability are particularly
important in these regulated contexts – banks need to have full confidence in their ability to
fully demonstrate and explain the decisions that AI processes have taken.
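As an illustration of the kind of context check Desai describes, deciding that a watchlist term inside a street address is probably not a real hit, here is a minimal, purely illustrative Python sketch. The watchlist entries, marker words and function names are invented for the example; production screening engines use far richer NLP models and fuzzy matching.

```python
import re

# Invented mini watchlist; real screening engines hold thousands of
# entries and match fuzzily across names, vessels and places.
SANCTIONED_COUNTRIES = {"iran", "cuba", "syria"}

# Context words suggesting the hit is part of a street address rather
# than a country reference (the "Iran is a street name in Denver" case).
ADDRESS_MARKERS = {"street", "st", "avenue", "ave", "road", "rd", "drive", "blvd"}

def screen_field(text):
    """Return (matched_term, is_likely_true_hit) for each watchlist match.

    A match immediately followed by an address marker is classed as a
    likely false positive, leaving humans to review only the rest.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    hits = []
    for i, token in enumerate(tokens):
        if token in SANCTIONED_COUNTRIES:
            next_token = tokens[i + 1] if i + 1 < len(tokens) else ""
            hits.append((token, next_token not in ADDRESS_MARKERS))
    return hits

print(screen_field("Beneficiary: 1200 Iran Street, Denver CO"))
print(screen_field("Beneficiary bank located in Iran"))
```

The first field produces a hit classed as a likely false positive; the second stays flagged for review, which is exactly the triage that lets humans concentrate on genuine alerts.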
03 |
Filtering the ethics of AI

As decision-making factors using AI become more accepted, pure economics might not align with the softer strategies of a bank. Many financial institutions are questioning how artificial intelligence must be governed within an organisation and how it can be taught to align with a bank's brand and ethos, but without influence from human judgement.
While AI has dominated news headlines over the past year or so, the majority of announcements and research has been around the ethics of the technology and how to manage or avoid bias in data. In April 2019, a fortnight after it was launched, Google scrapped its independent group set up to oversee the technology corporation's efforts in AI tools such as machine learning and facial recognition.

The Advanced Technology External Advisory Council (ATEAC) was shut down after one member resigned and there were calls for Kay Coles James, president of conservative thinktank The Heritage Foundation, to be removed after "anti-trans, anti-LGBTQ and anti-immigrant" comments, as reported by the BBC.8 Google told the publication that it had "become clear that in the current environment, ATEAC can't function as we wanted.

"So we're ending the council and going back to the drawing board. We'll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics."

The big tech example

Many industry experts expressed confusion at the decision and referred to Google as being naïve. However, as it was only the external board that was shut down, as Bloomberg reported, the "Google AI ethics board with actual power is still around."9

The Advanced Technology Review Council was assembled last year as an attempt to "represent diverse, international, cross-functional points of view that can look beyond immediate commercial concerns."

Many technology giants have laid out ethical principles to guide their work on AI, so why haven't financial services institutions?

Bloomberg referenced the AI Now Institute, which wrote in a report last year that "Ethical codes may deflect criticism by acknowledging that problems exist, without ceding any power to regulate or transform the way technology is developed and applied. We have not seen strong oversight and accountability to backstop these ethical commitments."

Days after Google scrapped its external ethics board, the European Union published new guidelines10 on developing ethical AI and how companies should use the technology, following the release of draft ethics guidelines at the end of last year.

After the EU convened a group of 52 experts, seven requirements were established that future AI systems should meet:

1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
2. Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
3. Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
4. Transparency: The traceability of AI systems should be ensured.
5. Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
6. Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

8
BBC, ‘Google’s ethics board shut down’ (2019).
9
Bloomberg, ‘The Google AI Ethics Board With Actual Power Is Still Around’ (2019
10
European Commission, ‘Ethics guidelines for trustworthy AI’ (2019).
The EU also explained that in the summer of this year, the Commission will launch a pilot phase that would involve a number of stakeholders, but companies, public administrations and organisations are welcome to sign up to the European AI Alliance today.

Potential regulation

At NextGen Banking London, Maciej Janusz, head of cash management Nordic Region at Citibank, brought up the subject of regulation and said that regulation "comes when something crashes. Banks will be reluctant to implement AI without human oversight." Comments on regulatory frameworks were also made by Monica Monaco, founder of TrustEU Affairs, who revealed that governance - at the moment - only exists in the form of data protection, specifically Article 22 in GDPR, which could become a source for future principles to govern AI and the use of algorithms in financial services.

Monaco also made reference to the European Commission's 'AI for Europe' report, published on the 25th April, which she recommended everyone read. On GDPR, Monaco said that the right to be forgotten could become problematic, as it would also apply to institutions, not just individuals.

A question was raised as to whether AI could be a leveller, as the technology is shining a light on all issues, especially the non-diverse nature of the industry.

Ekene Uzoma, VP digital product development at State Street, argued that the issue with data abuses is that they start to take on different forms, so predicting may be a little difficult. He also spoke about education and how there needs to be a recognition that we cannot look to the "altar of technology" to solve problems.

According to Terry Cordeiro, head of product management - applied science and intelligent products at Lloyds Bank, "AI will automate repeatable work, but where does that leave us [humans]? We could say that the workforce of the future will be more relationship-based. Banks need to look at how to foster new talent and how to develop existing teams."

Cordeiro continued: "Even algorithms need parents. And the parents have the responsibility to train them, but where are these people? They don't exist." In conversation with Finextra, Monzo's machine learning lead Neal Lathia highlights that "there is bias everywhere, and a lot of active research on measuring, detecting, and trying to remedy it. I don't think it's too late – it's a problem that will have to be constantly revisited."

Nooralia also has a view on this and says: "The challenge lies in AI's 'black box' problem and our inability to see the inside of an algorithm and therefore understand how it arrives at a decision. Unfortunately, as we've seen in several circumstances, AI programmes will replicate the biases which are fed into them and these biases originate from humans. So, the first step in eliminating these biases is to open the 'black box', establish regulations and policies to ensure transparency, and then have a human examine what's inside to evaluate if the data is fair and unbiased."

Sara El-Hanfy, innovation technologist – machine learning & data at Innovate UK, explains that "an AI system is not in itself biased. These systems are being designed by humans and will therefore reflect the biases of the developers or the data that is selected to train the system. While there is absolutely a risk that AI systems could amplify bias at scale, there is also an opportunity for AI to improve transparency and tackle existing biases." She provides recruitment as an example and says that it is "good that we are becoming more aware of the possible unintentional harms of using AI technologies, and by having these conversations, we can advance understanding of AI and establish best practices."

Innovate UK's Stephen Browning adds that there is a "need for humans to work in a way that doesn't perpetuate bias into the data
and on to the system. We are very conscious
of that as something that would hold back the
use of this type of technology or damage the
benefits you could potentially obtain from AI,”
he says – somewhat paraphrasing the concerns
of the AI Now Institute.

Browning continues to say that "what really holds AI back and undermines it is the human aspect, and not the technical aspects, and that is what we're working on. There are also activities across the UK government that are trying to address this, such as the Centre for Data Ethics and Innovation."

Prag Sharma, head of Emerging Technology, TTS Global Innovation Lab at Citibank, also believes that this is a real concern in this day and age, especially with the emergence of explainable AI and more financial institutions wanting to know how certain decisions are being reached.

In order to solve the issues with ethical AI, Sharma suggests introducing "rules and regulations around an audit trail of the data, so we are aware of what is produced, what is consumed and how the result will reflect that." But in reality, we are only just coming to terms with how this technology actually works and it is not a case of financial services staying a step ahead of big technology corporations either.
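One way to picture the audit trail Sharma proposes, recording what was consumed, what was produced and by which model, is a tamper-evident log entry. The sketch below is illustrative only: the field names and model identifier are invented, and a production system would also sign and store the entries immutably.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version, features, decision):
    """Build a tamper-evident audit entry for a single AI decision.

    The SHA-256 digest binds together what was consumed (the input
    features), what was produced (the decision) and which model version
    produced it, so a reviewer can later evidence how a result was reached.
    """
    payload = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,   # what was consumed
        "decision": decision,   # what was produced
    }
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return {**payload, "sha256": digest}

entry = audit_record("credit-model-v1.3", {"income": 42000, "age": 31}, "approve")
print(entry["decision"], entry["sha256"][:12])
```

Any later change to the stored features or decision would no longer match the recorded digest, which is the property a regulator-facing audit trail needs.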

Annerie Vreugdenhil, chief innovation officer, ING Wholesale Bank, says that it "is not about winning or losing, it is about making the most of partnerships and the technical expertise and capabilities from both sides. For example, we believe that collaborating with fintechs is key, because we can't do it alone anymore. Partnerships can be beneficial for both parties: fintechs can bring agility, creativity and entrepreneurship, while financial institutions like ING bring a strong brand, a large client base with an international footprint, and breadth of industry expertise."
Expert view:

Michael Conway
Associate Partner,
Global Business Services

In a Q&A interview with Finextra, Michael Conway, Associate Partner, Global Business Services at IBM, explains how banks can prevent bias to avoid a breakdown in trust in regard to the technology and the financial institution itself.

How can we ensure that the data we feed into AI systems are not racially, sexually or
ideologically biased?

Bias in AI systems mainly occurs in the data or in the algorithmic model. As we work to develop AI systems we can trust, it's critical to develop and train these systems with data that is unbiased and to develop algorithms that can be easily explained. As AI systems find, understand, and point out human inconsistencies in decision making, they could also reveal ways in which we are partial, parochial, and cognitively biased, leading us to adopt more impartial or egalitarian views.

Training the AI system is key and it must be quantitatively and qualitatively assessed. Whilst the orchestration and engineering of the system will be underpinned by devops and automation, we do not have the same level of process sophistication for training AI at the moment. As a result, we must be smart about how we approach training. Data science and machine learning have a very important role in delivering focused and appropriate training for the AI platform as it matures. However, if you only focus on the numbers there's a chance you will deliver for the 80% and forget the 20%. Qualitative assessments and manual reviews are essential to understanding with evidence how AI is performing and therefore where bias may be entering the system. Focus on questions like 'did the AI system satisfy the question', not simply 'did it give the most appropriate response'. In the early days of AI evolution, we need to be overly critical to ensure that we provide a realistic baseline for the system to replicate.
How can bias be tamed and tackled so that AI systems are as successful as they can be
in what they have been trained to do?

At this stage of AI training, and the heavily regulated environment we operate in, “Assisted
Learning” is critical to making sure that this type of bias is closely monitored. The application of
wholescale Automated Testing and the growing discipline of deep learning to understand the
performance of AI, as well as products (such as IBM’s OpenScale) help us better interpret the
performance of the AI Corpus. In parallel we must challenge ourselves to build diverse teams in
thinking, in background and in approach to make sure we don’t suffer from “group thinking”. As
mentioned above, recruitment in finance is more heavily focused on technical capabilities than
ever before. Ensuring that we balance this with a diverse and rounded perspective will help
mitigate the risk of bias from the outset.
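A first quantitative signal of the kind such monitoring looks for is whether a model's approval rates diverge across groups. The sketch below is purely illustrative (toy groups and decisions, not a real monitoring product such as the OpenScale tooling mentioned above), and a genuine review would use many complementary fairness metrics.

```python
from collections import defaultdict

def approval_rate_gap(decisions):
    """decisions: iterable of (group, approved) pairs.

    Returns the gap between the highest and lowest per-group approval
    rates, plus the rates themselves. A widening gap is a prompt to
    review the training data, not proof of bias on its own.
    """
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy decisions from a hypothetical model, labelled by applicant group.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
gap, rates = approval_rate_gap(decisions)
print(rates, "gap:", round(gap, 2))
```

Run continuously over live decisions, a metric like this gives the "quantitative" half of the assessment; the qualitative review of individual cases still has to sit alongside it.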

At what stage in the process does discrimination need to be prevented?

Discrimination should be prevented at every stage of AI evolution, from inception through training. This is done by establishing a thorough and appropriate control framework, one that allows the AI to flourish but has appropriate measures and controls to ensure you know how the system evolves over time.

Following this, how can banks make sure that bias in AI does not break down trust that
humans have in machines, but also the trust that customers have in their banks?

Transparency. Transparency in process, decisioning and training. Transparency in informing customers they are talking to a virtual assistant. By ensuring banks have an appropriate process for management and training of AI at scale, this will facilitate full end-to-end transparency in the operation. In delivering this, we can provide appropriate control points that allow these systems to grow and proliferate across the enterprise with confidence that we are treating customers fairly throughout their engagement with AI. What is more, these control points will provide the evidence that the regulator will require when reviewing how banks are using AI across their endeavours.

Once this bias has been mitigated, to what extent can the financial services industry be
transformed?

There is a long way to go before we can say with confidence that the financial services industry has been transformed; however, we are on the crest of an AI wave that can take the industry a long way. If we can strike the right balance between the use of AI technology and the transparency mentioned above, we will be able to maintain the trust of the customer, enterprise and the regulator – all of which are essential to ensuring this technology is not simply the latest buzzword in financial services. Once this is established, we will be able to answer our risk counterparts and begin changing the dial in the world of risk appetite for this technology. If we can evidence that it is safe to operate this technology at full enterprise scale, with control points at every stage of the customer engagement, there is no reason why the future of financial institutions cannot be centred around artificial intelligence.
04 |
Explainable and auditable AI

At NextGen Banking London, Jason Maude, head of technology advocacy at Starling Bank, advised that explainable AI is necessary. "We cannot just say that it is because the computer has said. People are not going to trust that answer, when they are declined for a loan application or another product. The trust that people need to have for banks to function will not be there if we leave it up to the computer."
While machine learning might not be at the heart of processes, it does not mean that we shouldn't interrogate them. Maude continued and said that software engineering techniques like version control should be introduced. "Do testing where data sets are randomised before being put into the system to see how the output has changed. Provide an audit trail that regulators could start demanding."

However, Jonathan Williams, principal consultant at Mk2 Consulting, pointed out that as regulators are not from a technological background, this is difficult. "The other challenge is looking at the outcomes and checking they are in line with what we would expect. Humans bring their own biases, but we cannot automatically test those. Regulators have a steep learning curve to ascend."

Maude then made a very poignant point that remained with most of the audience before the end of the conference. "I don't think we will reach a point where humans will not be able to explain what is going on. We may get to a point where the cost of explainability outweighs the benefit."
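The testing Maude describes, randomising data sets before they are put into the system and watching how the output changes, resembles a permutation check. The sketch below is illustrative only: the scoring rule and the data are invented, and a real test would run against the bank's actual model with its real inputs.

```python
import random

# Hypothetical scoring rule, invented for the sketch: approve (1) when
# income comfortably covers outstanding debt.
def model(income, debt):
    return 1 if income > 2 * debt else 0

def output_change_rate(rows, column, seed=0):
    """Shuffle one input column across the data set and report how often
    the model's output flips versus the un-shuffled baseline.

    A high flip rate shows the model leans heavily on that column, the
    kind of evidence that could feed the audit trail regulators may demand.
    """
    rng = random.Random(seed)  # fixed seed keeps the check reproducible
    baseline = [model(income, debt) for income, debt in rows]
    shuffled = [row[column] for row in rows]
    rng.shuffle(shuffled)
    flips = 0
    for (income, debt), new_value, base in zip(rows, shuffled, baseline):
        candidate = [income, debt]
        candidate[column] = new_value
        if model(candidate[0], candidate[1]) != base:
            flips += 1
    return flips / len(rows)

rows = [(30000, 20000), (80000, 10000), (45000, 40000), (60000, 5000)]
print("income flip rate:", output_change_rate(rows, 0))
print("debt flip rate:", output_change_rate(rows, 1))
```

Because the seed is fixed, the same randomisation can be replayed later, which is what turns an ad-hoc experiment into an auditable test.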

Explainable AI lets humans understand and articulate how an AI system made a decision, but questions are still being raised about the potential consequences of AI-based outcomes and whether this is needed, as some low-stakes AI systems might be fine with a black box model, where the results are not understood.

Jane.ai head of artificial intelligence R&D Dave Costenaro explained: "If algorithm results are low-impact enough, like the songs recommended by a music service, society probably doesn't need regulators plumbing the depths of how those recommendations are made."11 While a person may be able to get by after being recommended a song they don't like, AI systems being asked to make decisions about medical treatments or mortgage loans could become problematic.

As the responsibility shifts from human to machine, the need for explainability increases.

"If an algorithm already has humans in the loop, the human decision-makers can continue to bear the responsibility of explaining the outcomes as previously done. This can help the radiologist work more accurately and efficiently, but ultimately, he or she will still provide the diagnosis and explanations," Costenaro said.

However, as AI matures, we're likely to see the growth of new applications that decreasingly rely on human decision-making and responsibility. Costenaro continued: "For a new class of AI decisions that are high-impact and that humans can no longer effectively participate in, either due to speed or volume of processing required, practitioners are scrambling to develop ways to explain the algorithms." It is up to IT leaders to take the reins to ensure their company uses AI properly and incorporates explainability when necessary.

11
The Enterprisers Project, ‘Explainable AI: 4 industries where it will be critical’ (2019).
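For high-impact decisions such as loans, one route to explainability is an intrinsically interpretable model whose per-feature contributions can simply be read off. The weights, features and threshold below are invented purely to illustrate the idea; real credit models are far richer, and post-hoc explanation techniques exist for models that are not linear.

```python
# Toy linear scoring model, invented for the sketch. Each feature's
# contribution to the score is weight * value, so the decision can be
# explained term by term rather than hidden in a black box.
WEIGHTS = {"income_k": 0.04, "debt_k": -0.09, "years_employed": 0.3}
BIAS = -1.0

def score(applicant):
    return BIAS + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Per-feature contributions, largest magnitude first: the kind of
    breakdown a bank could show a customer declined for a loan."""
    contribs = [(f, WEIGHTS[f] * applicant[f]) for f in WEIGHTS]
    return sorted(contribs, key=lambda fc: abs(fc[1]), reverse=True)

applicant = {"income_k": 55, "debt_k": 30, "years_employed": 2}
decision = "approved" if score(applicant) > 0 else "declined"
print(decision, explain(applicant))
```

Here the debt contribution dominates the explanation, so "it is because the computer has said" can be replaced with a concrete, checkable reason.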
Expert view:

Dan Reid
CTO and Founder

In a Q&A interview with Finextra, Dan Reid, CTO and founder of Xceptor, discusses how AI has progressed in the financial services industry and the obstacles banks have to overcome to leverage the technology in the same way that other sectors have.
How is AI changing business across all industries?

Often the issue with AI is that it means something different to everyone you talk to, so no
one is really sure what they should expect out of AI, what changes they are looking for and
how best to go about it. It's creating confusion rather than clarity. AI isn't a single thing; rather, it is a series of building blocks that solve business problems by learning from vast amounts of structured and unstructured data. Typically you have to start by outlining what you mean by AI. For us the main building blocks are machine learning and natural language processing with a heavy focus on data transformation, so being able to ingest all manner of data types, from spreadsheets to pdfs right up to emails written in colloquial shorthand. With 80% of a firm's data typically unstructured, this opens the door for business to really get its arms around its vast data banks, automating the ingestion of emails, pdfs and contracts and then being able to interrogate them and derive smart analytics.

What can financial services learn from successful case studies?

It can help identify some good places to start. We’ve been working with clients on areas such
as using natural language processing to classify unstructured emails, and to extract relevant
data points from them. This process typically achieves a high level of automation. Similar to any
process, AI or not, exceptions can occur and these can be flagged by validation rules. Other
areas include NAV validation, fraud detection and named entity recognition. These are just
a few examples and are all focussed on data enrichment – so building better data models to
drive smarter analytics. That is where the business value is and it is essential that value can be
demonstrated.
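A stripped-down illustration of the email classification and data-point extraction Reid describes might look like this. The categories, keyword sets and currency pattern are invented for the sketch; real deployments use trained NLP models rather than keyword matching.

```python
import re

# Invented categories and keyword sets, purely for illustration.
CATEGORIES = {
    "settlement": {"settle", "settlement", "confirm"},
    "nav_query": {"nav", "valuation", "price"},
}

def classify(email):
    """Route an email to the category sharing the most keywords with it.

    Anything with no keyword overlap is flagged as an exception for
    human review, mirroring the validation-rule flow described above.
    """
    words = set(re.findall(r"[a-z]+", email.lower()))
    best = max(CATEGORIES, key=lambda c: len(CATEGORIES[c] & words))
    return best if CATEGORIES[best] & words else "exception"

def extract_amount(email):
    """Pull the first currency amount out of colloquial email text."""
    match = re.search(r"(USD|EUR|GBP)\s?[\d,]+(\.\d+)?", email)
    return match.group(0) if match else None

email = "Pls confirm settlement of USD 1,250,000 by EOD"
print(classify(email), extract_amount(email))
```

The extracted data point can then feed the enriched data models the answer mentions, while unclassifiable messages drop out as exceptions rather than silent errors.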

What are the challenges that hinder banks from implementing AI technology?

Part of it is cultural: people aren't sure what AI means for them, their role, their jobs, but people are as important to successful deployment as the technology or the analytics. Part of it is treating AI like a single category: there are so many building blocks in the AI repertoire and it's a matter of identifying the best fit for the task at hand. And a big part of it is data quality and maturity. Access to the right data of a reasonable quality is often the biggest hurdle.

What are the challenges that hinder banks from implementing AI technology?

Scaling up AI is one of the biggest challenges for firms. We see pockets of deployment but
rarely enterprise-wide. There is still a long way to go and identifying the right part of AI for the
right task is key to success.
05 |
Conclusion

While early forms of artificial intelligence were created and designed to mimic human nature, controversy and misconceptions emerged, but after a period of quiet development, AI has undergone a modern revolution and techniques such as machine learning have come to the fore.
Most financial applications of AI have been in the customer service space, but there are other areas, such as fraud prevention, that banks are working to improve. However, as decision-making factors using AI become more accepted, this may not align with the softer strategies of a bank.

Many financial institutions are questioning how artificial intelligence must be governed within an organisation and how it can be taught to align with a bank's brand and ethos, but without influence from human judgement.

The future rests on explainable AI and banks should interrogate processes, but in order to do this, regulators have a steep curve to ascend. A problem also emerges when humans cannot explain how AI works, as the cost of explainability may come to outweigh the benefit of the technology.
06 |
Bibliography

BBC, ‘Google’s ethics board shut International Banker, ‘How AI is


down’ (2019). disrupting the banking industry’ (2018).
Available at: https://www.bbc.co.uk/ Available at: https://internationalbanker.
news/technology-47825833 [Accessed com/banking/how-ai-is-disrupting-the-
5/4/2019]. banking-industry/ [Accessed 4/3/2019].

Bloomberg, ‘The Google AI Ethics KPMG, ‘The role of Artificial Intelligence


Board With Actual Power Is Still in combating financial crime’ (2019).
Around’ (2019). Available at: https://assets.kpmg/content/
Available: https://www.bloomberg.com/ dam/kpmg/ch/pdf/the-role-of-artificial-
news/articles/2019-04-06/the-google- intelligence-in-combating-financial-crime.
ai-ethics-board-with-actual-power-is-still- pdf [Accessed 1/3/2019].
around [Accessed 6/4/2019].
MMC Ventures, ‘The State of AI:
Darktrace, ‘The Next Paradigm Shift: Divergence’ (2019).
AI-Driven Cyber-Attacks’ (2018). Avaliable at: https://www.stateofai2019.
Available at: https://www.darktrace.com/ com/ [Accessed 1/3/2019].
en/resources/wp-ai-driven-cyber-attacks.
pdf [Accessed 4/3/2019]. PwC, ‘Getting real about AI and financial
crime’ (2019).
32 | The Future of Artificial Intelligence Report

European Commission, ‘Ethics guidelines Available at: https://www.pwc.com.au/


for trustworthy AI’ (2019). consulting/assets/ai-financial-crime-article-
Available at: https://ec.europa.eu/digital- 07feb18.pdf [Accessed 1/3/2019].
single-market/en/news/ethics-guidelines-
trustworthy-ai [Accessed 8/4/2019]. The Enterprisers Project, ‘Explainable
FCA, ‘AI and financial crime: silver bullet AI: 4 industries where it will be
or red herring?’ (2018). critical’ (2019).
Available at: https://www.fca.org.uk/news/ Available at: https://enterprisersproject.
speeches/ai-and-financial-crime-silver- com/article/2019/5/explainable-ai-4-
bullet-or-red-herring [Accessed 4/3/2019]. critical-industries [Accessed 29/5/2019].

Financial Times, ‘Europe’s AI start-ups


often do not use AI, study finds’ (2019).
Available at: https://www.ft.com/
content/21b19010-3e9f-11e9-b896-
fe36ec32aece [Accessed 4/3/2019].
About Finextra
This report is published by Finextra Research.

Finextra Research is the world’s leading specialist financial technology (fintech)


news and information source. Finextra offers over 115,000 items of specialist
fintech news, features and TV content items to 420,000 unique monthly visitors
to www.finextra.com.

Founded in 1999, Finextra Research covers all aspects of financial technology


innovation and operation involving banks, institutions and vendor organisations
within the wholesale and retail banking, payments and cards sectors worldwide.

Finextra’s unique global community consists of over 30,000 fintech professionals


working inside banks and financial institutions, specialist fintech application
and service providers, consulting organisations and mainstream technology
providers. The Finextra community actively participate in posting their
opinions and comments on the evolution of fintech. In addition, they contribute
information and data to Finextra surveys and reports.

Finextra reports coming in 2019:


The Future of Payments

The Future of Trade Finance

The Future of Core Banking

The Future of Cybersecurity

Contact Salesadmin@finextra.com to get involved.


