The Future of Artificial Intelligence
How to filter through applications of AI for banking business transformation

Expert Contributors:
Contents
The Future of Artificial Intelligence Report 2019
The Future of Artificial Intelligence
How to filter through applications of AI for
banking business transformation
Introduction
Chapter 1:
Artificial intelligence hype
Expert view
VocaLink
Chapter 2:
Avoiding artificial stupidity
Expert view
Pelican
Chapter 3:
Filtering the ethics of AI
Expert view
IBM
Chapter 4:
Explainable and auditable AI
Expert view
Xceptor
Conclusion
Bibliography
About
00 | Introduction
MMC Ventures’ report, ‘The State of AI: Divergence,’1 published in partnership with Barclays, observes: “Machine learning can be applied to a wide variety of prediction and optimisation challenges, from determining the probability of a credit card transaction being fraudulent to predicting when an industrial asset is likely to fail.” Yet 40% of Europe’s AI startups do not use any AI programmes in their products, as was reported in the Financial Times.2

1 MMC Ventures, ‘The State of AI: Divergence’ (2019).
2 Financial Times, ‘Europe’s AI start-ups often do not use AI, study finds’ (2019).
01 | Artificial intelligence hype

Based on interviews and investigation into 2,830 AI startups in Europe, David Kelnar, MMC’s head of research, said that while many of these firms had plans to develop machine learning programmes, none actually were at present. “A lot of venture capital groups in Europe are responsive to companies that are interested in raising money [for AI],” Kelnar said.

The FT went on to report that companies that are branded as “AI businesses” have historically raised larger funding rounds and secured higher valuations in comparison to other software businesses. In addition to this, politicians have also contributed to this hype by discussing so-called AI success stories.

AI FOMO

At Finextra’s annual NextGen Banking conference, keynote speaker Janet Adams, head of AI at TSB Bank, framed the debate and stated that “AI is the new electricity” and has the potential to power everything we do in the future, helping banking customers through the wealth creation stage of their lives.

However, despite hype around uncovering the mysteries that surround the technology, Adams pointed out that business models cannot succeed without proper education of staff in financial services, and only then can strategic advantage be gained. “Data equals training equals insight.” Roshan Rohatgi, AI lead at RBS, agreed and added that “everyone is keen to use this stuff, but the system, the fabric, is not mature yet. It’s all well and good to go from POC to pilot, but it never really reaches the real world.”

The hype discussion continued in Karan … with artificial intelligence - after having heard about the technology in the news - for the execs to reply, revealing that their bank has been using machine learning for a few years.

In conversation with Finextra, Prag Sharma, head of Emerging Technology, TTS Global Innovation Lab, Citibank, highlights that there has been a recent resurgence in artificial intelligence and this is because of the development in the overall capability of the technology, driven by “data, processing power, cost and algorithms, products and services developed by the open source community.”

Annerie Vreugdenhil, chief innovation officer at ING Wholesale Bank, suggests that AI is already part of our everyday lives and is more prevalent than first thought. “The world is changing rapidly through technological developments and as a result, our expectations are changing. As we adapt, and these technologies become more intertwined into our lives, our expectations around what could be achieved also grow. We believe in stepping out of our comfort zone, even beyond banking, to explore the opportunities, and as we do this, our expectations extend further than we have ever imagined before.”

Paul Hollands, chief operating officer for data and analytics at NatWest, has a different view. After saying that he was “a terrible person to ask whether AI is a buzzword or not,” he said he has always thought that AI was “a massively overhyped term. It is a collection of capabilities, so you know, if you think about it in its simplest form, it’s machine learning, it’s robotics and it is to some extent, chatbots as well and I think a lot of what we’re trying to do is around how do we use advanced techniques to help get …”
Expert view:
David Divitt
Vice President, Financial Crime
James Hogan
Product Manager, Financial Crime Solutions
A level of scrutiny should be encouraged and is warranted as long as it doesn’t suffocate the
process, since criminals tend to exploit weaknesses as soon as they emerge, and generally
before the industry has time to fully investigate. That being said, applying unnecessary governance can be a barrier to true innovation, diminishing the opportunity for discovery and achievement by slowing the process down.
How can banks take a measured approach to keep pace with innovation and help
combat financial crime?
The well-established banks have historically been less agile when reacting quickly to change or taking the lead when it comes to launching innovative products and solutions. It is true, of course, that they have many more customers and greater legacy technology challenges than, say, a challenger bank; however, the arrival of the so-called challenger bank and new initiatives such as Open Banking has shaken the industry into life. In order to keep pace, the industry as a whole needs to embrace agility and adopt a “start-up” mentality which encourages
experimentation. Financial crime is already proving the ideal incubation environment for new
ideas and technology to be tested: because bad actors move at an extremely rapid pace, it is
essential that the industry is similarly agile and innovative to combat it. Innovations such as
network-level money laundering detection, device fingerprinting, AI and machine learning
have all had significant impacts in reducing fraud and money laundering, but more can always
be done. We encourage financial institutions to dedicate teams focussed on innovation, who
can work in a different way, but have the backing and the resources of the parent. However,
ensuring that their mission is well communicated across the organisation and has the support
from the various stakeholders is critical to success.
Data and algorithms are improving our supervisory approach, but what should financial institutions focus on?
For financial crime, wider collaboration is the key and exactly where financial institutions
should devote time as data and algorithms in a silo can only go so far. Partnering and sharing
intelligence will deliver new learning to ensure financial institutions keep pace. Regular pilots
and investigative explorations should form a conveyor belt of that innovation, wherever
possible focussing on collaboration. Of course, financial institutions have a long, competing list of priorities, but investment here, specifically in data science, aiming for tangible outcomes, will reap rewards.
When it comes to combatting financial crime, why is there a spotlight on machine
learning when hacks cannot be statistically analysed?
Machine learning is based on the principle that when given enough data, the machine can
better detect and react to the subtleties of a problem. Where humans alone can generally interpret only the more obvious trends and patterns, a machine can comb through orders of magnitude more data to uncover deeply hidden patterns. For this reason, tackling financial crime lends itself very well to the technique. Financial crime involves identifying a relatively rare situation occurring amongst a huge pool of legitimate transactions – a true needle in a haystack. Fraudsters intentionally try to blend into the crowd and avoid obvious clues to their activity. Also, criminals experiment with new attacks
and evolve existing ones rapidly, so reacting to them must also be done at speed. For these
reasons, machine learning is a great tool in the arsenal of weapons to combat financial crime.
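The “needle in a haystack” point can be made concrete with a deliberately tiny sketch. Real fraud models learn subtle, multi-dimensional patterns, but even a one-dimensional outlier score shows how a machine can scan every transaction against a learned legitimate baseline; all names and thresholds below are invented for illustration.

```python
from statistics import mean, stdev

def flag_suspicious(history, incoming, z_threshold=3.0):
    """Flag incoming transaction amounts that deviate sharply from history.

    A toy stand-in for the rare-event detection described above: the
    'model' is just the mean and standard deviation of past legitimate
    amounts, and anything beyond z_threshold standard deviations is flagged.
    """
    mu = mean(history)
    sigma = stdev(history)
    return [amt for amt in incoming if abs(amt - mu) / sigma > z_threshold]

# Eight ordinary card transactions, then a batch containing one outlier.
history = [20.0, 35.5, 18.2, 22.9, 30.1, 25.4, 27.8, 21.6]
print(flag_suspicious(history, [24.0, 310.0, 29.5]))  # [310.0]
```

A production system would replace the z-score with a trained classifier over many features (merchant, device, velocity, geography), but the shape of the task - scoring a huge stream to surface a rare anomaly - is the same.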
Can risk models be built using algorithms to monitor crimes such as money laundering?
Absolutely. Risk models are in operation today and are at the forefront of combating money
laundering activity. The application of a rule-based strategy to detect money laundering is
antiquated and is proving to be an efficiency overhead that is no longer useful. The key to truly exploiting algorithms to detect this type of criminal activity, however, is embracing a collaborative approach where the silos across entities are broken down. The act of laundering money is harder to detect at a single-transaction, single-bank level, and a wider network of activity, relationships and neighbourhoods is the best approach to tackling this global problem.
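The network point can be illustrated with a toy sketch (not any vendor’s actual algorithm): once transfers from several institutions are pooled into one graph, accounts that forward almost everything they receive - classic “layering” behaviour - become visible, even though each individual transfer looks unremarkable at a single bank. Account names and the retention threshold below are invented.

```python
from collections import defaultdict

def pass_through_accounts(transfers, retain_ratio=0.1):
    """Find accounts that forward nearly all funds they receive.

    transfers: iterable of (source, destination, amount) across ALL
    participating institutions. An account keeping less than
    retain_ratio of its inflows is a crude 'layering' suspect - a
    pattern only detectable at network level.
    """
    received = defaultdict(float)
    sent = defaultdict(float)
    for src, dst, amount in transfers:
        sent[src] += amount
        received[dst] += amount
    suspects = []
    for acct, total_in in received.items():
        if total_in > 0 and sent.get(acct, 0.0) >= (1 - retain_ratio) * total_in:
            suspects.append(acct)
    return sorted(suspects)

# Two sub-threshold deposits at different banks, funnelled straight offshore.
transfers = [
    ("A", "mule1", 9500), ("B", "mule1", 9400),
    ("mule1", "offshore", 18500),
    ("C", "shop", 120),
]
print(pass_through_accounts(transfers))  # ['mule1']
```

Each deposit to `mule1` is individually unremarkable; only the pooled view shows that 18,900 in and 18,500 straight out is a pass-through pattern.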
02 | Avoiding artificial stupidity
“…but we as an organisation looked at it and tried to figure out whether this would add serious value and truly understand the underlying algorithms, without having to rely on third parties, because we understand our data better than others.”

…future might look like? PwC highlighted that despite hundreds of millions being invested into technology that fights financial crime, many financial institutions are still struggling, but continue to rely on what would be considered legacy infrastructure to keep up with new and evolving threats.3

PwC explained that financial services companies are aware that AI is a faster, cheaper and smarter way of tackling financial crime, but there is a lot of confusion around how organisations should harness this technology.

“It can analyse voice records and detect changes in emotion and motivation that can give clues about fraudulent activities. It can investigate linkages between customers and employees and alert organisations to suspect dealings.”

KPMG delved deeper into this problem in its 2018 report, ‘The role of Artificial Intelligence in combating financial crime,’4 which explored how robotic process automation (RPA), machine learning and cognitive AI can be adopted or combined to solve issues with financial crime today.

However, KPMG advised that to make “a reasoned decision as to what type, or mix of types, of intelligent automation a company should implement, financial crime stakeholders first need to design an intelligent automation strategy.”

“…institution already knows and identified. Instead, it looks at patterns that exist in the data to identify if those patterns have been seen previously.” The second suggestion from KPMG was to use machines to “automate aspects of the review process” and to build statistical models that “incorporate gathered data and calculate a likelihood of occurrence (closure or escalation).”

The third point was to employ bots to scan the internet and public due diligence sites “to collect relevant data from internal and …

…caricature of how the outcomes might have been generated, so we can make future predictions about them in a systematic way.”

3 PwC, ‘Getting real about AI and financial crime’ (2019).
4 KPMG, ‘The role of Artificial Intelligence in combating financial crime’ (2019).
5 FCA, ‘AI and financial crime: silver bullet or red herring?’ (2018).
6 International Banker, ‘How AI is disrupting the banking industry’ (2018).
…bandied about, Lex Sokolin, global director for fintech research firm Autonomous Next, revealed that AI adoption across financial services could save US companies up to $1 trillion in productivity gains and lower overall employment costs by 2030.

The article also pointed to ex-Citigroup head Vikram Pandit’s expectation that AI could render 30% of banking jobs obsolete in the next five years, asserting that AI and robotics “reduce the need for staff in roles such as back office functions”. Japan’s Mizuho Group plans to replace 19,000 employees with AI-related functionality by 2027, and recently departed Deutsche Bank CEO John Cryan once considered replacing almost 100,000 of the bank’s personnel with robots.

However, conflicting data suggested that AI may also result in a rise of banking jobs, as revealed by a recent study from Accenture that found that by 2022, a 14% net gain of jobs is likely to occur in jobs that effectively use AI, in addition to a 34% increase in revenues. Accenture also finds that the most mundane human jobs will be replaced in organisations as the ability to use machine learning, to use robotics and artificial intelligence increases.

Hollands goes on to discuss how employers have a right to ensure that the people within the organisation also have the core skills to help them grow. Oaknorth’s Amir Nooralia also had a similar attitude and says that it is not about “machine replacing man (or woman), but rather machine enhancing a human. Think Iron Man suit boosting a human rather than an all-knowing robot.”

Like the healthcare sector, which will continue to require a human’s emotional response, “when it comes to finance, it is very personal and there are situations that will require empathy and emotional intelligence – e.g. a customer who might be experiencing anxiety or mental stress as a result of debt. It’s not like travel where the process involves getting from A to B, or retail which is purely transactional, so the human element is less important.”

Nooralia then goes on to reference a recent Darktrace whitepaper, ‘The Next Paradigm Shift: AI-Driven Cyber-Attacks’,7 in which the organisation believes that in the future, “malware bolstered through AI will be able to self-propagate and use every vulnerability on offer to compromise a network,” Nooralia says.

In the whitepaper, Darktrace states that “instead of guessing during which times normal business operations are conducted, [AI-driven malware] will learn it. Rather than guessing if an environment is using mostly Windows machines or Linux machines, or if Twitter or Instagram would be a better channel, it will be able to gain an …
7 Darktrace, ‘The Next Paradigm Shift: AI-Driven Cyber-Attacks’ (2018).
Expert view:
Rajiv Desai
SVP – US Operations
Artificial Intelligence is already a ubiquitous part of our everyday lives, and banks have been
deploying AI for several decades in task-specific ways. AI in transaction banking has been used
to address key bottlenecks in payments and financial crime compliance. These are the areas
where thousands of people are employed in back offices worldwide to do repetitive tasks which
require basic human intelligence. Application of AI to these areas will continue to grow as
these are some of the main causes of inefficiencies and last-mile problems that banks have to
solve. However, we are now also at an inflection point in banking transformation, which will transform AI from being a “nice to have” enhancement into a “must have” facilitator of an open banking and real-time digital banking environment.
Can you explain how you see the challenges of today’s real-time payments world being
addressed by AI?
In today’s real-time environment complex processing and compliance decisions are made
within a few seconds. It is simply not possible in this increasingly digital and 24/7 instant
payment world to throw more human resources at the problem. The human body and mind
simply lack the abilities to consistently and systematically assess, investigate and decide on
matters 24x7 within seconds. AI is the only solution available to address this need, both for existing processing tasks and for new challenges facing us such as high-value payment fraud. Real-time fraud detection in high-value payments will gain increasing importance, and AI will play a prominent role in addressing it.
Are there other compliance areas where you see AI playing a major role?
In addition to tackling the growing problem of payments fraud, sanctions screening obligations
in a real-time environment can be incredibly challenging for banks, often resulting in very high
false positives, or wrong hits, in financial crime compliance. We have noticed that with dozens of watchlists containing thousands of patterns of names, companies, ships and cities, many words trigger false alerts. However, most of the time, humans can quickly and easily decide that the hit is not real using context and common sense. For instant payments it is clearly not practical to have humans take these decisions, so Natural Language Processing technology is used to figure out whether “Laura” is a ship or the first name of a person, or if “Iran” is a street name in Denver or a blacklisted country. In addition, auditability and explainability are particularly
important in these regulated contexts – banks need to have full confidence in their ability to
fully demonstrate and explain the decisions that AI processes have taken.
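A toy sketch of the idea (not Pelican’s actual implementation): a naive substring match alerts on every occurrence of a listed name, while a second, context-aware pass suppresses hits whose neighbouring words mark them as a street, a vessel and so on. The watchlist and context rules below are invented for illustration.

```python
# Pretend sanctioned-entity tokens and benign-context markers -
# both invented for this sketch.
WATCHLIST = {"iran", "laura"}
BENIGN_CONTEXT = {"street", "st", "avenue", "vessel", "ship", "mv"}

def screen(message):
    """Return watchlist hits that survive a simple context check.

    A bare token match would alert on every 'Iran'; looking two words
    either side lets obvious non-entities (e.g. street names) through
    without a human review - the false-positive problem described above.
    """
    tokens = message.lower().replace(",", " ").split()
    hits = []
    for i, tok in enumerate(tokens):
        if tok in WATCHLIST:
            window = tokens[max(0, i - 2): i + 3]  # +/- 2 words of context
            if not BENIGN_CONTEXT.intersection(window):
                hits.append(tok)
    return hits

print(screen("Payment to 12 Iran St, Denver"))  # [] - reads as an address
print(screen("Payment routed via Iran"))        # ['iran'] - still alerts
```

Real screening engines use trained NLP models rather than hand-written context lists, and crucially log why each hit was raised or suppressed, which is exactly the auditability requirement the answer above ends on.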
03 | Filtering the ethics of AI
8 BBC, ‘Google’s ethics board shut down’ (2019).
9 Bloomberg, ‘The Google AI Ethics Board With Actual Power Is Still Around’ (2019).
10 European Commission, ‘Ethics guidelines for trustworthy AI’ (2019).
The EU also explained that in the summer of this year, the Commission will launch a pilot phase that would involve a number …

…the workforce of the future will be more relationship-based. Banks need to look at how to foster new talent and how to …
Expert view:
Michael Conway
Associate Partner,
Global Business Services
How can we ensure that the data we feed into AI systems are not racially, sexually or
ideologically biased?
Bias in AI systems mainly occurs in the data or in the algorithmic model. As we work to develop AI
systems we can trust, it’s critical to develop and train these systems with data that is unbiased and to
develop algorithms that can be easily explained. As AI systems find, understand, and point out human
inconsistencies in decision making, they could also reveal ways in which we are partial, parochial, and
cognitively biased, leading us to adopt more impartial or egalitarian views.
Training the AI system is key and it must be quantitatively and qualitatively assessed. Whilst the orchestration and engineering of the system will be underpinned by DevOps and automation, we do not have the same level of process sophistication for training AI at the moment. As a result, we must be smart about how we approach training. Data science and machine learning have a very important role in delivering focused and appropriate training for the AI platform as it matures. However, if you only focus on the numbers there’s a chance you will deliver for the 80% and forget the 20%. Qualitative assessments and manual reviews are essential to understanding, with evidence, how AI is performing and therefore where bias may be entering the system. We should focus on questions like ‘did the AI system satisfy the question?’ rather than simply ‘did it give the most appropriate response?’. In the early days of AI evolution, we need to be overly critical to ensure that we provide a realistic baseline for the system to replicate.
How can bias be tamed and tackled so that AI systems are as successful as they can be
in what they have been trained to do?
At this stage of AI training, and given the heavily regulated environment we operate in, “Assisted Learning” is critical to making sure that this type of bias is closely monitored. The application of
wholescale Automated Testing and the growing discipline of deep learning to understand the
performance of AI, as well as products (such as IBM’s OpenScale) help us better interpret the
performance of the AI Corpus. In parallel we must challenge ourselves to build diverse teams in
thinking, in background and in approach to make sure we don’t suffer from “group thinking”. As
mentioned above, recruitment in finance is more heavily focused on technical capabilities than
ever before. Ensuring that we balance this with a diverse and rounded perspective will help
mitigate the risk of bias from the outset.
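One simple bias probe of the kind such tooling automates is the disparate-impact ratio: comparing favourable-outcome rates across groups. The sketch below is illustrative only; production monitoring tracks many such metrics continuously over live traffic, and the decision data here is invented.

```python
def approval_rate(decisions):
    """Fraction of favourable outcomes (1 = approved, 0 = declined)."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of approval rates between two groups.

    Values far below 1.0 flag potential bias; a common rule of thumb
    (the 'four-fifths rule') treats ratios under 0.8 as a warning sign.
    """
    return approval_rate(group_a) / approval_rate(group_b)

# Hypothetical loan decisions for two applicant groups.
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]  # 30% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]  # 70% approved

print(round(disparate_impact(group_a, group_b), 2))  # 0.43 - under 0.8
```

A single number like this is exactly the kind of quantitative signal that must be paired with the qualitative review described above: it tells you *that* outcomes diverge, not *why*.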
Following this, how can banks make sure that bias in AI does not break down trust that
humans have in machines, but also the trust that customers have in their banks?
Once this bias has been mitigated, to what extent can the financial services industry be
transformed?
There is a long way to go before we can say with confidence that the financial services industry has been transformed; however, we are on the crest of an AI wave that can take the industry a long way. If we can strike the right balance between the use of AI technology and the transparency mentioned above, we will be able to maintain the trust of the customer, the enterprise and the regulator – all of which are essential to ensuring this technology is not simply the latest buzzword in financial services. Once this can be established, we will be able to answer our risk counterparts and begin changing the dial in the world of risk appetite for this technology. If we can evidence that it is safe to operate this technology at full enterprise scale, and that control points are in place at every stage of the customer engagement, there is no reason why the future of financial institutions cannot be centred around artificial intelligence.
04 | Explainable and auditable AI
11 The Enterprisers Project, ‘Explainable AI: 4 industries where it will be critical’ (2019).
Expert view:
Dan Reid
CTO and Founder
Often the issue with AI is that it means something different to everyone you talk to, so no one is really sure what they should expect out of AI, what changes they are looking for and how best to go about it. It’s creating confusion rather than clarity. AI isn’t a single thing; rather, it is a series of building blocks that solve business problems by learning from vast amounts of structured and unstructured data. Typically you have to start by outlining what you mean by AI. For us the main building blocks are machine learning and natural language processing with a heavy focus on data transformation, so being able to ingest all manner of data types from spreadsheets to PDFs right up to emails written in colloquial shorthand. With 80% of a firm’s data typically unstructured, this opens the door for business to really get its arms around its vast data banks, automating the ingestion of emails, PDFs and contracts, and then being able to interrogate them and derive smart analytics.
It can help identify some good places to start. We’ve been working with clients on areas such
as using natural language processing to classify unstructured emails, and to extract relevant
data points from them. This process typically achieves a high level of automation. Similar to any
process, AI or not, exceptions can occur and these can be flagged by validation rules. Other
areas include NAV validation, fraud detection and named entity recognition. These are just
a few examples and are all focussed on data enrichment – so building better data models to
drive smarter analytics. That is where the business value is and it is essential that value can be
demonstrated.
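A minimal sketch of that email-classification pattern (not Xceptor’s actual system): keyword scoring assigns each message to a workflow category, and a validation rule routes zero-confidence results to a human as exceptions. Categories and keywords are invented for illustration.

```python
# Invented workflow categories with characteristic keywords.
CATEGORIES = {
    "trade_confirmation": {"trade", "confirm", "settlement"},
    "nav_query": {"nav", "valuation", "price"},
    "complaint": {"unhappy", "complaint", "error"},
}

def classify(email_text):
    """Assign an email to the best-matching category with a score.

    The validation rule mirrors the exception handling described
    above: if no keyword matches at all, the message is flagged as
    an exception for human review instead of being auto-routed.
    """
    tokens = set(email_text.lower().split())
    scores = {cat: len(tokens & kws) for cat, kws in CATEGORIES.items()}
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        return ("exception", 0)
    return (best, scores[best])

print(classify("please confirm the trade settlement for today"))
print(classify("what is the latest nav price"))
print(classify("re: lunch on friday"))  # no match -> exception
```

A production pipeline would use a trained NLP model plus entity extraction rather than keyword sets, but the shape - score, route, and flag low-confidence cases as exceptions - is the same.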
What are the challenges that hinder banks from implementing AI technology?
Part of it is cultural: people aren’t sure what AI means for them, their role, their jobs, but people are as important to successful deployment as the technology or the analytics. Part of
it is treating AI like a single category, there are so many building blocks in the AI repertoire and
it’s a matter of identifying the best fit for the task at hand. And a big part of it is data quality and
maturity. Access to the right data of a reasonable quality is often the biggest hurdle.
Scaling up AI is one of the biggest challenges for firms. We see pockets of deployment but
rarely enterprise-wide. There is still a long way to go and identifying the right part of AI for the
right task is key to success.
05 | Conclusion