Five AI Solutions Every Chief Risk Officer Needs
Introduction
Every year banks spend millions of dollars on detecting, investigating, and reporting potential money laundering – and for good reason. It's not uncommon for regulators to levy fines exceeding one billion dollars for inadequate or lax anti-money laundering (AML) monitoring. Consequently, banks have created systems designed to generate huge numbers of alerts, all of which must be manually investigated and most of which do not result in Suspicious Activity Reports (SARs).

Transaction monitoring systems (TMS) are mostly rule-based systems designed to identify transactions that might be indicative of money laundering. Because these systems aim to avoid missing potential money laundering (false negatives) at any cost, they generate reams of alerts, forcing banks to spin up large investigative teams to handle them all.

Machine learning models can be used to score alerts according to how likely they are to actually result in a SAR filing. The bank retains complete control over how conservatively such a system performs, so the number of false negatives can be kept near zero.
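As an illustration, alert scoring of this kind can be sketched in a few lines. Everything here is a hypothetical stand-in – synthetic alert features, an arbitrary model choice – but it shows the key lever: choosing a score threshold so that no known SAR-producing alert on held-out data falls below it.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical historical TMS alerts: one row of engineered features per
# alert, label = 1 if the alert ultimately led to a SAR filing.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score alerts by SAR likelihood, then pick the lowest threshold that keeps
# every known SAR-producing alert in the escalated set on held-out data.
scores = model.predict_proba(X_test)[:, 1]
threshold = scores[y_test == 1].min()
flagged = scores >= threshold
missed = ((~flagged) & (y_test == 1)).sum()
print(f"Alerts escalated: {flagged.mean():.1%}, SARs missed: {missed}")
```

In practice the threshold would be set with a safety margin and revisited regularly; the point is that the trade-off between investigative workload and missed SARs becomes an explicit, tunable number.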
Losses due to fraud increase every year, with some estimates putting worldwide fraud losses as high as $200B in 2017. Despite the cost, many banks are still fighting fraud with either antiquated, rules-based systems or expensive, black-box vendor models.

Be aware that implementing and monitoring fraud prevention models will require modification of core systems within a bank, and making changes to these systems may give even the most veteran CTO heartburn. In addition, models must be monitored for accuracy over time, as new types of fraud emerge and the models age. In spite of these complexities, however, the increased accuracy that machine learning provides far outweighs the cost of implementing these new solutions.

Running a successful fraud solution means not only minimizing losses due to fraud, but also minimizing irritation and impact to existing customers. Blocking a legitimate transaction or placing excessive holds on a deposit may not result in a direct loss to the bank, but such actions still have a tangible, substantial impact on customer satisfaction, retention, and churn.
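One common way to monitor a deployed fraud model for the aging described above is a population stability index (PSI) on its score distribution: if recent production scores drift away from the scores seen at build time, new fraud patterns may be emerging. This is a sketch with synthetic scores, and the 0.25 threshold mentioned in the comment is a rule-of-thumb convention, not a standard.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between development-time scores and recent production scores.
    Higher values mean the score distribution has shifted more."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(1)
dev_scores = rng.beta(2, 8, 10_000)    # scores at model build time (synthetic)
prod_scores = rng.beta(2, 6, 10_000)   # recent scores after fraud patterns shift
psi = population_stability_index(dev_scores, prod_scores)
# A common rule of thumb treats PSI above ~0.25 as drift worth investigating.
print(f"PSI = {psi:.3f}")
```

Score drift alone does not prove the model is wrong – it is a trigger for investigation, alongside backtesting against confirmed fraud outcomes as they become available.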
The Federal Reserve requires banks with assets greater than $50 billion to independently validate the models they build, leading these large banks to create elaborate model risk management teams to review and approve every model built within the bank.

Part of the reason that model validation is so difficult is that most models today are custom-built by hand. Neither data science teams nor validation teams have the well-established testing and quality control measures that software development teams have built up over the past several decades.

Following a systematic and unbiased approach to model building is key to a sustainable model risk management practice. Model developers must be disciplined in the way models are developed and must utilize tools that make the process more reliable and consistent. These same tools should also make documentation easier, providing interpretability and insights that speed documentation for regulators. These new technologies make safely developing highly accurate models quicker and easier, and both model developers and model validators must be open to utilizing them.
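The testing discipline that software teams rely on can be applied to model builds as well: automated, repeatable checks that run on every rebuild. A minimal sketch, using toy data and an illustrative acceptance threshold – real validation suites would add out-of-time testing, benchmark comparisons, and stability checks:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy stand-in data; in practice these would be the bank's modeling datasets.
rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.8, size=2000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def build_model(seed=0):
    return RandomForestClassifier(n_estimators=100, random_state=seed).fit(X_tr, y_tr)

# Check 1: the build is reproducible -- same data and seed, same predictions.
a = build_model().predict_proba(X_te)[:, 1]
b = build_model().predict_proba(X_te)[:, 1]
assert np.array_equal(a, b), "model build is not reproducible"

# Check 2: holdout discrimination clears a minimum bar (threshold illustrative).
auc = roc_auc_score(y_te, a)
assert auc > 0.70, f"holdout AUC {auc:.2f} below acceptance threshold"
print(f"validation checks passed (AUC = {auc:.2f})")
```

Encoding checks like these as code means validators review a repeatable process rather than a one-off artifact, and the same run produces evidence for the model documentation.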
The new expected loss standards require that banks use information about past events (i.e., historical data) and "reasonable and supportable" forecasts when estimating expected credit losses. Although this is a huge change from the current incurred loss standards, it also provides a unique opportunity, because the new standards do not prescribe how lenders choose to make the estimate, only that the forecasts must be "reasonable and supportable." This gives banks the flexibility to implement the best models and methodologies to forecast expected loss for their portfolio, as long as the forecasts can be proven to be reasonable and supportable.

Accurate and transparent models for predicting expected losses should be at the core of successful compliance programs. Machine learning models detect the patterns in a bank's historical data in order to accurately estimate credit losses, and these models are no longer black boxes. Modern tools allow stakeholders to understand how these models work in a detailed way, including why individual predictions were made. This is useful not only from a compliance perspective, but also from an underwriting and portfolio management perspective.

Granular credit loss models are also the foundation of good risk-adjusted pricing. Pricing inefficiencies (overpricing or underpricing risk) can easily be spotted by predicting the expected loss at a given price. Overpricing may indicate potential for volume growth, and underpricing may indicate the need for adjustment in policy or risk selection. Superior pricing analytics may also identify market pricing inefficiencies, including opportunities to acquire portfolios where risk is overpriced or opportunities to originate and sell portfolios where risk is underpriced.

Timeline: milestones in loss reserving and model risk regulation

1913 – Federal Reserve Act: Federal Reserve Bank created
1921 – Revenue Act: standards for reserving for bad debts
1934 – Securities Exchange Act: SEC established
1965 – Loan Loss Estimation: process established
1973 – FASB: financial reporting standards established
1976 – ALLL: banks with assets >$25M required to report loss allowances
1993 – GAAP: Interagency Policy Statement on the Allowance for Loan and Lease Losses
2000 – OCC 2000-16: guidance on managing risks arising from models
2011 – SR 11-7: guidance on Model Risk Management
2016 – CECL: restructuring of ALLL to account for lifetime loan losses
2017 – TRIM: guidance on the reduction of unwarranted variability in model risk management
2018 – IFRS 9: international standards for credit impairment and loss forecasting
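To make the expected-loss mechanics concrete, here is a simple lifetime calculation in the familiar PD x LGD x EAD style. All inputs below are hypothetical; in practice the probability of default, loss given default, and exposure come from models fit on the bank's historical data plus supportable macroeconomic forecasts.

```python
import numpy as np

# Illustrative lifetime expected credit loss for a small pool of three loans.
pd_annual = np.array([0.010, 0.025, 0.060])       # annual default probability
lgd       = np.array([0.40, 0.45, 0.55])          # loss given default
balance   = np.array([250_000, 120_000, 80_000])  # exposure at default
horizon   = 5                                     # remaining life in years

# Probability of surviving to year t, then defaulting in year t.
survival = (1 - pd_annual)[:, None] ** np.arange(horizon)[None, :]
marginal_pd = survival * pd_annual[:, None]

# Lifetime EL sums expected loss across loans and years.
lifetime_el = (marginal_pd * (lgd * balance)[:, None]).sum()
print(f"Lifetime expected loss for the pool: ${lifetime_el:,.0f}")
```

The same structure supports pricing analysis directly: comparing the expected loss implied by the model with the loss priced into a loan's rate surfaces the over- and underpricing discussed above.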
Very few risk review teams, though, are leveraging machine learning to improve the quality and efficiency of their work – but they should. Machine learning can guide field work based on business mix, risk metrics, and past reviews of similar areas.

Compliance Risk: predict policy exception levels based on historical trends, product mix, control self-assessments, and audit findings.
©2018 DataRobot