
Credit Risk Model Monitoring

INTRODUCTION

This scenario might sound familiar:

A bank uses over 50 analytical models to support its underwriting, pricing and finance functions. There are numerous models in place to generate the probability of default (PD), loss given default (LGD) and exposure at default (EAD) metrics that serve as inputs to the bank's capital computation process.

Model monitoring and tracking are performed by an understaffed analytics team, using Microsoft Excel templates and SAS or other tools that may have been handed down for years.

Reports for senior management are assembled manually, under pressure, using metrics and formats often not updated for long periods of time.

A regulatory review of any models used usually triggers a massive manual exercise as reports and documents are created and compiled. There may be multiple rounds of information exchange with the regulator, as internal reports do not address all aspects of model performance.

The above describes the situation at a medium-sized European bank, but is fairly common across the industry.

Model monitoring (the regular analytical review of model performance) may be loosely managed, with its effectiveness dependent on a few key individuals. Processes related to model monitoring may be affected by a number of elements including:

Governance
- A lack of policies around model risk monitoring, or policies in place that may not be properly enforced.
- No full model list for audit tracking purposes, or a list not regularly updated.

Organization
- The organization finding itself in reactive mode, often scrambling frantically to meet internal and external deadlines.
- Timelines regularly affected by poor capacity planning and inadequate contingency plans.

Processes and Procedures
- No standards in place surrounding the frequency of model monitoring; similar teams using different standards and procedures for model tracking templates or for performance metrics.

Monitoring Output Analysis
- Poorly performing models remaining in production due to decision making affected by inconsistency in metrics or frequency, lack of analysis of root causes, or ineffective and poor commentaries on monitoring output.

Operational Risks
- Lack of automation (for example, the manual entry of SAS outcomes into Microsoft Excel/Microsoft PowerPoint) and of regular controls in code and tracking logs, leading to a high error rate.
- Lack of contingency plans leads to the risk of losing key historical facts if dedicated personnel leave the firm and adequate logs are not in place.

In today's financial institutions, analytical models are high-value organizational and strategic assets. As models are needed to run the business and comply with regulations, they must be effectively and efficiently managed for optimal performance once in production.

Poor governance and process in the management of these models can expose an organization to the risks of suboptimal business decisions, regulatory fines and reputational damage.

As seen in Figure 1 below, a robust system of ongoing model monitoring is a key component in the management of model risk.

From a broader perspective, the term model refers to any approach that processes quantitative data as input and provides a quantitative output. The definition of a model can prove contentious. Best practice often sees a broader definition of a model, with a sensible model monitoring standards document in place. This should help ensure that all of the models are noted on the master list, with appropriate levels of monitoring put in place for each.

While we discuss the measurement of credit risk, and therefore refer to scoring or rating PD and LGD models, the best practices to which we refer are applicable to any type of quantitative model.

Figure 1: Managing Model Risk
- First Line of Defense: Model Development; Model Documentation; Model Implementation; Model Usage.
- Second Line of Defense: Ongoing Model Monitoring; Independent Validation and Stress Testing; Regular Model Review Process.
- Third Line of Defense: Top Management and Board Review; Model Use Risk Escalation; Model Risk Appetite Setting; Model Risk Management Framework.

Source: Accenture, November 2014

EFFECTIVE MODEL MONITORING: KEY PRINCIPLES

Ongoing monitoring is essential to evaluate whether changes in products, exposures, activities, customers or market conditions call for adjustment, redevelopment or replacement of the model, or to verify that any extension of the model beyond its original scope is valid. Any model limitations or assumptions identified in the development stage should be assessed as part of ongoing monitoring.

In practice, monitoring begins when a model is first implemented in production systems for actual business use. This monitoring process should have a frequency appropriate to the nature of the model, the availability of new data or modeling approaches, and the magnitude of the risks involved. In our view, this should be clearly laid out as part of a monitoring standards document.

ENTERPRISE LEVEL MODEL INVENTORY

A model inventory takes stock of the models used by an institution and establishes clear ownership of the maintenance and usage of each model. Some measure of the materiality of the model or portfolio should be included (common measures include the portfolio balance or exposure at default).

While the existence of a complete listing of models in use and associated materiality may seem like a basic component of risk management, it has been cited as a gap by the Federal Reserve in its 2013 Comprehensive Capital Analysis and Review (CCAR) guidelines. As the Fed noted, bank holding companies with lagging practices were not able to identify all models used in the capital planning process; they also did not formally review all of the models or assumptions used for capital planning purposes.1

Guidelines for establishing a model inventory include:

- Segregate the inventory building exercise by model category; for example, segments may include:
  - Underwriting/Application Scoring Models
  - Account/Customer Behavior Scoring Models
  - Risk Decisioning Models
  - Pricing Models
  - Impairment/Provisioning Models
  - Stress Testing Models
  - Collections and Recovery Scoring Models
  - Capital Planning Models such as PD, LGD and EAD.

- Within each category, maintain a complete listing of all models used across the entity or group of entities. One way to do this is to include a measure of portfolio size such as EAD (or portfolio balance where EAD is not available) when building the list, checking that the sum of the sub-portfolio EAD equals the total EAD. This will help ensure that sub-portfolios and models are not missed, and also that the rationale for excluded or untreated segments is noted. This may also be used to track the proportion of portfolio EAD covered by various models, which is often requested by regulators.

- The inventory should be careful to also include any sub-models or feeder models. As the Fed noted, banks should keep an inventory of all models used in their capital planning process, including feeder models and all inputs used to produce estimates or projections used by the models to help generate the final loss, revenue or expense projections.1

The inventory should include the following information for each listed model (Table 1).

Table 1: Enterprise Level Model Inventory

- Model Type: Model type to be selected from the list: Underwriting/Application Models; Account/Customer Behavior Models; Risk Decisioning Models; Pricing Models; Impairment/Provisioning Models; Stress Testing Models; Collections and Recovery Scoring Models; Capital Planning Models.
- Product Type: Product type to be selected from: Retail Mortgage; Small and Medium Enterprise (SME) Mortgage; Non-retail Property; Credit Card; etc.
- Portfolio: Use of a unique portfolio identifier.
- Model Dependencies: Any critical backward or forward linkages in the processes.
- Model Usage: What life-cycle process, product and entity does the model impact?
- Model Adjustments: What adjustments (if any) are made to the model output before it is fit for purpose?
- Materiality: Portfolio EAD (amount and percentage) covered by each model, and EAD period date and source. If EAD is not available, portfolio balance should be used and noted. For different model types, alternative materiality measures may be used; for example, application model materiality may be measured by projected pipeline. This should be clearly laid out in the model monitoring standards.
- Model Owner: Work contact details for the model owner.
- Model Developer: Work contact details for employees involved in model creation.
- Model Approver: Work contact details for key employees involved in model approval.
- Model User: Work contact details for key employees involved in model usage.
- Model Maintenance: Work contact details for key employees involved in model maintenance.
- Model Approval: Date of model approval.
- Last Model Validation: Date of last model validation.
- Last Model Monitoring: Date of last model monitoring.
- Documentation: Links to model documentation, including development documents as well as any strategy setting/usage documents; rationale for model dismissal or approval with exceptions to policy (for example, no change despite poor performance), and outcomes of validations.
- Current Model Status: Status of the model (pending approval, approved, decommissioned). Should include the rationale for model decommission or approval with exceptions to policy (for example, no change despite poor performance), and outcomes of the last validation.
- Key Technology Aspects: Implementation platform; any issues at implementation or thereafter.
- Current Model Risk Rating: The current risk rating of the model (e.g. Red/Amber/Green).

Source: Accenture, November 2014
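The EAD reconciliation check described in the inventory guidelines above can be sketched in a few lines. This is an illustrative sketch only, not part of any vendor tool; the record fields and model identifiers are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ModelRecord:
    model_id: str
    model_type: str           # e.g. "Capital Planning Models"
    portfolio: str            # unique portfolio identifier
    ead: float                # portfolio EAD covered by the model
    status: str = "approved"  # pending approval / approved / decommissioned

def check_ead_coverage(records, total_ead, tolerance=1e-6):
    """Verify that sub-portfolio EAD sums to the total EAD, and report
    each model's share of coverage (often requested by regulators)."""
    covered = sum(r.ead for r in records)
    shares = {r.model_id: r.ead / total_ead for r in records}
    reconciled = abs(covered - total_ead) <= tolerance * max(total_ead, 1.0)
    return reconciled, covered, shares

records = [
    ModelRecord("PD_MTG_01", "Capital Planning Models", "Retail Mortgage", 60.0),
    ModelRecord("PD_CRD_01", "Capital Planning Models", "Credit Card", 30.0),
]
reconciled, covered, shares = check_ead_coverage(records, total_ead=100.0)
# 10.0 of total EAD is unaccounted for, so reconciled is False: a gap whose
# rationale (excluded or untreated segments) must be documented.
```

A failed reconciliation is exactly the situation the guidelines warn about: a sub-portfolio or feeder model missing from the master list.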


ROBUST DATA MONITORING PROCESSES

Models are data intensive by their nature and typically are designed to accept inputs from underwriting/origination systems, transaction processing systems, core banking systems and other sources. Errors in raw data or in model variables may be reflected in model monitoring reports, but often appear too late to prevent negative effects on the business. To avoid such problems, data quality and consistency rules should be considered and created for each raw data field to help ensure the integrity of the data dimensions feeding the model. A best practice common to a number of large banking institutions is to establish a data monitoring process that precedes model monitoring, as seen in Figure 2 below.

A large European bank has a monthly process to help run all model input data through a validation engine with approximately 8,000 rules. The engine analyzes the model input file and generates a monthly model data quality report, indicating the variable(s) and models affected, if any. This is combined with data on portfolio materiality to define an escalation process for data issues.

Figure 2: Data Monitoring Process

Raw sources (transaction processing systems, underwriting systems, customer management systems, core banking systems) feed the model data input file, which passes through the model variable validation rules and on to the model; any issues raised enter the escalation process, driven by the following likelihood/consequence matrix:

Consequence level \ Likelihood level: Unlikely | Unlikely | Possible | Likely | Very likely
- Catastrophic: High | High | Extreme | Extreme | Extreme
- Major: Medium | High | High | Extreme | Extreme
- Moderate: Low | Medium | High | High | High
- Minor: Low | Low | Medium | Medium | Medium
- Insignificant: Low | Low | Low | Low | Medium

Source: Accenture, November 2014
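A rule-driven pre-model data quality check of the kind described above might look like the following sketch. The rules, field names and model identifiers are hypothetical; a real engine like the 8,000-rule example would hold far more rules and richer metadata.

```python
# Hypothetical sketch of a data validation step that precedes model
# monitoring: each rule names the raw field it checks and the models
# that consume that field, so breaches map directly to affected models.

def not_null(value):
    return value is not None

def in_range(lo, hi):
    return lambda v: v is not None and lo <= v <= hi

# Each rule: (field, check, models that consume the field) -- illustrative
RULES = [
    ("ltv",            in_range(0.0, 2.0), {"PD_MTG_01", "LGD_MTG_01"}),
    ("months_on_book", not_null,           {"PD_MTG_01"}),
]

def run_validation(record):
    """Return the failed fields and the set of affected models for one record."""
    failed_fields, affected_models = [], set()
    for field, check, models in RULES:
        if not check(record.get(field)):
            failed_fields.append(field)
            affected_models |= models
    return failed_fields, affected_models

fields, models = run_validation({"ltv": 2.5, "months_on_book": 12})
# ltv breaches its range rule, so both mortgage models are flagged.
```

The per-record results would then be aggregated into the monthly data quality report and combined with portfolio materiality to drive the escalation matrix.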

GOVERNANCE STRUCTURE

Critical components of a robust governance structure around credit risk model monitoring include:

- Independence of the model monitoring team from the model development team;
- Effective model audit processes and procedures; and
- Engagement and involvement from senior management.

While the necessity for an independent model monitoring team may seem obvious, in practice modeling functions are often loosely structured, and independence may exist only in theory.

Ideally the organization should have a clear separation among model developers/users and validation functions. Incentive structures should not discourage the escalation of model issues as appropriate, with a clearly defined escalation matrix.

A robust internal audit process is a key element of any model monitoring program. The audit function would typically audit all stakeholders (including developers, users, and monitoring/validation teams) and would also examine all processes involved. This is a critical element; in an anecdotal case, the model monitoring Microsoft Excel spreadsheets used by a large Asian regional bank were found to have several formula errors. The spreadsheets had been used for several years and were assumed to be correct.

Senior management and board involvement in model governance may be the most important element of all. This is essential to help ensure awareness and ownership of models and related issues, appropriate decision making in relation to potential business and regulatory impacts, and the existence of appropriate incentive structures. The communication lines of a good governance framework may be found in Figure 3.

It is important in our view to have clear communication across all three lines of defense. Figure 3 outlines these lines of communication for the functions previously identified in Figure 1. This can help ensure that low-level model issues can be actioned quickly, without senior management involvement. However, having an escalation channel to senior management can help raise major issues should action be required. Internal audit also has a very clear role to play in helping to provide assurance that all processes have been carried out effectively.

Figure 3: Governance for Model Monitoring

Model developers, model owners/users and the model validations/monitoring function escalate issues to the Risk Committee and onward to the Board Risk Committee; Internal Audit reports audit findings to the Board Risk Committee. Escalation severity follows the likelihood/consequence matrix shown in Figure 2.

Source: Accenture, November 2014

COMPREHENSIVE INDICATORS FOR MODEL PERFORMANCE

There are several aspects related to the performance of a credit risk model (as seen in Table 2) that should be represented in a good monitoring system. In practice, it is often observed that organizations adhere to a few simple metrics (notably the Kolmogorov-Smirnov (KS) Statistic and the Gini Coefficient) for regular monitoring purposes, leaving the more comprehensive checks and related metrics for periodic (often annual, if not bi-annual) validation exercises.

Infrequency in measuring comprehensive checks can have adverse consequences. For example, a major UK lender had poorly calibrated PD models across several portfolios, leading to negative regulatory comments.

A key best practice employed in conjunction with various performance criteria is the definition of performance thresholds (often called Traffic Lights) for various metrics. While industry standards are available for certain metrics, most banks would determine acceptable thresholds internally, subject to regulatory supervision.

Table 2: Evaluating Credit Risk Model Performance

- Model Discrimination: The ability of the model to differentiate between events and non-events based on its input values, such as defaults and non-defaults. For models that do not have a binary outcome, this can be measured similarly as a High/Low indicator.
- System or Population Stability: How different is the current data being scored by the model compared to the data from the model development; is the model stable over time?
- Characteristic Stability: How different is the distribution of the current population in each explanatory variable in the model compared to the population used for model development? What impact does this have on model performance?
- Actual versus Expected or Calibration: Does the model deliver accurate point predictions? This is particularly significant for Regulatory Capital models where systematic under- or over-prediction may have capital implications.
- Score Distribution Analysis: Does the model generate large concentrations at particular deciles or credit grades? Has there been a large migration over time in scores that needs to be investigated?
- Override Analysis: To what extent and why are there judgmental overrides over and above the raw model output?

Source: Accenture, November 2014

MODEL DISCRIMINATION

Many credit risk models feature a binary classification structure, as they have to assess an obligor's future status using present characteristics (application models) or recent behavior (behavioral models). The measurement of a classification tool's ability to assess an obligor's future status is commonly called discrimination. The concept is also applicable to models that predict continuous variables (such as loss given default), in terms of the model's ability to differentiate between high and low values. Table 3 illustrates some measures of discrimination observed in industry literature.

The KS Statistic and Gini Coefficient are the two most frequently used metrics in an industry context; in banking, the Basel Committee recommends the Gini Coefficient or accuracy ratio (AR) and the area under curve (AUC) measures.2

Table 3: Measures of Model Discrimination

- Gini Coefficient or Accuracy Ratio (AR): Area under the Gini Curve/Lorenz Curve of the model as compared to the perfect model, or one that would capture 100% of events in the first score bucket/decile. Values range from 0 to 1. This measure enables a direct comparison across models.
- Kolmogorov-Smirnov (KS) Statistic: The maximum separation between the percentage of events captured and the percentage of non-events captured by the model in cumulative distributions of events and non-events. Values range from 0 to 1. This measure enables a direct comparison across models.
- Receiver Operating Characteristic (ROC) Curve / Area Under ROC Curve (AUC): Enables a comparison of two models on their ability to identify a true positive (that is, an event) as opposed to a false positive. The AUC is a measure of the probability that the model will rank a randomly chosen event higher than a randomly chosen non-event, and is related to the Gini Coefficient (G) by the formula G = 2(AUC) - 1.
- Pietra Index: The maximum vertical distance between the model Lorenz Curve and the line representing a random decision rule. This distance may be interpreted as the maximum lift over random provided by the model.
- Change in Gini Coefficient: A comparison of the model's most recent Gini calculation with that observed for the previous tracking period. A large decrease or increase would indicate a need for further investigation.

Source: Accenture, November 2014
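The discrimination measures in Table 3 are straightforward to compute. The sketch below, using only the Python standard library, derives the AUC via its rank interpretation (the probability that a randomly chosen event outscores a randomly chosen non-event), the Gini via G = 2(AUC) - 1, and the KS statistic from the two cumulative distributions. The sample scores are invented, and a higher score is assumed here to indicate higher default risk.

```python
# Illustrative stdlib-only discrimination metrics; a sketch, not a vendor
# implementation (production code would use sorted cumulative counts
# rather than the O(n^2) pairwise comparison below).

def auc(scores, events):
    """P(randomly chosen event scores above a random non-event); ties count half."""
    ev = [s for s, e in zip(scores, events) if e == 1]
    ne = [s for s, e in zip(scores, events) if e == 0]
    wins = sum((x > y) + 0.5 * (x == y) for x in ev for y in ne)
    return wins / (len(ev) * len(ne))

def ks_statistic(scores, events):
    """Maximum gap between the cumulative event and non-event distributions."""
    ev = [s for s, e in zip(scores, events) if e == 1]
    ne = [s for s, e in zip(scores, events) if e == 0]
    best = 0.0
    for t in sorted(set(scores)):
        f_ev = sum(s <= t for s in ev) / len(ev)
        f_ne = sum(s <= t for s in ne) / len(ne)
        best = max(best, abs(f_ev - f_ne))
    return best

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]   # invented PD-style scores
events = [1,   1,   0,   1,   0,   0,   0,   0]     # 1 = default observed
a = auc(scores, events)        # 14 of 15 event/non-event pairs ranked correctly
gini = 2 * a - 1               # the G = 2(AUC) - 1 relation from Table 3
ks = ks_statistic(scores, events)
```

Tracking these values period over period, rather than in isolation, is what supports the "Change in Gini Coefficient" check in the table.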

SYSTEM STABILITY

System stability or population stability compares the data sample the model was developed on with a more recent data sample on which the model has been used. The discriminatory power of a model is based on information contained in the development dataset, and hence large variances from this may cause model performance to deteriorate, or make the model unfit for purpose. Table 4 shows some standard measures of stability.

The Population Stability Index (PSI) is the standard measure for system stability and is also recommended by most regulatory bodies. The PSI is also applicable to models that predict continuous outcomes.

The transition matrix is critical to PD models and is in most cases a required submission to regulators. Thresholds for the degree of transition that is deemed acceptable are usually set internally by the institution.

Table 4: Evaluating System Stability

- Population Stability Index (PSI): Measure of the relative change in distribution between the development and recent data samples by score deciles/ranges (to be defined on the development sample).
- PSI: Events: Measure of the relative change in distribution of events (such as defaults) between the development and recent data samples by score deciles/ranges (to be defined on the development sample).
- PSI: Non-Events: Measure of the relative change in distribution of non-events (e.g. non-defaults) between the development and recent data samples by score deciles/ranges (to be defined on the development sample).
- Transition Matrix (Migration Matrix): Compares two time periods (usually the previous and current periods for model monitoring) and examines what proportion of the portfolio migrated to a higher or lower score category or decile. This measure is typically used for PD models.

Source: Accenture, November 2014
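The PSI in Table 4 can be computed directly from binned proportions. The sketch below assumes the bins were fixed on the development sample, as the table requires; the example distributions are invented, and the 0.1/0.25 triggers mentioned in the comments are common industry rules of thumb rather than values prescribed here.

```python
import math

# Minimal PSI sketch over pre-defined score deciles/ranges. The same
# function applied to events only, or to non-events only, gives the
# "PSI: Events" and "PSI: Non-Events" variants of Table 4.

def psi(expected_pct, actual_pct, eps=1e-6):
    """Population Stability Index between development ('expected') and
    recent ('actual') bin proportions. Common rules of thumb flag
    PSI > 0.1 (amber) and PSI > 0.25 (red)."""
    total = 0.0
    for e, a in zip(expected_pct, actual_pct):
        e, a = max(e, eps), max(a, eps)   # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

dev    = [0.10, 0.20, 0.40, 0.20, 0.10]   # development distribution by range
recent = [0.05, 0.15, 0.40, 0.25, 0.15]   # recent scored population
stability = psi(dev, recent)
```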

CHARACTERISTIC STABILITY

The system stability measures examine a model as a whole but do not examine how individual model characteristics or variables may have changed in distribution between two time periods. While the overall system stability (as seen in the PSI value) may be acceptable, it may conceal large variations in individual model characteristics when analyzed in detail. Table 5 examines some measures of characteristic stability.

Characteristic stability measures are increasingly requested by regulators and external auditors as evidence of the thoroughness of a model monitoring system, in addition to the PSI measures.

Table 5: Characteristic Stability Metrics

- Characteristic Stability Index (CSI): Measure of the change in the distribution of a variable between the development and recent data. This uses the categories generated for the concerned variable on the development data as a basis for comparison.
- Change in Characteristic Information Value (IV): The IV is a measure of the discriminatory power of a given variable. A large change in the IV for a particular characteristic indicates a change in its distribution, or possibly overfitting to the model development data initially.

Source: Accenture, November 2014
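The CSI reuses the PSI formula at the level of one variable's development-time categories; the companion IV metric from Table 5 can be sketched as follows. The LTV-band categories and counts are invented for illustration.

```python
import math

# Sketch of the Information Value (IV) of a single characteristic. A large
# period-over-period change in this number is the Table 5 trigger for
# investigating that variable.

def information_value(event_counts, non_event_counts, eps=1e-6):
    """IV = sum over categories of (non_event% - event%) * WoE,
    where the weight of evidence WoE = ln(non_event% / event%)."""
    te, tn = sum(event_counts), sum(non_event_counts)
    iv = 0.0
    for e, n in zip(event_counts, non_event_counts):
        pe, pn = max(e / te, eps), max(n / tn, eps)
        iv += (pn - pe) * math.log(pn / pe)
    return iv

# defaults / non-defaults by LTV band (low / medium / high) -- illustrative
events     = [10, 20, 70]
non_events = [400, 350, 250]
iv_now = information_value(events, non_events)
# compare iv_now with the IV computed on the development sample to spot
# a large change in the variable's discriminatory power.
```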

ACTUAL VERSUS EXPECTED OR CALIBRATION

The calibration or actual-versus-expected performance of a model refers to its ability to yield an accurate point prediction for the output variable. This is particularly relevant for Regulatory Capital models, where systematic over- or under-prediction may be subject to regulatory scrutiny and overrides. Table 6 discusses some measures of calibration observed in the industry and within vendor tools.

The Basel III norms explicitly cite appropriate calibration of the risk functions, which convert loss estimates into regulatory capital requirements.3 This level of scrutiny can be expected to increase within this important aspect of model performance.

Table 6: Evaluating Model Calibration

- Hosmer-Lemeshow (HL) or Chi-Square Test: The HL test quantifies whether the model is a good fit given the current data. It compares observed versus predicted counts of outcome defaults in each rating grade or score decile. Quality of fit is thus summarized into a single statistic.
- Brier Score: The Brier score measures the accuracy of probabilistic predictions as the mean squared difference between predicted and actual outcomes. The score is only suitable for models predicting a binary outcome.
- Calibration Curve Shape Test: A technique based on a confidence interval created around the model prediction. Depending on the number of instances where the actual outcome lies outside this confidence interval, the test is classified as a Red, Amber or Green outcome. The calibration curve plot accompanying the test provides a useful visual view of model calibration.

Source: Accenture, November 2014
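Two of the calibration checks in Table 6 can be sketched briefly. The HL form below is the common grouped chi-square version; degrees-of-freedom and p-value conventions vary by implementation, so only the statistic itself is computed, and all figures are illustrative.

```python
# Minimal calibration sketches for a PD-style model: the Brier score over
# individual predictions, and the grouped Hosmer-Lemeshow chi-square
# statistic over rating grades or score deciles.

def brier_score(predicted, actual):
    """Mean squared difference between predicted PD and the 0/1 outcome."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)

def hosmer_lemeshow(groups):
    """groups: list of (n_obligors, observed_defaults, mean_predicted_pd)
    per rating grade/decile. Returns the HL chi-square statistic; a large
    value suggests systematic over- or under-prediction."""
    stat = 0.0
    for n, obs, pd_hat in groups:
        exp = n * pd_hat                     # expected defaults in the group
        stat += (obs - exp) ** 2 / (exp * (1 - pd_hat))
    return stat

bs = brier_score([0.9, 0.1, 0.8, 0.3], [1, 0, 1, 0])
hl = hosmer_lemeshow([(100, 3, 0.02), (100, 10, 0.08)])
```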

SCORE DISTRIBUTION ANALYSIS

Models which produce large concentrations in particular credit grades or scores can be problematic, and undesirable from a regulatory perspective. A large bank was asked to redevelop several rating models that assigned more than 20 percent of the concerned portfolio to one credit grade. The concentration of the score can be measured by the Herfindahl-Hirschman Index, which is outlined in Table 7.

Table 7: Evaluating Score Distribution

- Herfindahl-Hirschman Index (HHI): The HHI has long been used as a measure of market concentration, with a value of 0 indicating near-perfect competition and a value of 1 indicating a monopoly. The same measure may be computed for concentrations by score/credit grade; in practice, values of the HHI in excess of 0.25 may indicate a need for further action.

Source: Accenture, November 2014
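Applied to a score distribution, the HHI of Table 7 is simply the sum of squared portfolio shares per grade. A minimal sketch, with invented grade counts:

```python
# HHI over credit-grade concentrations: ranges from 1/n_grades (perfectly
# even spread) up to 1.0 (the whole portfolio in one grade).

def herfindahl_index(counts):
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

# obligors per credit grade (illustrative); one grade holds 50% of the book
grades = [50, 20, 15, 10, 5]
hhi = herfindahl_index(grades)   # 0.5**2 + 0.2**2 + ... = 0.325
needs_review = hhi > 0.25        # the in-practice trigger cited in Table 7
```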

OVERRIDE ANALYSIS

This measure addresses what level of managerial or other overlays are in place over and above the raw model results. An override is defined as having occurred whenever the model output has been ignored or amended. This measure may vary significantly by the nature of the portfolio or business. For instance, unsecured retail lending models may generate a low number of model overrides, while significant model overrides are more the norm for secured retail or wholesale lending models.

Table 8: Evaluating Override Occurrence

- Distance Analysis for Overrides: Distance analysis compares the raw quantitative score or credit grade assigned by a model with the final grade assigned after any overrides. The metric analyzed is the proportion of portfolio EAD that has a distance of two or more credit grades. This measure must be used with care for secured retail lending and wholesale/commercial lending models, where overrides are frequently used.

Source: Accenture, November 2014
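The distance analysis of Table 8 can be sketched as below. Grades are assumed to be encoded as ordered integers; the two-notch threshold follows the table's description, while the loan data itself is invented.

```python
# Sketch of the override distance metric: the share of portfolio EAD whose
# final grade sits two or more notches away from the raw model grade.

def override_distance_share(loans, threshold=2):
    """loans: list of (ead, model_grade, final_grade). Returns the
    proportion of EAD overridden by at least `threshold` grades."""
    total_ead = sum(ead for ead, _, _ in loans)
    flagged = sum(ead for ead, m, f in loans if abs(f - m) >= threshold)
    return flagged / total_ead

loans = [
    (100.0, 3, 3),   # no override
    (50.0,  4, 5),   # one-notch override, below the threshold
    (50.0,  2, 5),   # three-notch override, flagged
]
share = override_distance_share(loans)   # 50 / 200 = 0.25
```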

EFFECTIVE MANAGEMENT INFORMATION SYSTEMS (MIS)

There are numerous metrics that may be used to set up a comprehensive model monitoring system; these should be incorporated into a robust and timely MIS program. Some industry best practices observed for model risk monitoring MIS are:

Be Comprehensive: Often model monitoring systems within institutions present only the KS and Gini statistics, which may mask serious issues with model calibration or other areas.

Be Timely: Model monitoring should ideally be performed on a monthly basis, especially for models involved in real-time decision making. While this may seem obvious, cases have been observed where such models are monitored only in an ad hoc manner; this is especially true of models in areas not subject to a high degree of regulatory scrutiny.

Coverage: Using the model inventory discussed earlier, we suggest that all of the models which are used for credit risk rating be monitored on a regular basis. This will help the organization be aware of the portfolio coverage of its various models.

Data/Formula Integrity and Validation: Model monitoring tools should be subject to independent validation of data aggregation processes and formulae prior to the reporting cycle (for example, on a monthly cycle for data and quarterly for formulae). Formula errors have been observed in the model monitoring tools of at least one large European bank, leading to a regulatory reprimand.

Use of Thresholds: The absolute values of most computed metrics are difficult to interpret without the use of visual signals. Traffic Lights for computed metrics are now increasingly common in most vendor offerings and can be easily interpreted. These should be defined in the model monitoring standards.

Provide an Overall Assessment: Depending on the performance outcome, model monitoring is usually performed to lead to one of the following decisions:

- Performance: Model still fits for purpose. Action: No change.
- Performance: Model still provides satisfactory discriminatory performance but does not yield accurate point predictions. Action: Recalibration.
- Performance: Fall in discriminatory power, possibly due to a large change in population. Action: In-depth analysis of root causes and optional solutions.
- Performance: Model still provides satisfactory discriminatory performance and is accurate, but it is yielding poor results in some of the other metrics. Action: Decision to be taken as to whether or not this requires further analysis/remediation action.

Include Clear and Relevant Insights and Commentary: Quantitative outcomes are not self-explanatory; they require interpretation and explanation, and skills and time are not always sufficient to do a thorough job. This is probably one of the most time-consuming tasks for the monitoring unit, but it can also be the most valuable for the organization if the regular analysis process provides key findings to senior management.

The ideal model monitoring process should provide such a recommendation, together with next steps and an assessment from the model owner, rather than merely listing various indicators (Figure 4).

Figure 4: Elements of a Robust Model Monitoring MIS

A robust model monitoring MIS combines Comprehensive Indicators, Timely reporting, Coverage, Data Integrity, Visual presentation, and Insights and Commentary around an Overall Status.

Source: Accenture, November 2014
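The Traffic Light thresholds and the overall assessment described above can be wired together as in the following sketch. The specific threshold values are placeholders (to be set in the institution's model monitoring standards, as the text notes), and the worst-signal-wins aggregation is one simple design choice among several.

```python
# Hypothetical traffic-light mapping from metric values to Red/Amber/Green
# signals and an overall model status. Thresholds below are illustrative
# placeholders, not recommended values.

THRESHOLDS = {
    # metric: (amber trigger, red trigger)
    "gini": (0.50, 0.40),   # lower is worse for discrimination
    "psi":  (0.10, 0.25),   # higher is worse for stability
}

def traffic_light(metric, value):
    amber, red = THRESHOLDS[metric]
    if metric == "gini":    # lower is worse
        return "Red" if value < red else "Amber" if value < amber else "Green"
    return "Red" if value > red else "Amber" if value > amber else "Green"

def overall_status(lights):
    """Worst individual signal drives the model's overall status."""
    order = {"Green": 0, "Amber": 1, "Red": 2}
    return max(lights, key=lambda l: order[l])

lights = [traffic_light("gini", 0.55), traffic_light("psi", 0.18)]
status = overall_status(lights)   # Gini is Green, PSI is Amber -> "Amber"
```

In a full MIS, each light would carry the analyst's commentary and recommended action alongside the signal, in line with the "Overall Assessment" decisions above.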

CONCLUSION

Model monitoring is an area of increasing importance and regulatory scrutiny as models are treated as critical organizational assets. The Federal Reserve, for example, has issued guidelines on model monitoring in its SR 11-7 guidance note as well as annual guides on the CCAR process;4 similar documents have been issued by the Basel Committee and the Bank of England.5

A strong credit risk model monitoring process is not only required by regulations; it has proven to be a potent competitive advantage for those organizations that take extra steps to help ensure the effectiveness of their model monitoring. These steps include diligently tracking model performance, escalating and resolving model issues, involving senior management in decision making, fine-tuning models on a timely basis, and maintaining well-documented logs and rationales of changes. Effective monitoring allows institutions to closely control and better empower their strategic risk management tools, for day-to-day operational management as well as for purposes of calculating capital requirements.
HOW ACCENTURE CAN HELP

Accenture has extensive experience in all aspects of model validation and monitoring, including a team that has delivered more than 500 risk analytics solutions. In addition to skills related to model development, validation and monitoring and an understanding of the regulatory implications of Basel II and III, Dodd-Frank, and Federal Reserve and UK Prudential Regulation Authority (PRA) requirements (among others), Accenture has developed proprietary assets that can help accelerate and simplify the setting up of a robust model monitoring process, with pre-written modules in SAS and automated dashboards. Figure 5 below illustrates one such offering, Accenture's Credit Risk Model Monitoring.

For more information on the topics covered in this document, as well as any general queries on model risk monitoring, please contact the authors.

Figure 5: Accenture Credit Risk Model Monitoring Suite

Individual Model Risk: the Credit Risk Model Monitoring Tool (PD, Underwriting, EAD and LGD models) feeds Automated Deep Dive Dashboards, with automated linkage to Enterprise Model Risk: the Credit Risk Model Dashboard, comprising a Model Risk Dashboard and an Executive Dashboard.

Source: Accenture, November 2014


NOTES

1. Capital Planning at Large Bank Holding Companies: Supervisory Expectations and Range of Current Practice, August 2013, Board of Governors of the Federal Reserve System. Accessed at: http://www.federalreserve.gov/bankinforeg/bcreg20130819a1.pdf

2. BCBS Working Paper No. 14: Studies on the Validation of Internal Rating Systems, Revised, May 2005, Bank for International Settlements. Accessed at: http://www.bis.org/publ/bcbs_wp14.htm

3. Basel III: A global regulatory framework for more resilient banks and banking systems, December 2010 (rev June 2011), Bank for International Settlements. Accessed at: http://www.bis.org/publ/bcbs189_dec2010.htm

4. SR 11-7 Guidance on Model Risk Management, April 4, 2011, Board of Governors of the Federal Reserve System. Accessed at: http://www.federalreserve.gov/bankinforeg/srletters/sr1107.htm

5. Documentation in the Internal Model Approval Process, December 2013, Bank of England Prudential Regulation Authority. Accessed at: http://www.bankofengland.co.uk/pra/Pages/solvency2/internalmodel.aspx

ABOUT THE AUTHORS

Larry Lerner is a managing director, Accenture Digital. Based in Washington, he leads Accenture Analytics and is a member of the group's leadership team. He has extensive consultancy and enterprise experience in Financial Services and Analytics and has built industrialized underwriting, marketing and credit risk management capabilities. Over the years he has held several leadership roles in business intelligence, analytics, banking payments and capital markets enterprises, and now guides clients on their journey to high performance.

Parvez Shaikh is a managing director, Accenture Digital. Based in Bangalore, Parvez leads the Risk Analytics practice in India, serving global Financial Services clients. His areas of experience include risk modeling and quantification, stress testing, pricing, valuation, economic capital, provisioning/loss forecasting and the development/validation of risk rating scorecards, frameworks and methodologies. Parvez also has broad exposure to global Financial Services regulation, with a particular focus on North America, and is a featured speaker at industry risk conferences. Prior to joining Accenture, he held leadership roles in Risk Management at a number of global financial institutions.

Tadhg O'Suilleabhain is a senior manager, Accenture Digital. Based in Dublin, Tadhg has over 13 years of enterprise and consultancy experience in Risk Management and Analytics across the global retail banking and capital markets sectors. His experience extends to risk model building and to developing and validating Basel risk models (PD, EAD, LGD), impairment models and Risk/Reward models in both retail and non-retail settings. He also has extensive policy writing experience in both Basel and impairment model risk management settings.

Siddhartha Chatterji is a senior manager, Accenture Digital. Based in Gurgaon, India, Siddhartha has 12 years of global risk and marketing analytics experience. He has worked in the areas of Basel II analytics, scorecard development and lifecycle management covering retail and commercial risk, as well as in core risk management roles in retail and commercial risk. He holds a B.A. (H) in Economics from St Stephen's College, Delhi, a Master's Degree in Economics from the Delhi School of Economics and an MBA in Finance & Marketing from ISB Hyderabad.

ACKNOWLEDGEMENTS

The authors would like to thank the following Accenture employees for their contribution to this document: Medb Corcoran and Alessandro Quadrelli.

ABOUT ACCENTURE

Accenture is a global management consulting, technology services and outsourcing company, with more than 305,000 people serving clients in more than 120 countries. Combining unparalleled experience, comprehensive capabilities across all industries and business functions, and extensive research on the world's most successful companies, Accenture collaborates with clients to help them become high-performance businesses and governments. The company generated net revenues of US$30.0 billion for the fiscal year ended Aug. 31, 2014. Its home page is www.accenture.com.

DISCLAIMER: This document is intended for general informational purposes only, does not take into account the reader's specific circumstances, and may not reflect the most current developments. Accenture disclaims, to the fullest extent permitted by applicable law, any and all liability for the accuracy and completeness of the information in this document and for any acts or omissions made based on such information. Accenture does not provide legal, regulatory, audit, or tax advice. Readers are responsible for obtaining such advice from their own legal counsel or other licensed professionals.

STAY CONNECTED

Accenture Finance & Risk Services:
http://www.accenture.com/microsites/financeandrisk/Pages/index.aspx

Connect With Us
https://www.linkedin.com/groups?gid=3753715

Join Us
https://www.facebook.com/accenturestrategy
http://www.facebook.com/accenture

Follow Us
http://twitter.com/accenture

Watch Us
www.youtube.com/accenture

Copyright 2014 Accenture. All rights reserved. Accenture, its logo, and High Performance Delivered are trademarks of Accenture. Rights to trademarks referenced herein, other than Accenture trademarks, belong to their respective owners. We disclaim proprietary interest in the marks and names of others.
