
FRM PART – II

Practice Book 2

Subject 1: Operational Risk


Subject 2: Liquidity Risk
Subject 3: Investment Risk
Subject 4: Current Issues

CONTENTS

Operational Risk

Sl. No.  Reading                                                              Page No.
1        Principles for the Sound Management of Operational Risk                    5
2        Enterprise Risk Management: Theory and Practice                           11
3        OpRisk Data and Governance                                                14
4        Information Risk and Data Quality Management                              27
5        Assessing the Quality of Risk Measures                                    31
6        Risk Capital Attribution and Risk-Adjusted Performance Measurement        35
7        Range of Practices and Issues in Economic Capital Frameworks              42
8        Capital Planning at Large Bank Holding Companies                          50
9        Stress Testing Banks                                                      54
10       Guidance on Managing Outsourcing Risk                                     57

Liquidity Risk

Sl. No.  Reading                                                              Page No.
11       Liquidity and Leverage                                                    59
12       Repurchase Agreements and Financing                                       67
13       Illiquid Assets                                                           71


Investment Risk

Sl. No.  Reading                                                              Page No.
14       Factor Theory                                                             77
15       Factors                                                                   83
16       Alpha (and the Low-Risk Anomaly)                                          89
17       Portfolio Risk: Applying Analytical Methods                               97
18       VaR and Risk Budgeting in Investment Management                          103
19       Risk Monitoring and Performance Measurement                              109
20       Portfolio Performance Evaluation                                         114
21       Hedge Funds                                                              124
22       Performing Due Diligence on Specific Managers and Funds                  129

Current Issues

Sl. No.  Reading                                                              Page No.
24       Machine Learning: A Revolution in Risk Management and Compliance?        132
25       Artificial Intelligence and Machine Learning in Financial Services       138

Principles for the Sound Management of Operational Risk | Questions
1. Risk Manager Beth has drafted a re-design of her bank's operational risk management framework (ORMF) that
includes roles and responsibilities for their Three Lines of Defense. In her proposed framework, the three lines are
(i) business line management, (ii) an independent corporate operational risk management function (CORF), and (iii)
an independent review by the auditors:

At a high level, her proposed design includes--but is not limited to--the following key responsibilities:
i. Business line management is responsible for identifying and managing the risks inherent in the products,
activities, processes and systems for which it is accountable
ii. The CORF challenges the business lines’ inputs to (and outputs from) the bank’s operational risk management,
operational risk measurement, and operational risk reporting systems
iii. The independent review can be conducted by EITHER an internal or external audit; however, the auditors
MUST design and implement the operational risk management framework (ORMF), and the independent
review must include both verification and validation
At first glance, Beth's draft high-level framework looks reasonable EXCEPT which of the following is an obvious
MISTAKE in her proposal?
a) Business line management is not a line of defense
b) Independent review includes neither verification nor validation
c) The auditors cannot be involved in the framework's design, development, or implementation
d) The independent review cannot be conducted internally, but rather must be conducted by external parties
2. In order for a bank to achieve sound management of its operational risk, necessary features include a strong risk
management culture, an integrated operational risk management framework (ORMF), an engaged Board of
directors, senior managers who assume responsibility for the framework, and an independent corporate operational
risk management function (CORF; aka, second line of defense). In regard to the fundamental principles of
operational risk management, each of the following is true EXCEPT which is false?
a) To maintain independence, the ORMF prohibits any reporting relationship (even dotted-line) from the CORF
to either the Chief Risk Officer (CRO) or the Board's Risk Committee
b) The ORMF should provide for a common taxonomy of operational risk terms to ensure consistency of risk
identification (e.g., operational loss event types), exposure rating and risk management objectives
c) Compensation plans should be aligned with the bank's risk appetite and risk tolerance; and such plans may
include incentive compensation explicitly linked to risk-adjusted measures, deferral mechanisms, or claw-backs
d) The Board of Directors approves a RISK APPETITE statement (i.e., a forward-looking, high-level view of risk
acceptance that incorporates return expectations) and a RISK TOLERANCE statement (i.e., a more specific
determination of the level of variation the bank is willing to accept around business objectives) for operational
risk

3. The first two principles (among eleven in total) are the fundamental principles. Principle One calls for the
establishment of a strong risk management culture. Principle Two calls for the development and implementation
of an operational risk management framework (ORMF). Among the following responsibilities, each is the
responsibility of senior management EXCEPT which is the responsibility of the Board of Directors?
a) Establishes and maintains robust challenge mechanisms and effective issue-resolution processes
b) Translates the ORMF into specific policies and procedures that can be implemented and verified within the
different business units
c) To ensure implementation of the ORMF, recruits experienced technical staff and ensures a sufficient level of
operational risk training
d) Reviews and approves the ORMF (at regular intervals) to ensure that the bank has identified and is managing
the operational risks arising from external market changes and other environmental factors
4. Principle Six (6) among the Principles for the Sound Management of Operational Risk advises that "Senior management
should ensure the identification and assessment of the operational risk inherent in all material products, activities,
processes and systems to make sure the inherent risks and incentives are well understood." Among the tools
promoted, the following four are discussed:
i. Losses in excess of a threshold are subjected to an exhaustive root cause analysis by the first line of defense,
which in turn is subject to an independent review and challenge by the second line of defense
ii. These metrics and/or statistics are used to monitor the main drivers of exposure associated with key risks.
They also provide insight into the status of operational processes, which may in turn provide insight
into operational weaknesses, failures, and potential loss. They are often paired with escalation triggers to
warn when risk levels approach or exceed thresholds or limits and prompt mitigation plans
iii. This tool identifies the key steps and risk points in business activities and organisational functions. It can reveal
individual risks, risk interdependencies, and areas of control or risk management weakness. It can also
help prioritize subsequent management action
iv. A process of obtaining expert opinion of business line and risk managers to identify potential operational risk
events and assess their potential outcome. This is an effective tool to consider potential sources of significant
operational risk and the need for additional risk management controls or mitigation solutions. However, given
its subjectivity, a robust governance framework is essential to ensure the integrity and consistency of the
process
Which of the following correctly MATCHES the name of a tool to its corresponding DESCRIPTION above?
a) I = Internal loss data; II = Key Risk Indicators (KRIs); III = Business Process Mapping; IV= Scenario Analysis
b) I = Internal loss data; II = Scenario Analysis; III = Key Risk Indicators (KRIs); IV= Business Process Mapping
c) I = External loss data; II = Scenario Analysis; III = Key Risk Indicators (KRIs); IV= Business Process Mapping
d) I = External loss data; II = Business Process Mapping; III = Scenario Analysis; IV= Key Risk Indicators (KRIs)
5. The following section 39(d) of the Principles for the Sound Management of Operational Risk contains two blanks
where there should be key vocabulary terms: "Risk Assessments: In a risk assessment, often referred to as a Risk
Self-Assessment (RSA), a bank assesses the processes underlying its operations against a library of potential threats
and vulnerabilities and considers their potential impact. A similar approach, Risk Control Self Assessments (RCSA)
evaluates [ Blank #1 ] risk (the risk before controls are considered), the effectiveness of the control
environment, and [ Blank #2 ] risk (the risk exposure after controls are considered). Scorecards build on
RCSAs by weighting residual risks to provide a means of translating the RCSA output into metrics that give a relative
ranking of the control environment;"
In order of their appearance, which two vocabulary terms should fill in the blanks above?
a) Latent, Surplus
b) Inherent, Residual
c) Obvious, Contingent
d) Indigenous, Vestigial

6. Principle Nine (9) of the Principles for the Sound Management of Operational Risk concerns Control and Mitigation and
advises that "Banks should have a strong control environment that utilizes policies, processes and systems;
appropriate internal controls; and appropriate risk mitigation and/or transfer strategies." In regard to the
Committee's suggestions for managing technology risk and outsourcing risk, which of the following statements is
TRUE?
a) In most cases, outsourcing should be avoided because it introduces uncontrollable risks
b) Risk transfer tools are a replacement (aka, substitute) for internal operational risk control
c) The need for insurance against operational loss events is a red flag which indicates insufficient internal
controls
d) The use of technology related products, activities, processes and delivery channels exposes a bank to strategic,
operational, and reputational risks and the possibility of material financial loss.

Principles for the Sound Management of Operational Risk | Answers
1. Correct Answer: C
True: The auditors cannot be involved in the framework's design, development, or implementation
In regard to (A), (B) and (D), each is FALSE, or not necessarily true. See below from the Principles for the Sound
Management of Operational Risk (emphasis ours):
"14. In the industry practice, the first line of defense is business line management. This means that sound
operational risk governance will recognize that business line management is responsible for identifying and
managing the risks inherent in the products, activities, processes and systems for which it is accountable.
A functionally independent corporate operational risk function (CORF) is typically the second line of defense,
generally complementing the business line’s operational risk management activities. The degree of independence
of the CORF will differ among banks. For small banks, independence may be achieved through separation of duties
and independent review of processes and functions. In larger banks, the CORF will have a reporting structure
independent of the risk generating business lines and will be responsible for the design, maintenance and ongoing
development of the operational risk framework within the bank. This function may include the operational risk
measurement and reporting processes, risk committees and responsibility for board reporting. A key function of the
CORF is to challenge the business lines’ inputs to, and outputs from, the bank’s risk management, risk measurement
and reporting systems. The CORF should have a sufficient number of personnel skilled in the management of
operational risk to effectively address its many responsibilities.
The third line of defense is an independent review and challenge of the bank’s operational risk management
controls, processes and systems. Those performing these reviews must be competent and appropriately trained and
not involved in the development, implementation and operation of the Framework. This review may be done by
audit or by staff independent of the process or system under review, but may also involve suitably qualified external
parties."
2. Correct Answer: A
False. The CORF needs to be independent of the risk generating business lines and therefore tends to report directly
to the CRO (or chief-level STAFF function; e.g., CFO, COO or Chief Compliance Officer) with dotted line to the Board's
Risk Committee.
In regard to (B), (C) and (D), each is TRUE.
3. Correct Answer: D
Review and approval of the ORMF is the responsibility of the Board, while the other three are the responsibility
of senior management. See the Principles.
In regard to (A), (B) and (C): "32. Senior management is responsible for establishing and maintaining robust challenge
mechanisms and effective issue-resolution processes. These should include systems to report, track and, when
necessary, escalate issues to ensure resolution. Banks should be able to demonstrate that the three lines of defense
approach is operating satisfactorily and to explain how the board and senior management ensure that this approach
is implemented and operating in an appropriate and acceptable manner.
33. Senior management should translate the operational risk management Framework established by the board of
directors into specific policies and procedures that can be implemented and verified within the different business
units. Senior management should clearly assign authority, responsibility and reporting relationships to encourage
and maintain accountability, and to ensure that the necessary resources are available to manage operational risk in
line with the bank’s risk appetite and tolerance statement. Moreover, senior management should ensure that the
management oversight process is appropriate for the risks inherent in a business unit’s activity.
36. Senior management should ensure that bank activities are conducted by staff with the necessary experience,
technical capabilities and access to resources. Staff responsible for monitoring and enforcing compliance with the
institution’s risk policy should have authority independent from the units they oversee."

4. Correct Answer: A
True: I = Internal loss data; II = Key Risk Indicators (KRIs); III = Business Process Mapping; IV= Scenario Analysis.
See the Principles for full descriptions. Here are summary descriptions of the tools used to identify and assess
operational risks:
(a) Audit Findings: While audit findings primarily focus on control weaknesses and vulnerabilities, they can also
provide insight into inherent risk due to internal or external factors.
(b) Internal Loss Data Collection and Analysis: Internal operational loss data provides meaningful information for
assessing a bank’s exposure to operational risk and the effectiveness of internal controls. Analysis of loss
events can provide insight into the causes of large losses and information on whether control failures are
isolated or systematic ... (from the BIS Review of the Principles, BCBS 292: "Most of the banks have a
well-established process to collect internal operational loss data, with some collecting loss data above a threshold
(e.g. $10,000 or €10,000), while some banks collect data on all operational losses, and have not established
an internal threshold.")
(c) External Data Collection and Analysis: External data elements consist of gross operational loss amounts,
dates, recoveries, and relevant causal information for operational loss events occurring at organizations other
than the bank. External loss data can be compared with internal loss data, or used to explore possible
weaknesses in the control environment or consider previously unidentified risk exposures;
(d) Risk Assessments: In a risk assessment, often referred to as a Risk Self-Assessment (RSA), a bank assesses the
processes underlying its operations against a library of potential threats and vulnerabilities and considers their
potential impact. A similar approach, Risk Control Self Assessments (RCSA) ... (see next question)
(e) Business Process Mapping: Business process mappings identify the key steps in business processes, activities
and organisational functions. They also identify the key risk points in the overall business process. Process
maps can reveal individual risks, risk interdependencies, and areas of control or risk management weakness.
They also can help prioritize subsequent management action;
(f) Risk and Performance Indicators: Risk and performance indicators are risk metrics and/or statistics that
provide insight into a bank’s risk exposure. Risk indicators, often referred to as Key Risk Indicators (KRIs), are
used to monitor the main drivers of exposure associated with key risks. Performance indicators, often referred
to as Key Performance Indicators (KPIs), provide insight into the status of operational processes, which may
in turn provide insight into operational weaknesses, failures, and potential loss. Risk and performance
indicators are often paired with escalation triggers to warn when risk levels approach or exceed thresholds or
limits and prompt mitigation plans;
(g) Scenario Analysis: Scenario analysis is a process of obtaining expert opinion of business line and risk managers
to identify potential operational risk events and assess their potential outcome. Scenario analysis is an
effective tool to consider potential sources of significant operational risk and the need for additional risk
management controls or mitigation solutions. Given the subjectivity of the scenario process, a robust
governance framework is essential to ensure the integrity and consistency of the process;
(h) Measurement: Larger banks may find it useful to quantify their exposure to operational risk by using the
output of the risk assessment tools as inputs into a model that estimates operational risk exposure. The results
of the model can be used in an economic capital process and can be allocated to business lines to link risk and
return; and
(i) Comparative Analysis: Comparative analysis consists of comparing the results of the various assessment tools
to provide a more comprehensive view of the bank’s operational risk profile. For example, comparison of the
frequency and severity of internal data with RCSAs can help the bank determine whether self-assessment
processes are functioning effectively. Scenario data can be compared to internal and external data to gain a
better understanding of the severity of the bank’s exposure to potential risk events."
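To make item (f) concrete, the following is a minimal sketch of a key risk indicator paired with Red/Amber/Green escalation triggers. The KRI (a failed-trade rate) and the threshold values are illustrative assumptions, not taken from the Principles.

```python
def kri_status(value: float, amber: float, red: float) -> str:
    """Map a KRI observation to a Red/Amber/Green (RAG) status."""
    if value >= red:
        return "RED"    # threshold/limit breached: prompt a mitigation plan
    if value >= amber:
        return "AMBER"  # approaching the threshold: early warning
    return "GREEN"

# Hypothetical KRI: failed-trade rate (%) in a settlement process
for rate in (0.4, 1.3, 2.8):
    print(f"failed-trade rate {rate}% -> {kri_status(rate, amber=1.0, red=2.5)}")
```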

5. Correct Answer: B
Inherent, Residual
From the Principles 39(d), emphasis ours: "Risk Assessments: In a risk assessment, often referred to as a Risk Self-
Assessment (RSA), a bank assesses the processes underlying its operations against a library of potential threats and
vulnerabilities and considers their potential impact. A similar approach, Risk Control Self Assessments (RCSA),
typically evaluates inherent risk (the risk before controls are considered), the effectiveness of the control
environment, and residual risk (the risk exposure after controls are considered). Scorecards build on RCSAs by
weighting residual risks to provide a means of translating the RCSA output into metrics that give a relative ranking
of the control environment;"
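The RCSA-to-scorecard logic above can be sketched in a few lines. This is a simplified illustration under assumed scales (inherent risk scored 1-5, control effectiveness as a fraction, weights summing to 1); none of the names or numbers come from the Principles.

```python
# Hypothetical RCSA entries: (risk, inherent score 1-5, control effectiveness 0-1, weight)
risks = [
    ("Settlement failure",   4, 0.75, 0.5),
    ("Unauthorized trading", 5, 0.60, 0.3),
    ("Data-entry error",     3, 0.90, 0.2),
]

scorecard = 0.0
for name, inherent, control_eff, weight in risks:
    residual = inherent * (1 - control_eff)  # risk remaining after controls
    scorecard += weight * residual           # weight residual risks into one metric
    print(f"{name}: inherent={inherent}, residual={residual:.2f}")

print(f"Weighted scorecard metric: {scorecard:.2f}")  # input to a relative ranking
```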
6. Correct Answer: D
True: The use of technology related products, activities, processes and delivery channels exposes a bank to
strategic, operational, and reputational risks and the possibility of material financial loss.
From the Principles: "51. Effective use and sound implementation of technology can contribute to the control
environment. For example, automated processes are less prone to error than manual processes. However,
automated processes introduce risks that must be addressed through sound technology governance and
infrastructure risk management programs.
The use of technology related products, activities, processes and delivery channels exposes a bank to strategic,
operational, and reputational risks and the possibility of material financial loss. Consequently, a bank should have
an integrated approach to identifying, measuring, monitoring and managing technology risks. Sound technology risk
management uses the same precepts as operational risk management and includes:
a) governance and oversight controls that ensure technology, including outsourcing arrangements, is aligned
with and supportive of the bank’s business objectives;
b) policies and procedures that facilitate identification and assessment of risk;
c) establishment of a risk appetite and tolerance statement as well as performance expectations to assist in
controlling and managing risk;
d) implementation of an effective control environment and the use of risk transfer strategies that mitigate risk;
and
e) monitoring processes that test for compliance with policy thresholds or limits."
In regard to (A), (B) and (C), each is false.
 In regard to false (A), according to the Principles: "54. Outsourcing is the use of a third party--either an
affiliate within a corporate group or an unaffiliated external entity--to perform activities on behalf of the bank.
Outsourcing can involve transaction processing or business processes. While outsourcing can help manage
costs, provide expertise, expand product offerings, and improve services, it also introduces risks that
management should address. The board and senior management are responsible for understanding the
operational risks associated with outsourcing arrangements and ensuring that effective risk management
policies and practices are in place to manage the risk in outsourcing activities."
 In regard to false (B), according to the Principles: "56. Because risk transfer is an imperfect substitute for
sound controls and risk management programs, banks should view risk transfer tools as complementary to,
rather than a replacement for, thorough internal operational risk control."
 In regard to false (C), according to the Principles: "55. In those circumstances where internal controls do not
adequately address risk and exiting the risk is not a reasonable option, management can complement controls
by seeking to transfer the risk to another party such as through insurance."

Enterprise Risk Management: Theory and Practice | Questions
1. According to James Lam, a successful enterprise risk management (ERM) program can be broken down into seven
key components: corporate governance, line management, portfolio management, risk transfer, risk analytics, data
and technology resources, and stakeholder management.
In particular, he says it is important that expected losses and the cost of risk capital should be included in the pricing
of a product or the required return of an investment project. In business development, risk acceptance criteria should
be established to ensure that risk management issues are considered in new product and market opportunities.
Transaction and business review processes should be developed to ensure the appropriate due diligence. Efficient
and transparent review processes will allow managers to develop a better understanding of those risks that they can
accept independently and those that require corporate approval or management. To which component does this key
activity--i.e., pricing of risk at its inception--primarily refer?
A. Corporate Governance
B. Line Management
C. Portfolio Management
D. Risk Transfer
2. According to Lam, there are requirements, or prerequisites, to the achievement of successful ERM. Each of the
following is a prerequisite to successful ERM EXCEPT which is not?
A. The integration of internal and external communications (including investor and public relations) that support a
successful ERM launch date; the timing of the switch to ERM should be coordinated on a specific date as this
avoids a long project with overruns and encourages accountability
B. The integration of risk transfer strategies which takes a portfolio view of all types of risk within a company and
rationalizes the use of derivatives, insurance, and alternative risk transfer products to hedge only the residual
risk deemed undesirable by management.
C. An integrated risk organization which probably implies a centralized risk management unit (RMU) reporting to
the Chief Executive Officer (CEO) and a Chief Risk Officer (CRO) who is responsible for overseeing all aspects of
risk within the organization
D. The integration of risk management into the business processes of a company which enables a shift from
defensive or control-oriented approaches to managing downside risk (or earnings volatility) in favor of risk as an
offensive weapon for management

3. The role of Chief Risk Officer (CRO) is clearly gaining in prominence. According to James Lam, the CRO is responsible
for:
 Providing the overall leadership, vision, and direction for enterprise risk management;
 Establishing an integrated risk management framework for all aspects of risks across the organization;
 Developing risk management policies, including the quantification of the firm's risk appetite through specific risk
limits;
 Implementing a set of risk indicators and reports, including losses and incidents, key risk exposures, and early
warning indicators;
 Allocating economic capital to business activities based on risk, and optimizing the company's risk portfolio
through business activities and risk transfer strategies;
 Communicating the company's risk profile to key stakeholders such as the board of directors, regulators, stock
analysts, rating agencies, and business partners; and
 Developing the analytical, systems, and data management capabilities to support the risk management program
Given these responsibilities, Lam says an ideal CRO would have superb skills in five areas (While it is unlikely that any
single individual would possess all of these skills, it is important that these competencies exist either in the CRO or
elsewhere within his or her organization). Those five skills are:
 Leadership skills to hire and retain talented risk professionals and establish the overall vision for ERM
 Evangelical skills to convert skeptics into believers, particularly when it comes to overcoming natural resistance
from the business units.
 Stewardship to safeguard the company's financial and reputational assets
 Technical skills in big data analytics which requires some background in programming code preferably with R
and/or python
 Consulting skills in educating the board and senior management, as well as helping business units implement
risk management at the enterprise level
However, which of the above skills is inaccurately specified (defined)?
A. Leadership
B. Evangelical
C. Stewardship
D. Technical

Enterprise Risk Management: Theory and Practice | Answers
1. Correct Answer: B
Lam: Line Management: Perhaps the most important phase in the assessment and pricing of risk is at its inception.
Line management must align business strategy with corporate risk policy when pursuing new business and growth
opportunities. The risks of business transactions should be fully assessed and incorporated into pricing and
profitability targets in the execution of business strategy. Specifically, expected losses and the cost of risk capital
should be included in the pricing of a product or the required return of an investment project. In business
development, risk acceptance criteria should be established to ensure that risk management issues are considered in
new product and market opportunities. Transaction and business review processes should be developed to ensure
the appropriate due diligence. Efficient and transparent review processes will allow line managers to develop a better
understanding of those risks that they can accept independently and those that require corporate approval or
management.

James Lam considers different, valid definitions for enterprise risk management (ERM) by the Committee of
Sponsoring Organizations of the Treadway Commission (COSO) and the International Organization for
Standardization (ISO) but settles on his own definition: Risk is a variable that can cause deviation from an expected
outcome. ERM is a comprehensive and integrated framework for managing key risks in order to achieve business
objectives, minimize unexpected earnings volatility, and maximize firm value. He claims that ERM offers the potential
to confer three major benefits: increased organizational effectiveness, better risk reporting, and improved business
performance. However, to achieve successful ERM is not easy.
2. Correct Answer: A
False, ERM is not achieved by a switch on a specific date: All this integration is not easy. For most companies, the
implementation of ERM implies a multi-year initiative that requires ongoing senior management sponsorship and
sustained investments in human and technological resources. Ironically, the amount of time and resources dedicated
to risk management is not necessarily very different for leading and lagging organizations.
3. Correct Answer: D
Should be Technical skills in strategic, business, credit, market, and operational risks. Otherwise, the five skills are
correctly specified.

OpRisk Data and Governance | Questions
1. Trader Kabir enters a foreign exchange (FX) spot transaction, buying euros for dollars. He then quickly enters another
transaction selling euros for dollars. He makes a quick profit from these two transactions; however, due to confusion
in the back office, settlement to the counterparty is delayed by four days. The counterparty demands
compensation for the delayed settlement and the claim includes lost (forgone) interest. The cost of the mistake
exceeds the profit on the trade, resulting in an operational loss. To which Level 1 event risk category should this loss
be assigned?
a) Improper Business or Market Practices
b) Business Disruption and System failures
c) Clients, Products, and Business Practices
d) Execution, Delivery & Process Management
2. An Investment Advisor at a division of an international bank was sued by an employer-based retirement plan sponsor
for fiduciary breach. The client claimed that the Advisor's recommendation put the Advisor's intention to earn a
commission and the firm's profit motive above the client's best interest. The lawsuit was settled. To which event risk
category should this loss be assigned?
a) Improper Business or Market Practices
b) Business Disruption and System failures
c) Clients, Products, and Business Practices
d) None of the above, legal risk is not an operational risk
3. Last year, Goodholdings Bank released a major upgrade of its custom-built enterprise resource planning (ERP)
software platform. However, the release exposed problems with the extant codebase. Consultants were hired to
refactor the code and resolve the so-called technical debt. Technical debt is the development effort required to fix
structural software problems that remain in code due to sub-optimal programming practices; for example, a quick fix
might satisfy an immediate user requirement but interfere with long-term compatibility or performance.
Management views the cost of hiring the consultants as an operational loss. To which event risk category should the
loss be assigned?
a) Internal fraud
b) Business disruption and system failures
c) Employment Practices and Workplace Safety
d) None of the above, technical debt is an exception that is excluded from the taxonomy
4. Which of the following is TRUE about the selection of a threshold for the purposes of reporting operational losses?
a) The primary driver of threshold selection is to include only tail risk events which drive OpRisk capital
b) Basel III prohibits the use of internal operational loss data thresholds, both in its current and proposed
standardized measurement approach (SMA) for operational risk
c) The loss threshold can vary between banks and within a bank across event types, but the de minimis gross loss
threshold must be low enough to capture all material exposures
d) The primary motivation for threshold selection is to avoid the costly data collection of losses that are small in
severity (magnitude); for example, losses less than €20,000 probably should be excluded

5. It is currently the third quarter of fiscal year 2016 (Q3 2016) and Freshtech Bank has just discovered product
defects that have triggered both a regulatory investigation and a dispute with a key counterparty. In an attempt to
remedy the problem, management today announced an internal restructuring. The detailed, formal plan for
restructuring will not be finalized until next year. As Chief Risk Officer (CRO), you have determined this event type
should be classified under the Level 1 Category of Client, Project and Business Practices. You have asked your staff
for an estimate of costs associated with this event. Which of the following is the MOST LIKELY candidate for current
recognition (i.e., to be recognized in the current accounting period) of a provision for an EXPECTED operational loss
due to this crisis?
a) Retraining and relocation costs associated with the restructuring triggered by the event
b) An "onerous contract" impacted by the event: the unavoidable costs of meeting the contract's obligations will
now exceed the expected economic benefits
c) A projected estimate of an operating loss in the future (i.e., FY 2017 which is next year) due to the event and the
estimate is given "with greater than 75% confidence"
d) There can be no current period recognition because IAS 37 does not allow for the provision of a future liability:
any and all losses are recorded only after they become cash expenses
6. In regard to a recent loss event, your firm's accounting staff has itemized the following losses associated with the
event:
 Gross loss due to the event: $5,000,000
 Forgone revenue (opportunity cost): $3,000,000
 Estimated, non-rapid recovery with 50% probability: $1,500,000
 Insurance premiums (paid by the firm while the event occurred): $12,500
What is the operational loss amount that will be calculated in association with the event under Basel II/III?
a) 3,500,000
b) 4,250,000
c) 5,000,000
d) 6,487,500
7. Silverfind Financial International is planning to conduct a risk control self-assessment (RCSA) for each of its business
units. Which of the following is MOST LIKELY to be a feature or element of the firm's RCSA?
a) Staff within a business unit are excluded from providing input into their own business unit's RCSA scorecard in
order to avoid conflicts of interest
b) Any process with an inherent risk that is greater than its residual risk will be assigned a "red flag" because this is
a situation that requires a process fix
c) Managers will first identify and assess inherent risks by making no inferences about controls embedded in the
process; i.e., controls are assumed to be absent
d) In order to estimate realistic outcomes, risks are identified by assuming the mitigation impact of in-place
controls; that is, the risk exposure sought is net of control and mitigation
8. Streethigh Bank is developing key risk indicators (KRI) for their Equity Settlement Process. The KRIs will be used as a
proxy for the quality of the control environment and will directly inform the bank's OpRisk model. Assuming the bank
is utilizing best practices in developing its KRIs, which of the following is most likely to be TRUE?
a) Collection of KRIs will be automated directly from the firm's operational systems at the cost center level
b) All of the bank's key risk indicators will be standardized on the same Red/Amber/Green (RAG) scale which can
be quantified as 1/2/3
c) A majority of the bank's key risk indicators (KRIs) are qualitative; realistically, the few remaining quantitative KRIs
are measured as either nominal or ordinal variables
d) Because the model concerns the banks own operational losses, external factors such as stock market volatility
or interest rates will be excluded from the final quantitative model

9. According to Cruz, Peters and Shevchenko, scenario analysis is a key tool in operational risk measurement and "for a
significant number of firms, the scenario analysis program is the pillar of their framework." In regard to scenario
analysis, each of the following is true EXCEPT which is not true?
a) Though there are different approaches to run a scenario workshop, three approaches are widely used: structured
workshops, surveys, or individualized discussions
b) Scenario analysis is NOT effective for emerging risks due to a lack of actual loss experience and because there
will likely exist a disparity of opinions among experts on loss sizes and frequencies
c) Scenario estimates are usually gathered through expert opinions, but the opinions need to be converted into
numbers; the most frequent approach is to gather estimates of loss frequencies for predefined severity
brackets
d) A key limitation of gathering expert opinions is that biases tend to arise (for example, presentation bias which is
when the sequence of provided information skews the assessment; or availability bias which refers to the
over/underestimation of loss events due to respondents’ familiarity with a risk) and biases are very difficult to
mitigate or avoid
10. General Financial Services is a large, international firm with several business units. The operational risk profiles of
four of its largest (unidentified) business units are summarized below.
Specifically,
 Business Unit (A) is dominated in frequency and severity by Execution, Delivery & Process management (EDPM)
which is more than 75% of the frequency and a majority (> 50%) of the loss severity (total loss amount in the
business line)
 Business Unit (B) assigns about half the frequency of its losses to Client, Products & Business Practices (CPBP)
but a whopping 90+% of its loss severity occurs in CPBP
 Business Unit (C) assigns, with respect to frequency, 40% to External Fraud, ~20% to EDPM, and ~20% to
Employment Practices and Workplace Safety; with respect to loss severity, it assigns 40% to CPBP, ~20% to
EDPM, and ~20% to External Fraud
 Business Unit (D) assigns, with respect to frequency, ~50% to EDPM and 25% to External Fraud; loss severity is
mostly (75%) assigned to EDPM.
Each of the following is a plausible business type for each unidentified business unit EXCEPT which of the
following is probably INCORRECT?
a) Business Unit (A) is Trading and Sales
b) Business Unit (B) is Corporate Finance
c) Business Unit (C) is Asset Management
d) Business Unit (D) is Payment and Settlement

11. According to Cruz, Peters and Shevchenko, we should not underestimate the role of organization in a business. They
say, "Although many times the focus is on the measurement models with its complex formulas, most of the times the
success of implementing an OpRisk framework lies in having the right organization. The organizational design would
usually hint at the strength and degree of development of an OpRisk framework at a firm." Consider the skeleton
organization chart:

With respect to the question, "To whom should business risk managers report?" (i.e., where in the chart are business
risk managers, shown in green, located?), Cruz argues that which of the following configurations is viable, or VALID
in practice, as an organizational configuration?
a) Matrix-style reporting: Business Units appoint business risk managers and primarily make their compensation
decisions, but business risk managers have "dotted line reporting" to the Central Risk function. Matrix reporting
requires that the Business Units have a strong risk culture and culture of independence
b) Solid-line reporting to Central Risk Management: business risk managers physically work in, and have dotted line
reporting to, their business units but have solid line reporting to the CRO who is usually at headquarters. This
configuration is more likely than Matrix reporting to create a homogeneous risk culture and to prioritize risk
management efforts across initiatives
c) Strong Central Risk Management: business risk managers have only solid line reporting to the central risk
function. The Chief Risk Officer (CRO) is fully responsible for risk across the firm; the structure makes it easier for
regulators to streamline supervision
d) According to Cruz, EACH of the above configurations is viable (i.e., true) although Matrix reporting represents a
less mature structure than Strong Central Risk Management
12. In regard to Risk Governance and the firm's operational risk (OpRisk) Policy, according to Cruz, Peters and
Shevchenko, each of the following is true EXCEPT which is not?
a) It is the responsibility of the Board of Directors to ensure that a strong OpRisk management culture exists
throughout the whole organization
b) With respect to the risk taxonomy exercise, the best approach to the classification methodology is to either (i)
use a cause-driven method; or (ii) use a mixture of cause-driven, impact-driven, and event-driven methods
c) Common industry practice for sound OpRisk governance often relies on three lines of defense: Business line
management; an independent corporate OpRisk management function; and an independent review (usually
internal audit)
d) An OpRisk Policy defines a firm's operational risk management framework and includes the following, among
other components: a risk taxonomy; loss collection (defines what losses or incidents should be reported and
discusses concepts of near-misses and recoveries); validation (how the risk assessment and measurement are
validated, how frequent validation takes place, and which departments are responsible for the validation); and
governance (where the OpRisk policy is situated, which committee approves it, and how the OpRisk governance
works)

OpRisk Data and Governance | Answers
1. Correct Answer: D
Execution, Delivery & Process Management
Cruz: "2.2.1 EXECUTION, DELIVERY, AND PROCESS MANAGEMENT: Losses of this event type are quite frequent as
these can be due to human errors, mis-communications, and so on, which are very common in an environment
where banks have to process millions of transactions per day. A typical example of execution losses might help to
illustrate how frequent these losses can be.
Consider the following deal: A foreign exchange (FX) trader bought USD 100,000,000 for €90,000,000 (i.e., USD 1 =
€0.90) and then sold USD 100,000,000 for €90,050,000 (i.e., USD 1 = €0.9005) with a trading initial profit of €50,000.
Both transactions were made almost at the same time, and the trader was obviously very satisfied with a profit of
€50,000. In his/her excitement at the successful deal, however, there were some snags in the back-office with some
confusion on where to remit the payments of one leg of the deal, and the transaction was finally settled 3 days later
than it should have been.
In FX transactions trading tickets are usually larger to compensate for the low margins. Similar situations as
described earlier may lead to errors. The counterparties obviously would have demanded a compensation as the
settlement has been delayed for 3 days, and the bank would also have paid a penalty, in the form of interest claims
of €55,000. Therefore, any error has the potential to be higher than a transaction's eventual economic profit.
The overall scenario is alarming. There was a loss of €5000 on the aggregate due to operational errors (€50,000
transaction profit less €55,000 interest claims due for late payment). This is the reality a trading environment faces
on the day-to-day. The actions of traders are recognized at the closing of the deal, and errors coming to light at a
later time (e.g., mispricing, late settlement) are not linked back to the underlying cause. The error goes to an “error
account” or the like and, in terms of OpRisk management, those who are responsible for the errors are never
identified; even worse is that the real profitability of individual transactions is rarely understood. The cost side (and
the OpRisks involved) is in general ignored."
In regard to false (A), please note that "Improper Business or Market Practices" is a Level 2 Category under Clients,
Products & Business Practices.
There are seven event-type categories (at Level 1; each includes sub-categories at Level 2 and Activity Examples at
Level 3):
 Internal fraud: Losses due to acts of a type intended to defraud, misappropriate property or circumvent
regulations, the law or company policy, excluding diversity/ discrimination events, which involves at least one
internal party.
 External fraud: Losses due to acts of a type intended to defraud, misappropriate property or circumvent the
law, by a third party
 Employment Practices and Workplace Safety: Losses arising from acts inconsistent with employment, health
or safety laws or agreements, from payment of personal injury claims, or from diversity/discrimination events
 Clients, Products & Business Practices (includes Level 2 "Suitability, Disclosure & Fiduciary" and
"Improper Business or Market Practices"): Losses arising from an unintentional or negligent failure to meet a
professional obligation to specific clients (including fiduciary and suitability requirements), or from the nature
or design of a product.
 Damage to Physical Assets: Losses arising from loss or damage to physical assets from natural disaster or other
events.
 Business Disruption and System Failures (includes Level 2 "Systems" which encompasses hardware, software,
telecommunications, and utility disruptions): Losses arising from disruption of business or system failures
 Execution, Delivery & Process Management: Losses from failed transaction processing or process
management, from relations with trade counterparties and vendors
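The seven Level 1 categories lend themselves to a simple lookup table. Below is an illustrative sketch that tags the loss events from Questions 1-3 with their Level 1 codes; the abbreviations (EDPM, CPBP, BDSF, etc.) are common industry shorthand, used here as an assumption.

```python
LEVEL1 = {
    "IF":   "Internal Fraud",
    "EF":   "External Fraud",
    "EPWS": "Employment Practices and Workplace Safety",
    "CPBP": "Clients, Products & Business Practices",
    "DPA":  "Damage to Physical Assets",
    "BDSF": "Business Disruption and System Failures",
    "EDPM": "Execution, Delivery & Process Management",
}

# Events from Questions 1-3 mapped to their Level 1 event-type codes
events = [
    ("Delayed FX settlement; counterparty interest claim paid", "EDPM"),
    ("Fiduciary-breach lawsuit settled with plan sponsor",      "CPBP"),
    ("ERP upgrade exposes codebase defects (technical debt)",   "BDSF"),
]
for description, code in events:
    print(f"{description} -> {code}: {LEVEL1[code]}")
```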

2. Correct Answer: C
Clients, Products, and Business Practices
Cruz: "2.2.2 CLIENTS, PRODUCTS, AND BUSINESS PRACTICES: Loss events under Clients, Products and Business
Practices (CPBP) risk type are usually the largest, particularly in the US. These events encompass losses, for example,
from disputes with clients and counterparties, regulatory fines from improper business practices, or wrongful
advisory activities. Table 2.2 presents the Basel event-type breakdown and definition for this risk type. This is a
specific and an important risk type for firms with operations in the US where litigation is very common. As seen in
recent regulatory fines imposed on French banks and other foreign banks operating in US jurisdiction, this loss type
can also be significant to off-shore entities."

This Level 1 Category (Clients, Products and Business Practices) includes the following Level 2 Categories: Suitability,
Disclosure, and Fiduciary (e.g., fiduciary breaches/guideline violations; suitability/disclosure issues; retail customer
disclosure violations; breach of privacy; aggressive sales; account churning; misuse of confidential information;
lender liability); Improper Business or Market Practices (e.g., antitrust; improper trade/market practices; market
manipulation; insider trading on firm's account; unlicensed activity; money laundering); Product Flaws (e.g., product
defects; model errors); Selection, Sponsorship, and Exposure (e.g., failure to investigate client per guidelines;
exceeding client exposure limits) and Advisory Activities (e.g., disputes over performance of advisory activities)
3. Correct Answer: B
Business Disruption and System Failures, which includes hardware and software failures
Cruz: "2.2.3 BUSINESS DISRUPTION AND SYSTEM FAILURES: Business Disruption and System Failures (BDSF) event
type is one of the most difficult to spot in a large organization. A system crash, for example, would almost certainly
bear some financial loss for a firm, but these losses most likely would be classified as EDPM. An example might help
to clarify this point. Suppose that the funding system of a large bank crashes at 9:00 am. Despite all efforts from IT,
the system comes back online only by 4:00 pm when money markets are already closed. When the system returns,
the bank learns that it needs to fund an extra USD 20 billion on that day. As the markets are already closed, they
need to make requests to their counterparties to allow them special conditions; however, the rates at which they
capture these funds are higher than the daily average. This extra cost, although due to a system failure and,
therefore, should be classified as BDSF, would hardly be captured at all. Table 2.3 presents the formal Basel
definition and breakdown of this risk type."

4. Correct Answer: C
TRUE: The loss threshold can vary between banks and within a bank across event types, but the de minimis gross
loss threshold must be low enough to capture all material exposures
Cruz: "2.3.2 SETTING A COLLECTION THRESHOLD AND POSSIBLE IMPACTS: Most firms set a threshold for loss
collection as allowed by Basel. However, this decision can have significant impact in establishing the risk profile of a
business unit. This is usually the case in businesses that have heavy transaction execution like asset management or
equities. See the example in Table 2.8. If the OpRisk department had chosen USD 100,000 as the threshold, usually
under the argument that only tail events drive OpRisk capital, that firm would think that its total loss in that year
was USD 49 million. If the threshold choice was USD 20,000, the total losses would be USD 53 million. However,
most losses are due to compensating retail clients whose orders usually range from USD 1000 to USD 50,000.
The sum of the losses under USD 50,000 is about USD 20 million, which is almost equivalent to the losses above USD
5 million. For this particular firm, setting the loss collection threshold at USD 100,000 would show total losses for
the year as USD 49 million. However, if this firm had not set a loss collection threshold, it would observe that
its actual losses were USD 71 million, a very different risk profile.

A number of OpRisk managers pick their threshold thinking only in terms of OpRisk capital. Disregarding these small
losses in many cases can bias the risk profile of a business unit and, of course, this will also have an impact on OpRisk
capital."
In regard to (A), (B) and (D), each is FALSE.
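The threshold arithmetic in the passage above is easy to reproduce. Here is a minimal sketch; the individual loss amounts are made up, since Table 2.8 is not reproduced here, but the mechanics (sum only the losses at or above the collection threshold) follow the text.

```python
def total_reported(losses, threshold=0.0):
    """Sum only the losses at or above the collection threshold."""
    return sum(x for x in losses if x >= threshold)

# Hypothetical loss records in USD
losses = [1_500, 12_000, 45_000, 80_000, 250_000, 6_000_000]

for t in (0, 20_000, 100_000):
    print(f"threshold USD {t:>7,}: total reported = USD {total_reported(losses, t):,}")
```

As the text notes, a higher threshold can materially understate a business unit's risk profile even when it barely changes tail-driven OpRisk capital.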
5. Correct Answer: B
An "onerous contract" impacted by the event: the unavoidable costs of meeting the contract's obligations will now
exceed the expected economic benefits
Cruz: "2.3.7 PROVISIONING TREATMENT OF EXPECTED OPERATIONAL LOSSES: Unlike credit risk, the calculated
expected credit losses might be covered by general and/or specific provisions in the balance sheet. For OpRisk, due
to its multidimensional nature, the treatment of expected losses is more complex and restrictive. Recently, with the
issuing of IAS37 by the International Accounting Standards Board, Wittsiepe (2008), the rules have become clearer
as to what might be subject to provisions (or not). IAS37 establishes three specific applications of these general
requirements, namely:
 a provision should not be recognized for future operating losses;
 a provision should be recognized for an onerous contract — a contract in which the unavoidable costs of
meeting its obligations exceed the expected economic benefits;
 a provision for restructuring costs should be recognized only when an enterprise has a detailed formal plan for
restructuring and has raised a valid expectation in those affected.
These provisions should not include costs such as retraining or relocating continuing staff, marketing, or investing
in new systems and distribution networks; such costs are not necessarily entailed by the restructuring.

IAS37 requires that provisions should be recognized in the balance sheet when, and only when, an enterprise has a
present obligation (legal or constructive) as a result of a past event. The event must be likely to call upon the
resources of the institution to settle the obligation, and, more importantly, it must be possible to form a reliable
estimate of the amount of the obligation. Provisions should be measured in the balance sheet at the best estimate
of the expenditure required to settle the present obligation at the balance sheet date. Any future changes, like
changes in the law or technological changes, may be taken into account where there is sufficient objective evidence
that they will occur. IAS37 also indicates that the amount of the provision should not be reduced by gains from the
expected disposal of assets (even if the expected disposal is closely linked to the event giving rise to the provision)
nor by expected reimbursements (arising from, for example, insurance contracts or indemnity clauses). When and
if it is virtually certain that reimbursement will be received should the enterprise settle the obligation, this
reimbursement should be recognized as a separate asset."
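A simplified rules-as-code sketch of the IAS 37 screens quoted above, applied to candidate items from the Freshtech scenario in Question 5. The item labels are our own shorthand, not IAS 37 language, and this is an illustration rather than accounting guidance.

```python
# IAS 37 screens from the text, encoded as a simple lookup (assumption:
# each candidate item maps cleanly to exactly one of the quoted rules)
IAS37_PROVISION_ALLOWED = {
    "future operating loss": False,                       # no provision for future operating losses
    "onerous contract": True,                             # unavoidable costs exceed expected benefits
    "restructuring without detailed formal plan": False,  # plan must be detailed and formal
    "retraining/relocation of continuing staff": False,   # explicitly excluded cost
}

for item, allowed in IAS37_PROVISION_ALLOWED.items():
    print(f"{item}: recognize provision now? {allowed}")
# Only the onerous contract qualifies for current recognition -> answer (B)
```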
6. Correct Answer: C
$5,000,000; i.e., only the gross loss in this case. In addition to Cruz 2.3.7 (see previous question), here is Cruz:
"2.3.4 RECOVERIES AND NEAR MISSES
The Basel II rules (BCBS, 2006) in general do not allow for the use of recoveries to be considered for capital
calculation purposes. The issue again is that if firms are trying to estimate losses that can happen once every
thousand years, it would not make sense to start applying mitigating factors to reduce the losses and eventually
reducing also capital. For this reason, gross losses should be considered for OpRisk calculation purposes.
The only exception is on rapidly recovered loss events but even this exception is not accepted everywhere. Rapidly
recovered loss events are OpRisk events that lead to losses recognized in financial statements that are recovered
over a short period. For instance, a large internal loss is rapidly recovered when a bank transfers money to a wrong
party but recovers all or part of the loss soon thereafter. A bank may consider this to be a gross loss and a recovery.
However, when the recovery is made rapidly, the bank may consider that only the loss net of the rapid recovery
constitutes an actual loss. When the rapid recovery is full, the event is considered to be a 'near miss.'
2.3.5 TIME PERIOD FOR RESOLUTION OF OPERATIONAL LOSSES
Some OpRisk events, usually some of the largest, will have a large time gap between the inception of the event and
the final closure, due to the complexity of these cases. As an example, most litigation cases that came up from the
financial crisis in 2007/2008 were only settled by 2012/2013. These legal cases have their own life cycle and start
with a discovery phase in which lawyers and investigators would argue if the other party has a proper case to actually
take the action to court or not. At this stage, it is difficult to even come up with an estimate for eventual losses. Even
when a case is accepted by the judge it might be several years until lawyers and risk managers are able to estimate
properly the losses. Firms can set up reserves for these losses (and these reserves should be included in the loss
database), but they usually do that only for a few weeks before the case is settled to avoid disclosure issues (i.e.,
the counterparty eventually knows the amount reserved and uses this information in their favor). This creates an
issue for setting up OpRisk capital because firms would know that they are going to undergo a large loss and yet are
unable to include it in the database; the inclusion of this settlement would cause some volatility in the capital. The
same would happen if a firm set a reserve of, for example, USD 1 billion for a case, and then a few months later, if a
judge decides to remove the loss in favor of the firm. For this reason, firms need to have a clear procedure on how
to handle those large, long-duration losses.
2.3.6 ADDING COSTS TO LOSSES
As said earlier, an operational loss includes all expenses associated with an operational loss event except for
opportunity costs, forgone revenue, and costs related to risk management and control enhancements implemented
to prevent future operational losses. Most firms, for example, do not have enough lawyers on payroll (or the expertise) to deal with all the cases, particularly some of the largest or those that demand specific expertise and whose legal fees are quite expensive. There are cases in which the firm wins in the end, perhaps thanks to external law firms, but their cost can reach tens of millions of dollars. In such cases, even though the firm wins a court victory, there will be an operational loss."
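The quoted convention is easy to operationalize. Below is a minimal Python sketch (ours, not Cruz's) of how a single event might be recorded under the gross-loss rule, using the wrong-party transfer example; the function and field names are hypothetical.

def classify_event(gross_loss: float, rapid_recovery: float = 0.0) -> dict:
    """Record one OpRisk event under the gross-loss convention quoted above."""
    return {
        "capital_dataset_loss": gross_loss,         # gross loss feeds capital calculations
        "net_loss": gross_loss - rapid_recovery,    # what the income statement ultimately bears
        "near_miss": rapid_recovery >= gross_loss,  # full rapid recovery => near miss
    }

# Wrong-party transfer: $5m sent in error and fully recovered soon after.
# The event is a near miss, but the capital dataset still records $5m gross.
print(classify_event(5_000_000, 5_000_000))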
7. Correct Answer: C
Managers first identify and assess inherent risks by making no inferences about controls embedded in the process;
i.e., controls are assumed to be absent
Please note:
 Inherent Risk (aka, gross risk): The risk embedded in an operational process or activity as if no controls or
mitigation were in place; i.e., the gross risk before controls/mitigation
 Residual Risk (aka, net risk): The risk that remains after controls are taken into account (the net risk or risk
after controls)
Cruz (emphasis ours): "2.4.1 RISK CONTROL SELF-ASSESSMENT (RCSA):
These are also known as Control Self-Assessments (CSA) in some firms. According to this procedure, firms regularly
ask experts about their views on the status of each business process and sub process. These reviews are usually
done every 12 or 18 months and color rated Red/Amber/Green (RAG) according to the perceived status. Some firms
go beyond and try to quantify these risks using subjective approaches or through a scorecard. For many firms, RCSA
is the anchor of the OpRisk framework and most OpRisk activities are linked to this procedure. In a broad sense, the
RCSA program requires the documentation and assessment of risks embedded in a firm's processes. Levels of risks
are derived (usually from a frequency and severity basis), and controls associated with these risks are identified. As
risks are usually reported by business units, these processes are aggregated to a certain business unit and
rated/assessed.
In the RCSA program, managers first identify and assess inherent risks by making no inferences about controls
embedded in the process: controls are assumed to be absent. Under this assumption, managers must carefully
identify how risk manifests within the activities in the processes.
The following are the usual questions asked by risk managers in this phase:
 Risk scenarios. Where are the potential failure points in each of these processes?
 Exposure. How big a loss could happen to my operation if a failure happens?
 Correlation to other risks. Could a failure altogether change my organization's performance, either financially,
its reputation, or affect any other area?
The answers point toward the specific inherent risks embedded within a business unit's process, which must be
assessed to determine the likelihood the events could occur (frequency) and severity. The results of this analysis
provide a bird's-eye view of the inherent risk of a firm's business processes. Management can then use this
assessment to prioritize and focus on the most critical risks that must be proactively managed. Once these inherent
risks are understood, controls will be added in the RCSA framework. The effectiveness of these controls is then assessed to understand how efficiently they mitigate risks. At this stage, the residual risk is also calculated,
which is the risk that is left after inherent risks are controlled. Put another way, residual risk is the probability of loss
that remains to systems that store, process, or transmit information after security measures or controls have been
implemented.
For a firm that has the RCSA program as the core of the OpRisk framework, all other OpRisk initiatives under the firm's OpRisk program are usually structured to feed the RCSA. Risk metrics such as key risk indicators (KRIs), internal loss events, and external events would contribute to the risk identification process, ensuring the organization has
considered all readily available data and benchmark risk assessments. Once the universe of controls and mitigation
measures has been identified, the business unit can partner with various control functions to conduct the control
testing phase of the RCSA. Control testing is critical to a mutual understanding of expectations and actions across
business units and between the front and back offices. One significant challenge that arises due to combining RCSA
data is interpreting what the data actually means.
For example, outputs from a RCSA program might lead a risk manager to conclude that no immediate action is
required if the risk exposures are controlled within the tolerances acceptable to the firm. On the other hand, if the
RCSA data indicates that the control environment is weakening and threatening the success of a particular business
goal, a risk manager might decide to recommend a corrective action. However, weighting those risks across the
entire risk universe and naming the most important or “key” might not be an easy and objective task. There are a
number of vendors that provide systems that help to collate these results. The issue with these programs in general
is that they make it harder to integrate with the other data inputs that are numeric. Even if these RAG assessments
can be converted to a number or rating, there is always a bias embedded that the person who does the assessment
would have a motivation to improve their ratings so as to reduce their capital."
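The inherent-versus-residual distinction above can be made concrete with a small scoring sketch. The 1-to-5 frequency and severity scales, the RAG thresholds, and the residual = inherent × (1 − control effectiveness) convention below are illustrative assumptions of ours, not prescriptions from Cruz.

def inherent_score(frequency: int, severity: int) -> float:
    """Score inherent (gross) risk with controls assumed absent; 1..5 scales."""
    return float(frequency * severity)

def residual_score(inherent: float, control_effectiveness: float) -> float:
    """Reduce inherent risk by the assessed effectiveness of controls (0..1)."""
    return inherent * (1.0 - control_effectiveness)

def rag(score: float) -> str:
    """Map a score to the Red/Amber/Green rating used in RCSA reviews."""
    if score >= 15:
        return "Red"
    return "Amber" if score >= 8 else "Green"

gross = inherent_score(frequency=4, severity=5)          # controls assumed absent
net = residual_score(gross, control_effectiveness=0.70)  # after control testing
print(gross, rag(gross), net, rag(net))                  # 20.0 Red 6.0 Green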
8. Correct Answer: A
Collection of KRIs will be automated directly from the firm's operational systems at the cost center level
Cruz (selected, emphasis ours): "2.4.2 KEY RISK INDICATORS: These indicators/factors are mostly quantitative and
are used as a proxy for the quality of the control environment of a business. For example, in order to report the
quality of the processing systems of an investment bank, we might design factors such as 'system downtime'
(measuring the number of minutes that a system stayed offline), and 'system slow time' (counting the minutes that a system was overloaded and running slowly). These KRIs can be extremely important in OpRisk measurement as they
can allow OpRisk models to behave very similarly to those in market and credit risks.
Going back to the equity settlement example, instead of using RAG (red/amber/green) self-assessment, a better way to assess the quality of these processes is to establish a few KRIs that provide an accurate picture of the control environment, as seen in Figure 2.2 (below). As an example, on the trade confirmation stage of the settlement
process, if the number of unsigned confirmations older than 30 days increases to over a certain percent of the total
population, and the number of repudiated trades increases, one might say that this process is facing challenges that
need to be addressed.
[Figure 2.2 (KRIs for the equity settlement process) not reproduced]
The process of KRI collection deserves special attention. It is important that these data are absolutely reliable, in
order to display relationships between KRIs and losses. Automating the collection straight from the firm's
operational systems might help to create a more realistic reflection of the true profile of the infrastructure of a
certain business. There are many stages in establishing these links and of course there is a cost associated with the
implementation of the KRI program, but probably no other type of data will be more powerful than KRIs for
managing and measuring operational risk. It is much easier to explain OpRisk as a function of the control
environment in which a firm exists than to say that OpRisk capital is moving up or down because of past losses or
changes in scenarios.
The first stage of the KRI collection process is trying to establish assumptions on the OpRisk profile of a certain
business. For example, we might assume that execution errors in the equities division can be explained by the trade
volume on the day, the number of securities that failed to be received or delivered, the head count available on the
trading desk and the back office, and system downtime (measured by minutes offline). The decision to be made is:
at what organizational level should this relationship be measured? Equities division as a whole? Should we break
down equities division into cash equities, listed derivatives and OTC derivatives, or along any other lines? Should we
consider breaking it down along regional lines? All these questions are fundamental for the success of the analysis.
The quantitative incorporation of KRI data into OpRisk modeling is discussed in Chapter 16.
If loss data and KRIs are collected at cost center level (the lowest possible level), it becomes possible to perform this
disaggregation. In general, the lower the level you model the causal relationship, the better the chances that you
will find higher-level fits to the model. Put another way, it is easier to find strong causal relationships if you
model, for example, the US cash equities department than modeling at the global equities division level, as the lower
level would better capture local nuances, idiosyncrasies, and trends.
The modeler might also consider using external factors such as equity indexes and interest rates. It is common to find strong relationships between a stock market index and operational losses; for example, higher volatility on stock markets is usually associated with high trading volumes, which in turn is highly associated with execution losses in OpRisk. Table 2.9 (below) presents a few examples of Business Environment and Internal Control Factors (BEICFs) used in a few environments."
[Table 2.9 (example BEICFs) not reproduced]
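To make the causal-relationship idea concrete, here is a minimal regression sketch (ours) of daily execution losses on the four KRIs named above for the equities example. The data are synthetic and the coefficients are illustrative; in practice the KRIs would be collected automatically from the firm's operational systems, ideally at cost center level.

import numpy as np

rng = np.random.default_rng(42)
n = 250                                # one year of business days
volume    = rng.lognormal(10, 0.3, n)  # trades per day
fails     = rng.poisson(12, n)         # securities failed to be received/delivered
headcount = rng.integers(38, 45, n)    # desk plus back-office staff
downtime  = rng.exponential(5, n)      # system minutes offline

# Synthetic daily execution losses driven by the KRIs plus noise
losses = (0.002 * volume + 800 * fails + 3_000 * downtime
          - 500 * headcount + rng.normal(0, 5_000, n))

# Ordinary least squares via numpy's least-squares solver
X = np.column_stack([np.ones(n), volume, fails, headcount, downtime])
beta, *_ = np.linalg.lstsq(X, losses, rcond=None)
print(dict(zip(["const", "volume", "fails", "headcount", "downtime"],
               np.round(beta, 2))))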
9. Correct Answer: B
(B) restates the FALSE claim that scenario analysis is NOT effective for emerging risks due to a lack of actual loss experience and a likely disparity of expert opinions on loss sizes and frequencies. In fact, this is where scenario analysis shines: scenarios are useful precisely where loss experience is unavailable.
In regard to (A), (C) and (D), each is TRUE.
Cruz (emphasis ours): "2.6 Scenario Analysis: Another important tool in OpRisk management and measurement is
scenario analysis. For a significant number of firms, the scenario analysis program is the pillar of their framework.
These scenarios estimates are usually gathered through expert opinions, where these experts (or a group of experts)
communicate their estimates on how losses can happen on an extreme situation. These experts are commonly
guided by information gathered from external data or KRIs and internal loss trends, see for instance discussions on
scenario analysis for OpRisk in Rippel and Teplý (2008), Alderweireld et al. (2006), and Hoffman (2002).
Though there are different approaches to running a scenario workshop, only three are widely used: structured workshops, surveys, or individualized discussions. A recent survey in 2012 of the largest US financial firms (the results are not in the public domain and a reference cannot be provided) shows that information from experts is obtained mainly through structured workshops (Figure 2.3). A comprehensive guide to performing and establishing appropriate statistical structures for surveys in such workshops is provided in detail in O'Hagan et al. (2006). Scenarios can be a useful tool in the case of emerging risks, where loss experience is not available. Financial institutions, understanding this challenge, are creating many new scenarios for these emerging risks every year. Figure 2.4 presents some other results of this survey about the number of new scenarios developed annually by financial firms, showing that most firms develop between 51 and 100 scenarios every year.
In order to make the outcomes of the scenario analysis workshops useful to the OpRisk measurement and
quantification efforts, the opinions need to be converted into numbers. There are a few ways to do so, but the most
frequent is through gathering estimates of loss frequencies over predefined severity brackets. These numbers are then converted to empirical distributions (see the example in Table 2.11) that are aggregated with internal losses later.
After converting expert opinion into an empirical distribution, the question is how to incorporate this into the OpRisk
framework. There are a number of articles on the subject, for example, see recent publications of Dutta and Babbel
(2013), Ergashev (2012), and Shevchenko (2011). It will be discussed in detail in Chapters 14 and 15.
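As a sketch of that conversion step, the snippet below turns hypothetical workshop output into an empirical severity distribution plus an annual frequency. The bracket bounds, expert counts, and the uniform-within-bracket sampling rule are all our assumptions (Table 2.11 is not reproduced here).

import numpy as np

brackets = [(1e5, 1e6), (1e6, 1e7), (1e7, 5e7), (5e7, 2.5e8)]  # USD severity brackets
counts = np.array([20.0, 6.0, 1.5, 0.25])  # expert estimates: losses per year per bracket

annual_frequency = counts.sum()            # feeds the frequency model
severity_pmf = counts / annual_frequency   # empirical severity distribution

for (lo, hi), p in zip(brackets, severity_pmf):
    print(f"{lo:>11,.0f} - {hi:>13,.0f}: {p:.3f}")

# Draw one severity: pick a bracket by probability, then sample within it
rng = np.random.default_rng(0)
i = rng.choice(len(brackets), p=severity_pmf)
print(rng.uniform(*brackets[i]))           # uniform-within-bracket assumption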
Common Issues and Bias in Scenarios. Because scenarios are usually based on expert opinion, they present a number
of biases, see for example, a demonstration of such features in the experiments designed by Lin and Bier (2008).
This is one of the key limitations of this process as these biases are very difficult to mitigate or avoid. Some of the
biases are as follows:
 Presentation Bias. This arises when the order in which the information is provided can skew or alter the
assessment from the experts; see discussion in Hogarth and Einhorn (1992);
 Availability bias. This is related to the over/underestimation of loss events due to respondents' exposure or familiarity with a particular experience or risk. For example, if an expert has a 30-year career in FX trading and has never experienced or seen an individual loss of USD 1 billion or larger, he/she might be unable to accept the risk that such a loss could take place;
 Anchoring bias. Anchoring occurs when participants restrict their estimates to being within a range of a given
value, which may come from their own experiences, a value they have seen elsewhere (e.g., internally, in the
media) or a value provided in the workshop; see discussion in Wright and Anderson (1989);
 “Huddle” bias or anxiety bias. It involves the tendency of groups to avoid conflicts and differences of opinion,
either because individuals do not want to disrupt the smooth functioning of the group through dissent, or
because they are unwilling to disagree openly with the more senior, expert, or powerful people in the room;
see discussions in O'Hagan (2005);
 Gaming. Conflicts of participants’ interests with the goals or consequences of the workshops can cause
motivational biases or gaming. Participants may be unwilling to disclose information or engage meaningfully
in the workshop or may seek to influence the outcomes;
 Over/under confidence bias. This bias involves over/underestimation of risk due to the available experience
and/or literature on the risk being limited;
 Inexpert opinion. In many firms, scenario workshops do not attract the expert (or the expert is not identified), and a more junior person or someone with much less experience ends up participating in the workshop and providing inaccurate estimates;
 Context bias. This bias arises when framing in a certain manner alters the response of experts, that is, colors their opinion; see discussion in Fischhoff et al. (1978).
A fundamental problem that scenario analysis programs face is the disparity of understanding and opinions on loss sizes and frequencies. To circumvent some of these problems, application of the Delphi technique may be of help.
The Delphi technique, as Linstone and Turoff (1975) defined, “… may be characterized as a method for structuring
a group communication process so that the process is effective in allowing a group of individuals, as a whole, to deal
with a complex problem."
10. Correct Answer: C
As Cruz says, "The OpRisk profile of retail banks is not too dissimilar to that of retail brokerage; see Table 2.14. On
the frequency side, most losses are due to external frauds that are daily events for these firms. Execution comes in
a far second. However, when looking at severity, the largest risk exposure is due to litigation once again."
[Table 2.14 (retail banking OpRisk profile) not reproduced]
In regard to (A), (B) and (D), each comports with the OpRisk Profiles listed in Cruz et al. For a more detailed look,
here is the source BIS data upon which the Cruz section is based (Results from the 2008 Loss Data Collection Exercise
for Operational Risk)
[BIS exhibit (Results from the 2008 Loss Data Collection Exercise) not reproduced]
11. Correct Answer: D
TRUE (all of the above). According to Cruz, EACH of the above configurations is viable (i.e., true), although Matrix reporting represents a less mature structure than Strong Central Risk Management.
12. Correct Answer: B
Cruz et al. caution against a mixture of the three methods. Further, they consider the cause-driven method to be somewhat inferior and outdated relative to the other two: the impact-driven method, which tends to be used by smaller firms, and the superior event-driven method, which tends to be used by larger firms and matches the method used by Basel.
Cruz et al. (selected, emphasis ours): "2.2 OpRisk Taxonomy: ... There are roughly three ways that firms drive this
risk taxonomy exercise: cause-driven, impact-driven, and event-driven.
In many firms, risk taxonomy is a mixture of these three, making it even more difficult to get it right. Let us discuss
these three methods. In the cause-driven method, the risk classification is based on the reasons that cause
operational losses. This usually follows the old OpRisk definition (which most firms use in their annual reports) in
which OpRisk is defined as a function of 'people, systems, and external events'. Some risk types in this classification
would be, for example, 'lack of skills in trade control' or 'inappropriate access control to systems'. Although there
are some advantages in this type of classification, as a 'root cause' is pretty much embedded into the risk
classification, challenges arise when multiple causes exist or the cause is not immediately clear. If this cause-driven
risk classification is applied to a process in which operational losses have high frequency, it would be very difficult
for risk managers to classify correctly every single loss, and the attrition with the business and within the department
is likely to be high. Another way to perform this classification exercise is through an impact-driven method. In this
method, the classification is made according to the financial impact of operational losses. Most firms that follow this
type of classification do not invest heavily in OpRisk management; they just use this type to retrieve data from their
systems. This is quite common in smaller firms. In this type of classification, it is quite difficult to manage OpRisk as, although the exposures are known, it is difficult to understand what is driving these losses.
The event-driven risk classification is probably the most common one used by large firms. It classifies risk according
to OpRisk events. This is the classification used by the Basel Committee. It is interesting to know that during the
Basel II discussions, when this type of risk taxonomy was presented, most of the industry was reluctant to accept
it. A number of firms, even today, follow their own classification initially and map to the Basel event-type category
later. What is interesting in this classification is that the definition is rather broad, which should make it easier to accept changes in the process. For example, under “Execution, Delivery, and Process Management” (EDPM), which is the level-1 event type, there is a category named “Transaction Capture, Execution, and Maintenance” that can be an umbrella for a number of event types. For example, if the equity trading process changes from old-fashioned phone-based trading to online high-frequency trading, using this classification it would be easy to define the taxonomy of these risks."
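A minimal sketch of that event-driven mapping: internal categories (the hypothetical ones below) are classified first and mapped to the Basel level-1/level-2 event types later, as the quoted passage describes.

# Internal loss categories mapped to Basel (level-1, level-2) event types
INTERNAL_TO_BASEL = {
    "booking error":        ("Execution, Delivery, and Process Management",
                             "Transaction Capture, Execution, and Maintenance"),
    "wrong-party transfer": ("Execution, Delivery, and Process Management",
                             "Transaction Capture, Execution, and Maintenance"),
    "card skimming":        ("External Fraud", "Theft and Fraud"),
}

def map_to_basel(internal_category: str) -> tuple:
    """Return the (level-1, level-2) Basel event type for an internal category."""
    return INTERNAL_TO_BASEL[internal_category]

print(map_to_basel("booking error"))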
Information Risk and Data Quality Management | Questions
1. According to Ramesh (author of Chapter 3, Information Risk and Data Quality Management), "Many data quality
issues may occur within different business processes, and a data quality analysis process should incorporate a
business impact assessment to identify and prioritize risks. To simplify the analysis, the business impacts associated
with data errors can be categorized within a classification scheme intended to support the data quality analysis
process and help in distinguishing between data issues that lead to material business impact and those that do not.
This classification scheme defines six (6) primary categories for assessing either the negative impacts incurred as a
result of a flaw, or the potential opportunities for improvement resulting from improved data quality."
Which business impact category is best associated with "increased workloads, decreased throughput, increased
processing time, or decreased end-product quality?"
a) Financial impacts
b) Confidence-based impacts
c) Productivity impacts
d) Risk impacts
2. Which business impact category is best associated with "decreased organizational trust, inconsistent operational and
management reporting, and delayed or improper decisions."
a) Confidence-based impacts
b) Satisfaction impacts
c) Productivity impacts
d) Risk impacts
3. The author cites the following examples to which financial institutions are particularly sensitive with respect to risk
and compliance impacts: Anti-money laundering aspects of the Bank Secrecy Act and the USA PATRIOT Act; Sarbanes-
Oxley; Basel II Accords; Gramm-Leach-Bliley Act; Credit Risk assessment; and System Development Risks. While the sources of these areas of risk differ, the author highlights two high-level similarities. First, as we might expect, each of these mandates the use or presentation of high-quality information, which implies financial institutions must manage the
quality of their organizational information.
Which is the second similarity?
a) They also require that data errors be classified into the category of financial impacts; e.g., operating costs,
decreased revenues, delays in cash flow
b) They also require demonstration of the adequacy of internal controls, which oversee data quality, to external
parties (e.g., auditors) such that they must have in place transparent, auditable governance processes
c) They also require that data errors be classified into the category of compliance impacts; e.g., government
regulations, industry expectations, or self-imposed policies (such as privacy policies).
d) They also require that companies set aside a reserve liability to fund future contingent outflows that arise due
to data quality issues.
4. Ramesh (author of Chapter 3, Information Risk and Data Quality Management) lists data quality dimensions: high-
level categorizations of assertions ("business user expectations for data quality") that lend themselves to
quantification, measurement, and reporting.
Here is one of the data quality dimensions: "[This dimension of data quality] refers to measuring reasonable comparison of values in one data set to those in another data set. [This dimension] is relatively broad, and can encompass
an expectation that two data values drawn from separate data sets must not conflict with each other, or define more
complex comparators with a set of predefined constraints. More formal constraints can be encapsulated as a set of
rules that specify relationships between values of attributes, either across a record or message, or along all values of
a single attribute. However, be careful not to confuse [this dimension] with accuracy or correctness. [This dimension]
may be defined between one set of attribute values and another attribute set within the same record (i.e., record-
level), between one set of attribute values and another attribute set in different records (i.e., cross-record), or
between one set of attribute values and the same attribute set within the same record at different points in time (i.e.,
temporal)"
To which data quality dimension does this refer?
a) Consistency
b) Reasonableness
c) Currency
d) Uniqueness
5. Which of the following describes data validation RATHER THAN data quality inspection?
a) Institutes a mitigation or remediation of the root cause within an agreed-to time frame
b) Reviews and measures conformance of data with a set of defined business rules
c) Reduces the number of errors to a reasonable and manageable level
d) Enables the identification of data flaws along with a protocol for interactively making adjustments to enable the
completion of the processing stream
6. According to Ramesh, complex data quality metrics can be accumulated for reporting in a scorecard in one of three
different views. Each of the following is one of the views, along with their ideal audience, except which is not?
a) The Data Quality Issues View is ideal for data analysts attempting to prioritize tasks for diagnosis and remediation
b) The Business Process View is ideal for operational managers who want to examine the risks and failures
preventing the business process’s achievement of the expected results
c) The Unique Findings View is ideal for risk managers who want to identify outliers in the data quality control
process
d) The Business Impact View is ideal for senior managers who seek a high-level overview of the risks associated
with data quality issues, and how that risk is introduced across the enterprise
Information Risk and Data Quality Management | Answers
1. Correct Answer: C
Productivity impacts.
Ramesh: "This classification scheme defines six primary categories for assessing either the negative impacts incurred
as a result of a flaw, or the potential opportunities for improvement resulting from improved data quality:
1. Financial impacts, such as increased operating costs, decreased revenues, missed opportunities, reduction or
delays in cash flow, or increased penalties, fines, or other charges.
2. Confidence-based impacts, such as decreased organizational trust, low confidence in forecasting, inconsistent
operational and management reporting, and delayed or improper decisions.
3. Satisfaction impacts such as customer, employee, or supplier satisfaction, as well as general market
satisfaction.
4. Productivity impacts such as increased workloads, decreased throughput, increased processing time, or
decreased end-product quality.
5. Risk impacts associated with credit assessment, investment risks, competitive risk, capital investment and/or
development, fraud, and leakage.
6. Compliance is jeopardized, whether that compliance is with government regulations, industry expectations,
or self-imposed policies (such as privacy policies)."
2. Correct Answer: A
Confidence-based impacts
3. Correct Answer: B
They also require means of demonstrating the adequacy of internal controls overseeing that quality to external
parties such as auditors, such that they must have governance processes in place that are transparent and auditable
Ramesh: "Some examples to which financial institutions are particularly sensitive include:
 Anti-money laundering aspects of the Bank Secrecy Act and the USA PATRIOT Act have mandated private
organizations to take steps in identifying and preventing money laundering activities that could be used in
financing terrorist activities.
 Sarbanes-Oxley, in which section 302 mandates that the principal executive officer or officers and the principal
financial officer or officers certify the accuracy and correctness of financial reports.
 Basel II Accords provide guidelines for defining the regulations as well as guiding the quantification of
operational and credit risk as a way to determine the Information Risk and Data Quality Management amount
of capital financial institutions are required to maintain as a guard against those risks.
 The Gramm-Leach-Bliley Act of 1999 mandates financial institutions with the obligation to “respect the
privacy of its customers and to protect the security and confidentiality of those customers’ nonpublic personal
information.”
 Credit risk assessment, which requires accurate documentation to evaluate an individual’s or organization’s
abilities to repay loans.
 System development risks associated with capital investment in deploying new application systems emerge
when moving those systems into production is delayed due to lack of trust in the application’s underlying data
assets.
While the sources of these areas of risk differ, an interesting similarity emerges: not only do these mandate the use
or presentation of high-quality information, they also require means of demonstrating the adequacy of internal
controls overseeing that quality to external parties such as auditors. This means that not only must financial
institutions manage the quality of organizational information, they must also have governance processes in place
that are transparent and auditable."
4. Correct Answer: A
Consistency
The data quality dimensions include (but are not limited to):
 Accuracy: the degree to which data instances compare to the “real-life” entities they are intended to model.
 Completeness: specifies the expectations regarding the population of data attributes.
 Consistency: refers to measuring reasonable comparison of values in one data set to those in another data set.
 Reasonableness: used to measure conformance to consistency expectations relevant within specific operational contexts.
 Currency: measures the degree to which information is current, or "fresh," with the world that it models.
 Uniqueness: measures the number of inadvertent duplicate records that exist within a data set or across data
sets.
5. Correct Answer: B
Reviews and measures conformance of data with a set of defined business rules
Ramesh: "Note that data quality inspection differs from data validation. While the data validation process reviews
and measures conformance of data with a set of defined business rules, inspection is an ongoing process to:
 Reduce the number of errors to a reasonable and manageable level.
 Enable the identification of data flaws along with a protocol for interactively making adjustments to enable
the completion of the processing stream.
 Institute a mitigation or remediation of the root cause within an agreed-to time frame.
The value of data quality inspection as part of operational data governance is in establishing trust on behalf of downstream users that any issue likely to cause a significant business impact is caught early enough to avoid any significant impact on operations. Without this inspection process, poor-quality data pervades every system, complicating practically any operational or analytical process."
6. Correct Answer: C
The three views are: Data Quality Issues; Business Process; and Business Impact.
Three data quality scorecard views:
 Data Quality Issues View: Evaluating the impacts of a specific data quality issue across multiple business
processes demonstrates the diffusion of pain across the enterprise caused by specific data flaws. This
scorecard scheme, which is suited to data analysts attempting to prioritize tasks for diagnosis and
remediation, provides a rolled-up view of the impacts attributed to each data issue. Drilling down through this
view sheds light on the root causes of impacts of poor data quality, as well as identifying “rogue processes”
that require greater focus for instituting monitoring and control processes.
 Business Process View: Operational managers overseeing business processes may be interested in a scorecard
view by business process. In this view, the operational manager can examine the risks and failures preventing
the business process’s achievement of the expected results. For each business process, this scorecard scheme
consists of complex metrics representing the impacts associated with each issue. The drill-down in this view
can be used for isolating the source of the introduction of data issues at specific stages of the business process
as well as informing the data stewards in diagnosis and remediation.
 Business Impact View: Business impacts may have been incurred as a result of a number of different data quality
issues originating in a number of different business processes. This reporting scheme displays the aggregation of
business impacts rolled up from the different issues across different process flows. For example, one scorecard could
report rolled-up metrics documenting the accumulated impacts associated with credit risk, compliance with privacy
protection, and decreased sales. Drilling down through the metrics will point to the business processes from which
the issues originate; deeper review will point to the specific issues within each of the business processes. This view
is suited to a more senior manager seeking a high-level overview of the risks associated with data quality issues, and
how that risk is introduced across the enterprise.
Assessing the Quality of Risk Measures | Questions
1. In finance, a standard model of the behavior over time of an asset price or risk factor is the geometric Brownian motion (or diffusion model). This model is also the basis for the Black-Scholes option pricing model, and is generally a point of departure for analysis of asset return behavior. In this standard model, returns are normally distributed. However, it is well understood that actual returns do not necessarily comport with the standard model. In fact, real-world asset returns tend to exhibit each of the following EXCEPT for which is not generally true about real-world asset returns?
a. Leptokurtosis
b. The distributional mean differs from the median
c. 45-degree QQ-plot and low Jarque-Bera value
d. Time variation
2. Each of the following is true about value at risk (VaR) EXCEPT which is false?
a. VaR cannot be back tested
b. VaR cannot rank portfolios in order of riskiness
c. VaR cannot provide powerful tests of its own accuracy
d. Dramatic changes in VaR can be obtained by subtle differences in its parameters. ("Subtle differences in how
VaR is computed can lead to large differences in the estimates")
3. Consider the following four statements about value at risk (VaR):
I. If there were standardization of both the confidence interval and the time horizon, VaR estimates would be
highly consistent across users
II. There is not much uniformity of practice as to confidence interval and time horizon; as a result, intuition on what
constitutes a large or small VaR is underdeveloped.
III. There are a number of computational and modeling decisions that can greatly influence VaR results, such as the
length of time series used for historical simulation or to estimate moments; and the technique used for
estimating moments
IV. There are a number of computational and modeling decisions that can greatly influence VaR results, such as
mapping techniques and the choice of risk factors
Which of the above statements is (are) true?
a. None are true
b. I. and II. are true
c. II., III. and IV. are true
d. All are true
4. According to Malz, each of the following statements is true about the mapping of risk factors to positions EXCEPT
which is incorrect?
a. Mapping is the process of assigning risk factors to positions; in order to compute risk measures, we have to
assign risk factors to securities
b. When the underlying position and its hedge can be mapped to the same risk factor (or set of risk factors), the
appropriate measure value at risk (VaR) is zero
c. Mapping decisions are often pragmatic; for example, fixed-income cash-flow mappings are more accurate than
duration mappings, but are more complex and require more risk factors
d. As a practical matter, it may be difficult to find data that address certain risk factors, a problem that may merely
mirror the real-world challenge of hedging or expressing some trade ideas
5. About the long-equity tranche, short-mezzanine credit trade in 2005, Malz writes "A widespread trade among hedge
funds, as well as proprietary trading desks of banks and brokerages, was to sell protection on the equity tranche and
buy protection on the junior mezzanine tranche of the CDX.NA.IG. The trade was thus long credit and credit-spread
risk through the equity tranche and short credit and credit-spread risk through the mezzanine. It was executed using
several CDX.NA.IG series, particularly the IG3 introduced in September 2004 and the IG4 introduced in March 2005.
The trade was designed to be default-risk-neutral at initiation by sizing the two legs of the trade so that their credit spread sensitivities were equal. The motivation of the trade was not to profit from a view on credit or credit spreads, though it was primarily oriented toward market risk.
Rather, it was intended to achieve a positively convex payoff profile. The portfolio of two positions would then benefit
from credit spread volatility. In addition, the portfolio had positive carry; that is, it earned a positive net spread. Such
trades are highly prized by traders, for whom they are akin to delta-hedged long option portfolios in which the trader
receives rather than paying away time value."
But, of course, many of these traders suffered large losses. According to Malz, which of the following was the critical
error in the trade?
A. The model ignored correlation altogether
B. The model failed to adequately capture and anticipate individual defaults
C. The model assumed a static implied correlation: deltas were partial derivatives that did not account for changing
correlation, which drastically altered the hedge ratio
D. The recovery amount was at risk; in the event of a default on one or more of the names in the index, the recovery
amount was not fixed but a random variable
6. Among the costliest model risk episodes was the failure of subprime residential mortgage-based security (RMBS)
valuation and risk models during the 2008-2009 financial downturn. These models were employed by credit-rating
agencies to assign ratings to bonds, by traders and investors to value the bonds, and by issuers to structure them.
Consider the following potential model risks:
I. Models neglected off-balance-sheet vehicles altogether
II. Models assumed positive future house price appreciation
III. Correlations among regional housing markets were assumed to be low
IV. Undue complexity of models, including advanced mathematics that even the models' authors did not understand
V. The heavy reliance on value at risk (VaR), which many practitioners did not realize was designed for market risk
According to Malz, while the models varied widely, two widespread MODEL defects were particularly important with
respect to RMBS during the 2008-2009 downturn. Among the above, which are the two major defects in model
assumptions?
A. I. and II.
B. II. and III.
C. III. and IV.
D. IV. and V
Assessing the Quality of Risk Measures | Answers
1. Correct Answer: C
45-degree QQ-plot and low Jarque-Bera value: both of these would indicate normally distributed returns, whereas real-world returns are non-normal; they exhibit leptokurtosis (fat tails), skew (so the mean differs from the median), and time variation.
2. Correct Answer: A
FALSE: VaR can easily be backtested; and in Basel's internal models approach (IMA), VaR must be backtested!
3. Correct Answer: C
Malz: "The risk manager has a great deal of discretion in actually computing a VaR. The VaR techniques we described
in Chapter 3—modes of computation and the user-defined parameters—can be mixed and matched in different
ways. Within each mode of computation, there are major variants, for example, the so-called “hybrid” approach of
using historical simulation with exponentially weighted return observations. This freedom is a mixed blessing. On
the one hand, the risk manager has the flexibility to adapt the way he is calculating VaR to the needs of the firm, its
investors, or the nature of the portfolio.
On the other hand, it leads to two problems with the use of VaR in practice:
1. There is not much uniformity of practice as to confidence interval and time horizon; as a result, intuition on
what constitutes a large or small VaR is underdeveloped.
2. Different ways of measuring VaR would lead to different results, even if there were standardization of
confidence interval and time horizon. There are a number of computational and modeling decisions that can
greatly influence VaR results, suchas:
 Length of time series used for historical simulation or to estimate moments
 Technique for estimating moments
 Mapping techniques and the choice of risk factors, for example, maturity bucketing
 Decay factor if applying EWMA
 In Monte Carlo simulation, randomization technique and the number of simulations"
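To put numbers on the point, the sketch below computes 99% one-day VaR on the same synthetic return series three ways: plain historical simulation with two different window lengths, and the "hybrid" exponentially weighted historical simulation mentioned above. Every parameter value is an arbitrary illustrative choice.

import numpy as np

rng = np.random.default_rng(7)
returns = rng.standard_t(df=4, size=1000) * 0.01   # fat-tailed daily returns

def hs_var(r, alpha=0.99, window=250):
    """Plain historical-simulation VaR on the trailing window."""
    return -np.quantile(r[-window:], 1 - alpha)

def hybrid_var(r, alpha=0.99, window=250, decay=0.98):
    """Historical simulation with exponentially decaying observation weights."""
    r = r[-window:]
    w = decay ** np.arange(len(r) - 1, -1, -1)     # most recent observation weighs most
    w /= w.sum()
    order = np.argsort(r)                          # sort returns ascending (worst first)
    cum = np.cumsum(w[order])
    return -r[order][np.searchsorted(cum, 1 - alpha)]

print(f"HS VaR, 250-day window:  {hs_var(returns):.4f}")
print(f"HS VaR, 500-day window:  {hs_var(returns, window=500):.4f}")
print(f"Hybrid VaR (decay 0.98): {hybrid_var(returns):.4f}")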
4. Correct Answer: B
False, as significant basis risk remains
Malz: "In some cases, a position and its hedge might be mapped to the same risk factor or set of risk factors. The
mapping might be justified on the grounds that the available data do not make it possible to discern between the
two closely related positions. The result, however, will be a measured VaR of zero, even though there is a significant
basis risk; that is, risk that the hedge will not provide the expected protection. Risk modeling of securitization
exposures provides a pertinent example of basis risk, too. Securitizations are often hedged with similarly-rated corporate CDS indexes. If both the underlying exposure and its CDX hedge are mapped to a corporate spread time
series, the measured risk disappears. We discuss basis risk further in Chapter 13."
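A tiny numeric demonstration of the pitfall (our construction, with made-up magnitudes): when both legs are mapped to the same spread factor, measured VaR is exactly zero, yet the actual P&L still carries the basis.

import numpy as np

rng = np.random.default_rng(1)
factor = rng.normal(0, 25, 1000)   # common credit-spread factor P&L (bp)
basis  = rng.normal(0, 10, 1000)   # basis between securitization and CDX hedge (bp)

# Model view: long exposure and short hedge mapped to the same factor net to zero
mapped_pnl = (-1.0) * factor + (+1.0) * factor
# Reality: the underlying exposure also moves with the basis
actual_pnl = (-1.0) * (factor + basis) + (+1.0) * factor

print("measured 99% VaR:", -np.quantile(mapped_pnl, 0.01))  # exactly 0
print("actual   99% VaR:", -np.quantile(actual_pnl, 0.01))  # basis risk remains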
5. Correct Answer: C
The model assumed a static implied correlation: deltas were partial derivatives that did not account for changing
correlation which drastically altered the hedge ratio
Malz: "The relative value trade was set up in the framework of the standard copula model, using the analytics
described in Chapter 9. These analytics were simulation-based, using risk-neutral default probabilities or hazard-rate curves derived from single-name CDS. The timing of individual defaults was well modeled. Traders generally used a
normal copula. The correlation assumption might have been based on the relative frequencies of different numbers
of joint defaults, or, more likely, on equity return correlations or prevailing equity implied correlations, as described
at the end of Chapter 10.
In any event, the correlation assumption was static. This was the critical flaw, rather than using the 'wrong' copula
function, or even the 'wrong' value of the correlation. The deltas used to set the proportions of the trade were
partial derivatives that did not account for changing correlation. Changing correlation drastically altered the hedge
ratio between the equity and mezzanine tranches, which more or less doubled to nearly 4 by July 2005. In other
words, traders needed to sell protection on nearly twice the notional value of the mezzanine tranche in order to
maintain spread neutrality in the portfolio. The model did not ignore correlation, but the trade thesis focused on
anticipated gains from convexity. The flaw in the model could have been readily corrected if it had been recognized.
The trade was put on at a time when copula models and the concept of implied correlation generally had only
recently been introduced into discussions among traders, who had not yet become sensitized to the potential losses
from changes in correlation. Stress testing correlation would have revealed the risk. The trade could also have been
hedged against correlation risk by employing an overlay hedge: that is, by going long single-name protection in high
default-probability names. In this sense, the 'arbitrage' could not be captured via a two-leg trade, but required more
components."
6. Correct Answer: B
II. and III; Models assumed positive future house price appreciation; and Correlations among regional housing
markets were assumed to be low.
Malz: "While the models varied widely, two widespread defects were particularly important:
In general, the models assumed positive future house price appreciation rates. In the stress case, house prices might
fail to rise, but would not actually drop. The assumption was based on historical data, which was sparse, but
suggested there had been no extended periods of falling house prices on a large scale in any relevant historical
period. As can be seen in Figure 15.1, house prices did in fact drop very severely starting in 2007. Since the credit
quality of the loans depended on the borrowers’ ability to refinance the loans without additional infusions of equity,
the incorrect assumption on house price appreciation led to a severe underestimate of the potential default rates
in underlying loan pools in an adverse economic scenario.
Correlations among regional housing markets were assumed to be low. Bonds based on pools of loans from different
geographical regions were therefore considered well-diversified. In the event, while house prices fell more severely
in some regions than others, they fell—and loan defaults were much higher than expected in a stress scenario—in
nearly all."
Risk Capital Attribution and Risk- Adjusted Performance Measurement | Questions
1. Pravin Kumer works at Bland Bancorp, a large international bank and financial services firm. The bank has recently made several significant investments, but their success and profitability are highly uncertain; further, the bank has also made several acquisitions. At a board committee meeting, Pravin is asked to explain the difference between risk capital, economic capital, and regulatory capital. Which of the following is the TRUE statement that is part of his response?
a) "Economic capital is less than risk capital because risk capital is the capital required to guarantee shareholders
with 100.0% confidence that it holds enough risk capital to ride out any eventuality"
b) "Regulatory capital is almost certainly higher than economic capital in our securitization business, therefore
we do not need to know its economic capital; in other words, whenever regulatory capital exceeds economic
capital, there is no reason or need to know economic capital"
c) "We need enough risk capital to absorb unexpected losses and remain solvent over a time horizon with a
selected confidence level, but with respect to risk capital, (unlike regulatory capital) the horizon and
confidence belong to us as policy parameters."
d) "For capital budgeting purposes, we should use ex ante RAROC with economic capital in the numerator; for
performance evaluation we should use ex post RAROC with regulatory capital in the numerator"
2. With respect to risk-adjusted performance metrics, let "ratio consistent" refer to a ratio that adjusts both the
numerator and denominator for risk. Among the risk-adjusted performance measures discussed in Crouhy, each of
the following is true statement about the defined metric EXCEPT which statement is not true?
a) RAROC (risk-adjusted return on economic capital) risk-adjusts the numerator's return by subtracting expected
loss from expected revenue and risk-adjusts the denominator by substituting economic capital for accounting
capital. RAROC is therefore ratio- consistent.
b) RORAC (return on risk-adjusted capital; aka, ROCaR or ROC) only adjusts the denominator, in practical applications using VaR in the denominator. Because its numerator is "net income," RORAC is ratio-inconsistent.
c) RAROA (risk-adjusted return on risk-adjusted assets) has a denominator in common with RORAA (return on
risk-adjusted assets) but RAROA employs "risk-adjusted expected net income" in the numerator while RORAA
employs "net income" in the numerator. Therefore, RAROA is ratio-consistent but RORAA is ratio-inconsistent.
d) NPV (net present value) is the discounted value of expected cash flows, but its glaring weakness is that it is not compatible with the capital asset pricing model (CAPM) and, relatedly, NPV has no convenient method for incorporating risk
3. Assume a $1.0 billion corporate loan portfolio offers a return of 5.0% per annum. The bank (the lender) has a direct
operating cost of $6.0 million per annum and an effective tax rate of 25.0%. The portfolio is funded by $1.0 billion
in retail deposits with a transfer-priced interest rate charge of 1.40%. Risk analysis of the unexpected losses
associated with the portfolio tell us we need to set aside economic capital of $80.0 million against the portfolio; i.e.,
8.0% of the loan amount. The bank's economic capital must be invested in risk-free securities and the risk- free rate
on government securities is only 1.0%. The expected loss on the portfolio is assumed to be 1.0% per annum; i.e.,
$10 million. Which is NEAREST to the risk-adjusted return on economic capital (RAROC)?
a) 8.75%
b) 13.00%
c) 19.50%
d) 25.25%
4. Assume a bank's $2.0 billion corporate loan portfolio offers a return of 6.0% per annum. The expected loss on the
portfolio is estimated to be 1.5% per annum; i.e., $30 million. The portfolio is funded by $2.0 billion in retail deposits
with a transfer-priced interest rate charge of 2.00%. The bank (the lender) has a direct operating cost of $16.0 million
per annum and an effective tax rate of 25.0%. Risk analysis of the unexpected losses associated with the portfolio tells us we need to set aside economic capital of $200.0 million against the portfolio; i.e., 10.0% of the loan amount.
The bank's economic capital must be invested in risk-free securities and, unfortunately in the regime of ultra-low
interest rates, the risk-free rate on government securities is only 1.0%.
Although the loan portfolio's risk-adjusted return on capital (RAROC) is positive and seemingly high, the bank wants to adjust the traditional RAROC calculation to obtain an adjusted RAROC (ARAROC) measure that takes into account the systematic riskiness of the expected returns. If the risk-free rate is 1.0% (as above), and the expected rate of return on the market portfolio is 8.0% such that the equity risk premium is 7.0%, and the beta of the firm's equity is 1.60, which of the following is the correct adjusted RAROC, and is the project advisable?
a) RAROC is 6.25% but no, the project is bad because ARAROC is below the risk-free rate
b) RAROC is 8.00% but no, the project is bad because ARAROC is below the risk-free rate
c) RAROC is 9.80% and yes, the project is good because ARAROC is above the risk-free rate
d) RAROC is 13.50% and yes, the project is good because ARAROC is above the risk-free rate
5. Consider a business unit (BU) which consists of two activities, X and Y, for which the firm's risk staff has calculated
three different measures of risk capital:
Risk capital measure              Activity X   Activity Y   Business unit
Stand-alone capital                   $70          $80        $150 (sum)
Fully diversified capital             $56          $64        $120
Incremental (marginal) capital        $40          $50         $90 (sum)
Please note:
 Stand-alone capital is the capital used by an activity taken independently of the other activities in the same
business unit; i.e., risk capital calculated without any diversification benefits. Here the stand-alone capital for X is $70 and for Y it is $80.
 Fully diversified capital is the capital attributed to each activity X and Y, taking into account all diversification
benefits from combining them under the same leadership. Here the overall portfolio effect is $30 = $70 + $80 − $120. The firm's analysts allocated the portfolio effect pro rata with the stand-alone risk capital: $30 × 70/150 = $14 for X and $30 × 80/150 = $16 for Y, so that the fully diversified risk capital becomes $56 for X and $64
for Y.
 Incremental capital (which is called marginal capital by Crouhy) is the additional capital required by an incremental deal, activity, or business. It takes into account the full benefit of diversification. Here the marginal risk capital for X (assuming that Y already exists) is $40 = $120 − $80, and the marginal risk capital for Y (assuming that X already exists) is $50 = $120 − $70. The summation of the marginal risk capital, $90 in this example, is less than the full risk capital of the BU.
Each of the following statements is true EXCEPT which is not?
a) Given that the risk capital for the business unit is $120.0 (as shown), the implied correlation between
activities must be zero
b) Fully diversified capital ($56 for X and $64 for Y) should be used for assessing the solvency of the firm
and minimum risk pricing
c) Incremental capital (aka, Crouhy's marginal risk capital; $40 for X and $50 for Y) should be used for active portfolio management or business mix decisions
d) Stand-alone capital ($70 for X and $80 for Y) should be used for incentive compensation; and fully-
diversified capital ($56 for X and $64 for Y) can be used to assess the extra performance generated by
the diversification effects, such that performance measurement involves both stand-alone and fully-
diversifiedperspectives
6. According to Crouhy, Galai and Mark, best practice in the implementation of a RAROC system should include each
of the following elements EXCEPT which is incorrect?
a) Given the strategic nature of the decisions steered by a RAROC system, the marching orders must come from
the top management of the firm; specifically, the CEO and his or her executive team should sponsor the
implementation of a RAROC system and should be active in its diffusion
b) In order to preserve the integrity and transparency of the RAROC system, there should be an exclusively
quantitative decision rule for activities: if the ex-ante RAROC does not exceed the firm's hurdle rate (if the
RAROC return is "low"), the firm should exit the activity
c) The value at risk (VaR) methodologies for measuring market risk and credit risk that underpin RAROC
calculations are "generally well accepted" by business units (although this is not yet true for operational risk);
in practice, disagreements concern the setting of the parameters that feed into these models which determine
the size of economic capital
d) Balance sheet requests from the business units, such as economic capital, leverage ratio, liquidity ratios, and
risk-weighted assets, should be channeled to the RAROC group every quarter; limits are then set for economic
capital, leverage ratio, liquidity ratios, and risk-weighted assets
Risk Capital Attribution and Risk- Adjusted Performance Measurement | Answers
1. Correct Answer: C
TRUE: "We need enough risk capital to absorb unexpected losses and remain solvent over a time horizon with a
selected confidence level, but horizon and confidence belong to us as policy parameters."
In regard to false (A): The firm cannot realistically achieve 100.0% confidence in its own solvency
In regard to false (B): "The new regulatory capital requirements imposed by Basel III make it likely that for some
activities, such as securitization, regulatory capital may end up much higher than economic capital. Still, economic
capital calculation is essential for senior management as a benchmark to assess the economic viability of the activity
for the financial institution. When regulatory capital is much larger than economic capital, then it is likely that over
time the activity will migrate to the shadow banking sector, which can price the transactions at a more attractive
level."
In regard to false (D): "RAROC for Performance Measurement: We should emphasize at this point that RAROC was
first suggested as a tool for capital allocation on an anticipatory or ex ante basis. Hence, expected revenues and
losses should be plugged into the numerator of the RAROC equation for capital budgeting purposes. When RAROC is used for ex post, or after-the-fact, performance evaluation, we can use realized revenues and realized losses, rather than expected revenues and losses, in our calculation."
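The distinction is easy to operationalize. Below is a generic after-tax RAROC sketch consistent with the passage, with expected figures plugged in ex ante or realized figures ex post; the decomposition and the numbers are illustrative assumptions of ours, not the book's exact formula.

def raroc(revenue, funding_cost, operating_cost, loss, economic_capital,
          risk_free_rate, tax_rate):
    """After-tax risk-adjusted return on economic capital.

    Use expected figures for ex ante capital budgeting, or realized figures
    for ex post performance evaluation.
    """
    return_on_capital = risk_free_rate * economic_capital  # capital parked in risk-free securities
    pre_tax = revenue - funding_cost - operating_cost - loss + return_on_capital
    return pre_tax * (1 - tax_rate) / economic_capital

# Illustrative (USD millions): 100 revenue, 30 funding cost, 20 operating cost,
# 15 expected loss, 200 economic capital, 2% risk-free rate, 30% tax rate
print(f"{raroc(100, 30, 20, 15, 200, 0.02, 0.30):.2%}")    # 13.65%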
Please note the definitions by Crouhy (emphasis ours):
 Risk capital: "Risk capital is the cushion that provides protection against the various risks inherent in the
business of a corporation so that the firm can maintain its financial integrity and remain a going concern even
in the event of a near-catastrophic worst-case scenario. Risk capital gives essential confidence to the
corporation’s stakeholders, such as suppliers, clients, and lenders (for an industrial firm), or claimholders, such
as depositors and counterparties in financial transactions (for a financial institution). Risk capital is often
called economic capital, and in most instances the generally accepted convention is that risk capital and
economic capital are identical (although later in this chapter we introduce a slight wrinkle by defining
economic capital as risk capital plus strategic capital) ... Risk capital measurement is based on the same
concepts as the value-at-risk (VaR) calculation methodology that we discuss in [Chapter 7: Measuring Market
Risk: Value at Risk ...]. Indeed, risk capital numbers are often derived from, or supported by, sophisticated
internal VaR models, augmented in recent years by stress testing. However, the choice of the confidence
level and time horizon when using VaR to calculate risk capital are key policy parameters that should be set
by senior management (or the senior risk management committee). Usually, these decisions should be
endorsed by the board. Risk capital should be calculated in such a way that the institution can absorb
unexpected losses up to a level of confidence in line with the requirements of the firm’s various stakeholders.
No firm can offer its stakeholders a 100% guarantee (or confidence level) that it holds enough risk capital to
ride out any eventuality. Instead, risk capital is calculated at a confidence level set at less than 100%— say,
99.90% for a firm with conservative stakeholders.
In theory, this means that there is a probability of around 1/10 of 1.0% that actual losses will exceed the
amount of risk capital set aside by the firm over the given time horizon (generally one year). The exact choice
of confidence level is typically associated with some target credit rating from a rating agency such as Moody’s,
Standard & Poor’s, and Fitch as these ratings are themselves explicitly associated with a probability of default.
It should also be in line with the firm’s stated risk appetite
 Strategic risk capital = goodwill + burned-out capital: "Risk capital is the capital cushion that the bank must
set aside to cover the worst-case loss (minus the expected loss) from market, credit, operational, and other
risks, such as business risk and reputation risk, at the required confidence threshold; e.g., 99%. Risk capital is
directly related to the value- at-risk calculation at the one-year time horizon and at the institution’s required
confidence level. Strategic risk capital refers to the risk of significant investments about whose success and
profitability there is high uncertainty. If the venture is not successful, then the firm will usually face a major
write-off, and its reputation will be damaged. Current practice is to measure strategic risk capital as the sum
of burned-out capital and goodwill. Burned-out capital refers to the idea that capital is spent on, say, the
initial stages of starting up a business but the business may ultimately not be kicked off due to projected
inferior risk-adjusted returns. It should be viewed as an allocation of capital to account for the risk of strategic
failure of recent acquisitions or other strategic initiatives built organically. This capital is amortized over time
as the risk of strategic failure dissipates. The goodwill element corresponds to the investment premium— i.e.,
the amount paid above the replacement value of the net assets (assets – liabilities) when acquiring a company.
(Usually, the acquiring company is prepared to pay a premium above the fair value of the net assets because
it places a high value on intangible assets that are not recorded on the target’s balance sheet.) Goodwill is also
depreciated over time. "
 Economic capital is the sum of risk capital plus strategic risk capital. Economic capital is the denominator in
the RAROC ratio.
2. Correct Answer: D
FALSE. This is false because NPV is compatible with CAPM and does have a basic risk-adjustment mechanism in
the selection of the discount rate. Riskier cash flows are discounted at higher rates.
In regard to (A), (B) and (C), each is TRUE. Crouhy's definitions include:
 RAROC (risk-adjusted return on capital) = risk-adjusted expected net income/ economic capital. RAROC makes
the risk adjustment to the numerator by subtracting a risk factor from the return— e.g., expected loss. RAROC
also makes the risk adjustment to the denominator by substituting economic capital for accounting capital.
 RORAC (return on risk-adjusted capital) = net income/ economic capital. RORAC makes the risk adjustment
solely to the denominator. In practical applications, RORAC = P&L/VaR
 ROC (return on capital) = RORAC. It is also called ROCAR (return on capital at risk).
 RORAA (return on risk-adjusted assets) = net income/ risk-adjusted assets.
 RAROA (risk-adjusted return on risk-adjusted assets) = risk-adjusted expected net income/ risk-adjusted
assets.
 S (Sharpe ratio) = (expected return – risk-free rate)/ volatility. The ex post Sharpe ratio— i.e., that based on
actual returns rather than expected returns— can be shown to be a multiple of ROC.
 NPV (net present value) = discounted value of future expected cash flows, using a risk- adjusted expected rate
of return based on the beta derived from the CAPM, where risk is defined in terms of the covariance of changes
in the market value of the business with changes in the value of the market portfolio (see Chapter 5).In the
CAPM, the definition of risk is restricted to the systematic component of risk that cannot be diversifiedaway.
For RAROC calculations, the risk measure captures the full volatility of earnings, systematic and specific. NPV
is particularly well suited for ventures in which the expected cash flows over the life of the project can be
easily identified.
 EVA (economic value added), or NIACC (net income after capital charge), is the after-tax adjusted net income less a capital charge equal to the amount of economic capital attributed to the activity, times the after-tax cost of equity capital. The activity is deemed to add shareholder value, or is said to be EVA positive, when its NIACC is positive (and vice versa). An activity whose RAROC is above the hurdle rate is also EVA positive (see the numerical sketch after this list).
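Since EVA and the RAROC-versus-hurdle comparison are two views of the same test, a minimal Python sketch may help; the income, capital, and cost-of-equity figures below are hypothetical illustrations, not numbers from Crouhy:

```python
# Minimal sketch of EVA/NIACC and its equivalence to the RAROC-vs-hurdle rule.
# All figures are hypothetical illustrations, not from the reading.

def niacc(after_tax_net_income: float, economic_capital: float,
          after_tax_cost_of_equity: float) -> float:
    """EVA (aka NIACC) = after-tax net income - EC * after-tax cost of equity."""
    return after_tax_net_income - economic_capital * after_tax_cost_of_equity

income, ec, hurdle = 15.0, 100.0, 0.12   # $mm, $mm, after-tax cost of equity
eva = niacc(income, ec, hurdle)          # 15.0 - 100.0 * 0.12 = +3.0
raroc = income / ec                      # 15.0%

print(f"NIACC = {eva:+.1f}mm; RAROC = {raroc:.1%} vs hurdle = {hurdle:.1%}")
# NIACC is positive exactly when RAROC (15.0%) exceeds the hurdle (12.0%).
```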
3. Correct Answer: C
19.50%
RAROC = (after-tax expected risk-adjusted net income)/(economic capital). In this case:
 Expected revenue = $1.0 billion loan portfolio * 5.0% = $50.0 million
 Expected losses = $1.0 billion loan portfolio * 1.0% = $10.0 million
 Interest expense = $1.0 billion borrowed funds * 1.40% = $14.0 million
 Operating cost = $6.0 million (given as an assumption)
 Return on economic capital (EC) = $80.0 million EC * 1.0% = $0.80 million
 Tax rate = 0.25 (given as assumption)
Such that RAROC = [($50.0 - 10.0 - 14.0 - 6.0 + 0.80) *(1.0 - 0.25 tax rate)] / 80.0 = 19.50%
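To make the arithmetic explicit, here is a minimal Python sketch of the same computation (all figures in $ millions, exactly as given in the question):

```python
# RAROC = after-tax risk-adjusted net income / economic capital.
def raroc(revenue, expected_loss, interest_expense, operating_cost,
          return_on_ec, tax_rate, economic_capital):
    pre_tax = (revenue - expected_loss - interest_expense
               - operating_cost + return_on_ec)
    return pre_tax * (1.0 - tax_rate) / economic_capital

# $50.0 revenue, $10.0 EL, $14.0 interest, $6.0 opex, $0.80 return on EC,
# 25% tax rate, $80.0 economic capital (all in $ millions):
print(f"{raroc(50.0, 10.0, 14.0, 6.0, 0.80, 0.25, 80.0):.2%}")  # 19.50%
```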
4. Correct Answer: D
RAROC is 13.50% and yes, the project is good because ARAROC is above the risk-free rate
RAROC = (after-tax expected risk-adjusted net income)/(economic capital). In this case:
 Expected revenue = $2.0 billion loan portfolio * 6.0% = $120.0 million
 Expected losses = $2.0 billion loan portfolio * 1.5% = $30.0 million
 Interest expense = $2.0 billion borrowed funds * 2.0% = $40.0 million
 Operating cost = $16.0 million (given as an assumption)
 Economic capital = $200.0 million = 10.0% * $2.0 billion (given as an assumption)
 Return on economic capital (EC) = $2.0 million = $200.0 million EC * 1.0%
 Tax rate = 0.25 (given as assumption)
Such that RAROC = [($120.0 - 30.0 - 40.0 - 16.0 + 2.0) * (1.0 - 0.25 tax rate)] / 200.0 = 13.50%. Adjusted RAROC = RAROC - β(e)*[R(m) - Rf] = 13.50% - 1.60*[8.0% - 1.0%] = 2.30%, and 2.30% is greater than the 1.0% risk-free rate. Alternatively, per Crouhy's Risk Management text, we can compare (RAROC - Rf)/β(e) to [R(m) - Rf] for the same result; in this case, (13.50% - 1.0%)/1.6 = 7.813%, which is greater than the 7.0% ERP.
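Both equivalent decision rules can be verified with a short Python sketch (figures in $ millions, as given):

```python
# Adjusted RAROC (ARAROC) in its two equivalent forms.
pre_tax = 120.0 - 30.0 - 40.0 - 16.0 + 2.0
raroc = pre_tax * (1.0 - 0.25) / 200.0                  # 0.1350

beta_e, r_m, r_f = 1.60, 0.08, 0.01

# Form 1: RAROC - beta*(Rm - Rf) versus the risk-free rate.
araroc = raroc - beta_e * (r_m - r_f)                   # 2.30%
print(f"ARAROC = {araroc:.2%} > Rf = {r_f:.2%}")        # accept the project

# Form 2: (RAROC - Rf)/beta versus the equity risk premium.
print(f"{(raroc - r_f) / beta_e:.3%} > ERP = {r_m - r_f:.2%}")  # 7.813% > 7.00%
```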
5. Correct Answer: A
False. If the implied correlation is zero, then BU capital = SQRT(70^2 + 80^2) = $106.3; BU risk capital of $120.0 implies a correlation of about +0.2768 (see the computational sketch after the quote below).
In regard to (B), (C) and (D), each is TRUE.
Crouhy: "As [the] example shows, the choice of capital measure depends on the desired objective. Fully diversified
measures should be used for assessing the solvency of the firm and minimum risk pricing. Active portfolio
management or business mix decisions, on the other hand, should be based on marginal [i.e., incremental] risk
capital, taking into account the benefit of full diversification. Finally, performance measurement should involve both
perspectives: stand-alone risk capital for incentive compensation, and fully diversified risk capital to assess the extra
performance generated by the diversification effects. However, we must be cautious about how generous we are in
attributing diversification benefits. Correlations between risk factors drive the extent of the portfolio effect, and
these correlations tend to vary over time.
During market crises, in particular, correlations sometimes shift dramatically toward either 1 or – 1, reducing or
totally eliminating portfolio effects for a period of time. "
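As referenced in the answer above, the implied correlation falls out of the two-business-unit capital identity; a minimal Python sketch:

```python
# Back out the correlation implied by firm-wide risk capital, assuming the
# two-unit aggregation rule: total^2 = c1^2 + c2^2 + 2*rho*c1*c2.
from math import sqrt

c1, c2, total = 70.0, 80.0, 120.0
rho = (total**2 - c1**2 - c2**2) / (2.0 * c1 * c2)
print(f"implied rho = {rho:+.4f}")                      # +0.2768

# With zero correlation, capital would instead be only:
print(f"zero-rho capital = {sqrt(c1**2 + c2**2):.1f}")  # 106.3
```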
6. Correct Answer: B
FALSE. The RAROC system should also incorporate the QUALITY of earnings, in addition to the quantity of earnings.
In regard to (A), (C) and (D), each is TRUE.
Crouhy's six recommendations for implementing a RAROC system are:
1. Senior management commitment
2. Communication and education
3. Ongoing consultation
4. Maintaining the integrity of the process
5. Combine RAROC with qualitative factors, and
6. Put an active capital management process in place.
With respect to Recommendation #5 (Combine RAROC with qualitative factors) he writes, "Earlier in this chapter, we described a simple decision rule for project selection and capital attribution; i.e., accept projects where the RAROC is greater than the hurdle rate. In practice, other qualitative factors should be taken into account, and business units should be assessed in the context of the two-dimensional strategic grid shown in Figure 17-3. The horizontal axis
of this figure corresponds to the RAROC return calculated on an ex ante basis. The vertical axis is a qualitative
assessment of the quality of the earnings produced by the business units. This measure takes into consideration the
strategic importance of the activity for the firm, the growth potential of the business, the sustainability and volatility
of the earnings in the long run, and any synergies with other critical businesses in the firm. Priority in the allocation
of balance sheet resources should be given to the businesses in the upper right quadrant. At the other extreme, the
firm should try to exit, scale down, or fix the activities of businesses that fall into the lower left quadrant. The
businesses in the category Managed Growth, in the lower right quadrant, are high-return activities that have low
strategic importance for the firm. In contrast, businesses in the category Investment, in the upper left quadrant, are
currently low-return activities that have high growth potential and high strategic value for the firm."
Range of Practices and Issues in Economic Capital Frameworks | Questions
1. Which of the following is the best summary of how economic capital (EC) is most often parameterized?
a) Capital a bank needs to absorb expected losses (EL) over a certain time horizon at a given confidence level
b) Capital a bank needs to absorb unexpected losses (UL) over a certain time horizon at a given confidence level
c) Capital a bank needs to absorb both expected losses (EL) and unexpected losses (UL) over a certain time horizon at a given confidence level
d) Capital a bank needs to absorb unexpected loss (UL) over a one-year horizon with 99.0% confidence
2. According to BIS, economic capital has an important and useful role to play in each of the following ways EXCEPT:
a) Economic capital provides banks with a common risk currency for relative performance measurement
b) Economic capital provides banks with a common risk currency for capital budgeting (and even strategic
planning, target setting and internal reporting)
c) Economic capital should at least influence the bank’s incentive structure (compensation)
d) Economic capital should be the sole determinant of required capital “to minimize internal definitional
conflicts;” e.g., it should usurp available capital
3. According to BIS, each of the following are desirable characteristics of a risk measure (within the economic capital
framework) EXCEPT:
a) Coherent
b) Intuitive and “easy to understand”
c) Precise
d) Easy to compute
4. Which is a weakness of standard deviation as a risk measure in the economic capital framework?
a) Not coherent because it does not meet the monotonicity criterion
b) Not intuitive
c) Not easy to compute
d) Neither simple nor easy to understand
5. It is well-known that value at risk (VaR) is not coherent (because VaR is not sub-additive). Aside from its lack of
coherence, which is a weakness of VaR as a risk measure in the economic capital framework?
a) Not intuitive
b) Not stable as it depends on assumptions about loss distributions
c) Insufficiently easy to compute
d) Insufficiently easy to understand
6. According to BIS, which is a glaring weakness of expected shortfall (ES) as a risk measure in the economic capital
framework?
a) Not stable
b) Not intuitive, not easy to compute, not easy to understand and not simple
c) Not coherent
d) None of the above: against the criteria, ES has no glaring weaknesses
7. According to BIS, which is a glaring weakness of “spectral and distorted risk measures” as a risk measure in the
economic capital framework?
a) Not intuitive
b) Not stable
c) Not easy to compute
d) Not coherent
8. According to BIS, which of the following perspectives is likely to lead to the highest confidence level in calibrating
economic capital?
a) Perspective of creditors, rating agencies and supervisors
b) Perspective of management in order to allocate capital
c) Perspective of management in order to identify exposures that are critical to profit (or profit centers)
d) None of the above: in a robust EC framework, confidence level should not vary according to constituent
perspective
9. A bank holds a portfolio of loans denominated in a foreign currency. The bank separately measures the credit risk and market risk of the portfolio, then determines the portfolio’s economic capital by adding (aggregating) the two risk components. Specifically, the bank determines the portfolio’s economic capital is $30 million because the market risk component is $10 million per a value at risk (VaR) method and the credit risk component is $20 million per a CVaR method. Consider four statements about this aggregation:
i. As VaR is not sub-additive, it is technically possible for the portfolio’s VaR to exceed $10 plus $20 million
ii. As this summation implicitly assumes zero correlation (and zero covariance) between market and credit risk,
if their correlation is actually positive, $30 million understates economic capital
iii. There may be “wrong way” risk between the credit and market risk components, in which case the portfolio’s
economic capital may be higher than $30 million
iv. VaR is a quantile, not a tail risk measure. Expected shortfall (ES) should be used, in which case, by definition
the portfolio’s economic capital (EC) must be higher than $30 million
Which of the above are TRUE?
a) I. only
b) I. and III.
c) II. and IV.
d) All four
10. BIS says “The use of a VARIANCE-COVARIANCE MATRIX (or correlation matrix), which summarizes the
interdependencies across risk types, provides a more flexible framework for recognizing diversification benefits,
while still maintaining the desirable features of being intuitive and easy to communicate.” But BIS also says that
EACH of the following is a disadvantage of the variance-covariance matrix EXCEPT:
a) It depends greatly on correlation pairs which are difficult to estimate, in part because they are time-varying
b) Operational inter-risk correlations can be costly and difficult to estimate because operational loss “data are
scarce and do not cover long time periods.”
c) It is inferior to approaches that isolate the marginal risk contribution, such as marginal VaR and component VaR
d) By focusing on average covariance between risks, the approach will tend to underestimate tail risks
11. A serious limitation of some risk aggregation methodologies is their inability to capture non-linear dependencies.
Which risk aggregation method(s) overcome this limitation and do not necessarily assume linear dependencies?
a) Summation
b) Constant diversification
c) Variance-covariance
d) Copulas and Simulations
12. According to BIS and their survey, most banks use some flavor of the variance-covariance approach to aggregate
risk. Further, they tend to employ conservative correlations with an upward bias, with many additionally using
“stressed values that refer to the periods when these correlations may be higher than they are on average, or even
set equal to unity.” Consider the following three inter-risk correlations:
i. Correlation between market and credit risk
ii. Correlation between business risk and credit/market risk
iii. Correlation between operational risk and all other risks
From LOWEST to HIGHEST correlation, which is the most common assumption made by banks?
a) I., II., III
b) II., III., I.
c) II., I., III.
d) III., II., I.
13. BIS cites three supervisory concerns relating to risk aggregation. Each of the following is a supervisory concern
EXCEPT:
a) Popular methods are overly dependent on subjective judgment and insufficiently quantitative
b) Economic capital frameworks are inherently difficult to validate
c) Popular methodologies, in presuming or over-estimating diversification benefits, may under-estimate true
aggregate risk
d) Sophisticated methods, in producing apparently accurate outcomes, may lead to a (behavioral) sense of over-
or false-confidence
14. In regard to the validation of internal economic capital models, BIS asserts each of the following EXCEPT:
a) Validation refers broadly to all processes that provide evidence-based assessments about a model’s fitness
for its purpose, including both “before deployment” (ex ante) and afterward as ex post statistical tests
b) Validation is a quantitative process by definition but it is NOT a qualitative process
c) Benchmarking is a common and useful validation process but its drawback is that it does not test against
reality
d) There are a wide range of validation techniques making possible a “layered approach” and generally the more
layers the better
15. Consider the following three statements about dependency modeling in credit risk:
i. The majority of banks use one of three credit models: Moody’s/KMV (MKMV), CreditMetrics, and CreditRisk+
ii. The three credit models tend to produce significantly different economic capital estimates, in both default-
only mode and mark-to-market mode
iii. Most in-use models estimate asset correlations among obligors in terms of common dependence on
systematic risk factors; consequently, banks are implicitly accounting for concentration (both single name and
sectoral) because large parts of their books are subject to the same underlying risk factors or to multiple risk
factors
According to BIS, which of the above statements is TRUE?
a) I. only
b) III. only
c) II. and III.
d) All three
16. BIS asserts that each of the following is a difficulty (“challenge”) in the modeling and/or evaluation of counterparty
credit risk in an economic capital framework EXCEPT:
a) Trying to measure current exposure
b) Trying to measure potential exposure
c) Difficult to identify wrong-way risk between credit risk (i.e., increase in PD and LGD) and market risk (increase
in EAD)
d) Differences in risk profiles between margined and non-margined counterparties
17. According to BIS, each of the following is a MAIN SOURCE of interest rate risk in the banking book EXCEPT:
a) Repricing risk
b) Yield curve risk
c) Earnings at risk (EaR)
d) Basis risk
18. Each of the following is a RECOMMENDATION from BIS that supervisors should consider to make effective use of
risk measures not designed for regulatory purposes EXCEPT:
a) A bank should be able to demonstrate how the economic capital model has been integrated into the business
decision making process
b) All risks should be quantified
c) Economic capital model validation should be conducted rigorously and comprehensively
d) Economic capital model validation should be conducted rigorously and comprehensively, with an aim to
demonstrate that the model is fit for purpose
19. Which is nearest to the Basel Committee’s position on management incentives (compensation) and economic
capital (EC)?
a) Linking incentive pay to EC does NOT tend to motivate participation in the EC allocation process; such linkage
need not be attempted, and actual linkage (EC metrics to pay) in practice is quite low
b) Linking incentive pay to EC does NOT tend to motivate participation in the EC allocation process; some such
linkage should be attempted, and actual linkage (EC metrics to pay) in practice is prevalent
c) Linking incentive pay to EC DOES tend to motivate participation in the EC allocation process; some such linkage should be attempted, but actual linkage (EC metrics to pay) in practice is quite low
d) Linking incentive pay to EC DOES tend to motivate participation in the EC allocation process; some such linkage should be attempted, and actual linkage (EC metrics to pay) in practice is already prevalent
Range of Practices and Issues in Economic Capital Frameworks | Answers
1. Correct Answer: B
Capital a bank needs to absorb unexpected losses (UL) over a certain time horizon at a given confidence level
 Please note EC does not cover EL; a simulation sketch of this parameterization follows the notes below. In regard to (D), aside from the fact that 99.9% confidence would be more typical here than 99.0%, the weakness with (D) is that economic capital, unlike regulatory capital, is an INTERNAL methodology that varies by bank.
 BIS: “In order to achieve a common measure across all risks and businesses, economic capital is often
parameterized as an amount of capital that a bank needs to absorb unexpected losses over a certain time
horizon at a given confidence level. Because expected losses are accounted for in the pricing of a bank’s
products and loan loss provisioning, it is only unexpected losses that require economic capital.”
2. Correct Answer: D
The opposite: “Economic capital should not be the sole determinant of required capital”
 In regard to (A), (B) and (C), each is TRUE.
3. Correct Answer: C
Precise is not on the list, consistent with the notion that good risk measurement recognizes that precision is rarely possible and often must settle for “rough approximations.” (Recall Jorion Chapter 1: valuation requires precision in the distributional first moment; risk is about approximation in the second, third and fourth moments.)
 Intuitive: The risk measure should meaningfully align with some intuitive notion of risk, such as unexpected
losses.
 Stable: Small changes in model parameters should not produce large changes in the estimated loss
distribution and the risk measure. Similarly, another run of a simulation model in order to generate a loss
distribution should not produce a dramatic change in the risk measure. Also, it is desirable for the risk measure
not to be overly sensitive to modest changes in underlying model assumptions.
 Easy to compute: The calculation of the risk measure should be as easy as possible. In particular, the selection
of more complex risk measures should be supported by evidence that the incremental gain in accuracy
outweighs the cost of the additional complexity.
 Easy to understand: The risk measure should be easily understood by the bank’s senior management. There
should be a link to other well-known risk measures that influence the risk management of a bank. If not
understood by senior management, the risk measure will most likely not have much impact on daily risk
management and business decisions, which would limit its appropriateness.
 Coherent: The risk measure should be coherent and satisfy the conditions of: (i) monotonicity (if a portfolio Y is always worth at least as much as X in all scenarios, then Y cannot be riskier than X); (ii) positive homogeneity (if all exposures in a portfolio are multiplied by the same factor, the risk measure also multiplies by that factor); (iii) translation invariance (if a fixed, risk-free asset is added to a portfolio, the risk measure decreases to reflect the reduction in risk); and (iv) sub-additivity (the risk measure of two portfolios, if combined, is always smaller or equal to the sum of the risk measures of the two individual portfolios). Of particular interest is the last property, which ensures that a risk measure appropriately accounts for diversification; a simulation sketch of this property follows the list below.
 Simple and meaningful risk decomposition (risk contributions or capital allocation): In order to be useful for
daily risk management, the risk measured for the entire portfolio must be able to be decomposed to smaller
units (e.g. business lines or individual exposures). If the loss distribution incorporates diversification effects,
these effects should be meaningfully distributed to the individual business lines.
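As flagged in the coherence bullet above, sub-additivity is the property that VaR can violate. A minimal simulation sketch with two hypothetical independent loans, each losing 100 with 4% probability:

```python
# 95% VaR of each loan alone is 0 (a 4% loss probability sits inside the 95%
# quantile), yet the two-loan portfolio suffers a loss with ~7.8% probability,
# so its 95% VaR is 100 -- violating sub-additivity. ES does not.
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
loss_a = np.where(rng.random(n) < 0.04, 100.0, 0.0)
loss_b = np.where(rng.random(n) < 0.04, 100.0, 0.0)

def var(losses, q=0.95):
    return np.quantile(losses, q)

def es(losses, q=0.95):
    k = int(np.ceil((1 - q) * losses.size))   # average of the worst 5%
    return np.sort(losses)[-k:].mean()

print(var(loss_a), var(loss_b), var(loss_a + loss_b))  # 0.0 0.0 100.0
print(es(loss_a) + es(loss_b), es(loss_a + loss_b))    # ~160 >= ~103: sub-additive
```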
4. Correct Answer: A
Standard deviation is not coherent because it does not meet the monotonicity criterion
 In regard to (B), (C), and (D), these are false. BIS says that standard deviation is “sufficiently intuitive,” easy to
compute, easy to understand and “simple but not meaningful.”
5. Correct Answer: B
VaR is not stable as it depends on assumptions about loss distributions
 In regard to (A), (C), and (D), these are false. BIS says that VaR is intuitive, sufficiently easy to compute, and
easy to understand (but not simple)
6. Correct Answer: D
None of the above: against the criteria, ES has no glaring weaknesses
7. Correct Answer: A
Not intuitive
8. Correct Answer: A
Perspective of creditors, rating agencies and supervisors
 BIS: “The link between a bank’s target rating and the choice of confidence level may be interpreted as the
amount of economic capital that must be exceeded by available capital resources to prevent the bank from
eroding its capital buffer at a given confidence level [dharper: I find this definition confusing]. According to
this view, which can be interpreted as a going concern view, capital planning is seen more as a dynamic
exercise than a static one, where it is the probability of eroding such a buffer (rather than all available capital)
that is linked to the target rating. This would reflect the expectation (by analysts, rating agencies and the
market) that the bank operates with capital that exceeds the regulatory minimum requirement
 ... Apart from considerations about the link to a target rating, the choice of a confidence level might differ
based on the question to be addressed. On the one hand, high confidence levels reflect the perspective of
creditors, rating agencies and supervisors in that they are used to determine the amount of capital required
to minimize bankruptcy risk. On the other hand, banks may use lower confidence levels for management
purposes in order to allocate capital to business lines and/or individual exposures and to identify those
exposures that are critical for-profit objectives in a normal business environment. Consequently, banks
typically use different confidence levels for different purposes.”
9. Correct Answer: B
I. and III.
 The most important point is the wrong-way risk: “A more important reason why aggregate risk may be larger
than the sum of its components is independent of the choice of metric (i.e. it applies to metrics other than
VaR) and relates to the economic underpinnings of the portfolios that are pooled. The logic outlined above
assumes that covariance (a linear measure of dependence) fully captures and summarizes the dependencies
across risks. While this may be a reasonable approximation in many cases, there are instances where the risk
interactions are such that the resulting combination may represent higher, not lower, risk. For example,
measuring separately the market and credit risk components in a portfolio of foreign currency denominated
loans can underestimate risk, since probabilities of obligor default will also be affected by fluctuation in the
exchange rate, giving rise to a compounding effect. Similar types of “wrong-way” interactions could occur in
the context of portfolio positions that may be simultaneously affected by directional market moves and the
failure of counterparties to a hedging position. From a more “macro” perspective, asset price volatility often
interacts with the risk appetite of market participants and feeds back to market liquidity leading to a
magnification of risk rather than diversification.”
 In regard to (II), this is false: summation assumes perfect correlation (1.0) not independence. Summation
confers no diversification benefit.
 In regard to (IV), VaR is a quantile but that neither disqualifies it, per se, as a tail risk measure nor as an EC metric; e.g., a 99.9% VaR is likely higher than a 95% ES.
10. Correct Answer: C
The variance-covariance matrix is consistent with, and a key ingredient in, the analytical VaR approach that includes marginal VaR and component VaR (see the sketch after the notes below).
 In regard to (A), (B) and (D), each is TRUE as a disadvantage cited by BIS.
 In regard to (D): “In addition, by focusing on average covariance between risks, the linearity assumption will
tend to underestimate dependence in the tail of loss distributions and underestimate the effects of skewed
distributions and non-linear dependencies.”
 Please note that some also cite the “curse of dimensionality” (not mentioned by BIS): the number of pairwise correlations grows quadratically as the number of risk factors increases.
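A minimal sketch of the variance-covariance aggregation itself; the EC amounts and correlation pairs below are hypothetical, loosely following the ordering noted in answer 12:

```python
# Aggregate economic capital across risk types with a correlation matrix:
# EC_total = sqrt(ec' * C * ec). Hypothetical inputs only.
import numpy as np

ec = np.array([10.0, 20.0, 5.0])     # market, credit, operational EC ($mm)
corr = np.array([
    [1.0, 0.8, 0.1],                 # high market-credit correlation,
    [0.8, 1.0, 0.1],                 # very low operational-vs-other
    [0.1, 0.1, 1.0],
])

aggregate = float(np.sqrt(ec @ corr @ ec))
print(f"simple sum = {ec.sum():.1f}, var-covar aggregate = {aggregate:.1f}")
# The aggregate sits below the simple sum (which implicitly assumes rho = 1),
# but the linearity assumption can still understate tail dependence.
```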
11. Correct Answer: D
Copulas and Simulations
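A minimal sketch of the copula idea: a Gaussian copula couples two non-normal marginal loss distributions, so dependence is imposed on the joint distribution rather than applied as a single linear correlation to capital numbers. All distributions and parameters below are illustrative:

```python
# Gaussian-copula aggregation of two simulated loss distributions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, rho = 500_000, 0.5

# Correlated standard normals -> uniforms -> arbitrary marginals.
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u = stats.norm.cdf(z)

credit = stats.lognorm(s=1.2, scale=20.0).ppf(u[:, 0])  # heavy-tailed marginal
market = stats.gamma(a=2.0, scale=5.0).ppf(u[:, 1])

total = credit + market
print(f"99.9% aggregate loss = {np.quantile(total, 0.999):,.0f}")
```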
12. Correct Answer: D
III., II., I.
 “Whatever the method and the estimates used, there are a number of commonalities in the assumptions
made by banks. For instance, a high correlation between market and credit risks is usually assumed, a lower
correlation between business risk and credit or market risk, and a very low correlation between operational
risk and all other risks.”
13. Correct Answer: A
BIS embraces judgment, in particular with respect to the aggregation of non-homogeneous inter-risk types.
 BIS (emphasis mine): “An important overall message is that meaningful aggregation of risk necessarily
involves compromises and judgment to augment quantitative methods. Risk measurement in portfolios that
are more homogeneous in terms of their risk drivers can be quite detailed and can address different facets of
risk. The combination of different types of risk into a common metric, however, presents many more
complications stemming either from the different statistical profiles of risk types or from differences in the
perspective and requirements of the business units that manage different portfolios (eg the use of different
metrics and/or management horizons). Aggregation, therefore, typically requires that some of the richness of
assessments made on the individual components is sacrificed in order to achieve comparability.”
 In regard to (B), (C) and (D), these are summaries of the three concerns (“supervisory concerns with the
economic capital aggregation relate to [1] validation of the inputs, [2] methodology, and [3] outputs of the
process.”). Note that the third is redolent of Nassim Taleb’s criticism that risk quantification may fool us into thinking we have a grasp on uncertainty.
14. Correct Answer: B
Validation processes include both qualitative (i.e., use tests, qualitative review, systems implementation,
management oversight, data quality checks, examination of assumptions) and quantitative processes (e.g.,
backtesting, benchmarking).
 In regard to (A), (C), and (D), EACH is TRUE according to BIS.
 In regard to (A) “In some cases the term validation is used exclusively to refer to statistical ex post validation,
while in other cases it is seen as a broader but still quantitative process that also incorporates evidence from
the model development stage. In this paper, the term “validation” is used in a broad sense, meaning all the
processes that provide evidence-based assessment about a model’s fitness for purpose. This assessment
might extend to the management and systems environment within which the model is operated. Moreover,
it is advisable that validation processes are designed alongside development of the models, rather than
chronologically following the model building process.”
 In regard to (C), “Benchmarking is a commonly used form of quantitative validation. Comparisons are made
with industry survey results, against alternative models such as a rating agency model, industry-wide models,
consultancy firms, academic papers and regulatory capital models. However, as a validation technique,
benchmarking has limitations, providing comparison of one model against another or one calibration to
others, but not testing against ‘reality’. It is therefore difficult to assess the degree of comfort provided by
such benchmarking methods, as they may only be capable of providing broad comparisons confirming that
input parameters or model outputs are broadly comparable.”
 In regard to (D), “Our purpose is to make two points. First, to demonstrate that there is a wide range of
techniques that would be covered by our broad definition of validation, creating a layered approach. The more
layers that can be provided, the more comfort that validation is able to provide evidence for or against the
performance of the model. Conversely, where fewer layers of validation are used, the level of comfort
diminishes. Second, that each validation process provides evidence for (or against) only some of the desirable
properties of a model. The list presented below moves from the more qualitative to the more quantitative
validation processes, and the extent of use is briefly discussed.”
15. Correct Answer: D
All three
16. Correct Answer: A
“One feature of derivatives and securities financing relationships is that, while the amount of current exposure
to a counterparty is known, the amount of potential exposure to a counterparty is an unknown quantity (in fact,
given the nature of derivatives contracts and securities financing arrangements, there may be no exposure to the
financial institution at the time of a counterparty default.)”
In regard to (B), (C), and (D), each is TRUE as a challenge in the evaluation of counterparty credit risk.
17. Correct Answer: C
Earnings at Risk (EaR) is a metric or measure of risk, like VaR, not a source of risk
 “The main sources of interest rate risk in the banking book are repricing risk (arising from differences in the
maturity and repricing terms of customer loans and liabilities), yield curve risk (stemming from asymmetric
movements in rates along the yield curve), and basis risk (arising from imperfect correlation in the adjustment
of the rates earned and paid on different financial instruments with otherwise similar repricing
characteristics).”
18. Correct Answer: B
“Not all risks can be directly quantified. Material risks that are difficult to quantify in an economic capital
framework (e.g. funding liquidity risk or reputational risk) should be captured in some form of compensating
controls (sensitivity analysis, stress testing, scenario analysis or similar risk control processes).”
 In regard to (A), (C) and (D), each is true.
19. Correct Answer: C
Linking incentive pay to EC DOES tend to motivate participation in the EC allocation process; some such linkage should be attempted, but actual linkage (EC metrics to pay) in practice is quite low
 BIS: “To become deeply engrained in internal decision-making processes, the use of economic capital needs
to be extended in a way that directly affects the objective functions of decision-makers at the business unit
level. This is achieved by influencing the incentive structure for business-unit management. Anecdotal
evidence suggests that incentives are the most sensitive element for the majority of bank managers, as well
as being the issue that motivates their getting involved in the technical aspects of the economic capital
allocation process. However, evidence suggests that compensation schemes rank quite low among the actual
uses of economic capital measures at the business unit level.”
Capital Planning at Large Bank Holding Companies | Questions
1. Each of the following is true about capital planning at large bank holding companies supervised by the Federal Reserve
EXCEPT which is inaccurate?
a) Large bank holding companies (BHCs) in the United States are subject to an annual assessment by the Federal Reserve that includes two related programs: Dodd-Frank Act supervisory stress testing, and the Comprehensive Capital Analysis and Review (CCAR)
b) Under the U.S. supervisory stress tests, three macroeconomic scenarios are required by the Dodd-Frank Act:
baseline, adverse and severely adverse (itself characterized by a severe global recession accompanied by a global
aversion to long-term fixed-income assets)
c) Once a BHC has reached the minimum regulatory capital requirements (common equity tier 1, tier 1, or total
capital) during normal and non-stressed times, the Federal Reserve cannot limit the bank's freedom to pay
dividends or re-purchase shares
d) The seven principles of an effective capital adequacy process are: sound foundational risk management; effective
loss-estimation methodologies; solid resource-estimation methodologies; sufficient capital adequacy impact
assessment; comprehensive capital policy and capital planning; robust internal controls; and effective governance
2. In regard to practices that can result in a strong and effective capital adequacy process, according to "Capital Planning at
Large Bank Holding Companies: Supervisory Expectations and Range of Current Practice," each of the following is true
EXCEPT which is false?
a) Most BHCs use some forms of expert judgment for some purposes, often as a management adjustment overlay to
modeled outputs
b) With respect to Expected Loss Approaches, BHCs with leading practices were able to break down losses into PD,
LGD, and EAD components, separately identifying key risk drivers for each of those components
c) Although best practices in operational-risk models are still evolving, common approaches to operational-loss-
estimation for stress scenarios include regression models, modified loss-distribution-approach (LDA) and scenario
analysis
d) With respect to Market and Counterparty Credit Risk, the Federal Reserve much prefers probabilistic approaches
to deterministic approaches because deterministic approaches do not link stress scenarios to risk factor shocks
3. Betaplanet International is a large bank holding company (BHC) with a high dependence on customer deposits. Betaplanet
recently submitted its capital plan and stress scenarios to the Federal Reserve with the following features:
i. Betaplanet projected estimated revenue and expenses forward over a nine-quarter planning horizon
ii. Betaplanet separately estimated losses, revenues, and expenses under hypothetical stressed conditions for
business lines that are sensitive to different risk drivers
iii. The adverse scenarios assume that Betaplanet will gain, respectively, +2.0% and +5.0% market share due to its
competitive advantage and reputation for safety
iv. For some portfolios where it has limited expertise, Betaplanet relied on commercial, third-party models
v. For its wholesale portfolios where it has limited history, Betaplanet supplemented internal data with external data
vi. Betaplanet projected net interest income (NII) and its components, but (due to its high dependence on deposits)
excluded projections of non-interest income and non-interest expense
Which of the features above are UNLIKELY to meet the regulatory expectations of the capital planning process?
a) None, all of the mentioned features are explicitly permitted
b) III. and VI., Betaplanet should not assume market share gains under stress scenario, nor should it exclude non-
interest income and expenses in projections
c) II. and IV., Betaplanet should not separate losses/revenues/expenses by business line, nor should it rely on third-party models whatsoever (versus internal models)
d) I. and V., Betaplanet must project revenue and expenses at least five years, and it cannot rely on external data
Capital Planning at Large Bank Holding Companies | Answers
1. Correct Answer: C
False. The whole point of the programs is to identify capital adequacy during adverse (stressed) scenarios. As the
bank supervisor, the Federal Reserve can reject a bank's capital plan, in particular by constraining the bank's ability
to pay dividends or buy back shares (as these payments reduce equity). From the CCAR 2018: "When the Federal
Reserve objects to a firm’s capital plan, the firm may not make any capital distribution unless expressly permitted
by the Federal Reserve [See 12 CFR 225.8(f)(2)(iv)]."
In regard to (A), (B) and (D), each is TRUE.
2. Correct Answer: D
False. The Fed considers each approach to have strengths/weaknesses; and the deterministic approaches
specifically do link (translate) scenarios to risk factors.
Market Risk and Counterparty Credit Risk: "Both approaches [i.e., probabilistic and deterministic] have different
strengths and weaknesses. A probabilistic approach can provide useful insight into a range of scenarios that
generate stress losses in ways that a deterministic stress testing approach may not be able to do. However, the
probabilistic approach is complex and often lacks transparency, and as a result, it can be difficult to communicate
the relevant scenarios to senior managers and the board of directors. In addition, the challenges inherent in tying
probabilistic loss estimates to specific underlying scenarios can make it difficult for management and the board of
directors to readily discern what actions could be taken to mitigate portfolio losses in a given scenario. Combined,
these factors complicate the use of probabilistic approaches as the primary element in an active capital planning
process that reflects well-informed decisions by senior management and the board of directors. The Federal Reserve
expects BHCs using a probabilistic approach to provide evidence that such an approach can generate scenarios that
are potentially more severe than what was historically experienced, and also to clearly explain how BHCs use the
scenarios associated with tail losses to identify and address their idiosyncratic risks.
By comparison, a deterministic approach generally produces scenarios that are easier to communicate to senior
management and the board of directors. However, a deterministic approach often uses a limited set of scenarios,
and may miss certain scenarios that may result in large losses. The Federal Reserve expects BHCs using a
deterministic approach to demonstrate that they have considered a range of scenarios that sufficiently stress their
key exposures.
For CCAR, most BHCs generally relied on a deterministic approach. BHCs using deterministic approaches often relied
on statistical models (for example, to inform the magnitude of risk-factor movements and covariances between risk
factors) and also considered multiple scenarios as part of the broader internal stress testing supporting their capital
planning process. BHCs using deterministic approaches used a three-step process to generate P/L losses under a
stress scenario: 1. Design and selection of stress scenarios; 2. Construction and implementation of the scenario (that
is, translation to risk-factor moves); 3. Revaluation (and aggregation) of position and portfolio-level P&L under the
stress scenarios."
In regard to (A), (B) and (C), each is TRUE.
 In regard to true (A), Qualitative Projections, Expert Judgment and Adjustments: "While quantitative
approaches are important elements of enterprise-wide scenario analysis, BHCs should not rely on weak or
poorly specified models simply to have a modeled approach. In fact, most BHCs use some forms of expert
judgment for some purposes— generally as a management adjustment overlay to modeled outputs. And BHCs
can, in limited cases, use expert judgment as the primary method to produce an estimate of losses, revenue,
or expenses. BHCs may use a management overlay to account for the unique risks of certain portfolios that
are not well captured in their models, or otherwise to compensate for specific model and data limitations."
 In regard to true (B), Expected Loss Approaches: "BHCs with leading practices were able to break down losses
into PD, LGD, and EAD components, separately identifying key risk drivers for each of those components,
though they typically did not demonstrate this level of granularity consistently across all portfolios."
 In regard to true (C), Operational Risk: "Best practices in operational-risk models are still evolving, and the
Capital Plan Rule does not require BHCs to use advanced measurement approach (AMA) models for stressed
operational-risk loss estimation" [please note: in December 2017, BCSB released new proposed rules to
replace OpRisk BIA/SA/AMA with a new Standardized Approach] ... "Regression Models: Most BHCs have used
a regression model, either by itself or with another approach described below, to estimate operational-risk
losses for stress scenarios." ... "Modified Loss-Distribution Approach (LDA): The LDA is an empirical modeling
technique commonly used by BHCs subject to the AMA to estimate annual value-at-risk (VaR) measures for
operational risk losses based on loss data and fitted parametric distributions. The LDA involves estimating
probability distributions for the frequency and the severity of operational loss events for each defined unit of
measure, whether it is a business line, an event type, or some combination of the two. The estimated
frequency and severity distributions are then combined, generally using a Monte Carlo simulation, to estimate
the probability distribution for annual operational-risk losses at each unit of measure. For purposes of CCAR,
LDA models have generally been used in one of two ways: (1) by using a lower confidence interval than the
99.9th percentile used by the AMA, or (2) by adjusting the frequency based on outcomes of correlation
analysis. BHCs that modified the LDA by using a lower confidence interval typically have used either the mean
or median for the baseline estimates and higher confidence intervals--typically ranging from 70th percentile
to 98th percentile--for the stressed estimates." ... "Scenario Analysis: Scenario analysis is a systematic process
of obtaining opinions from business managers and risk management experts to assess the likelihood and loss
impact of plausible severe operational-loss events. Some BHCs have used this process to determine a
management overlay that is added to losses estimated using a model-based approach. BHCs have used this
overlay to incorporate idiosyncratic risks (particularly for event types where correlation was not identified) or
to capture potential loss events that the BHC had not previously experienced."
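The modified LDA described above lends itself to a compact Monte Carlo sketch; the Poisson frequency and lognormal severity parameters below are hypothetical, not calibrated to any bank's loss data:

```python
# Loss-distribution approach: simulate annual operational losses by compounding
# a frequency distribution (Poisson) with a severity distribution (lognormal),
# then read baseline and stressed estimates at the chosen percentiles.
import numpy as np

rng = np.random.default_rng(0)
n_years, lam = 50_000, 25.0          # ~25 loss events per simulated year

annual = np.array([
    rng.lognormal(mean=10.0, sigma=2.0, size=k).sum()
    for k in rng.poisson(lam, size=n_years)
])

baseline = np.median(annual)              # baseline: mean or median
stressed = np.quantile(annual, 0.95)      # stressed: 70th-98th pct range
ama_999 = np.quantile(annual, 0.999)      # AMA-style 99.9th percentile
print(f"baseline {baseline:,.0f}; stressed {stressed:,.0f}; 99.9% {ama_999:,.0f}")
```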
3. Correct Answer: B
True: III. and VI., Betaplanet should not assume market share gains under a stress scenario, nor should it exclude non-interest income and expenses in projections
In regard to true (I.) about the planning horizon, "The Capital Plan Rule requires BHCs to estimate revenue and
expenses over the nine-quarter planning horizon (12 CFR 225.8(d)(2)(i).)
In regard to true (II.) about separately estimating losses/revenues/expenses: "As a general practice, BHCs should
separately estimate losses, revenues, or expenses for portfolios or business lines that are sensitive to different risk
drivers or sensitive to risk drivers in a markedly different way. For instance, losses on commercial and industrial
loans and commercial real estate (CRE) loans are, in part, driven by different factors, with the path of property values
having a more pronounced effect on CRE loan losses. Similarly, although falling property value affects both income-
producing CRE loans and construction loans, the effect often differs materially due to structural differences between
the two portfolios. Such differences can become more pronounced during periods of stress. BHCs with leading
practices have demonstrated clearly the rationale for selecting certain risk drivers over others. BHCs with lagging
practices used risk drivers that did not have a clear link to results, either statistically or conceptually."
In regard to false (III.) and the presumption of gaining market share assumption: "At the same time, BHC stress
scenarios should not feature assumptions that specifically benefit the BHC. For example, some BHCs with weaker
scenario-design practices assumed that they would be viewed as strong compared to their competitors in a stress
scenario and would therefore experience increased market share. Such assumptions are contrary to the supervisory
expectations for and the intent of a stress testing exercise that informs capital planning."
In regard to (IV.) third-party models: "In certain instances, BHCs may need to rely on third-party models—for
example, due to limitations in internal modeling capacity. In using these third- party models (vendor models or
consultant-developed models), BHCs should ensure that their internal staff has working knowledge and a good
conceptual understanding of the design and functioning of the models and potential model limitations so that
management can clearly communicate them to those governing the process. An off-the-shelf vendor model often
requires some level of firm-specific analysis and customization to demonstrate that it produces estimates
appropriate for the BHC and consistent with scenario conditions. Sensitivity analysis can be particularly helpful in
understanding the range of possible results of vendor models with less transparent or proprietary elements.
Importantly, all vendor and consultant-developed models should be validated in accordance with SR 11-7
guidelines."
In regard to (V.) external data: "Retail and Wholesale Credit Risk: BHCs used a range of approaches to produce loss
estimates on loans to retail and corporate customers, often using different estimation methods for different
portfolios. This section describes the observed range of practice for the methods used to project losses on retail and
wholesale loan portfolios.
Data and Segmentation: Sources of data used for loss estimation have often differed between retail and wholesale
portfolios. Due to the availability of a richer set of retail loss data, particularly from the most recent downturn, BHCs
generally used internal data to estimate defaults or losses on retail portfolios and only infrequently used external
data with longer history to benchmark estimated losses on portfolios that had more limited loss experience in the
recent downturn. For wholesale portfolios, some BHCs supplemented internal data with external data or used
external data to calibrate their models due to a short time series (5–10 years) that included only a single downturn
cycle."
In regard to false (VI.), "Non-Interest Income: BHCs are expected to produce stressed projections of non-interest
income that are consistent with assumed scenario conditions, as well as with stated business strategies. Due to
inherent challenges in estimating certain non-interest income components, some BHCs used more than one method
and/or employed benchmark analysis to inform estimates ... Non-Interest Expense: BHCs should fully consider the
various impacts of the assumed scenario conditions on their non-interest expense projections, including costs that
are likely to increase during a downturn. For example, items such as other real estate owned or credit-collection
costs may spike, whereas management may have some ability to control other expenses. Like other projections,
non-interest expense projections should be consistent with balance sheet and revenue estimates and should reflect
the same strategic business assumptions. BHCs with weaker practices did not account for additional headcount
needs in certain areas, nor for any corresponding changes to compensation expense associated with increased
collections activity resulting from declines in portfolio quality and/or increased underwriting activity to support any
assumed portfolio growth."
Stress Testing Banks | Questions
1. In regard to the historical evolution of the stress testing process, each of the following is true EXCEPT which is
inaccurate?
a) The U.S. Federal Reserve conducts an annual assessment of bank holding companies (BHC) which includes the Comprehensive Capital Analysis and Review (CCAR) and Dodd-Frank supervisory stress tests
b) The Committee of European Banking Supervisors (CEBS) oversees the most forward-looking and comprehensive regulatory stress tests, in particular with respect to liquidity risks, emerging cyber-risks, and the linkage between macro and intermediate risk factors
c) The Supervisory Capital Assessment Program (SCAP) was the first of the macro-prudential stress tests of the global financial crisis (GFC); its state space had only three dimensions (i.e., GDP growth, unemployment, and house price index, HPI) and the market risk scenario was based on historical experience
d) For the 2011 European Banking Authority (EBA) test, the supervisors specified over 70 risk factors for the
trading book, eight macro-factors for each of 21 countries (i.e., GDP growth, inflation, unemployment,
residential and commercial real estate price indices, short and long-term government rates, and stock prices),
plus sovereign haircuts across seven maturity buckets
2. Although the problem of coherence is generic to scenario design, which of the following elements is most likely to support the design of risk factors that meet the goal of COHERENCE in a stress test?
a) Use historical scenarios
b) Exclude safe havens; aka, flight to quality
c) Use sub-additive metrics in a higher dimensional space
d) Depreciate all foreign exchange rates simultaneously
3. In regard to the challenges in modeling a bank’s revenues, losses, and its balance sheet over a stress test horizon
period, which of the following statements is TRUE?
a) Compared to the banking book, stress testing the trading book is more difficult because it is a newer discipline
b) Implementing stress scenarios for revenues is much more developed, and easier, than stress testing for losses
c) The most robust stress tests assume loss severity (i.e., loss given default, LGD) rates that are invariant to
geography, business cycle or industry and uniformly represent the worst-case scenarios
d) Regardless of whether the denominator in the capital ratio is common equity or risk- weighted assets
(RWA), determining post-stress capital adequacy requires modeling both the income statement and the
balance sheet over the course of the stress test horizon which is typically about two years; e.g., nine
quarters for the CCAR
Stress Testing Banks | Answers
1. Correct Answer: B
False. CEBS was succeeded by the European Banking Authority (EBA) in 2011.
In regard to (A), (C) and (D), each is TRUE.
 The Comprehensive Capital Analysis and Review (CCAR) is part of the Federal Reserve's annual examination of the
largest US banks. The first CCAR was conducted in 2011. The annual assessment includes two related programs: 1. The CCAR consists of a quantitative assessment (for large or complex firms) that evaluates a firm’s capital adequacy
and planned capital distributions, such as any dividend payments and common stock repurchases; and 2. Dodd-
Frank Act supervisory stress testing, which is a forward-looking quantitative evaluation of the impact of stressful
economic and financial market conditions on firms’ capital.
 According to Sudhansu, "The [2009 Supervisory Capital Assessment Program] SCAP was the first of the macro-
prudential stress tests of this crisis. But the changes at the micro-prudential or bank-specific level were at least
equally significant, and they are summarized in Table 2. With the SCAP, stress testing at banks went from mostly
single (or a handful) factor shock to using a broad macro scenario with market-wide stresses; from product or
business unit stress testing focusing mostly on losses to firm-wide and comprehensive, encompassing losses,
revenues and costs; all tied to a post-stress capital ratio to ensure a going concern."
 According to the European Banking Authority (EBA): "The EBA is mandated to monitor and assess market
developments as well as to identify trends, potential risks and vulnerabilities stemming from the micro-prudential
level. One of the primary supervisory tools to conduct such an analysis is the EU-wide stress test exercise. The EBA
Regulation gives the Authority powers to initiate and coordinate the EU-wide stress tests, in cooperation with the
European Systemic Risk Board (ESRB). The aim of such tests is to assess the resilience of financial institutions to
adverse market developments, as well as to contribute to the overall assessment of systemic risk in the EU financial
system. The EBA's EU-wide stress tests are conducted in a bottom-up fashion, using consistent methodologies,
scenarios and key assumptions developed in cooperation with the ESRB, the European Central Bank (ECB) and the
European Commission (EC)."
About the EBA, Sudhansu writes: "For the 2011 EBA test, the supervisors specified over 70 risk factors for the
trading book, eight macro-factors for each of 21 countries (macro-factors such as GDP growth, inflation,
unemployment, real estate price indices – residential and commercial, short and long term government rates, and
stock prices), plus sovereign haircuts across seven maturity buckets. The macroeconomic stress scenario was
generated by economists at the ECB with reference to the EU Commission baseline economic forecast."
2. Correct Answer: A
Use historical scenarios. Note that historical scenarios tend to support coherence, but they do have their own drawbacks.
Sudhansu (emphasis ours): "3. Designing the Stress Scenario: One of the principal challenges faced by both the
supervisors and the firms in designing stress scenarios is coherence. The scenarios are inherently multi-factor: we seek to
develop a rich description of adverse states of the world in the form of several risk factors, be they financial or real, taking
on extreme yet coherent (or possible) values. It is not sufficient to specify only high unemployment or only significant
widening of credit spreads or only a sudden drop in equity prices; when one risk factor moves significantly, the others
don’t stay fixed. The real difficulty is in specifying a coherent joint outcome of all the relevant risk factors. For instance,
not all exchange rates can depreciate at once; some have to appreciate. A high inflation scenario needs to account for
likely monetary policy responses, such as an increase in the policy interest rate. Every market shock scenario resulting in
a flight from risky assets – flight to quality – must have a (usually small) set of assets that can be considered safe havens
... While the problem of coherence is generic to scenario design, it is especially acute when considering stress scenarios
for market risk, i.e. for portfolios of traded securities and derivatives. These portfolios are typically marked to market as
a matter of course and risk managed in the context of a value-at-risk (VaR) system. Practically this means that the
hundreds of thousands (or more) positions in the trading book are mapped to tens of thousands of risk factors, and those
risk factors are tracked on a (usually) daily basis and form the data used to estimate risk parameters like volatilities and
correlations. Finding coherent outcomes in such a high dimensional space, short of resorting to historical realizations, is
daunting indeed.
Compounding the problem is the challenge of finding a scenario where the real and the financial factors are jointly
coherent. The 2009 SCAP had a rather simple scenario specification. The state space had but three dimensions –
GDP growth, unemployment, and house price index (HPI) – and the market risk scenario was based in historical
experience: an instantaneous risk factor impact reflecting changes from June 30 to December 31, 2008. This period
represented a massive flight to quality, the markets experienced the failure of at least one global financial institution
(Lehman), and risk premia at the time arguably placed a significant probability on the kind of adverse real economic
outcome painted by the tri-variate SCAP scenario. This solution achieved a loose coherence of the real and financial
stress. The price one pays for choosing a historical scenario is the usual one: it does not test for something new."
3. Correct Answer: D
True: Regardless of whether the denominator in the capital ratio is common equity or risk-weighted assets (RWA),
determining post-stress capital adequacy requires modeling both the income statement and the balance sheet
over the course of the stress test horizon which is typically about two years; e.g., nine quarters for the CCAR.
Sudhansu: "4.3. Modeling the balance sheet: Recall that capital adequacy is defined in terms of a capital ratio,
roughly capital over assets. Of course, both the numerator and denominator are nuanced. All supervisory stress
tests have insisted to varying degrees that the relevant form of capital be common equity. The 2010 CEBS test
allowed for some forms of hybrid capital typical of state participations, but the requirements were tightened a year
later. As discussed in Section 4.1, the denominator is typically risk-weighted assets (RWA), where the risk weights
are determined by the prevailing regulatory capital regime, namely Basel I (in the U.S. cases of the SCAP and CCAR)
and Basel II (in the two Europe-wide and the Irish stress tests). The many subtleties of what this implies is beyond
the scope of this paper, but suffice it to say that a bank may be forced to raise capital under one regime but not the
other, and without considerable detail about the portfolio, there is no way to know which regime will result in a
more favorable treatment.
Regardless of the risk weight regime, determining post-stress capital adequacy requires modeling both the income
statement and the balance sheet, both flows and stocks, over the course of the stress test horizon, which is typically
two years [The horizon is 9 quarters for the CCAR as it is based on Q3, not Q4, balance sheets]. The point of departure
is the current balance sheet, at which point the bank meets the required capital (and, if included, liquidity) ratios.
The starting balance sheet generates the first quarter’s income and loss, which in turn determines the quarter-end
balance sheet."
In regard to (A), (B) and (C), each is FALSE. Rather, Sudhansu makes the following claims:
 The banking book is more difficult to stress test because it is a newer discipline
 Implementing stress scenarios for revenues is harder than stress testing for losses
 Loss severity (i.e., loss given default, LGD) does and should vary by geography, business cycle or industry; e.g.,
"The problem of loose coupling of loss severity to the business cycle is not limited to auto loans."
Guidance on Managing Outsourcing Risk | Questions
1. According to the Guidance on Managing Outsourcing Risk (by the Board of Governors of the Federal Reserve
System), with respect to a third-party service provider arrangement, the overall due diligence process by a financial
institution should at least include a review of each of the following EXCEPT which is not essential?
a) Ensure service provider has an appropriate background check program for its employees
b) Financial review of service provider's most recent annual report and financial statements
c) Evaluate the service provider's performance along environmental, social, and governance (ESG) factors
d) Evaluate the adequacy of standards, policies and procedures including adherence to applicable laws,
regulations and supervisory guidance
2. Cityace Bank is a financial institution that is outsourcing a vital customer-facing function to a third-party service
provider. Cityace wants to follow the Board's Guidance on Managing Outsourcing Risk, and if they do indeed follow
the Board's guidance, then each of the following is true EXCEPT which is not true?
a) Cityace should avoid outsourcing risk management activities, especially interest rate risk and model risk
because these are core competencies
b) If the service provider is foreign-based, Cityace should ensure the provider is in compliance with applicable
U.S. laws, regulations, and regulatory guidance.
c) Cityace should ensure an effective process is in place to review and approve any incentive compensation that
may be embedded in service provider contracts
d) Cityace should consider especially the following risks in outsourcing: compliance risks, concentration risks,
reputational risks, country risks, operational risks and legal risks
3. Planetholding International Bank is entering a contract with its third-party service provider, Tristechnology Inc, to
outsource the management of its website. Each of the following are likely contract provisions EXCEPT which is
unlikely to be a contract provision?
a) Confidentiality and security of information
b) Business resumption and contingency plan
c) Scope of service (including reference to service level agreement and ability to subcontract)
d) Loss waterfall allocation mechanism (including sufficient credit value adjustment for website downtime)
Guidance on Managing Outsourcing Risk | Answers
1. Correct Answer: C
A review of ESG factors is not mentioned.
According to the Board, the three broad review areas are:
1. Business background, reputation, and strategy: "Financial institutions should review a prospective service
provider's status in the industry and corporate history and qualifications; review the background and reputation of
the service provider and its principals; and ensure that the service provider has an appropriate background check
program for its employees."
2. Financial performance and condition: "Financial institutions should review the financial condition of the service
provider and its closely-related affiliates. The financial review may include: The service provider's most recent
financial statements and annual report with regard to outstanding commitments, capital strength, liquidity and
operating results; The service provider's sustainability, including factors such as the length of time that the service
provider has been in business and the service provider's growth of market share for a given service ..."
3. Operations and internal controls.
2. Correct Answer: A
False. According to the Guidance, risk management activities may be outsourced.
In regard to (B), (C) and (D) each is TRUE.
 In regard to TRUE (C), please note that the question assumes the financial institution "is outsourcing a vital
customer-facing function," and therefore it is especially important that there exists a review process for incentive
compensation. Many case studies have highlighted the adverse, unintended consequences of certain incentive
plans that reward imprudent risk-taking.
 In regard to false (A), the Guidance says (page 12): "Risk management activities: Financial institutions may
outsource various risk management activities, such as aspects of interest rate risk and model risk management.
Financial institutions should require service providers to provide information that demonstrates developmental
evidence explaining the product components, design, and intended use, to determine whether the products and/or
services are appropriate for the institution's exposures and risks. Financial institutions should also have standards
and processes in place for ensuring that service providers offering model risk management services, such as
validation, do so in a way that is consistent with existing model risk management guidance."
3. Correct Answer: D
False: A "loss waterfall" is a mechanism typically associated with a central counterparty (CCP); a "cash flow waterfall"
is typically associated with a securitization structure.
In regard to (A), (B) and (C), each is a likely contract provision. According to the Board, the following are common contractual provisions in a third-party service provider agreement (detail omitted):
 Scope: Contracts should clearly define the rights and responsibilities of each party, including: Support,
maintenance, and customer service; Contract timeframes; Compliance with applicable laws, regulations, and
regulatory guidance; Training of financial institution employees; The ability to subcontract services; The distribution
of any required statements or disclosures to the financial institution's customers; Insurance coverage
requirements; and Terms governing the use of the financial institution's property, equipment, and staff.
 Cost and compensation
 Right to audit
 Establishment and monitoring of performance standards
 Confidentiality and security of information
 Ownership and license
 Indemnification
 Default and termination
 Dispute resolution
 Limits on liability
 Insurance
 Customer complaints
 Business resumption and contingency plan of the service provider
 Foreign-based service providers
 Subcontracting
Liquidity and Leverage | Questions
1. Which of the following is "the risk of moving the price of an asset adversely in the act of buying or selling it" such that
this risk is "low if assets can be liquidated or a position can be covered quickly, cheaply, and without moving the price
too much"?
A. Transactions liquidity risk
B. Balance sheet risk
C. Funding liquidity risk
D. Systemic risk
2. About the funding liquidity risk of a fractional-reserve bank, Malz asserts each of the following statements as true
EXCEPT which statement is not accurate?
A. Funding liquidity risk arises for market participants who borrow at short term to finance investments that require
a longer time to become profitable; the balance-sheet situation of a market participant funding a longer-term
asset with a shorter-term liability is called a maturity mismatch.
B. In theory, the core function of a commercial bank is to take deposits and provide commercial and industrial loans
to non-financial firms. In doing so, the bank carries out transformations in liquidity, maturity, and credit; it
transforms long-term illiquid assets (e.g., loans to businesses) into short-term liquid ones, including deposits and
other liabilities that can be used as money
C. Bank fragility can be mitigated through higher capital (which reduces depositors’ concern about solvency, the
typical trigger of a bank run), and higher reserves (which reduces concern about liquidity)
D. If a fractional-reserve bank carries out a liquidity and maturity transformation, and has liabilities it is obligated to repay at par and on demand, a properly calibrated asset-liability management system can fully immunize (protect) the fractional-reserve bank against a general loss of confidence in its ability to pay out depositors
3. Consider investors in the following five asset classes:
I. Leveraged buyout (LBO) investors
II. Merger arbitrage hedge funds
III. Mortgage loan (i.e., loans collateralized by real estate) investors
IV. Convertible bond investors
V. Statistical arbitrage investors
Which of the above is (are) materially or meaningfully exposed to systematic funding liquidity risk?
A. None of the above
B. I. and II. only
C. I. and IV. only
D. All of the above
4. What is the phenomenon called "breaking the buck"?
A. When the net asset value (NAV) of a money market mutual fund falls below one dollar ($1.00)
B. When a money market mutual fund cannot borrow dollars at a cost less than its return on assets
C. When so many money market mutual funds borrow from the central bank that interest rates on the dollar rise quickly
D. When a high proportion of shareholders attempt to redeem their shares simultaneously under adverse market
conditions
5. In regard to markets for collateral, each of the following is true EXCEPT which definition or statement is false?
A. Margin lending is lending for the purpose of financing a security transaction in which the loan is collateralized by
the security
B. Total return swaps (TRS) are matched pairs of the spot sale and forward repurchase of a security. Both the spot
and forward price are agreed now, and the difference between them implies an interest rate
C. In a securities lending transaction, one party lends a security to another in exchange for a fee, generally called a
rebate. The security lender, rather than the borrower, continues to receive dividend and interest cash flows from
the security. A common type of securities lending is stock lending, in which shares of stock are borrowed.
D. A haircut ensures that the full value of the collateral is not lent. A haircut of 10.0%, for example, means that if
the borrower of cash wants to buy $100.00 of a security, he can borrow only $90.0 from the broker and must
put $10.0 of his own funds in the margin account by the time the trade is settled. Similarly, the lender of cash
will be prepared to lend $90 against $100 of collateral.
6. Suppose a firm with a simple capital structure has assets of $20.0 million and debt of $10.0 million. Return on assets
(ROA) is 9.0% and cost of debt is 4.0%, such that the firm's leverage is 2.0 and its return on equity (ROE) is 14.0%. If
the firm borrows an additional $6.0 million at the same cost of 4.0%, and asset returns are fixed, what is the firm's
new leverage and return on equity (ROE)?
A. Leverage = 1.7 and ROE = 13.3%
B. Leverage = 2.0 and ROE = 15.7%
C. Leverage = 2.3 and ROE = 23.5%
D. Leverage = 2.6 and ROE = 17.0%
7. On opening day, Lever Brothers Multistrategy Master Fund LP has the following economic balance sheet: $100 in
Cash, $20 in Debt, and Equity of $80. This corresponds to an initial placement of $80 in Equity by the fund's owners
plus a loan of $20 by a commercial bank. Assume Lever Brothers finances a long position in $100 worth of an equity
at the Reg T margin requirement of 50%. It invests $50 of its own funds and borrows $50 from the broker. Immediately
following the trade, its margin account has $50 in equity and a $50 loan from the broker (The broker retains custody
of the stock as collateral for the loan). If firm leverage is defined, per Malz, as Assets/Equity, then what is the change in the leverage of the firm's economic balance sheet?
a. From 1.000 to 1.500
b. From 1.250 to 1.875
c. From 1.250 to 1.500
d. From 1.500 to 1.500
8. On opening day, Lever Brothers Multistrategy Master Fund LP has the following economic balance sheet: $100 in
Cash, $20 in Debt, and Equity of $80. Assume Lever Brothers creates a short position in a stock, borrowing $100 of
the security and selling it. It has thus created a liability equal to the value of the borrowed stock, and an asset, equal
in value, consisting of the cash proceeds from the short sale. The cash cannot be used to fund other investments, as
it is collateral; the broker uses it to ensure that the short stock can be repurchased and returned to the stock lender.
It remains in a segregated short account, offset by the value of the borrowed stock. The stock might rise in price, in
which case the $100 of proceeds would not suffice to cover its return to the borrower. Lever Brothers must therefore
in addition put up margin of $50. After the trade, what is the leverage in the firm's economic balance sheet?
a. 1.25
b. 1.50
c. 1.75
d. 2.50
9. On opening day, Lever Brothers Fund LP has a simple economic balance sheet: $100 in Cash, and Equity of $100. This
corresponds to an initial placement of $100 in equity by investors, and no debt. Suppose Lever Brothers now adds
three derivatives positions:
 A one-month currency forward, in which Lever Brothers is short $150 against the euro
 An at-the-money (currently 50-delta) three-month long call option on S&P 500 equity index futures, with an
underlying index value of $100
 Short protection on Ford Motor Co. via a five-year credit default swap, with a notional amount of $200
Assume that the non-option positions are initiated at market-adjusted prices and spreads, and therefore have zero
NPV. Also assume that the counterparty is the same for all the positions, namely the prime broker or broker-dealer
with which they are executed. If the initial margin on this derivatives portfolio (i.e., all three derivative positions) is
$50, what is the leverage of the firm's balance sheet after the trades?
a. 2.5
b. 3.5
c. 4.0
d. 5.0
10. Each of the following is a source (i.e., cause) of transactions liquidity risk EXCEPT which is not?
a. Inventory management by dealers
b. Adverse selection
c. Depth of resiliency
d. Differences of opinion
11. The bid-ask spread is USD 0.350 on an asset with a current price of USD 50.00 per share. The spread is normally distributed around this mean with a volatility of 20 basis points (0.20%). Which is nearest to the 99 percent confidence interval on the transaction costs?
a. $0.08510
b. $0.29130
c. $1.05560
d. $3.82400
12. Let's define liquidity-adjusted value at risk (VaR) for a position that can be liquidated over (T) days as given by:
LVaR = VaR × √[(1 + T)(1 + 2T) / (6T)]
Let's assume an equity position with a value of one million dollars ($1,000,000) and a volatility of 34.0% per annum.
If there are 250 trading days in a year and returns are normal i.i.d., which is nearest to the 99.0% liquidity-adjusted
VaR if we estimate the position will require five (5) trading days to liquidate?
a. $50,000
b. $74,200
c. $98,900
d. $111,840
13. Which of the following refers to "the length of time for which a lumpy order moves the market away from the
equilibrium price," which is a characteristic of market liquidity?
a. Tightness
b. Depth
c. Slippage
d. Resiliency
14. Which of the following refers to "the cost of a round-trip transaction, and is typically measured by the bid-ask spread
and brokers’ commissions," which is a characteristic of market liquidity?
a) Tightness
b) Depth
c) Adverse price impact
d) Resiliency
15. Lack of liquidity manifests itself in observable, yet hard-to-measure ways. Which of the following refers to "the
deterioration in the market price induced by the amount of time it takes to get a trade done?"
a) Tightness
b) Bid-ask spread
c) Adverse price impact
d) Slippage
Liquidity and Leverage | Answers
1. Correct Answer: A
Transactions liquidity risk
Malz: "As with 'liquidity,' the term 'liquidity risk' is used to describe several distinct but related phenomena:
 Transaction liquidity risk is the risk of moving the price of an asset adversely in the act of buying or selling it.
Transaction liquidity risk is low if assets can be liquidated or a position can be covered quickly, cheaply, and
without moving the price too much. An asset is said to be liquid if it is 'near' or a good substitute for cash. An
asset is said to have a liquidity premium if its price is lower and expected return higher because it isn't perfectly
liquid. A market is said to be liquid if market participants can put on or unwind positions quickly, without
excessive transactions costs and without excessive price deterioration.
 Balance sheet risk or funding liquidity risk. Funding liquidity risk is the risk that creditors either withdraw credit
or change the terms on which it is granted in such a way that the positions have to be unwound and/or are
no longer profitable. Funding liquidity can be put at risk because the borrower’s credit quality is, or at least
perceived to be, deteriorating, but also because financial conditions as a whole are deteriorating.
 Systemic risk refers to the risk of a general impairment of the financial system. In situations of severe financial
stress, the ability of the financial system to allocate credit, support markets in financial assets, and even
administer payments and settle financial transactions may be impaired.
These types of liquidity risk interact. For example, if a counterparty increases collateral requirements or otherwise
raises the cost of financing a long position in a security, the trader may have to unwind it before the expected return
is fully realized. By shrinking the horizon of the trade, the deterioration of funding liquidity also increases the
transaction liquidity risk. The interaction also works the other way. If a leveraged market participant is perceived to
have illiquid assets on its books, its funding will be in greater jeopardy."
2. Correct Answer: D
Malz (emphasis ours): "No asset-liability management system can protect a fractional-reserve bank against a general
loss of confidence in its ability to pay out depositors. As long as the bank carries out a liquidity and maturity
transformation, and has liabilities it is obligated to repay at par and on demand, no degree of liquidity that a bank
can achieve can protect it completely against a run. Fragility can be mitigated through higher capital, which reduces
depositors’ concern about solvency, the typical trigger of a run, and higher reserves, which reduces concern about
liquidity. Historically, banks have also protected themselves against runs through individual mechanisms such as
temporary suspension of convertibility, and collective mechanisms such as clearing-houses."
 In regard to (A), (B) and (C), each is TRUE.
3. Correct Answer: D
Malz on systemic funding liquidity risk: "The funding liquidity risk in corporate transactions is both idiosyncratic and
systematic. Funding for a particular LBO or merger might fall through, even if the deal would otherwise have been
consummated. But funding conditions generally can change adversely. This occurred in mid-2007 as the subprime
crisis took hold. Many LBO and merger deals fell apart as financing came to a halt. Banks also incurred losses on
inventories of syndicated loans, called “hung loans,” that had not yet been distributed to other investors or into
completed securitizations, as noted in Chapter 9. As risk aversion and demand for liquidity increased, the appetite
for these loans dried up, and their prices fell sharply.
Apart from providers of financing, other participants in these transactions, such as hedge funds involved in merger
arbitrage, also experienced losses. Mergers typically result in an increase in the target acquisition price, though not
usually all the way to the announced acquisition price, and in a decrease in the acquirer’s price, since the acquirer
often takes on additional debt to finance the acquisition. Merger arbitrage exploits the remaining gap between the
current and announced prices. The risk arises from uncertainty as to whether the transactions will be closed. In the
early stages of the subprime crisis, merger arbitrage strategies generated large losses as merger plans were
abandoned for lack of financing.
Investors taking on exposure to such transactions are therefore exposed not only to the idiosyncratic risk of the
deal, but to the systematic risk posed by credit and funding conditions generally. This risk factor is hard to relate to
any particular time series of asset returns. Rather, it is a “soft factor,” on which information must be gathered from
disparate sources ranging from credit and liquidity spreads to quantitative and anecdotal data on credit availability.
We look at such data more closely in Chapter 14.
Systematic funding liquidity risk is pervasive. Other asset types or strategies that are good examples of sensitivity to
the “latent” factor of economy wide financing conditions include real estate, convertible bonds, and statistical
arbitrage. Real estate is one of the longest-lived assets. Mortgages—loans collateralized by real estate—are
therefore traditionally and most frequently originated as long-term, amortizing, fixed-rate loans. The typical home
mortgage, for example, is a 30-year amortizing, fixed-rate loan. When lending practice departs from this standard
and shorter-term loans predominate, lenders and borrowers both face funding liquidity risk, as borrowers are
unlikely to be in a position to repay unless they can refinance. This risk is primarily systematic, as it is likely to affect
all borrowers and lenders at the same time."
4. Correct Answer: A
When the net asset value (NAV) of a money market mutual fund falls below one dollar ($1.00)
 Please note that (D) has the potential to lead to "breaking the buck."
Malz: "Other mutual funds must mark assets to market each day. This daily net asset value (NAV) is the price at
which investors can contribute and withdraw equity. MMMFs, in contrast, are able to set a notional value of each
share equal to exactly $1.00, rather than an amount that fluctuates daily. The residual claim represented by the
shares is paid the net yield of the money market assets, less fees and other costs. MMMF shares thereby become
claims on a fixed nominal value of units, rather than proportional shares of an asset pool. Their equity nature is
absorbed within limits by fluctuations in the net yield.
This structure only works if market, credit, and liquidity risks are managed well. Some losses cannot be disregarded
under the amortized cost method, particularly credit write-downs. These losses can cause the net asset value to fall
below $1.00, a phenomenon called “breaking the buck.”
Liquidity risk can also jeopardize the ability of a MMMF to maintain a $1.00 net asset value. In this respect, it is much
like a classic commercial bank, and similarly vulnerable to runs. If a high proportion of shareholders attempt to
redeem their shares simultaneously under adverse market conditions, the fund may have to liquidate money market
paper at a loss, forcing write-downs and potentially breaking the buck. An episode of this kind involving credit write-
downs by a MMMF, the Reserve Fund, was an important event in the subprime crisis, as we see in Chapter 14."
5. Correct Answer: B
Repurchase agreements ("repos") are matched pairs of the spot sale and forward repurchase of a security. Both the
spot and forward price are agreed now, and the difference between them implies an interest rate.
Malz: "Markets for collateral have existed for a long time in three basic forms that are economically very similar,
although they differ in legal form and market practice:
 Margin Loans: Margin lending is lending for the purpose of financing a security transaction in which the loan
is collateralized by the security. It is generally provided by the broker intermediating the trade, who is also
acting as a lender. Margin lending is generally short term, but rolled over automatically unless terminated by
one of the counterparties.
 Repurchase Agreements: Repurchase agreements or repos are matched pairs of the spot sale and forward
repurchase of a security. Both the spot and forward price are agreed now, and the difference between them
implies an interest rate. The collateralization of the loan is achieved by selling the security temporarily to the
lender. The collateralization is adjusted for the riskiness of the security through the haircut.
Repos are also a fairly old form of finance, but have grown significantly in recent decades. More significantly, the
range of collateral underlying repos has widened. At one time, repo lending could be secured only by securities with
no or de minimis credit risk. A few decades ago, repo began to encompass high-yield bonds and whole loans, and
more recently, structured credit products. It has been a linchpin of the ability of large banks and brokerages to
finance inventories of structured credit products, facilitated also by extending high investment-grade ratings to the
senior tranches of structured credit products such as ABS and CDOs.
Securities Lending: In a securities lending transaction, one party lends a security to another in exchange for a fee,
generally called a rebate. The security lender, rather than the borrower, continues to receive dividend and interest
cash flows from the security. A common type of securities lending is stock lending, in which shares of stock are
borrowed. As in repo transactions, the 'perfection' of the lien on the collateral is enhanced by structuring the
transaction as a sale, so that the lender holding the collateral can rehypothecate it or, in the event that the loan is
not repaid, sell it with minimal delay and transactions costs.
... Total Return Swaps: The ability to short equities depends on the ability to borrow and lend stock. An important instrument of many short stock trades is total return swaps (TRS), in which one party pays a fixed fee and receives the total return on a specified equity position from the other. TRS are OTC derivatives in which one counterparty,
usually a bank, broker dealer or prime broker, takes on an economic position similar to that of a stock lender,
enabling the other counterparty, often a hedge fund, to establish a synthetic short stock position, economically
similar to that of a borrower of stock. The broker then needs either to lay off the risk via a congruent opposite TRS,
or to hedge by establishing a short position in the cash market."
 In regard to (A), (C) and (D), each is TRUE.
6. Correct Answer: D
Leverage = 2.6 and ROE = 17.0%
Given new assets of 26.0 and new debt of 16.0, new equity is unchanged at 10.0 = 26.0 - 16.0. New leverage =
26.0/10.0 = 2.60 and new ROE = (26.0*9% - 16.0*4%)/10.0 = 17.0%.
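To verify the arithmetic, here is a minimal Python sketch of the calculation above (the figures come from the question; the helper name roe() is ours, purely illustrative):

def roe(assets, debt, roa=0.09, cost_of_debt=0.04):
    # ROE = (asset income - interest expense) / equity, with equity = assets - debt
    equity = assets - debt
    return (assets * roa - debt * cost_of_debt) / equity

# Before: $20.0mm assets, $10.0mm debt -> leverage 2.0, ROE 14.0%
print(20.0 / (20.0 - 10.0), roe(20.0, 10.0))
# After borrowing $6.0mm more: $26.0mm assets, $16.0mm debt -> leverage 2.6, ROE 17.0%
print(26.0 / (26.0 - 16.0), roe(26.0, 16.0))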
7. Correct Answer: B
 Initial leverage = assets/equity = 100/80 = 1.250.
 After the trade, assets = $50 cash + $100 stock = $150;
 After the trade, liabilities = $20 debt + $50 margin loan = $70; such that equity = $150 - 70 = $80; and leverage
= 150/80 = 1.875
8. Correct Answer: D
 After the trade, Assets = $200 = $50 cash + $150 Due from broker (i.e., $100 short sale proceeds + $50 margin)
 After the trade, Liabilities = $120 = $20 debt + $100 Borrowed Stock, such that Equity = $80 = $200 - $120;
 Therefore, Leverage = 200/80 = 2.50.
9. Correct Answer: D
 After the trade, assets = $150 (currency forward) + $50 (long option) + $200 (short CDS) + $50 cash + $50
margin due from broker = $500;
 After the trade, debt = $400 (i.e., corresponding to derivative positions) such that equity = $100 (i.e.,
unchanged by new positions), and
 Leverage = 500/100 = 5.0
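Answers 7, 8, and 9 all apply the same identity: leverage = assets/equity, where equity = assets minus liabilities. A minimal Python sketch of the three cases (position figures from the questions; the helper leverage() is ours, purely illustrative):

def leverage(assets, liabilities):
    # Malz defines leverage as assets / equity, with equity = assets - liabilities
    return assets / (assets - liabilities)

# Answer 7: $50 cash + $100 stock vs. $20 debt + $50 margin loan
print(leverage(50 + 100, 20 + 50))              # 150/80 = 1.875
# Answer 8: $50 cash + $150 due from broker vs. $20 debt + $100 borrowed stock
print(leverage(50 + 150, 20 + 100))             # 200/80 = 2.50
# Answer 9: $150 forward + $50 option + $200 CDS + $50 cash + $50 margin vs. $400 debt
print(leverage(150 + 50 + 200 + 50 + 50, 400))  # 500/100 = 5.0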
10. Correct Answer: C
Tightness, depth and resiliency are characteristics (i.e., measures), not causes, of market liquidity; also, depth of
resiliency is awkward.
Malz 12.4.1 Causes of Transactions Liquidity Risk:
"Transaction liquidity risk is ultimately due to the cost of searching for a counterparty, to the market institutions
that assist in search, and to the cost of inducing someone else to hold a position. We can classify these market
microstructure fundamentals as follows:
 Cost of trade processing. Facilitating transactions, like any economic activity, has fixed and variable costs of
processing, clearing, and settling trades, apart from the cost of finding a counterparty and providing
immediacy. These costs are tied partly to the state of technology and partly to the organization of markets.
While processing may be a significant part of transaction costs, it is unlikely to contribute materially to liquidity
risk. An exception is natural or man-made disasters that affect the trading infrastructure.
 Inventory management by dealers. The role of dealers is to provide trade immediacy to other market
participants, including other dealers. In order to provide this service, dealers must be prepared to estimate
the equilibrium or market-clearing price, and to hold long or short inventories of the asset. Holding inventories
exposes dealers to price risk, for which they must be compensated by price concessions. The dealers’
inventory risk is fundamentally a volatility exposure and is analogous to short-term option risk.
 Adverse selection. Some traders may be better informed than others, that is, better situated to forecast the
equilibrium price. Dealers and market participants cannot distinguish perfectly between offers to trade arising
from the counterparty’s wish to reallocate into or out of cash, or responses to non-fundamental signals such
as recent returns (“liquidity” or “noise” traders) from those who recognize that the prevailing price is wrong
(“information” traders). A dealer cannot be sure for which of these reasons he is being shown a trade and
therefore needs to be adequately compensated for this “lemons” risk through the bid-ask spread. A dealer
does, however, have the advantage of superior information about the flow of trading activity, and learns early
if there is a surge in buy or sell orders, or in requests for two-way prices.
 Differences of opinion. Investors generally disagree about the “correct” price of an asset, or about how to
interpret new information, or even about whether new information is important in assessing current prices.
Investors who agree have less reason to trade with one another than investors who disagree. When
agreement predominates, for example, when important and surprising information is first made public, or
during times of financial stress, it is more difficult to find a counterparty."
11. Correct Answer: B
$0.29130 = $50.00 * 0.5 * (0.350/50.00 + 2.326*0.0020)
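The same computation in a minimal Python sketch (all inputs from the question; the 99% normal quantile is hard-coded):

price, spread_mean, spread_vol = 50.00, 0.350, 0.0020
z = 2.326  # 99% one-tailed normal quantile
# Cost = half the expected relative spread plus z spread-volatilities, scaled by price
cost = 0.5 * price * (spread_mean / price + z * spread_vol)
print(round(cost, 5))  # 0.2913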
12. Correct Answer: B
The 1-day 99% VaR = $1.0 mm * 34% * SQRT (1/250) * 2.326 = $50,017.
The 5-day adjustment = SQRT [(6*11)/30] = SQRT (2.20) = 1.48324
The 99% LVaR = $50,017 * 1.48324 = $74,187; note: this assumes rounded 99% quantile of 2.3260
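A minimal Python sketch of the same liquidity-adjusted VaR computation (inputs from the question; variable names are ours, purely illustrative):

import math

value, annual_vol, trading_days, z = 1_000_000, 0.34, 250, 2.326
T = 5  # trading days required to liquidate the position
var_1d = value * annual_vol * math.sqrt(1 / trading_days) * z   # ~ $50,017
# Liquidation adjustment from the formula sqrt[(1 + T)(1 + 2T) / (6T)]
adjustment = math.sqrt((1 + T) * (1 + 2 * T) / (6 * T))         # ~ 1.48324
print(var_1d * adjustment)                                      # ~ $74,187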
13. Correct Answer: D
Malz: “A standard set of characteristics of market liquidity, focusing primarily on asset liquidity, helps to understand
the causes of illiquidity:
 Tightness refers to the cost of a round-trip transaction, and is typically measured by the bid-ask spread and
brokers’ commissions.
 Depth describes how large an order it takes to move the market adversely.
 Resiliency is the length of time for which a lumpy order moves the market away from the equilibrium price.”
14. Correct Answer: A
15. Correct Answer: D
Slippage
Malz: “Lack of liquidity manifests itself in these observables, if hard-to-measure ways:
 Bid-ask spread. If the bid-ask spread were a constant, then going long at the offer and short at the bid would
be a predictable cost of doing the trade. However, the bid-ask spread can fluctuate widely, introducing a risk.
 Adverse price impact is the impact on the equilibrium price of the trader’s own activity.
 Slippage is the deterioration in the market price induced by the amount of time it takes to get a trade done. If prices are trending, the market can go against the trader, even if the order is not large enough to influence the market.”
Repurchase Agreements and Financing | Questions
1. At initiation of a repurchase agreement (repo), Counterparty A sells a security to Counterparty B for settlement on
June 1st, 2015 at an invoice price of USD 180.0 million. At the same time, Counterparty A agrees to repurchase the
security three months later, for settlement on September 1st, 2015, at a purchase price equal to the original invoice
price plus interest at a repo rate of 0.90%. Using the actual/360 convention of most money market instruments, which
is nearest to the repurchase price?
a. $414,000
b. $180,000,000
c. $180,414,000
d. $181,620,000
2. Tarun is a repo investor in the money market mutual fund industry with the exclusive motive of cash management.
As Amar explains, "Investors holding cash for liquidity or safekeeping purposes often find investing in repo to be an
ideal solution. The most significant example of this is the money market mutual fund industry, which invests on behalf
of investors willing to accept relatively low returns in exchange for liquidity and safety. [A money market fund would
lend money while taking collateral and then, at maturity, collect the loan plus interest and return the collateral.]
Holding collateral makes the lender less vulnerable to the creditworthiness of a counterparty because, in the event
of a default by the counterparty, the investor (e.g., the money market fund) can sell the repo collateral to recover
any amounts owed. In summary, relative to super safe and liquid non-interest-bearing bank deposits, repo
investments pay a short-term rate without sacrificing much liquidity or incurring significant default risk." Given this
motivation, Tarun understandably employs each of the following criteria (or preference) with respect to its repo
investments EXCEPT which is the LEAST likely preference?
a. Tarun insists on a sufficient (i.e., greater rather than lesser) collateral haircut
b. Tarun only accepts securities with the highest credit quality; e.g., debt of government-sponsored entities (GSEs)
c. Tarun refuses to accept general collateral (at general collateral repo rates) and instead insists on particular securities within delineated asset classes
d. Tarun places a premium on liquidity, so it tends to lend overnight rather than for term; or, if it wants to lend cash for an extended period, it engages in an open repo
3. Consider the following illustration of a simplified repurchase agreement (repo) trade between generic Counterparty
A and Counterparty B.
[Figure: initiation of a simplified repo trade: Counterparty A delivers a security to Counterparty B and receives cash; at the unwind, A repurchases the security at the original price plus repo interest]
Please note this illustration refers to the initiation of the repo trade, not the unwinding. Consider the three primary
motivations for a repo trade:
I. Lend funds short-term on a secured basis: A money market mutual fund who holds cash for liquidity or
safekeeping purposes but who wants to lend the cash safely would BUY the repo as Counterparty B; because the
fund is investing cash, the mutual fund is willing to accept general collateral
II. Finance the long position in a security: The trading desk at a financial institution who wants to finance the
purchase of a security would sell the repo (aka, repo out) as Counterparty A using the purchased security as
collateral
III. Borrow a security in order to sell it short: A hedge fund that wants to short a security but needs to borrow the
security in order to sell it would do a reverse repo (aka, buy the repo) as Counterparty B; because the hedge fund
is borrowing a bond, it does not accept general collateral and instead requires delivery of a particular security
Which of the above is accurate (true)?
a. None are accurate
b. I. only is accurate (II. and III. are mistaken)
c. III. only is accurate (I. and II. are mistaken)
d. All are accurate
4. At the time of Bear Stearns's demise in March 2008, Paul Friedman was a Senior Managing Director at the firm with responsibility for its fixed income repo desk. About the repo market's role in the collapse of Bear Stearns, he said in
testimony before the Financial Crisis Inquiry Commission, "During the week of March 10, 2008, Bear Stearns suffered
from a run on the bank that resulted, in my view, from an unwarranted loss of confidence in the firm by certain of its
customers, lenders, and counterparties. In part, this loss of confidence was prompted by market rumors, which I
believe were unsubstantiated and untrue, about Bear Stearns’ liquidity position. Nevertheless, the loss of confidence
had three related consequences."
Each of the following was one of his cited three consequences EXCEPT which was not?
a. Prime brokerage clients withdrew their cash and unencumbered securities at a rapid and increasing rate
b. Repo market lenders declined to roll over or renew repo loans, even when the loans were supported by high-
quality collateral such as agency securities
c. Counterparties to non-simultaneous settlements of foreign exchange trades refused to pay until Bear Stearns paid
first
d. Short sellers seized on the panic and drove the stock price down which reduced equity capital available, and
equity capital was already the least stable source of funds
5. In contrast to general collateral (GC) repo rates, which of the following is TRUE about special repo rates?
a. Special rates are typically less than general collateral rates
b. If the counterparty's primary motivation is to lend cash rather than borrow a security, the special rate applies
c. Special rates are well-suited to repo investors who are looking to obtain the highest rate for the collateral they
are willing to accept
d. The most commonly cited special rates are for overnight repos where any U.S. Treasury collateral is acceptable
6. In regard to special spreads, each of the following is true EXCEPT which is false?
a. On-the-run (OTR) issues tend to trade more special than off-the-run (OFR; aka, old) issues
b. The special spread equals the general collateral (GC) rate minus the respective bond repo rate
c. On-the-run special spreads peak immediately after an auction, and tend to decrease over the cycle, reaching
their lowest level immediately before the next auction
d. Special spreads tend to be volatile on a daily basis (reflecting supply and demand for special collateral) and
special spreads can be quite large (e.g., hundreds of basis points)
Repurchase Agreements and Financing | Answers
1. Correct Answer: C
$180,414,000 = 180,000,000 * (1 + 0.0090 * 92/360); there are 92 actual days between June 1st and September 1st
as both July and August have 31 days (30*3 + 2 = 92).
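The actual/360 arithmetic can be checked with a few lines of Python (dates and rate from the question):

from datetime import date

principal, repo_rate = 180_000_000, 0.0090
days = (date(2015, 9, 1) - date(2015, 6, 1)).days  # 92 actual days
repurchase_price = principal * (1 + repo_rate * days / 360)
print(repurchase_price)  # 180,414,000.0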
2. Correct Answer: C
Amar: "While repo investors care about the quality of the collateral they accept, they do not usually care about
which particular bond they accept. Hence, while repo investors can be very particular about which classes of
securities they will take as collateral, e.g., Euro-area government bonds with less than five years to maturity, they
will not insist on receiving any particular security within that delineated class. For this reason these investors are
said to accept general collateral, which trades at general collateral repo rates. The types and determination of repo
rates are discussed later in this chapter."
 In regard to (A), (B), and (D), each is TRUE.
3. Correct Answer: D
All are accurate
4. Correct Answer: D
Short sellers are not cited; further, equity capital is the most stable source of funds because "equity holders do not
have to be paid according to any particular schedule and because they cannot compel a redemption of their shares."
5. Correct Answer: A
Special rates are typically less than general collateral rates
Tuckman: "As mentioned earlier in this chapter, repo trades can be divided into those using general collateral (GC)
and those using special collateral or specials. In the former, the lender of cash is willing to take any particular
security, although the broad categories of acceptable securities might be specified with some precision. In specials
trading, the lender of cash initiates the repo in order to take possession of a particular security. For these trades,
therefore, it makes more sense to say that 'counterparty A is lending a security to counterparty B and taking cash
as collateral' as opposed to saying that 'counterparty B is lending cash and taking a security as collateral,' although
the two statements are economically equivalent. For this reason, by the way, when using the words 'borrow' or
'lend' in the repo context, it is best to specify whether cash or securities are being borrowed or lent. Also, as another
note on market terminology, bonds most in demand to be borrowed are said to be trading special, although any
request for specific collateral is a specials trade.
Each day there is a GC rate for each bucket of collateral and each repo term. The most commonly cited GC rates are
for repos where any U.S. Treasury collateral is acceptable, and 'the' GC rate refers to the overnight rate for U.S.
Treasury collateral. With respect to special rates, there can be one for each security for each term, e.g., the 3 5/8s of August 15, 2019, to September 30, 2010. But every special rate is typically less than the GC rate: being able to borrow cash at a relatively
low rate induces holders of securities that are in great demand to lend those securities, while being forced to lend
cash at a relatively low rate allocates securities that are in great demand to potential borrowers of that security.
Differences between the GC rate and the specials rates for particular securities and terms are called special spreads.
Relating GC and specials trades to the market participants discussed earlier in this chapter, GC trades suit repo
investors: they obtain the highest rate for the collateral they are willing to accept. Traders intending to short
particular securities have to do specials trades and must decide whether they are willing to lend money at rates
below GC rates in order to borrow those securities. Funding trades are predominantly GC. Should an institution find
itself wanting to borrow money against a security that is trading special, however, it will lend that security in the
specials market and borrow cash at a rate below GC, rather than financing that security as part of a GC trade. In the
United States the GC rate is typically close to, but below, the federal funds rate.
The latter, discussed further in Chapter 15, is the unsecured rate for overnight loans between banks in the Federal
Reserve System. By contrast, repo loans secured by U.S. Treasury collateral are safer and should trade at a lower
rate of interest. From October 23, 2009, through July 1, 2010, for example, the GC rate was, on average, about 16 basis
points below the fed funds rate. This fed funds–GC spread can vary, however, with the demand for Treasury
collateral. When the U.S. government was running surpluses and paying down debt in the late 1990s and early
2000s, so that U.S. Treasuries were becoming scarcer and expected to become scarcer still, the fed funds–GC spread
widened to reflect the decreasing supply of Treasury collateral."
 In regard to (B), (C), and (D), each is FALSE.
6. Correct Answer: C
The reverse: On-the-run special spreads are smallest immediately after an auction, and tend to increase over the
cycle, reaching their peak immediately before the next auction.
Tuckman: "The extra liquidity of newly issued Treasuries makes them ideal candidates not only for long positions
but for shorts as well. Most shorts in Treasuries are for relatively brief holding periods: a trading desk hedging the
interest rate risk of its current position; a corporation or its underwriter hedging an upcoming sale of its own bonds;
or an investor betting that interest rates will rise. All else being equal, holders of these relatively brief short positions
prefer to sell particularly liquid Treasuries so that, when necessary, they can cover their short positions quickly and
at relatively low transaction costs.
Investors and traders who are long an OTR bond for liquidity reasons require compensation if they are to sacrifice
that liquidity by lending that bond in the repo market. At the same time, investors and traders wanting to short the
OTR securities are willing to pay for the liquidity of shorting these particular bonds when borrowing them in the
repo market. As a result, OTR securities tend to trade [more] special. The auction cycle is an important determinant
not only of which bonds trade special, but also of how special individual bonds trade over the course of the auction
cycle. This will be illustrated first by examining the history of special spreads for the OTR 10-year Treasury and then
by examining the term structure of special spreads for the OTR 10-year Treasury as of May 28, 2010. Several lessons
may be drawn from these graphs. First, special spreads are quite volatile on a daily basis, reflecting supply and
demand for special collateral each day. Second, special spreads can be quite large: spreads of hundreds of basis
points are quite common. Third, special spreads do attain higher levels over some periods rather than others, a
feature that will be discussed in the next subsection. Fourth, and the main theme of this subsection, while the cycle
of OTR special spreads is far from regular, these spreads tend to be small immediately after auctions and to peak
before auctions. It takes some time for a short base to develop.
Immediately after an auction of a new OTR security, shorts can stay in the previous OTR security or shift to the new
OTR. This substitutability tends to depress special spreads. Also, the extra supply of the OTR security immediately
following a re-opening auction tends to depress special spreads. In fact, a more detailed examination of special
spreads indicates that reopened issues do not get as special as do new issues. In any case, as time passes after an
auction, shorts tend to migrate toward the OTR security, and its special spread tends to rise. Furthermore, as many
market participants short the OTR to hedge purchases of the to-be issued next OTR, the demand to short the OTR
and, therefore, its special spread, can increase dramatically or spike going into the subsequent auction."
In regard to (A), (B) and (D), each is TRUE.
Illiquid Assets | Questions
1. According to Provath, illiquidity can arise due to the following market imperfections: Clientele effects and
participation costs, transaction costs, search frictions, asymmetric information, price impact or funding constraints.
He characterizes the effects of these imperfections as "illiquidity." In regard to the CHARACTERISTICS of illiquid
markets, based on Ang's research, which of the following statements is TRUE (such that the other statements are
generally false)?
a) Normally liquid markets periodically become illiquid
b) Most individuals hold the majority of their wealth in liquid or highly liquid assets
c) Most asset classes are liquid such that genuinely illiquid markets tend to be small and temporary
d) Technology has virtually eliminated the following frictions: transaction costs, search friction, asymmetric
information, price impact, and funding constraints
2. Provath makes an important, provocative statement when he writes "Reported illiquid asset returns are not
returns." He claims that people overstate the expected returns and understate the risk of illiquid assets, and he
attributes this to three key biases. According to Ang, each of the following is a bias that overstates the expected
returns (and/or understates the risk) of illiquid assets EXCEPT which is not accurate?
a) Survivorship bias can inflate returns by 4.0% or more
b) Infrequent sampling (aka, infrequent trading) artificially reduces risk and risk-related metrics such as volatility,
correlation and beta
c) Turnover bias decreases the typical time between transactions and tends to artificially increase the expected
return by 5.0% or more
d) Selection bias (aka, reporting bias) is a distortion of the sample that artificially increases (i.e., overestimates) alpha and artificially decreases (i.e., underestimates) beta
3. To adjust for the infrequent trading bias that is introduced into reported returns, we can "unsmooth" or "de-smooth" the reported returns. Ang suggests this is a filtering problem: "Filtering algorithms are normally used
to separate signals from noise. When we’re driving on a freeway and talking on a cell phone, our phone call
encounters interference—from highway overpasses and tall buildings—or the reception becomes patchy when we
pass through an area without enough cell phone towers. Telecommunication engineers use clever algorithms to
enhance the signal, which carries our voice, against all the static. The full transmission contains both the signal and
noise, and so the true signal is less volatile than the full transmission. Thus standard filtering problems are designed
to remove noise. The key difference is that unsmoothing adds noise back to the reported returns to uncover the
true returns."
The essence of unsmoothing of returns is illustrated by Ang's formulas 13.1, 13.2 and 13.4 below:
$$r^*_t = c + \phi\, r^*_{t-1} + \varepsilon_t \tag{13.1}$$
$$r_t = \frac{1}{1-\phi}\, r^*_t - \frac{\phi}{1-\phi}\, r^*_{t-1} \tag{13.2}$$
$$r^*_t = (1-\phi)\, r_t + \phi\, r^*_{t-1} \tag{13.4}$$
In these formulas r*(t) is the reported (aka, observed) return and r(t) is the true but unobserved return. Importantly,
as is almost always the case in finance, the model used in this particular unsmoothing process makes key
assumptions. However, if the assumptions are correct, then each of the following statements about the
unsmoothing process is true EXCEPT which is false?
a) Unsmoothing affects only risk estimates and not expected returns
b) Unsmoothing has no effect if the observed returns are uncorrelated.
c) The true returns implied by the "transfer function" and equation 13.2, r(t), should have zero autocorrelation
and generally should not be themselves forecastable
d) Due to the autocorrelation assumption, |φ| < 1, the variance of the true returns will be less than the variance of the observed returns; i.e., variance[r(t)] < variance[r*(t)]
4. In order to identify the presence of illiquidity risk premium(s), Provath references data presented by Antti Ilmanen
in his well-regarded book Expected Returns (An Investor's Guide to Harvesting Market Rewards). This data is
displayed below as a scatter plot where the y-axis is the long-run average return of the asset class and the x-axis is an index of illiquidity. A higher index (ie, to the right) implies less liquidity. For example, venture capital is assigned the highest illiquidity index (ie, it is the least liquid asset class), but it also plots the highest long-run average return.
[Figure: Ilmanen's scatter plot of long-run average returns (y-axis) versus illiquidity index (x-axis) by asset class]
In regard to the illiquidity risk premium, which of the following statements is TRUE?
a) In general illiquid asset classes offer high risk-adjusted returns
b) These charts demonstrate that there do exist large illiquidity risk premiums ACROSS asset classes
c) There do exist large illiquidity risk premiums WITHIN many asset classes, but not ACROSS asset classes
d) Illiquid equities earn the same returns as liquid equities; and illiquid bonds earn the same returns as liquid
bonds
5. Illiquidity risk premiums compensate investors for the inability to access capital immediately and/or for the market's
withdrawal of liquidity during a crisis. Provath details four different ways that an investor (asset owner) might
capture or "harvest" the illiquidity premium. However, among these four, which is the simplest to implement and
has the greatest impact on portfolio returns?
a) Dynamic rebalancing at the aggregate level
b) Market making at the individual security level
c) Holding less liquid securities within asset classes
d) Holding passive allocations to illiquid asset classes
6. You are consulting to a large endowment fund that is in the process of determining its asset allocation budget. An
important sub-project in this process is a recommendation for the definition of, target allocation to, and hurdle rates
associated with, illiquid assets. In short, you need to help develop the endowment's Portfolio Choice Model for
illiquid assets. In regard to this sub-project, members of your staff (Phillip, Debra, Peter, and Mary) make
suggestions that include the following four assertions. According to Ang, each of these is true EXCEPT which is
probably false?
a) Phillip says: The longer the time between liquidity events for an asset--ie, the less liquid the asset--the LOWER
its optimal allocation in the portfolio
b) Debra says: The longer we need to wait to exit an investment--ie, the later the arrival of liquidity events--the
HIGHER should be our illiquidity hurdle rate
c) Peter says: Due to the nature of factor risk and idiosyncratic risk in illiquid markets, to the extent we allocate
to illiquid assets, it is really important for us to assign (or delegate to) genuinely skilled investors to this asset
class
d) Mary says: It's actually not very important that we identify skill in the illiquid asset class because we can rather
easily benchmark against tradeable indexes which will allow us to separate factor risk from manager skill (aka,
alpha)
Illiquid Assets | Answers
1. Correct Answer: A
True: Normally liquid markets periodically become illiquid. In regard to (B), (C) and (D), each is generally FALSE.
Ang's characteristics of illiquid markets include the following:
 "Most Asset Classes Are Illiquid [ie, choice C is false]: Except for plain-vanilla public equities and fixed income,
most asset markets are characterized by long periods, sometimes decades, between trades, and they have
very low turnover. Even among stocks and bonds, some sub asset classes are highly illiquid. Equities trading
in pink sheet over-the-counter markets may go for a week or more without trading. The average municipal
bond trades only twice per year, and the entire muni-bond market has an annual turnover of less than 10%
(see also chapter 11). In real estate markets, the typical holding period is four to five years for single-family
homes and eight to eleven years for institutional properties. Holding periods for institutional infrastructure
can be fifty years or longer, and works of art sell every forty to seventy years, on average. Thus, most asset
markets are illiquid in the sense that they trade infrequently and turnover is low.
 Illiquid Asset Markets Are Large: The illiquid asset classes are large and rival the public equity market in size.
In 2012, the market capitalization of the NYSE and NASDAQ was approximately $17 trillion. The estimated size
of the U.S. residential real estate market is $16 trillion, and the direct institutional real estate market is $9
trillion. In fact, the traditional public, liquid markets of stocks and bonds are smaller than the total wealth held
in illiquid assets.
 Investors Hold Lots of Illiquid Assets [ie, choice B is false]: Illiquid assets dominate most investors’ portfolios.
For individuals, illiquid assets represent 90% of their total wealth, which is mostly tied up in their house—and
this is before counting the largest and least liquid component of individuals’ wealth, human capital (see
chapter 5). There are high proportions of illiquid assets in rich investors’ portfolios, too. High net worth
individuals in the United States allocate 10% of their portfolios to treasure assets like fine art and jewelry. This
rises to 20% for high net worth individuals in other countries. The share of illiquid assets in institutional
portfolios has increased dramatically over the past twenty years. The National Association of College and
University Business Officers reported that, in 2011, the average endowment held a portfolio weight of more
than 25% in alternative assets versus roughly 5% in the early 1990s. A similar trend is evident among pension
funds. In 1995, they held less than 5% of their portfolios in illiquid alternatives, but today the figure is close to
20%.
 Liquidity Dries Up: Many normally liquid assets markets periodically become illiquid. During the 2008 to 2009
financial crisis, the market for commercial paper (or the money market)—usually very liquid—experienced
buyers’ strikes by investors unwilling to trade at any price ... Illiquidity crises occur regularly because liquidity
tends to dry up during periods of severe market distress. The Latin American debt crisis in the 1980s, the Asian
emerging market crisis in the 1990s, the Russian default crisis in 1998, and of course the financial crisis of 2008
to 2009 were all characterized by sharply reduced liquidity, and in some cases, liquidity completely evaporated
in some markets. Major illiquidity crises have occurred at least once every ten years, most in tandem with
large downturns in asset markets."
2. Correct Answer: C
False. Turnover bias is made-up.
In regard to (A), (B) and (D), each is TRUE.
Ang: "As Faust and Forst note in their memo to Harvard’s Council of Deans, the true illiquid asset losses were greater
than the reported ones, which leads us to an important corollary. Reported illiquid asset returns are not returns.
Three key biases cause people to overstate expected returns and understate the risk of illiquid assets:
1. Survivorship bias,
2. Infrequent sampling, and
3. Selection bias.
In illiquid asset markets, investors must be highly skeptical of reported returns."
3. Correct Answer: D
The true statement, instead, is: Due to the autocorrelation assumption, |φ| < 1, the variance of the true returns is HIGHER than the variance of the observed returns; i.e., variance[r(t)] ≥ variance[r*(t)]
In regard to (A), (B) and (C), each is TRUE.
Ang: "The unsmoothing process has several important properties:
1. Unsmoothing affects only risk estimates and not expected returns. Intuitively, estimates of the mean require
only the first and last price observation (with dividends take “total prices,” which count reinvested
dividends). Smoothing spreads the shocks over several periods, but it still counts all the shocks. In Figure
13.2, we can see that the first and last observations are unchanged by infrequent sampling; thus,
unsmoothing changes only the volatility estimates.
2. Unsmoothing has no effect if the observed returns are uncorrelated. In many cases, reported illiquid asset
returns are autocorrelated because illiquid asset values are appraised. The appraisal process induces
smoothing because appraisers use, as they should, both the most recent and comparable sales (which are
transactions) together with past appraised values (which are estimated, or perceived, values). The artificial
smoothness from the appraisal process has pushed many in real estate to develop pure transactions-based,
rather than appraisal-based indexes. Autocorrelation also results from more shady aspects of subjective
valuation procedures—the reluctance of managers to mark to market in down markets."
In regard to true (C), Ang: "Equation (13.2) unsmooths the observed returns. If our assumption on the transfer function is right, the true returns implied by equation (13.2) should have zero autocorrelation. Thus, the filter takes an autocorrelated series of observed returns and produces true returns that are close to IID (or not forecastable)."
4. Correct Answer: C
True: There do exist large illiquidity risk premiums WITHIN many asset classes, but not ACROSS asset classes. This is
Ang's essential point in section four of Chapter 14: "But while there do not seem to be significant illiquidity risk
premiums across classes, there are large illiquidity risk premiums within asset classes." Specifically (selected):
 US Treasuries: "A well-known liquidity phenomenon in the U.S. Treasury market is the on-the-run/off-the-run
bond spread. Newly auctioned Treasuries (which are on-the-run) are more liquid and have higher prices, and
hence lower yields, than seasoned Treasuries (which are off-the-run) [The on-the-run bonds are more
expensive because they can be used as collateral for borrowing funds in the repo market. This is called
specialness.] The spread between these two types of bonds varies over time reflecting liquidity conditions in
Treasury markets."
 Corporate bonds: "Corporate bonds that trade less frequently or have larger bid–ask spreads have higher
returns. Chen, Lesmond, and Wei (2007) find that illiquidity risk explains 7% of the variation across yields of
investment-grade bonds. Illiquidity accounts for 22% of the variation in junk bond yields; for these bonds, a
one basis point rise in bid–ask spreads increases yield spreads by more than two basis points."
 Equities: "A large literature finds that many illiquidity variables predict returns in equity markets, with less
liquid stocks having higher returns. These variables include bid–ask spreads, volume, volume signed by
whether trades are buyer or seller initiated, turnover, the ratio of absolute returns to dollar volume
(commonly called the Amihud measure based on his paper of 2002), the price impact of large trades, informed
trading measures, quote size and depth, the frequency of trades ... Estimates of illiquidity risk premiums in
the literature range between 1% and 8% depending on which measure of illiquidity is used. However, Ben-
Rephael, Kadan, and Wohl (2008) report that these equity illiquidity premiums have diminished considerably—
for some illiquidity measures the risk premiums are now zero! In pink sheet stock markets, which are over-
the-counter equity markets, Ang, Shtauber, and Tetlock (2013) find an illiquidity risk premium of almost 20%
compared to about 1% for comparable listed equities."
 Illiquid Assets: "There are higher returns to hedge funds that are more illiquid, in the sense that they place
more restrictions on the withdrawal of capital (called lockups) or for hedge funds whose returns fall when
liquidity dries up. Franzon et al report that there are significant illiquidity premiums in private equity funds—
typically 3%. In real estate, Liu and Qian (2012) construct illiquidity measures of price impact and search costs
for U.S. office buildings. They find a 10% increase in these illiquidity measures leads to a 4% increase in
expected returns."
In regard to (A), (B) and (D), each is FALSE.
5. Correct Answer: A
Dynamic rebalancing at the aggregate level
Ang: "Harvesting Illiquidity Risk Premiums: There are four ways an asset owner can capture illiquidity premiums:
1. By setting a passive allocation to illiquid asset classes, like real estate;
2. By choosing securities within an asset class that are more illiquid, that is by engaging in liquidity security
selection;
3. By acting as a market maker at the individual security level; and
4. By engaging in dynamic strategies at the aggregate portfolio level."
In regard to this fourth way, Ang writes (emphasis ours): "4.4. Rebalancing: The last way an asset owner can supply
liquidity is through dynamic portfolio strategies. This has a far larger impact on the asset owner’s total portfolio than
liquidity security selection or market making because it is a top-down asset allocation decision (see chapter 14 for
factor attribution). Rebalancing is the simplest way to provide liquidity, as well as the foundation of all long-horizon
strategies (see chapter 4). Rebalancing forces asset owners to buy at low prices when others want to sell. Conversely,
rebalancing automatically sheds assets at high prices, transferring them to investors who want to buy at elevated
levels. Since rebalancing is counter-cyclical, it supplies liquidity. Dynamic portfolio rules, especially those anchored
by simple valuation rules (see chapters 4 and 14), extend this further—as long as they buy when others want to sell
and vice versa. It is especially important to rebalance illiquid asset holdings too, when given the chance ...
4.5. Summary: Of all the four ways to collect an illiquidity premium: (i) holding passive allocations to illiquid asset
classes, (ii) holding less liquid securities within asset classes, (iii) market making at the individual security level, and
(iv) dynamic rebalancing at the aggregate level; the last of these is simplest to implement and has the greatest
impact on portfolio returns."
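As a quick illustration of why rebalancing supplies liquidity, consider the toy Python sketch below. The 60/40 portfolio and the 25% equity drawdown are invented for this example; this is not from Ang.

```python
# Minimal sketch: rebalancing to fixed weights is counter-cyclical, i.e., it
# buys the asset that has fallen and sells the asset that has outperformed.
# All numbers are invented for illustration.
target = {"equities": 0.60, "bonds": 0.40}
holdings = {"equities": 60.0, "bonds": 40.0}  # $ values, starting at target

holdings["equities"] *= 0.75                  # equities fall 25%; bonds flat

total = sum(holdings.values())                # $85 after the drawdown
for asset, weight in target.items():
    trade = weight * total - holdings[asset]  # $ needed to restore the weight
    side = "BUY" if trade > 0 else "SELL"
    print(f"{side} ${abs(trade):.2f} of {asset}")
# Prints: BUY $6.00 of equities (which just fell),
#         SELL $6.00 of bonds (which just outperformed)
```

The rebalancer stands ready to buy exactly when others are selling, which is the sense in which the strategy is short volatility and earns a premium.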
6. Correct Answer: D
False, Mary probably isn't an FRM! According to Ang, a key difference associated with illiquid asset classes is a
relative INABILITY to separate factor risk from manager skill ("there is no market index for illiquid asset classes")
such that investment manager talent is very important. Recall that an ideal benchmark is tradeable, and alpha is
measured relative to tradeable benchmarks. But Ang says "No investor receives the returns on illiquid indexes. An
asset owner never receives the NCREIF return on a real estate portfolio, for example. The same is true for most
hedge fund indexes and private equity indexes. In liquid public markets, large investors can receive index market
returns and pay close to zero in fees. In contrast, NCREIF is not investable as it is impossible to buy all the underlying
properties in that index."
In regard to (A), (B) and (C) each is TRUE. According to Ang, especially true is choice (B): Investors should demand
high illiquidity risk premiums, he says.
 Illiquidity Markedly Reduces Optimal Holdings: "... If the risky asset can be traded on average every six months,
which is the second to last line, the optimal holding of the illiquid asset contingent on the arrival of the liquidity
event is 44%. When the average interval between trades is five years, the optimal allocation is 11%. For ten
years, this reduces to 5%. Illiquidity risk has a huge effect on portfolio choice."
 Rebalance Illiquid Assets to Positions Below the Long-Run Average Holding: "In the presence of infrequent
trading, illiquid asset wealth can vary substantially and is right-skewed. Suppose the optimal holding of illiquid
assets is 0.2 when the liquidity event arrives. The investor could easily expect illiquid holdings to vary from 0.1
to 0.35, say, during nonrebalancing periods. Because of the right-skew, the average holding of the illiquid asset
is 0.25, say, and is greater than the optimal rebalanced holding. The optimal trading point of illiquid assets is
lower than the long-run average holding."
 Consume Less with Illiquid Assets: "Payouts, or consumption rates, are lower in the presence of illiquid assets
than when only comparable liquid assets are held by the investor. The investor cannot offset the risk of illiquid
assets declining when these assets cannot be traded. This is an unhedgeable source of risk. The investor offsets
that risk by eating less."
 There Are No Illiquidity Arbitrages: In a mean-variance model, two assets with different Sharpe ratios and
perfect correlations produce positions of plus or minus infinity. This is a well-known bane of mean-variance
models, and professionals employ lots of ad hoc fixes, and arbitrary constraints, to prevent this from
happening. This does not happen when one asset is illiquid—there is no arbitrage. Investors do not load up
on illiquid assets because these assets have illiquidity risk and cannot be continuously traded to construct an
arbitrage.
 Investors Must Demand High Illiquidity Hurdle Rates: How much does an investor need to be compensated
for illiquidity? ... When liquidity events arrive every six months, on average, an investor should demand an
extra 70 basis points. (Some hedge funds have lockups around this horizon.) When the illiquid asset can be
traded once a year, on average, the illiquidity premium is approximately 1%. When you need to wait ten years,
on average, to exit an investment, you should demand a 6% illiquidity premium. That is, investors should insist
that private equity funds generate returns 6% greater than public markets to compensate for illiquidity."
Factor Theory | Questions
1. Sudhansu develops an analogy, writing "factors are to assets what nutrients are to food." His theory of factor risk
premiums includes each of the following three ideas EXCEPT which is not in the theory?
a) Assets are bundles of factors (just as most foods are combinations of nutrients)
b) Factors do matter but asset classes do not (just as healthy eating is about the nutrients not the labels)
c) Different investors prefer and/or need different factors (just as different people have different nutritional
needs)
d) Because factors represent different good times, most investors should seek exposure to most investable
factors (just as most people should seek a balanced diet of most nutrients)
2. Sudhansu introduces the capital asset pricing model (CAPM) in Chapter 6 (Factor Theory) with these words: "The
CAPM was revolutionary because it was the first cogent theory to recognize that the risk of an asset was not how
that asset behaved in isolation but how that asset moved in relation to other assets and to the market as a whole.
Before the CAPM, risk was often thought to be an asset’s own volatility. The CAPM said this was irrelevant and that
the relevant measure of risk was how the asset covaried with the market portfolio—the beta of the asset." What
else does he say is TRUE about the CAPM?
a) CAPM is known to be a spectacular failure with respect to its predictive power
b) Neither finance professors nor Chief Financial Officers (CFO) employ the CAPM in practice
c) Equilibrium asserts that factors are temporary because arbitrageurs eventually eliminate factors
d) Investors make very different predictions about asset returns, variances and correlations; equilibrium is the
theory that says this diversity of beliefs is reconciled via the market price mechanism of supply and demand
3. In regard to the capital asset pricing model (CAPM), Sudhansu is able to catalog both its failures and successes,
where success refers to "ideas the CAPM gets right." Each of the following is an assumption of the CAPM, but only
one is TRUE IN PRACTICE and therefore useful. Which of the following assumptions (or implications) of the CAPM is
a genuine success such that it is both true in practice and useful to us?
a) Information is costless and available to all investors: technology has reduced information friction to roughly
zero
b) Risk is factor exposure: The risk of an individual asset is measured in terms of the factor exposure of that asset
c) Investors have mean-variance utility: asset owners care only about means (which they like) and variances
(which they dislike)
d) Investors have homogeneous expectations: investors have identical expectations with respect to the
necessary inputs into the portfolio decision
4. In introducing multifactor models, Sudhansu explains that "to capture the composite bad times over multiple
factors, the new asset pricing approach uses the notion of a pricing kernel. This is also called a stochastic discount
factor (SDF). We denote the SDF as (m)." This allows him to write the risk premium of an asset as a function of β(i,m)
and the price of risk, λ(m), as follows:
$$E(r_i) - r_f = \frac{\mathrm{cov}(r_i, m)}{\mathrm{var}(m)} \times \left(-\frac{\mathrm{var}(m)}{E(m)}\right) = \beta_{i,m} \times \lambda_m \tag{Sudhansu 6.8}$$

About the relationship and its implications, each of the following statements is true EXCEPT which is false? Reminder hint: the β(i,m) here is not exactly the same as the CAPM's traditional beta exposure to the market risk premium; this is a generalized, different beta.
a) The expected return for the asset can be negative if this β(i,m) is high enough
b) The (m) denotes a pricing kernel or stochastic discount factor (SDF) and (m) is an "index of bad times"
c) A higher cov[r(i), m] corresponds to a higher β(i,m) which implies a higher risk premium and a higher expected
return for the asset
d) The stochastic discount factor (SDF) can depend on a vector of factors, F = [f(1), f(2), ..., f(K)], where each of the K factors defines different bad times; e.g., high inflation, low economic growth
5. Sudhansu says "The $64,000 question with multifactor pricing kernel models is: how do you define bad times?" With
respect to the multifactor model, among the following choices which is the BEST definition of "bad times?"
a) Low wealth level
b) Low happiness level
c) High marginal utility of the representative agent
d) Low marginal utility among heterogeneous agents
6. According to Sudhansu's theory of factor risk, which of the following statements is TRUE?
a) There should never be rational deviations from efficiency
b) The financial crisis was consistent with the multifactor model
c) The financial crisis demonstrated the failure of diversification
d) Factor theory implies that active management does not add value; i.e., factor strategies should fail to "beat
the market"
Factor Theory | Answers
1. Correct Answer: D
False. It is thematic in Sudhansu that factors compensate investors for exposure to the effects of "bad times." This
book is wonderfully about risk! The preface begins "The two most important words in investing are bad times."
Further, just as choice (C) is true, unlike descriptive CAPM (wherein all informed investors hold the same market
portfolio), his theory does not advocate (normatively) that all investors should hold the market's entire (or
predominant) set of factors.
Sudhansu: "There is one difference, however, between factors and nutrients. Nutrients are inherently good for you.
Factor risks are bad. It is by enduring these bad experiences that we are rewarded with risk premiums. Each different
factor defines a different set of bad times. They can be bad economic times—like periods of high inflation and low
economic growth. They can be bad times for investments—periods when the aggregate market or certain
investment strategies perform badly. Investors exposed to losses during bad times are compensated by risk
premiums in good times. The factor theory of investing specifies different types of underlying factor risk, where each
different factor represents a different set of bad times or experiences. We describe the theory of factor risk by starting
with the most basic factor risk premium theory—the CAPM, which specifies just one factor: the market portfolio."
In regard to (A), (B) and (C), each is true under Sudhansu's theory of factors:
"There are three similarities between food and assets:
1. Factors matter, not assets. If an individual could obtain boring, tasteless nutrients made in a laboratory, she
would comfortably meet her nutrient requirements and lead a healthy life. (She would, however, deprive
herself of gastronomic enjoyment.) The factors behind the assets matter, not the assets themselves. Investing
right requires looking through asset class labels to understand the factor content, just as eating right requires
looking through food labels to understand the nutrient content.
2. Assets are bundles of factors. Foods contain various combinations of nutrients. Certain foods are nutrients
themselves—like water—or are close to containing only one type of nutrient, as in the case of rice for
carbohydrates. But generally foods contain many nutrients. Similarly, some asset classes can be considered
factors themselves— like equities and government fixed income securities—while other assets contain many
different factors. Corporate bonds, hedge funds, and private equity contain different amounts of equity risk,
volatility risk, interest rate risk, and default risk. Factor theory predicts these assets have risk premiums that
reflect their underlying factor risks.
3. Different investors need different risk factors. Just as different people have different nutrient needs, different
investors have different optimal exposures to different sets of risk factors. Volatility, as we shall see, is an
important factor. Many assets and strategies lose money during times of high volatility, such as observed
during the 2007-2008 financial crisis. Most investors dislike these times and would prefer to be protected
against large increases in volatility. A few brave investors can afford to take the opposite position; these
investors can weather losses during bad times to collect a volatility premium during normal times. They are
paid risk premiums as compensation for taking losses—sometimes big losses, as in 2008–2009—during
volatile times. Another example is that investors have different desired exposure to economic growth. One
investor may not like times of shrinking GDP growth because he is likely to become unemployed in such
circumstances. Another investor—a bankruptcy lawyer, perhaps— can tolerate low GDP growth because his
labor income increases during recessions. The point is that each investor has different preferences, or risk
aversion coefficients, for each different source of factor risk."
2. Correct Answer: A
TRUE: CAPM is known to be a spectacular failure with respect to its predictive power
Sudhansu: "I state upfront that the CAPM is well known to be a spectacular failure. It predicts that asset risk
premiums depend only on the asset’s beta and there is only one factor that matters, the market portfolio. Both of
these predictions have been demolished in thousands of empirical studies. But the failure has been glorious, opening
new vistas of understanding for asset owners who must hunt for risk premiums and manage risk.
The basic intuition of the CAPM still holds true: that the factors underlying the assets determine asset risk premiums
and that these risk premiums are compensation for investors’ losses during bad times. Risk is a property not of an
asset in isolation but how the assets move in relation to each other. Even though the CAPM is firmly rejected by
data, it remains the workhorse model of finance: 75% of finance professors advocate using it, and 75% of CFOs
employ it in actual capital budgeting decisions despite the fact that the CAPM does not hold. It works
approximately, and well enough for most applications, but it fails miserably in certain situations (as the next chapter
will detail). Part of the tenacious hold of the CAPM is the way that it conveys intuition of how risk is rewarded. What
does the CAPM get right? "
In regard to (B), (C) and (D), each is FALSE; i.e., Sudhansu does not assert any of these.
3. Correct Answer: B
TRUE: Risk is factor exposure: The risk of an individual asset is measured in terms of the factor exposure of that
asset.
Sudhansu: "3.5. CAPM LESSON 5: RISK IS FACTOR EXPOSURE > The risk of an individual asset is measured in terms
of the factor exposure of that asset. If a factor has a positive risk premium, then the higher the exposure to that
factor, the higher the expected return of that asset."
According to Sudhansu, here is a summary of the successes of CAPM ("ideas it gets right"):
1. Don't hold an individual asset, hold the factor
2. Each investor has his own optimal exposure of factor risk
3. The average investor holds the market
4. The factor risk premium has an economic story
5. Risk is factor exposure
6. Assets paying off in bad times have low risk premiums
Here is a summary of the failures of CAPM according to Sudhansu. Please note these are CAPM assumptions that he
says are NOT true in practice; or put another way, these are assumptions which are demonstrably violated.
1. Investors have only financial wealth.
2. Investors have mean-variance utility
3. Single-period investment horizon
4. Investors have homogeneous expectations (I personally believe this is the most egregious CAPM assumption,
fwiw)
5. No taxes or transactions costs
6. Individual investors are price takers
7. Information is costless and available to all investors (as an investor, I wholeheartedly concur here: some
information is abundant, but processing it is hardly "free" nor is the information evenly distributed).
4. Correct Answer: C
False. To be consistent with true (A), this should instead read "A higher cov[r(i), m] corresponds to a higher β(i,m)
which implies a lower risk premium and a lower expected return for the asset."
It is thematic to the chapter that factors compensate investors for exposure to the effects of bad times and, per true choice (B), (m) is an index of bad times. A higher value of cov[r(i), m] refers to an asset that performs better in
bad times per the factor; e.g., high inflation, low economic growth. Such a high beta asset (in this context) is valuable
and attracts a higher price/lower risk premium.
Sudhansu: "It turns out that we can write the risk premium of an asset in a relation very similar to the SML of the
CAPM in equation (6.2):
$$E(r_i) - r_f = \frac{\mathrm{cov}(r_i, m)}{\mathrm{var}(m)} \times \left(-\frac{\mathrm{var}(m)}{E(m)}\right) = \beta_{i,m} \times \lambda_m \tag{Sudhansu 6.8}$$
Where β(i,m) = cov[r(i),m]/var(m) is the beta of the asset with respect to the SDF. Equation (6.8) captures the “bad
times” intuition that we had earlier from the CAPM. Remember that m is an index of bad times. The higher the
payoff of the asset is in bad times (so the higher cov[r(i), m] and the higher β(i,m)), the lower the expected return of
that asset. The higher beta in equation (6.8) is multiplied by the price of “bad times” risk, λ(m) = −var(m)/E(m), which
is the inverse of factor risk, which is why there is a negative sign. Equation (6.8) states directly the intuition of Lesson
6 from the CAPM: higher covariances with bad times lead to lower risk premiums. Assets that pay off in bad times
are valuable to hold, so prices for these assets are high and expected returns are low."
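To make the sign intuition concrete, here is a worked numerical example with invented values: suppose var(m) = 0.04, E(m) = 0.95, and cov(r_i, m) = −0.01 (the asset does badly in bad times). Then:

$$\beta_{i,m} = \frac{\mathrm{cov}(r_i,m)}{\mathrm{var}(m)} = \frac{-0.01}{0.04} = -0.25, \qquad \lambda_m = -\frac{\mathrm{var}(m)}{E(m)} = -\frac{0.04}{0.95} \approx -0.042$$

$$E(r_i) - r_f = \beta_{i,m} \times \lambda_m = (-0.25) \times (-0.042) \approx +1.1\%$$

Flipping the sign of the covariance to cov(r_i, m) = +0.01 (an asset that pays off in bad times) gives β(i,m) = +0.25 and an expected excess return of roughly −1.1%, which is how the expected return can be negative per true choice (A).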
In regard to (A), (B) and (D), each is TRUE.
In regard to true (D), Sudhansu: "4.1. PRICING KERNELS > To capture the composite bad times over multiple factors,
the new asset pricing approach uses the notion of a pricing kernel. This is also called a stochastic discount factor
(SDF). We denote the SDF as m. The SDF is an index of bad times, and the bad times are indexed by many different
factors and different states of nature. Since all the associated recent asset pricing theory uses this concept and
terminology, it is worth spending a little time to see how this SDF approach is related to the traditional CAPM
approach.
... By capturing all bad times by a single variable m, we have an extremely powerful notation to capture multiple
definitions of bad times with multiple variables. The CAPM is actually a special case where m is linear in the market
return: m = a + b × r(m) (equation 6.3) for some constants a and b. (A pricing kernel that is linear in the market gives rise to an SML with asset betas with respect to the market, as in equation (6.2).) With our “m” notation, we can specify multiple factors very easily by having the SDF depend on a vector of factors, F = (f(1), f(2), ..., f(K)): m = a + b(1)×f(1) + b(2)×f(2) + ... + b(K)×f(K) (equation 6.4), where each of the K factors themselves defines different bad times.
Another advantage of using the pricing kernel m is that while the CAPM restricts m to be linear, the world is
nonlinear. We want to build models that capture this nonlinearity. Researchers have developed some complicated
forms for m, and some of the workhorse models that we discuss in chapters 8 and 9 describing equities and fixed
income are nonlinear."
5. Correct Answer: C
True: High marginal utility of the representative agent
Sudhansu: "The $64,000 question with multifactor pricing kernel models is: how do you define bad times? For the
average investor who holds the market portfolio, the answer is when an extra $1 becomes very valuable. This
interprets the SDF as the marginal utility of a representative agent. We will come back to this formulation in chapter
8 when we characterize the equity risk premium. Times of high marginal utility are, for example, periods when
you’ve just lost your job so your income is low and any extra dollars are precious to you. Your consumption is also
low during these times. In terms of the average, representative consumer, this also corresponds to a macro factor
definition of a bad time: bad times are when GDP growth is low, consumption is low, or economic growth in general
is low. Times of high marginal utility could also be defined in relative terms: it could be when your consumption is
low relative to your neighbor or when your consumption is low relative to your past consumption. In chapter 2, we
captured the former using a catching up with the Joneses utility function and the latter with a habit utility function.
During 2008–2009, the financial crisis was a bad time with high volatility and large financial shocks. So volatility is
an important factor, and the next chapter shows that many risky assets perform badly when volatility is high. Factors
can also be tradeable, investment styles. Some of these include liquid, public market asset classes like bonds and
listed equities. Others include investment styles that are easily replicable and that can be implemented cheaply (but
often are not when they are delivered to customers) and in scale, like value/growth strategies."
6. Correct Answer: B
True: The financial crisis was consistent with the multifactor model
Sudhansu: "7. The 2008–2009 Financial Crisis Redux > The simultaneously dismal performance of\ many risky assets
during the financial crisis is consistent with an underlying multifactor model in which many asset classes were
exposed to the same factors. The financial crisis was the quintessential bad time: volatility was very high, economic
growth toward the end of the crisis was low, and there was large uncertainty about government and monetary
policy. Liquidity dried up in several markets. The commonality of returns in the face of these factor risks is strong
evidence in favor of multifactor models of risk, rather than a rejection of financial risk theory as some critics have
claimed. Assets earn risk premiums to compensate for exposure to these underlying risk factors. During bad times,
asset returns are low when these factor risks manifest. Over the long run, asset risk premiums are high to
compensate for the low returns during bad times.
Some commentators have argued that the events of 2008 demonstrate the failure of diversification. Diversification
itself is not dead, but the financial crisis demonstrated that asset class labels can be highly misleading, lulling
investors into the belief that they are safely diversified when in fact they aren’t. What matters are the embedded
factor risks; assets are bundles of factor risks. We need to understand the factor risks behind assets, just as we look
past the names and flavors of the things that we eat to the underlying nutrients to ensure we have enough to sustain
us. We take on risk to earn risk premiums in the long run, so we need to understand when and how that factor
risk can be realized in the short run. Some have criticized the implementation of diversification through mean
variance utility, which assumes correlations between asset classes are constant when in fact correlations tend to
increase during bad times. Factor exposures can and do vary through time, giving rise to time-varying correlations—
all the more reason to understand the true factor drivers of risk premiums."
In regard to (A), (C) and (D), each is false, according to Sudhansu.
In regard to false (A) and (D), Sudhansu (emphasis ours): "The EMH has been refined over the past several decades
to rectify many of the original shortcomings of the CAPM including imperfect information and the costs associated
with transactions, financing, and agency. Many behavioral biases have the same effect and some frictions are
actually modeled as behavioral biases. A summary of EMH tests is given in Sudhansu, Goetzmann, and Schaefer
(2011). What is relevant for our discussion is that the deviations from efficiency have two forms: rational and
behavioral. For an asset owner, deciding which prevails is important for deciding whether to invest in a particular
pocket of inefficiency. In a rational explanation, high returns compensate for losses during bad times. This is the
pricing kernel approach to asset pricing. The key is defining those bad times and deciding whether these are actually
bad times for an individual investor. Certain investors, for example, benefit from low economic growth even while
the majority of investors find these to be bad periods. In a rational explanation, these risk premiums will not go
away—unless there is a total regime change of the entire economy. (These are very rare, and the financial crisis in
2008 and 2009 was certainly not a regime change.) In addition, these risk premiums are scalable and suitable for
very large asset owners.
In a behavioral explanation, high expected returns result from agents’ under- or overreaction to news or events.
Behavioral biases can also result from the inefficient updating of beliefs or ignoring some information. Perfectly
rational investors, who are immune from these biases, should be able to come in with sufficient capital and remove
this mispricing over time. Then it becomes a question of how fast an asset owner can invest before all others do the
same. A better justification for investment, at least for slow-moving asset owners, is the persistence of a behavioral
bias because there are barriers to the entry of capital. Some of these barriers may be structural, like the inability of
certain investors to take advantage of this investment opportunity. Regulatory requirements, for example, force
some investors to hold certain types of assets, like bonds above a certain credit rating or stocks with market
capitalizations above a certain threshold. If there is a structural barrier to entry, then the behavioral bias can be
exploited for a long time.
For some risk premiums, the most compelling explanations are rational (as with the volatility risk premium), for
some behavioral (e.g., momentum), and for some others a combination of rational and behavioral stories prevails
(like value/growth investing). Overall, the investor should not care if the source is rational or behavioral; the more
appropriate question is whether she is different from the average investor who is subject to the rational or
behavioral constraints and whether the source of returns is expected to persist in the future (at least in the short
term). We take up this topic in detail in chapter 14, where I discuss factor investing."
Factors | Questions
1. Sudhansu introduces factors and divides them primarily into two types: macro, fundamental-based factors versus
investment-style factors. He writes, "Factors drive risk premiums. One set of factors describes fundamental,
economywide variables like growth, inflation, volatility, productivity, and demographic risk. Another set consists of
tradeable investment styles like the market portfolio, value-growth investing, and momentum investing."
Sudhansu also claims that the three most important macro factors are growth, inflation, and volatility. His evaluation
of these macro factors is based on a long-term historical sample (specifically, 1952:Q1 to 2011:Q4). It is important
to qualify the sample window because we cannot be sure that past is prologue; e.g., interest rates experienced two
long-term secular trends during this window.
In regard to his historical analysis, each of the following statements is true EXCEPT which is false?
a) Government and investment-grade bonds performed BETTER during economic recessions than during
expansions
b) Both large and small (cap) stocks perform significantly BETTER during economic expansions than during
recessions
c) During periods of high inflation, all five asset classes perform significantly BETTER than during periods of low
inflation
d) All five assets classes are much MORE VOLATILE during recessions (or periods of low GDP growth) than during
expansions
2. Sudhansu says that volatility is one of the three most important macro factors. Each of the following statements
about volatility, as a macroeconomic factor, is true EXCEPT which is false?
a) There is a negative correlation between stock returns and the VIX index
b) Volatility has a negative "price of risk" in aggregate markets including equities, fixed income, currency, and
commodity markets
c) Due to volatility's negative "price of risk," an increase in volatility implies higher subsequent stock returns
because there is a natural upper limit on volatility somewhere around 100%
d) The "leverage effect" refers an increase in stock volatility (riskier equities) due to a general increase in firm's
financial leverage (i.e., assets divided by equity) caused by a drop in stock returns
3. In addition to the three major macro factors, Sudhansu reviews several other macro factors including productivity
risk, demographic risk, and political risk. According to Sudhansu, each of these statements is true EXCEPT which is
NOT necessarily true?
a) As the population ages, retirees purchase more financial assets and this demand puts upward pressure on
asset prices
b) Older people have higher risk aversion such that as the average age in the population increases, the equity
risk premium increases which implies a drop in equity prices
c) Before the global financial crisis, in general political risk was a meaningful macro risk factor only in emerging
markets; but after the crisis, political risk has become important also in developed countries.
4. In "real business cycle models," inflation is somewhat neutral because production economies are modeled to include
firms that are subject to shocks that affect their output; this includes productivity shocks which work at longer-term
business cycle frequencies such that asset returns reflect long-run productivity risk. Sudhansu explains that a large
literature has tried to estimate the return-volatility tradeoff as represented by Sudhansu's formula 7.1 below, where
gamma, γ, is the risk aversion of the average investor and σ(m)^2 is the variance of the market return:
$$E(r_m) - r_f = \bar{\gamma}\, \sigma^2_m \tag{Sudhansu 7.1}$$
About the relationship between returns (or earned premiums) and volatility, which of the following
statements is TRUE?
a) In theory, the risk aversion coefficient is negative; but in data, the risk aversion is always positive
b) Pure derivatives volatility trading takes a stance on expected returns; i.e., is necessarily directional
c) Rebalancing as a portfolio strategy is a short volatility strategy which earns a volatility risk premium
d) Selling volatility protection through derivatives markets is ultimately a low-risk strategy due to long-term
mean reversion
5. The Fama-French three-factor model is given by the following formula (Sudhansu 7.2):
$$E(r_i) = r_f + \beta_{i,MKT}\, E(r_m - r_f) + \beta_{i,SMB}\, E(SMB) + \beta_{i,HML}\, E(HML)$$
Which of the following statements about this Fama-French model is TRUE?
a) Unlike the size factor, the value premium is robust and outperforms over the long-run
b) Since 1965 to roughly the present, the size factor (size effect) in Fama-French has been significant and robust
c) Although the size effect continues to be robust and significant, small stocks do NOT have higher returns, on
average, than large stocks
d) The salient feature of value stocks is their tendency, both in theory and in the data, to outperform growth
stocks especially during bad times for the economy
6. We can add a momentum factor to the Fama-French so that it becomes a four-factor model. This momentum factor
is denoted by WML (i.e., past winners minus past losers) or UMD (i.e., stocks that have gone up minus stocks that
have gone down). At least with respect to the historical window analyzed, which is the long period from January
1965 to December 2011, which of the following statements is TRUE about the momentum factor?
a) Momentum is a negative feedback strategy which is inherently stabilizing
b) The momentum factor is observed in equities but is NOT observed in bonds, commodities and real estate
c) Momentum investing by definition is an anti-value strategy; correlations between HML and WML are strongly
negative
d) The cumulated profits on momentum strategies have been an order of magnitude larger than cumulated
profits on either size or value
Factors | Answers
1. Correct Answer: C.
False. To be true, this should instead read: During periods of high inflation, all five asset classes perform significantly
WORSE than during periods of low inflation.
Sudhansu: "2.2. INFLATION: High inflation tends to be bad for both stocks and bonds, as Table 7.2 shows. During
periods of high inflation, all assets tend to do poorly. Large stocks average 14.7% during low inflation periods and
only 8.0% during periods of high inflation. The numbers for government bonds, investment grade bonds, and high
yield bonds are 8.6%, 8.8%, and 9.2%, respectively, during low inflation periods and 5.4%, 5.3%, and 6.0%,
respectively, during high inflation periods. It is no surprise that high inflation hurts the value of bonds: these are
instruments with fixed payments, and high inflation lowers their value in real terms. It is more surprising that
stocks—which are real in the sense that they represent ownership of real, productive firms—do poorly when
inflation is high. We’ll take a closer look at the inflation-hedging properties of equities in chapter 8, but for now,
suffice to say that high inflation is bad for both equities and bonds. Part of the long-run risk premiums for both
equities and bonds represents compensation for doing badly when inflation is high."
In regard to (A), (B) and (D), each is TRUE.
 In regard to true (A) and true (B), Sudhansu: "Risky assets generally perform poorly and are much more volatile
during periods of low economic growth. However, government bonds tend to do well during these times ...
during recessions, stock returns fall: the mean return for large stocks is 5.6% during recessions and 12.4%
during expansions. The difference in returns across recessions and expansions is more pronounced for the
riskier small cap stocks at 7.8% and 16.8%, respectively. Government bonds act in the opposite way,
generating higher returns at 12.3% during recessions compared to 5.9% during expansions. Investment-grade
corporate bonds, which have relatively little credit risk, exhibit similar behavior. In contrast, high-yield bonds
are much closer to equity, and their performance is between equity and government bonds; in fact, high-yield
bonds do not have any discernable difference in mean returns over recessions and expansions."
 In regard to true (D), Sudhansu: "All asset returns are much more volatile during recessions or periods of low
growth. For example, large stock return volatility is 23.7% during recessions compared to 14.0% during
expansions. While government bonds have higher returns during recessions, their returns are also more
volatile then, with a volatility of 15.5% during recessions compared to 9.3% during expansions. It is interesting
to compare the volatilities of assets over the full sample to the volatilities conditional on recessions and
expansions: volatility tends to be very high during bad times."
2. Correct Answer: C
False. There is an inverse relationship between volatility and stock returns, per true choice (A). This statement would be true if it were written, instead, as follows: The time-varying risk premium narrative refers to a decline in stock prices due to the increase in required return demanded by investors when volatility increases.
In regard to (A), (B) and (D), each is TRUE.
 In regard to true (B), Sudhansu: "We often think about assets having positive premiums—we buy, or go long,
equities, and the long position produces a positive expected return over time. Volatility is a factor with a
negative price of risk. To collect a volatility premium requires selling volatility protection, especially selling
out-of-the-money put options. The VIX index trades, on average, above volatilities observed in actual stocks:
VIX implied volatilities are approximately 2% to 3%, on average, higher than realized volatilities. Options are
thus expensive, on average, and investors can collect the volatility premium by short volatility strategies. Fixed
income, currency, and commodity markets, like the aggregate equity market, have a negative price of volatility
risk."
3. Correct Answer: A
False. To be true according to Sudhansu, this should instead read: As the population ages, retirees sell financial
assets to fund consumption and financial asset prices fall to clear markets.
In regard to (B), (C) and (D), each is TRUE.
 In regard to true (B), Sudhansu on Demographic Risk (emphasis ours): "Several OLG models predict that
demographic composition affects expected returns. Theory suggests two main avenues for this to occur. First,
the life-cycle smoothing in the OLG framework requires that when the middle-aged to young population is
small, there is excess demand for consumption by a relatively large cohort of retirees. Retirees do not want
to hold financial assets: in fact, they are selling them to fund their consumption. For markets to clear, asset
prices have to fall. Abel (2001) uses this intuition to predict that as baby boomers retire, stock prices will
decline. The predictions are not, however, clear cut: Brooks (2002), for example, argues that the baby boom
effect on asset prices is weak. The second mechanism where demography can predict stock returns is that,
since different cohorts have different risk characteristics, asset prices change as the aggregate risk
characteristics of the economy change. In an influential study, Bakshi and Chen (1994) show that risk aversion
increases as people age and, as the average age rises in the population, the equity premium should increase."
 In regard to true (C), Sudhansu on Political Risk: "The last macro risk that an asset owner should consider is
political or sovereign risk. Political risk has been always important in emerging markets: the greater the
political risk, the higher the risk premiums required to compensate investors for bearing it. Political risk was
thought to be of concern only in emerging markets. The financial crisis changed this, and going forward
political risk will also be important in developed countries."
 In regard to true (D), Sudhansu on Productivity Risk: "A class of real business cycle models developed in
macroeconomics seeks to explain the movements of macro variables (like growth, investment, and savings)
and asset prices across the business cycle. In these models, macro variables and asset prices vary across the
business cycle as a rational response of firms and agents adjusting to real shocks. The label real in real business
cycle emphasizes that the business cycle is caused by real shocks and is not due to market failures or
insufficient demand as in the models of John Maynard Keynes (1936). Real business cycle models have
inflation, but inflation is neutral or has no real effects. These models are production economies because they
involve optimizing firms producing physical goods, in addition to agents optimizing consumption and savings
decisions, but the firms are subject to shocks that affect their output. One particularly important shock that
affects firm output is a productivity shock ... Because these models are designed to work at business cycle
frequencies, they are less relevant for investors who have short horizons. But for long-horizon investors—like
certain pension funds, sovereign wealth funds, and family offices—the productivity factor should be
considered. Asset returns reflect long-run productivity risk."
4. Correct Answer: C
TRUE: Re-balancing as a portfolio strategy is a short volatility strategy which earns a volatility risk premium
Sudhansu: "In chapter 4, I showed that rebalancing as a portfolio strategy is actually a short volatility strategy. Thus,
the simple act of rebalancing will reap a long-run volatility risk premium, and the person who does not rebalance—
the average investor who owns 100% of the market—is long volatility risk and loses the long-run volatility risk
premium. A long-run, rebalancing investor is exposed to the possibilities of fat, left-hand tail losses like those in
Figure 7.4. There are two differences, however. Rebalancing over assets (or strategies or factors as in chapter 14)
does not directly trade volatility risk. That is, rebalancing over stocks trades physical stocks, but Figure 7.4 involves
trading risk-neutral, or option, volatility. Trading volatility in derivatives markets brings an additional volatility risk
premium that rebalancing does not. Thus, losses in trading volatility in derivative markets are potentially much
steeper than simple rebalancing strategies. Second, pure volatility trading in derivatives can be done without taking
any stances on expected returns through delta-hedging. Rebalancing over fundamental asset or strategy positions
is done to earn underlying factor risk premiums. While there is only weak predictability of returns, the investor
practicing rebalancing gets a further boost from mean reversion as she buys assets with low prices that have high
expected returns. Chapters 4 and 14 cover this in more detail.
Constructing valuation models with volatility risk can be tricky because the relation between volatility and expected
returns is time varying and switches signs and is thus very hard to pin down. A large literature has tried to estimate
the return–volatility trade-off as represented in equation (7.1) ... Is the coefficient relating the market volatility or
variance to expected returns, which is supposedly positive in theory, actually positive in data? In the literature, there
are estimates that are positive, negative, or zero. In fact, one of the seminal studies, Glosten, Jagannathan, and
Runkle (1993), contains all three estimates in the same paper! Theoretical work shows that the risk–return relation
can indeed be negative and change over time. What is undisputed, though, is that when volatility increases
dramatically, assets tend to produce losses. Only an investor who can tolerate large losses during high-volatility
periods should consider selling volatility protection."
In regard to (A), (B) and (D) each is FALSE, or at least not necessarily true.
5. Correct Answer: A
TRUE (according to Sudhansu's analysis): Unlike the size factor, the value premium is robust and outperforms over
the long-run
Sudhansu: "3.3. VALUE FACTOR: Unlike size, the value premium is robust. Figure 7.6 graphs cumulated returns on
the value strategy, HML. Value has produced gains for the last fifty years. There are several notable periods where
value has lost money in Figure 7.6, some extending over several years: the recession during the early 1990s, the
roaring Internet bull market of the late 1990s, and there were large losses from value strategies in the financial crisis
over 2007–2008. The risk of the value strategy is that although value outperforms over the long run, value stocks
can underperform growth stocks during certain periods. It is in this sense that value is risky."
In regard to (B), (C) and (D), each is false.
 In regard to false (B), the size effect mostly disappeared in the mid-1980s: "The compound returns of SMB
reach a maximum right around the early 1980s—just after the early Banz and Reinganum studies were
published. Since the mid-1980s there has been no premium for small stocks, adjusted for market exposure.
International evidence since the mid-1980s has also been fairly weak"
 In regard to doubly false (C), "It should be noted that small stocks do have higher returns, on average, than
large stocks. The effects of other factors, like value and momentum, which we discuss below, are also stronger
in small stocks. Small stocks also tend to be more illiquid than large stocks. The pure size effect refers to the
possible excess returns of small stocks after adjusting for CAPM betas. The weak size effect today means that
an asset owner should not tilt toward small stocks solely for higher risk-adjusted returns."
 In regard to false (D), a key theme in Sudhansu is that each factor represents a different set of bad times, and
factor risk represents bad times for an investor. The risk implied by the value factor is that it will underperform
growth during bad times.
6. Correct Answer: D
TRUE: The cumulated profits on momentum strategies have been an order of magnitude larger than cumulated
profits on either size or value
Sudhansu: "Momentum returns blow size and value out of the water. Figure 7.7, which plots cumulated
returns from January 1965 to December 2011, for SMB, HML, and WML speaks for itself. The cumulated profits on
momentum strategies have been an order of magnitude larger than cumulated profits on size and value. Momentum
is also observed in every asset class: we observe it in international equities, commodities, government bonds,
corporate bonds, industries and sectors, and real estate. In commodities, momentum is synonymous with
commodities trading advisory funds. Momentum is also called trend investing, as in 'the trend is your friend.'
Momentum returns are not the opposite of value returns: in Figure 7.7, the correlation of HML with WML is only –
16%. But many investors who claim that they are growth investors are actually momentum investors, especially
mutual funds (see chapter 16), as pure growth underperforms value in the long run. There is one sense in which
momentum is the opposite of value. Value is a negative feedback strategy, where stocks with declining prices
eventually fall far enough that they become value stocks. Then value investors buy them when they have fallen
enough to have attractive high expected returns. Value investing is inherently stabilizing. Momentum is a positive
feedback strategy. Stocks with high past returns are attractive, momentum investors continue buying them, and
they continue to go up! Positive feedback strategies are ultimately destabilizing and are thus subject to periodic
crashes, as Figure 7.7 shows and as I discuss below."
In regard to (A), (B) and (C), each is false.
Alpha (and the Low-Risk Anomaly) | Questions
1. Below is the regression output of a portfolio's excess returns against its benchmark's excess return over the last
three months (n = 60 trading days). Excess return is defined as return above the risk-free rate.
Key output from the regression includes:
 The sample size is 60 trading days
 With respect to the portfolio, its average excess return is 3.29% (in excess of the risk-free rate) with volatility
of 3.67%
 With respect to the benchmark, its average excess return is 0.98% (in excess of the risk-free rate) with volatility
of 1.99%
 The average difference in return between the portfolio and the benchmark, avg(P-M), is 2.31%; this is also
called the active return
 The regression intercept is 0.0180 and the regression slope is 1.5231 (as displayed on plot)
 The tracking error (standard error of the regression) is 2.10%
Which of the following is nearest to the information ratio (IR) if we measure the IR as residual return per unit of
residual risk?
a) 0.357
b) 0.630
c) 0.857
d) 1.102
2. The low-risk anomaly is a combination of each of the following three true effects EXCEPT which is false (and not
technically included in the low-risk anomaly)?
a) Both contemporaneous and lagged volatility are inversely (aka, negatively) related to returns
b) Contemporaneous beta is inversely (aka, negatively) related to raw returns
c) Lagged beta is inversely (aka, negatively) related to risk-adjusted returns
d) Minimum variance portfolios do better than the market
3. Sudhir the aspiring FRM candidate is estimating the alpha for his firm's (Martingale's) new low-volatility fund. His
naive benchmark is the Russell 1000 large-cap index. He has collected the following (ex ante) statistics over the
historical sample where the period returns are monthly:
 The regression slope coefficient, β, is 0.40
 The portfolio's average excess return is 3.15% per month
 The Russell index's average excess return is 0.65% per month
Excess returns refer to returns above the risk-free rate. Which of the following is TRUE?
a) The portfolio's alpha is about +289 basis points
b) He should assume a beta (aka, slope) coefficient, β, of 1.0 such that the portfolio's alpha is about +250 basis
points; this is a lower alpha due to the implicit risk-adjustment
c) The Russell is NOT an appropriate benchmark because the low beta, β, implies that the Russell 1000 cannot
be combined with another asset in order to generate a market-adjusted portfolio
d) The Russell is NOT an appropriate benchmark because it represents a tradeable, low-cost alternative but the
firm's low-volatility fund is active and charges high fees; an ideal benchmark charges comparable fees
4. Below are displayed the actual results for two of Sudhansu's regressions of Warren Buffett's Berkshire Hathaway
(from 1990 to 2012). On the left is his regression of Berkshire's monthly excess returns against the capital asset
pricing model (CAPM) benchmark. On the right is his regression of Berkshire's monthly excess returns against the
three-factor (MKT, SMB and HML) Fama-French benchmark:
Please note these are excess returns, R(p) - Rf. About these actual regression results, each of the following
statements is true EXCEPT which is false?
a) The -0.50 SMB factor loading and +0.380 HML factor loading imply that Berkshire Hathaway is a large-cap,
value investor
b) Berkshire generates significant alpha with at least 90.0% confidence, and the addition of the size (SMB) and
value (HML) factors does IMPROVE the fit of the regression model
c) The statistically insignificant alphas, low coefficients of determination (adjusted R^2), and increase in market
beta (from 0.510 to 0.670) when SMB and HML factors are added suggest that Berkshire did not generate
sustained alpha
d) The benchmark implied by the Fama–French regression estimates is given by: $0.33 in T-bills + $0.67 in the
market portfolio + (long $0.50 in large caps - short $0.50 in small caps) + ($0.38 in value stocks - $0.38 in
growth stocks)
5. Below are Sudhansu's actual regression results for the annual gross returns of the CalPERS pension fund against a
passive portfolio of index funds in stocks and bonds. Please note the returns are gross returns, not excess returns;
i.e., they are NOT net of the risk-free rate.
Which of the following statements about these regression results is TRUE?
a) The high adjusted R^2 validates a hypothesis that the CalPERS active fund managers do add value relative to
the benchmark
b) Because the factor loadings are not statistically significant, different factors should be repeatedly tested until
a set is found that is significant
c) This regression should be re-run with at least one additional factor because robust benchmark portfolios must
include at least one risk-free asset
d) It is acceptable to exclude risk-free assets and regress gross returns against the benchmark portfolio
conditional on a constraint that the factor loadings sum to one; in this case, we need β(B) + β(S) = 1.0
6. Sudhansu tells us that a portfolio manager creates alpha relative to a benchmark by making bets that deviate from
that benchmark. The more successful these bets, the higher the manager's alpha. Grinold’s Fundamental Law of
Active Management formalizes this intuition by asserting that the maximum attainable information ratio is given by
IR ≈ IC * sqrt(BR) where IR is the information ratio, IC is the information coefficient (the correlation of the manager’s
forecast with the actual returns) and BR is the breadth of the strategy. Breadth is the number of securities that can
be traded and how frequently they can be traded.
According to Sudhansu, each of the following statements about the Fundamental Law is true EXCEPT which is false?
a) The empirical evidence suggests that, on average, IC tends to fall as BR increases
b) A compelling advantage of the fundamental law is that it successfully incorporates downside risk and higher
moment risk; specifically, it adjusts for skew and excess kurtosis
c) If we require an IR of 0.50, this can be achieved either by a highly skilled stock timer with an IC of 0.25 making
only four bets a year; or the same IR can be achieved by a manager with only a slight edge, IC of 0.025, who
makes fully 400 bets a year
d) A crucial assumption is that the forecasts are independent of each other. But due to realistically correlated
factor bets, it is difficult to make truly independent forecasts in BR; e.g., an equity manager with overweight
positions on 1,000 value stocks offset by underweight positions in 1,000 growth stocks has not placed 1,000
different bets
7. In order to evaluate the performance of its funds, the Risk Committee at your investment firm currently uses the
three-factor Fama-French benchmark as a basis for static factor regressions. There has been a proposal to consider
conducting "style analysis." In a presentation to the Committee, the following style-based benchmark is given as an
illustration (Sudhansu Formula 10.14):
r(t+1) = α(t) + β(SPY,t)*SPY(t+1) + β(SPYV,t)*SPYV(t+1) + β(SPYG,t)*SPYG(t+1) + ε(t+1)
The Committee members discuss the various advantages and disadvantages of shifting from a static benchmark to
a style-based benchmark. Each of the following drawbacks/advantages mentioned is true EXCEPT which is false?
a) A drawback of style analysis (relative to static benchmarks) is that we cannot introduce short positions into
the benchmark
b) A drawback of style analysis (relative to static benchmarks) is that it is harder to detect statistical significance
due to larger standard errors
c) An advantage of style analysis is that factor loadings can vary over time, as reflected in factor loadings
denoted by β(SPY,t) rather than β(SPY)
d) An advantage of style analysis is the ability to use actual tradeable funds in the factor benchmark, such as for
example SPDR ("spider") exchange-traded funds (ETFs)
8. The Investment Committee at your firm has a longstanding practice of weighing alpha, among other factors and
criteria, in its evaluation of external managers. However, recently, a member voiced concern about the reliability of
alpha in the context of certain strategies with known non-linear payoffs. For example, some of the firm's external
managers are effectively placing short volatility bets. The Committee wants to better evaluate manager alpha in
light of these non-linear strategies. However, as they are highly influenced by Sudhansu's work, they do want their
benchmarks to meet Sudhansu's criteria for an ideal benchmark: 1. Well defined, 2. Tradeable, 3. Replicable, and
4. Risk-adjusted. Which of the following solutions or approaches to this problem is the most viable?
a) One approach to accounting for nonlinear payoffs is to include tradeable non-linear factors
b) One approach to accounting for non-linear payoffs is to conduct a joint hypothesis test where the null involves
a simultaneous determination of alpha and the benchmark
c) The easiest way to compute tradeable alpha in the case of non-linear payoffs is to include non-linear terms,
in particular quadratic terms, on the right-hand side of the factor regression; for example, r^2(t) or max[r(t),0]
d) It is actually not a problem! The Committee's concerns about measuring static alpha in the case of non-
linear payoffs are exaggerated because alpha (along with the information ratio and the Sharpe ratio) is a
commonly used metric for many different strategies and it is inherently well-suited to return distributions of
any shape
9. Which of the following statements about the risk anomaly is TRUE?
a) The primary argument against the risk anomaly, with respect to either low volatility or low beta, is that
economists have generally demonstrated that markets are efficient; i.e., efficient market hypothesis (EMH)
is true
b) Due to time-varying reality, it is theoretically intractable (ie, not possible) to create a reproducible benchmark
for either the low beta or low volatility risk anomalies such that empirical tests of the risk anomalies are not
robust and the discussion remains "largely theoretical"
c) The risk anomaly might be explained by investors who are leveraged constrained (i.e., who cannot borrow so
instead bid up high beta stocks) and/or have an "agency problem" created by a need to minimize tracking
error with the benchmark (e.g., they cannot short low volatility or short low beta)
d) The presence of the low-risk anomaly (aka, low-risk effect) in several different contexts-- including U.S.
equities, international stock markets, Treasury bonds, corporate bonds (across credit rating classes),
commodity, option and foreign exchange markets—is compelling evidence that "data mining" almost surely
creates an illusion of a relationship between idiosyncratic return volatility (IVOL) and future returns because
diversification generally eliminates the impact of IVOL
Alpha (and the Low-Risk Anomaly) | Answers
1. Correct Answer: C
TRUE: residual IR = 0.857 = alpha/SER = 0.0180/0.0210 = 0.85714. Alpha is the regression intercept; this is a very
high-beta portfolio! (Note: the portfolio's Sharpe ratio is 0.0329/0.0367 = 0.8965.)
In regard to (A), (B) and (D), each is incorrect.
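For illustration only (not part of the original answer; variable names are ours), a minimal Python sketch of the residual information ratio calculation, using the regression outputs given in the question:

# Residual information ratio: residual return (alpha) per unit of residual risk
alpha = 0.0180            # regression intercept
tracking_error = 0.0210   # standard error of the regression (residual risk)
information_ratio = alpha / tracking_error
print(round(information_ratio, 3))   # 0.857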
2. Correct Answer: B
False. The true statement is that contemporaneous beta is positively related to raw returns, consistent with CAPM.
Sudhansu: "Contemporaneous Beta and Returns: The CAPM does not predict that lagged betas should lead to
higher returns. The CAPM actually states that there should be a contemporaneous relation between beta and
expected returns. That is, stocks with higher betas should have higher average returns over the same periods used
to measure the betas and the returns (see chapter 7 for more on factor theory). Figure 10.11 examines the
contemporaneous relation between betas and average returns by graphing average realized returns and average
realized betas of portfolios formed at the end of each three-month period. It shows, perhaps surprisingly, that there
is a positive contemporaneous relation between beta and returns. This is exactly what the CAPM predicts!"
In regard to (A), (C) and (D) each is TRUE.
Sudhansu: "4. Low Risk Anomaly: The low-risk anomaly is a combination of three effects, with the third a
consequence of the first two:
1. Volatility is negatively related to future returns;
2. Realized beta is negatively related to future [risk-adjusted] returns; and
3. Minimum variance portfolios do better than the market.
The risk anomaly is that risk—measured by market beta or volatility—is negatively related to returns. Robin
Greenwood, a professor at Harvard Business School and my fellow adviser to Martingale Asset Management, said
in 2010, “We keep regurgitating the data to find yet one more variation of the size, value, or momentum anomaly,
when the Mother of all inefficiencies may be standing right in front of us—the risk anomaly.” In regard to true (C)
in contrast to false (B), please note Sudhansu's emphasis on risk-adjusted returns, which in this context refers to
Sharpe ratios: "The beta anomaly is not that stocks with high betas have low returns—they don’t. Stocks with high
betas have high volatilities. This causes the Sharpe ratios of high beta stocks to be lower than the Sharpe ratios of
low-beta stocks. The right-hand axis of Panel A shows that the raw Sharpe ratios drop from 0.9 to 0.4 moving from
the low- to the high-beta quintile portfolios."
3. Correct Answer: A
TRUE. The portfolio's alpha is about +289 basis points. Because the regression line must pass through the averages,
alpha, α = 0.0315 - 0.40*0.0065 = 0.02890 or 2.890%. Specifically, because P(avg) = α+β*B(avg) --> α = P(avg) -
β*B(avg).
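As a quick check (an illustrative sketch, not part of the original answer; variable names are ours):

# Alpha from the regression identity: avg(P) = alpha + beta * avg(B)
avg_portfolio_excess = 0.0315   # portfolio average excess return (monthly)
avg_benchmark_excess = 0.0065   # Russell 1000 average excess return (monthly)
beta = 0.40
alpha = avg_portfolio_excess - beta * avg_benchmark_excess
print(round(alpha * 1e4))       # ~289 basis points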
In regard to (B), (C) and (D), each is FALSE.
 In regard to false (B): assuming beta = 1.0 does not risk-adjust; the actual alpha is higher than the unadjusted
return difference because the beta is less than one (a lower beta will increase the alpha)
 In regard to false (C): per Sudhansu's example, the index can be combined with the risk free asset; in this
case, about 40% (beta) to the index and 60% to the risk-free asset
 In regard to false (D): per Sudhansu's example, we prefer a low-cost, tradeable benchmark
Sudhansu's characteristics of an ideal benchmark include the following:
1. Well defined: Produced by an independent index provider, the Russell 1000 is verifiable and free of ambiguity
about its contents. Thus it ably defines the market portfolio for GM and Martingale.
2. Tradeable: Alpha must be measured relative to tradeable benchmarks, otherwise the computed alphas do
not represent implementable returns on investment strategies. So the benchmark should be a realistic, low-
cost alternative for the asset owner. The Russell 1000 is a natural passive benchmark for Martingale’s low
volatility strategy because low-cost mutual fund and ETF versions are available (see chapter 16).
3. Replicable: Both the asset owner and the funds manager should be able to replicate the benchmark.
Martingale is certainly able to replicate the returns of the Russell 1000 benchmark because it bases its
strategy on the Russell 1000 universe. GM Asset Management, the client, is also able to replicate the Russell
1000, either by internally trading it or by employing a Russell 1000 index fund provider. Thus, both Martingale
and GM face a common, low-cost option. Certain benchmarks can’t be replicated by the asset owner because
they are beyond the asset owner’s expertise. Such nonreplicable benchmarks are not viable choices and make
it difficult or impossible to measure how much value a portfolio manager has added because the benchmark
itself cannot be achieved by the asset owner. There are some benchmarks, like absolute return benchmarks,
that can’t even be replicated by the fund manager. In these cases, the fund manager is disadvantaged because
she may not even be able to generate the benchmark in the first place.
4. Adjusted for risk: Sadly, most benchmarks used in the money management business are not risk adjusted.
Taking the Russell 1000 as the benchmark assumes the beta of Jacques’s strategy is one, but the actual beta
of the volatility strategy is 0.73. This risk adjustment makes a big difference in the alpha; with the true beta
of 0.73, the alpha of the low volatility strategy is 3.44% per year compared to 1.50% when the beta is one.
4. Correct Answer: C
False. Sudhansu finds the alphas (barely) significant given their t-stats are about 2.0; he says the adjusted R^2 of 14%
is "relatively high;" and the increase in market beta suggests the Fama-French model is a better fit.
In regard to (A), (B) and (D), each is TRUE.
About the CAPM regression, Sudhansu writes "This is impressive performance! Buffett is generating an alpha of
0.0072 × 12 = 8.6% per year with a risk of approximately half that of the market (β = 0.51) ... The alpha is also
statistically significant, with a high t-statistic above two. The cutoff level of two corresponds to the 95% confidence
level, a magic threshold for statisticians. Buffett is special. Most factor regressions do not produce significant alpha
estimates. The adjusted R^2 of the CAPM regression is 14%, which is also relatively high. For most stocks, CAPM
regressions produce R^2s of less than 10%."
About the Fama-French regression, Sudhansu writes "Buffett’s alpha has fallen from 0.72% per month (8.6% per
year) with the CAPM benchmark to 0.65% per month (7.8% per year). Controlling for size and value has knocked
nearly 1% off Buffett’s alpha. First, note that the market beta has moved from 0.51 in the pure CAPM regression to
0.67 in the Fama–French specification. This is an indication that adding the SMB and HML factors is doing
something— the market beta would stay the same only if the SMB and HML factors would have no ability to explain
Buffett’s returns. The SMB factor loading in the Fama–French regression is s = −0.50. The negative sign indicates
that Berkshire Hathaway is acting the opposite way from a small stock (remember, SMB is long small stocks and
short large stocks). That is, Berkshire Hathaway has large stock exposure. Note that being large counts against
Buffett’s outstanding performance because large stocks, according to the Fama–French model, tend to
underperform small stocks. The HML loading of h = 0.38 says that Berkshire Hathaway has a strong value
orientation; it tends to move together with other value stocks. Thus, the negative SMB and positive HML factor
loadings suggest that Berkshire Hathaway is a large, value investor. Duh, of course it is! It doesn’t take the finance
cognoscenti to know that this is the investing technique that Buffett has touted since founding Berkshire Hathaway
in the 1960s. It is comforting that an econometric technique yields the same result as common sense. But the
statistical technique gives us the appropriate benchmark to compute Buffett’s risk-adjusted alpha.
The surprising result in the Fama-French regression is that Buffett is still generating considerable profits relative to
the size- and value-factor controls: Buffett’s monthly alpha of 0.65% is still outsized; the Fama-French model reduces
the CAPM alpha by less than 1% per year. This is not because the size and value factors are inappropriate risk factors.
Quite the contrary. The Fama-French regression has an adjusted R^2 of 27%, which is large by empirical finance
standards, and much higher than the adjusted R^2 of 14% in the CAPM benchmark. The size and value factors,
therefore, substantially improve the fit relative to the CAPM benchmark. Buffett’s performance is clearly not merely
from being a value investor, at least the way value is being measured relative to the CAPM."
In regard to true (D), please note that Sudhansu writes about the Fama-French regression that "the factor loadings
can be translated directly to a benchmark portfolio, only now the portfolio contains (complicated) long–short
positions in small/large and value/growth stocks. But it still represents $1 of capital allocated between factor
portfolios. Every time we run a factor regression, we are assuming that we can create a factor benchmark portfolio."
In this way, the difference between the two regressions is:
 Because E(r) = Rf + β[r(m) - Rf] = Rf + β*r(m) - β*Rf = β*r(m) + (1-β)*Rf, the CAPM benchmark portfolio
allocates $1.00 to a long position of (β) dollars in the market plus a long position of (1-β) in risk-free Treasury
bills; if β>1, this implies a short position in the risk-free asset
 The Fama-French benchmark analogously allocates the MKT factor loading to the market and (1-MKT) to the
risk-free asset, which totals $1.00. But it further is long/short the SMB and HML factors; for example if SMB
= +0.20, then the benchmark is long +$0.20 in small caps and short -$0.20 in large caps (see the sketch below).
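As an aid (an illustrative sketch, not part of the original answer; variable names are ours), the estimated Fama-French loadings (MKT = 0.67, SMB = -0.50, HML = 0.38) mapped into the implied $1 benchmark portfolio of choice (D):

# Translate Fama-French factor loadings into the implied $1 benchmark portfolio
mkt, smb, hml = 0.67, -0.50, 0.38   # estimated factor loadings
benchmark = {
    "T-bills":    1 - mkt,   # $0.33
    "Market":     mkt,       # $0.67
    "Small caps": smb,       # -$0.50 (short small caps ...)
    "Large caps": -smb,      # +$0.50 (... long large caps)
    "Value":      hml,       # +$0.38
    "Growth":     -hml,      # -$0.38
}
for leg, dollars in benchmark.items():
    print(f"{leg:>10}: ${dollars:+.2f}")
# The long-short SMB and HML legs net to zero, so total capital is still $1.00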
5. Correct Answer: D
TRUE: It is acceptable to exclude risk-free assets and regress gross returns against the benchmark portfolio
conditional on a constraint that the factor loadings sum to one; in this case, we need β(B) + β(S) = 1.0
Sudhansu: "3.2 DOING WITHOUT RISK-FREE ASSETS: Benchmark portfolios need not include risk-free assets ... To
obtain a benchmark portfolio, we require the restriction that β(s) + β(b) = 1. That is, the portfolio weights must sum
to one. Then, $1 placed into CalPERS on the left-hand side of equation (10.12) can be replicated by a portfolio of
stocks and bonds (with portfolio weights, which also must sum to one) on the right-hand side, plus any alpha
generated by the CalPERS’ funds manager." In regard to (A), (B) and (C), each is FALSE.
6. Correct Answer: B
False. Grinold’s fundamental law is derived under mean-variance utility, and so all the shortcomings of mean-
variance utility apply. In particular, by using mean-variance utility, the Grinold–Kahn framework ignores downside
risk and other higher moment risk while assuming that all information is used optimally.
In regard to (A), (C) and (D), each is TRUE.
Sudhansu (emphasis ours): "There are two very important limitations of the fundamental law. The first is that ICs
are assumed to be constant across BR. The first manager whom you find may truly have a high IC, but the one
hundredth manager whom you hire probably does not. As assets under management increase, the ability to
generate ICs diminishes. Indeed, the empirical evidence on active management, which I cover in Part III of this book,
shows decreasing returns to scale: as funds get bigger, performance deteriorates. This effect is seen in mutual funds,
hedge funds, and private equity. Thus, ICs tend to fall as assets under management rise.
Second, it is difficult to have truly independent forecasts in BR. Manager decisions tend to be correlated and
correlated bets reduce BR. An equity manager with overweight positions on 1,000 value stocks offset by
underweight positions in 1,000 growth stocks has not placed 1,000 different bets; he’s placed just one bet on a
value-growth factor. Hiring one hundred different fixed income managers who are all reaching for yield by buying
illiquid bonds gets you not one hundred different bets but rather a single bet on an illiquidity factor. Correlated
factor bets tend to dominate at the overall portfolio level—a reason why top–down factor investing is so important
(see chapter 14)."
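To make the arithmetic in choice (c) concrete (an illustrative sketch, not part of the original answer; the function name is ours):

# Grinold's fundamental law: IR ~= IC * sqrt(BR)
from math import sqrt

def max_information_ratio(ic, breadth):
    # Maximum attainable IR for a given skill (IC) and breadth (BR)
    return ic * sqrt(breadth)

print(max_information_ratio(0.250, 4))     # skilled timer, 4 bets/year  -> 0.50
print(max_information_ratio(0.025, 400))   # slight edge, 400 bets/year  -> 0.50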
7. Correct Answer: A
False. Style analysis with shorting is allowed, as long as the factor loadings sum to one.
In regard to (B), (C) and (D) each is TRUE.
The two key advantages of style analysis, per true choices (C) and (D) are:
1. We use tradeable funds. Sudhansu: "The main idea with style analysis is that we use actual tradeable funds in the
factor benchmark. I used SPDR ETFs in equation (10.14), but I could have used other ETFs or index mutual funds
for the benchmark portfolio."
2. The factor loadings may vary over time
In regard to true choice (B), Sudhansu: "My final comment is that the problems of statistical inference with time
varying portfolio benchmarks are serious. It is hard enough to detect statistical significance with constant portfolio
benchmarks, and estimated time-varying styles will have even larger standard errors."
8. Correct Answer: A
True. One approach to accounting for nonlinear payoffs is to include tradeable non-linear factors
Sudhansu (emphasis ours): "There are two ways to account for nonlinear payoffs:
1. Include Tradeable Nonlinear Factors: Aggregate market volatility risk is an important factor and an easy way to
include the effects of short volatility strategies is to include volatility risk factors. Other nonlinear factors can also
be included in factor benchmarks. By doing so, the asset owner is assuming that she can trade these nonlinear
factors by herself. Sometimes, however, the only way to access these factors is through certain fund managers ...
2. Examine Nontradeable Nonlinearities: It is easy to test whether fund returns exhibit nonlinear patterns by
including nonlinear terms on the right-hand side of factor regressions. Common specifications include quadratic
terms, like r^2(t) or option-like terms like max[r(t),0]. The disadvantage is that, after including these terms, you do
not have alpha—we always need tradeable factors on the right-hand side to compute alphas. But we must move
beyond alpha if we want evaluation measures that are robust to dynamic manipulation. These will not be alphas,
but they can still be used to rank managers and evaluate skill. One state-of-the-art measure has been introduced
by Goetzmann et al. (2007) ..."
In regard to (B), (C) and (D) each is FALSE.
9. Correct Answer: C
True: The risk anomaly might be explained by investors who are leveraged constrained (i.e., who cannot borrow so
instead bid up high beta stocks) and/or have an "agency problem" created by a need to minimize tracking error with the
benchmark (e.g., they cannot short low volatility or short low beta)
Sudhansu's explanations for the risk anomaly include:
 Data Mining as a possibility (which Sudhansu dismisses): "Some papers in the literature rightfully point out some
data mining concerns with the original results in Sudhansu et al. (2006). There is some sensitivity in the results to
different portfolio weighting schemes and illiquidity effects ... The best argument against data mining is that the
low-risk effect is seen in many other contexts. Sudhansu et al. (2006) show that the effect appears during
recessions and expansions and during stable and volatile periods. Sudhansu et al. (2009) show that it takes place
in international stock markets. Frazzini and Pedersen (2011) show that low-beta portfolios have high Sharpe ratios
in U.S. stocks, international stocks, Treasury bonds, corporate bonds cut by different maturities and credit ratings,
credit derivative markets, commodities, and foreign exchange. Cao and Han (2013) and Blitz and de Groot (2013)
show that the low risk phenomenon even shows up in option and commodity markets, respectively. Low risk is
pervasive.
 Leverage Constraints: "Many investors are leverage constrained—they wish to take on more risk but are unable to
take on more leverage. Since they cannot borrow, they do the next best thing—they hold stocks with built-in
leverage, like high-beta stocks. Investors bid up the price of high-beta stocks until the shares are overpriced and
deliver low returns--exactly what we see in data.
 Agency Problems: "Many institutional managers can’t or won’t play the risk anomaly. In particular, the use of
market-weighted benchmarks itself may lead to the low volatility anomaly ... "
 Preferences: "If asset owners simply have a preference for high-volatility and high-beta stocks, then they bid up
these stocks until they have low returns. Conversely, these investors shun safe stocks—stocks with low volatility
and low betas—leading to low prices and high returns for these shares ..."
In regard to (A), (B) and (D) each is FALSE.
Portfolio Risk: Analytical Methods | Questions
1. In a portfolio where all returns are normally distributed, the diversified portfolio value at risk (VaR) of a two-asset
portfolio is $76 million. The first asset has an individual VaR of $70 million. The assets have zero correlation. What is
the individual VaR of the second asset?
a) $6.0 million
b) $18.0 million
c) $19.4 million
d) $29.6 million
2. A two-asset portfolio with a value of $20 million contains two equally-weighted assets (each asset has a value of $10
million). The volatility of the first asset is 10% and the volatility of the second asset is 20% (asset returns are normally
distributed). What is, respectively, the 95% diversified portfolio value at risk (VaR) if (i) the assets are uncorrelated,
(ii) the assets have a correlation (rho) of 0.5, and (iii) the assets are perfectly correlated?
a) $3.0, 3.9 and 4.5 million
b) $3.7, 4.4 and 4.9 million
c) $3.9, 5.2 and 5.8 million
d) $4.1, 5.6 and 6.3 million
3. A two-asset portfolio with a value of $40 million contains two equally-weighted assets (each asset has a value of $20
million). The volatility of both assets is 30%. The assets are uncorrelated (i.e., their correlation is zero). What is the
incremental value at risk (VaR), assuming 95% confident VaR, if we subtract one asset from the portfolio, leaving only
the remaining asset in the portfolio?
a) $4.09 million
b) $6.73 million
c) $9.87 million
d) $13.96 million
4. If computed for a portfolio where correlations are imperfect, which of the following value at risk (VaR) measures
will be greatest?
a) Undiversified VaR
b) Diversified VaR
c) Individual VaR
d) Incremental VaR
5. Which approach is most likely to find a local-valuation (delta-normal valuation) method insufficient?
a) Undiversified VaR
b) Diversified VaR
c) Individual VaR
d) Incremental VaR
6. Which is equal to the sum of component VaRs?
a) Undiversified VaR
b) Diversified VaR
c) Sum of individual VaRs
d) Sum of incremental VaRs
7. Which is equal to the sum of individual VaRs?
a) Undiversified VaR
b) Diversified VaR
c) Sum of component VaRs
d) Sum of incremental VaRs
8. A $20 million portfolio is equally invested in two currencies: $10 million in US dollars (USD) and $10 million in Euros
(EUR). The volatility of the Euro (EUR) is 20%; the volatility of the dollar (USD) is 30%. The two currencies have a
correlation of 0.60. What is the beta of the US dollar (USD) position with respect to the two-asset portfolio that
includes the US dollar position; i.e., beta (USD, Two-asset Portfolio)?
a) 0.75
b) 1.00
c) 1.25
d) 1.50
9. A $10 million portfolio is equally invested in two currencies: $5 million in Swiss francs (CHF) and $5 million in Japanese
yen (JPY). The volatility of CHF is 10%; the volatility of the JPY is 20%. The two currencies have a correlation of 0.30.
If we assume a 95% confident delta normal value at risk (VaR), what is the marginal value at risk (marginal VaR) of
the Swiss franc (CHF) position with respect to the two-asset portfolio that includes the CHF position; i.e., 95%
confident marginal VaR (CHF, Two-asset Portfolio)?
a) 0.106
b) 0.304
c) 0.633
d) 1.124
10. A $10 million portfolio with a volatility of 30% is invested in two positions, (Y) and (Z). The beta of position (Y) with
respect to the two-position portfolio, that includes (Y), is 0.80. What is the 99% confident marginal value at risk
(marginal VaR) of position (Y) with respect to the portfolio?
a) 0.25
b) 0.55
c) 0.78
d) 0.92
11. A $20 million portfolio with a volatility of 30% is invested equally in two positions, (A) and (B); i.e., $10 million in each
position. The 95% confident marginal value at risk (marginal VaR) of Position (A) with respect to the portfolio is 0.30.
What is the 95% confident Component VaR of Position (A) with respect to the portfolio?
a) $900,000
b) $3.0 million
c) $6.0 million
d) $9.0 million
12. A $20.0 million Portfolio is invested in two positions, (Y) and (Z): $14.0 million is invested in Position (Y) and $6.0
million is invested in Position (Z). The volatility of each position is 10% and the positions are uncorrelated. In this way,
it can be shown that the beta of Position (Y) with respect to the Portfolio is 1.207 and the beta of Position (Z) with respect to
the Portfolio is 0.517. What is the 95% Component VaR of position (Y)?
a) $389,000
b) $1.7 million
c) $2.1 million
d) $2.3 million
13. A $10 million Portfolio is invested in two currencies: $6.0 million is invested in Canadian dollars (CAD) and $4.0 million
is invested in US dollars (USD). The volatility of CAD is 10% and the volatility of USD is 20%. Assume, perhaps
unrealistically, that the currencies are uncorrelated. Finally, the beta of the CAD position with respect to the Portfolio
is 0.60. If our desired value at risk (VaR) confidence level is 95%, what are, respectively, the Component VaR of the
CAD position and the Incremental VaR of the CAD position?
a) $329,000 (component VaR) and $329,000 (incremental VaR)
b) $329,000 (component VaR) and $592,000 (incremental VaR)
c) $592,000 (component VaR) and $329,000 (incremental VaR)
d) $592,000 (component VaR) and $592,000 (incremental VaR)
14. Each of the following is TRUE about Component VaR EXCEPT:
a) By construction, component VaRs sum to (diversified) Portfolio VaR
b) Components with a negative sign act as hedges against the remainder of the portfolio
c) Component VaR = Marginal VaR * Portfolio Value ($) * position weight (%)
d) If correlations are imperfect, Component VaRs must be greater than Individual VaRs
15. Each of the following is TRUE about Incremental VaR EXCEPT:
a) Incremental VaR is not equal to Component VaR
b) Incremental VaR = Marginal VaR * Portfolio Value ($) * position weight (%)
c) Incremental VaR is the change in Portfolio VaR if a position is deleted
d) Incremental VaR probably requires a "before and after" full revaluation of the portfolio, if accuracy is desired
16. A bank is proposing to add Position (A) to Portfolio (P). (A) is small relative to the large but simple portfolio (P) which
is accurately summarized by risk factors that are jointly normally distributed. The bank's Risk Manager seeks a fast
and efficient method. Which is the best method given the situation and the risk manager's preference?
a) Parametric VaR with marginal VaR
b) Full revaluation with incremental VaR
c) Historical Simulation
d) Monte Carlo Simulation
17. A bank is proposing to add Position (A) to Portfolio (P). (A) is large relative to (P) and both contain heavy-tailed (i.e.,
non-normal) risk factor returns that defy distributional assumptions.
The bank's Risk Manager needs a method that is computationally simple and easy to communicate. Which is the best
method given the situation and the risk manager's preference?
a) Parametric VaR with marginal VaR
b) Delta-normal VaR
c) Historical Simulation
d) Monte Carlo Simulation
18. A bank is proposing to add Position (A) to Portfolio (P). (A) is large relative to portfolio (P) and both are complex with
non-normal, non-linear and time-varying exposures. The bank's Risk Manager has a large budget, sophisticated
audience and ample computational time such that the priority is precision in the VaR estimate. Which is the best
method given the situation and the risk manager's preference?
a) Parametric VaR with marginal VaR
b) Delta-normal VaR
c) Historical Simulation
d) Monte Carlo Simulation
19. If the goal is to shift a portfolio to its global risk minimum (the global minimum-risk portfolio), from the perspective of a risk manager,
which of the following is most effective?
a) Add (trim) positions with low (high) Sharpe ratios until Sharpe ratios are equal
b) Add (trim) positions with low (high) marginal VaR until marginal VaRs are equal
c) Add (trim) positions with low (high) ratios of excess return to beta until ratios (excess return/beta) are equal
d) Add (trim) positions with low (high) ratios of Individual VaR to Portfolio VaR until ratios (Individual VaR/Portfolio
VaR) are equal
Portfolio Risk: Analytical Methods | Answers


1. Correct Answer: D
$29.6 million
VaR(2-asset portfolio) = SQRT(VaR_1^2 + VaR_2^2 + 2*VaR_1*VaR_2*correlation); if correlation = 0, then VaR(2-asset
portfolio) = SQRT(VaR_1^2 + VaR_2^2); and we can solve for VaR_2 given the diversified portfolio VaR: VaR_2 =
SQRT[VaR(2-asset portfolio)^2 - VaR_1^2] = SQRT[76^2 - 70^2] = $29.60 million
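As a quick check (an illustrative sketch, not part of the original answer; variable names are ours):

from math import sqrt

# Two-asset VaR with zero correlation: VaR_P = sqrt(VaR_1^2 + VaR_2^2)
# Invert the formula to solve for the unknown individual VaR
var_p, var_1 = 76.0, 70.0        # $ millions
var_2 = sqrt(var_p**2 - var_1**2)
print(round(var_2, 1))           # 29.6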
2. Correct Answer: B
$3.7, 4.4 and 4.9 million
As VaR(2-asset portfolio) = SQRT(VaR_1^2+VaR_2^2+2*VaR_1*VaR_2*correlation), it follows per Jorion that, if
correlation = 0, then: VaR(2-asset portfolio) = SQRT(VaR_1^2+VaR_2^2), and if correlation = 1.0, then: VaR(2-asset
portfolio) = VaR_1+VaR_2. Individual VaR of first asset = $10 million * 10% * 1.645 = $1.645 million; Individual VaR
of second asset = $10 million * 20% * 1.645 = $3.290 million.
(i) if uncorrelated, portfolio VaR = SQRT (1.645^2+3.290^2) = $3.678 million.
(ii) if correlation = 0.5, portfolio VaR = SQRT (1.645^2+3.290^2+2*1.645*3.290*0.5) = $4.352 million.
(iii) if correlation = 1.0, portfolio VaR = VaR_1 + VaR_2 = $4.935 million
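The same calculation generalized (an illustrative sketch, not part of the original answer; the function name is ours):

from math import sqrt

def two_asset_var(var_1, var_2, rho):
    # Diversified VaR of a two-asset portfolio (normally distributed returns)
    return sqrt(var_1**2 + var_2**2 + 2 * rho * var_1 * var_2)

var_1 = 10e6 * 0.10 * 1.645   # $1.645 million individual VaR
var_2 = 10e6 * 0.20 * 1.645   # $3.290 million individual VaR
for rho in (0.0, 0.5, 1.0):
    print(rho, round(two_asset_var(var_1, var_2, rho) / 1e6, 3))
# 0.0 -> 3.678, 0.5 -> 4.352, 1.0 -> 4.935 ($ millions)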
3. Correct Answer: A
$4.09 million
Individual VaRs = $20 million * 30% * 1.645 = $9.869 million. The two-asset diversified portfolio VaR = SQRT
(VaR_1^2+VaR_2^2) = SQRT (2*9.869^2) = $13.957 million. The incremental VaR, upon removing one asset, is given
by the difference: $13.957 - 9.869 = $4.088 million.
4. Correct Answer: A
Undiversified VaR must be greatest
5. Correct Answer: D
Incremental VaR's main drawback (compared to marginal VaR, which is analytical) is that it requires a full
revaluation.
6. Correct Answer: B
Diversified VaR: the sum of component VaRs, by construction, is equal to the diversified VaR.
7. Correct Answer: A
Undiversified VaR: the sum of individual VaRs assumes perfect correlation and therefore equals the undiversified
VaR.
8. Correct Answer: C
Beta (USD, Portfolio) = Covariance (USD, Portfolio)/ Variance (Portfolio). Covariance (USD, 0.5*USD + 0.5*EUR) =
Covariance (USD, 0.5*USD) + Covariance (USD, 0.5*EUR) =
0.5 * Variance (USD) + 0.5 * Covariance (USD, EUR) = 0.5 * 30%^2 + 0.5 * 20% * 30% * 0.6 = 0.063; Variance (Portfolio)
= 0.5^2 * 20%^2 + 0.5^2 * 30%^2 + 2 * 0.5 * 0.5 * 20% * 30% * 0.6 = 0.0505 Therefore, Beta (USD, Portfolio) =
Covariance (USD, Portfolio)/ Variance (Portfolio) = 0.063/0.0505 = 1.247525
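A numerical check (an illustrative sketch, not part of the original answer; variable names are ours):

# Beta of the USD position with respect to the two-asset portfolio
w_usd, w_eur = 0.5, 0.5
vol_usd, vol_eur, rho = 0.30, 0.20, 0.60

cov_usd_p = w_usd * vol_usd**2 + w_eur * vol_usd * vol_eur * rho    # 0.0630
var_p = (w_usd**2 * vol_usd**2 + w_eur**2 * vol_eur**2
         + 2 * w_usd * w_eur * vol_usd * vol_eur * rho)             # 0.0505
print(round(cov_usd_p / var_p, 2))                                  # beta ~= 1.25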
9. Correct Answer: A
0.106
Per Jorion 7.17: Marginal VaR = Covariance / Portfolio volatility * deviate. Covariance (CHF, Portfolio) = 50% weight
* 10%^2 + 50% weight * 10% * 20% * 0.30 = 0.0080; Portfolio volatility = 12.450%; Deviate @ 95% = 1.645; Marginal
VaR = 0.0080/12.45% * 1.645 = 0.1057. Alternatively, per Jorion 7.20: Marginal VaR = Portfolio VaR/W * beta (imp).
Portfolio VaR = $1.245 million * 1.645 = $2.048 million; Beta (CHF, Portfolio) = 0.5161; Marginal VaR = $2.048 million
/ $10 million * 0.5161 = 0.1057. Note: as marginal VaR (CHF, Portfolio) = beta (CHF, Portfolio) * Portfolio volatility *
deviate, in this case: 0.5161 beta (CHF, Portfolio) * 12.45% Portfolio volatility * 1.645 deviate = 0.1057 marginal VaR
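The covariance route in a short sketch (not part of the original answer; variable names are ours):

from math import sqrt

# Marginal VaR = deviate * Covariance(position, portfolio) / portfolio volatility
w_chf, w_jpy = 0.5, 0.5
vol_chf, vol_jpy, rho = 0.10, 0.20, 0.30
z = 1.645                                                           # 95% deviate

cov_chf_p = w_chf * vol_chf**2 + w_jpy * vol_chf * vol_jpy * rho    # 0.0080
vol_p = sqrt(w_chf**2 * vol_chf**2 + w_jpy**2 * vol_jpy**2
             + 2 * w_chf * w_jpy * vol_chf * vol_jpy * rho)         # 12.45%
print(round(z * cov_chf_p / vol_p, 4))                              # ~0.1057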

10. Correct Answer: B
0.55
Marginal VaR (Y, Portfolio) = beta (Y, Portfolio) * Portfolio volatility * deviate = beta (Y, Portfolio) * (Portfolio VaR%);
in this case, Marginal VaR (Y, Portfolio) = 0.80 * 30% * 2.33 ~= 0.55
11. Correct Answer: B
$3.0 million
Component VaR = Marginal VaR (i, Portfolio) * Position ($); as the Position (A) is $10 million: Component VaR
(Position A, Portfolio) = 0.30 * $10 million = $3.0 million
12. Correct Answer: C
$2.1 million
Component VaR = Portfolio VaR ($) * Position weight (%) * beta (imp). In this case, Portfolio VaR ($) = SQRT (70%^2
* 10%^2 + 30%^2 * 10%^2) * $20 million * 1.645 = $2.505 million; Component VaR (Position Y, Portfolio) = $2.505
million * 70% * 1.207 = $2.117 million. Please note: Component VaR (Position Z, Portfolio) = $2.505 million * 30% *
0.517 = $389,000, such that the sum of component VaRs = $2.117 million + $389,000 = $2.505 million. Note there
are three equivalent ways, related to each other of course, to reach the Component VaR(Y): (i) Component VaR =
marginal VaR * Position(Y) = 0.1512 * $14.0 million = $2.117 million; (ii) Component VaR = Portfolio VaR * position
weight * beta(Y,P) = [$1.5232 million portfolio volatility * 1.645 deviate] * 70% * 1.207 = $2.117 million; and (iii)
Component VaR = Individual VaR * correlation(Y,P) = [$14.0 million * 10% vol * 1.645] * [1.207 * 7.616% / 10%] =
$2.117 million, where correlation(Y,P) = beta(Y,P) * volatility(P) / volatility(Y).
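One of these routes in a short sketch (not part of the original answer; variable names are ours):

from math import sqrt

# Component VaR of Position (Y): $14 million of a $20 million portfolio
w_y, w_z, vol = 0.70, 0.30, 0.10
port_value, z = 20e6, 1.645

vol_p = sqrt(w_y**2 * vol**2 + w_z**2 * vol**2)   # 7.616% portfolio volatility
port_var = port_value * vol_p * z                 # ~$2.505 million portfolio VaR
beta_y = (w_y * vol**2) / vol_p**2                # 1.207 (uncorrelated positions)

component_y = port_var * w_y * beta_y
print(round(component_y))                         # ~$2,117,000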
13. Correct Answer: C
$592,000 (component VaR) and $329,000 (incremental VaR)
Since beta = Cov/Var = (correlation * volatility * volatility) / variance --> correlation = beta (i, P) *portfolio volatility
/ position volatility. In this case, correlation = 0.60 * 10%/10% = 0.60; i.e., portfolio volatility = SQRT(60%^2*10%^2
+ 40%^2*20%^2) = 10%. Individual (position) VaR of CAD = $6 million * 10% * 1.645 = $986,910. Component VaR =
Individual VaR * correlation = $986,910 * 0.60 = $592,147
Incremental VaR (CAD) = Portfolio VaR - Individual VaR (USD); i.e., the difference if the CAD position is deleted! In
this case, Incremental VaR (CAD) = $1.645 million - ($4 million * 20% * 1.645) ~= $329,000
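Both quantities in a short sketch (not part of the original answer; variable names are ours):

from math import sqrt

# Component VaR (via correlation) and Incremental VaR (via deletion) of the CAD position
z = 1.645
vol_p = sqrt(0.6**2 * 0.10**2 + 0.4**2 * 0.20**2)   # 10% portfolio volatility
port_var = 10e6 * vol_p * z                         # $1.645 million portfolio VaR

var_cad = 6e6 * 0.10 * z                            # individual VaR ~ $987,000
corr_cad = 0.60 * vol_p / 0.10                      # correlation = beta * vol(P) / vol(CAD) = 0.60
print(round(var_cad * corr_cad))                    # component VaR ~ $592,000

var_usd = 4e6 * 0.20 * z                            # VaR of the remaining (USD) position
print(round(port_var - var_usd))                    # incremental VaR ~ $329,000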
14. Correct Answer: D
Component VaRs sum to diversified Portfolio VaR; individual VaRs sum to undiversified VaR. As undiversified VaR >
diversified VaR (if correlations are imperfect), component VaRs must be LESS than individual VaRs
15. Correct Answer: B
Unlike Component VaR (which is a linear approximation based on marginal VaR), Incremental VaR does not make
use of the linear approximation. Put another way, that (D) is true generally conflicts with the idea of using marginal
VaR to calculate Incremental VaR.
16. Correct Answer: A
In this case, a local valuation method (delta-normal is a parametric method) is justified as the position is small and
the portfolio has jointly normal risk factors. The other three methods are full revaluation methods.
17. Correct Answer: C
As local valuation methods are insufficient, the choice is either historical simulation (HS) or Monte Carlo
Simulation (MCS). But only HS is "computationally simple and easy to communicate."
18. Correct Answer: D
Monte Carlo Simulation
The fact that the portfolio is "complex with non-normal, non-linear and time-varying exposures" implies a need for
MCS, and the Risk Manager's ample budget and computational time generally cope with its primary drawbacks
19. Correct Answer: B
Add (trim) positions with low (high) marginal VaR until marginal VaRs are equal
VaR and Risk Budgeting in Investment Management | Questions


1. A pension fund has $28.0 billion invested in an equity portfolio that has a volatility of 14.0% and normally
distributed returns. The fund measures the equity portfolio's absolute risk with a 95.0% one-year value-at-risk (VaR).
The fund wants to allocate this risk (per a risk budget) between two fund managers, assigning each an identical VaR
budget. If the correlation between the managers is 0.30, what is the VaR budget for each manager? (Source: variation
on FRM handbook example 29.9)
a) $3.7 billion
b) $4.0 billion
c) $5.2 billion
d) $6.4 billion
2. According to Prabin, which of the following risk controls and/or risk measures is LEAST LIKELY to be effective in
managing the risk of an institutional trading portfolio when the horizon is short, turnover is rapid, and leverage is
high (i.e., an environment typified by traditional "sell side" banks)?
a) Investment guidelines and asset allocation informed by long-term historical studies of the equity risk premium
b) Value at risk (VaR) and VaR limits
c) Stress testing
d) Stop-loss rules
3. According to Prabin, hedge funds pose special risk measurement problems. Which of the following features of hedge
funds is LEAST LIKELY to create a unique risk measurement challenge?
a) Hedge funds tend to target absolute returns which "makes it difficult to define a benchmark"
b) Hedge funds tend to invest in illiquid assets which can both artificially lower the appearance of correlations and
volatility
c) Hedge funds exhibit a relative lack of transparency
d) Hedge funds are a heterogeneous group within which some funds employ high leverage and relatively rapid
turnover
4. Public Employee Retirement Fund (PERF) has $800 million in assets and $720 million in liabilities, for a current surplus
(S) of $80 million. The expected annual return on the surplus scaled by assets, ROA, is 5.0%. The volatility of assets
is 18.0%, with a normal distribution. If liabilities are unchanged over the year, at the end of one year, there is a 1.0%
probability that the pension will have a deficit of what amount? (Note: applies Jorion's example from the assignment)
a) $117 million deficit
b) $215 million deficit
c) $335 million deficit
d) $455 million deficit
5. Public Employee Retirement Fund (PERF) has $600 million in assets and $600 million in liabilities, for a current surplus
of zero. The annual expected return on assets is 8.0% with 18.0% volatility per annum; the annual expected return
on liabilities is 6.0% with 14.0% volatility per annum. Both are normally distributed. The correlation between assets
and liabilities is 0.60.
What is the 95% absolute surplus at risk (absolute SaR); i.e., the worst expected SHORTFALL, or loss relative to current
surplus of zero, with 95% confidence? (Note: this is similar to a previous FRM question, although the assumptions
given are different)
a) $12.0 million
b) $85.6 million
c) $133.6 million
d) $194.0 million
6. Public Employee Retirement Fund (PERF) reports a projected benefit obligation (PBO) of $70.0 billion. If the discount
rate decreases by 10 basis points, the PBO liability will increase by $1.120 billion. These liabilities behave most nearly
like which of the following? (source: variation on FRM handbook question)
a) A long position in a bond with maturity of 9.0
b) A long position in a bond with (modified) duration of 9.0 years
c) A short position in a bond with maturity of 16.0 years
d) A short position in a bond with (modified) duration of 16.0 years
7. A pension fund invests into only two asset classes, equities and fixed income. The pension's policy mix is 50% equities
and 50% fixed income. Over the last period, the benchmark returns for equities, Rb (E), was 7.0% and the benchmark
return for fixed income, Rb (FI), was 5.0%. During the period, the pension did not deviate from its policy mix, such
that it allocated 50% to the equity asset class and 50% to the fixed income asset class. The equity managers returned
+4.0% and the fixed income managers returned +10.0%. What is the pension fund's return owing to (attributable to)
active-management risk (Prabin: "the risk can be measured from fund returns")?
a) -0.5%
b) zero
c) +1.0%
d) +2.5%
8. A pension fund invests in a small number of major asset classes. Owing to the Brinson research, the pension fund
assumes that "most of the variation in portfolio performance can be attributed to [its policy mix] choice of asset
class;" i.e., rather than the selection of individual managers. Further, within each major asset class, the pension fund
achieves diversification by allocating to many different investment managers within the asset class. Finally, the
correlation between policy-mix VaR and active-management VaR is slightly negative. In percentage (%) terms, which
is the most plausible order of the magnitude of the sources of risk, from most risk to least:
a) Asset VaR (most), Active-management VaR, Policy-mix VaR (least)
b) Policy-mix VaR (most), Asset VaR, Active-management VaR (least)
c) Active-management VaR (most), Policy-mix VaR, Asset VaR (least)
d) Active-management VaR (most), Asset VaR, Policy-mix VaR (least)
9. Each of the following definitions is true EXCEPT which is false?
a) Cash-flow risk is the risk of year-to-year fluctuations in contributions to the pension fund
b) Economic risk is the risk of variation in total economic earnings of the plan sponsor
c) Funding risk is the risk that the value of assets will not be sufficient to cover the liabilities of the fund
d) Active-management risk is the risk of a dollar loss owing to the policy mix selected by the fund
10. A portfolio manager has identified two stocks, a utility stock and a technology stock, each with an expected return of
+20% over the next year. If she can only add one position, which of the following criteria is best?
a) Stock with lower marginal value at risk (VaR)
b) Stock with higher marginal VaR
c) Stock with lower volatility
d) Stock with higher incremental VaR
11. About employing value at risk (VaR) in an investment management context, Prabin asserts each of the following as
true EXCEPT which statement is false?
a) Instead of limits on notional exposures, plan sponsors should utilize value at risk (VaR) limits; for example, the
anticipated volatility of tracking error cannot exceed some limit
b) Active managers probably ought to maximize the information ratio (IR) for the total portfolio subject to a tracking
error volatility (TEV) constraint
c) A key advantage of budgeting across active managers by optimizing the portfolio's information ratio (IR) is that
"we do NOT need to make any assumptions about the expected performance of active managers."
d) Value at risk (VaR) can complement ("is perfectly consistent with") a top-down asset allocation process based
on a mean-variance framework.
12. Prabin explains that if we maximize the portfolio information ratio subject to a fixed tracking error volatility (TEV),
the relative risk budgets should be proportional to the information ratios.
Mathematically, if omega(.) is tracking error volatility (TEV), IR(.) is the information ratio, and x(i) is the weight of the
position, Prabin 17.7 gives: x(i)*omega(i) = IR(i)*[1/IR(portfolio)]*omega(portfolio). If the portfolio's maximum
information ratio is 0.82 subject to a TEV constraint of 5.0%, what is the allocation to a manager with an information
ratio of 0.50 and a TEV of 8.0%?
a) Zero (manager's IR is less than optimal portfolio's IR)
b) 38.1%
c) 44.2%
d) 56.9%
Var and Risk Budgeting in Investment Management | Answers
1. Correct Answer: B
$4.0 billion
The total VaR budget = $28.0 billion * 14.0% * 1.645 = $6.448 billion. Two-manager portfolio VaR = SQRT (VaR1^2 +
VaR2^2 + 2*VaR1*VaR2*correlation); i.e., the familiar two-asset volatility formula, but with VaR instead! As VaR1 =
VaR2 ("assigning each an identical VaR budget"), Total VaR = SQRT (VaR^2 +VaR^2 + 2*VaR^2*correlation). In this
case: $6.448 = SQRT (VaR^2 + VaR^2 + 2*VaR^2*correlation), and VaR^2 + VaR^2 + 2*VaR^2*correlation =
$6.448^2, and VaR^2*(1 + 1 + 2*correlation) = $6.448^2, and VaR = SQRT [$6.448^2 / (1 + 1 + 2*0.3)] = $3.999 billion
for a VaR budget to each manager.
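For readers who want to verify the arithmetic, here is a minimal Python sketch (figures taken from the question; the variable names are ours):

    import math

    total_var = 28.0 * 0.14 * 1.645        # total VaR budget = $6.448 billion
    rho = 0.30                             # correlation between the two managers
    # With equal budgets: total^2 = VaR^2 * (1 + 1 + 2*rho); solve for VaR
    var_each = math.sqrt(total_var**2 / (2.0 + 2.0 * rho))
    print(round(var_each, 3))              # ~3.999, i.e., ~$4.0 billion per manager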
2. Correct Answer: A
Prabin: "Consider first bank trading portfolios, where the horizon is short, turnover rapid, and leverage high. VAR is
particularly appropriate for such an environment. In this case, historical measures of risk basically are useless
because yesterday's portfolio profile may have nothing to do with today's. In an investment environment, in contrast,
the horizon, as measured by the portfolio evaluation period, is much longer, monthly or quarterly. Positions change
more slowly
... In sum, the daily application of VAR measures has become a requirement of bank trading portfolios owing to short
horizons, rapid turnover, and high leverage. Risk is controlled through position limits, VAR limits, and stop-loss rules.
Although the investment management industry operates with different risk parameters, the proper measurement
of risk is a critical function."
3. Correct Answer: A
Absolute returns do not per se create a unique risk measurement challenge: we
tend to assume the benchmark is a risk-free rate, or similar passive benchmark.
In regard to (B), (C) and (D), each is TRUE, at least as cited by Prabin:
"[1. Heterogeneous with high leverage and turnover:] Hedge funds, however, pose special risk measurement
problems. This group is very heterogeneous. Most hedge funds have leverage. Some groups have greater turnover
than traditional investment managers. Long Term
Capital Management is an extreme example of a hedge fund that went nearly bankrupt owing to its huge leverage.
Such hedge funds are more akin to the trading desks of investment banks than to those of pension funds. As such,
they should use similar risk management systems.
[2. Illiquid:] Another category of funds, however, invests in illiquid assets, such as convertible bonds, which are
traded infrequently, even within a month. When this is the case, risk measures based on monthly returns give a
misleading picture of risk because the closing net asset value (NAV) does not reflect recent transaction prices. This
creates two types of biases. First, correlations with other asset classes will be artificially lowered, giving the
appearance of low systematic risk. This can be corrected using enlarged regressions with additional lags of the
market factors and summing the coefficients across lags. Second, volatility will be artificially lowered, giving the
appearance of low total risk. Such illiquidity, however, will show up in positive serial autocorrelation in returns.
Biases in volatility measures can be corrected by taking this autocorrelation into account when extrapolating risk to
longer horizons.
[3. Transparency:] Finally, hedge funds can pose special problems owing to their lack of transparency. Many hedge
funds refuse to reveal information about their positions for fear of others taking advantage of this information. For
clients, however, this makes it difficult to measure the risk of their investment both at the hedge-fund level and in
the context of their broader portfolio."
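The lagged-regression correction for smoothed (illiquid) returns can be sketched in a few lines of Python. This is a toy illustration with synthetic data (not the reading's own example): a fund whose true market exposure of 0.8 arrives half with a one-month lag, so that summing the contemporaneous and lagged betas recovers the true exposure:

    import numpy as np

    rng = np.random.default_rng(0)
    mkt = rng.normal(0.0, 0.04, 120)          # monthly market-factor returns
    fund = 0.4 * mkt + 0.4 * np.roll(mkt, 1)  # smoothed (illiquid) fund returns
    # Regress on the contemporaneous AND lagged market factor, then sum the betas
    X = np.column_stack([np.ones(119), mkt[1:], mkt[:-1]])
    b, *_ = np.linalg.lstsq(X, fund[1:], rcond=None)
    print(round(b[1] + b[2], 3))              # summed beta ~= 0.8 (true exposure)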
4. Correct Answer: B
$215 million deficit
Expected surplus at the end of the year = $80 million + growth of 5.0%*$800 million = $120 million. Relative surplus
at risk (relative SaR not absolute SaR) = 18% volatility * 2.33 deviate at 99.0% * $800 million ~= $335 million. End-
of-year deficit with 1.0% significance = $120 million - $335 million = -$215.0 million or $215 million deficit
C. $133.6 million (a related variant computing the absolute, rather than relative, SaR):
The expected growth in surplus (the "drift" if you will) = 8.0% ROA * $600 - 6.0% ROL * $600 = +$12 million.
The volatility of the surplus = SQRT [variance of (Assets - Liabilities)]; please note this applies the property of a
variance, in the case of a difference between two random variables:
Volatility of surplus = SQRT [$600^2*18%^2 + 600^2*14%^2 - 2*600*600*18%*14%*0.60] =
$88.508 million
Relative SaR = $88.508 million * 1.645 = $145.6 million; i.e., the relative SaR.
But the absolute SaR includes the drift = -$12 million + $88.508 million * 1.645 = $133.6 million
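A quick Python check of the absolute SaR computation above (figures from the answer; names are ours):

    import math

    assets = liabilities = 600.0
    drift = 0.08 * assets - 0.06 * liabilities     # +$12 million expected surplus growth
    vol = math.sqrt((assets * 0.18) ** 2 + (liabilities * 0.14) ** 2
                    - 2 * assets * liabilities * 0.18 * 0.14 * 0.60)
    print(round(vol, 3))                           # ~88.508 ($ millions)
    print(round(-drift + vol * 1.645, 1))          # absolute SaR ~= $133.6 million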
5. Correct Answer: D
A SHORT position in a bond with duration of 16.0 years
Modified duration, D* = -(1/P) *(delta Price/delta yield) = -(1/70) *(+1.12/-0.001) = +16.0 years
6. Correct Answer: D
A SHORT position in a bond with duration of 16.0 years
Modified duration, D* = -(1/P) *(delta Price/delta yield) = -(1/70) *(+1.12/-0.001) = +16.0 years
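The duration arithmetic is a one-liner in Python (figures from the answer):

    P, dP, dy = 70.0, 1.12, -0.001       # $70 billion PBO; +$1.12 billion for -10 bps
    print(-(1.0 / P) * (dP / dy))        # modified duration = 16.0 years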
7. Correct Answer: C
R (asset) = Return (policy mix) + Return (active management risk). In this case, Return (asset) = 50% allocated * 4%
equity manager return + 50% allocated * 10% FI manager return = +7.0%.
Return (policy mix) = 50% * 7% benchmark equity return + 50% * 5% benchmark FI return =
6.0% return due to policy-mix.
Return (active management risk) = remaining 1% = (50%*4% - 50%*5%) + (50%*10% -
50%*7%) = -0.5% + 1.5% = +1.0%.
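A minimal Python sketch of this decomposition (figures from the question; names are ours):

    w = [0.50, 0.50]                 # policy (= actual) weights: equity, fixed income
    bench = [0.07, 0.05]             # benchmark returns
    mgr = [0.04, 0.10]               # manager returns
    r_asset = sum(wi * r for wi, r in zip(w, mgr))      # 0.070
    r_policy = sum(wi * r for wi, r in zip(w, bench))   # 0.060
    print(round(r_asset - r_policy, 4))  # +0.01, i.e., +1.0% from active management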
8. Correct Answer: B.
Policy-mix VaR (greatest), Asset VaR, Active-management VaR (least)
VaR [R(Asset)] = VaR [R (policy mix)] + VaR [R (active manager risk)], with R (policy mix) much
larger than R (active manager).
Due to slightly negative correlation, VaR [R (Asset)] < VaR[R(Policy)].
9. Correct Answer: D
Prabin: "Policy-mix risk, which is the risk of a dollar loss owing to the policy mix selected by the fund. Since the policy
mix generally can be implemented by investing in passive funds, this risk represents that of a passive strategy.
Active-management risk, which is the risk of a dollar loss owing to the total deviations from the policy mix. This
represents the summation of profits or losses across all managers relative to their benchmark. Thus, there may be
diversification effects across managers, depending on whether they have similar styles or not. In addition, the
current asset-allocation mix may deviate temporarily from the policy mix."
10. Correct Answer: A
Stock with lower marginal value at risk (VaR)
Prabin: "For each asset to be added to the portfolio, analysts should be given a measure of its marginal VAR. If two
assets have similar projected returns, the analyst should pick the one with the lowest marginal VAR, which will lead
to the lowest portfolio risk. Assume, for instance, that the analyst estimates that two stocks, a utility and an Internet
stock, will generate an expected return of 20 percent over the next year. If the current portfolio is already heavily
invested in high tech stocks, the two stocks will have a very different marginal contribution to the portfolio risk. Say
that the utility stock has a portfolio beta of 0.5 against 2.0 for the other stock, leading to a lower marginal VAR for
the first stock. With equal return forecasts, the utility stock is clearly the preferred choice. Such analysis is only
feasible within the context of a portfolio wide VAR system."
11. Correct Answer: C
The IR requires assumptions about the tracking error and TEV of managers.
"17.5.2. Budgeting across Active Managers: This [the prior] approach [Budgeting across Asset
Classes] can be refined further if we are willing to make assumptions about the expected performance of active
managers. For better or for worse, active managers usually are evaluated in terms of their tracking error (TE),
defined as the active return minus that of the benchmark.
Define mu (u) as the expected TE and omega as its volatility (TEV). The information ratio then is defined as
information ratio (IR) = mu/omega [expected tracking error/volatility of tracking error]."
12. Correct Answer: B
38.1%
As x(i)*omega(i) = IR(i)*[1/IR (portfolio)] *omega(portfolio),
x(i) = IR(i)*[1/IR (portfolio)] *omega(portfolio)/omega(i). In this case,
x(i) = 0.50*[1/0.82] *5%/8% = 38.11%
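The allocation rule is easy to check in Python (figures from the question; names are ours):

    ir_i, ir_p = 0.50, 0.82          # manager and optimal-portfolio information ratios
    tev_p, tev_i = 0.05, 0.08        # portfolio and manager tracking error volatility
    print(round((ir_i / ir_p) * (tev_p / tev_i), 4))   # 0.3811, i.e., 38.1% weight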
Risk Monitoring & Performance Measurement | Questions
1. Both value at risk (VaR) and tracking error (TE) are considered risk measures. Which of the following statements is
TRUE about the relationship and/or contrast between VaR and tracking error (TE)?
a) Relative value at risk (VaR), as opposed to absolute VaR, is analogous but not identical to tracking error (TE),
which is also called active risk; and either VaR or TE can set the risk constraints (limits) in a risk budget
b) Value at Risk (VaR) is mathematically identical to tracking error (TE); the only difference is that VaR implies the
user (or organizational context) is a shareholder or "owner of capital," but TE implies an asset manager
(investment management)
c) Value at risk (VaR) is identical to tracking error only if the asset returns are normally distributed. Otherwise, VaR
is different than TE because VaR does not require normal returns but TE requires normal returns
d) They are neither compatible nor analogous: tracking error is an absolute return metric which only has meaning
in the context of an information ratio, not a risk budget; value at risk (VaR) is a relative risk metric which is
necessarily expressed in different units
2. Your colleague Sidhart was tasked to prepare the first draft of your firm's Risk Plan. The board requested the draft
include each of "five guideposts" as the directors were influenced by Rosengarten and Peter Zangari (Litterman),
specifically that "the risk plan should be incorporated as a separate section of the organization’s strategic planning
document. As such, it should receive all of the vetting and discussion that any other part of the planning document
would receive. When in final form, its main themes should be capable of being articulated to analysts, auditors,
boards, actuaries, management teams, suppliers of capital, and other interested constituencies. The risk plan should
include five guideposts." Each of the following is likely to appear in the draft Risk Plan (as one of the five guideposts)
EXCEPT which is least likely to appear in the Risk Plan?
a) Qualitative scenario analysis that explores the factors that could cause the business plan to fail
b) Defined points of success or failure such as (e.g.) acceptable levels of return on equity (ROE) or returns on risk
capital (RORC)
c) Definition(s) of the difference between events that are merely disappointing and those that inflict serious
damage
d) An income statement that projects revenues and itemized expenses, netted to project expected net profit, by
activity
3. According to Rosengarten and Zangari (Litterman), risk management is a three-legged stool that includes a risk plan,
a risk budget, and the risk monitoring process. About the risk budget (i.e., the second leg), each of the following is
true EXCEPT for which is not?
a) The risk budget should quantify the vision of the risk plan; i.e., once the risk plan is crafted, the risk budget
expresses how risk capital will be allocated such that the organization’s strategic vision is likely to be realized
b) While a financial budget is likely to set a minimum return on equity (ROE) target, a risk budget should analogously
set a minimum return on risk capital (RORC) which is associated with each activity as well as for the aggregation
of all activities
c) In situations where risk budget variances (e.g., due to the possibility of unforeseeable events) are likely to occur,
mathematical modeling and its associated risk budget formalities should be avoided so that an impression of
high accuracy is not falsely conveyed to managers
d) While a financial budget allocates revenues and expenses to determine the profitability of activities, a risk budget
complements it by enabling the estimation of the risk-adjusted profitability of activities
4. According to Rosengarten and Peter Zangari (Litterman), "In response to this heightened level of risk consciousness,
many organizations and asset managers have formed independent risk management units (RMUs) that oversee the
risk exposures of portfolios and ensure that such exposures are authorized and in line with risk budgets." Which of
the following is the LEAST LIKELY to be an objective of a risk management unit (RMU)?
A. To manage risk especially by exercising the authority to replace or reduce the decision methods of portfolio
managers
B. To be a catalyst for a comprehensive discussion of risk-related matters, including matters that do not easily lend
themselves to measurement
C. To develop an inventory of risk data for use in evaluating portfolio managers and market environments
D. To develop performance attribution analytical tools; and to assess the quality of models used to measure risk,
for example, by back-testing and conducting proactive research into model risk.
5. According to Rosengarten and Peter Zangari (Litterman), both tracking error (TE) and the information ratio (IR) are
useful performance measurement tools. Each of the following is true about TE and IR EXCEPT which is not?
a) Both (TE and IR) can be used to measure relative performance vis à vis the competition by identifying managers
who generate superior risk-adjusted excess returns vis à vis a relevant peer group
b) Both can test whether the manager has generated sufficient excess returns to compensate for the risk assumed
c) Both can be applied at the portfolio level as well as for individual industrial sectors and countries.
d) Both only need a limited amount of data and, if achieved risk rather than potential risk is calculated, can nullify
(isolate from) the impact of the manager's environment
6. In the context of an ex post risk and return attribution of performance managers, argue Rosengarten and Zangari
(Litterman), "For quantitative portfolio measurement tools to be effective, we must have a sufficient number of data
points to form a conclusion with a certain level of statistical confidence." What do the authors suggest in the context
of ex post performance attribution, however, if in practice there is a dearth (a lack of sufficient quantity) of
performance data?
a) A dearth of performance data is "unrealistic in an age of big data" such that any such claim should be viewed
suspiciously
b) Substitute with external databases including, for example, assume manager's information ratio is equal to
median of well-defined peer group
c) Greater dependence on style monitoring and manager's compliance with their articulated philosophy
d) For realized (ex post) return and risk evaluation, supplement the performance data gaps with Monte Carlo
analysis
Risk Monitoring & Performance Measurement | Answers
1. Correct Answer: A
TRUE. Relative value at risk (VaR)--as opposed to absolute VaR--is analogous
but not identical to tracking error (TE), which is also called active risk; and either VaR or TE can set the risk constraints
(limits) in a risk budget
Litterman (Rosengarten and Zangari): "Risk, in financial institutions, is frequently defined as Value at Risk (VaR). VaR
refers to the maximum dollar earnings/loss potential associated with a given level of statistical confidence over a
given period of time. VaR is alternatively expressed as the number of standard deviations associated with a particular
dollar earnings/loss potential over a given period of time. If an asset’s returns (or those of an asset class) are normally
distributed, 67 percent of all outcomes lie within the asset’s average returns plus or minus one standard deviation.
Asset managers use a concept analogous to VaR—called tracking error—to gauge their risk profile relative to a
benchmark. In the case of asset managers, clients typically assign a benchmark and a projected risk and return target
vis à vis that benchmark for all monies assigned to the asset manager’s stewardship. The risk budget is often referred
to as tracking error, which is defined as the standard deviation of excess returns (the difference between the
portfolio’s returns and the benchmark’s returns). If excess returns are normally distributed, 67 percent of all
outcomes lie within the benchmark’s returns plus or minus one standard deviation.
VaR is sometimes expressed as dollar value at risk by multiplying the VaR by assets under management. In this
manner, the owner of the capital is able to estimate the dollar impact of losses that could be incurred over a given
period of time and with a given confidence level. To achieve targeted levels of dollar VaR, owners of capital allocate
capital among asset classes (each of which has its own VaR). An owner of capital who wishes to incur only the risks
and returns of a particular asset class might invest in an index fund type product that is designed to replicate a
particular index with precision. To the extent that the owner wishes to enjoy some discretion around the
composition of the index, he or she allows the investment managers to hold views and positions that are somewhat
different than the index. The ability to take risks away from the index is often referred to as active management.
Tracking error is used to describe the extent to which the investment manager is allowed latitude to differ from the
index.
For the owner of capital, the VaR associated with any given asset class is based on the combination of the risks
associated with the asset class and the risks associated with active management. The same premise holds for the
VaR associated with any combination of asset classes and active management related to such asset classes."
In regard to (B), (C) and (D), each is FALSE.
 In regard to false (B), please note that tracking error (TE) is one standard deviation of active ("excess") returns
but value at risk (VaR) is a quantile; i.e., typically some multiple of standard deviation due to greater
confidence
 In regard to false (C), VaR makes no distributional assumption but neither does tracking error (at best, it
"imposes normality")
 In regard to false (D), VaR can be expressed as either relative (e.g., to the expected future value which is the
"analog" to tracking error in the reading) or absolute; further, VaR can be expressed as a percentage (as
tracking error often is) or in dollar terms.
2. Correct Answer: D
False: A projection of revenues and expenses will be included in the financial budget which is one of the three legs
of Financial Accounting Controls.
Rosengarten and Peter Zangari (Litterman) compare but distinguish:
 Financial Accounting Controls, including 1. strategic plan, 2. financial budget, and 3. variance monitoring
process
 Risk Management, including 1. risk plan, 2. risk budget, and 3. risk monitoring process
The five guideposts of the risk plan (summary version) are:
1. The risk plan should set expected return and volatility (e.g., VaR and tracking error) goals for the relevant time
period and establish mileposts which would let oversight bodies recognize points of success or failure. The
risk plan should use scenario analysis to explore those kinds of factors that could cause the business plan to
fail (e.g., identify unaffordable loss scenarios) and strategic responses in the event these factors actually occur.
2. The risk plan should define points of success or failure. Examples are acceptable levels of return on equity
(ROE) or returns on risk capital (RORC). For the purposes of the planning document, risk capital might be
defined using Value at Risk (VaR) methods. Since organizations typically report and budget results over various
time horizons (monthly, quarterly, annually), separate VaR measures for each time interval should be
explored.
3. The risk plan should paint a vision of how risk capital will be deployed to meet the organization’s objectives.
For example, the plan should define minimum acceptable RORCs for each allocation of risk capital. In so doing,
it helps ensure that the return per unit of risk meets minimum standards for any activity pursued by the
organization.
4. A risk plan helps organizations define the bright line between those events that are merely disappointing and
those that inflict serious damage. Strategic responses should exist for any franchise-threatening event—even
if such events are low-probability situations. The risk plan should identify those types of losses that are so
severe that insurance coverage (e.g., asset class puts) should be sought to cover the downside.
5. The risk plan should identify critical dependencies that exist inside and outside the organization. The plan
should describe the nature of the responses to be followed if there are breakdowns in such dependencies.
Examples of critical dependencies include reliance on key employees and important sources of financing
capacity.
Rosengarten and Peter Zangari (Litterman) write about the Risk Plan: "The existence of a risk plan makes an
important statement about how business activities are to be managed. It indicates that owners and managers
understand that risk is the fuel that drives returns. It suggests that a higher standard of business maturity is
present. Indeed, its very existence demonstrates an understanding that the downside consequences of risk—
loss and disappointment—are not unusual. These consequences are directly related to the chance that
management and owners accept in seeking profit. This indicates that management aspires to understand the
source of profit. The risk plan also promotes an organizational risk awareness and the development of a
common language of risk. It demonstrates an intolerance for mistakes/losses that are material, predictable,
and avoidable."
3. Correct Answer: C
Rosengarten and Peter Zangari (Litterman): "Clearly, risk budgeting incorporates elements of mathematical
modeling. At this point, some readers may assert that quantitative models are prone to failure at the worst possible
moments and, as such, are not sufficiently reliable to be used as a control tool. We do not agree. The reality is that
budget variances are a fact of life in both financial budgeting and risk budgeting. Variances from budget can result
from organization-specific factors (e.g., inefficiency) or completely unforeseen anomalies (e.g., macroeconomic
events, wars, weather, etc.). Even though such unforeseen events cause ROE variances, some of which may even be
large, most managers still find value in the process of financial budgeting. The existence of a variance from budget,
per se, is not a reason to condemn the financial budgeting exercise.
So, too, we believe that the existence of variances from risk budget by unforeseen factors does not mean that the
risk budgeting process is irrelevant. To the contrary. Frequently the greatest value of the risk budget derives from
the budgeting process itself—from the discussions, vetting, arguments, and harmonies that are a natural part of
whatever budget is ultimately agreed to. Managers who perform risk budgeting understand that variances from
budget are a fact of life and are unavoidable, but are not a reason to avoid a formal risk budgeting process.
To the contrary, understanding the causes and extent of such variances and ensuring that appropriate remedial
responses exist make the budgeting and planning process even more valuable."
4. Correct Answer: A
False. The RMU should not manage risk, which is the responsibility of the individual portfolio managers, but rather
measure risk for use by those with a vested interest in the process. The RMU cannot reduce or replace the decision
methods and responsibilities of portfolio managers. It also cannot replace the activities of quantitative and risk
support professionals currently working for the portfolio managers.
In regard to (B), (C) and (D), each is TRUE.
5. Correct Answer: D
False: The information ratio notoriously requires much data to be statistically significant; and "achieved" TE or IR,
because they rely on standard deviation, are greatly influenced by the environment.
Litterman (Rosengarten and Zangari): "The Sharpe and information ratios incorporate the following strengths:
 They can be used to measure relative performance vis à vis the competition by identifying managers who
generate superior risk-adjusted excess returns vis à vis a relevant peer group. RMUs and investors might
specify some minimum rate of acceptable risk-adjusted return when evaluating manager performance.
 They test whether the manager has generated sufficient excess returns to compensate for the risk assumed.
 The statistics can be applied both at the portfolio level as well as for individual industrial sectors and countries.
For example, they can help determine which managers have excess risk-adjusted performance at the sector
or country level.
The Sharpe and information ratios incorporate the following weaknesses:
 They may require data that may not be available for either the manager or many of his competitors. Often an
insufficient history is present for one to be conclusive about the attractiveness of the risk-adjusted returns.
 When one calculates the statistic based on achieved risk instead of potential risk, the statistic's relevance depends, to some degree, on whether the environment is friendly to the manager."
Note that false option (D) inverts these weaknesses: in fact the measures require substantial data and, when achieved rather than potential risk is used, cannot nullify (isolate from) the impact of the manager's environment.
6. Correct Answer: C
Greater dependence on style monitoring and manager's compliance with their articulated philosophy
In regard to false (D), Monte Carlo analysis is more relevant to potential (ex ante) evaluation than realized (ex post)
evaluation.
Portfolio Performance Evaluation | Questions
1. Thirty months ago (n = 30), $100 was invested and has grown to a value today of $175.00. The monthly returns were
normally distributed with a monthly standard deviation of 10.0%. Without the individual return data, by finding and using
the geometric (a.k.a., time-weighted) return, which of the following is the best estimate of the arithmetic average
monthly return of the series?
a) 0.72%
b) 1.88%
c) 2.38%
d) 3.54%
2. You make an investment over two periods, from today (T0) to the end of the first year (T1), and then to the end of
the second year (T2). Today (T0), you buy one share at a cost of $10.00.
The stock pays a $2.00 annual dividend. At the end of the first year (T1), your single share pays a $2.00 dividend; and,
as the price increased by only $1.00, you buy a second share at a cost of $11.00. By the end of the second year (T2),
the stock price has soared to $18.00. You then decide to collect both dividends ($2.00 for each share) and sell both
shares, for total proceeds at the end of the second year (T2) of $40.00. What are, respectively, the time-weighted
(aka, geometric) and dollar-weighted (aka, internal) rates of return?
a) 36.5% (time) and 45.8% (dollar)
b) 49.7% (time) and 56.3% (dollar)
c) 53.7% (time) and 60.0% (dollar)
d) 60.2% (time) and 71.2% (dollar)
3. Shares of XYZCorp. pay a $2 dividend at the end of every year on December 31. An investor buys two shares of the
stock on January 1 at a price of $20 each, sells one of those shares for $22 a year later on the next January 1, and sells
the second share an additional year later for $19. What are, respectively, the dollar- and time-weighted rates of
return on the 2-year investment? (Let's identify the error in the source: Concept Check 1, Chapter 24, Bodie Kane
Marcus)
a) 8.8% (dollar) and 5.6% (time)
b) 9.3% (dollar) and 6.4% (time)
c) 10.2% (dollar) and 8.3% (time)
d) 11.9% (dollar) and 7.0% (time)
4. Consider the following performance data for a sample period:
(Table omitted in the source; per the answer: portfolio return 15.0%, portfolio beta 1.6, risk-free rate 3.0%, market Treynor measure 6.0%.)
If the Portfolio (P) is one sub-portfolio that is combined with several other portfolios into a large investment fund,
which is the appropriate risk-adjusted performance measure (RAPM) and what is its value for Portfolio (P)?
a) Sharpe of 25.0%
b) Treynor of 6.0%
c) Treynor of 7.5%
d) Information ratio of 12.0%
5. Consider the following performance data for a sample period:
(Table omitted in the source; per the answer: portfolio return 11.0%, portfolio volatility 13.0%, risk-free rate 4.0%.)
If the Portfolio (P) represents the entire risky investment fund, which is the appropriate risk adjusted performance
measure (RAPM) and what is its value for Portfolio (P)?
a) Sharpe of 0.4167
b) Sharpe of 0.5385
c) Treynor of 8.750%
d) Information ratio of 0.2727
6. Consider the following performance data for a sample period:
(Table omitted in the source; per the answer: portfolio return 15.0%, portfolio beta 0.90, tracking error 20.0%; market return 8.0%, risk-free rate 2.0%.)
If the Portfolio (P) represents the active portfolio to be optimally mixed with the passive portfolio, which is the
appropriate risk-adjusted performance measure (RAPM) and what is its value for Portfolio (P)?
a) Sharpe of 0.4815
b) Jensen (alpha) of 0.0760
c) Treynor of 14.44%
d) Information ratio of 0.380
7. The following data compares a Portfolio (P) to the Market (M):
(Table omitted in the source; per the answer: portfolio return 15.0%, portfolio volatility 32.0%; market return 9.0%, market volatility 24.0%; risk-free rate 3.0%.)
What is the Modigliani-squared (M^2) measure of the portfolio?
a) -2.5%
b) +0.5%
c) +3.0%
d) +6.0%
8. The following data compares two portfolios, Portfolio (A) and Portfolio (B), to the Market (M):
(Table omitted in the source; per the answer: return(A) 15.0%, beta(A) 0.8, volatility(A) 30.0%; return(B) 24.0%, beta(B) 1.5, volatility(B) 24.0%; market return 12.0%, market volatility 16.0%; risk-free rate 3.0%.)
If we rank the portfolios according to, respectively, the Modigliani-squared (M^2) measure and slope of the T-line
(equalizing for beta), how do the portfolios rank against each other?
a) Portfolio (A) offers both a higher M^2 and steeper T-line than Portfolio (B)
b) Portfolio (A) offers a higher M^2, but Portfolio (B) has a steeper T-line
c) Portfolio (B) offers a higher M^2, but Portfolio (A) has a steeper T-line
d) Portfolio (B) offers both a higher M^2 and steeper T-line than Portfolio (A)
9. The following data compares two portfolios, Portfolio (A) and Portfolio (B), to the Market (M):
(Table omitted in the source; per the answer: alpha(A) 3.5%, beta(A) 0.70; alpha(B) 12.0%, beta(B) 1.20; market return 9.0%, risk-free rate 4.0%.)
What are, respectively, the Treynor-squared (T^2) measure of Portfolio (A) and Portfolio (B)?
a) T^2(A) = -1.0% and T^2(B) = 1.7%
b) T^2(A) = 1.0% and T^2(B) = 2.8%
c) T^2(A) = 3.0% and T^2(B) = 7.5%
d) T^2(A) = 5.0% and T^2(B) = 10.0%
10. The single index model describes the excess return of two securities: R(A) = 1.0% + 0.80*R(M) + e(A) and R(B) = 2.0%
+ 1.10*R(M) + e(B), where R(A) and R(B) are the excess returns above the risk-free rate; R(M) is the excess return of
the market index; and e(A) and e(B) are the residuals or sources of firm-specific return and risk. The volatility
(standard deviation) of the market index is 10.0%. Finally, the volatilities of the residuals are, Standard_Deviation [e
(A)] = 15.0% and Standard_Deviation [e (B)] = 30.0%. What is the correlation between securities A & B? Hint: the
covariance (A, B) = beta (A)*beta (B)*variance (index).
a) 0.0933
b) 0.1620
c) 0.2269
d) 0.3155
11. Over a measurement period, Joe Dart's portfolio produced a monthly alpha of 50 basis points and a beta (with respect
to the single-factor market index) of 0.630. The monthly standard deviation of the portfolio's residual (non-systematic
risk) was 5.0% and the standard deviation of the market index was 9.0% per month. What was the correlation
coefficient between the portfolio and the market index? Hint: R^2 = ESS/TSS.
a) 0.551
b) 0.677
c) 0.750
d) 0.820
12. Over an historical measurement period, hedge fund manager Joe Dart produced an alpha of 50 basis points per
month, or + 6.0% per year. The monthly standard deviation of the residual (non-systemic) risk was 4.0%. If we want
a two-tailed 90% significance level, how many months (N) are required to determine that Joe demonstrated skill, that
is, such that we can reject the null hypothesis that his true alpha is zero?
a) 27 months
b) 99 months
c) 174 months
d) 516 months
13. According to Bodie, Kane and Marcus, EACH of the following makes it difficult to evaluate the performance of hedge
funds EXCEPT for:
a) Liquidity risk: Hedge funds tend to hold more illiquid assets, such that compensation for illiquidity may
mistakenly appear to be alpha
b) Tail risk: Some hedge fund strategies will earn consistent profits for a period, appearing to be high reward per
unit of risk, but are exposed to tail risk; e.g., writing deep OTM puts
c) Incentive structure: carried interest fee creates a circularity in the evaluation model that is difficult to overcome
d) Instability of risk attributes: As hedge funds have greater leeway to invest opportunistically, their factor loading
and risk profile changes rapidly
14. In the first year, a hedge fund manager produces quarterly excess returns of: -1.0%, +5.0%, -1.0%, +5.0%. In the
second year, the manager shifts to a riskier strategy and produces quarterly excess returns of: -12.0%, +36.0%, -
12.0%, +36.0%. Consequently, the manager's Sharpe ratio was 0.67 in the first year and 0.50 in the second year. What
was the manager's Sharpe ratio of the eight quarters (two years) as measured in a single sequence?
a) 0.39
b) 0.58
c) 0.63
d) 0.72
15. About the market timing ability of fund managers, Bodie Kane and Marcus assert EACH of the following as true
EXCEPT:
a) The simple linear security characteristic line (SCL) is inadequate to the task of evaluating the performance of a
market timer
b) Although actual regression-based research (e.g., Henriksson) proves that managers are able to consistently and
successfully time markets, after deducting transaction costs the profitability of even a perfect market timer is not
economically significant
c) A viable way to measure both security selection and market timing is to add a quadratic term to the usual index
model, resulting in an expanded security characteristic line
d) A viable way to measure both security selection and market timing is to add a dummy variable to the usual
security characteristic line, so that the beta of the portfolio is (b) in bear markets but (b+c) in bull markets.
16. Bodie Kane Marcus define (P) as the measure (score) of market timing ability, where P =P (1) + P (2) - 1, such that P
(1) is the proportion of correct forecasts of bull markets and P (2) is the proportion of correct forecasts of bear
markets. Over the last ten periods (e.g., quarters), the first five periods saw consecutive bull markets; then the market
experienced a secular shift and produced five consecutive bear markets. Money manager Mr. Prabin predicted bull
markets for the first eight periods but, finally resigning himself to the secular shift, predicted bear markets in the last
two periods. What is Mr. Prabin's market timing score?
a) Zero
b) 0.20
c) 0.25
d) 0.40
17. A portfolio manager's "bogey" is a benchmark portfolio invested in three components:
60.0% in the S&P 500 (the equity index), 30.0% in a Lehman Bond Index (the bond index), and 10.0% in a money
market fund (the cash index). A portfolio passively invested in this bogey portfolio would have returned +4.0% over
the period. The manager's actual portfolio components included 70.0% in equities, 20.0% in bonds, and 10.0% in cash.
The manager's ACTUAL portfolio underperformed the bogey by 90 basis points (i.e., 3.1% actual portfolio return
compared to a 4.0% bogey portfolio return):
(Table omitted in the source; per the answer: benchmark returns were 5.0% for equities, 3.0% for bonds, and 0.0% for cash; actual returns were 4.0% for equities, 1.0% for bonds, and 0.0% for cash.)
If the excess return of -0.9% is decomposed into two components, asset allocation and security selection, what is the
contribution to the excess return from security selection?
a) -1.10%
b) -0.70%
c) -0.20%
d) +0.20%
18. To conduct a Style Analysis on a mutual fund that cannot short securities, you regress the fund's returns against five
regressors: T-bills, a small capitalization index, a large capitalization index, a growth index and a value index. The
estimated regression is: R(t) = 0*TBILL - 0.2*SMALL + 1.4*LARGE + 0.3*GROWTH + 0.7*VALUE + 1.51. Your colleague
Jane criticizes your style analysis with the following statements:
I. You should not have a zero coefficient (0*TBILL)
II. You should not have a non-zero intercept (1.51)
III. You should not have a negative coefficient (-0.2*SMALL)
IV. You should not have a coefficient that exceeds one (1.4*LARGE)
V. The sum of the coefficients should not be different than 1.0 (0 - 0.2 + 1.4 + 0.3 + 0.7 = 2.2 which <> 1.0)
Which of the criticisms is (are) correct?
A. None are correct; the regression is fine
B. I. and II. are correct only
C. III., IV. and V. are correct only
D. All are correct
Portfolio Performance Evaluation | Answers
1. Correct Answer: C
The geometric average monthly return = (175/100) ^ (1/30) - 1 = 1.882893%; this geometric average return is the
time-weighted return. As E [geometric average] = E [arithmetic average] - 0.5*variance; i.e., volatility erodes the
geometric average, the E [arithmetic average] = E [geometric average] + 0.5*variance = 1.882893% + 0.5*10%^2 =
2.382893%
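A minimal Python check of both steps (figures from the question; names are ours):

    n, v0, v1, sigma = 30, 100.0, 175.0, 0.10
    geo = (v1 / v0) ** (1.0 / n) - 1        # 1.8829% geometric (time-weighted) mean
    arith = geo + 0.5 * sigma ** 2          # ~2.3829% estimated arithmetic mean
    print(round(geo, 6), round(arith, 6))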
2. Correct Answer: C
53.7% (TWR) and 60.0% (dollar-weighted)
Time-weighted (geometric) return:
R (1) = (11+2-10)/10 = 30%;
R (2) = (18+2-11)/11 = 81.818%
R (G) = (1.30 * 1.81818) ^ (1/2) - 1 = 53.74%
Dollar-weighted return:
10 + 11/(1+r) = 2/(1+r) + 40/(1+r) ^2;
10(1+r) ^2 + 9(1+r) - 40 = 0; let a = (1+r):
10a^2 + 9a - 40 = 0;
(5a - 8) (2a + 5) = 0, such that
5a = 8, a = 1.6 and r = 60%.
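The same computation as a short Python sketch (the positive root of the quadratic gives the dollar-weighted return; names are ours):

    r1 = (11 + 2 - 10) / 10                    # +30.0% in year 1
    r2 = (18 + 2 - 11) / 11                    # +81.82% in year 2
    twr = ((1 + r1) * (1 + r2)) ** 0.5 - 1     # ~53.74% time-weighted
    # Dollar-weighted: 10a^2 + 9a - 40 = 0 with a = 1 + r; take the positive root
    a = (-9 + (9 ** 2 + 4 * 10 * 40) ** 0.5) / (2 * 10)
    print(round(twr, 4), round(a - 1, 4))      # 0.5374 and 0.60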
3. Correct Answer: D
11.9% (dollar) and 7.0% (time)
Time 0: buy two shares = -40 cash flow
Time 1: collect dividends, sell one share = 4 + 22 = +26
Time 2: collect dividend on remaining share, sell it = 2 + 19 = +21
Dollar-weighted return: -40 + 26/(1+r) + 21/(1+r) ^2 = 0; r = 11.91%
Time-weighted return: r (1) = [2 + (22-20)]/20 = 20%, r (2) = [2 + (19-22)]/22 = -4.54%, such that:
time-weighted return = SQRT [(1+20%) *(1-4.54%)]-1 = 7.026%
... note the text solution computes an arithmetic average, which is higher!
4. Correct Answer: C
Treynor of 7.5% (The Treynor ratio of the Market is 6.0%)
Treynor (Portfolio) = (15% - 3%)/1.6 = 7.5%.
Bodie: "This case might describe a situation where Jane, as a corporate financial officer, manages the corporate
pension fund. She parcels out the entire fund to a number of portfolio managers. Then she evaluates the
performance of individual managers to reallocate the fund to improve future performance. What is the correct
performance measure?
... Treynor's performance measure is appealing because when an asset is part of a large investment portfolio, one
should weigh its mean excess return against its systematic risk rather than against total risk to evaluate contribution
to performance."
5. Correct Answer: B
Sharpe of 0.5385
Sharpe (Portfolio) = (11.0% - 4.0%)/13.0% = 0.5385.
Bodie: "In sum, when [the] portfolio represents [the] entire investment fund, the benchmark is the market index or
another specific portfolio. The performance criterion is the Sharpe measure of the actual portfolio versus the
benchmark."
6. Correct Answer: D
Information ratio of 0.380
Jensen (alpha) = 15.0% - [2.0% + 0.90*(8.0%-2.0%)] = 0.0760.
Information ratio = alpha/tracking error = 0.0760/20% = 0.380
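In Python (figures taken from the reconstructed question data above):

    alpha = 0.15 - (0.02 + 0.90 * (0.08 - 0.02))    # Jensen's alpha = 0.0760
    print(round(alpha, 4), round(alpha / 0.20, 3))  # information ratio = 0.380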
7. Correct Answer: C.
The ratio of volatilities = 24%/32% = 0.75 such that a re-mixed portfolio (P*) with 25% cash will
have the same volatility (24%) as the market.
This re-mixed portfolio has expected return, Return(P*) = 25%*3.0% + 75%*15% = 12.0%, such
that:
M^2 [portfolio] = 12.0% - market's 9.0% = +3.0%
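A quick Python check (figures taken from the reconstructed question data above):

    w = 0.24 / 0.32                      # 75% in P, 25% in cash matches market vol
    r_star = (1 - w) * 0.03 + w * 0.15   # 12.0% return of the re-mixed portfolio P*
    print(round(r_star - 0.09, 4))       # M^2 = +3.0% over the market's 9.0%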
8. Correct Answer: C
Portfolio (B) offers a higher M^2, but Portfolio (A) has a steeper T-line
Portfolio (A) M^2 = [16%/30%*15% + 14%/30%*3%] - 12% = -2.6%
Portfolio (B) M^2 = [16%/24%*24% + 8%/24%*3%] - 12% = +5.0%; i.e., the M^2 is higher for
Portfolio (B)
Treynor (A) = (15% - 3%)/0.8 = 15%.
Treynor (B) = (24% - 3%)/1.5 = 14%; i.e., the slope of the T-line is steeper for Portfolio (A).
For example, if we re-mix (A) to match the beta of (B), then (A) would leverage (multiply) by 1.8750 and alpha (A*)
would equal 9.0%, compared to alpha of (B) of 7.5%.
9. Correct Answer: D
T^2(A) = 5.0% and T^2(B) = 10.0%
T^2 is the alpha if the portfolio is re-mixed (with the risk-free asset) so its beta = 1.0:
T^2(A) = Jensen's alpha of 3.5% * 1.0 / 0.70 = 5.0%
T^2(B) = Jensen's alpha of 12% * 1.0 /1.20 = 10.0%
or,
T^2(A) = Treynor(A) - Market excess return = 10% - (9% - 4%) = 5.0%.
T^2(B) = Treynor(B) - Market excess return = 15% - (9% - 4%) = 10.0%.
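Both routes agree in Python (figures taken from the reconstructed question data above):

    alpha_a, beta_a = 0.035, 0.70
    alpha_b, beta_b = 0.12, 1.20
    print(round(alpha_a / beta_a, 4), round(alpha_b / beta_b, 4))   # 0.05 and 0.10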
10. Correct Answer: B
Volatility (A) = SQRT [Total Risk (A)] = SQRT [Systematic Risk (A) + Firm-specific Risk(A)] = SQRT [beta
(A)^2*StdDev(M)^2 + e(A)^2] = SQRT [0.80^2*10%^2 + 15%^2] = 17.00%
Volatility (B) = SQRT [beta (B)^2*StdDev(M)^2 + e(B)^2] = SQRT [1.10^2*10%^2 + 30%^2] = 31.95%
Covariance (A, B) = beta (A)*beta(B)*variance(index) = 0.80*1.10*10%^2 = 0.008800
Correlation (A, B) = Covariance (A, B) / [Volatility(A)*Volatility(B)] = 0.008800 / (17.00%*31.95%) = 0.1620
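The same computation as a short Python sketch (figures from the question; names are ours):

    beta_a, beta_b = 0.80, 1.10
    sd_m, sd_ea, sd_eb = 0.10, 0.15, 0.30
    vol_a = (beta_a ** 2 * sd_m ** 2 + sd_ea ** 2) ** 0.5   # 17.00%
    vol_b = (beta_b ** 2 * sd_m ** 2 + sd_eb ** 2) ** 0.5   # 31.95%
    cov_ab = beta_a * beta_b * sd_m ** 2                    # 0.008800
    print(round(cov_ab / (vol_a * vol_b), 4))               # 0.1620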
11. Correct Answer: C
0.750
ESS = portfolio's systematic risk = beta (A)^2*StdDev(M)^2 = 0.63^2*9.0%^2 = 0.003215
TSS = portfolio's total risk = beta (A)^2*StdDev(M)^2 + e(A)^2 = 0.63^2*9.0%^2 + 5.0%^2 =
0.005714890
ESS/TSS = 0.003215 / 0.005714890 = 0.562546 and correlation = SQRT (ESS/TSS) = SQRT (0.003215 / 0.005714890)
= 0.750031;
or, correlation = SQRT [beta(A)^2*StdDev(M)^2] / [beta(A)^2*StdDev(M)^2 + e(A)^2] = SQRT ([0.63^2*9.0%^2] /
[0.63^2*9.0%^2 + 5.0%^2]) = 0.750031
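In Python (figures from the question; names are ours):

    beta, sd_m, sd_e = 0.63, 0.09, 0.05
    ess = beta ** 2 * sd_m ** 2            # explained (systematic) variance
    tss = ess + sd_e ** 2                  # total variance
    print(round((ess / tss) ** 0.5, 4))    # correlation ~= 0.7500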
12. Correct Answer: C
174 months: since t(alpha) = alpha*SQRT(N)/StdDev(residual), N = [critical-t * StdDev(residual)/alpha]^2 =
[1.645*4.0%/0.0050]^2 = 173.2, such that roughly 174 months are required (the exact 90% two-tailed deviate of 1.6449 gives 173.15).
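A quick Python check (we assume the 1.645 deviate; variable names are ours):

    alpha, sigma_e, t_crit = 0.0050, 0.04, 1.645       # monthly alpha, residual vol, deviate
    print(round((t_crit * sigma_e / alpha) ** 2, 2))   # ~173.19, i.e., about 174 months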
13. Correct Answer: C
Please note, in addition to true (A), (B) and (D), the fourth difficulty listed by Bodie et al is SURVIVORSHIP BIAS. In
summary, the four difficulties are: 1. Illiquid assets; 2. Tail risk; 3. Unstable risk profiles; and 4. Survivorship bias
14. Correct Answer: A
Average excess return = 7.0% and standard deviation = 17.82%, such that Sharpe ratio = 7/17.82 = 0.39.
However, consistent with Example 24.3, calculations are probably not necessary: a fine guess is that the 2-year
Sharpe ratio (incorporating a dramatic shift in the average return) is lower than either 1-year Sharpe ratio. The
difference in means, due to strategy shift, contributes to greater volatility (and lower Sharpe ratio) in the overall
sequence. Bodie's point is that the changing risk profiles associated with active strategies imply "for actively
managed portfolios it is helpful to keep track of portfolio composition and changes in portfolio mean and risk."
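In Python, assuming (as the answer apparently does) the population standard deviation:

    import statistics as st

    rets = [-0.01, 0.05, -0.01, 0.05, -0.12, 0.36, -0.12, 0.36]
    print(round(st.mean(rets) / st.pstdev(rets), 2))   # 0.39 pooled Sharpe ratio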
15. Correct Answer: B
The opposite: Perfect market timing is extraordinarily profitable (see Bodie's Table 24.4); however, Bodie's cited
research shows that most managers are generally unable to consistently time markets.
In regard to (A), (C) and (D), each is TRUE.
In regard to (C), this refers to expanded SCL: r(p) - r(f) = a + b*[r(m) - r(f)] + c*[r(m) - r(f)] ^2 + E (p)
In regard to (D), this refers to expanded SCL: r(p) - r(f) = a + b*[r(m) - r(f)] + c*[r(m) - r(f)] *D + E (p), where D is
dummy variable
16. Correct Answer: D
P (bull markets) = 5/5 = 100%. P (bear markets) = 2/5 = 40%. P = 100% + 40% - 1 = 0.40
17. Correct Answer: A
Indirectly, the contribution from asset allocation = sum of: (actual weight - benchmark
weight) *benchmark return = (0.7 - 0.6) *5.0% + (0.2 - 0.3) *3.0% + 0 = + 0.20%;
such that security selection must be -1.1% or -110 basis points: +0.20% from asset allocation -
1.1% from security selection = -0.9% excess return
Or, directly, contribution from security selection = sum of: Portfolio weight * (Actual Return -
Benchmark Return)
= 70%*(4.0% - 5.0%) + 20%*(1.0% - 3.0%) + 10.0%*0 = -0.7%-0.4% = -1.1% or -110 basis points
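Both routes in a short Python sketch (figures from the answer; names are ours):

    wb, wp = [0.60, 0.30, 0.10], [0.70, 0.20, 0.10]   # bogey vs. actual weights
    rb, rp = [0.05, 0.03, 0.00], [0.04, 0.01, 0.00]   # benchmark vs. actual returns
    allocation = sum((a - b) * r for a, b, r in zip(wp, wb, rb))   # +0.0020
    selection = sum(a * (x - y) for a, x, y in zip(wp, rp, rb))    # -0.0110
    print(round(allocation, 4), round(selection, 4))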
18. Correct Answer: C
III., IV. and V. are correct only. The constraints are that the regression coefficients lie between 0 and 1.0 (no
negatives, reflecting the prohibition on short positions), and that they sum to 1.0 so as to represent a comparison
portfolio that is fully allocated to styles.
In regard to I., zero is an acceptable coefficient; i.e., the fund is not sensitive to T bills.
In regard to II., the intercept is the return that cannot be attributed to a passive style (asset allocation) and "can be
attributed to security selection within asset classes, as well as timing
that shows up as periodic changes in allocation" (i.e., alpha; skill or luck)
Hedge Funds | Questions
1. Your wealthy Aunt Betty has capital to invest and is trying to choose between a hedge fund and a mutual fund.
According to Fung and Hsieh, which of the following is the best argument in favor of a hedge fund investment rather
than a mutual fund investment?
a) She wants to minimize the total fees charged, which reduce her net return
b) She prefers the investor protections associated with the greater regulations imposed on hedge funds
c) She seeks a manager with the skill set to market-time, fund leveraged positions and/or carry short positions
d) She prefers a simple strategy that is conducive to a high degree of transparency
2. Fung and Hsieh identify several biases which commonly attend to hedge fund reporting (although these biases are
not necessarily exclusive to the hedge fund industry!). These biases include each of the following EXCEPT which is
incorrectly defined?
a) Survivorship bias occurs when a poor-performing fund that has been terminated is not reported in the database
b) Selection bias (aka, self-reporting bias) refers to an ex-post self-re-classification to a more favorable strategy;
e.g., at the end of the year, a risk arbitrage fund re-classifies itself as a global macro fund due to the latter's strong
performance
c) Backfill bias (aka, instant history bias) occurs when a new fund enters a database with its performance history
included, which may include performance during an incubation period
d) Measurement bias is an outcome which encompasses several potential causes including survivorship bias,
backfill bias, database merges, and fund migration among databases
3. Fung and Hsieh sketch the historical evolution of the hedge fund industry by identifying major events or milestones.
According to them, each of the following was a landmark event EXCEPT which was not?
a) In 1994, the arrival of commercially available hedge fund databases, hedge fund indexes and performance
reporting dramatically altered the flow of information between hedge funds and investors
b) In 1998, the collapse of Long-Term Capital Management (LTCM) left an indelible scar on hedge fund investors’
confidence and was a turning point in the capital formation process of the hedge fund industry
c) During the technology bubble of 1998 to 2000, hedge funds both failed to "ride the bubble up" and further failed
to reduce their holdings before prices collapsed; consequently, the dot-com bubble tarnished the hedge fund
industry for approximately a decade
d) The early 2000s marked the arrival and emergence of institutional investors as a dominant investor group; they
raised expectations for operational integrity and demanded a governance process in hedge fund firms
4. According to Fung and Hsieh, "the majority of managed futures funds pursue trend following strategies." Therefore,
they classify managed futures hedge funds as a directional style (as opposed to non-directional, relative-value, or
arbitrage-like). Which exotic or option trading strategy most nearly resembles the payoff of a trend-following strategy?
a) Asian average price call with an arithmetic price average
b) Exchange option where the underlying asset is the risk-free asset
c) A chooser option where the call and put share the same strike price
d) Floating look back straddle; i.e., floating look back call plus floating look back put
5. Fung and Hsieh describe one hedge fund strategy in the following way: "[These] fund managers are known to be
highly dynamic traders, often taking highly leveraged bets on directional movements in exchange rates, interest rates,
commodities, and stock indices in an opportunistic manner. We can think of [them] as a highly active asset allocator
betting on a range of risk factors over time. Over a reasonably long time frame, opportunities in different risk factors
will come and go." To which strategy does this description refer?
a) Merger arbitrage; aka, risk arb
b) Global Macro
c) Event-driven distressed
d) Convertible arbitrage
6. Fung and Hsieh include two major strategies in the event-driven category: risk (merger) arbitrage and distressed.
About these event-driven strategies, each of the following is true EXCEPT which is false?
a) Risk arbitrage strategy returns (risk arbitrage index) are correlated to the S&P 500
b) Distressed strategy returns (Distress index) are correlated with high yield bonds
c) Both of these event-driven strategies--risk arb and distressed--exhibit nonlinear returns characteristics
d) Both of these event-driven strategies--risk arb and distressed--tend to benefit from extreme market moves in
either direction.
7. You receive a general solicitation from a hedge fund, which is now possible under the 2012 JOBS Act. The fund
advertises its style as a long/short equity manager. However, you are skeptical of each of the solicitation's claims. In
fact, each of the following claims from the long/short equity manager is dubious or unlikely EXCEPT which is the MOST
PLAUSIBLE or MOST LIKELY?
a) As a long/short equity manager, we are located in a rare (uncommon) style with few competitors who also
employ a long/short strategy
b) As a long/short equity manager, we are non-directional with virtually no exposure to the stock market
c) As a long/short equity manager, you might find us long small cap stocks and emerging markets but short large
cap stocks
d) As a long/short equity manager, we try to avoid the idiosyncratic risk that attaches to so called "stock pickers"
in favor of acknowledging that markets are efficient
8. According to Fung and Hsieh, "An important feature that attracts investors to the hedge fund industry is the variety
of strategies hedge fund managers deploy and the diversity of assets to which these strategies are applied." Which
of the following best summarizes the historical evidence with respect to the ability of hedge funds to provide
diversification benefits?
a) With few exceptions, most hedge fund strategies have successfully provided high diversification benefits and
low correlation through all recent, major market events
b) Although there has rarely been a problem on the liability (funding) side of the balance sheet, diversification
among risk factor exposures on the asset side has rarely been achieved in practice
c) Managers who successfully diversified their risk factor exposures on the asset side, in turn, guaranteed their
diversification on the liability (funding) side and thrived during major market events
d) Most strategies employ leverage and are vulnerable to forced liquidation, a shared problem on the liability side
of the balance sheet, and there is no easy and complete solution
9. Which of the following is the best summary of the primary principal-agent risk (or problem) in the hedge fund
industry, according to Fung and Hsieh?
a) Although "clearly of academic interest," there is neither credible evidence, nor a plausible narrative in support
of, a realistic principle-agent problem in the hedge fund industry
b) The principle-agent risk arises due to the incentive fee structure and the high-water mark (HWM), but can be
mitigated by agents (fund managers) who invest a sizable amount of their own wealth in the fund
c) The principle-agent risk arises due to the fact that hedge fund managers have more information about the fund's
investments than managers, and there is no easy solution to this principle-agent problem because the principals
are not full-time investors themselves
d) The principle-agent risk arises due to the fact that the hedge fund manager, who charges a management fee as
a percentage of assets under management (AUM), wants to grow the size of the firm rather than outperform,
but this can be addressed simply by avoiding big funds in favor of small funds
Hedge Funds | Answers
1. Correct Answer: C
She seeks a manager with the skill set to market-time, fund leveraged positions and/or carry short positions
 Fung and Hsieh: "These are the defining characteristics of a hedge fund manager’s skill set—the ability to
identify profitable long as well as short opportunities in a range of asset categories, the organization structure
to carry short positions for extended periods of time, the know-how to fund leveraged positions, and the risk
management skill to maintain complex positions during volatile markets."
 In regard to (A), (B) and (D), each is FALSE.
2. Correct Answer: B
Selection bias (aka, self-reporting bias) occurs when hedge funds decide not to report their information to databases
which can render the database non-representative, for example, of a strategy
 In regard to (A), (C) and (D) each is TRUE
3. Correct Answer: C
According to Fung and Hsieh, "the hedge fund industry as a whole was affected in a major way" by the dot-com
bubble; further, according to the cited paper, "Hedge Funds and the Technology Bubble," in general hedge funds
did ride [up] the technology bubble and did reduce their holdings before prices collapsed.
 In regard to TRUE (A), (B) and (D), the three major landmark events, then, were:
1. Arrival of commercially-available hedge fund databases (and indexes and performance reporting),
2. Collapse of LTCM, and
3. Emergence of institutional investors and their promotion of more professional practices including
operational integrity, governance processes, and risk management.
4. Correct Answer: D
Floating look back straddle; i.e., floating look back call plus floating look back put
Fung and Hsieh: "Fung and Hsieh (2001) show that majority of managed futures funds pursue trend following
strategies. Merton (1981) showed that a market timer, who switches between stocks and Treasury bills, generates
a return profile similar to that of a call option on the market.
Fung and Hsieh (2001) generalized this observation to encompass both long and short positions. The resulting return
profile is similar to that of a straddle. Over any observation period, a trend follower with perfect foresight would
have been able to initiate (and to exit) trading positions at the best possible price. The payout function of a trend
follower with perfect foresight therefore resembles that of a look back straddle. Since the payout of a look back
option is the same as that of a trend follower with perfect foresight, the execution cost of such a straddle (or the
price of the look back option) for a given trend follower can be interpreted as reflecting the cost of initiating (and
exiting) trading positions at sub-optimal points. How well do look back straddles mimic the performance of trend
following hedge funds?"
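To make the lookback-straddle payoff concrete, here is a minimal Python sketch (our own illustration, not Fung and Hsieh's code; the simulated price path and its parameters are purely hypothetical). A floating lookback call pays the terminal price minus the path minimum, and a floating lookback put pays the path maximum minus the terminal price, so the straddle captures the full high-low range -- the profit of a trend follower with perfect foresight:

import numpy as np

rng = np.random.default_rng(42)

def lookback_straddle_payoff(path):
    # Floating lookback call pays S_T - min(S); floating lookback put
    # pays max(S) - S_T. Their sum is the full range max(S) - min(S).
    path = np.asarray(path, dtype=float)
    call = path[-1] - path.min()
    put = path.max() - path[-1]
    return call + put

# Illustrative geometric Brownian motion path (hypothetical parameters)
n, s0, mu, sigma, dt = 252, 100.0, 0.05, 0.20, 1.0 / 252
log_steps = rng.normal((mu - 0.5 * sigma**2) * dt, sigma * np.sqrt(dt), n)
path = s0 * np.exp(np.cumsum(log_steps))

print(f"Lookback straddle payoff: {lookback_straddle_payoff(path):.2f}")

The execution cost of replicating this payoff with actual trades is, as the authors note, the price of the lookback option for a given trend follower.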
5. Correct Answer: B
Fung and Hsieh: "Like most other qualitative description of hedge fund styles, there is no universally accepted
definition of Global Macro hedge funds as a group. Below is how Global
Macro managers are described in the Dow Jones Credit Suisse website:
 Global macro funds typically focus on identifying extreme price valuations and leverage is often applied on
the anticipated price movements in equity, currency, interest rate and commodity markets. Managers
typically employ a top-down global approach to concentrate on forecasting how political trends and global
macroeconomic events affect the valuation of financial instruments. Profits can be made by correctly
anticipating price movements in global markets and having the flexibility to use a broad investment mandate,
with the ability to hold positions in practically any market with any instrument.


These approaches may be systematic trend following models, or discretionary.


Fung and Hsieh (2006) provide a detailed description of Global Macro hedge fund managers’ performance. "Global
Macro fund managers are known to be highly dynamic traders, often taking highly leveraged bets on directional
movements in exchange rates, interest rates, commodities, and stock indices in an opportunistic manner. We can
think of a Global Macro manager as a highly active asset allocator betting on a range of risk factors over time.
Over a reasonably long time frame, opportunities in different risk factors will come and go. Ex post the performance
of a Global Macro manager may well resemble that of a diversified hedge fund portfolio with highly variable
exposures to a broad set of global risk factors. As a first approximation, Fung and Hsieh (2006) applied the seven-
factor model of Fung and Hsieh
(2004b), which was originally designed to capture the inherent risks in diversified portfolios of hedge funds, and
reported reasonable results. In this chapter, we update the results to the full eight-factor model (see also Section
2.4)."
6. Correct Answer: D
Unlike trend followers, who tend to benefit from extreme moves, Event-Driven funds are hurt by extreme moves.
 Fung and Hsieh: "To summarize, the two major strategies in the Event-Driven hedge fund style category both
exhibit nonlinear returns characteristics—mostly as tail risk that show up under extreme market conditions.
In the case of Risk Arbitrage, the tail risk is a large drop in equities. In the case of distressed hedge funds, the
tail risk is in a large move of short-term interest rates. However, unlike trend followers, who tend to benefit
from extreme moves, Event-Driven funds are hurt by extreme moves."
 In regard to (A), (B) and (C), each is TRUE.
7. Correct Answer: C
As a long/short equity manager, you might find us long small cap stocks and emerging markets but short large cap
stocks. Fung and Hsieh: "All [equity long/short managers] are subject to the basic phenomenon that 'under-priced
stocks,' if they exist, are likely to be found among smaller, 'under-researched' stocks, or foreign markets (particularly
emerging markets). On the short side, liquidity condition in the stock-loan market makes small stocks and foreign
stocks much less attractive candidates for short sales."
In regard to (A), (B) and (D), each is dubious or unlikely from a long/short equity manager.
 In regard to (A), say Fung and Hsieh, "[long/short equity] is an important hedge fund style category. The
long/short equity style consistently accounts for 30–40% of the total number of hedge funds."
 In regard to (B), "Agarwal and Naik (2004) studied a wide range of equity-oriented hedge funds, and Fung and
Hsieh (2011) focused on long/short equity funds. The empirical evidence shows that long/short equity funds
have directional exposure to the stock market as well as exposure to long small-cap/short large-cap positions,
similar to the SMB factor in the Fama and French (1992) three-factor model for stocks."
 In regard to (D), "Typically, long/short equity hedge fund managers are stock pickers with diverse opinions
and ability. As such, the individual performance of these managers is likely to be highly idiosyncratic."
8. Correct Answer: D
Most strategies employ leverage and are vulnerable to forced liquidation, a shared problem on the liability side of
the balance sheet, and there is no easy and complete solution
Fung and Hsieh: "Can funding risk be mitigated in part or in whole? It is now almost four years since the 2008
financial crisis and the impact of global deleveraging is still unfolding with no end in sight. The profitability of most
hedge fund strategies is driven by effective use of leverage. Therefore, the risk of another funding crisis similar in
character to what we experienced in the summer of 2008 is something that cannot, and should not, be overlooked.
It is also a risk that cannot be easily mitigated by simply spreading one’s capital to different hedge fund strategies.
The need to consider hedging a credit-driven tail risk event in a hedge fund portfolio is clear; however, a complete
solution to this problem remains beyond our grasp and lies beyond the scope of this chapter."
 In regard to (A), (B) and (C), each is FALSE.


9. Correct Answer: B
The principle-agent risk arises due to the incentive fee structure and the high-water mark (HWM), but can be
mitigated by agents (fund managers) who invest a sizable amount of their own wealth in the fund
 Please note that principal-agent risk (the "agency problem") not only manifests in various relationships aside
from hedge funds, but further, within the investor-hedge fund relationship, there are SEVERAL potential
agency risks. It's debatable whether the agency problem that Fung and Hsieh identify (i.e., undue risks in the
face of incentive fees and the HWM) is the primary agency risk.
In regard to (D), this is tempting and defensible; however, please note (in the context of the assigned reading) that
Fung and Hsieh argue, based on evidence, that larger funds are generally more desirable due to their ability to earn
alpha (and they also argue that managers at large funds are less vulnerable to the temptation to take undue
risks). Further, it's not obvious that wanting to grow AUM would imply that the manager does not want to outperform!


Performing Due Diligence on Specific Managers and Funds | Questions


1. You have been tasked to perform due diligence on a hedge fund. According to Suresh, each of the following is a
desirable--or even necessary--characteristic of the fund or critical activity in the due diligence process EXCEPT which
is not desirable or necessary?
a) You perform reference checks and background checks on the firm Principals
b) You are given specific information in order to understand the firm's ownership (equity) structure
c) You have access to the actual decision makers and, as investors, you will deal directly with the actual risk takers
and decision makers
d) The Founder/CEO is also the Chief Investment Officer (CIO) and has chief responsibility for the fund's risk and
business model, which is desirable because this ensures total accountability at the top for all three areas
(investment, operational and business)
2. You are tasked to perform due diligence on a hedge fund and specifically its risk management process. According to
Suresh, which of the following is MOST LIKELY to be a red flag?
a) The fund's Risk Manager reports to the Portfolio Managers with dotted-line reporting to the Traders
b) The fund employs an independent Risk Service Provider but the provider is not co-located at the firm's
headquarters
c) The fund claims that its process for originating and controlling risk is "complex"
d) The fund employs different strategies but they each tend to incur a different set of risks, rather than a common
set of risks
3. Your team is tasked to perform due diligence on a relatively new long/short equity hedge fund. Your job on the due
diligence team is to evaluate the firm's risk management process (other team members are evaluating, respectively,
the investment process, the operational environment, and the business model). According to Mirabel, none of the
following facts about the firm is by itself (i.e., without further context) necessarily alarming or problematic, HOWEVER
which fact is a red flag or unacceptable?
a) The firm has both a high-water mark and a one-year lockup
b) The firm does not share equity ownership among the portfolio managers or traders in the firm
c) The firm is new and small and consequently does not yet provide periodic reports to investors
d) The firm is new and small but only uses one prime broker
4. In regard to the hedge fund's operational environment, Suresh advises each of the following as true for investors
during due diligence, EXCEPT which is not true?
a) It is important to ensure that the fund offering memo, subscription agreement, limited partnership agreement,
investment management agreement, Form ADV, and website are all saying the same thing at the same point in
time.
b) Insufficient risk factors are more of a red flag than too many. In addition, overly broad and irrelevant risk factors
are also a red flag. In the latter case, the law firm may not have adequately looked at the manager's program or
is drafting the OM so broadly that it is looking primarily to protect itself, not the investor.
c) Investors should expect that a hedge fund will make all of the key contacts at their service providers available to
them so they can verify the scope of any services being provided. Investors can also obtain internal control letters
and audited financial statements from the fund's service providers to make sure there is an independent check
on the service providers themselves.
d) The limited partnership agreement should include indemnification provisions such that the fund itself
indemnifies the manager and all directors against gross negligence, bad faith, fraud, or willful misconduct of the
manager


5. Each of the following is a potential indicator of hedge fund FRAUD risk, except which is not (or is the least likely to
indicate fraud)?
a) A high percentage of illiquid investments or those marked to market by the manager
b) Personal trading by managers in the same securities or similar securities as the fund
c) Fixed operating costs are above the industry average while manager compensation has a prominent variable
component
d) Unusually strong performance claims; e.g., hedge fund performance claims are better than market average over
a long period of time
6. According to Suresh, which of the following is true about hedge fund due diligence?
a) An investor must use a comprehensive checklist to ensure that nothing is left out or omitted yet remain free to
ask open-ended questions that provide insights into a firm's philosophy or culture
b) Due diligence is an activity with diminishing marginal returns which consumes the time of the fund's investment
professionals; therefore, a potential investor is wise to limit the inquiry to a quantitative analysis which tends to
be objective and fact-based
c) Hedge funds are still largely unregulated and, unfortunately, anyone who feels they have been defrauded by a
hedge fund cannot realistically report to a government agency as no single agency has jurisdiction
d) Because due diligence is focused on the firm's risk management process, fund operating environment, business
model and fraud risk, the investment strategy (e.g., equities, fixed-income, distressed) is largely irrelevant


Performing Due Diligence on Specific Managers and Funds | Answers


1. Correct Answer: D
False: "Founders who are the CIO and CEO and who also manage a fund's risk and business model themselves are,
quite frankly, considered dinosaurs today. Today's top managers are very often making the choice to either be the
CIO and share the CEO role or, where they are serving both roles, to bring in senior partners who can add depth and
have specialized risk, accounting, operations, or trading skills beyond their own. Investors in hedge funds now are
quite focused on making sure the organization can provide all three skills at a very high level, even though any one
founder cannot."
In regard to (A), (B), and (C) each is TRUE.
 "Investors who are thinking about a particular manager need to allocate resources to both references and
background checks."
 "Understanding the firm's ownership structure and how things get done is critical. The participation of the
investment team in some form of ownership is a critical area of differentiation among firms."
 "Most if not all of the questions related to a fund's investment process should be done in person with as many
people of mixed seniority as possible. The goal is to find out what is really going on at the fund and not just
have a pitch book recitation by the investor relations staff. Remember that if you cannot get access to the
actual decision makers when you are trying to invest, it is unlikely you will have access if something goes
wrong!
... Investors need to make sure they have access to the people at the top of the firm. It is also important that investors
deal directly with the actual risk takers and decision makers and not just investor relations, sales people, or junior
staff at the fund."
2. Correct Answer: A
The fund's Risk Manager reports to the Portfolio Managers with dotted-line reporting to the Traders
Suresh: "Risk management is an emerging discipline at many funds. That is not to say risk was not managed well in
the past. It is more a reflection on the fact that risk measurement and reporting and the decision to take action is
evolving toward a more independent model that segregates risk taking and investing from risk measurement and
management. Today, many funds have dedicated risk managers who report to the CIO or the CEO independently
from the portfolio managers and traders. Many firms also employ independent risk service providers to report risk
to investors completely independently from the firm. Some use fund accountants and administrators to achieve this
goal."
3. Correct Answer: C
Regardless of the fund's age or size, "Investors are always entitled to periodic reporting from the fund."
In regard to (A), (B) and (D), none is itself necessarily problematic
4. Correct Answer: D
Typical indemnifications should NOT extend to the gross negligence, bad faith, fraud, or willful misconduct of the
manager.
In regard to (A), (B), and (C), each is TRUE.


5. Correct Answer: C
This might be a business risk indicator but not a fraud risk indicator
In regard to (A), (B) and (D), each is listed as a potential indicator of fraud that investors should investigate before
investing in a fund:
"Civil regulatory agencies like the Securities and Exchange Commission and the Commodity
Futures Trading Commission have identified several indicators of fraud in hedge funds:
 Lack of trading independence: hedge fund managers trading through affiliated broker-dealers
 Investor complaints: investors being unable to redeem their investments in a timely fashion
 Audit issues: lack of audits by reputable independent accounting firms
 Litigation: hedge funds being sued civilly by investors alleging fraud
 Unusually strong performance claims: hedge fund performance claims are better than market average over a
long period of time (they can't always win, but if they do, possible indicator of insider information or false
reporting).
 Illiquid investments: investing in a commodity which is not easy to value (incentive to overvalue investment
to earn a larger commission)
 Valuation issues: use of related parties to value illiquid investments or use of a non-independent fund
administrator
 Personal trading: hedge fund managers trading in their own accounts
 Aggressive Bear Shorting: hedge funds take a short position in a stock (bet it will go down) and orchestrate
efforts to disseminate unfounded or materially false negative information about the stock, eroding the price
and allowing the perpetrators to profit on the short position"
6. Correct Answer: A
"Checklist" is the keyword here. Key themes of the reading include that
(i) Due diligence is an art and a science, including both quantitative and qualitative analyses,
(ii) That it should be flexible to the situation and the firm's unique investment strategy, but also
(iii) That it should definitely be comprehensive and holistic, where a checklist is a key tool. The primary mechanism
for achieving a comprehensive analysis is the CHECKLIST; e.g.,
"Investors need to think about performing due diligence in a way that is comprehensive and holistic. This means
that it must cover the investment process, operational environment, and business acumen of a manager and a fund.
It should also be both quantitative and qualitative.
Quantitative analysis ensures that every aspect of due diligence is covered and nothing obvious gets missed. This is
sometimes referred to as the checklist approach. Qualitative analysis is much more investigative and allows
investors to ask the type of open-ended questions needed to assess the attitudes and culture of a manager and the
consistency of message or a process throughout an entire organization."
In regard to (B), (C) and (D), each is FALSE.


Machine Learning: A Revolution in Risk Management and Compliance? | Questions


1. Peter is an analyst who is using Microsoft Azure to conduct drag-and-drop (that is, without coding!) machine learning
analytics on his company's dataset of consumer loans. The dataset includes the response variable (aka, dependent
variable) in a column that indicates the historical performance of the consumer loan as either "defaulted" or "repaid
in full." Peter wants to use a training set to predict whether future loans will default and he expects the relationship
is non-linear. Which of the following machine learning approaches is probably BEST?
a) Any unsupervised approach
b) Either ridge or LASSO penalized regression
c) Either principal components, or K-means and X-means clustering
d) Either decision trees, support vector machines or deep learning
2. Barbara has developed a model to detect fraudulent transactions at her bank. Her primary dataset consists of a
table that contains millions of rows (aka, observations), one per each customer transaction, and several dozen
columns; each column is already a "feature" (aka, attribute, parameter) in the model. Her goal is to increase the
predictive power of the model, and the model does perform well when applied to the historical database (aka, in
sample), but she is greatly concerned specifically about over fitting the model. Each of the following techniques is a
possible mitigant (or remedy) EXCEPT which of the following is unlikely to help or cure her over fitting problem?
a) Bootstrap aggregation; aka, bagging
b) Build a random forest or ensemble of tree-based models
c) Increase the number of features; i.e., add parameters to the model
d) Boosting; i.e., overweight scarcer observations in the training dataset
3. Prabin Kumar writes that "Financial institutions (FIs) are looking to more powerful analytical approaches in order to
manage and mine increasing amounts of regulatory reporting data and unstructured data, for purposes of
compliance and risk management (applying machine learning as RegTech) or in order to compete effectively with
other FIs and FinTechs." He explains that machine learning approaches are well-positioned to deliver this analytical
power due to their natural ability to cope with extremely large datasets while offering a high granularity and depth
of predictive analysis. He presents three use cases: Credit risk and revenue modeling; Fraud; and Surveillance of
conduct and market abuse in trading.
In regard to these three case studies, each of the following is true (according to Prabin Kumar) EXCEPT which is
inaccurate?
a) Clustering is an unsupervised learning method that is applicable to anti-money laundering and counter
terrorism financing (AML/CTF)
b) Machine learning has been more successful in credit card fraud than anti-money laundering and counter
terrorism financing (AML/CTF)
c) To facilitate the surveillance of conduct breaches by traders, supervisory learning approaches are difficult to
apply because there is often no labeled training data
d) Widespread adoption of machine learning is limited by two practical constraints: regulations require
supervised (i.e., national supervisor) learning; and machine learning's black box character implies that
applications in the financial sector are not context dependent


4. Peter the risk analyst is helping his international financial services client analyze their very big client transaction
database. His immediate task is to conduct an anti-money laundering (AML) analysis. Unlike credit card fraud,
however, money laundering is hard to define: there is no universally agreed definition of money laundering.
Consequently, the historical database contains no field indicating whether a transaction was fraudulent or not; put
another way, there is no dependent variable. As such, Peter effectively only has input variables with which to work.
If his goal is to yield insights from the data for his client, which of the following methods (among the choices given)
in this situation is the MOST appropriate?
a) Clustering
b) Support vector machines
c) Classification decision tree
d) LASSO, a penalized regression
5. In regard to machine learning, each of the following statements is true EXCEPT which is inaccurate?
a) Averaging over many small models tends to give better out-of-sample prediction than choosing a single model
b) Deep learning is a supervised learning method that requires structured data but the layers of features are
designed by human engineers
c) There is often a trade-off between prediction and explanation and many machine learning methods are better
at prediction than explanation
d) Over-fitting is a common problem in non-parametric, non-linear machine learning models, and its symptoms
include very good in-sample fit but poor out-of-sample performance
6. Among these choices, which of the following machine learning models is the LEAST useful for the regulatory purpose
of providing a system that can be audited and verified by the supervisor?
a) Logit regression
b) Linear regression approach
c) Machine learning ensemble
d) Behavioral science-based model


Machine Learning: A Revolution in Risk Management and Compliance? | Answers


1. Correct Answer: D
True: Either decision trees, support vector machines or deep learning. This is a SUPERVISED CLASSIFICATION problem
where a linear relationship could use support vector machines but a non-linear relationship could use (i) decision
trees [classification trees, regression trees, or random forests], (ii) support vector machines, or (iii) deep learning. See
Prabin Kumar's Table 1.
2. Correct Answer: C
False: Adding features, ceteris paribus, may contribute to over fitting (she already has several dozen features)
In regard to (A), (B) and (D), each of these are proposed by the author as a potential remedy to the over fitting
problem.
Prabin Kumar: Tackling overfitting: bagging and ensembles: Excessively complex models can also lead to “over
fitting,” where they describe random error or noise instead of underlying relationships in the dataset. Model
complexity can be due to having too many parameters relative to the number of observations [footnote 11: For
example, R^2, a goodness-of-fit indicator, tends to increase (and cannot decrease) with any variable that is added
to the model, whether or not it makes sense in the context. See Ramanathan, R., 2002, Introductory Econometrics
with Applications]. In machine learning, overfitting is particularly prevalent in nonparametric, non-linear models,
which are also complex by design (and therefore also typically hard to interpret). When a model describes noise in
a dataset, it will fit that one data sample very well, but will perform poorly when tested out-of-sample.
There are several ways to deal with overfitting and improve the forecast power of machine learning models,
including “bootstrapping,” “boosting” and “bootstrap aggregation” (also called bagging). Boosting concerns the
overweighting of scarcer observations in a training dataset to ensure the model will train more intensively on them.
For example, one may want to overweight the fraudulent observations due to their relative scarcity when training
a model to detect fraudulent transactions in a dataset. In “bagging,” a model is run hundreds or thousands of times,
each on a different subsample of the dataset, to improve its predictive performance. The final model is then an
average of each of the run models. Since this average model has been tested on a lot of different data samples, it
should be more resilient to changes in the underlying data. A “random forest” is an example of a model consisting
of many different decision tree-based models. Econometricians can take this concept even further by combining the
resulting model with a model based on another machine learning technique. The result is a so-called ensemble: a
model consisting of a group of models whose outcomes are combined by weighted averaging or voting. It has been
shown that averaging over many small models tends to give better out of sample prediction than choosing a single
model.
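As a concrete illustration of bagging and tree ensembles, here is a minimal scikit-learn sketch on synthetic data (our own illustration, not the author's code; the dataset and parameters are hypothetical). A single deep decision tree typically fits the training sample almost perfectly but degrades out-of-sample; bagging the trees and growing a random forest narrows that gap:

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic, imbalanced "fraud" dataset (illustrative only: ~3% positives)
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.97, 0.03], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "single deep tree": DecisionTreeClassifier(random_state=0),
    "bagged trees": BaggingClassifier(n_estimators=200, random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    # A wide gap between in-sample and out-of-sample accuracy signals overfitting
    print(f"{name:<18} in-sample: {model.score(X_tr, y_tr):.3f}  "
          f"out-of-sample: {model.score(X_te, y_te):.3f}")

Boosting in the author's sense (overweighting the scarcer fraudulent observations in training) can be approximated in the tree-based models above by passing class weights that upweight the minority class.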
3. Correct Answer: D
Both clauses are false. The first is nonsensical, and a key theme in the paper is that applications are context-
dependent.
From the Conclusion: This article has given an introduction to the machine learning field and has discussed several
cases of application within financial institutions, based on discussions with IIF members and technology vendors:
credit risk modeling, detection of credit card fraud and money laundering, and surveillance of conduct breaches at
FIs. Two tentative conclusions emerge on the use of machine learning in the financial sector – tentative, because
the field is developing fast and many FIs are still experimenting with machine learning in some spaces.
First, machine learning comprises a range of statistical learning tools that are generally able to analyze very large
amounts of data while offering a high granularity and depth of analysis, mostly for predictive purposes. ...
Second, the application of machine learning approaches within the financial sector is highly context-dependent.
Ample, high-quality data for training or analysis are not always available in FIs. More importantly, the predictive
power and granularity of analysis of several approaches can come at the cost of increased model complexity and a
lack of explanatory insight. This is an issue particularly where analytics are applied in a regulatory context, and a
supervisor or compliance team will want to audit and understand the applied model. Fortunately, simpler machine
learning approaches do exist, combining non-linear analysis with simplicity. Indeed, vendors of machine learning
analytics in finance typically aim to combine machine learning’s depth of insight with model simplicity, or add factor
models to improve the auditability of their products. As it seems, there is an algorithm for every problem.


In regard to (A), (B) and (C), each is TRUE.


In regard to true (A) and (B), Prabin Kumar writes: Fraud (page 65): ... The detection of money laundering and
terrorism financing through payments systems stands as a contrast to machine learning’s long-standing record in
credit card fraud. Many banks are still relying on conventional rules-based systems, which focus on individual
transactions or simple transaction patterns. These systems are often unable to detect complex patterns of
transactions or obtain a holistic view of transactions behavior on payment infrastructures. Due to their coarse
selection methods, the number of false positives created by these systems is substantial. As a result, significant
human capacity is required for the assessment of alerts and filtering false positives from actual suspicious
observations. In addition, impediments to data sharing and data usage, as well as long-established regulatory
requirements, have complicated innovation in the AML/CFT area.
Machine-learning systems have the potential to improve detection of money laundering activity significantly, due
to their ability to identify complex patterns in the data and combine transactions information at network speed,
with data from many other sources to obtain a holistic picture of a client’s activity. Indeed, these systems have
already been shown to bring false positives down significantly.
However, application so far in the AML space has lagged for several reasons. First, money laundering is hard to
define. There is no universally agreed definition of money laundering and financial institutions do not receive
feedback from law enforcement agencies on which of their reported suspicious activities have turned out to be
money laundering. It is, therefore, more difficult to train ML-detection algorithms using historical data, because an
incidence of money laundering typically is not firmly established.
As a second-best, FIs are optimizing ML detection algorithms using lower-level suspicious activity reports as a
dependent variable for classification – using classification between alerts that the bank could classify as false alerts,
and those that moved on to be submitted as SARs to law enforcement agencies.
Unsupervised learning methods are also applied to AML/CFT as they “learn” relevant patterns from the data by
clustering transactions or client activity. This yields additional insights, since laundering methods take all kinds of
form and develop on a continuous basis. An example of such unsupervised learning is clustering. Clustering requires
large datasets where it can automatically find patterns within the data without the need for labels. Clustering works
by identifying outliers as points without any strong membership in any one cluster group, thus finding anomalies
within subsets of the data. In AML, clustering is one of the methods used to group together data; using other
analytics, such as topological data analytics and dimensionality reduction, machine learning can reduce the
significant amounts of false positives often associated with alternative methods.
In regard to true (C), Prabin Kumar writes (emphasis ours): Surveillance of conduct and market abuse in trading
(page 66): A third area in which machine learning is increasingly being applied within financial institutions is the
surveillance of conduct breaches by traders working for the institution. Examples of such breaches include [... etc.].
There are several challenges to applying machine learning in this space. First, there are typically no labeled data to
train algorithms on, as it is legally complex for financial institutions to share the sensitive information on past
breaches with developers. Supervisory learning approaches are, therefore, hard to apply. Second, a surveillance
system needs to be auditable for supervisors and for compliance officers, and needs to be able to explain to a
compliance officer why certain behavior has set off an alert. For systems that are entirely based on machine learning,
that can be difficult due to the “black box” character of learning approaches. In order for an alert to be interpretable
and actionable for compliance teams, it should ideally be linked to detection of a specific kind of behavior, rather
than based solely on a statistical correlation in the data. These issues can be addressed at least partly by founding
the learning system on a behavioral science-based model, which incorporates human decisions and behavioral traits.
In a way, such a model addresses the lack of explanatory power of machine learning approaches. Any alerts from
the system will be based on deviations it has identified from the model. However, the inclusion of machine learning
approaches on top of the model creates a feedback loop in the system through which it can adapt to evolving
behavior, and “get to know” a trader as it ingests more data.


4. Correct Answer: A
True: Clustering is the only unsupervised method among the choices; see Table 1 of the reading.
Prabin Kumar: "The machine learning spectrum comprises many different analytical methods, whose applicability
varies with the types of statistical problem one might want to address. Broadly speaking, machine learning can be
applied to three classes of statistical problems: regression, classification, and clustering. Regression and
classification problems both can be solved through supervised machine learning; clustering is an unsupervised
machine learning approach.
Regression problems involve prediction of a quantitative, continuous dependent variable, such as GDP growth or
inflation. Linear learning methods try to solve regression problems including partial least squares5 and principal
component analysis; non-linear learning methods include penalized regression approaches, such as LASSO and
elastic nets. In penalized approaches, a factor is typically added to penalize complexity in the model, which should
improve its predictive performance.
Classification problems typically involve prediction of a qualitative (discrete) dependent variable, which takes on
values in a class, such as blood type (A/B/AB/O). An example is filtering spam e-mail, where the dependent variable
can take on the values SPAM/NO SPAM. Such problems can be solved by a decision tree, 'which aims to deliver a
structured set of yes/no questions that can quickly sort through a wide set of features, and thus produce an accurate
prediction of a particular outcome.' Support vector machines also classify observations, but by applying and
optimizing a margin that separates the different classes more efficiently.
In clustering, lastly, only input variables are observed while a corresponding dependent variable is lacking. An
example is exploring data to detect fraud without knowing which observations are fraudulent and which not. An
anti-money laundering (AML) analysis may nonetheless yield insights from the data by grouping them in clusters
according to their observed characteristics. This may allow an analyst to understand which transactions are similar
to others. In some instances, unsupervised learning is first applied to explore a dataset; the outputs of this approach
are then used as inputs for supervised learning methods."
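To illustrate the clustering idea in the AML setting, here is a minimal scikit-learn sketch (our own illustration; the transaction features and all numbers are entirely synthetic). With no fraud label available, transactions are grouped by similarity, and points with weak membership in any cluster (far from every centroid) are flagged as anomalies for review:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Synthetic transaction features: amount, hour-of-day, counterparties
normal = rng.normal([100.0, 14.0, 3.0], [30.0, 3.0, 1.0], size=(2000, 3))
unusual = rng.normal([9000.0, 3.0, 25.0], [500.0, 1.0, 5.0], size=(20, 3))
X = np.vstack([normal, unusual])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Distance of each transaction to its nearest centroid; a large distance
# means weak cluster membership, i.e., a candidate anomaly for review
dist_to_nearest = kmeans.transform(X).min(axis=1)
threshold = np.quantile(dist_to_nearest, 0.99)
print((dist_to_nearest > threshold).sum(), "transactions flagged for review")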


5. Correct Answer: B
Triply false. Deep learning can be either supervised or unsupervised; it handles unstructured data well; and its layers of
features are learned from the data.
Prabin Kumar: "One of the dominant approaches is deep learning, a learning approach that can be based on both
supervised and non-supervised methods; all are non-linear. In deep learning, multiple layers of algorithms are stacked to
mimic neurons in the layered learning process of the human brain. Each of the algorithms is equipped to lift a certain
feature from the data. This so-called representation or abstraction is then fed to the following algorithm, which again lifts
out another aspect of the data. The stacking of representation-learning algorithms allows deep-learning approaches to
be fed with all kinds of data, including low-quality, unstructured data; the ability of the algorithms to create relevant
abstractions of the data allows the system as a whole to perform a relevant analysis. Crucially, these layers of features
are not designed by human engineers, but learned from the data using a general-purpose learning procedure."
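To make the "stacked layers of features" idea in the quote concrete, here is a minimal numpy sketch of a forward pass through two stacked layers (our own illustration; the weights here are random purely for demonstration, whereas in actual deep learning they are learned from data rather than designed by engineers):

import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, layers):
    # Each layer lifts a new representation (abstraction) from the output
    # of the previous layer and feeds it to the next one
    a = x
    for W, b in layers:
        a = relu(a @ W + b)
    return a

layers = [
    (rng.normal(size=(10, 8)), np.zeros(8)),  # raw features -> first abstraction
    (rng.normal(size=(8, 4)), np.zeros(4)),   # first -> second abstraction
]
x = rng.normal(size=(1, 10))  # one observation with 10 raw input features
print(forward(x, layers))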
In regard to (A), (C) and (D), each is TRUE.
 In regard to true (A), "Econometricians can take this concept [i.e., the concept of bagging which entails running a
model thousands of times, each on a different subsample] even further by combining the resulting model with a
model based on another machine learning technique. The result is a so-called ensemble: a model consisting of a
group of models whose outcomes are combined by weighted averaging or voting. It has been shown that averaging
over many small models tends to give better out-of-sample prediction than choosing a single model."
 In regard to true (C), "Machine learning’s ability to make out-of-sample predictions does not necessarily make it
appropriate for explanation or inference as well, as statistical methods are typically subject to a trade-off between
explanatory and predictive performance. A good predictive model can be very complex, and may thus be very hard
to interpret. For predictive purposes, a model would need only to give insight in correlations between variables,
not in causality. In the case of credit scoring a loan portfolio, a good inferential model would explain why certain
borrowers do not repay their loans. Its inferential performance can be assessed through its statistical significance
and its goodness-of-fit within the data sample. A good predictive model, on the other hand, will select those
indicators that prove to be the strongest predictors of a borrower default. To that end, it does not matter whether
an indicator reflects a causal factor of the borrower’s ability to repay, or a symptom of it. What matters is that it
contains information about the ability to repay."
 In regard to true (D), "Excessively complex models can also lead to over fitting, where they describe random error
or noise instead of underlying relationships in the dataset. Model complexity can be due to having too many
parameters relative to the number of observations. In machine learning, over fitting is particularly prevalent in non-
parametric, non-linear models, which are also complex by design (and therefore also typically hard to interpret).
When a model describes noise in a dataset, it will fit that one data sample very well, but will perform poorly when
tested out-of-sample."
6. Correct Answer: C
True: Machine learning ensemble is the most "black box" and therefore least useful (directly) for regulatory purposes
 For the use case of Credit Risk and Revenue Modeling, Prabin Kumar says: "Nonparametric and non-linear
approaches (support vector machines, neural networks, and deep learning) and ensembles are so complex that
they are practically black boxes that are hard, if not impossible, for any human to understand and audit from the
outside. That makes these models hardly useful for regulatory purposes, such as the development of internal
models in the Basel Internal Ratings-Based approach. Financial supervisors typically require risk models to be clear
and simple in order to be understandable and verifiable and appropriate for validation by them."
 For the use case of Surveillance of conduct and market abuse in trading, Prabin Kumar says: "There are several
challenges to applying machine learning in this space. First, there are typically no labeled data to train algorithms
on, as it is legally complex for financial institutions to share the sensitive information on past breaches with
developers. Supervisory learning approaches are, therefore, hard to apply. Second, a surveillance system needs to
be auditable for supervisors and for compliance officers, and needs to be able to explain to a compliance officer
why certain behavior has set off an alert. For systems that are entirely based on machine learning, that can be
difficult due to the black box character of learning approaches. In order for an alert to be interpretable and
actionable for compliance teams, it should ideally be linked to detection of a specific kind of behavior, rather than
based solely on a statistical correlation in the data."
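The auditability point can be made concrete with a logit model: each fitted coefficient is a direct, verifiable statement of how a feature moves the log-odds of the outcome, which is exactly what an ensemble of hundreds of trees cannot offer. Below is a minimal scikit-learn sketch on synthetic data (our own illustration, not from the reading):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for, e.g., borrower features and default flags
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=0, random_state=1)
logit = LogisticRegression().fit(X, y)

# Each coefficient is the change in log-odds per unit of its feature --
# an auditable, supervisor-friendly account of how the model decides
for i, beta in enumerate(logit.coef_[0]):
    print(f"feature_{i}: log-odds effect = {beta:+.3f}")
print(f"intercept: {logit.intercept_[0]:+.3f}")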


Artificial intelligence and machine learning in financial services | Questions


1. The Financial Stability Board's Financial Innovation Network (FSB FIN, November 2017) observes that "artificial
intelligence and machine learning (AI&ML) are being rapidly adopted for a range of applications in the financial
services industry." Specific use cases of AI&ML include (i) customer-focused applications; (ii) operations-focused
uses; (iii) trading and portfolio management; and (iv) regulatory compliance and supervision. Further, according to
FSB FIN, each of the following statements is true EXCEPT which is inaccurate?
a. Deep learning can be used for supervised, unsupervised, or reinforcement learning
b. The key risk of artificial intelligence is that its ability to contextualize implies it will soon be able to fully
replicate human intelligence and therefore eventually replace humans
c. Machine learning is a sub-category of artificial intelligence (AI) that extends familiar statistical methods and
generally deals with optimization, prediction and categorization but not causal inference
d. Reinforcement learning falls in between supervised and unsupervised learning, and it feeds an unlabeled
dataset to the algorithm which chooses an action and then receives human feedback that helps it learn
2. In regard to the drivers that have contributed to the growing use of Fintech and the supply and demand factors that
have spurred adoption of AI and machine learning in financial services, each of the following statements is true
EXCEPT which is false?
a. A key supply factor is the declining cost of data storage and corresponding growth in datasets
b. Key demand factors include profitability (ie, cost reduction, risk management gains, productivity
improvements), competition, and regulatory compliance
c. A key supply factor is weak-form efficient markets due to a lack of threshold structured data, a factor which
is theoretically temporary because AI and machine learning should eventually arbitrage it away
d. Regulatory compliance is a salient demand factor (i.e., RegTech) but legal frameworks will be a complicating-
-and possibly dampening--factor on several fronts such as liability, anti-discrimination and credit system
interpretability
3. The FSB FIN briefly discusses three customer-focused use cases: credit scoring applications, insurance-related
technologies (aka, InsurTech including insurance policies), and client-facing chatbots. About the use of artificial
intelligence and machine learning (AI&ML) specifically in these customer-focused use cases, which of the following
statements is TRUE?
a) The black box is good because it will reduce discrimination
b) Machine learning algorithms are likely to reduce access to credit
c) Machine learning-based credit scoring models decisively outperform traditional credit models in over 90.0%
of cases
d) In the insurance industry, AI&ML can improve profitability (via risk-based pricing and reduced costs) and
augment underwriting and claims processing functions
4. Consider the following three companies, each with an intended use case for artificial intelligence and machine
learning (AI&ML):
I. In the category of operations-focused uses, Acme Trading Inc wants to score the liquidity of individual bonds
by comparing them to similar bonds (similar in features such as duration) but there is no labeled training
dataset
II. In the category of SupTech, a National Supervisor wants to incorporate sentiment derived from Twitter posts
which themselves are unstructured data (note that tweets are semi-structured because structured JSON
objects contain unstructured "tweets" as themselves text)
III. In the category of operations-focused uses, Bland Financial Corp wants to teach artificial intelligence tools to
react to order imbalance and queue position in the limit order book by feeding non-labeled data to an
algorithm that chooses an action and learns by receiving feedback (sometimes the feedback is human).


Which solutions, respectively, are probably BEST for each of the above use cases?
a. I. Classification trees, II. Cluster analysis, III. Ridge Regression
b. I. Random forests, II. Support vector machines, III. Supervised learning
c. I. Regression, II. Penalized regression, III. Natural Language Processing (NLP)
d. I. Cluster analysis, II. Natural Language Processing (NLP) and III. Reinforcement learning
5. The Financial Stability Board's Financial Innovation Network (FSB FIN, November 2017) says that "From a micro-
financial point of view, the application of AI and machine learning to financial services may have an important impact
on financial markets, institutions and consumers." Specifically, in regard to this micro-financial point of view, each
of the following is true EXCEPT which statement is inaccurate?
a) In regard to consumers, AI&ML could enable wider access to financial services that are more
personalized/customized
b) In regard to financial institutions, AI&ML can be used for risk management through earlier and more accurate
estimation of risks
c) In regard to consumers, AI&ML guarantees the avoidance of discrimination by excluding sensitive features
(e.g., race, religion, gender) from the dataset
d) In regard to financial markets, AI&ML is likely to enable participants to collect and analyze information on a
greater scale which should (i) reduce information asymmetries and contribute to market efficiency; and (ii)
lower trading costs
6. With respect to a macro-financial analysis, the Financial Stability Board's Financial Innovation Network (FSB FIN,
November 2017) argues that "widespread adoption of AI and machine learning could impact the financial system in
a number of ways, depending on the nature of the application." From a macro perspective, which of the following
statements about the potential implications of artificial intelligence and machine learning (AI&ML) is TRUE?
a. A key vulnerability of AI&ML is its tendency to constrain economies of scope; that is, to promote
diseconomies of scope
b. In insurance markets, although AI&ML is likely to increase the degree of moral hazard and adverse selection,
it should create larger, fewer risk pools
c. Robo-advisors are likely to increase market liquidity but at the risk of higher volatility and less stability as more
participants are exposed to the same correlated common factors
d. A concern is that AI&ML might favor a greater concentration of fewer, larger organizations including advanced
third-party AI&ML providers, owners of proprietary sources of big data, and those able to afford heavy
investments in such innovative technologies


Artificial intelligence and machine learning in financial services | Answers


1. Correct Answer: B
False: the authors say the trend is toward augmented intelligence, but not replacement of humans (we did not mean
to scare you!); and the article asserts that AI cannot contextualize.
FSB FIN (page 7): Many applications tend more toward augmented intelligence, or an augmentation of human
capabilities, rather than a replacement of humans. Even as advancements in AI and machine learning continue,
including in the area of deep learning, most industries are not attempting to fully replicate human intelligence. As
noted by one industry observer “… a human in the loop is essential: we are, unlike machines, able to take into
account context and use general knowledge to put AI-drawn conclusions into perspective.”
In regard to (A), (C) and (D), each is TRUE.
In regard to true (A) and (D), FSB FIN frames a typology (page 5): There are several categories of machine learning
algorithms. These categories vary according to the level of human intervention required in labeling the data:
 In supervised learning, the algorithm is fed a set of ‘training’ data that contains labels on some portion of the
observations. For instance, a data set of transactions may contain labels on some data points identifying those
that are fraudulent and those that are not fraudulent. The algorithm will ‘learn’ a general rule of classification
that it will use to predict the labels for the remaining observations in the data set.
 Unsupervised learning refers to situations where the data provided to the algorithm does not contain labels.
The algorithm is asked to detect patterns in the data by identifying clusters of observations that depend on
similar underlying characteristics. For example, an unsupervised machine learning algorithm could be set up
to look for securities that have characteristics similar to an illiquid security that is hard to price. If it finds an
appropriate cluster for the illiquid security, pricing of other securities in the cluster can be used to help price
the illiquid security.
 Reinforcement learning falls in between supervised and unsupervised learning. In this case, the algorithm is
fed an unlabeled set of data, chooses an action for each data point, and receives feedback (perhaps from a
human) that helps the algorithm learn. For instance, reinforcement learning can be used in robotics, game
theory, and self-driving cars.
 Deep learning is a form of machine learning that uses algorithms that work in ‘layers’ inspired by the structure
and function of the brain. Deep learning algorithms, whose structure are called artificial neural networks, can
be used for supervised, unsupervised, or reinforcement learning
In regard to true (C), writes FSB FIN (page 4): This report defines AI as the theory and development of computer
systems able to perform tasks that traditionally have required human intelligence. AI is a broad field, of which
‘machine learning’ is a sub-category [footnote 7: Examples of AI applications that are not machine learning include
the computer science fields of ontology management, or the formal naming and defining of terms and relationships
by computers, as well as inductive and deductive logic and knowledge representation. In this report, for
completeness, we often refer to “AI and machine learning,” with the understanding that many of the important
recent advances are in the machine learning space]
Machine learning may be defined as a method of designing a sequence of actions to solve a problem, known as
algorithms [footnote 8: An algorithm may be defined as a set of steps to be performed or rules to be followed to
solve a mathematical problem. More recently, the term has been adopted to refer to a process to be followed, often
by a computer], which optimizes automatically through experience and with limited or no human intervention.
These techniques can be used to find patterns in large amounts of data (big data analytics) from increasingly diverse
and innovative sources. Figure 1 gives an overview.


Figure 1: A schematic view of AI, machine learning and big data analytics

2. Correct Answer: C
False. The sentence tries to sound reasonable but is mostly some manufactured mumbo-jumbo (sorry!).
In regard to (A), (B), and (D), each is TRUE. FSB FIN graphically summarizes the supply/demand factors in their Figure 4
(below). Also see the reading's Annex A: Legal Issues Around AI and Machine Learning.
Figure 4: Supply and demand factors of financial adoption of AI and machine learning

3. Correct Answer: D
True: In the insurance industry, AI&ML can improve profitability (via risk-based pricing and reduced costs) and augment
underwriting and claims processing functions. See (section 3.1.2): Use for pricing, marketing and managing insurance
policies.
In regard to (A), (B) and (C), each is FALSE.
In regard to false (A), says FSB FIN (page 13, emphasis ours): However, the use of complex algorithms could result in a
lack of transparency to consumers. This black box aspect of machine learning algorithms may in turn raise concerns. When
using machine learning to assign credit scores and make credit decisions, it is generally more difficult to provide consumers,
auditors, and supervisors with an explanation of a credit score and resulting credit decision if challenged. Additionally,
some argue that the use of new alternative data sources, such as online behaviour or non-traditional financial
information, could introduce bias into the credit decision. Specifically, consumer advocacy groups point out that machine
learning tools can yield combinations of borrower characteristics that simply predict race or gender, factors that fair
lending laws prohibit considering in many jurisdictions (see annex B). These algorithms might rate a borrower from an
ethnic minority at higher risk of default because similar borrowers have traditionally been given less favourable loan
conditions.
In regard to false (B) and (C), the authors assert that an advantage of ML algorithms is their ability to enable greater access
to credit, but their out-performance has not been established (page 12-13, emphasis ours): In addition to facilitating a
potentially more precise, segmented assessment of creditworthiness, the use of machine learning algorithms in credit
scoring may help enable greater access to credit. In traditional credit scoring models used in some markets, a potential
borrower must have a sufficient amount of historical credit information available to be considered scorable. In the
absence of this information, a credit score cannot be generated, and a potentially creditworthy borrower is often unable
to obtain credit and build a credit history. With the use of alternative data sources and the application of machine learning
algorithms to help develop an assessment of ability and willingness to repay, lenders may be able to arrive at credit


decisions that previously would have been impossible. While this trend may benefit economies with shallow credit
markets, it could lead to non-sustainable increases in credit outstanding in countries with deep credit markets. More
generally, it has not yet been proved that machine learning-based credit scoring models outperform traditional ones for
assessing creditworthiness ... There are a number of advantages and disadvantages to using AI in credit scoring models.
AI allows massive amounts of data to be analysed very quickly. As a result, it could yield credit scoring policies that can
handle a broader range of credit inputs, lowering the cost of assessing credit risks for certain individuals, and increasing
the number of individuals for whom firms can measure credit risk. An example of the application of big data to credit
scoring could include the assessment of non-credit bill payments, such as the timely payment of cell phone and other
utility bills, in combination with other data. Additionally, people without a credit history or credit score may be able to
get a loan or a credit card due to AI, where a lack of credit history has traditionally been a constraining factor as alternative
indicators of the likelihood to repay have been lacking in conventional credit scoring models.
4. Correct Answer: D
TRUE: I. Cluster analysis, II. Natural Language Processing (NLP), and III. Reinforcement learning
 About (I.) Cluster analysis, the lack of labeled data implies this is unsupervised learning and grouping
observations according to similar features is cluster analysis; e.g., k-means clustering. FSB FIN (page 17):
"Machine learning is often used to identify groups of bonds that behave similarly to each other. By doing so,
they can rely on many more data points, providing better estimates of price movements when the market is
thin. The resulting tool groups bonds into broad, intuitively similar buckets and then, using cluster analysis,
collects the most comparable products together in each bucket, to score the liquidity of individual bonds."
According to the paper's Glossary: "Cluster analysis: A statistical technique whereby data or objects are
classified into groups (clusters) that are similar to one another but different from data or objects in other
clusters."
 About (II.) Natural Language Processing (NLP), FSB FIN (page 17): "3.4.3 SupTech: uses and potential uses by
central banks and prudential authorities: Machine learning can be applied to systemic risk identification and
risk propagation channels. Specifically, NLP tools may help authorities to detect, measure, predict, and
anticipate, among other things, market volatility, liquidity risks, financial stress, housing prices, and
unemployment. In a recent Banca d’Italia (BdI) study, still in progress, textual sentiment derived from Twitter
posts is used as a proxy for the time-varying retail depositors’ trust in banks. The indicator is used to challenge
the predictions of a bank's retail funding model, and to try to capture possible threats to financial stability
deriving from an increase of public distrust in the banking system. Furthermore, at the BdI, in order to extract
the most relevant information available on the web, newspaper articles are processed through a suitable NLP
pipeline that evaluates their sentiment. In another study, academics developed a model using computational
linguistics and probabilistic approaches to uncover semantics of natural language in mandatory US bank
disclosures. The model found risks as early as 2005 related to interest rates, mortgages, real estate, capital
requirements, rating agencies and marketable securities. Other studies are able to predict and anticipate
market outcomes and economic conditions, including volatility and growth." According to the paper's
Glossary: "Natural Language Processing (NLP): An interdisciplinary field of computer science, artificial
intelligence, and computational linguistics that focuses on programming computers and algorithms to parse,
process, and understand human language." (A toy sentiment-scoring sketch appears after this list.)
• About (III.) Reinforcement learning, FSB FIN (page 17): "Also, AI can be used to help identify how the timing of
trades can minimize market impact. Market impact models can be developed that describe how the effect of
a trade depends on previous trades as a starting point. The models attempt to avoid scheduling trades too
closely together to avoid having a market impact greater than the sum of its parts. These models can be used
to set out the best possible trading schedules for a range of scenarios and then tweak the schedule as the real
trade progresses, using supervised learning techniques to make the short term predictions determining those
tweaks. Banks are also testing reinforcement learning to teach artificial intelligence tools to react to order
imbalance and queue position in the limit order book." According to the paper's Glossary: "Reinforcement
learning: a subset of machine learning in which an algorithm is fed an unlabelled set of data, chooses an action
for each data point, and receives feedback (perhaps from a human) that helps the algorithm learn." (A toy Q-learning sketch appears after this list.)
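To illustrate bullet (I.), here is a minimal k-means sketch that groups bonds into liquidity buckets. The features (volatility, bid-ask spread, trade count) are our own choices, not the FSB's, and scikit-learn is assumed available.

```python
# Toy illustration of bullet (I.): k-means groups bonds with similar behaviour
# into liquidity buckets. Features and values are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Columns: daily return volatility, average bid-ask spread, daily trade count.
bond_features = np.array([
    [0.02, 0.10, 500.0],   # actively traded, low volatility
    [0.03, 0.12, 450.0],
    [0.08, 0.60,  20.0],   # thinly traded
    [0.07, 0.55,  25.0],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(bond_features)
print("liquidity buckets:", kmeans.labels_)  # comparable bonds share a label
```

Bonds sharing a bucket can then be scored together, which is how the tool described above gets liquidity estimates for individual bonds even when the market is thin.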
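To illustrate bullet (II.), the sketch below scores post sentiment with a toy word lexicon, in the spirit of the BdI depositor-trust proxy. The word lists are invented for illustration; a real SupTech pipeline would use a far richer NLP model.

```python
# Toy lexicon-based sentiment score for bank-related posts (illustrative only;
# the word lists are invented, not from any production SupTech pipeline).
POSITIVE = {"safe", "sound", "trust", "solid"}
NEGATIVE = {"run", "collapse", "panic", "withdraw", "distrust"}

def sentiment(post: str) -> int:
    """Positive-minus-negative word count; >0 suggests trust, <0 distrust."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = ["Depositors trust the bank", "Panic as savers withdraw funds"]
print([sentiment(p) for p in posts])  # [1, -2]; aggregate over time as a trust proxy
```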
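To illustrate bullet (III.), here is a toy tabular Q-learning sketch for scheduling the sale of two lots over two steps under a quadratic market-impact cost. The impact model and rewards are hypothetical; the point is only that the agent learns to spread trades rather than execute everything at once.

```python
# Toy tabular Q-learning for bullet (III.): schedule the sale of 2 lots over
# 2 steps under a quadratic impact cost. Impact model and rewards are invented.
import random

ACTIONS = [0, 1, 2]      # lots to trade in the current step
TOTAL, STEPS = 2, 2      # sell 2 lots within 2 steps
Q = {}                   # Q[(step, lots_remaining)][action] -> value

def impact_cost(lots):
    return lots ** 2     # trading 2 lots at once costs more than 1 + 1

for episode in range(5000):
    remaining = TOTAL
    for step in range(STEPS):
        q = Q.setdefault((step, remaining), {a: 0.0 for a in ACTIONS})
        a = random.choice(ACTIONS) if random.random() < 0.1 else max(q, key=q.get)
        lots = remaining if step == STEPS - 1 else min(a, remaining)  # forced finish
        reward = -impact_cost(lots)
        remaining -= lots
        nxt = Q.get((step + 1, remaining), {0: 0.0})
        q[a] += 0.1 * (reward + max(nxt.values()) - q[a])

print(Q[(0, 2)])  # action 1 scores best: splitting the order minimises impact
```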
5. Correct Answer: C
FALSE: This statement is too extreme or, at a minimum, imprecise: there is a concern (shared by some experts) that AI&ML will not curb discrimination and might even create new forms of discrimination. This is a non-trivial problem.
FSB FIN (page 27): "Avoiding discrimination in credit scoring, credit provision, and insurance is also an important
topic. Even where data on sensitive characteristics such as race, religion, gender, etc. are not collected, AI and
machine learning algorithms may create outcomes that implicitly correlate with those indicators, for example, based
on geography or other characteristics of individuals. There is ongoing research on how to address and mitigate these
biases. This is a key area in the broader discussion on AI ethics (see annex B)."
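One simple diagnostic used in that ongoing research is to compare model outcomes across groups against a sensitive attribute the model itself never saw. A hypothetical sketch (data, threshold, and group labels invented):

```python
# Toy check for implicit bias: approval rates by group for a model that never
# saw the sensitive attribute. Scores, threshold, and groups are hypothetical.
import numpy as np

scores = np.array([0.81, 0.34, 0.76, 0.29, 0.55, 0.62])   # model outputs
group  = np.array(["A",  "B",  "A",  "B",  "B",  "A"])    # held-out attribute
approved = scores >= 0.5

for g in ("A", "B"):
    rate = approved[group == g].mean()
    print(f"group {g}: approval rate {rate:.0%}")
# A large gap signals that other features may be proxying for the attribute.
```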
In regard to (A), (B) and (D), each is TRUE, at least according to the reading.
• In regard to true (A), (emphasis ours): "4.3 Possible effects of AI and machine learning on consumers and
investors: If AI and machine learning reduce the costs and enhance the efficiency of financial services,
consumers could obtain a number of benefits: (a) Consumers and investors could enjoy lower fees and
borrowing costs if AI and machine learning reduce the costs for various financial services; (b) Consumers and
investors could have wider access to financial services. For example, applications of AI for robo-advice might
facilitate people’s use of various asset markets for their investments. Moreover, AI and machine learning,
through advanced credit scoring for FinTech lending, might make wider sources of funds available to
consumers and small and medium enterprises (SMEs); (c) AI and machine learning could facilitate more
customized and personalized financial services through big data analytics. For example, AI and machine
learning might facilitate the analysis of big data, thus clarifying the characteristics of each consumer and/or
investor and allowing firms to design well-targeted services...
• In regard to true (B), "AI and machine learning can be used for risk management through earlier and more
accurate estimation of risks. For example, to the extent that AI and machine learning enable decision making
based on past correlations among prices of various assets, financial institutions could better manage these
risks. Tools that mitigate tail risks could be especially beneficial for the overall system. Also, AI and machine
learning could be used for anticipating and detecting fraud, suspicious transactions, default, and the risk of
cyber-attacks, which could result in better risk management." (A toy anomaly-detection sketch appears after this list.)
• In regard to true (D), "4.1 Possible effects of AI and machine learning on financial markets: (a) AI and machine
learning may enable certain market participants to collect and analyse information on a greater scale. In
particular, these tools may help market participants to understand the relationship between the formulation
of market prices and various factors, such as in sentiment analysis. This could reduce information asymmetries
and thus contribute to the efficiency and stability of markets; (b) AI and machine learning may lower market
participants’ trading costs. Moreover, AI and machine learning may enable them to adjust their trading and
investment strategies in accordance with a changing environment in a swift manner, thus improving price
discovery and reducing overall transaction costs in the system."
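To illustrate the fraud-detection point in (B), the sketch below flags an anomalous transaction with an isolation forest. The features (amount, hour of day) and contamination level are hypothetical; scikit-learn is assumed available.

```python
# Toy anomaly-detection sketch for flagging suspicious transactions.
# Features (amount, hour of day) and contamination level are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

transactions = np.array([
    [25.0, 12], [40.0, 14], [18.5, 9], [32.0, 11],   # routine activity
    [9800.0, 3],                                      # unusual: large, 3 a.m.
])

detector = IsolationForest(contamination=0.2, random_state=0).fit(transactions)
flags = detector.predict(transactions)  # -1 marks likely outliers
print(flags)  # the large overnight transfer is flagged for review
```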
6. Correct Answer: D
TRUE: A concern is that AI&ML might favor greater concentration among fewer, larger organizations, including advanced
third-party AI&ML providers, owners of proprietary sources of big data, and those able to afford heavy investments
in such innovative technologies
FSB FIN (page 29): "5.1. Market concentration and systemic importance of institutions: AI and machine learning may
affect the type and degree of concentration in financial markets in certain circumstances. For instance, the
emergence of a relatively small number of advanced third-party providers in AI and machine learning could increase
concentration of some functions in the financial system. Similarly, access to big data could be a source of systemic
importance, especially if firms are able to leverage their proprietary sources of big data to obtain substantial
economies of scope. Finally, the most innovative technologies may be mainly affordable to large companies because
the development of uses requires significant investments (for acquiring and maintaining the infrastructure and the
skilled workers)."
In regard to (A), (B) and (C), each is FALSE.
• In regard to false (A), the authors argue, convincingly in our view, that AI&ML generally enables economies of
scope: "Facilitating collaboration and realizing new economies of scope: Were AI and machine learning to
facilitate collaboration between financial services and various industries, such as e-commerce and sharing-
economy industries, this could realize new economies of scope and foster greater economic growth. For
example, customer analysis based on transaction data attached to payment and settlement activities (for
example, “who buys what, when, and where?”) would encourage cooperation between e-commerce and
financial services."
• In regard to false (B), the authors assert the opposite, arguing that moral hazard and adverse selection problems
will likely be reduced but at the risk of undermining the pooling function of insurance: "AI and machine
learning applications in insurance markets could reduce the degree of moral hazard and adverse selection –
but could also undermine the risk pooling function of insurance. Moral hazard and adverse selection are
inherent problems in insurance. Nonetheless, if AI and machine learning are used to continuously adjust
insurance fees in accordance with changing behavior of the policyholders, this may reduce moral hazard. If AI
and machine learning are utilized to offer customized insurance policies reflecting detailed characteristics of
each person, it may also decrease adverse selection. On the other hand, these uses may pose various new
challenges. For example, the more accurate pricing of risk may lead to higher premiums for riskier consumers
(such as in health insurance for individuals with a genetic predisposition to certain diseases) and could even
price some individuals out of the market. Even if innovative insurance pricing models are based on large data
sets and numerous variables, algorithms can entail biases that can lead to non-desirable discrimination and
even reinforce human prejudices. This warrants a societal discussion on the desired extent of risk sharing, how
the algorithms are conceived, and which information is admissible." (A toy behaviour-based pricing sketch appears after this list.)
• In regard to false (C), "Use of AI and machine learning for trading could impact the amount and degree of
directional trading. Under benign assumptions, the divergent development of trading applications by a wide
range of market players could benefit financial stability. For example, if machine learning powered robo-
advisors give more customized advice to individuals, their investment activities may become more tailored to
individual preferences and perhaps less correlated with other trading strategies. By reducing the barriers to
entry for retail consumers to invest, these applications could also expand the investor base in capital markets.
Similarly, the use of AI and machine learning for new and uncorrelated trading strategies by hedge funds could
also result in greater diversity in market movements. More efficient processing of information could help to
reduce price misalignments earlier and hence mitigate the build-up of macro financial price imbalances."
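To illustrate the continuous premium adjustment described in (B), here is a toy behaviour-based pricing sketch. The base rate, weights, and telematics-style features are all invented for illustration.

```python
# Toy behaviour-based premium adjustment, in the spirit of false (B):
# the premium is re-scored as new behavioural data arrives. The base rate,
# weights, and features below are all hypothetical.
BASE_PREMIUM = 1000.0

def adjusted_premium(hard_brakes_per_100km: float, night_share: float) -> float:
    """Scale the base premium by observed risk behaviour (illustrative weights)."""
    risk_multiplier = 1.0 + 0.05 * hard_brakes_per_100km + 0.30 * night_share
    return BASE_PREMIUM * risk_multiplier

print(adjusted_premium(0.5, 0.10))  # careful driver: 1055.0
print(adjusted_premium(4.0, 0.60))  # riskier profile: 1380.0
```

This individualized pricing is exactly what may reduce moral hazard while also, as the quote warns, pricing the riskiest individuals out of the pool.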