
Table of Contents

I. A New Approach to Real Estate Risk by DiBartolomeo, et al.
   Risk Model Basics
   Application to Real Estate
   Required Property Level Input Data
   Estimating Factor Exposures for Real Estate
   Conclusion
II. Value at Risk: An Overview of Analytical VaR
   Part 1: Definition of Analytical VaR
   Part 2: Formalization and Applications
      Example 1 - Analytical VaR of a single asset
      Example 2 - Conversion of the confidence level
      Example 3 - Conversion of the holding period
      Example 4 - Analytical VaR of a portfolio of two assets
      Example 5 - Analytical VaR of a portfolio composed of n assets
   Part 3: Risk Mapping
   Part 4: Volatility Models
   Part 5: Advantages and Disadvantages of Analytical VaR
   Conclusion
III. Historical Simulations VaR
   Historical Simulations VaR Methodology
   Applications of Historical Simulations VaR
   Advantages of Historical Simulations VaR
   Disadvantages of Historical Simulations VaR
   Conclusion
IV. Monte Carlo
   Methodology
   Applications
   Disadvantages
   Conclusion

I. A New Approach to Real Estate Risk by DiBartolomeo, et al.

Risk Model Basics

Modern methods of portfolio risk analysis typically rely on a linear factor model of asset returns:

$$R_{it} = \sum_{m=1}^{n} B_{im} F_{mt} + E_{it} \qquad (1)$$

where:
- R_it = the return on asset i during period t
- n = the number of factors in the model
- B_im = the exposure of asset i to factor m
- F_mt = the return to unit exposure of factor m during period t
- E_it = the asset specific return of asset i during period t

By the usual algebra we can extend the linear model for the return of a single asset to describe the variance of return to an entire portfolio of assets:

$$V_p = \sum_{j=1}^{n} \sum_{k=1}^{n} B_{pj} B_{pk} S_j S_k P_{jk} + \sum_{i=1}^{z} W_i^2 S_i^2 \qquad (2)$$

where:
- V_p = the variance of portfolio returns
- B_pj = the exposure of the portfolio to factor j
- S_j = the standard deviation of returns to factor j
- P_jk = the correlation of returns to factor j and factor k
- W_i = the weight of asset i in the portfolio
- S_i = the standard deviation of asset specific returns to asset i

Finally, we should note that the factor exposures of a portfolio are simply the weighted average of the factor exposures of the individual assets:

$$B_{pj} = \sum_{i=1}^{z} W_i B_{ij} \qquad (3)$$
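To make the algebra concrete, the following is a minimal Python sketch (with hypothetical toy numbers, not taken from the paper) that computes the portfolio factor exposures of equation (3) and the portfolio variance of equation (2):

```python
import numpy as np

# Hypothetical toy inputs: z = 4 assets, n = 3 factors
B = np.array([[1.2, 0.3, 0.0],
              [0.8, 0.5, 0.2],
              [1.0, 0.1, 0.4],
              [0.6, 0.7, 0.1]])          # B[i, m]: exposure of asset i to factor m
w = np.array([0.4, 0.3, 0.2, 0.1])       # W_i: portfolio weights
S_f = np.array([0.05, 0.03, 0.02])       # S_j: factor return standard deviations
P = np.array([[1.0, 0.2, 0.1],
              [0.2, 1.0, 0.3],
              [0.1, 0.3, 1.0]])          # P_jk: factor correlations
S_e = np.array([0.10, 0.12, 0.08, 0.15]) # S_i: asset-specific standard deviations

B_p = w @ B                              # equation (3): portfolio factor exposures
F_cov = np.outer(S_f, S_f) * P           # factor covariance terms S_j * S_k * P_jk
V_p = B_p @ F_cov @ B_p + np.sum(w**2 * S_e**2)  # equation (2)
print(f"Portfolio return variance: {V_p:.6f}")
```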

In applying this sort of linear model to financial assets, two types of specification are popular. In economic models, the factors are defined to be exogenous variables such as interest rates or oil prices, such that the factor returns (the F values) can be observed in the real world. A separate time series regression is then generally used to estimate the factor exposures (the B values) of each asset. In such regressions, the dependent variable is the periodic returns to the particular asset, and the independent variables are the observable returns to the factors. Alternatively, we could use a fundamental model, where endogenous characteristics of the assets (e.g. the market cap of a stock) are used to specify observable values of the factor exposures (the B values), and the factor returns (the F values) are estimated in a separate cross-sectional regression for each time period. In these cross-sectional regressions, the dependent variable is the vector of asset returns for the period, and the independent variables are the factor exposures of all the assets at the beginning of the period. Normally, we distinguish between the two types of models via notation: in fundamental models, the factor exposures can be time varying, so the factor exposures (the B values) would also carry a time subscript.

One particular model of the economic type that is widely used by institutional investors to evaluate the risk of their marketable securities portfolios is the Everything, Everywhere (EE) model. This model links global public security performance to over 50 factors, including stock and bond market performance across five global geographic regions and six broad economic sectors. There are also factors meant to measure investor confidence (e.g. the spreads in yields for different qualities of bonds) and macroeconomic conditions (interest rates, energy costs, exchange rates). It breaks discount rate risk into two components: the risk of treasury curve movements and the risk of changes in credit-related yield spreads. Bond risk is estimated by measuring a bond's price sensitivity to both the credit factors and the treasury factors using a binomial model that incorporates prepayment options. As of this writing, the model covers approximately 35,000 global equities and 270,000 fixed income securities. An extensive discussion of the specification of the EE model is provided in Appendix A.

Since the EE model is of the economic type, the estimation of factor exposures is normally carried out by time series regressions. However, since real estate investments do not typically have observable periodic returns, we have no information to use as the dependent variable in our regressions. Instead, we take advantage of various techniques available to estimate the exposure of a financial asset's returns to the factors in closed form. For example, one might compute the sensitivity of a bond's return to changes in the level of interest rates by the sort of time series regression discussed here. However, there are well known closed form methods for calculating the duration of a bond, the sensitivity measure that would have arisen as the result of our regression. We will therefore endeavor to include real estate within the EE framework by use of such closed form methods.

Application to Real Estate

Traditional real estate appraisals utilize one of three basic methods to value a property: (a) replacement cost, (b) comparable sales, and (c) capitalizing the expected income. One way that real estate investors have tried to evaluate the risk of individual properties in the past is to run Monte Carlo simulations of their valuation models. By varying the valuation inputs across their expected range, we can obtain an expectation of the range of property values at any future moment in time. This allows us to estimate the uncertainty of return on that property over a known time horizon. However, such procedures are not tractable over large portfolios, nor do the bottom-up estimates of real estate specific variables such as rents or operating expenses allow for any insight into the interrelationships between real estate properties and other asset classes.

Our methods for estimating factor exposures for real estate are closely related to the third method. From the perspective of typical real estate analysis, we are using financial market data external to real estate to forecast the possible range of inputs to such a valuation process across time, and thereby derive a direct assessment of risk. For example, we can use the observed volatility of bond interest rates to frame the range of potential capitalization rates for a property. Further, we can do things like assess potential demand for office space in lower Manhattan based on the recent strength of the stock market performance of the financial services sector of the economy. The more volatile the expected stock market performance of the financial services sector, the greater the uncertainty of demand for such office space. By direct use of information from the stock and bond markets, the model automatically provides the institutional investor with consistency of assumptions across all asset classes.

Our model first takes a complicated problem and breaks it down into its parts. We do this by disaggregating a portfolio into buildings, and buildings into their constituent sources of risk, including the cash flow from tenants, tenant credit risk, rent volatility and the property's financing structure. Each of these sources of risk is represented by a hypothetical proxy portfolio of marketable securities that we believe will have the same economic payoff properties as the corresponding aspect of the real estate property. Once we have done this, we can apply our existing model used for traded securities to value and estimate the risk of each piece. Having done that, we can reassemble the components and examine risk at any level we choose. Given the model's framework, we can examine the sensitivity of each contributor to changes in property value to the common set of underlying factors. Armed with estimates of the potential range for factors (e.g. how volatile do we expect oil prices to be?), the mathematics of a risk assessment for a single property or an entire portfolio is simple algebra. The risk estimate incorporates the future range of interest rates, existing cash flow streams, rent volatility, and financing structure risk.

Required Property Level Input Data

In order to make the model useful, we must define a parsimonious set of input data that can be practically collected for actual real estate portfolios in order to compute the various aspects of the model. Below is a listing of the inputs the model uses for each property. Even for large property portfolios, this information can be maintained in a single spreadsheet.

1. Dominant Property Use (Apartment, Hotel, Industrial, Office, or Retail)
2. Current Occupancy
3. Anchor Tenants
   a. Square Footage
   b. Lease Renewal Date
   c. Credit Rating
4. Debt
   a. Property Only or Cross Collateralized
   b. Duration
   c. Fixed or Variable
   d. Coupon Rate
   e. Prepayment Options/Penalties
5. Expected Capital Expenditures
6. Current Effective Rent
7. Current Estimated Property Value

It should be noted that the estimated property values are not used as part of the individual property analyses, but rather are used to compute portfolio weights for portfolio level calculations. In addition to the information on individual properties, we also collect data on local real estate market conditions for each area in which a property is located. Our real estate data set is completed from commercially available databases of real estate statistical data and regional economic data. The EE model encompasses a wide range of information on both individual securities and financial market conditions.

Estimating Factor Exposures for Real Estate

Our first step is to estimate the interest rate risk of incoming cash flows. This is done by forecasting the time series of a property's cash flow in a deterministic fashion, without considering rent volatility. There are two components to the cash flow: the inward cash flows provided by net operating income, and the outgoing cash flows required by the mortgage financing (if any). Using the framework of the fixed income securities markets, we can consider tenant leases like long positions in bonds. These pseudo-bonds are subject to credit risk, and have other bond-like characteristics such as fixed expiration dates and embedded options (e.g. tenant renewal options). Incoming cash flows are based on projected net operating income (NOI), which changes with projected income, taking into consideration expected vacancy levels. Our approach to forecasting cash flows is similar to that found in real estate software packages such as Argus and Circle, but with less detail and precision. In addition to rental growth, operating margins also depend on occupancy levels, since vacant space is generally costly to the landlord. A building's vacancy is assumed to move from its current level to a long-term equilibrium structural vacancy over time, unless there are convincing idiosyncratic reasons to assume otherwise. Lease renewal rates for existing tenants are also modeled in a manner that makes them inversely related to vacancy. Therefore, in order to calculate a property's cash flow the following inputs are needed:

- Current rent and expenses (NOI)
- Current occupancy/vacancy
- Market-level structural vacancy and reversion
- Down-time between leases
- Rent and expense growth over time
- Useful life of the building (assumed to be 50 years in examples)

With that data, we take current NOI and project it forward based on assumed rental inflation and expense growth, lease renewal schedules, probability of renewal, credit risk, downtime, as well as changes due to whether the building's occupancy level is above or below the market average. For buildings whose current vacancy is below the market's long-term average, there will be a downward trend in NOI as current vacancy reverts to the structural vacancy estimate. Given the projected pattern, a building's cash flow can be valued and its exposure to the treasury curve risk factors can be measured in a fashion that is consistent with common bond market practice. The exhibit below shows a building whose current vacancy rate is well below the market vacancy. As the building trends toward the market equilibrium, cash flows decline to reflect the loss of tenants to other buildings. Once it approaches market occupancy, it is assumed to trend with the market thereafter.

Lease renewal rates for existing tenants are also inversely related to market vacancy levels. Tenants have more options in a market where the vacancy rate is high, so the probability of renewal is presumed lower. Downtime between leases is also incorporated and, like lease renewals, is a function of the vacancy at the time of renewal. Finally, rents are assumed to move with inflation in the long run, and we assume the useful life of a building is 50 years from the start of the analysis.

The credit risk of tenants is incorporated into the model in two ways. First, expected levels of lease defaults are incorporated into projections of incoming cash flow in terms of vacancy level and downtime between leases. We will incorporate the impact of credit risk on discount rates in a later step. For anchor tenants with published credit ratings (e.g. Moody's or S&P), the default rates associated with these ratings are taken into account in forecasting cash flows. For each property we also estimate the creditworthiness of a non-rated, generic tenant. Non-rated tenants are assumed to be typical inhabitants of their local economy, and their credit risk is determined by weighting the credit risk parameters of a low-rated, high yield bond by the employment-based sector shares of their metropolitan area. This allows generic tenants in Houston (i.e. a local economy concentrated in the energy sector) to be differentiated from tenants in San Jose (i.e. concentrated in the high tech sector). It should be noted that corporate credit ratings provide a conservative measure of tenant risk, since salvage values in bond defaults are lower than what can be recovered when tenants default on leases. For properties with a large number of small tenants, some of this risk will diversify away (the tenant specific portion), while the credit risk arising from the potential for a general economic downturn will not.

The migration of credit risk across time is also taken into account. It is assumed that when a credit-rated tenant's lease expires, that tenant may renew the lease (with some assumed probability) or be replaced by a generic tenant. As such, the cash flow expectations of a property (frictional vacancy rates, downtime, NOI) slowly migrate toward greater influence of generic tenants as leases turn over. The credit migration process is modeled as a binomial tree, with expected renewal rates of leases used to define the probabilities at each decision node. For most properties, the projected NOI stream associated with each individual lease is then analyzed within the EE model's term structure process, in the same fashion as we would consider each separate bond in a fixed income securities portfolio. Factor exposures to the three factors that define potential variation in the term structure (shift, twist, butterfly) are then calculated using EE's binomial-type OAS model. The factor exposure to the shift potential of the term structure is comparable to duration.

Our next step is to consider the impact of credit risk on the discount rates used to compute the present value of future incoming cash flows. The rate of discount applied to incoming cash flows can be thought of as having two components: a term structure portion having purely to do with the time value of money, and a credit spread. Time series changes in the credit spread are like a parallel shift in the yield curve, so the sensitivity of a cash flow stream to a change in the spread is given by its duration. The expected volatilities and correlations of the credit spreads are modeled separately as a function of the EE model factors, for each credit rating level and economic sector (see Appendix A for details).

So far, our detailed projections of future operating cash flows have assumed future rents will be the current rent level adjusted for inflation. We will now explicitly incorporate the future uncertainty of levels of rents and occupancy. Over time, changes in the level of rents and occupancy are driven by both demand and supply. Each property package includes a set of risk exposures to represent rent and occupancy volatility. The supply of commercial space changes very slowly, has little correlation with the EE model's broad economic factors, and is largely a function of local market conditions. In contrast, demand for commercial space is elastic and can be effectively captured in our framework by relating percentage changes in rents to the broad economic factors of the EE model.

For each property type and each metropolitan area, we run a time series regression which models the percentage change in rents. The independent variable for this equation is a demand variable specific to each metropolitan area. It is a weighted average of the stock market returns for each of the six EE industrial sectors, weighted by the employment shares in each individual market. Since retail and personal services employment are a significant percentage of total employment, the consumer sector typically receives the biggest weight. Variation does occur across metro areas, since each market has its own employment profile given the makeup of the local economy. In addition to demand, we also incorporate the change in metro-level building stock for each property type, as well as an initial conditions variable in the form of vacancy rates. Changes in demand and supply have less of an impact when the market is 25% vacant than when it is 5% vacant. This is important because most markets were not in equilibrium at the beginning of the observation period. Initial market conditions affect the impact of the weighted EE factors on rent volatility, since it is the cumulative effect of supply and demand factors that drives rent growth, and hence the expected variation of rents through time.

The impact of rent volatility is incorporated into risk assessment in a novel fashion. Our cash flow analysis assumed future rents are known with certainty. We incorporate the uncertainty in rents by assuming property owners have entered into a forward contract with tenants to keep future rents constant in real terms. The coefficients arising from the regression equation above represent the EE factor exposures of this contract. The portion of a building's rents subject to the forward contract at any one moment in time is presumed to be a combination of the expected percentage of vacant space plus the expected percentage of leases turning over in the current year. As this expected value varies from year to year, it is projected out over the expected life of the building, and a present value weighted average is taken as the final coefficient. The residuals from the rent volatility estimation equation are also scaled by this rent volatility exposure coefficient and used as an estimate of the idiosyncratic risk of a particular building. Having measured a property's underlying cash flow structure, the model still needs to analyze and quantify the risk associated with its financing structure.

Mortgages are modeled using the EE term structure model. This process is comparable to analyzing a short position in mortgage-backed securities. Provisions of a particular financing vehicle may include fixed or variable rate mortgages. Prepayment options are taken into account and are similar to embedded calls, where the exercise price is the prepayment penalty plus the outstanding loan amount at the particular point in time. If a property is part of a cross collateralized pool of assets, we build a synthetic security which is the sum of all the properties under the same financing umbrella. We also take a conservative approach with respect to the borrower by assuming that intentional default is not an option.

Conclusion

Here, we have put forward a new approach to estimating the risk of real estate that does not rely on typical appraisal-based benchmark indices. The proposed model decomposes real estate risk into four components: operating cash flow valuation risk, financing structure, credit, and rent/occupancy volatility. Each of these risks is then expressed as a function of factors that are observable in financial markets or in the general economy. Our approach is congruent with methodologies used for risk management of securities market portfolios, allowing for seamless integration of risk assessment in multi-asset class portfolios. Most importantly, the model provides a framework for determining how much of the risk of investing in a property arises from characteristics of the specific property, and how much arises from common influences across all properties, such as interest rates and levels of economic activity. We believe this new level of transparency with respect to real estate risk will encourage investors to be confident in their understanding of real estate, and consequently to be more willing to allocate more of their resources to property investments.

Our work with specific real estate portfolios has led to a number of useful empirical conclusions. First, even our limited results suggest that widely published real estate indices understate the true (but unobservable) volatility in real estate returns. The volatility forecasts from our model seem consistent with comparative statistics for other asset types such as REITs, and are also consistent in property-by-property comparisons with the beliefs of the investment professionals concerned. One more interesting use of the model would be the potential creation of synthetic real estate, by constructing securities portfolios with comparable exposure to the economic forces that drive returns. Such synthetic products may be an important step forward in overcoming the inherently illiquid nature of real estate. This new approach to estimating real estate risk also provides an avenue for many innovations in real estate practice, including better benchmarks, the active hedging of interest rate risk to property values, and the more appropriate inclusion of real estate in optimal asset allocation.

II. Value at Risk: An Overview of Analytical VaR

One of the most widespread models in use in risk management departments across the financial industry is Value-at-Risk (or VaR)². VaR measures the worst expected loss over a given horizon at a given confidence level under normal market conditions. VaR estimates can be calculated for various types of risk: market, credit, operational, etc. We will focus only on market risk in this article. Market risk arises from mismatched positions in a portfolio that is marked to market periodically (generally daily) based on uncertain movements in prices, rates, volatilities and other relevant market parameters. In such a context, VaR provides a single number summarizing the organization's exposure to market risk and the likelihood of an unfavorable move.

There are three main methodologies used to compute VaR: Analytical (also called Parametric), Historical Simulations, and Monte Carlo Simulations. For now, we will focus only on the Analytical form of VaR. The two other methodologies will be treated separately in upcoming issues of this newsletter. Part 1 of this article defines what VaR is and what it is not, and describes its main parameters. In Part 2, we express VaR mathematically, work through a few examples, and experiment with varying the parameters. Parts 3 and 4 briefly touch upon two critical but complex steps in computing VaR: mapping positions to risk factors and selecting the volatility model of a portfolio. Finally, in Part 5, we discuss the pros and cons of Analytical VaR.

Part 1: Definition of Analytical VaR

VaR is a predictive (ex-ante) tool used to prevent portfolio managers from exceeding the risk tolerances that have been set in the portfolio policies. It can be measured at the portfolio, sector, asset class and security level. Multiple VaR methodologies are available, and each has its own benefits and drawbacks. To illustrate, suppose a $100 million portfolio has a monthly VaR of $8.3 million with a 99% confidence level. This simply means that there is a 1% chance of losses greater than $8.3 million in any given month of a defined holding period under normal market conditions. It is worth noting that VaR is an estimate, not a uniquely defined value. Moreover, the trading positions under review are assumed to be fixed over the period in question. Finally, VaR does not address the distribution of potential losses on those rare occasions when the VaR estimate is exceeded. We should bear these constraints in mind when using VaR.

The ease of using VaR is also its pitfall. VaR summarizes within one number the risk exposure of a portfolio, but it is valid only under a set of assumptions that should always be kept in mind when handling VaR.

VaR involves two arbitrarily chosen parameters: the holding period and the confidence level. The holding period corresponds to the horizon of the risk analysis. In other words, when computing a daily VaR, we are interested in estimating the worst expected loss that may occur by the end of the next trading day at a certain confidence level under normal market conditions. The usual holding periods are one day or one month. The holding period can depend on the fund's investment and/or reporting horizons, and/or on local regulatory requirements. The confidence level is intuitively a reliability measure that expresses the accuracy of the result. The higher the confidence level, the more likely we expect VaR to approach its true value or to be within a pre-specified interval. It is therefore no surprise that most regulators require a 95% or 99% confidence level to compute VaR.

Part 2: Formalization and Applications

Analytical VaR is also called Parametric VaR, because one of its fundamental assumptions is that the return distribution belongs to a family of parametric distributions, such as the normal or the lognormal distributions. Analytical VaR can simply be expressed as:

$$\text{VaR}_{1-\alpha} = x_\alpha \cdot P \qquad (1)$$

where:
- VaR_{1-α} is the estimated VaR at the confidence level 100 × (1 - α)%.
- x_α is the left-tail α percentile of the return distribution, defined by Pr(R ≤ x_α) = α, where R is the return. In order for VaR to be meaningful, we generally choose a confidence level of 95% or 99%; x_α is then generally negative.
- P is the marked-to-market value of the portfolio.

The Central Limit Theorem states that the sum of a large number of independent and identically distributed random variables with finite variance will be approximately normally distributed (i.e., will follow a Gaussian distribution, or bell-shaped curve). But even if we have a large enough sample of historical returns, is it realistic to assume that the returns of any given fund follow a normal distribution? Setting that question aside, we associate the return distribution with a standard normal distribution, which has a mean of zero and a standard deviation of one. Using a standard normal distribution enables us to replace x_α by z_α through the following transformation:

$$z_\alpha = \frac{x_\alpha - \mu}{\sigma} \qquad (2)$$

which yields:

$$x_\alpha = \mu + z_\alpha \sigma \qquad (3)$$

where z_α is the left-tail α percentile of a standard normal distribution⁴, μ is the expected return and σ is the standard deviation of returns. Consequently, we can re-write (1) as:

$$\text{VaR}_{1-\alpha} = (\mu + z_\alpha \sigma) \cdot P \qquad (4)$$

Example 1 - Analytical VaR of a single asset

Suppose we want to calculate the Analytical VaR at a 95% confidence level over a holding period of one day for an asset in which we have invested $1 million. We have estimated³ μ (mean) and σ (standard deviation) to be 0.3% and 3% respectively. The Analytical VaR of that asset would be:

$$\text{VaR}_{95\%} = (\mu + z_{5\%}\,\sigma) \cdot P = (0.003 - 1.6449 \times 0.03) \times \$1{,}000{,}000 \approx -\$46{,}347$$

This means that there is a 5% chance that this asset may lose at least $46,347 by the end of the next trading day under normal market conditions.

Example 2 - Conversion of the confidence level

Assume now that we are interested in a 99% Analytical VaR of the same asset over the same one-day holding period. The corresponding VaR would simply be:

$$\text{VaR}_{99\%} = (0.003 - 2.3263 \times 0.03) \times \$1{,}000{,}000 \approx -\$66{,}789$$

There is a 1% chance that this asset may experience a loss of at least $66,789 by the end of the next trading day. As you can see, the higher the confidence level, the higher the VaR, as we travel further down the left tail of the distribution.


Example 3 - Conversion of the holding period

If we want to calculate a one-month (21 trading days on average) VaR of that asset using the same inputs, we can simply apply the square root of time rule⁵:

$$\text{VaR}(T \text{ days}) = \text{VaR}(1 \text{ day}) \times \sqrt{T} \qquad (5)$$

Applying this rule to our examples above yields the following one-month VaR for the two confidence levels:

$$\text{VaR}_{95\%} \approx \$46{,}347 \times \sqrt{21} \approx \$212{,}390 \qquad\qquad \text{VaR}_{99\%} \approx \$66{,}789 \times \sqrt{21} \approx \$306{,}067$$
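For readers who prefer code to tables, the three examples can be reproduced with a short Python sketch (a minimal illustration using scipy's normal percentile function; the results match the worked figures up to rounding of z):

```python
from math import sqrt
from scipy.stats import norm

P = 1_000_000              # marked-to-market value of the asset
mu, sigma = 0.003, 0.03    # estimated daily mean and volatility

def analytical_var(p, mu, sigma, confidence, horizon_days=1):
    """Equation (4), reported as a positive loss, scaled by equation (5)."""
    z = norm.ppf(1 - confidence)        # left-tail percentile, e.g. -1.6449 at 95%
    one_day_var = -(mu + z * sigma) * p
    return one_day_var * sqrt(horizon_days)

print(f"{analytical_var(P, mu, sigma, 0.95):,.0f}")                   # ~46,347
print(f"{analytical_var(P, mu, sigma, 0.99):,.0f}")                   # ~66,789
print(f"{analytical_var(P, mu, sigma, 0.95, horizon_days=21):,.0f}")  # ~212,390
```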

Example 4 - Analytical VaR of a portfolio of two assets

Let us now assume that we have a portfolio worth $100 million that is equally invested in two distinct assets. One of the main reasons to invest in two different assets is to diversify the risk of the portfolio. The main underlying question here is therefore how one asset would behave if the other asset were to move against us. In other words, how will the correlation between these two assets affect the VaR of the portfolio? As we aggregate one level up, the calculation of Analytical VaR replaces in (4) the mean of the asset with the weighted mean of the portfolio, μ_P, and the standard deviation (or volatility) of the asset with the volatility of the portfolio, σ_P. The volatility of a portfolio composed of two assets is given by:

$$\sigma_P = \sqrt{w_1^2 \sigma_1^2 + w_2^2 \sigma_2^2 + 2\, w_1 w_2\, \sigma_1 \sigma_2\, \rho_{1,2}} \qquad (6)$$

where:
- w_1 is the weighting of the first asset
- w_2 is the weighting of the second asset
- σ_1 is the standard deviation or volatility of the first asset
- σ_2 is the standard deviation or volatility of the second asset
- ρ_{1,2} is the correlation coefficient between the two assets

And (4) can be re-written as:

$$\text{VaR}_{1-\alpha} = (\mu_P + z_\alpha \sigma_P) \cdot P \qquad (7)$$

Let us assume that we want to calculate the Analytical VaR at a 95% confidence level over a one-day horizon on a portfolio composed of two assets, with the following assumptions:

- P = $100 million
- w_1 = w_2 = 50%⁶
- μ_1 = 0.3%, σ_1 = 3%
- μ_2 = 0.5%, σ_2 = 5%
- ρ_{1,2} = 30%

With these inputs, μ_P = 0.5 × 0.3% + 0.5 × 0.5% = 0.4% and:

$$\sigma_P = \sqrt{0.5^2 \times 0.03^2 + 0.5^2 \times 0.05^2 + 2 \times 0.5 \times 0.5 \times 0.03 \times 0.05 \times 0.3} \approx 3.279\% \qquad (8)$$

so the 95% one-day VaR is approximately (0.004 - 1.6449 × 0.03279) × $100,000,000 ≈ -$4.99 million.

Example 5 - Analytical VaR of a portfolio composed of n assets

From the previous example, we can generalize these calculations to a portfolio composed of n assets. In order to keep the mathematical formulation compact, we use matrix notation and re-write the volatility of the portfolio as:

$$\sigma_P = \sqrt{w' \Sigma \, w} \qquad (9)$$

where:
- w is the vector of the weights of the n assets
- w' is the transpose of w
- Σ is the covariance matrix of the n assets

Practically, we could design a spreadsheet in Excel (Exhibit 1) to calculate Analytical VaR on the portfolio in Example 4. The cells in grey are the input cells. It is easy from there to expand the calculation to a portfolio of n assets. But be aware that you will soon reach the limits of Excel, as we will have to calculate n(n-1)/2 covariance terms for the covariance matrix.

Exhibit 1: Excel spreadsheet to calculate Analytical VaR for a portfolio of two assets
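The same calculation is straightforward in Python with numpy. The sketch below reproduces Example 4 in the matrix form of equation (9); extending it to n assets only requires growing the input vectors and the correlation matrix:

```python
import numpy as np
from scipy.stats import norm

P = 100_000_000
w = np.array([0.5, 0.5])           # asset weights
mu = np.array([0.003, 0.005])      # expected daily returns
vol = np.array([0.03, 0.05])       # daily volatilities
corr = np.array([[1.0, 0.3],
                 [0.3, 1.0]])      # correlation matrix

cov = np.outer(vol, vol) * corr    # covariance matrix: sigma_i * sigma_j * rho_ij
mu_p = w @ mu                      # weighted mean of the portfolio
sigma_p = np.sqrt(w @ cov @ w)     # equation (9)
var_95 = -(mu_p + norm.ppf(0.05) * sigma_p) * P   # equation (7), as a positive loss
print(f"sigma_p = {sigma_p:.4%}, 95% one-day VaR = ${var_95:,.0f}")  # ~ $4.99 million
```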


Part 3: Risk Mapping

In order to cope with a covariance matrix that grows each time you diversify your portfolio further, we can map each security of the portfolio to common fundamental risk factors and base our calculations of Analytical VaR on these risk factors. This process is called reverse engineering and aims at reducing the size of the covariance matrix and speeding up the computational time spent transposing and multiplying matrices. We generally consider four main risk factors: Spot FX, Equity, Zero-Coupon Bonds and Futures/Forwards. The complexity of this process goes beyond the scope of this overview of Analytical VaR and will be treated separately in a future article.

Part 4: Volatility Models

We can see from the various expressions of Analytical VaR above that its main driver is the expected volatility (of the asset or the portfolio), since the volatility is multiplied by a constant factor greater than 1 (1.6449 for a 95% VaR, for instance), whereas the expected mean is simply added. Hence, if we have used historical data to derive the expected volatility, we should consider that today's volatility tends to be positively correlated with yesterday's volatility. In that case, we may try to estimate the conditional volatility of the asset or the portfolio. The two most common volatility models used to compute VaR are the Exponentially Weighted Moving Average (EWMA) and the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models. In order to treat this very important part of computing VaR thoroughly, we will discuss these models in a future article.
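As a preview, the EWMA recursion is simple enough to sketch here. This is a minimal illustration assuming the common RiskMetrics-style decay factor λ = 0.94; the input return series is hypothetical:

```python
import numpy as np

def ewma_volatility(returns, lam=0.94):
    """EWMA conditional variance: var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}^2."""
    var = returns[0] ** 2              # seed the recursion with the first squared return
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return np.sqrt(var)

# Hypothetical example: 250 days of simulated daily returns
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0, 0.01, 250)
print(f"EWMA daily volatility estimate: {ewma_volatility(daily_returns):.4%}")
```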

Part 5: Advantages and Disadvantages of Analytical VaR

Analytical VaR is the simplest methodology to compute VaR and is rather easy to implement for a fund. The input data are rather limited, and since there are no simulations involved, the computation time is minimal. Its simplicity, however, is also its main drawback. First, Analytical VaR assumes not only that the historical returns follow a normal distribution, but also that the changes in price of the assets included in the portfolio follow a normal distribution, and this very rarely survives the test of reality. Second, Analytical VaR does not cope very well with securities that have a non-linear payoff distribution, such as options or mortgage-backed securities. Finally, if our historical series exhibits heavy tails, then computing Analytical VaR using a normal distribution will underestimate VaR at high confidence levels and overestimate VaR at low confidence levels.

Conclusion

As we have shown, Analytical VaR is easy to implement as long as we follow these steps. First, we collect historical data on each security in the portfolio (we advise using at least one year of historical data, unless a security has experienced high volatility, which would suggest a shorter period). Second, if the portfolio has a large number of underlying positions, we map them against a more manageable set of risk factors. Third, we calculate the historical parameters (mean, standard deviation, etc.) and estimate the expected prices, volatilities and correlations. Finally, we apply (7) to find the Analytical VaR estimate of the portfolio. As always when building a model, it is important to make sure that the model has been reviewed, fully tested and approved; that a user guide (including any code) has been documented and is kept up to date; that training has been designed and delivered to the members of the risk management team and to the recipients of the outputs of the risk management function; and that a capable person has been given oversight of the model, its current use and its regular refinement.


1. CDO stands for Collateralized Debt Obligation. These instruments repackage a portfolio of average- or poor-quality debt into high-quality debt (generally rated AAA) by splitting a portfolio of corporate bonds or bank loans into four classes of securities, called tranches.
2. Pronounced "VahR."
3. Note that these parameters have to be estimated. They are not the historical parameters derived from the series.
4. Note that z_α is read from the statistical table of a standard normal distribution.
5. This rule stems from the fact that the sum of n consecutive one-day log returns is the n-day log return, and the standard deviation of n-day returns is √n times the standard deviation of one-day returns.
6. These weights correspond to the weights of the two assets at the end of the holding period. Because of market movements, there is little likelihood that they will be the same as the weights at the beginning of the holding period.

(Source: http://www.jpmorgan.com/tss/General/email/1159360877242)


III. Historical Simulations VaR

We indicated in the previous article that the main benefits of Analytical VaR are that it requires very few parameters, is easy to implement, and is quick to compute (with an appropriate mapping of the risk factors). Its main drawbacks lie in the significant (and inconsistent across asset classes and markets) assumption that price changes in the financial markets follow a normal distribution, and in the fact that the methodology may be computationally intensive, since we need to calculate the n(n-1)/2 covariance terms of the variance-covariance matrix (in the case where we do not map the various instruments that compose the portfolio to risk factors). With the increasing power of our computers, the second limitation will barely force you to move away from spreadsheets to programming. But the first assumption, in the case of a portfolio containing a non-negligible portion of derivatives (a minimum of 10%-15%, depending on their complexity and exposure or leverage), may result in the Analytical VaR being seriously underestimated, because these derivatives have non-linear payoffs. One solution to circumvent this theoretical constraint is to work only with the empirical distribution of the returns, which leads to Historical Simulations VaR. Indeed, is it not more logical to work with the empirical distribution, which captures the actual behavior of the portfolio and encompasses all the correlations between the assets composing the portfolio? The answer to this question is not so clear-cut. Computing VaR using Historical Simulations seems more intuitive initially, but it has its own pitfalls, as we will see. But first, how do we compute VaR using Historical Simulations?

Historical Simulations VaR Methodology

The fundamental assumption of the Historical Simulations methodology is that you look back at the past performance of your portfolio and assume (there is no escape from making assumptions with VaR modeling) that the past is a good indicator of the near future or, in other words, that the recent past will reproduce itself in the near future. As you might guess, this assumption reaches its limits for instruments trading in very volatile markets or during troubled times, such as we have experienced this year. The algorithm below illustrates the straightforwardness of this methodology. It is called Full Valuation because we re-price the asset or the portfolio after every run. This differs from a Local Valuation method, in which we only use information about the initial price and the exposure at the origin to deduce VaR.

Step 1 - Calculate the returns (or price changes) of all the assets in the portfolio between each time interval.

The first step lies in setting the time interval and then calculating the returns of each asset between two successive periods of time. Generally, we use a daily horizon to calculate the returns, but we could use monthly returns if we were computing the VaR of a portfolio invested in alternative investments (hedge funds, private equity, venture capital and real estate), where the reporting period is either monthly or quarterly. Historical Simulations VaR requires a long history of returns in order to be meaningful: computing a VaR on a portfolio of hedge funds with only a year of return history will not provide a good VaR estimate.

Step 2 - Apply the calculated price changes to the current mark-to-market value of the assets and re-value the portfolio.

Once we have calculated the returns of all the assets from today back to the first day of the period being considered (let us assume one year comprising 265 days), we consider that each of these returns may occur tomorrow with the same likelihood. We start by looking at the returns of every asset yesterday and apply those returns to the value of the assets today. That gives us new values for all these assets, and consequently a new value of the portfolio. Then we go back in time by one more time interval, to two days ago. We take the returns that were calculated for every asset on that day and assume that those returns may occur tomorrow with the same likelihood as the returns that occurred yesterday. We re-value every asset with these new price changes, and then the portfolio itself. We continue until we have reached the beginning of the period. In this example, we will have run 264 simulations.

Step 3 - Sort the series of simulated portfolio P&Ls from the lowest to the highest value.

After applying these price changes to the assets 264 times, we end up with 264 simulated values for the portfolio and thus 264 simulated P&Ls. Since VaR is the worst expected loss over a given horizon at a given confidence level under normal market conditions, we need to sort these 264 values from the lowest to the highest, as VaR focuses on the tail of the distribution.


Step 4 - Read the simulated value that corresponds to the desired confidence level.

The last step is to determine the confidence level we are interested in; let us choose 99% for this example. One reads the corresponding value in the series of sorted simulated P&Ls of the portfolio at the desired confidence level, and then takes it away from the mean of the series of simulated P&Ls. In other words, the VaR at the 99% confidence level is the mean of the simulated P&Ls minus the 1% lowest value in the series of simulated values. This can be formulated as follows:

$$\text{VaR}_{1-\alpha} = E(R) - R_\alpha \qquad (1)$$

where:
- VaR_{1-α} is the estimated VaR at the confidence level 100 × (1 - α)%
- E(R) is the mean of the series of simulated returns or P&Ls of the portfolio
- R_α is the αth worst return of the series of simulated P&Ls of the portfolio or, in other words, the return of the series of simulated P&Ls that corresponds to the level of significance α

We may need to interpolate, since no observation will fall exactly at 99% in our example. Indeed, if we use 265 days, each return calculated at every time interval will have a weight of 1/264 = 0.00379. If we look for the value that has a cumulative weight of 99%, we will see that no value matches the 1% level exactly (since we have divided the series into 264 time intervals and not a multiple of 100). Considering that there is very little chance that the tail of the empirical distribution is linear, a linear interpolation between the two successive observations that surround the 99th percentile will only approximate the actual VaR. This would be a pity considering we did all we could to use the empirical distribution of returns, wouldn't it? Nevertheless, even a linear interpolation may give you a good estimate of your VaR. For those who are eager to obtain a more exact VaR, Extreme Value Theory (EVT) could be the right tool. We will explain how to use EVT when computing VaR in another article; it is rather mathematically demanding and would require more time to explain properly.
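The four steps are short enough to sketch in Python. This is a minimal single-asset illustration with a hypothetical price series; the tail read-out uses the linear percentile interpolation discussed above:

```python
import numpy as np

def historical_var(prices, current_value, confidence=0.99):
    """Historical Simulations VaR for one asset (Steps 1-4)."""
    prices = np.asarray(prices, dtype=float)
    returns = prices[1:] / prices[:-1] - 1.0      # Step 1: returns between intervals
    simulated_pnl = current_value * returns       # Step 2: apply returns to today's value
    simulated_pnl = np.sort(simulated_pnl)        # Step 3: sort the simulated P&Ls
    tail = np.percentile(simulated_pnl, (1 - confidence) * 100)  # Step 4, interpolated
    return simulated_pnl.mean() - tail            # distance of the tail from the mean

# Hypothetical example: 265 days of simulated prices
rng = np.random.default_rng(1)
prices = 100 * np.cumprod(1 + rng.normal(0.0003, 0.02, 265))
print(f"99% one-day VaR: {historical_var(prices, 1_000_000):,.0f}")
```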


Applications of Historical Simulations VaR

Let us compute VaR using Historical Simulations for one asset, and then for a portfolio of assets, to illustrate the algorithm.

Example 1 - Historical Simulations VaR for one asset

The first step is to calculate the return of the asset price between each time interval. This is done in column D in Table 1. Then we create a column of simulated P&Ls based on the current market value of the asset (1,000,000, as shown in cell C3) and each return which this asset has experienced over the period under consideration. Thus, for a return of -1.93% we have 1,000,000 × (-1.9314%) ≈ -19,313.95. In Step 3, we simply sort all the simulated values of the asset (based on the past returns). Finally, in Step 4, we read the simulated value in column G which corresponds to the 1% worst loss. As there is no value that corresponds exactly to 99%, we interpolate between the surrounding values at 99.24% and 98.86%. That gives us -54,711.55. This number does not take into account the mean, which is 1,033.21. As the 99% VaR is the distance from the mean to the first percentile (the 1% worst loss), we subtract the number we just calculated from the mean to obtain the actual 99% VaR. In this example, the VaR of this asset is thus 1,033.21 - (-54,711.55) = 55,744.76. In order to express VaR as a percentage, we can divide the 99% VaR amount by the current value of the asset (1,000,000), which yields 5.57%.

Table 1 - Calculating Historical Simulations VaR for one asset

[Charts: Asset Price and Histogram of Returns]

Example 2 - Historical Simulations VaR for one portfolio

Computing VaR on one asset is relatively easy, but how do the historical simulations account for correlations between assets if the portfolio holds more than one asset? The answer is also simple: the correlations are already embedded in the price changes of the assets. Therefore, there is no need to calculate a variance-covariance matrix when running historical simulations. Let us look at another example with a portfolio composed of two assets.

Table 2 - Calculating Historical Simulations VaR for a portfolio of two assets

[Charts: Portfolio Unit Price and Histogram of Returns]

As you can see, we simply add a couple of columns to replicate the intermediary steps for the second asset. In this example, each asset represents 50% of the portfolio. After each run, we re-value the portfolio by simply adding up the simulated P&L of each asset. This gives us the simulated P&Ls for the portfolio (column J). This straightforward step of simply re-composing the portfolio after every run is one of the reasons behind the popularity of this methodology: we do not need to handle sizeable variance-covariance matrices. We apply the calculated returns of every asset to their current prices and re-value the portfolio; as we have noted, the correlations are embedded in the price changes. In this example, the 99% VaR of the first asset is 55,744.76 (or 5.57%) and the 99% VaR of the second asset is 54,209.71 (or 5.42%). VaR is not additive: if we add the VaR of the two assets, we will not get the VaR of the portfolio. In this case, the 99% VaR of the portfolio represents only 3.67% of the current marked-to-market value of the portfolio. That difference represents the diversification effect. Holding a portfolio invested in these two assets makes the risk lower than investing in either of the two assets alone. The reason is that the gains on one asset sometimes offset the losses on the other asset (rows 10, 12, 13, 17-20, 23, 26-28, 30, 32 in Table 2). Over the 265 days, this happened 127 times, with differing magnitudes. In the end, this benefited the overall risk profile of the portfolio, as the 99% VaR of the portfolio is only 3.67%.
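Extending the sketch to a portfolio only requires summing the simulated P&Ls of the assets run by run before sorting, since the correlations are embedded in the joint history of the price series. Again, the two price series below are hypothetical:

```python
import numpy as np

def historical_var_portfolio(price_series, current_values, confidence=0.99):
    """Historical Simulations VaR for a portfolio: sum asset P&Ls per run, then sort."""
    pnl = 0.0
    for prices, value in zip(price_series, current_values):
        prices = np.asarray(prices, dtype=float)
        returns = prices[1:] / prices[:-1] - 1.0
        pnl = pnl + value * returns        # re-compose the portfolio after every run
    pnl = np.sort(pnl)
    tail = np.percentile(pnl, (1 - confidence) * 100)
    return pnl.mean() - tail

# Hypothetical example: two correlated assets, 50/50 weights
rng = np.random.default_rng(2)
shocks = rng.multivariate_normal([0.0003, 0.0005],
                                 [[0.0004, 0.0003], [0.0003, 0.0025]], 264)
prices = [100 * np.cumprod(np.r_[1.0, 1 + shocks[:, k]]) for k in range(2)]
print(f"99% portfolio VaR: {historical_var_portfolio(prices, [500_000, 500_000]):,.0f}")
```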

Advantages of Historical Simulations VaR

Computing VaR with the Historical Simulations methodology has several advantages. First, there is no need to formulate any assumption about the return distribution of the assets in the portfolio. Second, there is no need to estimate the volatilities of and correlations between the various assets: as the two simple examples showed, they are implicitly captured by the actual daily realizations of the assets. Third, the fat tails of the distribution and other extreme events are captured as long as they are contained in the dataset. Fourth, aggregation across markets is straightforward.

Disadvantages of Historical Simulations VaR

The Historical Simulations VaR methodology may be intuitive and easy to understand, but it still has a few drawbacks. First, it relies completely on a particular historical dataset and its idiosyncrasies. For instance, if we run a Historical Simulations VaR in a bull market, VaR may be underestimated; similarly, if we run it just after a crash, the falling returns which the portfolio has recently experienced may distort VaR. Second, it cannot accommodate changes in market structure, such as the introduction of the Euro in January 1999. Third, the methodology may not always be computationally efficient when the portfolio contains complex securities or a very large number of instruments; mapping the instruments to fundamental risk factors is the most efficient way to reduce the computation time while preserving the behavior of the portfolio almost intact. Fourth, Historical Simulations VaR cannot handle sensitivity analyses easily. Lastly, a minimum history is required to use this methodology: using a period of time that is too short (less than 3-6 months of daily returns) may lead to a biased and inaccurate estimate of VaR. As a rule of thumb, we should use at least four years of data in order to run 1,000 historical simulations. That said, round numbers like 1,000 may have no particular relevance to your exact portfolio. Security prices, like commodities, move through economic cycles; for example, natural gas prices are usually more volatile in the winter than in the summer. Depending on the composition of the portfolio and on the objectives you are attempting to achieve when computing VaR, you may need to think like an economist in addition to a risk manager in order to take into account the various idiosyncrasies of each instrument and market. Also, bear in mind that VaR estimates need to rely on a stable set of assumptions in order to keep a consistent and comparable meaning when they are monitored over time. In order to increase the accuracy of Historical Simulations VaR, one can also decide to weight the recent observations more heavily than the most distant ones, since the latter may not say much about where prices would go today. We will cover these more advanced VaR models in another article.

Conclusion

Despite these disadvantages, many financial institutions have chosen Historical Simulations as their favored methodology to compute VaR. To many, working with the actual empirical distribution is the real deal. However, obtaining an accurate and reliable VaR estimate has little value without a proper back-testing and stress-testing program. VaR is simply a number whose value relies on a sound methodology, a set of realistic assumptions, and a rigorous discipline in conducting the exercise. The real benefit of VaR lies in its essential property of capturing in one single number the risk profile of a complex or diversified portfolio. VaR remains a tool that should be validated through successive reconciliations with realized P&Ls (back testing), and used to gain insight into what would happen to the portfolio if one or more assets were to move adversely to the investment strategy (stress testing).

(Source: http://www.jpmorgan.com/tss/General/Risk_Management/1159369485859)


IV. Monte Carlo

Monte Carlo means using random numbers in scientific computing. More precisely, it means using random numbers as a tool to compute something that is not random. For example¹, let X be a random variable and write its expected value as A = E[X]. If we can generate X_1, ..., X_n, n independent random variables with the same distribution, then we can make the approximation

$$\hat{A}_n = \frac{1}{n}\sum_{k=1}^{n} X_k \approx A.$$

The strong law of large numbers states that $\hat{A}_n \to A$ as $n \to \infty$. The X_k and $\hat{A}_n$ are random and could be different each time we run the program. Still, the target number, A, is not random.

We emphasize this point by distinguishing between Monte Carlo and simulation. Simulation means producing random variables with a certain distribution just to look at them. For example, we might have a model of a random process that produces clouds. We could simulate the model to generate cloud pictures, either out of scientific interest or for computer graphics. As soon as we start asking quantitative questions about, say, the average size of a cloud or the probability that it will rain, we move from pure simulation to Monte Carlo. The reason for this distinction is that there may be other ways to define A that make it easier to estimate. This process is called variance reduction, since most of the error in $\hat{A}_n$ is statistical. Reducing the variance of $\hat{A}_n$ reduces the statistical error.

We often have a choice between Monte Carlo and deterministic methods. For example, if X is a one-dimensional random variable with probability density f(x), we can estimate E[X] using a panel integration method. This probably would be more accurate than Monte Carlo, because the Monte Carlo error is roughly proportional to $1/\sqrt{n}$ for large n, which gives it order of accuracy roughly $n^{-1/2}$; the worst panel method given in Section 3.4 is first order accurate. The general rule is that deterministic methods are better than Monte Carlo in any situation where the deterministic method is practical. We are driven to resort to Monte Carlo by the curse of dimensionality. The curse is that the work to solve a problem in many dimensions may grow exponentially with the dimension. Suppose, for example, that we want to compute an integral over ten variables, an integration in ten-dimensional space. If we approximate the integral using twenty points in each coordinate direction, the total number of integration points is $20^{10} \approx 10^{13}$, which is on the edge of what a computer can do in a day. A Monte Carlo computation might reach the same accuracy with only, say, $10^6$ points. People often say that the number of points needed for a given accuracy in Monte Carlo does not depend on the dimension, and there is some truth to this.

One favorable feature of Monte Carlo is that it is possible to estimate the order of magnitude of the statistical error, which is the dominant error in most Monte Carlo computations. These estimates are often called error bars because of the way they are indicated on plots of Monte Carlo results. Monte Carlo error bars are essentially statistical confidence intervals, and Monte Carlo practitioners are among the avid consumers of statistical analysis techniques. Another feature of Monte Carlo that makes academics happy is that simple clever ideas can lead to enormous practical improvements in efficiency and accuracy (which are basically the same thing). This is the main reason I emphasize so strongly that, while A is given, the algorithm for estimating it is not. The search for more accurate alternative algorithms is often called variance reduction. Common variance reduction techniques are importance sampling, antithetic variates, and control variates. See more: E:\Risk\Monte Carlo Methods.pdf
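To make the error-bar point concrete, here is a minimal sketch (an illustration, not from the source text) that estimates A = E[X] for a distribution with a known mean and reports the statistical error, which shrinks like 1/√n:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
x = rng.exponential(scale=2.0, size=n)  # X ~ Exponential with mean 2, so A = E[X] = 2

a_hat = x.mean()                         # Monte Carlo estimate of A
error_bar = x.std(ddof=1) / np.sqrt(n)   # standard error: statistical error ~ 1/sqrt(n)
print(f"A estimate: {a_hat:.4f} +/- {error_bar:.4f} (true value 2.0)")
```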

Methodology Computing VaR with Monte Carlo Simulations follows a similar algorithm to Historical Simulations. The main difference lies in the first step of the algorithm instead of picking up a return (or a price) in the historical series of the asset and assuming that this return (or price) can re-occur in the next time interval, we generate a random number that will be used to estimate the return (or price) of the asset at the end of the analysis horizon. Step 1 Determine the length T of the analysis horizon and divide it equally into a large number N of small time increments t (i.e. t = T/N).

For illustration, we will compute a monthly VaR over a horizon of twenty-two trading days. Therefore N = 22 and Δt = 1 day. In order to calculate a daily VaR, one may instead divide each day into the number of minutes or seconds it contains: the more increments, the merrier. The main guideline here is to ensure that Δt is small enough to approximate the continuous pricing we find in the financial markets. This process is called discretization, whereby we approximate a continuous phenomenon by a large number of discrete intervals.

Step 2: Draw a random number from a random number generator and update the price of the asset at the end of the first time increment.

It is possible to generate random returns or prices. In most cases, the generator of random numbers will follow a specific theoretical distribution. This may be a weakness of Monte Carlo Simulations compared to Historical Simulations, which uses the empirical distribution. When simulating random numbers, we generally use the normal distribution. In this paper, we use the standard stock price model to simulate the path of a stock price from the ith day as defined by:

Ri = (Si+1 - Si) / Si = μΔt + σεΔt^(1/2) (1)

Where:
Ri = the return of the stock on the ith day
Si = the stock price on the ith day
Si+1 = the stock price on the (i+1)th day
μ = the sample mean of the stock returns
Δt = the timestep
σ = the sample volatility (standard deviation) of the stock returns
ε = a random number drawn from a standard normal distribution

At the end of this step/day (t = 1 day), we have drawn a random number and determined Si+1 by applying (1), since all other parameters can be determined or estimated.

Step 3: Repeat Step 2 until reaching the end of the analysis horizon T by walking along the N time intervals.
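As a minimal sketch of Steps 2 and 3 (our own illustration; the function name and parameters are not from the text), the following Python routine walks one price path by repeatedly applying the discretized model (1):

```python
import numpy as np

def simulate_path(s0, mu, sigma, n_steps, dt, rng):
    """Steps 2-3: walk one price path over N time increments
    using the discretized model (1): R = mu*dt + sigma*eps*sqrt(dt)."""
    prices = [s0]
    for _ in range(n_steps):
        eps = rng.standard_normal()              # random draw from N(0, 1)
        ret = mu * dt + sigma * eps * dt**0.5    # return over one increment
        prices.append(prices[-1] * (1.0 + ret))  # S_{i+1} = S_i * (1 + R)
    return prices                                # [S_i, S_{i+1}, ..., S_{i+N}]
```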

At the next step/day (t = 2 days), we draw another random number and apply (1) to determine Si+2 from Si+1. We repeat this procedure until we reach T and can determine Si+T. In our example, Si+22 represents the estimated (terminal) stock price of the sample share in one month's time.

Step 4: Repeat Steps 2 and 3 a large number M of times to generate M different paths for the stock over T.

So far, we have generated one path for this stock (from i to i+22). Running Monte Carlo Simulations means that we build a large number M of paths to take account of a broader universe of possible ways the stock price can evolve over a period of one month from its current value (Si) to an estimated terminal price Si+T. Indeed, there is no unique way for the stock to go from Si to Si+T. Moreover, Si+T is only one possible terminal price for the stock amongst an infinity: for a stock price defined on R+ (the set of positive real numbers), there is an infinity of possible paths from Si to Si+T (see footnote 1). It is an industry standard to run at least 10,000 simulations, even if 1,000 simulations provide an efficient estimator of the terminal price of most assets. In this paper, we ran 1,000 simulations for illustration purposes.

Step 5: Rank the M terminal stock prices from the smallest to the largest, read the simulated value in this series that corresponds to the desired (1-α)% confidence level (generally 95% or 99%), and deduce the relevant VaR, which is the difference between Si and the α% lowest terminal stock price.

Let us assume that we want the VaR at a 99% confidence level. In order to obtain it, we first need to rank the M terminal stock prices from the lowest to the highest. Then we read the 1% lowest percentile in this series. This estimated terminal price, Si+T(1%), means that there is a 1% chance that the current stock price Si could fall to Si+T(1%) or less over the period in consideration and under normal market conditions. If Si+T(1%) is smaller than Si (which is the case most of the time), then Si - Si+T(1%) corresponds to a loss. This loss represents the VaR at a 99% confidence level.
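Putting Steps 2 through 5 together, here is a compact Python sketch (the function name, parameters, and the simple rank-and-read quantile rule are our own choices, not the paper's code):

```python
import numpy as np

def monte_carlo_var(s0, mu, sigma, n_steps, dt, n_sims, alpha, rng):
    """Simulate M terminal prices (Steps 2-4), rank them, and read
    off the alpha-quantile to deduce VaR (Step 5)."""
    terminal = np.empty(n_sims)
    for m in range(n_sims):                  # Step 4: M independent paths
        s = s0
        for _ in range(n_steps):             # Steps 2-3: walk one path
            eps = rng.standard_normal()
            s *= 1.0 + mu * dt + sigma * eps * dt**0.5
        terminal[m] = s
    terminal.sort()                          # Step 5: rank low to high
    k = int(alpha * n_sims)                  # simple (approximate) alpha index
    s_alpha = terminal[k]                    # e.g. the 1% lowest price
    return s0 - s_alpha                      # VaR = S_i - S_{i+T}(alpha)
```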

Applications

Let us compute VaR using Monte Carlo Simulations for one share to illustrate the algorithm.

We apply the algorithm to compute the monthly VaR for one stock. Historical prices are charted in Exhibit 1. We will only consider the share price and thus work under the assumption that we hold only one share in our portfolio; therefore the value of the portfolio corresponds to the value of one share.

Exhibit 1: Historical prices for one stock from 01/22/08 to 01/20/09

From the series of historical prices, we calculated the sample return mean (0.17%) and sample return standard deviation (5.51%). The current price (Si) at the end of the 20th of January 2009 was $18.09. We want to compute the monthly VaR on the 20th of January 2009. This means we will jump 22 trading days into the future and look at the estimated prices for the stock on the 19th of February 2009. Since we decided to use the standard stock price model to draw 1,000 paths until T (19th of February 2009), we need to estimate the expected return (also called the drift rate) and the volatility of the share on that day. The drift can be estimated by

μ = R̄ / Δt (2)

and the volatility of the share can be estimated by

σ = s / Δt^(1/2) (3)

where R̄ and s are the sample mean and sample standard deviation of the daily returns. Note that since we chose Δt = 1 day, these two estimators equal the sample mean and sample standard deviation. Based on these two estimators, we generate Si+1 from Si by re-arranging (1) as

Si+1 = Si (1 + μΔt + σεΔt^(1/2)) (4)

and simulate 1,000 paths for the share. The last step can be summarized in Exhibit 2. We sort the 1,000 terminal stock prices from the lowest to the highest and read the price which corresponds to the desired confidence level. For instance, if we want the VaR at a 99% confidence level, we read the 1% lowest stock price, which is $15.7530. On January 20th, the stock price was $18.09. Therefore, there is a 1% likelihood that the JPMorgan Chase & Co. share falls to $15.7530 or below. If that happens, we will experience a loss of at least $18.09 - $15.7530 = $2.5170. This loss is our monthly VaR estimate at a 99% confidence level for one share, calculated on the 20th of January 2009.

Exhibit 2: Reading VaR for one share
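As a usage illustration, we can feed the inputs quoted in this example (0.17% daily mean, 5.51% daily volatility, Si = $18.09, 22 trading days, 1,000 simulations) into the monte_carlo_var sketch shown earlier. The seed below is arbitrary; any single run will differ slightly from the paper's reading of $15.7530 (and hence $2.5170) because the random draws differ:

```python
import numpy as np

rng = np.random.default_rng(seed=20090120)  # arbitrary seed, for repeatability

# Inputs quoted in the worked example (daily figures, dt = 1 day)
var_99 = monte_carlo_var(
    s0=18.09, mu=0.0017, sigma=0.0551,
    n_steps=22, dt=1.0, n_sims=1_000,
    alpha=0.01, rng=rng,
)
print(f"Monthly 99% VaR per share: ${var_99:.4f}")
# The paper's run reads roughly $2.5170; sampling noise means each
# run of 1,000 simulations lands slightly above or below that figure.
```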

Advantages

Monte Carlo Simulations present some advantages over the Analytical and Historical Simulations methodologies for computing VaR.

The main benefit of running time-consuming Monte Carlo Simulations is that they can model instruments with non-linear and path-dependent payoff functions, especially complex derivatives. Moreover, Monte Carlo Simulations VaR is not affected as much as Historical Simulations VaR by extreme events, and in fact provides in-depth detail of the rare events that may occur beyond VaR. Finally, we may use any statistical distribution to simulate the returns, as long as we are comfortable with the underlying assumptions that justify the use of that particular distribution.
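To illustrate what "path-dependent" means in practice, here is a purely illustrative Python sketch (the instrument, strike, and function name are hypothetical, not from the text) that values an average-price (Asian) call payoff. Because the payoff depends on the average price along each path rather than only on the terminal price, a path-by-path simulation is the natural way to capture it:

```python
import numpy as np

def asian_call_value_mc(s0, mu, sigma, strike, n_steps, dt, n_sims, rng):
    """Average the payoff max(mean(path) - K, 0) over M simulated paths.
    The payoff depends on the whole path, so the path itself must be
    simulated; a terminal-price-only model cannot price it directly."""
    payoffs = np.empty(n_sims)
    for m in range(n_sims):
        s, path_sum = s0, 0.0
        for _ in range(n_steps):
            s *= 1.0 + mu * dt + sigma * rng.standard_normal() * dt**0.5
            path_sum += s
        payoffs[m] = max(path_sum / n_steps - strike, 0.0)
    return payoffs.mean()
```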


Disadvantages

The main disadvantage of Monte Carlo Simulations VaR is the computer power required to perform all the simulations, and thus the time it takes to run them. If we have a portfolio of 1,000 assets and want to run 1,000 simulations on each asset, we will need to run 1 million simulations (without accounting for any additional simulations that may be required to price some of these assets, such as options and mortgages, for instance). Moreover, all these simulations increase the likelihood of model risk. Consequently, another drawback is the cost associated with developing a VaR engine that can perform Monte Carlo Simulations. Buying a commercial solution off-the-shelf or outsourcing to an experienced third party are two options worth considering; the latter approach will reinforce the independence of the computations and therefore confidence in their accuracy and non-manipulation.

Conclusion

Estimating the VaR of a portfolio of assets using Monte Carlo Simulations has become the standard in the industry. Its strengths overcome its weaknesses by far. Despite the time and effort required to estimate the VaR of a portfolio, this task only represents half of the time a risk manager should spend on VaR. The other half should be spent on checking that the model(s) used to calculate VaR is (are) still appropriate for the assets that compose the portfolio and still provide credible estimates of VaR (back-testing), and on analyzing how the portfolio reacts to the extreme events which occur every now and then in the financial markets (stress testing). This is also why we used the discretized form (1) of the standard stock price model: it keeps the Monte Carlo Simulations tractable without losing too much information. Thus, the higher N and M are, the more accurate the estimates of the terminal stock prices will be, but the longer the simulations will take to run.
Footnote 1: (Source: http://www.jpmorgan.com/tss/General/Risk_Management/1159380637650)
