Table of Contents

I. A New Approach to Real Estate Risk by DiBartolomeo, et al.
   Risk Model Basics
   Application to Real Estate
   Required Property Level Input Data
   Estimating Factor Exposures for Real Estate
   Conclusion
II. Value at Risk: An Overview of Analytical VaR
   Part 1: Definition of Analytical VaR
   Part 2: Formalization and Applications
      Example 1 – Analytical VaR of a single asset
      Example 2 – Conversion of the confidence level
      Example 3 – Conversion of the holding period
      Example 4 – Analytical VaR of a portfolio of two assets
      Example 5 – Analytical VaR of a portfolio composed of n assets
   Part 3: Risk Mapping
   Part 4: Volatility Models
   Part 5: Advantages and Disadvantages of Analytical VaR
   Conclusion
Historical Simulations VaR Methodology
Applications of Historical Simulations VaR
Advantages of Historical Simulations VaR
Disadvantages of Historical Simulations VaR
Conclusion
Methodology
Applications
Disadvantages
Conclusion
A New Approach to Real Estate Risk by DiBartolomeo, et al.
Risk Model Basics

Modern methods of portfolio risk analysis typically rely on a linear factor model of asset returns:

$$R_{it} = \sum_{m=1}^{n} B_{im} F_{mt} + E_{it} \qquad (1)$$

where:
- $R_{it}$ = the return on asset $i$ during period $t$
- $n$ = the number of factors in the model
- $B_{im}$ = the exposure of asset $i$ to factor $m$
- $F_{mt}$ = the return to unit exposure of factor $m$ during period $t$
- $E_{it}$ = the asset specific return of asset $i$ during period $t$

By the usual algebra we can extend the linear model for the return of a single asset to describe the variance of return to an entire portfolio of assets:

$$V_p = \sum_{j=1}^{n} \sum_{k=1}^{n} B_{pj} B_{pk} S_j S_k P_{jk} + \sum_{i=1}^{z} W_i^2 S_i^2 \qquad (2)$$

where:
- $V_p$ = the variance of portfolio returns
- $B_{pj}$ = the exposure of the portfolio to factor $j$
- $S_j$ = the standard deviation of returns to factor $j$
- $P_{jk}$ = the correlation of returns to factor $j$ and factor $k$
- $W_i$ = the weight of asset $i$ in the portfolio
- $S_i$ = the standard deviation of asset specific returns to asset $i$
- $z$ = the number of assets in the portfolio

Finally, we should note that the factor exposures of a portfolio are simply the weighted average of the factor exposures of the individual assets:

$$B_{pj} = \sum_{i=1}^{z} W_i B_{ij}$$
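The factor-model algebra above is easy to verify numerically. The sketch below computes portfolio variance from factor exposures and asset-specific risk; all inputs (two factors, three assets, and every exposure and volatility) are made-up illustrative numbers, not values from any actual model.

```python
import numpy as np

# Hypothetical inputs: 3 assets, 2 factors.
B = np.array([[1.2, 0.3],        # B[i, m]: exposure of asset i to factor m
              [0.8, -0.1],
              [1.0, 0.5]])
S_f = np.array([0.18, 0.06])     # factor return standard deviations S_j
P_f = np.array([[1.0, 0.2],      # factor correlation matrix P_jk
                [0.2, 1.0]])
S_e = np.array([0.10, 0.12, 0.08])  # asset-specific (residual) std devs S_i
w = np.array([0.5, 0.3, 0.2])       # portfolio weights W_i

# Portfolio factor exposures are weighted averages of asset exposures:
# B_pj = sum_i W_i * B_ij
B_p = w @ B

# Factor covariance matrix, S_j * S_k * P_jk
F_cov = np.outer(S_f, S_f) * P_f

# Equation (2): common-factor variance plus asset-specific variance
V_p = B_p @ F_cov @ B_p + np.sum(w**2 * S_e**2)
print(f"portfolio volatility: {np.sqrt(V_p):.4f}")
```

Note that the asset-specific term shrinks as weights are spread across more assets, while the common-factor term does not diversify away in the same fashion.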
In applying this sort of linear model to financial assets, two types of specification are popular. In economic models, the factors are defined to be exogenous variables such as interest rates or oil prices, such that the factor returns (the F values) can be observed in the real world. A separate time series regression is generally then used to estimate the factor exposures (the B values) of each asset. In such regressions, the dependent variable is the periodic return to the particular asset, and the independent variables are the observable returns to the factors. Alternatively, we could use a fundamental model, where endogenous characteristics of the assets (e.g. the market cap of a stock) are used to specify observable values of the factor exposures (the B values), and the factor returns (the F values) are estimated in a separate cross-sectional regression for each time period. In these cross-sectional regressions, the dependent variable is the vector of asset returns for the period, and the independent variables are the factor exposures for all the assets at the beginning of the period. Normally, we distinguish between the two types of models via notation. In fundamental models, the factor exposures can be time varying, so the factor exposures (the B values) would also carry a time subscript.

One particular model of the economic type that is widely used by institutional investors to evaluate the risk of their marketable securities portfolios is the “Everything, everywhere” (EE) model. This model links global public security performance to over 50 factors, including stock and bond market performance across five global geographic regions and six broad economic sectors. There are also factors meant to measure investor confidence (e.g. the spreads in yields for different qualities of bonds) and macroeconomic conditions (interest rates, energy costs, exchange rates). It breaks discount rate risk into two components: the risk of treasury curve movements and the risk of changes in credit-related yield spreads. Bond risk is estimated by measuring a bond’s price sensitivity to both the credit factors and the treasury factors using a binomial model that incorporates prepayment options. As of this writing, the model covers approximately 35,000 global equities and 270,000 fixed income securities. An extensive discussion of the specification of the EE model is provided in Appendix A.

In that the EE model is of the economic type, the estimation of factor exposures is normally carried out by time series regressions. However, since real estate investments do not typically have observable periodic returns, we have no information to use as the dependent variable in our regressions. Instead we take advantage of various techniques available to estimate the exposure of a financial asset’s returns to the factors in closed form.
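The time series regression described above — an asset's periodic returns regressed on observed factor returns to recover the B values — can be sketched with simulated data. The factor count, sample length, and coefficients below are illustrative placeholders, not EE model values.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_factors = 120, 3                 # 120 periods, 3 illustrative factors

F = rng.normal(0.0, 0.04, size=(T, n_factors))   # factor returns F_mt
true_B = np.array([0.9, -0.4, 1.5])              # exposures to recover
eps = rng.normal(0.0, 0.02, size=T)              # asset-specific returns E_it
R = F @ true_B + eps                             # asset returns R_it

# OLS: asset returns are the dependent variable, factor returns the
# independent variables; slope coefficients estimate the B values.
X = np.column_stack([np.ones(T), F])             # intercept + factors
coef, *_ = np.linalg.lstsq(X, R, rcond=None)
B_hat = coef[1:]
print("estimated exposures:", np.round(B_hat, 2))
```

With 120 periods of clean simulated data the estimates land close to the true exposures; with real, noisy asset returns the standard errors are of course much wider.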
For example, one might compute the sensitivity of a bond’s return to changes in the level of interest rates by the sort of time series regression discussed here. However, there are well known closed form methods for calculating the duration of a bond, the sensitivity measure that would have arisen as the result of our regression. We will therefore endeavor to include real estate within the EE framework by use of such closed form methods.

Application to Real Estate

Traditional real estate appraisals utilize one of three basic methods to value a property: (a) replacement cost, (b) comparable sales and (c) capitalizing the expected income. One way that real estate investors have tried to evaluate the risk of individual properties in the past is to do Monte Carlo simulations of their valuation models. By varying the valuation inputs across their expected range, we can obtain an expectation of the range of property values at any future moment in time. This allows us to estimate the uncertainty of return on that property over a known time horizon. However, such procedures are not tractable over large portfolios, nor do the “bottom up” estimates of real estate specific variables such as rents or operating
expenses allow for any insight into the interrelationships between real estate properties and other asset classes.

Our methods for estimating factor exposures for real estate are closely related to the third method. From the perspective of typical real estate analysis, we are using financial market data external to real estate to forecast the possible range of inputs to such a valuation process across time. For example, we can use the observed volatility of bond interest rates to frame the range of potential capitalization rates for a property. Similarly, we can do things like assess potential demand for office space in lower Manhattan based on the recent strength of the stock market performance of the financial services sector of the economy. The more volatile the expected stock market performance of the financial services sector, the greater the uncertainty of demand for such office space. By direct use of information from the stock and bond markets, the model automatically provides the institutional investor with consistency of assumptions across all asset classes.

Our model first takes a complicated problem and breaks it down into its parts. We do this by disaggregating a portfolio into buildings, and buildings into their constituent sources of risk, including the cash flows from existing tenant leases, tenant credit risk, rent volatility, and financing structure risk. Each of these sources of risk is represented by a hypothetical proxy portfolio of marketable securities that we believe will have the same economic payoff properties as the concerned aspect of the real estate property. Having done that, we can apply our existing model used for traded securities to value and estimate the risk of each piece. Given the model’s framework, we can examine the sensitivity of each contributor to changes in property value to the common set of underlying factors, and thereby derive a direct assessment of risk. Once we have done this, we can reassemble the components and examine risk at any level we choose. Armed with estimates of the potential range for factors (e.g. how volatile do we expect oil prices to be?), the mathematics of a risk assessment for a single property or entire portfolio is simple algebra. The risk estimate contains the future range of interest rates, rent volatility and the property’s financing structure.

Required Property Level Input Data

In order to make the model useful, we must define a parsimonious set of input data that can be practically collected for actual real estate portfolios in order to compute the various aspects of the model. For even large property portfolios, this information can be maintained in a single spreadsheet. Below is a listing of the inputs the model uses for each property:

1. Dominant Property Use (Apartment, Office, Industrial, Hotel, or Retail)
2. Current Occupancy
3. Anchor Tenants
   a. Square Footage
   b. Lease Renewal Date
   c. Credit Rating
4. Debt
   a. Property Only or Cross Collateralized
   b. Duration
   c. Fixed or Variable
   d. Coupon Rate
   e. Prepayment Options/Penalties
5. Expected Capital Expenditures
6. Current Effective Rent
7. Current Estimated Property Value

It should be noted that the estimated property values are not used as part of the individual property analyses, but rather are used to compute portfolio weights for portfolio level calculations. In addition to the information on individual properties, we also collect data on local real estate market conditions for each area in which a property is located, along with regional economic data. Our real estate data set is completed from commercially available databases of real estate statistical data. The EE model encompasses a wide range of information on both individual securities and financial market conditions.

Estimating Factor Exposures for Real Estate

Our first step is to estimate the interest rate risk of incoming cash flows, without considering rent volatility. This is done by forecasting the time series of a property’s cash flow in a deterministic fashion. There are two components to the cash flow: the inward cash flows provided by net operating income, and the outgoing cash flows required by the mortgage financing (if any).

Using the framework of the fixed income securities markets, we can consider tenant leases like long positions in bonds. These pseudo-bonds are subject to credit risk, and have other bond-like characteristics such as fixed expiration dates and embedded options (e.g. tenant renewal options).

Our approach to forecasting cash flows is similar to that found in real estate software packages such as Argus and Circle, but with less detail and precision. Incoming cash flows are based on projected net operating income (NOI), which changes with projected income taking into consideration expected vacancy levels. In addition to rental growth, operating margins also depend on occupancy levels, since vacant space is generally costly to the landlord. A building’s vacancy is assumed to move from its current level to a long-term equilibrium structural vacancy over time unless there are convincing idiosyncratic factors to do otherwise. Lease renewal rates for existing tenants are also modeled in a manner that makes them inversely related to vacancy: tenants have more options in a market where the rate of vacancy is high, so the probability of renewal is presumed lower. Downtime between leases is also incorporated and, like lease renewals, is a function of the vacancy at the time of renewal. Therefore, in order to calculate a property’s cash flow the following inputs are needed:

- Current rent and expenses (NOI)
- Current occupancy/vacancy
- Market-level structural vacancy and reversion
- Down-time between leases
- Rent and expense growth over time
- Useful life of the building (assumed to be 50 years in examples)

With that data, we use current NOI and project NOI forward based solely on assumed rental inflation and expenses, as well as changes due to whether the building’s occupancy level is above or below the market average. For buildings whose current vacancy is below the market’s long-term average, there will be a downward trend in NOI as current vacancy reverts to the structural vacancy estimates. The exhibit below shows a building whose current vacancy rate is well below the market vacancy. As the building trends towards the market equilibrium, cash flows decline to reflect the loss of tenants to other buildings. Once it approaches market occupancy, it is assumed to trend with the market thereafter. Finally, rents are assumed to move with inflation in the long run, and we also assume the useful life of a building is 50 years from the start of the analysis. Given the projected pattern, a building’s cash flow can be valued and its exposure to the treasury curve risk factors can be measured in a fashion that is consistent with common bond market practice.
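A rough sketch of the deterministic projection described above: vacancy reverts toward the market's structural level while rents grow with assumed inflation. The reversion speed, rates, and horizon below are hypothetical placeholders, not the model's calibrated inputs.

```python
# Hypothetical deterministic NOI projection: vacancy reverts toward a
# long-term structural level while rents grow with an assumed inflation rate.
def project_noi(current_noi, current_vacancy, structural_vacancy,
                inflation=0.02, reversion_speed=0.25, years=10):
    noi, vacancy = [], current_vacancy
    for t in range(1, years + 1):
        # partial-adjustment reversion toward the structural vacancy level
        vacancy += reversion_speed * (structural_vacancy - vacancy)
        occupancy_scale = (1 - vacancy) / (1 - current_vacancy)
        noi.append(current_noi * (1 + inflation) ** t * occupancy_scale)
    return noi

# A building currently 2% vacant in a market with 10% structural vacancy:
stream = project_noi(current_noi=1_000_000, current_vacancy=0.02,
                     structural_vacancy=0.10)
print([round(x) for x in stream[:3]])
```

Because current vacancy starts well below the structural level, early-year NOI grows by less than pure inflation, mirroring the downward drift described in the text.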
The projected NOI streams associated with each individual lease are then analyzed within the EE model’s term structure process, in the same fashion as we would consider each separate bond in a fixed income securities portfolio. Factor exposures to the three factors that define potential variation in the term structure (shift, twist, butterfly) are then calculated using EE’s binomial type OAS model. The factor exposure to the “shift” potential of the term structure is comparable to duration.

The credit risk of tenants is incorporated into the model in two ways. First, expected levels of lease defaults are incorporated into projections of incoming cash flow in terms of vacancy level and down time between leases. For anchor tenants with published credit ratings (e.g. Moody’s or S&P), the default rates associated with these ratings are taken into account in forecasting cash flows. It should be noted that corporate credit ratings provide a conservative measure of tenant risk, since “salvage values” in bond defaults are lower than what can be recovered from situations of tenants defaulting on leases. We will incorporate the impact of credit risk on discount rates in a later step.

For each property we also estimate the creditworthiness of a non-credit rated, generic tenant. Non-credit tenants are thought to be “typical” inhabitants of their local economy, and their credit risk is determined by weighting the credit risk parameters of a low rated, high yield bond by the employment-based sector share of their metropolitan area, for each credit rating level and economic sector (see Appendix A for details). This allows generic tenants in Houston (i.e. a local economy concentrated in the energy sector) to be differentiated from tenants in San Jose (i.e. concentrated in the high tech sector).

It should be noted that the migration of credit risk across time is also taken into account. It is assumed that when a credit rated tenant’s lease expires, that tenant may renew their lease (with some assumed probability) or be replaced by a generic tenant. The credit migration process is modeled as a binomial tree, with expected renewal rates of leases used to define the probabilities at each decision node. As such, as leases turn over, cash flow expectations of a property (frictional vacancy rates, downtime, NOI) slowly migrate toward greater influence of generic tenants.

Our next step is to consider the impact of credit risk on the discount rates used to compute the present value of future incoming cash flows. The rate of discount applied to incoming cash flows can be thought of as having two components: the term structure portion, having purely to do with the time value of money, and a credit spread. Time series changes in the credit spread are like a parallel shift in the yield curve, so the sensitivity of a cash flow stream to a change in the spread is given by its duration. The expected volatility and correlations of the credit spreads are modeled separately as a function of the EE model factors. For most properties, some of this risk will diversify away (the tenant specific portion), while the credit risk arising from the potential for a general economic downturn will not.
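The credit migration idea can be caricatured in a few lines: at each lease expiry the rated tenant renews with some probability or is replaced by the generic tenant, so expected cash flow quality drifts over time. This toy sketch (all probabilities and default rates invented) ignores the full binomial tree and discounting.

```python
# Toy sketch of credit migration across lease expiries.
def rated_share(p_renew, n_expiries):
    """Expected share of space still held by the original rated tenant."""
    return p_renew ** n_expiries

def expected_noi(base_noi, p_renew, d_rated, d_generic, n_expiries):
    """NOI adjusted for expected lease defaults, blending the rated and
    generic tenants' default rates by the surviving rated share."""
    share = rated_share(p_renew, n_expiries)
    blended_default = share * d_rated + (1 - share) * d_generic
    return base_noi * (1 - blended_default)

# A rated tenant (1% assumed default rate) migrating toward a riskier
# generic tenant (4%), with a 70% renewal probability at each expiry:
noi_path = [expected_noi(1_000_000, 0.70, 0.01, 0.04, n) for n in range(4)]
print([round(x) for x in noi_path])
```

The expected NOI declines monotonically toward the generic-tenant level, which is the "migration toward greater influence of generic tenants" described above, stripped of the tree structure.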
So far, our detailed projections of future operating cash flows have assumed future rents will be the current rent level adjusted for inflation. We will now explicitly incorporate the future uncertainty of levels of rents and occupancy, and hence the expected variation of rents through time.

Over time, changes in the level of rents and occupancy are driven by both demand and supply. The supply of commercial space changes very slowly, having little correlation with the EE model’s broad economic factors, and is largely a function of local market conditions. In contrast, demand for commercial space is elastic and can be effectively captured in our framework by relating percentage changes in rents to the broad economic factors of the EE model.

For each property type and each metropolitan area we run a time series regression which models the percentage change in rents. The independent variable for this equation is a demand variable specific to each metropolitan area. It is a weighted average of the stock market returns for each of the six EE industrial sectors, weighted by the employment shares in each individual market. In that retail and personal services employment are a significant percentage of total employment, the consumer sector typically receives the biggest weight. Variation does occur across metro areas since each market has its own employment profile given the makeup of the local economy. In addition to demand, we also incorporate the change in metro-level building stock for each property type, as well as an initial conditions variable in the form of vacancy rates. Initial market conditions affect the impact of the weighted EE factors on rent volatility since it is the cumulative effect of supply and demand factors that impacts rent growth. Changes in demand and supply have less of an impact when the market is 25% vacant than when it is 5% vacant. This is important because most markets were not in equilibrium at the beginning of the observation period.

The impact of rent volatility is incorporated into risk assessment in a novel fashion. Our cash flow analysis assumed future rents are known with certainty. We incorporate the uncertainty in rents by assuming property owners have entered into a forward contract with tenants to keep future rents constant in real terms. The portion of a building’s rents subject to the forward contract at any one moment in time is presumed to be a combination of the expected percentage of vacant space plus the expected percentage of leases turning over in the current year. As this expected value varies from year to year, it is projected out over the expected life of the building, and a present value weighted average is taken as the final coefficient. The coefficients arising from the regression equation above represent the EE factor exposures of this contract. Each property “package” thus includes a set of risk exposures to represent rent and occupancy volatility. The residuals from the rent volatility estimation equation are also scaled by this rent volatility exposure coefficient and used as an estimate of the idiosyncratic risk of a particular building.

Having measured a property’s underlying cash flow structure, the model still needs to analyze and quantify the risk associated with its financing structure.
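The demand variable described above — sector stock returns weighted by local employment shares — is straightforward to sketch. The sector names, returns, and shares below are illustrative placeholders, not EE data.

```python
# Illustrative metro demand variable: one period of (made-up) broad-sector
# stock returns, weighted by the metro area's employment share per sector.
sector_returns = {
    "consumer": 0.020, "technology": 0.050, "energy": -0.010,
    "financial": 0.030, "industrial": 0.010, "healthcare": 0.015,
}
employment_share = {   # hypothetical metro employment profile; sums to 1
    "consumer": 0.30, "technology": 0.10, "energy": 0.05,
    "financial": 0.20, "industrial": 0.20, "healthcare": 0.15,
}

demand = sum(sector_returns[s] * employment_share[s] for s in sector_returns)
print(f"metro demand variable this period: {demand:.4%}")
```

A metro with a different employment profile (say, energy-heavy Houston) would weight the same sector returns differently, which is exactly the cross-metro variation the text describes.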
Mortgages are modeled using the EE term structure model. This process is comparable to analyzing a short position in mortgage backed securities. Provisions of a particular financing vehicle may include fixed or variable rate mortgages. Prepayment options are taken into account and are similar to having embedded calls, where the exercise price is the prepayment penalty plus the outstanding loan amount at the particular point in time. We also take a conservative approach with respect to the borrower by assuming that intentional default is not an option. If a property is part of a cross collateralized pool of assets, we build a synthetic security which is the sum of all the properties under the same financing umbrella.

Conclusion

Here, we have put forward a new approach to estimating the risk of real estate that does not rely on typical appraisal based benchmark indices. The proposed model decomposes real estate risk into four components: operating cash flow valuation risk, credit risk, financing structure risk, and rent/occupancy volatility. Each of these risks is then expressed as functions of factors that are observable in financial markets or in the general economy. Our approach is congruent with methodologies used for risk management of securities market portfolios, allowing for seamless integration of risk assessment in multi-asset class portfolios.

Our work with specific real estate portfolios has led to a number of useful empirical conclusions. First, even our limited results suggest that widely published real estate indices do understate the true (but unobservable) volatility in real estate returns. The volatility forecasts from our model seem consistent with comparative statistics for other asset types such as REITs, and are also consistent in property by property comparisons with the beliefs of the investment professionals concerned. Most importantly, the model provides a framework for determining how much of the risk of investing in a property arises from characteristics of the specific property, and how much of the risk arises from common influences across all properties such as interest rates and levels of economic activity.

This new approach to estimating real estate risk also provides an avenue for many innovations in real estate practice, including better benchmarks, the active hedging of interest rate risk to property values, and the more appropriate inclusion of real estate in optimal asset allocation. One more interesting use of the model would be the potential creation of synthetic “real estate” by constructing securities portfolios that have comparable exposure to the economic forces that drive real estate returns. Such synthetic products may be an important step forward in overcoming the inherently illiquid nature of real estate. We believe this new level of transparency with respect to real estate risk will encourage investors to be confident in their understanding of real estate, and consequently be more willing to allocate more of their resources to property investments.
II. Value at Risk: An Overview of Analytical VaR

One of the most widespread models in use in risk management departments across the financial industry is Value-at-Risk (or VaR). VaR calculates the worst expected loss over a given horizon at a given confidence level under normal market conditions. It can be measured at the portfolio, sector, asset class, and security level. VaR provides a single number summarizing the organization’s exposure to market risk and the likelihood of an unfavorable move. To illustrate, suppose a $100 million portfolio has a monthly VaR of $8.3 million with a 99% confidence level. VaR simply means that there is a 1% chance for losses greater than $8.3 million in any given month of a defined holding period under normal market conditions.

VaR estimates can be calculated for various types of risk: market, credit, operational, etc. We will only focus on market risk in this article. Market risk arises from mismatched positions in a portfolio that is marked-to-market periodically (generally daily) based on uncertain movements in prices, rates, volatilities and other relevant market parameters. In such a context, the trading positions under review are fixed for the period in question.

The ease of using VaR is also its pitfall. VaR summarizes within one number the risk exposure of a portfolio. But it is valid only under a set of assumptions that should always be kept in mind when handling VaR. It is worth noting that VaR is an estimate, not a uniquely defined value. Moreover, VaR does not address the distribution of potential losses on those rare occasions when the VaR estimate is exceeded. We should also bear in mind these constraints when using VaR.

There are mainly three designated methodologies to compute VaR: Analytical (also called Parametric), Historical Simulations, and Monte Carlo Simulations. Multiple VaR methodologies are available and each has its own benefits and drawbacks. For now, we will focus only on the Analytical form of VaR. The two other methodologies will be treated separately in upcoming issues of this newsletter.

Part 1 of this article defines what VaR is and what it is not, and describes the main parameters. Then, in Part 2, we mathematically express VaR, work through a few examples and play with varying the parameters. Parts 3 and 4 briefly touch upon two critical but complex steps to computing VaR: mapping positions to risk factors and selecting the volatility model of a portfolio. Finally, in Part 5, we discuss the pros and cons of Analytical VaR.

Part 1: Definition of Analytical VaR

VaR is a predictive (ex-ante) tool used to prevent portfolio managers from exceeding risk tolerances that have been developed in the portfolio policies. VaR involves two arbitrarily chosen parameters: the holding period and the confidence level.

The holding period corresponds to the horizon of the risk analysis. The usual holding periods are one day or one month. The holding period can depend on the fund’s investment and/or reporting horizons, and/or on the local regulatory requirements. In other words, when computing a daily VaR, we are interested in estimating the worst expected loss that may occur by the end of the next trading day at a certain confidence level under normal market conditions.

The confidence level is intuitively a reliability measure that expresses the accuracy of the result. The higher the confidence level, the more likely we expect VaR to approach its true value or to be within a pre-specified interval. In order for VaR to be meaningful, we generally choose a confidence level of 95% or 99%. It is therefore no surprise that most regulators require a 95% or 99% confidence interval to compute VaR.

Part 2: Formalization and Applications

Analytical VaR is also called Parametric VaR because one of its fundamental assumptions is that the return distribution belongs to a family of parametric distributions, such as the normal or the lognormal distributions. The Central Limit Theorem states that the sum of a large number of independent and identically distributed random variables will be approximately normally distributed (i.e. following a Gaussian distribution, or bell-shaped curve) if the random variables have a finite variance. But even if we have a large enough sample of historical returns, is it realistic to assume that the returns of any given fund follow a normal distribution? We should bear this assumption in mind when handling Analytical VaR.

Analytical VaR can simply be expressed as:

$$\mathrm{VaR}_\alpha = P \cdot x_\alpha \qquad (1)$$

where:
- $\mathrm{VaR}_\alpha$ is the estimated VaR at the confidence level 100 × (1 − α)%
- $x_\alpha$ is the left-tail α percentile of a normal distribution; $x_\alpha$ is generally negative
- $P$ is the marked-to-market value of the portfolio

$x_\alpha$ is defined by the expression $\Pr(R \le x_\alpha) = \alpha$, where $R$ is the return of the asset. To work with $x_\alpha$, we need to associate the return distribution to a standard normal distribution, which has a zero mean and a standard deviation of one. Using a standard normal distribution enables us to replace $x_\alpha$ by $z_\alpha$ through the following permutation:

$$z_\alpha = \frac{x_\alpha - \mu}{\sigma} \qquad (2)$$

which yields:

$$x_\alpha = \mu + z_\alpha \sigma \qquad (3)$$

where $z_\alpha$ is the left-tail α percentile of a standard normal distribution. Consequently, we can re-write (1) as:

$$\mathrm{VaR}_\alpha = P(\mu + z_\alpha \sigma) \qquad (4)$$

Example 1 – Analytical VaR of a single asset

Suppose we want to calculate the Analytical VaR at a 95% confidence level over a holding period of 1 day for an asset in which we have invested $1 million. We have estimated μ (mean) and σ (standard deviation) to be 0.3% and 3% respectively. The Analytical VaR of that asset would be:

$$\mathrm{VaR}_{0.05} = \$1{,}000{,}000 \times (0.003 - 1.6449 \times 0.03) \approx -\$46{,}347$$

This means that there is a 5% chance that this asset may lose at least $46,347 at the end of the next trading day under normal market conditions.

Example 2 – Conversion of the confidence level

Assume now that we are interested in a 99% Analytical VaR of the same asset over the same one-day holding period. The corresponding VaR would simply be:

$$\mathrm{VaR}_{0.01} = \$1{,}000{,}000 \times (0.003 - 2.3263 \times 0.03) \approx -\$66{,}789$$

There is a 1% chance that this asset may experience a loss of at least $66,789 at the end of the next trading day under normal market conditions. As you can see, the higher the confidence level, the higher the VaR as we travel downwards along the left tail of the distribution (further left on the x-axis).
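Examples 1 and 2 can be reproduced in a few lines with the standard normal percentile. The figures differ from hand calculations by a dollar or two because the exact $z_\alpha$ is used instead of the rounded 1.6449 and 2.3263.

```python
from statistics import NormalDist

def analytical_var(value, mu, sigma, confidence):
    """Analytical VaR per equation (4): VaR = P * (mu + z_alpha * sigma),
    where z_alpha is the left-tail percentile of the standard normal."""
    z = NormalDist().inv_cdf(1 - confidence)   # e.g. about -1.6449 at 95%
    return value * (mu + z * sigma)

# Example 1: $1 million, mu = 0.3%, sigma = 3%, 95% confidence
var_95 = analytical_var(1_000_000, 0.003, 0.03, 0.95)
# Example 2: the same asset at 99% confidence
var_99 = analytical_var(1_000_000, 0.003, 0.03, 0.99)
print(f"95%: {var_95:,.0f}   99%: {var_99:,.0f}")
```

The negative sign reflects that $x_\alpha$ (and hence the VaR figure) is generally negative: it is the loss at the chosen left-tail percentile.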
One of the main reasons to invest in two different assets would be to diversify the risk of the portfolio. we replace in (4) the mean of the asset by the weighted mean of the portfolio. we can simply apply the square root of the time5: (5) Applying this rule to our examples above yields the following VaR for the two confidence levels: Example 4 – Analytical VaR of a portfolio of two assets Let us assume now that we have a portfolio worth $100 million that is equally invested in two distinct assets. The volatility of a portfolio composed of two assets is given by: (6) where • • • • w1 is the weighting of the first asset w2 is the weighting of the second asset σ1 is the standard deviation or volatility of the first asset σ2 is the standard deviation or volatility of the second asset 13 .Example 3 – Conversion of the holding period If we want to calculate a one-month (21 trading days on average) VaR of that asset using the same inputs. σ P. how will the correlation between these two assets affect the VaR of the portfolio? As we aggregate one level up the calculation of Analytical VaR. μ Pand the standard deviation (or volatility) of the asset by the volatility of the portfolio. Therefore. In other words. the main underlying question here is how one asset would behave if the other asset were to move against us.
• ρ1,2 is the correlation coefficient between the two assets

And (4) can be re-written as:

(7) VaR(1−α) = (zα × σP − μP) × P

Let us assume that we want to calculate Analytical VaR at a 95% confidence level over a one-day horizon on a portfolio composed of two assets with the following assumptions:
• P = $100 million
• w1 = w2 = 50%[6]
• μ1 = 0.3%
• σ1 = 3%
• μ2 = 0.5%
• σ2 = 5%
• ρ1,2 = 30%

(8) σP ≈ 3.28%, so VaR95% = (1.6449 × 3.28% − 0.4%) × $100 million ≈ $4.99 million

Example 5 – Analytical VaR of a portfolio composed of n assets

From the previous example, we can generalize these calculations to a portfolio composed of n assets. In order to keep the mathematical formulation handy, we use matrix notation and can re-write the volatility of the portfolio as:

(9) σP = √(w′ Σ w)

where:
• w is the vector of the weights of the n assets
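Equations (6)–(8) are easy to verify numerically. A minimal sketch (the function name is ours, not from the article):

```python
import math
from statistics import NormalDist

def analytical_var_two_assets(value, w1, w2, mu1, mu2, sigma1, sigma2, rho, confidence):
    """Equations (6) and (7): portfolio mean and volatility, then Analytical VaR."""
    mu_p = w1 * mu1 + w2 * mu2
    sigma_p = math.sqrt((w1 * sigma1) ** 2 + (w2 * sigma2) ** 2
                        + 2 * w1 * w2 * sigma1 * sigma2 * rho)
    z = NormalDist().inv_cdf(confidence)
    return (z * sigma_p - mu_p) * value

# Example 4: $100 million portfolio, 50/50 weights, rho = 30%
var_p = analytical_var_two_assets(100e6, 0.5, 0.5, 0.003, 0.005, 0.03, 0.05, 0.30, 0.95)
```

Note that the result (about $4.99 million) is far below the sum of the two stand-alone VaRs, which is the diversification effect at work.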
• w′ is the transpose vector of w
• Σ is the covariance matrix of the n assets

Practically, we could design a spreadsheet in Excel (Exhibit 1) to calculate Analytical VaR on the portfolio in Example 4. The cells in grey are the input cells. It is easy from there to expand the calculation to a portfolio of n assets. But be aware that you will soon reach the limits of Excel, as we will have to calculate the n(n−1)/2 covariance terms of the covariance matrix.

Exhibit 1: Excel Spreadsheet to calculate Analytical VaR for a portfolio of two assets
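Beyond Excel, equation (9) generalizes directly. A hedged sketch in plain Python (no matrix library), where `cov` stands for the n×n covariance matrix Σ:

```python
import math
from statistics import NormalDist

def analytical_var_n_assets(value, weights, means, cov, confidence):
    """Equation (9): sigma_P = sqrt(w' * Sigma * w), then VaR as in (7)."""
    n = len(weights)
    mu_p = sum(w * m for w, m in zip(weights, means))
    variance = sum(weights[i] * cov[i][j] * weights[j]
                   for i in range(n) for j in range(n))
    z = NormalDist().inv_cdf(confidence)
    return (z * math.sqrt(variance) - mu_p) * value

# The portfolio of Example 4 expressed as a 2x2 covariance matrix
cov = [[0.03 ** 2, 0.30 * 0.03 * 0.05],
       [0.30 * 0.03 * 0.05, 0.05 ** 2]]
var_p = analytical_var_n_assets(100e6, [0.5, 0.5], [0.003, 0.005], cov, 0.95)
```

With the two-asset inputs it reproduces the result of Example 4, and it scales to any n at the cost of filling in the covariance matrix.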
Part 3: Risk Mapping

In order to cope with a covariance matrix that grows each time you diversify your portfolio further, we can map each security of the portfolio to common fundamental risk factors and base our calculations of Analytical VaR on these risk factors. We generally consider four main risk factors: Spot FX, Equity, Zero-Coupon Bonds and Futures/Forwards. This process is called reverse engineering and aims at reducing the size of the covariance matrix and speeding up the computational time of transposing and multiplying matrices. The complexity of this process goes beyond the scope of this overview of Analytical VaR and will need to be treated separately in a future article.

Part 4: Volatility Models

We can guess from the various expressions of Analytical VaR we have used that its main driver is the expected volatility (of the asset or the portfolio), since we multiply it by a constant factor greater than 1 (1.6449 for a 95% VaR, for instance), as opposed to the expected mean, which only enters as an additive term. Hence, rather than deriving the expected volatility from historical data alone, we may try to estimate the conditional volatility of the asset or the portfolio; in that case, we exploit the fact that today's volatility is positively correlated with yesterday's volatility. The two most common volatility models used to compute VaR are the Exponentially Weighted Moving Average (EWMA) and the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) models.
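As a taste of Part 4, the EWMA recursion is only a few lines. This is our own sketch; λ = 0.94 is the common RiskMetrics choice, an assumption rather than a figure from this text:

```python
def ewma_volatility(returns, lam=0.94):
    """EWMA conditional variance: sigma2_t = lam * sigma2_{t-1} + (1 - lam) * r_{t-1}**2.
    Seeded with the squared first return for simplicity; a sample variance works too."""
    sigma2 = returns[0] ** 2
    for r in returns[1:]:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
    return sigma2 ** 0.5  # today's conditional volatility estimate
```

The resulting volatility would then replace the plain historical σ in equation (4) or (7).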
We will discuss these models in a future article in order to be exhaustive on this very important part of computing VaR.

Part 5: Advantages and Disadvantages of Analytical VaR

Analytical VaR is the simplest methodology to compute VaR and is rather easy to implement for a fund. The input data is rather limited, and since there are no simulations involved, the computation time is minimal. Its simplicity is also its main drawback. First, Analytical VaR assumes not only that the historical returns follow a normal distribution, but also that the changes in price of the assets included in the portfolio follow a normal distribution. And this very rarely survives the test of reality. Second, if our historical series exhibits heavy tails, then computing Analytical VaR using a normal distribution will underestimate VaR at high confidence levels and overestimate VaR at low confidence levels. Third, Analytical VaR does not cope very well with securities that have a non-linear payoff distribution, like options or mortgage-backed securities.

Conclusion

As we have demonstrated, Analytical VaR is easy to implement as long as we follow these steps. First, we need to collect historical data on each security in the portfolio (we advise using at least one year of historical data, except if one security has experienced high volatility, which would suggest a shorter period of time). Second, we need to calculate the historical parameters (mean, standard deviation, etc.) and estimate the expected prices, volatilities and correlations. Third, if the portfolio has a large number of underlying positions, we would need to map them against a more manageable set of risk factors. Finally, we apply (7) to find the Analytical VaR estimate of the portfolio.

As always when building a model, it is important to make sure that it has been reviewed, fully tested and approved; that a User Guide (including any potential code) has been documented and will be updated if necessary; that training has been designed and delivered to the members of the risk management team and to the recipients of the outputs of the risk management function; and finally that a capable person has been allocated the oversight of the model, its current use, and its regular refinement.
1 CDO stands for Collateralized Debt Obligation. These instruments repackage a portfolio of average- or poor-quality debt into high-quality debt (generally rated AAA) by splitting a portfolio of corporate bonds or bank loans into four classes of securities, called tranches. (Source: http://www.jpmorgan.com/tss/General/email/1159360877242)
2 Pronounced V'ah'R.
3 Note that these parameters have to be estimated. They are not the historical parameters derived from the series.
4 Note that zα is to be read in the statistical table of a standard normal distribution.
5 This rule stems from the fact that the sum of n consecutive one-day log returns is the n-day log return, and the standard deviation of n-day returns is √n × the standard deviation of one-day returns.
6 These weights correspond to the weights of the two assets at the end of the holding period. Because of market movements, there is little likelihood that they will be the same as the weights at the beginning of the holding period.
III. Historical Simulations VaR

We indicated in the previous article that the main benefits of Analytical VaR were that it requires very few parameters, is easy to implement and is quick to run computations (with an appropriate mapping of the risk factors). Its main drawbacks lie in the significant (and inconsistent across asset classes and markets) assumption that price changes in the financial markets follow a normal distribution, and in the fact that this methodology may be computer-intensive, since we need to calculate the n(n−1)/2 terms of the Variance-Covariance matrix (in the case where we do not proceed to a risk mapping of the various instruments that compose the portfolio). With the increasing power of our computers, the second limitation will merely force you to move away from spreadsheets to programming. But the first assumption, in the case of a portfolio containing a non-negligible portion of derivatives (a minimum of 10%–15% depending on the complexity and exposure or leverage), may result in the Analytical VaR being seriously underestimated, because these derivatives have non-linear payoffs. As you might guess, this assumption will also reach its limits for instruments trading in very volatile markets or during troubled times such as we have experienced this year.

One solution to circumvent that theoretical constraint is simply to work with the empirical distribution of the returns, which leads to Historical Simulations VaR. Indeed, is it not more logical to work with the empirical distribution that captures the actual behavior of the portfolio and encompasses all the correlations between the assets composing the portfolio? The answer to this question is not so clear-cut: computing VaR using Historical Simulations seems more intuitive initially, but it has its own pitfalls, as we will see. But first, how do we compute VaR using Historical Simulations?

Historical Simulations VaR Methodology

The fundamental assumption of the Historical Simulations methodology is that you look back at the past performance of your portfolio and make the assumption – there is no escape from making assumptions with VaR modeling – that the past is a good indicator of the near-future or, in other words, that the recent past will reproduce itself in the near-future. The algorithm below illustrates the straightforwardness of this methodology. It is called Full Valuation because we will re-price the asset or the portfolio
after every run. This differs from a Local Valuation method, in which we only use the information about the initial price and the exposure at the origin to deduce VaR.

Historical Simulations VaR requires a long history of returns in order to get a meaningful VaR. For instance, computing a VaR on a portfolio of Hedge Funds with only a year of return history will not provide a good VaR estimate.

Step 1 – Calculate the returns (or price changes) of all the assets in the portfolio between each time interval.

The first step lies in setting the time interval and then calculating the returns of each asset between two successive periods of time. Generally, we use a daily horizon to calculate the returns, but we could use monthly returns if we were to compute the VaR of a portfolio invested in alternative investments (Hedge Funds, Private Equity, Venture Capital and Real Estate), where the reporting period is either monthly or quarterly.

Step 2 – Apply the price changes calculated to the current mark-to-market value of the assets and re-value your portfolio.

We start by looking at the returns of every asset yesterday and apply these returns to the value of these assets today. That gives us new values for all these assets and consequently a new value of the portfolio. Then, we go back in time by one more time interval, to two days ago. We take the returns that have been calculated for every asset on that day and assume that those returns may occur tomorrow with the same likelihood as the returns that occurred yesterday. We re-value every asset with these new price changes and then the portfolio itself. And we continue until we have reached the beginning of the period. Once we have calculated the returns of all the assets from today back to the first day of the period of time that is being considered – let us assume one year comprised of 265 days – we now consider that these returns may occur tomorrow with the same likelihood. After applying these price changes to the assets 264 times, we end up with 264 simulated values for the portfolio and thus 264 simulated P&Ls.

Step 3 – Sort the series of the portfolio-simulated P&L from the lowest to the highest value.

Since VaR calculates the worst expected loss over a given horizon at a given confidence level under normal market conditions, we need to sort these 264 values from the lowest to the highest, as VaR focuses on the tail of the distribution.
Step 4 – Read the simulated value that corresponds to the desired confidence level.

The last step is to determine the confidence level we are interested in – let us choose 99% for this example. One can read the corresponding value in the series of the sorted simulated P&Ls of the portfolio at the desired confidence level and then take it away from the mean of the series of simulated P&Ls. In other words, the VaR at the 99% confidence level is the mean of the simulated P&Ls minus the 1% lowest value in the series of the simulated values. This can be formulated as follows:

(1) VaR(1−α) = μ(R) − Rα

where:
• VaR(1−α) is the estimated VaR at the confidence level 100 × (1 − α)%
• μ(R) is the mean of the series of simulated returns or P&Ls of the portfolio
• Rα is the αth worst return of the series of simulated P&Ls of the portfolio or, in other words, the return of the series of simulated P&Ls that corresponds to the level of significance α

We may need to proceed to some interpolation, since there is no chance of getting a value at exactly 99% in our example. Indeed, if we use 265 days, each return calculated at every time interval will have a weight of 1/264 = 0.00379. If we want to look at the value that has a cumulative weight of 99%, we will see that there is no value that matches exactly 1% (since we have divided the series into 264 time intervals and not a multiple of 100). Proceeding to a linear interpolation between the two successive time intervals that surround the 99th percentile will therefore only result in an estimation of the actual VaR. Considering that there is very little chance that the tail of the empirical distribution is linear, this would be a pity, given that we did all that we could to use the empirical distribution of returns, wouldn't it? Nevertheless, even a linear interpolation may give you a good estimate of your VaR. For those who are more eager to obtain the exact VaR, the Extreme Value Theory (EVT) could be the right tool. It is rather mathematically demanding and would require us to spend more time to explain this method; we will explain in another article how to use EVT when computing VaR.
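Steps 1 to 4, including the linear interpolation around the 99th percentile, can be sketched as follows. This is our own illustration of the algorithm for a single asset, not the article's spreadsheet:

```python
def historical_var(prices, value, confidence=0.99):
    """Historical Simulations VaR for one asset.
    prices: historical price series (oldest first); value: current mark-to-market."""
    # Step 1: returns between each time interval
    returns = [p1 / p0 - 1 for p0, p1 in zip(prices, prices[1:])]
    # Step 2: apply each historical return to today's value -> simulated P&Ls
    pnls = sorted(r * value for r in returns)   # Step 3: sort lowest to highest
    mean_pnl = sum(pnls) / len(pnls)
    # Step 4: linearly interpolate the alpha-quantile of the simulated P&Ls
    alpha = 1 - confidence
    pos = alpha * (len(pnls) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(pnls) - 1)
    worst = pnls[lo] + (pos - lo) * (pnls[hi] - pnls[lo])
    return mean_pnl - worst                     # equation (1)
```

The return value follows equation (1): the distance from the mean of the simulated P&Ls to the interpolated 1% worst value.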
Applications of Historical Simulations VaR

Let us compute VaR using historical simulations for one asset and then for a portfolio of assets to illustrate the algorithm.

Example 1 – Historical Simulations VaR for one asset

The first step is to calculate the return of the asset price between each time interval; this is done in column D in Table 1. Then we create a column of simulated P&Ls based on the current market value of the asset ($1,000,000, as shown in cell C3) and each return which this asset has experienced over the period under consideration. For instance, a historical return of −1.93% applied to the current value gives a simulated P&L of $1,000,000 × (−1.93%) = −$19,300. In Step 3, we simply sort all the simulated values of the asset (based on the past returns). Finally, in Step 4, we read the simulated value in column G which corresponds to the 1% worst loss. As there is no value that corresponds exactly to 99%, we interpolate between the surrounding values at 98.86% and 99.24%, which gives us −$54,711.21. This number does not take into account the mean. As the 99% VaR is the distance from the mean to the first percentile (the 1% worst loss), we need to subtract the number we just calculated from the mean to obtain the actual 99% VaR. The VaR of this asset is thus $1,033.55 − (−$54,711.21) = $55,744.76. In order to express VaR as a percentage, we can divide the 99% VaR amount by the current value of the asset ($1,000,000), which yields 5.57%.

Table 1 – Calculating Historical Simulations VaR for one asset (Asset Price; Histogram of Returns)

Example 2 – Historical Simulations VaR for one portfolio

Computing VaR on one asset is relatively easy, but how do the historical simulations account for any correlations between assets if the portfolio holds more than one asset? The answer is also simple: correlations are already
embedded in the price changes of the assets. Let us look at another example, with a portfolio composed of two assets; in this example, each asset represents 50% of the portfolio. We simply add a couple of columns to replicate the intermediary steps for the second asset. We apply the calculated returns of every asset to their current price and, after each run, we re-value the portfolio by simply adding up the simulated P&Ls of each asset, which gives us the simulated P&Ls for the portfolio (column J). This straightforward step of simply re-composing the portfolio after every run is one of the reasons behind the popularity of this methodology. Indeed, there is no need to calculate a Variance-Covariance matrix when running historical simulations: since correlations are embedded in the price changes, we do not need to handle sizeable Variance-Covariance matrices.

We know that VaR is not an additive risk measure: if we add the VaR of the two assets, we will not get the VaR of the portfolio. That difference represents the diversification effect. The reason is that the gains on one asset sometimes offset the losses on the other asset (rows 10, 12, 13, 17-20, 23, 26-28, 30, 32 in Table 2). Over the 265 days, this happened 127 times with different magnitudes, and in the end this benefited the overall risk profile of the portfolio. In this case, the 99% VaR of the first asset is $55,744.76 (or 5.57%) and the 99% VaR of the second asset is $54,209.71 (or 5.42%), while the 99% VaR of the portfolio only represents 3.67% of the current marked-to-market value of the portfolio. Having a portfolio invested in these two assets makes the risk lower than investing in either of these two assets alone.

Table 2 – Calculating Historical Simulations VaR for a portfolio of two assets (Portfolio Unit Price; Histogram of Returns)
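For the portfolio case, the only change to the single-asset sketch is that per-asset P&Ls are summed before sorting, which is exactly how the correlations stay embedded. An illustrative helper (names and data are ours):

```python
def portfolio_simulated_pnls(price_series, values):
    """Simulated portfolio P&Ls for Historical Simulations VaR.
    price_series: one historical price list per asset (aligned dates, oldest first);
    values: current mark-to-market value of each position."""
    n_runs = len(price_series[0]) - 1
    pnls = []
    for t in range(n_runs):
        # Apply the day-t price change of every asset to today's value and sum
        pnls.append(sum(v * (s[t + 1] / s[t] - 1)
                        for s, v in zip(price_series, values)))
    return sorted(pnls)

# Two perfectly offsetting assets: every simulated portfolio P&L is ~0,
# an extreme illustration of the diversification effect
runs = portfolio_simulated_pnls([[100, 110, 99], [100, 90, 99]], [100, 100])
```

Equation (1) is then applied to this sorted series exactly as in the single-asset case.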
Advantages of Historical Simulations VaR

Computing VaR using the Historical Simulations methodology has several advantages. First, there is no need to formulate any assumption about the return distribution of the assets in the portfolio. Second, there is also no need to estimate the volatilities and correlations between the various assets: they are implicitly captured by the actual daily realizations of the assets. Third, the fat tails of the distribution and other extreme events are captured as long as they are contained in the dataset. Fourth, the aggregation across markets is straightforward, as we showed with these two simple examples.

Disadvantages of Historical Simulations VaR

The Historical Simulations VaR methodology may be very intuitive and easy to understand, but it still has a few drawbacks. First, it relies completely on a particular historical dataset and its idiosyncrasies. For instance, if we run a Historical Simulations VaR in a bull market, VaR may be underestimated; similarly, if we run a Historical Simulations VaR just after a crash, the falling returns which the portfolio has experienced recently may distort VaR. Second, it cannot accommodate changes in the market structure, such as the introduction of the Euro in January 1999. Third, this methodology may not always be computationally efficient when the portfolio contains complex securities or a very large number of instruments. Mapping the instruments to fundamental risk factors is the most efficient way to reduce the computational time of calculating VaR while preserving the behavior of the portfolio almost intact. Fourth, a minimum of history is required to use this methodology: using a period of time that is too short (less than 3-6 months of daily returns) may lead to a biased and inaccurate estimation of VaR. As a rule of thumb, we should use at least four years of data in order to run 1,000 historical simulations. Lastly, Historical Simulations VaR cannot handle sensitivity analyses easily, and round numbers like 1,000 simulations may have absolutely no relevance whatsoever to your exact portfolio. Depending on the composition of the portfolio and on the objectives you are attempting to achieve when computing VaR, you may need to think like an economist in addition to a risk manager in order to take into account the various idiosyncrasies of each instrument and market. Security prices, like commodities, move through economic cycles; natural gas prices, for example, are usually more volatile in the winter than in the summer. Also, bear in mind that VaR estimates need to rely on a stable set of assumptions in order to keep a
consistent and comparable meaning when they are monitored over a certain period of time. In order to increase the accuracy of Historical Simulations VaR, one can also decide to weight the recent observations more heavily than the furthest ones, since the latter may not give much information about where prices would go today. We will cover these more advanced VaR models in another article.

Conclusion

Despite these disadvantages, many financial institutions have chosen historical simulations as their favored methodology to compute VaR. To many, working with the actual empirical distribution is the "real deal." However, VaR remains a tool that should be validated through successive reconciliation with realized P&Ls (back testing) and used to gain insight into what would happen to the portfolio if one or more assets were to move adversely to the investment strategy (stress testing). Indeed, obtaining an accurate and reliable VaR estimate has little value without a proper back testing and stress testing program. VaR is simply a number whose value relies on a sound methodology, a set of realistic assumptions and a rigorous discipline when conducting the exercise. The real benefit of VaR lies in its essential property of capturing with one single number the risk profile of a complex or diversified portfolio.

(Source: http://www.jpmorgan.com/tss/General/Risk_Management/1159369485859)
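The weighting idea mentioned in the conclusion (giving recent observations more weight than distant ones) is often implemented with exponentially decaying weights, in the spirit of Boudoukh, Richardson and Whitelaw. A simplified sketch of ours, which ignores the mean adjustment of equation (1); the decay factor `lam` is an assumed parameter:

```python
def age_weighted_var(pnls, lam=0.98, confidence=0.99):
    """Age-weighted Historical Simulations VaR.
    pnls: simulated P&Ls, most recent first; observation i gets weight ~ lam**i."""
    weights = [lam ** i for i in range(len(pnls))]
    total = sum(weights)
    alpha, cum = 1 - confidence, 0.0
    for pnl, w in sorted(zip(pnls, weights)):  # walk the losses, worst first
        cum += w / total
        if cum >= alpha:
            return -pnl                        # VaR reported as a positive loss
    return -max(pnls)                          # unreachable guard
```

With lam = 1 every observation weighs the same and the plain historical quantile is recovered.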
IV. Monte Carlo

Monte Carlo means using random numbers in scientific computing. More precisely, it means using random numbers as a tool to compute something that is not random. For example[1], let X be a random variable and write its expected value as A = E[X]. If we can generate X1, ..., Xn, n independent random variables with the same distribution, then we can make the approximation

Ân = (X1 + ... + Xn) / n ≈ A.

The strong law of large numbers states that Ân → A as n → ∞. The Xk and Ân are random and could be different each time we run the program. Still, the target number, A, is not random.

We emphasize this point by distinguishing between Monte Carlo and simulation. Simulation means producing random variables with a certain distribution just to look at them. For example, we might have a model of a random process that produces clouds. We could simulate the model to generate cloud pictures, either out of scientific interest or for computer graphics. As soon as we start asking quantitative questions about, say, the average size of a cloud or the probability that it will rain, we move from pure simulation to Monte Carlo. The reason for this distinction is that there may be other ways to define A that make it easier to estimate. Since most of the error in Ân is statistical, reducing the variance of Ân reduces the statistical error. This process is called variance reduction.

We often have a choice between Monte Carlo and deterministic methods. For example, if X is a one-dimensional random variable with probability density f(x), we can estimate E[X] using a panel integration method. This probably would be more accurate than Monte Carlo, because the Monte Carlo error is roughly proportional to 1/√n for large n, which gives it order of accuracy roughly 1/2, while even the worst panel method given in Section 3.4 is first order accurate.
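The sample-mean estimator Ân is one line of code. A toy illustration of ours with X uniform on [0, 1], so that A = E[X] = 0.5:

```python
import random

def mc_estimate(n, seed=0):
    """Monte Carlo estimate A_hat_n = (X1 + ... + Xn) / n of A = E[X]."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n)) / n

# The statistical error shrinks like 1/sqrt(n)
a_hat = mc_estimate(100_000)
```

Re-running with a different seed gives a different Ân, but the target A stays fixed, which is exactly the distinction made above.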
This is the main reason I emphasize so strongly that, while A is given, the algorithm for estimating it is not. The search for more accurate alternative algorithms is often called "variance reduction". Common variance reduction techniques are importance sampling, antithetic variates, and control variates. The general rule is that deterministic methods are better than Monte Carlo in any situation where the deterministic method is practical.

We are driven to resort to Monte Carlo by the "curse of dimensionality". The curse is that the work needed to solve a problem in many dimensions may grow exponentially with the dimension. Suppose, for example, that we want to compute an integral over ten variables, an integration in ten-dimensional space. If we approximate the integral using twenty points in each coordinate direction, the total number of integration points is 20^10 ≈ 10^13, which is on the edge of what a computer can do in a day. A Monte Carlo computation might reach the same accuracy with only, say, 10^6 points. People often say that the number of points needed for a given accuracy in Monte Carlo does not depend on the dimension, and there is some truth to this.

One favorable feature of Monte Carlo is that it is possible to estimate the order of magnitude of the statistical error, which is the dominant error in most Monte Carlo computations. These estimates are often called "error bars" because of the way they are indicated on plots of Monte Carlo results. Monte Carlo error bars are essentially statistical confidence intervals. Monte Carlo practitioners are among the avid consumers of statistical analysis techniques. Another feature of Monte Carlo that makes academics happy is that simple clever ideas can lead to enormous practical improvements in efficiency and accuracy (which are basically the same thing).

See more: E:\Risk\Monte Carlo Methods.pdf

Methodology

Computing VaR with Monte Carlo Simulations follows a similar algorithm to Historical Simulations. The main difference lies in the first step of the algorithm – instead of picking up a return (or a price) in the historical series of the asset and assuming that this return (or price) can re-occur in the next time interval, we generate a random number that will be used to estimate the return (or price) of the asset at the end of the analysis horizon.

Step 1 – Determine the length T of the analysis horizon and divide it equally into a large number N of small time increments Δt (i.e. Δt = T/N).
The main guideline here is to ensure that Δt is small enough to approximate the continuous pricing we find in the financial markets. This process, whereby we approximate a continuous phenomenon by a large number of discrete intervals, is called discretization. In order to calculate daily VaR, one may divide each day by the number of minutes or seconds comprised in one day – the more, the merrier. For illustration, we will compute a monthly VaR consisting of twenty-two trading days; therefore N = 22 days and Δt = 1 day.

Step 2 – Draw a random number from a random number generator and update the price of the asset at the end of the first time increment.

It is possible to generate random returns or prices. When simulating random numbers, the generator will follow a specific theoretical distribution; in most cases, we use the normal distribution. This may be a weakness of Monte Carlo Simulations compared to Historical Simulations, which uses the empirical distribution. In this paper, we use the standard stock price model to simulate the path of a stock price from the ith day, as defined by:

(1) Ri = (Si+1 − Si) / Si = μ δt + σ φ δt^(1/2)

where:
• Ri is the return of the stock on the ith day
• Si is the stock price on the ith day
• Si+1 is the stock price on the (i+1)th day
• μ is the sample mean of the stock returns
• δt is the timestep
• σ is the sample volatility (standard deviation) of the stock returns
• φ is a random number generated from a standard normal distribution

At the end of this step/day (δt = 1 day), we have drawn a random number and determined Si+1 by applying (1), since all other parameters can be determined or estimated.

Step 3 – Repeat Step 2 until reaching the end of the analysis horizon T by walking along the N time intervals.
At the next step/day (δt = 2), we draw another random number and apply (1) to determine Si+2 from Si+1. We repeat this procedure until we reach T and can determine Si+T. In our example, we have generated one path for this stock (from i to i+22); Si+22 represents the estimated (terminal) stock price of the sample share in one month's time. Indeed, Si+T is only one possible terminal price for the stock amongst an infinity: for a stock price defined on the set of positive numbers, there is an infinity of possible paths from Si to Si+T (see footnote 1). In other words, there is no unique way for the stock to go from Si to Si+T.

Step 4 – Repeat Steps 2 and 3 a large number M of times to generate M different paths for the stock over T.

Running Monte Carlo Simulations means that we build a large number M of paths to take account of a broader universe of possible ways the stock price can travel over a period of one month from its current value (Si) to an estimated terminal price Si+T. It is an industry standard to run at least 10,000 simulations, even if 1,000 simulations provide an efficient estimator of the terminal price of most assets. In this paper, we ran 1,000 simulations for illustration purposes.

Step 5 – Rank the M terminal stock prices from the smallest to the largest, read the simulated value in this series that corresponds to the desired (1−α)% confidence level (95% or 99% generally) and deduce the relevant VaR.

Let us assume that we want the VaR with a 99% confidence interval. In order to obtain it, we first need to rank the M terminal stock prices from the lowest to the highest. Then we read the 1% lowest percentile in this series. This estimated terminal price, Si+T(1%), means that there is a 1% chance that the current stock price Si could fall to Si+T(1%) or less over the period in consideration and under normal market conditions. If Si+T(1%) is smaller than Si (which is the case most of the time), then Si − Si+T(1%) will correspond to a loss. This loss, which is the difference between Si and the αth lowest terminal stock price, represents the VaR with a 99% confidence interval.

Applications

Let us compute VaR using Monte Carlo Simulations for one share to illustrate the algorithm.
We apply the algorithm to compute the monthly VaR for one stock. We will only consider the share price and thus work with the assumption that we have only one share in our portfolio; therefore the value of the portfolio corresponds to the value of one share. We want to compute the monthly VaR on the 20th of January 2009. This means we will jump into the future by 22 trading days and look at the estimated prices for the stock on the 19th of February 2009. The current price (Si) at the end of the 20th of January 2009 was $18.09. Historical prices are charted in Exhibit 1.

Exhibit 1: Historical prices for one stock from 01/22/08 to 01/20/09

From the series of historical prices, we calculated the sample return mean (0.17%) and the sample return standard deviation (5.51%). Since we decided to use the standard stock price model to draw 1,000 paths until T (the 19th of February 2009), we need to estimate the expected return (also called the drift rate) and the volatility of the share. We can estimate the drift by:

(2) μ̂ = m / δt, where m is the sample mean of the daily returns

The volatility of the share can be estimated by:
(3) σ̂ = s / √δt, where s is the sample standard deviation of the daily returns

Note that since we chose δt = 1 day, these two estimators simply equal the sample mean and the sample standard deviation. Based on these two estimators, we generate Si+1 from Si by re-arranging (1) as Si+1 = Si (1 + μ δt + σ φ δt^(1/2)) and simulate 1,000 paths for the share. We then sort the 1,000 terminal stock prices from the lowest to the highest and read the price which corresponds to the desired confidence level. For instance, if we want to get the VaR at a 99% confidence level, we read the 1% lowest stock price, which is $15.7530. The last step can be summarized in Exhibit 2:

Exhibit 2: Reading VaR for one share

(4) VaR99% = $18.09 − $15.7530 = $2.3370

On January 20th, the stock price was $18.09. There is a 1% likelihood that the JPMorgan Chase & Co. share will fall to $15.7530 or below over the next month. If that happens, we will experience a loss of at least $18.09 − $15.7530 = $2.3370. This loss is our monthly VaR estimate at a 99% confidence level for one share, calculated on the 20th of January 2009.

Advantages

Monte Carlo Simulations present some advantages over the Analytical and Historical Simulations methodologies to compute VaR. The main benefit of running time-consuming Monte Carlo Simulations is that they can model instruments with non-linear and path-dependent payoff functions, especially complex derivatives. Moreover, we may use any statistical distribution to simulate the returns, as far as we feel comfortable with the underlying assumptions that justify the use of a particular distribution. Finally, Monte Carlo Simulations VaR is not affected as much as Historical Simulations VaR by extreme events, and in reality provides in-depth details of these rare events that may occur beyond VaR.
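The five steps of the methodology, specialized to this one-share example, can be sketched as follows. This is our own illustration; the sample estimates 0.17% and 5.51% come from the text, while the seed is an arbitrary assumption:

```python
import random

def monte_carlo_var(s0, mu, sigma, n_steps=22, n_paths=1000, confidence=0.99, seed=7):
    """Monte Carlo VaR with the discretized model (1):
    S_{i+1} = S_i * (1 + mu*dt + sigma*phi*sqrt(dt)), with dt = 1 day."""
    rng = random.Random(seed)
    terminal = []
    for _ in range(n_paths):              # Step 4: generate M paths
        s = s0
        for _ in range(n_steps):          # Steps 2-3: walk the N increments
            phi = rng.gauss(0.0, 1.0)
            s *= 1 + mu + sigma * phi     # dt = 1, so sqrt(dt) = 1
        terminal.append(s)
    terminal.sort()                       # Step 5: rank terminal prices
    worst = terminal[int((1 - confidence) * n_paths)]
    return s0 - worst                     # loss relative to S_i

var_99 = monte_carlo_var(18.09, 0.0017, 0.0551)
```

Because the paths are random, the estimate will differ from the article's $2.3370 from seed to seed; increasing `n_paths` narrows that spread.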
Disadvantages

The main disadvantage of Monte Carlo Simulations VaR is the computer power that is required to perform all the simulations. Indeed, if we have a portfolio of 1,000 assets and want to run 1,000 simulations on each asset, we will need to run 1 million simulations (without accounting for any eventual simulations that may be required to price some of these assets, like options and mortgages, for instance). Moreover, the higher N and M are, the more accurate the estimates of the terminal stock prices will be, but the longer the simulations will take to run. This is the reason why we used the discretized form (1) of the standard stock price model, so that Monte Carlo Simulations can be handled more easily without losing too much information. All these simulations also increase the likelihood of model risk. Consequently, another drawback is the cost associated with developing a VaR engine that can perform Monte Carlo Simulations. Buying a commercial solution off-the-shelf or outsourcing to an experienced third party are two options worth considering; the latter approach will also reinforce the independence of the computations and therefore the reliability and non-manipulation of their results.

Conclusion

Estimating the VaR for a portfolio of assets using Monte Carlo Simulations has become the standard in the industry; its strengths overcome its weaknesses by far. Despite the time and effort required to estimate the VaR for a portfolio, this task only represents half of the time a risk manager should spend on VaR. The other half should be spent on checking that the model(s) used to calculate VaR is (are) still appropriate for the assets that compose the portfolio and still provide credible estimates of VaR (back testing), and on analyzing how the portfolio reacts to extreme events which occur every now and then in the financial markets (stress testing).

1 (Source: http://www.jpmorgan.com/tss/General/Risk_Management/1159380637650)