
Peter Luk – April 2008

Fischer Black and Myron Scholes' paper on option pricing in 1973 ushered in a new age in financial mathematics. Since then, thousands of mathematical models have been developed to describe the financial markets and to devise ways to reduce or eliminate risks. Yet since then the financial markets have collapsed roughly once every ten years, in no small part because of these models. At least one well-known professor (whose book outsold Greenspan's) called this "phony bell curve-style mathematics". Portfolio insurance was designed in the 1980s, using mathematical models to 'ensure' that stock portfolios stayed immune from market ups and downs; then in October 1987 the market collapsed and the Dow Jones Industrial Average, the bellwether of the US stock market, dropped more than 22% in a single day, thanks to the models. About a decade later, in 1998, the famous LTCM (Long-Term Capital Management, which prided itself on having two Nobel laureates as its partners, one of whom was the very same Myron Scholes) collapsed, having lost US$4.5 billion, including US$1.6 billion on swaps and US$1.3 billion on equity volatility. The DJIA lost 18% in about three months' time. One would have thought people had learned about risk management. But another ten years later, we now have the sub-prime crisis. Between October 2007 and March 2008, the DJIA lost 17%; one estimate put the total losses at US$400 billion. Mathematical modeling played an important part in all of the above, the most popular ingredient being the normality assumption about stock price changes.

As Nassim Taleb, professor and trader, said in Fortune (April 14, 2008): "We replaced so much experience and common sense with 'models' that work worse than astrology... I noticed that while portfolio models got worse and worse in tracking reality, their use kept increasing as if nothing was happening. Why? Because in the past 15 years business schools accelerated their teaching of portfolio theory as a replacement for our experiences." Professor Eugene Fama (the thesis advisor to LTCM's Myron Scholes) had this to say: "If the population of price changes is strictly normal... an observation that is more than five standard deviations from the mean should be observed once every 7,000 years. In fact, such observations seem to occur about once every three or four years." In August 2007, the Wall Street Journal reported that events that (Lehman Brothers) models predicted would only happen once in 10,000 years happened every day for three days, and the Financial Times reported that Goldman Sachs witnessed something that only happens once every 100,000 years according to their model. It is time to have a look at what models can safely be used and what should only be used with caution.

i. Derivative Model

A typical derivative model involves the following common assumptions:

- Risk-free interest rate: constant, or normally distributed with constant volatility
- Equity price change (or its log): normally distributed with constant volatility

In reality, these things are never distributed normally. For a standardized normal distribution (i.e. mean = 0, standard deviation = 1),

    α = Q(x) = ∫ from x to ∞ of Z(t) dt = ∫ from x to ∞ of (1/√(2π)) e^(−t²/2) dt,   or   x = Q^-1(α)

When the distribution is not normal, i.e. when skewness, kurtosis and higher moments are not zero, the Cornish-Fisher expansion says:

    x = mean + σ × { Q^-1(α) + γ1 ([Q^-1(α)]² − 1)/6 + γ2 ([Q^-1(α)]³ − 3 Q^-1(α))/24 − γ1² (2[Q^-1(α)]³ − 5 Q^-1(α))/36 }

where γ1 = (3rd central moment)/σ³ is the coefficient of skewness and γ2 = (4th central moment)/σ⁴ − 3 is the coefficient of kurtosis. If skewness and the higher moments are zero, this expansion reduces to the normal case. In reality, these numbers are far from zero, but people tend to ignore this fact.

Some empirical tests of normality were performed on five indices: the Hang Seng Index, Nikkei 225, Dow Jones Industrial Average, Standard & Poor's 500 and FTSE 100. For each of them, three 6-month periods were selected: Jan 2007 – Jun 2007, Jul 2007 – Dec 2007 and Oct 2007 – Mar 2008. For each of the 15 combinations, a set of three tests (i.e. the chi-square test, Kolmogorov-Smirnov test and Anderson-Darling test) was performed at the 5% level. About half of them fail the tests. It is not unreasonable to assume that many individual stocks will also fail the tests.

Therefore, the conventional derivative formulas, particularly the famous Black-Scholes formula, are wrong. If you use the observed risk-free interest rate and market volatility, you will not get the correct derivative prices, because the normality assumption is wrong. So people do it the reverse way, calculating a volatility number from the observed risk-free interest rate and market prices. This number is often called implied volatility. This is, of course, a misnomer. There is no such thing as implied volatility (you have one equation with two unknown variables); it is actually a number representing the combined influence of volatility, skewness, kurtosis and higher moments. The relationship between implied volatility and historical volatility can be found from the following approximate formula, derived from the above Cornish-Fisher expansion:

    Implied volatility = historical volatility × { 1 + γ1 (Q^-1(α) − 1/Q^-1(α))/6 + γ2 ([Q^-1(α)]² − 3)/24 − γ1² (2[Q^-1(α)]² − 5)/36 }

This relationship is only approximate, as we have no idea how important a role the higher moments play under any particular circumstance. When this approximation does not even come close to the implied volatility (which happens not infrequently), one can only surmise that the market factors anticipated future changes of volatility (which is basically a piece of guesswork) into this so-called implied volatility.
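The Cornish-Fisher adjustment and the implied/historical volatility ratio above can be sketched in a few lines. This is a sketch only: the skewness and kurtosis values used here are illustrative, not taken from any particular index.

```python
# Sketch of the Cornish-Fisher expansion above. gamma1 is the skewness
# coefficient and gamma2 the (excess) kurtosis coefficient.
from scipy.stats import norm

def cornish_fisher_quantile(alpha, mean, sigma, gamma1, gamma2):
    """x such that P(X > x) = alpha, per the expansion above (upper-tail Q)."""
    q = norm.isf(alpha)  # Q^-1(alpha) under the upper-tail convention used here
    adj = (q
           + gamma1 * (q**2 - 1.0) / 6.0
           + gamma2 * (q**3 - 3.0 * q) / 24.0
           - gamma1**2 * (2.0 * q**3 - 5.0 * q) / 36.0)
    return mean + sigma * adj

def implied_over_historical(alpha, gamma1, gamma2):
    """Approximate implied/historical volatility ratio (formula above)."""
    q = norm.isf(alpha)
    return (1.0
            + gamma1 * (q - 1.0 / q) / 6.0
            + gamma2 * (q**2 - 3.0) / 24.0
            - gamma1**2 * (2.0 * q**2 - 5.0) / 36.0)

# Zero skewness and excess kurtosis: the expansion collapses to the normal quantile.
print(cornish_fisher_quantile(0.01, 0.0, 1.0, 0.0, 0.0))  # ~2.326
# Fat tails (say gamma2 = 3) push the 1% tail point well beyond the normal value.
print(cornish_fisher_quantile(0.01, 0.0, 1.0, 0.0, 3.0))
```

Note how a modest amount of excess kurtosis, which real return series routinely exhibit, moves the tail quantile far away from the normal one.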

That makes the implied volatility, and hence the Black-Scholes formula, even less reliable than they would otherwise be.

Another important (probably the most important) thing not often mentioned in the textbooks is that all these models carry an implicit assumption about market liquidity: they all assume that for every seller there is a ready buyer. LTCM learned a painful lesson when they found there were no buyers when they were forced to sell. Similarly, many sub-prime failures are victims of illiquidity.

Conclusions: People will continue to use these formulas and models because they are simple to use, and it is reasonable to hope that the so-called implied volatility will not change much over the near future. If it is a short-term model, it is likely to be OK. If it is a long-term model, it is very likely to be wrong. The longer the term, the less reliable the model will become (it is useful to remember that insurance models are always of longer term than trading models). A model is more likely to be OK if it is about the average, but if it is about the tail expectation, it could turn out unpleasant. It is important NOT TO USE any mathematical model to calculate tail expectations, since our knowledge of the tails of distributions is very, very limited. Financial markets are subject to the influence of mass human psychology, and history has proven us wrong every time we thought we got it right. Other things being equal, preference should always be given to simpler models (this is called Occam's razor).

To use:
- short term models
- simple models
- constant risk-free yield curve
- constant volatility

Not to use (or at least be wary):
- complicated models (unless they have passed stress testing)
- long term models
- variable risk-free yield curve
- fluctuating volatility
- any model for the calculation of the tail expectation
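The battery of normality tests described in this section (chi-square, Kolmogorov-Smirnov, Anderson-Darling at the 5% level) can be reproduced on any return series along the following lines. The return series here is synthetic; in the tests reported above it would be six months of daily log-returns of an index such as the Hang Seng or the S&P 500.

```python
# Sketch of the three-test normality battery described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=125) * 0.01  # fat-tailed stand-in for real data

def normality_battery(x, level=0.05):
    """Return {test name: True if normality is rejected at the given level}."""
    mu, sd = x.mean(), x.std(ddof=1)
    # Kolmogorov-Smirnov against the fitted normal
    ks_reject = stats.kstest(x, "norm", args=(mu, sd)).pvalue < level
    # Anderson-Darling: compare the statistic with the 5% critical value
    ad = stats.anderson(x, dist="norm")
    ad_reject = ad.statistic > ad.critical_values[list(ad.significance_level).index(5.0)]
    # Chi-square on 10 equiprobable bins of the fitted normal
    # (ignoring the degrees-of-freedom correction for the fitted parameters)
    u = stats.norm.cdf(x, mu, sd)
    observed, _ = np.histogram(u, bins=np.linspace(0.0, 1.0, 11))
    chi_reject = stats.chisquare(observed).pvalue < level
    return {"chi2": bool(chi_reject), "KS": bool(ks_reject), "AD": bool(ad_reject)}

print(normality_battery(returns))
```

Run on the indices and periods listed above, roughly half of the 15 combinations fail at least one of these tests, which is the point of this section.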

ii. Credit Rating Model

The growth of the modern economy, notably the growth of our banking system, is founded on credit. As such, the credit standing of borrowers is of paramount importance. It is the usual practice to express such credit standing quantitatively as a "probability of default" or, more often, qualitatively using such words as "good", "excellent", etc., or alphabetically as "AAA", "B", etc. These are called credit ratings. Credit rating models are models that try to predict future failures by using past statistics.

Here is an illustration of a simplified model. The model used here is called a logit model. Suppose we have existing data as follows:

    Default   Income (x)   Mortgage (y)   Mortgage/Income   Calculated default probability
    0           50,000        9,000             18%                    0.0%
    0           55,000       11,000             20%                    0.0%
    0           60,000       13,000             22%                    0.0%
    1           20,000        8,000             40%                  100.0%
    1           21,000       10,000             48%                  100.0%
    1           22,000       10,000             45%                  100.0%

There are six mortgages, with the mortgagor's income and the amount of the mortgage given. The first three are fine but the last three end up in default. The maximum likelihood method is used here to establish the probability of default as follows:

    Probability of default (P) = 1 / (1 + e^-(32.9 − 0.0019x + 0.0041y))

The last column of the table is the calculated default probability as per this formula.
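As a check, a short script can apply the fitted formula back to the six records. This is a sketch; the coefficients are the rounded values quoted above, and it reproduces the 0.0% / 100.0% pattern of the last column.

```python
# The fitted logit formula, applied back to the six mortgage records above.
import math

def p_default(income, mortgage):
    z = 32.9 - 0.0019 * income + 0.0041 * mortgage
    return 1.0 / (1.0 + math.exp(-z))

records = [  # (income x, mortgage y, observed default)
    (50_000,  9_000, 0), (55_000, 11_000, 0), (60_000, 13_000, 0),
    (20_000,  8_000, 1), (21_000, 10_000, 1), (22_000, 10_000, 1),
]
for x, y, d in records:
    print(f"income {x:>6}, mortgage {y:>6}: predicted {p_default(x, y):6.1%}, observed {d}")
```

The fit is "perfect" because the six points are perfectly separable, which is precisely why such tiny data sets say very little about future applicants.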

For future mortgage applications, we can estimate the probability of default from this formula, as illustrated in the following table:

    Income (x)   Mortgage (y)   Mortgage/Income   Predicted default probability
      50,000        20,000            40%                  100.0%
      50,000        18,000            36%                  100.0%
      50,000        16,000            32%                   90.0%
      50,000        14,000            28%                    0.7%
      50,000        12,000            24%                    0.1%
      20,000         8,000            40%                  100.0%
      20,000         6,000            30%                  100.0%
      20,000         4,000            20%                  100.0%
      20,000         2,000            10%                   83.0%
      20,000         1,000             5%                   12.2%
      20,000           500             3%                    1.9%
      20,000           200             1%                    0.9%

This is, of course, a simplified model, far from perfect; it is, in fact, far from good at all. The above model has an R² of 1, indicating a very good fit, but the data size is far too small for it to be a good predictor. However, it does illustrate an important characteristic of credit rating models: a credit rating model is a probability model, and it relies heavily on past statistics. Two important factors determine whether such a model has the potential to be a good model. First, the size of the past data: any model that does not have a large data size cannot be a good model. Second, the data must be homogeneous. This condition, however, is not easy to meet. Credit behavior can be quite different under different economic conditions, and data collected during an economic boom can hardly be considered relevant for predicting default probability under a recession. Similarly, the history of credit default swaps is too short for their statistics to be meaningful (i.e. homogeneous).

There are many international rating agencies, such as Standard & Poor's, Moody's, Fitch Ratings, A. M. Best, etc., that provide rating information on companies and government debts. They provide a useful service to investors at large. However, these credit rating agencies have received widespread criticism over the past decade, and more so during the current sub-prime crisis, for their lack of transparency. Indeed, the role played by the rating agencies can be said to be critical, or even instrumental, in the development of the sub-prime crisis. Many critics claim that many CDOs received their triple-A rating too easily. Without the CDO (collateralized debt obligation), the CMO (collateralized mortgage obligation) and the CDS (credit default swap), banks could lend only a limited amount of money to sub-prime customers, subject to the size of their capital base. With the securitization of such mortgages into CDOs and CMOs (helped by good ratings and swaps), banks could lend much more to sub-prime customers, and a potential minor credit crunch was turned into a world-wide crisis. The rating agencies were also criticized during the 1997 Asian financial crisis, when a Thai company received their top rating just a few months before its collapse.

A credit rating model is either a logit model (where the errors are assumed to be logistically distributed) or a probit model (where the errors are assumed to be normally distributed). For investors to continue to have faith in these ratings in the future, it is important that the rating methods be transparent, showing the size of the data on which the rating is based and how far back such data go. While publicly available ratings are very useful in helping one's decision making, we should not develop a blind faith in them. There has been sufficient evidence that caution is warranted when going through uncharted waters.

Conclusion: A simple credit rating model, developed in-house, can often help a company in its decision-making, with the full knowledge that such ratings are not foolproof.

To use:
- in-house developed rating models (logit or probit models); the degree of confidence in such models depends on the size and homogeneity of the data
- publicly available ratings for securities issued by long established industries
- publicly available ratings for familiar financial instruments

Not to use (or at least be wary):
- publicly available ratings for securities issued by new industries
- publicly available ratings for securities issued in newly developed economies
- publicly available ratings for unfamiliar financial instruments
- publicly available ratings from new/unknown rating agencies
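The logit/probit distinction above comes down to the choice of error distribution. A minimal comparison (the index values z are arbitrary, for illustration only):

```python
# Logit vs probit: the same linear index z mapped through two different
# assumed error distributions.
from scipy.stats import logistic, norm

for z in (-2.0, -1.0, 0.0, 1.0, 2.0):
    p_logit = logistic.cdf(z)   # errors logistically distributed
    p_probit = norm.cdf(z)      # errors normally distributed
    print(f"z = {z:+.1f}   logit: {p_logit:.3f}   probit: {p_probit:.3f}")
```

The two agree at z = 0, but the logistic distribution has fatter tails, so a logit model is slower to declare a borrower a near-certain default (or a near-certain survivor) than a probit model.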

iii. Asset and Liability Model

Relative to liability models, asset models are much easier. Generally speaking, an asset model is a valuation model, which is one-dimensional: once you have the necessary market-consistent assumptions, such as the risk-free interest rate, stock return volatility, credit rating, etc., your model will come up with a number, the value of the asset. Most of the discussion in the previous section on derivative models relates to asset models. Liability models, on the other hand, are multi-dimensional: the model needs to tell you several things at one time, as will be demonstrated below.

Whereas you can spread open your Wall Street Journal or turn on your Bloomberg and find all the market data relating to your assets, market-consistent liability data are generally not available. Most of us use comparable asset data in our liability modeling, which is, strictly speaking, incorrect. Technically, the term "market consistent assumptions" is therefore a misnomer. Furthermore, under the principle of market-consistent assumptions there can be only one risk-free yield curve in one market, and everybody in that market must use the same yield curve for their valuation. It therefore follows that some authority (say, a regulatory or accounting authority) can or should declare the yield curve (a set of risk-free rates for various terms) for everybody to follow, thereby reducing the whole thing to a rule-based approach as opposed to the principle-based approach that is being advocated these days. But it is politically incorrect to do so under the current environment, and I really can't imagine it happening. It is my guess that ten or fifteen years from now, people will stop talking about market-consistent assumptions; hence a shelf life of 15 years, in my estimation.

For instance: my company issued a bond some time ago. It is due (a total of $100 million) in exactly one year's time. The risk-free interest rate for one year is 5%. I therefore value my liability at $95.23 million. The only asset my company has is an old building, which the realtor says is worth $80 million. My company is insolvent. Somehow, the market hears the rumor that my company is unable to repay the debt, and the bond is downgraded to junk status with a very high probability of default. The market assumes a 30% default probability, and the bond's market value drops to $70 million. The same bond, which as a liability has a value of $95.23 million, now has a market value of $70 million as an asset. This is what happened to some bonds in the Hong Kong market during the Asian financial crisis in 1997.

The second dimension of liability modeling is that an asset can stand alone, without accompanying liabilities, whereas a liability always has accompanying assets, and modeling liabilities must take into account the nature of those accompanying assets.
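The bond example earlier in this section can be worked through in a few lines. This is a sketch; the zero-recovery assumption in the second step is mine, not the text's.

```python
# The $100m one-year bond: valued as a liability (risk-free discounting)
# and viewed as an asset through its market price under default fears.
face = 100.0  # $m, due in exactly one year
rf = 0.05     # one-year risk-free rate

liability_value = face / (1.0 + rf)  # value of the bond as a liability
print(f"liability value: ${liability_value:.2f}m")  # ~$95.2m

# What a $70m market price implies about the default probability,
# assuming zero recovery on default:
market_price = 70.0
implied_p_default = 1.0 - market_price * (1.0 + rf) / face
print(f"implied default probability: {implied_p_default:.1%}")  # ~26.5%
```

With a modest recovery rate instead of zero, the same $70 million price is consistent with the 30% default probability the market assumed, which is the point: the identical cashflow carries two very different values depending on which side of the balance sheet you stand.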

In a way, a liability is not a liability until it is due. If a company does not have enough cashflow to meet a liability, it is technically insolvent regardless of the value of that liability. This dimension of liability modeling requires one to consider whether there is sufficient cashflow to meet the liabilities that are about to become due. If you borrow money from your friend to open a restaurant, you can run the restaurant as long as you like, so long as you have positive cashflow and your friend does not ask for the return of the money. This is also why many defined-benefit pension schemes in many developed countries do not have their past-service benefits fully funded: those benefits are many years away from being due. In fact, actuaries have known this dimension of liability for a long time.

When the due date changes unexpectedly, the nature of the liabilities changes, and very often the nature of the accompanying assets changes too, which makes the due date an important consideration in liability modeling. If a bank suddenly withdraws its lending facilities to a company, the company's liability becomes immediately due and the nature of its liabilities changes significantly. Similarly, if an otherwise sound and solid bank suddenly faces a bank run, the due dates and the nature of its liabilities change. The consequence of such untoward changes is often a forced sale of the accompanying assets at greatly reduced prices. The cross defaults and chain reactions in the recent events of the sub-prime crisis (particularly those that led to the fire sale of Bear Stearns and the wind-up of Carlyle Capital Corp.) highlight the importance of observing the changing nature of the liabilities in liability modeling.

A third dimension of liability modeling is to be wary of the worst possible scenario. One of the main purposes of liability modeling is to avoid insolvency. Mathematical models may help a lot when dealing with natural events such as earthquakes, hurricanes, etc., but they are of much less use when dealing with financial markets, where the most fundamental underlying variable is human behavior, which 'mutates' against any well-developed model. In such cases, history tells us that the conventional modeling commonly taught in the textbooks is no longer adequate; the Financial Times recently reported that "Turmoil reveals the inadequacy of Basel II". For catastrophe insurance, by accumulating statistics over a long period of time, one can model the average (for the premium calculation) or the conditional tail expectation (for the required capital). But before one takes on a risk, one should always first look at the maximum possible loss regardless of the probability of occurrence; no matter how small that probability is, reinsurance is always sought so that the maximum loss is containable in the event the unthinkable happens. The lesson one learns from this is that while conventional financial reporting requires us to report only the average of the liabilities, and modern capital adequacy tests may require us to be ready for the 95th or 99th percentile, such requirements are not necessarily adequate: we need to look at the absolute maximum without considering the probability.
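The gap between the average, the 99th percentile, the conditional tail expectation and the absolute maximum can be made concrete on simulated losses. The Pareto tail below is an illustrative choice, not a calibrated model of any real book of business.

```python
# Average vs percentile vs conditional tail expectation vs absolute maximum.
import numpy as np

rng = np.random.default_rng(42)
losses = rng.pareto(1.5, size=100_000) * 10.0  # heavy-tailed simulated losses

mean_loss = losses.mean()              # what premium calculations look at
var99 = np.quantile(losses, 0.99)      # the 99th percentile (capital tests)
cte99 = losses[losses > var99].mean()  # conditional tail expectation beyond it
worst = losses.max()                   # the absolute maximum in the sample

print(f"mean {mean_loss:,.1f}   99th pct {var99:,.1f}   "
      f"CTE99 {cte99:,.1f}   max {worst:,.1f}")
```

With a tail this heavy, the maximum is a large multiple of even the CTE, which is exactly why the text insists on looking at the absolute maximum before taking on the risk.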

Of course, this does not mean we must never take a risk where the maximum loss is considered 'too high', but we must be fully aware of the consequences. This is what risk management is all about.

The fourth dimension is regulatory requirements, which may, and indeed often do, stipulate certain rules that can appear quite arbitrary from the point of view of those who practice asset liability management. In the context of liability modeling, required capital (as used in the usual financial reporting) is just another form of liability and should be treated as such; the difference is just in semantics.

The fifth dimension is the confidence level of the public, which we don't even know how to measure. Nobody knows as yet how to model this dimension of liabilities. All that we know is that a confidence crisis does not happen very often, but when it happens its impact can be devastating: when confidence sags, for instance when there is a bank run, the market's behavior can change unpredictably, assets shrink in size and liabilities balloon beyond all expectations.

Another difficulty encountered in liability modeling is that there is no publicly recognized methodology. There must be hundreds and possibly thousands of different models around the world, and there is no standard by which one can judge which is to be trusted. Liability models can also change from time to time; under today's IFRS, only the latest results count, even though the models used for the prior years might have been inappropriate. This is not the case for pricing models: it is worth noting that the impact of a model used in pricing remains for the entire duration of the insurance (or any other financial) contract, while a model used in liability valuation has an impact only at one particular moment in time. One therefore needs to be extremely careful in deciding on the pricing models.

This matters most when dealing with contingent liabilities arising from guarantees. When the guarantee is a short term one, conventional modeling may be sufficient; when the guarantee is a long term one, its impact is lasting and the models are far less reliable. Let us explore two examples here: the equity-linked guarantee and the interest guarantee.

Guaranteed equity-linked life insurance is not a new subject; the first book on it was written by an actuary some thirty years ago, and the product has recently gained favor in Hong Kong. The pricing of this kind of product is fairly straightforward, and most actuaries know how to do it. The investment execution, the delta hedging (the buying or selling of a portion of the equity portfolio to ensure it remains delta-neutral), is however tricky. Let's roll back 20 years to October 1987. People who understood the Black-Scholes formula had programmed this delta hedging buying and selling into their computers.
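The mechanics behind that hedging can be sketched with the Black-Scholes delta of the embedded option, which decides the size of the equity holding. The parameters below are illustrative, not from the text.

```python
# Black-Scholes delta of a European call: the fraction of the equity
# portfolio a delta-neutral hedger holds against the guarantee.
from math import log, sqrt
from scipy.stats import norm

def bs_call_delta(S, K, r, sigma, T):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return norm.cdf(d1)

S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.20, 1.0
print(f"delta today: {bs_call_delta(S, K, r, sigma, T):.3f}")
# When the market falls, delta falls, and the mechanical rebalancing rule
# says sell equities: the one-way selling signal of October 1987.
print(f"delta after a 10% drop: {bs_call_delta(0.9 * S, K, r, sigma, T):.3f}")
```

Because every delta hedger runs essentially this same calculation, a market drop generates the same "sell" instruction on every desk at once, which is what the following paragraph describes.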

One day, everybody was hit by the selling signal at the same time, and the US stock market dropped 22% in one day. Some companies I know of lost up to US$1 billion during that week. And the problem is not confined to hedging execution: if you had issued a guarantee on Japanese stocks at the end of 1989, when the Nikkei was at 39,000, you would still be in deep water today (i.e. after 18 years), with the Nikkei at 13,000.

For a while after that, people focused on the interest rate guarantee as we entered a declining interest environment. By 1997-98, prevailing interest rates had dropped to around 2-3%, and interest rate guarantees provided in the early 1990s at 7% or 8% became a heavy burden for many insurance companies. Many of these companies in Asia (perhaps in the whole world) would have been insolvent, and there would have been a global catastrophe if they had been forced into liquidation. The regulatory requirements were relaxed (the fourth dimension mentioned above) so that such companies could stay in business, averting the necessity of forced liquidation. Then the second dimension mentioned above came into play, as almost all of them had positive cash flow. Today, many of them have a strong and thriving business. As of now, it appears that the only safe long term guarantee is the mortality guarantee under life insurance policies (i.e. with geographical diversification). Nothing else has stood the test of time.

Modeling for long term liabilities is unreliable. Pricing models (actually it is more the product design than the pricing) are far more important than valuation models: unpleasant surprises often come from faulty product design or process management (such as very high leveraging) rather than from liability valuation.

Conclusion: While asset modeling is one-dimensional, there are at least five dimensions associated with liability modeling: market-consistent data; positive cashflow and due dates; the worst possible scenario; regulatory requirements; and confidence crises.

To use:
- Regulatory requirements, whether considered reasonable or not, must be observed
- Always model cash flow under different scenarios
- Always consider the worst case scenario
- Use stress testing or back testing

Not to use (or at least be wary):
- Any pricing model that does not consider the worst case scenario
- Long term modeling for contingent liabilities

iv. Macro and Micro Model

When Frederick Taylor introduced time-motion studies in 1881, unit cost became the centre piece of management science, covering such giant industries as manufacturing, financial services, etc. The twentieth century was the century of micro-modeling. The introduction of the micro-computer about one hundred years later, together with the ubiquitous internet that became widely popular around a further decade later, launched the digital age as we know it today.

Modern IT systems are approaching the stage where micro-modeling our business is no longer appropriate. Take, for instance, the mobile phone in your pocket. The mobile operator did not incur a "unit cost" when it acquired you as its customer: once the necessary infrastructure was set up, acquiring a thousand or even a million customers incurs very little additional cost. There is no matching between the income and the cost in the normal business sense (we are not referring to matching in the accounting sense). Under such circumstances, income and expenses have to be modeled separately, as they are not highly correlated. This will be the beginning of the age of macro-modeling. Macro-modeling is a relatively new concept, and there is very little actuarial literature on the subject except one paper that appeared in the journal of the Australian Institute of Actuaries some years ago. We'll see more papers on this subject in the years to come.

In any such modeling, the critically important actuarial assumption of independence of individual variables could be seriously violated, and modelers must be made aware of this possibility. Each unit in the macro model is no longer an independent unit. The market behavior of individuals, driven by mass psychology and ultrafast information dissemination, can be highly correlated, as witnessed during the recent collapse of the financial markets due to the sub-prime crisis.

Let's look at one familiar example. Every actuary who masters his or her pricing work today knows how to derive the unit cost. In our pricing, we often incorporate an assumption called "cost per policy", whether expressed as per thousand sum assured, per policy or as a percentage of premium, which is accompanied by an inflation factor. We then project our cash flow thirty or fifty years into the future using this assumption. We actuaries like to make long term projections, but very few of us care afterwards whether such projections were right or not. How many of us ever attempt to verify whether such assumptions, made by us or by our predecessors in the company, were right or even close to reality? The answer is probably "none". If we had an audience of 10,000 people today, the answer would probably still be "none".
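The "cost per policy" projection described above is just the compounding of an assumed inflation factor. A minimal sketch (the $60 starting cost and the 3% inflation factor are illustrative assumptions, not from the text):

```python
# Projecting the "cost per policy" assumption 30 years out.
cost_today = 60.0   # assumed per-policy expense today, in dollars
inflation = 0.03    # assumed annual inflation factor
years = 30

projected = cost_today * (1.0 + inflation) ** years
print(f"cost per policy after {years} years: ${projected:.2f}")  # about 2.4x today
```

Small changes in the assumed inflation factor compound into very different expense levels over a thirty- or fifty-year horizon, which is why the lack of back-verification the text describes is so striking.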

In fact, this does not apply to actuaries alone; it is a universal phenomenon, true of all scientists who make long term projections.

I had the opportunity of having a look at the alternative assumption, "cost as a percentage of premiums". The data went back almost 100 years for the US life insurance companies, and I found that the percentage stayed at around 20% for the whole period. If you check the records of your own company (to the extent they are available), you will find that cost as a percentage of premiums is a far more stable assumption than the cost per policy assumption. This can only be explained as a macro phenomenon: if we know a certain cost content is acceptable to consumers, there is no incentive to reduce it, and we must design our products with sufficient cost incentives to keep our salesmen alive. On the other hand, our cost content must not be so high as to drive away our customers. The underlying truth of this example is that the mutation of human behavior can be observed at the macro level, but not at the micro level.

There is something to be learned by the regulatory authorities as well. Maybe our future financial regulations should include something at the macro level: instead of just a trigger for individual companies (such as a capital adequacy trigger), there should be a trigger for the whole industry as well.

Conclusion: As we go deeper and deeper into the digital age, there are bound to be further changes at the macro level that will drastically alter our business models. A good understanding of macro-modeling will definitely improve our competitive edge vis-à-vis other actuaries and other financial professionals.

References:
Random Walk In Stock Market Prices – Eugene Fama, Journal of Business 1965
The Pricing Of Options And Corporate Liabilities – Fischer Black, Myron Scholes, Journal of Political Economy 1973
Pricing And Investment Strategies For Guaranteed Equity-Linked Life Insurance – Michael Brennan, Eduardo Schwartz, University of Pennsylvania 1979
Guaranteed Investment Contracts – Kenneth Walker, Richard D. Irwin 1989
Market Volatility – Robert Shiller, The MIT Press 1989
Continuous Univariate Distributions – Norman Johnson, Samuel Kotz, N. Balakrishnan, Wiley 1994
Option Volatility & Pricing – Sheldon Natenberg, McGraw-Hill 1994
Interest Rate Spreads & Market Analysis – Citicorp 1994
Beyond Value At Risk – Kevin Dowd, Wiley 1998
Interest Rate Modeling – Jessica James, Nick Webber, Wiley 2000
When Genius Failed – Roger Lowenstein, Random House 2000
Inventing Money – Nicholas Dunbar, Wiley 2000
Interest Rate Models: Theory and Practice – Damiano Brigo, Fabio Mercurio, Springer 2001
Building and Using Dynamic Interest Rate Models – Ken Kortanek, Vladimir Medvedev, Wiley 2001
The Fundamentals Of Risk Measurement – Chris Marrison, McGraw-Hill 2002
Fooled by Randomness – Nassim N. Taleb, Random House 2005
Credit Risk Modeling using Excel and VBA – Gunter Loffler, Peter Posch, Wiley 2007
The Black Swan – Nassim N. Taleb, Random House 2007

