
Nassim Nicholas Taleb

Fat Tails and (Anti)fragility

Lectures on Probability, Risk, and Decisions in The Real World

DRAFT VERSION, APRIL 2013


PART I - MODEL ERROR & METAPROBABILITY

This segment, Part I, corresponds to topics covered in The Black Swan, 2nd Ed. Part II will address those of Antifragile. Note that all the topics in these chapters are discussed in those books in a verbal or philosophical form.

I am currently teaching a class with the absurd title "risk management and decision-making in the real world", a title I have selected myself; this is a total absurdity since risk management and decision-making should never have to justify being about the real world, and what's worse, one should never be apologetic about it.

In real disciplines, titles like "Safety in the Real World" or "Biology and Medicine in the Real World" would be lunacies. But in social science all is possible, as there is no exit from the gene pool for blunders, nothing to check the system. You cannot blame the pilot of the plane or the brain surgeon for being "too practical", not philosophical enough; those who have done so have exited the gene pool. The same applies to decision making under uncertainty and incomplete information. The other absurdity is the common separation of risk and decision-making, as the latter cannot be treated in any way except under the constraint: in the real world. And the real world is about incompleteness: incompleteness of understanding, representation, information, etc., what one does when one does not know what's going on, or when there is a non-zero chance of not knowing what's going on. The approach is based on a focus on the unknown, not on the production of mathematical certainties based on weak assumptions; rather, it measures the robustness of the exposure to the unknown, which can be done mathematically through a metamodel (a model that examines the effectiveness and reliability of the model), what I call metaprobability, even if the meta-approach to the model is not strictly probabilistic.

This first section presents a mathematical approach for dealing with errors in conventional risk models, taking the bulls***t out of some, adding robustness, rigor and realism to others. For instance, if a "rigorously" derived model (say Markowitz mean variance) gives a precise risk measure, but ignores the central fact that the parameters of the model don't fall from the sky, but need to be discovered with some error rate, then the model is not rigorous for risk management, decision making in the real world, or, for that matter, for anything. We need to add another layer of uncertainty, which invalidates some models (but not others). The mathematical rigor is shifted from a focus on asymptotic (but rather irrelevant) properties to making do with a certain set of incompleteness. Indeed there is a mathematical way to deal with incompleteness. The focus is squarely on "fat tails", since risks and harm lie principally in the high-impact events, The Black Swan, and some statistical methods fail us there. The section ends with an identification of classes of exposures to these risks, the Fourth Quadrant idea, the class of decisions that do not lend themselves to modelization and need to be avoided; in other words, modify your decisions. The reason decision-making and risk management are inseparable is that there are some exposures people should never take if the risk assessment is not reliable, something people understand in real life but not when modeling.
Just about every rational person facing a plane ride with an unreliable risk model or a high degree of uncertainty about the safety of the aircraft would take a train instead; but the same person, in the absence of skin in the game, when working as a "risk expert", would say: "well, I am using the best model we have" and use something not reliable, rather than be consistent with real-life decisions and subscribe to the straightforward principle: "let's only take those risks for which we have a reliable model". Finally, someone recently asked me to give a talk at an "unorthodox statistics" session of the American Statistical Association. I refused: the approach presented here is about as orthodox as possible; much of the bone of contention of this author comes precisely from enforcing rigorous standards of statistical inference on processes. Risk (and decisions) require more rigor than other applications of statistical inference.


1
Risk is Not in The Past (the Turkey Problem)
This is an introductory chapter outlining the turkey problem, showing its presence in data, and explaining why an assessment of fragility is more potent than data-based methods of risk detection.

1.1 Introduction: Fragility, not Statistics


Fragility (Chapter x) can be defined as an accelerating sensitivity to a harmful stressor: this response plots as a concave curve and mathematically culminates in more harm than benefit from the disorder cluster [(i) uncertainty, (ii) variability, (iii) imperfect, incomplete knowledge, (iv) chance, (v) chaos, (vi) volatility, (vii) disorder, (viii) entropy, (ix) time, (x) the unknown, (xi) randomness, (xii) turmoil, (xiii) stressor, (xiv) error, (xv) dispersion of outcomes, (xvi) unknowledge]. Antifragility is the opposite, producing a convex response that leads to more benefit than harm. We do not need to know the history and statistics of an item to measure its fragility or antifragility, or to be able to predict rare and random ('black swan') events. All we need is to be able to assess whether the item is accelerating towards harm or benefit. The relation of fragility, convexity and sensitivity to disorder is thus mathematical and not derived from empirical data.

Figure 1.1. The risk of breaking of the coffee cup is not necessarily in the past time series of the variable; in fact surviving objects have to have had a rosy past.

The problem with risk management is that past time series can be (and actually are) unreliable. Some finance journalist (Bloomberg) was commenting on my statement in Antifragile about our chronic inability to get the risk of a variable from the past with economic time series. "Where is he going to get the risk from, since we cannot get it from the past? From the future?", he wrote. Not really: think about it, from the present, the present state of the system. This explains in a way why the detection of fragility is vastly more potent than that of risk --and much easier to do.


Asymmetry and Insufficiency of Past Data. Our focus on fragility does not mean you can ignore the past history of an object for risk management; it is just accepting that the past is highly insufficient. The past is also highly asymmetric. There are instances (large deviations) for which the past reveals extremely valuable information about the risk of a process. Something that broke once before is breakable, but we cannot ascertain that what did not break is unbreakable. This asymmetry is extremely valuable with fat tails, as we can reject some theories, and get to the truth by means of via negativa.

This confusion about the nature of empiricism, or the difference between empiricism (rejection) and naive empiricism (anecdotal acceptance) is not just a problem with journalism. Naive inference from time series is incompatible with rigorous statistical inference; yet many workers with time series believe that it is statistical inference. One has to think of history as a sample path, just as one looks at a sample from a large population, and continuously keep in mind how representative the sample is of the large population. While analytically equivalent, it is psychologically hard to take the outside view, given that we are all part of history, part of the sample so to speak.

General Principle To Avoid Imitative, Cosmetic Science:


From Antifragile (2012): There is such a thing as nonnerdy applied mathematics: find a problem first, and figure out the math that works for it (just as one acquires language), rather than study in a vacuum through theorems and artificial examples, then change reality to make it look like these examples.

1.2 Turkey Problems

Turkey and Inverse Turkey (from the Glossary for Antifragile): The turkey is fed by the butcher for a thousand days, and every day the turkey pronounces with increased statistical confidence that the butcher "will never hurt it" --until Thanksgiving, which brings a Black Swan revision of belief for the turkey. Indeed not a good day to be a turkey. The inverse turkey error is the mirror confusion: not seeing opportunities, pronouncing that one has evidence that someone digging for gold or searching for cures will "never find" anything because he didn't find anything in the past. What we have just formulated is the philosophical problem of induction (more precisely, of enumerative induction).

1.3 Simple Risk Estimator


Let us define a risk estimator that we will work with throughout the book.

Definition 1.3.1: Take, as of time T, a standard sequence X = {x_{t_0 + i\Delta t}}, 0 <= i <= n (with x_t in \mathbb{R}), as the discretely monitored history of the process X_t over the interval [t_0, T] (with realizations at fixed interval \Delta t, thus T = t_0 + n\Delta t); the estimator M_T^X(A, f) is defined as

M_T^X(A, f) \equiv \frac{\sum_{i=0}^{n} 1_A\, f\!\left(x_{t_0 + i\Delta t}\right)}{\sum_{i=0}^{n} 1}     (1.1)

where 1_A : X -> {0,1} is an indicator function taking the value 1 if x_t in A and 0 otherwise, and f is a function of x. For instance f(x) = 1, f(x) = x, and f(x) = x^N correspond to the probability, the first moment, and the Nth moment, respectively. A is the subset of the support of the distribution that is of concern for the estimation. Let us stay in dimension 1 for the next few chapters so as not to muddle things. Standard estimators tend to be variations about M_T^X(A, f) where f(x) = x and A is defined as the domain of the process X, standard measures from x, such as moments of order z, etc., as of period T. Such measures might be useful for the knowledge of a process, but remain insufficient for decision making, as the decision-maker may be concerned for risk management purposes with the left tail (for distributions that are not entirely skewed, such as purely loss functions --damage from earthquakes, terrorism, etc.), or any arbitrarily defined part of the distribution.
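A minimal numerical sketch of the estimator in (1.1). The sample series, the subset A and the functions f used below are illustrative assumptions, not the author's data:

```python
import numpy as np

def M(x, A, f):
    """Estimator M_T^X(A, f): sum of 1_A * f(x) over the sample,
    divided by the number of observations (eq. 1.1)."""
    x = np.asarray(x, dtype=float)
    ind = A(x)                                   # indicator 1_A, elementwise boolean
    return np.sum(ind * f(x)) / len(x)

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=10_000)            # hypothetical fat-tailed history

# f(x)=1 over the whole support: empirical probability (trivially 1 here)
print(M(x, lambda v: np.ones_like(v, dtype=bool), lambda v: np.ones_like(v)))
# f(x)=x over the left tail A=(-inf, -2]: contribution of that tail to the mean
print(M(x, lambda v: v <= -2.0, lambda v: v))
# f(x)=x**2 over the whole support: second moment
print(M(x, lambda v: np.ones_like(v, dtype=bool), lambda v: v**2))
```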
Standard Risk Estimators

Definition 1.3.2: The empirical estimator S for the unconditional shortfall S below K is defined as S \equiv M_T^X(A, f), with A = (-\infty, K], f(x) = x:

S \equiv \frac{\sum_{i=0}^{n} 1_A\, x_{t_0 + i\Delta t}}{\sum_{i=0}^{n} 1}     (1.2)

an alternative method is to compute the conditional shortfall:

S' \equiv E[X \mid X < K] = M_T^X(A, f)\; \frac{\sum_{i=0}^{n} 1}{\sum_{i=0}^{n} 1_A}

S' = \frac{\sum_{i=0}^{n} 1_A\, x_{t_0 + i\Delta t}}{\sum_{i=0}^{n} 1_A}     (1.3)

One of the uses of the indicator function 1_A, for observations falling into a subsection A of the distribution, is that we can actually derive the past actuarial value of an option with X as an underlying, struck at K, as M_T^X(A, x), with A = (-\infty, K] for a put and A = [K, \infty) for a call, with f(x) = x.
Criterion:

The measure M is considered to be an estimator over the interval [t - N\Delta t, T] if and only if it holds in expectation over the period X_{T + i\Delta t}, for all i > 0, that is, across counterfactuals of the process, with a threshold \xi (a tolerated divergence that can be a bias), so that |E[M_{T+i\Delta t}^X(A, f)] - M_T^X(A, f)| < \xi. In other words, the estimator should have some stability around the "true" value of the variable and a lower bound on the tolerated bias. We skip the notion of variance for an estimator and rely on absolute mean deviation, so \xi can be the absolute value for the tolerated bias. And note that we use mean deviation as the equivalent of a loss function; except that with matters related to risk, the loss function is embedded in the subset A of the estimator. This criterion is compatible with standard sampling theory. Actually, it is at the core of statistics. Let us rephrase: standard statistical theory doesn't allow claims on estimators made in a given set unless these are made on the basis that they can generalize, that is, reproduce out of sample, into the part of the series that has not taken place (or has not been seen), i.e., for time series, for t' > t. This should also apply in full force to the risk estimator. In fact we need more, much more vigilance with risks.
For convenience, we are taking some liberties with the notation, depending on context: M_T^X(A, f) is held to be the estimator, or a conditional summation on data; but, given that such an estimator is sometimes called "empirical expectation", we will also be using the same symbol for the estimated variable in cases where M is the M-derived expectation operator E or E^P under the real-world probability measure P, that is, the expectation operator completely constructed from the past n available data points.

1.4 Fat Tails, the Finite Moment Case


Fat tails are not about the incidence of low probability events, but the contributions of events away from the center of the distribution to the total properties. As a useful heuristic, take the ratio E[x^2]/E[|x|] (or, more generally, M_T^X(A, x^n)/M_T^X(A, |x|)); the ratio increases with the fat-tailedness of the distribution (when the distribution has finite moments up to n).

Simply, x^n is a weighting operator that assigns a weight, x^{n-1}, large for large values of x, and small for smaller values.

Norms \ell^p: More generally, the \ell^p norm of a vector x = {x_i}_{i=1}^{n} is defined as

\| x \|_p = \left( \frac{1}{n} \sum_{i=1}^{n} |x_i|^p \right)^{1/p}     (1.4)

with p >= 1. For the Euclidean norm, p = 2. The norm rises with higher values of p, as, for a > 0,

\left( \frac{1}{n} \sum_{i=1}^{n} |x_i|^{p+a} \right)^{\frac{1}{p+a}} \ge \left( \frac{1}{n} \sum_{i=1}^{n} |x_i|^{p} \right)^{\frac{1}{p}}     (1.5)

One property quite useful with power laws with infinite moments:

\| x \|_\infty = \max\left( \{ |x_i| \}_{i=1}^{n} \right)     (1.6)


Gaussian Case: For a Gaussian, where x ~ N(0, \sigma), as we assume the mean is 0 without loss of generality,

\frac{\left(M_T^X(A, x^N)\right)^{1/N}}{M_T^X(A, |x|)} = \sqrt{\frac{\pi}{2}} \left( \frac{\left((-1)^N + 1\right)\, 2^{\frac{N}{2}-1}\, \Gamma\!\left(\frac{N+1}{2}\right)}{\sqrt{\pi}} \right)^{1/N}     (1.7)

or, alternatively

\frac{M_T^X(A, x^N)}{M_T^X(A, |x|)} = 2^{\frac{N-3}{2}} \left(1 + (-1)^N\right) \sigma^{N-1}\, \Gamma\!\left(\frac{N+1}{2}\right)     (1.8)

where \Gamma(z) is the Euler gamma function, \Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\, dt. For odd moments, the ratio is 0. For even moments:

\frac{M_T^X(A, x^2)}{M_T^X(A, |x|)} = \sqrt{\frac{\pi}{2}}\; \sigma     (1.9)

hence

\frac{\sqrt{M_T^X(A, x^2)}}{M_T^X(A, |x|)} = \frac{\text{Standard Deviation}}{\text{Mean Absolute Deviation}} = \sqrt{\frac{\pi}{2}}     (1.10)

For a Gaussian the ratio is ~1.25, and it rises from there with fat tails. Example: take an extremely fat tailed sample with n = 10^6 observations, all equal to -1 except for a single one of 10^6, X = {-1, -1, ..., -1, 10^6}. The mean absolute deviation MAD(X) ~ 2. The standard deviation STD(X) ~ 1000. The ratio of standard deviation over mean deviation is 500. As to the fourth moment:

\frac{M_T^X(A, x^4)}{M_T^X(A, |x|)} = 3 \sqrt{\frac{\pi}{2}}\; \sigma^3

For a power law distribution with tail exponent \alpha = 3, say a Student T:

\frac{\sqrt{M_T^X(A, x^2)}}{M_T^X(A, |x|)} = \frac{\text{Standard Deviation}}{\text{Mean Absolute Deviation}} = \frac{\pi}{2}     (1.11)

Figure 1.2. Standard Deviation/Mean Deviation for the daily returns of the SP500 over the past 47 years, with a monthly window.

We will return to other metrics and definitions of fat tails with power law distributions when the moments are said to be infinite, that is, do not exist. Our heuristic of using the ratio of moments to mean deviation only works in sample, not outside.
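A quick sketch of the ratio heuristic on simulated samples (the simulated data and seed are illustrative assumptions, not the author's SP500 dataset):

```python
import numpy as np

def std_over_mad(x):
    """Ratio of sqrt(E[x^2]) to E[|x|], computed about 0 as in the heuristic above."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.mean(x**2)) / np.mean(np.abs(x))

rng = np.random.default_rng(1)
print(std_over_mad(rng.standard_normal(1_000_000)))   # ~ sqrt(pi/2) ~ 1.25
print(std_over_mad(rng.standard_t(3, 1_000_000)))     # ~ pi/2 ~ 1.57, and noisier

x = np.full(10**6, -1.0)                              # the extreme example in the text
x[-1] = 10**6
print(std_over_mad(x))                                # ~ 500
```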


Infinite moments, say infinite variance, always manifest themselves as computable numbers in an observed sample, yielding an estimator M, simply because the sample is finite. A distribution, say, Cauchy, with an infinite mean will always deliver a measurable mean in finite samples; but different samples will deliver completely different means.

The next two figures illustrate the drifting effect of M_T^X(A, x) and M_T^X(A, x^2) with increasing information.
Figure 1.3 The mean (left) and standard deviation (right) of two series with Infinite mean (Cauchy) and infinite variance (St(2)), respectively.
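A sketch of the drift: the running mean of a Cauchy sample and the running standard deviation of a Student T(2) sample keep jumping as new observations arrive (the sample size and seed are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
cauchy = rng.standard_cauchy(n)                      # infinite mean
t2 = rng.standard_t(2, n)                            # finite mean, infinite variance

idx = np.arange(1, n + 1)
running_mean = np.cumsum(cauchy) / idx               # "mean" that never settles
running_std = np.sqrt(np.cumsum(t2**2) / idx)        # "std" that never settles

for k in (100, 1_000, 10_000):
    print(k, running_mean[k - 1], running_std[k - 1])
```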

A Simple Heuristic to Create Mildly Fat Tails


Since higher moments increase under fat tails, as compared to lower ones, it should be possible to simply increase fat tails without increasing lower moments.

Variance-preserving heuristic. Keep E[x^2] constant and increase E[x^4], by "stochasticizing" the variance of the distribution, since E[x^4] is itself analog to the variance of x^2 measured across samples (E[x^4] is the noncentral equivalent of E[(x^2 - E[x^2])^2]). Chapter x will do the "stochasticizing" in a more involved way.

An effective heuristic to watch the effect of the fattening of tails is to simulate a random variable set to be at mean 0, but with the following variance-preserving twist: it follows a distribution N(0, \sigma \sqrt{1-a}) with probability p = 1/2 and N(0, \sigma \sqrt{1+a}) with the remaining probability 1/2, with 0 <= a < 1. The characteristic function is

\varphi(t, a) = \frac{1}{2}\, e^{-\frac{1}{2}(1+a)\, t^2 \sigma^2} \left(1 + e^{a\, t^2 \sigma^2}\right)     (1.12)

Odd moments are nil. The second moment is preserved since

M(2) = (-i)^2 \left.\partial_t^2 \varphi(t)\right|_0 = \sigma^2     (1.13)

and the fourth moment is

M(4) = (-i)^4 \left.\partial_t^4 \varphi(t)\right|_0 = 3\left(a^2 + 1\right) \sigma^4     (1.14)

which puts the traditional kurtosis at 3(a^2 + 1). This means we can get an "implied a" from kurtosis. a is roughly the mean deviation of the stochastic volatility parameter, the "volatility of volatility" or Vvol in a more fully parametrized form. This heuristic is of weak power as it can only raise kurtosis to twice that of a Gaussian, so it should be limited to getting some intuition about its effects. Section 1.10 will present a more involved technique.
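A sketch of the heuristic under the stated half-and-half mixture (sample size, seed and the value a = 0.8 are arbitrary assumptions): the variance stays at sigma^2 while the kurtosis comes out near 3(1 + a^2).

```python
import numpy as np

def mixed_sample(sigma=1.0, a=0.8, size=2_000_000, seed=3):
    rng = np.random.default_rng(seed)
    low = rng.random(size) < 0.5                     # p = 1/2 switch between the two variances
    scale = np.where(low, sigma * np.sqrt(1 - a), sigma * np.sqrt(1 + a))
    return rng.standard_normal(size) * scale

x = mixed_sample()
var = np.mean(x**2)                                  # should stay ~ sigma^2 = 1
kurt = np.mean(x**4) / var**2                        # should be ~ 3*(1 + 0.8**2) = 4.92
print(var, kurt)
```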


1.5 Scalable and Nonscalable, A Deeper View of Fat Tails


So far for the discussion on fat tails we stayed in the finite moments case. For a certain class of distributions, those with finite moments, the ratio P_{>nK}/P_{>K} depends on n and K. For a scale-free distribution, with K in the tails, that is, large enough, P_{>nK}/P_{>K} depends on n, not K. These latter distributions lack a characteristic scale and will end up having a Paretan tail, i.e., for x large enough, P_{>x} = C x^{-\alpha}, where \alpha is the tail exponent and C is a scaling constant.

Table 1.1. The Gaussian case; by comparison, a Student T distribution with 3 degrees of freedom, reaching power laws in the tail; and a "pure" power law, the Pareto distribution, with an exponent \alpha = 2.
K     1/P>K (Gaussian)   P>K/P>2K (Gaussian)   1/P>K (St(3))   P>K/P>2K (St(3))   1/P>K (Pareto, a=2)   P>K/P>2K (Pareto, a=2)
2     44.0               7.2 x 10^2            14.4            4.97443            8.00                  4.
4     3.16 x 10^4        3.21 x 10^4           71.4            6.87058            64.0                  4.
6     1.01 x 10^9        1.59 x 10^6           216.            7.44787            216.                  4.
8     1.61 x 10^15       8.2 x 10^7            491.            7.67819            512.                  4.
10    1.31 x 10^23       4.29 x 10^9           940.            7.79053            1.00 x 10^3           4.
12    5.63 x 10^32       2.28 x 10^11          1.61 x 10^3     7.85318            1.73 x 10^3           4.
14    1.28 x 10^44       1.22 x 10^13          2.53 x 10^3     7.89152            2.74 x 10^3           4.
16    1.57 x 10^57       6.6 x 10^14           3.77 x 10^3     7.91664            4.10 x 10^3           4.
18    1.03 x 10^72       3.54 x 10^16          5.35 x 10^3     7.93397            5.83 x 10^3           4.
20    3.63 x 10^88       1.91 x 10^18          7.32 x 10^3     7.94642            8.00 x 10^3           4.

Note: We can see from the scaling difference between the Student and the Pareto that the conventional definition of a power law tailed distribution is expressed more formally as P_{>x} = L(x)\, x^{-\alpha}, where L(x) is a slowly varying function, which satisfies \lim_{x \to \infty} \frac{L(t x)}{L(x)} = 1 for all constants t > 0. For x large enough, \frac{\log(P_{>x})}{\log(x)} converges to a constant, the tail exponent -\alpha. A scalable should show the slope \alpha in the tails, as x \to \infty.
Figure 1.5. Three Distributions. As we hit the tails, the Student remains scalable while the Lognormal shows an intermediate position.

So far this gives us the intuition of the difference between classes of distributions. Only the scalable have true fat tails, as others turn into a Gaussian under summation. And the tail exponent is asymptotic; we may never get there and what we may see is an intermediate version of it. The figure above drew from Platonic off-the-shelf distributions; in reality processes are vastly more messy, with switches between exponents. Estimation issues: Note that there are many methods to estimate the tail exponent \alpha from data, what is called a calibration. However, as we will see, the tail exponent is rather hard to guess, and its calibration is marred with errors, owing to the insufficiency of data in the tails. In general, the data will show thinner tails than they should. We will return to the issue in Chapter 3.
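A sketch of the scalability check using the theoretical survival functions of the same three distributions as in Table 1.1 (the scipy parametrization, with Pareto minimum 1 and unit scales, is an assumption of this example):

```python
from scipy.stats import norm, t, pareto

for K in (2, 4, 6, 8, 10):
    g = norm.sf(K) / norm.sf(2 * K)                  # Gaussian: ratio explodes with K
    s = t.sf(K, df=3) / t.sf(2 * K, df=3)            # Student T(3): ratio -> 2**3 = 8
    p = pareto.sf(K, b=2) / pareto.sf(2 * K, b=2)    # Pareto alpha=2: ratio = 4 for all K
    print(K, g, s, p)
```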


The Black Swan Problem: It is not merely that events in the tails of the distributions matter, happen, play a large role, etc. The point is that these events play the major role and their probabilities are not computable, not reliable for any effective use. Why do we use the Student T to simulate symmetric power laws? It is not that we believe that the generating process is Student T. Simply, the center of the distribution does not matter much for the properties. The lower the exponent, the less the center plays a role. The higher the exponent, the more the Student T resembles the Gaussian, and the more justified its use will be accordingly. More advanced methods involving the use of Levy laws may help in the event of asymmetry, but the use of two different Pareto distributions with two different exponents, one for the left tail and the other for the right one, would also help. Why power laws? There are a lot of theories on why things should be power laws, as sort of exceptions to the way things work probabilistically. But it seems that the opposite idea is never presented: power laws can be the norm, and the Gaussian a special case, as we will see in Chapter x, of concave-convex responses (a sort of dampening of fragility and antifragility, bringing robustness, hence thinning tails).

1.6 Different Approaches For Statistically Derived Estimators


There are broadly two separate ways to go about estimators: nonparametric and parametric.

The nonparametric approach: it is based on observed raw frequencies derived from sample-size n. Roughly, it sets a subspace of events A and M_T^X(A, 1) (i.e., f(x) = 1), so we are dealing with the frequencies \varphi(A) = \frac{1}{n}\sum_{i=0}^{n} 1_A. Thus these estimates don't allow discussions on frequencies \varphi < \frac{1}{n}, at least not directly. Further, the volatility of the estimator increases with lower frequencies. The error is a function of the frequency itself (or rather, the smaller of the frequency \varphi and 1-\varphi). So if \sum_{i=0}^{n} 1_A = 30 and n = 1000, only 3 in 100 observations are expected to fall into the subspace A, restricting the claims to too narrow a set of observations for us to be able to make a claim, even if the total sample n = 1000 is deemed satisfactory for other purposes. Some people introduce smoothing kernels between the various buckets corresponding to the various frequencies, but in essence the technique remains frequency-based. So if we nest subsets, A_1 \subseteq A_2 \subseteq ... \subseteq A, the expected "volatility" (as we will see later in the chapter, we mean MAD, mean absolute deviation, not STD) of M_T^X(A_z, f), E[|M_T^X(A_z, f) - M_{>T}^X(A_z, f)|] >= E[|M_T^X(A_{<z}, f) - M_{>T}^X(A_{<z}, f)|] for all functions f (proof via tinkering with the law of large numbers).

The parametric approach: it allows extrapolation but imprisons the representation into a specific off-the-shelf probability distribution (which can itself be composed of more probability distributions); so M_T^X is an estimated parameter for use as input into a distribution or model.

Both methods make it difficult to deal with small frequencies. The nonparametric for obvious reasons of sample insufficiency in the tails, the parametric because small probabilities are very sensitive to parameter errors.

The problem of payoff: This is the central problem of model error seen in consequences, not in probability. The literature is used to discussing errors on probability, which should not matter much for small probabilities. But it matters for payoffs, as f can depend on x. Let us see how the problem becomes very bad when we consider f and in the presence of fat tails. Simply, you are multiplying the error in probability by a large number, since fat tails imply that the probabilities p(x) do not decline fast enough for large values of x. Now the literature seems to have examined errors in probability, not errors in payoff.
Let M_T^X(A_z, f) be the estimator of a function of x in the subset A_z = (\delta_1, \delta_2) of the support of the variable. (i) Take \xi the mean absolute error in the estimation of the probability in the small subspace A_z = (\delta_1, \delta_2), i.e., \xi = E[|M_T^X(A_z, 1) - M_{>T}^X(A_z, 1)|]. (ii) Assume f(x) is either linear or convex (but not concave) in the form C + \Lambda x^{\beta}, with both \Lambda > 0 and \beta >= 1. (iii) Assume further that the distribution p(x) is expected to have fat tails (of any of the kinds seen in 1.4 and 1.5).

Then the estimation error of M_T^X(A_z, f) compounds the error in probability, thus giving us the lower bound in relation to \xi:

E\left[\left| M_T^X(A_z, f) - M_{>T}^X(A_z, f) \right|\right] \ge \Lambda\, |\delta_1 - \delta_2|\, \beta\, (\delta_2\, \delta_1)^{\beta-1}\; E\left[\left| M_T^X(A_z, 1) - M_{>T}^X(A_z, 1) \right|\right]     (1.15)

since

\frac{E\left[M_{>T}^X(A_z, f)\right]}{E\left[M_{>T}^X(A_z, 1)\right]} = \frac{\int_{\delta_1}^{\delta_2} f(x)\, p(x)\, dx}{\int_{\delta_1}^{\delta_2} p(x)\, dx}

The error on p(x) can be in the form of a parameter mistake that inputs into p, say \sigma (Chapter x and the discussion of metaprobability), or in the frequency estimation. Note now that if \delta_1 \to -\infty, we may have an infinite error on M_T^X(A_z, f), the left-tail shortfall, while by definition the error on probability is necessarily bounded.


1.7 The Mother of All Turkey Problems: How Time Series Econometrics and Statistics Don't Replicate
(Debunking a Nasty Type of PseudoScience)
Something Wrong With Econometrics, as Almost All Papers Don't Replicate. The next two reliability tests, one about parametric methods, the other about robust statistics, show that there is something wrong in econometric methods, fundamentally wrong, and that the methods are not dependable enough to be of use in anything remotely related to risky decisions.

Performance of Standard Parametric Risk Estimators, f(x) = x^n (Norm ℓ2)


With economic variables one single observation in 10,000, that is, one single day in 40 years, can explain the bulk of the "kurtosis", a measure of "fat tails", that is, both a measure of how much the distribution under consideration departs from the standard Gaussian, and of the role of remote events in determining the total properties. For the U.S. stock market, a single day, the crash of 1987, determined 80% of the kurtosis. The same problem is found with interest and exchange rates, commodities, and other variables. The problem is not just that the data had "fat tails", something people knew but sort of wanted to forget; it was that we would never be able to determine "how fat" the tails were within standard methods. Never. The implication is that those tools used in economics that are based on squaring variables (more technically, the Euclidian, or ℓ2 norm), such as standard deviation, variance, correlation, regression, the kind of stuff you find in textbooks, are not valid scientifically (except in some rare cases where the variable is bounded). The so-called "p values" you find in studies have no meaning with economic and financial variables. Even the more sophisticated techniques of stochastic calculus used in mathematical finance do not work in economics except in selected pockets. The results of most papers in economics based on these standard statistical methods are thus not expected to replicate, and they effectively don't. Further, these tools invite foolish risk taking. Neither do alternative techniques yield reliable measures of rare events, except that we can tell if a remote event is underpriced, without assigning an exact value. From Taleb (2009), using log returns, X_t \equiv \log\!\left(\frac{P(t)}{P(t - i\Delta t)}\right):

Take the measure M_t^X((-\infty, \infty), X^4) of the fourth noncentral moment,

M_t^X((-\infty, \infty), X^4) \equiv \frac{1}{N} \sum_{i=0}^{N} \left(X_{t - i\Delta t}\right)^4

and the N-sample maximum quartic observation \mathrm{Max}\{(X_{t-i\Delta t})^4\}_{i=0}^{N}. Q(N) is the contribution of the maximum quartic variation over N samples:

Q(N) \equiv \frac{\mathrm{Max}\left\{ (X_{t-i\Delta t})^4 \right\}_{i=0}^{N}}{\sum_{i=0}^{N} (X_{t-i\Delta t})^4}     (1.16)

For a Gaussian (i.e., the distribution of the square of a chi-square distributed variable), the maximum contribution Q(10^4) should be around .008 ± .0028. Recall that, naively, the fourth moment expresses the stability of the second moment. And the second moment expresses the stability of the measure "mean" across samples.
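A sketch of the Q(N) computation (the return series below is simulated as a placeholder; with real data one would pass the log returns instead):

```python
import numpy as np

def max_quartic_contribution(returns):
    """Q(N): share of the sum of fourth powers owed to the single largest observation (eq. 1.16)."""
    r4 = np.asarray(returns, dtype=float) ** 4
    return r4.max() / r4.sum()

rng = np.random.default_rng(4)
gauss = rng.standard_normal(10_000)
fat = rng.standard_t(3, 10_000)

print(max_quartic_contribution(gauss))   # small, around .008 for N = 10^4 per the text
print(max_quartic_contribution(fat))     # typically much larger, and unstable across runs
```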


VARIABLE               Q (Max Quartic Contr.)   N (years)
Silver                 0.94                     46
SP500                  0.79                     56
CrudeOil               0.79                     26
Short Sterling         0.75                     17
Heating Oil            0.74                     31
Nikkei                 0.72                     23
FTSE                   0.54                     25
JGB                    0.48                     24
Eurodollar Depo 1M     0.31                     19
Sugar 11               0.3                      48
Yen                    0.27                     38
Bovespa                0.27                     16
Eurodollar Depo 3M     0.25                     28
CT                     0.25                     48
DAX                    0.2                      18

Description of the dataset: all tradable macro markets data available as of August 2008, with tradable meaning actual closing prices corresponding to transactions (stemming from markets, not bureaucratic evaluations; includes interest rates, currencies, equity indices).

Figure 1.6. The entire dataset. Figure 1.7. Time stability: we have no idea how fat the tails are in a given period. All we know is that kurtosis is 1) high, 2) unstable, hence we don't know how high it is.

Figure 1.8. Monthly delivered volatility in the SP500 (as measured by standard deviations). The only structure it seems to have comes from the fact that it is bounded at 0. This is standard. Figure 1.9. Monthly volatility of volatility from the same dataset, predictably unstable.




The significance of nonexistent (that is, infinite) kurtosis can be seen by generating data with finite variance and a tail exponent < 4. Obviously, the standard deviation is meaningless: such a measure is never expected to predict itself except in subsamples that are devoid of large deviations.
Figure 1.10. Monthly delivered volatility, M_T^X(A, x^2), for fake, randomly generated data with infinite kurtosis. Not too different from Figure x (SP500).

Performance of Standard NonParametric Risk Estimators, f(x) = x or |x| (Norm ℓ1), A = (-∞, K]
Does the past resemble the future in the tails? The following tests are nonparametric, that is entirely based on empirical probability distributions.

Figure 1.11. Comparing M[t-1, t] and M[t, t+1], where t = 1 year, 252 days, for macroeconomic data using extreme deviations, A = (-∞, -2 standard deviations (equivalent)], f(x) = x (replication of data from The Fourth Quadrant, Taleb, 2009).

Figure 1.12. The regular is predictive of the regular, that is, mean deviation. Comparing M[t] and M[t+1 year] for macroeconomic data using regular deviations, A = (-∞, ∞), f(x) = |x|.

Figure 1.13. Things are a lot worse for large deviations, A = (-∞, -4 standard deviations (equivalent)], f(x) = x: concentration of tail events without predecessors, concentration of tail events without successors.

So far we stayed in dimension 1. When we look at higher dimensional properties, such as covariance matrices, things get worse. We will return to the point with the treatment of model error in mean-variance optimization.


When the x_t are now in R^N, the problems of sensitivity to changes in the covariance matrix make the estimator M extremely unstable. Tail events for a vector are vastly more difficult to calibrate, and increase in dimensions.

Figure 1.14 Correlations are also problematic, which flows from the instability of single variances and the effect of multiplication of the values of random variables.

The responses so far by members of the economics/econometrics establishment: "his books are too popular to merit attention", "nothing new" (sic), "egomaniac" (but I was told at the National Science Foundation that "egomaniac" does not appear to have a clear econometric significance).

No answer as to why they still use STD, regressions, GARCH, value-at-risk and similar methods.

1.8 Metrics Outside ℓ2


We can see from the data that the predictability of the Gaussian-style cumulants is low; the mean deviation of mean deviation is ~70% of the mean deviation of the standard deviation (in sample, but the effect is much worse in practice); working with squares is not a good estimator. Many have the illusion that we need variance: we don't, even in finance and economics (especially in finance and economics). We propose different cumulants, that should exist whenever the mean exists. So we are not in the dark when we refuse standard deviation. It is just that these cumulants require more computer involvement and do not lend themselves easily to existing Platonic distributions. And, unlike in the conventional Brownian Motion universe, they don't scale neatly.

C_0 = \frac{1}{T} \sum_{i=1}^{T} x_i

C_1 = \frac{1}{T-1} \sum_{i=1}^{T} \left| x_i - C_0 \right|   produces the Mean Deviation (but centered by the mean, the first moment).

C_2 = \frac{1}{T-2} \sum_{i=1}^{T} \left| \left| x_i - C_0 \right| - C_1 \right|   produces the mean deviation of the mean deviation.

...

C_N = \frac{1}{T-N-1} \sum_{i=1}^{T} \left| \cdots \left| \left| x_i - C_0 \right| - C_1 \right| - C_2 \cdots - C_{N-1} \right|

The next table shows the theoretical first two cumulants for two symmetric distributions: a Gaussian, N(0, \sigma), and a Student T, St(0, s, \alpha), with mean 0, a scale parameter s, and PDF

p(x) = \frac{\left( \frac{\alpha}{\alpha + (x/s)^2} \right)^{\frac{\alpha+1}{2}}}{s\, B\!\left(\frac{\alpha}{2}, \frac{1}{2}\right)}

As to the PDF of the Pareto distribution, p(x) = \alpha s^{\alpha} x^{-\alpha-1} for x >= s (and the mean will be necessarily positive).

Distr             Mean                          C1                                                                      C2
Gaussian          0                             \sqrt{2/\pi}\, \sigma                                                   2 \sqrt{2/\pi} \left( e^{-1/\pi} - \mathrm{erfc}\!\left(1/\sqrt{\pi}\right) \right) \sigma
Pareto \alpha     \frac{\alpha s}{\alpha - 1}   2 (\alpha - 1)^{\alpha-2}\, \alpha^{1-\alpha}\, s
ST \alpha=3/2     0                             \frac{2\sqrt{6}\, \Gamma(5/4)}{\sqrt{\pi}\, \Gamma(3/4)}\, s
ST Square \alpha=2  0                           \sqrt{2}\, s                                                            2\left(2 - \sqrt{2}\right) s
ST Cubic \alpha=3   0                           \frac{2\sqrt{3}}{\pi}\, s                                               \frac{8\sqrt{3}\, \tan^{-1}(2/\pi)}{\pi^2}\, s

where erfc is the complementary error function, \mathrm{erfc}(z) = 1 - \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\, dt.

These cumulants will be useful in areas for which we do not have a good grasp of convergence of the mean of observations.
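A sketch of the empirical C0, C1, C2 (the 1/(T-k) normalizations follow the definitions above; the simulated samples are assumptions used only as a check, e.g. C1 for a standard Gaussian should come out near sqrt(2/pi)):

```python
import numpy as np

def c_cumulants(x):
    x = np.asarray(x, dtype=float)
    T = len(x)
    c0 = x.sum() / T                              # mean
    d1 = np.abs(x - c0)
    c1 = d1.sum() / (T - 1)                       # mean deviation (centered by the mean)
    c2 = np.abs(d1 - c1).sum() / (T - 2)          # mean deviation of the mean deviation
    return c0, c1, c2

rng = np.random.default_rng(5)
print(c_cumulants(rng.standard_normal(1_000_000)))   # c1 ~ sqrt(2/pi) ~ 0.798
print(c_cumulants(rng.standard_t(3, 1_000_000)))     # heavier tails: larger c1, c2 relative to scale
```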

1.9 Typical Manifestations of The Turkey Surprise


Figure 1.15. The Turkey Problem (The Black Swan, 2007/2010), where nothing in the past properties seems to indicate the possibility of the jump.

1.10 Fattening of Tails Through the Approximation of a Skewed Distribution for the Variance
We can replace the heuristic in (1.12)-(1.14), which limited the kurtosis to twice the Gaussian, as follows. Switch between Gaussians with variance:

\sigma^2 (1 + a)   with probability p
\sigma^2 (1 + b)   with probability 1 - p

with both a and b in (-1, 1), and b = -a \frac{p}{1-p}, giving a characteristic function

\varphi(t, a) = p\, e^{-\frac{1}{2}(1+a)\, \sigma^2 t^2} - (p - 1)\, e^{-\frac{\sigma^2 t^2 (a p + p - 1)}{2 (p-1)}}

which shows a fourth moment of 3\left(1 + \frac{a^2 p}{1-p}\right) \sigma^4, hence a kurtosis of 3\left(1 + \frac{a^2 p}{1-p}\right), thus allowing polarized states and high kurtosis, all variance preserving, conditioned on |a| < \frac{1-p}{p}. Thus with p = 1/1000, and the maximum possible a = 999, kurtosis can reach as high a level as 3000.

This heuristic approximates quite effectively the probability effect of a lognormal weighting for the characteristic function

\varphi(t, V) = \int_0^{\infty} \frac{e^{-\frac{t^2 v}{2}}\; e^{-\frac{\left(\log v - v_0 + \frac{Vv^2}{2}\right)^2}{2\, Vv^2}}}{\sqrt{2\pi}\, v\, Vv}\; dv

where v is the variance and Vv is the second order variance, often called volatility of volatility.
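A sketch of the skewed variance switch (the parameters p, a and the sample size are arbitrary assumptions, within the constraint |a| < (1-p)/p): the variance stays at sigma^2 while the kurtosis should approach 3(1 + a^2 p/(1-p)).

```python
import numpy as np

p, a, sigma = 1 / 1000, 30.0, 1.0             # assumed parameters; requires |a| < (1-p)/p
b = -a * p / (1 - p)                          # variance-preserving counterpart of a

rng = np.random.default_rng(6)
size = 5_000_000
high = rng.random(size) < p                   # rare high-variance state
var = np.where(high, sigma**2 * (1 + a), sigma**2 * (1 + b))
x = rng.standard_normal(size) * np.sqrt(var)

print(np.mean(x**2))                          # stays ~ sigma^2
print(np.mean(x**4) / np.mean(x**2)**2)       # empirical kurtosis
print(3 * (1 + a**2 * p / (1 - p)))           # theoretical 3(1 + a^2 p/(1-p)) ~ 5.7
```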


1.11 An Aspect of the Turkey Problem: The Problem of Self-Reference


Counterintuitively, when one raises the kurtosis, the time series looks quieter. Because the storms are rare but deep. This leads to the mistaken illusion of low volatility when in fact it is just high kurtosis, something that fooled people big-time with the story of the "great moderation" as risks were accumulating and nobody was realizing that fragility was increasing, like dynamite accumulating under the structure. Let us use a switching process producing Kurtosis = 10^7 (from p = 1/2000, a slightly below the upper bound a = \frac{1-p}{p} - 1) and compare it to the regular case p = 0, a = 1, with the kurtosis of 3.

Figure 1.17. N=1000. Sample simulation (fat tails vs thin tails). Both series have the exact same means and variances at the level of the generating process.

Figure 1.18. N=1000. Another simulation. There is a 1/2 chance of seeing the real properties.

Kolmogorov-Smirnov, Shkmolgorov-Smirnoff


Remarkably, the fat tailed series passes the Kolmogorov-Smirnov test of normality with better marks than the thin-tailed one, since it displays a lower variance. The problem discussed with Avital Pilpel (Taleb and Pilpel, 2001, 2004, 2007) is that Kolmogorov-Smirnov and similar tests of normality are inherently self-referential. These probability distributions are not directly observable, which makes any risk calculation suspicious since it hinges on knowledge about these distributions. Do we have enough data? If the distribution is, say, the traditional bell-shaped Gaussian, then yes, we may say that we have sufficient data. But if the distribution is not from such a well-bred family, then we do not have enough data. But how do we know which distribution we have on our hands? Well, from the data itself. If one needs a probability distribution to gauge knowledge about the future behavior of the distribution from its past results, and if, at the same time, one needs the past to derive a probability distribution in the first place, then we are facing a severe regress loop --a problem of self-reference akin to that of Epimenides the Cretan saying whether the Cretans are liars or not liars. And this self-reference problem is only the beginning. (Taleb and Pilpel, 2001, 2004) Also, from the Glossary in The Black Swan: Statistical regress argument (or the problem of the circularity of statistics): We need data to discover a probability distribution. How do we know if we have enough? From the probability distribution. If it is a Gaussian, then a few points of data will suffice. How do we know it is a Gaussian? From the data. So we need the data to tell us what probability distribution to assume, and we need a probability distribution to tell us how much data we need. This causes a severe regress argument, which is somewhat shamelessly circumvented by resorting to the Gaussian and its kin.

A comment on the Kolmogorov Statistic


Note that the Kolmogorov-Smirnov test doesn't affect payoffs and higher moments, as it only focuses on probabilities. But that's not the only problem. It is, as we mentioned, conditioned on sample size while claiming to be nonparametric. Let us see how it works. Take the historical series and find the maximum point of divergence with F(.), the cumulative of the proposed distribution to test against:

D = \max \left\{ \left| \frac{1}{j} \sum_{i=1}^{j} X_{t_0 + i\Delta t} - F\!\left(X_{t_0 + j\Delta t}\right) \right| \right\}_{j=1}^{n}     (1.17)

where n = \frac{T - t_0}{\Delta t}


Figure 1.19 Small vs Large World. The problem of formal probability theory covering very, very narrow situations in which formal proofs can be produced, particularly when facing 1) preasymptotics, and 2) inverse problems. Yet we cannot afford to blow up. The Procrustean bed error is to squeeze the small world into the large, rather than the opposite.

We will get more technical in the discussion of convergence; take for now that the Kolmogorov statistic, that is, the distribution of D, is expressive of convergence, and should collapse with n. The idea is that, by a Brownian Bridge argument (that is, a process pinned on both sides, with intermediate steps subjected to double conditioning),

D_j = \left( \frac{1}{j} \sum_{i=1}^{j} X_{i\Delta t + t_0} - F\!\left(X_{j\Delta t + t_0}\right) \right)

follows a Uniform distribution. The probability of exceeding D is P_{>D} = H\!\left(\sqrt{n}\, D\right), where H is the cumulative distribution function of the Kolmogorov-Smirnov distribution,

H(t) = 1 - 2 \sum_{i=1}^{\infty} (-1)^{i-1}\, e^{-2 i^2 t^2}

We can see that the main idea reposes on a decay of \sqrt{n}\, D with large values of n. So we can fool the testing by proposing distributions with a small probability of a very large jump, where the probability of a switch is of order \frac{1}{n}.

Table of the "fake" Gaussian when not busted


Let us run a more involved battery of statistical tests (but consider that it is a single run, one historical simulation). Comparing the Fake and genuine Gaussians (Figure 1.17) and subjecting them to a battery of tests:
Anderson-Darling Cramér-von Mises Jarque-Bera ALM Kuiper Mardia Combined Mardia Kurtosis Mardia Skewness Pearson χ² Shapiro-Wilk Watson U² Statistic 0.406988 P-Value 0.354835 Anderson-Darling Cramér-von Mises Jarque-Bera ALM Kuiper Mardia Combined Mardia Kurtosis Mardia Skewness Pearson χ² Shapiro-Wilk Watson U² Statistic 0.656362 P-Value 0.0854403

0.0624829 0.357839 1.46412 0.472029 0.0424013 0.110324 1.46412 -0.876786 0.7466 43.4276 0.998193 0.472029 0.380603 0.387555 0.041549 0.372054

0.0931212 0.138087 3.90387 0.136656 0.204809 0.136656 0.066344 0.430795 0.250061 0.107481 0.0410144 0.144466 3.90387 -1.83609 0.620678 33.7093 0.997386

Kolmogorov-Smirnov 0.0242912 0.167368 Fake Genuine

Kolmogorov-Smirnov 0.023499

0.0607437 0.326458

0.0914161 0.116241

Table of the "fake" Gaussian when busted


And of course the fake Gaussian when caught (Figure 1.18 ). But recall that we have a small chance of observing the true distribution.
Anderson-Darling Cramér-von Mises Jarque-Bera ALM Kuiper Mardia Combined Mardia Kurtosis Mardia Skewness Pearson χ² Shapiro-Wilk Watson U²

Statistic 376.059

P-Value 0.

80.7342 0. 4.21447 107 0. 0. 0. 1.565108283892229 10-8 979 680 1.074929530052337 10-36 143 3.284407132415535 10-6596 1.91137 10-57 0. 0.967477 6430.62 166 432. 30 585.7 0.0145449 80.5877

Kolmogorov-Smirnov 0.494547

Busted Fake

4.21447 107 0.

We will revisit the problem upon discussing the N=1 fallacy (that is, the fallacy of thinking that N=1 is systematically an insufficient sample).

Lack of Skin in the Game. Indeed one wonders why these methods can be used while being wrong, how university researchers can partake of such a scam. Basically they capture the ordinary and mask higher order effects. Since blowups are not frequent, these events do not show in data and the researcher looks smart most of the time while being fundamentally wrong. At the source, researchers, quant risk managers, and academic economists do not have skin in the game, so they are not hurt by wrong risk measures: other people are hurt by them. And the scam should continue perpetually so long as people are allowed to harm others with impunity. More in Appendix B.


A
Appendix: Special Cases of Fat Tails

Multimodality and Fat Tails, or the War and Peace Model


We noted in 1.x that the distribution gains in fat-tailedness (as expressed by kurtosis) by stochasticizing, ever so mildly, variances. But we maintained the same mean. But should we stochasticize the mean as well, and separate the potential outcomes wide enough, so that we get many modes, the kurtosis would drop. And if we associate different variances with different means, we get a variety of regimes, each with its set of probabilities. Either the very meaning of "fat tails" loses its significance, or it takes on a new one where the "middle", around the expectation, ceases to exist. Now there are plenty of distributions in real life composed of many possible regimes, or states, and, assuming finite moments for all states: s1, a calm regime, with expected mean m1 and standard deviation σ1; s2, a violent regime, with expected mean m2 and standard deviation σ2; and more. Each state has its probability p_i. Assume, to simplify, a one-period model, as if one were standing in front of a discrete slice of history, looking forward at outcomes. (Adding complications (transition matrices between different regimes) doesn't change the main result.) The characteristic function φ(t) for the mixed distribution:
\varphi(t) = \sum_{i=1}^{N} p_i\, e^{-\frac{1}{2} t^2 \sigma_i^2 + i t m_i}     (A.1)

For N = 2, the moments simplify to:

M_1 = p_1 m_1 + (1 - p_1) m_2
M_2 = p_1 \left(m_1^2 + \sigma_1^2\right) + (1 - p_1) \left(m_2^2 + \sigma_2^2\right)
M_3 = p_1 m_1^3 + (1 - p_1)\, m_2 \left(m_2^2 + 3 \sigma_2^2\right) + 3\, m_1 p_1 \sigma_1^2
M_4 = p_1 \left(6 m_1^2 \sigma_1^2 + m_1^4 + 3 \sigma_1^4\right) + (1 - p_1) \left(6 m_2^2 \sigma_2^2 + m_2^4 + 3 \sigma_2^4\right)
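A sketch of the two-regime moments against a simulation (the regime parameters below are illustrative assumptions chosen with σ1 > σ2, m1 < 0 < m2, as in the varieties discussed next):

```python
import numpy as np

def mixture_moments(p1, m1, s1, m2, s2):
    """Raw moments M1..M4 of the two-state Gaussian mixture (N=2 case of (A.1))."""
    p2 = 1 - p1
    M1 = p1 * m1 + p2 * m2
    M2 = p1 * (m1**2 + s1**2) + p2 * (m2**2 + s2**2)
    M3 = p1 * (m1**3 + 3 * m1 * s1**2) + p2 * (m2**3 + 3 * m2 * s2**2)
    M4 = p1 * (m1**4 + 6 * m1**2 * s1**2 + 3 * s1**4) + p2 * (m2**4 + 6 * m2**2 * s2**2 + 3 * s2**4)
    return M1, M2, M3, M4

p1, m1, s1 = 0.3, -0.5, 1.5        # violent regime (assumed values)
m2, s2 = 0.2, 0.3                  # calm regime (assumed values)

rng = np.random.default_rng(8)
n = 2_000_000
regime1 = rng.random(n) < p1
x = np.where(regime1, rng.normal(m1, s1, n), rng.normal(m2, s2, n))

print(mixture_moments(p1, m1, s1, m2, s2))
print(tuple(np.mean(x**k) for k in (1, 2, 3, 4)))    # should match closely
```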

Let us consider the different varieties, all characterized by the condition p_1 < (1 - p_1), m_1 < m_2, preferably m_1 < 0 and m_2 > 0, and, at the core, the central property: σ_1 > σ_2. Variety 1: War and Peace. Calm period with positive mean and very low volatility, turmoil with negative mean and extremely low volatility.


Figure 1.1. The War and Peace model. Kurtosis K = 1.7, much lower than the Gaussian.

Variety 2: Conditional deterministic state. Take a bond B, paying interest r at the end of a single period. At termination, there is a high probability of getting B(1+r), a possibility of default. Getting exactly B is very unlikely. Think that there are no intermediary steps between war and peace: these are separable and discrete states. Bonds don't just default "a little bit". Note the divergence: the probability of the realization being at or close to the mean is about nil. Typically, p(E(x)), the probability density at the expectation, is smaller than at the different means of the regimes, so p(E(x)) < p(m_1) and p(E(x)) < p(m_2), but in the extreme case (bonds), p(E(x)) becomes increasingly small. The tail event is the realization around the mean.
Figure 1.2. The bond payoff model. Absence of volatility, deterministic payoff in regime 2, mayhem in regime 1. Here the kurtosis K = 2.5. Note that the coffee cup is a special case of both regimes 1 and 2 being degenerate.

In option payoffs, this bimodality has the effect of raising the value of at-the-money options and lowering that of the out-of-the-money ones, causing the exact opposite of the so-called volatility smile. Note the coffee cup has no state between broken and healthy.


Figure 1.3 The coffee cup cannot incur small harm; it is everything or nothing.

A brief list of other situations where bimodality is encountered:


1. Mergers
2. Professional choices and outcomes
3. Conflicts: interpersonal, general, martial, any situation in which there is no intermediary between harmonious relations and hostility.
4. Conditional cascades


2
Preasymptotics and Central Limit in the Real World
The Central Limit Theorem, at the foundation of modern statistics: The behavior of the sum of random variables allows us to get to the asymptote and use handy asymptotic properties, that is, Platonic distributions. But the problem is that in the real world we never get to the asymptote, we just get close. Some distributions get close quickly, others very slowly (even if they have finite variance). The same applies to the law of large numbers, with severe problems in convergence.

Figure 2.1 Small vs Large World. The problem of formal probability theory covering very, very narrow situations in which formal proofs can be produced, particularly when facing 1) preasymptotics, and 2) inverse problems. Yet we cannot afford to blow up. The Procrustean bed error is to squeeze the small world into the large, rather than the opposite.

2.1 An Excursion Into The Law of Large Numbers Under Fat Tails
How do you reach the limit? A common flaw in the interpretation of the law of large numbers. More technically, by the weak law of large numbers, for a sum of random variables X_1, X_2, ..., X_N, independent and identically distributed with finite mean m, that is E[|X_i|] < \infty, \frac{1}{N}\sum_{1 \le i \le N} X_i converges to m in probability, as N \to \infty. But the problem of convergence in probability, as we will see later, is that it does not take place in the tails of the distribution (different parts of the distribution have different speeds). This point is quite central and will be examined later with a deeper mathematical discussion on limits in Chapter x. We limit it here to intuitive presentations of turkey surprises. {Hint: we will need to look at the limit without the common route of Chebychev's inequality, which requires E[X_i^2] < \infty.}

Let us examine the speed of convergence of the average \frac{1}{N}\sum_{1 \le i \le N} X_i. For a Gaussian distribution N(m, \sigma), the characteristic function for the convolution is

\varphi(t/N)^N = \left( e^{\frac{i m t}{N} - \frac{\sigma^2 t^2}{2 N^2}} \right)^{N}

which, derived twice at t = 0 (taking (-i)^2 \partial_t^2 \varphi - \left((-i)\,\partial_t \varphi\right)^2 as t \to 0), produces the standard deviation \sigma(N) = \frac{\sigma(1)}{\sqrt{N}}, so one can say that the sum converges at a speed \sqrt{N}. Another way to view it is by expanding \varphi and letting N \to \infty, which gets the characteristic function of the degenerate distribution at m.

function of the degenerate distribution at m. But things are far more complicated with power laws. Let us repeat the exercise for a Pareto distribution with density La x-1-a a , x> L, a >1, jHt N LN = aN Ea+1 Lt N
N

(2.1)

where E is the exponential integral E; En HzL = 1 e-z t tn t. Setting L=1 to scale, the standard deviation sa HN L for the N-average becomes sa H N L = 1 N aN Ea+1 H0LN -2 IEa-1 H0L Ea+1 H0L + Ea H0L2 I-N aN Ea+1 H0LN + N - 1MM for a > 2
sa H1L N

(2.2)

Sucker Trap: After some tinkering, we get sa HN L =

as with the Gaussian, which is a suckers trap. For we should be careful in interpret-

ing sa HN L, which can be very volatile since sa H1L is already very volatile and does not reveal itself easily in realizations of the process. In fact, let p(.) be the PDF of a Pareto distribution with mean m, standard deviation s, minimum value L and exponent a, Da the mean deviation of the variance for a given a will be Da =
1 s2 2 2 L HHx - mL - s L pHxL x

Figure 2.2 The standard deviation of the sum of N=100 a=13/6. The right graph shows the entire span of realizations.

Absence of Clear Theory: As to situations, central situations, where 1 < \alpha < 2, we are left hanging analytically. We will return to the problem in our treatment of the preasymptotics of the central limit theorem. But we saw in 1.8 that the volatility of the mean is \frac{\alpha}{\alpha-1} s and the mean deviation of the mean deviation, that is, the volatility of the volatility of the mean, is 2 (\alpha - 1)^{\alpha-2} \alpha^{1-\alpha} s, where s is the scale of the distribution. As we get close to \alpha = 1 the mean becomes more and more volatile in realizations for a given scale. This is not trivial, since we are not interested in the speed of convergence per se given a variance; rather, in the ability of a sample to deliver a meaningful estimate of some total properties. The law of large numbers, if it ever works, operates at a rate more than 20 times slower for an \alpha of 1.15 than for an exponent of 3. As a rough heuristic, just assume that one needs more than 400 times the observations. Indeed, 400 times! (The point of what we mean by "rate" will be revisited with the discussion of the Large Deviation Principle and the Cramer rate function in X.x; we need a bit more refinement of the idea of tail exposure for the sum of random variables.)


Figure 2.3. How thin tails (Gaussian) and fat tails (1 < α ≤ 2) converge.

2.2 Preasymptotics and Central Limit in the Real World


The common mistake is to think that if we satisfy the criteria of convergence, that is, independence and finite variance, the central limit theorem is a given. Take the conventional formulation of the Central Limit Theorem (Grimmett & Stirzaker, 1982; Feller 1971, Vol. II): Let X_1, X_2, ... be a sequence of independent identically distributed random variables with mean m and variance \sigma^2 satisfying m < \infty and 0 < \sigma^2 < \infty; then

\frac{\sum_{i=1}^{N} X_i - N m}{\sigma \sqrt{N}} \;\xrightarrow{D}\; N(0, 1) \text{ as } N \to \infty     (2.3)

where \xrightarrow{D} denotes convergence "in distribution". Granted, convergence "in distribution" is about the weakest form of convergence. Effectively we are dealing with a double problem. The first, as uncovered by Jaynes, corresponds to the abuses of measure theory: some properties that hold at infinity might not hold in all limiting processes --a manifestation of the classical problem of uniform and pointwise convergence. Jaynes 2003 (p. 44): "The danger is that the present measure theory notation presupposes the infinite limit already accomplished, but contains no symbol indicating which limiting process was used (...) Any attempt to go directly to the limit can result in nonsense". Granted Jaynes is still too Platonic (he falls headlong for the Gaussian by mixing thermodynamics and information). But we accord with him on this point --along with the definition of probability as information incompleteness, about which later. The second problem is that we do not have a "clean" limiting process --the process is itself idealized. I will ignore it in this chapter. Now how should we look at the Central Limit Theorem? Let us see how we arrive at it assuming "independence".

The Problem of Convergence


The CLT does not fill in uniformly, but in a Gaussian way --indeed, disturbingly so. Simply, whatever your distribution (assuming one mode), your sample is going to be skewed to deliver more central observations, and fewer tail events. The consequence is that, under aggregation, the sum of these variables will converge "much" faster in the body of the distribution than in the tails. As N, the number of observations, increases, the Gaussian zone should cover more grounds... but not in the "tails". This quick note shows the intuition of the convergence and presents the difference between distributions. Take the sum of random independent variables X_i with finite variance under distribution φ(X). Assume 0 mean for simplicity (and symmetry, absence of skewness, to simplify). A more useful formulation of the Central Limit Theorem (Kolmogorov et al, x):


P\left( -u \le Z = \frac{\sum_{i=0}^{n} X_i}{\sqrt{n}\, \sigma} \le u \right) = \int_{-u}^{u} \frac{e^{-\frac{Z^2}{2}}}{\sqrt{2\pi}}\, dZ     (2.4)

So the distribution is going to be:

1 - \int_{-u}^{u} \frac{e^{-\frac{Z^2}{2}}}{\sqrt{2\pi}}\, dZ , \text{ for } -u \le z \le u

inside the "tunnel" [-u, u] --the odds of falling inside the tunnel itself-- and

\int_{-\infty}^{-u} Z\, \varphi'(N)\, dz + \int_{u}^{\infty} Z\, \varphi'(N)\, dz

outside the tunnel (-u, u), where \varphi'(N) is the n-summed distribution of \varphi. How \varphi'(N) behaves is a bit interesting here --it is distribution dependent.

Before continuing, let us check the speed of convergence per distribution. It is quite interesting that we get the ratio of observations in a given subsegment of the distribution in proportion to the expected frequency N_u / N, where N_u is the number of observations falling between -u and u. So the speed of convergence to the Gaussian will depend on N_u / N, as can be seen in the next two simulations.

Figure 2.4. Q-Q Plot of N sums of variables distributed according to the Student T with 3 degrees of freedom, N=50, compared to the Gaussian, rescaled into standard deviations. We see on both sides a higher incidence of tail events. 10^6 simulations.

Figure 2.5. The Widening Center. Q-Q Plot of variables distributed according to the Student T with 3 degrees of freedom compared to the Gaussian, rescaled into standard deviation, N=500. We see on both sides a higher incidence of tail events. 10^7 simulations.

To realize the speed of the widening of the tunnel (-u,u) under summation, consider the symmetric (0-centered) Student T with tail exponent a= 3, with density
x2 2aN

2 a3 p Ia2 +x2 M
2

, and variance a2 . For large tail values of x, P(x)

2 a3 p x4

. Under summation of N variables, the tail P(Sx) will be

2 N a3 p x4

Now the center, by the Kolmogorov version of the central limit theorem, will have a variance of N a2 in the center as well, hence P(Sx) =

2p

aN

. Setting the point u where the crossover takes place,

To realize the speed of the widening of the tunnel (-u,u) under summation, consider the symmetric (0-centered) Student T with tail exponent a= 3, with density
x2 2aN

Risk and (Anti)fragility - N N Taleb| 27


2 N a3 p x4

2 a3 p Ia2 +x2 M
2

, and variance a2 . For large tail values of x, P(x)

2 a3 p x4

. Under summation of N variables, the tail P(Sx) will be

Now the center, by the Kolmogorov version of the central limit theorem, will have a variance of N a2 in the center as well, hence P(Sx) =

2p

aN

. Setting the point u where the crossover takes place, 2 a3 p x4


u2 2aN

x2 2aN

> aN

, hence u4

>

2 2 a3 p

aN

produces the solution u = 2

a32 2 Ha N L34 H2 pL14

2p

where W is the Lambert W function or product log, which climbs very slowly.

2000

4000

6000

8000

10 000

Note about the crossover: See Nagaev(1969). For a regularly varying tail, the threshold beyond which the crossover of the two distribution needs to take place should be to the right of n logHnL (normalizing for unit variance) for the right tail.

Generalizing for all exponents > 2

More generally, using the reasoning for a broader set and getting the crossover for powelaws of all exponents:
a-2 4

Ha - 2L a 2p

x2

2aN

>

aa I 2 M
x

1+a 2

aa2
1 2

a , since the standard deviation is a -2 + a

aaN

BetaA , E
2

a a Ha+1L N W HlL Ha-2L a


a 1 - 1 2

B
a 1 2 2

2 a+1

H2 pL a+1

a-2 a

a-2 a 2 4 a N

-a-

Where l= -

a Ha+1L N

28 | Risk and (Anti)fragility - N N Taleb

a =4 3.5 3 2.5

200 000

400 000

600 000

800 000

1 106

Figure 2.6 Scaling to a=1, the speed of the convergence at different fatness of tails.

This part will be expanded See also Petrov (1975, 1995), Mikosch and Nagaev (1998), Bouchaud and Potters (2002), Sornette (2004).

2.3 Using Log Cumulants to Observe Convergence to the Gaussian


The normalized cumulant of order n, C(n) is the derivative of the log of the characteristic function f which we convolute N times divided by the second cumulant (i,e., second moment). CHn, N L = H-Ln !n logHfN L H-!2 logHfLN Ln-1 .z 0 (2.5)

This exercise show us how fast an aggregate of N-summed variables become Gaussian, looking at how quickly the 4th cumulant 1 approaches 0 . For instance the Poisson get there at a speed that depends inversely on l, that is, 3 3 , while by contrast an exponential
N l

distribution reaches it faster at higher values of l since the cumulant is

3! l2 N2

Table of Normalized Cumulants -Speed of Convergence (Dividing by sn where n is the order of the cumulant). Distribution Normal@ m, sD PoissonHlL ExponentialHlL GHa, bL PDF N - convoluted Log Characteristic 2 nd Cum 3 rd

H x-mL2 2 s2

-l lx x! z mz2 s 2 2

2p s

- x l l
z

b- a

x b

xa-1

GHaL l l- z

N log 1 0

N logII-1+ 1
1 Nl

Ml

N logJ 1
2l N

N logHH1 - b zL-a L 1
2 abN

Risk and (Anti)fragility - N N Taleb| 29

4 th 5 th 6 th 7 th 8 th 9 th 10 th

0 0 0 0 0 0 0

1 N 2 l2 1 N 3 l3 1 N 4 l4 1 N 5 l5 1 N 6 l6 1 N 7 l7 1 N 8 l8

3! l2 N2 4! l3 N3 5! l4 N4 6! l5 N5 7! l6 N6 8! l7 N7 9! l8 N8 n

3! a2 b2 N 2 4! a3 b3 N 3 5! a4 b4 N 4 6! a5 b5 N 5 7! a6 b6 N 6 8! a7 b7 N 7 9! a8 b8 N 8

Table of Normalized Cumulants -Speed of Convergence (Dividing by s where n is the order of the cumulant). Distribution Mixed Gaussians HStoch VolL
x2 2 s1 2

StudentTH3L
x2 2 s2 2

StudentTH4L

PDF N - convoluted Log Characteristic 2 nd Cum 3 rd 4 th 5 th 6 th

2 p s1

+ H1 - pL
z2 s1 2 2

6
z2 s 2 2 2

3
2

2 p s2 -

p I x2 +3M

12 I 3 zN

1 x 2 +4

52

N log p

+ H1 - pL

N JlogJ 3 z + 1N -

N logH2 z2 K2 H2 zLL

1 0
2 2 -I3 H-1 + pL p Hs2 1 - s2 L M 2 3 I N 2 H p s2 1 - H-1 + pL s2 L M

1 Ind Ind Ind Ind


4

1 TK Ind Ind Ind

0
2 3 I15 H-1 + pL p H-1 + 2 pL Hs2 1 - s2 L M

IN

H p s2 1

- H-1 +

5 pL s2 2L M

Notes on Levy Stability and the Generalized Cental Limit Theorem


Take for now that the distribution that concerves under summation (that is, stays the same) is said to be stable. You add Gaussians and get Gaussians. But if you add binomials, you end up with a Gaussian, or, more accurately, converge to the Gaussian basin of attraction. These distributions are not called unstable but they are. There is a more general class of convergence. Just consider that the Cauchy variables converges to Cauchy, so the stability has to apply to an entire class of distributions.

30 | Risk and (Anti)fragility - N N Taleb

Although these lectures are not about mathematical techniques, but about the real world, it is worth developing some results converning stable distribution in order to prove some results relative to the effect of skewness and tails on the stability. Let n be a positive integer, n2 and X1, X2 ,....Xn satisfy some measure of independence and are drawn from the same distribution, i) there exist cn #+ and dn R+ such that n i=1 Xi = cn X + dn where = means equality in distribution. ii) or, equivalently, there exist sequence of i.i.d random variables {Yi }, a real positive sequence {di } and a real sequence {ai } such that
1 dn D D

n i=1 Yi + an X
D

where means convergence in distribution. iii) or, equivalently, The distribution of X has for characteristic function fHtL = expHi m t - s t H1 + 2 i b p sgnHtL logHtLLL a = 1 expIi m t - t sa I1 - i b tanI
pa 2

M sgnHtLMM

a$1

a(0,2] s !+ , b [-1,1], m # Then if either of i), ii), iii) holds, then X has the "alpha stable" distribution S(a, b, m, s), with b designating the symmetry, m the centrality, and s the scale. Warning: perturbating the skewness of the Levy stable distribution by changing b without affecting the tail exponent is mean preserving, which we will see is unnatural: the transformation of random variables leads to effects on more than one characteristic of the distribution.
0.14 0.12 0.10 0.08 0.06 0.04 0.02 20 -30 -25 -20 -15 -10 -5 0.05 0.10 0.15

-20

-10

10

Figure 2.7 Disturbing the scale of the alpha stable and that of a more natural distribution, the gamma distribution. The alpha stable does not increase in risks! (risks for us in Chapter x is defined in thickening of the tails of the distribution). We will see later with convexification how it is rare to have an effect without an increase in risks.

2.4. Illustration: Convergence of the Maximum of a Finite Variance Power Law

The behavior of the maximum value as a percentage of a sum is much slower than we think, and doesnt make much difference on whether it is a finite variance, that is a>2 or not. (See comments in Mandelbrot & Taleb, 2011)

Risk and (Anti)fragility - N N Taleb| 31

Max 8 Xt-i Dt < tH N L E iN =0 H Xt-i Dt L

i=0

(2.6)

MaxSum 0.05

0.04 a=1.8 0.03

0.02

0.01

a=2.4

N 2000 4000 6000 8000 10 000 Figure 2.8 Pareto Distributions with different tail exponents a=1.8 and 2.5. Although the difference between the two ( finite and infinite variance) is quantitative, not qualitative. impressive.

32 | Risk and (Anti)fragility - N N Taleb

A
A Tutorial: How We Converge Mostly in the Center of the Distribution

We start with the Uniform Distribution


1

f HxL =

H -L

LxH elsewhere

where L = 0 and H =1

A random run from a Uniform Distribution


300 250 200 150 100 50 0.2 0.4 0.6 0.8 1

The sum of two random draws: N=2

Risk and (Anti)fragility - N N Taleb| 33

500 400 300 200 100

0.5

1.5

Three draws N=3


800

600

400

200

0.5

1.5

2.5

As we can see, we get more observations where the peak is higher. Now some math. By Convoluting 2, 3, 4 times iteratively:

f2 Hz2L =

H f Hz - xLL H f xL x = !

2 - z2 1 < z2 < 2 z2 0 < z2 1


z32 2

0 < z3 1
3 2

f3 z3 = Hf2 Hz3 - x2LL H f x2L x2 =


0

-Hz3 - 3L z3 1 2

1 < z3 < 2

- Hz3 - 3L Hz3 - 1L z3 " 2


1 2 1 4 1

Hz3 - 3L2

2 < z3 < 3 z4 " 3 z4 " 2

2 z4 4 1 4 1 4
2

f4 x = Hf3 Hz4 - xLL H f x3L x3 =


0

0 < z4 1
2

I-z4 + 4 z4 - 2M 1 < z4 < 2 2 < z4 < 3 Hz4 - 4L2 3 < z4 < 4

A simple Uniform Distribution

34 | Risk and (Anti)fragility - N N Taleb

4 3.5 3 2.5 2 1.5 1 0.5 -0.5 0.5 1 1.5

We can see how quickly, after one single addition, the net probabilistic "weight" is going to be skewed to the center of the distribution, and the vector will weight future densities..
2.5 2 1.5 1 0.5

-0.5 0.7 0.6 0.5 0.4 0.3 0.2 0.1

0.5

1.5

2.5

1 0.5 0.4 0.3 0.2 0.1

Risk and (Anti)fragility - N N Taleb| 35

B
Appendix:The Large Deviation Principle and Fat Tails
There are a lot of studies of large deviations in an idealized world; unfortunately the existing methods do not really extend to true fat tails outside special situations like stable distributions. The methods we will resort to is to start with a thin-tailed distribution, then fatten the tails progressively and probe the effect, particularly as the distribution approaches a scalable power law. Let us start with the tail behavior of the sum of Gaussian random variables. Sn = xi n , with xi ~ N H m, sL
n Hs-mL2 2 s2

PH m, s, n, Sn = sL =

(2.7)
Hs-mL2 2 s2

2p s Hence, with n , the variance of Sn =


s2 n

0, the dominant part PHSn = sL -n RHsL where RHsL is the rate function R H m, s, n, sL =

Roughly the rate function is a form of discounter of the tail events, the headwind so to speak.

2.5

2.0

pH0, 1, 10, sL pH0, 1, 50, sL

1.5 Stochastic volatility effect

RH0, 1, sL RI0, , sM
2 1 2 3

1.0

HRH0, 1, sL + RH0, 2, sLL

0.5

-2

-1

Different rate functions. We can see the effect of fatter tails and how it weakens the headwind of large deviations.

This part will be expanded

36 | Risk and (Anti)fragility - N N Taleb

3
On The Misuse of Statistics in Social Science
There have been many papers (Kahneman and Tverskyk 1974, Hoggarth and Soyer, 2012) showing how statistical researchers overinterpret their own findings. (Goldstein and I, Goldstein and Taleb 2007, showed how professional researchers and practitioners substitute x 1 for x 2 ). The common result is underestimating the randomness of the estimator M, in other words read too much into it. There is worse. Mindless application of statistical techniques, without knowledge of the conditional nature of the claims are widespread. But mistakes are often elementary, like lectures by parrots repeating N of 1 or p, or do you have evidence of?, etc. Social scientists need to have a clear idea of the difference between science and journalism, or the one between rigorous empiricism and anecdotal statements. Science is not about making claims about a sample, but using a sample to make general claims and discuss properties that apply outside the sample.
X Take M (short for MT HA, f L) the estimator we saw above from the realizations (a sample path) for some process, and M* the "true" mean that would emanate from knowledge of the generating process for such variable. When someone says: "Crime rate in NYC dropped between 2000 and 2010", the claim is limited M the observed mean, not M* the true mean, hence the claim can be deemed merely journalistic, not scientific, and journalists are there to report "facts" not theories. No scientific and causal statement should be made from M on "why violence has dropped" unless one establishes a link to M* the true mean. M cannot be deemed "evidence" by itself. Working with M alone cannot be called "empiricism". What we just saw is at the foundation of statistics (and, it looks like, science). Bayesians disagree on how M converges to M*, etc., never on this point. From his statements in a dispute with this author concerning his claims about the stability of modern times based on the mean casualy in the past (Pinker, 2011), Pinker seems to be aware that M may have dropped over time (which is a straight equality) and sort of perhaps we might not be able to make claims on M* which might not have really been dropping. In some areas not involving time series, the differnce between M and M* is negligible. So I rapidly jot down a few rules before showing X proofs and derivations (limiting M to the arithmetic mean, that is, M= MT HH-, L, xL). Where E is the expectation operator under "realworld" probability measure P:

3.1 The Tails Sampling Property


E[|M-M*|] increases in with fat-tailedness (the mean deviation of M* seen from the realizations in different samples of the same process). In other words, fat tails tend to mask the distributional properties. This is the immediate result of the problem of large numbers.

On the difference between the initial and the recovered distribution


{Explanation of the method of generating data from a known distribution and comparing realized outcomes to expected ones}

Risk and (Anti)fragility - N N Taleb| 37

Figure 3.1. Q-Q plot. Fitting extreme value theory to data generated by its own process , the rest of course owing to sample insuficiency for extremely large values, a bias that typically causes the underestimation of tails, as the points tend to fall to the right.

Case Study: Pinkers Claims On The Stability of the Future Based on Past Data
When the generating process is power law with low exponent, plenty of confusion can take place. For instance, Pinker(2011) claims that the generating process has a tail exponent ~1.16 but made the mistake of drawing quantitative conclusions from it about the mean from M' and built theories about drop in the risk of violence that is contradicted by the data he was showing, since fat tails (plus skewness)= hidden risks of blowup. His study is also missing the Casanova problem (next point) but let us focus on the error of being fooled by the mean of fat-tailed data. The next two figures show the realizations of two subsamples, one before, and the other after the turkey problem, illustrating the inability of a set to naively deliver true probabilities through calm periods.

Figure 3.2. First 100 years (Sample Path): A Monte Carlo generated realization of a process for casualties from violent conflict of the "80/20 or 80/02 style", that is tail exponent a= 1.15

38 | Risk and (Anti)fragility - N N Taleb

Figure 3.3. The Turkey Surprise: Now 200 years, the second 100 years dwarf the first; these are realizations of the exact same process, seen with a longer window and at a different scale.

The next simulation shows M1, the mean of casualties over the first 100 years across 104 sample paths, and M2 the mean of casualties over the next 100 years.
M2 M2 12 000 800 600 400 200 M1 5000 10 000 15 000 20 000 25 000 30 000 10 000 8000 6000 4000 2000 200 400 600 800 M1

Figure 3.4. Does the past mean predict the future mean? Not so. M1 for 100 years, M2 for the next century. Seen at different scales, the second scale show how extremes can be severe. M2 2.2 2.0 1.8 1.6 1.4 1.2 1.0 0.8 1.0 1.5 2.0 M1

Figure 3.5. The same seen with a thin-tailed distribution.

So clearly it is a lunacy to try to read much into the mean of a power law with 1.15 exponent (and this is the mild case, where we know the exponent is 1.15).

Risk and (Anti)fragility - N N Taleb| 39

NOTE: Where does this 80/20 business come from?


a-1

Assume a the power law tail exponent, and an exceedant probability P>x = xmin x -a , x [xmin , ). Very simply, the top p of the population gets S= p the total pie.

of the share of

a=

logHp L logHp L-logIS M

which means that the exponent needs to be 1.161 for the 80/20 distribution. Note that as a gets close to 1 the contribution explodes as it becomes close to infinite mean. Derivation:
a The density f Hx L = xmin a x -a-1 , x xmin
K x f Hx L x 1 x f Hx L x

1) The Share attributed above K, K > 1, will be

= K 1-a

2) The probability of exceeding K is K f Hx L x = K -a


a-1

3) K -a of the population contributes K 1-a =p

of the result

40 | Risk and (Anti)fragility - N N Taleb

The next graph shows exactly how not to work with power laws.

Figure 3.7. Cederman 2003, used by Pinker. I wonder if I am dreaming or if the exponent a is really = .41. Chapters x and x show why such inference is centrally flawed, since low exponents do not allow claims on mean of the variable except to say that it is very, very high and not observable in finite samples. Also, in addition to wrong conclusions from the data, take for now that the regression fits the small deviations, not the large ones, and that the author overestimates our ability to figure out the asymptotic slope.

Warning. Severe mistake! One should never, never fit a powerlaw using frequencies (such as 80/20), as these are interpolative. Other estimators based on Log-Log plotting or regressions are more informational, though with a few caveats. Why? Because data with infinite mean, a 1, will masquerade as finite variance in sample and show about 80% contribution to the top 20% quantile. In fact you are expected to witness in finite samples a lower contribution of the top 20%/
n Let us see. Generate m samples of a =1 data Xj ={xi , j <n i=1 , ordered xi , j xi -1, j , and examine the distribution of the top n contribution Z j = i n n x j i n x j

, with n (0,1).

Figure 3.11. n=20/100, N= 107 . Even when it should be .0001/100, we tend to watch an average 75/20

Risk and (Anti)fragility - N N Taleb| 41

3.2 Survivorship Bias (Casanova) Property


E[M-M*] increases under the presence of an absorbing barrier for the process. This is the Casanova effect, or fallacy of silent evidence see The Black Swan, Chapter 8. (Fallacy of silent evidence: Looking at history, we do not see the full story, only the rosier parts of the process, in the Glossary) History is a single sample path we can model as a Brownian motion, or something similar with fat tails (say Levy flights). What we observe is one path among many counterfactuals, or alternative histories. Let us call each one a sample path, a succession of discretely observed states of the system between the initial state S0 and ST the present state. 1. Arithmetic process: We can model it as SHtL = SHt - DtL + ZDt where ZDt is noise drawn from any distribution. 2. Geometric process: We can model it as SHtL = SHt - DtL Wt typically SHt - DtL m Dt + s distribution. Typically, logJ
S HtL S Ht-i DtL D t Zt

but Wt can be noise drawn from any

N is treated as Gaussian, but we can use fatter tails. The convenience of the Gaussian is stochastic
D t Wt

calculus and the ability to skip steps in the process, as S(t)=S(t-Dt) m Dt + s single period to summarize the total.

, with Wt ~N(0,1), works for all Dt, even allowing for a

The Black Swan made the statement that history is more rosy than the true history, that is, the mean of the ensemble of all sample path. Take an absorbing barrier H as a level that, when reached, leads to extinction, defined as becoming unobservable or unobserved at period T.
Sample Paths

250

200

150

100 Barrier H 50

200 400 600 800 1000 Figure 3.12. Counterfactual historical paths subjected to an absorbing barrier.

Time

When you observe history of a family of processes subjected to an absorbing barrier, i.e., you see the winners not the losers, there are biases. If the survival of the entity depends upon not hitting the barrier, then one cannot compute the probabilities along a certain sample path, without adjusting. Definition: true distribution is the one for all sample paths, observed distribution is the one of the succession of points {SiDt <T i=1 . 1. Bias in the measurement of the mean. In the presence of an absorbing barrier H below, that is, lower than S0 , the observed mean r true mean 2. Bias in the measurement of the volatility. The observed variance b true variance The first two results have been proved (see Brown, Goetzman and Ross (1995)). What I will set to prove here is that fat-tailedness increases the bias. First, let us pull out the true distribution using the reflection principle.

42 | Risk and (Anti)fragility - N N Taleb

The reflection principle (graph from Taleb, 1997). The number of paths that go from point a to point b without hitting the barrier H is equivalent to the number of path from the point -a (equidistant to the barrier) to b.

Thus if the barrier is H and we start at S0 then we have two distributions, one f(S), the other f(S-2(S0 -H)) By the reflection principle, the observed distribution p(S) becomes: pHSL = : f HSL - f HS - 2 HS0 - H LL if S > H 0 if S < H

(3.1)

Simply, the nonobserved paths (the casualties swallowed into the bowels of history) represent a mass of 1-H f HSL - f HS - 2 HS0 - H LL S and, clearly, it is in this mass that all the hidden effects reside. We can prove that the missing mean is S H f HSL - f HS - 2 HS0 - H LLL S and perturbate f(S) using the previously seen method to fatten the tail.
H

Absorbed Paths Observed Distribution

Figure 3.14. If you dont take into account the sample paths that hit the barrier, the observed distribution seems more positive, and more stable, than the true one.

3.3 Left (Right) Tail Sample Insufficiency Under Negative (Positive) Skewness
E[M-M*] increases (decreases) with negative (positive) skeweness of the true underying variable.

Risk and (Anti)fragility - N N Taleb| 43

A naive measure of a sample mean, even without absorbing barrier, yields a higher oberved mean than true mean when the distribution is skewed to the left.

Figure 3.15. The left tail has fewer samples. The probability of an event falling below K in n samples is F(K), where F is the cumulative distribution.

This can be shown analytically, but a simulation works well. To see how a distribution masks its mean because of sample insufficiency, take a skewed distribution with fat tails, say the Pareto Distribution. The "true" mean is known to be m= Measure m j =
iN =1 Xi, j N a a-1

. Generate { X1, j , X2, j , ..., XN , j } random samples indexed by j as a designator of a certain history j.
T

. We end up with the sequence of various sample means 9 m j = j=1 , which naturally should converge to M with both N and T.
mj M* T

the median value of T Next we calculate m j=1 ing distribution.


m

1 , such that P> m = where, to repeat, M* is the theoretical mean we expect from the generat2

0.95

0.90

0.85

0.80

0.75

a 1.5 2.0
mj MT

2.5

Figure 3.16. Median of T j =1

in simulations (106 Monte Carlo runs). We can observe the underestimation of the mean of a skewed power law distribution as a exponent

gets lower. Note that lower a imply fatter tails.

Entrepreneurship is penalized by right tail insufficiency making performance look worse than it is. Figures 3.15 and 3.16 can be seen in a symmetrical way, producing the exact opposite effect of negative skewness.

44 | Risk and (Anti)fragility - N N Taleb

3.4 Why N=1 Can Be Very, Very Significant Statistically


Power of Extreme Deviations: Under fat tails, large deviations from the mean are vastly more informational than small ones. They are not "anecdotal". (The last two properties corresponds to the black swan problem, inherently asymmetric). The gist of the argument is the notion called masquerade problem in The Black Swan. A thin-tailed distribution is less likely to deliver a single large deviation than a fat tailed distribution a series of long calm periods. Now add negative skewness to the issue, which makes large deviations negative and small deviations positive, and a large negative deviation, under skewness, becomes extremely informational.

3.5 Asymmetry in Inference


Asymmetry in Inference: Under both negative skewness and fat tails, negative deviations from the mean are more informational than positive deviations.

3.6 A General Summary of The Problem of Reliance on Past Time Series


The four aspects of what we will call the nonreplicability issue, particularly for mesures that are in the tails. These are briefly presented here and developed more technically throughout the book: a- Definition of statistical rigor (or Pinker Problem). The idea that an estimator is not about fitness to past data, but related to how it can capture future realizations of a process seems absent from the discourse. Much of econometrics/risk management methods do not meet this simple point and the rigor required by orthodox, basic statistical theory. b- Statistical argument on the limit of knowledge of tail events. Problems of replicability are acute for tail events. Tail events are impossible to price owing to the limitations from the size of the sample. Naively rare events have little data hence what estimator we may have is noisier. c- Mathematical argument about statistical decidability. No probability without metaprobability. Metadistributions matter more with tail events, and with fat-tailed distributions. The soft problem: we accept the probability distribution, but the imprecision in the calibration (or parameter errors) percolates in the tails. The hard problem (Taleb and Pilpel, 2001, Taleb and Douady, 2009): We need to specify an a priori probability distribution from which we depend, or alternatively, propose a metadistribution with compact support. Both problems are bridged in that a nested stochastization of standard deviation (or the scale of the parameters) for a Gaussian turn a thin-tailed distribution into a power law (and stochastization that includes the mean turns it into a jump-diffusion or mixed-Poisson). d- Economic arguments: The Friedman-Phelps and Lucas critiques, Goodharts law. Acting on statistical information (a metric, a response) changes the statistical properties of some processes.

Conclusion
This chapter introduced the problem of surprises from the past of time series, and the invalidity of a certain class of estimators that seem to only work in-sample. Before examining more deeply the mathematical properties of fat-tails, let us look at some practical aspects.

Risk and (Anti)fragility - N N Taleb| 45

46 | Risk and (Anti)fragility - N N Taleb

D
Appendix: Statistics and Lack of Skin-in-the-Game
This continues the discussion in 3.3 on the effect of the statistical properties not showing with the left tail. Let us now add the asymmetry in incentives. If an agent has the upside of the payoff of the random variable, with no downside, then the incentive is to hide risks in the tails. This can be generalized to any payoff for which one does not bear the full risks and consequences of one's actions So if an operator can only stay in the game by being profitable the previous period, and is not penalized by losses, then his objective is to maximize the stream of payoffs by shooting for a high probability of payoff.
+ PHN L = iN =1 H XDt i+t - K L 1 XDt Hi-1L+t >K

The most efficient payoff to maximize P(N)

The fact is that how negative XDt i+t does not matter at all for P, so the more negative skewness the better.

Risk and (Anti)fragility - N N Taleb| 47

4
On the Difficulty of Risk Parametrization With Fat Tails
This chapter presents case studies around the point that, simply, some models depend quite a bit on small variations in parameters. The effect on the Gaussian is easy to gauge, and expected. But many believe in power laws as panacea. Even if I believed the r.v. was power law distributed, I still would not be able to make a statement on tail risks. Sorry, but thats life.
X This chapter is illustrative; it will initially focus on nonmathematical limits to producing estimates of MT HA, f L when A is limited to the tail. We will see how things get worse when one is sampling and forecasting the maximum of a random variable.

4.1 Some Bad News Concerning power laws


We saw the shortcomings of parametric and nonparametric methods so far. What are left are power laws; they are a nice way to look at the world, but we can never really get to know the exponent a, for a spate of reasons we will see later (the concavity of the exponent to parameter uncertainty). Suffice for now to say that the same analysis on exponents yields a huge in-sample variance and that tail events are very sensitive to small changes in the exponent. For instance, for a broad set of stocks over subsamples, using a standard estimation method (the Hill estimator), we get subsamples of securities. Simply, the variations are too large for a reliable computation of probabilities, which can vary by > 2 orders of magnitude. And the effect on the mean of these probabilities is large since they are way out in the tails.

1Pr 350 300 250 200 150 100 50 a 1.5 2.0 2.5 3.0 Figure 4.1. The effect of small changes in tail exponent on a probability of exceeding a certain point. Here Pareto(L,a), probability of exceeding 7 L ranges from 1 in 10 to 1 in 350. For further in the tails the effect is more severe.

The way to see the response to small changes in tail exponent with probability: considering P>K ~ K -a , the sensitivity to the tail exponent
! P >K !a

= -K -a logHK L.

Now the point that probabilities are sensitive to assumptions brings us back to the Black Swan problem. One might wonder, the change in probability might be large in percentage, but who cares, they may remain small. Perhaps, but in fat tailed domains, the event multiplying the probabilities is large. In life, it is not the probability that matters, but what one does with it, such as the expectation or other moments, and the contribution of the small probability to the total moments is large in power law domains.

48 | Risk and (Anti)fragility - N N Taleb


Now the point that probabilities are sensitive to assumptions brings us back to the Black Swan problem. One might wonder, the change in probability might be large in percentage, but who cares, they may remain small. Perhaps, but in fat tailed domains, the event multiplying the probabilities is large. In life, it is not the probability that matters, but what one does with it, such as the expectation or other moments, and the contribution of the small probability to the total moments is large in power law domains. For all powerlaws, when K is large, with a > 1, the unconditional shortfallK x fHxL x and - x fHxL x approximate to a a-1 -K a a-1

K -a+1 and

K -a+1 , which are extremely sensitive to a particularly at higher levels of K,

! = !a

K 1-a HHa-1L a logHK L+1L Ha-1L2

There is a deeper problem related to the effect of model error on the estimation of a, which compounds the problem, as a tends to be underestimated by Hill estimators and other methods, but let us leave it for now.

4.2. Extreme Value Theory: Fuhgetaboudit

We saw earlier how difficult it is to compute risks using power laws, owing to excessive model sensitivity. Let us apply this to the socalled Extreme Value Theory, EVT. Extreme Value Theory has been considered a panacea for dealing with extreme events by some "risk modelers" . On paper it looks great. But only on paper. The problem is the calibration and parameter uncertainty --in the real world we don't know the parameters. The ranges in the probabilities generated we get are monstrous. We start with a short presentation of the idea, followed by an exposition of the difficulty.

What is Extreme Value Theory? A Simplified Exposition


Let us proceed with simple examples.
Case 1, Thin Tailed Distribution

The Extremum of a Gaussian variable: Say we generate N Gaussian variables 8Zi <N i=1 with mean 0 and unitary standard deviation, and take the highest value we find. We take the upper bound Ej for the N-size sample run j E j = Max 9Zi, j =i=1 Assume we do so M times, to get M samples of maxima for the set E E = 9Max 9Zi, j =i=1 = j=1 The next figure will plot a histogram of the result.
N M N

Figure 4.2. Taking M samples of Gaussian maxima; here N= 30,000, M=10,000. We get the Mean of the maxima = 4.11159 Standard Deviation= 0.286938; Median = 4.07344
-x+a

Let us fit to the sample an Extreme Value Distribution (Gumbel) with location and scale parameters a and b, respectively: f(x;a,b) =

- b +

-x+a b

Risk and (Anti)fragility - N N Taleb| 49

Figure 4.3. : Fitting an extreme value distribution (Gumbel) a= 3.97904, b= 0.235239

So far, beautiful. Let us next move to fat(ter) tails.


Case 2, Fat-Tailed Distribution

Now let us generate, exactly as before, but change the distribution, with N random power law distributed variables Zi , with tail exponent m=3, generated from a Student T Distribution with 3 degrees of freedom. Again, we take the upper bound. This time it is not the Gumbel, but the Frchet distribution that would fit the result, Frchet f(x; a,b)=

x -a b

aJ N
b

-1-a

, for x>0

Figure 4.4. Fitting a Frchet distribution to the Student T generated with m=3 degrees of freedom. The Frechet distribution a=3, b=32 fits up to higher values of E. But next two graphs shows the fit more closely.

50 | Risk and (Anti)fragility - N N Taleb

Figure 4.5. Seen more closely

How Extreme Value Has a Severe Inverse Problem In the Real World
In the previous case we start with the distribution, with the assumed parameters, then get the corresponding values, as these risk modelers do. In the real world, we dont quite know the calibration, the a of the distribution, assuming (generously) that we know the distribution. So here we go with the inverse problem. The next table illustrates the different calibrations of PK the probabilities that the maximum exceeds a certain value K (as a multiple of b under different values of K and a.

a 1. 1.25 1.5 1.75 2. 2.25 2.5 2.75 3. 3.25 3.5 3.75 4. 4.25 4.5 4.75 5.

1 P >3 b

1 P>10 b

1 P>20 b

3.52773 4.46931 5.71218 7.3507 9.50926 12.3517 16.0938 21.0196 27.5031 36.0363 47.2672 62.048 81.501 107.103 140.797 185.141 243.5

10.5083 18.2875 32.1254 56.7356 100.501 178.328 316.728 562.841 1000.5 1778.78 3162.78 5623.91 10 000.5 17 783.3 31 623.3 56 234.6 100 001.

20.5042 42.7968 89.9437 189.649 400.5 846.397 1789.35 3783.47 8000.5 16 918.4 35 777.6 75 659.8 160 000. 338 359. 715 542. 1.51319 106 3.2 106

Consider that the error in estimating the a of a distribution is quite large, often > 1/2, and typically overstimated. So we can see that we get the probabilities mixed up > an order of magnitude. In other words the imprecision in the computation of the a compounds in the evaluation of the probabilities of extreme values.

4.3 Using Power Laws Without Being Harmed by Mistakes


We can use power laws in the near tails for information, not risk management. That is, not pushing outside the tails, staying within a part of the distribution for which errors are not compounded. I was privileged to get access to a database with cumulative sales for editions in print that had at least one unit sold that particular week (that is, conditional of the specific edition being still in print). I fit a powerlaw with tail exponent a> 1.3 for the upper 10% of sales (graph), with N=30K. Using the Zipf variation for ranks of powerlaws, with rx and ry the ranks of book x and y, respectively, Sx and Sy the corresponding sales

Risk and (Anti)fragility - N N Taleb| 51


I was privileged to get access to a database with cumulative sales for editions in print that had at least one unit sold that particular week (that is, conditional of the specific edition being still in print). I fit a powerlaw with tail exponent a> 1.3 for the upper 10% of sales (graph), with N=30K. Using the Zipf variation for ranks of powerlaws, with rx and ry the ranks of book x and y, respectively, Sx and Sy the corresponding sales Sx Sy = rx ry
100 - 1.3 1000
1

1 a

(4.1)

So for example if the rank of x is 100 and y is 1000, x sells I

= 5.87 times what y sells.

Note this is only robust in deriving the sales of the lower ranking edition (ry > rx ) because of inferential problems in the presence of fat-tails.
P>X 1

0.1 Near tail 0.01

a=1.3
0.001

10-4 100 104 106 X

Figure 4.7 Log-Log Plot of the probability of exceeding X (book sales), and X

This works best for the top 10,000 books, but not quite the top 20 (because the tail is vastly more unstable). Further, the effective a for large deviations is lower than 1.3. But this method is robust as applied to rank within the near tail.

52 | Risk and (Anti)fragility - N N Taleb

5
How To Tell True Fat Tails from Poisson Jumps
5.1 Beware The Poisson
By the masquerade problem, any power law can be seen backward as a Gaussian plus a series of simple (that is, noncompound) Poisson jumps, the so-called jump-diffusion process. So the use of Poisson is often just a backfitting problem, where the researcher fits a Poisson, happy with the evidence. The next exercise aims to supply convincing evidence of scalability and NonPoisson-ness of the data (the Poisson here is assuming a standard Poisson). Thanks to the need for the probabililities add up to 1, scalability in the tails is the sole possible model for such data. We may not be able to write the model for the full distribution --but we know how it looks like in the tails, where it matters. The Behavior of Conditional Averages: With a scalable (or "scale-free") distribution, when K is "in the tails" (say you reach the point when f HxL = C X -a where C is a constant and a the power law exponent), the relative conditional expectation of X (knowing that x>K) divided by K, that is,
E@X X>KD K

is a constant, and does not depend on K. More precisely, it is = Ka a-1

a a-1

. (5.1)

K x f Hx, aL x
K f Hx,

aL x

This provides for a handy way to ascertain scalability by raising K and looking at the averages in the data. Note further that, for a standard Poisson, (too obvious for a Gaussian): not only the conditional expectation depends on K, but it "wanes", i.e. Limit
K

mx GH xL
x

x K = 1 x (5.2)

m K x!

Calibrating Tail Exponents. In addition, we can calibrate power laws. Using K as the cross-over point, we get the a exponent above it --the same as if we used the Hill estimator or ran a regression above some point. We defined fat tails in the previous chapter as the contribution of the low frequency events to the total properties. But fat tails can come from different classes of distributions. This chapter will present the difference between two broad classes of distributions. This brief test using 12 million pieces of exhaustive returns shows how equity prices (as well as short term interest rates) do not have a characteristic scale. No other possible method than a Paretan tail, albeit of unprecise calibration, can charaterize them.

5.2 Leave it to the Data


We tried the exercise with about every piece of data in sight: single stocks, macro data, futures, etc. Equity Dataset: We collected the most recent 10 years (as of 2008) of daily prices for U.S. stocks (no survivorship bias effect as we included companies that have been delisted up to the last trading day), n= 11,674,825 , deviations expressed in logarithmic returns. We scaled the data using various methods. The expression in "numbers of sigma" or standard deviations is there to conform to industry language (it does depend somewhat on the stability of sigma). In the "MAD" space test we used the mean deviation.

Risk and (Anti)fragility - N N Taleb| 53

Log MADHiL =
1 N N

St i St-1 i St- j i St - j - 1 i

j=0 LogK

We focused on negative deviations. We kept moving K up until to 100 MAD (indeed) --and we still had observations.

Implied a
MAD -1. -2. -5. -10. -15. -20. -25. -50. -70. -75. -100. E@X
X<K D

E@X E@X

X<K D X<K D - K
E@X
X<K D

n Hfor X < KL 1.32517 10 300 806. 19 285. 3198. 1042. 418. 181. 24. 11. 9. 7.
6

Implied a 2.32974 2.95322 2.68717 2.87678 3.04888 2.95176 2.57443 1.96609 1.80729 1.74685 1.96163

-1.75202 -3.02395 -7.96354 -15.3283 -22.3211 -30.2472 -40.8788 -101.755 -156.709 -175.422 -203.991

1.75202 1.51197 1.59271 1.53283 1.48807 1.51236 1.63515 2.0351 2.23871 2.33896 2.03991

Short term Interest Rates


EuroDollars Front Month 1986-2006

n=4947
MAD -0.5 -1. -5. -6. -7. -8. E@X
X<K D

n Hfor X < KL 1520 969 69 46 34 24

E@X

X<K D

Implied a 1.38361 1.7076 2.68491 2.84496 3.16464 3.32782

-1.8034 -2.41323 -7.96752 -9.2521 -10.2338 -11.4367

3.6068 2.41323 1.5935 1.54202 1.46197 1.42959

UK Rates 1990-2007

n=4143
MAD 0.5 1. 3. 5. 6. 7. E@X
X<K D

n Hfor X < KL 1270 806 140 36 26 16

E@X

X<K D

Implied a 1.42087 1.80761 2.52038 2.45658 2.68477 2.56381

1.68802 2.23822 4.97319 8.43269 9.56132 11.4763

3.37605 2.23822 1.65773 1.68654 1.59355 1.63947

Literally, you do not even have a large number K for which scalability drops from a small sample effect.

Global Macroeconomic data

54 | Risk and (Anti)fragility - N N Taleb

Taleb(2008), International Journal of Forecasting.

Risk and (Anti)fragility - N N Taleb| 55

6
How Power Laws Emerge From Recursive Epistemic Uncertainty

6.1 The Opposite of Central Limit


With the Central Limit Theorem: we start with a distribution and end with a Gaussian. The opposite is more likely to be true. Recall how we fattened the tail of the Gaussian by stochasticizing the variance? Now let us use the same metaprobability method, put add additional layers of uncertainty.

The Regress Argument (Error about Error)


The main problem behind The Black Swan is the limited understanding of model (or representation) error, and, for those who get it, a lack of understanding of second order errors (about the methods used to compute the errors) and by a regress argument, an inability to continuously reapplying the thinking all the way to its limit (particularly when they provide no reason to stop). Again, there is no problem with stopping the recursion, provided it is accepted as a declared a priori that escapes quantitative and statistical methods. Epistemic not statistical re-derivation of power laws: Note that previous derivations of power laws have been statistical (cumulative advantage, preferential attachment, winner-take-all effects, criticality), and the properties derived by Yule, Mandelbrot, Zipf, Simon, Bak, and others result from structural conditions or breaking the independence assumptions in the sums of random variables allowing for the application of the central limit theorem. This work is entirely epistemic, based on standard philosophical doubts and regress arguments.

6.2 Methods and Derivations


Layering Uncertainties

Take a standard probability distribution, say the Gaussian. The measure of dispersion, here s, is estimated, and we need to attach some measure of dispersion around it. The uncertainty about the rate of uncertainty, so to speak, or higher order parameter, similar to what called the "volatility of volatility" in the lingo of option operators (see Taleb, 1997, Derman, 1994, Dupire, 1994, Hull and White, 1997) --here it would be "uncertainty rate about the uncertainty rate". And there is no reason to stop there: we can keep nesting these uncertainties into higher orders, with the uncertainty rate of the uncertainty rate of the uncertainty rate, and so forth. There is no reason to have certainty anywhere in the process.
Higher order integrals in the Standard Gaussian Case

We start with the case of a Gaussian and focus the uncertainty on the assumed standard deviation. Define f(m,s,x) as the Gaussian PDF for value x with mean m and standard deviation s. A 2 nd order stochastic standard deviation is the integral of f across values of s ]0,[, under the measure f Hs, s1 , sL, with s1 its scale parameter (our approach to trach the error of the error), not necessarily its standard deviation; the expected value of s1 is s1 .

f HxL1 =

fH m, s, xL f Hs, s1 , sL s

(6.1)

Generalizing to the Nth order, the density function f(x) becomes


f HxLN =

...

fH m, s, xL f Hs, s1 , sL f Hs1 , s2 , s1 L ... f HsN -1 , sN , sN -1 L s s1 s2 ... sN

(6.2)

The problem is that this approach is parameter-heavy and requires the specifications of the subordinated distributions (in finance, the lognormal has been traditionally used for s2 (or Gaussian for the ratio Log[
2 st

s2

] since the direct use of a Gaussian allows for negative values). We would

need to specify a measure f for each layer of error rate. Instead this can be approximated by using the mean deviation for s, as we will see next.

56 | Risk and (Anti)fragility - N N Taleb


The problem is that this approach is parameter-heavy and requires the specifications of the subordinated distributions (in finance, the lognormal has been traditionally used for s2 (or Gaussian for the ratio Log[
2 st

s2

] since the direct use of a Gaussian allows for negative values). We would

need to specify a measure f for each layer of error rate. Instead this can be approximated by using the mean deviation for s, as we will see next.
Discretization using nested series of two-states for s- a simple multiplicative process

We saw in the last chapter a quite effective simplification to capture the convexity, the ratio of (or difference between) f(m,s,x) and 0 fH m, s, xL f Hs, s1 , sL s (the first order standard deviation) by using a weighted average of values of s, say, for a simple case of oneorder stochastic volatility:

s H1 a H1LL, 0 a H1L < 1


where a(1) is the proportional mean absolute deviation for s, in other word the measure of the absolute error rate for s. We use probability of each state. Unlike the earlier situation we are not preserving the variance, rather the STD. Thus the distribution using the first order stochastic standard deviation can be expressed as: 1 2
1 as 2

the

f HxL1 =

8f H m, s H1 + aH1LL, xL + f H m, sH1 - aH1LL, xL<


1 2

(6.3) a(1)( 1

Now assume uncertainty about the error rate a(1), expressed by a(2), in the same manner as before. Thus in place of a(1) we have a(2)).

HaH1L + 1L HaH2L + 1L HaH3L + 1L s HaH1L + 1L HaH2L + 1L s HaH1L + 1L HaH2L + 1L H1 - aH3LL s HaH1L + 1L s HaH1L + 1L H1 - aH2LL HaH3L + 1L s HaH1L + 1L H1 - aH2LL s HaH1L + 1L H1 - aH2LL H1 - aH3LL s s H1 - aH1LL HaH2L + 1L HaH3L + 1L s H1 - aH1LL HaH2L + 1L s H1 - aH1LL HaH2L + 1L H1 - aH3LL s H1 - aH1LL s H1 - aH1LL H1 - aH2LL HaH3L + 1L s H1 - aH1LL H1 - aH2LL s H1 - aH1LL H1 - aH2LL H1 - aH3LL s

Figure 4:- Three levels of error rates for s following a multiplicative process

The second order stochastic standard deviation:

Risk and (Anti)fragility - N N Taleb| 57

f HxL2 =

8fH m, sH1 + aH1L H1 + aH2LLL, xL + 4 fH m, sH1 - aH1L H1 + aH2LLL, xL + fH m, sH1 + aH1L H1 - aH2LL, xL + f H m, sH1 - aH1L H1 - aH2LLL, xL<

(6.4)

and the Nth order: f HxL N = 1 2N


2N

fI m, s MiN , xM
i=1

(6.5)

N where Mi is the ith scalar (line) of the matrix MN I2N 1M N

MN = : Ha HjL TPi, jT + 1L>


j=1

2N i=1

(6.6)

and T[[i,j]] the element of ith line and jth column of the matrix of the exhaustive combination of N-Tuples of (-1,1),that is the N-dimentionalvector {1,1,1,...} representing all combinations of 1 and -1. for N=3 1 1 1 1 T= -1 -1 -1 -1 1 1 -1 -1 1 1 -1 -1 1 -1 1 -1 1 -1 1 -1 H1 - aH1LL H1 - aH2LL H1 - aH3LL H1 - aH1LL H1 - aH2LL HaH3L + 1L H1 - aH1LL HaH2L + 1L H1 - aH3LL H1 - aH1LL HaH2L + 1L HaH3L + 1L and M 3 = HaH1L + 1L H1 - aH2LL H1 - aH3LL HaH1L + 1L H1 - aH2LL HaH3L + 1L HaH1L + 1L HaH2L + 1L H1 - aH3LL HaH1L + 1L HaH2L + 1L HaH3L + 1L

3 so M1 = 8H1 - aH1LL H1 - aH2LL H1 - aH3LL<, etc.

Figure 4: Thicker tails (higher peaks) for higher values of N; here N=0,5,10,25,50, all values of a=

1 10

Note that the various error rates a(i) are not similar to sampling errors, but rather projection of error rates into the future. They are, to repeat, epistemic.
The Final Mixture Distribution

The mixture weighted average distribution (recall that f is the ordinary Gaussian with mean m, std s and the random variable x).
2N

f Hx m, s, M , N L = 2-N fI m, s MiN , x M
i=1

(6.7)

It could be approximated by a lognormal distribution for s and the corresponding V as its own variance. But it is precisely the V that interest us, and V depends on how higher order errors behave.

58 | Risk and (Anti)fragility - N N Taleb

Next let us consider the different regimes for higher order errors.

6.3 Regime 1 (Explosive): Case of a Constant parameter a


Special case of constant a: Assume that a(1)=a(2)=...a(N)=a, i.e. the case of flat proportional error rate a. The Matrix M collapses into a conventional binomial tree for the dispersion at the level N.
N

f H x m , s , M , N L = 2- N K
j =0

N O fI m, s Ha + 1L j H1 - aLN - j , xM j

(6.8)

Because of the linearity of the sums, when a is constant, we can use the binomial distribution as weights for the moments (note again the artificial effect of constraining the first moment m in the analysis to a set, certain, and known a priori).
Order 1 2 3 4 5 6 7 8
N N N

Moment m s2 Ia2 + 1M + m2 3 m s Ia + 1M + m3 6 m2 s2 Ia2 + 1M + m4 + 3 Ia4 + 6 a2 + 1M s4 10 m s Ia + 1M + m + 15 Ia + 6 a + 1M m s4 15 m4 s2 Ia2 + 1M + m6 + 15 IIa2 + 1M Ia4 + 14 a2 + 1MM s6 + 45 Ia4 + 6 a2 + 1M m2 s4 21 m5 s2 Ia2 + 1M + m7 + 105 IIa2 + 1M Ia4 + 14 a2 + 1MM m s6 + 105 Ia4 + 6 a2 + 1M m3 s4 28 m6 s2 Ia2 + 1M + m8 + 105 Ia8 + 28 a6 + 70 a4 + 28 a2 + 1M s8 + 420 IIa2 + 1M Ia4 + 14 a2 + 1MM m2 s6 + 210 Ia4 + 6 a2 + 1M m4 s4
N N N N N N N 3 2 2 N 5 4 2 N N N 2 2 N N

For clarity, we simplify the table of moments, with m=0


Order 1 2 3 4 5 6 7 8 Moment 0 Ia2 + 1M s2 0 3 Ia4 + 6 a2 + 1M s4 0 15 Ia6 + 15 a4 + 15 a2 + 1M s6 0 105 Ia8 + 28 a6 + 70 a4 + 28 a2 + 1M s8
N N N N

Note again the oddity that in spite of the explosive nature of higher moments, the expectation of the absolute value of x is both independent of a and N, since the perturbations of s do not affect the first absolute moment = different under addition of x. Every recursion multiplies the variance of the process by (1+a 2 ). The process is similar to a stochastic volatility model, with the standard deviation (not the variance) following a lognormal distribution, the volatility of which grows with M, hence will reach infinite variance at the limit.
Consequences
2 p

s (that is, the initial assumed s). The situation would be

For a constant a > 0, and in the more general case with variable a where a(n) a(n-1), the moments explode. A- Even the smallest value of a >0, since I1 + a2 M is unbounded, leads to the second moment going to infinity (though not the first) when N. So something as small as a .001% error rate will still lead to explosion of moments and invalidation of the use of the class of !2 distributions. B- In these conditions, we need to use power laws for epistemic reasons, or, at least, distributions outside the !2 norm, regardless of observations of past data. Note that we need an a priori reason (in the philosophical sense) to cutoff the N somewhere, hence bound the expansion of the second moment.
Convergence to Properties Similar to Power Laws
N

We can see on the example next Log-Log plot (Figure 1) how, at higher orders of stochastic volatility, with equally proportional stochastic coefficient, (where a(1)=a(2)=...=a(N)=
1 ) 10

how the density approaches that of a power law (just like the Lognormal distribution at higher

variance), as shown in flatter density on the LogLog plot. The probabilities keep rising in the tails as we add layers of uncertainty until they seem to reach the boundary of the power law, while ironically the first moment remains invariant.

Risk and (Anti)fragility - N N Taleb| 59


We can see on the example next Log-Log plot (Figure 1) how, at higher orders of stochastic volatility, with equally proportional stochastic coefficient, (where a(1)=a(2)=...=a(N)=
1 ) 10

how the density approaches that of a power law (just like the Lognormal distribution at higher

variance), as shown in flatter density on the LogLog plot. The probabilities keep rising in the tails as we add layers of uncertainty until they seem to reach the boundary of the power law, while ironically the first moment remains invariant. a=
Log PrHxL 0.1

1 10

, N=0,5,10,25,50

10-4

10-7

10-10

10-13

1.5

2.0

3.0

5.0

7.0

10.0

15.0

20.0

30.0

Log x

Figure x - LogLog Plot of the probability of exceeding x showing power law-style flattening as N rises. Here all values of a= 1/10

The same effect takes place as a increases towards 1, as at the limit the tail exponent P>x approaches 1 but remains >1.

Effect on Small Probabilities

Next we measure the effect on the thickness of the tails. The obvious effect is the rise of small probabilities. Take the exceedant probability, that is, the probability of exceeding K , given N , for parameter a constant :
N

P > K N = 2-N -1 K
j =0

N O erfc j

K 2 s Ha + 1L j H1 - aLN - j
2 p

(6.9) 0 e
z -t 2

where erfc(.) is the complementary of the error function, 1-erf(.), erf HzL =

dt

Convexity effect:The next Table shows the ratio of exceedant probability under different values of N divided by the probability in the case of a standard Gaussian. a= N 5 10 15 20 25
P>3,N P>3,N =0 1 100 P>5,N P>5,N =0 P>10,N P>10,N =0

1.01724 1.0345 1.05178 1.06908 1.0864 a=

1.155 1.326 1.514 1.720 1.943


1 10 P>5,N P>5,N =0

7 45 221 922 3347

N 5 10 15 20 25

P>3,N P>3,N =0

P>10,N P>10,N =0

2.74 4.43 5.98 7.38 8.64

146 805 1980 3529 5321

1.09 1012 8.99 1015 2.21 1017 1.20 1018 3.62 1018

60 | Risk and (Anti)fragility - N N Taleb

6.4 Regime 2: Cases of decaying parameters a(n)


As we said, we may have (actually we need to have) a priori reasons to decrease the parameter a or stop N somewhere. When the higher order of a(i) decline, then the moments tend to be capped (the inherited tails will come from the lognormality of s).
Regime 2-a; First Method: bleed of higher order error

Take a "bleed" of higher order errors at the rate l, 0 l < 1 , such as a(N) = l a(N-1), hence a(N) = lN a(1), with a(1) the conventional intensity of stochastic standard deviation. Assume m=0. With N=2 , the second moment becomes: M2H2L = IaH1L2 + 1M s2 IaH1L2 l2 + 1M With N=3, M2H3L = s2 I1 + aH1L2 M I1 + l2 aH1L2 M I1 + l4 aH1L2 M finally, for the general N:
N -1

(6.10)

(6.11)

M3HN L = IaH1L2 + 1M s2 IaH1L2 l2 i + 1M


i=1 N -1

(6.12)

We can reexpress H12L using the Q - Pochhammer symbol Ha; qLN = I1 - aqi M
i=1

M2HN L = s I-aH1L ; l MN Which allows us to get to the limit Limit M2 HN LN = s2 As to the fourth moment: By recursion:
N -1

(6.13)

Hl2 ; l2 L2 HaH1L2 ; l2 L Hl2 - 1L2 Hl2 + 1L

(6.14)

M4HN L = 3 s4 I6 aH1L2 l2 i + aH1L4 l4 i + 1M


i=0

(6.15)

M4HN L = 3 s4 KK2

2 - 3O aH1L2 ; l2 O K-K3 + 2
N

2 O aH1L2 ; l2 O
N

(6.16) (6.17)

Limit M4H N LN = 3 s4 KK2

2 - 3O aH1L2 ; l2 O

K-K3 + 2

2 O aH1L2 ; l2 O

So the limiting second moment for l=.9 and a(1)=.2 is just 1.28 s2 , a significant but relatively benign convexity bias. The limiting fourth moment is just 9.88 s4 , more than 3 times the Gaussians (3 s4 ), but still finite fourth moment. For small values of a and values of l close to 1, the fourth moment collapses to that of a Gaussian.
Regime 2-b; Second Method, a Non Multiplicative Error Rate

For N recursions s H1 HaH1L H1 H a H2L H1 aH3L H ... LLL PHx, m, s, N L =


N N iL =1 f H x, m, s H1 + H T . A Li L

(6.18)

Risk and (Anti)fragility - N N Taleb| 61

IM N .T + 1Mi is the ith component of the HN 1L dot product of T N the matrix of Tuples in H6L , L the length of the matrix, and A is the vector of parameters AN = 9a j = j=1,... N So for instance, for N=3, T= {1, a, a2 , a3 } a + a2 + a3 a + a2 - a3 a - a2 + a3 a - a2 - a3 - a + a2 + a3 - a + a2 - a3 - a - a2 + a3 - a - a2 - a3

T 3 . A3 =

The moments are as follows: M1HN L = m M2HN L = m2 + 2 s


N

(6.19) (6.20) (6.21)

M4HN L = m4 + 12 m2 s + 12 s2 a2 i
i=0

at the limit of N Lim N M4HN L = m4 + 12 m2 s + 12 s2 which is very mild. 1 1 - a2 (6.22)

6.5 Conclusion
Something Boring & something about epistemic opacity. This part will be expanded

62 | Risk and (Anti)fragility - N N Taleb

7
An Introduction to Metaprobability
Ludic fallacy (or uncertainty of the nerd): the manifestation of the Platonic fallacy in the study of uncertainty; basing studies of chance on the narrow world of games and dice (where we know the probabilities ahead of time, or can easily discover them). A-Platonic randomness has an additional layer of uncertainty concerning the rules of the game in real life. (in the Glossary of The Black Swan)

Epistemic opacity: Randomness is the result of incomplete information at some layer. It is functionally indistinguishable from true or physical randomness. Randomness as incomplete information: simply, what I cannot guess is random because my knowledge about the causes is incomplete, not necessarily because the process has truly unpredictable properties.

7.1 Metaprobability
The Effect of Estimation Error, General Case
The idea of model error from missed uncertainty attending the parameters (another layer of randomness) is as follows. Most estimations in economics (and elsewhere) take, as input, an average or expected parameter,

\bar{\alpha} = \int \alpha\, \phi(\alpha)\, d\alpha,

where α is φ-distributed (deemed to be so a priori or from past samples), and, regardless of the dispersion of α, build a probability distribution for X that relies on the mean estimated parameter, p(X) = p(X \mid \bar{\alpha}), rather than the more appropriate metaprobability-adjusted probability:

p(X) = \int p(X \mid \alpha)\, \phi(\alpha)\, d\alpha    (7.1)

In other words, if one is not certain about a parameter α, there is an inescapable layer of stochasticity; such stochasticity raises the expected (metaprobability-adjusted) probability if it is < 1/2 and lowers it otherwise. The uncertainty is fundamentally epistemic, and includes incertitude, in the sense of lack of certainty about the parameter.

The model bias becomes an equivalent of the Jensen gap (the difference between the two sides of Jensen's inequality), typically positive since probability is convex away from the center of the distribution. We get the bias ω_A from the differences in the steps in integration:

\omega_A = \int p(X \mid \alpha)\, \phi(\alpha)\, d\alpha \;-\; p\!\left(X \,\Big|\, \int \alpha\, \phi(\alpha)\, d\alpha\right)    (7.2)

With f(X) a function, f(X) = X for the mean, etc., we get the higher order bias ω_A':

\omega_A' = \int\!\!\int f(X)\, p(X \mid \alpha)\, \phi(\alpha)\, d\alpha\, dX \;-\; \int f(X)\, p\!\left(X \,\Big|\, \int \alpha\, \phi(\alpha)\, d\alpha\right) dX    (7.3)

Now assume the distribution of α as discrete with n states, with α = \{\alpha_i\}_{i=1}^{n}, each with associated probability φ = \{\phi_i\}_{i=1}^{n}, \sum_{i=1}^{n} \phi_i = 1. Then (7.1) becomes

p(X) = \sum_{i=1}^{n} p(X \mid \alpha_i)\, \phi_i    (7.4)

So far this holds for α, any parameter of any distribution.
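A hedged numerical illustration of (7.4), with assumed values: two equiprobable states for the scale parameter of a Gaussian show that the metaprobability-adjusted tail probability exceeds the one computed at the mean parameter.

```python
import numpy as np
from scipy.stats import norm

# Discrete metaprobability over the scale parameter (eq. 7.4):
# two equiprobable states for sigma, same mean parameter overall.
sigmas = np.array([0.75, 1.25])
phis = np.array([0.5, 0.5])
sigma_bar = sigmas @ phis                 # mean parameter = 1.0

X = 3.0                                   # a tail level
p_mixture = np.sum(phis * norm.sf(X, scale=sigmas))   # sum_i p(X | a_i) phi_i
p_at_mean = norm.sf(X, scale=sigma_bar)               # p(X | a_bar)

print(p_mixture, p_at_mean, p_mixture - p_at_mean)    # bias omega_A > 0 in the tail
```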


This part will be expanded

7.2 The Effect of Metaprobability on the Calibration of Power Laws


In the presence of a layer of metaprobabilities (from uncertainty about the parameters), the asymptotic tail exponent for a power law corresponds to the lowest possible tail exponent regardless of its probability. The problem explains "Black Swan" effects, i.e., why measurements tend to chronically underestimate tail contributions, rather than merely deliver imprecise but unbiased estimates. When the perturbation affects the standard deviation of a Gaussian or similar non-power-law-tailed distribution, the end product is the weighted average of the probabilities. However, a power-law distribution with errors about the possible tail exponent will bear the asymptotic properties of the lowest exponent, not the average exponent.

Now assume p(X) a standard Pareto distribution with α the tail exponent being estimated, p(X \mid \alpha) = \alpha X^{-\alpha - 1} X_{\min}^{\alpha}, where X_{\min} is the lower bound for X. Then

p(X) = \sum_{i=1}^{n} \alpha_i X^{-\alpha_i - 1} X_{\min}^{\alpha_i} \phi_i    (7.5)

Taking it to the limit,

\lim_{X \to \infty} X^{\alpha^* + 1} \sum_{i=1}^{n} \alpha_i X^{-\alpha_i - 1} X_{\min}^{\alpha_i} \phi_i = K

where K is a strictly positive constant and \alpha^* = \min_{1 \le i \le n} \alpha_i. In other words, \sum_{i=1}^{n} \alpha_i X^{-\alpha_i - 1} X_{\min}^{\alpha_i} \phi_i is asymptotically equivalent to a constant times X^{-\alpha^* - 1}. The lowest parameter in the space of all possibilities becomes the dominant parameter for the tail exponent.
Figure 7.1: Log-log plot illustration of the asymptotic tail exponent with two states. The graph shows the different situations: a) p(X \mid \bar{\alpha}), b) \sum_{i=1}^{n} p(X \mid \alpha_i)\, \phi_i and c) p(X \mid \alpha^*). We can see how (b) and (c) converge.

The asymptotic Jensen gap ω_A becomes p(X \mid \alpha^*) - p(X \mid \bar{\alpha}).

Implications
1. Whenever we estimate the tail exponent from samples, we are likely to underestimate the thickness of the tails, an observation made about Monte Carlo generated α-stable variates and the estimated results (the "Weron effect").
2. The higher the estimation variance, the lower the true exponent.
3. The asymptotic exponent is the lowest possible one. It does not even require estimation.
4. Metaprobabilistically, if one isn't sure about the probability distribution, and there is a probability that the variable is unbounded and could be power-law distributed, then it is power-law distributed, and of the lowest exponent (a numerical sketch of this dominance follows below).
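As referenced in item 4, here is a small sketch (parameters assumed only for illustration) showing that a two-state mixture of Pareto tails behaves asymptotically like the lowest exponent α*.

```python
import numpy as np

# Two-state metaprobability over the Pareto tail exponent:
alphas = np.array([1.5, 2.5])
phis = np.array([0.5, 0.5])
xmin = 1.0
a_star = alphas.min()

X = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
# survival function of the mixture: sum_i phi_i * (xmin / X)^alpha_i
surv_mix = np.sum(phis[:, None] * (xmin / X) ** alphas[:, None], axis=0)

# If the mixture tail is ~ K X^(-a*), then surv_mix * X^a* should flatten out.
print(surv_mix * X ** a_star)   # converges to a constant (here 0.5)
```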


The obvious conclusion is, in the presence of power-law tails, to focus on changing payoffs to clip tail exposures, limiting ω_A' and robustifying tail exposures, making the computation problem go away.


8
Brownian Motion in the Real World (Under Path Dependence and Fat Tails)
Most of the work concerning martingales and Brownian motion can be idealized to the point of lacking any match to reality, in spite of the sophisticated, rather complicated discussions. This section discusses the (consequential) differences.

8.1 Path Dependence and History as Revelation of Antifragility


The Markov property underpinning Brownian motion: X_N \mid \{X_1, X_2, \dots, X_{N-1}\} = X_N \mid \{X_{N-1}\}, that is, the last realization is the only one that matters. Now if we take fat tailed models, such as stochastic volatility processes, the properties of the system are Markov, but the history of the past realizations of the process matters in determining the present variance.

Take M realizations of a Brownian Bridge process pinned at S_{t_0} = 100 and S_T = 80, sampled with N periods separated by Δt, with the sequence S, a collection of Brownian-looking paths with single realizations indexed by j,

S^j = \left\{ \left\{ S^j_{i \Delta t + t_0} \right\}_{i=0}^{N} \right\}_{j=1}^{M}    (8.1)

Now take m^* \equiv \min_j \min_i S^j_i and the path \left\{ j : \min_i S^j_i = m^* \right\}.

Take 1) the sample path with the most direct route (Path 1), defined by its lowest minimum, and 2) the one with the lowest minimum m^* (Path 2). The state of the system at period T depends heavily on whether the process S_T exceeds its minimum (Path 2), that is, whether it arrived there through a steady decline, or rose first, then declined. If the properties of the process depend on (S_T - m^*), then there is path dependence. By "properties of the process" we mean the variance, projected variance in, say, stochastic volatility models, or similar matters.
Figure 8.1 Brownian Bridge Pinned at 100 and 80, with multiple realizations.
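A rough simulation sketch of the setup in (8.1), with an assumed bridge volatility of 20 (not specified in the text): it builds Brownian Bridge paths pinned at 100 and 80 and computes S_T - m*, the quantity on which path dependence hinges.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 500, 1000, 1.0
dt = T / N
t = np.linspace(0.0, T, N + 1)

S0, ST, vol = 100.0, 80.0, 20.0   # vol = 20 is an illustrative assumption
# Standard Brownian bridge construction: W_t - (t/T) W_T, pinned at both ends
W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(M, N)), axis=1)
W = np.hstack([np.zeros((M, 1)), W])
bridge = S0 + (ST - S0) * t / T + vol * (W - (t / T) * W[:, [-1]])

m_star = bridge.min(axis=1)        # per-path minimum
gap = ST - m_star                  # S_T - m*: how far the endpoint sits above the low
print(gap.mean(), gap.max())       # paths that dipped first carry a larger gap
```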

This part will be expanded



8.2 Brownian Motion in the Real World


We mentioned in the discussion of the Casanova problem that stochastic calculus requires a certain class of distributions, such as the Gaussian. It is not, as we expect, because of the convenience of the smoothness in squares (finite Δx²), but rather because the distribution conserves across time scales. By the central limit theorem, a Gaussian remains a Gaussian under summation, that is, sampling at longer time scales. But it also remains a Gaussian at shorter time scales. The foundation is infinite divisibility.

The problems are as follows:
1. The results in the literature are subjected to the constraint that the martingale M is a member of the subset (H²) of square integrable martingales, \sup_{t \le T} E[M^2] < \infty.
2. We know that the restriction does not work for a lot of time series.
3. We know that, with θ an adapted process, without \int_0^T \theta_s^2\, ds < \infty we can't get most of the results of Itô's Lemma.
4. Even with \int dW^2 < \infty, the situation is far from solved because of powerful, very powerful preasymptotics. Hint: smoothness comes from \int dW^2 becoming linear to T at the continuous limit; simply, dt is too small in front of dW.

Take the normalized (i.e. sum = 1) cumulative variance (see Bouchaud & Potters),

\frac{\sum_{i=1}^{n} \left(W[i \Delta t] - W[(i-1) \Delta t]\right)^2}{\sum_{i=1}^{T/\Delta t} \left(W[i \Delta t] - W[(i-1) \Delta t]\right)^2}.

Let us play with finite variance situations.

Figure 8.2: Itô's lemma in action. Three classes of processes with tail exponents: α = ∞ (Gaussian), α = 1.16 (the 80/20) and α = 3. Even finite variance does not lead to the smoothing of discontinuities except in the infinitesimal limit, another way to see failed asymptotes.
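A small sketch of the normalized cumulative variance for the three classes of increments in Figure 8.2; the sample size and generators are illustrative assumptions (numpy's pareto draw is a crude stand-in for the 80/20 case).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

def normalized_cumulative_variance(increments):
    """Running sum of squared increments divided by the total
    (the 'sum = 1' cumulative variance of section 8.2)."""
    sq = increments**2
    return np.cumsum(sq) / sq.sum()

gaussian = rng.normal(size=n)                                       # alpha = infinity
student3 = rng.standard_t(df=3, size=n)                             # alpha = 3
pareto116 = rng.pareto(1.16, size=n) * rng.choice([-1, 1], size=n)  # ~80/20 tail

for name, x in [("Gaussian", gaussian), ("t(3)", student3), ("Pareto 1.16", pareto116)]:
    ncv = normalized_cumulative_variance(x)
    print(name, "largest single jump:", np.max(np.diff(ncv)))
```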

8.3 Stochastic Processes and Nonanticipating Strategies


There is a difference between the Stratonovich and Itô integrations of a functional of a stochastic process. But there is another step missing in Itô: the gap between information and adjustment.

This part will be expanded


8.4 Finite Variance not Necessary for Anything Ecological (incl. quant finance)
This part will be expanded


9
On the Difference Between Binaries and Vanillas with Implications For Prediction Markets

This explains how and where prediction markets (or, more generally, discussions of betting matters) do not correspond to reality and have little to do with exposures to fat tails and Black Swan effects. Elementary facts, but with implications. This shows how, for instance, the long shot bias is misapplied to real-life variables, and why political predictions are more robust than economic ones. This discussion is based on Taleb (1997), which focuses largely on the difference between a binary and a vanilla option.

9.1 Definitions
1- A binary bet (or just a binary, or a digital): an outcome with payoff 0 or 1 (or yes/no, -1/1, etc.). Examples: a prediction market, an election, most games and lottery tickets. Also called a digital. Any statistic based on a YES/NO switch. Its estimator is M_T(A, f), which, recall from 1.x, was

\frac{\sum_{i=0}^{n} 1_A\!\left(X_{t_0 + i \Delta t}\right)}{\sum_{i=0}^{n} 1}

with f = 1 and A either = (K, \infty) or its complement (-\infty, K).

Binaries are effectively bets on probability, more specifically cumulative probabilities or their complement. They are rarely ecological, except for political predictions. More technically, they are mapped by the Heaviside function, θ_K(x) = 1 if x > K and 0 if x < K. θ is itself the integral of the Dirac delta function which, in expectation, will deliver us the equivalent of probability densities.

2- An exposure or vanilla: an outcome with no open limit: say revenues, market crashes, casualties from war, success, growth, inflation, epidemics... in other words, about everything. Exposures are generally expectations, or the arithmetic mean, never bets on probability, but rather on the pair probability × payoff.

3- A bounded exposure: an exposure (vanilla) with an upper and lower bound: say an insurance policy with a cap, or a lottery ticket. When the boundary is close, it approaches a binary bet in properties. When the boundary is remote (and unknown), it can be treated like a pure exposure. The idea of clipping tails of exposures transforms them into such a category.
Figure: The different classes of payoff, f(x) against the variable x: the vanilla (continuous) payoff versus the binary at the bet level.

A vanilla or continuous payoff can be decomposed as a series of binaries, though possibly an infinite series. With L < K < H, it corresponds to

M_T\!\left((L, H], x\right) \equiv \lim_{\Delta K \to 0} \left( K + \sum_{i=1}^{(H-K)/\Delta K} \theta(K + i \Delta K)\, \Delta K \;-\; \sum_{i=1}^{(K-L)/\Delta K} \theta(K - i \Delta K)\, \Delta K \right)    (9.1)

and for unbounded vanilla payoffs, L = 0,

M_T\!\left((-\infty, \infty), x\right) \equiv \lim_{K \to \infty} \lim_{\Delta K \to 0} \left( K + \sum_{i=1}^{(H-K)/\Delta K} \theta(K + i \Delta K)\, \Delta K \;-\; \sum_{i=1}^{K/\Delta K} \theta(K - i \Delta K)\, \Delta K \right)    (9.2)
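A discretized sketch of the decomposition (9.1): with an assumed strike grid, the identity payoff on (L, H] is rebuilt from binaries, up to an error of order ΔK. The function name and parameter values are illustrative.

```python
import numpy as np

def vanilla_from_binaries(x, K, L, H, dK=0.01):
    """Discretized version of (9.1): rebuild the identity payoff on (L, H]
    from Heaviside (binary) payoffs struck on a grid around K."""
    up_strikes = K + dK * np.arange(1, int(round((H - K) / dK)) + 1)
    down_strikes = K - dK * np.arange(1, int(round((K - L) / dK)) + 1)
    return K + dK * np.sum(x > up_strikes) - dK * np.sum(x < down_strikes)

for x in [2.7, 5.0, 8.3]:
    print(x, vanilla_from_binaries(x, K=5.0, L=0.0, H=10.0))
```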

The Problem

The properties of binaries diverge from those of vanilla exposures. This note is to show how conflation of the two takes place: in prediction markets and in the ludic fallacy (using the world of games to apply to real life).

1. They have diametrically opposite responses to skewness (a mean-preserving increase in skewness). Proof TK.
2. They respond differently to fat-tailedness (sometimes in opposite directions). Fat tails make binaries more tractable. Proof TK.
3. A rise in complexity lowers the value of the binary and increases that of the vanilla. Proof TK.

Some direct applications:
1- Studies of long shot biases that typically apply to binaries should not port to vanillas.
2- Many are surprised that I find many econometricians total charlatans, while I find Nate Silver to be immune to my criticism. This explains why.
3- Why prediction markets provide very limited information outside specific domains.
4- Etc.

The Elementary Betting Mistake


One can hold beliefs that a variable can go lower yet bet that it is going higher. Simply, the digital and the vanilla diverge: P(X > X_0) > 1/2, but E(X) < X_0. This is normal in the presence of skewness and extremely common with economic variables. Philosophers have a related problem called the lottery paradox, which in statistical terms is not a paradox.

In Fooled by Randomness, a trader was asked: "do you predict that the market is going up or down?" "Up", he said, with confidence. Then the questioner got angry when he discovered that the trader was short the market, i.e., would benefit from the market going down. The questioner could not get the idea that the trader believed that the market had a higher probability of going up, but that, should it go down, it would go down a lot. So the rational position was to be short. This divorce between the binary (up is more likely) and the vanilla is very prevalent in real-world variables.
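A toy numerical example of the divergence (payoffs assumed, not from the source): the "up" probability exceeds 1/2 while the expectation is negative, so believing "up" and being short are compatible.

```python
import numpy as np

rng = np.random.default_rng(2)
# A skewed bet: small gains most of the time, occasional large loss.
moves = rng.choice([+1.0, -5.0], p=[0.8, 0.2], size=1_000_000)

print("P(move > 0) =", np.mean(moves > 0))   # ~0.8, so "up is more likely"
print("E[move]     =", np.mean(moves))       # ~ -0.2, so the short is rational
```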

9.3 The Elementary Fat Tails Mistake


A slightly more difficult problem. To the question "What happens to the probability of a deviation > 1σ when you fatten the tail (while preserving other properties)?", almost all answer: it increases (so far all have made the mistake). Wrong. Fat tails mean that the extreme events contribute disproportionately to the total properties, and that it is the pair probability × payoff that matters, not just the probability. This is the reason people thought that I meant (and reported) that Black Swans were more frequent; I meant Black Swans are more consequential, more determining, but not more frequent.

I've asked variants of the same question. The Gaussian distribution spends 68.2% of the time within ±1 standard deviation. The real world has fat tails. In finance, how much time do stocks spend within ±1 standard deviation? The answer has been invariably "lower". Why? "Because there are more deviations." Sorry, there are fewer deviations: stocks spend between 78% and 98% of the time within ±1 standard deviation (computed from past samples).


Some simple derivations. Let x follow a Gaussian distribution (μ, σ). Assume μ = 0 for the exercise. What is the probability of exceeding one standard deviation?

P_{>1\sigma} = 1 - \tfrac{1}{2}\operatorname{erfc}\!\left(-\tfrac{1}{\sqrt{2}}\right),

where erfc is the complementary error function; P_{>1σ} = P_{<-1σ} ≈ 15.86%, and the probability of staying within the "stability tunnel" ±1σ is ≈ 68.2%.

Let us fatten the tail in a variance-preserving manner, using the standard method of a linear combination of two Gaussians with standard deviations \sigma\sqrt{1+a} and \sigma\sqrt{1-a}, where a is the "vvol" (which is variance preserving, technically of no big effect here, as a standard-deviation-preserving spreading gives the same qualitative result). Such a method leads to an immediate raising of the kurtosis by a factor of (1 + a²), since

\frac{E(x^4)}{E(x^2)^2} = 3\left(a^2 + 1\right).

With P_{>1σ} = P_{<-1σ}, the probability of staying within the ±1σ tunnel becomes

1 - \tfrac{1}{2}\operatorname{erfc}\!\left(\frac{1}{\sqrt{2}\sqrt{1-a}}\right) - \tfrac{1}{2}\operatorname{erfc}\!\left(\frac{1}{\sqrt{2}\sqrt{1+a}}\right)

So then, for different values of a, as we can see, the probability of staying inside ±1 sigma increases.
Figure: Fatter and fatter tails, for different values of a. We notice that a higher peak implies a lower probability of leaving the ±1σ tunnel.
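A sketch evaluating the in-tunnel probability formula above for several values of a (using scipy's erfc); it confirms that fattening the tails raises the probability of staying within ±1σ.

```python
import numpy as np
from scipy.special import erfc

def p_inside_one_sigma(a):
    """Probability that the variance-preserving two-Gaussian mixture
    (std devs sigma*sqrt(1-a) and sigma*sqrt(1+a)) stays within +/- 1 sigma."""
    return 1 - 0.5 * erfc(1 / np.sqrt(2 * (1 - a))) - 0.5 * erfc(1 / np.sqrt(2 * (1 + a)))

for a in [0.0, 0.2, 0.4, 0.6, 0.8]:
    print(a, round(p_inside_one_sigma(a), 4))
```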

9.4 The More Advanced Fat Tails Mistake and Great Moderation
Fatter tails increase the time spent between deviations, giving the illusion of absence of volatility when in fact events are delayed and made worse (my critique of the "Great Moderation").

Stopping time and fattening of the tails of a Brownian motion: Consider the distribution of the time it takes for a continuously monitored Brownian motion S to exit from a "tunnel" with a lower bound L and an upper bound H. Counterintuitively, fatter tails make an exit (at some sigma) take longer. You are likely to spend more time inside the tunnel, since exits are far more dramatic. ψ is the distribution of the exit time τ, where τ ≡ inf {t: S_t ∉ [L, H]}.

From Taleb (1997) we have the following approximation:

\psi(t \mid \sigma) = \frac{\pi \sigma^2\, e^{-\frac{t \sigma^2}{8}}}{\sqrt{H L}\,\left(\log(H) - \log(L)\right)^2} \sum_{n=1}^{\infty} (-1)^n\, n\, e^{-\frac{n^2 \pi^2 t \sigma^2}{2 \left(\log(H) - \log(L)\right)^2}} \left( L \sin\!\left(\frac{n \pi \left(\log(L) - \log(S)\right)}{\log(H) - \log(L)}\right) - H \sin\!\left(\frac{n \pi \left(\log(H) - \log(S)\right)}{\log(H) - \log(L)}\right) \right)

and the fatter-tailed distribution from mixing Brownians with σ separated by a coefficient a:

\psi(t \mid \sigma, a) = \tfrac{1}{2}\, \psi\!\left(t \mid \sigma (1 - a)\right) + \tfrac{1}{2}\, \psi\!\left(t \mid \sigma (1 + a)\right)

This graph shows the lengthening of the stopping time between events coming from fatter tails.
Figure: (left) the probability distribution of the exit time; (right) the expected exit time t as a function of the vvol coefficient.
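A crude simulation sketch (assumed tunnel [80, 120], σ = 0.3, a = 0.6, all illustrative) comparing exit times of a plain Brownian motion with the σ(1 ± a) mixture; the mixture exits later on average, consistent with the text.

```python
import numpy as np

rng = np.random.default_rng(3)

def exit_time(sigma, S0=100.0, L=80.0, H=120.0, dt=0.01, t_max=30.0):
    """Crude simulation of the first exit time of a driftless geometric
    Brownian motion from the tunnel [L, H]."""
    S, t = S0, 0.0
    while L < S < H and t < t_max:
        S *= np.exp(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * rng.normal())
        t += dt
    return t

a, sigma, n_paths = 0.6, 0.3, 200
plain = [exit_time(sigma) for _ in range(n_paths)]
# fatter tails: each path drawn with sigma*(1-a) or sigma*(1+a), equal odds
mixed = [exit_time(sigma * (1 + a * rng.choice([-1, 1]))) for _ in range(n_paths)]
print(np.mean(plain), np.mean(mixed))   # the mixture exits later on average
```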

More Complicated : MetaProbabilities

9.5 The Fourth Quadrant Mitigation (or Solution)


Let us return to M[A, f(x)] of Chapter 1. A quite significant result is that M[A, x^n] may not converge, in the case of, say, power laws with exponent α < n, but M[A, x^m], where m < n, might converge. Well, where the integral \int_{-\infty}^{\infty} f(x)\, p(x)\, dx does not exist, by clipping tails we can make the payoff integrable. There are two routes:
1) Limiting f (turning an open payoff into a binary): when f(x) is a constant, as in a binary, \int_{K}^{\infty} p(x)\, dx will necessarily converge if p is a probability distribution.
2) Clipping tails (and this is the business we will deal with in Antifragile, Part II): where the payoff is bounded, A = (L, H), the integral \int_{L}^{H} f(x)\, p(x)\, dx will necessarily converge, as sketched below.
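A numerical sketch of route 2 under assumed parameters: for a Pareto with α = 1.5, the estimate of M[A, x²] never settles, while the clipped payoff is stable.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, xmin = 1.5, 1.0
x = xmin * (1 + rng.pareto(alpha, size=1_000_000))   # Pareto samples, tail exponent 1.5

# M[A, x^2] does not converge (alpha < 2): the running mean keeps jumping.
running_mean_x2 = np.cumsum(x**2) / np.arange(1, x.size + 1)
print("running mean of x^2 at 10^4, 10^5, 10^6 samples:",
      running_mean_x2[[9_999, 99_999, -1]])

# Clipping the payoff at H makes it integrable and the estimate stable.
H = 50.0
clipped = np.minimum(x, H)**2
print("mean of clipped payoff:", clipped.mean())
```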

This part will be expanded

Two types of Decisions


1. M0, which depend on the 0th moment, that is, "binary", or simple: i.e., as we saw, you just care if something is true or false. Very true or very false does not matter. Someone is either pregnant or not pregnant. A statement is "true" or "false" with some confidence interval. (I call these M0 as, more technically, they depend on the zeroth moment, namely just on the probability of events, and not their magnitude; you just care about "raw" probability.) A biological experiment in the laboratory or a bet with a friend about the outcome of a soccer game belong to this category.

2. M1+, complex, which depend on the 1st or higher moments. You do not just care about the frequency but about the impact as well, or, even more complex, some function of the impact. So there is another layer of uncertainty of impact. (I call these M1+, as they depend on higher moments of the distribution.) When you invest you do not care how many times you make or lose, you care about the expectation: how many times you make or lose times the amount made or lost.

Two types of probability structures: There are two classes of probability domains, very distinct qualitatively and quantitatively. The first, thin-tailed: "Mediocristan"; the second, thick-tailed: "Extremistan".

Note the typo f(x)= 1 should be f(x) = x


The Map

Conclusion
The 4th Quadrant is mitigated by changes in exposures. And exposures in the 4th quadrant can be to the negative or the positive, depending on whether the domain subset A is exposed on the left or on the right.


PART II - NONLINEARITIES

This Segment, Part II corresponds to topics covered in Antifragile

The previous chapters dealt mostly with probability rather than payoff. The next section deals with fragility and nonlinear effects.



10
Nonlinear Transformations of Random Variables
10.1. The Conflation Problem: Exposures to X Confused With Knowledge About X
Exposure, not knowledge. Take X a random or nonrandom variable, and F(X) the exposure, payoff, the effect of X on you, the end bottom line. (To be technical, X is in higher dimensions, in ℝ^N, but let us assume for the sake of the examples in the introduction that it is a simple one-dimensional variable.)

The disconnect. As a practitioner and risk taker I see the following disconnect: people (nonpractitioners) talking to me about X (with the implication that we practitioners should care about X in running our affairs) while I had been thinking about F(X), nothing but F(X). And the straight confusion, since Aristotle, between X and F(X) has been chronic. Sometimes people mention F(X) as utility but miss the full payoff. And the confusion is at two levels: first, simple confusion; second, in the decision-science literature, seeing the difference and not realizing that action on F(X) is easier than action on X.

Examples: X is unemployment in Senegal, F1(X) is the effect on the bottom line of the IMF, and F2(X) is the effect on your grandmother (which I assume is minimal). X can be a stock price, but you own an option on it, so F(X) is your exposure, an option value for X, or, even more complicated, the utility of the exposure to the option value. X can be changes in wealth, F(X) the convex-concave value function of Kahneman-Tversky, how these affect you. One can see that F(X) is vastly more stable or robust than X (it has thinner tails).
Figure: A convex and a linear function of a variable X. Confusing f(X) (on the vertical) and X (the horizontal) is more and more significant when f(X) is nonlinear. The more convex f(X), the more the statistical and other properties of f(X) will be divorced from those of X. For instance, the mean of f(X) will be different from f(mean of X), by Jensen's inequality. But beyond Jensen's inequality, the difference in risks between the two will be more and more considerable. When it comes to probability, the more nonlinear f, the less the probabilities of X matter compared to the nonlinearity of f. Moral of the story: focus on f, which we can alter, rather than the measurement of the elusive properties of X.


Figure: The probability distribution of x versus the probability distribution of f(x).

There are infinite numbers of functions F depending on a unique variable X. All utilities need to be embedded in F.

Limitations of knowledge. What is crucial: our limitations of knowledge apply to X, not necessarily to F(X). We have no control over X, some control over F(X), in some cases a very, very large control over F(X). This seems naive, but people do, as something is lost in the translation. The danger with the treatment of the Black Swan problem is as follows: people focus on X ("predicting X"). My point is that, although we do not understand X, we can deal with it by working on F, which we can understand, while others work on predicting X, which we can't because small probabilities are incomputable, particularly in fat-tailed domains. F(X) is how the end result affects you.

The probability distribution of F(X) is markedly different from that of X, particularly when F(X) is nonlinear. We need a nonlinear transformation of the distribution of X to get F(X). We had to wait until 1964 to get a paper on convex transformations of random variables, Van Zwet (1964).

Bad news: F is almost always nonlinear, often "S-curved", that is, convex-concave (for an increasing function).

The central point about what to understand: when F(X) is convex, say as in trial and error, or with an option, we do not need to understand X as much as our exposure to it. Simply, the statistical properties of X are swamped by those of F. That's the point of antifragility, in which exposure is more important than the naive notion of "knowledge", that is, understanding X.

Fragility and antifragility: when F(X) is concave (fragile), errors about X can translate into extreme negative values for F. When F(X) is convex, one is immune from negative variations. The more nonlinear F, the less the probabilities of X matter in the probability distribution of the final package F. Most people confuse the probabilities of X with those of F. I am serious: the entire literature reposes largely on this mistake. So, for now, ignore discussions of X that do not have F. And, for Baal's sake, focus on F, not X.

10.2. Transformations of Probability Distributions


Say x follows a distribution p(x) and z = f(x) follows a distribution g(z). Assume g(z) continuous, increasing, and differentiable for now. The density p at point r is defined by use of the integral

D(r) \equiv \int_{-\infty}^{r} p(x)\, dx

hence

\int_{-\infty}^{r} p(x)\, dx = \int_{-\infty}^{f(r)} g(z)\, dz

In differential form, g(z)\, dz = p(x)\, dx; since x = f^{(-1)}(z), we get

g(z)\, dz = p\!\left(f^{(-1)}(z)\right) d f^{(-1)}(z)

Now, the derivative of an inverse function is f^{(-1)\prime}(z) = \frac{1}{f'\!\left(f^{-1}(z)\right)}, which obtains the useful transformation heuristic:

g(z) = \frac{p\!\left(f^{(-1)}(z)\right)}{f'(u)\big|_{u = f^{(-1)}(z)}}    (10.1)

In the event that g(z) is monotonic decreasing, then

g(z) = \frac{p\!\left(f^{(-1)}(z)\right)}{\left| f'(u)\big|_{u = f^{(-1)}(z)} \right|}    (10.2)

Where f is convex, \tfrac{1}{2}\left(f(x - \Delta x) + f(x + \Delta x)\right) > f(x), and concave if \tfrac{1}{2}\left(f(x - \Delta x) + f(x + \Delta x)\right) < f(x). Let us simplify with the sole condition, assuming f(.) twice differentiable, \frac{\partial^2 f}{\partial x^2} > 0 for all values of x in the convex case and < 0 in the concave one.

Some Examples.

Squaring x: p(x) is a Gaussian (with mean 0, standard deviation 1), f(x) = x²:

g(x) = \frac{e^{-\frac{x}{2}}}{\sqrt{2\pi}\,\sqrt{x}}, \quad x \ge 0

which corresponds to the Chi-square distribution with 1 degree of freedom.

Exponentiating x: p(x) is a Gaussian (with mean μ, standard deviation σ):

g(x) = \frac{e^{-\frac{\left(\log(x) - \mu\right)^2}{2 \sigma^2}}}{\sqrt{2\pi}\,\sigma\, x}

which is the lognormal distribution.
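A quick check of the transformation heuristic (10.1) for the squaring example; since x² is not monotonic, the two branches of the inverse are summed, a slight extension of (10.1), and the result matches the chi-square(1) density.

```python
import numpy as np
from scipy.stats import norm, chi2

# Check the transformation heuristic for f(x) = x^2, p = standard normal.
z = np.linspace(0.1, 5.0, 5)
# g(z) = [p(sqrt(z)) + p(-sqrt(z))] / |f'(sqrt(z))|  (two branches of the inverse)
g = 2 * norm.pdf(np.sqrt(z)) / (2 * np.sqrt(z))
print(g)
print(chi2.pdf(z, df=1))   # matches the chi-square(1) density
```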

10.3 Application 1: Happiness (f(x)) does not have the same statistical properties as wealth (x)
There is a conflation of fat-tailedness of Wealth and Utility

Case 1: The Kahneman-Tversky prospect theory value function, which is convex-concave:

v(x) = \begin{cases} x^{a} & x \ge 0 \\ -\lambda\, (-x)^{a} & x < 0 \end{cases}

with a and λ calibrated as a = 0.88 and λ = 2.25.

For x (the changes in wealth) following a Student T distribution with tail exponent α,

f(x) = \frac{\left(\frac{\alpha}{\alpha + x^2}\right)^{\frac{\alpha+1}{2}}}{\sqrt{\alpha}\; B\!\left(\frac{\alpha}{2}, \frac{1}{2}\right)}

where B is the Euler Beta function, B(a, b) = \frac{\Gamma(a)\,\Gamma(b)}{\Gamma(a+b)} = \int_0^1 t^{a-1} (1-t)^{b-1}\, dt. We get (skipping the details of z = v(u) and f(u) du = z(x) dx) the distribution z(x) of the utility of happiness v(x):

z(x \mid a, \alpha, \lambda) = \begin{cases} \dfrac{x^{\frac{1-a}{a}} \left(\frac{\alpha}{\alpha + x^{2/a}}\right)^{\frac{\alpha+1}{2}}}{a\, \sqrt{\alpha}\; B\!\left(\frac{\alpha}{2}, \frac{1}{2}\right)} & x \ge 0 \\[2ex] \dfrac{\left(-\frac{x}{\lambda}\right)^{\frac{1-a}{a}} \left(\frac{\alpha}{\alpha + \left(-\frac{x}{\lambda}\right)^{2/a}}\right)^{\frac{\alpha+1}{2}}}{a\, \lambda\, \sqrt{\alpha}\; B\!\left(\frac{\alpha}{2}, \frac{1}{2}\right)} & x < 0 \end{cases}


Figure 1: Simulation, first: the distribution of the utility of changes of wealth v(x), compared with the distribution of x, when the changes in wealth follow a power law with tail exponent α = 2 (5 million Monte Carlo simulations).

Figure 2: The graph in Figure 1 derived analytically.

Fragility: as defined in the Taleb-Douady (2012) sense, on which later, i.e. tail sensitivity below K, v(x) is less fragile than x.

Figure 3: Left tail; the tail of x versus the tail of v(x).


v(x) has thinner tails than x: it is more robust.

Asymptotic tail: More technically, the asymptotic tail for V(x) becomes \sim x^{-\frac{\alpha}{a}} (i.e., for x and -x large, the exceedance probability for V is P_{>x} \sim K x^{-\frac{\alpha}{a}}, with K a constant), or

z(x) \sim K\, x^{-\frac{\alpha}{a} - 1}

We can see that V(x) can easily have finite variance when x has an infinite one. The dampening of the tail has an increasingly consequential effect for lower values of α.
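A Monte Carlo sketch of the dampening (sample size and seed are arbitrary assumptions): changes in wealth drawn from a Student T with tail exponent 2 are passed through the Kahneman-Tversky value function, and tail frequencies and sample dispersion are compared.

```python
import numpy as np

rng = np.random.default_rng(5)
a, lam = 0.88, 2.25

def kt_value(x):
    """Kahneman-Tversky value function, a = 0.88, lambda = 2.25."""
    return np.where(x >= 0.0, np.abs(x) ** a, -lam * np.abs(x) ** a)

x = rng.standard_t(df=2, size=5_000_000)   # changes in wealth, tail exponent ~2
v = kt_value(x)

for name, s in [("x", x), ("v(x)", v)]:
    print(name, "P(|.| > 10) =", np.mean(np.abs(s) > 10), " sample std =", s.std())
```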

Case 2: Compare to the Monotone Concave of Classical Utility


Unlike the convex-concave shape in Kahneman-Tversky, classical utility is monotone concave. This leads to plenty of absurdities, but the worst is the effect on the distribution of utility. Granted, one (K-T) deals with changes in wealth, while the second is a function of wealth. Take the standard concave utility function g(x) = 1 - e^{-a x}. With a = 1:

[Plot of 1 - e^{-a x} against x.]

The distribution of v(x) will be

v(x) = -\frac{e^{-\frac{\left(\mu + \log(1 - x)\right)^2}{2 \sigma^2}}}{\sqrt{2\pi}\,\sigma\,(x - 1)}

[Plot of the distribution v(x), concentrated near its upper bound with a long left tail.]
With such a distribution of utility it would be absurd to do anything.

10.4 The effect of convexity on the distribution of f(x)


Note the following property.


Distributions that are skewed have their mean dependent on the variance (when it exists), or on the scale. In other words, more uncertainty raises the expectation. Demonstration 1: TK

Figure: Probability versus Outcome under Low Uncertainty and High Uncertainty.

Example: the Lognormal distribution has a term \frac{\sigma^2}{2} in its mean, linear to the variance.

Example: the Exponential distribution, 1 - e^{-\lambda x}, x \ge 0, has mean \frac{1}{\lambda}, a concave function of the variance; that is, the mean equals the square root of its variance.

Example: the Pareto distribution, with density \alpha\, L^{\alpha} x^{-1-\alpha}, x \ge L, \alpha > 2, has standard deviation \frac{L}{\alpha - 1}\sqrt{\frac{\alpha}{\alpha - 2}} and mean equal to \sqrt{\frac{\alpha - 2}{\alpha}}\,\alpha times the standard deviation (verify).
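A short check of the lognormal example using scipy (parameter values assumed): raising σ with μ fixed raises the mean exp(μ + σ²/2).

```python
import numpy as np
from scipy.stats import lognorm

# For a lognormal with parameters (mu, sigma), the mean is exp(mu + sigma^2/2):
# more uncertainty (larger sigma) raises the expectation, mu held fixed.
mu = 0.0
for sigma in [0.25, 0.5, 1.0, 1.5]:
    dist = lognorm(s=sigma, scale=np.exp(mu))
    print(sigma, dist.mean(), np.exp(mu + sigma**2 / 2))
```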

10.5 The Mistake of Using Regular Estimation Methods When the Payoff is Convex


11
Definition, Mapping, and Detection of (Anti)Fragility

Inserting Taleb Douady 2013


What is Fragility?
In short, fragility is related to how a system suffers from the variability of its environment beyond a certain preset threshold (when the threshold is K, it is called K-fragility), while antifragility refers to when it benefits from this variability, in a way similar to the "vega" of an option or a nonlinear payoff, that is, its sensitivity to volatility or some similar measure of the scale of a distribution. Simply, a coffee cup on a table suffers more from large deviations than from the cumulative effect of some shocks; conditional on being unbroken, it has to suffer more from "tail" events than regular ones around the center of the distribution, the "at the money" category. This is the case of elements of nature that have survived: conditional on being in existence, the class of events around the mean should matter considerably less than tail events, particularly when the probabilities decline faster than the inverse of the harm, which is the case of all used monomodal probability distributions. Further, what has exposure to tail events suffers from uncertainty; typically, when systems (a building, a bridge, a nuclear plant, an airplane, or a bank balance sheet) are made robust to a certain level of variability and stress but may fail or collapse if this level is exceeded, then they are particularly fragile to uncertainty about the distribution of the stressor, hence to model error, as this uncertainty increases the probability of dipping below the robustness level, bringing a higher probability of collapse. In the opposite case, the natural selection of an evolutionary process is particularly antifragile: indeed, a more volatile environment increases the survival rate of robust species and eliminates those whose superiority over other species is highly dependent on environmental parameters.

Figure 1 shows the tail vega sensitivity of an object calculated discretely at two different lower absolute mean deviations. We use for the purpose of fragility and antifragility, in place of measures in L² such as standard deviations, which restrict the choice of probability distributions, the broader measure of absolute deviation, cut into two parts: lower and upper semi-deviation below and above the distribution center Ω.
\xi(K, s^- + \Delta s^-) = \int_{-\infty}^{K} (x - \Omega)\, f_{\lambda(s^- + \Delta s^-)}(x)\, dx, \qquad \xi(K, s^-) = \int_{-\infty}^{K} (x - \Omega)\, f_{\lambda(s^-)}(x)\, dx

Figure 1 - A definition of fragility as left tail-vega sensitivity; the figure shows the effect of the perturbation of the lower semi-deviation s^- on the tail integral ξ of (x - Ω) below K, with Ω a centering constant. Our detection of fragility does not require the specification of f, the probability distribution.

This article aims at providing a proper mathematical definition of fragility, robustness, and antifragility and examining how these apply to different cases where the notion is applicable.

Intrinsic and Inherited Fragility: Our definition of fragility is two-fold. First, of concern is the intrinsic fragility, the shape of the probability distribution of a variable and its sensitivity to s-, a parameter controlling the left side of its own distribution. But we do not often directly observe the statistical distribution of objects, and, if we did, it would be difficult to measure their tail-vega sensitivity. Nor do we need to specify such a distribution: we can gauge the response of a given object to the volatility of an external stressor that affects it. For instance, an option is usually analyzed with respect to the scale of the distribution of the "underlying" security, not its own; the fragility of a coffee cup is determined as a response to a given source of randomness or stress; that of a house with respect to, among other sources, the distribution of earthquakes. This fragility coming from the effect of the underlying is called inherited fragility. The transfer function, which we present next, allows us to assess the effect, increase or decrease in fragility, coming from changes in the underlying source of stress.

Transfer Function: A nonlinear exposure to a certain source of randomness maps into tail-vega sensitivity (hence fragility). We prove that Inherited Fragility ⇔ Concavity in exposure on the left side of the distribution, and we build H, a transfer function giving an exact mapping of tail-vega sensitivity to the second derivative of a function. The transfer function will allow us to probe parts of the distribution and generate a fragility-detection heuristic covering both physical fragility and model error.


FRAGILITY AS SEPARATE RISK FROM PSYCHOLOGICAL PREFERENCES

Avoidance of the Psychological: We start from the definition of fragility as tail-vega sensitivity, and end up with nonlinearity as a necessary attribute of the source of such fragility in the inherited case, a cause of the disease rather than the disease itself. However, there is a long literature by economists and decision scientists embedding risk into psychological preferences; historically, risk has been described as derived from risk aversion as a result of the structure of choices under uncertainty with a concavity of the muddled concept of "utility" of payoff; see Pratt (1964), Arrow (1965), Rothschild and Stiglitz (1970, 1971). But this "utility" business never led anywhere except the circularity, expressed by Machina and Rothschild (2008), "risk is what risk-averters hate." Indeed limiting risk to aversion to concavity of choices is a quite unhappy result: the utility curve cannot possibly be monotone concave, but rather, like everything in nature, necessarily bounded on both sides, the left and the right, convex-concave and, as Kahneman and Tversky (1979) have debunked, both path dependent and mixed in its nonlinearity.

Beyond Jensen's Inequality: Furthermore, the economics and decision-theory literature reposes on the effect of Jensen's inequality, an analysis which requires monotone convex or concave transformations, in fact limited to the expectation operator. The world is unfortunately more complicated in its nonlinearities. Thanks to the transfer function, which focuses on the tails, we can accommodate situations where the source is not merely convex, but convex-concave and any other form of mixed nonlinearities common in exposures, which includes nonlinear dose-response in biology. For instance, the application of the transfer function to the Kahneman-Tversky value function, convex in the negative domain and concave in the positive one, shows that it decreases fragility in the left tail (hence more robustness) and reduces the effect of the right tail as well (also more robustness), which allows us to assert that we are psychologically more robust to changes in wealth than implied from the distribution of such wealth, which happens to be extremely fat-tailed.

Accordingly, our approach relies on nonlinearity of exposure as detection of the vega-sensitivity, not as a definition of fragility. And nonlinearity in a source of stress is necessarily associated with fragility. Clearly, a coffee cup, a house or a bridge don't have psychological preferences, subjective utility, etc. Yet they are concave in their reaction to harm: simply, taking z as a stress level and h(z) the harm function, it suffices to see that, with n > 1,

h(nz) < n h(z) for all 0 < nz < Z*

where Z* is the level (not necessarily specified) at which the item is broken. Such an inequality leads to h(z) having a negative second derivative at the initial value z.

So if a coffee cup is less harmed by n times a stressor of intensity Z than once by a stressor of intensity nZ, then harm (as a negative function) needs to be concave to stressors up to the point of breaking; such a stricture is imposed by the structure of survival probabilities and the distribution of harmful events, and has nothing to do with subjective utility or some other figments. Just as with a large stone hurting more than the equivalent weight in pebbles, if, for a human, jumping one millimeter caused an exact linear fraction of the damage of, say, jumping to the ground from thirty feet, then the person would be already dead from cumulative harm. Actually a simple computation shows that he would have expired within hours from touching objects or pacing in his living room, given the multitude of such stressors and their total effect. The fragility that comes from linearity is immediately visible, so we rule it out because the object would be already broken and the person already dead. The relative frequency of ordinary events compared to extreme events is the determinant. In the financial markets, there are at least ten thousand times more events of 0.1% deviations than events of 10%. There are close to 8,000 micro-earthquakes daily on planet earth, that is, those below 2 on the Richter scale, about 3 million a year. These are totally harmless, and, with 3 million per year, you would need them to be so. But shocks of intensity 6 and higher on the scale make the newspapers. Accordingly, we are necessarily immune to the cumulative effect of small deviations, or shocks of very small magnitude, which implies that these affect us disproportionally less (that is, nonlinearly less) than larger ones.

Model error is not necessarily mean preserving. s-, the lower absolute semi-deviation, does not just express changes in overall dispersion in the distribution, such as, for instance, the scaling case, but also changes in the mean, i.e. when the upper semi-deviation from Ω to infinity is invariant, or even declines in a compensatory manner to make the overall mean absolute deviation unchanged. This would be the case when we shift the distribution instead of rescaling it. Thus the same vega-sensitivity can also express sensitivity to a stressor (dose increase) in medicine or other fields in its effect on either tail. Thus s-(λ) will allow us to express the sensitivity to the "disorder cluster" (Taleb, 2012): i) uncertainty, ii) variability, iii) imperfect, incomplete knowledge, iv) chance, v) chaos, vi) volatility, vii) disorder, viii) entropy, ix) time, x) the unknown, xi) randomness, xii) turmoil, xiii) stressor, xiv) error, xv) dispersion of outcomes.

DETECTION HEURISTIC

Finally, thanks to the transfer function, this paper proposes a risk heuristic that "works" in detecting fragility even if we use the wrong model/pricing method/probability distribution. The main idea is that a wrong ruler will not measure the height of a child; but it can certainly tell us if he is growing. Since risks in the tails map to nonlinearities (concavity of exposure), second order effects reveal fragility, particularly in the tails where they map to large tail exposures, as revealed through perturbation analysis. More generally every nonlinear function will produce some kind of positive or negative exposures to volatility for some parts of the distribution.


Figure 2- Disproportionate effect of tail events on nonlinear exposures, illustrating the necessity of nonlinearity of the harm function and showing how we can extrapolate outside the model to probe unseen fragility. It also shows the disproportionate effect of the linear in reverse: For small variations, the linear sensitivity exceeds the nonlinear, hence making operators aware of the risks.

Fragility and Model Error: As we saw this definition of fragility extends to model error, as some models produce negative sensitivity to uncertainty, in addition to effects and biases under variability. So, beyond physical fragility, the same approach measures model fragility, based on the difference between a point estimate and stochastic value (i.e., full distribution). Increasing the variability (say, variance) of the estimated value (but not the mean), may lead to one-sided effect on the model just as an increase of volatility causes porcelain cups to break. Hence sensitivity to the volatility of such value, the vega of the model with respect to such value is no different from the vega of other payoffs. For instance, the misuse of thin-tailed distributions (say Gaussian) appears immediately through perturbation of the standard deviation, no longer used as point estimate, but as a distribution with its own variance. For instance, it can be shown how fat-tailed (e.g. power-law tailed) probability distributions can be expressed by simple nested perturbation and mixing of Gaussian ones. Such a representation pinpoints the fragility of a wrong probability model and its consequences in terms of underestimation of risks, stress tests and similar matters. Antifragility: It is not quite the mirror image of fragility, as it implies positive vega above some threshold in the positive tail of the distribution and absence of fragility in the left tail, which leads to a distribution that is skewed right.

Fragility and Transfer Theorems


Table 1 - The Exhaustive Taxonomy of all Possible Payoffs y = f(x). Columns: Type; Condition; Left Tail (loss domain); Right Tail (gains domain); Nonlinear Payoff Function y = f(x), the "derivative", where x is a random variable; Derivatives Equivalent (Taleb, 1997); Effect of fat-tailedness of f(x) compared to primitive x.

Type 1 - Fragile (type 1): Left tail: fat (regular or absorbing barrier); Right tail: fat; Payoff: mixed, concave left, convex right ("fence"); Derivatives equivalent: long up-vega, short down-vega; Effect: more fragility if absorbing barrier, neutral otherwise.

Type 2 - Fragile (type 2): Left tail: fat; Right tail: thin; Payoff: concave; Derivatives equivalent: short vega; Effect: more fragility.

Type 3 - Robust: Left tail: thin; Right tail: thin; Payoff: mixed, convex left, concave right (digital, sigmoid); Derivatives equivalent: short up-vega, long down-vega; Effect: no effect.

Type 4 - Antifragile: Left tail: thin; Right tail: fat (thicker than left); Payoff: convex; Derivatives equivalent: long vega; Effect: more antifragility.


The central Table 1 introduces the exhaustive map of possible outcomes, with 4 mutually exclusive categories of payoffs. Our steps in the rest of the paper are as follows:

a. We provide a mathematical definition of fragility, robustness and antifragility.
b. We present the problem of measuring tail risks and show the presence of severe biases attending the estimation of small probability and its nonlinearity (convexity) to parametric (and other) perturbations.
c. We express the concept of model fragility in terms of left tail exposure, and show correspondence to the concavity of the payoff from a random variable.
d. Finally, we present our simple heuristic to detect the possibility of both fragility and model error across a broad range of probabilistic estimations.

Conceptually, fragility resides in the fact that a small, or at least reasonable, uncertainty on the macro-parameter of a distribution may have dramatic consequences on the result of a given stress test, or on some measure that depends on the left tail of the distribution, such as an out-of-the-money option. This is the hypersensitivity of what we like to call an "out of the money put price" to the macro-parameter, which is some measure of the volatility of the distribution of the underlying source of randomness.

Formally, fragility is defined as the sensitivity of the left-tail shortfall (non-conditioned by probability) below a certain threshold K to the overall left semi-deviation of the distribution.

Examples
a. Example: a porcelain coffee cup subjected to random daily stressors from use.
b. Example: tail distribution in the function of the arrival time of an aircraft.
c. Example: hidden risks of famine to a population subjected to monoculture or, more generally, fragilizing errors in the application of Ricardo's comparative advantage without taking into account second order effects.
d. Example: hidden tail exposures to budget deficits' nonlinearities to unemployment.
e. Example: hidden tail exposure from dependence on a source of energy, etc. (the "squeezability" argument).
TAIL VEGA SENSITIVITY

We construct a measure of vega in the tails of the distribution that depends on the variations of s, the semi-deviation below a certain level Ω, chosen in the L¹ norm in order to insure its existence under "fat tailed" distributions with finite first semi-moment. In fact s would exist as a measure even in the case of infinite moments to the right side of Ω.

Let X_λ be a random variable, the distribution of which is one among a one-parameter family of pdf f_λ, λ ∈ I ⊂ ℝ. We consider a fixed reference value Ω and, from this reference, the left-semi-absolute deviation:

s^-(\lambda) = \int_{-\infty}^{\Omega} (\Omega - x)\, f_\lambda(x)\, dx

We assume that s^-(λ) is continuous, strictly increasing and spans the whole range ℝ₊ = [0, +∞), so that we may use the left-semi-absolute deviation s^- as a parameter by considering the inverse function λ(s): ℝ₊ → I, defined by s^-(λ(s)) = s for s ∈ ℝ₊. This condition is for instance satisfied if, for any given x < Ω, the probability is a continuous and increasing function of λ. Indeed, denoting

F_\lambda(x) = \Pr_{f_\lambda}(X < x) = \int_{-\infty}^{x} f_\lambda(t)\, dt,

an integration by parts yields:

s^-(\lambda) = \int_{-\infty}^{\Omega} F_\lambda(x)\, dx

This is the case when λ is a scaling parameter, i.e. X_λ ~ Ω + λ(X₁ − Ω); indeed one has in this case

F_\lambda(x) = F_1\!\left(\Omega + \frac{x - \Omega}{\lambda}\right), \qquad \frac{\partial F_\lambda}{\partial \lambda}(x) = \frac{\Omega - x}{\lambda}\, f_\lambda(x), \qquad s^-(\lambda) = \lambda\, s^-(1).

It is also the case when λ is a shifting parameter, i.e. X_λ ~ X₀ − λ; indeed, in this case

F_\lambda(x) = F_0(x + \lambda) \qquad \text{and} \qquad \frac{\partial s^-}{\partial \lambda}(\lambda) = F_\lambda(\Omega).


For K < Ω and s ∈ ℝ₊, let:

\xi(K, s^-) = \int_{-\infty}^{K} (\Omega - x)\, f_{\lambda(s^-)}(x)\, dx

In particular, ξ(Ω, s^-) = s^-. We assume, in a first step, that the function ξ(K, s^-) is differentiable on (−∞, Ω] × ℝ₊. The K-left-tail-vega sensitivity of X at stress level K < Ω and deviation level s^- > 0 for the pdf f_λ is:

V(X, f_\lambda, K, s^-) = \frac{\partial \xi}{\partial s^-}(K, s^-) = \left( \int_{-\infty}^{K} (\Omega - x)\, \frac{\partial f_\lambda}{\partial \lambda}(x)\, dx \right) \left( \frac{d s^-}{d \lambda} \right)^{-1}

As in many practical instances where threshold effects are involved, it may occur that ξ does not depend smoothly on s^-. We therefore also define a finite-difference version of the vega-sensitivity as follows:

V(X, f_\lambda, K, s^-, \Delta s) = \frac{1}{2 \Delta s} \left( \xi(K, s^- + \Delta s) - \xi(K, s^- - \Delta s) \right) = \int_{-\infty}^{K} (\Omega - x)\, \frac{f_{\lambda(s^- + \Delta s)}(x) - f_{\lambda(s^- - \Delta s)}(x)}{2 \Delta s}\, dx

Hence omitting the input Δs implicitly assumes that Δs → 0.

Note that \xi(K, s^-) = \left(\Omega - E_{f_\lambda}[X \mid X < K]\right) \Pr_{f_\lambda}(X < K). It can be decomposed into two parts:

\xi(K, s^-(\lambda)) = (\Omega - K)\, F_\lambda(K) + P_\lambda(K)

P_\lambda(K) = \int_{-\infty}^{K} (K - x)\, f_\lambda(x)\, dx

Where the first part (Ω − K) F_λ(K) is proportional to the probability of the variable being below the stress level K and the second part P_λ(K) is the expectation of the amount by which X is below K (counting 0 when it is not). Making a parallel with financial options, while s^-(λ) is a "put at-the-money", ξ(K, s^-) is the sum of a put struck at K and a digital put also struck at K with amount Ω − K; it can equivalently be seen as a put struck at Ω with a "down-and-in" European barrier at K.

Letting λ = λ(s^-) and integrating by parts yields

\xi(K, s^-(\lambda)) = (\Omega - K)\, F_\lambda(K) + \int_{-\infty}^{K} F_\lambda(x)\, dx = \int_{-\infty}^{\Omega} F_\lambda^K(x)\, dx

Where F_\lambda^K(x) = F_\lambda\!\left(\min(x, K)\right) = \min\!\left(F_\lambda(x), F_\lambda(K)\right), so that

V(X, f_\lambda, K, s^-) = \frac{\partial \xi}{\partial s^-}(K, s^-) = \frac{\int_{-\infty}^{\Omega} \frac{\partial F_\lambda^K}{\partial \lambda}(x)\, dx}{\int_{-\infty}^{\Omega} \frac{\partial F_\lambda}{\partial \lambda}(x)\, dx}

For finite differences

V(X, f_\lambda, K, s^-, \Delta s) = \frac{1}{2 \Delta s} \int_{-\infty}^{\Omega} \Delta F_\lambda^K(x)\, dx

Where λ_{s+} and λ_{s−} are such that s(λ_{s+}) = s^- + Δs, s(λ_{s−}) = s^- − Δs and \Delta F_\lambda^K(x) = F_{\lambda_{s+}}^K(x) - F_{\lambda_{s-}}^K(x).
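A numerical sketch (not part of the paper) of the finite-difference vega sensitivity V(X, f_σ, K, s-, Δs) for a Gaussian with Ω = 0 and σ as the scaling parameter. It uses generic numerical integration; the choice of K values is illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

OMEGA = 0.0

def xi(K, sigma):
    """Tail integral of (Omega - x) below K under N(0, sigma)."""
    val, _ = quad(lambda x: (OMEGA - x) * norm.pdf(x, scale=sigma), -np.inf, K)
    return val

def s_minus(sigma):
    val, _ = quad(lambda x: (OMEGA - x) * norm.pdf(x, scale=sigma), -np.inf, OMEGA)
    return val

def V(K, sigma, dsigma=1e-3):
    """Finite-difference K-left-tail vega sensitivity."""
    ds = s_minus(sigma + dsigma) - s_minus(sigma - dsigma)
    dxi = xi(K, sigma + dsigma) - xi(K, sigma - dsigma)
    return dxi / ds

for K in [0.0, -1.0, -2.0, -3.0]:
    # V equals 1 at K = Omega by construction; lower K shows how the
    # tail sensitivity departs from 1 as K moves away from the center.
    print(K, round(V(K, sigma=1.0), 3))
```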


Figure 3 - The different curves of F_λ(K) and F_λ^K(K), showing the difference in sensitivity to changes in λ at different levels of K.

MATHEMATICAL EXPRESSION OF FRAGILITY

In essence, fragility is the sensitivity of a given risk measure to an error in the estimation of the (possibly one-sided) deviation parameter of a distribution, especially due to the fact that the risk measure involves parts of the distribution tails that are away from the portion used for estimation. The risk measure then assumes certain extrapolation rules that have first order consequences. These consequences are even more amplified when the risk measure applies to a variable that is derived from that used for estimation, when the relation between the two variables is strongly nonlinear, as is often the case.

Definition of Fragility: The Intrinsic Case


The local fragility of a random variable X_λ depending on parameter λ, at stress level K and semi-deviation level s^-(λ), with pdf f_λ, is its K-left-tailed semi-vega sensitivity V(X, f_λ, K, s^-). The finite-difference fragility of X_λ at stress level K and semi-deviation level s^-(λ) ± Δs, with pdf f_λ, is its K-left-tailed finite-difference semi-vega sensitivity V(X, f_λ, K, s^-, Δs). In this definition, the fragility relies in the unsaid assumptions made when extrapolating the distribution of X_λ from areas used to estimate the semi-absolute deviation s^-(λ), around Ω, to areas around K on which the risk measure depends.

Definition of Fragility: The Inherited Case


We here consider the particular case where a random variable Y = ϕ(X) depends on another source of risk X, itself subject to a parameter λ. Let us keep the above notations for X, while we denote by g_λ the pdf of Y, Ω_Y = ϕ(Ω) and u^-(λ) the left-semi-deviation of Y. Given a "strike" level L = ϕ(K), let us define, as in the case of X:

\xi'(L, u^-(\lambda)) = \int_{-\infty}^{L} (\Omega_Y - y)\, g_\lambda(y)\, dy

The inherited fragility of Y with respect to X at stress level L = ϕ(K) and left-semi-deviation level s^-(λ) of X is the partial derivative:

V_X(Y, g_\lambda, L, s^-(\lambda)) = \frac{\partial \xi'}{\partial s^-}(L, u^-(\lambda)) = \left( \int_{-\infty}^{L} (\Omega_Y - y)\, \frac{\partial g_\lambda}{\partial \lambda}(y)\, dy \right) \left( \frac{d s^-}{d \lambda} \right)^{-1}

Note that the stress level and the pdf are defined for the variable Y, but the parameter which is used for differentiation is the left-semi-absolute deviation of X, s^-(λ). Indeed, in this process, one first measures the distribution of X and its left-semi-absolute deviation, then the function ϕ is applied, using some mathematical model of Y with respect to X, and the risk measure ξ is estimated. If an error is made when measuring s^-(λ), its impact on the risk measure of Y is amplified by the ratio given by the inherited fragility.


Once again, one may use finite differences and define the finite-difference inherited fragility of Y with respect to X, by replacing, in the above equation, differentiation by finite differences between values λ⁺ and λ⁻, where s^-(λ⁺) = s^- + Δs and s^-(λ⁻) = s^- − Δs.

Implications of a Nonlinear Change of Variable on the Intrinsic Fragility


We here study the case of a random variable Y = ϕ(X), the pdf g_λ of which also depends on parameter λ, related to a variable X by the nonlinear function ϕ. We are now interested in comparing their intrinsic fragilities. We shall say, for instance, that Y is more fragile at the stress level L and left-semi-deviation level u^-(λ) than the random variable X, at stress level K and left-semi-deviation level s^-(λ), if the L-left-tailed semi-vega sensitivity of Y_λ is higher than the K-left-tailed semi-vega sensitivity of X_λ:

V(Y, g_\lambda, L, u^-) > V(X, f_\lambda, K, s^-)

One may use finite differences to compare the fragility of two random variables: V(Y, g_λ, L, u^-, Δu) > V(X, f_λ, K, s^-, Δs). In this case, finite variations must be comparable in size, namely Δu/u^- = Δs/s^-.

Let us assume, to start with, that ϕ is differentiable, strictly increasing and scaled so that Ω_Y = ϕ(Ω) = Ω. We also assume that, for any given x < Ω, \frac{\partial F_\lambda}{\partial \lambda}(x) > 0. In this case, as observed above, λ → s^-(λ) is also increasing.

Let us denote G_\lambda(y) = \Pr_{g_\lambda}(Y < y). We have:

G_\lambda(\phi(x)) = \Pr_{g_\lambda}(Y < \phi(x)) = \Pr_{f_\lambda}(X < x) = F_\lambda(x)

Hence, if ξ'(L, u^-) denotes the equivalent of ξ(K, s^-) with variable (Y, g_λ) instead of (X, f_λ), then we have:

\xi'(L, u^-(\lambda)) = \int_{-\infty}^{\Omega} G_\lambda^L(y)\, dy = \int_{-\infty}^{\Omega} F_\lambda^K(x)\, \frac{d\phi}{dx}(x)\, dx

Because ϕ is increasing and min(ϕ(x), ϕ(K)) = ϕ(min(x, K)). In particular

u^-(\lambda) = \xi'(\Omega, u^-(\lambda)) = \int_{-\infty}^{\Omega} F_\lambda(x)\, \frac{d\phi}{dx}(x)\, dx

The L-left-tail-vega sensitivity of Y is therefore:

V(Y, g_\lambda, L, u^-(\lambda)) = \frac{\int_{-\infty}^{\Omega} \frac{\partial F_\lambda^K}{\partial \lambda}(x)\, \frac{d\phi}{dx}(x)\, dx}{\int_{-\infty}^{\Omega} \frac{\partial F_\lambda}{\partial \lambda}(x)\, \frac{d\phi}{dx}(x)\, dx}

For finite variations:

V(Y, g_\lambda, L, u^-(\lambda), \Delta u) = \frac{1}{2 \Delta u} \int_{-\infty}^{\Omega} \Delta F_\lambda^K(x)\, \frac{d\phi}{dx}(x)\, dx

Where λ_{u+} and λ_{u−} are such that u(λ_{u+}) = u^- + Δu, u(λ_{u−}) = u^- − Δu and \Delta F_\lambda^K(x) = F_{\lambda_{u+}}^K(x) - F_{\lambda_{u-}}^K(x).

Next, Theorem 1 proves how a concave transformation ϕ(x) of a random variable x produces fragility.

THEOREM 1 (FRAGILITY TRANSFER THEOREM)

Let, with the above notations, ϕ: ℝ → ℝ be a twice differentiable function such that ϕ(Ω) = Ω and, for any x < Ω, \frac{d\phi}{dx}(x) > 0. The random variable Y = ϕ(X) is more fragile at level L = ϕ(K) and pdf g_λ than X at level K and pdf f_λ if, and only if, one has:

\int_{-\infty}^{\Omega} H_\lambda^K(x)\, \frac{d^2 \phi}{dx^2}(x)\, dx < 0

Where

H_\lambda^K(x) = \frac{\partial P_\lambda^K}{\partial \lambda}(x) \Big/ \frac{\partial P_\lambda^K}{\partial \lambda}(\Omega) \;-\; \frac{\partial P_\lambda}{\partial \lambda}(x) \Big/ \frac{\partial P_\lambda}{\partial \lambda}(\Omega)

and where P_\lambda(x) = \int_{-\infty}^{x} F_\lambda(t)\, dt is the price of the "put option" on X_λ with "strike" x, and P_\lambda^K(x) = \int_{-\infty}^{x} F_\lambda^K(t)\, dt is that of a "put option" with "strike" x and European "down-and-in" barrier at K.

H can be seen as a transfer function, expressed as the difference between two ratios. For a given level x of the random variable on the left hand side of Ω, the second ratio is the vega of a put struck at x normalized by that of a put "at the money" (i.e. struck at Ω), while the first one is the same ratio, but where puts struck at x and Ω are European "down-and-in" options with triggering barrier at the level K.

Proof

Let

I_X^\lambda = \int_{-\infty}^{\Omega} \frac{\partial F_\lambda}{\partial \lambda}(x)\, dx, \quad I_X^{K,\lambda} = \int_{-\infty}^{\Omega} \frac{\partial F_\lambda^K}{\partial \lambda}(x)\, dx, \quad I_Y^\lambda = \int_{-\infty}^{\Omega} \frac{\partial F_\lambda}{\partial \lambda}(x)\, \frac{d\phi}{dx}(x)\, dx, \quad I_Y^{L,\lambda} = \int_{-\infty}^{\Omega} \frac{\partial F_\lambda^K}{\partial \lambda}(x)\, \frac{d\phi}{dx}(x)\, dx.

One has V(X, f_\lambda, K, s^-(\lambda)) = I_X^{K,\lambda} / I_X^\lambda and V(Y, g_\lambda, L, u^-(\lambda)) = I_Y^{L,\lambda} / I_Y^\lambda, hence:

V(Y, g_\lambda, L, u^-(\lambda)) - V(X, f_\lambda, K, s^-(\lambda)) = \frac{I_Y^{L,\lambda}}{I_Y^\lambda} - \frac{I_X^{K,\lambda}}{I_X^\lambda} = \frac{I_Y^{L,\lambda} I_X^\lambda - I_X^{K,\lambda} I_Y^\lambda}{I_X^\lambda\, I_Y^\lambda}

Therefore, because the four integrals are positive, V(Y, g_\lambda, L, u^-(\lambda)) - V(X, f_\lambda, K, s^-(\lambda)) has the same sign as I_Y^{L,\lambda} I_X^\lambda - I_Y^\lambda I_X^{K,\lambda}. On the other hand, we have I_X^\lambda = \frac{\partial P_\lambda}{\partial \lambda}(\Omega), I_X^{K,\lambda} = \frac{\partial P_\lambda^K}{\partial \lambda}(\Omega), and

I_Y^\lambda = \int_{-\infty}^{\Omega} \frac{\partial F_\lambda}{\partial \lambda}(x)\, \frac{d\phi}{dx}(x)\, dx = \frac{\partial P_\lambda}{\partial \lambda}(\Omega)\, \frac{d\phi}{dx}(\Omega) - \int_{-\infty}^{\Omega} \frac{\partial P_\lambda}{\partial \lambda}(x)\, \frac{d^2\phi}{dx^2}(x)\, dx

I_Y^{L,\lambda} = \int_{-\infty}^{\Omega} \frac{\partial F_\lambda^K}{\partial \lambda}(x)\, \frac{d\phi}{dx}(x)\, dx = \frac{\partial P_\lambda^K}{\partial \lambda}(\Omega)\, \frac{d\phi}{dx}(\Omega) - \int_{-\infty}^{\Omega} \frac{\partial P_\lambda^K}{\partial \lambda}(x)\, \frac{d^2\phi}{dx^2}(x)\, dx

An elementary calculation yields:

\frac{I_Y^{L,\lambda}}{I_X^{K,\lambda}} - \frac{I_Y^\lambda}{I_X^\lambda} = -\left(\frac{\partial P_\lambda^K}{\partial \lambda}(\Omega)\right)^{-1} \int_{-\infty}^{\Omega} \frac{\partial P_\lambda^K}{\partial \lambda}(x)\, \frac{d^2\phi}{dx^2}(x)\, dx + \left(\frac{\partial P_\lambda}{\partial \lambda}(\Omega)\right)^{-1} \int_{-\infty}^{\Omega} \frac{\partial P_\lambda}{\partial \lambda}(x)\, \frac{d^2\phi}{dx^2}(x)\, dx = -\int_{-\infty}^{\Omega} H_\lambda^K(x)\, \frac{d^2\phi}{dx^2}(x)\, dx

Let us now examine the properties of the function H_\lambda^K(x). For x ≤ K, we have \frac{\partial P_\lambda^K}{\partial \lambda}(x) = \frac{\partial P_\lambda}{\partial \lambda}(x) > 0 (the positivity is a consequence of that of ∂F_λ/∂λ), therefore H_\lambda^K(x) has the same sign as \frac{\partial P_\lambda}{\partial \lambda}(\Omega) - \frac{\partial P_\lambda^K}{\partial \lambda}(\Omega). As this is a strict inequality, it extends to an interval on the right hand side of K, say (−∞, K'] with K < K' < Ω. But on the other hand:

\frac{\partial P_\lambda}{\partial \lambda}(\Omega) - \frac{\partial P_\lambda^K}{\partial \lambda}(\Omega) = \int_{K}^{\Omega} \frac{\partial F_\lambda}{\partial \lambda}(x)\, dx - (\Omega - K)\, \frac{\partial F_\lambda}{\partial \lambda}(K)

For K negative enough, \frac{\partial F_\lambda}{\partial \lambda}(K) is smaller than its average value over the interval [K, Ω], hence \frac{\partial P_\lambda}{\partial \lambda}(\Omega) - \frac{\partial P_\lambda^K}{\partial \lambda}(\Omega) > 0.

We have proven the following theorem.


THEOREM 2 (FRAGILITY EXACERBATION THEOREM)

With the above notations, there exists a threshold Θ_λ < Ω such that, if K ≤ Θ_λ, then H_\lambda^K(x) > 0 for x ∈ (−∞, κ_λ] with K < κ_λ < Ω. As a consequence, if the change of variable ϕ is concave on (−∞, κ_λ] and linear on [κ_λ, Ω], then Y is more fragile at L = ϕ(K) than X at K.

One can prove that, for a monomodal distribution, Θ_λ < κ_λ < Ω (see discussion below), so whatever the stress level K below the threshold Θ_λ, it suffices that the change of variable ϕ be concave on the interval (−∞, Θ_λ] and linear on [Θ_λ, Ω] for Y to become more fragile at L than X at K. In practice, as long as the change of variable is concave around the stress level K and has limited convexity/concavity away from K, the fragility of Y is greater than that of X.

Figure 4 shows the shape of H_\lambda^K(x) in the case of a Gaussian distribution where λ is a simple scaling parameter (λ is the standard deviation σ) and Ω = 0. We represented K = −2λ, while in this Gaussian case Θ_λ = −1.585λ.

Figure 4 - The transfer function H for different portions of the distribution: its sign flips in the region slightly below Ω.

DISCUSSION

Monomodal case

We say that the family of distributions (f_λ) is left-monomodal if there exists κ_λ < Ω such that \frac{\partial f_\lambda}{\partial \lambda} \ge 0 on (−∞, κ_λ] and \frac{\partial f_\lambda}{\partial \lambda} \le 0 on [κ_λ, Ω]. In this case \frac{\partial P_\lambda}{\partial \lambda} is a convex function on the left half-line (−∞, κ_λ], then concave after the inflexion point κ_λ. For K ≤ κ_λ, the function \frac{\partial P_\lambda^K}{\partial \lambda} coincides with \frac{\partial P_\lambda}{\partial \lambda} on (−∞, K], then is a linear extension, following the tangent to the graph of \frac{\partial P_\lambda}{\partial \lambda} at K (see graph below). The value of \frac{\partial P_\lambda^K}{\partial \lambda}(\Omega) corresponds to the intersection point of this tangent with the vertical axis. It increases with K, from 0 when K → −∞ to a value above \frac{\partial P_\lambda}{\partial \lambda}(\Omega) when K = κ_λ. The threshold Θ_λ corresponds to the unique value of K such that \frac{\partial P_\lambda^K}{\partial \lambda}(\Omega) = \frac{\partial P_\lambda}{\partial \lambda}(\Omega). When K < Θ_λ, then G_\lambda(x) = \frac{\partial P_\lambda}{\partial \lambda}(x) \big/ \frac{\partial P_\lambda}{\partial \lambda}(\Omega) and G_\lambda^K(x) = \frac{\partial P_\lambda^K}{\partial \lambda}(x) \big/ \frac{\partial P_\lambda^K}{\partial \lambda}(\Omega) are functions such that G_\lambda(\Omega) = G_\lambda^K(\Omega) = 1, which are proportional for x ≤ K, the latter being linear on [K, Ω]. On the other hand, if K < Θ_λ, then \frac{\partial P_\lambda^K}{\partial \lambda}(\Omega) < \frac{\partial P_\lambda}{\partial \lambda}(\Omega) and G_\lambda^K(K) < G_\lambda(K), which implies that G_\lambda^K(x) < G_\lambda(x) for x ≤ K. An elementary convexity analysis shows that, in this case, the equation G_\lambda^K(x) = G_\lambda(x) has a unique solution κ_λ with Θ_λ < κ_λ < Ω. The transfer function H_\lambda^K(x) is positive for x < κ_λ, in particular when x ≤ Θ_λ, and negative for κ_λ < x < Ω.
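A rough numerical sketch of the transfer function H_λ^K for the Gaussian scaling case of Figure 4 (Ω = 0, K = −2, σ = 1, taken from the text's example; the numerical settings are my own assumptions); it exhibits the sign flip below Ω discussed above.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

OMEGA, K = 0.0, -2.0

def P(x, sigma, barrier=None):
    """P_sigma(x) = integral of F_sigma below x; with 'barrier' K it uses
    F^K(t) = min(F(t), F(K)), the down-and-in put of the paper."""
    F = lambda t: norm.cdf(min(t, barrier) / sigma) if barrier is not None \
        else norm.cdf(t / sigma)
    val, _ = quad(F, -40 * sigma, x, limit=200)
    return val

def dP_dsigma(x, sigma, barrier=None, h=1e-4):
    return (P(x, sigma + h, barrier) - P(x, sigma - h, barrier)) / (2 * h)

def H(x, sigma=1.0):
    return (dP_dsigma(x, sigma, barrier=K) / dP_dsigma(OMEGA, sigma, barrier=K)
            - dP_dsigma(x, sigma) / dP_dsigma(OMEGA, sigma))

for x in [-3.0, -2.0, -1.0, -0.5, -0.1]:
    print(x, round(H(x), 4))   # positive well below Omega, negative just below it
```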


Figure 5 - The distribution G_λ and the various derivatives of the unconditional shortfalls: the regions where H_λ^K > 0 and H_λ^K < 0, and the tangent construction giving ∂P_λ^K/∂λ.

Scaling Parameter
We assume here that $\lambda$ is a scaling parameter, i.e. $X_\lambda = \Omega + \lambda (X_1 - \Omega)$. In this case, as we saw above, we have

$$f_\lambda(x) = \frac{1}{\lambda}\, f_1\!\left(\Omega + \frac{x - \Omega}{\lambda}\right), \qquad F_\lambda(x) = F_1\!\left(\Omega + \frac{x - \Omega}{\lambda}\right), \qquad P_\lambda(x) = \lambda\, P_1\!\left(\Omega + \frac{x - \Omega}{\lambda}\right)$$

and $s^-(\lambda) = \lambda\, s^-(1)$. Hence

$$\xi(K, s^-(\lambda)) = (\Omega - K)\, F_1\!\left(\Omega + \frac{K - \Omega}{\lambda}\right) + \lambda\, P_1\!\left(\Omega + \frac{K - \Omega}{\lambda}\right)$$

$$V(X, f_\lambda, K, s^-(\lambda)) = \frac{\partial \xi}{\partial s^-}(K, s^-(\lambda)) = \frac{1}{s^-(\lambda)} \Big( P_\lambda(K) + (\Omega - K)\, F_\lambda(K) + (\Omega - K)^2 f_\lambda(K) \Big)$$

When we apply a nonlinear transformation $\varphi$, the action of the parameter $\lambda$ is no longer a scaling: when small negative values of X are multiplied by a scalar $\lambda$, so are large negative values of X. The scaling applies to small negative values of the transformed variable Y with a coefficient $\frac{d\varphi}{dx}(0)$, but large negative values are subject to a different coefficient $\frac{d\varphi}{dx}(K)$, which can potentially be very different.

Fragility Drift
Fragility is defined as the sensitivity, i.e. the first partial derivative, of the tail estimate $\xi$ with respect to the left semi-deviation $s^-$. Let us now define the fragility drift:

$$V'_K(X, f_\lambda, K, s^-) = \frac{\partial^2 \xi}{\partial K\, \partial s^-}(K, s^-)$$

In practice, fragility always occurs as the result of fragility drift: indeed, by definition, we know that $\xi(\Omega, s^-) = s^-$, hence $V(X, f_\lambda, \Omega, s^-) = 1$. The fragility drift measures the speed at which fragility departs from its original value 1 when K departs from the center $\Omega$.
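A finite-difference sketch of V and of the fragility drift for a Gaussian scale family with $\Omega = 0$ is added here for illustration; the choice of distribution and of step sizes is ours.

# Finite-difference sketch of the fragility V and the fragility drift V'_K
# for a Gaussian with scale sigma, using
#   xi(K, s-) = integral_{-inf}^{K} (Omega - x) f_sigma(x) dx  and  s- = xi(Omega).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

OMEGA = 0.0

def xi(K, sigma):
    val, _ = quad(lambda x: (OMEGA - x) * norm.pdf(x, scale=sigma), -np.inf, K)
    return val

def V(K, sigma, h=1e-3):
    """Fragility: d xi / d s-, obtained by perturbing sigma (s- moves with sigma)."""
    dxi = xi(K, sigma + h) - xi(K, sigma - h)
    ds = xi(OMEGA, sigma + h) - xi(OMEGA, sigma - h)
    return dxi / ds

def V_drift(K, sigma, dK=1e-2):
    """Fragility drift: d V / d K, i.e. d^2 xi / (dK d s-)."""
    return (V(K + dK, sigma) - V(K - dK, sigma)) / (2 * dK)

sigma = 1.0
print(V(OMEGA, sigma))            # equals 1 by construction: xi(Omega, s-) = s-
for K in (-0.5, -1.0, -2.0, -3.0):
    print(K, V(K, sigma), V_drift(K, sigma))
# V departs from 1 as K moves away from Omega; the drift measures the speed
# of that departure.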

Second-order Fragility
The second-order fragility is the second-order derivative of the tail estimate $\xi$ with respect to the left semi-absolute deviation $s^-$:

$$V'_{s^-}(X, f_\lambda, K, s^-) = \frac{\partial^2 \xi}{(\partial s^-)^2}(K, s^-)$$

As we shall see later, the second-order fragility drives the bias in the estimation of stress tests when the value of $s^-$ is subject to uncertainty, through Jensen's inequality.

Definitions of Robustness and Antifragility


Antifragility is not the simple opposite of fragility, as we saw in Table 1. Measuring antifragility, on the one hand, consists of the flipside of fragility on the right-hand side, but on the other hand requires a control on the robustness of the probability distribution on the left-hand side. From that aspect, unlike fragility, antifragility cannot be summarized in one single figure but necessitates at least two of them. When a random variable depends on another source of randomness, $Y_\lambda = \varphi(X_\lambda)$, we shall study the antifragility of Y with respect to that of X and to the properties of the function $\varphi$.
DEFINITION OF ROBUSTNESS

Let $(X_\lambda)$ be a one-parameter family of random variables with pdf $f_\lambda$. Robustness is an upper control on the fragility of X, which resides on the left-hand side of the distribution. We say that $f_\lambda$ is b-robust beyond stress level $K < \Omega$ if $V(X_\lambda, f_\lambda, K', s^-(\lambda)) \le b$ for any $K' \le K$. In other words, the robustness of $f_\lambda$ on the half-line $(-\infty, K]$ is

$$R_{(-\infty, K]}(X_\lambda, f_\lambda, K, s^-(\lambda)) = \max_{K' \le K} V(X_\lambda, f_\lambda, K', s^-(\lambda)),$$

so that b-robustness simply means $R_{(-\infty, K]}(X_\lambda, f_\lambda, K, s^-(\lambda)) \le b$.

We also define b-robustness over a given interval $[K_1, K_2]$ by the same inequality being valid for any $K' \in [K_1, K_2]$. In this case we use

$$R_{[K_1, K_2]}(X_\lambda, f_\lambda, K, s^-(\lambda)) = \max_{K_1 \le K' \le K_2} V(X_\lambda, f_\lambda, K', s^-(\lambda)).$$
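As a sketch (added here), the interval robustness can be probed by taking the maximum of the finite-difference fragility over a grid, reusing V(K, sigma) from the sketch in the Fragility Drift section above; the bound b and the interval are arbitrary.

# b-robustness check over [K1, K2]: R is the maximum of V(K') on the interval.
import numpy as np

def robustness(K1, K2, sigma, n=200):
    Ks = np.linspace(K1, K2, n)
    return max(V(K, sigma) for K in Ks)   # V() as defined in the earlier sketch

b = 1.5
R = robustness(-3.0, -0.5, sigma=1.0)
print(R, "b-robust" if R <= b else "not b-robust")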

Note that the lower R, the tighter the control and the more robust the distribution $f_\lambda$. Once again, the definition of b-robustness can be transposed using finite differences $V(X_\lambda, f_\lambda, K', s^-(\lambda), \Delta s)$. In practical situations, setting a material upper bound b to the fragility is particularly important: one needs to be able to come up with actual estimates of the impact of the error on the estimate of the left semi-deviation. However, when dealing with certain classes of models, such as Gaussian, exponential or stable distributions, we may be led to consider asymptotic definitions of robustness, related to those classes. For instance, for a given decay exponent $a > 0$, assuming that $f_\lambda(x) = O(e^{ax})$ when $x \to -\infty$, the a-exponential asymptotic robustness of X below the level K is:

$$R_{\exp}(X_\lambda, f_\lambda, K, s^-(\lambda), a) = \max_{K' \le K} \Big( e^{a(\Omega - K')}\, V(X_\lambda, f_\lambda, K', s^-(\lambda)) \Big)$$

If one of the two quantities $e^{a(\Omega - K')} f_\lambda(K')$ or $e^{a(\Omega - K')} V(X_\lambda, f_\lambda, K', s^-(\lambda))$ is not bounded from above when $K' \to -\infty$, then $R_{\exp} = +\infty$ and X is considered as not a-exponentially robust.

Similarly, for a given power $\alpha > 0$, and assuming that $f_\lambda(x) = O(|x|^{-\alpha})$ when $x \to -\infty$, the $\alpha$-power asymptotic robustness of X below the level K is:

$$R_{\text{pow}}(X_\lambda, f_\lambda, K, s^-(\lambda), \alpha) = \max_{K' \le K} \Big( (\Omega - K')^{\alpha - 2}\, V(X_\lambda, f_\lambda, K', s^-(\lambda)) \Big)$$


If one of the two quantities $(\Omega - K')^{\alpha} f_\lambda(K')$ or $(\Omega - K')^{\alpha - 2}\, V(X_\lambda, f_\lambda, K', s^-(\lambda))$ is not bounded from above when $K' \to -\infty$, then $R_{\text{pow}} = +\infty$ and X is considered as not $\alpha$-power robust. Note the exponent $\alpha - 2$ used with the fragility, for homogeneity reasons, e.g. in the case of stable distributions.

When a random variable $Y = \varphi(X)$ depends on another source of risk X:

Definition 2a, Left-Robustness (monomodal distribution). A payoff $y = \varphi(x)$ is said (a, b)-robust below $L = \varphi(K)$ for a source of randomness X with pdf $f_\lambda$ assumed monomodal if, letting $g_\lambda$ be the pdf of $Y = \varphi(X)$, one has, for any $K' \le K$ and $L' = \varphi(K')$:

$$V_X(Y, g_\lambda, L', s^-(\lambda)) \le a\, V(X, f_\lambda, K', s^-(\lambda)) + b \qquad (4)$$

The quantity b is of an order deemed of negligible utility (subjectively), that is, it does not exceed some tolerance level in relation with the context, while a is a scaling parameter between variables X and Y. Note that robustness is in effect impervious to changes of probability distributions. Also note that this measure of robustness ignores first-order variations since, owing to their higher frequency, these are detected (and remedied) very early on.

Examples of Robustness (Barbells):
a. trial and error with bounded error and open payoff;
b. for a "barbell portfolio" with allocation to numeraire securities up to 80% of the portfolio, no perturbation below K, set at 0.8 of valuation, will represent any difference in result, i.e. q = 0. The same for an insured house (assuming the risk of the insurance company is not a source of variation): no perturbation of the value below K, equal to minus the insurance deductible, will result in significant changes;
c. a bet of amount B (limited liability) is robust, as it does not have any sensitivity to perturbations below 0.
DEFINITION OF ANTIFRAGILITY

The second condition of antifragility regards the right-hand side of the distribution. Let us define the right semi-deviation of X:

$$s^+(\lambda) = \int_{\Omega}^{+\infty} (x - \Omega)\, f_\lambda(x)\, dx$$

And, for $H > L > \Omega$:

$$\xi^+(L, H, s^+(\lambda)) = \int_{L}^{H} (x - \Omega)\, f_\lambda(x)\, dx$$

$$W(X, f_\lambda, L, H, s^+) = \frac{\partial \xi^+(L, H, s^+)}{\partial s^+} = \left( \int_{L}^{H} (x - \Omega)\, \frac{\partial f_\lambda}{\partial \lambda}(x)\, dx \right) \Bigg/ \left( \int_{\Omega}^{+\infty} (x - \Omega)\, \frac{\partial f_\lambda}{\partial \lambda}(x)\, dx \right)$$

When $Y = \varphi(X)$ is a variable depending on a source of noise X, we define:

$$W_X(Y, g_\lambda, \varphi(L), \varphi(H), s^+) = \left( \int_{\varphi(L)}^{\varphi(H)} \big(y - \varphi(\Omega)\big)\, \frac{\partial g_\lambda}{\partial \lambda}(y)\, dy \right) \Bigg/ \left( \int_{\Omega}^{+\infty} (x - \Omega)\, \frac{\partial f_\lambda}{\partial \lambda}(x)\, dx \right)$$

Definition 2b, Antifragility (monomodal distribution). A payoff $y = \varphi(x)$ is locally antifragile over the range [L, H] if:
1. it is b-robust below $\Omega$ for some $b > 0$;
2. $W_X(Y, g_\lambda, \varphi(L), \varphi(H), s^+(\lambda)) \ge a\, W(X, f_\lambda, L, H, s^+(\lambda))$, where $a = \dfrac{u^+(\lambda)}{s^+(\lambda)}$.

The scaling constant a provides homogeneity in the case where the relation between X and Y is linear. In particular, nonlinearity in the relation between X and Y impacts robustness. The second condition can be replaced with finite differences $\Delta u$ and $\Delta s$, as long as $\Delta u / u = \Delta s / s$.
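A finite-difference sketch of W is added here for illustration, for a Gaussian scale family with $\Omega = 0$; the interval [L, H] and the distribution are arbitrary choices.

# Right-tail ratio W by finite differences in the scale parameter (Gaussian, Omega = 0).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

OMEGA = 0.0

def xi_plus(L, H, sigma):
    val, _ = quad(lambda x: (x - OMEGA) * norm.pdf(x, scale=sigma), L, H)
    return val

def W(L, H, sigma, h=1e-3):
    num = xi_plus(L, H, sigma + h) - xi_plus(L, H, sigma - h)
    den = xi_plus(OMEGA, np.inf, sigma + h) - xi_plus(OMEGA, np.inf, sigma - h)
    return num / den

print(W(1.0, 3.0, 1.0))   # share of the gain in s+ captured between L and H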


REMARKS

Fragility is K-specific. We are only concerned with adverse events below a certain pre-specified level, the breaking point. Exposure A can be more fragile than exposure B for K = 0, and much less fragile if K is, say, 4 mean deviations below 0. We may need to use finite $\Delta s^-$ to avoid situations, as we will see, of vega-neutrality coupled with a short left tail.

Effect of using the wrong distribution f: Comparing $V(X, f_\lambda, K, s^-, \Delta s)$ and the alternative $V(X, f^*, K, s^*, \Delta s)$, where $f^*$ is the true distribution, the measure of fragility provides an acceptable indication of the sensitivity of a given outcome, such as a risk measure, to model error, provided no paradoxical effects perturb the situation. Such paradoxical effects are, for instance, a change in the direction in which certain distribution percentiles react to model parameters, like $s^-$. It is indeed possible that nonlinearity appears between the core part of the distribution and the tails such that, when $s^-$ increases, the left tail starts fattening, giving a large measured fragility, then steps back, implying that the real fragility is lower than the measured one. The opposite may also happen, implying a dangerous under-estimate of the fragility. These nonlinear effects can stay under control provided one makes some regularity assumptions on the actual distribution, as well as on the measured one. For instance, paradoxical effects are typically avoided under at least one of the following three hypotheses:
a. The class of distributions in which both f and f* are picked are all monomodal, with monotonous dependence of percentiles with respect to one another.
b. The difference between percentiles of f and f* has constant sign (i.e. f* is either always wider or always narrower than f at any given percentile).
c. For any strike level K (in the range that matters), the fragility measure V monotonously depends on $s^-$ on the whole range where the true value $s^*$ can be expected. This is in particular the case when the partial derivatives $\partial^k V / \partial s^k$ all have the same sign at the measured $s^-$ up to some order n, at which the partial derivative has that same constant sign over the whole range on which the true value $s^*$ can be expected. This condition can be replaced by an assumption on finite differences approximating the higher-order partial derivatives, where n is large enough so that the interval $[s^- \pm n \Delta s]$ covers the range of possible values of $s^*$. Indeed, in this case, the finite-difference estimate of fragility uses evaluations of $\xi$ at points spanning this interval.

Unconditionality of the shortfall measure $\xi$: Many, when presenting shortfall, deal with the conditional shortfall

$$\int_{-\infty}^{K} x\, f(x)\, dx \Bigg/ \int_{-\infty}^{K} f(x)\, dx\,;$$

while such a measure might be useful in some circumstances, its sensitivity is not indicative of fragility in the sense used in this discussion. The unconditional tail expectation $\xi = \int_{-\infty}^{K} x\, f(x)\, dx$ is more indicative of exposure to fragility. It is also preferred to the raw probability of falling below K, $\int_{-\infty}^{K} f(x)\, dx$, as the latter does not include the consequences. For instance, two such measures $\int_{-\infty}^{K} f(x)\, dx$ and $\int_{-\infty}^{K} g(x)\, dx$ may be equal over broad values of K; but the expectation $\int_{-\infty}^{K} x\, f(x)\, dx$ can be much more consequential than $\int_{-\infty}^{K} x\, g(x)\, dx$ as the cost of the break can be more severe and we are interested in its vega equivalent.
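A small sketch of the point is added here: two distributions calibrated to the same probability of falling below K can carry different unconditional tail expectations. The Gaussian versus Student-t pairing and the 1% level are our choices.

# Same probability of falling below K, different unconditional tail expectations.
import numpy as np
from scipy.stats import norm, t
from scipy.integrate import quad

K = norm.ppf(0.01)                     # 1% quantile of the Gaussian, about -2.33
c = K / t.ppf(0.01, df=3)              # scale the t(3) so that P(X < K) is also 1%

gauss = lambda x: norm.pdf(x)
student = lambda x: t.pdf(x / c, df=3) / c

def tail_expectation(pdf):
    val, _ = quad(lambda x: x * pdf(x), -np.inf, K)
    return val

print(quad(gauss, -np.inf, K)[0], quad(student, -np.inf, K)[0])   # both ~ 0.01
print(tail_expectation(gauss), tail_expectation(student))         # equal probabilities,
                                                                  # different consequences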

Applications to Model Error


In the cases where Y depends on X, among other variables, often X is treated as non-stochastic, and the underestimation of the volatility of X maps immediately into the underestimation of the left tail of Y under two conditions:
a. X is stochastic and its stochastic character is ignored (as if it had zero variance or mean deviation);
b. Y is concave with respect to X in the negative part of the distribution, below $\Omega$.

"Convexity Bias" or Jensen's Inequality Effect: Further, missing the stochasticity under the two conditions a) and b), in the event of the concavity applying above $\Omega$ as well, leads to a negative convexity bias from the lowering effect on the expectation of the dependent variable Y.
CASE 1- APPLICATION TO DEFICITS

Example: A government estimates unemployment for the next three years as averaging 9%; it uses its econometric models to issue a forecast balance B of a 200 billion deficit in the local currency. But it misses (like almost everything in economics) that unemployment is a stochastic variable. Employment over three-year periods has fluctuated by 1% on average. We can calculate the effect of the error with the following:
Unemployment at 8%, Balance B(8%) = -75 bn (improvement of 125 bn)
Unemployment at 9%, Balance B(9%) = -200 bn
Unemployment at 10%, Balance B(10%) = -550 bn (worsening of 350 bn)

The convexity bias from underestimation of the deficit is -112.5 bn, since

$$\frac{B(8\%) + B(10\%)}{2} = -312.5 \text{ bn}$$

Further, look at the probability distribution caused by the missed variable (assuming, to simplify, that the deficit is Gaussian with a mean deviation of 1%).

Figure 6 - CONVEXITY EFFECTS ALLOW THE DETECTION OF BOTH MODEL BIAS AND FRAGILITY. Illustration of the example; histogram from a Monte Carlo simulation of the government deficit as a left-tailed random variable, simply as a result of randomizing unemployment, of which it is a convex function. The method of point estimate would assume a Dirac stick at -200, thus underestimating both the expected deficit (-312) and the skewness (i.e., fragility) of it.
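The figures in the example can be reproduced with a short Monte Carlo sketch, added here; the quadratic fit through the three scenario points and the Gaussian shock with a 1% typical deviation are our modelling assumptions.

# Randomize unemployment around 9% and push it through the concave balance
# (convex deficit) function fitted to the three scenario points in the text.
import numpy as np

rng = np.random.default_rng(0)

def balance(u):
    # Quadratic interpolation of B(8%) = -75, B(9%) = -200, B(10%) = -550 (in bn)
    d = u - 9.0
    return -200.0 - 237.5 * d - 112.5 * d**2

shocks = rng.normal(0.0, 1.0, size=1_000_000)     # unemployment fluctuation, in %
sims = balance(9.0 + shocks)

print("point estimate:", balance(9.0))            # -200 bn
print("expected deficit:", sims.mean())           # about -312 bn
print("skew of outcomes:", ((sims - sims.mean())**3).mean() / sims.std()**3)
# The concavity both worsens the expectation and fattens the left tail.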

Adding Model Error and Metadistributions: Model error should be integrated in the distribution as a stochasticization of parameters. f and g should subsume the distribution of all possible factors affecting the final outcome (including the metadistribution of each). The so-called "perturbation" is not necessarily a change in the parameter so much as it is a means to verify whether f and g capture the full shape of the final probability distribution. Any situation with a bounded payoff function that organically truncates the left tail at K will be impervious to all perturbations affecting the probability distribution below K. For K = 0, the measure equates to mean negative semi-deviation (more potent than negative semi-variance or negative semi-standard deviation often used in financial analyses).

MODEL ERROR AND SEMI-BIAS AS NONLINEARITY FROM MISSED STOCHASTICITY OF VARIABLES

Model error often comes from missing the existence of a random variable that is significant in determining the outcome (say, option pricing without credit risk). We cannot detect it using the heuristic presented in this paper, but, as mentioned earlier, the error goes in the opposite direction as models tend to be richer, not poorer, from overfitting. But we can detect the model error from missing the stochasticity of a variable or underestimating its stochastic character (say, option pricing with non-stochastic interest rates, or ignoring that the volatility can vary).

Missing Effects: The study of model error is not to question whether a model is precise or not, whether or not it tracks reality; it is to ascertain the first- and second-order effects from missing the variable, ensuring that the errors from the model don't have missing higher-order terms that cause severe unexpected (and unseen) biases in one direction because of convexity or concavity, in other words, whether or not the model error causes a change in z.


Model Bias, Second Order Effects, and Fragility


Having the right model (which is a very generous assumption), but being uncertain about the parameters will invariably lead to an increase in model error in the presence of convexity and nonlinearities. As a generalization of the deficit/employment example used in the previous section, say we are using a simple function:

$$f(x \mid \bar{\alpha}) \qquad (5)$$

where $\bar{\alpha}$ is supposed to be the average expected rate, and $\phi(\alpha)$ the distribution of $\alpha$ over its domain:

$$\bar{\alpha} = \int \alpha\, \phi(\alpha)\, d\alpha \qquad (6)$$

The mere fact that $\alpha$ is uncertain (since it is estimated) might lead to a bias if we perturb from outside (of the integral), i.e. stochasticize the parameter deemed fixed. Accordingly, the convexity bias is easily measured as the difference between a) f integrated across values of potential $\alpha$ and b) f estimated for a single value of $\alpha$ deemed to be its average. The convexity bias A becomes:

$$A \equiv \int \!\! \int f(x \mid \alpha)\, \phi(\alpha)\, d\alpha\, dx \;-\; \int f(x \mid \bar{\alpha})\, dx \qquad (7)$$

And B, the missed fragility, is assessed by comparing the two integrals below K, in order to capture the effect on the left tail:

$$B(K) \equiv \int_{-\infty}^{K} \!\! \int f(x \mid \alpha)\, \phi(\alpha)\, d\alpha\, dx \;-\; \int_{-\infty}^{K} f(x \mid \bar{\alpha})\, dx \qquad (8)$$

which can be approximated by an interpolated estimate obtained with two values of $\alpha$ separated from a midpoint by its mean deviation $\Delta\alpha$:

$$B(K) \approx \int_{-\infty}^{K} \tfrac{1}{2}\big( f(x \mid \bar{\alpha} + \Delta\alpha) + f(x \mid \bar{\alpha} - \Delta\alpha) \big)\, dx \;-\; \int_{-\infty}^{K} f(x \mid \bar{\alpha})\, dx \qquad (8')$$

We can probe B by point estimates of f at a level of $X \le K$:

$$B'(X) = \tfrac{1}{2}\big( f(X \mid \bar{\alpha} + \Delta\alpha) + f(X \mid \bar{\alpha} - \Delta\alpha) \big) - f(X \mid \bar{\alpha}) \qquad (9)$$

so that

$$B(K) = \int_{-\infty}^{K} B'(x)\, dx,$$

which leads us to the fragility heuristic. In particular, if we assume that $B'(X)$ has a constant sign for $X \le K$, then $B(K)$ has the same sign.
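A minimal sketch of Equation (8') is added for illustration, taking $f(x \mid \alpha)$ to be, say, a Gaussian density with scale $\alpha$; the values of $\bar{\alpha}$, $\Delta\alpha$ and K are arbitrary.

# Missed fragility B(K): compare the left tail of f(x|a) averaged over
# a = abar +/- da with the left tail of f(x|abar).
from scipy.stats import norm

abar, da = 1.0, 0.25
K = -3.0

def left_tail(scale, K):
    return norm.cdf(K, scale=scale)

B_K = 0.5 * (left_tail(abar + da, K) + left_tail(abar - da, K)) - left_tail(abar, K)
print(B_K)   # positive: ignoring the uncertainty in a understates the tail below K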

The Fragility/Model Error Detection Heuristic (detecting A and B when cogent)


Example 1 (Detecting Tail Risk Not Shown By Stress Test, B). The famous firm Dexia went into financial distress a few days after passing a stress test with flying colors. If a bank issues a so-called stress test (something that has not proven very satisfactory), off a parameter (say the stock market) at -15%, we ask it to recompute at -10% and -20%. Should the exposure show negative asymmetry (worse at -20% than it improves at -10%), we deem that its risk increases in the tails. There are certainly hidden tail exposures and a definite higher probability of blowup, in addition to exposure to model error. Note that it is somewhat more effective to use our measure of shortfall in the definition above, but the method here is effective enough to show hidden risks, particularly at wider increases (try -25% and -30% and see if the exposure shows an increase). Most effective would be to use power-law distributions and perturb the tail exponent to see the asymmetry.

Example 2 (Detecting Tail Risk in Overoptimized System, B). Raise airport traffic 10%, lower it 10%, take the average expected traveling time from each, and check the asymmetry for nonlinearity. If the asymmetry is significant, then declare the system as overoptimized. (Both A and B are thus shown.) The same procedure uncovers both fragility and the consequence of model error (the potential harm from having the wrong probability distribution, a thin-tailed rather than a fat-tailed one).

For traders (see Gigerenzer's discussions in Gigerenzer and Brighton (2009), Gigerenzer and Goldstein (1996)), simple heuristic tools detecting the magnitude of second-order effects can be more effective than more complicated and harder-to-calibrate methods, particularly under multi-dimensionality. See also the intuition of fast and frugal in Derman and Wilmott (2009), Haug and Taleb (2011).

The Heuristic applied to a Model:
1. First Step (first order). Take a valuation. Measure the sensitivity to all parameters p determining V over finite ranges $\Delta p$. If materially significant, check if the stochasticity of the parameter is taken into account by the risk assessment. If not, then stop and declare the risk as grossly mismeasured (no need for further risk assessment).
2. Second Step (second order). For all parameters p, compute the ratio of first to second order effects at the initial range $\Delta p$ = estimated mean deviation:
$$H(\Delta p) \equiv \frac{\mu'(\Delta p)}{\mu}, \quad \text{where} \quad \mu'(\Delta p) \equiv \tfrac{1}{2}\Big( f\big(p + \tfrac{1}{2}\Delta p\big) + f\big(p - \tfrac{1}{2}\Delta p\big) \Big) \ \text{and} \ \mu \equiv f(p)$$

(a small numerical sketch of this ratio follows the list below).
3. Third Step. Note the parameters for which H is significantly > or < 1.
4. Fourth Step. Keep widening $\Delta p$ to verify the stability of the second order effects.
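The following is a minimal sketch, added here, of the second and fourth steps; V_model, p and dp are placeholders, not part of the original text.

# Ratio H of the average of two perturbed valuations to the unperturbed one.
def H_ratio(V_model, p, dp):
    mu_prime = 0.5 * (V_model(p + 0.5 * dp) + V_model(p - 0.5 * dp))
    return mu_prime / V_model(p)

# Toy valuation, convex in the parameter p (e.g. a volatility-like input):
V_model = lambda p: -1000.0 * p**2
for dp in (0.05, 0.1, 0.2):            # fourth step: widen dp, check stability
    print(dp, H_ratio(V_model, p=0.2, dp=dp))   # away from 1: second-order effects matter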

The Heuristic applied to a stress test: In place of the standard, one-point estimate stress test S1, we issue a "triple", S1, S2, S3, where S2 and S3 are S1 $\pm \Delta p$. Acceleration of losses is indicative of fragility.
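A sketch of the triple stress test is added here; the loss function and the scenario levels are illustrative only.

# Run the exposure at S1 and at S1 +/- dp, and flag acceleration of losses.
def triple_stress(loss, S1, dp):
    l_mid = loss(S1)
    l_milder, l_harsher = loss(S1 - dp), loss(S1 + dp)
    acceleration = (l_milder + l_harsher) / 2 - l_mid   # negative: losses accelerate
    return l_milder, l_mid, l_harsher, acceleration

loss = lambda s: -abs(s)**1.8 * 1e6      # toy convex loss in the size s of the drop
print(triple_stress(loss, S1=0.15, dp=0.05))   # e.g. -10%, -15%, -20% scenarios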

Remarks:
a. Simple heuristics have a robustness (in spite of a possible bias) compared to optimized and calibrated measures. Ironically, it is from the multiplication of convexity biases and the potential errors from missing them that calibrated models that work in-sample underperform heuristics out of sample (Gigerenzer and Brighton, 2009).
b. Heuristics allow the detection of the effect of the use of the wrong probability distribution without changing the probability distribution (just from the dependence on parameters).
c. The heuristic improves on and detects flaws in all other commonly used measures of risk, such as CVaR, expected shortfall and stress testing, methods that have been proven to be completely ineffective (Taleb, 2009).
d. The heuristic does not require parameterization beyond varying $\Delta p$.

Further Applications
In parallel works, applying the "simple heuristic" allows us to detect the following hidden short-options problems by merely perturbing a certain parameter p:
a. Size and negative stochastic economies of scale.
   i. size and squeezability (nonlinearities of squeezes in costs per unit).
b. Specialization (Ricardo) and variants of globalization.
   i. missing stochasticity of variables (price of wine).
   ii. specialization and nature.
c. Portfolio optimization (Markowitz).
d. Debt.
e. Budget deficits: convexity effects explain why uncertainty lengthens, doesn't shorten, expected deficits.
f. Iatrogenics (medical), or how some treatments are concave to benefits, convex to errors.
g. Disturbing natural systems.

11.1 Fragility, As Linked to Nonlinearity

[Figure panels: losses as a function of the mean deviation of the underlying variable for the linear and nonlinear exposures tabulated below.]

Mean Dev   10 L          5 L           2.5 L        L            Nonlinear
1          -100,000      -50,000       -25,000      -10,000      -1,000
2          -200,000      -100,000      -50,000      -20,000      -8,000
5          -500,000      -250,000      -125,000     -50,000      -125,000
10         -1,000,000    -500,000      -250,000     -100,000     -1,000,000
15         -1,500,000    -750,000      -375,000     -150,000     -3,375,000
20         -2,000,000    -1,000,000    -500,000     -200,000     -8,000,000
25         -2,500,000    -1,250,000    -625,000     -250,000     -15,625,000
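The table can be regenerated with a few lines; this is a sketch added here, with the cubic form -1000 * MD^3 of the nonlinear exposure read off the values above.

# Linear exposures scale proportionally with the mean deviation; the "Nonlinear"
# exposure is cubic in it, which is what makes it deteriorate so much faster.
mean_devs = [1, 2, 5, 10, 15, 20, 25]
L = lambda md: -10_000 * md
rows = [(md, 10 * L(md), 5 * L(md), 2.5 * L(md), L(md), -1_000 * md**3)
        for md in mean_devs]
print("MeanDev   10L        5L         2.5L       L          Nonlinear")
for r in rows:
    print("{:<9} {:<10.0f} {:<10.0f} {:<10.0f} {:<10.0f} {:<12.0f}".format(*r))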


11.2 Nonlinearity of Harm Function and Probability Distribution


Generalized Lognormal: take $x \sim \text{Lognormal}(m, s)$ and a convex harm $A x^k$. The harm has density

$$F(A, x, k, m, s) = \frac{1}{\sqrt{2\pi}\, k\, s\, x}\, \exp\!\left( -\frac{\big( \tfrac{1}{k} \log\tfrac{x}{A} - m \big)^2}{2 s^2} \right)$$

As the response (the harm) becomes more convex, the tail of the harm distribution becomes fatter.


[Figure: densities F(A, x, k, 0, 1) of the harm for increasing convexity k (k = 1, 2, 3 and various A), plotted for x between 12.5 and 15.]

Survival Functions
[Figure: survival functions S(A, 15 - K, k, 0, 1) of the harm for k = 1, 2, 3 and various A, plotted for K between 12.5 and 15.]
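A short sketch, added here, of the survival function of the harm $A x^k$ implied by the density above, with A = 1, m = 0, s = 1 chosen arbitrarily.

# Survival function of A*x^k for x ~ Lognormal(m, s):
#   S(y) = 1 - Phi((log(y/A)/k - m)/s).
# Raising the convexity k fattens the tail of the harm.
import numpy as np
from scipy.stats import norm

def survival(y, A=1.0, k=1.0, m=0.0, s=1.0):
    return 1.0 - norm.cdf((np.log(y / A) / k - m) / s)

for k in (1, 2, 3):
    print(k, survival(100.0, k=k))   # tail probability grows rapidly with k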


11.3 Coffee Cup and Barrier-Style Payoff


One period model
[Figure: dose-response of the coffee cup over one period, a barrier-style payoff: the response holds up to a threshold dose, beyond which it drops to "Broken glass!".]

Clearly there is path dependence.


[Figure: response as a function of dose over repeated exposures, illustrating the path dependence.]

This part will be expanded