
Extreme Value Theory

Riccardo Rebonato

EDHEC Business School – EDHEC Risk Institute

Outline of Session

Plan of the Session

The ‘Far’ Tail

Setting Up the Problem

The Gnedenko Theorem

Estimating ξ and β

Estimating the Tail of the Distribution

Estimating VaR and CES

Plan of the Session

In this session we want to understand:

▶ how to ‘extend’ an empirical cumulative distribution into the tails;
▶ what Extreme Value Theory says;
▶ how to estimate the parameters of the tail;
▶ how to estimate VaR and CES.

This material has been adapted from Hull J., Risk Management and Financial Institutions, John Wiley.
The ‘Far’ Tail

▶ From a given set of univariate data, we can always build an empirical cumulative distribution.
▶ This has some nice properties – I don’t have to commit to a ‘named’ distribution, which could be inappropriate.
▶ However, it also has drawbacks: for instance, events beyond the highest and lowest observations ‘just cannot happen’ – they have zero empirical probability.
▶ Can we ‘extend’ the tails of a cumulative distribution in a theoretically justifiable manner?
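As a concrete illustration of the last point, here is a minimal sketch (the sample values are made up) showing that an empirical CDF assigns zero probability to anything beyond the largest observation:

```python
import numpy as np

def empirical_cdf(sample):
    """Return the empirical CDF of a univariate sample."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    def F(v):
        # fraction of observations <= v
        return np.searchsorted(x, v, side="right") / n
    return F

F = empirical_cdf([1.2, 0.4, 2.9, 1.7, 0.8])
print(F(2.0))       # 0.8: four of the five observations are <= 2.0
print(1 - F(5.0))   # 0.0: anything beyond the sample maximum 'cannot happen'
```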

Setting Up the Problem

▶ Let F(v) be the cumulative distribution for the variable v.
▶ Let’s assume that we have built it empirically.
▶ Let u be a value of the variable v in the right tail.
▶ Fact 1. The probability that v lies between u and u + y, Prob[u ≤ v ≤ u + y], is given by

      Prob[u ≤ v ≤ u + y] = F(u + y) − F(u)                                (1)

▶ Fact 2. The probability that v is greater than u, Prob[v > u], is given by

      Prob[v > u] = 1 − F(u)                                               (2)

Setting Up the Problem

▶ Remember that, if
  1. Prob[a|b] is the probability of a given b;
  2. Prob[b] is the probability of b; and
  3. Prob[a, b] is the joint probability of a and b,
  then
      Prob[a|b] × Prob[b] = Prob[a, b]                                     (3)
▶ Now define F_u(y) as the probability that v lies between u and u + y, given that v > u.
▶ Then

      Prob[u ≤ v ≤ u + y | v > u] × Prob[v > u] = Prob[u ≤ v ≤ u + y, v > u]   (4)

      F_u(y) × (1 − F(u)) = F(u + y) − F(u)                                (5)

Setting Up the Problem

▶ I can solve for F_u(y), the probability that v lies between u and u + y, given that v > u:

      F_u(y) = [F(u + y) − F(u)] / [1 − F(u)]                              (6)

▶ The quantity F_u(y) defines the right-hand tail of the distribution.
▶ It gives the cumulative distribution for the excess y, which is the amount by which v exceeds u, given that it does exceed u.
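Equation (6) can be checked directly on simulated data. A minimal sketch, assuming a Gaussian toy sample (nothing in the derivation depends on this choice):

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(100_000)       # toy univariate sample

def F(x):
    """Empirical CDF of the sample."""
    return np.mean(v <= x)

u, y = 1.5, 0.5                        # threshold in the right tail, excess
F_u = (F(u + y) - F(u)) / (1 - F(u))   # Eq. (6)

# Direct estimate of the same conditional probability
direct = np.mean((v > u) & (v <= u + y)) / np.mean(v > u)
print(F_u, direct)                     # the two numbers coincide
```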

Setting Up the Problem

▶ Clearly this cumulative distribution depends on the value u.
▶ However, if u is ‘sufficiently far out in the tail’, then for a very wide class of distributions F(v), the distribution F_u(y) converges to a generalized Pareto distribution.
▶ This is what we are going to estimate.

The Gnedenko Theorem

▶ Gnedenko (1943) proved that, for a large class of distributions F(v), the distribution F_u(y) converges to

      G_{ξ,β}(y) = 1 − (1 + ξ y/β)^(−1/ξ)                                  (7)

▶ This distribution is characterized by two parameters, ξ and β, that have to be estimated from the data.
▶ The parameter β is just a scale parameter.
▶ The parameter ξ is the shape parameter, and determines how fat the tail is.
▶ Therefore it also determines how many of the moments of the distribution are finite: E[v^k] is infinite for k ≥ 1/ξ.
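A minimal sketch of Eq. (7); the parameter values are illustrative assumptions, chosen only to show how ξ controls the fatness of the tail:

```python
def gpd_cdf(y, xi, beta):
    """Generalized Pareto CDF of Eq. (7)."""
    return 1.0 - (1.0 + xi * y / beta) ** (-1.0 / xi)

beta = 1.0
s_thin = 1.0 - gpd_cdf(10.0, 0.1, beta)   # survival prob. of a large excess
s_fat = 1.0 - gpd_cdf(10.0, 0.3, beta)
print(s_thin, s_fat)   # the fatter tail (larger xi) gives the larger probability
```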

The Gnedenko Theorem

  ξ          Highest finite moment
  0.1        9
  0.2        4
  0.25       3
  0.33333    2
  0.5        1

Table: The highest finite moment for different values of ξ.

The Gnedenko Theorem

▶ Typical values of ξ for financial time series lie between 0.1 and 0.4.
▶ The higher the value of ξ, the fatter the tail.
▶ This means that we should never simply assume that the moments we work with are well-defined: with ξ = 0.25, for instance, the fourth moment (and hence the kurtosis) is already infinite.
▶ So, the key question is: ‘How do we estimate ξ (and β)?’

Estimating ξ and β

▶ As usual, we are going to use Maximum Likelihood.
▶ Step 1: Obtain the density, g_{ξ,β}(y):

      g_{ξ,β}(y) = d/dy G_{ξ,β}(y) = (1/β) (1 + ξ y/β)^(−1/ξ − 1)         (8)

▶ Step 2: Choose a ‘high’ value of u: we want to be sufficiently far in the tail – something like the 95th or 97.5th percentile of the empirical distribution generally works well.
▶ Step 3: Sort the observations for which v > u. Call n_u the number of observations such that v > u.

Estimating ξ and β

▶ Step 4: Calculate the likelihood function, LF(ξ, β):

      LF(ξ, β) = ∏_{i=1}^{n_u} (1/β) (1 + ξ (v_i − u)/β)^(−1/ξ − 1)       (9)

▶ Step 5: Take the log of the likelihood function, LLF(ξ, β):

      LLF(ξ, β) = Σ_{i=1}^{n_u} log[ (1/β) (1 + ξ (v_i − u)/β)^(−1/ξ − 1) ]   (10)

▶ Step 6: Vary ξ and β until LLF(ξ, β) is maximized.
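Steps 1–6 can be sketched in a few lines. This is a toy illustration, not the author's code: the data are simulated (Student-t with 4 degrees of freedom, whose true tail shape is ξ = 0.25), and scipy's Nelder–Mead optimizer plays the role of Step 6:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
v = rng.standard_t(df=4, size=50_000)     # fat-tailed toy data (true xi = 0.25)

u = np.quantile(v, 0.95)                  # Step 2: threshold at the 95th percentile
excesses = v[v > u] - u                   # Step 3: amounts by which v exceeds u

def neg_llf(params):
    """Minus the log-likelihood of Eq. (10)."""
    xi, beta = params
    z = 1.0 + xi * excesses / beta
    if beta <= 0 or np.any(z <= 0):
        return np.inf                     # outside the support of the density
    return -np.sum(-np.log(beta) - (1.0 / xi + 1.0) * np.log(z))

res = minimize(neg_llf, x0=[0.2, 1.0], method="Nelder-Mead")   # Step 6
xi_hat, beta_hat = res.x
print(xi_hat, beta_hat)                   # xi_hat should be in the vicinity of 0.25
```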

Estimating the Tail of the Distribution

▶ Now we have G_{ξ,β}(y).
▶ Therefore the probability that v > u + y, given that v > u, is 1 − G_{ξ,β}(y).
▶ We have seen that the probability that v > u is 1 − F(u).
▶ The unconditional probability that v > x (for x > u, with y = x − u) therefore is

      Prob[v > x] = [1 − F(u)] × [1 − G_{ξ,β}(y)]                          (11)

▶ My best estimate of [1 − F(u)] is n_u/n, and therefore

      Prob[v > x] = (n_u/n) (1 + ξ (x − u)/β)^(−1/ξ)                       (12)
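A minimal sketch of Eq. (12); the threshold, sample counts, and GPD parameters below are hypothetical, standing in for values estimated as on the previous slides:

```python
def tail_prob(x, u, xi, beta, n_u, n):
    """Estimated unconditional probability Prob[v > x] of Eq. (12)."""
    return (n_u / n) * (1.0 + xi * (x - u) / beta) ** (-1.0 / xi)

n, n_u = 10_000, 500              # 500 exceedances over the threshold
u, xi, beta = 0.02, 0.25, 0.01    # hypothetical threshold and GPD estimates
print(tail_prob(0.04, u, xi, beta, n_u, n))   # well below n_u/n = 0.05
```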

Estimating VaR and CES

▶ Remember that VaR is just a percentile (of the loss distribution).
▶ Given a confidence level q, this means that F(VaR) = q.
▶ But F(x) = 1 − Prob[v > x].
▶ We have just estimated Prob[v > x]. Therefore

      q = 1 − (n_u/n) (1 + ξ (VaR − u)/β)^(−1/ξ)                           (13)

▶ Solving for VaR gives

      VaR = u + (β/ξ) { [(n/n_u)(1 − q)]^(−ξ) − 1 }                        (14)
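A minimal sketch of Eq. (14), reusing the same hypothetical numbers; as a check, feeding the resulting VaR back into Eq. (12) should return 1 − q:

```python
def evt_var(q, u, xi, beta, n_u, n):
    """VaR at confidence level q from Eq. (14)."""
    return u + (beta / xi) * (((n / n_u) * (1.0 - q)) ** (-xi) - 1.0)

n, n_u = 10_000, 500
u, xi, beta = 0.02, 0.25, 0.01    # hypothetical threshold and GPD estimates
var_99 = evt_var(0.99, u, xi, beta, n_u, n)

# Consistency check against Eq. (12): Prob[v > VaR] should equal 1 - q
p = (n_u / n) * (1.0 + xi * (var_99 - u) / beta) ** (-1.0 / xi)
print(var_99, p)                  # p is (up to rounding) 0.01
```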

Estimating VaR and CES

▶ It is easy to show that the Conditional Expected Shortfall (CES) is given by

      CES = (VaR + β − ξu) / (1 − ξ)                                       (15)
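A minimal sketch of Eq. (15), again with hypothetical parameters; the check verifies the algebraically equivalent form CES = VaR + (β + ξ(VaR − u))/(1 − ξ), i.e. VaR plus the expected excess beyond it:

```python
def evt_ces(var, u, xi, beta):
    """Conditional Expected Shortfall of Eq. (15)."""
    return (var + beta - xi * u) / (1.0 - xi)

u, xi, beta = 0.02, 0.25, 0.01    # hypothetical threshold and GPD estimates
var_99 = 0.0398                   # a hypothetical 99% VaR
ces_99 = evt_ces(var_99, u, xi, beta)
print(ces_99)                     # CES exceeds VaR, as it must for 0 < xi < 1
```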

