
Market Risk Measurement

Lecture 4
Estimating the Joint Distribution of Risk Factors

Riccardo Rebonato
Key Words and Concepts:

• marginal distributions,

• copula,

• empirical cumulative distribution,

• inverse function,

• Monte Carlo simulation.


1 Plan of the Lecture

Up to now we have assumed that someone had given us the joint distribution
of risk factors and that we simply had to sample from it. We did not explain
where this probability distribution was coming from. We fill in this gap in this
chapter.

The message is that we are going to use a divide-and-conquer approach:

1. first we focus on the marginal (univariate) distributions;

2. next we focus on the dependence among the risk factors;

3. lastly we put the two sets of information together using a copula approach.

Unless we use the historical-simulation approach, this divide-and-conquer strategy is almost universal.

However, it can be applied in different ways, depending on the size of the problem and on the specific purposes we have in mind.

For instance:

• if we want to analyze a particular transaction or a concentrated book, we will want to zoom in with a magnifying glass and model as best we can the fine details of the dependencies and of the tail properties;

• if, instead, we are interested in the risk profile of the whole portfolio of a large bank, we will need, and want, to adopt a much more broad-brush approach.

Not surprisingly, different tools will come to the fore.

So, in this lecture we will look at:

1. fitting an empirical marginal distribution to a parametric distribution;

2. estimating the pairwise co-dependence (not just correlation) between variables;

3. transforming a set of non-independent random variables with arbitrary marginal distributions to a set of variables with different marginal distributions, but which preserve important features of the original dependence;

4. how to use the above for efficient Monte Carlo Simulation.


Remember our goal: obtaining the joint distribution of risk factors.

The efficient Monte Carlo procedure alluded to in point 4 will be one of our preferred techniques to achieve our goal.

2 Transforming the Marginal Distribution of Random Variables

In this part of the course we are going to learn how to transform the marginal distribution of a random variable to a different marginal in such a way that something important is preserved (what this ‘something important’ is will become apparent in the following).

We always proceed in steps: first we transform from the starting marginal distribution to the Uniform U [0, 1] distribution.

Then, from the Uniform distribution we transform again to the target marginal
distribution of choice.

This is how it is done.


Figure 1:
2.1 Transforming from a Generic Distribution to U [0, 1]

First, we remember that if f is a strictly increasing function of x, then x ≤ a if and only if f (x) ≤ f (a), and therefore

Pr [x ≤ a] = Pr [f (x) ≤ f (a)] . (1)


This follows because

Pr [x ≤ a] = E [1{x≤a}] , (2)

where 1{condition} is the indicator function which is equal to 1 if condition = TRUE, and equal to zero otherwise.

So, for a strictly increasing function, f (x) ≤ f (a) if and only if x ≤ a, and therefore the indicator function is ‘activated’ under the same conditions.
Second, we note that, for any strictly increasing cumulative distribution, φ, the inverse function, φ−1, is also strictly increasing: if u1, u2 are in [0, 1] and u2 > u1, then φ−1 (u2) = x2 > φ−1 (u1) = x1. (Convince yourself with a sketch that this is true.)

In what follows the strictly increasing function in (1) will be the inverse function,
φ−1.

Now we can establish a nice result.

We start from the normal distribution, N, and then we generalize the setting.

Let z be drawn from a normal distribution:

z ∼ N (µ, σ2) .

Let the random variable u be the cumulative normal distribution of z:

u = N (z) . (3)

So, u is a number (a random variable) between 0 and 1.

How is this new random variable distributed?


To answer the question, let’s first remind ourselves that the uniform distribution
is that distribution for which P r [u ≤ u0] = u0.

(Again, a sketch should convince you that this is true: the cumulative distrib-
ution is just a straight line that goes up at 45 degrees.)

Consider the cumulative distribution of u, ie, consider Pr [u ≤ u0], with 0 ≤ u0 ≤ 1. We have

Pr [u ≤ u0] = Pr [N (z) ≤ u0] = Pr [N −1 (N (z)) ≤ N −1 (u0)] = Pr [z ≤ N −1 (u0)] = u0 (4)

where we have made use of the fact that N (·) and N −1 (·) are both strictly
increasing functions (see below).
To make sure that we understand the last line, note that the inverse N −1 (u0)
is the Gaussian random variate whose cumulative probability is u0.

In words the last line therefore says:

The probability that a random Normal variate, z, is smaller than the Gaussian
random variate, N −1 (u0), whose cumulative probability is u0, is just the
cumulative probability, u0.
Let’s do the last step slowly again.

Given a random variable z, the function, f (h), defined by


f (h) = P r [z ≤ h] (5)
is just, by definition, the cumulative distribution of z (which we have assumed
for the moment to be normal).

This means that, for any h,


f (h) = N (h) (6)

So, in particular, this is true for h0 = N −1 (u0).

For this particular value of h we have


N (h0) = N (N −1 (u0)) = u0 (7)

and so

Pr [z ≤ N −1 (u0)] = u0 (8)

Since Pr [z ≤ N −1 (u0)] = u0 was the last term in Equation (4), equating this with where we started from (Pr [u ≤ u0]) we have obtained that

P r [u ≤ u0] = u0 (9)
and therefore u is uniformly distributed between 0 and 1 (because, as we saw
before, uniform distribution is that distribution for which P r [u ≤ u0] = u0).
By the way, in the derivation we have made use of the fact that

• the inverse function, N −1, is a strictly increasing function — and therefore Pr [N (z) ≤ u0] = Pr [N −1 (N (z)) ≤ N −1 (u0)], because, for any strictly increasing f , Pr [x ≤ a] = Pr [f (x) ≤ f (a)] ;

• and that N −1 (N (z)) = z — the father of the son of Johnny is Johnny.


This means that giving as input to the normal cumulative distribution function
a random variable, z, that has been drawn from a normal distribution results
in a random variable, u = N (z), that is drawn from the uniform, U [0, 1],
distribution.

But note that we did not make use of any property of the Normal distribution
to obtain this result.

Therefore we can generalize as follows.


For any (strictly increasing) cumulative distribution function, φ, if the apply φ
to random variables, x, drawn from that distribution, we obtain a new random
variable, u, drawn from the uniform, U [0, 1], distribution.
Another way to say this is the following

Take any distribution.

If I draw from this arbitrary distribution lots and lots of random variates, and I
calculate the cumulative distribution corresponding to all these different random
values, the resulting quantities are uniformly distributed.

This means that, given any distribution, we know how to transform this distri-
bution to the uniform U [0, 1].
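As a quick sanity check, here is a minimal MATLAB sketch of this idea (my own illustration, not part of the lecture code): we draw from a normal distribution, feed the draws into the matching cumulative distribution, and the histogram of the outputs should look flat on [0, 1].

% Minimal sketch: applying a distribution's own CDF to its draws gives U[0,1].
z = 3 + 2*randn(10000,1);   % draws from N(3, 4): mu = 3, sigma = 2 (arbitrary choice)
u = normcdf(z, 3, 2);       % apply the matching cumulative normal distribution
hist(u, 20);                % the histogram should look roughly flat on [0, 1]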

Why we may want to do this will be apparent in a moment.


2.2 Transforming from One Distribution to Another

We have learnt how to obtain a bunch of uniformly distributed variables, u, as long as we can draw from a distribution, φ.

Now we are going to see how to transform random variables drawn from a
given distribution, φ, to ‘equivalent’ random variables drawn from a different
distribution, χ.
To fix ideas we start from a Student t distribution with, say, 2.9 degrees of
freedom.

We are going to draw 10,000 realizations from this distribution.

We call these variables t1, and we display their histogram in Fig (2).
Figure 2: 10,000 random realizations drawn from a Student-t distribution with
2.9 degrees of freedom.
Now we apply the cumulative Student-t function with 2.9 degrees of freedom
to the variables t1 that we have generated.

As we know, we obtain variables, u1, distributed as U [0, 1].

They are shown in Fig (3).

So far, so unsurprising.
But uniform variates are uniform variates: they do not keep a label telling
statisticians from which distribution ‘they were originally drawn’.

This means that now we can transform these U [0, 1] variables to variables
distributed according to any distribution for which we know how to calculate
the inverse.
Figure 3: The 10,000 random variables drawn from the 2.9-degrees-of-freedom Student-t distribution in Fig (2), transformed to the uniform U[0, 1] distribution as described in the text.
So, for instance, if we know how to calculate the inverse of, say, a gamma distribution, we can obtain, from the original Student-t variables, the ‘equivalent’ Gamma-distributed variables.

Step by step:

1. First we simulated the 10,000 Student-t variates, t1;

2. Next we obtained the associated uniform variates, u1 = Φ_Stud (t1);

3. Finally we used the inverse of the Gamma distribution to go from the associated uniform to the associated gamma variates, g1:

g1 = Φ_Gamma^−1 (u1) = Φ_Gamma^−1 (Φ_Stud (t1)) (10)

Here is the transformation of the original Student-t variates to a gamma, Γ (2, 1) distribution.

See Fig (4).
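For concreteness, here is a minimal MATLAB sketch of the three steps above (an illustration of mine, not the lecture code; it assumes the Statistics Toolbox functions trnd, tcdf and gaminv are available).

nu = 2.9;                    % degrees of freedom
t1 = trnd(nu, 10000, 1);     % step 1: simulate the Student-t variates
u1 = tcdf(t1, nu);           % step 2: associated uniform variates
g1 = gaminv(u1, 2, 1);       % step 3: associated Gamma(2,1) variates
hist(g1, 50);                % compare with Fig (4)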


Gamma distributions look nice, but transforming to the Normal distribution is much more useful (we’ll see why).

This is the effect of transforming the original Student-t variables to a standard normal distribution.

See Fig (5).


Figure 4: The 10,000 U[0, 1] random variables which had been obtained from the Student-t distribution with 2.9 degrees of freedom, transformed to a Γ(2, 1) distribution.
So, we have learnt how to start from a bunch of random variables drawn from
one distribution, and to obtain the ‘equivalent’ random variables drawn from a
different distribution.

What does ‘equivalent’ mean?

It means that if, in the distribution we started with, one draw was so high
that only, say, 5% of draws are on average higher, then in the new equivalent
distribution we are going to create a draw so high that only 5% of draws (in
the new distribution) are on average higher.
Figure 5: The 10,000 U[0, 1] random variables which had been obtained from the Student-t distribution with 2.9 degrees of freedom, transformed to a Gaussian (N (0, 1)) distribution.
3 Applications to Monte Carlo Simulation

This is all nice, but why would we want to do this?

Suppose that we have a bunch of variables, and we have ascertained that they
are distributed according to different distributions. Using the inverse, we can
first transform all these variables to uniform random variables, and from these
to normal random variables.
Given these normal random variables we can estimate the matrix of correlation
among them.

Once we know that a bunch of jointly normally distributed variables has a known
vector of means, and a known covariance matrix (a vector of variances and a
correlation matrix), then we can very easily carry out a Monte Carlo simulation
for Gaussian variates, as we have learnt to do in Lecture 2.
But, you may say, this is not what we were interested in: we want to simulate vectors drawn from the complex distribution we started with, not from a multivariate Gaussian.

To some extent, no problem: once we have a vector of Gaussian random variates, we can transform to the associated U [0, 1] variates, and from these back to the original, and possibly all different, marginal distributions for the individual original variables.
The point here is that, as long as we preserve their order, the original variables,
their uniform transforms, their normal transforms, etc, all preserve the memory
of the dependence in the original variables.
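Schematically, the whole pipeline can be sketched as below. This is only an outline of mine, not the lecture code: 'data' is assumed to hold one risk factor per column, and cdf_k / icdf_k are placeholders for the marginal cumulative distributions and their inverses (parametric or empirical).

[nobs, nser] = size(data);
zmat = zeros(nobs, nser);
for k = 1:nser
    u = cdf_k{k}(data(:,k));           % marginal -> uniform
    zmat(:,k) = norminv(u);            % uniform  -> standard normal
end
C = corr(zmat);                        % correlation estimated in 'Gaussland'
L = chol(C, 'lower');
nsim = 100000;
zsim = (L * randn(nser, nsim))';       % correlated Gaussian draws
xsim = zeros(nsim, nser);
for k = 1:nser
    xsim(:,k) = icdf_k{k}(normcdf(zsim(:,k)));   % back to the original marginals
end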

Of course, when we do the Monte Carlo simulation assuming a Gaussian copula, we are imposing a structure of dependence which may or may not be reflected in the transformed data.

But, if a Gaussian copula does a decent job at describing the empirical dependence, then in the limit the procedure becomes quasi exact.
It gets better.

We don’t even have to fit the empirical marginal distributions to named distribution functions (such as Student-t, Gamma, Gaussian, Beta, you name it).

As we shall see in the next section, with a tiny little bit of care we can use the empirical marginal distributions, and follow the same procedure.

Here is how it is done.


4 An Example in Detail

Let’s see this in practice.

We are going to use the same data that we used in the baby example in Lecture 2, namely six time series, covering more than 10 years (2736 observations) for three equity indices (the S&P500, the FTSE100 and the DAX) and three swap rates (the 10-year, 5-year and 2-year swap rates).
We have the following vector of sensitivities, hi:

Asset        Sensitivity hi
S&P500            128,000
FTSE100           −72,000
DAX               −40,000              (11)
Swap10y           800,000
Swap5y            600,000
Swap2y            200,000
The changes in the underlying time series and the sensitivities are related to the changes in the P&L by

PL_i^k = h_k (x_i^k − x_{i−1}^k) (12)

where x_i^k is the value of the financial time series k on day i.

To be clear, if from day i − 1 to day i the S&P index changed from 2020 to 2012, then the associated P&L would be:

PL_i^{S&P500} = 128,000 × (2012 − 2020) = −$1,024,000 (13)
And, before we get started, let’s standardize each data series by subtracting its
mean and dividing by its standard deviation.

We will reintroduce the means and standard deviations at the end.

Let’s look first at the normalized histogram for the empirical S&P500 returns.
(By the way, from now on I will no longer say that the data have been normalized
— we just have to remember it.)

This is shown in Fig (6).


Now from the data we are going to build the empirical cumulative distribution function, and create a linear interpolation so that we can evaluate it at any intermediate value we may want.

The empirical cumulative distribution function and its linear interpolation are shown in Fig (7).
Figure 6: Normalized histogram of the empirical (standardized) S&P500 returns.
From the (interpolated) empirical cumulative distribution, φ_SP, we know how to obtain the associated uniform variates:

u^SP = φ_SP (x^SP) (14)

Fig (8) shows how uniform the associated random variables obtained by follow-
ing this procedure turned out to be:
Figure 7: The empirical cumulative distribution function of the standardized S&P500 changes and its piecewise-linear interpolation.
From the associated uniform variates we can go back to the original distribu-
tions.

We just have to do

x_sim^SP = φ_SP^−1 (u^SP)

Fig (9) shows what we obtain when we do this.


Figure 8: Histogram of the uniform variates obtained from the S&P500 data via the interpolated empirical cumulative distribution.
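To make the construction concrete, here is a sketch of how the interpolated empirical cumulative distribution and its inverse can be built (it mirrors what the code in Section 6 does with ecdf and interp1, but omits the extension of the function to reach 0 and 1; the variable names are mine).

x = standdata(:,1);                        % standardized S&P500 changes
[Fi, xi] = ecdf(x);                        % empirical cumulative distribution
xj = xi(2:end);                            % drop the duplicated first point
Fj = (Fi(1:end-1) + Fi(2:end))/2;          % mid-point probabilities
Fcum = @(z) interp1(xj, Fj, z, 'linear', 'extrap');   % phi_SP
Finv = @(u) interp1(Fj, xj, u, 'linear', 'extrap');   % inverse of phi_SP
uSP  = Fcum(x);                            % equation (14): the uniform variates
xsim = Finv(rand(10000,1));                % simulated draws with the empirical marginal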
Clearly, for a single time series going back and forth from the distribution to the uniform and back to the same distribution again is not very exciting.

We shall see presently the more interesting applications of these transformations.

Even in this case, however, I could simulate far more variables than in the
original sample (this is where the interpolation becomes useful), which certainly
looks good, and may even be useful.
Exercise 1 Under what conditions am I adding information by running more
simulations than the original number of data points?
We can do the same for the other five time series.

We do not show the results because they are very similar, and similarly good.

Let’s go back to the uniform variates obtained for the six time series, and let’s
collate them in a [2736 × 6] matrix.

As we do so, let’s keep careful track of the time ordering, so that the kth
uniform variate associated with, say, the FTSE, is lined up in the matrix with
the kth variate for all the other time series.
Figure 9: Simulated S&P500 changes obtained by feeding uniform draws through the (interpolated) inverse empirical cumulative distribution.
Having done this, let’s transform each vector of uniform variates not back to
the original distribution, or to some exotic distribution such as the gamma that
we saw before, but to a Gaussian.

Why do we want to do this?

Because we know how to draw (and hence simulate) easily from a multivariate
Gaussian distribution.
The idea is to simulate a zillion correlated Gaussians, and then to transform them back to the original variables, while preserving (almost) the original correlation.

Let’s do it step by step.


So, as a next step we are going to apply the inverse normal cumulative to each uniform vector and obtain the associated Gaussian variates. For the S&P500, we have

z^SP = N^−1 (u^SP) = N^−1 (φ_SP (x^SP))

Note carefully the difference: before we did φ_SP^−1 (φ_SP (x^SP)) (and we ended back where we started, because the dad of Johnny’s son is Johnny). Now we do N^−1 (φ_SP (x^SP)), and we end somewhere different, ie, in Gaussland.

Since we are now in Gaussland we are in the perfect place to calculate correlations, which we do and show in Fig (10).
Figure 10: The correlation matrix estimated from the Gaussian-transformed variables.

Could we have calculated the correlation matrix also using the original data, or
perhaps even the associated uniform variates?

Of course we could, and Figs (11) and (12) show what we would have obtained.
As a matter of fact, the marginals and the associated scatterplots look extremely
different, but the correlation matrices themselves are remarkably similar.
Figure 11: The correlation matrix estimated from the original (standardized) data.

Discussion.
Figure 12: The correlation matrix estimated from the associated uniform variates.
Now that we have a correlation matrix, a vector of standard deviations for each
variable, and a vector of means, we can easily run a Monte Carlo simulation
with Gaussian variates, as we saw in Lecture 2.

This will give us as many Gaussian variates as we want.

At this point there are two avenues open to us, one that leads to perdition and
despair, the other to joy and success.
The bad way to do things is to stop at this stage, and just use the Gaussian
variates (after multiplying each by its standard deviation and adding back in
the mean) to calculate the P&Ls.

A moment’s reflection shows that this would be a very convoluted way to do what we learnt how to do in the previous lecture: we could simply have measured the means, the standard deviations and the correlation matrix∗, and we could have run a simple-minded Monte Carlo simulation.

∗ This would not have been quite identical, because we would have used the original variables, not the associated Gaussian variables. Very little difference.
Even if it is a bit silly, we can still look at what we would have obtained by
proceeding this way, and compare it with the empirical cumulative distribution
for each time series (curve labelled ‘HS’).

We note that there are very big differences between the empirical and the
simulated cumulative distributions.

See, for instance, Fig (13).


Similar graphs are obtained for the other time series.

And, by the way, Fig (14) compares the P&L distribution obtained via Historical Simulation with the distribution obtained using the convoluted Gaussian route.
Figure 13: Empirical (HS) versus Gaussian-simulated cumulative distribution for one of the time series.
Figure 14: Total P&L distribution obtained via Historical Simulation versus the convoluted Gaussian route.

So, if this is just an expensive way to obtain garbage, what is the path to joy?

Clearly, once we have obtained the correlated Gaussian draws we have to transform each set of variates back to the original empirical distribution, by going via the associated uniforms.

So, first we do, for each time series k,

u^k = N (z^k) (15)

(where the z^k are the associated Gaussian draws for variable k).

Next we do

x^k = φ_k^−1 (u^k) = φ_k^−1 (N (z^k)) (16)
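In code, equations (15) and (16) amount to two lines (a sketch of mine; z_k stands for the Gaussian draws for series k, and Finv_k for the interpolated inverse of its empirical marginal, built as in Section 6):

u_k = normcdf(z_k);     % equation (15): Gaussian draw -> uniform
x_k = Finv_k(u_k);      % equation (16): uniform -> empirical marginal of series k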

The same comparison between the empirical and the simulated cumulative
distribution now looks as shown in Fig (15):
The two curves are now so on top of each other that they cannot be distin-
guished.

Does this mean that we have found a perfect way to carry out a simulation
from an arbitrary (and arbitrarily complex) joint distribution?

Not quite.

To understand why not, let’s continue the exercise one more step, namely let’s compare the empirical and simulated cumulative distributions of the P&Ls instead of the variables x^k. (We obtained the empirical distribution of P&Ls using Historical Simulation.)

The comparison is shown in Fig (16).


Figure 15: Empirical versus simulated cumulative distribution for one of the time series, after mapping the Gaussian draws back to the empirical marginal.
Figure 16: Empirical (HS) versus simulated cumulative distribution of the total P&L.

The match is much closer than what we obtained in Fig (14).

But why is it not as good as what we obtained when we looked at the univariate distributions?

Clearly our Achilles’ heel is in the description of the correlation (co-dependence, really) among the original variables.

For all we know, they may have had a very different type of tail dependence than what is allowed by the Gaussian multivariate distribution;

or positive changes may have been correlated more or less strongly than negative
changes;

or whatever.
In general, a copula other than the Gaussian could have been the best description for each individual pair.

But, since for a multivariate distribution handling multivariate copulas gets very messy very quickly, we resort for practical reasons to imposing a Gaussian dependence (a Gaussian copula) among the original variables.

This does not at all mean that the simulated variables will have Gaussian tails: the marginals are all very well recovered.

It just means that the co-dependence is not perfectly captured.


Figure 17: Comparing Historical Simulation and Monte Carlo.

Fig (17) compares the VaR obtained from Historical Simulation with the VaR from the Gaussian Monte Carlo.


For the percentiles examined, the VaR from the Gaussian simulation is systematically lower.
You can check that, apart from numerical noise, it is essentially the same as the
VaR calculated with the analytic method in the previous lecture, and reported
again below in Fig (18) for ease of comparison.
Figure 18: The VaR calculated with the analytic method of the previous lecture, reported for ease of comparison.

In case you want to play with this at home, I have written below a simple piece
of code that does the job. Caveat emptor.
5 What Have We Done?

Today we have learnt how to draw from the unconditional joint distribution of
several variables representing risk factors.

We have not fitted the marginals to any named distribution. We have used the
empirical marginals instead.

This is good and bad — see later.

To carry out the draws of the high-dimensional variates, we have had to impose
a Gaussian dependence among the variables. This is the weakest point of the
procedure, and the most difficult to fix.
What remains to be done?

Remember that I would like to draw not from the unconditional joint distribution
of risk factors, but from the joint distribution of risk factors that applies to
today.

This is the first fix, and it is something that we will learn how to do in a future
lecture.
Why did I say before that using the empirical marginals was good and bad?

Discuss (Suppose that the length of each vector was 4 days, and then I drew
1,000,000,000 joint realizations.)
What could we do to fix the fact that, as we implemented the procedure, we are never going to draw a value for any risk factor larger or smaller than what is in our data set?

What happens if I take a pencil and I extend monotonically (right and left) the
empirical cumulative distribution by half a mile? Do I still have a bona fide
cumulative distribution?
Are there principled ways of making this extension?

Yes, there are: they are given by the Extreme Value Theory — soon to appear
on these screens.
When we put these pieces together (ie, when we go from the unconditional to the conditional joint distribution of risk factors, and when we allow for occurrences larger/smaller than what is in our data set), we really have an industrial-strength (ie, real-life, not baby-toy) Monte Carlo simulation tool.

If I had a lot of time and money, this would be my favourite tool to calculate
a hypothetical distribution of risk factors (and hence a P&L distribution).
Discuss: How would you check whether the Gaussian copula assumption is
good enough?
Think of comparing HS with the same number of MC simulations.
Discuss: In what respects is Historical Simulation better / worse than the fancy
MC we have just learnt to run?
(Think of altering the dependence, exploring the tails, making the simulation conditional, and of the non-Gaussianity of the copula.)
6 The Code

function [ titlestring ] = ChooseString( indexstring )


%

if indexstring == 1
titlestring = ’S&P500’;
end
if indexstring == 2
titlestring = ’FTSE100’;
end
if indexstring == 3
titlestring = ’DAX’;
end
if indexstring == 4
titlestring = ’Swap10y’;
end
if indexstring == 5
titlestring = ’Swap5y’;
end
if indexstring == 6
titlestring = ’Swap2y’;
end

end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [ totunif, totgauss, corrorig, corrunif, ...
           corrgauss, meanorig, stdorig, meangauss, ...
           stdgauss, absdeldata, standdata] = DataAnalysis1( )
%
%
% The function creates equivalent Gaussian
% variables that display the same
% correlation as the original variables
% read in by the function
% ReadInData();

% The data read in are in a matrix


% of dimensions [recordlength x numseries]

% The equivalent Gaussian random variables are


% created by turning first the
% original variables into uniform random variables,
% and then transforming these
% into Gaussians using the inverse normal cumulative function.

% The transformation to the equivalent uniform is achieved by creating


% the empirical cumulative distribution function; also its inverse is
% required to generate the smooth distribution
% of random variables with the
% same marginal properties as the original variables.
%
% The function returns
% 1) the equivalent uniform random variates (matrix totunif);
% 2) the equivalent Gaussian random variates (matrix totgauss);
% 3) the correlation matrix among the original variables (matrix
% corrorig);
% 4) the correlation matrix among the uniform random variables (matrix
% corrunif);
% 5) the correlation matrix among the Gaussian random variables (matrix
% corrgauss);
%
% All the correlation matrices have dimension [numseries x numseries]
%
% The matrix corrgauss is what is needed for the Monte Carlo simulation;
% the other quantities are returned for analysis.
%

[ data, recordlength, numseries ] = ReadInData(); % reads in the data

% now takes the absolute (not percentage) differences


[ absdeldata ] = GetAbsDiff( data, recordlength, numseries);
% now standardizes the data by
%subtracting the mean and dividing by the
% standard deviation
[ standdata ] = StandardizeData( absdeldata );
[npoints, numseries] = size(standdata);
xmin = min(min(standdata));
xmax = max(max(standdata)); % for later use with plotting

for kser =1:numseries

x = standdata(:,kser);
titlestring = ChooseString( kser );
hist(x,-6.4:0.2:6.4); % plots the original data
title (titlestring);
xlabel(’Z (empirical)’);
ylabel(’Frequency’);
xlim([-4 4]);
figure

[Fi, xi] = ecdf(x); % empirical cumulative distribution function


stairs(xi,Fi, ’r’);

xlim([xmin*1.01 xmax*1.01]); % now builds a piecewise-linear estimate


xlabel (’x’);
ylabel(’F(x)’);
xj= xi(2:end);
[n, ~] = size(xj);
Fj = (Fi(1:end-1)+Fi(2:end))/2;
hold on
plot(xj,Fj,’b.’, xj,Fj, ’b-’);
hold off
legend({['ECDF ' titlestring] 'Breakpoints' 'Piecewise Linear Estimate'});
figure

% extends the function to make it reach 0 and 1


xj = [xj(1) - Fj(1) * (xj(2)-xj(1))/((Fj(2)-Fj(1)));
xj;
xj(n) + (1-Fj(n))*((xj(n)-xj(n-1))/(Fj(n)-Fj(n-1)))];
Fj = [0; Fj; 1];
stairs(Fi,[xi(2:end); xi(end)], ’r’);

% creates a function that linearly interpolates the empirical


% cumulative function for any value of the input

Fcum = @(z) interp1(xj,Fj, z, ’linear’, ’extrap’);


Finv = @(u) interp1(Fj,xj, u, ’linear’, ’extrap’);

% creates 10,000 univariate random variates with the same marginal


% distributional properties as the original data. Note: these
% vectors DO NOT HAVE the same codependence displayed by the
% original data

utest = rand(10000,1);
ztest = Finv(utest);
hist(ztest,-8:0.2:8);
xlim([-5 5]);
title (titlestring);
xlabel(’Z (simulated)’);
ylabel(’Frequency’);
figure

for k=1:npoints
unif(k)=Fcum(x(k));
end
totunif(1:npoints,kser)=unif(1:npoints);

end

% now it plots the uniform distribution obtained from the data


for kser=1:numseries
titlestring = ChooseString( kser );
hist(totunif(1:npoints,kser))
title (titlestring);
xlabel(’U’);
ylabel(’Frequency’);
figure
end

% now it creates the Gaussian random variables associated


% with the original data
for kser=1:numseries
totgauss(:,kser)=norminv(totunif(:,kser));
end

%calculates the correlation matrices

corrorig = corr(standdata);
corrunif = corr(totunif);
corrgauss = corr(totgauss);
corrplot(standdata, 'varnames',{'SP500','FTSE100','DAX','Sw10y','Sw5y','Sw2y'});
title('Correlation (original data)');

corrplot(totunif, 'varnames',{'SP500','FTSE100','DAX','Sw10y','Sw5y','Sw2y'});

title('Correlation (uniform transform)');

corrplot(totgauss, 'varnames',{'SP500','FTSE100','DAX','Sw10y','Sw5y','Sw2y'});

title('Correlation (gaussian transform)');

meanorig = mean(absdeldata);
stdorig = std(absdeldata);
meangauss = mean(totgauss);
stdgauss = std(totgauss);
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

function [ returnmatrix, PandLMC, PandLHS, PandLGauss ] = ...


...GenerateGaussianRandomVariates( meangauss, stdgauss, ...
...meanorig, stdorig, corrmatrix, absdeldata, standdata)
%

[numseries, ~] = size(corrmatrix) ;
[numsimul, ~] = size(absdeldata) ;
sensvector=[128000; -72000; -40000; 800000; 600000; 200000];
%% This is all done with Gaussian simulated data!!

lowertriang = chol(corrmatrix,’lower’);
for ksim= 1:numsimul;

draw = normrnd(0,1,numseries,1);
shockvectorGauss = diag(stdgauss) * lowertriang * draw ;
shockmatrixGauss(ksim,1:numseries) = shockvectorGauss(1:numseries);
detvector = meangauss’ ;
standreturnvector = detvector + shockvectorGauss;

totPandLMC(ksim,1) = 0;
totPandLHS(ksim,1) = 0;
for kser = 1:numseries
returnmatrix(ksim,kser) = (standreturnvector(kser,1) * stdorig(1,kser)) + meanorig(1,kser);

PandLMC(ksim, kser) = sensvector(kser,1) * returnmatrix(ksim,kser);

PandLHS(ksim, kser) = sensvector(kser,1) * absdeldata(ksim,kser);

% adds up the individual P&Ls into a total P&L


totPandLMC(ksim,1) = totPandLMC(ksim,1) + PandLMC(ksim, kser);
totPandLHS(ksim,1) = totPandLHS(ksim,1) + PandLHS(ksim, kser);

end

end
PandLGauss = PandLMC;
%%
% Gaussian simulation is over; now analyze the results of the Gaussian
% experiment

for kser = 1:numseries

[ titlestring ] = ChooseString( kser );


HSseries(1:numsimul,1) = PandLHS(1:numsimul, kser);
MCseries(1:numsimul,1) = PandLMC(1:numsimul, kser);

[FiHS, xiHS] = ecdf(HSseries);


[FiMC, xiMC] = ecdf(MCseries);
stairs(xiHS, FiHS, ’r’);
hold on
stairs(xiMC,FiMC, ’b’);
hold off
legend({['ECDF HS' titlestring] ['ECDF MC-Gauss' titlestring]}, 'location', 'NW');
figure
end
PandLGauss = PandLMC;

[FiHS, xiHS] = ecdf(totPandLHS);


[FiMC, xiMC] = ecdf(totPandLMC);
stairs(xiHS, FiHS, ’r’);
hold on
stairs(xiMC,FiMC, ’b’);
hold off
legend({'ECDF HS Total P&L', 'ECDF MC-Gauss Total P&L'}, 'location', 'NW');
figure

%
%%

% Now transform the Gaussian variates back to the original distributions
% for the real simulation. This is done in two steps: first get the
% uniform variates associated with the Gaussian random variables. Then
% from these get to the original distribution.

for kser =1:numseries


x = shockmatrixGauss(:,kser);
[Fi, xi] = ecdf(x); % empirical cumulative distribution function
xj= xi(2:end);
[n, ~] = size(xj);
Fj = (Fi(1:end-1)+Fi(2:end))/2;
% extends the function to make it reach 0 and 1
xj = [xj(1) - Fj(1) * (xj(2)-xj(1))/((Fj(2)-Fj(1)));
xj;
xj(n) + (1-Fj(n))*((xj(n)-xj(n-1))/(Fj(n)-Fj(n-1)))];
Fj = [0; Fj; 1];

% creates a function that linearly interpolates the


% cumulative function for any value of the input

Fcum = @(z) interp1(xj,Fj, z, ’linear’, ’extrap’);

for k=1:numsimul
unifGauss(k)=Fcum(x(k));
end
totunifGauss(1:numsimul,kser)=unifGauss(1:numsimul);
end

%%
% Now I have the uniform variates that correspond to the
% random draws. I have obtained them from the simulated
% Gaussian variates with the right correlation. I must
% transform them to the original empirical distributions.

% First I need the inverse of the original distribution, into which I
% will feed the uniforms obtained from the Gaussian draws

for kser =1:numseries


x = standdata(:,kser);
[Fi, xi] = ecdf(x); % empirical cumulative distribution function of the original data
xj= xi(2:end);
[n, ~] = size(xj);
Fj = (Fi(1:end-1)+Fi(2:end))/2;

% extends the function to make it reach 0 and 1


xj = [xj(1) - Fj(1) * (xj(2)-xj(1))/((Fj(2)-Fj(1)));
xj;
xj(n) + (1-Fj(n))*((xj(n)-xj(n-1))/(Fj(n)-Fj(n-1)))];
Fj = [0; Fj; 1];

% creates a function that linearly interpolates the inverse


% cumulative function for any value of the input
Finv = @(u) interp1(Fj,xj, u, ’linear’, ’extrap’);

% Now it creates the shocks (normalized) associated with the original


% distribution. They inherit the dependence from the Gaussian
% correlation matrix. These are the shocks that will be used in the MC
% simulation.

yunif = totunifGauss(:,kser);

for k=1:numsimul
empir(k)=Finv(yunif(k));
end
empshocxkmatrix(1:numsimul,kser)= empir(1:numsimul);

end
%%
% Now I can start the simulation again with the appropriate
% variables with the empirical marginal distributions and with the
% correct correlation matrix

for ksim= 1:numsimul;

detvector = meangauss’ ;
empshocxkvector = empshocxkmatrix(ksim,:);
empreturnvector = detvector + empshocxkvector’;

totPandLMC(ksim,1) = 0;
totPandLHS(ksim,1) = 0;
for kser = 1:numseries

returnmatrix(ksim,kser) = ...
(empreturnvector(kser,1) * stdorig(1,kser)) + meanorig(1,kser);

PandLMC(ksim, kser) = sensvector(kser,1) * returnmatrix(ksim,kser);

PandLHS(ksim, kser) = sensvector(kser,1) * absdeldata(ksim,kser);

% adds up the individual P&Ls into a total P&L


totPandLMC(ksim,1) = totPandLMC(ksim,1) + PandLMC(ksim, kser);
totPandLHS(ksim,1) = totPandLHS(ksim,1) + PandLHS(ksim, kser);

end

end
for kser = 1:numseries

[ titlestring ] = ChooseString( kser );


HSseries(1:numsimul,1) = PandLHS(1:numsimul, kser);
MCseries(1:numsimul,1) = PandLMC(1:numsimul, kser);

[FiHS, xiHS] = ecdf(HSseries);


[FiMC, xiMC] = ecdf(MCseries);
stairs(xiHS, FiHS, ’r’);
hold on
stairs(xiMC,FiMC, ’b’);
hold off
legend({['ECDF HS' titlestring] ['ECDF MC-Emp' titlestring]}, 'location', 'NW');
figure
end

[FiHS, xiHS] = ecdf(totPandLHS);


[FiMC, xiMC] = ecdf(totPandLMC);
stairs(xiHS, FiHS, ’r’);
hold on
stairs(xiMC,FiMC, ’b’);
hold off
legend({'ECDF HS Total P&L', 'ECDF MC-Emp Total P&L'}, 'location', 'NW');

end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [ absdeldata ] = GetAbsDiff( data, recordlength, numseries)
%
for kser=1:numseries
for krec=1:recordlength-1
absdeldata(krec,kser) = data(krec+1,kser)-data(krec,kser);
end
end

end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [ data, recordlength, numseries ] = ReadInData()
% The columns contain SOX, UKX, DAX, USSW10, USSW5, USSW2

data = csvread(’C:\Users\r_rebonato\Documents\MATLAB\ProjectVaR\FIEquityDa
[recordlength, numseries] = size(data);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [ standdata ] = StandardizeData( data )
%
[recordlength, numseries] = size(data);

for kser=1:numseries
average = mean(data(:,kser));
stdev = std(data(:,kser));
for krec=1:recordlength
standdata(krec,kser)= (data(krec,kser)- average)/stdev;

end
end

end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
