
GOODNESS-OF-FIT TESTS FOR COPULAS OF MULTIVARIATE

TIME SERIES

BRUNO RÉMILLARD

Abstract. The asymptotic behaviour of the empirical copula constructed from


residuals of stochastic volatility models is studied. It is shown that if the stochastic
volatility matrix is diagonal, then the empirical copula process behaves as if the
parameters were known, a remarkable property. However, that is not true if the
stochastic volatility is genuinely non-diagonal. Applications for goodness-of-fit and
structural change of the dependence between innovations are discussed.

1. Introduction
In many financial applications, e.g., pricing of options on multiple assets or ex-
change rates, multiname credit derivatives, portfolio management and risk manage-
ment, it is necessary to model the dependence between different assets. That can
be done simply by using copulas, which are distribution functions of multivariate
uniform variables.
It has been shown, e.g., Embrechts et al. (2002) and Berrada et al. (2006), that
the choice of the copula is of paramount importance since it can lead to significant
differences in pricing. The same is true for measures of risk. From an economic
point of view, it is also pertinent to try to find the kind of dependence linking several
economic series. One could be interested for example in modeling the dependence
between several exchange rates with respect to the US currency, or to show that
there is a strong dependence between an exchange rate and the value of a commodity.
Using the recent economic context in Greece, one could also be interested in modeling
the dependence between exchange rates (Euro vs USD) and bond values. Actuaries
also have to model dependence between pairs or multivariate vectors of data.
Motivated by actuarial, economic and financial applications, the problem of the
choice of the copula to model dependence between data correctly is quite recent and
has been tackled mainly for serially independent observations. See, e.g., Genest et al.
(2009) for a comparison and review of the consistent goodness-of-fit tests that can be
used in that context.

Date: December 22, 2010.


1991 Mathematics Subject Classification. Primary 60F05; Secondary 62E20, 62M10.
Key words and phrases. Goodness-of-fit, time series, dynamic copulas, GARCH models.
Funding in partial support of this work was provided by the Natural Sciences and Engineering
Research Council of Canada, the Fonds québécois de la recherche sur la nature et les technologies,
and the Institut de finance mathématique de Montréal.

Electronic copy available at: http://ssrn.com/abstract=1729982



However, in economic and financial applications, there is almost always serial de-
pendence and when looking for the choice of a copula family, the serial dependence
problem is either ignored, i.e., the data are not filtered to remove serial dependence,
as in Dobrić and Schmid (2005, 2007) and Kole et al. (2007), or the data are filtered
but the potential inferential problems of using these transformed data are not taken
into account. For example, Panchenko (2005) uses a goodness-of-fit test on filtered
data (residuals of GARCH models in his case), without proving that his proposed
methodology works for residuals. However, he mentioned in passing that working
with residuals could destroy the asymptotic properties of his test. A similar situation
appears in Breymann et al. (2003) where both the problem of working with residuals
and the problem of the estimation of the copula parameters are ignored. The same
criticisms can be addressed to van den Goorbergh et al. (2005) and Patton (2006).
It seems that the first paper addressing rigorously the problems raised by the use of
residuals in estimation and goodness-of-fit of copulas is Chen and Fan (2006). An
unpublished document (Chen et al., 2005) also circulated some time ago proposing a
kernel-based test using residuals of GARCH models. However, the proof of their main
result is missing and the technical report was never published. Using a multivariate
GARCH-like model with diagonal innovation matrix, Chen and Fan (2006) showed
the remarkable result that estimating the copula parameters using the rank-based
pseudo-likelihood method of Genest et al. (1995) and Shih and Louis (1995) with
the ranks of the residuals instead of the (non-observable) ranks of innovations, leads
to the same asymptotic distribution. In particular, the limiting distribution of the
estimation of the copula parameters does not depend on the unknown parameters
used to estimate the conditional means and the conditional variances. That prop-
erty is crucial if one wants to develop goodness-of-fit tests for the copula family of
the innovations. In Chen and Fan (2006), the authors also propose ways of selecting
copulas based on pseudo-likelihood ratio tests. However, their comparison test is not
a goodness-of-fit test in the sense that one could select a model which is inadequate
though better than the other proposed models.
In the present paper, goodness-of-fit tests are proposed, with the related statistics
being functions of empirical processes, since tests based on empirical processes are
generally consistent and more powerful than other classes of test statistics, including
likelihood ratio tests. One extends the results of Chen and Fan (2006) by proving
that under similar technical assumptions, the empirical copula process has the same
limiting distribution as if one would have started with the innovations instead of
the residuals. Other methods of estimation of the parameters of the copula families,
not considered in Chen and Fan (2006), also share the same properties. For exam-
ple, the asymptotic behaviour of Kendall's tau, Spearman's rho, and the van der Waerden
and Blomqvist coefficients is exactly the same as with serially independent observations.
An immediate consequence is that all tools developed recently for the
serially independent case remain valid for the residuals. In particular, one can use
the 1-level and 2-level parametric bootstrap of Genest and Rémillard (2008) to esti-
mate p-values of test statistics, if the estimator is regular (Genest and Rémillard,
2008). That is the case for the usual estimators like pseudo-likelihood estimators and




moment estimators. Such properties are in sharp contrast with the ones using con-
secutive residuals of a single time series for testing serial independence (Ghoudi and
Rémillard, 2010), where the limiting copula process does depend on parameters, even
in simple ARMA models. It is also shown that when the volatility matrix is genuinely
non-diagonal, then all these nice properties stop holding true. The estimation of the
copula parameters and the limiting empirical copula depend on the conditional mean
and conditional variance parameters. It is the case for general BEKK models, used
in Patton (2006) and Dias and Embrechts (2009).
In what follows, one starts, in Section 2, by describing the model and discussing pa-
rameter estimation for copulas. Test statistics based on the empirical copula process
and Rosenblatt's transform are then proposed in Sections 3 and 4 respectively,
together with implementations of the parametric bootstrap. The main result for test-
ing goodness-of-fit using the empirical copula process is given in Proposition 5, while
its analogue using Rosenblatt's transform is given in Proposition 6. Change-point
problems are discussed in Section 5, either for univariate series or copulas, while an
example of application using some data of Chen and Fan (2006) is treated in Section
6. The main results on the convergence of the empirical processes are stated and
proved in the Appendix.

2. Model and estimation


Following Chen and Fan (2006), one considers a stochastic volatility model for a
multivariate time series $X_i$, i.e., for $i \ge 1$ and $j = 1, \ldots, d$,
$$X_{ji} = \mu_{ji}(\theta) + h_{ji}(\theta)^{1/2}\,\varepsilon_{ji},$$
or in vector form
$$X_i = \mu_i(\theta) + \sigma_i(\theta)\,\varepsilon_i,$$
where the innovations $\varepsilon_i = (\varepsilon_{1i}, \ldots, \varepsilon_{di})^\top$ are i.i.d. with continuous distribution
function $K$, $\sigma_i = \mathrm{diag}\{h_{1i}^{1/2}, \ldots, h_{di}^{1/2}\}$, and where $\mu_i$, $\sigma_i$ are $\mathcal{F}_{i-1}$-measurable and
independent of $\varepsilon_i$. Here $\mathcal{F}_{i-1}$ contains information from the past and possible in-
formation from exogenous variables. That model, studied in Chen and Fan (2006),
contains as a particular case BEKK models (Engle and Kroner, 1995) with diagonal
conditional volatility matrix. Note that in many applications, univariate stochastic
volatility models are fitted separately to each time series $(X_{ji})_{i=1}^n$, $j = 1, \ldots, d$.
Given an estimator $\theta_n$ of $\theta$, compute the residuals $e_{i,n} = (e_{1i,n}, \ldots, e_{di,n})^\top$, where
$$e_{i,n} = \sigma_i^{-1}(\theta_n)\{X_i - \mu_i(\theta_n)\}.$$

Since the distribution function $K$ is continuous, there exists a unique copula $C$
(Sklar, 1959) so that for all $x = (x_1, \ldots, x_d) \in \mathbb{R}^d$,
$$K(x) = C\{F(x)\}, \qquad F(x) = (F_1(x_1), \ldots, F_d(x_d))^\top, \qquad (1)$$
where $F_1, \ldots, F_d$ are the marginal distribution functions of $K$, i.e., $F_j$ is the distribu-
tion function of $\varepsilon_{ji}$. Setting $U_i = F(\varepsilon_i)$, one gets that $U_i$ has distribution $C$, denoted
by $U_i \sim C$, $i = 1, \ldots, n$.




Since the copula is independent of the margins, it is generally suggested¹ to remove
their effect by replacing $e_i$ with the associated rank vector
$$U_{i,n} = (U_{1i,n}, \ldots, U_{di,n})^\top, \qquad U_{ji,n} = \mathrm{Rank}(e_{ji,n})/(n+1),$$
with $\mathrm{Rank}(e_{ji,n})$ being the rank of $e_{ji,n}$ amongst $e_{j1,n}, \ldots, e_{jn,n}$, $j = 1, \ldots, d$. That
can also be written as $U_{i,n} = F_n(e_{i,n})$, where $F_n(x) = (F_{1n}(1, x_1), \ldots, F_{dn}(1, x_d))^\top$,
and
$$F_{jn}(s, x_j) = \frac{1}{n+1}\sum_{i=1}^{\lfloor ns \rfloor} \mathbf{1}(e_{ji,n} \le x_j), \qquad j = 1, \ldots, d, \quad (s, x) \in [0,1] \times \mathbb{R}^d. \qquad (2)$$
The main results of the paper are deduced from the asymptotic behaviour (see
Appendix B) of the partial-sum empirical process
$$\mathbb{K}_n(s, x) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns \rfloor} \{\mathbf{1}(e_{i,n} \le x) - K(x)\}, \qquad s \in [0,1], \ x \in \mathbb{R}^d. \qquad (3)$$
The reason for introducing partial sums in (2) and (3) will become clear in Section 5,
where one studies detection of change-points.
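For concreteness, the passage from residuals to the pseudo-observations used in (2) can be sketched as follows. This is a minimal NumPy sketch (the function name `pseudo_observations` is ours), assuming no ties among residuals, as the continuity of $K$ warrants almost surely:

```python
import numpy as np

def pseudo_observations(e):
    """Map an (n, d) array of residuals e_{i,n} to the rank-based
    pseudo-observations U_{ji,n} = Rank(e_{ji,n}) / (n + 1), column by column."""
    e = np.asarray(e, dtype=float)
    n = e.shape[0]
    # argsort of argsort yields 0-based ranks within each column; add 1 for ranks
    ranks = e.argsort(axis=0).argsort(axis=0) + 1
    return ranks / (n + 1.0)
```

Dividing by $n+1$ rather than $n$ keeps the pseudo-observations in the open unit cube, which matters later when quantile functions or copula densities are evaluated at them.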

In the next two sections, one studies tests of goodness-of-fit for parametric copula
families, i.e., one wants to test the null hypothesis
$$H_0 : C \in \mathcal{C} = \{C_\theta \,;\, \theta \in \mathcal{O}\},$$
for some parametric family of copulas $\mathcal{C}$. Typical families are of the meta-elliptic type
(Gaussian and Student copulas) and Archimedean copulas (Clayton, Frank, Gumbel).
As proposed in Dias and Embrechts (2004), Chen and Fan (2006) and Patton (2006),
one could also consider mixtures of such copulas. These families are listed in Appen-
dix D, together with their parameters.

It is assumed that $K$ and $F_1, \ldots, F_d$ have continuous densities $h, f_1, \ldots, f_d$ respec-
tively. As a result, the copula $C$ has a density $c$ which satisfies
$$h(x) = c\{F(x)\} \prod_{j=1}^d f_j(x_j).$$

Under $H_0$, each copula $C_\theta$ is assumed to admit a density $c_\theta$ satisfying hypotheses
B1–B3 described in Appendix A, and depends on the parameter $\theta$ which must be
estimated.

In what follows one lists some estimators of copula parameters and one also studies
some of their asymptotic properties.
¹Another method, proposed by Xu (1996), is to find parametric estimators for each margin, so
that it becomes a fully parametric problem with two-stage estimation. On the negative side, it is less
accurate than a fully parametric estimation, and errors on the margins will be reflected in the estimation
of the parameters of the copula.

2.1. Estimation of copula parameters.


2.1.1. Pseudo-likelihood estimators. In Chen and Fan (2006), it is shown that under
smoothness conditions (conditions D, C, and N in their article), the pseudo maximum
likelihood estimator
$$\theta_n = \arg\max_{\theta} \sum_{i=1}^n \log c_\theta(U_{i,n})$$
is asymptotically Gaussian with covariance matrix depending only on $c_\theta$. Therefore,
the asymptotic behaviour does not depend on the estimation of the parameter
required for the evaluation of the residuals! In fact, it has the same representation as
the estimator studied by Genest et al. (1995) in the serially independent case, i.e., as if
the parameter were known. More precisely, one has
$$\Theta_n = \sqrt{n}(\theta_n - \theta) = J^{-1}(W_n - Z_n) + o_P(1), \qquad (4)$$
where $W_n = \frac{1}{\sqrt{n}}\sum_{i=1}^n \frac{\dot c_\theta(U_i)}{c_\theta(U_i)}$, $Z_n = \frac{1}{\sqrt{n}}\sum_{j=1}^d \sum_{i=1}^n Q_j(U_{ji})$,
$$Q_j(u_j) = \int_{(0,1)^d} \{\mathbf{1}(u_j \le v_j) - v_j\}\, \partial_{v_j}\!\left\{\frac{\dot c_\theta(v)}{c_\theta(v)}\right\} c_\theta(v)\, dv,$$
and where $J$ is Fisher's information matrix $\int_{(0,1)^d} \dot c_\theta(u)^\top \dot c_\theta(u)/c_\theta(u)\, du$. Here $\dot c_\theta$ is the row
vector given by the gradient of $c_\theta$ with respect to $\theta$. Note that $\binom{W_n}{Z_n}$ converges in
law to a centered Gaussian vector $\binom{W}{Z}$ with covariance matrix $\begin{pmatrix} J & 0 \\ 0 & \Sigma \end{pmatrix}$. It follows that $\Theta_n$ converges in law
to $N(0, J^{-1} + J^{-1}\Sigma J^{-1})$. Note also that $E(\Theta W^\top) = I$, i.e., $\theta_n$ is a regular
estimator for $\theta$ in the sense of Genest and Rémillard (2008), where it is shown that
regular estimators are essential for the validity of the parametric bootstrap procedure.
2.1.2. Two-stage estimators. In addition to pseudo-likelihood estimators, one may
consider a two-stage estimator. That is, suppose that $\theta = \binom{\theta_1}{\theta_2}$. Decompose
also $W_n$ and $Z_n$ accordingly. Suppose that $\theta_{1,n}$ is an estimator of $\theta_1$ that is regular
in the sense that $\Theta_{1,n} = \sqrt{n}(\theta_{1,n} - \theta_1)$ converges in law to $\Theta_1 \sim N(0, \Sigma_1)$ with
$E(\Theta_1 W_1^\top) = I$ and $E(\Theta_1 W_2^\top) = 0$. Now define $\theta_{2,n}$ as the pseudo-likelihood estimator
of the reduced log-likelihood, viz.
$$\theta_{2,n} = \arg\max_{\theta_2 \in \mathcal{O}_2} \sum_{i=1}^n \log c_{\theta_{1,n}, \theta_2}(U_{i,n}).$$

It is then easy to check that
$$W_{2,n} - Z_{2,n} = J_{21}\Theta_{1,n} + J_{22}\Theta_{2,n} + o_P(1),$$
so $\Theta_n$ converges to $\Theta = \binom{\Theta_1}{\Theta_2}$, with $\Theta_2 = J_{22}^{-1}(W_2 - Z_2 - J_{21}\Theta_1)$. As a result,
$E(\Theta_2 W_1^\top) = 0$ and $E(\Theta_2 W_2^\top) = I$. This proves that $\theta_n$ is a regular estimator of $\theta$,

since $(\Theta, W)$ is a centered Gaussian vector with $E(\Theta W^\top) = I$. Two-stage estimation
is often used for meta-elliptical copulas, which depend on a correlation matrix $\rho$ and
possibly other parameters. It is known that $\rho$ can be expressed in terms of functions
of Kendall's tau, playing the role of $\theta_1$, while the remaining parameters are defined
as $\theta_2$. In fact, $\tau_{jk} = \tau(U_{ji}, U_{ki}) = \frac{2}{\pi}\arcsin(\rho_{jk})$ (Fang et al., 2002). For example, in
the Student copula case, $\theta_2$ would be the degrees of freedom.

It follows from Proposition 1 in the next section that if $\tau_{jk,n}$ is the empirical
Kendall's tau for the pairs $(U_{ji,n}, U_{ki,n})$, $i = 1, \ldots, n$, then for all $1 \le j < k \le d$,
$$\sqrt{n}(\tau_{jk,n} - \tau_{jk}) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{8C^{(j,k)}(U_{ji}, U_{ki}) - 4U_{ji} - 4U_{ki} + 2 - 2\tau_{jk}\right\} + o_P(1)$$
converge to centered Gaussian variables $R^K_{jk}$, where $C^{(j,k)}$ is the copula of $(U_{ji}, U_{ki})$.
Setting $\rho_{jk,n} = \sin(\pi\tau_{jk,n}/2)$,² it follows that
$$R_{jk,n} = \sqrt{n}(\rho_{jk,n} - \rho_{jk}) = \frac{\pi}{2}(1 - \rho_{jk}^2)^{1/2}\sqrt{n}(\tau_{jk,n} - \tau_{jk}) + o_P(1)$$
and $(R_{jk,n})_{1 \le j < k \le d}$ converges in law to $(R_{jk})_{1 \le j < k \le d}$, with $R_{jk} = \frac{\pi}{2}(1 - \rho_{jk}^2)^{1/2} R^K_{jk}$.
Note that
$$E(R^K_{jk} W^\top) = 8\int_{[0,1]^2} \dot C^{(j,k)}(u_j, u_k)\, dC^{(j,k)}(u_j, u_k) = \nabla_\theta \tau_{jk}.$$
Hence
$$E(R_{jk} W^\top) = (1 - \rho_{jk}^2)^{1/2}\, \nabla_\theta\{\arcsin(\rho_{jk})\} = \nabla_\theta \rho_{jk}.$$


As a result, if $\theta_1$ is the vector of components $\rho_{jk}$ with $1 \le j < k \le d$, and $\theta_2$ does not
depend on $\rho$, then $E(\Theta_1 W_1^\top) = I$ and $E(\Theta_1 W_2^\top) = 0$. This shows that two-stage
estimators are regular for meta-elliptic copula families (defined in Appendix D.2).

Many copula families have parameters linked to rank-based measures of depen-
dence. The most common Archimedean families (Clayton, Frank and Gumbel) can
all be indexed by Kendall's tau (see Appendix D.1, Table 1), the Gaussian copula has the van
der Waerden correlation matrix as parameter (see Appendix D.2), and the Plackett
copula can be indexed by Spearman's rho (Nelsen, 2006). The estimation of these
parameters when using the ranks of residuals is treated next.
2.2. Asymptotic behaviour of some rank-based dependence measures. In
this section one investigates the asymptotic behaviour of four well-known rank-based
dependence measures: Kendall's tau, Spearman's rho, and the van der Waerden and
Blomqvist coefficients. The main result is that these measures behave asymptotically like the
ones computed from innovations, extending the results of Chen and Fan (2006). The
proofs depend on the asymptotic behaviour of the empirical copula process and they
are given in Appendix C.
²If $d > 2$, one could have to slightly modify the vector with components $\sin(\pi\tau_{jk,n}/2)$ in order to
make $(\rho_n)$ a correlation matrix.

2.2.1. Kendall's tau. The empirical Kendall coefficient $\tau_{jk,n}$ for the pairs $(e_{ji,n}, e_{ki,n})$,
$i = 1, \ldots, n$, is defined by
$$\tau_{jk,n} = \frac{2}{n(n-1)}\,(\text{number of concordant pairs} - \text{number of discordant pairs}),$$
where the pairs $(e_{ji,n}, e_{ki,n})$ and $(e_{jl,n}, e_{kl,n})$ are concordant if $(e_{ji,n} - e_{jl,n})(e_{ki,n} -
e_{kl,n}) > 0$, $i \ne l$. Otherwise, they are said to be discordant. Its theoretical counterpart
can be written as
$$\tau_{jk} = 4\int_0^1\!\!\int_0^1 C^{(j,k)}(u_j, u_k)\, dC^{(j,k)}(u_j, u_k) - 1,$$
with values in $[-1, 1]$ and with value $0$ under independence. Let $\tilde\tau_{jk,n}$ be Kendall's
tau calculated with the pairs of innovations $(\varepsilon_{ji}, \varepsilon_{ki})$, $i = 1, \ldots, n$.

Proposition 1. Under assumptions (A1)–(A6), for all $1 \le j < k \le d$, $\sqrt{n}(\tau_{jk,n} -
\tilde\tau_{jk,n}) = o_P(1)$ and
$$\sqrt{n}(\tau_{jk,n} - \tau_{jk}) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{8C^{(j,k)}(U_{ji}, U_{ki}) - 4U_{ji} - 4U_{ki} + 2 - 2\tau_{jk}\right\} + o_P(1)$$
converge to centered Gaussian variables $R^K_{jk}$, with
$$E(R^K_{jk} W^\top) = 8\int_0^1\!\!\int_0^1 \dot C^{(j,k)}(u_j, u_k)\, dC^{(j,k)}(u_j, u_k) = \nabla_\theta \tau_{jk}.$$
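The concordance-count definition of Section 2.2.1 can be transcribed directly; the following is an $O(n^2)$ sketch for clarity (the function name is ours, and ties are assumed away):

```python
import numpy as np

def kendall_tau(x, y):
    """Empirical Kendall's tau: 2/(n(n-1)) * (#concordant - #discordant),
    a pair (i, l) being concordant when (x_i - x_l)(y_i - y_l) > 0."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    conc = disc = 0
    for i in range(n):
        # compare observation i with all earlier observations l < i
        s = np.sign(x[i] - x[:i]) * np.sign(y[i] - y[:i])
        conc += int(np.sum(s > 0))
        disc += int(np.sum(s < 0))
    return 2.0 * (conc - disc) / (n * (n - 1))
```

Since the statistic is rank-based, applying it to the residuals $e_{ji,n}$ or to the pseudo-observations $U_{ji,n}$ gives the same value.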

2.2.2. Spearman's rho. Spearman's empirical coefficient $\rho^S_{jk,n}$ is the correlation coef-
ficient of the pairs $(U_{ji,n}, U_{ki,n})$, $i = 1, \ldots, n$, while its theoretical counterpart $\rho^S_{jk}$ is
$$\mathrm{Cor}(U_{ji}, U_{ki}) = 12\,\mathrm{Cov}(U_{ji}, U_{ki}) = 12\int_0^1\!\!\int_0^1 \{C^{(j,k)}(u_j, u_k) - u_j u_k\}\, du_j\, du_k.$$
It has values in $[-1, 1]$ and has value $0$ under independence. Further let $\tilde\rho^S_{jk,n}$ be Spearman's
rho calculated with the pairs $(U_{ji}, U_{ki})$, $i = 1, \ldots, n$.

Proposition 2. Under assumptions (A1)–(A6), $\sqrt{n}\big(\rho^S_{jk,n} - \tilde\rho^S_{jk,n}\big) = o_P(1)$ and
$$\sqrt{n}\big(\rho^S_{jk,n} - \rho^S_{jk}\big) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \Big\{12(U_{ji} - 1/2)(U_{ki} - 1/2) - \rho^S_{jk} + 6(U_{ji} - 1/2)^2 + 6(U_{ki} - 1/2)^2 - 1\Big\} + o_P(1)$$
converge to centered Gaussian variables $R^S_{jk}$ with
$$E(R^S_{jk} W^\top) = 12\int_0^1\!\!\int_0^1 \dot C^{(j,k)}(u_j, u_k)\, du_j\, du_k = \nabla_\theta \rho^S_{jk}.$$
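Spearman's empirical coefficient is just the Pearson correlation of the pseudo-observations; a minimal sketch (the function name is ours):

```python
import numpy as np

def spearman_rho(u, v):
    """Empirical Spearman's rho: the sample correlation coefficient of
    the pairs of pseudo-observations (U_{ji,n}, U_{ki,n})."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    uc, vc = u - u.mean(), v - v.mean()  # center both coordinates
    return float(np.sum(uc * vc) / np.sqrt(np.sum(uc ** 2) * np.sum(vc ** 2)))
```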

2.2.3. van der Waerden's coefficient. Let $N$ and $N^{-1}$ be respectively the distribution
function and the quantile function of the standard Gaussian distribution. Then the
van der Waerden empirical coefficient $\rho^W_{jk,n}$ is the correlation coefficient of the pairs
$(Z_{ji,n}, Z_{ki,n})$, $i = 1, \ldots, n$, where $Z_{ji,n} = N^{-1}(U_{ji,n})$. Its theoretical counterpart $\rho^W_{jk}$
is defined by
$$\mathrm{Cor}(Z_{ji}, Z_{ki}) = E(Z_{ji} Z_{ki}) = \int_0^1\!\!\int_0^1 \{C^{(j,k)}(u_j, u_k) - u_j u_k\}\, dN^{-1}(u_j)\, dN^{-1}(u_k),$$
with $Z_{ji} = N^{-1}(U_{ji})$. It has values in $[-1, 1]$ and has value $0$ under independence.
Further let $\tilde\rho^W_{jk,n}$ be the van der Waerden coefficient calculated with the pairs $(U_{ji}, U_{ki})$,
$i = 1, \ldots, n$.

Proposition 3. Under assumptions (A1)–(A6), $\sqrt{n}\big(\rho^W_{jk,n} - \tilde\rho^W_{jk,n}\big) = o_P(1)$ and
$$\sqrt{n}\big(\rho^W_{jk,n} - \rho^W_{jk}\big) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \left\{Z_{ji} Z_{ki} - \rho^W_{jk} - \gamma_{jk}(Z_{ji}) - \gamma_{kj}(Z_{ki})\right\} + o_P(1)$$
converge to centered Gaussian variables $R^W_{jk}$, with
$$E(R^W_{jk} W^\top) = \int_0^1\!\!\int_0^1 \dot C^{(j,k)}(u_j, u_k)\, dN^{-1}(u_j)\, dN^{-1}(u_k) = \nabla_\theta \rho^W_{jk},$$
where
$$\gamma_{jk}(z_j) = \int_{\mathbb{R}} \{\mathbf{1}(z_j \le x) - N(x)\}\, E(Z_{k1} \mid Z_{j1} = x)\, dx$$
and
$$\gamma_{kj}(z_k) = \int_{\mathbb{R}} \{\mathbf{1}(z_k \le y) - N(y)\}\, E(Z_{j1} \mid Z_{k1} = y)\, dy.$$
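The van der Waerden coefficient is the correlation of the normal scores $Z_{ji,n} = N^{-1}(U_{ji,n})$; with the standard library's `statistics.NormalDist` supplying the Gaussian quantile function, a sketch (names ours) is:

```python
import numpy as np
from statistics import NormalDist

def van_der_waerden(u, v):
    """Normal-scores (van der Waerden) coefficient: the sample correlation
    of Z = N^{-1}(U) for pseudo-observations u, v in the open interval (0, 1)."""
    inv = NormalDist().inv_cdf  # standard Gaussian quantile function N^{-1}
    z = np.array([inv(t) for t in u])
    w = np.array([inv(t) for t in v])
    zc, wc = z - z.mean(), w - w.mean()
    return float(np.sum(zc * wc) / np.sqrt(np.sum(zc ** 2) * np.sum(wc ** 2)))
```

This is one reason the pseudo-observations are defined with the $1/(n+1)$ normalization: $N^{-1}$ would blow up at $0$ or $1$.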

2.2.4. Blomqvist's coefficient. Blomqvist's empirical coefficient $\beta_{jk,n}$ is defined as
$$\beta_{jk,n} = \frac{4}{n}\sum_{i=1}^n \mathbf{1}(U_{ji,n} \le 1/2,\ U_{ki,n} \le 1/2) - 1.$$

Its theoretical counterpart $\beta_{jk} = 4P(U_{ji} \le 1/2, U_{ki} \le 1/2) - 1$ has values in $[-1, 1]$,
with value zero under independence. Further let $\tilde\beta_{jk,n}$ be Blomqvist's coefficient cal-
culated with the pairs $(U_{ji}, U_{ki})$, $i = 1, \ldots, n$.

Proposition 4. Under assumptions (A1)–(A6), $\sqrt{n}\big(\beta_{jk,n} - \tilde\beta_{jk,n}\big) = o_P(1)$ and
$$\sqrt{n}\big(\beta_{jk,n} - \beta_{jk}\big) = \frac{4}{\sqrt{n}}\sum_{i=1}^n \Big[\mathbf{1}(U_{ji} \le 1/2,\ U_{ki} \le 1/2) - C^{(j,k)}(1/2, 1/2)$$
$$\qquad\qquad - \{\mathbf{1}(U_{ji} \le 1/2) - 1/2\}\,\partial_{u_j} C^{(j,k)}(1/2, 1/2) - \{\mathbf{1}(U_{ki} \le 1/2) - 1/2\}\,\partial_{u_k} C^{(j,k)}(1/2, 1/2)\Big] + o_P(1)$$
converge to centered Gaussian variables $R^B_{jk}$ with
$$E(R^B_{jk} W^\top) = 4\,\dot C^{(j,k)}(1/2, 1/2) = \nabla_\theta \beta_{jk}.$$
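Blomqvist's coefficient only needs the empirical mass of the lower-left quadrant at $(1/2, 1/2)$; a one-line sketch (name ours):

```python
import numpy as np

def blomqvist_beta(u, v):
    """Empirical Blomqvist coefficient:
    (4/n) * #{i : U_{ji,n} <= 1/2 and U_{ki,n} <= 1/2} - 1."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(4.0 * np.mean((u <= 0.5) & (v <= 0.5)) - 1.0)
```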

3. Inference procedure using the empirical copula

Tests of goodness-of-fit can be designed by computing some kind of distance be-
tween the empirical copula $C_n$, defined by
$$C_n(u) = \frac{1}{n}\sum_{i=1}^n \mathbf{1}(U_{i,n} \le u), \qquad u \in [0,1]^d, \qquad (5)$$
and the best representative $C_{\theta_n}$ of the parametric family $\mathcal{C}$, since $C_n$ is a non-
parametric estimator of the true copula $C$. Here, it is assumed that $\theta_n$ is a rank-
based estimator of $\theta$, i.e., $\theta_n = T_n(U_{1,n}, \ldots, U_{n,n})$, for some deterministic function
$T_n(u_1, \ldots, u_n)$.
For example, for testing $H_0$, one could use the Cramér–von Mises type statistic
based on the process $\mathbb{A}_n = \sqrt{n}(C_n - C_{\theta_n})$, viz.
$$S_n = \int_{[0,1]^d} \mathbb{A}_n^2(u)\, dC_n(u) = \sum_{i=1}^n \left\{C_n(U_{i,n}) - C_{\theta_n}(U_{i,n})\right\}^2. \qquad (6)$$

According to Genest et al. (2009), $S_n$ is one of the best statistics constructed from
$\mathbb{A}_n$ for an omnibus test³, and is much more powerful and easier to compute than the
Kolmogorov–Smirnov type statistic $\|\mathbb{A}_n\| = \sup_{u \in [0,1]^d} |\mathbb{A}_n(u)|$. That is why the latter
is ignored in the present paper.
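Thanks to the second expression in (6), $S_n$ requires only the empirical copula and the parametric copula evaluated at the pseudo-observations. The following sketch uses the Clayton family, whose distribution function is available in closed form; the family is only an illustration, $\theta$ is assumed to have been estimated beforehand, and the function names are ours:

```python
import numpy as np

def clayton_cdf(U, theta):
    """Clayton copula C_theta(u) = (u_1^-theta + ... + u_d^-theta - d + 1)^(-1/theta),
    theta > 0; used here only as a concrete parametric family."""
    U = np.asarray(U, float)
    return (np.sum(U ** (-theta), axis=-1) - U.shape[-1] + 1.0) ** (-1.0 / theta)

def empirical_copula(U, points):
    """C_n(u) = (1/n) sum_i 1(U_i <= u), evaluated at each row of `points`."""
    U = np.asarray(U, float)
    return np.array([np.mean(np.all(U <= p, axis=1)) for p in points])

def cvm_statistic(U, theta):
    """S_n = sum_i {C_n(U_{i,n}) - C_theta(U_{i,n})}^2, as in (6)."""
    U = np.asarray(U, float)
    Cn = empirical_copula(U, U)
    return float(np.sum((Cn - clayton_cdf(U, theta)) ** 2))
```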
To be able to state the convergence result for $S_n$, one needs to introduce auxiliary
empirical processes. For any $s \in [0,1]$ and $u \in [0,1]^d$, set
$$\alpha_n(s, u) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns \rfloor} \left\{\prod_{j=1}^d \mathbf{1}(U_{ji} \le u_j) - C(u)\right\},$$
and $\beta_{j,n}(s, u_j) = \alpha_n(s, 1, \ldots, 1, u_j, 1, \ldots, 1) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns \rfloor} \{\mathbf{1}(U_{ji} \le u_j) - u_j\}$, $j = 1, \ldots, d$.
It is well known (Bickel and Wichura, 1971) that $\alpha_n \rightsquigarrow \alpha$⁴, where $\alpha$ is a $C$-Kiefer
process, i.e., $\alpha$ is a continuous centered Gaussian process with $\mathrm{Cov}\{\alpha(s, u), \alpha(t, v)\} =
(s \wedge t)\{C(u \wedge v) - C(u)C(v)\}$, $s, t \in [0,1]$ and $u, v \in [0,1]^d$. Here $(u \wedge v)_j =
\min(u_j, v_j)$, $j = 1, \ldots, d$.

For convenience, set
$$\mathbb{C}(s, u) = \alpha(s, u) - s\sum_{j=1}^d \beta_j(1, u_j)\, \partial_{u_j} C(u), \qquad (s, u) \in [0,1] \times [0,1]^d. \qquad (7)$$
Recall that $\mathbb{C}(1, \cdot)$ is the limit of the empirical copula process constructed from in-
novations; see, e.g., Gaenssler and Stute (1987), Fermanian et al. (2004), Tsukahara
³Of course, if the parametric family under $H_1$ is specified, then one can find better test statistics
than $S_n$; see, e.g., Berg and Quessy (2009).
⁴Convergence of processes means convergence with respect to the Skorohod topology for the space
of càdlàg processes, and is denoted by $\rightsquigarrow$. The processes studied here are indexed by $[0,1] \times [0,1]^d$,
$[0,1] \times [-\infty, +\infty]^d$, or products of these spaces. Note that random vectors belong to these spaces,
being constant random functions.

(2005). The process $\mathbb{C}$, which could be called the Kiefer copula process, will be impor-
tant in Section 5.

It follows from Corollary 1 that the empirical copula process $\mathbb{C}_n(1, u) = \sqrt{n}\{C_n(u) -
C(u)\}$ converges to $\mathbb{C}(1, u)$, which does not depend on the parameters of the condi-
tional mean and conditional volatility. As a result, the limiting distribution of $\mathbb{A}_n$
also shares that property, depending only on $C$ and $\theta$ under $H_0$. The basic result for
testing goodness-of-fit using the empirical copula process is stated next.
As in Genest and Rémillard (2008), assume, for identifiability purposes, that for
every $\epsilon > 0$,
$$\inf\left\{\sup_{u \in [0,1]^d} |C_\theta(u) - C_{\theta_0}(u)| \,:\, \theta \in \mathcal{O} \text{ and } |\theta - \theta_0| > \epsilon\right\} > 0.$$

Furthermore, the mapping $\theta \mapsto C_\theta$ is assumed to be Fréchet differentiable with
derivative $\dot C_\theta$, i.e., for all $\theta_0 \in \mathcal{O}$,
$$\lim_{h \to 0}\, \sup_{u \in [0,1]^d} \frac{|C_{\theta_0 + h}(u) - C_{\theta_0}(u) - \dot C_{\theta_0}(u)\, h|}{\|h\|} = 0. \qquad (8)$$
Before stating the main result of the section, one needs to extend the notion of
regularity of $\theta_n$ as defined in Genest and Rémillard (2008). One says that $\theta_n$ is
regular for $\theta$ if $(\Theta_n, W_n, \alpha_n) \rightsquigarrow (\Theta, W, \alpha)$, where the latter is centered Gaussian
with $E(\Theta W^\top) = I$. Note that the estimators described in Section 2.1 (under the
additional assumptions of Chen and Fan (2006)) and Section 2.2 (under assumptions
(A1)–(A6), stated in Appendix B) are all regular.

Proposition 5. Under assumptions (A1)–(A6), if $\theta_n$ is regular for $\theta$, then $S_n$ con-
verges in law to $S = \int_{[0,1]^d} \mathbb{A}^2(u)\, dC(u)$, where $\mathbb{A} = \mathbb{C}(1, \cdot) - \dot C_\theta\, \Theta$.
In fact, if $\psi$ is a continuous function on the space $C([0,1]^d)$, then the statistic
$T_n = \psi(\mathbb{A}_n)$ converges in law to $T = \psi(\mathbb{A})$. Moreover, the parametric bootstrap
algorithm described next or the two-level parametric bootstrap proposed in Genest
et al. (2009) can be used to estimate $P$-values of $S_n$ or $T_n$.
3.1. Parametric bootstrap for $S_n$. The following procedure leads to an approx-
imate $P$-value for the test based on $S_n$. The adaptations required for any other
function of $\mathbb{A}_n$ are obvious. It can be used only if there is an explicit expression for
$C_\theta$. Otherwise, the two-level parametric bootstrap must be used.
1.- Compute $C_n$ as defined in (5) and estimate $\theta$ with $\theta_n = T_n(U_{1,n}, \ldots, U_{n,n})$.
2.- Compute the value of $S_n$, as defined by (6).
3.- For some large integer $N$, repeat the following steps for every $k \in \{1, \ldots, N\}$:
(a) Generate a random sample $Y^{(k)}_{1,n}, \ldots, Y^{(k)}_{n,n}$ from distribution $C_{\theta_n}$ and com-
pute the pseudo-observations $U^{(k)}_{i,n} = R^{(k)}_{i,n}/(n+1)$, where $R^{(k)}_{1,n}, \ldots, R^{(k)}_{n,n}$
are the associated rank vectors of $Y^{(k)}_{1,n}, \ldots, Y^{(k)}_{n,n}$.
(b) Set
$$C^{(k)}_n(u) = \frac{1}{n}\sum_{i=1}^n \mathbf{1}\big(U^{(k)}_{i,n} \le u\big), \qquad u \in [0,1]^d,$$
and estimate $\theta$ by $\theta^{(k)}_n = T_n\big(U^{(k)}_{1,n}, \ldots, U^{(k)}_{n,n}\big)$.
(c) Set
$$S^{(k)}_n = \sum_{i=1}^n \Big\{C^{(k)}_n\big(U^{(k)}_{i,n}\big) - C_{\theta^{(k)}_n}\big(U^{(k)}_{i,n}\big)\Big\}^2.$$
An approximate $P$-value for the test is then given by $\frac{1}{N}\sum_{k=1}^N \mathbf{1}\big(S^{(k)}_n > S_n\big)$.
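The algorithm above can be sketched end to end for the bivariate Clayton family, with $T_n$ taken to be inversion of Kendall's tau, $\theta = 2\tau/(1-\tau)$, and sampling done by the Marshall–Olkin frailty construction. This is only an illustration under those assumptions (all names are ours), not the paper's general procedure:

```python
import numpy as np

def sample_clayton(n, d, theta, rng):
    """Frailty sampler: V ~ Gamma(1/theta), U_j = (1 + E_j/V)^(-1/theta)."""
    V = rng.gamma(1.0 / theta, size=(n, 1))
    E = rng.exponential(size=(n, d))
    return (1.0 + E / V) ** (-1.0 / theta)

def pseudo_obs(X):
    X = np.asarray(X, float)
    return (X.argsort(0).argsort(0) + 1.0) / (X.shape[0] + 1.0)

def tau_inversion(U):
    """Moment estimator theta_n = 2*tau_n/(1 - tau_n), bivariate Clayton."""
    n = U.shape[0]
    conc = 0
    for i in range(n):
        conc += int(np.sum((U[i, 0] - U[:i, 0]) * (U[i, 1] - U[:i, 1]) > 0))
    tau = 4.0 * conc / (n * (n - 1)) - 1.0
    return 2.0 * tau / (1.0 - tau)

def cvm(U, theta):
    """S_n of (6) for the bivariate Clayton copula (d = 2, hence the -1)."""
    Cn = np.array([np.mean(np.all(U <= u, axis=1)) for u in U])
    Ct = (np.sum(U ** (-theta), axis=1) - 1.0) ** (-1.0 / theta)
    return float(np.sum((Cn - Ct) ** 2))

def bootstrap_pvalue(U, N=100, seed=0):
    """One-level parametric bootstrap of Section 3.1 for the Clayton family."""
    rng = np.random.default_rng(seed)
    n = U.shape[0]
    theta_n = tau_inversion(U)
    Sn = cvm(U, theta_n)
    count = 0
    for _ in range(N):
        Uk = pseudo_obs(sample_clayton(n, 2, theta_n, rng))
        count += cvm(Uk, tau_inversion(Uk)) > Sn
    return count / N
```

With residuals in place of innovations, Proposition 5 is what justifies treating the pseudo-observations exactly as in the i.i.d. case here.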

Remark 1. Some authors, e.g., Kole et al. (2007), proposed test statistics of the
Anderson–Darling type, dividing $\mathbb{A}_n(u)$ by $\sqrt{C_n(u)\{1 - C_n(u)\}}$, and then integrat-
ing or taking the supremum. As argued in Genest et al. (2009) and Ghoudi and
Rémillard (2010), one should be very careful with these tests and in fact avoid them
totally, since the denominator only makes sense in the univariate case when parameters
are not estimated. In the present context, the limiting distribution of such weighted
processes has not been established and in fact, Ghoudi and Rémillard (2010) gave an
example where the limiting variance of the weighted process is infinite.

4. Inference procedure using Rosenblatt's transform

Based on recent results of Genest et al. (2009), one might also propose goodness-of-fit
tests constructed from Rosenblatt's transform (Rosenblatt, 1952). In their study,
such tests were among the most powerful omnibus tests.

Recall that the Rosenblatt mapping of a $d$-dimensional copula $C$ is the mapping
$R$ from $(0,1)^d$ to $(0,1)^d$ so that $u = (u_1, \ldots, u_d) \mapsto R(u) = (e_1, \ldots, e_d)$ with $e_1 = u_1$
and
$$e_i = \frac{\partial^{i-1} C(u_1, \ldots, u_i, 1, \ldots, 1)}{\partial u_1 \cdots \partial u_{i-1}} \Big/ \frac{\partial^{i-1} C(u_1, \ldots, u_{i-1}, 1, \ldots, 1)}{\partial u_1 \cdots \partial u_{i-1}}, \qquad (9)$$
$i = 2, \ldots, d$. Rosenblatt's transforms for Archimedean copulas and meta-elliptic cop-
ulas are quite easy to compute in any dimension; see, e.g., Rémillard et al. (2010).
The usefulness of Rosenblatt's transform lies in the following properties (Rosenblatt,
1952): Suppose that $V \sim C_\perp$, where $C_\perp$ is the independence copula, which is equiv-
alent to saying that $V$ is uniformly distributed over $(0,1)^d$. Then $R(U) \sim C_\perp$ if and
only if $U \sim C$. In addition, $R^{-1}(V) \sim C$. Since $U = R^{-1}(V)$ can be computed in a
recursive way, this is particularly useful for simulation purposes.
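For instance, for the bivariate Clayton copula $C_\theta(u_1, u_2) = (u_1^{-\theta} + u_2^{-\theta} - 1)^{-1/\theta}$, formula (9) reduces to $e_1 = u_1$ and $e_2 = \partial_{u_1} C_\theta(u_1, u_2)$, which a short sketch can compute directly (names ours; used purely as an illustration of (9)):

```python
import numpy as np

def rosenblatt_clayton(U, theta):
    """Rosenblatt transform (9) for the bivariate Clayton copula:
    e1 = u1,
    e2 = dC/du1 = u1^(-theta-1) * (u1^-theta + u2^-theta - 1)^(-1/theta - 1)."""
    U = np.asarray(U, float)
    u1, u2 = U[..., 0], U[..., 1]
    e2 = u1 ** (-theta - 1.0) * (u1 ** (-theta) + u2 ** (-theta) - 1.0) ** (-1.0 / theta - 1.0)
    return np.stack([u1, e2], axis=-1)
```

Under $H_0$ with the true $\theta$, the transformed pairs would be approximately uniform on the unit square, which is exactly what the tests of this section exploit.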
It follows that the null hypothesis $H_0 : C \in \mathcal{C} = \{C_\theta;\ \theta \in \mathcal{O}\}$ can be written in
terms of Rosenblatt's transforms, viz.
$$H_0 : R \in \{R_\theta;\ \theta \in \mathcal{O}\}.$$
Using an idea of Breymann et al. (2003), extending previous ideas of Durbin (1973)
and Diebold et al. (1998), one can build tests of goodness-of-fit by comparing the
empirical distribution function of $E_{i,n} = R_{\theta_n}(U_{i,n})$, $i = 1, \ldots, n$, with $C_\perp$, since
under $H_0$, $E_{i,n}$ should have approximately distribution $C_\perp$. More precisely, set
$$D_n(u) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \{\mathbf{1}(E_{i,n} \le u) - C_\perp(u)\}, \qquad u \in [0,1]^d, \qquad (10)$$

and define
$$S^{(B)}_n = \int_{[0,1]^d} D_n^2(u)\, du = \frac{n}{3^d} - \frac{1}{2^{d-1}}\sum_{i=1}^n \prod_{k=1}^d \big(1 - E_{ki,n}^2\big) + \frac{1}{n}\sum_{i=1}^n \sum_{j=1}^n \prod_{k=1}^d \big(1 - E_{ki,n} \vee E_{kj,n}\big), \qquad (11)$$
where $a \vee b = \max(a, b)$.
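The closed form in (11) can be transcribed directly; a sketch (name ours) taking the $n \times d$ matrix of transformed pseudo-observations:

```python
import numpy as np

def sn_b(E):
    """Cramer-von Mises statistic (11) computed from the matrix E of
    Rosenblatt-transformed pseudo-observations, shape (n, d)."""
    E = np.asarray(E, float)
    n, d = E.shape
    term1 = n / 3.0 ** d
    term2 = np.sum(np.prod(1.0 - E ** 2, axis=1)) / 2.0 ** (d - 1)
    # pairwise products of 1 - max(E_{ki}, E_{kj}) over coordinates k
    M = np.prod(1.0 - np.maximum(E[:, None, :], E[None, :, :]), axis=2)
    term3 = M.sum() / n
    return float(term1 - term2 + term3)
```

For a single point $E = 0.5$ in dimension one, this reproduces $\int_0^1 \{\mathbf{1}(0.5 \le u) - u\}^2\, du = 1/12$, a quick sanity check of the three terms.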


To define regular estimators in that setting, one needs to define
$$B_n(u) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \{\mathbf{1}(E_i \le u) - C_\perp(u)\}, \qquad u \in [0,1]^d.$$

It is easy to check that $(B_n, W_n) \rightsquigarrow (B, W)$, where the joint law is Gauss-
ian, and $B$ is a $C_\perp$-Brownian bridge. Now, when using Rosenblatt's transforms, one
says that $\theta_n$ is regular for $\theta$ if $(B_n, W_n, \Theta_n) \rightsquigarrow (B, W, \Theta)$, where the latter is cen-
tered Gaussian with $E(\Theta W^\top) = I$. Again, the estimators described in Section 2.1
(under the additional assumptions of Chen and Fan (2006)) and Section 2.2 (under
assumptions (A1)–(A6)) are all regular.
As in the case of copula processes studied in the previous section, in order to prove
the following result, one must assume that $R_\theta$ is Fréchet differentiable, i.e.,
$$\lim_{h \to 0}\, \sup_{u \in [0,1]^d} \frac{\|R_{\theta_0 + h}(u) - R_{\theta_0}(u) - \dot R_{\theta_0}(u)\, h\|}{\|h\|} = 0. \qquad (12)$$
One also has to assume that $R_\theta$ is continuously differentiable with respect to $u \in (0,1)^d$.
One can now state the main result of the section.
Proposition 6. Under assumptions (A1)–(A6), if $\theta_n$ is regular for $\theta$, then $S^{(B)}_n$
converges in law to $S^{(B)} = \int_{[0,1]^d} D^2(u)\, du$, where $D$ is a continuous centered Gaussian
process depending only on $C$ and $\theta$.
In fact, if $\psi$ is a continuous function on the space $C([0,1]^d)$, then the statistic
$T_n = \psi(D_n)$ converges in law to $T = \psi(D)$. Moreover, the parametric bootstrap
algorithm described next in Section 4.1 can be used to estimate $P$-values of $S^{(B)}_n$ or
$T_n$.
The expression for $D$ is given in Theorem 2 of Appendix B.
Remark 2. Set $U_{i,n} = R_i/(n+1)$, where $R_1, \ldots, R_n$ are the associated rank vectors
of $U_1, \ldots, U_n$, and let $E_{i,n} = R_{\theta_n}(U_i)$, where $\theta_n$ is the estimator of $\theta$ calculated
with $U_{i,n} = R_i/(n+1)$, $i = 1, \ldots, n$. Then, it follows from Theorem 2 that $D_n \rightsquigarrow D$,
where
$$D_n(u) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \{\mathbf{1}(E_{i,n} \le u) - C_\perp(u)\}, \qquad u \in [0,1]^d. \qquad (13)$$
4.1. A parametric bootstrap for $S^{(B)}_n$. The following algorithm is described in
terms of the statistic $S^{(B)}_n$ but can be applied easily to any statistic of the form $T_n = \psi(D_n)$.
(1) Estimate $\theta$ by $\theta_n = T_n(U_{1,n}, \ldots, U_{n,n})$, and compute $D_n$ and $S^{(B)}_n$ according to
formulas (10) and (11).
(2) For some large integer $N$, repeat the following steps for every $k \in \{1, \ldots, N\}$:
(a) Generate a random sample $Y^{(k)}_{1,n}, \ldots, Y^{(k)}_{n,n}$ from distribution $C_{\theta_n}$ and com-
pute the pseudo-observations $U^{(k)}_{i,n} = R^{(k)}_{i,n}/(n+1)$, where $R^{(k)}_{1,n}, \ldots, R^{(k)}_{n,n}$
are the associated rank vectors of $Y^{(k)}_{1,n}, \ldots, Y^{(k)}_{n,n}$.
(b) Estimate $\theta$ by $\theta^{(k)}_n = T_n\big(U^{(k)}_{1,n}, \ldots, U^{(k)}_{n,n}\big)$, and compute
$E^{(k)}_{i,n} = R_{\theta^{(k)}_n}\big(U^{(k)}_{i,n}\big)$, $i \in \{1, \ldots, n\}$.
(c) Let
$$D^{(k)}_n(u) = \frac{1}{\sqrt{n}}\sum_{i=1}^n \Big\{\mathbf{1}\big(E^{(k)}_{i,n} \le u\big) - C_\perp(u)\Big\}, \qquad u \in [0,1]^d,$$
and set
$$S^{(B)}_{n,k} = \int_{[0,1]^d} \big\{D^{(k)}_n(u)\big\}^2\, du.$$
An approximate $P$-value for the test is then given by $\frac{1}{N}\sum_{k=1}^N \mathbf{1}\big(S^{(B)}_{n,k} > S^{(B)}_n\big)$.

5. Change-point problems
In this section, one describes non-parametric tests for detecting change-points.
First, inspired by Ghoudi and Rémillard (2010), detection of a change-point for uni-
variate series is tackled. Next, one proposes a new test of change-point detection for
the copula, provided there is no change-point in the marginal distributions.
5.1. Detection of change-point for univariate series. Detection of a change-point
for the univariate series $\varepsilon_{ji}$ can be based on the process
$$\mathbb{A}_{j,n}(s, x_j) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns \rfloor} \{\mathbf{1}(e_{ji,n} \le x_j) - F_{j,n}(1, x_j)\},$$
for if $F^{(s)}_{j,n}$ denotes the empirical distribution function of $(e_{ji,n})_{i=1}^{\lfloor ns \rfloor}$ and $F^{(1-s)}_{j,n}$
denotes the empirical distribution function of $(e_{ji,n})_{i=\lfloor ns \rfloor+1}^{n}$, then, under the null
hypothesis that $F_j$ is the distribution function of $\varepsilon_{ji}$ for all $i = 1, \ldots, n$, i.e., there is
no change-point, one has
$$\sqrt{n}\left\{F^{(s)}_{j,n}(x_j) - F^{(1-s)}_{j,n}(x_j)\right\} = \frac{n^2}{\lfloor ns \rfloor (n - \lfloor ns \rfloor)}\, \mathbb{A}_{j,n}(s, x_j).$$

Under assumptions (A1)–(A6), $\mathbb{A}_{j,n} \rightsquigarrow \mathbb{A}_j$, where
$$\mathbb{A}_j(s, x_j) = \mathbb{F}_j(s, x_j) - s\,\mathbb{F}_j(1, x_j) = \beta_j\{s, F_j(x_j)\} - s\,\beta_j\{1, F_j(x_j)\} = \mathbb{K}_j\{s, F_j(x_j)\}.$$
The latter shows that the limiting distributions of the statistics
$$T^{(KS)}_{j,n} = \sup_{s \in [0,1]}\, \sup_{x_j \in \mathbb{R}} |\mathbb{A}_{j,n}(s, x_j)| \quad \text{and} \quad T^{(CVM)}_{j,n} = \int_0^1\!\!\int_{\mathbb{R}} \{\mathbb{A}_{j,n}(s, x_j)\}^2\, ds\, dF_{j,n}(x_j)$$
are distribution free, converging respectively to
$$T^{(KS)}_j = \sup_{s \in [0,1]}\, \sup_{u_j \in [0,1]} |\mathbb{K}_j(s, u_j)| \quad \text{and} \quad T^{(CVM)}_j = \int_0^1\!\!\int_0^1 \{\mathbb{K}_j(s, u)\}^2\, ds\, du.$$
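Evaluating $\mathbb{A}_{j,n}$ on the grid $s = i/n$, $x_j = e_l$ gives both statistics; a minimal sketch for one residual series (names ours; note the paper normalizes $F_{j,n}$ by $1/(n+1)$, while this sketch uses $1/n$ for simplicity):

```python
import numpy as np

def changepoint_stats(e):
    """Kolmogorov-Smirnov and Cramer-von Mises change-point statistics of
    Section 5.1 for one residual series e, evaluated on the grid
    s = i/n, x = e_l (assumes no ties)."""
    e = np.asarray(e, float)
    n = len(e)
    F1 = np.array([np.mean(e <= x) for x in e])   # full-sample edf at x = e_l
    A = np.zeros((n, n))                          # A_{j,n}(i/n, e_l)
    for i in range(1, n + 1):
        # partial sum over the first i residuals, centered by the full edf
        A[i - 1] = (np.sum(e[:i, None] <= e[None, :], axis=0) - i * F1) / np.sqrt(n)
    T_ks = float(np.abs(A).max())
    T_cvm = float(np.mean(A ** 2))                # average over s-grid and dF_{j,n}
    return T_ks, T_cvm
```

Observed values would then be compared with the tables of Ghoudi et al. (2001) mentioned below, or with simulated quantiles of the limit $\mathbb{K}_j$.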

That result extends the one obtained in Ghoudi and Rémillard (2010) for residu-
als of ARMA processes. Note that $\mathbb{K}_j$ is a continuous centered Gaussian process
with covariance given by $\mathrm{Cov}\{\mathbb{K}_j(s, u), \mathbb{K}_j(t, v)\} = \{\min(s, t) - st\}\{\min(u, v) - uv\}$.
As remarked in Ghoudi and Rémillard (2010), that process appears as the limit of
many other processes used in tests of change-point (Picard, 1985, Carlstein, 1988)
and tests of independence (Blum et al., 1961, Ghoudi et al., 2001). Furthermore,
tables for the limiting distributions of $T^{(KS)}_{j,n}$ and $T^{(CVM)}_{j,n}$ are given in Ghoudi et al.
(2001), Table IV page 206 and Table I page 204 respectively. In case the sample size
considered is not available in these tables, it is suggested in Ghoudi and Rémillard
(2010) to use simulations, since $\mathbb{K}_j$ also appears as the limit of
$$\mathbb{K}_{j,n}(s, u) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns \rfloor} \left\{\mathbf{1}\big(R_i \le \lfloor nu \rfloor\big) - \frac{\lfloor nu \rfloor}{n}\right\}, \qquad s, u \in [0,1],$$
where $R_i$ is the rank of $U_i$ amongst the i.i.d. uniform variables $U_1, \ldots, U_n$.
5.2. Detection of change-point for copulas. Suppose now that the null hypotheses that there is no change-point in the marginal distributions are all accepted. Next, if one is interested in possible change-points in the dependence structure, one could do something similar to the previous section. That is the methodology proposed next. Previous work on structural change for copulas includes the parametric change-point approach of Dias and Embrechts (2004, 2009), a filtering/non-parametric methodology proposed by Harvey (2010), and a parametric/kernel-based approach proposed by Guegan and Zhang (2010). In the latter, to perform the test, the authors have to select a family for their so-called static copula, and their test is based on kernel estimates. Here, in contrast, one starts by performing a non-parametric change-point test; if the null hypothesis is accepted, then one may try to select a static copula. Furthermore, Guegan and Zhang (2010) work with residuals, without ever proving that the methodology is valid. In Harvey (2010), no residuals are used: the methodology is based on time-varying quantiles and some kind of filtering technique. It would be interesting to compare the approach proposed here to the one proposed in Harvey (2010).
Let us now describe the proposed methodology, which is closely related to the test of equality between two copulas proposed by Remillard and Scaillet (2009). First, it is easy to check that if $C^{(s)}_n$ denotes the empirical distribution function of the first $\lfloor ns\rfloor$ pseudo-observations $U_{i,n}$ and $C^{(1-s)}_n$ denotes the empirical distribution function of the remaining $n - \lfloor ns\rfloor$ pseudo-observations, then, under the null hypothesis that there is no change-point in the dependence structure, one has
\[
\sqrt{n}\,\{C^{(s)}_n(u) - C^{(1-s)}_n(u)\} = \frac{n^2}{\lfloor ns\rfloor\,(n-\lfloor ns\rfloor)}\,\frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\left\{\mathbf{1}(U_{i,n}\le u) - C_n(u)\right\}.
\]
Setting
\[
\mathbb{C}_n(s,u) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\left\{\mathbf{1}(U_{i,n}\le u) - C(u)\right\},
\]
one gets that
\[
\mathbb{G}_n(s,u) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\left\{\mathbf{1}(U_{i,n}\le u) - C_n(u)\right\} = \mathbb{C}_n(s,u) - \frac{\lfloor ns\rfloor}{n}\,\mathbb{C}_n(1,u).
\]
Therefore, change-point tests can be based on $\mathbb{G}_n$. For example, one could define
\[
T_n = \max_{1\le k\le n}\,\max_{1\le i\le n} |\mathbb{G}_n(k/n, U_{i,n})| \qquad (14)
\]
and reject the null hypothesis for large values of $T_n$. The limiting distributions of $T_n$ and $\mathbb{G}_n$ are given next.
Proposition 7. Under Assumptions (A1)--(A6), $\mathbb{C}_n \rightsquigarrow \mathbb{C}$, $\mathbb{G}_n \rightsquigarrow \mathbb{G}$ and $T_n \rightsquigarrow T$, where
\[
\mathbb{C}(s,u) = \alpha(s,u) - s\sum_{j=1}^{d} \partial_{u_j} C(u)\,\alpha_j(1,u_j),
\]
\[
\mathbb{G}(s,u) = \mathbb{C}(s,u) - s\,\mathbb{C}(1,u) = \alpha(s,u) - s\,\alpha(1,u),
\]
and $T = \sup_{s\in[0,1],\,u\in[0,1]^d} |\mathbb{G}(s,u)|$.
Even if the law of $\mathbb{G}$ depends on the unknown copula $C$, it is easy to simulate independent copies, using a multiplier method adapted from Scaillet (2005) and Remillard and Scaillet (2009). That technique is described next.
5.2.1. Multipliers method for $T_n$. The following procedure leads to an approximate $P$-value for the test based on $T_n$. The adaptations required for any other functional of $\mathbb{G}_n$ are obvious.

1.- Compute $T_n$ as defined in (14).
2.- For some large integer $N$, repeat the following steps for every $k\in\{1,\ldots,N\}$:
(a) Generate a random sample $\xi_{i,k}\sim N(0,1)$, $i=1,\ldots,n$.
(b) For $(s,u)\in[0,1]^{d+1}$, set
\[
\alpha^{(k)}_n(s,u) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor} \xi_{i,k}\left\{\mathbf{1}(U_{i,n}\le u) - C_n(u)\right\}
\quad\text{and}\quad
\mathbb{G}^{(k)}_n(s,u) = \alpha^{(k)}_n(s,u) - \frac{\lfloor ns\rfloor}{n}\,\alpha^{(k)}_n(1,u).
\]
(c) Evaluate $T^{(k)}_n = \max_{1\le j\le n}\,\max_{1\le i\le n} \bigl|\mathbb{G}^{(k)}_n(j/n, U_{i,n})\bigr|$.

An approximate $P$-value for the test is then given by $\sum_{k=1}^{N} \mathbf{1}\bigl(T^{(k)}_n > T_n\bigr)/N$.
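The procedure above can be sketched in a few lines of Python (names are ours; the statistic is evaluated on the grid $(k/n, U_{i,n})$ as in (14)):

```python
import numpy as np

def copula_changepoint_test(x, n_mult=100, seed=0):
    """Change-point test for the copula of an (n, d) sample: returns T_n
    and its multiplier-bootstrap P-value, following steps 1-2 above."""
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    # pseudo-observations U_{i,n} = R_i/(n+1)
    u = (np.argsort(np.argsort(x, axis=0), axis=0) + 1) / (n + 1)
    # ind[i, m] = 1(U_{i,n} <= U_{m,n}) componentwise
    ind = np.all(u[:, None, :] <= u[None, :, :], axis=2).astype(float)
    centered = ind - ind.mean(axis=0)[None, :]      # 1(U_i <= u) - C_n(u)
    s_frac = np.arange(1, n + 1)[:, None] / n
    # G_n(k/n, U_m); centering at C_n already removes the (ns/n) C-term
    g_obs = np.cumsum(centered, axis=0) / np.sqrt(n)
    t_obs = np.abs(g_obs).max()
    exceed = 0
    for _ in range(n_mult):
        xi = rng.standard_normal(n)[:, None]
        a = np.cumsum(xi * centered, axis=0) / np.sqrt(n)   # alpha_n^{(k)}
        g = a - s_frac * a[-1][None, :]                     # G_n^{(k)}
        exceed += np.abs(g).max() > t_obs
    return t_obs, exceed / n_mult

rng = np.random.default_rng(42)
z = rng.standard_normal((300, 2))
t_n, pval = copula_changepoint_test(z)   # independent data: no change-point
```

With $N = 100$ multiplier replications, this matches the number of replications used in the example of Section 6.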

Remark 3. Using Theorem 1, a non-parametric change-point test for the innovations $\varepsilon_i$ can be based on
\[
\mathbb{K}_n(s,x) - \frac{\lfloor ns\rfloor}{n}\,\mathbb{K}_n(1,x) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\left\{\mathbf{1}(e_{i,n}\le x) - K_n(x)\right\},
\]
where $K_n$ is the empirical distribution function of the residuals. Because of the form of the limiting distribution, one has to use the multipliers technique to generate asymptotically independent copies. See Remillard (2010) for details.

6. Example

In order to be able to make comparisons with Chen and Fan (2006), one of their data sets is used, namely the Deutsche Mark/USD and Japanese Yen/USD exchange rates, from April 28, 1988 to December 31, 1998. AR(3)-GARCH(1,1) and AR(1)-GARCH(1,1) models were fitted to the 2684 log-returns.
For such a large sample size, one must make sure that there is no structural change-point. To that end, univariate change-point tests were first performed on the standardized residuals, and the null hypothesis was accepted each time. Then the copula change-point test was performed, leading once again to the acceptance of the null hypothesis, since the $P$-value was estimated to be 33%, using $N = 100$ replications.
Next, the usual standard copula models (Gaussian, Student, Clayton, Frank, Gumbel) were checked for goodness-of-fit. In each case the null hypothesis was rejected, since the $P$-value was estimated to be 0 (using $N = 100$ replications). That shows the limitations of the model selection methodology proposed by Chen and Fan (2006): it can only be used to rank models, even if none is adequate, which is the case here.
Having rejected the standard copula models, a mixture of two Gaussian copulas was then fitted. Similar models were proposed by Dias and Embrechts (2004), Chen and Fan (2006) and Patton (2006). The null hypothesis is accepted with an 84% $P$-value ($S^{(B)}_n = 0.0183$), calculated from $N = 100$ replications. The estimated correlations of the two Gaussian copulas are $\rho_1 = 0.8205$ and $\rho_2 = 0.3749$, with mixture weights $\pi_1 = 0.4017$ and $\pi_2 = 0.5983$.
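For illustration, the fitted mixture $C(u,v) = \pi_1 C_{\rho_1}(u,v) + \pi_2 C_{\rho_2}(u,v)$, with $C_\rho$ the bivariate Gaussian copula, can be evaluated numerically. The following sketch uses `scipy` (function names are ours):

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def gaussian_copula_cdf(u, v, rho):
    """C_rho(u, v) = Phi_rho(Phi^{-1}(u), Phi^{-1}(v))."""
    x = np.array([norm.ppf(u), norm.ppf(v)])
    cov = [[1.0, rho], [rho, 1.0]]
    return float(multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(x))

def mixture_cdf(u, v, rhos=(0.8205, 0.3749), weights=(0.4017, 0.5983)):
    """Mixture of two Gaussian copulas with the estimates reported above."""
    return sum(w * gaussian_copula_cdf(u, v, r) for w, r in zip(weights, rhos))

print(mixture_cdf(0.5, 0.5))   # probability that both series sit below their medians
```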

7. Conclusion

The asymptotic behaviour of the empirical copula constructed from residuals of stochastic volatility models was studied. It was shown that if the stochastic volatility matrix is diagonal, then the empirical copula process behaves as if the parameters were known. That remarkable property makes it possible to construct consistent tests of goodness-of-fit for the copula of the innovations. Tests of structural change in the dependence structure were also proposed.

Appendix A. Smoothness conditions for parametric bootstrap

Following Genest and Remillard (2008), assume that the family of densities $c_\theta$ satisfies:

(B1) The density $c_\theta$ of $C_\theta$ admits first and second order derivatives with respect to all components of $\theta$. The gradient (row) vector with respect to $\theta$ is denoted $\dot c_\theta$, and the Hessian matrix is represented by $\ddot c_\theta$.
(B2) For arbitrary $u\in(0,1)^d$ and every $\theta_0\in\mathcal{O}$, the mappings $\theta\mapsto \dot c_\theta(u)/c_\theta(u)$ and $\theta\mapsto \ddot c_\theta(u)/c_\theta(u)$ are continuous at $\theta_0$.
(B3) For every $\theta_0\in\mathcal{O}$, there exist a neighborhood $\mathcal{N}$ of $\theta_0$ and $C_{\theta_0}$-integrable functions $h_1, h_2 : \mathbb{R}^d\to\mathbb{R}$ such that for every $u\in(0,1)^d$,
\[
\sup_{\theta\in\mathcal{N}} \frac{\|\dot c_\theta(u)\|}{\{c_\theta(u)\}^{1/2}} \le h_1(u) \quad\text{and}\quad \sup_{\theta\in\mathcal{N}} \frac{\|\ddot c_\theta(u)\|}{c_\theta(u)} \le h_2(u).
\]

Appendix B. Convergence of the partial-sum empirical processes

In this section, one assumes a more general model than the one used in Section 2, where $\sigma_i$ is no longer a diagonal matrix, namely
\[
X_i = \mu_i(\theta) + \sigma_i(\theta)\,\varepsilon_i,
\]
where the innovations $\varepsilon_i = (\varepsilon_{1i},\ldots,\varepsilon_{di})^\top$ are i.i.d. with continuous distribution function $K$, and $\mu_i$, $\sigma_i$ are $\mathcal{F}_{i-1}$-measurable and independent of $\varepsilon_i$. Here $\mathcal{F}_{i-1}$ contains information from the past and possibly information from exogenous variables.
Set $\gamma_{0i} = \sigma_i^{-1}\dot\mu_i$ and $\gamma_{1ki} = \sigma_i^{-1}\dot\sigma_{ki}$, where
\[
(\dot\mu_i)_{jl} = \partial_{\theta_l}\,\mu_{ji}, \qquad (\dot\sigma_{ki})_{jl} = \partial_{\theta_l}\,\sigma_{jki} = \partial_{\theta_l}\,(\sigma_i)_{jk}, \qquad j,k = 1,\ldots,d,\ l = 1,\ldots,p.
\]
Given an estimation $\theta_n$ of $\theta$, compute the residuals $e_{i,n} = (e_{1i,n},\ldots,e_{di,n})^\top$, where
\[
e_{i,n} = \sigma_i^{-1}(\theta_n)\{X_i - \mu_i(\theta_n)\}.
\]
Further set $\Theta_n = n^{1/2}(\theta_n - \theta)$. The goal is to study the asymptotic behaviour of the partial-sum empirical process $\mathbb{K}_n$ defined by (3). The following assumptions are needed in order to prove the convergence of $\mathbb{K}_n$.
Let $d_{i,n} = \varepsilon_i - e_{i,n} - \bigl(\gamma_{0i}\Theta_n + \sum_{k=1}^{d} \varepsilon_{ki}\,\gamma_{1ki}\Theta_n\bigr)/\sqrt{n}$, where $\gamma_{0i}$ and $\gamma_{1ki}$ are $\mathcal{F}_{i-1}$-measurable, and such that for any $j=1,\ldots,d$ and any $x\in\mathbb{R}^d$:

(A1) $\gamma_{0,n}(s) = \frac{1}{n}\sum_{i=1}^{\lfloor ns\rfloor}\gamma_{0i} \xrightarrow{Pr} s\,\gamma_0$ and $\gamma_{1k,n}(s) = \frac{1}{n}\sum_{i=1}^{\lfloor ns\rfloor}\gamma_{1ki} \xrightarrow{Pr} s\,\gamma_{1k}$, uniformly in $s\in[0,1]$, where $\gamma_0$ and $\gamma_{1k}$ are deterministic, $k=1,\ldots,d$.
(A2) $\frac{1}{n}\sum_{i=1}^{n} E\,\|\gamma_{0i}\|^k$ and $\frac{1}{n}\sum_{i=1}^{n} E\,\|\gamma_{1ji}\|^k$ are bounded, for $k = 1, 2$.
(A3) There exists a sequence of positive terms $r_i > 0$ so that $\sum_{i\ge 1} r_i < \infty$ and such that the sequence $\max_{1\le i\le n} \|d_{i,n}\|/r_i$ is tight.
(A4) $\max_{1\le i\le n}\|\gamma_{0i}\|/\sqrt{n} = o_P(1)$ and $\max_{1\le i\le n} |\varepsilon_{ji}|\,\|\gamma_{1ji}\|/\sqrt{n} = o_P(1)$.
(A5) $(\alpha_n, \Theta_n) \rightsquigarrow (\alpha, \Theta)$ in $D\bigl([0,1]^{d+1}\bigr)\times\mathbb{R}^p$.
(A6) $\partial_{x_j} K(x)$ and $x_j\,\partial_{x_j} K(x)$ are bounded and continuous on $\bar{\mathbb{R}}^d = [-\infty,+\infty]^d$.



(A7) Suppose that for all $k\neq j$, $f_j(x_j)\,E\{|\varepsilon_{k1}|\,\mathbf{1}(\varepsilon_1\le x)\mid \varepsilon_{j1} = x_j\}$ and $x_j\,f_j(x_j)\,E\{|\varepsilon_{k1}|\,\mathbf{1}(\varepsilon_1\le x)\mid \varepsilon_{j1} = x_j\}$ are bounded and continuous on $\bar{\mathbb{R}}^d$.

Remark 4. Note that (A1) and (A2) are trivially satisfied if the sequences $\gamma_{0i}$ and $\gamma_{1ki}$ are stationary, ergodic and square integrable. Also, if $\frac{1}{n}\sum_{i=1}^{n}\gamma_{0i}\xrightarrow{Pr}\gamma_0$, $\frac{1}{n}\sum_{i=1}^{n}\gamma_{1ki}\xrightarrow{Pr}\gamma_{1k}$, and $\frac{1}{n}\sum_{i=1}^{n}E(\|\gamma_{0i}\|^2)$, $\frac{1}{n}\sum_{i=1}^{n}E(\|\gamma_{1ki}\|^2)$ converge, then (A1) and (A2) are satisfied.
Set $\tilde{\mathbb{K}}_n(s,x) = \alpha_n\{s, F(x)\}$ and $\tilde{\mathbb{K}}(s,x) = \alpha\{s, F(x)\}$. One can now prove the main theorem.

Theorem 1. Under assumptions (A1)--(A7), $\mathbb{K}_n \rightsquigarrow \mathbb{K}$, with
\[
\mathbb{K}(s,x) = \tilde{\mathbb{K}}(s,x) + s\,\nabla K(x)^\top \Gamma_0 + s\sum_{j=1}^{d}\sum_{k=1}^{d} G_{jk}(x)\,(\Gamma_{1k})_j,
\]
where $\Gamma_0 = \gamma_0\Theta$, $\Gamma_{1k} = \gamma_{1k}\Theta$, and $G_{jk}(x) = f_j(x_j)\,E\{\varepsilon_{k1}\mathbf{1}(\varepsilon_1\le x)\mid \varepsilon_{j1}=x_j\}$. In particular, $G_{jj}(x) = x_j\,\partial_{x_j}K(x)$. Furthermore, for all $j=1,\ldots,d$, $\mathbb{F}_{j,n} \rightsquigarrow \mathbb{F}_j$, where
\[
\mathbb{F}_j(s,x_j) = \alpha_j\{s, F_j(x_j)\} + s\,f_j(x_j)\left\{(\Gamma_0)_j + x_j\,(\Gamma_{1j})_j\right\} + s\sum_{k\neq j} f_j(x_j)\,E(\varepsilon_{k1}\mid \varepsilon_{j1}=x_j)\,(\Gamma_{1k})_j.
\]
If $\sigma$ is diagonal, (A7) is not needed for the convergence of $\mathbb{K}_n$. In that case,
\[
\mathbb{K}(s,x) = \tilde{\mathbb{K}}(s,x) + s\,\nabla K(x)^\top \Gamma_0 + s\sum_{j=1}^{d} G_{jj}(x)\,(\Gamma_{1j})_j.
\]

Remark 5. $(\gamma_{1k}\Theta)_j = 0$ for all $\Theta$ and all $j\neq k$ if and only if $(\gamma_{1k})_{jl} = 0$ for all $l$ and all $j\neq k$. That can occur for example if
\[
\{\sigma_i(\theta)\}_{jk} = \{\sigma_i(\theta)\}_{kk}\,(A_i)_{jk}, \quad\text{with } (A_i)_{jj} = 1. \qquad (15)
\]
In that case $A_i$ must be known, since it is parameter free. This is true in particular if $\sigma_i$ is diagonal, in which case $A_i$ is the identity matrix.
It then follows from (15) that
\[
\varepsilon_{ji} - e_{ji,n} = \frac{1}{\sigma_{jji}(\theta_n)}\sum_{k=1}^{d}\bigl(A_i^{-1}\bigr)_{jk}\{\mu_{ki}(\theta_n) - \mu_{ki}(\theta)\} + \varepsilon_{ji}\,\frac{\sigma_{jji}(\theta_n) - \sigma_{jji}(\theta)}{\sigma_{jji}(\theta_n)}.
\]
Setting $H_i$ to be the diagonal matrix with $(H_i)_{jj} = (\sigma_i)_{jj}$, $j=1,\ldots,d$, one can rewrite the model as
\[
X_i = \mu_i + A_i H_i\,\varepsilon_i,
\]
so $Y_i = A_i^{-1}X_i = A_i^{-1}\mu_i + H_i\,\varepsilon_i$. Since $A_i$ is known, this model is a simple rescaling of a model with diagonal volatility, and as such has little interest. So if the model cannot be transformed into a diagonal one, the limiting empirical copula process is not parameter free.

Corollary 1. Under assumptions (A1)--(A6), if the volatility matrix is diagonal, then $\mathbb{C}_n \rightsquigarrow \mathbb{C}$, with
\[
\mathbb{C}(s,u) = \tilde{\mathbb{C}}(s,u) = \alpha(s,u) - \sum_{j=1}^{d} \partial_{u_j} C(u)\,\alpha_j(s,u_j),
\]
where $\alpha_j(s,u_j) = \alpha(s,1,\ldots,1,u_j,1,\ldots,1)$, with $u_j$ in the $j$-th position. If the volatility matrix is not diagonal, then
\[
\mathbb{C}(s,u) = \tilde{\mathbb{C}}(s,u) + s\sum_{j\neq k} \tilde G_{jk}(u)\,(\Gamma_{1k})_j,
\]
where $\tilde G_{jk}(u) = E\{\varepsilon_{k1}\mathbf{1}(U_1\le u)\mid U_{j1} = u_j\}$, with $U_{ji} = F_j(\varepsilon_{ji})$.

Corollary 1 follows directly from Theorem 1, using Genest et al. (2007)[Proposition A.1].
Proof. Set $S_d = \{1,\ldots,d\}$. Further set
\[
D_{j,n}(s,x) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\{\mathbf{1}(e_{ji,n}\le x_j) - \mathbf{1}(\varepsilon_{ji}\le x_j)\}\prod_{k\neq j}\mathbf{1}(\varepsilon_{ki}\le x_k)
\]
and
\[
D_{A,n}(s,x) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\prod_{j\in A}\{\mathbf{1}(e_{ji,n}\le x_j) - \mathbf{1}(\varepsilon_{ji}\le x_j)\}\prod_{k\in A^c}\mathbf{1}(\varepsilon_{ki}\le x_k),
\]
for any $A\subset S_d$ with $|A|>1$. Using the multinomial formula, one has
\[
\mathbb{K}_n(s,x) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\Biggl[\sum_{A\subset S_d}\prod_{j\in A}\{\mathbf{1}(e_{ji,n}\le x_j) - \mathbf{1}(\varepsilon_{ji}\le x_j)\}\prod_{j\in A^c}\mathbf{1}(\varepsilon_{ji}\le x_j) - K(x)\Biggr]
\]
\[
= \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\Biggl\{\prod_{j=1}^{d}\mathbf{1}(\varepsilon_{ji}\le x_j) - K(x)\Biggr\}
+ \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\sum_{j=1}^{d}\{\mathbf{1}(e_{ji,n}\le x_j) - \mathbf{1}(\varepsilon_{ji}\le x_j)\}\prod_{k\neq j}\mathbf{1}(\varepsilon_{ki}\le x_k)
\]
\[
+ \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\sum_{|A|>1}\prod_{j\in A}\{\mathbf{1}(e_{ji,n}\le x_j) - \mathbf{1}(\varepsilon_{ji}\le x_j)\}\prod_{k\in A^c}\mathbf{1}(\varepsilon_{ki}\le x_k)
\]
\[
= \tilde{\mathbb{K}}_n(s,x) + \sum_{j=1}^{d} D_{j,n}(s,x) + \sum_{|A|>1} D_{A,n}(s,x).
\]

To prove the theorem, it suffices to show that for any $1\le j\le d$, uniformly in $(s,x)$, $D_{j,n}(s,x)$ converges in probability to $s\,\partial_{x_j}K(x)\,(\Gamma_0)_j + s\sum_{k=1}^{d} G_{jk}(x)\,(\Gamma_{1k})_j$, and that for any $|A|>1$, $D_{A,n}(s,x)$ converges in probability to zero. These proofs will be done for $j=1$ and $A\supset\{1,2\}$, the other cases being similar. Also suppose that $\sigma$ is diagonal; the general case is similar. For simplicity, write $\gamma_{0ji}$ for the $j$-th row of $\gamma_{0i}$, $j=1,\ldots,d$, and similarly for $\gamma_{1ji}$.
Let $\epsilon\in(0,1)$ be given. From (A2), (A3) and (A5), one can find $M>0$ such that if $n$ is large enough, then $P(B_{M,n}) > 1-\epsilon$, where
\[
B_{M,n} = \{\|\Theta_n\|\le M\}\cap\Bigl\{\max_{1\le i\le n}\|d_{i,n}\|/r_i \le M\Bigr\}\cap\Biggl\{\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{d}\|\gamma_{kji}\|\le M,\ k=0,1\Biggr\}.
\]
Because the closed ball of radius $M$ is compact, it can be covered by finitely many balls of radius $\delta\in(0,1)$.
Further set $C_{\delta,n} = \bigl\{\max_{1\le i\le n}(\|\gamma_{0i}\| + |\varepsilon_i|\,\|\gamma_{1ji}\|)/\sqrt{n} \le \delta\bigr\}$. By (A4), $P(C_{\delta,n})\ge 1-\epsilon$ if $n$ is large enough. On $B_{M,n}\cap C_{\delta,n}\cap\{\|\Theta_n-\vartheta\|<\delta\}$, one has
\[
\varepsilon_{1i} - e_{1i,n} \le M r_i + \frac{(\gamma_{01i}\vartheta)_1 + \varepsilon_{1i}(\gamma_{11i}\vartheta)_1 + \delta\|\gamma_{01i}\| + \delta|\varepsilon_{1i}|\,\|\gamma_{11i}\|}{\sqrt{n}},
\]
\[
\varepsilon_{1i} - e_{1i,n} \ge -M r_i + \frac{(\gamma_{01i}\vartheta)_1 + \varepsilon_{1i}(\gamma_{11i}\vartheta)_1 - \delta\|\gamma_{01i}\| - \delta|\varepsilon_{1i}|\,\|\gamma_{11i}\|}{\sqrt{n}}.
\]
Set $y_{i,n} = M r_i + \{(\gamma_{01i}\vartheta)_1 + \delta\|\gamma_{01i}\|\}/\sqrt{n}$, $z_{i,n} = (\gamma_{11i}\vartheta)_1/\sqrt{n}$ and $w_{i,n} = \delta\|\gamma_{11i}\|/\sqrt{n}$. Further set $a_{i,n} = M r_i + c\,\|\gamma_{01i}\|/\sqrt{n} + c\,|\varepsilon_{1i}|\,\|\gamma_{11i}\|/\sqrt{n}$, where $c = 1 + \|\vartheta\|$. It follows that
\[
\mathbf{1}(e_{1i,n}\le x_1) \le \mathbf{1}(\varepsilon_{1i}\le x_1 + y_{i,n} + \varepsilon_{1i}\,z_{i,n} + |\varepsilon_{1i}|\,w_{i,n}) \le \mathbf{1}(\varepsilon_{1i}\le x_1 + a_{i,n}),
\]
\[
|\mathbf{1}(e_{1i,n}\le x_1) - \mathbf{1}(\varepsilon_{1i}\le x_1)| \le \mathbf{1}(x_1 - a_{i,n} < \varepsilon_{1i} \le x_1 + a_{i,n}),
\]
and
\[
|\mathbf{1}(e_{2i,n}\le x_2) - \mathbf{1}(\varepsilon_{2i}\le x_2)| \le \mathbf{1}(x_2 - M r_i - c\delta < \varepsilon_{2i} \le x_2 + M r_i + c\delta) \le \mathbf{1}(x_2 - 2c\delta < \varepsilon_{2i} \le x_2 + 2c\delta),
\]
if $i\ge i_1$, for some $i_1\ge i_0$, since $r_i\to 0$.
As a result, for any $A\supset\{1,2\}$,
\[
|D_{A,n}(s,x)| \le \frac{i_1}{\sqrt{n}} + \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\mathbf{1}(x_1 - a_{i,n} < \varepsilon_{1i}\le x_1 + a_{i,n})\,\mathbf{1}(x_2 - 2c\delta < \varepsilon_{2i}\le x_2 + 2c\delta).
\]
Set $\lambda = (\lambda_1,\lambda_2,\lambda_3)$, $\mu_{i,n}(\lambda) = \lambda_1 r_i + \{(\gamma_{01i}\lambda_2)_1 + \varepsilon_{1i}(\gamma_{11i}\lambda_2)_1 + \lambda_3\|\gamma_{01i}\| + \lambda_3|\varepsilon_{1i}|\,\|\gamma_{11i}\|\}/\sqrt{n}$, and set
\[
\Psi_{1,n}(s,x;\lambda) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\bigl[\mathbf{1}\{\varepsilon_{1i}\le x_1 + \mu_{i,n}(\lambda)\} - \mathbf{1}(\varepsilon_{1i}\le x_1)\bigr]\prod_{k=2}^{d}\mathbf{1}(\varepsilon_{ki}\le x_k).
\]
Further set $\Psi_{12,n}(s,x_1,x_2;\lambda) = \Psi_{1,n}(s,x_1,x_2,\infty,\ldots,\infty;\lambda)$. Then
\[
\Psi_{1,n}(s,x;-M,\vartheta,-\delta) \le D_{1,n}(s,x) \le \Psi_{1,n}(s,x;M,\vartheta,\delta)
\]
and
\[
|D_{A,n}(s,x)| \le \frac{i_1}{\sqrt{n}} + \Psi_{12,n}(s,x_1,x_2+2c\delta;M,0,c) - \Psi_{12,n}(s,x_1,x_2-2c\delta;M,0,c)
\]
\[
{}- \Psi_{12,n}(s,x_1,x_2+2c\delta;-M,0,-c) + \Psi_{12,n}(s,x_1,x_2-2c\delta;-M,0,-c).
\]
Next, note that $P\{\varepsilon_{1i}\le x_1 + \mu_{i,n}(\lambda),\ \varepsilon_{2i}\le z_1,\ \ldots,\ \varepsilon_{di}\le z_{d-1}\mid\mathcal{F}_{i-1}\}$ is given by
\[
K\left(\frac{\bigl(x_1 + \lambda_1 r_i + (\gamma_{01i}\lambda_2)_1/\sqrt{n} + \lambda_3\|\gamma_{01i}\|/\sqrt{n}\bigr)^{+}}{1 - (\gamma_{11i}\lambda_2)_1/\sqrt{n} - \lambda_3\|\gamma_{11i}\|/\sqrt{n}},\ z\right) - K(0,z)
\]
\[
{}+ K\left(\frac{\bigl(x_1 + \lambda_1 r_i + (\gamma_{01i}\lambda_2)_1/\sqrt{n} + \lambda_3\|\gamma_{01i}\|/\sqrt{n}\bigr)^{-}}{1 - (\gamma_{11i}\lambda_2)_1/\sqrt{n} + \lambda_3\|\gamma_{11i}\|/\sqrt{n}},\ z\right).
\]
Next, set $\Lambda_1(x) = \partial_{x_1}K(x)\,\{(\gamma_0)_{1\cdot} + x_1(\gamma_{11})_{1\cdot}\}$ and define
\[
\bar\Lambda_{1,n}(s,x;\lambda) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\bigl[P\{\varepsilon_{1i}\le x_1+\mu_{i,n}(\lambda),\ \varepsilon_{2i}\le x_2,\ \ldots,\ \varepsilon_{di}\le x_d\mid\mathcal{F}_{i-1}\} - K(x)\bigr].
\]
It follows from (A1), (A2) and (A6) that, on $B_{M,n}$,
\[
\sup_{x\in\bar{\mathbb{R}}^d}\bigl|\bar\Lambda_{1,n}(s,x;\lambda) - s\,\Lambda_1(x)\,\lambda_2\bigr|
\]

can be made arbitrarily small with large probability. The final step is to show that $\tilde\Psi_{1,n}(s,x;\lambda) = \Psi_{1,n}(s,x;\lambda) - \bar\Lambda_{1,n}(s,x;\lambda)$ can be made arbitrarily small by choosing $\lambda_3$ small enough. The proof is similar to the proof of Lemmas 7.1--7.2 in Ghoudi and Remillard (2004). Suppose $1/2<\beta<1$ and set $N_n = \lfloor n^{\beta}\rfloor$. Then, set $y_k = F_1^{-1}(k/N_n)$, $1\le k<N_n$. Further set $y_0 = -\infty$ and $y_{N_n} = +\infty$. Now, if $y_k\le x_1<y_{k+1}$, set $z = (x_2,\ldots,x_d)$. First, note that one can cover $\bar{\mathbb{R}}^{d}$ by a finite number $N_n\times J$ of intervals of the form $[a,b) = [y_k,y_{k+1})\times[u_l,v_l)$, for which $0\le K(y_{k+1},z) - K(y_k,z) \le F_1(y_{k+1}) - F_1(y_k) = 1/N_n$.
Set
\[
U_{i,n}(x) = \bigl[\mathbf{1}\{\varepsilon_{1i}\le x_1+\mu_{i,n}(\lambda)\} - \mathbf{1}(\varepsilon_{1i}\le x_1)\bigr]\prod_{j=2}^{d}\mathbf{1}(\varepsilon_{ji}\le x_j)
\]
and set $V_{i,n}(x) = E\{U_{i,n}(x)\mid\mathcal{F}_{i-1}\}$. One cannot work directly with $U_{i,n} - V_{i,n}$. Better bounds are obtained by decomposing $U_{i,n}$ and $V_{i,n}$ as follows: set
\[
U^{+}_{i,n}(x) = \bigl[\mathbf{1}\{\varepsilon_{1i}\le x_1+\mu_{i,n}(\lambda)\} - \mathbf{1}(\varepsilon_{1i}\le x_1)\bigr]^{+}\prod_{j=2}^{d}\mathbf{1}(\varepsilon_{ji}\le x_j),
\]
and
\[
U^{-}_{i,n}(x) = \bigl[\mathbf{1}(\varepsilon_{1i}\le x_1) - \mathbf{1}\{\varepsilon_{1i}\le x_1+\mu_{i,n}(\lambda)\}\bigr]^{+}\prod_{j=2}^{d}\mathbf{1}(\varepsilon_{ji}\le x_j).
\]


Similarly, set $V^{\pm}_{i,n}(x) = E\{U^{\pm}_{i,n}(x)\mid\mathcal{F}_{i-1}\}$ and define
\[
\tilde\Psi^{\pm}_{1,n}(s,x;\lambda) = \frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\{U^{\pm}_{i,n}(x) - V^{\pm}_{i,n}(x)\}.
\]
Then $U_{i,n} - V_{i,n} = U^{+}_{i,n} - V^{+}_{i,n} - \{U^{-}_{i,n} - V^{-}_{i,n}\}$, so $\tilde\Psi_{1,n} = \tilde\Psi^{+}_{1,n} - \tilde\Psi^{-}_{1,n}$. To complete the proof, it is enough to show that $\tilde\Psi^{\pm}_{1,n}$ can be made arbitrarily small. Only the proof for the $+$ part is given, the other one being similar.
Now, for $x\in[y_k,y_{k+1})\times[u_l,v_l)$, observe that
\[
U^{+}_{i,n}(x) \le U^{+}_{i,n}(y_{k+1},v_l) + \mathbf{1}(y_k<\varepsilon_{1i}\le y_{k+1})
\]
and
\[
U^{+}_{i,n}(x) \ge U^{+}_{i,n}(y_k,u_l) - \mathbf{1}(y_k<\varepsilon_{1i}\le y_{k+1}).
\]
Taking expectations in the last two inequalities and summing over $i$ yields the following bound:
\[
\sup_{s\in[0,1]}\ \sup_{x\in[y_k,y_{k+1})\times[u_l,v_l)} \bigl|\tilde\Psi^{+}_{1,n}(s,x;\lambda)\bigr|
\le \sup_{s\in[0,1]}\max\Bigl\{\bigl|\tilde\Psi^{+}_{1,n}(s,y_{k+1},v_l;\lambda)\bigr|,\ \bigl|\tilde\Psi^{+}_{1,n}(s,y_k,u_l;\lambda)\bigr|\Bigr\} + \frac{2\sqrt{n}}{N_n}
\]
\[
{}+ \sup_{s\in[0,1]}|\alpha_{1,n}(s,y_{k+1}) - \alpha_{1,n}(s,y_k)| + \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bigl\{V^{+}_{i,n}(y_{k+1},v_l) - V^{+}_{i,n}(y_k,u_l)\bigr\}.
\]
Next, $\eta_{i,n} = U^{+}_{i,n} - V^{+}_{i,n}$ is a martingale difference such that $|\eta_{i,n}|\le 2$ and $E(\eta^2_{i,n}\mid\mathcal{F}_{i-1}) = V^{+}_{i,n}(1-V^{+}_{i,n}) \le V^{+}_{i,n}$. As a result, from the maximum inequality for martingales, one gets
\[
P\Biggl\{\sup_{s\in[0,1]}\max_{1\le k\le N_n}\max_{1\le l\le J}\bigl|\tilde\Psi^{+}_{1,n}(s,y_k,u_l;\lambda)\bigr| > \delta_0\Biggr\}
\le (N_n J)\,\delta_0^{-4}\sup_{x\in\bar{\mathbb{R}}^d} E\Bigl[\bigl\{\tilde\Psi^{+}_{1,n}(1,x;\lambda)\bigr\}^4\Bigr],
\]
which is bounded by
\[
c\,(N_n J)\,\delta_0^{-4}\Biggl[\frac{16}{n} + \frac{1}{n^2}\sup_{x\in\bar{\mathbb{R}}^d} E\Biggl\{\Biggl(\sum_{i=1}^{n} V^{+}_{i,n}(x)\Biggr)^{2}\Biggr\}\Biggr],
\]
for some universal constant $c$. Using (A3) and (A6), the latter is $O(N_n/n)$, proving that $\sup_{s\in[0,1]}\max_{1\le k\le N_n}\max_{1\le l\le J}\bigl|\tilde\Psi^{+}_{1,n}(s,y_k,u_l;\lambda)\bigr|$ converges in probability to zero. Similarly, $\sup_{s\in[0,1]}\max_{1\le k\le N_n}\max_{1\le l\le J}\bigl|\tilde\Psi^{+}_{1,n}(s,y_k,v_l;\lambda)\bigr|$ also converges in probability to zero. Next,
\[
\sup_{s\in[0,1]}\max_{1\le k\le N_n}|\alpha_{1,n}(s,y_{k+1}) - \alpha_{1,n}(s,y_k)| = \sup_{s\in[0,1]}\max_{1\le k\le N_n}\bigl|\beta_{1,n}\{s,(k+1)/N_n\} - \beta_{1,n}(s,k/N_n)\bigr|
\]
converges in probability to zero, where $\beta_{1,n}$ is the empirical Kiefer process constructed from uniform variables.

Finally, from (A1) and (A2), one may conclude that $\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bigl\{V^{+}_{i,n}(y_{k+1},v_l) - V^{+}_{i,n}(y_k,u_l)\bigr\}$ is bounded, for some constants $c_1,\ldots,c_4$ depending on $\|f_1\|_\infty$ and $\|g_1\|_\infty$, by
\[
c_1|\lambda_1|\sum_{i=1}^{n}\frac{r_i}{\sqrt{n}} + c_2\max_{0\le k<N_n}\max_{1\le l\le J}|f_1(y_{k+1},v_l) - f_1(y_k,u_l)|
+ c_3\max_{0\le k<N_n}\max_{1\le l\le J}|g_1(y_{k+1},v_l) - g_1(y_k,u_l)| + c_4|\lambda_3|.
\]
That can be made as small as necessary, provided $n$ is large, $\lambda_3$ is small and the mesh of the covering is small enough. $\square$

The following theorem gives the convergence of the empirical process based on Rosenblatt's transformation. Set $U_{i,n} = R_i/(n+1)$, where $R_1,\ldots,R_n$ are the rank vectors associated with $U_1,\ldots,U_n$, and let $E_{i,n} = R_{\theta_n}(U_{i,n})$, where $\theta_n$ is the estimation of $\theta$ calculated with $U_{i,n} = R_i/(n+1)$, $i=1,\ldots,n$; define
\[
D_n(u) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\{\mathbf{1}(E_{i,n}\le u) - C_\perp(u)\}, \qquad u\in[0,1]^d. \qquad (16)
\]

Theorem 2. Under Assumptions (A1)--(A6), if the volatility matrix is diagonal and if $(\theta_n)$ is regular for $\theta$, then $D_n - \bar D_n \rightsquigarrow 0$ and $D_n \rightsquigarrow D$, where $\bar D_n$ is the analogous process computed from the innovations, and
\[
D(u) = B(u) - \Gamma(u)\,\Theta - \Psi(u),
\]
where $B$ is a $C_\perp$-Brownian bridge, $E\{B(u)W^\top\} = \Gamma(u)$, $E\{\Psi(u)W^\top\} = 0$, and
\[
\Psi(u) = \sum_{j=1}^{d} E\Biggl\{\mathbf{1}(E\le u)\sum_{k=1}^{j}\beta_k(1,U_k)\,\partial_{u_k}R^{(j)}_\theta(U)\ \Bigg|\ E_j = u_j\Biggr\},
\]
where $U\sim C_\theta = C$ and $E = R_\theta(U)$, with $U$ independent of all other observations.


Proof. First, note that
\[
B_n(u) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bigl[\mathbf{1}\{R_\theta(U_i)\le u\} - C_\perp(u)\bigr] \rightsquigarrow B,
\]
where $B$ is a $C_\perp$-Brownian bridge. Next, set $H_n(x) = R_{\theta_n}\circ F_n(x)$ and $H(x) = R_\theta\{F(x)\}$, where $F_n$ is the vector of (rescaled) empirical marginal distribution functions, so that $\theta_n = T_n(U_{1,n},\ldots,U_{n,n}) = T_n\{F_n(\varepsilon_1),\ldots,F_n(\varepsilon_n)\}$. Then $V_{i,n} = H_n(\varepsilon_i)$ and $V_i = H(\varepsilon_i)$, for all $i=1,\ldots,n$. Since $\sqrt{n}(\theta_n-\theta)\rightsquigarrow\Theta$, using the results in Sections 2.1--2.2, and since $\mathbb{H}_n = \sqrt{n}(H_n - H)\rightsquigarrow\mathbb{H}$, where, for any $j=1,\ldots,d$,
\[
\mathbb{H}^{(j)}(x) = \dot R^{(j)}\{F(x)\}\,\Theta + \sum_{k=1}^{j}\partial_{u_k}R^{(j)}\{F(x)\}\,\beta_k\{1,F_k(x_k)\},
\]
it follows from the results in Ghoudi and Remillard (2004) that $\bar D_n \rightsquigarrow \bar D$, where
\[
\bar D(u) = B(u) - \sum_{j=1}^{d}\Psi_j(u), \qquad u\in[0,1]^d,
\]
with
\[
\Psi_j(u) = E\bigl[\mathbb{H}^{(j)}\{F^{-1}(U)\}\,\mathbf{1}(E\le u)\mid E_j = u_j\bigr], \qquad j=1,\ldots,d. \qquad (17)
\]
Next, it follows from Remillard (2010)[Lemma 1] that the parameter part of $\sum_{j=1}^{d}\Psi_j$ equals $\Gamma(u)\Theta$, so $\bar D = B - \Gamma\Theta - \Psi$. Hence $E\{B(u)W^\top\} = \Gamma(u)$, as claimed. Next, since we already know that, for any $j=1,\ldots,d$, $E\{\beta_j(1,u_j)W^\top\} = 0$, it follows that $E\{\Psi(u)W^\top\} = 0$, for all $u\in[0,1]^d$. As a result,
\[
E\{D(u)W^\top\} = \Gamma(u) - \Gamma(u)\,E(\Theta W^\top) = 0,
\]
for all $u\in[0,1]^d$, since any $\theta_n$ in Sections 2.1--2.2 is a regular estimator of $\theta$, so $E(\Theta W^\top) = I$. It then follows from Genest and Remillard (2008) that the parametric bootstrap works for $D_n$. To complete the proof, it only remains to show that $D_n - \bar D_n \rightsquigarrow 0$. To that end, note that $V_{i,n} = \hat H_n(e_{i,n})$, where $\hat H_n = R_{\theta_n}\circ F_n$, so if $\hat{\mathbb{H}}_n = \sqrt{n}(\hat H_n - H)$, then $\hat{\mathbb{H}}_n \rightsquigarrow \hat{\mathbb{H}}$, where, for all $j=1,\ldots,d$,
\[
\hat{\mathbb{H}}^{(j)}(x) = \dot R^{(j)}\{F(x)\}\,\Theta + \sum_{k=1}^{j}\partial_{u_k}R^{(j)}\{F(x)\}\,\mathbb{F}_k\{1,F_k(x_k)\}
= \mathbb{H}^{(j)}(x) + \sum_{k=1}^{j}\partial_{u_k}R^{(j)}\{F(x)\}\,f_k(x_k)\left\{(\Gamma_0)_k + x_k(\Gamma_{1k})_k\right\}.
\]
Next,
\[
V_{ji} - V_{ji,n} = -\frac{\hat{\mathbb{H}}^{(j)}_n(e_{i,n})}{\sqrt{n}} + H^{(j)}(\varepsilon_i) - H^{(j)}(e_{i,n})
= -\frac{\hat{\mathbb{H}}^{(j)}_n(e_{i,n})}{\sqrt{n}} + \sum_{k=1}^{j}\partial_{u_k}R^{(j)}(U_i)\Biggl\{d_{ki,n} + \Bigl(\gamma_{0i}\Theta_n + \sum_{l=1}^{d}\varepsilon_{li}\,\gamma_{1li}\Theta_n\Bigr)_k\Big/\sqrt{n}\Biggr\},
\]
$i=1,\ldots,n$. It then follows from the proof of Theorem 1, the tightness of $\hat{\mathbb{H}}$, and Ghoudi and Remillard (2004)[Lemma 5.1] that $D_n - \bar D_n \rightsquigarrow 0$. $\square$
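To make the use of $D_n$ in (16) concrete, here is a bivariate Clayton sketch (all names are ours, and the Clayton-specific formulas are our additions, not stated in the text): the Rosenblatt transform is $R_\theta(u_1,u_2) = \bigl(u_1, \{1+u_1^{\theta}(u_2^{-\theta}-1)\}^{-1-1/\theta}\bigr)$, $\theta$ is estimated by inverting Kendall's tau, $\tau = \theta/(\theta+2)$ as in Table 1, and the Cramer-von Mises statistic $S_n = \int_{[0,1]^d} D_n(u)^2\,du$ is computed in closed form. An actual $P$-value would require the parametric bootstrap.

```python
import numpy as np

def clayton_sample(n, theta, rng):
    """Bivariate Clayton sample by conditional inversion."""
    u1, p = rng.uniform(size=(2, n))
    u2 = ((p ** (-theta / (1.0 + theta)) - 1.0) * u1 ** (-theta) + 1.0) ** (-1.0 / theta)
    return np.column_stack([u1, u2])

def rosenblatt_clayton(u, theta):
    """E_i = R_theta(U_i) for the bivariate Clayton copula."""
    e2 = (1.0 + u[:, 0] ** theta * (u[:, 1] ** (-theta) - 1.0)) ** (-1.0 - 1.0 / theta)
    return np.column_stack([u[:, 0], e2])

def cvm_independence(e):
    """S_n = integral of D_n(u)^2 over [0,1]^d, in closed form."""
    n, d = e.shape
    mx = np.maximum(e[:, None, :], e[None, :, :])
    s = np.prod(1.0 - mx, axis=2).sum() / n
    s -= 2.0 ** (1 - d) * np.prod(1.0 - e ** 2, axis=1).sum()
    return s + n / 3.0 ** d

rng = np.random.default_rng(1)
u = clayton_sample(500, theta=2.0, rng=rng)
pseudo = (np.argsort(np.argsort(u, axis=0), axis=0) + 1) / (u.shape[0] + 1)
sgn = np.sign(pseudo[:, 0][:, None] - pseudo[:, 0]) * np.sign(pseudo[:, 1][:, None] - pseudo[:, 1])
tau = sgn.sum() / (u.shape[0] * (u.shape[0] - 1))
theta_n = 2.0 * tau / (1.0 - tau)     # inversion of tau = theta/(theta+2)
s_n = cvm_independence(rosenblatt_clayton(pseudo, theta_n))
```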

Appendix C. Other proofs

Before starting the proofs, one states a lemma that is quite useful in some of them.

Lemma 1. Suppose that $C$ and $D$ are distribution functions on $[0,1]^2$, such that $C$ is continuous and each marginal distribution of $D$ has mean $1/2$. Then
\[
\int D(u,v)\,dC(u,v) = \int C(u,v)\,dD(u,v).
\]

C.1. Proof of Propositions 1--4. To prove Proposition 1, note that
\[
\int C^{(j,k)}_n\,dC^{(j,k)}_n = \int\bigl(C^{(j,k)}_n - C^{(j,k)}\bigr)\,dC^{(j,k)}_n + \int C^{(j,k)}\,dC^{(j,k)}_n
= \int\bigl(C^{(j,k)}_n - C^{(j,k)}\bigr)\,dC^{(j,k)}_n + \int C^{(j,k)}_n\,dC^{(j,k)},
\]
using Lemma 1, since $C^{(j,k)}_n$ and $C^{(j,k)}$ satisfy the assumptions. Then
\[
\sqrt{n}(\tau_{jk,n} - \tau_{jk}) = 4\sqrt{n}\left(\int C^{(j,k)}_n\,dC^{(j,k)}_n - \int C^{(j,k)}\,dC^{(j,k)}\right) + o_P(1)
\]
\[
= 4\int \mathbb{C}^{(j,k)}_n\,dC^{(j,k)}_n + 4\int \mathbb{C}^{(j,k)}_n\,dC^{(j,k)} + o_P(1)
= 8\int \mathbb{C}^{(j,k)}_n\,dC^{(j,k)} + o_P(1).
\]
Similarly,
\[
\sqrt{n}(\tilde\tau_{jk,n} - \tau_{jk}) = 8\int \tilde{\mathbb{C}}^{(j,k)}_n\,dC^{(j,k)} + o_P(1).
\]
By Corollary 1, $\tilde{\mathbb{C}}^{(j,k)}_n = \mathbb{C}^{(j,k)}_n + o_P(1) \rightsquigarrow \mathbb{C}^{(j,k)}$, proving that $R^{K}_{jk,n}$ converges to $8\int \mathbb{C}^{(j,k)}\,dC^{(j,k)}$.
Next, it is easy to check that
\[
\bar{\mathbb{C}}_n(u) = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\Biggl[\mathbf{1}(U_i\le u) - C(u) - \sum_{j=1}^{d}\{\mathbf{1}(U_{ji}\le u_j) - u_j\}\,\partial_{u_j}C(u)\Biggr]
\]
converges to $\mathbb{C}$. As a result, for any $1\le j<k\le d$,
\[
8\int \bar{\mathbb{C}}^{(j,k)}_n\,dC^{(j,k)} = \frac{1}{\sqrt{n}}\sum_{i=1}^{n}\bigl\{8\,C^{(j,k)}(U_{ji},U_{ki}) - 4U_{ji} - 4U_{ki} + 2 - 2\tau_{jk}\bigr\}
\]
converges to $R^{K}_{jk}$. To compute the covariance between $R^{K}_{jk}$ and $W$, note that $E\{\mathbb{C}(u)W^\top\} = \dot C(u)$, and the latter is $0$ if all but one of the $u_j$ equal $1$. As a result,
\[
E\bigl(R^{K}_{jk}W^\top\bigr) = 8\int \dot C^{(j,k)}\,dC^{(j,k)} = \dot\tau_{jk},
\]
using integration by parts, since $\tau_{jk} = 4\int C^{(j,k)}\,dC^{(j,k)} - 1$. The proof of Propositions 2--4 is similar. It suffices to note that for the three estimators, one has
\[
\sqrt{n}(\rho_{jk,n} - \rho_{jk}) = \sqrt{n}\left[\int\{L(u_j) - \bar L\}\{L(u_k) - \bar L\}\,dC^{(j,k)}_n(u_j,u_k) - \int\{L(u_j) - \bar L\}\{L(u_k) - \bar L\}\,dC^{(j,k)}(u_j,u_k)\right] + o_P(1)
\]
\[
= \int \mathbb{C}^{(j,k)}_n\{J(x),J(y)\}\,dx\,dy + o_P(1),
\]
for an appropriate distribution function $J$ with left-continuous inverse $L$.$^{5}$ According to Genest and Remillard (2004) and Corollary 1, the latter converges to
\[
\int \mathbb{C}^{(j,k)}\{J(x),J(y)\}\,dx\,dy.
\]

$^{5}$ $J$ is the standard Gaussian distribution function for the van der Waerden coefficient; $J$ is the distribution function of the uniform distribution over $[0,\sqrt{12}]$ for Spearman's rho; while $J$ is the distribution function of the discrete random variable taking values $0$ and $2$, each with probability $1/2$, for Blomqvist's coefficient.

The representations come from the convergence of $\bar{\mathbb{C}}_n$ to $\mathbb{C}$. The proof of the covariance with $W$ can be handled similarly to the one involving Kendall's tau. $\square$


C.2. Proof of Proposition 5. The convergence of $\mathbb{C}_n = \sqrt{n}(C_n - C)$ follows from Corollary 1, and the joint convergence of $(\mathbb{C}_n, \Theta_n)$ follows from the representation of $\mathbb{C}_n$ and the estimators of Sections 2.1--2.2. Using the smoothness of $c_\theta$, it follows that $\dot C_\theta$ is continuous, and under $H_0$,
\[
\mathbb{A}_n = \sqrt{n}(C_n - C_{\theta_n}) = \mathbb{C}_n - \sqrt{n}(C_{\theta_n} - C_\theta) = \mathbb{C}_n - \dot C_\theta\,\Theta_n + o_P(1).
\]
As a result, $\mathbb{A}_n \rightsquigarrow \mathbb{A} = \mathbb{C} - \dot C_\theta\,\Theta$.
Following Genest and Remillard (2008), the parametric bootstrap approach will work since $E(\Theta W^\top) = I$, as shown in Sections 2.1--2.2. $\square$

C.3. Proof of Proposition 7. Note that
\[
\mathbb{C}_n(s,u) = \mathbb{K}_n\{s, F_n^{-1}(u)\} + \frac{\lfloor ns\rfloor}{\sqrt{n}}\bigl[K\{F_n^{-1}(u)\} - C(u)\bigr].
\]
It then follows from Theorem 1 and the proof of Genest et al. (2007)[Proposition A.1] that
\[
\mathbb{C}_n(s,u) \rightsquigarrow \alpha(s,u) - s\sum_{j=1}^{d}\partial_{u_j}C(u)\,\alpha_j(1,u_j).
\]
As a result, $\mathbb{G}_n \rightsquigarrow \mathbb{G}$, with
\[
\mathbb{G}(s,u) = \mathbb{C}(s,u) - s\,\mathbb{C}(1,u) = \alpha(s,u) - s\,\alpha(1,u).
\]
To show that the multipliers method works, it suffices to note that conditionally on $U_{1,n},\ldots,U_{n,n}$, the finite-dimensional distributions of $\alpha^{(k)}_n$ converge to those of $\alpha^{(k)}$, an independent copy of $\alpha$ by construction. Next, the tightness of $\alpha^{(k)}_n$ follows from the tightness of $C_n(u)\,\frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\xi_{i,k}$ and the tightness of $\frac{1}{\sqrt{n}}\sum_{i=1}^{\lfloor ns\rfloor}\xi_{i,k}\,\mathbf{1}(U_{i,n}\le u)$, using Bickel and Wichura (1971) and the convergence in probability of $\frac{1}{n}\sum_{i=1}^{\lfloor ns\rfloor}\mathbf{1}(U_{i,n}\le u)$ to the continuous function $s\,C(u)$ at each $(s,u)\in[0,1]^{d+1}$. As a result, $\bigl(T_n, T^{(1)}_n,\ldots,T^{(N)}_n\bigr)$ converges to $\bigl(T, T^{(1)},\ldots,T^{(N)}\bigr)$, where $T^{(1)},\ldots,T^{(N)}$ are independent and identically distributed copies of $T$. Hence, if $N$ is large enough, $\frac{1}{N}\sum_{k=1}^{N}\mathbf{1}\bigl(T^{(k)}_n > T_n\bigr)$ is an approximate $P$-value for $T_n$. $\square$

Appendix D. Copula families

In this section, one describes the copula families used in the paper, beginning with two major families, the Archimedean copulas and the meta-elliptic copulas.

D.1. Archimedean copulas. Archimedean copulas were first defined by Genest and MacKay (1986). A copula $C$ is said to be Archimedean (with generator $\phi$) when it can be expressed in the form
\[
C(u_1,\ldots,u_d) = \phi^{-1}\{\phi(u_1) + \cdots + \phi(u_d)\},
\]
for all $u\in(0,1)^d$ such that $\sum_{j=1}^{d}\phi(u_j) \le \phi(0+)$, where $\phi : (0,1]\to[0,\infty)$ is a bijection such that $\phi(1) = 0$ and
\[
(-1)^i\,\frac{d^i}{dx^i}\,\phi^{-1}(x) > 0, \qquad 1\le i\le d.
\]
The generator is unique up to a constant. If the generator yields a copula for any $d\ge 2$, then $\phi^{-1}$ is necessarily the Laplace transform of a non-negative random variable $\zeta$ (Marshall and Olkin, 1988), i.e., $\phi^{-1}(s) = E\bigl(e^{-s\zeta}\bigr)$, for all $s\ge 0$. Table 1 gives the generators of three well-known Archimedean families: the Clayton, Frank, and Gumbel-Hougaard families. These three classes share the interesting property that the copula exists in any dimension, for the values of the parameter listed in the table. See Joe (1997) and Nelsen (2006) for further examples of copulas.

Table 1. Multivariate Archimedean copulas and domain of the parameter

Family            $\phi(t)$                                               Range                  Kendall's tau
Clayton           $(t^{-\theta}-1)/\theta$                                $\theta\in(0,\infty)$  $\theta/(\theta+2)$
Frank             $-\log\bigl\{(1-\alpha^{t})/(1-\alpha)\bigr\}$          $\alpha\in(0,1)$       $\{\log(\alpha)^2 + 4\log(\alpha) + 4\,\mathrm{dilog}(\alpha)\}/\log(\alpha)^2$
Gumbel-Hougaard   $|\log t|^{1/\theta}$                                   $\theta\in(0,1)$       $1-\theta$

Here, $\mathrm{dilog}(x) = \int_1^x \frac{\log t}{1-t}\,dt$ stands for the dilog function.

For the Clayton family with parameter $\theta\in(0,\infty)$, the associated $\zeta$ has a Gamma distribution with parameters $(1/\theta, 1)$, since $E\bigl(e^{-s\zeta}\bigr) = (1+s)^{-1/\theta}$. For the Frank family with parameter $\alpha\in(0,1)$, the associated $\zeta$ is discrete and has a logarithmic series distribution given by $P(\zeta = k) = \frac{1}{\log(1/\alpha)}\,\frac{(1-\alpha)^k}{k}$, $k = 1, 2, \ldots$, since
\[
E\bigl(e^{-s\zeta}\bigr) = -\log\bigl\{1 - (1-\alpha)e^{-s}\bigr\}\big/\log(1/\alpha) = \frac{1}{\log(1/\alpha)}\sum_{k=1}^{\infty}(1-\alpha)^k\,\frac{e^{-ks}}{k}.
\]
Finally, for the Gumbel-Hougaard family with parameter $\theta\in(0,1)$, $\zeta$ has a positive stable distribution of parameter $\theta$, since $E\bigl(e^{-s\zeta}\bigr) = e^{-s^{\theta}}$.
For formulas giving the densities and Rosenblatt's transforms of these three families, see, e.g., Remillard et al. (2010).
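Since $\phi^{-1}$ is the Laplace transform of the frailty $\zeta$, sampling is immediate: draw $\zeta$, draw independent standard exponentials $E_1,\ldots,E_d$, and set $U_j = \phi^{-1}(E_j/\zeta)$ (Marshall and Olkin, 1988). A Clayton sketch in Python (names are ours):

```python
import numpy as np

def clayton_frailty_sample(n, d, theta, seed=0):
    """d-variate Clayton sample via its Gamma(1/theta, 1) frailty:
    U_j = psi(E_j/zeta), where psi(s) = (1+s)^(-1/theta) = E[exp(-s*zeta)]."""
    rng = np.random.default_rng(seed)
    zeta = rng.gamma(shape=1.0 / theta, scale=1.0, size=(n, 1))
    e = rng.exponential(size=(n, d))
    return (1.0 + e / zeta) ** (-1.0 / theta)

u = clayton_frailty_sample(2000, d=3, theta=2.0)
```

For Frank, $\zeta$ would instead be drawn from the logarithmic series distribution above; for Gumbel-Hougaard, from the positive stable law.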

D.2. Meta-elliptic copulas. Meta-elliptic copulas are simply copulas associated with elliptic distributions through relation (1), the most popular in applications being the Student copula and the (now infamous) Gaussian copula. Recall that a vector $Y$ has an elliptic distribution with generator $g$ and parameters $\mu$ and $\Sigma$ (a positive definite symmetric matrix), denoted $Y\sim E(g,\mu,\Sigma)$, if its density $h$ is given by $h(y) = |\Sigma|^{-1/2}\,g\bigl\{(y-\mu)^\top\Sigma^{-1}(y-\mu)\bigr\}$, $y\in\mathbb{R}^d$, where
\[
\frac{\pi^{d/2}}{\Gamma(d/2)}\,r^{(d-2)/2}\,g(r) \qquad (18)
\]
is the density of $\xi = (Y-\mu)^\top\Sigma^{-1}(Y-\mu)$. It is easy to check that the underlying copula depends only on $g$ and $R$, $R$ being the correlation matrix associated with $\Sigma$. Here are some general families of elliptic distributions.

Table 2. Generators of some $d$-dimensional elliptic distributions.

Family              Generator
Gaussian            $g(r) = \dfrac{1}{(2\pi)^{d/2}}\,e^{-r/2}$
Pearson type II     $g(r) = \dfrac{\Gamma(\alpha+d/2)}{\pi^{d/2}\,\Gamma(\alpha)}\,(1-r)^{\alpha-1}$, where $0<r<1$ and $\alpha>0$
Pearson type VII    $g(r) = \dfrac{\Gamma(\alpha+d/2)}{\Gamma(\alpha)\,(\pi\nu)^{d/2}}\,(1+r/\nu)^{-\alpha-d/2}$, where $\alpha,\nu>0$

Remark 6. The case $\alpha = \nu/2$ for the Pearson type VII corresponds to the multivariate Student, while $\alpha = 1/2$ and $\nu = 1$ correspond to the multivariate Cauchy distribution. However, since $\nu$ is a scaling parameter for that family, the meta-elliptic copula only depends on $\alpha$, so any meta-elliptic copula in that family is necessarily a Student copula with parameters $(R, 2\alpha)$. In particular, the Cauchy copula is the Student copula with 1 degree of freedom.

Suppose that $X = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} \sim E(g, 0, \Sigma)$, where $\Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}$. It is easy to check that $X_1 \sim E(g_1, 0, \Sigma_{11})$, where
\[
g_1(r) = \int_{\mathbb{R}^{d_2}} g\bigl(\|x_2\|^2 + r\bigr)\,dx_2 = \frac{2\,\pi^{d_2/2}}{\Gamma(d_2/2)}\int_0^{\infty} s^{d_2-1}\,g(s^2 + r)\,ds. \qquad (19)
\]

As a consequence, the density of any marginal distribution of a $d$-dimensional elliptic distribution with generator $g$ and parameters $(0, R)$ is
\[
f(x) = \frac{\pi^{(d-1)/2}}{\Gamma\{(d-1)/2\}}\int_0^{\infty} s^{(d-3)/2}\,g(s + x^2)\,ds, \qquad x\in\mathbb{R}. \qquad (20)
\]
For example, if $g$ is the generator of the $d$-dimensional Pearson type VII with parameters $(\alpha,\nu)$, then $g_i$ is the generator of the $d_i$-dimensional Pearson type VII with parameters $(\alpha,\nu)$, $i = 1, 2$. One can also show that if $g$ is the generator of the $d$-dimensional Pearson type II with parameter $\alpha$, then $g_i$ is the generator of the $d_i$-dimensional Pearson type II with parameter $\alpha + d_{3-i}/2$, $i = 1, 2$. In particular, the marginal distributions of a Pearson type VII have density
\[
f(x) = \frac{\Gamma(\alpha+1/2)}{\Gamma(\alpha)\,(\pi\nu)^{1/2}}\,(1 + x^2/\nu)^{-\alpha-1/2}.
\]
Note that formula (19) is particularly useful for computing Rosenblatt's transforms.
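As an illustration of the meta-elliptic construction, the Student copula with parameters $(R,\nu)$ (the Pearson type VII case of Remark 6 with $\nu = 2\alpha$) can be sampled by transforming a multivariate-$t$ vector margin by margin through relation (1). A sketch with `scipy` (names are ours):

```python
import numpy as np
from scipy.stats import t as student_t

def student_copula_sample(n, corr, df, seed=0):
    """Sample the Student copula with correlation matrix R and df degrees
    of freedom: X = Z / sqrt(W/df), Z ~ N(0, R), W ~ chi^2_df, U = t_df(X)."""
    rng = np.random.default_rng(seed)
    corr = np.asarray(corr)
    z = rng.standard_normal((n, corr.shape[0])) @ np.linalg.cholesky(corr).T
    w = rng.chisquare(df, size=(n, 1))
    return student_t.cdf(z / np.sqrt(w / df), df=df)

u = student_copula_sample(1000, corr=[[1.0, 0.5], [0.5, 1.0]], df=4)
```

For elliptic copulas, Kendall's tau is $\tau = (2/\pi)\arcsin(\rho)$, which provides a quick sanity check on the output.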
D.3. Other copula families. As proposed in Dias and Embrechts (2004), one could also consider mixtures of copulas, i.e., families of the form
\[
C_{\theta,\pi} = \sum_{k=1}^{m} \pi_k\,C^{(k)}_{\theta_k},
\]
with $\sum_{k=1}^{m}\pi_k = 1$, $\pi_k > 0$, and $\theta = (\theta_1,\ldots,\theta_m)$. The copulas $C^{(k)}_{\theta_k}$ may be part of the same family; for example, one could consider a mixture of Gaussian copulas. One could also take a mixture of different families, e.g., a mixture of Clayton and Gumbel copulas.
Other families considered recently in applications include hierarchical copulas (Savu and Trede, 2006; McNeil, 2008) and copula vines (Bedford and Cooke, 2002; Aas et al., 2009).
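Sampling from such a mixture is straightforward: draw a component label with probabilities $\pi_k$, then sample from that component. A sketch for the bivariate Gaussian mixture fitted in Section 6 (names are ours):

```python
import numpy as np
from scipy.stats import norm

def gaussian_mixture_copula_sample(n, rhos, weights, seed=0):
    """Sample C = sum_k pi_k C_k, each C_k a bivariate Gaussian copula."""
    rng = np.random.default_rng(seed)
    rho = np.asarray(rhos)[rng.choice(len(weights), size=n, p=weights)]
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
    return np.column_stack([norm.cdf(z1), norm.cdf(z2)])

u = gaussian_mixture_copula_sample(5000, rhos=(0.8205, 0.3749),
                                   weights=(0.4017, 0.5983))
```

The design choice is the standard composition method: conditionally on the label, the pair is exactly Gaussian, so the unconditional copula is the stated mixture.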

References
Aas, K., Czado, C., Frigessi, A., and Bakken, H. (2009). Pair-copula constructions of
multiple dependence. Insurance Math. Econom., 44(2):182198.
Bedford, T. and Cooke, R. M. (2002). Vinesa new graphical model for dependent
random variables. Ann. Statist., 30(4):10311068.
Berg, D. and Quessy, J.-F. (2009). Local power analyses of goodness-of-fit tests for
copulas. Scand. J. Stat., 36(3):389412.
Berrada, T., Dupuis, D. J., Jacquier, E., Papageorgiou, N., and Remillard, B. (2006).
Credit migration and derivatives pricing using copulas. J. Comput. Fin., 10:4368.
Bickel, P. J. and Wichura, M. J. (1971). Convergence criteria for multiparameter
stochastic processes and some applications. Ann. Math. Statist., 42:16561670.
Blum, J. R., Kiefer, J., and Rosenblatt, M. (1961). Distribution free test of indepen-
dence based on the sample distribution function. Ann. Math. Statist., 32:485498.
Breymann, W., Dias, A., and Embrechts, P. (2003). Dependence structures for mul-
tivariate high-frequency data in finance. Quant. Finance, 3:114.
30 BRUNO REMILLARD

Carlstein, E. (1988). Nonparametric change-point estimation. Ann. Statist.,


16(1):188197.
Chen, X. and Fan, Y. (2006). Estimation and model selection of semiparametric
copula-based multivariate dynamic models under copula misspecification. Journal
of Econometrics, 135(1-2):125 154.
Chen, X., Fan, Y., and Patton, A. (2005). Simple tests for models of dependence
between multiple financial time series, with applications to u.s. equity returns and
exchange rates. Technical Report 483, Financial Markets Group, London School of
Economics.
Dias, A. and Embrechts, P. (2004). Dynamic copula models for multivariate high-
frequency data in finance. Technical report, ETH Zurich.
Dias, A. and Embrechts, P. (2009). Testing for structural changes in exchange rates
dependence beyond linear correlation. European Journal of Finance, 15(7):619637.
Diebold, F. X., Gunther, T. A., and Tay, A. S. (1998). Evaluating density forecasts
with applications to financial risk management. International Economic Review,
39(4):863883.
Dobric, J. and Schmid, F. (2005). Testing goodness of fit for parametric families
of copulas: Application to financial data. Comm. Statist. Simulation Comput.,
34:10531068.
Dobric, J. and Schmid, F. (2007). A goodness of fit test for copulas based on Rosen-
blatts transformation. Comput. Statist. Data Anal., 51:46334642.
Durbin, J. (1973). Weak convergence of the sample distribution function when pa-
rameters are estimated. Ann. Statist., 1(2):279290.
Embrechts, P., McNeil, A. J., and Straumann, D. (2002). Correlation and dependence
in risk management: properties and pitfalls. In Risk management: value at risk and
beyond (Cambridge, 1998), pages 176223. Cambridge Univ. Press, Cambridge.
Engle, R. F. and Kroner, K. F. (1995). Multivariate simultaneous generalized ARCH.
Econometric Theory, 11:122150.
Fang, H.-B., Fang, K.-T., and Kotz, S. (2002). The meta-elliptical distributions with
given marginals. J. Multivariate Anal., 82(1):116.
Fermanian, J.-D., Radulovic, D., and Wegkamp, M. J. (2004). Weak convergence of
empirical copula processes. Bernoulli, 10:847860.
Ganler, P. and Stute, W. (1987). Seminar on Empirical Processes, volume 9 of DMV
Seminar. Birkhauser Verlag, Basel.
Genest, C., Ghoudi, K., and Remillard, B. (2007). Rank-based extensions of the
Brock Dechert Scheinkman test for serial dependence. J. Amer. Statist. Assoc.,
102:13631376.
Genest, C., Ghoudi, K., and Rivest, L.-P. (1995). A semiparametric estimation proce-
dure of dependence parameters in multivariate families of distributions. Biometrika,
82:543552.
Genest, C. and MacKay, R. J. (1986). Copules archimediennes et familles de lois bidi-
mensionnelles dont les marges sont donnees. The Canadian Journal of Statistics,
14(2):145159.
GOODNESS-OF-FIT FOR COPULAS 31

Genest, C. and Remillard, B. (2004). Tests of independence or randomness based on


the empirical copula process. Test, 13:335369.
Genest, C. and Remillard, B. (2008). Validity of the parametric bootstrap for
goodness-of-fit testing in semiparametric models. Ann. Inst. H. Poincare Sect.
B, 44:10961127.
Genest, C., Rémillard, B., and Beaudoin, D. (2009). Omnibus goodness-of-fit tests for copulas: A review and a power study. Insurance Math. Econom., 44:199–213.
Ghoudi, K., Kulperger, R. J., and Rémillard, B. (2001). A nonparametric test of serial independence for time series and residuals. J. Multivariate Anal., 79:191–218.
Ghoudi, K. and Rémillard, B. (2004). Empirical processes based on pseudo-observations. II. The multivariate case. In Asymptotic Methods in Stochastics, volume 44 of Fields Inst. Commun., pages 381–406. Amer. Math. Soc., Providence, RI.
Ghoudi, K. and Rémillard, B. (2010). Diagnostic tests for innovations of ARMA models using empirical processes of residuals. Technical Report G-2010-23, GERAD.
Guégan, D. and Zhang, J. (2010). Change analysis of a dynamic copula for measuring dependence in multivariate financial data. Quant. Finance, 10(4):421–430.
Harvey, A. (2010). Tracking a changing copula. Journal of Empirical Finance, 17(3):485–500.
Joe, H. (1997). Multivariate models and dependence concepts, volume 73 of Monographs on Statistics and Applied Probability. Chapman & Hall, London.
Kole, E., Koedijk, K., and Verbeek, M. (2007). Selecting copulas for risk management. Journal of Banking & Finance, 31(8):2405–2423.
Marshall, A. and Olkin, I. (1988). Families of multivariate distributions. Journal of the American Statistical Association, 83:834–841.
McNeil, A. J. (2008). Sampling nested Archimedean copulas. Journal of Statistical Computation and Simulation, 78(6):567–581.
Nelsen, R. B. (2006). An introduction to copulas. Springer Series in Statistics. Springer-Verlag, New York, second edition.
Panchenko, V. (2005). Goodness-of-fit test for copulas. Phys. A, 355:176–182.
Patton, A. J. (2006). Modelling asymmetric exchange rate dependence. International Economic Review, 47(2):527–556.
Picard, D. (1985). Testing and estimating change-points in time series. Adv. in Appl. Probab., 17(4):841–867.
Rémillard, B. (2010). Validity of the parametric bootstrap for goodness-of-fit testing in dynamic models. Technical report.
Rémillard, B., Papageorgiou, N., and Soustra, F. (2010). Dynamic copulas. Technical Report G-2010-18, GERAD.
Rémillard, B. and Scaillet, O. (2009). Testing for equality between two copulas. J. Multivariate Anal., 100:377–386.
Rosenblatt, M. (1952). Remarks on a multivariate transformation. Ann. Math. Stat., 23:470–472.
Savu, C. and Trede, M. (2006). Hierarchical Archimedean copulas. Technical report, University of Münster.
Scaillet, O. (2005). A Kolmogorov–Smirnov type test for positive quadrant dependence. Canad. J. Statist., 33(3):415–427.
Shih, J. H. and Louis, T. A. (1995). Inferences on the association parameter in copula models for bivariate survival data. Biometrics, 51:1384–1399.
Sklar, M. (1959). Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris, 8:229–231.
Tsukahara, H. (2005). Semiparametric estimation in copula models. Canad. J. Statist., 33(3):357–375.
van den Goorbergh, R., Genest, C., and Werker, B. (2005). Bivariate option pricing using dynamic copula models. Insurance: Mathematics and Economics, 37:101–114.
Xu, J. (1996). Statistical Modelling and Inference for Multivariate and Longitudinal Discrete Response Data. PhD thesis, University of British Columbia.
GERAD and Department of Management Sciences, HEC Montréal, 3000 chemin de la Côte-Sainte-Catherine, Montréal (Québec), Canada H3T 2A7