Consistent testing for a constant copula under strong mixing based on the tapered block multiplier technique

Martin Ruppert

November 4, 2011
Abstract
Considering multivariate strongly mixing time series, nonparametric tests for a constant
copula with specified or unspecified change point (candidate) are derived; the tests are
consistent against general alternatives. A tapered block multiplier technique based on
serially dependent multiplier random variables is provided to estimate p-values of the test
statistics. Size and power of the tests in finite samples are evaluated with Monte Carlo
simulations.
Key words: Change point test; Copula; Empirical copula process; Nonparametric estima-
tion; Time series; Strong mixing; Multiplier central limit theorem.
AMS 2000 subject class.: Primary 62G05, 62G10, 60F05, Secondary 60G15, 62E20.

Graduate School of Risk Management and Department of Economic and Social Statistics, University
of Cologne, Albertus-Magnus-Platz, 50923 Köln, Germany; Email: martin.ruppert@uni-koeln.de,
Tel: +49 (0) 221 4706656, Fax: +49 (0) 221 4705074.
1. Introduction
Over the last decade, copulas have become a standard tool in modern risk management.
The copula of a continuous random vector is a function which uniquely determines the
dependence structure linking the marginal distribution functions. Copulas play a pivotal
role for, e.g., measuring multivariate association [see 45], pricing multivariate options [see
48] and allocating financial assets [see 34]. The latter two references emphasize that time
variation of copulas possesses an important impact on financial engineering applications.
Evidence for time-varying dependence structures can indirectly be drawn from functionals
of the copula, e.g., Spearman’s ρ, as suggested by Gaißer et al. [20] and Wied et al. [51].
Investigating time variation of the copula itself, Busetti and Harvey [8] consider a nonpara-
metric quantile-based test for a constant copula. Semiparametric tests for time variation of
the parameter within a prespecified family of one-parameter copulas are proposed by Dias
and Embrechts [15] and Giacomini et al. [23]. Guegan and Zhang [24] combine tests for
constancy of the copula (on a given set of vectors on its domain), the copula family, and the
parameter. The assumption of independent and identically distributed pseudo-observations
is generally made in the latter references. With respect to financial time series, a frequently
chosen option is to estimate a GARCH model and to approximate this assumption using the
residuals obtained after GARCH filtration. The effect of replacing
unobserved innovations by estimated residuals, however, is to be taken into account. There-
fore, specific techniques for residuals are required [cf., e.g., 11]. Exploring this approach,
Rémillard [36] investigates a nonparametric change point test for the copula of residuals in
stochastic volatility models. Avoiding the need to specify any parametric model, Fermanian
and Scaillet [19] consider purely nonparametric estimation of copulas for time-series under
strict stationarity and strong mixing conditions on the multivariate process. A recent gen-
eralization of this framework is proposed by van Kampen and Wied [50] who assume the
univariate processes to be strictly stationary but relax the assumption of a constant copula
and suggest a quantile-based test for a constant copula under strong mixing assumptions.
We introduce nonparametric Cramér-von Mises-, Kuiper-, and Kolmogorov-Smirnov tests
for a constant copula under strong mixing assumptions. The tests extend those for time-
constant quantiles by assessing constancy of the copula on its domain. In consequence,
they are consistent under general alternatives. Depending on the object of investigation,
tests with a specified or unspecified change point (candidate) are introduced. Whereas the
former setting requires a hypothesis on the change point location, it allows us to relax the
assumption of strictly stationary univariate processes. P-values of the tests are estimated
based on a generalization of the multiplier bootstrap technique introduced in Rémillard
and Scaillet [38] to the case of strongly mixing time series. The idea is comparable to block
bootstrap methods; however, instead of sampling blocks with replacement, we generate
blocks of serially dependent multiplier random variables. For a general introduction to the
latter idea, we refer to Bühlmann [6] and Paparoditis and Politis [32].
This paper is organized as follows: in Section 2, we discuss convergence of the empiri-
cal copula process under strong mixing. A result of Doukhan et al. [17] is generalized
to establish the asymptotic behavior of the empirical copula process under nonrestrictive
smoothness assumptions based on serially dependent observations. Furthermore, a tapered
block multiplier bootstrap technique for inference on the weak limit of the empirical copula
process is derived and assessed in finite samples. Tests for a constant copula with specified
or unspecified change point (candidate) which are relying on this technique are established
in Section 3.
2. Nonparametric inference based on serially dependent observations
As a basis for the tests introduced in the next section, the result of Segers [46] on the asymp-
totic behavior of the empirical copula process under nonrestrictive smoothness assumptions
is generalized to enable its applicability to serially dependent observations. Furthermore,
we introduce a multiplier-based resampling method for this particular setting, establish its
asymptotic behavior and investigate performance in finite samples.
2.1. Asymptotic theory
Consider a vector-valued process (X_j)_{j∈Z} with X_j = (X_{j,1}, ..., X_{j,d}) taking values in R^d. Let F_i be the distribution function of X_{j,i} for all j ∈ Z, i = 1, ..., d, and let F be the joint distribution of X_j for all j ∈ Z. Assume that all marginal distribution functions are continuous. Then, according to Sklar's Theorem [47], there exists a unique copula C such that F(x_1, ..., x_d) = C(F_1(x_1), ..., F_d(x_d)) for all (x_1, ..., x_d) ∈ R^d. The σ-fields generated by X_j, j ≤ t, and X_j, j ≥ t, are denoted by F_t = σ{X_j, j ≤ t} and F^t = σ{X_j, j ≥ t}, respectively. We define

α(F_s, F^{s+r}) = sup_{A ∈ F_s, B ∈ F^{s+r}} |P(A ∩ B) − P(A)P(B)|.

The strong- (or α-) mixing coefficient α_X corresponding to the process (X_j)_{j∈Z} is given by α_X(r) = sup_{s≥0} α(F_s, F^{s+r}). The process (X_j)_{j∈Z} is said to be strongly mixing if α_X(r) → 0 for r → ∞. This type of weak dependence covers a broad range of time-series models.
Consider the following examples, cf. Doukhan [16] and Carrasco and Chen [10]:
Example 1. i) AR(1) processes (X_j)_{j∈Z} given by

X_j = β X_{j−1} + ε_j,

where (ε_j)_{j∈Z} is a sequence of independent and identically distributed continuous innovations with mean zero. For |β| < 1, the process is strictly stationary and strongly mixing with exponential decay of α_X(r).

ii) GARCH(1, 1) processes (X_j)_{j∈Z},

X_j = σ_j ε_j,   σ_j^2 = ω + β σ_{j−1}^2 + α ε_{j−1}^2,   (1)

where (ε_j)_{j∈Z} is a sequence of independent and identically distributed continuous innovations, independent of σ_0^2, with mean zero and variance one. For α + β < 1, the process is strictly stationary and strongly mixing with exponential decay of α_X(r).
Let X_1, ..., X_n denote a sample from (X_j)_{j∈Z}. A simple nonparametric estimator of the unknown copula C is given by the empirical copula which is first considered by Rüschendorf [43] and Deheuvels [13]. Depending on whether the marginal distribution functions are assumed to be known or unknown, we define

C_n(u) := (1/n) Σ_{j=1}^n Π_{i=1}^d 1{U_{j,i} ≤ u_i}   for all u ∈ [0, 1]^d,

Ĉ_n(u) := (1/n) Σ_{j=1}^n Π_{i=1}^d 1{Û_{j,i} ≤ u_i}   for all u ∈ [0, 1]^d,   (2)
with observations U_{j,i} = F_i(X_{j,i}) and pseudo-observations Û_{j,i} = F̂_i(X_{j,i}) for all j = 1, ..., n and i = 1, ..., d, where F̂_i(x) = (1/n) Σ_{j=1}^n 1{X_{j,i} ≤ x} for all x ∈ R. Unless otherwise noted, the marginal distribution functions are assumed to be unknown and the estimator Ĉ_n is used. In addition to the practical relevance of this assumption, Genest and Segers [22] prove that pseudo-observations Û_{j,i} = F̂_i(X_{j,i}) permit more efficient inference on the copula than observations U_{j,i} for a broad class of copulas.
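The following Python sketch illustrates how pseudo-observations and the empirical copula Ĉ_n of Equation (2) can be computed; it is not part of the original text, assumes only NumPy, and all function names are illustrative.

```python
import numpy as np

def pseudo_observations(x):
    """U-hat_{j,i} = F-hat_i(X_{j,i}): componentwise ranks divided by n."""
    n, d = x.shape
    u = np.empty_like(x, dtype=float)
    for i in range(d):
        # empirical cdf of margin i evaluated at the data points
        u[:, i] = np.searchsorted(np.sort(x[:, i]), x[:, i], side="right") / n
    return u

def empirical_copula(u_pseudo, u_eval):
    """C-hat_n(u) = (1/n) sum_j prod_i 1{U-hat_{j,i} <= u_i}, evaluated at the rows of u_eval."""
    ind = np.all(u_pseudo[:, None, :] <= u_eval[None, :, :], axis=2)
    return ind.mean(axis=0)

# toy example with a bivariate sample
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2))
u_hat = pseudo_observations(x)
print(empirical_copula(u_hat, np.array([[1/3, 1/3], [2/3, 2/3]])))
```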
Doukhan et al. [17] investigate dependent observations and establish the asymptotic behavior of the empirical copula process, defined by √n {Ĉ_n − C}, assuming the copula to possess continuous partial derivatives on [0, 1]^d. Segers [46] points out that many popular families of copulas (e.g., the Gaussian, Clayton, and Gumbel families) do not satisfy the assumption of continuous first partial derivatives on [0, 1]^d. He establishes the asymptotic behavior of the empirical copula for serially independent observations under the weaker condition

D_i C(u) exists and is continuous on {u ∈ [0, 1]^d : u_i ∈ (0, 1)}   for all i = 1, ..., d.   (3)
Under Condition (3), the partial derivatives' domain can be extended to u ∈ [0, 1]^d by

D_i C(u) = lim_{h→0} [C(u + h e_i) − C(u)]/h   for all u ∈ [0, 1]^d with 0 < u_i < 1,

D_i C(u) = limsup_{h↓0} C(u + h e_i)/h   for all u ∈ [0, 1]^d with u_i = 0,   (4)

D_i C(u) = limsup_{h↓0} [C(u) − C(u − h e_i)]/h   for all u ∈ [0, 1]^d with u_i = 1,

and for all i = 1, ..., d, where e_i denotes the ith column of a d × d identity matrix. A result of Bücher [4] permits us to establish the asymptotic behavior of the empirical copula process in the case of serially dependent observations. The following theorem is based on nonrestrictive smoothness assumptions and mild assumptions on the strong mixing rate:
Theorem 1. Consider observations X_1, ..., X_n, drawn from a strictly stationary process (X_j)_{j∈Z} satisfying the strong mixing condition α_X(r) = O(r^{−a}) for some a > 1. If C satisfies Condition (3), then the empirical copula process converges weakly in the space of uniformly bounded functions on [0, 1]^d equipped with the uniform metric, (ℓ^∞([0, 1]^d), ‖·‖_∞):

√n {Ĉ_n(u) − C(u)} →^w G_C(u),

where G_C represents a Gaussian process given by

G_C(u) = B_C(u) − Σ_{i=1}^d D_i C(u) B_C(u^{(i)})   for all u ∈ [0, 1]^d.   (5)

The vector u^{(i)} denotes the vector where all coordinates, except the ith coordinate of u, are replaced by 1. The process B_C is a tight centered Gaussian process on [0, 1]^d with covariance function

Cov(B_C(u), B_C(v)) = Σ_{j∈Z} Cov(1{U_0 ≤ u}, 1{U_j ≤ v})   for all u, v ∈ [0, 1]^d.   (6)
The proof is given in Section 5. Notice that the covariance structure as given in Equation (6) depends on the entire process (X_j)_{j∈Z}. If the marginal distribution functions F_i, i = 1, ..., d, are known, then the limiting process reduces to B_C. In this particular case, Condition (3) is not necessary to establish weak convergence of the empirical copula process.
2.2. Resampling techniques
In this Section, a generalized multiplier bootstrap technique is introduced which is appli-
cable in the case of serially dependent observations. Moreover, a generalized asymptotic
result is obtained for the (moving) block bootstrap which serves as a benchmark for the
new technique in the following finite sample assessment.
Fermanian et al. [18] investigate the empirical copula process for independent and identically distributed observations X_1, ..., X_n and prove consistency of the nonparametric bootstrap method which is based on sampling with replacement from X_1, ..., X_n. We denote a bootstrap sample by X_1^B, ..., X_n^B and define

Ĉ_n^B(u) := (1/n) Σ_{j=1}^n 1{Û_j^B ≤ u}   for all u ∈ [0, 1]^d,      Û_{j,i}^B := (1/n) Σ_{k=1}^n 1{X_{k,i}^B ≤ X_{j,i}^B}

for all j = 1, ..., n and i = 1, ..., d. Notice that the bootstrap empirical copula can equivalently be expressed based on multinomially (n, n^{−1}, ..., n^{−1}) distributed random variables W = (W_1, ..., W_n):

Ĉ_n^W(u) := (1/n) Σ_{j=1}^n W_j 1{Û_j^W ≤ u}   for all u ∈ [0, 1]^d,      Û_{j,i}^W := (1/n) Σ_{k=1}^n W_k 1{X_{k,i} ≤ X_{j,i}}

for all j = 1, ..., n and i = 1, ..., d.
Modes of convergence which allow us to investigate weak convergence of the empirical copula process conditional on an observed sample are considered next. The multiplier random variables represent the remaining source of stochastic influence in this conditional setting. The bootstrap empirical copula process converges weakly conditional on X_1, ..., X_n in probability in (ℓ^∞([0, 1]^d), ‖·‖_∞) if the following two criteria are satisfied, see van der Vaart and Wellner [49], Section 3.9.3, Kosorok [30], Section 2.2.3, and Bücher and Dette [5]:

sup_{h ∈ BL_1(ℓ^∞([0,1]^d))} | E_W h( √n {Ĉ_n^W(u) − Ĉ_n(u)} ) − E h(G_C(u)) | →_P 0,   (7)

where E_W denotes expectation with respect to W conditional on X_1, ..., X_n. Furthermore,

E_W h( √n {Ĉ_n^W(u) − Ĉ_n(u)} )^* − E_W h( √n {Ĉ_n^W(u) − Ĉ_n(u)} )_* →_P 0.   (8)
The function h is assumed to be uniformly bounded with Lipschitz-norm bounded by one, i.e., h ∈ BL_1(ℓ^∞([0, 1]^d)), which is defined by

{ f : ℓ^∞([0, 1]^d) → R, ‖f‖_∞ ≤ 1, |f(β) − f(γ)| ≤ ‖β − γ‖_∞ for all γ, β ∈ ℓ^∞([0, 1]^d) }.

Moreover, h(·)^* and h(·)_* denote the measurable majorant and minorant with respect to the joint data (i.e., X_1, ..., X_n and W). In the case of independent and identically distributed observations satisfying Condition (3), validity of criteria (7) and (8) can be proven [see 18, 4]. Hence, the bootstrap empirical copula process converges weakly conditional on X_1, ..., X_n in probability, which is denoted by

√n {Ĉ_n^W(u) − Ĉ_n(u)} →^P_W G_C(u).
Weak convergence conditional on X_1, ..., X_n almost surely is defined analogously.
Whereas the bootstrap is consistent for independent and identically distributed samples, consistency generally fails for serially dependent samples. In consequence, a block bootstrap method is proposed by Künsch [31]. Given the sample X_1, ..., X_n, the block bootstrap method requires blocks of size l_B = l_B(n), with l_B(n) → ∞ as n → ∞ and l_B(n) = o(n), consisting of consecutive observations

B_{h, l_B} = {X_{h+1}, ..., X_{h+l_B}}   for all h = 0, ..., n − l_B.

We assume n = k l_B (else the last block is truncated) and simulate H = (H_1, ..., H_k), independent and uniformly distributed random variables on {0, ..., n − l_B}. The block bootstrap sample is given by the observations of the k blocks B_{H_1, l_B}, ..., B_{H_k, l_B}, i.e.,

X_1^B = X_{H_1+1}, ..., X_{l_B}^B = X_{H_1+l_B}, X_{l_B+1}^B = X_{H_2+1}, ..., X_n^B = X_{H_k+l_B}.

Denote the block bootstrap empirical copula by Ĉ_n^B(u). Its asymptotic behavior is given next:
Theorem 2. Consider observations X_1, ..., X_n, drawn from a strictly stationary process (X_j)_{j∈Z} satisfying Σ_{r=1}^∞ (r + 1)^{16(d+1)} √(α_X(r)) < ∞. Assume that l_B(n) = O(n^{1/2−ε}) for 0 < ε < 1/2. If C satisfies Condition (3), then the block bootstrap empirical copula process converges weakly conditional on X_1, ..., X_n in probability in (ℓ^∞([0, 1]^d), ‖·‖_∞):

√n {Ĉ_n^B(u) − Ĉ_n(u)} →^P_H G_C(u).
The proof is given in Section 5. The previous theorem weakens the smoothness assumptions of a result obtained by Gaißer et al. [21]; it is derived by means of an asymptotic result on the block bootstrap for general distribution functions established by Bühlmann [6], Theorem 3.1, and a representation of the copula as a composition of functions [18, 4]. Based on bootstrap samples s = 1, ..., S, a set of block bootstrap realizations to estimate the asymptotic behavior of the empirical copula process is obtained by:

Ĝ_{C,n}^{B(s)}(u) = √n {Ĉ_n^{B(s)}(u) − Ĉ_n(u)}   for all u ∈ [0, 1]^d.
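As an illustration only (the helper functions pseudo_observations and empirical_copula are those sketched in Section 2.1; none of this code is from the paper), one block bootstrap realization Ĝ_{C,n}^{B(s)} could be generated along the following lines.

```python
import numpy as np

def block_bootstrap_sample(x, l_b, rng):
    """Draw k = ceil(n / l_b) blocks of l_b consecutive rows at uniform start indices."""
    n = x.shape[0]
    k = int(np.ceil(n / l_b))
    starts = rng.integers(0, n - l_b + 1, size=k)        # h in {0, ..., n - l_b}
    idx = np.concatenate([np.arange(s, s + l_b) for s in starts])[:n]
    return x[idx]

def block_bootstrap_replicate(x, u_eval, l_b, rng):
    """sqrt(n) { C-hat_n^{B(s)}(u) - C-hat_n(u) } for one bootstrap sample."""
    n = x.shape[0]
    c_hat = empirical_copula(pseudo_observations(x), u_eval)
    x_b = block_bootstrap_sample(x, l_b, rng)
    c_b = empirical_copula(pseudo_observations(x_b), u_eval)
    return np.sqrt(n) * (c_b - c_hat)
```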
A process related to the bootstrap empirical copula process can be formulated if both the assumption of multinomially distributed random variables is dropped and the marginal distribution functions are left unaltered during the resampling procedure. Consider independent and identically distributed multiplier random variables ξ_1, ..., ξ_n with finite positive mean and variance, additionally satisfying ‖ξ_j‖_{2,1} := ∫_0^∞ √(P(|ξ_j| > x)) dx < ∞ for all j = 1, ..., n (the last condition being slightly stronger than that of a finite second moment). Replacing the multinomial multiplier random variables W_1, ..., W_n by ξ_1/ξ̄, ..., ξ_n/ξ̄ (ensuring realizations having arithmetic mean one) yields the multiplier (empirical copula) process, which converges weakly conditional on X_1, ..., X_n in probability in (ℓ^∞([0, 1]^d), ‖·‖_∞), see Bücher and Dette [5]:

√n { (1/n) Σ_{j=1}^n (ξ_j/ξ̄) 1{Û_j ≤ u} − Ĉ_n(u) } →^P_ξ B_C(u).
For general considerations of multiplier empirical processes, we refer to the monographs of van der Vaart and Wellner [49] and Kosorok [30]. An interesting property of the multiplier process is its convergence to an independent copy of B_C, both for known as well as for unknown marginal distribution functions. The process is introduced by Scaillet [44] in a bivariate context; a general multivariate version and its unconditional weak convergence are investigated by Rémillard and Scaillet [38]. Bücher and Dette [5] find that the multiplier technique yields more precise results than the nonparametric bootstrap in mean as well as in mean squared error when estimating the asymptotic covariance of the empirical copula process based on independent and identically distributed samples. Hence, a generalization of the multiplier technique is considered next. It is motivated by the fact that this technique is inconsistent when applied to serially dependent samples. Inoue [27] develops a block multiplier process for general distribution functions based on dependent data in which the same multiplier random variable is used for a block of observations. We focus on copulas and consider a refinement of this technique. More precisely, a tapered block multiplier (empirical copula) process is introduced based on the work of Bühlmann [6], Chapter 3.3, and Paparoditis and Politis [32]: the main idea is to consider a sample ξ_1, ..., ξ_n from a process (ξ_j)_{j∈Z} of serially dependent tapered block multiplier random variables, satisfying:
A1 (ξ_j)_{j∈Z} is independent of the observation process (X_j)_{j∈Z}.

A2 (ξ_j)_{j∈Z} is a positive c·l(n)-dependent process, i.e., for fixed j ∈ Z, ξ_j is independent of ξ_{j+h} for all |h| ≥ c·l(n), where c is a constant and l(n) → ∞ as n → ∞ while l(n) = o(n).

A3 (ξ_j)_{j∈Z} is strictly stationary. For all j, h ∈ Z, assume E[ξ_j] = µ > 0, Cov[ξ_j, ξ_{j+h}] = µ^2 v(h/l(n)), and v is a function symmetric about zero; without loss of generality, we consider µ = 1 and v(0) = 1. All central moments of ξ_j are supposed to be bounded given the sample size n.
Weak convergence of the tapered block multiplier process conditional on a sample X_1, ..., X_n almost surely is established in the following theorem:

Theorem 3. Consider observations X_1, ..., X_n, drawn from a strictly stationary process (X_j)_{j∈Z} satisfying Σ_{r=1}^∞ (r + 1)^c √(α_X(r)) < ∞, where c = max{8d + 12, ⌊2/ε⌋ + 1}. Let the tapered block multiplier process (ξ_j)_{j∈Z} satisfy A1, A2, A3 with block length l(n) → ∞, where l(n) = O(n^{1/2−ε}) for 0 < ε < 1/2. The tapered block multiplier empirical copula process converges weakly conditional on X_1, ..., X_n almost surely in (ℓ^∞([0, 1]^d), ‖·‖_∞):

√n { (1/n) Σ_{j=1}^n (ξ_j/ξ̄) 1{Û_j ≤ u} − Ĉ_n(u) } →^{a.s.}_ξ B_C^M(u),

where B_C^M(u) is an independent copy of B_C(u).
The proof is given in Section 5.
Remark 1. The multiplier random variables can as well be assumed to be centered around zero [cf. 30, Proof of Theorem 2.6]. Define ξ_j^0 := ξ_j − µ. Then

B_{C,n}^{M,0}(u) = (1/√n) Σ_{j=1}^n (ξ_j^0 − ξ̄^0) 1{Û_j ≤ u} = ξ̄ (1/√n) Σ_{j=1}^n (ξ_j/ξ̄ − 1) 1{Û_j ≤ u} = ξ̄ B_{C,n}^M(u)

for all u ∈ [0, 1]^d. This is an asymptotically equivalent form of the above tapered block multiplier process:

sup_{u ∈ [0,1]^d} | B_{C,n}^{M,0}(u) − B_{C,n}^M(u) | = sup_{u ∈ [0,1]^d} | (ξ̄ − 1) B_{C,n}^M(u) | →^P_ξ 0,

since B_{C,n}^M(u) tends to a tight centered Gaussian limit. The assumption of centered multiplier random variables is abbreviated as A3b in the following.
There are numerous ways to define tapered block multiplier processes (ξ_j)_{j∈Z} satisfying the above assumptions; in the following, a basic version having uniform weights and a refined version with triangular weights are investigated and compared:
Example 2. A simple form of the tapered block multiplier random variables can be defined based on moving average processes. Consider the function κ_1 which assigns uniform weights given by

κ_1(h) := 1/{2l(n) − 1} for all |h| < l(n),   κ_1(h) := 0 else.

Note that κ_1 is a discrete kernel, i.e., it is symmetric about zero and Σ_{h∈Z} κ_1(h) = 1. The tapered block multiplier process is defined by

ξ_j = Σ_{h=−∞}^{∞} κ_1(h) w_{j+h}   for all j ∈ Z,   (9)

where (w_j)_{j∈Z} is an independent and identically distributed sequence of, e.g., Gamma(q, q) random variables with q := 1/[2l(n) − 1]. The expectation of ξ_j is then given by E[ξ_j] = 1, its variance by Var[ξ_j] = 1 for all j ∈ Z. For all j ∈ Z and |h| < 2l(n) − 1, direct calculations further yield the covariance function Cov(ξ_j, ξ_{j+h}) = {2l(n) − 1 − |h|}/{2l(n) − 1}, which decreases linearly as h increases in absolute value. The resulting sequence (ξ_j)_{j∈Z} satisfies A1, A2, and A3. Exploring Remark 1, tapered block multiplier random variables can as well be defined based on sequences (w_j)_{j∈Z} of, e.g., Rademacher-type random variables w_j characterized by P(w_j = −1/√q) = P(w_j = 1/√q) = 0.5, or Normal random variables w_j ∼ N(0, 1/√q). In either one of these two cases, the resulting sequence (ξ_j)_{j∈Z} satisfies A1, A2, and A3b. Figure 1 shows the kernel function κ_1 and simulated trajectories of Rademacher-type tapered block multiplier random variables.
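A minimal Python sketch of this construction (illustrative only; the function name is mine): the tapered block multipliers arise as a moving average of i.i.d. Gamma(q, q) innovations with the uniform kernel κ_1.

```python
import numpy as np

def tapered_block_multipliers_uniform(n, l, rng):
    """xi_j = sum_{|h| < l} w_{j+h} / (2l - 1) with w_j i.i.d. Gamma(q, q), q = 1/(2l - 1).

    By construction E[xi_j] = Var[xi_j] = 1, and xi_j, xi_{j+h} are dependent for |h| < 2l - 1.
    """
    q = 1.0 / (2 * l - 1)
    # extra innovations at both ends so that every xi_j uses a full window of length 2l - 1
    w = rng.gamma(shape=q, scale=1.0 / q, size=n + 2 * (l - 1))
    kernel = np.full(2 * l - 1, q)                 # uniform weights kappa_1(h) = 1/(2l - 1)
    return np.convolve(w, kernel, mode="valid")    # returns xi_1, ..., xi_n

rng = np.random.default_rng(1)
xi = tapered_block_multipliers_uniform(200, l=4, rng=rng)
print(xi.mean(), xi.var())                         # both close to 1
```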
Example 3. Following Bühlmann [6], let us define the kernel function by

κ_2(h) := max{0, (1 − |h|/l(n))/l(n)}

for all h ∈ Z. The tapered multiplier process (ξ_j)_{j∈Z} follows Equation (9), where (w_j)_{j∈Z} is an independent and identically distributed sequence of Gamma(q, q) random variables with q = 2/{3l(n)} + 1/{3l(n)^3}. The expectation of ξ_j is given by

E[ξ_j] = 1/l(n) + 2 Σ_{h=1}^{l(n)} (1/l(n)) (1 − h/l(n)) = 1.
Figure 1: Tapered block multiplier Monte Carlo simulation. Kernel function κ_1(h) (left) and simulated trajectories of Rademacher-type tapered block multiplier random variables ξ_1, ..., ξ_100 (right) with block length l(n) = 3 (solid line) and l(n) = 6 (dashed line), respectively.

Figure 2: Tapered block multiplier Monte Carlo simulation. Kernel function κ_2(h) (left) and simulated trajectories of Rademacher-type tapered block multiplier random variables ξ_1, ..., ξ_100 (right) with block length l(n) = 3 (solid line) and l(n) = 6 (dashed line), respectively.
For the variance, direct calculations yield

Var[ξ_j] = [ 1/l(n)^2 + 2 Σ_{h=1}^{l(n)} {l(n) − h}^2 / l(n)^4 ] Var[w_·] = [ 2/{3l(n)} + 1/{3l(n)^3} ] Var[w_·] = 1

for all j ∈ Z. For any j ∈ Z and |h| < 2l(n) − 1, the covariance function Cov(ξ_j, ξ_{j+h}) can be described by a parabola centered at zero and opening downward [for details, see 6, Section 6.2]. The resulting sequence (ξ_j)_{j∈Z} satisfies A1, A2, and A3. Figure 2 provides an illustration of the kernel function κ_2 as well as simulated trajectories of Rademacher-type tapered block multiplier random variables; in the latter Rademacher-type setting, (ξ_j)_{j∈Z} satisfies A1, A2, and A3b. Notice the smoothing which is driven by the choice of kernel function and the block length l(n). This effect can be further explored using more sophisticated kernel functions, e.g., with bell shape; this is left for further research.
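As a sketch (again illustrative only, not from the paper), the triangular-kernel variant of the multipliers differs from the uniform one only in the weights and in the Gamma parameter q:

```python
import numpy as np

def tapered_block_multipliers_triangular(n, l, rng):
    """Example 3: xi_j = sum_h kappa_2(h) w_{j+h} with kappa_2(h) = max(0, (1 - |h|/l)/l)."""
    h = np.arange(-(l - 1), l)
    kernel = (1.0 - np.abs(h) / l) / l                 # kappa_2 on |h| < l, sums to 1
    q = 2.0 / (3 * l) + 1.0 / (3 * l ** 3)             # ensures E[xi_j] = Var[xi_j] = 1
    w = rng.gamma(shape=q, scale=1.0 / q, size=n + 2 * (l - 1))
    return np.convolve(w, kernel, mode="valid")
```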
Given observations X_1, ..., X_n of a strictly stationary process (X_j)_{j∈Z} satisfying the assumptions of Theorem 3 and further satisfying Condition (3), estimation of Equation (5) requires three steps: consider s = 1, ..., S samples ξ_1^{(s)}, ..., ξ_n^{(s)} from a tapered block multiplier process (ξ_j)_{j∈Z} satisfying A1, A2, A3. A set of copies of the tight centered Gaussian process B_C is obtained by

B̂_{C,n}^{M(s)}(u) = (1/√n) Σ_{j=1}^n ( ξ_j^{(s)}/ξ̄^{(s)} − 1 ) 1{Û_j ≤ u}   for all u ∈ [0, 1]^d, s = 1, ..., S.   (10)
The required adjustments for tapered block multiplier random variables satisfying A1, A2, A3b are easily deduced from Remark 1. Finite differencing yields a nonparametric estimator of the first-order partial derivatives D_i C(u):

D̂_i C(u) = [Ĉ_n(u + h e_i) − Ĉ_n(u − h e_i)] / (2h)   for all u ∈ [0, 1]^d with h ≤ u_i ≤ 1 − h,

D̂_i C(u) = Ĉ_n(u + 2h e_i) / (2h)   for all u ∈ [0, 1]^d with 0 ≤ u_i < h,   (11)

D̂_i C(u) = [Ĉ_n(u) − Ĉ_n(u − 2h e_i)] / (2h)   for all u ∈ [0, 1]^d with 1 − h < u_i ≤ 1,

where h = 1/√n and e_i denotes the ith column of the d × d identity matrix [see 46, 4].
Combining Equations (10) and (11), we obtain:

Ĝ_{C,n}^{M(s)}(u) = B̂_{C,n}^{M(s)}(u) − Σ_{i=1}^d D̂_i C_n(u) B̂_{C,n}^{M(s)}(u^{(i)})   for all u ∈ [0, 1]^d, s = 1, ..., S.   (12)

An application of Segers [46], Proposition 3.2, proves that Ĝ_{C,n}^{M(s)}(u) converges weakly to an independent copy of the limiting process G_C for all s = 1, ..., S.
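The three steps can be sketched in Python as follows (illustrative code, not from the paper; it reuses the helpers introduced earlier, uses the normalized multipliers ξ_j/ξ̄ − 1 of Equation (10), and simplifies the boundary cases of Equation (11) by clipping the evaluation points to [0, 1]).

```python
import numpy as np

def multiplier_B_process(u_pseudo, xi, u_eval):
    """Equation (10): B-hat^M_{C,n}(u) = n^{-1/2} sum_j (xi_j / xi_bar - 1) 1{U-hat_j <= u}."""
    n = u_pseudo.shape[0]
    ind = np.all(u_pseudo[:, None, :] <= u_eval[None, :, :], axis=2)   # (n, m) indicator matrix
    weights = xi / xi.mean() - 1.0
    return (weights[:, None] * ind).sum(axis=0) / np.sqrt(n)

def partial_derivative_estimate(u_pseudo, u_eval, i):
    """Finite-difference estimator of D_i C(u) with bandwidth h = 1/sqrt(n).

    Simplified version of Equation (11): the shifted points are clipped to [0, 1].
    """
    n = u_pseudo.shape[0]
    h = 1.0 / np.sqrt(n)
    up, lo = u_eval.copy(), u_eval.copy()
    up[:, i] = np.minimum(u_eval[:, i] + h, 1.0)
    lo[:, i] = np.maximum(u_eval[:, i] - h, 0.0)
    return (empirical_copula(u_pseudo, up) - empirical_copula(u_pseudo, lo)) / (2 * h)

def multiplier_G_process(u_pseudo, xi, u_eval):
    """Equation (12): subtract the partial-derivative correction evaluated at u^(i)."""
    d = u_pseudo.shape[1]
    g = multiplier_B_process(u_pseudo, xi, u_eval)
    for i in range(d):
        u_i = np.ones_like(u_eval)
        u_i[:, i] = u_eval[:, i]          # keep the i-th coordinate, set all others to 1
        g = g - partial_derivative_estimate(u_pseudo, u_eval, i) * multiplier_B_process(u_pseudo, xi, u_i)
    return g
```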
2.3. Finite sample behavior
Having established the asymptotic theory, we evaluate and compare the finite sample prop-
erties of the (moving) block bootstrap and the introduced tapered block multiplier technique
when estimating the limiting covariance of the empirical copula process in MC simulations.
The results of this section complement those of Bücher and Dette [5] and Bücher [4] on boot-
strap approximations for the empirical copula process based on independent and identically
distributed observations.
We first simulate independent and identically distributed observations from the bivariate Clayton copula given by

C_θ^{Cl}(u_1, u_2) = ( u_1^{−θ} + u_2^{−θ} − 1 )^{−1/θ},   θ > 0,   (13)

for θ ∈ {1, 4}, i.e., Kendall's τ = θ/(θ + 2) ∈ {1/3, 2/3}; as a second family of copulas, we consider the bivariate family of Gumbel copulas

C_θ^{Gu}(u_1, u_2) = exp( −[ {−ln(u_1)}^θ + {−ln(u_2)}^θ ]^{1/θ} ),   θ ≥ 1,   (14)
for θ ∈ {1.5, 3}, i.e., Kendall's τ = 1 − 1/θ ∈ {1/3, 2/3}. In order to assess the performance of the two methods when applied to serially dependent data, two examples of strongly mixing time series are considered (cf. Example 1). Firstly, the class of AR(1) processes: we assume that a sample of n independent random variates

U_j = (U_{j,1}, U_{j,2}),   j = 1, ..., n,   (15)

is drawn from one of the aforementioned copulas. Define

ε_j = (Φ^{−1}(U_{j,1}), Φ^{−1}(U_{j,2}))   for all j = 1, ..., n.   (16)

We obtain a sample X_1, ..., X_n of an AR(1) process having Normal residuals by the initialization X_1 = ε_1 and recursive calculation of

X_j = β X_{j−1} + ε_j   for all j = 2, ..., n.   (17)

The initially chosen coefficient of the lagged variable is β = 0.5.
For comparison, we as well investigate observations from bivariate copula-GARCH(1, 1) processes having a specified static copula [see 33]. Based on the MC simulation of ε_j for all j = 1, ..., n as given in Equation (16), heteroscedastic standard deviations σ_{j,i}, i = 1, 2, are obtained by initializing

σ_{0,i} = √( ω_i / (1 − α_i − β_i) )   for i = 1, 2

using the unconditional GARCH(1, 1) standard deviation and iterative calculation of the general process given in Equation (1) with the following parameterizations:

X_{j,1} = σ_{j,1} ε_{j,1},   σ_{j,1}^2 = 0.012 + 0.919 σ_{j−1}^2 + 0.072 ε_{j−1}^2,   (18)

X_{j,2} = σ_{j,2} ε_{j,2},   σ_{j,2}^2 = 0.037 + 0.868 σ_{j−1}^2 + 0.115 ε_{j−1}^2,   (19)

for all j = 1, ..., n. The considered coefficients are estimated by Jondeau et al. [28] to model volatility of S&P 500 and DAX daily (log-)returns in an empirical application which shows the practical relevance of this specific parameter choice.
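For illustration, the data-generating schemes of Equations (15)-(19) can be sketched in Python as follows (not from the paper; the Clayton sampler uses the standard Gamma-frailty construction, and scipy.stats.norm.ppf plays the role of Φ^{−1}).

```python
import numpy as np
from scipy import stats

def clayton_sample(n, theta, rng):
    """Bivariate Clayton(theta) sample via the Gamma-frailty (Marshall-Olkin) construction."""
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / v[:, None]) ** (-1.0 / theta)

def ar1_from_copula(u, beta=0.5):
    """Equations (16)-(17): AR(1) margins driven by Normal-scored copula innovations."""
    eps = stats.norm.ppf(u)
    x = np.empty_like(eps)
    x[0] = eps[0]
    for j in range(1, len(u)):
        x[j] = beta * x[j - 1] + eps[j]
    return x

def garch11_from_copula(u, omega, alpha, beta):
    """Equations (18)-(19): GARCH(1,1) margins, initialized at the unconditional variance."""
    eps = stats.norm.ppf(u)
    x = np.empty_like(eps)
    sig2 = omega / (1.0 - alpha - beta)            # sigma_{0,i}^2 per margin
    for j in range(len(u)):
        x[j] = np.sqrt(sig2) * eps[j]
        sig2 = omega + beta * sig2 + alpha * eps[j] ** 2
    return x

rng = np.random.default_rng(2)
u = clayton_sample(300, theta=1.0, rng=rng)
x_ar = ar1_from_copula(u, beta=0.5)
x_garch = garch11_from_copula(u, omega=np.array([0.012, 0.037]),
                              alpha=np.array([0.072, 0.115]),
                              beta=np.array([0.919, 0.868]))
```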
In the case of independent observations, the theoretical covariance Cov(G_C(u), G_C(v)),
calculated at u = v ∈ {(1/3, 1/3), (2/3, 1/3), (1/3, 2/3), (2/3, 2/3)}, serves as a benchmark.
However, the theoretical covariance is unknown after the transformations carried out to gen-
erate dependent observations: though any chosen copula is invariant under componentwise
application of the (strictly monotonic) inverse standard Normal distribution, the transfor-
mations required to obtain samples from AR(1) or GARCH(1, 1) processes may not leave
the copula invariant. The following Lemma provides an alternative benchmark based on
consistent estimation of the unknown theoretical covariance structure:
Lemma 1. Consider a sample X_1, ..., X_N. Assume that N → ∞ and choose n such that n(N) → ∞, as well as n(N) = o(N). Under the assumptions of Theorem 1, a consistent estimator for Cov(G_C(u), G_C(v)) is provided by

Ĉov( Ĝ_C(u), Ĝ_C(v) ) := Ĉov( √n {Ĉ_n(u) − Ĉ_N(u)}, √n {Ĉ_n(v) − Ĉ_N(v)} )

for all u, v ∈ [0, 1]^d.
The proof is given in Section 5. We apply Lemma 1 in 10^6 MC replications with n = 1,000 and N = 10^6 to provide an approximation of the true covariance.
In practice, it is not possible to iterate MC simulations; the limiting tight centered Gaussian process G_C is instead estimated conditional on a sample X_1, ..., X_n. Therefore, we apply
Table 1: Mean and MSE (×10^4) Monte Carlo results. I.i.d. and AR(1) settings, sample size n = 100 and 1,000 Monte Carlo replications. For each replication, we perform S = 2,000 tapered block multiplier (M_i) repetitions with Normal multiplier random variables, kernel function κ_i, i = 1, 2, block length l_M = 3, and block bootstrap (B) repetitions with block length l_B = 5.

(u_1, u_2)      (1/3, 1/3)      (1/3, 2/3)      (2/3, 1/3)      (2/3, 2/3)
                Mean    MSE     Mean    MSE     Mean    MSE     Mean    MSE
i.i.d. setting
Clayton True 0.0486 0.0338 0.0338 0.0508
(θ = 1) Approx. 0.0487 0.0338 0.0338 0.0508
M_2 0.0496 1.6323 0.0344 1.2449 0.0345 1.3456 0.0528 1.6220
M_1 0.0494 1.7949 0.0342 1.2934 0.0343 1.4128 0.0524 1.7910
B 0.0599 2.8286 0.0432 2.4103 0.0429 2.1375 0.0643 3.1538
Clayton True 0.0254 0.0042 0.0042 0.0389
(θ = 4) Approx. 0.0255 0.0042 0.0042 0.0390
M_2 0.0259 0.9785 0.0051 0.3715 0.0048 0.3662 0.0407 1.6324
M_1 0.0257 1.0104 0.0050 0.3656 0.0048 0.3673 0.0404 1.7048
B 0.0383 2.5207 0.0100 0.7222 0.0097 0.6662 0.0533 3.7255
Gumbel True 0.0493 0.0336 0.0336 0.0484
(θ = 1.5) Approx. 0.0493 0.0335 0.0335 0.0485
M_2 0.0514 1.3914 0.0346 1.2657 0.0340 1.2583 0.0497 1.4946
M_1 0.0510 1.5530 0.0344 1.3390 0.0338 1.3275 0.0495 1.6948
B 0.0616 2.8334 0.0429 2.1366 0.0432 2.2273 0.0620 3.2134
Gumbel True 0.0336 0.0058 0.0058 0.0293
(θ = 3) Approx. 0.0335 0.0058 0.0058 0.0294
M_2 0.0355 1.1359 0.0064 0.4427 0.0063 0.3851 0.0307 0.9819
M_1 0.0353 1.1991 0.0063 0.4409 0.0062 0.3832 0.0306 1.0261
B 0.0470 2.7885 0.0120 0.8396 0.0122 0.8959 0.0437 2.9355
AR(1) setting with β = 0.5
Clayton Approx. 0.0599 0.0408 0.0409 0.0629
(θ = 1) M_2 0.0602 3.1797 0.0394 2.4919 0.0398 2.5903 0.0625 2.6297
M_1 0.0598 3.5305 0.0391 2.6090 0.0396 2.7410 0.0620 2.9826
B 0.0699 3.4783 0.0496 2.9893 0.0492 2.8473 0.0761 4.0660
Clayton Approx. 0.0329 0.0064 0.0064 0.0432
(θ = 4) M_2 0.0347 2.0017 0.0071 0.6040 0.0072 0.6666 0.0460 2.8656
M_1 0.0344 2.0672 0.0071 0.5869 0.0072 0.6583 0.0458 3.0571
B 0.0472 3.5289 0.0132 1.0990 0.0128 0.9658 0.0585 4.2965
Gumbel Approx. 0.0617 0.0408 0.0409 0.0605
(θ = 1.5) M_2 0.0631 3.0587 0.0406 2.7512 0.0398 2.4187 0.0600 3.0433
M_1 0.0626 3.4252 0.0402 2.8739 0.0395 2.4920 0.0594 3.3824
B 0.0735 3.7410 0.0501 3.2025 0.0502 3.1150 0.0735 4.0521
Gumbel Approx. 0.0385 0.0061 0.0061 0.0345
(θ = 3) M_2 0.0425 2.5441 0.0069 0.5841 0.0070 0.5796 0.0375 2.1518
M_1 0.0422 2.7239 0.0069 0.5706 0.0070 0.5836 0.0373 2.2381
B 0.0535 3.8803 0.0127 1.0720 0.0125 1.0526 0.0507 4.1894
Table 2: Mean and MSE (×10^4) Monte Carlo results. I.i.d. and AR(1) settings, sample size n = 200 and 1,000 Monte Carlo replications. For each replication, we perform S = 2,000 tapered block multiplier (M_i) repetitions with Normal multiplier random variables, kernel function κ_i, i = 1, 2, block length l_M = 4, and block bootstrap (B) repetitions with block length l_B = 7.

(u_1, u_2)      (1/3, 1/3)      (1/3, 2/3)      (2/3, 1/3)      (2/3, 2/3)
                Mean    MSE     Mean    MSE     Mean    MSE     Mean    MSE
i.i.d. setting
Clayton True 0.0486 0.0338 0.0338 0.0508
(θ = 1) Approx. 0.0487 0.0338 0.0338 0.0508
M_2 0.0494 1.2579 0.0346 0.8432 0.0343 0.8423 0.0522 1.0987
M_1 0.0490 1.3749 0.0345 0.8880 0.0341 0.8719 0.0519 1.2074
B 0.0562 1.7155 0.0402 1.2154 0.0395 1.2279 0.0593 1.8137
Clayton True 0.0254 0.0042 0.0042 0.0389
(θ = 4) Approx. 0.0255 0.0042 0.0042 0.0390
M_2 0.0261 0.5811 0.0044 0.1889 0.0048 0.2027 0.0390 0.9932
M_1 0.0260 0.6024 0.0044 0.1876 0.0048 0.2008 0.0388 1.0702
B 0.0347 1.6381 0.0076 0.3197 0.0073 0.2908 0.0489 2.1425
Gumbel True 0.0493 0.0336 0.0336 0.0484
(θ = 1.5) Approx. 0.0493 0.0335 0.0335 0.0485
M_2 0.0512 1.1513 0.0347 0.8157 0.0347 0.8113 0.0504 1.1697
M_1 0.0511 1.2909 0.0345 0.8653 0.0346 0.8692 0.0503 1.2811
B 0.0577 1.7178 0.0402 1.2233 0.0403 1.1701 0.0574 1.8245
Gumbel True 0.0336 0.0058 0.0058 0.0293
(θ = 3) Approx. 0.0335 0.0058 0.0058 0.0294
M_2 0.0361 0.8340 0.0067 0.2577 0.0063 0.2505 0.0320 0.8678
M_1 0.0359 0.9107 0.0067 0.2576 0.0062 0.2444 0.0318 0.9078
B 0.0435 1.7448 0.0095 0.3917 0.0094 0.3804 0.0388 1.5320
AR(1) setting with β = 0.5
Clayton Approx. 0.0599 0.0408 0.0409 0.0629
(θ = 1) M_2 0.0615 2.3213 0.0413 1.8999 0.0414 1.9235 0.0646 2.2534
M_1 0.0608 2.5553 0.0408 1.9594 0.0410 2.0094 0.0638 2.4889
B 0.0676 2.6206 0.0468 1.9802 0.0472 2.0278 0.0715 2.5494
Clayton Approx. 0.0329 0.0064 0.0064 0.0432
(θ = 4) M_2 0.0344 1.2685 0.0074 0.3890 0.0073 0.4108 0.0460 1.7936
M_1 0.0341 1.3034 0.0073 0.3824 0.0072 0.4053 0.0455 1.8566
B 0.0432 2.1693 0.0105 0.5427 0.0101 0.4901 0.0546 2.8808
Gumbel Approx. 0.0617 0.0408 0.0409 0.0605
(θ = 1.5) M_2 0.0649 2.2833 0.0432 2.1528 0.0433 1.9800 0.0646 3.0536
M_1 0.0639 2.4663 0.0426 2.1943 0.0428 2.0367 0.0638 3.2590
B 0.0703 2.5133 0.0468 1.8163 0.0467 1.7087 0.0693 2.5655
Gumbel Approx. 0.0385 0.0061 0.0061 0.0345
(θ = 3) M_2 0.0430 1.9156 0.0079 0.4884 0.0074 0.3950 0.0405 2.5846
M_1 0.0426 1.9784 0.0078 0.4806 0.0073 0.3858 0.0400 2.6083
B 0.0492 2.2152 0.0102 0.5375 0.0099 0.4832 0.0455 2.1534
Table 3: Mean and MSE (×10^4) Monte Carlo results. AR(1) and GARCH(1, 1) settings, sample size n = 200 and 1,000 Monte Carlo replications. For each replication, we perform S = 2,000 tapered block multiplier (M_i) repetitions with Normal multiplier random variables, kernel function κ_i, i = 1, 2, block length l_M = 4, and block bootstrap (B) repetitions with block length l_B = 7.

(u_1, u_2)      (1/3, 1/3)      (1/3, 2/3)      (2/3, 1/3)      (2/3, 2/3)
                Mean    MSE     Mean    MSE     Mean    MSE     Mean    MSE
AR(1) setting with β = 0.25
Clayton Approx. 0.0506 0.0350 0.0350 0.0530
(θ = 1) M_2 0.0521 1.5319 0.0360 1.0527 0.0361 1.1258 0.0545 1.4689
M_1 0.0517 1.6574 0.0357 1.1003 0.0358 1.1662 0.0541 1.6083
B 0.0591 2.0394 0.0415 1.4084 0.0417 1.5256 0.0624 2.1027
Clayton Approx. 0.0272 0.0048 0.0048 0.0395
(θ = 4) M_2 0.0285 0.8366 0.0052 0.2290 0.0052 0.2208 0.0413 1.3315
M_1 0.0283 0.8792 0.0051 0.2273 0.0052 0.2200 0.0410 1.3858
B 0.0375 1.9937 0.0084 0.3726 0.0085 0.3646 0.0495 2.1996
Gumbel Approx. 0.0518 0.0349 0.0348 0.0507
(θ = 1.5) M_2 0.0549 1.4150 0.0373 1.1973 0.0377 1.2459 0.0550 2.0067
M_1 0.0545 1.5154 0.0370 1.2339 0.0373 1.2871 0.0545 2.0766
B 0.0608 2.0500 0.0415 1.3234 0.0418 1.4093 0.0608 2.3118
Gumbel Approx. 0.0346 0.0058 0.0058 0.0304
(θ = 3) M_2 0.0386 1.1686 0.0070 0.3079 0.0068 0.2979 0.0350 1.5135
M_1 0.0384 1.2224 0.0070 0.3052 0.0067 0.2892 0.0347 1.5157
B 0.0447 1.8434 0.0095 0.3873 0.0097 0.4303 0.0409 1.8797
GARCH(1, 1) setting
Clayton Approx. 0.0479 0.0340 0.0340 0.0516
(θ = 1) M_2 0.0491 1.1144 0.0347 0.8485 0.0343 0.8279 0.0520 1.0958
M_1 0.0486 1.3579 0.0339 0.9021 0.0338 0.8100 0.0515 1.2013
B 0.0567 2.0156 0.0403 1.1765 0.0403 1.2556 0.0600 1.8542
Clayton Approx. 0.0252 0.0055 0.0056 0.0403
(θ = 4) M_2 0.0259 0.5054 0.0051 0.1979 0.0053 0.2301 0.0399 1.0431
M_1 0.0258 0.6429 0.0052 0.2199 0.0051 0.2217 0.0390 1.0959
B 0.0345 1.5252 0.0081 0.2764 0.0081 0.2921 0.0484 1.8359
Gumbel Approx. 0.0500 0.0339 0.0339 0.0482
(θ = 1.5) M_2 0.0516 1.0480 0.0356 0.8486 0.0354 0.8175 0.0511 1.2451
M_1 0.0516 1.2235 0.0351 0.9582 0.0352 0.8848 0.0503 1.2774
B 0.0575 1.5928 0.0402 1.2395 0.0403 1.2346 0.0574 1.9198
Gumbel Approx. 0.0341 0.0074 0.0074 0.0291
(θ = 3) M_2 0.0362 0.9284 0.0073 0.2941 0.0072 0.2587 0.0321 0.9133
M_1 0.0366 0.9782 0.0073 0.2786 0.0071 0.2888 0.0320 0.9819
B 0.0435 1.6811 0.0103 0.3607 0.0101 0.3390 0.0390 1.6878
the tapered block multiplier technique and the block bootstrap method. For detailed discussions on the block length of the block bootstrap, we refer to Künsch [31] as well as Bühlmann and Künsch [7]. In the present setting we choose l_B(100) = 5 and l_B(200) = 7; this choice corresponds to l_B(n) = ⌊1.25 n^{1/3}⌋, which satisfies the assumptions of the asymptotic theory. The tapered block multiplier technique is assessed based on a sequence (w_j)_{j∈Z} of Normal random variables as introduced in Examples 2 and 3. Moreover, serial dependence in the tapered block multiplier random variables is either generated on the basis of the uniform weighting scheme represented by the kernel function κ_1 or the triangular weighting scheme represented by the kernel function κ_2. The block length is set to l_M(n) = ⌊1.1 n^{1/4}⌋, hence l_M(100) = 3 and l_M(200) = 4, meaning that both methods yield 2l_M-dependent blocks. Both methods are used to estimate the sample covariance of block bootstrap- or tapered block multiplier-based Ĝ_C^{·(s)}(u) and Ĝ_C^{·(s)}(v) based on s = 1, ..., S = 2,000 resampling repetitions for the given set of vectors u and v. We perform 1,000 MC replications and report mean and mean squared error (MSE) of each method.
Tables 1, 2, and 3 show results for samples X_1, ..., X_n of size n = 100 and n = 200 based on AR(1) and GARCH(1, 1) processes. Since the approximation Ĉov(Ĝ_C(u), Ĝ_C(v)) works well, it can hence be used as a benchmark in the case of serially dependent observations. MC results based on independent and identically distributed samples indicate that the tapered block multiplier outperforms the block bootstrap in mean and MSE of estimation. The general applicability of these resampling methods however comes at the price of an increased mean squared error [in comparison to the multiplier or bootstrap with block length l = 1 as investigated in 5]. Hence, we suggest testing for serial independence of continuous multivariate time series as introduced by Kojadinovic and Yan [29] to investigate which method is appropriate. In the case of serially dependent observations, resampling results indicate that the tapered block multiplier yields more precise results in mean and mean squared error than the block bootstrap (which tends to overestimate) for the considered choices of the temporal dependence structure, the kernel function, the copula, and the parameter. Regarding the choice of the kernel function, mean results for κ_1 and κ_2 are similar, whereas κ_2 yields slightly better results in mean squared error. Additional MC simulations are given in Ruppert [42]: if the multiplier or bootstrap methods for independent observations are incorrectly applied to dependent observations, i.e., l_B = l_M = 1, then their results do not reflect the changed structure adequately. Results based on Normal, Gamma, and Rademacher-type sequences (w_j)_{j∈Z} indicate that different distributions used to simulate the multiplier random variables lead to similar results. To ease comparison of the next section to the work of Rémillard and Scaillet [38], we use Normal multiplier random variables in the following.
3. Testing for a constant copula
Considering strongly mixing multivariate processes (X_j)_{j∈Z}, nonparametric tests for a constant copula with specified or unspecified change point candidate(s), consistent against general alternatives, are introduced and assessed in finite samples.
3.1. Specified change point candidate
The specification of a change point candidate can for instance have an economic motivation: Patton [33] investigates a change in parameters of the dependence structure between various exchange rates following the introduction of the euro on the 1st of January 1999. Focusing on stock returns, multivariate association between major S&P global sector indices before and after the bankruptcy of Lehman Brothers Inc. on the 15th of September 2008 is assessed in Gaißer et al. [21] and Ruppert [42]. Whereas these references investigate change points in functionals of the copula, the copula itself is the focus of this study. This approach permits us to analyze changes in the structure of association even if a functional thereof, such as a measure of multivariate association, is invariant.
Suppose we observe a sample X_1, ..., X_n of a process (X_j)_{j∈Z}. Constancy of the structure of association is initially investigated in the case of a specified change point candidate indexed by ⌊λn⌋ for λ ∈ [0, 1]:

H_0: U_j ∼ C_1 for all j = 1, ..., n,

H_1: U_j ∼ C_1 for all j = 1, ..., ⌊λn⌋ and U_j ∼ C_2 for all j = ⌊λn⌋ + 1, ..., n,

where C_1 and C_2 are assumed to differ on a non-empty subset of [0, 1]^d.
To test for a change point in the structure of association after observation ⌊λn⌋ < n, we split the sample into two subsamples: X_1, ..., X_{⌊λn⌋} and X_{⌊λn⌋+1}, ..., X_n. Assuming the marginal distribution functions to be unknown and constant in each subsample, we (separately) estimate Û_1, ..., Û_{⌊λn⌋} and Û_{⌊λn⌋+1}, ..., Û_n with empirical copulas Ĉ_{⌊λn⌋} and C̄_{n−⌊λn⌋}, respectively. The test statistic is defined by

T_n(λ) = ∫_{[0,1]^d} [ √( ⌊λn⌋(n − ⌊λn⌋)/n ) { Ĉ_{⌊λn⌋}(u) − C̄_{n−⌊λn⌋}(u) } ]^2 du   (20)

and can be calculated explicitly; for details we refer to Rémillard and Scaillet [38]. These authors introduce a test for equality between two copulas which is applicable in the case of no serial dependence. Weak convergence of T_n(λ) under strong mixing is established in the following:
Theorem 4. Consider observations X_1, ..., X_n, drawn from a process (X_j)_{j∈Z} satisfying the strong mixing condition α_X(r) = O(r^{−a}) for some a > 1. Further assume a specified change point candidate indexed by ⌊λn⌋ for λ ∈ [0, 1] such that U_j ∼ C_1, X_{j,i} ∼ F_{1,i} for all j = 1, ..., ⌊λn⌋, i = 1, ..., d, and U_j ∼ C_2, X_{j,i} ∼ F_{2,i} for all j = ⌊λn⌋ + 1, ..., n, i = 1, ..., d. Suppose that C_1 and C_2 satisfy Condition (3). Under the null hypothesis C_1 = C_2, the test statistic T_n(λ) converges weakly:

T_n(λ) →^w T(λ) = ∫_{[0,1]^d} [ √(1 − λ) G_{C_1}(u) − √λ G_{C_2}(u) ]^2 du,

where G_{C_1} and G_{C_2} represent dependent identically distributed Gaussian processes.
The proof is given in Section 5. Notice that if there exists a subset I ⊆ [0, 1]^d such that

∫_I [ √(1 − λ) C_1(u) − √λ C_2(u) ]^2 du > 0,

then T_n(λ) → ∞ in probability under H_1. The limiting law of the test statistic depends on the unknown copulas C_1 before and C_2 after the change point candidate. To estimate p-values of the test statistic, we use the tapered block multiplier technique described above.
Corollary 1. Consider observations X_1, ..., X_{⌊λn⌋} and X_{⌊λn⌋+1}, ..., X_n drawn from a process (X_j)_{j∈Z}. Assume that the process satisfies the strong mixing assumptions of Theorem 3. Let ⌊λn⌋ for λ ∈ [0, 1] denote a specified change point candidate such that U_j ∼ C_1, X_{j,i} ∼ F_{1,i} for all j = 1, ..., ⌊λn⌋, i = 1, ..., d, and U_j ∼ C_2, X_{j,i} ∼ F_{2,i} for all j = ⌊λn⌋ + 1, ..., n, i = 1, ..., d. Suppose that C_1 and C_2 satisfy Condition (3). For s = 1, ..., S, let ξ_1^{(s)}, ..., ξ_n^{(s)} denote samples of a tapered block multiplier process (ξ_j)_{j∈Z} satisfying A1, A2, A3(b) with block length l(n) → ∞, where l(n) = O(n^{1/2−ε}) for 0 < ε < 1/2. Define T̂_n^{M(s)}(λ) based on Equation (12):

T̂_n^{M(s)}(λ) := ∫_{[0,1]^d} [ √( (n − ⌊λn⌋)/n ) Ĝ_{C_1,⌊λn⌋}^{M(s)}(u) − √( ⌊λn⌋/n ) Ḡ_{C_2,n−⌊λn⌋}^{M(s)}(u) ]^2 du,   (21)

where ξ_j^{(s)}, j = 1, ..., ⌊λn⌋ and j = ⌊λn⌋ + 1, ..., n, enter the first and the second summand of the Cramér-von Mises functional, respectively. Weak convergence conditional on X_1, ..., X_n almost surely holds under the null hypothesis as well as under the alternative:

T̂_n^{M(s)}(λ) →^{a.s.}_ξ T^M(λ),

where T^M(λ) is an independent copy of T(λ).
The proof is given in Section 5. Notice that the result of Corollary 1 is valid both for tapered block multiplier processes satisfying A3 and A3b. The integral involved in T̂_n^{M(s)}(λ) can be calculated explicitly [see 37, Appendix B]. Notice that dependence between the subsamples is captured since the two sets of tapered block multiplier random variables are dependent by construction. An approximate p-value for T_n(λ) is provided by

(1/S) Σ_{s=1}^S 1{ T̂_n^{M(s)}(λ) > T_n(λ) }.   (22)
Hence, p-values can be estimated by counting the number of cases in which the simulated
test statistic based on the tapered block multiplier method exceeds the observed one.
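For illustration, the specified-change-point test can be organized as in the following Python sketch (not part of the paper; it reuses the helpers from Section 2, approximates the integrals in Equations (20) and (21) by an average over a bivariate grid rather than computing them explicitly, and uses one common multiplier sequence for both subsamples so that their dependence is preserved).

```python
import numpy as np

def cvm_statistic(x, lam, u_grid):
    """T_n(lambda) of Equation (20), integral approximated by a mean over u_grid."""
    n = x.shape[0]
    k = int(np.floor(lam * n))
    c1 = empirical_copula(pseudo_observations(x[:k]), u_grid)
    c2 = empirical_copula(pseudo_observations(x[k:]), u_grid)
    return np.mean(k * (n - k) / n * (c1 - c2) ** 2)

def multiplier_replicate(x, lam, u_grid, l, rng):
    """T-hat_n^{M(s)}(lambda) of Equation (21), one multiplier sample split at floor(lambda n)."""
    n = x.shape[0]
    k = int(np.floor(lam * n))
    xi = tapered_block_multipliers_uniform(n, l, rng)
    g1 = multiplier_G_process(pseudo_observations(x[:k]), xi[:k], u_grid)
    g2 = multiplier_G_process(pseudo_observations(x[k:]), xi[k:], u_grid)
    return np.mean((np.sqrt((n - k) / n) * g1 - np.sqrt(k / n) * g2) ** 2)

def p_value(x, lam=0.5, l=3, S=2000, m=10, seed=0):
    """Equation (22): share of multiplier statistics exceeding the observed one."""
    rng = np.random.default_rng(seed)
    g = np.linspace(0.05, 0.95, m)
    grid = np.array([(a, b) for a in g for b in g])   # bivariate evaluation grid
    t_obs = cvm_statistic(x, lam, grid)
    t_rep = np.array([multiplier_replicate(x, lam, grid, l, rng) for _ in range(S)])
    return (t_rep > t_obs).mean()
```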
Finite sample properties. Size and power of the test in finite samples are assessed in a simulation study. We apply the MC algorithm introduced in Section 2.3 to generate samples of size n = 100 or n = 200 from bivariate processes (X_j)_{j∈Z}. As a base scenario, serially independent observations are simulated. Moreover, we consider observations from strictly stationary AR(1) processes with autoregressive coefficient β ∈ {0.25, 0.5} and GARCH(1, 1) processes which are parameterized as in Equations (18) and (19). The univariate processes are either linked by a Clayton or a Gumbel copula. The change point after observation ⌊λn⌋ = n/2 (if present) only affects the parameter within each family: the copula C_1 is parameterized such that τ_1 = 0.2, the copula C_2 such that τ_2 = 0.2, ..., 0.9. A set of S = 2,000 Normal tapered block multiplier random variables is simulated, where l_M(100) = 3 and l_M(200) = 4 are chosen for the block length.
Results of 1, 000 MC replications based on n = 100 and n = 200 observations are shown
in Tables 4 and 5, respectively. The test based on the tapered block multiplier technique
with kernel function κ_2 leads to a rejection quota under the null hypothesis which is close
Table 4: Size and power of the test for a constant copula with a specified change point
candidate. Results are based on 1, 000 Monte Carlo replications, n = 100, S = 2, 000 tapered
block multiplier repetitions, kernel function κ_2, and asymptotic significance level α = 5%.

τ_2        0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9
i.i.d. setting
Clayton l = 1 0.036 0.110 0.295 0.612 0.881 0.983 1.000 1.000
l = 3 0.050 0.114 0.315 0.578 0.877 0.976 1.000 1.000
Gumbel l = 1 0.040 0.093 0.236 0.569 0.840 0.976 0.998 1.000
l = 3 0.063 0.110 0.276 0.594 0.866 0.983 1.000 1.000
GARCH(1, 1) setting
Clayton l = 1 0.037 0.106 0.298 0.598 0.868 0.977 1.000 1.000
l = 3 0.047 0.120 0.303 0.588 0.876 0.978 0.999 1.000
Gumbel l = 1 0.043 0.089 0.246 0.573 0.827 0.978 0.999 1.000
l = 3 0.065 0.124 0.285 0.569 0.847 0.980 1.000 1.000
AR(1) setting with β = 0.25
Clayton l = 1 0.051 0.115 0.308 0.592 0.849 0.969 0.999 1.000
l = 3 0.047 0.111 0.292 0.547 0.836 0.968 0.998 1.000
Gumbel l = 1 0.053 0.109 0.257 0.550 0.836 0.975 0.998 1.000
l = 3 0.066 0.105 0.254 0.568 0.818 0.964 1.000 1.000
AR(1) setting with β = 0.5
Clayton l = 1 0.086 0.154 0.313 0.549 0.798 0.928 0.985 1.000
l = 3 0.078 0.117 0.236 0.462 0.730 0.868 0.986 0.999
Gumbel l = 1 0.100 0.172 0.285 0.541 0.816 0.956 0.998 1.000
l = 3 0.077 0.109 0.218 0.482 0.722 0.907 0.994 0.999
to the chosen theoretical asymptotic size of 5% in all considered settings; comparing the
results for n = 100 and n = 200, we observe that the approximation of the asymptotic
size based on the tapered block multiplier improves in precision with increased sample
size. The tapered block multiplier-based test also performs well under the alternative
hypothesis and its power increases with the difference τ_2 − τ_1 between the considered values
for Kendall’s τ. The power of the test under the alternative hypothesis is best in the case of
no serial dependence as is shown in Table 5. If serial dependence is present in the sample
then more observations are required to reach the power of the test in the case of serially
independent observations. For comparison, we also show the results if the test assuming
independent observations (i.e., the test based on the multiplier technique with block length
l = 1) is erroneously applied to the simulated dependent observations. The effects of
different types of dependent observations differ largely in the finite sample simulations
considered: GARCH(1, 1) processes do not show strong impact, whereas AR(1) processes
lead to considerable distortions, in particular regarding the size of the test. Results indicate
that the test overrejects if temporal dependence is not taken into account; the observed
size of the test in these cases can be more than twice the specified asymptotic size. For
comparison, results for n = 200 and kernel function κ_1 are shown in Ruppert [42]. The obtained results indicate that the uniform kernel function κ_1 leads to a more conservative testing procedure since the rejection quota is slightly higher, both under the null hypothesis as well as under the alternative. Due to the fact that the size of the test is approximated more accurately based on the kernel function κ_2, its use is recommended.
Table 5: Size and power of the test for a constant copula with a specified change point
candidate. Results are based on 1, 000 Monte Carlo replications, n = 200, S = 2, 000 tapered
block multiplier repetitions, kernel function κ_2, and asymptotic significance level α = 5%.

τ_2        0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9
i.i.d. setting
Clayton l = 1 0.047 0.172 0.524 0.908 0.993 1.000 1.000 1.000
l = 4 0.063 0.164 0.552 0.905 0.989 1.000 1.000 1.000
Gumbel l = 1 0.043 0.162 0.525 0.877 0.991 1.000 1.000 1.000
l = 4 0.055 0.169 0.535 0.895 0.996 1.000 1.000 1.000
GARCH(1, 1) setting
Clayton l = 1 0.040 0.169 0.503 0.894 0.994 1.000 1.000 1.000
l = 4 0.056 0.160 0.541 0.903 0.992 1.000 1.000 1.000
Gumbel l = 1 0.046 0.154 0.498 0.873 0.992 1.000 1.000 1.000
l = 4 0.057 0.174 0.496 0.899 0.994 1.000 1.000 1.000
AR(1) setting with β = 0.25
Clayton l = 1 0.052 0.180 0.521 0.866 0.989 1.000 1.000 1.000
l = 4 0.057 0.149 0.497 0.867 0.988 1.000 1.000 1.000
Gumbel l = 1 0.047 0.180 0.515 0.872 0.989 1.000 1.000 1.000
l = 4 0.050 0.136 0.490 0.855 0.992 1.000 1.000 1.000
AR(1) setting with β = 0.5
Clayton l = 1 0.107 0.237 0.523 0.813 0.975 0.999 1.000 1.000
l = 4 0.058 0.137 0.396 0.748 0.957 0.998 1.000 1.000
Gumbel l = 1 0.122 0.227 0.499 0.825 0.979 1.000 1.000 1.000
l = 4 0.059 0.123 0.401 0.770 0.958 0.998 1.000 1.000
3.2. The general case: unspecified change point candidate
The assumption of a change point candidate at specified location is relaxed in the following.
Intuitively, testing with unspecified change point candidate(s) is less restrictive but a trade-
off is to be made: the tests introduced in this section neither require conditions on the partial
derivatives of the underlying copula(s) nor the specification of change point candidate(s), yet
they are based on the assumption of strictly stationary univariate processes, i.e., X_{j,i} ∼ F_i for all j ∈ Z and i = 1, ..., d. The motivation for this test setting is that only for a subset of the change points documented in empirical studies can a priori hypotheses, such as triggering economic events, be found [see, e.g., 15]. Even if a triggering event exists, its start
(and end) often are subject to uncertainty: Rodriguez [41] studies changes in dependence
structures of stock returns during periods of turmoil considering data framing the East Asian
crisis in 1997 as well as the Mexican devaluation in 1994, whereas no change point candidate
is given a priori. These objects of investigation are well-suited for nonparametric methods
which offer the important advantage that their results do not depend on model assumptions.
For a general introduction to change point problems of this type, we refer to the monographs
by Csörgő and Horváth [12] and, with particular emphasis on nonparametric methods, to
Brodsky and Darkhovsky [3].
Let X_1, ..., X_n denote a sample of a process (X_j)_{j∈Z} with strictly stationary univariate margins, i.e., X_{j,i} ∼ F_i for all j ∈ Z and i = 1, ..., d. We establish tests for the null hypothesis of a constant copula versus the alternative that there exist P unspecified change points λ_1 < ... < λ_P ∈ [0, 1], formally

H_0: U_j ∼ C_1 for all j = 1, ..., n,

H_1: there exist 0 = λ_0 < λ_1 < ... < λ_P < λ_{P+1} = 1 such that U_j ∼ C_p for all j = ⌊λ_{p−1}n⌋ + 1, ..., ⌊λ_p n⌋ and p = 1, ..., P + 1,

where, under the alternative hypothesis, C_1, ..., C_{P+1} are assumed to differ on a non-empty subset of [0, 1]^d. In this setting, we estimate the pseudo-observations Û_1, ..., Û_n based on X_1, ..., X_n. For any change point candidate ζ ∈ [0, 1], we split the pseudo-observations in two subsamples Û_1, ..., Û_{⌊ζn⌋} and Û_{⌊ζn⌋+1}, ..., Û_n. The following test statistics are based on a comparison of the resulting empirical copulas:

S_n(ζ, u) := [ ⌊ζn⌋(n − ⌊ζn⌋) / n^{3/2} ] { Ĉ_{⌊ζn⌋}(u) − C̄_{n−⌊ζn⌋}(u) }   for all u ∈ [0, 1]^d.
The functional used to define the test statistic given in Equation (20) of the previous section is thus multiplied by the weight function √(⌊ζn⌋(n − ⌊ζn⌋))/n, which assigns less weight to change point candidates close to the sample's boundaries. Define Z_n := {1/n, ..., (n − 1)/n}. We consider three alternative test statistics which pick the most extreme realization within the set Z_n of change point candidates:

T_n^1 = max_{ζ ∈ Z_n} ∫_{[0,1]^d} S_n(ζ, u)^2 dĈ_n(u),   (23)

T_n^2 = max_{ζ ∈ Z_n} [ max_{u ∈ {Û_j}_{j=1,...,n}} S_n(ζ, u) − min_{u ∈ {Û_j}_{j=1,...,n}} S_n(ζ, u) ],   (24)

T_n^3 = max_{ζ ∈ Z_n} max_{u ∈ {Û_j}_{j=1,...,n}} |S_n(ζ, u)|,   (25)

which are the maximally selected Cramér-von Mises (CvM), Kuiper (K), and Kolmogorov-Smirnov (KS) statistics, respectively. We refer to Horváth and Shao [26] for an investigation of these statistics in a univariate context based on an independent and identically distributed sample; T_n^3 is investigated in Inoue [27] for general multivariate distribution functions under strong mixing conditions as well as in Rémillard [36] with an application to the copula of GARCH residuals.
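The following Python sketch (illustrative, not from the paper; helpers as above) computes the three maximally selected statistics; it evaluates S_n(ζ, u) at the pseudo-observations, so the Cramér-von Mises integral with respect to dĈ_n in Equation (23) is exactly the average over these evaluation points. Replacing the max over ζ by an arg max in the same loop yields the change point estimator discussed further below.

```python
import numpy as np

def unspecified_change_point_statistics(x):
    """Maximally selected CvM, Kuiper and Kolmogorov-Smirnov statistics, Equations (23)-(25)."""
    n = x.shape[0]
    u_hat = pseudo_observations(x)                 # pseudo-observations from the full sample
    t1 = t2 = t3 = 0.0
    for k in range(1, n):                          # zeta = k/n in Z_n = {1/n, ..., (n-1)/n}
        c1 = empirical_copula(u_hat[:k], u_hat)    # C-hat_{floor(zeta n)}
        c2 = empirical_copula(u_hat[k:], u_hat)    # C-bar_{n - floor(zeta n)}
        s = k * (n - k) / n ** 1.5 * (c1 - c2)     # S_n(zeta, u) at u = U-hat_1, ..., U-hat_n
        t1 = max(t1, np.mean(s ** 2))
        t2 = max(t2, s.max() - s.min())
        t3 = max(t3, np.abs(s).max())
    return t1, t2, t3
```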
Under the null hypothesis of a constant copula, i.e., C(u) = C_p(u) for all p = 1, ..., P + 1, notice the following relation between S_n(ζ, u) and a linear combination of the sequential and the standard empirical process, more precisely a (d + 1)-time parameter tied down empirical copula process [see Section 2.6 in 12, 26, 36]:

S_n(ζ, u) = [ ⌊ζn⌋(n − ⌊ζn⌋) / n^{3/2} ] { Ĉ_{⌊ζn⌋}(u) − C̄_{n−⌊ζn⌋}(u) }
          = (1/√n) [ Σ_{j=1}^{⌊ζn⌋} ( 1{Û_j ≤ u} − C(u) ) − (⌊ζn⌋/n) Σ_{j=1}^{n} ( 1{Û_j ≤ u} − C(u) ) ],   (26)

for all ζ ∈ [0, 1] and u ∈ [0, 1]^d. Equation (26) is the pivotal element to derive the asymptotic behavior of S_n(ζ, u) under the null hypothesis, which is given next.
Theorem 5. Consider a sample X_1, ..., X_n of a strictly stationary process (X_j)_{j∈Z} satisfying the strong mixing condition α_X(r) = O(r^{−4−d(1+ε)}) for some 0 < ε ≤ 1/4, where U_j ∼ C, X_{j,i} ∼ F_i for all j ∈ Z and i = 1, ..., d. Weak convergence of S_n(ζ, u) holds in (ℓ^∞([0, 1]^{d+1}), ‖·‖_∞):

S_n(ζ, u) →^w B_C(ζ, u) − ζ B_C(1, u),

where B_C(ζ, u) denotes a (centered) C-Kiefer process, viz. B_C(0, u) = B_C(ζ, 0) = B_C(ζ, 1) = 0, with covariance structure

Cov(B_C(ζ_1, u), B_C(ζ_2, v)) = min(ζ_1, ζ_2) Σ_{j∈Z} Cov( 1{U_0 ≤ u}, 1{U_j ≤ v} )

for all ζ_1, ζ_2 ∈ [0, 1] and u, v ∈ [0, 1]^d. This in particular implies weak convergence of the test statistics T_n^1, T_n^2, and T_n^3:

T_n^1 →^w sup_{0≤ζ≤1} ∫_{[0,1]^d} {B_C(ζ, u) − ζ B_C(1, u)}^2 dC(u),

T_n^2 →^w sup_{0≤ζ≤1} [ sup_{u ∈ [0,1]^d} {B_C(ζ, u) − ζ B_C(1, u)} − inf_{u ∈ [0,1]^d} {B_C(ζ, u) − ζ B_C(1, u)} ],

T_n^3 →^w sup_{0≤ζ≤1} sup_{u ∈ [0,1]^d} |B_C(ζ, u) − ζ B_C(1, u)|.
The proof is given in Section 5. For independent and identically distributed observations X_1, ..., X_n, a detailed investigation of C-Kiefer processes is given in Zari [52].
Direct calculations yield T_n^i → ∞ for i = 1, 2, 3 under H_1. Hence, the tests are consistent against general alternatives. The established limiting distributions of the test statistics under the null hypothesis can be estimated based on an application of the tapered block multiplier technique to Equation (26):
Corollary 2. Consider a sample X_1, ..., X_n of a process (X_j)_{j∈Z} which satisfies X_{j,i} ∼ F_i for all j ∈ Z and i = 1, ..., d. Further assume the process to fulfill the strong mixing assumptions of Theorem 3. For s = 1, ..., S, let ξ_1^{(s)}, ..., ξ_n^{(s)} denote samples of a tapered block multiplier process (ξ_j)_{j∈Z} satisfying A1, A2, A3b with block length l(n) → ∞, where l(n) = O(n^{1/2−ε}) for 0 < ε < 1/2, and define:

Ŝ_n^{M(s)}(ζ, u) := (1/√n) [ Σ_{j=1}^{⌊ζn⌋} ξ_j^{(s)} ( 1{Û_j ≤ u} − Ĉ_n(u) ) − (⌊ζn⌋/n) Σ_{j=1}^{n} ξ_j^{(s)} ( 1{Û_j ≤ u} − Ĉ_n(u) ) ]   (27)

for all u ∈ [0, 1]^d. Under the null hypothesis of a constant copula, weak convergence conditional on X_1, ..., X_n almost surely in (ℓ^∞([0, 1]^{d+1}), ‖·‖_∞) holds:

Ŝ_n^{M(s)}(ζ, u) →^{a.s.}_ξ B_C^M(ζ, u) − ζ B_C^M(1, u),

where the limit is an independent copy of B_C(ζ, u) − ζ B_C(1, u). Under the alternative hypothesis, weak convergence conditional on X_1, ..., X_n almost surely in (ℓ^∞([0, 1]^{d+1}), ‖·‖_∞) to a tight limit holds.
The proof is given in Section 5. Remarkably, the latter result is only valid if centered multiplier
random variables are applied, i.e., if assumption A3b is satisfied. An application of the
continuous mapping theorem proves consistency of the tapered block multiplier-based tests.
The p-values of the test statistics are estimated as shown in Equation (22).
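A hedged sketch of the resampling step is given below; it reuses the helpers from the previous sketch. The multipliers are generated as a moving average of independent Normal innovations under the triangular kernel $\kappa_2$ with block length $l$, centred around zero as required by A3b; the variance scaling of the innovations is an assumption of this sketch, chosen so that $\mathrm{Var}(\xi_j) = 1$. The p-value is estimated as the proportion of multiplier replicates of the statistics that exceed the observed values, in the spirit of Equation (22).

```python
# Hedged sketch of the tapered block multiplier approximation of the null
# distribution and of the p-values; it reuses pseudo_observations() and
# change_point_statistics() from the previous sketch. The triangular kernel
# kappa_2 and the centred Normal multipliers (assumption A3b) follow the
# construction of Section 2; the scaling constant q is an assumption of this
# sketch, chosen so that Var(xi_j) = 1.
import numpy as np

def tapered_block_multipliers(n, l, rng):
    """Serially dependent, mean-zero multipliers xi_1, ..., xi_n with block length l."""
    q = 2.0 / (3.0 * l) + 1.0 / (3.0 * l ** 3)             # sum of squared kernel weights
    w = rng.normal(0.0, np.sqrt(1.0 / q), size=n + 2 * l)  # iid Normal innovations
    kernel = np.array([(1.0 - abs(h) / l) / l for h in range(-l + 1, l)])
    return np.convolve(w, kernel, mode="same")[l:n + l]    # moving average, variance one

def multiplier_replicate(u_hat, xi):
    """One realization of (T1, T2, T3) based on hat(S)_n^{M(s)} of Equation (27)."""
    n = u_hat.shape[0]
    ind = np.all(u_hat[:, None, :] <= u_hat[None, :, :], axis=2).astype(float)
    centred = ind - ind.mean(axis=0)                       # 1{hat(U)_j <= u} - hat(C)_n(u)
    cum = np.cumsum(xi[:, None] * centred, axis=0)         # partial sums over j = 1, ..., k
    t1 = t2 = t3 = -np.inf
    for k in range(1, n):
        s = (cum[k - 1] - k / n * cum[-1]) / np.sqrt(n)    # hat(S)_n^{M(s)}(k/n, u)
        t1 = max(t1, np.mean(s ** 2))
        t2 = max(t2, s.max() - s.min())
        t3 = max(t3, np.abs(s).max())
    return t1, t2, t3

def p_values(x, n_mult=1000, block_length=5, seed=1):
    """Proportion of multiplier replicates exceeding the observed statistics."""
    rng = np.random.default_rng(seed)
    u_hat = pseudo_observations(x)
    observed = np.array(change_point_statistics(x))
    reps = np.array([multiplier_replicate(u_hat, tapered_block_multipliers(len(x), block_length, rng))
                     for _ in range(n_mult)])
    return (reps >= observed).mean(axis=0)
```

The defaults `n_mult=1000` and `block_length=5` mirror the settings used in the simulation study below, but they are parameters of this sketch rather than prescriptions.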
For simplicity, the change point location is assessed under the assumption that there is at
most one change point. In this case, the alternative hypothesis can as well be formulated:
\[ H_{1b}: \exists\,\lambda \in [0,1] \text{ such that } U_j \sim \begin{cases} C_1 & \text{for all } j = 1, \ldots, \lfloor\lambda n\rfloor,\\ C_2 & \text{for all } j = \lfloor\lambda n\rfloor + 1, \ldots, n, \end{cases} \]
whereas $C_1$ and $C_2$ are assumed to differ on a non-empty subset of $[0,1]^d$. An estimator for the location of the change point, $\hat{\lambda}^i_n$, $i = 1, 2, 3$, is obtained by replacing max functions by arg max functions in Equations (23), (24), and (25). For ease of exposition, the superindex $i$ is dropped in the following if no explicit reference to the functional is required. Given a (not necessarily correct) change-point estimator $\hat{\lambda}_n$, the empirical copula of $X_1, \ldots, X_{\lfloor\hat{\lambda}_n n\rfloor}$ is an estimator of the unknown mixture distribution given by [for an analogous estimator related to general distribution functions, see 9]:
\[ C_{\hat{\lambda}_n,1}(u) = 1_{\{\hat{\lambda}_n \le \lambda\}}\, C_1(u) + 1_{\{\hat{\lambda}_n > \lambda\}}\, \hat{\lambda}_n^{-1}\left\{\lambda\, C_1(u) + \left(\hat{\lambda}_n - \lambda\right) C_2(u)\right\}, \tag{28} \]
for all $u \in [0,1]^d$. The latter coincides with $C_1$ if and only if the change point is estimated correctly. On the other hand, the empirical copula of $X_{\lfloor\hat{\lambda}_n n\rfloor+1}, \ldots, X_n$ is an estimator of the unknown mixture distribution given by
\[ C_{\hat{\lambda}_n,2}(u) = 1_{\{\hat{\lambda}_n \le \lambda\}}\left(1 - \hat{\lambda}_n\right)^{-1}\left\{\left(\lambda - \hat{\lambda}_n\right) C_1(u) + (1-\lambda)\, C_2(u)\right\} + 1_{\{\hat{\lambda}_n > \lambda\}}\, C_2(u), \tag{29} \]
for all $u \in [0,1]^d$. The latter coincides with $C_2$ if and only if the change point is estimated correctly. Consistency of $\hat{\lambda}_n$ follows from consistency of the empirical copula and the fact that the difference of the two mixture distributions given in Equations (28) and (29) is maximal in the case $\hat{\lambda}_n = \lambda$. Bai [1] iteratively applies the setting considered above to test for multiple breaks (one at a time), indicating a direction of future research to estimate locations of multiple change points in the dependence structure.
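A possible implementation of the location estimator, again reusing the helpers from the first sketch, simply records the maximizing candidate instead of the maximal value; the version below uses the Kuiper functional and is a sketch rather than the paper's code.

```python
# Sketch of the change point location estimator hat(lambda)_n: the max over the
# candidates in Z_n is replaced by an argmax (here for the Kuiper functional).
import numpy as np

def change_point_location(x):
    n = x.shape[0]
    u_hat = pseudo_observations(x)             # helper from the earlier sketch
    ind = np.all(u_hat[:, None, :] <= u_hat[None, :, :], axis=2).astype(float)
    cum = np.cumsum(ind, axis=0)
    kuiper = np.empty(n - 1)
    for k in range(1, n):
        s_n = k * (n - k) / n ** 1.5 * (cum[k - 1] / k - (cum[-1] - cum[k - 1]) / (n - k))
        kuiper[k - 1] = s_n.max() - s_n.min()
    return (np.argmax(kuiper) + 1) / n         # hat(lambda)_n = (arg max k) / n
```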
Finite sample properties. Size and power of the tests for a constant copula are shown in Tables 6 and 7. The former shows results for $n = 400$ observations from serially independent as well as strictly stationary AR(1) processes with autoregressive coefficient $\beta = 0.25$ in each dimension. The alternative hypothesis $H_{1b}$ of at most one unspecified change point is considered. If present, then the change point is located after observation $\lfloor\lambda n\rfloor = n/2$ and only affects the parameter within the investigated Clayton or Gumbel families:
Table 6: Size and power of tests for a constant copula with unspecified change point candidate. Results are based on 1,000 Monte Carlo replications, n = 400, S = 1,000, kernel function $\kappa_2$, and $\alpha = 5\%$; additionally, the estimated change point location $\hat{\lambda}_n$, $\hat{\sigma}(\hat{\lambda}_n)$, and MSE$(\hat{\lambda}_n) \times 10^2$ are reported.

                              size/power            $\hat{\lambda}_n$      $\hat{\sigma}(\hat{\lambda}_n)$   MSE$(\hat{\lambda}_n)\times 10^2$
$\tau_2$                      0.2   0.6   0.9       0.6   0.9              0.6   0.9                         0.6   0.9
i.i.d. setting
Clayton l = 1 CvM 0.061 0.406 0.873 0.511 0.506 0.093 0.061 0.871 0.372
K 0.040 0.495 0.991 0.496 0.496 0.074 0.048 0.556 0.228
KS 0.062 0.409 0.905 0.507 0.503 0.079 0.056 0.627 0.319
l = 5 CvM 0.035 0.342 0.847 0.519 0.509 0.084 0.058 0.735 0.349
K 0.031 0.375 0.983 0.495 0.496 0.070 0.049 0.489 0.246
KS 0.036 0.337 0.881 0.506 0.504 0.080 0.055 0.645 0.306
Gumbel l = 1 CvM 0.047 0.456 0.903 0.509 0.507 0.083 0.055 0.694 0.304
K 0.056 0.474 0.992 0.494 0.495 0.074 0.050 0.548 0.251
KS 0.052 0.437 0.910 0.502 0.504 0.080 0.054 0.644 0.291
l = 5 CvM 0.043 0.387 0.880 0.506 0.507 0.086 0.055 0.739 0.310
K 0.026 0.343 0.974 0.487 0.496 0.072 0.046 0.518 0.217
KS 0.041 0.349 0.884 0.497 0.503 0.079 0.056 0.642 0.312
AR(1) setting with β = 0.25
Clayton l = 1 CvM 0.187 0.487 0.846 0.519 0.510 0.116 0.075 1.390 0.566
K 0.139 0.562 0.994 0.490 0.494 0.084 0.052 0.707 0.276
KS 0.177 0.499 0.913 0.504 0.506 0.098 0.071 0.969 0.503
l = 5 CvM 0.046 0.243 0.697 0.516 0.512 0.102 0.070 1.073 0.497
K 0.039 0.289 0.954 0.496 0.493 0.073 0.056 0.535 0.315
KS 0.040 0.253 0.771 0.506 0.506 0.095 0.065 0.901 0.421
Gumbel l = 1 CvM 0.185 0.536 0.878 0.515 0.513 0.119 0.079 1.443 0.649
K 0.123 0.576 0.997 0.487 0.493 0.086 0.051 0.746 0.272
KS 0.167 0.541 0.913 0.504 0.509 0.103 0.075 1.078 0.573
l = 5 CvM 0.042 0.295 0.706 0.519 0.506 0.096 0.074 0.949 0.547
K 0.040 0.294 0.939 0.495 0.495 0.084 0.053 0.716 0.282
KS 0.050 0.287 0.745 0.509 0.504 0.089 0.067 0.793 0.457
the copula $C_1$ is parameterized such that $\tau_1 = 0.2$, the copula $C_2$ such that $\tau_2 \in \{0.2, 0.6, 0.9\}$. We consider $S = 1{,}000$ tapered block multiplier simulations based on Normal multiplier random variables, block length $l_M(400) = 5$, kernel function $\kappa_2$, and report mean as well as mean squared error of 1,000 MC repetitions.
In the case of independent and identically distributed observations, we observe that the tapered block multiplier works similarly well as the standard multiplier (i.e., $l_M = 1$): the asymptotic size of the test, chosen to be 5%, is well approximated and its power increases in the difference $\tau_2 - \tau_1$. The estimated location of the change point, $\hat{\lambda}_n$, is close to its theoretical value. Moreover, its standard deviation $\hat{\sigma}(\hat{\lambda}_n)$ as well as its mean squared error MSE$(\hat{\lambda}_n)$ are decreasing in the difference $\tau_2 - \tau_1$. In the case of serially dependent observations sampled from AR(1) processes with $\beta = 0.25$, we find that the observed size of the test strongly deviates from its nominal size (chosen to be 5%) if serial dependence
Table 7: Size and power of tests for a constant copula with unspecified change point candidate. Results are based on 1,000 Monte Carlo replications, n = 800, S = 500, kernel function $\kappa_2$, and $\alpha = 5\%$; additionally, the estimated change point location $\hat{\lambda}_n$, $\hat{\sigma}(\hat{\lambda}_n)$, and MSE$(\hat{\lambda}_n) \times 10^2$ are reported.

                              size/power            $\hat{\lambda}_n$      $\hat{\sigma}(\hat{\lambda}_n)$   MSE$(\hat{\lambda}_n)\times 10^2$
$\tau_2$                      0.2   0.6   0.9       0.6   0.9              0.6   0.9                         0.6   0.9
i.i.d. setting
Clayton l = 1 CvM 0.033 0.695 0.999 0.508 0.506 0.079 0.041 0.623 0.173
K 0.054 0.901 1.000 0.494 0.497 0.059 0.029 0.350 0.087
KS 0.046 0.734 0.999 0.505 0.503 0.068 0.038 0.463 0.142
l = 6 CvM 0.043 0.681 0.997 0.510 0.505 0.076 0.042 0.585 0.177
K 0.037 0.838 1.000 0.494 0.497 0.057 0.031 0.335 0.096
KS 0.036 0.699 0.998 0.507 0.503 0.062 0.039 0.384 0.149
Gumbel l = 1 CvM 0.049 0.510 0.953 0.510 0.506 0.090 0.049 0.823 0.243
K 0.037 0.688 1.000 0.490 0.496 0.061 0.032 0.372 0.106
KS 0.051 0.509 0.966 0.502 0.503 0.081 0.044 0.672 0.199
l = 6 CvM 0.055 0.721 0.998 0.505 0.504 0.069 0.037 0.477 0.138
K 0.034 0.775 1.000 0.498 0.496 0.059 0.029 0.355 0.086
KS 0.045 0.686 0.998 0.504 0.500 0.065 0.039 0.428 0.156
AR(1) setting with β = 0.25
Clayton l = 1 CvM 0.184 0.712 0.995 0.519 0.507 0.097 0.056 0.973 0.321
K 0.122 0.914 1.000 0.495 0.497 0.063 0.034 0.411 0.122
KS 0.169 0.756 0.999 0.512 0.506 0.084 0.050 0.714 0.248
l = 6 CvM 0.065 0.488 0.939 0.506 0.508 0.089 0.054 0.803 0.293
K 0.057 0.724 1.000 0.493 0.496 0.060 0.033 0.359 0.113
KS 0.062 0.521 0.965 0.500 0.503 0.072 0.045 0.527 0.206
Gumbel l = 1 CvM 0.207 0.760 0.992 0.507 0.509 0.091 0.052 0.841 0.273
K 0.141 0.879 1.000 0.489 0.497 0.070 0.036 0.496 0.135
KS 0.182 0.780 0.998 0.500 0.505 0.089 0.047 0.805 0.218
l = 6 CvM 0.052 0.533 0.959 0.514 0.508 0.083 0.052 0.707 0.273
K 0.046 0.670 1.000 0.495 0.496 0.065 0.032 0.426 0.106
KS 0.053 0.520 0.974 0.508 0.505 0.076 0.048 0.584 0.232
is neglected and the block length $l_M = 1$ is used, with estimated sizes reaching up to 18.7%. The test based on the tapered block multiplier with block length $l_M(400) = 5$ yields rejection quotas which approximate the asymptotic size well in all settings considered. These results are strengthened in Table 7, which shows results of MC simulations for sample size $n = 800$ and block length $l_M = 6$: the tests based on the tapered block multiplier perform well in size, their power improves considerably with the increased number of observations, and the change point location is well captured. Standard deviation and mean squared error of the estimated location of the change point, $\hat{\lambda}_n$, decrease in the difference $\tau_2 - \tau_1$.
Comparing the tests based on statistics $T^1_n$, $T^2_n$, and $T^3_n$, we find that the test based on the Kuiper-type statistic performs best: results indicate that the nominal size is well approximated in finite samples. Moreover, the test is most powerful in many settings. Likewise, with regard to the estimated location of the change point, the Kuiper-type statistic performs best in mean and in mean squared error.
The introduced tests for a constant copula offer several starting points for further research. For instance, Inoue [27] investigates nonparametric change point tests for the joint distribution of strongly mixing random vectors and finds that the observed size of the test heavily depends on the choice of the block length l in the resampling procedure. For different types of serially dependent observations, e.g., AR(1) processes with a larger coefficient of the lagged variable or GARCH(1, 1) processes, it is of interest to investigate the optimal choice of the block length for the tapered block multiplier-based test with unspecified change point candidate. Moreover, test statistics based on different functionals offer potential for improvements: e.g., the Cramér-von Mises functional introduced by Rémillard and Scaillet [38] led to strong results in the case of a specified change point candidate. Though challenging from a computational point of view, an application of this functional to the case of unspecified change point candidate(s) is of interest as it yields very powerful tests.
4. Conclusion
Consistent tests for constancy of the copula with specified or unspecified change point candidate are introduced. We observe a trade-off in the assumptions required for testing: if a change point candidate is specified, then the test is consistent whether or not there is a simultaneous change point in the marginal distribution function(s). If the change point candidate(s) are unspecified, then the assumption of strictly stationary marginal distribution functions is required, but it allows us to drop continuity assumptions on the partial derivatives of the underlying copula(s). The tests are shown to behave well in size and power when applied to various types of dependent observations. P-values of the tests are estimated using a tapered block multiplier technique which is based on serially dependent multiplier random variables; the latter is shown to perform better than the block bootstrap in mean and mean squared error when estimating the asymptotic covariance structure of the empirical copula process in various settings.
[1] Bai, J. (1997): Estimating multiple breaks one at a time, Econom. Theory, 13(3), pp.
315–352.
[2] Billingsley, P. (1995): Probability and Measure, John Wiley & Sons.
[3] Brodsky, B. E., Darkhovsky, B. S. (1993): Nonparametric Methods in Change Point
Problems, Kluwer Academic Publishers.
[4] Bücher, A. (2011): Statistical Inference for Copulas and Extremes, Ph.D. thesis, Ruhr-
Universität Bochum.
[5] Bücher, A., Dette, H. (2010): A note on bootstrap approximations for the empirical
copula process, Statist. Probab. Lett., 80(23-24), pp. 1925 – 1932.
[6] Bühlmann, P. (1993): The blockwise bootstrap in time series and empirical processes,
Ph.D. thesis, ETH Zürich, Diss. ETH No. 10354.
[7] Bühlmann, P., Künsch, H. R. (1999): Block length selection in the bootstrap for time
series, Comput. Statist. Data Anal., 31(3), pp. 295–310.
[8] Busetti, F., Harvey, A. (2010): When is a copula constant? A test for changing
relationships, J. Financ. Econ., 9(1), pp. 106–131.
[9] Carlstein, E. (1988): Nonparametric change-point estimation, Ann. Statist., 16(1), pp.
188–197.
[10] Carrasco, M., Chen, X. (2002): Mixing and moment properties of various GARCH and
stochastic volatility models, Econom. Theory, 18(1), pp. 17–39.
[11] Chen, X., Fan, Y. (2006): Estimation and model selection of semiparametric copula-
based multivariate dynamic models under copula misspecification, Journal of Econo-
metrics, 135(1–2), pp. 125–154.
[12] Csörgő, M., Hórvath, L. (1997): Limit theorems in change-point analysis, John Wiley
& Sons.
[13] Deheuvels, P. (1979): La fonction de dépendance empirique et ses propriétés: un test
non paramétrique d’indépendance, Académie Royale de Belgique. Bulletin de la Classe
des Sciences (5th series), 65(6), pp. 274–292.
[14] Dhompongsa, S. (1984): A note on the almost sure approximation of the empirical
process of weakly dependent random variables, Yokohama Math. J., 32(1-2), pp. 113–
121.
[15] Dias, A., Embrechts, P. (2009): Testing for structural changes in exchange rates de-
pendence beyond linear correlation, Eur. J. Finance, 15(7), pp. 619–637.
[16] Doukhan, P. (1994): Mixing: properties and examples, Lecture Notes in Statistics,
Springer-Verlag, New York.
[17] Doukhan, P., Fermanian, J.-D., Lang, G. (2009): An empirical central limit theorem
with applications to copulas under weak dependence, Stat. Infer. Stoch. Process., 12(1),
pp. 65–87.
[18] Fermanian, J.-D., Radulovic, D., Wegkamp, M. (2004): Weak convergence of empirical
copula processes, Bernoulli, 10(5), pp. 847–860.
[19] Fermanian, J.-D., Scaillet, O. (2003): Nonparametric estimation of copulas for time
series, J. Risk, 5(4), pp. 25–54.
[20] Gaißer, S., Memmel, C., Schmidt, R., Wehn, C. (2009): Time dynamic and hierarchical
dependence modelling of an aggregated portfolio of trading books - a multivariate
nonparametric approach, Deutsche Bundesbank Discussion Paper, Series 2: Banking
and Financial Studies 07/2009.
[21] Gaißer, S., Ruppert, M., Schmid, F. (2010): A multivariate version of Hoeffding’s
Phi-Square, J. Multivariate Anal., 101(10), pp. 2571–2586.
[22] Genest, C., Segers, J. (2010): On the covariance of the asymptotic empirical copula
process, J. Multivariate Anal., 101(8), pp. 1837–1845.
[23] Giacomini, E., Härdle, W. K., Spokoiny, V. (2009): Inhomogeneous dependency mod-
eling with time varying copulae, J. Bus. Econ. Stat., 27(2), pp. 224–234.
[24] Guegan, D., Zhang, J. (2010): Change analysis of dynamic copula for measuring de-
pendence in multivariate financial data, Quant. Finance, 10(4), pp. 421–430.
[25] Hall, P., Heyde, C. (1980): Martingale limit theory and its application, Academic Press,
New York.
[26] Hórvath, L., Shao, Q.-M. (2007): Limit theorems for permutations of empirical pro-
cesses with applications to change point analysis, Stoch. Proc. Appl., 117(12), pp.
1870–1888.
[27] Inoue, A. (2001): Testing for distributional change in time series, Econom. Theory,
17(1), pp. 156–187.
[28] Jondeau, E., Poon, S.-H., Rockinger, M. (2007): Financial Modeling Under Non-
Gaussian Distributions, Springer, London.
[29] Kojadinovic, I., Yan, Y. (2011): Tests of serial independence for continuous multivari-
ate time series based on a Möbius decomposition of the independence empirical copula
process, Ann. Inst. Statist. Math., 63(2), pp. 347–373.
[30] Kosorok, M. R. (2008): Introduction to empirical processes and semiparametric infer-
ence, Springer.
[31] Künsch, H. R. (1989): The jackknife and the bootstrap for general stationary obser-
vations, Ann. Statist., 17(3), pp. 1217–1241.
[32] Paparoditis, E., Politis, D. N. (2001): Tapered block bootstrap, Biometrika, 88(4), pp.
1105–1119.
[33] Patton, A. J. (2002): Applications of copula theory in financial econometrics, Ph.D.
thesis, University of California, San Diego.
[34] Patton, A. J. (2004): On the out-of-sample importance of skewness and asymmetric
dependence for asset allocation, J. Financ. Econ., 2(1), pp. 130–168.
[35] Philipp, W., Pinzur, L. (1980): Almost sure approximation theorems for the multivari-
ate empirical process, Z. Wahrscheinlichkeitstheorie verw. Gebiete, 54(1), pp. 1–13.
[36] Rémillard, B. (2010): Goodness-of-fit tests for copulas of multivariate time series, Tech.
rep., HEC Montréal.
[37] Rémillard, B., Scaillet, O. (2006): Testing for equality between two copulas, Tech.
Rep. G-2006-31, Les Cahiers du GERAD.
[38] Rémillard, B., Scaillet, O. (2009): Testing for equality between two copulas, J. Multi-
variate Anal., 100(3), pp. 377–386.
[39] Rio, E. (1993): Covariance inequalities for strongly mixing processes, Ann. Inst. H.
Poincaré Sect. B, 29(4), pp. 587–597.
[40] Rio, E. (2000): Théorie Asymptotique des Processus Aléatoires Faiblement Dépendants,
Springer.
[41] Rodriguez, J. C. (2006): Measuring financial contagion: A copula approach, J. Em-
pirical Finance, 14(3), pp. 401–423.
[42] Ruppert, M. (2011): Contributions to Static and Time-Varying Copula-based Modeling
of Multivariate Association, Ph.D. thesis, University of Cologne.
[43] Rüschendorf, L. (1976): Asymptotic distributions of multivariate rank order statistics,
Ann. Statist., 4(5), pp. 912–923.
[44] Scaillet, O. (2005): A Kolmogorov-Smirnov type test for positive quadrant dependence,
Canad. J. Statist., 33(3), pp. 415–427.
[45] Schmid, F., Schmidt, R., Blumentritt, T., Gaißer, S., Ruppert, M. (2010): Copula-
based measures of multivariate association, in: Jaworski, P., Durante, F., Härdle, W.,
Rychlik, T. (eds.), Copula theory and its applications - Proceedings of the Workshop
held in Warsaw, 25-26 September 2009, pp. 209–235, Springer, Berlin Heidelberg.
[46] Segers, J. (2011): Weak convergence of empirical copula processes under nonrestrictive
smoothness assumptions, Bernoulli, forthcoming.
[47] Sklar, A. (1959): Fonctions de répartition à n dimensions et leurs marges, Publications
de l'Institut de Statistique de l'Université de Paris, 8, pp. 229–231.
[48] van den Goorbergh, R. W. J., Genest, C., Werker, B. J. M. (2005): Bivariate option
pricing using dynamic copula models, Insur. Math. Econ., 37(1), pp. 101–114.
[49] van der Vaart, A. W., Wellner, J. A. (1996): Weak convergence and empirical processes,
Springer Verlag, New York.
[50] van Kampen, M., Wied, D. (2010): A nonparametric constancy test for copulas under
mixing conditions, Tech. Rep. 36/10, TU Dortmund, SFB 823.
[51] Wied, D., Dehling, H., van Kampen, M., Vogel, D. (2011): A fluctuation test for
constant spearman’s rho, Tech. Rep. 16/11, TU Dortmund, SFB 823.
[52] Zari, T. (2010): Contribution à l’étude du processus empirique de copule, Ph.D. thesis,
Université Paris 6.
5. Appendix: Proofs of the results
Proof of Theorem 1. The proof is established as in Gaißer et al. [21], Proof of Theorem 4, while applying a result on Hadamard-differentiability under nonrestrictive smoothness assumptions obtained by Bücher [4]. Consider integral transformations $U_{j,i} := F_i(X_{j,i})$ for all $j = 1, \ldots, n$, $i = 1, \ldots, d$, and denote their joint empirical distribution function by $\hat{F}_n$. As exposed in detail by Fermanian et al. [18], Lemma 1, integral transformations allow us to simplify the exposition while the obtained asymptotic results remain valid for general continuous marginal distribution functions. Under the strong mixing condition $\alpha_X(r) = O(r^{-a})$ for some $a > 1$, Rio [40] proves weak convergence of the empirical process in the space $\ell^\infty([0,1]^d)$:
\[ \sqrt{n}\left(\hat{F}_n(u) - F(u)\right) \xrightarrow{w} B_F(u). \]
Notice that the copula can be obtained by a mapping $\Upsilon$:
\[ \Upsilon: D_\Upsilon \to \ell^\infty([0,1]^d), \qquad F \mapsto \Upsilon(F) := F\left(F_1^{-1}, \ldots, F_d^{-1}\right). \]
Bücher [4], Lemma 2.6 establishes the following result, which is pivotal to conclude the proof:
Lemma 2. Assume that $F$ satisfies Condition (3). Then $\Upsilon$ is Hadamard-differentiable at $C$ tangentially to
\[ \mathbb{D}_0 := \left\{ D \in \mathcal{C}\left([0,1]^d\right) \;\middle|\; D \text{ is grounded and } D(1, \ldots, 1) = 0 \right\}. \]
The derivative at $F$ in $D \in \mathbb{D}_0$ is represented by
\[ \left(\Upsilon'_F(D)\right)(u) = D(u) - \sum_{i=1}^{d} D_i F(u)\, D\left(u^{(i)}\right), \quad \text{for all } u \in [0,1]^d, \]
whereas $D_i F(u)$, $i = 1, \ldots, d$, is defined on the basis of Equation (4).
Hence, an application of the functional delta method yields weak convergence of the transformed empirical process in $(\ell^\infty([0,1]^d), \|\cdot\|_\infty)$:
\[ \sqrt{n}\left\{\left(\Upsilon\left(\hat{F}_n\right)\right)(u) - \left(\Upsilon(F)\right)(u)\right\} \xrightarrow{w} \left(\Upsilon'_F(B_F)\right)(u). \]
To conclude the proof, notice that
\[ \sup_{u\in[0,1]^d}\left|\left(\Upsilon\left(\hat{F}_n\right)\right)(u) - \hat{C}_n(u)\right| = O\left(\frac{1}{n}\right). \]
Proof of Theorem 2. Bühlmann [6], proof of Theorem 3.1 considers integral transformations $U_{j,i} := F_i(X_{j,i})$ for all $j = 1, \ldots, n$ and $i = 1, \ldots, d$ and proves the bootstrapped empirical process to converge weakly conditional on $X_1, \ldots, X_n$ almost surely in the space $D([0,1]^d)$ of càdlàg functions equipped with the uniform metric $\|\cdot\|_\infty$:
\[ \sqrt{n}\left(\hat{F}^B_n(u) - \hat{F}_n(u)\right) \xrightarrow[H]{a.s.} B_F(u). \]
Based on Hadamard-differentiability of the map $\Upsilon$ as established in the proof of Theorem 1, an application of the functional delta method for the bootstrap [see, e.g., 49] yields weak convergence of the transformed empirical process conditional on $X_1, \ldots, X_n$ in probability in $(\ell^\infty([0,1]^d), \|\cdot\|_\infty)$:
\[ \sqrt{n}\left\{\left(\Upsilon\left(\hat{F}^B_n\right)\right)(u) - \left(\Upsilon\left(\hat{F}_n\right)\right)(u)\right\} \xrightarrow[H]{P} \left(\Upsilon'_F(B_F)\right)(u). \]
To conclude the proof, recall that the empirical copula as defined in Equation (2) and the map $\Upsilon$ share the same asymptotic behavior.
Proof of Theorem 3. Based on integral transformations $U_{j,i} := F_i(X_{j,i})$ for all $j = 1, \ldots, n$ and $i = 1, \ldots, d$, Bühlmann [6], proof of Theorem 3.2 establishes weak convergence of the tapered block empirical process conditional on a sample $X_1, \ldots, X_n$ almost surely in $\left(D([0,1]^d), \|\cdot\|_\infty\right)$:
\[ \sqrt{n}\left(\frac{1}{n}\sum_{j=1}^{n}\frac{\xi_j}{\bar{\xi}}\, 1_{\{U_j\le u\}} - C_n(u)\right) \xrightarrow[\xi]{a.s.} B^M_C(u), \]
under assumptions A1, A2, A3, provided that the process is strongly mixing with given rate. Notice that the tapered multiplier empirical copula process as well as its limit are right-continuous with left limits and hence reside in $D([0,1]^d)$. Functions in $D([0,1]^d)$ are defined on the closed set $[0,1]^d$ and are bounded in consequence, which implies $D([0,1]^d) \subset \ell^\infty([0,1]^d)$. Following Lemma 7.8 in Kosorok [30], convergence in $\left(D([0,1]^d), \|\cdot\|_\infty\right)$ is then equivalent to convergence in $\left(\ell^\infty([0,1]^d), \|\cdot\|_\infty\right)$ (and more generally in any other function space of which $D([0,1]^d)$ is a subset and which further contains the tapered multiplier empirical copula process as well as its limit).
It remains to prove that the limiting behavior of the tapered block multiplier process for independent observations is unchanged if we assume the marginal distribution functions to be unknown. Using an argument of Rémillard and Scaillet [38], consider the following relation between the tapered block multiplier process in the case of known and unknown marginal distribution functions:
\begin{align*}
B^M_{C,n}(u) &= \sqrt{n}\left(\frac{1}{n}\sum_{j=1}^{n}\frac{\xi_j}{\bar{\xi}}\prod_{i=1}^{d} 1_{\{U_{j,i}\le u_i\}} - C_n(u)\right)\\
&= \sqrt{n}\left(\frac{1}{n}\sum_{j=1}^{n}\frac{\xi_j}{\bar{\xi}}\prod_{i=1}^{d} 1_{\{\hat{U}_{j,i}\le \hat{F}_i(F_i^{-1}(u_i))\}} - C_n\left(\hat{F}_1\left(F_1^{-1}(u_1)\right), \ldots, \hat{F}_d\left(F_d^{-1}(u_d)\right)\right)\right)\\
&= \hat{B}^M_{C,n}\left(\hat{F}_1\left(F_1^{-1}(u_1)\right), \ldots, \hat{F}_d\left(F_d^{-1}(u_d)\right)\right),
\end{align*}
where $\hat{B}^M_{C,n}$ denotes the tapered block multiplier process in the case that the marginal distribution functions as well as the copula are unknown. Under the given set of assumptions, we have in particular
\[ \sup_{u\in[0,1]^d}\left|C_n(u) - C(u)\right| \to 0. \]
Consider $u^{(i)} = (1, \ldots, 1, u_i, 1, \ldots, 1)$. It follows that
\[ \sup_{u_i\in[0,1]}\left|\frac{1}{n}\sum_{j=1}^{n} 1_{\{X_{j,i}\le F_i^{-1}(u_i)\}} - u_i\right| = \sup_{u_i\in[0,1]}\left|\hat{F}_i\left(F_i^{-1}(u_i)\right) - u_i\right| \to 0 \]
for all $i = 1, \ldots, d$ as $n \to \infty$. We conclude that $\left(B_{C,n}, \hat{B}^M_{C,n}\right) \xrightarrow{w} \left(B_C, B^M_C\right)$ in $\ell^\infty([0,1]^d) \times \ell^\infty([0,1]^d)$.
Proof of Lemma 1. Consider
\begin{align*}
\sqrt{n}\left(\hat{C}_n(u) - \hat{C}_N(u)\right) &= \sqrt{n}\left(\hat{C}_n(u) - C(u)\right) - \sqrt{n}\left(\hat{C}_N(u) - C(u)\right)\\
&= \sqrt{n}\left(\hat{C}_n(u) - C(u)\right) - \frac{\sqrt{n}}{\sqrt{N}}\,\sqrt{N}\left(\hat{C}_N(u) - C(u)\right) \xrightarrow{w} G_C(u),
\end{align*}
since the factor $\sqrt{n}/\sqrt{N}$ tends to zero for $n(N) \to \infty$ and $n(N) = o(N)$. Following Theorem 1, $\sqrt{N}\left(\hat{C}_N(u) - C(u)\right)$ converges to a tight centered Gaussian process in $\left(\ell^\infty([0,1]^d), \|\cdot\|_\infty\right)$. The result is derived by an application of Slutsky's Theorem and consistent (covariance) estimation.
Proof of Theorem 4. With given assumptions, the asymptotic behavior of each empirical copula process is derived in Theorem 1:
\[ \hat{G}_{C_1,\lfloor\lambda n\rfloor}(u) := \sqrt{\lfloor\lambda n\rfloor}\left(\hat{C}_{\lfloor\lambda n\rfloor}(u) - C_1(u)\right) \xrightarrow{w} G_{C_1}(u) := B_{C_1}(u) - \sum_{i=1}^{d} D_i C_1(u)\, B_{C_1}\left(u^{(i)}\right) \]
in $\left(\ell^\infty([0,1]^d), \|\cdot\|_\infty\right)$. Analogously, we have
\[ \bar{G}_{C_2,n-\lfloor\lambda n\rfloor}(v) := \sqrt{n-\lfloor\lambda n\rfloor}\left(\bar{C}_{n-\lfloor\lambda n\rfloor}(v) - C_2(v)\right) \xrightarrow{w} G_{C_2}(v) := B_{C_2}(v) - \sum_{i=1}^{d} D_i C_2(v)\, B_{C_2}\left(v^{(i)}\right) \]
in $\left(\ell^\infty([0,1]^d), \|\cdot\|_\infty\right)$. If a joint, hence $2d$-dimensional, mean zero limiting Gaussian process $(G_{C_1}(u), G_{C_2}(v))^\top$ exists, then a complete characterization can be obtained based on its covariance function [cf. 49, Appendix A.2]. Hence, it remains to prove that the empirical covariance matrix converges to a well-defined limit. To ease exposition, indices within the sample $X_1, \ldots, X_n$ are shifted by $-\lfloor\lambda n\rfloor$ to locate the change point candidate at zero. The covariance matrix is given by
\[ \lim_{n\to\infty}\begin{pmatrix}
\mathrm{Cov}\!\left(\hat{G}_{C_1,\lfloor\lambda n\rfloor}(u),\, \hat{G}_{C_1,\lfloor\lambda n\rfloor}(v)\right) & \mathrm{Cov}\!\left(\hat{G}_{C_1,\lfloor\lambda n\rfloor}(u),\, \bar{G}_{C_2,n-\lfloor\lambda n\rfloor}(v)\right)\\
\mathrm{Cov}\!\left(\bar{G}_{C_2,n-\lfloor\lambda n\rfloor}(v),\, \hat{G}_{C_1,\lfloor\lambda n\rfloor}(u)\right) & \mathrm{Cov}\!\left(\bar{G}_{C_2,n-\lfloor\lambda n\rfloor}(u),\, \bar{G}_{C_2,n-\lfloor\lambda n\rfloor}(v)\right)
\end{pmatrix} \]
for all $u, v \in [0,1]^d$ if the limit exists, whereas
\begin{align}
\mathrm{Cov}\!\left(\hat{G}_{C_1,\lfloor\lambda n\rfloor}(u),\, \hat{G}_{C_1,\lfloor\lambda n\rfloor}(v)\right) &:= \frac{1}{\lfloor\lambda n\rfloor}\sum_{i=-\lfloor\lambda n\rfloor+1}^{0}\;\sum_{j=-\lfloor\lambda n\rfloor+1}^{0}\mathrm{Cov}\!\left(1_{\{\hat{U}_i\le u\}},\, 1_{\{\hat{U}_j\le v\}}\right), \tag{30}\\
\mathrm{Cov}\!\left(\hat{G}_{C_1,\lfloor\lambda n\rfloor}(u),\, \bar{G}_{C_2,n-\lfloor\lambda n\rfloor}(v)\right) &:= \frac{1}{\sqrt{\lfloor\lambda n\rfloor(n-\lfloor\lambda n\rfloor)}}\sum_{i=-\lfloor\lambda n\rfloor+1}^{0}\;\sum_{j=1}^{n-\lfloor\lambda n\rfloor}\mathrm{Cov}\!\left(1_{\{\hat{U}_i\le u\}},\, 1_{\{\hat{U}_j\le v\}}\right), \tag{31}\\
\mathrm{Cov}\!\left(\bar{G}_{C_2,n-\lfloor\lambda n\rfloor}(v),\, \hat{G}_{C_1,\lfloor\lambda n\rfloor}(u)\right) &:= \frac{1}{\sqrt{\lfloor\lambda n\rfloor(n-\lfloor\lambda n\rfloor)}}\sum_{i=1}^{n-\lfloor\lambda n\rfloor}\;\sum_{j=-\lfloor\lambda n\rfloor+1}^{0}\mathrm{Cov}\!\left(1_{\{\hat{U}_i\le v\}},\, 1_{\{\hat{U}_j\le u\}}\right), \tag{32}\\
\mathrm{Cov}\!\left(\bar{G}_{C_2,n-\lfloor\lambda n\rfloor}(u),\, \bar{G}_{C_2,n-\lfloor\lambda n\rfloor}(v)\right) &:= \frac{1}{n-\lfloor\lambda n\rfloor}\sum_{i=1}^{n-\lfloor\lambda n\rfloor}\;\sum_{j=1}^{n-\lfloor\lambda n\rfloor}\mathrm{Cov}\!\left(1_{\{\hat{U}_i\le u\}},\, 1_{\{\hat{U}_j\le v\}}\right). \tag{33}
\end{align}
Convergence of the series in Equations (30) and (33) follows from Theorem 1; furthermore, the limiting variances are equal (under the null hypothesis) and the double sum in its representation can be simplified [see 39] to reconcile the result of Theorem 1. Equations (31) and (32) coincide by symmetry and converge absolutely, as:
\[ \lim_{n\to\infty}\left|\mathrm{Cov}\!\left(\hat{G}_{C_1,\lfloor\lambda n\rfloor}(u),\, \bar{G}_{C_2,n-\lfloor\lambda n\rfloor}(v)\right)\right| \le \lim_{n\to\infty}\frac{4}{\sqrt{\lfloor\lambda n\rfloor(n-\lfloor\lambda n\rfloor)}}\sum_{i=-\lfloor\lambda n\rfloor+1}^{0}\;\sum_{j=1}^{n-\lfloor\lambda n\rfloor}\alpha_X(|j-i|), \]
cf. Inoue [27], proof of Theorem 2.1 and Hall and Heyde [25], Theorem A5. Direct calculations and an application of the Cauchy condensation test to the generalized harmonic series yield:
\[ \lim_{n\to\infty}\frac{4}{\sqrt{\lfloor\lambda n\rfloor(n-\lfloor\lambda n\rfloor)}}\sum_{i=-\lfloor\lambda n\rfloor+1}^{0}\;\sum_{j=1}^{n-\lfloor\lambda n\rfloor}\alpha_X(|j-i|) < 4\sum_{i=1}^{\infty}\alpha_X(i) < 4\sum_{i=1}^{\infty} i^{-a} < \infty, \tag{34} \]
based on strong mixing with polynomial rate $\alpha_X(r) = O(r^{-a})$ for some $a > 1$. Absolute convergence of Equations (31) and (32) follows from the comparison test for infinite series with respect to the series given in Equation (34). Notice that (under the null hypothesis):
\[ \sqrt{\frac{\lfloor\lambda n\rfloor(n-\lfloor\lambda n\rfloor)}{n}}\left(\hat{C}_{\lfloor\lambda n\rfloor}(u) - \bar{C}_{n-\lfloor\lambda n\rfloor}(u)\right) = \sqrt{\frac{n-\lfloor\lambda n\rfloor}{n}}\,\hat{G}_{C_1,\lfloor\lambda n\rfloor}(u) - \sqrt{\frac{\lfloor\lambda n\rfloor}{n}}\,\bar{G}_{C_2,n-\lfloor\lambda n\rfloor}(u) \]
for all $u \in [0,1]^d$. An application of the continuous mapping theorem and Slutsky's theorem yields
\[ T_n(\lambda) \xrightarrow{w} T(\lambda) = \int_{[0,1]^d}\left\{\sqrt{1-\lambda}\, G_{C_1}(u) - \sqrt{\lambda}\, G_{C_2}(u)\right\}^2 \mathrm{d}u. \]
Proof of Corollary 1. Assume a sample $X_1, \ldots, X_n$ of $(X_j)_{j\in\mathbb{Z}}$ with specified change point candidate $\lfloor\lambda n\rfloor$ for $\lambda \in [0,1]$ such that $U_i \sim C_1$ for all $i = 1, \ldots, \lfloor\lambda n\rfloor$ and $U_i \sim C_2$ for all $i = \lfloor\lambda n\rfloor+1, \ldots, n$. Based on pseudo-observations $\hat{U}_1, \ldots, \hat{U}_{\lfloor\lambda n\rfloor}$ and $\hat{U}_{\lfloor\lambda n\rfloor+1}, \ldots, \hat{U}_n$ (which are estimated separately for each subsample), consider
\[ \sqrt{\frac{\lfloor\lambda n\rfloor}{n}}\,\sqrt{\lfloor\lambda n\rfloor}\left(\hat{C}_{\lfloor\lambda n\rfloor}(u) - C_1(u)\right) + \sqrt{\frac{n-\lfloor\lambda n\rfloor}{n}}\,\sqrt{n-\lfloor\lambda n\rfloor}\left(\bar{C}_{n-\lfloor\lambda n\rfloor}(u) - C_2(u)\right) \]
for all $u \in [0,1]^d$. If $C_1$ and $C_2$ satisfy Condition (3), then weak convergence of the latter linear combination follows by an application of Slutsky's theorem and the proof of Theorem 4. Merging the sums involved in the two empirical copulas yields:
\begin{align}
&\frac{\lfloor\lambda n\rfloor}{\sqrt{n}}\left(\hat{C}_{\lfloor\lambda n\rfloor}(u) - C_1(u)\right) + \frac{n-\lfloor\lambda n\rfloor}{\sqrt{n}}\left(\bar{C}_{n-\lfloor\lambda n\rfloor}(u) - C_2(u)\right) \tag{35}\\
&\quad= \sqrt{n}\left(\frac{1}{n}\sum_{j=1}^{n} 1_{\{\hat{U}_j\le u\}} - \frac{\lfloor\lambda n\rfloor}{n}\, C_1(u) - \frac{n-\lfloor\lambda n\rfloor}{n}\, C_2(u)\right)\nonumber\\
&\quad= \sqrt{n}\left(\frac{1}{n}\sum_{j=1}^{n} 1_{\{\hat{U}_j\le u\}} - C_{\mathrm{mix}}(u)\right), \tag{36}
\end{align}
for all $u \in [0,1]^d$, where, asymptotically equivalent, $C_{\mathrm{mix}}(u) := \lambda\, C_1(u) + (1-\lambda)\, C_2(u)$. Due to separate estimation of pseudo-observations in each subsample, ties occur with positive probability $P_n(\hat{U}_{j_1,i} = \hat{U}_{j_2,i}) > 0$ for $j_1 \in \{1, \ldots, \lfloor\lambda n\rfloor\}$ and $j_2 \in \{\lfloor\lambda n\rfloor+1, \ldots, n\}$ in finite samples. Asymptotically, we have $\lim_{n\to\infty} P_n(\hat{U}_{j_1,i} = \hat{U}_{j_2,i}) = 0$.
Hence, Equation (36) can be estimated on the basis of the tapered block multiplier approach
as given in Theorem 3 and Equation (12), respectively. The Corollary follows as Equation
(35) reconciles Equation (21) up to a rescaling of deterministic factors and an application
of the continuous mapping theorem.
Proof of Theorem 5. As observed in Equation (26), under the null hypothesis of a constant copula, weak convergence of $S_n(\zeta, u)$ can be derived based on the asymptotic behavior of the sequential empirical process. Under given assumptions, consider:
\[ B_{C,n}(\zeta, u) := \frac{1}{\sqrt{n}}\sum_{j=1}^{\lfloor\zeta n\rfloor}\left(1_{\{U_j\le u\}} - C(u)\right), \qquad \hat{G}_{C,n}(\zeta, u) := \frac{1}{\sqrt{n}}\sum_{j=1}^{\lfloor\zeta n\rfloor}\left(1_{\{\hat{U}_j\le u\}} - C(u)\right) \]
for all $(\zeta, u) \in [0,1]^{d+1}$. Philipp and Pinzur [35] prove convergence of $B_{C,n}(\zeta, u)$: there exists $\eta > 0$, depending on the dimension and the strong mixing rate $\alpha_X(r) = O(r^{-4-d\{1+\epsilon\}})$ for some $0 < \epsilon \le 1/4$, such that
\[ \sup_{0\le\zeta\le 1}\,\sup_{u\in[0,1]^d}\left|B_{C,n}(\zeta, u) - \frac{1}{\sqrt{n}}\, B_C(\zeta, u)\right| = O\left(\{\log n\}^{-\eta}\right) \]
almost surely in $\left(\ell^\infty([0,1]^{d+1}), \|\cdot\|_\infty\right)$, where $B_C(\zeta, u)$ is a C-Kiefer process with covariance
\[ \mathrm{Cov}\left(B_C(\zeta_1, u), B_C(\zeta_2, v)\right) = \min(\zeta_1, \zeta_2)\sum_{j\in\mathbb{Z}}\mathrm{Cov}\left(1_{\{U_0\le u\}}, 1_{\{U_j\le v\}}\right). \]
Notice that, following the work of Dhompongsa [14], an improvement of the considered strong mixing condition is possible; this improvement, however, is not relevant for the time-series applications investigated in this paper.
It remains to prove weak convergence in the case of unknown marginal distribution functions. Note that
\begin{align}
B_{C,n}(\zeta, u) &= \frac{1}{\sqrt{n}}\sum_{j=1}^{\lfloor\zeta n\rfloor}\left(\prod_{i=1}^{d} 1_{\{\hat{U}_{j,i}\le \hat{F}_{n,i}(F_i^{-1}(u_i))\}} - C(u)\right)\nonumber\\
&= \hat{G}_{C,n}\left(\zeta, \hat{F}_1\left(F_1^{-1}(u_1)\right), \ldots, \hat{F}_d\left(F_d^{-1}(u_d)\right)\right)\nonumber\\
&\quad + \frac{\lfloor\zeta n\rfloor}{n}\cdot\sqrt{n}\left(C\left(\hat{F}_1\left(F_1^{-1}(u_1)\right), \ldots, \hat{F}_d\left(F_d^{-1}(u_d)\right)\right) - C(u)\right), \tag{37}
\end{align}
whereas weak convergence of the sequential empirical process $B_{C,n}(\zeta, u)$ is established and weak convergence of Equation (37) can be proven by an application of the functional delta method and Slutsky's theorem [see 4, and references therein]. More precisely, if the partial derivatives $D_i C(u)$ of the copula exist and satisfy Condition (3), then
\[ \hat{G}_{C,n}(\zeta, u) \xrightarrow{w} B_C(\zeta, u) - \zeta\sum_{i=1}^{d} D_i C(u)\, B_C\left(1, u^{(i)}\right) \tag{38} \]
in $\left(\ell^\infty([0,1]^{d+1}), \|\cdot\|_\infty\right)$. Weak convergence of the $(d+1)$-time parameter tied down empirical copula process given in Equation (26) holds, since
\begin{align*}
S_n(\zeta, u) &= \hat{G}_{C,n}(\zeta, u) - \frac{\lfloor\zeta n\rfloor}{n}\,\hat{G}_{C,n}(1, u)\\
&= B_{C,n}(\zeta, u) - \frac{\lfloor\zeta n\rfloor}{n}\, B_{C,n}(1, u) + \left(\zeta - \frac{\lfloor\zeta n\rfloor}{n}\right)\sqrt{n}\left(C\left(\hat{F}_1\left(F_1^{-1}(u_1)\right), \ldots, \hat{F}_d\left(F_d^{-1}(u_d)\right)\right) - C(u)\right)\\
&\xrightarrow{w} B_C(\zeta, u) - \zeta B_C(1, u)
\end{align*}
in $\left(\ell^\infty([0,1]^{d+1}), \|\cdot\|_\infty\right)$. Notice that Equation (38) and in particular continuity of the partial derivatives is not required for this result [cf. the work of 36, in the context of GARCH residuals]. Convergence of the test statistics $T^1_n$, $T^2_n$, and $T^3_n$ follows by an application of the continuous mapping theorem.
Proof of Corollary 2. Under the null hypothesis, weak convergence conditional on $X_1, \ldots, X_n$ almost surely follows combining the results of Theorems 3 and 5. The strong mixing assumptions of the former theorem on conditional weak convergence of the tapered block multiplier empirical copula process are relevant for this proof since they imply those of the latter Theorem 5. Under the alternative hypothesis, there exist $P$ change point candidates such that
\[ 0 = \lambda_0 < \lambda_1 < \ldots < \lambda_P < \lambda_{P+1} = 1, \]
whereas $U_j \sim C_p$ for all $j = \lfloor\lambda_{p-1} n\rfloor + 1, \ldots, \lfloor\lambda_p n\rfloor$ and $p \in \mathcal{P} := \{1, \ldots, P+1\}$. For any given $\zeta \in [0,1]$, define
\[ \omega := \begin{cases} 0 & \text{for } 0 < \zeta \le \lambda_1,\\ \arg\max_{p\in\mathcal{P}} \lambda_p\, 1_{\{\lambda_p < \zeta\}} & \text{for } \lambda_1 < \zeta \le 1, \end{cases} \]
which, for $\zeta > \lambda_1$, yields the index of the maximal change point strictly dominated by $\zeta$.
Consider the following linear combination of copulas:
\[ L_n(\zeta, u) := \sum_{p=1}^{\omega}\frac{\lfloor\lambda_p n\rfloor - \lfloor\lambda_{p-1} n\rfloor}{\lfloor\zeta n\rfloor}\, C_p(u) + \frac{\lfloor\zeta n\rfloor - \lfloor\lambda_\omega n\rfloor}{\lfloor\zeta n\rfloor}\, C_{\omega+1}(u) \;\longrightarrow\; \sum_{p=1}^{\omega}\frac{\lambda_p - \lambda_{p-1}}{\zeta}\, C_p(u) + \frac{\zeta - \lambda_\omega}{\zeta}\, C_{\omega+1}(u) =: L(\zeta, u) \]
for all $(\zeta, u) \in [0,1]^{d+1}$. Consider a sample $X_1, \ldots, X_n$ of a process $(X_j)_{j\in\mathbb{Z}}$. Given knowledge of the constant marginal distribution functions $F_i$, $i = 1, \ldots, d$, the empirical copula of $X_1, \ldots, X_{\lfloor\zeta n\rfloor}$ converges weakly in $\left(\ell^\infty([0,1]^{d+1}), \|\cdot\|_\infty\right)$:
\begin{align}
&\sqrt{\lfloor\zeta n\rfloor}\left(\frac{1}{\lfloor\zeta n\rfloor}\sum_{j=1}^{\lfloor\zeta n\rfloor} 1_{\{U_j\le u\}} - L_n(\zeta, u)\right) \tag{39}\\
&\quad= \sqrt{\lfloor\zeta n\rfloor}\left(\frac{1}{\lfloor\zeta n\rfloor}\sum_{j=1}^{\lfloor\zeta n\rfloor} 1_{\{U_j\le u\}} - \sum_{p=1}^{\omega}\frac{\lfloor\lambda_p n\rfloor - \lfloor\lambda_{p-1} n\rfloor}{\lfloor\zeta n\rfloor}\, C_p(u) - \frac{\lfloor\zeta n\rfloor - \lfloor\lambda_\omega n\rfloor}{\lfloor\zeta n\rfloor}\, C_{\omega+1}(u)\right)\nonumber\\
&\quad= \sum_{p=1}^{\omega}\frac{1}{\sqrt{\lfloor\zeta n\rfloor}}\left(\sum_{j=\lfloor\lambda_{p-1} n\rfloor+1}^{\lfloor\lambda_p n\rfloor} 1_{\{U_j\le u\}} - \left(\lfloor\lambda_p n\rfloor - \lfloor\lambda_{p-1} n\rfloor\right) C_p(u)\right)\nonumber\\
&\qquad+ \frac{1}{\sqrt{\lfloor\zeta n\rfloor}}\left(\sum_{j=\lfloor\lambda_\omega n\rfloor+1}^{\lfloor\zeta n\rfloor} 1_{\{U_j\le u\}} - \left(\lfloor\zeta n\rfloor - \lfloor\lambda_\omega n\rfloor\right) C_{\omega+1}(u)\right)\nonumber\\
&\quad\xrightarrow{w} \sum_{p=1}^{\omega}\sqrt{\frac{\lambda_p - \lambda_{p-1}}{\zeta}}\, B_{C_p}(u) + \sqrt{\frac{\zeta - \lambda_\omega}{\zeta}}\, B_{C_{\omega+1}}(u) =: \frac{B_L(\zeta, u)}{\sqrt{\zeta}}.\nonumber
\end{align}
The latter result follows from Theorem 1 and the results on joint convergence given in the
proof of Theorem 4.
Considering the tapered multiplier empirical copula process, we have
\begin{align*}
\hat{S}^{M(s)}_n(\zeta, u) &= \frac{1}{\sqrt{n}}\left(\sum_{j=1}^{\lfloor\zeta n\rfloor}\xi^{(s)}_j\left(1_{\{\hat{U}_j\le u\}} - \hat{C}_n(u)\right) - \frac{\lfloor\zeta n\rfloor}{n}\sum_{j=1}^{n}\xi^{(s)}_j\left(1_{\{\hat{U}_j\le u\}} - \hat{C}_n(u)\right)\right)\\
&= \frac{1}{\sqrt{n}}\sum_{j=1}^{\lfloor\zeta n\rfloor}\xi^{(s)}_j\left(1_{\{\hat{U}_j\le u\}} - \hat{C}_{\lfloor\zeta n\rfloor}(u)\right) - \left(\hat{C}_n(u) - \hat{C}_{\lfloor\zeta n\rfloor}(u)\right)\frac{1}{\sqrt{n}}\sum_{j=1}^{\lfloor\zeta n\rfloor}\xi^{(s)}_j\\
&\quad- \frac{\lfloor\zeta n\rfloor}{\sqrt{n^3}}\sum_{j=1}^{n}\xi^{(s)}_j\left(1_{\{\hat{U}_j\le u\}} - \hat{C}_n(u)\right),
\end{align*}
for all $(\zeta, u) \in [0,1]^{d+1}$, whereas (applying Equation (39) and Theorem 3) the first and third summands converge weakly conditional on $X_1, \ldots, X_n$ almost surely to limiting processes $B^M_L(\zeta, u)$ and $\zeta B^M_L(1, u)$, respectively, in $\left(\ell^\infty([0,1]^{d+1}), \|\cdot\|_\infty\right)$. These are independent copies of $B_L(\zeta, u)$ and $\zeta B_L(1, u)$. Notice that the tapered multiplier random variables themselves are, by construction, strongly mixing. If additionally centered around zero (i.e., satisfying A3b), then the central limit theorem under strong mixing as given in Billingsley [2], Theorem 27.4 proves weak convergence of the second summand to a Normal limit conditional on $X_1, \ldots, X_n$ almost surely. Hence, there exists a tight limit in $\ell^\infty([0,1]^{d+1})$.
34

1. Introduction Over the last decade, copulas have become a standard tool in modern risk management. The copula of a continuous random vector is a function which uniquely determines the dependence structure linking the marginal distribution functions. Copulas play a pivotal role for, e.g., measuring multivariate association [see 45], pricing multivariate options [see 48] and allocating financial assets [see 34]. The latter two references emphasize that time variation of copulas possesses an important impact on financial engineering applications. Evidence for time-varying dependence structures can indirectly be drawn from functionals of the copula, e.g., Spearman’s ρ, as suggested by Gaißer et al. [20] and Wied et al. [51]. Investigating time variation of the copula itself, Busetti and Harvey [8] consider a nonparametric quantile-based test for a constant copula. Semiparametric tests for time variation of the parameter within a prespecified family of one-parameter copulas are proposed by Dias and Embrechts [15] and Giacomini et al. [23]. Guegan and Zhang [24] combine tests for constancy of the copula (on a given set of vectors on its domain), the copula family, and the parameter. The assumption of independent and identically distributed pseudo-observations is generally made in the latter references. With respect to financial time-series, the estimation of a GARCH model represents a frequently chosen option in order to approximate this assumption using the residuals obtained after GARCH filtration. The effect of replacing unobserved innovations by estimated residuals, however, is to be taken into account. Therefore, specific techniques for residuals are required [cf., e.g., 11]. Exploring this approach, Rémillard [36] investigates a nonparametric change point test for the copula of residuals in stochastic volatility models. Avoiding the need to specify any parametric model, Fermanian and Scaillet [19] consider purely nonparametric estimation of copulas for time-series under strict stationarity and strong mixing conditions on the multivariate process. A recent generalization of this framework is proposed by van Kampen and Wied [50] who assume the univariate processes to be strictly stationary but relax the assumption of a constant copula and suggest a quantile-based test for a constant copula under strong mixing assumptions. We introduce nonparametric Cramér-von Mises-, Kuiper-, and Kolmogorov-Smirnov tests for a constant copula under strong mixing assumptions. The tests extend those for timeconstant quantiles by assessing constancy of the copula on its domain. In consequence, they are consistent under general alternatives. Depending on the object of investigation, tests with a specified or unspecified change point (candidate) are introduced. Whereas the former setting requires a hypothesis on the change point location, it allows us to relax the assumption of strictly stationary univariate processes. P-values of the tests are estimated based on a generalization of the multiplier bootstrap technique introduced in Rémillard and Scaillet [38] to the case of strongly mixing time series. The idea is comparable to block bootstrap methods: however, instead of sampling blocks with replacement, we generate blocks of serially dependent multiplier random variables. For a general introduction to the latter idea, we refer to Bühlmann [6] and Paparoditis and Politis [32]. This paper is organized as follows: in Section 2, we discuss convergence of the empirical copula process under strong mixing. 
A result of Doukhan et al. [17] is generalized to establish the asymptotic behavior of the empirical copula process under nonrestrictive smoothness assumptions based on serially dependent observations. Furthermore, a tapered block multiplier bootstrap technique for inference on the weak limit of the empirical copula process is derived and assessed in finite samples. Tests for a constant copula with specified or unspecified change point (candidate) which are relying on this technique are established in Section 3. 1

2. Nonparametric inference based on serially dependent observations As a basis for the tests introduced in the next section, the result of Segers [46] on the asymptotic behavior of the empirical copula process under nonrestrictive smoothness assumptions is generalized to enable its applicability to serially dependent observations. Furthermore, we introduce a multiplier-based resampling method for this particular setting, establish its asymptotic behavior and investigate performance in finite samples. 2.1. Asymptotic theory Consider a vector-valued process (Xj )j∈Z with Xj = (Xj,1 , . . . , Xj,d ) taking values in Rd . Let Fi be the distribution function of Xj,i for all j ∈ Z, i = 1, . . . , d and let F be the joint distribution of Xj for all j ∈ Z. Assume that all marginal distribution functions are continuous. Then, according to Sklar’s Theorem [47], there exists a unique copula C such that F (x1 , . . . , xd ) = C(F1 (x1 ), . . . , Fd (xd )) for all (x1 , . . . , xd ) ∈ Rd . The σ-fields generated by Xj , j ≤ t, and Xj , j ≥ t, are denoted by Ft = σ{Xj , j ≤ t} and F t = σ{Xj , j ≥ t}, respectively. We define α(Fs , F s+r ) = sup
A∈Fs ,B∈F s+r

|P (A ∩ B) − P (A)P (B)|.

The strong- (or α-) mixing coefficient αX corresponding to the process (Xj )j∈Z is given by αX (r) = sups≥0 α(Fs , F s+r ). The process (Xj )j∈Z is said to be strongly mixing if αX (r) → 0 for r → ∞. This type of weak dependence covers a broad range of time-series models. Consider the following examples, cf. Doukhan [16] and Carrasco and Chen [10]: Example 1. i) AR(1) processes (Xj )j∈Z given by Xj = βXj−1 + ǫj , where (ǫj )j∈Z is a sequence of independent and identically distributed continuous innovations with mean zero. For |β| < 1, the process is strictly stationary and strongly mixing with exponential decay of αX (r). ii) GARCH(1, 1) processes (Xj )j∈Z , Xj = σj ǫj ,
2 2 σj = ω + βσj−1 + αǫ2 , j−1

(1)

where (ǫj )j∈Z is a sequence of independent and identically distributed continuous innova2 tions, independent of σ0 , with mean zero and variance one. For α + β < 1, the process is strictly stationary and strongly mixing with exponential decay of αX (r). Let X1 , . . . , Xn denote a sample from (Xj )j∈Z . A simple nonparametric estimator of the unknown copula C is given by the empirical copula which is first considered by Rüschendorf [43] and Deheuvels [13]. Depending on whether the marginal distribution functions are assumed to be known or unknown, we define Cn (u) := 1 n
n d

j=1 i=1 n d

1{Uj,i ≤ui } for all u ∈ [0, 1]d , 1{Uj,i ≤ui } for all u ∈ [0, 1]d , 2 (2)

1 Cn (u) := n

j=1 i=1

1]d . . . Unless otherwise noted. [17] investigate dependent observations and establish the asymptotic behav√ ior of the empirical copula process. . . 1]d . (3) for all x ∈ R. . Segers [46] points out that many popular families of copulas (e.i for a broad class of copulas. except the ith coordinate of u. n and i = 1. 1]d with covariance function Cov(BC (u)BC (v)) = j∈Z Cov 1{U0 ≤u} . . 1]d . 1{Uj ≤v} for all u. then the empirical copula process converges weakly in the space of uniformly bounded functions on [0. Notice that the covariance structure as given in Equation (6) depends on the entire process (Xj )j∈Z . In this particular case. . . Consider observations X1 . d. In addition to the practical relevance of this assumption. d. all u ∈ [0. . 0 < ui < 1.g. Doukhan et al. and for all i = 1. . 1]d . . .i ) permit more efficient inference on the copula than observations Uj. v ∈ [0. where ei denotes the ith column of a d × d identity matrix. . w. (4) whereas GC represents a Gaussian process given by d GC (u) = BC (u) − Di C(u)BC u(i) i=1 for all u ∈ [0. Genest and Segers [22] prove that pseudo-observations Uj. Condition (3) is not necessary to establish weak convergence of the empirical copula process. Under Condition (3).i = Fi (Xj. 1]d by all u ∈ [0. . Xn . . i = 1. Clayton. defined by n{Cn − C}. The following Theorem is based on nonrestrictive smoothness assumptions and mild assumptions on the strong mixing rate: Theorem 1. where Fi (x) = 1 n n j=1 1{Xj. . d. assuming the copula to possess continuous partial derivatives on [0. 1]d |ui ∈ (0. ∞ : √ n Cn (u) − C(u) −→GC (u). all u ∈ [0. . 1]d . A result of Bücher [4] permits to establish the asymptotic behavior of the empirical copula process in the case of serially dependent observations. the Gaussian. the partial derivatives’ domain   limh→0 C(u+hei )−C(u) for  h C(u+hei ) Di C(u) = for lim suph↓0 h   C(u)−C(u−hei ) lim suph↓0 for h can be extended to u ∈ [0. ui = 1. He establishes the asymptotic behavior of the empirical copula for serially independent observations under the weaker condition Di C(u) exists and is continuous on u ∈ [0. 1]d .with observations Uj. . (5) The vector u(i) denotes the vector where all coordinates.i = Fi (Xj. ui = 0. 1]d equipped with the uniform metric ℓ∞ ([0. If the marginal distribution functions Fi . If C satisfies Condition (3). 1]d .. are replaced by 1. . .i = Fi (Xj. . and Gumbel families) do not satisfy the assumption of continuous first partial derivatives on [0. . d. .i ) and pseudo-observations Uj. The process BC is a tight centered Gaussian process on [0.i ≤x} the marginal distribution functions are assumed to be unknown and the estimator Cn is used. . 1]d ). 3 . drawn from a strictly stationary process (Xj )j∈Z satisfying the strong mixing condition αX (r) = O(r −a ) for some a > 1. . (6) The proof is given in 5. are known then the limiting process reduces to BC .i ) for all j = 1. 1) for all i = 1.

Furthermore. W P 4 . d. . . . The multiplier random variables represent the remaining source of stochastic influence in this conditional setting. . Xn and W). . . 1]d ). Moreover.i 1 := n n k=1 Wk 1{Xk. Kosorok [30]. . . . Section 2. .i } for all j = 1. .3. Modes of convergence which allow us to investigate weak convergence of the empirical copula process conditional on an observed sample are considered next. 4]. . 1]d )) which is defined by f : ℓ∞ [0. . . Resampling techniques In this Section. 1] .i for all j = 1. In the case of independent and identically distributed observations satisfying Condition (3). n−1 ) distributed random variables W = (W1 . . ... [18] investigate the empirical copula process for independent and identically distributed observations X1 . validity of criteria (7) and (8) can be proven [see 18. 1] . .)∗ denote the measurable majorant and minorant with respect to the joint data (i.2. and Bücher and Dette [5]: sup h∈BL1 (ℓ∞ ([0. Section 3. .i ≤Xj. . 1]d . . n and i = 1. Xn . . n and i = 1. see van der Vaart and Wellner [49]. . Wn ) : W Cn (u) 1 := n n j=1 Wj 1{UW ≤u} for all u ∈ [0. h ∈ BL1 (ℓ∞ ([0. . f ∞ ≤ 1. XB and define n 1 B Cn (u) 1 := n n j=1 1{UB ≤u} for all u ∈ [0. h(. . Hence. We denote a bootstrap sample by XB . .1]d )) EW h √ W n Cn (u) − Cn (u) − Eh (GC (u)) −→ 0. . Xn in probability which is denoted by √ W n Cn (u) − Cn (u) −→ GC (u). . .2. a generalized multiplier bootstrap technique is introduced which is applicable in the case of serially dependent observations. . 1]d → R. P (8) The function h is assumed to be uniformly bounded with Lipschitz-norm bounded by one. . j d B Uj. . β ∈ ℓ∞ [0. . n−1 . the bootstrap empirical copula process converges weakly conditional on X1 . j d W Uj. d. .i 1 := n n k=1 1{X B ≤X B } j. .)∗ and h(. . .3. Moreover. . . i.e. . P (7) where EW denotes expectation with respect to W conditional on X1 . .2. .e. .9. a generalized asymptotic result is obtained for the (moving) block bootstrap which serves as a benchmark for the new technique in the following finite sample assessment. Xn . Xn in probability in ℓ∞ ([0. . . . X1 . . . . ∞ if the following two criteria are satisfied.i k. . . EW h √ W n Cn (u) − Cn (u) ∗ − EW h √ W n Cn (u) − Cn (u) ∗ −→ 0. Notice that the bootstrap empirical copula can equivalently be expressed based on multinomially (n. The bootstrap empirical copula process converges weakly conditional on X1 . . . Xn and prove consistency of the nonparametric bootstrap method which is based on sampling with replacement from X1 . |f (β) − f (γ)| ≤ β − γ ∞ for all γ. Fermanian et al. .

. . Given the sample X1 . . . If C satisfies Condition (3). . Xn . 1]d ). . Xn in probability in ℓ∞ ([0. Consider observations X1 .lB = {Xh+1 . Its asymptotic behavior is given next: Theorem 2. The block bootstrap sample is given by the observations of the k blocks BH1 . . . 4]. Based on bootstrap samples s = 1. XB = XH1 +1 . Xn in probability in ℓ∞ ([0.1 := 0 P (|ξj | > x)dx < ∞ for all j = 1. it is derived by means of an asymptotic result on the block bootstrap for general distribution functions established by Bühlmann [6]. . . . . . . . Whereas the bootstrap is consistent for independent and identically distributed samples. S a set of block bootstrap realizations to estimate the asymptotic behavior of the empirical copula process is obtained by: GC. . . drawn from a strictly stationary process (Xj )j∈Z satisfying ∞ (r + 1)16(d+1) αX (r) < ∞. ∞ . . . . . Xh+lB }. . . then the block bootstrap empirical copula process converges weakly conditional on X1 . . The previous theorem weakens the smoothness assumptions of a result obtained by Gaißer et al. . . . for all h = 0. additionally satisfying ξj 2. . . . lB (n) → ∞ as n → ∞ and lB (n) = o(n). XB = XHk +lB .n (u) = B(s) √ B(s) n Cn (u) − Cn (u) for all u ∈ [0. . XB = XH1 +lB . . . 1 n lB lB B Denote the block bootstrap empirical copula by Cn (u). . i. . [21].e. . ∞ : √ B n Cn (u) − Cn (u) −→ GC (u). . a block bootstrap method is proposed by Künsch [31]. the block bootstrap method requires blocks of size lB = lB (n). 1]d ). . . . . . . In consequence. . Hk independent and uniformly distributed random variables on {0. . n (whereas the last condition is slightly stronger than that of a finite second moment). . . We assume n = klB (else the last block is truncated) and simulate H = H1 . . . ¯  ξ ξ 5 . . 1]d . . . . . . .. Wn by ¯ ¯ ξ1 /ξ. consistency generally fails for serially dependent samples. .lB . . Xn almost surely is defined analogously. Replacing the multinomial multiplier random variables W1 .lB . XB +1 = XH2 +1 . . . n − lB }. ξn /ξ (ensuring realizations having arithmetic mean one) yields the multiplier (empirical copula) process which converges weakly conditional on X1 . . Assume that lB (n) = O(n1/2−ǫ ) for r=1 0 < ǫ < 1/2. H P The proof is given in 5. . . ξn with finite ∞ positive mean and variance. consisting of consecutive observations Bh.Weak convergence conditional on X1 . . . BHk . . see Bücher and Dette [5]:  √ 1 n n n j=1   ξj P 1{Uj ≤u} − Cn (u) −→ BC (u). . . . .1 and a representation of the copula as a composition of functions [18. Consider independent and identically distributed multiplier random variables ξ1 . Theorem 3. Xn . n − lB . A process related to the bootstrap empirical copula process can be formulated if both the assumption of multinomially distributed random variables is dropped and the marginal distribution functions are left unaltered during the resampling procedure.

both for known as well as for unknown marginal distribution functions. a generalization of the multiplier technique is considered next. . ξ n ξ j=1 where BM (u) is an independent copy of BC (u). we refer to the monographs of van der Vaart and Wellner [49] and Kosorok [30]. Xn almost surely is established in the following theorem: Theorem 3. Consider observations X1 . 1]d ).For general considerations of multiplier empirical processes. . A2 (ξj )j∈Z is a positive c · l(n)-dependent process. The process is introduced by Scaillet [44] in a bivariate context. ∞ :   n √ ξj 1  a. for fixed j ∈ Z.e. . For all j. . assume E[ξj ] = µ > 0. It is motivated by the fact that this technique is inconsistent when applied to serially dependent samples. Weak convergence of the tapered block multiplier process conditional on a sample X1 . drawn from a strictly stationary process (Xj )j∈Z satisfying ∞ (r + 1)c αX (r) < ∞. . i. We focus on copulas and consider a refinement of this technique. . we consider µ = 1 and v(0) = 1. Cov[ξj . Xn . where l(n) = O(n1/2−ǫ ) for 0 < ǫ < 1/2. Inoue [27] develops a block multiplier process for general distribution functions based on dependent data in which the same multiplier random variable is used for a block of observations. . . ξj+h ] = µ2 v(h/l(n)) and v is a function symmetric about zero. C The proof is given in 5.0 (u) C. A2. Hence. . Then BM. h ∈ Z.3 and Paparoditis and Politis [32]: The main idea is to consider a sample ξ1 . . A3 with block length l(n) → ∞. a general multivariate version and its unconditional weak convergence are investigated by Rémillard and Scaillet [38]. Xn almost surely in ℓ∞ ([0. 30. .n (u) ξ . The tapered block multiplier empirical copula process converges weakly conditional on X1 . where c is a constant and l(n) → ∞ as n → ∞ while l(n) = o(n). a tapered block multiplier (empirical copula) process is introduced based on the work of Bühlmann [6]. . The multiplier random variables can as well be assumed to be centered around 0 zero [cf. without loss of generality.s. . All central moments of ξj are supposed to be bounded given the sample size n. ξj is independent of ξ(j + h) for all |h| ≥ c · l(n). An interesting property of the multiplier process is its convergence to an independent copy of BC . A3 (ξj )j∈Z is strictly stationary. More precisely. whereas c = max{8d + 12. Chapter 3. .. . Remark 1. satisfying: A1 (ξj )j∈Z is independent of the observation process (Xj )j∈Z . Let r=1 the tapered block multiplier process (ξj )j∈Z satisfy A1. Define ξj := ξj − µ. M n ¯ 1{Uj ≤u} − Cn (u) −→ BC (u). Proof of Theorem 2. . Bücher and Dette [5] find that the multiplier technique yields more precise results than the nonparametric bootstrap in mean as well as in mean squared error when estimating the asymptotic covariance of the empirical copula process based on independent and identically distributed samples. ξn from a process (ξj )j∈Z of serially dependent tapered block multiplier random variables. .n 1 =√ n n 0 ξj j=1 ¯ ¯ 1 − ξ 0 1{Uj ≤u} = ξ √ n 6 n j=1 ξj ¯ M ¯ − 1 1{Uj ≤u} = ξBC.6]. ⌊2/ǫ⌋ + 1}.

e.g. it is symmetric about zero and tapered block multiplier process is defined by ∞ = 1.1]d since BM (u) tends to a tight centered Gaussian limit. tapered block multiplier random variables can as well be defined based on sequences (wj )j∈Z of. Example 3. Consider the function κ1 which assigns uniform weights given by κ1 (h) := 1 2l(n)−1 0 for all |h| < l(n) else.. The expectation of ξj is then given by E[ξj ] = 1. . its variance by V ar[ξj ] = 1 for all j ∈ Z. direct calculations further yield the covariance function Cov(ξj . A2. a basic version having uniform weights and a refined version with triangular weights are investigated and compared: Example 2.n C. (9) where (wj )j∈Z is an independent and identically distributed sequence of.q) random variables with q := 1/[2l(n)−1]. {1 − |h|/l(n)}/l(n)} for all h ∈ Z. 1/ q). The assumption of centered multiC. A simple form of the tapered block multiplier random variables can be defined based on moving average processes. In either one of these two cases. ξj+h ) = {2l(n) − 1 − |h|}/{2l(n) − 1} which linearly decreases as h increases in absolute value. Figure 1 shows the kernel function κ1 and simulated trajectories of Rademacher-type tapered block multiplier random variables.1]d [0. and A3. This is an asymptotically equivalent form of the above tapered block multiplier process: sup BM. There are numerous ways to define tapered block multiplier processes (ξj )j∈Z satisfying the above assumptions. let us define the kernel function by κ2 (h) := max{0. The tapered multiplier process (ξj )j∈Z follows Equation (9).n ξ [0. Exploring Remark 1.e. the resulting sequence (ξj )j∈Z satisfies A1. and A3b.q) random variables with q = 2/{3l(n)} + 1/{3l(n)3 }. 1]d .. Following Bühlmann [6].0 (u) − BM (u) = sup C.5 or Normal random variables √ wj ∼ N (0. For all j ∈ Z and |h| < 2l(n) − 1. Gamma(q. i. Rademacher-type random variables wj √ √ characterized by P (wj = −1/ q) = P (wj = 1/ q) = 0.n plier random variables is abbreviated as A3b in the following.n P ¯ ξ − 1 BM (u) −→ 0. The expectation of ξj is given by 1 E[ξj ] = +2 l(n) l(n) h=1 1 l(n) 7 1− h l(n) = 1. e. The resulting sequence (ξj )j∈Z satisfies A1. where (wj )j∈Z is an independent and identically distributed sequence of Gamma(q. in the following. h∈Z κ1 (h) Note that κ1 is a discrete kernel. C..for all u ∈ [0. The ξj = h=−∞ κ1 (h)wj+h for all j ∈ Z. A2.g.

. . ] = l(n)4 2 1 + 3l(n) 3l(n)3 V ar[w. respectively. . Kernel function κ1 (h) (left) and simulated trajectories of Rademacher-type tapered block multiplier random variables ξ1 . .g. . Given observations X1 . the covariance function Cov(ξj . Figure 2 provides an illustration of the kernel function κ2 as well as simulated trajectories of Rademacher-type tapered block multiplier random variables. ξn from a tapered block multiplier process (ξj )j∈Z satisfying A1. A set of copies of the tight centered Gaussian (s) (s) 8 .35 0.2 0. Notice the smoothing which is driven by the choice of kernel function and the block length l(n). Xn of a strictly stationary process (Xj )j∈Z satisfying the assumptions of Theorem 3 and further satisfying Condition (3). . A2. .2 0.05 0 −8 −6 −4 −2 0 h 2 4 6 8 −2 −3 0 2 3 2 1 0 −1 ξ j 20 40 j 60 80 100 Figure 2: Tapered block multiplier Monte Carlo simulation. This effect can be further explored using more sophisticated kernel functions. Kernel function κ2 (h) (left) and simulated trajectories of Rademacher-type tapered block multiplier random variables ξ1 .25 κ (h) 0.. . . . e. ] = 1 for all j ∈ Z. A2. The resulting sequence (ξj )j∈Z satisfies A1. S samples ξ1 . In this setting. . . Section 6. and A3b. ξ100 (right) with block length l(n) = 3 (solid line) and l(n) = 6 (dashed line). A3. ξ100 (right) with block length l(n) = 3 (solid line) and l(n) = 6 (dashed line). .3 0. . . with bell-shape. estimation of Equation (5) requires three steps: consider s = 1.4 0.1 0. see 6. .05 0 −8 −6 −4 −2 0 h 2 4 6 8 1 3 2 1 0 −1 −2 −3 0 ξ j 20 40 j 60 80 100 Figure 1: Tapered block multiplier Monte Carlo simulation.15 0. direct calculations yield 1 V ar[ξj ] =  +2 l(n)2  l(n) h=1  {l(n) − h}2  V ar[w. . (ξj )j∈Z satisfies A1. For any j ∈ Z and |h| < 2l(n) − 1.0. .25 κ (h) 0. For the variance. . respectively.35 0. this is left for further research. 0. A2.15 0.3 0. ξj+h ) can be described by a parabola centered at zero and opening downward [for details.1 0. . and A3.4 0.2].

A set of copies of the tight centered Gaussian process BC is obtained by

B^{M(s)}_{C,n}(u) = (1/√n) ∑_{j=1}^{n} ( ξj^{(s)}/ξ̄^{(s)} − 1 ) 1{Uj ≤ u} for all u ∈ [0,1]^d,   (10)

s = 1, ..., S, whereas ξ̄^{(s)} denotes the arithmetic mean of ξ1^{(s)}, ..., ξn^{(s)}. The required adjustments for tapered block multiplier random variables satisfying A1, A2, and A3b are easily deduced from Remark 1. Finite differencing yields a nonparametric estimator of the first order partial derivatives DiC(u):

DiC(u) := { Cn(u + h ei) − Cn(u − h ei) } / (2h)   if h ≤ ui ≤ 1 − h,
DiC(u) := { Cn(u + 2h ei) − Cn(u) } / (2h)          if 0 ≤ ui < h,   (11)
DiC(u) := { Cn(u) − Cn(u − 2h ei) } / (2h)          if 1 − h < ui ≤ 1,

for all u ∈ [0,1]^d, where h = 1/√n and ei denotes the ith column of the d × d identity matrix [see 46, 4]. Combining Equations (10) and (11), we obtain

G^{M(s)}_{C,n}(u) = B^{M(s)}_{C,n}(u) − ∑_{i=1}^{d} DiCn(u) B^{M(s)}_{C,n}(u^{(i)}) for all u ∈ [0,1]^d,   (12)

s = 1, ..., S. An application of Segers [46], Proposition 3.2, proves that G^{M(s)}_{C,n}(u) converges weakly to an independent copy of the limiting process GC for all s = 1, ..., S. The results of this section complement those of Bücher and Dette [5] and Bücher [4] on bootstrap approximations for the empirical copula process based on independent and identically distributed observations.

2.3. Finite sample behavior

Having established the asymptotic theory, we evaluate and compare the finite sample properties of the (moving) block bootstrap and the introduced tapered block multiplier technique when estimating the limiting covariance of the empirical copula process in MC simulations. We first simulate independent and identically distributed observations from the bivariate Clayton copula given by

C^{Cl}_θ(u1, u2) = ( u1^{−θ} + u2^{−θ} − 1 )^{−1/θ}, θ > 0,   (13)

for θ ∈ {1, 4}, i.e., Kendall's τ = θ/(θ + 2) ∈ {1/3, 2/3}. As a second family of copulas, we consider the bivariate family of Gumbel copulas

C^{Gu}_θ(u1, u2) = exp( −[ {−ln(u1)}^θ + {−ln(u2)}^θ ]^{1/θ} ), θ ≥ 1,   (14)

for θ ∈ {1.5, 3}, i.e., Kendall's τ = 1 − 1/θ ∈ {1/3, 2/3}. In the case of independent observations, the theoretical covariance Cov(GC(u), GC(v)), calculated at u = v ∈ {(1/3, 1/3), (1/3, 2/3), (2/3, 1/3), (2/3, 2/3)}, serves as a benchmark. In practice, the limiting tight centered Gaussian process GC is instead estimated conditional on a sample X1, ..., Xn. In the case of serially dependent observations, however, the theoretical covariance is unknown after the transformations carried out to generate dependent observations: though any chosen copula is invariant under componentwise application of the (strictly monotonic) inverse standard Normal distribution function, the transformations required to obtain samples from AR(1) or GARCH(1,1) processes may not leave the copula invariant; it is therefore not possible to use the theoretical covariance directly. The following Lemma provides an alternative benchmark based on consistent estimation of the unknown theoretical covariance structure:

Lemma 1. Consider a sample X1, ..., XN of a strictly stationary process (Xj)j∈Z satisfying the assumptions of Theorem 1. Assume that N → ∞ and choose n such that n(N) → ∞ as well as n(N) = o(N). A consistent estimator for Cov(GC(u), GC(v)) is provided by

Cov(GC(u), GC(v)) := Cov( √n {Cn(u) − CN(u)}, √n {Cn(v) − CN(v)} ) for all u, v ∈ [0,1]^d.   (15)

The proof is given in Section 5. For comparison, we apply Lemma 1 in 10^6 MC replications with n = 1,000 and N = 10^6 to provide an approximation of the true covariance.
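Returning to the estimator in Equation (11), the empirical copula and the boundary-corrected finite differences can be sketched as follows; this is a minimal sketch assuming Python with NumPy, with illustrative names and a simple rank-based pseudo-observation convention:

import numpy as np

def empirical_copula(x):
    # pseudo-observations as normalized ranks; C_n(u) = mean of 1{U_j <= u}
    n, d = x.shape
    u_hat = (np.argsort(np.argsort(x, axis=0), axis=0) + 1.0) / n
    def C_n(u):
        return np.mean(np.all(u_hat <= np.asarray(u), axis=1))
    return C_n, u_hat

def partial_derivative(C_n, u, i, n):
    # boundary-corrected finite differences of Equation (11), h = 1/sqrt(n)
    h = 1.0 / np.sqrt(n)
    e = np.zeros(len(u)); e[i] = 1.0
    if u[i] < h:                                   # lower boundary case
        return (C_n(np.clip(u + 2*h*e, 0, 1)) - C_n(u)) / (2*h)
    if u[i] > 1 - h:                               # upper boundary case
        return (C_n(u) - C_n(np.clip(u - 2*h*e, 0, 1))) / (2*h)
    return (C_n(u + h*e) - C_n(u - h*e)) / (2*h)   # interior case

Combining this estimator with a multiplier draw as in Equation (10) then yields one realization of G^{M(s)}_{C,n}(u) in Equation (12).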

In order to assess the performance of the two methods when applied to serially dependent data, two examples of strongly mixing time-series are considered (cf. Example 1). Firstly, the class of AR(1) processes: we assume that a sample of n independent random variates Uj = (Uj,1, Uj,2), j = 1, ..., n, is drawn from one of the aforementioned copulas and define

εj = ( Φ^{−1}(Uj,1), Φ^{−1}(Uj,2) ) for all j = 1, ..., n.   (16)

We obtain a sample X1, ..., Xn of an AR(1) process having Normal residuals by the initialization X1 = ε1 and recursive calculation of Xj = β Xj−1 + εj for all j = 2, ..., n, with εj for all j = 1, ..., n as given in Equation (16). The initially chosen coefficient of the lagged variable is β = 0.5. Secondly, we as well investigate observations from bivariate copula-GARCH(1,1) processes having a specified static copula [see 33]. Based on the MC simulation of εj for all j = 1, ..., n, heteroscedastic standard deviations σj,1 and σj,2 are obtained by initializing

σ0,i^2 = ωi / (1 − αi − βi) for i = 1, 2,   (17)

using the unconditional GARCH(1,1) standard deviation, and by iterative calculation of the general process given in Equation (1) with the parameterizations Xj,1 = σj,1 εj,1, Xj,2 = σj,2 εj,2, and

σj,1^2 = 0.037 + 0.072 εj−1,1^2 + 0.919 σj−1,1^2,   (18)
σj,2^2 = 0.012 + 0.115 εj−1,2^2 + 0.868 σj−1,2^2,   (19)

for all j = 1, ..., n. The considered coefficients are estimated by Jondeau et al. [28] to model volatility of S&P 500 and DAX daily (log-)returns in an empirical application, which shows the practical relevance of this specific parameter choice.
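The two data-generating processes can be sketched as follows; this is a minimal sketch assuming Python with NumPy and SciPy, the Clayton sampler uses the standard Marshall-Olkin representation (not necessarily the sampler used in the paper), and the GARCH coefficient pairing follows Equations (18) and (19) as reconstructed above:

import numpy as np
from scipy.stats import norm

def sample_clayton(n, theta, rng):
    # Marshall-Olkin sampling from a bivariate Clayton copula, theta > 0
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / v[:, None]) ** (-1.0 / theta)

def ar1_sample(u, beta):
    # Equation (16): eps_j = Phi^{-1}(U_j); then X_j = beta * X_{j-1} + eps_j
    eps = norm.ppf(u)
    x = np.empty_like(eps)
    x[0] = eps[0]
    for j in range(1, len(eps)):
        x[j] = beta * x[j - 1] + eps[j]
    return x

def garch11_sample(u, omega, alpha, beta):
    # Equations (17)-(19): X_{j,i} = sigma_{j,i} * eps_{j,i}
    eps = norm.ppf(u)
    sigma2 = omega / (1.0 - alpha - beta)          # unconditional variance, Eq. (17)
    x = np.empty_like(eps)
    for j in range(eps.shape[0]):
        if j > 0:
            sigma2 = omega + alpha * eps[j - 1] ** 2 + beta * sigma2   # Eqs. (18)-(19)
        x[j] = np.sqrt(sigma2) * eps[j]
    return x

rng = np.random.default_rng(1)
u = sample_clayton(200, theta=1.0, rng=rng)        # Kendall's tau = 1/3
x_ar = ar1_sample(u, beta=0.5)
x_garch = garch11_sample(u, omega=np.array([0.037, 0.012]),
                         alpha=np.array([0.072, 0.115]),
                         beta=np.array([0.919, 0.868]))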

8656 3.0061 0.0408 0. M2 M1 B 0.0495 0.7512 2.0425 0. i = 1.7949 2.d.0051 0.1366 1.0432 0.0507 2.0493 0.0338 0.0064 0.6090 2.0072 0.0017 2. 000 Monte Carlo replications.1518 2.0335 0.0336 0.5) 3.0338 0. M2 M1 B Approx.0571 4.0598 3.0398 0.0120 1.6324 1. and AR(1) settings.0631 0.4103 0.8959 0.0533 0.0492 0.9826 4.0042 0.0526 2.5706 1.1991 2.0437 1.0058 0.8286 0.0616 0.2381 4.0261 2.0620 0.0345 0.0458 0. 2/3) Mean MSE (2/3.1150 3.0502 0.0395 0.0064 0.2657 1.0394 0.6583 0.0660 2.0390 0.9355 AR(1) setting with β = 0.0429 0.0355 0.0408 0.4187 2.0472 0.0535 0.0257 0.0069 0.0432 0.9893 0.5796 0.0497 0.4252 3.0402 0. 0.0344 0.0062 0. For each replication.0042 0.0294 0.5441 2.0070 0.2449 1.3824 4.0071 0.0625 0.0336 0.5841 0. block length lM = 3. 2.0329 0.0605 0.3715 0.0042 0.0063 0.3656 0.0484 0.0521 Gumbel (θ = 3) 2.6220 1.0058 0.2583 1.0485 0.9658 2.0594 0.1359 1.0072 0.0699 3.4783 Clayton (θ = 4) Approx.0342 0.6948 3.1538 0. sample size n = 100 and 1.0501 0.0058 0.0602 3.0125 2.0335 0. M2 M1 B True Approx.5 Clayton Approx.0391 0.0409 0.3390 2.0508 0.0259 0.0042 0.0058 0.0338 0. 1/3) Mean MSE (2/3.7410 2.0097 0.0254 0.0398 0.8396 0.0510 0.3456 1.8473 0.7410 2.8739 3.0127 2.0100 0.0486 0.4946 1. 1/3) Mean MSE (1/3.7910 3.6040 0.0470 1. setting Clayton True (θ = 1) Approx.3275 2.0132 0.5) 1.0104 2.0629 0.0528 0.0069 0.0514 0.0335 0.i.5207 0.7239 3.4920 3.1894 11 .8334 1. we perform S = 2. 2/3) Mean MSE i.0617 0.0385 0. M2 M1 B 0. (u1 .6666 0.0735 0. 000 tapered block multiplier (Mi ) repetitions with Normal multiplier random variables.0587 3. M2 M1 B Approx.0599 0.d.0375 0.0344 0.0389 0.7255 Gumbel (θ = 1.0508 0.0487 0. kernel function κi .3851 0.4427 0.0496 0.5836 1.0600 0.7048 3.1375 0.0063 0.0383 0.0307 0.0338 0. and block bootstrap (B) repetitions with block length lB = 5.8803 0.2965 Gumbel (θ = 1.0071 0. M2 M1 B True Approx.0496 0.0990 0.0493 0.0396 0.3673 0.3914 1.0460 0.0422 0.Table 1: Mean and MSE (×104 ) Monte Carlo results.6323 1.0122 1.7885 0. M2 M1 B Clayton (θ = 4) True Approx.0407 0.5869 1.0070 0.0432 0.0347 0.0404 0.9819 1.4409 0.3662 0.0620 0.0128 0.0406 0.0345 0.0338 0.0585 0.0626 0.0340 0. u2 ) (1/3.6662 1.0255 0.0735 0.0433 3.0720 0.5289 0.0050 0.4919 2.0064 0.0409 0.i.7222 0.2134 Gumbel (θ = 3) 1.2273 1.0494 0.6297 2.0524 0. I.0353 0.0761 0.0061 0.0599 (θ = 1) M2 0.2934 2.0346 0.0343 0.3832 0.9785 1.0306 0.0429 0.0048 0.0048 0.0336 0.0643 0.1797 M1 0.5530 2.5305 B 0.0344 0.2025 2.0672 3.0373 0.4128 2.5903 2.0293 0.

we perform S = 2.2685 1.0061 0.3950 0.0430 0.4663 2.5) 1.4806 0.0468 0.0067 0.0485 0.0064 0.1513 1.0044 0.0715 0.8678 0.0073 0.0467 0.0346 0.0318 0.8808 Gumbel (θ = 1.8999 1.8137 0.9800 2.0073 0.0044 0.0074 0.0345 0.0338 0.0428 0.2505 0.0410 0.0536 3.0072 0.0395 0.7448 0.3824 0.0562 0.0493 0.0058 0.7936 1.0433 0.0341 0. 000 Monte Carlo replications. 1/3) Mean MSE (2/3.0489 0.0409 0.0409 0.0408 0.0067 0.0347 0.0693 0.0347 0. setting Clayton True (θ = 1) Approx.0703 0.0546 0.0105 0.9235 2.0508 0. For each replication.0493 0.0329 0.0094 0. (u1 .2579 1.0336 0.0345 0.0503 0.0058 0.0574 0.9107 1.0294 0.0062 0. I.2577 0.4884 0.0402 0.4053 0.0649 0.5811 0.0676 2.2444 0.d. M2 M1 B Approx.0390 0.8719 1. M2 M1 B 0. sample size n = 200 and 1.i.0338 0.7178 0. 1/3) Mean MSE (1/3.0320 0.0367 1.0494 0.8163 1.0593 0. and AR(1) settings.8113 0.0432 0.9932 1.3890 0.5375 0.9594 1.0064 0.0646 0.0426 0.0261 0.3197 0.0432 0.5427 0.0403 0.0487 0.3034 2.0385 0. u2 ) (1/3.5320 AR(1) setting with β = 0.0260 0.0490 0.0486 0.5 Clayton Approx.6024 1. M2 M1 B True Approx.5133 2.0074 0.0388 0.1697 1.0389 0.0293 0.2908 0.0341 0. 2/3) Mean MSE i.0512 0.0345 0.0408 0. 2.0400 0.0639 0.2152 0.8157 0.0388 1.0460 0. block length lM = 4.0522 0.0408 0.2576 0.1701 1.0278 0.0347 0.3917 0.1943 1.2909 1.5846 2.1693 0.2233 0.0061 0.0511 0.0094 2.0336 0.9078 1.9784 2.5) 2.0361 0.0346 0.0638 0. M2 M1 B Clayton (θ = 4) True Approx.2154 0.0472 0.0608 2.8566 2.8245 Gumbel (θ = 3) 0.4832 2.3804 0.2534 2.0344 0.d.0101 0.0492 0.8653 1.0468 0.0519 0.0336 0.5655 Gumbel (θ = 3) 1.0048 0.0577 0.0414 0.0058 0.0413 0.3749 1.i.0102 1.2279 0.0702 2.6206 Clayton (θ = 4) Approx.5553 B 0. i = 1.Table 2: Mean and MSE (×104 ) Monte Carlo results.0073 0.0095 0.8423 0.1425 Gumbel (θ = 1.0048 0.8432 0.1889 0.0987 1. M2 M1 B 0.8692 1.9156 1.0338 0.7155 0.0426 0. 000 tapered block multiplier (Mi ) repetitions with Normal multiplier random variables.8880 1.0405 0.0615 2.0484 0.2008 0.0455 0.0335 0. M2 M1 B True Approx. 0.0099 1.6381 0.0402 0.0335 0.0079 0.0390 0.2590 2.0042 0.0432 0.2027 0.4889 2.0335 0.3213 M1 0.0605 0.0343 0.0254 0.0058 0.2833 2.0338 0. and block bootstrap (B) repetitions with block length lB = 7.0617 0.4108 0.0063 0.0629 0.4901 1. 2/3) Mean MSE (2/3.0359 0. M2 M1 B Approx.3858 0.0638 0.1876 0.0599 (θ = 1) M2 0.1534 12 .0042 0.9802 0.7087 3.8340 0.0455 2.0073 0.0076 0.0042 0.1528 2.6083 2.0042 0.0078 0.0646 0.2811 1.0255 0.2074 1.0435 1.0508 0. kernel function κi .5494 1.0504 0.

0482 0. 000 tapered block multiplier (Mi ) repetitions with Normal multiplier random variables.0051 0.0370 0.5135 1.8792 1.0048 0.0052 0.9021 1. M2 M1 B Approx.0390 1.0517 1.2235 1.0348 0.6083 2.3607 0.0285 0.5) 1.0252 0.0341 0.1973 1. sample size n = 200 and 1.8279 0.0395 0.4303 1.0350 0.1027 0.0507 0.0095 1.9198 Gumbel (θ = 3) 0.0386 0.0390 0.2786 0.0366 0. 0.0356 0.2556 0.0403 0.2979 0.9782 1.0410 0. kernel function κi .0051 0.0545 0.3646 1.2339 1.4084 0.0339 0.3726 0.9937 0.6574 B 0. M2 M1 B Approx.0070 0.4150 1. M2 M1 B 0.0530 0.0409 1. For each replication.0574 0.25 Clayton Approx.8100 1.2273 0.8175 0.6811 0.0373 0.0345 0.0417 0.0958 1.2208 0.0399 0.6429 1.0291 0. 2/3) Mean MSE (2/3.0085 0.0541 0.0447 0.1996 Gumbel (θ = 1.0418 0.5319 M1 0.0766 2.0067 2.3873 0.0358 0.0346 0. and block bootstrap (B) repetitions with block length lB = 7.0058 0.2921 1.2774 1.2587 0. M2 M1 B 0.5154 2.0347 0.0384 0.0503 0.0545 0.8366 0. 2/3) Mean MSE AR(1) setting with β = 0.8797 GARCH(1.2013 1.2871 1.0608 0.9819 1.0073 0.0549 0.0272 0. 1/3) Mean MSE (2/3.0070 0.0072 0.1003 1.1686 1.0340 0.0624 0.0545 0.3858 2.0074 0.0081 0.0394 Clayton (θ = 4) Approx.3052 0.6878 13 .1662 1.0068 0.0511 0.0403 0.4093 2.0516 0.0516 0. block length lM = 4.0053 0.0479 (θ = 1) M2 0. M2 M1 B Approx.5252 0.0340 0.0304 0.0258 0.1765 0.8434 0.0377 0.0373 0.2892 0.0052 0.0067 0.0486 B 0.2459 1.0351 0.0071 0.9133 0.0051 0.0516 0.0156 0.0550 0.0518 0.0415 0.0435 1.0375 0.0591 2.0283 0.2199 0.0413 0.0352 0.0073 0.0048 0.0103 0.2301 0.0600 0.8485 0.1258 1.0350 0.8359 Gumbel (θ = 1.0506 (θ = 1) M2 0.3390 0.0500 1.0339 0.0350 0. 0.0403 0.2217 0.2200 0.0343 0.0320 0.0402 0.0101 0. we perform S = 2.0431 1.8542 0.5256 0.0354 0.0052 0.0567 Clayton (θ = 4) Approx.9582 1.0058 0.4689 1. i = 1.0056 0.0097 1.3579 2.0491 M1 0.5928 0.0362 0.5054 0. M2 M1 B Approx.2764 0.0349 0.0074 0.2888 0.0361 0.2224 1.0495 0.0084 0.2941 0.5) 1.0321 0.3118 Gumbel (θ = 3) 1.0347 0.0515 0.0500 0.0415 0. 2.0480 1.0338 0.5157 1.0608 0.0575 0. (u1 .0052 0.2346 1.3079 0.8848 1.0527 1.0403 0.3234 1.2451 1.0339 0. u2 ) (1/3.1144 1.0259 0.2290 0.8486 0.0959 1. 1) setting Clayton Approx.0357 0.3315 1. 1) settings.0081 0.0055 0.9284 0. 000 Monte Carlo replications.0484 0.Table 3: Mean and MSE (×104 ) Monte Carlo results.0521 1. AR(1) and GARCH(1.2395 0.0360 0.0520 0.1979 0. 1/3) Mean MSE (1/3.

Since the approximation of Cov(GC(u), GC(v)) obtained from Lemma 1 works well, it can be used as a benchmark in the case of serially dependent observations when comparing the tapered block multiplier technique and the block bootstrap method. Both methods are used to estimate the sample covariance of block bootstrap- or tapered block multiplier-based GC(u) and GC(v) based on s = 1, ..., S = 2,000 resampling repetitions for the given set of vectors u and v. The tapered block multiplier technique is assessed based on a sequence (wj)j∈Z of Normal random variables as introduced in Examples 2 and 3; serial dependence in the tapered block multiplier random variables is either generated on the basis of the uniform weighting scheme represented by the kernel function κ1 or the triangular weighting scheme represented by the kernel function κ2. The block length is set to lM(n) = ⌊1.1 n^{1/4}⌋, which satisfies the assumptions of the asymptotic theory; hence lM(100) = 3 and lM(200) = 4. For the block bootstrap, this choice corresponds to lB(n) = ⌊1.25 n^{1/3}⌋, i.e., in the present setting we choose lB(100) = 5 and lB(200) = 7, meaning that both methods yield 2lM-dependent blocks. For detailed discussions on the block length of the block bootstrap, we refer to Künsch [31] as well as Bühlmann and Künsch [7]. We perform 1,000 MC replications and report mean and mean squared error (MSE) of each method.

Tables 1, 2, and 3 show results for samples X1, ..., Xn of size n = 100 and n = 200 based on AR(1) and GARCH(1,1) processes. MC results based on independent and identically distributed samples indicate that the tapered block multiplier outperforms the block bootstrap in mean and MSE of estimation. In the case of serially dependent observations, resampling results indicate that the tapered block multiplier yields more precise results in mean and mean squared error than the block bootstrap (which tends to overestimate) for the considered choices of the temporal dependence structure. Regarding the choice of the kernel function, mean results for κ1 and κ2 are similar, whereas κ2 yields slightly better results in mean squared error. Results based on Normal, Gamma, and Rademacher-type sequences (wj)j∈Z indicate that different distributions used to simulate the multiplier random variables lead to similar results; to ease comparison of the next section to the work of Rémillard and Scaillet [38], we use Normal multiplier random variables in the following. The general applicability of these resampling methods, however, comes at the price of an increased mean squared error [in comparison to the multiplier or bootstrap with block length l = 1 as investigated in 5]. Additional MC simulations are given in Ruppert [42]: if the multiplier or bootstrap methods for independent observations, i.e., lB = lM = 1, are incorrectly applied to dependent observations, then their results do not reflect the changed structure adequately. Hence, we suggest to test serial independence of continuous multivariate time-series as introduced by Kojadinovic and Yan [29] to investigate which method is appropriate.

3. Testing for a constant copula

Considering strongly mixing multivariate processes (Xj)j∈Z, nonparametric tests for a constant copula with specified or unspecified change point candidate(s), consistent against general alternatives, are introduced and assessed in finite samples.

3.1. Specified change point candidate

The specification of a change point candidate can for instance have an economic motivation: Patton [33] investigates a change in parameters of the dependence structure between various exchange rates following the introduction of the euro on the 1st of January 1999.

Focusing on stock returns, multivariate association between major S&P global sector indices before and after the bankruptcy of Lehman Brothers Inc. on 15th of September 2008 is assessed in Gaißer et al. [21] and Ruppert [42]. Whereas these references investigate change points in functionals of the copula, such as a measure of multivariate association, the copula itself is in the focus of this study. This approach permits to analyze changes in the structure of association even if a functional thereof is invariant.

Constancy of the structure of association is initially investigated in the case of a specified change point candidate indexed by ⌊λn⌋ for λ ∈ [0,1]. Suppose we observe a sample X1, ..., Xn of a process (Xj)j∈Z. Assuming the marginal distribution functions to be unknown and constant in each subsample, i.e., Xj,i ∼ F1,i for all j = 1, ..., ⌊λn⌋ and Xj,i ∼ F2,i for all j = ⌊λn⌋ + 1, ..., n, i = 1, ..., d, the hypotheses are

H0: Uj ∼ C1 for all j = 1, ..., n,
H1: Uj ∼ C1 for all j = 1, ..., ⌊λn⌋, and Uj ∼ C2 for all j = ⌊λn⌋ + 1, ..., n,

whereas C1 and C2 are assumed to differ on a non-empty subset of [0,1]^d. To test for a change point in the structure of association after observation ⌊λn⌋ < n, we split the sample into two subsamples X1, ..., X⌊λn⌋ and X⌊λn⌋+1, ..., Xn and (separately) estimate pseudo-observations U1, ..., U⌊λn⌋ and U⌊λn⌋+1, ..., Un with empirical copulas C⌊λn⌋ and Cn−⌊λn⌋, respectively. The test statistic is defined by

Tn(λ) = ⌊λn⌋(n − ⌊λn⌋)/n ∫_{[0,1]^d} { C⌊λn⌋(u) − Cn−⌊λn⌋(u) }^2 du,   (20)

which is based on the Cramér-von Mises functional considered by Rémillard and Scaillet [38]; these authors introduce a test for equality between two copulas which is applicable in the case of no serial dependence, for details we refer to Rémillard and Scaillet [38]. To estimate p-values of the test statistic, we use the tapered block multiplier technique described above. The limiting law of the test statistic depends on the unknown copulas C1 before and C2 after the change point candidate. Weak convergence of Tn(λ) under strong mixing is established in the following:

Theorem 4. Consider observations X1, ..., Xn drawn from a process (Xj)j∈Z satisfying the strong mixing condition αX(r) = O(r^{−a}) for some a > 1. Further assume a specified change point candidate indexed by ⌊λn⌋ for λ ∈ [0,1] such that Uj ∼ C1, Xj,i ∼ F1,i for all j = 1, ..., ⌊λn⌋, i = 1, ..., d, and Uj ∼ C2, Xj,i ∼ F2,i for all j = ⌊λn⌋ + 1, ..., n, i = 1, ..., d. Suppose that C1 and C2 satisfy Condition (3). Under the null hypothesis C1 = C2, the test statistic Tn(λ) converges weakly:

Tn(λ) → T(λ) = ∫_{[0,1]^d} { √(1 − λ) GC1(u) − √λ GC2(u) }^2 du,

whereas GC1 and GC2 represent dependent, identically distributed Gaussian processes. The proof is given in Section 5. Notice that if there exists a subset I of [0,1]^d such that ∫_I { C1(u) − C2(u) }^2 du > 0, then Tn(λ) → ∞ in probability under H1.
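The Cramér-von Mises integral in Equation (20) can be evaluated exactly, since the integral of a product of two indicators over [0,1]^d equals the product of the terms (1 − max of the coordinates). The following is a minimal sketch assuming Python with NumPy; the names are illustrative and the pseudo-observation convention is the simple rank-based one used above:

import numpy as np

def pseudo_observations(x):
    n = x.shape[0]
    return (np.argsort(np.argsort(x, axis=0), axis=0) + 1.0) / n

def cvm_integral(u1, u2):
    # exact value of the integral of (C_{n1}(u) - C_{n2}(u))^2 over [0,1]^d,
    # using int prod_i 1{a_i<=u_i} 1{b_i<=u_i} du = prod_i (1 - max(a_i, b_i))
    def cross(a, b):
        m = 1.0 - np.maximum(a[:, None, :], b[None, :, :])
        return np.mean(np.prod(m, axis=2))
    return cross(u1, u1) - 2.0 * cross(u1, u2) + cross(u2, u2)

def test_statistic(x, lam):
    # T_n(lambda) of Equation (20); pseudo-observations are estimated
    # separately on the two subsamples, as in the paper
    n = x.shape[0]
    k = int(np.floor(lam * n))
    u1 = pseudo_observations(x[:k])
    u2 = pseudo_observations(x[k:])
    return k * (n - k) / n * cvm_integral(u1, u2)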

Corollary 1. Consider observations X1, ..., Xn drawn from a process (Xj)j∈Z. Let ⌊λn⌋ for λ ∈ [0,1] denote a specified change point candidate such that Uj ∼ C1, Xj,i ∼ F1,i for all j = 1, ..., ⌊λn⌋ and Uj ∼ C2, Xj,i ∼ F2,i for all j = ⌊λn⌋ + 1, ..., n, i = 1, ..., d. Suppose that C1 and C2 satisfy Condition (3) and assume that the process satisfies the strong mixing assumptions of Theorem 3. For s = 1, ..., S, let ξ1^(s), ..., ξn^(s) denote samples of a tapered block multiplier process (ξj)j∈Z satisfying A1, A2, A3(b) with block length l(n) → ∞, where l(n) = O(n^{1/2−ǫ}) for 0 < ǫ < 1/2. Define Tn^{M(s)}(λ) based on Equation (12):

Tn^{M(s)}(λ) := ∫_{[0,1]^d} { √((n − ⌊λn⌋)/n) G^{M(s)}_{C1,⌊λn⌋}(u) − √(⌊λn⌋/n) G^{M(s)}_{C2,n−⌊λn⌋}(u) }^2 du,   (21)

whereas ξj, j = 1, ..., ⌊λn⌋ and j = ⌊λn⌋ + 1, ..., n enter the first and the second summand of the Cramér-von Mises functional, respectively. Weak convergence conditional on X1, ..., Xn almost surely holds under the null hypothesis as well as under the alternative:

Tn^{M(s)}(λ) → T^{M(s)}(λ) ξ-almost surely,

whereas T^{M(s)}(λ) is an independent copy of T(λ). The proof is given in Section 5. Notice that the result of Corollary 1 is valid both for tapered block multiplier processes satisfying A3 and A3b, and that dependence between the subsamples is captured since the two sets of tapered block multiplier random variables are dependent by construction. Hence, p-values can be estimated by counting the number of cases in which the simulated test statistic based on the tapered block multiplier method exceeds the observed one. An approximate p-value for Tn(λ) is provided by

(1/S) ∑_{s=1}^{S} 1{ Tn^{M(s)}(λ) > Tn(λ) }.   (22)

The integral involved in Tn(λ) can be calculated explicitly [see 37, Appendix B].

Finite sample properties. Size and power of the test in finite samples are assessed in a simulation study. We apply the MC algorithm introduced in Section 2.3 to generate samples of size n = 100 or n = 200 from bivariate processes (Xj)j∈Z. As a base scenario, serially independent observations are simulated. Moreover, we consider observations from strictly stationary AR(1) processes with autoregressive coefficient β ∈ {0.25, 0.5} and GARCH(1,1) processes which are parameterized as in Equations (18) and (19). The univariate processes are either linked by a Clayton or a Gumbel copula. The change point after observation ⌊λn⌋ = n/2 (if present) only affects the parameter within each family: the copula C1 is parameterized such that Kendall's τ1 = 0.2, the copula C2 such that τ2 ∈ {0.2, 0.3, ..., 0.9}. A set of S = 2,000 Normal tapered block multiplier random variables is simulated, whereas lM(100) = 3 and lM(200) = 4 are chosen for the block length. Results of 1,000 MC replications based on n = 100 and n = 200 observations are shown in Tables 4 and 5, respectively. The test based on the tapered block multiplier technique with kernel function κ2 leads to a rejection quota under the null hypothesis which is close to the chosen theoretical asymptotic size.
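The multiplier side of the procedure can be sketched as follows; this is a minimal sketch assuming Python with NumPy, it shows only the raw multiplier process of Equation (10) and the p-value of Equation (22), while the partial-derivative correction of Equation (12) and the subsample weighting of Equation (21) would be assembled from the earlier sketches:

import numpy as np

def multiplier_process(u_hat, xi, u):
    # B_{C,n}^{M(s)}(u) of Equation (10) at a point u, given pseudo-observations
    # u_hat and one simulated draw xi of tapered block multipliers with mean one
    n = u_hat.shape[0]
    ind = np.all(u_hat <= np.asarray(u), axis=1).astype(float)
    return np.sum((xi / xi.mean() - 1.0) * ind) / np.sqrt(n)

def approximate_p_value(t_obs, t_multiplier):
    # Equation (22): share of multiplier replicates exceeding the observed statistic
    return float(np.mean(np.asarray(t_multiplier) > t_obs))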

994 0.292 0.066 AR(1) setting with β Clayton l = 1 0.086 l = 3 0.236 0.569 0.827 0.000 1. n = 100.218 0.569 0.978 0. For comparison.881 0.000 1.000 1.257 0.598 0. The obtained results indicate that the uniform kernel function κ1 leads to a more conservative testing procedure since the rejection quota is slightly higher. The effects of different types of dependent observations differ largely in the finite sample simulations considered: GARCH(1.999 GARCH(1. setting Clayton l = 1 l=3 Gumbel l = 1 l=3 0.836 0.036 0. results for n = 200 and kernel function κ1 are shown in Ruppert [42]. we also show the results if the test assuming independent observations (i. 1) processes do not show strong impact.573 0.876 0.976 0.124 = 0.983 0.093 0.109 0.541 0.000 0.976 0. Results are based on 1.047 Gumbel l = 1 0.980 0.568 0.592 0.000 1. The power of the test under the alternative hypothesis is best in the case of no serial dependence as is shown in Table 5.077 to the chosen theoretical asymptotic size of 5% in all considered settings. 000 tapered block multiplier repetitions.000 1.050 0.000 1. the observed size of the test in these cases can be more than twice the specified asymptotic size. we observe that the approximation of the asymptotic size based on the tapered block multiplier improves in precision with increased sample size.Table 4: Size and power of the test for a constant copula with a specified change point candidate.254 0.986 0.000 1.089 0. the test based on the multiplier technique with block length l = 1) is erroneously applied to the simulated dependent observations.053 l = 3 0. comparing the results for n = 100 and n = 200.730 0.722 0. If serial dependence is present in the sample then more observations are required to reach the power of the test in the case of serially independent observations.d.000 1.7 0.246 0.110 0.065 AR(1) setting with β Clayton l = 1 0.111 0.594 0.968 0.5 0.549 0.482 0.000 0.000 0.154 0.462 0.105 = 0.985 0. 000 Monte Carlo replications.588 0.078 Gumbel l = 1 0.998 1.i. and asymptotic significance level α = 5%.285 0.000 1.120 0. S = 2. 17 .285 0.5 0.037 l = 3 0.868 0.999 0. τ2 i.999 0.040 0.110 0.172 0.964 0.866 0.877 0.840 0.798 0.999 1.836 0.983 0.100 l = 3 0.928 0.115 0.303 0.295 0. The tapered block multiplier-based test also performs well under the alternative hypothesis and its power increases with the difference τ2 − τ1 between the considered values for Kendall’s τ.612 0.2 0.000 0.25 0. Results indicate that the test overrejects if temporal dependence is not taken into account.051 l = 3 0.578 0.998 1.298 0.977 0.547 0.308 0.106 0. kernel function κ2 ..9 1.816 0. For comparison.907 0.849 0.4 0.114 0. both under the null hypothesis as well as under the alternative.063 0.3 0.000 1.000 0.000 1.000 1.236 0.000 0. whereas AR(1) processes lead to considerable distortions.117 0. Due to the fact that the size of the test is approximated more accurately based on the kernel function κ2 . 1) setting Clayton l = 1 0.109 0.e.8 1. in particular regarding the size of the test.956 0.276 0.999 1.998 0.550 0. its use is recommended.998 0.047 Gumbel l = 1 0.313 0.6 0.000 1.000 1.818 0.978 0.975 0.868 0.315 0.969 0.043 l = 3 0.847 0.

.894 0.000 1.989 0.000 1.164 0.6 0.162 0.i.000 1.000 1.000 1.7 0.490 0.169 0.160 0.000 1. .000 1. .000 1. .2.237 0. Intuitively.g. . testing with unspecified change point candidate(s) is less restrictive but a tradeoff is to be made: the tests introduced in this section neither require conditions on the partial derivatives of the underlying copula(s) nor the specification of change point candidate(s).000 1. d.992 0.499 0.958 0. its start (and end) often are subject to uncertainty: Rodriguez [41] studies changes in dependence structures of stock returns during periods of turmoil considering data framing the East Asian crisis in 1997 as well as the Mexican devaluation in 1994.552 0.4 0. 1) setting Clayton l = 1 0.000 1.541 0.866 0.052 l = 4 0.000 1. .000 1.5 0.999 0.e. 000 Monte Carlo replications. n = 200.748 0.521 0.000 1.057 AR(1) setting with β Clayton l = 1 0.136 = 0.000 0.000 1.050 AR(1) setting with β Clayton l = 1 0. These objects of investigation are well-suited for nonparametric methods which offer the important advantage that their results do not depend on model assumptions.000 1. τ2 i.000 1.174 = 0. S = 2.895 0.000 3.998 1.872 0.873 0.227 0.515 0.000 1.000 1.180 0.903 0.154 0.825 0.523 0.5 0.107 l = 4 0. whereas no change point candidate is given a priori.867 0.000 1.000 1.000 1. with particular emphasis on nonparametric methods.991 0.908 0. e.975 0.000 0.047 l = 4 0.979 0.497 0.503 0.813 0.000 1.000 1.535 0..000 1.396 0.000 1.994 1.149 0.401 0.994 0. 15].d.057 Gumbel l = 1 0.8 0.040 l = 4 0. Xn denote a sample of a process (Xj )j∈Z with strictly stationary univariate 18 .000 1. Xj.25 0. .000 1..957 0.524 0.989 0.000 1. kernel function κ2 .905 0. Even if a triggering event exists.123 0.855 0.046 l = 4 0.i ∼ Fi for all j ∈ Z and i = 1.9 0.000 0.989 0.992 1.137 0.996 1.047 0.993 0.000 1.525 0. . i. Let X1 .877 0.000 1.2 0.000 1.000 1.000 1.3 0.000 GARCH(1.055 0. The motivation for this test setting is that only for a subset of the change points documented in empirical studies. Results are based on 1.172 0.063 0.498 0.000 1.169 0. yet they are based on the assumption of strictly stationary univariate processes.000 1.043 0. and asymptotic significance level α = 5%.992 0. we refer to the monographs by Csörgő and Hórvath [12] and.122 l = 4 0.496 0.770 0.058 Gumbel l = 1 0.000 1.000 1.180 0. a priori hypothesis such as triggering economic events can be found [see.000 1.000 1.056 Gumbel l = 1 0. 000 tapered block multiplier repetitions. setting Clayton l = 1 l=4 Gumbel l = 1 l=4 0. For a general introduction to change point problems of this type.899 0.000 1. The general case: unspecified change point candidate The assumption of a change point candidate at specified location is relaxed in the following. to Brodsky and Darkhovsky [3].988 0.998 1.Table 5: Size and power of the test for a constant copula with a specified change point candidate.059 0.

margins, i.e., Xj,i ∼ Fi for all j ∈ Z and i = 1, ..., d. Under the null hypothesis of a constant copula, we estimate the pseudo-observations U1, ..., Un based on X1, ..., Xn. We establish tests for the null hypothesis of a constant copula versus the alternative that there exist P unspecified change points λ1 < ... < λP ∈ [0,1], formally

H0: Uj ∼ C1 for all j = 1, ..., n,
H1: there exist 0 = λ0 < λ1 < ... < λP < λP+1 = 1 such that Uj ∼ Cp for all j = ⌊λp−1 n⌋ + 1, ..., ⌊λp n⌋ and p = 1, ..., P + 1,

whereas, under the alternative hypothesis, C1, ..., CP+1 are assumed to differ on a non-empty subset of [0,1]^d and, under the null hypothesis, C(u) = Cp(u) for all p = 1, ..., P + 1 and u ∈ [0,1]^d. For any change point candidate ζ ∈ [0,1], we split the pseudo-observations in two subsamples U1, ..., U⌊ζn⌋ and U⌊ζn⌋+1, ..., Un. The following test statistics are based on a comparison of the resulting empirical copulas:

Sn(ζ, u) := ⌊ζn⌋(n − ⌊ζn⌋)/n^{3/2} { C⌊ζn⌋(u) − Cn−⌊ζn⌋(u) } for all u ∈ [0,1]^d.   (23)

The functional used to define the test statistic given in Equation (20) of the previous section is thus multiplied by the weight function ⌊ζn⌋(n − ⌊ζn⌋)/n, which assigns less weight to change point candidates close to the sample's boundaries. Define Zn := {1/n, ..., (n − 1)/n}. We consider three alternative test statistics which pick the most extreme realization within the set Zn of change point candidates:

Tn^1 = max_{ζ∈Zn} ∫_{[0,1]^d} Sn(ζ, u)^2 dCn(u),   (24)
Tn^2 = max_{ζ∈Zn} [ max_{u∈{Uj}, j=1,...,n} Sn(ζ, u) − min_{u∈{Uj}, j=1,...,n} Sn(ζ, u) ],
Tn^3 = max_{ζ∈Zn} max_{u∈{Uj}, j=1,...,n} |Sn(ζ, u)|,   (25)

which are the maximally selected Cramér-von Mises (CvM), Kuiper (K), and Kolmogorov-Smirnov (KS) statistic, respectively. Tn^3 is investigated in Inoue [27] for general multivariate distribution functions under strong mixing conditions as well as in Rémillard [36] with an application to the copula of GARCH residuals. We refer to Hórvath and Shao [26] for an investigation of these statistics in a univariate context based on an independent and identically distributed sample. To derive the asymptotic behavior of Sn(ζ, u) under the null hypothesis, notice the following relation between Sn(ζ, u) and a linear combination of the sequential and the standard empirical process, more precisely a (d + 1)-time parameter tied down empirical copula process [see Section 2.6 in 12, 26, 36]:

Sn(ζ, u) = ⌊ζn⌋(n − ⌊ζn⌋)/n^{3/2} { C⌊ζn⌋(u) − Cn−⌊ζn⌋(u) }
         = (1/√n) [ ∑_{j=1}^{⌊ζn⌋} ( 1{Uj ≤ u} − C(u) ) − (⌊ζn⌋/n) ∑_{j=1}^{n} ( 1{Uj ≤ u} − C(u) ) ],   (26)

for all ζ ∈ [0,1] and u ∈ [0,1]^d. Equation (26) is the pivotal element to derive the asymptotic behavior of Sn(ζ, u) under the null hypothesis, which is given next.
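For illustration, the three maximally selected statistics can be computed directly from the pseudo-observations; this is a minimal sketch assuming Python with NumPy, with illustrative names and the rank-based pseudo-observations used above:

import numpy as np

def max_selected_statistics(u_hat):
    # T_n^1 (CvM), T_n^2 (Kuiper) and T_n^3 (KS) of Equations (24)-(25),
    # with S_n(zeta, u) of Equation (23) evaluated at the pseudo-observations
    n = u_hat.shape[0]
    ind = np.all(u_hat[:, None, :] <= u_hat[None, :, :], axis=2).astype(float)
    cvm = kuiper = ks = 0.0
    for k in range(1, n):                        # zeta = k/n in Z_n
        c1 = ind[:k].mean(axis=0)                # empirical copula of the first k points
        c2 = ind[k:].mean(axis=0)                # empirical copula of the remaining points
        s = k * (n - k) / n ** 1.5 * (c1 - c2)   # S_n(k/n, U_m)
        cvm = max(cvm, np.mean(s ** 2))          # integral with respect to dC_n
        kuiper = max(kuiper, s.max() - s.min())
        ks = max(ks, np.abs(s).max())
    return cvm, kuiper, ks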

Theorem 5. Consider a sample X1, ..., Xn of a strictly stationary process (Xj)j∈Z satisfying the strong mixing condition αX(r) = O(r^{−4−d(1+ǫ)}) for some 0 < ǫ ≤ 1/4, whereas Uj ∼ C and Xj,i ∼ Fi for all j ∈ Z and i = 1, ..., d. Under the null hypothesis of a constant copula, weak convergence of Sn(ζ, u) holds in ℓ∞([0,1]^{d+1}):

Sn(ζ, u) → BC(ζ, u) − ζ BC(1, u),

where BC(ζ, u) denotes a (centered) C-Kiefer process, viz. BC(0, u) = BC(ζ, 0) = BC(ζ, 1) = 0, with covariance structure

Cov( BC(ζ1, u), BC(ζ2, v) ) = min(ζ1, ζ2) ∑_{j∈Z} Cov( 1{U0 ≤ u}, 1{Uj ≤ v} )

for all ζ1, ζ2 ∈ [0,1] and u, v ∈ [0,1]^d. This in particular implies weak convergence of the test statistics Tn^1, Tn^2, and Tn^3:

Tn^1 → sup_{0≤ζ≤1} ∫_{[0,1]^d} { BC(ζ, u) − ζ BC(1, u) }^2 dC(u),
Tn^2 → sup_{0≤ζ≤1} [ sup_{u∈[0,1]^d} { BC(ζ, u) − ζ BC(1, u) } − inf_{u∈[0,1]^d} { BC(ζ, u) − ζ BC(1, u) } ],
Tn^3 → sup_{0≤ζ≤1} sup_{u∈[0,1]^d} | BC(ζ, u) − ζ BC(1, u) |.

The proof is given in Section 5. For independent and identically distributed observations, a detailed investigation of C-Kiefer processes is given in Zari [52]. Under H1, direct calculations yield Tn^i → ∞ for i = 1, 2, 3; hence, the test is consistent against general alternatives. The established limiting distributions of the test statistics under the null hypothesis can be estimated based on an application of the tapered block multiplier technique to Equation (26):

Corollary 2. Consider a sample X1, ..., Xn of a process (Xj)j∈Z which satisfies Xj,i ∼ Fi for all j ∈ Z and i = 1, ..., d, whereas Uj ∼ C. Further assume the process to fulfill the strong mixing assumptions of Theorem 3. For s = 1, ..., S, let ξ1^(s), ..., ξn^(s) denote samples of a tapered block multiplier process (ξj)j∈Z satisfying A1, A2, A3b with block length l(n) → ∞, where l(n) = O(n^{1/2−ǫ}) for 0 < ǫ < 1/2, and define

Sn^{M(s)}(ζ, u) := (1/√n) [ ∑_{j=1}^{⌊ζn⌋} ξj^{(s)} ( 1{Uj ≤ u} − Cn(u) ) − (⌊ζn⌋/n) ∑_{j=1}^{n} ξj^{(s)} ( 1{Uj ≤ u} − Cn(u) ) ]   (27)

for all u ∈ [0,1]^d. Under the null hypothesis of a constant copula, weak convergence conditional on X1, ..., Xn almost surely holds in ℓ∞([0,1]^{d+1}):

Sn^{M(s)}(ζ, u) → BC^M(ζ, u) − ζ BC^M(1, u) ξ-almost surely,

. the change point location is assessed under the assumption that there is at most one change point. the empirical copula of X⌊λn n⌋+1 . . the latter result is only valid if centered multiplier random variables are applied.25 in each dimension. Finite sample properties.whereas the limit is an independent copy of BC (ζ. For simplicity. whereas C1 and C2 are assumed to differ on a non-empty subset of [0. . . ⌊λn⌋. The p-values of the test statistics are estimated as shown in Equation (22). 3. If present. the empirical copula of X1 . The proof is given in 5. . indicating a direction of future research to estimate locations of multiple change points in the dependence structure. . (29) for all u ∈ [0. The latter coincides with C2 if and only if the change point is estimated correctly. ∞ to a tight limit holds. the superindex i is dropped in the following if no explicit reference to the functional is required. Under the alternative hypothesis. 2. then the change point is located after observation ⌊λn⌋ = n/2 and only affects the parameter within the investigated Clayton or Gumbel families: the 21 . . For ease of exposition.. . X⌊λn n⌋ is an estimator of the unknown mixture distribution given by [for an analogous estimator related to general distribution functions. u). u) − ζBC (1. . . n (28) for all u ∈ [0. An estimator for the location of the change point λi . Bai [1] iteratively applies the setting considered above to test for multiple breaks (one at a time). 1]d .1 (u) = 1{λn ≤λ} C1 (u) + 1{λn >λ} λ−1 λC1 (u) + λn − λ C2 (u) . weak convergence conditional on X1 . Given a (not necessarily correct) change-point estimator λn . On the other hand. Xn almost surely in ℓ∞ ([0. . i = 1. 1] such that Uj ∼ C1 for all j = 1. The former shows results for n = 400 observations from serially independent as well as strictly stationary AR(1) processes with autoregressive coefficient β = 0. . is obtained replacing max functions by n arg max functions in Equations (23). (24). 1]d+1 ). . . . if assumption A3b is satisfied. 1]d . Size and power of the tests for a constant copula are shown in Tables 6 and 7. n. i. and (25). . see 9]: Cλn . Consistency of λn follows from consistency of the empirical copula and the fact that the difference of the two mixture distributions given in Equations (28) and (29) is maximal in the case λn = λ. C2 for all j = ⌊λn⌋ + 1. .e. 1]d . The alternative hypothesis H1b of at most one unspecified change point is considered. Remarkably. . In this case. . Xn is an estimator of the unknown mixture distribution given by Cλn . the alternative hypothesis can as well be formulated: H1b : ∃λ ∈ [0. . .2 (u) =1{λn ≤λ} (1 − λn )−1 (λ − λn )C1 (u) + (1 − λ) C2 (u) + 1{λn >λ} C2 (u). An application of the continuous mapping theorem proves consistency of the tapered block multiplier-based tests. The latter coincides with C1 if and only if the change point is estimated correctly.

421 0.073 0.880 0.507 0.716 0.644 0.079 0.642 0.504 0.349 0.913 0.496 0.031 0.098 0.490 0.735 0.406 0.287 0.046 K 0. In the case of serially dependent observations sampled from AR(1) processes with β = 0. The estimated location of the change point.515 0. is close to its theoretical value.997 0.061 0.694 0.493 0.507 0.053 0.548 0.312 AR(1) setting with β = 0. we observe that the tapered block multiplier works similarly well as the standard multiplier (i.078 0.040 0.2.495 0.103 0.043 0.25 Clayton l = 1 CvM 0.503 0.070 0. Results are based on 1.294 0.873 0.9 λn 0.495 0.745 0.084 0.070 0.084 0.506 0.315 0.050 0.506 0.056 0.306 0.246 0.052 0.910 0.056 0.516 0.503 0.055 0.707 0.519 0.089 0.046 0. block length lM (400) = 5.026 0.375 0.506 0.272 0.084 0. additionally.969 1.096 0.6 0.884 0.342 0.510 0.496 0.903 0.913 0.504 0.343 0.905 0.041 0.056 0. 000 tapered block multiplier simulations based on Normal multiplier random variables.881 0.139 KS 0.495 0.185 K 0.504 0. Moreover.167 l = 5 CvM 0.074 0.507 0.503 0.036 0.746 1.251 0. lM = 1): the asymptotic size of the test.390 0.649 0.496 0.9}.080 0.055 0.506 0.457 copula C1 is parameterized such that τ1 = 0.25. 000.994 0.i.083 0.040 KS 0.054 0.939 0.065 0.509 0.102 0. setting Clayton l = 1 0. chosen to be 5%.493 0.062 0.243 0.047 0.074 0.080 0.739 0.547 0.337 0.035 0.2 0.847 0.771 0.697 0.456 0.177 l = 5 CvM 0.295 0.437 0.079 0.282 0.2.055 0.954 0.9 l=5 Gumbel l=1 l=5 CvM K KS CvM K KS CvM K KS CvM K KS 0.228 0.051 0.349 0.627 0.217 0.319 0.061 0. the estimated change point location λn .497 0.487 0. we find that the observed size of the test strongly deviates from its nominal size (chosen to be 5%) if serial dependence 22 . 0.187 K 0.506 0.535 0.443 0.509 0.072 0.901 1.706 0.086 0.519 0.494 0.304 0.504 0.095 0. 000 Monte Carlo replications.276 0.039 KS 0.871 0.093 0.409 0. its standard deviation σ (λn ) as well as its mean squared ˆ error MSE(λn ) are decreasing in the difference τ2 − τ1 .253 0.506 0. λn .512 0.573 0. is well approximated and its power increases in the difference τ2 − τ1 .071 0.495 0.509 0.793 0.536 0.487 0.562 0.9 M SE λn 0.495 0.974 0.518 0.075 0.056 0.e.Table 6: Size and power of tests for a constant copula with unspecified change point can- didate.067 1. and M SE(λn ) × 102 ˆ are reported..052 0.513 0.541 0.116 0. σ (λn ).474 0.949 0. 000 MC repetitions.075 0.519 0.291 0.040 Gumbel l = 1 CvM 0. n = 400.058 0.6.992 0.d. kernel function κ2 .049 0.846 0.494 0.387 0.048 0.073 0.310 0.497 0.991 0.074 0. S = 1.645 0.079 0. 0.509 0. the copula C2 such that τ2 ∈ {0.487 0.566 0.506 0.496 0.6 0. and α = 5%. In the case of independent and identically distributed observations.289 0.9 σ λn ˆ 0.086 0.119 0.983 0. We consider S = 1. size/power τ2 i.496 0.123 KS 0.504 0.050 0.6 0.499 0.372 0.511 0.502 0.6 0.042 K 0.556 0. kernel function κ2 .576 0.489 0.878 0. and report mean as well as mean squared error of 1.

953 1.384 0.293 0.503 0. 23 .510 0.497 0.059 0.043 0.122 0.046 KS 0.055 0.084 0.965 0.497 0. the test is most powerful in many settings.780 0.734 0.052 0. λn .089 0.507 0.122 KS 0.533 0.199 0.428 0.061 0.509 0.681 0. the estimated change point location λn .973 0.463 0.039 0.823 0.505 0.053 0.686 0.062 0.182 l = 6 CvM 0.6 0.505 0.350 0.507 0.029 0.000 0.d.059 0.070 0.497 0.519 0. and M SE(λn ) × 102 ˆ are reported.091 0.6 0.838 0.090 0.997 1.959 1.044 0.273 0.509 0.998 0.506 0.086 0.038 0. the Kuiper-type statistic performs best in mean and in mean squared error.087 0.699 0.9 λn 0.999 0.106 0.000 0.505 0.i.050 0.033 0.049 0.036 0.051 0.585 0. σ (λn ).901 0.065 0.426 0. and Tn .335 0.048 0.998 0.000 0.504 0.9 σ λn ˆ 0.096 0.721 0.503 0.359 0.2 0. size/power τ2 i.508 0.037 0.504 0.243 0.000 0.496 0.033 0.497 0.879 0.505 0.512 0.775 0.998 0.046 0.029 0.060 0. 000 Monte Carlo replications.065 0.142 0.062 Gumbel l = 1 CvM 0.914 0.507 0.502 0.714 0. 1 2 3 Comparing the tests based on statistics Tn .496 0.141 KS 0.076 0. Standard deviation and mean squared error of the estimated location of the change point.000 0.707 0.974 0.520 0.505 0. Moreover.032 0.498 0.724 0.503 0.056 0.169 l = 6 CvM 0.489 0. Likewise.321 0.477 0.076 0.206 0. decrease in the difference τ2 − τ1 .508 0. The test based on the tapered block multiplier with block length lM (400) = 5 yields rejection quotas which approximate the asymptotic size well in all settings considered.756 0.355 0. These results are strengthened in Table 7 which shows results of MC simulations for sample size n = 800 and block length lM = 6 : the tests based on the tapered block multiplier perform well in size. and α = 5%.9 l=6 Gumbel l=1 l=6 CvM K KS CvM K KS CvM K KS CvM K KS 0.493 0.510 0.232 is neglected and the block length lM = 1 is used: its estimates reaching up to 18.039 0.031 0.069 0.156 AR(1) setting with β = 0. Tn . n = 800.063 0.207 K 0.034 0.054 0.135 0.695 0.089 0.083 0.000 0.495 0.494 0.068 0.047 0.079 0.514 0.7%.034 0.032 0.041 0. kernel function κ2 .036 0.372 0. their power improves considerably with the increased amount of observations and the change point location is well captured.411 0. setting Clayton l = 1 0.939 1.000 0.712 0.527 0.805 0.042 0.672 0.488 0.081 0. additionally.966 0.503 0.9 M SE λn 0.623 0.097 0.495 0.490 0.065 K 0.496 0.496 0.999 0. S = 500. we find that the test based on the Kuiper-type statistic performs best: results indicate that the nominal size is well approximated in finite samples.273 0.496 0.803 0.506 0.173 0.510 0. Results are based on 1.584 0.177 0.072 0.045 0.113 0.992 1.106 0.138 0.760 0.Table 7: Size and power of tests for a constant copula with unspecified change point candidate.052 K 0.521 0.841 0.500 0.149 0.218 0.500 0.248 0.057 KS 0.494 0.052 0.6 0.049 0.670 0.6 0.25 Clayton l = 1 CvM 0.000 0.037 0.054 0.037 0.999 1.057 0.184 K 0.506 0.995 1.508 0.506 0.688 0.998 1. with regard to the estimated location of the change point.045 0.508 0.500 0.

P. H. J.The introduced tests for a constant copula offer some connecting factors for further research. Dette. AR(1) processes with higher coefficient for the lagged variable or GARCH(1. (2010): A note on bootstrap approximations for the empirical copula process. [1] Bai.. (1993): Nonparametric Methods in Change Point Problems. B. then the assumption of strictly stationary marginal distribution functions is required and allows to drop continuity assumptions on the partial derivatives of the underlying copula(s). Darkhovsky. thesis. [6] Bühlmann. Ph. B. RuhrUniversität Bochum. Ph. John Wiley & Sons. F. Financ. 1925 – 1932. pp. thesis. 315–352. A. A. 24 . [8] Busetti. Data Anal. (1997): Estimating multiple breaks one at a time. Diss. 295–310.. P-Values of the tests are estimated using a tapered block multiplier technique which is based on serially dependent multiplier random variables. it is of interest to investigate the optimal choice of the block length for the tapered block multiplier-based test with unspecified change point candidate.g. pp.. Kluwer Academic Publishers. H. Though challenging from a computational point of view. the latter is shown to perform better than the block bootstrap in mean and mean squared error when estimating the asymptotic covariance structure of the empirical copula process in various settings. e. 31(3). (2010): When is a copula constant? A test for changing relationships.D. P.. Econom. A.. R. [2] Billingsley. 80(23-24). J.: Inoue [27] investigates nonparametric change point tests for the joint distribution of strongly mixing random vectors and finds that the observed size of the test heavily depends on the choice of the block length l in the resampling procedure. Probab. then the test is consistent whether or not there is a simultaneous change point in marginal distribution function(s).. [5] Bücher. Conclusion Consistent tests for constancy of the copula with specified or unspecified change point candidate are introduced. ETH Zürich. Moreover.. test statistics based on different functionals offer potential for improvements: e. Künsch.D. [3] Brodsky. (1999): Block length selection in the bootstrap for time series. an application of this functional to the case of unspecified change point candidate(s) is of interest as the functional yields very powerful tests. the Cramér-von Mises functional introduced by Rémillard and Scaillet [38] led to strong results in the case of a specified change point candidate. (1993): The blockwise bootstrap in time series and empirical processes. Statist. [4] Bücher. 10354.. Tests are shown to behave well in size and power when applied to various types of dependent observations. 106–131. Harvey. e. 4. 9(1). Lett. pp. For different types of serially dependent observations. Econ. If change point candidate(s) are unspecified. Statist.g. S.g. 13(3). (2011): Statistical Inference for Copulas and Extremes. We observe a trade-off in assumptions required for the testing: if a change point candidate is specified. ETH No. P. pp. Comput. E. Theory. (1995): Probability and Measure. [7] Bühlmann. 1) processes..

pp. W. [20] Gaißer.. C. (2003): Nonparametric estimation of copulas for time series. [19] Fermanian. Finance. [18] Fermanian. pp. Wegkamp. P. 32(1-2). Eur. Scaillet. M. (1988): Nonparametric change-point estimation.. 10(5). Y. K. Finance. Theory.-D.. 188–197. [17] Doukhan. D. J. [16] Doukhan. [23] Giacomini.. Zhang.. 65–87. Series 2: Banking and Financial Studies 07/2009. (1979): La fonction de dépendance empirique et ses propriétés: un test non paramétrique d’indépendance. (2010): A multivariate version of Hoeffding’s Phi-Square.a multivariate nonparametric approach. Embrechts. X. J. 125–154. Chen. [22] Genest. S. [10] Carrasco.. pp. 1837–1845. V. Ann. Wehn. Segers. [14] Dhompongsa. 2571–2586. 27(2). New York.. pp. 16(1). S. 101(10). [13] Deheuvels. Springer-Verlag. pp.. 101(8). Bulletin de la Classe des Sciences (5th series). [21] Gaißer. pp. Yokohama Math. 224–234. M. pp. 619–637. 12(1). Hórvath. pp. (1984): A note on the almost sure approximation of the empirical process of weakly dependent random variables.. 17–39. pp. Lang. Memmel. 15(7). Fermanian. Multivariate Anal. Econom. G. Académie Royale de Belgique. J. Lecture Notes in Statistics. Bernoulli. P.. [15] Dias.. 113– 121.. M. E.. C. [11] Chen. Härdle. (2009): Testing for structural changes in exchange rates dependence beyond linear correlation. Ruppert. pp. P. Process. (2009): Time dynamic and hierarchical dependence modelling of an aggregated portfolio of trading books . (2009): An empirical central limit theorem with applications to copulas under weak dependence.-D. Schmidt. 25 . C. Journal of Econometrics. J. Stat. pp... J. (2002): Mixing and moment properties of various GARCH and stochastic volatility models. Risk. 25–54. Econ. Schmid. J.. Statist. 274–292. pp. Spokoiny. [12] Csörgő. Stoch.. 18(1). pp. Quant. (1994): Mixing: properties and examples. M. (2009): Inhomogeneous dependency modeling with time varying copulae.. 847–860. (1997): Limit theorems in change-point analysis. 5(4). [24] Guegan. John Wiley & Sons. S. A. 10(4). 135(1–2). Multivariate Anal. R. Infer. Fan.-D. O. Bus. X. F. J. (2006): Estimation and model selection of semiparametric copulabased multivariate dynamic models under copula misspecification. (2010): On the covariance of the asymptotic empirical copula process.. Deutsche Bundesbank Discussion Paper.... J. J. (2010): Change analysis of dynamic copula for measuring dependence in multivariate financial data. Stat. E.[9] Carlstein. J. L. J.. 421–430. Radulovic.. 65(6). P. (2004): Weak convergence of empirical copula processes. D.

[33] Patton. Les Cahiers du GERAD. (2006): Measuring financial contagion: A copula approach. I. 54(1). 100(3). Empirical Finance. Scaillet. [27] Inoue. N. (1980): Martingale limit theory and its application. Appl. [40] Rio. [35] Philipp. Y. 1217–1241. Springer. M. pp.. (2007): Financial Modeling Under NonGaussian Distributions. P. J. Stoch. Yan. B. (2000): Théorie Asymptotique des Processus Aléatoires Faiblement Dépendants. 156–187. San Diego. HEC Montréal. (2009): Testing for equality between two copulas.. J. (2002): Applications of copula theory in financial econometrics. C. [34] Patton. pp. (2008): Introduction to empirical processes and semiparametric inference. Statist. [32] Paparoditis. J. pp. pp. G-2006-31. 1–13. (2001): Testing for distributional change in time series. Biometrika. Scaillet. Rockinger.. Q.. (1989): The jackknife and the bootstrap for general stationary observations. C. D. pp. 2(1). S. A. Ann. Gebiete.. O.D. Poon. [38] Rémillard. 117(12).. A.. [39] Rio. Z. pp.D. University of Cologne. Tech. R. H. B. Academic Press. Tech. Springer. thesis. pp. L. Shao. H.. E. Multivariate Anal. Politis. Ann. 26 . University of California.. Statist. thesis. 17(3). M. Springer. Proc. Econ. [36] Rémillard.-H. (2007): Limit theorems for permutations of empirical processes with applications to change point analysis. (2004): On the out-of-sample importance of skewness and asymmetric dependence for asset allocation. 63(2). A. L. Ph. 130–168. (2011): Contributions to Static and Time-Varying Copula-based Modeling of Multivariate Association. Rep.. [30] Kosorok. E. Wahrscheinlichkeitstheorie verw. (2006): Testing for equality between two copulas. [26] Hórvath. 1870–1888. M. O. B. Financ. 587–597. [31] Künsch.. (1980): Almost sure approximation theorems for the multivariate empirical process.. B. J. Heyde. Ph. Econom. Inst. New York. W. 14(3). 88(4).. 17(1). rep. Inst.-M. Poincaré Sect. [41] Rodriguez. R. (2011): Tests of serial independence for continuous multivariate time series based on a Möbius decomposition of the independence empirical copula process. pp.. [29] Kojadinovic. [37] Rémillard. (2010): Goodness-of-fit tests for copulas of multivariate time series. 377–386. 29(4). London. E. J. Pinzur. Theory. E.. J. 1105–1119. pp. Ann. [42] Ruppert. 347–373. 401–423. pp. (1993): Covariance inequalities for strongly mixing processes. [28] Jondeau. (2001): Tapered block bootstrap. Math.[25] Hall.

(2011): Weak convergence of empirical copula processes under nonrestrictive smoothness assumptions. Springer... T.. Tech. 37(1). SFB 823. and denote their joint empirical distribution function by Fn . d. Université Paris 6. . 229–231.. Bernoulli. 415–427. D. D. 209–235. Lemma 1. Publications de l’Institut de Statistique. . Berlin Heidelberg. M. W. [52] Zari. Université Paris 8. Bücher [4]. A. L. Vogel. R. ..6 establishes the following result which is pivotal to conclude the proof: 27 . J. W. Notice that the copula can be obtained by a mapping Υ : Υ : DΥ → ℓ∞ ([0. Tech. R. Gaißer. van Kampen. J. Under the strong mixing condition αX (r) = O(r −a ) for some a > 1. M. J. A. [49] van der Vaart. . 25-26 September 2009. forthcoming... Wellner. O. integral transformations allow to simplify the exposition while obtained asymptotic results remain valid for general continuous marginal distribution functions. Blumentritt. i = 1. [44] Scaillet. Ph. . −1 −1 F → Υ(F ) := F F1 . P. Consider integral transformations Uj. . Springer Verlag. Durante. B. (2010): Contribution à l’étude du processus empirique de copule. . pp. (2005): A Kolmogorov-Smirnov type test for positive quadrant dependence. w.. 5.Proceedings of the Workshop held in Warsaw. [46] Segers. 912–923. Copula theory and its applications . [47] Sklar. The proof is established as in Gaißer et al. Insur.[43] Rüschendorf.. 1]d ) : √ n Fn (u) − F (u) −→ BF (u). TU Dortmund. 36/10. Rio [40] proves weak convergence of the empirical process in the space ℓ∞ ([0. Statist. Dehling. 33(3).D. [45] Schmid. [51] Wied. . [48] van den Goorbergh. T. 16/11. Canad.. . H. Lemma 2. D. (eds. [50] van Kampen. 101–114. C. As exposed in detail by Fermanian et al.. M. (2005): Bivariate option pricing using dynamic copula models. .. pp. Ruppert. Appendix: Proofs of the results Proof of Theorem 1. (2010): A nonparametric constancy test for copulas under mixing conditions. J.. pp. (2010): Copulabased measures of multivariate association. n. Ann. in: Jaworski. Econ. Härdle. Wied. pp. thesis.. Math. W. (2011): A fluctuation test for constant spearman’s rho. [18]. Rychlik. Schmidt. T. Rep.). TU Dortmund. J. [21]...i := Fi (Xj. Fd . F. Werker. A. 4(5). Proof of Theorem 4 while applying a result on Hadamard-differentiability under nonrestrictive smoothness assumptions obtained by Bücher [4]. . 1]d ). (1976): Asymptotic distributions of multivariate rank order statistics. . M. S. SFB 823. Statist. New York. (1996): Weak convergence and empirical processes. Rep. pp.i ) for all j = 1. (1959): Fonctions de répartition à n dimensions et leur marges. Genest.. F.

. . whereas Di F (u).. . Proof of Theorem 2. . an application of the functional delta method for the bootstrap [see. ∞ ) : √ n Υ Fn (u) − (Υ (F )) (u) −→ Υ′ (BF ) (u) . for all u ∈ [0. Bühlmann [6]. an application of the functional delta method yields weak convergence of the transformed empirical process in (ℓ∞ ([0. n and i = 1. d. . H a. . . M ¯ 1Uj ≤u} − Cn (u) −→ BC (u). 49] yields weak convergence of the transformed empirical process conditional on X1 . 1]d ). . 1]d ). recall that the empirical copula as defined in Equation (2) and the map Υ share the same asymptotic behavior. . .s. .Lemma 2. e. ∞ : √ B n Fn (u) − Fn (u) −→ BF (u). . Then Υ is Hadamard-differentiable at C tangentially to D0 := D ∈ C [0. 1]d |D is grounded and D(1. . Based on Hadamard-differentiability of the map Υ as established in the proof of Theorem 1. proof of Theorem 3. . . Xn almost surely in D([0. . .i := Fi (Xj. i = 1. . Xn almost surely in the space D([0. Xn in probability in (ℓ∞ ([0. Hence. . 1) = 0 The derivative at F in D ∈ D0 is represented by d Υ′ (D) (u) = D(u) − F i=1 Di F u(i) D(u(i) ). 1]d ) of càdlàg functions equipped with the uniform metric . . To conclude the proof.g. .1]d Υ Fn (u) − Cn (u) = O 1 n . F H P To conclude the proof. ∞ ) : √ n B Υ Fn (u) − Υ Fn (u) −→ Υ′ (BF ) (u) . notice that sup u∈[0. . . .i ) for all j = 1. . . d is defined on the basis of Equation (4). d and proves the bootstrapped empirical process to converge weakly conditional on X1 . Bühlmann [6]. ∞ :  √ 1 n n n j=1 ξj  a. . . Proof of Theorem 3. Based on integral transformations Uj. 1]d .2 establishes weak convergence of the tapered block empirical process conditional on a sample X1 .i ) for all j = 1. . . Assume that F satisfies Condition (3). . . 1]d ). . n and i = 1. . F w. .i := Fi (Xj.s. .1 considers integral transformations Uj. proof of Theorem 3. . . ξ ξ 28  . .

n tribution functions as well as the copula are unknown.i ≤F −1 (ui )} − ui = sup i ui ∈[0.1]d sup |Cn (u) − C(u)| → 0.8 in Kosorok [30]. 1]d ) × C. . 1]d ). ui . Consider √ √ √ n Cn (u) − CN (u) = n Cn (u) − C(u) − n CN (u) − C(u) √ √ √ n w. . . Functions in D([0. 1. where BM denotes the tapered block multiplier process in the case that the marginal disC. . reside in D([0. ∞ ) (and more generally in any other function space of which D([0. ∞ ) is then equivalent to convergence in (ℓ∞ ([0. Fd Fd (ud ) = n ¯ i n  ξ j=1 i=1 −1 −1 = BM F1 F1 (u1 ) . . Using an argument of Rémillard and Scaillet [38]. . . . A2.n C ℓ∞ ([0.1] Fi Fi−1 (ui ) − ui → 0 w. . A3. The result is derived by an application of Slutsky’s Theorem and consistent (covariance) estimation.n . BM ) −→ (BC . 1]d ) ⊂ ℓ∞ ([0. It follows that sup ui ∈[0. we have in particular u∈[0. We conclude that (BC.1] 1 n n j=1 1{Xj. 1]d ). hence. are bounded in consequence. 1]d . 1]d ). 1.under assumptions A1. . convergence in (D([0. Fd Fd (ud ) C. 1]d ). = n Cn (u) − C(u) − √ N CN (u) − C(u) −→ GC (u). Proof of Lemma 1. . . 1]d ). . which implies D([0. . for all i = 1. 29 . 1]d ) are defined on the closed set [0. N CN (u) − C(u) converges to a tight centered Gaussian process in (ℓ∞ ([0. N √ √ since the factor n/ N tends to zero for n(N ) → ∞ and n(N ) = o(N ). It remains to prove that the limiting behavior of the tapered block multiplier process for independent observations is unchanged if we assume the marginal distribution functions to be unknown. Following Theorem √ 1. 1).n ¯ n  ξ j=1 i=1   1 n d ξ  √ j −1 −1 1{Uj. Following Lemma 7.i ≤u} − Cn (u) BM (u) = n C. Notice that the tapered multiplier empirical copula process as well as its limit are rightcontinuous with left limits. . d as n → ∞. ∞ ). Consider u(i) = (1. . . .n . . consider the following relation between the tapered block multiplier process in the case of known and unknown marginal distribution functions   1 n d ξ  √ j 1{Uj. BM ) in ℓ∞ ([0. . .i ≤Fi (F −1 (ui ))} − Cn F1 F1 (u1 ) . 1]d ). provided that the process is strongly mixing with given rate. 1]d ) is a subset and which further contains the tapered multiplier empirical copula process as well as its limit). . Under the given set of assumptions. .

⌊λn⌋ (u).⌊λn⌋ (u). 1{Uj ≤v} . GC2 . To ease exposition. d Di C2 (v)BC2 v(i) i=1 in (ℓ∞ ([0. indices within the sample X1 . ∞ ). 1]d ).n−⌊λn⌋ (v) := n − ⌊λn⌋ Cn−⌊λn⌋ (v) − C2 (v) −→ GC2 (v) := BC2 (v) − w.2].n−⌊λn⌋ (v) := Cov 1{Ui ≤u} . GC2 .1 and Hall and Heyde [25].⌊λn⌋ (u) := ⌊λn⌋ C⌊λn⌋ (u) − C1 (u) −→ GC1 (u) := BC1 (u) − in (ℓ∞ ([0. (32) i=1 j=−⌊λn⌋+1 n−⌊λn⌋ n−⌊λn⌋ Cov GC2 . ∞ ). whereas Cov GC1 . GC1 .⌊λn⌋ (v) Cov GC1 . GC1 .n−⌊λn⌋ (v) := 1 ⌊λn⌋(n − ⌊λn⌋) i=−⌊λn⌋+1 1 ⌊λn⌋(n − ⌊λn⌋) 1 n − ⌊λn⌋ n−⌊λn⌋ 0 n−⌊λn⌋ Cov 1{Ui ≤u} . . . 1]d ). 1{Uj ≤u} . the asymptotic behavior of each empirical copula process is derived in Theorem 1: GC1 . .n−⌊λn⌋ (u).⌊λn⌋ (u) Cov GC2 . proof of Theorem 2. . hence. GC1 . Direct calculations and an application of the Cauchy condensation test to the generalized harmonic 30 .n−⌊λn⌋ (v) ≤ lim 4 ⌊λn⌋(n − ⌊λn⌋) i=−⌊λn⌋+1 0 n−⌊λn⌋ n→∞ n→∞ j=1 αX (|j − i|) cf. w. GC2 . (31) j=1 0 Cov GC2 . If a joint. . With given assumptions.⌊λn⌋ (v) := 1 ⌊λn⌋ 0 0 Cov 1{Ui ≤u} . 1]d if the limit exists. as: lim Cov GC1 . v ∈ [0.n−⌊λn⌋ (v).n−⌊λn⌋ (v) for all u. d Di C1 (u)BC1 u(i) i=1 Analogously. .⌊λn⌋ (u). i=1 j=1 (33) Convergence of the series in Equations (30) and (33) follows from Theorem 1. 2d-dimensional mean zero limiting Gaussian process (GC1 (u) GC2 (v))⊤ exists.⌊λn⌋ (u) := Cov 1{Ui ≤v} . Theorem A5. 49.⌊λn⌋ (u). we have GC2 . Equations (31) and (32) coincide by symmetry and converge absolutely. 1{Uj ≤v} . Hence.⌊λn⌋ (u).Proof of Theorem 4. GC1 . then a complete characterization can be obtained based on its covariance function [cf. GC2 . Inoue [27]. i=−⌊λn⌋+1 j=−⌊λn⌋+1 (30) Cov GC1 . it remains to prove that the empirical covariance matrix converges to a well-defined limit. Xn are shifted by −⌊λn⌋ to locate the change point candidate at zero.n−⌊λn⌋ (u).n−⌊λn⌋ (v). The covariance matrix is given by   Cov GC1 . GC2 . Appendix A. furthermore the limiting variances are equal (under the null hypothesis) and the double sum in its representation can be simplified [see 39] to reconcile the result of Theorem 1.n−⌊λn⌋ (v)  lim  n→∞ Cov GC2 . 1{Uj ≤v} .

The cross-covariance can be bounded as

  lim_{n→∞} | Cov(G_{C_1,⌊λn⌋}(u), G_{C_2,n−⌊λn⌋}(v)) | ≤ lim_{n→∞} (4/√(⌊λn⌋(n−⌊λn⌋))) Σ_{i=−⌊λn⌋+1}^{0} Σ_{j=1}^{n−⌊λn⌋} α_X(|j − i|),

cf. Hall and Heyde [25]. Direct calculations and an application of the Cauchy condensation test to the generalized harmonic series yield

  lim_{n→∞} (4/√(⌊λn⌋(n−⌊λn⌋))) Σ_{i=−⌊λn⌋+1}^{0} Σ_{j=1}^{n−⌊λn⌋} α_X(|j − i|) < 4 Σ_{i=1}^{∞} α_X(i) < 4 Σ_{i=1}^{∞} i^{−a} < ∞,   (34)

based on strong mixing with polynomial rate α_X(r) = O(r^{−a}) for some a > 1. Absolute convergence of Equations (31) and (32) then follows from the comparison test for infinite series with respect to the series given in Equation (34). Notice that (under the null hypothesis)

  √(⌊λn⌋(n−⌊λn⌋)/n) {C_⌊λn⌋(u) − C_{n−⌊λn⌋}(u)} = √((n−⌊λn⌋)/n) G_{C_1,⌊λn⌋}(u) − √(⌊λn⌋/n) G_{C_2,n−⌊λn⌋}(u)

for all u ∈ [0,1]^d. An application of the continuous mapping theorem and Slutsky's theorem yields

  T_n(λ) ⇝ T(λ) = ∫_{[0,1]^d} { √(1−λ) G_{C_1}(u) − √λ G_{C_2}(u) }^2 du.

Proof of Corollary 1. Assume a sample X_1, …, X_n of (X_j)_{j∈Z} with specified change point candidate ⌊λn⌋ for λ ∈ [0,1] such that U_i ∼ C_1 for all i = 1, …, ⌊λn⌋ and U_i ∼ C_2 for all i = ⌊λn⌋+1, …, n. Based on the pseudo-observations Û_1, …, Û_⌊λn⌋ and Û_{⌊λn⌋+1}, …, Û_n, respectively (which are estimated separately for each subsample), consider

  √(⌊λn⌋/n) √⌊λn⌋ {C_⌊λn⌋(u) − C_1(u)} + √((n−⌊λn⌋)/n) √(n−⌊λn⌋) {C_{n−⌊λn⌋}(u) − C_2(u)}

for all u ∈ [0,1]^d. If C_1 and C_2 satisfy Condition (3), then weak convergence of the latter linear combination follows by an application of Slutsky's theorem and the proof of Theorem 4. Merging the sums involved in the two empirical copulas yields

  (⌊λn⌋/√n) {C_⌊λn⌋(u) − C_1(u)} + ((n−⌊λn⌋)/√n) {C_{n−⌊λn⌋}(u) − C_2(u)}
    = √n { (1/n) Σ_{j=1}^{n} 1{Û_j ≤ u} − (⌊λn⌋/n) C_1(u) − ((n−⌊λn⌋)/n) C_2(u) }   (35)
    = √n { (1/n) Σ_{j=1}^{n} 1{Û_j ≤ u} − C_mix(u) }   (36)

for all u ∈ [0,1]^d where, asymptotically, C_mix(u) := λ C_1(u) + (1−λ) C_2(u). Due to separate estimation of pseudo-observations in each subsample, ties occur with positive probability P_n(Û_{j_1,i} = Û_{j_2,i}) > 0 for j_1 ∈ {1, …, ⌊λn⌋} and j_2 ∈ {⌊λn⌋+1, …, n} in finite samples; asymptotically, we have lim_{n→∞} P_n(Û_{j_1,i} = Û_{j_2,i}) = 0. Equation (36) can be estimated on the basis of the tapered block multiplier approach as given in Theorem 3 and Equation (12). The Corollary follows as Equation (35) reconciles Equation (21) up to a rescaling of deterministic factors and an application of the continuous mapping theorem.
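For orientation, a statistic of the type T_n(λ) whose limit appears above can be approximated numerically as follows. This Python sketch is an illustration only: it integrates the squared difference of the two subsample empirical copulas over the pooled pseudo-observations and uses the scaling ⌊λn⌋(n−⌊λn⌋)/n suggested by the display above; the paper's exact definition of T_n(λ), and in particular its integration measure and normalization, may differ.

import numpy as np

def pseudo_observations(x):
    n = x.shape[0]
    return (np.argsort(np.argsort(x, axis=0), axis=0) + 1) / (n + 1.0)

def empirical_copula(u_hat, points):
    return np.mean(np.all(u_hat[None, :, :] <= points[:, None, :], axis=2), axis=1)

def cvm_statistic(x, lam):
    # Cramer-von Mises-type statistic for the specified change point candidate floor(lam*n)
    n = x.shape[0]
    k = int(np.floor(lam * n))
    u1 = pseudo_observations(x[:k])          # pseudo-observations estimated separately
    u2 = pseudo_observations(x[k:])          # in each subsample, as in the proof above
    pooled = np.vstack([u1, u2])
    diff = empirical_copula(u1, pooled) - empirical_copula(u2, pooled)
    return k * (n - k) / n * np.mean(diff ** 2)

rng = np.random.default_rng(2)
x = rng.standard_normal((400, 2))
print(cvm_statistic(x, lam=0.5))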

Proof of Theorem 5. Under the given assumptions, weak convergence of S_n(ζ, u) can be derived based on the asymptotic behavior of the sequential empirical process, as observed in Equation (26). Consider

  B_{C,n}(ζ, u) := (1/√n) Σ_{j=1}^{⌊ζn⌋} { 1{U_j ≤ u} − C(u) }   for all (ζ, u) ∈ [0,1]^{d+1}.

Philipp and Pinzur [35] prove convergence of B_{C,n}(ζ, u): more precisely, there exists η > 0, depending on the dimension and the strong mixing rate α_X(r) = O(r^{−4−d(1+ǫ)}) for some 0 < ǫ ≤ 1/4, such that

  sup_{0≤ζ≤1} sup_{u∈[0,1]^d} | B_{C,n}(ζ, u) − (1/√n) B_C(nζ, u) | = O({log n}^{−η})

almost surely, where B_C(ζ, u) is a C-Kiefer process with covariance

  Cov(B_C(ζ_1, u), B_C(ζ_2, v)) = min(ζ_1, ζ_2) Σ_{j∈Z} Cov(1{U_0 ≤ u}, 1{U_j ≤ v}).

Notice that, following the work of Dhompongsa [14], an improvement of the considered strong mixing condition is possible; this improvement, however, is not relevant for the time-series applications investigated in this paper. Hence, weak convergence of the sequential empirical process B_{C,n}(ζ, u) in (ℓ^∞([0,1]^{d+1}), ‖·‖_∞) is established. It remains to prove weak convergence in the case of unknown marginal distribution functions. Note that

  G_{C,n}(ζ, u) := (1/√n) Σ_{j=1}^{⌊ζn⌋} { Π_{i=1}^{d} 1{U_{j,i} ≤ F_{n,i}(F_i^{-1}(u_i))} − C(u) }
               = B_{C,n}(ζ, F_{n,1}(F_1^{-1}(u_1)), …, F_{n,d}(F_d^{-1}(u_d)))
                 + (⌊ζn⌋/n) √n { C(F_{n,1}(F_1^{-1}(u_1)), …, F_{n,d}(F_d^{-1}(u_d))) − C(u) }   (37)

for all (ζ, u) ∈ [0,1]^{d+1}. Weak convergence of Equation (37) can be proven by an application of the functional delta method and Slutsky's theorem [see 4, and references therein]: if the partial derivatives D_i C(u) of the copula exist and satisfy Condition (3), then

  G_{C,n}(ζ, u) ⇝ G_C(ζ, u) := B_C(ζ, u) − ζ Σ_{i=1}^{d} D_i C(u) B_C(1, u^{(i)})   (38)

in (ℓ^∞([0,1]^{d+1}), ‖·‖_∞).
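The first equality in Equation (37) is an exact algebraic identity, which can be checked numerically. The following Python sketch does so under simplifying assumptions chosen purely for transparency: uniform margins (so that F_i and F_i^{-1} are identities) and the independence copula as the known C; all names are illustrative and not taken from the paper.

import numpy as np

rng = np.random.default_rng(3)
n, d = 400, 2
U = rng.uniform(size=(n, d))                      # independent components, C(u) = u_1 * u_2
C = lambda v: float(np.prod(v))
F_n = lambda i, t: np.mean(U[:, i] <= t)          # empirical margin i (true margins are uniform)

def B_Cn(zeta, v):
    # sequential empirical process with known margins, evaluated at v
    k = int(np.floor(zeta * n))
    return (np.sum(np.all(U[:k] <= v, axis=1)) - k * C(v)) / np.sqrt(n)

def G_Cn(zeta, u):
    # same sums with the arguments passed through F_{n,i} composed with F_i^{-1}
    k = int(np.floor(zeta * n))
    v = np.array([F_n(i, u[i]) for i in range(d)])
    return (np.sum(np.all(U[:k] <= v, axis=1)) - k * C(u)) / np.sqrt(n)

zeta, u = 0.6, np.array([0.3, 0.7])
k = int(np.floor(zeta * n))
v = np.array([F_n(i, u[i]) for i in range(d)])
lhs = G_Cn(zeta, u)
rhs = B_Cn(zeta, v) + (k / n) * np.sqrt(n) * (C(v) - C(u))
print(abs(lhs - rhs))                             # zero up to floating point rounding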

Weak convergence of the (d+1)-time parameter tied down empirical copula process then follows. Under the null hypothesis and given knowledge of the constant marginal distribution functions F_i, i = 1, …, d, we have S_n(ζ, u) = B_{C,n}(ζ, u) − (⌊ζn⌋/n) B_{C,n}(1, u), whereas in the case of unknown marginal distribution functions S_n(ζ, u) = G_{C,n}(ζ, u) − (⌊ζn⌋/n) G_{C,n}(1, u); in either case,

  S_n(ζ, u) ⇝ B_C(ζ, u) − ζ B_C(1, u)

in (ℓ^∞([0,1]^{d+1}), ‖·‖_∞), since the correction terms involving the partial derivatives cancel. Notice that Equation (38), and in particular continuity of the partial derivatives, is not required for this result [cf. the work of 36 in the context of GARCH residuals]. Convergence of the test statistics T_n (and of their Kuiper- and Kolmogorov-Smirnov-type counterparts) follows by an application of the continuous mapping theorem.

Proof of Corollary 2. The strong mixing assumptions of the former theorem on conditional weak convergence of the tapered block multiplier empirical copula process are relevant for this proof since they imply those of the latter Theorem 5. Weak convergence conditional on X_1, …, X_n almost surely then follows by combining the results of Theorems 3 and 5. Consider a sample X_1, …, X_n of a process (X_j)_{j∈Z}. Under the alternative hypothesis, there exist P change point candidates such that 0 = λ_0 < λ_1 < … < λ_P < λ_{P+1} = 1, whereas U_j ∼ C_p for all j = ⌊λ_{p−1}n⌋ + 1, …, ⌊λ_p n⌋ and p ∈ P := {1, …, P+1}. For any given ζ ∈ [0,1], define ω := 0 for 0 < ζ ≤ λ_1 and ω := arg max_{p∈P} λ_p 1{λ_p < ζ} for λ_1 < ζ ≤ 1, which, by construction, yields the index of the maximal change point strictly dominated by ζ. Consider the following linear combination of copulas:

  L_n(ζ, u) := Σ_{p=1}^{ω} ((⌊λ_p n⌋ − ⌊λ_{p−1} n⌋)/⌊ζn⌋) C_p(u) + ((⌊ζn⌋ − ⌊λ_ω n⌋)/⌊ζn⌋) C_{ω+1}(u)
            → Σ_{p=1}^{ω} ((λ_p − λ_{p−1})/ζ) C_p(u) + ((ζ − λ_ω)/ζ) C_{ω+1}(u) =: L(ζ, u)

for all (ζ, u) ∈ [0,1]^{d+1}.
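The index ω and the mixture L(ζ, u) can be made concrete in a few lines of code. The following Python sketch, with hypothetical function names and user-supplied change points and component copulas, evaluates L(ζ, u) for given λ_1 < … < λ_P and copula functions C_1, …, C_{P+1}.

import numpy as np

def mixture_copula(zeta, u, lambdas, copulas):
    # L(zeta, u) = sum_{p <= omega} (lambda_p - lambda_{p-1})/zeta * C_p(u)
    #              + (zeta - lambda_omega)/zeta * C_{omega+1}(u),
    # where omega indexes the maximal change point strictly below zeta
    lam = np.concatenate(([0.0], np.asarray(lambdas, dtype=float)))
    omega = int(np.sum(lam[1:] < zeta))
    value = 0.0
    for p in range(1, omega + 1):
        value += (lam[p] - lam[p - 1]) / zeta * copulas[p - 1](u)
    value += (zeta - lam[omega]) / zeta * copulas[omega](u)
    return value

indep = lambda u: float(np.prod(u))          # independence copula
upper = lambda u: float(np.min(u))           # comonotonicity copula M
# one change point at lambda_1 = 0.5: C_1 = independence, C_2 = M
print(mixture_copula(0.75, np.array([0.5, 0.5]), [0.5], [indep, upper]))

For ζ ≤ λ_1 the function returns C_1(u) unchanged, matching the convention ω := 0 above.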

Under the alternative hypothesis, the empirical copula of X_1, …, X_⌊ζn⌋ converges weakly in (ℓ^∞([0,1]^{d+1}), ‖·‖_∞); hence, there exists a tight limit in ℓ^∞([0,1]^{d+1}), since

  √⌊ζn⌋ { C_⌊ζn⌋(u) − L_n(ζ, u) }
    = √⌊ζn⌋ { (1/⌊ζn⌋) Σ_{j=1}^{⌊ζn⌋} 1{U_j ≤ u} − Σ_{p=1}^{ω} ((⌊λ_p n⌋ − ⌊λ_{p−1} n⌋)/⌊ζn⌋) C_p(u) − ((⌊ζn⌋ − ⌊λ_ω n⌋)/⌊ζn⌋) C_{ω+1}(u) }
    = (1/√⌊ζn⌋) Σ_{p=1}^{ω} { Σ_{j=⌊λ_{p−1}n⌋+1}^{⌊λ_p n⌋} 1{U_j ≤ u} − (⌊λ_p n⌋ − ⌊λ_{p−1} n⌋) C_p(u) }
      + (1/√⌊ζn⌋) { Σ_{j=⌊λ_ω n⌋+1}^{⌊ζn⌋} 1{U_j ≤ u} − (⌊ζn⌋ − ⌊λ_ω n⌋) C_{ω+1}(u) }
    ⇝ Σ_{p=1}^{ω} √((λ_p − λ_{p−1})/ζ) B_{C_p}(u) + √((ζ − λ_ω)/ζ) B_{C_{ω+1}}(u) =: B_L(ζ, u)/√ζ   (39)

for all (ζ, u) ∈ [0,1]^{d+1}. The latter result follows from Theorem 1 and the results on joint convergence given in the proof of Theorem 4. Considering the tapered multiplier empirical copula process, we have

  S_n^{M,(s)}(ζ, u) = (1/√n) Σ_{j=1}^{⌊ζn⌋} ξ_j^{(s)} { 1{U_j ≤ u} − C_n(u) } − (⌊ζn⌋/n^{3/2}) Σ_{j=1}^{n} ξ_j^{(s)} { 1{U_j ≤ u} − C_n(u) }
                    = (1/√n) Σ_{j=1}^{⌊ζn⌋} ξ_j^{(s)} { 1{U_j ≤ u} − C_⌊ζn⌋(u) } − (1/√n) Σ_{j=1}^{⌊ζn⌋} ξ_j^{(s)} { C_n(u) − C_⌊ζn⌋(u) }
                      − (⌊ζn⌋/n^{3/2}) Σ_{j=1}^{n} ξ_j^{(s)} { 1{U_j ≤ u} − C_n(u) }

for all (ζ, u) ∈ [0,1]^{d+1}. Notice that the tapered multiplier random variables themselves are, by construction, strongly mixing. If additionally centered around zero (i.e., satisfying A3b), then the central limit theorem under strong mixing as given in Billingsley [2], Theorem 27.4, proves weak convergence of the second summand to a Normal limit conditional on X_1, …, X_n almost surely, whereas (applying Equation (39) and Theorem 3) the first and third summands converge weakly conditional on X_1, …, X_n almost surely to limiting processes B_L^M(ζ, u) and ζ B_L^M(1, u), respectively. These are independent copies of B_L(ζ, u) and ζ B_L(1, u), respectively.
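The multiplier replicates S_n^{M,(s)} displayed above translate directly into a Monte Carlo device for p-values. The following Python sketch implements that general logic under several simplifying assumptions of this illustration, namely a Cramér-von Mises functional, evaluation at the pseudo-observations, a flat moving-average construction of the serially dependent multipliers, and an equally spaced ζ-grid; it is not the paper's exact procedure (cf. Theorem 3 and Equation (12) for the latter).

import numpy as np

def pseudo_observations(x):
    n = x.shape[0]
    return (np.argsort(np.argsort(x, axis=0), axis=0) + 1) / (n + 1.0)

def tapered_block_multipliers(n, block_len, rng):
    eps = rng.standard_normal(n + block_len - 1)
    return np.convolve(eps, np.ones(block_len) / np.sqrt(block_len), mode="valid")

def multiplier_pvalue(x, zetas, n_rep=500, block_len=10, seed=0):
    # compares a CvM functional of S_n(zeta, u) with the same functional of the
    # multiplier replicates S_n^{M,(s)}(zeta, u) from the displayed decomposition
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    u_hat = pseudo_observations(x)
    ind = np.all(u_hat[None, :, :] <= u_hat[:, None, :], axis=2).astype(float)
    centred = ind - ind.mean(axis=1, keepdims=True)       # rows: points u, cols: observations j
    ks = [int(np.floor(z * n)) for z in zetas]

    def s_process(weights):                               # length-n weight vector
        return np.stack([(centred[:, :k] @ weights[:k]
                          - k / n * centred @ weights) / np.sqrt(n) for k in ks])

    t_obs = np.mean(s_process(np.ones(n)) ** 2)           # observed CvM functional
    count = 0
    for _ in range(n_rep):
        xi = tapered_block_multipliers(n, block_len, rng)
        count += np.mean(s_process(xi) ** 2) >= t_obs
    return count / n_rep

zetas = np.linspace(0.1, 0.9, 17)
x = np.random.default_rng(4).standard_normal((300, 2))
print(multiplier_pvalue(x, zetas))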