On the overlaps between eigenvectors of correlated random matrices


1 LPTMS, CNRS, Univ. Paris-Sud, Université Paris-Saclay, 91405 Orsay, France, and
3 Léonard de Vinci Pôle Universitaire, Finance Lab, 92916 Paris La Défense, France.

We obtain general, exact formulas for the overlaps between the eigenvectors of large correlated random matrices, with additive or multiplicative noise. These results have potential applications in many different contexts, from quantum thermalisation to high-dimensional statistics. We apply our results to the case of empirical correlation matrices, which allow us to estimate reliably the width of the spectrum of the true underlying correlation matrix, even when the latter is very close to the identity matrix. We illustrate our results on the example of stock return correlations, which clearly reveal a nontrivial structure for the bulk eigenvalues.

Characterizing the statistical properties of large random matrices is of primary importance in many different contexts, from quantum mechanics to high-dimensional data analysis. Correspondingly, Random Matrix Theory (RMT) has established itself as a major discipline, at the frontier between theoretical physics, mathematics, probability theory and applied statistics, with

a somewhat intimidating corpus of knowledge [1]. One of

the most striking applications of RMT concerns quantum

chaos and quantum transport [2], with renewed interest

coming from problems of quantum ergodicity (eigenstate thermalisation) [3, 4], entanglement and dissipation (for recent reviews see [5, 6]). In the context of signal

processing, RMT is of primary importance in the analysis

of high-dimensional statistics [7–9], wireless communication channels [10, 11], etc. Other examples cover the

dynamics of complex systems from random ecologies

[12] to glasses and spin-glasses [13].

Whereas the spectral properties of random matrices

have been investigated at length, the interest has recently

shifted to the statistical properties of their eigenvectors

see e.g. [3, 14] and [15–21] for more recent papers. In

particular, one can ask how the eigenvectors of a sample matrix S resemble those of the population (or "pure") matrix C itself. We recently obtained in [22] explicit formulas for the overlaps between these pure and noisy eigenvectors for a wide class of random matrices, generalizing results obtained for sample covariance matrices S = √C W √C, where W is a Wishart matrix [15], and for matrices of the form S = C + W, where W is a Gaussian random matrix [18, 23, 24]. In the present

paper, we want to generalize these results to the overlaps

between the eigenvectors of two different realisations of

such random matrices, that remain correlated through

their common part C. For example, imagine one measures the sample covariance matrix of the same process,

but on two non-overlapping time intervals, characterized

by two independent realizations of the Wishart noises

W and W′. How close are the corresponding eigenvectors expected to be? We provide exact, explicit formulas

for these overlaps in the high dimensional regime. Perhaps surprisingly, these overlaps can be evaluated from

knowledge of the pure matrix C itself. This leads us to

propose a statistical test based on these overlaps, which allows one to determine whether two realisations of the random matrices S and S′ indeed correspond to the very same underlying "true" matrix C, both in the multiplicative and additive cases defined above. We give a transparent interpretation of our formulas and generalize them to various cases, in particular when the noises are correlated.

Throughout the following, we consider N × N symmetric random matrices and denote by λ₁ > λ₂ > … > λ_N the eigenvalues of S and by u₁, u₂, …, u_N the corresponding eigenvectors. Similarly, we denote by λ′₁ > λ′₂ > … > λ′_N the eigenvalues of S′ and by u′₁, u′₂, …, u′_N the associated eigenvectors. Note that we will index the eigenvectors by their corresponding eigenvalues for convenience. The central object that we focus

on in this study is the asymptotic (N → ∞) scaled, mean squared overlap

$$\Phi(\lambda, \lambda') := N\,\mathbb{E}\left[(u_\lambda \cdot u'_{\lambda'})^2\right]. \qquad (1)$$

In this equation, the expectation E can be interpreted either as an average over different realisations of the randomness or, for fixed randomness, as an average over small slices of eigenvalues of width η = dλ ≫ N⁻¹, such that the result becomes self-averaging in the large N limit. We will
study the asymptotic behavior of (1) using the complex function

$$\psi(z, z') := \frac{1}{N}\left\langle \mathrm{Tr}\left[(z - S)^{-1}(z' - S')^{-1}\right]\right\rangle_P, \qquad (2)$$

where z, z′ ∈ ℂ and ⟨·⟩_P denotes the average with respect to the probability measure associated with S and S′. Indeed, using the spectral decomposition, we obtain the following inversion formula, valid when N → ∞ for any λ, λ′ ∈ supp ρ, where ρ is the spectral density of S (which we assume here to be the same as that of S′, but see the Appendix for more general cases):

$$\Phi(\lambda, \lambda') = \lim_{\eta\to 0^+}\frac{\mathrm{Re}\left[\psi(\lambda - i\eta, \lambda' + i\eta) - \psi(\lambda - i\eta, \lambda' - i\eta)\right]}{2\pi^2\,\rho(\lambda)\,\rho(\lambda')}. \qquad (3)$$

The derivation of the identity (3) can be found in the Appendix.
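As a quick sanity check of definition (1): when S and S′ share no common structure (C ∝ I), their eigenbases are effectively independent rotations and Φ(λ, λ′) ≡ 1, i.e. every squared overlap is of order 1/N. The minimal numerical illustration below is our own sketch (not the authors' code), using only standard numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

def goe(n):
    """Symmetric Gaussian (GOE-like) matrix with an O(1) spectrum."""
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2 * n)

# Two independent noisy matrices with trivial common part C = I
S1 = np.eye(N) + goe(N)
S2 = np.eye(N) + goe(N)
_, U1 = np.linalg.eigh(S1)
_, U2 = np.linalg.eigh(S2)

overlaps = (U1.T @ U2) ** 2   # (u_i . u'_j)^2 for all pairs i, j
phi = N * overlaps            # scaled squared overlaps, cf. Eq. (1)

# Completeness of the eigenbasis forces the row-average of phi to be exactly 1;
# in this independent case each individual entry also concentrates around 1.
print(phi.mean(), phi[0].mean())
```

Note that the row-averages equal 1 exactly (by orthonormality), while the statement Φ ≡ 1 concerns the concentration of individual entries as N → ∞.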

The study of the asymptotic behavior of the function ψ requires controlling the resolvents of S and S′ entry-wise. It was shown recently that one can approximate these (random) resolvents by deterministic equivalent matrices [22, 25, 26]. In the case of correlated Wishart (covariance) matrices S = √C W √C, this allows us to obtain, for N → ∞:

$$\psi(z, z') = \frac{\sigma(z')\,g(z) - \sigma(z)\,g(z')}{z'\sigma(z') - z\sigma(z)}, \qquad (4)$$

where g(z) := N⁻¹ Tr(zI_N − S)⁻¹ is the Stieltjes transform of S, and σ(z) := (1 − q + qzg(z))⁻¹, where q = N/T is e.g. the ratio between the number of observables and the length of the empirical time series. The derivation of (4) is given in the Appendix. In the case of additive Gaussian noise, S = C + W, one obtains instead:

$$\psi_a(z, z') = \frac{g(z) - g(z')}{\zeta(z') - \zeta(z)}, \qquad (5)$$

where ζ(z) := z − σ²g(z), with σ² the variance of the elements of the Gaussian random matrix W. [Here and henceforth, the subscript a denotes the additive case.] Note that from Ref. [26] we expect the results (4) and (5) to hold with fluctuations of order N^(−1/2).
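Eq. (5) is easy to probe numerically, since both sides are observable from a single pair of realisations once N is large. The sketch below is our own illustration (not the authors' code): we take a matrix C with eigenvalues ±1, unit-variance GOE noise, and compare the direct trace of Eq. (2) with the right-hand side of Eq. (5) at two arbitrary complex points:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400

def goe(n):
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2 * n)   # semicircle of radius 2, sigma^2 = 1

C = np.diag(np.where(np.arange(N) < N // 2, 1.0, -1.0))
S, St = C + goe(N), C + goe(N)          # two independent additive realisations

I = np.eye(N)
def g(M, z):                            # empirical Stieltjes transform
    return np.trace(np.linalg.inv(z * I - M)) / N

z, zt = 0.3 + 0.8j, -0.2 - 0.6j
sigma2 = 1.0
zeta = lambda M, w: w - sigma2 * g(M, w)   # zeta(z) = z - sigma^2 g(z)

psi_direct = np.trace(np.linalg.inv(z * I - S) @ np.linalg.inv(zt * I - St)) / N
psi_formula = (g(S, z) - g(St, zt)) / (zeta(St, zt) - zeta(S, z))

print(abs(psi_direct - psi_formula) / abs(psi_direct))
```

With N = 400, the two evaluations agree to within the expected O(N^(−1/2)) fluctuations.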

One notices that both Eq. (4) and Eq. (5) only depend on a priori observable quantities, i.e. they do not involve explicitly the unknown matrix C. Hence, we will obtain an observable expression for (1) using the inversion formula (3). For sample covariance matrices, it is convenient to work with m(z) := 1/(zσ(z)). Defining m₀(λ) = lim_{η→0} m(λ − iη) ≡ m_R(λ) + i m_I(λ), the asymptotic behaviour of Eq. (1) for any λ, λ′ ∈ supp ρ can be explicitly computed as (see Appendix for details)

$$\Phi(\lambda, \lambda') \approx \frac{2q}{\lambda\lambda'}\,\frac{\beta(\lambda, \lambda')}{\alpha(\lambda, \lambda')}, \qquad (6)$$

where the functions α and β are defined in Eq. (7) below.

For the additive case, one finds instead

$$\Phi_a(\lambda, \lambda') \approx \frac{2\sigma^2(\lambda - \lambda')\left[\zeta_R(\lambda) - \zeta_R(\lambda')\right]}{\alpha_a(\lambda, \lambda')}, \qquad (8)$$

where ζ₀(λ) = lim_{η→0} ζ(λ − iη) ≡ ζ_R(λ) + i ζ_I(λ), and α_a(λ, λ′) is obtained from α in Eq. (7) with the substitutions m_R → ζ_R and m_I → ζ_I. Again, (8) does not involve the pure matrix C.

Eqs. (6) and (8) are exact and are the main new results of this work. They can be simplified in the limit λ′ → λ, i.e. when one studies the self-overlaps between eigenvectors associated with the same eigenvalue λ. In this case, we find:

$$\Phi(\lambda, \lambda) = \frac{q\,|m_0(\lambda)|^4\;\partial_\lambda\!\left[m_R(\lambda)/|m_0(\lambda)|^2\right]}{2\lambda^2\, m_I^2(\lambda)\,\left|\partial_\lambda m_0(\lambda)\right|^2}, \qquad (9)$$

and for the additive case:

$$\Phi_a(\lambda, \lambda) = \frac{\sigma^2\;\partial_\lambda \zeta_R(\lambda)}{2\,\zeta_I^2(\lambda)\,\left|\partial_\lambda \zeta_0(\lambda)\right|^2}. \qquad (10)$$
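As a consistency check of Eq. (10), consider the degenerate case C = 0, for which S and S′ are two independent GOE matrices: their eigenbases are then independent and one must find Φ_a ≡ 1. This little verification is ours, not part of the original text. With σ² = 1 the semicircle law gives, for λ inside the spectrum,

```latex
g_0(\lambda) = \frac{\lambda + i\sqrt{4-\lambda^2}}{2}, \qquad
\zeta_0(\lambda) = \lambda - g_0(\lambda) = \frac{\lambda - i\sqrt{4-\lambda^2}}{2},
```

so that ∂_λζ_R = 1/2, ζ_I² = (4 − λ²)/4 and |∂_λζ₀|² = 1/4 + λ²/[4(4 − λ²)] = 1/(4 − λ²). Plugging into Eq. (10):

```latex
\Phi_a(\lambda,\lambda)
= \frac{\partial_\lambda \zeta_R}{2\,\zeta_I^2\,|\partial_\lambda \zeta_0|^2}
= \frac{1/2}{2 \cdot \frac{4-\lambda^2}{4}\cdot\frac{1}{4-\lambda^2}} = 1,
```

as expected for independent eigenbases.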

Before investigating some concrete applications of these formulas, let us give an interesting interpretation of the above formalism. We first introduce the set of eigenvectors v_μ of the pure matrix C, labelled by the eigenvalue μ. We define the overlaps between the u's and the v's as u_λ · v_μ =: √(O(λ, μ)/N) ξ(λ, μ), where the O(λ, μ) were explicitly computed in [22] for a wide class of problems, and the ξ(λ, μ) are random variables of unit variance. Now, one can always decompose the u's as:

$$u_\lambda = \frac{1}{\sqrt N}\int d\mu\,\rho_C(\mu)\,\sqrt{O(\lambda,\mu)}\;\xi(\lambda,\mu)\,v_\mu, \qquad (11)$$

where ρ_C is the spectral density of C. Using the orthonormality of the v's, one then finds:

$$u_\lambda \cdot u'_{\lambda'} = \frac{1}{N}\int d\mu\,\rho_C(\mu)\,\sqrt{O(\lambda,\mu)\,O(\lambda',\mu)}\;\xi(\lambda,\mu)\,\xi'(\lambda',\mu). \qquad (12)$$

If we square this last expression and average over the noise, and make an ergodic hypothesis [3] according to which all signs ξ(λ, μ) are in fact independent from one another, one finds the following, rather intuitive convolution result for the squared overlaps:

$$\Phi(\lambda, \lambda') = \int d\mu\,\rho_C(\mu)\,O(\lambda,\mu)\,O(\lambda',\mu). \qquad (13)$$

$$\beta(\lambda, \lambda') := (\lambda - \lambda')\left[m_R(\lambda)\,|m_0(\lambda')|^2 - m_R(\lambda')\,|m_0(\lambda)|^2\right],$$
$$\alpha(\lambda, \lambda') := \left[(m_R(\lambda) - m_R(\lambda'))^2 + (m_I(\lambda) + m_I(\lambda'))^2\right]\left[(m_R(\lambda) - m_R(\lambda'))^2 + (m_I(\lambda) - m_I(\lambda'))^2\right]. \qquad (7)$$

The result (6) is indeed remarkable, in that the mean squared overlap between the eigenvectors of independent sample covariance matrices can be expressed to leading order in N without the explicit knowledge of the true matrix C. It turns out that expression (13) is completely general and exactly equivalent to Eqs. (6) and (8) in the corresponding cases. However, whereas expression (13) still contains some explicit dependence on the structure of the pure matrix C, this dependence has disappeared in Eqs. (6) and (8).

Now, let us consider the case of sample correlation matrices, first in the theoretical case where the true correlation matrix C is an inverse Wishart matrix of parameter

κ ∈ (0, ∞), which plays the role of 1/q for Wishart matrices (see [22] for details). In this case, the function m(z) can be explicitly computed. This finally leads to:

$$\Phi(\lambda, \lambda) = \frac{(\kappa + 2q)^2}{2q\left[2\kappa(\lambda + \mu) - \lambda^2 + (2q\kappa - 1)\right]}, \qquad (14)$$

with μ := 1 + q + κ⁻¹, and λ within the interval [λ₋, λ₊], where the edges are given by λ± = 1 + κ⁻¹[1 + qκ ± √((2κ + 1)(2qκ + 1))]. An interesting limit corresponds to κ → ∞, where C tends to the identity matrix and the overlaps are expected to become all equal to 1/N. Indeed one finds, for a fixed q:

$$\Phi(\lambda, \lambda') \approx 1 + \frac{(\lambda - 1)(\lambda' - 1)}{2\kappa q^2} + O(\kappa^{-2}), \qquad (15)$$

which is in fact universal in this limit, provided the eigenvalue spectrum of C has a variance given by (2κ)⁻¹ → 0 [29]. This formula is interesting insofar as it allows one to estimate the width of the eigenvalue distribution of C, even when C is close to the identity matrix, i.e. κ ≫ 1.

One could think of directly using information contained in the empirical spectrum, for example the Marčenko–Pastur prediction Tr C⁻¹ = (1 − q) Tr S⁻¹, which in principle allows one to extract the parameter κ through 1 + (2κ)⁻¹ = (1 − q) Tr S⁻¹/N. However, this method is numerically unstable and very imprecise when κ ≫ 1 at finite N (for one thing, the right-hand side can be negative, leading to a negative variance). Our formula based on overlaps avoids these difficulties. As an illustration, we check the validity of Eq. (14) in the inset of Figure 1, in the case κ = 10, N = 500 and q = 0.5. We determine the empirical average overlap as follows: we consider 100 independent realisations of the Wishart noise W. For each pair of samples we compute a smoothed overlap as:

$$\left[(u_i \cdot u'_i)^2\right] := \frac{1}{Z_i}\sum_{j=1}^{N} \frac{(u_i \cdot u'_j)^2}{(\lambda_i - \lambda'_j)^2 + \eta^2}, \qquad (16)$$

with Z_i = Σ_{k=1}^{N} [(λ_i − λ′_k)² + η²]⁻¹ the normalization constant and η the width of the Cauchy kernel, which we choose to be N^(−1/2), in such a way that N⁻¹ ≪ η ≪ 1. We then average this quantity over all pairs for a given value of i to obtain [(u_i · u′_i)²]_e, and plot the resulting quantity as a function of the average eigenvalue position [λ_i]_e, see Figure 1, inset. The agreement with Eq. (14) is excellent, even when the true underlying matrix C is close to the identity matrix.
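The smoothing procedure of Eq. (16) is straightforward to implement. The sketch below is our own minimal numpy version (with hypothetical function and variable names, not the authors' code):

```python
import numpy as np

def smoothed_overlaps(S, Sp, eta=None):
    """Cauchy-kernel smoothed squared overlaps [(u_i . u'_i)^2], cf. Eq. (16)."""
    N = S.shape[0]
    eta = eta if eta is not None else N ** -0.5      # N^{-1} << eta << 1
    lam, U = np.linalg.eigh(S)
    lamp, Up = np.linalg.eigh(Sp)
    sq = (U.T @ Up) ** 2                             # (u_i . u'_j)^2
    kern = 1.0 / ((lam[:, None] - lamp[None, :]) ** 2 + eta ** 2)
    return (kern * sq).sum(axis=1) / kern.sum(axis=1)

# quick check: for S' = S the smoothed self-overlap is dominated by the j = i
# term, so every value lies in (0, 1]
rng = np.random.default_rng(2)
A = rng.standard_normal((100, 100))
S = (A + A.T) / np.sqrt(200)
phi_self = smoothed_overlaps(S, S)
print(phi_self.min(), phi_self.max())
```

The kernel weights are normalized by construction (they sum to Z_i), so the output is a proper local average over eigenvalue slices of width η.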

We now investigate an application to real data, in the case of stock markets, using a bootstrap technique to generate different samples. Specifically, we consider the standardized daily returns of the 450 most liquid US stocks from 2005 to 2012. We randomly split the total period of 1800 days into two non-overlapping subsets of T = 900 days each, compute the smoothed overlaps between the eigenvectors of the empirical correlation matrices S and S′ corresponding to the two subsets exactly as in Eq. (16) above, and average over 100 different bootstrap samples. The results are reported in Figure 1, where we only focus on bulk eigenvalues (the top eigenvectors are governed by nontrivial dynamics, see e.g. [16]). We conclude from Figure 1 that bulk eigenvalues have a nontrivial structure, since the mean squared overlaps strongly depart from the null hypothesis and in fact cannot be accounted for by the universal formula Eq. (15): a parabolic fit is clearly unwarranted. A fit with the full inverse Wishart formula (14) leads to κ ≈ 0.7 [30]. A much better fit could be achieved by choosing another prior for the spectrum of C, for example a power-law as in [8]; we however leave this issue for later investigations. The conclusion of this study is that the cross-sample eigenvector overlaps provide precious insights into the structure of the true eigenvalue spectrum, much beyond the information contained in the empirical spectrum itself.
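A bootstrap replica of the kind used above can be generated in a few lines. This sketch is our own illustration (with hypothetical names), assuming a returns matrix of shape T × N; it draws one random split into two disjoint halves and builds the two correlation matrices:

```python
import numpy as np

def split_correlations(returns, rng):
    """One bootstrap replica: two correlation matrices from disjoint halves."""
    T, N = returns.shape
    perm = rng.permutation(T)
    half1, half2 = perm[: T // 2], perm[T // 2 :]
    def corr(idx):
        X = returns[idx]
        X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize per stock
        return (X.T @ X) / len(idx)
    return corr(half1), corr(half2), half1, half2

rng = np.random.default_rng(3)
fake_returns = rng.standard_normal((1800, 50))       # stand-in for real returns
S, Sp, i1, i2 = split_correlations(fake_returns, rng)
print(S.shape, len(i1), len(i2))
```

Repeating the split with fresh permutations and averaging the smoothed overlaps of Eq. (16) over replicas reproduces the procedure described in the text.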

FIG. 1: Empirical mean squared overlaps N[(u_i · u′_i)²] using daily returns of N = 450 US stocks from 2005 to 2012 and 100 bootstrap replicas of two non-overlapping periods of size T = 900. We compare the empirical results with (i) the null hypothesis 1/N (green dotted line) and (ii) an inverse Wishart prior with q = 0.5 and an estimated parameter κ ≈ 0.7 (red line). Inset: same plot using synthetic data coming from an inverse Wishart prior with parameter κ = 10 and N = 500. The empirical average is taken over 100 realizations of the noise. The agreement with the theoretical formula is excellent.

Our results actually hold in a much broader context (see Appendix for technical details). For instance, the noise parameters can be different for the two realisations: the observation ratio q = N/T can be different for S and S′, or the amplitude σ of the noise can be different in the additive case. Another direction is to consider generalized empirical matrices of the form S = √C OBOᵀ √C, where B is a given matrix independent from C, and O is a random matrix chosen in the orthogonal group O(N) according to the Haar measure (see e.g. [20, 22, 27]). The case above corresponds to the case where OBOᵀ is a Wishart matrix. We find for this general model that (4) still holds with σ(z) replaced by S_B(zg(z) − 1), where S_B is the so-called Voiculescu S-transform of the matrix B [28]. If B = W, then S_B(z) = 1/(1 + qz). Similarly, the additive model can be generalized to S = C + OBOᵀ, with the same definitions for B and O. In that case, the above result (5) again holds with ζ(z) = z − R_B(g(z)), where R_B(z) is now the R-transform of the matrix B [28], which is simply equal to R_B(z) = σ²z when B = W is a Gaussian random matrix, as considered above. Note that in all these generalized cases, the overlap convolution formula (13) remains valid, provided the noises are independent.

Let us finally consider the case where the two noises W and W′ are correlated, while the above calculations referred to independent noises. In the additive case, the trick is to realize that one can always write (in law) W = √ρ W₀ + √(1−ρ) W₁ and W′ = √ρ W₀ + √(1−ρ) W₂, where W₁, W₂ are now independent, as above. Since our formulas do not rely on the structure of the common matrix C, which can absorb the shared component √ρ W₀, the results above still hold, with σ² simply multiplied by 1 − ρ. The corresponding shape of Φ_a(λ, λ) for different values of ρ is shown in Fig. 2. We also provide in the inset a comparison with synthetic data for a fixed ρ = 0.54, σ² = 1. The empirical average is taken over 200 realizations of the noise and again, the agreement is excellent. The multiplicative model

the agreement is excellent. The multiplicative model

is also interesting since it describes the case of correlation matrices

measured

such

that

on overlapping

periods,

S = C [W 0 + W] C and S0 = C W 0 + W 0 C.

This case turns out to be more subtle and is the subject

of ongoing investigations.
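The decomposition in law used above is easy to verify empirically. This sketch (our own illustration) builds two correlated GOE matrices from a shared component and checks that the entry-wise correlation coefficient is close to the target ρ:

```python
import numpy as np

rng = np.random.default_rng(4)
N, rho = 300, 0.54

def goe(n):
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2 * n)

W0, W1, W2 = goe(N), goe(N), goe(N)
W  = np.sqrt(rho) * W0 + np.sqrt(1 - rho) * W1   # correlated pair; each factor
Wp = np.sqrt(rho) * W0 + np.sqrt(1 - rho) * W2   # has the same law as W0

iu = np.triu_indices(N, k=1)                      # independent upper entries
corr = np.corrcoef(W[iu], Wp[iu])[0, 1]
print(corr)
```

Since ρ + (1 − ρ) = 1, each of W and W′ is again a GOE of the original variance, while their shared part √ρ W₀ carries the cross-correlation ρ.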

In summary, we have provided general, exact formulas for the overlaps between the eigenvectors of large

correlated random matrices, with additive or multiplicative noise. Remarkably, these results do not require the

knowledge of the underlying pure matrix and have a

broad range of applications in different contexts. We

showed that the cross-sample eigenvector overlaps provide unprecedented information about the structure of

the true eigenvalue spectrum, much beyond that contained in the empirical spectrum itself. For example, the

width of the bulk of the spectrum of the true underlying

correlation matrix can be reliably estimated, even when

the latter is very close to the identity matrix. We have illustrated our results on the example of stock return correlations, which clearly reveal a nontrivial structure for the bulk eigenvalues.

Acknowledgments: We want to thank R. Allez, R.

Benichou, R. Chicheportiche, B. Collins, A. Rej, E. Serie

and D. Ullmo for very useful discussions.

FIG. 2: Theoretical prediction Φ_a(λ, λ′) for a fixed λ′ ≈ 0.95 as a function of λ, for N = 500, σ² = 1, and for different values of ρ. The population matrix C is given by a (white) Wishart matrix with parameter T = 2N. Inset: comparison of the theoretical prediction Φ_a(λ, λ′ ≈ 0.95) for a fixed ρ = 0.54 with synthetic data. The empirical averages (blue points) are obtained from 200 independent realizations of S.

[1] G. Akemann, J. Baik, and P. Di Francesco (eds.), The Oxford handbook of random matrix theory (Oxford University Press, 2011).

[2] C. W. Beenakker, Reviews of modern physics 69, 731

(1997).

[3] J. M. Deutsch, Physical Review A 43, 2046 (1991).

[4] G. Ithier and F. Benaych-Georges, arXiv preprint

arXiv:1510.04352 (2015).

[5] R. Nandkishore and D. A. Huse, Annual Review of Condensed Matter Physics 6, 15 (2015).

[6] J. Eisert, M. Friesdorf, and C. Gogolin, Nature Physics

11, 124 (2015).

[7] I. M. Johnstone, in Proceedings of the International Congress of Mathematicians I (2007), pp. 307–333.

[8] J.-P. Bouchaud and M. Potters, in The Oxford handbook of Random Matrix Theory (Oxford University Press,

2011).

[9] D. Paul and A. Aue, Journal of Statistical Planning and

Inference 150, 1 (2014).

[10] A. M. Tulino and S. Verdú, Random matrix theory and wireless communications, vol. 1 (Now Publishers Inc, 2004).

[11] R. Couillet, M. Debbah, et al., Random matrix methods for wireless communications (Cambridge University

Press Cambridge, MA, 2011).

[12] R. M. May, Nature 238, 413 (1972).

[13] Y. V. Fyodorov, Physical review letters 92, 240601

(2004).

[14] M. Wilkinson and P. N. Walker, Journal of Physics A:

Mathematical and General 28, 6143 (1995).

[15] O. Ledoit and S. Péché, Probability Theory and Related

Fields 151, 233 (2011).

[16] R. Allez and J.-P. Bouchaud, Physical Review E 86, 046202 (2012).

[17] P. Bourgade, L. Erdős, H.-T. Yau, and J. Yin, Communications on Pure and Applied Mathematics (2015).

[18] R. Allez and J.-P. Bouchaud, Random Matrices: Theory

and Applications 3, 1450010 (2014).

[19] A. Bloemendal, A. Knowles, H.-T. Yau, and J. Yin, Probability Theory and Related Fields pp. 194 (2015).

[20] R. Couillet and F. Benaych-Georges, arXiv preprint

arXiv:1510.03547 (2015).

[21] R. Monasson and D. Villamaina, EPL (Europhysics Letters) 112, 50001 (2015).

[22] J. Bun, R. Allez, J.-P. Bouchaud, and M. Potters, arXiv

preprint arXiv:1502.06736 (2015).

[23] P. Biane, Quantum Probability Communications 11, 55 (2003).

[24] R. Allez, J. Bun, and J.-P. Bouchaud, arXiv preprint

arXiv:1412.7108 (2014).

[25] Z. Burda, A. Görlich, A. Jarosz, and J. Jurkiewicz, Physica A: Statistical Mechanics and its Applications 343, 295 (2004).

[26] A. Knowles and J. Yin, arXiv preprint arXiv:1410.3516

(2014).

[27] Z. Burda, J. Jurkiewicz, and B. Waclaw, Physical Review

E 71, 026111 (2005).

[28] D. Voiculescu, Inventiones mathematicae 104, 201 (1991).

[29] The analysis for the additive case leads to a very similar result. More precisely, taking C = I_N + W₀ with W₀ a GOE (independent from W and W′) of variance σ₀² → 0, one finds that Φ_a(λ, λ′) is given by exactly the same formula (15) with the substitution 2κq² → σ⁴/σ₀².

[30] The calibration of κ is performed by least squares using Eq. (14), where we removed the 10 largest eigenvalues.

We collect the derivations of the different results in the following sections. For the sake of clarity, we rename S′, its eigenvalues and its eigenvectors respectively as S̃, [λ̃_i]_{i∈[[1,N]]} and [ũ_i]_{i∈[[1,N]]}, so as not to confuse primes with derivatives. Note that our calculations rely on some physical arguments that can certainly be made rigorous in a mathematical sense. Precise estimates of the error terms for sample covariance matrices or the deformed GOE may be obtained using the results of [26], for instance. Throughout the following, the symbol ≈ denotes the limit N → ∞.

The derivation of the inversion formula (3) is pretty straightforward. We start from Eq. (2), which we rewrite using the spectral decomposition of S and S̃ as

$$\psi(z, \tilde z) = \left\langle \frac{1}{N}\sum_{i,j=1}^{N} \frac{1}{z - \lambda_i}\,\frac{1}{\tilde z - \tilde\lambda_j}\,(u_i \cdot \tilde u_j)^2 \right\rangle_P, \qquad (17)$$

where P denotes the probability density function of the noise parts of S and S̃. In the large N limit, the eigenvalues [λ_i]_{i∈[[1,N]]} and [λ̃_i]_{i∈[[1,N]]} stick to their classical locations, i.e. they are smoothly allocated with respect to the quantiles of the spectral density. Roughly speaking, the sample eigenvalues become deterministic in the large N limit. Hence, we obtain after taking the continuous limit

$$\psi(z, \tilde z) \approx \iint \frac{\rho(\lambda)\,\tilde\rho(\tilde\lambda)\,\Phi(\lambda, \tilde\lambda)}{(z - \lambda)(\tilde z - \tilde\lambda)}\, d\lambda\, d\tilde\lambda, \qquad (18)$$

where ρ and ρ̃ are respectively the spectral densities of S and S̃, and Φ is defined in (1) above. Then, it suffices to compute

$$\psi(x - i\eta, y - i\eta) = \iint \rho(\lambda)\,\tilde\rho(\tilde\lambda)\,\Phi(\lambda, \tilde\lambda)\,\frac{(x - \lambda)(y - \tilde\lambda) - \eta^2 + i\eta\left[(y - \tilde\lambda) + (x - \lambda)\right]}{\left[(x-\lambda)^2 + \eta^2\right]\left[(y - \tilde\lambda)^2 + \eta^2\right]}\, d\lambda\, d\tilde\lambda, \qquad (19)$$

and similarly for ψ(x − iη, y + iη), whence

$$\mathrm{Re}\left[\psi(x - i\eta, y + i\eta) - \psi(x - i\eta, y - i\eta)\right] = 2\eta^2 \iint \frac{\rho(\lambda)\,\tilde\rho(\tilde\lambda)\,\Phi(\lambda, \tilde\lambda)}{\left[(x-\lambda)^2 + \eta^2\right]\left[(y - \tilde\lambda)^2 + \eta^2\right]}\, d\lambda\, d\tilde\lambda. \qquad (20)$$

Finally, the inversion formula follows from the Sokhotski–Plemelj identity:

$$\lim_{\eta\to 0}\,\mathrm{Re}\left[\psi(x - i\eta, y + i\eta) - \psi(x - i\eta, y - i\eta)\right] \approx 2\pi^2\,\rho(x)\,\tilde\rho(y)\,\Phi(x, y). \qquad (21)$$

Note that the derivation holds for any models of S and S̃ whose spectral densities converge to a deterministic limit.

Derivation of (4). We now compute the asymptotic expression of the function ψ for sample covariance matrices. Note that we shall omit the identity matrix I_N in our notations throughout the following. We recall that S = √C W √C and S̃ = √C W̃ √C, where W and W̃ are two independent Wishart matrices. Here, we allow these two matrices to be parametrized by possibly different dimensional ratios q and q̃. By independence, we have

$$\psi(z, \tilde z) = \frac{1}{N}\sum_{k,l}\left\langle (z - S)^{-1}_{kl}\right\rangle_P \left\langle (\tilde z - \tilde S)^{-1}_{kl}\right\rangle_P. \qquad (22)$$

Then, we use deterministic estimates like those of [22, 25, 26] to find, for any z = x − iη at the global scale,

$$(z - S)^{-1}_{kl} \approx \left[\frac{\sigma(z)}{z\sigma(z) - C}\right]_{kl} + O(N^{-1/2}), \qquad (23)$$

where we recall that σ(z) = 1/(1 − q + qzg(z)). Such an estimate holds as well for S̃, with Stieltjes transform g̃. Hence, defining σ̃(z̃) = 1/(1 − q̃ + q̃z̃g̃(z̃)), we can rewrite Eq. (22) as

$$\psi(z, \tilde z) \approx \frac{1}{N}\,\mathrm{Tr}\left[\sigma(z)\left(z\sigma(z) - C\right)^{-1}\tilde\sigma(\tilde z)\left(\tilde z\tilde\sigma(\tilde z) - C\right)^{-1}\right]. \qquad (24)$$

We then use the resolvent identity

$$\left(z\sigma(z) - C\right)^{-1}\left(\tilde z\tilde\sigma(\tilde z) - C\right)^{-1} = \frac{1}{\tilde z\tilde\sigma(\tilde z) - z\sigma(z)}\left[\left(z\sigma(z) - C\right)^{-1} - \left(\tilde z\tilde\sigma(\tilde z) - C\right)^{-1}\right], \qquad (25)$$
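Identity (25) is just the scalar resolvent identity, valid here because zσ(z) and z̃σ̃(z̃) are scalars (times the identity) while the matrix C is shared by both factors. A short numerical check (our own, with arbitrary stand-in values for the two scalars):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 50
B = rng.standard_normal((N, N))
C = (B + B.T) / 2                      # any symmetric matrix plays the role of C
a, b = 0.7 + 1.1j, -0.4 - 0.9j         # stand-ins for z*sigma(z) and zt*sigmat(zt)

I = np.eye(N)
Ra = np.linalg.inv(a * I - C)
Rb = np.linalg.inv(b * I - C)
lhs = Ra @ Rb                          # product of the two resolvents
rhs = (Ra - Rb) / (b - a)              # right-hand side of Eq. (25)
print(np.abs(lhs - rhs).max())
```

The two sides agree to machine precision, since the identity is exact whenever the two resolvents commute.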

to obtain

$$\psi(z, \tilde z) \approx \frac{\sigma(z)\,\tilde\sigma(\tilde z)}{\tilde z\tilde\sigma(\tilde z) - z\sigma(z)}\,\frac{1}{N}\,\mathrm{Tr}\left[\left(z\sigma(z) - C\right)^{-1} - \left(\tilde z\tilde\sigma(\tilde z) - C\right)^{-1}\right], \qquad (26)$$

that is,

$$\psi(z, \tilde z) \approx \frac{1}{\tilde z\tilde\sigma(\tilde z) - z\sigma(z)}\left[\tilde\sigma(\tilde z)\,\frac{\sigma(z)}{N}\mathrm{Tr}\left(z\sigma(z) - C\right)^{-1} - \sigma(z)\,\frac{\tilde\sigma(\tilde z)}{N}\mathrm{Tr}\left(\tilde z\tilde\sigma(\tilde z) - C\right)^{-1}\right]. \qquad (27)$$

Recognizing

$$g(z) \approx \frac{\sigma(z)}{N}\,\mathrm{Tr}\left(z\sigma(z) - C\right)^{-1} \qquad\text{and}\qquad \tilde g(\tilde z) \approx \frac{\tilde\sigma(\tilde z)}{N}\,\mathrm{Tr}\left(\tilde z\tilde\sigma(\tilde z) - C\right)^{-1}, \qquad (28)$$

we finally obtain

$$\psi(z, \tilde z) \approx \frac{\tilde\sigma(\tilde z)\,g(z) - \sigma(z)\,\tilde g(\tilde z)}{\tilde z\tilde\sigma(\tilde z) - z\sigma(z)}, \qquad (29)$$

which reduces to Eq. (4) when q = q̃.


Derivation of (6). Using the definition of the complex functions m and m̃ above, we find that σ(z) = 1/(z m(z)) and σ̃(z̃) = 1/(z̃ m̃(z̃)). Note that we shall omit the arguments z and z̃ in m and m̃ in the following when there is no confusion. First, we rewrite (29) as

$$\psi(z, \tilde z) \approx \frac{1}{1/\tilde m - 1/m}\left[\frac{g}{\tilde z\tilde m} - \frac{\tilde g}{z m}\right], \qquad (30)$$

which is equivalent to

$$\psi(z, \tilde z) \approx \frac{1}{z\tilde z}\,\frac{z g m - \tilde z\tilde g\tilde m}{m - \tilde m}. \qquad (31)$$

Next, we use the Marčenko–Pastur relation zg(z) = [zm(z) − 1 + q]/q (and its analogue for m̃), so that we have

$$\psi(z, \tilde z) \approx \frac{1}{z\tilde z\,(m - \tilde m)}\left[\frac{zm^2 - (1-q)m}{q} - \frac{\tilde z\tilde m^2 - (1-\tilde q)\tilde m}{\tilde q}\right]. \qquad (32)$$

To use the inversion formula Eq. (3), we need to evaluate ψ(λ − iη, λ̃ + iη) − ψ(λ − iη, λ̃ − iη). Using m₀(λ) = lim_{η→0} m(λ − iη), and noting that m(λ̃ + iη) tends to the complex conjugate \bar{m̃}₀(λ̃), we find from (32):

$$\lim_{\eta\to 0}\left[\psi(\lambda - i\eta, \tilde\lambda + i\eta) - \psi(\lambda - i\eta, \tilde\lambda - i\eta)\right] = \frac{1}{\lambda\tilde\lambda}\left[\frac{A(m_0) - \tilde A(\bar{\tilde m}_0)}{m_0 - \bar{\tilde m}_0} - \frac{A(m_0) - \tilde A(\tilde m_0)}{m_0 - \tilde m_0}\right] + i\,\mathrm{Im}, \qquad (33)$$

where A(m) := [λm² − (1 − q)m]/q and Ã(m̃) := [λ̃m̃² − (1 − q̃)m̃]/q̃, and where we omit the explicit expression of the imaginary part since it is not needed (see Eq. (3)). Then, using the representation m₀ = m_R + i m_I and m̃₀ = m̃_R + i m̃_I, one finds for the common denominator

$$(\tilde m_0 - m_0)(\bar{\tilde m}_0 - m_0) = (m_R - \tilde m_R)^2 - (m_I^2 - \tilde m_I^2) + 2i\, m_I (m_R - \tilde m_R), \qquad (34\text{--}35)$$

so that

$$\left|(\tilde m_0 - m_0)(\bar{\tilde m}_0 - m_0)\right|^2 = \left[(m_R - \tilde m_R)^2 + (m_I + \tilde m_I)^2\right]\left[(m_R - \tilde m_R)^2 + (m_I - \tilde m_I)^2\right], \qquad (36)$$

which is exactly the denominator in (6). For the numerator, elementary complex analysis in Eq. (33) yields the two real combinations

$$4\, m_I \tilde m_I \left[m_R |\tilde m_0|^2 - \tilde m_R |m_0|^2\right] \qquad (37)$$

and

$$2\, m_I \tilde m_I \left[|\tilde m_0|^2 - |m_0|^2\right]. \qquad (38)$$

By regrouping these last equations with the prefactors in (33), and recalling that m_I(λ) = qπρ(λ) and m̃_I(λ̃) = q̃πρ̃(λ̃), we obtain by using the inversion formula (3) the general result:

$$\Phi_{q,\tilde q}(\lambda, \tilde\lambda) = \frac{2(\tilde q\lambda - q\tilde\lambda)\left[m_R|\tilde m_0|^2 - \tilde m_R|m_0|^2\right] + (q - \tilde q)\left[|\tilde m_0|^2 - |m_0|^2\right]}{\lambda\tilde\lambda\left[(m_R - \tilde m_R)^2 + (m_I + \tilde m_I)^2\right]\left[(m_R - \tilde m_R)^2 + (m_I - \tilde m_I)^2\right]}. \qquad (39)$$


One easily retrieves Eq. (6) by taking q̃ = q. Another interesting case is q̃ = 0, for which we expect S̃ → C; this is precisely the framework of [15]. In that case, we have m̃_R(μ) = 1/μ and m̃_I = 0. Hence, we deduce from (39) that

$$\Phi_{q,\tilde q=0}(\lambda, \mu) = \frac{q}{\lambda\mu}\,\frac{1}{(m_R(\lambda) - 1/\mu)^2 + m_I^2(\lambda)} = \frac{q\mu}{\lambda\,\left|1 - \mu\, m_0(\lambda)\right|^2}. \qquad (40)$$

Derivation of (9). We set q̃ = q and λ̃ = λ + ε, with ε > 0. This yields

$$m_R(\tilde\lambda) = m_R(\lambda) + \varepsilon\,\partial_\lambda m_R(\lambda) + O(\varepsilon^2), \qquad |m_0(\tilde\lambda)|^2 = |m_0(\lambda)|^2 + 2\varepsilon\left[m_R(\lambda)\,\partial_\lambda m_R(\lambda) + \pi^2 q^2 \rho(\lambda)\,\partial_\lambda \rho(\lambda)\right] + O(\varepsilon^2). \qquad (41)$$

Consequently,

$$\beta(\lambda, \lambda + \varepsilon) = \varepsilon^2\left[|m_0(\lambda)|^2\,\partial_\lambda m_R(\lambda) - m_R(\lambda)\,\partial_\lambda |m_0(\lambda)|^2\right] + O(\varepsilon^3), \qquad (42)$$

and

$$\alpha(\lambda, \lambda + \varepsilon) = 4\varepsilon^2\, m_I^2(\lambda)\,\left|\partial_\lambda m_0(\lambda)\right|^2 + O(\varepsilon^3). \qquad (43)$$

The result (9) follows by plugging Eqs. (42) and (43) into Eq. (6) and then setting ε = 0.

The derivation of the overlaps (8) for two independent deformed GOE matrices is very similar to the sample covariance case. Hence, we shall omit most details, which can be obtained by following the arguments of the previous sections. Again, we shall skip the arguments λ and λ̃ where there is no confusion.

For completeness, we recall some notations. The functions g and g̃ are respectively the Stieltjes transforms of S = C + W and S̃ = C + W̃, where the noises are independent. Also, we introduced

$$\zeta(z) = z - \sigma^2 g(z), \qquad \tilde\zeta(\tilde z) = \tilde z - \tilde\sigma^2\,\tilde g(\tilde z), \qquad (44)$$

where σ² and σ̃² are the variances of the two GOE noises. We still use the convention ζ₀(λ) = lim_{η→0} ζ(λ − iη) ≡ ζ_R + i ζ_I and ζ̃₀(λ̃) = lim_{η→0} ζ̃(λ̃ − iη) ≡ ζ̃_R + i ζ̃_I. It is easy to see that ζ_R = λ − σ² g_R and ζ_I = σ² g_I.

For the deformed GOE, the resolvent can also be approximated by a deterministic matrix for N → ∞. Indeed, one has [22, 26]:

$$(z - S)^{-1}_{kl} \approx \left[\frac{1}{\zeta(z) - C}\right]_{kl}, \qquad (45)$$

and ⟨(z̃ − S̃)⁻¹_{kl}⟩_P is obtained from Eq. (45) by replacing ζ by ζ̃. Then, plugging this into (2), we obtain

$$\lim_{\eta\to 0}\left[\psi(\lambda - i\eta, \tilde\lambda + i\eta) - \psi(\lambda - i\eta, \tilde\lambda - i\eta)\right] = \frac{g_0 - \bar{\tilde g}_0}{\bar{\tilde\zeta}_0 - \zeta_0} - \frac{g_0 - \tilde g_0}{\tilde\zeta_0 - \zeta_0} = \frac{g_0(\tilde\zeta_0 - \bar{\tilde\zeta}_0) + (\tilde g_0\,\bar{\tilde\zeta}_0 - \bar{\tilde g}_0\,\tilde\zeta_0) + \zeta_0(\bar{\tilde g}_0 - \tilde g_0)}{(\tilde\zeta_0 - \zeta_0)(\bar{\tilde\zeta}_0 - \zeta_0)}. \qquad (46)$$


We proceed as above (see Eq. (33) and thereafter) to find

$$g_0(\tilde\zeta_0 - \bar{\tilde\zeta}_0) + (\tilde g_0\,\bar{\tilde\zeta}_0 - \bar{\tilde g}_0\,\tilde\zeta_0) + \zeta_0(\bar{\tilde g}_0 - \tilde g_0) = 2\left[(\zeta_I \tilde g_I - g_I \tilde\zeta_I) + i\left(\tilde\zeta_I(g_R - \tilde g_R) - \tilde g_I(\zeta_R - \tilde\zeta_R)\right)\right] \qquad (47)$$

and

$$(\tilde\zeta_0 - \zeta_0)(\bar{\tilde\zeta}_0 - \zeta_0) = (\zeta_R - \tilde\zeta_R)^2 + (\tilde\zeta_I^2 - \zeta_I^2) + 2i\,\zeta_I(\tilde\zeta_R - \zeta_R). \qquad (48)$$

Hence, by putting these last two equations into (46) and then using (3), we get after some straightforward computations

$$\Phi_a(\lambda, \tilde\lambda) = \frac{(\sigma^2 + \tilde\sigma^2)(\zeta_R - \tilde\zeta_R)^2 + 2\sigma^2\tilde\sigma^2(g_R - \tilde g_R)(\zeta_R - \tilde\zeta_R) + (\sigma^2 - \tilde\sigma^2)(\zeta_I^2 - \tilde\zeta_I^2)}{\left[(\zeta_R - \tilde\zeta_R)^2 + (\zeta_I + \tilde\zeta_I)^2\right]\left[(\zeta_R - \tilde\zeta_R)^2 + (\zeta_I - \tilde\zeta_I)^2\right]}, \qquad (49)$$

which is the generalization of Eq. (8) to two different noise parameters σ ≠ σ̃. If we now consider σ = σ̃, we have

$$\Phi_a(\lambda, \tilde\lambda) = 2\sigma^2\,\frac{(\zeta_R - \tilde\zeta_R)^2 + \sigma^2 (g_R - \tilde g_R)(\zeta_R - \tilde\zeta_R)}{\left[(\zeta_R - \tilde\zeta_R)^2 + (\zeta_I + \tilde\zeta_I)^2\right]\left[(\zeta_R - \tilde\zeta_R)^2 + (\zeta_I - \tilde\zeta_I)^2\right]} = \frac{2\sigma^2(\lambda - \tilde\lambda)\left[\zeta_R(\lambda) - \zeta_R(\tilde\lambda)\right]}{\left[(\zeta_R(\lambda) - \zeta_R(\tilde\lambda))^2 + (\zeta_I(\lambda) + \zeta_I(\tilde\lambda))^2\right]\left[(\zeta_R(\lambda) - \zeta_R(\tilde\lambda))^2 + (\zeta_I(\lambda) - \zeta_I(\tilde\lambda))^2\right]}, \qquad (50)$$

which is exactly Eq. (8). As for sample covariance matrices, one can again set λ̃ = λ + ε to find

$$\Phi_a(\lambda, \lambda) = \frac{\sigma^2}{2\zeta_I^2}\,\frac{\partial_\lambda \zeta_R(\lambda)}{\left[\partial_\lambda \zeta_R(\lambda)\right]^2 + \left[\partial_\lambda \zeta_I(\lambda)\right]^2}, \qquad (51)$$

which is Eq. (10).

(51)

Finally, if σ̃² = 0 (no noise), then ζ̃_I = 0 and ζ̃_R(μ) = μ. One can easily check that this yields, in Eq. (49):

$$\Phi_a(\lambda, \mu) = \frac{\sigma^2}{(\zeta_R(\lambda) - \mu)^2 + \zeta_I^2(\lambda)}. \qquad (52)$$

We finally turn to the case where the two additive noises are correlated.

We define

$$S = C + W_1, \qquad \tilde S = C + W_2, \qquad (53)$$

$$\langle W_1\rangle_{\mathcal N} = 0, \quad \langle W_2\rangle_{\mathcal N} = 0, \quad \mathrm{Cov}(W_1, W_2) = \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{12} & \sigma_2^2 \end{pmatrix}, \qquad (54)$$

where we denoted by 𝒩 the Gaussian measure and used the abbreviation σ₁₂ = ρσ₁σ₂. Using the stability of the GOE under addition, let us rewrite the noise terms as

$$W_1 = A + B_1, \qquad W_2 = A + B_2, \qquad (55)$$

$$\langle A\rangle_{\mathcal N} = 0, \qquad \langle A^2\rangle_{\mathcal N} = \sigma_{12}, \qquad (56)$$

$$\langle B_1\rangle_{\mathcal N} = \langle B_2\rangle_{\mathcal N} = 0, \quad \langle B_1^2\rangle_{\mathcal N} = \sigma_1^2 - \sigma_{12}, \quad \langle B_2^2\rangle_{\mathcal N} = \sigma_2^2 - \sigma_{12}, \quad \langle B_1 B_2\rangle_{\mathcal N} = 0. \qquad (57)$$

One can check that this parametrization yields exactly the correlation structure of Eq. (54): indeed, Cov(W₁, W₂) = ⟨A²⟩_𝒩 = σ₁₂. Therefore, using (55) in (53), we have the equivalence (in law)

$$S = D + B_1, \qquad \tilde S = D + B_2, \qquad D := C + A. \qquad (58)$$

Since the noises B₁ and B₂ are now independent, and since the mean squared overlap Φ_a given in Eq. (49) is independent of the exact structure of C, we can simply replace C by D. Hence, we deduce that the overlaps for this model are again given by Eq. (49), with σ² = σ₁² − σ₁₂ and σ̃² = σ₂² − σ₁₂. If σ₁ = σ₂ ≡ σ, then Φ_a is given by Eq. (8) with variance σ²(1 − ρ), as announced in the main text.
