Matrix Ensembles With Unitary Invariance
1. Introduction
Gaussian random matrices are a well-studied topic due to the enormous number of
applications in engineering, mathematics and physics. This large variety in scope
has led to a multitude of modifications of random matrix models, veering away from
the traditional Gaussian ensembles towards models with greater generality, e.g., see the
textbooks [1, 2, 3]. This has resulted in various studies of spectral statistics such as
bulk statistics, hard- and soft-edge statistics, as well as the statistics of multicritical
points where, for instance, intervals of the spectral support (also known as cuts) merge or an outlier
(a separate eigenvalue not belonging to the bulk of the spectrum) is absorbed into the
bulk of the spectrum.
Less well studied are the tail statistics of the eigenvalues of a random matrix
that exhibits a heavy tail. Nevertheless, heavy-tailed random matrices have many
important applications, especially when systems are non-stationary or open. For such
systems one can expect heavy tails to occur more naturally than ensembles
for which all moments exist. Examples admitting heavy tails include time series analysis,
disordered systems, quantum field theory and, more recently, deep neural networks;
e.g., see [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. From a theoretical point of view, the
ensembles stable under matrix addition are of particular importance since they are
the fixed points of their respective domains of attraction via the multivariate central
limit theorem [17]; see [18] for a recent work on unitarily invariant Hermitian random
matrices. The classification of these domains as well as of the stable distributions is still
poorly understood from the perspective of spectral statistics. We aim to unveil one
part of this incomplete picture, namely the statistics of the largest eigenvalues for
unitarily invariant ensembles.
There are several works [19, 4, 16, 20, 21, 22, 23, 24, 25, 26, 27, 29, 30, 31, 32]
in physics and mathematics that have studied heavy-tailed Wigner matrices in detail
(meaning matrices whose entries are independently and identically distributed according to
a heavy-tailed univariate probability measure). For these matrices it has been
shown [20, 21, 24, 28] that the largest eigenvalues in the heavy tail converge to Poisson
statistics. Moreover, the corresponding eigenvectors become localised [9, 19, 28], while those in the bulk
become delocalised [30]. One may argue that the Poisson statistics of the eigenvalues
is due to the localisation of the eigenvectors. This perspective is supported by the
fact that the distribution of these largest eigenvalues is shared with the distribution
of the largest matrix entries [24]. In the present work, we will argue that this is not
necessarily the case and that the Poisson statistics, or at least a strongly diminished level
repulsion, can also be found for unitarily invariant random matrix ensembles. Such
ensembles have been discussed in [8, 11, 22, 33, 34, 35, 36, 37, 38]. The unitary
invariance takes the eigenvectors out of the picture, as they are still Haar distributed
and thus delocalised. In [39], it was also shown that exactly such matrices maximise
the Shannon entropy when only a level density is given as an input.
In our present work we consider two specific random matrix ensembles. The first
one concerns the singular values of a product of complex inverse Ginibre matrices,
see [40, 41, 42, 43] for an analytical computation of the finite N (matrix dimension)
statistics as well as of the hard edge statistics. While this ensemble is not stable at finite
matrix dimension, it is known [44, 45] that the limiting macroscopic level density is
stable under free convolution (the sum of independent and identical copies of the random
matrix in the large N limit). Numerically we have confirmed that this asymptotic
stability also holds for the local statistics at the soft edge and in some part of the bulk.
Local Tail Statistics 3
Indeed, in Ref. [43] those local spectral statistics have been proven for this kind of
random matrix. However, the tail statistics do not share this behaviour. In the tail the
eigenvalues degenerate statistically, meaning that clusters of eigenvalues show a diminished
level repulsion, which vanishes completely for N → ∞. The number of eigenvalues
inside such a cluster is equal to the number of copies of the random matrix that
have been added. Using the supersymmetry method we have analytically
confirmed our numerical observations: inside the tail, the sum of random
matrices agrees with the direct sum of exactly the same matrices. This agreement does,
however, not hold in the bulk or at the soft edge. Therefore there is a transition. The
scaling of the eigenvalues that belong to the critical regime of this transition has been
identified and exhibits a dependence on the stability exponent.
Furthermore, we will address the question of the central limit theorem for the
tail statistics of the sum of these random matrices, which happens to converge to a
Poisson point process for the largest eigenvalues, as is already known for
heavy-tailed Wigner matrices [20, 21, 24, 28]. To confirm whether this picture
holds more generally, we consider a second random matrix ensemble, which is a
Gaussian unitary ensemble (GUE) whose variance is averaged over a one-sided stable
distribution. Similar averages have also been discussed in [6, 7, 8, 34, 35]. This
construction yields a random matrix ensemble that is already stable at fixed matrix
dimension N, so that its macroscopic as well as its microscopic statistics must be
stable. There is, however, the downside that two copies of the matrix are not free in
the sense of free probability [46]. Whether their largest eigenvalues will converge exactly to
the Poisson statistics will depend on whether or not the largest eigenvalues live on a
scale that is bigger than the one of the bulk. This will be shown with a supersymmetry
calculation. The Monte Carlo simulations we have generated suggest that the average
position of the largest eigenvalues might saturate at a finite value.
The present work is built up as follows. In Sec. 2, we describe the numerical
experiment we have carried out for a sum of inverse complex Ginibre matrices in order
to get a feeling for what is happening. We not only confirm that the macroscopic
level density as well as the soft edge statistics are stable but also show how the tail
statistics change when adding several independent copies of these random matrices. To
get an analytical confirmation, we compute the average of the ratio of characteristic
polynomials in Sec. 3. Therein, we consider the more general situation of a sum
of products of inverse Ginibre matrices, as those are also known to be stable under
free convolution [44, 45]. These averages encode the whole eigenvalue statistics and
are computed via the supersymmetry method [47, 48, 49]. In Sec. 4, we study the
limit of the sum of an infinite number of matrices analytically as well as numerically.
In particular, we investigate how universal the Poisson statistics of the largest
eigenvalues of heavy-tailed ensembles are and on which scales they are to be found.
We conclude in Sec. 5 by formulating two conjectures for heavy-tailed ensembles.
2. Numerical Observations
We begin this section with a short numerical experiment (in fact, the same one that
led to the discovery of this surprising result), as it will give us some insight into what
is going on when we add random matrices with heavy-tailed macroscopic level densities.
We consider a sum of L independently and identically distributed inverse complex
Ginibre matrices; the resulting matrix YL shares the same macroscopic level density as each single Xj† Xj .
This can be readily checked via the R-transform, which is implicitly defined with the
help of the Green function [46]

G(z) = ∫_{−∞}^{∞} ρ(λ) dλ / (z − λ),  (5)

namely

R[G(z)] = z − 1/G(z).  (6)

The R-transform of Xj† Xj is [45]

R_{Xj† Xj}(y) = −e^{iπ/2} y^{−1/2}.  (7)

Due to the rule for a sum of two asymptotically free random matrices A and B,

R_{A+B}(y) = R_A(y) + R_B(y),  (8)

and the scaling rule of a random matrix A with a scalar µ,

R_{µA}(y) = µ R_A(µy),  (9)

we have

R_{YL}(y) = −e^{iπ/2} y^{−1/2},  (10)

too. We have numerically illustrated this for L = 1, 2, 3, 4 in Fig. 1.
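The free stability behind Eqs. (7)-(10) can be probed with a short Monte Carlo sketch. The normalisation conventions below are our own assumptions, not fixed by the excerpt: we take X_j = G_j^{-1} with G_j an N × N complex Ginibre matrix of unit-variance entries and Y_L = (N/L²) Σ_j X_j†X_j, so that the level density is supported on [1/4, ∞) with a heavy tail at infinity.

```python
import numpy as np

# Monte Carlo sketch (our own normalisation): X_j is the inverse of an N x N
# complex Ginibre matrix with unit-variance entries, and
# Y_L = (N / L^2) * sum_j X_j^dagger X_j.
rng = np.random.default_rng(1)
N = 200

def eigenvalues_of_YL(L, samples=20):
    """Pooled eigenvalues of Y_L over several Monte Carlo configurations."""
    evs = []
    for _ in range(samples):
        S = np.zeros((N, N), dtype=complex)
        for _ in range(L):
            G = (rng.standard_normal((N, N))
                 + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
            X = np.linalg.inv(G)          # inverse complex Ginibre matrix
            S += X.conj().T @ X
        evs.append(np.linalg.eigvalsh((N / L**2) * S))
    return np.concatenate(evs)

ev1 = eigenvalues_of_YL(L=1)
ev2 = eigenvalues_of_YL(L=2)
# Free stability: the bulk of the spectrum should be L-independent,
# and the soft edge should stay near 1/4 for every L.
print(np.median(ev1), np.median(ev2))
```

With these conventions the empirical medians for L = 1 and L = 2 agree within a few percent, mirroring the L-independence shown in Fig. 1.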
Figure 1. The macroscopic level density of the random matrix sum (4) has
been simulated by Monte Carlo simulations (coloured symbols) for L matrices
added together. This is compared to the analytical result (3) (black solid
curve), with which it should agree for any L via free probability. We generated
10^6 configurations of dimension N = 200 for each setting. The bin size is equal to 0.1.
Thus, the statistical and systematic errors are below one percent.
Other statistics that can be checked to be stable under the sum (4) are
those in the bulk and at the soft edge. In Ref. [43], those have been proven to
agree with the statistics of the GUE. For instance, the soft edge lies at λ_min = 1/4 in the macroscopic
scaling for any L ∈ ℕ. Thus, we should find the microscopic level density [1, 2]
ρ_Airy(λ) = lim_{N→∞} (N^{2/3}/2) ⟨ tr δ( λ 1_N − (N^{2/3}/2^{1/3}) [ 1_N − (4N/L²) Σ_{j=1}^{L} Xj† Xj ] ) ⟩  (11)
Figure 2. Microscopic level density at the soft edge for the Monte Carlo
simulations (coloured symbols) of the random matrix sums (4) and the analytical
prediction (11) (solid black curve). We have employed the same configurations
generated for Fig. 1. The bin size is this time 0.2.
Figure 3. The microscopic level density of the Monte Carlo simulations of Figs. 1
and 2 (coloured histograms) compared to the inverse Bessel statistic result (13)
(black solid curve). The bin size is 0.2 this time. We underline that we employ
the unfolded scale, meaning that the spectrum has been inverted so that
the largest eigenvalues are those closest to the origin.
[Figure 4: multi-panel comparison in the tail. Left panels: the unfolded tail densities λ² ϱ_tail(λ) (plotted against λ⁻¹) and the distributions p₁, p₂, p₃, p₄ of the individual largest eigenvalues, as well as their sums p₁+p₂, p₃+p₄, p₁+p₂+p₃ and p₁+p₂+p₃+p₄, compared to ϱ_Bessel(x) and, for L = 2, to ϱ_Bessel(x/2). Right panels: the spacing distributions p(s) between consecutive largest eigenvalues, p_{1↔2}, p_{2↔3} and p_{3↔4}, for the sum of inverse Ginibre matrices (IGin) and for the corresponding block construction (Block), compared to the Poisson and GUE spacing distributions.]
3. Analytical Corroborations
In Sec. 3.1 we define and discuss the random matrix ensembles, namely the sums of
products of inverse Wishart-Laguerre matrices, that we study with the supersymmetry
method in Sec. 3.2. After we have derived the corresponding supermatrix integral, we
carry out the large N limit in Sec. 3.3. Finally, in Sec. 3.4 we look for the critical scale
where the spectral statistics of this ensemble change from stable (meaning the sum
exhibits the same statistics as each matrix in the sum) to unstable.
Comparison with (4) shows that the stability exponent will be α = 1/(M + 1).
Indeed, it has been shown that the macroscopic level density of Y₁^{(M)},

ρ_{Y₁^{(M)}}(λ) = lim_{N→∞} (1/N) ⟨ tr δ( λ 1_N − N^M Y₁^{(M)} ) ⟩,  (20)
is stable under free convolution, too, see [44, 45]. The reason is the S-transform,
another transform in free probability, indirectly defined via the R-transform [46]

R(y) = 1 / S[yR(y)]  ⇔  S(χ) = 1 / R[χS(χ)].  (21)
It has the property [46]

S_{Y₁^{(M)}}(χ) = S_{X₁,ₘ† Y₁^{(M−1)} X₁,ₘ}(χ) = S_{Y₁^{(M−1)}}(χ) S_{X₁,ₘ† X₁,ₘ}(χ),  (22)

as X₁,ₘ and Y₁^{(M−1)} are asymptotically free. In our case we have

S_{X_{l,m}† X_{l,m}}(χ) = −χ  ⇒  S_{Y₁^{(M)}}(χ) = (−χ)^M  ⇒  R_{Y₁^{(M)}}(y) = −e^{iπ/(M+1)} y^{−M/(M+1)}.  (23)

This result reflects the stability when considering the classification in [44, Appendix A].
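The chain of implications in (23) can be verified directly against the defining relation (21): the following small numerical check (ours, not from the source) confirms that R(y) = −e^{iπ/(M+1)} y^{−M/(M+1)} satisfies R(y) = 1/S[yR(y)] for S(χ) = (−χ)^M.

```python
import cmath

# Check (ours) that the R-transform of Eq. (23) solves the defining
# relation (21), R(y) = 1/S[y R(y)], for the S-transform S(chi) = (-chi)^M.
def R(y, M):
    return -cmath.exp(1j * cmath.pi / (M + 1)) * y ** (-M / (M + 1))

def S(chi, M):
    return (-chi) ** M

for M in range(1, 7):
    y = 2.0
    lhs = R(y, M) * S(y * R(y, M), M)   # should equal 1 for every M
    assert abs(lhs - 1) < 1e-12
print("Eq. (23) is consistent with Eq. (21) for M = 1, ..., 6")
```

The identity holds exactly: −yR(y) = e^{iπ/(M+1)} y^{1/(M+1)}, so R(y)·(−yR(y))^M = −e^{iπ} = 1.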
The macroscopic level density of the inverse of Y₁^{(M)} is given in terms of the
Meijer G-function [56]

ρ_{(Y₁^{(M)})^{−1}}(λ) = ( M^{M−3/2} / ( √(2π) (M+1)^{M+1/2} ) ) G^{M,0}_{M,M}( ( (M+1)^{M+1} / M^M ) λ | {(1+j−M)/M}_{j=1,…,M} ; {(j−1−M)/(M+1)}_{j=1,…,M} ),  (24)
where the contour C starts at −i∞ and finishes at +i∞ while having the poles of
Γ[c_j + s] on the left side of the path and those of Γ[1 − a_j − s] on the right side. The
distribution (24) is called the Fuss-Catalan distribution since it has the Fuss-Catalan
numbers [56],

FC_M(n) = Γ[(M+1)n − M] / ( Γ[Mn − M + 2] Γ[n] ) with n ∈ ℕ₀,  (26)

as its moments, and it has support on λ ∈ [0, (M+1)^{M+1}/M^M]. Additionally, its
behaviour at the origin diverges like

ρ_{(Y₁^{(M)})^{−1}}(λ) ≈ λ^{−M/(1+M)} / ( Γ[(M+2)/(M+1)] Γ[M/(M+1)] ) for λ ≪ 1.  (27)
This implies that the tail behaviour of the matrix Y₁^{(M)} will be

ρ_{Y₁^{(M)}}(λ) = λ^{−2} ρ_{(Y₁^{(M)})^{−1}}(λ^{−1}) ≈ λ^{M/(1+M)−2} / ( Γ[(M+2)/(M+1)] Γ[M/(M+1)] ) for λ ≫ 1,  (28)

where we can again read off the stability exponent α = 1/(M + 1), which is consistent
with the other discussion.
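The Γ-quotient in (26) can be cross-checked combinatorially. Under the reading (our assumption, since the index convention is not spelled out in this excerpt) that the n-th entry of (26) reproduces the (n−1)-st Fuss-Catalan number in the standard convention FC_M(p) = binom((M+1)p, p)/(Mp + 1), the two expressions agree exactly:

```python
from math import comb, factorial

# Cross-check (ours) of the moment formula (26); all Gamma arguments are
# positive integers for n >= 1, so Gamma[m] = (m-1)!.
def moment_eq26(M, n):
    return factorial((M + 1) * n - M - 1) // (
        factorial(M * n - M + 1) * factorial(n - 1))

def fuss_catalan(M, p):
    """Standard Fuss-Catalan number binom((M+1)p, p) / (Mp + 1)."""
    return comb((M + 1) * p, p) // (M * p + 1)

for M in range(1, 5):
    for n in range(1, 7):
        assert moment_eq26(M, n) == fuss_catalan(M, n - 1)
print("Eq. (26) matches the standard Fuss-Catalan numbers (index shifted by one)")
```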
As a result of the above discussion, the scaling of the largest eigenvalues of
Y₁^{(M)} as well as Y_L^{(M)} will be N regardless of M. This scaling can be obtained by
combining N ρ_{Y₁^{(M)}}(λ)dλ ∝ d(N λ^{−1/(M+1)}) for λ ≫ 1 and Eq. (20). We need this
scale to properly unfold the spectrum as well as to find the largest eigenvalues. For
instance, the unfolded microscopic tail level density of Y₁^{(M)} is given by the so-called
Meijer G-kernel result [57, Theorem 5.3] (ν_j = 0 for all j) of the hard edge microscopic
level density of (Y₁^{(M)})^{−1}, which is

ρ^{(M)}_{MeijerG}(λ) = lim_{N→∞} ⟨ tr δ( λ 1_N − (N^{1/(M+1)}/c_M) (Y₁^{(M)})^{−1/(M+1)} ) ⟩
= (M+1) c_M^{M+1} λ^M ∫₀¹ dt G^{1,0}_{0,M+1}( t (c_M λ)^{M+1} | −; − / 0; 0, …, 0 ) G^{M,0}_{0,M+1}( t (c_M λ)^{M+1} | −; − / 0, …, 0; 0 )  (29)

with

c_M = Γ[(M+2)/(M+1)] Γ[M/(M+1)]  (30)

the proper unfolding constant such that ρ^{(M)}_{MeijerG}(λ) ≈ λ for λ ≪ 1. For M = 2, this
formula reduces to (12). Due to the proper unfolding, the spectrum becomes the half-sided
picket fence spectrum for M → ∞ [58, 59, 60, 61],

lim_{M→∞} ρ^{(M)}_{MeijerG}(λ) = Σ_{j=1}^{∞} δ(λ − j + 0.5).  (31)
The shift by 0.5 reflects the level repulsion from the origin and has thus a strong
resemblance to the spectrum of the quantum harmonic oscillator.
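The O(N) scale of the largest eigenvalues claimed above can be probed numerically for M = 1. The sketch below relies on a known exact result for the complex square case (our choice of illustration, not taken from the source): the smallest eigenvalue of G G† with G an N × N complex Ginibre matrix of unit-variance entries is exponentially distributed with mean 1/N, so the largest eigenvalue of the inverse Wishart matrix (G G†)^{−1} grows linearly with N.

```python
import numpy as np

# Probe (ours) of the O(N) scale of the largest tail eigenvalue for M = 1:
# N * lambda_min(G G^dagger) is exactly Exp(1)-distributed for the complex
# square case, hence lambda_max((G G^dagger)^{-1}) ~ N.
rng = np.random.default_rng(2)
N, samples = 80, 300
scaled_minima = []
for _ in range(samples):
    G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    lam_min = np.linalg.eigvalsh(G @ G.conj().T)[0]
    scaled_minima.append(N * lam_min)      # fluctuates around 1 (Exp(1) law)
mean = np.mean(scaled_minima)
print(f"mean of N*lambda_min over {samples} samples: {mean:.3f}")
```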
The microscopic tail level density is then given by

ρ^{(M)}_{invMG}(λ) = (1/λ²) ρ^{(M)}_{MeijerG}(λ^{−1}).  (32)

This is the one that can be expected when studying the matrix Y₁^{(M)}. For the sum of
L copies of matrices, meaning Y_L^{(M)}, we will find that the averaged spectrum behaves
as if we had directly summed these random matrices, cf. subsection 3.3.
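The inversion rule used in (32) is elementary but easy to misapply, so here is a toy sanity check (our own example, not from the source): if x is drawn from the exponential density ρ(x) = e^{−x}, then y = 1/x has density y^{−2} e^{−1/y}, so P(y > 1) = ∫₀¹ e^{−x} dx = 1 − 1/e.

```python
import numpy as np

# Toy check (ours) of the density-inversion rule rho_inv(y) = y^{-2} rho(1/y):
# for x ~ Exp(1), P(1/x > 1) = P(x < 1) = 1 - exp(-1).
rng = np.random.default_rng(3)
x = rng.exponential(size=100_000)
y = 1.0 / x
empirical = np.mean(y > 1.0)
exact = 1.0 - np.exp(-1.0)
assert abs(empirical - exact) < 0.01
print(f"empirical {empirical:.4f} vs exact {exact:.4f}")
```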
These two equalities are enough to fix the integration over Grassmann variables, as
any function of Grassmann variables is understood as a finite Taylor series due to the
nilpotence of the Grassmann variables, i.e., η_j² = 0.
Coming back to (34), we can readily generalise the partition function to an
arbitrary (k|k) × (k|k) supermatrix κ as long as the numerical parts of the eigenvalues
of the Boson-Boson block κ_BB do not lie on the positive real line or at the origin.
The idea of the supersymmetry method in random matrix theory is to map the
average over an ordinary random matrix to an average over a supermatrix whose
dimension is independent of N. In our case we essentially have products of matrices
of the form W W†. In Refs. [63, 64], one of the present authors has introduced a
shortcut called the supersymmetric projection formula, which essentially states for a
random matrix W ∈ C^{N×N} that is distributed along a unitarily invariant density
P(W W†) = P(V W W† V†) (for all V ∈ U(N) and W ∈ C^{N×N}) that we can find a
superfunction Q(U) for a (k|k) × (k|k) supermatrix U such that

⟨ Sdet( W W† ⊗ 1_{k|k} + κ̂ )^{−1} ⟩ = ⟨ Sdet( 1_N ⊗ U + κ̂ )^{−1} ⟩.  (38)

Here, κ̂ can be a much larger supermatrix of dimensions (kN|kN) × (kN|kN). We
underline that on the left hand side we average over the ordinary random matrix W,
while on the right hand side we average over the supermatrix U.
The supermatrix U is in the current situation relatively simple, namely its
Boson-Boson block is a positive definite Hermitian matrix U_BB = U_BB† ∈ Herm₊(k)
with no Grassmann variables, and its Fermion-Fermion block is a unitary matrix
U_FF ∈ U(k), also containing no Grassmann variables. The Boson-Fermion and
Fermion-Boson blocks only comprise independent Grassmann variables with no further
symmetries. Therefore, the supermatrix space described by U is the supersymmetric
coset Herm(k|k) = Gl(k|k)/U(k|k), see [65]. The superfunction Q(U) is given via
the supersymmetric projection formula [63, 64] times the measure (Sdet U)^N d[U] on
Herm(k|k),
" #!
WW† + W f† W
Z Z fW f
Q(U ) = d[W ] d[W ]P
f e
f† , (39)
CN ×N CN ×(k|k) UW U
where d[U ] is the product of the differentials of all supermatrix elements. The
supermatrix W f is a rectangular matrix where its first k columns are ordinary N
dimensional complex vectors and the last k columns are N dimensional vectors with
independent complex Grassmann variables as their entries, this set has been denoted
by CN ×(k|k) . The superfunction Pe is a supersymmetric extension of P that satisfies
P (V V † ) = Pe(V † V ), for any V ∈ CN ×(N +k|k) . (40)
Such a supersymmetric extension is commonly not unique but there are usually natural
choices as we will see below for the present situation of the inverse Ginibre matrices.
We approach the average (34) inductively by writing Y_L^{(M)} as a sum and a product
of inverse Ginibre matrices,

Z^{(k,N)}_{Y_L^{(M)}}(κ) = ⟨ Sdet( (X_L^{(M)})† X_L^{(M)} ⊗ 1_{k|k} + L^{M+1} Y_{L−1}^{(M)} ⊗ 1_{k|k} − L^{M+1} 1_N ⊗ κ )^{−1} ⟩
= ⟨ Sdet( X_{L,M}† X_{L,M} ⊗ 1_{k|k} + κ̂_{L,M} )^{−1} ⟩  (41)

with

κ̂_{L,M} = L^{M+1} ((X_L^{(M−1)})†)^{−1} Y_{L−1}^{(M)} (X_L^{(M−1)})^{−1} ⊗ 1_{k|k} − ((X_L^{(M−1)})† X_L^{(M−1)})^{−1} ⊗ κ.  (42)
For the rearrangement of the matrices inside the superdeterminant we have employed
the identities
Sdet (AB) = Sdet (A) Sdet (B) and Sdet (H ⊗ 1k|k ) = 1 (43)
for any two square supermatrices A and B and any ordinary square matrix H.
Next, we apply (38) to (41) and obtain

Z^{(k,N)}_{Y_L^{(M)}}(κ) = ⟨ Sdet( 1_N ⊗ U_{L,M} + κ̂ )^{−1} ⟩  (44)
= ⟨ Sdet( (X_L^{(M−1)})† X_L^{(M−1)} ⊗ U_{L,M} + L^{M+1} [ Y_{L−1}^{(M)} ⊗ 1_{k|k} − 1_N ⊗ κ ] )^{−1} ⟩.

We repeat this procedure until each integration over the random matrix X_{L,j} is
transferred into an integration over a supermatrix U_{L,j}. The only thing that changes
is the supermatrix

κ̂_{L,j} = L^{M+1} ((X_L^{(j−1)})†)^{−1} Y_{L−1}^{(M)} (X_L^{(j−1)})^{−1} ⊗ (U_{L,j+1} ⋯ U_{L,M})^{−1}  (45)
− ((X_L^{(j−1)})† X_L^{(j−1)})^{−1} ⊗ κ (U_{L,j+1} ⋯ U_{L,M})^{−1}.  (46)
= ⟨ Sdet( 1_N ⊗ U_Gin^{−1} + κ̂^{−1} )^{−1} Sdet κ̂^{−1} ⟩
= ⟨ ( Sdet U_Gin )^{−N} Sdet( 1_N ⊗ U_Gin + κ̂ )^{−1} ⟩.  (53)

Simple comparison with the general duality (38) yields the identification that each
supermatrix U_{l,m} in (49) is a copy of U_Gin^{−1}, which we coin V_{l,m}.
Summarising, the partition function takes the form

Z^{(k,N)}_{Y_L^{(M)}}(κ) = [ ∫_{Herm^{ML}(k|k)} Sdet( V_L^{(M)} − κ )^{−N} Π_{m=1}^{M} Π_{l=1}^{L} e^{−Str V_{l,m}} d[V_{l,m}] ] / [ ∫_{Herm(k|k)} ( Sdet V )^{N} e^{−Str V} d[V] ]^{LM}  (54)

with

V_L^{(M)} = (1/L^{M+1}) Σ_{j=1}^{L} V_{j,1}^{−1} ⋯ V_{j,M}^{−1}.  (55)

The normalisation Z^{(k,N)}_{Y_L^{(M)}}(z 1_{k|k}) = 1 with an arbitrary complex z ∉ [0, +∞[ follows
from the Wegner integration theorems [66, 67, 68, 69, 70, 71, 72] for supergroup
invariant integrands. These theorems are essentially multidimensional Cauchy-like
identities which tell us that the integral is essentially the integrand at V ∝ 1_{k|k}
times an integrand-independent constant. The proportionality constant cancels in
the invariants like the supertrace or the superdeterminant, as can be readily checked
with (36). Therefore, the denominator in (54) is only an N-independent constant, i.e.,
it is the power of the integral

∫_{Herm(k|k)} ( Sdet V )^{N} e^{−Str V} d[V]  (56)
= ∫_{Herm(k|k)} [ det( V_BB − V_BF V_FF^{−1} V_FB )^{N} / ( det V_FF )^{N} ] e^{−tr V_BB + tr V_FF} d[V]
= ∫_{Herm₊(k)} ( det V_BB )^{N−k} e^{−tr V_BB} d[V_BB] ∫_{U(k)} ( det V_FF )^{−N−k} e^{tr V_FF} d[V_FF]
× ∫_{C^{(k|0)×(0|k)}} det( 1_k − V_BF V_FB )^{N} d[V_BF, V_FB].
The latter equality can be found by substituting V_BF → V_BB V_BF V_FF. The first two
integrals are given by the Selberg integrals [1, 2]

∫_{Herm₊(k)} ( det V_BB )^{N−k} e^{−tr V_BB} d[V_BB] = (1/k!) Π_{j=0}^{k−1} (π^j / j!) ∫_{ℝ₊^k} det(x)^{N−k} e^{−tr x} Δ_k²(x) d[x] = Π_{j=0}^{k−1} π^j (N − j − 1)!  (57)

and

∫_{U(k)} ( det V_FF )^{−N−k} e^{tr V_FF} d[V_FF] = (1/k!) Π_{j=0}^{k−1} (π^j / j!) ∫_{[0,2π]^k} det(e^{iϕ})^{−N−k} e^{tr e^{iϕ}} Δ_k²(e^{iϕ}) d[e^{iϕ}] = (2πi)^k Π_{j=0}^{k−1} π^j / (N + j)!,  (58)

where we have first diagonalised the matrices and then integrated over their
eigenvalues. To compute the remaining integral over the Grassmann variables, we
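The evaluation in (57) can be cross-checked with integer arithmetic, assuming (our input, standard but not stated in the excerpt) the known Laguerre-Selberg value ∫_{ℝ₊^k} det(x)^{N−k} e^{−tr x} Δ_k²(x) d[x] = Π_{j=0}^{k−1} (j+1)! (N−k+j)!; the powers of π match trivially between the two sides.

```python
from math import factorial

# Cross-check (ours) of Eq. (57): prefactor 1/(k! * prod_j j!) times the
# Laguerre-Selberg value prod_j (j+1)!(N-k+j)! should give prod_j (N-j-1)!.
def lhs(N, k):
    val = 1
    for j in range(k):
        val *= factorial(j + 1) * factorial(N - k + j)
    den = factorial(k)
    for j in range(k):
        den *= factorial(j)
    assert val % den == 0       # the division is exact
    return val // den

def rhs(N, k):
    val = 1
    for j in range(k):
        val *= factorial(N - j - 1)
    return val

for N in range(5, 12):
    for k in range(1, 5):
        assert lhs(N, k) == rhs(N, k)
print("Eq. (57) is consistent with the Laguerre-Selberg integral")
```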
Local Tail Statistics 15
Certainly, this limit works only when λ̂_BB^{M+1} + [λ̂_BB^{M+1}]† is negative definite. This
is usually not the case, since the imaginary part of λ̂ should eventually be set to 0.
Therefore we deform the contours by a slight rotation V_{1,j} → e^{−iπ/(M+1)} V_{1,j}, so that
for a positive definite Hermitian λ̂ we obtain the convergent result

lim_{N→∞} Z^{(k,N)}_{Y_L^{(M)}}( N (c_M λ̂)^{−M−1} ) = ∫_{Herm^M(k|k)} Π_{m=1}^{M} ( d[V_{1,m}] / ((2i)^k π^{k²}) )  (64)
× exp( −e^{−iπ/(M+1)} Str[ ( c_M λ̂ / L )^{M+1} V_{1,1}^{−1} ⋯ V_{1,M}^{−1} + Σ_{m=1}^{M} V_{1,m} ] )
= lim_{N→∞} [ Z^{(k,N)}_{Y_1^{(M)}}( N ( L / (c_M λ̂) )^{M+1} ) ]^{L}.
In the last line, we recognise our claim in Sec. 2 that the tail eigenvalue statistics agree
with those of a direct sum, which is namely

⟨ Π_{j=1}^{k} det( ⊕_{l=1}^{L} (X_l^{(M)})† X_l^{(M)} − κ_{F,j} ⊗ 1_L ) / det( ⊕_{l=1}^{L} (X_l^{(M)})† X_l^{(M)} − κ_{B,j} ⊗ 1_L ) ⟩
= ⟨ Π_{j=1}^{k} det( (X_1^{(M)})† X_1^{(M)} − κ_{F,j} ) / det( (X_1^{(M)})† X_1^{(M)} − κ_{B,j} ) ⟩^{L}
= [ Z^{(k,N)}_{Y_1^{(M)}}(κ) ]^{L}.  (65)
For M = L = 1, this result agrees with the Bessel kernel result [64], which has been
an important result in the study of Quantum Chromodynamics [76].
The result (64) is the analytical corroboration we previously mentioned,
even though we considered here a particular kind of ensemble for which we could carry
out the computation. However, we are rather sure that this is the generic behaviour
of heavy-tailed statistics. The eigenvalues seem to be too diluted to show their level
repulsion, which is reflected in the Vandermonde determinant of their joint probability
densities, so that effectively they behave as if this level repulsion never existed.
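The direct-sum picture in the tail has a simple numerical fingerprint that can be illustrated for M = 1, L = 2 (our own illustration with our own conventions): the largest eigenvalue of S₁ + S₂, with S_l = X_l† X_l and X_l an inverse complex Ginibre matrix, is essentially the largest eigenvalue of the direct sum diag(S₁, S₂), i.e., the maximum over the two blocks, because the huge tail eigenvalues of the two terms live in essentially orthogonal directions.

```python
import numpy as np

# Illustration (ours) of the direct-sum behaviour in the tail for M=1, L=2.
rng = np.random.default_rng(4)
N, samples = 100, 40
rel_diffs = []
for _ in range(samples):
    tops = []
    S_sum = np.zeros((N, N), dtype=complex)
    for _ in range(2):
        G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
        X = np.linalg.inv(G)
        S = X.conj().T @ X
        tops.append(np.linalg.eigvalsh(S)[-1])
        S_sum += S
    top_sum = np.linalg.eigvalsh(S_sum)[-1]
    top_direct = max(tops)   # largest eigenvalue of the direct sum diag(S1, S2)
    rel_diffs.append((top_sum - top_direct) / top_direct)
print(f"median relative difference: {np.median(rel_diffs):.4f}")
```

The relative difference between the sum and the direct sum is small (of the order of the bulk scale divided by the tail scale), while bulk eigenvalues of the two constructions differ substantially.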
× Π_{m=1}^{M} Π_{l=1}^{L} exp[ −e^{−iπ/(M+1)} N^{(1−γ)/(M+1)} Str V_{l,m} ] ( d[V_{l,m}] / ((2i)^k π^{k²}) ).
When Taylor expanding the logarithm of the superdeterminant, we notice that only
the linear term survives in the large N limit, since 1 − j(M + γ)/(M + 1) < 0 for all
j ≥ 2 and any M ≥ 1 and γ > 1 − M, so that

Z^{(k,N)}_{Y_L^{(M)}}( N^{γ} κ₀ 1_{k|k} + N^{δ} κ̃ )  (68)
≈ ∫_{Herm^{ML}(k|k)} exp[ −e^{−iπ/(M+1)} N^{(1−γ)/(M+1)} Str( (κ₀ 1_{k|k} + N^{δ−γ} κ̃)^{−1} V_L^{(M)} ) ]
× Π_{m=1}^{M} Π_{l=1}^{L} exp[ −e^{−iπ/(M+1)} N^{(1−γ)/(M+1)} Str V_{l,m} ] ( d[V_{l,m}] / ((2i)^k π^{k²}) )
≈ [ Z^{(k,N)}_{Y_1^{(M)}}( L^{M+1} ( N^{γ} κ₀ 1_{k|k} + N^{δ} κ̃ ) ) ]^{L}.
In this expression, we can read off the local scale given by the exponent δ =
[(M + 2)γ − 1]/(M + 1), as then the Taylor expansion in κ̃ terminates with the linear
term in the asymptotic limit N → ∞.
Equation (68) already shows that even some part of the bulk statistics close to the
tail still follows the statistics of a direct sum. A more detailed saddle point analysis
would show that we get the direct sum of L independent sine-kernel statistics. The
condition on the scaling exponent γ which has to be satisfied for this kind of statistics
is γ > 1 − M. Hence, for γ ≤ 1 − M we have to go to higher order expansions of the
logarithm of the superdeterminant. Those terms couple the supermatrices.
For γ < 1−M , those higher order terms are also large so that we need to carry out
an additional saddle point expansion where all Vl,m become equal. Thence we would
find the statistics of a single sine-kernel, see [65, 76] for the supersymmetric integral
expression of these statistics. One needs to be careful, as Rothstein vectorfields [77] will
occur in the saddle point expansion as they account for all Efetov-Wegner boundary
terms [67, 68] that correspond to the diagonalisation of a supermatrix. All those terms
have been explicitly computed for diagonalising Hermitian supermatrices in [72].
For γ = 1 − M and, hence, δ = (1 − M − M²)/(M + 1), the quadratic term of
the Taylor expansion is of order 1, so that all supermatrices are coupled,

Z^{(k,N)}_{Y_L^{(M)}}( N^{1−M} κ₀ 1_{k|k} + N^{(1−M−M²)/(M+1)} κ̃ )  (69)
≈ ∫_{Herm^{ML}(k|k)} exp[ −e^{−iπ/(M+1)} N^{M/(M+1)} Str( κ₀^{−1} V_L^{(M)} + Σ_{m=1}^{M} Σ_{l=1}^{L} V_{l,m} ) ]
× exp[ ( e^{−iπ/(M+1)} / κ₀² ) Str( κ̃ V_L^{(M)} ) + ( e^{−2iπ/(M+1)} / (2κ₀²) ) Str( (V_L^{(M)})² ) ] Π_{m=1}^{M} Π_{l=1}^{L} ( d[V_{l,m}] / ((2i)^k π^{k²}) ).
with

K(Ξ) = L^{M+1} [ N^{1−M} κ₀ 1_{k|k} + N^{(1−M−M²)/(M+1)} ( κ̃ + κ₀ Ξ ) ].  (72)
This result resembles those in [80], where the transition between independent diagonal
blocks of random matrices and a full matrix has been considered. This underlines
that our understanding of a transition in the tail between a direct sum and a full matrix
without block structure is ostensibly correct.
One last comment concerns the critical scale of the eigenvalues λ ∝ N^{1−M} of the random
matrix Y_L^{(M)}. It is only slightly larger than the scale N^{−M} of the macroscopic level
density (20), which is freely stable. Although we are then already deep in the tail, we
are far away from the scale of the largest eigenvalues, which is N for the chosen
reference scale of the product of inverse Ginibre ensembles, cf. Eq. (1). We believe
that the relative scales should hold for other ensembles too, and that in particular the
ratio between the scale of the largest eigenvalues and the critical scale should follow the
law scale_{largest eigenvalue} / scale_{critical} = N^{1/(2α)}, where α is the stability exponent. The
Taylor expansion should follow the same mechanism; however, the probability density
P(H) and thus the corresponding superfunction Q(U) may vary.
The last thing we would like to address in the present article is the limit L → ∞.
The multivariate central limit theorem [17] (for unitarily invariant random matrix
ensembles see [18]) tells us that if the limit exists it should converge to one of the
stable random matrix ensembles, and this already occurs at finite matrix dimension
N. Specifically, this means that when the Hermitian matrix H ∈ Herm(N) is a strictly
stable random matrix associated to the stability exponent α and we draw two copies
H₁ and H₂ of H, then the sum (H₁ + H₂)/2^{1/α} is also a copy of H, implying that it
exhibits the very same statistics, including but not limited to eigenvalues, eigenvectors,
and matrix entries (also at finite N). Hence this behaviour should carry over to the
large N limit. The only question is whether the two limits L → ∞ and N → ∞
commute. Our numerical simulations and analytical computations suggest that this might be true
for some cases of the microscopic and macroscopic spectral scales, depending on the
averaged position of the largest eigenvalues in the tail.
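The fixed-N stability described above can be made concrete for α = 1 with a minimal sketch (our own construction and normalisations): a GUE matrix whose variance is averaged over the one-sided 1/2-stable (Lévy) distribution, sampled here as x = 1/Z² with Z standard normal, yields a 1-stable matrix H, so (H₁ + H₂)/2 must have exactly the same eigenvalue statistics as a single copy H.

```python
import numpy as np

# Sketch (ours) of strict stability at fixed N for alpha = 1:
# H = sqrt(x) * GUE with x one-sided 1/2-stable, so x1 + x2 = 4 * (copy of x)
# in law, hence H1 + H2 = 2 * (copy of H) in law.
rng = np.random.default_rng(5)
N, samples = 40, 400

def stable_gue():
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    H = (A + A.conj().T) / 2
    x = 1.0 / rng.standard_normal() ** 2    # one-sided 1/2-stable (Levy) weight
    return np.sqrt(x) * H

single = np.concatenate([np.linalg.eigvalsh(stable_gue()) for _ in range(samples)])
summed = np.concatenate([np.linalg.eigvalsh((stable_gue() + stable_gue()) / 2)
                         for _ in range(samples)])
m1, m2 = np.median(np.abs(single)), np.median(np.abs(summed))
print(m1, m2)
```

The robust statistic median|λ| is used because the spectra are heavy-tailed; the two empirical medians agree up to sampling fluctuations, at every fixed N.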
Additionally, mesoscopic spectral scales might arise which are reminiscent of the
order of the two limits. What underlines the latter point is the fact that the Bessel
result for the level density (12) of the case M = 1 can be found for all L, see the left plots
in Fig. 4. One needs only to cluster the eigenvalues in L consecutive couples.
The limit L → ∞ for the model considered in Sec. 3, and in particular for
the result (65), is carried out in subsection 4.1, while we consider a random matrix
ensemble that is already stable at finite N in subsection 4.2.
where λ₀ > 0 is the base point where we zoom into the spectrum and λ̃ measures the
spectral fluctuations. Plugging this into (64) and expanding for large L, we obtain

lim_{L→∞} lim_{N→∞} Z^{(k,N)}_{Y_L^{(M)}}(κ)
= lim_{L→∞} ∫_{Herm^M(k|k)} Π_{m=1}^{M} ( d[V_{1,m}] / ((2i)^k π^{k²}) ) ( 1 − (1/L) e^{−iπ/(M+1)} Str( λ̃ V_{1,1}^{−1} ⋯ V_{1,M}^{−1} ) )^{L}
× exp( −e^{−iπ/(M+1)} Str( λ₀^{M+1} V_{1,1}^{−1} ⋯ V_{1,M}^{−1} + Σ_{m=1}^{M} V_{1,m} ) )
= ∫_{Herm^M(k|k)} Π_{m=1}^{M} ( d[V_{1,m}] / ((2i)^k π^{k²}) ) exp( −e^{−iπ/(M+1)} Str( λ̃ V_{1,1}^{−1} ⋯ V_{1,M}^{−1} ) )
× exp( −e^{−iπ/(M+1)} Str( λ₀^{M+1} V_{1,1}^{−1} ⋯ V_{1,M}^{−1} + Σ_{m=1}^{M} V_{1,m} ) ).  (74)
For the second equality we have exploited the fact that the integrand is normalised
for any κ ∝ 1_{k|k}, because then all determinants in (33) cancel. The average in the
exponent can be simplified due to the supergroup invariance of the integrand, which is
only broken by Str( λ̃ V_{1,1}^{−1} ⋯ V_{1,M}^{−1} ), so that eventually we have

lim_{L→∞} lim_{N→∞} Z^{(k,N)}_{Y_L^{(M)}}(κ) = exp[ C Str λ̃ ]  (75)

, (76)

which is the counterpart of (34). The average on the right hand side is over a single
eigenvalue only.
The local spectral fluctuations of a Poisson ensemble happen on the scale 1/N
when the distribution F(E) is N-independent. Therefore, we choose the scaling
κ = λ₀ 1_{k|k} + λ̃/N, with λ₀ the base point carrying a tiny imaginary increment for the
regularisation; then the limit N → ∞ of (76) leads to

lim_{N→∞} Z^{(k,N)}_{Poisson}(κ) = exp[ −Str λ̃ ∫ F(E) dE / (E − λ₀) ].  (77)
Comparison with the result (75) underlines our point that in the large L limit we
indeed find the Poisson statistics.
Since the critical scaling κ ∝ N^{1−M}, at which the transition to the sine-kernel
statistics happens, is independent of L, we expect that it is also the critical scale
at which the Poisson statistics turn over into the sine kernel. This certainly
deserves more investigation, yet we skip it here as it exceeds the scope of the present
work.
Figure 5. The macroscopic level densities ρ₁(λ) (blue solid curve, see Eq. (82))
and ρ_CL(λ) (red dashed curve, see Eq. (83)). Although both corresponding ensembles
are stable, only the one yielding ρ_CL(λ) can be stable under free convolution.
with respect to the considered stable ensemble. The notation ⟨·⟩_x denotes the average
over the GUE with variance x.
The average over the GUE can be cast into a supermatrix integral like in [75] with
the help of the superbosonisation formula. In doing so we assume that the Boson-
Boson block κBB is diagonalised which is always possible when its Jordan normal
form is diagonal. Then we define the diagonal matrix Sb = (sign(Im[κBB ]), 1k ) which
comprises all signs of the imaginary parts of κBB . In this way, the Hermitian numerical
part of (iH ⊗ Sb − i1N ⊗ Sκ) b BB is positive definite. This allows us to write the
superdeterminant in terms of an average over a Gaussian integral of a rectangular
supermatrix V of size (N |0) × (k|k) as the convergence is guaranteed now,
exp[i Str V † V Sκ b † ]d[V ]
R
b − i tr HV SV
−1
Sdet (H ⊗ 1k|k − 1N ⊗ κ) = R . (86)
exp[− Str V † V ]d[V ]
After averaging over H, we arrive at
exp[i Str V † V Sκ b † V )2 /2]d[V ]
R
b − x Str (SV
−1
Sdet (H ⊗ 1k|k − 1N ⊗ κ) = R . (87)
x exp[− Str V † V ]d[V ]
In the last step, we employ the superbosonisation formula [73, 74, 75] and replace
V†V by γU, with γ > 0 a scaling that needs to be adjusted and the supermatrix
U ∈ Herm(k|k) drawn from the very same set as in the supersymmetric
projection formula (39). Thus we eventually arrive at
    Z_α^{(k,N)}(κ) = ∫_0^∞ dx p̂_{α/2}(x) ∫_{Herm(k|k)} [d[U] / ((2i)^k π^{k²})] Sdet^N U
                     × exp[ −(xγ²/2) Str(ŜU)² + iγ Str U Ŝκ ].   (88)
Local Tail Statistics 23
[Figure 6 plot panels: the level density ρ(λ) versus λ (left) and the unfolded density ρ_uf(µ) versus µ (right) for α = 0.5, 1.0, 1.5, 1.8, with symbols for N = 50, 500 and the analytical curve; see the caption below.]
Figure 6. The non-unfolded (left plots) and unfolded (right plots) macroscopic
level densities for the stable ensemble (80) for the four stability exponents
α = 0.5, 1.0, 1.5, 1.8 and the matrix dimensions N = 50, 500. The analytical curves
(solid black curves) are generated via the integral (82). The unfolded eigenvalues
µ are given by (84). For each ensemble (coloured symbols) we have created 10^5
configurations. The scatter in the unfolded densities close to the boundaries
µ = ±1/2 is explained by very rare events, as those points correspond
to the heavy tails. The slight dip at µ = 0 for α = 0.5 is due to
the very narrow peak, which is not sufficiently resolved by the chosen bin width. The
vertical lines show the mean positions of the largest and smallest eigenvalue in
the unfolded variable µ. In the left plots we have mapped these positions back to
λ.
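Eq. (84) is not reproduced in this excerpt, so as an illustration only: any unfolding maps the eigenvalues to variables µ with a flat density on (−1/2, 1/2]. A generic, purely empirical version (a rank transform against the pooled spectrum; function names are ours, and this stands in for the analytical unfolding (84)) can be sketched as:

```python
import numpy as np

def unfold_empirical(samples):
    """Rank-based unfolding: map eigenvalues to mu in (-1/2, 1/2].

    `samples` is a list of 1d eigenvalue arrays, one per random matrix.
    The analytical unfolding (84) uses the integrated level density
    instead of the pooled empirical CDF used here.
    """
    pooled = np.sort(np.concatenate(samples))
    n = pooled.size
    # empirical CDF value in (0, 1], shifted to (-1/2, 1/2]
    return [np.searchsorted(pooled, ev, side="right") / n - 0.5 for ev in samples]

rng = np.random.default_rng(1)
# heavy-tailed toy spectra (Cauchy samples) just to exercise the map
samples = [np.sort(rng.standard_cauchy(50)) for _ in range(200)]
flat = np.concatenate(unfold_empirical(samples))
assert flat.min() > -0.5 and flat.max() <= 0.5
```

By construction the pooled unfolded eigenvalues are uniform on (−1/2, 1/2], which is why deviations near µ = ±1/2 in Fig. 6 directly expose the rare tail events.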
The scale of the macroscopic level density is obtained for κ ∝ √N κ0 with κ0 ∈ R.
Then we choose γ = √N and perform the saddle point analysis for N ≫ 1.
[Figure 7 plot panels: the level spacing distributions p(s) for α = 0.5, 1.0, 1.5, 1.8 and N = 50, 500, compared with the Poisson and GUE spacing distributions as well as p_1(s), p_{N−2}(s), p_{N−3}(s).]
The two corresponding saddle point solutions for the eigenvalues of UŜ are
z_± = (iκ0 ± √(4x − κ0²))/(2x). Here we have to split the discussion into cases depending on
whether 4x is larger or smaller than κ0².
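The saddle points z_± can be checked directly: they are the roots of the stationarity condition −xz + iκ0 + 1/z = 0 for the exponent −xz²/2 + iκ0 z + ln z appearing in (88). A quick numerical verification (a sketch; variable names are ours) covering both regimes:

```python
import cmath

def saddle_points(kappa0, x):
    """z_pm = (i*kappa0 +- sqrt(4x - kappa0^2)) / (2x)."""
    root = cmath.sqrt(4 * x - kappa0 ** 2)
    return [(1j * kappa0 + s * root) / (2 * x) for s in (+1, -1)]

def dphi(z, kappa0, x):
    """Derivative of the exponent -x z^2/2 + i kappa0 z + log z."""
    return -x * z + 1j * kappa0 + 1 / z

for kappa0, x in [(1.0, 2.0), (3.0, 1.0)]:  # 4x > kappa0^2 and 4x < kappa0^2
    for z in saddle_points(kappa0, x):
        assert abs(dphi(z, kappa0, x)) < 1e-12
        if 4 * x <= kappa0 ** 2:
            # in this regime both saddle points are purely imaginary
            assert abs(z.real) < 1e-12
```

The purely imaginary solutions in the second case are exactly the situation discussed below for 4x ≤ κ0².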
If 4x > κ0², then the real parts of the eigenvalues in the Boson-Boson block of
UŜ must have the signature Ŝ_BB, as otherwise the maximum of the integrand along
the contour is never attained at the saddle point. Moreover, the eigenvalues of the
Fermion-Fermion block need to be the same eigenvalues, since only then the Berezinian
of the diagonalisation of UŜ (the Jacobian in superspace, see [62]) is of order one. Other
solutions will yield higher orders in 1/N. To summarise, the saddle point manifold is
given by
    UŜ = (iκ0/(2x)) 1_{k|k} + (√(4x − κ0²)/(2x)) Ũ diag(Ŝ_BB, Ŝ_BB) Ũ^{−1},   (89)
where Ũ ∈ U(k_+, k_−|k_+, k_−)/[U(k_+|k_+) × U(k_−|k_−)] is a Haar distributed unitary
supermatrix, with k_+ and k_− the number of plus and minus signs in Ŝ_BB. We would like
to point out that the change of coordinates for (89) involves a Rothstein vector field [77],
which we denote by Y_Ũ. It is however N-independent, because it corresponds only to
the substitution and not to the integrand, as we know it for Jacobians.
If 4x ≤ κ0², the solutions become entirely imaginary, though only one of them is a
maximum of the integrand for the integration variables. The second derivative of the
exponential term at the two saddle points is
    ∂_z² [ −xz²/2 + iκ0 z + ln(z) ] |_{z=z_±} = −x + 4x² / (κ0 ± √(κ0² − 4x))².   (90)
Combining this with the fact that the bosonic eigenvalues run through these points
along the imaginary line and the fermionic ones parallel to the real line, both
amounting to an additional minus sign in the second term of the Taylor expansion about the
saddle point in (88), we find that only
    UŜ = i (κ0 − √(κ0² − 4x))/(2x) 1_{k|k}   (91)
can be a maximum along the contours. Here, we would like to underline that no
Rothstein vector field is needed, as we do not need to diagonalise the supermatrix to
reach this saddle point, in contrast to the case 4x > κ0².
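The curvature formula (90) can likewise be verified numerically: the second derivative of the exponent is −x − 1/z², and inserting the purely imaginary saddle points for 4x ≤ κ0² reproduces −x + 4x²/(κ0 ± √(κ0² − 4x))². A small check (a sketch with our own names):

```python
import cmath

def d2phi(z, x):
    """Second derivative of -x z^2/2 + i kappa0 z + log z, i.e. -x - 1/z^2."""
    return -x - 1 / z ** 2

kappa0, x = 3.0, 1.0  # a point with 4x <= kappa0^2
for sign in (+1, -1):
    denom = kappa0 + sign * cmath.sqrt(kappa0 ** 2 - 4 * x)
    z = 1j * denom / (2 * x)            # the purely imaginary saddle points
    rhs = -x + 4 * x ** 2 / denom ** 2  # right-hand side of (90)
    assert abs(d2phi(z, x) - rhs) < 1e-12
```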
The spectral fluctuations can be obtained by setting κ = √N κ0 1_{k|k} + κ̃/√N. We
expand

    UŜ = (iκ0/(2x)) 1_{k|k} + (√(4x − κ0²)/(2x)) Ũ diag(Ŝ_BB, Ŝ_BB) Ũ^{−1} + (1/√N) Ũ δQ Ũ^{−1}   (92)
for 4x > κ0² and

    UŜ = i (κ0 − √(κ0² − 4x))/(2x) 1_{k|k} + (1/√N) δQ   (93)
for 4x < κ0², up to second order in the massive modes δQ. The supermatrix δQ
essentially describes the superspaces Herm(k_+|k_+) × Herm(k_−|k_−) and Herm(k|k),
respectively, for the two situations. Their integrations yield 1 due to the normalisation,
and we eventually obtain
    lim_{N→∞} Z_α^{(k,N)}(κ) = ∫_0^{κ0²/4} dx p̂_{α/2}(x) exp[ −(κ0 − √(κ0² − 4x))/(2x) Str κ̃ ]   (94)
        + ∫_{κ0²/4}^∞ dx p̂_{α/2}(x) exp[ −(κ0/(2x)) Str κ̃ ] ∫_{U(k_+,k_−|k_+,k_−)/[U(k_+|k_+)×U(k_−|k_−)]} exp[Y_Ũ]
        × exp[ i (√(4x − κ0²)/(2x)) Str Ũ diag(Ŝ_BB, Ŝ_BB) Ũ^{−1} κ̃ ] dµ(Ũ) / π^{k² − k_+² − k_−²}.
The exponential term exp[Y_Ũ] is the application of the Rothstein vector field Y_Ũ,
which takes care of all Efetov-Wegner boundary terms [67, 68] that result from the
corresponding change of coordinates.

The result (94) is a superposition of the Poisson partition function (77) convolved
with the stable distribution p̂_{α/2}(x) and the sine-kernel partition function in its
supersymmetric form [65, 76], again convolved with p̂_{α/2}(x). For the sine kernel,
the vector field is usually dropped, as it only generates lower-point correlations than
the k-point correlation function, which can be derived by taking derivatives in κ and
then setting κ_BB = κ_FF, see [47, 48, 49].
For the computation above, we have assumed that κ0 is of order O(1) in N. We
could also choose κ0 of a larger order, since the spectrum has a heavy tail, so
that eigenvalues can indeed lie very deep in the tail. For instance, this is the case
for the product of inverse Ginibre matrices, where the ratio of the scale of the largest
eigenvalue and the bulk is of order scale_largest eigenvalue / scale_bulk = N.
Assuming κ0 ≫ 1 in (94), with κ̃ = O(κ0), we can Taylor expand the square root
in the first term, which makes the exponent independent of x, so that we get the Poisson
partition function (77),

    ∫_0^{κ0²/4} dx p̂_{α/2}(x) exp[ −(κ0 − √(κ0² − 4x))/(2x) Str κ̃ ]  ≈(κ0 ≫ 1)  exp[ −(1/κ0) Str κ̃ ].   (95)
For the second term we rescale x → κ0²x/2; then the Jacobian together with the
approximation p̂_{α/2}(κ0²x/2) ∝ (κ0²x)^{−1−α/2} is of size κ0^{−α}, while the integrand is
of order one. Thus it is a lower order term and vanishes when κ is of an order larger
than √N.
In summary, for κ ≫ √N, the partition function becomes the one for the Poisson
statistics,

    Z_α^{(k,N)}(κ)  ≈(N → ∞, κ ≫ √N)  exp[ −(1/κ0) Str κ̃ ] + O(κ0^{−α}).   (96)
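The large-κ0 approximation behind (95) amounts to (κ0 − √(κ0² − 4x))/(2x) → 1/κ0. One can check this scalar statement numerically with a placeholder weight standing in for p̂_{α/2} (a sketch: the exponential density below is an arbitrary stand-in, not the one-sided stable law):

```python
import numpy as np

kappa0, c = 50.0, 1.0                  # c stands in for Str kappa-tilde
xs = np.linspace(1e-3, 10.0, 4000)     # grid well inside (0, kappa0^2/4)
w = np.exp(-xs)                        # placeholder density, NOT p_hat
w /= w.sum()                           # discrete normalisation

expo = (kappa0 - np.sqrt(kappa0 ** 2 - 4 * xs)) / (2 * xs)  # exponent in (95)
lhs = np.sum(w * np.exp(-expo * c))    # weighted left-hand side of (95)
rhs = np.exp(-c / kappa0)              # Poisson-limit right-hand side
assert abs(lhs - rhs) < 1e-3           # agreement already at kappa0 = 50
```

The deviation scales like E[x]/κ0³, consistent with the convergence being slower for smaller κ0, as noted below for small α.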
The problem is that the convergence is slower for smaller α.
The question is whether the largest eigenvalues are now of larger order than
the scale of the macroscopic level density for the ensemble (80). Using the averaged
positions of the largest and smallest eigenvalue in the unfolded variables (84), we see
that their positions (vertical lines in Fig. 6) move to the extreme values at ±0.5. As
their change is, however, very tiny, it is likely that those positions saturate
at a certain value, which implies that the mixed statistics for the level spacing distribution
between the four largest and smallest consecutive eigenvalues seen in Fig. 7 will persist
in the limit N → ∞. We have also simulated the ensemble for α = 0.5 and
N = 5000, and it seems that this kind of saturation is taking place. Nonetheless, we
can confirm that the similarity to the Poisson statistics is diminished for smaller
α, which can indeed be understood from the resulting error term in Eq. (96). A more
detailed analysis is needed to decide which scenario, (94) or (96), is actually
realised.
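The qualitative distinction probed in Fig. 7, decorrelated (Poisson) versus repelling (GUE) levels, can also be tested without any unfolding via the consecutive-gap ratio r = min(s_i, s_{i+1})/max(s_i, s_{i+1}), whose mean is 2 ln 2 − 1 ≈ 0.386 for Poisson statistics and about 0.60 for the GUE. A sketch (our own helper names, not the paper's observable):

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_gap_ratio(levels):
    """Mean of min(s_i, s_{i+1}) / max(s_i, s_{i+1}) over consecutive spacings."""
    s = np.diff(np.sort(levels))
    return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

# Poisson statistics: independent levels, expected ratio 2 ln 2 - 1 ~ 0.386
r_poisson = np.mean([mean_gap_ratio(rng.uniform(size=200)) for _ in range(200)])

def gue_bulk(n):
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    ev = np.linalg.eigvalsh((g + g.conj().T) / 2)
    return ev[n // 4: 3 * n // 4]      # keep only the bulk of the spectrum

# GUE statistics: correlated levels, expected ratio ~ 0.60
r_gue = np.mean([mean_gap_ratio(gue_bulk(100)) for _ in range(100)])
assert r_poisson < 0.45 < r_gue        # clear separation of the two statistics
```

Because the ratio statistic needs no unfolding, it is a convenient cross-check for mixed statistics between the tail eigenvalues and the bulk.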
matrix ensembles. Surely, for complex spectra, other mechanisms will enter the game.
Nevertheless, in two dimensions the added spatial capacity will further facilitate an
increase in the decorrelation of the eigenvalues.
Acknowledgments
References
[1] M. L. Mehta: Random Matrices, Academic Press, Amsterdam, 3rd ed. (2004).
[2] P. J. Forrester: Log-gases and random matrices, Princeton University Press, Princeton, NJ
(2010).
[3] G. Akemann, J. Baik, and P. Di Francesco, eds.: The Oxford Handbook of Random Matrix
Theory, Oxford University Press, Oxford (2011).
[4] Z. Burda, J. Jurkiewicz, M. A. Nowak, G. Papp, and I. Zahed: Lévy Matrices and Financial
Covariances, Acta Physica Polonica Series B 34, 4747 (2001) [arXiv:cond-mat/0103108].
[5] M. M. Meerschaert and H.-P. Scheffler: Portfolio Modeling with Heavy Tailed Random Vectors,
Chapter 15 in Handbook of Heavy Tailed Distributions in Finance, S. T. Rachev ed., Elsevier,
Amsterdam (2003).
[6] Z. Burda, A. T. Görlich, and B. Waclaw: Spectral properties of empirical covariance matrices
for data with power-law tails, Phys. Rev. E 74, 041129 (2006) [arXiv:physics/0603186].
[7] O. Bohigas, J. X. de Carvalho, and M. P. Pato: Disordered ensembles of random matrices, Phys.
Rev. E 77, 011122 (2008) [arXiv:0711.3719].
[8] G. Akemann, J. Fischmann, and P. Vivo: Universal Correlations and Power-Law Tails in
Financial Covariance Matrices, Physica A 389, 2566–2579 (2010) [arXiv:0906.5249].
[9] G. Biroli and M. Tarzia: The Lévy-Rosenzweig-Porter random matrix ensemble,
[arXiv:2012.12841] (2020).
[10] M. C. Münix, R. Schäfer, and T. Guhr: A Random Matrix Approach to Credit Risk, PLoS ONE
9, e98030 (2014) [arXiv:1102.3900].
[11] T. Kanazawa: Heavy-tailed chiral random matrix theory, JHEP 2016, 166 (2016)
[arXiv:1602.05631].
[12] S. Oymak, J. A. Tropp: Universality laws for randomized dimension reduction, with
applications, Information and Inference: A Journal of the IMA 7, 337–446 (2017)
[arXiv:1511.09433].
[13] S. Minsker: Sub-Gaussian Estimators of the Mean of a Random Matrix with Heavy-Tailed
Entries, The Annals of Statistics 46, 2871–2903 (2018) [arXiv:1605.07129].
[14] C. H. Martin and M. W. Mahoney: Implicit Self-Regularization in Deep Neural Networks:
Evidence from Random Matrix Theory and Implications for Learning, [arXiv:1810.01075]
(2018).
[15] C. H. Martin and M. W. Mahoney: Traditional and Heavy-Tailed Self Regularization in Neural
Network Models, Proceedings of the 36th International Conference on Machine Learning,
Long Beach, California, PMLR 97 (2019) [arXiv:1901.08276].
[16] J. Heiny: Random Matrix Theory for Heavy-Tailed Time Series, J. Math. Sci. 237, 652–666
(2019).
[17] E. L. Rvačeva: On domains of attraction of multidimensional distributions, L’Vov. Gos. Univ.
Uč. Zap. 29, Ser. Meh.-Mat. No. 6, 5 (1954).
[18] J. Zhang and M. Kieburg: in preparation.
[19] P. Cizeau and J. P. Bouchaud: Theory of Lévy matrices, Phys. Rev. E 50, 1810 (1994).
[20] A. Soshnikov: Poisson Statistics for the Largest Eigenvalues of Wigner Random Matrices with
Heavy Tails, Elect. Comm. in Probab. 9, 82–91 (2004) [arXiv:math/0405090].
[21] G. Biroli, J.-P. Bouchaud, and M. Potters: On the top eigenvalue of heavy-tailed random
matrices, EPL 78, 10001 (2007) [arXiv:cond-mat/0609070].
[22] Z. Burda, J. Jurkiewicz, M. A. Nowak, G. Papp, and I. Zahed: Random Lévy Matrices Revisited,
Phys. Rev. E 75, 051126 (2007) [arXiv:cond-mat/0602087].
[23] G. Ben Arous and A. Guionnet: The Spectrum of Heavy Tailed Random Matrices, Commun.
Math. Phys. 278, 715–751 (2008) [arXiv:0707.2159].
[24] A. Auffinger, G. Ben Arous, and S. Péché: Poisson convergence for the largest eigenvalues of
heavy tailed random matrices, Ann. Inst. H. Poincaré Probab. Stat. 45, 589–610 (2009) [arXiv:0710.3132].
[25] R. Vershynin: Introduction to the non-asymptotic analysis of random matrices, Chapter 5 of:
Compressed Sensing, Theory and Applications, Y. Eldar and G. Kutyniok ed., Cambridge
University Press, Cambridge (2012).
[26] F. Benaych-Georges, A. Guionnet, and C. Male: Central Limit Theorems for Linear
Statistics of Heavy Tailed Random Matrices, Commun. Math. Phys. 329, 641–686 (2014)
[arXiv:1301.0448].
[27] F. Benaych-Georges and A. Maltsev: Fluctuations of linear statistics of half-heavy-tailed random
matrices, Stoch. Process. Their Appl. 126, 3331–3352 (2016) [arXiv:1410.5624].
[28] E. Tarquini, G. Biroli, and M. Tarzia: Level Statistics and Localization Transitions of Lévy
Matrices, Phys. Rev. Lett. 116, 010601 (2016) [arXiv:1507.00296].
[29] J. Heiny and T. Mikosch: Eigenvalues and Eigenvectors of Heavy-Tailed Sample Covariance
Matrices with General Growth Rates: the iid Case, Stoch. Process. Their Appl. 127, 2179–
2207 (2017) [arXiv:1608.06977].
[30] C. Bordenave and A. Guionnet: Delocalization at small energy for heavy-tailed random matrices,
Commun. Math. Phys. 354, 115–159 (2017) [arXiv:1603.08845].
[31] C. Male: The limiting distributions of large heavy Wigner and arbitrary random matrices, J.
Funct. Anal. 272, 1–46 (2017) [arXiv:1209.2366].
[32] O. Guédon, A. E. Litvak, A. Pajor, and N. Tomczak-Jaegermann: On the interval of fluctuation
of the singular values of random matrices, J. Eur. Math. Soc. 19, 1469–1505 (2017)
[arXiv:1509.02322].
[33] Z. Burda, R. A. Janik, J. Jurkiewicz, M. A. Nowak, G. Papp, and I. Zahed: Free Random Lévy
Matrices, Phys. Rev. E 65, 021106 (2002) [arXiv:cond-mat/0011451].
[34] G. Akemann and P. Vivo: Power-law deformation of Wishart-Laguerre ensembles of random
matrices, J. Stat. Mech. 0809, P09002 (2008) [arXiv:0806.1861].
[35] A.Y. Abul-Magd, G. Akemann, and P. Vivo: Superstatistical generalisations of Wishart-
Laguerre ensembles of random matrices, J. Phys. A 42, 175207 (2009) [arXiv:0811.1992].
[36] J. Choi and K. A. Muttalib: Rotationally invariant family of Lévy like random matrix ensembles,
J. Phys. A 42, 152001 (2009) [arXiv:0903.5266].
[37] T. Guhr and A. Schell: Matrix Moments in a Real, Doubly Correlated Algebraic Generalization
of the Wishart Model, (2020) [arXiv:2011.07573].
[38] A. K. Gupta and D. K. Nagar: Matrix Variate Distributions, Monographs and Surveys in
Applied and Pure Mathematics 104, CRC Press, London (1999).
[39] R. Balian: Random matrices and information theory, Il Nuovo Cimento B 57, 183–193 (1968).
[40] K. Adhikari, N. K. Reddy, T. R. Reddy, and K. Saha: Determinantal point processes in the
plane from products of random matrices, Ann. Inst. Henri Poincaré Probab. Stat. 52, 16–46
(2016) [arXiv:1308.6817].
[41] P. J. Forrester: Eigenvalue statistics for product complex Wishart matrices, J. Phys. A 47,
345202 (2014) [arXiv:1401.2572].
[42] G. Akemann and J. R. Ipsen: Recent exact and asymptotic results for products of independent
random matrices, Acta Physica Polonica B 46, 1747–1784 (2015) [arXiv:1502.01667].
[43] D.-Z. Liu, D. Wang, and L. Zhang: Bulk and soft-edge universality for singular values of products
of Ginibre random matrices, Ann. Inst. H. Poincaré Probab. Statist. 52, 1734–1762 (2016)
[arXiv:1412.6777].
[44] H. Bercovici, V. Pata and P. Biane: Stable Laws and Domains of Attraction in Free Probability
Theory, Annals of Mathematics 149, 1023–1060 (1999) [arXiv:math/9905206].
[45] O. Arizmendi E. and V. Pérez-Abreu: The S-transform of symmetric probability measures with
unbounded supports, Proc. Amer. Math. Soc. 137, 3057–3066 (2009).
[46] R. Speicher: Free Probability Theory, Chapter 22 of Ref. [3] (2011).
[47] K. B. Efetov: Supersymmetry in Disorder and Chaos, 1st ed., Cambridge University Press,
Cambridge (1997).
[48] M. R. Zirnbauer: The Supersymmetry Method of Random Matrix Theory, Encyclopedia of
Mathematical Physics 5, 151, eds. J.-P. Françoise, G. L. Naber and S. T. Tsou, Elsevier, Oxford
(2006) [arXiv:math-ph/0404057].
[49] T. Guhr: Supersymmetry, Chapter 7 of Ref. [3] [arXiv:1005.0979].
[50] J. Ginibre: Statistical ensembles of complex, quaternion, and real matrices, J. Math. Phys. 6,
440–449 (1965).
[51] J. Wishart: The Generalised Product Moment Distribution in Samples from a Normal,
Multivariate Population, Biometrika 20, 32–52 (1928).
[52] V. A. Marčenko and L. A. Pastur: Distribution of eigenvalues for some sets of random matrices,
Math. USSR Sbornik 1, 457–483 (1967), translated from the Russian in Mat. Sb. 72, 507–536.
[53] B. Dietz and F. Haake: Taylor and Padé analysis of the level spacing distributions of random-