You are on page 1of 30

Local Tail Statistics of Heavy-Tailed Random

Matrix Ensembles with Unitary Invariance

M. Kieburg and A. Monteleone


arXiv:2103.00817v1 [math-ph] 1 Mar 2021

School of Mathematics and Statistics, University of Melbourne, 813 Swanston


Street, Parkville, Melbourne VIC 3010, Australia
E-mail: m.kieburg@unimelb.edu.au, monteleonea@student.unimelb.edu.au

Abstract. We study heavy-tailed Hermitian random matrices that are unitarily


invariant. The invariance implies that the eigenvalue and eigenvector statistics
are decoupled. The motivating question has been whether a freely stable random
matrix has stable eigenvalue statistics for the largest eigenvalues in the tail.
We investigate this question through the use of both numerical and analytical
means, the latter of which makes use of the supersymmetry method. A surprising
behaviour is uncovered in that a freely stable random matrix does not necessarily
yield stable statistics and if it does then it might exhibit Poisson or Poisson-
like statistics. The Poisson statistics have been already observed for heavy-tailed
Wigner matrices. We conclude with two conjectures on this peculiar behaviour.
Keywords: Random Matrices, Local Spectral Statistics, Heavy-Tailed
Distributions, Stable Distributions, Supersymmetry Method
Local Tail Statistics 2

1. Introduction

Gaussian random matrices are a well-studied topic due to the enormous number of
applications in engineering, mathematics and physics. This large variety in scope
has led to a multitude of modifications of random matrix models veering away from
the traditional Gaussian ensembles into models with greater generality, e.g. see the
textbooks [1, 2, 3]. This has resulted in various studies of spectral statistics such as
bulk statistics, hard- and soft-edge statistics, as well as the statistics of multicritical
points where for instance spectral supports merge (also known as cuts) or an outlier
(a separate eigenvalue not belonging to the bulk of the spectrum) is absorbed into the
bulk of the spectrum.
What is less well-studied is the tail statistics of eigenvalues of a random matrix
that exhibits a heavy tail. Nevertheless, heavy-tailed random matrices have many
important applications especially when systems are non-stationary or open. For such
systems one can expect heavy-tails being more natural to occur rather than ensembles
for which all moments exist. Examples admitting heavy tails being time series analysis,
disordered systems, quantum field theory and more recently deep neural networks.
e.g., see [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]. From a theoretical point of view the
ensembles stable under matrix addition are of particular importance since they are
the fixed points of their respective domains of attraction via the multivariate central
limit theorem [17], see [18] for a recent work on unitarily invariant Hermitian random
matrices. The classification of these domains as well as the stable distributions is still
poorly understood from the perspective of spectral statistics. We aim to unveil one
part of this incomplete picture, namely the statistics of the largest eigenvalues for
unitarily invariant ensembles.
There are several works [19, 4, 16, 20, 21, 22, 23, 24, 25, 26, 27, 29, 30, 31, 32],
in physics and mathematics that have studied heavy-tailed Wigner matrices in detail
(meaning matrices whose entries are independently and identically distributed along
a heavy-tailed univariate probability measure). For these matrices it has been
shown [20, 21, 24, 28] that the largest eigenvalues in the heavy tail converge to Poisson
statistics. Moreover the eigenvectors become localised [9, 19, 28] while those in the bulk
become delocalised [30]. One may argue that the Poisson statistics of the eigenvalues
is due to the localisation of the eigenvectors. This perspective is supported by the
fact that the distribution of these largest eigenvalues are shared by the distribution
of the largest matrix entries [24]. In the present work, we will argue that this is not
necessarily the case and that the Poisson statistics or at least a very diminished level
repulsion can also be found for unitarily invariant random matrix ensembles. Such
ensembles have been discussed in [8, 11, 22, 33, 34, 35, 36, 37, 38]. The unitary
invariance takes the eigenvectors out of the picture as they are still Haar distributed
and thus delocalised. In [39], it was also shown that exactly such matrices maximise
the Shannon entropy when only a level density is given as an input.
In our present work we consider two specific random matrix ensembles. The first
one is about the singular values of a product of complex inverse Ginibre matrices,
see [40, 41, 42, 43] for an analytical computation of the finite N (matrix dimension)
statistics as well as the hard edge statistics. This ensemble is not stable for finite
matrix dimension, it is known [44, 45] that the limiting macroscopic level density is
stable under free convolution (sum of identical and independent copies of the random
matrix in the large N limit). Numerically we have confirmed that this asymptotic
stability also holds for the local statistics at the soft-edge and some part of the bulk.
Local Tail Statistics 3

Indeed in Ref. [43], those local spectral statistics have been proven for this kind of
random matrix. However the tail statistics do not share this behaviour. In the tail the
eigenvalues degenerate statistically, meaning clusters of eigenvalues show a diminished
level repulsion which is vanishing completely for N → ∞. This number of eigenvalues
inside such a cluster is equal to the number of copies of the random matrix that
has been added. Through using the supersymmetry method we have analytically
confirmed our numerical observations. Namely inside the tail the sum of random
matrices agrees with the direct sum of exactly the same matrices. This agreement is
however not true in the bulk or at the soft-edge. Therefore there is a transition. The
scaling of the eigenvalues that belong to this critical regime of the transition has been
identified and exhibits a dependence on the stability exponent.
Furthermore, we will address the question of the central limit theorem for the
tail statistics of the sum of these random matrices, which happens to converge to a
Poisson point process for the largest eigenvalues as it has already been known for
heavy-tailed Wigner matrices [20, 21, 24, 28]. To confirm whether this picture is
true more generally, we have consider a second random matrix ensemble which is a
Gaussian unitary ensemble (GUE) whose variance is averaged over a stable one-sided
distribution. Similar averages have also been discussed in [6, 7, 8, 34, 35]. This
construction yields a random matrix ensemble that is already stable at fixed matrix
dimension N so that its macroscopic as well as its microscopic statistics must be
stable. There is however the downside that two copies of the matrix are not free in
the sense of free probability [46]. Whether their largest eigenvalues will go exactly to
the Poisson statistics will depend on whether or not the largest eigenvalues live on a
scale that is bigger than the one of the bulk. This will be shown with a supersymmetry
calculation. The Monte Carlo simulations we have generated suggest that the average
position of the largest eigenvalues might saturate at a finite value.
The present work is built up as follows. In Sec. 2, we describe the numerical
experiment we have carried out for a sum of inverse complex Ginibre matrices in order
to get a feeling what is happening. We do not only confirm that the macroscopic
level density as well as the soft edge statistics are stable but show how the tail
statistics change by adding several independent copies of these random matrices. To
get an analytical confirmation, we compute the average of the ratio of characteristic
polynomials in Sec. 3. Therein, we consider a more general situation of the sum
of products of inverse Ginibre matrices as those are also known to be stable under
free convolution [44, 45]. Those averages encode the whole eigenvalue statistics and
are computed via the supersymmetry method [47, 48, 49]. In Sec. 4, we study the
limit of the sum of an infinite number of matrices analytically as well as numerically.
In particular we investigate the question regarding how universal are the Poisson
statistics for the largest eigenvalues of heavy-tailed ensembles and what are the scales
to finding them. We conclude in Sec. 5 by formulating two conjectures for heavy-tailed
ensembles.

2. Numerical Observations

We begin this section with a short numerical experiment (In fact the same one that
led to the discovery of this surprising result) as it will give us some insight into what
is going when we add random matrices with heavy tailed macroscopic level densities.
We consider a sum of L identically and independently distributed inverse complex
Local Tail Statistics 4

Ginibre matrices Xj ∈ CN ×N with the probability density



1 exp[− tr(Xj Xj )−1 ]
P (Xj ) = . (1)
πN 2 det(Xj† Xj )2N
Certainly, the inverse Xj−1 is a complex Ginibre matrix [50] and the product (Xj† Xj )−1
describes a complex (Wishart) Laguerre matrix [1, 51]. Thus, the full spectral statistics
of a single matrix (L = 1) is completely known, see [1, 2].
For instance, the macroscopic level density of (Xj† Xj )−1 is the Marčenko-Pastur
law [52],
   √
1 1 † −1 4−λ
ρ(X † Xj )−1 (λ) = lim tr δ λ1N − (Xj Xj ) = √ Θ[λ(4 − λ)] (2)
j N →∞ N N 2π λ
with the Heaviside step function Θ. This implies that the macroscopic level density
of Xj† Xj is equal to

1   √4 − λ−1

ρX † Xj (λ) = lim tr δ λ1N − N Xj Xj = Θ[λ(4 − λ−1 )]. (3)
j N →∞ N 2πλ3/2
Comparing this result with the definition of the stability exponent α, see [44, Appendix
A], which shows in the asymptotic approximation ρ(λ) ∝ |λ|−1−α for large |λ|  1,
the current ensemble corresponds to α = 1/2.
From free probability [44, 45], we know that this distribution is stable under free
convolution, meaning when we add two or more copies of the matrix Xj† Xj , i.e.,
L
1
Xj† Xj ,
X
YL = (4)
L1/α j=1

the resulting matrix YL shares the same macroscopic level density as each single Xj† Xj .
This can be readily checked via the R-transform which is implicitly defined with the
help of the Green function [46]
Z ∞
ρ(λ)dλ
G(z) = , (5)
−∞ z − λ
namely
1
R[G(z)] = z − . (6)
G(z)
The R-transform of Xj† Xj is [45]
RX † Xj (y) = −eiπ/2 y −1/2 . (7)
j

Due to the rule for a sum of two asymptotically free random matrices A and B,
RA+B (y) = RA (y) + RB (y), (8)
and the scaling rule of a random matrix A with a scalar µ,
RµA (y) = µRA (µy), (9)
we have
RYL (y) = −eiπ/2 y −1/2 , (10)
too. We have numerically illustrated this for L = 1, 2, 3, 4 in Fig. 1.
Local Tail Statistics 5
1.0

0.8 △






Free Prob.

0.6




ϱ(λ)




○ ▽ L=1
0.4 □



△ L=2
○ L=3







○ □ L=4












0.2 □

▽□



○○


▽○
▽○

□□

▽□
▽□

○▽○

○□

▽○
▽○

□▽○

△□

▽○


▽○
▽○

□▽□

△△
▽△
○□
▽□
○▽

○□


○□

▽▽
○□
△▽

○□△

○□△

○□▽

○□
○□


○□

△▽

○□△

○□△

○□
○□

△▽
○□
△▽

○□△
○□

○□

△▽

○□
○□

▽▽

○□□

▽○
○○□

▽○
▽○

△□
▽○
△▽○

△△

▽○
▽○

□□
▽○
△△
▽○
□▽○

△□

▽○
▽○

□△

▽○

▽○
□▽○

△△

▽○
▽○

△△

▽○
▽○

□□
▽○
△□

▽○
▽○

△▽○

△□

▽○
0.0□
▽□

○▽

○ ▽○

□▽○

□□

▽○


▽○
▽○

△□
▽○
△△○

▽▽
□○
△ ▽

□○
□○

▽△
□○
▽▽
□○
△△

□○
□○

▽△
□○
▽▽
□○
△△
□○
▽△
□○
▽▽
□○
△▽
□○
△△

□○
□○

▽□○

△△
□○
▽▽

□○


□○



0 2 4 6 8 10
λ

Figure 1. The macroscopic level density of the random matrix sum (4) has
been simulated by Monte Carlo simulations (coloured symbols) for L numbers of
matrices added together. This is compared to the analytical result (3) (black solid
curve) which should agree for any L via free probability. We generated for each
setting 106 configurations of dimension N = 200. The bin size is equal to 0.1.
Thus, the statistical and systematic error is below one percent.

Other statistics that can be checked to be stable when performing the sum (4) are
those in the bulk and at the soft-edge. In Ref. [43], those have been proven to be those
shared with the GUE. For instance, the soft edge lies at λmin = 1/4 in the macroscopic
scaling for any L ∈ N. Thus, we should find the microscopic level density [1, 2]
*   +
2/3 L
N 4N
X † Xj 
X
ρAiry (λ) = lim tr δ λ1N − 1/3 1N − 2
N →∞ 2 L j=1 j

= [Ai0 (λ)]2 − λ[Ai(λ)]2 (11)


0
with the Airy function Ai(x) and its derivative Ai (x). This stability has been
corroborated in Fig. 2 with the help of Monte Carlo simulations.
The question is what are the eigenvalue statistics in the tail? For L = 1 it is rather
trivial because (Xj† Xj )−1 is an ordinary Laguerre ensemble that exhibits a hard edge
with the microscopic level density [1, 2] of the Bessel kernel,
* √ !+
2 N
ρBessel (λ) = lim tr δ λ1N − (X1† X1 )−1/2
N →∞ π
π2
λ[J02 (πλ) + J12 (πλ)].
= (12)
2
We have used the Bessel function of the first kind Jν (x). We also note that we
have unfolded in such a way that the asymptotic of the level density for large λ is
ρBessel (λ) → 1, meaning the mean level spacing is approximately one.
From (12) we can read off the microscopic level density in the tail which we
Local Tail Statistics 6
1.0

0.8






○ □
△ ▽
△ ▽
○ □
○ △
△ ▽

○ ○


□▽△








○ □
▽ ○ □

▽ ▽
△ ▽
○ △
○ ▽
□ △

□ Airy
0.6 ▽



ϱsoft (λ)












□ □


○ □

△ □
○ △

○ ▽ □

○ △






0.4




□ ▽ L=1



○ △ L=2



▽ ○ L=3




□ L=4
0.2 □






















○ ○

▽△
△ □

○ △
0.0 □

○ □


○ △


○ ▽


○ □


○ □



-6 -4 -2 0 2
λmin -λ

Figure 2. Microscopic level density at the soft edge for the Monte Carlo
simulations (coloured symbols) of the random matrix sums (4) and the analytical
prediction (11) (solid black curve). We have employed the same configurations
generated for Fig. 1. The bin size is this time 0.2.

‘baptise’ the inverse Bessel statistics,


1 π2
2
ρBessel (λ−1 ) = 3 [J02 (πλ−1 ) + J12 (πλ−1 )].
ρinvB (λ) = (13)
λ 2λ
We note that the tail of the largest eigenvalues is decaying slightly stronger than the
macroscopic level density, namely ρinvB (λ) ∝ λ−3 in contrast to ρX † X1 (λ) ∝ λ−3/2 .
1
One reason is that the unfolding involves taking the square root of the eigenvalues.
Yet, this does not completely explain everything. The largest eigenvalues are far in
the tail and the different tail behaviours lie on a different scale than the heavy-tail
of the macroscopic level density. This is known already for the hard edge where the
macroscopic level density can have a totally different behaviour (for instance it goes
to zero instead of diverging as a square root singularity) than the microscopic ones
due to the very different scales.
We would like investigate whether the tail statistics are also stable and
particularly if Eq. (13) still holds for the sum (4) for any L ∈ N. Unfortunately
and quite surprisingly this is not the case. Numerically we have observed that the
microscopic level density in the tail follows the law
(Y ) 1 π2
L
ρinvB (λ) = L2 ρinvB (Lλ) = ρBessel ((Lλ)−1 ) = [J 2 (π(Lλ)−1 ) + J12 (π(Lλ)−1 )],
λ 2 2Lλ3 0
(14)
see Fig. 3 and left plots of Fig. 4. Note that this is not a simple rescaling for the level
density as then we would have multiplied the density by L and not L2 resulting from
the Jacobian. Therefore there has to be a different and perhaps more fundamental
change in the spectral statistics.
What is actually going on in the tail? To analyse this first numerically we have
measured the distributions of the four largest eigenvalues and the three level spacing
Local Tail Statistics 7
1.4

1.2

λ2 ϱtail(λ) 1.0

0.8

0.6
ϱBessel(x)
0.4
L=1
0.2 L=2
L=3
L=4
0.0
0 2 4 6 8 10
λ-1

Figure 3. The microscopic level density of the Monte Carlo simulations of Figs. 1
and 2 (coloured histograms) compared to the inverse Bessel statistic result (13)
(black solid curve). The bin size is this time 0.2. We underline that we have
employed the unfolded scale meaning that we have inverted the spectrum so that
the largest eigenvalues are those closest to the origin.

distributions between these four eigenvalues of YL for L = 1, 2, 3, 4, see Fig. 4. Each


single maximum of the microscopic Bessel level density (12) and thus the inverse
Bessel result (14) is now described by L eigenvalues and not just a single one. The
level spacing distribution indeed corroborates that the eigenvalues corresponding to
one maximum have a diminished level repulsion, see to the right plots of Fig. 4.
Our interpretation is that in the large N -limit of YL the eigenvalue tail statistics
asymptotes to the statistics of the direct sum
 † 
L X1 X1 0
..
M †
YbL = Xj Xj =  . (15)
 
.
j=1 †
0 XL XL
We have also simulated these random matrices and measured the same level spacing
distributions of the largest eigenvalues for comparison. In the right plots of Fig. 4
we see indeed similarities such that our point is substantiated. The deviations can be
understood as residual level repulsions which are suppressed onto very small scales.
Therefore we expect a convergence to the statistics of (15) though it will not be uniform
about very small spacings. For reference we have added the level spacing distribution
for the Poisson statistics (statistically independent eigenvalues).
pPoisson (s) = e−s (16)
and the Wigner surmise for the GUE
 
32 2 4 2
pPoisson (s) = x exp − x , (17)
π2 π
which is pretty close to the universal bulk distribution [53] as well as to the level
spacing distributions at the hard edge [54].
Local Tail Statistics 8
1.4
ϱBessel(x) L=1 1.0 Poisson
1.2 GUE
0.8
1.0
λ2 ϱtail(λ)

0.8 ϱtail 0.6

p(s)
0.6 p1
p2 0.4
0.4 p3
p4 0.2
0.2
0.0 0.0
0 2 4 6 8 10 0.0 0.5 1.0 1.5 2.0 2.5 3.0
λ-1 s
1.4
ϱBessel(x/2) L=2 1.0 Poisson (IGin)
p1↔2
1.2 GUE (IGin)
1.0
0.8 p2↔3
△△△
ϱtail △△ △ (IGin)
p3↔4
λ2 ϱtail(λ)


0.8 0.6▽○ ○▽ ○▽○ ▽△○ △
p1 △

p(s)
○ ○
▽ △
△ ▽○ ○
0.6 p2 △ ▽ ○ △
0.4 ▽ ○ △ (Block)
p3 △
△ ▽ ○ △
▽○ △ ▽ p1↔2
0.4 p4 △△ ▽○ ○ (Block)
▽△ ○
0.2 △ p2↔3
0.2 p1 +p2 △▽ ○
△ ▽ ○▽ ○
△△
p3 +p4 △ △▽ ○ ▽○
△ △ △ △○▽△ ○▽○
(Block)
p3↔4
0.0 0.0 △ △ △▽○

0 2 4 6 8 10 0.0 0.5 1.0 1.5 2.0 2.5 3.0


λ-1 s
1.4
ϱBessel(x/3) L=3 1.0 Poisson (IGin)
p1↔2
1.2 GUE (IGin)
1.0
0.8 p2↔3

▽ (IGin)
p3↔4
λ2 ϱtail(λ)

△▽
0.8 ϱtail 0.6 △▽ ○ ○ ○ ○ ○
p(s)

○ ○ △▽
p1 ○ ○ ○ △


0.6
p2 0.4 ▽△
▽△


p1↔2
(Block)
0.4 p3 △
▽ ○
△▽ ○ ○

(Block)
p4 0.2 △▽ ○
△ △ p2↔3
0.2 p1 +p2 +p3 ▽○△
○ ▽○
△ ▽△ (Block)
0.0 0.0
○ ○ △
▽ △
○ ○ ○▽○ p3↔4○

0 2 4 6 8 10 0.0 0.5 1.0 1.5 2.0 2.5 3.0


λ-1 s
1.4
ϱBessel(x/4) L=4 1.0 Poisson (IGin)
p1↔2
1.2 GUE (IGin)
1.0
0.8


p2↔3
▽ (IGin)
λ2 ϱtail(λ)

0.8 ϱtail 0.6


○△
▽ p3↔4
p(s)


p1 ▽ ○

0.6 p2 0.4 ▽○
△ (Block)
p3 ○

△ ▽ p1↔2
0.4 ○
▽ ○
p4 0.2

▽ ○
△ △
(Block)
p2↔3
0.2 p1 +p2 +p3 +p4 ▽○
△ ○

△ (Block)
0.0 0.0
○△
▽ ○ △
▽ ○ ○ p3↔4
0 2 4 6 8 10 0.0 0.5 1.0 1.5 2.0 2.5 3.0
λ-1 s

Figure 4. Left plots: distributions of the individual eigenvalues (dashed


coloured histograms, bin size 0.1, only Monte Carlo simulated) compared with
the microscopic tail level densities (14) in the unfolded (inverted) scale (red solid
histograms for the simulation with bin size 0.2 and black solid curve for the
analytical result) and with the sum of the distribution for those eigenvalues that
cluster to one of the peaks. The configurations are the same as those of Figs. 1, 2,
and 3. Right plots: unfolded level spacing distributions of the three largest
eigenvalues for the random matrix sum (4) (coloured histograms) and for the
direct sum (15) (coloured symbols), both with a bin size 0.1. For reference we
have also drawn the spacing distributions of the Poisson statistics (16) and the
Wigner surmise (17) of the GUE.

In Sec. 3 we have derived an analytical argument based on the supersymmetry


method that verifies our understanding. The question that remains is if this
observation is true in general and whether it is perhaps governed by a more universal
behaviour. This will be discussed later, see Sec. 5.
Local Tail Statistics 9

3. Analytical Corroborations

In Sec. 3.1 we define and discuss the random matrix ensembles namely the sum of
products of inverse Wishart-Laguerre matrices that we study with the supersymmetry
method in Sec. 3.2. After we have derived the corresponding supermatrix integral we
carry out the large N -limit in Sec. 3.3. Finally in Sec. 3.4 we look for the critical scale
where the spectral statistics of this ensemble changes from stable (meaning the sum
exhibits the same statistics as each matrix in the sum) to unstable.

3.1. Matrix Model and the Scaling of the Tail Statistics


After we have seen a glimpse of the peculiarities of the spectral statistics in heavy tails
we will show in this section that first the proper interpretation is indeed the one we
have concluded with in the last section and secondly that it works for more general
ensembles. For this reason we consider a sum of product matrices of the form
(M )
Xl = Xl,M Xl,M −1 · · · Xl,1 , for l = 1, . . . , L, (18)
N ×N
where each Xl,m ∈ C is independently drawn from (1). Then the sum is given by
L
(M ) 1 X (M ) (M )
YL = (Xl )† Xl . (19)
LM +1
l=1

Comparison with (4) shows that the stability exponent will be α = 1/(M + 1).
(M )
Certainly it has been shown that the macroscopic level density of Y1 ,
 
1 
(M )

ρY (M ) (λ) = lim tr δ λ1N − N M Y1 , (20)
1 N →∞ N

is stable under free convolution, too, see [44, 45]. The reason is the S-transform,
another transform in free probability indirectly defined via the R-transform [46]
1 1
R(y) = ⇔ S(χ) = . (21)
S[yR(y)] R[χS(χ)]
It has the property [46]
SY (M ) (χ) = SX † (M −1) (χ) = SY (M −1) (χ)SX † (χ) (22)
1 1,M Y1 X1,M 1 1,M X1,M

(M )
as X1,M and Y1 are asymptotically free. In our case we have
SX † (χ) = −χ ⇒ SY (M ) (χ) = (−χ)M ⇒ RY (M ) (y) = −eiπ/(M +1) y −M/(M +1) .
l,m Xl,m l l

(23)
This result reflects the stability when considering the classification in [44, Appendix
A].
(M )
The macroscopic level density of the inverse of Y1 is given in terms of the
Meijer G-function [56]
1 M M −3/2
ρ(Y (M ) )−1 (λ) = √ (24)
1 2π (M + 1)M +1/2
MM
 
M,0 {(1 + j − M )/M }j=1,...,M
× GM,M (M + 1)M +1 λ .

{(j − 1 − M )/(M + 1)}j=1,...,M
Local Tail Statistics 10

The Meijer G-function [55] is essentially an inverse Mellin transform of ratios of


Gamma functions, i.e.
 
p,q a1 , . . . , an ; b1 , . . . , bp
Gm,n zλ (25)
c1 , . . . , c m ; d 1 , . . . , d q
  
Z Qm Γ[cj + s] Qn Γ[1 − aj − s]
j=1 j=1 ds
= Q  Q  z −s ,
p q 2πi
j=1 Γ[bj + s] j=1 Γ[1 − dj + s]
C

where the contour C starts at −i∞ and finishes at +i∞ while having the poles of
Γ[cj + s] on the left side of the path and Γ[1 − aj − s] are on the right side. The
distribution (24) is called the Fuss-Catalan distribution since it has the Fuss-Catalan
numbers [56],
Γ[(M + 1)n − M ]
F CM (n) = with n ∈ N0 , (26)
Γ[M n − M + 2]Γ[n]
as its moments and it has a support on λ ∈ [0, (M + 1)M +1 /M M ]. Additionally, its
behaviour at the origin diverges like
λ−M/(1+M )
ρ(Y (M ) )−1 (λ) ≈ for λ  1. (27)
1 Γ[(M + 2)/(M + 1)]Γ[M/(M + 1)]
(M )
this implies that the tail behaviour of the matrix Y1 will be
M/(1+M )−2
λ
ρY (M ) (λ) = λ−2 ρ(Y (M ) )−1 (λ−1 ) ≈ for λ  1.(28)
1 1 Γ[(M + 2)/(M + 1)]Γ[M/(M + 1)]
where we can again read off the stability exponent α = 1/(M + 1) which is consistent
with the other discussion.
As a result from the above discussion, the scaling of the largest eigenvalues of
(M ) (M )
Y1 as well as YL will be N regardless of M . This scaling can be obtained by
combining N ρY (M ) (λ)dλ ∝ d(N λ−1/(M +1) ) for λ  1 and Eq. (20). We need this
1
scale to properly unfold the spectrum as well as to find the largest eigenvalues. For
(M )
instance, the unfolded microscopic tail level density of Y1 is given by the so-called
Meijer G-kernel result [57, Theorem 5.3] (νj = 0 for all j) of the hard edge microscopic
(M )
level density of (Y1 )−1 which is
N 1/(M +1) (M ) −1/(M +1)
  
(M )
ρMeijerG (λ) = lim tr δ λ1N − (Y1 )
N →∞ cM
Z 1  
0,M +1 −; −
= (M + 1)cM +1 M t(cM λ)M +1

M λ dtG 1,0
0 0; 0, . . . , 0
 
−; −
× G0,M +1 t(cM λ)M +1

M,0 (29)
0, . . . , 0; 0
with    
M +2 M
cM = Γ Γ (30)
M +1 M +1
(M )
the proper unfolding constant such that ρMeijerG (λ) ≈ λ for λ  1. For M = 2, this
formula reduces to (12). Due to the proper unfolding the spectrum becomes the half
sided picket fence spectrum for M → ∞ [58, 59, 60, 61]

(M )
X
lim ρMeijerG (λ) = δ(λ − j + 0.5). (31)
M →∞
j=1
Local Tail Statistics 11

The shift by 0.5 reflects the level repulsion from the origin and has thus a strong
resemblance to the spectrum of the quantum harmonic oscillator.
The microscopic tail level density is given then by
(M ) 1 (M )
ρinvMG (λ) = 2 ρMeijerG (λ−1 ). (32)
λ
(M )
This is the one that can be expected when studying the matrix Y1 . For the sum of
(M )
L copies of matrices, meaning YL , we will find that the averaged spectrum behaves
as if we would have directly summed these random matrices, cf., subsection 3.3.

3.2. Supersymmetry Method


Instead of computing the level density or more generally the k-point correlation
(M )
functions of YL we consider the partition function
* k +
(k,N )
Y det(Y (M ) − κF,j )
L
Z (M ) (κ) = (M )
(33)
YL
j=1 det(YL − κB,j )
with κF,j , κB,j ∈ C and κB,j ∈ / [0, +∞[ for all j = 1, . . . , k. It is a well-known fact
these kinds of partition functions can generate several different quantities such as the
k-point correlation functions and specifically the level density, see Refs. [1, 47, 48, 49].
The variables κF,j and κB,j usually contain the spectral variables that correspond
to the eigenvalue statistics, regularisations such as an imaginary shift away from the
positive real axis, as well as some source variables that can be expanded and can
create Green’s functions. When arranging the κF,j and κB,j in the form of a diagonal
supermatrix κ = diag(κB,1 , . . . , κB,k ; κF,1 , . . . , κF,k ), we can write the average in terms
of a superdeterminant
D E
(k,N ) (M )
Z (M ) (κ) = Sdet (YL ⊗ 1k|k − 1N ⊗ κ)−1 . (34)
YL

For an introduction to superalgebra and superanalysis we refer to [62]. As the


conventions slightly vary in the literature, we briefly summarise ours in the present
work. A (p1 |q1 ) × (p2 |q2 ) supermatrix A can be arranged into four blocks
 
ABB ABF
A= (35)
AFB AFF
with the Boson-Boson (ABB : p1 × p2 ) and the Fermion-Fermion (AFF : q1 × q2 )
blocks containing only commuting variables and the Boson-Fermion (ABF : p1 × q2 )
and the Fermion-Boson (AFB : q1 × p2 ) comprising of only anti-commuting variables.
Then for p1 = p2 = p and q1 = q2 = q, the supertrace and the superdeterminant are
given by
det(ABB − ABF A−1FF AFB )
Str A = tr ABB − tr AFF and Sdet A = , (36)
det AFF
respectively. Each commuting variable consists of a numerical part, which can be
real or complex, and a nilpotent one which is an even polynomial of the underlying
Grassmann variables {ηj } (algebraic basis of the anti-commuting variables), while an
anti-commuting variable only contains a nilpotent part which is an odd polynomial of
the Grassmann variables. The integral over a Grassmann variable is defined by the
two axiomatic identities
Z Z
dη = 0 and ηdη = 1. (37)
Local Tail Statistics 12

These two equalities are enough to fix the integration over Grassmann variables as
any function of Grassmann variables is understood as a finite Taylor series due to the
nilpotence of the Grassmann variables, i.e., ηj2 = 0.
Coming back to (34), we can readily generalise the partition function to an
arbitrary (k|k) × (k|k) supermatrix κ as long as the numerical part of the eigenvalues
of the Boson-Boson block κBB do not lie on the positive real line or the origin.
The idea of the supersymmetry method in random matrix theory is to map the
average over an ordinary random matrix to an average over a supermatrix whose
dimension is independent of N . In our case we essentially have products of matrices
of the form W W † . In Refs. [63, 64], one of the present authors has introduced a
short cut called the supersymmetric projection formula that essentially states for a
random matrix W ∈ CN ×N that is distributed along a unitarily invariant density
P (W W † ) = P (V W W † V † ) (for all V ∈ U(N ) and W ∈ CN ×N ), we can find a
superfunction Q(U ) for a (k|k) × (k|k) supermatrix U such that
Sdet (W W † ⊗ 1k|k + κ
b)−1 = Sdet (1N ⊗ U + κ b)−1 .



(38)
b can be a much larger supermatrix of dimensions (kN |kN ) × (kN |kN ). We
Here, κ
underline that on the left hand side we average over the ordinary random matrix W
while on the right hand side we average over the supermatrix U .
The supermatrix U is in the current situation relatively simple, namely its

Boson-Boson block is a positive definite Hermitian matrix UBB = UBB ∈ Herm+ (k)
with no Grassmann variables and its Fermion-Fermion block is a unitary matrix
UFF ∈ U(k) also containing no Grassmann variables. The Boson-Fermion and
Fermion-Boson blocks only comprise independent Grassmann variables with no further
symmetries. Therefore, the supermatrix space described by U is the supersymmetric
coset Herm (k|k) = Gl(k|k)/U(k|k), see [65]. The superfunction Q(U ) is given via
the supersymmetric projection formula [63, 64] times the measure Sdet U N d[U ] on
Herm (k|k),
" #!
WW† + W f† W
Z Z fW f
Q(U ) = d[W ] d[W ]P
f e
f† , (39)
CN ×N CN ×(k|k) UW U
where d[U ] is the product of the differentials of all supermatrix elements. The
supermatrix W f is a rectangular matrix where its first k columns are ordinary N
dimensional complex vectors and the last k columns are N dimensional vectors with
independent complex Grassmann variables as their entries, this set has been denoted
by CN ×(k|k) . The superfunction Pe is a supersymmetric extension of P that satisfies
P (V V † ) = Pe(V † V ), for any V ∈ CN ×(N +k|k) . (40)
Such a supersymmetric extension is commonly not unique but there are usually natural
choices as we will see below for the present situation of the inverse Ginibre matrices.
(M )
We approach the average (34) inductively by writing YL as a sum and a product
of inverse Ginibre matrices,
 
(k,N ) (M ) † (M ) (M )
Z (M ) (κ) = Sdet (XL XL ⊗ 1k|k + LM +1 YL−1 ⊗ 1k|k − LM +1 1N ⊗ κ)−1
YL
D E

= Sdet (XL,M XL,M ⊗ 1k|k + κbL,M )−1 (41)
with
 
M +1 (M −1) † −1 (M ) (M −1) −1 (M −1) † (M −1) −1
κ
bL,M = L (XL ) YL−1 (XL ) ⊗ 1k|k − (XL XL ) ⊗ κ .(42)
Local Tail Statistics 13

For the rearrangement of the matrices inside the superdeterminant we have employed
the identities
Sdet (AB) = Sdet (A) Sdet (B) and Sdet (H ⊗ 1k|k ) = 1 (43)
for any two square supermatrices A and B and any ordinary square matrix H.
Next, we apply (38) to (41) and obtain
(k,N )
b)−1


Z (M ) (κ) = Sdet (1N ⊗ UL,M + κ (44)
YL
 
(M −1) † (M −1) (M )
= Sdet (XL XL ⊗ UL,M + LM +1 [YL−1 ⊗ 1k|k − 1N ⊗ κ])−1 .

We repeat this procedure until each integration over the random matrix XL,j is
transferred into an integration over a supermatrix UL,j . The only thing what changes
is the supermatrix

(j−1) † −1 (M ) (j−1) −1
κ
bL,j = L M +1
(XL ) YL−1 (XL ) ⊗ (UL,j+1 · · · UL,M )−1 (45)

(j−1) † (j−1) −1
− (XL XL ) ⊗ κ(UL,j+1 · · · UL,M )−1 . (46)

This eventually leads to


D E
(k,N ) (M ) (M )
Z (M ) (κ) = Sdet (L−M −1 1N ⊗ UL + YL−1 ⊗ 1k|k − 1N ⊗ κ)−1 (47)
YL
with
(M )
UL = UL,1 · · · UL,M . (48)
(M ) (M )
The average over YL−1 looks now like the one over YL . Therefore, we can
proceed as before and find eventually
D E
(k,N ) (M )
Z (M ) (κ) = Sdet (VL − κ)−N (49)
YL
with
L
(M ) 1 X (M ) (M )
VL = Uj and Uj = Uj,1 · · · Uj,M (50)
LM +1 j=1
in analogy to (18) and (19).
The remaining ingredient to be calculated is the superfunction Q for the inverse
Ginibre ensemble. For this purpose, it helps to know the corresponding superfunction
for the Ginibre ensemble that has been computed with the projection formula in [64],
which follows from the Gaussian structure
exp[− tr V V † ] = exp[− Str V † V ], for any V ∈ CN ×(N +k|k) , (51)
thence,
" #!
WW† + W f†
Z Z fW
f ] exp − Str W
f
QGin (UGin ) = d[W ] d[W †
CN ×N CN ×(k|k) UGin W
f UGin
∝ exp[− Str UGin ]. (52)
We can make use of it by noticing that the averages (38) between Ginibre and inverse
Ginibre are related,
Sdet (W W † ⊗ 1k|k + κ
b)−1 = Sdet ((W W † )−1 ⊗ 1k|k + κ b−1 )−1 Sdet κ
b−1



(53)
−1 −1 −1


= Sdet (1N ⊗ UGin + κ b ) Sdet κ b
−N −1
b)−1 .


= ( Sdet UGin ) Sdet (1N ⊗ UGin + κ
Local Tail Statistics 14

Simple comparison with the general duality (38) yields the identification that each
−1
supermatrices Ul,m in (49) is a copy of UGin which we coin Vl,m .
Summarising, the partition function takes the form
(M ) QM QL
Sdet (VL − κ)−N m=1 l=1 e− Str Vl,m d[Vl,m ]
R
(k,N ) HermM L
(k|k)
Z (M ) (κ) = R (54)
YL ( Herm (k|k) ( Sdet V )N e− Str V d[V ])LM
with
L
(M ) 1 X
−1 −1
VL = Vj,1 · · · Vj,M . (55)
LM +1 j=1
(k,N )
The normalisation Z (M ) (z1k|k ) = 1 with an arbitrary complex z ∈
/ [0, +∞[ follows
YL
from the Wegner integration theorems [66, 67, 68, 69, 70, 71, 72] for supergroup
invariant integrands. These theorems are essentially multidimensional Cauchy-like
identities which tell us that the integral is essentially the integrand at V ∝ 1k|k
times an integrand independent constant. The proportionality constant cancels in
the invariants like the supertrace or the superdeterminant as can be readily checked
with (36). Therefore, the denominator in (54) is only an N independent constant, i.e.,
it is the power of the integral
Z
( Sdet V )N e− Str V d[V ] (56)
Herm (k|k)
 −1 N
det(VBB − VBF VFF
Z
VFB )
= e− tr VBB +tr VFF d[V ]
Herm (k|k) det VFF
Z Z
N −k − tr VBB
= (det VBB ) e d[VBB ] (det VFF )−N −k etr VFF d[VFF ]
Herm+ (k) U(k)
Z
N
× det (1k − VBF VFB ) d[VBF , VFB ].
C(k|0)×(0|k)
The latter equality can be found by substituting VBF → VBB VBF VFF . The first two
integrals are given by the Selberg integrals [1, 2]
 
Z k−1
Y πj Z
1
(det VBB )N −k e− tr VBB d[VBB ] =   det(x)N −k e− tr x ∆2k (x)d[x]
Herm+ (k) k! j=0 j! Rk
+

k−1
Y
= π j (N − j − 1)! (57)
j=0

and
 
Z k−1 j Z
1  π 
Y iϕ
(det VFF )−N −k etr VFF d[VFF ] = det(eiϕ )−N −k etr e ∆2k (eiϕ )d[eiϕ ]
U(k) k! j=0 j!
[0,2π]k
k−1
Y πj
= (2πi)k , (58)
j=0
(N + j)!

where we have first diagonalised the matrices and then integrated over their
eigenvalues. To compute the remaining integral over the Grassmann variables, we
Local Tail Statistics 15

employ that the Gaussian of Grassmann variables is equal to


Z k Z
Y
exp (− tr VBF VFB ) d[VBF , VFB ] = (−VBF,ab VFB,ba )dVBF,ab dVFB,ba
C(k|0)×(0|k) a,b=1
= 1. (59)
Additionally we exploit the superbosonisation formula [73, 74, 75] which tells us how
to replace the product VBF VFB by a Haar distributed unitary matrix U ∈ U(k) at the
cost of an additional factor of det U −k . In particular we compute
Z
N
det (1k − VBF VFB ) d[VBF , VFB ]
C (k|0)×(0|k)
R N
(k|0)×(0|k) det (1k − VBF VFB ) d[VBF , VFB ]
= CR
C(k|0)×(0|k)
exp (− tr VBF VFB ) d[VBF , VFB ]
N
det (1k − U ) det U −k dµ(U )
R
U(k)
= R
U(k)
exp (− tr U ) det U −k dµ(U )
N
det 1k − eiϕ det(eiϕ )−2k ∆2k (eiϕ )d[eiϕ ]
R
[0,2π]k
= R
[0,2π]k
exp (− tr eiϕ ) det(eiϕ )−2k ∆2k (eiϕ )d[eiϕ ]
k−1
Y (N + j)!
= . (60)
j=0
(N − j − 1)!
The introduction of the denominator guarantees that we do not get any additional
constants from the superbosonisation formula and the diagonalisation of the unitary
matrix U as both steps are independent of the integrand. Therefore, the result (54)
simplifies to
Z M Y L
(k,N ) (M )
Y d[Vl,m ]
Z (M ) (κ) = Sdet (VL − κ)−N e− Str Vl,m . (61)
YL
HermM L
(k|k) m=1
(2i)k π k2
l=1

3.3. Microscopic Tail Statistics


The tail asymptotic results from combining expression (61) for the partition
function (33) with the scaling shown in (29) for the largest eigenvalues inside the
tail. This implies the scaling
b −M −1
κ = N (cM λ) (62)
with cM as in (30) and the supermatrix λ being fixed. Plugging this into (61), we can
b
readily carry out the large N limit and find
 
(k,N ) b −M −1
lim Z (M ) N (cM λ) (63)
N →∞ YL
!−N M Y
L
Z b M +1 (M )
(cM λ) Y d[Vl,m ]
= lim Sdet 1k|k − VL e− Str Vl,m
N →∞ HermM L
(k|k)
N m=1 l=1
(2i)k π k2
Z M Y
L
h
(M )
iY d[Vl,m ]
= b M +1 V
exp Str (cM λ) L e− Str Vl,m
HermM L
(k|k) m=1 l=1
(2i)k π k2
  !M +1  L
Z M
cM λ
b
−1 −1 
Y d[V1,m ] 
=  exp  Str V1,1 · · · V1,M e− Str V1,m .
HermM
(k|k)
L m=1
(2i)k π k2
Local Tail Statistics 16

Certainly, this limit works only when λ bM +1 + [λ bM +1 ]† < 0 is negative definite. This
BB BB
is usually not the case since the imaginary part of λ b should eventually be set to 0.
Therefore we deform the contours by a slight rotation V1,j → e−iπ/(M +1) V1,j so that
for a positive definite Hermitian λ
b we obtain the convergent result
Z M
(k,N )
 
b −M −1 =
Y d[V1,m ]
lim Z (M ) N (cM λ) k π k2
(64)
N →∞ YL M
Herm (k|k) m=1 (2i)
 " #M +1 
M L
−iπ/(M +1) cM λ
b
−1 −1
X
× exp −e
 Str  V1,1 · · · V1,M + V1,m 
L m=1
"  M +1 !#L
(k,N ) L
= lim Z (M ) N .
N →∞ Y1 cM λ
b
In the last line, we recognise our claim in Sec. 2 that the tail eigenvalue statistics agree
with those of a direct sum, which is namely
* k + *
k
+L
Y det(LL (X (M ) )† X (M ) − κF,j ⊗ 1L ) Y (M ) † (M )
det((X1 ) X1 − κF,j ) 
l=1 l l
LL (M ) † (M )
=  (M ) † (M )
j=1 det( l=1 (Xl ) Xl − κB,j ⊗ 1L ) j=1 det((X1 ) X1 − κB,j )
 L
(k,N )
= Z (M ) (κ) . (65)
Y1

For M = L = 1, this result agrees with the Bessel kernel result [64] which has been
an important result in the study of Quantum Chromodynamics [76].
The result (64) is the analytical corroboration which we previously mentioned
despite that we considered here a particular kind of ensemble where we could carry
out the computation. However, we are rather sure that this is the generic behaviour
of heavy-tailed statistics. The eigenvalues seem to be too diluted to show their level
repulsion which is reflected in the Vandermonde determinant of their joint probability
densities, so that effectively they behave as if this level repulsion never existed.

3.4. Critical Regime of the Spectral Statistics


Now we investigate how far into the bulk the limiting statistics to a direct sum of
random matrices carries over and what the critical regime is where we eventually
enter stable statistics such as the sine kernel in the bulk. For this aim we consider a
general scaling
κ = N γ κ0 1k|k + N δ κ
e (66)
with δ < γ < 1. The point κ0 > 0 is where we zoom into the spectrum and hence
the condition δ < γ where δ has to be determined so that the spectral fluctuations on
the scale of the local mean level spacing are resolved. For γ = 1 this scale has been
δ = γ as we have seen and for γ > 1 we look into the tail of the largest eigenvalue
only implying that we do not see anything of much interest.
Choosing the slightly rotated version of the supermatrices Vl,m , cf., Eq. (64), we
rescale the supermatrices by N (1−γ)/(M +1)
(k,N )
Z (M ) N γ κ0 1k|k + N δ κ

e (67)
YL
Local Tail Statistics 17
Z  −N
(M )
= Sdet 1k|k + e−iπ/(M +1) N −(M +γ)/(M +1) (κ0 1k|k + N δ−γ κ
e)−1 VL
HermM L
(k|k)

M Y
L h i d[V ]
l,m
Y
× exp −e−iπ/(M +1) N (1−γ)/(M +1) Str Vl,m .
m=1 l=1
(2i)k π k2
When Taylor expanding the logarithm of the superdeterminant we notice that only
the linear term survives in the large N limit since 1 − j(M + γ)/(M + 1) < 0 for all
j ≥ 2 and any M ≥ 1 and γ > 1 − M , so that
(k,N )
Z (M ) N γ κ0 1k|k + N δ κ

e (68)
YL
Z
N 1
h i
(M )
≈ exp −e−iπ/(M +1) N (1−γ)/(M +1) Str (κ0 1k|k + N δ−γ κ
e)−1 VL
HermM L
(k|k)

M Y
L h i d[V ]
l,m
Y
× exp −e−iπ/(M +1) N (1−γ)/(M +1) Str Vl,m k π k2
m=1 l=1
(2i)
 L
 
(k,N ) M +1 γ δ
≈ Z (M ) L (N κ0 1k|k + N κe) .
Y1

In this expression, we can read off the local scale given by the exponent δ =
[(M + 2)γ − 1]/(M + 1) as then the Taylor expansion in κ e terminates with the linear
term in the asymptotic limit N → ∞.
Equation (68) already shows that even some part of the bulk statistics close to the
tail still follows the statistics of a direct sum. A more detailed saddle point analysis
would show that we get the direct sum of L independent sine-kernel statistics. The
condition of the scaling exponent γ which has to be satisfied for these kind of statistics
is γ > 1 − M . Hence, for γ ≤ 1 − M we have to go to higher order expansions of the
logarithm of the superdeterminant. Those terms couple the supermatrices.
For γ < 1−M , those higher order terms are also large so that we need to carry out
an additional saddle point expansion where all Vl,m become equal. Thence we would
find the statistics of a single sine-kernel, see [65, 76] for the supersymmetric integral
expression of these statistics. One needs to be careful, as Rothstein vectorfields [77] will
occur in the saddle point expansion as they account for all Efetov-Wegner boundary
terms [67, 68] that correspond to the diagonalisation of a supermatrix. All those terms
have been explicitly computed for diagonalising Hermitian supermatrices in [72].
For γ = 1 − M and, hence, δ = (1 − M − M 2 )/(M + 1), the quadratic term of
the Taylor expansion is of order 1 so that all supermatrices are coupled,
 2

(k,N )
Z (M ) N 1−M κ0 1k|k + N (1−M −M )/(M +1) κ e (69)
YL
Z " M X L
!#
N 1 (M )
X
≈ exp −e−iπ/(M +1) N M/(M +1) Str κ−1 0 VL + Vl,m
HermM L
(k|k) m=1 l=1
 −iπ/(M +1) −2iπ/(M +1)
YM YL
e (M ) e (M ) d[Vl,m ]
× exp Str κ
e VL + Str (VL )2 .
κ20 2κ20 m=1
(2i)k π k2
l=1

This expression can be decoupled by a Hubbard-Stratonovich transformation [78, 79]


 −2iπ/(M +1) 
e (M ) 2
exp Str (V L ) (70)
2κ20
Local Tail Statistics 18
−iπ/(M +1) (M )
exp[− 12 Str Ξ2 + e κ0
R
Herm(k|k)
Str ΞVL ]d[Ξ]
= R 1 ,
Herm(k|k)
exp[− 2 Str Ξ2 ]d[Ξ]
with Ξ a supermatrix whose Boson-Boson block is an arbitrary Hermitian matrix
ΞBB = Ξ†BB ∈ Herm(k) and the Fermion-Fermion block is an arbitrary anti-Hermitian
matrix ΞFF = −Ξ†FF ∈ i Herm(k). The Boson-Fermion and Fermion-Boson blocks
again contain independent Grassmann variables and the measure d[Ξ] is the product
of all differentials of the supermatrix entries. The partition function can then be
approximated anew by the partition function of a direct sum of identical random
matrices which however are now coupled by the supermatrix Ξ, i.e.,
 2

(k,N )
Z (M ) N 1−M κ0 1k|k + N (1−M −M )/(M +1) κ e (71)
YL
(k,N )
(K(Ξ))]L exp[− 12 Str Ξ2 ] d[Ξ]
R
[Z
N 1 Herm(k|k) Y1(M )

exp[− 12 Str Ξ2 ]d[Ξ]
R
Herm(k|k)

with
2
K(Ξ) = LM +1 [N 1−M κ0 1k|k + N (1−M −M )/(M +1)
(e
κ + κ0 Ξ)]. (72)
This result resembles those in [80] where the transition between independent diagonal
blocks of random matrices between a full matrix have been considered. This underlines
that our understanding of a transition between a direct sum to a full matrix without
block structures in the tail is ostensibly correct.
One last comment on the critical scale of the eigenvalues λ ∝ N 1−M of the random
(M )
matrix YL . It is only slightly larger than the scale N −M of the macroscopic level
density (20) which is freely stable. Although we are already then deep in the tail we
are far away from the scale of the largest eigenvalue which scales like N for the chosen
reference scale of the product of inverse Ginibre ensembles, cf., Eq. (1). We believe
that the relative scales should hold for other ensembles too and that in particular the
ratio of the scale between the largest eigenvalues and the critical scale should follow the
law scalelargest eigenvalue /scalecritical = N 1/(2α) where α is the stability exponent. The
Taylor expansion should follow the same mechanism however the probability density
P (H) and thus the corresponding superfunction Q(U ) may vary.

4. Stable Ensembles and Poisson Statistics in the Tail

The last thing we would like to address in the present article is the limit L → ∞.
The multivariate central limit theorem [17], for unitarily invariant random matrix
ensembles see [18], tells us that if the limit exists it should converge to one of the
stable random matrix ensembles and this already occurs at finite matrix dimension
N . Specifically this means when the Hermitian matrix H ∈ Herm(N ) is a strictly
stable random matrix associated to the stability exponent α and we draw two copies
H1 and H2 of H, then the sum (H1 + H2 )/21/α is also a copy of H, implying that it
exhibits the very same statistics, including but not limited to eigenvalues, eigenvectors,
and matrix entries (also for finite N ). Hence this behaviour should carry over to the
large N limit. The only question is whether the two limits L → ∞ and N → ∞
commute. Our numerical and analytical simulations suggest that this might be true
for some cases of the microscopic and macroscopic spectral scales depending on the
averaged position of the largest eigenvalues in the tail.
Local Tail Statistics 19

Additionally, mesoscopic spectral scales might arise which are reminiscent to the
order of the two limits. What underlines the latter point is the fact that the Bessel
result for the level density (12) of the case M = 1 can be found for all L, see left plots
in Fig. 4. One needs only to cluster the eigenvalues in L consecutive couples.
The limit L → ∞ for the model considered in Sec. 3 and in particular for
the result (65), is carried out in subsection 4.1, while we consider a random matrix
ensemble that is already stable at finite N in subsection 4.2.

4.1. Limiting Statistics for the Model of Sec. 3


What does the discussion above on the limiting stable distributions imply for the tail
statistics? For that reason, we start from the knowledge that the microscopic tail
statistics follows approximately the one of a direct sum of random matrices, i.e., we
start from (64). The spectral variables are scaled as follows
" !#−M −1
λ
e
κ = N L λ0 1k|k + M , (73)
λ0 (M + 1)L

where λ0 > 0 is the base point where we zoom into the spectrum and λ e measures the
spectral fluctuations. Plugging this into (64) and expanding for large L, we obtain
(k,N )
lim lim Z (M ) (κ)
L→∞ N →∞ YL
 Z M  
Y d[V1,m ] 1 −i Mπ+1 −1 −1
= lim 1 − e Str λV
e
1,1 · · · V1,M
L→∞
m=1
(2i)k π k2 L
HermM
(k|k)
" M
!# 
X L
−iπ/(M +1) +1 −1 −1
× exp −e Str λM
0 V1,1 · · · V1,M + V1,m
m=1
 Z M
Y d[V1,m ] −i Mπ+1 e −1 · · · V −1
= exp − k π k2
e Str λV 1,1 1,M
m=1
(2i)
HermM
(k|k)
" M
!# 
X
−iπ/(M +1) +1 −1 −1
× exp −e Str λM
0 V1,1 · · · V1,M + V1,m . (74)
m=1
For the second equality we have exploited the fact that the integrand is normalised
for any κ ∝ 1k|k because then all determinants in (33) cancel. The average in the
exponent can be simplified due to the supergroup invariance of the integrand that is
e −1 · · · V −1 so that eventually we have
only broken by Str λV 1,1 1,M
(k,N )
lim lim Z (M ) (κ) = exp[C Str λ]
e (75)
L→∞ N →∞ YL

with a constant C that depends only on k, M and λ0 .


Let us compare (75) with the partition function of a Poisson distributed spectrum,
meaning the random matrix H = diag(E1 , . . . , EN ) is diagonal and each eigenvalue
Ej is independently and identically distributed by F (E). We consider the average
(k,N ) N
ZPoisson (κ) = Sdet (H ⊗ 1k|k − 1N ⊗ κ)−1 = Sdet (E1k|k − κ)−1



, (76)
which is the counterpart of (34). The average on the right hand side is over a single
eigenvalue only.
Local Tail Statistics 20

The local spectral fluctuations of a Poisson ensemble happen on the scale 1/N
when the distribution F (E) is N independent. Therefore, we choose the scaling
κ = λ0 1k|k + λ/N
e with λ0 the base point with a tiny imaginary increment for the
regularization then the limit N → ∞ of (76) leads to
 Z 
(k,N ) F (E)dE
lim ZPoisson (κ) = exp − Str λ .
e (77)
N →∞ E − λ0
Comparison with the result (75) underlines our point that in the large L limit we
indeed find the Poisson statistics.
Since the critical scaling κ ∝ N 1−M where the transition to the sine-kernel
statistics happens, is independent of L we expect that it is the same critical scale
where the Poisson statistics should turn over into the sine-kernel as well. This certainly
deserves more investigation, yet we skip it here as it exceeds the scope of the present
work.

4.2. Tail Statistics for a Stable Random Matrix Model


To understand better whether the Poisson statistic holds true generally in the heavy
tail as we know it does for heavy-tailed Wigner ensembles [20, 21, 24, 28], we have
also numerically checked whether an already stable heavy-tailed unitarily invariant
random matrix exhibits the Poisson statistics in its tail. Therefore we would like to
point out that such an ensemble can be readily constructed with the help of the GUE.
Its probability density is
exp[− tr H 2 /(2σ 2 )]
PGUE (H; σ) = , H ∈ Herm(N ), (78)
2N/2 (πσ 2 )N 2 /2
with an arbitrary standard deviation σ > 0. This standard deviation is now drawn
e ∈]0, 1[
from a stable totally asymmetric univariate density with a stability exponent α
and asymmetry parameter βe = 1. The distribution of such a density is given by the
Fourier transform [81]
Z ∞
(−iω)αe
 

pbαe (x) = exp −ixω − . (79)
−∞ cos(π α
e /2) 2π
The condition α e < 1 is important because only then the support is restricted on the
positive real line R+ .
When determining the variance σ 2 from pαe (x), we obtain a symmetric unitarily
invariant random matrix which is stable with stability exponent α = 2eα. In particular
we consider the random matrix distribution
Z ∞
exp[− tr H 2 /(2x)]
Pα (H) = pbα/2 (x)dx. (80)
0 2N/2 (πx)N 2 /2
A simple computation readily proves its stability
Z
2 2
2N /α Pα ∗ Pα (21/α H) = 2N /α Pα (H 0 )Pα (21/α H − H 0 )dH 0
Herm(N )
h i
tr H 2
Z ∞ Z ∞ exp − 21−2/α [x1 +x2 ]
= dx1 dx2 pbα2 (x1 )b
p α2 (x2 )
0 0 2N/2 (2−2/α π[x1 + x2 ])N 2 /2
= Pα (H). (81)
Local Tail Statistics 21
2
The factor 2N /α is the Jacobian due to the rescaling H → 21/α H. Additionally
the second equality results from the convolution rules of two Gaussians and the third
equality takes into account the stability of pbα/2 (x).
The construction (80), where one averages over the variance has also been studied
for similar ensembles in [6, 7, 8, 34, 35] though the authors of this work did not aim
for stable distributions.
We have employed the construction above for the Monte √ Carlo simulations
which readily can be numerically generated by noticing that σH with σ and H
independently drawn from pbα/2 (x) and PGUE (H; 1), respectively, leads to the same
random matrix. For four different stability exponents α = 0.5, 1, 1.5, 1.8, we have
numerically simulated the level spacing distribution as well as their level density for
the four largest and four smallest eigenvalues. Those are drawn in Fig. 7. Since the
ensemble is symmetric about the origin the ensemble exhibits on both sides heavy-tails.
The macroscopic level density is an averaged Wigner semicircle
1 D E Z ∞ √4x − λ2
−1/2
ρα (λ) = lim tr δ(λ1N − N H) = pbα/2 (x)dx. (82)
N →∞ N λ2 /4 2πx
Evidently, the asymptotic behaviour is ρα (λ) ∝ |λ|−1−α for |λ|  1.
Despite the considered ensemble being stable, it is not ’freely stable’ meaning two
copies of the matrix are not free random variables. One can convince oneself fairly
easily by considering the case α = 1. The symmetric distribution which is stable under
free convolution is uniquely given (up to a scaling parameter c > 0) by the Lorentz
function [44] (also known as the Cauchy or Breit-Wigner distribution)
1 c
ρCL (λ) = . (83)
π 1 + c2 λ2
One can then show that it is always the case ρ1 (λ) 6= ρCL (λ) regardless of what scaling
c > 0 is chosen, cf., Fig. 5 where c is fixed by ρ1 (0) = c/π.
When computing the (unfolded) largest eigenvalues, we have been very surprised
by the fact that the largest eigenvalues are still deep in the bulk of the distribution
ρα (λ) in contrast to the model in Sec. 3. Thence, the approximation of this distribution
by its leading asymptotic behaviour ρα (λ) ∝ λ−1−α has not been suitable for the
unfolding. Instead, we have unfolded the entire spectrum with the new variables
Z λ
µ(λ) = ρα (λ0 )dλ0 . (84)
0
This substitution maps the spectrum from the real line R to the open interval
] − 1/2, 1/2[ with a uniform level density. The mean of the largest eigenvalue after this
mapping is well-defined and after mapping this back to the original spectrum, they
have been indicated as vertical lines in Fig. 6.
The reason why the largest eigenvalues do not lie at the utmost end of the heavy-
tail (for the unfolded variables at µ = ±1/2) is that the tails are a superposition
of almost all eigenvalues instead of (essentially) only the largest one. Therefore the
situation is not clear as to whether we should find the Poisson statistics when taking
the limit N → ∞. To further understand the problem in more detail we compute the
partition function
Zα(k,N ) (κ) = Sdet (H ⊗ 1k|k − 1N ⊗ κ)−1


(85)
Z ∞
Sdet (H ⊗ 1k|k − 1N ⊗ κ)−1 x pbα/2 (x)dx


=
0
Local Tail Statistics 22

0.25
ρCL (λ)

0.20

0.15 ρ1(λ)
ρ
0.10

0.05

0.00
-10 -5 0 5 10
λ

Figure 5. The macroscopic level densities ρ1 (λ) (blue solid curve, see Eq. (82))
and ρCL (λ) (red dashed curve, see Eq. (83)). Despite the corresponding ensembles
are stable only those yielding ρCL (λ) can be stable under free convolution.

with respect to the considered stable ensemble. The notation h.ix denotes the average
over the GUE with variance x.
The average over the GUE can be cast into a supermatrix integral like in [75] with
the help of the superbosonisation formula. In doing so we assume that the Boson-
Boson block κBB is diagonalised which is always possible when its Jordan normal
form is diagonal. Then we define the diagonal matrix Sb = (sign(Im[κBB ]), 1k ) which
comprises all signs of the imaginary parts of κBB . In this way, the Hermitian numerical
part of (iH ⊗ Sb − i1N ⊗ Sκ) b BB is positive definite. This allows us to write the
superdeterminant in terms of an average over a Gaussian integral of a rectangular
supermatrix V of size (N |0) × (k|k) as the convergence is guaranteed now,
exp[i Str V † V Sκ b † ]d[V ]
R
b − i tr HV SV
−1
Sdet (H ⊗ 1k|k − 1N ⊗ κ) = R . (86)
exp[− Str V † V ]d[V ]
After averaging over H, we arrive at
exp[i Str V † V Sκ b † V )2 /2]d[V ]
R
b − x Str (SV
−1


Sdet (H ⊗ 1k|k − 1N ⊗ κ) = R . (87)
x exp[− Str V † V ]d[V ]
In the last step, we employ the superbosonisation formula [73, 74, 75] and replace
V † V by γU with γ > 0 a scaling that needs to be adjusted and the supermatrix
U ∈ Herm (k|k) which is realized from the very same set as in the supersymmetric
projection formula (39). Thus we eventually arrive at
Z ∞ Z
d[U ]
Zα(k,N ) (κ) = dx pb (x) Sdet U N
k π k2 α/2
(88)
0 Herm (k|k) (2i)
xγ 2
 
2
× exp − Str (SU ) + iγ Str U Sκ .
b b
2
Local Tail Statistics 23

0.6 1.4

△ α=0.5 △

△ △



△ △△
0.5 1.2 △






△△
△△
△△△








△△








△△△
△△
△△△ △△△

△△











△△


△△

△△
△△
△ △△ △ △

△△




△△

○△










○△
○○
△△ △ △△ △ △△

△ △
△○

△△

○△


0.4 1.0 △






























































△○
△△

△△














○△





△△○




△△




△○



△ △○
△○△○




△○


△○△○
△○ △○
△○ △○
△○△△△△ △ ○
○○○ ○ △ ○
△ ○
△ ○○○
△△△△△○
○○△○△○
△○△○△

△○


△○



△ △○
△○○



△○



△△





△ △

△○




△△





○△




















○△














△△












































△△
△△
△△△△
△ △ ○
△ △ △
△△△
△△




ρuf (μ)
○ ○



△△△
△△
△ △
△△△



△ ○
△ △△
△ △△△
ρ(λ)




△ △
△△



0.8 △


△ △


0.3 △

0.6

○ △

0.2
○△ ○
△ 0.4
0.1 △
△○
○△ ○△
○△△ 0.2
△○
△○
△○
○○△○
△○
△○
○○
△△ ○
△△○△○ △○○
△△○○
△△○○○
△△△○○○○
△△△△○○○
△△△△△○○○○△△ △△△△○○○○
△△△△
0.0○○○○○
0.0
-4 -2 0 2 4 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6
λ μ
0.25 ○△
○ △○
△○△
○△

△ △



○ ○△ α=1.0 1.0






















△○




△△




△△○

○△○




○△○
○ △○
△○
△○ △○
△○△○○○○○ ○
△△△△△ △○ △ ○
△ ○ △ ○
△ ○ △ ○
△ ○ △ ○ △○
△ ○ △ ○○○
△△△△△ △○
△○
○○○ △○
△○△○△

△○


○○

△△
△○



△△





































0.20 ○
△ ○


△ ○

0.8

ρuf (μ)
0.15
ρ(λ)


△ ○

0.6
○△ △

0.10 ○△ ○△
0.4
○△ ○△
○△ ○△
○△ ○△
0.05 △
△○
△○
○△△
○○△○ 0.2
△○
△○ △○
△○
○△○△○ △○
△○
0.00
△△○○
○○ △
△△○○△○△ △○
△△
○○△△
○○△△

0.0
analytical
-4 -2 0 2 4 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6
λ μ
1.2
△○
△○
△○
△○△△△△
○○○○
△○
△○
△○
△○ α=1.5


△△

△△
△ N=50
0.15 ○



○ △



○ 1.0



















△△△







△△




△△


△△○△○
△○△○
△○○○○○○○○
△△△△△ △△△○
△○△○
△○△○
△○ △○
△○ △○
△○△○
△○ △ ○○○○○○
△○ ○○○
△ △ △ △△△△△△○ △
△○
△○
△○ ○


△ △
△○
○○




























○ ○ N=500

△ ○
△ △


△ ○


○ ○
△ 0.8
ρuf (μ)

0.10 ○
△ ○

ρ(λ)


○ ○
△ 0.6

△ ○


△ ○
△ 0.4
0.05 ○
△ △


○ ○


△ △

△○
△○
△ ○
△○
△○
△○
0.2
△○
△○
△○ △○
△○
△△△○
○○ △○
△○ △○
△○
△○
△△
○○
△△
○△△△△△△△
○○○○ ○○
△△
○○○ ○○
△△△
○○○
△△△△
○○○○

0.00 0.0
-6 -4 -2 0 2 4 6 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6
λ μ
0.12
△○
△○ △○
△○ △○
△○△○
△○
△○
△○
△○
△○ △
△△ △

△○ △○
△○
△○
△○
△○
△○
△○
△○
△○ α=1.8 △

△△
△△
△△△○





0.10 ○○

△○
△○
△○
△○
△○
△○
1.0 △













△ △△
△○△○


△△○△○
△○ ○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○○
△△△△△△△△△△△△△△△△△△△△△△△△△△△△△△△△△△△△△△△△△
△○ △○ △○
△○
△○ ○


△△
△○

△○




△△









○△ △ △ △

○△ ○△ △ △
○△ ○△ △
0.08 ○


△ ○△
○△ 0.8
○△ ○△
ρuf (μ)

○△ ○△
ρ(λ)

0.06 ○
△ ○△ 0.6

△ ○

0.04 ○△ ○
△ 0.4
○△ ○△
○△ ○△
0.02 ○△ ○△
△○
0.2
△○
△○
△ ○ △○
△○
△○
△○
△○
△○
△○
△○△○
△○ △○
△○ △○
△○
△○
△○
△○
△○
0.00○○△○△○△○△○△○△○△○△○△○△○△○△○ △○
△○
△○
△○
△○
△○
△○
△○
△○
△○
△○
△○
△ 0.0
-10 -5 0 5 10 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6
λ μ

Figure 6. The non-unfolded (left plots) and unfolded (right plots) macroscopic
level densities for the stable ensemble (80) for the four stability exponents
α = 0.5, 1.0, 1.5, 1.8 and the matrix dimensions N = 50, 500. The analytical curves
(solid black curves) are generated via the integral (82). The unfolded eigenvalues
µ are given by (84). For each ensemble (coloured symbols) we have created $10^5$
configurations. The scattering in the unfolded densities close to the boundaries
µ = ±1/2 can be explained by very rare events, as those points correspond
to the heavy tails. The slight dip at µ = 0 for α = 0.5 can be understood by
the very narrow peak, which is not sufficiently resolved by the chosen bin size. The
vertical lines show the mean positions of the largest and smallest eigenvalue in
the unfolded variables µ. In the left plots we have mapped these positions back to
λ.
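To make the numerical setup concrete, the following is a minimal Monte Carlo sketch (not the code used for the figures) of how samples of an ensemble like (80) and their unfolded eigenvalues can be generated, assuming that (80) refers to a GUE whose variance is drawn from the one-sided stable density $\hat{p}_{\alpha/2}$, as described in the conclusions; all helper names are ours.

```python
import numpy as np
from scipy.stats import levy_stable

def sample_stable_gue(N, alpha, rng):
    # Variance-averaged GUE: the global scale x follows a one-sided stable
    # law of index alpha/2 (beta = 1 skews the law onto the half-line for
    # alpha/2 < 1); np.abs guards against tiny negative numerical artefacts.
    x = np.abs(levy_stable.rvs(alpha / 2, 1.0, random_state=rng))
    G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return np.sqrt(x) * (G + G.conj().T) / 2

rng = np.random.default_rng(0)
N, alpha, samples = 50, 1.0, 2000
pooled = np.sort(np.concatenate(
    [np.linalg.eigvalsh(sample_stable_gue(N, alpha, rng))
     for _ in range(samples)]))

# Empirical stand-in for the unfolding (84): push one fresh configuration
# through the pooled cumulative distribution, mapping it onto [-1/2, 1/2].
cdf = (np.arange(1, pooled.size + 1) - 0.5) / pooled.size
lam = np.linalg.eigvalsh(sample_stable_gue(N, alpha, rng))
mu = np.interp(lam, pooled, cdf) - 0.5
```

Histogramming µ over many configurations should reproduce the flattening of $\rho_{\mathrm{uf}}(\mu)$ seen in the right panels, with the scatter near µ = ±1/2 stemming from the rare tail events mentioned in the caption.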


The scale of the macroscopic level density is obtained for $\kappa \propto \sqrt{N}\,\kappa_0 \in \mathbb{R}$. Then we choose $\gamma = \sqrt{N}$ and perform the saddle point analysis for $N \gg 1$.
Figure 7. The level spacing distributions of the unfolded eigenvalues (84)
for the stable random matrix ensemble (80) for the four stability exponents
α = 0.5, 1.0, 1.5, 1.8 and the matrix dimensions N = 50, 500. The Monte Carlo
simulations (coloured symbols) are the same as in Fig. 6. The nomenclature $p_k(s)$
refers to the spacing between the $k$th and $(k+1)$st smallest eigenvalue. Due to
the symmetry of the ensemble, the two distributions $p_k(s)$ and $p_{N-k}(s)$ must agree,
as they indeed approximately do (same coloured circles and triangles). As a
comparison we have drawn the Poisson distribution (16) (solid curve) and the
Wigner surmise (17) (dashed curve).

The two corresponding saddle point solutions for the eigenvalues of $U\hat{S}$ are $z_\pm = (i\kappa_0 \pm \sqrt{4x - \kappa_0^2})/(2x)$. Here we have to split the discussion into cases depending on whether $4x$ is larger or smaller than $\kappa_0^2$.
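As a quick consistency check (a sketch under our reading of the exponent in (88) after the rescaling, namely $-xz^2/2 + i\kappa_0 z + \ln z$ per eigenvalue), one can verify numerically that $z_\pm$ are exactly the stationary points in both regimes:

```python
import numpy as np

def saddle_points(x, k0):
    # Roots of the stationarity condition -x z + i k0 + 1/z = 0,
    # equivalently x z^2 - i k0 z - 1 = 0.
    disc = np.sqrt(complex(4 * x - k0 ** 2))
    return (1j * k0 + disc) / (2 * x), (1j * k0 - disc) / (2 * x)

for x, k0 in [(2.0, 1.0), (0.1, 1.0)]:       # 4x > k0^2 and 4x < k0^2
    for z in saddle_points(x, k0):
        assert abs(-x * z + 1j * k0 + 1 / z) < 1e-12
```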
If $4x > \kappa_0^2$, then the real parts of the eigenvalues in the Boson-Boson block of
$U\hat{S}$ must have the signature $\hat{S}_{\mathrm{BB}}$, as otherwise the maximum of the integrand along
the contour is never attained at the saddle point. Moreover, the eigenvalues of the
Fermion-Fermion block need to be the same eigenvalues, since only then is the Berezinian
of the diagonalisation of $U\hat{S}$ (the Jacobian in superspace, see [62]) of order one. Other
solutions will yield higher orders in $1/N$. To summarise, the saddle point manifold is
given by
$$U\hat{S} = \frac{i\kappa_0}{2x}\,1_{k|k} + \frac{\sqrt{4x-\kappa_0^2}}{2x}\,\widetilde{U}\,\mathrm{diag}(\hat{S}_{\mathrm{BB}}, \hat{S}_{\mathrm{BB}})\,\widetilde{U}^{-1} \tag{89}$$
where $\widetilde{U} \in \mathrm{U}(k_+,k_-|k_+,k_-)/[\mathrm{U}(k_+|k_+) \times \mathrm{U}(k_-|k_-)]$ is a Haar distributed unitary
supermatrix, with $k_+$ and $k_-$ the number of plus and minus signs in $\hat{S}_{\mathrm{BB}}$. We would like
to point out that the change of coordinates in (89) involves a Rothstein vector field [77],
which we denote by $Y_{\widetilde{U}}$. It is, however, $N$-independent because it corresponds only to
the substitution and not to the integrand, as is the case for Jacobians.
If $4x \le \kappa_0^2$, the solutions become entirely imaginary, though only one of them is a
maximum of the integrand along the integration contours. The second derivative of the
exponential term at the two saddle points is
$$\partial_z^2\left[-\frac{xz^2}{2} + i\kappa_0 z + \ln(z)\right]_{z=z_\pm} = -x + \frac{4x^2}{\left(\kappa_0 \pm \sqrt{\kappa_0^2 - 4x}\right)^2}. \tag{90}$$
Combining this with the fact that the bosonic eigenvalues run through these points
along the imaginary line and the fermionic ones parallel to the real line, both amount
to an additional minus sign in the second term of the Taylor expansion about the
saddle point in (88). Therefore only
$$U\hat{S} = i\,\frac{\kappa_0 - \sqrt{\kappa_0^2 - 4x}}{2x}\,1_{k|k} \tag{91}$$
can be a maximum along the contours. Here we would like to underline that no
Rothstein vector field is needed, as we do not need to diagonalise the supermatrix to
reach this saddle point, in contrast to the case $4x > \kappa_0^2$.
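A direct numerical check of (90) is straightforward, since the second derivative of the bracket is simply $-x - 1/z^2$ (a sketch; the parameter values are arbitrary):

```python
import numpy as np

x, k0 = 0.1, 1.0                              # regime 4x <= k0^2
root = np.sqrt(k0 ** 2 - 4 * x)
for sign in (+1, -1):
    z = 1j * (k0 + sign * root) / (2 * x)     # the purely imaginary saddles
    lhs = -x - 1 / z ** 2                     # exact second derivative
    rhs = -x + 4 * x ** 2 / (k0 + sign * root) ** 2
    assert abs(lhs - rhs) < 1e-12
```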
The spectral fluctuations can be obtained by setting $\kappa = \sqrt{N}\,\kappa_0 1_{k|k} + \widetilde{\kappa}/\sqrt{N}$. We
expand
$$U\hat{S} = \frac{i\kappa_0}{2x}\,1_{k|k} + \frac{\sqrt{4x-\kappa_0^2}}{2x}\,\widetilde{U}\,\mathrm{diag}(\hat{S}_{\mathrm{BB}}, \hat{S}_{\mathrm{BB}})\,\widetilde{U}^{-1} + \frac{1}{\sqrt{N}}\,\widetilde{U}\,\delta Q\,\widetilde{U}^{-1} \tag{92}$$
for $4x > \kappa_0^2$ and
$$U\hat{S} = i\,\frac{\kappa_0 - \sqrt{\kappa_0^2 - 4x}}{2x}\,1_{k|k} + \frac{1}{\sqrt{N}}\,\delta Q \tag{93}$$
for $4x < \kappa_0^2$, up to second order in the massive modes $\delta Q$. The supermatrix $\delta Q$
essentially describes the superspaces $\mathrm{Herm}(k_+|k_+) \times \mathrm{Herm}(k_-|k_-)$ and $\mathrm{Herm}(k|k)$,
respectively, in the two situations. Their integrations yield 1 due to the normalisation,
and we eventually obtain
$$\begin{aligned}
\lim_{N\to\infty} Z_\alpha^{(k,N)}(\kappa) &= \int_0^{\kappa_0^2/4} dx\,\hat{p}_{\alpha/2}(x)\, \exp\left[-\frac{\kappa_0 - \sqrt{\kappa_0^2-4x}}{2x}\,\mathrm{Str}\,\widetilde{\kappa}\right] \\
&\quad+ \int_{\kappa_0^2/4}^{\infty} dx\,\hat{p}_{\alpha/2}(x)\, \exp\left[-\frac{\kappa_0}{2x}\,\mathrm{Str}\,\widetilde{\kappa}\right] \int_{\mathrm{U}(k_+,k_-|k_+,k_-)/[\mathrm{U}(k_+|k_+)\times \mathrm{U}(k_-|k_-)]} \exp[Y_{\widetilde{U}}] \\
&\qquad\times \exp\left[i\,\frac{\sqrt{4x-\kappa_0^2}}{2x}\,\mathrm{Str}\,\widetilde{U}\,\mathrm{diag}(\hat{S}_{\mathrm{BB}},\hat{S}_{\mathrm{BB}})\,\widetilde{U}^{-1}\widetilde{\kappa}\right] \frac{d\mu(\widetilde{U})}{\pi^{k^2-k_+^2-k_-^2}}.
\end{aligned} \tag{94}$$
The exponential term $\exp[Y_{\widetilde{U}}]$ is the application of the Rothstein vector field $Y_{\widetilde{U}}$,
which takes care of all Efetov-Wegner boundary terms [67, 68] that result from the
corresponding change of coordinates.
The result (94) is a superposition of the Poisson partition function (77) convolved
with the stable distribution $\hat{p}_{\alpha/2}(x)$ and the sine kernel partition function in its
supersymmetric form [65, 76], again convolved with $\hat{p}_{\alpha/2}(x)$. For the sine kernel,
the vector field is usually dropped as it only generates lower-point correlations than
the $k$-point correlation function, which can be derived by taking derivatives in $\kappa$ and
then setting $\kappa_{\mathrm{BB}} = \kappa_{\mathrm{FF}}$, see [47, 48, 49].
For the computation above, we have assumed that $\kappa_0$ is of order $O(1)$ in $N$. We
could also choose $\kappa_0$ of a larger order, since the spectrum has a heavy tail so
that eigenvalues can indeed lie very deep in the tail. For instance, this is the case
for the product of inverse Ginibre matrices, where the ratio of the scale of the largest
eigenvalue and that of the bulk is of order $\mathrm{scale}_{\text{largest eigenvalue}}/\mathrm{scale}_{\text{bulk}} = N$.
Assuming $\kappa_0 \gg 1$ in (94) with $\widetilde{\kappa} = O(\kappa_0)$, we can Taylor expand the square root
in the first term, which renders the exponent independent of $x$ so that we obtain the Poisson
partition function (77),
$$\int_0^{\kappa_0^2/4} dx\,\hat{p}_{\alpha/2}(x)\, \exp\left[-\frac{\kappa_0 - \sqrt{\kappa_0^2-4x}}{2x}\,\mathrm{Str}\,\widetilde{\kappa}\right] \overset{\kappa_0 \gg 1}{\approx} \exp\left[-\frac{1}{\kappa_0}\,\mathrm{Str}\,\widetilde{\kappa}\right]. \tag{95}$$
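For clarity, the Taylor step behind (95) reads, for fixed $x$,
$$\sqrt{\kappa_0^2 - 4x} = \kappa_0 - \frac{2x}{\kappa_0} + O(\kappa_0^{-3}) \quad\Longrightarrow\quad \frac{\kappa_0 - \sqrt{\kappa_0^2 - 4x}}{2x} = \frac{1}{\kappa_0} + O(\kappa_0^{-3}),$$
and the remaining integral satisfies $\int_0^{\kappa_0^2/4} dx\,\hat{p}_{\alpha/2}(x) = 1 - O(\kappa_0^{-\alpha})$ because the tail of $\hat{p}_{\alpha/2}$ decays like $x^{-1-\alpha/2}$, consistent with the error term in (96) below.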
For the second term we rescale $x \to \kappa_0^2 x/2$; then the Jacobian together with the
approximation $\hat{p}_{\alpha/2}(\kappa_0^2 x/2) \propto (\kappa_0^2 x)^{-1-\alpha/2}$ is of the size $\kappa_0^{-\alpha}$, while the integrand is
of order one. Thus it is a lower order term and vanishes when $\kappa$ is of an order larger
than $\sqrt{N}$.
In summary, for $\kappa \gg \sqrt{N}$, the partition function becomes the one for the Poisson
statistics,
$$Z_\alpha^{(k,N)}(\kappa) \overset{N\to\infty,\ \kappa\gg\sqrt{N}}{\approx} \exp\left[-\frac{1}{\kappa_0}\,\mathrm{Str}\,\widetilde{\kappa}\right] + O(\kappa_0^{-\alpha}). \tag{96}$$
The problem is that the convergence is slower for smaller α.
The question is whether the largest eigenvalues are now of a larger order than
the scale of the macroscopic level density for the ensemble (80). Using the averaged
position of the largest and smallest eigenvalue in the unfolded variables (84), we see
that their positions (vertical lines in Fig. 6) move towards the extreme values at ±0.5. As
their change is however very tiny, it is quite likely that those positions saturate
at a certain value, which would imply that the mixed statistics for the level spacing
distribution between the four largest and smallest consecutive eigenvalues seen in
Fig. 7 persists in the limit N → ∞. We have also simulated the ensemble for α = 0.5 and
N = 5000, and it seems that this kind of saturation is indeed taking place. Nonetheless, we
can confirm that the similarity to the Poisson statistics is diminished for smaller
α, which can be understood from the resulting error term in Eq. (96). A more
detailed analysis is needed to decide which scenario, (94) or (96), is actually
realised.
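For reference, the level spacing distributions $p_k(s)$ of Fig. 7 can be estimated along the following lines (a sketch; unfolded_configs stands for a list of per-configuration unfolded spectra obtained via (84), and normalising each spacing family to unit mean is one common convention, not necessarily the one used for the figures):

```python
import numpy as np

def spacing_pk(unfolded_configs, k):
    # Spacing between the k-th and (k+1)-st smallest unfolded eigenvalue
    # (1-indexed), collected over configurations and rescaled to unit mean.
    s = np.array([np.sort(mu)[k] - np.sort(mu)[k - 1]
                  for mu in unfolded_configs])
    return s / s.mean()

# Histogram spacing_pk(...) and compare with the Poisson density exp(-s)
# and the Wigner surmise (pi * s / 2) * exp(-pi * s**2 / 4), cf. Fig. 7.
```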

5. Conclusions and Two Conjectures

We investigated heavy-tailed unitarily invariant random matrices and their limiting
spectral statistics in the tail. In particular, we addressed the question whether the
statistics are stable when all the remaining statistics are stable. To achieve this
we considered two classes of random matrices. One is a product of inverse Ginibre
matrices, which is known [44, 45] to yield a freely stable macroscopic level density.
Surprisingly, in the tail the spectrum is not stable. When L is the number of matrices
added in order to check the stability, the eigenvalues in the tail cluster into groups
of L eigenvalues. Eigenvalues inside such a cluster become statistically independent in
the limit of large matrix dimension N, while eigenvalues in different clusters are still
correlated. Our interpretation is that the sum of heavy-tailed random matrices
behaves in the tail like a direct sum of the same types of random matrices. Our
analytical computations with the supersymmetry method confirm this. When looking
at the particular details of the computation, one notices that this easily carries over
to a sum of heavy-tailed random matrices that do not necessarily have to be identically
distributed, nor do they need to have the same stability exponent. We believe that this
is even true for real and quaternionic matrices. We now arrive at our first conjecture.
Conjecture 1 (Tail Statistics of a Sum of Heavy-Tailed Random Matrix Ensembles).
Let $X_1, \ldots, X_L$ be independently (not necessarily identically) distributed random
matrices with heavy tails and unitary invariance (eigenvalues and eigenvectors are
uncorrelated), such that the position of the largest eigenvalues is on a scale larger than
that of the eigenvalues in the bulk. Then the statistics of the largest eigenvalues in
the tail of the sum $\sum_{j=1}^{L} X_j$ and of the direct sum $\bigoplus_{j=1}^{L} X_j$ will be the same up to a
scaling.
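A numerical test of Conjecture 1 can be set up schematically as follows (a sketch; sample is any user-supplied generator of a heavy-tailed, unitarily invariant Hermitian matrix satisfying the hypotheses, e.g. the sampler sketched below Fig. 6):

```python
import numpy as np
from scipy.linalg import block_diag

def tail_eigs(H, m):
    # The m largest eigenvalues of a Hermitian matrix H.
    return np.sort(np.linalg.eigvalsh(H))[-m:]

def sum_vs_direct_sum(sample, L, m, trials, rng):
    # Collect tail eigenvalues of the sum X_1 + ... + X_L and of the direct
    # sum X_1 (+) ... (+) X_L; the conjecture predicts matching statistics
    # of the largest eigenvalues up to an overall scaling.
    t_sum, t_dsum = [], []
    for _ in range(trials):
        Xs = [sample(rng) for _ in range(L)]
        t_sum.append(tail_eigs(np.sum(Xs, axis=0), m))
        t_dsum.append(tail_eigs(block_diag(*Xs), m))
    return np.array(t_sum), np.array(t_dsum)
```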
As we have seen, there are also stable statistics in the product of inverse
Ginibre matrices. The scale $\mathrm{scale}_{\text{critical}}$ of the transition from stable to
unstable spectral statistics has been quantified by the following ratio of scales:
$\mathrm{scale}_{\text{largest eigenvalue}}/\mathrm{scale}_{\text{critical}} = N^{1/(2\alpha)}$, where $\alpha$ is the stability exponent. We
think that this critical scaling might be universal. Certainly, a further investigation
is needed for more general classes of random matrix ensembles, like the multiplicative
Pólya ensembles [82, 83], which also comprise several heavy-tailed ensembles.
It seems to be paramount in Conjecture 1 that the largest eigenvalues are on a
larger scale than the bulk, as we have observed with the second class of ensembles. This
class consists of averaged GUEs, where one integrates over the variance with a stable
univariate distribution. This construction is very similar to the one in [6, 7, 8, 34, 35],
where other averaging distributions have been studied. It leads to a heavy-tailed
random matrix that is stable already at fixed matrix dimension N. With the help of
this class, we wanted to examine whether the limiting statistics for L → ∞ follows the
Poisson statistics, as this is the natural choice for a direct sum of random matrices.
Our numerical simulations suggest that this is not true. Through our analytical
computations we have found that the statistics have to follow the Poisson statistics if the
largest eigenvalue scales much larger than the bulk; otherwise one should find a mixture
of Poisson statistics and a kind of average of the sine-kernel result. The latter seems to
be the case for this average of the GUE with a heavy-tailed standard deviation. Thus,
we state our second conjecture.
Conjecture 2 (Tail Statistics of Stable Random Matrix Ensembles).
Let X be a heavy-tailed stable random matrix with unitary invariance (eigenvalues
and eigenvectors are uncorrelated). If the largest eigenvalues are considerably larger
than the bulk of eigenvalues, then the local spectral statistics in the tail follows Poisson
statistics.
These two conjectures should certainly also carry over in some way to the
other symmetry classes of Hermitian, see [65], as well as non-Hermitian random
matrix ensembles. Surely, for complex spectra other mechanisms will enter the game.
Nevertheless, in two dimensions the added spatial capacity will further facilitate the
decorrelation of eigenvalues.

Acknowledgments

MK acknowledges fruitful discussions with Jiyuan Zhang and Holger Kösters.

References
[1] M. L. Mehta: Random Matrices, Academic Press, Amsterdam, 3rd ed. (2004).
[2] P. J. Forrester: Log-gases and random matrices, Princeton University Press, Princeton, NJ
(2010).
[3] G. Akemann, J. Baik, and P. Di Francesco, eds.: The Oxford Handbook of Random Matrix
Theory, Oxford University Press, Oxford (2011).
[4] Z. Burda, J. Jurkiewicz, M. A. Nowak, G. Papp, and I. Zahed: Lévy Matrices and Financial
Covariances, Acta Physica Polonica Series B 34, 4747 (2001) [arXiv:cond-mat/0103108].
[5] M. M. Meerschaert and H.-P. Scheffler: Portfolio Modeling with Heavy Tailed Random Vectors,
Chapter 15 in Handbook of Heavy Tailed Distributions in Finance, S. T. Rachev ed., Elsevier,
Amsterdam (2003).
[6] Z. Burda, A. T. Görlich, and B. Waclaw: Spectral properties of empirical covariance matrices
for data with power-law tails, Phys. Rev. E 74, 041129 (2006) [arXiv:physics/0603186].
[7] O. Bohigas, J. X. de Carvalho, and M. P. Pato: Disordered ensembles of random matrices, Phys.
Rev. E 77, 011122 (2008) [arXiv:0711.3719].
[8] G. Akemann, J. Fischmann, and P. Vivo: Universal Correlations and Power-Law Tails in
Financial Covariance Matrices, Physica A 389, 2566–2579 (2010) [arXiv:0906.5249].
[9] G. Biroli and M. Tarzia: The Lévy-Rosenzweig-Porter random matrix ensemble,
[arXiv:2012.12841] (2020).
[10] M. C. Münix, R. Schäfer, and T. Guhr: A Random Matrix Approach to Credit Risk, PLoS ONE
9, e98030 (2014) [arXiv:1102.3900].
[11] T. Kanazawa: Heavy-tailed chiral random matrix theory, JHEP 2016, 166 (2016)
[arXiv:1602.05631].
[12] S. Oymak, J. A. Tropp: Universality laws for randomized dimension reduction, with
applications, Information and Inference: A Journal of the IMA 7, 337–446 (2017)
[arXiv:1511.09433].
[13] S. Minsker: Sub-Gaussian Estimators of the Mean of a Random Matrix with Heavy-Tailed
Entries, The Annals of Statistics 46, 2871–2903 (2018) [arXiv:1605.07129].
[14] C. H. Martin and M. W. Mahoney: Implicit Self-Regularization in Deep Neural Networks:
Evidence from Random Matrix Theory and Implications for Learning, [arXiv:1810.01075]
(2018).
[15] C. H. Martin and M. W. Mahoney: Traditional and Heavy-Tailed Self Regularization in Neural
Network Models, Proceedings of the 36th International Conference on Machine Learning,
Long Beach, California, PMLR 97 (2019) [arXiv:1901.08276].
[16] J. Heiny: Random Matrix Theory for Heavy-Tailed Time Series, J. Math. Sci. 237, 652–666
(2019).
[17] E. L. Rvačeva: On domains of attraction of multidimensional distributions, L’Vov. Gos. Univ.
Uč. Zap. 29, Ser. Meh.-Mat. No. 6, 5 (1954).
[18] J. Zhang and M. Kieburg: in preparation.
[19] P. Cizeau and J. P. Bouchaud: Theory of Lévy matrices, Phys. Rev. E 50, 1810 (1994).
[20] A. Soshnikov: Poisson Statistics for the Largest Eigenvalues of Wigner Random Matrices with
Heavy Tails, Elect. Comm. in Probab. 9, 82–91 (2004) [arXiv:math/0405090].
[21] G. Biroli, J.-P. Bouchaud, and M. Potters: On the top eigenvalue of heavy-tailed random
matrices, EPL 78, 10001 (2007) [arXiv:cond-mat/0609070].
[22] Z. Burda, J. Jurkiewicz, M. A. Nowak, G. Papp, and I. Zahed: Random Lévy Matrices Revisited,
Phys. Rev. E 75, 051126 (2007) [arXiv:cond-mat/0602087].
[23] G. Ben Arous and A. Guionnet: The Spectrum of Heavy Tailed Random Matrices, Commun.
Math. Phys. 278, 715–751 (2008) [arXiv:0707.2159].
[24] A. Auffinger, G. Ben Arous, and S. Péché : Poisson convergence for the largest eigenvalues of
heavy tailed random matrices, Ann. Inst. H. Poincaré Probab. Stat. 45, 589–610 (2009) [arXiv:0710.3132].
[25] R. Vershynin: Introduction to the non-asymptotic analysis of random matrices, Chapter 5 of:
Compressed Sensing, Theory and Applications, Y. Eldar and G. Kutyniok ed., Cambridge
University Press, Cambridge (2012).
[26] F. Benaych-Georges, A. Guionnet, and C. Male: Central Limit Theorems for Linear
Statistics of Heavy Tailed Random Matrices, Commun. Math. Phys. 329, 641–686 (2014)
[arXiv:1301.0448].
[27] F. Benaych-Georges and A. Maltsev: Fluctuations of linear statistics of half-heavy-tailed random
matrices, Stoch. Process. Their Appl. 126, 3331–3352 (2016) [arXiv:1410.5624].
[28] E. Tarquini, G. Biroli, and M. Tarzia: Level Statistics and Localization Transitions of Lévy
Matrices, Phys. Rev. Lett. 116, 010601 (2016) [arXiv:1507.00296].
[29] J. Heiny and T. Mikosch: Eigenvalues and Eigenvectors of Heavy-Tailed Sample Covariance
Matrices with General Growth Rates: the iid Case, Stoch. Process. Their Appl. 127, 2179–
2207 (2017) [arXiv:1608.06977].
[30] C. Bordenave and A. Guionnet: Delocalization at small energy for heavy-tailed random matrices,
Commun. Math. Phys. 354, 115–159 (2017) [arXiv:1603.08845].
[31] C. Male: The limiting distributions of large heavy Wigner and arbitrary random matrices, J.
Funct. Anal. 272, 1–46 (2017) [arXiv:1209.2366].
[32] O. Guédon, A. E. Litvak, A. Pajor, and N. Tomczak-Jaegermann: On the interval of fluctuation
of the singular values of random matrices, J. Eur. Math. Soc. 19, 1469–1505 (2017)
[arXiv:1509.02322].
[33] Z. Burda, R. A. Janik, J. Jurkiewicz, M. A. Nowak, G. Papp, and I. Zahed: Free Random Lévy
Matrices, Phys. Rev. E 65, 021106 (2002) [arXiv:cond-mat/0011451].
[34] G. Akemann, and P. Vivo: Power-law deformation of Wishart-Laguerre ensembles of random
matrices, J. Stat. Mech. 0809, P09002 (2008) [arXiv:0806.1861].
[35] A.Y. Abul-Magd, G. Akemann, and P. Vivo: Superstatistical generalisations of Wishart-
Laguerre ensembles of random matrices, J. Phys. A 42, 175207 (2009) [arXiv:0811.1992].
[36] J. Choi and K. A. Muttalib: Rotationally invariant family of Lévy like random matrix ensembles,
J. Phys. A 42, 152001 (2009) [arXiv:0903.5266].
[37] T. Guhr and A. Schell: Matrix Moments in a Real, Doubly Correlated Algebraic Generalization
of the Wishart Model, (2020) [arXiv:2011.07573].
[38] A. K. Gupta and D. K. Nagar: Matrix Variate Distributions, Monographs and Surveys in
Applied and Pure Mathematics 104, CRC Press, London (1999).
[39] R. Balian: Random matrices and information theory, Il Nuovo Cimento B 57, 183–193 (1968).
[40] K. Adhikari, N. K. Reddy, T. R. Reddy, and K. Saha: Determinantal point processes in the
plane from products of random matrices, Ann. Inst. Henri Poincaré Probab. Stat. 52, 16–46
(2016) [arXiv:1308.6817].
[41] P. J. Forrester: Eigenvalue statistics for product complex Wishart matrices, J. Phys. A 47,
345202 (2014) [arXiv:1401.2572].
[42] G. Akemann and J. R. Ipsen: Recent exact and asymptotic results for products of independent
random matrices, Acta Physica Polonica B 46, 1747–1784 (2015) [arXiv:1502.01667].
[43] D.-Z. Liu, D. Wang, and L. Zhang: Bulk and soft-edge universality for singular values of products
of Ginibre random matrices, Ann. Inst. H. Poincaré Probab. Statist. 52, 1734–1762 (2016)
[arXiv:1412.6777].
[44] H. Bercovici, V. Pata and P. Biane: Stable Laws and Domains of Attraction in Free Probability
Theory, Annals of Mathematics 149, 1023–1060 (1999) [arXiv:math/9905206].
[45] O. Arizmendi E. and V. Pérez-Abreu: The S-transform of symmetric probability measures with
unbounded supports, Proc. Amer. Math. Soc. 137, 3057–3066 (2009).
[46] R. Speicher: Free Probability Theory, Chapter 22 of Ref. [3] (2011).
[47] K. B. Efetov: Supersymmetry in Disorder and Chaos, 1st ed., Cambridge University Press,
Cambridge (1997).
[48] M. R. Zirnbauer: The Supersymmetry Method of Random Matrix Theory, Encyclopedia of
Mathematical Physics 5, 151, eds. J.-P. Françoise, G. L. Naber, and S. T. Tsou, Elsevier: Oxford
(2006) [arXiv:math-ph/0404057].
[49] T. Guhr: Supersymmetry, Chapter 7 of Ref. [3] [arXiv:1005.0979].
[50] J. Ginibre: Statistical ensembles of complex, quaternion, and real matrices, J. Math. Phys. 6,
440–449 (1965).
[51] J. Wishart: The Generalised Product Moment Distribution in Samples from a Normal,
Multivariate Population, Biometrika 20, 32–52 (1928).
[52] V. A. Marčenko and L. A. Pastur: Distribution of eigenvalues for some sets of random matrices,
Math. USSR Sbornik 1, 457–483 (1967), translated from the Russian in Mat. Sb. 72, 507–536.
[53] B. Dietz and F. Haake: Taylor and Padé analysis of the level spacing distributions of random-matrix ensembles, Z. Phys. B Condensed Matter 80, 153–158 (1990).
[54] G. Akemann, V. Gorski, and M. Kieburg: in preparation (2021).
[55] M. Abramowitz and I. A. Stegun: Handbook of Mathematical Functions with Formulas, Graphs,
and Mathematical Tables, Dover Books on Mathematics, New York (1965).
[56] K. A. Penson and K. Życzkowski: Product of Ginibre matrices: Fuss-Catalan and Raney
distributions, Phys. Rev. E 83, 061118 (2011) [arXiv:1103.3453].
[57] A. Kuijlaars and L. Zhang: Singular values of products of Ginibre random matrices, multiple
orthogonal polynomials and hard edge scaling limits, Commun. Math. Phys. 332, 759–781
(2014) [arXiv:1308.1003].
[58] G. Akemann, Z. Burda, and M. Kieburg: Universal distribution of Lyapunov exponents for
products of Ginibre matrices, J. Phys. A 47, 395202 (2014) [arXiv:1406.0803].
[59] G. Akemann, Z. Burda, and M. Kieburg: From Integrable to Chaotic Systems: Universal Local
Statistics of Lyapunov exponents, EPL 126, 40001 (2019) [arXiv:1809.05905].
[60] D. Z. Liu, D. Wang, and Y. Wang: Lyapunov exponent, universality and phase transition for
products of random matrices, [arXiv:1810.00433] (2018).
[61] G. Akemann, Z. Burda, and M. Kieburg: Universality of local spectral statistics of products of
random matrices, Phys. Rev. E 102, 052134 (2020) [arXiv:2008.11470].
[62] F. A. Berezin: Introduction to Superanalysis, D. Reidel Publishing Company, Dordrecht, 1st
ed.(1987).
[63] V. Kaymak, M. Kieburg, and T. Guhr: Supersymmetry Method for Chiral Random
Matrix Theory with Arbitrary Rotation Invariant Weights, J. Phys. A 47, 295201 (2014)
[arXiv:1402.3458].
[64] M. Kieburg: Supersymmetry for Products of Random Matrices, Acta Physica Polonica B 46,
1709–1728 (2015) [arXiv:1502.00550].
[65] M. R. Zirnbauer: Riemannian symmetric superspaces and their origin in random matrix theory,
J. Math. Phys. 37, 4986 (1996) [arXiv:math-ph/9808012].
[66] G. Parisi and N. Sourlas: Random Magnetic Fields, Supersymmetry, and Negative Dimensions, Phys. Rev. Lett. 43, 744 (1979).
[67] F. Wegner: unpublished notes (1983).
[68] K. B. Efetov: Supersymmetry and theory of disordered metals, Adv. Phys. 32, 53, (1983).
[69] F. Constantinescu: The supersymmetric transfer matrix for linear chains with nondiagonal
disorder, J. Stat. Phys. 50, 1167–1177 (1988).
[70] F. Constantinescu and H. de Groote: The integral theorem for supersymmetric invariants, J.
Math. Phys. 30, 981–992 (1989).
[71] M. Kieburg, H. Kohler, and T. Guhr: Integration of Grassmann variables over invariant
functions on flat superspaces, J. Math. Phys. 50, 013528 (2009) [arXiv:0809.2674].
[72] M. Kieburg: On the Efetov-Wegner terms by diagonalizing a Hermitian supermatrix, J. Phys.
A 44, 285210 (2011) [arXiv:1011.0836].
[73] H.-J. Sommers: Superbosonization, Acta Phys. Pol. B 38, 4105 (2007) [arXiv:0710.5375].
[74] P. Littelmann, H.-J. Sommers, and M. R. Zirnbauer: Superbosonization of invariant random
matrix ensembles, Commun. Math. Phys. 283, 343 (2008) [arXiv:0707.2929].
[75] M. Kieburg, H.-J. Sommers, and T. Guhr: A comparison of the superbosonization formula and
the generalized Hubbard-Stratonovich transformation, J. Phys. A 42, 275206 (2009) [arXiv:0905.3256].
[76] J. J. M. Verbaarschot: The Supersymmetric Method in Random Matrix Theory and Applications
to QCD, AIP Conference Proceedings 744, 277–362 (2004) [arXiv:hep-th/0410211].
[77] M. J. Rothstein: Integration on noncompact Supermanifolds, Trans. Am. Math. Soc. 299, 387–
396 (1987).
[78] J. Hubbard: Calculation of Partition Functions, Phys. Rev. Lett. 3, 77-80 (1959).
[79] R. L. Stratonovich: On a Method of Calculating Quantum Distribution Functions, Soviet Physics
Doklady 2, 416 (1957).
[80] M. Kieburg, J. J. M. Verbaarschot, and S. Zafeiropoulos: Spectral Properties of the Wilson Dirac
Operator and random matrix theory, Phys. Rev. D 88, 094502 (2013) [arXiv:1307.7251].
[81] P. Lévy: Calcul des Probabilités, Gauthier-Villars, Paris (1925).
[82] M. Kieburg and H. Kösters: Products of Random Matrices from Polynomial Ensembles, Ann.
Inst. H. Poincaré Probab. Statist. 55, 98–126 (2019) [arXiv:1601.03724].
[83] Y.-P. Förster, M. Kieburg, and H. Kösters: Polynomial Ensembles and Pólya Frequency
Functions, Journal of Theoretical Probability (2020), https://doi.org/10.1007/s10959-020-
01030-z, [arXiv:1710.08794].
