
Information geometry of Euclidean quantum fields

arXiv:2303.04081v1 [hep-th] 7 Mar 2023

Stefan Floerchinger
Theoretisch-Physikalisches Institut, Max-Wien Platz 1, 07743 Jena, Germany

E-mail: stefan.floerchinger@uni-jena.de

Abstract: Information geometry provides differential geometric concepts like a Riemannian
metric, connections and covariant derivatives on spaces of probability distributions.
We discuss here how these concepts apply to quantum field theories in the Euclidean domain
which can also be seen as statistical field theories. The geometry has a dual affine structure
corresponding to sources and field expectation values seen as coordinates. A key concept
is a new generating functional, which is a functional generalization of the Kullback-Leibler
divergence. From its functional derivatives one can obtain connected as well as one-particle
irreducible correlation functions. It also encodes directly the geometric structure, i. e. the
Fisher information metric and the two dual connections, and it determines asymptotic
probabilities for field configurations through Sanov’s theorem.
Contents

1 Introduction

2 Euclidean quantum field theory

3 Affine geometry for sources

4 The Fisher information metric

5 Expectation values as coordinates

6 Affine geometry for expectation values

7 Divergence functional

8 Functional integral representation for divergence functional

9 Connections

10 Metric and connection from divergence

11 Conclusions

1 Introduction

Information theoretic aspects of quantum field theory have recently been gaining importance,
partly triggered by advances in quantum information theory. Traditionally this
has been discussed in the context of black holes and the information paradox [1–4], but
also in the context of non-equilibrium dynamics and thermalization [5, 6], or for expanding
quantum fields [7–10]. Information theoretic methods can be used to classify states, for
example in terms of entropy or entanglement entropy, to compare states in terms of relative
entropies etc.
It is interesting in that context that there is actually a sophisticated geometric structure
on the space of probability distributions or quantum density matrices. This has been
developed mathematically in the field of information geometry, see refs. [11, 12] for recent
reviews. More specifically, the Fisher information metric, or its quantum generalization, the
quantum Fisher information metric, provides a natural Riemannian metric. Beyond this,
there is also a natural connection available, or actually two that are dual, described by the
Amari-Chentsov structure [11, 12]. They differ from the Levi-Civita connection and are
not metric-compatible or, in other words, they have non-metricity. Moreover, the entire

geometric structure is very nicely encoded in terms of divergences such as the Kullback-
Leibler divergence.
Using these geometric structures to better understand quantum field dynamics is an
excellent perspective for the coming years. As a preparation for this we follow a related
program in the present article: we study Euclidean quantum field theories that directly
have a probability interpretation, or that can be seen as classical statistical field theories
with one more spatial dimension, from an information theoretic point of view. This has
the advantage that concepts can be taken over directly from the mathematical literature on
classical information geometry, with the only generalization being from functions to
functionals¹.

¹ Even that step has been partly taken by mathematicians already [12].
In other words, in the present work we discuss the information geometry of statistical
field theories which can also be seen as quantum field theories in the Euclidean domain. We
explore specifically how source fields or field expectation values can be seen as alternative
coordinates in the space of functional probability distributions and what kind of geometric
structures are associated to them. The Riemannian Fisher information metric will be seen
to correspond to connected two-point correlation functions. The two dual connections
correspond to different kinds of three-point correlation functions (connected and
one-particle irreducible).
Let us mention here that somewhat different information theoretic concepts (based on
the Wasserstein metric) have been applied in particle physics previously [13, 14]. Moreover,
in ref. [15], Erdmenger, Grosvenor and Jefferson explored the use of the quantum Fisher
information metric in different model systems. In ref. [16] they studied relative entropy
and the connections of neural networks with the renormalization group, see also ref. [17].
The quantum Fisher information has also been employed in the context of entanglement
detection, see refs. [18–20] and references therein, and in the context of holography, see
e. g. ref. [21]. Geometric concepts also find applications in thermodynamics, see [22, 23]
for reviews. Relative entropy has recently been used to formulate entropic uncertainty
relations for quantum field theories [24].
Let us remark that the discussion we present here is, from a mathematical point of view,
heuristic. For example, issues with different topologies or the definition of the functional
integral will not be discussed. We leave in particular the issue of renormalization to future
work.
This paper is organized as follows. In section 2 we briefly recall the conceptual setup
of Euclidean quantum field theory before discussing a natural affine structure in terms
of sources in section 3. In section 4 we introduce the Fisher information metric and discuss its
significance in the field theoretic context. In section 5 we introduce expectation value fields
as alternative coordinates using Legendre transforms and the quantum effective action.
The corresponding affine geometry is discussed in section 6. In section 7 we discuss a
new generating functional for quantum field theory, the functional generalization of the
Kullback-Leibler divergence in terms of source coordinates, expectation value coordinates
and combinations thereof. Also the information theoretic significance of this object will
be discussed. Subsequently, in section 8 we present functional integral representations of
the divergence functional, including steepest descent or one-loop approximations to it. In
section 9 we discuss different connections in more detail, before explaining how they can
be obtained from the divergence functional in general coordinates in section 10. Finally,
we draw some conclusions in section 11.

2 Euclidean quantum field theory

We study here a bosonic quantum field theory in Euclidean spacetime, or statistical field
theory, with the partition function
\[ Z[J] = e^{W[J]} = \int D\phi \, \exp\Big( -S[\phi] + \int_x \sum_n J^n(x)\,\phi_n(x) \Big). \tag{2.1} \]

The index n labels field components, where the fields could be fundamental or composite,
and we use the abbreviation
\[ \int_x = \int d^d x \, \sqrt{g(x)}, \tag{2.2} \]
with g(x) = det(g_{\mu\nu}(x)) the determinant of the Euclidean spacetime metric.
From the partition function Z[J], or the Schwinger functional W[J] (known as the cumulant
generating function in statistics), one easily derives expectation values, correlation
functions or other interesting observables. One can easily show, using Hölder’s inequality,
that W [J] is convex,

\[ W[(1 - t) J' + t J''] \le (1 - t) W[J'] + t W[J''], \tag{2.3} \]

with 0 ≤ t ≤ 1.
In the following, it will also be convenient to use an abstract index which combines
position and field component, α = (x, n), and to use Einstein's summation convention,
\[ J^\alpha \phi_\alpha = \int_x \sum_n J^n(x)\,\phi_n(x), \tag{2.4} \]

with J^α = J^n(x) and φ_α = φ_n(x). It is usually clear from the context how expressions
involving abstract indices can be made concrete.
A Euclidean quantum field theory, or a statistical field theory, for example to describe
critical phenomena, is a probabilistic theory where the random variables are the field
configurations φn (x), and a probability density with respect to the functional integral
measure Dφ is given by

\[ p[\phi, J] = \exp\left( -S[\phi] + J^\alpha \phi_\alpha - W[J] \right). \tag{2.5} \]

The source field J α can be seen as a parameter within a class of probability distributions.
One may generalize this setup somewhat, in the sense that the fundamental random
variables could be some other microscopic degrees of freedom (e. g. of a lattice model or

similar), which we call χ but do not specify further. The probability density with
respect to the measure Dχ is then

\[ p[\chi, J] = \exp\left( -I[\chi] + J^\alpha \phi_\alpha[\chi] - W[J] \right), \tag{2.6} \]

with
\[ Z[J] = e^{W[J]} = \int D\chi \, \exp\left( -I[\chi] + J^\alpha \phi_\alpha[\chi] \right). \tag{2.7} \]

The fields φ_α[χ] are now some functionals of the fundamental or microscopic random
variables χ. The functional I[χ] plays the role of the action S[φ] but can differ as a result of
the variable change with fixed measure

\[ D\chi \, \exp(-I[\chi]) = D\phi \, \exp(-S[\phi]). \tag{2.8} \]

A class of probability distributions as in eq. (2.6) with the sources J α seen as parameters,
is known as an exponential family in information geometry [11, 12].
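As a concrete illustration (a minimal sketch with a hypothetical toy model, not part of the original derivation), the following treats a free scalar field on a periodic one-dimensional lattice, where the functional integral is an ordinary Gaussian integral, W[J] is quadratic in the sources, and the convexity (2.3) can be checked explicitly. All parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (hypothetical toy model): a free scalar field on a
# periodic 1d lattice with N sites and action S[phi] = 1/2 phi^T K phi,
# for which W[J] = 1/2 J^T K^{-1} J + W[0].
N, m2 = 8, 0.5
K = np.zeros((N, N))
for i in range(N):
    K[i, i] = 2.0 + m2                       # lattice -laplacian + mass term
    K[i, (i + 1) % N] = K[i, (i - 1) % N] = -1.0
Kinv = np.linalg.inv(K)

def W(J):
    """Schwinger functional up to the J-independent constant W[0]."""
    return 0.5 * J @ Kinv @ J

# Convexity of W[J] along a line of sources, cf. eq. (2.3):
rng = np.random.default_rng(0)
J1, J2 = rng.normal(size=N), rng.normal(size=N)
for t in (0.25, 0.5, 0.75):
    Jt = (1 - t) * J1 + t * J2
    assert W(Jt) <= (1 - t) * W(J1) + t * W(J2) + 1e-12
```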

3 Affine geometry for sources

Let us note here immediately that there is an affine structure on this exponential family
in the sense that an affine transformation of sources,

\[ J^\alpha \to J'^\alpha = M^\alpha{}_\beta J^\beta + c^\alpha, \tag{3.1} \]

with invertible M αβ , leads again to a probability distribution of the form (2.6), i. e. in the
exponential family. Note that this is not the case for non-linear transformations of sources.
In a related way, affine transformations of the form (3.1) preserve the convexity of the
Schwinger functional W[J], while more general non-linear transformations do not.
Moreover, the affine structure allows connecting probability distributions associated to
the sources J ′α and J ′′α through a distribution with source J α (t) given by the flat, so-called
e-geodesic²,

\[ J^\alpha(t) = (1 - t) J'^\alpha + t J''^\alpha, \tag{3.2} \]

where 0 ≤ t ≤ 1. Geometrically this means that the manifold of probability distributions in
the exponential family has a flat connection in terms of the coordinates J^α. As trajectories,
e-geodesics are characterized by the differential equation
\[ \frac{d^2}{dt^2} J^\alpha(t) + (\Gamma_{\mathrm{E}})_\beta{}^\alpha{}_\gamma[J] \, \frac{d}{dt} J^\beta(t) \, \frac{d}{dt} J^\gamma(t) = 0, \tag{3.3} \]
where the connection vanishes here, in terms of source coordinates,

\[ (\Gamma_{\mathrm{E}})_\beta{}^\alpha{}_\gamma[J] = 0. \tag{3.4} \]

In another coordinate system, related to J α in a non-linear way, this would not be the case,
however.
² A geodesic is here a line connecting two points, determined by a connection, and not necessarily the
shortest path in any sense.
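A hedged numerical illustration of this affine structure (a sketch with assumed toy data, not from the text): along the e-geodesic (3.2) the density is the normalized geometric mean of the endpoint densities, which is again in the exponential family.

```python
import numpy as np

# Minimal sketch with a hypothetical discrete exponential family
# p(x) ~ exp(-I(x) + J f(x)): along the e-geodesic (3.2) the density
# equals the normalized geometric mean of the endpoint densities.
rng = np.random.default_rng(0)
I, f = rng.normal(size=5), rng.normal(size=5)   # assumed toy data

def p(J):
    w = np.exp(-I + J * f)
    return w / w.sum()

J1, J2, t = -0.3, 0.8, 0.4
geo = p((1 - t) * J1 + t * J2)                  # point on the e-geodesic
gm = p(J1) ** (1 - t) * p(J2) ** t              # geometric mean of endpoints
assert np.allclose(geo, gm / gm.sum())
```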

4 The Fisher information metric

Spaces of probability distributions have a natural Riemannian metric, the Fisher informa-
tion metric. For distributions parametrized by coordinates J α it is given by
\[ \begin{aligned} G_{\alpha\beta}[J] &= \int D\chi \, p[\chi, J] \, \frac{\delta}{\delta J^\alpha} \ln p[\chi, J] \, \frac{\delta}{\delta J^\beta} \ln p[\chi, J] \\ &= - \int D\chi \, p[\chi, J] \, \frac{\delta^2}{\delta J^\alpha \delta J^\beta} \ln p[\chi, J]. \end{aligned} \tag{4.1} \]
In the last equation we have used the product rule and that ∫ Dχ p[χ, J] = 1.
The information-theoretic significance of the Fisher metric is that the infinitesimal
Fisher-Rao distance of two nearby probability distributions

\[ ds^2 = G_{\alpha\beta}[J] \, dJ^\alpha dJ^\beta \tag{4.2} \]

gives a measure for how well distributions at J and J′ can be distinguished [25]. As usual
in Riemannian geometry, the length of a path is defined through line integrals of ds.
For the exponential family, one finds easily

\[ G_{\alpha\beta}[J] = \frac{\delta^2}{\delta J^\alpha \delta J^\beta} W[J] = \langle \phi_\alpha[\chi] \phi_\beta[\chi] \rangle - \langle \phi_\alpha[\chi] \rangle \langle \phi_\beta[\chi] \rangle. \tag{4.3} \]
The Fisher metric agrees with the connected two-point correlation function of fields! With
the exception of Gaussian theories, this correlation function still depends on the source J^α
and the metric is therefore not constant.
We are now in the interesting situation that the space of probability distributions as
parametrized by J α has a non-trivial Riemannian metric Gαβ [J], but at the same time has
an affine structure with e-geodesics corresponding to vanishing connection, (ΓE )β αγ [J] = 0.
It is clear that this connection is, for non-Gaussian theories, not the Levi-Civita connection
corresponding to the metric Gαβ [J], and it remains to characterize it further.
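Before moving on, eq. (4.3) can be made tangible with a minimal numerical sketch for a hypothetical zero-dimensional "field theory" (a single integration variable, parameters chosen for illustration), comparing the connected two-point function with a finite-difference second derivative of W[J].

```python
import numpy as np

# Minimal sketch (hypothetical zero-dimensional toy model): a single
# "field" variable with action S(phi) = m2/2 phi^2 + lam/24 phi^4.
# The Fisher metric (4.3) is the connected two-point function and
# agrees with the second derivative of the Schwinger functional W[J].
m2, lam = 1.0, 0.5
phi = np.linspace(-8, 8, 8001)
dphi = phi[1] - phi[0]
S = 0.5 * m2 * phi**2 + lam / 24 * phi**4

def W(J):
    return np.log(np.sum(np.exp(-S + J * phi)) * dphi)

def fisher(J):
    p = np.exp(-S + J * phi - W(J))          # probability density (2.5)
    mean = np.sum(p * phi) * dphi
    return np.sum(p * phi**2) * dphi - mean**2

J, eps = 0.3, 1e-3
W2 = (W(J + eps) - 2 * W(J) + W(J - eps)) / eps**2
assert abs(fisher(J) - W2) < 1e-6
```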

5 Expectation values as coordinates

Instead of the sources J α one can parametrize the probability distributions for χ or the
fields φα [χ] in terms of the expectation values

\[ \Phi_\alpha = \langle \phi_\alpha[\chi] \rangle = \frac{\delta}{\delta J^\alpha} W[J] = \int D\chi \, p[\chi, J] \, \phi_\alpha[\chi]. \tag{5.1} \]
With the exception of Gaussian theories this is a non-linear change of coordinates from the
sources J α .
One way to describe this change of variables is in terms of Legendre transforms. The
quantum effective action or one-particle irreducible effective action is defined as

\[ \Gamma[\Phi] = \sup_J \left( J^\alpha \Phi_\alpha - W[J] \right), \tag{5.2} \]

and as a Legendre transform it is a convex functional of expectation values Φα . In the
context of the theory of large deviations this is known as the Cramér function [26]. One
can also see Γ[Φ] as the negative of the infimum of the differential information entropy,
\[ \Gamma[\Phi] = - \inf_J \left( - \int D\chi \, p[\chi, J] \ln p[\chi, J] \right), \tag{5.3} \]

for given expectation value Φα . Note that as a differential entropy this is not necessarily
positive. The quantum effective action satisfies the field equation
\[ \frac{\delta}{\delta \Phi_\alpha} \Gamma[\Phi] = J^\alpha. \tag{5.4} \]
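The Legendre transform (5.2) and the field equation (5.4) can be probed numerically; the following is a minimal sketch in the hypothetical zero-dimensional toy model used above, with scipy performing the one-dimensional supremum.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Minimal sketch: the effective action (5.2) as a numerical Legendre
# transform in the hypothetical zero-dimensional toy model with action
# S(phi) = m2/2 phi^2 + lam/24 phi^4 (all parameters are assumptions).
m2, lam = 1.0, 0.5
phi = np.linspace(-8, 8, 8001)
dphi = phi[1] - phi[0]
S = 0.5 * m2 * phi**2 + lam / 24 * phi**4

def W(J):
    return np.log(np.sum(np.exp(-S + J * phi)) * dphi)

def Gamma(Phi):
    """Gamma[Phi] = sup_J (J Phi - W(J)); W is convex, so the
    supremum is unique. Returns the value and the maximizer J[Phi]."""
    res = minimize_scalar(lambda J: -(J * Phi - W(J)))
    return -res.fun, res.x

# Field equation (5.4): dGamma/dPhi = J[Phi]
Phi, eps = 0.4, 1e-4
J = Gamma(Phi)[1]
dGamma = (Gamma(Phi + eps)[0] - Gamma(Phi - eps)[0]) / (2 * eps)
assert abs(dGamma - J) < 1e-4
```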
With these relations one can write the probability density for χ as
\[ p[\chi, \Phi] = \exp\left( -I[\chi] + \frac{\delta \Gamma[\Phi]}{\delta \Phi_\alpha} (\phi_\alpha - \Phi_\alpha) + \Gamma[\Phi] \right). \tag{5.5} \]
The normalization condition for the distribution (5.5), ∫ Dχ p[χ, Φ] = 1, gives the well
known background identity for Γ[Φ]. On the other hand, eq. (5.1) is a linear relation
between Φα and p, and it should therefore be possible to write p[χ, Φ] as a linear functional
in Φα .
To investigate this in more detail we first define Φ^eq_α as the expectation value at
vanishing source, i. e. the solution of eq. (5.4) at J^α = 0. One can infer immediately that

\[ p[\chi, \Phi^{\mathrm{eq}}] = \exp\left( -I[\chi] - W[0] \right). \tag{5.6} \]

Expanding to linear order in Φ_α − Φ^eq_α yields
\[ p[\chi, \Phi] = p[\chi, \Phi^{\mathrm{eq}}] + (\Phi_\alpha - \Phi^{\mathrm{eq}}_\alpha) \, \frac{\delta^2 \Gamma[\Phi^{\mathrm{eq}}]}{\delta \Phi_\alpha \delta \Phi_\beta} \, (\phi_\beta[\chi] - \Phi^{\mathrm{eq}}_\beta) \, p[\chi, \Phi^{\mathrm{eq}}]. \tag{5.7} \]
The interesting statement is that this is not only the linear order of an expansion, but
actually an exact expression! The probability density in (5.7) is indeed constructed such
that eq. (5.1) is fulfilled. First note that the distribution is properly normalized, as a result
of
\[ \int D\chi \, p[\chi, \Phi^{\mathrm{eq}}] = 1, \tag{5.8} \]
and
\[ \int D\chi \, \phi_\beta[\chi] \, p[\chi, \Phi^{\mathrm{eq}}] = \Phi^{\mathrm{eq}}_\beta. \tag{5.9} \]

Moreover, by construction
\[ \int D\chi \, (\phi_\alpha[\chi] - \Phi^{\mathrm{eq}}_\alpha)(\phi_\beta[\chi] - \Phi^{\mathrm{eq}}_\beta) \, p[\chi, \Phi^{\mathrm{eq}}] = G_{\alpha\beta}[J] \big|_{J=0}. \tag{5.10} \]

One also needs that Γ^{(2)} and W^{(2)} are inverse to each other, as a consequence of the definition (5.2),
\[ \frac{\delta^2 \Gamma[\Phi]}{\delta \Phi_\alpha \delta \Phi_\beta} \, \frac{\delta^2 W[J]}{\delta J^\beta \delta J^\gamma} = \frac{\delta^2 \Gamma[\Phi]}{\delta \Phi_\alpha \delta \Phi_\beta} \, G_{\beta\gamma}[J] = \delta^\alpha_\gamma. \tag{5.11} \]
Together this implies indeed that the distribution in (5.7) has the expectation values Φα .
The class of probability distributions in eq. (5.7), written as a linear combination of
expectation values, is called a mixture family in the context of information geometry [11, 12].
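A minimal sketch of the construction (5.7), on an assumed discrete toy state space rather than a functional integral: the normalization (5.8) and the prescribed expectation value follow exactly, as claimed.

```python
import numpy as np

# Minimal sketch: the mixture-family density (5.7) reproduces a
# prescribed expectation value exactly, illustrated on a hypothetical
# discrete toy with states x, "field" f(x) and equilibrium weights p_eq.
rng = np.random.default_rng(2)
f = rng.normal(size=6)                    # values phi[chi] on 6 states
w = rng.random(6)
p_eq = w / w.sum()                        # equilibrium distribution, J = 0

Phi_eq = p_eq @ f
G = p_eq @ (f - Phi_eq)**2                # Fisher metric (5.10) at J = 0
Gamma2 = 1.0 / G                          # inverse metric, cf. eq. (5.11)

Phi = Phi_eq + 0.1                        # some nearby target expectation
p = p_eq + (Phi - Phi_eq) * Gamma2 * (f - Phi_eq) * p_eq

assert np.isclose(p.sum(), 1.0)           # normalization, cf. (5.8)
assert np.isclose(p @ f, Phi)             # expectation value is exactly Phi
```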

6 Affine geometry for expectation values

The Fisher metric in terms of expectation value coordinates is easily determined to be

\[ G^{\alpha\beta}[\Phi] = - \int D\chi \, p[\chi, \Phi] \, \frac{\delta^2}{\delta \Phi_\alpha \delta \Phi_\beta} \ln p[\chi, \Phi] = \frac{\delta^2 \Gamma[\Phi]}{\delta \Phi_\alpha \delta \Phi_\beta}. \tag{6.1} \]

As a matrix, this is actually the inverse of G_{αβ}[J], see eq. (5.11). The Fisher-Rao distance
between nearby distributions can thus be written in three equivalent ways,

\[ ds^2 = G_{\alpha\beta}[J] \, \delta J^\alpha \delta J^\beta = G^{\alpha\beta}[\Phi] \, \delta \Phi_\alpha \delta \Phi_\beta = \delta J^\alpha \delta \Phi_\alpha. \tag{6.2} \]

As discussed before, the expectation values Φ_α are related to the sources J^α by a
change of variables that is in general non-linear. In this sense it is surprising that there is
an affine structure here, as well. Indeed, transformations of the form

\[ \Phi_\alpha \to \Phi'_\alpha = N_\alpha{}^\beta \Phi_\beta + d_\alpha, \tag{6.3} \]

with invertible N_α{}^β, map elements of the mixture family to elements of the mixture
family. One may consider so-called m-geodesics connecting two probability distributions
with expectation values Φ′_α and Φ″_α of the form

\[ \Phi_\alpha(t) = (1 - t) \Phi'_\alpha + t \Phi''_\alpha, \tag{6.4} \]

and the corresponding probability distributions for 0 ≤ t ≤ 1 are simply linear
superpositions of the distributions at the two endpoints of the geodesic.
Note that the m-geodesic in (6.4) is different from the e-geodesic described in (3.2),
even when the endpoints J ′α and Φ′α as well as J ′′α and Φ′′α correspond to the same
probability distributions (see also below).
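The difference between the two geodesics can be seen numerically in the discrete toy family assumed earlier: the m-geodesic mixes the endpoint densities linearly, while the e-geodesic mixes them geometrically.

```python
import numpy as np

# Minimal sketch: the m-geodesic (6.4) is a pointwise linear mixture
# of the endpoint densities and differs from the e-geodesic (3.2)
# in a hypothetical non-Gaussian discrete exponential family.
rng = np.random.default_rng(3)
I, f = rng.normal(size=5), rng.normal(size=5)   # assumed toy data

def p(J):
    w = np.exp(-I + J * f)
    return w / w.sum()

J1, J2, t = -0.5, 1.0, 0.5
m_geo = (1 - t) * p(J1) + t * p(J2)        # linear mixture of endpoints
gm = p(J1) ** (1 - t) * p(J2) ** t
e_geo = gm / gm.sum()                      # exponential interpolation

print(np.max(np.abs(m_geo - e_geo)))       # nonzero: the geodesics differ
```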
The m-geodesics can also be characterized in terms of differential equations,

\[ \frac{d^2}{dt^2} \Phi_\alpha(t) + (\Gamma_{\mathrm{M}})^\beta{}_\alpha{}^\gamma[\Phi] \, \frac{d}{dt} \Phi_\beta(t) \, \frac{d}{dt} \Phi_\gamma(t) = 0. \tag{6.5} \]
In terms of expectation values as coordinates the m-connection symbols vanish,

\[ (\Gamma_{\mathrm{M}})^\beta{}_\alpha{}^\gamma[\Phi] = 0. \tag{6.6} \]

At this point it is instructive to work out the connection symbols of the m-connection
in terms of source coordinates,

\[ (\Gamma_{\mathrm{M}})_\alpha{}^\beta{}_\gamma[J] = \frac{\delta \Phi_\alpha}{\delta J^\rho} \frac{\delta \Phi_\gamma}{\delta J^\nu} \frac{\delta J^\beta}{\delta \Phi_\mu} \, (\Gamma_{\mathrm{M}})^\rho{}_\mu{}^\nu[\Phi] + \frac{\delta J^\beta}{\delta \Phi_\mu} \frac{\delta^2 \Phi_\mu}{\delta J^\alpha \delta J^\gamma}. \tag{6.7} \]
This uses the general transformation law for a connection under changes of coordinates. As
we just argued, in expectation value coordinates the m-connection symbol vanishes, and
we thus find in source coordinates
\[ (\Gamma_{\mathrm{M}})_\alpha{}^\beta{}_\gamma[J] = \frac{\delta J^\beta}{\delta \Phi_\mu} \frac{\delta^2 \Phi_\mu}{\delta J^\alpha \delta J^\gamma} = G^{\beta\mu} \frac{\delta^3 W[J]}{\delta J^\alpha \delta J^\gamma \delta J^\mu}. \tag{6.8} \]

Up to a Fisher metric, the m-connection symbol in source coordinates is the connected
three-point function! Accordingly, m-geodesics are in source coordinates not straight lines
when this three-point function is non-vanishing.
Similarly, one can find the e-connection symbol in expectation value coordinates,
\[ \begin{aligned} (\Gamma_{\mathrm{E}})^\alpha{}_\beta{}^\gamma[\Phi] &= \frac{\delta J^\alpha}{\delta \Phi_\rho} \frac{\delta J^\gamma}{\delta \Phi_\nu} \frac{\delta \Phi_\beta}{\delta J^\mu} \, (\Gamma_{\mathrm{E}})_\rho{}^\mu{}_\nu[J] + \frac{\delta \Phi_\beta}{\delta J^\mu} \frac{\delta^2 J^\mu}{\delta \Phi_\alpha \delta \Phi_\gamma} \\ &= G_{\beta\mu} \frac{\delta^3 \Gamma[\Phi]}{\delta \Phi_\alpha \delta \Phi_\gamma \delta \Phi_\mu}. \end{aligned} \tag{6.9} \]
This is essentially the one-particle irreducible three-point function.

7 Divergence functional

7.1 Divergence functional in source coordinates


Another highly interesting quantity used in information geometry is the Kullback-Leibler
divergence or relative information entropy between two probability distributions with co-
ordinates J α and J ′α ,
\[ D[J \| J'] = \int D\chi \, p[\chi, J] \ln\big( p[\chi, J] / p[\chi, J'] \big). \tag{7.1} \]

Note the asymmetric nature of the definition. The Kullback-Leibler divergence is
non-negative,
\[ D[J \| J'] \ge 0, \tag{7.2} \]
and it vanishes when the distributions p[χ, J] and p[χ, J′] agree, i. e. for J^α = J′^α.
Moreover, for J′^α = J^α + δJ^α one has
\[ D[J \| J'] = \frac{1}{2} G_{\alpha\beta}[J] \, \delta J^\alpha \delta J^\beta + \ldots, \tag{7.3} \]
where the ellipses stand for terms of cubic and higher order in δJ^α. In other words, for nearby
distributions the Kullback-Leibler divergence equals the Fisher-Rao distance squared, up
to a factor.
The relative entropy finds an interesting application in terms of Sanov’s theorem [25,
26]. Consider n random realizations of field configurations χk (where k = 1, . . . , n) taken
from the distribution (2.6). This gives rise to an “empirical” distribution
\[ p_n[\chi] = \frac{1}{n} \sum_{k=1}^n \delta[\chi - \chi_k]. \tag{7.4} \]

The probability w for this empirical distribution pn [χ] to lie in some region Ω of the space
of all possible distributions p[χ] is asymptotically (for large n) constrained by Sanov’s
theorem. For simplicity let us assume that Ω is closed (with respect to weak topology).
Then w is asymptotically constrained by the element q[χ] of Ω that is closest to p[χ] in the
sense of relative entropy. In other words, one has
\[ \lim_{n \to \infty} \frac{1}{n} \ln(w) = - \inf_{q \in \Omega} \int D\chi \, q[\chi] \ln\big( q[\chi] / p[\chi] \big). \tag{7.5} \]

Assuming now in addition that Ω consists of a set of distributions p[χ, J] in the
exponential family (2.6), with J in a set A, or expectation value Φ in the corresponding set
B, we can write w in terms of the divergence functional,
\[ \lim_{n \to \infty} \frac{1}{n} \ln(w) = - \inf_{J \in A} D[J \| J'] = - \inf_{\Phi \in B} D[\Phi \| \Phi']. \tag{7.6} \]

Here we assumed that the true distribution is p[χ, J′] = p[χ, Φ′]. By these arguments
it becomes clear that the divergence functional plays a natural role in quantifying the
probability for large deviations [26].
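As a hedged numerical illustration of Sanov's theorem, outside the field theoretic setting and for a hypothetical Bernoulli variable: the empirical rate (1/n) ln w slowly approaches minus the relative entropy of the closest admissible distribution.

```python
import numpy as np

# Minimal sketch of Sanov's theorem (7.5) for a hypothetical Bernoulli
# toy: the probability that the empirical mean of n samples from
# p = 0.3 exceeds a = 0.5 decays with rate inf_{q>=a} D(q||p) = D(a||p).
rng = np.random.default_rng(4)
p, a = 0.3, 0.5

def kl(q, r):
    return q * np.log(q / r) + (1 - q) * np.log((1 - q) / (1 - r))

print("rate bound:", -kl(a, p))
for n in (20, 50, 80):
    counts = rng.binomial(n, p, size=500_000)
    w = np.mean(counts >= a * n)               # empirical probability
    print(n, np.log(w) / n)                    # slowly approaches -D(a||p)
```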
For the exponential family one finds easily the relative entropy
\[ \begin{aligned} D[J \| J'] &= (J^\alpha - J'^\alpha) \Phi_\alpha - W[J] + W[J'] \\ &= (J^\alpha - J'^\alpha) \frac{\delta W[J]}{\delta J^\alpha} - W[J] + W[J']. \end{aligned} \tag{7.7} \]
The expectation value Φα is here with respect to the distribution p[χ, J], as made explicit
in the second line.
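A quick consistency sketch for (7.7), again on an assumed discrete exponential family: the closed form agrees with the directly summed definition (7.1).

```python
import numpy as np

# Minimal sketch: for a hypothetical discrete exponential family, the
# closed form (7.7) reproduces the directly summed Kullback-Leibler
# divergence (7.1).
rng = np.random.default_rng(5)
I, f = rng.normal(size=5), rng.normal(size=5)   # assumed toy data

def W(J):
    return np.log(np.sum(np.exp(-I + J * f)))

def p(J):
    return np.exp(-I + J * f - W(J))

J1, J2 = 0.7, -0.4
direct = np.sum(p(J1) * np.log(p(J1) / p(J2)))          # eq. (7.1)
closed = (J1 - J2) * np.sum(p(J1) * f) - W(J1) + W(J2)  # eq. (7.7)
assert np.isclose(direct, closed)
```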
The expression in eq. (7.7) is, up to an interchange of arguments (a duality transform),
the Bregman divergence associated with the convex functional W[J],

\[ D[J \| J'] = D_W[J' \| J]. \tag{7.8} \]

In this context it is also clear that the Schwinger functional W [J] is uniquely fixed by
D[J‖J′], up to an additive constant and a linear term in J. In other words, to fully
reconstruct W[J] from D[J‖J′] one needs additional information such as W[0] and

\[ \frac{\delta W[J]}{\delta J^\alpha} \bigg|_{J=0} = \Phi^{\mathrm{eq}}_\alpha. \tag{7.9} \]

The largest part of the functional information of W[J] is contained in D[J‖J′]. Concretely,
one finds for J^α = 0
\[ D[0 \| J'] + J'^\alpha \Phi^{\mathrm{eq}}_\alpha + W[0] = W[J']. \tag{7.10} \]
First derivatives of the divergence functional with respect to the source coordinates
are
\[ \begin{aligned} \frac{\delta}{\delta J^\alpha} D[J \| J'] &= (J^\lambda - J'^\lambda) \frac{\delta^2}{\delta J^\alpha \delta J^\lambda} W[J] = (J^\lambda - J'^\lambda) G_{\alpha\lambda}[J], \\ \frac{\delta}{\delta J'^\alpha} D[J \| J'] &= - \frac{\delta}{\delta J^\alpha} W[J] + \frac{\delta}{\delta J'^\alpha} W[J'] = - \Phi_\alpha[J] + \Phi'_\alpha[J'], \end{aligned} \tag{7.11} \]
and they vanish for J = J ′ , as expected. Second derivatives are

\[ \begin{aligned} \frac{\delta^2}{\delta J^\alpha \delta J^\beta} D[J \| J'] &= (J^\lambda - J'^\lambda) \frac{\delta^3}{\delta J^\alpha \delta J^\beta \delta J^\lambda} W[J] + G_{\alpha\beta}[J], \\ \frac{\delta^2}{\delta J^\alpha \delta J'^\beta} D[J \| J'] &= - G_{\alpha\beta}[J], \\ \frac{\delta^2}{\delta J'^\alpha \delta J'^\beta} D[J \| J'] &= G_{\alpha\beta}[J']. \end{aligned} \tag{7.12} \]

Let us also give the third derivatives,
\[ \begin{aligned} \frac{\delta^3}{\delta J^\alpha \delta J^\beta \delta J^\gamma} D[J \| J'] &= (J^\lambda - J'^\lambda) \frac{\delta^4}{\delta J^\alpha \delta J^\beta \delta J^\gamma \delta J^\lambda} W[J] + 2 \frac{\delta^3}{\delta J^\alpha \delta J^\beta \delta J^\gamma} W[J], \\ \frac{\delta^3}{\delta J^\alpha \delta J^\beta \delta J'^\gamma} D[J \| J'] &= - \frac{\delta^3}{\delta J^\alpha \delta J^\beta \delta J^\gamma} W[J], \\ \frac{\delta^3}{\delta J^\alpha \delta J'^\beta \delta J'^\gamma} D[J \| J'] &= 0, \\ \frac{\delta^3}{\delta J'^\alpha \delta J'^\beta \delta J'^\gamma} D[J \| J'] &= \frac{\delta^3}{\delta J'^\alpha \delta J'^\beta \delta J'^\gamma} W[J']. \end{aligned} \tag{7.13} \]
It is particularly interesting to observe that the derivatives with respect to the second
argument J ′ yield directly the connected correlation functions generated by the Schwinger
functional W [J ′ ]. Similarly, one derivative with respect to J ′ and several derivatives with
respect to J also lead to connected correlation functions. Schematically one has for n ≥ 2

\[ D^{(0,n)}[J \| J'] = W^{(n)}[J'], \tag{7.14} \]

and
\[ D^{(n-1,1)}[J \| J'] = - W^{(n)}[J]. \tag{7.15} \]
This shows again how most of the information of W[J] is contained in D[J‖J′].

7.2 Divergence functional in expectation value coordinates


The divergence functional can also be expressed in terms of expectation value coordinates,
and it is then convenient to work with the quantum effective action defined in (5.2). One
obtains from (7.7) immediately

\[ D[\Phi \| \Phi'] = \Gamma[\Phi] - \Gamma[\Phi'] - \frac{\delta \Gamma[\Phi']}{\delta \Phi'_\lambda} (\Phi_\lambda - \Phi'_\lambda). \tag{7.16} \]

This can be seen as the Bregman divergence associated with Γ[Φ],

\[ D[\Phi \| \Phi'] = D_\Gamma[\Phi \| \Phi']. \tag{7.17} \]

On the other hand, one can reconstruct Γ[Φ] up to an additive constant from D[Φ‖Φ′] if
in addition Φ^eq satisfying (7.9), or (5.4) at J^α = 0, is known. Specifically, the difference
of the quantum effective action at Φ from the one at Φ^eq is the divergence functional,

\[ \Gamma[\Phi] - \Gamma[\Phi^{\mathrm{eq}}] = D[\Phi \| \Phi^{\mathrm{eq}}]. \tag{7.18} \]

On similar grounds, let us note that from the functional D[J‖J′] alone one cannot
immediately infer the functional D[Φ‖Φ′] (and vice versa), because the information about
the relation between the source J^α and the expectation value Φ_α is not contained in D[J‖J′]
(or in D[Φ‖Φ′]). It is sufficient, however, to know in addition the expectation value Φ^eq_α.
From (7.11) follows a relation for the expectation value as a functional of the source,
\[ \Phi_\alpha[J] = \Phi^{\mathrm{eq}}_\alpha - \frac{\delta}{\delta J'^\alpha} D[J \| J'] \Big|_{J'=0}, \tag{7.19} \]

which allows one to implement the change of variables. Similarly, this can be done in the other
direction.
Taking functional derivatives of (7.16) with respect to the two arguments yields
\[ \begin{aligned} \frac{\delta}{\delta \Phi_\alpha} D[\Phi \| \Phi'] &= \frac{\delta \Gamma[\Phi]}{\delta \Phi_\alpha} - \frac{\delta \Gamma[\Phi']}{\delta \Phi'_\alpha} = J^\alpha - J'^\alpha, \\ \frac{\delta}{\delta \Phi'_\alpha} D[\Phi \| \Phi'] &= - \frac{\delta^2 \Gamma[\Phi']}{\delta \Phi'_\alpha \delta \Phi'_\lambda} (\Phi_\lambda - \Phi'_\lambda) = - G^{\alpha\lambda}[\Phi'] (\Phi_\lambda - \Phi'_\lambda), \end{aligned} \tag{7.20} \]

and they vanish for Φ = Φ′ as expected. Evaluating the first line at Φ′ = Φeq leads to a
relation for the source as a functional of the expectation value,
\[ J^\alpha[\Phi] = \frac{\delta}{\delta \Phi_\alpha} D[\Phi \| \Phi^{\mathrm{eq}}]. \tag{7.21} \]
Second derivatives are
\[ \begin{aligned} \frac{\delta^2}{\delta \Phi_\alpha \delta \Phi_\beta} D[\Phi \| \Phi'] &= G^{\alpha\beta}[\Phi], \\ \frac{\delta^2}{\delta \Phi_\alpha \delta \Phi'_\beta} D[\Phi \| \Phi'] &= - G^{\alpha\beta}[\Phi'], \\ \frac{\delta^2}{\delta \Phi'_\alpha \delta \Phi'_\beta} D[\Phi \| \Phi'] &= - \frac{\delta^3}{\delta \Phi'_\alpha \delta \Phi'_\beta \delta \Phi'_\lambda} \Gamma[\Phi'] \, (\Phi_\lambda - \Phi'_\lambda) + G^{\alpha\beta}[\Phi'], \end{aligned} \tag{7.22} \]

and third derivatives follow as
\[ \begin{aligned} \frac{\delta^3}{\delta \Phi_\alpha \delta \Phi_\beta \delta \Phi_\gamma} D[\Phi \| \Phi'] &= \frac{\delta^3}{\delta \Phi_\alpha \delta \Phi_\beta \delta \Phi_\gamma} \Gamma[\Phi], \\ \frac{\delta^3}{\delta \Phi_\alpha \delta \Phi_\beta \delta \Phi'_\gamma} D[\Phi \| \Phi'] &= 0, \\ \frac{\delta^3}{\delta \Phi_\alpha \delta \Phi'_\beta \delta \Phi'_\gamma} D[\Phi \| \Phi'] &= - \frac{\delta^3}{\delta \Phi'_\alpha \delta \Phi'_\beta \delta \Phi'_\gamma} \Gamma[\Phi'], \\ \frac{\delta^3}{\delta \Phi'_\alpha \delta \Phi'_\beta \delta \Phi'_\gamma} D[\Phi \| \Phi'] &= - \frac{\delta^4}{\delta \Phi'_\alpha \delta \Phi'_\beta \delta \Phi'_\gamma \delta \Phi'_\lambda} \Gamma[\Phi'] \, (\Phi_\lambda - \Phi'_\lambda) + 2 \frac{\delta^3}{\delta \Phi'_\alpha \delta \Phi'_\beta \delta \Phi'_\gamma} \Gamma[\Phi']. \end{aligned} \tag{7.23} \]

Here it is particularly interesting that the derivatives with respect to the first argument Φα
yield the one-particle irreducible correlation functions generated by Γ[Φ]! Something similar
happens for one derivative with respect to Φ and several with respect to Φ′. Schematically,
for n ≥ 2,
\[ D^{(n,0)}[\Phi \| \Phi'] = \Gamma^{(n)}[\Phi], \tag{7.24} \]
and
\[ D^{(1,n-1)}[\Phi \| \Phi'] = - \Gamma^{(n)}[\Phi']. \tag{7.25} \]
To summarize, the divergence functional D[Φ‖Φ′], when supplemented with one expectation
value configuration Φ^eq_α corresponding to vanishing source J^α = 0, contains the same
information as the effective action Γ[Φ].

7.3 Divergence functional in mixed representation
There is also a very elegant mixed representation, where the expectation value coordinate
is used for the first argument, and the source coordinate for the second argument of the
divergence functional,
\[ D[\Phi \| J'] = \Gamma[\Phi] + W[J'] - J'^\alpha \Phi_\alpha. \tag{7.26} \]
Here it is manifest that functional derivatives with respect to the first argument generate
one-particle irreducible correlation functions, and those with respect to the second
argument generate connected correlation functions. Also, by setting one of the arguments to
zero one obtains the quantum effective action or Schwinger functional up to an additive
constant, respectively.
First derivatives are
\[ \begin{aligned} \frac{\delta}{\delta \Phi_\alpha} D[\Phi \| J'] &= \frac{\delta \Gamma[\Phi]}{\delta \Phi_\alpha} - J'^\alpha = J^\alpha - J'^\alpha, \\ \frac{\delta}{\delta J'^\alpha} D[\Phi \| J'] &= \frac{\delta W[J']}{\delta J'^\alpha} - \Phi_\alpha = \Phi'_\alpha - \Phi_\alpha, \end{aligned} \tag{7.27} \]
and they vanish for J α = J ′α and Φα = Φ′α . In particular, when the first line is evaluated
at J ′ = 0 one obtains
\[ J^\alpha[\Phi] = \frac{\delta}{\delta \Phi_\alpha} D[\Phi \| 0] = \frac{\delta \Gamma[\Phi]}{\delta \Phi_\alpha}, \tag{7.28} \]
which is the field equation for the expectation value. Similarly, the second line in (7.27)
gives Φ′_α[J′] when evaluated at some given configuration Φ_α. In this sense the mixed
functional D[Φ‖J′] contains more information than either D[J‖J′] or D[Φ‖Φ′], because
the relation between sources and expectation values is also directly contained! Note that
D[ΦkJ ′ ] is not in the form of a Bregman divergence.
Second derivatives are
\[ \begin{aligned} \frac{\delta^2}{\delta \Phi_\alpha \delta \Phi_\beta} D[\Phi \| J'] &= G^{\alpha\beta}[\Phi], \\ \frac{\delta^2}{\delta \Phi_\alpha \delta J'^\beta} D[\Phi \| J'] &= - \delta^\alpha_\beta, \\ \frac{\delta^2}{\delta J'^\alpha \delta J'^\beta} D[\Phi \| J'] &= G_{\alpha\beta}[J']. \end{aligned} \tag{7.29} \]
Third derivatives are simply
\[ \begin{aligned} \frac{\delta^3}{\delta \Phi_\alpha \delta \Phi_\beta \delta \Phi_\gamma} D[\Phi \| J'] &= \frac{\delta^3}{\delta \Phi_\alpha \delta \Phi_\beta \delta \Phi_\gamma} \Gamma[\Phi], \\ \frac{\delta^3}{\delta \Phi_\alpha \delta \Phi_\beta \delta J'^\gamma} D[\Phi \| J'] &= 0, \\ \frac{\delta^3}{\delta \Phi_\alpha \delta J'^\beta \delta J'^\gamma} D[\Phi \| J'] &= 0, \\ \frac{\delta^3}{\delta J'^\alpha \delta J'^\beta \delta J'^\gamma} D[\Phi \| J'] &= \frac{\delta^3}{\delta J'^\alpha \delta J'^\beta \delta J'^\gamma} W[J']. \end{aligned} \tag{7.30} \]

More generally, one finds here for n ≥ 2,
\[ \begin{aligned} D^{(n,0)}[\Phi \| J'] &= \Gamma^{(n)}[\Phi], \\ D^{(0,n)}[\Phi \| J'] &= W^{(n)}[J']. \end{aligned} \tag{7.31} \]

Let us summarize that the divergence functional D[Φ‖J′] in the mixed representation, with
the expectation value as the first argument and the source as the second argument, contains
information equivalent to the Schwinger functional W[J] and the effective action Γ[Φ].
In fact, it elegantly combines the two functionals.
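In the zero-dimensional toy model introduced earlier (an illustrative assumption), the mixed representation (7.26) can be evaluated directly; the sketch below checks that D[Φ‖J′] is non-negative and vanishes exactly when J′ equals the source J[Φ].

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Minimal sketch in the hypothetical zero-dimensional toy model: the
# mixed representation (7.26) is non-negative and vanishes exactly
# when J' is the source producing the expectation value Phi.
m2, lam = 1.0, 0.5
phi = np.linspace(-8, 8, 8001)
dphi = phi[1] - phi[0]
S = 0.5 * m2 * phi**2 + lam / 24 * phi**4

def W(J):
    return np.log(np.sum(np.exp(-S + J * phi)) * dphi)

def Gamma(Phi):
    res = minimize_scalar(lambda J: -(J * Phi - W(J)))
    return -res.fun, res.x                     # value and maximizer J[Phi]

def D(Phi, Jp):
    return Gamma(Phi)[0] + W(Jp) - Jp * Phi    # eq. (7.26)

Phi = 0.4
J = Gamma(Phi)[1]
assert abs(D(Phi, J)) < 1e-8                   # vanishes at J' = J[Phi]
assert D(Phi, J + 0.5) > 0 and D(Phi, J - 0.5) > 0
```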

7.4 Divergence functional in opposite mixed representation


Finally, let us discuss also a mixed representation with the opposite choice of coordinates,
\[ D[J \| \Phi'] = J^\lambda \frac{\delta W[J]}{\delta J^\lambda} - W[J] + \Phi'_\lambda \frac{\delta \Gamma[\Phi']}{\delta \Phi'_\lambda} - \Gamma[\Phi'] - \frac{\delta W[J]}{\delta J^\lambda} \frac{\delta \Gamma[\Phi']}{\delta \Phi'_\lambda}. \tag{7.32} \]
First derivatives are here
\[ \begin{aligned} \frac{\delta}{\delta J^\alpha} D[J \| \Phi'] &= (J^\lambda - J'^\lambda) G_{\alpha\lambda}[J], \\ \frac{\delta}{\delta \Phi'_\alpha} D[J \| \Phi'] &= - G^{\alpha\lambda}[\Phi'] (\Phi_\lambda - \Phi'_\lambda). \end{aligned} \tag{7.33} \]
At first sight, D[J‖Φ′] in this opposite mixed representation seems to contain even
less information than D[J‖J′] or D[Φ‖Φ′], because even with the additional knowledge of
the expectation value Φ^eq_α corresponding to vanishing source J^α = 0, it is not possible to
infer from (7.33) the general relation between expectation values and sources. Only if in
addition the two-point function or Fisher metric G_{αλ}[0] at J = 0, or its inverse G^{λα}[Φ^eq],
is known, can one evaluate the first line of (7.33) at J = 0 to yield
\[ J'^\lambda[\Phi'] = - G^{\lambda\alpha}[\Phi^{\mathrm{eq}}] \, \frac{\delta}{\delta J^\alpha} D[J \| \Phi'] \Big|_{J=0}. \tag{7.34} \]

Similarly, evaluating the second line at Φ′_α = Φ^eq_α gives
\[ \Phi_\lambda[J] = \Phi^{\mathrm{eq}}_\lambda - G_{\lambda\alpha}[0] \, \frac{\delta}{\delta \Phi'_\alpha} D[J \| \Phi'] \Big|_{\Phi'=\Phi^{\mathrm{eq}}}. \tag{7.35} \]

However, the necessary information about the Fisher metric is contained in the second
functional derivatives of (7.32), which follow in general as

\[ \begin{aligned} \frac{\delta^2}{\delta J^\alpha \delta J^\beta} D[J \| \Phi'] &= (J^\lambda - J'^\lambda) \frac{\delta^3}{\delta J^\alpha \delta J^\beta \delta J^\lambda} W[J] + G_{\alpha\beta}[J], \\ \frac{\delta^2}{\delta J^\alpha \delta \Phi'_\beta} D[J \| \Phi'] &= - G_{\alpha\lambda}[J] \, G^{\lambda\beta}[\Phi'], \\ \frac{\delta^2}{\delta \Phi'_\alpha \delta \Phi'_\beta} D[J \| \Phi'] &= - \frac{\delta^3}{\delta \Phi'_\alpha \delta \Phi'_\beta \delta \Phi'_\lambda} \Gamma[\Phi'] \, (\Phi_\lambda - \Phi'_\lambda) + G^{\alpha\beta}[\Phi']. \end{aligned} \tag{7.36} \]

Evaluating the first and third lines at J = 0 and Φ′ = Φ^eq yields G_{αβ}[0] and G^{αβ}[Φ^eq],
respectively.

Finally, third derivatives are in this representation given by
\[ \begin{aligned} \frac{\delta^3}{\delta J^\alpha \delta J^\beta \delta J^\gamma} D[J \| \Phi'] &= (J^\lambda - J'^\lambda) \frac{\delta^4}{\delta J^\alpha \delta J^\beta \delta J^\gamma \delta J^\lambda} W[J] + 2 \frac{\delta^3}{\delta J^\alpha \delta J^\beta \delta J^\gamma} W[J], \\ \frac{\delta^3}{\delta J^\alpha \delta J^\beta \delta \Phi'_\gamma} D[J \| \Phi'] &= - \frac{\delta^3}{\delta J^\alpha \delta J^\beta \delta J^\lambda} W[J] \, \frac{\delta^2}{\delta \Phi'_\gamma \delta \Phi'_\lambda} \Gamma[\Phi'], \\ \frac{\delta^3}{\delta J^\alpha \delta \Phi'_\beta \delta \Phi'_\gamma} D[J \| \Phi'] &= - \frac{\delta^2}{\delta J^\alpha \delta J^\lambda} W[J] \, \frac{\delta^3}{\delta \Phi'_\beta \delta \Phi'_\gamma \delta \Phi'_\lambda} \Gamma[\Phi'], \\ \frac{\delta^3}{\delta \Phi'_\alpha \delta \Phi'_\beta \delta \Phi'_\gamma} D[J \| \Phi'] &= - \frac{\delta^4}{\delta \Phi'_\alpha \delta \Phi'_\beta \delta \Phi'_\gamma \delta \Phi'_\lambda} \Gamma[\Phi'] \, (\Phi_\lambda - \Phi'_\lambda) + 2 \frac{\delta^3}{\delta \Phi'_\alpha \delta \Phi'_\beta \delta \Phi'_\gamma} \Gamma[\Phi']. \end{aligned} \tag{7.37} \]

This representation is less elegant and obviously leads to more involved expressions for
derivatives.
The question arises to what extent one can recover the Schwinger functional W[J]
or the quantum effective action Γ[Φ] from the opposite mixed representation functional (7.32).
As we have argued, one can, with the additional information of Φ^eq_α, construct the map
between expectation values and sources, and the information in D[J‖Φ′] is then equivalent
to the divergence functional in other representations.

8 Functional integral representation for divergence functional

For some purposes it is useful to have a direct functional integral representation of
functionals. From eq. (7.7) one finds
\[ e^{-D[J \| J']} = \frac{e^{W[J] - J^\alpha \Phi_\alpha}}{e^{W[J'] - J'^\alpha \Phi_\alpha}}. \tag{8.1} \]

Here one can use the functional integral representation for the Schwinger functional (2.7),
leading to an intuitive expression for the divergence functional

\[ e^{-D[J \| J']} = \frac{\int D\chi \, \exp\left( -I[\chi] + J^\alpha (\phi_\alpha[\chi] - \Phi_\alpha) \right)}{\int D\tilde\chi \, \exp\left( -I[\tilde\chi] + J'^\alpha (\phi_\alpha[\tilde\chi] - \Phi_\alpha) \right)}. \tag{8.2} \]

Note that the expectation value Φ_α appearing in the numerator, as well as in the
denominator, is with respect to the distribution at source J^α.
At this point it is interesting to undo the coordinate change in (2.8) which yields

\[ e^{-D[J \| J']} = \frac{\int D\phi \, \exp\left( -S[\phi] + J^\alpha (\phi_\alpha - \Phi_\alpha) \right)}{\int D\tilde\phi \, \exp\left( -S[\tilde\phi] + J'^\alpha (\tilde\phi_\alpha - \Phi_\alpha) \right)}. \tag{8.3} \]

Assume now that the two functional integrals can be (approximately) evaluated with the
steepest descent method (one-loop approximation). This yields
\[ \begin{aligned} D[J \| J'] \approx{}& S[\varphi[J]] - S[\varphi[J']] - J^\alpha (\varphi_\alpha[J] - \Phi_\alpha) + J'^\alpha (\varphi_\alpha[J'] - \Phi_\alpha) \\ &+ \frac{1}{2} \mathrm{Tr} \left\{ \ln S^{(2)}[\varphi[J]] - \ln S^{(2)}[\varphi[J']] \right\}, \end{aligned} \tag{8.4} \]

where φ_α = ϕ_α[J] is a solution to the classical field equation
\[ \frac{\delta}{\delta \phi_\alpha} S[\phi] = J^\alpha. \tag{8.5} \]
The functional integral representation (8.2) can be adapted to other coordinate systems
by deriving from (7.20) the relations
\[ \begin{aligned} J^\alpha &= \frac{\delta}{\delta \Phi_\alpha} D[\Phi \| \Phi^{\mathrm{eq}}], \\ J'^\alpha &= - \frac{\delta}{\delta \Phi_\alpha} D[\Phi \| \Phi'] \Big|_{\Phi=\Phi^{\mathrm{eq}}}, \end{aligned} \tag{8.6} \]
where Φ^eq is the expectation value configuration corresponding to vanishing source J = 0.
Using this, one finds in terms of expectation value coordinates
\[ e^{-D[\Phi \| \Phi']} = \frac{\int D\chi \, \exp\left( -I[\chi] + \frac{\delta}{\delta \Phi_\alpha} D[\Phi \| \Phi^{\mathrm{eq}}] \, (\phi_\alpha[\chi] - \Phi_\alpha) \right)}{\int D\tilde\chi \, \exp\left( -I[\tilde\chi] - \frac{\delta}{\delta \Phi_\alpha} D[\Phi \| \Phi'] \big|_{\Phi=\Phi^{\mathrm{eq}}} \, (\phi_\alpha[\tilde\chi] - \Phi_\alpha) \right)}. \tag{8.7} \]

Note that this is an implicit relation, because the divergence functional D[Φ‖Φ′] appears
on the right-hand side.
For the steepest descent approximation in (8.4) one finds to leading order
\[ \begin{aligned} \frac{\delta}{\delta \Phi_\alpha} D[J \| J'] &= \left( \frac{\delta}{\delta \phi_\beta} S[\varphi[J]] - J^\beta \right) \frac{\delta \varphi_\beta[J]}{\delta \Phi_\alpha} - \frac{\delta J^\beta}{\delta \Phi_\alpha} (\varphi_\beta[J] - \Phi_\beta) + J^\alpha - J'^\alpha \\ &= - \frac{\delta J^\beta}{\delta \Phi_\alpha} (\varphi_\beta[J] - \Phi_\beta) + J^\alpha - J'^\alpha. \end{aligned} \tag{8.8} \]
Eqs. (8.6) are therefore solved when ϕ_β[J] = Φ_β. This leads to the leading order steepest
descent approximation in expectation value coordinates,
\[ D[\Phi \| \Phi'] \approx S[\Phi] - S[\Phi'] - \frac{\delta}{\delta \Phi'_\alpha} S[\Phi'] \, (\Phi_\alpha - \Phi'_\alpha). \tag{8.9} \]
This is consistent with (7.16) and the standard steepest descent or one-loop approximation
for the quantum effective action,
\[ \Gamma[\Phi] \approx S[\Phi] + \frac{1}{2} \mathrm{Tr} \left\{ \ln S^{(2)}[\Phi] \right\}. \tag{8.10} \]
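The quality of the one-loop formula (8.10) can be probed numerically in the zero-dimensional toy model (an illustrative assumption); comparing differences Γ[Φ] − Γ[0] sidesteps the additive constant that (8.10) leaves undetermined.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Minimal sketch: one-loop approximation (8.10) versus the numerically
# exact effective action, for the hypothetical zero-dimensional toy
# model at weak coupling (parameters are illustrative assumptions).
m2, lam = 1.0, 0.1
phi = np.linspace(-10, 10, 10001)
dphi = phi[1] - phi[0]

def S(x):
    return 0.5 * m2 * x**2 + lam / 24 * x**4

def W(J):
    return np.log(np.sum(np.exp(-S(phi) + J * phi)) * dphi)

def Gamma_exact(Phi):
    return -minimize_scalar(lambda J: -(J * Phi - W(J))).fun

def Gamma_1loop(Phi):
    return S(Phi) + 0.5 * np.log(m2 + 0.5 * lam * Phi**2)  # S''(Phi)

for Phi in (0.5, 1.0):
    exact = Gamma_exact(Phi) - Gamma_exact(0.0)
    one_loop = Gamma_1loop(Phi) - Gamma_1loop(0.0)
    print(Phi, exact, one_loop)        # close at small lam
```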
Let us give for completeness also a functional integral relation in the mixed representation
(7.26),
\[ e^{-D[\Phi \| J']} = \frac{\int D\chi \, \exp\left( -I[\chi] + \frac{\delta}{\delta \Phi_\alpha} D[\Phi \| 0] \, (\phi_\alpha[\chi] - \Phi_\alpha) \right)}{\int D\tilde\chi \, \exp\left( -I[\tilde\chi] + J'^\alpha (\phi_\alpha[\tilde\chi] - \Phi_\alpha) \right)}, \tag{8.11} \]
where we used
\[ J^\alpha = \frac{\delta}{\delta \Phi_\alpha} D[\Phi \| J'] \Big|_{J'=0} = \frac{\delta}{\delta \Phi_\alpha} D[\Phi \| 0]. \tag{8.12} \]
As seen before, the mixed representation is particularly elegant in the sense that it does
not need additional information about Φ^eq.

9 Connections

Let us now discuss the different connections in more detail. We start by working in source
coordinates. Associated to the Fisher metric in eq. (4.3) is a unique connection that is
both torsion-free and metric-compatible (i. e. free of non-metricity). This is the
Levi-Civita connection
\[ (\Gamma_{\mathrm{LC}})_\alpha{}^\beta{}_\gamma[J] = \frac{1}{2} G^{\beta\lambda}[J] \left( \frac{\delta}{\delta J^\alpha} G_{\gamma\lambda}[J] + \frac{\delta}{\delta J^\gamma} G_{\alpha\lambda}[J] - \frac{\delta}{\delta J^\lambda} G_{\alpha\gamma}[J] \right). \tag{9.1} \]
For the exponential family one finds easily
\[ (\Gamma_{\mathrm{LC}})_\alpha{}^\beta{}_\gamma[J] = \frac{1}{2} G^{\beta\lambda}[J] \frac{\delta^3}{\delta J^\alpha \delta J^\gamma \delta J^\lambda} W[J], \tag{9.2} \]
where G^{βλ}[J] is the inverse Fisher metric.
Similarly, the Levi-Civita connection in expectation value coordinates is obtained as
\[ \begin{aligned} (\Gamma_{\mathrm{LC}})^\alpha{}_\beta{}^\gamma[\Phi] &= \frac{1}{2} G_{\beta\lambda}[\Phi] \left( \frac{\delta}{\delta \Phi_\alpha} G^{\gamma\lambda}[\Phi] + \frac{\delta}{\delta \Phi_\gamma} G^{\alpha\lambda}[\Phi] - \frac{\delta}{\delta \Phi_\lambda} G^{\alpha\gamma}[\Phi] \right) \\ &= \frac{1}{2} G_{\beta\lambda}[\Phi] \frac{\delta^3}{\delta \Phi_\alpha \delta \Phi_\gamma \delta \Phi_\lambda} \Gamma[\Phi], \end{aligned} \tag{9.3} \]

where G^{αβ}[Φ] is the Fisher metric in terms of expectation value coordinates and G_{αβ}[Φ]
its inverse.
Starting from the Levi-Civita connection, one can write any other connection as

\[ \Gamma_\alpha{}^\beta{}_\gamma = (\Gamma_{\mathrm{LC}})_\alpha{}^\beta{}_\gamma + N_\alpha{}^\beta{}_\gamma, \tag{9.4} \]

where N_α{}^β{}_γ is known as the distortion tensor. (The difference of two connections transforms
as a tensor under coordinate changes or diffeomorphisms.) The torsion tensor can be
expressed through the distortion tensor as

\[ T^\alpha{}_{\beta\gamma} = N_\beta{}^\alpha{}_\gamma - N_\gamma{}^\alpha{}_\beta, \tag{9.5} \]

and the non-metricity tensor is expressed through the distortion tensor and the metric as
\[ B_{\alpha\beta\gamma} = \frac{1}{2} G_{\beta\lambda} N_\alpha{}^\lambda{}_\gamma + \frac{1}{2} G_{\gamma\lambda} N_\alpha{}^\lambda{}_\beta. \tag{9.6} \]
After these general considerations, let us now address the e- and m-connection. In
source coordinates, the e-connection symbols vanish, see eq. (3.4). Together with eq. (9.2)
this implies the distortion tensor in source coordinates

\[ (N_{\mathrm{E}})_\alpha{}^\beta{}_\gamma[J] = - \frac{1}{2} G^{\beta\lambda}[J] \frac{\delta^3}{\delta J^\alpha \delta J^\gamma \delta J^\lambda} W[J]. \tag{9.7} \]
Torsion vanishes, and the non-metricity tensor is

\[ (B_{\mathrm{E}})_{\alpha\beta\gamma}[J] = - \frac{1}{2} \frac{\delta^3}{\delta J^\alpha \delta J^\beta \delta J^\gamma} W[J]. \tag{9.8} \]

Up to a factor, this is simply the connected three-point function!
The m-connection symbols in source coordinates are given by eq. (6.8). Accordingly,
the distortion tensor is here
\[ (N_{\mathrm{M}})_\alpha{}^\beta{}_\gamma[J] = \frac{1}{2} G^{\beta\lambda}[J] \frac{\delta^3}{\delta J^\alpha \delta J^\gamma \delta J^\lambda} W[J]. \tag{9.9} \]
The non-metricity has simply the opposite sign compared to the one of the e-connection,

\[ (B_{\mathrm{M}})_{\alpha\beta\gamma}[J] = - (B_{\mathrm{E}})_{\alpha\beta\gamma}[J] = \frac{1}{2} \frac{\delta^3}{\delta J^\alpha \delta J^\beta \delta J^\gamma} W[J]. \tag{9.10} \]
In the context of information geometry, the fully symmetric tensor T_{αβγ} = 2(B_E)_{αβγ} =
−2(B_M)_{αβγ} is known as the Amari-Chentsov tensor [11, 12].
By similar arguments, or by a change of coordinates, one also obtains the distortion
tensors in expectation value coordinates,
\[ (N_{\mathrm{E}})^\alpha{}_\beta{}^\gamma[\Phi] = - (N_{\mathrm{M}})^\alpha{}_\beta{}^\gamma[\Phi] = \frac{1}{2} G_{\beta\lambda}[\Phi] \frac{\delta^3}{\delta \Phi_\alpha \delta \Phi_\gamma \delta \Phi_\lambda} \Gamma[\Phi]. \tag{9.11} \]
The non-metricity tensors are here, up to factors, given by the one-particle irreducible
three-point function,
\[ (B_{\mathrm{E}})^{\alpha\beta\gamma}[\Phi] = - (B_{\mathrm{M}})^{\alpha\beta\gamma}[\Phi] = \frac{1}{2} \frac{\delta^3}{\delta \Phi_\alpha \delta \Phi_\beta \delta \Phi_\gamma} \Gamma[\Phi]. \tag{9.12} \]
We see that the e-connection and m-connection are dual in the sense that they have
opposite non-metricity and they are both free of torsion. This duality implies that vector
fields V^µ[J] and W^ν[J] obey
\[ \frac{\delta}{\delta J^\alpha} \left( G_{\mu\nu}[J] V^\mu[J] W^\nu[J] \right) = G_{\mu\nu}[J] \left( \nabla^{(\mathrm{E})}_\alpha V^\mu[J] \right) W^\nu[J] + G_{\mu\nu}[J] V^\mu[J] \left( \nabla^{(\mathrm{M})}_\alpha W^\nu[J] \right), \tag{9.13} \]
with the covariant functional derivative associated with the e-connection,
\[ \nabla^{(\mathrm{E})}_\alpha V^\mu[J] = \frac{\delta}{\delta J^\alpha} V^\mu[J] + (\Gamma_{\mathrm{E}})_\alpha{}^\mu{}_\beta[J] \, V^\beta[J] = \nabla^{(\mathrm{LC})}_\alpha V^\mu[J] + (N_{\mathrm{E}})_\alpha{}^\mu{}_\beta[J] \, V^\beta[J], \tag{9.14} \]
δJ
the covariant derivative associated with the m-connection,
\[ \nabla^{(\mathrm{M})}_\alpha W^\nu[J] = \frac{\delta}{\delta J^\alpha} W^\nu[J] + (\Gamma_{\mathrm{M}})_\alpha{}^\nu{}_\beta[J] \, W^\beta[J] = \nabla^{(\mathrm{LC})}_\alpha W^\nu[J] + (N_{\mathrm{M}})_\alpha{}^\nu{}_\beta[J] \, W^\beta[J], \tag{9.15} \]
and the covariant derivative of the Levi-Civita connection,
\[ \nabla^{(\mathrm{LC})}_\alpha V^\mu[J] = \frac{\delta}{\delta J^\alpha} V^\mu[J] + (\Gamma_{\mathrm{LC}})_\alpha{}^\mu{}_\beta[J] \, V^\beta[J]. \tag{9.16} \]
Eq. (9.13) follows from the well known relation for the Levi-Civita connection,
\[ \frac{\delta}{\delta J^\alpha} \left( G_{\mu\nu}[J] V^\mu[J] W^\nu[J] \right) = G_{\mu\nu}[J] \left( \nabla^{(\mathrm{LC})}_\alpha V^\mu[J] \right) W^\nu[J] + G_{\mu\nu}[J] V^\mu[J] \left( \nabla^{(\mathrm{LC})}_\alpha W^\nu[J] \right), \tag{9.17} \]
and the symmetry properties of the distortion tensors N_E and N_M.
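The duality (9.13) can be verified numerically for a one-parameter family, where all geometric objects are scalars: Γ_E vanishes in source coordinates and Γ_M = G^{-1} δG/δJ by (6.8). The test "vector fields" below are arbitrary assumptions.

```python
import numpy as np

# Minimal sketch of the duality (9.13) for a one-parameter source space
# in the hypothetical zero-dimensional toy model. V and U are assumed
# test "vector fields" on the one-dimensional source space.
m2, lam = 1.0, 0.5
phi = np.linspace(-8, 8, 8001)
dphi = phi[1] - phi[0]
S = 0.5 * m2 * phi**2 + lam / 24 * phi**4

def Wfun(J):                                   # Schwinger functional
    return np.log(np.sum(np.exp(-S + J * phi)) * dphi)

def d(f, x, eps=1e-3):                         # central finite difference
    return (f(x + eps) - f(x - eps)) / (2 * eps)

G = lambda J: d(lambda x: d(Wfun, x), J)       # Fisher metric = W''
V = lambda J: np.sin(J)
U = lambda J: J**2 + 1.0

J = 0.3
GammaM = d(G, J) / G(J)                        # m-connection symbol, cf. (6.8)
lhs = d(lambda x: G(x) * V(x) * U(x), J)       # d/dJ of G V U
rhs = (G(J) * d(V, J) * U(J)                   # e-covariant derivative of V
       + G(J) * V(J) * (d(U, J) + GammaM * U(J)))  # m-covariant derivative of U
assert abs(lhs - rhs) < 1e-3
```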

10 Metric and connection from divergence

Interestingly, it is also possible to obtain the connection symbols directly from the
divergence functional [11]. From eqs. (7.12) one can read off that the metric in source coordinates
δ2
Gαβ [J] = − α ′β D[JkJ ′ ] J=J ′ .

(10.1)
δJ δJ
This is an interesting relation because one can do a change of coordinates, e. g. J α →
K α = K α [J], and both sides transform automatically in the right way,

δJ α δJ ′β
Gµν [K] = Gαβ [J[K]]
δK µ δK ′ν
δJ α δJ ′β δ δ (10.2)
=− D[JkJ ′ ]
δK µ δK ′ν δJ α δJ ′β
δ δ
D[KkK ′ ] K=K ′

=− µ ′ν
δK δK
In a related way one can obtain the m-connection symbols,
\[ (\Gamma_{\mathrm{M}})_{\alpha\beta\gamma}[J] = G_{\beta\delta}[J] (\Gamma_{\mathrm{M}})_\alpha{}^\delta{}_\gamma[J] = - \frac{\delta^2}{\delta J^\alpha \delta J^\gamma} \frac{\delta}{\delta J'^\beta} D[J \| J'] \Big|_{J=J'}. \tag{10.3} \]
This has indeed the right transformation law for a connection and can immediately be
evaluated in any other coordinate system!
Finally, the symbols of the dual e-connection vanish in source coordinates, but can be
written as
\[ (\Gamma_{\mathrm{E}})_{\alpha\beta\gamma}[J] = - \frac{\delta}{\delta J^\beta} \frac{\delta^2}{\delta J'^\alpha \delta J'^\gamma} D[J \| J'] \Big|_{J=J'}. \tag{10.4} \]
This formula easily generalizes to other coordinates; for example one can obtain (6.9) from
(7.23). We observe how elegantly information geometry is encoded in a divergence functional.
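A finite-difference sketch of (10.1) in the assumed discrete toy family: the metric extracted from the divergence agrees with the second derivative of W; (10.3) and (10.4) can be checked the same way with third-order stencils.

```python
import numpy as np

# Minimal sketch: extracting the Fisher metric from the divergence via
# eq. (10.1), by a finite-difference cross derivative at J = J', in a
# hypothetical discrete toy family p(x) ~ exp(-I(x) + J f(x)).
rng = np.random.default_rng(6)
I, f = rng.normal(size=5), rng.normal(size=5)

def W(J):
    return np.log(np.sum(np.exp(-I + J * f)))

def D(J, Jp):                                  # closed form (7.7)
    p = np.exp(-I + J * f - W(J))
    return (J - Jp) * np.sum(p * f) - W(J) + W(Jp)

J, e = 0.2, 1e-3
# G[J] = - d^2 D / (dJ dJ') evaluated at J = J', eq. (10.1):
G_div = -(D(J + e, J + e) - D(J + e, J - e)
          - D(J - e, J + e) + D(J - e, J - e)) / (4 * e**2)
G_w = (W(J + e) - 2 * W(J) + W(J - e)) / e**2  # Fisher metric = W''
assert abs(G_div - G_w) < 1e-4
```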
Let us note here that after a general (non-linear) change of coordinates J^α → K^α[J],
functional derivatives with respect to the sources J^α generalize to covariant derivatives
based on the e-connection. One may call them e-covariant derivatives. Similarly,
functional derivatives with respect to expectation values Φ_α generalize to m-covariant
derivatives based on the m-connection.
One can develop a calculus for functional derivatives based on these notions of covariant
derivatives, but one must be careful with taking over intuition from standard Riemannian
geometry, in the sense that both covariant derivatives are not metric-compatible. One can
use the calculus emerging this way to relate connected correlation functions to one-particle
irreducible correlation functions and vice versa.
Let us add another remark here. While it is straightforward to express the divergence
functional in other coordinates, e. g.

\[ D[K \| K'] = D[J[K] \| J'[K']], \tag{10.5} \]

it is in general not possible to write the resulting functional D[K‖K′] again in the form
of a Bregman divergence with reversed arguments as in (7.7). This works only when the
transformation from J^α to K^α is an affine map.
Associated with some connection Γ_α{}^β{}_γ[J] is a Riemann curvature tensor,
\[ R^\alpha{}_{\beta\gamma\delta}[J] = \frac{\delta}{\delta J^\gamma} \Gamma_\delta{}^\alpha{}_\beta[J] - \frac{\delta}{\delta J^\delta} \Gamma_\gamma{}^\alpha{}_\beta[J] + \Gamma_\gamma{}^\alpha{}_\lambda[J] \Gamma_\delta{}^\lambda{}_\beta[J] - \Gamma_\delta{}^\alpha{}_\lambda[J] \Gamma_\gamma{}^\lambda{}_\beta[J]. \tag{10.6} \]
Interestingly, the curvature tensors associated with the e- and m-connections both vanish!
This is immediately clear because R^α{}_{βγδ}[J] is a tensor, and both the e- and the
m-connection have coordinate systems where their connection symbols vanish. In contrast,
the curvature tensor associated with the Levi-Civita connection has no reason to vanish,
and it is given by a combination of three-point functions and the Fisher metric.

11 Conclusions

To conclude, we have discussed here the conceptual setup of Euclidean quantum field theory
in the functional integral representation from the point of view of information geometry.
It is nice to see how naturally the concepts of information geometry apply. When source
fields, which are usually introduced to obtain correlation functions, are seen as coordinates, a
natural and rich geometric picture arises. Dual to this is a description with field expectation
values as coordinates.
It is clear to any physicist familiar with general relativity how powerful the concepts
of differential geometry can be. It is therefore great to have a similar formalism now also
available for Euclidean quantum fields. For example one can easily go to general coordinates
without losing the significance of convexity of generating functionals. One can work with
connections and corresponding covariant derivatives very similarly to what is familiar from spacetime
geometry.
A particularly interesting feature of the type of information geometry explored here is
that the metric as well as the two dual connections arise from the functional derivatives of a
divergence functional. The latter corresponds to the relative information entropy between
two probability distributions and it has many highly interesting properties. One of them
is that it can serve as a generating functional for correlation functions, very much as the
Schwinger functional or the quantum effective action. Another is that it is non-negative and
of course it has an information theoretic significance as exemplified by Sanov’s theorem.
In the present study we have concentrated on taking the sources or field expectation
values as coordinates, but in a very similar way one can also understand coupling constants
entering an action as coordinates and extend the information geometry accordingly. This
will be done in a forthcoming publication.
Another aspect we did not study here is the renormalization group. In fact, the
Schwinger functional and quantum effective action are subject to renormalization. This
is discussed in detail in particular in the context of the functional renormalization group
[27–30]. In a forthcoming publication we will present a renormalization group flow equation
for the divergence functional [31].

Finally, information geometry can also be developed for quantum states described by
density matrices or reduced density matrices. Central concepts like the relative entropy and
Fisher information metric are defined in that context, as well. In the context of relativistic
quantum field theory it is particularly interesting that the relative entropy is well-defined
also for spatial subregions [32]. Relative entropy can be used to formulate thermodynamics
[5] and relativistic fluid dynamics [6] on the basis of quantum field theory. We can well
imagine that a quantum extension of information geometry will eventually allow one to
understand quantum field theory dynamics in much more detail.

Acknowledgements

The author would like to thank Holger Gies and Markus Schröfl for useful discussions and
Markus Schröfl for carefully reading the manuscript.

References
[1] L. Bombelli, R.K. Koul, J. Lee and R.D. Sorkin, Quantum source of entropy for black holes,
Physical Review D 34 (1986) 373.
[2] M. Srednicki, Entropy and area, Physical Review Letters 71 (1993) 666.
[3] C. Callan and F. Wilczek, On geometric entropy, Physics Letters B 333 (1994) 55.
[4] E. Witten, APS Medal for Exceptional Achievement in Research: Invited article on
entanglement properties of quantum field theory,
Reviews of Modern Physics 90 (2018) 045003.
[5] S. Floerchinger and T. Haas, Thermodynamics from relative entropy,
Physical Review E 102 (2020) 052117.
[6] N. Dowling, S. Floerchinger and T. Haas, Second law of thermodynamics for relativistic
fluids formulated with relative entropy, Physical Review D 102 (2020) 105002.
[7] J. Berges, S. Floerchinger and R. Venugopalan, Dynamics of entanglement in expanding
quantum fields, Journal of High Energy Physics 2018 (2018) 145 [1712.09362].
[8] J. Berges, S. Floerchinger and R. Venugopalan, Thermal excitation spectrum from
entanglement in an expanding quantum string, Physics Letters B 778 (2018) 442
[1707.05338].
[9] V. Giantsos and N. Tetradis, Entanglement entropy in a four-dimensional cosmological
background, Physics Letters B 833 (2022) 137331 [2203.06699].
[10] K. Boutivas, G. Pastras and N. Tetradis, Entanglement and Expansion, e-print 2302.14666
(2023).
[11] S.-i. Amari, Information Geometry and Its Applications, Springer Publishing Company,
Incorporated, 1st ed. (2016).
[12] N. Ay, J. Jost, H.V. Lê and L. Schwachhöfer, Information Geometry, vol. 64 of Ergebnisse
Der Mathematik Und Ihrer Grenzgebiete 34, Springer International Publishing, Cham (2017),
10.1007/978-3-319-56478-4.
[13] P.T. Komiske, E.M. Metodiev and J. Thaler, Metric Space of Collider Events,
Physical Review Letters 123 (2019) 041801.

[14] P.T. Komiske, E.M. Metodiev and J. Thaler, The hidden geometry of particle collisions,
Journal of High Energy Physics 2020 (2020) 6.
[15] J. Erdmenger, K. Grosvenor and R. Jefferson, Information geometry in quantum field theory:
Lessons from simple examples, SciPost Physics 8 (2020) 073.
[16] J. Erdmenger, K. Grosvenor and R. Jefferson, Towards quantifying information flows:
Relative entropy in deep neural networks and the renormalization group,
SciPost Physics 12 (2022) 041.
[17] K. Grosvenor and R. Jefferson, The edge of chaos: Quantum field theory and deep neural
networks, SciPost Physics 12 (2022) 081.
[18] H. Strobel, W. Muessel, D. Linnemann, T. Zibold, D.B. Hume, L. Pezzè et al., Fisher
information and entanglement of non-Gaussian spin states, Science 345 (2014) 424.
[19] P. Hauke, M. Heyl, L. Tagliacozzo and P. Zoller, Measuring multipartite entanglement
through dynamic susceptibilities, Nature Physics 12 (2016) 778.
[20] L. Pezzè, A. Smerzi, M.K. Oberthaler, R. Schmied and P. Treutlein, Quantum metrology with
nonclassical states of atomic ensembles, Reviews of Modern Physics 90 (2018) 035005.
[21] S. Banerjee, J. Erdmenger and D. Sarkar, Connecting Fisher information to bulk
entanglement in holography, Journal of High Energy Physics 2018 (2018) 1.
[22] G. Ruppeiner, Riemannian geometry in thermodynamic fluctuation theory,
Reviews of Modern Physics 67 (1995) 605.
[23] D.C. Brody and D.W. Hook, Information geometry in vapour–liquid equilibrium,
Journal of Physics A: Mathematical and Theoretical 42 (2009) 023001.
[24] S. Floerchinger, T. Haas and M. Schröfl, Relative entropic uncertainty relation for scalar
quantum fields, SciPost Physics 12 (2022) 089 [2107.07824].
[25] T.M. Cover and J.A. Thomas, Elements of Information Theory, Wiley-Interscience,
Hoboken, N.J., 2nd ed. (2006).
[26] A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications, vol. 38 of
Stochastic Modelling and Applied Probability, Springer Berlin Heidelberg, Berlin, Heidelberg
(2010), 10.1007/978-3-642-03311-7.
[27] J. Berges, N. Tetradis and C. Wetterich, Non-Perturbative Renormalization Flow in Quantum
Field Theory and Statistical Physics, Physics Reports 363 (2002) 223 [hep-ph/0005122].
[28] J.M. Pawlowski, Aspects of the functional renormalisation group,
Annals of Physics 322 (2007) 2831.
[29] H. Gies, Introduction to the Functional RG and Applications to Gauge Theories, in
Renormalization Group and Effective Field Theory Approaches to Many-Body Systems,
A. Schwenk and J. Polonyi, eds., vol. 852, (Berlin, Heidelberg), pp. 287–348, Springer Berlin
Heidelberg (2012), DOI.
[30] B. Delamotte, An Introduction to the Nonperturbative Renormalization Group, in
Renormalization Group and Effective Field Theory Approaches to Many-Body Systems,
A. Schwenk and J. Polonyi, eds., vol. 852, (Berlin, Heidelberg), pp. 49–132, Springer Berlin
Heidelberg (2012), DOI.
[31] S. Floerchinger, Exact flow equation for the divergence functional, to appear (2023).

[32] H. Araki, Relative entropy for states of von Neumann algebras. II,
Publications of the Research Institute for Mathematical Sciences 13 (1977) 173.
