Paul D. Mitchener
e-mail: mitch@uni-math.gwdg.de
November 3, 2005
Contents
1 Classical Probability Theory
  1.1 Probability Spaces
  1.2 Random Variables
  1.3 Independence

3 Freeness
  3.1 Free Products
  3.2 Free Algebras and Random Variables
  3.3 Random Walks on Groups

4 Some Combinatorics
  4.1 The Catalan Numbers
  4.2 Partitions
  4.3 Complements
  4.4 Free Cumulants

8 Sums of Random Variables
  8.1 Sums of Independent Random Variables
  8.2 Free Convolution
  8.3 The Cauchy Transform
  8.4 The R-transform
  8.5 Random Walks on Free Groups
Proposition 1.3 Let X be a complex random variable. Then we can define a probability measure on the set C by writing

    τ_X(S) = P(X ∈ S)

Definition 1.4 The above measure τ_X is called the probability law (or just law) of a random variable X.
One of the fundamental problems in probability theory is to compute the
probability laws of random variables.
Definition 1.5 Let X be a random variable on a probability space (Ω, µ). The expectation of X is the integral

    E(X) = ∫_Ω X dµ

The k-th moment of X is the number

    m_k(X) = E(X^k)
Proposition 1.6 Let 1 denote the random variable with constant value 1. Then E(1) = 1. □

Proposition 1.7 Let X and Y be random variables, and let α and β be scalars. Then E(αX + βY) = αE(X) + βE(Y). □
The variance of a random variable X
can be thought of as a measure of how much and how frequently X differs from
its expectation.
The following result follows immediately from our various definitions.
Theorem 1.9 Two random variables with the same moments have the same probability laws. □
We will not prove this theorem now since we will look at a similar but more
general result later on.
1.3 Independence
Definition 1.10 Let (Ω, µ) be a probability space. Then we call two events
E1 , E2 ⊆ Ω independent if µ(E1 ∩ E2 ) = µ(E1 )µ(E2 ).
The corresponding result for real random variables is of course also true.
Definition 1.12 The above measure τ_X is called the joint law of the random variables X1, . . . , Xn. The random variables X1, . . . , Xn are termed independent if the laws satisfy the equation

    τ_(X1,...,Xn) = τ_X1 × · · · × τ_Xn
The joint laws of independent random variables are thus determined from
their individual laws, which makes them easy to calculate with. We will see
some examples of such calculations later on.
Definition 2.1 A non-commutative probability space is a pair (A, φ), where A
is a unital complex algebra, and φ is a linear functional such that φ(1) = 1.
Elements of the algebra A are called random variables.
The functional φ is termed a trace if φ(XY) = φ(Y X) for all random variables X and Y.

We usually have slightly more structure to work with, as the following examples show.
Example 2.3 The weak topology on the space of operators B(L²(Ω)) is defined by saying that a sequence of operators (T_n) has limit T if and only if the sequence (⟨T_n f, g⟩) has limit ⟨T f, g⟩ for all functions f, g ∈ L²(Ω).

The C*-algebra C(Ω) is not closed in the space B(L²(Ω)) under the weak topology; the closure is the von Neumann algebra L^∞(Ω) consisting of all bounded measurable functions on Ω.
The expectation E: L∞ (Ω) → C is continuous with respect to the weak
topology.
that is to say by taking the transpose of the complex conjugate. The functional φ is a trace defined by the formula

    φ(X) = E((1/n) tr(X)) = E((1/n)(X_11 + X_22 + · · · + X_nn))

We thus obtain the C*-probability space of continuous random matrices. We can similarly define the W*-probability space of all bounded random matrices.
Example 2.6 Let G be a discrete group. Let CG be the complex group algebra, made up of formal sums

    Σ_{i=1}^{n} α_i g_i,   α_i ∈ C, g_i ∈ G

with product

    (Σ_{i=1}^{m} α_i g_i)(Σ_{j=1}^{n} β_j h_j) = Σ_{i,j=1}^{m,n} α_i β_j g_i ◦ h_j

and trace

    φ(Σ_{i=1}^{m} α_i g_i) = Σ_{g_i = e} α_i

The left regular representation λ: G → B(l²G) is defined by the formula

    (λ(g)ξ)(h) = ξ(g^{-1}h),   ξ ∈ l²G, g, h ∈ G
2.2 Probability Laws
Definition 2.7 Let (A, φ) be a non-commutative probability space, and let X ∈ A. Then the k-th moment of X is the number

    m_k(X) = φ(X^k)
τX : C(R) → C
The real case of theorem 1.9 follows as an immediate corollary. The complex
case is then easily deduced.
Corollary 2.9 Two classical random variables with the same moments have the same probability laws. □
Now, let us equip the space of complex continuous functions C(C) with the
topology defined by saying that a sequence of functions converges if it converges
uniformly on compact subsets. The space of polynomials C[X] is topologised as
a subspace of the space C(C).
Less formally, we say that the sequence (Xn) converges in distribution to a random variable X if it converges in distribution to the probability law of X.

Note that it is not necessary for each variable Xn to lie in the same non-commutative probability space for the above definition to make sense.
The following result follows by linearity.
2.3 Independence
Definition 2.14 Let (A, φ) be a non-commutative probability space. Then a family of subalgebras {A_λ | λ ∈ Λ} is called independent if:

• [A_{λ1}, A_{λ2}] = 0 if λ1 ≠ λ2.

• φ(a1 a2 · · · an) = φ(a1)φ(a2) · · · φ(an) whenever a_k ∈ A_{λ_k} and λ_i ≠ λ_j when i ≠ j.
The following two results follow immediately from the definition of independence.
whenever a1, a2 ∈ A1 and a ∈ A2. □
We call a set of random variables {X_λ | λ ∈ Λ} independent if the algebras generated by them form an independent family, that is to say:

• [X_{λ1}, X_{λ2}] = 0 if λ1 ≠ λ2.

• φ(X_{λ1} X_{λ2} · · · X_{λn}) = φ(X_{λ1})φ(X_{λ2}) · · · φ(X_{λn}) whenever λ_i ≠ λ_j when i ≠ j.
Similarly, we call a collection of sets independent if the algebras generated
by them form an independent family.
3 Freeness
3.1 Free Products
In this subsection we look at a number of free product constructions without
proofs. For further details, see for example chapter 1 of [VDN92].
Definition 3.1 Let {G_λ | λ ∈ Λ} be a family of groups. Then the free product ∗_{λ∈Λ} G_λ is the unique (up to isomorphism) group G equipped with homomorphisms ψ_λ: G_λ → G such that for any group H equipped with homomorphisms ϕ_λ: G_λ → H there is a unique homomorphism Φ: G → H such that Φ ∘ ψ_λ = ϕ_λ for all λ ∈ Λ.
Given a finite family of groups G1 , . . . , Gn , we denote the free product by
writing G1 ∗ G2 ∗ · · · ∗ Gn .
Definition 3.2 Let G be a group. Then subgroups G1 and G2 are called free if there are no relations between them, that is to say, if w = g1 g2 · · · gn, where the elements gi belong alternately to the subgroups G1 and G2, and gi ≠ e for all i, then w ≠ e. (Here we use e to denote the identity element of a group.)
Definition 3.4 Let {A_λ | λ ∈ Λ} be a family of unital algebras. Then the free product ∗_{λ∈Λ} A_λ is the unique (up to isomorphism) unital algebra A equipped with homomorphisms ψ_λ: A_λ → A such that for any unital algebra B equipped with homomorphisms ϕ_λ: A_λ → B there is a unique homomorphism Φ: A → B such that Φ ∘ ψ_λ = ϕ_λ for all λ ∈ Λ.
The definition of a free product of C*-algebras is exactly analogous to the above.
where H⁰_{λ_i} is the orthogonal complement of the vector ξ_{λ_i} in the Hilbert space H_{λ_i}.
We consider the Hilbert space H to have the distinguished unit vector ξ. Each Hilbert space H_λ is embedded in H, and each unit vector ξ_λ is identified with the vector ξ.
Definition 3.7 Let H be a Hilbert space. Then we define the full Fock space

    T(H) = Cξ ⊕ ⨁_{n≥1} H^{⊗n}
Definition 3.9 Let {Aλ | λ ∈ Λ} be a family of von Neumann algebras, where
Aλ ⊆ B(Hλ ), and the Hilbert space Hλ is equipped with a distinguished unit
vector ξλ .
Let us regard A_λ as a subalgebra of the operator space B(∗_{λ∈Λ}(H_λ, ξ_λ)). Then we define the free product as a double commutant

    ∗_{λ∈Λ} A_λ = ( ⋃_{λ∈Λ} A_λ )″
We will often talk about freeness of a set of random variables rather than
algebras.
Proof: By definition of freeness
whenever X1 , X2 ∈ A1 and X ∈ A2 .
The second part of the above expression is again zero by freeness. Hence
so by linearity
3.3 Random Walks on Groups
Let G be a discrete group, with finite generating set {s1 , . . . , sn }. Let us consider
a random walk on the group G, beginning at the point 1, and going in each of
the directions represented by elements of the generating set and their inverses,
with equal probability 1/2n.
Each step of the random walk is the random variable

    X = (1/(2n))(s1 + · · · + sn + s1^{-1} + · · · + sn^{-1})

in the space CG.
The position after k steps is given by the random variable X^k. The probability of being at position g ∈ G after k steps is the coefficient of g in the expression for X^k, that is to say

    P(X_k = g) = ⟨λ(X^k)δ_e, δ_g⟩
Example 3.18 Consider a random walk on F2, the free group with two generators, s1 and s2. Then the random variable X is given by the formula

    X = X1 + X2,   X1 = (1/4)(s1 + s1^{-1}), X2 = (1/4)(s2 + s2^{-1})

Observe φ(X1) = 0, φ(X2) = 0, and

    X1X2 = (1/16)(s1s2 + s1s2^{-1} + s1^{-1}s2 + s1^{-1}s2^{-1})
    X2X1 = (1/16)(s2s1 + s2s1^{-1} + s2^{-1}s1 + s2^{-1}s1^{-1})
The group elements in the above expressions are distinct, so φ(X1 X2 ) =
φ(X2 X1 ) = 0. It follows that the random variables X1 and X2 are free. Freeness
actually gives us the tools to precisely compute the moments φ(X k ) and the
associated probability law. We will return to this subject later on.
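The vanishing moments in this example are easy to confirm by direct computation. The following Python sketch (our own illustration, not part of the original notes) represents elements of the group algebra CF2 as dictionaries mapping freely reduced words to coefficients; the trace φ simply reads off the coefficient of the identity.

```python
from fractions import Fraction

def reduce_word(w):
    """Freely reduce a word; generators are coded as +1, -1, +2, -2
    for s1, s1^-1, s2, s2^-1."""
    out = []
    for g in w:
        if out and out[-1] == -g:
            out.pop()  # cancel an adjacent pair g g^-1
        else:
            out.append(g)
    return tuple(out)

def mul(a, b):
    """Product in the group algebra C[F2]; elements are dicts
    {reduced word: coefficient}."""
    c = {}
    for wa, ca in a.items():
        for wb, cb in b.items():
            w = reduce_word(wa + wb)
            c[w] = c.get(w, 0) + ca * cb
    return c

def phi(a):
    """The canonical trace: the coefficient of the identity (empty word)."""
    return a.get((), 0)

# X1 = (1/4)(s1 + s1^-1), X2 = (1/4)(s2 + s2^-1)
X1 = {(1,): Fraction(1, 4), (-1,): Fraction(1, 4)}
X2 = {(2,): Fraction(1, 4), (-2,): Fraction(1, 4)}

assert phi(X1) == 0 and phi(X2) == 0
assert phi(mul(X1, X2)) == 0 and phi(mul(X2, X1)) == 0
assert phi(mul(X1, X1)) == Fraction(1, 8)  # second moment of X1
```

Because words in s1, s2 only cancel within each letter, no product of alternating factors ever reduces to the identity, which is the observation behind the example.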
4 Some Combinatorics
4.1 The Catalan Numbers
The Catalan numbers are a sequence that occurs naturally in many combinatorial problems. The following definition is probably the easiest one to use in such problems.

Definition 4.1 The Catalan numbers (C_n)_{n≥0} are defined by C_0 = C_1 = 1 and the recurrence

    C_n = Σ_{k=0}^{n−1} C_k C_{n−1−k}

when n > 1.
Proposition 4.2 The Catalan numbers can also be defined by the explicit formula

    C_n = (2n)! / ((n + 1)! n!)
Proof: Let us define the generating function for the Catalan numbers

    c(x) = Σ_{n=0}^{∞} C_n x^n

so

    c(x)² = Σ_{n=0}^{∞} (Σ_{k=0}^{n} C_k C_{n−k}) x^n = Σ_{n=0}^{∞} C_{n+1} x^n

and

    c(x) = 1 + x c(x)²

We can solve this equation to obtain the expression

    c(x) = (1 − √(1 − 4x)) / (2x)

since we know that c(0) = C_0 = 1.

Taking a binomial expansion of the square root, we obtain the formula we claimed. □
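The recurrence and the closed formula can be checked against each other mechanically. A small Python sketch (illustrative; the function names are ours):

```python
from math import factorial

def catalan_rec(n, memo={0: 1}):
    """Catalan numbers via the recurrence C_n = sum_k C_k C_{n-1-k};
    the mutable default argument deliberately memoises results."""
    if n not in memo:
        memo[n] = sum(catalan_rec(k) * catalan_rec(n - 1 - k)
                      for k in range(n))
    return memo[n]

def catalan_closed(n):
    """The explicit formula C_n = (2n)! / ((n+1)! n!)."""
    return factorial(2 * n) // (factorial(n + 1) * factorial(n))

assert [catalan_rec(n) for n in range(8)] == [1, 1, 2, 5, 14, 42, 132, 429]
assert all(catalan_rec(n) == catalan_closed(n) for n in range(20))
```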
4.2 Partitions
Definition 4.4 A partition, π, of a finite set S is a collection of blocks {V_1, . . . , V_s} that partition the set S.

If the set S is ordered, we call a partition crossing if we can find elements p1 < q1 < p2 < q2 such that the elements p1 and p2 belong to the same block, and the elements q1 and q2 belong to another, distinct block.

A partition is called a pairing if each block contains exactly two elements.

Let us write P(n) to denote the set of all partitions of the set {1, . . . , n}, NCP(n) to denote the set of all non-crossing partitions, P₂(n) to denote the set of pairings, and NCP₂(n) to denote the set of all non-crossing pairings.
We write 1_n to denote the partition of {1, . . . , n} with exactly one block:

    1_n = {{1, . . . , n}}
Proposition 4.6 The number of elements in the set P₂(n) is zero if n is odd, and equal to the product

    (n − 1)(n − 3) · · · 3 · 1

if n is even.
Proposition 4.7 The number of elements in the set NCP₂(n) is zero if n is odd, and equal to the Catalan number C_{n/2} if n is even.
It follows that the partition π can be restricted to define non-crossing pairings on the sets {2, . . . , m − 1} and {m + 1, . . . , n}. It follows that m must be even. Counting the numbers of such partitions, we obtain the recurrence relation

    #NCP₂(n) = Σ_{l=1}^{n/2} #NCP₂(2l − 2) #NCP₂(n − 2l)

If we set n = 2k, we see that the number #NCP₂(2k) satisfies the recurrence relation for the Catalan numbers. The result now follows. □
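Both counting propositions are easy to verify by brute force for small n. The following Python sketch (our own illustration) enumerates all pairings directly and tests the crossing condition of definition 4.4:

```python
def pairings(elts):
    """Generate every pairing of a list of distinct, sorted elements."""
    if not elts:
        yield []
        return
    first, rest = elts[0], elts[1:]
    for i, other in enumerate(rest):
        for p in pairings(rest[:i] + rest[i + 1:]):
            yield [(first, other)] + p

def is_noncrossing(pairing):
    """No pairs (p1, p2), (q1, q2) with p1 < q1 < p2 < q2."""
    return not any(a < c < b < d
                   for (a, b) in pairing for (c, d) in pairing
                   if (a, b) != (c, d))

def double_factorial(n):
    """(n - 1)(n - 3) ... 3 . 1 for even n."""
    out = 1
    for k in range(n - 1, 0, -2):
        out *= k
    return out

catalan = [1, 1, 2, 5, 14, 42]
for n in (2, 4, 6, 8):
    ps = list(pairings(list(range(n))))
    assert len(ps) == double_factorial(n)          # proposition 4.6
    assert sum(is_noncrossing(p) for p in ps) == catalan[n // 2]  # proposition 4.7
```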
4.3 Complements
The following construction works for non-crossing partitions, but has no analogue in the general case.
Definition 4.8 Let NCP(n) be the set of all non-crossing partitions of the ordered set {1, . . . , n}. Let NCP(n, n̄) be the set of all non-crossing partitions of the ordered set {1, 1̄, . . . , n, n̄}.

Let π ∈ NCP(n) be a partition. Then we define the complement, K(π) ∈ NCP(n̄), to be the maximal non-crossing partition σ such that π ∪ σ ∈ NCP(n, n̄).

Of course, the set NCP(n̄) can be canonically identified with the set NCP(n), so the complement operation can be viewed as a natural map K: NCP(n) → NCP(n).
where

    k_π[X1, . . . , Xn] = Π_{i=1}^{r} k_{V(i)}[X1, . . . , Xn],   π = {V(1), . . . , V(r)}

and

    k_V[X1, . . . , Xn] = k_s(X_{v(1)}, . . . , X_{v(s)}),   V = (v(1), . . . , v(s))

As a special case, for a random variable X, we write

    k_n^X = k_n(X, . . . , X)
Proposition 4.11 The free cumulants are well-defined.
Proof: We can write the moment-cumulant formula in the form

    φ(X1 · · · Xn) = k_n(X1, . . . , Xn) + Σ_{π∈NCP(n), π≠1_n} k_π[X1, . . . , Xn]

where

    φ_V[X1, . . . , Xn] = φ(X_{v(1)} · · · X_{v(s)}),   V = (v(1), . . . , v(s))
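The recursive determination of free cumulants from moments that this proof describes can be sketched concretely: solve for k_n by subtracting the contribution of every non-crossing partition other than 1_n. The Python below is our own illustration, handles a single random variable only, and uses our own helper names.

```python
def set_partitions(n):
    """All partitions of {1, ..., n}, each a list of blocks (lists)."""
    if n == 0:
        yield []
        return
    for p in set_partitions(n - 1):
        for i in range(len(p)):
            yield p[:i] + [p[i] + [n]] + p[i + 1:]
        yield p + [[n]]

def is_noncrossing(p):
    """No p1 < q1 < p2 < q2 with p1, p2 in one block, q1, q2 in another."""
    return not any(a < c < b < d
                   for blk1 in p for blk2 in p if blk1 is not blk2
                   for a in blk1 for b in blk1 if a < b
                   for c in blk2 for d in blk2 if c < d)

def free_cumulants(moments):
    """Recover k_1, k_2, ... from m_1, m_2, ... by recursively solving
    m_n = k_n + sum over non-crossing partitions pi != 1_n of k_pi."""
    k = {}
    for n in range(1, len(moments) + 1):
        rest = 0
        for p in set_partitions(n):
            if len(p) > 1 and is_noncrossing(p):
                prod = 1
                for blk in p:
                    prod *= k[len(blk)]
                rest += prod
        k[n] = moments[n - 1] - rest
    return [k[n] for n in range(1, len(moments) + 1)]

# Semicircular moments 0, 1, 0, C_2, 0, C_3, ...: only k_2 = 1 survives.
assert free_cumulants([0, 1, 0, 2, 0, 5, 0, 14]) == [0, 1, 0, 0, 0, 0, 0, 0]
```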
Definition 5.1 Let (A, φ) be a non-commutative probability space. Then a self-adjoint random variable X ∈ A is called normal or Gaussian, with expectation µ and variance σ², if the probability law is defined by the formula

    τ_X(P) = (1/√(2πσ²)) ∫_R P(t) exp(−(t − µ)²/(2σ²)) dt

It follows that

    m_{2n} = σ^{2n}(2n − 1)(2n − 3) · · · 3 · 1

as claimed. □
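The even-moment formula can be verified numerically against the density itself; a quick trapezoid-rule sketch in Python (illustrative, with our own helper names):

```python
from math import exp, pi, sqrt

def gaussian_moment(n, sigma=1.0, steps=100_000):
    """Trapezoid rule for the n-th moment of a centred Gaussian
    with standard deviation sigma."""
    lo, hi = -10.0 * sigma, 10.0 * sigma
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        t = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * t**n * exp(-t * t / (2.0 * sigma * sigma))
    return total * h / sqrt(2.0 * pi * sigma * sigma)

def odd_double_factorial(m):
    """(m - 1)(m - 3) ... 3 . 1 for even m."""
    out = 1
    for j in range(m - 1, 0, -2):
        out *= j
    return out

# m_{2n} = sigma^{2n} (2n - 1)(2n - 3) ... 3 . 1, here with sigma = 1
for n in (1, 2, 3):
    assert abs(gaussian_moment(2 * n) - odd_double_factorial(2 * n)) < 1e-4
```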
5.2 Semicircular Random Variables
As we will see in the next section, the Gaussian law has a fundamental role in
the study of independent random variables. The following probability law has
a similarly central role when we look at free random variables.
5.3 Families of Random Variables
In this section we will generalise Gaussian and semicircular random variables by
looking at two classes of joint laws for families of random variables. We begin
by looking at Gaussian families.
holds.
    φ(X_{λ(1)} · · · X_{λ(n)})

is equal to zero whenever λ(i) ≠ λ(j) for i ≠ j. Since φ(X_{λ(i)}) = 0 for all i and the variables commute, it follows that the set S is independent.
Conversely, suppose that the set S is independent. Since the random variables X_λ commute, it suffices to look at moments of the form

    φ(X_{λ(1)}^{r(1)} · · · X_{λ(n)}^{r(n)})

This value is equal to zero unless the numbers r(i) are all even. Write C_{λ(i)λ(j)} = φ(X_{λ(i)} X_{λ(j)}), so that C_{λ(i)λ(j)} = 0 if i ≠ j. By proposition 4.6 and proposition 5.2, the moment of even order r of a Gaussian random variable is equal to the number of pairings in the set P₂(r) multiplied by the (r/2)-th power of the 2nd moment. Hence we have the formula

    φ(X_{λ(i)}^{r(i)}) = Σ_{π∈P₂(r(i))} Π_{(s,t)∈π} C_{λ(i)λ(i)}

and so

    φ(X_{λ(1)}^{r(1)} · · · X_{λ(n)}^{r(n)}) = Π_{i=1}^{n} Σ_{π∈P₂(r(i))} Π_{(s,t)∈π} C_{λ(i)λ(i)}

Write

    X_{λ(1)}^{r(1)} · · · X_{λ(n)}^{r(n)} = X_{λ′(1)} · · · X_{λ′(k)}
Then, by counting possible pairings, we can write the above formula in the form

    φ(X_{λ′(1)} · · · X_{λ′(k)}) = Σ_{π∈P₂(k)} Π_{(r,s)∈π} C_{λ′(r)λ′(s)}
holds.
The following result can be proved similarly to proposition 5.7
Recall the full Fock space

    T(H) = Cξ ⊕ ⨁_{n≥1} H^{⊗n}

The vector ξ is called the vacuum vector. We set its norm to be equal to one.

For h ∈ H, the left-creation operator l(h) is defined by the formula l(h)η = h ⊗ η. The adjoint, l(h)*, is called the left-annihilation operator and is given by the formulae

    l(h)*(h1 ⊗ · · · ⊗ hn) = ⟨h, h1⟩ h2 ⊗ · · · ⊗ hn

and

    l(h)*(λξ) = 0,   λ ∈ C
Definition 5.11 Let H be a Hilbert space. Then we define Φ(H) to be the von
Neumann algebra generated by the set of left-creation operators. We define the
vacuum state on Φ(H) by the formula
v(T ) = hξ, T ξi
The algebra Φ(H) equipped with the state v is a W*-probability space.

If the above conditions all hold, then v(X_{k(n)} · · · X_{k(1)}) = ‖h‖^n. The question is thus how many elements there are in the set S consisting of all tuples of indices (k(1), . . . , k(n)) which satisfy the above conditions.
If n is odd, then the first of the above conditions implies that the set S is empty. Let n be even. We can define a map α: S → NCP₂(n) by writing α(k(1), . . . , k(n)) = {V_1, . . . , V_s} where:

• For p ≥ 1, V_p = {s, t}, where s is the p-th number such that k(s) = 1 and t is the lowest number such that

    Σ_{i=s}^{t} k(i) = 0
There are approaches to free probability theory (see for example [Haa97])
where the operators s(h) are fundamental constructions. Here, we take a dif-
ferent, combinatorial approach, and regard these operators simply as major
examples.
A proof of the following result can be found in [VDN92].
Proposition 5.13 Let {V_λ | λ ∈ Λ} be an orthogonal family of subspaces of a Hilbert space H. Then the family of algebras

    {Φ(V_λ) | λ ∈ Λ}

is a free family in the W*-probability space (Φ(H), v).

Proof: The result follows immediately from the above two propositions and proposition 5.9. □
Definition 6.1 Let π be a partition of the set {1, . . . , n}. Then we write π ∼ =
(r(1), . . . , r(n)) whenever i and j belong to the same block of π precisely when
r(i) = r(j).
Proof: By proposition 2.16 (in the independent case) or 3.16 (in the free case), the expression φ(X_{r(1)} · · · X_{r(n)}) can be calculated from the moments of the individual random variables. Since the variables all have the same distribution, it follows that

    φ(X_{r(1)} · · · X_{r(n)}) = φ(X_{p(1)} · · · X_{p(n)})

whenever

    r(i) = r(j) ⇔ p(i) = p(j) for all i, j

The result now follows. □
Proof: By linearity

    φ((X1 + · · · + XN)^n) = Σ_{r(1),...,r(n)=1}^{N} φ(X_{r(1)} · · · X_{r(n)})
Lemma 6.4 Suppose π = {V1 , . . . , Vs } and Vi = {r} for some block Vi . Then
the number mπ defined in lemma 6.2 is equal to zero.
Proof: By proposition 2.16 (in the independent case) or proposition 3.15 (in
the free case) we can write
Observe

    A_π^N = N(N − 1) · · · (N − #π + 1) = N! / (N − #π)!

where #π denotes the number of blocks of the partition π.
so

    lim_{N→∞} A_π^N / N^{n/2} = lim_{N→∞} N^{#π − n/2} = { 0 if #π < n/2;  1 if #π = n/2 }

and

    lim_{N→∞} φ( ((X1 + · · · + XN)/√N)^n ) = Σ_{π∈P₂(n)} m_π

as claimed. □
Corollary 6.7

    lim_{N→∞} φ( ((X1 + · · · + XN)/√N)^n ) = 0

if n is odd.

Proof: There are no pairings of the set {1, . . . , n} if n is odd. □
We can now bring our calculations together for independent random variables.
so

    lim_{N→∞} φ( ((X1 + · · · + XN)/√N)^n ) = Σ_{π∈P₂(n)} m_π = { 0 if n is odd;  σ^n (n − 1)(n − 3) · · · 3 · 1 if n is even }

But, by proposition 5.2, the above limits are the moments of a Gaussian law with mean µ and variance σ². The result therefore follows by proposition 2.11. □
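The central limit behaviour of the moments can be observed exactly, without simulation, for a concrete choice of distribution. The Python sketch below (our own illustration) assumes fair ±1 (Rademacher) steps, which have mean 0 and variance 1; the moments of the normalised sum are then rational numbers that approach (n − 1)(n − 3) · · · 3 · 1.

```python
from fractions import Fraction
from math import comb

def normalized_sum_moment(n, N):
    """Exact n-th moment (n even) of (X1 + ... + XN)/sqrt(N) for
    i.i.d. fair signs Xi = +-1; the sum has a shifted binomial law."""
    num = sum(comb(N, j) * (2 * j - N) ** n for j in range(N + 1))
    return Fraction(num, 2 ** N * N ** (n // 2))

def odd_double_factorial(m):
    """(m - 1)(m - 3) ... 3 . 1 for even m."""
    out = 1
    for j in range(m - 1, 0, -2):
        out *= j
    return out

assert normalized_sum_moment(2, 500) == 1  # the variance is exactly 1
assert normalized_sum_moment(4, 500) == Fraction(3) - Fraction(2, 500)
assert abs(float(normalized_sum_moment(6, 500)) - odd_double_factorial(6)) < 0.1
# Odd moments vanish exactly, by symmetry of the step distribution.
assert sum(comb(500, j) * (2 * j - 500) ** 3 for j in range(501)) == 0
```

The fourth moment 3 − 2/N makes the O(1/N) rate of convergence to the Gaussian moments visible.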
6.2 The Free Central Limit Theorem
The difference between the independent and the free case of the central limit
theorem is the result of the following lemma.
Lemma 6.9 Let (Xi) be a free sequence of random variables, identically distributed, such that φ(Xi) = 0 for all i. Write (r/2)² = φ(Xi²). Let π ∈ P₂(n). Then

    m_π = { (r/2)^n if π ∈ NCP₂(n);  0 otherwise }
Proof: Let π ∼
= (r(1), . . . , r(n)). Then there are two possibilities.
mπ = φ(Xr(1) · · · Xr(n) )
by definition of freeness.
6.3 The Multi-dimensional Case
An elaboration of the calculations in the previous two sections lets us formulate
versions of the classical and free central limit theorems for families.
As in the previous section, the free version of the above result involves replacing pairings by non-crossing pairings.
In these notes we will restrict our attention to square random matrices,
although of course others are possible. There are certain important classes of
random matrices. We begin with the most general.
7.2 The Semicircular Law
We now want to look at limits of standard self-adjoint Gaussian matrices as the
size increases, initially for single matrices, and then for independent families.
We will see that such matrices converge in distribution to semicircular random
variables, and independent families converge in distribution to free semicircular
families. These results first appeared in [Voi91]. The combinatorial approach that we take comes from [Spe93].
For convenience, let us count modulo m, that is to say set i(m + 1) = i(1). Then by proposition 5.7, we see that

    φ(X^m) = (1/n) Σ_{i(1),...,i(m)=1}^{n} Σ_{π∈P₂(m)} Π_{(r,s)∈π} E(X_{i(r)i(r+1)} X_{i(s)i(s+1)})

Here δ_{ab} denotes the Kronecker delta, that is to say

    δ_{ab} = { 1 if a = b;  0 if a ≠ b }
or

    φ(X^m) = (1/n^{1+m/2}) Σ_{π∈P₂(m)} Σ_{i(1),...,i(m)=1}^{n} Π_{r=1}^{m} δ_{i(r)i(γπ(r))}

where

    γ = (1, 2, . . . , m − 1, m) ∈ Σ_m
The index function i must be constant on the cycles of the permutation γπ in order for the product Π_{r=1}^{m} δ_{i(r)i(γπ(r))} to be one; otherwise, the product is zero. It follows that we have n^{Z(γπ)} possible index functions which make the above product one. Hence

    Σ_{i(1),...,i(m)=1}^{n} Π_{r=1}^{m} δ_{i(r)i(γπ(r))} = n^{Z(γπ)}

and

    φ_n(X^m) = Σ_{π∈P₂(m)} n^{Z(γπ)−1−m/2}

as claimed. □
Observe

    lim_{n→∞} n^{Z(γπ)−1−k} = { 1 if Z(γπ) = k + 1;  0 if Z(γπ) < k + 1 }

Thus

    lim_{n→∞} φ_n(X_n^{2k}) = #NCP₂(2k)

By proposition 4.7, #NCP₂(2k) is the k-th Catalan number. Hence, by proposition 5.5, the sequence (X_n) converges in distribution to a semicircular random variable, as claimed. □
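The identification of the limit moments with Catalan numbers can be cross-checked by integrating against the standard semicircle density √(4 − t²)/(2π) on [−2, 2] (the density behind the semicircular law of section 5.2); the quadrature sketch below is our own illustration.

```python
from math import pi, sqrt

def semicircle_moment(n, steps=50_000):
    """Trapezoid rule for the n-th moment of the standard semicircle
    density (1/(2*pi)) * sqrt(4 - t^2) on [-2, 2]."""
    h = 4.0 / steps
    total = 0.0
    for i in range(steps + 1):
        t = -2.0 + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * t**n * sqrt(max(4.0 - t * t, 0.0))
    return total * h / (2.0 * pi)

catalan = [1, 1, 2, 5, 14]
for k in range(5):
    # even moments are Catalan numbers, matching the matrix limit above
    assert abs(semicircle_moment(2 * k) - catalan[k]) < 1e-3
assert abs(semicircle_moment(3)) < 1e-6  # odd moments vanish by symmetry
```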
7.3 Asymptotic Freeness
We can quite easily extend the main theorem of the previous section to a result
on families of Gaussian matrices. As a first step, we have the following result.
We omit the proof since the computations needed are almost identical to those
in the proof of lemma 7.6.
Observe

    lim_{n→∞} n^{Z(γπ)−1−k} = { 1 if Z(γπ) = k + 1;  0 if Z(γπ) < k + 1 }

Thus

    lim_{n→∞} φ(X^{λ(1)} · · · X^{λ(m)}) = Σ_{π∈NCP₂(m)} Π_{(r,s)∈π} δ_{λ(r)λ(s)}

which is, by definition, the law of a free semicircular family (with covariance matrix the identity). □
For a Borel set S ⊆ R, we thus define

    (µ1 ∗ µ2)(S) = ∫_S ∫_R dµ1(u) dµ2(v − u)
Proposition 8.2 Let X and Y be real independent random variables, with laws
defined by probability measures µX and µY respectively. Then the sum X + Y
has law defined by the convolution of the measures µX and µY .
and

    m_n(X + Y) = φ((X + Y)^n) = Σ_{k=0}^{n} (n choose k) φ(X^k) φ(Y^{n−k}) = Σ_{k=0}^{n} (n choose k) ∫_R ∫_R x^k y^{n−k} dµ_X(x) dµ_Y(y)

Let z = x + y. Then

    m_n(X + Y) = ∫_R ∫_R z^n dµ_X(x) dµ_Y(z − x) = ∫_R z^n d(µ_X ∗ µ_Y)(z)

Thus the convolution gives the moments of the sum X + Y, and the result follows. □
Proposition 8.3 Let X and Y be free random variables. Then we have free cumulants

    k_n^{X+Y} = k_n^X + k_n^Y

Proof: By definition of the free cumulants

    k_n^{X+Y} = k_n(X + Y, . . . , X + Y) = k_n(X, . . . , X) + k_n(Y, . . . , Y) = k_n^X + k_n^Y

where the second step follows from the fact that the cumulants k_n: A^n → C are multilinear, and mixed cumulants vanish by freeness of the random variables X and Y and theorem 4.12. □
    k_π^X = k_{#V1}^X · · · k_{#Vr}^X,   π = {V1, . . . , Vr}
The above formula can be used to determine the free cumulants in terms of
the moments. The following definition therefore makes sense.
The following result follows immediately from proposition 8.3 and the defi-
nition of the free convolution.
Theorem 8.6 Let X and Y be free random variables, with probability laws τ and τ′ respectively. Then the sum X + Y has probability law τ ⊞ τ′. □
Definition 8.7 Let µ be a probability measure on the real line. Then we define the Cauchy transform of µ by the formula

    G^µ(z) = ∫_R 1/(z − t) dµ(t)
Proposition 8.8 Let µ be a probability measure on the real line, with moments

    m_k(µ) = ∫_R t^k dµ(t)
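Presumably proposition 8.8 records the expansion G^µ(z) = Σ_{k≥0} m_k(µ) z^{−k−1} for large |z|; for an atomic measure this is easy to check numerically. The sketch below (our own helper names) uses µ = (δ_{−1} + δ_{1})/2, for which G(z) = z/(z² − 1).

```python
def cauchy_transform(z, atoms):
    """G^mu(z) = sum_a w_a / (z - a) for a purely atomic measure,
    given as a list of (atom, weight) pairs."""
    return sum(w / (z - a) for a, w in atoms)

def moment(k, atoms):
    """m_k(mu) for the same atomic measure."""
    return sum(w * a**k for a, w in atoms)

# mu = (delta_{-1} + delta_{+1}) / 2
atoms = [(-1.0, 0.5), (1.0, 0.5)]
z = 3.0

# Partial sum of the moment series sum_k m_k / z^{k+1}
series = sum(moment(k, atoms) / z ** (k + 1) for k in range(60))

assert abs(cauchy_transform(z, atoms) - series) < 1e-12
assert abs(cauchy_transform(z, atoms) - z / (z * z - 1)) < 1e-14
```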
Theorem 8.11 Let X and Y be free random variables. Then

    R_{X+Y}(z) = R_X(z) + R_Y(z)
There is a close relation between the R-transform and the Cauchy transform.
In order to establish the relation, we begin with a purely algebraic result.
Lemma 8.12 Let (m_n)_{n≥1} and (k_n)_{n≥1} be sequences of complex numbers related by the moment-cumulant formula, with corresponding formal power series

    M(z) = 1 + Σ_{n=1}^{∞} m_n z^n,   C(z) = 1 + Σ_{n=1}^{∞} k_n z^n

Then

    C(zM(z)) = M(z)
    V1 = {v1, . . . , vs},   π = V1 ∪ π1 ∪ · · · ∪ πs,   k_π = k_s k_{π1} · · · k_{πs}

But

    m_{i_j} = Σ_{π_j ∈ NCP(i_j)} k_{π_j}

Now

    M(z) = 1 + Σ_{n=1}^{∞} m_n z^n = 1 + Σ_{n=1}^{∞} Σ_{s=1}^{n} k_s z^s Σ_{i1,...,is ≥ 0, i1+···+is+s=n} m_{i1} z^{i1} · · · m_{is} z^{is}
Theorem 8.13 Let τ be a probability law, with Cauchy transform G(z) and R-transform R(z). Then

    R(G(z)) + 1/G(z) = z,   G(R(z) + 1/z) = z

Proof: Let (m_n) be the sequence of moments of the probability law τ, and let (k_n) be the sequence of free cumulants. Write

    K(z) = R(z) + 1/z

The R-transform is defined by the formula

    C(z) = 1 + zR(z)

where

    C(z) = Σ_{n=0}^{∞} k_n z^n

By the above lemma and proposition 8.8, we have the relations

    M(z) = C(zM(z)),   G(z) = (1/z) M(1/z)

So

    K(G(z)) = R(G(z)) + 1/G(z) = C(G(z))/G(z) = (1/G(z)) C((1/z) M(1/z)) = (1/G(z)) M(1/z) = z

Set w = G(z). Then we have the relation G(K(w)) = w. Since this formula is purely a formal relation between non-trivial power series, it follows in general that

    G(R(z) + 1/z) = G(K(z)) = z

and we are done. □
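For the standard semicircular law the two inversion formulae can be checked numerically: there G(z) = (z − √(z² − 4))/2 (taking the branch that makes G(z) ~ 1/z at infinity) and R(z) = z, since k_2 = 1 and all other free cumulants vanish. The sketch below is our own illustration.

```python
import cmath

def G(z):
    """Cauchy transform of the standard semicircular law (support [-2, 2]);
    the square-root branch is flipped, if needed, so that G(z) ~ 1/z."""
    root = cmath.sqrt(z * z - 4)
    if (z / root).real < 0:
        root = -root
    return (z - root) / 2

def R(z):
    """R-transform of the standard semicircular law: R(z) = z."""
    return z

z = 2.5 + 1.0j   # a point off the support
w = 0.3 - 0.2j   # a point where the inverse relation applies

assert abs(R(G(z)) + 1 / G(z) - z) < 1e-12   # R(G(z)) + 1/G(z) = z
assert abs(G(R(w) + 1 / w) - w) < 1e-12      # G(R(z) + 1/z) = z
```

The first identity is just the quadratic relation G² − zG + 1 = 0 satisfied by the semicircular Cauchy transform, rewritten.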
8.5 Random Walks on Free Groups
Let F2 be the free group on 2 generators, s1 and s2. Consider the random variable

    X = (1/4)(X1 + X2),   Xi = si + si^{-1}
representing a step of a random walk going with equal probability in each possible direction on the group F2. It is straightforward to check that the set of random variables {X1, X2} is free.
The operator X1 has the same law as the operator s + s^{-1} on l²(Z), where s is a generator of the group Z. Let S¹ ⊆ C be the unit circle. Then we have an isomorphism of Hilbert spaces
    l²(Z) → L²(S¹)

defined by mapping the sequence (a_k)_{k∈Z} to the Laurent series Σ_{k=−∞}^{∞} a_k z^k. Under this transformation, the algebra CZ is mapped to the algebra of all Laurent polynomials, and the trace is mapped to the function

    φ(P) = ∫_{S¹} P(z) dµ(z)
that is to say

    G1(z) = (z² − 4)^{−1/2}
Now, the R-transform, R1(z), is determined by the formula

    G1(R1(z) + 1/z) = z

so

    ((R1(z) + 1/z)² − 4)^{−1/2} = z

and

    R1(z) = −1/z + √(1/z² + 4)
The random variable X2 has the same law as the random variable X1, and therefore R-transform

    R2(z) = −1/z + √(1/z² + 4)
By theorem 8.13,

    R(G(z)) + 1/G(z) = z

where R(z) = R1(z) + R2(z) is the R-transform of the sum X1 + X2, so

    −1/G(z) + 2√(1/G(z)² + 4) = z

and we obtain an equation determining the Cauchy transform G(z).
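The first few return probabilities φ(X^k) of this walk can be computed by brute force in the group algebra CF2, which gives a concrete check on any closed form extracted from the Cauchy transform. The representation below is our own sketch, storing elements as dictionaries of freely reduced words.

```python
from fractions import Fraction

def reduce_word(w):
    """Freely reduce a word; generators coded as +1, -1, +2, -2."""
    out = []
    for g in w:
        if out and out[-1] == -g:
            out.pop()  # cancel adjacent g g^-1
        else:
            out.append(g)
    return tuple(out)

def mul(a, b):
    """Product in C[F2] of dicts {reduced word: coefficient}."""
    c = {}
    for wa, ca in a.items():
        for wb, cb in b.items():
            w = reduce_word(wa + wb)
            c[w] = c.get(w, 0) + ca * cb
    return c

# One step of the walk: X = (1/4)(s1 + s1^-1 + s2 + s2^-1)
X = {(g,): Fraction(1, 4) for g in (1, -1, 2, -2)}

powers = {1: X}
for k in range(2, 7):
    powers[k] = mul(powers[k - 1], X)

# P(walk is back at the identity after k steps) = phi(X^k),
# the coefficient of the empty word.
returns = [powers[k].get((), 0) for k in (2, 4, 6)]
assert returns == [Fraction(1, 4), Fraction(7, 64), Fraction(29, 512)]
```

The odd moments vanish, since a word of odd length never reduces to the identity.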
9 Products of Random Variables
9.1 Products of Independent Random Variables
Let X and Y be independent random variables. Then, by the definition of
independence, we have moments
By the generalised moment-cumulant formula in proposition 4.14, we have the equation

    φ(a1 b1 a2 b2 · · · an bn) = Σ_{σ∈NCP(n)} k_σ[a1, . . . , an] φ_{K(σ)}[b1, . . . , bn]
The following result follows immediately from theorem 9.1 and the definition
of the multiplicative free convolution.
Theorem 9.3 Let X and Y be free random variables, with probability laws τ and τ′ respectively. Then the product XY has probability law τ ⊠ τ′. □
Proposition 9.5 Let ψ1(z), ψ2(z), and ψ3(z) be formal power series. Then

    (ψ1 · ψ2) ∘ ψ3 = (ψ1 ∘ ψ3)(ψ2 ∘ ψ3)  □
Proposition 9.6 Let

    ψ(z) = Σ_{n=1}^{∞} a_n z^n

be a formal power series with a1 ≠ 0. Then there is a formal power series ψ^{-1}(z) such that

    ψ ∘ ψ^{-1}(z) = ψ^{-1} ∘ ψ(z) = z
Lemma 9.8 Let τ be a probability law, with associated sequence of free cumulants (k_n). Define a formal power series

    Φ_τ(z) = Σ_{n=1}^{∞} k_n z^n

and write

    S_τ(z) = Φ_τ^{-1}(z) / z
Proof: Write

    Φ_τ(ψ_τ^{-1}(w)(1 + w)) = w

so

    Φ_τ^{-1}(w) = ψ_τ^{-1}(w)(1 + w)
for all n.
Let us define formal power series Φ_X, Φ_Y, and Φ_XY as in the above lemma. Define

    NCP′(n) = {π ∈ NCP(n) | {1} is a block of π}

and write

    γ_n = Σ_{π∈NCP′(n)} k_π^X k_{K(π)}^Y,   Φ_γ(z) = Σ_{n=1}^{∞} γ_n z^n

Define

    δ_n = Σ_{π∈NCP′(n)} k_π^Y k_{K(π)}^X,   Φ_δ(z) = Σ_{n=1}^{∞} δ_n z^n
The random variables XY and Y X have the same probability law. Therefore
The formula

    zΦ_XY(z) = Φ_γ(z)Φ_δ(z)

is also easy to verify. It follows that

    Φ_γ(z) = Φ_X^{-1} ∘ Φ_XY(z),   Φ_δ(z) = Φ_Y^{-1} ∘ Φ_XY(z)

so

    zΦ_XY(z) = (Φ_X^{-1} ∘ Φ_XY(z)) (Φ_Y^{-1} ∘ Φ_XY(z))

and, applying this with z replaced by Φ_XY^{-1}(z),

    z Φ_XY^{-1}(z) = Φ_X^{-1}(z) Φ_Y^{-1}(z)

that is to say

    Φ_XY^{-1}(z)/z = (Φ_X^{-1}(z)/z)(Φ_Y^{-1}(z)/z)

and we are done, by the above lemma. □
Proof: Write D_k = (D_{ij}^{(k)}) and X = (X_{ij}). Then by definition of the state φ,

    φ(D_{q(1)}X D_{q(2)}X · · · D_{q(m)}X) = (1/n) Σ_{i(1),...,i(m),j(1),...,j(m)=1}^{n} E[D_{j(1)i(1)}^{(q(1))} X_{i(1)j(2)} D_{j(2)i(2)}^{(q(2))} X_{i(2)j(3)} · · · D_{j(m)i(m)}^{(q(m))} X_{i(m)j(1)}]

Since the variables D_{j(k)i(k)}^{(q(k))} are all constant, we can write the above sum as the expression

    (1/n) Σ_{i(1),...,i(m),j(1),...,j(m)=1}^{n} E[X_{i(1)j(2)} X_{i(2)j(3)} · · · X_{i(m)j(1)}] D_{j(1)i(1)}^{(q(1))} D_{j(2)i(2)}^{(q(2))} · · · D_{j(m)i(m)}^{(q(m))}
which by the Wick formula for moments of Gaussian families can be written

    (1/n) Σ_{i(1),...,i(m),j(1),...,j(m)=1}^{n} Σ_{π∈P₂(m)} Π_{(r,s)∈π} E[X_{i(r)j(r+1)} X_{i(s)j(s+1)}] D_{j(1)i(1)}^{(q(1))} D_{j(2)i(2)}^{(q(2))} · · · D_{j(m)i(m)}^{(q(m))}

The fact that the matrix (X_{ij}) is a standard self-adjoint Gaussian random matrix now tells us that

    φ(D_{q(1)}X · · · D_{q(m)}X) = (1/n^{1+m/2}) Σ_{i(1),...,i(m),j(1),...,j(m)=1}^{n} Σ_{π∈P₂(m)} Π_{(r,s)∈π} δ_{i(r)j(s+1)} δ_{i(s)j(r+1)} D_{j(1)i(1)}^{(q(1))} D_{j(2)i(2)}^{(q(2))} · · · D_{j(m)i(m)}^{(q(m))}
Proof: When m is odd, the set P₂(m) is empty, so by the above formula the moments φ(D_{q(1)}X D_{q(2)}X · · · D_{q(m)}X) are all zero. Let m = 2k.

Let π ∈ P₂(2k). Then the number of cycles, Z(γπ), can be at most 1 + k, and this number is achieved precisely when the pairing is non-crossing. By the above lemma, we have the formula

    φ(D_{q(1)}X D_{q(2)}X · · · D_{q(m)}X) = Σ_{π∈P₂(m)} tr_{γπ}[D_{q(1)}, . . . , D_{q(m)}] n^{Z(γπ)−1−m/2}
Observe

    lim_{n→∞} n^{Z(γπ)−1−k} = { 1 if Z(γπ) = k + 1;  0 if Z(γπ) < k + 1 }
Thus

    lim_{n→∞} φ(D_{q(1)}X D_{q(2)}X · · · D_{q(m)}X) = Σ_{π∈NCP₂(m)} tr_{γπ}[D_{q(1)}, . . . , D_{q(m)}]

By theorem 9.1, the above formula shows that in the limit the mixed moments are those of a pair of free random variables. The result now follows. □
{sλ | λ ∈ Λ} {dλ | λ ∈ Λ}
Let U (n) denote the space of complex unitary n × n matrices. A random
unitary matrix is then by definition a map of the form V : Ω → U (n), where Ω
is a probability space.
However, the space U(n) is a compact topological group, and so can be equipped with the Haar measure, h, namely the unique multiplication-invariant probability measure defined on all Borel measurable sets.
X = AV
Proposition 10.8 The random unitary matrix V defined above is a Haar ma-
trix.
Proof: Let W be a fixed unitary matrix. Then we want to show that the
random matrices V and W V have the same law. The result then follows by
theorem 2.8 and the definition of the Haar measure as the unique multiplication-
invariant probability measure on the space of unitary matrices.
By functional calculus, we can write

    V = (X*X)^{−1/2} X,   W V = ((WX)*(WX))^{−1/2} (WX)
{V λ | λ ∈ Λ}
Again using independence of the family {X^λ}, we know that

    φ(V^{λ(1)} V^{λ(2)} · · · V^{λ(n)}) = φ(V^{λ(1)}) φ(V^{λ(2)}) · · · φ(V^{λ(n)})  □
We omit the proof of the following (actually quite involved) result on the asymptotic behaviour of the Weingarten function. For details, consult the article [Nic93], where results involving the asymptotic behaviour of random unitaries were originally examined.
Definition 10.13 Let (A, φ) be a C ∗ -probability space. A random variable
V ∈ A is called a Haar unitary if φ(V k ) = 0 for all k ∈ Z\{0}.
It is clear that any Haar matrix is a Haar unitary. The following result can be proved using our usual methods on the asymptotic behaviour of random matrices and lemma 10.12.
The above proposition along with theorem 10.14 gives us the following result.
The above theorem tells us that any two free random variables can be realised
as a limit (in distribution) of a sequence of constant matrices and a sequence of
constant matrices rotated by Haar matrices.
References
[Con94] A. Connes. Noncommutative Geometry. Academic Press, 1994.
[Rud87] W. Rudin. Real and Complex Analysis. McGraw-Hill, 1987.
[Spe93] R. Speicher. Free convolution and the random sum of matrices. Pub-
lications of the RIMS, 29:731–744, 1993.
[VDN92] D.V. Voiculescu, K.J. Dykema, and A. Nica. Free Random Variables,
volume 1 of CRM Monograph Series. American Mathematical Society,
1992.
[Voi91] D.V. Voiculescu. Limit laws for random matrices and free products. Inventiones Mathematicae, 104:201–220, 1991.