
Lie Algebras and Lie Groups.

Problem set 3.

Sergio Ernesto Aguilar Gutiérrez


saguilar@ictp.it

October 12, 2019

I will use the shorthand 'LN' for Lecture Notes, 'GMM' for the Gell-Mann matrices, and 'HWS' for highest weight state.

Solution 7.1
From the definition,
$$R_T(X)\,T_{i_1 i_2\cdots i_n} = \sum_{s=1}^{n} R_f(X)^{k}{}_{i_s}\, T_{i_1\ldots k\ldots i_n},$$
$$R_T([X,Y])\,T_{i_1 i_2\cdots i_n} = \sum_{s=1}^{n} R_f([X,Y])^{k}{}_{i_s}\, T_{i_1\ldots k\ldots i_n} = \sum_{s=1}^{n} [R_f(X), R_f(Y)]^{k}{}_{i_s}\, T_{i_1\ldots k\ldots i_n},$$
where the last equality follows from the fact that Rf is a representation, so it preserves the commutation
relations. Moreover:
$$R_T(X)R_T(Y)\,T_{i_1 i_2\cdots i_n} = \sum_{s} R_f(X)^{l}{}_{i_s} R_f(Y)^{k}{}_{l}\, T_{i_1\ldots k\ldots i_n} + \sum_{s,\,p\neq s} R_f(X)^{l}{}_{i_p} R_f(Y)^{k}{}_{i_s}\, T_{i_1\ldots l\ldots k\ldots i_n},$$

notice that in the last sum we have the possibilities $p > s$ or $p < s$. This means that when we consider the commutator $[R_T(X), R_T(Y)]\,T_{i_1 i_2\cdots i_n}$ we get a sum of the form $\sum_s [R_f(X), R_f(Y)]^{k}{}_{i_s}\, T_{i_1\ldots k\ldots i_n}$ plus a term of the form
$$\sum_{s,\,p\neq s}\left(R_f(X)^{l}{}_{i_p} R_f(Y)^{k}{}_{i_s} - R_f(Y)^{l}{}_{i_p} R_f(X)^{k}{}_{i_s}\right) T_{i_1\ldots l\ldots k\ldots i_n},$$

which must vanish: the matrix entries are ordinary numbers, so $R_f(Y)^{l}{}_{i_p} R_f(X)^{k}{}_{i_s} = R_f(X)^{k}{}_{i_s} R_f(Y)^{l}{}_{i_p}$; exchanging the summation indices $s \leftrightarrow p$ and relabeling the dummy indices $k \leftrightarrow l$ in the second term then reproduces the first term, leading to the cancellation. This results in
$$[R_T(X), R_T(Y)]\,T_{i_1 i_2\cdots i_n} = \sum_s [R_f(X), R_f(Y)]^{k}{}_{i_s}\, T_{i_1\ldots k\ldots i_n},$$

which means that commutation relations are preserved:

$$R_T([X, Y]) = [R_T(X), R_T(Y)].$$
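As a quick numerical sanity check (a sketch, not part of the solution): for a rank-2 tensor the action above reads $R_T(X)T = XT + TX^T$ in matrix form, and the commutation property can be tested on random matrices. The function name `RT` is my own.

```python
import numpy as np

# Hedged sketch: for a rank-2 tensor, R_T(X) acts once on each index slot,
# which in matrix notation is T -> X T + T X^T. Names here are illustrative.
rng = np.random.default_rng(0)

def RT(X, T):
    # one copy of R_f(X) per index of the rank-2 tensor T
    return X @ T + T @ X.T

X, Y, T = rng.standard_normal((3, 2, 2))

lhs = RT(X @ Y - Y @ X, T)                # R_T([X, Y]) T
rhs = RT(X, RT(Y, T)) - RT(Y, RT(X, T))   # [R_T(X), R_T(Y)] T
assert np.allclose(lhs, rhs)
```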

Solution 7.2
   
Let's consider the basis of column vectors $e^1 = \begin{pmatrix}1\\0\end{pmatrix}$, $e^2 = \begin{pmatrix}0\\1\end{pmatrix}$ for the defining representation of su(2) (if the problem were for su(n), the generalization would be straightforward: the vectors would have $n$ components instead of 2), so that we can create a basis of larger column vectors by means of tensor products:
$$e^1 \otimes e^1 = \begin{pmatrix}1\\0\\0\\0\end{pmatrix},\quad e^1 \otimes e^2 = \begin{pmatrix}0\\1\\0\\0\end{pmatrix},\quad \ldots,\quad T_{ij}\, e^i \otimes e^j = \begin{pmatrix}T_{11}\\T_{12}\\T_{21}\\T_{22}\end{pmatrix}.$$


These basis vectors come in handy when expressing the rank-$p$ tensor elements $T_{i_1\ldots i_p}$ as a column vector:
$$\begin{pmatrix}T_{11\ldots 11}\\ T_{11\ldots 12}\\ \vdots\\ T_{22\ldots 22}\end{pmatrix} = T_{i_1 i_2\ldots i_p}\, e^{i_1} \otimes e^{i_2} \otimes \cdots \otimes e^{i_p}.$$

Now consider the action of $R_T(X)$:
$$\begin{aligned}
R_T(X)\begin{pmatrix}T_{11\ldots 11}\\ T_{11\ldots 12}\\ \vdots\\ T_{22\ldots 22}\end{pmatrix}
&= \left[\sum_{s=1}^{p} R_f(X)^{l}{}_{i_s}\, T_{i_1 i_2\ldots l\ldots i_p}\right] e^{i_1} \otimes \cdots \otimes e^{i_p},\\
&= \sum_{s=1}^{p}\left[ e^{i_1} \otimes \cdots \otimes R_f(X)^{i_s}{}_{i_m}\, e^{i_s} \otimes \cdots \otimes e^{i_p}\right] T_{i_1 i_2\ldots i_m\ldots i_p},\\
&= \left[\sum_{s=1}^{p} I_2 \otimes \cdots \otimes R_f(X) \otimes \cdots \otimes I_2\right] T_{i_1\ldots i_p}\, e^{i_1} \otimes \cdots \otimes e^{i_p};
\end{aligned}$$

where in the final equality I used the fact that (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD) for any matrices A, B, C, D;
I2 denotes the 2 × 2 identity matrix; and Rf (X) only appears on the s−th position of the tensor product.
Since Ti1 i2 ...ip was arbitrary, we have found that:
" p #
X
RT (X) = I2 ⊗ · · · ⊗ Rf (X) ⊗ · · · ⊗ I2 .
s=1

Solution 7.3
$$R_T(X)\,T_{[ij]} = R_f(X)^{k}{}_{i}\, T_{[kj]} + R_f(X)^{k}{}_{j}\, T_{[ik]} = R_f(X)^{k}{}_{i}\, T_{[kj]} - R_f(X)^{k}{}_{j}\, T_{[ki]}.$$
Clearly, this means that the original antisymmetric tensor was mapped to another tensor antisymmetric in the indices $ij$.

Solution 7.6
Given the totally antisymmetric tensor $T_{[\mu_1\mu_2\ldots\mu_n]}$ (rank $n$) in $n$ dimensions: since indices cannot repeat in a nonzero component, there are $n$ possible values for the first index $\mu_1$, $n - 1$ possibilities for $\mu_2$, and so on until only 1 value remains for $\mu_n$, giving $n!$ possibly nonzero entries. However, these are not all independent: exchanging any pair of indices only flips the sign of the component, and every arrangement of the $n$ distinct indices is a permutation of $(1, 2, \ldots, n)$, so all $n!$ entries equal $\pm T_{12\ldots n}$. Therefore there is only 1 independent component, and we can express the totally antisymmetric tensor as $T_{\mu_1\mu_2\ldots\mu_n} = f\,\epsilon_{\mu_1\mu_2\ldots\mu_n}$; since there is only one free parameter $f$, the space of such tensors is 1-dimensional. Also, when the rank of the totally antisymmetric tensor is greater than the number of dimensions, some index value must repeat in every component; since the tensor changes sign when two indices are exchanged, a component with a repeated index equals minus itself, so all components of the tensor must vanish when the rank is larger than the number of dimensions.
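This counting can be checked mechanically (a small sketch; `perm_sign` is a helper of my own):

```python
from itertools import permutations
from math import comb

# Sketch: for rank n in n dimensions there are n! nonzero entries, all equal
# to +/- f, so one independent component; comb(n, p) = 0 whenever p > n.
def perm_sign(p):
    # sign of a permutation via cycle sort
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

n, f = 3, 2.5
entries = {idx: perm_sign(idx) * f for idx in permutations(range(n))}
assert len(entries) == 6                    # n! = 3! possibly nonzero entries
assert set(entries.values()) == {f, -f}     # all fixed by the single number f
assert all(comb(n, p) == 0 for p in range(n + 1, n + 4))  # rank > n: all zero
```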


Solution 8.1
As we saw in the LN, the most general $3\times 3$ matrix $M \in R_f(su(3))$ satisfies the requirements $M^\dagger = M$, $\operatorname{tr}(M) = 0$:
$$M \equiv \begin{pmatrix} z_{11} & z_{12} & z_{13}\\ z_{21} & z_{22} & z_{23}\\ z_{31} & z_{32} & z_{33}\end{pmatrix} = M^\dagger = \begin{pmatrix} \bar z_{11} & \bar z_{21} & \bar z_{31}\\ \bar z_{12} & \bar z_{22} & \bar z_{32}\\ \bar z_{13} & \bar z_{23} & \bar z_{33}\end{pmatrix},$$
which, together with the condition $\operatorname{tr} M = 0$, or equivalently $z_{33} = -(z_{11} + z_{22})$, leads to a general parametrization of $M$. Let $a, b, c, d, e, f, g, h$ be real constants; we may express:
$$M = \begin{pmatrix} a & c - id & e - if\\ c + id & b & g - ih\\ e + if & g + ih & -a - b\end{pmatrix} = c\lambda_1 + d\lambda_2 + e\lambda_4 + f\lambda_5 + g\lambda_6 + h\lambda_7 + \frac{a}{2}(\lambda_3 + \sqrt{3}\lambda_8) + \frac{b}{2}(-\lambda_3 + \sqrt{3}\lambda_8),$$
where in the last equality I wrote the most general $M$ in terms of the GMM shown on page 48 of the LN. Since we are able to write the most general element of su(3) in terms of the GMM, they form a complete basis for this vector space.
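The decomposition above can be verified numerically (a sketch; the `l1`–`l8` below are the standard Gell-Mann matrices, which I take to match the LN's convention):

```python
import numpy as np

# Hedged check of the decomposition above: build the standard Gell-Mann
# matrices and verify the stated combination reproduces a generic M.
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l3 = np.diag([1, -1, 0]).astype(complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
l8 = np.diag([1, 1, -2]).astype(complex) / np.sqrt(3)

a, b, c, d, e, f, g, h = 0.3, -1.2, 0.8, 0.1, 1.5, -0.4, 0.9, 2.0
M = np.array([[a, c - 1j * d, e - 1j * f],
              [c + 1j * d, b, g - 1j * h],
              [e + 1j * f, g + 1j * h, -a - b]])
combo = (c * l1 + d * l2 + e * l4 + f * l5 + g * l6 + h * l7
         + a / 2 * (l3 + np.sqrt(3) * l8) + b / 2 * (-l3 + np.sqrt(3) * l8))
assert np.allclose(M, combo)
assert np.allclose(M, M.conj().T) and np.isclose(np.trace(M), 0)
```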

Solution 8.2
$$\operatorname{tr}([\lambda_a, \lambda_b]\lambda_c) = \operatorname{tr}(\lambda_a\lambda_b\lambda_c) - \operatorname{tr}(\lambda_b\lambda_a\lambda_c) = \operatorname{tr}(\lambda_b\lambda_c\lambda_a) - \operatorname{tr}(\lambda_c\lambda_b\lambda_a) = \operatorname{tr}([\lambda_b, \lambda_c]\lambda_a),$$
where we used the cyclic property of the trace, $\operatorname{tr}(AB) = \operatorname{tr}(BA)$ for any two matrices $A$ and $B$. Then we have $\operatorname{tr}([\lambda_a, \lambda_b]\lambda_c) = \operatorname{tr}([\lambda_b, \lambda_c]\lambda_a)$, which implies $f_{abc} = f_{bca}$ because $4i f_{abc} = \operatorname{tr}([\lambda_a, \lambda_b]\lambda_c)$.

Solution 8.3
From the Jacobi identity in Eq. (8.3.3), page 50:
$$C_{ab}{}^{d} C_{dc}{}^{e} + C_{bc}{}^{d} C_{da}{}^{e} + C_{ca}{}^{d} C_{db}{}^{e} = 0;$$
let's represent $(T_a)^{b}{}_{c} = C_{ac}{}^{b}$; since $[T_a, T_b] = C_{ab}{}^{c} T_c$, we have $C_{ab}{}^{c} = -C_{ba}{}^{c}$. Using this antisymmetry, the Jacobi identity becomes
$$-C_{ab}{}^{d} C_{cd}{}^{e} + C_{cb}{}^{d} C_{ad}{}^{e} + C_{ca}{}^{d} C_{db}{}^{e} = 0,$$
i.e.,
$$(T_c)^{e}{}_{d}(T_a)^{d}{}_{b} - (T_a)^{e}{}_{d}(T_c)^{d}{}_{b} = C_{ca}{}^{d}(T_d)^{e}{}_{b} = i f_{dca}(T_d)^{e}{}_{b}.$$
Our final result states that $[T_c, T_a] = C_{ca}{}^{d}\, T_d$: the matrices $(T_a)^{b}{}_{c} = C_{ac}{}^{b}$ satisfy the commutation relations of the algebra, so the adjoint representation is indeed a representation.

Solution 8.4
       
With
$$h_1 = \begin{pmatrix}1&0&0\\0&-1&0\\0&0&0\end{pmatrix},\quad h_2 = \begin{pmatrix}0&0&0\\0&1&0\\0&0&-1\end{pmatrix},\quad X_{\alpha_1} = \begin{pmatrix}0&1&0\\0&0&0\\0&0&0\end{pmatrix},\quad X_{\alpha_2} = \begin{pmatrix}0&0&0\\0&0&1\\0&0&0\end{pmatrix},\quad X_{\alpha_3} = \begin{pmatrix}0&0&1\\0&0&0\\0&0&0\end{pmatrix},$$
direct matrix multiplication gives
$$[h_1, X_{\alpha_1}] = 2X_{\alpha_1},\qquad [h_2, X_{\alpha_1}] = -X_{\alpha_1},$$
$$[h_1, X_{\alpha_2}] = -X_{\alpha_2},\qquad [h_2, X_{\alpha_2}] = 2X_{\alpha_2},$$
$$[h_1, X_{\alpha_3}] = X_{\alpha_3},\qquad [h_2, X_{\alpha_3}] = X_{\alpha_3}.$$

This implies $\alpha_1(h_1, h_2) = (2, -1)$, $\alpha_2(h_1, h_2) = (-1, 2)$, and $\alpha_3(h_1, h_2) = (1, 1)$.
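These six commutators are quick to verify numerically (a sketch using the same matrices as in the computation; `E(i, j)` is my shorthand for the matrix unit):

```python
import numpy as np

# Sketch: the six commutators above, with h1 = diag(1,-1,0), h2 = diag(0,1,-1),
# X_a1 = E12, X_a2 = E23, X_a3 = E13 (the matrices used in the computation).
def E(i, j):
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

h1, h2 = np.diag([1.0, -1.0, 0.0]), np.diag([0.0, 1.0, -1.0])
Xa1, Xa2, Xa3 = E(1, 2), E(2, 3), E(1, 3)
comm = lambda A, B: A @ B - B @ A

# each root vector is an eigenvector of ad(h1) and ad(h2); the pair of
# eigenvalues is the root evaluated on (h1, h2)
roots = [(Xa1, (2, -1)), (Xa2, (-1, 2)), (Xa3, (1, 1))]
for X, (r1, r2) in roots:
    assert np.allclose(comm(h1, X), r1 * X)
    assert np.allclose(comm(h2, X), r2 * X)
```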

Solution 8.5
Using the Jacobi identity:

$$[h, [X_\alpha, X_\beta]] = [[h, X_\alpha], X_\beta] + [X_\alpha, [h, X_\beta]] = (\alpha(h) + \beta(h))[X_\alpha, X_\beta].$$

We are considering $\alpha + \beta \neq 0$. We see that $[X_\alpha, X_\beta]$, if nonzero, has to be an eigenvector of $h$ with eigenvalue $\alpha(h) + \beta(h)$. If $\alpha + \beta \notin \Delta$, it is impossible for $[X_\alpha, X_\beta]$ to be a non-trivial eigenvector of $h$: we already know all the eigenvalues of the eigenvectors $X_\gamma$ of $h$, and $\alpha(h) + \beta(h)$ is not among them, so we would end up in a contradiction unless $[X_\alpha, X_\beta] = 0$. On the other hand, when $\alpha + \beta \in \Delta$, then $[X_\alpha, X_\beta]$ is legitimately a non-trivial eigenvector of $h$ with eigenvalue $\alpha(h) + \beta(h)$, so it must be the eigenvector $X_{\alpha+\beta}$ up to normalization, which means $[X_\alpha, X_\beta] = N_{\alpha\beta} X_{\alpha+\beta}$. The specific values of $N_{\alpha\beta}$ are found by computing the commutators explicitly.

Solution 8.6
Using the same representation as in the LN:
 
a 0 0
H ≡ aH1 + bH2 = 0 −a + b 0  =⇒ He1 = ae1 , He2 = (−a + b)e2 , He3 = −be3 .
0 0 −b

From our results of Problem 8.4: α1 (H) = 2a − b, α2 (H) = −a + 2b.


2 1 1 2
a = α1 (H) + α2 (H), b = α1 (H) + α2 (H),
3 3 3 3
2 1 1 1 1 2
=⇒ M 1 (H) = α1 (H) + α2 (H), M 2 (H) = − α1 (H) + α2 (H), M 3 (H) = − α1 (H) − α2 (H).
3 3 3 3 3 3

Solution 10.1
Since $(E_{ij})^{l}{}_{k} = \delta_{ik}\delta_j{}^{l}$, then
$$(E_{ij})^{m}{}_{u}(E_{kl})^{v}{}_{m} = \delta_{iu}\delta_j{}^{m}\,\delta_{km}\delta_l{}^{v} = \delta_{iu}\delta_{jk}\delta_l{}^{v} = (E_{il})^{v}{}_{u}\,\delta_{jk},$$
which means $E_{ij}E_{kl} = E_{il}\delta_{jk}$, and then $[E_{ij}, E_{kl}] = E_{ij}E_{kl} - E_{kl}E_{ij} = E_{il}\delta_{jk} - E_{kj}\delta_{il}$.
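Both identities can be checked exhaustively for small $n$ (a sketch; indices are 0-based here):

```python
import numpy as np

# Sketch: verify E_ij E_kl = delta_jk E_il and the commutator formula for n = 3.
n = 3

def E(i, j):
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                assert np.allclose(E(i, j) @ E(k, l), (j == k) * E(i, l))
                assert np.allclose(E(i, j) @ E(k, l) - E(k, l) @ E(i, j),
                                   (j == k) * E(i, l) - (l == i) * E(k, j))
```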


Class problem 1
By applying the lowering operators $X_{-\alpha_1}$ [which shifts the weights by $(-2, 1)$] and $X_{-\alpha_2}$ [which shifts the weights by $(1, -2)$] we can construct all the eigenstates of $h_1$ and $h_2$ in each representation, starting from a highest weight state, as shown in Figure 1. Let's analyze the multiplicity for each of the cases of Figure 1.

[Figure 1 displays the weight diagrams of five representations: (a) $(3)$ with HWS at $(1, 0)$; (b) $(\bar 3)$ with HWS at $(0, 1)$; (c) $(6)$ with HWS at $(2, 0)$; (d) $(8)$ with HWS at $(1, 1)$; (e) $(10)$ with HWS at $(3, 0)$.]

Figure 1: Different representations of su(3). The action of $X_{-\alpha_1}$ and $X_{-\alpha_2}$ is represented by a horizontal and a diagonal arrow, respectively.

(a) Clearly all points have multiplicity 1: given the starting point $(1, 0)$, we can reach every other point by applying the lowering operators in the displayed order, and there is no way to reach any given point more than once.
(b) By the same argument as in case (a), all elements have multiplicity 1.
(c) Something similar happens: we can reach each point of the grid in only one way, so all of them have multiplicity 1, except in principle the point $(-1, 0)$, which we are able to reach in 2 different ways. However, if we consider the reversed scenario of starting from the point $(0, -2)$ and applying the raising operators $X_{\alpha_1}, X_{\alpha_2}$ (which is completely equivalent to the original prescription of applying lowering operators from $(2, 0)$, given that we are considering su(2) subalgebras of su(3)), then $(-1, 0)$ must have multiplicity 1, just as $(0, 1)$ does.
(d) The same story goes: the only point where the multiplicity might not be 1 is $(0, 0)$, since even if we reverse the argument, starting from $(-1, -1)$ and applying raising operators, we can still reach the point in 2 ways, so there might be 2 independent states at that position. Consider a state that


is a superposition of the 2 states that are obtained when reaching $(0, 0)$:
$$|\psi\rangle = a\,|\psi_1\rangle + b\,|\psi_2\rangle,$$
where
$$|\psi_1\rangle = X_{-\alpha_2}X_{-\alpha_1}\,|1, 1\rangle,\qquad |\psi_2\rangle = X_{-\alpha_1}X_{-\alpha_2}\,|1, 1\rangle;$$
$|1, 1\rangle$ denotes the HWS of the given representation, with eigenvalues $1, 1$ for the operators $h_1, h_2$ respectively.
The condition $|\psi\rangle = 0$ is the same as having a zero-norm vector; that would mean:
$$|a|^2\|\psi_1\|^2 + |b|^2\|\psi_2\|^2 + a^*b\,\langle\psi_1|\psi_2\rangle + ab^*\,\langle\psi_2|\psi_1\rangle = \begin{pmatrix}a^* & b^*\end{pmatrix}\begin{pmatrix}\langle\psi_1|\psi_1\rangle & \langle\psi_1|\psi_2\rangle\\ \langle\psi_2|\psi_1\rangle & \langle\psi_2|\psi_2\rangle\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix} = 0. \tag{1}$$
In the linearly independent case, $a, b$ have to be 0; if $|\psi_1\rangle, |\psi_2\rangle$ are linearly dependent, we just need the determinant of the matrix with entries $\langle\psi_i|\psi_j\rangle$ to vanish. Computing:

$$\langle\psi_1|\psi_1\rangle = \langle 1,1|\,X_{\alpha_1}X_{\alpha_2}X_{-\alpha_2}X_{-\alpha_1}\,|1,1\rangle = \langle 1,1|\,X_{\alpha_1}(h_2 + X_{-\alpha_2}X_{\alpha_2})X_{-\alpha_1}\,|1,1\rangle = 2\,\langle 1,1|\,h_1\,|1,1\rangle = 2,$$
where I used the facts that $[X_{\alpha_2}, X_{-\alpha_1}] = 0$, $X_{\alpha_1}|1,1\rangle = X_{\alpha_2}|1,1\rangle = 0$, and $h_2 X_{-\alpha_1}|1,1\rangle = 2X_{-\alpha_1}|1,1\rangle$ (which can be read off from the figure; since I have to apply this sort of technique over and over again, I will not mention the eigenvalues explicitly from here on), and considered $\langle 1,1|1,1\rangle = 1$. In the same way:
$$\langle\psi_2|\psi_2\rangle = \langle 1,1|\,X_{\alpha_2}X_{\alpha_1}X_{-\alpha_1}X_{-\alpha_2}\,|1,1\rangle = \langle 1,1|\,X_{\alpha_2}h_1X_{-\alpha_2}\,|1,1\rangle = 2\,\langle 1,1|\,h_2\,|1,1\rangle = 2,$$
$$\langle\psi_1|\psi_2\rangle = (\langle\psi_2|\psi_1\rangle)^* = \langle 1,1|\,X_{\alpha_1}X_{\alpha_2}X_{-\alpha_1}X_{-\alpha_2}\,|1,1\rangle = \langle 1,1|\,h_1 h_2\,|1,1\rangle = 1.$$
Therefore the determinant of the matrix appearing in Eq. (1) is $2\cdot 2 - 1 = 3 \neq 0$, which means that the states $|\psi_1\rangle$ and $|\psi_2\rangle$ are linearly independent. Thus $(0, 0)$ has multiplicity 2.
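This Gram matrix can be cross-checked in an explicit model of the adjoint representation (a sketch under my own conventions, which are assumptions: states are $3\times 3$ matrices, the algebra acts by commutators, the HWS $|1,1\rangle$ is the highest-root vector $E_{13}$, and $\langle A|B\rangle = \operatorname{tr}(A^\dagger B)$; this normalization happens to reproduce the values above):

```python
import numpy as np

# Hedged sketch: realize the (8) on 3x3 matrices with X . M = [X, M];
# |1,1> = E13 (highest root), inner product <A|B> = tr(A^dag B).
def E(i, j):
    m = np.zeros((3, 3))
    m[i - 1, j - 1] = 1.0
    return m

comm = lambda A, B: A @ B - B @ A
ip = lambda A, B: np.trace(A.conj().T @ B)

Xm1, Xm2 = E(2, 1), E(3, 2)          # X_{-alpha1}, X_{-alpha2}
hws = E(1, 3)                        # |1, 1>

psi1 = comm(Xm2, comm(Xm1, hws))     # X_{-a2} X_{-a1} |1,1>
psi2 = comm(Xm1, comm(Xm2, hws))     # X_{-a1} X_{-a2} |1,1>

G = np.array([[ip(psi1, psi1), ip(psi1, psi2)],
              [ip(psi2, psi1), ip(psi2, psi2)]])
assert np.allclose(G, [[2, 1], [1, 2]])
assert not np.isclose(np.linalg.det(G), 0)  # det = 3: two independent states
```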
(e) We can use the same argument as in (d) to see that all states must have multiplicity 1, except possibly $(0, 0)$. Consider $|\psi\rangle = a\,|\psi_1\rangle + b\,|\psi_2\rangle$, where
$$|\psi_1\rangle = X_{-\alpha_2}X_{-\alpha_1}^2\,|3, 0\rangle,\qquad |\psi_2\rangle = X_{-\alpha_1}X_{-\alpha_2}X_{-\alpha_1}\,|3, 0\rangle.$$
I am going to show 2 different ways to find out whether or not we have 2 linearly independent states at this point:
• We want to determine whether $|\psi_1\rangle$ and $|\psi_2\rangle$ are linearly dependent, i.e. whether $|\psi\rangle = 0$ for some non-trivial choice of $a$ and $b$. If there exist such $a, b$ with $|\psi\rangle = 0$, then $X_{\alpha_1}|\psi\rangle = 0$. Indeed:
$$X_{\alpha_1}|\psi_1\rangle = X_{-\alpha_2}\big([X_{\alpha_1}, X_{-\alpha_1}]X_{-\alpha_1} + X_{-\alpha_1}X_{\alpha_1}X_{-\alpha_1}\big)|3,0\rangle = X_{-\alpha_2}h_1X_{-\alpha_1}|3,0\rangle + X_{-\alpha_2}X_{-\alpha_1}h_1|3,0\rangle = 4X_{-\alpha_2}X_{-\alpha_1}|3,0\rangle,$$
$$X_{\alpha_1}|\psi_2\rangle = \big([X_{\alpha_1}, X_{-\alpha_1}]X_{-\alpha_2}X_{-\alpha_1} + X_{-\alpha_1}X_{-\alpha_2}[X_{\alpha_1}, X_{-\alpha_1}]\big)|3,0\rangle = h_1X_{-\alpha_2}X_{-\alpha_1}|3,0\rangle + X_{-\alpha_1}X_{-\alpha_2}h_1|3,0\rangle = 2X_{-\alpha_2}X_{-\alpha_1}|3,0\rangle,$$
where the last term vanishes because $X_{-\alpha_2}|3,0\rangle = 0$ (the weight $(4, -2)$ does not belong to the representation).

Thus we see that
$$X_{\alpha_1}|\psi\rangle = 0 \implies b = -2a,$$
which is exactly the condition that we found in class when we computed $X_{\alpha_2}|\psi\rangle$ for the same state $|\psi\rangle$ as in this exercise. Thus $|\psi_1\rangle$ and $|\psi_2\rangle$ are linearly dependent, meaning that there is only 1 state at $(0, 0)$.


• Following the same procedure as in (d), let's calculate:
$$\begin{aligned}
\langle\psi_1|\psi_1\rangle &= \langle 3,0|\,X_{\alpha_1}^2 X_{\alpha_2}X_{-\alpha_2}X_{-\alpha_1}^2\,|3,0\rangle = \langle 3,0|\,X_{\alpha_1}^2(h_2 + X_{-\alpha_2}X_{\alpha_2})X_{-\alpha_1}^2\,|3,0\rangle\\
&= 2\,\langle 3,0|\,X_{\alpha_1}^2 X_{-\alpha_1}^2\,|3,0\rangle + \langle 3,0|\,X_{-\alpha_2}X_{\alpha_1}^2 X_{-\alpha_1}^2 X_{\alpha_2}\,|3,0\rangle\\
&= 2\,\langle 3,0|\,X_{\alpha_1}(h_1 + X_{-\alpha_1}X_{\alpha_1})X_{-\alpha_1}\,|3,0\rangle = 2\,\langle 3,0|\,(h_1 + h_1^2)\,|3,0\rangle = 24,
\end{aligned}$$
where I used the fact that $X_{\alpha_2}|3,0\rangle = 0$, $\langle 3,0|3,0\rangle = 1$, and repeatedly used the weights under $h_1, h_2$. Similarly:
$$\langle\psi_1|\psi_2\rangle = \langle 3,0|\,X_{\alpha_1}X_{\alpha_1}X_{\alpha_2}X_{-\alpha_1}X_{-\alpha_2}X_{-\alpha_1}\,|3,0\rangle = \langle 3,0|\,X_{\alpha_1}(h_1 + X_{-\alpha_1}X_{\alpha_1})h_2X_{-\alpha_1}\,|3,0\rangle = \langle 3,0|\,(h_1 + h_1^2)\,|3,0\rangle = 12,$$
$$\langle\psi_2|\psi_1\rangle = (\langle\psi_1|\psi_2\rangle)^* = 12,$$
$$\begin{aligned}
\langle\psi_2|\psi_2\rangle &= \langle 3,0|\,X_{\alpha_1}X_{\alpha_2}X_{\alpha_1}X_{-\alpha_1}X_{-\alpha_2}X_{-\alpha_1}\,|3,0\rangle\\
&= \langle 3,0|\,X_{\alpha_1}X_{\alpha_2}(h_1 + X_{-\alpha_1}X_{\alpha_1})X_{-\alpha_2}X_{-\alpha_1}\,|3,0\rangle\\
&= \langle 3,0|\,X_{\alpha_1}\big(2(h_2 + X_{-\alpha_2}X_{\alpha_2})X_{-\alpha_1} + X_{-\alpha_1}(h_2 + X_{-\alpha_2}X_{\alpha_2})h_1\big)\,|3,0\rangle\\
&= 2\,\langle 3,0|\,h_1\,|3,0\rangle = 6.
\end{aligned}$$

Then we see that the determinant of the matrix with elements $\langle\psi_i|\psi_j\rangle$ (same definition as in Eq. (1)) equals $24\times 6 - 12^2 = 0$; thus $|\psi_1\rangle$ and $|\psi_2\rangle$ are linearly dependent, so there is only one state at $(0, 0)$ of the representation in Fig. 1(e).

Class problem 2
Recall that all weights are displayed in Fig. 1:

• Since weights are additive under direct products, the weights of $(3) \otimes (3)$ are $(2,0)$, $(-2,2)$, $(0,-2)$, and $(0,1)$, $(1,-1)$, $(-1,0)$ each with multiplicity 2. We can compare with the weights of the $(6)$ and $(\bar 3)$ representations of su(3) [$(2,0)$ is in $(6)$, $(0,1)$ in $(\bar 3)$, and so on]:
$$(3) \otimes (3) = (6) \oplus (\bar 3).$$

• $(3) \otimes (\bar 3)$ has the following weights: $(2,-1)$, $(1,1)$, $(-2,1)$, $(-1,2)$, $(-1,-1)$, $(1,-2)$, and $(0,0)$ with multiplicity 3. Comparing with the $(8)$ representation [$(1,1)$ is in $(8)$, and so on] and the trivial one $(1)$ with weight $(0,0)$:
$$(3) \otimes (\bar 3) = (8) \oplus (1).$$
• $(3) \otimes (3) \otimes (3)$ has weights $(3,0)$, $(-3,3)$, $(0,-3)$; $(1,1)$, $(-1,-1)$, $(-1,2)$, $(1,-2)$, $(2,-1)$, $(-2,1)$ each with multiplicity 3; and $(0,0)$ with multiplicity 6. Comparing with the weights of the $(1)$, $(8)$, $(10)$ representations [$(3,0)$ and the other boundary weights appear in $(10)$], we see that:
$$(3) \otimes (3) \otimes (3) = (10) \oplus (8) \oplus (8) \oplus (1).$$
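These decompositions amount to bookkeeping of additive weights, which is easy to automate (a sketch; the weight lists below are the standard su(3) weight systems, as displayed in Figure 1):

```python
from collections import Counter

# Sketch: weights add under tensor products; compare multiplicity counts
# against the weight systems of (3), (3bar), (6), (8), (10), (1).
w3    = [(1, 0), (-1, 1), (0, -1)]
w3bar = [(0, 1), (1, -1), (-1, 0)]
w6    = [(2, 0), (0, 1), (-2, 2), (1, -1), (-1, 0), (0, -2)]
w8    = [(1, 1), (-1, 2), (2, -1), (0, 0), (0, 0), (-2, 1), (1, -2), (-1, -1)]
w10   = [(3, 0), (1, 1), (-1, 2), (-3, 3), (2, -1), (0, 0),
         (-2, 1), (1, -2), (-1, -1), (0, -3)]
w1    = [(0, 0)]

def prod(*reps):
    # multiset of sums of one weight taken from each tensor factor
    out = [(0, 0)]
    for rep in reps:
        out = [(a1 + b1, a2 + b2) for a1, a2 in out for b1, b2 in rep]
    return Counter(out)

assert prod(w3, w3) == Counter(w6) + Counter(w3bar)        # 3 x 3 = 6 + 3bar
assert prod(w3, w3bar) == Counter(w8) + Counter(w1)        # 3 x 3bar = 8 + 1
assert prod(w3, w3, w3) == (Counter(w10) + Counter(w8)
                            + Counter(w8) + Counter(w1))   # 10 + 8 + 8 + 1
```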
