
Details of Some Representation Spaces for SU(3)

I. The defining representation:

The defining representation is of course just the set of matrices that one uses to identify
what is meant by the group in the first place. As a group, given a set of basis vectors in C3 ,
these are then the set of all 3 × 3 unitary matrices with determinant +1. As an algebra, these
are the set of all 3 × 3 skew-Hermitean, traceless matrices, Xa , usually created by writing
them as i times Hermitean, traceless matrices. A standard (complete) set of those Hermitean
traceless matrices, {λa |a = 1, . . . , 8}, was originally given by Gell-Mann, being a plausible
generalization of the Pauli matrices used for studies of the group SU(2). It is customary, then,
to use a set of 8 real parameters, $a^a$, lying in an appropriate (finite) range of values, such that
any group element may be written in the form

$$
g = g(a) \equiv e^{-\frac{i}{2}\, a^a \lambda_a} \equiv e^{-i\, a^a T_a}\,. \tag{1.1}
$$

(Note that some people put +i there instead of −i; used consistently this simply changes the
signs of the parameters, and nothing more.)
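For readers who like to see Eq. (1.1) in action, here is a minimal numerical sketch (Python with NumPy and SciPy; the parameter values and variable names are purely illustrative) that builds a group element from the Gell-Mann matrices and confirms that it is unitary with determinant +1:

```python
import numpy as np
from scipy.linalg import expm

# The eight Gell-Mann matrices, in the standard conventions.
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
l8 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3)
lam = [l1, l2, l3, l4, l5, l6, l7, l8]

# Arbitrary (illustrative) real parameters a^a, and the group element of Eq. (1.1),
# with T_a = lambda_a / 2.
a = np.array([0.3, -1.2, 0.5, 0.0, 2.0, -0.7, 0.1, 0.9])
g = expm(-0.5j * sum(ai * li for ai, li in zip(a, lam)))

# g should be unitary with unit determinant, since the T_a are Hermitean and traceless.
print(np.allclose(g.conj().T @ g, np.eye(3)))   # True
print(np.isclose(np.linalg.det(g), 1.0))        # True
```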

In the abstract algebra we often use the Chevalley basis, {h1 , h2 }, along with the associated
simple roots, {α1 , α2 }, and their eigen-elements, {e1 , e2 } and {f1 , f2 }. The simple roots have
values on the Cartan subalgebra given by the Cartan matrix, A, with elements Aij ≡ αi (hj ),
all of which are integers:

$$
[h_i, e_j] = \alpha_j(h_i)\, e_j \equiv A_{ji}\, e_j\,, \qquad
[h_i, f_j] = -\alpha_j(h_i)\, f_j \equiv -A_{ji}\, f_j\,, \qquad
[e_i, f_j] = \delta_{ij}\, h_j\,, \tag{1.2}
$$
$$
A = \begin{pmatrix} +2 & -1 \\ -1 & +2 \end{pmatrix}.
$$

This defining representation is variously denoted in many distinct ways, all equivalent:
a. [1 0] describes it in terms of its highest weight, the (two) integers indicating multiples of
the lattice basis vectors, $\{\mu_a\}_1^r$, which are vectors reciprocal to the basis of root vectors,
$\{\alpha_a\}_1^r$. (Each of these sets of vectors constitutes a basis for the root space, which is the
space of vectors dual to the Cartan subalgebra of the set of generators, i.e., the Lie algebra,
of the group in question. The Cartan subalgebra has dimension r, usually referred to as
the rank of the Lie algebra.) As this particular representation is described simply by one
of the basic lattice vectors themselves, it is sometimes referred to as a fundamental or a
basic representation; the other one for SU(3) is the [0 1] representation. (A quick numerical
cross-check of these highest-weight labels against the dimensions quoted below appears just
after this list.)

b. 3 is another description of this representation since it acts as a set of linear operators on

a vector space of three dimensions. This is not necessarily completely descriptive because

there could be more than one inequivalent three-dimensional representation, which would

then require bars, primes, etc., in order to distinguish them, along with a table giving the

meaning of those bars and primes. In fact, for SU(3) there is one other, which, as we will

explain below, is described as $\bar{3}$.

c. the tensor components v a also form a common description for this representation. (Since

this is the defining representation, this is truly just the statement that this is the most

natural sort of a vector, in analogy with the set of coordinate differences, xa .) As is usual

with tensor components, they really describe a tensor relative to some chosen, fixed set of

basis elements. In this case, using the language of “ket-vectors,” we suppose that $|v\rangle$ is
some vector in the representation space, that $\{\,|a\rangle \mid a = 1, 2, 3\,\}$ form a basis for the space,
and, then, that the vector can be written in terms of its components with respect to that
basis, i.e., $|v\rangle = v^a |a\rangle$, where there is a sum on all the allowed values of $a$.

d. the Young tableaux symbol, $\square$, is also used to describe the type of tensor representation
it is. This symbol simply means that the tensor in question has one (contravariant)

index, so that there are not really any symmetry questions to ask. For more complicated

tensors this will be a very useful way of describing them.
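As a cross-check on these labels, the dimension of the su(3) representation with highest weight [p q] is given by the standard formula $\dim[p\ q] = \tfrac12(p+1)(q+1)(p+q+2)$, quoted here without derivation; a one-line sketch (the helper name is mine, purely illustrative):

```python
# Standard su(3) dimension formula for the irrep with highest weight [p q];
# used only as a cross-check on the labels appearing in these notes.
def dim_su3(p: int, q: int) -> int:
    return (p + 1) * (q + 1) * (p + q + 2) // 2

# [1 0] and [0 1] are the two 3-dimensional fundamentals; [1 1] is the octet,
# [3 0] the decuplet, and [0 0] the singlet.
for label in [(1, 0), (0, 1), (1, 1), (3, 0), (0, 0)]:
    print(label, dim_su3(*label))   # 3, 3, 8, 10, 1
```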

Knowing that this representation has highest weight [1 0], we would then want to find

the rest of the allowed weights, to describe a basis for the entire vector space. The standard

calculation, using lowering operators, tells us that this representation has a total of 3 weights,

each one with a single associated weight vector, since the fundamental representation weights

are all simple, i.e., non-degenerate:

$$
\begin{array}{cccccc}
\text{weight} & \text{weight vector} & v^a & \text{particle} & \text{state} & (I_z, Y) \\[4pt]
[1\ 0] & |1\rangle & \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} & u & |u\rangle & (\tfrac12, \tfrac13) \\[6pt]
[-1\ 1] & |2\rangle & \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} & d & |d\rangle & (-\tfrac12, \tfrac13) \\[6pt]
[0\ {-1}] & |3\rangle & \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} & s & |s\rangle & (0, -\tfrac23)
\end{array}
\tag{1.3}
$$

where the other columns indicate various equivalent sets of nomenclature for the same things,

i.e., the three weight vectors. Each of the statements that these are equivalent nomenclatures

needs to be justified and/or explained.

We begin that process with the first column, which simply lists the three allowed weights.

These are the sets of eigenvalues for the r = 2 members of the representation of the basis

elements that have been already chosen for the Cartan subalgebra. Denoting an arbitrary such

weight by $[m\ n]$, the (Dirac notation) ket-vector that is the eigenvector for that eigenvalue by
$|[m\ n]\rangle$, and the representatives of the $h_i$'s by $H_i$, this means that $H_1 |[m\ n]\rangle = m\,|[m\ n]\rangle$ and
$H_2 |[m\ n]\rangle = n\,|[m\ n]\rangle$. The second column only gives simpler names to those 3 vectors so that,
for example, $H_1 |2\rangle = -1\,|2\rangle$ and $H_2 |2\rangle = +1\,|2\rangle$. The third column, however, now begins to treat

our vectors, and operators, in terms of a specific choice of basis, so that they are represented

as row matrices (row vectors) and square matrices, respectively. In particular, it picks out the

three eigenvectors themselves as the basis set, so that their representations, as given in this

third column, are just Kronecker deltas, i.e., 0’s and 1’s. One must then check that it is in fact

just these eigenvectors that were chosen as a basis when Gell-Mann wrote down his λ-matrices,

although surely one would suspect this. We do this by creating the representations for the hi ’s

in terms of the actual 3 × 3 matrices, and checking that they have the right eigenvalues on

these vectors. For the representations of the $h_i$'s, we have
$$
H_1 = \lambda_3 = 2T_z = \begin{pmatrix} +1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad
H_2 = \tfrac{\sqrt3}{2}\lambda_8 - \tfrac12 \lambda_3 = \tfrac32 Y - T_z = \begin{pmatrix} 0 & 0 & 0 \\ 0 & +1 & 0 \\ 0 & 0 & -1 \end{pmatrix}. \tag{1.4}
$$

As they are diagonal, we know that the basis is indeed their joint eigenvectors. We can read the

three weights off by just reading jointly down their diagonals. This gives the expected answers,

indicating, also, that we have the eigenvectors in the correct order. However, it is worth

continuing to check the rest of the properties that weight vectors should have, namely that

they should be transformed one into another via the proper lowering and/or raising operators.

In this case, knowing the weight diagrams, and their relationships to the roots, we can write

down the requirements:

$$
\begin{aligned}
&E_1 |[1\ 0]\rangle = 0 = E_2 |[1\ 0]\rangle\,, \qquad F_1 |[1\ 0]\rangle = |[-1\ 1]\rangle\,, \qquad F_2 |[1\ 0]\rangle = 0\,, \\
&E_1 |[-1\ 1]\rangle = |[1\ 0]\rangle\,, \qquad E_2 |[-1\ 1]\rangle = 0\,, \qquad F_1 |[-1\ 1]\rangle = 0\,, \qquad F_2 |[-1\ 1]\rangle = |[0\ {-1}]\rangle\,, \\
&E_1 |[0\ {-1}]\rangle = 0\,, \qquad E_2 |[0\ {-1}]\rangle = |[-1\ 1]\rangle\,, \qquad F_1 |[0\ {-1}]\rangle = 0 = F_2 |[0\ {-1}]\rangle\,.
\end{aligned}
\tag{1.5}
$$

On the other hand, via the 3 × 3 λ-matrices, we have their specific representatives:
$$
E_1 = \tfrac12(\lambda_1 + i\lambda_2) = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = (F_1)^T, \qquad
E_2 = \tfrac12(\lambda_6 + i\lambda_7) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} = (F_2)^T. \tag{1.6}
$$

It is then straightforward matrix arithmetic to verify that each of the requirements listed in
Eqs. (1.5) is indeed satisfied by the matrices in Eqs. (1.6), acting on the vectors as listed
in Eqs. (1.3). One should do so! The rest of the description of the matrices defining the Lie
algebra is then given by the remaining non-zero commutators, which are simply
$$
[E_1, E_2] = \tfrac12(\lambda_4 + i\lambda_5) = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} = -\left([F_1, F_2]\right)^T. \tag{1.7}
$$

The set of these eight matrices does close upon itself under multiplication using the commutator product.
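Here is a minimal sketch of that “straightforward matrix arithmetic” (Python/NumPy, same conventions as Eqs. (1.4) and (1.6); only a sample of the requirements of Eqs. (1.5) is spelled out, the rest check in the same way):

```python
import numpy as np

# Cartan elements and simple raising/lowering operators, as in Eqs. (1.4) and (1.6).
H1 = np.diag([1.0, -1.0, 0.0])
H2 = np.diag([0.0, 1.0, -1.0])
E1 = np.zeros((3, 3)); E1[0, 1] = 1.0
E2 = np.zeros((3, 3)); E2[1, 2] = 1.0
F1, F2 = E1.T, E2.T

# Basis kets |1>, |2>, |3> of Eq. (1.3), with weights [1 0], [-1 1], [0 -1].
kets = {1: np.array([1.0, 0, 0]), 2: np.array([0, 1.0, 0]), 3: np.array([0, 0, 1.0])}

# A few of the requirements of Eqs. (1.5):
assert np.allclose(E1 @ kets[1], 0) and np.allclose(E2 @ kets[1], 0)   # highest weight
assert np.allclose(F1 @ kets[1], kets[2])                              # F1 |[1 0]> = |[-1 1]>
assert np.allclose(F2 @ kets[2], kets[3])                              # F2 |[-1 1]> = |[0 -1]>
assert np.allclose(E2 @ kets[3], kets[2])                              # E2 |[0 -1]> = |[-1 1]>

# The commutator of Eq. (1.7), and a sample of the Chevalley relations of Eqs. (1.2):
E12 = E1 @ E2 - E2 @ E1
assert np.allclose(E12, np.array([[0, 0, 1.0], [0, 0, 0], [0, 0, 0]]))
assert np.allclose(H1 @ E1 - E1 @ H1, 2 * E1)    # [h1, e1] = +2 e1
assert np.allclose(H2 @ E1 - E1 @ H2, -1 * E1)   # [h2, e1] = -1 e1
assert np.allclose(E1 @ F1 - F1 @ E1, H1)        # [e1, f1] = h1
print("All sampled checks of Eqs. (1.5)-(1.7) passed.")
```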

At this point it is worth commenting on the particular labelling given for the “simpler

names” for the vectors given in the second column, or, if you prefer, on the ordering that

was given to the basis vectors $|a\rangle$. This was made so that their tensor components, $v^a$, would
be, respectively, $\delta^a_1$, $\delta^a_2$, and $\delta^a_3$. Keep track of this as we go through higher-level tensor

representations.

The further columns of Eq. (1.3) concern the standard physical interpretation of these

weight vectors in the “Standard Model” in particle physics. (Before describing them, I should

surely note that there are many other physical systems in which the group SU(3) is an im-

portant symmetry group; an example is gravity-induced ocean waves far out at sea, described

by the Boussinesq equation.) The first of the second set of columns gives the names of the

three quarks, simply as symbols, under the standard identifications, while the next one, the

fifth column, gives those same names, as labels for Dirac ket-vectors. They are the up quark,

u, the down quark, d, and the strange quark, s, respectively. The last column lists the appro-

priate physical properties of these quarks, which allows the respective identifications vis-a-vis

Gell-Mann’s names for the choices of diagonal λ-matrices, i.e., their values of Tz and Y . Given

the relationships to H1 and H2 , as in Eqs. (1.4), one can use the individual weight vectors

to calculate the values for the z-component of isospin, i.e., the eigenvalue for Tz , and the

hypercharge, the eigenvalue for Y . It is interesting to plot these various objects on a lattice

diagram for su(3), where we see that while the two simple roots meet at an angle of 120◦ and

the two reciprocal lattice vectors meet at an angle of 60◦ , the physical quantities are linear

combinations chosen so as to be orthogonal, i.e., they meet at 90◦ . Such a plot is on the next

page.

As a “summary” of the content of this discussion, and especially Eqs. (1.3), we can de-

scribe the physical content of this representation via a single column vector, which contains as

elements the vectors for the three particles it describes. They are put there so that the tensor
components that describe a state with just one of those particles are written by simply replacing
its symbol with a one, and the other symbols with a zero:
$$
\begin{pmatrix} u \\ d \\ s \end{pmatrix}. \tag{1.8}
$$

II. The other 3-dimensional representation:

In general, for g1 , g2 ∈ G, a representation is a mapping into a set of linear operators such

that D(g1 )D(g2 ) = D(g1 g2 ). Denoting D(gi ) simply by Di , and D(g1 g2 ) by D12 , we can try

to use various standard operations on linear operators to get other representations, which only

sometimes works:

$$
D_1 D_2 = D_{12} \;\Longrightarrow\; (D_2)^{-1}(D_1)^{-1} = (D_{12})^{-1}\,, \qquad (D_2)^\dagger (D_1)^\dagger = (D_{12})^\dagger\,,
$$
$$
(D_1)^* (D_2)^* = (D_{12})^*\,, \qquad \{(D_1)^\dagger\}^{-1}\{(D_2)^\dagger\}^{-1} = \{(D_{12})^\dagger\}^{-1}\,, \qquad \{(D_1)^T\}^{-1}\{(D_2)^T\}^{-1} = \{(D_{12})^T\}^{-1}\,, \tag{2.1}
$$

which gives us three additional representations for any given one: the conjugate one, the

adjoint-inverse (or contragredient) one, and the conjugate-contragredient one. Of course it

may well be that some of these are equivalent to the original one, so that not all of these are

necessarily new. In particular, it can be shown that any finite-dimensional representation of

any compact, simple Lie group can be chosen so that the matrices are unitary; therefore for

our case, for example, the last of the three entries in Eqs. (2.1) is identical to the original

representation. As well, the other two entries there are identical, so that we have acquired at

most one other by this means. In the case of SU(3), it turns out that this one other one, the

complex conjugate one, is indeed inequivalent, in general, and in particular for the defining

one. Therefore, there is one other 3-dimensional representation, which we label by the symbol
$\bar{3}$, to remind us that it is the complex conjugate to the original representation.

How are the representations of the generators related? We easily have
$$
\overline{D}(g_a) \equiv \left(e^{-i a^a T_a}\right)^* = e^{+i a^a T_a^*} = e^{-i a^a (-T_a^*)} = e^{-i a^a (-T_a^T)}\,, \tag{2.2}
$$

where the last equality is caused by the fact that the $T_a$ are all Hermitean matrices, so that
$T^* = (T^\dagger)^* = \left((T^T)^*\right)^* = T^T$. This would be a perfectly reasonable choice for the complex-

conjugate representation for the generators. In particular, one should note that it leaves

invariant the form of the Chevalley commutation relations, Eqs. (1.2), so that these would be

acceptable names for these matrices. However, as it puts minus signs into the representations

of the simple raising and lowering operators it violates the standard versions of the choices of

phase for such things, due originally to Condon and Shortley, for su(2), and more recently to

de Swart for su(3)—see Rev. Mod. Phys. 35 916-939 (1963). Luckily one can notice that

the Chevalley relations are unchanged if one agrees to change the signs of just the raising and

lowering operators, doing nothing else. Therefore, following de Swart, we will agree to label

the generators for $\bar{3}$ with an overbar, so that

$$
\begin{aligned}
\overline{H}_i &= -H_i\,, \qquad \overline{E}_i = +F_i\,, \qquad \overline{F}_i = +E_i\,, \\
\overline{E}_{12} &\equiv [\overline{E}_1, \overline{E}_2] = [F_1, F_2] = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} = -\left([\overline{F}_1, \overline{F}_2]\right)^T.
\end{aligned}
\tag{2.3}
$$
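One can check directly that these barred matrices satisfy the same Chevalley relations, Eqs. (1.2), with the same Cartan matrix. A minimal numerical sketch (Python/NumPy, using the explicit matrices of Eqs. (1.4) and (1.6); variable names are mine, purely illustrative):

```python
import numpy as np

# 3-representation generators (Eqs. (1.4) and (1.6)).
H1 = np.diag([1.0, -1.0, 0.0]); H2 = np.diag([0.0, 1.0, -1.0])
E1 = np.zeros((3, 3)); E1[0, 1] = 1.0
E2 = np.zeros((3, 3)); E2[1, 2] = 1.0
F1, F2 = E1.T, E2.T

# 3-bar generators in the convention of Eq. (2.3).
H1b, H2b = -H1, -H2
E1b, E2b = F1, F2
F1b, F2b = E1, E2

comm = lambda a, b: a @ b - b @ a

# The Chevalley relations of Eqs. (1.2) hold with the same Cartan matrix:
A = np.array([[2, -1], [-1, 2]])
Hs, Es, Fs = [H1b, H2b], [E1b, E2b], [F1b, F2b]
for i in range(2):
    for j in range(2):
        assert np.allclose(comm(Hs[i], Es[j]), A[j, i] * Es[j])
        assert np.allclose(comm(Hs[i], Fs[j]), -A[j, i] * Fs[j])
        if i == j:
            assert np.allclose(comm(Es[i], Fs[i]), Hs[i])

# And the commutator of Eq. (2.3):
print(comm(E1b, E2b))   # -1 in the (3,1) entry, zero elsewhere
```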

Since in this representation the matrices that describe the diagonal elements have opposite
signs to those that were used in the 3-representation, the various weights for the $\bar{3}$ have opposite
signs from those of the 3-representation; therefore the weights for this representation are $[-1\ 0]$, $[1\ {-1}]$,
and $[0\ 1]$. As well, since the raising and lowering operators have been switched, it follows that
(the negative of) the old highest weight will be the lowest weight, and vice versa. Therefore
we recognize this representation as the one with highest weight $[0\ 1]$; i.e., the other basic (or
fundamental) representation is just the $\bar{3}$-representation. The complex-conjugate version of

Eqs. (1.3) will then be

$$
\begin{array}{ccccccc}
\text{weight} & \text{weight vector} & \bar v_a & W^{bc} = W^{[bc]} & \text{particle} & \text{state} & (I_z, Y) \\[4pt]
[0\ 1] & |\bar 3\rangle & (0\ \ 0\ \ 1) & \begin{pmatrix} 0 & +1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} & \bar s & |\bar s\rangle & (0, +\tfrac23) \\[10pt]
[1\ {-1}] & |\bar 2\rangle & (0\ \ 1\ \ 0) & \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ +1 & 0 & 0 \end{pmatrix} & \bar d & |\bar d\rangle & (+\tfrac12, -\tfrac13) \\[10pt]
[-1\ 0] & |\bar 1\rangle & (1\ \ 0\ \ 0) & \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & +1 \\ 0 & -1 & 0 \end{pmatrix} & \bar u & |\bar u\rangle & (-\tfrac12, -\tfrac13)
\end{array}
\tag{2.4}
$$

Most of the columns are, by now, either self-explanatory or already explained, but not quite

all. The three weights have their weight vectors numbered in a way that might be unexpected;

this comes about from our desire to ensure that the particles and their associated antiparticles

have the same number, only with the location being switched from contravariant to covari-

ant. Therefore we may think of the mapping between the two sorts of levels as the particle

conjugation operator. Therefore we will treat an arbitrary vector in this vector space, for the

$\bar{3}$-representation, as having covariant components: $|v\rangle = \bar v_a\,|\bar a\rangle$, the overbar also distinguishing

them from the other ones. Also notice that by taking the matrix presentations for these vectors

as row vectors instead of column vectors, it automatically forces us to use the transpose of the

original representation matrices when they are being acted upon, only insisting that we recall

correctly the various minus signs involved. This is intentional and one should not now go

ahead and use an extra transpose on the matrices, unless it is the intent to write them in the

“standard” approach as column vectors.

One column, however, in the table is quite new; it is the one with 3×3 matrices in it. While

there is not any operator in these tensor spaces that raises or lowers indices, in the mode of the

“metric” in a physical coordinate space, there are two universal, invariant tensors within the

purview of SU(n), namely the Levi-Civita symbol, $\epsilon^{a_1 a_2 \ldots a_n}$, used to create the determinants
of $n \times n$ matrices—invariant since all the representation matrices have determinant +1—and
the Kronecker delta symbol, $\delta^a_b$, which are the components of the identity matrix—invariant
because everything commutes with the identity:

Invariant Tensors for SU(3):
$$
\epsilon^{abc}\,, \qquad \delta^a_b\,. \tag{2.5}
$$

We may therefore use the Levi-Civita symbol to trade any covariant tensor index for two
upper, skew-symmetric indices. In 3 dimensions, the two approaches have exactly the same
number of degrees of freedom, namely 3. The Levi-Civita symbol trades a covariant index
of 1, for example, with the contravariant pair {2, 3}. Either the tensor components $\bar v_a$, or
the equivalent tensor components, $W^{bc} = W^{[bc]}$, may be used to describe the vectors in this
representation space. The matrices shown are then the presentations of the contravariant,
skew-symmetric tensor components for the vectors.
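The index trade itself is just a one-line contraction with the Levi-Civita symbol. A small sketch (Python/NumPy; it reproduces the skew-symmetric matrices listed in Eq. (2.4), taking the covariant components of $\bar s$, $\bar d$, $\bar u$ to be the unit vectors shown there; the helper name is mine):

```python
import numpy as np

# Levi-Civita symbol epsilon^{abc}.
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c] = 1.0
    eps[a, c, b] = -1.0

def raise_index(v_bar):
    """Trade covariant components v_a for W^{bc} = eps^{abc} v_a (skew in b, c)."""
    return np.einsum('abc,a->bc', eps, v_bar)

# The three 3-bar weight vectors of Eq. (2.4): s-bar, d-bar, u-bar have covariant
# components (0 0 1), (0 1 0), (1 0 0) respectively.
for name, v in [('s-bar', [0, 0, 1]), ('d-bar', [0, 1, 0]), ('u-bar', [1, 0, 0])]:
    W = raise_index(np.array(v, dtype=float))
    assert np.allclose(W, -W.T)     # skew-symmetric: 3 degrees of freedom, as stated
    print(name, '->\n', W)
```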

Describing the indices of both fundamental representations in terms of contravariant indices only
allows them to be treated on the same footing from the point of view of Young tableaux:
$$
3 \;\Longleftrightarrow\; \square\,, \qquad \bar 3 \;\Longleftrightarrow\; \begin{matrix}\square\\ \square\end{matrix}\,. \tag{2.6}
$$

The single vector summary statement about the particle content of the representation is then
the following:
$$
(\,\bar u\ \ \bar d\ \ \bar s\,) \;\Longleftrightarrow\; \begin{pmatrix} 0 & \bar s & -\bar d \\ -\bar s & 0 & \bar u \\ \bar d & -\bar u & 0 \end{pmatrix}. \tag{2.7}
$$

III. Product Representations and their Particle Content

I first want to perform the simplest of the various products of representations, namely the
product of the defining representation and its conjugate one, the result for which is already
known to be

$$
3 \otimes \bar 3 = [1\ 0] \otimes [0\ 1] = [1\ 1] \oplus [0\ 0] = 8 \oplus 1\,, \tag{3.1}
$$
$$
\square \otimes \begin{matrix}\square\\ \square\end{matrix} \;=\; \begin{matrix}\square\,\square\\ \square\hphantom{\,\square}\end{matrix} \;\oplus\; \begin{matrix}\square\\ \square\\ \square\end{matrix}\,.
$$
When the tensors for these two representations are treated simultaneously, each with 3 components,
a total of 9 components are generated, described by the generic tensor $v^a w_b$; however,
this representation is no longer irreducible, i.e., there are sub-vector spaces that transform
only among themselves. In this particular case it is the multiples of the identity that transform
among themselves alone, and the other, traceless parts of the tensor that also transform
among themselves alone; this is simply the statement made earlier that the components of
the identity tensor—the Kronecker delta symbol—are left invariant by every group element.
Therefore, we may use the Kronecker delta to reduce this representation into its irreducible
parts:
$$
v^a w_b = \left(v^a w_b - \tfrac13\, \delta^a_b\, v^m w_m\right) + \tfrac13\, \delta^a_b\, v^m w_m \equiv W^a{}_b + \tfrac13\, \delta^a_b\, W\,. \tag{3.2}
$$

This is simply the mathematical justification for the decomposition already stated in Eq. (3.1),
since the (traceless) tensor components $W^a{}_b$ have eight independent degrees of freedom, and
the single invariant W corresponds to a single degree of freedom, namely an SU(3)-scalar.
Now, what happens to the weights in this product, and the particle identifications? This is
the interesting part that we want to understand better.
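The reduction of Eq. (3.2) is easy to check numerically: split a generic $v^a w_b$ into its traceless part and its trace part, and confirm that nothing is lost. A minimal sketch (Python/NumPy; the component values are arbitrary, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic tensor v^a w_b, built from two arbitrary (illustrative) component vectors.
v = rng.normal(size=3) + 1j * rng.normal(size=3)   # contravariant components v^a
w = rng.normal(size=3) + 1j * rng.normal(size=3)   # covariant components w_b
T = np.outer(v, w)                                 # the 9 components v^a w_b

# Eq. (3.2): split off the trace with the invariant Kronecker delta.
W_scalar = np.trace(T)                             # v^m w_m, the singlet
W_octet = T - W_scalar * np.eye(3) / 3             # traceless part, 8 degrees of freedom

assert np.isclose(np.trace(W_octet), 0)                       # traceless, as claimed
assert np.allclose(W_octet + W_scalar * np.eye(3) / 3, T)     # nothing lost: 9 = 8 + 1
```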

To begin with we need to see what the representations of the generators of the algebra
look like, as they act on the vectors in this 8-dimensional representation, as labelled in tensor
terms. Writing out the previous equation, Eq. (3.2), in terms of the symbolic matrices that
present the tensor components, we have
$$
\begin{pmatrix} u \\ d \\ s \end{pmatrix} (\,\bar u\ \ \bar d\ \ \bar s\,) =
\begin{pmatrix} u\bar u & u\bar d & u\bar s \\ d\bar u & d\bar d & d\bar s \\ s\bar u & s\bar d & s\bar s \end{pmatrix}
= \begin{pmatrix} (2u\bar u - d\bar d - s\bar s)/3 & u\bar d & u\bar s \\ d\bar u & (2d\bar d - u\bar u - s\bar s)/3 & d\bar s \\ s\bar u & s\bar d & (2s\bar s - u\bar u - d\bar d)/3 \end{pmatrix}
+ \frac{W}{3} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},
$$
$$
W \equiv (\,\bar u\ \ \bar d\ \ \bar s\,) \begin{pmatrix} u \\ d \\ s \end{pmatrix} = u\bar u + d\bar d + s\bar s\,. \tag{3.3}
$$
Notice that the matrix presentation for $W^a{}_b$ is in fact traceless!

Since this is a direct product representation, that means that for every group element $g$,
there is a ($3\times3$) matrix $D(g)$ that gives the action of that group element on the space of vectors
that span the representation space for the $3 = [1\ 0]$ representation, and the associated matrix,
$\overline{D}(g)$, for the $\bar 3 = [0\ 1]$ representation, as generated by the matrices defined in Eqs. (2.3). When
both act together, of course the one representation simply ignores the vectors for the other
representation, and then we have the reverse operation for the other one; this is the meaning of
“the direct product”; therefore, the action of the generators alone is given by $T \otimes I_3 + I_3 \otimes \overline{T}$,
for any abstract Lie algebra element $t \in \mathcal{G}$:
$$
T(W^a{}_b) = T^a{}_c\, W^c{}_b - \eta_T\, T^c{}_b\, W^a{}_c = \left( T^a{}_c\, \delta^d_b - \eta_T\, \delta^a_c\, T^d{}_b \right) W^c{}_d \equiv \{T^{[1\,1]}\}^{ad}{}_{cb}\, W^c{}_d\,, \tag{3.4}
$$

where—because it is quantum mechanical representations in which we are interested—we may
have a phase factor $\eta_T$ between the two portions. We take this phase factor in the usual
way, namely $(-1)^i$ for $T \in \mathcal{G}_i$, the (usual) grading of the Lie algebra relative to how many
commutators of simple root vectors one must take; therefore it is $+1$ for the members of the
Cartan subalgebra, $\mathcal{H} \equiv \mathcal{G}_0$, $-1$ for the simple raising and lowering operators that reside in
$\mathcal{G}_{\pm 1}$, the value $-1$ for their commutators, in $\mathcal{G}_{\pm 2}$, etc. The result above does not present the
representation matrix acting in quite the (expected) matrix form; instead, we are given the
result in terms of a (2,2)-tensor, i.e., a tensor with 2 contravariant indices and 2 covariant
indices. Using that, we could calculate, for example, the weight of any of the locations in our
matrix. On the other hand, there is a simpler way that is also directly put into evidence,
namely to simply continue to use the original 3 and $\bar 3$ versions, keeping good track of what one
is doing. We will use this approach, and give several examples.

We begin by calculating the weights of the various states in the matrix presentation of the
tensor for the representation, using the two Cartan subalgebra elements. As a simple example,
we calculate the weight vector for the upper right-hand corner of the matrix presentation:
$$
\begin{aligned}
H_i^{[1\,1]}\, W^1{}_3 = H_i^{[1\,1]}\, u\bar s &= \{H_i \otimes I_3 + I_3 \otimes \overline H_i\}\, u\bar s = \{H_i \otimes I_3 - I_3 \otimes H_i\}\, u\bar s \\
&= (H_i\, u)\,\bar s + u\,(\overline H_i\, \bar s) = [1\ 0]\, u\bar s + u\,\{[0\ 1]\, \bar s\} = ([1\ 0] + [0\ 1])\, u\bar s = [1\ 1]\, u\bar s\,.
\end{aligned}
\tag{3.5}
$$

Repeating this sort of calculation for the entire matrix of elements $W^a{}_b$, we present the matrix
with just the weights that are obtained for each element:
$$
\text{weights for elements:} \qquad \begin{pmatrix} [0\ 0] & [2\ {-1}] & [1\ 1] \\ [-2\ 1] & [0\ 0] & [-1\ 2] \\ [-1\ {-1}] & [1\ {-2}] & [0\ 0] \end{pmatrix}. \tag{3.6}
$$
As expected, every location corresponds to only one weight; however, there are two, or perhaps
three, locations with the same weight, namely [0 0]. This was expected since we knew that
[0 0] was degenerate, i.e., there are two different vectors in the representation space that have
this weight. (There are not three; the appearance of three is simply because there are only 8
degrees of freedom in this matrix, since it is traceless, so one of the 3 appearances of [0 0] is
just the negative of the sum of the other two.)
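These weights can be read off mechanically: for the Cartan elements the phase $\eta_T$ is $+1$, so Eq. (3.4) reduces to a commutator, and the entry in slot $(a, b)$ carries the weight given by the differences of the corresponding diagonal entries of $H_1$ and $H_2$. A short sketch reproducing Eq. (3.6) (plain Python; illustrative only):

```python
# Diagonal entries of H1 and H2 from Eqs. (1.4).
H1 = [1, -1, 0]
H2 = [0, 1, -1]

# For the octet tensor W^a_b the Cartan elements act as commutators
# (Eq. (3.4) with eta_T = +1), so the entry in slot (a, b) has weight
# [H1_aa - H1_bb, H2_aa - H2_bb].
weights = [[[H1[a] - H1[b], H2[a] - H2[b]] for b in range(3)] for a in range(3)]
for row in weights:
    print(row)
# [[0, 0], [2, -1], [1, 1]]
# [[-2, 1], [0, 0], [-1, 2]]
# [[-1, -1], [1, -2], [0, 0]]
```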

To break the degeneracy on the weights, we need to recall that one of them corresponds
to an isospin singlet, with $I = 0$ as well as $I_z = 0$, while the other one is simply the $I_z = 0$
member of an isospin triplet, with $I = 1$. Therefore, we expect the triplet member to have
contributions only in the $W^1{}_1$ and $W^2{}_2$ locations, since these are the places that the matrix
$T_z$ can reach. We therefore choose $A$ as the triplet state, and $B$ as the singlet. Ensuring that
$A$ occurs only in the two upper locations, and that the matrix is traceless, gives us
$$
W^1{}_1 = A + B\,, \qquad W^2{}_2 = -A + B\,, \qquad W^3{}_3 = -2B\,. \tag{3.7}
$$
The result, normalizing each combination as a physical state, is
$$
A = \frac{u\bar u - d\bar d}{\sqrt 2}\,, \qquad B = \frac{u\bar u + d\bar d - 2 s\bar s}{\sqrt 6}\,, \tag{3.8}
$$
allowing us to re-write explicitly our matrix one last time, describing in detail the elements
irreducibly, in terms of the usual pseudoscalar meson octet members, where $A$ becomes the
neutral $\pi^0$ and $B$ the neutral $\eta^0$ meson:
$$
\begin{pmatrix}
(u\bar u - d\bar d)/2 + (u\bar u + d\bar d - 2s\bar s)/6 & u\bar d & u\bar s \\
d\bar u & (d\bar d - u\bar u)/2 + (u\bar u + d\bar d - 2s\bar s)/6 & d\bar s \\
s\bar u & s\bar d & (2s\bar s - u\bar u - d\bar d)/3
\end{pmatrix}
=
\begin{pmatrix}
\pi^0/\sqrt2 + \eta^0/\sqrt6 & \pi^+ & K^+ \\
\pi^- & -\pi^0/\sqrt2 + \eta^0/\sqrt6 & K^0 \\
K^- & \overline{K}{}^0 & -2\eta^0/\sqrt6
\end{pmatrix}. \tag{3.9}
$$
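As a quick arithmetic check, the diagonal entries of the two matrices in Eq. (3.9) agree once $\pi^0$ and $\eta^0$ are expressed through Eq. (3.8). A small sketch (Python/NumPy, representing the neutral quark–antiquark combinations simply by their coefficient vectors; illustrative only):

```python
import numpy as np

# Coefficients on the basis (u ubar, d dbar, s sbar).
uub, ddb, ssb = np.eye(3)

pi0 = (uub - ddb) / np.sqrt(2)             # A of Eq. (3.8)
eta0 = (uub + ddb - 2 * ssb) / np.sqrt(6)  # B of Eq. (3.8)

# Diagonal entries of the traceless octet matrix, from Eq. (3.3):
W11 = (2 * uub - ddb - ssb) / 3
W22 = (2 * ddb - uub - ssb) / 3
W33 = (2 * ssb - uub - ddb) / 3

# ...and the corresponding entries of the meson matrix of Eq. (3.9):
assert np.allclose(W11, pi0 / np.sqrt(2) + eta0 / np.sqrt(6))
assert np.allclose(W22, -pi0 / np.sqrt(2) + eta0 / np.sqrt(6))
assert np.allclose(W33, -2 * eta0 / np.sqrt(6))
print("Diagonal of Eq. (3.9) reproduces the traceless part of Eq. (3.3).")
```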

IV. The Baryon Representations

We now want to consider the quark triple product, and its tensor version:

$$
3 \otimes 3 \otimes 3 = ([1\ 0])^{\otimes 3} = [3\ 0] \oplus [1\ 1] \oplus [1\ 1] \oplus [0\ 0] = 10 \oplus 8 \oplus 8 \oplus 1\,, \tag{4.1}
$$
$$
\square \otimes \square \otimes \square \;=\; \square\,\square\,\square \;\oplus\; \begin{matrix}\square\,\square\\ \square\hphantom{\,\square}\end{matrix} \;\oplus\; \begin{matrix}\square\,\square\\ \square\hphantom{\,\square}\end{matrix} \;\oplus\; \begin{matrix}\square\\ \square\\ \square\end{matrix}\,.
$$

The following equality splits the simple triple product into its constituent irreducible parts,
following the usual rules for creating appropriate Young Tableaux:

$$
\begin{aligned}
t^a u^b v^c = {} & \tfrac16\,\{t^a u^b v^c + t^a u^c v^b + t^b u^c v^a + t^b u^a v^c + t^c u^a v^b + t^c u^b v^a\} \\
& + \tfrac13\,\{t^a u^b v^c + t^b u^a v^c - t^c u^b v^a - t^b u^c v^a\} \\
& + \tfrac13\,\{t^a u^b v^c + t^c u^b v^a - t^b u^a v^c - t^c u^a v^b\} \\
& + \tfrac16\,\{t^a u^b v^c - t^a u^c v^b + t^b u^c v^a - t^b u^a v^c + t^c u^a v^b - t^c u^b v^a\}\,, \\
\equiv {} & D^{abc} + \tfrac{2}{\sqrt6}\,\epsilon^{acd}\, X^b{}_d + \tfrac{2}{\sqrt6}\,\epsilon^{abd}\, Y^c{}_d + \tfrac{1}{\sqrt6}\,\epsilon^{abc}\, Z\,,
\end{aligned}
\tag{4.2}
$$
$$
X^b{}_d \equiv \tfrac{1}{\sqrt6}\,\epsilon_{def}\left(t^e u^b v^f + t^b u^e v^f\right), \qquad
Y^b{}_d \equiv \tfrac{1}{\sqrt6}\,\epsilon_{def}\left(t^e u^f v^b + t^b u^f v^e\right), \qquad
Z \equiv \tfrac{1}{\sqrt6}\,\epsilon_{def}\, t^d u^e v^f\,.
$$

a. The tensor components $D^{abc}$ are totally symmetric in their indices, and thereby constitute
the 10-dimensional representation, [3 0].
b. The next two braces of components are skew-symmetric in $a, c$ and in $a, b$, respectively,
and therefore allow us to pick out the equivalent tensors $X^b{}_d$ and $Y^c{}_d$, which one sees by
calculation are traceless; they constitute two copies of the [1 1] representation, which are
each 8 dimensional.
c. The last brace of components is skew-symmetric in every pair of indices, and is therefore
simply proportional to the Levi-Civita symbol itself, since we have only 3 values for these
indices to take. This is then just a scalar representation, [0 0], and is 1 dimensional. (A
quick numerical check of this 10 ⊕ 8 ⊕ 8 ⊕ 1 counting follows below.)
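Since the four pieces of Eq. (4.2) are just (anti)symmetrizations, their dimension count can be checked by writing each brace as an operator on the 27-dimensional space of three-index tensors and computing ranks. A minimal sketch (Python/NumPy; the helper names are mine, and the index conventions follow Eq. (4.2)):

```python
import numpy as np
from itertools import permutations

n = 3
N = n ** 3   # 27 components of a generic 3-index tensor t^a u^b v^c

def sign(p):
    """Sign of a permutation given as a tuple of 0..2."""
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def Q(p):
    """27x27 matrix of the index permutation T[i0,i1,i2] -> T[i_{p[0]}, i_{p[1]}, i_{p[2]}]."""
    M = np.zeros((N, N))
    for idx in np.ndindex(n, n, n):
        src = tuple(idx[p[k]] for k in range(3))
        M[np.ravel_multi_index(idx, (n, n, n)), np.ravel_multi_index(src, (n, n, n))] = 1.0
    return M

e, s12, s13 = Q((0, 1, 2)), Q((1, 0, 2)), Q((2, 1, 0))
c123, c132 = Q((1, 2, 0)), Q((2, 0, 1))

# The four pieces of Eq. (4.2), as operators on the 27-dimensional tensor space:
P_sym = sum(Q(p) for p in permutations(range(3))) / 6
P_anti = sum(sign(p) * Q(p) for p in permutations(range(3))) / 6
P_mix1 = (e + s12 - s13 - c123) / 3   # the brace giving X^b_d
P_mix2 = (e + s13 - s12 - c132) / 3   # the brace giving Y^c_d

assert np.allclose(P_sym + P_anti + P_mix1 + P_mix2, np.eye(N))   # nothing lost
print([int(np.linalg.matrix_rank(P)) for P in (P_sym, P_mix1, P_mix2, P_anti)])
# -> [10, 8, 8, 1], the counting of Eq. (4.1)
```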

Picking one of the eight-dimensional representations, say $X^a{}_b$, we may calculate weights
by using the individual $3 \times 3$ representations of the Cartan subalgebra for each part. While
each representative will be composed of three quarks, since it is skew in two of those three
indices, it is not allowed to have all three the same. Therefore the highest weight will be
the one composed of two u-quarks and one d-quark, composed so as to create the element
$X^1{}_3 \propto \epsilon_{312}\,(t^1 u^1 v^2 + t^1 u^1 v^2) + \epsilon_{321}\,(t^2 u^1 v^1 + t^1 u^2 v^1) = 2\,uud - duu - udu$. This has $I_z = 1/2$ and $Y = 1$, for
a weight of $[1\ 1]$, as expected. In the process of identifying particular representations as
physical particles, we identify this one as the proton, $p$, and normalize it appropriately for a
physical state:
$$
[1\ 1] = p^+ = (2\,uud - duu - udu)/\sqrt 6 = X^1{}_3\,. \tag{4.3}
$$

Then we can easily outline the other, non-degenerate members of the octet:
$$
\begin{aligned}
[-1\ 2] &= n^0 = X^2{}_3 = (-udd - dud + 2\,ddu)/\sqrt6\,, \\
[2\ {-1}] &= \Sigma^+ = X^1{}_2 = (2\,uus - suu - usu)/\sqrt6\,, \\
[-2\ 1] &= \Sigma^- = X^2{}_1 = (2\,dds - sdd - dsd)/\sqrt6\,, \\
[1\ {-2}] &= \Xi^0 = X^3{}_2 = (2\,ssu - uss - sus)/\sqrt6\,, \\
[-1\ {-1}] &= \Xi^- = X^3{}_1 = (2\,ssd - dss - sds)/\sqrt6\,.
\end{aligned}
\tag{4.4}
$$
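The octet combinations above can be generated mechanically from the definition of $X^b{}_d$ in Eqs. (4.2). A small illustrative sketch (Python, with the three tensor slots $t, u, v$ taken as the first, second, and third quark; the helper name is mine). It reproduces the proton combination of Eq. (4.3); the remaining entries follow in the same way, up to the overall phase chosen for each physical state:

```python
import numpy as np
from collections import defaultdict

quarks = ['u', 'd', 's']   # the index values 1, 2, 3 of the text map to 0, 1, 2 here

eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

def X(b, d):
    """Collect the three-quark monomials of X^b_d from Eq. (4.2), as a {word: coeff} dict,
    dropping the overall 1/sqrt(6)."""
    out = defaultdict(float)
    for e in range(3):
        for f in range(3):
            if eps[d, e, f]:
                # epsilon_{def} ( t^e u^b v^f + t^b u^e v^f )
                out[quarks[e] + quarks[b] + quarks[f]] += eps[d, e, f]
                out[quarks[b] + quarks[e] + quarks[f]] += eps[d, e, f]
    return {w: c for w, c in out.items() if c}

# Highest-weight state, Eq. (4.3): X^1_3 (here X(0, 2)) is 2uud - duu - udu.
print(X(0, 2))   # {'uud': 2.0, 'duu': -1.0, 'udu': -1.0}
```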

The two independent but degenerate states that occur in the diagonal elements of the matrix

both have weight [0 , 0]. Using the method espoused at Eqs. (3.7), and the tensor products in

Eqs. (4.2), we have


$$
\begin{aligned}
X^1{}_1 &= (dus + uds - sud - usd)/\sqrt6 = A + B\,, \\
X^2{}_2 &= (sdu + dsu - uds - dus)/\sqrt6 = -A + B\,, \\
X^3{}_3 &= (usd + sud - dsu - sdu)/\sqrt6 = -2B\,,
\end{aligned}
$$
$$
\Longrightarrow\quad
\Sigma^0 = \sqrt2\, A = (2\,uds + 2\,dus - usd - sud - dsu - sdu)/\sqrt{12}\,, \qquad
\Lambda^0 = \sqrt6\, B = (dsu + sdu - usd - sud)/2\,. \tag{4.5}
$$

Inserting all these states into the matrix form for $X^a{}_b$, we then have
$$
\left(\!\left(X^a{}_b\right)\!\right) = \begin{pmatrix}
\dfrac{\Sigma^0}{\sqrt2} + \dfrac{\Lambda^0}{\sqrt6} & \Sigma^+ & p^+ \\[8pt]
\Sigma^- & -\dfrac{\Sigma^0}{\sqrt2} + \dfrac{\Lambda^0}{\sqrt6} & n^0 \\[8pt]
\Xi^- & \Xi^0 & -\dfrac{2\Lambda^0}{\sqrt6}
\end{pmatrix}. \tag{4.6}
$$
