
Mathematical Methods in Physics

Physics 70003, Fall 2018


Solutions to HW #2

Change of basis in a Hilbert space.


As discussed in class, if the eigenvectors of an operator $\hat{A}$ are used to span the Hilbert space, then $\hat{A}$ is represented in a straightforward way with projection operators, $\hat{A} = \sum_a a\,|a\rangle\langle a|$, and within that basis the matrix corresponding to $\hat{A}$ is diagonal: $A_{aa'} = a\,\delta_{a,a'}$. If, however, some other basis set $|b\rangle$ is used, then the matrix representing $\hat{A}$ is not diagonal: $\hat{A} = \sum_{b',b''} |b'\rangle\langle b'|\hat{A}|b''\rangle\langle b''|$.
(a) Prove that if both bases $|a\rangle$ and $|b\rangle$ are orthonormal and complete sets, then there exists a unitary operator $\hat{U}$ (satisfying the condition $\hat{U}\hat{U}^\dagger = \hat{U}^\dagger\hat{U} = \hat{1}$), such that $|b^{(l)}\rangle = \hat{U}|a^{(l)}\rangle$ for all values of $l = 1, 2, \dots, N$.
(b) Provide the matrix representation of $\hat{U}$ and $\hat{U}^\dagger$ in both basis sets $|a\rangle$ and $|b\rangle$.

a) We start with the operator $\hat{A}$ written in the basis $|b\rangle$:
\[
\hat{A} = \sum_{b,b'} |b\rangle\langle b|\hat{A}|b'\rangle\langle b'| .
\]
Now consider the operator $\hat{U}$ that transforms the eigenvectors of $\hat{A}$, i.e. the basis $|a\rangle$, into the basis $|b\rangle$:
\[
|b\rangle = \hat{U}|a\rangle . \tag{1}
\]

We need to prove that $\hat{U}$ is unitary, i.e. that it satisfies $\hat{U}^\dagger\hat{U} = \hat{U}\hat{U}^\dagger = \hat{1}$, if $|b\rangle$ and $|a\rangle$ are orthonormal complete sets. From Eq. (1),

\[
\langle b| = \langle a|\,\hat{U}^\dagger ,
\qquad
\langle b|b'\rangle = \langle a|\,\hat{U}^\dagger\hat{U}\,|a'\rangle ,
\qquad
\delta_{bb'} = \delta_{aa'} = \langle a|\,\hat{U}^\dagger\hat{U}\,|a'\rangle .
\]

Since this holds for every pair of basis states, and the set $|a\rangle$ is complete, the product $\hat{U}^\dagger\hat{U}$ equals the unit matrix $\hat{1}$. For a finite-dimensional square matrix a left inverse is also a right inverse, so $\hat{U}\hat{U}^\dagger = \hat{1}$ as well, and $\hat{U}$ is unitary.

b) Finding the matrix representation of $\hat{U}$ in the basis $|a\rangle$ means finding the coefficients $U_{ij}$ in the expansion
\[
\hat{U} = \sum_{ij} U_{ij}\,|a_i\rangle\langle a_j| .
\]

From Eq. (1) we have
\[
|b_j\rangle = \hat{U}|a_j\rangle = \sum_i U_{ij}\,|a_i\rangle ,
\]

and because the $|a_i\rangle$ are orthonormal,
\[
\langle a_i|b_j\rangle = \sum_k U_{kj}\,\langle a_i|a_k\rangle = \sum_k U_{kj}\,\delta_{ik} = U_{ij} .
\]

By definition,
\[
\big(\hat{U}^\dagger\big)_{ij} = U_{ji}^{*} = \langle a_j|b_i\rangle^{*} = \langle b_i|a_j\rangle .
\]
Note that the same matrix also represents $\hat{U}$ in the basis $|b\rangle$, since $\langle b_i|\hat{U}|b_j\rangle = \langle a_i|\hat{U}^\dagger\hat{U}\hat{U}|a_j\rangle = \langle a_i|\hat{U}|a_j\rangle = U_{ij}$.
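As a quick numerical cross-check of parts (a) and (b) — a sketch, not part of the graded solution — here is a minimal numpy script; the random bases and the helper name `random_orthonormal_basis` are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

def random_orthonormal_basis(n):
    # QR-decomposing a random complex matrix gives a unitary Q whose
    # columns form an orthonormal, complete basis of C^n.
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return [q[:, k] for k in range(n)]

a = random_orthonormal_basis(N)
b = random_orthonormal_basis(N)

# U = sum_i |b_i><a_i| maps each |a_i> to |b_i>.
U = sum(np.outer(b[i], a[i].conj()) for i in range(N))

assert np.allclose(U.conj().T @ U, np.eye(N))   # U†U = 1
assert np.allclose(U @ U.conj().T, np.eye(N))   # UU† = 1
# Matrix elements in the a basis: U_ij = <a_i|U|a_j> = <a_i|b_j>.
for i in range(N):
    for j in range(N):
        assert np.isclose(a[i].conj() @ U @ a[j], a[i].conj() @ b[j])
```

The QR decomposition is merely a convenient way to manufacture orthonormal bases; any pair of complete orthonormal sets would do.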

Riley & Hobson, problem 8.7. Prove the following results involving Hermitian matrices.
(a) If A is Hermitian and U is unitary, then U −1 AU is Hermitian.
(b) If A is anti-Hermitian, then iA is Hermitian.
(c) The product of two Hermitian matrices A and B is Hermitian if and only if A and B commute.
(d) If $S$ is a real antisymmetric matrix, then $A = (I - S)(I + S)^{-1}$ is orthogonal. If $A$ is given by
\[
A = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix},
\]
then find the matrix $S$ that is needed to express $A$ in the above form.
(e) If $K$ is skew-Hermitian, i.e. $K^\dagger = -K$, then $V = (I + K)(I - K)^{-1}$ is unitary.

(a) Since $A = A^\dagger$ and $U^\dagger U = I$, consider
\[
(U^{-1}AU)^\dagger = U^\dagger A^\dagger (U^{-1})^\dagger = U^{-1} A (U^\dagger)^{-1} = U^{-1} A (U^{-1})^{-1} = U^{-1} A U .
\]
$(U^{-1}AU)^\dagger = U^{-1}AU$, meaning $U^{-1}AU$ is Hermitian.

(b) $A^\dagger = -A$, so
\[
(iA)^\dagger = -iA^\dagger = -i(-A) = iA .
\]
$(iA)^\dagger = iA$ means that $iA$ is Hermitian.

(c) We know $A = A^\dagger$ and $B = B^\dagger$. We need to prove that $AB = BA \Rightarrow (AB)^\dagger = AB$, and, vice versa, $(AB)^\dagger = AB \Rightarrow AB = BA$.
(1) Let $AB = BA$. Then
\[
(AB)^\dagger = B^\dagger A^\dagger = BA = AB .
\]
(2) Let $(AB)^\dagger = AB$. Then
\[
BA = B^\dagger A^\dagger = (AB)^\dagger = AB .
\]
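A quick numerical sanity check of (a)–(c) — an illustrative sketch, not part of the proofs; the random test matrices are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = M + M.conj().T                         # Hermitian by construction
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))  # unitary

def is_hermitian(X):
    return np.allclose(X, X.conj().T)

# (a) U^{-1} A U is Hermitian
assert is_hermitian(np.linalg.inv(U) @ A @ U)
# (b) if K is anti-Hermitian, iK is Hermitian
K = M - M.conj().T
assert is_hermitian(1j * K)
# (c) AB is Hermitian iff A and B commute; B = A^2 certainly commutes with A
B = A @ A
assert is_hermitian(A @ B)
```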

(d) First, let us prove that if $S$ is real antisymmetric (so $S^T = -S$), then $A = (I - S)(I + S)^{-1}$ is orthogonal. Consider
\[
A^T A = \big[(I - S)(I + S)^{-1}\big]^T \big[(I - S)(I + S)^{-1}\big] = \big[(I + S)^{-1}\big]^T (I - S)^T (I - S)(I + S)^{-1} =
\]
\[
= (I - S)^{-1}(I + S)(I - S)(I + S)^{-1} = (I - S)^{-1}(I - S)(I + S)(I + S)^{-1} = I ,
\]
where we used $(I - S)^T = I + S$, $\big[(I + S)^{-1}\big]^T = (I - S)^{-1}$, and the fact that $(I + S)$ and $(I - S)$ commute, since $(I + S)(I - S) = I - S^2 = (I - S)(I + S)$.

So $A^T A = I$, and $A$ is orthogonal.
Now, to find the corresponding $S$, we can unravel the expression for $A$ as follows:
\[
A(I + S) = I - S \;\Longrightarrow\; A + AS = I - S \;\Longrightarrow\; (A + I)S = I - A \;\Longrightarrow\; S = (A + I)^{-1}(I - A) .
\]

Finally, we will use the following relation:
\[
\begin{pmatrix} 1+\cos\theta & \sin\theta \\ -\sin\theta & 1+\cos\theta \end{pmatrix}^{-1}
= \frac{1}{2(1+\cos\theta)}
\begin{pmatrix} 1+\cos\theta & -\sin\theta \\ \sin\theta & 1+\cos\theta \end{pmatrix},
\]
which can be verified by writing down the product of $(A + I)^{-1}$ and $(A + I)$. In the end:
\[
S = \begin{pmatrix} 1+\cos\theta & \sin\theta \\ -\sin\theta & 1+\cos\theta \end{pmatrix}^{-1}
\begin{pmatrix} 1-\cos\theta & -\sin\theta \\ \sin\theta & 1-\cos\theta \end{pmatrix}
= \frac{1}{2(1+\cos\theta)}
\begin{pmatrix} 1+\cos\theta & -\sin\theta \\ \sin\theta & 1+\cos\theta \end{pmatrix}
\begin{pmatrix} 1-\cos\theta & -\sin\theta \\ \sin\theta & 1-\cos\theta \end{pmatrix} =
\]
\[
= \frac{1}{4\cos^2(\theta/2)}
\begin{pmatrix} 0 & -2\sin\theta \\ 2\sin\theta & 0 \end{pmatrix}
= \begin{pmatrix} 0 & -\tan(\theta/2) \\ \tan(\theta/2) & 0 \end{pmatrix}.
\]
(e) In the first part of (d), replace $S$ with $-K$ and the transpose with the Hermitian conjugate, and the proof is essentially the same. This is because unitary matrices play the same role among complex matrices as orthogonal matrices do among real matrices.
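For the skeptical reader, a short numpy sketch — illustrative only, not part of the solution — confirming both the $S$ found in (d) and the unitarity claim in (e):

```python
import numpy as np

theta = 0.7
t = np.tan(theta / 2)
S = np.array([[0.0, -t], [t, 0.0]])        # the antisymmetric S found above
I = np.eye(2)

A = (I - S) @ np.linalg.inv(I + S)         # Cayley-type transform
A_expected = np.array([[np.cos(theta), np.sin(theta)],
                       [-np.sin(theta), np.cos(theta)]])
assert np.allclose(A, A_expected)          # reproduces the rotation matrix
assert np.allclose(A.T @ A, I)             # A is orthogonal

# (e): a skew-Hermitian K gives a unitary V = (I+K)(I-K)^{-1}
rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
K = M - M.conj().T                         # K† = -K by construction
V = (np.eye(3) + K) @ np.linalg.inv(np.eye(3) - K)
assert np.allclose(V.conj().T @ V, np.eye(3))
```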

Differentiating determinants.
[Hint: First, read carefully section 8.10 of RBH on the inverse of a matrix and make sure you understand how Eqs. (8.55) and (8.57) are obtained (your textbook already gives an outline, but fill in the missing steps, if any). You will need these results for the next problem.]
Show that if $A$ is an $n \times n$ square matrix, whose elements depend on $x$, then
\[
\frac{d}{dx}\det A = \sum_{i,j} C_{ij}\,\frac{da_{ij}}{dx},
\]

where $C_{ij}$ is the cofactor of $a_{ij}$ as defined in class and by RBH in section 8.9. Use this result and those obtained in the previous problem to show that
\[
\frac{d}{dx}\ln(\det A) = \mathrm{Tr}\Big\{\frac{dA}{dx}\,A^{-1}\Big\}.
\]
Furthermore, show that you can use these results to give the inverse of a matrix as the partial log-derivative of its determinant $\det A$ with respect to its elements $a_{ij}$. In other words, show that
\[
\frac{\partial}{\partial a_{ij}}\ln(\det A) = (A^{-1})_{ji} .
\]

The derivative of the logarithm of a determinant is calculated the same way as the derivative of the logarithm of any function, so for the left-hand side of the equation:
\[
\frac{d}{dx}\ln(\det A) = \frac{1}{\det A}\frac{d}{dx}\det A = \frac{1}{\det A}\sum_{ij} C_{ij}\,\frac{da_{ij}}{dx} .
\]

The right-hand side can be calculated by using the relation $A^{-1} = \frac{C^T}{\det A}$:
\[
\mathrm{Tr}\Big\{\frac{dA}{dx}A^{-1}\Big\} = \mathrm{Tr}\Big\{\frac{dA}{dx}\,\frac{C^T}{\det A}\Big\} = \frac{1}{\det A}\sum_{ij}\frac{da_{ij}}{dx}\,(C^T)_{ji} = \frac{1}{\det A}\sum_{ij}\frac{da_{ij}}{dx}\,C_{ij} .
\]
The right-hand side exactly matches the left-hand side, hence the relation holds.
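A finite-difference sketch of this result — illustrative only; the sample matrix $A(x)$ below is made up for the test:

```python
import numpy as np

def A_of_x(x):
    # An arbitrary smooth matrix-valued function for testing.
    return np.array([[np.cos(x), x**2, 1.0],
                     [np.exp(x), 2.0, np.sin(x)],
                     [x, 1.0, x**3]])

def dA_dx(x, h=1e-6):
    return (A_of_x(x + h) - A_of_x(x - h)) / (2 * h)

def cofactor_matrix(A):
    # C^T = det(A) A^{-1}, hence C = det(A) (A^{-1})^T.
    return np.linalg.det(A) * np.linalg.inv(A).T

x, h = 0.3, 1e-6
lhs = (np.linalg.det(A_of_x(x + h)) - np.linalg.det(A_of_x(x - h))) / (2 * h)
rhs = np.sum(cofactor_matrix(A_of_x(x)) * dA_dx(x))   # sum_ij C_ij da_ij/dx
assert np.isclose(lhs, rhs, rtol=1e-4)
```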

To obtain the final relation in the problem, we will use the second relation we just derived, as well as the chain rule for the full derivative:
\[
\frac{df(x)}{dx} = \frac{\partial f}{\partial t_1}\frac{\partial t_1}{\partial x} + \frac{\partial f}{\partial t_2}\frac{\partial t_2}{\partial x} + \dots + \frac{\partial f}{\partial t_n}\frac{\partial t_n}{\partial x},
\]
where $f = f(t_1, t_2, \dots, t_n)$ and $t_1 = t_1(x),\ t_2 = t_2(x),\ \dots,\ t_n = t_n(x)$. Taking $f = \det A$ and noting that $\det A$ depends on all the elements $a_{ij}$, we get:
\[
\frac{d}{dx}\ln(\det A) = \frac{1}{\det A}\frac{d(\det A)}{dx} = \frac{1}{\det A}\sum_{ij}\frac{\partial(\det A)}{\partial a_{ij}}\frac{\partial a_{ij}}{\partial x} = \sum_{ij}\frac{\partial \ln(\det A)}{\partial a_{ij}}\frac{\partial a_{ij}}{\partial x} .
\]

On the other hand, from the relation we obtained earlier:
\[
\frac{d}{dx}\ln(\det A) = \mathrm{Tr}\Big\{\frac{dA}{dx}A^{-1}\Big\} = \sum_i\Big(\sum_j \frac{da_{ij}}{dx}\,(A^{-1})_{ji}\Big) = \sum_{ij}\frac{\partial a_{ij}}{\partial x}\,(A^{-1})_{ji} .
\]

Since both expressions were obtained from the same starting expression, we get:
\[
\sum_{ij}\frac{\partial \ln(\det A)}{\partial a_{ij}}\frac{\partial a_{ij}}{\partial x} = \sum_{ij}\frac{\partial a_{ij}}{\partial x}\,(A^{-1})_{ji}
\;\Longrightarrow\;
\sum_{ij}\frac{\partial a_{ij}}{\partial x}\left(\frac{\partial \ln(\det A)}{\partial a_{ij}} - (A^{-1})_{ji}\right) = 0 .
\]

Now, note that $A$ can be any square $n \times n$ matrix with elements depending on $x$. The expression inside the parentheses depends only on the matrix elements themselves, while the dependence of the individual $a_{ij}$ on $x$ — and hence the set of derivatives $\partial a_{ij}/\partial x$ — can be chosen arbitrarily and independently. The equation can therefore only hold if the expression inside the parentheses equals 0 for every pair $(i, j)$, that is,
\[
\frac{\partial}{\partial a_{ij}}\ln(\det A) = (A^{-1})_{ji}
\]
for any $(i, j)$. This is exactly the last relation.
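A corresponding element-by-element numerical check of the final relation — again an illustrative sketch with a random matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
Ainv = np.linalg.inv(A)
h = 1e-6

for i in range(4):
    for j in range(4):
        Ap, Am = A.copy(), A.copy()
        Ap[i, j] += h
        Am[i, j] -= h
        # Numerical partial derivative of ln|det A| w.r.t. a_ij; the absolute
        # value only shifts ln(det A) by a constant, so the derivative agrees.
        d = (np.log(abs(np.linalg.det(Ap))) - np.log(abs(np.linalg.det(Am)))) / (2 * h)
        assert np.isclose(d, Ainv[j, i], rtol=1e-4)   # equals (A^{-1})_ji
```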

Matrix representation of spin-1/2 operators.


Consider the two-dimensional Hilbert space spanned by the spin-up ($|+\rangle$) and spin-down ($|-\rangle$) states of a spin-1/2 particle. (For simplicity, in this problem I will be suppressing the "hat" symbol on operators. It should be clear from the context what is an operator and what isn't. If in doubt, please ask.)
(a) Derive the projection operator representation of the $S_z$ operator, as well as the raising and lowering operators $S^+$ and $S^-$.
(b) Use these to derive (i.e. don't just quote/refer to) the matrix representation of $S_z$, $S^+$, $S^-$ as well as $S_x$ and $S_y$. (Hint: you will end up with either individual Pauli matrices, or some linear combination of them.)
(c) Show that the matrices $S_i$, where $i = x, y, z$, satisfy the following commutation and anticommutation rules: $[S_i, S_j] = i\epsilon_{ijk}\hbar S_k$, and $\{S_i, S_j\} = \frac{1}{2}\hbar^2\delta_{ij}$.


(d) Show that the square of the total spin operator S 2 = Sx2 + Sy2 + Sz2 commutes with all components of the


spin operators [ S 2 , Si ] = 0.

(e) Show that $(\vec{S}\cdot\vec{a})(\vec{S}\cdot\vec{b}) = (\vec{a}\cdot\vec{b})\hat{1} + i\vec{S}\cdot(\vec{a}\times\vec{b})$. Argue that $\vec{S}\cdot\vec{a}$ and $\vec{S}\cdot\vec{b}$ commute if $\vec{a} \parallel \vec{b}$.

(a) The $S_z$ operator is defined by its eigenvalues, which equal $\hbar$ times the value of the spin's projection on the $z$ axis. This means that:
\[
S_z|+\rangle = +\frac{\hbar}{2}|+\rangle, \qquad S_z|-\rangle = -\frac{\hbar}{2}|-\rangle \tag{2}
\]
Now, note that by the requirements of orthonormality, $\langle+|+\rangle = \langle-|-\rangle = 1$ and $\langle+|-\rangle = \langle-|+\rangle = 0$. By multiplying each of the two equations above by $\langle+|$ or $\langle-|$, we get:
\[
\langle+|S_z|+\rangle = \frac{\hbar}{2}, \qquad \langle-|S_z|-\rangle = -\frac{\hbar}{2}, \qquad \langle+|S_z|-\rangle = 0, \qquad \langle-|S_z|+\rangle = 0 \tag{3}
\]

From the first two of these equations, it is clear that $S_z$ cannot be simply a number, so we write the most general expression for this operator:
\[
S_z = a + b|+\rangle\langle+| + c|+\rangle\langle-| + d|-\rangle\langle+| + e|-\rangle\langle-|
\]
Plugging this into (2), we get the following constraints:
\[
a + b = \frac{\hbar}{2}, \quad d = 0; \qquad a + e = -\frac{\hbar}{2}, \quad c = 0 .
\]
We can rewrite these as $b = \frac{\hbar}{2} - a$, $e = -\frac{\hbar}{2} - a$. Then the expression is:
\[
S_z = a + \Big(\frac{\hbar}{2} - a\Big)|+\rangle\langle+| + \Big(-\frac{\hbar}{2} - a\Big)|-\rangle\langle-|
\]
Here $a$ is just a constant that, in fact, disappears, since $|+\rangle\langle+| + |-\rangle\langle-| \equiv \hat{1}$ (you can easily show this by applying it to a general vector $p|+\rangle + q|-\rangle$ and proving that the result is the same vector). So the final result is:
\[
S_z = \frac{\hbar}{2}\big(\,|+\rangle\langle+| - |-\rangle\langle-|\,\big)
\]
2
The raising and lowering operators are defined by the following expressions:
\[
S^+|-\rangle = \hbar|+\rangle, \qquad S^-|+\rangle = \hbar|-\rangle, \qquad S^+|+\rangle = S^-|-\rangle = 0 .
\]
Using the same reasoning as for $S_z$, we get:
\[
S^\pm = \hbar\,|\pm\rangle\langle\mp| .
\]

(b) The basis states are represented as column vectors:
\[
|+\rangle = \begin{pmatrix}1\\0\end{pmatrix}, \qquad |-\rangle = \begin{pmatrix}0\\1\end{pmatrix} .
\]

The expressions for $S_z$, $S^+$ and $S^-$ can all be obtained similarly here. For example, for $S_z$:
\[
S_z|+\rangle = S_z\begin{pmatrix}1\\0\end{pmatrix} = \frac{\hbar}{2}|+\rangle = \frac{\hbar}{2}\begin{pmatrix}1\\0\end{pmatrix},
\qquad
S_z|-\rangle = S_z\begin{pmatrix}0\\1\end{pmatrix} = -\frac{\hbar}{2}|-\rangle = -\frac{\hbar}{2}\begin{pmatrix}0\\1\end{pmatrix} .
\]
Combining:
\[
S_z\begin{pmatrix}1&0\\0&1\end{pmatrix} = \frac{\hbar}{2}\begin{pmatrix}1&0\\0&-1\end{pmatrix} .
\]
Writing down
\[
S_z = \begin{pmatrix}s_1&s_2\\s_3&s_4\end{pmatrix}
\]
and solving this matrix equation, we obtain:
\[
S_z = \frac{\hbar}{2}\begin{pmatrix}1&0\\0&-1\end{pmatrix} .
\]

In the same way, for $S^+$ and $S^-$:
\[
S^+ = \hbar\begin{pmatrix}0&1\\0&0\end{pmatrix}, \qquad S^- = \hbar\begin{pmatrix}0&0\\1&0\end{pmatrix} .
\]
Finally, the matrices $S_x$ and $S_y$ are defined by expressions that allow us to directly plug in the $S^\pm$ found above:
\[
S_x = \frac{1}{2}(S^+ + S^-) = \frac{\hbar}{2}\begin{pmatrix}0&1\\1&0\end{pmatrix}, \qquad
S_y = \frac{1}{2i}(S^+ - S^-) = \frac{\hbar}{2}\begin{pmatrix}0&-i\\i&0\end{pmatrix} .
\]
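The outer-product construction above is easy to mirror numerically. A minimal sketch (assuming numpy and setting $\hbar = 1$):

```python
import numpy as np

hbar = 1.0
up = np.array([1.0, 0.0])   # |+>
dn = np.array([0.0, 1.0])   # |->

Sz = (hbar / 2) * (np.outer(up, up) - np.outer(dn, dn))   # (ħ/2)(|+><+| - |-><-|)
Sp = hbar * np.outer(up, dn)                              # ħ|+><-|
Sm = hbar * np.outer(dn, up)                              # ħ|-><+|
Sx = (Sp + Sm) / 2
Sy = (Sp - Sm) / (2j)

# Recover the (ħ/2) × Pauli-matrix forms derived above.
assert np.allclose(Sx, (hbar / 2) * np.array([[0, 1], [1, 0]]))
assert np.allclose(Sy, (hbar / 2) * np.array([[0, -1j], [1j, 0]]))
assert np.allclose(Sz, (hbar / 2) * np.array([[1, 0], [0, -1]]))
```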

(c) Commutators and anticommutators are defined respectively by the relations:
\[
[S_i, S_j] = S_iS_j - S_jS_i, \qquad \{S_i, S_j\} = S_iS_j + S_jS_i .
\]

First, let us consider the commutators. We have the obvious properties:
\[
[S_i, S_i] = 0, \qquad [S_i, S_j] = -[S_j, S_i] \tag{4}
\]

Now, let us calculate three of the commutators:
\[
[S_x, S_y] = S_xS_y - S_yS_x = \frac{\hbar^2}{4}\left[\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}0&-i\\i&0\end{pmatrix} - \begin{pmatrix}0&-i\\i&0\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix}\right] = i\frac{\hbar^2}{2}\begin{pmatrix}1&0\\0&-1\end{pmatrix}
\]
\[
[S_y, S_z] = S_yS_z - S_zS_y = \frac{\hbar^2}{4}\left[\begin{pmatrix}0&-i\\i&0\end{pmatrix}\begin{pmatrix}1&0\\0&-1\end{pmatrix} - \begin{pmatrix}1&0\\0&-1\end{pmatrix}\begin{pmatrix}0&-i\\i&0\end{pmatrix}\right] = i\frac{\hbar^2}{2}\begin{pmatrix}0&1\\1&0\end{pmatrix}
\]
\[
[S_z, S_x] = S_zS_x - S_xS_z = \frac{\hbar^2}{4}\left[\begin{pmatrix}1&0\\0&-1\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix} - \begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}1&0\\0&-1\end{pmatrix}\right] = \frac{\hbar^2}{2}\begin{pmatrix}0&1\\-1&0\end{pmatrix}
\]
By comparing the left- and right-hand sides in each of these equations, it is easy to show that:
\[
[S_x, S_y] = i\hbar S_z, \qquad [S_y, S_z] = i\hbar S_x, \qquad [S_z, S_x] = i\hbar S_y .
\]
And due to the properties in (4), the general expression is:
\[
[S_i, S_j] = i\epsilon_{ijk}\hbar S_k .
\]

For the anticommutators $\{S_x, S_y\}$, $\{S_y, S_z\}$, $\{S_z, S_x\}$, the calculations above acquire a "+" instead of a "$-$" between the matrix products inside the square brackets. In each line, direct calculation shows that the two products differ only by an overall sign, so they cancel and the anticommutators vanish. Due also to the property $\{S_i, S_j\} = \{S_j, S_i\}$, we have $\{S_i, S_j\} = 0$ for all $i \neq j$.
For $i = j$, we just need to calculate all three $\{S_i, S_i\} = 2S_i^2$:
\[
\{S_x, S_x\} = 2S_x^2 = 2\cdot\frac{\hbar^2}{4}\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix} = \frac{\hbar^2}{2}\hat{1}
\]
\[
\{S_y, S_y\} = 2S_y^2 = 2\cdot\frac{\hbar^2}{4}\begin{pmatrix}0&-i\\i&0\end{pmatrix}\begin{pmatrix}0&-i\\i&0\end{pmatrix} = \frac{\hbar^2}{2}\hat{1}
\]
\[
\{S_z, S_z\} = 2S_z^2 = 2\cdot\frac{\hbar^2}{4}\begin{pmatrix}1&0\\0&-1\end{pmatrix}\begin{pmatrix}1&0\\0&-1\end{pmatrix} = \frac{\hbar^2}{2}\hat{1}
\]
Since all the anticommutators between different matrices are 0, as shown before, the final result is:
\[
\{S_i, S_j\} = \frac{1}{2}\hbar^2\delta_{ij} .
\]
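Both algebras can be verified for all index pairs at once with a short sketch (assuming numpy, $\hbar = 1$):

```python
import numpy as np

hbar = 1.0
S = [(hbar / 2) * np.array(p) for p in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

# Levi-Civita symbol eps_ijk.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

for i in range(3):
    for j in range(3):
        comm = S[i] @ S[j] - S[j] @ S[i]
        anti = S[i] @ S[j] + S[j] @ S[i]
        assert np.allclose(comm, sum(1j * hbar * eps[i, j, k] * S[k] for k in range(3)))
        assert np.allclose(anti, (hbar**2 / 2) * (i == j) * np.eye(2))
```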


(d) $S^2 = S_x^2 + S_y^2 + S_z^2$ can be easily calculated. Above we found the expressions for $2S_x^2$, $2S_y^2$, $2S_z^2$. Summing them, we get:
\[
S^2 = \frac{1}{2}\Big(\frac{\hbar^2}{2} + \frac{\hbar^2}{2} + \frac{\hbar^2}{2}\Big)\hat{1} = \frac{3\hbar^2}{4}\hat{1} .
\]
So it is a coefficient times the identity operator. But the identity operator commutes with everything by definition, hence we get:
\[
[S^2, S_i] = \frac{3\hbar^2}{4}[\hat{1}, S_i] = 0 .
\]

(e) (Note: the coefficients are omitted in the problem formulation.)

This part involves lengthy calculations, with a lot of room for error. The key here is to carefully write down the left-hand side and show that it is equal to the right-hand side. Let us do just that. Starting with the initial expression:

\[
(\vec{S}\cdot\vec{a})(\vec{S}\cdot\vec{b}) = (S_xa_x + S_ya_y + S_za_z)(S_xb_x + S_yb_y + S_zb_z) =
\]
\[
= (S_x^2\,a_xb_x + S_y^2\,a_yb_y + S_z^2\,a_zb_z) + (S_xS_y\,a_xb_y + S_yS_x\,a_yb_x + S_xS_z\,a_xb_z + S_zS_x\,a_zb_x + S_yS_z\,a_yb_z + S_zS_y\,a_zb_y) .
\]
Using the Pauli-matrix forms found in (b), the squares give $S_i^2 = \frac{\hbar^2}{4}\hat{1}$, and the cross products give $S_iS_j = \frac{\hbar^2}{4}\sigma_i\sigma_j$:
\[
= \frac{\hbar^2}{4}(a_xb_x + a_yb_y + a_zb_z)\hat{1} + \frac{\hbar^2}{4}\left[\begin{pmatrix}i&0\\0&-i\end{pmatrix}a_xb_y + \begin{pmatrix}-i&0\\0&i\end{pmatrix}a_yb_x + \begin{pmatrix}0&-1\\1&0\end{pmatrix}a_xb_z + \begin{pmatrix}0&1\\-1&0\end{pmatrix}a_zb_x + \begin{pmatrix}0&i\\i&0\end{pmatrix}a_yb_z + \begin{pmatrix}0&-i\\-i&0\end{pmatrix}a_zb_y\right] =
\]
\[
= \frac{\hbar^2}{4}(\vec{a}\cdot\vec{b})\,\hat{1} + \frac{\hbar^2}{4}
\begin{pmatrix}
i(a_xb_y - a_yb_x) & (a_zb_x - a_xb_z) + i(a_yb_z - a_zb_y)\\
(a_xb_z - a_zb_x) + i(a_yb_z - a_zb_y) & -i(a_xb_y - a_yb_x)
\end{pmatrix} .
\]
Now, note that:
\[
\vec{c} = \begin{pmatrix}c_x\\c_y\\c_z\end{pmatrix} \equiv (\vec{a}\times\vec{b}) = \begin{pmatrix}a_yb_z - a_zb_y\\ a_zb_x - a_xb_z\\ a_xb_y - a_yb_x\end{pmatrix} .
\]
With this in mind, we can rewrite the last expression as:
\[
\frac{\hbar^2}{4}(\vec{a}\cdot\vec{b})\hat{1} + \frac{\hbar^2}{4}\begin{pmatrix}ic_z & c_y + ic_x\\ -c_y + ic_x & -ic_z\end{pmatrix}
= \frac{\hbar^2}{4}(\vec{a}\cdot\vec{b})\hat{1} + \frac{i\hbar^2}{4}\begin{pmatrix}c_z & -ic_y + c_x\\ ic_y + c_x & -c_z\end{pmatrix} .
\]

This looks much less scary than before! Remembering the relation we need to prove, we can rewrite the last matrix in the following way:
\[
\begin{pmatrix}c_z & -ic_y + c_x\\ ic_y + c_x & -c_z\end{pmatrix}
= c_x\begin{pmatrix}0&1\\1&0\end{pmatrix} + c_y\begin{pmatrix}0&-i\\i&0\end{pmatrix} + c_z\begin{pmatrix}1&0\\0&-1\end{pmatrix}
= \frac{2}{\hbar}(S_xc_x + S_yc_y + S_zc_z) = \frac{2}{\hbar}\,\vec{S}\cdot\vec{c} .
\]

Recalling the definition of $\vec{c}$ and putting everything together, we finally obtain:
\[
(\vec{S}\cdot\vec{a})(\vec{S}\cdot\vec{b}) = \frac{\hbar^2}{4}(\vec{a}\cdot\vec{b})\,\hat{1} + \frac{i\hbar}{2}\,\vec{S}\cdot(\vec{a}\times\vec{b}) .
\]

Now, suppose $\vec{a} \parallel \vec{b}$. If either $\vec{a} = \vec{0}$ or $\vec{b} = \vec{0}$, then the proof of commutation is trivial, since one of the dot products equals 0. Let us consider non-zero vectors instead. In this case, by the definition of parallel vectors, the two vectors only differ from each other by a constant factor, meaning we can write:
\[
\vec{b} = c\,\vec{a},
\]
where $c$ is a constant. Then:
\[
[\vec{S}\cdot\vec{a},\ \vec{S}\cdot\vec{b}] = c\,[\vec{S}\cdot\vec{a},\ \vec{S}\cdot\vec{a}] = 0 .
\]
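A numerical spot-check of the identity and of the commutation claim — an illustrative sketch (numpy, $\hbar = 1$, random real vectors):

```python
import numpy as np

hbar = 1.0
S = [(hbar / 2) * np.array(p) for p in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]

def S_dot(v):
    return sum(v[k] * S[k] for k in range(3))

rng = np.random.default_rng(4)
a, b = rng.normal(size=3), rng.normal(size=3)

lhs = S_dot(a) @ S_dot(b)
rhs = (hbar**2 / 4) * np.dot(a, b) * np.eye(2) + (1j * hbar / 2) * S_dot(np.cross(a, b))
assert np.allclose(lhs, rhs)

# For parallel vectors the cross product vanishes, so S·a and S·b commute.
assert np.allclose(S_dot(a) @ S_dot(3 * a) - S_dot(3 * a) @ S_dot(a), 0)
```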

Dirac matrices
As you will see (or have already seen), the Dirac equation can be written as $(i\gamma^\mu\partial_\mu - m)\Psi = 0$, where $m$ is the mass, $\Psi(x)$ is the wave function, $\partial_\mu \equiv \partial/\partial x^\mu$ is the four-gradient, $\gamma^\mu$ are the so-called Dirac matrices, and the indices $\mu, \nu$ take values from $\{0, 1, 2, 3\}$ ($\hbar = c = 1$). We are interested here only in the properties of the $\gamma$ matrices, which satisfy the following relations: $\frac{1}{2}(\gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu) = g^{\mu\nu}$. Here $g^{\mu\nu}$ stands for the $\mu\nu$ element of a $4 \times 4$ matrix with nonzero entries only for its diagonal terms: $g^{00} = 1$; $g^{ii} = -1$, where $i = \{1, 2, 3\}$. (Several of you will recognize $g^{\mu\nu}$ as the Minkowski metric tensor.)

(a) Calculate the square of all gamma matrices and show that $(\gamma^0)^2 = +1$ and $(\gamma^i)^2 = -1$ for all values of $i = \{1, 2, 3\}$, and that $\gamma^\mu\gamma^\nu = -\gamma^\nu\gamma^\mu$ for all $\mu \neq \nu$. This last requirement indicates that the objects $\gamma^\mu$ cannot be ordinary numbers.
(b) What are the possible eigenvalues of $\gamma^\mu$?
(c) Show that the trace of all Dirac gamma matrices vanishes: $\mathrm{Tr}\,\gamma^\mu = 0$ for any value of $\mu = \{0, 1, 2, 3\}$.
(d) Combine your results in the previous parts to argue that all $\gamma^\mu$ matrices must be $(2n \times 2n)$, i.e. they must have even order.
(e) Show that the lowest acceptable dimension of matrices satisfying the conditions stated above is $4 \times 4$.
(f) A typical representation of Dirac's gamma matrices in terms of the Pauli matrices can be given as follows:
\[
\gamma^0 = \begin{pmatrix} I & 0\\ 0 & -I \end{pmatrix} \quad\text{and}\quad \gamma^i = \begin{pmatrix} 0 & \sigma^i\\ -\sigma^i & 0 \end{pmatrix},
\]
where each individual element represents a $2 \times 2$ matrix; $I$ is the unit matrix and $\sigma^i$ are the Pauli matrices. Use the results you obtained for the Pauli matrices to show that this representation of $\gamma^\mu$ satisfies the requirement $\frac{1}{2}(\gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu) = g^{\mu\nu}$. [Hint: While it is possible to show this directly, it is a lot easier to write the Dirac matrices as direct products of Pauli matrices: $\gamma^0 = \tau^3 \otimes I$ and $\gamma^i = i\tau^2 \otimes \sigma^i$, where the $\tau^j$ are, once again, regular Pauli matrices. In this representation, for example, $\gamma^i\gamma^j = (i\tau^2 \otimes \sigma^i)(i\tau^2 \otimes \sigma^j) = (i^2\,\tau^2\tau^2) \otimes (\sigma^i\sigma^j) = -I \otimes (\sigma^i\sigma^j)$.]
(g) Define $\gamma^5 \equiv i\gamma^0\gamma^1\gamma^2\gamma^3$. Show that $\gamma^5 = \tau^1 \otimes I$ and that $\gamma^5\gamma^\mu + \gamma^\mu\gamma^5 = 0$, i.e. $\gamma^5$ anticommutes with all other $\gamma$ matrices.

(a) In the relation $\frac{1}{2}(\gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu) = g^{\mu\nu}$, let us put $\nu = \mu$. Then we get $(\gamma^\mu)^2 = g^{\mu\mu}$. By the definition of the tensor $g^{\mu\nu}$, this means that:
\[
(\gamma^0)^2 = 1; \qquad (\gamma^1)^2 = (\gamma^2)^2 = (\gamma^3)^2 = -1 .
\]
Also, since $g^{\mu\nu} = 0$ for any $\mu \neq \nu$, we conclude that for such $\mu, \nu$ pairs:
\[
\gamma^\mu\gamma^\nu = -\gamma^\nu\gamma^\mu, \qquad \mu \neq \nu .
\]

(b) The squares computed in (a) are multiples of the identity. If $\gamma^\mu v = \lambda v$, then $(\gamma^\mu)^2 v = \lambda^2 v$, so the eigenvalues of $\gamma^\mu$ are the square roots of these values. This means $\pm 1$ for $\gamma^0$, and $\pm i$ for $\gamma^i$, $i = \{1, 2, 3\}$.

(c) Note the relation (no summation over $\mu$ implied):
\[
\frac{\gamma^\mu\gamma^\mu}{g^{\mu\mu}} = 1 .
\]
We will use this relation, as well as the antisymmetry property proven in (a), choosing any $\mu \neq \nu$:
\[
\mathrm{Tr}(\gamma^\nu) = \mathrm{Tr}\Big(\gamma^\nu\,\frac{\gamma^\mu\gamma^\mu}{g^{\mu\mu}}\Big) = \frac{1}{g^{\mu\mu}}\,\mathrm{Tr}(\gamma^\nu\gamma^\mu\gamma^\mu) = -\frac{1}{g^{\mu\mu}}\,\mathrm{Tr}(\gamma^\mu\gamma^\nu\gamma^\mu) .
\]
But the trace also has the cyclic property, $\mathrm{Tr}(ABC) = \mathrm{Tr}(BCA)$ for any matrices $A, B, C$, so:
\[
-\frac{1}{g^{\mu\mu}}\,\mathrm{Tr}(\gamma^\mu\gamma^\nu\gamma^\mu) = -\frac{1}{g^{\mu\mu}}\,\mathrm{Tr}(\gamma^\nu\gamma^\mu\gamma^\mu) = -\mathrm{Tr}\Big(\gamma^\nu\,\frac{\gamma^\mu\gamma^\mu}{g^{\mu\mu}}\Big) = -\mathrm{Tr}(\gamma^\nu) .
\]
So, in the end:
\[
\mathrm{Tr}(\gamma^\nu) = -\mathrm{Tr}(\gamma^\nu) \;\Longrightarrow\; \mathrm{Tr}(\gamma^\nu) = 0 .
\]

(d) Since $\mathrm{Tr}(\gamma^\mu) = 0$, the sum of all eigenvalues of any $\gamma^\mu$ has to be 0. But for any $\gamma^\mu$, the only two possible eigenvalues are $\pm c$, where $c$ is either 1 or $i$. For their sum to be 0, each of the two values must occur the same number of times, say $n$; the number of eigenvalues is then $2n$, and the matrix is $(2n \times 2n)$.

(e) The only possible even number lower than 4 is 2. Can the $\gamma^\mu$ be $(2 \times 2)$?
Consider $\gamma^0$. From the relation $(\gamma^0)^2 = 1$, we get
\[
\gamma^0 = (\gamma^0)^{-1} .
\]
Now, let us represent $\gamma^0$ as
\[
\gamma^0 = \begin{pmatrix} a & b\\ c & d \end{pmatrix} .
\]
Remember that for $(2 \times 2)$ matrices the inverse is calculated as
\[
\begin{pmatrix} a & b\\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc}\begin{pmatrix} d & -b\\ -c & a \end{pmatrix} .
\]
Taking the determinant of both sides of $\gamma^0 = (\gamma^0)^{-1}$ gives $(ad - bc)^2 = 1$, i.e. $ad - bc = \pm 1$. If $ad - bc = 1$, the matrix equation becomes
\[
\begin{pmatrix} a & b\\ c & d \end{pmatrix} = \begin{pmatrix} d & -b\\ -c & a \end{pmatrix} ,
\]
from which follows $c = b = 0$ and $a = d$. But since $\mathrm{Tr}(\gamma^0) = 0$, $a + d = 0$, so $a = d = 0$: $\gamma^0$ would be the zero matrix, for which $(\gamma^0)^2 = 1$ cannot hold. If $ad - bc = -1$, a traceless solution does exist (e.g. $\gamma^0 = \sigma^3$), so the contradiction must come from the full set of matrices. Every traceless $(2 \times 2)$ matrix is a linear combination of the three Pauli matrices, $\gamma^\mu = \vec{b}_\mu\cdot\vec{\sigma}$, and then $\{\gamma^\mu, \gamma^\nu\} = 2(\vec{b}_\mu\cdot\vec{b}_\nu)\,I$. Four mutually anticommuting matrices with nonzero squares would therefore require four coefficient vectors with $\vec{b}_\mu\cdot\vec{b}_\nu = 0$ for $\mu \neq \nu$ and $\vec{b}_\mu\cdot\vec{b}_\mu = \pm 1$; their Gram matrix would be diagonal and invertible, i.e. of rank 4, which is impossible for four vectors living in a 3-dimensional space. Contradiction.
Hence $(2 \times 2)$ matrices cannot satisfy all the required properties, and a larger $n$ is needed for the $(2n \times 2n)$ matrices to be viable candidates for gamma matrices. $n = 2$, i.e. $(4 \times 4)$ matrices, turns out to be enough, as is shown in part (f).

(f) The Pauli matrices are:
\[
\sigma^1 = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}; \qquad \sigma^2 = \begin{pmatrix} 0 & -i\\ i & 0 \end{pmatrix}; \qquad \sigma^3 = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} .
\]

As the hint suggests, let us derive the stated relations (using $\sigma$ also for the Pauli matrices acting in the block space, in place of the $\tau$ of the hint):
\[
\gamma^0 = \begin{pmatrix} I & 0\\ 0 & -I \end{pmatrix} = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} \otimes I = \sigma^3 \otimes I ,
\]
\[
\gamma^i = \begin{pmatrix} 0 & \sigma^i\\ -\sigma^i & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1\\ -1 & 0 \end{pmatrix} \otimes \sigma^i = i\sigma^2 \otimes \sigma^i .
\]
In this representation, for $\mu \neq \nu$ with both indices nonzero:
\[
\gamma^\mu\gamma^\nu = (i\sigma^2 \otimes \sigma^\mu)(i\sigma^2 \otimes \sigma^\nu) = -(\sigma^2)^2 \otimes (\sigma^\mu\sigma^\nu) .
\]
Then:
\[
\frac{1}{2}(\gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu) = -(\sigma^2)^2 \otimes \frac{1}{2}(\sigma^\mu\sigma^\nu + \sigma^\nu\sigma^\mu) = -(\sigma^2)^2 \otimes \frac{1}{2}\{\sigma^\mu, \sigma^\nu\} .
\]
As was shown in the previous problem (dropping the $\hbar/2$ coefficients), $\{\sigma^\mu, \sigma^\nu\} = 2\delta_{\mu\nu}$. Since $\mu \neq \nu$, $\delta_{\mu\nu} = 0$, so the result is 0.
The case $\mu = \nu \neq 0$ is the same up to this point, but now $\delta_{\mu\nu} = 1$, so, since $(\sigma^2)^2 = \hat{1}$:
\[
\frac{1}{2}(\gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu) = -\hat{1} \otimes \hat{1} = -\hat{1} ,
\]
in agreement with $g^{\mu\mu} = -1$.
The next case is $\mu \neq 0$, $\nu = 0$. Then:
\[
\gamma^\mu\gamma^0 = (i\sigma^2 \otimes \sigma^\mu)(\sigma^3 \otimes I) = i\,\sigma^2\sigma^3 \otimes \sigma^\mu ,
\]
\[
\gamma^0\gamma^\mu = (\sigma^3 \otimes I)(i\sigma^2 \otimes \sigma^\mu) = i\,\sigma^3\sigma^2 \otimes \sigma^\mu ,
\]
\[
\frac{1}{2}(\gamma^\mu\gamma^0 + \gamma^0\gamma^\mu) = \frac{1}{2}\big(i\,\sigma^2\sigma^3 + i\,\sigma^3\sigma^2\big) \otimes \sigma^\mu = \frac{i}{2}\{\sigma^2, \sigma^3\} \otimes \sigma^\mu = \frac{i}{2}\,2\delta_{2,3} \otimes \sigma^\mu = 0 .
\]
Finally, in the case $\mu = \nu = 0$:
\[
\frac{1}{2}(\gamma^0\gamma^0 + \gamma^0\gamma^0) = \gamma^0\gamma^0 = (\sigma^3 \otimes I)(\sigma^3 \otimes I) = (\sigma^3)^2 \otimes I = \frac{1}{2}\{\sigma^3, \sigma^3\} \otimes I = \frac{1}{2}\,2\delta_{3,3}\,\hat{1} \otimes I = \hat{1} .
\]
So, to summarize:
• $\frac{1}{2}(\gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu) = 1$ if $\mu = \nu = 0$;
• $\frac{1}{2}(\gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu) = -1$ if $\mu = \nu \neq 0$;
• $\frac{1}{2}(\gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu) = 0$ if $\mu \neq \nu$.
All combined, this means:
\[
\frac{1}{2}(\gamma^\mu\gamma^\nu + \gamma^\nu\gamma^\mu) = g^{\mu\nu} .
\]
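The direct-product construction makes this easy to verify numerically as well. A sketch (assuming numpy; `np.kron` implements the $\otimes$ of the hint):

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# gamma^0 = s3 (x) I,  gamma^i = i s2 (x) s^i
gamma = [np.kron(s3, I2)] + [1j * np.kron(s2, s) for s in (s1, s2, s3)]
g = np.diag([1.0, -1.0, -1.0, -1.0])       # Minkowski metric, signature (+,-,-,-)

for mu in range(4):
    assert np.isclose(np.trace(gamma[mu]), 0)            # part (c)
    for nu in range(4):
        anti = 0.5 * (gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu])
        assert np.allclose(anti, g[mu, nu] * np.eye(4))   # part (f)
```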

(g) We will directly calculate $\gamma^5$ from the form of the matrices:
\[
\gamma^5 = i\gamma^0\gamma^1\gamma^2\gamma^3 = i\begin{pmatrix} I & 0\\ 0 & -I \end{pmatrix}\begin{pmatrix} 0 & \sigma^1\\ -\sigma^1 & 0 \end{pmatrix}\begin{pmatrix} 0 & \sigma^2\\ -\sigma^2 & 0 \end{pmatrix}\begin{pmatrix} 0 & \sigma^3\\ -\sigma^3 & 0 \end{pmatrix} = i\begin{pmatrix} 0 & \sigma^1\\ \sigma^1 & 0 \end{pmatrix}\begin{pmatrix} -\sigma^2\sigma^3 & 0\\ 0 & -\sigma^2\sigma^3 \end{pmatrix} =
\]
\[
= -i\begin{pmatrix} 0 & \sigma^1\sigma^2\sigma^3\\ \sigma^1\sigma^2\sigma^3 & 0 \end{pmatrix} = -i\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \otimes \sigma^1\sigma^2\sigma^3 = -i\,\sigma^1 \otimes \sigma^1\sigma^2\sigma^3 .
\]
From the previous problem:
\[
\sigma^1\sigma^2\sigma^3 = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & -i\\ i & 0 \end{pmatrix}\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} = \begin{pmatrix} i & 0\\ 0 & -i \end{pmatrix}\begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} = \begin{pmatrix} i & 0\\ 0 & i \end{pmatrix} = iI .
\]
Returning to the expression for $\gamma^5$:
\[
\gamma^5 = -i\,\sigma^1 \otimes iI = \sigma^1 \otimes I .
\]

From this expression, we can prove the last relation. First, consider the case $\mu = 0$:
\[
\gamma^5\gamma^0 + \gamma^0\gamma^5 = (\sigma^1 \otimes I)(\sigma^3 \otimes I) + (\sigma^3 \otimes I)(\sigma^1 \otimes I) = (\sigma^1\sigma^3 + \sigma^3\sigma^1) \otimes I = \{\sigma^1, \sigma^3\} \otimes I = 2\delta_{13}\,\hat{1} \otimes I = 0 .
\]
Now, the case $\mu \neq 0$:
\[
\gamma^5\gamma^\mu + \gamma^\mu\gamma^5 = (\sigma^1 \otimes I)(i\sigma^2 \otimes \sigma^\mu) + (i\sigma^2 \otimes \sigma^\mu)(\sigma^1 \otimes I) = i(\sigma^1\sigma^2 + \sigma^2\sigma^1) \otimes \sigma^\mu = i\{\sigma^1, \sigma^2\} \otimes \sigma^\mu = 2i\delta_{12} \otimes \sigma^\mu = 0 .
\]
So, for all possible $\mu = \{0, 1, 2, 3\}$ we have proved:
\[
\gamma^5\gamma^\mu + \gamma^\mu\gamma^5 = 0 .
\]
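As a final numerical cross-check — an illustrative sketch, assuming numpy:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

gamma = [np.kron(s3, I2)] + [1j * np.kron(s2, s) for s in (s1, s2, s3)]
g5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

assert np.allclose(g5, np.kron(s1, I2))                   # gamma^5 = s1 (x) I
for mu in range(4):
    assert np.allclose(g5 @ gamma[mu] + gamma[mu] @ g5, 0)  # anticommutes with all gamma^mu
```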
